# Marginal distribution

In probability theory, given a joint probability density function of two variables *x* and *y*, the **marginal distribution** of *x* is the probability distribution of *x* after information about *y* has been averaged over. For example, from a Bayesian perspective on parameter estimation, the joint probability density can be viewed as a joint inference about the true values of the two parameters, and the marginal distribution of (say) *x* as our inference about *x* after the uncertainty about *y* has been averaged over. In this case, *y* is treated as a nuisance parameter.

For continuous probability densities, the marginal probability density function of *x* can be written as *m*_{y}(*x*), such that

$$m_{y}(x) = \int_{y} p(x,y)\,dy = \int_{y} c(x \mid y)\,p(y)\,dy,$$

where *p*(*x*,*y*) gives the joint distribution of *x* and *y*, *c*(*x*|*y*) gives the conditional distribution of *x* given *y*, and *p*(*y*) gives the marginal density of *y*. The second integral follows from the product rule of probability. Note that the marginal distribution has the form of an expectation of *c*(*x*|*y*) over the distribution of *y*.
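As a numeric sanity check of the integral above (a sketch, not part of the original text), the example below marginalizes a correlated bivariate standard normal density over *y* on a grid and compares the result with the known analytic marginal of *x*, which is the standard normal density. The correlation value and grid sizes are arbitrary choices for illustration.

```python
import numpy as np

rho = 0.6  # correlation coefficient; an arbitrary choice for this example

def joint_pdf(x, y, rho=rho):
    """Bivariate standard normal density p(x, y) with correlation rho."""
    norm = 1.0 / (2.0 * np.pi * np.sqrt(1.0 - rho**2))
    z = (x**2 - 2.0 * rho * x * y + y**2) / (1.0 - rho**2)
    return norm * np.exp(-0.5 * z)

# Grids: y spans a wide enough range that the truncated tails are negligible.
x = np.linspace(-4.0, 4.0, 201)
y = np.linspace(-8.0, 8.0, 2001)
dy = y[1] - y[0]

# m_y(x) = integral of p(x, y) over y, approximated by a Riemann sum.
X, Y = np.meshgrid(x, y, indexing="ij")
marginal = joint_pdf(X, Y).sum(axis=1) * dy

# Analytic marginal of x for a bivariate standard normal: N(0, 1).
analytic = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

# The maximum discrepancy is tiny, dominated by grid discretization.
print(np.max(np.abs(marginal - analytic)))
```

Because the joint density decays rapidly, truncating the *y* integral at ±8 introduces essentially no error, so the numeric marginal matches the analytic one to high precision.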

For a discrete probability mass function, the marginal probability for *x*_{k} can be written as *p*_{k}, such that

$$p_{k} = \sum_{j} p_{k,j} = \sum_{j} p_{k \mid j}\,p_{j},$$

where the index *j* runs over all values of the discrete variable *y*. The notation *p*_{k,j} here means the joint probability that *x* has the value *x*_{k} and *y* has the value *y*_{j}, *p*_{k|j} denotes the conditional probability of *x*_{k} given that *y* is fixed at the value *y*_{j}, and *p*_{j} denotes the marginal probability of *y*_{j}. With *k* fixed in the above summation and *p*_{k,j} viewed as a matrix whose rows are indexed by *k* and columns by *j*, the first sum amounts to summing across all columns in the *k*^{th} row. Similarly, the marginal mass function for *y* can be computed by summing down all rows in a particular column. Computing *p*_{k} in this way for every *k* yields the complete marginal probability mass function for the discrete values of *x*, in this case derived from the original joint probability mass function.
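The row/column picture described above can be sketched directly (the joint probability matrix here is a made-up example; its entries are nonnegative and sum to 1):

```python
import numpy as np

# Hypothetical joint probability mass function p_{k,j}:
# rows are indexed by k (values of x), columns by j (values of y).
p = np.array([
    [0.10, 0.05, 0.05],
    [0.20, 0.10, 0.10],
    [0.05, 0.15, 0.20],
])

p_x = p.sum(axis=1)  # marginal for x: sum across the columns in each row
p_y = p.sum(axis=0)  # marginal for y: sum down the rows in each column

print(p_x)
print(p_y)

# Equivalent form via the product rule: p_k = sum_j p_{k|j} * p_j.
cond_x_given_y = p / p_y          # each column normalized by p_j
p_x_alt = cond_x_given_y @ p_y    # the matrix-vector product performs the sum
assert np.allclose(p_x, p_x_alt)
```

Summing over `axis=1` collapses the *j* (column) index, which is exactly the summation over all columns in a fixed row; `axis=0` gives the marginal for *y* by the symmetric operation.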