# Difference between revisions of "Marginal distribution"


## Revision as of 12:47, 6 December 2007

In probability theory, given a joint probability density function of two parameters *x* and *y*, the **marginal distribution** of *x* is the probability distribution of *x* after information about *y* has been averaged over. From a Bayesian probability perspective, we can view the joint probability density as a joint inference about the true values of the two parameters, and the marginal distribution of (say) *x* as our inference about *x* after the uncertainty about *y* has been averaged over. In this case, we are treating *y* as a nuisance parameter.

For continuous probability densities, this marginal probability density function can be written as *m*_{y}(*x*), such that

$$m_y(x) = \int p(x,y)\,dy = \int c(x|y)\,p(y)\,dy$$

where *p*(*x*,*y*) gives the joint distribution of *x* and *y*, and *c*(*x*|*y*) gives the conditional distribution for *x* given *y*. Note that the marginal distribution has the form of an expectation.
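The averaging-over-*y* interpretation can be sketched numerically. The distributions below are a made-up example (not from the article): *y* uniform on [0, 1] and *x* given *y* normal with mean *y*, so that the marginal of *x* is the integral of *c*(*x*|*y*)·*p*(*y*) over *y*, approximated here by a simple Riemann sum:

```python
import numpy as np

# Hypothetical example: y ~ Uniform(0, 1), x | y ~ Normal(mean=y, sd=1),
# so p(x, y) = c(x | y) * p(y) with p(y) = 1 on [0, 1].
def c_x_given_y(x, y):
    # Normal density with mean y and standard deviation 1.
    return np.exp(-0.5 * (x - y) ** 2) / np.sqrt(2 * np.pi)

# Grid over y for the marginalizing integral (Riemann-sum approximation).
ys = np.linspace(0.0, 1.0, 2001)
dy = ys[1] - ys[0]

def marginal(x):
    # m_y(x) = integral over y of c(x | y) * p(y) dy, with p(y) = 1 here.
    return np.sum(c_x_given_y(x, ys)) * dy

# Sanity check: the marginal density of x should itself integrate to 1.
xs = np.linspace(-8.0, 9.0, 4001)
dx = xs[1] - xs[0]
total = sum(marginal(x) for x in xs) * dx
print(round(total, 3))  # close to 1.0
```

Because *p*(*y*) is constant here, the sum is literally an average of the conditional density over the grid of *y* values, which is the "expectation" form noted above.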

For a discrete probability mass function, the marginal probability for *x*_{k} can be written as *p*_{k}, such that

$$p_k = \sum_{j} p_{kj} = \sum_{j} p_j\,p_{k|j}$$

where the *j* index spans all values of the discrete *y*. The notation *p*_{kj} here means the joint probability value when *x* has the value *x*_{k} and *y* has the value *y*_{j}, while *p*_{k|j} refers to the conditional probability value for *x*_{k} with *y* fixed at the value *y*_{j}. With *k* fixed in the above summation and *p*_{kj} considered as a matrix, this can be thought of as summing over all columns in the *k*^{th} row. Similarly, the marginal mass function for *y* can be computed by summing over all rows in a particular column. When the *p*_{k} are determined this way for all *k*, they constitute the discrete probability mass function for the relevant discrete values of *x*, in this case calculated as a marginal mass function from the original joint probability mass function.
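The row/column summation described above can be sketched directly. The joint pmf values below are a hypothetical example (not from the article); summing each row gives the marginal for *x*, and summing each column gives the marginal for *y*:

```python
import numpy as np

# Hypothetical joint pmf p_{kj}: rows index values of x (k), columns values of y (j).
p_joint = np.array([
    [0.10, 0.20, 0.10],   # k = 0
    [0.25, 0.15, 0.20],   # k = 1
])

# Marginal for x: with k fixed, sum over all columns j (p_k = sum_j p_{kj}).
p_x = p_joint.sum(axis=1)

# Marginal for y: sum over all rows in each column.
p_y = p_joint.sum(axis=0)

print(p_x)  # [0.4 0.6]
print(p_y)  # [0.35 0.35 0.3]
```

Each marginal sums to 1, as a probability mass function must, since the full matrix sums to 1.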