Marginal distribution

In [[probability theory]], given a joint probability density function of two parameters or variables ''x'' and ''y'', the '''marginal distribution''' of ''x'' is the [[probability distribution]] of ''x'' after information about ''y'' has been averaged over.  For example, from a [[Bayesian probability]] perspective, if we are doing [[parameter estimation]], we can consider the joint probability density as a joint inference about the true values of the two parameters, and the marginal distribution of (say) ''x'' as our inference about ''x'' after the uncertainty about ''y'' has been averaged over.  In this case, we can say that ''y'' is being treated as a [[nuisance parameter]].
  
For a continuous [[probability density function]] (pdf), an associated marginal pdf can be written as ''m''<sub>''y''</sub>(''x''), such that
  
 
:<math>m_{y}(x) = \int_y p(x,y) \, dy = \int_y c(x|y) \, p(y) \, dy </math>
 
where ''p''(''x'',''y'') gives the joint probability distribution of ''x'' and ''y'', and ''c''(''x''|''y'') gives the conditional probability distribution for ''x'' given ''y''.  The second integral follows from the Bayesian product rule, ''p''(''x'',''y'') = ''c''(''x''|''y'') ''p''(''y'').  Note that the marginal distribution has the form of an expectation value: it is the average of ''c''(''x''|''y'') over the distribution of ''y''.
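
As an illustrative sketch, this integral can also be evaluated numerically.  The joint density used below, <math>p(x,y) = e^{-x-y}</math> for <math>x, y \ge 0</math>, is a hypothetical choice (two independent unit exponentials) whose marginal works out to exactly <math>e^{-x}</math>, so the quadrature result is easy to check:

<pre>
# A minimal sketch: numerically marginalize a joint pdf over y.
# The joint density p(x, y) = exp(-x - y) on x, y >= 0 is a hypothetical
# example chosen because its marginal m_y(x) is exactly exp(-x).
import numpy as np
from scipy.integrate import quad

def joint_pdf(x, y):
    """Joint density of two independent unit exponentials (illustrative)."""
    return np.exp(-x - y)

def marginal_pdf(x):
    """m_y(x): integrate the joint density over all values of y."""
    value, _abserr = quad(lambda y: joint_pdf(x, y), 0.0, np.inf)
    return value

for x in (0.5, 1.0, 2.0):
    print(f"x = {x}: numeric = {marginal_pdf(x):.6f}, exact = {np.exp(-x):.6f}")
</pre>

Swapping the integrand for ''c''(''x''|''y'') ''p''(''y'') would give the same values, since the two forms of the integrand are equal by the product rule.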
 
For a discrete [[probability mass function]] (pmf), the marginal probability for ''x''<sub>''k''</sub> can be written as ''p''<sub>''k''</sub>, such that

:<math>p_{k} = \sum_{j} p_{kj} = \sum_{j} p_{j}p_{k|j}  </math>
 
  
where the index ''j'' runs over all possible values ''y''<sub>''j''</sub> of the discrete ''y''.  The notation ''p''<sub>''kj''</sub> here means the joint probability value when ''x'' has the value ''x''<sub>''k''</sub> and ''y'' has the value ''y''<sub>''j''</sub>, while ''p''<sub>''k''|''j''</sub> denotes the conditional probability value for ''x''<sub>''k''</sub> with ''y'' fixed at the value ''y''<sub>''j''</sub>.  With ''k'' fixed in the above summation and ''p''<sub>''kj''</sub> considered as a matrix, the sum can be thought of as running across all columns in the ''k''<sup>th</sup> row.  Similarly, the marginal mass function for ''y'' can be computed by summing over all rows in a particular column.  When the ''p''<sub>''k''</sub> have been determined this way for all ''k'', they constitute the pmf for all relevant discrete values of ''x'', calculated here as a marginal mass function from the original joint probability mass function.
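
As a minimal sketch of this row-and-column picture, the joint probability matrix below is hypothetical, with rows indexed by ''k'' (values of ''x'') and columns indexed by ''j'' (values of ''y''); each marginal pmf is a sum along one axis:

<pre>
# A minimal sketch: marginalize a discrete joint pmf stored as a matrix.
# The joint probabilities p_kj below are hypothetical; rows are indexed
# by k (values of x), columns by j (values of y), and all entries sum to 1.
import numpy as np

p_joint = np.array([[0.10, 0.20, 0.10],   # row k = 0
                    [0.05, 0.25, 0.30]])  # row k = 1

# Marginal pmf of x: for each k, sum across all columns j in the kth row.
p_x = p_joint.sum(axis=1)   # -> [0.40, 0.60]

# Marginal pmf of y: for each j, sum over all rows k in the jth column.
p_y = p_joint.sum(axis=0)   # -> [0.15, 0.45, 0.40]

print("p_k (marginal pmf of x):", p_x)
print("p_j (marginal pmf of y):", p_y)
</pre>

Each entry could equally be built up as ''p''<sub>''kj''</sub> = ''p''<sub>''j''</sub> ''p''<sub>''k''|''j''</sub>, which reproduces the second form of the sum above.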
  
 
[[Category:mathematics]]
 