Bayes' theorem

From Conservapedia

Bayes' Theorem, also known as Bayes' Rule, is used in statistics and probability theory to relate marginal probabilities and conditional probabilities. In the context of Bayesian probability theory, it is used to update degrees of belief (probabilities) given new information. The theorem was first developed by the Reverend Thomas Bayes (1702-1761), an English nonconformist minister, and later improved by the French mathematician Pierre-Simon Laplace.

While held in disdain by some mathematicians for years, Bayes' Theorem is credited with helping decipher the Enigma code of the Nazis during World War II.[1]

Rev. Bayes' interest in this was as follows:

Apart from his faith, Bayes had a deep love and interest in mathematics and was considered an amateur mathematician. In his later years, he became fascinated with probability, specifically inverse probability. No one knows why for sure, but one thing is clear: Bayes became consumed with figuring out ... the approximate probability of a future event he knew nothing about except its past, that is, the number of times it had occurred or failed to occur.


Bayes' theorem is stated mathematically as

p(X|Y,I) = p(Y|X,I) p(X|I) / p(Y|I)

where X and Y are statements, I is available background information, and

p(X|Y,I) is the posterior probability for X given Y and I,

p(Y|X,I) is the likelihood for Y given X and I,

p(X|I) is the prior probability for X given only I, and

p(Y|I) is sometimes called the evidence, or the probability for Y given only I.

In a scientific context, X may be a hypothesis, and Y may be experimental data. The theorem can then be used to determine the degree of belief in the hypothesis by using the experimental data.
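As a minimal sketch of this update rule, the following function computes the posterior from a prior, a likelihood, and the evidence term (the function name and the example numbers are illustrative, not taken from the article):

```python
def bayes_update(prior, likelihood, evidence):
    """Posterior p(X|Y) = p(Y|X) * p(X) / p(Y)."""
    return likelihood * prior / evidence

# Illustrative numbers: prior belief 0.5 in a hypothesis,
# likelihood 0.8 of the data under it, evidence 0.6 overall.
posterior = bayes_update(prior=0.5, likelihood=0.8, evidence=0.6)
print(round(posterior, 4))  # 0.6667
```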

Applications and Examples

One popular example of the use of Bayes' theorem is the Monty Hall problem, inspired by the television show Let's Make a Deal.
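The Monty Hall result (switching wins 2/3 of the time, staying wins 1/3) can be checked empirically with a short simulation; this sketch is an illustration added here, not part of the original article, and the host's tie-breaking choice is made deterministically for simplicity:

```python
import random

def monty_hall(switch, trials=100_000):
    """Estimate the win rate of the switch/stay strategies by simulation."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)   # door hiding the car
        pick = random.randrange(3)  # contestant's initial pick
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

random.seed(0)
print(monty_hall(switch=True))   # empirically close to 2/3
print(monty_hall(switch=False))  # empirically close to 1/3
```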

Another example of how Bayes' theorem would be used is:

Suppose a particular disease afflicts 1% of the population. Suppose that a test for the disease is 95% accurate. Suppose that someone tests positive for the disease but there is no other evidence that they have the disease. What is the probability that they have the disease?

Let X be the event that the test result is positive. Let Y be the event that the person actually has the disease.

Before the test result is known, our probability for the person having the disease is p(Y) = 1% = 0.01. The probability that the person has a positive test result, given that they have the disease, is p(X|Y) = 95% = 0.95. The denominator term, p(X), is a little more complex, since X can occur in two different ways: if the person has the disease, they test positive with probability 0.95; if the person does not have the disease, they test positive with probability 5% = 0.05. We denote the event that Y is not true by Y'. The laws of probability require that p(Y') = 1 - p(Y) = 0.99. Thus

p(X) = p(X|Y) p(Y) + p(X|Y') p(Y') = (0.95)(0.01) + (0.05)(0.99) = 0.059

and

p(Y|X) = p(X|Y) p(Y) / p(X) = 0.0095 / 0.059 ≈ 0.16.
In other words, there is only a 16% chance that a person testing positive actually has the disease.
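The arithmetic of this example can be reproduced directly (the variable names below are illustrative):

```python
# Disease-test example: 1% base rate, 95% accurate test.
p_disease = 0.01
p_pos_given_disease = 0.95   # true positive rate
p_pos_given_healthy = 0.05   # false positive rate

# Total probability of a positive test, p(X):
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Bayes' theorem: p(Y|X) = p(X|Y) p(Y) / p(X)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_pos, 3))               # 0.059
print(round(p_disease_given_pos, 3)) # 0.161
```

Note how the low base rate dominates: even with a 95% accurate test, most positives come from the much larger healthy population.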

An extended form of Bayes' theorem is obtained by noting that it applies to probability distributions as well as to events. Let y be a (vector-valued) observable quantity that we want to use to estimate some unknown, unobservable (vector-valued) quantity θ. Prior to seeing the data y, we summarize our knowledge about θ by a probability distribution p(θ). Assume that we have a model of the relationship between y and θ; call this p(y|θ). We can use Bayes' theorem to update our knowledge of θ by incorporating the information contained in the observed data y.

We have

p(θ|y) = p(y|θ) p(θ) / p(y),

where p(y) = ∫ p(y|θ) p(θ) dθ. Since p(y) does not depend on θ, this is often written p(θ|y) ∝ p(y|θ) p(θ).
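As a concrete instance of updating a distribution rather than a single event probability, consider the standard beta-binomial conjugate model (this example is an illustration added here, not from the article): a Beta(a, b) prior on a coin's heads probability θ, combined with a binomial likelihood for h heads and t tails, yields a Beta(a + h, b + t) posterior.

```python
# Beta-binomial conjugate update: prior Beta(a, b) on theta,
# observe `heads` heads and `tails` tails, posterior is
# Beta(a + heads, b + tails).
def beta_binomial_update(a, b, heads, tails):
    return a + heads, b + tails

# Start from a uniform prior Beta(1, 1) and observe 7 heads, 3 tails.
a, b = beta_binomial_update(1, 1, heads=7, tails=3)
posterior_mean = a / (a + b)  # mean of Beta(a, b) is a / (a + b)
print(a, b, round(posterior_mean, 3))  # 8 4 0.667
```

Conjugate priors like this make the update a closed-form bookkeeping step; for non-conjugate models the posterior usually requires numerical methods.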