
ASSIGNMENT

ON
BAYES’ THEOREM

SUBMITTED TO:-
MRS. NEERU

SUBMITTED BY:-
MOHIT NARANG
MBA-Ist
03417003909

Bayes' theorem
In probability theory, Bayes' theorem (often called Bayes' law or Bayes'
rule, and named after Rev. Thomas Bayes) shows how one conditional
probability (such as the probability of a hypothesis given observed
evidence) depends on its inverse (in this case, the probability of that
evidence given the hypothesis). The theorem expresses the posterior
probability (i.e. after evidence E is observed) of a hypothesis H in terms
of the prior probabilities of H and E, and the probability of E given H. It
implies that evidence has a stronger confirming effect if it was more
unlikely before being observed. Bayes' theorem is valid in all common
interpretations of probability and is applicable in science and
engineering. However, frequentist and Bayesian statisticians, and the
subjective and objective schools among them, disagree about the proper
implementation and scope of Bayes' theorem.

Simple statement of theorem


Bayes gave a special case involving continuous prior and posterior
probability distributions and discrete probability distributions of data,
but in its simplest setting involving only discrete distributions, Bayes'
theorem relates the conditional and marginal probabilities of events A
and B, where B has a non-vanishing probability:

P(A|B) = P(B|A) P(A) / P(B)
Each term in Bayes' theorem has a conventional name:

• P(A) is the prior probability or marginal probability of A. It is
"prior" in the sense that it does not take into account any
information about B.
• P(A|B) is the conditional probability of A, given B. It is also called
the posterior probability because it is derived from, or depends
upon, the specified value of B.
• P(B|A) is the conditional probability of B given A.
• P(B) is the prior or marginal probability of B, and acts as a
normalizing constant.

Bayes' theorem in this form gives a mathematical representation of how
the conditional probability of event A given B is related to the converse
conditional probability of B given A.
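
As a quick check of the discrete form, here is a minimal Python sketch;
the function name bayes_posterior is an illustrative choice, and the
numbers reuse the girl-or-boy example worked out later in this assignment.

def bayes_posterior(p_a, p_b_given_a, p_b):
    # Discrete Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
    if p_b == 0:
        raise ValueError("P(B) must be non-vanishing")
    return p_b_given_a * p_a / p_b

# P(A) = 0.4, P(B|A) = 0.5, P(B) = 0.8 (see the example below)
print(bayes_posterior(0.4, 0.5, 0.8))  # prints 0.25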

Likelihood functions and continuous prior and posterior distributions

Suppose a continuous probability distribution with probability density
function ƒΘ is assigned to an uncertain quantity Θ. (In the conventional
language of mathematical probability theory Θ would be a "random
variable", but in certain kinds of scientific applications such language
may be considered objectionable.) The probability that the event B will
be the outcome of an experiment depends on Θ; it is P(B | Θ). As a
function of Θ this is the likelihood function:

L(Θ) = P(B | Θ)
Then the posterior probability distribution of Θ, i.e. the conditional
probability distribution of Θ given the observed data B, has probability
density function

ƒΘ(θ | B) = constant × L(θ) ƒΘ(θ)
where the "constant" is a normalizing constant chosen so that the
integral of the function equals 1, making it a genuine probability
density function. This is the form of Bayes' theorem actually considered
by Thomas Bayes.
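
A minimal numerical sketch of this continuous form, assuming (purely for
illustration) a uniform prior on [0, 1] and a coin-flip likelihood for
B = "7 successes in 10 trials"; the binomial coefficient is omitted
because the normalizing constant absorbs it anyway:

import numpy as np

theta = np.linspace(0.0, 1.0, 1001)   # grid over the uncertain quantity
dtheta = theta[1] - theta[0]

prior = np.ones_like(theta)              # assumed uniform prior density
likelihood = theta**7 * (1 - theta)**3   # assumed L(theta) = P(B | theta)

unnormalized = likelihood * prior
posterior = unnormalized / (unnormalized.sum() * dtheta)  # the "constant"

print(posterior.sum() * dtheta)  # ~1.0: the posterior integrates to 1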
Simple example
Suppose a school has 60% boys and 40% girls as students.
The female students wear trousers or skirts in equal numbers; the boys
all wear trousers. An observer sees a (random) student from a distance;
all the observer can see is that this student is wearing trousers. What is
the probability this student is a girl? The correct answer can be
computed using Bayes' theorem.

The event A is that the student observed is a girl, and the event B is that
the student observed is wearing trousers. To compute P(A|B), we first
need to know:

• P(A), or the probability that the student is a girl regardless of any
other information. Since the observer sees a random student,
meaning that all students have the same probability of being
observed, and the fraction of girls among the students is 40%, this
probability equals 0.4.
• P(B|A), or the probability of the student wearing trousers given that
the student is a girl. As girls are as likely to wear skirts as trousers,
this is 0.5.
• P(B), or the probability of a (randomly selected) student wearing
trousers regardless of any other information. Since P(B) = P(B|
A)P(A) + P(B|A')P(A'), this is 0.5×0.4 + 1×0.6 = 0.8, as checked in
the sketch below.
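
A one-line numerical check of that law-of-total-probability step
(variable names are illustrative choices):

p_girl, p_boy = 0.4, 0.6
p_trousers_given_girl, p_trousers_given_boy = 0.5, 1.0

# P(B) = P(B|A)P(A) + P(B|A')P(A')
p_trousers = p_trousers_given_girl * p_girl + p_trousers_given_boy * p_boy
print(p_trousers)  # prints 0.8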

Given all this information, the probability of the observer having spotted
a girl given that the observed student is wearing trousers can be
computed by substituting these values in the formula:

P(A|B) = P(B|A) P(A) / P(B) = 0.5 × 0.4 / 0.8 = 0.25
Another, essentially equivalent way of obtaining the same result is as
follows. Assume, for concreteness, that there are 100 students: 60 boys
and 40 girls. Among these, 60 boys and 20 girls wear trousers. Altogether
there are 80 trouser-wearers, of whom 20 are girls. Therefore the chance
that a random trouser-wearer is a girl equals 20/80 = 0.25. Put in terms
of Bayes' theorem, the probability of a student being a girl is 40/100,
and the probability that any given girl wears trousers is 1/2. The product
of the two is 20/100; but since the student is known to wear trousers, we
restrict attention to the 80 trouser-wearers out of 100, giving a
probability of (20/100)/(80/100) = 20/80.
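
The same counting argument in a short Python sketch (the roster list is a
hypothetical construction mirroring the counts above):

# 100 students as (sex, clothing) pairs matching the example
students = ([("boy", "trousers")] * 60
            + [("girl", "trousers")] * 20
            + [("girl", "skirt")] * 20)

trouser_wearers = [s for s in students if s[1] == "trousers"]
girls_in_trousers = [s for s in trouser_wearers if s[0] == "girl"]

print(len(girls_in_trousers) / len(trouser_wearers))  # prints 0.25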

It is often helpful when calculating conditional probabilities to create a
simple table containing the number of occurrences of each outcome, or
the relative frequencies of each outcome, for each of the independent
variables. The table below illustrates the use of this method for the
girl-or-boy example above.

         Girls   Boys   Total
Trousers    20     60      80
Skirts      20      0      20
Total       40     60     100
