
Bayes's Decision Criterion

$$P(A|B) = \frac{P(B|A)\,P(A)}{P(B)}$$

Each term in Bayes's theorem has a conventional name: P(A) is the prior probability or marginal probability of A. It is "prior" in the sense that it does not take into account any information about B. P(A|B) is the conditional probability of A, given B. It is also called the posterior probability because it is derived from or depends upon the specified value of B. P(B|A) is the conditional probability of B given A. P(B) is the prior or marginal probability of B, and acts as a normalizing constant.
$$\text{posterior} = \frac{\text{likelihood} \times \text{prior}}{\text{normalizing constant}}$$

In words: the posterior probability is proportional to the prior probability times the likelihood. In addition, the ratio P(B|A)/P(B) is sometimes called the standardised likelihood, so the theorem may also be paraphrased as

$$\text{posterior} = \text{standardised likelihood} \times \text{prior}$$
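As a quick numerical illustration of the theorem (the channel values below are invented for the example, not taken from these notes), suppose A is the event "symbol 1 was sent" and B is the event "the detector output exceeds its threshold":

```python
# Hypothetical numbers chosen only to illustrate Bayes's theorem.
P_A = 0.3             # prior P(A): symbol 1 sent 30% of the time
P_B_given_A = 0.9     # likelihood P(B|A): detector fires when 1 is sent
P_B_given_notA = 0.2  # false-alarm rate P(B|not A)

# Normalizing constant P(B) by total probability.
P_B = P_B_given_A * P_A + P_B_given_notA * (1 - P_A)

# Posterior P(A|B) = P(B|A) P(A) / P(B).
P_A_given_B = P_B_given_A * P_A / P_B
print(f"P(B)   = {P_B:.3f}")          # 0.410
print(f"P(A|B) = {P_A_given_B:.3f}")  # 0.659
```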


1.0 Decision costs

There are two ways to lose information:
1. Cost $C_0$: information lost when a transmitted digital 1 is received as a digital 0 (error).
2. Cost $C_1$: information lost when a transmitted digital 0 is received as a digital 1 (error).

There is no cost when no information is lost, i.e., when correct decisions are made.

2.0 Expected conditional decision costs

The expected conditional cost, $C(0|v)$, incurred when a detected voltage $v$ is interpreted as digital 0 is given by

$$C(0|v) = C_0\,P(1|v) \qquad (1)$$

where $C_0$ is the cost incurred if the decision is in error (i.e., $v$ is interpreted as digital 0 when a 1 was sent) and $P(1|v)$ is the a posteriori probability that the decision is in error.

Similarly, by symmetry,

$$C(1|v) = C_1\,P(0|v) \qquad (2)$$

i.e., the expected conditional cost incurred when $v$ is interpreted as digital 1.

3.0 Optimum decision rule

Rational rule: interpret each detected voltage, $v$, as either a 0 or a 1 so as to minimize the expected conditional cost, i.e.,

$$C(1|v) \;\underset{0}{\overset{1}{\lessgtr}}\; C(0|v) \qquad (3)$$

Interpretation of (3): if the upper inequality holds, decide binary 1; if the lower inequality holds, decide binary 0. Substituting (1) and (2) into (3) gives the following equation.

$$\frac{P(0|v)}{P(1|v)} \;\underset{0}{\overset{1}{\lessgtr}}\; \frac{C_0}{C_1} \qquad (4)$$
If the costs of both types of error are the same, then (4) represents the maximum a posteriori (MAP) probability decision criterion (one form of Bayes's decision criterion).
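A minimal sketch of the minimum-cost rule in (3) and (4), assuming the posterior probabilities $P(0|v)$ and $P(1|v)$ have already been computed; the numbers in the usage line are placeholders:

```python
def bayes_decision(p0_given_v: float, p1_given_v: float,
                   c0: float, c1: float) -> int:
    """Decide 0 or 1 by minimizing expected conditional cost.

    C(0|v) = c0 * P(1|v)  -- cost of deciding 0 when 1 was sent
    C(1|v) = c1 * P(0|v)  -- cost of deciding 1 when 0 was sent
    """
    cost_decide_0 = c0 * p1_given_v
    cost_decide_1 = c1 * p0_given_v
    return 1 if cost_decide_1 < cost_decide_0 else 0

# Placeholder posteriors: the detected voltage slightly favors 1, but
# wrongly deciding 1 is made five times more costly, so the rule picks 0.
print(bayes_decision(p0_given_v=0.4, p1_given_v=0.6, c0=1.0, c1=5.0))  # 0
```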
Bayes's theorem gives

$$P(v,0) = p(v)\,P(0|v) = P(0)\,p(v|0) \qquad \text{and} \qquad P(v,1) = p(v)\,P(1|v) = P(1)\,p(v|1)$$

From these, the following can be deduced:

$$P(0|v) = \frac{p(v|0)\,P(0)}{p(v)} \qquad (5)$$

$$P(1|v) = \frac{p(v|1)\,P(1)}{p(v)} \qquad (6)$$

The figure below illustrates the conditional probability density functions for the case of zero-mean Gaussian noise.
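A short plotting sketch that reproduces the kind of figure described here; the signal levels (0 V and $A = 1$ V) and the noise spread $\sigma = 0.3$ V are illustrative assumptions:

```python
import numpy as np
import matplotlib.pyplot as plt

# Illustrative assumptions: binary 0 at 0 V, binary 1 at A = 1 V,
# zero-mean Gaussian noise with rms value sigma = 0.3 V.
A, sigma = 1.0, 0.3
v = np.linspace(-1.0, 2.0, 500)

def pdf(v, mean):
    """Conditional pdf p(v|symbol): Gaussian centred on the symbol level."""
    return np.exp(-(v - mean) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

plt.plot(v, pdf(v, 0.0), label="p(v|0)")
plt.plot(v, pdf(v, A), label="p(v|1)")
plt.axvline(A / 2, linestyle="--", label="threshold A/2 (equal priors)")
plt.xlabel("detected voltage v")
plt.ylabel("probability density")
plt.legend()
plt.show()
```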

The following can be deduced from (5) and (6):


$$\frac{P(0|v)}{P(1|v)} = \frac{p(v|0)\,P(0)}{p(v|1)\,P(1)} \qquad (7)$$

Substituting (7) into (4) gives

$$\frac{p(v|0)\,P(0)}{p(v|1)\,P(1)} \;\underset{0}{\overset{1}{\lessgtr}}\; \frac{C_0}{C_1} \qquad (8)$$

or

$$\frac{p(v|0)}{p(v|1)} \;\underset{0}{\overset{1}{\lessgtr}}\; \frac{C_0\,P(1)}{C_1\,P(0)} \qquad (9)$$

$\frac{p(v|0)}{p(v|1)}$ is called the likelihood ratio, a function of $v$, and $\frac{C_0\,P(1)}{C_1\,P(0)}$ is called the likelihood threshold, $L_{th}$. If $C_0 = C_1$ and $P(0) = P(1)$ (or, more generally, $C_0\,P(1) = C_1\,P(0)$), then $L_{th} = 1$ and (9) is called the maximum likelihood decision criterion.

Relationship between maximum likelihood, MAP, and Bayes's decision criteria:

| Receiver | A priori probabilities known | Decision costs known | Assumptions | Decision criterion |
|---|---|---|---|---|
| Bayes | Yes | Yes | None | $\frac{p(v\mid 0)}{p(v\mid 1)} \underset{0}{\overset{1}{\lessgtr}} \frac{C_0 P(1)}{C_1 P(0)}$ |
| MAP | Yes | No | $C_0 = C_1$ | $\frac{p(v\mid 0)}{p(v\mid 1)} \underset{0}{\overset{1}{\lessgtr}} \frac{P(1)}{P(0)}$ |
| Max. likelihood | No | No | $C_0 P(1) = C_1 P(0)$ | $\frac{p(v\mid 0)}{p(v\mid 1)} \underset{0}{\overset{1}{\lessgtr}} 1$ |
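The following sketch applies the likelihood-ratio test (9) to the Gaussian channel of the next section, using the thresholds from the table; the signal levels, noise spread, priors, and costs are illustrative assumptions:

```python
import math

def gaussian_pdf(v: float, mean: float, sigma: float) -> float:
    """Conditional pdf p(v|symbol) for additive zero-mean Gaussian noise."""
    return math.exp(-(v - mean) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def decide(v: float, A: float, sigma: float, L_th: float) -> int:
    """Likelihood-ratio test (9): decide 1 if p(v|0)/p(v|1) < L_th, else 0."""
    likelihood_ratio = gaussian_pdf(v, 0.0, sigma) / gaussian_pdf(v, A, sigma)
    return 1 if likelihood_ratio < L_th else 0

# Illustrative parameters: levels 0 V and A = 1 V, sigma = 0.3 V.
A, sigma = 1.0, 0.3
P0, P1, C0, C1 = 0.7, 0.3, 1.0, 1.0  # unequal priors, equal costs

L_bayes = (C0 * P1) / (C1 * P0)  # Bayes threshold; equals MAP's since C0 = C1
L_ml = 1.0                       # maximum-likelihood threshold

v = 0.55  # a detected voltage just above the midpoint A/2
print(decide(v, A, sigma, L_ml))     # 1 -- past the midpoint
print(decide(v, A, sigma, L_bayes))  # 0 -- the strong prior for 0 shifts the decision
```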

4.0 Optimum decision threshold voltage

Bayes's decision criterion sets the optimum reference (threshold voltage), $v_{th}$, in the receiver decision circuit. $v_{th}$ minimizes the expected conditional cost of each decision and satisfies:

$$\frac{p(v_{th}|0)}{p(v_{th}|1)} = \frac{C_0\,P(1)}{C_1\,P(0)} = L_{th} \qquad (10)$$

If $L_{th} = 1$ (for statistically independent, equiprobable symbols with equal error costs), the voltage threshold occurs at the intersection of the two conditional pdfs. Assume the voltage levels for binary 0 and 1 are represented by 0 V and $A$ V, respectively. If binary 1 and 0 are equiprobable, the decision threshold is located exactly halfway between the voltage levels representing 1 and 0. If 0 is transmitted more often than 1, the threshold moves towards the transmitted voltage representing 1. Once the decision threshold has been established, the total probability of error can be calculated.
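For equal-variance Gaussian pdfs centred at 0 and $A$, taking logarithms of (10) gives the closed form $v_{th} = \frac{A}{2} - \frac{\sigma^2}{A}\ln L_{th}$; the sketch below evaluates it for illustrative values:

```python
import math

def optimum_threshold(A: float, sigma: float, P0: float, P1: float,
                      C0: float = 1.0, C1: float = 1.0) -> float:
    """Threshold solving (10) for Gaussian pdfs N(0, sigma^2) and N(A, sigma^2).

    Setting p(v|0)/p(v|1) = exp((A^2 - 2Av) / (2 sigma^2)) = L_th and
    taking logs gives v_th = A/2 - (sigma^2 / A) * ln(L_th).
    """
    L_th = (C0 * P1) / (C1 * P0)
    return A / 2 - (sigma ** 2 / A) * math.log(L_th)

A, sigma = 1.0, 0.3
print(optimum_threshold(A, sigma, P0=0.5, P1=0.5))  # 0.5: halfway for equal priors
print(optimum_threshold(A, sigma, P0=0.7, P1=0.3))  # ~0.576: shifted towards the '1' level
```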
If binary 0 (0 V) is sent, the probability that it will be received as a 1 is the probability that the noise will exceed $+A/2$ volts (assuming $P_0 = P_1$), i.e.,

$$P_{e0} = \int_{A/2}^{\infty} \frac{\exp\left(-v^2/2\sigma^2\right)}{\sqrt{2\pi\sigma^2}}\, dv \qquad \text{(11a)}$$

If a 1 is sent ($A$ volts), the probability that it will be received as a 0 is the probability that the noise voltage will lie between $-\infty$ and $-A/2$, i.e.,

$$P_{e1} = \int_{-\infty}^{-A/2} \frac{\exp\left(-v^2/2\sigma^2\right)}{\sqrt{2\pi\sigma^2}}\, dv \qquad \text{(11b)}$$

These types of error are mutually exclusive, since sending a 0 precludes sending a 1. The symmetry of the Gaussian function results in $P_{e0} = P_{e1}$, and the total probability of error is $P_e = P_0 P_{e0} + P_1 P_{e1}$. This can be reduced to

$$P_e = P_{e1}(P_0 + P_1) = P_{e1}$$

i.e.,

$$P_e = \int_{-\infty}^{-A/2} \frac{\exp\left(-v^2/2\sigma^2\right)}{\sqrt{2\pi\sigma^2}}\, dv \qquad (12)$$

This equation can be written in terms of two integrals:

$$P_e = \int_{-\infty}^{0} \frac{\exp\left(-v^2/2\sigma^2\right)}{\sqrt{2\pi\sigma^2}}\, dv - \int_{-A/2}^{0} \frac{\exp\left(-v^2/2\sigma^2\right)}{\sqrt{2\pi\sigma^2}}\, dv \qquad (13)$$

Using the fact that the Gaussian distribution is symmetrical about its mean value, equation (13) reduces to

$$P_e = \frac{1}{2} - \int_{0}^{A/2} \frac{\exp\left(-v^2/2\sigma^2\right)}{\sqrt{2\pi\sigma^2}}\, dv$$

Substituting $y = v/\sqrt{2}\sigma$, the equation reduces to

$$P_e = \frac{1}{2}\left[1 - \frac{2}{\sqrt{\pi}} \int_{0}^{A/2\sqrt{2}\sigma} \exp\left(-y^2\right)\, dy\right]$$

i.e.,

$$P_e = \frac{1}{2}\left[1 - \operatorname{erf}\!\left(\frac{A}{2\sqrt{2}\,\sigma}\right)\right] \qquad (14)$$

The error probability therefore depends solely on the ratio of the peak pulse voltage $A$ to the rms noise voltage $\sigma$.
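A short numerical check of (14), using the standard-library error function; the values of $A$ and $\sigma$ are illustrative:

```python
import math

def error_probability(A: float, sigma: float) -> float:
    """Total error probability (14) for equiprobable binary levels 0 and A
    in zero-mean Gaussian noise with rms value sigma."""
    return 0.5 * (1.0 - math.erf(A / (2.0 * math.sqrt(2.0) * sigma)))

# Pe falls steeply as the ratio A/sigma grows.
for A, sigma in [(1.0, 0.5), (1.0, 0.3), (1.0, 0.2)]:
    print(f"A/sigma = {A/sigma:.1f}:  Pe = {error_probability(A, sigma):.3e}")
```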
