
Fundamentals of Digital Communications
Chapter 2: Probability and
Stochastic Processes
Lectured by Assoc Prof. Dr. Thuong Le-Tien

August 2015

Probabilistic models are needed for the design of systems that are reliable
in performance in the face of uncertainty, efficient in computational terms,
and cost-effective.

Wireless channels are subject to uncertainties, the sources of which include:
Noise, due to thermal agitation of electrons in the conductors and devices
Fading, due to the multipath phenomenon
Interference, representing spurious electromagnetic waves emitted by other
communication systems or microwave devices.

Probability theory
Probabilistic models
The mathematical description of an experiment with uncertain outcomes
is called a probabilistic model, a formulation with three fundamental ingredients:
1. A sample space or universal set S: the set of outcomes of a random experiment
2. A class E of events that are subsets of S
3. A probability law: P[A] is called the probability of event A

Axioms of Probability
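The axioms themselves did not survive extraction; in their standard form they read:

```latex
\textbf{Axiom 1:}\quad P[S] = 1 \\
\textbf{Axiom 2:}\quad 0 \le P[A] \le 1 \\
\textbf{Axiom 3:}\quad A \cap B = \emptyset \;\Rightarrow\; P[A \cup B] = P[A] + P[B]
```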

Conditional probability

Bayes' Rule
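The defining equations for conditional probability and Bayes' rule were lost in extraction; in standard notation:

```latex
P[A \mid B] = \frac{P[A \cap B]}{P[B]}, \quad P[B] > 0,
\qquad\text{and}\qquad
P[A \mid B] = \frac{P[B \mid A]\,P[A]}{P[B]}
```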

Examples of Bayes' Rule


Radar Detection

Suppose:
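The numerical values given on this slide did not survive extraction; the following sketch applies Bayes' rule to radar detection with illustrative numbers (the prior p1 and the conditional probabilities p_d, p_fa are assumptions, not the slide's values):

```python
# Bayes' rule applied to radar detection (illustrative numbers).
# A target is present with prior probability p1; the radar declares a
# detection with probability p_d when a target is present and with
# false-alarm probability p_fa when it is absent.
p1 = 0.01    # P(target present)            -- assumed
p_d = 0.9    # P(detect | target present)   -- assumed
p_fa = 0.05  # P(detect | target absent)    -- assumed

# Total probability of a detection event
p_detect = p_d * p1 + p_fa * (1 - p1)

# Posterior probability that a target is really present, given a detection
p_target_given_detect = p_d * p1 / p_detect
print(round(p_target_given_detect, 4))
```

Even with a reliable radar, the small prior keeps the posterior well below one half, which is the usual lesson of this example.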

Random variables
* A random variable is a function whose domain is a sample space
and whose range is some set of real numbers
* Upper-case characters denote random variables, and lower-case
characters denote real values taken by random variables

Distribution Function

Monotonicity of the distribution
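The distribution function and its monotonicity properties, reconstructed in standard notation (the slide's equations did not extract):

```latex
F_X(x) = P[X \le x] \\
x_1 \le x_2 \;\Rightarrow\; F_X(x_1) \le F_X(x_2), \qquad
F_X(-\infty) = 0, \qquad F_X(+\infty) = 1
```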

Examples
* Uniform Distribution

* Bernoulli Random Variable
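The two example distributions, in the standard form the slide presumably showed:

```latex
\text{Uniform on } [a,b]:\quad
f_X(x) = \begin{cases} \dfrac{1}{b-a}, & a \le x \le b \\ 0, & \text{otherwise} \end{cases}
\\[1ex]
\text{Bernoulli:}\quad P[X = 1] = p, \qquad P[X = 0] = 1 - p
```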


Multiple Random Variables


The joint distribution function FX,Y(x,y) is the probability that the random
variable X is less than or equal to a specified value x, and that the random
variable Y is less than or equal to another specified value y
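In symbols (reconstructed, since the slide's equation did not extract), together with the marginals it implies:

```latex
F_{X,Y}(x, y) = P[X \le x,\; Y \le y], \qquad
F_X(x) = F_{X,Y}(x, \infty), \quad F_Y(y) = F_{X,Y}(\infty, y)
```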


Concept of Expectation
Expected value or Mean:

nth order moments


nth central moments

Variances

Covariances
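As a concrete illustration of these definitions, the sample versions of the mean, variance, and covariance can be computed as follows (the data values are illustrative, not from the slides):

```python
import statistics

# Sample estimates of E[X], Var[X] = E[(X - E[X])^2], and
# Cov[X,Y] = E[(X - E[X])(Y - E[Y])]  (illustrative data).
x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 1.0, 4.0, 3.0]

mean_x = statistics.fmean(x)      # sample mean of X
mean_y = statistics.fmean(y)      # sample mean of Y
var_x = statistics.pvariance(x)   # population-form sample variance of X

# Sample covariance (population form, dividing by N)
cov_xy = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y)) / len(x)

print(mean_x, var_x, cov_xy)
```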


Bayesian Inference
The parameter space is hidden from the observer. A parameter vector θ, drawn
from the parameter space, is mapped probabilistically onto the observation
space, producing the observation vector x, which is the sample value of a
random vector X.

Probabilistic model for Bayesian inference


Four notions are introduced


Parameter estimation in additive noise

Consider a set of N scalar observations, defined by:

where the unknown parameter θ is drawn from a Gaussian distribution

and each ni is drawn from a Gaussian distribution

Assume that the random variables Ni are all independent of each other,
and also independent of θ. The issue of interest is to find the maximum
a posteriori (MAP) estimate of θ. Using the vector x to denote the N
observations, the observation density of x is

The problem is to determine the MAP estimate of the unknown parameter θ

To solve this problem, we need to know the posterior density

where c(x) is the normalization factor

Rearranging terms, completing the square in the exponent, and introducing
a new normalization factor c(x) that absorbs all terms not involving θ, we obtain

where

This equation shows that the posterior density of θ is Gaussian, with mean

and variance

Therefore the MAP estimate of θ is
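The intermediate equations on this slide did not survive extraction. Under the standard assumptions for this example — observations xi = θ + ni with a zero-mean Gaussian prior θ ~ N(0, σθ²) and i.i.d. noise ni ~ N(0, σ²); the zero means are an assumption, since the slide's distributions were lost — the posterior works out to:

```latex
\sigma_{\text{post}}^2 = \left( \frac{1}{\sigma_\theta^2} + \frac{N}{\sigma^2} \right)^{-1},
\qquad
\hat{\theta}_{\text{MAP}} = \frac{\sigma_\theta^2}{\sigma_\theta^2 + \sigma^2 / N}\,\bar{x},
\quad \bar{x} = \frac{1}{N}\sum_{i=1}^{N} x_i
```

Note that as N grows the shrinkage factor approaches 1 and the MAP estimate approaches the sample mean.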


Hypothesis testing
Binary hypotheses: a source of binary data whose outputs 0 and 1 are denoted by H0 and H1


Likelihood receiver
Introduce the notations:

The two conditional probability density functions

are referred to as likelihood functions; two kinds of errors then arise.

The conditional probabilities of error are

and
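In standard notation, with Z0 and Z1 denoting the decision regions for H0 and H1, the two conditional probabilities of error are (reconstructed; the slide's integrals did not extract):

```latex
P[\text{decide } H_1 \mid H_0 \text{ true}] = \int_{Z_1} f_{X \mid H_0}(x \mid H_0)\, dx
\qquad
P[\text{decide } H_0 \mid H_1 \text{ true}] = \int_{Z_0} f_{X \mid H_1}(x \mid H_1)\, dx
```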

Define the Bayes risk for the binary hypothesis-testing problem as

The optimum decision rule proceeds as follows:

1. If

   λ0 fX|H0(x | H0) ≥ λ1 fX|H1(x | H1)

then the observation vector x should be assigned to Z0; in this case,
we say H0 is true.

2. If, on the other hand,

   λ0 fX|H0(x | H0) < λ1 fX|H1(x | H1)

then the observation vector x should be excluded from Z0; in this case,
we say H1 is true.

The likelihood ratio is defined by

and the scalar quantity

is called the threshold of the test. The two decisions are then

or
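The likelihood-ratio test can be sketched numerically. Assuming two unit-variance Gaussian hypotheses (these densities and the threshold value are illustrative, not the slide's example):

```python
import math

# Likelihood-ratio test for two Gaussian hypotheses (illustrative):
#   H0: x ~ N(0, 1)      H1: x ~ N(1, 1)
# Decide H1 when Lambda(x) = f(x|H1) / f(x|H0) exceeds the threshold eta.

def gaussian_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def decide(x, eta=1.0):
    lam = gaussian_pdf(x, 1.0, 1.0) / gaussian_pdf(x, 0.0, 1.0)
    return "H1" if lam > eta else "H0"

print(decide(0.2), decide(0.9))
```

With equal priors (eta = 1) this particular test reduces to comparing the observation against the midpoint 0.5 of the two means.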


Example: binary hypothesis testing

The likelihood ratio


Multiple Hypotheses for M possible source outputs

First case: M = 3

Given an observation vector x in a multiple-hypothesis test, the average
probability of error is minimized by choosing the hypothesis Hi for which
the posterior probability P(Hi | x) has the largest value, for i = 0, 1, ..., M-1.
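A minimal sketch of this MAP decision rule for M = 3 (the priors and likelihood values are illustrative assumptions, not the slide's numbers):

```python
# MAP decision among M = 3 hypotheses (illustrative values).
# Choose the hypothesis H_i maximizing P(H_i | x) ∝ f(x | H_i) * P(H_i);
# the common factor f(x) cancels, so unnormalized scores suffice.
priors = [0.5, 0.3, 0.2]       # P(H_0), P(H_1), P(H_2)       -- assumed
likelihoods = [0.1, 0.4, 0.3]  # f(x | H_i) for an observed x -- assumed

scores = [lik * p for lik, p in zip(likelihoods, priors)]
i_map = max(range(len(scores)), key=scores.__getitem__)
print(f"decide H{i_map}")
```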


Some important random variables
