
Fundamental of Digital

Chapter 2: Probability and
Stochastic Processes
Lectured by Assoc Prof. Dr. Thuong Le-Tien

August 2015

Probabilistic models are needed for the design of systems
that are reliable in performance in the face of uncertainty,
efficient in computational terms, and cost effective.
A wireless channel is subject to uncertainties,
the sources of which include:
* Noise due to thermal agitation of electrons in the
conductors and devices
* Fading due to the multipath phenomenon
* Interference, representing spurious electromagnetic
waves emitted by other communication systems
or microwave devices.

Probability theory
Probabilistic models
The mathematical description of an experiment with uncertain outcomes
is called a probabilistic model, a formulation with three fundamental ingredients:
1. A sample space, or universal set, S: the outcomes of a random experiment
2. A class E of events that are subsets of S
3. A probability law: P[A] is called the probability of event A
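The three ingredients can be made concrete with a small sketch. The fair six-sided die below is a hypothetical example chosen for illustration, not taken from the lecture:

```python
from fractions import Fraction

# Hypothetical example (not from the lecture): a fair six-sided die.

# 1. Sample space S: all outcomes of the random experiment
S = {1, 2, 3, 4, 5, 6}

# 2. An event A, a subset of S: "the outcome is even"
A = {s for s in S if s % 2 == 0}

# 3. Probability law: for equally likely outcomes, P[A] = |A| / |S|
def prob(event, sample_space):
    return Fraction(len(event), len(sample_space))

print(prob(A, S))  # P[A] = 1/2
```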

Axioms of Probability

Conditional probability


Examples of Bayes Rule

Radar Detection
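The radar-detection setting can be sketched numerically with Bayes' rule. The prior, detection, and false-alarm probabilities below are assumed values chosen for illustration only, not numbers from the lecture:

```python
# Assumed values for illustration (not from the lecture):
p_target = 0.01                  # prior P(target present)
p_alarm_given_target = 0.9       # P(alarm | target present)
p_alarm_given_no_target = 0.05   # P(alarm | no target)

# Total probability of an alarm
p_alarm = (p_alarm_given_target * p_target
           + p_alarm_given_no_target * (1 - p_target))

# Bayes' rule: P(target | alarm)
p_target_given_alarm = p_alarm_given_target * p_target / p_alarm
print(round(p_target_given_alarm, 4))
```

Even with a reliable detector, a rare target means most alarms are false: the posterior here is only about 0.15.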


Random variables
* A random variable is a function whose domain is a sample space
and whose range is some set of real numbers
* Uppercase characters denote random variables and lowercase
characters denote the real values taken by random variables

Distribution Function

Monotonicity of the distribution

* Uniform Distribution

* Bernoulli Random Variable
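The distribution functions of these two random variables can be sketched directly. The parameters below (uniform on [0, 1], Bernoulli with p = 0.3) are assumed for illustration:

```python
# Assumed parameters: uniform on [a, b] = [0, 1], Bernoulli with p = 0.3.

def uniform_cdf(x, a=0.0, b=1.0):
    """F_X(x) for X uniform on [a, b]: 0 below a, linear on [a, b], 1 above b."""
    if x < a:
        return 0.0
    if x > b:
        return 1.0
    return (x - a) / (b - a)

def bernoulli_cdf(x, p=0.3):
    """F_X(x) for X Bernoulli(p): a staircase with jumps at 0 and 1."""
    if x < 0:
        return 0.0
    if x < 1:
        return 1.0 - p
    return 1.0

print(uniform_cdf(0.25))   # 0.25
print(bernoulli_cdf(0.5))
```

Both functions are monotonically nondecreasing from 0 to 1, as the monotonicity property above requires.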


Multiple Random Variables

The joint distribution function FX,Y(x, y) is the probability that the random
variable X is less than or equal to a specified value x, and that the random
variable Y is less than or equal to another specified value y.
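The definition FX,Y(x, y) = P[X <= x, Y <= y] can be illustrated with an empirical joint distribution computed from sample pairs; the data below are made-up values, not from the lecture:

```python
# Made-up sample pairs (x_i, y_i) for illustration only.
pairs = [(0.1, 0.4), (0.8, 0.2), (0.3, 0.9), (0.5, 0.5)]

def joint_cdf(x, y, samples):
    """Empirical F_{X,Y}(x, y): fraction of pairs with X <= x and Y <= y."""
    hits = sum(1 for (xs, ys) in samples if xs <= x and ys <= y)
    return hits / len(samples)

print(joint_cdf(0.5, 0.5, pairs))
```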


Concept of Expectation
Expected value or Mean:

nth-order moments

nth central moments
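For a discrete random variable, the expected value, nth-order moments, and nth central moments reduce to weighted sums over the probability mass function. The pmf below is an assumed example, not from the lecture:

```python
# Assumed probability mass function: values x mapped to P[X = x].
pmf = {0: 0.2, 1: 0.5, 2: 0.3}

def moment(pmf, n):
    """nth-order moment E[X^n] = sum over x of x^n * P[X = x]."""
    return sum((x ** n) * p for x, p in pmf.items())

def central_moment(pmf, n):
    """nth central moment E[(X - mean)^n], where mean = E[X]."""
    mean = moment(pmf, 1)
    return sum(((x - mean) ** n) * p for x, p in pmf.items())

mean = moment(pmf, 1)              # expected value (first-order moment)
variance = central_moment(pmf, 2)  # second central moment
print(mean, variance)
```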




Bayesian Inference
The parameter space is hidden from the observer. A parameter vector t,
drawn from the parameter space, is mapped probabilistically onto the
observation space, producing the observation vector x, which is the
sample value of a random vector X.

Probabilistic model for Bayesian inference


Four notions are introduced


Parameter estimation in additive noise

Consider a set of N scalar observations, defined by:

where the unknown parameter θ is drawn from a Gaussian distribution.

Each ni is drawn from a Gaussian distribution.

Assume that the random variables Ni are all independent of each other,
and also independent of θ. The issue of interest is to find the maximum
a posteriori (MAP) estimate of θ. Using the vector x to denote the N
observations, the observation density of x is

The problem is to determine the MAP estimate of the unknown parameter θ.


To solve this problem, we need to know the posterior density

where c(x) is the normalization factor

Rearranging terms, completing the square in the exponent, and introducing
a new normalization factor c(x) that absorbs all terms involving x,

this equation shows that the posterior density of θ is Gaussian, with the
mean and variance shown.
Therefore the MAP estimate of θ is
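The MAP result can be sketched numerically. As a hedged illustration, assume the common textbook form of this model: xi = θ + ni with a zero-mean Gaussian prior θ ~ N(0, σθ²) and i.i.d. noise ni ~ N(0, σn²); these parameter choices are assumptions, not values from the lecture. Under them the posterior is Gaussian, so the MAP estimate equals the posterior mean, a shrunken sample mean:

```python
# Assumed model (not necessarily the lecture's exact parameters):
#   x_i = theta + n_i, theta ~ N(0, sigma_theta2), n_i ~ N(0, sigma_n2) i.i.d.
# Then: theta_MAP = sigma_theta2 / (sigma_theta2 + sigma_n2 / N) * mean(x)

def map_estimate(x, sigma_theta2, sigma_n2):
    """MAP estimate of theta from observations x = [x_1, ..., x_N]."""
    n = len(x)
    sample_mean = sum(x) / n
    shrinkage = sigma_theta2 / (sigma_theta2 + sigma_n2 / n)
    return shrinkage * sample_mean

# As the prior variance grows, the estimate approaches the sample mean.
x = [1.2, 0.8, 1.0, 1.1]
print(map_estimate(x, sigma_theta2=1.0, sigma_n2=1.0))
print(map_estimate(x, sigma_theta2=1e6, sigma_n2=1.0))
```

The shrinkage factor shows the Bayesian trade-off: a tight prior pulls the estimate toward the prior mean, while many or clean observations let the data dominate.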


Hypothesis testing
Binary hypotheses: a source of binary data whose symbols 0 and 1 are denoted by the hypotheses H0 and H1


Likelihood receiver
First, introduce some notation:

The two conditional probability density functions

are referred to as likelihood functions; two kinds of errors can then occur.

The conditional probabilities of error


Define the Bayes risk for the binary hypothesis-testing problem as

The optimum decision rule proceeds as follows:

1. If

π0(c10 − c00) fX|H(x|H0) ≥ π1(c01 − c11) fX|H(x|H1)

then the observation vector x should be assigned to Z0; in this case,
we say H0 is true.
2. If, on the other hand,

π0(c10 − c00) fX|H(x|H0) < π1(c01 − c11) fX|H(x|H1)

then the observation vector x should be excluded from Z0; in this
case, we say H1 is true.

The likelihood ratio is defined by

Λ(x) = fX|H(x|H1) / fX|H(x|H0)

and the scalar quantity

η = π0(c10 − c00) / (π1(c01 − c11))

is called the threshold of the test. The two decisions then become:
if Λ(x) > η, decide H1; if Λ(x) < η, decide H0.
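A likelihood-ratio test can be sketched for a single Gaussian observation. The means, variance, and threshold below are assumed values for illustration, not the lecture's example:

```python
import math

# Assumed setup: under H0, x ~ N(0, 1); under H1, x ~ N(1, 1); threshold eta = 1.

def gaussian_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def decide(x, eta=1.0):
    """Likelihood-ratio test: decide H1 if the ratio exceeds the threshold eta."""
    ratio = gaussian_pdf(x, 1.0, 1.0) / gaussian_pdf(x, 0.0, 1.0)
    return "H1" if ratio > eta else "H0"

# With equal variances and eta = 1, the ratio exceeds 1 exactly when
# x > 0.5, the midpoint of the two means.
print(decide(0.9), decide(0.1))
```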



Example: binary hypothesis testing

The likelihood ratio


Multiple Hypotheses for M Possible Source Outputs

First case: M = 3

Given an observation vector x in a multiple-hypothesis test, the average
probability of error is minimized by choosing the hypothesis Hi for which
the posterior probability P[Hi | x] has the largest value, for i = 0, 1, ..., M − 1.
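This maximum-posterior rule for M = 3 can be sketched directly; the priors and likelihood values below are assumed numbers for illustration, not from the lecture:

```python
# Assumed values for illustration only:
priors = [0.5, 0.3, 0.2]        # pi_i = P[H_i]
likelihoods = [0.1, 0.4, 0.9]   # f(x | H_i) evaluated at the observed x

# P[H_i | x] is proportional to pi_i * f(x | H_i); the normalization
# factor is common to all i, so maximizing the product suffices.
scores = [p * f for p, f in zip(priors, likelihoods)]
decision = max(range(len(scores)), key=lambda i: scores[i])
print(f"decide H{decision}")
```

Note that the hypothesis with the largest prior need not win: here the likelihood of H2 outweighs its small prior.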


Some important random variables