Information Theory and Coding
Definition of information

We define the amount of information gained after observing the event s_k, which occurs with a defined probability p_k, as the logarithmic function

I(s_k) = \log_2(1 / p_k)

where p_k is the probability of occurrence of event s_k.
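A minimal Python sketch of this definition (ours, not part of the original notes); it evaluates I(s_k) in bits:

import math

def information(pk):
    # I(sk) = log2(1/pk) = -log2(pk): information gained, in bits
    return math.log2(1 / pk)

print(information(1.0))    # 0.0 bits: a certain event conveys no information
print(information(0.5))    # 1.0 bit
print(information(0.125))  # 3.0 bits: the less probable the event, the more information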
Remember: Conditional Probability and Joint Probability
Important properties

1. If we are absolutely certain of the outcome of an event, even before it occurs, no information is gained.
2. The occurrence of an event either provides some or no information, but never brings about a loss of information.
3. The less probable an event is, the more information we gain when it occurs.
It is the standard practice today to use a logarithm to base 2. The resulting unit of information is called the bit.
Entropy of a Discrete Memoryless Source

The entropy of a discrete memoryless source with M symbols, where symbol s_k has probability of occurrence p_k, is

H(S) = \sum_{k} p_k \log_2(1 / p_k)

Properties of Entropy

1. Entropy is a measure of the uncertainty of the random variable.
2. H(S) = 0 if, and only if, p_k = 0 or 1; this lower bound corresponds to no uncertainty.
3. H(S) = \log_2 M when all M symbols are equiprobable; this upper bound corresponds to maximum uncertainty.
Proof of these properties

2nd Property: H(S) is zero if, and only if, p_k = 0 or 1. Each term p_k \log_2(1/p_k) vanishes when p_k = 0 and when p_k = 1. We therefore deduce that H(S) = 0 if, and only if, p_k = 1 for some k and all the remaining probabilities are zero.
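A quick numerical check of this property (our sketch; the entropy() helper is an assumption, not from the notes):

import math

def entropy(probs):
    # H(S) = sum_k pk * log2(1/pk); terms with pk = 0 contribute nothing
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

print(entropy([1.0, 0.0, 0.0]))    # 0.0: certainty, no uncertainty
print(entropy([0.5, 0.25, 0.25]))  # 1.5 bits: nonzero uncertainty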
Example: Entropy of a Binary Memoryless Source

We consider a binary source for which symbol 0 occurs with probability P(0) and symbol 1 with probability P(1) = 1 - P(0). We assume that the source is memoryless.
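For this source the entropy definition reduces to the binary entropy function H = -P(0) \log_2 P(0) - P(1) \log_2 P(1). A minimal sketch (ours, not from the notes):

import math

def binary_entropy(p0):
    # H = -p0*log2(p0) - (1-p0)*log2(1-p0); H(0) = H(1) = 0 by continuity
    if p0 in (0.0, 1.0):
        return 0.0
    p1 = 1.0 - p0
    return -(p0 * math.log2(p0) + p1 * math.log2(p1))

print(binary_entropy(0.5))  # 1.0 bit: maximum, symbols equiprobable
print(binary_entropy(0.9))  # ~0.47 bits: a predictable source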
Proof of 3rd statement: Condition for Maximum Entropy

We know that the entropy can achieve a maximum value of \log_2 M, where M is the number of symbols. If we assume that all symbols are equiprobable, each with probability of occurrence p_k = 1/M, the associated entropy is therefore

H(S) = \sum_{k=1}^{M} p_k \log_2(1 / p_k) = M \cdot \frac{1}{M} \log_2 M = \log_2 M
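A numerical confirmation of this result (our sketch): for M equiprobable symbols the entropy equals \log_2 M.

import math

def entropy(probs):
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

M = 8
print(entropy([1 / M] * M))  # 3.0
print(math.log2(M))          # 3.0: H(S) = log2 M for equiprobable symbols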
EXAMPLE: Entropy of a Source

Six messages with probabilities 0.30, 0.25, 0.15, 0.12, 0.10, and 0.08, respectively, are transmitted. Find the entropy.

Working first in base-10 logarithms:

-(0.30 \log_{10} 0.30 + 0.25 \log_{10} 0.25 + 0.15 \log_{10} 0.15 + 0.12 \log_{10} 0.12 + 0.10 \log_{10} 0.10 + 0.08 \log_{10} 0.08) = 0.7292

Converting to base 2 (dividing by \log_{10} 2 = 0.3010):

H(X) = -(0.30 \log_2 0.30 + 0.25 \log_2 0.25 + 0.15 \log_2 0.15 + 0.12 \log_2 0.12 + 0.10 \log_2 0.10 + 0.08 \log_2 0.08) \approx 2.4224 bits/message
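The same computation in a few lines of Python (our sketch, verifying the result above):

import math

probs = [0.30, 0.25, 0.15, 0.12, 0.10, 0.08]
H = -sum(p * math.log2(p) for p in probs)
print(H)  # ~2.4224 bits per message, matching the result above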
Discrete Memoryless Channel

A discrete memoryless channel is a statistical model with an input X and an output Y that is a noisy version of X; both X and Y are random variables.
Channel matrix, or transition matrix

A convenient way of describing a discrete memoryless channel is to arrange the various transition probabilities of the channel in the form of a matrix as follows:

P(Y/X) = \begin{bmatrix} p(y_1|x_1) & \cdots & p(y_K|x_1) \\ \vdots & & \vdots \\ p(y_1|x_J) & \cdots & p(y_K|x_J) \end{bmatrix}

where each row corresponds to a fixed channel input and sums to 1.
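A channel matrix is naturally stored row by row; each row is the distribution of Y for one input symbol and must sum to 1. A tiny sketch (ours, with illustrative numbers):

P = [[0.9, 0.1],   # p(y1|x1), p(y2|x1) -- assumed values
     [0.2, 0.8]]   # p(y1|x2), p(y2|x2)

assert all(abs(sum(row) - 1.0) < 1e-9 for row in P)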
Capacity of a Discrete Memoryless Channel

The channel capacity of a discrete memoryless channel is defined as the maximum mutual information I(X; Y) in any single use of the channel, where the maximization is over all possible input probability distributions {p(x_j)} on X:

C = \max_{\{p(x_j)\}} I(X; Y)

The channel capacity C is measured in bits per channel use, or bits per transmission.
Binary symmetric channel

[Figure: two-input, two-output channel with input probabilities P(x_2) = 0.4 (so P(x_1) = 0.6) and the transition probabilities below marked on the arrows]

P(Y/X) = \begin{bmatrix} 0.8 & 0.2 \\ 0.3 & 0.7 \end{bmatrix}
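Using the transition matrix read off this figure, the mutual information and a brute-force capacity estimate can be computed as below (our sketch; the scan over input distributions implements the maximization in the capacity definition):

import math

P = [[0.8, 0.2],   # p(y1|x1), p(y2|x1)
     [0.3, 0.7]]   # p(y1|x2), p(y2|x2)

def entropy(probs):
    return sum(p * math.log2(1 / p) for p in probs if p > 0)

def mutual_information(px, P):
    # I(X;Y) = H(Y) - H(Y|X)
    py = [sum(px[j] * P[j][k] for j in range(len(px))) for k in range(len(P[0]))]
    h_y_given_x = sum(px[j] * entropy(P[j]) for j in range(len(px)))
    return entropy(py) - h_y_given_x

print(mutual_information([0.6, 0.4], P))  # I(X;Y) for the inputs in the figure

# C = max over input distributions of I(X;Y), approximated by a fine scan
C = max(mutual_information([p, 1 - p], P) for p in (i / 1000 for i in range(1001)))
print(C)  # bits per channel use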
Types of channels and associated entropy

Lossless channel
Deterministic channel
Noiseless channel
Binary symmetric channel
Lossless channel: the channel matrix has only one non-zero element in each column.

Deterministic channel: the channel matrix has only one non-zero element in each row, for example

[P(Y/X)] = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
In case of a deterministic channel, p(y/x) = 0 or 1, as the probability of y, given that x has occurred, is either 0 or 1. Putting this in eq. 3 we get

H(Y/X) = 0

Thus from eq. 1 we get

I(X; Y) = H(Y)

Also C = max H(Y).
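A numerical illustration of this result (our sketch), using the deterministic matrix from the previous slide and an assumed equiprobable input distribution:

import math

P = [[0, 1, 0, 0],
     [0, 0, 1, 0],
     [0, 0, 0, 1]]          # every row is 0s and a single 1, so H(Y|X) = 0
px = [1 / 3, 1 / 3, 1 / 3]  # assumed input distribution

py = [sum(px[j] * P[j][k] for j in range(len(px))) for k in range(len(P[0]))]
H_y = sum(p * math.log2(1 / p) for p in py if p > 0)
print(H_y)  # log2(3) ~ 1.585 bits; here this equals I(X;Y)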
CHANNEL CAPACITY OF A CONTINUOUS CHANNEL
With noise spectral density N_0, the total noise in bandwidth B is the spectral density multiplied by the bandwidth, i.e. N = B N_0. Thus, with average signal power S, we can write the capacity of the band-limited continuous channel as

C = B \log_2\left(1 + \frac{S}{B N_0}\right) bits per second
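A small sketch of this Shannon-Hartley computation (ours; the numbers are illustrative assumptions, not from the notes):

import math

B = 3000.0    # channel bandwidth in Hz (assumed)
S = 1e-6      # average signal power in W (assumed)
N0 = 1e-9     # noise power spectral density in W/Hz (assumed)

N = B * N0                    # total noise power in bandwidth B
C = B * math.log2(1 + S / N)  # capacity in bits per second
print(C)                      # ~1245 bps for these assumed values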