Objectives:
  Determine the probability of error for digital systems
  Understand information theory and its significance in determining system performance

Reference books:
  B.P. Lathi, Modern Digital and Analog Communication Systems
  S. Haykin, Communication Systems, Wiley, 2001
  L.W. Couch II, Digital and Analog Communication Systems

Course material:
  http://www.ee.ic.ac.uk/dward/
Lecture 1
1. What is the course about, and how does it fit together
2. Some definitions (signals, power, bandwidth, phasors)

Definitions
Signal: a single-valued function of time that conveys information
Deterministic signal: completely specified function of time
Random signal: cannot be completely specified as a function of time
Definitions
Analog signal: continuous function of time with continuous amplitude
Discrete-time signal: only defined at discrete points in time, amplitude continuous
Digital signal: discrete in both time and amplitude (e.g., PCM signals, see Chapter 4)

Definitions
Instantaneous power:
$$p = \frac{v^2(t)}{R} = i^2(t)\,R = g^2(t)$$
Average power:
$$P = \lim_{T\to\infty} \frac{1}{T} \int_{-T/2}^{T/2} g^2(t)\,dt$$
For a periodic signal with period $T_o$:
$$P = \frac{1}{T_o} \int_{-T_o/2}^{T_o/2} g^2(t)\,dt$$
Bandwidth
Bandwidth: extent of the significant spectral content of a signal for positive frequencies
[Figure: magnitude-square spectrum, indicating the 3 dB bandwidth and the null-to-null bandwidth]
Phasors
General sinusoid:
$$x(t) = A\cos(2\pi f t + \theta)$$
Phasor representation:
$$x(t) = \Re\{A e^{j\theta} e^{j2\pi f t}\}$$

Phasors
Alternative representation:
$$x(t) = \frac{A}{2} e^{j\theta} e^{j2\pi f t} + \frac{A}{2} e^{-j\theta} e^{-j2\pi f t}$$
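As a quick numerical check, here is a minimal Python/NumPy sketch (the amplitude, frequency and phase are arbitrary illustrative values) confirming that both phasor representations reproduce the real sinusoid:

```python
import numpy as np

# Arbitrary illustrative parameters (not from the notes)
A, f, theta = 2.0, 5.0, np.pi / 3
t = np.linspace(0, 1, 1000)

x = A * np.cos(2 * np.pi * f * t + theta)

# Rotating-phasor form: real part of A e^{j theta} e^{j 2 pi f t}
x_phasor = np.real(A * np.exp(1j * theta) * np.exp(1j * 2 * np.pi * f * t))

# Conjugate form: positive- and negative-frequency phasors, A/2 each
x_conj = (A / 2) * np.exp(1j * theta) * np.exp(1j * 2 * np.pi * f * t) \
       + (A / 2) * np.exp(-1j * theta) * np.exp(-1j * 2 * np.pi * f * t)

assert np.allclose(x, x_phasor)
assert np.allclose(x, x_conj.real) and np.allclose(x_conj.imag, 0)
```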
Phasors
$$x(t) = \frac{A}{2} e^{j\theta} e^{j2\pi f t} + \frac{A}{2} e^{-j\theta} e^{-j2\pi f t}$$
Anti-clockwise rotation (positive frequency): $\exp(j2\pi f t)$
Clockwise rotation (negative frequency): $\exp(-j2\pi f t)$

Summary
1. The fundamental question: How do communications systems perform in the presence of noise?
2. Some definitions:
   Signals
   Average power:
   $$P = \lim_{T\to\infty} \frac{1}{T} \int_{-T/2}^{T/2} x^2(t)\,dt$$
   Phasors - complex conjugate representation (negative frequency):
   $$x(t) = \frac{A}{2} e^{j\theta} e^{j2\pi f t} + \frac{A}{2} e^{-j\theta} e^{-j2\pi f t}$$
Lecture 2

Sources of noise
1. External noise
   synthetic (e.g. other users)
   atmospheric (e.g. lightning)
   galactic (e.g. cosmic radiation)
2. Internal noise
   shot noise
   thermal noise

Average power of thermal noise: $P = kTB$

Example
An amplifier has bandwidth $B = 20$ MHz. Assume the average thermal noise at its output is $P_o = 2.2\times10^{-11}$ W.
1. What is the amplifier's effective noise temperature?
2. What is the noise output if two of these amplifiers are cascaded?
3. How many stages can be used if the noise output must be less than 20 mW?
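As a starting point for the example, a minimal Python sketch of $P = kTB$ (the Boltzmann constant is standard; the 290 K reference temperature and the unity-gain referencing of $P_o$ are assumptions, since the example does not state them):

```python
k = 1.380649e-23   # Boltzmann constant, J/K
B = 20e6           # bandwidth from the example, Hz
Po = 2.2e-11       # measured output noise power, W

# Thermal noise power in this bandwidth at the common 290 K reference:
print(k * 290.0 * B)     # ~ 8.0e-14 W

# Temperature at which kTB would equal Po (treats the gain as unity,
# which is an assumption; the example does not state the gain):
print(Po / (k * B))      # ~ 8.0e4 K
```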
Gaussian noise
Gaussian noise: amplitude of noise signal has a Gaussian probability density function (p.d.f.)
Central limit theorem: sum of n independent random variables approaches a Gaussian distribution as n → ∞

Noise model
Model for the effect of noise is the additive Gaussian noise channel

Statistical tools
Each possible outcome is a sample point in the sample space.
[Figure: the data sequence 1 0 1 1 0, shown as clean data and as data plus noise]
Probability density function
Probability that the r.v. x is within a certain range is:
$$P(x_1 < x < x_2) = \int_{x_1}^{x_2} p_x(x)\,dx$$
Example, Gaussian pdf:
$$p_x(x) = \frac{1}{\sigma\sqrt{2\pi}}\, e^{-(x-m)^2/2\sigma^2}$$

Statistical averages
Expectation of a r.v. is:
$$E\{x\} = \int_{-\infty}^{\infty} x\, p_x(x)\,dx$$
where $E\{\cdot\}$ is the expectation operator.
In general, if y = g(x), then
$$E\{y\} = E\{g(x)\} = \int_{-\infty}^{\infty} g(x)\, p_x(x)\,dx$$
For example, the mean square amplitude of a signal is the mean of the square of the amplitude, i.e., $E\{x^2\}$.
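A minimal NumPy sketch (the mean and variance are arbitrary illustrative choices) checking the expectation rule by comparing a sample average of x² against the known Gaussian result $E\{x^2\} = m^2 + \sigma^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
m, sigma = 1.5, 2.0                        # arbitrary illustrative parameters
x = rng.normal(m, sigma, size=1_000_000)   # samples drawn from the Gaussian pdf

print(np.mean(x**2))       # sample estimate of E{x^2}
print(m**2 + sigma**2)     # exact Gaussian result; both ~ 6.25
```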
Time average:
$$\overline{n(t)} = \lim_{T\to\infty} \frac{1}{T} \int_{-T/2}^{T/2} n(t)\,dt$$
Ensemble average:
$$E\{n\} = \int_{-\infty}^{\infty} n\, p(n)\,dn$$
Average power:
$$P = E\{n^2(t)\}$$
Example
Consider the signal:
$$x(t) = A\, e^{j(\omega_c t + \theta)}$$
where θ is a random variable, uniformly distributed over 0 to 2π.

Autocorrelation
How can one represent the spectrum of a random process?
$$R_x(\tau) = E\{x(t)\,x(t+\tau)\}$$
Average power:
$$P = E\{x^2(t)\} = R_x(0)$$
Filtered speech
[Figure: original speech and the output of a filter H(f) with smaller bandwidth; waveforms shown for the original, LPF 4 kHz, and LPF 1 kHz]

Wiener-Khinchine theorem:
$$S_x(f) = \int_{-\infty}^{\infty} R_x(\tau)\, e^{-j2\pi f\tau}\,d\tau = \mathrm{FT}\{R_x(\tau)\}$$
Hence,
$$R_x(\tau) = \int_{-\infty}^{\infty} S_x(f)\, e^{j2\pi f\tau}\,df$$
Average power:
$$P = R_x(0) = \int_{-\infty}^{\infty} S_x(f)\,df$$
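A minimal NumPy sketch of the Wiener-Khinchine relationship (white Gaussian samples are used purely as an illustrative process): the PSD obtained as the Fourier transform of the estimated autocorrelation averages back to the signal power $R_x(0)$:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 4096
x = rng.normal(0, 1, N)                    # illustrative random process

# Biased autocorrelation estimate R_x[k] for lags -N+1 .. N-1
R = np.correlate(x, x, mode="full") / N

# Wiener-Khinchine: PSD is the Fourier transform of the autocorrelation
# (ifftshift puts the zero-lag sample at index 0 before the FFT)
S = np.abs(np.fft.fft(np.fft.ifftshift(R)))

P_time = R[N - 1]    # R_x(0), average power from the autocorrelation
P_freq = S.mean()    # discrete analogue of the integral of S_x(f)
print(P_time, P_freq)   # both ~ 1 for this unit-variance process
```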
Power spectral density
Thermal noise:
$$P = kTB = \int_{-B}^{B} \frac{kT}{2}\,df$$
White noise:
$$S(f) = \frac{N_o}{2}$$
PSD is the same for all frequencies.

Summary
Autocorrelation:
$$R_x(\tau) = E\{x(t)\,x(t+\tau)\}$$
Power spectral density:
$$S_x(f) = \int_{-\infty}^{\infty} R_x(\tau)\, e^{-j2\pi f\tau}\,d\tau$$
White noise:
$$S(f) = \frac{N_o}{2}$$
Expectation operator (averaging g(x) over the pdf of the random variable x):
$$E\{g(x)\} = \int_{-\infty}^{\infty} g(x)\, p_x(x)\,dx$$
Lecture 3

Analog communication system
The receiver's bandpass filter has a bandwidth matched to the transmission bandwidth. The PSD of n(t) is centred about $f_c$ (and $-f_c$).
Bandpass noise
[Figure: PSD of n(t), occupying a band of width 2W about ±f_c]
In the slice shown (for Δf small):
$$n_k(t) = a_k \cos(\omega_k t + \theta_k)$$
Let $\omega_k = (\omega_k - \omega_c) + \omega_c$:
$$n_k(t) = a_k \cos[(\omega_k - \omega_c)t + \theta_k + \omega_c t]$$
Using $\cos(A+B) = \cos A\cos B - \sin A\sin B$:
$$n_k(t) = \underbrace{a_k \cos[(\omega_k - \omega_c)t + \theta_k]}_{n_c(t)\ \mathrm{term}}\cos(\omega_c t) - \underbrace{a_k \sin[(\omega_k - \omega_c)t + \theta_k]}_{n_s(t)\ \mathrm{term}}\sin(\omega_c t)$$
[Figure: example waveforms of n(t), n_c(t) and n_s(t)]
Example - histogram
[Figure: histograms of n(t), n_c(t) and n_s(t) for W = 1000 Hz]

Probability density functions
$$n(t) = \sum_k a_k \cos(\omega_k t + \theta_k)$$
$$n_c(t) = \sum_k a_k \cos[(\omega_k - \omega_c)t + \theta_k]$$
$$n_s(t) = \sum_k a_k \sin[(\omega_k - \omega_c)t + \theta_k]$$
By the central limit theorem, each is approximately Gaussian.
Mean of each waveform is 0.
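A minimal NumPy simulation of the sum-of-sinusoids model (the number of components, frequency band and amplitudes are illustrative assumptions), showing that n(t) built from many random-phase components has zero mean, the predicted power, and Gaussian-like statistics:

```python
import numpy as np

rng = np.random.default_rng(2)
K = 500                                         # number of spectral slices (illustrative)
t = np.linspace(0, 1, 10_000)
a = np.full(K, np.sqrt(2.0 / K))                # equal amplitudes, total power 1
omega = 2 * np.pi * rng.uniform(900, 1100, K)   # frequencies in a band (illustrative)
theta = rng.uniform(0, 2 * np.pi, K)            # independent uniform phases

# n(t) = sum_k a_k cos(omega_k t + theta_k)
n = np.sum(a[:, None] * np.cos(omega[:, None] * t + theta[:, None]), axis=0)

print(n.mean())                   # ~ 0
print(n.var())                    # ~ sum_k a_k^2 / 2 = 1
print(np.mean(np.abs(n) < 1.0))   # ~ 0.68, as for a unit-variance Gaussian
```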
Average power of bandpass noise
$$n(t) = \sum_k a_k \cos(\omega_k t + \theta_k)$$
Power in $a_k\cos(\omega t + \theta)$ is $E\{a_k^2\}/2$ (see Example 1.1, or study group sheet 2, Q1).
Average power in n(t) is:
$$P_n = \sum_k \frac{E\{a_k^2\}}{2}$$
Find this using the power spectral density (PSD). From Lecture 2:
$$P = \int_{-\infty}^{\infty} S(f)\,df = 2\int_{f_c-W}^{f_c+W} \frac{N_o}{2}\,df = 2N_oW$$
(the factor of 2 accounts for the positive and negative frequency bands)
$$n_c(t) = \sum_k a_k \cos((\omega_k - \omega_c)t + \theta_k), \qquad n_s(t) = \sum_k a_k \sin((\omega_k - \omega_c)t + \theta_k)$$
Average power in $n_c(t)$ and $n_s(t)$ is:
$$P_{n_c} = \sum_k \frac{E\{a_k^2\}}{2}, \qquad P_{n_s} = \sum_k \frac{E\{a_k^2\}}{2}$$
n(t), $n_c(t)$ and $n_s(t)$ all have the same average power: $2N_oW$.
Power spectral densities
[Figure: PSD of n(t), a band of width 2W centred on ±f_c, and the corresponding baseband PSD of n_c(t), from -W to W]

Example - power spectral densities
[Figure: estimated PSDs of n(t), n_c(t) and n_s(t) for W = 1000 Hz]

With the complex envelope $g(t) = n_c(t) + j\,n_s(t)$,
$$n(t) = \Re\{g(t)\, e^{j\omega_c t}\}$$
Analog communication system
Performance measure: SNR at the receiver output.
Recall the bandpass noise representation:
$$n(t) = n_c(t)\cos(\omega_c t) - n_s(t)\sin(\omega_c t)$$

Baseband system
Transmitted (message) power is $P_T$.
Average noise power:
$$\int_{-W}^{W} \frac{N_o}{2}\,df = N_oW$$
SNR at receiver output (baseband SNR):
$$\mathrm{SNR_{base}} = \frac{P_T}{N_oW}$$
Lecture 4
Noise (AWGN) in AM systems:
  DSB-SC

Analog communication system
Performance measure:
$$\mathrm{SNR_o} = \frac{\text{average message power at receiver output}}{\text{average noise power at receiver output}}$$
Modulation index:
$$\mu = \frac{m_p}{A_c}$$
DSB-SC
$$s(t)_{\mathrm{DSB\text{-}SC}} = A_c\, m(t)\cos(\omega_c t)$$
where $P = \overline{m^2(t)}$ is the power of the message.

Synchronous detection
$$y(t)_{\mathrm{AM}} = [A_c + m(t)]\cos(\omega_c t) \times 2\cos(\omega_c t) = [A_c + m(t)](1 + \cos 2\omega_c t)$$

Predetection signal:
$$x(t) = A_c\, m(t)\cos(\omega_c t) + n_c(t)\cos(\omega_c t) - n_s(t)\sin(\omega_c t)$$
Receiver output (after LPF):
$$y(t) = A_c\, m(t) + n_c(t)$$
Output signal power: $P_s = \overline{(A_c\, m(t))^2} = A_c^2 P$
Output SNR:
$$\mathrm{SNR_{DSB\text{-}SC}} = \frac{A_c^2 P}{2N_oW}$$
[Figure: PSD of the baseband noise n_c(t)]
SNR of DSB-SC
Transmitted power:
$$P_T = \overline{(A_c\, m(t)\cos\omega_c t)^2} = \frac{A_c^2 P}{2}$$
Output SNR:
$$\mathrm{SNR_{DSB\text{-}SC}} = \frac{P_T}{N_oW} = \mathrm{SNR_{base}}$$
DSB-SC has no performance advantage over baseband.

Noise in AM (synch. detector)
Predetection signal:
$$x(t) = \underbrace{[A_c + m(t)]\cos\omega_c t}_{\text{Transmitted signal}} + \underbrace{n_c(t)\cos\omega_c t - n_s(t)\sin\omega_c t}_{\text{Bandlimited noise}}$$
Receiver output:
$$y(t) = A_c + m(t) + n_c(t)$$
Output signal power: $P_S = \overline{m^2(t)} = P$
Output noise power: $P_N = 2N_oW$
Output SNR:
$$\mathrm{SNR_{AM}} = \frac{P}{2N_oW}$$
$$x(t) = [A_c + m(t)]\cos\omega_c t + n_c(t)\cos\omega_c t - n_s(t)\sin\omega_c t$$
Transmitted power:
$$P_T = \frac{A_c^2 + P}{2}$$
$$\mathrm{SNR_{AM}} = \frac{P}{A_c^2 + P}\cdot\frac{P_T}{N_oW} = \frac{P}{A_c^2 + P}\,\mathrm{SNR_{base}}$$
Noise in AM, envelope detector
Receiver output:
$$y(t) = \text{envelope of } x(t) = \sqrt{[A_c + m(t) + n_c(t)]^2 + n_s^2(t)}$$
Small noise case:
$$y(t) \approx A_c + m(t) + n_c(t)$$
= output of synchronous detector
Large noise case:
$$y(t) \approx E_n(t) + [A_c + m(t)]\cos\theta_n(t)$$
Not really a problem in practice.
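A minimal NumPy sketch of the small-noise case (the message, carrier amplitude and noise level are illustrative choices), showing that the exact envelope stays close to the synchronous-detector output $A_c + m(t) + n_c(t)$:

```python
import numpy as np

rng = np.random.default_rng(3)
Ac = 1.0
t = np.linspace(0, 1, 10_000)
m = 0.3 * np.sin(2 * np.pi * 5 * t)     # illustrative message
sigma = 0.02                            # small noise relative to Ac
nc = rng.normal(0, sigma, t.size)
ns = rng.normal(0, sigma, t.size)

envelope = np.sqrt((Ac + m + nc) ** 2 + ns ** 2)   # exact envelope of x(t)
approx = Ac + m + nc                               # small-noise approximation

print(np.max(np.abs(envelope - approx)))   # tiny compared with Ac
```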
Example
An unmodulated carrier (of amplitude $A_c$ and frequency $f_c$) and bandlimited white noise are summed and then passed through an ideal envelope detector. Assume the noise spectral density to be of height $N_o/2$ and bandwidth 2W, centred about the carrier frequency.
1. Calculate the carrier-to-noise ratio at the output of the envelope detector, and compare it with the carrier-to-noise ratio at the detector input.

Summary
Synchronous detector:
$$\mathrm{SNR_{AM}} = \frac{P}{A_c^2 + P}\,\mathrm{SNR_{base}}$$
Envelope detector:
  threshold effect
Lecture 5
Noise in FM systems
  pre-emphasis and de-emphasis (see section 3.4)
Comparison of analog systems (see section 3.5)

Frequency modulation
FM waveform:
$$s(t)_{\mathrm{FM}} = A_c\cos\left(2\pi f_c t + 2\pi k_f \int_{-\infty}^{t} m(\tau)\,d\tau\right) = A_c\cos\theta(t)$$
θ(t) is the instantaneous phase.
Instantaneous frequency:
$$f_i = \frac{1}{2\pi}\frac{d\theta(t)}{dt} = f_c + k_f\, m(t)$$
Bandwidth considerations
Δf: maximum deviation of the instantaneous frequency away from the carrier frequency
Carson's rule:
$$B_T \approx 2W(\beta + 1) = 2(\Delta f + W)$$

FM receiver
AM: the signal amplitude carries the message; noise adds directly to the message.
FM: the zero crossings of the modulated signal carry the message; noise affects the receiver output differently at baseband than in AM.

Predetection signal:
$$x(t) = \underbrace{A_c\cos(2\pi f_c t + \phi(t))}_{\text{Transmitted signal}} + \underbrace{n_c(t)\cos(2\pi f_c t) - n_s(t)\sin(2\pi f_c t)}_{\text{Bandlimited noise}}$$
where $\phi(t) = 2\pi k_f \int_{-\infty}^{t} m(\tau)\,d\tau$.
Assumptions
1. Noise does not affect the signal power at the output.
Instantaneous frequency:
$$f_i(t) = \frac{1}{2\pi}\frac{d\phi(t)}{dt} = k_f\, m(t)$$
Output signal power:
$$P_S = k_f^2 P$$
where P is the power of the message signal.

Assumptions
2. Signal does not affect the noise at the output.
Instantaneous frequency:
$$f_i(t) = \frac{1}{2\pi}\frac{d\theta(t)}{dt} = \frac{1}{2\pi}\frac{d}{dt}\tan^{-1}\frac{n_s(t)}{A_c + n_c(t)} \approx \frac{1}{2\pi}\frac{d}{dt}\frac{n_s(t)}{A_c}$$
We know the PSD of $n_s(t)$, but what about its derivative?
For a linear system with transfer function H(f):
$$Y(f) = H(f)X(f), \qquad S_Y(f) = |H(f)|^2\, S_X(f)$$
Differentiation property of the Fourier transform:
$$\frac{dx(t)}{dt} \Leftrightarrow j2\pi f\, X(f)$$
Here:
$$f_i(t) \approx \frac{1}{2\pi A_c}\frac{dn_s(t)}{dt} \quad\Rightarrow\quad F_i(f) = \frac{1}{2\pi A_c}\, j2\pi f\, N_s(f) = \frac{jf}{A_c}\, N_s(f)$$
so
$$S_{F_i}(f) = \frac{|f|^2}{A_c^2}\, S_N(f)$$
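A minimal NumPy sketch of this result (the sample rate and the use of white samples are illustrative assumptions): differentiating a noise record scales its PSD by $|2\pi f|^2$, up to the attenuation of the finite-difference approximation at higher frequencies:

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 1000.0                         # illustrative sample rate, Hz
N = 2**16
ns = rng.normal(0, 1, N)            # stand-in for the noise record n_s(t)
dns = np.gradient(ns, 1 / fs)       # numerical time derivative

f = np.fft.rfftfreq(N, 1 / fs)
S_ns = np.abs(np.fft.rfft(ns)) ** 2 / N     # PSD estimate of n_s
S_dns = np.abs(np.fft.rfft(dns)) ** 2 / N   # PSD estimate of dn_s/dt

# The per-bin ratio should follow |2 pi f|^2
mask = (f > 10) & (f < 100)
print(np.median(S_dns[mask] / (S_ns[mask] * (2 * np.pi * f[mask]) ** 2)))
# ~ 1 (slightly below, as finite differences attenuate higher frequencies)
```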
PSD of the noise term at the discriminator output:
$$\text{PSD of } \frac{1}{2\pi A_c}\frac{dn_s(t)}{dt} = \frac{|f|^2}{A_c^2}\, N_o, \qquad |f| < W$$
Noise power after the LPF:
$$P_N = \int_{-W}^{W} \frac{|f|^2}{A_c^2}\, N_o\,df = \frac{2N_o}{A_c^2}\left[\frac{f^3}{3}\right]_0^W = \frac{2N_oW^3}{3A_c^2}$$
[Figure: PSD of n_s(t); PSD of (1/2πA_c) dn_s(t)/dt; PSD after the LPF]
Increasing the carrier power has a noise quieting effect.
Output SNR:
$$\mathrm{SNR_o} = \frac{3A_c^2 k_f^2 P}{2N_oW^3}$$
Predetection signal is:
$$x(t) = A_c\cos(2\pi f_c t + \phi(t)) + n_c(t)\cos(2\pi f_c t) - n_s(t)\sin(2\pi f_c t)$$
Transmitted power:
$$P_T = \overline{(A_c\cos[\omega_c t + \phi(t)])^2} = \frac{A_c^2}{2}$$
SNR at output:
$$\mathrm{SNR_{FM}} = \frac{3k_f^2 P}{W^2}\,\mathrm{SNR_{base}} = 3\beta^2\frac{P}{m_p^2}\,\mathrm{SNR_{base}}$$
Predetection SNR:
$$\mathrm{SNR_{pre}} = \frac{A_c^2}{2N_oB_T}$$
Threshold point is:
$$\frac{A_c^2}{4N_oW(\beta+1)} > 10$$
Cannot arbitrarily increase $\mathrm{SNR_{FM}}$ by increasing β.
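A minimal Python sketch of the trade-off (all values are illustrative, and a sinusoidal message with $P/m_p^2 = 1/2$ is assumed): larger β raises SNR_FM, but the threshold condition eventually fails:

```python
import math

# Illustrative values (not from the notes)
W = 15e3            # message bandwidth, Hz
No = 2e-7           # noise PSD parameter, W/Hz
Ac = 1.0            # carrier amplitude
P_over_mp2 = 0.5    # P / m_p^2 for a sinusoidal message

snr_base = (Ac**2 / 2) / (No * W)   # baseband SNR = P_T / (No W)

for beta in (1, 2, 5, 10):
    snr_fm = 3 * beta**2 * P_over_mp2 * snr_base        # SNR_FM
    above_threshold = Ac**2 / (4 * No * W * (beta + 1)) > 10
    print(beta, round(10 * math.log10(snr_fm), 1), above_threshold)
# beta = 10 fails the threshold test: beta cannot grow without more power
```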
Pre-emphasis and De-emphasis
The improvement in output SNR afforded by using pre-emphasis and de-emphasis in FM is defined by:
$$I = \frac{\text{SNR with pre-/de-emphasis}}{\text{SNR without pre-/de-emphasis}} = \frac{\text{average output noise power without pre-/de-emphasis}}{\text{average output noise power with pre-/de-emphasis}}$$

Example
Find an expression for the improvement, I.
Parameters:
  Message bandwidth: W = f_m
  AM system: µ = 1
  FM system: β = 5
Performance:
Bandwidth:
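A minimal Python sketch of the comparison (assuming a single-tone message, for which $P/m_p^2 = 1/2$ and the AM message power is $P = \mu^2 A_c^2/2$):

```python
# Single-tone message assumed: P / m_p^2 = 1/2
mu, beta = 1.0, 5.0

# AM: SNR_AM = P / (Ac^2 + P) * SNR_base, with P = mu^2 Ac^2 / 2
snr_am_rel = (mu**2 / 2) / (1 + mu**2 / 2)    # relative to SNR_base -> 1/3

# FM: SNR_FM = 3 beta^2 (P / m_p^2) * SNR_base
snr_fm_rel = 3 * beta**2 * 0.5                # -> 37.5

# Carson's rule bandwidths, in units of the message bandwidth W = f_m
bt_am = 2.0
bt_fm = 2 * (beta + 1)                        # -> 12

print(f"AM : SNR = {snr_am_rel:.3f} x SNR_base, BT = {bt_am:g} W")
print(f"FM : SNR = {snr_fm_rel:.3f} x SNR_base, BT = {bt_fm:g} W")
```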
Summary
Noise in FM:
Increasing carrier power reduces noise at receiver output
Has threshold effect
Pre-emphasis and de-emphasis
Lecture 6
Digital communication systems:
  Digital vs Analog communications
  Pulse Code Modulation

Digital vs Analog
Analog message:
  Performance criterion is SNR at receiver output
Digital message:
  Each symbol must be accurately sent
  Performance criterion is probability of receiver making a decision error
Advantages of digital:
1. Digital signals are more immune to noise
2. Repeaters can re-transmit a noise-free signal

Nyquist Sampling Theorem:
A signal whose bandwidth is limited to W Hz can be reconstructed exactly from its samples taken uniformly at a rate of R > 2W Hz.
Maximum information rate
[Figure: channel of bandwidth B Hz]

Pulse-code modulation
Represent an analog waveform in digital form
Quantize each sample to one of L levels.
Assign each quantization level a code.
Sampling vs Quantization
Sampling:
  Non-destructive if $f_s > 2W$
  Can reconstruct analog waveform exactly by using a low-pass filter
Quantization:
  Destructive
  Once signal has been rounded off it can never be reconstructed exactly

Quantization noise
[Figure: sampled signal; quantized signal (step size of 0.1); quantization error]
Step size:
$$\Delta = \frac{2m_p}{L}$$
where $L = 2^n$ is the no. of quantization levels and $m_p$ is the peak allowed signal amplitude.
The round-off effect of the quantizer ensures that |q| < Δ/2, where q is a random variable representing the quantization error.
Assume q is zero mean with uniform pdf, so the mean square error is:
$$E\{q^2\} = \int_{-\infty}^{\infty} q^2\, p(q)\,dq = \int_{-\Delta/2}^{\Delta/2} q^2\,\frac{1}{\Delta}\,dq = \frac{\Delta^2}{12}$$
Noise power is (since zero mean):
$$P_N = E\{q^2\} = \frac{\Delta^2}{12} = \frac{(2m_p/L)^2}{12} = \frac{m_p^2}{3\times2^{2n}}$$
Output SNR of quantizer:
$$\mathrm{SNR_Q} = \frac{P_S}{P_N} = \frac{3P}{m_p^2}\times2^{2n}$$
or in dB:
$$\mathrm{SNR_Q} = 6.02n + 10\log_{10}\frac{3P}{m_p^2}\ \mathrm{dB}$$
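A minimal NumPy check of the quantizer SNR (a uniformly distributed test signal is assumed, chosen because then $P = m_p^2/3$ and the theoretical $\mathrm{SNR_Q}$ reduces to 6.02n dB):

```python
import numpy as np

rng = np.random.default_rng(5)
mp = 1.0                                 # peak signal amplitude
x = rng.uniform(-mp, mp, 1_000_000)      # test signal; then P = mp^2 / 3

for n in (4, 8, 12):
    delta = 2 * mp / 2**n                         # step size, 2 m_p / L
    xq = delta * (np.floor(x / delta) + 0.5)      # uniform mid-rise quantizer
    q = x - xq                                    # quantization error
    snr_sim = 10 * np.log10(np.mean(x**2) / np.mean(q**2))
    snr_theory = 6.02 * n + 10 * np.log10(3 * np.mean(x**2) / mp**2)
    print(n, round(snr_sim, 2), round(snr_theory, 2))   # simulated ~ theory
```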
Bandwidth of PCM
Each message sample requires n bits.
If the message has bandwidth W Hz, then PCM contains 2nW bits per second.
Bandwidth required is: $B_T = nW$

Nonuniform quantization
For audio signals (e.g. speech), small signal amplitudes occur more often than large signal amplitudes.
Better to have closely spaced quantization levels at low signal amplitudes, widely spaced levels at large signal amplitudes.
Quantizer has better resolution at low amplitudes (where the signal spends more time).
Compress signal first, then use uniform quantizer, then expand signal
(i.e., compand)
Summary
Pulse-code modulation:
  Sampling (non-destructive if $f_s > 2W$)
  Quantization (destructive), with output SNR: $\mathrm{SNR_Q} = 6.02n + 10\log_{10}(3P/m_p^2)$ dB
Lecture 7
Performance of digital systems in noise:
  Baseband
  ASK
  PSK, FSK
  Compare all schemes

Digital receiver
[Figure: signal s(t) versus time t]
$$p(n) = \frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{(n-m)^2}{2\sigma^2}\right) = \mathcal{N}(m, \sigma^2)$$
Normal distribution: mean m, variance σ².
$$\mathrm{prob}(a < n < b) = \int_a^b p(n)\,dn$$
Gaussian noise, spectrum

Baseband system - "0" transmitted
Two kinds of error:
1. Symbol "0" transmitted, receiver decides "1"
2. Symbol "1" transmitted, receiver decides "0"
$$P_e = p_0 P_{e0} + p_1 P_{e1}$$
[Figure: transmitted signal s_1(t) and noise signal n(t); decision threshold at A/2]
Error if $y_1(t) < A/2$:
$$P_{e1} = \int_{-\infty}^{A/2} \mathcal{N}(A, \sigma^2)\,dn$$
Baseband system - errors
For equally-probable symbols:
$$P_e = \frac{1}{2}P_{e0} + \frac{1}{2}P_{e1}$$
[Figure: the two conditional pdfs, with shaded tails P_{e0} and P_{e1}]
By symmetry $P_{e1} = P_{e0}$, hence $P_e = \frac{1}{2}P_{e0} + \frac{1}{2}P_{e0} = P_{e0}$:
$$P_e = \int_{A/2}^{\infty} \mathcal{N}(0, \sigma^2)\,dn$$

How to calculate Pe?
$$P_e = \int_{A/2}^{\infty} \frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{n^2}{2\sigma^2}\right)dn$$
1. Complementary error function (erfc in Matlab):
$$P_e = \tfrac{1}{2}\,\mathrm{erfc}\left(\frac{A}{2\sqrt{2}\,\sigma}\right)$$
2. Q-function (tail function):
$$Q(u) = \frac{1}{\sqrt{2\pi}}\int_u^{\infty}\exp\left(-\frac{n^2}{2}\right)dn, \qquad P_e = Q\left(\frac{A}{2\sigma}\right)$$
Example
A digital waveform uses a level of 0 volts to represent a "0", and a level of 0.22 volts to represent a "1". The digital waveform has a bandwidth of 15 kHz.
If this digital waveform is to be transmitted over a baseband channel having additive noise with flat power spectral density of $N_o/2 = 3\times10^{-8}$ W/Hz, what is the probability of error at the receiver output?
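A minimal Python/SciPy sketch of this example (it assumes the detector noise power is $\sigma^2 = N_oW$ with W = 15 kHz, i.e. a receiver filter matched to the signal bandwidth, as in the baseband analysis above):

```python
import math
from scipy.special import erfc

A = 0.22                    # level for a "1", volts
W = 15e3                    # signal bandwidth, Hz
No = 2 * 3e-8               # No/2 = 3e-8 W/Hz

sigma = math.sqrt(No * W)   # noise std dev at the detector (sigma^2 = No W)
Pe = 0.5 * erfc(A / (2 * math.sqrt(2) * sigma))
print(Pe)                   # ~ 1.2e-4
```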
Amplitude-shift keying
$$s_0(t) = 0, \qquad s_1(t) = A\cos(\omega_c t)$$

Synchronous detector
ASK - "0" transmitted:
$$x_0(t) = 0 + \underbrace{n_c(t)\cos(\omega_c t) - n_s(t)\sin(\omega_c t)}_{\text{Bandpass noise}}$$
$$r_0(t) = x_0(t)\times2\cos(\omega_c t) = n_c(t)\,2\cos^2(\omega_c t) - n_s(t)\,2\sin(\omega_c t)\cos(\omega_c t) = n_c(t)[1 + \cos(2\omega_c t)] - n_s(t)\sin(2\omega_c t)$$
After the LPF: $y_0(t) = n_c(t)$

ASK - "1" transmitted:
$$x_1(t) = \underbrace{A\cos(\omega_c t)}_{\text{ASK \lq\lq 1''}} + \underbrace{n_c(t)\cos(\omega_c t) - n_s(t)\sin(\omega_c t)}_{\text{Bandpass noise}}$$
$$r_1(t) = x_1(t)\times2\cos(\omega_c t) = [A + n_c(t)][1 + \cos(2\omega_c t)] - n_s(t)\sin(2\omega_c t)$$
After the LPF: $y_1(t) = A + n_c(t)$
PDFs at receiver output
ASK - "0" transmitted: PDF of $y_0(t) = n_c(t)$
ASK - "1" transmitted: PDF of $y_1(t) = A + n_c(t)$
Same as baseband!
$$P_{e,\mathrm{ASK}} = \tfrac{1}{2}\,\mathrm{erfc}\left(\frac{A}{2\sqrt{2}\,\sigma}\right)$$

Phase-shift keying
$$s_0(t) = -A\cos(\omega_c t), \qquad s_1(t) = A\cos(\omega_c t)$$
[Figure: example PSK and FSK waveforms]
PSK - "0" transmitted
Predetection signal:
$$x_0(t) = \underbrace{-A\cos(\omega_c t)}_{\text{PSK \lq\lq 0''}} + \underbrace{n_c(t)\cos(\omega_c t) - n_s(t)\sin(\omega_c t)}_{\text{Bandpass noise}}$$
After multiplier:
$$r_0(t) = x_0(t)\times2\cos(\omega_c t) = [-A + n_c(t)]\,2\cos^2(\omega_c t) - n_s(t)\,2\sin(\omega_c t)\cos(\omega_c t) = [-A + n_c(t)][1 + \cos(2\omega_c t)] - n_s(t)\sin(2\omega_c t)$$
Band-pass filter bandwidth is matched to the modulated signal bandwidth; carrier frequency is $\omega_c$.
Receiver output:
$$y_0(t) = -A + n_c(t)$$
PSK - "1" transmitted
$$x_1(t) = \underbrace{A\cos(\omega_c t)}_{\text{PSK \lq\lq 1''}} + \underbrace{n_c(t)\cos(\omega_c t) - n_s(t)\sin(\omega_c t)}_{\text{Bandpass noise}}$$
After multiplier:
$$r_1(t) = x_1(t)\times2\cos(\omega_c t) = [A + n_c(t)]\,2\cos^2(\omega_c t) - n_s(t)\,2\sin(\omega_c t)\cos(\omega_c t) = [A + n_c(t)][1 + \cos(2\omega_c t)] - n_s(t)\sin(2\omega_c t)$$
Receiver output:
$$y_1(t) = A + n_c(t)$$

PSK - PDFs at receiver output
PDF of $y_0(t) = -A + n_c(t)$; PDF of $y_1(t) = A + n_c(t)$
Set threshold at 0.
PSK
$$P_{e0} = \int_0^{\infty} \mathcal{N}(-A, \sigma^2)\,dn = \int_0^{\infty} \frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{(n+A)^2}{2\sigma^2}\right)dn$$
$$P_{e1} = \int_{-\infty}^{0} \mathcal{N}(A, \sigma^2)\,dn = \int_{-\infty}^{0} \frac{1}{\sigma\sqrt{2\pi}}\exp\left(-\frac{(n-A)^2}{2\sigma^2}\right)dn$$
$$P_{e,\mathrm{PSK}} = \frac{1}{2}\,\mathrm{erfc}\left(\frac{A}{\sqrt{2}\,\sigma}\right)$$
FSK detector
$$s_0(t) = A\cos(\omega_0 t), \qquad s_1(t) = A\cos(\omega_1 t)$$
[Figure: two synchronous detector branches, one at ω_0 and one at ω_1, whose outputs are subtracted]
Receiver output:
$$y_0(t) = -A + \left[n_c^1(t) - n_c^0(t)\right], \qquad y_1(t) = A + \left[n_c^1(t) - n_c^0(t)\right]$$

FSK
Independent noise sources, so the variances add:
$$P_{e,\mathrm{FSK}} = \frac{1}{2}\,\mathrm{erfc}\left(\frac{A}{2\sigma}\right)$$
Comparison
Baseband / ASK:
$$P_e = \int_{A/2}^{\infty} \mathcal{N}(0, \sigma^2)\,dn = \tfrac{1}{2}\,\mathrm{erfc}\left(\frac{A}{2\sqrt{2}\,\sigma}\right)$$
PSK:
$$P_{e,\mathrm{PSK}} = \frac{1}{2}\,\mathrm{erfc}\left(\frac{A}{\sqrt{2}\,\sigma}\right)$$
FSK:
$$P_{e,\mathrm{FSK}} = \frac{1}{2}\,\mathrm{erfc}\left(\frac{A}{2\sigma}\right)$$
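A minimal Python/SciPy sketch tabulating the three error probabilities over a few illustrative values of A/σ, showing PSK best, FSK in between, and ASK/baseband worst:

```python
import math
from scipy.special import erfc

def pe_baseband_ask(r):   # r = A / sigma; threshold at A/2
    return 0.5 * erfc(r / (2 * math.sqrt(2)))

def pe_psk(r):            # antipodal levels +/- A, threshold at 0
    return 0.5 * erfc(r / math.sqrt(2))

def pe_fsk(r):            # two branches; independent noise variances add
    return 0.5 * erfc(r / 2)

for r in (2, 4, 6, 8):
    print(r, pe_baseband_ask(r), pe_fsk(r), pe_psk(r))
# For every A/sigma: PSK < FSK < ASK/baseband error probability
```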
Lecture 8
Information theory
  Why?
  Information
  Entropy
  Source coding (a little)

Why information theory?
"What would be the characteristics of an ideal system, [one that] is not limited by our engineering ingenuity and inventiveness but limited rather only by the fundamental nature of the physical universe" - Taub & Schilling
Information
$$I(s) = \log_2\frac{1}{p} = -\log_2(p)\ \text{bits}$$
where I(s) is the information in symbol s, p is the probability of occurrence of symbol s, and the bit is the conventional unit of information.
1. If p = 1, I(s) = 0 (a symbol that is certain to occur conveys no information)
2. For 0 < p < 1, 0 < I(s) < ∞
3. If p = p₁ × p₂, then I(s) = I(s₁) + I(s₂)
Example
Suppose we have two symbols: s₀ = 0 and s₁ = 1. Each has probability of occurrence p₀ = p₁ = ½.
Each symbol represents:
$$I(s) = -\log_2(\tfrac{1}{2}) = 1\ \text{bit of information}$$
In this example, one symbol = one information bit, but it is not always so!

Sources and symbols
How much information is conveyed by symbols?
Symbols:
  may be binary ("0" and "1")
  can have more than 2 symbols, e.g. letters of the alphabet, etc.
Sequence of symbols is random (otherwise no information is conveyed).
Definition: if successive symbols are statistically independent, the information source is a zero-memory source (or discrete memoryless source).
Entropy
$$H(S) = -\sum_{\text{all } k} p_k \log_2(p_k)$$
where $S = \{s_1, s_2, \ldots, s_K\}$ is the alphabet (the collection of all possible symbols) and $p_k$ is the probability of occurrence of symbol $s_k$.
Entropy: average information per symbol.

Example: binary source
$S = \{s_0, s_1\}$ with probabilities $p_0 = 1 - p_1$; let $s_0 = 0$, $s_1 = 1$.
$$H(S) = -\sum_{\text{all } k} p_k \log_2(p_k) = -(1 - p_1)\log_2(1 - p_1) - p_1\log_2(p_1)$$

Example: three-symbol source with $p_A = 0.7$, $p_B = 0.2$, $p_C = 0.1$:
$$H(S) = -\sum_{\text{all } k} p_k \log_2(p_k) = 1.157\ \text{bits/symbol}$$
let A = 00, B = 01, C = 10
Symbols generated at a rate of 1 symbol/sec → bitstream generated at a rate of 2 bits/sec → system needs to process 2 bits/sec.
The processing load is determined (amongst other things) by how many bits we need to transmit for each symbol:
  Speech waveform: 8000 symbols/sec, 64000 bits/sec → system needs to process 64 kb/s
  Cell phone speech waveform: 8000 symbols/sec, 13000 bits/sec → system needs to process 13 kb/s
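A minimal Python sketch of the entropy computation for the sources above:

```python
import math

def entropy(probs):
    """Average information per symbol, H(S) = -sum_k p_k log2(p_k)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

print(entropy([0.7, 0.2, 0.1]))   # ~ 1.157 bits/symbol
print(entropy([0.5, 0.5]))        # 1 bit/symbol (the binary example)
```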
Source vs channel coding
Source coding: reduce the number of bits needed to represent the symbol stream.

Source coding
Example: $p_A = 0.7$, $p_B = 0.2$, $p_C = 0.1$
let A = 0, B = 10, C = 11
Average codeword length:
$$L = \sum_{\text{all } k} p_k l_k$$
where $p_k$ is the probability of occurrence of symbol $s_k$ and $l_k$ is the number of bits used to represent symbol $s_k$.
A symbol that occurs frequently (i.e., relatively high $p_k$) should have a short code word.
A symbol that occurs rarely should have a long code word.
let A = 0, B = 10, C = 11
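A minimal Python sketch comparing the fixed-length code (2 bits/symbol) with the variable-length code above:

```python
probs = {"A": 0.7, "B": 0.2, "C": 0.1}
fixed = {"A": "00", "B": "01", "C": "10"}   # 2 bits for every symbol
var = {"A": "0", "B": "10", "C": "11"}      # short word for the frequent symbol

def avg_length(code, probs):
    """L = sum_k p_k l_k, average codeword length in bits/symbol."""
    return sum(p * len(code[s]) for s, p in probs.items())

print(avg_length(fixed, probs))   # 2.0 bits/symbol
print(avg_length(var, probs))     # 1.3 bits/symbol, still >= H(S) ~ 1.157
```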
Summary
Information content (of a particular symbol):
$$I(s) = \log_2\frac{1}{p} = -\log_2(p)\ \text{bits}$$
Source coding: use short code words for frequently occurring symbols.
Lecture 9
Source coding theorem
Huffman coding algorithm
(See sections 5.4.2, 5.4.3)

Source coding
All symbols do not need to be encoded with the same number of bits.
Example: with probabilities $p_A = 0.7$, $p_B = 0.2$, $p_C = 0.1$, a sequence of 6 symbols can be encoded in 8 bits.
Unequal probabilities?
Probabilities: $p_1, p_2, \ldots, p_K$
Any random sequence of N symbols (large N) contains:
  s₁: N × p₁ occurrences
  s₂: N × p₂ occurrences
  ...
Particular sequence of N symbols:
$$S_N = \{s_1, s_2, s_1, s_3, s_3, s_2, s_1, \ldots\}$$
Probability of this particular sequence occurring:
$$p(S_N) = p_1\times p_2\times p_1\times p_3\times p_3\times p_2\times p_1\times\cdots = p_1^{Np_1}\times p_2^{Np_2}\times\cdots$$

Unequal probabilities? (cont.)
Number of bits required to represent a sequence of N symbols:
$$l_N = \log_2\frac{1}{p(S_N)} = -\log_2\left(p_1^{Np_1}\,p_2^{Np_2}\cdots\right) = -Np_1\log_2(p_1) - Np_2\log_2(p_2) - \cdots = -N\sum_{\text{all } k} p_k\log_2(p_k) = N\,H(S)$$
Average number of bits for one symbol is:
$$L = \frac{l_N}{N} = H(S)$$
Source Coding Theorem:
For a general alphabet S, the minimum average codeword length is given by the entropy, H(S).
Significance:
For any practical source coding scheme, the average codeword length will always be greater than or equal to the source entropy, i.e.,
$$L \geq H(S)\ \text{bits/symbol}$$
Example
Consider a five-symbol alphabet having the probabilities indicated:
  Symbols: A, B, C, D, E
  Probabilities: $p_A = 0.05$, $p_B = 0.15$, $p_C = 0.4$, $p_D = 0.3$, $p_E = 0.1$

Huffman coding algorithm
Uniquely decodable: only one way to break the bit stream into valid code words.
Instantaneous: know immediately when a code word has ended.
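A minimal Python sketch of the Huffman algorithm (repeatedly merge the two least probable nodes), applied to the five-symbol example above; the particular codeword assignment is not unique, but the average length is:

```python
import heapq
from itertools import count

def huffman(probs):
    """Huffman code: repeatedly merge the two least probable nodes."""
    tiebreak = count()   # avoids comparing dicts when probabilities tie
    heap = [(p, next(tiebreak), {s: ""}) for s, p in probs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p0, _, c0 = heapq.heappop(heap)   # least probable subtree
        p1, _, c1 = heapq.heappop(heap)   # second least probable subtree
        merged = {s: "0" + w for s, w in c0.items()}
        merged.update({s: "1" + w for s, w in c1.items()})
        heapq.heappush(heap, (p0 + p1, next(tiebreak), merged))
    return heap[0][2]

probs = {"A": 0.05, "B": 0.15, "C": 0.4, "D": 0.3, "E": 0.1}
code = huffman(probs)
L = sum(p * len(code[s]) for s, p in probs.items())
print(code)   # e.g. C -> 1 bit, D -> 2 bits, B -> 3, A and E -> 4
print(L)      # 2.05 bits/symbol, >= H(S) ~ 2.01
```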
Summary
Source coding theorem: $L \geq H(S)$ bits/symbol
Huffman coding algorithm
Lecture 10
Reliable transfer of information
How much information can be reliably transferred over a noisy channel? (Channel capacity)

How much information?
Intuition: the information rate R can be increased arbitrarily by increasing the symbol rate r.

Channel Capacity Theorem
Channel capacity
Sometimes (however infrequently) noise must over-ride the signal → bit error.
But the theorem says we can transfer information without error!
The basic limitation due to noise is on the speed of communication, not on reliability.

Channel capacity
The channel capacity is:
$$C = B\log_2\left(1 + \frac{P_S}{P_N}\right)$$
where B is the bandwidth of the channel, $P_S$ is the average signal power at the receiver, and $P_N$ is the average noise power at the receiver.
Example
Consider a baseband system. Noise power is:
$$P_N = \int_{-B}^{B} \frac{N_o}{2}\,df = N_oB$$
Channel capacity:
$$C = B\log_2\left(1 + \frac{P_S}{N_oB}\right)$$

Example
Consider a baseband channel with a bandwidth of B = 4 kHz. Assume a message signal with an average power of $P_s$ = 10 W is transmitted over this channel, which has additive noise with a flat spectral density of height $N_o/2$ with $N_o = 10^{-6}$ W/Hz.
1. Calculate the channel capacity of this channel.
2. If the message signal is amplified by a factor of n before transmission, calculate the channel capacity when (a) n = 2, and (b) n = 10.
3. If the bandwidth of the channel is doubled to 8 kHz, what is the channel capacity now?
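A minimal Python sketch evaluating the example (for part 2, n is taken as an amplitude gain so the power scales as n²; the example does not say which is intended, so this is an assumption):

```python
import math

def capacity(B, Ps, No):
    """Shannon capacity C = B log2(1 + Ps / (No B)), bits/second."""
    return B * math.log2(1 + Ps / (No * B))

B, Ps, No = 4e3, 10.0, 1e-6

print(capacity(B, Ps, No))                 # part 1: ~ 45.2 kb/s
for n in (2, 10):                          # part 2: amplitude gain n,
    print(n, capacity(B, n**2 * Ps, No))   # power scaled by n^2 (assumed)
print(capacity(2 * B, Ps, No))             # part 3: bandwidth doubled
```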
Comments
$$C = B\log_2\left(1 + \frac{P_S}{P_N}\right)$$
[Figure: C versus $P_S$ for B = 4000, $N_o = 10^{-6}$]
Can increase capacity arbitrarily through $P_S$.
More bandwidth allows more symbols per second, but also increases the noise.
[Figure: C versus B for $P_s$ = 10, $N_o = 10^{-6}$]
Can show that:
$$\lim_{B\to\infty} C = 1.44\,\frac{P_S}{N_o}$$
Cannot increase capacity arbitrarily through B.
$$C = B\log_2\left(1 + \frac{P_S}{P_N}\right)$$
The theorem guarantees that coding schemes achieving this capacity exist, but no deterministic method exists to construct them!
Optimum analog system
Assume that channel noise is AWGN, having PSD $N_o/2$.
Average noise power at the demodulator input is $P_N = N_oB$.
Maximum rate that information can arrive at the receiver:
$$C_{in} = B\log_2(1 + \mathrm{SNR}_{in})$$
$$C_{out} = W\log_2(1 + \mathrm{SNR}_{out})$$
$$\mathrm{SNR}_{in} = \frac{P_T}{N_oB} = \frac{W}{B}\cdot\frac{P_T}{N_oW}$$
where $P_T$ is the transmitted power, $P_T/(N_oW)$ is the baseband SNR, and B/W is the bandwidth spreading ratio (transmission bandwidth / message bandwidth).
Ideally, no information is lost: $C_{out} = C_{in}$.
Equating gives the SNR at the receiver output:
$$\mathrm{SNR}_{out} = (1 + \mathrm{SNR}_{in})^{B/W} - 1 = \left(1 + \frac{W}{B}\,\mathrm{SNR_{base}}\right)^{B/W} - 1$$
For any increase in bandwidth, output SNR increases exponentially.
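A minimal Python sketch (SNR_base fixed at an illustrative 20 dB) of how the ideal system trades bandwidth for output SNR:

```python
import math

snr_base = 100.0   # illustrative baseband SNR (20 dB)

for bw_ratio in (1, 2, 4, 8):          # B/W, the bandwidth spreading ratio
    snr_in = snr_base / bw_ratio       # SNR_in = (W/B) SNR_base
    snr_out = (1 + snr_in) ** bw_ratio - 1
    print(bw_ratio, round(10 * math.log10(snr_out), 1))
# Output SNR in dB grows roughly linearly in B/W: exponential growth in SNR
```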
Summary
Channel Capacity Theorem:
If R ≤ C, then there exists a coding scheme such that symbols can be transmitted over a noisy channel with an arbitrarily small probability of error, where
$$C = B\log_2\left(1 + \frac{P_S}{P_N}\right)\ \text{bits/sec}$$
Analog communication systems:
$$\mathrm{SNR}_{out} = \left(1 + \frac{W}{B}\,\mathrm{SNR_{base}}\right)^{B/W} - 1$$