
Information

Lecturer: Dr. Darren Ward, Room 811C
Email: d.ward@imperial.ac.uk, Phone: (759) 46230

Reference books:
- B.P. Lathi, Modern Digital and Analog Communication Systems, Oxford University Press, 1998
- S. Haykin, Communication Systems, Wiley, 2001
- L.W. Couch II, Digital and Analog Communication Systems, Prentice-Hall, 2001

Course material: http://www.ee.ic.ac.uk/dward/

Aims:
The aim of this part of the course is to give you an understanding of how communication systems perform in the presence of noise.

Objectives:
By the end of the course you should be able to:
- Compare the performance of various communication systems
- Describe a suitable model for noise in communications
- Determine the SNR performance of analog communication systems
- Determine the probability of error for digital systems
- Understand information theory and its significance in determining system performance

Lecture 1

1. What is the course about, and how does it fit together
2. Some definitions (signals, power, bandwidth, phasors)

See Chapter 1 of notes

Definitions

- Signal: a single-valued function of time that conveys information
- Deterministic signal: completely specified function of time
- Random signal: cannot be completely specified as a function of time
- Analog signal: continuous function of time with continuous amplitude
- Discrete-time signal: only defined at discrete points in time, amplitude continuous
- Digital signal: discrete in both time and amplitude (e.g., PCM signals, see Chapter 4)

Instantaneous power:
$p = \frac{v^2(t)}{R} = i^2(t)\,R = g^2(t)$

Average power:
$P = \lim_{T\to\infty} \frac{1}{T}\int_{-T/2}^{T/2} g^2(t)\,dt$

For periodic signals, with period $T_o$ (see Problem sheet 1):
$P = \frac{1}{T_o}\int_{-T_o/2}^{T_o/2} g^2(t)\,dt$
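As a quick numerical check of the average-power definition, the short Python sketch below (my own illustration, not part of the notes) approximates $P$ for a sinusoid $A\cos(2\pi f t + \theta)$ by averaging $g^2(t)$ over a long window and compares it with the analytic value $A^2/2$.

```python
import numpy as np

# Time-average power of g(t) = A*cos(2*pi*f*t + theta), approximated over a long window.
A, f, theta = 2.0, 50.0, 0.3       # example parameters (assumed, for illustration only)
fs = 10_000                        # sampling rate, Hz
t = np.arange(0, 10, 1 / fs)       # 10-second observation window
g = A * np.cos(2 * np.pi * f * t + theta)

P_numeric = np.mean(g**2)          # (1/T) * integral of g^2(t) dt, approximated by a mean
P_analytic = A**2 / 2              # known result for a sinusoid

print(f"numerical P = {P_numeric:.4f}, analytic A^2/2 = {P_analytic:.4f}")
```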

Bandwidth

Bandwidth: extent of the significant spectral content of a signal for positive frequencies.

Common definitions:
- 3 dB bandwidth
- null-to-null bandwidth
- noise equivalent bandwidth

[Figure: magnitude-squared spectra. Baseband signal, B/W = B; bandpass signal, B/W = 2B]
Phasors

General sinusoid:
$x(t) = A\cos(2\pi f t + \theta)$

Phasor representation:
$x(t) = \Re\{A e^{j\theta} e^{j2\pi f t}\}$

Alternative (complex conjugate) representation:
$x(t) = \frac{A}{2} e^{j\theta} e^{j2\pi f t} + \frac{A}{2} e^{-j\theta} e^{-j2\pi f t}$

- Anti-clockwise rotation (positive frequency): $e^{j2\pi f t}$
- Clockwise rotation (negative frequency): $e^{-j2\pi f t}$

Summary

1. The fundamental question: how do communication systems perform in the presence of noise?
2. Some definitions:
- Signals
- Average power: $P = \lim_{T\to\infty} \frac{1}{T}\int_{-T/2}^{T/2} x^2(t)\,dt$
- Bandwidth: significant spectral content for positive frequencies
- Phasors, complex conjugate representation (negative frequency):
  $x(t) = \frac{A}{2} e^{j\theta} e^{j2\pi f t} + \frac{A}{2} e^{-j\theta} e^{-j2\pi f t}$
Lecture 2

1. Model for noise
2. Autocorrelation and power spectral density

See Chapter 2 of notes, sections 2.1, 2.2, 2.3

Sources of noise

1. External noise
- synthetic (e.g. other users)
- atmospheric (e.g. lightning)
- galactic (e.g. cosmic radiation)
2. Internal noise
- shot noise
- thermal noise

Average power of thermal noise: $P = kTB$

Effective noise temperature:
$T_e = \frac{P}{kB}$
(temperature of a fictitious thermal noise source at the input that would be required to produce the same noise power at the output)

Example

Consider an amplifier with 20 dB power gain and a bandwidth of B = 20 MHz. Assume the average thermal noise at its output is $P_o = 2.2\times10^{-11}$ W.
1. What is the amplifier's effective noise temperature?
2. What is the noise output if two of these amplifiers are cascaded?
3. How many stages can be used if the noise output must be less than 20 mW?
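A minimal numerical sketch for part 1 of this example (illustrative Python, assuming the output noise is the amplified input-referred noise, i.e. $P_o = G\,k\,T_e\,B$ with power gain $G = 100$ for 20 dB):

```python
# Effective noise temperature from measured output noise power.
# Assumption: P_o = G * k * T_e * B (noise referred to the input, then amplified by G).
k = 1.38e-23          # Boltzmann constant, J/K
G = 10 ** (20 / 10)   # 20 dB power gain -> 100
B = 20e6              # bandwidth, Hz
P_o = 2.2e-11         # measured output noise power, W

T_e = P_o / (G * k * B)
print(f"Effective noise temperature T_e = {T_e:.0f} K")   # roughly 800 K
```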
Gaussian noise

- Gaussian noise: the amplitude of the noise signal has a Gaussian probability density function (p.d.f.)
- Central limit theorem: the sum of n independent random variables approaches a Gaussian distribution as n → ∞

Noise model

- The model for the effect of noise is the additive Gaussian noise channel
- For n(t) a random signal, we need more information:
  - What happens to noise at the receiver?
  - Statistical tools

Motivation

[Figure: data sequence 1 0 1 1 0; data plus noise; noise only. How often is a decision error made? Errors can occur both when "0" is sent and when "1" is sent.]

Random variable

A random variable x is a rule that assigns a real number $x_i$ to the ith sample point in the sample space.
Probability density function

Probability that the r.v. x is within a certain range:
$P(x_1 < x < x_2) = \int_{x_1}^{x_2} p_x(x)\,dx$

Example, Gaussian pdf:
$p_x(x) = \frac{1}{\sigma\sqrt{2\pi}}\,e^{-(x-m)^2/2\sigma^2}$

Statistical averages

Expectation of a r.v.:
$E\{x\} = \int_{-\infty}^{\infty} x\,p_x(x)\,dx$
where $E\{\cdot\}$ is the expectation operator.

In general, if y = g(x), then
$E\{y\} = E\{g(x)\} = \int_{-\infty}^{\infty} g(x)\,p_x(x)\,dx$

For example, the mean square amplitude of a signal is the mean of the square of the amplitude, i.e., $E\{x^2\}$.

Random process and averages

Time average:
$\langle n(t)\rangle = \lim_{T\to\infty}\frac{1}{T}\int_{-T/2}^{T/2} n(t)\,dt$

Ensemble average (taken over the pdf of the random variable):
$E\{n\} = \int_{-\infty}^{\infty} n\,p(n)\,dn$

For an ergodic, stationary process, time averages equal ensemble averages:
- DC component: $E\{n(t)\} = \langle n(t)\rangle$
- Average power: $E\{n^2(t)\} = \langle n^2(t)\rangle = \sigma^2$ (zero-mean Gaussian process only)
Example

Consider the signal $x(t) = A e^{j(\omega_c t + \theta)}$, where θ is a random variable, uniformly distributed over 0 to 2π.
1. Calculate its average power using time averaging.
2. Calculate its average power (mean-square) using statistical averaging.

Autocorrelation

How can one represent the spectrum of a random process?

Autocorrelation:
$R_x(\tau) = E\{x(t)\,x(t+\tau)\}$

NOTE: Average power is $P = E\{x^2(t)\} = R_x(0)$.

Frequency content

[Figure: original speech and speech filtered by H(f) with smaller bandwidth; spectra of the original, LPF at 4 kHz, and LPF at 1 kHz]

Power spectral density

PSD measures the distribution of power with frequency; units are watts/Hz.

Wiener-Khinchine theorem:
$S_x(f) = \int_{-\infty}^{\infty} R_x(\tau)\,e^{-j2\pi f\tau}\,d\tau = \mathrm{FT}\{R_x(\tau)\}$

Hence,
$R_x(\tau) = \int_{-\infty}^{\infty} S_x(f)\,e^{j2\pi f\tau}\,df$

Average power:
$P = R_x(0) = \int_{-\infty}^{\infty} S_x(f)\,df$
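To make the Wiener-Khinchine relation concrete, here is a small illustrative Python sketch (not from the notes) that estimates the autocorrelation of a noise sequence, Fourier-transforms it to obtain a PSD estimate, and checks that the area under the PSD matches the average power $R_x(0)$.

```python
import numpy as np

# Wiener-Khinchine sketch: Sx(f) = FT{Rx(tau)}, and P = Rx(0) = integral of Sx(f) df.
rng = np.random.default_rng(0)
fs = 1000.0                         # sample rate in Hz (illustrative value)
N = 20000
x = rng.normal(0.0, 1.0, N)         # zero-mean noise with power sigma^2 = 1

# Estimate the autocorrelation Rx(tau) for lags -M..M
M = 200
Rx = np.array([np.mean(x[:N - abs(k)] * x[abs(k):]) for k in range(-M, M + 1)])

# Discrete approximation of Sx(f): Fourier transform of Rx(tau), lag spacing 1/fs
Rx_lag0_first = np.fft.ifftshift(Rx)             # put lag 0 at index 0 for the FFT
Sx = np.real(np.fft.fft(Rx_lag0_first)) / fs     # watts/Hz

P_time = np.mean(x**2)                           # average power measured directly
P_psd = np.sum(Sx) * fs / Sx.size                # integral of Sx(f) df
print(f"Rx(0) = {Rx[M]:.3f}, mean(x^2) = {P_time:.3f}, area under PSD = {P_psd:.3f}")
```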
Power spectral density

Thermal noise:
$P = kTB = \int_{-B}^{B}\frac{kT}{2}\,df$

White noise:
$S(f) = \frac{N_o}{2}$
The PSD is the same for all frequencies.

Summary

- Additive Gaussian noise channel
- Autocorrelation: $R_x(\tau) = E\{x(t)\,x(t+\tau)\}$
- Power spectral density: $S_x(f) = \int_{-\infty}^{\infty} R_x(\tau)\,e^{-j2\pi f\tau}\,d\tau$
- White noise: $S(f) = \frac{N_o}{2}$
- Expectation operator (taken over the pdf of the random variable): $E\{g(x)\} = \int_{-\infty}^{\infty} g(x)\,p_x(x)\,dx$
Lecture 3

- Representation of band-limited noise
- Why band-limited noise? (See Chapter 2, section 2.4)
- Noise in an analog baseband system (See Chapter 3, sections 3.1, 3.2)

Analog communication system

[Figure: block diagram of an analog communication system]

Receiver

[Figure: predetection filter followed by detector]

Predetection filter:
- bandpass
- removes out-of-band noise
- has a bandwidth matched to the transmission bandwidth

Bandlimited noise

For any bandpass (i.e., modulated) system, the predetection noise will be bandlimited.

A bandpass noise signal can be expressed in terms of two baseband waveforms:
$n(t) = n_c(t)\cos(\omega_c t) - n_s(t)\sin(\omega_c t)$
where $n_c(t)$ and $n_s(t)$ are baseband waveforms and $\cos(\omega_c t)$, $\sin(\omega_c t)$ are carrier terms.

- The PSD of n(t) is centred about $f_c$ (and $-f_c$)
- The PSDs of $n_c(t)$ and $n_s(t)$ are centred about 0 Hz
PSD of n(t)

[Figure: PSD of n(t), of total width 2W, centred on the carrier; consider one narrow slice of width Δf]

In the slice shown (for Δf small), the noise is approximately a sinusoid:
$n_k(t) = a_k\cos(2\pi f_k t + \theta_k) = a_k\cos(\omega_k t + \theta_k)$

Let $\omega_k = (\omega_k - \omega_c) + \omega_c$, so that
$n_k(t) = a_k\cos[(\omega_k - \omega_c)t + \theta_k + \omega_c t]$

Using $\cos(A+B) = \cos A\cos B - \sin A\sin B$:
$n_k(t) = a_k\cos[(\omega_k - \omega_c)t + \theta_k]\cos(\omega_c t) - a_k\sin[(\omega_k - \omega_c)t + \theta_k]\sin(\omega_c t)$
which gives the $n_c(t)$ term and the $n_s(t)$ term respectively.

Example – frequency and time (W = 1000 Hz)

[Figure: spectra and time waveforms of n(t), nc(t) and ns(t)]

Example – histogram (W = 1000 Hz)

[Figure: amplitude histograms of n(t), nc(t) and ns(t)]

Probability density functions

$n(t) = \sum_k a_k\cos(\omega_k t + \theta_k)$
$n_c(t) = \sum_k a_k\cos[(\omega_k - \omega_c)t + \theta_k]$
$n_s(t) = \sum_k a_k\sin[(\omega_k - \omega_c)t + \theta_k]$

- Each waveform is Gaussian distributed (central limit theorem)
- The mean of each waveform is 0

Average power

What is the average power in n(t)?
$n(t) = \sum_k a_k\cos(\omega_k t + \theta_k)$

The power in $a_k\cos(\omega t + \theta)$ is $E\{a_k^2\}/2$ (see Example 1.1, or study group sheet 2, Q1).

Average power in n(t):
$P_n = \sum_k \frac{E\{a_k^2\}}{2}$

Since
$n_c(t) = \sum_k a_k\cos((\omega_k - \omega_c)t + \theta_k)$ and $n_s(t) = \sum_k a_k\sin((\omega_k - \omega_c)t + \theta_k)$,
the average power in nc(t) and ns(t) is
$P_{n_c} = \sum_k \frac{E\{a_k^2\}}{2}$, $P_{n_s} = \sum_k \frac{E\{a_k^2\}}{2}$

n(t), nc(t) and ns(t) all have the same average power!

The average power can also be found from the power spectral density (PSD). From Lecture 2, $P = \int_{-\infty}^{\infty} S(f)\,df$. The PSD of n(t) has height $N_o/2$ over a band of width 2W around $+f_c$ and around $-f_c$ (one band for positive frequencies, one for negative), so
$P = 2\int_{f_c - W}^{f_c + W}\frac{N_o}{2}\,df = 2N_oW$

The average power in n(t), nc(t) and ns(t) is $2N_oW$.
Power spectral densities

- PSD of n(t): height $N_o/2$, over a band of width 2W centred on $\pm f_c$
- PSD of nc(t) and ns(t): height $N_o$, over $-W$ to $W$

Example – power spectral densities (W = 1000 Hz)

[Figure: estimated PSDs of n(t), nc(t) and ns(t); zoomed view]

Phasor representation

$n(t) = n_c(t)\cos(\omega_c t) - n_s(t)\sin(\omega_c t)$

Let $g(t) = n_c(t) + j n_s(t)$. Then
$g(t)e^{j\omega_c t} = n_c(t)\cos\omega_c t + j n_c(t)\sin\omega_c t + j n_s(t)\cos\omega_c t - n_s(t)\sin\omega_c t$
so
$n(t) = \Re\{g(t)\,e^{j\omega_c t}\}$
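The following illustrative Python sketch (my own, not from the notes) generates bandpass noise with PSD height $N_o/2$ over a band 2W around $f_c$, extracts $n_c(t)$ and $n_s(t)$ by mixing with $2\cos(\omega_c t)$ and $-2\sin(\omega_c t)$ followed by low-pass filtering, and checks that all three waveforms have the same average power, close to $2N_oW$.

```python
import numpy as np

# Bandpass noise n(t) = nc(t)cos(wc t) - ns(t)sin(wc t): recover nc, ns and compare powers.
rng = np.random.default_rng(1)
fs, T = 50_000.0, 2.0                 # sample rate (Hz) and duration (s), illustrative values
N = int(fs * T)
t = np.arange(N) / fs
fc, W, No = 10_000.0, 1_000.0, 1e-3   # carrier, noise bandwidth, PSD height No/2

def brickwall(x, f_lo, f_hi):
    """Keep only spectral components with f_lo <= |f| < f_hi (ideal filter via FFT)."""
    X = np.fft.fft(x)
    f = np.abs(np.fft.fftfreq(x.size, d=1 / fs))
    X[(f < f_lo) | (f >= f_hi)] = 0.0
    return np.real(np.fft.ifft(X))

# White Gaussian noise shaped into bandpass noise of PSD height No/2 over [fc-W, fc+W]
w = rng.normal(0.0, np.sqrt(No / 2 * fs), N)     # white noise with PSD approximately No/2
n = brickwall(w, fc - W, fc + W)

# Quadrature components: mix down and low-pass to |f| < W
nc = brickwall(2 * n * np.cos(2 * np.pi * fc * t), 0.0, W)
ns = brickwall(-2 * n * np.sin(2 * np.pi * fc * t), 0.0, W)

for name, sig in [("n", n), ("nc", nc), ("ns", ns)]:
    print(f"power of {name}(t) = {np.mean(sig**2):.2e}  (expected 2*No*W = {2*No*W:.2e})")
```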
Analog communication system

Performance measure:
$\mathrm{SNR}_o = \frac{\text{average message power at receiver output}}{\text{average noise power at receiver output}}$

Compare systems with the same transmitted power.

Baseband system

[Figure: baseband transmission model, with message, additive noise channel and a low-pass filter at the receiver]

Baseband SNR

Transmitted (message) power is $P_T$.

Noise power is:
$P_N = \int_{-W}^{W}\frac{N_o}{2}\,df = N_oW$

SNR at the receiver output:
$\mathrm{SNR}_{\mathrm{base}} = \frac{P_T}{N_oW}$

Summary

Baseband SNR:
$\mathrm{SNR}_{\mathrm{base}} = \frac{P_T}{N_oW}$

Bandpass noise representation:
$n(t) = n_c(t)\cos(\omega_c t) - n_s(t)\sin(\omega_c t)$

- All waveforms have the same probability density function (zero-mean Gaussian) and the same average power
- nc(t) and ns(t) have the same power spectral density
Lecture 4

Noise (AWGN) in AM systems:
- DSB-SC
- AM, synchronous detection
- AM, envelope detection

(See Chapter 3, section 3.3)

Analog communication system

Performance measure:
$\mathrm{SNR}_o = \frac{\text{average message power at receiver output}}{\text{average noise power at receiver output}}$

Compare systems with the same transmitted power.

Amplitude modulation

Modulated signal:
$s(t)_{AM} = [A_c + m(t)]\cos\omega_c t$
where m(t) is the message signal.

Modulation index:
$\mu = \frac{m_p}{A_c}$
where $m_p$ is the peak amplitude of the message.
DSB-SC

$s(t)_{\mathrm{DSB\text{-}SC}} = A_c m(t)\cos\omega_c t$

Synchronous detection

Signal after the multiplier:
$y(t)_{AM} = [A_c + m(t)]\cos\omega_c t \times 2\cos\omega_c t = [A_c + m(t)](1 + \cos 2\omega_c t)$
$y(t)_{\mathrm{DSB\text{-}SC}} = A_c m(t)(1 + \cos 2\omega_c t)$

Noise in DSB-SC

Transmitted signal:
$s(t)_{\mathrm{DSB\text{-}SC}} = A_c m(t)\cos\omega_c t$

Predetection signal (transmitted signal plus bandlimited noise):
$x(t) = A_c m(t)\cos\omega_c t + n_c(t)\cos\omega_c t - n_s(t)\sin\omega_c t$

Receiver output (after LPF):
$y(t) = A_c m(t) + n_c(t)$

SNR of DSB-SC

Output signal power:
$P_s = \overline{(A_c m(t))^2} = A_c^2 P$
where P is the power of the message.

Output noise power (from the PSD of the baseband noise nc(t)):
$P_N = \int_{-W}^{W} N_o\,df = 2N_oW$

Output SNR:
$\mathrm{SNR}_{\mathrm{DSB\text{-}SC}} = \frac{A_c^2 P}{2N_oW}$
Transmitted power:
$P_T = \overline{(A_c m(t)\cos\omega_c t)^2} = \frac{A_c^2 P}{2}$

Output SNR:
$\mathrm{SNR}_{\mathrm{DSB\text{-}SC}} = \frac{P_T}{N_oW} = \mathrm{SNR}_{\mathrm{base}}$

DSB-SC has no performance advantage over baseband.

Noise in AM (synchronous detector)

Predetection signal (transmitted signal plus bandlimited noise):
$x(t) = [A_c + m(t)]\cos\omega_c t + n_c(t)\cos\omega_c t - n_s(t)\sin\omega_c t$

Receiver output:
$y(t) = A_c + m(t) + n_c(t)$

Output signal power: $P_S = \overline{m^2(t)} = P$
Output noise power: $P_N = 2N_oW$
Output SNR:
$\mathrm{SNR}_{AM} = \frac{P}{2N_oW}$

Transmitted signal:
$s(t)_{AM} = [A_c + m(t)]\cos\omega_c t$

Transmitted power:
$P_T = \frac{A_c^2}{2} + \frac{P}{2}$

Output SNR:
$\mathrm{SNR}_{AM} = \frac{P}{A_c^2 + P}\,\frac{P_T}{N_oW} = \frac{P}{A_c^2 + P}\,\mathrm{SNR}_{\mathrm{base}}$

The performance of AM is always worse than baseband.

Noise in AM, envelope detector

Predetection signal (transmitted signal plus bandlimited noise):
$x(t) = [A_c + m(t)]\cos\omega_c t + n_c(t)\cos\omega_c t - n_s(t)\sin\omega_c t$
Receiver output:
$y(t) = \text{envelope of } x(t) = \sqrt{[A_c + m(t) + n_c(t)]^2 + n_s^2(t)}$

Small noise case:
$y(t) \approx A_c + m(t) + n_c(t)$, the same as the output of the synchronous detector.

Large noise case:
$y(t) \approx E_n(t) + [A_c + m(t)]\cos\theta_n(t)$

- The envelope detector has a threshold effect
- Not really a problem in practice

Example

An unmodulated carrier (of amplitude $A_c$ and frequency $f_c$) and bandlimited white noise are summed and then passed through an ideal envelope detector.
- Assume the noise spectral density to be of height $N_o/2$ and bandwidth 2W, centred about the carrier frequency.
- Assume the input carrier-to-noise ratio is high.
1. Calculate the carrier-to-noise ratio at the output of the envelope detector, and compare it with the carrier-to-noise ratio at the detector input.

Summary

Synchronous detector:
$\mathrm{SNR}_{\mathrm{DSB\text{-}SC}} = \mathrm{SNR}_{\mathrm{base}}$
$\mathrm{SNR}_{AM} = \frac{P}{A_c^2 + P}\,\mathrm{SNR}_{\mathrm{base}}$

Envelope detector:
- threshold effect
- for small noise, performance is the same as the synchronous detector
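As a small worked check (an illustrative Python sketch, assuming a single-tone message with modulation index $\mu = 1$, so $P = m_p^2/2 = A_c^2/2$), the AM SNR penalty relative to baseband is:

```python
import math

# AM (synchronous, or small-noise envelope, detection) SNR relative to baseband:
# SNR_AM / SNR_base = P / (Ac^2 + P), evaluated for a single-tone message with mu = 1.
Ac = 1.0                     # carrier amplitude (arbitrary)
mu = 1.0                     # modulation index
mp = mu * Ac                 # peak message amplitude
P = mp**2 / 2                # power of a single-tone message, mp^2/2

ratio = P / (Ac**2 + P)
print(f"SNR_AM / SNR_base = {ratio:.3f}  ({10*math.log10(ratio):.1f} dB)")   # 1/3, about -4.8 dB
```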
Lecture 5

- Noise in FM systems
- Pre-emphasis and de-emphasis (See section 3.4)
- Comparison of analog systems (See section 3.5)

Frequency modulation

FM waveform:
$s(t)_{FM} = A_c\cos\!\left[2\pi f_c t + 2\pi k_f\int_{-\infty}^{t} m(\tau)\,d\tau\right] = A_c\cos\theta(t)$
where θ(t) is the instantaneous phase.

Instantaneous frequency:
$f_i = \frac{1}{2\pi}\frac{d\theta(t)}{dt} = f_c + k_f m(t)$

The frequency is proportional to the message signal.

FM waveforms

[Figure: message, AM waveform and FM waveform]

FM frequency deviation

The instantaneous frequency varies between $f_c - k_f m_p$ and $f_c + k_f m_p$, where $m_p$ is the peak message amplitude.

Frequency deviation (maximum departure of the carrier wave from $f_c$):
$\Delta f = k_f m_p$

Deviation ratio:
$\beta = \frac{\Delta f}{W}$
where W is the bandwidth of the message signal.

Commercial FM uses Δf = 75 kHz and W = 15 kHz.

Bandwidth considerations

Carson's rule:
$B_T \approx 2W(\beta + 1) = 2(\Delta f + W)$

FM receiver

Discriminator: the output is proportional to the deviation of the instantaneous frequency away from the carrier frequency.

Noise in FM versus AM

AM:
- Amplitude of the modulated signal carries the message
- Noise adds directly to the modulated signal
- Performance no better than baseband

FM:
- Frequency of the modulated signal carries the message
- Zero crossings of the modulated signal are important
- Effect of noise should be less than in AM

Noise in FM

Predetection signal (transmitted signal plus bandlimited noise):
$x(t) = A_c\cos(2\pi f_c t + \phi(t)) + n_c(t)\cos(2\pi f_c t) - n_s(t)\sin(2\pi f_c t)$
where $\phi(t) = 2\pi k_f\int_{-\infty}^{t} m(\tau)\,d\tau$

If the carrier power is much larger than the noise power:
1. Noise does not affect the signal power at the output
2. The message does not affect the noise power at the output
Assumptions

1. Noise does not affect the signal power at the output.

Signal component at the receiver:
$x_s(t) = A_c\cos(2\pi f_c t + \phi(t))$

Instantaneous frequency:
$f_i(t) = \frac{1}{2\pi}\frac{d\phi(t)}{dt} = k_f m(t)$

Output signal power:
$P_S = k_f^2 P$
where P is the power of the message signal.

2. The signal does not affect the noise at the output.

Message-free component at the receiver:
$x_n(t) = A_c\cos(2\pi f_c t) + n_c(t)\cos(2\pi f_c t) - n_s(t)\sin(2\pi f_c t)$

Instantaneous frequency:
$f_i(t) = \frac{1}{2\pi}\frac{d\theta(t)}{dt} = \frac{1}{2\pi}\frac{d}{dt}\tan^{-1}\!\left[\frac{n_s(t)}{A_c + n_c(t)}\right] \approx \frac{1}{2\pi}\frac{d}{dt}\frac{n_s(t)}{A_c}$

We know the PSD of ns(t), but what about its derivative?

Discriminator output:
$f_i(t) \approx \frac{1}{2\pi A_c}\frac{dn_s(t)}{dt}$

Fourier theory property: if $x(t) \Leftrightarrow X(f)$, then
$\frac{dx(t)}{dt} \Leftrightarrow j2\pi f\,X(f)$
so $F_i(f) = \frac{1}{2\pi A_c}\,j2\pi f\,N_s(f) = \frac{jf}{A_c}N_s(f)$.

PSD property: if $Y(f) = H(f)X(f)$, then
$S_Y(f) = |H(f)|^2 S_X(f)$

PSD of the discriminator output:
$S_F(f) = \frac{|f|^2}{A_c^2}\,S_N(f)$
PSD of the LPF noise term:
the PSD of $\frac{1}{2\pi A_c}\frac{dn_s(t)}{dt}$ is $\frac{|f|^2}{A_c^2}N_o$ for $|f| < W$
(since the PSD of ns(t) is $N_o$ for $|f| < W$).

Average power of the noise at the output:
$P_N = \int_{-W}^{W}\frac{|f|^2}{A_c^2}N_o\,df = \frac{2N_o}{A_c^2}\left[\frac{f^3}{3}\right]_0^W = \frac{2N_oW^3}{3A_c^2}$

Increasing the carrier power has a noise quieting effect.

SNR of FM

SNR at the output:
$\mathrm{SNR}_o = \frac{3A_c^2 k_f^2 P}{2N_oW^3}$

Transmitted power:
$P_T = \overline{(A_c\cos[\omega_c t + \phi(t)])^2} = \frac{A_c^2}{2}$

SNR at the output:
$\mathrm{SNR}_{FM} = \frac{3k_f^2 P}{W^2}\,\mathrm{SNR}_{\mathrm{base}} = \frac{3\beta^2 P}{m_p^2}\,\mathrm{SNR}_{\mathrm{base}}$

Threshold effect in FM

$\mathrm{SNR}_{FM}$ is valid when the predetection SNR > 10.

Predetection signal:
$x(t) = A_c\cos(2\pi f_c t + \phi(t)) + n_c(t)\cos(2\pi f_c t) - n_s(t)\sin(2\pi f_c t)$

Predetection SNR:
$\mathrm{SNR}_{\mathrm{pre}} = \frac{A_c^2}{2N_oB_T}$

Threshold point:
$\frac{A_c^2}{4N_oW(\beta + 1)} > 10$

We cannot arbitrarily increase $\mathrm{SNR}_{FM}$ by increasing β.
Pre-emphasis and de-emphasis

The improvement in output SNR afforded by using pre-emphasis and de-emphasis in FM is defined by:
$I = \frac{\text{SNR with pre-/de-emphasis}}{\text{SNR without pre-/de-emphasis}} = \frac{\text{average output noise power without pre-/de-emphasis}}{\text{average output noise power with pre-/de-emphasis}}$

Example

If $H_{de}(f)$ is the transfer function of the de-emphasis filter, find an expression for the improvement, I.

Pre-/de-emphasis can improve the output SNR by about 13 dB.

Analog system performance

Parameters:
- Single-tone message: $m(t) = \cos(2\pi f_m t)$
- Message bandwidth: $W = f_m$
- AM system: μ = 1
- FM system: β = 5

Performance and bandwidth:
- DSB-SC: $\mathrm{SNR}_{\mathrm{DSB\text{-}SC}} = \mathrm{SNR}_{\mathrm{base}}$, $B_{\mathrm{DSB\text{-}SC}} = 2W$
- AM: $\mathrm{SNR}_{AM} = \frac{1}{3}\,\mathrm{SNR}_{\mathrm{base}}$, $B_{AM} = 2W$
- FM: $\mathrm{SNR}_{FM} = \frac{75}{2}\,\mathrm{SNR}_{\mathrm{base}}$, $B_{FM} = 12W$
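A small illustrative Python check of these figures (assuming the single-tone parameters above, so $P = m_p^2/2$):

```python
# Analog system comparison for a single-tone message m(t) = cos(2*pi*fm*t):
# P = mp^2 / 2, AM with mu = 1 (so Ac = mp), FM with beta = 5.
mp = 1.0
P = mp**2 / 2

# AM (synchronous detection): SNR_AM / SNR_base = P / (Ac^2 + P)
Ac = mp                            # mu = mp / Ac = 1
am_gain = P / (Ac**2 + P)          # = 1/3

# FM: SNR_FM / SNR_base = 3 * beta^2 * P / mp^2
beta = 5.0
fm_gain = 3 * beta**2 * P / mp**2  # = 37.5 = 75/2

# Carson's rule bandwidth relative to the message bandwidth W
fm_bandwidth = 2 * (beta + 1)      # = 12 W

print(f"AM: {am_gain:.3f} x SNR_base, B = 2W")
print(f"FM: {fm_gain:.1f} x SNR_base, B = {fm_bandwidth:.0f}W")
```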
Summary

Noise in FM:
- Increasing the carrier power reduces the noise at the receiver output
- Has a threshold effect
- Pre-emphasis

Comparison of analog modulation schemes:
- AM worse than baseband
- DSB/SSB same as baseband
- FM better than baseband
Lecture 6

- Digital vs analog communications
- Pulse code modulation

(See sections 4.1, 4.2 and 4.3)

Digital vs Analog

[Figure: example of an analog message and a digital message]

Digital vs Analog

Analog:
- Recreate the waveform accurately
- Performance criterion is SNR at the receiver output

Digital:
- Decide which symbol was sent
- Performance criterion is the probability of the receiver making a decision error

Advantages of digital:
1. Digital signals are more immune to noise
2. Repeaters can re-transmit a noise-free signal

Sampling: discrete in time

Nyquist Sampling Theorem:
A signal whose bandwidth is limited to W Hz can be reconstructed exactly from its samples taken uniformly at a rate of R > 2W Hz.
Maximum information rate

How many bits can be transferred over a channel of bandwidth B Hz (ignoring noise)?
- A signal with a bandwidth of B Hz is not distorted over this channel
- A signal with a bandwidth of B Hz requires samples taken at 2B Hz
- Can transmit 2 bits of information per second per Hz

Pulse-code modulation

Represent an analog waveform in digital form.

Quantization: discrete in amplitude

Round the amplitude of each sample to the nearest one of a finite number of levels.

Encode

Assign each quantization level a code.

Sampling vs Quantization

Sampling:
- Non-destructive if $f_s > 2W$
- Can reconstruct the analog waveform exactly by using a low-pass filter

Quantization:
- Destructive
- Once the signal has been rounded off it can never be reconstructed exactly

Quantization noise

[Figure: sampled signal; quantized signal (step size of 0.1); quantization error]

Quantization noise

Let Δ be the separation between quantization levels:
$\Delta = \frac{2m_p}{L}$
where $L = 2^n$ is the number of quantization levels and $m_p$ is the peak allowed signal amplitude.

The round-off effect of the quantizer ensures that |q| < Δ/2, where q is a random variable representing the quantization error.

Assume q is zero mean with a uniform pdf, so the mean square error is:
$E\{q^2\} = \int_{-\infty}^{\infty} q^2 p(q)\,dq = \int_{-\Delta/2}^{\Delta/2} q^2\frac{1}{\Delta}\,dq = \frac{\Delta^2}{12}$

Let the message power be P. The noise power is (since q is zero mean):
$P_N = E\{q^2\} = \frac{\Delta^2}{12} = \frac{(2m_p/L)^2}{12} = \frac{m_p^2}{3\times 2^{2n}}$

Output SNR of the quantizer:
$\mathrm{SNR}_Q = \frac{P_S}{P_N} = \frac{3P}{m_p^2}\times 2^{2n}$
or in dB:
$\mathrm{SNR}_Q = 6.02\,n + 10\log_{10}\!\left(\frac{3P}{m_p^2}\right)\ \mathrm{dB}$
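A quick illustrative Python sketch (not from the notes) that quantizes a full-scale sine wave with an n-bit uniform quantizer and compares the measured SNR with the rule above (for a sine, $10\log_{10}(3P/m_p^2) \approx 1.76$ dB):

```python
import numpy as np

# Uniform quantization of a full-scale sine: measured SNR vs the 6.02n dB rule.
mp = 1.0                                           # peak allowed amplitude
t = np.linspace(0, 1, 100_000, endpoint=False)
m = mp * np.sin(2 * np.pi * 7 * t)                 # message, power P = mp^2 / 2

for n in (4, 8, 12):
    delta = 2 * mp / 2**n                          # step size, Delta = 2*mp / L
    levels = delta * (np.floor(m / delta) + 0.5)   # mid-rise uniform quantizer
    mq = np.clip(levels, -mp + delta / 2, mp - delta / 2)
    q = m - mq                                     # quantization error, |q| <= Delta/2
    snr_measured = 10 * np.log10(np.mean(m**2) / np.mean(q**2))
    snr_rule = 6.02 * n + 10 * np.log10(3 * np.mean(m**2) / mp**2)
    print(f"n = {n:2d}: measured {snr_measured:5.1f} dB, rule {snr_rule:5.1f} dB")
```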
Bandwidth of PCM

- Each message sample requires n bits
- If the message has bandwidth W Hz, then PCM contains 2nW bits per second
- Bandwidth required: $B_T = nW$

SNR can be written:
$\mathrm{SNR}_Q = \frac{3P}{m_p^2}\times 2^{2B_T/W}$

A small increase in bandwidth yields a large increase in SNR.

Nonuniform quantization

- For audio signals (e.g. speech), small signal amplitudes occur more often than large signal amplitudes
- Better to have closely spaced quantization levels at low signal amplitudes, widely spaced levels at large signal amplitudes
- The quantizer then has better resolution at low amplitudes (where the signal spends more time)

[Figure: uniform quantization vs non-uniform quantization]

Companding

A uniform quantizer is easier to implement than a nonlinear one.
Compress the signal first, then use a uniform quantizer, then expand the signal (i.e., compand).
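As one concrete illustration of companding (my own sketch, not from the notes), the μ-law compressor widely used in telephony can be combined with a uniform quantizer as follows; the compressor and expander below are standard, but the surrounding test values are assumptions.

```python
import numpy as np

# mu-law companding sketch: compress, quantize uniformly, expand.
mu = 255.0                                   # common telephony value

def compress(x):   # x normalized to [-1, 1]
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def expand(y):
    return np.sign(y) * ((1 + mu) ** np.abs(y) - 1) / mu

x = np.linspace(-1, 1, 11)                   # test amplitudes
n = 8                                        # bits per sample
delta = 2.0 / 2**n
y = np.round(compress(x) / delta) * delta    # uniform quantization of the compressed signal
x_hat = expand(y)
print(np.max(np.abs(x - x_hat)))             # reconstruction error after companded quantization
```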
Summary

Analog communication system:
- Receiver must recreate the transmitted waveform
- Performance measure is the signal-to-noise ratio

Digital communication system:
- Receiver must decide which symbol was transmitted
- Performance measure is the probability of error

Pulse-code modulation:
- Scheme to represent an analog signal in digital format
- Sample, quantize, encode
- Companding (nonuniform quantization)
Lecture 7

Performance of digital systems in noise:
- Baseband
- ASK
- PSK, FSK
- Compare all schemes

(See sections 4.4, 4.5)

Digital receiver

[Figure: received signal s(t) plus noise, passed through a filter and a demodulator/decision device]

Baseband digital system

[Figure: baseband digital waveform s(t)]

Gaussian noise, probability

[Figure: noise waveform n(t) and its probability density function f(n)]

$p(n) = \frac{1}{\sigma\sqrt{2\pi}}\exp\!\left[-\frac{(n-m)^2}{2\sigma^2}\right] = \mathcal{N}(m, \sigma^2)$

Normal distribution with mean m and variance σ².

$\mathrm{prob}(a < n < b) = \int_a^b p(n)\,dn$
Gaussian noise, spectrum

NOTE: For zero-mean noise, variance ≡ average power, i.e., σ² = P.

White noise through a LPF (use for baseband):
$P = \int_{-W}^{W}\frac{N_o}{2}\,df = N_oW$

White noise through a BPF (use for bandpass):
$P = \int_{f_c-W}^{f_c+W}\frac{N_o}{2}\,df + \int_{-f_c-W}^{-f_c+W}\frac{N_o}{2}\,df = 2N_oW$

Baseband system – "0" transmitted

[Figure: transmitted signal s0(t), noise signal n(t), received signal y0(t) = s0(t) + n(t)]

Error if $y_0(t) > A/2$:
$P_{e0} = \int_{A/2}^{\infty}\mathcal{N}(0, \sigma^2)\,dn$

Baseband system – "1" transmitted

[Figure: transmitted signal s1(t), noise signal n(t), received signal y1(t) = s1(t) + n(t)]

Error if $y_1(t) < A/2$:
$P_{e1} = \int_{-\infty}^{A/2}\mathcal{N}(A, \sigma^2)\,dn$

Baseband system – errors

Possible errors:
1. Symbol "0" transmitted, receiver decides "1"
2. Symbol "1" transmitted, receiver decides "0"

Total probability of error:
$P_e = p_0 P_{e0} + p_1 P_{e1}$
where $p_0$ is the probability of "0" being sent and $P_{e0}$ is the probability of making an error if "0" was sent (similarly for "1").
For equally-probable symbols:
$P_e = \tfrac{1}{2}P_{e0} + \tfrac{1}{2}P_{e1}$

Can show that $P_{e0} = P_{e1}$, hence $P_e = \tfrac{1}{2}P_{e0} + \tfrac{1}{2}P_{e0} = P_{e0}$:
$P_e = \int_{A/2}^{\infty}\mathcal{N}(0, \sigma^2)\,dn$

How to calculate Pe?

$P_e = \int_{A/2}^{\infty}\frac{1}{\sigma\sqrt{2\pi}}\exp\!\left(-\frac{n^2}{2\sigma^2}\right)dn$

1. Complementary error function (erfc in Matlab):
$\mathrm{erfc}(u) = \frac{2}{\sqrt{\pi}}\int_u^{\infty}\exp(-n^2)\,dn$
$P_e = \tfrac{1}{2}\,\mathrm{erfc}\!\left(\frac{A}{2\sqrt{2}\,\sigma}\right)$

2. Q-function (tail function):
$Q(u) = \frac{1}{\sqrt{2\pi}}\int_u^{\infty}\exp\!\left(-\frac{n^2}{2}\right)dn$
$P_e = Q\!\left(\frac{A}{2\sigma}\right)$

Baseband error probability

[Figure: probability of error versus A/σ]

Example

Consider a digital system which uses a voltage level of 0 volts to represent a "0", and a level of 0.22 volts to represent a "1". The digital waveform has a bandwidth of 15 kHz.
If this digital waveform is to be transmitted over a baseband channel having additive noise with a flat power spectral density of $N_o/2 = 3\times10^{-8}$ W/Hz, what is the probability of error at the receiver output?
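An illustrative Python calculation of this example (assuming the receiver low-pass filters to the 15 kHz signal bandwidth, so $\sigma^2 = N_oW$, and the decision threshold is at A/2):

```python
import math

# Baseband Pe for the worked example: A = 0.22 V, W = 15 kHz, No/2 = 3e-8 W/Hz.
A = 0.22
W = 15e3
No = 2 * 3e-8

sigma = math.sqrt(No * W)                      # noise power after the receiver LPF is No*W
Pe = 0.5 * math.erfc(A / (2 * math.sqrt(2) * sigma))
print(f"sigma = {sigma:.3f} V, Pe = {Pe:.2e}")  # sigma = 0.030 V, Pe about 1.2e-4
```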
Amplitude-shift keying

$s_0(t) = 0$
$s_1(t) = A\cos(\omega_c t)$

Synchronous detector

Identical to the analog synchronous detector.

ASK – "0" transmitted

Predetection signal (ASK "0" plus bandpass noise):
$x_0(t) = 0 + n_c(t)\cos(\omega_c t) - n_s(t)\sin(\omega_c t)$

After the multiplier:
$r_0(t) = x_0(t)\times 2\cos(\omega_c t) = n_c(t)\,2\cos^2(\omega_c t) - n_s(t)\,2\sin(\omega_c t)\cos(\omega_c t) = n_c(t)[1 + \cos(2\omega_c t)] - n_s(t)\sin(2\omega_c t)$

Receiver output:
$y_0(t) = n_c(t)$

ASK – "1" transmitted

Predetection signal (ASK "1" plus bandpass noise):
$x_1(t) = A\cos(\omega_c t) + n_c(t)\cos(\omega_c t) - n_s(t)\sin(\omega_c t)$

After the multiplier:
$r_1(t) = x_1(t)\times 2\cos(\omega_c t) = [A + n_c(t)][1 + \cos(2\omega_c t)] - n_s(t)\sin(2\omega_c t)$

Receiver output:
$y_1(t) = A + n_c(t)$
PDFs at receiver output

ASK – "0" transmitted: PDF of $y_0(t) = n_c(t)$
ASK – "1" transmitted: PDF of $y_1(t) = A + n_c(t)$

Same as baseband!
$P_{e,\mathrm{ASK}} = \tfrac{1}{2}\,\mathrm{erfc}\!\left(\frac{A}{2\sqrt{2}\,\sigma}\right)$

Phase-shift keying

PSK:
$s_0(t) = -A\cos(\omega_c t)$
$s_1(t) = A\cos(\omega_c t)$

PSK demodulator

- Band-pass filter bandwidth matched to the modulated signal bandwidth
- Carrier frequency is $\omega_c$
- Low-pass filter leaves only baseband signals

PSK – "0" transmitted

Predetection signal (PSK "0" plus bandpass noise):
$x_0(t) = -A\cos(\omega_c t) + n_c(t)\cos(\omega_c t) - n_s(t)\sin(\omega_c t)$

After the multiplier:
$r_0(t) = x_0(t)\times 2\cos(\omega_c t) = [-A + n_c(t)][1 + \cos(2\omega_c t)] - n_s(t)\sin(2\omega_c t)$

Receiver output:
$y_0(t) = -A + n_c(t)$
PSK – "1" transmitted

Predetection signal (PSK "1" plus bandpass noise):
$x_1(t) = A\cos(\omega_c t) + n_c(t)\cos(\omega_c t) - n_s(t)\sin(\omega_c t)$

After the multiplier:
$r_1(t) = x_1(t)\times 2\cos(\omega_c t) = [A + n_c(t)][1 + \cos(2\omega_c t)] - n_s(t)\sin(2\omega_c t)$

Receiver output:
$y_1(t) = A + n_c(t)$

PSK – PDFs at receiver output

PSK – "0" transmitted: PDF of $y_0(t) = -A + n_c(t)$
PSK – "1" transmitted: PDF of $y_1(t) = A + n_c(t)$

Set the threshold at 0:
- if y < 0, decide "0"
- if y > 0, decide "1"

PSK – probability of error

$P_{e0} = \int_0^{\infty}\mathcal{N}(-A, \sigma^2)\,dn = \int_0^{\infty}\frac{1}{\sigma\sqrt{2\pi}}\exp\!\left[-\frac{(n+A)^2}{2\sigma^2}\right]dn$
$P_{e1} = \int_{-\infty}^{0}\mathcal{N}(A, \sigma^2)\,dn = \int_{-\infty}^{0}\frac{1}{\sigma\sqrt{2\pi}}\exp\!\left[-\frac{(n-A)^2}{2\sigma^2}\right]dn$

Probability of bit error:
$P_{e,\mathrm{PSK}} = \tfrac{1}{2}\,\mathrm{erfc}\!\left(\frac{A}{\sqrt{2}\,\sigma}\right)$

Frequency-shift keying

FSK:
$s_0(t) = A\cos(\omega_0 t)$
$s_1(t) = A\cos(\omega_1 t)$
FSK detector

[Figure: two synchronous detector branches, one at $\omega_0$ and one at $\omega_1$, whose outputs are combined to form the decision variable]

FSK

Receiver output:
$y_0(t) = -A + [n_c^{1}(t) - n_c^{0}(t)]$
$y_1(t) = A + [n_c^{1}(t) - n_c^{0}(t)]$

The two noise sources are independent, so their variances add.

PDFs are the same as for PSK, but the variance is doubled:
$P_{e,\mathrm{FSK}} = \tfrac{1}{2}\,\mathrm{erfc}\!\left(\frac{A}{2\sigma}\right)$

Digital performance comparison

[Figure: probability of error versus A/σ for baseband/ASK, FSK and PSK]

Summary

For a baseband (or ASK) system:
$P_e = \int_{A/2}^{\infty}\mathcal{N}(0, \sigma^2)\,dn = \tfrac{1}{2}\,\mathrm{erfc}\!\left(\frac{A}{2\sqrt{2}\,\sigma}\right)$

Probability of error for PSK and FSK:
$P_{e,\mathrm{PSK}} = \tfrac{1}{2}\,\mathrm{erfc}\!\left(\frac{A}{\sqrt{2}\,\sigma}\right)$, $P_{e,\mathrm{FSK}} = \tfrac{1}{2}\,\mathrm{erfc}\!\left(\frac{A}{2\sigma}\right)$

Comparison of digital systems:
- PSK best, then FSK; ASK and baseband are the same
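An illustrative Python comparison of these error probabilities at a single value of A/σ (purely a numerical restatement of the formulas above):

```python
import math

# Compare Pe for baseband/ASK, FSK and PSK at the same A/sigma.
def pe_baseband(a_over_sigma):
    return 0.5 * math.erfc(a_over_sigma / (2 * math.sqrt(2)))

def pe_fsk(a_over_sigma):
    return 0.5 * math.erfc(a_over_sigma / 2)

def pe_psk(a_over_sigma):
    return 0.5 * math.erfc(a_over_sigma / math.sqrt(2))

r = 6.0   # example value of A/sigma
print(f"baseband/ASK: {pe_baseband(r):.2e}")
print(f"FSK:          {pe_fsk(r):.2e}")
print(f"PSK:          {pe_psk(r):.2e}")   # PSK is the best, as noted above
```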
Lecture 8

Information theory:
- Why?
- Information
- Entropy
- Source coding (a little)

(See sections 5.1 to 5.4.2)

Why information theory?

What is the performance of the "best" system?

"What would be the characteristics of an ideal system, [one that] is not limited by our engineering ingenuity and inventiveness but limited rather only by the fundamental nature of the physical universe" – Taub & Schilling

Information

The purpose of a communication system is to convey information from one point to another. What is information?

Definition:
$I(s) = \log_2\frac{1}{p} = -\log_2(p)\ \text{bits}$
where I(s) is the information in symbol s, p is the probability of occurrence of symbol s, and the bit is the conventional unit of information.

Properties of I(s)

1. If p = 1, I(s) = 0 (a symbol that is certain to occur conveys no information)
2. For 0 < p < 1, 0 < I(s) < ∞
3. If p = p1 × p2, then I(s) = I(s1) + I(s2)
Example

Suppose we have two symbols, s0 = 0 and s1 = 1, each with probability of occurrence $p_0 = p_1 = \tfrac{1}{2}$.
Each symbol represents $I(s) = -\log_2(\tfrac{1}{2}) = 1$ bit of information.
In this example, one symbol = one information bit, but it is not always so!

Sources and symbols

Symbols:
- may be binary ("0" and "1")
- can have more than 2 symbols, e.g. letters of the alphabet, etc.
- the sequence of symbols is random (otherwise no information is conveyed)

Definition: If successive symbols are statistically independent, the information source is a zero-memory source (or discrete memoryless source).

How much information is conveyed by symbols?

Entropy

Definition:
$H(S) = -\sum_{\text{all }k} p_k\log_2(p_k)$
where $S = \{s_1, s_2, \ldots, s_K\}$ is the alphabet (the collection of all possible symbols) and $p_k$ is the probability of occurrence of symbol $s_k$.

Note that $\sum_{\text{all }k} p_k = 1$ (we're certain that the symbol comes from the known alphabet).

Entropy: average information per symbol.

Example – binary source

Alphabet: $S = \{s_0, s_1\}$
Probabilities: $p_0 = 1 - p_1$
Entropy:
$H(S) = -\sum_{\text{all }k} p_k\log_2(p_k) = -(1 - p_1)\log_2(1 - p_1) - p_1\log_2(p_1)$

How to represent (encode) each symbol?
Let s0 = 0, s1 = 1; this requires 1 bit/symbol to transmit.
Example – three symbol alphabet

Alphabet: S = {A, B, C}
Probabilities: $p_A = 0.7$, $p_B = 0.2$, $p_C = 0.1$
Entropy:
$H(S) = -\sum_{\text{all }k} p_k\log_2(p_k) = 1.157\ \text{bits/symbol}$

How to represent (encode) each symbol? Using the fixed-length code words
A = 00, B = 01, C = 10
requires 2 bits/symbol to transmit.

Symbols generated at a rate of 1 symbol/sec produce a bitstream at a rate of 2 bits/sec, so the system needs to process 2 bits/sec.

Source coding

The amount of information we need to transmit is determined (amongst other things) by how many bits we need to transmit for each symbol.
- In the binary case, only 1 bit is required to transmit each symbol
- In the {A, B, C} case, 2 bits are required to transmit each symbol

Examples

Telephone: speech waveform → 8000 symbols/sec → 64000 bits/sec → system needs to process 64 kb/s
Cell phone: speech waveform → 8000 symbols/sec → 13000 bits/sec → system needs to process 13 kb/s
Source vs channel coding

- Source coding: minimize the number of bits to be transmitted
- Channel coding: add extra bits to detect/correct errors

Source coding

All symbols do not need to be encoded with the same number of bits.
$p_A = 0.7$, $p_B = 0.2$, $p_C = 0.1$
Example: let A = 0, B = 10, C = 11.

[Figure: an example sequence of 6 symbols encoded into 8 bits]

Average codeword length

Definition:
$\bar{L} = \sum_{\text{all }k} p_k l_k$
where $p_k$ is the probability of occurrence of symbol $s_k$ and $l_k$ is the number of bits used to represent symbol $s_k$.

Source coding

Use variable-length code words:
- A symbol that occurs frequently (i.e., relatively high $p_k$) should have a short code word
- A symbol that occurs rarely should have a long code word

Example: $p_A = 0.7$, $p_B = 0.2$, $p_C = 0.1$; let A = 0, B = 10, C = 11.
$\bar{L} = 0.7\times1 + 0.2\times2 + 0.1\times2 = 1.3\ \text{bits/symbol}$
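A short illustrative Python check of the entropy and average codeword length for this three-symbol source:

```python
import math

# Entropy and average codeword length for the {A, B, C} source.
probs = {"A": 0.7, "B": 0.2, "C": 0.1}
code = {"A": "0", "B": "10", "C": "11"}

H = -sum(p * math.log2(p) for p in probs.values())
L_bar = sum(probs[s] * len(code[s]) for s in probs)

print(f"H(S)  = {H:.3f} bits/symbol")      # about 1.157 bits/symbol
print(f"L_bar = {L_bar:.3f} bits/symbol")  # 1.3 bits/symbol >= H(S)
```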
Summary

Information content (of a particular symbol):
$I(s) = \log_2\frac{1}{p} = -\log_2(p)\ \text{bits}$

Entropy (for a complete alphabet, the average information content per symbol):
$H(S) = -\sum_{\text{all }k} p_k\log_2(p_k)\ \text{bits/symbol}$

Source coding: how many bits do we need to represent each symbol?
Lecture 9

- Source coding theorem
- Huffman coding algorithm

(See sections 5.4.2, 5.4.3)

Source coding

All symbols do not need to be encoded with the same number of bits.
Example:
$p_A = 0.7$, $p_B = 0.2$, $p_C = 0.1$ (probabilities)
A = 0, B = 10, C = 11 (code words)
$\bar{L} = 0.7\times1 + 0.2\times2 + 0.1\times2 = 1.3$ bits/symbol (average codeword length)

[Figure: an example sequence of 6 symbols encoded into 8 bits]

Source coding

- How can we reduce the number of bits we need to transmit?
- What is the minimum number of bits we need for a particular symbol? (Source coding theorem)
- How can we encode symbols to achieve this minimum number of bits? (Huffman coding algorithm)

Equal probability symbols

Example:
Alphabet: S = {A, B}
Probabilities: $p_A = 0.5$, $p_B = 0.5$
Code words: A = 0, B = 1
Requires 1 bit for each symbol.

In general, for n equally-likely symbols:
- The probability of occurrence of each symbol is p = 1/n
- The number of bits to represent each symbol is $l = \log_2\frac{1}{p} = \log_2(n)$
Unequal probabilities?

Alphabet: $S = \{s_1, s_2, \ldots, s_K\}$, probabilities $p_1, p_2, \ldots, p_K$.

Any random sequence of N symbols (large N) contains:
s1: $N\times p_1$ occurrences, s2: $N\times p_2$ occurrences, and so on.

A particular sequence of N symbols: $S_N = \{s_1, s_2, s_1, s_3, s_3, s_2, s_1, \ldots\}$
The probability of this particular sequence occurring:
$p(S_N) = p_1\times p_2\times p_1\times p_3\times p_3\times p_2\times p_1\times\cdots = p_1^{Np_1}\times p_2^{Np_2}\times\cdots$

Unequal probabilities? (cont.)

The probability of any sequence of N symbols occurring:
$p(S_N) = p_1^{Np_1}\times p_2^{Np_2}\times\cdots$

The number of bits required to represent a sequence of N symbols:
$l_N = \log_2\frac{1}{p(S_N)} = -\log_2\!\left(p_1^{Np_1}\,p_2^{Np_2}\cdots\right) = -Np_1\log_2(p_1) - Np_2\log_2(p_2) - \cdots = -N\sum_{\text{all }k} p_k\log_2(p_k) = N\,H(S)$

The average number of bits for one symbol is:
$\bar{L} = \frac{l_N}{N} = H(S)$

Minimum codeword length

Source Coding Theorem:
For a general alphabet S, the minimum average codeword length is given by the entropy, H(S).

Significance: for any practical source coding scheme, the average codeword length will always be greater than or equal to the source entropy, i.e.,
$\bar{L} \geq H(S)$ bits/symbol

How can we design an efficient coding scheme?

Huffman coding algorithm

Optimum coding scheme – yields the shortest average codeword length.
Example

Consider a five-symbol alphabet having the probabilities indicated:
Symbols: A, B, C, D, E
Probabilities: $p_A = 0.05$, $p_B = 0.15$, $p_C = 0.4$, $p_D = 0.3$, $p_E = 0.1$
1. Calculate the entropy of the alphabet.
2. Using the Huffman algorithm, design a source coding scheme for this alphabet, and comment on the average codeword length achieved.

Huffman coding algorithm

The resulting code is:
- Uniquely decodable: only one way to break the bit stream into valid code words
- Instantaneous: know immediately when a code word has ended
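Below is a compact illustrative Python sketch of the Huffman algorithm (repeatedly merging the two least-probable entries), applied to the five-symbol alphabet of the example; the exact code words depend on tie-breaking, but the average codeword length should come out close to the entropy.

```python
import heapq
import math

# Huffman coding sketch: repeatedly merge the two least-probable nodes.
probs = {"A": 0.05, "B": 0.15, "C": 0.4, "D": 0.3, "E": 0.1}

# Heap entries: (probability, tie-breaker, {symbol: partial codeword})
heap = [(p, i, {s: ""}) for i, (s, p) in enumerate(sorted(probs.items()))]
heapq.heapify(heap)
counter = len(heap)
while len(heap) > 1:
    p0, _, c0 = heapq.heappop(heap)   # least probable node
    p1, _, c1 = heapq.heappop(heap)   # next least probable node
    merged = {s: "0" + w for s, w in c0.items()}
    merged.update({s: "1" + w for s, w in c1.items()})
    heapq.heappush(heap, (p0 + p1, counter, merged))
    counter += 1

code = heap[0][2]
H = -sum(p * math.log2(p) for p in probs.values())
L_bar = sum(probs[s] * len(w) for s, w in code.items())
print(code)
print(f"H(S) = {H:.3f} bits/symbol, L_bar = {L_bar:.2f} bits/symbol")
```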

Summary

Source coding theorem:
For a general alphabet S, the minimum average codeword length is given by the entropy, H(S).

Huffman coding algorithm:
A practical coding scheme that yields the shortest average codeword length.
Lecture 10

- How much information can be reliably transferred over a noisy channel? (Channel capacity)
- What does information theory have to say about analog communication systems?

(See sections 5.5, 5.6)

Reliable transfer of information

- If the channel is noisy, can information be transferred reliably?
- How much information?

Information rate

Definition:
$R = rH$ bits/sec
where R is the average number of information bits transferred per second, r is the average number of symbols per second, and H is the average number of information bits per symbol.

Intuition:
- R can be increased arbitrarily by increasing the symbol rate r
- For a noisy channel, errors are bound to occur
- Is there a value of R where the probability of error is arbitrarily small?

Channel capacity

Definition:
Channel capacity, C, is the maximum rate of information transfer over a noisy channel with arbitrarily small probability of error.

Channel Capacity Theorem:
If R ≤ C, then there exists a coding scheme such that symbols can be transmitted over a noisy channel with an arbitrarily small probability of error.
Channel capacity

The channel capacity theorem is a surprising result:
- Gaussian noise has a PDF that is non-zero for all noise amplitudes
- Sometimes (however infrequently) the noise must over-ride the signal, causing a bit error
- But the theorem says we can transfer information without error!
- The basic limitation due to noise is on the speed of communication, not on reliability

So what is the channel capacity C?

Hartley-Shannon Theorem
For an additive white Gaussian noise channel, the channel capacity is:
$C = B\log_2\!\left(1 + \frac{P_S}{P_N}\right)$
where B is the bandwidth of the channel, $P_S$ is the average signal power at the receiver, and $P_N$ is the average noise power at the receiver.

Example

Consider a baseband system. The noise power is:
$P_N = \int_{-B}^{B}\frac{N_o}{2}\,df = N_oB$

Channel capacity:
$C = B\log_2\!\left(1 + \frac{P_S}{N_oB}\right)$

Example

Consider a baseband channel with a bandwidth of B = 4 kHz. Assume a message signal with an average power of $P_s = 10$ W is transmitted over this channel, which has additive noise with a flat spectral density of height $N_o/2$ with $N_o = 10^{-6}$ W/Hz.
1. Calculate the channel capacity of this channel.
2. If the message signal is amplified by a factor of n before transmission, calculate the channel capacity when (a) n = 2, and (b) n = 10.
3. If the bandwidth of the channel is doubled to 8 kHz, what is the channel capacity now?
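An illustrative Python evaluation of this example (treating n as a direct multiplier on the signal power; if n is meant as an amplitude gain, the power scales as n² instead):

```python
import math

def capacity(Ps, No, B):
    """Hartley-Shannon capacity C = B*log2(1 + Ps/(No*B)) in bits/sec."""
    return B * math.log2(1 + Ps / (No * B))

B, Ps, No = 4e3, 10.0, 1e-6

print(f"1. C = {capacity(Ps, No, B):,.0f} bits/sec")            # roughly 45,000 bits/sec

# 2. Message amplified by a factor of n before transmission.
#    Assumption (mine): n multiplies the signal power; replace n by n**2 for an amplitude gain.
for n in (2, 10):
    print(f"2. n = {n}: C = {capacity(n * Ps, No, B):,.0f} bits/sec")

# 3. Bandwidth doubled to 8 kHz.
print(f"3. B = 8 kHz: C = {capacity(Ps, No, 8e3):,.0f} bits/sec")
```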
Comments

$C = B\log_2\!\left(1 + \frac{P_S}{P_N}\right)$

[Figure: capacity versus signal power for B = 4000 Hz, No = 10⁻⁶ W/Hz; capacity versus bandwidth for Ps = 10 W, No = 10⁻⁶ W/Hz]

- More signal power increases capacity, but the increase is slow
- Capacity can be increased arbitrarily through $P_S$
- More bandwidth allows more symbols per second, but also increases the noise
- Can show that $\lim_{B\to\infty} C = 1.44\,\frac{P_S}{N_o}$
- Capacity cannot be increased arbitrarily through B

More comments

$C = B\log_2\!\left(1 + \frac{P_S}{P_N}\right)$

This is the capacity of an ideal "best" system.
- How can we design something that comes close? Through channel coding and modulation/demodulation schemes.
- But no deterministic method exists to do it!

Information theory and analog systems

The optimum communication system achieves the largest SNR at the receiver output.
Optimum analog system

Assume that the channel noise is AWGN, having PSD $N_o/2$.
The average noise power at the demodulator input is $P_N = N_oB$.

SNR at the receiver input:
$\mathrm{SNR}_{in} = \frac{P_T}{N_oB} = \frac{W}{B}\,\frac{P_T}{N_oW} = \frac{W}{B}\,\mathrm{SNR}_{\mathrm{base}}$
where $P_T$ is the transmitted power and B/W is the bandwidth spreading ratio (transmission bandwidth / message bandwidth).

Maximum rate that information can arrive at the receiver:
$C_{in} = B\log_2(1 + \mathrm{SNR}_{in})$

Maximum rate that information can leave the receiver:
$C_{out} = W\log_2(1 + \mathrm{SNR}_{out})$

Ideally, no information is lost: $C_{out} = C_{in}$. Equating gives:
$\mathrm{SNR}_{out} = (1 + \mathrm{SNR}_{in})^{B/W} - 1 = \left(1 + \frac{W}{B}\,\mathrm{SNR}_{\mathrm{base}}\right)^{B/W} - 1$

For any increase in bandwidth, the output SNR increases exponentially.
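A brief illustrative Python evaluation of this ideal trade-off (assuming a fixed baseband SNR of 30 dB, chosen purely for illustration):

```python
import math

# Ideal-system output SNR versus bandwidth spreading ratio B/W (from the equation above).
snr_base_db = 30.0                       # assumed baseband SNR for illustration
snr_base = 10 ** (snr_base_db / 10)

for bw_ratio in (1, 2, 4, 8, 12):
    snr_out = (1 + snr_base / bw_ratio) ** bw_ratio - 1
    print(f"B/W = {bw_ratio:2d}: SNR_out = {10 * math.log10(snr_out):5.1f} dB")
```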

Analog performance

[Figure: ideal performance versus actual performance of analog modulation schemes]

Summary

Information rate: $R = rH$ bits/sec

Channel Capacity Theorem:
If R ≤ C, then there exists a coding scheme such that symbols can be transmitted over a noisy channel with an arbitrarily small probability of error.

Hartley-Shannon Theorem (Gaussian noise channel):
$C = B\log_2\!\left(1 + \frac{P_S}{P_N}\right)$ bits/sec

Analog communication systems: information theory tells us the best SNR.
