
ECE 544 Basic Probability and Random Processes

J. V. Krogmeier
August 26, 2014

Contents

1 Probability [1]
  1.1 Discrete Distributions
  1.2 Continuous Distributions
  1.3 Gaussian Properties
  1.4 Useful Theorems

2 Random Processes [2]
  2.1 Second Order RPs
  2.2 And LTI Systems
    2.2.1 Special Lemma on Correlation
    2.2.2 The Standard Correlation Formula
  2.3 Wide Sense Stationary RPs
  2.4 WSS and LTI Systems
  2.5 Theorem on Modulation
    2.5.1 Gaussian RPs
    2.5.2 AWGN

3 Communication Link Parameters [3]
  3.1 Nyquist's Noisy R Model
  3.2 Communication System Component Models
    3.2.1 Noiseless Components
    3.2.2 Noisy Components
    3.2.3 Combining Basic Blocks
    3.2.4 Cascades of Blocks
  3.3 Signal Power via Friis Equation

4 Linear Analog Communications in AWGN [4]
  4.1 Performance Metric
  4.2 Message and Noise
  4.3 Baseband Transmission
  4.4 Generic Passband Transmission
  4.5 SNR at IF for Specific Passband Modulations
    4.5.1 AM DSB-SC
    4.5.2 AM-LC
  4.6 Coherent Demodulators for Linear Modulations
    4.6.1 AM-DSB
    4.6.2 AM-LC

A Basic Math
  A.1 Trig. Identities
  A.2 Expansions/Sums
  A.3 Taylor Series
  A.4 Integration by Parts
  A.5 Convolution

B Spectral Analysis of Continuous Time Signals
  B.1 Fourier Series
  B.2 Continuous Time Fourier Transform (CTFT)

C Deterministic Autocorrelation and Power Spectral Density
  C.1 Energy Signals
  C.2 Power Signals

D Hilbert Transform
1 Probability [1]

1.1 Discrete Distributions

Bernoulli: A random variable (r.v.) X is said to be a Bernoulli r.v. with parameter p (0 ≤ p ≤ 1) if it takes only the two values 0 and 1 and its probability mass function (pmf) is of the form

    p_X(1) = Pr(X = 1) = p
    p_X(0) = Pr(X = 0) = 1 − p.

Binomial: A r.v. X is said to be a Binomial r.v. with parameters (N, p), where N is a positive integer and 0 ≤ p ≤ 1, if its pmf is of the form

    p_X(k) = C(N, k) p^k (1 − p)^{N−k}

for k = 0, 1, 2, …, N, where C(N, k) = N!/(k!(N − k)!). For such a r.v. X

    E(X) = Np
    Var(X) = Np(1 − p).

Poisson: A r.v. X is said to be a Poisson r.v. with parameter λ > 0 if its pmf is of the form

    p_X(k) = (λ^k / k!) e^{−λ}

for k = 0, 1, 2, …. For such a r.v. X

    E(X) = λ
    Var(X) = λ.

1.2 Continuous Distributions

Uniform: A r.v. X is said to be uniform on an interval a ≤ x ≤ b if its probability density function (pdf) is

    f_X(x) = 1/(b − a) for a ≤ x ≤ b, and 0 for x < a or x > b.

For such a r.v.

    E(X) = (a + b)/2
    Var(X) = (b − a)²/12.

Exponential: A r.v. X is said to be an exponential r.v. with parameter λ > 0 if its pdf is

    f_X(x) = λ e^{−λx} for x ≥ 0, and 0 for x < 0.

For such a r.v.

    E(X) = 1/λ
    Var(X) = 1/λ².

Rayleigh: A r.v. R is said to be Rayleigh distributed with parameter σ if its pdf is

    f_R(r) = (r/σ²) e^{−r²/(2σ²)} for r ≥ 0, and 0 for r < 0.

For such a r.v.

    E(R) = σ √(π/2)
    Var(R) = (1/2)(4 − π) σ².

Gaussian (single variate): A r.v. X is said to be a normal (or Gaussian) r.v. with parameters (μ, σ²), written X ~ N(μ, σ²), if its pdf is

    f_X(x) = (1/(σ√(2π))) e^{−(x−μ)²/(2σ²)}.

For such a r.v.

    E(X) = μ
    Var(X) = σ².

The Gaussian Q function gives the tail probability of a N(0, 1) r.v.,

    Q(x) = ∫_x^∞ (1/√(2π)) e^{−z²/2} dz.

Note that Q(−x) = 1 − Q(x). A table of values of the Q function is given on the next page.

Gaussian (two variable): Two r.v.s X and Y are said to be bivariate normal (or Gaussian) if their joint pdf is

    f_{X,Y}(x, y) = 1/(2π σ_x σ_y √(1 − ρ²)) · exp[ −F(x, y) / (2(1 − ρ²)) ]

where F(x, y) is the quadratic form

    F(x, y) = ((x − μ_x)/σ_x)² + ((y − μ_y)/σ_y)² − 2ρ (x − μ_x)(y − μ_y)/(σ_x σ_y).

For such r.v.s

    E(X) = μ_x
    E(Y) = μ_y
    Var(X) = σ_x²
    Var(Y) = σ_y²
    Cov(X, Y) = ρ σ_x σ_y

where −1 ≤ ρ ≤ +1. (The bivariate normal distribution can be generalized to an arbitrary number of random variables. Such r.v.s are called jointly Gaussian.)
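The Q function above has no closed form, but it can be evaluated through the complementary error function, Q(x) = ½ erfc(x/√2). A minimal sketch (the function name `gaussian_q` is an arbitrary choice, not from the notes) that also spot-checks the stated identity Q(−x) = 1 − Q(x):

```python
import math

def gaussian_q(x: float) -> float:
    """Tail probability Q(x) = Pr(N(0,1) > x), via the complementary
    error function: Q(x) = 0.5 * erfc(x / sqrt(2))."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

# Spot checks against the properties stated above.
print(round(gaussian_q(0.0), 6))           # 0.5: half the mass lies above the mean
print(round(gaussian_q(1.0), 4))           # 0.1587, the familiar one-sigma tail
print(gaussian_q(-2.0) + gaussian_q(2.0))  # 1.0, since Q(-x) = 1 - Q(x)
```

This is a convenient substitute for the tabulated Q values when a computer is at hand.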
1.3 Gaussian Properties

- Jointly Gaussian r.v.s X and Y are statistically independent if and only if (iff) they are uncorrelated, i.e., ρ = 0.

- A linear combination of an arbitrary number of jointly Gaussian r.v.s is a Gaussian r.v.

- Conditional Gaussian: Let r.v.s X and Y be jointly Gaussian with the bivariate pdf given above. Then the conditional pdf of X given Y = y is a single variable Gaussian pdf with

      E(X|Y = y) = μ_x + ρ (σ_x/σ_y)(y − μ_y)
      Var(X|Y = y) = σ_x² (1 − ρ²).

- Gaussian Moments: Let X be a Gaussian random variable with mean zero and variance σ², i.e., N(0, σ²). Then

      E(X^{2n}) = 1 · 3 ⋯ (2n − 1) σ^{2n}

  and

      E(X^{2n−1}) = 0

  for n = 1, 2, 3, ….

- Connection between Gaussian and Rayleigh: Let X and Y be zero mean jointly Gaussian r.v.s with equal variances σ² and ρ = 0 (i.e., they are statistically independent). Then the derived r.v.s R = √(X² + Y²) and Θ = arctan(Y/X) (four quadrant inverse tangent) are themselves statistically independent; R is Rayleigh with parameter σ and Θ is uniform on [0, 2π).

1.4 Useful Theorems

- Markov's Inequality: X a r.v. taking only nonnegative values. Then for any a > 0

      Pr{X ≥ a} ≤ E[X]/a.

- Chebyshev's Inequality: X a r.v. with finite mean μ and variance σ². Then for any value k > 0

      Pr{|X − μ| ≥ k} ≤ σ²/k².

- Weak Law of Large Numbers: X₁, X₂, … a sequence of independent and identically distributed (i.i.d.) r.v.s, each having a finite mean E[Xᵢ] = μ. Then, for any ε > 0

      Pr{ |(X₁ + ⋯ + X_n)/n − μ| > ε } → 0

  as n → ∞. (The sample mean converges to the true mean in probability.)

- Central Limit Theorem: X₁, X₂, … a sequence of i.i.d. r.v.s, each having mean μ and variance σ². Then the cdf of

      (X₁ + ⋯ + X_n − nμ)/(σ√n)

  tends to the cdf of the standard unit normal as n → ∞. (Convergence in distribution.)

- Strong Law of Large Numbers: X₁, X₂, … a sequence of independent and identically distributed (i.i.d.) r.v.s, each having a finite mean E[Xᵢ] = μ. Then

      Pr{ lim_{n→∞} (X₁ + ⋯ + X_n)/n = μ } = 1

  (i.e., the sample mean converges to the true mean with probability one).

2 Random Processes [2]

2.1 Second Order RPs

Assume all signals, impulse responses, and random processes X(t), Y(t) are real-valued in this section. Assume that all random variables have finite variance (hence also have finite means). Define moment functions:

- Mean: μ_X(t) = E[X(t)].
- Cross-correlation: R_{X,Y}(t, s) = E[X(t)Y(s)].
- Cross-covariance: C_{X,Y}(t, s) = R_{X,Y}(t, s) − μ_X(t) μ_Y(s).

We get the auto-correlation R_{X,X}(t, s) and auto-covariance C_{X,X}(t, s) when Y ≡ X in the definitions above.

2.2 And LTI Systems

Let the impulse response of an LTI system be BIBO stable. Then if a second order rp X(t) is input to the system h(t) ↔ H(f), the output Y(t) is also second order.

- The mean of the output rp is equal to the result of passing the input mean through the LTI system:

      μ_Y(t) = h * μ_X(t).

- The cross-correlation of input and output and the auto-correlation of the output can be computed via application of the LTI filter as well. First, we give a general lemma.

2.2.1 Special Lemma on Correlation

Let A(t) and B(t) be 2nd order rps. Let h₁(t) and h₂(t) be BIBO stable impulse responses. Generate two additional rps via:

    C(t) = h₁ * A(t)
    D(t) = h₂ * B(t).

Then the cross-correlation of the outputs is obtained via either route of the commuting diagram:

    R_{A,B}(t, s) ──[h₁ in t]──→ R_{C,B}(t, s) ──[h₂ in s]──→ R_{C,D}(t, s)
    R_{A,B}(t, s) ──[h₂ in s]──→ R_{A,D}(t, s) ──[h₁ in t]──→ R_{C,D}(t, s)

2.2.2 The Standard Correlation Formula

Let A ≡ B = X and h₁ ≡ h₂ = h, so Y = h * X. Then the correlation formula for the standard case reduces to:

    R_{X,X}(t, s) ──[h in t]──→ R_{Y,X}(t, s) ──[h in s]──→ R_{Y,Y}(t, s)
    R_{X,X}(t, s) ──[h in s]──→ R_{X,Y}(t, s) ──[h in t]──→ R_{Y,Y}(t, s)

2.3 Wide Sense Stationary RPs

To the assumption of finite variance used in the previous sections we here add the assumption that the mean functions are independent of time and that correlations and cross correlations depend only upon the time difference or time lag. A single process with this property is called wide sense stationary (WSS); for a pair of rps we use the term jointly wide sense stationary (JWSS).

In symbols, rps X(·), Y(·) are JWSS if μ_X(t) ≡ μ_X, μ_Y(t) ≡ μ_Y for all times t ∈ ℝ and

    R_{X,Y}(t, s) = R_{X,Y}(t + Δ, s + Δ)

for all times t, s, Δ ∈ ℝ. This means that the correlation function really only depends upon the time lag τ = s − t between the two time samples.

When JWSS one typically redefines the notation as shown below:

- Mean: μ_X = E[X(t)].
- Autocorrelation: R_X(τ) = E[X(t)X(t + τ)].
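The Gaussian–Rayleigh connection in Section 1.3 is easy to confirm by simulation. A minimal Monte Carlo sketch (sample size, seed, and σ = 2 are arbitrary choices for illustration) comparing the sample mean and variance of R = √(X² + Y²) against E(R) = σ√(π/2) and Var(R) = ½(4 − π)σ²:

```python
import math
import random

random.seed(1)
sigma = 2.0
n = 200_000

# Independent zero-mean Gaussians X, Y with equal variance sigma^2;
# R = sqrt(X^2 + Y^2) should then be Rayleigh with parameter sigma.
r = [math.hypot(random.gauss(0, sigma), random.gauss(0, sigma))
     for _ in range(n)]

mean_r = sum(r) / n
var_r = sum((v - mean_r) ** 2 for v in r) / n

print(round(mean_r, 3))                             # ~ sigma*sqrt(pi/2) = 2.507
print(round(var_r, 3))                              # ~ 0.5*(4-pi)*sigma^2 = 1.717
```

With 2×10⁵ samples the estimates agree with the closed-form moments to a couple of decimal places.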
- Autocovariance: C_X(τ) = R_X(τ) − μ_X².
- Cross-correlation: R_{X,Y}(τ) = E[X(t)Y(t + τ)].
- Cross-covariance: C_{X,Y}(τ) = R_{X,Y}(τ) − μ_X μ_Y.

Then the following definitions make sense:

- Power:

      power[X(t)] := R_X(0) = C_X(0) + μ_X² = ac power + dc power.

- Power spectral density S_X(f):

      R_X(τ) ↔ S_X(f);   power[X(t)] = R_X(0) = ∫_{−∞}^{∞} S_X(f) df.

2.4 WSS and LTI Systems

WSS X(t) input to LTI system with h(t) ↔ H(f). Then the output Y(t) = X * h(t) is WSS, X(t) and Y(t) are jointly WSS, and:

- Mean:

      μ_Y = μ_X ∫ h(t) dt = μ_X H(0).

- Cross-correlation / cross spectral density:

      R_{X,Y}(τ) = h * R_X(τ)
      ↕
      S_{X,Y}(f) = H(f) S_X(f).

- Autocorrelation / psds:

      R_Y(τ) = h * h̃ * R_X(τ)
      ↕
      S_Y(f) = |H(f)|² S_X(f)

  where h̃(t) := h(−t), so h̃(t) ↔ H*(f).

These relations can be viewed as a cascade of two LTI systems:

    S_X(f) ──[H(f)]──→ S_{X,Y}(f) ──[H*(f)]──→ S_Y(f).

2.5 Theorem on Modulation

A(t), B(t) jointly WSS. The r.v. Θ is uniform on [0, 2π), statistically independent of A(t), B(t).

Thm Part A: Then X(t) = A(t) cos(2πf_c t + Θ) is WSS with μ_X = 0 and

    R_X(τ) = 0.5 R_A(τ) cos(2πf_c τ)
    ↕
    S_X(f) = 0.25 [S_A(f − f_c) + S_A(f + f_c)].

Thm Part B: If R_A(τ) = R_B(τ) and R_{A,B}(τ) = −R_{B,A}(τ), then

    X(t) = A(t) cos(2πf_c t) − B(t) sin(2πf_c t)

has

    R_X(τ) = R_A(τ) cos(2πf_c τ) − R_{A,B}(τ) sin(2πf_c τ)
    ↕
    S_X(f) = 0.5 [S_A(f − f_c) + S_A(f + f_c)] + j0.5 [S_{A,B}(f − f_c) − S_{A,B}(f + f_c)].

Moreover, if A(t), B(t) are zero mean, then X(t) has zero mean and is WSS.

Thm Part C: Then

    X(t) = A(t) cos(2πf_c t + Θ) − B(t) sin(2πf_c t + Θ)

is zero mean, WSS with

    R_X(τ) = 0.5 [R_A(τ) + R_B(τ)] cos(2πf_c τ) − 0.5 [R_{A,B}(τ) − R_{B,A}(τ)] sin(2πf_c τ)
    ↕
    S_X(f) = 0.25 [S_A(f − f_c) + S_B(f − f_c) + S_A(f + f_c) + S_B(f + f_c)]
             + j0.25 [S_{A,B}(f − f_c) − S_{A,B}(f + f_c) − S_{B,A}(f − f_c) + S_{B,A}(f + f_c)].

2.5.1 Gaussian RPs

- X(t) is a Gaussian r.p. if any finite collection of time samples from the process, X(t₁), X(t₂), …, X(t_N), is a set of jointly Gaussian random variables.

- If the input to an LTI system is a WSS Gaussian r.p., then the output is WSS Gaussian. Moreover, input and output are jointly Gaussian r.p.s.

- X(t) WSS and Gaussian. If C_X(τ) = 0, then X(t) and X(t + τ) are statistically independent for any t.
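The mean relation μ_Y = μ_X H(0) of Section 2.4 has a direct discrete-time analogue: for an FIR filter, H(0) = Σ_n h[n], so a WSS input's mean is scaled by the sum of the taps. A minimal sketch under that discrete-time assumption (the tap values are hypothetical):

```python
# Discrete-time analogue of mu_Y = mu_X * H(0): feed the constant mu_X
# (the mean of a WSS input) into an FIR filter h[n] and compare the
# steady-state output against mu_X * sum(h) = mu_X * H(0).
h = [0.5, 0.3, 0.2, -0.1]   # hypothetical BIBO-stable impulse response
mu_x = 4.0

def fir_output_at(n: int, x_const: float) -> float:
    """Convolution sum at time n for a constant input switched on at n = 0."""
    return sum(h[k] * x_const for k in range(min(n + 1, len(h))))

H0 = sum(h)                                # DC gain H(0) of the filter
print(round(fir_output_at(10, mu_x), 6))   # 3.6: output settles at the scaled mean
print(round(mu_x * H0, 6))                 # 3.6 as well
```

Once the filter memory fills (n ≥ 3 here), the output of the constant input equals μ_X H(0) exactly.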
- X(t), Y(t) jointly WSS and jointly Gaussian. If C_{X,Y}(τ) = 0, then X(t) and Y(t + τ) are statistically independent for any t.

2.5.2 AWGN

A WSS Gaussian r.p. N(t) with zero mean and autocorrelation / psd

    R_N(τ) = (N₀/2) δ(τ)
    ↕
    S_N(f) = N₀/2 for −∞ < f < ∞

is said to be a white Gaussian noise (WGN). When N(t) appears in a problem added to a desired signal, we call it additive white Gaussian noise (AWGN).

3 Communication Link Parameters [3]

Noise arises in communications systems from two main sources: 1) thermal noise¹ associated with the random motion of electrons in a resistor or other lossy devices, and 2) shot noise associated with the discrete nature of charge carriers in tubes and semiconductors. Thermal noise voltages are proportional to the square root of temperature while shot noise is independent of temperature. Both are wideband and often modeled as white.

¹ Also called Johnson or Nyquist noise.

3.1 Nyquist's Noisy R Model

A resistor of value R at temperature T K has across its terminals a random noise voltage with Gaussian distribution, zero mean, and flat psd. The noisy R at temperature T is modeled as a noiseless R in series with a noise voltage source v(t), WSS and zero-mean, with

    S_v(f) = 2kTR   (−∞ < f < ∞)

where k = 1.38 × 10⁻²³ J/K.

Maximum power from a noisy resistor is delivered to a matched load, and the maximum power that can be delivered over a one-sided bandwidth B is

    P_max = (kT/2) · 2B = kTB.

Therefore, the two-sided available power spectral density (psd) of a resistive noise source is flat with height kT/2, depending only on the physical temperature T.

3.2 Communication System Component Models

Important components in communications systems include antennas, filters, oscillators, noise sources, mixers, amplifiers, and attenuators. These blocks are used to implement modulators, demodulators, and detectors. They are used to process signals in the presence of noise with the goal of minimizing noise related impairments. However, since the blocks are constructed from electronics and lossy elements, they introduce noise in their own right.

The tables below show the basic blocks. The first contains the fundamental noiseless components and the second contains the fundamental noisy components. Basic parameters are given.

All noise psds in the equations of this section will be assumed to be flat (white) over the frequency band of interest. All filters are assumed to be ideal and all impedances are assumed to be matched².

² Non-ideal filters can be accommodated under this assumption provided that bandwidths are defined as noise equivalent bandwidths.

3.2.1 Noiseless Components

    Component       Parameters
    Antenna         G_ant
    Noiseless LPF   f_H
    Noiseless BPF   f_L, f_H
    Noiseless HPF   f_L
    Oscillator      P_osc, f_osc

Antenna. An ideal noiseless antenna is characterized by its gain, G_ant, which is the peak of its power gain pattern vs. direction. A secondary parameter is the effective area, A_ant, related to gain via

    G_ant = 4π A_ant / λ²

where λ is the operating wavelength.

Filters. Ideal noiseless filters have symmetric, brickwall responses and unity passband gain. Therefore, the only parameters are the passband cutoff frequencies (f_H for the LPF; f_L and f_H for the BPF).

Oscillators. An ideal sinusoidal oscillator produces a signal

    v_osc(t) = √(2 P_osc) cos(2π f_osc t + θ)

where θ may be modeled as either a deterministic or random phase offset.

3.2.2 Noisy Components

    Component      Parameters
    Noise Source   T_e
    Amplifier      T_e, G_amp
    Attenuator     T, L
    Mixer          T_e

Noise Sources. Modeled as though they were noisy resistors, i.e., they are zero mean, white, Gaussian random processes with two-sided power spectral density

    S_v(f) = kT_e/2,   −∞ < f < ∞.

The only parameter is the equivalent noise temperature, T_e, given in K. Boltzmann's constant is k = 1.38 × 10⁻²³ J/K.

Amplifiers. Parameters are power gain G_amp and equivalent temperature T_e. Ideal amplifiers are assumed of infinite bandwidth.

  * Voltage gain is V_amp = √(G_amp).

  * The effect of internal noise sources is modeled by placing an additive white noise source n_w(t), with S_{n_w}(f) = kT_e/2, at the input of a noiseless amplifier of gain G_amp. The parameter is the equivalent noise temperature T_e. Then the noise power in the amplifier output in a one-sided BW B due to internal sources is

        P_out,internal = k T_e G_amp B.

  * The standard noise figure is defined by comparing output noise power due to internal sources to output noise power due to an external noise source at standard temperature T₀ = 290 K. That is, in a band of one-sided BW B:

        P_out = kT₀ G_amp B + kT_e G_amp B
                (external)    (internal)

    Then

        F = P_out / P_out,external = 1 + P_out,internal / P_out,external = 1 + T_e/T₀ ≥ 1.

    Also T_e = T₀(F − 1).

  * May also interpret noise figure and equivalent noise temperature in terms of the degradation in signal-to-noise ratio occurring due to the addition of internal noise. With input signal power P_s and input noise power P_n,

        SNR_in = P_s / P_n
        SNR_out = G_amp P_s / [G_amp (kT_e B + P_n)]

    assuming that the system does not filter or reduce the bandwidth of either the input signal or input noise. Can show that in a band of one-sided BW B:

        SNR_in / SNR_out = 1 + kT_e B / P_n ≥ 1.

Attenuators. Can be treated in the same way as amplifiers with the exception that the two independent parameters are the power loss L_atten = 1/G_atten and the physical temperature T of the attenuator. In these terms

    T_e = (L_atten − 1) T
    F = 1 + (L_atten − 1) T/T₀.

Mixers. From the random process Modulation Theorem (Part A), a mixer multiplying its input by cos(2πf₀t + Θ) looks to a white input as a simple power gain G_mixer = 1/2. Hence, there is only one fundamental parameter T_e. Also F = 1 + T_e/T₀.

3.2.3 Combining Basic Blocks

More complex blocks can be created from the above by combining them with noiseless blocks. Any noisy block can be made noiseless by setting T_e = 0 K in the model.

Noisy bandlimited amplifier: the cascade of the noisy amplifier model with a noiseless BPF (unity passband gain, passband f_L to f_H).

Lossy Filter: This can be made using the noisy bandlimited amplifier block. Here, however, T_e = T, the physical temperature of the filter, and G_amp < 1 since a passive filter attenuates an input signal.

Receiver Antenna: Reciprocity holds. Therefore, antenna properties such as pattern, gain, impedance, and loss must be the same for an antenna whether it is used in receive or transmit mode.

  * However, since signal powers input to an antenna in receive mode are typically many orders of magnitude (i.e., many tens of dB) smaller than signal powers output from an antenna in transmit mode, thermal noise associated with antenna losses is only significant in receive mode.

  * Furthermore, in receive mode an antenna will pick up background noise³ from the external environment along with any desired signal.

  * The noisy receive antenna model is a cascade of an ideal antenna with a noise source, S_{n_w}(f) = kT_ant/2, referenced to the output of the receive antenna. If T_ant is the antenna noise temperature, then

        k T_ant B = noise power input to the rest of the receiver

    in a one-sided BW of B Hz.

  * If background black-body noise is combined with atmospheric attenuation effects⁴, then the noise temperature T_ant in the model approximately follows the sky noise temperature curves from Pozar [3]. The angle parameter θ is the elevation of the antenna above the horizon. (Figure: sky noise temperature vs. frequency⁷ — low elevation angles and high frequencies are disadvantaged relative to frequencies below 10 GHz. The microwave window⁸ is the band of frequencies lying above galactic noise and below absorption noise.)

³ Sources of natural and man-made background noise include: cosmic remnants of the big bang, sun and stars, thermal noise from ground, lightning, high voltage lines, interference from electronics and lighting, and other undesired radio transmissions.
⁴ Note that if we wish also to model the attenuation of the signal due to atmospheric loss, then a noiseless attenuator should follow the noisy antenna model. Attenuation would depend strongly on frequency, particularly near water and oxygen resonances, and it would also depend weakly on elevation.
⁷ Pozar, D., Microwave and RF Design of Wireless Systems, J. Wiley, 2001, pg. 127.
⁸ Sklar, pg. 225.

3.2.4 Cascades of Blocks

For a cascade of input-output blocks the overall noise figure and equivalent noise temperature are:

    F = F₁ + (F₂ − 1)/G₁ + (F₃ − 1)/(G₁G₂) + ⋯
    T_e = T_{e1} + T_{e2}/G₁ + T_{e3}/(G₁G₂) + ⋯

3.3 Signal Power via Friis Equation

For transmit power P_T, antenna gains G_T and G_R, and link distance D,

    P_R = (λ/(4πD))² G_T G_R P_T.

4 Linear Analog Communications in AWGN [4]

System model:

    m(t) → [modulator] → x(t) → [channel H_chan(f), AWGN n_w(t) added] → x_r(t) → [demodulator] → y_D(t)

4.1 Performance Metric

Assume the decomposition (demodulator output equals sum of components due to message and noise):

    y_D(t) = y_D(t|m) + y_D(t|n_w).

Possibilities: mean-squared error (MSE), signal-to-noise ratio (SNR), etc. We choose SNR; the goal is to maximize

    SNR_D := power[y_D(t|m)] / power[y_D(t|n_w)].

Issues with SNR as metric:

  * The decomposition of y_D(t) into a sum of message and noise components must be unambiguous (true for linear demodulators, an approximation for non-linear demodulators).

  * Must maximize SNR under a constraint of bounded transmit power.

4.2 Message and Noise

Model the message m(t) as a WSS random process with R_m(τ) ↔ S_m(f) and bandlimited:

    S_m(f) = 0 for |f| > W.

n_w(t) is modeled as AWGN with psd height N₀/2. Message and noise are assumed statistically independent.

4.3 Baseband Transmission

Corresponds to the analog comm. model with x(t) = m(t), H_chan(f) = 1, and the demodulator an ideal LPF of bandwidth W. Then

    x_r(t) = m(t) + n_w(t)
    y_D(t) = h_lpf * m(t) + h_lpf * n_w(t) = m(t) + n(t)

(assume the LPF passes m(t) without distortion).

- Transmitted signal power P_T = P_m.

- Received signal component: identical to the transmitted signal component, so the power in the received signal component = P_T.

- Noise component n(t): Assume an ideal LPF with gain 1.0 and bandwidth set to the minimum needed to pass the signal undistorted; then the psd S_n(f) is flat with height N₀/2 over −W ≤ f ≤ W, which has power N₀W.
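The cascade formulas of Section 3.2.4 are easy to turn into a small calculator, which also makes the familiar design lesson concrete: a high-gain, low-noise stage should come first. A minimal sketch (the stage gains and noise figures are hypothetical numbers, not from the notes):

```python
def cascade(stages):
    """Overall noise figure F and equivalent temperature Te of a cascade,
    per F = F1 + (F2-1)/G1 + (F3-1)/(G1*G2) + ... and Te = T0*(F-1).
    `stages` is a list of (linear_gain, noise_figure) pairs ordered from
    the input toward the output."""
    T0 = 290.0
    F = 1.0
    g_product = 1.0
    for gain, figure in stages:
        F += (figure - 1.0) / g_product
        g_product *= gain
    return F, T0 * (F - 1.0)

# Hypothetical link: LNA (G = 100, F = 1.5) and a lossy cable
# (G = 0.5, F = 2.0), in the two possible orders.
F_good, _ = cascade([(100.0, 1.5), (0.5, 2.0)])
F_bad, _ = cascade([(0.5, 2.0), (100.0, 1.5)])
print(round(F_good, 3))   # 1.51 -- the high-gain stage masks the cable's noise
print(round(F_bad, 3))    # 3.0  -- loss first amplifies the LNA's contribution
```

The first stage's gain divides every later stage's excess noise, which is why the ordering matters so much.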
Therefore SNR_D = P_m/(N₀W) = P_T/(N₀W).

4.4 Generic Passband Transmission

Zero SNR: For simplicity set H_chan(f) = 1. Then x_r(t) = x(t) + n_w(t). The SNR at this point,

    SNR_r := power[x(t)] / power[n_w(t)],

is always zero since white Gaussian noise has infinite power.

Nonzero SNR: In order to have a non-zero SNR at passband we need to apply a predetection BPF H_BP(f) (unity gain, bandwidth 2B centered on ±f₀) which passes the signal component and limits the power in the noise component:

    x_if(t) = x * h_BP(t) + n_w * h_BP(t) = x(t) + n(t)

where we assume the BPF does not distort the signal, and n(t) is a bandpass Gaussian noise with zero mean and psd of height N₀/2 with shape identical to that of H_BP(f). Now SNR_if > 0 since the noise component in x_if(t) has finite power. In fact

    SNR_if := power[x(t)] / power[n(t)] = R_x(0) / (2N₀B).

4.5 SNR at IF for Specific Passband Modulations

In order to make x(t) WSS, let m(t) be the message with baseband BW = W. Assume it is a WSS r.p. with zero mean and power P_m = R_m(0). Also assume that the r.v. Θ below is uniform on [0, 2π) and statistically independent of all other r.v.s.

4.5.1 AM DSB-SC

    x(t) = A_c m(t) cos(2π f_c t + Θ).

Autocorrelation:

    R_x(τ) = 0.5 A_c² R_m(τ) cos(2π f_c τ).

Power spectrum:

    S_x(f) = 0.25 A_c² [S_m(f − f_c) + S_m(f + f_c)].

(If S_m(f) has peak M₀ over |f| ≤ W, then S_x(f) consists of copies of height 0.25 M₀ A_c² occupying f_c − W ≤ |f| ≤ f_c + W.)

Computing SNR at IF:

  * power[x(t)] = R_x(0) = 0.5 A_c² P_m.
  * Transmission BW = 2W ⇒ minimum noise power = 2N₀W.
  * Ratio:

        SNR_if = A_c² P_m / (4 N₀ W) = P_T / (2 N₀ W).

4.5.2 AM-LC

    x(t) = A_c [1 + k_a m(t)] cos(2π f_c t + Θ).

Autocorrelation:

    R_x(τ) = 0.5 A_c² [1 + k_a² R_m(τ)] cos(2π f_c τ).

Power spectrum:

    S_x(f) = 0.25 A_c² [δ(f − f_c) + δ(f + f_c)] + 0.25 k_a² A_c² [S_m(f − f_c) + S_m(f + f_c)].

(Here S_x(f) shows carrier impulses of weight 0.25 A_c² at ±f_c plus message sidebands of peak height 0.25 M₀ k_a² A_c².)

Computing SNR at IF:

  * power[x(t)] = R_x(0) = 0.5 A_c² (1 + k_a² P_m).
  * Transmission BW = 2W ⇒ minimum noise power = 2N₀W.
  * Ratio:

        SNR_if = A_c² (1 + k_a² P_m) / (4 N₀ W) = P_T / (2 N₀ W).

4.6 Coherent Demodulators for Linear Modulations

The coherent demodulator for linear modulations is a mixer followed by a low pass filter. With x_if(t) = x(t) + n(t):

    y_D(t) = LPF{ x_if(t) cos(2π f_c t + Θ) }.

4.6.1 AM-DSB

    x(t) = A_c m(t) cos(2π f_c t + Θ).

Signal component of y_D:

    A_c m(t) cos²(2π f_c t + Θ) = 0.5 A_c m(t) + 0.5 A_c m(t) cos(2π(2f_c)t + 2Θ).

  * Passes LPF: 0.5 A_c m(t).
  * Power in signal component: 0.25 A_c² P_m.

Noise component of y_D: n(t) cos(2π f_c t + Θ).

  * The above is WSS with

        R(τ) = 0.5 R_n(τ) cos(2π f_c τ)
        ↕
        S(f) = 0.25 S_n(f − f_c) + 0.25 S_n(f + f_c).

    (The shifted copies of the bandpass noise psd, each of height N₀/8, overlap around f = 0 to give height N₀/4 over |f| ≤ W; this part passes the LPF to the output.)

  * Power in noise component: 0.5 N₀ W.

Demodulated SNR:

    SNR_D = 0.25 A_c² P_m / (0.5 N₀ W) = P_T / (N₀ W).

4.6.2 AM-LC

    x(t) = A_c [1 + k_a m(t)] cos(2π f_c t + Θ).

Signal component of y_D:

  * Passes LPF: 0.5 A_c [1 + k_a m(t)].
  * Passes dc block: 0.5 A_c k_a m(t).
  * Power in desired signal component: 0.25 A_c² k_a² P_m.

Noise component of y_D:

  * Identical to the DSB case.
  * Power in noise component: 0.5 N₀ W.

Demodulated SNR:

    SNR_D = 0.25 A_c² k_a² P_m / (0.5 N₀ W) = [k_a² P_m / (1 + k_a² P_m)] · P_T / (N₀ W).
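The demodulated-SNR results of Sections 4.3 and 4.6 can be collected into a few one-line functions for side-by-side comparison. A minimal sketch (the link budget numbers P_T, N₀, W below are hypothetical):

```python
def snr_baseband(PT, N0, W):
    """Baseband transmission: SNR_D = PT / (N0 * W)."""
    return PT / (N0 * W)

def snr_dsb(PT, N0, W):
    """Coherent AM DSB-SC achieves the same SNR_D as baseband."""
    return PT / (N0 * W)

def snr_am_lc(PT, N0, W, ka2Pm):
    """AM-LC pays for the carrier: the efficiency factor
    ka^2*Pm / (1 + ka^2*Pm) < 1 multiplies the baseband SNR_D."""
    return (ka2Pm / (1.0 + ka2Pm)) * PT / (N0 * W)

# Hypothetical link: PT = 1 mW, N0 = 1e-9 W/Hz, W = 5 kHz.
PT, N0, W = 1e-3, 1e-9, 5e3
print(round(snr_dsb(PT, N0, W), 6))              # 200.0, the baseband benchmark
print(round(snr_am_lc(PT, N0, W, 1.0), 6))       # 100.0, i.e. 3 dB worse at ka^2*Pm = 1
```

This makes the qualitative conclusion of Section 4.6 explicit: DSB-SC matches the baseband benchmark, while AM-LC gives up the fraction of transmit power spent on the carrier.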
A Basic Math

A.1 Trig. Identities

    e^{jθ} = cos(θ) + j sin(θ)
    cos(θ) = (1/2)(e^{jθ} + e^{−jθ})
    sin(θ) = (1/2j)(e^{jθ} − e^{−jθ})
    sin(α + β) = sin(α) cos(β) + cos(α) sin(β)
    cos(α + β) = cos(α) cos(β) − sin(α) sin(β)
    sin(α) sin(β) = (1/2) cos(α − β) − (1/2) cos(α + β)
    cos(α) cos(β) = (1/2) cos(α − β) + (1/2) cos(α + β)
    sin(α) cos(β) = (1/2) sin(α − β) + (1/2) sin(α + β)
    sin²(θ) = (1/2)[1 − cos(2θ)]
    cos²(θ) = (1/2)[1 + cos(2θ)]

A.2 Expansions/Sums

    exp(x) = 1 + x + x²/2! + x³/3! + ⋯
    cos(x) = 1 − x²/2! + x⁴/4! − ⋯
    sin(x) = x − x³/3! + x⁵/5! − ⋯
    Σ_{n=0}^{∞} aⁿ = 1/(1 − a)   if |a| < 1
    Σ_{n=0}^{N} aⁿ = (1 − a^{N+1})/(1 − a)   if a ≠ 1
    Σ_{k=0}^{N} k = N(N + 1)/2

A.3 Taylor Series

    f(x) = f(a) + f′(a)(x − a) + (f″(a)/2!)(x − a)² + ⋯ + (f⁽ⁿ⁾(a)/n!)(x − a)ⁿ + ⋯

A.4 Integration by Parts

    ∫ u dv = uv − ∫ v du

A.5 Convolution

    x * y(t) = ∫_{−∞}^{+∞} x(τ) y(t − τ) dτ

B Spectral Analysis of Continuous Time Signals

B.1 Fourier Series

x(t) periodic, i.e., x(t) = x(t + T₀) for all t, and f₀ := 1/T₀. Then

    x(t) = Σ_{n=−∞}^{∞} X_n e^{j2πnf₀t}
    ↕
    X_n = (1/T₀) ∫_0^{T₀} x(t) e^{−j2πnf₀t} dt

B.2 Continuous Time Fourier Transform (CTFT)

    x(t) = ∫_{−∞}^{∞} X(f) e^{j2πft} df
    ↕
    X(f) = ∫_{−∞}^{∞} x(t) e^{−j2πft} dt

- CTFT properties in Figure 1.

- CTFT pairs in Figure 2.

- x(t), periodic in time with Fourier Series X_k, has a CTFT which is a weighted impulse train in frequency: X(f) = Σ_k X_k δ(f − kf₀).

C Deterministic Autocorrelation and Power Spectral Density

C.1 Energy Signals

x(t) ↔ X(f) of finite energy.

- Autocorrelation: R_x(τ) := ∫ x(t) x(t + τ) dt.

- Energy Density Spectrum: S_x(f) = |X(f)|².

- Fact: R_x(τ) ↔ S_x(f).
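Two of the entries above are easy to sanity-check numerically: the product-to-sum identity sin(α)cos(β) = ½ sin(α − β) + ½ sin(α + β), and the finite geometric sum. A minimal sketch (the sample values α, β, a, N are arbitrary):

```python
import math

# Product-to-sum identity from A.1 at an arbitrary pair of angles.
a, b = 0.7, 1.9
lhs = math.sin(a) * math.cos(b)
rhs = 0.5 * math.sin(a - b) + 0.5 * math.sin(a + b)
print(abs(lhs - rhs) < 1e-12)    # True

# Finite geometric sum from A.2: direct summation vs. closed form.
r, N = 0.8, 25
direct = sum(r ** n for n in range(N + 1))
closed = (1 - r ** (N + 1)) / (1 - r)
print(abs(direct - closed) < 1e-9)   # True
```

Checks like these are a quick guard against sign errors when the identities are applied in the modulation derivations of Section 4.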

Figure 1: Some Properties of Fourier Transforms. (From [5])

Figure 2: Some Fourier Transform Pairs (From [5])

C.2 Power Signals

If the CTFT exists denote it: x(t) ↔ X(f).

- Time Average Operator: For an arbitrary function f(t)

      ⟨f(t)⟩ := lim_{T→∞} (1/2T) ∫_{−T}^{T} f(t) dt

  (⟨f(t)⟩ is not a function of t; the notation is used only to show the averaging variable).

- Autocorrelation: R_x(τ) := ⟨x(t) x(t + τ)⟩.

- Properties of autocorrelation:

  * R_x(0) ≥ |R_x(τ)|.
  * R_x(−τ) = R_x(τ).
  * lim_{|τ|→∞} R_x(τ) = ⟨x(t)⟩² if x(t) does not contain periodic components.
  * If x(t) is periodic with period T₀ then so is R_x(τ).
  * The CTFT of R_x(τ) is non-negative for all f.

- Power Density Spectrum: Defined to be the CTFT of the autocorrelation:

      R_x(τ) ↔ S_x(f).

D Hilbert Transform

The Hilbert Transform m̂(t) of a real-valued finite energy signal m(t) is the output of the LTI system shown in Fig. 3 below (which is a −π/2 phase shifter):

    h_Hilbert(t) = 1/(πt)
    H_Hilbert(f) = −j sgn(f)

Figure 3: Hilbert transform as LTI system (m(t) → 1/(πt) → m̂(t)).

Notes:

- m(t) and m̂(t) have equal energy.

- m(t) and m̂(t) are orthogonal, i.e.,

      ∫_{−∞}^{∞} m(t) m̂(t) dt = 0.

- The complex-valued signal m(t) + j m̂(t) has a Fourier Transform that is identically zero on the negative frequencies. Such signals are called analytic signals.

- The HT can also be defined for power signals.

References

[1] S. M. Ross. A First Course in Probability. Prentice-Hall, Upper Saddle River, NJ, fifth edition, 1998.

[2] A. Papoulis. Probability, Random Variables, and Stochastic Processes. McGraw-Hill, New York, 1965.

[3] D. M. Pozar. Microwave and RF Design of Wireless Systems. Wiley, New York, 2001.

[4] R. E. Ziemer and W. H. Tranter. Principles of Communications. Wiley, Hoboken, NJ, sixth edition, 2009.

[5] W. M. Siebert. Circuits, Signals, and Systems. The MIT Press, Cambridge, MA, 1986.
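A discrete analogue of the Hilbert transformer H(f) = −j sgn(f) can be built directly in the DFT domain, which also lets us verify the equal-energy and orthogonality notes above on a sampled tone. A minimal sketch, assuming NumPy is available and the signal length is even (the function name `hilbert_transform` is our own):

```python
import numpy as np

def hilbert_transform(m: np.ndarray) -> np.ndarray:
    """FFT-based discrete Hilbert transform: multiply the spectrum by
    -j*sgn(f), with sgn evaluated on the DFT bin frequencies
    (bins 1..N/2-1 positive, bins N/2+1..N-1 negative, DC and
    Nyquist left at zero). Assumes an even length N."""
    N = len(m)
    M = np.fft.fft(m)
    sgn = np.zeros(N)
    sgn[1:N // 2] = 1.0
    sgn[N // 2 + 1:] = -1.0
    return np.real(np.fft.ifft(-1j * sgn * M))

n = np.arange(256)
m = np.cos(2 * np.pi * 8 * n / 256)
mh = hilbert_transform(m)                 # the HT of a cosine is the matching sine
print(np.allclose(mh, np.sin(2 * np.pi * 8 * n / 256), atol=1e-10))  # True
print(abs(np.dot(m, mh)) < 1e-9)          # True: m and its HT are orthogonal
print(abs(np.sum(m**2) - np.sum(mh**2)) < 1e-9)   # True: equal energies
```

For an on-bin tone the result is exact up to floating-point roundoff, mirroring the continuous-time facts that cos maps to sin under H(f) = −j sgn(f).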