Transmission
4.1 Introduction
o Transmission of digital data (a bit stream) over a noisy baseband channel typically suffers from two channel impairments:
n Intersymbol interference (ISI)
n Background noise (e.g., AWGN)
o These two impairments often occur simultaneously. However, for simplicity, they are often considered separately.
Chapter 4-2
4.1 ISI
[Figure: ISI channel model with impulse response h(t)]
Chapter 4-3
Chapter 4-4
4.2 Design Criterion
o To find h(t) such that the output signal-to-noise ratio $\mathrm{SNR}_O$ is maximized.

$x(t) = g(t) + w(t)$ for $0 \le t \le T$

$y(t) = [g(t) + w(t)] * h(t) = g(t)*h(t) + w(t)*h(t) = g_o(t) + n(t)$

$\mathrm{SNR}_O = \dfrac{|g_o(T)|^2}{E[n^2(T)]}$
Chapter 4-5
Noise power at the filter output: $E[n^2(T)] = \frac{N_0}{2}\int_{-\infty}^{\infty}|H(f)|^2\,df$.

Cauchy-Schwarz inequality:
$\left|\int \phi_1(x)\,\phi_2^*(x)\,dx\right|^2 \le \int |\phi_1(x)|^2\,dx \cdot \int |\phi_2(x)|^2\,dx$
with equality holding if, and only if, $\phi_1(x) = k\,\phi_2(x)$ for some constant $k$.
Chapter 4-7
$|g_o(T)|^2 = \left|\int H(f)G(f)\exp(j2\pi fT)\,df\right|^2 \le \int |H(f)|^2\,df \int |G(f)\exp(j2\pi fT)|^2\,df = \int |H(f)|^2\,df \int |G(f)|^2\,df$

Hence
$\mathrm{SNR}_O = \dfrac{\left|\int H(f)G(f)\exp(j2\pi fT)\,df\right|^2}{\frac{N_0}{2}\int |H(f)|^2\,df} \le \dfrac{2}{N_0}\int |G(f)|^2\,df$

This is a constant bound, independent of the choice of h(t). Hence, the optimum is achieved:
$\max_h \mathrm{SNR}_O = \dfrac{2E_s}{N_0}$, where $E_s = \int |G(f)|^2\,df$.
Chapter 4-
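The matched-filter bound above can be checked numerically. The rectangular pulse, the time grid, and the value of N0 below are illustrative assumptions, not taken from the slides:

```python
import numpy as np

# Numerical sketch of the matched-filter bound SNR_O <= 2*E_s/N0.
dt = 1e-3
t = np.arange(0.0, 1.0, dt)          # observation interval [0, T), T = 1
g = np.ones_like(t)                  # rectangular transmit pulse g(t)
Es = np.sum(g**2) * dt               # pulse energy E_s
N0 = 0.1                             # two-sided noise PSD is N0/2

def output_snr(h):
    """SNR_O = |g_o(T)|^2 / E[n^2(T)] for receive filter h(t)."""
    go_T = np.sum(g * h[::-1]) * dt            # (g*h)(T) = integral g(tau) h(T-tau) dtau
    noise_var = (N0 / 2) * np.sum(h**2) * dt   # E[n^2(T)] = (N0/2) integral h^2
    return go_T**2 / noise_var

snr_matched = output_snr(g[::-1])              # matched filter h(t) = g(T - t)
snr_other = output_snr(np.exp(-5 * t))         # an arbitrary alternative filter
bound = 2 * Es / N0
print(snr_matched, bound, snr_other)
```

The matched filter attains the bound exactly, while the arbitrary alternative falls short of it.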
Example 4.1 Matched Filter for Rectangular Pulse
The matched filter is h(t) = g(T - t); for a rectangular g(t), it is again a rectangular pulse (realizable as an integrate-and-dump circuit).
Chapter 4-11
Chapter 4-12
4.3 Error Rate due to Noise
o In what follows, we analyze the error rate of polar non-return-to-zero (NRZ) signaling in a system with a matched-filter receiver. With $h(\tau) = k\,g(T-\tau)$,

$y(T) = \int h(\tau)g(T-\tau)\,d\tau + \int h(\tau)w(T-\tau)\,d\tau$
$= \int k\,g^*(T-\tau)\,g(T-\tau)\,d\tau + \int k\,g^*(T-\tau)\,w(T-\tau)\,d\tau$
$= k\int |g(\tau)|^2\,d\tau + k\int g^*(\tau)w(\tau)\,d\tau = kE_g + kn$,
where $E_g = \int |g(\tau)|^2\,d\tau$ and $n = \int g^*(\tau)w(\tau)\,d\tau$.

For notational convenience, abbreviate $y(T)/k$ by $y$.
(The integration can be taken over $[0,T)$ since $g(t)$ is zero outside this range, as done in the text. I however use the entire real line as the integration range here for convenience.)
Chapter 4-14
By the AWGN assumption on $w(t)$, and the assumption that $g(t)$ is real,
$n = \int g^*(\tau)w(\tau)\,d\tau$ is Gaussian distributed with

$E[n] = \int g^*(\tau)\,E[w(\tau)]\,d\tau = 0$

$E[n^2] = \iint g(s)g(t)\,E[w(s)w(t)]\,ds\,dt = \iint g(s)g(t)\,\frac{N_0}{2}\delta(s-t)\,ds\,dt = \frac{N_0}{2}\int g^2(s)\,ds = \frac{N_0 E_g}{2}$

Since $y = I\,E_g + n$:
$f_1(y) \sim \mathrm{Normal}(E_g,\ E_g N_0/2)$, if $I = 1$;
$f_{-1}(y) \sim \mathrm{Normal}(-E_g,\ E_g N_0/2)$, if $I = -1$.
Chapter 4-15
Chapter 4-16
$f_1(y) \sim \mathrm{Normal}(E_g,\ E_g N_0/2)$, if $I = 1$;
$f_{-1}(y) \sim \mathrm{Normal}(-E_g,\ E_g N_0/2)$, if $I = -1$.

Let $\mu = E_g$ and $\sigma^2 = E_g N_0/2$, and let $p = \Pr(I = 1)$. The optimal test compares $p\,f_1(y)$ with $(1-p)\,f_{-1}(y)$:

$\dfrac{p\,f_1(y)}{(1-p)\,f_{-1}(y)} = \dfrac{p}{1-p}\cdot\dfrac{\exp[-(y-\mu)^2/(2\sigma^2)]}{\exp[-(y+\mu)^2/(2\sigma^2)]} = \dfrac{p}{1-p}\exp\!\left(\dfrac{2\mu y}{\sigma^2}\right) = \dfrac{p}{1-p}\exp\!\left(\dfrac{2E_g\,y}{E_g N_0/2}\right) = \dfrac{p}{1-p}\exp\!\left(\dfrac{4y}{N_0}\right)$

so the optimal decision is $\hat{I} = 1$ iff $y \ge \dfrac{N_0}{4}\log\dfrac{1-p}{p}$.

This threshold depends on $N_0$; hence, the best decision relies on the accuracy of the $N_0$ estimate.
Chapter 4-17
o With $p = 1/2$, the best decision threshold becomes $y \gtrless 0$.

$\mathrm{BER}_{\mathrm{opt}} = \frac{1}{2}\int_0^{\infty} f_{-1}(y)\,dy + \frac{1}{2}\int_{-\infty}^{0} f_1(y)\,dy$
$= \frac{1}{2}\int_0^{\infty} \frac{1}{\sqrt{2\pi\sigma^2}}\exp\!\left(-\frac{(y+\mu)^2}{2\sigma^2}\right)dy + \frac{1}{2}\int_{-\infty}^{0} \frac{1}{\sqrt{2\pi\sigma^2}}\exp\!\left(-\frac{(y-\mu)^2}{2\sigma^2}\right)dy$
$= \int_{-\infty}^{0} \frac{1}{\sqrt{2\pi\sigma^2}}\exp\!\left(-\frac{(y-\mu)^2}{2\sigma^2}\right)dy$ (the two integrals are equal by symmetry).
Chapter 4-18
$\mathrm{BER}_{\mathrm{opt}} = \int_{-\infty}^{0} \frac{1}{\sqrt{2\pi\sigma^2}}\exp\!\left(-\frac{(y-\mu)^2}{2\sigma^2}\right)dy = \int_{-\infty}^{-\mu} \frac{1}{\sqrt{2\pi\sigma^2}}\exp\!\left(-\frac{y^2}{2\sigma^2}\right)dy$

With the substitution $z = -y/(\sigma\sqrt{2})$:
$= \frac{1}{\sqrt{\pi}}\int_{\mu/(\sigma\sqrt{2})}^{\infty} \exp(-z^2)\,dz = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{\mu}{\sigma\sqrt{2}}\right) = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{E_g}{N_0}}\right)$

where $\mathrm{erfc}(u) = \frac{2}{\sqrt{\pi}}\int_u^{\infty}\exp(-z^2)\,dz$ is the complementary error function.
Chapter 4-19
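The closed-form BER above can be checked by simulation. The values of Eg, N0, and the sample size below are illustrative assumptions:

```python
import numpy as np
from math import erfc, sqrt

# Monte-Carlo check of BER_opt = (1/2) erfc(sqrt(Eg/N0)) for polar NRZ with
# a matched-filter receiver and threshold 0.
rng = np.random.default_rng(1)
Eg, N0 = 1.0, 0.5
n_trials = 200_000
I = rng.choice([-1, 1], size=n_trials)                      # equiprobable bits
y = I * Eg + rng.normal(0.0, sqrt(Eg * N0 / 2), n_trials)   # y = I*Eg + n
ber_sim = np.mean((y > 0) != (I > 0))
ber_theory = 0.5 * erfc(sqrt(Eg / N0))
print(ber_sim, ber_theory)
```

The empirical error rate matches the analytical expression to within Monte-Carlo fluctuation.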
o Complementary error function: $\mathrm{erfc}(u) = \frac{2}{\sqrt{\pi}}\int_u^{\infty}\exp(-z^2)\,dz$
o Q-function: $Q(u) = \frac{1}{\sqrt{2\pi}}\int_u^{\infty}\exp(-z^2/2)\,dz$

Useful relations:
$\mathrm{erf}(-u) = -\mathrm{erf}(u)$
$\mathrm{erfc}(u) = 1 - \mathrm{erf}(u)$
$Q(u) = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{u}{\sqrt{2}}\right)$
Chapter 4-20
4.3 Error Function
o Asymptotic expansion:
$\mathrm{erfc}(x) = \dfrac{e^{-x^2}}{x\sqrt{\pi}}\left[1 - \dfrac{1}{2x^2} + \dfrac{1\cdot 3}{(2x^2)^2} - \dfrac{1\cdot 3\cdot 5}{(2x^2)^3} + \cdots\right]$

o Bounds for the error function: for $x > 0$,
$\dfrac{e^{-x^2}}{x\sqrt{\pi}}\left(1 - \dfrac{1}{2x^2}\right) < \mathrm{erfc}(x) < \dfrac{e^{-x^2}}{x\sqrt{\pi}}$
Chapter 4-21
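The stated bounds can be verified numerically at a few sample points (the test points are arbitrary choices):

```python
from math import erfc, exp, sqrt, pi

# Check of the bounds: for x > 0,
#   e^{-x^2}/(x*sqrt(pi)) * (1 - 1/(2x^2)) < erfc(x) < e^{-x^2}/(x*sqrt(pi))
def erfc_bounds(x):
    upper = exp(-x * x) / (x * sqrt(pi))
    return upper * (1.0 - 1.0 / (2.0 * x * x)), upper

bounds_hold = all(lo < erfc(x) < hi
                  for x in (1.0, 2.0, 3.0, 5.0)
                  for lo, hi in [erfc_bounds(x)])
print(bounds_hold)
```

The bounds become tight as x grows, which is why they are useful for large-SNR error-rate estimates.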
Chapter 4-22
$s(t) = I\,g(t)$, where $I \in \{-1, +1\}$.

In this case, $E_b = \int_0^{T_b} E[s^2(t)]\,dt = \int_0^{T_b} E[I^2]\,g^2(t)\,dt = E_g$.
Chapter 4-23
Chapter 4-25
Information of $a_k$ is carried on $[kT_b, (k+1)T_b)$.

$s(t) = \sum_k a_k\,g(t - kT_b)$

$y(iT_b) = \sum_k a_k\left[g(t - kT_b) * h(t) * c(t)\right]\Big|_{t = iT_b} + \left[w(t)*c(t)\right]\Big|_{t = iT_b}$
Chapter 4-26
$y(t) = \sum_k a_k\,p(t - kT_b) + n(t)$, where $p(t) = \int G(f)H(f)C(f)\exp(j2\pi ft)\,df$ and $n(t) = w(t)*c(t)$.
Chapter 4-28
The text sets p(0) = 1 for simplicity, but this is a little confusing (see Slide 4-28)! The text is correct when the information of a_i is carried during [(i-1)T_b, iT_b).
Information of a_i is actually carried on [iT_b, (i+1)T_b).
So in order to recover a_i, the correlation (convolution) operation should start at iT_b and end (i.e., be sampled) at (i+1)T_b.
Hence, y((i+1)T_b) is used to reconstruct a_i.
Chapter 4-29
Choose $g(t)$ and $c(t)$ such that $p(t) = \int G(f)H(f)C(f)\exp(j2\pi ft)\,df$ satisfies

$p(iT_b) = \begin{cases} 1, & i = 0 \\ 0, & i \ne 0 \end{cases}$

(Here, I assume that the information of $a_i$ is carried on $[(i-1)T_b, iT_b)$.)
Chapter 4-30
4.5 Nyquist's Criterion for Distortionless
Baseband Binary Transmission
Let P(f) = G(f)H(f)C(f).
Sample p(t) with sampling period T_b to produce $P_\delta(f)$.
From Slide 3-4, we get:

$P_\delta(f) = \frac{1}{T_b}\sum_n P\!\left(f - \frac{n}{T_b}\right)$

Nyquist's criterion: $\sum_n P\!\left(f - \frac{n}{T_b}\right) = T_b$ (indeed, $\sum_n P\!\left(f - \frac{n}{T_b}\right) =$ constant).
Chapter 4-32
4.5 Ideal Nyquist Channel
o The simplest P(f) that satisfies the Nyquist criterion is the rectangular function:

$P(f) = \begin{cases} T_b, & |f| < W = \frac{1}{2T_b} \\ 0, & |f| > W \end{cases}$ and $P(-W) = P(W) = \frac{T_b}{2}$.

$p(t) = \dfrac{\sin(2\pi Wt)}{2\pi Wt} = \mathrm{sinc}(2Wt)$
Chapter 4-33
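The zero-ISI property of the ideal Nyquist pulse can be checked directly at the sampling instants (T_b = 1 is an arbitrary normalization):

```python
import numpy as np

# p(t) = sinc(2Wt) with W = 1/(2*Tb) equals 1 at t = 0 and vanishes at every
# other sampling instant i*Tb: zero intersymbol interference at the sampler.
Tb = 1.0
W = 1.0 / (2.0 * Tb)
p = lambda t: np.sinc(2.0 * W * t)       # np.sinc(x) = sin(pi*x)/(pi*x)
samples = np.array([p(i * Tb) for i in range(-3, 4)])
print(samples)
```

Only the i = 0 sample is nonzero (the others are zero up to floating-point roundoff).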
Chapter 4-34
[Figure: binary sequence 1 0 1 1 0 1 0 and the superposed sinc-pulse waveform; a_i is carried on [(i-1)T_b, iT_b)]
Chapter 4-35
Chapter 4-36
4.5 Infeasibility of Ideal Nyquist Channel
o Examination of timing error margin
n Let $\Delta t$ be the sampling time difference between transmitter and receiver.

$y(iT_b + \Delta t) = \sum_k a_k\,p((i-k)T_b + \Delta t)$

n For simplicity, set i = 0. Using $2WT_b = 1$:

$y(\Delta t) = \sum_k a_k \frac{\sin[\pi(2W\Delta t - k)]}{\pi(2W\Delta t - k)} = a_0\,\mathrm{sinc}(2W\Delta t) + \frac{\sin(2\pi W\Delta t)}{\pi}\sum_{k \ne 0} \frac{(-1)^k a_k}{2W\Delta t - k}$

There exist $\{a_k\}$ such that $\sum_{k \ne 0} \frac{(-1)^k a_k}{2W\Delta t - k} = \infty$ for any fixed small $\Delta t \ne 0$.

Question: How to make p(t) decay faster? Answer: Make P(f) smoother.
Chapter 4-38
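The divergence of the worst-case ISI sum can be illustrated numerically. Here eps stands in for the fixed timing offset 2WΔt, and the adversarial data pattern a_k is chosen so that every term of the sum is positive (the value of eps is arbitrary):

```python
import numpy as np

# With a_k = (-1)^k * sign(2*W*dt - k), every term of the tail sum becomes
# positive, and the partial sums grow roughly like 2*ln(N): unbounded.
eps = 0.05

def worst_case_sum(N):
    k = np.arange(1, N + 1)
    return np.sum(1.0 / np.abs(eps - k)) + np.sum(1.0 / np.abs(eps + k))

s2, s4 = worst_case_sum(10**2), worst_case_sum(10**4)
print(s2, s4)        # logarithmic growth with N
```

Because the sinc tail decays only like 1/|t|, the series behaves like the harmonic series; a pulse with a 1/|t|^3 tail (raised cosine) avoids this.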
4.5 Raised Cosine Spectrum
For a pulse p(t): if $\int_{-\infty}^{\infty} \left|t^k\,p(t)\right|\,dt < \infty$, then $\dfrac{\partial^k P(f)}{\partial f^k}$ exists.
Chapter 4-39
The text puts B_T = W(1+α), which may not be justifiable!
Chapter 4-41
$p(t) = \mathrm{sinc}(2Wt)\,\dfrac{\cos(2\pi\alpha Wt)}{1 - 16\alpha^2 W^2 t^2} \sim \dfrac{1}{|t|^3}$ as $|t|$ grows large.
Chapter 4-42
4.5 Raised Cosine Spectrum
o $p(t) = \mathrm{sinc}(2Wt)\,\dfrac{\cos(2\pi\alpha Wt)}{1 - 16\alpha^2 W^2 t^2}$ consists of two factors:
n The first factor ensures the desired zero crossings of p(t).
n The second factor provides the necessary tail convergence rate of p(t).
Chapter 4-43
Example (T1 carrier): $T_b = \dfrac{125\,\mu s}{193} \approx 0.6477\,\mu s$, so $W = \dfrac{1}{2T_b} \approx 772\,\mathrm{kHz}$.
Chapter 4-45
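The T1 numbers in the example follow directly from 193 bits per 125-microsecond PCM frame:

```python
# T1 example: bit duration and minimum bandwidth W = 1/(2*Tb).
frame_us = 125.0                         # one PCM frame, microseconds
bits_per_frame = 193                     # 24 channels x 8 bits + 1 framing bit
Tb_us = frame_us / bits_per_frame        # bit duration, microseconds
W_kHz = 1.0 / (2.0 * Tb_us) * 1000.0     # W in kHz
print(Tb_us, W_kHz)
```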
Chapter 4-46
4.6 Correlative-Level Coding
A channel with bandwidth W Hz (lowpass over [-W, W]) supports a maximum signaling rate of 2W samples per second.
Chapter 4-47
Chapter 4-48
4.6 One Example of Correlative-Level Coding
o Duobinary signaling (or class I partial response)
[Figure: duobinary transfer function H_duoB(f)]
Chapter 4-49
Chapter 4-50
4.6 Duobinary Signaling
o H_Nyquist(f):
n Only for derivation purposes (it is not needed in the final implementation)
Chapter 4-51
Chapter 4-52
4.6 Duobinary Signaling
o hI(t):
Chapter 4-53
Chapter 4-54
4.6 Duobinary Signaling
o Bandwidth efficiency of duobinary signaling
n Example. The transmitted signal is

$\sum_k a_k\,g(t - kT_b) = \left[\sum_k a_k\,\delta(t - kT_b)\right] * g(t)$
Chapter 4-55
$\bar{R}_Y(\tau) = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} R_Y(t, t+\tau)\,dt = \lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}\iint h(\tau_1)h(\tau_2)\,R_X(t-\tau_1,\,t+\tau-\tau_2)\,d\tau_1\,d\tau_2\,dt$

(Assume that limit and integration are interchangeable.)
Chapter 4-56
$\bar{R}_Y(\tau) = \iint h(\tau_1)h(\tau_2)\left[\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T} R_X(t-\tau_1,\,t+\tau-\tau_2)\,dt\right]d\tau_1\,d\tau_2 = \iint h(\tau_1)h(\tau_2)\,\bar{R}_X(\tau - \tau_2 + \tau_1)\,d\tau_1\,d\tau_2$

$S_Y(f) = \int \bar{R}_Y(\tau)\exp(-j2\pi f\tau)\,d\tau = \iiint h(\tau_1)h(\tau_2)\,\bar{R}_X(\tau - \tau_2 + \tau_1)\exp(-j2\pi f\tau)\,d\tau\,d\tau_1\,d\tau_2$

Substituting $\tau' = \tau - \tau_2 + \tau_1$:
$= \iiint h(\tau_1)h(\tau_2)\,\bar{R}_X(\tau')\exp(-j2\pi f[\tau' + \tau_2 - \tau_1])\,d\tau'\,d\tau_1\,d\tau_2$
$= \left[\int h(\tau_2)e^{-j2\pi f\tau_2}\,d\tau_2\right]\left[\int h(\tau_1)e^{j2\pi f\tau_1}\,d\tau_1\right]\left[\int \bar{R}_X(\tau')e^{-j2\pi f\tau'}\,d\tau'\right]$
$= H(f)H^*(f)S_X(f)$, if $h(\tau)$ is real
$= |H(f)|^2 S_X(f)$
Chapter 4-57
$\bar{R}_X(\tau) = \lim_{T\to\infty}\frac{1}{2T}\,E\!\left[\int_{-T}^{T}\sum_k\sum_j a_k a_j\,\delta(t - kT_b)\,\delta(t + \tau - jT_b)\,dt\right]$
$= \lim_{T\to\infty}\frac{1}{2T}\sum_k\sum_j E[a_j a_k]\int_{-T}^{T}\delta(t - kT_b)\,\delta(t + \tau - jT_b)\,dt$

Assume $E[a_j a_k] = \begin{cases} 1, & j = k \\ 0, & j \ne k \end{cases}$.
Chapter 4-58
4.6 Duobinary Signaling
$\bar{R}_X(\tau) = \lim_{T\to\infty}\frac{1}{2T}\sum_k\int_{-T}^{T}\delta(t - kT_b)\,\delta(t + \tau - kT_b)\,dt = \delta(\tau)\lim_{T\to\infty}\frac{1}{2T}\sum_k\int_{-T}^{T}\delta(t - kT_b)\,dt = \frac{1}{T_b}\,\delta(\tau)$

(the sum contains approximately $2T/T_b$ nonzero terms over $[-T, T]$)

Hence $S_X(f) = \frac{1}{T_b}$ and $S_Y(f) = \frac{1}{T_b}|G(f)|^2$.
Chapter 4-59
For duobinary signaling:
$S_Y^{\mathrm{DuoB}}(f) = \frac{1}{T_b}\,|G(f)|^2\,|H_{\mathrm{duoB}}(f)|^2$
Chapter 4-60
4.6 Duobinary Signaling
[Figure: PSD comparison over $fT_b \in [-2, 2]$ — $\mathrm{sinc}^2(fT_b)$ (no signal ISI) vs. $\mathrm{sinc}^2(2fT_b)$ (with controlled signal ISI); the duobinary spectrum is confined to half the bandwidth]
Chapter 4-62
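The bandwidth compression behind this comparison rests on a trigonometric identity, which can be checked numerically (T_b = 1 is an arbitrary normalization):

```python
import numpy as np

# Identity behind the duobinary PSD: 4 cos^2(pi f Tb) sinc^2(f Tb)
# equals 4 sinc^2(2 f Tb), which has its first null already at f = 1/(2Tb).
Tb = 1.0
f = np.linspace(-2.0, 2.0, 2001) / Tb
lhs = 4.0 * np.cos(np.pi * f * Tb)**2 * np.sinc(f * Tb)**2
rhs = 4.0 * np.sinc(2.0 * f * Tb)**2
identity_holds = np.allclose(lhs, rhs)
print(identity_holds)
```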
4.6 Duobinary Signaling
o Conclusions
n By adding ISI to the transmitted signal in a controlled (and reversible) manner, we can reduce the bandwidth requirement.
n Hence, in the previous example, {c_k} can be transmitted every T_b/2 seconds!
o Doubling the transmission capacity without introducing any additional requirement in bandwidth!
n Duobinary signaling: "Duo" means doubling the transmission capacity of a straight binary system.
n A larger SNR is required to yield the same error rate because of an increase in the number of signal levels (from 2 to 3).
Detailed discussion of the error-rate impact is omitted here!
Chapter 4-63
Chapter 4-64
4.6 Decision Feedback for Correlative-Level Coding
o Recovering {a_k} from {c_k}: since $c_k = a_k + a_{k-1}$ (i.e., filtering by $H_{\mathrm{duoB}}(f)$), invert by

$\hat{a}_k = c_k - \hat{a}_{k-1}$
Chapter 4-65
o Final notes
n The precoding must not change the doubling of the transmission capacity of a straight binary system.
n Hence, $\{\tilde{b}_k\}$ must have the same distribution as $\{b_k\}$, and hence must be i.i.d.
Chapter 4-67
With the precoder $\tilde{b}_k = b_k \oplus \tilde{b}_{k-1}$:

$\Pr(\tilde{b}_k = 0 \mid \tilde{b}_{k-1} = 0) = \Pr(b_k = 0) = 1/2$
$\Pr(\tilde{b}_k = 0 \mid \tilde{b}_{k-1} = 1) = \Pr(b_k = 1) = 1/2$
$\Pr(\tilde{b}_k = 1 \mid \tilde{b}_{k-1} = 0) = \Pr(b_k = 1) = 1/2$
$\Pr(\tilde{b}_k = 1 \mid \tilde{b}_{k-1} = 1) = \Pr(b_k = 0) = 1/2$

Hence $\Pr(\tilde{b}_k \mid \tilde{b}_{k-1}, \tilde{b}_{k-2}, \ldots) = 1/2$: the $\{\tilde{b}_k\}$ are independent.
Chapter 4-68
n For uniformity,

$\Pr(\tilde{b}_k = 0) = \Pr(\tilde{b}_k = 0 \mid \tilde{b}_{k-1} = 0)\Pr(\tilde{b}_{k-1} = 0) + \Pr(\tilde{b}_k = 0 \mid \tilde{b}_{k-1} = 1)\Pr(\tilde{b}_{k-1} = 1)$
$= \frac{1}{2}\Pr(\tilde{b}_{k-1} = 0) + \frac{1}{2}\Pr(\tilde{b}_{k-1} = 1) = \frac{1}{2}$

$\Pr(\tilde{b}_k = 1) = \Pr(\tilde{b}_k = 1 \mid \tilde{b}_{k-1} = 0)\Pr(\tilde{b}_{k-1} = 0) + \Pr(\tilde{b}_k = 1 \mid \tilde{b}_{k-1} = 1)\Pr(\tilde{b}_{k-1} = 1)$
$= \frac{1}{2}\Pr(\tilde{b}_{k-1} = 0) + \frac{1}{2}\Pr(\tilde{b}_{k-1} = 1) = \frac{1}{2}$   Q.E.D.
Chapter 4-69
[Block diagram] $\{b_k \in \{0,1\}\ \text{i.i.d.}\}$ → precoder $\tilde{b}_k = b_k \oplus \tilde{b}_{k-1}$ → mapper $a_k = 2\tilde{b}_k - 1$ → duobinary $c_k = a_k + a_{k-1}$
Chapter 4-70
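The whole precoded duobinary chain can be sketched end to end; the memoryless decision rule "decide 1 iff c_k = 0" recovers b_k with no error propagation (the random seed and sequence length are arbitrary):

```python
import numpy as np

# Precoded duobinary: b~_k = b_k XOR b~_{k-1}, a_k = 2*b~_k - 1,
# c_k = a_k + a_{k-1}; decide b_k = 1 iff c_k == 0.
rng = np.random.default_rng(7)
b = rng.integers(0, 2, 50)
bt = np.zeros(len(b) + 1, dtype=int)     # b~, with reference bit b~_{-1} = 0
for k in range(len(b)):
    bt[k + 1] = b[k] ^ bt[k]             # precoder
a = 2 * bt - 1                           # polar mapping
c = a[1:] + a[:-1]                       # duobinary correlative filter
b_hat = (c == 0).astype(int)             # memoryless decision
print(np.array_equal(b_hat, b))          # → True
```

If b_k = 1 the precoder flips its state, so a_k and a_{k-1} cancel (c_k = 0); otherwise c_k = ±2. That is why no past decision is needed.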
4.6 Modified Duobinary Signaling
The PSD of the signal is nonzero at the origin.
This is considered to be an undesirable feature in some applications, since many communication channels cannot pass DC.
Solution: Class IV partial response, or the modified duobinary technique.
[Figure: $|H_{\mathrm{MDuoB}}(f)|$ over $fT_b \in [-2, 2]$]
Chapter 4-72
4.6 Modified Duobinary Signaling
Assume $g(t) = 1$ for $0 \le t \le T_b$, and $0$ otherwise, so $|G(f)|^2 = T_b^2\,\mathrm{sinc}^2(fT_b)$. Then

$S_Y(f)/(4T_b) = \mathrm{sinc}^2(2fT_b)$ (duobinary; see Slide 4-61)
$S_Y(f)/(4T_b) = \sin^2(2\pi fT_b)\,\mathrm{sinc}^2(fT_b)$ (modified duobinary)
Chapter 4-73
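The key qualitative difference between the two normalized PSDs is at DC, which is easy to confirm (T_b = 1 is an arbitrary normalization):

```python
import numpy as np

# Normalized PSDs: duobinary peaks at DC; modified duobinary has a DC null.
Tb = 1.0
f = np.linspace(-1.0, 1.0, 1001) / Tb
S_duo = np.sinc(2.0 * f * Tb)**2                               # duobinary
S_mduo = np.sin(2.0 * np.pi * f * Tb)**2 * np.sinc(f * Tb)**2  # modified
print(S_duo[500], S_mduo[500])           # values at f = 0
```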
[Figure: the two normalized PSDs plotted over $fT_b \in [-2, 2]$]
Chapter 4-74
4.6 Modified Duobinary Signaling
o Precoding is added to eliminate error propagation in decision system.
$c_k = a_k - a_{k-2} = (2\tilde{b}_k - 1) - (2\tilde{b}_{k-2} - 1) = 2(\tilde{b}_k - \tilde{b}_{k-2})$, with precoder $\tilde{b}_k = b_k \oplus \tilde{b}_{k-2}$:

  b~_k   b~_{k-2}   c_k
   0        0        0
   0        1       -2
   1        0        2
   1        1        0

[Block diagram] $\{b_k \in \{0,1\}\ \text{i.i.d.}\}$ → $\tilde{b}_k = b_k \oplus \tilde{b}_{k-2}$ → $a_k = 2\tilde{b}_k - 1$ → $c_k = a_k - a_{k-2}$
Po-Ning Chen@ece.nctu Chapter 4-75
4.6 Generalized Form of Correlative-Level Coding (CLC) or Partial-Response Signaling

$H_{\mathrm{CLC}}(f) = w_0 + w_1 z + \cdots + w_{N-1} z^{N-1}$, where $z = \exp(-j2\pi fT_b)$.
Chapter 4-76
4.6 Generalized Form of Correlative-Level Coding
or Partial-Response Signaling
 Type of Class   N   w0  w1  w2  w3  w4   Comments
 I               2    1   1               Duobinary coding
 II              3    1   2   1
 III             3    2   1  -1
 IV              3    1   0  -1           Modified duobinary coding
 V               5   -1   0   2   0  -1

With $S_Y(f) = \frac{|G(f)|^2}{T_b}\,|H_{\mathrm{CLC}}(f)|^2$:

Class I: $4\cos^2(\pi fT_b)$
Class II: $16\cos^4(\pi fT_b)$
Class III: $4\cos^2(\pi fT_b) + 8\sin^2(2\pi fT_b)$
Class IV: $4\sin^2(2\pi fT_b)$
Class V: $16\sin^4(2\pi fT_b)$
Chapter 4-77
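The closed-form spectra in the table follow directly from the tap weights; here is a numerical check for Classes I and IV (T_b = 1 is an arbitrary normalization):

```python
import numpy as np

# |H_CLC(f)|^2 = |sum_n w_n z^n|^2 with z = exp(-j 2 pi f Tb), computed from
# the tap weights and compared with the closed forms in the table.
Tb = 1.0
f = np.linspace(-0.5, 0.5, 501) / Tb

def H2(w):
    n = np.arange(len(w))
    z = np.exp(-2j * np.pi * np.outer(n, f * Tb))     # rows: z^n at each f
    return np.abs(np.asarray(w) @ z)**2

class_I_ok = np.allclose(H2([1, 1]), 4 * np.cos(np.pi * f * Tb)**2)
class_IV_ok = np.allclose(H2([1, 0, -1]), 4 * np.sin(2 * np.pi * f * Tb)**2)
print(class_I_ok, class_IV_ok)
```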
Assume $g(t) = 1$ for $0 \le t \le T_b$, and $0$ otherwise, so $|G(f)|^2 = T_b^2\,\mathrm{sinc}^2(fT_b)$.
[Figure: $S_Y(f)$ of the five classes plotted over $fT_b \in [-1.5, 1.5]$]
Chapter 4-78
4.7 Baseband M-ary PAM Transmission
Gray code
Any dibit differs from an adjacent dibit in a single bit position.
Chapter 4-79
Chapter 4-80
4.7 Baseband M-ary PAM Transmission
o Some equivalences
n Virtually fix the symbol error, namely, fix the level distance (to be 2); e.g., levels $\pm 1, \pm 3, \ldots, \pm(M-1)$.

$E[S^2] = \frac{1}{M}\sum_{i=1}^{M}(2i - 1 - M)^2 = \frac{M^2 - 1}{3}$

$\frac{E[S^2]}{T} = \frac{M^2 - 1}{3T_b\log_2(M)}$ (using $T = T_b\log_2 M$)

For fixed $R_b = 1/T_b$ (bps) and level distance 2, the transmitted power of an M-ary PAM transmission signal is increased (relative to binary) by a factor of roughly $M^2/\log_2 M$.
Chapter 4-82
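The average symbol energy formula can be verified by enumerating the levels directly:

```python
import numpy as np

# Average symbol energy of M-ary PAM with levels ±1, ±3, ..., ±(M-1)
# (adjacent-level distance fixed at 2), checked against (M^2 - 1)/3.
def avg_symbol_energy(M):
    levels = np.arange(-(M - 1), M, 2, dtype=float)
    return np.mean(levels**2)

checks = {M: (avg_symbol_energy(M), (M**2 - 1) / 3) for M in (2, 4, 8, 16)}
print(checks)
```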
4.8 Digital Subscriber Lines (DSL)
o A DSL operates over a local loop (often less than 1.5 km) that provides a direct connection between a user terminal and the central office.
n Since it is a direct connection, no dialup is necessary.
n The information-bearing signal is kept in the digital domain all the way from the user terminal to an Internet service provider.
Chapter 4-84
Digital Subscriber Lines
n Time-compression multiplexing (TCM) mode
A guard time is often inserted between bursts in the two opposite directions of data transfer.
So the required line rate is slightly greater than twice the data rate.
[Figure: transmitter/receiver pairs at the two ends of the line]
Chapter 4-85
Chapter 4-86
4.8 Digital Subscriber Lines
o Hybrid transformer for DSL
n Two-to-four-wire conversion
[Figure: hybrid transformer with reference impedance Z_ref and line impedance Z_l]
Chapter 4-87
Chapter 4-
4.8 Digital Subscriber Lines
o Other impairments to DSL
n ISI and Crosstalk
o The transfer function of a twisted-pair line can be approximated by

$|H_{\mathrm{twisted\ pair}}(f)| \approx \exp\!\left(-\frac{l}{l_0}\,k\sqrt{f}\right)$

where $k$ is a physical constant of the twisted pair, and $l_0$ and $l$ are respectively the reference length and the actual length of the twisted pair.
Chapter 4-89
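The model implies that attenuation in dB grows linearly with line length and with the square root of frequency. A quick numerical illustration, with a made-up constant k and made-up lengths:

```python
import numpy as np

# Sketch of |H(f)| ≈ exp(-(l/l0)*k*sqrt(f)); k, l0, l are assumed values.
k = 1e-3                      # assumed physical constant of the pair
l0, l = 1.0, 3.0              # reference and actual lengths (assumed units)
f = np.array([1e5, 1e6])      # two frequencies, Hz
H = np.exp(-(l / l0) * k * np.sqrt(f))
att_db = -20.0 * np.log10(H)
print(att_db)                 # dB attenuation scales with sqrt(f)
```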
[Figure: example impulse response h_twisted-pair(τ) of a twisted-pair line — a delayed, dispersed pulse plotted for τ ∈ [-1, 5] with delay parameter 0.1]
Chapter 4-90
4.8 Digital Subscriber Lines
n Crosstalk
o Capacitive coupling that exists between adjacent twisted pairs in a cable
n Near-end crosstalk (NEXT) and Far-end crosstalk (FEXT)
Chap
4.8 Digital Subscriber Lines
$|H_{\mathrm{NEXT}}(f)|^2 \propto f^{3/2}$

The interference (the input of $H_{\mathrm{NEXT}}(f)$) is often assumed to have the same PSD as the transmitted signal, but to be Gaussian distributed.
Chapter 4-
Chapte
4.8 Digital Subscriber Lines
o Possible candidates for line codes that are suitable for DSL
n Manchester code
o Zero DC component, but a large spectrum at high frequency, so it is vulnerable to NEXT and ISI.
n Bipolar return-to-zero (BRZ) or alternate mark inversion (AMI) code
Successive 1s are represented alternately by positive and negative but equal levels, and 0 is represented by the zero level.
Zero DC component. Its NEXT and ISI performance is slightly inferior to the modified duobinary code.
Chapter 4-95
Chapter 4-96
4.8 Digital Subscriber Lines
o Possible candidates for line codes that are suitable for DSL
n 2B1Q code (cont.)
Chapter 4-97
Chapter 4-98
4.8 Asymmetric Digital Subscriber Lines
o ADSL is targeted to simultaneously support three services at a single twisted-wire pair
n Data transmission downstream at 9 Mbps
n Data transmission upstream at 1 Mbps
n Plain old telephone service (POTS)
o Some notes
n It is named asymmetric because the downstream bit rate is much higher than the upstream bit rate.
n The actually achievable bit rates depend on the length of the twisted pair used for the transmission.
Chapter 4-99
splitter
Chapter 4-100
4.8 Asymmetric Digital Subscriber Lines
Various applications can benefit from asymmetric transmission, such as video-on-demand (VoD).
n For example
Downstream = 1.544 Mbps (DS1) for video data
Upstream = 160 kbps for real-time control commands.
Chapter 4-101
Chapter 4-102
4.9 Optimum Linear Receiver
o Zero-forcing equalizer (cont.)
n This reduces to the Nyquist criterion:

$\sum_n P\!\left(f - \frac{n}{T_b}\right) = T_b$ or $p(nT_b) = \begin{cases} 1, & n = 0 \\ 0, & n \ne 0 \end{cases}$
Chapter 4-103
Chapter 4-104
4.9 Optimum Linear Receiver
o Example of noise enhancement.
n Suppose that the receiver filter is a tapped-delay-line equalizer, which is of the form

$c(t) = \sum_{k=0}^{N} c_k\,\delta(t - kT_b)$

Then
$p(t) = \int h(\tau)c(t-\tau)\,d\tau = \sum_{k=0}^{N} c_k\int h(\tau)\,\delta(t - kT_b - \tau)\,d\tau = \sum_{k=0}^{N} c_k\,h(t - kT_b)$

and the zero-forcing condition is
$p_n = p(nT_b) = \sum_{k=0}^{N} c_k\,h((n-k)T_b) = \sum_{k=0}^{N} c_k\,h_{n-k} = \begin{cases} 1, & n = 0 \\ 0, & n \ne 0 \end{cases}$
Chapter 4-106
It is reasonable to assume that $h_n = 0$ for $n < 0$, and $h_0 = 1$. Then, for arbitrary $N \ge 0$:

$\begin{bmatrix} 1 & 0 & 0 & \cdots & 0 \\ h_1 & 1 & 0 & \cdots & 0 \\ h_2 & h_1 & 1 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ h_N & h_{N-1} & h_{N-2} & \cdots & 1 \end{bmatrix}\begin{bmatrix} c_0 \\ c_1 \\ \vdots \\ c_N \end{bmatrix} = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}$

Example. Suppose $h(\tau) = \begin{cases} 1 - |\tau|/(2T_b), & 0 \le \tau \le 2T_b \\ 0, & \text{otherwise} \end{cases}$

so $h_0 = 1$, $h_1 = \frac{1}{2}$, and $h_n = 0$ for $n \ne 0, 1$. Then
$c_n = (-1)^n\,2^{-n}$ for $0 \le n \le N$, and zero otherwise.
Chapter 4-107
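The tap solution of the example can be verified by convolving the taps with the sampled channel:

```python
import numpy as np

# With h_0 = 1, h_1 = 1/2 (others 0), the taps c_n = (-1)^n * 2^{-n}
# zero-force the sampled cascade p_n = sum_k c_k h_{n-k}.
N = 8
h = np.zeros(N + 1)
h[0], h[1] = 1.0, 0.5
c = np.array([(-1.0)**n * 2.0**(-n) for n in range(N + 1)])
p = np.convolve(c, h)[:N + 1]          # p_n for 0 <= n <= N
print(p)                               # 1 at n = 0, 0 elsewhere
```

Note the alternating-sign taps: each tap cancels the residual ISI left by the previous one, which is the mechanism behind the noise enhancement discussed next.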
Noise at the equalizer output at $t = nT_b$:

$\int w(z)\,c(nT_b - z)\,dz = \int w(z)\sum_{k=0}^{N} c_k\,\delta(nT_b - kT_b - z)\,dz = \sum_{k=0}^{N} c_k\,w(nT_b - kT_b) = \sum_{k=0}^{N} c_k\,w_{n-k}$
Chapter 4-1
$y(t) = c(t)*x(t) = \int c(\tau)\,x(t - \tau)\,d\tau$

$x(t) = \sum_k a_k\,q(t - kT_b) + w(t)$

$y(iT_b) = \sum_k a_k\int c(\tau)\,q(iT_b - kT_b - \tau)\,d\tau + \int c(\tau)\,w(iT_b - \tau)\,d\tau \triangleq \mu_i + n_i$
Chapter 4-111
Define the error $\varepsilon_i = \mu_i + n_i - a_i$. Then

$E[\varepsilon_i^2] = E[\mu_i^2] + E[n_i^2] + E[a_i^2] + 2E[\mu_i n_i] - 2E[n_i a_i] - 2E[\mu_i a_i]$

1st term:
Chapter 4-112
Observe that $R_q(\tau_1, \tau_2; i) = \sum_k q(iT_b - kT_b - \tau_1)\,q(iT_b - kT_b - \tau_2)$ only depends on the difference between $\tau_1$ and $\tau_2$, and is invariant with respect to $i$. We can then write

$E[\mu_i^2] = \iint c(\tau_1)c(\tau_2)\,R_q(\tau_1 - \tau_2)\,d\tau_1\,d\tau_2$
$= \iint c(\tau_1)c(\tau_2)\int S_q(f)\,e^{j2\pi f(\tau_1 - \tau_2)}\,df\,d\tau_1\,d\tau_2$
$= \int S_q(f)\left[\int c(\tau_1)\,e^{j2\pi f\tau_1}\,d\tau_1\right]\left[\int c(\tau_2)\,e^{-j2\pi f\tau_2}\,d\tau_2\right]df$
$= \int S_q(f)\,C(-f)\,C(f)\,df = \int S_q(f)\,|C(f)|^2\,df$
2nd term:

$E[n_i^2] = \iint c(\tau_1)c(\tau_2)\,E[w(iT_b - \tau_1)w(iT_b - \tau_2)]\,d\tau_1\,d\tau_2 = \iint c(\tau_1)c(\tau_2)\,\frac{N_0}{2}\,\delta(\tau_1 - \tau_2)\,d\tau_1\,d\tau_2 = \frac{N_0}{2}\int c^2(\tau)\,d\tau$
$= \frac{N_0}{2}\iint C(f_1)C(f_2)\left[\int e^{j2\pi(f_1 + f_2)\tau}\,d\tau\right]df_1\,df_2 = \frac{N_0}{2}\iint C(f_1)C(f_2)\,\delta(f_1 + f_2)\,df_1\,df_2$
$= \frac{N_0}{2}\int C(f)\,C(-f)\,df = \frac{N_0}{2}\int |C(f)|^2\,df$
Chapter 4-114
3rd term: for i.i.d. $\{a_k\}$ with $a_k = \pm 1$, $E[a_i^2] = 1$.

4th and 5th terms: by independence of $\{a_k\}$ and $w(t)$, and zero mean of $n_i$,
$E[\mu_i n_i] = E[\mu_i]E[n_i] = 0$ and $E[n_i a_i] = E[n_i]E[a_i] = 0$.

6th term:
Chapter 4-115
$E[\mu_i a_i] = \iint C(f_1)Q(f_2)\,\delta(f_2 - f_1)\,df_1\,df_2 = \int C(f)Q(f)\,df = \int \left[C_r(f)Q_r(f) - C_i(f)Q_i(f)\right]df$

where the last step follows from the observation that $E[\mu_i a_i]$ must be a real number, and $C_r(f)$ and $C_i(f)$ are respectively the real and imaginary parts of $C(f)$ (similarly for $Q(f)$).

Collecting terms:
$J_i = \int \underbrace{\left\{\left(S_q(f) + \frac{N_0}{2}\right)|C(f)|^2 - 2Q_r(f)C_r(f) + 2Q_i(f)C_i(f)\right\}}_{A(f)}\,df + 1$
Chapter 4-116
Completing the square in the integrand:

$A(f) = \left(S_q(f) + \frac{N_0}{2}\right)|C(f)|^2 - 2Q_r(f)C_r(f) + 2Q_i(f)C_i(f)$
$= \left(S_q(f) + \frac{N_0}{2}\right)\left[C_r(f) - \frac{Q_r(f)}{S_q(f) + N_0/2}\right]^2 - \frac{Q_r^2(f)}{S_q(f) + N_0/2}$
$\quad + \left(S_q(f) + \frac{N_0}{2}\right)\left[C_i(f) + \frac{Q_i(f)}{S_q(f) + N_0/2}\right]^2 - \frac{Q_i^2(f)}{S_q(f) + N_0/2}$

This is minimized by $C_r(f) = \dfrac{Q_r(f)}{S_q(f) + N_0/2}$ and $C_i(f) = -\dfrac{Q_i(f)}{S_q(f) + N_0/2}$, i.e.,

$C(f) = \dfrac{Q^*(f)}{S_q(f) + N_0/2}$

An equalizer that is so designed is referred to as the minimum mean-square error (MMSE) equalizer.
Chapter 4-117
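To see the practical difference from zero forcing, compare the two gains at a few frequencies. The Q values and the stand-in S_q(f) = |Q(f)|^2 below are illustrative assumptions, not from the slides:

```python
import numpy as np

# Sketch: C(f) = Q*(f)/(S_q(f) + N0/2) vs. zero-forcing 1/Q(f).
N0 = 0.2
Q = np.array([1.0, 0.5, 0.05])         # |Q(f)| at three sample frequencies
Sq = np.abs(Q)**2                      # illustrative stand-in for S_q(f)
C_mmse = np.conj(Q) / (Sq + N0 / 2)
C_zf = 1.0 / Q                         # zero-forcing gain for comparison
print(C_mmse, C_zf)
```

Where Q(f) is near a spectral null, the zero-forcing gain blows up (noise enhancement), while the N0/2 term in the MMSE denominator keeps the gain bounded.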
Chapter 4-118
4.9 MMSE Equalizer
o Property of $S_q(f)$
n The text wrote that $S_q(f) = \frac{1}{T_b}\sum_k \left|Q\!\left(f - \frac{k}{T_b}\right)\right|^2$, which can be checked against the derivation below, starting from

$R_q(\tau_1 - \tau_2) = \sum_k q(kT_b - \tau_1)\,q(kT_b - \tau_2)$
Chapter 4-119
$S_q(f) = \int R_q(\tau)\,e^{-j2\pi f\tau}\,d\tau = \sum_k q(kT_b)\int q(kT_b - \tau)\,e^{-j2\pi f\tau}\,d\tau$

(substituting $v = kT_b - \tau$)
$= \sum_k q(kT_b)\int q(v)\,e^{-j2\pi f(kT_b - v)}\,dv = \left[\sum_k q(kT_b)\,e^{-j2\pi fkT_b}\right]\int q(v)\,e^{j2\pi fv}\,dv$
$= Q^*(f)\sum_k q(kT_b)\,e^{-j2\pi fkT_b}$
$= Q^*(f)\int q(t)\sum_k \delta(t - kT_b)\,e^{-j2\pi ft}\,dt = Q^*(f)\,\frac{1}{T_b}\sum_k Q\!\left(f - \frac{k}{T_b}\right)$
Chapter 4-120
4.9 Implementation of MMSE Equalizer
o One can approximate $S_q(f)$ by the periodic function

$\tilde{S}_q(f) = \frac{1}{T_b}\sum_k \left|Q\!\left(f - \frac{k}{T_b}\right)\right|^2$

so that the MMSE equalizer factors as $C(f) = Q^*(f)\cdot\dfrac{1}{\tilde{S}_q(f) + N_0/2}$: a matched filter followed by a transversal filter.

o Since $\Gamma(f) = \dfrac{1}{\tilde{S}_q(f) + N_0/2}$ is now periodic with period $1/T_b$, we obtain by Fourier series that
Chapter 4-121
$\Gamma(f) = \sum_k \gamma_k\exp(-j2\pi kfT_b)$, i.e., $\gamma(\tau) = \sum_k \gamma_k\,\delta(\tau - kT_b)$: a tapped-delay line with coefficients $\{\gamma_k\}$.
4.9 Implementation of MMSE Equalizer
o Final notes
n In a real-life telecommunication environment, the channel is usually time-varying.
n Therefore, an adaptive receiver that provides for the adaptive implementation of both the matched filter and the equalizer is needed.
Chapter 4-124
4.10 Adaptive Equalization
o Least-mean-square (LMS) algorithm
$e[n] = d[n] - y[n] = d[n] - \sum_{k=0}^{N} w_k\,x[n-k]$

o Design objective
n To find the filter coefficients $w_0, w_1, \ldots, w_N$ so as to minimize the index of performance $J$:

$J = e^2[n]$
Chapter 4-125
where $\mu$ is a chosen constant step size, and the factor $\frac{1}{2}$ is included only for convenience of analysis.
Chapter 4-126
4.10 Adaptive Equalization
$J = \left(d[n] - \sum_{k=0}^{N} w_k\,x[n-k]\right)^2$

$\frac{\partial J}{\partial w_i} = -2x[n-i]\left(d[n] - \sum_{k=0}^{N} w_k\,x[n-k]\right) = -2x[n-i]\,e[n]$
Chapter 4-127
Repeat {
  $e[n] = d[n] - \sum_{k=0}^{N} w_{k,\mathrm{current}}\,x[n-k]$
  For $0 \le i \le N$: $w_{i,\mathrm{next}} = w_{i,\mathrm{current}} + \mu\,x[n-i]\,e[n]$
  For $0 \le i \le N$: $w_{i,\mathrm{current}} = w_{i,\mathrm{next}}$
}
Chapter 4-128
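The update loop above can be sketched in a few lines. The unknown 3-tap response being identified, the step size, and the data length are made-up choices for illustration:

```python
import numpy as np

# Minimal LMS run: e[n] = d[n] - sum_k w_k x[n-k];
# update w_i <- w_i + mu * x[n-i] * e[n].
rng = np.random.default_rng(3)
true_w = np.array([0.8, -0.4, 0.2])    # assumed unknown response
N = len(true_w) - 1
x = rng.normal(size=5000)              # white training input
d = np.convolve(x, true_w)[:len(x)]    # desired response
w = np.zeros(N + 1)
mu = 0.01                              # step size
for n in range(N, len(x)):
    xn = x[n - np.arange(N + 1)]       # [x[n], x[n-1], ..., x[n-N]]
    e = d[n] - w @ xn
    w += mu * e * xn
print(w)                               # close to true_w
```

With a small step size the weights converge to the true taps; the trade-off between step size, excess mean-square error, and convergence speed is exactly the one noted in the next slide.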
4.10 Adaptive Equalization
o Some notes on LMS algorithm (cont.)
n If μ is too large, high excess mean-square error may occur.
n If μ is too small, a slow rate of convergence may arise.
Chapter 4-129
Chapter 4-130
4.10 Decision-Directed Mode
In normal operation, the decisions made by the receiver are correct with high probability.
Under such premise, we can use the previous decisions to
calibrate or track the tap coefficients.
In this mode,
n if μ is too large, high excess mean-square error may occur.
n if μ is too small, too-slow tracking may arise.
o We can further extend the idea of decision-directed or
decision-feedback to the decision-feedback equalizer
(DFE).
Chapter 4-131
Let $c_n = \begin{bmatrix} w_n^{(1)} \\ w_n^{(2)} \end{bmatrix}$ and $v_n = \begin{bmatrix} x_n \\ \tilde{a}_n \end{bmatrix}$, where $x_n$ denotes the sample at time $nT$.

Denote $e_n = a_n - c_n^{T}v_n$, where $a_n$ is the $n$th transmitted symbol.
Chapter 4-132
4.10 Decision-Feedback Equalizer
o Then the DFE update is:

$w_{n+1}^{(1)} = w_n^{(1)} + \mu_1\,e_n\,x_n$
$w_{n+1}^{(2)} = w_n^{(2)} + \mu_2\,e_n\,\tilde{a}_n$

As anticipated, the DFE suffers from error propagation due to incorrect decisions.
However, error propagation will not persist indefinitely; rather, it tends to occur in bursts.
n E.g., if the number of taps in the feedback section is L, then the influence of one decision error persists for at most L symbols.
Chapter 4-134
4.11 Computer Experiments: Eye Patterns
o Eye pattern for pulse-shaping function p(t): a half-cycle sine wave with duration T_b, error-free transmission.
[Figure: eye diagram, amplitude -1 to 1, over t/T_b ∈ [0, 2]]
Chapter 4-135
[Figure: eye diagram over t/T_b ∈ [0, 2]]
Chapter 4-136
Interpretation of Eye Pattern
Chapter 4-137
[Figure: eye diagram over t/T_b ∈ [0, 2]]
Chapter 4-138
Experiment 1: Effect of channel noise
(Raised-cosine pulse shaping with roll-off factor α = 0.5, W = 0.5 Hz, M = 4)
Eye diagram for noiseless quaternary system.
Eye diagram for quaternary system with SNR 20 dB.
Eye diagram for quaternary system with SNR 10 dB.
Chapter 4-139
Experiment 2: Effect of bandwidth limitation
(Raised-cosine pulse shaping with roll-off factor α = 0.5, W = 0.5 Hz, M = 4)
(a) Eye diagram for noiseless band-limited quaternary system: cutoff frequency f_0 = 0.975 Hz. (b) Eye diagram for noiseless band-limited quaternary system: cutoff frequency f_0 = 0.5 Hz.
(The channel is now modeled by a low-pass Butterworth filter with

$|H(f)|^2 = \dfrac{1}{1 + (f/f_0)^{50}}$)
Chapter 4-140