
EE 561 Communication Theory
Spring 2007

Channel Coding

Instructor: Matthew Valenti
Date: April 12, 2007
copyright 2007

Where Are We?


[System block diagram: an analog input signal is sampled and quantized (or a direct digital input is taken), then source encoded, encrypted, channel encoded, and modulated before entering the channel; the receiver runs the demodulator and equalizer, channel decoder, decryption, source decoder, and D/A conversion to produce the analog or digital output.]

The channel encoder and decoder are the final part of the system diagram that we will cover.


Channel Coding

Channel coding selectively adds redundant information to the transmitted data stream so that the receiver can correct errors caused by the channel.
Channel coding generally improves error performance (energy efficiency).
  However, we trade away data rate (bandwidth efficiency).
  At very low signal-to-noise ratios, performance (as a function of Eb/No) may actually be worse when coding is used.

Code Rate

Let k be the number of message bits at the encoder input.
  A message bit is either 0 or 1.
Let n be the number of code symbols at the encoder output.
  Each symbol is drawn from a constellation of M symbols.
Then the code rate is R = k/n.
  In general, 0 < R ≤ log2(M).
  Most codes are binary (M = 2), and thus 0 < R ≤ 1.


Channel Capacity

The channel capacity C is the maximum rate that can be supported by a channel: reliable communication requires R < C.
Measured in either:
  bits per channel symbol (compare to the code rate), or
  bits per second (bps).
We can approach the channel capacity bound by using error correction coding.
  Long random codes offer near-optimal performance.
  However, we wish to work with practical codes, which must possess an underlying mathematical structure.

Capacity of Unconstrained
Vector-Input AWGN Channel

AWGN channel with unconstrained input.
  Maximize the mutual information over the input distribution p(s).
  Q: What kind of input distribution maximizes C?

In bits per channel symbol, for a K-dimensional input:

  C = (K/2) log2( 1 + 2 R Eb / (K No) )

Minimum Eb/No needed to operate at rate R:

  Eb/No > ( 2^(2R/K) - 1 ) / ( 2R/K )
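As a quick numerical check (a minimal sketch; K = 1 and the rates below are my own illustrative choices, except R = 12/23, which reappears later for the Golay code):

import numpy as np

def min_ebno_db(R, K=1):
    """Minimum Eb/No (in dB) for reliable communication at code rate R
    over a K-dimensional unconstrained-input AWGN channel."""
    x = 2 * R / K
    return 10 * np.log10((2**x - 1) / x)

for R in [12/23, 1/2, 1/100]:
    print(f"R = {R:.3f}: Eb/No > {min_ebno_db(R):.2f} dB")
# R = 12/23 gives about 0.07 dB, R = 1/2 gives 0 dB, and as R -> 0 the
# threshold approaches 10*log10(ln 2) = -1.59 dB.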


Modulation Constrained Capacity

Now suppose that s must be drawn from a constellation S of M points.

  C = I(s; r) = E[ i(s; r) ]
    = log M + (1/M) sum_{i=1}^{M} E[ f(r|s_i) - max*_{s' in S} f(r|s') ]

where:

  f(r|s) = -(1/No) ||r - s||^2

  max*(x, y) = log(e^x + e^y) = max(x, y) + log(1 + exp(-|x - y|))

Comments:
  Can be found using Monte Carlo integration.
  Units are nats/symbol; apply a change of base to get bits.
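A minimal sketch of the Monte Carlo approach (BPSK is assumed here just for concreteness, and the function name is my own):

import numpy as np

def bpsk_constrained_capacity(es_no_db, num_samples=100_000, seed=0):
    """Monte Carlo estimate of the BPSK-constrained capacity (bits/symbol) on AWGN."""
    rng = np.random.default_rng(seed)
    es_no = 10 ** (es_no_db / 10)
    No = 1.0 / es_no                                    # unit-energy symbols: Es = 1
    s = rng.choice([-1.0, 1.0], size=num_samples)       # transmitted BPSK symbols
    r = s + rng.normal(scale=np.sqrt(No / 2), size=num_samples)
    f_tx = -(r - s) ** 2 / No                           # f(r|s) for the transmitted symbol
    f_neg = -(r + 1.0) ** 2 / No                        # f(r|s') for s' = -1
    f_pos = -(r - 1.0) ** 2 / No                        # f(r|s') for s' = +1
    max_star = np.logaddexp(f_neg, f_pos)               # max*(x, y) = log(e^x + e^y)
    capacity_nats = np.log(2) + np.mean(f_tx - max_star)
    return capacity_nats / np.log(2)                    # change of base: nats -> bits

print(bpsk_constrained_capacity(0.0))   # roughly 0.5 bit/symbol near Es/No = 0 dB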


Channel Capacity: Unconstrained and BPSK-Constrained
[Figure: spectral efficiency (code rate r) vs. Eb/No in dB, showing the Shannon capacity bound and the BPSK capacity bound. It is theoretically impossible to operate in the region above the bounds and theoretically possible to operate in the region below them.]


Power Efficiency of
Standard Binary Channel Codes
[Figure: spectral efficiency (code rate r) vs. Eb/No in dB at BER Pb = 10^-5, with the Shannon capacity bound, the BPSK capacity bound (which correspond to arbitrarily low BER), and uncoded BPSK. Plotted milestones: Mariner (1969), Pioneer (1968-72), Odenwalder convolutional codes (1976), Voyager (1977), IS-95 (1991), Galileo:BVD (1992), Turbo code (1993), Galileo:LGA (1996), Iridium (1998), and the LDPC code of Chung, Forney, Richardson, and Urbanke (2001).]

Constrained Capacity of
Higher-Order Modulation
[Figure: constrained capacity (bits/symbol) vs. Es/No in dB for BPSK, QPSK, 8PSK, and M = 16, 32, 64 constellations. Blue dotted lines are QAM, red dashed lines are PSK, and the black dotted line is the Shannon capacity limit.]


Capacity as a Function of Eb/No


[Figure: the same constrained capacity curves plotted vs. Eb/No in dB for BPSK, QPSK, 8PSK, and M = 16, 32, 64 constellations. Blue dotted lines are QAM, red dashed lines are PSK, and the black dotted line is the Shannon capacity limit.]

Binary Block Codes

The encoder for a binary block code accepts blocks of k message bits and produces blocks of n code bits.
  Notation: (n,k) code.
The input bits can be placed in a vector x = [x1, x2, ..., xk].
  There are 2^k possible inputs X = { x_1, x_2, ..., x_(2^k) }.
The output bits can also be placed in a vector c = [c1, c2, ..., cn].
  There are 2^k code words C = { c_1, c_2, ..., c_(2^k) }.


Modulo-2 Vector Arithmetic

With codes, we use modulo-2 addition and multiplication (so assume that + means exclusive-OR).
We can add two vectors modulo-2:
  Given x1 = [0 1 1 0] and x2 = [1 1 0 0], find x1 + x2.
  Now find x1 + x1.
We can multiply two vectors modulo-2:
  Find x1 x2^T.
We can do vector-matrix multiplication:

            [ 1 1 0 ]
  [1 0 1] * [ 0 0 1 ] =
            [ 0 1 1 ]
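A quick numerical sketch of these operations (plain NumPy with results reduced mod 2; the variable names follow the slide):

import numpy as np

x1 = np.array([0, 1, 1, 0])
x2 = np.array([1, 1, 0, 0])

print((x1 + x2) % 2)     # modulo-2 vector addition: [1 0 1 0]
print((x1 + x1) % 2)     # any vector plus itself is all zeros: [0 0 0 0]
print(x1.dot(x2) % 2)    # inner product x1 x2^T mod 2: 1

A = np.array([[1, 1, 0],
              [0, 0, 1],
              [0, 1, 1]])
print(np.array([1, 0, 1]).dot(A) % 2)   # vector-matrix product mod 2: [1 0 1]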

Linear Codes

A code C is linear if the modulo-2 sum of any two code words is also a valid code word.
  If ci ∈ C and cj ∈ C, then ci + cj ∈ C.
The modulo-2 sum of a code word with itself must therefore also be a valid code word.
  Since ci + ci = 0 and ci + ci ∈ C, the all-zeros code word must be present in every linear code.


Generator Matrices

Let x be the input vector of k data bits.
Let c be the code word of n code bits.
Then if C is linear,

  c = xG

where the multiplication is modulo-2.
G is called the generator matrix:
  Dimensionality k by n.
  Spans the code space C.
  Rank k (linearly independent rows).
  The rows of G form a basis for the code.

Example (7,3) Code

      [ 1 0 0 1 1 1 0 ]
  G = [ 0 1 0 0 1 1 1 ]
      [ 0 0 1 1 1 0 1 ]

  H =

  dmin =

  input bits    output code word
  x1 x2 x3      c1 c2 c3 c4 c5 c6 c7
  0  0  0
  0  0  1
  0  1  0
  0  1  1
  1  0  0
  1  0  1
  1  1  0
  1  1  1
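A small sketch that fills in the table numerically, assuming the generator matrix as reconstructed above:

import numpy as np
from itertools import product

# Generator matrix of the example (7,3) code, as read off the slide above
G = np.array([[1, 0, 0, 1, 1, 1, 0],
              [0, 1, 0, 0, 1, 1, 1],
              [0, 0, 1, 1, 1, 0, 1]])

codewords = []
for bits in product([0, 1], repeat=3):   # all 2^3 input vectors
    x = np.array(bits)
    c = x.dot(G) % 2                     # c = xG with modulo-2 arithmetic
    codewords.append(c)
    print(x, "->", c)

# For a linear code, dmin equals the smallest weight of any nonzero code word
dmin = min(int(c.sum()) for c in codewords if c.any())
print("dmin =", dmin)                    # prints 4 for this code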


Parity Check Matrix

A code word c is valid (in C) iff cH^T = 0.
  Here, 0 is the length-(n-k) all-zeros vector.
H is called the parity-check matrix:
  Dimensionality (n-k) by n.
  Determined by G; in particular, H spans the null space of G.
Since every row of G is a valid code word,

  GH^T = 0


Error Detection and Correction

The code can be used to either detect or correct errors.
Detect: we know there is an error, but don't know which bits are incorrect.
  ARQ: Automatic Repeat Request.
Correct: we not only know that there are errors, but we also know which bits are incorrect.
  FEC: Forward Error Correction.


Detecting and Correcting Errors

Suppose we receive the vector y = c + e, where e is an error vector.
We can detect errors by computing yH^T:
  yH^T = 0 only if e = 0 or e is itself a valid code word.
  Thus all error patterns can be detected except those that happen to be valid code words.
We can correct a single error as follows:
  Assume e contains only a single one.
  Flip the first bit in y and multiply by H^T. If the result is all zeros, then the error was in the first position.
  Repeat for all other bit positions until the position of the error is determined.
The process can be generalized to more than one error (a sketch of the single-error case follows).
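A minimal sketch of the trial-flip procedure just described (H is the parity-check matrix of whatever code is in use; the helper name is mine):

import numpy as np

def correct_single_error(y, H):
    """Try to correct one bit error in y using the parity-check matrix H.
    Returns the corrected word, or y unchanged if no single flip yields a valid code word."""
    if not (y.dot(H.T) % 2).any():       # yH^T = 0: y is already a valid code word
        return y
    for pos in range(len(y)):            # flip each position in turn
        trial = y.copy()
        trial[pos] ^= 1
        if not (trial.dot(H.T) % 2).any():
            return trial                 # this flip produced a valid code word
    return y                             # more than one error; not correctable this way

For example, flipping one bit of a valid code word of the (7,3) code and passing the result (with that code's H) through this function flips it back.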



Systematic Codes

A code is systematic if the code words can be partitioned into two parts:
  Systematic bits: the data x itself.
  Parity bits: bits that are a function of two or more data bits.
A consequence is that the generator matrix has the form

  G = [ Ik  P ]

where P has dimensionality k by (n-k).
Any nonsystematic code can be made systematic by Gaussian elimination:
  Adding rows and permuting columns.
The parity-check matrix of a systematic code is:

  H = [ P^T  In-k ]
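As a sketch, H can be built directly from a systematic generator matrix; here it is applied to the (7,3) G reconstructed in the earlier example:

import numpy as np

def parity_check_from_systematic(G):
    """Build H = [P^T  I_(n-k)] from a systematic generator matrix G = [I_k  P]."""
    k, n = G.shape
    P = G[:, k:]                                      # parity portion of G
    return np.hstack([P.T, np.eye(n - k, dtype=int)])

G = np.array([[1, 0, 0, 1, 1, 1, 0],
              [0, 1, 0, 0, 1, 1, 1],
              [0, 0, 1, 1, 1, 0, 1]])
H = parity_check_from_systematic(G)
print(H)
print(G.dot(H.T) % 2)   # GH^T = 0, since every row of G is a valid code word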

Hamming Distance

The (Hamming) weight wi of a code word ci is the number of ones in the code word.
The Hamming distance between two code words c1 and c2 is the number of bits in which the two code words differ:

  dH(c1, c2) = sum_{i=1}^{n} ( c_(1,i) + c_(2,i) )    (modulo-2 addition)

Example:
  c1 = (0,0,1,1,1,0,1)
  c2 = (0,1,0,0,1,1,1)
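A one-line check of the example above (modulo-2 addition of the two vectors, then counting the ones):

import numpy as np

c1 = np.array([0, 0, 1, 1, 1, 0, 1])
c2 = np.array([0, 1, 0, 0, 1, 1, 1])
print(int(((c1 + c2) % 2).sum()))   # Hamming distance between c1 and c2: 4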


Minimum Distance

The minimum distance is the smallest distance separating any two code words:

  dmin = min_{i != j} dH(ci, cj)

For a linear code, the minimum distance is simply the smallest Hamming weight of any nonzero code word:

  dmin = min_{ci != 0} { wi }

Just as we wanted to maximize Euclidean distance for modulation, we want to maximize Hamming distance for block codes.



Importance of
Minimum Distance

The decoder's job is to choose the legal code word that most closely resembles the received sequence of bits.
Decoding example:
  Suppose we receive (0,1,1,1,1,1,0).
  What is the most likely transmitted code word?
We want the code words to be as distinct as possible.
  In other words, we want the minimum distance to be large.

Error Detection and Correction Capability

A code with minimum distance dmin can detect any combination of up to (dmin - 1) errors.
A code with minimum distance dmin can correct any combination of up to t errors, where

  t = floor( (dmin - 1) / 2 )

t is called the error correction capability.
  There will be at least one combination of t+1 errors that causes a decoding error.
In general,

  dmin = 1 + (# detectable errors) + (# correctable errors)

Note: an error must be detectable before it can be corrected.

Classes of Block Codes

See section 1.2 of my dissertation for a brief history.
Perfect binary codes:
  Repetition
  Hamming: Richard Hamming, Bell Labs, 1946
  Golay: Marcel Golay, 1949
Imperfect binary codes:
  Reed Muller: Reed & Muller, 1954
  CRC: Prange, 1957
  BCH: Bose, Ray-Chaudhuri, Hocquenghem, 1959-60
Nonbinary codes:
  Reed Solomon: Reed & Solomon, 1960


Repetition Codes

Idea: repeat each bit n times.
  k = 1
  n = n (usually odd)
  r = 1/n
  dmin = n
  t = (n-1)/2
Example (n = 7):
  0 -> 0000000
  1 -> 1111111
Repetition codes are not very bandwidth efficient, and they are not particularly power efficient either.


Hamming Codes

For m = any positive integer:
  n = 2^m - 1
  k = 2^m - 1 - m
  Therefore there are (n-k) = m parity bits.
  dmin = 3
  t = 1
Example Hamming codes (see table 6-9):
  (7,4), m = 3, r = 4/7 ≈ 0.57
  (15,11), m = 4, r = 11/15 ≈ 0.73
  (63,57), m = 6, r = 57/63 ≈ 0.905
  (255,247), m = 8, r = 247/255 ≈ 0.969
Bluetooth uses a shortened (15,10) Hamming code.
  Shortened: dmin = 4.
  Our example was a shortened (7,3) Hamming code.

Golay Codes

A very specific (23,12) binary block code:
  n = 23, k = 12
  r = 12/23 ≈ 0.52
  dmin = 7, t = 3
The Golay code is called a perfect code:
  Every received word lies within distance t = 3 of exactly one Golay code word.
  Therefore, when the number of code bit errors is greater than 3, the Golay code will always be incorrectly decoded.
  The only other known perfect codes are the Hamming codes and the odd-length repetition codes.

Reed Muller Codes (1st order)

For m = any positive integer:
  n = 2^m
  k = m + 1
  dmin = 2^(m-1)
  t = floor( (2^(m-1) - 1) / 2 )
Examples:
  (8,4), m = 3, t = 1, r = 4/8 = 0.5
  (16,5), m = 4, t = 3, r = 5/16 ≈ 0.31
  (32,6), m = 5, t = 7, r = 6/32 = 0.1875
    Used by the Mariner and Viking missions to Mars.
  (64,7), m = 6, t = 15, r = 7/64 ≈ 0.11
Used in the UMTS 3G cellular system to send an indicator of the transport format:
  a (32,10) modified Reed Muller code.



CRC Codes

Cyclic Redundancy Check (CRC) codes are a broad class of cyclic codes.
  Cyclic means every cyclic shift of a code word is a valid code word.
  Broad: a wide range of parameters n, k, r, dmin, t.
The cyclic property makes it easy to build encoders and decoders.
  The encoder is just a linear feedback shift register with n-k memory cells (flip-flops).
Most codes can be made cyclic.
  Hamming, Golay, Reed Muller, etc.
  Making a code cyclic does not change dmin.
CRCs are widely used for error detection.
  They are not used as often for error correction.



BCH Codes

BCH: Bose-Chaudhuri-Hocquenghem.
A specific type of cyclic code.
For m = any positive integer:
  n = 2^m - 1
  k = any value
  There will exist some t such that

    t ≤ (n - k) / m

  The exact value of t is found by looking at a table.
    See table 6-11 in the text (pp. 439-440).
  dmin = 2t + 1

Reed Solomon Codes

Reed Solomon codes are a generalization of BCH codes.
  They are nonbinary BCH codes: the symbols are M-ary symbols instead of bits.
For m = any positive integer:
  M = 2^m, i.e. each symbol represents m bits.
    m = 8 is common (a symbol is then a byte).
  n = 2^m - 1 symbols
  k = any value < n
  dmin = n - k + 1
  t = floor( (n-k)/2 )
RS codes are maximum distance separable:
  They have the largest possible dmin of any code with the same values of n and k.

Reed Solomon Codes

Example uses of RS codes.
Let m = 8 and t = 4:
  (n,k) = (255, 247)
  M = 2^8 = 256
Could use this in conjunction with M-ary modulation.
  e.g. 256-FSK or 256-QAM.
Or could use this with binary modulation.
  Total number of code bits = n*m = 255*8 = 2040.
  There will be 255 symbols, and each symbol is actually 8 bits.
  A symbol error occurs if any one of the 8 bits is incorrect.
  When used with binary modulation, RS codes can correct burst errors.
    e.g. if there are m*t = 8*4 = 32 bit errors in a row, the RS code may still be able to correct the received code word.

Applications of RS Codes

Reed Solomon codes are the most common block codes used for error correction:
  Compact Disc (CD)
  Digital Versatile Disc (DVD)
  Deep space communication systems.
    Voyager, Galileo, Cassini.
    Usually as part of a more complex system involving concatenations of different types of codes (convolutional and RS).


Error Correction of Digitally Modulated Signals

Consider the following system:

[Block diagram: k data bits enter a rate r = k/n block encoder, which produces n code bits; the modulator maps them to s(t); AWGN n(t) is added in the channel to give r(t); a MAP detector produces hard estimates of the code bits, and the block decoder turns these into estimates of the data bits. This is hard-decision decoding.]

Error Probability:
Hard Decision Decoding

Assumptions:
  p is the probability that any single code symbol is in error.
    p = Ps for the type of modulation used (e.g. BPSK).
    Replace Eb/No with rEb/No in the symbol error rate expression.
  Errors occur independently.
  All combinations of t or fewer errors are correctable.
    Most combinations of more than t errors are not correctable.
    Perfect codes cannot correct any combination of more than t errors.
Then the code word error probability is:

  Pc ≈ sum_{i=t+1}^{n} C(n,i) p^i (1-p)^(n-i)
     = 1 - sum_{i=0}^{t} C(n,i) p^i (1-p)^(n-i)

with equality for perfect codes (Golay, Hamming).


Example:
Performance of (7,3) Code

Compute Pc for the example code if p = 0.01.
Since dmin = 4, t = 1:

  Pc ≈ sum_{i=2}^{7} C(7,i) (0.01)^i (0.99)^(7-i)
     = 1 - sum_{i=0}^{1} C(7,i) (0.01)^i (0.99)^(7-i)
     = 1 - (0.99)^7 - 7(0.01)(0.99)^6
     ≈ 0.00203
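A short sketch that reproduces this number and works for any (n, t, p) under the same assumptions (independent errors, bounded-distance decoding):

from math import comb

def codeword_error_prob(n, t, p):
    """P(code word error) ≈ P(more than t of the n code bits are in error)."""
    return 1.0 - sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1))

print(codeword_error_prob(7, 1, 0.01))   # about 0.00203 for the (7,3) example code
print(codeword_error_prob(23, 3, 0.01))  # the (23,12) Golay code at the same p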

Error Correction with Digital Modulation

The bit error probability is found by using the appropriate equation for the modulation that is being used.
However, we want performance as a function of Eb/No.
  Eb is the energy per data bit (not per code bit).
  The energy per code symbol is Es = rEb.
Therefore, we must replace Eb/No with rEb/No in all our error formulas for the different modulation types.


Example: Performance of (7,3) Code with BPSK Modulation

For uncoded BPSK:

  Pb = Q( sqrt( 2Eb/No ) )

For our coded system:

  p = Q( sqrt( 2rEb/No ) ) = Q( sqrt( 6Eb/(7No) ) )

Therefore the code word error probability is:

  Pc ≈ 1 - sum_{i=0}^{t} C(n,i) p^i (1-p)^(n-i)
     = 1 - (1-p)^7 - 7p(1-p)^6
     = 1 - (1-p)^6 (1 + 6p)
     = 1 - [ 1 - Q(sqrt(6Eb/(7No))) ]^6 [ 1 + 6 Q(sqrt(6Eb/(7No))) ]
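A compact sketch evaluating this expression numerically (SciPy is assumed to be available; Q is implemented via the complementary error function):

import numpy as np
from scipy.special import erfc

def Q(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * erfc(x / np.sqrt(2))

def pc_73_bpsk(ebno_db):
    """Code word error probability of the (7,3) example code (r = 3/7, t = 1) with BPSK."""
    ebno = 10 ** (ebno_db / 10)
    p = Q(np.sqrt(6 * ebno / 7))          # channel crossover probability
    return 1 - (1 - p)**6 * (1 + 6 * p)

for ebno_db in (4, 6, 8, 10):
    print(ebno_db, "dB ->", pc_73_bpsk(ebno_db))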

A More Interesting Example

Find the error probability for a (63,45) BCH code, which has t = 3 (see table 6-11).
First, compute p:

  p = Q( sqrt( 2rEb/No ) ) = Q( sqrt( 2(45)Eb/(63 No) ) ) = Q( sqrt( 10Eb/(7No) ) )

Then compute the code word error probability:

  Pc ≈ 1 - sum_{i=0}^{t} C(n,i) p^i (1-p)^(n-i)
     = 1 - sum_{i=0}^{3} C(63,i) p^i (1-p)^(63-i)
     = 1 - (1-p)^63 - 63 p (1-p)^62 - 1953 p^2 (1-p)^61 - 39,711 p^3 (1-p)^60


Performance of Golay Code: Hard-decision Decoding

(23,12), t = 3 Golay code.
BPSK modulation.
Hard decision decoding:

  Pc = sum_{i=4}^{23} C(23,i) p^i (1-p)^(23-i)
     = 1 - sum_{i=0}^{3} C(23,i) p^i (1-p)^(23-i)

The bound is exact because the Golay code is perfect.
Where:

  p = Q( sqrt( 2 (12/23) Eb / No ) )


Codeword Error Probability and Bit Error Probability

If a code word is received correctly, then all data bits will be correctly decoded.
If a code word is received incorrectly, then between 1 and k bits will be incorrect at the output of the decoder, so:

  (1/k) Pc ≤ Pb ≤ Pc

To find the exact BER, we need to know how many errors there are at the output of the decoder whenever it makes an error:
  Let β(i) be the average number of data bit errors when i code bits are in error.
Then:

  Pb ≈ sum_{i=t+1}^{n} ( β(i)/k ) C(n,i) p^i (1-p)^(n-i)
     ≈ ( β(t+1)/k ) Pc


BER of Golay code

To find the BER, we need to know β(i).
First find the distance spectrum of the code:

  d     a_d    total info weight   avg. info weight per code word
  7     253      924                3.65
  8     506     2112                4.17
  11   1288     7392                5.74
  12   1288     8064                6.26
  15    506     3960                7.83
  16    253     2112                8.35
  23      1       12               12

The most common error event at high SNR is decoding to a code word at distance dmin = 7, which corrupts 924/253 ≈ 3.65 of the 12 information bits on average, so

  Pb ≈ ( (924/253) / 12 ) Pc


Coding Gain

The coding gain is the difference between the uncoded and coded Eb/No required to achieve a desired Pb.
  Usually we use a reference of Pb = 10^-5.
The stronger the code, the higher the coding gain.

4/12/2007

Performance Curve for (23,12) Code


[Figure: BER vs. Eb/No (in dB) for uncoded BPSK and the (23,12) Golay code. Note that at low SNR the coded performance is actually worse than uncoded. The Golay code shows about a 2.1 dB coding gain, yet is still about 7.4 dB away from capacity; the capacity limit for this rate is at Eb/No = (2^(2R) - 1)/(2R) ≈ 0.07 dB.]

Soft Decision Decoding

With hard-decision decoding, a hard decision is made on the bits before decoding takes place.
  The input to the decoder is hard bit decisions {0, 1}.
Whenever a hard decision is made, valuable information is lost.
  We are interested not only in whether the receiver thinks the received code bit was a 0 or a 1, but also in how confident it was about that decision.
  The decoder should rely more on strong signals, and less on weaker signals.
Any type of decoder that uses soft information about the confidence of the bit decisions is called a soft-decision decoder.


Hard Decision Decoder for BPSK

Assume bits are equally likely.

[Block diagram: r(t) is correlated against the basis function over each bit interval Tb; the correlator output r is sliced to a hard bit decision ĉ, which feeds the block decoder to produce the data estimate x̂.]

This is where the hard decision is made:

  ĉ = 1 for r > 0
  ĉ = 0 for r ≤ 0

Information is lost! The slicer is essentially a 1-bit (2-level) quantizer.

Softer Decision Decoder for BPSK

Replace the 1-bit quantizer with a p-bit quantizer.

[Block diagram: r(t) is correlated against the basis function over Tb; the correlator output passes through a p-bit quantizer to produce rQ, which feeds the block decoder to produce x̂.]

Note that this requires a more complicated decoder.
  It must be able to work with more finely quantized samples from the output of the correlator.
  The benefit is that the decoder can place more confidence on strong signals and less confidence on weak signals.
  There is a large (~2 dB) performance gain even for p = 3 bits.

Soft Decision Decoder for BPSK

To achieve a fully soft decoder, simply pass the output of the correlator to the decoder.
  This is equivalent to letting p -> infinity.

[Block diagram: r(t) is correlated against the basis function over Tb; the unquantized correlator output goes directly to the block decoder, which produces x̂.]

The vector r contains more information than the hard decisions ĉ.
  This follows from the data processing theorem.
  Therefore we can obtain better performance with soft-decision decoding.


Comments on
Soft Decision Decoding

Hard-decision decoding chooses the code word with the smallest Hamming distance from the received word.
Soft-decision decoding chooses the code word with the smallest Euclidean distance from the received vector.
For block codes, soft-decision decoders are usually much more complex than hard-decision decoders.
  However, soft-decision decoding is easy for convolutional codes.
  Efficient soft-decision decoding algorithms exist for RS codes.
    This is a hot research topic right now.


Performance of
Soft Decision Decoding

Calculate the pairwise error probability between all pairs of code words i ≠ j:

  P2(ci, cj) = P( ĉ = cj | c = ci ) = Q( d(ci, cj) / sqrt(2 No) )

where d(ci, cj) is the Euclidean distance between the modulated code words ci and cj.
Euclidean distance is related to Hamming distance; the relationship depends on the type of modulation.
For BPSK:

  d(ci, cj) = 2 sqrt( Eb r dH(ci, cj) )


Performance of
Soft Decision Decoding

Apply the union bound to compute the overall code word error probability, assuming the 2^k code words are equally likely:

  Pc ≤ (1/2^k) sum_{i=1}^{2^k} sum_{j != i} P2(ci, cj)
     = (1/2^k) sum_{i=1}^{2^k} sum_{j != i} Q( sqrt( 2 Eb r dH(ci, cj) / No ) )

Assume a linear code:
  The conditional probability of error is the same for all possible transmitted code words (the uniform error property), so we can just assume that the all-zeros code word was sent:

  Pc ≤ sum_{j: wj != 0} Q( sqrt( 2 Eb r wj / No ) )
     = sum_{d=dmin}^{n} a_d Q( sqrt( 2 Eb r d / No ) )

where a_d is the number of code words of weight w = d.
For high SNR, performance is dominated by the code words of weight w = dmin (the free distance asymptote):

  Pc ≈ a_dmin Q( sqrt( 2 Eb r dmin / No ) )


Bit Error Rate

Now use the total information weight B_d of the code words of weight d:

  Pb ≈ sum_{d=dmin}^{n} ( B_d / k ) Q( sqrt( 2 Eb r d / No ) )
     ≈ ( B_dmin / k ) Q( sqrt( 2 Eb r dmin / No ) )


Weight Distribution

The weight distribution or distance spectrum is the number of code words a_d of each possible weight d.
Example: Golay code (table 8-1-1 of Proakis), together with its information weights:

  d     a_d    total info weight B_d   B_d / a_d
  7     253      924                    3.65
  8     506     2112                    4.17
  11   1288     7392                    5.74
  12   1288     8064                    6.26
  15    506     3960                    7.83
  16    253     2112                    8.35
  23      1       12                   12


Performance of Golay Code: Soft Decision Decoding

Soft decision decoding, BPSK modulation:

  Pc ≤ sum_{d=7}^{23} a_d Q( sqrt( 2 Eb (12/23) d / No ) )

     =  253 Q( sqrt( 2 Eb (12/23)(7) / No ) )  +  506 Q( sqrt( 2 Eb (12/23)(8) / No ) )
     + 1288 Q( sqrt( 2 Eb (12/23)(11) / No ) ) + 1288 Q( sqrt( 2 Eb (12/23)(12) / No ) )
     +  506 Q( sqrt( 2 Eb (12/23)(15) / No ) ) +  253 Q( sqrt( 2 Eb (12/23)(16) / No ) )
     +      Q( sqrt( 2 Eb (12/23)(23) / No ) )

     ≈  253 Q( sqrt( 2 Eb (12/23)(7) / No ) )    at high SNR

Performance of Golay Code: BER of Soft Decision Decoding
  Pb ≈ sum_{d=7}^{23} ( B_d / 12 ) Q( sqrt( 2 Eb (12/23) d / No ) )

     =  (924/12) Q( sqrt( 2 Eb (12/23)(7) / No ) )  + (2112/12) Q( sqrt( 2 Eb (12/23)(8) / No ) )
     + (7392/12) Q( sqrt( 2 Eb (12/23)(11) / No ) ) + (8064/12) Q( sqrt( 2 Eb (12/23)(12) / No ) )
     + (3960/12) Q( sqrt( 2 Eb (12/23)(15) / No ) ) + (2112/12) Q( sqrt( 2 Eb (12/23)(16) / No ) )
     +   (12/12) Q( sqrt( 2 Eb (12/23)(23) / No ) )

     ≈ 77 Q( sqrt( 2 Eb (12/23)(7) / No ) )    at high SNR
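A numerical sketch of these two union bounds, using the distance spectrum table from the earlier slide (SciPy is assumed for the Q function; the helper names are mine):

import numpy as np
from scipy.special import erfc

def Q(x):
    return 0.5 * erfc(x / np.sqrt(2))

# Golay (23,12) distance spectrum: weight d -> (a_d, total information weight B_d)
spectrum = {7: (253, 924), 8: (506, 2112), 11: (1288, 7392), 12: (1288, 8064),
            15: (506, 3960), 16: (253, 2112), 23: (1, 12)}
r, k = 12 / 23, 12

def golay_soft_bounds(ebno_db):
    """Union bounds on code word (Pc) and bit (Pb) error probability for soft-decision BPSK."""
    ebno = 10 ** (ebno_db / 10)
    pc = sum(a * Q(np.sqrt(2 * ebno * r * d)) for d, (a, _) in spectrum.items())
    pb = sum(B / k * Q(np.sqrt(2 * ebno * r * d)) for d, (_, B) in spectrum.items())
    return pc, pb

print(golay_soft_bounds(6.0))   # at high Eb/No both bounds are dominated by the d = 7 term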


Soft-decision vs. Hard-decision Decoding of Golay Code
[Figure: BER vs. Eb/No (in dB) comparing uncoded BPSK, hard-decision decoding, and soft-decision decoding of the (23,12) Golay code; the union bound and the minimum-distance asymptote for soft-decision decoding are also shown. There is about a 2 dB difference between soft- and hard-decision decoding at high Eb/No.]
