
Convolutional Codes

ECEN 5682 Theory and Practice of Error Control Codes

Peter Mathys

University of Colorado

Spring 2007


Basic Definitions, Convolutional Encoders

Linear (n, k) block codes take k data symbols at a time and encode
them into n code symbols. Long data sequences are broken up into
blocks of k symbols and each block is encoded independently of all
others. Convolutional encoders, on the other hand, convert an
entire data sequence, regardless of its length, into a single code
sequence by using convolution and multiplexing operations. In
general, it is convenient to assume that both the data sequences
(u_0, u_1, ...) and the code sequences (c_0, c_1, ...) are semi-infinite
sequences and to express them in the form of a power series.


Definition: The power series associated with the data sequence
u = (u_0, u_1, u_2, ...) is defined as

    u(D) = u_0 + u_1 D + u_2 D^2 + ... = \sum_{i=0}^{\infty} u_i D^i ,

where u(D) is called the data power series. Similarly, the code
power series c(D) associated with the code sequence
c = (c_0, c_1, c_2, ...) is defined as

    c(D) = c_0 + c_1 D + c_2 D^2 + ... = \sum_{i=0}^{\infty} c_i D^i .

The indeterminate D has the meaning of delay, similar to z^{-1} in
the z-transform, and D is sometimes called the delay operator.


A general rate R = k/n convolutional encoder converts k data
sequences into n code sequences using a k × n transfer function
matrix G(D) as shown in the following figure.

[Figure: u(D) enters a demultiplexer that splits it into u^(1)(D), ..., u^(k)(D); these feed the convolutional encoder G(D), whose outputs c^(1)(D), ..., c^(n)(D) are multiplexed into c(D).]

Fig.1 Block Diagram of a k-Input, n-Output Convolutional Encoder


The data power series u(D) is split up into k subsequences,
denoted u^(1)(D), u^(2)(D), ..., u^(k)(D) in power series notation,
using a demultiplexer whose details are shown in the figure below.

[Figure: u(D) = (u_0, u_1, ..., u_{k-1}, u_k, u_{k+1}, ..., u_{2k-1}, u_{2k}, ...) is distributed cyclically over the k streams, so that u^(h)(D) = (u^(h)_0, u^(h)_1, u^(h)_2, ...) with u^(h)_i = u_{ik+h-1}.]

Fig.2 Demultiplexing from u(D) into u^(1)(D), ..., u^(k)(D)


The code subsequences, denoted by c^(1)(D), c^(2)(D), ..., c^(n)(D)
in power series notation, at the output of the convolutional
encoder are multiplexed into a single power series c(D) for
transmission over a channel, as shown below.

[Figure: the streams c^(1)(D), ..., c^(n)(D) are interleaved cyclically, so that c(D) = (c_0, c_1, ..., c_{n-1}, c_n, c_{n+1}, ..., c_{2n-1}, ...) with c_{in+l-1} = c^(l)_i.]

Fig.3 Multiplexing of c^(1)(D), ..., c^(n)(D) into Single Output c(D)
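Both operations are one-liners in code. Below is a minimal Python sketch; the function names demux and mux are illustrative, not from the slides.

    def demux(u, k):
        """Split u into k subsequences, one per encoder input."""
        return [u[h::k] for h in range(k)]

    def mux(streams):
        """Interleave n code subsequences frame by frame into one sequence."""
        return [sym for frame in zip(*streams) for sym in frame]

    # k = 2: u = (11, 01, 00, ...) -> u^(1) = (1, 0, 0, ...), u^(2) = (1, 1, 0, ...)
    print(demux([1, 1, 0, 1, 0, 0], 2))   # -> [[1, 0, 0], [1, 1, 0]]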


Definition: A q-ary generator polynomial of degree m is a
polynomial in D of the form

    g(D) = g_0 + g_1 D + g_2 D^2 + ... + g_m D^m = \sum_{i=0}^{m} g_i D^i ,

with m + 1 q-ary coefficients g_i. The degree m is also called the
memory order of g(D).


Consider computing the product (using modulo q arithmetic)

    c(D) = u(D) g(D) .

Written out, this looks as follows:

    c_0 + c_1 D + c_2 D^2 + ...
      = (g_0 + g_1 D + g_2 D^2 + ... + g_m D^m)(u_0 + u_1 D + u_2 D^2 + ...)
      =   g_0 u_0 + g_0 u_1 D + g_0 u_2 D^2 + ... + g_0 u_m D^m     + g_0 u_{m+1} D^{m+1} + ...
        + g_1 u_0 D + g_1 u_1 D^2 + ...           + g_1 u_{m-1} D^m + g_1 u_m D^{m+1}     + ...
        + g_2 u_0 D^2 + ...                       + g_2 u_{m-2} D^m + g_2 u_{m-1} D^{m+1} + ...
        + ...
        + g_m u_0 D^m                                               + g_m u_1 D^{m+1}     + ...

Thus, the coefficients of c(D) are

    c_j = \sum_{i=0}^{m} g_i u_{j-i} ,   j = 0, 1, 2, ... ,   where u_l = 0 for l < 0 ,

i.e., the code sequence (c_0, c_1, c_2, ...) is the convolution of the
data sequence (u_0, u_1, u_2, ...) with the generator sequence
(g_0, g_1, ..., g_m).

A convenient way to implement the convolution

    c_j = \sum_{i=0}^{m} g_i u_{j-i} ,   j = 0, 1, 2, ... ,   where u_l = 0 for l < 0 ,

is to use a shift register with m memory cells (cleared to zero at
time t = 0), as shown in the following figure.

[Figure: the data symbols ..., u_2, u_1, u_0 enter a cascade of m memory cells; the taps g_0, g_1, g_2, ..., g_m weight the current input and the cell contents, and the tap outputs are summed to produce ..., c_2, c_1, c_0.]

Fig.4 Block Diagram for Convolution of u(D) with g(D)
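In code the convolution takes only a few lines. The following is a minimal sketch under the stated convention u_l = 0 for l < 0, truncated to len(u) output symbols; the function name is illustrative.

    def convolve(u, g, q=2):
        """c_j = sum_{i=0}^{m} g_i * u_{j-i} (mod q), with u_l = 0 for l < 0."""
        m = len(g) - 1
        return [sum(g[i] * u[j - i] for i in range(m + 1) if j - i >= 0) % q
                for j in range(len(u))]

    # g(D) = 1 + D + D^2 applied to u = (1, 1, 0, 1):
    print(convolve([1, 1, 0, 1], [1, 1, 1]))   # -> [1, 0, 0, 0]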


A general k-input, n-output convolutional encoder consists of k
such shift registers, each of which is connected to the outputs via
n generator polynomials.

Definition: A q-ary linear and time-invariant convolutional encoder
with k inputs and n outputs is specified by a k × n matrix G(D),
called the transfer function matrix, which consists of generator
polynomials g_h^(l)(D), h = 1, 2, ..., k, l = 1, 2, ..., n, as follows:

           [ g_1^(1)(D)  g_1^(2)(D)  ...  g_1^(n)(D) ]
    G(D) = [ g_2^(1)(D)  g_2^(2)(D)  ...  g_2^(n)(D) ]
           [     ...         ...             ...     ]
           [ g_k^(1)(D)  g_k^(2)(D)  ...  g_k^(n)(D) ]

The generator polynomials have q-ary coefficients, degree m_hl, and
are of the form

    g_h^(l)(D) = g_0h^(l) + g_1h^(l) D + g_2h^(l) D^2 + ... + g_{m_hl}h^(l) D^{m_hl} .

Define the power series vectors

    u(D) = [u^(1)(D), u^(2)(D), ..., u^(k)(D)] ,
    c(D) = [c^(1)(D), c^(2)(D), ..., c^(n)(D)] .

The operation of a k-input, n-output convolutional encoder can
then be concisely expressed as c(D) = u(D) G(D). Each individual
output sequence is obtained as

    c^(l)(D) = \sum_{h=1}^{k} u^(h)(D) g_h^(l)(D) .

Note: By setting u^(h)(D) = 1 in the above equation, it is easily
seen that the generator sequence

    (g_0h^(l), g_1h^(l), g_2h^(l), ..., g_{m_hl}h^(l)) ,

is the unit impulse response from input h to output l of the
convolutional encoder.
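Altogether, the encoder is k·n such convolutions followed by the multiplexer. A minimal sketch that builds on the illustrative demux, mux, and convolve helpers introduced above:

    def encode(u, G, q=2):
        """Encode data sequence u with transfer function matrix G.

        G[h][l] holds the coefficients of g_{h+1}^{(l+1)}(D), lowest power first.
        Implements c^(l)(D) = sum_h u^(h)(D) g_h^(l)(D), then multiplexes.
        """
        k, n = len(G), len(G[0])
        subseqs = demux(u, k)
        c_sub = []
        for l in range(n):
            acc = [0] * len(subseqs[0])
            for h in range(k):
                for j, x in enumerate(convolve(subseqs[h], G[h][l], q)):
                    acc[j] = (acc[j] + x) % q
            c_sub.append(acc)
        return mux(c_sub)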

Definition: The total memory M of a convolutional encoder is the
total number of memory elements in the encoder, i.e.,

    M = \sum_{h=1}^{k} max_{1<=l<=n} m_hl .

Note that max_{1<=l<=n} m_hl is the number of memory cells, or the
memory order, of the shift register for the input with index h.

Definition: The maximal memory order m of a convolutional
encoder is the length of the longest input shift register, i.e.,

    m = max_{1<=h<=k} max_{1<=l<=n} m_hl .

Equivalently, m is equal to the highest degree of any of the
generator polynomials in G(D).


Definition: The constraint length K of a convolutional encoder is
the maximum number of symbols in a single output stream that
can be affected by any input symbol, i.e.,

    K = 1 + m = 1 + max_{1<=h<=k} max_{1<=l<=n} m_hl .

Note: This definition of constraint length is not in universal use.
Some authors define the constraint length to be the maximum number
of symbols in all output streams that can be affected by any input
symbol, which is nK in the notation used here.


Example: Encoder #1. Binary rate R = 1/2 encoder with
constraint length K = 3 and transfer function matrix

    G(D) = [ g^(1)(D)   g^(2)(D) ] = [ 1 + D^2   1 + D + D^2 ] .

A block diagram for this encoder is shown in the figure below.

[Figure: the data symbols ..., u_2, u_1, u_0 feed a 2-cell shift register; c^(1) sums the input and the second cell (taps 1 + D^2), and c^(2) sums the input and both cells (taps 1 + D + D^2).]

Fig.5 Binary Rate 1/2 Convolutional Encoder with K = 3


At time t = 0 the contents of the two memory cells are assumed to
be zero. Using this encoder, the data sequence

    u = (u_0, u_1, u_2, ...) = (1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, ...) ,

for example, is encoded as follows:

    u     = 110100111010...        u     = 110100111010...
    uD^2  =   110100111010...      uD    =  110100111010...
                                   uD^2  =   110100111010...
    ------------------------       ------------------------
    c^(1) = 111001110100...        c^(2) = 100011101001...

After multiplexing this becomes

    c = (c_0 c_1, c_2 c_3, c_4 c_5, ...) = (c^(1)_0 c^(2)_0, c^(1)_1 c^(2)_1, c^(1)_2 c^(2)_2, ...)
      = (11, 10, 10, 00, 01, 11, 11, 10, 01, 10, 00, 01, ...) .

The pairs of code symbols that each data symbol generates are
called code frames.
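The hand computation above can be reproduced with the encode sketch given earlier, entering G(D) as coefficient lists in increasing powers of D:

    G1 = [[[1, 0, 1], [1, 1, 1]]]        # one input: g^(1) = 1+D^2, g^(2) = 1+D+D^2
    u = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0]
    print(encode(u, G1)[:8])             # -> [1, 1, 1, 0, 1, 0, 0, 0], frames 11, 10, 10, 00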

Definition: Consider a rate R = k/n convolutional encoder, let

    u = (u_0 u_1 ... u_{k-1}, u_k u_{k+1} ... u_{2k-1}, u_{2k} u_{2k+1} ... u_{3k-1}, ...)
      = (u^(1)_0 u^(2)_0 ... u^(k)_0, u^(1)_1 u^(2)_1 ... u^(k)_1, u^(1)_2 u^(2)_2 ... u^(k)_2, ...) ,

and let

    c = (c_0 c_1 ... c_{n-1}, c_n c_{n+1} ... c_{2n-1}, c_{2n} c_{2n+1} ... c_{3n-1}, ...)
      = (c^(1)_0 c^(2)_0 ... c^(n)_0, c^(1)_1 c^(2)_1 ... c^(n)_1, c^(1)_2 c^(2)_2 ... c^(n)_2, ...) .

Then the set of data symbols (u_{ik} u_{ik+1} ... u_{(i+1)k-1}) is called the
i-th data frame and the corresponding set of code symbols
(c_{in} c_{in+1} ... c_{(i+1)n-1}) is called the i-th code frame for
i = 0, 1, 2, ... .


Example: Encoder #2. Binary rate R = 2/3 encoder with constraint length
K = 2 and transfer function matrix

    G(D) = [ g_1^(1)(D)  g_1^(2)(D)  g_1^(3)(D) ]   [ 1+D   D   1+D ]
           [ g_2^(1)(D)  g_2^(2)(D)  g_2^(3)(D) ] = [  D    1    1  ]

A block diagram for this encoder is shown in the figure below.

[Figure: each of the two input streams ..., u^(1)_1, u^(1)_0 and ..., u^(2)_1, u^(2)_0 feeds its own single memory cell; modulo-2 adders combine the current inputs and the cell contents according to G(D) to form the outputs c^(1), c^(2), c^(3).]

Fig.6 Binary Rate 2/3 Convolutional Encoder with K = 2


In this case the data sequence

    u = (u_0 u_1, u_2 u_3, u_4 u_5, ...) = (11, 01, 00, 11, 10, 10, ...) ,

is first demultiplexed into u^(1) = (1, 0, 0, 1, 1, 1, ...) and
u^(2) = (1, 1, 0, 1, 0, 0, ...), and then encoded as follows:

    u^(1)  = 100111...     u^(1)D = 100111...     u^(1)  = 100111...
    u^(1)D =  100111...    u^(2)  = 110100...     u^(1)D =  100111...
    u^(2)D =  110100...                           u^(2)  = 110100...
    -----------------      ------------------    -----------------
    c^(1)  = 101110...     c^(2)  = 100111...     c^(3)  = 000000...

Multiplexing the code sequences c^(1), c^(2), and c^(3) yields the
single code sequence

    c = (c_0 c_1 c_2, c_3 c_4 c_5, ...) = (c^(1)_0 c^(2)_0 c^(3)_0, c^(1)_1 c^(2)_1 c^(3)_1, ...)
      = (110, 000, 100, 110, 110, 010, ...) .

Because this is a rate 2/3 encoder, data frames of length 2 are
encoded into code frames of length 3.
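The rate 2/3 example follows the same pattern with the encode sketch from above, entering G(D) as a 2 × 3 array of coefficient lists:

    G2 = [[[1, 1], [0, 1], [1, 1]],      # g_1^(1) = 1+D, g_1^(2) = D, g_1^(3) = 1+D
          [[0, 1], [1, 0], [1, 0]]]      # g_2^(1) = D,   g_2^(2) = 1, g_2^(3) = 1
    u = [1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0]
    print(encode(u, G2)[:9])             # -> [1, 1, 0, 0, 0, 0, 1, 0, 0], frames 110, 000, 100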

Definition: Let u = (u_0, u_1, u_2, ...) be a data sequence (before
demultiplexing) and let c = (c_0, c_1, c_2, ...) be the corresponding
code sequence (after multiplexing). Then, in analogy to block
codes, the generator matrix G of a convolutional encoder is defined
such that

    c = u G .

Note that G for a convolutional encoder has infinitely many rows
and columns.

Let G(D) = [g_h^(l)(D)] be the transfer function matrix of a
convolutional encoder with generator polynomials
g_h^(l)(D) = \sum_{i=0}^{m} g_ih^(l) D^i, h = 1, 2, ..., k, l = 1, 2, ..., n, where m
is the maximal memory order of the encoder. Define the matrices

          [ g_i1^(1)  g_i1^(2)  ...  g_i1^(n) ]
    G_i = [ g_i2^(1)  g_i2^(2)  ...  g_i2^(n) ]    ,    i = 0, 1, 2, ..., m .
          [   ...       ...            ...    ]
          [ g_ik^(1)  g_ik^(2)  ...  g_ik^(n) ]


In terms of these matrices, the generator matrix G can be
conveniently expressed as (all entries below the diagonal are zero)

        [ G_0  G_1  G_2  ...  G_m                          ]
        [      G_0  G_1  ...  G_{m-1}  G_m                 ]
    G = [           G_0  ...  G_{m-2}  G_{m-1}  G_m        ]  .
        [                ...   ...      ...      ...       ]
        [                     G_0      G_1      G_2   ...  ]
        [                              G_0      G_1   ...  ]
        [                                       G_0   ...  ]

Note that the first row of this matrix is the unit impulse response
(after multiplexing the outputs) from input stream 1, the second
row is the unit impulse response (after multiplexing the outputs)
from input stream 2, etc.

Example: Encoder #1 has m = 2,

    G_0 = [1 1] ,   G_1 = [0 1] ,   G_2 = [1 1] ,

and thus generator matrix

        [ 11  01  11  00  00  00  ... ]
        [ 00  11  01  11  00  00  ... ]
    G = [ 00  00  11  01  11  00  ... ]  .
        [ 00  00  00  11  01  11  ... ]
        [ ...                         ]

Using this, it is easy to compute, for example, the list of (non-zero)
datawords and corresponding codewords shown below.


    u = (u_0, u_1, ...)    c = (c_0 c_1, c_2 c_3, ...)
    1,0,0,0,0,...          11,01,11,00,00,00,00,...
    1,1,0,0,0,...          11,10,10,11,00,00,00,...
    1,0,1,0,0,...          11,01,00,01,11,00,00,...
    1,1,1,0,0,...          11,10,01,10,11,00,00,...
    1,0,0,1,0,...          11,01,11,11,01,11,00,...
    1,1,0,1,0,...          11,10,10,00,01,11,00,...
    1,0,1,1,0,...          11,01,00,10,10,11,00,...
    1,1,1,1,0,...          11,10,01,01,10,11,00,...

One thing that can be deduced from this list is that most likely the
minimum weight of any non-zero codeword is 5, and thus, because
convolutional codes are linear, the minimum distance, called the
minimum free distance for convolutional codes for historical
reasons, is dfree = 5.
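This kind of search can be mechanized. The sketch below builds a truncated generator matrix from the blocks G_0, ..., G_m and scans all non-zero datawords of a given length for the minimum codeword weight. Since only finitely many datawords are tried, the result is evidence for, not a proof of, the value of dfree; all names are illustrative.

    from itertools import product

    def truncated_G(blocks, k, n, L):
        """Rows of the generator matrix for L data frames (kL x n(L+m))."""
        m = len(blocks) - 1
        rows = []
        for i in range(L):
            for r in range(k):
                row = [0] * (n * (L + m))
                for j, B in enumerate(blocks):
                    for col in range(n):
                        row[n * (i + j) + col] = B[r][col]
                rows.append(row)
        return rows

    def min_codeword_weight(blocks, k, n, L, q=2):
        G = truncated_G(blocks, k, n, L)
        best = None
        for u in product(range(q), repeat=k * L):
            if not any(u):
                continue                   # skip the all-zero dataword
            c = [sum(ui * g for ui, g in zip(u, col)) % q for col in zip(*G)]
            w = sum(1 for s in c if s != 0)
            best = w if best is None else min(best, w)
        return best

    # Encoder #1: G0 = [1 1], G1 = [0 1], G2 = [1 1]
    print(min_codeword_weight([[[1, 1]], [[0, 1]], [[1, 1]]], k=1, n=2, L=6))  # -> 5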


Example: Encoder #2 has m = 1,

    G_0 = [ 1 0 1 ]      G_1 = [ 1 1 1 ]
          [ 0 1 1 ] ,          [ 1 0 0 ] ,

and therefore generator matrix

        [ 101  111  000  000  000  ... ]
        [ 011  100  000  000  000  ... ]
        [ 000  101  111  000  000  ... ]
        [ 000  011  100  000  000  ... ]
    G = [ 000  000  101  111  000  ... ]  .
        [ 000  000  011  100  000  ... ]
        [ 000  000  000  101  111  ... ]
        [ 000  000  000  011  100  ... ]
        [ ...                          ]


The first few non-zero codewords that this encoder produces are

    u = (u_0 u_1, ...)    c = (c_0 c_1 c_2, ...)
    10,00,00,...          101,111,000,000,...
    01,00,00,...          011,100,000,000,...
    11,00,00,...          110,011,000,000,...
    10,10,00,...          101,010,111,000,...
    01,10,00,...          011,001,111,000,...
    11,10,00,...          110,110,111,000,...
    10,01,00,...          101,100,100,000,...
    01,01,00,...          011,111,100,000,...
    11,01,00,...          110,000,100,000,...
    10,11,00,...          101,001,011,000,...
    01,11,00,...          011,010,011,000,...
    11,11,00,...          110,101,011,000,...


Definition: The code generated by a q-ary convolutional encoder
with transfer function matrix G(D) is the set of all vectors of
semi-infinite sequences of encoded symbols c(D) = u(D) G(D),
where u(D) is any vector of q-ary data sequences.

Definition: Two convolutional encoders with transfer function
matrices G_1(D) and G_2(D) are said to be equivalent if they
generate the same codes.

Definition: A systematic convolutional encoder is a convolutional
encoder whose codewords have the property that each data frame
appears unaltered in the first k positions of the first code frame
that it affects.

Note: When dealing with convolutional codes and encoders it is
important to carefully distinguish between the properties of the
code (e.g., the minimum distance of a code) and the properties of
the encoder (e.g., whether an encoder is systematic or not).


Example: Neither encoder #1 nor encoder #2 is systematic. But
the following binary rate 1/3 encoder, which will be called encoder
#3, with constraint length K = 4 and transfer function matrix

    G(D) = [ 1   1 + D + D^3   1 + D + D^2 + D^3 ] ,

is a systematic convolutional encoder. Its generator matrix is

        [ 111  011  001  011  000  000  000  ... ]
    G = [ 000  111  011  001  011  000  000  ... ]  .
        [ 000  000  111  011  001  011  000  ... ]
        [ 000  000  000  111  011  001  011  ... ]
        [ ...                                    ]

Note that the first column of each triplet of columns has only a
single 1 in it, so that the first symbol in each code frame is the
corresponding data symbol from the data sequence u.

Much more interesting systematic encoders can be obtained if one
allows not only FIR (finite impulse response), but also IIR (infinite
impulse response) filters in the encoder. In terms of the transfer
function matrix G(D), this means that the use of rational
polynomial expressions instead of generator polynomials as matrix
elements is allowed. The following example illustrates this.

Example: Encoder #4. Binary rate R = 1/3 systematic encoder
with constraint length K = 4 and rational transfer function matrix

    G(D) = [ 1   (1 + D + D^3)/(1 + D^2 + D^3)   (1 + D + D^2 + D^3)/(1 + D^2 + D^3) ] .

A block diagram of this encoder is shown in the next figure.



[Figure: systematic IIR realization; c^(1)(D) = u(D) is passed through directly, while a single 3-cell shift register with feedback taps given by 1 + D^2 + D^3 and feedforward taps given by 1 + D + D^3 and 1 + D + D^2 + D^3 forms c^(2)(D) and c^(3)(D).]

Fig.7 Binary Rate 1/3 Systematic Convolutional Encoder with K = 4
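A rational entry b(D)/a(D) can be realized in code with a feedback (IIR) filter over GF(2). The following is a minimal sketch (a_0 = 1 is assumed; the function name is illustrative):

    def iir_gf2(u, b, a):
        """Filter u(D) by b(D)/a(D) over GF(2), i.e. solve c(D) a(D) = u(D) b(D).

        Direct-form recursion: c_j = sum_i b_i u_{j-i} + sum_{i>=1} a_i c_{j-i} (mod 2).
        """
        c = []
        for j in range(len(u)):
            s = sum(b[i] * u[j - i] for i in range(len(b)) if j - i >= 0)
            s += sum(a[i] * c[j - i] for i in range(1, len(a)) if j - i >= 0)
            c.append(s % 2)
        return c

    # Unit impulse into (1 + D + D^3)/(1 + D^2 + D^3): the response never dies out
    print(iir_gf2([1, 0, 0, 0, 0, 0, 0, 0], [1, 1, 0, 1], [1, 0, 1, 1]))
    # -> [1, 1, 1, 1, 0, 0, 1, 0]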


Encoder State Diagrams

Convolutional encoders have total memory M. Thus, a
time-invariant q-ary encoder can be regarded as a finite state
machine (FSM) with q^M states and it can be completely described
by a state transition diagram called the encoder state diagram. Such a
state diagram can be used to encode a data sequence of arbitrary
length. In addition, the encoder state diagram can also be used to
obtain important information about the performance of a
convolutional code and its associated encoder.


Example: Encoder state diagram for encoder #1. This is a binary encoder with
G(D) = [1 + D^2   1 + D + D^2] that uses 2 memory cells and thus has 2^2 = 4
states. With reference to the block diagram in Figure 5, label the encoder
states as follows:

    S0 = 00 ,   S1 = 10 ,   S2 = 01 ,   S3 = 11 ,

where the first binary digit corresponds to the content of the first (leftmost)
delay cell of the encoder, and the second digit corresponds to the content of
the second delay cell.

At any given time t (measured in frames), the encoder is in a particular state
S^(t). The next state, S^(t+1), at time t + 1 depends on the value of the data
frame at time t, which in the case of a rate R = 1/2 encoder is just simply u_t.
The code frame c_t^(1) c_t^(2) that the encoder outputs at time t depends only on
S^(t) and u_t (and the transfer function matrix G(D), of course). Thus, the
possible transitions between the states are labeled with u_t / c_t^(1) c_t^(2). The
resulting encoder state diagram for encoder #1 is shown in the following figure.


[Figure: encoder state diagram with transitions labeled u_t / c_t^(1) c_t^(2):
S0 -> S0 : 0/00,   S0 -> S1 : 1/11,   S1 -> S2 : 0/01,   S1 -> S3 : 1/10,
S2 -> S0 : 0/11,   S2 -> S1 : 1/00,   S3 -> S2 : 0/10,   S3 -> S3 : 1/01.]

Fig.8 Encoder State Diagram for Binary Rate 1/2 Encoder with K = 3

To encode the data sequence u = (0, 1, 0, 1, 1, 1, 0, 0, 1, ...), for instance, start
in S0 at t = 0, return to S0 at t = 1 because u_0 = 0, then move on to S1 at
t = 2, S2 at t = 3, S1 at t = 4, S3 at t = 5, S3 at t = 6 (self loop around S3),
S2 at t = 7, S0 at t = 8, and finally S1 at t = 9. The resulting code sequence
(after multiplexing) is

    c = (00, 11, 01, 00, 10, 01, 10, 11, 11, ...) .

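The state diagram is equivalent to a transition table, which makes table-driven encoding a few lines of code. A minimal sketch for encoder #1, with the tables read directly off Fig.8 (names illustrative):

    # next_state[s][u] and output[s][u] for encoder #1, states S0..S3 as in Fig.8
    next_state = {0: (0, 1), 1: (2, 3), 2: (0, 1), 3: (2, 3)}
    output = {0: ('00', '11'), 1: ('01', '10'), 2: ('11', '00'), 3: ('10', '01')}

    def fsm_encode(bits, s=0):
        """Encode a bit sequence by walking the encoder state diagram."""
        frames = []
        for u in bits:
            frames.append(output[s][u])
            s = next_state[s][u]
        return frames

    print(fsm_encode([0, 1, 0, 1, 1, 1, 0, 0, 1]))
    # -> ['00', '11', '01', '00', '10', '01', '10', '11', '11']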

Example: Encoder state diagram for encoder #2 with

    G(D) = [ 1+D   D   1+D ]
           [  D    1    1  ]

and block diagram as shown in Figure 6. This encoder also has
M = 2, but each of the two memory cells receives its input at time
t from a different data stream. The following convention is used to
label the 4 possible states (the upper bit corresponds to the upper
memory cell in Figure 6):

    S0 = [0]    S1 = [1]    S2 = [0]    S3 = [1]
         [0] ,       [0] ,       [1] ,       [1] .

Because the encoder has rate R = 2/3, the transitions in the
encoder state diagram from time t to time t + 1 are now labeled
with u_t^(1) u_t^(2) / c_t^(1) c_t^(2) c_t^(3). The result is shown in the next figure.

[Figure: encoder state diagram with transitions labeled u_t^(1) u_t^(2) / c_t^(1) c_t^(2) c_t^(3):
S0 -> S0 : 00/000,  S0 -> S1 : 10/101,  S0 -> S2 : 01/011,  S0 -> S3 : 11/110,
S1 -> S0 : 00/111,  S1 -> S1 : 10/010,  S1 -> S2 : 01/100,  S1 -> S3 : 11/001,
S2 -> S0 : 00/100,  S2 -> S1 : 10/001,  S2 -> S2 : 01/111,  S2 -> S3 : 11/010,
S3 -> S0 : 00/011,  S3 -> S1 : 10/110,  S3 -> S2 : 01/000,  S3 -> S3 : 11/101.]

Fig.9 Encoder State Diagram for Binary Rate 2/3 Encoder with K = 2


Example: The figure below shows the encoder state
diagram for encoder #4 whose block diagram was given in Figure
7. This encoder has rational transfer function matrix

    G(D) = [ 1   (1 + D + D^3)/(1 + D^2 + D^3)   (1 + D + D^2 + D^3)/(1 + D^2 + D^3) ] ,

and M = 3. The encoder states are labeled using the following
convention (the leftmost bit corresponds to the leftmost memory
cell in Figure 7):

    S0 = 000 ,  S1 = 100 ,  S2 = 010 ,  S3 = 110 ,
    S4 = 001 ,  S5 = 101 ,  S6 = 011 ,  S7 = 111 .


[Figure: 8-state encoder state diagram; transitions are labeled u_t / c_t^(1) c_t^(2) c_t^(3), and each of the eight branch labels 0/000, 1/111, 0/011, 1/100, 0/010, 1/101, 0/001, 1/110 appears on two transitions.]

Fig.10 Encoder State Diagram for R = 1/3, K = 4 Systematic Encoder


Trellis Diagrams

Because the convolutional encoders considered here are
time-invariant, the encoder state diagram describes their behavior
for all times t. But sometimes, e.g., for decoding convolutional
codes, it is convenient to show all possible states of an encoder
separately for each time t (measured in frames), together with all
possible transitions from states at time t to states at time t + 1.
The resulting diagram is called a trellis diagram.

Example: For encoder #1 with G(D) = [1 + D^2   1 + D + D^2]
and M = 2 (and thus 4 states) the trellis diagram is shown in the
figure below.


[Figure: trellis with states S0, S1, S2, S3 drawn at times t = 0, 1, ..., 5; each branch is labeled with the code frame of the corresponding state transition from Fig.8, and the path for u = (1, 1, 0, 1, 0, ...) is highlighted.]

Fig.11 Trellis Diagram for Binary Rate 1/2 Encoder with K = 3


Note that the trellis always starts with the all-zero state S0 at time
t = 0 as the root node. This corresponds to the convention that
convolutional encoders must be initialized to the all-zero state
before they are first used. The labels on the branches are the code
frames that the encoder outputs when that particular transition
from a state at time t to a state at time t + 1 is made in response
to a data symbol u_t. The highlighted path in Figure 11, for
example, corresponds to the data sequence u = (1, 1, 0, 1, 0, ...)
and the resulting code sequence

c = (11, 10, 10, 00, 01, . . .) .


Viterbi Decoding Algorithm

In its simplest and most common form, the Viterbi algorithm is a
maximum likelihood (ML) decoding algorithm for convolutional
codes. Recall that a ML decoder outputs the estimate ĉ = c_i iff i
is the index (or one of them selected at random if there are
several) which maximizes the expression p_{Y|X}(v|c_i) over all
codewords c_0, c_1, c_2, .... The conditional pmf p_{Y|X} defines the
channel model with input X and output Y which is used, and v is
the received (and possibly corrupted) codeword at the output of
the channel. For the important special case of memoryless
channels used without feedback, the computation of p_{Y|X} can be
considerably simplified and brought into a form where metrics
along the branches of a trellis can be added up and then a ML
decision can be obtained by comparing these sums. In a nutshell,
this is what the Viterbi algorithm does.

Definition: A channel with input X and output Y is said to be
memoryless if

    p(y_j | x_j, x_{j-1}, ..., x_0, y_{j-1}, ..., y_0) = p_{Y|X}(y_j | x_j) .

Definition: A channel with input X and output Y is used without
feedback if

    p(x_j | x_{j-1}, ..., x_0, y_{j-1}, ..., y_0) = p(x_j | x_{j-1}, ..., x_0) .

Theorem: For a memoryless channel used without feedback

    p_{Y|X}(y|x) = \prod_{j=0}^{N-1} p_{Y|X}(y_j | x_j) ,

where N is the length of the channel input and output vectors X
and Y.

Proof: Left as an exercise.

Definition: The ML decoding rule at the output Y of a discrete
memoryless channel (DMC) with input X, used without feedback,
is: Output code sequence estimate ĉ = c_i iff i maximizes the
likelihood function

    p_{Y|X}(v|c_i) = \prod_{j=0}^{N-1} p_{Y|X}(v_j | c_{ij}) ,

over all code sequences c_i = (c_{i0}, c_{i1}, c_{i2}, ...) for i = 0, 1, 2, ....
The pmf p_{Y|X} is given by specifying the transition probabilities of
the DMC and the v_j are the received symbols at the output of the
channel. For block codes N is the blocklength of the code. For
convolutional codes we set N = n (L + m), where L is the number
of data frames that are encoded and m is the maximal memory
order of the encoder.


Definition: The log likelihood function of a received sequence v at
the channel output with respect to code sequence c_i is the
expression

    log[ p_{Y|X}(v|c_i) ] = \sum_{j=0}^{N-1} log[ p_{Y|X}(v_j | c_{ij}) ] ,

where the logarithm can be taken to any base.


Definition: The path metric λ(v|c_i) for a received sequence v given
a code sequence c_i is computed as

    λ(v|c_i) = \sum_{j=0}^{N-1} λ(v_j | c_{ij}) ,

where the symbol metrics λ(v_j | c_{ij}) are defined as

    λ(v_j | c_{ij}) = α ( log[ p_{Y|X}(v_j | c_{ij}) ] + f(v_j) ) .

Here α is any positive number and f(v_j) is a completely arbitrary
real-valued function defined over the channel output alphabet B.
Usually, one selects for every y in B

    f(y) = -log[ min_{x in A} p_{Y|X}(y|x) ] ,

where A is the channel input alphabet. In this way the smallest
symbol metric will always be 0. The quantity α is then adjusted so
that all nonzero metrics are (approximated by) small positive
integers.

Example: A memoryless BSC with transition probability ε < 0.5 is
characterized by

    p_{Y|X}(v|c)          v = 0    v = 1
    c = 0                 1 - ε    ε
    c = 1                 ε        1 - ε
    min_c p_{Y|X}(v|c)    ε        ε

Thus, setting f(v) = -log[min_c p_{Y|X}(v|c)] yields

    f(0) = f(1) = -log ε .


With this, the bit metrics become

    λ(v|c)    v = 0                      v = 1
    c = 0     α (log(1-ε) - log ε)       0
    c = 1     0                          α (log(1-ε) - log ε)

Now choose α as

    α = 1 / (log(1-ε) - log ε) ,

so that the following simple bit metrics for the BSC with ε < 0.5
are obtained:

    λ(v|c)    v = 0    v = 1
    c = 0     1        0
    c = 1     0        1
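The construction of these metrics is easy to check numerically. A minimal sketch with ε as a parameter (the function name is illustrative):

    from math import log2

    def bsc_bit_metrics(eps):
        """Integer bit metrics via lambda = alpha*(log p + f) for a BSC."""
        p = {(0, 0): 1 - eps, (0, 1): eps, (1, 0): eps, (1, 1): 1 - eps}  # (c, v)
        f = {v: -log2(min(p[(0, v)], p[(1, v)])) for v in (0, 1)}
        alpha = 1.0 / (log2(1 - eps) - log2(eps))
        return {cv: round(alpha * (log2(p[cv]) + f[cv[1]])) for cv in p}

    print(bsc_bit_metrics(0.1))   # -> {(0, 0): 1, (0, 1): 0, (1, 0): 0, (1, 1): 1}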


Definition: The partial path metric λ^(t)(v|c_i) at time t,
t = 1, 2, ..., for a path, a received sequence v, and given a code
sequence c_i, is computed as

    λ^(t)(v|c_i) = \sum_{l=0}^{t-1} λ(v^(l) | c_i^(l)) = \sum_{j=0}^{tn-1} λ(v_j | c_{ij}) ,

where the branch metrics λ(v^(l) | c_i^(l)) of the l-th branch,
l = 0, 1, 2, ..., for v and a given c_i, are defined as

    λ(v^(l) | c_i^(l)) = \sum_{j=ln}^{(l+1)n-1} λ(v_j | c_{ij}) .


The Viterbi algorithm makes use of the trellis diagram to
compute the partial path metrics λ^(t)(v|c_i) at times t = 1, 2, ..., N
for a received v, given all code sequences c_i that are candidates for
a ML decision, in the following well defined and organized manner.

(1) Every node in the trellis is assigned a number that is equal to
the partial path metric of the path that leads to this node.
By definition, the trellis starts in state 0 at t = 0 and
λ^(0)(v|c_i) = 0.

(2) For every transition from time t to time t + 1, all q^(M+k)
(there are q^M states and q^k different input frames at every
time t) t-th branch metrics λ(v^(t) | c_i^(t)) for v given all t-th
code frames are computed.


(3) The partial path metric λ^(t+1)(v|c_i) is updated by adding the
t-th branch metrics to the previous partial path metrics
λ^(t)(v|c_i) and keeping only the maximum value of the partial
path metric for each node in the trellis at time t + 1. The
partial path that yields the maximum value at each node is
called the survivor, and all other partial paths leading into the
same node are eliminated from further consideration as a ML
decision candidate. Ties are broken by flipping a coin.

(4) If t + 1 = N (= n(L + m), where L is the number of data
frames that are encoded and m is the maximal memory order
of the encoder), then there is only one survivor with
maximum path metric λ(v|c_i) = λ^(N)(v|c_i), and thus ĉ = c_i
is announced and the decoding algorithm stops. Otherwise,
set t ← t + 1 and return to step (2).
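The four steps map directly onto a few lines of code. The following is a minimal sketch for a binary k = 1 encoder (one possible organization, not the only one); it reuses the illustrative next_state/output tables from the encoding example and takes the bit metrics as a dictionary, so the same routine serves hard and soft decision decoding.

    def viterbi(frames, next_state, output, metric):
        """ML sequence decoding on the trellis, following steps (1)-(4).

        frames: received sequence, one string of n channel symbols per frame.
        metric[(c, v)]: bit metric for code bit c and received symbol v.
        Ties are broken deterministically (the first maximizer is kept).
        """
        pm = {0: (0, [])}                  # step (1): start in state 0, metric 0
        for v in frames:
            nxt = {}
            for s, (m, path) in pm.items():
                for u in (0, 1):           # step (2): all branches leaving s
                    bm = sum(metric[(int(c), r)] for c, r in zip(output[s][u], v))
                    cand = (m + bm, path + [u])
                    s2 = next_state[s][u]
                    if s2 not in nxt or cand[0] > nxt[s2][0]:
                        nxt[s2] = cand     # step (3): keep only the survivor
            pm = nxt
        return max(pm.values())[1]         # step (4): best final survivor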


Theorem: The path with maximum path metric λ(v|c_i) selected by
the Viterbi decoder is the maximum likelihood path.

Proof: Suppose c_i is the ML path, but the decoder outputs
ĉ = c_j. This implies that at some time t the partial path metrics
satisfy λ^(t)(v|c_j) ≥ λ^(t)(v|c_i) and c_i is not a survivor. Appending
the remaining portion of the path that corresponds to c_i to the
survivor at time t thus results in a path metric at least as large as
the one for the ML path c_i. Hence the path with the maximum final
metric is itself a ML path, which proves the claim. QED

Example: Encoder #1 (binary R = 1/2, K = 3 encoder with
G(D) = [1 + D^2   1 + D + D^2]) was used to generate and transmit
a codeword over a BSC with transition probability ε < 0.5. The
following sequence was received:

    v = (10, 10, 00, 10, 10, 11, 01, 00, ...) .

To find the most likely codeword ĉ that corresponds to this v, use
the Viterbi algorithm with the trellis diagram shown in Figure 12.

[Figure: the trellis of Fig.11 extended to t = 8 and annotated with the partial path metrics at every node (final survivor metrics 12, 13, 11, 11 at S0, S1, S2, S3); eliminated branches are marked with an X, ties with a dot after the metric, and the surviving ML path with metric 13 is highlighted. The received frames 10, 10, 00, 10, 10, 11, 01, 00 are listed below the trellis.]

Fig.12 Viterbi Decoder: R = 1/2, K = 3 Encoder, Transmission over BSC

At time zero start in state S0 with a partial path metric λ^(0)(v|c_i) = 0. Using
the bit metrics for the BSC with ε < 0.5 given earlier, the branch metrics for
each of the first two branches are 1. Thus, the partial path metrics at time
t = 1 are λ^(1)(10|00) = 1 and λ^(1)(10|11) = 1.

Continuing to add the branch metrics λ(v^(1) | c_i^(1)), the partial path metrics
λ^(2)((10,10)|(00,00)) = 2, λ^(2)((10,10)|(00,11)) = 2,
λ^(2)((10,10)|(11,01)) = 1, and λ^(2)((10,10)|(11,10)) = 3 are obtained at time
t = 2. At time t = 3 things become more interesting. Now two branches enter
each state and only the one that results in the larger partial path metric is
kept; the other one is eliminated (indicated with an X). Thus, for
instance, since 2 + 2 = 4 > 1 + 0 = 1, λ^(3)((10,10,00)|(00,00,00)) = 4,
whereas the alternative path entering S0 at t = 3 would only result in
λ^(3)((10,10,00)|(11,01,11)) = 1. Similarly, for the two paths entering S1 at
t = 3 one finds either λ^(3)((10,10,00)|(00,00,11)) = 2 or
λ^(3)((10,10,00)|(11,01,00)) = 3 and therefore the latter path and
corresponding partial path metric survive. If there is a tie, e.g., as in the case
of the two paths entering S0 at time t = 4, then one of the two paths is
selected as survivor at random. In Figure 12 ties are marked with a dot
following the value of the partial path metric. Using the partial path metrics at
time t = 8, the ML decision at this time is to choose the codeword
corresponding to the path with metric 13 (highlighted in Figure 12), i.e.,

    ĉ = (11, 10, 01, 10, 11, 11, 01, 00, ...) ,  corresponding to  û = (1, 1, 1, 0, 0, 1, 0, 1, ...) .

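Running the viterbi sketch from above on this example reproduces the hand computation; the hard decision BSC bit metrics are entered as a dictionary with the received symbols as characters:

    hard = {(0, '0'): 1, (0, '1'): 0, (1, '0'): 0, (1, '1'): 1}
    v = ['10', '10', '00', '10', '10', '11', '01', '00']
    print(viterbi(v, next_state, output, hard))   # -> [1, 1, 1, 0, 0, 1, 0, 1]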

Definition: A channel whose output alphabet is the same as the
input alphabet is said to make hard decisions, whereas a channel
that uses a larger alphabet at the output than at the input is said
to make soft decisions.

Note: In general, a channel which gives more differentiated output
information is preferred over (and has more capacity than) one which
has the same number of output symbols as there are input
symbols, as for example the BSC.

Definition: A decoder that operates on hard decision channel
outputs is called a hard decision decoder, and a decoder that
operates on soft decision channel outputs is called a soft decision
decoder.


Example: Use again encoder #1, but this time with a soft
decision channel model with 2 inputs and 5 outputs as shown in
the following figure.

[Figure: DMC with input alphabet {0, 1} and output alphabet {0, @, Δ, !, 1}; each input is connected to all five outputs with the transition probabilities given below.]

Fig.13 Discrete Memoryless Channel (DMC) with 2 Inputs and 5 Outputs


The symbols @ and ! at the channel output represent "bad
0s" and "bad 1s", respectively, whereas Δ is called an erasure
(i.e., it is uncertain whether Δ is closer to a 0 or a 1, whereas a
bad 0, for example, is closer to 0 than to 1). The transition
probabilities for this channel are

    p_{Y|X}(v|c)    v = 0    v = @    v = Δ    v = !    v = 1
    c = 0           0.5      0.2      0.14     0.1      0.06
    c = 1           0.06     0.1      0.14     0.2      0.5

After taking (base 2) logarithms

    log2[p_{Y|X}(v|c)]          v = 0    v = @    v = Δ    v = !    v = 1
    c = 0                       -1.00    -2.32    -2.84    -3.32    -4.06
    c = 1                       -4.06    -3.32    -2.84    -2.32    -1.00
    log2[min_c p_{Y|X}(v|c)]    -4.06    -3.32    -2.84    -3.32    -4.06


Using

    λ(v|c) = α ( log2[p_{Y|X}(v|c)] - log2[min_c p_{Y|X}(v|c)] )

with α = 1 and rounding to the nearest integer yields the bit
metrics

    λ(v|c)    v = 0    v = @    v = Δ    v = !    v = 1
    c = 0     3        1        0        0        0
    c = 1     0        0        0        1        3
The received sequence

    v = (11, !@, @0, Δ0, 1!, 00, Δ0, 10, ...) ,

can now be decoded using the Viterbi algorithm as shown in Figure
14 below.

[Figure: trellis annotated with the partial path metrics for the soft decision bit metrics (final survivor metrics 28, 28, 31, 29 at S0, S1, S2, S3); eliminated branches are marked with an X and the ML path with metric 31 is highlighted. The received frames 11, !@, @0, Δ0, 1!, 00, Δ0, 10 are listed below the trellis.]

Fig.14 Viterbi Decoder: R = 1/2, K = 3 Encoder, 2-Input, 5-Output DMC

Clearly, the Viterbi algorithm can be used either for hard or soft decision
decoding by using appropriate bit metrics. In this example the ML decision (up
to t = 8) is

    ĉ = (11, 01, 00, 10, 10, 00, 10, 10, ...) ,

corresponding to û = (1, 0, 1, 1, 0, 1, 1, 0, ...).
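With these metrics, the same viterbi sketch used in the hard decision example reproduces this decision (erasures entered as 'E'):

    v = ['11', '!@', '@0', 'E0', '1!', '00', 'E0', '10']
    print(viterbi(v, next_state, output, soft))   # -> [1, 0, 1, 1, 0, 1, 1, 0]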