Peter Mathys
University of Colorado
Spring 2007
Peter Mathys ECEN 5682 Theory and Practice of Error Control Codes
Basic Definitions, Convolutional Encoders
Convolutional Codes Encoder State Diagrams
Viterbi Decoding Algorithm
Linear (n, k) block codes take k data symbols at a time and encode
them into n code symbols. Long data sequences are broken up into
blocks of k symbols and each block is encoded independently of all
others. Convolutional encoders, on the other hand, convert an
entire data sequence, regardless of its length, into a single code
sequence by using convolution and multiplexing operations. In
general, it is convenient to assume that both the data sequences
(u0 , u1 , . . .) and the code sequences (c0 , c1 , . . .) are semi-infinite
sequences and to express them in the form of a power series.
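As an illustrative sketch (function name my own), this power-series arithmetic over GF(2) can be coded directly: a sequence is stored as its coefficient list, and a product of two power series is an ordinary polynomial product with coefficients reduced modulo 2.

```python
def gf2_poly_mul(a, b):
    """Multiply two GF(2) power series given as coefficient lists,
    where a[i] is the coefficient of D^i. Addition mod 2 is XOR."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj
    return out
```

For example, (1 + D)(1 + D) = 1 + D^2 over GF(2), since the cross terms 2D vanish modulo 2.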
The data power series u(D) associated with the data sequence u = (u0, u1, u2, ...) is defined as

  u(D) = u0 + u1 D + u2 D^2 + ... = sum_{i=0}^∞ ui D^i ,

and, similarly, the code power series c(D) associated with the code sequence c = (c0, c1, c2, ...) is defined as

  c(D) = c0 + c1 D + c2 D^2 + ... = sum_{i=0}^∞ ci D^i .
The code subsequences, denoted by c^(1)(D), c^(2)(D), ..., c^(n)(D) in power series notation, at the output of the convolutional encoder are multiplexed into a single power series c(D) for transmission over a channel, as shown below.
Fig.3 Multiplexing of c^(1)(D), ..., c^(n)(D) into Single Output c(D)
[Figure: shift-register encoder with m memory cells; the input stream ..., u2, u1, u0 is tapped with coefficients g0, g1, g2, ..., gm into modulo-2 adders to produce the output stream ..., c2, c1, c0]
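The shift-register encoder above computes c_t = sum_m g_m u_{t-m} mod 2, i.e., a binary convolution of the data stream with the tap coefficients. A minimal Python sketch of a rate-1/n version (names my own; the test uses the generators (1 + D^2, 1 + D + D^2) of the rate 1/2 encoder #1 discussed later):

```python
def conv_encode(u, gens):
    """Encode data bits u with a rate-1/n feedforward convolutional encoder.
    gens is a list of n tap lists, with g[m] the coefficient of D^m.
    Output is the multiplexed code sequence, one n-bit frame per data bit:
    c_t^(j) = sum_m g^(j)[m] * u[t-m]  (mod 2)."""
    c = []
    for t in range(len(u)):
        for g in gens:
            c.append(sum(g[m] & u[t - m]
                         for m in range(len(g)) if t - m >= 0) % 2)
    return c
```

With gens = [[1, 0, 1], [1, 1, 1]], the unit impulse u = (1, 0, 0, 0, 0) produces the frames 11, 01, 11, 00, 00, matching the codeword table for encoder #1 given later.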
u = (u0 , u1 , u2 , . . .) = (1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, . . .) .
The pairs of code symbols that each data symbol generates are
called code frames.
Then the set of data symbols (u_{ik}, u_{ik+1}, ..., u_{(i+1)k-1}) is called the i-th data frame and the corresponding set of code symbols (c_{in}, c_{in+1}, ..., c_{(i+1)n-1}) is called the i-th code frame, for
i = 0, 1, 2, . . . .
Example: Encoder #2. Binary rate R = 2/3 encoder with constraint length
K = 2 and transfer function matrix
         [ g1^(1)(D)   g1^(2)(D)   g1^(3)(D) ]   [ 1+D    D    1+D ]
  G(D) = [                                   ] = [                 ]
         [ g2^(1)(D)   g2^(2)(D)   g2^(3)(D) ]   [  D     1     1  ]
A block diagram for this encoder is shown in the figure below.
[Figure: block diagram of the rate 2/3 encoder with two data input streams, modulo-2 adders, and the three code output streams ..., c2^(1), c1^(1), c0^(1); ..., c2^(2), c1^(2), c0^(2); ..., c2^(3), c1^(3), c0^(3)]
Multiplexing the code sequences c^(1), c^(2), and c^(3) yields the single code sequence

  c = (c0 c1 c2, c3 c4 c5, ...) = (c0^(1) c0^(2) c0^(3), c1^(1) c1^(2) c1^(3), ...)
    = (110, 000, 100, 110, 110, 010, ...) .
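As a sketch, the multiplexing operation itself is simply a frame-by-frame interleaving of the n output streams (function name my own):

```python
def mux(streams):
    """Interleave n code streams c^(1), ..., c^(n) frame by frame into a
    single code sequence: c = (c0^(1) c0^(2) ... c0^(n), c1^(1) ..., ...)."""
    return [s[i] for i in range(len(streams[0])) for s in streams]
```

Applied to the three subsequences read off the frames above (c^(1) = 1,0,1,1,1,0 etc.), it reproduces the multiplexed sequence 110, 000, 100, 110, 110, 010.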
Note that the first row of this matrix is the unit impulse response
(after multiplexing the outputs) from input stream 1, the second
row is the unit impulse response (after multiplexing the outputs)
from input stream 2, etc.
u = (u0 , u1 , . . .) c = (c0 c1 , c2 c3 , . . .)
1,0,0,0,0,... 11,01,11,00,00,00,00,...
1,1,0,0,0,... 11,10,10,11,00,00,00,...
1,0,1,0,0,... 11,01,00,01,11,00,00,...
1,1,1,0,0,... 11,10,01,10,11,00,00,...
1,0,0,1,0,... 11,01,11,11,01,11,00,...
1,1,0,1,0,... 11,10,10,00,01,11,00,...
1,0,1,1,0,... 11,01,00,10,10,11,00,...
1,1,1,1,0,... 11,10,01,01,10,11,00,...
One thing that can be deduced from this list is that, most likely, the minimum weight of any non-zero codeword is 5. Because convolutional codes are linear, this is also the minimum distance, called the minimum free distance for convolutional codes for historical reasons, i.e., dfree = 5.
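This deduction can be checked numerically: encode every short non-zero input (padded with zeros to flush the encoder memory) and take the minimum output weight. A sketch under the assumption of encoder #1's generators (1 + D^2, 1 + D + D^2); all names are my own:

```python
from itertools import product

def conv_encode(u, gens):
    """Rate-1/n feedforward encoder: c_t^(j) = sum_m g^(j)[m] u[t-m] mod 2."""
    return [sum(g[m] & u[t - m] for m in range(len(g)) if t - m >= 0) % 2
            for t in range(len(u)) for g in gens]

def free_distance(gens, max_len=8):
    """Estimate d_free: minimum output weight over all non-zero inputs of
    up to max_len bits, each padded with zeros to flush the memory."""
    mem = max(len(g) for g in gens) - 1
    best = None
    for L in range(1, max_len + 1):
        for bits in product([0, 1], repeat=L - 1):
            u = [1] + list(bits) + [0] * mem   # w.l.o.g. first bit is 1
            w = sum(conv_encode(u, gens))
            best = w if best is None else min(best, w)
    return best
```

For gens = [[1, 0, 1], [1, 1, 1]] the search confirms dfree = 5, achieved already by the single-1 input (frames 11, 01, 11).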
The first few non-zero codewords that this encoder produces are
u = (u0 u1 , . . .) c = (c0 c1 c2 , . . .)
10,00,00,... 101,111,000,000,...
01,00,00,... 011,100,000,000,...
11,00,00,... 110,011,000,000,...
10,10,00,... 101,010,111,000,...
01,10,00,... 011,001,111,000,...
11,10,00,... 110,110,111,000,...
10,01,00,... 101,100,100,000,...
01,01,00,... 011,111,100,000,...
11,01,00,... 110,000,100,000,...
10,11,00,... 101,001,011,000,...
01,11,00,... 011,010,011,000,...
11,11,00,... 110,101,011,000,...
G(D) = [ 1   1 + D + D^3   1 + D + D^2 + D^3 ] ,
Note that the first column of each triplet of columns has only a
single 1 in it, so that the first symbol in each code frame is the
corresponding data symbol from the data sequence u.
  G(D) = [ 1   (1 + D + D^3)/(1 + D^2 + D^3)   (1 + D + D^2 + D^3)/(1 + D^2 + D^3) ] .

[Figure: block diagram realizing this rational G(D), with input u(D), modulo-2 adders, and outputs c^(1)(D), c^(2)(D), c^(3)(D)]
Example: Encoder state diagram for encoder #1. This is a binary encoder with G(D) = [1 + D^2   1 + D + D^2] that uses 2 memory cells and hence 2^2 = 4 states. With reference to the block diagram in Figure 5, label the encoder states as follows:

  S0 = 00 ,  S1 = 10 ,  S2 = 01 ,  S3 = 11 ,
where the first binary digit corresponds to the content of the first (leftmost)
delay cell of the encoder, and the second digit corresponds to the content of
the second delay cell.
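With this labeling the complete transition table of the state diagram can be generated mechanically. A sketch (names my own) that reproduces, e.g., the transition S0 → S1 with branch label 1/11:

```python
def state_diagram(gens):
    """Build {(state, u): (output_frame, next_state)} for a rate-1/n
    feedforward encoder. A state is the tuple (u_{t-1}, ..., u_{t-m}) of
    memory-cell contents, leftmost (most recent) cell first. Assumes all
    generators are padded to the same length m+1."""
    m = max(len(g) for g in gens) - 1
    table = {}
    for s in range(2 ** m):
        state = tuple((s >> (m - 1 - i)) & 1 for i in range(m))
        for u in (0, 1):
            window = (u,) + state          # (u_t, u_{t-1}, ..., u_{t-m})
            frame = tuple(sum(g[j] & window[j] for j in range(len(g))) % 2
                          for g in gens)
            table[(state, u)] = (frame, (u,) + state[:-1])
    return table
```

For encoder #1 this yields eight labeled branches, two leaving each of the four states, exactly as in the state diagram.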
[Figure: four-state diagram with branches labeled u/c^(1)c^(2), e.g. S0 → S1 with label 1/11 and S1 → S3 with label 1/10]
Fig.8 Encoder State Diagram for Binary Rate 1/2 Encoder with K = 3
For this encoder the states are labeled by the contents of the two memory cells (one per input):

  S0 = [0 0]^T ,  S1 = [1 0]^T ,  S2 = [0 1]^T ,  S3 = [1 1]^T .

[Figure: four-state diagram with branches labeled u^(1)u^(2)/c^(1)c^(2)c^(3), e.g. 00/000 from S0 to S0 and 00/111 from S1]
Fig.9 Encoder State Diagram for Binary Rate 2/3 Encoder with K = 2
Example: The figure on the next slide shows the encoder state diagram for encoder #4 whose block diagram was given in Figure 7. This encoder has rational transfer function matrix

  G(D) = [ 1   (1 + D + D^3)/(1 + D^2 + D^3)   (1 + D + D^2 + D^3)/(1 + D^2 + D^3) ] ,

and M = 3. The encoder states are labeled using the following convention (the leftmost bit corresponds to the leftmost memory cell in Figure 7).
[Figure: eight-state (S0, ..., S7) encoder state diagram for encoder #4 with branches labeled u/c^(1)c^(2)c^(3), e.g. 1/100 and 0/010]
Trellis Diagrams
[Figure 11: trellis diagram for encoder #1 with states S0, S1, S2, S3 drawn for t = 0 to t = 5; each branch is labeled with the code frame (00, 01, 10, or 11) that the encoder emits on that transition]
Note that the trellis always starts with the all-zero state S0 at time t = 0 as the root node. This corresponds to the convention that convolutional encoders must be initialized to the all-zero state before they are first used. The labels on the branches are the code frames that the encoder outputs when that particular transition from a state at time t to a state at time t + 1 is made in response to a data symbol ut. The highlighted path in Figure 11, for example, corresponds to the data sequence u = (1, 1, 0, 1, 0, ...) and the resulting code sequence c = (11, 10, 10, 00, 01, ...).
  log p_{Y|X}(v|ci) = sum_{j=0}^{N-1} log p_{Y|X}(vj|cij) ,
  p_{Y|X}(v|c)          v = 0     v = 1
  c = 0                 1 - ε     ε
  c = 1                 ε         1 - ε
  min_c p_{Y|X}(v|c)    ε         ε
  μ(v|c)   v = 0                    v = 1
  c = 0    α(log(1-ε) - log ε)      0
  c = 1    0                        α(log(1-ε) - log ε)

Now choose α as

  α = 1 / (log(1-ε) - log ε) ,

so that the following simple bit metrics for the BSC with ε < 0.5 are obtained:

  μ(v|c)   v = 0   v = 1
  c = 0    1       0
  c = 1    0       1
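The effect of the scaling constant α can be verified numerically; a sketch (function name my own) that reproduces the 0/1 bit metric table for any crossover probability ε < 0.5:

```python
import math

def bsc_bit_metric(eps):
    """Bit metrics mu(v|c) = alpha*(log p(v|c) - log(min_c p(v|c)))
    for a BSC with crossover eps < 0.5, alpha = 1/(log(1-eps) - log(eps)).
    By construction the result is 1 if v agrees with c and 0 otherwise."""
    p = {(0, 0): 1 - eps, (0, 1): eps, (1, 0): eps, (1, 1): 1 - eps}
    alpha = 1.0 / (math.log(1 - eps) - math.log(eps))
    pmin = {v: min(p[(0, v)], p[(1, v)]) for v in (0, 1)}
    return {(c, v): round(alpha * (math.log(p[(c, v)]) - math.log(pmin[v])))
            for c in (0, 1) for v in (0, 1)}
```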
where the branch metrics μ(v^(ℓ)|ci^(ℓ)) of the ℓ-th branch, ℓ = 0, 1, 2, ..., for v and a given ci are defined as

  μ(v^(ℓ)|ci^(ℓ)) = sum_{j=ℓn}^{(ℓ+1)n-1} μ(vj|cij) .
(3) The partial path metric μ^(t+1)(v|ci) is updated by adding the t-th branch metrics to the previous partial path metrics μ^(t)(v|ci) and keeping only the maximum value of the partial path metric for each node in the trellis at time t + 1. The partial path that yields the maximum value at each node is called the survivor, and all other partial paths leading into the same node are eliminated from further consideration as ML decision candidates. Ties are broken by flipping a coin.
(4) If t + 1 = L + m, where L is the number of data frames that are encoded and m is the maximal memory order of the encoder (so that all N = n(L + m) code symbols have been used), then there is only one survivor with maximum path metric μ(v|ci) = μ^(L+m)(v|ci), and thus ĉ = ci is announced and the decoding algorithm stops. Otherwise, set t ← t + 1 and return to step (2).
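Steps (1)-(4) can be sketched compactly in Python for a rate-1/n feedforward encoder with hard decisions, using the agreement count as branch metric (all names are my own; ties are broken implicitly by keeping the first candidate, which is one valid realization of the coin flip):

```python
def viterbi_hard(v_frames, gens):
    """ML hard-decision Viterbi decoding for a rate-1/n binary feedforward
    encoder. Branch metric = number of agreements between the received frame
    and the branch's code frame, so the survivor with the LARGEST final path
    metric is the ML decision. Returns (best metric, decoded data bits)."""
    m = max(len(g) for g in gens) - 1
    paths = {(0,) * m: (0, [])}            # state -> (metric, decoded bits)
    for rv in v_frames:
        new = {}
        for state, (metric, bits) in paths.items():
            for u in (0, 1):
                window = (u,) + state      # (u_t, u_{t-1}, ..., u_{t-m})
                frame = tuple(sum(g[j] & window[j]
                                  for j in range(len(g))) % 2 for g in gens)
                agree = sum(a == b for a, b in zip(frame, rv))
                nxt = (u,) + state[:-1]
                cand = (metric + agree, bits + [u])
                if nxt not in new or cand[0] > new[nxt][0]:
                    new[nxt] = cand        # keep only the survivor per node
        paths = new
    return max(paths.values())
```

Run on the received sequence v = (10, 10, 00, 10, 10, 11, 01, 00) from the worked example below with encoder #1's generators, it recovers the path with metric 13 and the data bits (1, 1, 1, 0, 0, 1, 0, 1).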
[Figure 12: hard-decision Viterbi decoding trellis for encoder #1; eliminated branches are marked X, the partial path metric is shown at each node, and ties are marked with a dot; received sequence v: 10 10 00 10 10 11 01 00]
At time zero start in state S0 with a partial path metric μ^(0)(v|ci) = 0. Using the bit metrics for the BSC with ε < 0.5 given earlier, the branch metrics for each of the first two branches are 1. Thus, the partial path metrics at time t = 1 are μ^(1)(10|00) = 1 and μ^(1)(10|11) = 1.
Continuing to add the branch metrics μ(v^(1)|ci^(1)), the partial path metrics μ^(2)((10,10)|(00,00)) = 2, μ^(2)((10,10)|(00,11)) = 2, μ^(2)((10,10)|(11,01)) = 1, and μ^(2)((10,10)|(11,10)) = 3 are obtained at time t = 2. At time t = 3 things become more interesting. Now two branches enter each state, and only the one that results in the larger partial path metric is kept; the other one is eliminated (indicated with an X). Thus, for instance, since 2 + 2 = 4 > 1 + 0 = 1, μ^(3)((10,10,00)|(00,00,00)) = 4, whereas the alternative path entering S0 at t = 3 would only result in μ^(3)((10,10,00)|(11,01,11)) = 1. Similarly, for the two paths entering S1 at t = 3 one finds either μ^(3)((10,10,00)|(00,00,11)) = 2 or μ^(3)((10,10,00)|(11,01,00)) = 3, and therefore the latter path and corresponding partial path metric survive. If there is a tie, e.g., as in the case of the two paths entering S0 at time t = 4, then one of the two paths is selected as survivor at random. In Figure 12 ties are marked with a dot following the value of the partial path metric. Using the partial path metrics at time t = 8, the ML decision at this time is to choose the codeword corresponding to the path with metric 13 (highlighted in Figure 12), i.e.,

  ĉ = (11, 10, 01, 10, 11, 11, 01, 00, ...)  corresponding to  û = (1, 1, 1, 0, 0, 1, 0, 1, ...).
Example: Use again encoder #1, but this time with a soft
decision channel model with 2 inputs and 5 outputs as shown in
the following figure.
[Figure: soft decision channel model with binary input X and five output levels Y]
  p_{Y|X}(v|c)               v = 0    v = @    v = ?    v = !    v = 1
  c = 0                      0.5      0.2      0.14     0.1      0.06
  c = 1                      0.06     0.1      0.14     0.2      0.5

  -log2[p_{Y|X}(v|c)]        v = 0    v = @    v = ?    v = !    v = 1
  c = 0                      1.00     2.32     2.84     3.32     4.06
  c = 1                      4.06     3.32     2.84     2.32     1.00
  -log2[min_c p_{Y|X}(v|c)]  4.06     3.32     2.84     3.32     4.06
Using

  μ(v|c) = log2[p_{Y|X}(v|c)] - log2[min_c p_{Y|X}(v|c)] ,

rounded to integer values, the following bit metrics are obtained:

  μ(v|c)   v = 0   v = @   v = ?   v = !   v = 1
  c = 0    3       1       0       0       0
  c = 1    0       0       0       1       3
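These soft bit metrics can be recomputed from the channel transition probabilities implied by the log2 table (note: the probability values 0.5, 0.2, 0.14, 0.1, 0.06 and the middle output label "?" are my reconstruction from that table, and the function name is my own):

```python
import math

# Assumed transition probabilities p(v|c) for the binary-input, 5-level
# soft-output channel, read back from the log2 table (2^-1.00 = 0.5,
# 2^-2.32 ~ 0.2, 2^-2.84 ~ 0.14, 2^-3.32 ~ 0.1, 2^-4.06 ~ 0.06).
OUT = ["0", "@", "?", "!", "1"]        # "?" stands in for the middle level
P = {0: [0.5, 0.2, 0.14, 0.1, 0.06],
     1: [0.06, 0.1, 0.14, 0.2, 0.5]}

def soft_bit_metrics():
    """mu(v|c) = round(log2 p(v|c) - log2 min_c p(v|c)), as in the example."""
    pmin = [min(P[0][i], P[1][i]) for i in range(5)]
    return {(c, OUT[i]): round(math.log2(P[c][i]) - math.log2(pmin[i]))
            for c in (0, 1) for i in range(5)}
```

The result matches the 3/1/0 metric table above, symmetric between c = 0 and c = 1.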
[Figure: soft-decision Viterbi decoding trellis for encoder #1 with the bit metrics above; eliminated branches marked X, partial path metrics shown at each node; received sequence v: 11 !@ @0 0 1! 00 0 10]
Clearly, the Viterbi algorithm can be used either for hard or soft decision decoding by using appropriate bit metrics. In this example the ML decision (up to t = 8) is

  ĉ = (11, 01, 00, 10, 10, 00, 10, 10, ...) ,

corresponding to û = (1, 0, 1, 1, 0, 1, 1, 0, ...).