
Concatenated codes (CH 15)

● Simple (classical, single-level) concatenation

• Length of the concatenated code is n1n2
• Dimension of the concatenated code is k1k2
• If the minimum distances of the component codes are d1 and d2, respectively, then the concatenated code has minimum distance ≥ d1d2
• Decoding:
• Two-stage: decode the inner code (hard-decision) first, then the outer code
• Not optimum! Guarantees correction of only up to approximately ¼ of dmin errors
• Good for decoding mixtures of random and burst errors
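The parameter claims above can be checked exhaustively on the smallest possible example. A minimal sketch (the component codes are my choice for illustration, not from the text): taking the (3,1,3) repetition code as both inner and outer code gives a (9,1) concatenated code with dmin = 9, and two-stage majority decoding corrects every pattern of up to 2 ≈ dmin/4 errors.

```python
import itertools

def enc_rep3(bits):
    # (3,1,3) repetition encoder: each bit becomes three copies
    return [b for bit in bits for b in (bit, bit, bit)]

def dec_rep3(bits):
    # hard-decision majority-vote decoder for the (3,1,3) code
    return [1 if sum(bits[i:i + 3]) >= 2 else 0 for i in range(0, len(bits), 3)]

def enc_concat(msg):
    # outer encoding first, then inner: length n1*n2 = 9, dimension k1*k2 = 1
    return enc_rep3(enc_rep3(msg))

def dec_concat(rx):
    # two-stage decoding: inner code first, then outer code
    return dec_rep3(dec_rep3(rx))

cw = enc_concat([1])
# every error pattern of weight <= 2 (about dmin/4 for dmin = 9) is corrected
ok = all(dec_concat([c ^ (1 if i in pos else 0) for i, c in enumerate(cw)]) == [1]
         for w in range(3) for pos in itertools.combinations(range(9), w))
print(ok)  # True
```

Note that some weight-4 patterns (two errors in each of two inner blocks) defeat two-stage decoding even though MLD, with dmin = 9, would correct them.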
Multiple inner codes
● Not necessary that all inner codes are identical
● Justesen codes:
• n2 different inner codes
• Can show that an asymptotically good class of codes can
be constructed this way
• A class {Ci} of codes of increasing lengths {ni} is
asymptotically good if the normalized dimensions
{ki/ni} and the normalized minimum distances {di/ni}
are both bounded away from zero as i approaches
infinity
• Mainly a theoretical result; the first known explicit class of asymptotically good codes
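To see what the definition rules out, compare two familiar families (an illustrative aside, not from the slides): repetition codes keep di/ni = 1 while ki/ni → 0, and single-parity-check codes keep ki/ni → 1 while di/ni → 0, so neither family is asymptotically good.

```python
# Repetition codes are (n, 1, n); single-parity-check codes are (n, n-1, 2).
# Track (k/n, d/n) as n grows: one ratio always collapses toward zero,
# which is exactly what the Justesen construction avoids.
for n in (8, 64, 512):
    rep_rate, rep_dist = 1 / n, n / n
    spc_rate, spc_dist = (n - 1) / n, 2 / n
    print(n, (rep_rate, rep_dist), (spc_rate, spc_dist))
```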
Generalization of the model

m may range from 1 to "large"

An interleaver permutes the order of the outer code symbols
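A common choice for this permutation is a block interleaver: write the symbols into a rows × cols array row by row and read them out column by column, so that a channel burst is spread across many outer codewords after deinterleaving. A minimal sketch (function names and sizes are my own):

```python
def block_interleave(symbols, rows, cols):
    # write row by row, read column by column
    assert len(symbols) == rows * cols
    return [symbols[r * cols + c] for c in range(cols) for r in range(rows)]

def block_deinterleave(symbols, rows, cols):
    # inverse permutation: write column by column, read row by row
    return [symbols[c * rows + r] for r in range(rows) for c in range(cols)]

data = list(range(12))
tx = block_interleave(data, 3, 4)
print(tx)                                    # [0, 4, 8, 1, 5, 9, 2, 6, 10, 3, 7, 11]
print(block_deinterleave(tx, 3, 4) == data)  # True
```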

Example of interleaved serial concatenation

Example

Multilevel concatenated codes
● Multiple outer and inner codes
• Inner codes form a chain A1 ⊃ A2 ⊃ ... ⊃ Am ⊃ {0}
• ki: dimension of Ai
• d(Ai): minimum distance of Ai
• [Ai / Ai+1]: coset code, the set of coset representatives of Ai+1 in Ai; dimension ki - ki+1
• qi = 2^(ki - ki+1)
• Bi: outer code of length N and dimension Ki over GF(qi)
• Overall dimension K = ∑ Ki(ki - ki+1)
• d(C) ≥ min{d(Bi)d(Ai)}
Multistage decoding
● Decode stage B1 ° [A1 / A2] first, ..., stage Bm ° Am last
1. Decode r = r(1) into a codeword b1 of B1
Inner decoding: find the closest word in each coset of [A1 / A2]
Outer decoding: use the inner decoder's results
Set i = 2
2. Let r(i) = r(i-1) - fi-1(bi-1), where fi-1 re-encodes the stage-(i-1) decision
Decode r(i) into a codeword bi of Bi
Set i = i + 1
If i ≤ m, repeat from step 2

Soft decision multistage decoding
a) Requires soft decision (and usually trellis based) decoding at each
decoding stage
b) Decode stage B1 ° [A1 / A2] first, ..., stage Bm ° Am last
1. Decode r = r(1) into a codeword b1 in B1
Inner decoding: Find the closest word in [A1 / A2]
Outer decoding: Use inner decoder’s results
Set i = 2
2. Compute the modified received vector r(i): rj,l(i) = rj,l(i-1) ⋅ (1 - 2cj,l(i-1))
Decode r(i) into a codeword bi in Bi
Set i = i + 1
If (i ≤ m), repeat from 2
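Under the usual BPSK mapping (bit 0 → +1, bit 1 → −1), the modification step multiplies each soft value by (1 − 2c), i.e. it strips the sign contribution of the bits already decided. A numeric sketch with invented values:

```python
# hypothetical soft channel values and previous-stage hard decisions
r_prev = [0.9, -1.1, 0.3, -0.2]
c_prev = [0, 1, 0, 1]
# r_j,l^(i) = r_j,l^(i-1) * (1 - 2 * c_j,l^(i-1)): bit 0 keeps the value,
# bit 1 flips its sign, re-referencing the vector for the next stage
r_next = [r * (1 - 2 * c) for r, c in zip(r_prev, c_prev)]
print(r_next)  # [0.9, 1.1, 0.3, 0.2]
```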

Inner and outer decoding
a) Inner decoder:
• Find the word (label) in each coset in Ai / Ai+1 with largest metric
for each symbol of the outer code. This gives N metric tables
• Pass these N metric tables to the outer decoder
b) Outer decoder:
• Find word with largest metric
c) Not MLD because of possible error propagation
d) Simpler than known MLD algorithms for such codes
e) Can be improved by passing a list of L candidates from one decoding
stage to the next; and by selecting as the final decoded word the one
with the largest metric at the final stage
Code decomposition
● Expressing a code in terms of a multilevel concatenation
● µ-level decomposable code: Can be expressed as a µ-level
concatenated code
● Some classical code constructions may be expressed in this way. This
may facilitate decoding of such codes, and can provide soft decision
(sub-optimum) decoding
● The r-th order Reed-Muller code of length 2^m is denoted by RM(r,m)
● Idea: Decompose trellis into µ trellises, each trellis is significantly less
complex than the original trellis

Properties of RM(r,m)
● v0 = (1...1), the all-one vector of length 2^m
● vi = (0...0 1...1 0...0 ... 1...1), alternating blocks of 0s and 1s of length 2^(i-1)
● For r > 0, RM(r,m) is spanned by v0, v1, v2, ..., vm and all products vi1 vi2 ... vis of degree up to r
● RM(0,m) is spanned by the vector v0, and RM(-1,m) = {0}
● k(r,m) = 1 + C(m,1) + ... + C(m,r), where C(m,i) denotes the binomial coefficient
● Minimum distance is 2^(m-r)
● RM(r,m) ⊃ RM(r-1,m) ⊃ ... ⊃ RM(0,m) ⊃ RM(-1,m)
● RM(m-1,m) is the single parity check code
● RM(m-r-1,m) is the dual code of RM(r,m)
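These properties are easy to verify by brute force for small parameters. A sketch (helper names are mine) that builds the spanning vectors of RM(r,m) from the vi and checks k(r,m) and the minimum distance 2^(m-r) for RM(1,3), the (8,4,4) code:

```python
import itertools

def rm_generators(r, m):
    """Rows spanning RM(r, m): v0 plus all products of v1..vm of degree <= r."""
    n = 2 ** m
    # v[i] is bit i of the column index: alternating 0/1 blocks of length 2**i
    v = [[(col >> i) & 1 for col in range(n)] for i in range(m)]
    rows = [[1] * n]                                   # v0 = (1...1)
    for deg in range(1, r + 1):
        for idx in itertools.combinations(range(m), deg):
            prod = [1] * n
            for i in idx:
                prod = [a & b for a, b in zip(prod, v[i])]
            rows.append(prod)
    return rows

def min_weight(rows):
    """Minimum weight over all nonzero combinations (rows are independent)."""
    n = len(rows[0])
    best = n
    for mask in range(1, 2 ** len(rows)):
        cw = [0] * n
        for i, row in enumerate(rows):
            if (mask >> i) & 1:
                cw = [a ^ b for a, b in zip(cw, row)]
        best = min(best, sum(cw))
    return best

g = rm_generators(1, 3)
print(len(g), min_weight(g))  # 4 4 -> k(1,3) = 1 + C(3,1) = 4, d = 2^(3-1) = 4
```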
Example 15.2
RM(3,3) ⊃ RM(2,3) ⊃ RM(1,3) ⊃ RM(0,3) ⊃ {0}

RM codes and interleaving
a) RM(r,m) = {RM(0,ν)^q(r,m-ν), RM(1,ν)^q(r-1,m-ν), ..., RM(µ,ν)^q(r-µ,m-ν)} ° {RM(r,m-ν), RM(r-1,m-ν), ..., RM(r-µ,m-ν)}, where µ = ν for r > ν and µ = r for r ≤ ν, 1 ≤ ν ≤ m-1 (a superscript denotes that many interleaved copies)
b) Example: RM(3,6) is a (64,42,8) code. Select µ = ν = 3
c) RM(3,6) = {RM(0,3)^q(3,3), RM(1,3)^q(2,3), RM(2,3)^q(1,3), RM(3,3)^q(0,3)} ° {RM(3,3), RM(2,3), RM(1,3), RM(0,3)}
d) = {(8,1), (8,4)^3, (8,7)^3, (8,8)} ° {(8,8), (8,7), (8,4), (8,1)}
= (8,1) ° [(8,8) / (8,7)] ⊕ (8,4)^3 ° [(8,7) / (8,4)] ⊕ (8,7)^3 ° [(8,4) / (8,1)] ⊕ (8,8) ° [(8,1) / {0}]
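The dimension bookkeeping in this example can be checked against the multilevel formula K = ∑ Ki(ki - ki+1) (a quick sketch; `k_rm` is my helper for the RM dimension formula):

```python
from math import comb

def k_rm(r, m):
    # dimension of RM(r, m); RM(-1, m) = {0} has dimension 0
    return sum(comb(m, i) for i in range(r + 1)) if r >= 0 else 0

# inner chain RM(3,3) > RM(2,3) > RM(1,3) > RM(0,3) > {0}
inner = [k_rm(r, 3) for r in (3, 2, 1, 0, -1)]   # [8, 7, 4, 1, 0]
# outer code dimensions: RM(0,3), RM(1,3), RM(2,3), RM(3,3)
outer = [k_rm(r, 3) for r in (0, 1, 2, 3)]       # [1, 4, 7, 8]
K = sum(Ko * (inner[i] - inner[i + 1]) for i, Ko in enumerate(outer))
print(K)  # 42 -> matches RM(3,6) = (64, 42, 8)
```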
Example
a) RM(4,7) is a (128,99,8) code
b) Can show that the trellis has maximum state space dimension of 19
c) Can be decomposed into a 3-level concatenation
d) Subtrellises of length 16, and at most 256 states in each trellis

Another example
a) RM(3,7) is a (128,64,16) code
b) Can show that the trellis has maximum state space dimension of 26
c) Can be decomposed into a 3-level concatenation
d) Subtrellises of length 16, and at most 512 states in each trellis

Iterative multistage MLD (IMS-MLD)

IMS-MLD algorithm
● Decoding algorithm (m = 2)
1. Compute the best estimate b(1),1 of the first decoding stage and its metric M(b(1),1). If the coset label sequence L(b(1),1) ∈ C, the best codeword is found, so stop; otherwise proceed to step 2
2. Perform second-stage decoding and obtain b(2),1 and M(b(2),1). Store b(1),1, b(2),1, and M(b(2),1), and set i0 = 1
3. For i > i0, calculate b(1),i (the i-th best first-stage estimate). If M(b(2),i0) ≥ M(b(1),i), decoding is finished, and b(1),i0 and b(2),i0 give the most likely codeword. Otherwise, go to step 4
4. If the coset label L(b(1),i) ∈ C, the best codeword is found, so stop; otherwise proceed to step 5
5. Generate b(2),i. Update i0, b(1),i0, b(2),i0, and M(b(2),i0). Go to step 3
● Can be generalized to m-level concatenated codes
Example: IMS-MLD
RM(3,7) is a (128,64,16) code; its performance was shown in a figure (not included here)

Example: IMS-MLD
Decoder complexity comparisons for the (128,64,16) example code (table not included here)

Convolutional inner codes
a) Can of course use convolutional codes as inner codes. This facilitates
soft decision decoding
b) Example in book

Concatenation of binary codes
a) Also possible with binary outer codes (block or
convolutional)
b) More difficult to make statements about overall minimum
distance
c) Interleaver useful for increasing distance
d) SISO algorithms are useful for decoding
e) Iterative decoding is useful
f) Serial concatenation / parallel concatenation

