
LDPC Options for Next Generation Wireless Systems

T. Lestable* and E. Zimmermann#

*Advanced Technology Group, Samsung Electronics Research Institute, UK
#Technische Universität Dresden, Vodafone Chair Mobile Communications Systems

Abstract—Low-Density Parity-Check (LDPC) codes have recently drawn much attention due to their near-capacity error correction performance, and are currently the focus of many standardization activities, e.g., IEEE 802.11n, IEEE 802.16e, and ETSI DVB-S. In this contribution, we discuss several aspects related to the practical application of such codes to wireless communications systems. We consider flexibility, memory requirements, encoding and decoding complexity, and different variants of decoding algorithms for LDPC codes that make it possible to effectively trade off error correction performance for implementation simplicity. We conclude that many of what have been considered significant disadvantages of LDPC codes (inflexibility, high encoding complexity, etc.) can be overcome by appropriate use of different algorithms and strategies that have recently been developed – making LDPC codes a highly attractive option for forward error correction in B3G/4G systems.

Index Terms—LDPC, Belief Propagation, Bit Flipping, Scheduling, Complexity, TGnSync, 4G.

INTRODUCTION

Recently, many standards proposals, namely TGnSync [15][27] and WWiSE [28] for IEEE 802.11n, together with IEEE 802.16e [14], have considered LDPC coding schemes as a key component of their system features. The adoption by such standardization activities proves the increasing maturity of LDPC-related technology, especially the affordable joint complexity of encoder and decoder implementation. From sub-optimal lower-complexity decoding algorithms [16] to completely flexible architecture designs [6][7][8], pragmatic and realistic implementation solutions are making LDPC codes more and more attractive as an enhancement of current (B3G) or next generation (4G) wireless systems [29].

The aim of this paper is thus to present and evaluate a non-exhaustive set of solutions that decrease the global complexity of encoding/decoding. The first part presents basic properties of LDPC codes, together with the message-passing principle. Then we tackle the encoder complexity issue, where hardware (HW) requirements in terms of dimensioning are assessed, relying on the Block-LDPC approach. The decoder side is kept for the final part, as it represents the most voracious element within the joint design. We thus review and evaluate the performance/complexity trade-offs of sub-optimal low-complexity decoding algorithms, and highlight common trends in the architecture of such LDPC codes by evaluating their HW requirements.

LDPC Codes: Fundamentals

In this part, we briefly introduce the basic principles and notations of LDPC codes. For further reading see [1][16][17]. LDPC codes are linear block codes whose parity-check matrix H has the favourable property of being sparse, i.e., it contains only a low number of non-zero elements. Tanner graphs of such codes are bipartite graphs containing two different kinds of nodes: code (bit, or variable) nodes and check nodes. An (n, k) LDPC code is thus represented by an m x n parity-check matrix H, where m=n-k is the number of redundancy (parity) bits of the coding scheme. We can then distinguish regular from irregular LDPC codes, depending on the degree distributions of the code nodes (column weights) and check nodes (row weights). A regular scheme means that these distributions are constant along columns and rows, and are usually represented by the notation (dv, dc). For such a code, the number of non-zero elements is given either by n·dv or m·dc, leading to the code rate relation Rc=1-(m/n)=1-(dv/dc). For example, a (3, 6)-regular code has rate Rc=1-3/6=1/2.

The decoding of LDPC codes relies on the Belief-Propagation Algorithm (BPA) framework extensively discussed in the literature [19]. This involves two major steps, the check node update and the bit node update (Fig. 1), where intrinsic values from the channel first feed the bit nodes (parents), then extrinsic information is processed and forwarded to the check nodes (children), which themselves produce new extrinsic information relying on the parity-check constraints, feeding their connected bit nodes.

Figure 1: Message-Passing Illustration – Check Node Update followed by Bit Node Update

The way of switching between bit and check node updates is referred to as scheduling, and will be discussed later on, as it can impact the decoder complexity.
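To make the two update steps concrete, the following sketch performs one flooding iteration of log-domain belief propagation. It is purely illustrative and not taken from any of the referenced designs: the function name and the dense matrix representation of H are our own simplifications, and a real decoder would operate on sparse structures with quantized messages.

```python
import numpy as np

def bp_flooding_iteration(H, llr_ch, msg_v2c):
    """One flooding iteration of log-domain belief propagation.

    H       : (m, n) binary parity-check matrix (numpy array)
    llr_ch  : (n,) intrinsic channel LLRs
    msg_v2c : (m, n) float variable-to-check messages (used where H == 1)
    Returns the updated messages and the posterior LLRs.
    """
    m, n = H.shape
    msg_c2v = np.zeros_like(msg_v2c)

    # Check node update: "box-plus" combination of all incoming messages
    # except the one on the target edge, computed via the tanh rule.
    for i in range(m):
        cols = np.flatnonzero(H[i])
        t = np.tanh(np.clip(msg_v2c[i, cols] / 2.0, -15, 15))
        for k, j in enumerate(cols):
            prod = np.prod(np.delete(t, k))
            msg_c2v[i, j] = 2.0 * np.arctanh(np.clip(prod, -1 + 1e-12, 1 - 1e-12))

    # Bit node update: channel LLR plus all incoming check messages;
    # the extrinsic message excludes the contribution of the target edge.
    posterior = llr_ch + msg_c2v.sum(axis=0)
    for j in range(n):
        for i in np.flatnonzero(H[:, j]):
            msg_v2c[i, j] = posterior[j] - msg_c2v[i, j]

    return msg_v2c, posterior
```

Decoding starts with msg_v2c initialized to the channel LLRs and stops as soon as the hard decisions on the posterior LLRs satisfy all parity checks, or a maximum number of iterations is reached.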
Joint Design Methodology

With parallelization holding the promise of keeping delays low while continuously increasing data rates, the major attraction from a design point of view is that LDPC codes are inherently oriented towards fully parallel processing. Nevertheless, a fully parallel implementation is prohibitive due to large block lengths. Consequently, a strong trend is currently going towards semi-parallel architectures [6][8], with Block-LDPC being the centrepiece of this approach.

Another important practical issue when dealing with coding schemes for adaptive air interfaces is flexibility in terms of block sizes and code rates. While designing LDPC codes we therefore have to keep in mind the direct and strong relation between the structure of the parity-check matrix and the total encoding, decoding and implementation complexity. Indeed, a completely random LDPC code might achieve better performance, but at the expense of a very complex interconnection (shuffle) network that might be prohibitive for large block lengths in terms of HW wiring, together with potentially high encoding complexity, a low achievable parallelization level, and most importantly, low flexibility in terms of block sizes and code rates. The sequel therefore intends to highlight and assess the most relevant performance/HW-requirement trade-offs.

Random-Like LDPC

One typical way of constructing good LDPC codes is to take a degree distribution that promises good error correction performance [17] (e.g. by EXIT chart curve matching of variable and check node decoder transfer curves) as a starting point and then use e.g. progressive edge growth (PEG) [18] algorithms to ensure a good distance spectrum, i.e., low error floors. Codes constructed following this framework usually come very close to the bounds of what is achievable in terms of error correction performance [17]. They are hence considered as the baseline comparison case for performance assessment. The disadvantage of this approach for practical implementation is that a new code needs to be designed for each block length and code rate, leading to the above mentioned low flexibility. The obtained codes are often non-systematic, thus requiring appropriate preprocessing to enable near linear-time encoding [2].
Structured LDPC (Block-LDPC)

Structured (Block-)LDPC codes on the other hand, such as the Pi-Construction Codes [15] or the LDPC Array Codes [14] proposed in the framework of IEEE 802, have been shown to offer good performance and high flexibility in terms of code rates and block sizes at the same time. The parity-check matrix H of such codes can be seen as an array of square sub-block matrices. These sub-block matrices are obtained by circular shifts and/or rotations of the identity matrix, or are all-zero matrices. The parity-check matrix is hence fully determined by these circular shifts and the sub-block dimension p. LDPC codes defined by such standards rely on the concept of a base model matrix introduced by Zhong and Zhang [7], which contains the circular shift values. Figure 2 shows the base model matrix Hb with (Mb,Nb)=(12,24) for the Rc=1/2 LDPC code defined in TGnSync [27]:

Figure 2: Base model matrix for TGnSync Rc=1/2 [27]

Adaptation to different block lengths can e.g. be done by expanding elements of the base matrix (e.g. by replacing each "1" in the parity-check matrix by an identity matrix and each "0" by an all-zeros matrix). Different code rates are obtained by appending more elements to the matrix in only one dimension (i.e., adding more variable nodes, but no check nodes). Note that the decoder must be flexible enough to support such changes of the code structure. Using such base matrices hence adds flexibility in terms of packet length while maintaining the degree distributions of H. Indeed, for a given block length N, the expansion factor p (denoted Zf in the standards) is obtained through p=N/Nb. As N must therefore be a multiple of Nb, the maximum achievable granularity is bounded by the size of the base model matrix. In the case of the TGnSync LDPC code the block length is hence scalable in steps of 24 bits – it is not very probable that a finer granularity would be required.

Another interesting aspect is that the whole expansion process is independent of the circular shift values, leading to many different possible LDPC code designs [10][11][12] with different performance, but all capable of being mapped onto the same semi-parallel decoding architecture.
Encoding Complexity

One considerable challenge for the application of LDPC codes in practical wireless systems has long been encoding complexity (it is in fact a still quite widespread misconception that this remains an open issue). If the parity-check matrix is in non-systematic format, straightforward methods for encoding destroy (or do not exploit) the sparse nature of the matrix – thus leading to an encoding complexity quadratic in the block length. However, in their famous paper [2], Richardson and Urbanke presented several pre-processing techniques to transform H into an approximate lower triangular (and thus approximately systematic) form, leading to an encoding complexity quadratic w.r.t. only a small fraction of the block length n. The resulting format of H is given below (Fig. 3) for the Block-LDPC after expansion by the factor p (=Zf):

    H = [ A  B  T ]
        [ C  D  E ]

with block column widths N-M, g and M-g, block row heights M-g and g, where M = m·p, g = γ·p, N = n·p, and T lower triangular.

Figure 3: Approximate lower triangular form of H to facilitate near linear-time encoding
The Block-LDPC codes considered in this paper [27] have exactly the format requested above, and thus enable us first to estimate the encoding complexity accurately, and then to take advantage of the pipelined processing described hereafter (Fig. 4):

Figure 4: Pipelined encoder structure – the information word s^T feeds stage (1), which computes A·s^T and C·s^T and applies E·T^(-1); stage (2) applies -Φ^(-1) to yield the first parity part p1^T; stage (3) computes the second parity part p2^T = -T^(-1)·(A·s^T + B·p1^T).

The remaining complexity comes from the inversion of two matrices. Fortunately, due to the triangular nature of the blocks involved in stages (1) and (3), this can be solved by back-substitution, thus considerably decreasing the amount of operations. Stage (2) then involves only matrix-vector multiplications, enabling the use of dedicated techniques (cf. the colouring problem described in [7]) for proposing efficient architectures. Nevertheless, the complexity of stage (2) is still O(g²), where g is proportional to the expansion factor. It is thus recommended to fix g=p=Zf. Alternatively, one may right away construct the parity-check matrix in systematic form [15] and achieve fully linear-time encoding.
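The following sketch illustrates this encoding procedure under the above decomposition. It follows the structure of [2] (over GF(2) the minus signs vanish), but the extraction of the blocks and the precomputation of Φ^(-1) are assumed to have been done elsewhere; it is a minimal illustration rather than an optimized implementation.

```python
import numpy as np

def back_substitute(T, b):
    """Solve T x = b over GF(2) for lower triangular T with a unit
    diagonal, exploiting its structure instead of inverting it."""
    x = np.zeros_like(b)
    for i in range(len(b)):
        x[i] = (b[i] + T[i, :i] @ x[:i]) % 2
    return x

def ru_encode(s, A, B, C, D, E, T, Phi_inv):
    """Richardson-Urbanke style encoding for H = [[A, B, T], [C, D, E]]:
    the codeword is (s, p1, p2). Phi_inv is the precomputed GF(2) inverse
    of Phi = E T^{-1} B + D (a small, dense g x g matrix).
    All inputs are 0/1 integer numpy arrays."""
    As = A @ s % 2                          # A s^T
    # Stage (1): E T^{-1} A s^T, with T^{-1} realized by back-substitution.
    y = E @ back_substitute(T, As) % 2
    # Stage (2): p1^T = Phi^{-1} (E T^{-1} A s^T + C s^T).
    p1 = Phi_inv @ ((y + C @ s) % 2) % 2
    # Stage (3): p2^T = T^{-1} (A s^T + B p1^T), again by back-substitution.
    p2 = back_substitute(T, (As + B @ p1) % 2)
    return np.concatenate([s, p1, p2])
```

The only part whose cost grows as O(g²) is the dense multiplication by Φ^(-1) in stage (2), which is why keeping g as small as p=Zf is attractive.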
Encoder HW requirements

Taking into account all these HW requirements, we follow the estimations given in [7] and apply them to the TGnSync LDPC proposal. The total number of logical gates (NAND) is given in Fig. 5, as a function of both the block length (indexed by its expansion factor Zf) and the code rate Rc. The dimensioning therefore requires around 11K gates for the worst case, here the rate Rc=1/2 code of codeword length 2304 bits (Zf=96).

Figure 5: Number of Logical Gates for TGnSync Encoder w.r.t. Block length (Zf) and Code Rate (Rc)

Figure 6 depicts the memory requirements (ROM, in bits) for storing the counters' initialization values. This amount is directly related to the number of non-zero blocks in H, underlining the sparsity differences among the available coding schemes. The case Rc=3/4 is now the worst case.

Figure 6: ROM width (bits) for TGnSync Encoder w.r.t. Block length (Zf) and Code Rate (Rc)

The register storage is related to the pipelined structure (Fig. 4) used for encoding.

Figure 7: Register width (bits) for TGnSync Encoder w.r.t. Block length (Zf) and Code Rate (Rc)
The code rate ranking is respected here (Fig. 7), and Rc=0.5 is once again the worst case, requiring around 15.6 Kbits of memory (1 Kbit = 1024 bits). As a result, to implement a fully pipelined encoder working with the whole range of coding schemes available in TGnSync, we need 11K gates together with around 16.2 Kbits of memory. Note that for a random-like LDPC code the required amount of memory would be significantly higher, even if the sparsity of the check matrix is retained for encoding, as will be highlighted later when discussing interleaver complexity.

Decoding Complexity

On the receiver side, there are mainly three different topics that have to be investigated: how to decrease the decoder complexity, how to increase the efficiency by means of generic architectures, and/or how to achieve high throughput. We will start by considering decoding complexity. As is obvious from the structure of the message-passing process, the average complexity of the LDPC decoding process is the product of three factors:
• the node complexity,
• the average number of iterations, and
• the number of nodes in each iteration.
In the following, we will discuss how these three factors can be minimized using state-of-the-art algorithms.
Node Complexity – Sub-Optimal Decoding

The standard algorithm for decoding of LDPC codes is the so-called "belief propagation algorithm (BPA)", also known as the sum-product algorithm [1][19]. For implementation simplicity, it is convenient to execute the algorithm in the log-domain, turning the typically required multiplications into simple additions (e.g. at the bit nodes). Following this path, however, has a significant disadvantage: calculating the check node messages then requires the non-linear "box-plus" operation. This drawback can be overcome by applying the maxLog approximation known from Turbo decoding – resulting in the well-known "MinSum" algorithm [16]. Unfortunately, introducing this approximation results in a typical performance loss of 0.5-1dB. Several proposals aim at reducing this offset by introducing correction terms in the calculation of the check node messages [20][21]. However, the most efficient method by far is a simple scaling of the check node messages [16], which enables the recovery of close-to-optimal error correction performance. We will refer to this algorithm as the corrected MinSum algorithm. Calculating variable and check node messages then only involves simple sum and minimum operations, respectively, plus a scaling of the check node messages. It is conjectured that no further substantial reduction in the node complexity is possible without accepting quite significant performance losses.
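A sketch of the resulting check node operation is given below. The scaling factor value of 0.8 is purely illustrative ([16] discusses how such factors are chosen), and the rest of the decoder (bit node sums, scheduling) is unchanged with respect to standard message passing.

```python
import numpy as np

def corrected_minsum_check_update(msgs, alpha=0.8):
    """Corrected (normalized) MinSum check node update for one check node.

    msgs  : incoming variable-to-check messages (assumed non-zero)
    alpha : scaling factor applied to the outgoing magnitudes
            (0.8 is an illustrative value, not a recommendation)
    Returns the extrinsic check-to-variable message for each edge.
    """
    msgs = np.asarray(msgs, dtype=float)
    signs = np.sign(msgs)
    sign_prod = np.prod(signs)          # product of all incoming signs
    mags = np.abs(msgs)
    order = np.argsort(mags)
    min1, min2 = mags[order[0]], mags[order[1]]
    out = np.empty_like(mags)
    for k in range(len(msgs)):
        # Exclude the target edge: its outgoing magnitude is the minimum
        # over all *other* edges (min2 for the overall-minimum edge);
        # multiplying by the edge's own sign removes it from sign_prod.
        mag = min2 if k == order[0] else min1
        out[k] = alpha * (sign_prod * signs[k]) * mag
    return out
```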
In this context, Bit-Flipping Algorithms can be considered as a viable solution for low-end terminals, where some loss in performance is acceptable when large savings in decoding complexity can be achieved. Such algorithms, based on majority-logic decoding, have recently received increased interest from the research community [3][4][5]. Proposals for weighted bit-flipping [5][22] show reasonable performance at extremely low implementation complexity. A drawback of such approaches is the absence of reliability information (soft output) at the decoder output. However, since low-end terminals will most probably not use iterative equalisation and/or channel estimation techniques in any case, this appears to be no significant limitation.

Convergence Speed – Scheduling

It has been shown that the classical method of message passing (called flooding), which consists in updating all the nodes on one side of the Tanner graph before going into the next half-iteration, leads to the highest memory requirements, together with a higher number of iterations (delay). Alternative schedulings of the message passing [23][24][25][13], also known as "shuffled" or "layered" belief propagation, in fact yield faster convergence, thus enabling a reduction of the average and maximum number of decoder iterations while retaining performance. The methods proposed by Mansour and Fossorier ([30][13]) update all bit nodes connected to each check node (horizontal scheduling), or all check nodes connected to each bit node (vertical scheduling), respectively. A speedup factor of 2 in the average number of iterations is usually achieved [13].

Moreover, if we denote by Γ the number of non-zero elements of H, and by q the number of quantization bits, we obtain the following memory requirements (in bits):
• Flooding: Γ·q + 3·n·q
• Horizontal: Γ·q + n·q
• Vertical: Γ·q + (2-Rc)·n·q
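These expressions are straightforward to evaluate, as the sketch below does for an example configuration. The edge count used there is a hypothetical placeholder – the true Γ for a given TGnSync mode follows from the number of non-zero blocks in its base model matrix times Zf.

```python
def message_memory_bits(gamma, n, q, Rc):
    """Message memory (bits) for the three schedulings, following the
    expressions in the text (gamma = number of non-zero elements of H,
    q = number of quantization bits)."""
    return {
        "flooding":   gamma * q + 3 * n * q,
        "horizontal": gamma * q + n * q,
        "vertical":   gamma * q + (2 - Rc) * n * q,
    }

# Illustrative numbers only: n = 2304 (Zf = 96), q = 4, Rc = 1/2, and a
# hypothetical edge count gamma = 8000.
print(message_memory_bits(gamma=8000, n=2304, q=4, Rc=0.5))
```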
Figure 8: Memory Requirement w.r.t. Scheduling algorithm (e.g. q=4 bits)

If we consider the case of 4 quantization bits (q=4), we can compare the sensitivity of the memory requirements w.r.t. the scheduling process in TGnSync (Fig. 8). The amount of memory saved by applying an alternative scheduling emphasizes that it is important to investigate such design alternatives.
Processing Unit takes care either of a subset
Number of Nodes – Forced Convergence

The "forced convergence" [26] or "sleeping node" [9] approach aims at reducing the third factor – the number of active nodes. The basic idea is to exploit the fact that a large number of variable nodes converge to a strong belief after very few iterations, i.e., these bits have already been reliably decoded and one may skip updating their messages in subsequent iterations. In order to identify such nodes, "aggregate messages" at the variable and check nodes are compared with threshold values. Choosing the thresholds appropriately allows computational complexity to be traded for decoding performance. Decoding complexity can be lowered by around 20% at a target FER of 1%, for both message passing and MinSum decoding, at negligible losses in error correction performance [26].
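In its simplest form, the node selection could look like the sketch below – a single magnitude threshold on the aggregate variable node belief; the actual criterion and threshold values of [26] are more elaborate, and the value 10.0 is arbitrary:

```python
import numpy as np

def active_variable_nodes(posterior_llrs, threshold=10.0):
    """Forced-convergence style deactivation sketch: variable nodes whose
    aggregate belief magnitude exceeds the threshold are considered
    converged; only the returned indices are updated in later iterations."""
    return np.flatnonzero(np.abs(posterior_llrs) < threshold)
```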
Decoder Architecture

Recently, many architecture proposals [6][8] have relied more or less on the same trend, namely a generic semi-parallel architecture, where the degree of parallelism can be tuned depending on throughput and HW requirements. When using the Block-LDPC from TGnSync, these architectures converge towards the following scheme (Fig. 9):

Figure 9: Generic Semi-Parallel Architecture for LDPC Decoder – Variable Node Units VNU1…VNUn with memory banks MBV, connected through a shuffle/inverse-shuffle network and a control unit to Check Node Units CNU1…CNUm with memory banks MBC

The decoder thus consists of two sets of processing units, the Variable Node Units (VNU) and the Check Node Units (CNU), respectively, which are connected through an interleaving (shuffle) network. Each Processing Unit takes care either of a subset of code nodes (columns), or of a subgroup of check nodes (rows). In our case (TGnSync), the number of VNUs and CNUs is given by the dimensions of the respective base model matrices. That means the number of VNUs is fixed to Nb=24 elements, and the number of CNUs varies as a function of the code rate Rc as m=Nb·(1-Rc):
• Rc=1/2: #CNUs=12
• Rc=2/3: #CNUs=8
• Rc=3/4: #CNUs=6
• Rc=5/6: #CNUs=4
The size of the Memory Banks (MB) attached to the Processing Units (avoiding memory access conflicts), which store the messages from each edge within each column or row, is given by the number of edges in each slice, which is exactly (due to the Block-LDPC construction) the expansion factor Zf; their total number is simply the number of non-zero elements within the base model matrix (often denoted as P). We now just need to add some memory to store the channel messages, together with the hard decisions. Finally, we need 3·Nb more memory banks of size Zf.
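As a rough dimensioning aid, the message memory of this architecture can be estimated as in the sketch below. The split of the 3·Nb extra banks between channel values and hard decisions, and the per-bank word widths, are assumptions made for illustration only.

```python
def decoder_message_memory_bits(P, Nb, Zf, q):
    """Estimate the message memory of the semi-parallel Block-LDPC
    decoder: P banks of depth Zf for the edge messages (one bank per
    non-zero block of the base model matrix), plus 3*Nb banks of depth
    Zf for channel messages and hard decisions (assumed here as two
    q-bit channel banks and one 1-bit hard-decision bank per column)."""
    edge_bits = P * Zf * q
    channel_bits = 2 * Nb * Zf * q
    hard_decision_bits = Nb * Zf
    return edge_bits + channel_bits + hard_decision_bits
```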

Decoder HW requirements

Applying the above estimation technique for memory usage to all the coding schemes available in the TGnSync proposal results in the following requirements (Fig. 10):

Figure 10: Memory Requirement for TGnSync decoder w.r.t. Block length (Zf) and Code Rate (Rc)

It is worth noticing that the code of rate 3/4 requires the largest amount of memory, around 53.7 Kbits.

Now, considering the complexity itself, the authors of [7] evaluate the code node and check node complexity as equivalent to 250 and 320 NAND gates, respectively (nevertheless, this can vary depending on the position of the shuffle network, thus impacting the complexity of the Node Units). In our case we apply the shuffle positioning proposed by Zhang in his thesis [6], leading to the following complexity evaluation (Fig. 11):

Figure 11: Number of Logical Gates for TGnSync Decoder w.r.t. Block length (Zf) and Code Rate (Rc)

Obviously, this is totally independent of the block length, and in order to be fully compliant with the TGnSync proposal the decoder might require around 40K gates, which is quite affordable for an FPGA.

Interleaver Complexity

This last point can lead to prohibitive complexity in the implementation of LDPC codes, in terms of wiring (routing problems), but in terms of memory consumption as well. To serve as a reference, a randomly generated LDPC code will require

⌈log2(N·dv)⌉ · N·dv bits

for storing addresses. For instance, in our case with Zf=96 (2304 bits), code rate Rc=1/2, and an average bit node degree of 3.4, we would need around 6.13·10^7 bits, i.e., around 10 Mbytes of memory! Fortunately, using such Block-LDPC allows the interleaver relation to be determined directly by means of the base model matrix and the circular shift values. For the current case, we need to store 12·24 cyclic shift values ranging from -1 to 46, leading to a memory requirement of 1728 bits, which represents 0.28% of the memory consumed in the random case.
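The Block-LDPC side of this comparison can be reproduced with the small sketch below, which evaluates the address-storage formula quoted above and the base-matrix storage of the structured case (shift values in the range -1 to 46 fit into 6 bits):

```python
import math

def random_interleaver_bits(N, dv_avg):
    """Address storage for a randomly connected LDPC code, following the
    formula quoted in the text: ceil(log2(N*dv)) bits per edge address."""
    edges = N * dv_avg
    return math.ceil(math.log2(edges)) * edges

def block_ldpc_interleaver_bits(Mb, Nb, shift_bits=6):
    """Block-LDPC case: only the Mb*Nb base-matrix shift values are stored."""
    return Mb * Nb * shift_bits

print(block_ldpc_interleaver_bits(12, 24))  # 1728 bits for the TGnSync case
```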
Decoding Performance

The following two items can be regarded as central when considering the application of LDPC codes to wireless communications systems, i.e., their implementation in a real-time system:
• How much is lost in error correction performance by replacing true belief propagation by a reduced-complexity variant?
• How much is performance deteriorated if we prefer structured to random-like LDPC codes?
To answer these two questions, we evaluate the performance of the above mentioned Block-LDPC, as well as of several random-like codes of comparable block length from [31], for different decoding algorithms and block lengths. All of the random-like LDPC codes were constructed using the PEG [18] algorithm and many of them are in fact considered to be the best available codes for the considered block lengths [31].

Figure 12: Comparison between different decoding algorithms, for a Block-LDPC of length N=288 (FER vs. Eb/N0; BPA, MinSum and corrected MinSum, 20-100 iterations)

As can be observed from Figures 12 and 13, the difference between standard belief propagation and MinSum decoding is in the order of 0.5-1dB. The losses tend to be higher at larger block lengths. Increasing the (maximum) number of iterations for MinSum decoding obviously allows this gap to be reduced (especially for short codes) – at the expense of lower savings in decoding complexity.

Figure 13: Comparison between different decoding algorithms, for a random-like LDPC of length N=2048 (FER vs. Eb/N0; BPA, MinSum and corrected MinSum, 20-100 iterations)

Using the corrected MinSum algorithm is clearly the most promising option, especially for larger block lengths (cf. the results in Figure 13). The SNR loss w.r.t. true BPA decoding is usually below 0.2dB – which is quite acceptable when considering a practical implementation of LDPC codes.

Figure 14: Comparison between the performance of structured (N=576, 1152, 1728) and random-like (N=504, 1024, 2048) LDPC codes (BPA, 50 it.)

Figure 14 illustrates that the loss in performance incurred by going from random-like LDPC to Block-LDPC is not substantial (below 0.2 dB at a target FER of 5%). Note that the exhibited differences are partly due to the slightly different block sizes. These results clearly motivate the use of Block-LDPC instead of random-like LDPC.
Figure 15: Performance comparison between BPA and WBFA (FER vs. Eb/N0, block lengths 288, 576, 1152 and 1728)

From Figure 15 it can be seen that using the low-complexity Weighted Bit-Flipping Algorithm (WBFA) [5] instead of "standard" algorithms for LDPC decoding results in a performance loss of between 5 and 6 dB, which is quite significant. Nevertheless, this degradation should be balanced against the fact that bit-flipping algorithms do not require any message-passing storage or complex Processing Units. They thus might be suitable for very low-end terminals.
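For completeness, the sketch below shows one common weighted bit-flipping variant in the spirit of [5][22]. The particular reliability metric used here (each check weighted by the smallest channel magnitude among its bits) is a textbook choice and not necessarily the exact metric evaluated in Figure 15.

```python
import numpy as np

def wbf_decode(H, llr, max_iter=50):
    """Weighted bit-flipping decoding (a sketch of one common variant):
    per iteration, flip the bit with the largest weighted sum of
    failed-check indications.

    H   : (m, n) binary parity-check matrix
    llr : (n,) channel LLRs; hard decision x = 1 where llr < 0
    """
    m, n = H.shape
    x = (llr < 0).astype(int)
    # Reliability of each check: the smallest |LLR| among its bits.
    w = np.array([np.min(np.abs(llr[np.flatnonzero(H[i])])) for i in range(m)])
    for _ in range(max_iter):
        syndrome = H @ x % 2
        if not syndrome.any():
            break                       # all parity checks satisfied
        # Flip metric: failed checks contribute +w, satisfied checks -w.
        E = ((2.0 * syndrome - 1.0) * w) @ H
        x[np.argmax(E)] ^= 1            # flip the least reliable bit
    return x
```

Note how the only per-iteration operations are a syndrome computation and an argmax – no message-passing storage is needed, matching the complexity argument made above.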
Conclusions

After their invention in the early sixties by Gallager, LDPC codes sank into oblivion until they were rediscovered in the mid-nineties. Their recent introduction within standards is proof that many of what have been considered serious challenges for the practical implementation of LDPC codes have been efficiently tackled by recent research advances. Their adoption by the implementation-oriented community (and eventually their industry take-off) is facilitated by increasing research activities aiming to formalize the joint framework of the LDPC encoding/decoding architecture. In this paper, we reviewed the keystone elements of such a formal process. We are convinced that the combination of implementation-oriented code design (Block-LDPC) and sub-optimal decoding algorithms (corrected MinSum decoding) will make LDPC codes a viable option for next generation wireless systems.

Acknowledgements

This work has been performed in the framework of the IST project IST-2003-507581 WINNER, which is partly funded by the European Union. The authors would like to acknowledge the contributions of their colleagues, although the views expressed are those of the authors and do not necessarily represent the project.

REFERENCES

[1] R. G. Gallager, ‘Low-Density Parity-Check Codes’, Cambridge, MA: MIT Press, 1963.
[2] T. Richardson and R. Urbanke, ‘Efficient Encoding of Low-Density Parity-Check Codes’, IEEE Trans. on Info. Theory, Vol. 47, No. 2, Feb. 2001.
[3] L. Bazzi, T. J. Richardson, and R. Urbanke, ‘Exact Thresholds and Optimal Codes for the Binary-Symmetric Channel and Gallager’s Decoding Algorithm A’, IEEE Trans. on Info. Theory, Vol. 50, No. 9, Sept. 2004.
[4] P. Zarrinkhat and A. H. Banihashemi, ‘Threshold Values and Convergence Properties of Majority-Based Algorithms for Decoding Regular Low-Density Parity-Check Codes’, IEEE Trans. on Comm., Vol. 52, No. 12, Dec. 2004.
[5] J. Zhang and M. P. C. Fossorier, ‘A Modified Weighted Bit-Flipping Decoding of Low-Density Parity-Check Codes’, IEEE Comm. Letters, Vol. 8, No. 3, March 2004.
[6] T. Zhang, ‘Efficient VLSI Architectures for Error-Correcting Coding’, Ph.D. Thesis, University of Minnesota, July 2002.
[7] H. Zhong and T. Zhang, ‘Block-LDPC: A Practical LDPC Coding System Design Approach’, IEEE Trans. on Circ. and Syst.-I: Reg. Papers, Vol. 52, No. 4, April 2005.
[8] F. Guilloud, ‘Generic Architecture for LDPC Codes Decoding’, Ph.D. Thesis, ENST Paris, July 2004.
[9] R. Bresnan, ‘Novel Code Construction and Decoding Techniques for LDPC Codes’, M.Eng.Sc. Thesis, National University of Ireland, September 2004.
[10] B. Vasic, E. M. Kurtas, and A. V. Kuznetsov, ‘LDPC Codes Based on Mutually Orthogonal Latin Rectangles and Their Application in Perpendicular Magnetic Recording’, IEEE Trans. on Magn., Vol. 38, No. 5, Sept. 2002.
[11] O. Milenkovic, I. B. Djordjevic, and B. Vasic, ‘Block-Circulant Low-Density Parity-Check Codes for Optical Communication Systems’, IEEE Journal of Selected Topics in Quantum Electronics, Vol. 10, No. 2, March/April 2004.
[12] B. Vasic and O. Milenkovic, ‘Combinatorial Constructions of Low-Density Parity-Check Codes for Iterative Decoding’, IEEE Trans. on Info. Theory, Vol. 50, No. 6, June 2004.
[13] J. Zhang and M. Fossorier, ‘Shuffled Iterative Decoding’, IEEE Trans. on Comm., Vol. 53, No. 2, pp. 209-213, Feb. 2005.
[14] IEEE 802.16e, ‘LDPC Coding for OFDMA PHY’, IEEE Doc. C802-16e-05/066r3, January 2005.
[15] IEEE 802.11n, ‘Structured LDPC Codes as an Advanced Coding Scheme for 802.11n’, IEEE Doc. 802.11-04/884r0, August 2004.
[16] J. Chen and M. Fossorier, ‘Near Optimum Universal Belief Propagation Based Decoding of Low-Density Parity-Check Codes’, IEEE Trans. on Comm., Vol. 50, No. 3, pp. 406-414, March 2002.
[17] T. J. Richardson, M. A. Shokrollahi, and R. Urbanke, ‘Design of Capacity-Approaching Irregular Low-Density Parity-Check Codes’, IEEE Trans. on Info. Theory, Vol. 47, No. 2, pp. 617-637, Feb. 2001.
[18] X. Y. Hu, E. Eleftheriou, and D. M. Arnold, ‘Regular and Irregular Progressive Edge-Growth Tanner Graphs’, submitted to IEEE Trans. on Info. Theory, 2003.
[19] D. J. C. MacKay and R. M. Neal, ‘Near Shannon Limit Performance of Low-Density Parity-Check Codes’, Electron. Lett., Vol. 32, pp. 1645-1646, August 1996.
[20] T. Clevorn and P. Vary, ‘Low-Complexity Belief Propagation Decoding by Approximations with Lookup-Tables’, in Proc. of the 5th International ITG Conference on Source and Channel Coding, pp. 211-215, Erlangen, Germany, January 2004.
[21] G. Richter, G. Schmidt, M. Bossert, and E. Costa, ‘Optimization of a Reduced-Complexity Decoding Algorithm for LDPC Codes by Density Evolution’, in Proc. of the IEEE International Conference on Communications (ICC 2005), Seoul, Korea, 2005.
[22] Y. Kou, S. Lin, and M. Fossorier, ‘Low-Density Parity-Check Codes Based on Finite Geometries: A Rediscovery and New Results’, IEEE Trans. on Info. Theory, Vol. 47, pp. 2711-2736, Nov. 2001.
[23] F. R. Kschischang and B. J. Frey, ‘Iterative Decoding of Compound Codes by Probabilistic Propagation in Graphical Models’, IEEE Journal on Select. Areas Commun., pp. 219-230, 1998.
[24] Y. Mao and A. H. Banihashemi, ‘Decoding Low-Density Parity-Check Codes with Probabilistic Scheduling’, IEEE Comm. Letters, Vol. 5, pp. 415-416, Oct. 2001.
[25] E. Yeo, B. Nikolić, and V. Anantharam, ‘High Throughput Low-Density Parity-Check Decoder Architectures’, in Proc. IEEE Globecom 2001.
[26] E. Zimmermann, W. Rave, and G. Fettweis, ‘Forced Convergence Decoding of LDPC Codes – EXIT Chart Analysis and Combination with Node Complexity Reduction Techniques’, in Proc. of the 11th European Wireless Conference (EW 2005), Nicosia, Cyprus, Apr. 2005.
[27] TGn Sync proposal: http://www.tgnsync.org/home
[28] WWiSE proposal: http://www.wwise.org/
[29] WINNER FP6: https://www.ist-winner.org/
[30] M. M. Mansour, ‘Turbo-Decoder Architectures for Low-Density Parity-Check Codes’, in Proc. IEEE Globecom 2002.
[31] D. J. C. MacKay, ‘Encyclopedia of Sparse Graph Codes’. Available online at http://wol.ra.phy.cam.ac.uk/mackay/codes/data.html