The present invention provides methods and apparatus for LDPC decoding using a hardware-sharing and serial sum-product architecture. The disclosed LDPC decoder performs the sum-product decoding algorithm with less memory and fewer clock cycles, relative to a conventional implementation.
The following background discussion of LDPC codes and LDPC decoding is based on a discussion in, A. J. Blanksby and C. J. Howland, “A 690-mW 1-Gb/s 1024-b, Rate-1/2 Low-Density Parity-Check Decoder,” IEEE J. Solid-State Circuits, Vol. 37, 404-412 (March 2002), incorporated by reference herein. For a more detailed discussion, the reader is referred to the full Blanksby and Howland paper.
Matrix Representation of LDPC Codes
LDPC codes are linear block codes. The set of all codewords, x ∈ Cx, spans the null space of a parity check matrix H:
HxT=0, ∀x∈Cx. (1)
The code rate, r, is given by:
r=(n−m)/n (2)
where n is the number of bit nodes (the codeword length) and m is the number of parity check constraints.
The locations of the set elements of the parity check matrix H are selected to satisfy a desired row and column weight profile, where the row and column weights are defined as the number of set elements in a given row and column, respectively. In a regular LDPC code, all rows are of uniform weight, as are all columns. If the rows and columns are not of uniform weight, the LDPC code is said to be irregular.
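By way of illustration only, the following Python sketch checks equation (1) over GF(2), evaluates the rate of equation (2), and reports the row and column weight profile for a small example parity check matrix; the matrix and all variable names are hypothetical and are not taken from the specification.

```python
import numpy as np

# Hypothetical example: a small 4 x 8 parity check matrix (m = 4, n = 8).
H = np.array([
    [1, 1, 0, 1, 1, 0, 0, 0],
    [0, 1, 1, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 1, 1, 0],
    [1, 0, 0, 0, 0, 1, 1, 1],
], dtype=int)
m, n = H.shape

def is_codeword(x):
    # Equation (1): a codeword x satisfies H x^T = 0 over GF(2).
    return not np.any(H.dot(x) % 2)

# Equation (2): r = (n - m)/n (exact when H has full row rank, as it does here).
rate = (n - m) / n

# Row and column weights: number of set (non-zero) elements per row/column.
row_weights = H.sum(axis=1)   # weight of each parity check (row)
col_weights = H.sum(axis=0)   # weight of each bit node (column)

print("all-zero word is a codeword:", is_codeword(np.zeros(n, dtype=int)))
print("rate:", rate)
print("row weights:", row_weights, "column weights:", col_weights)
print("regular code:", len(set(row_weights)) == 1 and len(set(col_weights)) == 1)
```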
Graph Representation of LDPC Codes
LDPC codes can also be represented using a bipartite graph, where one set of nodes represents the parity check constraints and the other set represents the data bits.
The algorithm used for decoding LDPC codes is known as the sum-product algorithm. For good decoding performance with this algorithm, it is important that the cycles in the graph representation of the LDPC code be as long as possible. In the exemplary representation of
The Sum-Product Algorithm
The sum-product algorithm is an iterative algorithm for decoding LDPC codes.
The sum-product algorithm is also known as the message passing algorithm or belief propagation. For a more detailed discussion of the sum-product algorithm, see, for example, A. J. Blanksby and C. J. Howland, “A 690-mW 1-Gb/s 1024-b, Rate-1/2 Low-Density Parity-Check Decoder,” IEEE J. Solid-State Circuits, Vol. 37, 404-412 (March 2002), and D. E. Hocevar, “LDPC Code Construction With Flexible Hardware Implementation,” IEEE Int'l Conf. on Comm. (ICC), Anchorage, Ak., 2708-2712 (May, 2003), each incorporated by reference herein.
The message from bit node i to check node j is given by:
Qi,j=λi+Σl∈Bi, l≠j Rl,i (3)
It is noted that the notations used herein are defined in a table at the end of the specification. The message from check node j to bit node i is given by:
Rj,i=[Πl∈Cj, l≠i sign(Ql,j)]·φ(Σl∈Cj, l≠i φ(|Ql,j|)) (4)
where:
φ(x)=−log(tanh(x/2)).
The a-posteriori information value, which is also called the a-posteriori log-likelihood ratio (LLR), for bit i, Λi, is given by:
Λi=λi+Σj∈Bi Rj,i.
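As a purely illustrative, non-hardware behavioral model of equations (3) and (4) and of the a-posteriori LLR computation, the following Python sketch performs one sum-product iteration over a parity check matrix H; the function names are assumptions of this sketch, and degenerate zero-valued messages are not handled.

```python
import numpy as np

def phi(x):
    # phi(x) = -log(tanh(x/2)); phi is its own inverse for x > 0.
    return -np.log(np.tanh(x / 2.0))

def sum_product_iteration(H, llr, R):
    """One sum-product iteration.
    H   : m x n parity check matrix (0/1 entries)
    llr : length-n a-priori LLRs (lambda_i)
    R   : m x n check-to-bit messages from the previous iteration (zeros initially)
    Returns (Q, R_new, Lambda)."""
    m, n = H.shape
    Q = np.zeros((n, m))
    R_new = np.zeros((m, n))

    # Equation (3): Q_ij = lambda_i + sum of R_li over l in B_i, l != j.
    for i in range(n):
        B_i = np.flatnonzero(H[:, i])
        total = llr[i] + R[B_i, i].sum()
        for j in B_i:
            Q[i, j] = total - R[j, i]

    # Equation (4): R_ji = (product of signs) * phi(sum of phi(|Q_lj|)), l in C_j, l != i.
    for j in range(m):
        C_j = np.flatnonzero(H[j, :])
        mags = phi(np.abs(Q[C_j, j]))
        signs = np.sign(Q[C_j, j])
        for pos, i in enumerate(C_j):
            others = np.delete(np.arange(len(C_j)), pos)
            R_new[j, i] = signs[others].prod() * phi(mags[others].sum())

    # A-posteriori LLR: Lambda_i = lambda_i + sum of R_ji over j in B_i.
    Lam = llr + R_new.sum(axis=0)
    return Q, R_new, Lam
```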
A significant challenge when implementing the sum-product algorithm for decoding LDPC codes is managing the passing of the messages. Since the functionality of both the check and bit nodes is relatively simple, their respective realizations involve only a small number of gates. The main challenge is providing the bandwidth required for passing messages between the functional nodes.
Hardware-Sharing Decoder Architecture
It has been recognized that a hardware-sharing architecture reduces the area of the decoder.
Modified LDPC Parity Check Equations
The present invention recognizes that the components of the above LDPC parity check equations can be reorganized to yield improvements in the memory and clock cycle requirements. The check node computation shown in equation (4) can be separated into several components, as follows:
ρi,j=φ(|Qi,j|) (5)
where ρi,j is the transformed magnitude of the bit-to-check node message Qi,j between bit node i and check node j, and “| |” denotes magnitude.
σi,j=sign(Qi,j) (6)
The sum of the transformed magnitudes over all bit nodes connected to check node j, and the product of the corresponding sign bits, are defined as:
Pj=Σl∈Cj ρl,j (7)
Sj=Πl∈Cj σl,j (8)
Then, the magnitude and sign of the message from check node j to bit node i are given by:
|Rj,i|=φ(Pj−ρi,j) (9)
sign(Rj,i)=Sj·σi,j (10)
Thus, while the conventional computation of the check-to-bit node messages according to equation (4) excludes the current bit node i from the computation (l ∈ Cj, l≠i), the present invention computes an intermediate value Pj as the sum of the transformed magnitudes ρl,j of the bit-to-check node messages for all the bit nodes l connected to the check node j, and then subtracts ρi,j from Pj to compute the magnitude of the message Rj,i from check node j to bit node i.
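A minimal Python sketch of this reorganized check node computation, equations (5) through (10), is given below; it is a behavioral model only, not the hardware datapath, the function names are illustrative, and nonzero input messages are assumed.

```python
import numpy as np

def phi(x):
    # Transformation used in equations (5) and (9); phi is its own inverse for x > 0.
    return -np.log(np.tanh(x / 2.0))

def check_node_update(Q_j):
    """Compute all messages R_{j,i} of one check node j from the incoming
    bit-to-check messages Q_j = [Q_{l,j} for l in C_j], using the intermediate
    values P_j and S_j instead of a per-edge exclusion sum."""
    Q_j = np.asarray(Q_j, dtype=float)

    rho = phi(np.abs(Q_j))      # equation (5): transformed magnitudes
    sigma = np.sign(Q_j)        # equation (6): signs (assumed nonzero)

    P_j = rho.sum()             # sum over all connected bit nodes, equation (7)
    S_j = sigma.prod()          # product of all sign values, equation (8)

    # Equations (9) and (10): exclude bit node i by subtraction / sign re-multiplication
    # (for +/-1 signs, multiplying by sigma_i again removes it from the product).
    R_mag = phi(P_j - rho)
    R_sign = S_j * sigma
    return R_sign * R_mag

# Hypothetical example with three incoming messages:
print(check_node_update([1.2, -0.7, 2.5]))
```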
In the exemplary embodiment, the bit node computations are performed in 2's-complement arithmetic and the check node computations are performed in sign-magnitude (SM) format.
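As a purely illustrative aside, the sketch below shows one possible q-bit sign-magnitude encoding and its conversion to and from ordinary signed integers; the helper names, the saturation behavior, and the example word width are assumptions of this sketch rather than details taken from the specification.

```python
def signed_to_sm(value, q):
    """Encode a signed integer as a q-bit sign-magnitude word:
    one sign bit followed by q-1 magnitude bits (saturating)."""
    sign = 1 if value < 0 else 0
    magnitude = min(abs(value), (1 << (q - 1)) - 1)
    return (sign << (q - 1)) | magnitude

def sm_to_signed(word, q):
    """Decode a q-bit sign-magnitude word back to a signed integer."""
    sign = (word >> (q - 1)) & 1
    magnitude = word & ((1 << (q - 1)) - 1)
    return -magnitude if sign else magnitude

# Hypothetical 6-bit example:
for v in (5, -5, 0, -31):
    w = signed_to_sm(v, 6)
    print(v, "->", format(w, "06b"), "->", sm_to_signed(w, 6))
```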
LDPC Decoder With Serial Sum-Product Architecture
These ρi,jk, j∈Bi are fed to dc transformed magnitude update units 440, which consist of adders and read/write circuitry and compute the sum Pjk over all bit nodes connected to check node j (Pjk is then read out in the next iteration). The parallel transformed magnitude update units 440 can access any dc elements of the m memory elements in the memory 460, where each element stores q bits, q being the number of bits used to represent Pjk in the exemplary embodiment. These parallel transformed magnitude update units 440 update the relevant memory elements such that at the end of an iteration (i.e., the (n·k)th time cycle), Pjk, j∈1 . . . m, as defined in equation (7), are stored in the m memory elements. In other words, these memory elements keep a running sum of the relevant ρ values.
If the intermediate value in memory element j for the kth iteration is given by Pjk(i) for i=1 . . . n, then the running sum at the (n·(k−1)+i)th time instance is given by:
Pjk(i)=Pjk(i−1)+ρi,jk if j∈Bi, and Pjk(i)=Pjk(i−1) otherwise, with Pjk(0)=0. (11)
Then, at the end of an iteration, such as at the (n·k)th time instance:
Pjk=Pjk(n) (12)
The signs of the bit-to-check node messages Qi,jk, j∈Bi, which are σi,jk, j∈Bi as defined by equation (6), are processed as follows. Similar to the procedure discussed above, the σi,jk, j∈Bi are fed to another set of dc sign update units 450 that consist of XOR gates and read/write circuitry. These parallel sign update units update the relevant memory elements in the memory 460 such that at the end of an iteration, Sjk, j∈1 . . . m, as defined in equation (8), are stored in m memory elements, where each memory element stores one bit. In other words, these memory elements keep a running product of the sign-bit σ values; the product is obtained with the XOR gates. As explained in the previous paragraph, if the intermediate value of memory element j for the kth iteration is given by Sjk(i) for i=1 . . . n, then the running product is given by:
Sjk(i)=Sjk(i−1)·σi,jk if j∈Bi, and Sjk(i)=Sjk(i−1) otherwise, with Sjk(0)=1. (13)
Then, at the end of an iteration, such as at the (n·k)th time instance:
Sjk=Sjk(n) (14)
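The per-cycle bookkeeping culminating in equations (12) and (14) can be modeled with the short Python sketch below; the lists stand in for the m memory elements of memory 460, the loop bodies stand in for update units 440 and 450, and all data values are hypothetical.

```python
def accumulate_iteration(n, m, B, rho, sigma_bit):
    """Model of the per-time-cycle updates of memory 460 for one iteration.
    B[i]            : check nodes connected to bit node i (|B[i]| = d_c)
    rho[i][j]       : transformed magnitude rho_{i,j}, equation (5)
    sigma_bit[i][j] : sign bit of Q_{i,j} (0 = positive, 1 = negative)"""
    P = [0.0] * m   # running sums P_j(i), reset at the start of the iteration
    S = [0] * m     # one-bit running sign products S_j(i), XOR-accumulated

    for i in range(n):               # one bit node per time cycle
        for j in B[i]:               # d_c parallel update units
            P[j] += rho[i][j]        # adder in transformed magnitude update unit 440
            S[j] ^= sigma_bit[i][j]  # XOR gate in sign update unit 450 (product of +/-1 signs)

    # After the (n*k)-th time cycle these are P_j^k and S_j^k, equations (12) and (14).
    return P, S

# Hypothetical toy example: n = 4 bit nodes, m = 2 check nodes, d_c = 2.
B = [[0, 1], [0, 1], [0, 1], [0, 1]]
rho = [{0: 0.3, 1: 0.5}, {0: 0.2, 1: 0.1}, {0: 0.4, 1: 0.6}, {0: 0.7, 1: 0.2}]
sigma_bit = [{0: 0, 1: 1}, {0: 1, 1: 1}, {0: 0, 1: 0}, {0: 1, 1: 0}]
print(accumulate_iteration(4, 2, B, rho, sigma_bit))
```

In the disclosed architecture, the same ρ and σ values are also written into the FIFO buffers 420-1 and 420-2 at each time cycle, as described below, so that they are available to the subtraction and sign-processing stages in the next iteration.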
The ρi,jk,j∈Bi computed at each time cycle are also sent to a first in first out (FIFO) buffer 420-1 (buffer-1 in
The σi,jk,j∈Bi computed at each time cycle are fed to another FIFO buffer 420-2 (buffer-2 in
Now, the procedure is explained for the computation of the magnitude of the check-to-bit node messages Rj,ik−1 from the Pjk−1, j=1 . . . m saved in the memory 460 and the ρi,jk−1, j∈Bi saved in the first FIFO buffer 420-1 from the previous iteration k−1. The required Pjk−1, j∈Bi are read from the memory 460 and the ρi,jk−1, j∈Bi are read from the first FIFO buffer 420-1. Then, the differences (Pjk−1−ρi,jk−1) are computed for each j∈Bi by parallel transformed magnitude subtraction units 470-1 through 470-3, and the corresponding results are passed to parallel transformation units 475-1 through 475-3 that perform the function φ. The outputs of these transformation units 475 are the magnitudes of the corresponding messages from the dc check nodes to the ith bit node at the (k−1)th iteration, namely |Rj,ik−1|, j∈Bi, according to equation (9).
The sign of the check-to-bit node messages at the (k−1)th iteration (sign(Rj,ik−1)) is computed in a similar manner. The required Sjk−1,j∈Bi are read from memory 460 and the σi,jk−1, j∈Bi are read from the second FIFO buffer 420-2. Then, the products Sjk−1·σi,jk−1 are computed using parallel sign processing units 478-1 through 478-3 (each using an XOR gate in the exemplary embodiment) for each j∈Bi. These products are the sign bits of the corresponding messages from the dc check nodes to the ith bit node at the (k−1)th iteration, namely sign(Rj,ik−1), j∈Bi according to equation (10).
The sign and magnitude of the check-to-bit node messages are passed through dc sign-magnitude to 2's-complement conversion units 480. The results from these units 480 are Rj,ik−1, j∈Bi, which in turn are the inputs to the bit node update unit 410. The bit node update unit 410 computes the bit-to-check node messages Qi,jk for the kth iteration according to (see also equation (3)):
Qi,jk=λi+Σl∈Bi, l≠j Rl,ik−1 (15)
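One time cycle of this message-recovery and bit node update path, equations (9), (10) and (15), can be sketched in Python as follows; the dictionaries stand in for the values read from memory 460 and the FIFO buffers 420-1 and 420-2, and the function name and example values are illustrative assumptions.

```python
import numpy as np

def phi(x):
    return -np.log(np.tanh(x / 2.0))

def bit_node_cycle(B_i, llr_i, P_prev, S_prev, rho_fifo, sigma_fifo):
    """One time cycle of the serial datapath for bit node i at iteration k.
    P_prev, S_prev       : P_j^{k-1} and sign bits S_j^{k-1} read from memory 460
    rho_fifo, sigma_fifo : rho_{i,j}^{k-1} and sign bits popped from buffers 420-1 / 420-2
    Returns the d_c messages R_{j,i}^{k-1} and the new messages Q_{i,j}^k."""
    R = {}
    for j in B_i:
        mag = phi(P_prev[j] - rho_fifo[j])                   # equation (9), units 470/475
        sign = -1.0 if (S_prev[j] ^ sigma_fifo[j]) else 1.0  # equation (10), units 478
        R[j] = sign * mag                                    # SM -> 2's complement, units 480

    total = llr_i + sum(R.values())
    Q = {j: total - R[j] for j in B_i}                       # equation (15), unit 410
    return R, Q

# Hypothetical values for a bit node connected to check nodes 0 and 1:
print(bit_node_cycle([0, 1], 0.8,
                     {0: 1.6, 1: 2.1}, {0: 1, 1: 0},
                     {0: 0.3, 1: 0.5}, {0: 0, 1: 1}))
```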
Memory Requirements and Throughput
The exemplary first FIFO buffer 420-1 has a size of n·dc·q bits and the exemplary second FIFO buffer 420-2 has a size of n·dc bits.
The amount of memory required for Pj, j equal to 1 . . . m, is 2·q·m bits, where the factor of two accounts for the values pertaining to iterations k and k−1.
The amount of memory required for Sj, j equal to 1 . . . m, is 2·m bits, where the factor of two likewise accounts for the values pertaining to iterations k and k−1.
The total memory requirement is (2·m+n·dc)·(q+1) bits.
At each time cycle, the disclosed method computes dc check-to-bit node messages and dc bit-to-check node messages. Thus, it takes n cycles to process length n blocks of data per iteration with one bit node update unit 410.
When compared with standard LDPC decoding architectures, the disclosed architecture only requires (2·m+n·dc)·(q+1) bits of memory space and takes only n cycles per iteration. It is noted that a typical serial sum-product architecture requires 2·n·dc·(q+1) bits of memory space and m+n cycles per iteration.
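To make the comparison concrete, the sketch below evaluates both memory expressions and cycle counts for one arbitrary set of parameters; the parameter values are hypothetical and are not taken from the specification.

```python
# Hypothetical code parameters for illustration only.
n, m, d_c, q = 1024, 512, 3, 6

disclosed_bits    = (2 * m + n * d_c) * (q + 1)  # disclosed serial architecture
conventional_bits = 2 * n * d_c * (q + 1)        # typical serial sum-product architecture

print("disclosed   :", disclosed_bits, "bits,", n, "cycles per iteration")
print("conventional:", conventional_bits, "bits,", m + n, "cycles per iteration")
```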
Concatenated LDPC Decoder With Serial Sum-Product Architecture
LDPC codes can be used for channels impaired by intersymbol interference (ISI) and noise to improve the bit error rate.
In the concatenated system, the message from bit node i to each of its connected check nodes is simply the a-priori information value λi:
{circumflex over (Q)}i,j=λi, ∀j∈Bi. (16)
The check node computation is broken into parts in a similar manner as described above:
{circumflex over (ρ)}i=φ(|λi|) (17)
σi=sign(λi) (18)
{circumflex over (P)}j=Σl∈Cj {circumflex over (ρ)}l (19)
Ŝj=Πl∈Cj σl (20)
The sign and magnitude of the message from check node j to bit node i are given by:
|{circumflex over (R)}j,i|=φ({circumflex over (P)}j−{circumflex over (ρ)}i) (21)
sign({circumflex over (R)}j,i)=Ŝj·σi (22)
Using these {circumflex over (R)}j,i, j∈Bi, the extrinsic information value from the LDPC decoder, which is passed to the SISO detector 615, can be computed by the extrinsic information computation unit 610 as:
Λext,i=Σj∈Bi {circumflex over (R)}j,i (23)
This extrinsic information value is used by the SISO detector 615 as the a-priori information value.
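A behavioral Python sketch of this concatenated-mode computation, equations (16) and (21) through (23), is given below; it models only the arithmetic, not the serial architecture or the SISO detector 615, and the example matrix and LLR values are hypothetical.

```python
import numpy as np

def phi(x):
    return -np.log(np.tanh(x / 2.0))

def ldpc_extrinsic(H, llr):
    """One pass of the concatenated LDPC stage: the bit-to-check messages equal
    the a-priori LLRs (equation (16)), and the extrinsic LLR returned to the
    SISO detector is the sum of the check-to-bit messages (equation (23))."""
    m, n = H.shape
    rho_hat = phi(np.abs(llr))             # transformed magnitudes of the a-priori LLRs
    sigma = np.sign(llr)                   # signs of the a-priori LLRs (assumed nonzero)

    ext = np.zeros(n)
    for j in range(m):
        C_j = np.flatnonzero(H[j, :])
        P_hat = rho_hat[C_j].sum()         # sum over all bit nodes in C_j
        S_hat = sigma[C_j].prod()          # product of their signs
        for i in C_j:
            mag = phi(P_hat - rho_hat[i])  # equation (21)
            ext[i] += S_hat * sigma[i] * mag   # equations (22) and (23)
    return ext

# Hypothetical example with a 2 x 4 parity check matrix:
H = np.array([[1, 1, 1, 0], [0, 1, 1, 1]])
print(ldpc_extrinsic(H, np.array([0.9, -1.4, 0.4, 2.0])))
```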
The check-to-bit node messages for the (k−1)th iteration, namely {circumflex over (R)}j,ik−1, j∈Bi are computed in a similar manner as the procedure described above in conjunction with
Memory Requirements and Throughput
The exemplary first FIFO buffer 620-1 has a size of n·q bits and the exemplary second FIFO buffer 620-2 has a size of n bits.
The amount of memory required for Pj, j equal to 1 . . . m, is 2·q·m bits.
The amount of memory required for Sj, j equal to 1 . . . m, is 2·m bits.
The total memory requirement is (2·m+n)·(q+1) bits.
At each time cycle, the disclosed method computes dc check-to-bit node messages and the extrinsic information value. Thus, it takes n cycles to process length n blocks of data per iteration.
The disclosed architecture performs the sum-product algorithm as defined by equations (17) to (23). The disclosed method has better error-rate performance than the suboptimal method proposed by Wu and Burd, where only the minimum LLR values are considered in the sum-product algorithm. Moreover, the method proposed by Wu and Burd applies only to a concatenated LDPC decoder system. The present invention requires only (2·m+n)·(q+1) bits of memory and takes only n cycles per iteration.
Notation
The following notation has been used herein:
i is the index of a bit node;
j is the index of a check node;
k is the index of the iteration;
Qi,j is the message from bit node i to check node j;
Rj,i is the message from check node j to bit node i;
λi is the a-priori information value or a-priori log-likelihood ratio (LLR) pertaining to bit i;
Λi is the a-posteriori information value or a-posteriori LLR pertaining to bit i;
Λext,i is the extrinsic information value or extrinsic LLR pertaining to bit i;
Bi is the set of check nodes connected to bit node i;
Cj is the set of bit nodes connected to check node j;
n is the number of bit nodes;
m is the number of parity check nodes;
dr is the row weight of the parity check matrix;
dc is the column weight of the parity check matrix;
bit nodes are 1 . . . n;
check nodes are 1 . . . m.
While exemplary embodiments of the present invention have been described with respect to digital logic units, as would be apparent to one skilled in the art, various functions may be implemented in the digital domain as processing steps in a software program, in hardware by circuit elements or state machines, or in combination of both software and hardware. Such software may be employed in, for example, a digital signal processor, micro-controller, or general-purpose computer. Such hardware and software may be embodied within circuits implemented within an integrated circuit.
Thus, the functions of the present invention can be embodied in the form of methods and apparatuses for practicing those methods. One or more aspects of the present invention can be embodied in the form of program code, for example, whether stored in a storage medium, loaded into and/or executed by a machine, or transmitted over some transmission medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention. When implemented on a general-purpose processor, the program code segments combine with the processor to provide a device that operates analogously to specific logic circuits.
As is known in the art, the methods and apparatus discussed herein may be distributed as an article of manufacture that itself comprises a computer readable medium having computer readable code means embodied thereon. The computer readable program code means is operable, in conjunction with a computer system, to carry out all or some of the steps to perform the methods or create the apparatuses discussed herein. The computer readable medium may be a recordable medium (e.g., floppy disks, hard drives, compact disks, memory cards, semiconductor devices, chips, application specific integrated circuits (ASICs)) or may be a transmission medium (e.g., a network comprising fiber-optics, the world-wide web, cables, or a wireless channel using time-division multiple access, code-division multiple access, or other radio-frequency channel). Any medium known or developed that can store information suitable for use with a computer system may be used. The computer-readable code means is any mechanism for allowing a computer to read instructions and data, such as magnetic variations on a magnetic media or height variations on the surface of a compact disk.
The memories and buffers could be distributed or local and the processors could be distributed or singular. The memories and buffers could be implemented as an electrical, magnetic or optical memory, or any combination of these or other types of storage devices. Moreover, the terms “memory,” “buffer” and “FIFO buffer” should be construed broadly enough to encompass any information able to be read from or written to a medium. With this definition, information on a network is still within a memory because the associated processor can retrieve the information from the network.
It is to be understood that the embodiments and variations shown and described herein are merely illustrative of the principles of this invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention.