This invention relates generally to low-density parity-check (LDPC) decoders, and more specifically to a method and apparatus for an LDPC decoder.
LDPC codes are linear block codes. The codeword space and the encoding procedure of LDPC codes are specified by a generator matrix G, given by:
x=uG
where G is a K×N matrix with full row rank, u is a 1×K vector representing the information bits, and x is a 1×N vector representing the codeword. Usually, the generator matrix can be written as follows:
G=[I_{K×K}  P_{K×(N−K)}]
Alternatively, a linear block code can be equivalently specified by a parity-check matrix H, given by
Hx^T=0
for any codeword x, where H is an M×N matrix and M=(N−K). Because Hx^T=0 for every codeword implies HG^T=0, if a parity-check matrix H is known, so is the generator matrix G, and vice-versa. Matrix G generally describes the encoder, while H is usually used in the decoder to check whether a given binary vector x is a valid codeword.
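As an illustrative aside (not part of the original specification), the following minimal sketch constructs a toy systematic pair G = [I | P] and H = [P^T | I] over GF(2) and verifies that H·G^T = 0, so every codeword x = uG satisfies the parity checks. The matrix P and the code size are arbitrary example values.

```python
import numpy as np

# Toy (N=7, K=4) systematic code: G = [I | P], H = [P^T | I].
# P is chosen arbitrarily for illustration; any binary K x (N-K) matrix works.
K, N = 4, 7
P = np.array([[1, 1, 0],
              [0, 1, 1],
              [1, 1, 1],
              [1, 0, 1]], dtype=int)

G = np.hstack([np.eye(K, dtype=int), P])          # K x N generator matrix
H = np.hstack([P.T, np.eye(N - K, dtype=int)])    # M x N parity-check matrix, M = N - K

# H * G^T = 0 (mod 2), so every codeword x = u*G satisfies H*x^T = 0.
assert not (H @ G.T % 2).any()

u = np.array([1, 0, 1, 1])          # example information bits
x = u @ G % 2                       # encode
assert not (H @ x.T % 2).any()      # parity check passes for a valid codeword
print(x)
```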
The parity-check matrix H for an LDPC code is sparse, which means that a small portion of the entries are ones while the others are zeros, and the positions of the ones are determined in a random fashion. These randomly selected positions of the ones are critical to the performance of the associated LDPC code, in a manner analogous to the interleaver of turbo codes.
An LDPC code can be represented by a “bipartite” or Tanner graph in which the nodes are separated into two groups of check nodes and bit nodes, with connections allowed only between nodes in differing groups. For example, an LDPC code can be specified by a parity-check matrix, which defines a set of parity-check equations for codeword x as follows:
For a binary LDPC code, all multiplications and additions are modulo-2 (binary) operations. Consequently, the LDPC code, or more specifically the parity-check equations, can be represented by the Tanner graph of
An LDPC encoder with a code rate of K/N can be implemented as illustrated in
The LDPC decoder is based on iterative message passing, or “turbo-like” belief propagation. The sum-product algorithm is a well-known method for LDPC decoding and can be implemented in the logarithm domain (see the method depicted in
Roughly speaking, r_{m→b}^0 (or r_{m→b}^1) is the likelihood information for x_b = 0 (or x_b = 1) from the mth parity-check equation, when the probabilities for the other bits are designated by the q_{b→m} messages. Therefore, r_{m→b}^0 can be considered the “extrinsic” information for the bth bit from the mth check node. The soft decision, or log-likelihood ratio, of a bit is calculated by adding the a priori probability information to the extrinsic information from all check nodes that connect to it.
In the logarithm domain, all probability information is equivalently characterized by log-likelihood ratios (LLRs), for example
LLR(q_b) = log(q_b^0 / q_b^1) and LLR(p_b) = log(p_b^0 / p_b^1),
where q_b^0 (or q_b^1) is the a posteriori probability of x_b = 0 (or x_b = 1) and p_b^0 (or p_b^1) is the a priori probability of x_b = 0 (or x_b = 1) of received information from a channel. The LDPC decoding procedure described above is summarized in the flowchart in
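For illustration only, the following sketch outlines one possible iterative message-passing loop in the log domain. It uses the min-sum approximation of the sum-product check-node rule for brevity (the Φ-based form is discussed later), and all function and variable names are illustrative assumptions rather than elements of the specification.

```python
import numpy as np

def decode_ldpc_minsum(H, llr_in, max_iter=30):
    """One possible log-domain LDPC decoding loop (min-sum approximation of
    the sum-product rule).  Positive LLR favors bit = 0.
    H      : (M, N) binary parity-check matrix (every check degree >= 2)
    llr_in : length-N array of channel LLRs
    Returns (hard_bits, parity_checks_satisfied)."""
    M, N = H.shape
    rows, cols = np.nonzero(H)                 # one entry per Tanner-graph edge
    q = llr_in[cols].astype(float)             # bit-to-check messages
    r = np.zeros_like(q)                       # check-to-bit messages
    hard = np.zeros(N, dtype=int)

    for _ in range(max_iter):
        # Check-node update: sign product and minimum magnitude of the OTHER edges.
        for m in range(M):
            e = np.where(rows == m)[0]
            for i in e:
                others = e[e != i]
                r[i] = np.prod(np.sign(q[others])) * np.min(np.abs(q[others]))
        # Bit-node update: total LLR (soft decision) and extrinsic feedback.
        soft = llr_in.astype(float)
        np.add.at(soft, cols, r)               # a priori + all extrinsic terms
        q = soft[cols] - r                     # exclude each edge's own message
        hard = (soft < 0).astype(int)          # LLR < 0  ->  bit = 1
        if not (H @ hard % 2).any():           # all parity checks satisfied
            return hard, True
    return hard, False
```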
In the case of high-order QAM modulation, each QAM symbol carries multiple code bits, while the input to the LDPC decoder is a sequence of LLRs, one per bit. Therefore, the received QAM soft symbols must be converted into LLRs for each bit. Assume the received QAM soft symbol is represented as r = r_I + j·r_Q = s + n, where s = s_I + j·s_Q is its associated QAM hard symbol and n is complex noise with variance 2σ². The LLR for bit k can be approximated by using a dual-max method as follows:
LLR(b_k) ≈ K·[ max over (s_I, s_Q) ∈ S_1 of (2r_I·s_I − s_I² + 2r_Q·s_Q − s_Q²) − max over (s_I, s_Q) ∈ S_−1 of (2r_I·s_I − s_I² + 2r_Q·s_Q − s_Q²) ]   (1)
where K is the LLR scalar that depends on the noise variance, and S_1 and S_−1 are the sets of (s_I, s_Q) corresponding to b_k = 1 and b_k = −1, respectively. In the present case, b_k = 1 and b_k = −1 are equivalent to the two binary values of x_b described earlier. For the 16QAM example considered below, the bit-to-symbol mapping is:
s_I = −3, −1, 1, 3 for (b_k, b_{k+1}) = (−1, −1), (−1, 1), (1, −1), (1, 1)
s_Q = −3, −1, 1, 3 for (b_{k+2}, b_{k+3}) = (−1, −1), (−1, 1), (1, −1), (1, 1)
The log-likelihood function of b_k = 1, LL(b_k = 1), is approximately the largest quantity among the eight values of {2r_I·s_I − s_I² + 2r_Q·s_Q − s_Q²} evaluated at the eight symbols with s_I > 0. Similarly, the log-likelihood function of b_k = −1 is approximately the largest of the eight quantities {2r_I·s_I − s_I² + 2r_Q·s_Q − s_Q²} evaluated at the eight symbols with s_I ≤ 0.
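For illustration only, a minimal sketch of this dual-max conversion for one 16QAM soft symbol is shown below, assuming the mapping above and an LLR scalar K = 1/(2σ²) (the specification states only that K depends on the noise variance). The names LEVELS, CONSTELLATION and dual_max_llrs are illustrative, not taken from the specification.

```python
import numpy as np
from itertools import product

# 16QAM mapping from the example: s_I and s_Q each take values in {-3,-1,1,3},
# carrying (b_k, b_k+1) on the I axis and (b_k+2, b_k+3) on the Q axis.
LEVELS = {(-1, -1): -3, (-1, 1): -1, (1, -1): 1, (1, 1): 3}
CONSTELLATION = [(bits, (LEVELS[bits[:2]], LEVELS[bits[2:]]))
                 for bits in product([-1, 1], repeat=4)]

def dual_max_llrs(r_i, r_q, sigma2):
    """Dual-max LLR approximation for the 4 bits of one 16QAM soft symbol.
    sigma2 is the per-dimension noise variance (total complex variance 2*sigma2).
    The scalar K is assumed to be 1/(2*sigma2); LLR > 0 favors b_k = 1."""
    k_scale = 1.0 / (2.0 * sigma2)
    metric = {bits: 2*r_i*s_i - s_i**2 + 2*r_q*s_q - s_q**2
              for bits, (s_i, s_q) in CONSTELLATION}
    llrs = []
    for pos in range(4):
        ll_plus = max(m for bits, m in metric.items() if bits[pos] == 1)
        ll_minus = max(m for bits, m in metric.items() if bits[pos] == -1)
        llrs.append(k_scale * (ll_plus - ll_minus))
    return llrs

print(dual_max_llrs(r_i=2.3, r_q=-0.4, sigma2=0.5))
```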
The foregoing description of LDPC codes can be applied to forward error correction (FEC) in many wireless air interfaces, such as WiMax (IEEE 802.16e), advanced WiFi (IEEE 802.11n) and Mobile Broadband Wireless Access (IEEE 802.20). Typically, such air interfaces utilize Orthogonal Frequency Division Multiplexing (OFDM), where each tone carries QPSK, 16QAM or 64QAM symbols. During demodulation, the soft QAM symbols are converted into LLRs, which feed the LDPC decoder described above. The dual-max method described above, however, only approximates the LLR value of each bit, and this approximation can therefore lead to performance degradation.
A need therefore arises for a method and apparatus that improves LDPC decoding.
Embodiments in accordance with the invention provide a system and method for an LDPC decoder.
In a first embodiment of the present invention, a low-density parity-check (LDPC) decoder has a memory, and a processor. The processor is programmed to initialize the LDPC decoder, calculate a probability for each check node, calculate a probability for each bit node, calculate soft decisions, update the bit nodes according to the calculated soft decisions, calculate values from the calculated soft decisions, perform a parity check on the calculated values, update log-likelihood ratios (LLRs) if a bit error is detected in the calculated values, update the bit nodes according to the updated LLRs, and repeat the foregoing post initialization steps.
In a second embodiment of the present invention, a computer-readable storage medium has computer instructions for initializing a plurality of bit nodes with log-likelihood ratios (LLRs), initializing a plurality of check nodes to a predetermined setting, associating each bit node to one or more corresponding check nodes, associating each check node to one or more corresponding bit nodes, calculating a probability for each check node, calculating a probability for each bit node, calculating soft decisions, updating the bit nodes according to the calculated soft decisions, calculating values according to a sign of the calculated soft decisions, performing a parity check on the calculated values, updating the LLRs according to initial and intermediate LLRs adjusted by first and second factors if a bit error is detected in the calculated values, updating the bit nodes according to the updated LLRs, and repeating the foregoing post initialization steps.
In a third embodiment of the present invention, a base station has a transceiver, a memory, and a processor. The processor is programmed to intercept messages from a selective call radio, and decode said messages by initializing a plurality of bit nodes with log-likelihood ratios (LLRs), initializing a plurality of check nodes to a predetermined setting, associating each bit node to one or more corresponding check nodes, associating each check node to one or more corresponding bit nodes, calculating a probability for each check node, calculating a probability for each bit node, calculating soft decisions according to corresponding check nodes and previous soft decisions of the bit nodes, updating the bit nodes according to the calculated soft decisions, calculating values according to a sign of the calculated soft decisions, performing a parity check on the calculated values, updating the LLRs if a bit error is detected in the calculated values, updating the bit nodes according to the updated LLRs, and repeating the foregoing post initialization steps.
The conventional dual-max method of equation (1) in the aforementioned prior art approximates an LLR bit by calculating all possible likelihoods and selecting the largest one. However, if additional information is available about which constellation points should be used to determine an LLR bit, an approximation is not necessary.
If additional information about bits 2, 3 and 4 is available, say b2 = b3 = b4 = −1, then only two constellation points, (1, −1, −1, −1), colored gray in FIG. 5 as point 108, and (−1, −1, −1, −1), shown as uncolored point 110, should be used for the LLR calculation of the first bit b1. That is, the LLR of bit b1 is the difference between the distances from point 102 to point 108 (i.e., (1, −1, −1, −1)) and from point 102 to point 110 (i.e., (−1, −1, −1, −1)). This calculation yields the true LLR of bit b1 without approximation.
Unfortunately, the additional information about bits 2, 3 and 4 is generally not available before the information bits are decoded in a conventional decoder. However, in an LDPC decoder, intermediate results can be used to update the decoder input so that the input approaches the true LLR for each bit. As described earlier, an LDPC decoder calculates an LLR, or soft decision, for each bit iteratively. The sign of the soft decision determines the value of the associated bit (1 or −1), while the magnitude of the soft decision indicates the confidence in the decoded bit: the larger the soft-decision magnitude, the higher the confidence in the decoded bit.
During the decoding iterations, an intermediate hard bit decision can be determined from the soft decision according to the following relationship: the intermediate hard decision for bit i is 1 if the soft decision b̃_i exceeds M, is −1 if b̃_i is less than −M, and is 0 otherwise,
where M is a threshold for the hard bit decision that can be adaptively determined as a scaled average magnitude of the intermediate soft decisions. From this relationship, it is apparent that the intermediate bit sequence is ternary rather than binary valued. A value of 0 indicates that the hard decision for the associated bit is not available due to an insufficient confidence level. Based on the intermediate ternary bit sequence, the LLR bits can be updated. For example, when determining the LLR of bit 3, knowing that the intermediate hard decisions for bits 1, 2 and 4 are 1, 0 and −1, respectively, four constellation points 130-136 can be used for the LLR calculation as illustrated in
That is, the distances from received soft symbol 102 to points 130 and 132 (i.e., (1, −1, 1, −1) and (1, 1, 1, −1)) can be calculated to determine the minimum distance, which in this illustration is the distance between point 102 and point 132, i.e., (1, 1, 1, −1). Similarly, the distances from received soft symbol 102 to points 134 and 136 (i.e., (1, −1, −1, −1) and (1, 1, −1, −1)) can be computed and the minimum distance determined, which in this illustration is the distance between point 102 and point 136, i.e., (1, 1, −1, −1). The LLR of bit 3 is the difference between the two minimum distances calculated. For every non-zero hard decision in the group of bits associated with one QAM symbol, the number of constellation points used for calculating an LLR bit is scaled down by a factor of 2. Thus, the size of the set over which a distance minimization is calculated to update a portion of the LLR bits can be reduced by a factor of 2^N if N of the ternary values are non-zero. If all of the ternary values are non-zero, a portion of the LLRs can be updated by subtraction without distance minimization. Alternatively, if all of the ternary values are zero, the full-size set over which a distance minimization is calculated is used to update a portion of the LLRs.
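Continuing the dual-max sketch above (and reusing its hypothetical CONSTELLATION table), a minimal sketch of the constrained LLR update is shown below. Maximizing the metric 2r_I·s_I − s_I² + 2r_Q·s_Q − s_Q² over a set is equivalent to choosing the minimum-distance point in that set, so the code mirrors the distance argument in the text; the function name and arguments are illustrative assumptions.

```python
def updated_llr(r_i, r_q, sigma2, pos, ternary):
    """Recompute the LLR of bit `pos` using intermediate ternary decisions
    (`ternary[j]` in {-1, 0, +1}; 0 means no confident decision for bit j).
    Only constellation points consistent with the non-zero decisions are kept,
    halving the candidate set for every confident bit."""
    k_scale = 1.0 / (2.0 * sigma2)

    def consistent(bits):
        # Keep points agreeing with every confident decision (other than bit `pos`).
        return all(t == 0 or bits[j] == t
                   for j, t in enumerate(ternary) if j != pos)

    metric = {bits: 2*r_i*s_i - s_i**2 + 2*r_q*s_q - s_q**2
              for bits, (s_i, s_q) in CONSTELLATION if consistent(bits)}
    ll_plus = max(m for bits, m in metric.items() if bits[pos] == 1)
    ll_minus = max(m for bits, m in metric.items() if bits[pos] == -1)
    return k_scale * (ll_plus - ll_minus)

# Example from the text: updating bit 3 (index 2) when the intermediate hard
# decisions for bits 1, 2 and 4 are 1, 0 and -1 leaves four candidate points.
print(updated_llr(r_i=2.3, r_q=-0.4, sigma2=0.5, pos=2, ternary=[1, 0, 0, -1]))
```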
The conventional dual-max method is a special case in which all hard bit decisions are zero. In this case, the initial input to the LDPC decoder is determined by the dual-max method. After a few iterations, when intermediate hard bit decisions are available, the input to the LDPC decoder can be updated or fine-tuned.
It is also possible that an intermediate hard decision is incorrect, even though the threshold M has been introduced to reduce the probability of error. Thus, the updated LLR bit can be determined as a combination of an initial LLR and a current LLR given by:
LLR_updated = α × LLR_initial + (1 − α) × LLR_intermediate
where LLR_initial and LLR_intermediate are determined by the dual-max techniques described by the present invention, and where α is a coefficient between 0 and 1 that depends on the number of iterations and on the average magnitude of the intermediate soft decisions.
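A brief sketch of this blending step follows. The specification states only that α lies between 0 and 1 and depends on the iteration count and the average soft-decision magnitude, so the particular schedule for α below is purely an illustrative assumption.

```python
def combine_llrs(llr_initial, llr_intermediate, iteration, avg_soft_mag,
                 alpha_min=0.2, alpha_max=0.9):
    """Blend the initial and intermediate LLRs as in the equation above.
    The schedule for alpha is illustrative only: more iterations and larger
    average soft-decision magnitudes are taken to mean more trust in the
    feedback-based (intermediate) LLR, i.e. a smaller alpha."""
    confidence = min(1.0, avg_soft_mag / 10.0) * min(1.0, iteration / 10.0)
    alpha = alpha_max - (alpha_max - alpha_min) * confidence
    return alpha * llr_initial + (1.0 - alpha) * llr_intermediate
```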
In step 208, soft and corresponding hard decisions are made on each bit node according to the formulas shown. In step 210, a parity check is performed on the bit values determined in step 208. If no error is detected, then the decoder ceases operation in step 212 and supplies the decoded bits to a targeted device (as will be described later in
It should be noted that if multiplication operations cost more than additions, the belief message from check nodes to bit nodes can be determined as
r_{m→b} = [ ∏ sign(q_{b′→m}) ] · Φ( Σ Φ(|q_{b′→m}|) ),
where the product and the sum are taken over all bit nodes b′ other than b that connect to check node m, and where the function Φ(x) is defined as
Φ(x) = −log(tanh(x/2)) = log((e^x + 1)/(e^x − 1))
for x > 0, which can be evaluated by a table look-up method.
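A minimal sketch of this Φ-based check-node update with a table look-up is shown below, assuming the standard definition Φ(x) = −log(tanh(x/2)). The table resolution, function names and message conventions are illustrative assumptions, not taken from the specification.

```python
import numpy as np

# Phi(x) = -log(tanh(x/2)) = log((e^x + 1)/(e^x - 1)) for x > 0.  Phi is its
# own inverse, which is what turns the check-node product into a sum.
STEP = 1.0 / 64.0                                      # table resolution (illustrative)
TABLE = -np.log(np.tanh(np.arange(1, 2048) * STEP / 2.0))

def phi(x):
    """Table look-up approximation of Phi(|x|)."""
    idx = np.clip((np.abs(x) / STEP).astype(int), 1, len(TABLE)) - 1
    return TABLE[idx]

def check_to_bit(q):
    """Check-to-bit messages r for one parity check, given the incoming
    bit-to-check messages q (assumed non-zero) of that check."""
    q = np.asarray(q, dtype=float)
    signs = np.sign(q)
    total_sign = np.prod(signs)
    total_phi = phi(q).sum()
    # |r_i| = Phi( sum of Phi(|q_j|) over j != i ); sign is the product of the other signs.
    return (total_sign * signs) * phi(total_phi - phi(q))

print(check_to_bit([1.8, -0.6, 2.3]))
```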
It should be noted that the value of the threshold M can affect decoder performance. If M is too small, extra error propagation can be introduced during the LLR update based on the decoder feedback. On the other hand, if M is too large, the benefit of the LLR update in step 218 is limited. To achieve optimum performance, M can be adapted during the iterative decoding procedure. A proposed method for determining M is based on the average magnitude of the LDPC decoder soft output; in general, the larger the average soft-decision magnitude, the lower the bit error rate (BER) will be:
M = β · (1/N) · Σ_{i=1..N} |b̃_i|,
where b̃_i is the ith soft bit and N is the number of coded bits per LDPC decoder code word. β ∈ (0, 1) is a parameter that controls the usage of the feedback information provided to the LDPC decoder.
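A minimal sketch of this adaptive threshold and the resulting ternary decisions is shown below, assuming M is the β-scaled average soft-decision magnitude as described above; the function names, the β value and the example soft decisions are illustrative only.

```python
import numpy as np

def adaptive_threshold(soft_bits, beta=0.5):
    """M = beta * average |soft decision|, with beta in (0, 1) controlling how
    aggressively the decoder feedback is used (the beta value is illustrative)."""
    return beta * np.mean(np.abs(soft_bits))

def ternary_decisions(soft_bits, threshold):
    """Map soft decisions to {-1, 0, +1}: 0 when the magnitude is below M."""
    soft_bits = np.asarray(soft_bits, dtype=float)
    out = np.sign(soft_bits).astype(int)
    out[np.abs(soft_bits) < threshold] = 0
    return out

soft = np.array([3.2, -0.4, -5.1, 0.9])     # example intermediate soft decisions
M = adaptive_threshold(soft, beta=0.5)
print(M, ternary_decisions(soft, M))
```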
For illustration purposes, simulations were performed using 16QAM and a rate-4/5 LDPC code to compare the BER of a prior-art LDPC decoder (herein referred to as the old LDPC decoder) with the BER of an LDPC decoder operating according to method 200 (herein referred to as the new LDPC decoder). The results of the simulation are shown in the plot in
It is well known in the art that the performance of an LDPC decoder depends on the maximum number of iterations. The more iterations, the better the expected performance.
It should be noted that when the maximum number of iterations goes from 30 to 60, the increase does not double the decoding complexity. For example, as shown in
It would be apparent to an artisan with ordinary skill in the art that the present invention can be used in many applications. For instance, the present invention can be applied to a base station 300 as shown in
The processor 306 can utilize a combination of computing devices, such as a microprocessor and/or digital signal processor (DSP), or an ASIC (Application Specific Integrated Circuit) designed to perform the operations of the present invention. The memory 308 can utilize any conventional storage media, such as RAM, SRAM, Flash, and/or conventional hard disk drives. The power supply 310 can be sourced from a utility company and/or can be a battery-powered uninterruptible power source for supplying power to the components of the base station 300. In this embodiment, the functions of the new LDPC decoder described by way of example as method 200 of
It should be evident to an artisan with skill in the art that portions of embodiments of the present invention can be embedded in a computer program product, which comprises features enabling the implementation stated above. A computer program in the present context means any expression, in any language, code or notation, of a set of instructions intended to cause a system having an information processing capability to perform a particular function either directly or after either or both of the following: a) conversion to another language, code or notation; b) reproduction in a different material form.
It should also be evident that the present invention can be realized in hardware, software, or combinations thereof. Additionally, the present invention can be embedded in a computer program, which comprises all the features enabling the implementation of the methods described herein, and which enables said devices to carry out these methods. Such a computer program can also be implemented in hardware as a state machine without the conventional machine code typically used by CISC (Complex Instruction Set Computer) and RISC (Reduced Instruction Set Computer) processors.
The present invention may also be used in many arrangements. Thus, although the description is made for particular arrangements and methods, the intent and concept of the invention is suitable and applicable to other arrangements and applications not described herein. The embodiments of method 200 therefore can be modified in numerous ways, with additions thereto, without departing from the spirit and scope of the invention.
Accordingly, the described embodiments ought to be construed to be merely illustrative of some of the more prominent features and applications of the invention. It should also be understood that the claims are intended to cover the structures described herein as performing the recited function and not only structural equivalents. Therefore, equivalent structures that read on the description are to be construed to be inclusive of the scope of the invention as defined in the following claims. Thus, reference should be made to the following claims, rather than to the foregoing specification, as indicating the scope of the invention.