This invention relates to decoding digital communications and, more particularly, to a method of generating signals corresponding to a first output table of corrected values of symbols of a word transmitted according to an LDPC code and a second output table representing the corresponding probability values, and to a LIST decoder that implements it.
Low Density Parity Check (LDPC) coding is an Error Correction Code (ECC) technique that is being increasingly regarded as a valid alternative to Turbo Codes. LDPC codes have been incorporated into the specifications of several real systems, and the LDPC decoder may turn out to constitute a significant portion of the corresponding digital transceiver.
Non-Binary Low Density Parity Check (LDPC) codes are defined by a sparse parity check matrix H whose nonzero entries belong to GF(2^p). Every valid codeword c = [c1, c2, . . . , cN] satisfies the parity check equations

Σ i∈VC(j) Hji·ci = 0 ∀j,

where both the codeword symbols ci and the parity check coefficients Hji belong to GF(2^p) and VC(j) is the set of variables (symbols ci) involved in the j-th check.
The transmitter transmits a valid codeword c. As contemplated in LDPC coding, the product of the check matrix H and any valid codeword is null; thus the receiver may determine which codeword has been received by implementing an appropriate decoding technique based on properties of LDPC codes, starting from the a posteriori probabilities of the codeword symbols.
More specifically, LDPC codes are decoded by means of a belief propagation algorithm, described in the following paragraphs. The symbols are characterized by their PMF (Probability Mass Function). In the preferred embodiment, PMFs are represented in the log-domain:
Λi,q = −log(P(Xi = φq)), φq ∈ GF(2^p)
Using the vector representation

Λi = [Λi,0, Λi,1, . . . , Λi,q−1]
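As a point of reference, the log-domain PMF above can be sketched as follows; the field order and the probability values are illustrative assumptions, not values taken from the document.

```python
import numpy as np

# Minimal sketch of the log-domain PMF Lambda_i,q = -log(P(X_i = phi_q)).
# Field order p and the probability values are illustrative assumptions.
p = 3
q = 2 ** p                          # size of GF(2^p)
probs = np.full(q, 0.05)
probs[2] = 1.0 - 0.05 * (q - 1)     # make phi_2 the most likely element

Lambda = -np.log(probs)             # log-domain metrics: smaller = more likely
Lambda -= Lambda.min()              # normalize so the best metric is 0

best = int(np.argmin(Lambda))       # index of the most likely field element
```

Note that with this sign convention the most likely candidate has the smallest metric, which is why the selection steps described later look for minima.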
The full-complexity algorithm to be used as a reference is the log-domain symbol-based belief propagation, and is calculated as
where Qij, Rji represent the variable to check and the check to variable messages (vectors each including a single PMF relative to the i-th symbol). In a preferred embodiment the belief propagation is implemented with the layered schedule (Dale E. Hocevar, “A reduced complexity decoder architecture via layered decoding of LDPC Codes”, IEEE Workshop on Signal Processing Systems (SIPS), October 2004, pp. 107-112).
The check node processing (CNP) is performed by the SPC (Single Parity Check) function. This is the most computationally intensive part of the decoding algorithm. It is worth reviewing the approaches proposed so far for the decoding of non-binary LDPC codes.
M. C. Davey and D. MacKay, “Low-density parity-check codes over GF(q)”, IEEE Commun. Lett., vol. 2, no. 6, pp. 165-167, who first introduced the use of non-binary LDPC codes, proposed a very general implementation of the belief propagation (also called the Sum Product Algorithm). It works in the probability domain. It was soon noted that this approach has stability problems when probability values are represented with a finite number of digits. Moreover, its complexity increases as the square of the size 2^p of the Galois field.
Henk Wymeersch, Heidi Steendam and Marc Moeneclaey, “Log-domain decoding of LDPC codes over GF(q)”, Proc. IEEE International Conference on Communications, proposed the Sum Product Algorithm in the logarithm domain. Its complexity also increases as the square of the size 2^p of the Galois field. The log domain is the preferred embodiment also for the present invention.
H. Song and J. R. Cruz, “Reduced-complexity decoding of q-ary LDPC codes for magnetic recording”, IEEE Trans. Magn., vol. 39, no. 2, pp. 1081, introduced the SPC processing with the forward-backward approach described below. Moreover, they proposed Q-ary LDPC decoding using the Fast Fourier Transform (FFT), both in the probability and in the logarithm domain. Unfortunately, the probability domain gives instability problems when a finite representation is used, and the FFT in the logarithm domain involves doubling the quantities used; the advantage of the FFT approach is therefore lost.
A. Voicila, D. Declercq, F. Verdier, M. Fossorier, P. Urard, “Low-complexity, low-memory EMS algorithm for non-binary LDPC codes”, IEEE International Conference on Communications, 2007, ICC '07, proposed the so-called Extended Min Sum algorithm: this very generic approach introduces the concept of reducing the candidates at the input of the SPC, but does not provide a computationally simple and effective way to perform the CNP.
Therefore, there is a need for a simpler and faster algorithm usable for decoding non-binary LDPC codes, especially in the most computationally intensive part, the SPC.
The exact formulation of the SPC is the following:
The straightforward approach to solving the SPC is based on a forward-backward recursion over a fully connected trellis with q = 2^p states (see, for example, H. Song and J. R. Cruz, “Reduced-complexity decoding of q-ary LDPC codes for magnetic recording”, IEEE Trans. Magn., vol. 39, no. 2, pp. 1081, where the same approach is introduced in the probability domain).
States correspond to the symbols Xi. The branches are given by Qijp and the state connections are determined by the parity check coefficients.
The forward recursion for a generic parity check equation with coefficients Ht, where the input PMFs are Qtp = −log(Pin(Xt = φp)), is given by:
Backward recursion is defined analogously. The combining step is given by
The straightforward approximation available for the Check Node Processing is the substitution of the max* operator with the max. In order to compensate for the well-known overestimation introduced by this approximation, a proper scaling factor is applied at the SPC output, so that the recursion becomes:
and the combining is
where, in the context of magnetic recording, the scaling factor γ is about 0.75.
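In the negative-log domain used here (Λ = −log P), the exact combining operator (written max* above) and its plain-min approximation can be sketched as follows; the operand values are illustrative and the γ value is the one quoted above for magnetic recording.

```python
import math

# Exact Jacobian combining in the negative-log domain:
#   minstar(a, b) = -log(e^-a + e^-b) = min(a, b) - log(1 + e^-|a - b|)
# Replacing it with plain min() overestimates the metric; the scaling factor
# gamma applied at the SPC output compensates this systematic bias.
def min_star(a: float, b: float) -> float:
    return min(a, b) - math.log1p(math.exp(-abs(a - b)))

def min_approx(a: float, b: float) -> float:
    return min(a, b)

GAMMA = 0.75  # scaling factor quoted for magnetic recording

a, b = 1.2, 0.9
assert min_approx(a, b) >= min_star(a, b)   # plain min overestimates the metric
```

The correction term `log1p(exp(-|a-b|))` is bounded by log 2, which is why a single multiplicative factor compensates the approximation reasonably well in practice.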
This algorithm is the natural extension of the normalized min-sum belief propagation in the binary field (Zarkeshvari, F.; Banihashemi, A. H.; “On implementation of min-sum algorithm for decoding low-density parity-check (LDPC) codes”, Global Telecommunications Conference, 2002. GLOBECOM '02. IEEE Volume 2, 17-21 November 2002 Page(s): 1349-1353.).
In order to further simplify the decoding algorithm, both the input and the SPC processing can be restricted to a subset of symbol candidates (A. Voicila, D. Declercq, F. Verdier, M. Fossorier, P. Urard, “Low-complexity, low-memory EMS algorithm for non-binary LDPC codes”, IEEE International Conference on Communications, 2007, ICC '07).
An algorithm useful for decoding non-binary LDPC codes in digital communications, a related decoder, and software code adapted to implement it have been devised.
This algorithm may be repeated for each check node related to a parity check equation of an LDPC code, for generating signals representing a first output table
According to the method, the values of the output tables are obtained by processing the components of a first input table
The above algorithm may be implemented via software, stored in a computer readable medium, or in a hardware LIST decoder, and executed by a computer or processor. LIST decoders internally generate the input tables
The present invention relies upon the following approach, which results in minimal performance losses. The approach includes first sorting the symbol candidates according to the corresponding values in the PMF, so that the first candidate is the most likely choice, and then preserving in input the PMFs of the first n symbol candidates with their exact probabilities while setting the probabilities of the remaining 2^p−n candidates equal to the value of the n-th candidate. The approach then includes computing as output of the SPC the first n symbol candidates with their exact probabilities and setting the remaining 2^p−n equal to the value of the n-th candidate.
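A minimal sketch of this candidate-list reduction, assuming illustrative metric values over GF(2^3): the n best metrics are kept exactly and every remaining metric is clamped to the n-th best value.

```python
import numpy as np

# Keep the n most likely candidates of a log-domain PMF exactly and set the
# remaining 2^p - n metrics equal to the n-th best one (larger metric = less
# likely). The metric values below are illustrative assumptions.
def truncate_pmf(metrics: np.ndarray, n: int) -> np.ndarray:
    clamp = np.sort(metrics)[n - 1]       # metric of the n-th best candidate
    return np.minimum(metrics, clamp)     # leaves the n best untouched

Lam = np.array([5.0, 0.0, 2.0, 7.0, 1.0, 9.0, 3.0, 6.0])  # one GF(2^3) PMF
Lam5 = truncate_pmf(Lam, 5)
# the five best metrics {0, 1, 2, 3, 5} survive; 6, 7 and 9 collapse to 5
```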
In order to explain how the SPC works in the present invention, it is beneficial to introduce a new description of Qij. Let us consider the generic check i and form a matrix QWi with the PMFs associated to the symbols belonging to the check i:

QWi = [Qi1 Qi2 . . . Qij . . . Qidc]
Then sort every column of QWi and produce two matrices
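The column-wise sorting just mentioned can be sketched as follows, producing one matrix of sorted magnitudes and one matrix of the corresponding field elements; the names mirror the Minput/Sinput tables used below, and the numerical values are illustrative assumptions.

```python
import numpy as np

# Sort every column of QW_i (one log-domain PMF per symbol of the check).
# S[r, j] is the r-th most likely field element of symbol j and M[r, j] its
# metric, mirroring the S_input / M_input tables. Values are illustrative.
QW = np.array([[3.0, 0.0, 2.0],
               [0.0, 4.0, 1.0],
               [5.0, 2.0, 0.0],
               [1.0, 6.0, 4.0]])        # rows: field elements, columns: symbols

S = np.argsort(QW, axis=0)              # permutation: best candidate first
M = np.take_along_axis(QW, S, axis=0)   # sorted magnitudes per column
```

After this step M(1, j) is zero for every column when the PMFs are normalized, which is the form the check-node processing below assumes.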
The objective of the SPC LIST decoder incorporated in the LDPC decoder is to generate for each set of symbols
This invention is more specifically related to the way the check-node processing (CNP) step is performed. In the context of the CNP, which solves the problem of delivering new symbol estimates under the constraint of a single parity check, it has been found that, considering the set of symbols belonging to a specific SPC of the non-binary LDPC code, starting from a list of the possible symbol values over a Galois Field
The operations further include determining the probability values of the third row of the second output table
The probability values of the fourth and successive rows of the second output table
In order to better understand how the method works, reference is made to a practical exemplary case of a code in GF(23) with a single check equation (thus with a single check-node), though the method may be applied mutatis mutandis to codes belonging to a Galois field of any order and for any number of check equations.
Moreover, the method will be illustrated referring to a particular case in which all symbols of the words to be received are involved in the single check node. This is not a limitation, and the method remains applicable also if only some of the symbols of a received word are involved in a check equation. In this case, what will be stated shall refer to the symbols involved in the considered check equation, the other symbols being ignored as if they were not present.
Let us suppose to use a LDPC defined by a generic check matrix
The matrix
The generic value Minput(i, j) of the matrix
According to LDPC decoding, the value Sinput(1, j) is considered and in each (generic k-th) check node the following check value is calculated in the Galois field:
According to the method, for each check node k, a corrected value Soutput(1, j) of maximum probability of correctness for each symbol is calculated according to the following equation:
All operations are executed in the Galois field GF(2^3) (in the general case, in the Galois field GF(2^N)). The above equation for calculating Soutput(1, j) is to be applied to the symbols that are involved in the check calculations performed at the k-th check node (in this case, the values H(k, j) at the denominator differ from 0).
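Since the equations themselves are not reproduced above, the following sketch shows one consistent reading of this step: with GF(2^3) arithmetic built on the primitive polynomial x^3 + x + 1 (an assumption; the document does not state its polynomial), the check value s_k is the GF sum of the products H(k, i)·Sinput(1, i), and Soutput(1, j) = Sinput(1, j) + s_k/H(k, j) is the unique value that, changed alone, satisfies the parity check. The H row and symbol values are illustrative.

```python
# GF(2^3) arithmetic on the (assumed) primitive polynomial x^3 + x + 1, and a
# hypothetical check row H(k, j) with illustrative most likely symbols S1.
POLY = 0b1011  # x^3 + x + 1

def gf_mul(a: int, b: int) -> int:
    # carry-less "Russian peasant" multiplication with modular reduction
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b1000:
            a ^= POLY
        b >>= 1
    return r

def gf_div(a: int, b: int) -> int:
    # exhaustive search is fine in a field of only 8 elements
    for x in range(8):
        if gf_mul(b, x) == a:
            return x
    raise ZeroDivisionError("division by zero in GF(2^3)")

H_row = [1, 3, 2, 5, 4, 7]     # H(k, j), all nonzero (illustrative)
S1 = [2, 6, 1, 0, 5, 3]        # S_input(1, j): most likely values (illustrative)

s_k = 0                        # check value; GF addition is bitwise XOR
for h, s in zip(H_row, S1):
    s_k ^= gf_mul(h, s)

# corrected most likely value of each symbol involved in the k-th check
S_out1 = [s ^ gf_div(s_k, h) for h, s in zip(H_row, S1)]

# changing symbol j alone to S_out1[j] zeroes the parity check
for j, (h, s) in enumerate(zip(H_row, S1)):
    assert s_k ^ gf_mul(h, s) ^ gf_mul(h, S_out1[j]) == 0
```

Because GF(2^p) addition is XOR, no logarithms or transforms are needed here, which matches the complexity claims made later in the text.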
The values of maximum probability of correctness in the first row of
Moutput(1, j)
In order to fill in the second row of the matrix
Minput(2, a1) ≤ Minput(2, j) ∀ j ∈ {1, . . . , dc}

Minput(2, b1) ≤ Minput(2, j) ∀ j ∈ {1, . . . , dc} − {a1}
In practice, excluding a priori the values of maximum probability of correctness, the above operations correspond to identifying the first most probable value Sinput(2, a1) and its corresponding symbol a1, and identifying the second immediately most probable value Sinput(2, b1) not belonging to the symbol a1.
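This selection of a1 and b1 can be sketched as follows; the second-row metrics are illustrative assumptions, chosen so that, as in the text's example, a1 falls on the second symbol and b1 on the fifth.

```python
import numpy as np

# Find the most probable second-row value (column a1) and the second most
# probable one on a different column (b1). Metrics are illustrative, chosen
# so that a1 is the second symbol and b1 the fifth, as in the text's example.
M2 = np.array([4.0, 1.0, 6.0, 3.0, 2.0, 5.0])   # M_input(2, j), dc = 6 symbols

a1 = int(np.argmin(M2))          # column of the first minimum
rest = M2.copy()
rest[a1] = np.inf                # exclude column a1 from the search
b1 = int(np.argmin(rest))        # column of the second minimum, b1 != a1
```

With 0-based indexing, a1 = 1 and b1 = 4 correspond to the second and fifth symbols respectively.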
In the considered numerical example, the a1-th symbol is the second symbol and the b1-th symbol is the fifth symbol. Two cases are possible: a) the j-th symbol to be corrected is the a1-th symbol (in the exemplary case: j=2); and b) the j-th symbol to be corrected is not the a1-th symbol (in the exemplary case: j≠2).
In case a), the value Soutput(2, a1) and its corresponding logarithm of normalized probability Moutput(2, a1) are determined according to the following general formulae:
It is worth noting here that the difference (Sinput(2, b1)−Sinput(1, b1)) is part of the value that will be used to store in a compressed way the whole
In case b), the value Soutput(2, j) and its corresponding probability Moutput(2, j)∀j≠a1 are determined according to the following general formulae:
It is worth noting here that the difference (Sinput(2, a1)−Sinput(1, a1)) is part of the value that will be used to store in a compressed way the whole
Considering the numerical example, the first two rows of the output probability matrix are
Moutput(1, j), Moutput(2, j)
Independently of the degree of the check matrix, the second row of the output probability matrix
The a1-th column of the output probability matrix
It is easy to verify that the output matrix
The values of the row Soutput(3, j) are calculated in order, starting from the value Soutput(3, a1); then the value Soutput(3, b1) and the other values are calculated according to a procedure similar to that used for calculating the second row. Differently from the previous step, care should now be taken to avoid filling in a same column of the matrix
The value Soutput(3, a1) is calculated by:
a) looking at the matrix of input probabilities
Minput(ia2, a2) ≤ Minput(i, j) ∀ i ≥ 2, a2 ≠ a1 and (ia2, a2) ≠ (2, b1);
b) calculating the value Soutput(3, a1) and the corresponding logarithm of normalized probability Moutput(3, a1) using the following formulae:
It is worth noting here that the difference (Sinput(ia2, a2)−Sinput(1, a2)) is part of the value that will be used to store in a compressed way the whole
c) checking whether or not Soutput(3, a1)=Soutput(2, a1) and, in the affirmative case, restarting the procedure from point a) choosing a different pair (ia2, a2).
In the exemplary numerical case the pair (ia2, a2) is (3, 5).
The output probability matrix is being filled as follows
The value Soutput(3, b1) and the other values Soutput(3, j)∀j≠a1, b1 are calculated with a procedure similar to that for calculating Soutput(2, a1) and the other values Soutput(2, j)∀j≠a1.
According to the method, the b2-th and c2-th symbols are identified for which:

Minput(ib2, b2) ≤ Minput(i, j) ∀ j ∈ {1, . . . , dc} − {b1}, b2 ≠ b1 and (ib2, b2) ≠ (2, a1);

Minput(ic2, c2) ≤ Minput(i, j) ∀ j ∈ {1, . . . , dc} and (ic2, c2) ≠ (2, a1).
In practice, excluding a priori the values of maximum probability of correctness and the value Sinput(2, a1) already considered in the previous step, the above operation corresponds to identifying the first most probable value Sinput(ic2, c2) and its corresponding symbol c2, and identifying the second most probable value Sinput(ib2, b2) and its corresponding symbol b2 different from the symbol b1.
It may occur that the value Sinput(ic2, c2) is the value Sinput(2, b1). In the considered numerical example, the c2-th symbol is again the fifth symbol and the b2-th symbol is again the second symbol.
Two cases are possible: a) the j-th symbol to be corrected is the b1-th symbol (in the exemplary case: j=5); and b) the j-th symbol to be corrected is neither the b1-th nor the a1-th nor the c2-th symbol (in the exemplary case: j≠5, 2). As stated before, the c2-th symbol is the b2-th symbol.
In case a), the value Soutput(3, b1) and its corresponding logarithm of normalized probability Moutput(3, b1) are determined according to the following general formulae:
It is worth noting here that the difference (Sinput(ib2, b2)−Sinput(1, b2)) is part of the value that will be used to store in a compressed way the whole
Also in this case, it is to be checked whether Soutput(3, b1) ≠ Soutput(2, b1) and, if necessary, to identify the value immediately less probable than Sinput(ib2, b2) to be used in its place.
By looking at the input probability matrix of the numerical example, Sinput(ib2, b2)=Sinput(3, 2) because Minput(3, 2)=7. Therefore, the output probability matrix is
In case b), the value Soutput(3, j) and its corresponding probability Moutput(3, j)∀j≠a1, b1 are determined according to the following general formulae:
It is worth noting here that the difference (Sinput(ic2, c2)−Sinput(1, c2)) is part of the value that will be used to store in a compressed way the whole
Also in this case, it is to be checked whether Soutput(3, j) ≠ Soutput(2, j) and, if necessary, to identify the value immediately less probable than Sinput(ic2, c2) to be used in its place.
In the exemplary numerical case, the output probability matrix is filled as follows
Moutput(1, j) Moutput(2, j) Moutput(3, j)
At the end of this step there are at most two columns (the a1-th and the b1-th) in the output probability matrix different from the other columns.
For sake of example, let us suppose that the condition
H(k,c2)·(Sinput(ic2,c2)−Sinput(1,c2))≠H(k,a1)·(Sinput(2,a1)−Sinput(1,a1))
is not satisfied. A different value Sinput(ic2, c2) is identified, which in the numerical example is used in place of Sinput(3, 5), thus obtaining the following alternative output probability matrix
Moutput(1, j) Moutput(2, j) Moutput(3, j)
The values of the fourth row of the output matrix of values
A difference with respect to the algorithm used for filling the third row is that, in the algorithm for filling the fourth row onwards, it may not be possible to exclude a priori that the logarithm of normalized probability of a value identified in this step is larger than the sum of two nonnull values stored in the input probability matrix. This case will be considered later and corresponds to the event in which a received symbol assumes its value of fourth best probability of correctness while two other symbols do not assume their values of maximum probability of correctness.
Let us now suppose that this is not the case, as in the considered numerical example. The value Soutput(4, a1) is calculated by looking at the matrix of input probabilities
It is worth noting here that the difference (Sinput(ia3, a3)−Sinput(1, a3)) is part of the value that will be used to store in a compressed way the whole
The value Soutput(4, a1) is further calculated by checking whether or not Soutput(4, a1)=Soutput(2, a1) or Soutput(4, a1)=Soutput(3, a1) and, in the affirmative case, restarting the procedure from point a) choosing a different pair (ia3, a3).
In the exemplary numerical case, the pair (ia3, a3) is (2, 1). Therefore, the output probability matrix is being filled as follows:
The value Soutput(4, b1) is calculated by:
a) looking at the matrix of input probabilities
b) calculating the value Soutput(4, b1) and the corresponding logarithm of normalized probability Moutput(4, b1) using the following formulae:
It is worth noting here that the difference (Sinput(ib3, b3)−Sinput(1, b3)) is part of the value that will be used to store in a compressed way the whole
c) checking whether or not Soutput(4, b1)=Soutput(2, b1) or Soutput(4, b1)=Soutput(3, b1) and, in the affirmative case, restarting the procedure from point a) choosing a different pair (ib3, b3).
In the exemplary numerical case, the pair (ib3, b3) is (2, 1). The output probability matrix is being filled as follows:
Then the remaining values Soutput(4, j) for the other symbols are calculated, neglecting the values of maximum probability of correctness and the already considered values Sinput(2, a1) and Sinput(ic2, c2), through the following steps:
a) identifying the first usable most probable value Sinput(id3, d3) and its corresponding symbol d3, and

b) identifying the second usable immediately most probable value Sinput(ic3, c3).
The output probability matrix becomes
Moutput(1, j) Moutput(2, j) Moutput(3, j), Moutput(4, j)
At the end of this step there will be at most three columns (a1-th, b1-th and c3-th) different from the other columns in the output probability matrix
As stated before, it should be checked whether the logarithm of normalized probability of a value identified in this step for any symbol is larger than the sum of two nonnull values stored in the input probability matrix usable for that symbol, i.e.:
Minput(i,j)>Minput(i1,j1)+Minput(i2,j2).
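A minimal sketch of this test, with illustrative metric values:

```python
# A candidate taken as a single deviation with metric m_single should yield to
# a pair of deviations on two distinct columns when the inequality above
# holds, i.e. m_single > m1 + m2. The numeric values are illustrative.
def double_error_wins(m_single: float, m1: float, m2: float) -> bool:
    return m_single > m1 + m2

assert double_error_wins(9.0, 3.0, 4.0)       # the pair is jointly more likely
assert not double_error_wins(6.0, 3.0, 4.0)   # the single deviation stands
```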
This check has been schematically indicated in
Minput(ia3,a3)>Minput(ia3-1,a3-1)+Minput(ia3-2,a3-2);
In this case, the matrices
The symbols a3-1 and a3-2 must be different: otherwise they would stay on the same column of the input matrix and therefore could not both be taken. It may happen that, for a same symbol s, the second and third values of the output probability matrix are equal to values of a same symbol l of the input probability matrix,
Moutput(2,s)=Minput( . . . , l);
Moutput(3,s)=Minput( . . . , l)
Such a configuration may not be accepted as a double error, but this possibility cannot be excluded.
In fact two symbols may be looked for:
These two symbols give the same contribution as the original one (in terms of symbols of the Galois Field) but are not on the same column.
The magnitudes of the two pairs are computed and compared: the smallest among the triplet {Minput(i1, 1), (Minput(id, d)+Minput(2, Z)), (Minput(i1, t)+Minput(2, n))} is the selected magnitude. Concerning the Soutput values, they give exactly the same contribution.
The above check should be carried out for every symbol. The algorithm could be stopped at this step, because in general four alternative values for each symbol are sufficient.
As an alternative, in the unlikely event that four alternative values for each symbol are insufficient, a fifth step may be executed.
The fifth most probable values are calculated starting from the different columns (a1-th, b1-th and c3-th) following the same reasoning used for calculating the symbol Soutput(4, a1); then the symbol Soutput(5, c3) is calculated (if it has not yet been calculated) as done for the symbol Soutput(3, b1); finally the remaining symbols are calculated as done for the other symbols Soutput(3, j).
At each calculation, it is to be checked whether or not there are two identical symbols in a same column of the output matrix
At the end of this fifth step, the output probability matrix is as follows:
Moutput(1, j), Moutput(2, j), Moutput(3, j) Moutput(4, j), Moutput(5, j)
It would be possible to continue calculating the other less probable values by repeating the above procedure, though five values for each symbol are commonly considered enough for every practical application.
The fifth candidate is subject to the same exception as the fourth one. There are three possibilities, which we indicate for simplicity using the matrix
Again, if the symbols that generate one or all of these pairs are on the same column of the input matrix, a search to identify S1′, S2′, S3′ is performed and all the possibilities are analyzed. Note that if a double error is present in the fourth candidate, it is not necessary to look for it in the fifth.
The above algorithm is summarized in Figures 1 to 6, which will appear self-explanatory in view of the above discussion and for this reason will not be illustrated further.
The above disclosed method may be implemented in a hardware LIST decoder or with a software computer program executed by a computer.
The method is relatively simple because it does not require the calculation of Fourier transforms or logarithms, but only additions and multiplications in a Galois field; thus it may be implemented in real-time applications.
Another key advantage of the present invention is related to the amount of memory used to store the temporary values Rji that are basically the table
In practice, the output tables
The present embodiment is focused on the approach of computing the first five candidates, but the invention is not limited to this case. Given a parity check output
To recover the information to reproduce the whole tables
When five different candidates are considered, the storage of Soutput(5, i) is not required.
Overall memory used to represent
Decoder Architecture
A macro-architecture suggested for the implementation of the algorithm is depicted in
The memory WM contains the PMFs of the symbols. At the beginning, they are initialized with the demodulator (or detector) outputs and then they are updated according to the decoding algorithm schedule.
The variable-to-check PMFs are first sorted to identify the first n candidate magnitudes (five magnitudes in the preferred implementation over GF(2^3)). The remaining magnitudes are assumed to be equal to the fifth.
The sorted PMF are passed to the SPC block that performs the CNP (computation of Rji). The CORRECTION MEM contains the check-to-variable messages Rji. The memory size can be greatly reduced following the rules given in the last part of the present disclosure.
Number | Date | Country | Kind |
---|---|---|---|
VA2010A0050 | Jun 2010 | IT | national |
Number | Name | Date | Kind |
---|---|---|---|
6789227 | De Souza et al. | Sep 2004 | B2 |
20090013237 | Lin et al. | Jan 2009 | A1 |
20090063931 | Rovini et al. | Mar 2009 | A1 |
20090106622 | Yokokawa et al. | Apr 2009 | A1 |
20100088575 | Sharon et al. | Apr 2010 | A1 |
Entry
---
Davey et al., “Low-density parity-check codes over GF(q)”, IEEE Communications Letters, vol. 2, no. 6, Jun. 1998, pp. 165-167.
Wymeersch et al., “Log-domain decoding of LDPC codes over GF(q)”, IEEE International Conference on Communications, 2004, pp. 772-776.
Song et al., “Reduced-complexity decoding of q-ary LDPC codes for magnetic recording”, IEEE Transactions on Magnetics, vol. 39, no. 2, 2003, pp. 1-7.
Voicila et al., “Low-complexity, low-memory EMS algorithm for non-binary LDPC codes”, IEEE International Conference on Communications, 2007, pp. 1-6.
Chang et al., “Performance and decoding complexity of nonbinary LDPC codes for magnetic recording”, IEEE Transactions on Magnetics, vol. 44, no. 1, Jan. 2008, pp. 211-216.
Declercq et al., “Decoding algorithms for nonbinary LDPC codes over GF(q)”, IEEE Transactions on Communications, vol. 55, no. 4, Apr. 2007, pp. 633-643.
Zarkeshvari et al., “On implementation of min-sum algorithm for decoding low-density parity-check (LDPC) codes”, Global Telecommunications Conference, IEEE, vol. 2, Nov. 2002, pp. 1349-1353.
Hocevar, “A reduced complexity decoder architecture via layered decoding of LDPC codes”, IEEE Workshop on Signal Processing Systems, Oct. 2004, pp. 107-112.
Liao et al., “An O(qlogq) log-domain decoder for non-binary LDPC over GF(q)”, IEEE Circuits and Systems, Nov. 2008, pp. 1644-1647.
Number | Date | Country | |
---|---|---|---|
20110320916 A1 | Dec 2011 | US |