The present invention relates to the decoding of error correcting codes in the telecommunications or data recording fields. To be more precise, the invention relates to a method for iterative message-passing decoding of error correcting codes susceptible of representation by bipartite graphs, such as LDPC (Low Density Parity Check) codes or turbo-codes.
Error correcting codes susceptible of representation by bipartite graph cover a wide variety of codes, particularly LDPC codes, initially described by R. Gallager in his article entitled “Low density parity check codes” published in IEEE Trans. Inform. Theory, vol. IT-8, pages 21-28, 1962, the advantageous properties of which have recently been rediscovered, and turbo-codes introduced by C. Berrou et al. in their ground-breaking article “Near optimum error correcting coding and decoding: turbo-codes” which appeared in IEEE Trans. on Communications, vol. 44, No. 10, pages 1261-1271, 1996.
The term bipartite graph is given to an undirected graph the set of nodes of which consists of two disjoint sub-sets, such that no two nodes of the same sub-set are connected to each other by an edge of the graph.
Some error correcting codes are susceptible of a representation by bipartite graph. The graph is partitioned into a first sub-set of nodes associated with symbols constituting a code word and a second sub-set of nodes associated with the code constraints, typically with the parity controls. A bipartite graph associated with a group of constraints is also known as a Tanner graph.
The code word symbols are generally elements of the Galois field F2={0,1}, in other words bits, but they can more generally be elements of any field of characteristic 2.
Codes susceptible of representation by bipartite graphs can be decoded using iterative message-passing decoding, also known as MP (Message Passing) or BP (Belief Propagation). A generic description of this decoding method can be found in the thesis by N. Wiberg entitled “Codes and decoding on general graphs”, 1996. Iterative MP decoding is in fact a generalization of algorithms well-known in the decoding area, namely the “forward-backward” algorithm used for turbo-codes and the Gallager algorithm for LDPC codes.
In the interest of simplification, we will repeat hereinafter the principle of iterative decoding by message passing in the context of an LDPC code. We consider a linear code (K,N) where K is the dimension of the code representing the number of information bits and N is the length of the code, representing the number of encoded bits. M=N−K corresponds to the number of parity bits or, in an equivalent way, the number of parity constraints.
It will be recalled that a linear code can also be defined by means of a parity control matrix H, of dimension M×N, a word x=(x1, x2, . . . , xN) being a code word if and only if it satisfies all the parity controls, in other words if:
H·xT=0 (1)
where xT denotes the transpose of the vector x.
It will be recalled generally that a linear code is defined by a generator matrix G the elements of which are binary values and that a code word x=(x1, x2, . . . , xN) is obtained from an information bit word a=(a1, a2, . . . , aK) by means of:
x=aG (2)
Since all code words verify the parity controls, we get the relationship:
H·GT=0 (3)
where GT denotes the transpose of the matrix G.
The code word x is transmitted on a communication channel or recorded on a data support. On receipt or on reading the support, a noisy version of x is recovered, i.e. y=(y1, y2, . . . , yN). The decoding operation consists in finding x and therefore a from the observation y.
We will agree on the following notations before describing the iterative message-passing decoding principle:
H(n) denotes all the controls connected to the variable n in the bipartite graph, in other words all the nodes adjacent to the node n;
H(m) is all the variables connected to the control m in the bipartite graph, in other words all the nodes adjacent to the node m;
αn represents the a priori information relating to the variable n in the bipartite graph, in other words the a priori information relating to the nth bit of the code word. This information takes into account the signal received and the characteristics of the transmission channel. It constitutes the decoder input and is generally supplied by the demodulator in the form of soft values, i.e. in terms of probabilities:
αn=(pn0,pn1) (4)
where pna=Pr(xn=a|yn), aε{0,1},
i.e., more conveniently, in the form of a logarithmic probability ratio, known as a Log Likelihood Ratio or LLR:
αn=ln(pn0/pn1) (5)
Thus, for centred Gaussian white noise (AWGN) and BPSK modulation, the demodulator simply calculates:
αn=2yn/σ2 (6)
where σ2 is the noise variance.
αmn represents the message transmitted by the variable n to the control mεH(n). With reference to the turbo-codes, αmn is further known as extrinsic information;
βnm represents symmetrically the message transmitted by the control m to the variable nεH(m). It is also termed extrinsic information;
{circumflex over (α)}n represents the a posteriori information relative to the variable n: it takes into account both the a priori information αn and the messages βnm received by the variable n from its adjacent controls during decoding;
{circumflex over (x)}n is the hard value corresponding to the soft value {circumflex over (α)}n, in other words the decision made for the bit xn.
The principle of iterative message-passing decoding comprises the steps described below.
At step 210, the messages αmn are initialized, for each pair consisting of a variable n and a control mεH(n). The messages αmn are generally initialized by the a priori information, in other words: αmn=αn, ∀mεH(n). The iteration counter Iter is also initialized at 0.
The initialization step is followed by an iteration loop comprising the following steps:
At 220, the controls are processed. To be more precise, for each control m, the messages βmn from the control m bound for the respective variables nεH(m) are calculated, i.e.:
βmn=FC({αmn′|n′εH(m)−{n}}) (7)
where FC denotes the control processing function. For any given pair of nodes m, nεH(m), the message βmn is calculated as a function of the messages that the control m has itself received from the variables n′εH(m)−{n}. It will be noted as a consequence that there is no return of extrinsic information from a variable node to itself. The control processing step is also known as a horizontal step.
At 230, the variables are processed symmetrically. To be more precise, for each variable n, the messages αmn bound for the respective controls mεH(n) are calculated, i.e.:
αmn=FV({βm′n|m′εH(n)−{m}}) (8)
where FV denotes the variable processing function. For a given pair of nodes n, mεH(n), the message αmn is calculated as a function of the messages that the variable n has itself received from the controls m′εH(n)−{m}, so that, as previously, there is no return of extrinsic information from a node to itself. The variable processing step is also known as a vertical step.
At 240, the a posteriori information {circumflex over (α)}n is estimated from the a priori information αn and from the messages βmn received by the variable n from its adjacent control nodes mεH(n), which is expressed symbolically as:
{circumflex over (α)}n=FAP({αn}∪{βmn|mεH(n)}) (9)
where FAP denotes the a posteriori estimation function.
At 250, a decision is taken in respect of the hard values:
{circumflex over (x)}n=FD({circumflex over (α)}n) (10)
where FD denotes the decision function. Typically, for a BPSK modulation, the decision is taken on the sign of the soft value, in other words {circumflex over (x)}n=sgn({circumflex over (α)}n).
At 260, a check is made as to whether the vector {circumflex over (x)}=({circumflex over (x)}1, {circumflex over (x)}2, . . . , {circumflex over (x)}N) of hard values satisfies all the parity controls. If it does, or if a maximum number of iterations has been reached, the decoding stops; otherwise a new iteration is performed.
The order of the steps in the iteration loop may differ from that disclosed above. In particular, the processing of the variables may come first, the messages from the controls then being initialized by: βmn=0, ∀nε{1, . . . N} and ∀mεH(n).
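By way of illustration only, the iteration loop above (steps 210 to 260) can be sketched in Python for a toy code. The parity matrix, the message containers and the use of the Min-Sum rule (described further below) as the control processing function FC are assumptions made for this sketch, not part of the disclosed method.

```python
import numpy as np

# Toy parity control matrix H (M=2 controls, N=3 variables): a length-3
# repetition code, chosen purely for illustration.
H = np.array([[1, 1, 0],
              [1, 0, 1]])
M, N = H.shape

def decode(alpha, max_iter=10):
    """Flooding-schedule message passing in the LLR domain.

    alpha: a priori LLRs, with the convention LLR = ln Pr(x=0)/Pr(x=1),
    so a positive value favours bit 0.
    """
    # Step 210: initialise variable-to-control messages with the a priori info.
    a = {(m, n): alpha[n] for m in range(M) for n in range(N) if H[m, n]}
    b = {}
    for _ in range(max_iter):
        # Step 220 (horizontal): control processing, Min-Sum rule as FC.
        for m in range(M):
            for n in np.flatnonzero(H[m]):
                others = [a[m, k] for k in np.flatnonzero(H[m]) if k != n]
                sign = np.prod(np.sign(others))
                b[m, n] = sign * min(abs(v) for v in others)
        # Step 240: a posteriori information (sum of a priori and incoming messages).
        post = [alpha[n] + sum(b[m, n] for m in np.flatnonzero(H[:, n]))
                for n in range(N)]
        # Step 230 (vertical): variable processing via the subtraction rule (15).
        for m in range(M):
            for n in np.flatnonzero(H[m]):
                a[m, n] = post[n] - b[m, n]
        # Steps 250/260: hard decision and parity control verification.
        x_hat = [0 if p >= 0 else 1 for p in post]
        if not np.any(H @ x_hat % 2):
            break
    return x_hat

# A bit received with the wrong sign but low reliability is corrected
# by the two parity controls.
print(decode([+2.0, -0.5, +3.0]))  # -> [0, 0, 0]
```

The flooding schedule shown here processes all controls, then all variables, in each iteration; the serial and mixed schedules discussed below reorder these updates without changing the message computations.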
According to the iterative decoding principle outlined above, all the controls are processed in parallel and then all the variables, likewise in parallel; this is termed parallel type scheduling, or “flooding scheduling”. Other scheduling types have been proposed in the literature, in particular:
serial type scheduling, a category in which the scheduling types denoted “serial scheduling”, “shuffled-BP”, “horizontal shuffled” or “vertical shuffled” can be placed. Serial type scheduling can be applied to both controls and variables. When applied to controls, the controls are processed one after the other, the messages bound for each control being updated immediately before it is processed and the messages and a posteriori information of its adjacent variables immediately afterwards.
In a dual way, variable by variable processing can be performed instead of control by control processing being performed. Depending on the case envisaged, we will speak of “horizontal shuffled” sequencing or “vertical shuffled” sequencing.
The two aforementioned sequencing types can also be hybridized in the form of “mixed” or “group-shuffled” sequencing. A description of the decoding strategy corresponding to mixed scheduling will be found in the article by J. Zhang et al. entitled “Shuffled iterative decoding” which appeared in IEEE Trans. on Comm., Vol. 53, No. 2, February 2005, pages 209-213. The strategy is based on a partition of the nodes into groups, the processing being in parallel within a group and in series from one group to the next. To be more precise, for a partition into control groups, the groups are processed one after the other, the controls within any one group being processed in parallel.
In a dual way, it is possible to operate on the basis of a partition into groups of variables instead of a partition into groups of controls.
Two main algorithms are known for iterative message-passing decoding in respect of LDPC codes: the SPA (Sum Product Algorithm), further known as “log-BP”, and the “Min-Sum” algorithm, further known as “BP-based”. A detailed description of these two algorithms will be found in the article by W. E. Ryan entitled “An introduction to LDPC codes”, published in CRC Handbook for coding and signal processing for recording systems, and available from the link www.csee.wvu.edu/wcrl/ldpc.htm.
SPA and Min-Sum algorithms differ only in the control processing step which will be set out in detail below. The other steps are identical, namely:
Step 230 for processing the variables consists in calculating the messages αmn as follows:
αmn=ln(Pr(xn=0|yn,B*mn,Cmn)/Pr(xn=1|yn,B*mn,Cmn)) (11)
where B*mn represents all the messages βm′n received by the variable n from the controls m′εH(n)−{m} and Cmn represents the event corresponding to a parity control verified for each of these controls. Subject to the independence of the yn, it is shown that αmn can be expressed in the form of an LLR as:
αmn=αn+Σm′εH(n)−{m}βm′n (12)
Step 240 of estimating the a posteriori information comprises calculating:
{circumflex over (α)}n=ln(Pr(xn=0|yn,Bn,Cn)/Pr(xn=1|yn,Bn,Cn)) (13)
where Bn represents the messages received by the variable n from all the controls of H(n) and Cn represents the event corresponding to a parity control verified for each of these controls. On the same assumption as before, it is shown that {circumflex over (α)}n can be expressed in the form of LLR as:
{circumflex over (α)}n=αn+ΣmεH(n)βmn (14)
It is noted according to (12) and (14) that:
αmn={circumflex over (α)}n−βmn (15)
Step 230 of processing the variables may consequently be placed at the end of the iteration, after the a posteriori information has been estimated. The expression (15) conveys the fact that the extrinsic information (here βmn) sent by a node (m) is not returned to it.
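The equivalence between the direct calculation (12) and the subtraction (15) can be verified on a small numeric example; the a priori value and the three incoming messages below are arbitrary illustrative values chosen here:

```python
# A priori LLR of a variable n and the messages it received from its
# three adjacent controls (illustrative values, exact in binary floating point).
alpha_n = 1.25
beta = {"m1": 0.5, "m2": -0.75, "m3": 2.0}

# Expression (14): a posteriori information.
alpha_hat = alpha_n + sum(beta.values())

# Expression (12): the extrinsic message to control m1 excludes the
# contribution received from m1 itself.
alpha_m1_direct = alpha_n + sum(v for k, v in beta.items() if k != "m1")

# Expression (15): the same message obtained by simple subtraction.
alpha_m1_sub = alpha_hat - beta["m1"]

print(alpha_m1_direct, alpha_m1_sub)  # -> 2.5 2.5
```

The subtraction form is what makes it cheap to place variable processing after the a posteriori estimation: one addition over H(n) yields all |H(n)| extrinsic messages.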
Step 250 of hard value decisions is simply achieved by:
{circumflex over (x)}n=sgn({circumflex over (α)}n) (16)
where sgn(x)=1 if x is positive and sgn(x)=−1 otherwise.
Parity control verification at 260 in respect of hard values is achieved by calculating the parity controls:
cm=ΠnεH(m){circumflex over (x)}n, ∀mε{1, . . . M} (17)
The parity controls are all satisfied if and only if:
cm=1, ∀mε{1, . . . M} (18)
The controls processing step 220 consists in calculating, for the SPA algorithm:
βmn=ln(Pr(cm=1|A*mn,xn=0)/Pr(cm=1|A*mn,xn=1)) (19)
where cm=1 signifies a parity condition satisfied for the control m and A*mn represents all the messages αmn′ received by the control m from the variables n′εH(m)−{n}. It is shown that βmn can be expressed as:
βmn=(Πn′εH(m)−{n}sgn(αmn′))·Φ(Σn′εH(m)−{n}Φ(|αmn′|)) (20)
where Φ(x)=−ln(tanh(x/2)).
Processing the controls according to the Min-Sum algorithm amounts to a simplification of the expression (20). Given the rapid decay of the function Φ(x) and the fact that Φ(x) is its own inverse, i.e. Φ(Φ(x))=x, the following approximation may legitimately be made:
βmn≈(Πn′εH(m)−{n}sgn(αmn′))·minn′εH(m)−{n}|αmn′| (21)
The Min-Sum decoding algorithm is substantially more straightforward than the SPA decoding algorithm since it only performs additions, comparisons and changes of sign. Additionally, the performance of the Min-Sum algorithm is independent of the estimation of the noise variance σ2. On the other hand, approximating the values βmn according to the expression (21) leads to a loss of performance relative to the SPA decoding algorithm.
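The behaviour of this approximation can be checked numerically; the function Φ and the illustrative message amplitudes below are our own choices:

```python
import math

def phi(x):
    # Phi(x) = -ln(tanh(x/2)); decreasing on (0, inf) and involutive,
    # i.e. phi(phi(x)) == x.
    return -math.log(math.tanh(x / 2))

# Amplitudes |alpha_mn'| entering a control node from the other variables
# (illustrative values).
mags = [0.8, 2.5, 3.1]

# SPA magnitude per expression (20) versus the Min-Sum magnitude per (21).
spa = phi(sum(phi(v) for v in mags))
min_sum = min(mags)

print(spa < min_sum)  # -> True
```

Since Φ is decreasing, Φ(ΣΦ(|αmn′|)) never exceeds the smallest input amplitude; the Min-Sum value min|αmn′| is therefore always an overestimate of the SPA magnitude, which is the source of the performance loss mentioned above.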
Although the SPA decoding algorithm actually performs better than the Min-Sum algorithm, its performance can nonetheless deteriorate markedly in the event of the noise power being wrongly estimated.
Different modified versions of the Min-Sum algorithm have been proposed for the purpose of improving its performance. For example, the aforementioned article by W. E. Ryan brings a corrective term into the approximation (21). The performance of the algorithm so modified is then closer to that of the SPA algorithm but becomes heavily dependent on the noise power estimation error. Additionally, the corrective term substantially increases the decoding complexity. This modified version has been further refined in the document 3GPP TSG RAN WG1 #43 of 7 Nov. 2005, entitled “Rate-compatible LDPC codes with low complexity encoder and decoder”. The method disclosed therein consists in bringing a corrective term to (21) only for the δ smallest values αmn′, in other words those contributing most to the expression (20). This algorithm remains sensitive however to the noise power estimation error.
The general object of the present invention is to propose an iterative decoding algorithm of the message-passing type that allows an error correcting code susceptible of representation by bipartite graph to be decoded, and that performs better in terms of error rate and convergence speed than the Min-Sum algorithm, without however having the complexity and sensitivity to noise estimation of the SPA algorithm.
The present invention is defined as an iterative message-passing decoding method for decoding an error correcting code susceptible of representation by a bipartite graph including a plurality of variable nodes and a plurality of control nodes, said messages being expressed in terms of a log likelihood ratio. At each iteration of a plurality of decoding iterations, for each pair comprising a variable node and a control node, a change of sign is detected for the extrinsic information intended for transmission as a message by said variable node to said control node relative to that transmitted at the previous iteration, and in the event of a change of sign, said extrinsic information is subject to an amplitude reduction operation before it is transmitted to said control node.
According to a first embodiment alternative, the amplitude reduction operation is a non-linear operation. For example, the amplitude reduction operation is an operation to threshold the absolute value of said extrinsic information relative to a threshold value. Preferably, the threshold value is calculated adaptively.
According to a first law of adaptation example, the threshold value is obtained as the absolute value of the extrinsic information transmitted at the previous iteration from said variable node to said control node.
According to a second law of adaptation example, said threshold value is obtained as the smallest of the absolute values of the extrinsic information transmitted at the previous iterations from said variable node to said control node.
According to a second embodiment alternative, the amplitude reduction operation is a linear operation, for example a multiplication by a coefficient that is strictly positive and strictly less than 1. To advantage, said coefficient is calculated adaptively.
According to a third example, the extrinsic information thresholding operation is followed by a multiplication of the extrinsic information so processed by a coefficient that is strictly positive and strictly less than 1.
Said extrinsic information can be calculated by αmntemp={circumflex over (α)}n−βmn where {circumflex over (α)}n is the a posteriori value of the variable associated with the variable node with the index n and βmn is the message transmitted by the control node with the index m to the variable node with the index n.
The message βmn from a control node with the index m to a variable node with the index n can be calculated by:
βmn=(Πn′εH(m)−{n}sgn(αmn′))·minn′εH(m)−{n}|αmn′|
where αmn′ denotes the message from the variable node with the index n′ to the control node with the index m, H(m) is the set of the variable nodes adjacent to the control node with the index m, sgn(x)=1 if x is positive and sgn(x)=−1 otherwise.
The invention also relates to a computer program comprising software means adapted to implement the steps in the decoding method disclosed above when it is run on a computer.
Further consideration is given to an error correcting code susceptible of representation by a bipartite graph with N variables and M controls. We shall assume hereinafter, for the purposes of illustration and with no loss of generality, that this correcting code is an LDPC code.
The present invention is based on the following observation: when the messages from the variables, in other words the extrinsic information values expressed in the form of LLR αmn, fluctuate from one iteration to the next or, in a general way, when they reproduce a fluctuating pattern, the Min-Sum algorithm does not converge or converges only very slowly. The fluctuations in the values αmn are generally due to the noise distribution in the word received. The principle of the invention is to damp these fluctuations when they do occur, so as to force the algorithm into a convergence state.
It will be assumed that the extrinsic information values αmn are stored in a table, this table being updated at each iteration.
Step 330 starts with a calculation of the new extrinsic values in the form of an LLR from the expression (15): αmntemp={circumflex over (α)}n−βmn. However, unlike the conventional Min-Sum decoding process, each value so calculated is first of all stored as an intermediate value before being processed and then stored in the table of extrinsic information values. It will be noted that the processing at 330 is conducted sequentially on the values of n and m, so that only one intermediate value is necessary.
The sign of the intermediate value αmntemp is compared with that of αmn already stored, in other words with the sign of the extrinsic information value obtained at the previous iteration and transmitted as a message from the variable node n to the control node m. If these signs are identical, sgn(αmntemp)=sgn(αmn), there is no oscillation, αmn is refreshed by αmn=αmntemp and this value is transmitted as it is to the control node m.
However, if the two signs are different, sgn(αmntemp)=−sgn(αmn), the sign switch conveys a lack of reliability of the value αmntemp. This lack of reliability is taken into account by reducing the amplitude of αmntemp. αmn is then refreshed with the value so obtained, i.e.:
αmn=Fred(αmntemp) (22)
this value being transmitted to the control node m.
According to a first embodiment alternative, the amplitude reduction operation is non-linear, for example a thresholding of |αmntemp|. The values αmn are then updated by:
αmn=sgn(αmntemp)·min(|αmntemp|,αmnT) (23)
where αmnT is a positive threshold value associated with the variable node n and with the control node m. To advantage, the thresholds αmnT, 1≦m≦M, 1≦n≦N are obtained adaptively. To be more precise, they are initialized (at 310) by the absolute values of the a priori information i.e. αmnT=|αn| and then updated by means of a law of adaptation.
According to a first law of adaptation example, for each node pair (n,m), the current iteration threshold value is equal to the absolute value of the extrinsic information of the previous iteration, in other words the calculation of the expression (23) is preceded by the operation:
αmnT=|αmn| (24)
Thus, in the event of a sign switch, the amplitude of the message αmn in the current iteration is limited by the absolute value of the message from the previous iteration.
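A sketch of this update for a single node pair is given below; the function name and calling convention are ours, and the handling of a zero value as positive is an assumption consistent with the sgn convention used above:

```python
def update_message(alpha_hat_n, beta_mn, alpha_mn_prev):
    """One extrinsic update with oscillation damping (first law of adaptation).

    alpha_mn_prev is the message sent at the previous iteration; in the
    event of a sign switch, the new amplitude is clipped per expressions
    (23)-(24) by the amplitude of the previous message.
    """
    temp = alpha_hat_n - beta_mn             # expression (15)
    if temp * alpha_mn_prev >= 0:            # same sign: no oscillation
        return temp
    threshold = abs(alpha_mn_prev)           # adaptive threshold (24)
    sign = 1.0 if temp >= 0 else -1.0        # zero treated as positive
    return sign * min(abs(temp), threshold)  # thresholding (23)

# No sign switch: the value passes through unchanged.
print(update_message(3.0, 0.5, 2.0))   # -> 2.5
# Sign switch with growing amplitude: clipped to the previous amplitude.
print(update_message(-4.0, 0.5, 2.0))  # -> -2.0
```

Note that the amplitude is reduced only on a sign switch; a message that keeps its sign is never attenuated, so reliable messages propagate exactly as in the conventional Min-Sum algorithm.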
According to a second law of adaptation example, for each node pair (n,m), the threshold value of the current iteration is equal to the smallest absolute value of the extrinsic information from the previous iterations for the same node pair, in other words, step 330 systematically includes a threshold update, as follows:
∀nε{1, . . . N},∀mεH(n),
αmntemp={circumflex over (α)}n−βmn
if sgn(αmntemp)=sgn(αmn) then αmn=αmntemp
otherwise αmn=sgn(αmntemp)·min(|αmntemp|,αmnT)
if |αmn|<αmnT then αmnT=|αmn| (25)
Step 330 can be implemented in this event in an equivalent way by:
∀nε{1, . . . N},∀mεH(n),
αmntemp={circumflex over (α)}n−βmn
if |αmntemp|<αmnMin then αmnMin=|αmntemp|
if sgn(αmntemp)=sgn(αmn) then αmn=αmntemp
otherwise αmn=sgn(αmntemp)·αmnMin (26)
where αmnMin is the smallest amplitude of the extrinsic information observed for the node pair (n,m).
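The equivalent formulation (26) can likewise be sketched for a single node pair; the function name, the tuple return and the initialization of the running minimum by the caller are our own assumptions:

```python
def update_with_running_min(alpha_hat_n, beta_mn, alpha_mn_prev, min_amp):
    """Extrinsic update per formulation (26).

    min_amp tracks the smallest extrinsic amplitude observed so far for
    this node pair; it is updated systematically, then used as the
    clipping value when a sign switch is detected.
    """
    temp = alpha_hat_n - beta_mn          # expression (15)
    min_amp = min(min_amp, abs(temp))     # running-minimum update
    if temp * alpha_mn_prev >= 0:         # same sign: pass through
        return temp, min_amp
    sign = 1.0 if temp >= 0 else -1.0
    return sign * min_amp, min_amp        # clipped to the running minimum

# First iteration: same sign, the running minimum simply shrinks.
print(update_with_running_min(3.0, 0.5, 2.0, 1.5))   # -> (2.5, 1.5)
# Later sign switch: the amplitude is limited by the running minimum.
print(update_with_running_min(-4.0, 0.5, 2.5, 1.5))  # -> (-1.5, 1.5)
```

Compared with the first law of adaptation, this second law is monotone: the threshold can only decrease over the iterations, which damps persistent oscillations more aggressively.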
According to a second embodiment, the amplitude reduction operation is linear (in terms of LLR). In other words, if a sign switch is detected, the amplitude of the extrinsic information is attenuated:
αmn=λαmntemp (27)
where λ is a coefficient such that 0<λ<1. To advantage, for implementation reasons λ=1-2−b will be selected with b a strictly positive integer.
The coefficient λ may be chosen to be adaptive.
According to a third embodiment, the amplitude reduction operation may be a combination of a linear operation and a non-linear operation, for example a thresholding operation and an attenuation operation as described previously. The extrinsic information is then updated by combining (23) and (27):
αmn=λ·sgn(αmntemp)·min(|αmntemp|,αmnT) (28)
Other alternative embodiments of the amplitude reduction operation, linear or non-linear, adaptive or not, are also conceivable without departing from the scope of the present invention. Among non-linear operations, sigmoid functions passing through the origin may be of particular use.
The invention has been described for a parallel sequencing starting with the control nodes. It is however clear to the person skilled in the art that it also applies to a parallel sequencing starting with the variable nodes, as well as to the serial and hybrid sequencings defined above.
It is noted that the performance of the Min-Sum algorithm according to the invention is very close to that of the SPA algorithm, and this for a complexity comparable to that of the conventional Min-Sum algorithm.
The present invention applies to the decoding of error correcting codes susceptible of representation by bipartite graphs, particularly LDPC codes or turbo-codes. It can be used in the data recording or telecommunications field, in particular for telecommunications systems that already use LDPC codes, for example those complying with the standards IEEE 802.3an (10 Gbit/s Ethernet), DVB-S2 (satellite video broadcasting), IEEE 802.16 (WiMAX), or able to use them, for example systems complying with the standards IEEE 802.11 (WLAN) and IEEE 802.20 (Mobile Broadband Wireless Access).
Number | Date | Country | Kind |
---|---|---|---|
07 53228 | Feb 2007 | FR | national |