This invention relates to the decoding of error correction codes in the field of telecommunications or data recording. More precisely, the invention relates to an iterative message passing method for decoding error correction codes that can be represented by a bipartite graph, such as LDPC (Low Density Parity Check) codes or turbocodes.
Error correction codes that can be represented by a bipartite graph cover a wide variety of codes, particularly LDPC codes, initially described by R. Gallager in his article entitled “Low density parity check codes” published in IEEE Trans. Inform. Theory, vol. IT-8, pages 21-28, 1962, whose remarkable properties were rediscovered more recently, and turbocodes, introduced by C. Berrou et al. in their founding article “Near optimum error correcting coding and decoding: turbo-codes” published in IEEE Trans. on Communications, vol. 44, No. 10, pages 1261-1271, 1996.
A bipartite graph is a non-oriented graph in which the set of nodes is partitioned into two disjoint sub-sets such that no two nodes belonging to the same sub-set are connected together by an edge of the graph.
Some error correction codes can be represented by a bipartite graph. The graph is partitioned into a first sub-set of nodes associated with the symbols forming a code word and a second sub-set of nodes associated with the code constraints, typically parity checks. A bipartite graph associated with a group of constraints is also called a Tanner graph.
The symbols in the code word are usually elements of the Galois field F2={0,1}, in other words bits, but they may more generally be elements of a field F2q with q&gt;1.
Codes that can be represented by a bipartite graph can be decoded using an iterative message passing (MP) decoding, also called BP (Belief Propagation) decoding. A generic description of this decoding method is given in the thesis by N. Wiberg entitled “Codes and decoding on general graphs”, 1996. MP type iterative decoding is in fact a generalisation of algorithms well known in the decoding field, namely the “forward-backward” algorithm used for turbocodes and the Gallager algorithm for LDPC codes.
For simplification reasons, the following contains a description of the principle of iterative decoding by message passing in the framework of an LDPC code. We will consider a linear code (K,N) in which K is the dimension of the code representing the number of information bits and N is the length of the code representing the number of coded bits. M=N−K is equal to the number of parity bits or equivalently, the number of parity constraints.
In general, remember that a linear code is defined by a generator matrix G whose elements are binary values, and that a code word x=(x1, x2, . . . , xN) is obtained from a word of information bits a=(a1, a2, . . . , aK) by means of:
x=aG (2)
The code can equivalently be defined by an M×N parity check matrix H such that H·xT=0 for every code word x. Since all code words satisfy the parity checks, we obtain the relation:
H·GT=0 (3)
in which GT refers to the transpose of the matrix G.
The code word x is transmitted on a communication channel or is recorded on a data support. A noisy version of x, namely y=(y1, y2, . . . , yN), is recovered on reception or when reading the support. The decoding operation consists of finding x, and therefore a, starting from the observation y.
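By way of illustration, relations (2) and (3) can be checked numerically on a small code. The sketch below uses the (7,4) Hamming code, a hypothetical choice made purely for illustration; the matrices G and H are not taken from the invention.

```python
import numpy as np

# Toy (K=4, N=7) Hamming code, chosen purely for illustration.
P = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])    # generator matrix (K x N)
H = np.hstack([P.T, np.eye(3, dtype=int)])  # parity check matrix (M x N)

# Relation (3): H.G^T = 0 over F2
assert np.all((H @ G.T) % 2 == 0)

def encode(a):
    """Code word x = a.G over F2 (relation (2))."""
    return (np.array(a) @ G) % 2

# Every code word satisfies the parity checks: H.x^T = 0
x = encode([1, 0, 1, 1])
assert np.all((H @ x) % 2 == 0)
```

The same check applies to any linear code whose matrices G and H satisfy relation (3).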
We will agree upon the following notations before describing the principle for iterative message passing decoding:
H(n) denotes the set of checks connected to the variable n in the bipartite graph, in other words the set of nodes adjacent to node n;
H(m) denotes the set of variables connected to the check m in the bipartite graph, in other words the set of nodes adjacent to node m;
αn represents the a priori information concerning the variable n in the bipartite graph, in other words the a priori information concerning the nth bit in the code word. This information takes account of the signal received and the characteristics of the transmission channel. It forms the input to the decoder and is usually provided by the demodulator in the form of soft values, namely in terms of likelihoods:
αn=(pn0,pn1) (4)
where pna=Pr(xn=a|yn), aε{0,1},
or more conveniently, in the form of a logarithmic likelihood ratio (LLR):
αn=ln(pn0/pn1) (5)
Thus, for centred Gaussian white noise and BPSK modulation (bit 0 mapped to +1, bit 1 to −1), the demodulator simply calculates:
αn=2yn/σ2 (6)
where σ2 is the noise variance.
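The a priori LLR calculation performed by the demodulator can be sketched as follows, assuming a BPSK mapping 0→+1, 1→−1 and a noise variance known to the receiver; all numerical values are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2 = 0.5                          # noise variance, assumed known by the receiver
bits = np.array([0, 1, 1, 0])
symbols = 1 - 2 * bits                # BPSK mapping: 0 -> +1, 1 -> -1
y = symbols + rng.normal(0.0, np.sqrt(sigma2), bits.size)

# A priori LLR alpha_n = 2*y_n / sigma^2 for the Gaussian channel
alpha = 2 * y / sigma2

# The sign of the LLR carries the tentative hard decision
assert np.all(np.sign(alpha) == np.sign(y))
```

With this sign convention, a positive LLR favours the bit value 0.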
αmn represents the message transmitted by the variable n to the check mεH(n). By reference to turbocodes, αmn is also called extrinsic information;
βmn symmetrically represents the message transmitted by the check m to the variable nεH(m). It is also qualified as extrinsic information;
{circumflex over (α)}n represents the a posteriori information related to variable n: it takes account both of the a priori information αn and the messages βnm received by the variable n from its adjacent checks during decoding;
{circumflex over (a)}n is the hard value corresponding to the soft value {circumflex over (α)}n, in other words the decision made for bit xn.
In step 210, messages αmn are initialised for each variable n and check mεH(n) pair. Messages αmn are usually initialised by the a priori information, in other words αmn=αn, ∀mεH(n). The iteration counter Iter is also initialised to 0.
The initialisation step is followed by an iteration loop comprising the following steps:
In 220, the checks are processed. More precisely, for each check m, messages βmn from check m to the corresponding variables nεH(m) are calculated, namely:
βmn=FC({αmn′|n′εH(m)−{n}}) (7)
in which FC is the check processing function. For a given pair of nodes m and nεH(m), the message βmn is calculated as a function of the messages that the check m itself received from the variables n′εH(m)−{n}. Consequently, the information originating from the variable n is not forwarded back to it. The check processing step is also called the horizontal step.
In 230, variables are processed symmetrically. More precisely, for each variable n, messages αmn aimed at the corresponding checks mεH(n) are calculated, namely:
αmn=FV({βm′n|m′εH(n)−{m}}) (7′)
in which the variable processing function is denoted FV. For a given node pair n,mεH(n), the message αmn is calculated as a function of messages that the variable n itself received from the checks m′εH(n)−{m}, such that no extrinsic information is forwarded from a node to itself, as described above. The variable processing step is also called the vertical step.
In 240, the a posteriori information {circumflex over (α)}n is estimated from the a priori information αn and the messages βmn received by the variable n from its adjacent check nodes mεH(n), symbolically expressed as:
{circumflex over (α)}n=FAP(αn,{βmn|mεH(n)}) (8)
in which the a posteriori estimating function is denoted FAP.
In 250, a decision on hard values is taken:
{circumflex over (a)}n=FD({circumflex over (α)}n) (9)
in which the decision function is denoted FD. Typically, for a BPSK modulation, the decision is taken on the sign of the soft value, in other words {circumflex over (a)}n=sgn({circumflex over (α)}n).
In 260, it is checked whether the parity checks are satisfied by the hard values thus obtained. If they all are, decoding has succeeded and the iteration loop is exited; otherwise a new iteration is started, unless a maximum number of iterations has been reached, in which case a decoding failure is declared.
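The steps 210 to 260 above can be sketched generically, with the processing functions FC and FV passed as parameters. This is a minimal sketch with parallel scheduling; the toy single-check code and the Min-Sum-like FC below are hypothetical choices made purely for illustration.

```python
from math import prod

def decode(alpha, H_checks, H_vars, FC, FV, max_iter=50):
    """Generic message-passing loop following steps 210 to 260."""
    # 210: initialise the messages alpha_mn with the a priori information
    a = {(m, n): alpha[n] for m in H_checks for n in H_checks[m]}
    hard = {}
    for _ in range(max_iter):
        # 220: check processing (horizontal step), relation (7)
        b = {(m, n): FC([a[m, q] for q in H_checks[m] if q != n])
             for m in H_checks for n in H_checks[m]}
        # 230: variable processing (vertical step), relation (7')
        a = {(m, n): FV(alpha[n], [b[p, n] for p in H_vars[n] if p != m])
             for n in H_vars for m in H_vars[n]}
        # 240-250: a posteriori information and hard decision
        post = {n: alpha[n] + sum(b[m, n] for m in H_vars[n]) for n in H_vars}
        hard = {n: 1 if post[n] >= 0 else -1 for n in H_vars}
        # 260: stop as soon as every parity check is satisfied
        if all(prod(hard[n] for n in H_checks[m]) == 1 for m in H_checks):
            break
    return hard

# Hypothetical toy code: two variables tied by a single parity check
FC = lambda xs: (-1.0 if sum(x < 0 for x in xs) % 2 else 1.0) * min(abs(x) for x in xs)
FV = lambda a0, msgs: a0 + sum(msgs)
hard = decode({0: 1.5, 1: -0.2}, {0: [0, 1]}, {0: [0], 1: [0]}, FC, FV)
assert hard == {0: 1, 1: 1}       # the weakly received bit is corrected
```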
The order of steps in the iteration loop may be different from the order shown in
According to the principle of iterative decoding shown in
serial type scheduling, a category that can include “serial scheduling”, “shuffled-BP”, “horizontal shuffled” or “vertical shuffled” scheduling. Serial type scheduling is equally applicable to checks and to variables. For application to checks, the decoding uses the following strategy:
Similarly, variable by variable processing can be adopted instead of check by check processing. Depending on the case envisaged, the term “horizontal shuffled” or “vertical shuffled” scheduling will be used.
The two scheduling types mentioned above can also be hybridised in the form of “mixed” or “group-shuffled” scheduling. A description of the decoding strategy corresponding to mixed scheduling is given in the article by J. Zhang et al. entitled “Shuffled iterative decoding” published in IEEE Trans. on Comm., Vol. 53, No. 2, February 2005, pages 209-213. The strategy is based on a partition of nodes by groups, the processing being in parallel within a group and in series from one group to the next. More precisely, for a distribution of check groups:
Similarly, processing can be based on a partition by groups of variables, rather than a partition by groups of checks.
It will be noted that serial scheduling and parallel scheduling can be considered as special cases of mixed scheduling, the former corresponding to the case in which the groups are reduced to singletons, the latter corresponding to the case in which a single group comprises all check nodes (respectively all variable nodes).
Two main iterative message passing decoding algorithms are known for LDPC codes, namely the SPA (Sum Product Algorithm), also called “log-BP”, and the “Min-Sum” algorithm, also called “BP-based”. A detailed description of these two algorithms is given in the article by W. E. Ryan entitled “An introduction to LDPC codes”, published in the CRC Handbook for coding and signal processing for recording systems, available at the link www.csee.wvu.edu/wcrl/ldpc.htm.
The only difference between the SPA and Min-Sum algorithms is in the check-processing step that will be described later. Other steps are identical, namely:
The variable processing step 230 consists of calculating the messages αmn, where Bmn* represents all messages βm′n received by the variable n from the checks m′εH(n)−{m} and Cmn represents the event corresponding to a parity check verified for each of these checks. Provided that the yn values are independent, it can be shown that αmn can be expressed in the form of an LLR as follows:
αmn=αn+Σm′εH(n)−{m}βm′n (11)
Step 240 to estimate the a posteriori information consists of calculating {circumflex over (α)}n, where Bn represents the messages received by the variable n from all the checks of H(n), and Cn represents the event corresponding to a parity check verified for each of these checks. Based on the same assumption as above, it can be shown that {circumflex over (α)}n can be expressed in LLR form by:
{circumflex over (α)}n=αn+ΣmεH(n)βmn (13)
According to (11) and (13), it is found that:
αmn={circumflex over (α)}n−βmn (14)
Consequently, the variable processing step 230 may be placed at the end of the iteration, after the estimate of the a posteriori information. Expression (14) reflects the fact that the variable n does not return to the check m the extrinsic information (in this case βmn) that it received from it.
The hard values decision-making step 250 is done simply by:
{circumflex over (a)}n=sgn({circumflex over (α)}n) (15)
where sgn(x)=1 if x is positive and sgn(x)=−1 otherwise.
Verification of parity checks on hard values in step 260 uses the calculation of the parity checks:
cm=ΠnεH(m){circumflex over (a)}n, m=1, . . . , M (16)
All parity checks are satisfied if and only if:
cm=1, ∀mε{1, . . . , M} (17)
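Verification of the parity checks on the ±1 hard values can be sketched as follows; the small graph and the hard values used here are hypothetical:

```python
import numpy as np

def parity_checks(hard, H_sets):
    """c_m: product of the hard (+/-1) values of the variables of H(m)."""
    return [int(np.prod([hard[n] for n in Hm])) for Hm in H_sets]

hard = [1, -1, -1, 1]                 # hard values, one per variable node
H_sets = [[0, 1, 2], [1, 2, 3]]       # H(m): variables adjacent to each check m
c = parity_checks(hard, H_sets)
assert c == [1, 1]                    # all parity checks satisfied: decoding stops
assert parity_checks([1, 1, -1, 1], H_sets) == [-1, -1]  # flipped bit: both checks fail
```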
The check-processing step 220 consists, for the SPA algorithm, of calculating the probability that the parity condition of the check m is satisfied given the messages received from its variables, where cm=1 means a parity condition satisfied for the check m, and Amn* represents all messages αmn′ received by the check m from variables n′εH(m)−{n}. It can be shown that βmn is expressed in LLR form by:
βmn=Πn′εH(m)−{n}sgn(αmn′)·Φ(Σn′εH(m)−{n}Φ(|αmn′|)) (19)
where Φ(x)=−ln(tanh(x/2)).
Processing of checks according to the Min-Sum algorithm corresponds to a simplification of expression (19). Due to the fast decay of the function Φ(x) and the fact that Φ is its own inverse, i.e. Φ(Φ(x))=x, we can legitimately make the following approximation:
βmn≈Πn′εH(m)−{n}sgn(αmn′)·minn′εH(m)−{n}|αmn′| (20)
The Min-Sum decoding algorithm is significantly simpler than the SPA decoding algorithm because it only performs additions, comparisons and sign changes. Furthermore, the performance of the Min-Sum algorithm is independent of the estimate of the noise variance σ2.
Although the SPA decoding algorithm performs better than the Min-Sum algorithm, its performance can be severely degraded if the noise power is badly estimated.
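The difference between the two check-update rules can be sketched as follows, assuming the usual definition Φ(x)=−ln(tanh(x/2)); the message values are hypothetical:

```python
import numpy as np

def phi(x):
    """Phi(x) = -ln(tanh(x/2)); decreasing and involutive: phi(phi(x)) = x."""
    return -np.log(np.tanh(x / 2.0))

def check_update_spa(alpha_in):
    """SPA check update from the extrinsic messages alpha_mn', n' != n."""
    return np.prod(np.sign(alpha_in)) * phi(np.sum(phi(np.abs(alpha_in))))

def check_update_min_sum(alpha_in):
    """Min-Sum approximation: keep only the least reliable incoming message."""
    return np.prod(np.sign(alpha_in)) * np.min(np.abs(alpha_in))

msgs = np.array([1.2, -0.4, 2.5])
assert check_update_min_sum(msgs) == -0.4
# The SPA magnitude is always upper-bounded by the Min-Sum magnitude
assert abs(check_update_spa(msgs)) <= abs(check_update_min_sum(msgs))
```

Since the Min-Sum update only uses signs and the minimum magnitude, a wrong estimate of σ2 merely scales all LLRs and does not change the hard decisions, which is why it does not require the noise power.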
The general purpose of this invention is to propose an iterative message passing type decoding algorithm to decode an error correction code that could be represented by a bipartite graph, with better performance in terms of error rate and convergence rate than algorithms of the same type known in the state of the art.
A first purpose of the invention is to propose an iterative message passing type decoding algorithm to decode an LDPC code with a significantly lower complexity than the SPA algorithm, while having comparable or even better error rate performance for a given signal to noise ratio, and without requiring an estimate of the noise power.
Another particular purpose of this invention is to propose a message passing type decoding algorithm to decode LDPC codes with a higher convergence rate than SPA or Min-Sum algorithms.
This invention is defined by an iterative method by message passing for decoding of an error correction code that can be represented by a bipartite graph comprising a plurality of variable nodes and a plurality of check nodes, said method being such that for each iteration in a plurality of decoding iterations:
According to a first embodiment, for each node to be classified, said classification includes the calculation of a measurement of the reliability of the information present, sent or received by nodes at not more than a predetermined distance from this node in the bipartite graph, and sorting of values of the measurements thus obtained.
For each iteration of said plurality, the classified nodes are then processed sequentially in the order defined by said classification, and for each classified node, messages addressed to nodes adjacent to it are calculated, and for each said adjacent node, messages to nodes adjacent to said adjacent node are calculated.
According to a second embodiment, for each node to be classified, said classification includes the calculation of a measurement of the reliability of the information present, sent or received by nodes located at not more than a predetermined distance from this node in the bipartite graph. Nodes are grouped in intervals of values of said measurement.
If the reliability measurement uses integer values, then for each said integer value, indexes of nodes for which the reliability measurement is equal to this value are stored in a memory zone associated with it.
For each iteration of said plurality, node groups are processed sequentially in the order defined by said classification, and for each node group, messages to nodes adjacent to the nodes in the group are calculated, and for each of said adjacent nodes, messages to nodes themselves adjacent to said adjacent nodes are also calculated.
According to one variant, for each variable node in said bipartite graph, each iteration of said plurality also comprises a step to calculate a posteriori information as a function of the a priori information already present in this node, and messages received by this node from adjacent check nodes.
The a posteriori information calculation step may be followed by a decision step about the hard value of said variable.
The next step is to test if the hard values of variables thus obtained satisfy the parity checks associated with all check nodes in the graph, and if so, the word composed of said hard values is provided as a decoded word.
Advantageously, the classification is interrupted after a predetermined number of decoding iterations, the decoding method then continuing its decoding iterations in the absence of said classification of said variable nodes or said check nodes.
Alternatively, the classification is interrupted if the minimum of the absolute value of the differences between the a posteriori values and the a priori values of said variables is greater than a predetermined threshold value, the decoding method then continuing its decoding iterations in the absence of said classification of said variable nodes or said check nodes.
In a first application, said error correction code is a turbocode.
In a second application, said error correction code is an LDPC code (K,N) represented by a bipartite graph with N variable nodes and M=N−K check nodes.
In the latter case, the βmn message from a check node with index mε{1, . . . , M} to a variable node with index nε{1, . . . , N} can be calculated as follows:
βmn=Πn′εH(m)−{n}sgn(αmn′)·Φ(Σn′εH(m)−{n}Φ(|αmn′|))
where αmn′ denotes the message from the variable node with index n′ to the check node with index m, H(m) represents all variable nodes adjacent to the check node with index m, sgn(x)=1 if x is positive and sgn(x)=−1 otherwise, and Φ(x)=−ln(tanh(x/2)).
Alternately, the message βmn from a check node with index mε{1, . . . , M} to a variable node with index nε{1, . . . , N} can be calculated as follows:
βmn=Πn′εH(m)−{n}sgn(αmn′)·minn′εH(m)−{n}|αmn′|
where αmn′ denotes the message from the variable node with index n′ to the check node with index m, H(m) represents all variable nodes adjacent to the check node with index m, sgn(x)=1 if x is positive and sgn(x)=−1 otherwise.
According to one example embodiment, the classification applies to check nodes and said predetermined distance is equal to 2. The reliability measurement of a check node with index m is then calculated as follows:
{tilde over (f)}2(m)=c+ΣnεH(m)maxm′εH(n)−{m}(1−cm′)/2
where H(m) denotes the set of variable nodes adjacent to the check node with index m, H(n) represents the set of check nodes adjacent to the variable node with index n, Card(.) denotes the cardinality of a set, where c=0 if cm=+1 and c=δmax+1 if cm=−1 where δmax is the maximum degree of check nodes in the bipartite graph, and where cm=+1/cm=−1 mean that the parity check is/is not satisfied respectively for the check node with index m.
Finally, the invention relates to a computer program comprising software means adapted to implementing steps in the decoding method defined above when it is executed by a computer.
We will once again consider an error correction code that could be represented by a bipartite graph with N variables and M checks.
The distance between two nodes v,μ of the graph is defined as the length, expressed as a number of edges, of the shortest path connecting them, and is denoted D(v,μ). Considering the definition of a bipartite graph, it is deduced that the distance between two nodes of the same type is an even number and the distance between two nodes of different types is an odd number.
The order d neighbourhood of an arbitrary node v of a graph Γ is defined as the set Vv(d) of nodes located at a distance less than or equal to d from v, namely:
Vv(d)={μεΓ|D(μ,v)≦d} (21)
Thus, the order 0 neighbourhood of a node consists of the node itself, the order 1 neighbourhood is the combination of the order 0 neighbourhood with all nodes adjacent to this node, the order 2 neighbourhood is the combination of the order 1 neighbourhood with all nodes adjacent to nodes in the order 1 neighbourhood, and so on. For a variable node n and a check node m of the bipartite graph, the order 0 to 2 neighbourhoods of these nodes are given by:
Vn(0)={n}, Vm(0)={m} (22)
Vn(1)={n}∪H(n), Vm(1)={m}∪H(m) (23)
Vn(2)=H(n)∪(∪m′εH(n)H(m′)), Vm(2)=H(m)∪(∪n′εH(m)H(n′)) (24)
It will be noted that the singleton {n} (or {m}) does not appear in the expression of Vn(2) (or Vm(2)) because it is already included in the second term of the combination, taking account of the symmetry of the adjacency relation.
In general, the neighbourhoods of a node v of a bipartite graph can be obtained by means of the recurrence relation:
Vv(d)=Vv(d−1)∪{μεΓ|∃νεVv(d−1), D(μ,ν)=1}
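The order d neighbourhood of relation (21) can be computed by a breadth-first traversal of the bipartite graph. A minimal sketch on a hypothetical toy graph:

```python
from collections import deque

def neighbourhood(adj, v, d):
    """V_v(d): set of nodes at distance <= d from v (relation (21))."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == d:          # do not expand past distance d
            continue
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return set(dist)

# Hypothetical bipartite graph: variables n0..n2, checks m0..m1
adj = {'n0': ['m0'], 'n1': ['m0', 'm1'], 'n2': ['m1'],
       'm0': ['n0', 'n1'], 'm1': ['n1', 'n2']}
assert neighbourhood(adj, 'n0', 0) == {'n0'}
assert neighbourhood(adj, 'n0', 1) == {'n0', 'm0'}
assert neighbourhood(adj, 'n0', 2) == {'n0', 'm0', 'n1'}
```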
In the remainder of this description, the term order d reliability measurement of a node v of Γ will be used to refer to a quantity fd(v) dependent on the decoding information available in the neighbourhood Vv(d), in other words the information present, sent or received by the nodes belonging to said neighbourhood, and indicating the local degree of reliability of the decoding operation. We will qualify the measurement as “positive” when fd(v) increases as the degree of reliability increases and “negative” otherwise.
The decoding information available at a check node m comprises firstly the value cm indicating whether or not the parity check is verified, and secondly βmn messages transmitted to adjacent variable nodes nεH(m). The value cm and messages βmn carry reliability information about decoding.
The information available at a variable node n comprises the a priori information αn, the a posteriori information {circumflex over (α)}n, the extrinsic information messages αmn to be sent to adjacent check nodes mεH(n), and the hard value {circumflex over (a)}n.
The above-mentioned information, except for hard values, carries reliability information about decoding.
In general, considering soft values βmn, αmn, αn, {circumflex over (α)}n, expressed in LLR form, decoding will be considered more reliable when their absolute values are higher.
The following contains a few illustrative and non-limitative examples of reliability measurements for orders 0 to 2:
For a zero order measurement, the neighbourhood is reduced to the node itself. We can choose the following function to measure the zero order reliability of a variable node n:
f0(n)=|{circumflex over (α)}n| (25)
which indicates the degree of confidence in the hard value
For a check node m we could simply choose the following function:
{tilde over (f)}0(m)=cm (26)
It can be understood that if the parity check were satisfied (cm=1), decoding would be more reliable than if it were not satisfied.
For an order 1 measurement, variable nodes and check nodes are again treated differently. For a variable node, one of the following functions can be chosen as an order 1 reliability measurement:
f1(n)=ΣmεH(n)(1+cm)/2 (27)
f1(n)=ΣmεH(n)cm (28)
f1(n)=ΣmεH(n)(1−cm)/2 (27′)
f1(n)=−ΣmεH(n)cm (28′)
f1(n)=minmεH(n)|βmn| (29)
Expression (27) represents the number of check nodes m connected to the variable node n for which the parity check is verified.
Expression (28) takes account of the irregularity of the code (in other words the possibility that variable nodes have different degrees) and produces the balance of verified and non-verified parity checks.
Expressions (27′) and (28′) are negative measurements corresponding to positive measurements (27) and (28).
The reliability measurement expressed in (29) is conservative because it retains the lowest of the reliability values received from the adjacent check nodes.
For a check node m, we could choose one of the following functions for an order 1 reliability measurement:
{tilde over (f)}1(m)=minnεH(m)|{circumflex over (α)}n| (30)
{tilde over (f)}1(m)=minnεH(m)|αmn| (31)
{tilde over (f)}1(m)=(cm, minnεH(m)|{circumflex over (α)}n|) (32)
Expression (30) indicates the minimum of the reliabilities (a posteriori information) of variables to which check m is connected.
Expression (31) represents a similar criterion but applies to extrinsic information. It will be noted that it is the twin of the expression (29).
Expression (32) represents the Cartesian product of two items of information and consequently takes account of them jointly, the first applies to the verification of the parity check and the second to a posteriori information as in (30).
The following function can be chosen for a variable node n as an order 2 reliability measurement:
f2(n)=Card{mεH(n)|cm=−1 and |{circumflex over (α)}n|=minn′εH(m)|{circumflex over (α)}n′|} (33)
Expression (33) represents a negative decoding reliability measurement; it indicates the number of checks related to variable n for which the parity condition is not satisfied and such that the variable n is the least reliable among all variables adjacent to these checks. In fact, the measurement (33) could be considered as the composition of measurements (27′) and (30).
One of the following functions could be chosen for a check node m as an order 2 reliability measurement:
{tilde over (f)}2(m)=ΣnεH(m)Σm′εH(n)−{m}(1−cm′)/2=Card{m′εVm(2)−{m}|cm′=−1} (34)
{tilde over (f)}2(m)=ΣnεH(m)maxm′εH(n)−{m}(1−cm′)/2 (35)
{tilde over (f)}2(m)=c+ΣnεH(m)maxm′εH(n)−{m}(1−cm′)/2 (36)
where c=0 if cm=+1 and c=δmax+1 if cm=−1, where δmax is the maximum degree of the check nodes, in other words δmax=maxm′ε{1, . . . , M}Card(H(m′)).
Expression (34) gives the number of parity checks not satisfied in the order 2 neighbourhood of m, excluding m itself. It represents a negative measurement of the decoding reliability. It will be noted that the second equality of the expression (34) is only satisfied if there are no cycles with length 4 in the neighbourhood concerned. In practice, codes are used for which Tanner graphs have the longest possible cycles, such that equality is usually verified.
Expression (35) indicates the number of variables adjacent to m, connected to at least one unsatisfied check, excluding m itself. Consequently, it represents a negative measurement of the decoding reliability. Since terms under the sum sign are equal to 0 or 1, the value of the measurement is between 0 and δmax.
The expression (35) makes no distinction depending on whether or not the parity check is verified for node m. In order to take account of this information, a bias c was introduced into expression (36), with a value that depends on the verification of check m.
For the bias values mentioned above (c=0, δmax+1), the reliability measurement takes values between 0 and δmax for a verified check and between δmax+1 and 2δmax+1 for a non-verified check. Note that this is a negative reliability measurement in the sense defined above, i.e. the lower the value of {tilde over (f)}2(m), the more reliable the check m. Thus, the degree of reliability of a verified check is always greater than the degree of reliability of a non-verified check, regardless of the value of (35).
It will be noted that some of the functions defined above are integer values, and particularly functions (34) to (36). Therefore, they are particularly suitable for a microprocessor implementation.
Importantly, some of the functions defined above, and particularly functions (34) to (36), only use the hard values cm. Since these hard values depend only on the signs of the soft values, the value of the noise power (σ2) does not appear in the calculation of these functions.
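As an illustration, the biased order 2 measurement described above (expression (36)) could be sketched as follows; the adjacency sets, check values and δmax used below are hypothetical:

```python
def f2_tilde(m, H_checks, H_vars, c_vals, delta_max):
    """Negative order 2 reliability measurement of check m with bias c,
    following the description of expression (36)."""
    bias = 0 if c_vals[m] == +1 else delta_max + 1
    count = 0
    for n in H_checks[m]:             # variables adjacent to check m
        # n counts if it touches at least one unsatisfied check other than m
        if any(c_vals[mp] == -1 for mp in H_vars[n] if mp != m):
            count += 1
    return bias + count

H_checks = {0: [0, 1], 1: [1, 2]}     # H(m): variables adjacent to check m
H_vars = {0: [0], 1: [0, 1], 2: [1]}  # H(n): checks adjacent to variable n
c_vals = {0: +1, 1: -1}               # check 1 is not satisfied
delta_max = 2
assert f2_tilde(0, H_checks, H_vars, c_vals, delta_max) == 1
assert f2_tilde(1, H_checks, H_vars, c_vals, delta_max) == 3   # bias delta_max + 1
```

Only the hard values cm enter the computation, so the noise power plays no role, as noted above.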
The basic concept of the invention is to classify nodes of the graph as a function of their reliability and to do the decoding by message passing using a serial or mixed type scheme, beginning with the most reliable node or group of nodes. This considerably improves the decoding performances, both in terms of convergence rate and bit error rate (BER).
It will be understood that messages transmitted by the most reliable nodes are the most capable of efficiently contributing to decoding adjacent nodes. There is thus fast propagation of reliability from one iteration to the next, and from node to node, and correlated to this, a significant increase in the absolute value of extrinsic information as the iterations proceed.
Serial processing or mixed processing is done in step 730. For serial type processing, check nodes are processed sequentially, and variable nodes adjacent to each check node are also processed sequentially, beginning with the most reliable check node and continuing in decreasing degrees of reliability. For mixed type processing, check nodes will have been classified in reliability groups, the first group being composed of the most reliable nodes, the second group being composed of less reliable nodes, etc. Parallel processing is done on variable nodes and check nodes in the first group, then parallel processing is done on variable nodes and check nodes in the next group, and so on until there are no more groups. Steps 740, 750, 760, 765, 767, 770, 775 are identical to steps 240, 250, 260, 265, 267, 270, 275 in
Messages αmn and βmn are initialised in step 810. For example, for each variable n and check mεH(n) pair, αmn is initialised as αmn=αn. Similarly, for each check m and variable nεH(m) pair, βmn is initialised as βmn=0.
The checks are then sorted by decreasing degree of reliability in 820, using the reliability measurements {tilde over (f)}d(m). In other words, for a positive measurement checks are sorted such that {tilde over (f)}d(m0)≧{tilde over (f)}d(m1)≧ . . . ≧ {tilde over (f)}d(mM-1), and for a negative measurement they are sorted in the reverse order. The lexicographic order relation or the inverse lexicographic order relation is used for a Cartesian product measurement (for example see (32)), depending on whether the measurement is positive or negative. We also initialise a counter of check nodes j=0.
The current check m=mj is then selected in 825 and the variables nεH(m) are processed in 830, in other words αmn=FV({βm′n|m′εH(n)−{m}}) is calculated.
Step 835 then does current check processing m, in other words βmn=FC({αmn′|n′εH(m)−{n}}) is calculated for nεH(m).
According to one variant embodiment, steps 830 and 835 are inverted.
The check node counter is incremented in step 837 and a test is made in step 839 to find if they have all been processed. If not, the processing loops back to step 825, otherwise it continues with steps 840 to 860 representing the calculation of a posteriori information, the decision on hard values and the verification of parity checks respectively. Steps 840, 850, 860, 865, 867, 870, 875 are identical to steps 240, 250, 260, 265, 267, 270, 275 respectively in
According to one alternative embodiment, variables instead of checks are sorted. The internal loop 825 to 839 then applies to one variable at a time, and in the same way as before, the current variable may be processed before or after processing of the checks adjacent to it.
In step 910, messages αmn and βmn are initialised as in step 810 in
In step 920, all check nodes or variable nodes are partitioned into groups, each corresponding to a different reliability class. For example, a reliability class is defined by means of an interval of values of a reliability measurement. This embodiment is particularly advantageous when the measurement uses integer values. We can then allocate a range to an integer value or to a plurality of contiguous integer values and store the indexes of the nodes in a table as a function of their corresponding classes. We will subsequently denote the different reliability classes φ0, φ1, . . . , φω, where φ0 is the class with the highest degree of reliability and φω is the class with the lowest degree of reliability, and G0, G1, . . . , Gω are the corresponding node groups. It is assumed in the following that the partition was made on check nodes.
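For an integer-valued measurement, the partition into reliability classes can be implemented as a bucket sort, with one memory zone per measurement value, as suggested above. A minimal sketch with hypothetical measurement values (here a negative measurement, so the smallest value forms the most reliable class):

```python
from collections import defaultdict

def partition_by_reliability(measure, nodes):
    """Group node indexes by integer reliability value, one bucket per value,
    most reliable class first (negative measurement: smaller is more reliable)."""
    buckets = defaultdict(list)
    for m in nodes:
        buckets[measure[m]].append(m)
    return [buckets[v] for v in sorted(buckets)]

measure = {0: 2, 1: 0, 2: 2, 3: 5}    # hypothetical integer reliability values
groups = partition_by_reliability(measure, [0, 1, 2, 3])
assert groups == [[1], [0, 2], [3]]   # G0 is the most reliable class
```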
We also initialise the counter of reliability classes to j=0.
In step 925, we select the group of nodes corresponding to the current reliability class φj, namely Gj.
In step 930, variables nεH(m), ∀mεGj, are processed in parallel, in other words messages αmn=FV({βm′n|m′εH(n)−{m}}), ∀mεGj, ∀nεH(m) are calculated.
In step 935, checks mεGj are processed in parallel, in other words messages βmn=FC({αmn′|n′εH(m)−{n}}), ∀mεGj, ∀nεH(m) are calculated.
According to one variant embodiment, steps 930 and 935 are inverted.
In step 937, the class counter is incremented and it is tested in step 939 to determine if they have all been processed. If not, the processing loops back to step 925, otherwise it continues with steps 940 to 960 representing the calculation of a posteriori information, the decision on hard values and the verification of parity checks respectively. Steps 940, 950, 960, 965, 967, 970, 975 are identical to steps 240, 250, 260, 265, 267, 270, 275 respectively in
In step 1010, soft a posteriori values {circumflex over (α)}n are initialised by the corresponding observations {circumflex over (α)}n=αn and βmn messages are initialised by 0.
We also initialise the iteration counter to 0.
In step 1015, a decision is made about the hard values, namely {circumflex over (a)}n=sgn({circumflex over (α)}n).
Step 1020 calculates the value:
χ=Σm=1M(1+cm)/2
in which the cm values are the parity checks cm=ΠnεH(m){circumflex over (a)}n, m=1, . . . , M.
Step 1023 tests if χ=M, in other words if all parity checks are verified. If so, the decoding algorithm is exited in step 1025 by providing the code word. Otherwise, the processing continues in step 1027 by calculating the reliability measurements of the check nodes using the measurement (36). Remember that this measurement takes integer values among 0, 1, . . . , 2δmax+1 where δmax is the maximum degree of the check nodes. In this case, we assign a reliability class per integer value, namely φ0, φ1, . . . , φω where ω=2δmax+1 and where class φj corresponds to the measurement value jε{0, 1, . . . , 2δmax+1}.
In step 1029, the group of nodes corresponding to the current reliability class φj, namely Gj is chosen.
In step 1030, the processing of variables nεH(m), ∀mεGj is done in parallel, in other words messages αmn={circumflex over (α)}n−βmn, ∀mεGj, ∀nεH(m), are calculated.
Checks mεGj are processed in parallel in step 1035, in other words the messages to be sent to the variables nεH(m), ∀mεGj, are calculated, specifically:
βmn=Πn′εH(m)−{n}sgn(αmn′)·Φ(Σn′εH(m)−{n}Φ(|αmn′|))
for SPA processing and:
βmn=Πn′εH(m)−{n}sgn(αmn′)·minn′εH(m)−{n}|αmn′|
for simplified Min-Sum type processing as indicated in this case in the figure. A posteriori information is then calculated in step 1040 using:
{circumflex over (α)}n=αn+ΣmεH(n)βmn
It will be noted that some of the checks mεH(n) used in the summation possibly do not form part of the group Gj, and therefore the messages βmn originating from these checks were not recalculated in step 1035. Furthermore, since the variable n is adjacent to several checks mεH(n), the calculation of {circumflex over (α)}n will be made several times as the checks of H(n) are processed. In order to prevent the sum in step 1040 from being calculated several times, and some messages βmn that have not changed from being recalculated each time, steps 1035 and 1040 can advantageously be replaced by a single step including the following operations:
∀mεGj, ∀nεH(m): the new message βmnnew=FC({αmn′|n′εH(m)−{n}}) is calculated, the a posteriori information is updated by {circumflex over (α)}n←{circumflex over (α)}n+βmnnew−βmn, and the stored message βmn is then replaced by βmnnew.
In this way, the a posteriori information {circumflex over (α)}n is updated as the new messages sent by checks mεH(n) to the variable n are calculated. This can significantly reduce the number of operations carried out for the a posteriori information calculation.
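The combined step described above, in which the a posteriori information is updated incrementally as each new message is produced, could be sketched as follows; the check function, graph and message values used here are hypothetical:

```python
def min_sum(xs):
    """Min-Sum check function: product of the signs times the minimum magnitude."""
    sign, mag = 1.0, float('inf')
    for x in xs:
        sign = -sign if x < 0 else sign
        mag = min(mag, abs(x))
    return sign * mag

def process_check(m, H_checks, alpha_hat, beta, check_fn):
    """Process check m with the incremental a posteriori update:
    alpha_mn = alpha_hat_n - beta_mn, then alpha_hat_n += beta_new - beta_old."""
    ext = {n: [alpha_hat[q] - beta[(m, q)] for q in H_checks[m] if q != n]
           for n in H_checks[m]}              # extrinsic inputs, taken before updating
    for n in H_checks[m]:
        new_beta = check_fn(ext[n])
        alpha_hat[n] += new_beta - beta[(m, n)]  # incremental update of alpha_hat_n
        beta[(m, n)] = new_beta

H_checks = {0: [0, 1, 2]}
alpha_hat = {0: 1.0, 1: -2.0, 2: 3.0}
beta = {(0, n): 0.0 for n in H_checks[0]}
process_check(0, H_checks, alpha_hat, beta, min_sum)
assert alpha_hat == {0: -1.0, 1: -1.0, 2: 2.0}
```

Each a posteriori value is touched once per adjacent check, instead of recomputing the full sum of step 1040 after every check.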
The next step 1043 is to increment the reliability class counter and then to test if all classes have been processed in step 1045. If not, step 1029 is repeated to process the next reliability class. If so, the iteration counter is incremented in step 1050. Step 1053 tests if the stop criterion is satisfied, in other words if the maximum number of iterations has been reached. If so, the loop is terminated in step 1055, concluding that decoding has failed. Otherwise, processing loops back to step 1015 for a new iteration.
We can see in
The algorithm shown in
It is observed that the Min-Sum-FV algorithm according to the invention outperforms the conventional SPA algorithm, even though its complexity is significantly lower.
In the decoding method according to the invention, the sorting or classification of nodes may also be interrupted during decoding. For example, if:
min1≤n≤N|{circumflex over (α)}n−αn|>Tf
where Tf represents a minimum reliability threshold, it would be possible to return to a conventional Min-Sum or SPA processing. This variant avoids the sorting or classification of nodes as soon as the decoded values are sufficiently reliable and convergence is assured.
This invention is applicable to the decoding of error correction codes that can be represented by a bipartite graph, and particularly LDPC codes or turbocodes. It can be used in the field of data recording or telecommunications, and particularly for telecommunication systems that already use LDPC codes, for example telecommunication systems complying with the IEEE 802.3an (10 Gbit/s Ethernet), DVB-S2 (satellite video broadcasting) and IEEE 802.16 (WiMAX) standards, or that could use these codes, for example systems satisfying the IEEE 802.11 (WLAN) and IEEE 802.20 (Mobile Broadband Wireless Access) standards.
Number | Date | Country | Kind |
---|---|---|---|
06 53148 | Jul 2006 | FR | national |
Filing Document | Filing Date | Country | Kind | 371c Date |
---|---|---|---|---|
PCT/EP2007/057644 | 7/25/2007 | WO | 00 | 1/21/2009 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2008/012318 | 1/31/2008 | WO | A |
Number | Date | Country | |
---|---|---|---|
20090313525 A1 | Dec 2009 | US |