Disclosed herein are a method, and associated devices, for Low-Density Parity-Check (LDPC) decoding that overcome non-convergence due to trapping sets.
Error Correction Codes (ECCs) are commonly used in communication systems and in storage systems. Various physical phenomena occurring both in communication channels and in storage devices result in noise effects that corrupt the communicated or stored information. Error correction coding schemes can be used for protecting the communicated or stored information against the resulting errors. This is done by encoding the information before transmission through the communication channel or storage in the memory device. The encoding process transforms the information bits sequence into a codeword by adding redundancy to the information. This redundancy can then be used in order to recover the information from the possibly corrupted codeword through a decoding process.
In both communication systems and storage systems an information bit sequence i is encoded into a coded bit sequence v that is modulated or mapped into a sequence of symbols x that is adapted to the communication channel or to the memory device. At the output of the communication channel or memory device a sequence of symbols y is obtained. An ECC decoder of the system decodes the sequence y and recovers the bit sequence î, which should reconstruct the original information bit sequence i with high probability.
A common ECC family is the family of linear binary block codes. A length N linear binary block code of dimension K is a linear mapping of length K information bit sequences into length N codewords, where N>K. The rate of the code is defined as R=K/N. The encoding process of a codeword v of dimension 1×N is usually done by multiplying the information bits sequence i of dimension 1×K by a generator matrix G of dimension K×N according to
v=i·G (1)
It is also customary to define a parity-check matrix H of dimension M×N, where M=N−K. The parity-check matrix is related to the generator matrix through the following equation:
G·H^T=0 (2)
The parity-check matrix can be used in order to check whether a length N binary vector is a valid codeword. A 1×N binary vector v belongs to the code if and only if the following equation holds:
H·v′=0 (3)
(In equation (3), the prime on v′ means that v′ is a column vector.)
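For illustration only, the following sketch exercises equations (1)-(3) on a small toy (7,4) code in systematic form; the specific matrices, and the use of numpy arithmetic over GF(2), are assumptions made for this example and are much smaller and denser than a practical LDPC code.

```python
import numpy as np

# Toy (7,4) systematic code; all arithmetic is over GF(2), i.e. modulo 2.
K, N = 4, 7
P = np.array([[1, 1, 0],
              [1, 0, 1],
              [0, 1, 1],
              [1, 1, 1]])
G = np.hstack([np.eye(K, dtype=int), P])           # K x N generator matrix
H = np.hstack([P.T, np.eye(N - K, dtype=int)])     # (N-K) x N parity-check matrix

assert not (G @ H.T % 2).any()                     # equation (2): G * H^T = 0

i = np.array([1, 0, 1, 1])                         # information bit sequence
v = i @ G % 2                                      # equation (1): v = i * G

syndrome = H @ v % 2                               # equation (3): H * v' = 0
print(v, syndrome)                                 # the syndrome is all-zero for a valid codeword
```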
In recent years iterative coding schemes have become very popular. In these schemes the code is constructed as a concatenation of several simple constituent codes and is decoded using an iterative decoding algorithm by exchanging information between the constituent decoders of the simple codes. Usually, the code can be defined using a bipartite graph describing the interconnections between the constituent codes. In this case, decoding can be viewed as an iterative message passing over the graph edges.
A popular class of iterative codes is Low-Density Parity-Check (LDPC) codes. An LDPC code is a linear binary block code defined by a sparse parity-check matrix H. As shown in
Next to the first and last check nodes of
LDPC codes can be decoded using iterative message passing decoding algorithms. These algorithms operate by exchanging messages between bit nodes and check nodes along the edges of the underlying bipartite graph that represents the code. The decoder is provided with initial estimates of the codeword bits (based on the communication channel output or based on the read memory content). These initial estimates are refined and improved by imposing the parity-check constraints that the bits should satisfy as a valid codeword (according to equation (3)). This is done by exchanging information between the bit nodes representing the codeword bits and the check nodes representing parity-check constraints on the codeword bits, using the messages that are passed along the graph edges.
In iterative decoding algorithms, it is common to utilize “soft” bit estimations, which convey both the bit estimations and the reliabilities of the bit estimations.
The bit estimations conveyed by the messages passed along the graph edges can be expressed in various forms. A common measure for expressing a "soft" bit estimation is the Log-Likelihood Ratio (LLR)

LLR=log[Pr(v=0|current constraints and observations)/Pr(v=1|current constraints and observations)],
where the "current constraints and observations" are the various parity-check constraints taken into account in computing the message at hand and the observations y corresponding to the bits participating in these parity checks. Without loss of generality, for simplicity we assume hereinafter that LLR messages are used throughout. The sign of the LLR provides the bit estimation (i.e., a positive LLR corresponds to v=0 and a negative LLR corresponds to v=1). The magnitude of the LLR provides the reliability of the estimation (i.e., |LLR|=0 means that the estimation is completely unreliable and |LLR|=∞ means that the estimation is completely reliable and the bit value is known).
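As a minimal illustration of this convention, the helper functions below (hypothetical names, not part of the disclosure) map an LLR to a hard bit decision and a reliability, and form an LLR from a probability.

```python
import math

def hard_decision(llr: float) -> int:
    """The sign of the LLR gives the bit estimate: LLR >= 0 -> 0, LLR < 0 -> 1."""
    return 0 if llr >= 0 else 1

def reliability(llr: float) -> float:
    """The magnitude of the LLR gives the reliability of the estimate."""
    return abs(llr)

def llr_from_probability(p0: float) -> float:
    """LLR = log(Pr(v=0)/Pr(v=1)) for a bit with Pr(v=0) = p0."""
    return math.log(p0 / (1.0 - p0))

print(llr_from_probability(0.9), hard_decision(llr_from_probability(0.9)))   # ~ +2.2 -> bit 0
print(llr_from_probability(0.2), hard_decision(llr_from_probability(0.2)))   # ~ -1.4 -> bit 1
```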
Usually, the messages passed during the decoding along the graph edges between bit nodes and check nodes are extrinsic. An extrinsic message m passed from a node n on an edge e takes into account all the values received on edges connected to n other than edge e (this is why the message is called extrinsic: it is based only on new information).
One example of a message passing decoding algorithm is the Belief-Propagation (BP) algorithm, which is considered to be the best algorithm from among this family of message passing algorithms.
Let

Pv=log[Pr(v=0|y)/Pr(v=1|y)]

denote the initial decoder estimation for bit v, based only on the received or read symbol y. Note that it is also possible that some of the bits are not transmitted through the communication channel or stored in the memory device, hence there is no y observation for these bits. In this case, there are two possibilities: 1) shortened bits—the bits are known a-priori and Pv=±∞ (depending on whether the bit is 0 or 1). 2) punctured bits—the bits are unknown a-priori and

Pv=log[Pr(v=0)/Pr(v=1)],

where Pr(v=0) and Pr(v=1) are the a-priori probabilities that the bit v is 0 or 1 respectively. Assuming the information bits have equal a-priori probabilities to be 0 or 1 and assuming the code is linear, then Pv=log(0.5/0.5)=0.
Let

Qv=log[Pr(v=0|y, H·v′=0)/Pr(v=1|y, H·v′=0)]

denote the final decoder estimation for bit v, based on the entire received or read sequence y and assuming that bit v is part of a codeword (i.e., assuming H·v′=0).
Let Qvc denote a message from bit node v to check node c. Let Rcv denote a message from check node c to bit node v.
The BP algorithm utilizes the following update rules for computing the messages:
The bit node to check node computation rule is:

Qvc = Pv + Σc′∈N(v,G)\c Rc′v (4)

Here, N(n, G) denotes the set of neighbors of a node n in the graph G and c′ ∈ N(v, G)\c refers to those neighbors excluding node 'c' (the summation is over all neighbors except c).
The check node to bit node computation rule is:

Rcv = φ^−1( Σv′∈N(c,G)\v φ(Qv′c) ) (5)

where φ(x) = {sign(x), −log(tanh(|x|/2))} and operations in the φ domain are done over the group {0,1}×R+ (this basically means that the summation here is defined as summation over the magnitudes and XOR over the signs). Analogous to the notation of equation (4), N(c, G) denotes the set of bit node neighbors of a check node c in the graph G and v′ ∈ N(c, G)\v refers to those neighbors excluding node 'v' (the summation is over all neighbors except v).
The final decoder estimation for bit v is:

Qv = Pv + Σc′∈N(v,G) Rc′v (6)
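A minimal sketch of these three computation rules follows, using the φ-domain formulation of equation (5) with φ(x)=−log(tanh(|x|/2)). The dictionary-based message containers and the function names are assumptions made for this sketch, not the notation of the disclosure.

```python
import math

def phi(x):
    """phi(x) = -log(tanh(|x|/2)); phi is its own inverse on positive magnitudes."""
    x = max(abs(x), 1e-12)                    # guard against log(0)
    return -math.log(math.tanh(x / 2.0))

def bit_to_check(Pv, R_in, exclude_c):
    """Equation (4): Qvc = Pv plus the sum of Rc'v over the neighbours c' of v, excluding c.
    R_in maps each neighbouring check node id to its incoming Rcv message."""
    return Pv + sum(r for c, r in R_in.items() if c != exclude_c)

def check_to_bit(Q_in, exclude_v):
    """Equation (5): magnitudes are summed in the phi domain and signs combine by XOR
    (product of signs), over the neighbours v' of c, excluding v.
    Q_in maps each neighbouring bit node id to its incoming Qvc message."""
    sign, mag = 1.0, 0.0
    for v, q in Q_in.items():
        if v == exclude_v:
            continue
        sign *= -1.0 if q < 0 else 1.0
        mag += phi(q)
    return sign * phi(mag)                    # phi applied again acts as phi^-1

def final_estimate(Pv, R_in):
    """Equation (6): Qv = Pv plus the sum of Rcv over all neighbouring check nodes."""
    return Pv + sum(R_in.values())
```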
The order of passing messages during message passing decoding is called the decoding schedule. BP decoding does not imply utilizing a specific schedule—it only defines the computation rules (equations (4), (5) and (6)). The decoding schedule does not affect the expected error correction capability of the code. However, the decoding schedule can significantly influence the convergence rate of the decoder and the complexity of the decoder.
The standard message-passing schedule for decoding an LDPC code is the flooding schedule, in which in each iteration all the variable nodes, and subsequently all the check nodes, pass new messages to their neighbors (R. G. Gallager, Low-Density Parity-Check Codes, Cambridge, Mass.: MIT Press, 1963). The standard BP algorithm based on the flooding schedule is given in
The standard implementation of the BP algorithm based on the flooding schedule is expensive in terms of memory requirements. We need to store a total of 2|V|+2|E| messages (for storing the Pv, Qv, Qvc and Rcv messages). Moreover, the flooding schedule exhibits a low convergence rate and hence requires higher decoding logic (e.g., more processors on an ASIC) for providing a required error correction capability at a given decoding throughput.
More efficient serial message passing decoding schedules are known. In a serial message passing schedule, the bit or check nodes are serially traversed and, for each node, the corresponding messages are sent into and out from the node. For example, a serial schedule can be implemented by serially traversing the check nodes in the graph in some order and sending, for each check node c ∈ C, the following messages:
1. Qvc for each v ∈ N(c) (i.e., all Qvc messages into the node c)
2. Rcv for each v ∈ N(c) (i.e., all Rcv messages from node c)
Serial schedules, in contrast to the flooding schedule, enable immediate and faster propagation of information on the graph, resulting in faster convergence (approximately two times faster). Moreover, a serial schedule can be efficiently implemented with a significant reduction of memory requirements. This can be achieved by using the Qv messages and the Rcv messages in order to compute the Qvc messages on the fly, thus avoiding the need for an additional memory for storing the Qvc messages. This is done by expressing Qvc as (Qv−Rcv) based on equations (4) and (6). Furthermore, the same memory that is initialized with the a-priori messages Pv is used for storing the iteratively updated Qv a-posteriori messages. An additional reduction in memory requirements is obtained because in the serial schedule we only need to use the knowledge of N(c) ∀c ∈ C, while in the standard implementation of the flooding schedule we use both data structures, N(c) ∀c ∈ C and N(v) ∀v ∈ V, requiring twice as much memory for storing the code's graph structure. The serially scheduled decoding algorithm appears in
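The following sketch illustrates one serially scheduled iteration over the check nodes, storing only the Qv estimates and the Rcv messages and recomputing Qvc on the fly as Qv−Rcv, as described above. The data structures (an adjacency list N_c and an edge-keyed dictionary R) are illustrative assumptions; check_to_bit stands for the equation-(5) rule, e.g. the function sketched earlier.

```python
def serial_iteration(Qv, R, N_c, check_to_bit):
    """One serially scheduled iteration over the check nodes.

    Qv  : list of a-posteriori LLRs, one per bit (initialized with the Pv values)
    R   : dict keyed by (c, v) holding the Rcv messages (initialized to zero)
    N_c : adjacency list; N_c[c] is the list of bit nodes attached to check node c
    check_to_bit : the equation-(5) rule, e.g. the function sketched earlier
    """
    for c, bits in enumerate(N_c):
        # Qvc is recomputed on the fly as Qv - Rcv (equations (4) and (6)),
        # so the Qvc messages never need to be stored.
        Q_in = {v: Qv[v] - R[(c, v)] for v in bits}
        for v in bits:
            new_Rcv = check_to_bit(Q_in, exclude_v=v)
            Qv[v] = Q_in[v] + new_Rcv        # update the a-posteriori estimate in place
            R[(c, v)] = new_Rcv
```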
To summarize, serial decoding schedules have the following advantages over the flooding schedule: faster convergence (approximately half the number of iterations are needed) and reduced memory requirements.
The methods described herein are applicable to correcting errors in data in at least two different circumstances. One circumstance is that in which data are retrieved from a storage medium. The other circumstance is that in which data are received from a transmission medium. Both a storage medium and a transmission medium are special cases of a “channel” that adds errors to the data. The concepts of “retrieving” and “receiving” data are generalized herein to the concept of “importing” data. Both “retrieving” data and “receiving” data are special cases of “importing” data from a channel.
The data that are decoded by the methods presented herein are a representation of a codeword. The data are only a “representation” of the codeword, and not the codeword itself, because the codeword might have been corrupted by noise in the channel before one of the methods is applied for decoding.
Iterative coding systems exhibit an undesired effect called error floor as shown in
It is well known that the error correction capability and the error floor of an iterative coding system improve as the code length increases (this is true for any ECC system, but especially for iterative coding systems, in which the error correction capability is rather poor at short code lengths).
However, in conventional implementations of iterative coding systems, the memory complexity of the decoding hardware is proportional to the code length; hence using long codes incurs high complexity, even in the most efficient implementations known (e.g. serially scheduled decoders).
Therefore, presented herein are methods for implementing extremely long LDPC codes that provide very low error floor and near optimal error correction capability, using low complexity decoding hardware.
While properly designed LDPC codes are very powerful, and can correct a large number of errors in a code word, a phenomenon known as “trapping sets” may cause the decoder to fail, and increase the error floor of the code, even though the number of incorrect bits may be very small and may be confined to certain regions in the graph. Trapping sets are not well defined for general LDPC codes, but have been described as: “These are sets with a relatively small number of variable nodes such that the induced sub-graph has only a small number of odd degree check nodes.”
Trapping sets are related to the topology of the LDPC graph and to the specific decoding algorithm used, are hard to avoid and are hard to analyze.
Trapping sets are a problem in the field of storage because, historically, the reliability required of storage devices is relatively high, for example 1 bit error per 10^14 stored bits. The result is that codes employed in memory devices such as flash memory devices should exhibit a low error floor, but trapping sets increase the error floor.
Therefore, one embodiment provided herein is a method of decoding a representation of a codeword that encodes K information bits as N>K codeword bits, the method including: (a) importing the representation of the codeword from a channel; (b) in a plurality of decoding iterations, updating estimates of the codeword bits by steps including, in a graph that includes N bit nodes and N−K check nodes, exchanging messages between the bit nodes and the check nodes; and (c) if (i) the decoding has failed to converge according to a predetermined failure criterion, and (ii) the estimates of the codeword bits satisfy a criterion symptomatic of the graph including a trapping set: re-setting at least a portion of the messages before continuing the iterations.
Another embodiment provided herein is a method of decoding a representation of a codeword that encodes K information bits as N>K codeword bits, the method including: (a) importing the representation of the codeword from a channel; (b) in a plurality of decoding iterations, updating estimates of the codeword bits by steps including, in a graph that includes N bit nodes and N−K check nodes, exchanging messages between the bit nodes and the check nodes; and (c) if according to a predetermined failure criterion, the decoding fails to converge, truncating at least a portion of the messages that are sent from the bit nodes before continuing the iterations.
Another embodiment provided herein is a decoder for decoding a representation of a codeword that encodes K information bits as N>K codeword bits, including a processor for decoding the representation of the codeword by executing an algorithm for updating estimates of the codeword by steps including: (a) in a plurality of decoding iterations, updating estimates of the codeword bits by steps including, in a graph that includes N bit nodes and N−K check nodes, exchanging messages between the bit nodes and the check nodes; and (b) if (i) the decoding has failed to converge according to a predetermined failure criterion, and (ii) the estimates of the codeword bits satisfy a criterion symptomatic of the graph including a trapping set: re-setting at least a portion of the messages before continuing the iterations.
Another embodiment provided herein is a decoder for decoding a representation of a codeword that encodes K information bits as N>K codeword bits, including a processor for decoding the representation of the codeword by executing an algorithm for updating estimates of the codeword by steps including: (a) in a plurality of decoding iterations, updating estimates of the codeword bits by steps including, in a graph that includes N bit nodes and N−K check nodes, exchanging messages between the bit nodes and the check nodes; and (b) if, according to a predetermined failure criterion, the decoding fails to converge, truncating at least a portion of the messages that are sent from the bit nodes before continuing the iterations.
Another embodiment provided herein is a memory controller including: (a) an encoder for encoding K information bits as a codeword of N>K codeword bits; and (b) a decoder including a processor for decoding the representation of the codeword by executing an algorithm for updating estimates of the codeword by steps including: (i) in a plurality of decoding iterations, updating estimates of the codeword bits by steps including, in a graph that includes N bit nodes and N−K check nodes, exchanging messages between the bit nodes and the check nodes, and (ii) if (A) the decoding has failed to converge according to a predetermined failure criterion, and (B) the estimates of the codeword bits satisfy a criterion symptomatic of the graph including a trapping set: re-setting at least a portion of the messages before continuing the iterations.
Another embodiment provided herein is a memory controller including: (a) an encoder for encoding K information bits as a codeword of N>K codeword bits; and (b) a decoder including a processor for decoding the representation of the codeword by executing an algorithm for updating estimates of the codeword by steps including: (i) in a plurality of decoding iterations, updating estimates of the codeword bits by steps including, in a graph that includes N bit nodes and N−K check nodes, exchanging messages between the bit nodes and the check nodes; and (ii) if, according to a predetermined failure criterion, the decoding fails to converge, truncating at least a portion of the messages that are sent from the bit nodes before continuing the iterations.
Another embodiment provided herein is a receiver including: (a) a demodulator for demodulating a message received from a communication channel, thereby producing a representation of a codeword that encodes K information bits as N>K codeword bits; and (b) a decoder including a processor for decoding the representation of the codeword by executing an algorithm for updating estimates of the codeword by steps including: (i) in a plurality of decoding iterations, updating estimates of the codeword bits by steps including, in a graph that includes N bit nodes and N−K check nodes, exchanging messages between the bit nodes and the check nodes, and (ii) if (A) the decoding has failed to converge according to a predetermined failure criterion, and (B) the estimates of the codeword bits satisfy a criterion symptomatic of the graph including a trapping set: re-setting at least a portion of the messages before continuing the iterations.
Another embodiment provided herein is a receiver including: (a) a demodulator for demodulating a message received from a communication channel, thereby producing a representation of a codeword that encodes K information bits as N>K codeword bits; and (b) a decoder including a processor for decoding the representation of the codeword by executing an algorithm for updating estimates of the codeword by steps including: (i) in a plurality of decoding iterations, updating estimates of the codeword bits by steps including, in a graph that includes N bit nodes and N−K check nodes, exchanging messages between the bit nodes and the check nodes; and (ii) if, according to a predetermined failure criterion, the decoding fails to converge, truncating at least a portion of the messages that are sent from the bit nodes before continuing the iterations.
Another embodiment provided herein is a communication system for transmitting and receiving a message, including: (a) a transmitter including: (i) an encoder for encoding K information bits of the message as a codeword of N>K codeword bits, and (ii) a modulator for transmitting the codeword via a communication channel as a modulated signal; and (b) a receiver including: (i) a demodulator for receiving the modulated signal from the communication channel and for demodulating the modulated signal, thereby providing a representation of the codeword, and (ii) a decoder including a processor for decoding the representation of the codeword by executing an algorithm for updating estimates of the codeword by steps including: (A) in a plurality of decoding iterations, updating estimates of the codeword bits by steps including, in a graph that includes N bit nodes and N−K check nodes, exchanging messages between the bit nodes and the check nodes, and (B) if (I) the decoding has failed to converge according to a predetermined failure criterion, and (II) the estimates of the codeword bits satisfy a criterion symptomatic of the graph including a trapping set: re-setting at least a portion of the messages before continuing the iterations.
Another embodiment provided herein is a communication system for transmitting and receiving a message, including: (a) a transmitter including: (i) an encoder for encoding K information bits of the message as a codeword of N>K codeword bits, and (ii) a modulator for transmitting the codeword via a communication channel as a modulated signal; and (b) a receiver including: (i) a demodulator for receiving the modulated signal from the communication channel and for demodulating the modulated signal, thereby providing a representation of the codeword, and (ii) a decoder including a processor for decoding the representation of the codeword by executing an algorithm for updating estimates of the codeword by steps including: (A) in a plurality of decoding iterations, updating estimates of the codeword bits by steps including, in a graph that includes N bit nodes and N−K check nodes, exchanging messages between the bit nodes and the check nodes; and (B) if, according to a predetermined failure criterion, the decoding fails to converge, truncating at least a portion of the messages that are sent from the bit nodes before continuing the iterations.
Another embodiment provided herein is a method of decoding a representation of a codeword that encodes K information bits as N>K codeword bits, the method including: (a) importing the representation of the codeword from a channel; (b) providing a parity check matrix having N−K rows and N columns; (c) in a plurality of decoding iterations, updating estimates of the codeword bits by steps including exchanging messages between the rows and the columns of the matrix; and (d) if (i) the decoding has failed to converge according to a predetermined failure criterion, and (ii) the estimates of the codeword bits satisfy a criterion symptomatic of the parity check matrix including a trapping set: re-setting at least a portion of the messages before continuing the iterations.
Another embodiment provided herein is a method of decoding a representation of a codeword that encodes K information bits as N>K codeword bits, the method including: (a) importing the representation of the codeword from a channel; (b) providing a parity check matrix having N−K rows and N columns; (c) in a plurality of decoding iterations, updating estimates of the codeword bits by steps including exchanging messages between the rows and the columns; and (d) if according to a predetermined failure criterion, the decoding fails to converge, truncating at least a portion of the messages that are sent from the columns before continuing the iterations.
Another embodiment provided herein is a decoder for decoding a representation of a codeword that encodes K information bits as N>K codeword bits, including a processor for decoding the representation of the codeword by executing an algorithm for updating estimates of the codeword by steps including: (a) providing a parity check matrix having N−K rows and N columns; (b) in a plurality of decoding iterations, updating estimates of the codeword bits by steps including exchanging messages between the rows and the columns; and (c) if (i) the decoding has failed to converge according to a predetermined failure criterion, and (ii) the estimates of the codeword bits satisfy a criterion symptomatic of the parity check matrix including a trapping set: re-setting at least a portion of the messages before continuing the iterations.
Another embodiment provided herein is a decoder for decoding a representation of a codeword that encodes K information bits as N>K codeword bits, including a processor for decoding the representation of the codeword by executing an algorithm for updating estimates of the codeword by steps including: (a) providing a parity check matrix having N−K rows and N columns; (b) in a plurality of decoding iterations, updating estimates of the codeword bits by steps including exchanging messages between the rows and the columns; and (c) if, according to a predetermined failure criterion, the decoding fails to converge, truncating at least a portion of the messages that are sent from the columns before continuing the iterations.
Another embodiment provided herein is a memory controller including: (a) an encoder for encoding K information bits as a codeword of N>K codeword bits; and (b) a decoder including a processor for decoding the representation of the codeword by executing an algorithm for updating estimates of the codeword by steps including: (i) providing a parity check matrix having N−K rows and N columns; (ii) in a plurality of decoding iterations, updating estimates of the codeword bits by steps including exchanging messages between the rows and the columns, and (iii) if (A) the decoding has failed to converge according to a predetermined failure criterion, and (B) the estimates of the codeword bits satisfy a criterion symptomatic of the parity check matrix including a trapping set: re-setting at least a portion of the messages before continuing the iterations.
Another embodiment provided herein is a memory controller including: (a) an encoder for encoding K information bits as a codeword of N>K codeword bits; and (b) a decoder including a processor for decoding the representation of the codeword by executing an algorithm for updating estimates of the codeword by steps including: (i) providing a parity check matrix having N−K rows and N columns; (ii) in a plurality of decoding iterations, updating estimates of the codeword bits by steps including exchanging messages between the rows and the columns; and (iii) if, according to a predetermined failure criterion, the decoding fails to converge, truncating at least a portion of the messages that are sent from the columns before continuing the iterations.
Another embodiment provided herein is a receiver including: (a) a demodulator for demodulating a message received from a communication channel, thereby producing a representation of a codeword that encodes K information bits as N>K codeword bits; and (b) a decoder including a processor for decoding the representation of the codeword by executing an algorithm for updating estimates of the codeword by steps including: (i) providing a parity check matrix having N−K rows and N columns; (ii) in a plurality of decoding iterations, updating estimates of the codeword bits by steps including exchanging messages between the rows and the columns, and (iii) if (A) the decoding has failed to converge according to a predetermined failure criterion, and (B) the estimates of the codeword bits satisfy a criterion symptomatic of the parity check matrix including a trapping set: re-setting at least a portion of the messages before continuing the iterations.
Another embodiment provided herein is a receiver including: (a) a demodulator for demodulating a message received from a communication channel, thereby producing a representation of a codeword that encodes K information bits as N>K codeword bits; and (b) a decoder including a processor for decoding the representation of the codeword by executing an algorithm for updating estimates of the codeword by steps including: (i) providing a parity check matrix having N−K rows and N columns; (ii) in a plurality of decoding iterations, updating estimates of the codeword bits by steps including exchanging messages between the rows and the columns; and (iii) if, according to a predetermined failure criterion, the decoding fails to converge, truncating at least a portion of the messages that are sent from the columns before continuing the iterations.
Another embodiment provided herein is a communication system for transmitting and receiving a message, including: (a) a transmitter including: (i) an encoder for encoding K information bits of the message as a codeword of N>K codeword bits, and (ii) a modulator for transmitting the codeword via a communication channel as a modulated signal; and (b) a receiver including: (i) a demodulator for receiving the modulated signal from the communication channel and for demodulating the modulated signal, thereby providing a representation of the codeword, and (ii) a decoder including a processor for decoding the representation of the codeword by executing an algorithm for updating estimates of the codeword by steps including: (A) providing a parity check matrix having N−K rows and N columns; (B) in a plurality of decoding iterations, updating estimates of the codeword bits by steps including exchanging messages between the rows and the columns, and (C) if (I) the decoding has failed to converge according to a predetermined failure criterion, and (II) the estimates of the codeword bits satisfy a criterion symptomatic of the parity check matrix including a trapping set: re-setting at least a portion of the messages before continuing the iterations.
Another embodiment provided herein is a communication system for transmitting and receiving a message, including: (a) a transmitter including: (i) an encoder for encoding K information bits of the message as a codeword of N>K codeword bits, and (ii) a modulator for transmitting the codeword via a communication channel as a modulated signal; and (b) a receiver including: (i) a demodulator for receiving the modulated signal from the communication channel and for demodulating the modulated signal, thereby providing a representation of the codeword, and (ii) a decoder including a processor for decoding the representation of the codeword by executing an algorithm for updating estimates of the codeword by steps including: (A) providing a parity check matrix having N−K rows and N columns; (B) in a plurality of decoding iterations, updating estimates of the codeword bits by steps including exchanging messages between the rows and the columns; and (C) if, according to a predetermined failure criterion, the decoding fails to converge, truncating at least a portion of the messages that are sent from the columns before continuing the iterations.
Four general methods are provided herein for decoding a representation, that has been imported from a channel, of a codeword that encodes K information bits as N>K codeword bits.
According to the first two general methods, in a plurality of decoding iterations, estimates of the codeword bits are updated by exchanging messages between the bit nodes and the check nodes of a graph that includes N bit nodes and N−K check nodes.
According to the first general method, if the decoding has failed according to a predetermined failure criterion, and if the codeword bit estimates satisfy a criterion symptomatic of the graph including a trapping set, at least a portion of the messages are re-set before continuing the iterations.
In some embodiments of the first general method, at least a portion of the graph is partitioned into a plurality of subgraphs. At least a portion of the exchanging of the messages is effected separately within each subgraph. The associated criterion of the graph including a trapping set includes failure of the decoding to converge in only one of the subgraphs.
Another criterion of the graph including a trapping set is that at most about one percent of the elements of a syndrome of the codeword bit estimates are non-zero and constant in two consecutive iterations.
The re-setting of the at least portion of the messages preferably includes setting at least a portion of the messages to be sent from the check nodes, and/or truncating at least a portion of the messages to be sent from the bit nodes. Most preferably, the re-setting includes setting all the messages to be sent from the check nodes to zero, and/or truncating all the messages to be sent from the bit nodes. Preferably, the messages that are to be sent from the bit nodes are log likelihood ratios, of which the messages that are truncated are truncated to a magnitude of at most between about 10 and about 16.
According to the second general method, if, according to a predetermined failure criterion, the decoding fails to converge, at least a portion of the messages that are sent from the bit nodes are truncated before continuing the iterations.
One preferred failure criterion includes at least a predetermined number of elements (e.g. one element) of a syndrome of the codeword bit estimates being non-zero, for example after a pre-determined number of iterations, or after a pre-determined time, or after a pre-determined number of exchanges of messages between the bit nodes and the check nodes. Another preferred failure criterion includes at most a predetermined number of elements of a syndrome of the codeword bit estimates remaining non-zero in two consecutive iterations. Another preferred failure criterion includes the difference between the numbers of non-zero elements of a syndrome of the codeword bit estimates after two consecutive iterations being less than a predetermined limit. Another preferred failure criterion includes the Hamming distance between the codeword bit estimates before and after a predetermined number of consecutive iterations (e.g. before and after a single iteration) being less than a predetermined limit.
Preferably, all the messages that are sent from the bit nodes are truncated.
Preferably, the messages are log likelihood ratios and the messages that are truncated are truncated to a magnitude of at most between about 10 and about 16.
As noted above, the graphical representation of LDPC decoding is equivalent to a matrix representation, as illustrated in
According to the third general method, if the decoding has failed according to a predetermined failure criterion, and if the codeword bit estimates satisfy a criterion symptomatic of the parity check matrix including a trapping set, at least a portion of the messages are re-set before continuing the iterations.
According to the fourth general method, if, according to a predetermined failure criterion, the decoding fails to converge, at least a portion of the messages that are sent from the columns are truncated before continuing the iterations.
A decoder corresponding to one of the four general methods includes one or more processors for decoding the representation of the codeword by executing an algorithm for updating the codeword bit estimates according to the corresponding general method.
A memory controller corresponding to one of the four general methods includes an encoder for encoding K information bits as a codeword of N>K bits and a decoder that corresponds to the general method. Normally, such a memory controller includes circuitry for storing at least a portion of the codeword in a main memory and for retrieving a (possibly noisy) representation of the at least portion of the codeword from the main memory. A memory device corresponding to one of the four general methods includes such a memory controller and also includes the main memory.
A receiver corresponding to one of the four general methods includes a demodulator for demodulating a message received from a communication channel. The demodulator provides a representation of a codeword that encodes K information bits as N>K codeword bits. Such a receiver also includes a decoder that corresponds to the general method.
A communication system corresponding to one of the four general methods includes a transmitter and a receiver. The transmitter includes an encoder for encoding K information bits of a message as a codeword of N>K codeword bits and a modulator for transmitting the codeword via a communication channel as a modulated signal. The receiver is a receiver that corresponds to the general method.
Various embodiments are herein described, by way of example only, with reference to the accompanying drawings, wherein:
The principles and operation of low-complexity LDPC decoding and of LDPC decoding that overcomes non-convergence due to trapping sets may be better understood with reference to the drawings and the accompanying description.
In conventional decoders for LDPC codes, the memory required by the decoder is proportional to the code length N (equal to the number of variable nodes in the code's underlying graph |V|) and to the number of edges in the code's underlying graph |E|. In efficient implementations (e.g. based on serially scheduled decoders), the required memory can be as small as (|V|+|E|)*bpm bits, where |V| is the number of bit estimations, |E| is the number of edge messages and bpm is the number of bits per message stored in the memory of the decoder (note that for simplicity we assume here that the same number of bits is required for storing a bit estimation and an edge message, though this is not necessarily the case). The decoder presented herein uses a much smaller memory for implementing the decoding, storing only a small fraction of the |V| bit estimations and of the |E| edge messages simultaneously, without any degradation in the decoder's error correction capability compared to a conventional decoder, assuming sufficient decoding time is available. This is achieved by employing an appropriate decoding schedule and using the decoding hardware described herein.
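For a rough sense of scale, the following back-of-the-envelope calculation uses hypothetical values of |V|, |E| and bpm (they are not taken from the disclosure) to compare the flooding-schedule storage of 2|V|+2|E| messages with the serial-schedule storage of (|V|+|E|)*bpm bits.

```python
# Hypothetical numbers, used only to illustrate the scaling discussed above.
V   = 10_000          # number of bit nodes (code length N)
E   = 40_000          # number of edges (average bit degree of 4)
bpm = 6               # bits stored per message

flooding_bits = (2 * V + 2 * E) * bpm       # Pv, Qv, Qvc and Rcv all stored
serial_bits   = (V + E) * bpm               # Qv (re-using the Pv memory) and Rcv only

print(flooding_bits / 8 / 1024, "KiB for the flooding schedule")   # ~73 KiB
print(serial_bits / 8 / 1024, "KiB for a serial schedule")         # ~37 KiB
```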
The methods and decoders described herein operate by dividing the underlying graph representing the code into several sections and implementing the message passing decoding algorithm by sequentially processing the different sections of the graph, one or more sections at a time. At each stage during decoding, only the bit estimations and edge messages corresponding to the graph section(s) currently being processed are stored. In this way a very long LDPC code can be employed, providing near optimal error correction capability and a very low error floor, while utilizing low complexity decoding hardware.
The decoders presented herein are highly suitable for usage in memory devices, principally for the three following reasons:
This feature can be easily utilized in a memory device, because only the presently required bit observations (y) can be read from the storage device, hence there is no need for a large buffer in the memory controller in order to implement the ECC decoding. Alternatively, even if all bit observations (represented by the vector y) are read from the memory at once, the buffer required for storing them is usually much smaller than the memory required for storing the soft bit estimates (the Pv messages) required by the decoder. This way, only the part of the soft bit estimates corresponding to the graph section that is currently being processed by the decoder is generated each time, resulting in a smaller decoder memory requirement.
Consider for example an SLC Flash memory device (a Flash memory device that stores one bit per cell; "SLC" means "Single Level Cell" and actually is a misnomer because each cell supports two levels; the "S" in "SLC" refers to there being only one programmed level), in which each cell stores a single bit v and the state y read from each cell can be either 0 or 1. Then the memory needed for storing the vector y of read cell states is N bits. On the other hand, the memory required for storing all the soft bit estimates (Pv messages) can be larger (for example 6N bits if each LLR estimate is stored in 6 bits). Hence, it is more efficient to generate only the required soft bit estimates in each decoder activation. An LLR bit estimate

Pv=log[Pr(v=0|y)/Pr(v=1|y)]
for some bit v can be generated from the corresponding bit observations y that are read from the flash memory device based on an a-priori knowledge of the memory “noise”. In other words, by knowing the memory “noise” statistics we can deduce the probability that a bit v that was stored in a certain memory cell is 0/1 given that ‘y’ is read from the cell.
For example, assume that in a certain SLC Flash memory device the probability of reading the state of the cell different from the one to which it was programmed is p=10^−2. Then, if y=0,

Pv=log[(1−p)/p]=log(0.99/0.01)≈4.6,

and if y=1,

Pv=log[p/(1−p)]≈−4.6.
Furthermore, if the number of states that can be read from each cell of the flash device (represented by 'y') is 8, because the cell stores a single bit (one "hard bit") and the device is configured to read eight threshold voltage levels, equivalent to two "soft bits", then each element 'y', which requires, in the controller, storage for 3 bits, is converted to an LLR value Pv that may be represented as more than 3 bits, for example as 6 bits (BPM=Bits Per Message=6). These 6 bits are a soft bit estimate, as opposed to the 2 soft bits read from the flash cell and corresponding to this 6-bit LLR value.
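A minimal sketch of generating Pv values in this way follows. The crossover-probability handling matches the p=10^−2 example above, while the 8-level soft-read table is invented purely for illustration; a real table would be derived from the measured memory "noise" statistics.

```python
import math

def llr_from_hard_read(y: int, p: float = 1e-2) -> float:
    """Pv = log(Pr(v=0 | y) / Pr(v=1 | y)) for an SLC cell with crossover probability p."""
    return math.log((1 - p) / p) if y == 0 else math.log(p / (1 - p))

print(llr_from_hard_read(0))      # ~ +4.6
print(llr_from_hard_read(1))      # ~ -4.6

# With soft reads (e.g. eight threshold levels, i.e. a 3-bit y), the mapping is a
# small pre-computed table derived from the memory "noise" statistics.  The values
# below are invented purely for illustration.
SOFT_LLR_TABLE = [7.0, 4.5, 2.5, 0.8, -0.8, -2.5, -4.5, -7.0]

def llr_from_soft_read(y: int) -> float:
    return SOFT_LLR_TABLE[y]
```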
According to one class of embodiments, the bipartite graph G=(V,C,E) that represents the code is divided into several sections in the following way. 1) Divide the set V of bit nodes into t disjoint subsets: V1, V2, . . . , Vt (such that V=V1∪V2∪ . . . ∪Vt). 2) For each subset Vi of bit nodes, form a subset Ci of check nodes, including all of the check nodes that are connected solely to the bit nodes in Vi. 3) Form a subset CJ of external check nodes, including all of the check nodes that are not in any of the check node subsets formed so far, i.e. CJ=C\(C1∪C2∪ . . . ∪Ct). 4) Divide the graph G into t sub-graphs G1, G2, . . . , Gt such that Gi=(Vi,Ci,Ei) where Ei is the set of edges connected between bit nodes in Vi and check nodes in Ci. Denote the edges connected to the set CJ by EJ (note that EJ=E\(E1∪E2∪ . . . ∪Et)).
In these embodiments, the graph G is processed according to a special message passing schedule, by iteratively performing decoding phases, and in each decoding phase exchanging messages along the graph edges in the following order:
Decoding continues until the decoder converges to a valid codeword, satisfying all the parity-check constraints, or until a maximum number of allowed decoding phases is reached. The stopping criterion for the message passing within each sub-graph i is similar: iterate until either all the parity-check constraints within this sub-graph are satisfied or a maximum number of allowed iterations is reached. In general, the maximum allowed number of iterations may change from one sub-graph to another or from one activation of the decoder to another.
The messages sent along the edges in EJ (RCJVi messages and QvicJ messages in
Such a decoding algorithm, assuming serially scheduled message passing decoding within each sub-graph, implementing BP decoding, is summarized in
A high-level schematic block diagram of an exemplary decoder 30 according to this class of embodiments is shown in
Decoder 30 includes a plurality of processing units 42 so that the computations involved in updating the messages may be effected in parallel. An alternative embodiment with only one processing unit 42 would not include a routing layer 44.
As noted above, a serial message passing schedule traverses serially either the check nodes or the bit nodes. Decoder 30 of
An example of the graph partitioning according to this class of embodiments is shown in
It is preferred that all the sub-graphs be topologically identical, as in the example of
If need be, however, any LDPC graph G can be partitioned into sub-graphs by a greedy algorithm. The first sub-graph is constructed by selecting an arbitrary set of bit nodes. The check nodes of the first sub-graph are the check nodes that connect only to those bit nodes. The second sub-graph is constructed by selecting an arbitrary set of bit nodes from among the remaining bit nodes. Preferably, of course, the number of bit nodes in the second sub-graph is the same as the number of bit nodes in the first sub-graph. Again, the check nodes of the second sub-graph are the check nodes that connect only to the bit nodes of the second sub-graph. This arbitrary selection of bit nodes is repeated as many times as desired. The last sub-graph then consists of the bit nodes that were not selected and the check nodes that connect only to those bit nodes. The remaining check nodes constitute CJ.
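A sketch of such a greedy partition follows. It chooses the bit-node subsets as contiguous, equal-sized chunks, which is only one arbitrary choice among many; the function name and the adjacency-list input format are assumptions made for this sketch.

```python
def greedy_partition(num_bits, checks, t):
    """Split the bit nodes into t equal chunks and assign each check node to the
    sub-graph that contains all of its neighbouring bit nodes; check nodes that
    straddle chunks form the external set CJ.

    checks : adjacency list, checks[c] = the bit nodes attached to check node c.
    Choosing contiguous chunks of bits is an arbitrary choice made for this sketch.
    """
    chunk = -(-num_bits // t)                                  # ceiling division
    V = [list(range(i * chunk, min((i + 1) * chunk, num_bits))) for i in range(t)]
    subgraph_of_bit = {v: i for i, bits in enumerate(V) for v in bits}

    C = [[] for _ in range(t)]                                 # C[i]: checks internal to sub-graph i
    CJ = []                                                    # external ("joint") check nodes
    for c, bits in enumerate(checks):
        owners = {subgraph_of_bit[v] for v in bits}
        if len(owners) == 1:
            C[owners.pop()].append(c)
        else:
            CJ.append(c)
    return V, C, CJ
```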
In the class of embodiments described above, the LDPC graph G is partitioned into t sub-graphs, each with its own bit nodes and check nodes, plus a separate subset CJ of only check nodes. In another class of embodiments, as illustrated in
The data stored in the memory cells (M) are read out by column control circuit 2 and are output to external I/O lines via an I/O line and a data input/output buffer 6. Program data to be stored in the memory cells are input to data input/output buffer 6 via the external I/O lines, and are transferred to column control circuit 2. The external I/O lines are connected to a controller 20.
Command data for controlling the flash memory device are input to a command interface connected to external control lines which are connected with controller 20. The command data inform the flash memory of what operation is requested. The input command is transferred to a state machine 8 that controls column control circuit 2, row control circuit 3, c-source control circuit 4, c-p-well control circuit 5 and data input/output buffer 6. State machine 8 can output status data of the flash memory such as READY/BUSY or PASS/FAIL.
Controller 20 is connected or connectable with a host system such as a personal computer, a digital camera or a personal digital assistant. It is the host which initiates commands, such as to store or read data to or from the memory array 1, and provides or receives such data, respectively. Controller 20 converts such commands into command signals that can be interpreted and executed by command circuits 7. Controller 20 also typically contains buffer memory for the user data being written to or read from the memory array. A typical memory device includes one integrated circuit chip 21 that includes controller 20, and one or more integrated circuit chips 22 that each contain a memory array and associated control, input/output and state machine circuits. The trend, of course, is to integrate the memory array and controller circuits of such a device together on one or more integrated circuit chips. The memory device may be embedded as part of the host system, or may be included in a memory card that is removably insertable into a mating socket of a host system. Such a card may include the entire memory device, or the controller and memory array, with associated peripheral circuits, may be provided in separate cards.
Although the methods and the decoders disclosed herein are intended primarily for use in data storage systems, these methods and decoders also are applicable to communications systems, particularly communications systems that rely on wave propagation through media that strongly attenuate high frequencies. Such communication is inherently slow and noisy. One example of such communication is radio wave communication between shore stations and submerged submarines.
Turning now to the issue of trapping sets, there are two types of conventional methods for overcoming trapping sets in LDPC decoding:
1. Avoid trapping sets by designing LDPC codes without trapping sets.
2. Overcome trapping sets by algorithmic means during decoding.
The first type of conventional methods has the following disadvantages:
Since trapping sets are not well defined, and long LDPC codes are quite complex, designing a graph with a low error floor, and proving that the error floor is low, may be a difficult task that requires extensive simulations. Moreover, such an approach may exclude the use of some LDPC codes that exhibit good properties with respect to other aspects, such as implementation complexity in encoding/decoding schemes, decoding speed and flexibility.
As for the second type of conventional methods, using algorithmic methods during decoding for overcoming trapping sets:
Several suggested methods are mentioned in the literature:
1. Averaging.
2. Informed Dynamic Scheduling
3. Identifying the trapping sets and designing a custom sum-product algorithm that tries to avoid them.
1. The averaging method uses an update algorithm for the bit values. The updates are based not only on the results of the preceding iteration, but also on averages over the results of a few iterations; a minimal sketch follows this list. Several averaging methods have been suggested, including arithmetic averaging, geometric averaging, and a weighted arithmetic-geometric average.
2. Informed Dynamic Scheduling. In this method, not all check nodes are updated at each iteration but rather the next check node to be updated is selected based on the current state of the messages in the graph. The check node is selected based on a metric that measures how useful that check node update is to the decoding process.
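The sketch below shows one plausible form of the averaging approach mentioned in item 1: an arithmetic mean over a short window of a bit's recent soft values. The window length and the class interface are assumptions made for this sketch, not the exact formulation of the cited methods; the geometric and weighted variants would slot into the same place.

```python
from collections import deque

class AveragedBitEstimate:
    """Keeps a short history of one bit's soft value and feeds the arithmetic mean
    of the last `window` iterations into the next iteration.  A geometric or
    weighted mean could be substituted in the same place."""

    def __init__(self, window: int = 3):
        self.history = deque(maxlen=window)

    def update(self, new_Qv: float) -> float:
        self.history.append(new_Qv)
        return sum(self.history) / len(self.history)
```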
Both methods can achieve improvement in the error floor, but the associated complexity of the algorithms is high, since averaging requires storing a history of previous messages, and Informed Dynamic Scheduling incurs high computational complexity.
Methods of the third type require identification of the trapping set and a tailor-made algorithm for each graph, which limits their usage to specific scenarios, especially when multiple LDPC codes are considered in the same application.
According to the innovative method now described, the decoding of a codeword is performed in two phases. During the first phase, conventional decoding is performed along the graph defined by the LDPC code.
If a trapping set is suspected to exist, which prevents the decoding process from converging to a legal codeword (i.e. a codeword satisfying all parity check equations), then the second phase of the decoding is entered. In this phase some of the values associated with the nodes of the graph of the code are modified.
Since the existence of a trapping set implies that only a small number of bits are failing to converge correctly, the existence of a trapping set may be identified if all but a small number of bits are stable during successive iterations of the decoding, or if a small number of parity check equations fail while all other parity check equations are satisfied. For example, if parity check equations within only one sub-graph of a graph that has been partitioned as described above fail, that sub-graph is suspected to be, or to include, a trapping set. Another symptom suggestive of the existence of a trapping set is only one percent or fewer of the parity check equations failing consistently. For example, the fact that some of the elements of the syndrome H·v′, where v′ is the column vector of estimated bits, are non-zero and are identical in two consecutive iterations suggests the existence of a trapping set.
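A minimal sketch of testing these symptoms follows; the one-percent threshold comes from the text above, while the function name and the use of hard-decision vectors from two consecutive iterations are assumptions made for this sketch.

```python
import numpy as np

def trapping_set_suspected(H, v_prev, v_curr, max_fraction=0.01):
    """Return True when the symptoms described above are present: only a small
    fraction (here at most one percent) of the syndrome elements are non-zero,
    and the same elements are non-zero in two consecutive iterations.
    v_prev and v_curr are the hard-decision bit vectors of those iterations."""
    s_prev = H @ v_prev % 2
    s_curr = H @ v_curr % 2
    failing = np.flatnonzero(s_curr)
    if failing.size == 0:                          # all checks satisfied: converged
        return False
    small = failing.size <= max_fraction * H.shape[0]
    stable = np.array_equal(np.flatnonzero(s_prev), failing)
    return small and stable
```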
Two examples of such modification are as follows:
1. Resetting the values of the check node messages Rcv to zero.
2. Truncating the soft values Qv corresponding to bit probabilities, i.e., limiting the magnitudes of those soft values to be no more than a predetermined value, typically a value between 10 and 16. (Both modifications are sketched below.)
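Both modifications are simple to express in code. In the sketch below, Qv and R are assumed to be the soft-value list and edge-keyed Rcv dictionary of the earlier serial-schedule sketch, and the default limit of 12 is merely one value inside the 10-16 range mentioned above.

```python
def reset_check_messages(R):
    """Modification 1: re-set all (or a chosen subset of) the Rcv messages to zero."""
    for edge in R:
        R[edge] = 0.0

def truncate_bit_estimates(Qv, limit=12.0):
    """Modification 2: clip the soft values Qv to a magnitude of at most `limit`
    (the text suggests a value between about 10 and 16)."""
    for i, q in enumerate(Qv):
        Qv[i] = max(-limit, min(limit, q))
```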
The motivation behind this methodology is that failure to converge due to a small trapping set occurs when the incorrect bits have achieved a high probability during the iterative process and the reliability of the incorrect results (contained at the nodes corresponding to parity check equations) is also high. In such a situation, further iterations will not alter the hard decisions (preferably implemented as the sign of the soft values) made on the incorrect bits.
However, if the decoder had started its operation in an initial state in which all bits outside a small trapping set are already at their correct values, then the probability of correctly decoding the codeword is extremely high.
By resetting the values of the messages Rcv to zero we revert to a state where all the bits outside the trapping set are correct.
In this situation, messages Qvc and Rcv related to bits which are correctly decoded (most of the bits at this stage) quickly build up to high reliability values, while messages related to bits in the trapping set build up more slowly, so that the correct messages have a greater influence on the values corresponding to the bits in the trapping set. Such a procedure helps in correcting the values of bits in the trapping set.
This procedure adds only minimal complexity to a conventional LDPC decoding algorithm.
In one embodiment, the algorithm performs decoding for a limited number of iterations. Upon failure to converge, the algorithm adds a step of setting certain variables, such as some or all of the Rcv messages, to zero, and then continues with conventional decoding.
In another embodiment, after performing the limited number of iterations, a truncating operation on several variables, such as some or all of the Qv values, is added, and then the algorithm continues with conventional decoding.
Both algorithms are very simple and of low complexity to implement; moreover, they apply to general LDPC graphs, in contrast to the conventional high-complexity, tailor-made methods.
Truncating the soft values Qv is useful in reaction to a variety of non-convergence criteria and slow-convergence criteria, as follows (a detection sketch follows the list):
1. if at least a predetermined number of elements of the syndrome are non-zero after a pre-determined number of iterations, or after a pre-determined time, or after a pre-determined number of message exchanges. A typical value of the predetermined number of elements is 1.
2. if at most a pre-determined number of elements of the syndrome remain non-zero in two consecutive iterations.
3. if the difference between the numbers of non-zero elements of the syndrome in two consecutive iterations is less than a predetermined limit, suggesting slow convergence.
4. if the Hamming distance between the bit estimates before and after a predetermined number of iterations (typically one iteration) is less than a predetermined limit, suggesting slow convergence.
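The sketch below evaluates these four criteria on the syndromes and hard bit estimates of two consecutive iterations; all the threshold values are illustrative placeholders for the "predetermined" numbers and limits referred to above, and any single criterion is treated as sufficient.

```python
import numpy as np

def should_truncate(iteration, s_prev, s_curr, v_prev, v_curr,
                    iteration_limit=20, nonzero_limit=1,
                    stuck_limit=2, progress_limit=1, distance_limit=2):
    """Evaluate the four criteria listed above.  s_prev/s_curr are the syndromes and
    v_prev/v_curr the hard bit estimates of two consecutive iterations; any single
    criterion is treated as sufficient.  All threshold values are placeholders."""
    w_prev, w_curr = int(s_prev.sum()), int(s_curr.sum())

    crit1 = iteration >= iteration_limit and w_curr >= nonzero_limit       # still failing after many iterations
    crit2 = 0 < w_curr <= stuck_limit and w_curr == w_prev                 # few syndrome elements stuck non-zero
    crit3 = w_curr > 0 and abs(w_prev - w_curr) < progress_limit           # syndrome weight hardly changing
    crit4 = w_curr > 0 and int(np.sum(v_prev != v_curr)) < distance_limit  # bit estimates hardly changing

    return crit1 or crit2 or crit3 or crit4
```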
Decoders 30 and 31 of
The foregoing has described a limited number of embodiments of methods for decoding a representation of a codeword, of decoders that use these methods, of memories whose controllers include such decoders, and of communication systems whose receivers include such decoders. It will be appreciated that many variations, modifications and other applications of the methods, decoders, memories and systems may be made.
Number | Date | Country
---|---|---
61074701 | Jun 2008 | US