This invention relates generally to error correction coding, and more particularly to decoding soft information from received signals.
Optical Communication Network
Latency is a major issue in high-speed communication networks, such as optical networks. This makes the trade-off between latency, implementation complexity, and coding gain important in the selection of channel codes. In many cases, a coding technique can provide gain only at the expense of additional encoding and decoding complexity and increased latency. It is important to find coding techniques that provide sufficient gains, while keeping the encoding and decoding complexity low.
Reed-Muller (RM) Codes
Polar codes, see U.S. Pat. No. 7,756,424, “Optical CDMA communications system using OTDL device,” have been used in fiber-optic communication systems to make more efficient use of the available bandwidth. Reed-Muller (RM) codes, a subset of polar codes, can be used to achieve performance close to the capacity predicted by the Shannon limit. Reed-Muller codes are linear error-correcting codes. RM codes belong to the classes of locally testable codes and locally decodable codes. RM codes are useful in the design of probabilistically checkable proofs, and in communication applications. Special cases of Reed-Muller codes include Hadamard codes and Walsh-Hadamard codes.
It is known that RM codes have an elegant construction based on polynomials with a specific structure. Higher-order RM codes can be constructed recursively from lower-order RM codes. This enables a decoding process with complexity that is thousands of times smaller than that of other error-correcting codes with similar performance, such as Reed-Solomon codes.
Soft Decision Decoding
As known in the art, a hard-decision decoder decodes data that have a fixed set of discrete possible values, typically 0 or 1.
A soft-decision decoder decodes data that have been encoded with an error-correcting code, where the data take on a continuous range of values, e.g., from 0 to 1. The extra information indicates the reliability (probability) of each input data point, and is used to form better estimates of the original data. Therefore, a soft-decision decoder typically performs better in the presence of corrupted data than its hard-decision counterpart.
There are two types of soft-decision decoders. First, a maximum likelihood (ML) decoder determines the probability that a specific codeword has been sent over a channel. Second, a maximum a posteriori (MAP) decoder determines the probability that an information bit has been used to generate a codeword sent over the channel.
Embodiments of the invention provide a method for decoding soft information of Reed-Muller codes that shows superior performance over existing schemes.
The embodiments of the invention provide a method for performing soft-decision decoding of Euclidean-space Reed-Muller (RM) codes. The steps of the methods and procedures are described below.
A code RM(r, m) of order r and codeword length 2^m is the set of all binary vectors associated with the coefficients of Boolean polynomials in m variables, whose terms are monomials of degree at most r. A monomial is a product of powers of variables, that is, any value obtained by finitely many multiplications of variables.
Such a code has 2^k valid codewords, where k = sum_{i=0}^{r} C(m, i), and a minimum Hamming distance of 2^{m−r}. The mappings 0 → 1 and 1 → −1 are used to transmit the RM(r, m) codewords using, e.g., binary phase-shift keying (BPSK) symbols. The function C(n, k) is the binomial coefficient, or the number of ways to construct a set of k elements from a larger set of n elements.
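For illustration, the following is a minimal sketch (in Python; the helper name rm_parameters is ours) that computes these code parameters:

```python
# A minimal sketch of the RM(r, m) parameters described above:
# block length 2^m, k information bits (so 2^k valid codewords),
# and minimum Hamming distance 2^(m - r).
from math import comb

def rm_parameters(r, m):
    """Return (n, k, d_min) for the Reed-Muller code RM(r, m)."""
    n = 2 ** m                                  # block length
    k = sum(comb(m, i) for i in range(r + 1))   # number of information bits
    d_min = 2 ** (m - r)                        # minimum Hamming distance
    return n, k, d_min

# Example: RM(1, 3) is the (8, 4, 4) code; it has 2^4 = 16 valid codewords.
assert rm_parameters(1, 3) == (8, 4, 4)
```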
Maximum Likelihood Decoding
Maximum Likelihood Decoding of First Order Reed-Muller Codes and Hadamard Transform
The polynomial of an RM(1, m) code has the form b_1 + b_2X_1 + . . . + b_{m+1}X_m. The RM(1, m) code has the property that each of the codewords, after BPSK mapping, is a row of the Hadamard matrix H_{2^m}, or the negation of a row.
The decoder examines the received vector with 2^m coordinates. The m variables span an orthogonal subspace, and can be detected by the Hadamard transform; the presence of the constant one negates the result of the Hadamard transform.
Let Y be the received vector, and H_{2^m} be the Hadamard matrix of size 2^m. The decoder determines the likelihood vector L = H_{2^m}Y.
Let L_i be the value of the ith element of the likelihood L. Then, the decoder determines î = arg max_i |L_i|, where the function arg max returns the index that attains the maximum value. The sign of L_î gives the estimate of b_1. The binary expansion of the index î indicates which variables are present, and thus gives the estimates of b_2 to b_{m+1}.
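The following is a sketch of this decoder, assuming BPSK transmission over an AWGN channel and the Sylvester (natural) ordering of evaluation points; the name ml_decode_rm1 is ours:

```python
import numpy as np
from scipy.linalg import hadamard   # Sylvester-type Hadamard matrix H_{2^m}

def ml_decode_rm1(y, m):
    """ML decoding of RM(1, m) from a received BPSK vector y of length 2^m."""
    L = hadamard(2 ** m) @ np.asarray(y, dtype=float)  # correlate y with every row
    i_hat = int(np.argmax(np.abs(L)))                  # most likely row index
    b1 = 0 if L[i_hat] >= 0 else 1                     # sign of L_i gives b_1
    # The binary expansion of i_hat indicates which of X_1..X_m are present;
    # the bit ordering follows the Sylvester point ordering (an assumption).
    b_rest = [(i_hat >> k) & 1 for k in range(m)]
    return [b1] + b_rest
```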
Maximum Likelihood Decoding of RM(m, m)
A generator matrix G of a code RM(m, m) is full rank and invertible in a Galois field of two elements (GF(2)). Hence, the decoder performs matrix inversion of G to obtain G^{−1}, and multiplies the received vector, after a threshold operation, by G^{−1} in modulo-2 arithmetic.
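A sketch of this procedure follows, assuming the column convention c = Gb (mod 2) and a full-rank binary G; the helper names are ours:

```python
import numpy as np

def gf2_inverse(G):
    """Invert a full-rank binary matrix over GF(2) by Gauss-Jordan elimination."""
    n = G.shape[0]
    A = np.concatenate([G % 2, np.eye(n, dtype=int)], axis=1)
    for col in range(n):
        pivot = next(row for row in range(col, n) if A[row, col])
        A[[col, pivot]] = A[[pivot, col]]        # bring a 1 onto the diagonal
        for row in range(n):
            if row != col and A[row, col]:
                A[row] ^= A[col]                 # XOR-eliminate the column
    return A[:, n:]

def decode_rm_mm(y, G_inv):
    """Decode RM(m, m): threshold the received BPSK vector y, then apply G^{-1}."""
    c = (np.asarray(y) < 0).astype(int)   # hard decision: +1 -> 0, -1 -> 1
    return (G_inv @ c) % 2                # undecoded bits b, with c = G b (mod 2)
```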
Maximum Likelihood Decoding of Higher-Order Reed-Muller Codes and Recursive Decomposition
Given the two procedures above for decoding RM(1, m) and RM(m, m) codes, we can now recursively decode general RM(r, m) codes. We note that the RM(r, m) code can be decomposed into RM(r−1, m−1) and RM(r, m−1) codes via the well-known Plotkin decomposition.
Thus, after BPSK mapping, we can express RM(r, m) as RM(r, m) = {(u, uv) | u ∈ RM(r, m−1) and v ∈ RM(r−1, m−1)}, where uv denotes a component-wise multiplication of u and v. Hence, depending on the choice of decomposition variable x_j, j = 1, 2, . . . , m, the codewords of RM(r, m) can be written, after applying the appropriate permutation, as
(u^j, u^jv^j) = (r_1, r_2),
where the superscript j is used to denote that the variable x_j was used in the Plotkin decomposition. We use u^j_i and u^j_iv^j_i to denote the ith coordinates of r_1 and r_2. The log-likelihood ratio (LLR) of u^j_i, LLR(u^j_i), can be determined from r_{1i}. Similarly, the log-likelihood ratio of u^j_iv^j_i, LLR(u^j_iv^j_i), can be determined from r_{2i}. Because v^j_i = u^j_i(u^j_iv^j_i), the log-likelihood ratio of v^j_i, LLR(v^j_i), can be expressed in terms of LLR(u^j_i) and LLR(u^j_iv^j_i) as LLR(v^j_i) = log((exp(LLR(u^j_i) + LLR(u^j_iv^j_i)) + 1)/(exp(LLR(u^j_i)) + exp(LLR(u^j_iv^j_i)))).
Because we have a procedure to compute LLR(v^j_i), we can perform decoding of v^j. This is accomplished by generating a received vector r*, which has the same LLR values as v^j. Thus, r* corresponds to the received codeword, assuming that v^j was transmitted. This is done by setting r*_i = LLR(v^j_i)σ²/2 for i = 1, 2, . . . , 2^{m−1}. We assume that we have an RM(r−1, m−1) decoder, and we pass r* through this decoder to obtain v^j.
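A sketch of these two steps follows; the numerically stable form via np.logaddexp is algebraically identical to the formula above:

```python
import numpy as np

def llr_of_v(llr_u, llr_uv):
    """LLR(v_i) from LLR(u_i) and LLR(u_i v_i), per the formula above.

    log((exp(a+b)+1)/(exp(a)+exp(b))) == logaddexp(a+b, 0) - logaddexp(a, b),
    which avoids overflow for large LLR magnitudes.
    """
    a, b = np.asarray(llr_u), np.asarray(llr_uv)
    return np.logaddexp(a + b, 0.0) - np.logaddexp(a, b)

def synthetic_received(llr, sigma2):
    """Form r* with the same LLRs as v: r*_i = LLR(v_i) * sigma^2 / 2."""
    return np.asarray(llr) * sigma2 / 2.0
```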
While there are m variables that can be used to perform the Plotkin decomposition, the optimal decomposition variable is x_ĵ such that
ĵ = arg max_j sum_i |LLR(v^j_i)|,
where the function arg max returns the index that attains the maximum absolute value. The above choice of ĵ maximizes the probability of correct detection of v^j.
As a variation that results in a lower performance for the ML decoder, it is possible to use
ĵ = arg max_j min_i |LLR(v^j_i)|,
where the function arg max min returns the index j that maximizes the minimum, over the index i, of the absolute value, |.|, of LLR(v^j_i).
The prior art does not determine the LLR for the decomposition variable, does not use the absolute value (abs) function, and does not find a maximum. There are two variations that can be used, depending on whether we use a maximum likelihood (ML) or a maximum a posteriori (MAP) decoder: we can insert a ‘sum’ function between the max and abs functions, or insert a ‘min’ function between the max and abs functions.
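A sketch of this selection step follows, assuming the LLR(v^j) vectors have already been computed (e.g., with llr_of_v above) for each candidate decomposition variable:

```python
import numpy as np

def choose_decomposition_variable(llr_v_per_j, use_min=False):
    """Return j maximizing sum_i |LLR(v_i^j)| (ML), or min_i |...| (variation).

    llr_v_per_j: array of shape (m, 2^(m-1)); row j holds LLR(v^j) computed
    after the coordinate permutation for decomposition variable x_j.
    """
    reduce_over_i = np.min if use_min else np.sum   # 'sum' or 'min' between max and abs
    scores = reduce_over_i(np.abs(llr_v_per_j), axis=1)
    return int(np.argmax(scores))                   # arg max over j
```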
The procedure rearranges 110 the bits y_1 . . . y_8 corresponding to each decomposition variable to obtain u^j and u^jv^j for j = 1, 2, and 3.
For each of the decomposition variables, the procedure determines the LLRs of v^j. The abs function 131 is applied to all computed LLR(v^j_i), and a sum or min function 141 is then applied over the index i. The decomposition variable index j that corresponds to the largest value, as determined by the arg max function 141, is then the optimal decomposition variable 101.
Because we now have v^j, we can compensate for it in r_2 by computing r_2v^j. We can form the input to the RM(r, m−1) decoder as (r_1 + r_2v^j)/2.
Now, v can be decoded using the RM(r−1, m−1) decoder. After v is decoded, two observations exist for u: one from r_1, and one from r_2v. For a Gaussian channel, the two observations can be averaged, and the RM(r, m−1) decoder can be used to decode u. The process can be applied recursively.
The RM(r−1, m−1) decoder returns both a decoded v 204 and a first set of corresponding undecoded bits. The decoded v is used to estimate u by determining (r_1 + r_2v^j)/2 205. The computed vector is decoded using an RM(r, m−1) decoder 206. If r = m−1, the subcode (r_1 + r_2v^j)/2 can be decoded using matrix inversion. Otherwise, further recursion is used to decode the subcode. The RM(r, m−1) decoder returns the decoded bits u 207, and a second set of corresponding undecoded bits.
Then, the procedure determines 310 if r_2 is currently decodable. The input r_2 from the Plotkin decomposition, corresponding to RM(r−1, m−1), is currently decodable if r−1 = 1.
If true, then the maximum likelihood decoder for the RM(1, m−1) code, based on the Hadamard transform 311, is used to decode v^j from the input r_2.
If r−1 > 1, then the Plotkin decomposition 312 is recursed, this time on the RM(r−1, m−1) code and the input r_2.
After v^j is obtained, we can proceed to generate the input for the RM(r, m−1) decoder. This input is generated as (r_1 + r_2v^j)/2 320.
If the RM(r, m−1) code satisfies the condition that r = m−1, then the input (r_1 + r_2v^j)/2 can be decoded to generate u^j using the generator matrix of the RM(m−1, m−1) code, as described above.
Check 330 if r < m−1. If true, then the Plotkin decomposition is carried out again, this time on the RM(r, m−1) code 331 and the input (r_1 + r_2v^j)/2. Otherwise, decode using matrix inversion 332.
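The recursion can be summarized in the following structural sketch. For brevity it always decomposes on x_m (so r_1 and r_2 are simply the two halves of the received vector, and no permutation or optimal-variable search is needed), and it returns only the decoded BPSK codeword; the undecoded bits are extracted in the base cases as described above. The function name is ours:

```python
import numpy as np
from scipy.linalg import hadamard

def decode_rm_codeword(y, r, m, sigma2):
    """Recursive Plotkin decoding sketch for RM(r, m); returns a +/-1 codeword."""
    n = 2 ** m
    y = np.asarray(y, dtype=float)
    if r == 1:                            # base case: Hadamard-transform ML decoder
        H = hadamard(n)
        L = H @ y
        i_hat = int(np.argmax(np.abs(L)))
        return (1.0 if L[i_hat] >= 0 else -1.0) * H[i_hat]
    if r == m:                            # base case: every +/-1 vector is a codeword,
        return np.where(y >= 0, 1.0, -1.0)  # so the ML codeword is the hard decision
    r1, r2 = y[: n // 2], y[n // 2 :]
    llr_u, llr_uv = 2 * r1 / sigma2, 2 * r2 / sigma2
    llr_v = np.logaddexp(llr_u + llr_uv, 0.0) - np.logaddexp(llr_u, llr_uv)
    v = decode_rm_codeword(llr_v * sigma2 / 2, r - 1, m - 1, sigma2)
    # Average the two observations of u; the averaged noise variance is sigma2/2.
    u = decode_rm_codeword((r1 + r2 * v) / 2, r, m - 1, sigma2 / 2)
    return np.concatenate([u, u * v])
```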
Maximum Likelihood List Decoding with Optimal Decomposition
Maximum likelihood decoding usually finds the codeword, and the corresponding undecoded bit pattern, that is most similar to the received signal. In some applications, it can be useful to find not only the single most similar codeword, but multiple codewords.
To do so, the decoding proceeds as follows.
After the optimal decomposition variable is determined 101 as described above, a list decoder for the RM(r−1, m−1) code returns a list of decoded candidates v^i, together with the corresponding undecoded bits.
The decoded v^i are used to estimate u^i by computing (r_1 + r_2v^i)/2 405. For each of the estimates (iterating over all i), the computed vector is decoded using the RM(r, m−1) decoder 406. If r = m−1, the subcode (r_1 + r_2v^i)/2 can be decoded using matrix inversion. Otherwise, further recursion is used to decode the subcode. The RM(r, m−1) decoder returns the decoded u 407, and the corresponding undecoded bits.
List Hadamard Transform
Let Y be the received vector, and H_{2^m} be the Hadamard matrix. The likelihood vector L = H_{2^m}Y is determined as before, but instead of retaining only the index with the largest absolute value, the decoder retains the indices with the largest absolute values, up to the list size, and outputs the corresponding candidate codewords v^i.
With each v^i, (r_1 + r_2v^i)/2 is determined, and passed to the RM(r, m−1) decoder. Each of these vectors can be used to decode the corresponding bit vector u^i.
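A sketch of the list variant of the Hadamard-transform decoder follows; the parameter list_size and the name list_decode_rm1 are ours:

```python
import numpy as np
from scipy.linalg import hadamard

def list_decode_rm1(y, m, list_size):
    """Return the list_size most likely RM(1, m) BPSK codewords, best first."""
    H = hadamard(2 ** m)
    L = H @ np.asarray(y, dtype=float)
    order = np.argsort(-np.abs(L))[:list_size]        # largest |L_i| first
    return [(1.0 if L[i] >= 0 else -1.0) * H[i] for i in order]
```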
Maximum A Posteriori (MAP) Decoding
In the prior art, a MAP decoder only operates on codewords, and not on individual bits. The embodiments of the invention provide a method for bit-level MAP decoding. An exact MAP decoder is provided for RM(1, m) and RM(m, m) codes, and an approximate MAP decoder is provided for higher-order RM codes. In addition, we also provide a fast MAP decoder based on a list maximum likelihood (ML) decoder.
MAP Decoder for RM(1, m)
Let r be the received vector, σ² be the noise variance, and H_{2^m} be the Hadamard matrix. The bit-level LLRs can be determined as
LLRrow = H_{2^m}r/σ²
expLL = exp(LLRrow) + exp(−LLRrow)
LLRbit(1) = log(sum(exp(LLRrow))/sum(exp(−LLRrow)))
LLRbit(2:end) = log(((1−A)^T expLL)/(A^T expLL)),
where each entry of LLRrow is the log-likelihood of the corresponding Hadamard row (up to a common additive constant), exp and log are applied element-wise, the division in the last equation is element-wise, and A is the matrix whose rows are all binary vectors of length m, that is, the binary expansions of the row indices.
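A sketch of this bit-level MAP decoder follows, under the scaling assumption noted above; for high signal-to-noise ratios a log-domain summation (e.g., scipy.special.logsumexp) would be numerically safer:

```python
import numpy as np
from scipy.linalg import hadamard

def map_decode_rm1(y, m, sigma2):
    """Bit-level MAP LLRs for RM(1, m) over AWGN, following the steps above."""
    n = 2 ** m
    H = hadamard(n)
    lam = (H @ np.asarray(y, dtype=float)) / sigma2  # row log-likelihoods (up to a constant)
    exp_ll = np.exp(lam) + np.exp(-lam)              # mass of each row and its negation
    llr = np.empty(m + 1)
    llr[0] = np.log(np.exp(lam).sum() / np.exp(-lam).sum())   # constant bit b_1
    A = (np.arange(n)[:, None] >> np.arange(m)) & 1           # binary expansions 0..n-1
    llr[1:] = np.log(((1 - A).T @ exp_ll) / (A.T @ exp_ll))   # bits b_2..b_{m+1}
    return llr
```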
MAP Decoder for RM(m, m)
Let r be the received vector, σ² be the noise variance, and c_i, for i = 1, 2, . . . , 2^{2^m}, be all binary vectors of length 2^m. We first determine the likelihood of every c_i given r and σ², which is well known from probability theory. After that, we note that c = Gb in modulo-2 arithmetic, where G is the generator matrix, and b is the vector of undecoded bits. It is easy to determine the inverse of G in modulo-2 arithmetic. Using the likelihoods of the c_i and the inverse of G, one can then determine the MAP estimate of each bit.
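A sketch of this enumeration follows; it is feasible only for very small m, since the number of binary vectors grows as 2^(2^m), and gf2_inverse from above can supply G^{−1}:

```python
import numpy as np
from itertools import product

def map_decode_rm_mm(y, G_inv, sigma2):
    """Exact bit-level MAP for RM(m, m) by enumerating all binary vectors c."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    p0 = np.zeros(n)                    # likelihood mass where bit b_k = 0
    p1 = np.zeros(n)                    # likelihood mass where bit b_k = 1
    for bits in product((0, 1), repeat=n):
        c = np.array(bits)
        x = 1 - 2 * c                   # BPSK mapping 0 -> +1, 1 -> -1
        lik = np.exp(-np.sum((y - x) ** 2) / (2 * sigma2))  # Gaussian likelihood
        b = (G_inv @ c) % 2             # undecoded bits, since c = G b (mod 2)
        p0 += lik * (b == 0)
        p1 += lik * (b == 1)
    return np.log(p0 / p1)              # MAP LLR of each bit
```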
MAP Decoder for Higher-Order Reed-Muller Codes
To determine 101 the optimal decomposition variable, we use the same procedure as described above.
The RM(r−1, m−1) decoder 603 only returns the LLRs of the undecoded bits 604 corresponding to r_2.
Using the LLRs of the bits, the method determines the likelihood of the codewords that can result in r_2. We consider all codewords that have a substantial probability, for example, a probability higher than 0.01. Each such codeword, bit-wise multiplied by r_2, gives a compensated received codeword that can be used to decode the remaining bits.
Additionally, r_1 605 can be used to decode the remaining bits. The decoder for RM(r, m−1) 606 is called for each possibility. To clarify, the RM(r, m−1) decoder is called at least two times, and can be called many more times if many codewords have a substantial probability.
Each recursive call determines the LLRs of the bits corresponding to the specific compensated codeword, or to r_1.
All bit LLRs are combined 607. First, using the probability of each compensated codeword with a substantial probability, the LLRs of the bits are determined using a weighted sum. Finally, the LLRs of the bits from r_1 are added to the LLRs from the compensated codewords.
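A sketch of this combining step 607 follows, assuming the candidate probabilities and the per-candidate bit LLRs have already been collected from the recursive calls:

```python
import numpy as np

def combine_bit_llrs(llr_from_r1, llrs_per_candidate, candidate_probs):
    """Weighted sum of bit LLRs over the compensated-codeword candidates,
    plus the bit LLRs obtained independently from r1."""
    w = np.asarray(candidate_probs, dtype=float)
    w /= w.sum()                                   # normalize candidate weights
    weighted = w @ np.asarray(llrs_per_candidate)  # weighted sum over candidates
    return weighted + np.asarray(llr_from_r1)      # add the r1 contribution
```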
MAP Decoder Using ML List Decoder
For a faster MAP decoder, it is possible to use the ML list decoder described above. The codewords on the list, weighted by their likelihoods, approximate the combination over all codewords with a substantial probability.
This code can be used in optical fiber, wireless, and wired communication networks.
Although the invention has been described by way of examples of preferred embodiments, it is to be understood that various other adaptations and modifications can be made within the spirit and scope of the invention. Therefore, it is the object of the appended claims to cover all such variations and modifications as come within the true spirit and scope of the invention.