The invention concerns the decoding of error correcting codes in the field of telecommunications and data storage. More specifically the invention concerns a decoding method and the corresponding decoder for non-binary Low-Density Parity Check (LDPC) codes.
The design of high-throughput decoders for non-binary low-density parity-check (NB-LDPC) codes in GF(q), with moderate silicon area, is a challenging problem, which requires both low complexity algorithms and efficient architectures.
Algorithms derived from the q-ary sum-product (QSPA) [1], [2], [3], [4], such as the extended min-sum (EMS) [5] and the min-max (Min-Max) [7], involve high complexity in their check node units (CNU), which implement the parity-check node update equations. In particular, the CNU of the different versions of EMS or Min-Max requires many comparisons, which reduce the maximum achievable throughput, due to a large inherent latency. This high latency is a bottleneck, especially for the decoding of high rate NB-LDPC codes (R>0.8), where the parity-check node degree dc takes large values. Techniques like the forward-backward implementation of the CNU [8], [10] or the bubble check algorithm [9] can reduce the latency to a minimum of dc clock cycles, with enough hardware parallelism, but this is still not sufficient to reach very high throughput. As a result, the architectures based on the EMS or Min-Max algorithms [13], [14], [15] can achieve coding gains close to QSPA, but at the cost of low decoding throughput.
Aside from the EMS or Min-Max like algorithms, other solutions have been proposed in the literature, with the objective of greatly reducing the decoding complexity at the cost of a larger performance loss compared to QSPA. The architectures that can achieve high throughput, and at the same time use a small chip area, compute only a very small set of parity check equations during the parity-check node update. This approach has been followed for the majority-logic decodable (MD) algorithm [11] and the generalized bit-flipping decoding algorithm (GBFDA) [12]. These simple algorithms, and the associated architectures [21], [17], [18], suffer from a non-negligible performance loss compared to the QSPA, ranging from 0.7 dB to several dB depending on the algorithm and the LDPC code. This performance loss is due to the lack of soft information used in the CNUs of GBFDA and MD, and cannot be recovered with a larger number of decoding iterations [18]. In addition, GBFDA and MD tend to be more efficient for codes with medium variable node degrees (dv>3 in the case of GBFDA), and do not perform well for ultra-sparse dv=2 NB-LDPC codes, which have been identified as an important class of non-binary codes [6].
The invention makes it possible to improve the performance of the decoding of a non-binary low density parity-check (NB-LDPC) code.
To this end the invention concerns a method for decoding a non-binary low density parity-check code defined in a finite field of size q, the code being representable by a bipartite graph comprising at least one variable node Vn, n=0, . . . , N−1 and at least one check node Cm, m=0, . . . , M−1, said method comprising, for each iteration j of It decoding iterations, the steps consisting in that:
each variable node Vn, connected to a check node Cm, is configured for determining a most reliable symbol Qn1(j) and at least one symbol which is at least a pth most reliable symbol Qnp(j), with p≧2, for obtaining a vector of dc most reliable symbols;
each check node Cm is configured for determining:
The method according to the invention may have one of the following features:
Each variable node is configured for determining the most reliable symbol Qn(j) and the second most reliable symbol Qn2(j)=Q′n(j) and their corresponding extrinsic reliabilities ΔWn(j), ΔW′n(j), so that at a check node the list of L+1 test vectors is built by replacing symbol Qn(j) by the second most reliable symbol Q′n(j) in at most η≦L locations where the differences between ΔWn(j) and ΔW′n(j) are the smallest.
For η≦L, it comprises a step consisting in that a sorter unit is configured for sorting the differences of extrinsic reliability ΔWnm(j)−ΔWnm2(j) from the lowest value to the highest value for obtaining a sequence of L sorted indices n, the sequence comprising the η locations where Qn(j) is replaced by Qn2(j) in the L+1 test vectors.
Each variable node is further configured for computing an intrinsic information Wmn(j) from the check node, counting the votes of Rn0(j) and Rni(j) with the respective voting amplitudes v0, v1.
It comprises before the It decoding iterations an initialization step comprising the sub-steps of:
determining a LLR vector Ln=(Ln[0], Ln[1], . . . , Ln[q−1]) of a nth symbol in a sequence of N non-binary noisy symbols;
initializing the APP vector Wn(0) to the LLR vector Ln and initializing the matrix Wmn(0) to an all-zero matrix, said matrix Wmn(j) being the intrinsic information from the check node m.
Each variable node, taking as input the LLR vector and the vector Wmn(j), combines the previous vector Wmn(j−1), the voting symbols Rn0(j) and Rni(j) and the voting amplitudes v0, v1 through a function F1 for obtaining the vector defined as the intrinsic information Wmn(j).
The function F1 is a simple summation of the values of the previous vector Wmn(j−1), at the indices indicated by the voting symbols Rn0(j) and Rni(j), with the voting amplitudes v0, v1.
Each variable node, taking as input the LLR vector and the vector Wn(j), combines (A5.1, A5.2) the previous vector Wn(j−1), the voting symbols Rn0(j) and Rni(j) and the voting amplitudes v0, v1 through a function F2 for obtaining the vector Wn(j).
The function F2 is a simple summation of the values of the previous vector Wn(j−1), at the indices indicated by the voting symbols Rn0(j) and Rni(j), with the voting amplitudes v0, v1.
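By way of illustration, the following minimal sketch interprets F1/F2 as such simple summations: the vote amplitudes are added to the previous score vector at the indices of the voted symbols, v0 for the candidate obtained from the most reliable symbols and v1 for the candidates obtained from the extra test vectors. The function and variable names are ours, and whether a symbol may collect several weak votes in one iteration is an assumption rather than a statement of the invention.

```python
# Hedged sketch of F1/F2 as simple summations of vote amplitudes at the voted indices.
def vote_update(scores, r0, r_extra, v0, v1):
    out = list(scores)              # start from the previous iteration's values
    out[r0] += v0                   # strong vote on the most reliable candidate
    for r in r_extra:               # weak votes on the extra test-vector candidates
        out[r] += v1
    return out

# Illustrative GF(8) score vector, votes on symbol 3 (strong) and symbols {5, 3} (weak)
W_mn = vote_update([0.0] * 8, r0=3, r_extra=[5, 3], v0=2.0, v1=1.0)
print(W_mn)                         # symbol 3 accumulates v0 + v1, symbol 5 accumulates v1
```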
The invention also concerns a decoding apparatus comprising at least one variable node Vn and at least one check node Cm, said decoder being configured for implementing a method for decoding a non-binary low density parity-check code defined in a finite field of size q, according to the invention.
The decoding apparatus of the invention may have one of the following features:
It is configured for implementing check node operations by means of L processing units configured dynamically to compute Rni(j), with i=1, . . . , L. Each one of the L processing units shares 3×dc inputs that correspond to the symbols Qn1(j), Qn2(j) and to the coefficients of the code; said inputs of each processing unit are combined by means of 2×dc GF multipliers and 2×dc GF adders. Said processing units include all the logic necessary to compute L different syndromes as an intermediate step and a variable number of pipeline stages that may vary from 0 to log2(dc)+2 depending on the speed of said processing unit (a sketch of this syndrome computation is given after this list of features).
It is configured for implementing the method of the invention and comprises i) a bank of memories that store the Rni(j) symbols, with i=0, . . . , L; ii) q processors with the logic required to compare the symbols Rni(j), with i=0, . . . , L, with the q elements of the Galois field and determine the amplitude of the votes corresponding to those symbols; and iii) q cells that implement functions F1 and F2. The bank of memories is implemented with L RAM memories or a bank of L registers; the processors are implemented with L XNOR gates of log2(q) bits and L OR gates of 1 bit to compare the input symbols Rni(j), with i=0, . . . , L, with the q Galois field elements and determine the amplitude of the votes; the cells include the logic necessary for implementing F1 and F2 and the storage resources for Wmn(j) and Wn(j).
It comprises a sorter unit configured for obtaining the sequence N′ of L sorted indices n according to the method of the invention, the sorter unit including at least one sub-processor of radix L, each sub-processor including: i) one stage of comparators configured for performing all the possible combinations of the inputs; ii) a plurality of adders and a plurality of NOT gates configured for computing a summation of the output signals of the different comparators associated to the same input, the adders being configured for checking how many times a condition of greater than or less than is satisfied for each one of the inputs; iii) a plurality of logic gates configured for implementing L different masks that allow ordering the inputs according to the information provided by the outputs of the adders, the logic gates being XNOR, OR and AND gates.

The invention makes it possible to achieve both a high throughput and a coding gain as close as possible to QSPA, by taking advantage of the low complexity of GBFDA operations and by introducing soft information in the CNU, as is done in EMS and Min-Max, to improve the coding gain. The core idea of the ES-GBFDA CNU is to compute the syndrome using the hard decisions on the symbols, obtained from the most reliable Galois field values, and declare a vote for the hard decision symbols which satisfy the parity check. The vote is then propagated to the symbol nodes and accumulated along the decoding iterations in a memory.
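To illustrate the syndrome computation performed by the check node processing units described above, the following sketch shows, under assumptions, how a test-vector syndrome could be formed from the shared inputs: the 2×dc GF multiplier outputs h·Qn1 and h·Qn2 are computed once, and GF (XOR) additions then select one of the two products at each position. The GF(32) primitive polynomial and all function and variable names are ours, not taken from the text.

```python
# Hedged sketch of a check-node processing unit forming one test-vector syndrome.
def gf_mul(a, b, p=5, prim=0b100101):
    """Multiply in GF(2^p); here GF(32) with x^5 + x^2 + 1 (an assumed primitive polynomial)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << p):
            a ^= prim
    return r

def test_vector_syndrome(h, Q1, Q2, replace):
    """XOR-accumulate h_n*Q1_n, using h_n*Q2_n at the positions listed in `replace`."""
    prod1 = [gf_mul(hn, q1) for hn, q1 in zip(h, Q1)]   # dc multipliers for the Q1 products
    prod2 = [gf_mul(hn, q2) for hn, q2 in zip(h, Q2)]   # dc more multipliers for the Q2 products
    s = 0
    for n in range(len(h)):
        s ^= prod2[n] if n in replace else prod1[n]     # GF additions (XOR) forming the syndrome
    return s

# Illustrative dc = 4 check node over GF(32)
h, Q1, Q2 = [3, 7, 12, 19], [5, 9, 21, 2], [6, 9, 20, 2]
print(test_vector_syndrome(h, Q1, Q2, replace=set()))   # syndrome of the most reliable test vector
print(test_vector_syndrome(h, Q1, Q2, replace={0}))     # syndrome with Qn2 used at position 0
```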
Contrary to ES-GBFDA, the invention not only considers the most reliable symbols in the syndrome computations, but also takes at least the second most reliable symbol of each incoming message into account. By doing so, an extended information set is available for the parity-check node update, and this allows introducing the concept of weak and strong votes, performed by the CNU and propagated to the symbol node memory. With this feature, each variable node can receive two kinds of votes, whose amplitude can be tuned to the reliability of the syndrome that produces the vote.
For this reason, the decoding method of the invention can be called multiple-vote symbol-flipping decoder (MV-SF).
At the MV-SF CNU, we call test vector a set of symbols taken from the most reliable and the second most reliable symbols of each incoming message. The invention introduces some extra soft information and enlarges the list of considered test vectors, therefore improving the decoding performance. Meanwhile, the complexity of the CNU for each extra test vector is the same as for GBFDA, which gives the method of the invention the nice feature of controlling the performance/complexity trade-off: the more test vectors are considered, the better the performance is, and the fewer test vectors are considered, the higher the throughput is.
As an example, a decoder with four times more test vectors than the ES-GBFDA, on a (N=837, K=723) NB-LDPC code over GF(32), shows a coding gain of 0.44 dB compared with GBFDA and a performance loss of only 0.21 dB compared to Min-Max. Moreover, the associated architecture can reach a throughput similar to the ES-GBFDA [18] with an area increase of less than two times, not four times as would be expected from a direct architecture mapping.
The invention improves the coding gain of ES-GBFDA by creating a list of test vectors that increases the amount of soft information in the check node update. The variable node is also modified to introduce different amplitudes in the votes involved in the computation, with the objective of distinguishing if the vote is produced by the most reliable information or by both most and second most reliable symbols. With the invention, the gap of performance between ES-GBFDA and Min-Max or EMS is reduced.
Derived from the method of the invention, a high throughput architecture of a decoder is proposed. The required area is less than L/2 times the area of the ES-GBFDA decoder, L being the size of the list of test vectors. This fact demonstrates the optimization of the architecture, which does not increase its area L times, as would be the case for a direct mapping solution. The invention, even with an overestimated area, reaches at least 27% more efficiency (throughput/area) compared to the best Min-Sum and Min-Max architectures, at the cost of only 0.21 dB performance loss. The invention based on GBFDA can reach performance similar to EMS and Min-Max, but with the advantage of higher throughput.
Other features and advantages of the invention will appear in the following description with references to the drawings, in which:
A method for decoding a non-binary low density parity-check (NB-LDPC) code defined in a finite field of size q is based on a bipartite graph comprising at least one variable node Vn, n=0, . . . , N−1 and at least one check node Cm, m=0, . . . , M−1.
Let us define an (N,K) NB-LDPC code over GF(q) (q=2^p) with code length N and information length K. Its parity check matrix HM,N has N columns and M rows. The non-zero coefficients of HM,N are hm,n, where m is the row index and n the column index. The check node degree is denoted as dc and the variable node degree as dv; N(m) denotes the set of variable nodes connected to the m-th check node, and M(n) the set of check nodes connected to the n-th variable node. After transmission over a noisy Additive White Gaussian Noise (AWGN) channel, the received symbol sequence is defined as Y=(y0, y1, . . . , yN−1). Considering the transmission of the codeword symbol cn, the log-likelihood ratio (LLR) is computed as Ln[x]=log[P(cn=0|yn)/P(cn=x|yn)], for each x ∈ GF(q). The LLR vector of the n-th symbol is Ln=(Ln[0], Ln[1], . . . , Ln[q−1]). The hard decision based on the most reliable element in Ln is called zn ∈ GF(q).
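As an illustration, a minimal sketch of how the LLR vector Ln could be computed is given below, assuming that each of the p bits of a symbol is BPSK modulated over the AWGN channel; the bit mapping and the noise variance are assumptions, since the modulation is not specified in the text.

```python
# Hedged sketch: Ln[x] = log[P(cn=0|yn)/P(cn=x|yn)] for a GF(2^p) symbol, assuming
# each bit is BPSK-modulated (0 -> +1, 1 -> -1) over AWGN with noise variance sigma2.
# With this sign convention the most likely symbol minimizes Ln[x].
def symbol_llrs(y_bits, sigma2, p):
    bit_llr = [2.0 * y / sigma2 for y in y_bits]          # log P(bit=0|y) - log P(bit=1|y)
    # Ln[x] sums the bit LLRs at the positions where x has a 1 bit (x differs from 0 there)
    return [sum(bit_llr[t] for t in range(p) if (x >> t) & 1) for x in range(1 << p)]

# One GF(8) symbol (p = 3), illustrative noisy observations of its three bits
Ln = symbol_llrs(y_bits=[0.9, -0.3, 1.2], sigma2=0.5, p=3)
print([round(v, 2) for v in Ln])                          # Ln[0] = 0 by construction
```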
The method for decoding a NB-LDPC code according to the invention is an improvement of the ES-GBFDA. Before explaining the method of the invention, we briefly describe this known algorithm, namely the ES-GBFDA from [18], referred to as "Algorithm 1" above.
In this Algorithm 1, two main steps can be distinguished in the iterative process: the check node unit (CNU), steps A2 and A3; and the variable node unit (VNU), steps A1, A4 and A5.
During the initialization, Wn(0) equals to the channel LLRs, Ln, and Wmn(0) is an all-zero matrix. After initialization, at the j-th iteration, step A1 sorts the extrinsic information to find the symbols Qn(j) with the maximum reliability (GFmax), which will be taken as the new hard decision.
The extrinsic information is calculated as Wn(j−1)−Wmn(j−1). Wn(j−1) is the vector where all the votes are accumulated and Wmn(j−1) is the intrinsic information from check node Cm. Step A2 computes the syndrome s, required in step A3 to calculate Rn(j), which is the symbol to be voted (selected). In step A4, Wmn(j) counts the votes on Rn(j), where v is the amplitude of the vote. In step A5, Wn(j) accumulates the initial LLRs plus the votes of all the check nodes connected to the variable node Vn. The votes modify the values of Wn(j−1) and Wmn(j−1), changing the result of the sorting process in A1 and, hence, flipping symbols in Qn(j). Step A6 performs the tentative decoding by finding the symbols associated to the maximum values of Wn(j) and step A7 implements the stopping criterion based on the information of all the syndromes. The decoded codeword is then c̃n(j).
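A minimal sketch of how the CNU steps A2 and A3 of Algorithm 1 could look for one check node is given below: the syndrome of the hard decisions is computed and, from it, the symbol Rn that would satisfy the check at each position. The GF(32) arithmetic helpers and the expression Rn = Qn + hn^{-1}·s are our reading of a standard GBFDA-style formulation; the precise update rules and vote conditions of [18] may differ.

```python
# Hedged sketch of steps A2 (syndrome) and A3 (voted candidates) for one check node.
def gf_mul(a, b, p=5, prim=0b100101):
    """Multiply in GF(2^p); here GF(32) with x^5 + x^2 + 1 (an assumed primitive polynomial)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << p):
            a ^= prim
    return r

def gf_inv(a, p=5):
    """Brute-force multiplicative inverse, adequate for a sketch."""
    return next(x for x in range(1, 1 << p) if gf_mul(a, x, p) == 1)

def cnu_hard_decision(h, Q, p=5):
    s = 0
    for hn, qn in zip(h, Q):            # step A2: syndrome of the hard decisions
        s ^= gf_mul(hn, qn, p)
    # step A3: R_n = Q_n + h_n^{-1}*s is the symbol that satisfies the check at position n
    R = [qn ^ gf_mul(gf_inv(hn, p), s, p) for hn, qn in zip(h, Q)]
    return s, R                          # if s == 0, R_n == Q_n for every n

s, R = cnu_hard_decision(h=[3, 7, 12, 19], Q=[5, 9, 21, 2])
print(s, R)
```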
We now describe, in a general way, the method of the invention based on the ES-GBFDA, by increasing the list of candidates computed by each check node and hence, increasing as well the number of votes propagated to the variable node. We only present the steps involved at the j-th decoding iteration.
Let Qn(j) be the hard decision obtained from the most reliable symbols, and Qnp(j) the hard decision of the p-th most reliable symbol within the input messages of a check node cm. We only consider the most reliable symbol Qn(j) and the second most reliable symbol Qn2(j)=Q′n(j). The symbols Qn(j) (respectively Qn2(j)) are associated with reliabilities ΔWn(j) (respectively ΔWn2(j)=ΔW′n(j)).
We define a test vector for a check node cm as the combination of dc symbols with the restriction that at least one, and at most η, of these dc symbols is not based on the most reliable information. In order to reduce the number of test vector candidates we only consider combinations between Qn(j) and Q′n(j). To build the list of test vectors of a check node cm, first the difference between the reliabilities ΔWn(j) and ΔW′n(j) is computed and sorted in ascending order for n ∈ N(m). The first elements of the sorted differences are the symbols in which the reliability of Q′n(j) is closest to the reliability of Qn(j). To keep the number of test vectors low we only replace Qn(j) by Q′n(j) in at most η locations where the differences between ΔWn(j) and ΔW′n(j) are the smallest (first η elements of the sorted list). The parameter η tunes the performance/complexity trade-off of the algorithm since it is directly related to the number of test vectors to be processed by each check node. η is selected as η<<dc to keep complexity low. The set with the η locations is denoted N′.
Let us define Γi(j) as the i-th test vector of a list of 2^η−1 possible test vectors of a check node m, different from the one based only on Qn(j) symbols. Each Γi(j) is built by replacing Qn(j) with Q′n(j) at some of the locations in N′.
In Equation (1) the definition of the test vector Γi(j) is indicated, where N′t is the t-th element in the set N′ and i_t is bit t of the binary representation of i: the symbol at location N′t is the second most reliable one when i_t equals 1, and the most reliable one otherwise.
In the same way, we can define the reliability of each Γi(j), ωi(j), by means of Equation (2).
We now consider a case wherein each check node is composed of L=η test vectors. Each test vector has dc−1 Qn(j) symbols, and one Q′n(j) symbol. The position of the Q′n(j) symbol in the test vector i, Γi(j) is given by N′i as follows:
The reliability equation for the test vector Γi(j) is given by
It can be deduced from Equation (4) that changing Qn(j) by Q′n(j) in N′i does not introduce a strong variation in the total reliability of the check node, ωi(j). The only element that changes in the sum is the reliability term at location N′i, where ΔWN′i(j) is replaced by ΔW′N′i(j), which is selected because the difference ΔWN′i(j)−ΔW′N′i(j) is one of the η smallest (so QN′i(j) and Q′N′i(j) have similar reliability in the selected location N′i).
In other words, as Γi(j) has a high reliability, similar to Qn(j), processing its candidates increases the amount of useful soft-information in the decoder and hence improves the performance.
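For the L=η case described above, a minimal sketch of the test-vector construction is given below: the L positions with the smallest reliability differences form N′, and test vector i replaces Q by Q′ only at position N′i. The variable names and illustrative values are ours.

```python
# Hedged sketch of the test-vector list for L = eta: one Q -> Q' replacement per test vector.
def build_test_vectors(Q, Q2, dW, dW2, L):
    diffs = [dW[n] - dW2[n] for n in range(len(Q))]
    N_prime = sorted(range(len(Q)), key=lambda n: diffs[n])[:L]   # L smallest differences
    vectors = []
    for i in range(L):
        tv = list(Q)
        tv[N_prime[i]] = Q2[N_prime[i]]       # replace Q by Q' at the single location N'_i
        vectors.append(tv)
    return N_prime, vectors

Q   = [5, 9, 21, 2, 14]       # most reliable symbols (dc = 5, illustrative)
Q2  = [6, 9, 20, 3, 12]       # second most reliable symbols
dW  = [9.0, 7.5, 6.0, 8.0, 5.0]
dW2 = [2.0, 7.0, 5.5, 3.0, 1.0]
N_prime, tvs = build_test_vectors(Q, Q2, dW, dW2, L=3)
print(N_prime)                # positions with the smallest reliability gaps
for tv in tvs:
    print(tv)
```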
To keep the complexity low and avoid computing and storing (4) for each test vector, the function applied to ωi(j) has to be very simple.
In a preferred embodiment of the invention, only two amplitudes for the votes are considered: v0 for the symbol candidates derived from the most reliable test vector, and v1 for the other test vectors, with v0>v1. However, this latter condition for the amplitudes v0 and v1 is necessary but not sufficient to obtain the best efficiency and performance. As already mentioned, the distance between the amplitudes is, with η, the most important parameter to be optimized. If v0 and v1 are too close, the amplitudes of the votes will be mixed easily and the flipping of the symbols will be done almost without any criterion, as candidates with very different reliability values will have almost the same vote amplitude. If the difference between v0 and v1 is too large, the votes of the less reliable test vectors will not be taken into account and the effect will be similar to not having an extended list of candidates. On the other hand, the values v0 and v1 also have to be scaled according to the channel information. Unlike other algorithms such as EMS or Min-Max, where all the information is strongly related with the incoming LLRs, algorithms based on a voting process mix two different domains, the one with channel information and the one based on votes. It is easy to deduce that some kind of normalization is required for the vote amplitudes or for the channel information in order to combine them properly. Hence, we need to optimize the amplitudes of the votes v0 and v1, and also their combination with the LLR information, through the combining function.
In the preferred embodiment of the invention called “Algorithm 2” above, L=η. It can be seen that some steps of this Algorithm 2 are similar to some steps of Algorithm 1 (i.e., ES-GBFDA).
The method of the invention, above, comprises It decoding iterations. In the following, the method is described at each iteration j.
Before the It decoding iterations, an initialization step, Initialization, comprises the sub-steps of:
determining a LLR vector Ln=(Ln[0], Ln[1], . . . , Ln[q−1]) of a nth symbol in a sequence of N non-binary noisy symbols;
initializing the APP vector Wn(0) to the LLR vector Ln and initializing the matrix Wmn(0) to an all-zero matrix, said matrix Wmn(j) being the intrinsic information from the check node m.
In steps A1.1 and A1.2 each variable node Vn connected to a check node Cm is configured for determining a most reliable symbol Qn1(j) and at least one symbol which is at least a pth most reliable symbol Qnp(j), with p≧2, for obtaining a vector of dc most reliable symbols.
In particular, steps A1.1 and A1.2 search the extrinsic information of the most reliable symbols, ΔWn(j) (max1), and their associated hard decision symbols Qn(j) and Q′n(j).
Furthermore, each variable node n=0, . . . , dc−1 is configured for determining the most reliable symbol Qn(j) and the second most reliable symbol Q′n(j) and their corresponding extrinsic reliabilities ΔWn(j), ΔW′n(j), so that at a check node m the list of L+1 test vectors is built by replacing symbol Qn(j) by the second most reliable symbol Q′n(j) in at most η≦L locations where the differences between ΔWn(j) and ΔW′n(j) are the smallest, with ΔWn(j)=Wn(j)−Wmn(j).
Based on the dc most reliable symbols, each check node Cm is configured for determining:
It can be seen that each check node performs more operations than a check node of Algorithm 1. Further, step A3.1 is the same as step A3 in Algorithm 1.
Furthermore, each variable node is configured for determining in steps A1.1 and A1.2 the most reliable symbol Qn(j) and the second most reliable symbol Q′n(j) and their corresponding extrinsic reliabilities ΔWn(j), ΔW′n(j), so that at a check node in steps A2.1 and A2.2 the list of L+1 test vectors is built by replacing symbol Qn(j) by the second most reliable symbol Q′n(j) in at most η≦L locations where the differences between ΔWn(j) and ΔW′n(j) are the smallest, with ΔWn(j)=Wn(j)−Wmn(j).
The method of the invention also comprises a step consisting in that a sorter unit is configured for sorting in step A1.3 the differences of extrinsic reliability ΔWnm(j)−ΔW′nm(j) from the lowest value to the highest value for obtaining a sequence N′ of L sorted indices n, the sequence N′ comprising the η locations where Qn(j) is replaced by Q′n(j) in the L+1 test vectors.
In particular, step A1.3 sorts ΔWnm(j)−ΔW′nm(j) in increasing order, where n ∈ N(m). Hence, the first values after the sorting are the ones with the least difference between the reliabilities of the Qn(j) and Q′n(j) symbols. Finally, step A1.3 stores the sorted n indices in N′.
Additionally, each variable node is further configured for computing in steps A4.1 and A4.2 an intrinsic information Wmn(j) from check node m, counting the votes of Rn(j) and Rni(j) with the respective voting amplitudes v0, v1.
Furthermore, each variable node, taking as input the LLR vector and the vector Wmn(j), combines (A4.1, A4.2) the previous vector Wmn(j−1), the voting symbols Rn0(j) and R′n(j) and the voting amplitudes v0, v1 through a function F1 for obtaining the vector defined as the intrinsic information Wmn(j).
The function F1 can be a simple summation of the values of the previous vector Wmn(j−1), at the indices indicated by the voting symbols Rn0(j) and Rni(j), with the voting amplitudes v0, v1.
Also, each variable node, taking as input the LLR vector and the vector Wn(j), combines in steps A5.1, A5.2 the previous vector Wn(j−1), the voting symbols Rn(j) and R′n(j) and the voting amplitudes v0, v1 through a function F2 for obtaining the vector Wn(j).
The function F2 can be a simple summation of the values of the previous vector Wn(j−1), at the indices indicated by the voting symbols Rn(j) and R′n(j), with the voting amplitudes v0, v1.
Steps A2.1, A3.1, A4.1 and A5.1 from Algorithm 2 are the same as A2, A3, A4 and A5 in Algorithm 1. These steps are performed based on the most reliable symbols. Steps A2.2, A3.2, A4.2 and A5.2 are performed with the L test vectors from the considered set of Qn(j) and Q′n(j) symbols. Step A2.2 performs the syndrome computation for the L test vectors (s′i), each formed by one Q′n(j) symbol and dc−1 Qn(j) symbols. Step A3.2 calculates the candidates of the voting process, R′n−i(j), according to each one of the L test vectors. Steps A4.2 and A5.2 are similar to steps A4 and A5 of Algorithm 1; the main difference is that the voted candidates are R′n−i(j) instead of Rn(j). One can note that there are two amplitudes for the votes, v0 and v1. As explained earlier, the constraint v0>v1 must be fulfilled, because Rn(j) is computed with Qn(j) symbols only, so it is more reliable than R′n−i(j), which is computed with both Qn(j) and Q′n(j) symbols.
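The following sketch puts the check-node steps of Algorithm 2 together for one check node, under the simplifying assumption that all parity-check coefficients hmn equal 1 (so GF multiplications vanish and the syndrome is a plain XOR), and assuming that every candidate simply receives a vote; the actual vote conditions of the algorithm may be more restrictive. v0>v1 are the strong and weak amplitudes.

```python
# Hedged sketch of steps A2.1/A3.1/A4.1 and A2.2/A3.2/A4.2 with all h_mn = 1.
def check_node_update(Q, Q2, N_prime, W_mn, v0, v1):
    """W_mn[n] is the length-q score vector of edge n; votes are simply accumulated."""
    dc = len(Q)
    s = 0
    for x in Q:
        s ^= x                                   # A2.1: syndrome of the hard decisions
    for n in range(dc):
        W_mn[n][s ^ Q[n]] += v0                  # A3.1/A4.1: strong vote on R_n = s XOR Q_n
    for i, m in enumerate(N_prime):              # one test vector per selected position
        s_i = s ^ Q[m] ^ Q2[m]                   # A2.2: syndrome with Q' used at position m
        for n in range(dc):
            gamma = Q2[n] if n == m else Q[n]    # symbol of test vector i at position n
            W_mn[n][s_i ^ gamma] += v1           # A3.2/A4.2: weak vote on R'_{n,i}
    return W_mn

q_size, dc = 8, 4                                # illustrative GF(8) code with dc = 4
W_mn = [[0.0] * q_size for _ in range(dc)]
check_node_update(Q=[3, 5, 6, 1], Q2=[2, 5, 7, 1], N_prime=[0, 2], W_mn=W_mn, v0=2.0, v1=1.0)
print(W_mn[0])
```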
A decoder for implementing the method of the invention is now described in relation to
On this figure, a diagram of the complete architecture is described. Three main parts can be distinguished: i) the VNU units, ii) the CNU units and iii) the sorter unit. The decoder has dc VNU units. Each VNU unit computes the Qn(j) and Q′n(j) symbols which are the input for the CNU units, by using the incoming symbols Rn(j) and R′n−i(j) (with i from 0 to L). In addition, the VNU units calculate ΔWnm(j) and ΔW′nm(j) and perform the subtraction ΔWnm(j)−ΔW′nm(j), which is used as input of the sorter unit. So, the VNU units implement steps A1.1, A1.2, A4.1, A4.2, A5.1 and A5.2 from Algorithm 2. The sorter unit receives dc ΔWnm(j)−ΔW′nm(j) values and looks for the L smallest ones (the elements which are added to the enlarged list are the ones with the smallest difference between the reliabilities of Qn(j) and Q′n(j)). The sorter outputs are the indices of the L elements with the smallest ΔWnm(j)−ΔW′nm(j) values within the dc inputs. This part of the architecture implements step A1.3 and, due to its complexity and relative novelty, it is explained at the end of the section, separately from the rest. Finally, two different kinds of CNU units are found: the ones that compute only Qn(j) symbols (CNU unit HD) and the ones that compute test vectors with both Qn(j) and Q′n(j) (CNU unit test vector (TV)). The decoder has just one CNU unit HD and L CNU units TV. Hardware resources for computing the hmn coefficients are shared between both HD and TV units. The CNU unit HD implements steps A2.1 and A3.1, while the L CNU units TV implement steps A2.2 and A3.2 from Algorithm 2.
Each VNU unit has 2×(L+1) RAMs that store the Rn(j) and R′n−i symbols. A single RAM stores (q−1) Rn(j) (or R′n−i) symbols, which correspond to the information of one sub-matrix. The Rn(j) and R′n−i symbols are represented with p bits because q=2^p. Memories connected to the same input work in a ping-pong way, so during q−1 clock cycles one RAM is read and the other is written, and for the next q−1 cycles the memories work in the opposite way. This reduces the idle cycles of the decoder and increases its efficiency.
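A behavioural sketch of this ping-pong scheme is shown below: two banks per input alternate roles every q−1 cycles, so one bank is read while the other is written. The sizes and the class interface are illustrative, not taken from the text.

```python
# Hedged sketch of ping-pong buffering: one bank is read while the other is written.
class PingPong:
    def __init__(self, depth):
        self.banks = [[0] * depth, [0] * depth]
        self.read_bank = 0                       # the other bank is the write bank
    def swap(self):                              # called every q-1 clock cycles
        self.read_bank ^= 1
    def read(self, addr):
        return self.banks[self.read_bank][addr]
    def write(self, addr, value):
        self.banks[self.read_bank ^ 1][addr] = value

pp = PingPong(depth=31)                          # q - 1 = 31 entries for GF(32)
pp.write(0, 17); pp.swap(); print(pp.read(0))    # the symbol written last phase is now readable
```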
The logic for selecting the active cells generates sym_sel x, which indicates whether a VNU cell receives a vote or not. To compute sym_sel x of a cell x, all the Rn(j) and R′n−i symbols (outputs of the ping-pong RAMs selected by the multiplexer) are compared with the symbol x associated to the cell, where x ∈ GF(q). These comparisons are implemented with XNOR gates, so if one of the symbols (Rn(j) or R′n−i) is equal to x, the output of one of the XNOR gates will be one, indicating that the symbol x is voted. sym_sel x is computed by applying the OR operation to all the XNOR outputs that are compared to the symbol x. All the outputs of the XNOR gates (1 bit) connected to R′n−i are also added (omitted on the figure for clarity).
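This comparison logic can be modelled as below: cell x is active when any incoming voted symbol equals x, and the number of matching R′n−i candidates is also counted so that the accumulated weak votes can be weighted accordingly. Gate-level XNOR/OR details are abstracted into equality tests, and the names are ours.

```python
# Hedged sketch of the vote-selection logic of one VNU cell.
def sym_sel(x, R0, R_extra):
    hits_main  = int(R0 == x)                       # strong-vote comparison
    hits_extra = sum(int(r == x) for r in R_extra)  # weak-vote comparisons (added, as in the text)
    return int(hits_main or hits_extra > 0), hits_main, hits_extra

for x in range(8):                                  # illustrative GF(8) cells
    sel, hm, he = sym_sel(x, R0=3, R_extra=[5, 3, 6])
    if sel:
        print(f"cell {x}: main={hm}, extra={he}")
```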
Each VNU unit has q cells that compute Wn[x]−Wmn[x] for each x ∈ GF(q), that is, Wn(0)−Wmn(0), Wn(α^0)−Wmn(α^0), Wn(α^1)−Wmn(α^1), . . . , Wn(α^(q−2))−Wmn(α^(q−2)).
In
The outputs of q cells (Wn−Wmn) are the inputs of the maximum finder unit as it is shown in
The latency equation for the whole decoder is the same as the one in [18]; however, if the same frequency is to be reached, the number of pipeline stages must be increased. The latency of the decoder is (q−1+pipeline)×dv×(#iterations+1)+(q−1+pipeline) clock cycles. Computing each sub-matrix takes (q−1) clock cycles, to which the pipeline delay has to be added, so each sub-matrix needs (q−1+pipeline) clock cycles. As there are dv sub-matrices, each iteration needs (q−1+pipeline)×dv clock cycles. We have to add one extra iteration to the total number of iterations for the initialization, and (q−1+pipeline) clock cycles are needed to get the values of the decoded codeword after the iterative process.
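As a worked example of this latency formula, the small script below evaluates it; the pipeline depth, dv and the number of iterations are illustrative assumptions, not figures from the text, and the throughput estimate is only a rough order of magnitude.

```python
# Worked example of: latency = (q-1+pipeline)*dv*(iterations+1) + (q-1+pipeline) cycles.
def decoder_latency(q, dv, iterations, pipeline):
    per_submatrix = (q - 1) + pipeline
    return per_submatrix * dv * (iterations + 1) + per_submatrix

q, dv, iterations, pipeline = 32, 4, 10, 5        # illustrative parameters (assumed values)
cycles = decoder_latency(q, dv, iterations, pipeline)
print(cycles)                                     # total clock cycles per codeword
# Rough coded throughput ~ N * p * f_clk / cycles for an (N, K) code over GF(2^p)
print(837 * 5 * 238e6 / cycles / 1e6, "Mbps (rough estimate)")
```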
The proposed ascendant sorter unit can be seen as an L-minimum finder. The architecture is based on the one from [20] but, instead of looking for two minimums, it looks for L minimums. The main difference between this proposal and the one in [20] is that we do not apply masks to compute the minimums different from the absolute one. Avoiding masks reduces the critical path at the cost of increasing some hardware resources. We describe an architecture for the sorter unit for L=4 and dc=27, but it can be easily generalized. In addition, the selection of the radix for each one of the stages is not optimized, as the objective is just to show that there is a possible solution to implement the ascendant sorter unit with moderate area and continuous processing (each clock cycle a new group of elements can be sorted, after the latency generated by the pipeline with the first input).
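A behavioural sketch of such an L-minimum finder is given below: every pair of inputs is compared once, each input counts how many comparisons it loses, and the inputs whose count is below L are the L smallest. This mirrors the compare/count/select structure described in the text, without gate-level detail; the function name and the tie-breaking rule are ours.

```python
# Hedged sketch of an L-minimum finder over the dc reliability differences.
def l_minimum_indices(values, L):
    ranks = [0] * len(values)
    for a in range(len(values)):
        for b in range(a + 1, len(values)):      # one comparator per input pair
            if values[a] > values[b] or (values[a] == values[b] and a > b):
                ranks[a] += 1                     # input a loses this comparison
            else:
                ranks[b] += 1
    return [i for i, r in enumerate(ranks) if r < L]   # indices of the L smallest inputs

diffs = [7.0, 0.5, 3.2, 5.0, 0.5, 9.1]            # illustrative reliability differences
print(l_minimum_indices(diffs, L=4))              # indices of the 4 smallest differences
```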
The architecture for one unit of stage I=0 is illustrated on
In the invention, the decoder of
We here below give estimated results of area and throughput for the decoder architecture described above.
To perform the estimation, analytical methods such as the ones in [21] were applied. To ensure a fair comparison with other works, we overestimate some hardware resources to provide an upper bound on both area and critical path. For example, the area of the first and second maximum finder in
On the other hand, the MV-SF architecture adds hardware to compute the test vectors, increasing the length of the critical path compared to ES-GBFDA in [18]. The number of gates of the critical path is increased at the check node for the selection of the test vectors (
Although this critical path is two gates longer than the one in [18], the effect of routing is bigger than that of the logic depth, so we can assume a frequency of ~238 MHz without introducing a big error in the estimation. Hardware resources for the MV-SF decoder with the previous parameters can be found in the table of
In the table of
Comparing the ES-GBFDA decoder to the one implementing the method of the invention, we increase the area by 1.5M/847K=1.77 times with a slightly lower throughput (540 Mbps instead of 615 Mbps).
The decoder of Algorithm 2 is 726/360=2 times less efficient than the one based on ES-GBFDA in [18], but it reaches a non-negligible coding gain of 0.44 dB. A direct mapping architecture of Algorithm 2 requires L+1 times the area of ES-GBFDA (L for the less reliable test vectors and one for the most reliable one), which is 4.23 MXOR. Hence, our proposal is 4.23M/1.5M=2.82 times less area consuming than a direct mapping architecture with 12% less throughput. In terms of efficiency, a direct mapping architecture (615 Mbps/4.23 MXOR=145) is more than two times less efficient than the proposal of this work, with the same coding gain.
To the best knowledge of the authors, the architecture in [15] is the most efficient one based on Min-Sum algorithm.
Compared to this architecture, the decoder of the invention requires 1.5M/806K=1.86 times more area, but reaches a throughput 540/149=3.62 times higher. The decoder for Algorithm 2 is 360/185=1.9 times more efficient than the one in [15] based on the Simplified Min-Sum, but a difference of 0.26 dB in coding gain should be taken into account. Regarding Min-Max architectures, we compare to the most efficient one, [22] (other efficient architectures are included in the table of
To sum up, the architecture based on the method of the invention has 1.86 and 2.75 times more area than the ones based on EMS or Min-Max, but reaches a throughput 3.5 times higher. MV-SF introduces 0.26 dB of performance loss compared to EMS and 0.21 dB compared to Min-Max.
[1] M. Davey and D. J. MacKay, “Low density parity check codes over GF(q)”, IEEE Commun. Letter, vol. 2, pp. 165-167, June 1998.
[2] L. Barnault and D. Declercq, “Fast decoding algorithm for LDPC over GF(2q)”, Proc. Info. Theory Workshop, pp. 70-73, Paris, France, March 2003.
[3] H. Wymeersch, H. Steendam and M. Moeneclaey, “Log-domain decoding of LDPC codes over GF(q),” Proc. IEEE Intl. Conf. on Commun., pp. 772-776, Paris, France, June 2004.
[4] C. Spagnol, E. Popovici and W. Marnane, “Hardware implementation of GF(2m) LDPC decoders”, IEEE Trans. on Circuits and Syst.-I, vol. 56, no. 12, pp. 2609-2620, December 2009.
[5] D. Declercq and M. Fossorier, “Decoding algorithms for nonbinary LDPC codes over GF(q),” IEEE Trans. on Commun., vol. 55, pp. 633-643, April 2007.
[6] C. Poulliat, M. Fossorier and D. Declercq, “Design of regular (2,dc)-LDPC codes over GF(q) using their binary images”, IEEE Trans. Commun., vol. 56(10), pp. 1626-1635, October 2008.
[7] V. Savin, “Min-max decoding for non binary LDPC codes,” Proc. IEEE ISIT, pp. 960-964, Toronto, Canada, July 2008.
[8] A. Voicila, D. Declercq, F. Verdier, M. Fossorier and P. Urard, “Low-Complexity Decoding for non-binary LDPC Codes in High Order Fields”, IEEE Trans. on Commun., vol. 58(5), pp 1365-1375, May 2010.
[9] E. Boutillon and L. Conde-Canencia, “Bubble check: a simplified algorithm for elementary check node processing in extended min-sum non-binary LDPC decoders,” Electronics Letters, vol. 46, pp. 633-634, April 2010.
[10] X. Zhang and F. Cai, “Partial-parallel decoder architecture for quasi-cyclic non-binary LDPC codes,” Proc. of Acoustics Speech and Signal Processing (ICASSP), pp. 1506-1509, Dallas, Tex., USA, March 2010.
[11] D. Zhao, X. Ma, C. Chen, and B. Bai, “A Low Complexity Decoding Algorithm for Majority-Logic Decodable Nonbinary LDPC Codes,” IEEE Commun. Letters, vol. 14, no. 11, pp. 1062-1064, November 2010.
[12] C. Chen, B. Bai, X. Wang, and M. Xu, "Nonbinary LDPC Codes Constructed Based on a Cyclic MDS Code and a Low-Complexity Nonbinary Message-Passing Decoding Algorithm," IEEE Commun. Letters, vol. 14, no. 3, pp. 239-241, March 2010.
[13] Y.-L. Ueng, C.-Y. Leong, C.-J. Yang, C.-C. Cheng, K.-H. Liao, and S.-W. Chen, “An efficient layered decoding architecture for nonbinary QC-LDPC codes,” IEEE Trans. on Circuits and Systems I: Regular Papers, vol. 59, no. 2, pp. 385-398, February 2012.
[14] J. Lin and Z. Yan, “Efficient shuffled decoder architecture for nonbinary quasicyclic LDPC codes,” IEEE Trans. on Very Large Scale Integration (VLSI) Systems, vol. PP, no. 99, p. 1, 2012.
[15] X. Chen and C.-L. Wang, "High-throughput efficient non-binary LDPC decoder based on the Simplified Min-Sum algorithm," IEEE Trans. on Circuits and Systems I: Regular Papers, vol. 59, no. 11, pp. 2784-2794, November 2012.
[16] X. Zhang, F. Cai and S. Lin, “Low-Complexity Reliability-Based Message-Passing Decoder Architectures for Non-Binary LDPC Codes,” IEEE Trans. on Very Large Scale Integration (VLSI) Systems, vol. 20, no. 11, pp. 1938-1950, September 2011.
[17] F. Garcia-Herrero, M. J. Canet and J. Valls, “Architecture of generalized bit-flipping decoding for high-rate non-binary LDPC codes,” Springer Circuits, Systems, and Signal Processing, vol. 32, no. 2, pp. 727-741, April 2013.
[18] F. Garcia-Herrero, M. J. Canet, J. Valls, “Decoder for an Enhanced Serial Generalized Bit Flipping Algorithm,” IEEE International Conference on Electronics, Circuits and Systems (ICECS), pp. 412-415, Sevilla, Spain, December 2012.
[19] B. Zhou, J. Kang, S. Song, S. Lin, K. Abdel-Ghaffar, and M. Xu, “Construction of non-binary quasi-cyclic LDPC codes by arrays and array dispersions,” IEEE Trans. on Commun., vol. 57, no. 6, pp. 1652-1662, June 2009.
[20] L. Amaru, M. Martina, and G. Masera, “High speed architectures for finding the first two maximum/minimum values,” IEEE Trans. on Very Large Scale Integration (VLSI) Systems, vol. 20, no. 12, pp. 2342-2346, December 2012.
[21] X. Zhang and F. Cai, “Reduced-complexity decoder architecture for nonbinary LDPC codes,” IEEE Trans. on Very Large Scale Integration (VLSI) Systems, vol. 19, no. 7, pp. 1229-1238, July 2011.
[22] F. Cai; X. Zhang, “Relaxed Min-Max Decoder Architectures for Nonbinary Low-Density Parity-Check Codes,” IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2013.
[23] F. Garcia-Herrero, D. Declercq, J. Valls, “Non-Binary LDPC Decoder based on Symbol Flipping with Multiple Votes,” submitted to IEEE Commun. Letters, 2013.