Threshold-based min-sum algorithm to lower the error floors of quantized low-density parity-check decoders

Information

  • Patent Grant
  • Patent Number
    11,962,324
  • Date Filed
    Friday, April 29, 2022
  • Date Issued
    Tuesday, April 16, 2024
Abstract
A modified version of the min-sum algorithm (“MSA”) that can lower the error floor of quantized LDPC decoders. A threshold attenuated min-sum algorithm (“TAMSA”) and/or threshold offset min-sum algorithm (“TOMSA”) selectively attenuates or offsets a check node log-likelihood ratio (“LLR”) if the check node receives any variable node LLR with magnitude below a predetermined threshold, while allowing a check node LLR to reach the maximum quantizer level if all the variable node LLRs received by the check node have magnitude greater than the threshold. Embodiments of the present invention can provide desirable results even without knowledge of the location, type, or multiplicity of problematic graphical objects such as trapping sets, and can be implemented with only a minor modification to existing decoder hardware.
Description
BACKGROUND OF THE INVENTION

Embodiments of the present invention relate to a method and apparatus that selectively attenuates or offsets the messages in a Low-density parity-check (LDPC) decoder based on a simple threshold comparison.


LDPC codes are error-correcting codes that have been widely adopted in practice for reliable communication and storage of information, e.g., cellular data (5G), Wi-Fi, optical communication, space and satellite communication, magnetic recording, flash memories, and so on. Implementation of LDPC decoders involves iteratively passing quantized messages between processing units on the chip. To reduce implementation and usage costs (e.g., power, area, speed), an approximation to the usual algorithm, called the min-sum algorithm (“MSA”), is employed. Variants of min-sum are used in practice to adjust for the error in approximation and to improve error correction performance. These variants use an attenuation or offset (reduction) of the message values that are passed. Known implementations use a uniform attenuation or offset, meaning that the messages passed are all reduced in value in the same way; such known systems thus lack the ability to attenuate individual messages.


Currently, known approaches that can lower the error floor of a conventional state-of-the-art attenuated MSA (“AMSA”) decoder include:

    • Accepting errors and employing post-processing, which increases chip space, power consumption, and latency;
    • Requesting re-transmission (hybrid ARQ, “HARQ”), which requires a feedback channel and adds latency, transmission power, and decoder power; and/or
    • Increasing message precision (more bits for quantization), which increases hardware cost and memory requirements.


More specifically, LDPC codes are a class of linear block codes for which the performance of iterative message passing (“MP”) decoding can approach that of much more complex maximum likelihood (“ML”) decoding. The MSA is a simplified version of the sum-product algorithm (“SPA”) that is commonly used for iterative MP decoding of LDPC codes, where the check node computation is approximated and hence is significantly easier to perform. This simplification is particularly desirable for hardware decoder implementations. Moreover, unlike the SPA, no estimation of the channel signal-to-noise ratio (“SNR”) is needed at the receiver for an additive white Gaussian noise (“AWGN”) channel.


Practical implementations of LDPC decoders require a finite precision (quantized) representation of the log-likelihood ratios. Quantized density evolution (“DE”) has been used to find the optimum attenuation and offset parameters for the attenuated min-sum algorithm (“AMSA”) and the offset min-sum algorithm (“OMSA”), in the sense that DE calculates the iterative decoding threshold, which characterizes the waterfall performance. Further improvements to the waterfall performance of the MSA for quantized and unquantized decoders have been made. At high SNRs, quantization typically causes the early onset of an error floor. It has been shown that certain objects, called trapping sets, elementary trapping sets, leafless elementary trapping sets, or absorbing sets in the Tanner graph, cause the iterative decoding process to get stuck, resulting in decoding errors at high SNRs. Hereafter, the sub-graphs induced by these sets, as well as similar sets, are referred to as problematic graphical objects. Several methods based on problematic objects have been proposed to estimate the performance of LDPC codes and a number of strategies have been proposed to lower the error floor of quantized LDPC decoders, including quantizer design, modifications to iterative decoding, and post-processing. Since each of these methods requires additional hardware and/or complexity, there is a present need for a system that can provide a simple solution, such as selective attenuation of message magnitudes, to avoid decoding failure.


BRIEF SUMMARY OF EMBODIMENTS OF THE PRESENT INVENTION

Embodiments of the present invention relate to a method for lowering the error floor of a quantized low-density parity-check decoder including comparing a variable node log-likelihood ratio magnitude with a threshold; based on the results of the comparison, determining whether or not to apply a reduction to the variable node log-likelihood ratio magnitude; applying a reduction to the variable node log-likelihood ratio magnitude in instances when the determining step determines that a reduction should be applied; and not applying a reduction in instances when the determining step determines that a reduction should not be applied. Applying a reduction can include applying attenuation and/or offset. Applying attenuation can include multiplying the variable node log-likelihood ratio magnitude by a value that is greater than 0 and less than 1, and/or by a value that is less than one and greater than or equal to 0.5. Applying attenuation can also include applying attenuation to the variable node log-likelihood ratio magnitude before it is passed from a check node to a variable node. Applying an offset can include applying a subtraction function, which itself can include subtraction of a predetermined number that is greater than zero and/or subtracting a value greater than zero before passing the log-likelihood ratio to a variable node.
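The compare-then-reduce step described above can be sketched in a few lines. The following is an illustrative Python sketch with hypothetical function and parameter names (`reduce_llr`, `alpha`, `beta`, `threshold`), not the claimed hardware implementation:

```python
def reduce_llr(magnitude, threshold, alpha=0.8, beta=None):
    """Selectively reduce an LLR magnitude.

    A reduction (attenuation by alpha, or offset by beta when beta is
    given) is applied only when the magnitude does not exceed the
    threshold; otherwise the magnitude passes through unchanged and may
    reach the maximum quantizer level.
    """
    if magnitude > threshold:
        return magnitude                   # reliable: no reduction applied
    if beta is not None:
        return max(magnitude - beta, 0.0)  # offset (subtraction, floored at 0)
    return alpha * magnitude               # attenuation (0 < alpha < 1)
```

A large (reliable) magnitude is returned untouched, while a small one is attenuated or offset, mirroring the selective reduction of the claims.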


In one embodiment, comparing a magnitude of a variable node log-likelihood ratio with a threshold can include determining whether the variable node log-likelihood ratio is less than the threshold, or less than or equal to the threshold, and/or comparing a magnitude of a variable node log-likelihood ratio with a predetermined value, which predetermined value can optionally have a magnitude greater than 0 and up to 2.5. Comparing a variable node log-likelihood ratio magnitude with a threshold can include comparing a variable node log-likelihood ratio magnitude which comprises a minimum of a set of variable node log-likelihood ratio magnitudes with a threshold. The method can optionally include comparing a second minimum of a set of variable node log-likelihood ratio magnitudes with a threshold. Applying a reduction to the variable node log-likelihood ratio magnitude can include applying a multiplication function of greater than zero and less than one and applying a subtraction function to the variable node log-likelihood ratio. Applying a reduction to the variable node log-likelihood ratio magnitude can also include varying a magnitude of the reduction by iteration and/or varying a magnitude of the reduction based on graph location τ(j) of check node index j.


In this application, we first show that the AMSA and OMSA exhibit worse (higher) error floors than the MSA when their parameters are optimized for waterfall performance. A modification is then introduced to the MSA that applies the strategies from the AMSA and the OMSA selectively, i.e., it applies attenuation/offset when it would be helpful and does not apply it otherwise. Assuming that there exist problematic graphical objects that cause most of the decoding failures in the high SNR regime, it will also be shown that the new MSA modification causes these objects to become less prone to decoding failures. As a result, embodiments of the present invention match the waterfall performance of the AMSA and OMSA, while improving their error floor performance. No information about the location or structure of the problematic objects is required to utilize this approach. However, knowledge of the problematic object facilitates determination of the optimal algorithm parameters. Moreover, the AMSA (respectively, OMSA) can be viewed as a particular case of the new threshold attenuated min-sum algorithm (“TAMSA”) (respectively, the new threshold offset min-sum algorithm (“TOMSA”)) and, as such, the performance of the TAMSA (respectively, TOMSA) with optimal parameter selection is at least as good as that of the AMSA (respectively, OMSA). The complexity of embodiments of the present invention can be quantified to show that, because it uses the information that is already generated inside the check node processing unit of the AMSA or OMSA, the new algorithm is only slightly more complex to implement than the known systems.


Due to the time-consuming process of simulating the high SNR performance of LDPC codes, a code-independent and problematic object-specific method is preferably used to guide/optimize parameter selection and to evaluate the impact of embodiments of the present invention on the performance of LDPC codes containing problematic objects at high SNRs. The results show that embodiments of the present invention improve (i.e. reduce) the error floor caused by specific problematic objects compared to the MSA, AMSA, or OMSA.


Objects, advantages and novel features, and further scope of applicability of the present invention will be set forth in part in the detailed description to follow, taken in conjunction with the accompanying drawings, and in part will become apparent to those skilled in the art upon examination of the following, or may be learned by practice of the invention. The objects and advantages of the invention may be realized and attained by means of the instrumentalities and combinations particularly pointed out in the appended claims.





BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

The accompanying drawings, which are incorporated into and form a part of the specification, illustrate one or more embodiments of the present invention and, together with the description, serve to explain the principles of the invention. The drawings are only for the purpose of illustrating one or more embodiments of the invention and are not to be construed as limiting the invention. In the drawings:



FIG. 1 is a drawing which illustrates the sub-graph G(A) induced by a (5,3) absorbing set A;



FIG. 2 is a graph which illustrates the simulated performance of an (8000,4000) LDPC code decoded with the MSA, AMSA, and OMSA, wherein solid curves represent bit-error-rate (“BER”) and dashed curves represent frame-error-rate (“FER”);



FIG. 3A is a graph which illustrates the estimated FER performance vs. the parameter sets (α′, τ) for 0.5≤α′≤1, 0≤τ≤lmax, and Eb/N0=2 dB;



FIG. 3B is a graph which illustrates a contour plot of the estimated FER performance vs. the parameter sets (α′, τ) for 0.5≤α′≤1, 0≤τ≤lmax, and Eb/N0=2 dB;



FIG. 4 is a graph which illustrates the simulated performance of an (8000,4000) LDPC code decoded with the MSA, AMSA, OMSA, TAMSA, and TOMSA, wherein solid curves represent BER and dashed curves represent FER;



FIG. 5 is a graph which illustrates the simulated performance of an (1008,504) progressive-edge-growth LDPC (“PEG-LDPC”) code decoded with the AMSA, OMSA, and TAMSA, wherein solid curves represent BER and dashed curves represent FER;



FIG. 6 is a graph which illustrates the simulated performance of a (155,64) Tanner LDPC code with the MSA, AMSA, and TAMSA, wherein solid curves represent BER and dashed curves represent FER;



FIG. 7 is a graph which illustrates the simulated BER performance of the (155,64) Tanner LDPC code with the MSA and the TAMSA with both layered MP and standard MP decoding;



FIG. 8 is a graph which illustrates a sliding window (“SW”) decoder for spatially coupled LDPC codes (“SC-LDPCC”) operating on the parity-check matrix HSC;



FIG. 9 is a graph which illustrates an LDPC block code (“LDPC-BC”) with 4000×8000 parity-check matrix HBC, μ′=4, ν′=8, and γ=1000, which is partitioned by a cutting vector w=[1, 3, 5, 7] to construct the two component matrices H0 and H1 based on H0′ and H1′, wherein each square represents a γ×γ matrix;



FIG. 10 is a graph which illustrates the performance of an (8000,4000) LDPC-BC and its spatially coupled version decoded with the MSA, AMSA, and TAMSA, wherein dashed curves represent the LDPC-BC and solid curves represent the SC-LDPCC decoded with a SW decoder with W=6; and



FIG. 11 is a graph which illustrates the simulated performance of the quasi-cyclic (155,64) Tanner LDPC-BC and its spatially coupled version decoded with an SW decoder with W=6 and the MSA, AMSA, and TAMSA.





DETAILED DESCRIPTION OF THE INVENTION

Embodiments of the present invention relate to decoding low-density parity-check (“LDPC”) codes, wherein the attenuated min-sum algorithm (“AMSA”) and the offset min-sum algorithm (“OMSA”) can outperform the conventional min-sum algorithm (“MSA”) at low signal-to-noise-ratios (“SNRs”), i.e., in the “waterfall region” of the bit error rate curve. For quantized decoders, the MSA actually outperforms the AMSA and OMSA in the “error floor” (high SNR) region, and all three algorithms suffer from a relatively high error floor. Embodiments of the present invention can include a modified MSA that can outperform the MSA, AMSA, and OMSA across all SNRs. The implementation complexity of embodiments of the present invention is only slightly higher than that of the AMSA or OMSA. The simulated performance of embodiments of the present invention, using several classes of LDPC codes (including spatially coupled LDPC codes), is shown to outperform the MSA, AMSA, and OMSA across all SNRs.


Embodiments of the present invention relate to a novel modification to the check node update of quantized MSA that is straightforward to implement and reduces the error floor when compared to other known methods.


As background, let V={v1, v2, . . . , vn} and C={c1, c2, . . . , cm} represent the sets of variable nodes and check nodes, respectively, of a bipartite Tanner graph representation of an LDPC code with parity-check matrix H. Assume that a binary codeword u=(u1, u2, . . . , un) is binary phase shift keying (“BPSK”) modulated such that each zero is mapped to +1 and each one is mapped to −1. The modulated signal is transmitted over an AWGN channel whose noise has mean 0 and standard deviation σ. The received signal is r̃ = 1 − 2u + n, where n is the channel noise. The quantized version of r̃ is denoted as r=(r1, r2, . . . , rn).
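As an illustration of the channel model just described, the following Python sketch (the function name and seeding are illustrative assumptions) maps bits to BPSK symbols and adds Gaussian noise of standard deviation σ:

```python
import random

def bpsk_awgn(codeword, sigma, seed=0):
    """BPSK-modulate bits (0 -> +1, 1 -> -1) and add white Gaussian noise
    with mean 0 and standard deviation sigma, i.e., r~ = 1 - 2u + n."""
    rng = random.Random(seed)
    return [1 - 2 * u + rng.gauss(0.0, sigma) for u in codeword]
```

With σ = 0 the channel is noiseless and the BPSK symbols are returned unchanged.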


The Min-Sum Algorithm and its Modifications. The MSA is an iterative MP algorithm that is simpler to implement than the SPA. Unlike the SPA, the MSA does not require channel noise information to calculate the channel log-likelihood ratios (“LLRs”). The SPA is optimum for codes without cycles, but for finite length codes and finite precision LLRs, the SPA is not necessarily optimum, particularly with respect to error floor performance. Let 𝕍_{i→j} represent the LLR passed from variable node v_i to check node c_j in a given iteration and let ℂ_{j→i} represent the LLR passed from c_j to v_i. The check nodes that are neighbors to v_i are denoted N(v_i), and the variable nodes that are neighbors to c_j are denoted N(c_j). To initialize decoding, each variable node v_i passes r_i to the check nodes in N(v_i), i.e.,

𝕍_{i→j} = r_i,  (Equation 1)

where the 𝕍_{i→j}'s computed throughout the decoding process are referred to as the variable node LLRs. The check node operation to calculate the LLRs sent from check node c_j to variable node v_i is given by












ℂ_{j→i} = ( ∏_{i′∈N(c_j)\i} sign(𝕍_{i′→j}) ) · min_{i′∈N(c_j)\i} |𝕍_{i′→j}|,  (Equation 2)








where the ℂ_{j→i}'s computed throughout the decoding process are referred to as the check node LLRs. After each iteration, the hard decision estimate û is checked to see if it is a valid codeword, where û_i = 0 if and only if











r_i + ∑_{j∈N(v_i)} ℂ_{j→i} > 0.  (Equation 3)








If û is a valid codeword, or if the iteration number has reached Imax, decoding stops. Otherwise, the variable node LLRs are calculated as

𝕍_{i→j} = r_i + ∑_{j′∈N(v_i)\j} ℂ_{j′→i}  (Equation 4)

and decoding continues using equation 2. Two modified versions of the MSA, called attenuated (or normalized) MSA (“AMSA”) and offset MSA (“OMSA”), were introduced to reduce the waterfall performance loss of the MSA compared to the SPA. The modified check node computations are given by











ℂ_{j→i} = α · ( ∏_{i′∈N(c_j)\i} sign(𝕍_{i′→j}) ) · min_{i′∈N(c_j)\i} |𝕍_{i′→j}|  (Equation 5)

and

ℂ_{j→i} = ( ∏_{i′∈N(c_j)\i} sign(𝕍_{i′→j}) ) · max{ min_{i′∈N(c_j)\i} |𝕍_{i′→j}| − β, 0 },  (Equation 6)








respectively, where 0 < α < 1 and β > 0 are constants. In both algorithms, the check node LLR magnitudes are modified to be smaller than those of the MSA. This reduces the negative effect of overestimating the LLR magnitudes in the MSA, whose larger check node LLR magnitudes compared to the SPA can cause additional errors in decoding at low SNRs.


Implementation of the MSA, AMSA, and OMSA. To implement the check node update of equation 2, in the check node processing unit corresponding to c_j, the sign and magnitude of ℂ_{j→i} to be sent to each v_i are calculated separately as follows. First, for all i′∈N(c_j) the signs of 𝕍_{i′→j} are multiplied to form ∏_{i′∈N(c_j)} sign(𝕍_{i′→j}). Then, for each i∈N(c_j), sign(𝕍_{i→j}) is multiplied by ∏_{i′∈N(c_j)} sign(𝕍_{i′→j}) to form ∏_{i′∈N(c_j)\i} sign(𝕍_{i′→j}). Second, the process of calculating |ℂ_{j→i}| involves finding two minimum values, the first and second minimum of all the |𝕍_{i′→j}| at check node c_j, denoted m_{1,j} and m_{2,j}, respectively. For each ℂ_{j→i}, if the variable node v_i corresponds to m_{1,j}, then |ℂ_{j→i}| = m_{2,j}; otherwise, |ℂ_{j→i}| = m_{1,j}. The implementation of equation 5 or 6 is the same, with an extra step of attenuating or offsetting the minimum values.
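The two-minimum check node update just described can be modeled in software as follows. This is an illustrative Python sketch (the name `check_node_update` is hypothetical), covering the plain MSA (α=1, β=0), the AMSA (α<1), and the OMSA (β>0):

```python
def check_node_update(vllrs, alpha=1.0, beta=0.0):
    """Min-sum check node update via the two-minimum trick.

    vllrs: incoming variable node LLRs at check node c_j (at least two).
    alpha=1, beta=0 gives the plain MSA (equation 2); alpha<1 the AMSA
    (equation 5); beta>0 the OMSA (equation 6).
    """
    mags = [abs(v) for v in vllrs]
    m1 = min(mags)                        # first minimum m_{1,j}
    i1 = mags.index(m1)                   # which input achieved m_{1,j}
    m2 = min(mags[:i1] + mags[i1 + 1:])   # second minimum m_{2,j}
    total_sign = 1
    for v in vllrs:
        total_sign *= 1 if v >= 0 else -1
    out = []
    for i, v in enumerate(vllrs):
        sign_i = 1 if v >= 0 else -1      # dividing a sign out equals multiplying by it
        mag = m2 if i == i1 else m1       # exclude the input's own magnitude
        out.append(total_sign * sign_i * max(alpha * mag - beta, 0.0))
    return out
```

Only the input that achieved m_{1,j} receives m_{2,j}; every other output magnitude is m_{1,j}, exactly as in the hardware description above.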


The process of finding m_{1,j} and m_{2,j} is complex to implement. Therefore, several methods have been suggested to reduce the complexity of the process or to avoid calculating m_{2,j} and instead estimate it based on m_{1,j}. The result is that m_{1,j} plays an important role in the check node processing unit, and embodiments of the present invention can also rely on m_{1,j}, making the extension of the algorithm to techniques designed for complexity reduction possible.


Quantized Decoders. In a uniform quantized decoder, the operations in equations 1-6 have finite precision, i.e., the values are quantized to a set of numbers ranging from −lmax to lmax, with step size Δ, where the resulting quantizer thresholds are set from







−l_max + Δ/2  to  l_max − Δ/2.






The attenuation and offset parameters α and β in equations 5 and 6 that have the best iterative decoding thresholds can be found by computer simulation or by using a technique called quantized density evolution.
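For illustration, a uniform quantizer with the parameters used later in this description (Δ=0.15, l_max=2.25, giving 31 levels, i.e., a 5-bit quantizer) can be sketched as follows; the function name is illustrative:

```python
def quantize(x, step=0.15, lmax=2.25):
    """Uniform quantizer: round to the nearest multiple of the step size
    Delta and clip to [-lmax, lmax]. With step=0.15 and lmax=2.25 this
    gives 31 levels (a 5-bit quantizer), with decision thresholds at
    -lmax + step/2, ..., lmax - step/2."""
    q = round(x / step) * step
    return max(-lmax, min(lmax, q))
```

Values beyond ±l_max saturate at the maximum quantizer level, which is the level the unattenuated check node LLRs are allowed to reach.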


Trapping Sets and Error Floors. Let A denote a subset of V of cardinality a. Let Aeven and Aodd represent the subsets of check nodes connected to variable nodes in A with even and odd degrees, respectively, where |Aodd|=b. Here, A is referred to as an (a, b) trapping set. A is defined to be an (a, b) absorbing set if each variable node in A is connected to fewer check nodes in Aodd than in Aeven. These sets, along with similar objects such as elementary trapping sets and leafless elementary trapping sets, are known to cause most of the decoding errors at high SNRs in MP decoders. In FIG. 1, the sub-graph G(A) induced by a (5,3) absorbing set A is shown.
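The (a, b) classification and the absorbing set condition above can be checked mechanically for a given subset of variable nodes. The following Python sketch (illustrative names; Tanner graph edges given as (variable, check) pairs) is one way to do so:

```python
def classify_subset(A, edges):
    """Classify a subset A of variable nodes as an (a, b) trapping set.

    edges: Tanner graph edges as (variable, check) pairs. A check node
    touching A with odd degree (into A) belongs to A_odd, with even
    degree to A_even; b = |A_odd|. A is an absorbing set if every
    variable node in A has fewer neighbors in A_odd than in A_even.
    Returns (a, b, is_absorbing).
    """
    A = set(A)
    deg = {}                              # degree of each check node into A
    for v, c in edges:
        if v in A:
            deg[c] = deg.get(c, 0) + 1
    odd = {c for c, d in deg.items() if d % 2 == 1}
    absorbing = all(
        sum(1 for vv, c in edges if vv == v and c in odd)
        < sum(1 for vv, c in edges if vv == v and c in deg and c not in odd)
        for v in A)
    return len(A), len(odd), absorbing
```

For example, two variable nodes sharing two checks of degree two plus one odd-degree check form a (2,1) absorbing set under this definition.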


Threshold Attenuated/Offset MSA.—Motivation and Rationale. Although it is known that applying attenuation or offset when computing the check node LLRs typically improves performance in the low SNR (waterfall) region of the BER curve for quantized decoders, because high SNR performance is tied to problematic graphical objects, the AMSA and OMSA do not necessarily achieve a good error floor. For example, assuming BPSK modulation on the AWGN channel, FIG. 2 illustrates the simulated bit-error-rate (“BER”) and frame-error-rate (“FER”) performance of the (8000,4000) code, with a 5-bit uniform quantizer, Δ=0.15, and lmax=2.25, decoded using the MSA, AMSA, and OMSA. The performance of the quantized SPA using 6-bit quantization (1-bit sign, 2-bit integer, 3-bit fractional) is also illustrated for comparison. The AMSA and OMSA gain about 0.7 dB in the waterfall compared to the MSA. However, all the algorithms eventually exhibit an error floor at higher SNRs.












ℂ_{j→i} =
  ( ∏_{i′∈N(c_j)\i} sign(𝕍_{i′→j}) ) · min_{i′∈N(c_j)\i} |𝕍_{i′→j}|,  if min_{i′∈N(c_j)} |𝕍_{i′→j}| > τ,
  α′ · ( ∏_{i′∈N(c_j)\i} sign(𝕍_{i′→j}) ) · min_{i′∈N(c_j)\i} |𝕍_{i′→j}|,  otherwise,  (Equation 7)

and

ℂ_{j→i} =
  ( ∏_{i′∈N(c_j)\i} sign(𝕍_{i′→j}) ) · min_{i′∈N(c_j)\i} |𝕍_{i′→j}|,  if min_{i′∈N(c_j)} |𝕍_{i′→j}| > τ,
  ( ∏_{i′∈N(c_j)\i} sign(𝕍_{i′→j}) ) · max{ min_{i′∈N(c_j)\i} |𝕍_{i′→j}| − β′, 0 },  otherwise.  (Equation 8)







At high SNRs, for a received vector r of channel LLRs, decoding is successful with high probability. In the case of unsuccessful decoding, it is known that a small number of problematic objects are likely to be the cause, i.e., objects containing variable nodes with unreliable (small magnitude) LLR values. In this regime, however, the channel LLRs for the variable nodes outside a problematic object will mostly be reliable and have large magnitudes. In other words, the outside LLRs are typically initially large (with the correct sign) and will continue to grow quickly to even larger values (often lmax). However, even if some and/or all of the incorrect sign LLRs inside a problematic object are initially small, they can also be observed to grow quickly to larger values without correcting the errors in sign. This happens because the problematic object contains at least one short cycle, which prevents correction of the sign errors.


To improve the probability of correcting errors occurring in a problematic object G(A) at high SNR, we have found that it is helpful if the LLR magnitudes sent from a check node cj∈Aeven to variable nodes vi∈A grow more slowly (i.e. are attenuated) when cj receives at least one unreliable (small magnitude) LLR from a variable node in A. This ensures that any incorrect LLRs received from the channel in A are not reinforced. On the other hand, if a check node cj (inside or outside G(A)) receives all large magnitude LLRs, these can be helpful for decoding and hence should not be attenuated. These two factors form the essence of the new threshold-based modification of AMSA/OMSA, presented below, that can lead to correct decoding of a received vector r that would not otherwise occur.


A Threshold Attenuated/Offset MSA. An embodiment of the present invention preferably makes use of a relationship observed at high SNRs between the variable node LLR magnitudes |𝕍_{i→j}| received by check node c_j and the likelihood of check node c_j being inside a problematic object G(A). This relationship allows the problem of locating errors caused by G(A) to be reduced to merely considering the variable node LLR magnitudes |𝕍_{i→j}| received at check node c_j, i.e., relying on the |𝕍_{i→j}|'s to tell whether c_j is likely to be inside G(A) and has the potential to cause decoding failures. At high SNRs, the check node LLRs outside G(A) typically grow faster than the LLRs inside G(A). Therefore, if a check node c_j receives at least one small LLR, i.e., min_{i′∈N(c_j)} |𝕍_{i′→j}| ≜ m_{1,j} ≤ τ, where τ is a predetermined threshold, it is likely that c_j is inside G(A). Consequently, to improve the error floor performance, the check node computation in equation 7 is preferably used to replace equation 2, where α′ ≤ 1 is an attenuation parameter designed to reduce the check node LLR magnitudes sent from a check node c_j inside G(A) to the variable nodes in A. This modified check node update algorithm is referred to as the threshold attenuated MSA (“TAMSA”). As will be further shown, with a proper choice of the parameters τ and α′, the TAMSA is capable of correctly decoding some of the errors that occur in the AMSA or MSA due to problematic objects.


In equation 7, α′ is used to make the check node LLR magnitudes smaller when min_{i′∈N(c_j)} |𝕍_{i′→j}| ≤ τ. As an alternative (or in combination), an offset parameter β′ can be used to serve the same purpose, as illustrated in equation 8, where β′ > 0 is an offset parameter that reduces the check node LLR magnitudes. This modified check node update algorithm is denoted as the threshold offset MSA (“TOMSA”). Both the TAMSA and TOMSA selectively, or locally, reduce the magnitudes of the check node LLRs that are likely to belong to a problematic object without requiring knowledge of its location or structure. The TAMSA and TOMSA add a simple threshold test compared to the AMSA and OMSA, while the attenuation (offset) parameter only needs to be applied to a few check nodes at high SNRs.


In one embodiment, the threshold τ can optionally be varied by iteration number—for example, the value of τ used in equations 7 and 8 can be a function τ(I) of the iteration number 0≤I≤Imax. The threshold τ can also optionally be varied by graph location—for example as a function τ(j) of check node index j. Although embodiments of the present invention can provide desirable results without such variations, such variations can provide further performance improvements.


Implementation of Threshold Attenuated/Offset MSA. For the MSA, for some number K of inputs to a check node processing unit, the implementation of sub-units to calculate m_{1,j} and m_{2,j} and the index needed to identify which input created m_{1,j} requires a significant number of multiplexers, comparators, and inverters, which is a function of K. A check node processing unit preferably includes some additional sub-units to generate the proper output and apply the attenuation (offset) parameter for the AMSA and/or OMSA. Implementation of the TAMSA and/or TOMSA adds just two simple steps to the implementation of the AMSA and/or OMSA. First, for a check node processing unit corresponding to c_j, after calculating m_{1,j} and m_{2,j}, the value of m_{1,j} is preferably compared to τ. Second, a decision is made based on the outcome of the comparison to use the attenuated (offset) or non-attenuated (non-offset) output. Consequently, implementation of the TAMSA and/or TOMSA requires just one extra comparator and K extra multiplexers to decide if attenuation (offset) should be applied. If not, the additional multiplication for attenuation (or subtraction for offset) is not necessary. Hence, the extra requirements do not significantly increase the overall area or delay of a check node processing unit.
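The threshold test of equation 7 can be modeled in software as a one-line addition to the two-minimum check node update. The following Python sketch (illustrative names and default parameters) assumes at least two inputs per check node:

```python
def tamsa_check_node_update(vllrs, tau=1.5, alpha=0.8):
    """Threshold attenuated min-sum (TAMSA) check node update, per
    equation 7: attenuate by alpha only when the smallest incoming
    magnitude m_{1,j} is at or below the threshold tau; otherwise pass
    the plain min-sum output, which may reach the maximum quantizer level.
    """
    mags = [abs(v) for v in vllrs]
    m1 = min(mags)                         # first minimum m_{1,j}
    i1 = mags.index(m1)
    m2 = min(mags[:i1] + mags[i1 + 1:])    # second minimum m_{2,j}
    scale = alpha if m1 <= tau else 1.0    # the single extra threshold test
    total_sign = 1
    for v in vllrs:
        total_sign *= 1 if v >= 0 else -1
    return [total_sign * (1 if v >= 0 else -1) * scale * (m2 if i == i1 else m1)
            for i, v in enumerate(vllrs)]
```

Replacing the `scale` line with an offset `max(mag - beta, 0.0)` branch would model the TOMSA of equation 8 instead; the rest of the unit is unchanged, mirroring the one-comparator hardware cost noted above.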


To illustrate the robustness of an embodiment of the present invention, consider the (8000,4000) MacKay code, the progressive edge growth (“PEG”) (1008,504) LDPC code, and the quasi-cyclic (155,64) Tanner code decoded with various algorithms, including the TAMSA and TOMSA according to an embodiment of the present invention, with different parameters, each using a 5-bit uniform quantizer with Δ=0.15 and lmax=2.25.


Performance Estimation Based on Problematic Objects. The impact of a problematic object on the performance of an LDPC code decoded with the MSA, AMSA, and TAMSA can be estimated. To do so, a lower bound on the FER of any LDPC code containing a given problematic object (sub-graph) is derived, assuming a particular message passing decoder and decoder quantization. A crucial aspect of the lower bound is that it is code-independent, in the sense that it can be derived based only on a problematic object and then applied to any code containing that object. Given the dominant problematic object, decoder quantization, and decoding algorithm, a performance estimate of the code containing the dominant object can be derived. The number, type, and location of problematic objects in the Tanner graph do not need to be known to implement the algorithm. However, if the dominant problematic object is known, the performance estimate can facilitate determination of the optimum algorithm parameters. The lower bounds are tight for a variety of codes, problematic objects, and decoding algorithms.


By analyzing the AWGN channel performance simulations of the (8000,4000) code with a 5-bit quantizer, the (5,3) absorbing set of FIG. 1 is found to be the major cause of errors in the error floor. Based on this problematic object, high SNR performance estimates of the code (or any code containing this problematic object as the dominant object) can be obtained for various TAMSA parameter sets and for various values of Eb/N0 (dB). For example, FIG. 3A plots the estimated FER performance vs. the parameter sets (α′, τ) for 0.5≤α′≤1, 0≤τ≤lmax, and Eb/N0=2 dB. (A contour plot of the same data is illustrated in FIG. 3B.) Note that when τ=lmax, the TAMSA is equivalent to the AMSA with α=α′, because attenuation is always applied. As illustrated in FIG. 3A, the line τ=lmax=2.25 has a very high FER, meaning that any code containing this (5,3) absorbing set is adversely affected in the error floor when decoded using the AMSA. Also, in two special cases, (α′, τ=0) and (α′=1, τ), the TAMSA is equivalent to the MSA, because attenuation is never applied for these parameter sets. From FIG. 3A, it can be predicted that the MSA will perform better than the AMSA in the error floor for any code for which the (5,3) absorbing set is dominant. It is also important to note from FIG. 3A that there are values of α′ and τ that lead to better performance than can be achieved by either the AMSA or the MSA for any code for which the (5,3) absorbing set is dominant. This observation supports making use of the parameter τ in the TAMSA to reduce the error probability associated with a specific problematic object compared to either the AMSA or the MSA.


Simulated Performance of LDPC Codes with TAMSA and TOMSA Decoders. FIG. 4 illustrates the BER and FER performance of the (8000,4000) code for the MSA, the AMSA with α=0.8, the OMSA with β=0.15, the TAMSA with parameters (α′=0.8, τ=1.5), and the TOMSA with parameters (β′=0.15, τ=2). A syndrome-check stopping rule with a maximum number of iterations Imax=50 was employed for all decoders. For the chosen parameters, the TAMSA and TOMSA exhibit one to two orders of magnitude better error floor performance than the MSA, AMSA, and OMSA while maintaining the same waterfall performance.
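The selective rule underlying the TAMSA can be sketched in a few lines of Python. This is an illustrative sketch, not the patented implementation: the function name, the use of NumPy, and modeling the quantizer as a simple saturation of magnitudes at lmax are assumptions made here for clarity.

```python
import numpy as np

def tamsa_check_update(v_llrs, alpha_p=0.8, tau=1.5, l_max=2.25):
    # Threshold attenuated min-sum (TAMSA) check-node update, sketched.
    # v_llrs holds the variable-node LLRs arriving at one check node;
    # the k-th output is the extrinsic LLR returned on edge k.
    v = np.asarray(v_llrs, dtype=float)
    signs = np.where(v >= 0, 1.0, -1.0)
    mags = np.abs(v)
    out = np.empty_like(v)
    for k in range(len(v)):
        others = np.delete(mags, k)     # extrinsic set: all edges but k
        m = others.min()                # usual min-sum magnitude
        s = signs.prod() * signs[k]     # sign product over the other edges
        if m < tau:                     # some incoming LLR is unreliable:
            m = alpha_p * m             # attenuate, as in the AMSA
        out[k] = s * min(m, l_max)      # otherwise the magnitude may reach
    return out                          # the quantizer limit l_max
```

Setting τ=0 or α′=1 reduces this to the MSA, and setting τ=lmax reduces it to the AMSA with α=α′. A TOMSA sketch would replace the attenuation line with a subtraction of β′ (floored at zero).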



FIG. 5 illustrates the BER and FER performance of the (semi-structured) (1008,504) PEG-LDPC code for the AMSA, OMSA, and TAMSA with three parameter sets: (α′=0.8, τ=2), (α′=0.8, τ=1.75), and (α′=0.75, τ=1.75). Again, the best error floors are achieved with the TAMSA. The parameter set (α′=0.75, τ=1.75) exhibits the most gain, about 1.5 orders of magnitude compared to the AMSA and OMSA for Eb/N0=4 dB, but its waterfall performance is slightly worse than for the parameter sets (α′=0.8, τ=2) and (α′=0.8, τ=1.75). This behavior allows the performance of the TAMSA to be tuned to fit a particular application by choosing the values of α′ and τ.



FIG. 6 illustrates the BER performance of the (highly structured) quasi-cyclic (155,64) Tanner code for the AMSA with different values of α, the TAMSA with parameter set (α′=0.8, τ=1.5), and the MSA. Again, it is observed that at high SNRs, the TAMSA significantly outperforms both the AMSA and the MSA, gaining about one order of magnitude in the error floor. An important performance metric for comparison of these algorithms is the average number of iterations performed. Table 1 gives the average number of iterations for the AMSA, MSA, and TAMSA recorded from 1 dB to 8 dB. We observe that the AMSA and TAMSA have an approximately equal number of average iterations. Moreover, both the AMSA and TAMSA provide a significant reduction in the average number of iterations when compared to the MSA at low SNR. This advantage diminishes as the SNR increases, and all of the algorithms have a similar average number of iterations at high SNR.


Table 1 illustrates the average number of iterations recorded for the quasi-cyclic (155,64) Tanner code with the MSA, AMSA, and TAMSA decoding algorithms.














TABLE 1

Eb/N0    MSA      AMSA     TAMSA
1 dB     68.95    59.28    59.24
2 dB     30.4     23.13    22.9
3 dB     7.82     6.28     6.2
4 dB     3.06     2.95     2.87
5 dB     1.97     1.98     1.98
6 dB     1.44     1.46     1.46
7 dB     1.09     1.1      1.1
8 dB     0.85     0.86     0.86

Layered MP decoding of LDPC-BCs converges faster than standard MP decoding and is commonly employed in the implementation of quasi-cyclic codes. FIG. 7 illustrates the BER performance of the quasi-cyclic (155,64) Tanner code, using both a layered MP decoder and a standard MP decoder with 100 iterations each, for both the MSA and the TAMSA with parameter set (α′=0.8, τ=1.5). The TAMSA, with both standard and layered decoding, outperforms the MSA, and the layered TAMSA is slightly better than the standard TAMSA in the error floor. Taken together, the results of FIGS. 4-7 illustrate the robustness of TAMSA decoding.


Parameter Set Selection for TAMSA and TOMSA Decoders. FIG. 5 illustrates that the TAMSA parameter sets that lead to the best error floor performance do not necessarily lead to the best waterfall performance. Depending on the application and design goals, the parameter sets can be chosen differently. For example, if α′=αopt, where αopt is the optimal α for the AMSA derived using quantized DE, the best waterfall performance can be achieved. However, choosing α′=αopt is best suited to larger values of τ, because for τ=lmax the TAMSA and AMSA are the same, and there can be a loss in waterfall performance for smaller values of τ.


In the error floor, instead of running time-consuming code simulations, a different method can be applied to problematic objects to find the parameter sets (α′, τ) that lead to the best error floor performance. From the contour plots in FIG. 3B of the FER performance of any code for which the (5,3) absorbing set is dominant in the error floor, it can be seen that certain parameter sets (α′, τ) lead to significantly lower FER values than others. These parameter sets can then be used to guide the selection of the parameters that yield the best error floor performance of any code for which the (5,3) absorbing set is dominant. According to FIG. 3B, the best error floor for a code containing the (5,3) absorbing set is achieved by choosing a parameter set in the vicinity of (α′=0.65, τ=1). If the goal is to achieve waterfall performance as good as the AMSA with α=αopt and to achieve a better error floor than the MSA or AMSA, a good starting point is to set α′=αopt and then choose the value of τ that leads to the best error floor estimate associated with the dominant problematic object. If there is more than one value of τ that satisfies this condition, the largest should be chosen, because larger values of τ make the TAMSA perform closer to the AMSA optimized for waterfall performance. In FIG. 3B, αopt=0.8, and therefore choosing the parameter set (α′=0.8, τ=1.5) should provide good performance in both the waterfall and the error floor. The simulation results in FIG. 4 of the (8000,4000) LDPC-BC using this parameter set illustrate the advantage of following this approach.
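The selection rule described in this paragraph can be expressed as a few lines of Python. This is a sketch: `fer_estimates`, a table of error-floor FER estimates indexed by parameter set (such as the data plotted in FIG. 3B), is a hypothetical input, and the function name is chosen here for illustration.

```python
def select_parameters(fer_estimates, alpha_opt):
    # Fix alpha' = alpha_opt (the quantized-DE optimum for the AMSA), then
    # pick the largest tau attaining the best error-floor estimate, since
    # larger tau keeps the TAMSA closer to the waterfall-optimized AMSA.
    rows = {tau: fer for (a, tau), fer in fer_estimates.items()
            if a == alpha_opt}
    best = min(rows.values())
    return alpha_opt, max(tau for tau, fer in rows.items() if fer == best)
```

With estimates resembling FIG. 3B, fixing α′=0.8 and taking the largest τ among the best estimates would yield a parameter set such as (0.8, 1.5).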


In FIGS. 2, 4, and 5, it can be seen that the OMSA slightly outperforms the AMSA at high SNRs. This follows from the fact that, for the chosen values of lmax, α, and β, the LLR magnitudes for the OMSA grow to larger values than for the AMSA (quantized value of lmax−β=2.10 vs. quantized value of α×lmax=1.8). Adopting the terminology that check node LLRs larger than τ are reliable while those below τ are unreliable, the reliable check node LLRs of the OMSA (with magnitudes up to 2.10) are more likely to "correct" additional errors inside a problematic object G(A) than those of the AMSA (with magnitudes only up to 1.8). However, in FIG. 4, it can be seen that the TAMSA has better error floor performance than the TOMSA. While, for both the TAMSA and the TOMSA, the check node LLRs that satisfy Equation 7 or 8 (i.e., the reliable LLRs) can grow to lmax, the check node LLRs that do not satisfy Equation 7 or 8 (i.e., the unreliable LLRs) are limited to values smaller than τ (a quantized value of α′×τ=1.2 for the TAMSA vs. a quantized value of τ−β′=1.85 for the TOMSA). Consequently, for the parameter sets chosen for the examples contained herein, the TAMSA makes the unreliable check node LLRs smaller than the TOMSA does, which helps the TAMSA "correct" more errors by slowing down the check node LLR convergence inside a problematic object.


As previously discussed, the AMSA and/or OMSA can be viewed as a particular case of the TAMSA and/or TOMSA and, as such, the performance of the TAMSA and/or TOMSA with optimal parameter selection is at least as good as that of the AMSA and/or OMSA. Moreover, significant performance improvements can be seen for a variety of code structures and lengths.


Application of the TAMSA to Spatially Coupled LDPC Codes. Spatially Coupled LDPC Codes (“SC-LDPCC”) are known to combine the best features of both regular and irregular LDPC-BCs, i.e., they achieve excellent performance both in the waterfall and the error floor regions of the BER (FER) curve. The TAMSA is preferably used to decode SC-LDPCCs to further verify the effectiveness of embodiments of the present invention and to illustrate the benefit of combining the advantages of spatial coupling and the TAMSA.


SC-LDPCC Parity-Check Matrix. Given an underlying LDPC-BC with a μ×ν parity-check matrix HBC and rate RBC=1−μ/ν, a terminated SC-LDPCC with parity-check matrix HSCL and syndrome former memory m can be formed by partitioning HBC into m+1 component matrices Hi, i=0, 1, . . . , m, each of size μ×ν, such that HBC=H0+H1+ . . . +Hm, and arranging them as

HSCL =
    ⎡ H0              ⎤
    ⎢ H1   H0         ⎥
    ⎢  ⋮   H1   ⋱     ⎥
    ⎢ Hm    ⋮   ⋱  H0 ⎥
    ⎢      Hm   ⋱  H1 ⎥
    ⎢           ⋱   ⋮ ⎥
    ⎣              Hm ⎦ of size μ(L+m)×νL,    (Equation 9)

where the coupling length L>m+1 denotes the number of block columns in HSCL and the rate of the terminated SC-LDPCC represented by HSCL is given by

RSCL = (νL−μ(L+m))/(νL) = 1−(μ/ν)(1+m/L),

such that

lim L→∞ RSCL = 1−μ/ν = RBC.
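The arrangement of the component matrices in Equation 9 can be written down directly. The following is an illustrative sketch in Python (the function name and the NumPy representation are assumptions made here): each block column t of HSCL carries the stack H0, . . . , Hm beginning at block row t.

```python
import numpy as np

def couple(components, L):
    # Build HSCL of Equation 9 from component matrices H0..Hm, each of
    # size mu x nu, for coupling length L. Block column t receives
    # H0..Hm stacked vertically starting at block row t.
    m = len(components) - 1
    mu, nu = components[0].shape
    H_scl = np.zeros(((L + m) * mu, L * nu), dtype=int)
    for t in range(L):
        for i, Hi in enumerate(components):
            H_scl[(t + i) * mu:(t + i + 1) * mu,
                  t * nu:(t + 1) * nu] = Hi
    return H_scl
```

The result has μ(L+m) rows and νL columns, so its rate is 1−(μ/ν)(1+m/L), matching the expression above.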

Sliding Window Decoding of SC-LDPCCs. A sliding window (SW) decoder can be used to address the large latency and complexity requirements of decoding SC-LDPCCs with a standard flooding schedule decoder. FIG. 8 illustrates an SW decoder with window size W=6 (blocks) operating on the parity-check matrix HSCL of an SC-LDPCC with m=2 and L=10. All the variable nodes and check nodes included in the window (the cross-hatched area in the W=6 area) are updated using an MP algorithm that has access to previously decoded symbols (the cross-hatched area in the m=2 area). The goal is to decode the variable nodes in the first block of the window, called the target symbols. The MP algorithm updates the nodes in the window until a maximum number of iterations Imax, which can be predetermined, is reached, after which the first block of target symbols is decoded. Then the window slides one block (ν code symbols) to the right and one block down (μ parity-check symbols) to decode the second block, and the process continues until the last block of target symbols is decoded.
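One plausible indexing of the successive window positions is sketched below. The exact sets of nodes a window covers depend on the implementation, so the ranges here are illustrative assumptions rather than a transcription of FIG. 8: for each target block p, the window spans up to W block columns of variable nodes and the corresponding block rows of check nodes, clipped at the edges of HSCL.

```python
def window_positions(L, m, W):
    # For each target block p, yield the block-column (variable node)
    # and block-row (check node) indices covered by the window; the
    # window slides one block right and one block down per position,
    # shrinking at the end of the coupled chain.
    for p in range(L):
        yield (list(range(p, min(p + W, L))),       # variable-node blocks
               list(range(p, min(p + W, L + m))))   # check-node blocks
```

For m=2 and L=10 as in FIG. 8, this produces L=10 window positions; the final positions cover fewer variable-node blocks but still reach the m extra block rows of the termination.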


Cut-and-Paste Construction of SC-LDPCC. For the case m=1, the cut-and-paste method of constructing an SC-LDPCC uses a cutting vector w=[w0, w1, . . . , wμ′−1] of non-decreasing positive integers (0<w0≤w1≤ . . . ≤wμ′−1<ν′) to form two component matrices H0 and H1 from a μ×ν LDPC-BC parity-check matrix HBC. The cutting vector partitions HBC, viewed as a μ′×ν′ array of γ×γ blocks (so that μ×ν=μ′γ×ν′γ), into two parts, one below and one above the cut, represented by H0′ and H1′, respectively. FIG. 9 illustrates an example in which a matrix HBC of size 4000×8000, with μ′=4, ν′=8, and γ=1000, is partitioned into H0′ and H1′ by the cutting vector w=[1, 3, 5, 7]. H0 and H1 in Equation 9 are then obtained by taking HBC and setting the H1′ part to all zeros and by taking HBC and setting the H0′ part to all zeros, respectively, so that H0+H1=HBC. The resulting code rate is given by








RSCL = 1−((L+1)μ)/(Lν) = 1−(μ/ν)(1+1/L),

where the underlying LDPC-BC has rate RBC=1−μ/ν.





For quasi-cyclic LDPC-BCs, such as array codes and Tanner codes, the parameter γ is set equal to the size of the circulant permutation matrices in order to maintain the code structure.
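The cut-and-paste split can be sketched as follows. This is an illustrative sketch in Python; in particular, the orientation convention (which side of the staircase cut is taken as H0′) is an assumption made here, and only the H0+H1=HBC property is taken from the text.

```python
import numpy as np

def cut_and_paste(H_bc, w, gamma):
    # Split HBC, viewed as a mu' x nu' array of gamma x gamma blocks,
    # along the staircase cut given by w: in block row i, block columns
    # j < w[i] are assigned to one side of the cut (H0) and the rest to
    # the other (H1), so that H0 + H1 = HBC. The orientation convention
    # is assumed for illustration.
    H0, H1 = np.zeros_like(H_bc), np.zeros_like(H_bc)
    for i in range(H_bc.shape[0] // gamma):
        rows = slice(i * gamma, (i + 1) * gamma)
        cut = w[i] * gamma                 # cut position in block row i
        H0[rows, :cut] = H_bc[rows, :cut]
        H1[rows, cut:] = H_bc[rows, cut:]
    return H0, H1
```

For the FIG. 9 example, `cut_and_paste(H_bc, [1, 3, 5, 7], 1000)` would split the 4000×8000 matrix along the staircase defined by the cutting vector; because w is non-decreasing, the two parts form complementary staircase regions.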


Simulation Results. FIGS. 10 and 11 present simulation results for the SC-LDPCC versions of the (8000,4000) LDPC-BC and the quasi-cyclic (155,64) Tanner code, decoded with the TAMSA and an SW decoder with W=6, where 50 iterations were performed at each window position. The SC-LDPCCs both have coupling length L=50 and syndrome former memory m=1. For the (8000,4000) code, γ=1000 and the cutting vector w=[1, 3, 5, 7] (as illustrated in FIG. 9) is chosen, and for the (155,64) Tanner code, γ=31 and w=[2, 3, 5] is chosen, where the size of the circulant permutation matrices is 31.



FIG. 10 illustrates the BER performance of the (8000,4000) LDPC-BC and its spatially coupled version decoded with an SW decoder with W=6 for the MSA, the AMSA with α=0.8, and the TAMSA with parameter set (α′=0.8, τ=1.5). We see that, for the chosen parameters, the TAMSA again has nearly two orders of magnitude better error floor performance than the MSA and the AMSA, for both the LDPC-BC and the SC-LDPCC, and it maintains the same waterfall performance. In addition, the spatial coupling yields a waterfall gain of about 0.5 dB for all the decoding algorithms compared to the underlying LDPC-BC. Moreover, the dominant problematic object for both the LDPC-BC and the SC-LDPCC decoded with the algorithms and parameters in FIG. 10 is the (5,3) absorbing set of FIG. 1. The multiplicity of this object is N=14 and {circumflex over (N)}=4.92 for the LDPC-BC and SC-LDPCC, respectively, where {circumflex over (N)} is the average multiplicity per block of size ν=8000. Therefore, spatial coupling reduces the number of dominant problematic objects by 64%. This explains the almost one order of magnitude gain in the error floor obtained by spatial coupling compared to the underlying LDPC-BC.



FIG. 11 illustrates the BER performance of the quasi-cyclic (155,64) Tanner code and its spatially coupled version decoded with an SW decoder with W=6 for the MSA, the AMSA with α=0.8, and the TAMSA with parameter set (α′=0.8, τ=1.5). Again, the TAMSA outperforms the AMSA and the MSA at high SNRs by about one order of magnitude in the error floor, for both the LDPC-BC and the SC-LDPCC. In addition, an approximately 2 dB gain for the SC-LDPCC is provided as compared to its underlying LDPC-BC in the waterfall. The dominant problematic object for the (155,64) Tanner LDPC-BC decoded with the algorithms and parameters in FIG. 11 is an (8,2) absorbing set. The multiplicity of this object is about N=465, but in this case {circumflex over (N)}=0 for the SC-LDPCC (i.e., this object is completely removed by spatial coupling). As a result, there is almost five orders of magnitude gain at Eb/N0>3 dB for the SC-LDPCC as compared to the underlying LDPC-BC.


The preceding examples can be repeated with similar success by substituting the generically or specifically described components and/or operating conditions of embodiments of the present invention for those used in the preceding examples.


Optionally, embodiments of the present invention can include a general or specific purpose computer or distributed system programmed with computer software implementing the steps described above, which computer software may be in any appropriate computer language, including but not limited to C++, FORTRAN, BASIC, Java, Python, Linux, assembly language, microcode, distributed programming languages, etc. The apparatus may also include a plurality of such computers/distributed systems (e.g., connected over the Internet and/or one or more intranets) in a variety of hardware implementations. For example, data processing can be performed by an appropriately programmed microprocessor, computing cloud, Application Specific Integrated Circuit (ASIC), Field Programmable Gate Array (FPGA), or the like, in conjunction with appropriate memory, network, and bus elements. One or more processors and/or microcontrollers can operate via the instructions of the computer code and the software is preferably stored on one or more tangible non-transitory memory-storage devices.


Note that in the specification and claims, “about” or “approximately” means within twenty percent (20%) of the numerical amount cited. All computer software disclosed herein can be embodied on any non-transitory computer-readable medium (including combinations of mediums), including without limitation CD-ROMs, DVD-ROMs, hard drives (local or network storage devices), USB keys, other removable drives, ROMs, and firmware.


Embodiments of the present invention can include every combination of features that are disclosed herein independently from each other. Although the invention has been described in detail with particular reference to the disclosed embodiments, other embodiments can achieve the same results. Variations and modifications of the present invention will be obvious to those skilled in the art and it is intended to cover in the appended claims all such modifications and equivalents. The entire disclosures of all references, applications, patents, and publications cited above are hereby incorporated by reference. Unless specifically stated as being “essential” above, none of the various components or the interrelationship thereof are essential to the operation of the invention. Rather, desirable results can be achieved by substituting various components and/or reconfiguring their relationships with one another.

Claims
  • 1. A method for lowering an error floor of a low-density parity-check ("LDPC") decoder chip, thereby improving bit-error-rate and/or frame-error-rate performance of the LDPC decoder chip, the method comprising: for each message passed from a check node to a variable node, computing a check node log-likelihood ratio within a check node processing unit of the LDPC decoder chip by iteratively passing quantized messages between processing units on a decoder chip, wherein a plurality of check nodes are connected to a respective variable node and wherein a plurality of variable nodes are connected to a respective check node, and wherein the connections are specified by a parity-check matrix of LDPC code, wherein computing further comprises: comparing a minimum value of a set of variable node log-likelihood ratio magnitudes that are input into the check node processing unit with a threshold, wherein each of the input variable node log-likelihood ratio magnitudes are one of a plurality connected to a respective check node; based on the results of the comparison, determining at the check node processing unit whether to apply a reduction to a check node log-likelihood ratio magnitude or not to apply a reduction to the check node log-likelihood ratio magnitude; applying a reduction to the check node log-likelihood ratio magnitude in instances when the determining step determines that a reduction should be applied; and not applying a reduction to the check node log-likelihood ratio magnitude in instances when the determining step determines that a reduction should not be applied.
  • 2. The method of claim 1 wherein applying a reduction to the check node log-likelihood ratio magnitude at the check node processing unit comprises applying attenuation.
  • 3. The method of claim 2 wherein applying attenuation comprises multiplying the check node log-likelihood ratio magnitude by a value that is greater than 0 and less than 1.
  • 4. The method of claim 3 wherein applying attenuation comprises multiplying the check node log-likelihood ratio magnitude by a value that is less than one and greater than or equal to 0.5.
  • 5. The method of claim 2 wherein applying attenuation comprises applying attenuation to the check node log-likelihood ratio magnitude before the check node log-likelihood ratio magnitude is passed from a check node to a variable node.
  • 6. The method of claim 1 wherein applying a reduction to the check node log-likelihood ratio magnitude at the check node processing unit comprises applying an offset.
  • 7. The method of claim 6 wherein applying an offset comprises applying a subtraction function.
  • 8. The method of claim 7 wherein applying a subtraction function comprises subtraction of a predetermined number that is greater than zero.
  • 9. The method of claim 7 wherein applying a subtraction function comprises subtracting a value greater than zero before passing the check node log-likelihood ratio to a connected variable node.
  • 10. The method of claim 1 wherein comparing a minimum value of a set of connected variable node log-likelihood ratio magnitudes that are input into the check node processing unit with a threshold comprises determining whether the minimum value is less than the threshold.
  • 11. The method of claim 1 wherein comparing a minimum value of a set of connected variable node log-likelihood ratio magnitudes that are input into the check node processing unit with a threshold comprises determining whether the minimum value is less than or equal to the threshold.
  • 12. The method of claim 1 wherein comparing a minimum value of a set of connected variable node log-likelihood ratio magnitudes that are input into the check node processing unit with a threshold comprises comparing the minimum value with a predetermined value.
  • 13. The method of claim 12 wherein the predetermined value comprises a value having a magnitude greater than 0.
  • 14. The method of claim 1 wherein comparing a minimum value of a set of connected variable node log-likelihood ratio magnitudes that are input into the check node processing unit with a threshold further comprises comparing a second minimum value of the set of connected variable node log-likelihood ratio magnitudes that are input into the check node processing unit with a threshold.
  • 15. The method of claim 1 wherein applying a reduction to the check node log-likelihood ratio magnitude comprises applying a multiplication function of greater than zero and less than one and applying a subtraction function to the check node log-likelihood ratio magnitude.
  • 16. The method of claim 1 wherein applying a reduction to the check node log-likelihood ratio magnitude further comprises varying an amount of the reduction for each iteration.
  • 17. The method of claim 1 wherein applying a reduction to the check node log-likelihood ratio magnitude further comprises varying an amount of the reduction based on a location of the check node log-likelihood ratio in a graph.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 16/871,917, filed on May 11, 2020, entitled “Threshold-Based Min-Sum Algorithm to Lower the Error Floors of Quantized Low-Density Parity-Check Decoders”, which itself claims priority to and the benefit of the filing of U.S. Provisional Patent Application No. 62/873,061, entitled “Threshold-Based Min-Sum Algorithm to Lower the Error Floors of Quantized Low-Density Parity-Check Decoders”, filed on Jul. 11, 2019, and the specification thereof is incorporated herein by reference.

STATEMENT REGARDING FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT

This invention was made with government support from the National Science Foundation under grant numbers ECCS-1710920 and OIA-1757207. The government has certain rights in the invention.

US Referenced Citations (42)
Number Name Date Kind
7477694 Sanderford et al. Jan 2009 B2
8266493 Abbaszadeh et al. Sep 2012 B1
8359522 Gunnam et al. Jan 2013 B2
8549375 Ueng et al. Oct 2013 B2
8621318 Micheloni et al. Dec 2013 B1
8689074 Tai Apr 2014 B1
8689084 Tai Apr 2014 B1
8898537 Gross et al. Nov 2014 B2
8935598 Norrie Jan 2015 B1
8984376 Norrie Mar 2015 B1
8990661 Micheloni Mar 2015 B1
9100153 Gross et al. Aug 2015 B2
9450610 Micheloni et al. Sep 2016 B1
9590656 Micheloni et al. Mar 2017 B2
9608666 Morero et al. Mar 2017 B1
9813080 Micheloni et al. Nov 2017 B1
10103751 Morero et al. Oct 2018 B2
10230396 Micheloni et al. Mar 2019 B1
10284293 Bitra et al. May 2019 B2
10305513 Lee May 2019 B2
10374632 Zhang et al. Aug 2019 B2
10778248 Wu Sep 2020 B1
11309915 Mitchell et al. Apr 2022 B1
20050204271 Sharon Sep 2005 A1
20050204272 Yamagishi Sep 2005 A1
20070089019 Tang et al. Apr 2007 A1
20090164540 Oh Jun 2009 A1
20100131819 Graef May 2010 A1
20100162075 Brannstrom et al. Jun 2010 A1
20100306617 Kondo Dec 2010 A1
20110231731 Gross et al. Sep 2011 A1
20120221914 Morero et al. Aug 2012 A1
20130019141 Wang Jan 2013 A1
20130086445 Yedidia et al. Apr 2013 A1
20140068394 Zhang et al. Mar 2014 A1
20140108883 Tehrani Apr 2014 A1
20140201594 Zhu Jul 2014 A1
20160134305 Morero et al. May 2016 A1
20170085276 Prabhakar et al. Mar 2017 A1
20170264316 Lee Sep 2017 A1
20180041227 Lee Feb 2018 A1
20200136653 Kim Apr 2020 A1
Foreign Referenced Citations (7)
Number Date Country
104205647 Dec 2014 CN
109936379 Jun 2019 CN
2245772 Apr 2019 EP
6396977 Sep 2018 JP
201119247 Jun 2011 TW
2019013662 Jan 2019 WO
2019205313 Oct 2019 WO
Non-Patent Literature Citations (24)
Entry
N. E. Maammar, S. Bri and J. Foshi, “Layered Offset Min-Sum Decoding for Low Density Parity Check Codes,” 2018 International Symposium on Advanced Electrical and Communication Technologies (ISAECT), Rabat, Morocco, 2018, pp. 1-5.
Eng Xu, Jianhui Wu and Meng Zhang, “A modified Offset Min-Sum decoding algorithm for LDPC codes,” 2010 3rd International Conference on Computer Science and Information Technology, Chengdu, China, 2010, pp. 19-22.
“Quasi-cyclic Low Density Parity-check code (QC-LDPC)”, https://arxiv.org/ftp/arxiv/papers/1511/1511.00133.pdf, Downloaded Nov. 27, 2019.
Abdu-Aguye, Umar-Faruk , “On Lowering the Error-Floor of Short-to-Medium Block Length Irregular Low Density Parity-Check Codes”, A thesis submitted to Plymouth University in partial fulfillment for the degree of Doctor of Philosophy, Oct. 2017.
All Answers Limited , “Adaptive FPGA-based LDPC-coded Manipulation”, https://ukdiss.com/examples/adaptive-fpga-based-ldpc-coded-modulation.php, Nov. 2018.
Angarita, Fabian , et al., “Reduced-Complexity Min-Sum Algorithm for Decoding LDPC Codes with Low Error-Floor”, IEEE Transactions on Circuits and Systems, vol. 61, No. 7, Jul. 2014, 2150-2158.
Chen, Jinghu , et al., “Reduced-Complexity Decoding of LDPC Codes”, IEEE Transactions on Communications, vol. 53, No. 8, Aug. 2005, 1288-1299.
Darabiha, Ahmad , et al., “A Bit-Serial Approximate Min-Sum LDPC Decoder and FPGA Implementation”, ISCAS 2006, IEEE, 2006, 149-152.
Fossorier, Marc P.C., et al., “Reduced Complexity Iterative Decoding of Low-Density Parity Check Codes Based on Belief Propagation”, IEEE Transactions on Communications, vol. 47, No. 5, May 1999, 673-680.
Hailes, Peter , et al., “Hardware-Efficient Node Processing Unit Architectures for Flexible LDPC Decoder Implementations”, IEEE Transactions on Circuits and Systems-II: Express Briefs, vol. 65, No. 12, Dec. 2018, 1919-1923.
Han, Yang , et al., “LDPC Decoder Strategies for Achieving Low Error Floors”, Conference Paper, 2008 Information Theory and Applications Workshop, downloaded from IEEE Xplore, 2008.
He, Huanyu , et al., “A New Low-Resolution Min-Sum Decoder Based on Dynamic Clipping for LDPC Codes”, Conference Paper, IEEE/CIC International Conference on Communications in China (ICCC), downloaded on Feb. 24, 2021 from IEEE Xplore, 2019, 636-640.
Howard, Sheryl , et al., “Soft-bit decoding of regular low-density parity-check codes”, IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 52, No. 10, Oct. 2005, 646-650.
Kudekar, Shrinivas , et al., “The effect of saturation on belief propagation decoding of LDPC codes”, 2014 IEEE International Symposium on Information Theory, 2014, 2604-2608.
Lechner, Gottfried , “Efficient Decoding Techniques for LDPC Codes”, https://publik.tuwien.ac.at/files/pub-et_12989.pdf, Jul. 2007.
Liu, Xingcheng , et al., “Variable-Node-Based Belief-Propagation Decoding With Message Pre-Processing for NANO Flash Memory”, IEEE Access, vol. 7, 2019, 58638-58653.
Siegel, Paul H., “An Introduction to Low-Density Parity-Check Codes”, http://cmrr-star.ucsd.edu/static/presentations/ldpc_tutorial.pdf, May 31, 2007.
Song, Suwen , et al., “A Reduced Complexity Decoding Algorithm for NB-LDPC Codes”, Conference Paper, 17th IEEE International Conference on Communication Technology, downloaded Feb. 24, 2021 from IEEE Xplore, 2017, 127-131.
Tanner, R. Michael, et al., "LDPC Block and Convolutional Codes Based on Circulant Matrices", IEEE Transactions on Information Theory, vol. 50, No. 12, Dec. 2004, 2966-2984.
Tehrani, Saeed Sharifi, "Stochastic Decoding of Low-Density Parity-Check Codes", Thesis submitted to McGill University in partial fulfillment of the requirements of the degree of Doctor of Philosophy, 2011.
Tehrani, S. , et al., “Stochastic decoding of low-density parity-check codes”, Computer Science (abstract only), 2011.
Vasic, Bane, et al., "Failures and Error-Floors of Iterative Decoders", In Preparation for Channel Coding Section of the Elsevier Science E-Book Series, Dec. 2012.
Yu, Hui , et al., “Systematic construction, verification and implementation methodology for LDPC codes”, EURASIP Journal on Wireless Communications and Networking, http://jwcn.eurasipjournals.com/content/2012/1/84, 2012.
Zhao, Jianguang , et al., “On Implementation of Min-Sum Algorithm and Its Modifications for Decoding Low-Density Parity-Check (LDPC) Codes”, IEEE Transactions on Communications, vol. 53, No. 4, Apr. 2005, 549-554.
Provisional Applications (1)
Number Date Country
62873061 Jul 2019 US
Continuations (1)
Number Date Country
Parent 16871917 May 2020 US
Child 17733924 US