Embodiments of the present invention relate to a method and apparatus that selectively attenuate or offset the messages in a low-density parity-check (LDPC) decoder based on a simple threshold comparison.
LDPC codes are error-correcting codes that have been widely adopted in practice for reliable communication and storage of information, e.g., cellular data (5G), Wi-Fi, optical communication, space and satellite communication, magnetic recording, flash memories, and so on. Implementation of LDPC decoders involves iteratively passing quantized messages between processing units on the chip. To reduce implementation and usage costs (e.g., power, area, speed), an approximation to the usual algorithm, called the min-sum algorithm (“MSA”), is employed. Variants of min-sum are used in practice to adjust for the error in approximation and to improve error correction performance. These variants use an attenuation or offset (reduction) in the message values that are passed. Known implementations use a uniform attenuation or offset, meaning that the messages passed are all reduced in value in the same way—such known systems thus lack the ability to attenuate individual messages.
Currently, known systems and/or algorithms that can lower the error floor performance of a conventional state-of-the-art attenuated MSA (“AMSA”) algorithm include quantizer design, modifications to iterative decoding, and post-processing, each of which requires additional hardware and/or complexity.
More specifically, LDPC codes are a class of linear block codes for which the performance of iterative message passing (“MP”) decoding can approach that of much more complex maximum likelihood (“ML”) decoding. The MSA is a simplified version of the sum-product algorithm (“SPA”) that is commonly used for iterative MP decoding of LDPC codes, where the check node computation is approximated and hence is significantly easier to perform. This simplification is particularly desirable for hardware decoder implementations. Moreover, unlike the SPA, no estimation of the channel signal-to-noise ratio (“SNR”) is needed at the receiver for an additive white Gaussian noise (“AWGN”) channel.
Practical implementations of LDPC decoders require a finite precision (quantized) representation of the log-likelihood ratios. Quantized density evolution (“DE”) has been used to find the optimum attenuation and offset parameters for the attenuated min-sum algorithm (“AMSA”) and the offset min-sum algorithm (“OMSA”), in the sense that DE calculates the iterative decoding threshold, which characterizes the waterfall performance. Further improvements to the waterfall performance of the MSA for quantized and unquantized decoders have been made. At high SNRs, quantization typically causes the early onset of an error floor. It has been shown that certain objects, called trapping sets, elementary trapping sets, leafless elementary trapping sets, or absorbing sets in the Tanner graph, cause the iterative decoding process to get stuck, resulting in decoding errors at high SNRs. Hereafter, the sub-graphs induced by these sets, as well as similar sets, are referred to as problematic graphical objects. Several methods based on problematic objects have been proposed to estimate the performance of LDPC codes and a number of strategies have been proposed to lower the error floor of quantized LDPC decoders, including quantizer design, modifications to iterative decoding, and post-processing. Since each of these methods requires additional hardware and/or complexity, there is a present need for a system that can provide a simple solution, such as selective attenuation of message magnitudes, to avoid decoding failure.
Embodiments of the present invention relate to a method for lowering the error floor of a quantized low-density parity-check decoder including comparing a variable node log-likelihood ratio magnitude with a threshold; based on the results of the comparison, determining whether or not to apply a reduction to the variable node log-likelihood ratio magnitude; applying a reduction to the variable node log-likelihood ratio magnitude in instances when the determining step determines that a reduction should be applied; and not applying a reduction to the variable node log-likelihood ratio magnitude in instances when the determining step determines that a reduction should not be applied. Applying a reduction can include applying attenuation and/or offset. Applying attenuation can include multiplying the variable node log-likelihood ratio magnitude by a value that is greater than 0 and less than 1, and/or multiplying the variable node log-likelihood ratio magnitude by a value that is less than one and greater than or equal to 0.5. Applying attenuation can also include applying attenuation to the variable node log-likelihood ratio magnitude before the variable node log-likelihood ratio magnitude is passed from a check node to a variable node. Applying an offset can include applying a subtraction function, which itself can include subtraction of a predetermined number that is greater than zero and/or subtracting a value greater than zero before passing the variable node log-likelihood ratio to a variable node.
In one embodiment, comparing a magnitude of a variable node log-likelihood ratio with a threshold can include determining whether the variable node log-likelihood ratio is less than the threshold. Comparing a magnitude of a variable node log-likelihood ratio with a threshold can include determining whether the variable node log-likelihood ratio is less than or equal to the threshold, and/or comparing a magnitude of a variable node log-likelihood ratio with a predetermined value, which predetermined value can optionally have a magnitude in the range of greater than 0 to 2.5. Comparing a variable node log-likelihood ratio magnitude with a threshold can include comparing a variable node log-likelihood ratio magnitude which comprises a minimum of a set of variable node log-likelihood ratio magnitudes with a threshold. The method can optionally include comparing a second minimum of a set of variable node log-likelihood ratio magnitudes with a threshold. Applying a reduction to the variable node log-likelihood ratio magnitude can include applying a multiplication function with a factor greater than zero and less than one and/or applying a subtraction function to the variable node log-likelihood ratio. Applying a reduction to the variable node log-likelihood ratio magnitude can also include varying a magnitude of the reduction by iteration and/or varying a magnitude of the reduction based on graph location τ(j) of check node index j.
In this application, we first show that the AMSA and OMSA exhibit worse (higher) error floors than the MSA with parameters that are optimized for waterfall performance. A modification is then introduced to the MSA that applies the strategies from the AMSA and the OMSA selectively, i.e., it applies attenuation/offset when it would be helpful and does not apply it otherwise. Assuming that there exist problematic graphical objects that cause most of the decoding failures in the high SNR regime, it will also be shown that the new MSA modification causes these objects to become less prone to decoding failures. As a result, embodiments of the present invention match the waterfall performance of the AMSA and OMSA, while improving their error floor performance. No information about the location or structure of the problematic objects is required to utilize this approach. However, knowledge of the problematic object facilitates determination of the optimal algorithm parameters. Moreover, AMSA (respectively OMSA) can be viewed as a particular case of the new threshold attenuated min-sum algorithm (“TAMSA”) and respectively the new threshold offset min-sum algorithm (“TOMSA”) and, as such, the performance of TAMSA (respectively TOMSA) is at least as good as AMSA (respectively OMSA) with optimal parameter selection. The complexity of embodiments of the present invention can be quantified to show that, because it uses the information that is already generated inside the check node processing unit of the AMSA or OMSA, the new algorithm is only slightly more complex to implement than the known systems.
Due to the time-consuming process of simulating the high SNR performance of LDPC codes, a code-independent and problematic object-specific method is preferably used to guide/optimize parameter selection and to evaluate the impact of embodiments of the present invention on the performance of LDPC codes containing problematic objects at high SNRs. The results show that embodiments of the present invention improve (i.e. reduce) the error floor caused by specific problematic objects compared to the MSA, AMSA, or OMSA.
Objects, advantages and novel features, and further scope of applicability of the present invention will be set forth in part in the detailed description to follow, taken in conjunction with the accompanying drawings, and in part will become apparent to those skilled in the art upon examination of the following, or may be learned by practice of the invention. The objects and advantages of the invention may be realized and attained by means of the instrumentalities and combinations particularly pointed out in the appended claims.
The accompanying drawings, which are incorporated into and form a part of the specification, illustrate one or more embodiments of the present invention and, together with the description, serve to explain the principles of the invention. The drawings are only for the purpose of illustrating one or more embodiments of the invention and are not to be construed as limiting the invention. In the drawings:
Embodiments of the present invention relate to decoding low-density parity-check (“LDPC”) codes, wherein the attenuated min-sum algorithm (“AMSA”) and the offset min-sum algorithm (“OMSA”) can outperform the conventional min-sum algorithm (“MSA”) at low signal-to-noise-ratios (“SNRs”), i.e., in the “waterfall region” of the bit error rate curve. For quantized decoders, MSA actually outperforms AMSA and OMSA in the “error floor” (high SNR) region, and all three algorithms suffer from a relatively high error floor. Embodiments of the present invention can include a modified MSA that can outperform MSA, AMSA, and OMSA across all SNRs. The implementation complexity of embodiments of the present invention is only slightly higher than that of the AMSA or OMSA. The simulated performance of embodiments of the present invention, using several classes of LDPC codes (including spatially coupled LDPC codes), is shown to outperform the MSA, AMSA, and OMSA across all SNRs.
Embodiments of the present invention relate to a novel modification to the check node update of quantized MSA that is straightforward to implement and reduces the error floor when compared to other known methods.
As background, let V={v1, v2, . . . , vn} and C={c1, c2, . . . , cm} represent the sets of variable nodes and check nodes, respectively, of a bipartite Tanner graph representation of an LDPC code with parity-check matrix H. Assume that a binary codeword u=(u1, u2, . . . , un) is binary phase shift keyed (“BPSK”) modulated such that each zero is mapped to +1 and each one is mapped to −1. The modulated signal is transmitted over an AWGN channel whose noise n has mean 0 and standard deviation σ. The received signal is r̃=1−2u+n. The quantized version of r̃ is denoted as r=(r1, r2, . . . , rn).
The Min-Sum Algorithm and its Modifications. The MSA is an iterative MP algorithm that is simpler to implement than the SPA. Unlike the SPA, the MSA does not require channel noise information to calculate the channel log-likelihood ratios (“LLRs”). The SPA is optimum for codes without cycles, but for finite length codes and finite precision LLRs, the SPA is not necessarily optimum, particularly with respect to error floor performance. Let ℓij represent the LLR passed from variable node vi to check node cj in a given iteration and let ℓji represent the LLR passed from cj to vi. The check nodes that are neighbors to vi are denoted N(vi), and the variable nodes that are neighbors to cj are denoted N(cj). To initialize decoding, each variable node vi passes ri to the check nodes in N(vi), i.e.,
ℓij=ri, (Equation 1)
where the ℓij's computed throughout the decoding process are referred to as the variable node LLRs. The check node operation to calculate the LLRs sent from check node cj to variable node vi is given by

ℓji=Πi′∈N(cj)\{vi} sign(ℓi′j)·mini′∈N(cj)\{vi}|ℓi′j|, (Equation 2)
where the ℓji's computed throughout the decoding process are referred to as the check node LLRs. After each iteration, the hard decision estimate û is checked to see if it is a valid codeword, where ûi=0 if and only if

ri+Σj∈N(vi) ℓji≥0. (Equation 3)
If û is a valid codeword, or if the iteration number has reached Imax, decoding stops. Otherwise, the variable node LLRs are calculated as
ℓij=ri+Σj′∈N(vi)\{cj} ℓj′i, (Equation 4)
and decoding continues using equation 2. Two modified versions of the MSA, called attenuated (or normalized) MSA (“AMSA”) and offset MSA (“OMSA”), were introduced to reduce the waterfall performance loss of the MSA compared to the SPA. The modified check node computations are given by

ℓji=α·Πi′∈N(cj)\{vi} sign(ℓi′j)·mini′∈N(cj)\{vi}|ℓi′j| (Equation 5)

and

ℓji=Πi′∈N(cj)\{vi} sign(ℓi′j)·max(mini′∈N(cj)\{vi}|ℓi′j|−β, 0), (Equation 6)
respectively, where α, β>0 are constants. In both algorithms, the check node LLR magnitudes are modified to be smaller than those of MSA. This reduces the negative effect of overestimating the LLR magnitudes in the MSA, whose larger check node LLR magnitudes compared to the SPA can cause additional errors in decoding at low SNRs.
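The check node computations of equations 2, 5, and 6 can be sketched as follows in a minimal Python example; the function name `check_node_llrs` and the default parameter values are illustrative assumptions, not part of any claimed implementation.

```python
def check_node_llrs(incoming, variant="msa", alpha=0.8, beta=0.3):
    """Outgoing check node LLRs for one check node c_j (equations 2, 5, 6).

    incoming: list of LLRs received from the neighbors N(c_j).
    Returns a list whose k-th entry is the LLR sent back to neighbor k,
    computed from all inputs *except* input k.
    """
    sgn = lambda x: -1.0 if x < 0 else 1.0
    out = []
    for k in range(len(incoming)):
        others = incoming[:k] + incoming[k + 1:]   # N(c_j) \ {v_i}
        sign = 1.0
        for x in others:
            sign *= sgn(x)                         # product of signs
        mag = min(abs(x) for x in others)          # min-sum magnitude
        if variant == "amsa":                      # equation 5: attenuate
            mag = alpha * mag
        elif variant == "omsa":                    # equation 6: offset
            mag = max(mag - beta, 0.0)
        out.append(sign * mag)
    return out
```

Note that, for any inputs, the AMSA and OMSA output magnitudes never exceed the corresponding MSA magnitudes, which is the uniform reduction described above.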
Implementation of the MSA, AMSA, and OMSA. To implement the check node update of equation 2, in the check node processing unit corresponding to cj, the sign and magnitude of ℓji to be sent to each vi are calculated separately as follows. First, for all i′∈N(cj), the signs of ℓi′j are multiplied to form Πi′∈N(cj) sign(ℓi′j); the sign of an individual output ℓji is then recovered by removing the sign of ℓij from this product. Second, the first and second minima of the input magnitudes, ℓ1,j=mini′∈N(cj)|ℓi′j| and ℓ2,j (the smallest magnitude among the remaining inputs), are found, along with the index of the input that produced ℓ1,j; the magnitude of ℓji is then ℓ2,j if vi supplied ℓ1,j and ℓ1,j otherwise.
The process of finding ℓ1,j and ℓ2,j is complex to implement. Therefore, several methods have been suggested to reduce the complexity of the process or to avoid calculating ℓ2,j and instead estimate it based on ℓ1,j. The result is that ℓ1,j plays an important role in the check node processing unit, and embodiments of the present invention can also rely on ℓ1,j, making the extension of the algorithm to techniques designed for complexity reduction possible.
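The two-minimum realization described above can be sketched as follows; the function name and the returned tuple format are illustrative, not taken from the specification.

```python
def check_node_two_min(incoming):
    """One pass over the inputs of a check node processing unit, producing
    the overall sign product, the first minimum l1, the second minimum l2,
    and the index of the input that achieved l1.  Each outgoing magnitude
    is l2 for the edge that supplied l1 and l1 for every other edge, so
    the per-edge exclusion in equation 2 never has to be recomputed.
    """
    sign_prod = 1.0
    l1, l2, idx1 = float("inf"), float("inf"), -1
    for i, x in enumerate(incoming):
        sign_prod *= -1.0 if x < 0 else 1.0
        m = abs(x)
        if m < l1:
            l1, l2, idx1 = m, l1, i      # new first minimum; old l1 becomes l2
        elif m < l2:
            l2 = m                       # new second minimum
    out = []
    for i, x in enumerate(incoming):
        mag = l2 if i == idx1 else l1
        # dividing out the edge's own sign equals multiplying, since signs are +/-1
        edge_sign = sign_prod * (-1.0 if x < 0 else 1.0)
        out.append(edge_sign * mag)
    return out, l1, l2, idx1
```

The outputs match a direct evaluation of equation 2 while using a single pass over the inputs.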
Quantized Decoders. In a uniform quantized decoder, the operations in equations 1-6 have finite precision, i.e., the values are quantized to a set of numbers ranging from −lmax to lmax, with step size Δ, where the resulting quantizer thresholds are set at the midpoints between adjacent quantization levels.
The attenuation and offset parameters α and β in equations 5 and 6 that have the best iterative decoding thresholds can be found by computer simulation or by using a technique called quantized density evolution.
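A uniform symmetric quantizer of this kind can be sketched as follows, using the 5-bit parameters Δ=0.15 and lmax=2.25 appearing later in the specification as defaults; the function name and the midpoint-threshold rounding rule are illustrative assumptions.

```python
def quantize(x, delta=0.15, l_max=2.25):
    """Uniform symmetric quantizer: round x to the nearest multiple of
    delta (thresholds midway between levels) and saturate at +/- l_max."""
    level = round(x / delta)          # index of nearest quantization level
    q = level * delta
    return max(-l_max, min(l_max, q)) # saturate at the extreme levels
```

With these defaults the levels run from −2.25 to 2.25 in steps of 0.15, i.e., 31 levels, which fits a 5-bit representation.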
Trapping Sets and Error Floors. Let A denote a subset of V of cardinality a. Let Aeven and Aodd represent the subsets of check nodes connected to variable nodes in A with even and odd degrees, respectively, where |Aodd|=b. Here, A is referred to as an (a, b) trapping set. A is defined to be an (a, b) absorbing set if each variable node in A is connected to fewer check nodes in Aodd than in Aeven. These sets, along with similar objects such as elementary trapping sets and leafless elementary trapping sets, are known to cause most of the decoding errors at high SNRs in MP decoders.
Threshold Attenuated/Offset MSA.—Motivation and Rationale. Although it is known that applying attenuation or offset when computing the check node LLRs typically improves performance in the low SNR (waterfall) region of the BER curve for quantized decoders, because high SNR performance is tied to problematic graphical objects, the AMSA and OMSA do not necessarily achieve a good error floor. For example, assuming BPSK modulation on the AWGN channel,
At high SNRs, for a received vector r of channel LLRs, decoding is successful with high probability. In the case of unsuccessful decoding, it is known that a small number of problematic objects are likely to be the cause, i.e., objects containing variable nodes with unreliable (small magnitude) LLR values. In this regime, the channel LLRs for the variable nodes outside a problematic object will be, however, mostly reliable and have large magnitudes. In other words, the outside LLRs are typically initially large (with the correct sign) and will continue to grow quickly to even larger values (often lmax). However, even if some and/or all of the incorrect sign LLRs inside a problematic object are initially small, they can also be observed to grow quickly to larger values without correcting the errors in sign. This happens because the problematic object contains at least one short cycle, which prevents correction of the sign errors.
To improve the probability of correcting errors occurring in a problematic object G(A) (the sub-graph induced by a trapping or absorbing set A) at high SNR, we have found that it is helpful if the LLR magnitudes sent from a check node cj∈Aeven to variable nodes vi∈A grow more slowly (i.e., are attenuated) when cj receives at least one unreliable (small magnitude) LLR from a variable node in A. This ensures that any incorrect LLRs received from the channel in A are not reinforced. On the other hand, if a check node cj (inside or outside G(A)) receives all large magnitude LLRs, these can be helpful for decoding and hence should not be attenuated. These two factors form the essence of the new threshold-based modification of AMSA/OMSA, presented below, that can lead to correct decoding of a received vector r that would not otherwise occur.
A Threshold Attenuated/Offset MSA. An embodiment of the present invention preferably makes use of a relationship observed at high SNRs between the variable node LLR magnitudes |ℓij| received by check node cj and the likelihood of the check node cj being inside a problematic object G(A). This relationship allows the problem of locating errors affected by G(A) to be reduced to merely considering the variable node LLR magnitudes |ℓij| received at check node cj, i.e., relying on the |ℓij|'s to tell if cj is likely to be inside G(A) and has the potential to cause decoding failures. At high SNRs, the check node LLRs outside G(A) typically grow faster than the LLRs inside G(A). Therefore, if a check node cj receives at least one small LLR, i.e., if ℓ1,j=mini′∈N(cj)|ℓi′j|<τ for a threshold τ, cj is likely to be inside a problematic object, and attenuation (offset) is applied to its outgoing LLRs. The resulting threshold attenuated check node computation is

ℓji=α′·Πi′∈N(cj)\{vi} sign(ℓi′j)·mini′∈N(cj)\{vi}|ℓi′j| if ℓ1,j<τ, and ℓji=Πi′∈N(cj)\{vi} sign(ℓi′j)·mini′∈N(cj)\{vi}|ℓi′j| otherwise, (Equation 7)

and the corresponding threshold offset computation is

ℓji=Πi′∈N(cj)\{vi} sign(ℓi′j)·max(mini′∈N(cj)\{vi}|ℓi′j|−β′, 0) if ℓ1,j<τ, and the unmodified MSA output otherwise. (Equation 8)
In equation 7, α′ is used to make the check node LLR magnitudes smaller when mini′∈N(cj)|ℓi′j|<τ; when mini′∈N(cj)|ℓi′j|≥τ, no attenuation is applied. Similarly, in equation 8, the offset β′ is subtracted only when mini′∈N(cj)|ℓi′j|<τ.
In one embodiment, the threshold τ can optionally be varied by iteration number—for example, the value of τ used in equations 7 and 8 can be a function τ(I) of the iteration number 0≤I≤Imax. The threshold τ can also optionally be varied by graph location—for example as a function τ(j) of check node index j. Although embodiments of the present invention can provide desirable results without such variations, such variations can provide further performance improvements.
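The threshold rule of equations 7 and 8 can be sketched in Python as follows. Consistent with the single-comparator implementation described below, the sketch compares the overall minimum ℓ1,j against τ once and applies the decision to all outgoing messages; the function name and default parameter values are illustrative assumptions.

```python
def tamsa_check_node(incoming, tau=1.8, alpha=0.8, beta=None):
    """Threshold attenuated/offset min-sum check node update: apply the
    reduction (attenuation alpha, or offset beta if beta is given) only
    when the smallest incoming LLR magnitude falls below the threshold
    tau; otherwise use the plain MSA output of equation 2."""
    # single comparison of l_{1,j} against tau, as in the hardware sketch
    reduce_msgs = min(abs(x) for x in incoming) < tau
    out = []
    for k in range(len(incoming)):
        others = incoming[:k] + incoming[k + 1:]   # N(c_j) \ {v_i}
        sign = 1.0
        for x in others:
            sign *= -1.0 if x < 0 else 1.0
        mag = min(abs(x) for x in others)
        if reduce_msgs:
            # equation 8 (offset) if beta given, else equation 7 (attenuation)
            mag = max(mag - beta, 0.0) if beta is not None else alpha * mag
        out.append(sign * mag)
    return out
```

When all incoming magnitudes are large (at or above τ), the output is identical to the MSA, which is the selective behavior that distinguishes the TAMSA/TOMSA from the uniformly reduced AMSA/OMSA.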
Implementation of Threshold Attenuated/Offset MSA. For the MSA, for some number K of inputs to a check node processing unit, the implementation of sub-units to calculate ℓ1,j and ℓ2,j and the index needed to identify which input created ℓ1,j requires a significant number of multiplexers, comparators, and inverters, which is a function of K. A check node processing unit preferably includes some additional sub-units to generate the proper output and apply the attenuation (offset) parameter for the AMSA and/or OMSA. Implementation of the TAMSA and/or TOMSA adds just two simple steps to the implementation of the AMSA and/or OMSA. First, for a check node processing unit corresponding to cj, after calculating ℓ1,j and ℓ2,j, the value of ℓ1,j is preferably compared to τ. Second, a decision is made based on the outcome of the comparison to use the attenuated (offset) or non-attenuated (non-offset) output. Consequently, implementation of the TAMSA and/or TOMSA requires just one extra comparator and K extra multiplexers to decide if attenuation (offset) should be applied. If not, the additional multiplication for attenuation (or subtraction for offset) is not necessary. Hence, the extra requirements do not significantly increase the overall area or delay of a check node processing unit.
To illustrate the robustness of an embodiment of the present invention, consider the (8000,4000) MacKay code, the progressive edge growth (“PEG”) (1008,504) LDPC code, and the quasi-cyclic (155,64) Tanner code decoded with various algorithms, including the TAMSA and TOMSA according to an embodiment of the present invention, with different parameters, each using a 5-bit uniform quantizer with Δ=0.15 and lmax=2.25.
Performance Estimation Based on Problematic Objects. The impact of a problematic object on the performance of an LDPC code decoded with the MSA, AMSA, and TAMSA can be estimated. To do so, a lower bound on the frame error rate (“FER”) of any LDPC code containing a given problematic object (sub-graph) is derived, assuming a particular message passing decoder and decoder quantization. A crucial aspect of the lower bound is that it is code-independent, in the sense that it can be derived based only on a problematic object and then applied to any code containing that object. Given the dominant problematic object, decoder quantization, and decoding algorithm, a performance estimate of the code containing the dominant object can be derived. The number, type, and location of problematic objects in the Tanner graph do not need to be known to implement the algorithm. However, if the dominant problematic object is known, the performance estimate can facilitate determination of the optimum algorithm parameters. The lower bounds are tight for a variety of codes, problematic objects, and decoding algorithms.
By analyzing the AWGN channel performance simulations of the (8000,4000) code with a 5-bit quantizer, the (5,3) absorbing set of
Simulated Performance of LDPC Codes with TAMSA and TOMSA Decoders.
Table 1 illustrates average number of iterations recorded for the quasi-cyclic (155,64) Tanner code with the MSA, AMSA, and TAMSA decoding algorithms.
Layered MP decoding of LDPC block codes (“LDPC-BCs”) converges faster than standard MP decoding and is commonly employed in the implementation of quasi-cyclic codes.
Parameter Set Selection for TAMSA and TOMSA Decoders.
In the error floor, instead of running time-consuming code simulations, a different method can be applied to problematic objects to find the parameter sets (α′, τ) that lead to the best error floor performance. From the contour plots in
As previously discussed, AMSA and/or OMSA can be viewed as a particular case of TAMSA and/or TOMSA and, as such, the performance of TAMSA and/or TOMSA is at least as good as that of AMSA and/or OMSA with optimal parameter selection. Moreover, significant performance improvements can be seen for a variety of code structures and lengths.
Application of the TAMSA to Spatially Coupled LDPC Codes. Spatially Coupled LDPC Codes (“SC-LDPCC”) are known to combine the best features of both regular and irregular LDPC-BCs, i.e., they achieve excellent performance both in the waterfall and the error floor regions of the BER (FER) curve. The TAMSA is preferably used to decode SC-LDPCCs to further verify the effectiveness of embodiments of the present invention and to illustrate the benefit of combining the advantages of spatial coupling and the TAMSA.
SC-LDPCC Parity-Check Matrix. Given an underlying LDPC-BC with a μ×ν parity-check matrix HBC and rate R=1−μ/ν,
a terminated SC-LDPCC with parity-check matrix HSCL and syndrome former memory m can be formed by partitioning HBC into m+1 component matrices Hi, i=0, 1, . . . , m, each of size μ×ν, such that HBC=H0+H1+ . . . +Hm,
and arranging the component matrices in a diagonal band, where block column t, for t=0, 1, . . . , L−1, contains H0, H1, . . . , Hm stacked vertically starting at block row t,
where the coupling length L>m+1 denotes the number of block columns in HSCL and the rate of the terminated SC-LDPCC represented by HSCL is given by RSCL=1−((L+m)μ)/(Lν),
such that the rate of the terminated SC-LDPCC approaches the rate R=1−μ/ν of the underlying LDPC-BC as the coupling length L→∞.
Sliding Window Decoding of SC-LDPCCs. A sliding window (SW) decoder can be used to address the large latency and complexity requirements of decoding SC-LDPCCs with a standard flooding schedule decoder.
Cut-and-Paste Construction of SC-LDPCC. For the case m=1, the cut-and-paste method of constructing SC-LDPCC uses a cutting vector w=[w0, w1, . . . , wμ′-1] of non-decreasing, non-negative integers (0<w0≤w1≤ . . . ≤wμ′-1<ν′) to form two component matrices H0 and H1 from a μ×ν LDPC-BC parity-check matrix HBC. The cutting vector partitions HBC, which is composed of a μ′×ν′ array of γ×γ blocks (so that μ=μ′γ and ν=ν′γ), into two parts, one below and one above the cut, represented by H0 and H1, respectively.
where the underlying LDPC-BC has rate R=1−μ/ν.
For quasi-cyclic LDPC-BCs, such as array codes and Tanner codes, the parameter γ is set equal to the size of the circulant permutation matrices in order to maintain the code structure.
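The cut-and-paste construction for m=1 can be sketched as follows. The convention that block row r assigns block columns with index below wr to H0 (below the cut) and the remainder to H1 is an assumption about the orientation of the cut, and the function names are illustrative.

```python
def cut_and_paste(h_bc, gamma, w):
    """Split a parity-check matrix (a mu' x nu' array of gamma x gamma
    blocks, given as a list of row lists) into H0 (below the cut) and
    H1 (above the cut) using cutting vector w: in block row r, block
    columns 0..w[r]-1 stay in H0 and the rest move to H1."""
    mu, nu = len(h_bc), len(h_bc[0])
    h0 = [[0] * nu for _ in range(mu)]
    h1 = [[0] * nu for _ in range(mu)]
    for r in range(mu):
        br = r // gamma                      # block-row index
        for c in range(nu):
            bc = c // gamma                  # block-column index
            if bc < w[br]:
                h0[r][c] = h_bc[r][c]        # below the cut
            else:
                h1[r][c] = h_bc[r][c]        # above the cut
    return h0, h1

def couple(h0, h1, coupling_len):
    """Assemble a terminated SC-LDPCC parity-check matrix with syndrome
    former memory m=1: block column t holds H0 at block row t and H1 at
    block row t+1, forming the diagonal band."""
    mu, nu = len(h0), len(h0[0])
    rows, cols = (coupling_len + 1) * mu, coupling_len * nu
    h = [[0] * cols for _ in range(rows)]
    for t in range(coupling_len):
        for r in range(mu):
            for c in range(nu):
                h[t * mu + r][t * nu + c] = h0[r][c]
                h[(t + 1) * mu + r][t * nu + c] = h1[r][c]
    return h
```

By construction, H0 and H1 sum elementwise to HBC, matching the partitioning requirement for the component matrices.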
Simulation Results. The simulation results for the SC-LDPCC versions of the (8000,4000) LDPC-BC and the quasi-cyclic (155,64) Tanner code, decoded with the TAMSA and an SW decoder with W=6, where 50 iterations were performed at each window position, are presented in
The preceding examples can be repeated with similar success by substituting the generically or specifically described components and/or operating conditions of embodiments of the present invention for those used in the preceding examples.
Optionally, embodiments of the present invention can include a general or specific purpose computer or distributed system programmed with computer software implementing the steps described above, which computer software may be in any appropriate computer language, including but not limited to C++, FORTRAN, BASIC, Java, Python, Linux, assembly language, microcode, distributed programming languages, etc. The apparatus may also include a plurality of such computers/distributed systems (e.g., connected over the Internet and/or one or more intranets) in a variety of hardware implementations. For example, data processing can be performed by an appropriately programmed microprocessor, computing cloud, Application Specific Integrated Circuit (ASIC), Field Programmable Gate Array (FPGA), or the like, in conjunction with appropriate memory, network, and bus elements. One or more processors and/or microcontrollers can operate via the instructions of the computer code and the software is preferably stored on one or more tangible non-transitory memory-storage devices.
Note that in the specification and claims, “about” or “approximately” means within twenty percent (20%) of the numerical amount cited. All computer software disclosed herein can be embodied on any non-transitory computer-readable medium (including combinations of mediums), including without limitation CD-ROMs, DVD-ROMs, hard drives (local or network storage devices), USB keys, other removable drives, ROMs, and firmware.
Embodiments of the present invention can include every combination of features that are disclosed herein independently from each other. Although the invention has been described in detail with particular reference to the disclosed embodiments, other embodiments can achieve the same results. Variations and modifications of the present invention will be obvious to those skilled in the art and it is intended to cover in the appended claims all such modifications and equivalents. The entire disclosures of all references, applications, patents, and publications cited above are hereby incorporated by reference. Unless specifically stated as being “essential” above, none of the various components or the interrelationship thereof are essential to the operation of the invention. Rather, desirable results can be achieved by substituting various components and/or reconfiguring their relationships with one another.
This application is a continuation of U.S. patent application Ser. No. 16/871,917, filed on May 11, 2020, entitled “Threshold-Based Min-Sum Algorithm to Lower the Error Floors of Quantized Low-Density Parity-Check Decoders”, which itself claims priority to and the benefit of the filing of U.S. Provisional Patent Application No. 62/873,061, entitled “Threshold-Based Min-Sum Algorithm to Lower the Error Floors of Quantized Low-Density Parity-Check Decoders”, filed on Jul. 11, 2019, and the specification thereof is incorporated herein by reference.
This invention was made with government support from the National Science Foundation under grant numbers ECCS-1710920 and OIA-1757207. The government has certain rights in the invention.
Number | Name | Date | Kind |
---|---|---|---|
7477694 | Sanderford et al. | Jan 2009 | B2 |
8266493 | Abbaszadeh et al. | Sep 2012 | B1 |
8359522 | Gunnam et al. | Jan 2013 | B2 |
8549375 | Ueng et al. | Oct 2013 | B2 |
8621318 | Micheloni et al. | Dec 2013 | B1 |
8689074 | Tai | Apr 2014 | B1 |
8689084 | Tai | Apr 2014 | B1 |
8898537 | Gross et al. | Nov 2014 | B2 |
8935598 | Norrie | Jan 2015 | B1 |
8984376 | Norrie | Mar 2015 | B1 |
8990661 | Micheloni | Mar 2015 | B1 |
9100153 | Gross et al. | Aug 2015 | B2 |
9450610 | Micheloni et al. | Sep 2016 | B1 |
9590656 | Micheloni et al. | Mar 2017 | B2 |
9608666 | Morero et al. | Mar 2017 | B1 |
9813080 | Micheloni et al. | Nov 2017 | B1 |
10103751 | Morero et al. | Oct 2018 | B2 |
10230396 | Micheloni et al. | Mar 2019 | B1 |
10284293 | Bitra et al. | May 2019 | B2 |
10305513 | Lee | May 2019 | B2 |
10374632 | Zhang et al. | Aug 2019 | B2 |
10778248 | Wu | Sep 2020 | B1 |
11309915 | Mitchell et al. | Apr 2022 | B1 |
20050204271 | Sharon | Sep 2005 | A1 |
20050204272 | Yamagishi | Sep 2005 | A1 |
20070089019 | Tang et al. | Apr 2007 | A1 |
20090164540 | Oh | Jun 2009 | A1 |
20100131819 | Graef | May 2010 | A1 |
20100162075 | Brannstrom et al. | Jun 2010 | A1 |
20100306617 | Kondo | Dec 2010 | A1 |
20110231731 | Gross et al. | Sep 2011 | A1 |
20120221914 | Morero et al. | Aug 2012 | A1 |
20130019141 | Wang | Jan 2013 | A1 |
20130086445 | Yedidia et al. | Apr 2013 | A1 |
20140068394 | Zhang et al. | Mar 2014 | A1 |
20140108883 | Tehrani | Apr 2014 | A1 |
20140201594 | Zhu | Jul 2014 | A1 |
20160134305 | Morero et al. | May 2016 | A1 |
20170085276 | Prabhakar et al. | Mar 2017 | A1 |
20170264316 | Lee | Sep 2017 | A1 |
20180041227 | Lee | Feb 2018 | A1 |
20200136653 | Kim | Apr 2020 | A1 |
Number | Date | Country |
---|---|---|
104205647 | Dec 2014 | CN |
109936379 | Jun 2019 | CN |
2245772 | Apr 2019 | EP |
6396977 | Sep 2018 | JP |
201119247 | Jun 2011 | TW |
2019013662 | Jan 2019 | WO |
2019205313 | Oct 2019 | WO |
Entry |
---|
N. E. Maammar, S. Bri and J. Foshi, “Layered Offset Min-Sum Decoding for Low Density Parity Check Codes,” 2018 International Symposium on Advanced Electrical and Communication Technologies (ISAECT), Rabat, Morocco, 2018, pp. 1-5. |
Eng Xu, Jianhui Wu and Meng Zhang, “A modified Offset Min-Sum decoding algorithm for LDPC codes,” 2010 3rd International Conference on Computer Science and Information Technology, Chengdu, China, 2010, pp. 19-22. |
“Quasi-cyclic Low Density Parity-check code (QC-LDPC)”, https://arxiv.org/ftp/arxiv/papers/1511/1511.00133.pdf, Downloaded Nov. 27, 2019. |
Abdu-Aguye, Umar-Faruk , “On Lowering the Error-Floor of Short-to-Medium Block Length Irregular Low Density Parity-Check Codes”, A thesis submitted to Plymouth University in partial fulfillment for the degree of Doctor of Philosophy, Oct. 2017. |
All Answers Limited , “Adaptive FPGA-based LDPC-coded Manipulation”, https://ukdiss.com/examples/adaptive-fpga-based-ldpc-coded-modulation.php, Nov. 2018. |
Angarita, Fabian , et al., “Reduced-Complexity Min-Sum Algorithm for Decoding LDPC Codes with Low Error-Floor”, IEEE Transactions on Circuits and Systems, vol. 61, No. 7, Jul. 2014, 2150-2158. |
Chen, Jinghu , et al., “Reduced-Complexity Decoding of LDPC Codes”, IEEE Transactions on Communications, vol. 53, No. 8, Aug. 2005, 1288-1299. |
Darabiha, Ahmad , et al., “A Bit-Serial Approximate Min-Sum LDPC Decoder and FPGA Implementation”, ISCAS 2006, IEEE, 2006, 149-152. |
Fossorier, Marc P.C., et al., “Reduced Complexity Iterative Decoding of Low-Density Parity Check Codes Based on Belief Propagation”, IEEE Transactions on Communications, vol. 47, No. 5, May 1999, 673-680. |
Hailes, Peter , et al., “Hardware-Efficient Node Processing Unit Architectures for Flexible LDPC Decoder Implementations”, IEEE Transactions on Circuits and Systems-II: Express Briefs, vol. 65, No. 12, Dec. 2018, 1919-1923. |
Han, Yang , et al., “LDPC Decoder Strategies for Achieving Low Error Floors”, Conference Paper, 2008 Information Theory and Applications Workshop, downloaded from IEEE Xplore, 2008. |
He, Huanyu , et al., “A New Low-Resolution Min-Sum Decoder Based on Dynamic Clipping for LDPC Codes”, Conference Paper, IEEE/CIC International Conference on Communications in China (ICCC), downloaded on Feb. 24, 2021 from IEEE Xplore, 2019, 636-640. |
Howard, Sheryl , et al., “Soft-bit decoding of regular low-density parity-check codes”, IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 52, No. 10, Oct. 2005, 646-650. |
Kudekar, Shrinivas , et al., “The effect of saturation on belief propagation decoding of LDPC codes”, 2014 IEEE International Symposium on Information Theory, 2014, 2604-2608. |
Lechner, Gottfried , “Efficient Decoding Techniques for LDPC Codes”, https://publik.tuwien.ac.at/files/pub-et_12989.pdf, Jul. 2007. |
Liu, Xingcheng , et al., “Variable-Node-Based Belief-Propagation Decoding With Message Pre-Processing for NANO Flash Memory”, IEEE Access, vol. 7, 2019, 58638-58653. |
Siegel, Paul H., “An Introduction to Low-Density Parity-Check Codes”, http://cmrr-star.ucsd.edu/static/presentations/ldpc_tutorial.pdf, May 31, 2007. |
Song, Suwen , et al., “A Reduced Complexity Decoding Algorithm for NB-LDPC Codes”, Conference Paper, 17th IEEE International Conference on Communication Technology, downloaded Feb. 24, 2021 from IEEE Xplore, 2017, 127-131. |
Tanner, R. Michael, et al., “LDPC Block and Convolutional Codes Based on Circulant Matrices”, IEEE Transactions on Information Theory, vol. 50, No. 12, Dec. 2004, 2966-2984. |
Tehrani, Seced Sharifi, “Stochastic Decoding of Low-Density Parity-Check Codes”, Thesis submitted to McGill University in partial fulfillment of the requirements of the degree of Doctor of Philosophy, 2011. |
Tehrani, S. , et al., “Stochastic decoding of low-density parity-check codes”, Computer Science (abstract only), 2011. |
Vasic, Bane, et al., “Failures and Error-Floors of Iterative Decoders”, In Preparation for Channel Coding Section of the Elsevier Science E-Book Series, Dec. 2012. |
Yu, Hui , et al., “Systematic construction, verification and implementation methodology for LDPC codes”, EURASIP Journal on Wireless Communications and Networking, http://jwcn.eurasipjournals.com/content/2012/1/84, 2012. |
Zhao, Jianguang , et al., “On Implementation of Min-Sum Algorithm and Its Modifications for Decoding Low-Density Parity-Check (LDPC) Codes”, IEEE Transactions on Communications, vol. 53, No. 4, Apr. 2005, 549-554. |
Number | Date | Country | |
---|---|---|---|
62873061 | Jul 2019 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 16871917 | May 2020 | US |
Child | 17733924 | US |