The present invention belongs to the field of low-density parity check (LDPC, standing for “Low Density Parity Check”) codes. In particular, the invention concerns an optimization of the calculation of parity check messages in an LDPC decoding process, as well as a strategy for quantizing the data used in the decoding process.
LDPC codes are currently used in several communication technologies, in particular for the IEEE 802.16 (WiMAX) and IEEE 802.11n (Wi-Fi) standards, the 5G standard of the 3GPP (“3rd Generation Partnership Project”) organization, the DVB-S2 (“Digital Video Broadcasting, 2nd Generation”) standard, or the space communications standard CCSDS C2 (“Consultative Committee for Space Data Systems, C2”).
A binary LDPC code is a linear error corrector code defined by a binary parity matrix (the elements of the matrix are ‘0’ and ‘1’). The parity matrix is a low-density matrix, i.e. the number of non-zero elements of the matrix is relatively small compared to the size M×N of the matrix.
An LDPC code may be represented in the form of a bipartite graph (Tanner graph) having connections between N variable nodes and M parity check nodes. Each non-zero element of the parity matrix corresponds to a connection between a variable node and a parity check node. Each line of the parity matrix corresponds to a parity equation associated with a parity check node. Each column of the parity matrix corresponds to a variable associated with a variable node. A codeword to be decoded corresponds to a set of values taken respectively by the variables associated with the different variable nodes (set of the estimated values of the bits of the codeword).
To reduce the hardware implementation complexity of an LDPC decoder, it is known to use particular structures of the parity matrix. In particular, quasi-cyclic LDPC codes (QC-LDPC, standing for “Quasi-Cyclic Low Density Parity Check”) are defined by parity matrices composed of Z×Z size sub-matrices. The term Z is generally called the “expansion factor”. The Z×Z size sub-matrices are generally called “circulant matrices”. For example, a parity matrix of a QC-LDPC code is obtained from an R×C size base matrix by replacing each element of the base matrix with a Z×Z size matrix corresponding either to a zero matrix or to an offset-shift of the identity matrix. The parity matrix then includes R×Z lines (M=R×Z) and C×Z columns (N=C×Z).
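By way of illustration only, the following Python sketch shows how such an expansion can be performed; the base matrix, shift values and expansion factor used in the example are arbitrary and do not correspond to any particular standard. An entry of −1 denotes a zero sub-matrix, and a non-negative entry denotes the shift applied to the identity matrix.

```python
import numpy as np

def expand_qc_ldpc(base, Z):
    """Expand an R x C base matrix into the (R*Z) x (C*Z) binary parity matrix H.

    Each base entry is either -1 (replaced by the Z x Z zero matrix) or a
    shift value s in [0, Z) (replaced by the identity matrix circularly
    shifted by s columns).
    """
    R, C = base.shape
    H = np.zeros((R * Z, C * Z), dtype=np.uint8)
    I = np.eye(Z, dtype=np.uint8)
    for r in range(R):
        for c in range(C):
            s = base[r, c]
            if s >= 0:
                H[r*Z:(r+1)*Z, c*Z:(c+1)*Z] = np.roll(I, s, axis=1)
    return H

# Toy example (arbitrary base matrix, not taken from the CCSDS code): Z = 4
base = np.array([[0, -1, 2],
                 [1,  3, -1]])
H = expand_qc_ldpc(base, Z=4)
assert H.shape == (2 * 4, 3 * 4)
```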
An interesting characteristic of a QC-LDPC code is that its parity matrix is organized into horizontal or vertical layers. For example, a horizontal layer of the parity matrix corresponds to a set of L consecutive lines of the parity matrix originating from a line of the base matrix (L ≤ Z). This layered structure allows parallelizing the calculations of the parity check messages within a layer because the parity equations of a layer do not involve a variable of the codeword more than once. Indeed, a layer has one single non-zero element in the parity matrix for a given variable, or in other words, the variable nodes connected to a parity check node of one layer are not connected to another parity check node of said layer.
The decoding of an LDPC codeword is based on an iterative exchange of information on the likelihood of the values taken by the bits of the codeword. The iterative decoding process is based on a belief propagation algorithm by exchanging messages between the variable nodes and the parity check nodes, and by applying the parity equations. At each iteration, variable messages are calculated from parity check messages calculated during the previous iteration; the parity check messages are calculated for the current iteration; and variables corresponding to an estimation of the codeword are updated from the parity check messages.
In particular, the iterative process of the decoding of an LDPC codeword may be based on the BP (“Belief Propagation”) algorithm, also known under the term SPA (“Sum-Product Algorithm”). The BP-SPA algorithm offers good decoding performances at the expense of a high computational complexity. This computational complexity is related to the use of functions based on hyperbolic tangents or logarithm and exponential functions for the calculation of the parity check messages.
Hence, variants of the BP-SPA algorithm have been proposed to reduce the computational complexity of the decoding.
For example, the A-min* and λ-min algorithms are close to the formulation of the BP-SPA algorithm, but they reduce its computational complexity. In particular, for the λ-min algorithm, only the variable messages with the lowest amplitudes are taken into account for the calculation of a parity check message (the lower the amplitude of a variable message, the more it affects the values of the parity check messages).
According to another example, the Min-Sum algorithm replaces the calculations of hyperbolic tangents with calculations of minimums to approximate the parity check messages. This approximation significantly reduces the computational complexity. However, it over-estimates the amplitudes of the parity check messages, which leads to a decrease in the error correction performance. Hence, variants of the Min-Sum algorithm have been introduced to compensate for this over-estimation. This is the case in particular of the “Offset Min-Sum” (OMS) and “Normalized Min-Sum” (NMS) algorithms. The OMS algorithm introduces a correction value (an “offset”) to be subtracted from the value calculated for the amplitude of a parity check message. In turn, the NMS algorithm introduces a normalization factor to be applied to the value calculated for the amplitude of a parity check message.
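As a minimal sketch (the offset and the normalization factor are illustrative values, not values prescribed by these algorithms), the following Python fragment contrasts the Min-Sum amplitude of a parity check message with its OMS and NMS variants:

```python
def min_sum_magnitude(abs_variable_messages):
    # Min-Sum approximation: the amplitude of the check message is the
    # smallest absolute value among the considered variable messages.
    return min(abs_variable_messages)

def oms_magnitude(abs_variable_messages, offset=1):
    # Offset Min-Sum: subtract a correction value, clamping the result at zero.
    return max(min(abs_variable_messages) - offset, 0)

def nms_magnitude(abs_variable_messages, factor=0.75):
    # Normalized Min-Sum: apply a normalization factor (smaller than 1).
    return factor * min(abs_variable_messages)
```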
These different algorithms offer different tradeoffs in terms of computational complexity and of correction power. The selection of a particular algorithm is very strongly related to the context in which the LDPC decoding is applied.
To reduce latency and increase the average decoding rate, it is important to limit the number of iterations necessary to correct the errors. This also allows limiting the energy consumption of the decoder. Thus, an important characteristic of an LDPC decoder lies in the criterion used to stop the decoding process, i.e. the criterion used to consider that the convergence to the correct codeword has been reached.
A stop criterion may be determined from a parity check calculation on the set of estimated values of the bits of the codeword at the end of an iteration (the syndrome is then an M size vector given by the M parity equations of the parity matrix). This yields relatively low error rates. Nonetheless, it introduces additional latency because the determination of the stop criterion requires interrupting the decoding process at each iteration. In addition, this solution is not well suited to a layered architecture when the size N of a codeword is large.
When the decoding process follows a layered architecture, for example with the use of a QC-LDPC code, it is possible to consider calculating on-the-fly (i.e. without interrupting the decoding process) a partial syndrome for each layer. A partial syndrome is an L size vector defined by the L parity equations of the considered layer (L being the size of the layer, i.e. the number of consecutive lines of the parity matrix corresponding to a layer in the case of a horizontal layered structure).
For example, it is possible to consider that the stop criterion is met when, at the end of an iteration corresponding to the successive processing of the different layers, all of the partial syndromes calculated respectively for the different layers are zero. Nonetheless, there is no guarantee that the partial syndromes are met by the same codeword because the estimates of the values of the bits of the codeword are updated after the processing of each layer. This could lead to a significant increase in false detections.
The article by A. Hera et al. entitled “Analysis and Implementation of On-the-Fly Stopping Criteria for Layered QC LDPC Decoders”, published in pages 287 to 291 of the proceedings of the 22nd International Conference “Mixed Design of Integrated Circuits and Systems” held from June 25 to June 27, 2015, in Toruń, Poland, proposes a stop criterion that takes several successive iterations into account. More particularly, the stop criterion is considered to be met when the partial syndromes of the different layers are all zero for a predetermined number of successive iterations. By increasing the number of successive iterations to be considered, it is possible to reduce the error rates at the expense of a greater latency and a lower average throughput.
To reach high data rates, and to limit the hardware complexity of the decoder, it is preferable to use a fixed-point representation of the data (parity check messages, variable messages, variables corresponding to an estimation of the codeword). Nonetheless, the fixed-point representation can affect the decoding performance in terms of error rates. In particular, the saturation of the data used in the decoding process (when these data reach the maximum value permitted by the fixed-point representation) gives rise to an error rate floor (referred to as a “quantization floor”). Hence, the quantization format of the data should be carefully selected so as to lower the quantization floor while limiting the hardware implementation complexity (when the quantization format is large, there is less saturation and the quantization floor is lower, but the hardware implementation complexity is greater).
The thesis by V. Pignoly entitled “Étude de codes LDPC pour applications spatiales optiques et conception de décodeurs associés”, submitted on Mar. 26, 2021, presents, in sections 1.3 and 2.1 to 2.3, the LDPC concept, the particular case of the QC-LDPC codes, the principle of decoding with a layered scheduling, the different decoding algorithms and the notion of data quantization.
The thesis by T. T. Nguyen Ly entitled “Efficient Hardware Implementations of LDPC Decoders through Exploiting Impreciseness in Message-Passing Decoding Algorithms” (hereinafter referenced as “Ref1”) also describes the LDPC decoding principle for a flooding scheduling (cf. sections 2.3.1 and 2.3.3) and for a layered scheduling (cf. section 2.5.2).
Space communications impose strong constraints on the channel coding, in particular in terms of correction power, due to the numerous sources of noise that degrade the quality of the transmitted signals and to the high cost of a possible retransmission. On the other hand, the data rates to be reached are increasingly high (the target data rates may exceed 1 Gbit/s (gigabit per second), and even 10 Gbit/s). Furthermore, the energy consumption and the complexity of the electronic components on board a satellite are generally limited.
The known solutions in terms of stop criterion, decoding algorithm or data quantization strategy do not always provide an ideal tradeoff between decoding performance, data rate, implementation complexity and energy consumption.
The document “Degree-Matched Check Node Decoding for Regular and Irregular LDPCs”, Howard S. L. et al., describes an “Offset BP-based LDPC decoding” algorithm that depends on the degree of a check node.
Patent application US2017/026055A1 and patent U.S. Pat. No. 9,059,742B1 each describe a strategy for quantizing check messages or estimation variables (LLRs) when a particular saturation criterion is met.
The present invention aims to overcome all or part of the drawbacks of the prior art, in particular those set out hereinbefore.
To this end, and according to a first aspect, the present invention proposes a method for decoding a codeword with a decoder of low-density parity check code, so-called LDPC code. The LDPC code is defined by an M×N size binary parity matrix, M and N being positive integers. The parity matrix corresponds to a representation of a bipartite graph comprising connections between M parity check nodes and N variable nodes. Each line of the parity matrix corresponds to a parity equation associated with a parity check node. Each column of the parity matrix corresponds to a variable associated with a variable node. Each non-zero element of the parity matrix corresponds to a connection between a parity check node and a variable node. The codeword to be decoded corresponds to a set of values taken respectively by said variables. The method comprises executing one or more iteration(s) until a stop criterion is met. Each iteration comprises:
For each parity check node and for each variable node to which said parity check node is connected, the calculation of a parity check message comprises:
This particular method of calculating parity check messages provides a good compromise between decoding performance, data rate and implementation complexity.
In particular modes of implementation, the invention may further include one or more of the following features, considered separately or according to any technically-feasible combinations.
In particular modes of implementation, the first smallest value and the second smallest value are determined among the absolute values of the variable messages associated with said parity check node while excluding the variable message associated with said variable node VNn.
In particular modes of implementation, the calculation of a parity check message comprises at least one second comparison of the difference between the second smallest value and the first smallest value to a second threshold, and the correction value is determined from among at least three possible values according to the results of the first comparison and of the second comparison.
For low coding rates, using two thresholds and three correction values (instead of one threshold and two correction values) significantly improves decoding performance.
In particular modes of implementation, at least one out of the first threshold and the second threshold is defined in such a way that the decimal representation of its value S can be written in the form:
S = 2^NLSB − 1
where NLSB is a positive integer.
This particular choice of threshold values simplifies the implementation of the decoder.
In particular modes of implementation, the decoder supports various code rates and the possible values for the correction value are defined according to the code rate used.
In particular modes of implementation, the calculation of the parity check message includes the calculation of a subtraction of the correction value from the first smallest value. If the result of the subtraction is negative, the value of the parity check message is set to zero. If the result of the subtraction is positive or zero, the absolute value of the parity check message is set to the value of the result of the subtraction.
In particular modes of implementation, the parity matrix has a horizontal layered structure. Each layer corresponds to one or more consecutive line(s) of the parity matrix. Each layer has one single non-zero element for a given variable.
In particular modes of implementation, the LDPC code is a quasi-cyclic code. The parity matrix is obtained by extending an R×C size base matrix by an expansion factor Z, Z being a positive integer, each element of the base matrix being replaced by a Z×Z size matrix corresponding either to a zero matrix, or to an offset-shift of an identity matrix. The parity matrix includes R×Z lines and C×Z columns.
In particular modes of implementation, each layer corresponds to the Z lines of the parity matrix corresponding to a line of the base matrix.
In particular modes of implementation, when the calculated value of a parity check message or of an a posteriori estimation variable exceeds a predetermined saturation value, said calculated value is saturated at said saturation value; at the end of an iteration, when a saturation criterion is verified, the method comprises at least one scaling of the parity check messages and of the a posteriori estimation variables. A scaling corresponds to assigning to a value the integer having the same sign, the absolute value of which is the closest integer greater than the absolute value of the value divided by two. The saturation criterion is verified when one or more of the following conditions are met:
This particular method of on-the-fly scaling makes it possible to lower the error rate floor for a given quantization. It can also make it possible to obtain performance in terms of the error rate floor that is comparable to that of a higher quantization. This results in a significant reduction in the decoder's memory footprint at the cost of a relatively low implementation overhead.
In particular modes of implementation, said first scaling also comprises a scaling of the possible correction values.
According to a second aspect, the present invention relates to a decoder of low-density parity check code, so-called LDPC code. The LDPC code is defined by an M×N size binary parity matrix, M and N being positive integers. The parity matrix corresponds to a representation of a bipartite graph comprising connections between M parity check nodes and N variable nodes. Each line of the parity matrix corresponds to a parity equation associated with a parity check node. Each column of the parity matrix corresponds to a variable associated with a variable node. Each non-zero element of the parity matrix corresponds to a connection between a parity check node and a variable node. A codeword to be decoded corresponds to a set of values taken respectively by said variables. The decoder includes a processing unit configured to execute one or more iteration(s) until a stop criterion is met and, at each iteration, to:
For each parity check node and for each variable node to which said parity check node is connected, to calculate a parity check message, the processing unit is configured to:
In particular embodiments, the invention may further include one or more of the following features, considered separately or according to any technically-feasible combinations.
In particular embodiments, the first smallest value and the second smallest value are determined among the absolute values of the variable messages associated with said parity check node while excluding the variable message associated with said variable node VNn.
In particular modes of implementation, to calculate a parity check message, the processing unit is configured to
In particular modes of implementation, at least one out of the first threshold and the second threshold is defined in such a way that the decimal representation of its value S can be written in the form:
S = 2^NLSB − 1
where NLSB is a positive integer. The comparison is carried out by an “OR” gate or a “NOR” gate taking as input the most significant bits beyond the NLSB least significant bits of the value of the difference between the second smallest value and the first smallest value.
In particular modes of implementation, the processing unit is configured to support various code rates and the possible values for the correction value are defined according to the code rate used.
In particular modes of implementation, the calculation of the parity check message includes the calculation of a subtraction of the correction value from the first smallest value. If the result of the subtraction is negative, the value of the parity check message is set to zero. If the result of the subtraction is positive or zero, the absolute value of the parity check message is set to the value of the result of the subtraction.
In particular modes of implementation, when the calculated value of a parity check message or of an a posteriori estimation variable exceeds a predetermined saturation value, said calculated value is saturated at said saturation value; at the end of an iteration, when a saturation criterion is verified, the processing unit is configured to carry out at least one scaling of the parity check messages and of the a posteriori estimation variables. A scaling corresponds to assigning to a value the integer having the same sign, the absolute value of which is the closest integer greater than the absolute value of the value divided by two. The saturation criterion is verified when one or more of the following conditions are met:
In particular modes of implementation, to carry out said first scaling, the processing unit is further configured to scale the possible correction values.

According to a third aspect, the present invention relates to a satellite including a decoder according to any one of the preceding embodiments.
The invention will be better understood upon reading the following description, given as a non-limiting example, and made with reference to
In these figures, identical references from one figure to another designate identical or similar elements. For clarity, the illustrated elements are not necessarily plotted to the same scale, unless stated otherwise.
In the remainder of the description, we consider, without limitation, the case of an LDPC decoder for space communications. The CCSDS (acronym standing for “Consultative Committee for Space Data Systems”) is currently defining a standard for optical space communications for which LDPC codes have been defined by the company AIRBUS DEFENCE AND SPACE and the National Center of Space Studies (CNES). The invention applies particularly well to this communication standard. Nonetheless, the invention could also be applied to other types of communications, in particular radio communications. The data rates targeted for the considered space communications are relatively high, for example higher than 100 Mbit/s, or even higher than 1 Gbit/s, or even higher than 10 Gbit/s. Nonetheless, nothing prevents applying the invention to a case where the data rate is lower than these values.
An LDPC code is defined by a parity matrix.
As illustrated in
A codeword may have a relatively large size, for example a size larger than or equal to 1,000 bits (N≥1,000). In the considered example, the codeword has a size of 30,720 bits (N=30,720) (we consider hereinafter a step of puncturing some information bits).
We consider the case of an LDPC decoder which supports different coding rates. The coding rate corresponds to the ratio of the number of useful bits in a codeword to the total number of bits of the codeword. The higher the coding rate, the lower the computational complexity and the higher the data rate, but the lower the error correction power (and therefore the higher the error rate). Conversely, the lower the coding rate, the higher the error correction power (low error rate), but the higher the computational complexity and the lower the data rate.
In the considered example, the number of lines M and the density of the parity matrix depend on the coding rate. For a coding rate of 9/10, M=4,608 and the density amounts to 0.0816%. For a coding rate of ½, M=17,920 and the density amounts to 0.024%. For a coding rate of 3/10, M=23,040 and the density amounts to 0.0213%.
The decoding of an LDPC codeword is based on an iterative exchange of information on the likelihood of the values taken by the bits of the codeword. The iterative decoding process is based on a belief propagation algorithm which relies on an exchange of messages between the variable nodes VNn and the parity check nodes CNm.
As illustrated in
A message sent by a variable node VNn to a parity check node CNm is denoted αn,m (the notation v2cn,m is sometimes used to reflect the direction of the message, from a variable node to a parity check node). The value of a message αn,m is calculated at the level of a variable node VNn for each of the parity check nodes CNm connected to the variable node VNn on the graph G.
This is an iterative process: the messages αn,m are calculated from the previously calculated messages βm,n, and the messages βm,n are calculated from the previously calculated messages αn,m. This iterative process takes as input a priori estimation variables of the codeword that correspond for example to log-likelihood ratios (LLR, standing for “Log-Likelihood Ratio”). These are values representative of the probability that the value of a bit of the codeword is equal to ‘1’ or to ‘0’ (logarithm of the ratio between the probability that the value of the bit is equal to ‘0’ and the probability that the value of the bit is equal to ‘1’).
A posteriori estimation variables γn (n varying from 1 to N) of the bits of the codeword are also calculated iteratively from the messages βm,n. These values γn are also representative of the probability of the value of a bit of the codeword being equal to ‘1’ or to ‘0’. They allow making a decision on the value of each of the bits of the codeword. A syndrome can then be calculated from the estimated values of the bits of the codeword and from the parity equations defined by the parity matrix H. If we denote by c = (c1, c2, ..., cN) the set of estimated values of the bits of the codeword, then the syndrome s is defined by the matrix equation s = H·c^T (the operations being performed modulo 2). A zero syndrome means that the estimated values of the bits of the codeword meet the parity equations.
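For illustration, assuming the parity matrix H is stored as a NumPy array and c as a vector of hard decisions, the syndrome calculation could be sketched as follows (the function names are chosen for the example):

```python
import numpy as np

def syndrome(H, c):
    """Syndrome s = H . c^T modulo 2 (one bit per parity equation).

    H is the (M, N) binary parity matrix and c a length-N vector of hard decisions."""
    return (H.astype(int) @ c.astype(int)) % 2

def parity_checks_met(H, c):
    # A zero syndrome means that the estimated codeword meets all parity equations.
    return not np.any(syndrome(H, c))
```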
The algorithm 1 defined in section 2.3.1 of the document Ref1 describes an example of an LDPC decoding iterative process with the BP-SPA algorithm. The algorithm 2 defined in section 2.3.3 of the document Ref1 describes an example of an LDPC decoding iterative process with the Min-Sum algorithm. These conventional algorithms are known to a person skilled in the art.
These two algorithms are described in the case of a flooding scheduling. The messages αn,m and the values γn are initialized with the a priori estimation variables. Afterwards, at each iteration: the messages βm,n are calculated from the messages αn,m; the messages αn,m are calculated from the messages βm,n and from the a priori estimation variables; the a posteriori estimation variables γn are calculated from the messages βm,n and from the a priori estimation variables. A syndrome may then be calculated from the a posteriori estimation variables γn.
To reduce the hardware implementation complexity of the LDPC decoder, it is possible to use particular structures of the parity matrix H which confer on the matrix an organization in horizontal or vertical layers. For example, a horizontal layer of the parity matrix H may be defined as a set of consecutive lines defined such that, for a given variable (i.e. for a given column of the parity matrix H), the layer has only one non-zero element.
This layered structure allows parallelizing the calculations of the parity check messages within a layer because the parity equations of one layer do not involve a variable of the codeword more than once. Indeed, if a layer has only one non-zero element for a given variable, this means that the variable nodes VNn connected to a parity check node CNm of one layer are not connected to another parity check node of said layer.
There are different ways for obtaining a parity matrix H having a layered structure. In particular, and as illustrated in
For example, and as illustrated in
A horizontal layer of the parity matrix H may then be defined as a set of L consecutive lines of the parity matrix H originating from a line of the base matrix B, with L≤Z.
In the considered example, and without limitation, the expansion factor Z is equal to 128.
A QC-LDPC code may also be obtained by a repetition of a protograph (a protograph is a bipartite graph) and permutations, following predetermined rules, of the connection links existing between its nodes (the permutations are defined by circulant matrices).
Many types of LDPC codes correspond to quasi-cyclic codes and/or to juxtapositions and/or to quasi-cyclic code combinations. For example, these may consist of an irregular code of the “accumulate repeat accumulate” type (LDPC ARA code), or of the “irregular repeat accumulate” type (LDPC IRA code), or of the protograph-based Raptor-like type (LDPC PBRL code).
In a horizontal layered architecture, the calculations are centered primarily on the parity check nodes CNm. The number L corresponds to the number of functional units used to execute in parallel the calculations performed at the level of the parity check nodes. When L=Z, the parallelization level is maximum.
A scheduling based on a horizontal layered structure allows doubling the convergence rate of the decoding process (about half as many iterations are needed with a horizontal layered scheduling to reach performance equivalent to that obtained with a flooding scheduling). Furthermore, the memory footprint of a decoder based on a layered scheduling is smaller than that of a decoder based on a flooding scheduling because it is not necessary to store the messages αn,m. The use of a QC-LDPC code further allows simplifying the permutation network of the decoder by exploiting the linear properties of the rotation operation.
The algorithm 7 defined in section 2.5.2 of the document Ref1 describes an example of an LDPC decoding iterative process with the Min-Sum algorithm in the case of horizontal layered scheduling. The a posteriori estimation variables γn are initialized with the a priori estimation variables (LLRs). The messages βm,n are initialized at zero. Afterwards, at each iteration, the different layers are successively processed. For each layer: the messages αn,m are calculated from the messages βm,n and from the a posteriori estimation variables γn; the messages βm,n are calculated from the messages αn,m; the a posteriori estimation variables γn are calculated from the messages βm,n; a partial syndrome can then be calculated from the a posteriori estimation variables γn.
As illustrated in
The calculation 111 of a variable message αn,m is performed for each variable node VNn involved in the layer being processed and for each of the parity check nodes CNm connected to said variable node VNn. The messages αn,m are calculated from the current values of the a posteriori estimation variables γn and from the current values of the parity check messages βm,n. These current values correspond either to the initialization values (for the first iteration) or to the values calculated during the previous iteration. For example, a message αn,m is calculated such that αn,m=γn−βm,n.
The calculation 112 of a parity check message βm,n is performed for each parity check node CNm involved in the layer being processed and for each of the variable nodes VNn connected to said parity check node CNm. The messages βm,n are calculated from the current values of the variable messages αn,m. For example, a message βm,n is calculated by considering all of the messages αn′,m associated with the parity check node CNm, excluding the message αn,m associated with the variable node VNn; the absolute value of a message βm,n is equal to the smallest absolute value of the considered messages αn′,m; the sign of a message βm,n is equal to the product of the signs of the considered messages αn′,m.
The calculation 113 of a value of the a posteriori estimation variable γn is performed for each bit of the codeword. For example, the γn are calculated from the current values of the parity check messages βm,n and from the current values of the variable messages αn,m such that γn=αn,m+βm,n.
The calculation 114 of a partial syndrome for the layer being processed is performed by applying the parity equations of said layer to the a posteriori estimation variables γn. The partial syndrome is then an L-size vector.
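The following Python sketch summarizes the processing of one layer (steps 111 to 114) with the Min-Sum approximation; the data layout (a dictionary of parity check messages indexed by edge) and the sign convention (a negative γn is decided as bit ‘1’) are assumptions made for the illustration and do not reflect the actual decoder architecture:

```python
import numpy as np

def process_layer(layer_rows, H, gamma, beta):
    """One layer of a horizontal layered Min-Sum iteration.

    layer_rows : indices of the parity check nodes (rows of H) forming the layer
    gamma      : a posteriori estimation variables (one value per variable node)
    beta       : dict mapping an edge (m, n) to the current parity check message
    Returns the partial syndrome of the layer (one bit per parity equation).
    """
    partial_syndrome = []
    for m in layer_rows:
        neighbors = np.flatnonzero(H[m])          # variable nodes connected to CNm
        # Step 111: variable messages alpha_{n,m} = gamma_n - beta_{m,n}
        alpha = {n: gamma[n] - beta[(m, n)] for n in neighbors}
        # Step 112: parity check messages (Min-Sum): for each n, smallest |alpha|
        # and product of signs over the other connected variable nodes.
        for n in neighbors:
            others = [alpha[k] for k in neighbors if k != n]
            sign = 1
            for a in others:
                sign *= -1 if a < 0 else 1
            beta[(m, n)] = sign * min(abs(a) for a in others)
        # Step 113: update of the a posteriori estimation variables
        for n in neighbors:
            gamma[n] = alpha[n] + beta[(m, n)]
        # Step 114: one bit of the partial syndrome from the hard decisions
        bits = [1 if gamma[n] < 0 else 0 for n in neighbors]
        partial_syndrome.append(sum(bits) % 2)
    return partial_syndrome
```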
The method 100 includes, at the end of the processing of each layer, checking 120 whether the iteration is completed or not. The iteration is completed when all the layers have been processed.
At the end of an iteration, the method 100 includes evaluating 130 a stop criterion. For example, it is possible to consider that the stop criterion is met when all the partial syndromes calculated respectively for the different layers are zero.
For example, the decoder 10 is implemented in the form of a specific integrated circuit of the ASIC type (acronym standing for “Application-Specific Integrated Circuit”), or of a reprogrammable integrated circuit of the FPGA type (acronym standing for “Field-Programmable Gate Array”).
For example, the decoder 10 is embedded in a receiver device of a payload of a satellite intended to be placed in orbit around the Earth, or in a receiver device of a communication station on the ground.
The known solutions for determining the stop criterion do not always provide an ideal tradeoff between decoding performance, data rate, implementation complexity and energy consumption.
The simple solution consisting in checking at the end of an iteration whether all of the partial syndromes calculated respectively for the different layers are zero leads to relatively high error rate floors (there is no guarantee that the partial syndromes are met by the same codeword because the a posteriori estimation variables γn are updated after the processing of each layer).
The solution consisting in checking whether the partial syndromes of the different layers are all zero for a predetermined number of successive iterations is not always satisfactory, because this could lead to a latency related to an increase in the number of iterations necessary to meet the stop criterion.
A particular solution for evaluating the stop criterion is proposed hereinafter. In this solution, the evaluation 130 of the stop criterion comprises checking, over a plurality of successive iterations, whether the number of iterations for which all the partial syndromes are zero, minus the number of iterations for which at least one of the partial syndromes is non-zero, is greater than or equal to a predetermined stop threshold. If so, the stop criterion may be considered to be met.
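A minimal sketch of this criterion, assuming the decoder reports at the end of each iteration whether all partial syndromes were zero (the class interface is an assumption made for the illustration):

```python
class StopCriterion:
    """Stop when (#iterations with all partial syndromes zero) minus
    (#iterations with at least one non-zero partial syndrome) reaches
    a predetermined stop threshold."""

    def __init__(self, stop_threshold):
        self.stop_threshold = stop_threshold
        self.counter = 0

    def update(self, all_partial_syndromes_zero):
        # Increment when every partial syndrome of the iteration is zero,
        # decrement otherwise (the counter filters oscillations at low SNR).
        self.counter += 1 if all_partial_syndromes_zero else -1
        return self.counter >= self.stop_threshold
```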
This solution has a relatively low complexity and is therefore easy to implement. Furthermore, it allows reaching lower error rate floors in comparison with a conventional counter-based approach. Indeed, this solution allows filtering the oscillations of the counter which can be observed for low signal-to-noise ratios (SNR, standing for “Signal to Noise Ratio”). This solution offers a better robustness for low coding rates while providing an equivalent convergence speed in the other cases. This solution also allows increasing the convergence speed for low SNRs because it is no longer necessary to oversize the stop threshold in order to obtain an error rate floor similar to that obtained with a conventional counter-based approach.
It should be noted that other conditions could be added to the evaluation 130 of the stop criterion, like for example the condition according to which a minimum number of iterations has already been performed.
Of course, there are other ways of implementing this solution. For example, it is possible to initialize the counter at a predetermined non-zero value, decrement the counter when all the partial syndromes are zero, and increment the counter if at least one of the partial syndromes is non-zero and if the counter is strictly less than its initialization value. The stop criterion is then met when the counter becomes equal to zero.
When the decoder supports different coding rates, the value of the stop threshold may be predetermined according to the coding rate used (for example, the lower the coding rate, the higher the stop threshold).
It should be noted that the evaluation 130 of the stop criterion is not necessarily performed at each iteration. For example, the evaluation 130 of the stop criterion may be performed periodically after a given number of successive iterations.
The methods used in the prior art to calculate the parity check messages βm,n do not always provide an ideal tradeoff between decoding performance, data rate and implementation complexity.
This is why a new method is proposed. This new method is referred to hereinafter as “Adapted Offset Min-Sum” or AOMS (an adaptation of the “Offset Min-Sum” method).
In the example illustrated in
The number of possible correction values depends on the number of thresholds (and therefore on the number of comparisons) used. For example, with one single comparison with respect to a threshold, the correction value is selected between two possible values; with two comparisons with respect to two distinct thresholds, the correction value is selected between three possible values; etc.
In particular modes of implementation, the first smallest value Min1 and the second smallest value Min2 are determined from among all the absolute values of the variable messages αn′,m associated with the parity check node CNm. The index n′ then belongs to the set N(m) of indices i of the variable nodes VNi connected to the parity check node CNm.
In particular modes of implementation, the first smallest value Min1 and the second smallest value Min2 are determined from among the absolute values of the variable messages αn′,m associated with the parity check node CNm while excluding the variable message αn,m associated with the variable node VNn. In other words, Min1 and Min2 are the two smallest values among the absolute values of the variable messages αn′,m associated with the parity check node CNm, with the index n′ belonging to (N(m)-n). The set (N(m)-n) is the set N(m) with the index n excluded. Such arrangements allow obtaining better performances at the expense of a slightly more complex implementation.
For example, the absolute value of the parity check message βm,n is calculated by subtracting the selected correction value from the first smallest value Min1. If the obtained value is negative, the value of the parity check message βm,n is set to zero.
To simplify the implementation, it is advantageous, for at least one of the comparisons, that the associated threshold be defined such that the decimal representation of its value S can be written in the form:
S = 2^NLSB − 1
where NLSB is a positive integer (the threshold is then defined by a binary value in which only the NLSB least significant bits are equal to ‘1’). Indeed, in this case, the comparison may be carried out by an “OR” gate or by a “NOR” gate taking as input the most significant bits beyond the NLSB least significant bits of the value of the difference between Min2 and Min1.
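A small sketch of this equivalence in Python (the values used in the example are illustrative): checking that the difference is lower than or equal to 2^NLSB − 1 amounts to checking that all the bits above the NLSB least significant bits are zero.

```python
def below_or_equal_threshold(diff, n_lsb):
    """Check whether diff <= 2**n_lsb - 1 by looking only at the bits above
    the n_lsb least significant bits (diff is assumed non-negative).

    In hardware, this reduces to a NOR (or OR) gate over those upper bits."""
    return (diff >> n_lsb) == 0

# Example with n_lsb = 2, i.e. a threshold of 3:
assert below_or_equal_threshold(3, 2)
assert not below_or_equal_threshold(4, 2)
```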
In the considered example illustrated in
If s2 amounts to ‘0’ and s3 amounts to ‘1’, then the difference between Min2 and Min1 is less than or equal to the first threshold. If s2 and s3 amount to ‘1’, then the difference between Min2 and Min1 is strictly greater than the first threshold and less than or equal to the second threshold. If s3 amounts to ‘0’, then the difference between Min2 and Min1 is strictly greater than the second threshold.
An “OR” gate could be used instead of the “NOR” gate 26 by inverting the logic (in this case, s3 amounts to ‘1’ if at least one of the most significant bits is non-zero, and s3 amounts to ‘0’ if all of the most significant bits are zero).
The multiplexer 27 is configured to determine a correction value selected from among three possible values a1, a2 and a3 according to the results s2, s3 of the comparisons. For example, the correction value a1 is selected if the difference between Min2 and Min1 is less than or equal to the first threshold, the correction value a2 is selected if the difference between Min2 and Min1 is strictly greater than the first threshold and less than or equal to the second threshold, and otherwise the correction value a3 is selected.
When the decoder supports different coding rates, the different possible values for the correction value and/or the different threshold values may be predetermined according to the coding rate used. For example, for a coding rate lower than or equal to ½, we use a1=3, a2=2 and a3=0. For a coding rate higher than ½, we use a1=a2=1 and a3=0.
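As an illustrative sketch of this calculation, the following Python function uses the correction values a1=3, a2=2 and a3=0 given above for a coding rate lower than or equal to ½; the threshold values are assumptions chosen for the example (of the form 2^NLSB − 1):

```python
def aoms_magnitude(min1, min2, t1=1, t2=3, a1=3, a2=2, a3=0):
    """Amplitude of a parity check message with the Adapted Offset Min-Sum.

    min1, min2 : first and second smallest absolute values of the variable
                 messages considered for the check node
    t1, t2     : first and second thresholds (illustrative values)
    a1, a2, a3 : possible correction values (here the example values given
                 for a coding rate lower than or equal to 1/2)
    """
    diff = min2 - min1
    if diff <= t1:
        correction = a1
    elif diff <= t2:
        correction = a2
    else:
        correction = a3
    # Subtract the correction value from Min1; a negative result is set to zero.
    return max(min1 - correction, 0)
```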
For low coding rates, using two thresholds and three correction values (instead of one threshold and two correction values) significantly improves the decoding performance.
The choice of the correction values may also depend on the quantization strategy of the a priori estimation values (LLRs).
The graph illustrated in
One can notice on this graph that the performances obtained with the AOMS method are relatively close to those obtained with the A-min* method. For a frame error rate in the range of 10⁻⁶, the A-min* method has a gain lower than 0.05 dB in terms of SNR in comparison with the AOMS method; in turn, the AOMS method has a gain greater than 0.1 dB in terms of SNR in comparison with the OMS method.
The AOMS method for the calculation of the parity check messages βm,n may be used in combination with the specific stop criterion, an example of implementation of which is described with reference to
Nonetheless, it should be noted that the AOMS method may also be implemented independently of the stop criterion. Also, it is not necessary for the LDPC decoding process in which the AOMS method is used to be based on a layered scheduling. For example, it is possible to use the AOMS method for the calculation 112 of the parity check messages βm,n in an LDPC decoding by flooding. For example,
When the data used in the decoding process are quantized with a fixed-point representation and when relatively low coding rates are used, the saturation of the data (saturation of the values βm,n and/or γn when they reach a predetermined maximum value) can lead to error rate floors (“quantization floors”). Hence, the quantization format of the data should be carefully selected in order to lower the quantization floor while limiting the hardware implementation complexity and while preserving the decoding performances.
The graph illustrated in
One can observe more or less severe quantization floors depending on the method and on the quantization format used. In particular, with the L6-G9-B7 quantization, the AOMS method has a relatively high error rate floor in comparison with the OMS method for low coding rates.
This is why an “on-the-fly quantization” method is proposed hereinafter.
As illustrated in
For example, the evaluation 151 of the criterion may be performed by checking whether one or more of the following conditions is/are met:
The different saturation thresholds may have identical or different values.
When the criterion is met, the parity check messages βm,n and a posteriori estimation variables γn are “scaled”. It should be noted that the scaling may be performed before the beginning of the next iteration (in this case, it is performed directly during step 152 illustrated in
A scaling 152 corresponds to assigning to a value the integer of the same sign whose absolute value is the closest integer greater than the absolute value of the value divided by two. Such arrangements guarantee that the scaling does not make a parity check message βm,n or an a posteriori estimation variable γn converge to zero, and therefore preserves its sign information. In particular, in the particular cases of 1 and −1, the rounding gives respectively 1 and −1 and therefore preserves the sign of the considered metric. This is particularly important for the performance of decoding irregular codes (PBRL, IRA, ARA, etc.) with weakly connected and/or punctured nodes.
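A minimal sketch of this scaling, interpreting the definition above as a sign-preserving rounding toward the upper integer (ceiling of the halved absolute value):

```python
def scale(value):
    """Assign to a value the integer of the same sign whose absolute value is
    the ceiling of |value| / 2.

    In particular, 1 and -1 are mapped to 1 and -1, so the sign information
    of weakly reliable metrics is preserved by the scaling."""
    if value == 0:
        return 0
    magnitude = (abs(value) + 1) // 2   # ceiling of |value| / 2
    return magnitude if value > 0 else -magnitude

assert scale(1) == 1 and scale(-1) == -1
assert scale(7) == 4 and scale(-8) == -4
```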
Optionally, when the AOMS method or another OMS or NMS type method is used, the correction or normalization values could also be scaled.
For this purpose, the processing unit 16 of the decoder 10 implements a saturation counter and a scaling module. Many conventional LDPC decoders already implement saturation modules. Thus, the cost related to the implementation of a counter of the number of saturations is relatively low. The cost related to the implementation of a scaling module is higher but it remains largely acceptable.
It should be noted that an a posteriori estimation variable γn could be used several times during the same iteration. Nonetheless, the scaling of this variable γn should be performed only once when a scaling is necessary (at the first reading of the variable γn during the considered iteration). For this purpose, for each variable γn, it is possible to memorize an information bit indicating whether it is the first reading of said variable γn for the current iteration.
When the decoder supports different coding rates, the saturation threshold may advantageously be predetermined according to the coding rate used.
As illustrated in
When the decoder supports different coding rates, the predetermined number of successive iterations after which an additional scaling should be done may advantageously be predetermined according to the coding rate used.
The graph illustrated in
The scaling of the data may be used in combination with the AOMS method and with the specific stop criterion set out before. In particular, the scaling of the data, the AOMS method and the specific stop criterion may be implemented in combination in the LDPC decoding method 100 described in
When the data scaling is used in combination with the stop criterion, the evaluation of the saturation criterion in step 151 or in step 153 may be performed before (as illustrated in
Nonetheless, it should be noted that the scaling method may also be implemented independently of the AOMS method and/or independently of the stop criterion.
Also, it is not necessary for the LDPC decoding process in which the scaling method is used to be based on a layered scheduling. For example, it is possible to use the data scaling method in an LDPC decoding process by flooding. For example,
The previous description clearly illustrates that, by its different features and their advantages, the present invention achieves the set objectives. In particular, the different proposed solutions (the specific stop criterion, the “Adapted Offset Min-Sum” calculation method, and the data scaling method) allow obtaining new tradeoffs in terms of error rates, implementation complexity, data rate, and energy consumption.
Advantageously, the stop criterion allows improving the robustness of the decoding at low coding rates. It also allows reducing the number of iterations necessary to decode a codeword, which allows increasing the average decoding throughput and reducing the latency and the energy consumption of the decoder. The AOMS method allows improving the decoding performance of a codeword in comparison with a conventional OMS-type method. In turn, the data scaling method allows lowering the error floor at particularly low frame error rates. The complexity introduced by these different methods in the decoding process remains largely acceptable with regard to the improvements conferred thereby.
Each of the different methods may be used alone or in combination with the others. The stop criterion is specific to a layered scheduling, but the AOMS method and the data scaling method can be applied both to a layered scheduling and to a flooding scheduling.
The invention has been described in the context of an LDPC decoder for space communications, and more particularly for high-data-rate optical communications. Nonetheless, nothing prevents applying all or part of the methods proposed by the invention to LDPC decoders intended for other applications.
Number | Date | Country | Kind
---|---|---|---
FR2204540 | May 2022 | FR | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/EP2023/057488 | 3/23/2023 | WO |