Examples relate to a Low-Density Parity-Check Code, LDPC, decoder apparatus or device, to an LDPC decoder system and to corresponding methods and computer programs.
Forward error correction codes and systems are being used in various contexts, e.g. in communication systems for transmitting data over a lossy channel, or in memory or storage applications for recovering bit errors or faulty memory or storage circuitry. One technique being used for providing forward error correction is based on so-called “Low-Density Parity-Check Codes” (LDPC), which are codes that are based on a sparse matrix (i.e. a matrix where most of the elements are logical 0s) that can be used to recover a codeword.
Some examples of apparatuses and/or methods will be described in the following by way of example only, and with reference to the accompanying figures, in which
Some examples are now described in more detail with reference to the enclosed figures. However, other possible examples are not limited to the features of these examples described in detail. Other examples may include modifications of the features as well as equivalents and alternatives to the features. Furthermore, the terminology used herein to describe certain examples should not be restrictive of further possible examples.
Throughout the description of the figures same or similar reference numerals refer to same or similar elements and/or features, which may be identical or implemented in a modified form while providing the same or a similar function. The thickness of lines, layers and/or areas in the figures may also be exaggerated for clarification.
When two elements A and B are combined using an ‘or’, this is to be understood as disclosing all possible combinations, i.e. only A, only B as well as A and B, unless expressly defined otherwise in the individual case. As an alternative wording for the same combinations, “at least one of A and B” or “A and/or B” may be used. This applies equivalently to combinations of more than two elements.
If a singular form, such as “a”, “an” and “the” is used and the use of only a single element is not defined as mandatory either explicitly or implicitly, further examples may also use several elements to implement the same function. If a function is described below as implemented using multiple elements, further examples may implement the same function using a single element or a single processing entity. It is further understood that the terms “include”, “including”, “comprise” and/or “comprising”, when used, describe the presence of the specified features, integers, steps, operations, processes, elements, components and/or a group thereof, but do not exclude the presence or addition of one or more other features, integers, steps, operations, processes, elements, components and/or a group thereof.
The following description relates to the LDPC decoder apparatus/device and system and to the corresponding method or methods. Features described in connection with the apparatus/device or system may likewise be applied to the corresponding method, and vice versa.
Various examples of the present disclosure relate to an LDPC decoder apparatus or device, to a system comprising such an LDPC decoder apparatus or device, and to a corresponding method. As has been mentioned before, LDPC codes are used to provide both error detection and error correction for codewords comprising LDPC parity information. Such codewords are, for example, used in communication systems for transmitting information over a lossy channel, or in memory or storage applications, where transmission and/or memory/storage errors can be recovered using such codes. In general, an LDPC decoder takes a codeword as an input, and uses a so-called parity-check matrix (also called H matrix) to calculate a syndrome of the codeword (using a matrix multiplication). The component “low-density” in LDPC refers to the sparseness of the H matrix, in which only a few non-zero elements (e.g. logical ones, or other non-binary values when a non-binary LDPC code is used) are interspersed among zeros. In the Figures shown in later parts of the disclosure, e.g.
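As a concrete illustration of the matrix multiplication mentioned above, a short sketch in Python follows. The toy H matrix (the parity-check matrix of a (7,4) Hamming code, chosen for brevity; a real LDPC H matrix is much larger and much sparser) and the example codeword are illustrative assumptions, not part of the disclosure:

```python
# Hedged sketch: syndrome = H * c^T over GF(2), i.e. each syndrome bit is the
# XOR of the codeword bits selected by one row of the (sparse) H matrix.

def compute_syndrome(H, codeword):
    """Multiply the codeword by the H matrix modulo 2."""
    return [sum(h & c for h, c in zip(row, codeword)) % 2 for row in H]

# Toy parity-check matrix ((7,4) Hamming code, used here only for brevity).
H = [
    [1, 0, 1, 0, 1, 0, 1],
    [0, 1, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

codeword = [1, 0, 1, 1, 0, 1, 0]             # a valid codeword
syndrome = compute_syndrome(H, codeword)     # → [0, 0, 0]: no error detected

noisy = codeword[:]
noisy[4] ^= 1                                # inject a single bit error
error_syndrome = compute_syndrome(H, noisy)  # → [1, 0, 1]: non-zero syndrome
```

A non-zero syndrome indicates that at least one parity-check equation is violated.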
Accordingly, the syndrome may be calculated beforehand, and then passed to the LDPC decoder apparatus. In other words, the syndrome may be initially computed outside the LDPC decoder apparatus. The codeword might not be provided to the LDPC decoder apparatus. In various examples, the syndrome generation circuitry 20 is used to generate the syndrome that is input into the LDPC decoder apparatus. In general, the syndrome generation circuitry may be application-specific circuitry configured to calculate the syndrome based on the codeword using a hardware- or software-implementation of a matrix multiplication using the respective H-matrix. For example, the syndrome generation circuitry may be implemented using application-specific hardware (e.g. with a memory storing a representation of the H matrix), or a general-purpose processor equipped with corresponding software may be used to implement the syndrome generation circuitry. The processing circuitry of the LDPC decoder apparatus is configured to obtain the syndrome of the codeword via the input circuitry, e.g. from the syndrome generation circuitry 20. The processing circuitry might not be configured to obtain the actual codeword, i.e. the processing circuitry might not accept the codeword via the input circuitry.
The processing circuitry is configured to perform LDPC iterative decoding using the obtained syndrome. In general, the LDPC iterative decoding performed by the processing circuitry may be implemented similarly to other systems, with at least one major difference: instead of applying the changes arising during the LDPC iterative decoding to the codeword, the changes are applied to a so-called surrogate codeword, i.e. a bit vector having (generally) the same size as the actual codeword, but which is initialized with all zeros. For example, the surrogate codeword is used by the processing circuitry for the LDPC iterative decoding instead of the codeword. As a result of the LDPC iterative decoding, the surrogate codeword, rather than the corrected codeword, represents the changes (e.g. bit flips) to be applied to the actual codeword. In other words, after the LDPC iterative decoding is completed, the surrogate codeword represents the difference between the corrected codeword and the codeword, i.e. the changes to be applied to the codeword. In general, the concept may be applied to various hard decoding or soft decoding approaches. For example, the LDPC iterative decoding may be performed using one of a belief propagation algorithm, a sum-product message-passing algorithm, a min-sum algorithm, and a bit-flipping algorithm. In the non-binary LDPC case, the iterative decoding may be performed using one of a belief propagation algorithm, a min-max algorithm, an extended min-sum algorithm, a trellis min-sum algorithm, a symbol flipping algorithm, or a non-binary stochastic decoder.
The processing circuitry is configured to record the changes to be applied to the codeword due to the LDPC iterative decoding by storing the surrogate codeword in a memory structure, e.g. within flip-flops/random access memory (RAM) of the processing circuitry. As shown in
In addition to the split between the codeword and the surrogate codeword (for the variable bit nodes), the same principle may be applied to the check nodes. For example, the obtained syndrome may be stored in a further memory structure (e.g. flip-flops/RAM), and the changes to be applied to the syndrome may be stored separately from the syndrome within the further memory structure (e.g. as shown in
Similar to the storage of the surrogate codeword, one of two approaches may be chosen: all of the bits of the syndrome may be duplicated, or only the bits that have been changed may be stored. For example, the processing circuitry may be configured to store, for each bit of the syndrome, a further bit representing whether a change is to be applied to the respective bit of the syndrome. In other words, the further memory structure may comprise, for each bit of the syndrome, another bit for recording the changes to be applied to the syndrome. Alternatively, only the changes might be stored in the memory (in addition to the static syndrome). For example, the processing circuitry may be configured to store, for each bit of the syndrome that is changed due to the LDPC iterative decoding (e.g. only for bits of the syndrome that are changed during the LDPC iterative decoding), a further bit indicating that a change is to be applied to the respective bit of the syndrome. Again, a list structure, an array, a vector, or a FIFO (First In, First Out) structure may be used to record the changes. In other words, the processing circuitry may be configured to store the further bit indicating that a change is to be applied to the respective bit of the syndrome using a list structure.
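A minimal sketch of the two storage approaches follows; the class and member names are hypothetical, chosen only for illustration (the `shadow` vector mirrors the one-further-bit-per-syndrome-bit approach, the `changed` FIFO the list-based approach):

```python
from collections import deque

class SyndromeStore:
    """Hedged sketch: the obtained syndrome stays static; changes are recorded
    either as one extra bit per syndrome bit, or as a FIFO of changed positions."""

    def __init__(self, syndrome, use_fifo=False):
        self.static = list(syndrome)       # the obtained syndrome, never mutated
        self.use_fifo = use_fifo
        self.shadow = [0] * len(syndrome)  # approach 1: a further bit per syndrome bit
        self.changed = deque()             # approach 2: only changed positions

    def flip(self, i):
        """Record that a change is to be applied to syndrome bit i."""
        if self.use_fifo:
            self.changed.append(i)
        else:
            self.shadow[i] ^= 1

    def current(self):
        """Effective syndrome = static syndrome combined with recorded changes."""
        cur = list(self.static)
        positions = (self.changed if self.use_fifo
                     else [j for j, d in enumerate(self.shadow) if d])
        for i in positions:
            cur[i] ^= 1
        return cur
```

Either variant leaves the obtained syndrome itself untouched; only the recorded changes differ in representation.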
The processing circuitry is configured to output the information representing the changes to be applied to the codeword via the output circuitry. For example, the processing circuitry may be configured to output the (entire) surrogate codeword (representing the changes to be applied to the codeword), or information about single bits to be changed in the codeword.
This information can subsequently be applied to the (actual) codeword to obtain the corrected codeword. The optional combination circuitry is configured to combine the output of the LDPC decoder apparatus, e.g. the surrogate codeword, with the codeword. For example, the combination circuitry may be configured to combine the information representing the changes to be applied to the codeword (e.g. the surrogate codeword) with the codeword using an XOR combination. The LDPC decoder system may comprise memory for storing the codeword in the interim. The combination circuitry is configured to output the combination of the surrogate codeword and the codeword, i.e. the corrected codeword.
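The XOR combination can be sketched as follows; the function name and the example vectors are hypothetical, for illustration only:

```python
def apply_corrections(codeword, surrogate):
    """Corrected codeword = stored codeword XOR surrogate codeword (bit flips)."""
    return [c ^ s for c, s in zip(codeword, surrogate)]

# The surrogate codeword marks the positions to flip in the stored codeword.
stored = [1, 0, 1, 1, 0, 1, 0]
surrogate = [0, 0, 0, 0, 1, 0, 0]                  # one recorded bit flip at position 4
corrected = apply_corrections(stored, surrogate)   # → [1, 0, 1, 1, 1, 1, 0]
```

Because XOR is its own inverse, applying the same surrogate twice restores the original codeword, which is why the combination can be deferred until after decoding completes.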
In various examples, the processing circuitry or means for processing 14 may be implemented using one or more processing units, one or more processing devices, any means for processing, such as a processor, a computer or a programmable hardware component being operable with accordingly adapted software. In other words, the described function of the processing circuitry or means for processing 14 may as well be implemented in software, which is then executed on one or more programmable hardware components. Such hardware components may comprise a general-purpose processor, a Digital Signal Processor (DSP), a micro-controller, etc. In some examples, the processing circuitry may be implemented using a field-programmable gate-array (FPGA). In various examples, however, the processing circuitry may be implemented using application-specific integrated circuitry, with logical gates and memory cells that are purpose-built for providing the functionality of the processing circuitry.
An input, e.g. the input circuitry or input means 12 may correspond to an interface for receiving information, which may be in digital (bit) values according to a specified code, within a module, between modules or between modules of different entities. An output, e.g. the output circuitry or output means 16 may correspond to an interface for transmitting information, which may be represented by digital (bit) values according to a specified code or protocol, within a module, between modules, or between modules of different entities.
More details and aspects of the LDPC decoder apparatus, device, system, and method are mentioned in connection with the proposed concept or one or more examples described above or below (e.g.
As has been mentioned before, LDPC-based decoding may be used in a variety of contexts. Therefore, in the following, a communication device, such as a wireless communication device or a wireline modem, a memory device and a storage device are introduced.
More details and aspects of the communication device are introduced in connection with the proposed concept or one or more examples described above or below (e.g.
In some examples, the memory device may be a memory device for implementing two-level memory (2LM). In some examples, where the memory device is configured as a 2LM system, the memory device 300 may serve as main memory for a computing device. For these examples, memory circuitry 310 may include the two levels of memory including cached subsets of system disk level storage. In this configuration, the main memory may include “near memory” arranged to include volatile types of memory and “far memory” arranged to include volatile or non-volatile types of memory. The far memory may include volatile or non-volatile memory that may be larger and possibly slower than the volatile memory included in the near memory. The far memory may be presented as “main memory” to an operating system (OS) for the computing device while the near memory is a cache for the far memory that is transparent to the OS. The management of the 2LM system may be done by a combination of logic and modules executed via processing circuitry (e.g., a CPU) of the computing device. Near memory may be coupled to the processing circuitry via high bandwidth, low latency means for efficient processing. Far memory may be coupled to the processing circuitry via low bandwidth, high latency means.
In some examples, the memory circuitry 310 may include non-volatile and/or volatile types of memory. Non-volatile types of memory may include, but are not limited to, 3-dimensional cross-point memory, flash memory, ferroelectric memory, silicon-oxide-nitride-oxide-silicon (SONOS) memory, polymer memory such as ferroelectric polymer memory, nanowire memory, ferroelectric transistor random access memory (FeTRAM or FeRAM), ovonic memory, or electrically erasable programmable read-only memory (EEPROM). Volatile types of memory may include, but are not limited to, dynamic random access memory (DRAM) or static RAM (SRAM).
More details and aspects of the memory device are introduced in connection with the proposed concept or one or more examples described above or below (e.g.
The memory device may comprise one or more additional optional features corresponding to one or more aspects of the proposed concept or one or more examples described above or below.
More details and aspects of the storage device are introduced in connection with the proposed concept or one or more examples described above or below (e.g.
Various examples of the present disclosure relate to a syndrome input to an LDPC decoder.
A low-density parity-check code (LDPC) H matrix is sparse, so most of the elements are 0s, denoting lack of connections.
To begin decoding, in various systems, the codeword bits may be loaded into the variable bit nodes, which outnumber the check nodes. Then the check node values may be calculated by multiplying the values of the variable bit nodes by the H matrix, i.e. by computing the syndrome. In different types of LDPC decoding (e.g. min-sum, bit flipping, belief propagation), messages are passed between the variable bit nodes and check nodes over multiple iterations until a codeword is found, such that the variable bit nodes contain the corrected codeword and the check nodes contain all 0s, indicating a zero syndrome value. Memory may be required for storing the variable bit nodes and the check nodes. An introduction to LDPC codes can be found in Sarah S. Johnson: “Introducing Low-Density Parity-Check Codes”
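The Tanner-graph view used by such message-passing decoders can be sketched as an adjacency structure derived from the H matrix; the small regular matrix below is an illustrative assumption:

```python
# Hedged sketch: each H-matrix row is a check node, each column a variable bit
# node, and each non-zero entry is an edge of the Tanner graph.
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 1],
]

# Variable nodes connected to each check node (messages flow along these edges).
vars_of_check = [[j for j, h in enumerate(row) if h] for row in H]
# Check nodes connected to each variable node.
checks_of_var = [[i for i, row in enumerate(H) if row[j]] for j in range(len(H[0]))]
```

Here there are six variable nodes and four check nodes, matching the observation above that the variable nodes outnumber the check nodes.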
In 25G PON (Passive Optical Network), codewords may be required to complete decoding within a certain time frame, and they may be queued up behind a codeword that takes longer to decode. In order to use a shared ECC (Error Correction Code) decoder, the received data may be queued up, which may cost storage elements and latency.
In some approaches, the full LDPC codeword is loaded into the LDPC decoder. There is increased latency due to inputting the codeword over multiple clock cycles. The LDPC decoder may require storage space to store a copy of the original codeword.
First, various examples of the present disclosure may reduce the hardware and/or storage costs for the decoding of linear block codes. Second, multiple clock cycles may be required to input a codeword into an LDPC decoder. This can be a significant source of latency, especially when the number of bit errors is low. Various examples of the present disclosure may reduce the input latency to the LDPC decoder. Third, memory may be required to store the input codeword inside the LDPC decoder. Various examples of the present disclosure may reduce the amount of memory required inside the LDPC decoder.
In various examples, e.g. for hard-decision decoding, only the syndrome might be passed to the LDPC decoder (i.e. the LDPC decoder apparatus, device, or method). Instead of the codeword bits, the much smaller set of syndrome bits may be input and stored in flip-flops. The input codeword bits might not be stored in flip-flops, as the original codeword bits may all be assumed to be zero. In another example, even the intermediate codeword bits might not be stored inside the bit flipping decoder.
There are typically more codeword bits than syndrome bits. Syndrome bits may be passed through multiple pipeline stages. This may reduce the input latency into the LDPC decoder, as the syndrome bits can be input in one clock cycle. It may also reduce congestion in the core decoding logic, as all of the codeword input values are hard-wired to zero. When multiple codewords are queued up for decoding, the queue size can be smaller.
In various examples, the input ports to the min-sum decoder or bit-flipping decoder may be the size of the syndrome, which is also the number of parity bits. An SRAM approximately the same size as the LDPC codeword may be omitted from the LDPC decoder. The output from the LDPC decoder may be XORed (combined with an Exclusive Or operation) with the full codeword, which may be stored in a FIFO (First In, First Out) or SRAM (static RAM). Various examples may comprise a standalone syndrome calculation block, e.g. syndrome generation circuitry, located prior to the decoder, e.g. the min-sum decoder or bit-flipping decoder; this block may comprise a large number of XOR gates.
For the bit flipping decoder with a changed memory structure, a syndrome calculator may be used that calculates a syndrome and feeds it into the bit flipping decoder. There might be no syndrome calculator inside the bit flipping decoder. There might only be enough flip-flops in the bit flipping decoder to store the check nodes, but not enough flip-flops to store the variable nodes. If the decoder comprises a RAM (Random Access Memory) that stores codeword bits (variable nodes), its outputs might not be fed into the syndrome flip-flops. In various examples, when an LDPC codeword is decoded, the input codeword and the bit flips may be handled separately. Since LDPC codes are linear codes, the syndrome may depend (only) on the error vector (noise vector) and not on the input data (noise-free codeword). Throughout the iterative process of decoding and at the end of decoding, the partially and fully corrected codeword may be the sum of the input codeword and the bit flips.
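The linearity argument can be made explicit: for a noise-free codeword $c$ (so that $Hc^{T} = 0$) and an error vector $e$, the syndrome of the received word $c \oplus e$ is

```latex
s = H(c \oplus e)^{T} = Hc^{T} \oplus He^{T} = 0 \oplus He^{T} = He^{T},
```

so the syndrome depends only on the error vector and not on the transmitted data.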
It may be noted that going from a noisy codeword to a corrected codeword may be considered equivalent to going from the zero codeword to an error vector. This is shown in the
Various examples take advantage of this by calculating the syndrome for the input codeword first, and then continuing decoding without the input codeword, since its contribution is represented by the input codeword syndrome, and that may be sufficient to find the bit flip locations. The concept that the syndrome is sufficient to fully represent the error locations may be used in Hamming, SECDED (Single Error Correction, Double Error Detection), BCH (Bose-Chaudhuri-Hocquenghem) and Reed-Solomon decoding. A use of the concept for LDPC decoders might not be well-known, and the adaptation of the concept may require some modified memory structures.
Two separate blocks may be used: one to calculate the input codeword syndrome (e.g. the syndrome generation circuitry 20), and one to perform LDPC decoding (e.g. the LDPC decoder apparatus 10).
The decoder itself may be changed. The variable nodes might not need storage space to store the input intrinsic information, as all bits (of the surrogate codeword) are logical 0s. The input may go directly to the input codeword syndrome storage in the check nodes. The check node storage may grow to store an additional set of syndrome sign bits for the input codeword syndrome. In other words, extra flip-flops may be added to each check node, e.g. for both the min-sum decoder and the bit flipping decoder.
It may be noted that the syndrome input to the LDPC decoder also works with soft decoding with different LLRs (Log-Likelihood Ratios). In that case, the sign bits for each variable node may not need to be stored, and extra sign bits may be stored for each check node. The reliability or soft information may be input to the decoder as before.
In the following, an approach is introduced in which the memory structure is changed even further. Some low-complexity LDPC decoders include majority logic decoders and bit-flipping decoders. These flip bits in the LDPC codeword when a variable node, which is associated with a codeword bit, is determined to be in error. The LDPC code can be represented by a Tanner graph, where there are variable nodes (c0 to c9 in
After the bits are flipped following certain rules, the parity-check equations may be recalculated to verify whether all of the equations pass. When all of the parity-check equations pass, decoding may be successful. Computing the parity-check equations is also called a syndrome check. When all of the equations pass, the check nodes all have the value 0. The aggregation of the check nodes is called the syndrome. So both of these decoding methods require 1) one syndrome check, 2) logic to determine the status of a variable node based on the check nodes, and 3) a second syndrome check.
Another approach taken in the present disclosure is to decode the LDPC codeword by storing the check node bits, which correspond to the syndrome, and saving bit flips as a vector of locations. A main difference between this and regular bit flipping algorithms is how memory is used. For example, in regular bit flipping algorithms, the current value of each variable node may be stored in flip-flops or RAM. In a proposed algorithm, the syndrome may be calculated first, and the received codeword bits may be stored elsewhere to be corrected later. While much of the combinational logic to identify which bits should flip remains the same, the associated syndrome bits may be flipped, while the new bit value is not immediately stored in a variable node. Instead, an error location is added to a vector of error locations (i.e. to record the changes to be applied to the codeword). This may reduce the memory directly accessed by the core bit flipping algorithm to the number of check nodes, which is often ten times smaller than the number of variable nodes. Afterwards, the error locations stored in the error location vector can be used to correct the codeword stored elsewhere. Alternatively, the (surrogate) codeword can be modified as each error location is found, but the codeword might not be an input to the bit flipping algorithm. In regular bit flipping algorithms, the updated codeword may be used to recalculate the syndrome, but that might not be necessary in the proposed implementation. The algorithm is as follows and illustrated in
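A sketch of this modified bit-flipping loop in Python follows. It is a hedged illustration under simplifying assumptions (a toy regular H matrix, a plain majority rule), not the exact apparatus: the core loop reads and writes only the check-node (syndrome) bits and appends to the error-location vector; the received codeword is not an input.

```python
def bit_flip_decode(H, syndrome, max_iters=20):
    """Return the recorded error locations, or None if decoding fails."""
    n = len(H[0])
    # Tanner-graph adjacency: the check nodes each variable node participates in.
    checks_of = [[r for r, row in enumerate(H) if row[j]] for j in range(n)]
    synd = list(syndrome)     # working check-node bits (the only decoder state)
    error_locations = []      # vector of recorded bit flips (the surrogate codeword)

    for _ in range(max_iters):
        if not any(synd):     # all parity-check equations pass: decoding done
            return error_locations
        flipped = False
        for j in range(n):
            unsatisfied = sum(synd[r] for r in checks_of[j])
            # Majority rule: flip when most of this bit's checks are unsatisfied.
            if 2 * unsatisfied > len(checks_of[j]):
                for r in checks_of[j]:
                    synd[r] ^= 1            # update only the syndrome bits
                error_locations.append(j)   # record the flip, not the codeword
                flipped = True
        if not flipped:
            break             # no bit qualifies: decoder is stuck
    return error_locations if not any(synd) else None

# Toy regular parity-check matrix (every variable node is in two checks).
H = [
    [1, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1],
    [0, 0, 1, 1, 0, 1],
]
flips = bit_flip_decode(H, [0, 1, 0, 1])   # syndrome of a single error at bit 2
```

Applying the recorded flips (here, flipping bit 2) to the codeword stored elsewhere then yields the corrected codeword, as described above.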
In one implementation, a rotating check node register may be used.
The aspects and features described in relation to a particular one of the previous examples may also be combined with one or more of the further examples to replace an identical or similar feature of that further example or to additionally introduce the features into the further example.
Example 1 relates to a low-density parity-check code, LDPC, decoder apparatus (10), comprising input circuitry (12) and processing circuitry (14), the processing circuitry being configured to obtain a syndrome of a codeword via the input circuitry, perform LDPC iterative decoding using the obtained syndrome, wherein the changes to be applied to the codeword due to the LDPC iterative decoding are recorded by applying the changes to a surrogate codeword, and record changes to be applied to the codeword due to the LDPC iterative decoding by storing the surrogate codeword in a memory structure.
In Example 2, the subject matter of example 1 or any of the Examples described herein may further include, that the memory structure for recording the changes to be applied to the codeword is used to store each bit of the surrogate codeword.
In Example 3, the subject matter of one of the examples 1 to 2 or any of the Examples described herein may further include, that the memory structure for recording the changes to be applied to the codeword is used to store the bits of the surrogate codeword that are changed due to the LDPC iterative decoding.
In Example 4, the subject matter of example 3 or any of the Examples described herein may further include, that the processing circuitry is configured to store the bits of the surrogate codeword that are changed due to the LDPC iterative decoding within the memory structure using a list structure.
In Example 5, the subject matter of one of the examples 1 to 4 or any of the Examples described herein may further include, that the processing circuitry is configured to record changes to be applied to the syndrome due to the LDPC iterative decoding in a further memory structure.
In Example 6, the subject matter of example 5 or any of the Examples described herein may further include, that the processing circuitry is configured to store the obtained syndrome using the further memory structure, and to record changes to be applied to the syndrome separately from the obtained syndrome within the further memory structure.
In Example 7, the subject matter of example 6 or any of the Examples described herein may further include, that the processing circuitry is configured to store, for each bit of the syndrome, a further bit representing whether a change is to be applied to the respective bit of the syndrome.
In Example 8, the subject matter of example 6 or any of the Examples described herein may further include, that the processing circuitry is configured to store, for each bit of the syndrome that is changed due to the LDPC iterative decoding, a further bit indicating that a change is to be applied to the respective bit of the syndrome.
In Example 9, the subject matter of example 8 or any of the Examples described herein may further include, that the processing circuitry is configured to store the further bit indicating that a change is to be applied to the respective bit of the syndrome using one of a list structure, an array, a vector or a FIFO structure.
In Example 10, the subject matter of one of the examples 1 to 9 or any of the Examples described herein may further include, that the surrogate codeword is used by the processing circuitry for the LDPC iterative decoding instead of the codeword.
In Example 11, the subject matter of one of the examples 1 to 10 or any of the Examples described herein may further include, that the LDPC iterative decoding is performed using one of a belief propagation algorithm, a sum-product message-passing algorithm, a min-sum algorithm, a bit-flipping algorithm, a min-max algorithm, an extended min-sum algorithm, a trellis min-sum algorithm, a symbol flipping algorithm, or a non-binary stochastic decoder.
In Example 12, the subject matter of one of the examples 1 to 11 or any of the Examples described herein may further include, that the surrogate codeword is initialized with all zeros during an initialization of the LDPC iterative decoding.
In Example 13, the subject matter of one of the examples 1 to 12 or any of the Examples described herein may further include, that the LDPC decoder apparatus further comprises output circuitry, wherein the processing circuitry is configured to output information representing the changes to be applied to the codeword via the output circuitry.
Example 14 relates to a low-density parity-check code, LDPC, decoder system (100) comprising an LDPC decoder apparatus (10) according to one of the examples 1 to 13. The LDPC decoder system comprises syndrome generation circuitry (20) configured to generate a syndrome based on a codeword, and to provide the syndrome to the LDPC decoder apparatus. The LDPC decoder system comprises combination circuitry (30) configured to combine an output of the LDPC decoder apparatus with the codeword, and to output the combination.
In Example 15, the subject matter of example 14 or any of the Examples described herein may further include, that the LDPC decoder system further comprises receiver circuitry (210), wherein the LDPC decoder system is configured to decode codewords received via the receiver circuitry.
In Example 16, the subject matter of example 15 or any of the Examples described herein may further include, that the LDPC decoder system is a communication device for communicating via a passive optical network.
Example 17 relates to a communication device (200) comprising receiver circuitry (210) and a low-density parity-check code, LDPC, decoder system (100) according to example 14, wherein the LDPC decoder system is configured to decode codewords received via the receiver circuitry.
In Example 18, the subject matter of example 17 or any of the Examples described herein may further include, that the communication device is a communication device for communicating via a passive optical network.
Example 19 relates to a memory device (300) comprising memory circuitry (310) and a low-density parity-check code, LDPC, decoder system (100) according to example 14, wherein the LDPC decoder system is configured to decode codewords obtained from the memory circuitry.
Example 20 relates to a storage device (400) comprising storage circuitry (410) and a low-density parity-check code, LDPC, decoder system (100) according to example 14, wherein the LDPC decoder system is configured to decode codewords obtained from the storage circuitry.
Example 21 relates to a low-density parity-check code, LDPC, decoder device (10), comprising input means (12) and means for processing (14), the means for processing being configured to obtain a syndrome of a codeword via the input means, perform LDPC iterative decoding using the obtained syndrome, wherein the changes to be applied to the codeword due to the LDPC iterative decoding are recorded by applying the changes to a surrogate codeword, and record changes to be applied to the codeword due to the LDPC iterative decoding by storing the surrogate codeword in a memory structure.
In Example 22, the subject matter of example 21 or any of the Examples described herein may further include, that the memory structure for recording the changes to be applied to the codeword is used to store each bit of the surrogate codeword.
In Example 23, the subject matter of one of the examples 21 to 22 or any of the Examples described herein may further include, that the memory structure for recording the changes to be applied to the codeword is used to store the bits of the surrogate codeword that are changed due to the LDPC iterative decoding.
In Example 24, the subject matter of example 23 or any of the Examples described herein may further include, that the means for processing is configured to store the bits of the surrogate codeword that are changed due to the LDPC iterative decoding within the memory structure using a list structure.
In Example 25, the subject matter of one of the examples 21 to 24 or any of the Examples described herein may further include, that the means for processing is configured to record changes to be applied to the syndrome due to the LDPC iterative decoding in a further memory structure.
In Example 26, the subject matter of example 25 or any of the Examples described herein may further include, that the means for processing is configured to store the obtained syndrome using the further memory structure, and to record changes to be applied to the syndrome separately from the obtained syndrome within the further memory structure.
In Example 27, the subject matter of example 26 or any of the Examples described herein may further include, that the means for processing is configured to store, for each bit of the syndrome, a further bit representing whether a change is to be applied to the respective bit of the syndrome.
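One way to picture Example 27: the obtained syndrome is stored unchanged, and one further "change" bit per syndrome bit records pending updates within the same further memory structure. The function names below are hypothetical, chosen only to make the sketch self-contained.

```python
import numpy as np

def make_syndrome_store(syndrome):
    """Store the obtained syndrome plus one change bit per syndrome bit."""
    stored = np.array(syndrome, dtype=np.uint8)   # obtained syndrome, kept as-is
    delta = np.zeros_like(stored)                 # recorded changes, separate
    return stored, delta

def record_syndrome_change(delta, check_index):
    # a second change to the same syndrome bit cancels out
    delta[check_index] ^= 1

def effective_syndrome(stored, delta):
    # current syndrome = obtained syndrome XOR recorded changes
    return stored ^ delta
```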
In Example 28, the subject matter of example 26 or any of the Examples described herein may further include, that the means for processing is configured to store, for each bit of the syndrome that is changed due to the LDPC iterative decoding, a further bit indicating that a change is to be applied to the respective bit of the syndrome.
In Example 29, the subject matter of example 28 or any of the Examples described herein may further include, that the means for processing is configured to store the further bit indicating that a change is to be applied to the respective bit of the syndrome using one of a list structure, an array, a vector or a FIFO structure.
In Example 30, the subject matter of one of the examples 21 to 29 or any of the Examples described herein may further include, that the surrogate codeword is used by the means for processing for the LDPC iterative decoding instead of the codeword.
In Example 31, the subject matter of one of the examples 21 to 30 or any of the Examples described herein may further include, that the LDPC iterative decoding is performed using one of a belief propagation algorithm, a sum-product message-passing algorithm, a min-sum algorithm, a bit-flipping algorithm, a min-max algorithm, an extended min-sum algorithm, a trellis min-sum algorithm, a symbol flipping algorithm, or a non-binary stochastic decoder.
In Example 32, the subject matter of one of the examples 21 to 31 or any of the Examples described herein may further include, that the surrogate codeword is initialized with all zeros during an initialization of the LDPC iterative decoding.
In Example 33, the subject matter of one of the examples 21 to 32 or any of the Examples described herein may further include, that the LDPC decoder device further comprises output means, wherein the means for processing is configured to output information representing the changes to be applied to the codeword via the output means.
Example 34 relates to a low-density parity-check code, LDPC, decoder system (100) comprising an LDPC decoder device (10) according to one of the examples 21 to 33. The LDPC decoder system comprises syndrome generation means (20) configured to generate a syndrome based on a codeword, and to provide the syndrome to the LDPC decoder device. The LDPC decoder system comprises combination means (30) configured to combine an output of the LDPC decoder device with the codeword, and to output the combination.
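The three parts of the system in Example 34 (syndrome generation means, decoder device, combination means) can be sketched end to end. All names here are illustrative assumptions, the parity-check matrix is a toy instance, and the decoder device's output is assumed rather than computed; the point is only that the combination means can be a bitwise XOR of the decoder output with the unmodified codeword.

```python
import numpy as np

# toy parity-check matrix (an assumption for illustration)
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

def generate_syndrome(H, codeword):
    # syndrome generation means: s = H * c (mod 2)
    return ((H @ codeword.astype(int)) % 2).astype(np.uint8)

def combine(codeword, decoder_output):
    # combination means: apply the recorded changes by bitwise XOR
    return codeword ^ decoder_output

received = np.array([0, 0, 1, 0, 0, 0, 0], dtype=np.uint8)  # one bit in error
s = generate_syndrome(H, received)
# ... the LDPC decoder device turns s into a surrogate codeword ...
surrogate = np.array([0, 0, 1, 0, 0, 0, 0], dtype=np.uint8)  # assumed decoder output
corrected = combine(received, surrogate)
```

The corrected output then satisfies all parity checks, i.e. its syndrome is all zeros.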
In Example 35, the subject matter of example 34 or any of the Examples described herein may further include, that the LDPC decoder system further comprises means for receiving (210), wherein the LDPC decoder system is configured to decode codewords received via the means for receiving.
In Example 36, the subject matter of example 35 or any of the Examples described herein may further include, that the LDPC decoder system is a communication device for communicating via a passive optical network.
Example 37 relates to a communication device (200) comprising means for receiving (210) and a low-density parity-check code, LDPC, decoder system (100) according to example 34, wherein the LDPC decoder system is configured to decode codewords received via the means for receiving.
In Example 38, the subject matter of example 37 or any of the Examples described herein may further include, that the communication device is a communication device for communicating via a passive optical network.
Example 39 relates to a memory device (300) comprising memory (310) and a low-density parity-check code, LDPC, decoder system (100) according to example 34, wherein the LDPC decoder system is configured to decode codewords obtained from the memory.
Example 40 relates to a storage device (400) comprising storage (410) and a low-density parity-check code, LDPC, decoder system (100) according to example 34, wherein the LDPC decoder system is configured to decode codewords obtained from the storage.
Example 41 relates to a low-density parity-check code, LDPC, decoder method, comprising obtaining (120) a syndrome of a codeword via an input. The method comprises performing LDPC iterative decoding (130) using the obtained syndrome, wherein the changes to be applied to the codeword due to the LDPC iterative decoding are recorded by applying the changes to a surrogate codeword. The method comprises recording changes (140) to be applied to the codeword due to the LDPC iterative decoding by storing the surrogate codeword in a memory structure.
In Example 42, the subject matter of example 41 or any of the Examples described herein may further include, that the memory structure for recording the changes to be applied to the codeword is used to store each bit of the surrogate codeword.
In Example 43, the subject matter of one of the examples 41 to 42 or any of the Examples described herein may further include, that the memory structure for recording the changes to be applied to the codeword is used to store the bits of the surrogate codeword that are changed due to the LDPC iterative decoding.
In Example 44, the subject matter of example 43 or any of the Examples described herein may further include, that the method comprises storing (140) the bits of the surrogate codeword that are changed due to the LDPC iterative decoding within the memory structure using a list structure.
In Example 45, the subject matter of one of the examples 41 to 44 or any of the Examples described herein may further include, that the method comprises recording (150) changes to be applied to the syndrome due to the LDPC iterative decoding in a further memory structure.
In Example 46, the subject matter of example 45 or any of the Examples described herein may further include, that the method comprises storing (122) the obtained syndrome using the further memory structure, and recording (150) changes to be applied to the syndrome separately from the obtained syndrome within the further memory structure.
In Example 47, the subject matter of example 46 or any of the Examples described herein may further include, that the method comprises storing (150), for each bit of the syndrome, a further bit representing whether a change is to be applied to the respective bit of the syndrome.
In Example 48, the subject matter of example 46 or any of the Examples described herein may further include, that the method comprises storing (150), for each bit of the syndrome that is changed due to the LDPC iterative decoding, a further bit indicating that a change is to be applied to the respective bit of the syndrome.
In Example 49, the subject matter of example 48 or any of the Examples described herein may further include, that the method comprises storing (150) the further bit indicating that a change is to be applied to the respective bit of the syndrome using one of a list structure, an array, a vector or a FIFO structure.
In Example 50, the subject matter of one of the examples 41 to 49 or any of the Examples described herein may further include, that the surrogate codeword is used by the method for the LDPC iterative decoding instead of the codeword.
In Example 51, the subject matter of one of the examples 41 to 50 or any of the Examples described herein may further include, that the LDPC iterative decoding is performed using one of a belief propagation algorithm, a sum-product message-passing algorithm, a min-sum algorithm, a bit-flipping algorithm, a min-max algorithm, an extended min-sum algorithm, a trellis min-sum algorithm, a symbol flipping algorithm, or a non-binary stochastic decoder.
In Example 52, the subject matter of one of the examples 41 to 51 or any of the Examples described herein may further include, that the surrogate codeword is initialized with all zeros during an initialization of the LDPC iterative decoding.
In Example 53, the subject matter of one of the examples 41 to 52 or any of the Examples described herein may further include, that the method comprises outputting (160) information representing the changes to be applied to the codeword via an output.
Example 54 relates to a low-density parity-check code, LDPC, decoder method comprising generating (110) a syndrome based on a codeword. The method comprises using (120-160) the LDPC decoder method of one of the examples 41 to 53 with the generated syndrome. The method comprises combining (170) the output of the LDPC decoder method with the codeword. The method comprises outputting (180) the combination.
Example 55 relates to a communication device (200) comprising receiver circuitry (210), the communication device being configured to perform the low-density parity-check code, LDPC, decoder method according to example 54, wherein the LDPC decoder method is used to decode codewords received via the receiver circuitry.
In Example 56, the subject matter of example 55 or any of the Examples described herein may further include, that the communication device is a communication device for communicating via a passive optical network.
Example 57 relates to a memory device (300) comprising memory circuitry (310), the memory device being configured to perform the low-density parity-check code, LDPC, decoder method according to example 54, wherein the LDPC decoder method is used to decode codewords obtained from the memory circuitry.
Example 58 relates to a storage device (400) comprising storage circuitry (410), the storage device being configured to perform the low-density parity-check code, LDPC, decoder method according to example 54, wherein the LDPC decoder method is used to decode codewords obtained from the storage circuitry.
Example 59 relates to a machine-readable storage medium including program code, when executed, to cause a machine to perform the method of one of the examples 41 to 54.
Example 60 relates to a computer program having a program code for performing the method of one of the examples 41 to 54, when the computer program is executed on a computer, a processor, or a programmable hardware component.
Example 61 relates to a machine-readable storage including machine readable instructions, when executed, to implement a method or realize an apparatus as claimed in any pending claim or shown in any example.
Examples may further be or relate to a (computer) program including a program code to execute one or more of the above methods when the program is executed on a computer, processor, or other programmable hardware component. Thus, steps, operations, or processes of different ones of the methods described above may also be executed by programmed computers, processors, or other programmable hardware components. Examples may also cover program storage devices, such as digital data storage media, which are machine-, processor- or computer-readable and encode and/or contain machine-executable, processor-executable or computer-executable programs and instructions. Program storage devices may include or be digital storage devices, magnetic storage media such as magnetic disks and magnetic tapes, hard disk drives, or optically readable digital data storage media, for example. Other examples may also include computers, processors, control units, (field) programmable logic arrays ((F)PLAs), (field) programmable gate arrays ((F)PGAs), graphics processor units (GPU), application-specific integrated circuits (ASICs), integrated circuits (ICs) or system-on-a-chip (SoCs) systems programmed to execute the steps of the methods described above.
It is further understood that the disclosure of several steps, processes, operations or functions disclosed in the description or claims shall not be construed to imply that these operations are necessarily dependent on the order described, unless explicitly stated in the individual case or necessary for technical reasons. Therefore, the previous description does not limit the execution of several steps or functions to a certain order. Furthermore, in further examples, a single step, function, process, or operation may include and/or be broken up into several sub-steps, -functions, -processes or -operations.
If some aspects have been described in relation to a device or system, these aspects should also be understood as a description of the corresponding method. For example, a block, device or functional aspect of the device or system may correspond to a feature, such as a method step, of the corresponding method. Accordingly, aspects described in relation to a method shall also be understood as a description of a corresponding block, a corresponding element, a property or a functional feature of a corresponding device or a corresponding system.
The following claims are hereby incorporated in the detailed description, wherein each claim may stand on its own as a separate example. It should also be noted that although in the claims a dependent claim refers to a particular combination with one or more other claims, other examples may also include a combination of the dependent claim with the subject matter of any other dependent or independent claim. Such combinations are hereby explicitly proposed, unless it is stated in the individual case that a particular combination is not intended. Furthermore, features of a claim should also be included for any other independent claim, even if that claim is not directly defined as dependent on that other independent claim.