Embodiments of the invention relate to devices, systems and methods for encoding and/or decoding error correction codes, and, in particular, product error correction codes.
A fundamental problem in digital communication and data storage is that data transmitted over a noisy channel or storage media may be retrieved with errors. A common solution to this problem is to add redundancy information to the data, referred to as Error Correcting Codes (ECC) or Error Control Codes, to correct the errors and enable reliable communication in the presence of channel interference. According to this technique, the receiver, which only has access to the noisy version of the received signal, may use the added redundancy (provided by the ECC) to correct errors and reconstruct the original version of the information as it was transmitted.
Current demand for increased throughput of communication over various communication media (e.g., satellite, wireless and optical) and increased density of data stored in nonvolatile memory modules (e.g., flash memory) poses growing challenges for error correction systems. At the same time, current system standards are increasing requirements for data fidelity and reliability over these systems (see, e.g., high data fidelity requirements for nonvolatile memory modules or high throughput requirements for optical communication devices). For example, current Flash memory specifications require the block error probability to be less than 10⁻¹¹.
Consequently, there is a growing need in the art for systems and methods to efficiently encode and decode error correction codes to provide relatively high throughput or data density, while maintaining low decoding error rates.
A device, system and method for implementing a two-pass decoder. In a first pass, the two-pass decoder may decode a first dimension of a product code, serving as an erasure "channel" that inserts erasures where decoding fails in the first dimension. In a second pass, the two-pass decoder may decode a second dimension of the product code, serving as an erasure corrector that corrects the erasures inserted in the first pass. In some embodiments, the two-pass decoder may repeat one or more iterations of the first and/or second passes if the product code contains errors after the first iteration of the first and/or second passes. In one embodiment, the two-pass decoder may repeat the first pass if the initial iteration of the first pass failed and the second pass succeeded, for example, to propagate any erasure corrections in the second pass to increase the probability of successfully decoding a first dimension code that failed in the first pass. In some embodiments, the two-pass decoder may repeat the second pass if errors remain after the current iteration of the first pass.
A device, system and method for decoding a product code. The product code may encode codewords by a plurality of first and second dimension error correction codes. For each of a plurality of first dimension codewords, the first dimension codeword may be decoded using a first dimension error correction code and the first dimension codeword may be erased if errors are detected in the decoded first dimension codeword. For each of a plurality of second dimension codewords, the second dimension codeword may be decoded using a second dimension erasure correction code to correct an erasure in the second dimension codeword that was erased in the first dimension decoding.
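The two passes described above can be sketched as follows. This is a minimal illustration, not the claimed implementation: `decode_row` and `correct_column_erasures` are hypothetical stand-ins for whatever first-dimension ECC decoder and second-dimension erasure decoder a given system uses.

```python
def two_pass_decode(rows, decode_row, correct_column_erasures):
    """First pass: row-decode each first-dimension codeword and erase
    every row whose decoding fails.  Second pass: recover the erased
    rows with the second-dimension (column) erasure code."""
    decoded = []
    erased = []                      # indices of rows erased in the first pass
    for i, row in enumerate(rows):
        ok, candidate = decode_row(row)
        if ok:
            decoded.append(candidate)
        else:
            decoded.append(None)     # mark the whole row as an erasure
            erased.append(i)
    if erased:
        decoded = correct_column_erasures(decoded, erased)
    return decoded
```

With, for example, even-parity row detection and a single XOR parity row as the column erasure code, a single corrupted row is erased in the first pass and rebuilt in the second.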
The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of operation, together with objects, features, and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings in which:
It will be appreciated that for simplicity and clarity of illustration, elements shown in the FIGS. have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the FIGS. to indicate corresponding or analogous elements.
A channel may distort data in several ways. One type of distortion inserts errors into a data signal, e.g., by changing values of the received data in unknown places. For example, a codeword transmitted as [1 0 0 0 1 1] may be received as [1 0 0 1 0 1] (changing the values of the fourth and fifth terms). Another type of distortion replaces some of the positions of the vector values by a new symbol referred to as an "erasure" (e.g., leaving the other symbols unchanged). For example, a codeword transmitted as [1 0 0 0 1 1] may be received as [1 0 0 e e 1] (the fourth and fifth terms are marked by an erasure, "e"). Other types of distortions may occur, including, for example, adding extra values, swapping values, etc.
"Error Detection Codes" (EDC) may refer to codes, vectors, or redundancy information for detecting errors in an associated code (e.g., accidental changes in one or more values of the associated code's vector). An Error Detection Code typically has only enough redundancy information to detect, but not to correct, a certain set of errors. Examples of EDC include repetition codes, parity bits, checksums, cyclic redundancy checks, and cryptographic hash functions (other EDC may also be used). Parity bits may be used to verify a linear constraint on the code symbols (e.g., that an associated codeword has even or odd Hamming weight).
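The parity-bit EDC mentioned above can be shown in a few lines. This is a toy even-parity scheme for illustration only: the appended bit makes the codeword's Hamming weight even, so any single flipped bit is detectable but not correctable.

```python
def add_parity(bits):
    """Append one parity bit so that the total Hamming weight is even."""
    return bits + [sum(bits) % 2]

def check_parity(codeword):
    """True when the even-weight linear constraint holds."""
    return sum(codeword) % 2 == 0
```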
"Error Correcting Codes" (ECC) or "Error Control Codes" (ECC) may refer to codes or a collection of codewords, vectors, or redundancy information, for correcting errors in an associated code. ECC may be appended to, or stored with, the associated information to be corrected, or may be stored or transmitted separately. Examples of ECC include BCH codes, Reed-Solomon (RS) codes, Convolutional codes, Turbo codes, Low Density Parity Check codes (LDPC) and Polar codes (other ECC may also be used) for correcting errors in an error correction process. An example of an error decoding process is the read operation in storage implementations (e.g., Flash memories). The data retrieved from the storage medium may be distorted due to physical interferences occurring in the device. When the user data is protected with an appropriate error correction code, the flash controller may still fix certain cases of errors (e.g., the most probable set), thereby leaving the user with a relatively small decoding failure probability.
ECCs may have both correction and detection capabilities. In some embodiments of the invention, ECC may be re-used for error detection (in addition to error correction) instead of using separate EDC for error detection. Such embodiments may provide a more compact encoding by eliminating separate EDC, thereby reducing the overall storage requirements for the codewords.
"Erasure correction codes" or "erasure codes" may refer to ECC for correcting erasures. An erasure is a symbol whose codeword value (e.g., "0" or "1") is erased and/or replaced by an erasure value (e.g., "e"). Decoding erasure codes is typically simpler and faster than decoding standard ECC because erasure codes treat each symbol as either erroneous (e.g., erased, "e") or validated (e.g., not erased, "0" or "1"). In standard ECC decoding, a decoder maintains a probability of error for each symbol and performs complex computations to determine if a symbol is correct or not (e.g., maximum likelihood (ML) computations enumerating all possibilities of a symbol value to determine a closest or most likely codeword to a noisy word, such as, based on a smallest number of bit flips between sent and received codewords). Correcting erasures is typically simpler than standard (e.g., maximum likelihood) ECC decoding, for example, equivalent to solving a system of N equations with a set of N (or fewer) unknowns (erasures). Erasure codes are limited in the number of erasures that can be corrected (e.g., a maximum distance separable (MDS) erasure code with (p) parity symbols may recover up to (p) erasures). Examples of erasure codes are BCH codes, Reed-Solomon (RS) codes, Low Density Parity Check codes (LDPC), Tornado codes, LT codes, Raptor codes and Polar codes (other erasure codes may be used) used to correct erasures in an erasure correcting process. An example of an erasure decoding process is in multiple-storage device applications in which there may be a need to quickly recover from a possible failure of a certain number of storage devices (e.g., one or more storage servers, SSDs, HDDs, etc.).
In such cases, the system may use an erasure correction code in which different symbols of the code are stored on different media devices (e.g., for m devices, each media chunk may be divided into m−p sub-chunks that are encoded with an erasure correction code with p parity symbols, and each of the resulting m chunks may then be stored on one of those m devices). In case a storage device fails, all the symbols that were stored on that device are set to erasures. The erasure correction code enables recovery of the missing device data without having to revert to backup restoration.
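The multiple-device scenario above can be illustrated with the simplest such code, a single XOR parity chunk across m hypothetical devices (i.e., p=1): the contents of any one failed device can be rebuilt from the survivors. The function names are chosen here for illustration.

```python
from functools import reduce

def xor_bytes(a, b):
    """Bytewise XOR of two equal-length chunks."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode_stripe(data_chunks):
    """Append one parity chunk equal to the XOR of all data chunks."""
    return data_chunks + [reduce(xor_bytes, data_chunks)]

def recover(stripe, failed_index):
    """Rebuild the chunk of one failed device by XORing the survivors."""
    survivors = [c for i, c in enumerate(stripe) if i != failed_index]
    return reduce(xor_bytes, survivors)
```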
In accordance with some embodiments of the present invention, a device, system and method is provided for a family of Error Correcting Codes (ECC) that quickly and efficiently encodes and decodes while providing low error rates. This family of error correction codes is referred to as "product codes." Product codes are composed of a plurality of row codes (e.g., in a first dimension) and a plurality of column codes (e.g., in a second dimension). Row codes may be orthogonal to, and encoded and/or decoded independently of, column codes. Each data element in a product code may be encoded twice, once in a first dimension by a row code and then again in a second dimension by a column code, where the row and column codes intersect the data element. Product code structure has an advantage that it is composed of relatively shorter constituent error correction codes (e.g., row codes and column codes) compared to a linear code of comparable encoding information, and may thus be decoded by relatively simple or shorter corresponding decoding algorithms because the complexity or computing time of decoders is typically dependent on code length. For example, consider the case that the consumed memory of the decoder is S(N)=c·N and the decoding time of the decoder is f(N)=N·g(N), where c is a constant and g(N) is at most a linear function of the code length N (e.g., g(N)=log2 N or g(N)=N^0.5). Consequently, decreasing the length of the code from N to N/α reduces the memory requirement by a multiplicative factor of α>1 and the decoding time by a multiplicative factor of f(N)/f(N/α)=α·g(N)/g(N/α)≥α.
In other words, by decreasing the length of the ECC relative to a linear code, product codes may reduce the memory complexity and decoding time of the decoder compared to a decoder that is employed on the entire comparable linear code. Note, however, that decoding a product code with length N and m row constituent codes (e.g., each one of length N/m, where α=m) may have a higher frame error rate compared to decoding systems that use a single code of length N. Moreover, by dividing a product code into row and column codes, it is possible to employ a plurality of decoders for decoding a plurality of respective row codes and/or column codes in parallel, thereby increasing the throughput of the decoder (e.g., approximately multiplying the throughput by the number of parallel decoders) and decreasing the decoding time of the product code (e.g., approximately dividing the decoding time by the number of parallel decoders). Different types of ECC may also be implemented for row and column codes (e.g., in different dimensions), for example, to gain the benefits of both high accuracy codes (e.g., parity row codes) and high-speed codes (e.g., erasure column codes). Product codes, according to some embodiments of the invention, may use the following types of codes as their constituent codes: (i) erasure correction codes (e.g., column codes), and (ii) error correction codes (e.g., row codes) that may be concatenated with additional error detection codes. In some embodiments, erasure correction codes may serve a dual purpose of correcting and detecting errors, in which case separate error detection code may not be used.
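The "each data element is encoded twice" structure described above can be made concrete with a toy product encoder that uses single-parity constituent codes in both dimensions (real embodiments would use stronger constituent codes, such as ECC rows and erasure-code columns). `product_encode` is an illustrative name.

```python
def product_encode(data):
    """data: list of equal-length bit rows.  Returns the product
    codeword: each row gains a row-parity bit (first dimension), and a
    final parity row protects every column (second dimension)."""
    rows = [r + [sum(r) % 2] for r in data]              # row parity
    cols = [sum(r[c] for r in rows) % 2 for c in range(len(rows[0]))]
    return rows + [cols]                                 # column parity row
```

Every data bit is now covered by exactly one row constraint and one column constraint, which is what lets a second-dimension decoder recover rows erased by the first-dimension decoder.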
An insight according to some embodiments of the invention is that erasure decoding is beneficial as a supplemental or second-pass decoding step, used in conjunction with another primary or first-pass decoding step. Erasure correction codes are typically only capable of correcting a specific type of interference that introduces erasures into a signal over an erasure channel. Thus, conventional decoders use standard (non-erasure) error correction decoding for the more typical type of interference that introduces errors (not erasures) into a signal. However, according to embodiments of the present invention, a two-pass decoder is provided in which the first-pass decoder is the erasure "channel" that inserts erasures into the codeword during this first-pass internal decoding method. The first-pass decoder may decode more general errors, and insert erasures where the first-pass decoding fails. The second-pass decoder may then perform erasure decoding to correct the erasures introduced by the first-pass decoding.
Product codes may support two such passes, dimensions, or layers, of decoding. According to some embodiments of the invention, there is now provided a device, system and method for encoding/decoding a product code comprising a first pass, layer, or dimension of ECC (e.g., row codes) to correct a first set (e.g., majority) of errors and a second pass, layer, or dimension of erasure ECC (e.g., column codes) to recover a second set (e.g., minority) of information in rows that were not decoded successfully, but erased, in the first decoding pass. The first error correction pass may output corrected information (e.g., with a negligible probability of being wrong) and erased information (e.g., with an above threshold non-negligible probability of being wrong). The corrected information may subsequently be used in the supplemental or second-pass decoding (e.g., column decoding) to decode information (e.g., rows) that failed the first pass decoding and were erased.
In accordance with some embodiments of the present invention, product codes may be efficiently encoded to have a plurality of row (or first dimension) codes including error detection code and/or error correction code for a row (or first dimension) codeword, and have a plurality of column (or second dimension) codes including erasure codes for a column (or second dimension) codeword. For example, a product code may include first dimension error correction codes (D1-ECC) and first dimension error detection codes (D1-EDC) for the row codewords, and second dimension erasure correction codes (D2-Erasure ECC) for the column codewords.
In accordance with some embodiments of the present invention, the product codes may be efficiently decoded by performing a first decoding pass of the row (or first dimension) and a second decoding pass of the column (or second dimension). The first decoding pass may correct a row (or first dimension) codeword using the row (or first dimension) error correction codes (e.g., D1-ECC) decoder. The decoder may detect if the initially corrected row (or first dimension) codeword contains any remaining errors using the associated error detection codes (e.g., D1-EDC). When no decoding error is detected in a row (or first dimension) codeword, the row (or first dimension) codeword is validated, and no column (or second dimension) decoding may be executed for the data elements in that row (or first dimension) codeword. However, when a decoding failure or error is detected in a row (or first dimension), the row (or first dimension) codeword may be erased and recovered by employing a second error correction pass using erasure correction codes in a column (or second dimension) (e.g., D2-Erasure ECC).
By combining the speed and efficiency of decoding erasure codes (e.g., in a column or second dimension) with the accuracy of decoding ECC (e.g., in a row or first dimension), these product codes provide efficient decoding with good error correction performance. The increase in speed provided by the erasure codes is ideal for systems with limited computational resources, for example, when information rendering is needed fast or immediately in real-time, such as in telecommunications systems (see e.g.,
Reference is made to
Transmitter(s) 110 and receiver(s) 112 may include one or more controller(s) or processor(s) 118 and 120, respectively, configured for executing operations or computations disclosed herein and one or more memory unit(s) 122 and 124, respectively, configured for storing data such as inputs or outputs from each of the operations or computations and/or instructions (e.g., software) executable by a processor, for example for carrying out methods as disclosed herein. Processor(s) 120 may decode a product code representation of the input data or codeword(s) sent from transmitter(s) 110 and received by receiver(s) 112.
Processor(s) 118 and 120 may include, for example, a central processing unit (CPU), a digital signal processor (DSP), a microprocessor, a controller, a chip, a microchip, an integrated circuit (IC), or any other suitable multi-purpose or specific processor or controller. Processor(s) 118 and 120 may individually or collectively be configured to carry out embodiments of a method according to the present invention by for example executing software or code. Memory unit(s) 122 and 124 may include, for example, random access memory (RAM), dynamic RAM (DRAM), flash memory, volatile memory, non-volatile memory, cache memory, buffers, registers, short term memory, long term memory, or other suitable memory units or storage units. Processor(s) 120 may be part of a general-purpose processor executing special-purpose decoding software or a special-purpose hardware unit that is a dedicated decoder 126.
Transmitter(s) 110 and receiver(s) 112 may include one or more input/output devices, such as a monitor, screen, speaker or audio player, for displaying, playing or outputting to users results provided by the decoder (e.g., data communicated by transmitter(s) 110 decoded by decoder 126) and an input device (e.g., such as a mouse, keyboard, touchscreen, microphone, or audio recorder) for example to record communication, control the operations of the system and/or provide user input or feedback, such as, selecting one or more encoding or decoding parameters, such as, decoding speed, decoding accuracy, an arrangement of rows and columns in the product code, absolute or relative numbers or lengths of row and/or column codes, numbers of data blocks into which the product code is divided, a threshold amount of time or duration allowable for decoding, a threshold maximum time delay allowable between the time of receiving a signal and the time of decoding the signal, etc.
Reference is made to
In
Memory controller 158 performs the following tasks: (a) providing the most suitable interface and protocol for both the host 150 and the memory system 156; and (b) efficiently handling data, maximizing transfer speed, and maintaining data integrity and information retention. In order to carry out such tasks, some embodiments of the invention implement an application specific device, for example, embedding one or more processor(s) (e.g., 176 and 180, usually 8-16 bits), together with dedicated hardware or software to handle timing-critical tasks. Generally speaking, memory controller 158 can be divided into multiple parts (e.g., parts 160, 162, 164 and 170), which are implemented either in hardware or in firmware.
Describing components of memory system 156 from top-to-bottom in
Memory controller 158 may include one or more controller(s) or processor(s) 176 and 180 for implementation of the ECC encoder 166 and the ECC decoder 168, respectively, configured for executing operations or computations disclosed herein and one or more memory unit(s) 178 and 182, respectively, configured for storing data such as inputs or outputs from each of the operations or computations and/or instructions (e.g., software) executable by a processor, for example for carrying out methods as disclosed herein.
Processor(s) 176 and 180 may include, for example, a central processing unit (CPU), a digital signal processor (DSP), a microprocessor, a controller, a chip, a microchip, an integrated circuit (IC), or any other suitable multi-purpose or specific processor or controller. Processor(s) 176 and 180 may individually or collectively be configured to carry out embodiments of a method according to the present invention by for example executing software or code. Memory unit(s) 178 and 182 may include, for example, random access memory (RAM), dynamic RAM (DRAM), flash memory, volatile memory, non-volatile memory, cache memory, buffers, registers, short term memory, long term memory, or other suitable memory units or storage units.
Reference is made to
In
In
In
In
A decoder may execute a plurality of decoding iterations to sequentially decode the plurality of respective data blocks (e.g., the rows Informationx,y in 302-308). The decoder may iterate progressing sequentially from the block of column codewords (308) with the (e.g., longest) erasure redundancy (e.g., p) to the block of column codewords (302) with the (e.g., shortest or no) erasure redundancy (p−d0). In each iteration, a two-pass decoder may alternate between executing a first-pass row decoding to correct errors in the rows of the data block and executing a second-pass column decoding to restore erased data or rows of the data block (e.g., only for data or rows that are erased in the first pass). To decode the first input codeword block (e.g., the column block with the longest erasure redundancy part) (e.g., 308) in a first iteration, the decoder may execute a first-pass row decoder, decoding rows of the product codeword (e.g., having parity 312). The first pass may continue until t1 (e.g., m) rows are successfully decoded. If errors are detected in a row by the EDC (e.g., having parity 310), the decoder may erase this row. After t1 (e.g., m) rows have been successfully decoded, a second-pass column decoder is triggered in an attempt to recover the erasures and sections that were not yet decoded, using the corresponding (e.g., p) redundancy erasure code parity in the data block (e.g., 308). After recovering the erased information in the first column codeword block (e.g., 308), the decoder may progress to the next adjacent (kr-2) second column codeword block (e.g., 306) in a second decoding iteration. To decode the second codeword block (e.g., 306), the decoder may execute a first-pass row decoder, seeking to successfully decode t2 (e.g., m+dr-2) rows of the product codeword using the row ECC (e.g., with parity 312) (e.g., including the (t1) rows that were successfully decoded in the previous iteration).
Following this pattern, a second-pass column decoder may attempt to recover the missing parts of the second data block (e.g., 306) using the additional redundant set of (e.g., p−dr-2) erasure column codes (e.g., 306). After the erasures are corrected in the second iteration by column decoding the second data block (e.g., of kr-2 columns), the decoder may then revert to row decoding any failed rows of the previous first iteration using the corrected rows and ECC (e.g., with parity 312). Erasures recovered in one iteration (e.g., by column decoding the first kr-1 columns in the column block 308) may propagate to correct errors in the next iteration (e.g., when row decoding the t2−t1 rows), thus propagating corrections from one data block to the next. Accordingly, each subsequently decoded data block (e.g., 308, 306, . . . , 304, 302) includes an incrementally increasing number of successfully decoded rows t1, t2, t3, . . . , tr-1 (e.g., m, m+dr-2, m+dr-3, . . . , m+d0, respectively) forming the stair-step pattern of
In accordance with some embodiments of the present invention, multiple rows may be decoded simultaneously (in parallel) or sequentially (in series or in a pipeline). In accordance with some embodiments of the present invention, multiple columns may be decoded simultaneously (in parallel) or sequentially (in series or in a pipeline). In accordance with some embodiments of the present invention, rows and columns of a product code may be encoded simultaneously (in parallel) or sequentially (in series or in a pipeline). In some embodiments, encoding or decoding may be executed in sequence or in parallel depending on speed or hardware requirements. For example, encoding or decoding in parallel provides faster results than encoding or decoding in sequence. However, encoding or decoding in parallel uses more complex circuitry (e.g., duplicating some logical circuitry) than encoding or decoding in sequence. In various embodiments, an encoder/decoder may encode/decode all (or a subset of) rows followed by all (or a subset of) columns, or may alternate between encoding/decoding rows and columns or row blocks and column blocks. In some embodiments, implementing the above parallelization may depend on whether the average decoding time or latency is acceptable for the rendering needs of each system. For example, if real-time decoding is needed, parallelization may only be used if a decoding speed greater than provided by sequential decoding is needed to render the data in real-time. In some embodiments, parallelization may be selectively or dynamically implemented or adjusted in real-time to meet changing data error rates or rendering speeds.
In some embodiments, determining the length or parity (p) of the erasure codes may depend on the average error rates of the data, the length, type and/or error correction rate of the row (or first dimension) ECC, and/or system error requirements. The length or parity (p) of the erasure codes in each column may be equal to the number of erased rows the erasure codes may recover. This length of the erasure codes (p) may be equal to a predetermined value, may be set to meet system requirements for maximal errors, or may be adjusted dynamically based on rates of system errors. In some embodiments, the length or parity (p) of the erasure codes may be tuned, for example, increased proportionally to the level of noise over the channel, or inversely proportionally to the system's threshold error acceptability. In one example, if the probability of having two erasures is on the order of 10⁻⁶, and the system has an error acceptability threshold of 10⁻⁴, then only a single erasure parity may be used because two erasures are very unlikely (e.g., a probability of 10⁻⁶, which is less than the system's threshold error acceptability of 10⁻⁴). In some embodiments, the size (e.g., height of the stairs) of erasure parity may be adjusted on-the-fly, e.g., based on the real-time error rates. For example, if the level of noise over a channel is consistently measured to be below a threshold, the receiver may signal the transmitter to reduce the erasure parity segment (e.g., reduce p and/or increase di), whereas if the level of noise over a channel is consistently measured to be above a threshold, the receiver may signal the transmitter to increase the erasure protection by increasing the number of parity symbols per column (e.g., increase p and/or decrease di).
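The parity-selection rule sketched above can be illustrated with a toy calculation. The independent row-erasure model with per-row erasure probability q, and the function names below, are assumptions made here for illustration, not part of the described embodiments: choose the smallest p such that the probability of more than p erasures falls below the system's error-acceptability threshold.

```python
from math import comb

def tail_prob(n_rows, q, p):
    """P(more than p of n_rows independent rows are erased),
    with per-row erasure probability q (binomial tail)."""
    return sum(comb(n_rows, k) * q**k * (1 - q)**(n_rows - k)
               for k in range(p + 1, n_rows + 1))

def choose_parity(n_rows, q, threshold):
    """Smallest parity p whose residual erasure probability is acceptable."""
    for p in range(n_rows + 1):
        if tail_prob(n_rows, q, p) < threshold:
            return p
    return n_rows
```

The same calculation, rerun on real-time noise measurements, would drive the on-the-fly increase or decrease of p described above.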
In some embodiments, the receiver may send a pre-emptive handshake signal to the transmitter, requesting a test codeword (e.g., also referred to as a training sequence or pilots) to determine the real-time level of noise over the channel, and send a tuned parity order to the transmitter requesting that the transmitter maintain a minimum parity configuration to ensure the system's threshold error acceptability. Erasure parity may be tuned periodically, upon detecting or receiving an indication of a change in signal or channel quality, and/or upon a first loss of data.
The following notations and conventions are used herein for example only (other notations or representations may also be used). For a natural number l, the term [l] may denote the set {0, 1, 2, . . . , l−1} of size l. Vectors may be denoted by bold letters and random variables may be denoted by capital letters (e.g., random vectors may be denoted by bold upper-case letters). For i≤j, u_i^j=[u_i, u_{i+1}, . . . , u_j] may denote a sub-vector of u of length j−i+1 (e.g., if i>j, then u_i^j=[ ], the empty vector, and its length is 0). Vectors described herein are assumed to be row-vectors, although vectors may also represent column vectors, sequences, or other data structures. Column vectors may be generated from row vectors u by the transpose operation, u^T, or vice versa.
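The sub-vector notation above maps directly onto list slicing, with the off-by-one caveat that u_i^j is inclusive of index j. `subvec` is a name chosen here for illustration.

```python
def subvec(u, i, j):
    """Return u_i^j = [u_i, ..., u_j]; the empty vector when i > j."""
    return u[i:j + 1] if i <= j else []
```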
Matrices may be denoted by capital letters. A matrix M with m_r rows and m_c columns over a set F may be denoted as M ϵ F^{m_r×m_c}.
Let M(0)ϵFm
A vector u of length γ=γ_0·γ_1 may be reshaped into a matrix Γ of γ_0 rows and γ_1 columns by setting Γ_{i,j}=u_{i+j·γ_0}.
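The column-major reshape rule above can be checked in a few lines; `reshape` is an illustrative name, and the index formula Γ_{i,j}=u_{i+j·γ_0} is taken from the definition above.

```python
def reshape(u, g0, g1):
    """Reshape vector u of length g0*g1 into a g0-by-g1 matrix,
    filling column by column: entry (i, j) is u[i + j*g0]."""
    assert len(u) == g0 * g1
    return [[u[i + j * g0] for j in range(g1)] for i in range(g0)]
```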
Given a set of possible signal values F, an error correction code ECC (denoted as C) of length n symbols may be a subset of an n-dimensional space F^n. The rate of C may be defined, for example, as R = log_{|F|}(|C|)/n, based on the size or magnitude of the code |C|, the size of the field |F| and the length of the code n. The rate may be the proportion of user information symbols that is conveyed when sending a codeword with respect to the total codeword length (e.g., a higher rate indicates a more efficient representation of the signal). For example, as the codeword length n increases for a fixed code size |C|, the rate R decreases and communicating or transferring information becomes less efficient. If the set of possible signal values F is a field and C is a linear space in F^n, C may be a linear code over F. Such a code C may be defined, in one example, by a generating matrix B, e.g., having k=R·n rows and n columns, e.g., such that C={v·B | v ϵ F^k}. The parameter k may denote the code dimension, e.g., such that |C|=|F|^k (the code dimension k is equal to the base |F| logarithm of the size of the code).
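The rate definition above reduces to R = k/n for a linear code of dimension k, which a one-line helper (named here for illustration) makes explicit:

```python
from math import log

def code_rate(code_size, field_size, n):
    """R = log_|F|(|C|) / n."""
    return log(code_size, field_size) / n
```

For instance, a binary code with |C| = 2^4 codewords of length n = 7 (the parameters of a [7,4] code) has rate 4/7.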
For a linear code C of dimension k with generating matrix B, another code C′ may be generated as C′={v·B | v ϵ F^k, v_i=f_0}. In other words, C′ may be a sub-code of C in which v_i=f_0. In this case, the value v_i may be referred to as "frozen" to the value f_0. This freezing operation is also generally referred to as "code expurgation".
A dual space of a linear code C may be denoted as C^⊥ and may be defined as a plurality or all of the vectors in F^n that are orthogonal to all the codewords in C. Thus, the dual space C^⊥ is orthogonal to its code space C such that any codeword c^⊥ ϵ C^⊥ is orthogonal to any codeword c ϵ C, e.g., c^⊥·c^T=0. Codewords c^⊥ of the dual space C^⊥ may be generated by a dual space generating matrix H (e.g., a parity check matrix of C) that may have, for example, n columns and n−k rows, such that c ϵ C if and only if c·H^T=0. A syndrome s of a length n vector v may be defined, for example, as s=v·H^T. Thus, the syndrome of a codeword c ϵ C is, for example, zero. A syndrome may measure the result of applying the parity check equations (e.g., rows of H) to the values of a word v. When all the parity check equations are satisfied (e.g., v·H^T is equal to the zero vector), then v is a codeword in C.
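The syndrome computation s = v·H^T over GF(2) is a few lines of arithmetic. The H below is the parity-check matrix of the [3,1] binary repetition code {000, 111}, used purely as a small worked example.

```python
def syndrome(v, H):
    """s = v · H^T over GF(2): one bit per parity-check equation (row of H)."""
    return [sum(v[j] * row[j] for j in range(len(v))) % 2 for row in H]

# Parity checks of the [3,1] repetition code: x0+x1 = 0 and x0+x2 = 0.
H = [[1, 1, 0],
     [1, 0, 1]]
```

A word is a codeword exactly when its syndrome is the zero vector; a nonzero syndrome flags which parity-check equations are violated.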
An encoder ENC(⋅) may be referred to as "systematic" if there exists a sequence of indices i_j (j ϵ [k]) such that for each codeword c_0^{n-1} its corresponding information word u_0^{k-1} fulfills u_j=c_{i_j}. In other words, the information symbols appear unmodified at fixed positions of the codeword.
Consider a channel carrying signals x→y in which a signal x is transmitted using a transmitter over the channel and a noisy version y of the signal is received at a receiver. The channel may be "memoryless," meaning that the channel noise at one time for one transmission x_i→y_i is independent of the channel noise at another time for another transmission x_j→y_j.
A maximum likelihood (ML) codeword decoder may determine for each received channel observation vector y, the most probable transmitted codeword x, for example, by maximizing the following likelihood:
Pr(Y0n-1=y0n-1|X0n-1=x0n-1)=Πi=0n-1Pr(Yi=yi|Xi=xi).
This likelihood defines the probability or likelihood of receiving a channel observation vector y if a codeword x is sent over the channel. The maximum likelihood codeword decoder may be defined, for example, as:
to detect the transmitted codeword {circumflex over (x)}=x0n-1ϵC that maximizes this probability. If each codeword in an ECC C is sent over the channel with the same probability (e.g., the system has no preference or bias for certain codewords over others, such as preferring 0's over 1's in a binary system), then this maximum likelihood criterion corresponds to a minimum block error probability (BLER) Pr({circumflex over (X)}≠X) defining a minimum probability of error that the estimated codeword {circumflex over (x)} is incorrect. If codewords in an ECC C are transmitted with bias, then the maximum likelihood criterion above may be replaced with argmax of Pr(Y0n-1=y0n-1|X0n-1=x0n-1)Pr(X0n-1=x0n-1) to take the preference into account. In such a case, the criterion may be referred to as the maximum a posteriori probability (MAP) criterion.
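For illustration, a brute-force maximum likelihood decoder for a memoryless binary symmetric channel may be sketched as follows; the repetition codebook and crossover probability are assumptions of this sketch, and practical decoders avoid this exhaustive search over all candidate codewords:

```python
import math

def ml_decode_bsc(y, codebook, p=0.1):
    """Brute-force ML decoding over a memoryless binary symmetric channel.

    Maximizes the product of per-symbol likelihoods Pr(y_i | x_i); for a
    BSC with crossover probability p < 0.5 this is equivalent to picking
    the codeword at minimum Hamming distance from y.
    """
    def loglik(x):
        # Memoryless channel: the joint likelihood factors per symbol.
        return sum(math.log(p if xi != yi else 1 - p) for xi, yi in zip(x, y))
    return max(codebook, key=loglik)

# Length-5 repetition code as a toy codebook (assumed example).
codebook = [(0,) * 5, (1,) * 5]
assert ml_decode_bsc((1, 1, 0, 1, 0), codebook) == (1,) * 5
```

The exhaustive search visits |C| codewords, which motivates the approximate, lower-complexity row decoders discussed in the next paragraph.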
The maximum likelihood (ML) criterion may minimize the output block error probability. However, in many cases, implementing such maximum likelihood (ML) criterion is prohibitive because of the high complexity of the ML computations required to examine all possible candidate codewords in the codebook. In some embodiments, an approximation of ML may be used to reduce decoding complexity in a first pass or dimension of a product code. To compensate for errors caused by the approximation, some embodiments of the present invention may provide erasure correction in a second or supplemental pass or dimension of a product code. Such methods and apparatuses may provide fast and efficient encoding and decoding while still achieving effective error correction performance.
Reference is again made to
As shown in
mega symbols per row of input data (informationi) 106. A column in the matrix consisting of those mega-symbols may be referred to as a “mega-column” or simply a column.
Each one of those columns may be encoded by a systematic erasure correcting code 109, for example, having parity size of p symbols. The erasure correction code may therefore have code length m+p mega-symbols and m information mega-symbols. The erasure correction codes may generate a p×k matrix of erasure correction code parity 113. The erasure correction code parity or redundancy 113 may be organized into rows such that mega-symbol i of all the instances of the column erasure correction code may be located in the ith erasure correction code parity 113, ErasureParityi. In total, p rows 115 of erasure correction code parity 113 may be generated. Each such row of erasure correction code parity 113, ErasureParityi, may be encoded by a systematic error detection code (with parity EDCParitym+i). Both the erasure correction code parity 113, ErasureParityi, and the error detection code parity 108, EDCParitym+i, may then be encoded by the row error correction code 109, generating RowParitym+i.
The (m+p)×k matrix with rows of Informationi and ErasureParityj where i=0, 1, 2, . . . , m−1 and j=0, 1, 2, . . . , p−1 may be referred to as the “main information matrix” denoted by M.
Example parameters of the product code of
Reference is made again to
In this structure, an input data vector of size K=k·m+k0·p symbols may be encoded as a product codeword with the following format:
A first input data block 202 of k0·(m+p) symbols may be organized in a matrix of k0 columns and (m+p) rows. Row i of this matrix may be denoted as Informationi,0 where 0≤i≤m+p−1.
A second input data block 204 of the k1·m symbols may be organized in a matrix of k1 columns and m rows. Row i of this matrix is denoted Informationi,1 where 0≤i≤m−1.
The k1 columns of the second input data block 204 may be divided into row segments or sub-rows of l elements, l≥1, such that l divides k1 (e.g., the first l symbols of each row are the first row segment, the next l symbols generate the second row segment, and so on). Each row segment may be considered a “mega symbol”. In total, there may be
mega symbols per row segment, informationi,1. A column in the matrix including those mega-symbols is referred to as a “mega-column” or simply a column. Each one of those columns may be encoded by a systematic erasure correcting code, for example, having parity size of p mega-symbols. The erasure correcting code parity 206 may be organized in a matrix with, for example, p rows and k1 columns, such that row i of this matrix (denoted as ErasureParityi) contains the ith parity mega-symbol from each one of the k1/l erasure parities. The (m+p)×k matrix that contains the Informationi,j and ErasureParityi row segments may be referred to as the “main information matrix” and denoted by M. An encoder may encode the product code or matrix of
To encode the top m rows, for each i=0, 1, . . . , m−1, the encoder may encode Informationi,0 and Informationi,1 by a systematic error detection code with parity size medc symbols denoted as EDCParityi. Next, the encoder may encode Informationi,0, Informationi,1 and EDCParityi by an error correction code (not necessarily systematic) with a redundancy of mrow symbols. In the example of
To encode the bottom p rows, for each i=0, 1, . . . , p−1, the encoder may encode Informationm+i,0 and ErasureParityi by the aforementioned systematic error detection code with parity size medc symbols denoted as EDCParitym+i. Next, the encoder may encode Informationm+i,0, ErasureParityi and EDCParitym+i by the row error correction code. These steps generate p codewords 210, such that each row codeword is generated using only the user information in its same row, and the erasure code parities corresponding to all the rows of matrix 204.
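The row/column encoding interplay above can be sketched in miniature. The sketch below is a deliberate simplification under stated assumptions: CRC-32 stands in for the document's error detection code, a single XOR parity row stands in for the erasure-correcting column code (i.e., p=1), and the row error correction stage is omitted for brevity:

```python
import zlib
import numpy as np

def encode_product(info_rows):
    """Simplified product-code encoder sketch: per-row EDC (CRC-32, an
    assumed stand-in) plus one XOR column-parity row (p=1 stand-in for
    the erasure-correcting column code). The row ECC stage is omitted.
    """
    M = np.array(info_rows, dtype=np.uint8)        # m x k information matrix
    erasure_parity = np.bitwise_xor.reduce(M, 0)   # column-wise XOR parity row
    rows = np.vstack([M, erasure_parity])
    # Attach a per-row EDC so a decoder can validate each decoded row.
    return [(row, zlib.crc32(row.tobytes())) for row in rows]

cw = encode_product([[1, 2, 3], [4, 5, 6]])
assert len(cw) == 3                                # m + p rows with p = 1
assert list(cw[2][0]) == [1 ^ 4, 2 ^ 5, 3 ^ 6]     # XOR parity row
```

As in the scheme above, each output row depends only on the user information of its own row plus (for the parity row) parities computed over all information rows, which is what permits row-by-row pipelined encoding.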
Example parameters of the product code of
Reference is again made to
k0·(m+d0) symbols may be organized in a matrix of k0 columns and (m+d0) rows. Row i of this matrix is denoted Informationi,0 where 0≤i≤m+d0−1. If d0<p, each row of the matrix may be divided into mega-symbols of l0 symbols (k0 is divisible by l0) and each mega-column may be encoded by a systematic erasure correction code with parity size of p−d0 mega-symbols. These parities of the instances of the column code may be organized in p−d0 rows such that row i (denoted ErasureParityi,0) contains the ith parity mega-symbol from each one of the column code instances. If d0=p, no erasure correction code may be applied on the columns of this matrix. Matrix 302 contains the Informationi,0 chunks as its first m+d0 rows, followed by the ErasureParityi,0 symbols in its next p−d0 rows.
For j=1, 2, . . . , r−2: kj·(m+dj) symbols may be organized in a matrix of kj columns and (m+dj) rows. Row i of this matrix is denoted as Informationi,j where 0≤i≤m+dj−1. Each row of the matrix may be divided into mega symbols of lj symbols (kj is divisible by lj). Each mega-column of the matrix may be encoded by a systematic erasure correction code with parity size of p−dj mega-symbols. These parities of the instances of the column code may be organized in p−dj rows such that row i (denoted ErasureParityi,j) contains the ith parity mega-symbol from each one of the column code instances. Matrices 304 and 306, for j=1 and j=r−2, respectively, contain the Informationi,j chunks in their first m+dj rows, followed by ErasureParityi,j symbols in their next p−dj rows.
kr-1·(m+dr-1) symbols are organized in a matrix of kr-1 columns and (m+p) rows. Row i of this matrix may be denoted Informationi,r-1 where 0≤i≤m+p−1. Each row of the matrix may be divided into mega-symbols, each one of lr-1 symbols (kr-1 is divisible by lr-1). Each mega-column of the matrix may be encoded by a systematic erasure correction code with parity size of p mega-symbols. These parities of the instances of the column code are organized in p rows such that row i (denoted ErasureParityi,r-1) contains the ith parity mega-symbol from each one of the column code instances. Matrix 312 contains the Informationi,r-1 chunks in its first m+dr-1 rows, followed by the ErasureParityi,r-1 symbols in its next p rows.
An (m+p)×k matrix that is a concatenation (column by column) of the aforementioned matrices may be the main information matrix denoted by M. The rows of M may be encoded, for example, as follows:
To encode the top m rows, for each i=0, 1, . . . , m−1 Informationi,0, Informationi,1, . . . , Informationi,r-1, may be concatenated into a single vector and encoded by a systematic error detection code with parity size medc symbols denoted as EDCParityi. Next, the encoder may encode Informationi,0, Informationi,1, . . . , Informationi,r-1 and EDCParityi by an error correction code (not necessarily systematic) with a redundancy of mrow symbols. In the example of
For j=1, 2, . . . , r−2, let Δj=dr-j-1−dr-j and i=0, 1, . . . , Δj−1 (where dr-1=0): Informationm+d
For i=0, 1, . . . , p−d0−1, ErasureParityi,0, ErasureParityi+d
Example parameters of the product code of
The structure of the product code in
Reference is made to
The input (e.g., K information symbols) and output (e.g., a product codeword of length N) of the process of
In operation 404, one or more processor(s) may encode the input information to build the main information matrix having k columns and m+p rows.
In operation 406, one or more processor(s) may encode each row of the matrix by an error detection code (step 1) and later a row error correction code (step 2).
In operation 408, one or more processor(s) may generate a single row codeword by concatenating the rows of the matrix. The operations may be performed sequentially, so that each step can start only after the previous step has ended, or in parallel. The structure of the code, however, supports pipelined operation of the steps if such a mechanism is advantageous.
Reference is made to
The input (e.g., K information symbols) and output (e.g., a product codeword of length N) of the process of
In operation 504, one or more processor(s) may encode the input information, row-by-row, to build the main information matrix having k columns and m+p rows. The process may input each row into operation 506 as soon as it is generated.
In operation 506, as soon as each row i is generated, one or more processor(s) may encode the row of the matrix by an error detection code (step 1) and later a row error correction code (step 2). The process may input each row into operation 508 as soon as it is encoded.
In operation 508, one or more processor(s) may generate a single row codeword by concatenating the rows of the matrix.
Reference is made to
The input (e.g., K information symbols) and output (e.g., an information matrix M) of the process of
In operation 604, one or more processor(s) may initialize a set of integer numbers 0, 1, . . . , K−1. This set may denote the indices of u that were not yet encoded.
Operation 606 defines a main loop of the process with r iterations, such that in iteration j, columns κ(j) . . . κ(j+1)−1 of the main matrix are generated, where κ(i) is defined for example in box 608. This matrix denoted as M(j) has (m+p) rows and kj columns. In step 1 of operation 606, the information part of the matrix may be selected that is located on the first m+dj rows of the matrix (e.g., where dr-1 is defined as 0 according to 610).
The specific assignment of indices of u to the matrix may be chosen arbitrarily. The set of indices of u selected on iteration j is called (j) and may be deleted from the set to ensure that each index is selected exactly once. In step 2, the values of u in indices (j) may be ordered into a matrix of m+dj rows and kj columns. In step 4, each mega-column may be encoded by a systematic erasure correction code with p−dj parity mega-symbols. The parity of each mega-column may be concatenated as a column vector to the end of the column that served as information, creating sub-matrices M(j). After all the sub-matrices M(j) are created, the matrix M may be generated in operation 612 by concatenating the sub-matrices side by side. Only after the sub-matrices are concatenated may the matrix be completed according to operation 404 of
An additional generalization to the encoding approach in
Reference is made to
The input (e.g., K information symbols) and output (e.g., an information matrix M) of the process of
In operation 704, one or more processor(s) may initialize a set of integer numbers 0, 1, . . . , K−1, may initialize matrix M, and may initialize r pairs of matrices {tilde over (M)}(j) and {tilde over ({tilde over (M)})}(j). Matrix {tilde over (M)}(j) contains the information rows and {tilde over ({tilde over (M)})}(j) contains the corresponding erasure correction parity rows of sub-matrix M(j) consisting of columns κ(j) . . . κ(j+1)−1 of the main information matrix M (e.g., where κ(i) is defined in 706). All matrices may be initially empty (e.g., having zero rows) in operation 704, and may be populated in subsequent operations in
In operation 708, one or more processor(s) may execute a main loop of the process. The loop may have r iterations, such that in iteration j a sub-matrix of M is generated, for example, with rows in the range (m+dr-j), (m+dr-j+1), . . . , (m+dr-j-1−1), where we define dr-1=0 and d-1=p (e.g., defined according to 710). In step 1 of the iteration, the indices of the information vector u may be selected, which are used to generate the information part of the rows of this stage. The subset of selected indices is denoted as (j) and is removed from in order to ensure that each index is selected exactly once. As in
Operation 712 may be executed, in some embodiments, only if d0<p. The last sub-matrix of M corresponding to rows m+d0, m+d0+1, . . . , m+p−1 may contain only erasure parity symbols. As such, the processor(s) may only concatenate the remaining rows of all the erasure parity matrices {tilde over ({tilde over (M)})}(j) in step 1 to matrix M. The resulting matrix M may be input into the next stage in processing (e.g., operation 506 in
Reference is made to
The input and output of the process of
The output of the process may be an estimation of the information vector u, denoted here as û.
In some embodiments of the invention, one measurement of L may be jointly used for several symbols. In one example, a row of the product code is divided into chunks of {tilde over (l)} symbols, such that each chunk is modulated to a single constellation point. In this example, the matrix Y and L each contain
measurements, and elements of matrix L may be defined, for example, as follows:
In operation 804, one or more processor(s) may initialize variables including, for example, the set of row indices I of the main information matrix that were not yet decoded, and the set of column indices of the main information matrix that were not yet decoded. For example, before the first iteration, I and may be initialized as I={0, 1, 2, . . . , m+p−1} and ={0, 1, 2, . . . , k−1}. One or more processor(s) may also initialize matrix {circumflex over (M)} of (m+p)×k dimensions, which may include estimations of successfully decoded rows of M. Prior to a first iteration, one or more processor(s) may initialize all the elements of {circumflex over (M)}, for example, to be a special symbol defined as an “erasure” and denoted, e.g., as ε. If {circumflex over (M)}i,j=ε, then there may be insufficient information to properly estimate the i,j symbol of the main information matrix. If {circumflex over (M)}i,jϵF, its information may be determined to be a reliable estimation of Mi,j.
After the initialization operation 804, one or more processor(s) may start an iterative process that includes two types of iterations: row iterations (operation 806) and column iterations (operation 816).
In operation 806, one or more processor(s) may execute the row iterations. First, one or more processor(s) may initialize an indicator variable iterRowSuccess that indicates whether at least one new row was decoded successfully in the current iteration. One or more processor(s) may enumerate rows that were not decoded successfully yet. One or more processor(s) may decode each unsuccessfully decoded row by executing a row decoder (e.g., an ECC decoder for the row ECC). If this decoding attempt is successful, the row index may be deleted from the list of non-decoded rows (I), and the estimated information part of the row may be copied to row i of {circumflex over (M)} and the indicator variable iterRowSuccess may be toggled from False to True.
In operation 808, after the row iteration has finished, one or more processor(s) may check whether any success occurred in the current iteration.
In operation 810, if no row was decoded successfully in the current iteration, one or more processor(s) may determine a “decoding failed” status.
In operation 812, if at least one row was decoded successfully in the current iteration, one or more processor(s) may check whether I is empty (e.g., indicating that all the rows were decoded successfully). If so, the product code decoding is completed successfully and one or more processor(s) may end the process in operation 814. Otherwise, at least one row failed decoding. In that case, one or more processor(s) may proceed to operation 816 to apply the column iteration process.
In operation 816, one or more processor(s) may execute the column iteration. First, one or more processor(s) may initialize the iterColSuccess indicator to False. This variable indicates whether there was a column that had a successful decoding attempt in this iteration. One or more processor(s) may enumerate columns that were not decoded successfully yet in {circumflex over (M)} and attempt to decode them using erasure decoding of the column. Note that the erasure decoding process receives as input the relevant column in {circumflex over (M)} that may contain decoded symbols and erasures. If erasure decoding the column succeeds in recovering at least one erased mega-symbol, one or more processor(s) may update the corresponding column in {circumflex over (M)}. One or more processor(s) may further update the input matrix L and toggle iterColSuccess to True. If the entire column is decoded successfully (i.e., no erasures remain), one or more processor(s) may eliminate j from .
In the example ECC structure in
The update of matrix L on column medc+j may be such that the rows with validated decoded values reflect the decoded value with full certainty. For example, consider the case in which the input information includes a binary code and log likelihood ratios
An example column decoder may determine that a certain symbol of index i may have a value bϵ{0,1}. In this case, for example, LLRi=(−1)b·∞, where ∞ may denote infinity (e.g., in terms of the decoding system's algebra) or a limit approaching infinity. Such an input may be applicable, according to some embodiments of the invention, when the row code is systematic.
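The LLRi=(−1)b·∞ convention can be sketched as follows; the use of a large finite saturation value in place of infinity is an assumption of this sketch (the actual magnitude would depend on the decoder's fixed-point design):

```python
# Large finite magnitude standing in for "infinity" (assumed value).
LLR_SAT = 1e9

def pin_llr(llrs, i, b):
    """Pin symbol i to bit value b with full certainty, per LLR_i = (-1)^b * inf,
    so subsequent row decoding passes treat the recovered symbol as known."""
    llrs[i] = LLR_SAT if b == 0 else -LLR_SAT
    return llrs

llrs = pin_llr([0.3, -1.2, 0.0], 1, 1)
assert llrs[1] == -LLR_SAT   # bit value 1 maps to a saturated negative LLR
```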
In case the row ECC is non-systematic (e.g., non-systematic polar code), one or more processor(s) may update a frozen indicator list of row codes to have the specific recovered symbol frozen with the decoded value b as the frozen value. Accordingly, the next row iterations may use expurgated versions of the row codes (e.g., having fewer candidate codewords) and consequently there may be a better chance of decoding the codewords. For example, in some cases, the expurgated versions may have a higher minimum distance (e.g., minimum bit flip difference) between codewords than the original versions and consequently higher resiliency to noise.
In operation 818, after the column iteration finishes, one or more processor(s) may check a success condition. If no column was added to the list of decoded columns, one or more processor(s) may determine a decoding failure in operation 810.
Otherwise, in operation 820, if all the columns were decoded, one or more processor(s) may determine a decoding success and may output the rest of the decoded rows. Otherwise, one or more processor(s) may return to the row iteration (operation 806) and perform an additional decoding attempt.
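A single round of the row-pass/column-pass interplay described above can be sketched as follows. This is a simplified illustration under assumptions: the row decoder is abstracted as precomputed per-row results (a stand-in for the row ECC + EDC attempt), and the column pass corrects a symbol only when it is the single erasure in its column, using an XOR parity row (a p=1 stand-in for the erasure-correcting column code):

```python
ERASURE = None  # stand-in for the special erasure symbol epsilon

def two_pass_decode(row_results, m_plus_p, k):
    """One round of the two-pass scheme: rows that fail decoding become
    rows of erasures; the column pass then recovers single erasures.

    row_results[i] is the row decoder's output for row i: a list of k
    symbols on success, or None on failure (assumed toy interface).
    """
    # Pass 1 (rows): failed rows are marked as erased.
    M_hat = [list(r) if r is not None else [ERASURE] * k for r in row_results]
    # Pass 2 (columns): recover any single erasure from the XOR parity.
    for j in range(k):
        col = [M_hat[i][j] for i in range(m_plus_p)]
        if col.count(ERASURE) == 1:
            i = col.index(ERASURE)
            val = 0
            for s in col:
                if s is not ERASURE:
                    val ^= s
            M_hat[i][j] = val
    success = all(ERASURE not in row for row in M_hat)
    return M_hat, success

# Rows [1,2,3] and [4,5,6] with XOR parity row [5,7,5]; row 1 failed decoding.
M_hat, ok = two_pass_decode([[1, 2, 3], None, [5, 7, 5]], 3, 3)
assert ok and M_hat[1] == [4, 5, 6]   # the erased row is fully recovered
```

In the full scheme, recovered symbols would also update L (or the frozen lists) before the next row iteration; that feedback loop is omitted here for brevity.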
In
In some embodiments of the invention, a successful decoding may result in retrieving the information chunks from {circumflex over (M)} and placing them in the estimated information vector û.
Reference is made to
The input and output of the process of
In operation 904, one or more processor(s) may run the row ECC decoding process on the input. Examples of ECC decoding algorithms may include, but are not limited to, Belief Propagation (BP) for LDPC codes, BP or Successive Cancellation with List (SCL) for polar codes, the BCJR or Viterbi algorithm for convolutional codes, and the Berlekamp-Massey algorithm combined with the Chase algorithm for BCH or RS codes. Some of those algorithms provide indications of decoding failures. For example, in BP, one or more processor(s) may verify that the decoded codeword fulfills the code constraints, such as the code's parity check matrix in LDPC codes or the frozen symbols' fixed values in polar codes.
In operation 906, one or more processor(s) may check if the row decoder indicated such a failure. If so, the decoding is indicated as a failure in operation 908. If no failure was reported, one or more processor(s) may check the error detection codeword in operation 910. If the error detection check indicated a failure, then one or more processor(s) may report a failure in operation 908. Otherwise, one or more processor(s) may determine a successful row decoding in operation 912 and may output the decoded information part.
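The two-stage failure check above (ECC decoder indication, then EDC verification) can be sketched as follows; CRC-32 is an assumed stand-in for the document's error detection code, and the row ECC decoder is abstracted as a callback:

```python
import zlib

def decode_row(noisy_row, ecc_decode):
    """Row decoding with a two-stage failure check: first the row ECC
    decoder (ecc_decode is an assumed callback returning decoded bytes,
    or None when its internal failure indication fires), then a CRC-32
    check standing in for the error detection code."""
    decoded = ecc_decode(noisy_row)
    if decoded is None:
        return None                      # ECC decoder reported a failure
    payload, crc = decoded[:-4], decoded[-4:]
    if zlib.crc32(payload) != int.from_bytes(crc, "big"):
        return None                      # error detection check failed
    return payload                       # successful row decoding

data = b"hello"
word = data + zlib.crc32(data).to_bytes(4, "big")
assert decode_row(word, lambda r: r) == data             # clean pass-through
assert decode_row(b"x" + word[1:], lambda r: r) is None  # corruption detected
```

The second stage catches row ECC misdetections (the decoder converging to a wrong codeword), which is exactly the event the EDC parity is added to detect.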
Reference is made to
decodingMode—Indicates the computational effort or performance of the row iteration mechanism in the decoding process. As the decodingMode increases, the error correction capability of the row decoding process improves, at the cost of increased complexity, time, memory usage, and/or power consumption. According to some embodiments of the invention, in most cases, a “low power” decoding mode provides sufficient performance, which results in low consumption of resources and/or faster decoding. However, in cases in which the low power decoding mode does not achieve a desired (e.g., threshold) performance, one or more alternate or additional high performance decoding modes (e.g., with higher resource consumption) may be used to elevate the mode of operation (e.g., to a mode that may correct more errors but uses more time, memory, and/or power). The maximum decoding mode is denoted MAX_DEC_MODE. Examples of implementations of a high power or maximum decoding mode may include, but are not limited to:
a. Increasing the number of iterations in BP as the decoding mode increases.
b. Increasing the fixed-point resolution as the decoding mode increases.
c. Using hard-bit decoding for decodingMode==0 and soft decision decoding for decodingMode>0.
d. Increasing the list size in sequential decoding of convolutional codes, or in successive cancellation list (SCL) decoding of polar codes, as decodingMode increases.
e. Increasing the number of considered candidates in the chase algorithm as the decoding mode increases.
It should be understood that the above are only examples and a decodingMode case may include a combination of the above or further options. The implementation may have a relational table or data structure of available modes with their corresponding decoding specifications.
numIter—Indicates the number of times the row iteration process was performed. This variable is used to limit the processing time of the algorithm by comparing it to a maximum number of iterations, MAX_ITER_NUM, which may be a tunable parameter of the process.
In operation 1004, one or more processor(s) may increase the iteration number to commence the row iteration block.
In operation 1006, upon completing the row iteration block, one or more processor(s) may check if at least one row was successfully decoded at the previous iteration. If no row was decoded successfully, the process may proceed to operation 1008. In operation 1008, one or more processor(s) may determine that decoding fails if either the maximum number of iterations is reached (numIter=MAX_ITER_NUM), or the maximum decoding mode is applied. Otherwise, in operation 1010, one or more processor(s) may increase the decodingMode variable and return to operation 1004. According to some embodiments of the current invention, operation 1010 may be implemented according to (but not limited to) the following options:
Opt0: Independent increase: decodingMode←decodingMode+1.
Opt1: Implement dependency with the difference between MAX_ITER_NUM and numIter, e.g., diter=MAX_ITER_NUM−numIter. For example, as diter decreases (e.g., indicating a decreasing number of remaining iterations), decodingMode increases. At the extreme case or limit, e.g., diter=1, decodingMode←MAX_DEC_MODE.
Opt2: Implement dependency with |I| (number of non-decoded rows) and diter, such that as |I| increases, the decoding mode, decodingMode, also increases.
Opt3: Implement dependency with the previous iteration number that caused an increase of the decodingMode. For example, if the decoder increased the decodingMode τ iterations ago, the decoder may change the decodingMode based on τ (e.g., if τ is less than a threshold value, increase decodingMode by a larger gap than the standard increase).
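Options Opt0 through Opt2 can be sketched as follows; the specific constants (MAX_DEC_MODE, MAX_ITER_NUM, and the Opt2 threshold) are assumptions for illustration, and Opt3's history tracking is omitted for brevity:

```python
MAX_DEC_MODE = 3   # assumed maximum decoding mode
MAX_ITER_NUM = 8   # assumed iteration budget

def next_decoding_mode(mode, num_iter, num_undecoded_rows, opt=0):
    """Sketch of decodingMode escalation options Opt0-Opt2."""
    if opt == 0:
        # Opt0: independent increase.
        mode += 1
    elif opt == 1:
        # Opt1: as the remaining-iteration budget d_iter shrinks,
        # escalate more aggressively; at d_iter = 1 jump to the maximum.
        d_iter = MAX_ITER_NUM - num_iter
        mode = MAX_DEC_MODE if d_iter <= 1 else mode + 1
    elif opt == 2:
        # Opt2: more non-decoded rows (|I|) => a larger mode step
        # (threshold of 4 rows is an assumption).
        mode += 2 if num_undecoded_rows > 4 else 1
    return min(mode, MAX_DEC_MODE)   # never exceed MAX_DEC_MODE

assert next_decoding_mode(0, 1, 2, opt=0) == 1
assert next_decoding_mode(0, MAX_ITER_NUM - 1, 2, opt=1) == MAX_DEC_MODE
assert next_decoding_mode(0, 1, 10, opt=2) == 2
```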
After completing the column iteration (operation 1012), in operation 1014, one or more processor(s) may check a condition for iteration success. If no erased symbol was successfully recovered in the current iteration, the process may proceed to operation 1008. In operation 1008, one or more processor(s) may determine if the number of iterations<MAX_ITER_NUM, and if so, in operation 1010 may increase the decoding mode for the next row iteration. Otherwise, there was at least one column that was successfully decoded in the previous iteration. If all the columns were decoded, as determined in operation 1016, one or more processor(s) may declare a successful decoding in operation 1018. Otherwise, one or more processor(s) may compare numIter to MAX_ITER_NUM in operation 1020. If numIter≥MAX_ITER_NUM, one or more processor(s) may determine a decoding failure. Otherwise, if numIter<MAX_ITER_NUM, one or more processor(s) may return to operation 1004 to attempt to perform an additional row iteration. In operation 1022, prior to repeating the row iteration, one or more processor(s) may increase the decoding mode, for example, by logic (B) that may be different than logic (A) used in operation 1010. For example, in operation 1010 one or more processor(s) may increase decodingMode, whereas in operation 1022 this increase is optional. Additionally, or alternatively, operation 1022 and/or operation 1010 may be implemented according to options Opt0, Opt1, Opt2 and Opt3 above (each operation may use the same or different option and/or the same or different parameter configurations). The implementations of both operations 1022 and 1010 are provided only as examples of embodiments and are not meant to be limiting in any way.
A device or system operating according to some embodiments of the invention may include hardware (e.g., decoder(s) 126 of
According to some embodiments of the invention, one or more processor(s) may implement bounded distance erasure correction decoding in the column iteration block (e.g., operation 816 in
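The gating of a column recovery attempt by the code's erasure-correction capability can be sketched as follows; the solve callback and the None-as-erasure convention are assumptions of this sketch:

```python
def bounded_distance_erasure_decode(col, p, solve):
    """Bounded-distance erasure decoding for a single column: recovery is
    attempted only when the erasure count is within the column code's
    correction capability p. 'solve' is an assumed callback that fills the
    erased positions (e.g., a linear-system solver over the code's field);
    None marks an erased symbol."""
    erased = [i for i, s in enumerate(col) if s is None]
    if len(erased) > p:
        return None          # beyond recoverability: leave the column as-is
    return solve(col, erased)

# One erasure with p=1 is attempted; two erasures with p=1 are skipped.
assert bounded_distance_erasure_decode(
    [1, None, 3], 1, lambda c, e: "attempted") == "attempted"
assert bounded_distance_erasure_decode(
    [None, None, 3], 1, lambda c, e: "attempted") is None
```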
Reference is made to
A device or system operating according to some embodiments of the invention may include hardware (e.g., decoder(s) 126 of
According to some embodiments of the invention, the decoder may output rows as soon as they are decoded successfully (e.g., prior to decoding the next or one or more subsequent rows). In some embodiments of the invention, rows that were not decoded successfully may be output as a hard decision of the original input, for example, including recoveries made by the columns code decoder. In some embodiments, rows that were not decoded successfully may be output as a decoded word of the row ECC decoder, for example, including the erasure recoveries made by the column code decoder.
Some embodiments of the invention implement the structure in
According to some embodiments of the present invention, a row that is not decoded or recovered may be provided as an output of the row ECC decoder, for example, together with an indicator that the entire product codeword was not successfully decoded. The code structure allows estimations of the upper bounds on the block error rate (BLER) and bit error rate (BER) of the decoding system given simulation results of the row ECC decoder.
Reference is made to
Box 1202 provides some definitions and notation. The first four terms (1)-(4) may be measured by employing a simulation of the row ECC decoder. Specifically, PB(rowECC) is the BLER of the row ECC decoder and PMD(rowECC) is the probability of decoding failure in row ECC in which the decoded word is a codeword (however, the wrong one). The latter event is also known as the row ECC decoding misdetection event. {circumflex over (B)}(rowECC) and {tilde over (B)}(rowECC) are BER measures that are conditioned on the event of decoding failure of the row ECC decoder. The difference between these measures is that {circumflex over (B)}(rowECC) is conditioned on the event that the decoded word was not a codeword (thereby the row ECC decoder can detect this error event), while {tilde over (B)}(rowECC) is conditioned on the event that the decoded word is a codeword (but the wrong one). The next parameter (5) is the erasure probability experienced by the column erasure correction decoder (and no misdetection of row errors occurred), denoted as e. This erasure probability may be approximated by PB(rowECC) (which is an upper-bound because it includes all the error events, even those that are not detectable). The next parameter (6), PMD is the probability of misdetection (e.g., at least one row failed to be decoded by the ECC and was not detected by the ECC or EDC). The probability of this event may be estimated, e.g., as a multiplication of two probabilities: PMD(rowECC) and a probability of misdetection in the CRC (approximated by 2−m
Boxes 1204 and 1206 define upper-bounds for the BLER and BER of the construction, respectively.
In 1204, the BLER may be calculated based on two events: (i) at least one row misdetection has occurred (e.g., approximated by (m+p)·PMD); and (ii) no misdetection of a row occurred and the number of erased rows is beyond the column code erasure recoverability.
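The two-event bound can be sketched numerically as follows; the binomial form of event (ii) assumes rows are erased independently with the erasure probability e defined above:

```python
from math import comb

def bler_upper_bound(m, p, e, p_md):
    """Upper bound on product-code BLER from the two events above:
    (i) at least one of the m+p rows suffers a misdetection, approximated
    by (m+p)*p_md; plus (ii) no misdetection but more than p rows erased,
    beyond the column code's erasure recoverability. Assumes independent
    row erasures with probability e (an assumption of this sketch)."""
    n_rows = m + p
    misdetect = n_rows * p_md
    too_many_erasures = sum(comb(n_rows, i) * e**i * (1 - e)**(n_rows - i)
                            for i in range(p + 1, n_rows + 1))
    return misdetect + too_many_erasures

# With no misdetections, m=1, p=1: only the both-rows-erased event remains.
assert abs(bler_upper_bound(1, 1, 0.5, 0.0) - 0.25) < 1e-12
```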
In 1206, the BER may be calculated based on a summation of two terms: term 1208 defines BER(1) for the case in which no misdetection occurred and term 1210 defines BER(2) for the case that at least one misdetection occurred. Box 1212 shows the computation for BER(1). Box 1214 shows the computation for BER(2) for the case that exactly τ rows had misdetection events. Those rows are assumed to be specific (e.g., the first τ rows of the codeword matrix). The event may be approximated by the following components:
Term 1216 represents the BER(2) contribution of the rows that had misdetection. The column decoder typically does not change this term, which therefore equals the BER of the row ECC decoder given that a row ECC decoder misdetection occurred.
Terms 1218 and 1220 are based on enumeration of the events that i decoding failures occurred on the rest of the rows.
Term 1218, 1≤i≤p: The number of detected errors is within the recovery capability of the column code. As a consequence, the column code may be used to recover those i rows. However, due to the rows that were not decoded correctly, this recovery may be erroneous. In this case, the BER may have an upper bound, e.g., of ‘1’.
Term 1220, p+1≤i≤m+p−τ: The number of detected errors is greater than the recovery capability of the column code decoder. As a consequence, the column code may not be used to recover those rows. As such, the term may define the BER of the row code decoder (e.g., assuming that an error occurred and was detected).
Consider the case of the aforementioned embodiment with m=64, p=2, n=4400 bits, k=3824 bits, m_edc=26 bits.
Each row code may be an LDPC code and the error detection code may be a CRC with 26 bits. The column erasure correction code may be a shortened Reed-Solomon code with symbol size θ=8 bits. The code length may be N=(m+p)·n=290,400 bits and the code rate may be R=m·k/N=(64·3824)/290,400≈0.843.
The LDPC code may be a quasi-cyclic binary irregular code of block length 4400 bits and information length 3850 bits (e.g., with lifting factor Z=55). A serial-schedule belief propagation (BP) algorithm with a maximum of 20 iterations may be used as the row decoder. The column decoder may be a solver of a system of two equations with two variables over GF(256).
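The column decoder's "system of two equations with two variables over GF(256)" may be sketched as below. The field polynomial 0x11B (the AES polynomial) and the use of Cramer's rule are choices made for illustration; the text does not specify them:

```python
def gf_mul(a, b):
    """Multiply two GF(2^8) elements modulo the polynomial 0x11B."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
        b >>= 1
    return r

def gf_inv(a):
    """Brute-force multiplicative inverse; adequate for a sketch."""
    for x in range(1, 256):
        if gf_mul(a, x) == 1:
            return x
    raise ZeroDivisionError("0 has no inverse in GF(256)")

def solve2(a11, a12, a21, a22, b1, b2):
    """Solve a 2x2 linear system over GF(256) by Cramer's rule.

    In GF(2^8) both addition and subtraction are XOR, so the usual
    determinant formulas use ^ in place of +/-.
    """
    det = gf_mul(a11, a22) ^ gf_mul(a12, a21)
    inv = gf_inv(det)
    x1 = gf_mul(inv, gf_mul(b1, a22) ^ gf_mul(b2, a12))
    x2 = gf_mul(inv, gf_mul(a11, b2) ^ gf_mul(a21, b1))
    return x1, x2
```

With p=2 redundant rows, recovering two erased Reed-Solomon symbols per column reduces to exactly one such 2x2 solve.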
This estimation may initially involve collecting results of the LDPC row simulation. Table 1302 contains empirical estimations of several parameters. The SNR is measured in Eb/N0 [dB] units, where Eb is the energy per information bit, given by Eb=Es/R_LDPC (e.g., where Es is the energy per channel use and R_LDPC is the LDPC rate), and N0=2σ², where σ is the Gaussian noise standard deviation. The input BER in 1302 is the uncoded bit error rate of the channel (e.g., the BER achieved if no ECC is employed). The rest of the rows contain the empirical results of the simulation. As indicated in 1304, each SNR value was obtained by simulating frames until, e.g., at least 100 block decoding errors were captured.
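The Eb/N0 computation above may be sketched as follows; unit-energy signaling (Es=1) is an assumption of this sketch:

```python
from math import log10

def ebn0_db(sigma, rate, es=1.0):
    """Eb/N0 in dB, with Eb = Es / rate and N0 = 2 * sigma**2."""
    eb = es / rate
    n0 = 2 * sigma**2
    return 10 * log10(eb / n0)
```

For example, with the LDPC rate 3850/4400 = 0.875 and σ² = 0.5 (so N0 = 1), the Eb/N0 is 10·log10(8/7) ≈ 0.58 dB.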
The translation between the simulated LDPC row code and the performance estimation of the entire product code may be computed according to 1306. Firstly, the Eb/N0 of the product code may be greater than that of the LDPC code, because the rate of the product code is smaller than that of the LDPC code. Secondly, some embodiments of the invention may use overestimations of B̂(LDPC), B̃(LDPC) and PMD(LDPC), as shown in 1306 (e.g., note that those parameters correspond to B̂(rowECC), B̃(rowECC) and PMD(rowECC) from FIG. 12).
The coding gain may be measured as the gap on the Eb/N0 axis between the uncoded BER curve and the coded scheme BER curve, measured at BER=b.
Reference is made to FIG. 15, which is a flowchart of a method for multi-pass decoding of a product code, in accordance with some embodiments of the invention.
In operation 1500, one or more processor(s) (e.g., receiver processor(s) 120 of FIG. 1) may receive a product codeword encoding input data in a plurality of dimensions (e.g., a first dimension and a second dimension).
The product codeword may be arranged in a matrix as described in reference to FIG. 2.
In operation 1510, the one or more processor(s) may execute a first pass of the multi-pass decoder for each of a plurality of first dimension input codewords. In the first pass, the multi-pass decoder may decode the first dimension input codeword using a first dimension error correction code.
In operation 1520, the one or more processor(s) may erase the first dimension input codeword (or a segment or element thereof) if errors are detected in the decoded first dimension input codeword (e.g., if the first pass decoding in operation 1510 fails to decode the first dimension input codeword). Thus, the first pass decoding acts as an erasure channel that inserts erasures into the product codeword in operation 1520, for the second pass decoding to correct with the erasure correction codes in operation 1530. The one or more processor(s) may determine whether the first dimension codeword decoded in operation 1510 has any (or an above-threshold number of) errors remaining (e.g., whether the first pass decoding failed) using error detection codes (e.g., 108 of FIG. 1).
In operation 1530, the one or more processor(s) may execute a second pass of the multi-pass decoder for each of a plurality of second dimension input codewords. In the second pass, the multi-pass decoder may decode the second dimension input codeword using a second dimension erasure correction code decoder to correct an erasure in the second dimension input codeword that was inserted in the first dimension decoding (e.g., an erasure inserted in the first pass). In some embodiments, the second pass decoding is only executed for a second dimension input codeword if the first pass decoding failed for a corresponding intersecting first dimension input codeword (e.g., if that first dimension input codeword was erased).
A process or processor(s) may repeat the first and/or second pass decoding in operations 1510 and/or 1530 for one or more iterations, for example, if one or more codewords contain errors after the first iteration of the first and second passes. In one embodiment, a process or processor(s) may repeat the first pass decoding if the initial first pass decoding of a first dimension input codeword fails and the second pass decoding of one or more intersecting second dimension codewords succeeds in operation 1530. Repeating first pass decoding operation 1510 may propagate any corrected erasures from the second pass to increase the probability of successfully decoding a first dimension input codeword that had previously failed to decode in the initial iteration of the first pass.
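The control flow of operations 1510-1530 may be sketched as below. The callable interfaces decode_row and correct_erasures, and the use of None as an erasure marker, are assumptions of this sketch rather than details given in the text:

```python
def multipass_decode(rows, decode_row, correct_erasures, max_iters=3):
    """Sketch of operations 1510-1530: first-pass row decoding that
    erases failed rows, then second-pass column erasure correction.

    decode_row(word) -> (decoded, ok): first dimension ECC decoder plus
        error detection (assumed interface).
    correct_erasures(rows) -> (rows, ok): second dimension erasure
        corrector; fills rows marked None when recoverable (assumed).
    """
    for _ in range(max_iters):
        erased = False
        # First pass (operations 1510/1520): decode each first dimension
        # codeword; on failure, insert an erasure (None) for the second pass.
        for i, row in enumerate(rows):
            if row is None:
                continue
            decoded, ok = decode_row(row)
            if ok:
                rows[i] = decoded
            else:
                rows[i] = None
                erased = True
        if not erased:
            return rows, True   # all rows decoded; output (operation 1540)
        # Second pass (operation 1530): correct the inserted erasures; a
        # corrected row lets a repeated first pass succeed where it failed.
        rows, ok = correct_erasures(rows)
        if not ok:
            return rows, False  # erasures exceed column recoverability
    return rows, all(r is not None for r in rows)
```

Repeating the loop realizes the iteration described above: corrections from the second pass propagate into the next first pass.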
Operations 1510-1530 may repeat for each of a plurality of first and second dimension codewords (e.g., row and column codewords). Operations 1510-1530 may repeat for each of a plurality of sequential data blocks (e.g., column pillars 202-204 of FIG. 2).
In operation 1540, the one or more processor(s) may output the decoded input codewords. The processor(s) may perform operations or tasks on the decoded codewords, or deliver the decoded codewords to memory units for further processing or to a host (e.g., memory unit 124 of FIG. 1).
Other operations or orders of operations may be used.
In accordance with an embodiment of the present invention, executed "in parallel" as used herein may refer to executed substantially concurrently, simultaneously, during the same iteration, prior to completion of a subsequent iteration, or during completely or partially overlapping time intervals.
In some embodiments of the invention, the first dimension is a row dimension and the second dimension is a column dimension, while in other embodiments of the invention, the first dimension is a column dimension and the second dimension is a row dimension. Accordingly, embodiments of the invention that describe rows and columns may also cover product codes in which the rows and columns are inverted or transposed into columns and rows, respectively. For example, although row codes are described to include parity ECC and/or EDC, and column codes are described to include erasure codes, the orientation of the rows and columns (and their decoding order) may be inverted, such that row codes include erasure codes and column codes include ECC and/or EDC.
Although product codes are described as having two dimensions (e.g., row codes in a first dimension and column codes in a second dimension), embodiments of the invention may include product codes having any plurality (ℓ) of dimensions comprising ℓ orthogonal and independent codes encoding each data element. Thus, each data element may be encoded and decoded by ℓ ECCs, for example, sequentially or in parallel. In one embodiment, the ℓ codes may be executed sequentially until the EDC decoder indicates there are no (or a below-threshold number of) errors remaining.
Data structures that are row or column codes (or first or second dimension codes) may be stored or represented as standard or linear codes with their implementation as either row or column codes (or first or second dimension codes) designated, for example, by row or column (or dimension) tags or identifiers, by the sequential order of the codes or codewords (e.g., user information stored first, then row or first dimension redundancy codes second, then column or second dimension redundancy codes third), or by their locations in memory (e.g., coordinates (i,j) of a product code pre-designated to represent a row or a column).
In the foregoing description, various aspects of the present invention have been described. For purposes of explanation, specific configurations and details are set forth in order to provide a thorough understanding of the present invention. However, it will also be apparent to a person of ordinary skill in the art that the present invention may be practiced without the specific details presented herein. Furthermore, well-known features may be omitted or simplified in order not to obscure the present invention.
Unless specifically stated otherwise, as apparent from the foregoing discussions, it is appreciated that throughout the specification discussions utilizing terms such as “processing,” “computing,” “calculating,” “determining,” or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulates and/or transforms data represented as physical, such as electronic, quantities within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices.
In accordance with any of the aforementioned embodiments of the invention, systems and methods may be software-implemented using dedicated instruction(s) or, alternatively, hardware-implemented using designated circuitry and/or logic arrays.
In accordance with any of the aforementioned embodiments of the invention, systems and methods may be executed using an article such as a computer or processor readable non-transitory storage medium, or a computer or processor storage medium, such as for example a memory (e.g., one or more memory unit(s) 122 and 124 of FIG. 1), encoding, including or storing instructions which, when executed by a processor or controller, carry out methods disclosed herein.
Different embodiments are disclosed herein. Features of certain embodiments may be combined with features of other embodiments; thus, certain embodiments may be combinations of features of multiple embodiments.
Although the particular embodiments shown and described above will prove to be useful for the many distribution systems to which the present invention pertains, further modifications of the present invention will occur to persons skilled in the art. All such modifications are deemed to be within the scope and spirit of the present invention as defined by the appended claims.
Number | Date | Country
---|---|---
20180219561 A1 | Aug. 2018 | US