In coding theory, concatenated codes are a class of error-correcting codes that are derived by combining an inner code and an outer code. Concatenated codes allow for the handling of symbol errors and erasures, and phased burst errors and erasures. However, many applications require a reduced number of parity symbols compared to those provided by concatenated codes.
The accompanying drawings, which are incorporated in and form a part of this specification, illustrate and serve to explain the principles of embodiments in conjunction with the description. Unless specifically noted, the drawings referred to in this description should be understood as not being drawn to scale.
Reference will now be made in detail to various embodiments, examples of which are illustrated in the accompanying drawings. While the subject matter will be described in conjunction with these embodiments, it will be understood that they are not intended to limit the subject matter to these embodiments. Furthermore, in the following description, numerous specific details are set forth in order to provide a thorough understanding of the subject matter. In other instances, conventional methods, procedures, objects, and circuits have not been described in detail as not to unnecessarily obscure aspects of the subject matter.
Some portions of the detailed descriptions which follow are presented in terms of procedures, logic blocks, processing and other symbolic representations of operations on data bits within a computer memory. These descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. In the present application, a procedure, logic block, process, or the like, is conceived to be a self-consistent sequence of steps or instructions leading to a desired result. The steps are those requiring physical manipulations of physical quantities. Usually, although not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated in a computer system.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussions, it is appreciated that throughout the present discussions terms such as “selecting”, “encoding”, “transmitting”, “receiving”, “computing”, “applying”, “decoding”, “updating”, or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Furthermore, in some embodiments, methods described herein can be carried out by a computer-usable storage medium having instructions embodied therein that when executed cause a computer system to perform the methods described herein.
Example techniques, devices, systems, and methods for implementing a coding scheme are described herein. Discussion begins with a brief overview of a coding scheme, and how it addresses phase burst errors and erasures and symbol burst errors and erasures. Next, encoding using the coding scheme is described. Discussion continues with various embodiments used to decode the coding scheme. Next, several example methods of use are described. Lastly, an example computer environment is described.
Transmission and storage systems suffer from different types of errors contemporaneously. For example, a memory cell in a data storage system may be altered by an alpha particle that hits the memory cell. In some cases entire blocks of memory cells may become unreliable due to the degradation of hardware. Such data transmission and data storage systems can be viewed as channels that introduce symbol errors and block errors, where block errors encompass a plurality of contiguous information symbols. It should be understood that as discussed herein, the terms phased burst errors and block errors may be used interchangeably. Moreover, if additional information (e.g., side information) is available, for instance based on previously observed erroneous behavior of a memory cell or cells, a symbol erasure or block erasure is modeled. In an embodiment, erasures differ from errors in that a location of an erasure is known while the location of an error is not. In various embodiments described herein, a coding scheme is operable to perform the task of a concatenated code using fewer parity symbols than a concatenated coding scheme performing the same task.
Note that the terms horizontal and vertical (and columns and rows) are terms used to describe a visualization of an array or matrix and may be interchanged (i.e., the visualization of an array may be turned on its side). In various examples decoders may be selected for combinations of errors (220, 230, 240 and 250) that are more efficient than a corresponding decoder for a suitably chosen Reed-Solomon code of length mn over F.
In various embodiments, encoding is performed on information symbols 210 by applying a coding scheme 100 to information symbols 210. In describing the coding scheme 100, it is necessary to describe the channel model and code definition. In an example channel model, an m×n stored (also referred to herein as transmitted or encoded) array 206 (Γ) over F is subject to symbol errors 220, block errors 230, symbol erasures 240, and block erasures 250.
In one example, block errors 230 (also referred to as error type (T1)) are a subset of columns 260 in array 200 that may be indexed by
J⊂⟨n⟩,  Equation 1

where ⟨n⟩ denotes the set of integers {0, 1, . . . , n−1}, and ⟨a, b⟩ denotes the set of integers {a, a+1, a+2, . . . , b−1}.
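The index-set notation above can be mirrored by small helpers (a sketch; the function names are ours, not the patent's):

```python
def bracket(n):
    """<n> denotes the set of integers {0, 1, ..., n-1}."""
    return set(range(n))

def bracket2(a, b):
    """<a, b> denotes the set of integers {a, a+1, ..., b-1}."""
    return set(range(a, b))

assert bracket(4) == {0, 1, 2, 3}
assert bracket2(2, 5) == {2, 3, 4}
```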
In one example, block erasures 250 (also referred to as (error type (T2)) are a subset of columns 260 in array 200 that may be indexed by
K⊂⟨n⟩\J.  Equation 2
In one example, symbol errors 220 (also referred to as error type (T3)) are a subset of symbols 210 in array 200 that may be indexed by
L⊂⟨m⟩×(⟨n⟩\(K∪J)).  Equation 3
In one example, symbol erasures 240 (also referred to as (error type (T4)) are a subset of symbols 210 in array 200 that may be indexed by
R⊂(⟨m⟩×(⟨n⟩\K))\L.  Equation 4
An error matrix (ε) over F represents the alterations that have occurred on encoded array 206 (e.g., alterations that may have occurred during transmission). The received array 200 (referred to herein as γ, or the corrupted message) to be decoded is given by the m×n matrix:
γ=Γ+ε.  Equation 5
In such an example, erasures are seen as errors with the additional side information K and R indicating the location of these errors.
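The four error types and the index-set constraints of Equations 1 through 4 can be illustrated with a small sketch; the dimensions and positions below are hypothetical:

```python
m, n = 4, 8                      # hypothetical array dimensions
N = set(range(n))                # <n> = {0, ..., n-1}

J = {2, 5}                       # (T1) block errors: column indices
K = {0}                          # (T2) block erasures: columns, disjoint from J
L = {(1, 3), (0, 7)}             # (T3) symbol errors: positions outside columns K and J
R = {(3, 3), (2, 4)}             # (T4) symbol erasures: outside columns K, not in L

# Equations 1-4 as membership checks:
assert J <= N                                                   # Equation 1
assert K <= N - J                                               # Equation 2
assert all(i in range(m) and j in N - (K | J) for i, j in L)    # Equation 3
assert all(i in range(m) and j in N - K and (i, j) not in L
           for i, j in R)                                       # Equation 4

tau, rho, theta, sigma = len(J), len(K), len(L), len(R)
```

The side information K and R (the erasure locations) is exactly what distinguishes erasures from errors here.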
In an example:
τ=|T|, ρ=|K|, θ=|L|, and σ=|R|,  Equation 6

where T, like J above, denotes the set of columns containing block errors 230.
In other words, the total number of symbol errors 220 (resulting from error types (T1) and (T3)) is at most mτ+θ, and the total number of symbol erasures (resulting from erasure types (T2) and (T4)) is at most mρ+σ. Thus, all error and erasure types (220, 230, 240 and 250, or (T1), (T2), (T3) and (T4)) can be corrected (while occurring simultaneously) using a code of length mn over F with a minimum distance of at least

m(2τ+ρ)+2θ+σ+1.  Equation 7
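The bound of Equation 7 can be evaluated directly; the counts below are hypothetical:

```python
def required_distance(m, tau, rho, theta, sigma):
    """Minimum distance sufficient to correct tau block errors, rho block
    erasures, theta symbol errors, and sigma symbol erasures at once
    (Equation 7)."""
    return m * (2 * tau + rho) + 2 * theta + sigma + 1

# Hypothetical example: 4-row arrays, 1 block error, 1 block erasure,
# 2 symbol errors, and 1 symbol erasure:
assert required_distance(4, 1, 1, 2, 1) == 18
```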
In an example, the code (C) is a linear code (with parameters [n, k, d]) over F. Matrix 130 (Hin) is an m×(mn) matrix over F that satisfies the following two properties for a positive integer (δ):
Hin=(H0|H1| . . . |Hn−1)  Equation 8
In an example, a codeword is defined to be an m×n encoded matrix (Γ)

Γ=(Γ0|Γ1| . . . |Γn−1)  Equation 9

over F (where Γj stands for column j of Γ) such that each row in

Z=(H0Γ0|H1Γ1| . . . |Hn−1Γn−1)  Equation 10

is a codeword of C (horizontal code 120).
In an example, the code C′ is an m-level interleaving of a horizontal code 120 (C), such that an m×n matrix
Z=(Z0|Z1| . . . |Zn−1) Equation 11
over F is a codeword of C′ if each row in Z belongs to C. Each column in Z then undergoes encoding by an inner encoder of rate one, wherein the encoder of column j is given by the bijective mapping Zj→Hj−1Zj.
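The rate-one column mapping Zj→Hj−1Zj can be sketched over a small prime field. The matrix H below is a hypothetical stand-in for a single invertible sub-matrix Hj; because H is invertible, the mapping loses no information and Zj is recovered exactly:

```python
p = 7  # hypothetical small prime field F = GF(7)

def mat_vec(M, v):
    """Matrix-vector product modulo p."""
    return [sum(a * b for a, b in zip(row, v)) % p for row in M]

def inverse(M):
    """Gauss-Jordan inversion modulo p (assumes M is invertible)."""
    k = len(M)
    A = [row[:] + [int(i == j) for j in range(k)] for i, row in enumerate(M)]
    for col in range(k):
        piv = next(r for r in range(col, k) if A[r][col] % p)
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], p - 2, p)       # Fermat inverse of the pivot
        A[col] = [x * inv % p for x in A[col]]
        for r in range(k):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [(x - f * y) % p for x, y in zip(A[r], A[col])]
    return [row[k:] for row in A]

H = [[1, 2, 0], [0, 1, 3], [2, 0, 1]]   # hypothetical invertible H_j over GF(7)
Hinv = inverse(H)
Zj = [5, 1, 6]                          # a column of Z
col = mat_vec(Hinv, Zj)                 # stored column: H_j^{-1} Z_j
assert mat_vec(H, col) == Zj            # rate one: H_j recovers Z_j exactly
```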
This section will address a plurality of decoders. First, a polynomial-time decoding process for all errors and erasures is presented. Next, specialized decoders are presented. The first specialized decoder corrects (T1), (T2), and (T4) errors and erasures but not (T3) errors (i.e., symbol errors 220). By defining the encoder with the help of C and Hin, and using the decoders described herein, the decoding complexity scales with n³. In various examples, parameters such as m scale with n.
A. Polynomial-Time Decoding
In an embodiment, the horizontal code 120 (C) is a Generalized Reed-Solomon (GRS) code over F and Hin is an arbitrary m×(mn) matrix over F that satisfies two properties:
Hin=(H0|H1| . . . |Hn−1)  Equation 12

with H0, H1, . . . , Hn−1 being m×m sub-matrices of Hin, wherein each Hj is invertible over F.
Columns of m×n arrays may be regarded as elements of the extension field GF(qm) (according to some basis of GF(qm) over F). In this example, the matrix Z is a codeword of a GRS code (referred to as C′) over GF(qm), where C′ has the same code locators as a code C.
In an example, Γ is referred to as a codeword and is transmitted as an m×n array. In this example, Y is the received m×n array 200, which may have been corrupted by τ errors of type (T1) (block errors 230) and θ errors of type (T3) (symbol errors 220), wherein
τ≦(d/2)−1 Equation 13
(where d is the minimum distance of a horizontal code 120 (C) as discussed below) and
θ≦(δ−1)/2 Equation 14
First, an m×n array is computed from the received array 200:

Y=(H0γ0|H1γ1| . . . |Hn−1γn−1)  Equation 15

where γ 200 and Y each contain τ+θ≦(d+δ−3)/2 erroneous columns. In other words, Y is a corrupted version of a codeword of C′. In one example a list decoder for C′ can be applied to Y. In various examples, a list decoder returns a list of up to a prescribed number (herein referred to as l) of codewords of C′, and the returned list is guaranteed to contain the correct codeword Γ, provided that the number of erroneous columns 260 in γ 200 does not exceed the decoding radius of C′, which is [nΘl(d/n)]−1, where Θl(d/n) is the maximum over s∈{1, 2, . . . , l} of the following expression:
Thus, if l is such that
nΘl(d/n)≧(d+δ−1)/2  Equation 17
then the returned list will contain the correct codeword:
Z=(H0Γ0|H1Γ1| . . . |Hn−1Γn−1)  Equation 18
of C′. For each array Z′ in the list the respective array 206 can be computed,
Γ′=(H0−1Z′0|H1−1Z′1| . . . |Hn−1−1Z′n−1).  Equation 19

Only one Γ′, namely, the transmitted array, can correspond to an error pattern of up to (d/2)−1 block errors and up to (δ−1)/2 symbol errors. In other words, the transmitted array can be identified by checking each computed Γ′ against the received array γ 200.
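That screening step — keeping only the candidate whose difference from γ decomposes into at most (d/2)−1 block errors plus at most (δ−1)/2 symbol errors — can be sketched as follows; the helper name and the greedy designation of the heaviest columns as block errors are our illustration:

```python
def consistent(candidate, received, tau_max, theta_max):
    """Check whether received - candidate can be explained by at most
    tau_max block (column) errors plus at most theta_max symbol errors.
    Greedily designates the heaviest columns as block errors."""
    m, n = len(candidate), len(candidate[0])
    col_weight = [sum(candidate[i][j] != received[i][j] for i in range(m))
                  for j in range(n)]
    heaviest = sorted(col_weight, reverse=True)
    leftover = sum(heaviest[tau_max:])   # symbols not covered by block errors
    return leftover <= theta_max

gamma = [[1, 0, 2, 3],
         [0, 5, 2, 0],
         [1, 1, 2, 6]]
cand  = [[1, 0, 2, 3],
         [0, 4, 2, 0],
         [1, 3, 2, 0]]   # differs in column 1 (2 symbols) and column 3 (1 symbol)
assert consistent(cand, gamma, tau_max=1, theta_max=1)       # column 1 as block error
assert not consistent(cand, gamma, tau_max=0, theta_max=1)   # 3 symbol errors remain
```

Greedily covering the heaviest columns removes the maximum diff weight per designated block error, so the check is exact.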
In some examples, the coding scheme 100 can be generalized to handle (T2) and (T4) errors (i.e., block erasures 250 and symbol erasures 240) by applying a list decoder for the GRS code obtained by puncturing C′ to the columns 260 that are affected by erasures. To perform this, the minimum distance (d) is replaced with d−ρ−σ.
B. Decoding (T1), (T2), and (T4) Errors and Erasures but not (T3) (e.g., block errors 230, block erasures 250, and symbol erasures 240 but not symbol errors 220).
In one example, a code C is selected for the case where there are no (T3) errors (i.e., there are no symbol errors 220), or
|L|=θ=0. Equation 20
In an example, an m×n matrix Γ 206 is transmitted and an m×n matrix
γ=Γ+ε  Equation 21
is received, where
ε=(εκ,j)κ∈⟨m⟩,j∈⟨n⟩  Equation 22
is an m×n error matrix, with T(⊂⟨n⟩) and K(⊂⟨n⟩\T) indexing the columns in which block errors (respectively, block erasures) have occurred, and with R(⊂⟨m⟩×⟨n⟩) being a nonempty set of positions in which symbol erasures have occurred. In some examples it is assumed that d, τ(=|T|), and ρ(=|K|) satisfy
2τ+ρ≦d−2 Equation 23
and that σ(=|R|) satisfies

0<σ≦m.  Equation 24
In this example Y 200 and E are defined as
Y=(H0γ0|H1γ1| . . . |Hn−1γn−1)  Equation 25
and
E=(eh,j)h∈⟨m⟩,j∈⟨n⟩=(H0ε0|H1ε1| . . . |Hn−1εn−1)  Equation 26
Thus,
Y=Z+E Equation 27
where Z is given by
Z=(H0Γ0|H1Γ1| . . . |Hn−1Γn−1)  Equation 28
as discussed above. In particular, in this example, every row of Z is a codeword of a horizontal code 120 taken to be a Generalized Reed-Solomon (GRS) code C=CGRS, the latter being a linear code over F which is defined by the parity-check matrix HGRS=(αji)i∈⟨d−1⟩,j∈⟨n⟩, where α0, α1, . . . , αn−1 are distinct elements of F.
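A parity-check matrix of the form HGRS=(αji) can be exercised over a small prime field. The codeword parametrization below (column multipliers uj with cj=uj·f(αj), deg f≦n−d) is one standard way to generate GRS codewords and is used here only for illustration:

```python
p = 13                            # hypothetical prime field, F = GF(13)
n, d = 6, 4                       # code length and designed minimum distance
alpha = [1, 2, 3, 4, 5, 6]        # distinct code locators

def u(j):
    """Column multiplier u_j = prod_{i != j} (alpha_j - alpha_i)^{-1}."""
    prod = 1
    for i in range(n):
        if i != j:
            prod = prod * (alpha[j] - alpha[i]) % p
    return pow(prod, p - 2, p)

f = [7, 0, 5]                     # hypothetical message polynomial, deg <= n-d = 2
ev = lambda poly, x: sum(c * pow(x, k, p) for k, c in enumerate(poly)) % p
c = [u(j) * ev(f, alpha[j]) % p for j in range(n)]

# Every codeword lies in the kernel of the Vandermonde rows alpha_j^i, i in <d-1>:
syndromes = [sum(c[j] * pow(alpha[j], i, p) for j in range(n)) % p
             for i in range(d - 1)]
assert syndromes == [0, 0, 0]
```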
Next, denote the elements of R by
R={(κl,jl)}l∈⟨σ⟩.  Equation 29
In an example, for some l∈⟨σ⟩, the univariate polynomial B(l)(y) (of degree σ−1) is defined,

where βκ,j are distinct and nonzero in F for all κ∈⟨m⟩ and j∈⟨n⟩; the respective matrix Hin=(Hj)j∈⟨n⟩ is then a parity-check matrix of an [mn, m(n−1), m+1] GRS code over F, and where
e(l)=(ej(l))j∈⟨n⟩

denotes row σ−1 of the (m+σ−1)×n matrix whose entries are given by the coefficients of the bivariate polynomial product B(l)(y)E(y,x), where E(y,x) is the bivariate polynomial in x and y with the coefficient of yixj being the entry of E that is indexed by (i,j). As an example,
supp(e(l))⊂T∪K∪{jl}, l∈⟨σ⟩.  Equation 32
The contribution of a symbol erasure at position (κ,j) in ε to the column Ej(y) of E(y,x) is an additive term of the form

εκ,j·Tm(y;βκ,j),  Equation 33

where for an element ξ∈F the polynomial Tm(y;ξ) is defined as

Tm(y;ξ)=Σi∈⟨m⟩ξiyi.  Equation 34
So, if

(κ,j)≠(κl,jl),  Equation 35

then the product

B(l)(y)·Tm(y;βκ,j)  Equation 36

is a polynomial in which the powers yσ−1, yσ, . . . , ym−1 have zero coefficients.
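The vanishing of those middle coefficients follows from the telescoping identity (1−ξy)·Tm(y;ξ)=1−ξmym, which can be checked numerically (assuming, as the structure of Equation 36 suggests, that B(l)(y) is a product of factors of the form 1−βκ,jy):

```python
p = 11  # hypothetical prime field
m = 5

def T(xi):
    """T_m(y; xi) = sum_{i in <m>} xi^i y^i, as a coefficient list in y."""
    return [pow(xi, i, p) for i in range(m)]

def times_one_minus_xi_y(poly, xi):
    """Multiply a polynomial in y by (1 - xi*y), coefficients mod p."""
    out = poly + [0]
    for i in range(len(poly)):
        out[i + 1] = (out[i + 1] - xi * poly[i]) % p
    return out

xi = 7
prod = times_one_minus_xi_y(T(xi), xi)
# (1 - xi*y) * T_m(y; xi) telescopes to 1 - xi^m * y^m:
assert prod[0] == 1
assert all(coef == 0 for coef in prod[1:m])   # middle powers vanish
assert prod[m] == (-pow(xi, m, p)) % p
```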
At this point in the process, every row in the (m+σ−1)×n array

Z(l)(y,x)=B(l)(y)Z(y,x)  Equation 37

is a codeword of CGRS, where Z(y,x) is the bivariate polynomial in x and y with the coefficient of yixj being the entry of Z that is indexed by (i,j). Therefore, by applying a decoder for CGRS to row σ−1 of Z(l) with ρ+1 erasures indexed by K∪{jl}, the vector e(l) may be decoded.
In this example, it follows from the definition of e(l) that for every j∈n,
In particular,
Because l∈⟨σ⟩ was arbitrary, the error values in ε at the positions in R may be recovered; i.e., the erasure values εκl,jl for all l∈⟨σ⟩ may be computed.
As such, in this example, symbol erasures may be eliminated from E.
In an example, Table 2 summarizes the process described above for a decoding process for (T1), (T2), and (T4) (i.e., block errors 230, block erasures 250, and symbol erasures 240).
C. Decoding (T1), (T2), and (T4) Errors and Erasures and with Restrictions on Errors of Type (T3) (e.g., Block Errors 230, Block Erasures 250, and Symbol Erasures 240 and with Restrictions on Symbol Errors 220).
In one example, a code C is selected (e.g., the code C is guaranteed to work) for the case where there are some restrictions on the positions of symbol errors 220 ((T3) errors), wherein, in an example, each column, except for possibly one, contains at most one symbol error. The positions of these errors are determined, thereby reducing the decoding to the case described in Section B, above. These restrictions always hold when |L|≦3 and d is sufficiently large.
In an example, the same notation is used except: (1) the set L is not necessarily empty; and (2) R is empty. As above, the number of block errors 230 (τ) and the number of block erasures 250 (ρ) satisfy
2τ+ρ≦d−2 Equation 41
In an example, suppose that

θ=|L|>0,  Equation 42

and

L={(κl,jl)}l∈⟨θ⟩.  Equation 43
In an example, there exists a w∈⟨θ⟩ such that the values j0, j1, . . . , jw are all distinct, while

jw=jw+1= . . . =jθ−1.  Equation 44
In this example, θ and w satisfy the inequalities
In other words, the number of erroneous columns does not exceed d−1.
In an example, εκl,jl denotes the symbol-error value at position (κl,jl)∈L.
In an example, the modified syndrome σ is the m×(d−1) matrix that satisfies
and S̃ is the m×(d−1−ρ) matrix formed by the columns of σ that are indexed by ⟨ρ, d−1⟩. Note that μ=rank(S̃)=rank((E)T∪L′).
If μ≧2w+2, then the columns that are indexed by L′ are full block errors (230) (i.e., errors of type (T1)), and
2(τ+w+1)+ρ≦d+μ−2. Equation 48
In an example,
μ≦2w+1. Equation 49
For every j∈T∪L′, column Ej, namely, the column of E that is indexed by j, belongs to colspan(S̃), where colspan(X) is the vector space spanned by the columns of the array X. In particular, this holds for j∈L′\{jw}, in which case Ej (in polynomial notation) takes the form

Ej(y)=εκ,j·Tm(y;βκ,j).  Equation 50
In an example, the row vectors a0, a1, . . . , am−μ−1 form a basis of the dual space of colspan(S̃), and for every i∈⟨m−μ⟩, ai(y) denotes herein the polynomial of degree less than m with coefficient vector ai. Note that
a(y)=gcd(a0(y),a1(y), . . . ,am−μ−1(y)); Equation 51
since the ai's are linearly independent,
deg a(y)≦μ. Equation 52
In other words, a(y) has at most μ(≦2w+1) distinct roots in F. For every ξ∈F, the column vector (ξh)h∈⟨m⟩ (also represented as Tm(y;ξ)) belongs to colspan(S̃) (and, hence, to colspan((E)T∪L′)) if and only if ξ is a root of a(y). In particular, the set

{(κ,j):a(βκ,j)=0}  Equation 53

is denoted herein by R, and the polynomial A(y) is defined by
In an example, the (m−η)×n matrix Ê=(êh,j)h∈⟨m−η⟩,j∈⟨n⟩ is formed by the rows of A(y)E(y,x) that are indexed by ⟨η, m⟩. In addition, Ŝ is the (m−η)×(d−1−ρ) matrix formed by the rows of A(y)S̃(y,x) that are indexed by ⟨η, m⟩. Therefore, Êjl(y)=0 for l∈⟨w⟩ and
The number of summands on the right side of Equation 56 is θ−w, and that number is bounded from above by m−2w−1≦m−μ≦m−η. This means that Êjw(y)=0 if and only if A(βκl,jl−1)=0 for every l∈⟨w, θ⟩.
rank(Ŝ)=rank((Ê)T∪{jw}).  Equation 57
Next, the following three cases are distinguished.
1. Case 1: η=μ
According to Equation 57, Êjw(y)=0, which is equivalent to having A(βκl,jl−1)=0 for every l∈⟨w, θ⟩; hence L⊂R, and decoding reduces to the case in Section B, above.
2. Case 2: η=μ−1
If Êjw(y)=0 then L⊂R. Otherwise (according to Equation 57), each column in Ŝ must be a scalar multiple of Êjw. The entries of Êjw, in turn, form a sequence that satisfies the (shortest) linear recurrence
This recurrence is uniquely determined, since the number of entries in Êjw, which is m−η=m−μ+1≧m−2w, is at least twice the degree |R′| (≦θ−w) of B(y). The recurrence can be computed from any nonzero column of Ŝ.
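The text does not prescribe how the shortest linear recurrence is computed; one standard possibility is the Berlekamp-Massey process, sketched here over a small prime field:

```python
p = 7  # hypothetical small prime field

def berlekamp_massey(s):
    """Return (C, L): the shortest connection polynomial
    C(y) = 1 + c1*y + ... + cL*y^L with sum_i C[i]*s[k-i] = 0 for k >= L,
    over GF(p)."""
    C, B = [1], [1]
    L, mgap, b = 0, 1, 1
    for k, sk in enumerate(s):
        delta = sk                              # discrepancy at step k
        for i in range(1, L + 1):
            delta = (delta + C[i] * s[k - i]) % p
        if delta == 0:
            mgap += 1
            continue
        T = C[:]
        coef = delta * pow(b, p - 2, p) % p
        C = C + [0] * (len(B) + mgap - len(C))  # pad before the update
        for i, bi in enumerate(B):
            C[i + mgap] = (C[i + mgap] - coef * bi) % p
        if 2 * L <= k:
            L, B, b, mgap = k + 1 - L, T, delta, 1
        else:
            mgap += 1
    return C, L

# A sequence obeying s[k] = 2*s[k-1] + 3*s[k-2] (mod 7):
s = [1, 1]
for _ in range(8):
    s.append((2 * s[-1] + 3 * s[-2]) % p)

C, L = berlekamp_massey(s)
assert L == 2
assert C == [1, (-2) % p, (-3) % p]   # connection polynomial 1 - 2y - 3y^2
```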
From there, L⊂R∪R′ is derived, where
Once again decoding can be reduced to the case in Section B, above.
3. Case 3: η≦μ−2
If Êjw(y)=0 then (again) L⊂R. Hence, Ê can be decoded. As shown in Equation 56 with j=jw, the vector Êjw(y) can be referred to as a syndrome of the column vector
with respect to the following parity-check matrix of a GRS code:
Since the Hamming weight of ε*j is at most θ−w<(m−η)/2, ε*j can be decoded uniquely from Êj. Thus, for every κ such that A(βκ,j−1)≠0, an error value εκ,j is derived and subtracted from the respective entry of γ, thereby making R a superset of the positions of the remaining symbol errors 220. In an example, the above process is applied to every nonzero column in Ê with index j∉K. A decoding failure means that j is not jw, and a decoding success for j≠jw will merely cause a coding scheme 100 to change already-corrupted columns in γ, without introducing new erroneous columns. Again, decoding of γ may proceed as in Section B, above.
Table 3 presents the implied decoding system of a combination of errors of the type (T1), (T2), and (T3) (block errors 230, block erasures 250, and symbol errors 220) provided that the type (T3) errors (symbol errors 220) satisfy requirements (a) and (b) above, including Equation 7. As discussed above, these equations hold when m≦d−ρ and the number of type (T3) errors (symbol errors 220) is at most 3.
The following discussion sets forth in detail the operation of some example methods of operation of embodiments.
In operation 310, in one example, a horizontal code 120 (C) is selected, and in operation 320, a matrix 130 (Hin) is selected.
In an example, a vertical code over F is defined as (C, Hin), which consists of all m×n matrices

Γ=(Γ0|Γ1| . . . |Γn−1)  Equation 64

over F (where Γj stands for column j of Γ, and Γ is a transmitted array 206) such that each row in

Z=(H0Γ0|H1Γ1| . . . |Hn−1Γn−1)  Equation 65
is a codeword in a horizontal code 120 (C).
In an example, the code C′ is an m-level interleaving of C, such that an m×n matrix
Z=(Z0|Z1| . . . |Zn−1) Equation 66
over F is a codeword of C′ if each row in Z belongs to C. Each column in Z then undergoes encoding by an inner encoder of rate one, wherein the encoder of column j is given by the bijective mapping Zj→Hj−1Zj.
In operation 310, in one example, a horizontal code 120 (C) is selected as a linear [n, k, d] code over F.
In operation 320, in one example, a matrix 130 is selected from a plurality of matrices 130. As discussed above, a matrix 130 (Hin) is an m×(mn) matrix over F that satisfies the following two properties for a positive integer (δ):
Hin=(H0|H1| . . . |Hn−1)  Equation 67

with H0, H1, . . . , Hn−1 being m×m sub-matrices of Hin, wherein each Hj is invertible over F.
In operation 330, in one example, information symbols 210 are encoded based at least upon the code C. In 340, each column in Z undergoes encoding by an inner encoder of rate one, wherein the encoder of column j is given by the bijective mapping Zj→Hj−1Zj.
In operation 410, in various examples, an array of encoded symbols 211 is transmitted. In an example, an array 206 is altered such that encoded symbols 211 in an array 206 become a corrupted array 200 (γ).
In operation 420, in various examples, an array 200 (γ) of possibly corrupted encoded symbols 211 is received. The array 200 may be received by a device comprising a decoder.
In an example, the received m×n array 200 (γ) yields

Y=(H0γ0|H1γ1| . . . |Hn−1γn−1),  Equation 68

where received array 200 contains τ+θ≦(d+δ−3)/2 erroneous columns.
In operation 430, in various examples, a received array 200 of encoded symbols 211 is decoded. Using one of the examples described herein for decoding, received array 200 (γ) is decoded back into transmitted array 206 (Γ).
In operation 300, in various examples, information symbols 210 are encoded using a coding scheme 100.
In operation 400, in various examples, encoded symbols 210 are transmitted, received, and decoded.
In operation 510, when included, a syndrome array (S) is computed. For example, the syndrome array may be of size m×(d−1) and shown by
S=(H0γ0|H1γ1| . . . |Hn−1γn−1)HGRST Equation 69
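The syndrome computation of Equation 69 can be illustrated row-wise. Because the syndrome map is linear and vanishes on codewords, the syndrome of a corrupted row depends only on the error pattern; the field size, locators, and values below are hypothetical:

```python
p = 13
n, d = 6, 4
alpha = [1, 2, 3, 4, 5, 6]        # hypothetical distinct code locators

def syndrome(row):
    """Row syndrome against H_GRS = (alpha_j^i), i in <d-1>."""
    return [sum(row[j] * pow(alpha[j], i, p) for j in range(n)) % p
            for i in range(d - 1)]

def u(j):
    """Column multiplier u_j = prod_{i != j} (alpha_j - alpha_i)^{-1}."""
    prod = 1
    for i in range(n):
        if i != j:
            prod = prod * (alpha[j] - alpha[i]) % p
    return pow(prod, p - 2, p)

c = [u(j) * (3 + 2 * alpha[j]) % p for j in range(n)]   # codeword from f(x)=3+2x
e = [0, 0, 9, 0, 0, 0]                                  # hypothetical error pattern
corrupted = [(a + b) % p for a, b in zip(c, e)]

assert syndrome(c) == [0, 0, 0]            # codewords have zero syndrome
assert syndrome(corrupted) == syndrome(e)  # syndrome sees only the error
```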
In operation 520, when included, in various examples, a modified syndrome array is computed. For example, a modified syndrome array is computed to be the unique m×(d−1) matrix that satisfies the congruence
Note that in various embodiments, the term αj is the same as the code locators used above.
In operation 530, when included, in various examples, if there are additional symbol erasures 240 in the received array 200, operations 531, 532, and 533 are repeated. For example, for every l∈⟨σ⟩, operations 531, 532, and 533 are performed.
In operation 531, when included, in various examples, a row in a unique row matrix is computed.
In operation 532, when included, in various examples, a decoder is applied for the horizontal code 120 based at least on the syndrome array and a row in the matrix. For example, ejl(l) (i.e., entry jl in e(l)) is computed by applying a decoder for CGRS (horizontal code 120 utilizing a GRS code) using row σ−1 in σ(l) as syndrome and assuming that the columns indexed by K∪{jl} are erased. The erasure value εκl,jl is then recovered from ejl(l).
In operation 533, when included, in various examples, the received array and the syndrome array are updated. For example, the received array (γ) 200 and the syndrome array (S) are updated as in Equations 74 and 75.
γ(y,x)←γ(y,x)−εκl,jl·yκl·xjl  Equation 74

S(y,x)←S(y,x)−εκl,jl·Tm(y;βκl,jl)·Td−1(x;αjl)  Equation 75
In operation 540, when included, in various examples, a decoder is applied for an inner array based at least on the syndrome array and a row in the matrix. For example, for every h∈⟨m⟩ a decoder is applied for horizontal code 120 (CGRS) using row h of S as syndrome and assuming that the columns 260 indexed by K are erased. E is an m×n matrix, where the rows of E are the decoded error vectors for all h∈⟨m⟩.
In operation 550, when included, in various examples, a first error array is computed. For example,
ε=(H0−1E0|H1−1E1| . . . |Hn−1−1En−1). Equation 76
In operation 560, when included, in various examples, a received array of information symbols 210 is decoded by applying the error array to the received array 200 of encoded symbols 211. For example, transmitted array Γ 206 may be computed as γ−ε, where γ−ε is an array of size m×n.
In operation 570, when included, in various examples, a syndrome array is computed. For example, the syndrome array (S) may be of size m×(d−1) and shown by
S=(H0γ0|H1γ1| . . . |Hn−1γn−1)HGRST Equation 77
In operation 571, when included, in various examples, a modified syndrome array is computed. For example, the matrix S̃ is formed by the columns of the modified syndrome σ that are indexed by ⟨ρ, d−1⟩. In an example, μ=rank(S̃).
In operation 572, when included, in various examples, a polynomial is computed using a Feng-Tzeng operation. In various examples, using a Feng-Tzeng process, a polynomial λ(x) is computed of degree Δ≦(d+μ−ρ)/2 such that the following congruence is satisfied for some polynomial ω(y,x) with degx ω(y,x)<ρ+Δ:

σ(y,x)λ(x)≡ω(y,x)(mod xd−1).  Equation 78
If no such λ(x) exists, or the computed λ(x) does not divide Πj∈⟨n⟩(1−αjx), then the decoding has failed and stops. In one example, if the decoding fails, flowchart 500 proceeds to step 580. In another example, if the decoding did not fail, flowchart 500 proceeds to step 573.
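The divisibility test on λ(x) can be carried out by polynomial long division over F; the locators and candidate polynomials below are hypothetical:

```python
p = 11  # hypothetical prime field

def polymul(a, b):
    """Multiply polynomials (coefficient lists, low degree first) over GF(p)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def polymod(a, b):
    """Remainder of a(x) divided by b(x) over GF(p); b's leading coefficient nonzero."""
    a = a[:]
    binv = pow(b[-1], p - 2, p)
    while len(a) >= len(b):
        if a[-1] == 0:
            a.pop()
            continue
        f = a[-1] * binv % p
        shift = len(a) - len(b)
        for i, bi in enumerate(b):
            a[shift + i] = (a[shift + i] - f * bi) % p
        a.pop()
    return a

alphas = [1, 2, 3, 4]                        # hypothetical code locators
prod = [1]
for aj in alphas:                            # prod(x) = PROD_j (1 - alpha_j x)
    prod = polymul(prod, [1, -aj % p])

lam = polymul([1, -1 % p], [1, -3 % p])      # candidate lambda(x) = (1-x)(1-3x)
lam_bad = [1, -6 % p]                        # (1-6x): 6 is not among the locators
assert not any(polymod(prod, lam))           # divides: decoding may proceed
assert any(polymod(prod, lam_bad))           # does not divide: decoding fails
```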
In operation 573, when included, in various examples, an error array (E) is computed. In an example, an m×n error array (E) is computed by Equation 79:
where (·)′ denotes formal differentiation.
In operation 574, when included, in various examples, the received array 200 of encoded symbols 211 is decoded by applying the error array. In an example, an error array is computed with Equation 80:
ε=(H0−1E0|H1−1E1| . . . |Hn−1−1En−1). Equation 80
In an example, a transmitted array 206 is computed by applying the error array to the received array 200:
Γ=γ−ε. Equation 81
In operation 580, when included, in various examples, a greatest common divisor is computed based on a left kernel of a second matrix. For example, as shown in step 4 of Table 3, a greatest common divisor a(y) is computed based at least on the left kernel of S̃.
In operation 581, when included, in various examples, a root sub-set and a polynomial are computed. For example, the set R and the polynomial A(y) are computed as in Equations 53 and 54. In an example, η=|R|.
In operation 582, when included, in various examples, a second matrix is computed. For example, an (m−η)×(d−1−ρ) second matrix (Ŝ) is formed based at least on the rows of A(y)S̃(y,x) that are indexed by ⟨η, m⟩.
In an example, if η=μ−1 then operations 591, 592 and 593 are performed. In another example, if η≦μ−2, operations 595, 596, 597, 598 and 599 are performed. One example of these operations can be seen in Table 3 at steps 5 and 6.
In operation 591, when included, in various examples, the shortest linear recurrence of any nonzero column in the second matrix is computed. For example, the shortest linear recurrence B(y) is computed for any nonzero column in Ŝ.
In operation 592, when included, in various examples, the root sub-set is computed. For example, the set
R′={(κ,j):A(βκ,j−1)≠0 and B(βκ,j−1)=0}.  Equation 82
is computed.
In operation 593, when included, in various examples, the root sub-set is updated (in various other examples, it is not). For example, if |R′|=deg B(y) and |R′|≦m−η, then R←R∪R′ is updated.
As discussed above, in an example, if η≦μ−2, operations 595, 596, 597, 598 and 599 are performed. An example of these operations can be seen in Table 3 at step 6.
In operation 595, when included, in various examples, a modified syndrome array is computed. For example, a modified syndrome array is computed to be the unique m×(d−1) matrix σ that satisfies the congruence:
In an example, μ is the rank of the m×(d−1−ρ) matrix S̃ formed by the columns of matrix σ that are indexed by ⟨ρ, d−1⟩.
In operation 596, when included, in various examples, a polynomial is computed using a Feng-Tzeng operation. In various examples, using a Feng-Tzeng process, a polynomial λ(x) is computed of degree Δ≦(d+μ−ρ)/2 such that the following congruence is satisfied for some polynomial ω(y,x) with degx ω(y,x)<ρ+Δ:

σ(y,x)λ(x)≡ω(y,x)(mod xd−1).  Equation 85
If no such λ(x) exists, or the computed λ(x) does not divide Πj∈⟨n⟩(1−αjx), then the decoding has failed and stops. In one example, if the decoding did not fail, flowchart 500 proceeds to step 597.
In operation 597, when included, in various examples, an error array is computed provided the Feng-Tzeng operation is successful. In an example, an m×n error array (Ê) is computed by Equation 79:
where (·)′ denotes formal differentiation.
In an example, operations 598 and 599 are performed for every nonzero column of the error array (Ê). This is shown in step 6(b) of Table 3 (where operation 598 correlates with step 6(b)(i) and operation 599 correlates with step 6(b)(ii)).
In operation 598, when included, in various examples, a decoder for the inner code is applied. For example, a decoder for a GRS code is applied with the parity-check matrix HGRS(j) as in Equations 62 and 63 above, using Êj as a syndrome array, to produce an error vector ε*j.
In operation 599, when included, in various examples, the corrupted array is updated provided that applying the decoder in operation 598 is successful. For example, provided that operation 598 is successful, E*j=Hjε*j is computed and the received array and syndrome array are updated: γj←γj−ε*j and S(y,x)←S(y,x)−E*j(y)·Td−1(x;αj).
With reference now to system 600 of the accompanying drawings, an example computer system environment upon which embodiments described herein may be implemented is described. As noted above, methods described herein can be carried out by a computer system executing instructions embodied in a computer-usable storage medium.
Embodiments of the present technology are thus described. While the present technology has been described in particular examples, it should be appreciated that the present technology should not be construed as limited by such examples, but rather construed according to the following claims.
Filing Document | Filing Date | Country | Kind
PCT/US2012/062835 | 10/31/2012 | WO | 00