The invention relates to electronic devices, and, more particularly, to error correction structures and methods.
Digital communication and storage systems typically include error correction coding in order to overcome errors arising from the transmission or storage medium. Forward error-correction coding (FEC) systems add redundancy to the transmitted signal so that the receiver can detect and correct errors using only the received signal. This eliminates the need for the receiver to send requests for retransmission to the transmitter.
One of the more popular error correction code types is the BCH codes, which are cyclic block codes that use Galois fields beyond the simplest GF(2) (the usual binary {0,1}) to prescribe code generator polynomials. Indeed, a BCH code uses a minimal-degree generator polynomial whose roots are a sequence of powers of a primitive element of a Galois field, which may be an extension field of the symbol field (the field of the codeword components). This leads to computations involving multiplications of Galois field elements in the usual decoding steps of syndrome calculation, association with an error pattern (determining the error-locator polynomial and the error locations and values), and error correction. Reed-Solomon codes are the subclass of BCH codes with both the symbols and the generator polynomial roots in the same field GF(p^m). The commonly used field GF(2^m) allows the elements to be represented as m-bit words.
The nonzero elements of a Galois field form a cyclic multiplicative group and can be expressed as powers of a primitive element α. That is, the elements of GF(p^m) are {0, 1, α, α^2, . . . , α^q} where q = p^m − 2 and α^{q+1} = 1. Thus the roots of a generator polynomial G(x) for a BCH code could be {α, α^2, . . . , α^{2t}} for a code which can correct t errors per codeword. The generator polynomial would then be the least common multiple of the polynomials φ_j(x) for j = 1, 2, . . . , 2t, where φ_j(x) is the minimal polynomial of α^j. In the special case of the symbol field being the same as the root field (Reed-Solomon), φ_j(x) is simply x − α^j.
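As an illustration of how such a generator polynomial is formed in the Reed-Solomon case, the following minimal sketch builds G(x) = (x − α)(x − α^2) · · · (x − α^{2t}) over GF(256). The primitive polynomial 0x11D and the generator element α = 0x02 are assumptions made for the sketch (the text does not fix a particular field representation), and the function names are illustrative only.

```python
def gf_mul(a, b, poly=0x11D):
    """Multiply two GF(2^8) elements; 0x11D is an assumed primitive polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly                       # reduce modulo the field polynomial
        b >>= 1
    return r

def rs_generator(t, alpha=0x02):
    """Coefficients of G(x) = (x - alpha)(x - alpha^2)...(x - alpha^(2t)),
    lowest degree first; subtraction equals addition (XOR) in GF(2^8)."""
    g = [1]
    root = 1
    for _ in range(2 * t):
        root = gf_mul(root, alpha)          # next root alpha^j
        new = [0] * (len(g) + 1)
        for i, c in enumerate(g):
            new[i + 1] ^= c                 # x * g(x)
            new[i] ^= gf_mul(root, c)       # root * g(x)
        g = new
    return g                                # degree 2t, monic

# e.g. a t = 8 (sixteen parity symbols) Reed-Solomon generator polynomial:
# g = rs_generator(8)
```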
Systematic BCH encoding, as with cyclic codes in general, forms codewords by concatenating the k information symbols with n−k parity symbols, which are computed as x^{n−k} I(x) mod G(x), where I(x) is the information polynomial. The additional n−k parity symbols carry the redundant information that the receiver uses to choose the most likely transmitted k information symbols. In particular, with receiver soft decision the n−k parity symbols can be used to correct t error symbols and detect s erased symbols provided 2t+s is at most n−k. Note that values such as n = 204 and k = 188 with the field GF(2^8) give a commonly used (shortened) Reed-Solomon code for high-speed modems. Such a (204,188) code can correct 8 error symbols per 204-symbol codeword. Similarly, a (200,192) code can correct 4 errors per 200-symbol codeword.
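A minimal sketch of this systematic encoding, under the same assumed GF(256) representation (primitive polynomial 0x11D), with the generator polynomial g given lowest degree first and the information symbols fed highest-order coefficient first, matching the r0, r1, . . . ordering used below; the function names are illustrative.

```python
def gf_mul(a, b, poly=0x11D):
    """GF(2^8) multiply; 0x11D is an assumed primitive polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def rs_encode_systematic(info, g):
    """Return info || parity where parity = x^(n-k) I(x) mod G(x).
    info lists the k information symbols highest-order first; g lists the
    generator coefficients lowest degree first with monic leading term."""
    nk = len(g) - 1                      # n - k parity symbols
    reg = [0] * nk                       # division register; reg[-1] is the highest-order term
    for sym in info:
        feedback = sym ^ reg[-1]
        reg = [0] + reg[:-1]             # multiply the running remainder by x
        if feedback:
            for i in range(nk):
                reg[i] ^= gf_mul(feedback, g[i])
    return info + reg[::-1]              # parity appended highest-order first
```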
Decoding a received word begins with syndrome computation. Writing the received polynomial as R(x) = C(x) + E(x), the transmitted codeword plus an error polynomial, the decoder evaluates R(x) at the 2t roots β_0, β_1, . . . , β_{2t−1} of G(x) to obtain the syndromes:
S_0 = r_0*β_0^{n−1} + r_1*β_0^{n−2} + r_2*β_0^{n−3} + . . . + r_{n−2}*β_0 + r_{n−1}
S_1 = r_0*β_1^{n−1} + r_1*β_1^{n−2} + r_2*β_1^{n−3} + . . . + r_{n−2}*β_1 + r_{n−1}
. . .
S_{2t−1} = r_0*β_{2t−1}^{n−1} + r_1*β_{2t−1}^{n−2} + r_2*β_{2t−1}^{n−3} + . . . + r_{n−2}*β_{2t−1} + r_{n−1}
Because C(x) is a multiple of G(x), C(x) vanishes at each of the 2t roots of G(x), and the syndrome S_j equals E(β_j). Thus the 2t syndrome equations are nonlinear relations between the 2t syndromes, the at most t error locations, and the at most t error values; that is, 2t nonlinear equations in at most 2t unknowns.
Next, linearize these nonlinear syndrome equations by introducing the error-locator polynomial Λ(x) = Π_m (1 + X_m x) = 1 + Σ_m Λ_m x^m, where X_m is the mth error location. The error-locator polynomial has degree equal to the (not yet known) number of errors. That is, X_m = α^j for the mth j for which e_j is nonzero, and the roots of Λ(x) are the inverses of the error locations.
Multiplying the defining equation for Λ(x) by the error values and powers of the error locations and summing leads to n−k−e linear equations for the e unknowns Λ_j, with the coefficients of these linear equations being an array of the syndromes S_0 to S_{2e−2} and the inhomogeneous terms being the sequence of syndromes S_e to S_{2e−1}. The number of errors, e, is not known in advance, so the linear equations cannot simply be solved. Rather, the Berlekamp-Massey method is typically used to determine both e and the coefficients Λ_j.
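The text does not spell the method out, so the following is only a minimal sketch of the standard Berlekamp-Massey iteration over GF(256), again assuming the primitive polynomial 0x11D; S is the list of syndromes S_0 . . . S_{2t−1}, and the names are illustrative.

```python
def gf_mul(a, b, poly=0x11D):
    """GF(2^8) multiply; 0x11D is an assumed primitive polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def gf_inv(a):
    """a^254 = a^(-1) in GF(256), by square-and-multiply."""
    r, base, e = 1, a, 254
    while e:
        if e & 1:
            r = gf_mul(r, base)
        base = gf_mul(base, base)
        e >>= 1
    return r

def berlekamp_massey(S):
    """Return [1, L1, ..., Le], the error-locator coefficients, from the syndromes S."""
    C = [1] + [0] * len(S)        # current locator estimate
    B = [1] + [0] * len(S)        # previous estimate
    L, m, b = 0, 1, 1
    for n in range(len(S)):
        d = S[n]                  # discrepancy between predicted and actual syndrome
        for i in range(1, L + 1):
            d ^= gf_mul(C[i], S[n - i])
        if d == 0:
            m += 1
            continue
        coef = gf_mul(d, gf_inv(b))
        T = C[:]
        for i in range(len(B) - m):
            C[i + m] ^= gf_mul(coef, B[i])    # C(x) -= (d/b) x^m B(x)
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C[:L + 1]
```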
Once the locator polynomial is known, find its roots (the inverses of the error locations) using the Chien search. The Chien search systematically evaluates the locator polynomial Λ(x) at all 255 nonzero elements of GF(2^8) and checks for zeros by repeatedly computing Λ(α^j) and incrementing j.
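A sketch of such a Chien search over GF(256), under the same assumptions (primitive polynomial 0x11D, α = 0x02); lam holds the locator coefficients [1, Λ_1, . . . , Λ_e], and each returned exponent j indexes a root α^j, whose inverse α^{(255−j) mod 255} is an error location. Names are illustrative.

```python
def gf_mul(a, b, poly=0x11D):
    """GF(2^8) multiply; 0x11D is an assumed primitive polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def chien_search(lam, alpha=0x02):
    """Evaluate the locator polynomial at every nonzero field element alpha^j,
    j = 0..254, and collect the j for which it vanishes.  The roots are the
    inverses of the error locations, so alpha^((255 - j) % 255) is a location."""
    zeros = []
    x = 1                                  # alpha^0
    for j in range(255):
        v = 0
        for c in reversed(lam):            # Horner evaluation of lam at x
            v = gf_mul(v, x) ^ c
        if v == 0:
            zeros.append(j)
        x = gf_mul(x, alpha)               # advance to alpha^(j+1)
    return zeros
```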
Inserting the error locations into the definitions of the syndromes yields simple linear equations for the error values. In fact, the Forney algorithm provides an efficient computation of the error values.
However, efficient computation of the syndromes poses problems, especially for systems in which the code (generator polynomial) may change and in which parallel multiplication hardware is available.
The present invention provides a BCH code syndrome method with partitioning of the received polynomial plus interleaving of partial syndrome computations within each received polynomial partition.
This has the advantage of avoiding multiplier latency in syndrome computations.
The drawings are heuristic for clarity.
FIGS. 4a-4c show a preferred embodiment Galois multiplier.
System overview
Preferred embodiment syndrome evaluations partition the received polynomial into subsets, compute syndrome portions over the subsets, and reconstruct the syndromes from the portions. This avoids the latency of a Galois multiplier. Further, the preferred embodiments may use parallel Galois multipliers for simultaneous syndrome evaluations and allow changes in code parameters with the same stored code but with differing partitions. Preferred embodiment systems include a digital signal processor with a parallel Galois multiplier, a parallel Galois adder (XOR), and memory, which implement these syndrome evaluations and other methods to provide real-time decoding of BCH (including Reed-Solomon) coded streams such as those used in ADSL communications.
Syndrome preferred embodiments
The first syndrome preferred embodiments use GF(256) symbols with (n,k) Reed-Solomon codes with n−k = 16 and 8, and with n even or odd. The elements of GF(256) can be expressed as 8-bit symbols (bytes). The first preferred embodiments also include a 32-bit GF(256) multiplier which can multiply 4 pairs of GF(256) elements in parallel to yield 4 products, as illustrated in FIGS. 4a-4c.
For the case of n−k = 2t = 16, there are 16 syndromes S_i for i = 0, 1, 2, . . . , 15, where S_i is defined as follows in terms of the received codeword R = {r_0, r_1, r_2, . . . , r_{n−1}}, with each r_j an element of GF(256) and the ith root β_i = α^i of the code generator polynomial, with α a primitive element of GF(256):
S_i = r_0*β_i^{n−1} + r_1*β_i^{n−2} + r_2*β_i^{n−3} + . . . + r_{n−2}*β_i + r_{n−1}
The ith syndrome could be computed by Horner's rule using repeated multiplication by β_i plus accumulation of the next received codeword element:
S_i = r_0*β_i^{n−1} + r_1*β_i^{n−2} + r_2*β_i^{n−3} + . . . + r_{n−2}*β_i + r_{n−1}
= ( . . . ((r_0*β_i + r_1)*β_i + r_2)*β_i + . . . + r_{n−2})*β_i + r_{n−1}
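A minimal sketch of this Horner evaluation for one syndrome, assuming GF(256) with primitive polynomial 0x11D; r lists the received symbols r_0 (highest order) through r_{n−1}, beta is the root β_i, and the function name is illustrative.

```python
def gf_mul(a, b, poly=0x11D):
    """GF(2^8) multiply; 0x11D is an assumed primitive polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def syndrome_horner(r, beta):
    """S_i = (...((r0*beta + r1)*beta + r2)*beta + ... + r_(n-2))*beta + r_(n-1)."""
    s = 0
    for sym in r:
        s = gf_mul(s, beta) ^ sym          # multiply-accumulate; addition is XOR in GF(2^8)
    return s
```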
However, the preferred embodiments partition the received codeword into four subsets and employ Horner's method on each subset independently to yield portions of the syndrome; this avoids multiplier latency by interleaving the subset multiplications. After each of the four portions of a syndrome has been computed, the portions are combined to yield the syndrome.
The ith syndrome would be computed as in the following sequences in the Galois multiplier and Galois adder (bitwise XOR) in which the multiplier has a latency so that the product of the multiplier and multiplicand entered on a first cycle is not available until the fourth cycle:
Note that the last three multiplications use high powers of β_i and that the syndrome S_i is just the sum of the last three products of the multiplier plus the last sum of the adder. That is,
S_i = ( . . . ((r_0*β_i + r_1)*β_i + . . . + r_{n/4−2})*β_i + r_{n/4−1})*β_i^{3n/4}
+ ( . . . ((r_{n/4}*β_i + r_{n/4+1})*β_i + . . . + r_{n/2−2})*β_i + r_{n/2−1})*β_i^{n/2}
+ ( . . . ((r_{n/2}*β_i + r_{n/2+1})*β_i + . . . + r_{3n/4−2})*β_i + r_{3n/4−1})*β_i^{n/4}
+ ( . . . ((r_{3n/4}*β_i + r_{3n/4+1})*β_i + . . . + r_{n−2})*β_i + r_{n−1})
The preferred embodiment performs these last three additions to complete the computation of S_i; see the accompanying figures.
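The following sketch models this four-subset evaluation in scalar form, assuming GF(256) with primitive polynomial 0x11D and n divisible by the number of parts. The inner loop advances the four Horner recurrences in lockstep, which is what the interleaved multiplier schedule achieves in hardware, and the partial results are then combined with the powers β_i^{3n/4}, β_i^{n/2}, β_i^{n/4}, 1. Names are illustrative.

```python
def gf_mul(a, b, poly=0x11D):
    """GF(2^8) multiply; 0x11D is an assumed primitive polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def partitioned_syndrome(r, beta, parts=4):
    """Split r into `parts` contiguous subsets, run one Horner recurrence per
    subset in lockstep, then combine the partial results with powers of beta."""
    n = len(r)
    step = n // parts                       # assumes n is a multiple of `parts`
    acc = [0] * parts
    for i in range(step):                   # one step of every recurrence per pass
        for p in range(parts):
            acc[p] = gf_mul(acc[p], beta) ^ r[p * step + i]
    s = 0
    for p in range(parts):                  # subset p carries weight beta^((parts-1-p)*step)
        s ^= gf_mul(acc[p], gf_pow(beta, (parts - 1 - p) * step))
    return s
```

For parts = 4 this reproduces the combination above, and the same routine with parts = 8 corresponds to the eight-subset variant described below for the t = 4 case.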
After the foregoing syndrome computations, three more loops of analogous parallel computations yield {S4, S5, S6, S7}, {S8, S9, S10, S11}, and {S12, S13, S14, S15} which use the corresponding {β4, β5, β6, and β7}, {β8, β9, β10, and β11}, and {β12, β13, β14, and β15}.
When the Galois multiplier has a larger latency, the computations for {S0, S1, S2, S3} and {S8, S9, S10, S11} could be interleaved, and similarly for {S4, S5, S6, S7} and {S12, S13, S14, S15}, to effectively double the number of cycles between the time a multiplicand and multiplier are entered into the Galois multiplier and the time that their product is used in another calculation.
Note that since only one memory load (to a byte) is being done every cycle, there will not be any memory bank conflicts in cases of segmented memory.
Alternatives
For the case of n−k = 2t = 8, there are 8 syndromes S_i for i = 0, 1, . . . , 7, which are the same as the previously defined S_i. Thus a preferred embodiment could use two loops of analogous computations to yield {S0, S1, S2, S3} and {S4, S5, S6, S7}. However, an alternative preferred embodiment retains the four loops of computation and instead partitions the received polynomial into eight subsets beginning at r_0, r_{n/8}, r_{n/4}, r_{3n/8}, r_{n/2}, r_{5n/8}, r_{3n/4}, and r_{7n/8}, and uses loops of half the length (n/8 terms rather than n/4) but retains the n/4 offset of the t = 8 case so the subsets are interleaved. Thus each of the computations yields half of one of the 8 syndromes, and these halves are finally combined to yield the syndromes. In particular, the computations would be:
This would be interleaved as illustrated in the accompanying figure.
S′_i = ( . . . ((r_0*β_i + r_1)*β_i + . . . + r_{n/8−2})*β_i + r_{n/8−1})*β_i^{7n/8}
+ ( . . . ((r_{n/4}*β_i + r_{n/4+1})*β_i + . . . + r_{3n/8−2})*β_i + r_{3n/8−1})*β_i^{5n/8}
+ ( . . . ((r_{n/2}*β_i + r_{n/2+1})*β_i + . . . + r_{5n/8−2})*β_i + r_{5n/8−1})*β_i^{3n/8}
+ ( . . . ((r_{3n/4}*β_i + r_{3n/4+1})*β_i + . . . + r_{7n/8−2})*β_i + r_{7n/8−1})*β_i^{n/8}
and
S″_i = ( . . . ((r_{n/8}*β_i + r_{n/8+1})*β_i + . . . + r_{n/4−2})*β_i + r_{n/4−1})*β_i^{3n/4}
+ ( . . . ((r_{3n/8}*β_i + r_{3n/8+1})*β_i + . . . + r_{n/2−2})*β_i + r_{n/2−1})*β_i^{n/2}
+ ( . . . ((r_{5n/8}*β_i + r_{5n/8+1})*β_i + . . . + r_{3n/4−2})*β_i + r_{3n/4−1})*β_i^{n/4}
+ ( . . . ((r_{7n/8}*β_i + r_{7n/8+1})*β_i + . . . + r_{n−2})*β_i + r_{n−1})
Lastly, S_i = S′_i + S″_i. Of course, the multiplications of the Horner's-method partial results by powers of β_i in the summation forming S′_i use the same powers as for S″_i apart from an additional multiplication by β_i^{n/8}; that is, (β_i^{7n/8}, β_i^{5n/8}, β_i^{3n/8}, β_i^{n/8}) = β_i^{n/8} * (β_i^{3n/4}, β_i^{n/2}, β_i^{n/4}, 1).
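A sketch of this eight-subset split, under the same GF(256) assumptions (primitive polynomial 0x11D): the even-numbered subsets contribute S′_i with the powers β_i^{7n/8}, β_i^{5n/8}, β_i^{3n/8}, β_i^{n/8}, the odd-numbered subsets contribute S″_i with β_i^{3n/4}, β_i^{n/2}, β_i^{n/4}, 1, and the syndrome is their sum. Names are illustrative.

```python
def gf_mul(a, b, poly=0x11D):
    """GF(2^8) multiply; 0x11D is an assumed primitive polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def horner(segment, beta):
    s = 0
    for sym in segment:
        s = gf_mul(s, beta) ^ sym
    return s

def split_syndrome(r, beta):
    """Eight n/8-symbol subsets; returns (S'_i, S''_i) whose XOR is S_i."""
    n = len(r)
    step = n // 8                                                   # assumes 8 divides n
    s1 = s2 = 0
    for k in range(4):
        a = horner(r[2 * k * step:(2 * k + 1) * step], beta)        # subsets at 0, n/4, n/2, 3n/4
        b = horner(r[(2 * k + 1) * step:(2 * k + 2) * step], beta)  # subsets at n/8, 3n/8, 5n/8, 7n/8
        s1 ^= gf_mul(a, gf_pow(beta, (7 - 2 * k) * step))
        s2 ^= gf_mul(b, gf_pow(beta, (6 - 2 * k) * step))
    return s1, s2
```

XORing the two returned halves gives the same value as a single Horner evaluation over all n symbols.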
Note that essentially the computations of {S8, S9, S10, S11} for t = 8 are used to compute the last half of {S0, S1, S2, S3} for t = 4 by using β0, β1, β2, β3 in place of β8, β9, β10, β11, an offset starting point of n/8, and only n/8 terms instead of n/4. Analogously, use the computations of {S12, S13, S14, S15} for t = 8 to compute the last half of {S4, S5, S6, S7} for t = 4.
The foregoing partitioning of the received polynomial into 8 subsets allows essentially the same program to handle both the t = 8 and t = 4 cases. Thus a modem which monitors error rates and adaptively adjusts code parameters can use the same program footprint.
For n odd, or for certain even n, the quantities n/2, n/4, or n/8 may not be integers. In such cases the same computation loops can be used after padding the codeword with 0 bytes at the r_0 end to increase n to a multiple of 8 or 4 as needed. Again, the same syndrome-computation program is used, which minimizes the program memory required.
Modifications
The preferred embodiments can be modified in various ways while retaining the features of BCH decoding with a syndrome method that includes: a partition of the received polynomial (block of symbols) into subsets with interleaved partial syndrome evaluations over the subsets (to overcome multiplier latency), followed by a combination of the partial syndrome evaluations; parallel evaluations with a parallel multiplier and parallel adder; and scalable evaluations for use with changes in code error-correction capabilities, such as t changing from 8 to 4.
For example, various Galois fields could be used; in particular, GF(2^m) for m less than 8 can immediately be implemented by using only the m left-hand bits of a symbol byte: the Galois multiplier yields a product in the same left-hand m bits for a given m by applying a corresponding mask in the partial product combinations.
Further, codes with various error-correction capabilities (e.g., t values from 1 to 8) can be handled by the same syndrome method with further partitioning of the received polynomial and use of a higher t value for odd t. For t larger than 8, the method can be scaled up by simply increasing the number of evaluation loops: if the Galois multiplier can handle P parallel multiplies, then ⌈2t/P⌉ loops suffice, and if the Galois multiplier has a latency of L cycles, then partition the received block of n symbols into at least L subsets for interleaved Horner's-method evaluations on the subsets (with loops of length ⌈n/L⌉) to yield P partial syndromes in parallel. Decreases in t may allow use of the same number of loops but with a subdivision of the subsets and of the loop length, so that the partial syndromes are subdivided among multiple loops.
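To make the scaling rule concrete, a small sketch of the loop and subset bookkeeping it implies; P and L follow the paragraph above, the numbers are purely illustrative, and the helper name is hypothetical.

```python
import math

def syndrome_schedule(t, P, L, n):
    """Loop count and per-subset length for a P-wide Galois multiplier with latency L."""
    loops = math.ceil(2 * t / P)        # ceil(2t/P) evaluation loops, P syndromes each
    subsets = L                         # at least L subsets to hide the multiplier latency
    length = math.ceil(n / subsets)     # Horner recurrence length per subset
    return loops, subsets, length

# e.g. the (204,188) code with t = 8, a 4-wide multiplier, and 4-cycle latency:
print(syndrome_schedule(t=8, P=4, L=4, n=204))   # -> (4, 4, 51)
```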
This application claims priority from provisional application Ser. No. 60/183,533, filed Feb. 18, 2000. The following copending application, with common assignee, discloses related subject matter: Ser. No. 60/183,419, filed Feb. 18, 2000 (TI-30531).
Number | Date | Country
---|---|---
60183533 | Feb 2000 | US
60183419 | Feb 2000 | US