Reed-Solomon code system employing k-bit serial techniques for encoding and burst error trapping

Information

  • Patent Grant
  • 5875200
  • Patent Number
    5,875,200
  • Date Filed
    Friday, March 28, 1997
  • Date Issued
    Tuesday, February 23, 1999
Abstract
Apparatus and methods are disclosed for providing an improved system for encoding and decoding of Reed-Solomon and related codes. The system employs a k-bit-serial shift register for encoding and residue generation. For decoding, a residue is generated as data is read. Single-burst errors are corrected in real time by a k-bit-serial burst trapping decoder that operates on this residue. Error cases greater than a single burst are corrected with a non-real-time firmware decoder, which retrieves the residue and converts it to a remainder, then converts the remainder to syndromes, and then attempts to compute error locations and values from the syndromes. In the preferred embodiment, a new low-order first, k-bit-serial, finite-field constant multiplier is employed within the burst trapping circuit. Also, code symbol sizes are supported that need not equal the information byte size. The implementor of the methods disclosed may choose time-efficient or space-efficient firmware for multiple-burst correction.
Description

BACKGROUND OF THE INVENTION
This invention relates to information storage and retrieval or transmission systems, and more particularly to means for encoding and decoding codewords for use in error detection and correction in such information systems.
Digital information storage devices, such as magnetic disk, magnetic tape or optical disk, store information in the form of binary bits. Also, information transmitted between two digital devices, such as computers, is transmitted in the form of binary bits. During transfer of data between devices, or during transfer between the storage media and the control portions of a device, errors are sometimes introduced so that the information received is a corrupted version of the information sent. Errors can also be introduced by defects in a magnetic or optical storage medium. These errors must almost always be corrected if the storage or transmission device is to be useful.
Correction of the received information is accomplished by (1) deriving additional bits, called redundancy, by processing the original information mathematically; (2) appending the redundancy to the original information during the storage or transmission process; and (3) processing the received information and redundancy mathematically to detect and correct erroneous bits at the time the information is retrieved. The process of deriving the redundancy is called encoding. One class of codes often used in the process of encoding is Reed-Solomon codes.
Encoding of information is accomplished by processing a sequence of information bits, called an information polynomial or information word, to devise a sequence of redundancy bits, called a redundancy polynomial, in accord with an encoding rule such as one of the Reed-Solomon codes. An encoder processes the information polynomial with the encoding rule to create the redundancy polynomial and then appends it to the information polynomial to form a codeword polynomial which is transmitted over the signal channel or stored in an information storage device. When a received codeword polynomial arrives from the signal channel or is read from the storage device, a decoder processes the received codeword polynomial to detect the presence of error(s) and to attempt to correct any error(s) present before transferring the corrected information polynomial for further processing.
Symbol-serial encoders for Reed-Solomon error correcting codes are known in the prior art (see Riggle, U.S. Pat. No. 4,413,339). These encoders utilize the conventional or standard finite-field basis but are not easy to adapt to bit-serial operation. Bit-serial encoders for Reed-Solomon codes are also known in the prior art (see Berlekamp, U.S. Pat. No. 4,410,989 and Glover, U.S. Pat. No. 4,777,635). The Berlekamp bit-serial encoder is based on the dual basis representation of the finite field while the bit-serial encoder of U.S. Pat. No. 4,777,635 is based on the conventional representation of the finite field. Neither U.S. Pat. No. 4,410,989 nor U.S. Pat. No. 4,777,635 teaches methods for decoding Reed-Solomon codes using bit-serial techniques.
It is typical in the prior art to design encoding and error identification apparatus where n=m=8 bits, where n is the number of bits in a byte and m is the symbol size (in bits) of the Reed-Solomon code. However, this imposes a severe restriction on the information word length: since the total of information bytes plus redundancy symbols must be less than 2^m, no more than 247 information bytes may appear in a single information word if 8 redundancy symbols are to be used. Increasing media densities and decreasing memory costs push for increasing the size of information words. Thus, there is a need to decouple n, the byte size of the information word, from m, the symbol size of the Reed-Solomon code employed.
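For illustration, the arithmetic behind this limit is sketched below; the ten-bit symbol size anticipates the preferred embodiment described later, and the count of eight redundancy symbols is an illustrative assumption rather than a parameter taken from the patent.

    # An RS codeword over GF(2^m) holds at most 2^m - 1 symbols, which caps the
    # information capacity in 8-bit bytes.
    def max_info_bytes(m, redundancy_symbols=8, n=8):
        max_info_symbols = (1 << m) - 1 - redundancy_symbols
        return (max_info_symbols * m) // n

    assert max_info_bytes(8) == 247           # the 8-bit-symbol limit cited above
    assert max_info_bytes(10) == 1268         # 10-bit symbols lift the limit past a kilobyte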
Also, bit-serial finite-field constant multiplier circuits are well known in the prior art. For example, see Glover and Dudley, Practical Error Correction Design for Engineers (Second Edition), pages 112-113, published by Data Systems Technology Corp., Broomfield, Colo. However, these designs require that the most-significant (higher order) bit of the code symbol be presented first in the serial input stream. Using exclusively multipliers with this limitation to implement an error identification circuit results in the bits included in a burst error not being adjacent in the received word symbols. Thus, there is a need for a least-significant-bit first, bit-serial, finite-field constant multiplier.
Prior-art circuits used table look-up to implement the finite-field arithmetic operation of multiplication, which is used in the error-identification computation. Because the look-up table size is established by 2^m, where m is the number of bits in each code symbol, even a modest increase in m results in a substantial increase in the look-up table size. It is possible to reduce the size of the required tables, at the expense of the multiplication computation time, by representing the finite field elements as the concatenation of two elements of a finite field whose size is significantly less than the size of the original field. However, there are situations in which one would like to be able to choose implementations at either of two points on the speed-versus-space tradeoff to accomplish either fast correction using large tables or slower correction using small tables. Thus, there is a need for a way of supporting either implementation of finite-field arithmetic in error correcting computations.
As the recording densities of storage devices increase, the rates of occurrence of soft errors (non-repeating, noise-related errors) and hard errors (permanent defects) increase. Soft errors adversely affect performance while hard errors affect data integrity.
Errors frequently occur in bursts, e.g., due to a noise event of sufficient duration or a media defect of sufficient size to affect more than one bit. It is desirable to reduce the impact of single-burst errors by correcting them on-the-fly, without re-reading or re-transmitting, in order to decrease data access time. Multiple, independent soft or hard errors affecting a single codeword occur with a frequency low enough that performance is not seriously degraded when re-reading or off-line correction is used. Thus, there is a need for the capability to correct a single-burst error in real time and a multiple-burst error in an off-line mode.
Due to market pressure there is a continuous push toward lower manufacturing cost for storage devices. This constrains the ratio of the length of the redundancy polynomial to the length of the information polynomial.
It is thus apparent that there is a need in the art for higher performance, low cost implementations of more powerful Reed-Solomon codes.
FIG. 1A shows the prior art classical example of a Reed-Solomon linear feedback shift register (LFSR) encoder circuit that implements the code generator polynomial
x^4 + c_3·x^3 + c_2·x^2 + c_1·x + c_0
over the finite field GF(2^m). The elements 121, 122, 123, and 124 of the circuit are m-bit wide, one-bit long registers. Data bits grouped into symbols being received on data path 125 are elements of the finite field GF(2^m). Prior to transmitting or receiving, register stages 121, 122, 123, 124 are initialized to some appropriate starting value; symbol-wide logic gate 128 is enabled; and multiplexer 136 is set to connect data path 125 to data/redundancy path 137. On transmit, data symbols from data path 125 are modulo-two summed by symbol-wide EXCLUSIVE-OR gate 126 with the high order register stage 121 to produce a feedback symbol on data path 127. The feedback symbol on data path 127 then passes through symbol-wide logic gate 128 and is applied to finite field constant multipliers 129, 130, 131, and 132. These constant multipliers multiply the feedback symbol by each of the coefficients of the code generator polynomial. The outputs of the multipliers 129, 130, and 131 are applied to symbol-wide summing circuits 133, 134, and 135 between registers 121, 122, 123, and 124. The output of multiplier 132 is applied to the low order register 124.
When the circuit is clocked, registers 121, 122, and 123 take the values at the outputs of the modulo-two summing circuits 133, 134, and 135, respectively. Register 124 takes the value at the output of constant multiplier 132. The operation described above for the first data symbol continues for each data symbol through the last data symbol. After the last data symbol is clocked, the REDUNDANCY TIME signal 138 is asserted, symbol-wide logic gate 128 is disabled, and symbol-wide multiplexer 136 is set to connect the output of the high order register 121 to the data/redundancy path 137. The circuit receives 4 additional clocks to shift the check symbols to the data/redundancy path 137. The result of the operation described above is to divide an information polynomial I(x) by the code generator polynomial G(x) to generate a redundancy polynomial R(x) and to append the redundancy polynomial to the information polynomial to obtain a codeword polynomial C(x). Circuit operation can be described mathematically as follows:
R(x) = (x^4 · I(x)) mod G(x)
C(x) = x^4 · I(x) + R(x)
where + denotes the modulo-two sum and · denotes finite-field multiplication.
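For illustration only, the division just described can be modeled in software. The sketch below is a behavioral model, not the patent's circuit: it builds the GF(2^4) field generated by x^4 + x + 1 (the example field of Table 1 later in this description), forms a degree-4 code generator polynomial, and performs the LFSR division R(x) = (x^4 · I(x)) mod G(x). The function names (gf_mul, gen_poly, rs_encode) and the choice of the roots α^0 through α^3 for G(x) are illustrative assumptions and are not taken from the patent.

    # Behavioral sketch only -- not the patent's circuit. Field: GF(2^4) generated by
    # x^4 + x + 1; the code generator roots a^0..a^3 are an illustrative assumption.
    M = 4
    FIELD_POLY = 0b10011                 # x^4 + x + 1
    SIZE = 1 << M

    EXP = [0] * (2 * SIZE)               # antilog table: EXP[i] = a^i
    LOG = [0] * SIZE                     # log table: LOG[a^i] = i
    e = 1
    for i in range(SIZE - 1):
        EXP[i] = e
        LOG[e] = i
        e <<= 1
        if e & SIZE:
            e ^= FIELD_POLY
    for i in range(SIZE - 1, 2 * SIZE):
        EXP[i] = EXP[i - (SIZE - 1)]

    def gf_mul(a, b):
        """Finite-field multiplication via the log/antilog tables."""
        return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

    def gen_poly(nroots, first_root=0):
        """G(x) = (x + a^first_root) ... (x + a^(first_root + nroots - 1)),
        returned with the low-order coefficient first."""
        g = [1]
        for r in range(first_root, first_root + nroots):
            root = EXP[r]
            g = [a ^ b for a, b in zip([0] + g, [gf_mul(root, c) for c in g] + [0])]
        return g

    def rs_encode(data, g):
        """Emulate the internal-XOR LFSR division: R(x) = (x^r * I(x)) mod G(x).
        'data' is a list of m-bit information symbols, high-order symbol first;
        the returned redundancy symbols are also high-order first."""
        r = len(g) - 1
        reg = [0] * r                    # reg[0] ~ low-order stage, reg[-1] ~ high-order stage
        for d in data:
            fb = d ^ reg[-1]             # symbol-wide XOR of data with the high-order stage
            reg = [gf_mul(fb, g[0])] + [reg[k - 1] ^ gf_mul(fb, g[k]) for k in range(1, r)]
        return reg[::-1]

    def poly_eval(symbols_high_first, x):
        """Horner evaluation of a polynomial whose coefficients are listed high-order first."""
        acc = 0
        for c in symbols_high_first:
            acc = gf_mul(acc, x) ^ c
        return acc

    G = gen_poly(4)                                # four redundancy symbols: a distance-5 code
    data = [0x1, 0xA, 0x3, 0x7, 0x0, 0xF]          # six 4-bit information symbols
    codeword = data + rs_encode(data, G)           # C(x) = x^4 * I(x) + R(x)
    assert all(poly_eval(codeword, EXP[j]) == 0 for j in range(4))   # every root of G divides C

The same input/output behavior applies to the external-XOR forms discussed next; only the register topology differs.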
FIG. 1B shows a prior art example of an external-XOR Reed-Solomon LFSR encoder circuit that implements the same code generator polynomial as is implemented in FIG. 1A, though FIG. 1A uses the internal-XOR form of LFSR. Internal-XOR LFSR circuits always have an XOR (or parity tree or summing) circuit between shift register stages containing different powers of X, whereas external-XOR circuits do not have a summing circuit between all such shift register stages. In addition, internal-XOR LFSR circuits always shift data toward the stage holding the highest power of X, whereas external-XOR circuits always shift data toward the stage holding the lowest power of X. External-XOR LFSR circuits are known to the prior art (for example, see Glover and Dudley, Practical Error Correction Design for Engineers, pages 32-34, 181, 296 and 298).
FIG. 2 is a block diagram of another prior art encoder and time domain syndrome generation circuit which operates on m-bit symbols from GF(2^m), k bits per clock cycle, where k evenly divides m. The circuit of FIG. 2 employs the conventional finite field representation and performs the same function as the encoder shown in FIG. 1 except that it is easily adapted to operate on k bits of an m-bit symbol per clock, where k evenly divides m.
The circuit of FIG. 2 utilizes n registers, here represented by 160, 161, 162, and 163, where n is the degree of the code generator polynomial. The input and output paths of each register are k bits wide. The depth (number of delay elements between input and output) of each register is m/k. When k is less than m, each of the registers 160, 161, 162, and 163 functions as k independent shift registers, each m/k bits long. Prior to transmitting or receiving, all registers 160, 161, 162, and 163 are initialized to some appropriate starting value; logic gates 164 and 165 are enabled; and multiplexer 166 is set to pass data from logic gate(s) 165 to data/redundancy path 167. On transmit, data symbols from data path 168 are modulo-two summed by EXCLUSIVE-OR gate(s) 169 with the output of the high order register 160, k bits at a time, to produce a feedback signal at 170. The feedback signal is passed through gate(s) 164 to the linear network 171 and to the next to highest order register 161. The output of register 161 is fed to the next lower order register 162 and so on. All registers other than the highest order register 160 also have outputs that go directly to the linear network 171. Once per m-bit data symbol the output of linear network 171 is transferred, in parallel, to the high order register 160.
When k is equal to m, the linear network 171 is comprised only of EXCLUSIVE-OR gates. When k is not equal to m, the linear network 171 also includes linear sequential logic components. On each clock cycle, each register is shifted to the right one position and the leftmost positions of each register take the values at their inputs. The highest order register 160 receives a new parallel-loaded value from the linear network 171 once per m-bit data symbol. Operation continues as described until the last data symbol on data path 168 has been completely clocked into the circuit. Then the REDUNDANCY TIME signal 175 is asserted, which disables gates 164 and 165 (because of INVERTER circuit 178) and changes multiplexer 166 to pass the check symbols (k bits per clock) from the output of the modulo-two summing circuit 169 to the data/redundancy path 167. Clocking of the circuit continues until all redundancy symbols have been transferred to the data/redundancy path 167. The result of the operation described above is that the information polynomial I(x) is divided by the code generator polynomial G(x) to generate a redundancy polynomial R(x) which is appended to the information polynomial I(x) to obtain the codeword polynomial C(x). This operation can be described mathematically as follows:
R(x) = (x^m · I(x)) mod G(x)
C(x) = x^m · I(x) ⊕ R(x)
In receive mode, the circuit of FIG. 2 operates as for a transmit operation except that after all data symbols have been clocked into the circuit, RECEIVE MODE signal 176, through OR gate 177, keeps gate(s) 165 enabled while REDUNDANCY TIME signal 175 disables gate(s) 164 and changes multiplexer 166 to pass time domain syndromes from the output of modulo-two summing circuit 169 to the data/redundancy path 167. The circuit can be viewed as generating transmit redundancy (check bits) during transmit, and receive redundancy during receive. Then the time domain syndromes can be viewed as the modulo-two difference between transmit redundancy and receive redundancy. The time domain syndromes are decoded to obtain error locations and values which are used to correct data. Random access memory (RAM) could be used as a substitute for registers 160, 161, 162, and 163.
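In software terms, the receive-mode behavior just described amounts to re-encoding the received data symbols and XORing the recomputed redundancy with the redundancy actually received; the result is zero exactly when no error is detected. A minimal sketch of that view follows, reusing gf_mul, gen_poly, and rs_encode from the encoder sketch above; it models behavior only, not the FIG. 2 register arrangement.

    # Time-domain syndromes as the modulo-two difference between recomputed and
    # received redundancy. Reuses gf_mul/gen_poly/rs_encode from the sketch above.
    G = gen_poly(4)
    sent_data = [0x1, 0xA, 0x3, 0x7, 0x0, 0xF]
    sent = sent_data + rs_encode(sent_data, G)

    received = list(sent)
    received[2] ^= 0x5                              # corrupt one data symbol

    rx_data, rx_red = received[:-4], received[-4:]
    syndromes = [a ^ b for a, b in zip(rs_encode(rx_data, G), rx_red)]
    assert any(syndromes)                           # non-zero syndromes: an error was detected

    clean_red = sent[-4:]
    assert not any(a ^ b for a, b in zip(rs_encode(sent_data, G), clean_red))   # error-free case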
SUMMARY OF THE INVENTION
Apparatus and methods are disclosed for providing an improved system for encoding and decoding of Reed-Solomon and related codes. The system employs a k-bit-serial shift register for encoding and residue generation. For decoding, a residue is generated as data is read. Single-burst errors are corrected in real time by a k-bit-serial burst trapping decoder that operates on this residue. Error cases greater than a single burst are corrected with a non-real-time firmware decoder, which retrieves the residue and converts it to a remainder, then converts the remainder to syndromes, and then attempts to compute error locations and values from the syndromes. In the preferred embodiment, a new low-order-first, k-bit-serial, finite-field constant multiplier is employed within the burst trapping circuit. Also, code symbol sizes are supported that need not equal the information byte size. Also, the implementor of the methods disclosed may choose time-efficient or space-efficient firmware for multiple-burst correction.
In accordance with the foregoing, an object of the present invention is to reduce the implementation cost and complexity of real-time correction of single-burst errors by employing a k-bit-serial external-XOR burst-trapping circuit which, in the preferred embodiment, uses a least-significant-bit first, finite-field constant multiplier.
Another object of the present invention is to provide a high-performance, cost-efficient implementation for Reed-Solomon codes that allows the same LFSR to be used for both encoding and decoding of Reed-Solomon codes; more particularly, the same k-bit-serial, external-XOR LFSR circuit used in encoding operations is used in decoding operations to generate a residue that can subsequently be transformed into the time-domain syndrome (or remainder) known in the prior art (see Glover et al, U.S. Pat. No. 4,839,896).
Another object of the present invention is to provide a polynomial determining a particular Reed-Solomon code that supports a high degree of data integrity with a minimum of media capacity overhead and that is suitable for applications including, but not limited to, on-the-fly correction of magnetic disk storage.
Another object of the invention is to provide a means for extending the error correction power of an error correction implementation by implementing erasure correction techniques.
Another object is to provide an implementation of Reed-Solomon code encoding and decoding particularly suitable for implementation in an integrated circuit.
Another object is to support error identification (i.e., determining the location(s) and pattern(s) of error(s)) of single-burst errors exceeding m+1 bits in length, where m is the code symbol size, using a k-bit-serial error-trapping circuit.
Another object of the present invention is to provide a cost-efficient non-real-time implementation for multiple-burst error correction which splits the correction task between hardware and firmware.
Another object is to support both time-efficient and firmware space-efficient computation for multiple-burst error identification (i.e., determination of error locations and values) with the choice depending on the predetermined preference of the implementor of the error identification apparatus.
Another object is a method for mapping between hardware finite-field computations and software finite-field computations within the same error-identification computation such that the hardware computations reduce implementation cost and complexity and the software computations reduce firmware table space.
Another object is to reduce the implementation cost and complexity of the storage or transmission control circuitry by allowing the LFSR used in encoding and residue generation to also be used for encoding and decoding of information according to other codes, such as computer-generated codes or cyclic redundancy check (CRC) codes.
Another object is to support larger information polynomials without excessive amounts of redundancy by eliminating the prior-art requirement that the number of bits in the information word be a multiple of the size of the code symbol by including one or more pad fields in the codeword polynomial, i.e., the word generated by concatenating the information polynomial with the redundancy polynomial and with the pad field(s).
Another object is to achieve the above objectives in a manner that supports improved protection against burst errors and longer information polynomials by allowing the code symbols of the information polynomial to be interleaved, as is known in the prior art, among a plurality of codeword polynomials, each containing its own independent redundancy polynomial.
These and other objects of the invention will become apparent from the detailed disclosures following herein.





BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1A is a block diagram showing a prior art, symbol-wide linear feedback shift register (LFSR) configured with internal XOR gates that generates the redundancy polynomial for a distance 5 Reed-Solomon code.
FIG. 1B is a block diagram showing a prior art, symbol-wide LFSR configured with external XOR gates that generates the redundancy polynomial for a distance 5 Reed-Solomon code.
FIG. 2 is a block diagram showing a prior art, k-bit-serial, LFSR configured with external XOR gates with equivalent function to that of FIG. 1B.
FIG. 3 is a block diagram showing an application of the present invention in the encoding, decoding, and error correction of information transferred to/from a host computer and a storage/transmission device.
FIG. 4 is a logic diagram of the LFSR for an external-XOR, 1-bit-serial encoder and residue generator employing a high-order first, finite-field constant multiplier.
FIG. 5 is a logic diagram of the LFSR for an external-XOR, 2-bit-serial encoder and residue generator employing a high-order first, finite-field constant multiplier.
FIG. 6 is a logic diagram of the LFSR for an external-XOR, 1-bit-serial, single-burst error trapping decoder employing a high-order first, finite-field constant multiplier.
FIG. 7 is a logic diagram of the LFSR for an external-XOR, 2-bit-serial, single-burst error trapping decoder employing a high-order first, finite-field constant multiplier.
FIG. 8 is a logic diagram template of a 1-bit-serial low-order first, finite-field constant multiplier.
FIG. 9 is a logic diagram template of a 1-bit-serial, low-order first, finite-field, double-input, double-constant multiplier.
FIG. 10 is a logic diagram template of a 2-bit-serial, low-order first, finite-field, constant multiplier.
FIG. 11 is a logic diagram template of a 2-bit-serial, low-order first, finite-field, double-input, double constant multiplier.
FIG. 12 is a logic diagram for the LFSR of an external-XOR 1-bit-serial, low-order first, single-burst error trapping decoder employing a low-order first, finite-field constant multiplier.
FIG. 13 is a logic diagram for the LFSR of an external-XOR 2-bit-serial, low-order first, single-burst error trapping decoder employing a low-order first, finite-field constant multiplier.
FIG. 14 is a logic diagram showing how the logic diagram of FIG. 12 can be modified to support correcting burst errors of length greater than m+1 bits.
FIG. 15 is a logic diagram showing how the logic diagram of FIG. 13 can be modified to support correcting burst errors of length greater than m+1 bits.
FIG. 16 is a chart showing the use of pre-pad and post-pad fields and the correspondence between n-bit bytes and m-bit symbols in the information, redundancy, and codeword polynomials.
FIG. 17 is a logic diagram for the LFSR of an external-XOR two-way interleaved, 1-bit-serial encoder and residue generator.
FIG. 18 is a logic diagram for the LFSR of an external-XOR two-way interleaved, 1-bit-serial, single-burst error trapping decoder employing a low-order first, finite-field constant multiplier.
FIGS. 19A through 19D illustrate the use of a low-order first, finite-field constant multiplier in burst trapping. In particular, FIG. 19A is a chart illustrating bit-by-bit syndrome reversal. FIG. 19B is a chart showing symbol-by-symbol syndrome reversal. FIG. 19C is a chart illustrating a burst error that spans two adjacent symbols in the recording or transmission media. FIG. 19D is a chart illustrating the same burst error fragmented in the same symbols as transformed by symbol-by-symbol reversal.
FIG. 20 shows a read/write timing diagram.
FIG. 21 shows a correction mode timing diagram.
FIG. 22 shows a timing diagram for the A_CLK.
FIG. 23 illustrates the steps required to fetch, assemble, map to an "internal" finite field with subfield properties, and separate the subfield components of the ten-bit symbols of a residue polynomial T(x) stored in a memory of width eight bits.
FIG. 24 illustrates the steps required to calculate the coefficients of S(x).
FIG. 25 illustrates the steps required to iteratively generate the error locator polynomial σ(x).
FIG. 26 illustrates the steps required to locate and evaluate errors by searching for roots of σ(x).
FIG. 27 illustrates the steps required to divide σ(x) by (x ⊕ α^L), compute the error value E, and adjust the coefficients of S(x).
FIG. 28 illustrates the steps required to transfer control to the appropriate special error location subroutine.
FIG. 29 illustrates the steps required to compute a root X, and its log L, of a quadratic equation in a finite field.
FIG. 30 illustrates the steps required to compute a root X, and its log L, of a cubic equation in a finite field.
FIG. 31 illustrates the steps required to compute the log L of one of the four roots of a quartic equation in a finite field.
FIG. 32 illustrates the steps required to analyze a set of up to four symbol errors for compliance with a requirement that there exist at most a single burst up to twenty-two bits in length or two bursts, each up to eleven bits in length, where the width of a symbol is ten bits.
FIG. 33 illustrates the steps required to analyze a set of two adjacent symbol errors for compliance with a requirement that there exists a single burst up to eleven bits in length, where the width of a symbol is ten bits.
FIG. 34 illustrates the steps required to analyze a set of three or four adjacent symbol errors for compliance with a requirement that there exist at most a single burst up to twenty-two bits in length or two bursts, each up to eleven bits in length, where the width of a symbol is ten bits.
FIGS. 35 through 132 comprise three groups of figures which illustrate three different embodiments of the present invention, namely FIGS. 35 through 65 illustrating Version 1, FIGS. 66 through 97 illustrating Version 2 and FIGS. 98 through 132 illustrating Version 3, each version being an alternate embodiment of the invention.





DESCRIPTION OF THE PREFERRED EMBODIMENT
The following description is of the best presently contemplated mode of carrying out the instant invention. This description is not to be taken in a limiting sense but is made merely for the purpose of describing the general principles of the invention. The scope of the invention should be determined with reference to the appended claims.
SYSTEM BLOCK DIAGRAM
Referring to FIG. 3, a data controller 100 having a host interface 102 is connected to a host computer 104. The data controller 100 also has a device interface 101 which connects the data controller 100 to an information storage/transmission device 108. In the process of writing data onto the storage/transmission device 108, an information word (or polynomial) from the host computer 104 is transferred to the data controller 100 through an information channel 103, through the host interface 102, through a data buffer manager 107, and into a data buffer 106. The information word is then transferred through a sequencer 109 into an encoder and residue generator circuit 110 where the redundancy word is created. At the same time the information word is transferred into the encoder 110, it is transferred in parallel through switch 112, through device interface 101, and through device information channel 116, to the information storage/transmission device 108. After the information word is transferred as described above, switch 112 is changed and the redundancy word is transferred from the encoder 110, through switch 112, through device interface 101, through device information channel 116, and written on information storage/transmission device 108.
For reading information from information storage/transmission device 108, the process is reversed. A received polynomial from information storage/transmission device 108 is transferred through device information channel 115, through the device interface 101, through switch 111 into the residue generator 110. At the same time the received word is being transferred into the residue generator 110, it is transferred in parallel through sequencer 109, and through data buffer manager 107 into the data buffer 106. After the received word has been transferred into the data buffer 106, if the residue is non-zero, then while the next received codeword polynomial is being transferred from storage/transmission device 108, the residue is transferred from the residue generator 110 to the burst trapping decoder 113, which then attempts to identify the location and value of a single-burst error. If this attempt succeeds, then the error location and value are transferred to the data buffer manager 107, which then corrects the information polynomial in the data buffer 106 using a read-modify-write operation. If this attempt fails, then the processor 105 can initiate a re-read of the received codeword polynomial from information storage/transmission device 108, can take other appropriate action, or can use the residue bits from the residue generator 110 to attempt to correct a double-burst error in the information word in data buffer 106. After correction of any errors in the data buffer 106, the data bits are transferred through the host interface 102, through the information channel 118 to the host computer 104.
ENCODER AND RESIDUE GENERATOR
FIG. 4 shows an external-XOR LFSR circuit using bit-serial techniques, including a high-order first, bit-serial multiplier, which is shared for both encoding and residue generation functions. Similar external-XOR bit-serial LFSR circuits which are shared for encoding and remainder generation are known in the prior art (see U.S. Pat. No. 4,777,635 for example). These prior art circuits transform a residue within the LFSR to a remainder by disabling feedback in the LFSR but continuing to clock the LFSR during the redundancy time of a read. In contrast, the circuit of FIG. 4 continues, by leaving feedback enabled, to develop a residue within the shift register during the redundancy time of a read. At the end of reading information and redundancy, the residue can be transferred to a burst-trapping circuit for real-time correction or the residue can be transferred to firmware for non-real-time correction.
The linear network comprises a high-order-first, multiple input, bit-serial multiplier. This type of multiplier is known in the prior art (see Glover, U.S. Pat. No. 4,777,635 for example).
The equations for z0 through z3 are established by the coefficients of the code generator polynomial.
FIG. 5 shows an external-XOR LFSR circuit using 2-bit-serial techniques, including a high-order first, 2-bit-serial multiplier, which is shared for both encoding and residue generation functions. Similar external-XOR 2-bit-serial LFSR circuits which are shared for encoding and remainder generation are known in the prior art (see U.S. Pat. No. 4,777,635 for example). These prior art circuits transform a residue within the LFSR to a remainder by disabling feedback in the LFSR but continuing to clock the LFSR during the redundancy time of a read. The circuit of FIG. 5 continues, by leaving feedback enabled, to develop a residue within the shift register during the redundancy time of a read. At the end of a read, the residue can be transferred to a burst-trapping circuit for real-time correction or the residue can be transferred to firmware for non-real-time correction.
HIGH-ORDER FIRST BURST TRAPPING DECODER
FIG. 6 shows an external-XOR LFSR circuit using bit-serial techniques, including a high-order first, bit-serial multiplier which accomplishes burst-trapping for single-burst errors. The linear network is a multiple input, high-order first, bit-serial multiplier. Such multipliers are known in the prior art (see Glover, U.S. Pat. No. 4,777,635). The use of such a multiplier in the external-XOR, bit-serial burst-trapping circuit is not taught in references known to applicants.
This type of burst-trapping circuit can be used within the current invention to accomplish real-time correction of single bursts that span two m-bit symbols.
Circuit operation is as follows. First, the circuit is parallel loaded with a residue generated within a residue generator such as is shown in FIGS. 4 and 5. The residue from a circuit such as shown in FIGS. 4 and 5 must be symbol-by-symbol reversed (i.e., as shown in FIG. 19B, the position of each symbol is flipped end-to-end, the first symbol becoming the last, the last becoming the first and so on) as it is parallel loaded into the circuit of FIG. 6. Next the circuit of FIG. 6 is clocked. A modulo 4 (modulo m for the general case) counter keeps track of the clock count modulo 4 as clocking occurs. Points A through F are OR'd together and monitored as counting occurs. If this monitored OR result is zero for the four successive clocks of a symbol (counter value 0 through counter value m-1), then a single-burst spanning two symbols has been isolated. When this happens, control circuitry stops the clock and the error pattern is in bit positions B3 through B0 and B31 through B28. The number of clocks counted up until the clock is stopped is equal to the error location in bits plus some delta, where the delta is fixed for a given implementation and is easily computed empirically.
FIG. 7 shows an external-XOR LFSR circuit using 2-bit-serial techniques, including a high-order first, 2-bit-serial multiplier which accomplishes burst-trapping for single-burst errors. The linear network is a multiple input, high-order first, 2-bit-serial multiplier. Such multipliers are known in the prior art (see Glover, U.S. Pat. No. 4,777,635). The use of such a multiplier in the external-XOR 2-bit-serial burst-trapping circuit is not taught in references known to applicants.
This type of burst-trapping circuit can be used within the current invention to accomplish real-time correction of single bursts that span two m-bit symbols.
Circuit operation is as follows. First, the circuit is parallel loaded with a residue generated within a residue generator such as is shown in FIGS. 4 and 5. The residue from a circuit such as shown in FIGS. 4 and 5 must be symbol-by-symbol reversed as it is parallel loaded into the circuit of FIG. 7. Next the circuit of FIG. 7 is clocked. A modulo 2 (modulo (m/2) for the general case) counter keeps track of the clock count modulo 2 as clocking occurs. Points C through N are OR'd together and monitored as counting occurs. If this monitored OR result is zero for the two successive clocks of a symbol (counter value 0 through counter value m/2-1), then a single-burst spanning two symbols has been isolated. When this happens, control circuitry stops the clock and the error pattern is in bit positions B3 through B0 and B31 through B28. The number of clocks counted up until the clock is stopped is equal to the error location in (bits*2) plus some delta, where the delta is fixed for a given implementation and is easily computed empirically.
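At the symbol level, the trapping operation can be modeled as repeatedly multiplying the remainder of the received word by x modulo G(x); when every nonzero coefficient falls within a window of adjacent low-order symbol positions, the burst has been trapped, and the shift count gives its location. The sketch below is a behavioral model of that idea only: it works on a symbol-wise remainder rather than on the reversed residue the k-bit-serial hardware of FIGS. 6 and 7 uses, and it reuses gf_mul, gen_poly, and rs_encode from the encoder sketch above. A full-length (15, 11) code over GF(2^4) is used so that the cyclic shifts are natural; all names and data values are illustrative.

    # Behavioral model of single-burst trapping, not the FIG. 6/7 circuit.
    # Reuses gf_mul/gen_poly/rs_encode from the encoder sketch above.
    def poly_mod(symbols_high_first, g):
        """Remainder of the received polynomial modulo G(x), low-order coefficient first."""
        r = len(g) - 1
        reg = [0] * r
        for d in symbols_high_first:
            fb = reg[-1]
            reg = [d ^ gf_mul(fb, g[0])] + [reg[k - 1] ^ gf_mul(fb, g[k]) for k in range(1, r)]
        return reg

    def trap_and_correct(received, g, window=2):
        """Trap a single burst confined to 'window' adjacent symbols and correct it."""
        n = len(received)                            # full code length (2^m - 1 here)
        s = poly_mod(received, g)
        for i in range(n):
            if any(s) and not any(s[window:]):       # burst trapped in the low-order window
                fixed = list(received)
                for j in range(window):
                    deg = (j - i) % n                # degree of this error symbol in the codeword
                    fixed[n - 1 - deg] ^= s[j]
                return fixed
            fb = s[-1]                               # advance: s(x) <- x * s(x) mod G(x)
            s = [gf_mul(fb, g[0])] + [s[k - 1] ^ gf_mul(fb, g[k]) for k in range(1, len(s))]
        return None                                  # not correctable as a single short burst

    G = gen_poly(4)                                  # (15, 11) code over GF(2^4)
    data = [3, 14, 7, 0, 9, 1, 12, 5, 2, 8, 11]
    sent = data + rs_encode(data, G)
    received = list(sent)
    received[6] ^= 0b0011                            # burst straddling the boundary between
    received[7] ^= 0b1000                            #  two adjacent 4-bit symbols
    assert trap_and_correct(received, G) == sent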
LOW-ORDER FIRST CONSTANT MULTIPLIER
FIG. 8 is a template for the logic diagram of a 1-bit-serial, low-order first, constant multiplier for the example finite field given in Table 1.
TABLE 1
Vector Representation of Elements of Finite Field GF(2^4)
Established by the Field Generator Polynomial x^4 + x + 1

  FINITE FIELD    VECTOR REPRESENTATION
  ELEMENT         α^3   α^2   α^1   α^0
  0                0     0     0     0
  α^0              0     0     0     1
  α^1              0     0     1     0
  α^2              0     1     0     0
  α^3              1     0     0     0
  α^4              0     0     1     1
  α^5              0     1     1     0
  α^6              1     1     0     0
  α^7              1     0     1     1
  α^8              0     1     0     1
  α^9              1     0     1     0
  α^10             0     1     1     1
  α^11             1     1     1     0
  α^12             1     1     1     1
  α^13             1     1     0     1
  α^14             1     0     0     1
Given an input field element W, the circuit of FIG. 8 computes an output field element Y, where Y = α^i · W and where α^i is a predetermined constant chosen from the example finite field. The dotted lines in FIG. 8 indicate predetermined connections that are present or absent depending on the value chosen for α^i. To determine these connections, compute α^(m-1) · α^i and look up its vector representation in Table 1. For example, if α^i is chosen to be α^7, then α^(m-1) · α^i = α^(4-1) · α^7 (because m, the number of bits per symbol, is 4 for the example finite field), which equals α^10. The vector representation of α^10 is 0111, which means that the XOR gates 202, 203, and 204 would have connections from the output of the W shift register 196, but XOR gate 201 would not have such a connection. In this manner, the template of FIG. 8 can be used to generate a constant multiplier circuit for any constant element of the example finite field.
The operation of the circuit of FIG. 8 is as follows: The 4-bit representation of W is assumed to be initially present in the four one-bit shift register stages W_0 through W_3, with the LSB of the representation in W_0. The one-bit shift register stages Y_0 through Y_3 are assumed to be initially zero. The W shift register 197-200 and the Y shift register 190-193 are each clocked synchronously 4 times, 4 being the symbol size. Then the value of Y can be read from the Y shift register.
Because of the specific feedback connections from the Y_0 bit (i.e., shift register stage 193) to the XOR gates 201 and 204, the logic diagram template of FIG. 8 only applies to the example finite field representation given in Table 1. To generalize FIG. 8 to other finite fields, it is necessary to add or remove bits in shift register Y and to add or remove corresponding XOR gates so that there are m bits and m XOR gates or trees, where m is the size of the symbols of the finite field chosen. To generalize FIG. 8 to other finite fields or to other representations of the finite field of order 2^4, the feedback points from Y_0 must be predetermined by the vector representation of α^(2^m - 2), i.e., the "last" element of the chosen finite field. Similar to the method in which the connections from the W shift register to the XOR gates are determined, connect the output of Y_0 to the input of the XOR gate preceding stage Y_i if and only if bit i of the vector representation of α^(2^m - 2) is 1.
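For illustration, the connection rule just described is easy to tabulate in software. The sketch below reuses EXP, LOG, and gf_mul from the encoder sketch above; fig8_masks computes the W-input and Y_0-feedback connection masks exactly as the text prescribes, and serial_mult_low_first is a behavioral, low-order-bit-first model of the multiplication (an accumulate-one-bit-per-clock view, not the FIG. 8 gate network). Both function names are illustrative.

    # Connection masks per the text: W connections from vec(a^(m-1) * a^i),
    # Y_0 feedback connections from vec(a^(2^m - 2)).
    def fig8_masks(i, m=4):
        w_mask = gf_mul(EXP[m - 1], EXP[i])          # bit b set => W feeds the XOR gate before Y_b
        fb_mask = EXP[(1 << m) - 2]                  # bit b set => Y_0 feeds the XOR gate before Y_b
        return w_mask, fb_mask

    assert fig8_masks(7) == (0b0111, 0b1001)         # the a^7 example worked in the text

    def serial_mult_low_first(w, i, m=4):
        """Behavioral model only: accumulate w_t * (a^i * a^t) for t = 0..m-1,
        taking the low-order bit of W first. The result equals a^i * W."""
        partial, acc = EXP[i], 0
        for t in range(m):
            if (w >> t) & 1:                         # bit t of W, LSB presented first
                acc ^= partial
            partial = gf_mul(partial, EXP[1])        # advance the partial product by a each clock
        return acc

    assert all(serial_mult_low_first(w, 7) == gf_mul(w, EXP[7]) for w in range(16))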
FIG. 9 is a template for the logic diagram of a 1-bit-serial, double-input, double-constant, multiply-and-sum circuit for the example finite field given in Table 1. Given two input field elements W and X, the circuit of FIG. 9 computes an output field element Y, where Y = α^i · X + α^j · W and α^i and α^j are predetermined constants chosen from the example finite field. The dotted lines in FIG. 9 indicate predetermined connections that are present or absent depending on the values chosen for α^i and α^j. The connections for each of α^i and α^j are independently determined in the same manner discussed for determining those for α^i in FIG. 8, i.e., by using the vector representations of the constants. In this manner, the template of FIG. 9 can be used to generate a constant multiplier circuit for any two constant elements of the example finite field.
The operation of the circuit of FIG. 9 is similar to that of FIG. 8 and is as follows: The 4-bit representation of W is assumed to be initially present in the four one-bit shift register stages W_0 through W_3, with the LSB of the representation in W_0. Similarly, the stages X_0 through X_3 are initialized with the representation of X. The one-bit shift register stages Y_0 through Y_3 are assumed to be initially zero. The X, W, and Y shift registers are each clocked synchronously 4 times, 4 being the symbol size. Then the value of Y can be read from the Y shift register.
As was the case in FIG. 8, the feedback connections in FIG. 9 from the Y_0 bit to two of the four XOR gates limit the circuit of FIG. 9 such that it applies only to the example finite field representation given in Table 1. To generalize FIG. 9 to other finite fields, it is necessary to add or remove bits in shift registers X and Y and to add or remove corresponding XOR gates so that there are m bits and gates, where m is the size of the symbols of the finite field chosen. The selection of the feedback points is the same as for FIG. 8, i.e., connect the output of Y_0 to the input of the XOR gate preceding stage Y_i if and only if bit i of the vector representation of α^(2^m - 2) is 1.
In the special case where α^i equals α^j, the problem simplifies to the single-constant case and the circuit of FIG. 9 is unnecessarily complex. In this case,
Y = α^i · X + α^i · W
Y = α^i · (X + W)
Thus, we need merely XOR the outputs of W_0 and X_0 and use that result in place of the output of the W_0 stage 200 of FIG. 8.
The above method for designing 1-bit-serial, low-order first, finite-field, double-input, double-constant, multiply-and-sum circuits can be generalized for any number of inputs and constants. The connections from each input shift register are independent of one another, and each depends only on the multiplier constant chosen for that input.
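A two-line numeric check of the multiply-and-sum function and of the single-constant simplification above, again reusing gf_mul and EXP from the encoder sketch; the values of i, j, X, and W are arbitrary illustrative choices.

    # Y = a^i * X + a^j * W via table multiplication; when i == j it collapses to a^i * (X + W).
    i, j, X, W = 7, 3, 0b1010, 0b0110
    Y = gf_mul(EXP[i], X) ^ gf_mul(EXP[j], W)                               # double-constant case
    assert gf_mul(EXP[i], X) ^ gf_mul(EXP[i], W) == gf_mul(EXP[i], X ^ W)   # i == j simplification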
FIG. 10 is a template for the logic diagram of a 2-bit-serial, constant multiplier for the example finite field given in Table 1. Like the circuit of FIG. 8, given an input field element W, the circuit of FIG. 10 computes an output field element Y, where Y = α^i · W and α^i is a predetermined constant chosen from the example finite field. Unlike the circuit of FIG. 8, which is 1-bit-serial, the circuit of FIG. 10 accepts two adjacent bits of W during each clock cycle. The dotted lines in FIG. 10 indicate predetermined connections from W_0 243 and W_1 242 that are present or absent depending on the value chosen for α^i.
To determine the connections from W_0 243, compute α^(m-2) · α^i and look up its vector representation in Table 1. To determine the connections from W_1 242, compute α^(m-1) · α^i and look up its vector representation in Table 1. For example, if α^i is chosen to be α^7, then α^(m-2) · α^i = α^(4-2) · α^7 (because m, the number of bits per symbol, is 4 for the example finite field), which equals α^9. Similarly, α^(m-1) · α^i = α^10. The vector representation of α^9 is 1010, which means that connections 245 and 249 would be made from the output of W_0 243 to the XOR gates preceding Y_3 and Y_1. Similarly, the vector representation of α^10 is 0111, which means that the connections 246, 248, and 250 would be made from the output of W_1 242 to the XOR gates preceding Y_2, Y_1, and Y_0. In this manner, the template of FIG. 10 can be used to generate a constant multiplier circuit for any predetermined element of the example finite field.
The operation of the circuit of FIG. 10 is as follows: The 4-bit representation of W is assumed to be initially present in the four one-bit shift register stages W_0 through W_3, with the LSB of the representation in W_0. The one-bit shift register stages Y_0 through Y_3 are assumed to be initially zero. The W shift register 240-243 and the Y shift register are each clocked synchronously twice, 2 being the symbol size (i.e., 4) divided by k (i.e., 2). Then the value of Y can be read from the Y shift register.
The logic diagram template of FIG. 10 only applies to the example finite field representation given in Table 1, because of the feedback connections from the Y_0 and Y_1 bits to the XOR gates preceding Y_3, Y_2, and Y_0. To generalize FIG. 10 to other finite fields, it is necessary to add or remove bits in shift register Y and to add or remove corresponding XOR gates so that there are m bits and gates, where m is the size of the symbols of the finite field chosen. To select the feedback points from Y_0, use the vector representation of α^(2^m - 3), i.e., similar to the method in which the connections from the W shift register to the XOR gates are determined, connect the output of Y_0 to the input of the XOR gate preceding stage Y_i if and only if bit i of the vector representation of α^(2^m - 3) is 1. Use the vector representation of α^(2^m - 2) to determine the feedback from Y_1.
The above method of designing k-bit-serial, low-order first, constant multipliers with k = 2 can be generalized for any k up to the symbol size m such that k evenly divides m. The input connections from each stage of W, W_j for 0 ≤ j ≤ k-1, are determined as discussed above by the vector representation of the finite-field element given by the following formula:
α^i · α^(m-k+j)
The feedback connections from each stage of Y, Y_j for 0 ≤ j ≤ k-1, are determined as discussed above by the vector representation of the finite-field element given by the following formula:
α^(2^m - k + j - 1)
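These two formulas can be applied directly in software. The sketch below (reusing EXP and gf_mul from the encoder sketch above) computes the connection masks for an arbitrary k; for k = 1 it reproduces the FIG. 8 masks and for k = 2 the FIG. 10 masks worked in the text. The function name is illustrative.

    # Connection masks for a k-bit-serial, low-order first constant multiplier,
    # taken straight from the two formulas above.
    def kbit_masks(i, k, m=4):
        w = [gf_mul(EXP[i], EXP[m - k + j]) for j in range(k)]        # connections from W_j
        fb = [EXP[(1 << m) - k + j - 1] for j in range(k)]            # feedback connections from Y_j
        return w, fb

    assert kbit_masks(7, 1) == ([0b0111], [0b1001])                   # matches the FIG. 8 example
    assert kbit_masks(7, 2) == ([0b1010, 0b0111], [0b1101, 0b1001])   # matches the FIG. 10 example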
FIG. 11 is a logic diagram template of a 2-bit-serial, low-order first, finite-field, double-input, double-constant multiply-and-sum circuit. It will be appreciated that the circuit of FIG. 11 essentially combines the features of the circuits of FIGS. 9 and 10. Like the circuit of FIG. 9, the circuit of FIG. 11 computes Y = α^i · X + α^j · W. Like the circuit of FIG. 10, the circuit of FIG. 11 accepts X and W and produces Y two bits in parallel within each clock cycle.
The discussion with regard to FIG. 10 of how to determine the connections from W_0 to the XOR gates preceding each stage of the shift register Y applies to determining the connections from each of X_0, X_1, W_0, and W_1 in FIG. 11. As was the case in FIG. 9, the connections from the X_0 and X_1 pair are independent of those of the W_0 and W_1 pair; each pair depends only on its respective constant α^i or α^j.
The operation of the circuit of FIG. 11 is similar to those of FIGS. 9 and 10 and is as follows: the 4-bit representation of W is assumed to be initially present in the four one-bit shift register stages W_0 through W_3, with the LSB of the representation in W_0. Similarly, the X shift register stages X_0 through X_3 are initialized with the representation of X. The one-bit shift register stages Y_0 through Y_3 are assumed to be initially zero. The W shift register 260, the X shift register 262, and the Y shift register 265-268 are each clocked synchronously twice, 2 being the symbol size (i.e., 4) divided by k (i.e., 2). Then the value of Y can be read from the Y shift register.
As is the case with the circuit of FIG. 10, the circuit of FIG. 11 only applies to the case of the example finite-field representation of Table 1. It will be appreciated that the discussion of generalizing the circuit of FIG. 10 to other finite fields or other finite-field representations also applies to the circuit of FIG. 11.
The above method for designing k-bit-serial, low-order first, finite-field, double-input, double-constant, multiply-and-sum circuits with k = 2 can be generalized for any k up to the symbol size m, such that k evenly divides m, and for any number of inputs and constants. The connections from each input H, H_j for 0 ≤ j ≤ k-1, are independent of one another, and each depends only on the multiplier constant α^h chosen for that input. The input connections are determined as discussed above by the vector representation of the finite-field element given by the following formula:
α^h · α^(m-k+j)
The feedback connections from each stage of Y, Y_j for 0 ≤ j ≤ k-1, are determined as discussed above by the vector representation of the finite-field element given by the following formula:
α^(2^m - k + j - 1)
USE OF LOW-ORDER FIRST MULTIPLIER IN DECODING
The advantages of using a low-order (least-significant bit) first, finite-field constant multiplier in decoding are illustrated in FIG. 19. FIG. 19A shows the bit-by-bit syndrome reversal that is required by the mathematics of Reed-Solomon codes. If only high-order first, finite-field constant multipliers are used, then to accommodate this, a symbol-by-symbol syndrome reversal technique must be used as shown in FIG. 19B. However, the effect of this technique is to separate burst errors (i.e., a contiguous sequence of probably erroneous bits) into fragments as shown in FIGS. 19C and 19D. In FIG. 19C, symbols Y and Y+1 are shown as they are recorded or transmitted bit-serially through a medium. In FIG. 19D, they are shown as transformed by symbol-by-symbol reversal, which corresponds to the bit-serial order of the received information or codeword polynomial. Using exclusively high-order first, finite-field constant multipliers in both the encode operation and in the burst trapping phase of the decode operation results in the bits included in a burst error introduced in the recording or transmission media not being adjacent during burst trapping, which substantially complicates computing the length of the burst error. Such computation is required to decide whether or not to correct, or to automatically correct, the error. The preferred embodiment of the present invention uses a high-order first, finite-field constant multiplier in encoding and residue generation, bit-by-bit syndrome reversal, and a low-order first, finite-field constant multiplier in burst trapping. Clearly, an equally meritorious design would be to use a low-order first, finite-field constant multiplier in encoding and residue generation, bit-by-bit syndrome reversal, and a high-order first, finite-field constant multiplier in burst trapping.
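The fragmentation effect is easy to reproduce with a toy bit stream. The following self-contained sketch (symbol width, stream length, and burst position are arbitrary illustrative choices) confirms that bit-by-bit reversal keeps a boundary-straddling burst contiguous while symbol-by-symbol reversal splits it into two fragments:

    # Bit-by-bit reversal (FIG. 19A) vs. symbol-by-symbol reversal (FIG. 19B) of a
    # serial bit stream containing a burst that straddles a symbol boundary.
    m = 4
    stream = [0] * (8 * m)                           # eight m-bit symbols in serial bit order
    for b in range(2 * m - 2, 2 * m + 3):            # 5-bit burst straddling symbols 1 and 2
        stream[b] = 1

    bit_reversed = stream[::-1]                      # bit-by-bit reversal
    symbols = [stream[k * m:(k + 1) * m] for k in range(8)]
    sym_reversed = [bit for s in reversed(symbols) for bit in s]      # symbol-by-symbol reversal

    def positions(bits):
        return [p for p, bit in enumerate(bits) if bit]

    def contiguous(pos):
        return max(pos) - min(pos) + 1 == len(pos)

    assert contiguous(positions(bit_reversed))       # still a single contiguous burst
    assert not contiguous(positions(sym_reversed))   # split into two separated fragments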
LOW-ORDER FIRST BURST-TRAPPING DECODER
FIG. 12 shows an external-XOR LFSR circuit which accomplishes burst-trapping for single-burst errors and uses bit-serial techniques, including a low-order first, bit-serial multiplier. The linear network is a multiple input, low-order first, bit-serial multiplier. The use of such multipliers is not taught in any reference known to Applicants.
This type of burst-trapping circuit can be used within the current invention to accomplish real-time correction of single bursts that span two m-bit symbols.
Circuit operation is as follows. First, the circuit is parallel loaded with a residue generated within a residue generator such as is shown in FIGS. 4 and 5. The residue from a circuit such as shown in FIGS. 4 and 5 must be bit-by-bit reversed (i.e., as is shown in FIG. 19A, the position of each bit is flipped end-to-end, the first bit becoming the last, the last becoming the first and so on) as it is parallel loaded into the circuit of FIG. 12. Next, the circuit of FIG. 12 is clocked. A modulo 4 (modulo m for the general case) counter keeps track of the clock count modulo 4 as clocking occurs. Points A through F are OR'd together and monitored as counting occurs. If this monitored OR result is zero for the four successive clocks of a symbol (counter value 0 through counter value m-1), then a single-burst spanning two symbols has been isolated. When this happens, control circuitry stops the clock and the error pattern is in bit positions B3 through B0 and B31 through B28. The number of clocks counted up until the clock is stopped is equal to the error location in bits plus some delta, where the delta is fixed for a given implementation and is easily computed empirically.
FIG. 13 shows an external-XOR LFSR circuit using 2-bit-serial techniques, including a low-order first, 2-bit-serial multiplier which accomplishes burst-trapping for single-burst errors. The linear network is a multiple input, low-order first, 2-bit-serial multiplier. The use of such multipliers is not taught in any reference known to Applicants.
This type of burst-trapping circuit can be used within the current invention to accomplish real-time correction of single bursts that span two m-bit symbols.
Circuit operation is as follows. First, the circuit is parallel loaded with a residue generated within a residue generator such as is shown in FIGS. 4 and 5. The residue from a circuit such as shown in FIGS. 4 and 5 must be bit-by-bit reversed as it is parallel loaded into the circuit of FIG. 13. Next the circuit of FIG. 13 is clocked. A modulo 2 (modulo (m/2) for the general case) counter keeps track of the clock count modulo 2 as clocking occurs. Points C through N are OR'd together and monitored as counting occurs. If this monitored OR result is zero for the two successive clocks of a symbol (counter value 0 through counter value m/2-1), then a single-burst spanning two symbols has been isolated. When this happens, control circuitry stops the clock and the error pattern is in bit positions B3 through B0 and B31 through B28. The number of clocks counted up until the clock is stopped is equal to the error location in (bits*2) plus some delta, where the delta is fixed for a given implementation and is easily computed empirically.
DECODING SINGLE-BURST ERRORS SPANNING THREE ADJACENT SYMBOLS
FIG. 14 shows a modification for the circuit of FIG. 6 to allow the correction of bursts whose length spans up to three adjacent symbols. Operation of the circuit is changed as follows: points A through E are OR'd together instead of A through F. Also, after the clock-stop criterion is met, as defined in the description of operation for FIG. 6, actual stopping of the clock is delayed by m (4 for the example of FIG. 6) clock periods, where m is the width of symbols in bits. During these extra m clock periods the MULTIPLY_INHIBIT signal of FIG. 14 is held LOW. After the clock is stopped the error pattern resides in and can be retrieved from the low-order symbol position and the two high-order symbol positions of the shift register. For the current example, the error pattern would be in shift register bit positions B3 through B0 and B31 through B24.
FIG. 15 shows a modification for the circuit of FIG. 7 to allow the correction of bursts whose length spans up to three adjacent symbols. Operation of the circuit is changed as follows: points C through L are OR'd together instead of C through N. Also, after the clock-stop criterion is met, as defined in the description of operation for FIG. 7, actual stopping of the clock is delayed by m/k clock periods (e.g. 4/2=2 for the example of FIG. 7), where m is the width of symbols in bits. During these extra m/k clock periods the MULTIPLY_INHIBIT signal of FIG. 15 is held LOW. After the clock is stopped the error pattern resides in and can be retrieved from the low-order symbol position and the two high-order symbol positions of the shift register. For the current example, the error pattern would be in shift register bit positions B3 through B0 and B31 through B24.
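In the behavioral model sketched earlier (the trap_and_correct function following the FIG. 6 and FIG. 7 descriptions), this extension corresponds simply to widening the trap window from two to three symbols. A higher-distance code (six redundancy symbols, an illustrative choice rather than a parameter of the patent) is used below so that a three-symbol burst remains within the code's correction power:

    # Reuses gen_poly/rs_encode/trap_and_correct from the sketches above.
    G6 = gen_poly(6)                                 # (15, 9) code over GF(2^4), distance 7
    data = [3, 14, 7, 0, 9, 1, 12, 5, 2]
    sent = data + rs_encode(data, G6)
    received = list(sent)
    received[4] ^= 0b0001                            # a burst spanning three
    received[5] ^= 0b1111                            #  adjacent 4-bit symbols
    received[6] ^= 0b1000
    assert trap_and_correct(received, G6, window=3) == sent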
PADDING
FIG. 16 is a chart showing the use of pre-pad and post-pad fields and the correspondence between n-bit bytes and m-bit symbols in the information, redundancy, and codeword polynomials. In substantially all popular computer systems, data is handled in n-bit byte form, typically 8-bit bytes, and multiples thereof. Consequently, in a typical application of the present invention, information will be presented logically organized in n-bit bytes. The general case of the Reed-Solomon code is an m-bit symbol size, where m ≠ n. For the preferred embodiment, m > n, specifically n = 8 and m = 10.
At the top of FIG. 16, a series of n-bit bytes representing the information polynomial may be seen. The number of bytes times n bits per byte might be divisible by m, and thus the bits in the information polynomial might readily be logically organizable into an integral number of symbols. In the general case however, the number of bits in the information polynomial will not be divisible by m, and thus a number of bits comprising a pre-pad field is added to the information bits to allow the logical organization of the same into an integral number of m-bit information symbols. Since the redundancy is determined by a Reed-Solomon code using an m-bit symbol analysis, the redundancy symbols will be m-bit symbols. When the redundancy symbols are added to the symbols comprising the information bytes and the pre-pad field to make up the codeword polynomial, the resulting number of bits in the codeword polynomial may or may not be integrally divisible by n. If not, a post-pad field is added to the symbol codeword to form an integral number of bytes for subsequent transmission, storage, etc. in byte form. The post-pad field is dropped during decoding as only the codeword polynomial in symbol form is decoded. The contents of the pre-pad field may or may not be predetermined (in the preferred embodiment the pre-pad field is all zeros).
In the case of a variable number of bytes in the information polynomial (variable record length) and a fixed number of redundancy symbols, as might be used with variable-length sectors on a disk recording media, the pre-pad field length will vary with the record length, though the sum of the number of pre-pad bits and the number of post-pad bits will remain the same, or constant. The one exception is the special case where the codeword polynomial (which comprises the information, pre-pad, and redundancy polynomials or words) is an integral number of bytes. In this case the sum of the pre-pad and post-pad field lengths may be zero if the pre-pad field length is zero; otherwise the sum of the pre-pad and post-pad field lengths will be equal to one byte. In the preferred embodiment, the sum of the pre-pad and post-pad field lengths is always one byte.
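As a concrete illustration of this padding rule, the following Python sketch (not taken from the patent; it simply assumes n = 8 and m = 10 as in the preferred embodiment, with the two pads always totaling one byte) computes the pre-pad and post-pad lengths for a given record size:

    # Minimal sketch of the padding rule for the preferred embodiment:
    # n = 8 (byte size), m = 10 (symbol size), pre-pad + post-pad = 8 bits.
    def pad_lengths(num_data_bytes, n=8, m=10):
        data_bits = num_data_bytes * n
        prepad = (-data_bits) % m      # make data + pre-pad divisible by m
        postpad = n - prepad           # the two pads always total one byte
        return prepad, postpad

    # Example: a 512-byte record gives 4096 data bits; 4 pre-pad bits make
    # 4100 bits (410 symbols), leaving 4 post-pad bits.
    print(pad_lengths(512))            # (4, 4)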
INTERLEAVING
The technique of interleaving a single information polynomial among a multiplicity of codeword polynomials is well-known in the art (see Glover and Dudley, Practical Error Correction Design for Engineers, pages 270, 285, and 350, and Chen et al U.S. Pat. No. 4,142,174). FIG. 17 is similar to FIG. 4 in that both are logic diagrams of the LFSR of external-XOR, 1-bit-serial encoders. However, FIG. 17 shows a two-way interleave in which, if the symbols are numbered in their serial sequence, then every even-numbered symbol of the information polynomial is placed in a first codeword polynomial and every odd-numbered symbol of the information polynomial is placed in a second codeword polynomial.
Likewise, FIG. 18 is similar to FIG. 12 in that both are logic diagrams of the LFSR for external-XOR, 1-bit-serial, low-order first, single-burst error trapping decoders. However, FIG. 18 shows the same two-way interleave of FIG. 17.
There are numerous variations of interleaving techniques known in the prior art. The teachings of the present invention include, but are not limited to, k-bit-serial techniques and the use of both high-order first, finite-field constant multipliers and low-order first, finite-field constant multipliers in both encoding and decoding operations. It should be obvious to one knowledgeable about interleaving techniques in Reed-Solomon codes how these variations can be combined.
REPRESENTATIVE IMPLEMENTATION ALTERNATIVES
There are a significant number of implementation alternatives available for the current invention. The encoder and residue generator can be implemented using k-bit serial techniques for any k which divides m, the symbol width. This, of course, includes the case where k=m.
The burst trapper can use k-bit serial techniques, where k, i.e. the number of bits processed per clock, need not be the same as used in the encoder and residue generator.
All of the constant multiplications of the encoder and residue generator, which are associated with code generator polynomial coefficients, can be accomplished with a single k-bit serial multiple-input, multiple-constant, multiply-and-sum circuit. The same is true of the burst trapper circuit.
There are four choices associated with the order in which bits are processed within the k-bit serial, multiple-input, multiple-constant, multiply-and-sum circuits of the encoder and residue generator and of the burst trapper:
Choice    Encoder and Residue Generator    Burst Trapper
  1       High order first                 High order first
  2       High order first                 Low order first
  3       Low order first                  High order first
  4       Low order first                  Low order first
If choice 1 or 4 is used, the residue is flipped end-on-end on a symbol-by-symbol basis as it is transferred from the encoder and residue generator to the burst trapper. If choice 2 or 3 is used, the residue is flipped end-on-end on a bit-by-bit basis as it is transferred from the encoder and residue generator to the burst trapper.
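One natural reading of these two flips, shown as a minimal Python sketch (assuming the residue is held as a list of m-bit symbols, high-order symbol first; this software representation is not part of the patent's hardware description), is:

    # Symbol-by-symbol flip: reverse the symbol order, leaving each symbol's
    # bits in place.
    def flip_symbol_by_symbol(residue, m):
        return list(reversed(residue))

    # Bit-by-bit flip: reverse the entire residue as one bit string.
    def flip_bit_by_bit(residue, m):
        total = len(residue) * m
        bits = 0
        for sym in residue:                    # pack symbols into one integer
            bits = (bits << m) | sym
        rev = 0
        for _ in range(total):                 # reverse all bits end-on-end
            rev = (rev << 1) | (bits & 1)
            bits >>= 1
        mask = (1 << m) - 1
        return [(rev >> (m * i)) & mask for i in reversed(range(len(residue)))]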
There are also several choices associated with the firmware decoding. One choice uses large decoding tables and executes quickly. Another choice uses small decoding tables but executes more slowly.
It is also possible to share the LFSR of the encoder and residue generator with the encoding and decoding of other types of codes such as computer generated codes and/or CRC codes.
There are also choices associated with polynomial selection. It is possible to use one polynomial to establish a finite field representation for both hardware (encoding, residue generation, and burst trapping) and firmware decoding. In this case, any primitive polynomial with binary coefficients whose degree is equal to symbol width can be used. It is also possible to define the representation of the finite field differently for hardware and firmware decoding and to map between the two representations. In this case, the choice of polynomials is limited to a pair which share a special relationship.
The code generator polynomial of the preferred embodiment of the current invention is self-reciprocal. That is, the code generator polynomial is its own reciprocal.
There are several choices available with correction span. It is possible to limit correction performed by the burst trapper to two adjacent symbols. However, a small change extends the correction performed by the burst trapper to three adjacent symbols. Additional hardware extends correction to an even greater number of adjacent symbols. In addition, it is possible to establish correction span in bits instead of symbols.
Interleaving may or may not be employed. In the preferred embodiment interleaving is not employed. This avoids interleave pattern sensitivity and minimizes media overhead. See Practical Error Correction Design for Engineers, (Glover and Dudley, Second Edition, Data Systems Technology Corp. (Broomfield, Colo. 1988)) p. 242 for information on interleave pattern sensitivity.
Another alternative is to implement a polynomial over a finite field whose representation is established by the techniques defined in the section entitled "Subfield Computation" herein, in both the hardware (encoder, residue generator, and burst trapper) and firmware decoder.
FIGS. 35 through 132 comprise three groups of figures which illustrate three different embodiments of the present invention, namely FIGS. 35 through 65 illustrating Version 1, FIGS. 66 through 97 illustrating Version 2 and FIGS. 98 through 132 illustrating Version 3, each version being an alternate embodiment of the invention.
Version 1
Single burst correction real time.
Real time correction span 11 bits.
Real time correction by burst trapping.
Residue available for non-real time correction.
1-bit serial encoder and residue generator.
1-bit serial burst trapping.
External XOR LFSR for encode and residue generation.
External XOR LFSR for burst trapping.
1-bit serial, high order first, multiple-input, multiple-constant, multiply-and-sum circuit used in encoder and residue generator.
1-bit serial, low order first, multiple-constant, multiply-and-sum circuit used in burst trapping.
1F clock is used for encode and residue generation.
2F clock is used for burst trapping.
Real time correction is accomplished in one-half sector time.
Version 2
Single burst correction real time.
Real time correction span 11 bits.
Real time correction by burst trapping.
Residue available for non-real time correction.
1-bit serial encoder and residue generator.
2-bit serial burst trapping.
External XOR LFSR for encode and residue generation.
External XOR LFSR for burst trapping.
1-bit serial, high order first, multiple-input, multiple-constant, multiply-and-sum circuit used in encoder and residue generator.
2-bit serial, low order first, multiple-constant, multiply-and-sum circuit used in burst trapping.
Also supports 32 and 56-bit computer-generated codes (shift register A is shared).
Also supports CRC-CCITT CRC code (shift register A is shared).
1F clock is used for encode, residue generation, and burst trapping.
Real time correction is accomplished in one-half sector time.
The 32-bit, 56-bit and CRC codes are as follows:
32-bit Computer-Generated Polynomial
x^32 + x^28 + x^26 + x^19 + x^17 + x^10 + x^6 + x^2 + 1
56-bit Computer-Generated Polynomial
x^56 + x^52 + x^50 + x^43 + x^41 + x^34 + x^30 + x^26 + x^24 + x^8 + 1
CRC Code
x^16 + x^12 + x^5 + 1
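For illustration only, a bit-serial software model of the CRC-CCITT polynomial above (generator bits 0x1021; the zero initial register value here is an assumption, not a statement of how the shared shift register is initialized) might look like:

    # Bit-serial CRC sketch for x^16 + x^12 + x^5 + 1 (0x1021), high-order
    # data bit first, modeling a simple internal-XOR LFSR in software.
    def crc_ccitt_bit_serial(data_bytes, init=0x0000):
        crc = init
        for byte in data_bytes:
            for bit in range(7, -1, -1):
                fb = ((crc >> 15) ^ (byte >> bit)) & 1    # feedback bit
                crc = ((crc << 1) & 0xFFFF) ^ (0x1021 if fb else 0)
        return crc

    print(hex(crc_ccitt_bit_serial(b"123456789")))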
Version 3
Single burst correction real time.
Real time correction span programmable from 11 to 20 bits in 3 bit increments.
Real time correction by burst trapping.
Residue available for non-real time correction.
1-bit serial encoder and residue generator.
2-bit serial burst trapping.
External XOR LFSR for encode and residue generation.
External XOR LFSR for burst trapping.
1-bit serial, high order first, multiple-input, multiple-constant, multiply-and-sum circuit used in encoder and residue generator.
2-bit serial, low order first, multiple-constant, multiply-and-sum circuit used in burst trapping.
1F clock is used for encode, residue generation, and burst trapping.
Real time correction is accomplished in one-half sector time.
DETAILED HARDWARE LOGIC DIAGRAMS
The hardware can be divided into two major sections, the generator and the corrector. The following description applies specifically to Version 1.
The Generator. The generator section of the logic consists of Shift Register A and control logic. The clock for Shift Register A is the A-CLK. The clock for the control logic is the 1FCLK. Shift Register A is used to compute redundancy during a write and to compute a residue during a read.
The Corrector. The corrector section of the logic consists of Shift Register B and control logic. The clock for Shift Register B is the B-CLK. The clock for the control logic is the 2FCLK. If an ECC error is detected during a read, at the end of the read the contents of Shift Register A are flipped end-on-end, bit-by-bit, and transferred to Shift Register B; Shift Register B is then clocked to find the error pattern. An offset register is decremented as Shift Register B is clocked. When the error pattern is found, clocking continues until the error pattern is byte- and right-aligned. When alignment is complete, the clock for Shift Register B is shut off and decrementing of the offset counter is stopped in order to freeze the error pattern and offset. In addition, the interrupt and CORRECTABLE_ECC_ERR signals are asserted. If the offset count is exhausted without finding the error pattern, the interrupt and UNCORRECTABLE_ECC_ERR signals are asserted. When the interrupt signal is asserted for a correctable error, stages B71-B48 of Shift Register B contain the error pattern and the offset counter contains the error displacement. The error displacement is the displacement in bytes from the beginning of the sector to the last byte in error. The logic prevents correction from being attempted on extended redundancy bytes (prepad, redundancy, or postpad bytes). Errors that span data and redundancy are also handled by the logic.
In order to avoid implementing an adder to add the offset to an address, the ECC circuit provides signals on its interface that can be used by the data buffer logic to decrement an address counter.
An error that is found to be uncorrectable by the hardware on-the-fly correction circuits may still be correctable by software. In the preferred embodiment, hardware on-the-fly correction is limited to a single burst of length 11 bits or less. Software algorithm correction is limited to the correction of a single burst of length 22 bits or less, or of two independent bursts, each of length 11 bits or less.
Since the Reed-Solomon code implemented in the preferred embodiment is not interleaved, a single burst can affect two adjacent symbols and, therefore, it was necessary to select a code that could correct four symbols in error in order to guarantee the correction of two bursts. The code itself could be used to correct up to four independent bursts if each burst is contained within a single symbol. However, using the code in this way increases miscorrection probability and, therefore, is not recommended. When the software algorithm determines that four symbols are in error, it verifies that no more than two bursts exist by performing a check on error locations and patterns.
LINE AND FUNCTION DEFINITIONS
1FCLK: Clock synchronized to read/write data.
2FCLK: Clock with twice the frequency of 1FCLK.
A0-A79: Outputs of flops of Shift Register A.
A_CLK: A gated clock developed by the ECC circuit and used to clock Shift Register A.
ADDRDEC: This signal is used to decrement the address counter in the data buffer manager logic. When the error pattern is found, the address counter holds the offset of the last byte in error from the beginning of the sector. The data buffer logic performs a read-modify-write at the location pointed to by the address counter using bits B55-B48 as the error pattern. Next, the address counter is decremented by the data buffer manager logic and another read-modify-write is performed using bits B63-B56 as the error pattern. The address counter is decremented once more by the data buffer manager logic and the final read-modify-write is performed using bits B71-B64 as the error pattern. The above procedure is modified if any of the signals DISPMINUS, DISPZERO, or DISPONE are asserted.
B0-B79: Outputs of flops of Shift Register B.
B_CLK: A gated clock developed by the ECC circuit and used to clock Shift Register B.
CORR_MODE: CORR_MODE is set if an error is detected on reading a data field, provided hardware correction is enabled. The set-up condition for this mode causes Shift Register A to be transferred to Shift Register B. The error displacement and pattern are determined under this mode.
CORRECTABLE_ECC_ERR: This signal is activated if the error pattern is found in correction mode within the number of shifts allocated and other qualifying criteria are met.
COUNT_NINE_A: Activates for one GTD1FCLK clock period each time the modulo-ten counter A reaches nine.
COUNT_NINE_B: Activates for one GTD1FCLK clock period each time the modulo-ten counter B reaches nine.
COUNT_ZERO_A: Activates for one GTD1FCLK clock period each time the modulo-ten counter A reaches zero.
CRC/ECC: This signal is high for an ID field and low for a data field.
DATA_TIME: DATA_TIME is an input to the circuit. It is asserted prior to the leading edge of 1FCLK for the first data bit. It is de-asserted after the leading edge of the 1FCLK for the last data bit.
DATA_DONE_PULSE: Asserted for one GTD1FCLK clock time after the de-assertion of DATA_TIME.
OFFSET_MOD_8=1: This signal is activated for one GTD1FCLK clock time when the contents of the offset counter modulo 8 are equal to one. It is used in achieving error pattern byte alignment.
DISPGTHONE: If this line is asserted, the data buffer manager logic will perform three read-modify-writes in accomplishing correction.
DISPMINUS: If this line is asserted, the data buffer manager logic will not perform any read-modify-writes.
DISPONE: If this line is asserted, the data buffer manager logic will perform only two read-modify-writes in accomplishing correction.
DISPZERO: If this line is asserted, the data buffer manager logic will perform only one read-modify-write in accomplishing correction.
DLYD_DATA_TIME: DATA_TIME delayed by one clock time of GTD1FCLK.
DLYD_REDUN_TIME: REDUN_TIME delayed by one clock time of GTD1FCLK.
ECCIN: This is the input to Shift Register A during a write or read. During a write, write data appears on this line. During a read, data and redundancy read from the media appear on this line. This line is forced low during PREPAD_TIME and POSTPAD_TIME during both writes and reads.
ERR_CLEAR: Clears error status.
EXT_BCLK_EN: External B clock enable. This is asserted for 8 periods of 2FCLK, to shift Shift Register B 8 times in order to position the next byte for outputting. This function is used only when HDW_CORR_EN is inactive. This signal must be activated and de-activated during the positive half cycle of 2FCLK.
EXT_REDUN_TIME: Extended redundancy time. This signal is the OR of PREPAD_TIME, REDUN_TIME, and POSTPAD_TIME.
FDBKEN: When high, this signal enables feedback for Shift Register A.
FREEZE_CLK: This signal is normally de-asserted. It is asserted only when it is desired to hold the ECC circuit conditions as the gap between split fields is processed. It must be activated and de-activated during the high half of 1FCLK.
GTD1FCLK: This is the gated 1FCLK. 1FCLK is gated only by the FREEZE_CLK input signal.
HDW_CORR_EN: When this signal is high, single bursts are corrected on-the-fly.
ID_FIELD_CRC_ERROR: Indicates an ID field error.
ID_ERR_CLEAR: Clears the ID field CRC error latch.
INTERRUPT: If hardware correction is not enabled, INTERRUPT is set at the end of a read if an error exists. If hardware correction is enabled, INTERRUPT is set when the error pattern is found for a correctable error or when the offset counter goes negative for an uncorrectable error.
INIT: Initializes the ECC circuit. INIT must be asserted for one 1FCLK clock time prior to each read or write (prior to asserting DATA_TIME).
ISOLATED: ISOLATED is asserted if either the isolation detect flip-flop (FF) or the first non-zero FF is in the one state.
JOB_DONE_PULSE: This signal is active for one GTD1FCLK clock time at the end of POSTPAD_TIME or at the end of REDUN_TIME if no post-padding is required.
LATCHED_ERROR: On a read, LATCHED_ERROR is set if a nonzero difference exists between read checks and write checks.
LNET_A: LNET_A is the linear network (PTREE_A) and the linear network register for Shift Register A.
LNET_B: LNET_B is the linear network (PTREE_B) and the linear network register for Shift Register B.
LNLOAD_A: This signal is asserted each time modulo-ten counter A reaches nine. On the next rising clock edge after its assertion, the LNET_A register is cleared and its input is transferred to Shift Register A bits A70-A79.
LNLOAD_B: This signal is asserted each time modulo-ten counter B reaches nine. On the next rising clock edge after its assertion, the LNET_B register is cleared and its input is transferred to Shift Register B bits B70-B79.
MODULO_TEN_COUNTER_A: The symbol size for the Reed-Solomon code is 10. The MODULO_TEN_COUNTER_A establishes symbol boundaries during read and write operations.
MODULO_TEN_COUNTER_B: The symbol size for the Reed-Solomon code is 10. The MODULO_TEN_COUNTER_B establishes symbol boundaries during a correction operation.
MP_BUS: The microprocessor bus comes to the ECC circuit for loading the offset counter.
MP_BUS_CONTROL: Control signals for latching the contents of the microprocessor bus into the offset counter.
NUMFMTBYTES=0, NUMFMTBYTES=1, NUMFMTBYTES=2: These signals inform the ECC logic of the number of format bytes between the sync byte and the first data bytes. Errors in the sync byte are considered uncorrectable (option) while errors in the format bytes are ignored.
OFFSET COUNTER: At the beginning of a correction operation, the offset counter is initialized to the maximum number of shifts of Shift Register B that could be required before the error pattern is found. The offset counter is decremented once each time Shift Register B is shifted in searching for the error pattern. When the error pattern is found, shifting continues until it is byte aligned. When byte alignment is complete, the offset counter contains the displacement.
OFFSET_CNTR=0: Asserted when the offset counter is equal to zero.
OFFSET_MOD_8=1: This signal is used in byte aligning the error pattern.
PAD COUNTER: The pad counter counts pad bits. There are always a total of eight pad bits. There are two pad areas. The prepad area is between data and redundancy. The postpad area is after redundancy. If the number of data bits is divisible by 10, all pad bits are written in the postpad area; otherwise, pad bits are split between the prepad and postpad areas. The number of prepad bits is selected to make the sum of data and prepad bits divisible by 10.
PAD_CNTR=7: Asserted when the PAD COUNTER is equal to seven.
PAD_CNTR=8: Asserted when the PAD COUNTER is equal to eight.
POSTPAD_TIME: This signal spans all postpad bits.
POSTPAD_DONE_PULSE: This signal is active for one GTD1FCLK clock time after POSTPAD_TIME.
PREPAD_COUNT SAVE REGISTER: The number of prepad bits varies with sector size. This register saves the number of prepad bits for the correction circuitry.
PREPAD_CNT_SAV: Outputs of the PREPAD_COUNT SAVE REGISTER.
PREPAD_TIME: This signal spans all prepad bits.
PREPAD_DONE_PULSE: This signal is active for one GTD1FCLK clock time after PREPAD_TIME.
PTREE_A: This is the linear network for Shift Register A. Its configuration is established by the code generator polynomial.
PTREE_B: This is the linear network for Shift Register B. Its configuration is established by the reciprocal polynomial of the code generator polynomial.
PTRN_FOUND: As Shift Register B is shifted while searching for the error pattern, certain conditions are monitored. The PTRN_FOUND signal is active when the monitored conditions are met.
PWR_ON_RST: Asserted at power-on time or at any other time when the state of the ECC circuitry is not known.
RD/WRT_DATA: This is the data input signal to the ECC circuit.
READ: Active during a read from the media.
REDUNDANCY_COUNTER: Counts CRC redundancy bits for ID fields and ECC redundancy bits for data fields.
REDUN_TIME: Spans all redundancy bits during a read or write operation.
REDUN_TCNT: This signal becomes active on count 15 for ID fields (CRC) and on count 79 for data fields (ECC).
REDUN_DONE_PULSE: This signal is active for one B-CLK clock time after REDUN_TIME.
REDUN/REM: During a write, redundancy bits appear on this line; during a read, residue bits appear on this line.
RSTB1: The logical OR of ERR_CLEAR, PWR_ON_RST, and CORR_MODE.
SET_CORR_MODE: This signal activates at the end of a read to set correction mode if an error exists.
SET_REDUN_TIME: The set condition for REDUN_TIME.
SHIFT_REGISTER_A (SRA): Shift Register A generates redundancy during write operations and a residue during read operations.
SHIFT_REGISTER_B (SRB): Shift Register B is the correction shift register. On the detection of an error on read, the contents of Shift Register A (the residue) are flipped end-on-end and then transferred to Shift Register B. Shift Register B is then shifted until the error pattern is found or until the offset count is exhausted.
STOP_A_CLK: This signal goes active to stop clocking of Shift Register A so that its contents can be transferred to Shift Register B.
STOP_B_CLK: This signal goes active to stop clocking of Shift Register B and the offset counter once the error pattern is found so that the error pattern can be preserved.
SUPPRESS: SUPPRESS is asserted during correction mode during the first clocks as the circuit clocks back over redundancy and prepad bits. It is used to prevent the circuitry from attempting a correction within redundancy or pad bits.
SYNC_ERR_INHIBIT: If this signal is asserted, errors in the sync byte will be ignored.
UNCORRECTABLE_ECC_ERR: This signal goes active if the offset count is exhausted while clocking Shift Register B in searching for an error pattern.
WRITE: Active during a write to the media.
WRITE DATA/REDUN: During DATA_TIME of a write operation, this line carries write data bits. During the REDUN_TIME that follows, it carries write redundancy bits.
XFER: This signal causes the contents of Shift Register A to be flipped end-on-end and then transferred to Shift Register B.
NOTES
1. All clocking is on the positive edge of the input clocks 1FCLK and 2FCLK.
2. When the B-CLK stops, B48-B55 (B55 is LSB) is the last byte in error. B56-B63 is the middle byte in error. B64-B71 is the first byte in error. Data buffer READ_MODIFY_WRITE operations are required only for those of these bytes that are non-zero.
3. Shift Register B (SRB) is loaded with a flipped copy of Shift Register A (SRA) and therefore, does not require preset or clear. Shift Register A must be initialized to the following HEX pattern prior to any write or read:
HEX "00 29 3F75 71 DB 5D 40 FF FF"
The least significant bit of this pattern defines the initialization value for Shift Register bit A0, and so on. The LFSR initialization pattern used in the preferred embodiment was chosen to minimize the likelihood of undetected errors in the synchronization between the bit stream recorded or transmitted in the media and the byte or symbol boundaries imposed on the information as it is received. This type of error is called a synchronization framing error. Techniques for minimizing the influence of synchronization framing errors on miscorrection are known in the prior art. See the book Practical Error Correction Design for Engineers by Glover and Dudley, page 256. The initialization pattern of the preferred embodiment was selected according to the rules set forth in the above reference so as to be unlike itself in shifted positions. This initialization pattern provides protection from miscorrection associated with synchronization framing errors that is far superior to the protection provided by initialization patterns of all ones or of all zeros.
4. Clock cycles start on a positive edge. DATA_GATE must be activated within the first half of a cycle of 1FCLK.
5. There are always 8 bits of padding to be handled on each read or write. This padding is divided such that part is accomplished between data and redundancy and part follows redundancy. In the special case where the number of data bits is divisible by 10, all padding follows redundancy. In all other cases, the number of pad bits between data and redundancy bits (prepad bits) is selected to make the number of data and prepad bits divisible by 10.
DETAILED FIRMWARE DESCRIPTION
In a finite field GF(2^m), elements are composed of m binary bits and addition (⊕) consists of MODULO 2 summation of corresponding bits; this is equivalent to performing the bit-wise EXCLUSIVE-OR sum of operands:
x ⊕ y = x XOR y.
Note that subtraction is equivalent to addition since the MODULO 2 difference of bits is the same as their MODULO 2 sum.
In software, multiplication (*) may be implemented using finite field logarithm and antilogarithm tables wherein LOG[α^i] = i and ALOG[i] = α^i:
x*y = ALOG[LOG[x] + LOG[y]] for x ≠ 0 and y ≠ 0, and x*y = 0 if x = 0 or y = 0,
where the addition of the finite field logarithms is performed MODULO 2^m - 1. LOG[0] is undefined.
Division (/) may be implemented similarly:
x/y = ALOG[LOG[x] - LOG[y]] for x ≠ 0 and y ≠ 0, and x/y = 0 if x = 0,
where the subtraction of the finite field logarithms is performed MODULO 2^m - 1. Note that for non-zero x, LOG[1/x] = -LOG[x] = LOG[x] XOR (2^m - 1).
Alternatively, multiplication of two elements may be implemented without the need to check either element for zero by appropriately defining LOG[0] and using a larger antilogarithm table, e.g. by defining LOG[0] = 2^(m+1) - 3 and using an antilogarithm table of 2^(m+2) - 5 elements wherein ALOG[i] = α^(i MOD (2^m - 1)) for 0 ≤ i ≤ 2^(m+1) - 4 and ALOG[i] = 0 for 2^(m+1) - 3 ≤ i ≤ 2^(m+2) - 6.
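A minimal Python sketch of the table method above follows. The field-generator polynomial shown (x^10 + x^3 + 1 for m = 10) is only an assumed example for illustration; the polynomials actually used by the invention are defined elsewhere in this specification.

    M = 10
    POLY = (1 << 10) | (1 << 3) | 1       # assumed primitive polynomial
    SIZE = (1 << M) - 1                   # number of nonzero field elements

    ALOG = [0] * SIZE                     # ALOG[i] = alpha^i
    LOG = [0] * (1 << M)                  # LOG[alpha^i] = i; LOG[0] unused
    e = 1
    for i in range(SIZE):
        ALOG[i] = e
        LOG[e] = i
        e <<= 1                           # multiply by alpha
        if e & (1 << M):
            e ^= POLY                     # reduce modulo the field polynomial

    def gf_mul(x, y):
        if x == 0 or y == 0:
            return 0
        return ALOG[(LOG[x] + LOG[y]) % SIZE]

    def gf_div(x, y):
        if y == 0:
            raise ZeroDivisionError("division by zero is undefined in GF(2^m)")
        if x == 0:
            return 0
        return ALOG[(LOG[x] - LOG[y]) % SIZE]

The zero-check-free variant described above would instead assign LOG[0] a value beyond the normal logarithm range and extend the ALOG table with zero entries so that any sum of logarithms involving a zero operand indexes a zero result.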
The size of the tables increases exponentially as m increases. In certain finite fields, subfield computations can be performed, as developed in the section entitled "Subfield Computation" herein. In such a finite field, addition, the taking of logarithms and antilogarithms, multiplication, and division in the "large" field GF(2^m) are performed using a series of operations in a "small" finite field GF(2^n) where n = m/2. Consequently, the size of the tables required is greatly reduced. However, such a finite field may not have the best characteristics for minimizing complexity and cost of hardware necessary to implement encoders and decoders. By proper selection of finite field generator polynomials as shown in the section entitled "Constructing Reed-Solomon Codes" herein, it is possible to use an "external" finite field well suited for hardware implementation and an "internal" finite field with subfield properties for software algorithms. Conversion between the two fields is performed using a linear mapping, as developed in the section entitled "Constructing Reed-Solomon Codes" herein.
In a decoder for an error detection and correction system using a Reed-Solomon or related code of distance d for the detection and correction of a plurality of symbol errors in codewords of n symbols comprised of n-(d-1) data symbols and d-1 check symbols, each symbol an element of GF(2^m), a codeword C(x) is given by
C(x) = (x^(d-1) * I(x)) ⊕ ((x^(d-1) * I(x)) MOD G(x))   (1)
where I(x) is an information polynomial whose coefficients are the n-(d-1) data symbols and G(x) is the code generator polynomial
G(x) = (x ⊕ α^(m_0)) * (x ⊕ α^(m_0+1)) * ... * (x ⊕ α^(m_0+d-2))   (2)
where m_0 is a parameter of the code. A code of distance d can be used to correct all cases of t = INT((d-1)/2) symbol errors without pointers and is guaranteed to detect all cases of INT(d/2) symbol errors.
When e symbol errors occur, the received codeword C'(x) consists of the EXCLUSIVE-OR sum of the transmitted codeword C(x) and the error polynomial E(x):
C'(x) = C(x) ⊕ E(x)   (3)
where
E(x) = E_1 * x^(L_1) ⊕ ... ⊕ E_e * x^(L_e);   (4)
L_i and E_i are the locations and values, respectively, of the e symbol errors.
The remainder polynomial
R(x) = R_(d-2) * x^(d-2) ⊕ ... ⊕ R_1 * x ⊕ R_0   (5)
is given by
R(x) = C'(x) MOD G(x),   (6)
that is, the remainder generated by dividing the received codeword C'(x) by the code generator polynomial G(x).
By equation (1),
C(x) MOD G(x) = 0,   (7)
so from equation (3),
R(x) = E(x) MOD G(x).   (8)
The coefficients of the syndrome polynomial
S(x) = S_(d-2) * x^(d-2) ⊕ ... ⊕ S_1 * x ⊕ S_0
are given by
S_i = C'(x) MOD g_i(x),   (9)
that is, the remainders generated by dividing the received codeword C'(x) by the factors
g_i(x) = (x ⊕ α^(m_0+i))
of the code generator polynomial G(x).
Equation (1) implies
C(x) MOD g_i(x) = 0,   (10)
so from equation (3),
S_i = E(x) MOD g_i(x).   (11)
Shift Register A of the present invention could emit the remainder coefficients R_i if feedback were disabled while the redundancy symbols of the received codeword C'(x) are being received, but additional hardware would be required to collect and store the coefficients R_i for use in decoding error locations and values. Instead, shift register feedback is enabled while redundancy symbols are being received and a modified form of the remainder polynomial, called the residue polynomial T(x), is stored in the shift register itself. The coefficients T_i of the residue polynomial T(x) are related to the coefficients R_i of the remainder polynomial R(x) according to: ##EQU5##
The residue polynomial T(x) can be used in decoding error locations and values in several ways. T(x) can be used directly, e.g. the burst-trapping algorithm implemented in the preferred embodiment of the invention uses T(x) to decode and correct a single error burst spanning multiple symbols using a shifting process. Decoding error locations and values from the remainder polynomial R(x) or the syndrome polynomial S(x) is known in the prior art; for example, see Glover and Dudley, U.S. Pat. No. 4,839,896. T(x) could be used to compute R(x) by solving the system of equations above. T(x) could be used to directly compute S(x) using a matrix of multiplication constants. In the preferred embodiment of the invention, T(x) is used to compute a modified form of the remainder polynomial R(x), which is then used to compute a modified form of the syndrome polynomial S(x).
SYNDROME POLYNOMIAL GENERATION
A software correction algorithm could produce a modified form of the remainder polynomial defined by
P(x) = (x^(d-1) * C'(x)) MOD G(x)
from T(x) by simulating clocking the shift register d-1 symbol-times with input forced to zero and feedback disabled and recording the output of the XOR gate which emits redundancy during a write operation. Mathematically, this process is defined by: ##EQU6##
The coefficients S_i' of a modified frequency-domain syndrome polynomial S'(x) can be computed from the coefficients P_i of the modified remainder polynomial P(x) according to ##EQU7## When the coefficients P_i or S_i' are used in decoding, the error locations produced are greater than the actual error locations by d-1.
In the preferred embodiment of the invention, software complexity is reduced by first simulating the clocking of the shift register one symbol-time with input forced to zero and feedback enabled and then clocking d-1 symbol-times with input forced to zero and feedback disabled to produce a modified form of the remainder defined by
Q(x) = (x^d * C'(x)) MOD G(x)
The coefficients Q_i of Q(x) are calculated from the residue coefficients T_i as follows: ##EQU8##
The coefficients S_i" of the frequency-domain syndrome polynomial S"(x) can be computed from the coefficients Q_i of the modified remainder polynomial Q(x) according to ##EQU9## When the coefficients Q_i or S_i" are used in decoding, the error locations produced are greater than the actual error locations by d. Hereafter, S(x) means S"(x), S_i means S_i", etc., unless otherwise noted.
It is clear that those skilled in the art could implement variations of the above methods to produce remainder and/or syndrome polynomials suitable for decoding errors.
Sequential computation of each coefficient S_i would require d-1 references to each coefficient Q_j. Physical constraints and interleaving of multiple codewords often make each reference to a coefficient Q_j difficult and time-consuming.
In the preferred embodiment of this invention, the time required to calculate the coefficients of S(x) is reduced by computing each coefficient Q_j and sequentially computing and adding its contribution to each coefficient S_i.
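The following Python sketch illustrates this contribution-wise computation in its frequency-domain evaluation form, S_i = the sum over j of Q_j * α^(j*(m_0+i)). It assumes the coefficients Q_j have already been recovered from the residue and that gf_mul and an alpha_pow helper (returning α^k) exist; it is a sketch of the idea, not the exact procedure of FIG. 24.

    def syndromes_from_Q(Q, d, m0, gf_mul, alpha_pow):
        # S[i] accumulates the contribution of each Q_j, touching each Q_j once.
        S = [0] * (d - 1)
        for j, Qj in enumerate(Q):
            if Qj == 0:
                continue                  # a zero coefficient contributes nothing
            for i in range(d - 1):
                S[i] ^= gf_mul(Qj, alpha_pow(j * (m0 + i)))
        return S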
When an "external" finite field suited for hardware implementation and an "internal" finite field with subfield properties suited for software implementation are used, the coefficients T are mapped from the "external" finite field to the "internal" finite field before any finite field computations are performed. When an error value has been decoded, it is mapped back to the "external" finite field before being applied to the symbol in error.
Data paths and storage elements in hardware executing a software correction algorithm are typically eight, sixteen, or thirty-two bits in width. When m differs from the data path width, storage space can be minimized by storing finite field elements in a "packed" format wherein a given finite field element may share a storage element with one or more others. Shifting of the desired finite field element and masking of the undesired finite field element(s) are required whenever a finite field element is accessed. On the other hand, speed can be increased by storing finite field elements in an "unpacked" format wherein each storage element is used by all or part of a single finite field element, with unused bits reset. When subfield computation is to be used, software complexity and execution time can be reduced when the components x_0 and x_1 of a finite field element x = x_1·α ⊕ x_0 are kept in separate storage elements with unused high-order bits reset. In the preferred embodiment of the invention, the process of mapping the coefficients T_i from the "external" field to the "internal" field is combined with that of separating subfield components. This is done by separating the mapping table into two parts, one for the m/2 low-order bits and one for the m/2 high-order bits of the "internal" finite field representation, where each part of the table has m entries of m/2 bits each. Likewise, the process of mapping an error value E from the "internal" field to the "external" field is performed simultaneously with that of combining subfield components. This is done by separating the mapping table into two parts, one for the m/2 low-order bits and one for the m/2 high-order bits of the "internal" finite field representation, where each part of the table has m/2 entries of m bits each.
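A minimal Python sketch of the combined mapping-and-separation step follows. The table names MAP_LO and MAP_HI are hypothetical; their contents depend on the particular pair of field polynomials and are not reproduced here.

    # Map one m-bit "external" element t to its "internal" subfield
    # components (x1, x0), one set bit at a time, using two half-tables of
    # m entries of m/2 bits each.
    def map_external_to_internal(t, m, MAP_LO, MAP_HI):
        x0 = 0                            # low-order subfield component
        x1 = 0                            # high-order subfield component
        for bit in range(m):
            if (t >> bit) & 1:
                x0 ^= MAP_LO[bit]
                x1 ^= MAP_HI[bit]
        return x1, x0                     # kept in separate storage elements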
ERROR LOCATOR POLYNOMIAL GENERATION
The coefficients of S(x) are used to iteratively generate the coefficients of the error locator polynomial σ(x). Such iterative algorithms are known in the prior art; for example, see Chapter 5 of Error-Correction Coding for Digital Communications by Clark and Cain. Typically, the error locator polynomial is iterated until n = d-1, but at the cost of some increase in miscorrection probability when an uncorrectable error is encountered, it is possible to reduce the number of iterations required for correctable errors by looping only until n = t + l_n, where l_n is the degree of σ(x).
ERROR LOCATION AND EVALUATION
If the degree of σ(x) indicates more than four errors exist, σ(x) is evaluated at x = α^L for each L, 0 ≤ L < 2^m - 1, until the result is zero, which signifies that α^L is a root of σ(x) and L is an error location. When the location L of an error has been determined, σ(x) is divided by (x ⊕ α^L), producing a new error locator polynomial of degree one less than that of the old: ##EQU10##
The error value E may be calculated directly from S(x) and the new σ(x) using ##EQU11## where j is the degree of the new σ(x).
In the preferred embodiment of this invention, the division of σ(x) by (x ⊕ α^L) and the calculation of the numerator and denominator of E are all performed in a single software loop.
When the location L and value E of an error have been determined, the coefficients of S(x) are adjusted to remove its contribution according to
S_i = S_i ⊕ E * α^(L*(m_0+i)).
By reducing the degree of .sigma.(x) and adjusting S(x) as the location and value of each error are determined, the time required to locate and evaluate each successive error is reduced.
As noted above, in the preferred embodiment of the invention, an error location L produced is greater than the actual error location by d, due to the manner in which S(x) is calculated. Also, when different "external" and "internal" finite fields are used, the error value E must be mapped back to the "external" field before it is applied to the symbol in error.
When the degree j of σ(x) is four or less, the time required to locate the remaining errors is reduced by using the special error locating routines below, each of which locates one of the remaining errors without using the Chien search. After the location of an error has been determined by one of the special error locating routines, its value is calculated, σ(x) is divided by (x ⊕ α^L), and S(x) is adjusted in the same way as when an error is located by evaluating σ(x).
When j=1, the error locator polynomial is
x ⊕ σ_1 = 0
By inspection, the root of this equation is σ_1 = α^L. Thus
L = LOG[σ_1].
When j=2, the error locator polynomial is
x^2 ⊕ σ_1*x ⊕ σ_2 = 0.
Solution of a quadratic equation in a finite field is known in the prior art; for example, see Chapter 3 of Practical Error Correction Design for Engineers by Neal Glover and Trent Dudley. Substituting x = y*σ_1 yields
y^2 ⊕ y ⊕ c = 0, where c = σ_2/σ_1^2.
For each odd solution Y_1 to this equation, there is an even solution Y_2 = Y_1 ⊕ α^0 (where α^0 = 1 in the preferred embodiment). Y_1 can be fetched from a pre-computed quadratic table derived according to
QUAD[i^2 ⊕ i] = i ⊕ 1 for i = 0, 2, ..., 2^m - 2
using c as an index. There are 2^(m-1) such pairs of solutions; the other elements of the table are set to an invalid number, for example zero, to flag the existence of more than two errors. When Y_1 ≠ 0 has been determined, reverse substitution yields an expression for the error location
L_1 = LOG[σ_1 * Y_1]
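A minimal Python sketch of this table-driven quadratic solution, assuming the gf_mul, gf_div, and LOG helpers of the earlier table sketch, is:

    def build_quad_table(m, gf_mul):
        # QUAD[i^2 XOR i] = i XOR 1 for even field elements i; unfilled
        # entries stay zero to flag "no solution".
        quad = [0] * (1 << m)
        for i in range(0, 1 << m, 2):
            quad[gf_mul(i, i) ^ i] = i ^ 1
        return quad

    def solve_quadratic(c1, c2, QUAD, gf_mul, gf_div, LOG):
        # Roots of x^2 XOR c1*x XOR c2 = 0 via the substitution x = y*c1.
        if c1 == 0 or c2 == 0:
            return None                        # repeated root or a zero root
        c = gf_div(c2, gf_mul(c1, c1))         # c = c2 / c1^2
        y1 = QUAD[c]
        if y1 == 0:
            return None                        # flags more than two errors
        x1 = gf_mul(c1, y1)                    # first root
        x2 = gf_mul(c1, y1 ^ 1)                # second root, from y2 = y1 XOR 1
        return LOG[x1], LOG[x2]                # the two error locations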
When j=3, the error locator polynomial is
x^3 ⊕ σ_1*x^2 ⊕ σ_2*x ⊕ σ_3 = 0.
Solution of a cubic equation in a finite field is known in the prior art; for example, see Flagg, U.S. Pat. No. 4,099,160.
Substituting ##EQU13## yields a quadratic equation in v: ##EQU14## where
A = σ_1^2 ⊕ σ_2 and B = σ_1*σ_2 ⊕ σ_3
A root V of this equation may be found by the quadratic method above. Then by reverse substitution ##EQU15##
When j=4, the error locator polynomial is
x^4 ⊕ σ_1*x^3 ⊕ σ_2*x^2 ⊕ σ_3*x ⊕ σ_4 = 0.
Solution of a quartic equation in a finite field is known in the prior art; for example, see Deodhar, U.S. Pat. No. 4,567,594. If σ_1 = 0, assign b_i = σ_i for i = 2 to 4, otherwise substitute ##EQU16## to give
z^4 ⊕ b_2*z^2 ⊕ b_3*z ⊕ b_4 = 0
where ##EQU17##
The resulting affine polynomial may be solved in the following manner:
1) Solve for a root Q of the equation q^3 ⊕ b_2*q ⊕ b_3 = 0 by the cubic method above.
2) Solve for a root S of the equation s^2 ⊕ (b_3/Q)*s ⊕ b_4 = 0 by the quadratic method above.
3) Solve for a root Z of the equation z^2 ⊕ Q*z ⊕ S = 0 by the quadratic method above.
If σ_1 = 0, L = LOG[Z], otherwise reverse substitution yields ##EQU18##
FIG. 23 illustrates, without loss of generality, the particular case where m=10, the width of data paths and storage elements is eight bits, the residue coefficients T_i are accessed beginning with the eight least-significant bits of T_(d-2), and subfield computation is to be used in a software correction algorithm.
Referring to FIG. 23, Step 2300 initializes counters j=0, k=d-2, l=0 and fetches the first 8-bit byte from the residue buffer into B0. Step 2310 increments counter j, fetches the next 8-bit byte from the residue buffer into B1, and shifts, masks, and combines B0 and B1 to form the next residue coefficient T_k, as determined by the value of counter l. Because subfield computation is to be used, Step 2310 then performs the mapping between the "external" finite field and the "internal" finite field, simultaneously separating the subfield components T_k0 and T_k1 for more efficient manipulation by the software correction algorithm. MAP_γ_TO_α[i] is a table such as Table 2 whose entries represent the contribution to the "internal" finite field element of each set bit i in the "external" finite field element. The two components are then stored in a pair of 8-bit storage elements. If counter l is not equal to six, Step 2320 transfers control to Step 2340. Otherwise Step 2330 increments counter j and fetches the next 8-bit byte from the residue buffer. Step 2340 adds two to counter l in a MODULO eight fashion, transfers the contents of B1 to B0, and decrements counter k. If counter k is not less than zero, Step 2350 transfers control back to Step 2310. Otherwise all residue coefficients T_i have been assembled, mapped, separated, and stored, and control is transferred to FIG. 24.
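For illustration, the unpacking step can be sketched in Python as below. The sketch assumes the coefficients are packed as consecutive 10-bit fields, least-significant bits first, beginning with T_(d-2); this is one consistent reading of FIG. 23 rather than a copy of its counters and steps.

    def unpack_residue(buffer_bytes, d, m=10):
        # Assemble the byte buffer into one integer, little-endian byte order,
        # then pull out successive m-bit fields.
        bits = 0
        for j, byte in enumerate(buffer_bytes):
            bits |= byte << (8 * j)
        mask = (1 << m) - 1
        coeffs = {}
        for idx, k in enumerate(range(d - 2, -1, -1)):
            coeffs[k] = (bits >> (m * idx)) & mask
        return coeffs                          # {k: T_k}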
Referring to FIG. 24, Step 2400 initializes all syndrome coefficients S_i = T_(d-2) and initializes counter j=1. Step 2405 computes Q_j. If Q_j = 0, it does not alter the coefficients S_i, so Step 2410 transfers control to Step 2450. Otherwise Step 2420 computes and adds the contribution of Q_j to each coefficient S_i. Step 2450 increments counter j. If counter j is less than d-1, Step 2460 transfers control back to Step 2405. Otherwise all coefficients S_i have been calculated and control is transferred to FIG. 25.
Referring to FIG. 25, Step 2500 initializes the polynomials, parameters, and counters for iterative error locator polynomial generation. When erasure pointer information is available, the correction power of the code is increased. Parameter t' is the maximum number of errors and erasures which the code can correct. P_i are the erasure pointer locations. If the number of erasure pointers p is equal to d-1, the maximum degree of σ(x) has been reached, so Step 2505 transfers control to FIG. 26. Otherwise, Step 2510 computes the nth discrepancy value d_n. If d_n is equal to zero, Step 2520 transfers control to Step 2560. Otherwise Step 2525 updates σ(x). If l_n ≥ l_k + n - k, Step 2530 transfers control to Step 2550. Otherwise Step 2540 updates σ^(k)(x) and other parameters. Step 2550 updates σ^(P)(x). Step 2560 increments counter n. If n < d-1, Step 2570 transfers control back to Step 2510. If l_n, the degree of σ(x), is greater than the number of errors and erasures the code can correct, Step 2580 exits the correction procedure unsuccessfully. Otherwise we are assured that we have generated a valid error locator polynomial and control is transferred to FIG. 26.
Referring to FIG. 26, Step 2600 initializes counters j = l_n and k = 0. If the erasure pointer counter p is not zero, Step 2605 transfers control to Step 2655, which sets L equal to the next unused erasure pointer and transfers control to FIG. 27. Otherwise, Step 2610 initializes counter i = 0. If j is less than or equal to four, Step 2620 transfers control to FIG. 28. Otherwise Step 2630 evaluates σ(x) at x = α^i. If the result A is equal to zero, a root of σ(x) has been found and Step 2640 transfers control to Step 2650, which sets L = i before transferring control to FIG. 27. Otherwise Step 2640 transfers control to Step 2670. On successful exit from FIG. 27, control is transferred to Step 2660. If the erasure pointer counter p is not equal to zero, there may remain unused erasure pointers; Step 2660 transfers control to Step 2665, which decrements the erasure pointer counter p and transfers control back to Step 2605. Otherwise, Step 2670 increments counter i. If counter i is then not equal to 2^m - 1, Step 2680 transfers control back to Step 2620. Otherwise all possible locations have been tested without locating all the errors; therefore the correction procedure is exited unsuccessfully.
Referring to FIG. 27, Step 2700 decrements counter j, the number of errors remaining to be found, initializes D = 1 and N = S_j, and records the true error location. Without loss of generality, the case where coefficients Q_j were used to compute S(x) = S"(x) is shown; the true error location is (L - d) MODULO (2^m - 1). Step 2710 divides σ(x) by (x ⊕ α^L) and calculates the numerator N and denominator D of E' = α^(L*m_0) * E. If the new σ_j (the degree of σ(x) now being equal to j) is equal to zero, the new σ(x) has a root equal to zero, which is not the finite field antilogarithm of any error location, so Step 2720 exits the correction procedure unsuccessfully. If the denominator is equal to zero, the error value cannot be computed, since division by zero in a finite field is undefined, so Step 2720 exits the correction procedure unsuccessfully. If the numerator is not equal to zero, Step 2725 transfers control to Step 2740. Otherwise, if the erasure pointer counter p is not equal to zero, a false erasure pointer has been detected, so Step 2730 transfers control to Step 2750. Otherwise, the computed error value is equal to zero in the absence of an erasure pointer, so Step 2725 exits the correction procedure unsuccessfully. Step 2740 calculates E' = N/D and E = α^(-L*m_0) * E'. Without loss of generality, the case where subfield computation is used is shown; the true error value is obtained by mapping the value E from the "internal" finite field to the "external" finite field. MAP_α_TO_γ[i] is a table such as Table 3 whose entries represent the contribution to the "external" finite field element of each set bit i in the "internal" finite field element. Step 2740 also increments counter k, the number of errors found. If counter j, the number of errors remaining to be found, is equal to zero, Step 2750 transfers control to FIG. 32. Otherwise Step 2760 adjusts the coefficients of S(x) to remove the contribution of the error just found and transfers control to FIG. 26, Step 2660.
Referring to FIG. 28, if four errors remain, Step 2800 calls the quartic solution subroutine of FIG. 31. If three errors remain, Step 2800 transfers control to Step 2802, which sets parameters for and calls the cubic solution subroutine of FIG. 30. If two errors remain, Step 2800 transfers control to Step 2804, which sets parameters for and calls the quadratic solution subroutine of FIG. 29. Otherwise one error remains and Step 2800 transfers control to Step 2806. If σ_1 is equal to zero, Step 2806 exits the correction procedure unsuccessfully, since the finite field logarithm of zero is undefined. Otherwise Step 2808 determines L = LOG[σ_1] and transfers control to FIG. 27. Likewise, if one of the subroutines of FIGS. 29, 30, or 31 successfully determines an error location, Step 2810 transfers control to FIG. 27. Otherwise, the correction procedure is exited unsuccessfully.
On entry to FIG. 29, the parameters c_1 and c_2 describe the quadratic equation
x^2 ⊕ c_1*x ⊕ c_2 = 0.
If c_1 = 0, the equation has a repeated root. If c_2 = 0, one of the roots is zero, whose log is undefined. If c_1 = 0 or c_2 = 0, Step 2900 exits the subroutine unsuccessfully. Otherwise Step 2902 determines a transformed root Y_1; when subfield computation is used, Step 2902 involves a procedure described in the section entitled "Subfield Computation" herein. If Y_1 is invalid, Step 2904 exits the subroutine unsuccessfully. Otherwise Step 2906 calculates the root X and its log L and returns successfully.
On entry to FIG. 30, the parameters c_1, c_2, and c_3 describe the cubic equation
x^3 ⊕ c_1*x^2 ⊕ c_2*x ⊕ c_3 = 0.
Step 3000 calculates the transform parameters A and B. If B is equal to zero, Step 3002 exits the subroutine unsuccessfully. Otherwise Step 3004 determines a root V of the quadratic equation ##EQU19## using the QUAD table. If no such root exists, Step 3004 produces zero and Step 3006 exits the subroutine unsuccessfully. Otherwise Step 3008 computes U. If U is not the cube of some finite field value T, Step 3010 exits the subroutine unsuccessfully. Otherwise Step 3012 calculates T and a root X of the cubic equation. If X is equal to zero, Step 3014 exits the subroutine unsuccessfully. Otherwise Step 3016 calculates the log L of the root X and returns successfully.
On entry to FIG. 31, the parameters σ_1, σ_2, σ_3, and σ_4 describe the quartic equation
x^4 ⊕ σ_1*x^3 ⊕ σ_2*x^2 ⊕ σ_3*x ⊕ σ_4 = 0.
If σ_1 is equal to zero, Step 3100 transfers control to Step 3110; if σ_3 is equal to zero, the quartic equation has repeated roots, so Step 3110 exits the subroutine unsuccessfully. Otherwise Step 3112 assigns b_i = σ_i for i = 2 to 4 and transfers control to Step 3120. If σ_1 is not equal to zero, Step 3100 transfers control to Step 3102, which calculates the numerator and denominator of transform parameter b_4. If the denominator of b_4 is equal to zero, Step 3104 exits the subroutine unsuccessfully. Otherwise Step 3106 calculates the transform parameters b_4, b_3, and b_2 and transfers control to Step 3120.
Step 3120 sets parameters for and calls the cubic solution subroutine of FIG. 30. If this returns unsuccessfully, Step 3122 exits the subroutine unsuccessfully. Otherwise Step 3130 assigns Q = X and sets parameters for and calls the quadratic solution subroutine of FIG. 29. If this returns unsuccessfully, Step 3132 exits the subroutine unsuccessfully. Otherwise Step 3140 sets parameters for and calls the quadratic solution subroutine of FIG. 29. If this returns unsuccessfully, Step 3142 exits the subroutine unsuccessfully. Otherwise if σ_1 is equal to zero, Step 3150 returns L successfully. Otherwise Step 3160 computes X. If X is equal to zero, Step 3162 exits the subroutine unsuccessfully. Otherwise Step 3170 computes and returns L successfully.
FIG. 32 illustrates, without loss of generality, error burst length checking for the particular case where m=10, t=4, and a single burst up to twenty-two bits in length or two bursts, each up to eleven bits in length, are allowed.
Referring to FIG. 32, if the number of error symbols found is less than or equal to two, by inspection there are at most two bursts, each less than eleven bits in length, so Step 3200 exits the correction procedure successfully. Otherwise, Step 3205 sorts the symbol errors into decreasing-L order. If there are four symbols in error, Step 3210 transfers control to Step 3250. Otherwise, if the first and second error symbols are adjacent, Step 3220 transfers control to Step 3230. If the third error symbol is also adjacent to the second error symbol, Step 3230 transfers control to Step 3245, which forces the fourth error symbol to zero and transfers control to FIG. 34 to check the length of the error burst(s) contained in the three adjacent error symbols. Otherwise, Step 3230 transfers control to FIG. 33 to check the length of the error burst contained in the first and second error symbols. If the first two error symbols are not adjacent, Step 3220 transfers control to Step 3240. If the second and third error symbols are also not adjacent, three bursts have been detected, so Step 3240 exits the correction procedure unsuccessfully. Otherwise, Step 3240 transfers control to FIG. 33 to check the length of the error burst contained in the second and third error symbols.
If the number of error symbols found is equal to four, Step 3210 transfers control to Step 3250. If the first and second error symbols are not adjacent, or if the third and fourth error symbols are not adjacent, two bursts have been detected, one of which is at least twelve bits in length, so Step 3250 exits the correction procedure unsuccessfully. Otherwise, if the second and third error symbols are adjacent, Step 3260 transfers control to FIG. 34 to check the length of the burst(s) contained in the four adjacent error symbols. If the second and third error symbols are not adjacent, two bursts have been detected, so Step 3260 transfers control to Step 3265, which calls FIG. 33 to check the length of the burst contained in the first and second error symbols. If that burst is less than or equal to eleven bits in length, Step 3270 transfers control to FIG. 33 to check the length of the burst contained in the third and fourth error symbols.
Referring to FIG. 33, Step 3300 sets X equal to the first error symbol in the burst to be checked, initializes the burst length l = 20, and sets bit number b = 9. Steps 3310 and 3320 search for the first bit of the error burst. Steps 3340 and 3350 search for the last bit of the error burst. Upon entry to Step 3360, l is equal to the length of the error burst. If l is greater than eleven, Step 3360 returns unsuccessfully. Otherwise, the burst contained in the two adjacent error symbols is less than or equal to eleven bits in length and Step 3360 returns successfully.
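A functional restatement of this two-symbol check, as a minimal Python sketch (m = 10, maximum allowed burst 11 bits; it computes the span from the first to the last nonzero bit rather than reproducing the step-by-step search of FIG. 33), is:

    def burst_fits(first_symbol, second_symbol, m=10, max_burst=11):
        window = (first_symbol << m) | second_symbol   # two adjacent symbols
        if window == 0:
            return True                                # no error bits at all
        length = window.bit_length() - (window & -window).bit_length() + 1
        return length <= max_burst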
Referring to FIG. 34, Step 3400 initializes symbol number i=0 and bit number b=9. A single burst, twenty-two bits in length, is treated as two consecutive bursts, each eleven bits in length. Steps 3410 and 3415 search for the first bit of the first burst. Steps 3420, 3425, 3430, and 3440 skip the next eleven bits, allowing the first burst to be up to eleven bits in length, then search for the next non-zero bit, which is the first bit of the second burst. If the fourth error symbol is not zero, Step 3450 transfers control to Step 3455. On entry to Step 3455, the end of the second burst has been determined to be in the fourth error symbol; if the second burst begins in the second error symbol, the second burst is at least twelve bits in length, so Step 3455 exits the correction procedure unsuccessfully. If the second error burst begins in the third error symbol and ends in the fourth error symbol, Step 3455 transfers control to Step 3465. If the fourth error symbol is zero, Step 3450 transfers control to Step 3460; if the second error burst begins and ends in the third error symbol, the second error burst must be less than eleven bits in length, so Step 3460 exits the correction procedure successfully. Otherwise, the second error burst begins in the second error symbol and ends in the third error symbol, so step 3460 transfers control to Step 3465. Steps 3465, 3470, 3475, and 3480 skip eleven more bits, allowing the second error burst to be up to eleven bits in length, then search for any other non-zero bits in the last error symbol. If a non-zero bit is detected, the second error burst is more than eleven bits in length, so Step 3480 exits the correction procedure unsuccessfully. Otherwise, Step 3470 exits the correction procedure successfully when all bits have been checked.
SUBFIELD COMPUTATION
In this section, a large field, GF(2.sup.2*n), generated by a small field, GF(2.sup.n), is discussed. Techniques are developed to accomplish operations in the large field by performing several operations in the small field.
Let elements of the small field be represented by powers of .beta.. Let elements of the large field be represented by powers of .alpha..
The small field is defined by a specially selected polynomial of degree n over GF(2). The large field is defined by the polynomial:
x.sup.2 +x+.beta.
over the small field.
Each element of the large field, GF(2.sup.2*n), can be represented by a pair of elements from the small field, GF(2.sup.n). Let x represent an arbitrary element from the large field. Then:
x=x.sub.1 .multidot..alpha.+x.sub.0
where x.sub.1 and x.sub.0 are elements from the small field, GF(2.sup.n). The element x from the large field can be represented by the pair of elements (x.sub.1,x.sub.0) from the small field. This is much like representing an element from the field of Figure 2.5.1 of Glover and Dudley, Practical Error Correction Design for Engineers, pg. 89, with three elements from GF(2), (x.sub.2,x.sub.1,x.sub.0). Let .alpha. be any primitive root of:
x.sup.2 +x+.beta.
Then:
.alpha..sup.2 +.alpha.+.beta.=0
Therefore:
.alpha..sup.2 =.alpha.+.beta.
The elements of the large field, GF(2.sup.2*n), can be defined by the powers of .alpha.. For example: ##EQU20##
This list of elements can be denoted ##EQU21##
The large field, GF(2.sup.2*n), can be viewed as being generated by the following shift register. All paths are n bits wide. ##STR1##
This shift register implements the polynomial x.sup.2 +x+.beta. over GF(2.sup.n).
Methods for accomplishing finite field operations in the large field by performing several simpler operations in the small field are developed below.
ADDITION
Let x and w be arbitrary elements from the large field. Then: ##EQU22##
MULTIPLICATION
The multiplication of two elements from the large field can be accomplished with several multiplications and additions in the small field. This is illustrated below: ##EQU23##
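As a concrete illustration of arithmetic in the pair representation (a sketch, not taken from the patent), the Python routines below implement large-field addition and multiplication on pairs (x.sub.1, x.sub.0), using the identity .alpha..sup.2 =.alpha.+.beta. developed above; smul is an assumed GF(2.sup.n) multiplier whose name is hypothetical.

def large_add(x, w):
    """(x1, x0) + (w1, w0): addition in the large field is component-wise XOR."""
    return (x[0] ^ w[0], x[1] ^ w[1])

def large_mul(x, w, beta, smul):
    """(x1*alpha + x0)*(w1*alpha + w0), reduced with alpha**2 = alpha + beta:
    result = (x1*w1 + x1*w0 + x0*w1)*alpha + (beta*x1*w1 + x0*w0)."""
    x1, x0 = x
    w1, w0 = w
    a = smul(x1, w1)
    hi = a ^ smul(x1, w0) ^ smul(x0, w1)   # coefficient of alpha
    lo = smul(beta, a) ^ smul(x0, w0)      # constant term
    return (hi, lo)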
Methods for accomplishing other operations in the large field can be developed in a similar manner. The methods for several additional operations are given below without the details of development.
INVERSION ##EQU24##
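The inversion equation is not reproduced here, but for this construction the inverse of x=x.sub.1 .alpha.+x.sub.0 is uniquely determined, and one standard form (verifiable by multiplying it back against x under .alpha..sup.2 =.alpha.+.beta.) is sketched below; smul and sinv are assumed small-field multiply and invert helpers, not names taken from the patent.

def large_inv(x, beta, smul, sinv):
    """Inverse of x = x1*alpha + x0 in GF(2**(2n)).
    With delta = x0**2 + x0*x1 + beta*x1**2 (a small-field element),
    1/x = (x1/delta)*alpha + (x0 + x1)/delta."""
    x1, x0 = x
    delta = smul(x0, x0) ^ smul(x0, x1) ^ smul(beta, smul(x1, x1))
    dinv = sinv(delta)
    return (smul(x1, dinv), smul(x0 ^ x1, dinv))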
LOGARITHM ##EQU25## Then,
L={the integer whose residue modulo (2.sup.n -1) is J and whose residue modulo (2.sup.n +1) is K}
This integer can be determined by the application of the Chinese Remainder Method. See Section 1.2 of Glover and Dudley, Practical Error Correction Design for Engineers, pages 11-13, for a discussion of the Chinese Remainder Method.
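As a sketch of this Chinese Remainder step (assuming only the residues J and K defined above), the Python routine below recombines them; the moduli 2.sup.n -1 and 2.sup.n +1 are relatively prime because 2.sup.n -1 is odd.

def combine_log(J, K, n):
    """Recover L modulo 2**(2*n) - 1 from J = L mod (2**n - 1) and
    K = L mod (2**n + 1) by the Chinese Remainder Method."""
    m1, m2 = (1 << n) - 1, (1 << n) + 1
    # pow(a, -1, m) is the modular inverse (Python 3.8 and later)
    return (J * m2 * pow(m2, -1, m1) + K * m1 * pow(m1, -1, m2)) % (m1 * m2)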
The function f.sub.1 can be accomplished with a table of 2.sup.n entries which can be generated with the following algorithm.
BEGIN
Set table location f.sub.1 (0)=0
FOR I=2 to 2.sup.n
Calculate the GF(2.sup.2*n) element Y=.alpha..sup.I =Y.sub.1 .multidot..alpha.+Y.sub.0
Calculate the GF(2.sup.n) element Y.sub.0 /Y.sub.1
Set f.sub.1 (Y.sub.0 /Y.sub.1)=I
NEXT I
END
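A direct software rendering of this table-generation loop is sketched below; it walks the powers of .alpha. in the pair representation, multiplying by .alpha. through the identity .alpha..sup.2 =.alpha.+.beta., and it assumes small-field multiply and divide helpers smul and sdiv (hypothetical names).

def build_f1(n, beta, smul, sdiv):
    """f1 table of 2**n entries: f1[Y0/Y1] = I for alpha**I = Y1*alpha + Y0,
    I = 2 .. 2**n; f1[0] = 0 is left as the special-case flag."""
    f1 = [0] * (1 << n)
    y1, y0 = 1, 0                       # alpha**1 = 1*alpha + 0
    for I in range(2, (1 << n) + 1):
        # multiply (y1, y0) by alpha: result is (y1 + y0)*alpha + beta*y1
        y1, y0 = y1 ^ y0, smul(beta, y1)
        f1[sdiv(y0, y1)] = I
    return f1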
ANTILOGARITHM
X=ANTILOG.sub..alpha. (L)
 =ANTILOG.sub..beta. (INT(L/(2.sup.n +1))) if [L MOD (2.sup.n +1)]=0
 =[ANTILOG.sub..beta. (INT(L/(2.sup.n +1)))].multidot..alpha. if [L MOD (2.sup.n +1)]=1
 =x.sub.1 .multidot..alpha.+x.sub.0 if [L MOD (2.sup.n +1)]>1
where x.sub.1 and x.sub.0 are determined as follows. Let
a=ANTILOG.sub..beta. [L MOD (2.sup.n -1)]
b=f.sub.2 [(L MOD (2.sup.n +1))-2]
Then, ##EQU26##
The function f.sub.2 can be accomplished with a table of 2.sup.n entries. This table can be generated with the following algorithm.
BEGIN
Set f.sub.2 (2.sup.n -1)=0
FOR I=0 to 2.sup.n -2
Calculate the GF(2.sup.2*n) element y=.alpha..sup.(I+2) =Y.sub.1 .multidot..alpha.+Y.sub.0
Calculate the GF(2.sup.n) element Y.sub.0 /Y.sub.1
Set f.sub.2 (I)=Y.sub.0 /Y.sub.1
NEXT I
END
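The f.sub.2 table can be generated with the same pair-representation walk; smul and sdiv are again assumed small-field helpers.

def build_f2(n, beta, smul, sdiv):
    """f2 table of 2**n entries: f2[I] = Y0/Y1 for alpha**(I+2) = Y1*alpha + Y0,
    I = 0 .. 2**n - 2; f2[2**n - 1] stays 0."""
    f2 = [0] * (1 << n)
    y1, y0 = 1, 0                       # alpha**1
    y1, y0 = y1 ^ y0, smul(beta, y1)    # advance to alpha**2
    for I in range((1 << n) - 1):
        f2[I] = sdiv(y0, y1)
        y1, y0 = y1 ^ y0, smul(beta, y1)
    return f2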
ROOTS OF Y.sup.2 +Y+C=0
To find the roots of:
Y.sup.2 +Y+C=0 (1)
in the large field, first construct a table for finding such roots in the small field. Roots in the large field are then computed from roots in the small field.
JUSTIFICATION ##EQU27##
Equating coefficients of powers of .alpha. on the two sides of the equation yields:
(Y.sub.1.sup.2 +Y.sub.1 +C.sub.1).alpha.=0 (2)
and
Y.sub.0.sup.2 +Y.sub.0 +(C.sub.0 +Y.sub.1.sup.2 .beta.)=0 (3)
PROCEDURE
Construct a table for finding roots of:
Y.sup.2 +Y+C=0 (4)
in the small field. The contents of table locations corresponding to values of C for which a root of (4) does not exist should be forced to all zeros. The low order bit (2.sup.0) of each table location corresponding to values of C for which a root of (4) exists should be forced to .alpha..sup.0 (where .alpha..sup.0 =1 in the preferred embodiment).
IF Trace(C) IS NOT 0 (SO THAT NO ROOT OF (1) EXISTS IN THE LARGE FIELD), THEN,
Y.sub.a =0.
ELSE,
FIND A ROOT OF (2), SAY Y.sub.1a, USING THE TABLE FOR FINDING A ROOT OF Y.sup.2 +Y+C=0 IN THE SMALL FIELD.
SUBSTITUTE Y.sub.1a INTO (3) AND FIND A ROOT OF (3), SAY Y.sub.0a, USING THE SAME TABLE.
IF Y.sub.0a IS 0, XOR Y.sub.1a WITH .beta..sup.0 AND AGAIN SUBSTITUTE Y.sub.1a INTO (3) AND FIND A ROOT OF (3) USING THE TABLE.
THE DESIRED ROOT IN THE LARGE FIELD IS:
Y.sub.a =Y.sub.1a .alpha.+Y.sub.0a
THE SECOND ROOT IS SIMPLY:
Y.sub.b =Y.sub.a +.alpha..sup.0
END IF
NOTE: Y.sub.a =0 flags the case where a root does not exist in the large field for (1).
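The procedure can be summarized in software as the sketch below (an illustration, not the patented firmware). Here root_tab is the small-field root table just described (entry 0 when no root exists, low-order bit forced to 1 when one does), and smul is an assumed small-field multiplier.

def solve_quadratic(c1, c0, beta, root_tab, smul):
    """Find a root Y = y1*alpha + y0 of Y**2 + Y + C = 0 in the large field,
    with C = c1*alpha + c0.  Returns (y1, y0), or None when no root exists;
    the second root is then (y1, y0 ^ 1)."""
    y1 = root_tab[c1]                     # root of (2): y1**2 + y1 + c1 = 0
    if y1 == 0 and c1 != 0:
        return None                        # (2) unsolvable, so (1) unsolvable
    # substitute y1 into (3): y0**2 + y0 + (c0 + beta*y1**2) = 0
    y0 = root_tab[c0 ^ smul(beta, smul(y1, y1))]
    if y0 == 0:
        y1 ^= 1                            # try the other root of (2)
        y0 = root_tab[c0 ^ smul(beta, smul(y1, y1))]
        if y0 == 0:
            return None                    # no root in the large field
    return (y1, y0)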
CONSTRUCTING REED-SOLOMON CODES
CONSTRUCTING THE FINITE FIELD FOR A REED-SOLOMON CODE
It is well-known in the prior art that a primitive polynomial of degree m over GF(2) can be used to construct a representation for a finite field GF(2.sup.m). For example, see Practical Error Correction Design for Engineers, hereinbefore identified, pages 89-90.
It is possible to use such a representation of GF(2.sup.m) to construct other representations of GF(2.sup.m). For example, let .beta..sup.i represent the elements of a finite field constructed as described, then the elements .alpha..sup.i of another representation may be constructed by application of the equation
.alpha..sup.i =(.beta..sup.i).sup.M
where no factor of M divides 2.sup.m -1 (field size minus one).
Appendix A discusses the use of a polynomial of the form
x.sup.2 +x +.beta.
to construct a representation for a large finite field GF(2.sup.2m) from a representation of a small finite field GF(2.sup.m).
It is possible to use a primitive polynomial of degree m over GF(2) to construct a representation for the elements .gamma..sup.i of a small finite field GF(2.sup.m) and then to use the relationship
.beta..sup.i =(.gamma..sup.i).sup.M
to construct another representation of the elements of the small field and then to use the polynomial
x.sup.2 +x+.beta.
over GF(2.sup.m) to construct a representation for the elements .alpha..sup.i of a large finite field of GF(2.sup.2m).
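As an illustration of these constructions (a sketch, not the patent's hardware), the Python routines below build antilog and log tables for GF(2.sup.m) from a primitive polynomial supplied as a bit mask, and then form a repowered representation in which the i-th element is the M-th power of the i-th original element.

def gf_tables(poly, m):
    """Antilog and log tables for GF(2**m) defined by a primitive polynomial;
    poly is an integer bit mask including the x**m term, e.g. the preferred
    embodiment's mu polynomial x**5 + x**4 + x**2 + x + 1 is 0b110111."""
    size = (1 << m) - 1
    antilog, log = [0] * size, [0] * (1 << m)
    x = 1
    for i in range(size):
        antilog[i] = x
        log[x] = i
        x <<= 1
        if x >> m:                 # reduce modulo the defining polynomial
            x ^= poly
    return antilog, log

def repowered(antilog, M):
    """New representation whose i-th element is (beta**i)**M = beta**(i*M)
    expressed in the original representation."""
    size = len(antilog)            # 2**m - 1
    return [antilog[(i * M) % size] for i in range(size)]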
DEFINING THE CODE GENERATOR POLYNOMIAL FOR A REED-SOLOMON CODE
The code generator polynomial G(x) for a Reed-Solomon Code is defined by the equation ##EQU28##
where
d=the minimum Hamming distance of the code
m.sub.0 =the offset
The minimum Hamming distance d of the code establishes the number of symbol errors correctable by the code (t) and the number of extra symbol errors detectable by the code (det). The equation
d-1=2t+det
establishes the relationship between the code's minimum Hamming distance and its correction and detection power.
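Once a field representation is fixed, the product form of G(x) can be expanded numerically. The sketch below (an illustration, assuming antilog and log tables of the kind built above) multiplies out the d-1 linear factors of G(x).

def generator_poly(antilog, log, m0, d):
    """Coefficients of G(x) = product over i = 0 .. d-2 of (x + gamma**(m0+i)),
    returned low-order coefficient first."""
    size = len(antilog)                               # 2**m - 1

    def mul(a, b):
        return 0 if a == 0 or b == 0 else antilog[(log[a] + log[b]) % size]

    g = [1]                                           # start with G(x) = 1
    for i in range(d - 1):
        root = antilog[(m0 + i) % size]
        nxt = [0] * (len(g) + 1)
        for j, c in enumerate(g):
            nxt[j] ^= mul(c, root)                    # root * g(x) term
            nxt[j + 1] ^= c                           # x * g(x) term
        g = nxt
    return g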
MAPPING BETWEEN FINITE FIELDS
Let .omega..sup.i be a representation of the elements of GF(2.sup.2m) established by any primitive polynomial of degree 2m over GF(2).
Let .gamma..sup.i be a representation of the elements of GF(2.sup.2m) established by the relationship
.gamma..sup.i =(.omega..sup.i).sup.MM
Let .mu..sup.i be a representation of the elements of GF(2.sup.m) established by any primitive polynomial of degree m over GF(2).
Let .beta..sup.i be a representation of the elements of GF(2.sup.m) established by the relationship
.beta..sup.i =(.mu..sup.i).sup.M
Let .alpha..sup.i be a representation of the elements of GF(2.sup.2m) established by the polynomial
x.sup.2 +x+.beta.
over GF(2.sup.m).
A simple linear mapping may exist between elements of the .alpha. and .gamma. finite fields. One such candidate mapping can be defined as follows: ##EQU29## The mapping is valid only if the following test holds: ##EQU30## An alternative candidate mapping can be defined as follows: ##EQU31## The mapping is valid only if the following test holds: ##EQU32##
In constructing candidate .alpha. fields, any value of M satisfying the relationship
1.ltoreq.M.ltoreq.2.sup.m -1 {no factor of M divides 2.sup.m -1}
may be used.
In constructing candidate .gamma. fields, any value of MM satisfying the relationship
1.ltoreq.MM.ltoreq.2.sup.2m -1 {no factor of MM divides 2.sup.2m -1}
may be used.
In most cases, many pairs of .gamma. and .alpha. fields can be found for which there exists a simple linear mapping (as described above) between the elements of the two fields. Such a mapping is employed in the current invention to minimize the gate count in the encoder and residue generator and to minimize firmware space required for the correction of multiple bursts.
One could reduce the computer time required for evaluating candidate pairs of .gamma. and .alpha. fields by performing prescreening operations to eliminate some candidate pairs, though the computer time required without such prescreening is not excessive.
MAPPING BETWEEN ALTERNATIVE FINITE FIELDS
In the preferred embodiment of the current invention, the representation for the .omega. field is established by the primitive polynomial
x.sup.10 +x.sup.9 +x.sup.5 +x.sup.4 +x.sup.2 +x.sup.1 +1
over GF(2). The representation for the .gamma. field is established by the equation
.gamma..sup.i =(.omega..sup.i).sup.MM
where,
MM=32
The representation for the .mu. field is established by the primitive polynomial
x.sup.5 +x.sup.4 +x.sup.2 +x.sup.1 +1
over GF(2). The representation for the .beta. field is established by the relationship
.beta..sup.i =(.mu..sup.i).sup.M
where
M=1
The representation for the .alpha. field is established by the polynomial
x.sup.2 +x+.beta.
over GF(2.sup.m).
Also in the preferred embodiment of the current invention, the alternative form of mapping described above is employed. The resulting mapping is defined in the tables shown below.
TABLE 2
______________________________________
Bit of .gamma. Field Element    Contribution to .alpha. Field Element
______________________________________
0000000001                      0000000001
0000000010                      1101100000
0000000100                      1011011011
0000001000                      1110110110
0000010000                      1111011101
0000100000                      1101011110
0001000000                      0001011010
0010000000                      0110000010
0100000000                      0011101100
1000000000                      1111000111
______________________________________
TABLE 3
______________________________________
Bit of .alpha. Field Element    Contribution to .gamma. Field Element
______________________________________
0000000001                      0000000001
0000000010                      0110001101
0000000100                      0100101000
0000001000                      0011011110
0000010000                      1101000011
0000100000                      1100011010
0001000000                      1001010000
0010000000                      0110111100
0100000000                      0010110001
1000000000                      0111111001
______________________________________
To convert an element of the .gamma. field to an element of the .alpha. field, sum the contributions in the right-hand column of Table 2 that correspond to bits that are "1" in the .gamma. field element.
To convert an element of the .alpha. field to an element of the .gamma. field, sum the contributions in the right-hand column of Table 3 that correspond to bits that are "1" in the .alpha. field element.
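Applying Table 2 amounts to a 10-by-10 bit-matrix multiplication over GF(2). The sketch below assumes that bit 0 of a field element corresponds to the 0000000001 row (a bit-ordering assumption, not stated explicitly above); the reverse mapping uses Table 3 in exactly the same way.

# Table 2 contributions, indexed by the bit position of the gamma-field element.
GAMMA_TO_ALPHA = [
    0b0000000001, 0b1101100000, 0b1011011011, 0b1110110110, 0b1111011101,
    0b1101011110, 0b0001011010, 0b0110000010, 0b0011101100, 0b1111000111,
]

def gamma_to_alpha(x):
    """Convert a 10-bit gamma-field element to the alpha field by XOR-summing
    the Table 2 contributions of the bits that are set in x."""
    y = 0
    for bit, contribution in enumerate(GAMMA_TO_ALPHA):
        if (x >> bit) & 1:
            y ^= contribution
    return y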
In the preferred embodiment of the current invention, the code generator polynomial ##EQU33## is selected to be self-reciprocal. G(x) is self-reciprocal when m.sub.0 satisfies ##EQU34## More specifically, the preferred code generator polynomial is ##EQU35##
There have been disclosed and described in detail herein three preferred embodiments of the invention and their methods of operation. From the disclosure it will be obvious to those skilled in the art that various changes in form and detail may be made to the invention and its method of operation without departing from the spirit and scope thereof.
Claims
  • 1. A decoder for an error detection and correction system using a Reed-Solomon code or related code of degree d-1 for detection and correction of a plurality of errors in a codeword of n symbols comprised of k data symbols and d-1 check symbols, wherein each symbol is comprised of m binary bits of information, said decoder comprising:
  • residue generator for producing a modified residue polynomial T(x) having modified residue coefficients T.sub.i according to a predetermined transformation of a remainder polynomial having coefficients R.sub.i wherein the predetermined transformation is: ##EQU36## where: d-1=a number of check symbols; and
  • G.sub.j =coefficients of a generator polynomial; processor comprising:
  • syndrome generator for computing a syndrome polynomial S(x) from said modified residue coefficients T.sub.i ;
  • error polynomial generator for generating an error locator polynomial .sigma.(x) from said syndrome polynomial S(x);
  • error locator responsive to said locator polynomial .sigma.(x) for generating error locations;
  • error value generator responsive to said error locator polynomial .sigma.(x), said error locations, and said syndrome polynomial S(x) for generating error values; and
  • corrector for applying said error values to said data symbols in said data buffer to correct symbols that are in error.
  • 2. In a decoder for an error detection and correction system using a Reed-Solomon code or related code of degree d-1 for detection and correction of a plurality of errors in a codeword of n symbols comprised of k data symbols and d-1 check symbols, wherein each symbol is comprised of m binary bits of information, an error decoding method comprising the steps of:
  • generating a modified residue polynomial T(x) having modified residue coefficients T.sub.i according to a predetermined transformation of a remainder polynomial having coefficients R.sub.i wherein the predetermined transformation is: ##EQU37## where: d-1=a number of check symbols; and
  • G.sub.j =coefficients of a generator polynomial;
  • computing a frequency domain syndrome polynomial S(x) from said modified residue polynomial T(x);
  • generating an error locator polynomial .sigma.(x) from said syndrome polynomial S(x);
  • generating error locations using said error locator polynomial .sigma.(x);
  • generating error values using said error locator polynomial .sigma.(x), said error locations, and said syndrome polynomial S(x); and
  • applying said error values to said data symbols in said data buffer to correct symbols that are in error.
  • 3. A Reed-Solomon encoder comprising:
  • means for receiving a plurality of information bits and logically organizing the same into a plurality of ten bit information symbols;
  • means for determining a Reed-Solomon redundancy polynomial for the plurality of 10 bit information symbols using the generator polynomial: ##EQU38## where .gamma..sup.i =(.omega..sup.i).sup.32, and wherein .omega..sup.i are elements of a finite field generated by a GF(2) polynomial
  • x.sup.10 +x.sup.9 +x.sup.5 +x.sup.4 +x.sup.2 +x.sup.1 +1
  • means for appending the Reed-Solomon redundancy polynomial onto the plurality of information bits.
  • 4. A Reed-Solomon coder comprising:
  • means for receiving information consisting of a number of data bytes forming an information polynomial, each byte having a predetermined number of data bits therein, unequal to the number of bits in a code symbol; and,
  • means for determining and appending a redundancy polynomial to the information polynomial using the GF(1024) generator polynomial: ##EQU39## where .gamma..sup.i =(.omega..sup.i).sup.32, and wherein .omega..sup.i are elements of a finite field generated by a GF(2) polynomial
  • x.sup.10 +x.sup.9 +x.sup.5 +x.sup.4 +x.sup.2 +x.sup.1 +1.
  • 5. The Reed-Solomon coder of claim 4, wherein the means for determining and appending a redundancy polynomial to the information polynomial is an encoder having a k-bit serial external XOR form of linear feedback shift register, where k is equal to or greater than 1.
  • 6. The Reed-Solomon coder of claim 5, wherein the linear feedback shift register is also configurable to perform the encoding and syndrome generation functions for a computer generated code.
  • 7. The Reed-Solomon coder of claim 4, further comprised of a means for providing a multiple-way interleave wherein even numbered symbols of the information polynomial are placed in a first codeword polynomial and odd numbered symbols of the information polynomial are placed in a second codeword polynomial.
  • 8. The Reed-Solomon coder of claim 4, further comprised of a means for providing an N-way interleave of N information polynomials 1 . . . n . . . N, each having m symbols therein, the m modulo N numbered symbols of each information polynomial being placed in the m modulo N numbered codeword polynomial.
  • 9. A Reed-Solomon coder for appending redundancy bits to an information polynomial to form an original digital message word so that the location and pattern of errors in a corrupted version of the original digital message word may later be determined, comprising:
  • means for receiving an information polynomial of a plurality of 8 bit bytes;
  • means for appending to said information polynomial, eight 10 bit redundancy symbols for later determining the location and pattern of a first burst error not exceeding a first predetermined number of bits, or determining the location and pattern of a first and second burst error each not exceeding a second predetermined number of bits;
  • the combination of the information polynomial and the eight 10 bit redundancy symbols, together with any pad bits, forming the original digital message word;
  • whereby the location and pattern of a first burst error not exceeding the first predetermined number of bits in a corrupted version of the original digital message word may later be determined, or the location and pattern of both first and second burst errors each not exceeding the second predetermined number of bits in a corrupted version of the original digital message word may later be determined.
  • 10. The Reed-Solomon coder of claim 9, wherein the means for determining and appending a redundancy polynomial to the information polynomial and any pad bits is a means using the GF(1024) generator polynomial: ##EQU40## where .gamma..sup.i =(.omega..sup.i).sup.32, and wherein .omega..sup.i are elements of a finite field generated by a GF(2) polynomial
  • x.sup.10 +x.sup.9 +x.sup.5 +x.sup.4 +x.sup.2 +x.sup.1 +1.
  • 11. A Reed-Solomon coder for appending redundancy bits to an information polynomial to form an original digital message word so that the location and pattern of errors in a corrupted version of the original digital message word may later be determined, comprising:
  • means for receiving an information polynomial of a plurality of 8 bit bytes;
  • means for appending to said information polynomial, eight 10 bit redundancy symbols for later determining the location and pattern of a first burst error not exceeding 22 bits in length, or determining the location and pattern of a first and second burst error each not exceeding 11 bits in length;
  • the combination of the information polynomial and the eight 10 bit redundancy symbols, together with any pad bits, forming the original digital message word;
  • whereby the location and pattern of a first burst error not exceeding 22 bits in length in a corrupted version of the original digital message word may later be determined, or the location and pattern of both first and second burst errors each not exceeding 11 bits in length in a corrupted version of the original digital message word may later be determined.
  • 12. The Reed-Solomon coder of claim 11, wherein the means for determining and appending a redundancy polynomial to the information polynomial and any pad bits is a means using the GF(1024) generator polynomial: ##EQU41## where .gamma..sup.i =(.omega..sup.i).sup.32, and wherein .omega..sup.i are elements of a finite field generated by a GF(2) polynomial
  • x.sup.10 +x.sup.9 +x.sup.5 +x.sup.4 +x.sup.2 +x.sup.1 +1.
  • 13. A k-bit serial burst trapping decoder for decoding a codeword comprised of a plurality of m-bit symbols where k<m.
  • 14. The k-bit serial burst trapping decoder as recited in claim 13, wherein k>1.
  • 15. The k-bit serial burst trapping decoder as recited in claim 13, comprising an external XOR linear feedback shift register.
  • 16. The k-bit serial burst trapping decoder as recited in claim 13, further comprising a k-bit serial encoder.
  • 17. A Reed-Solomon error detection and correction system for processing digital data, comprising:
  • an input for receiving a codeword represented by a polynomial having coefficient symbols in a finite field GF(2.sup.m) where m>1;
  • an error signature generator, responsive to the codeword, for generating an error signature represented as a polynomial having coefficient symbols in a finite field GF(2.sup.n) where n>1;
  • a means for bit-by-bit reversing the error signature to generate a reversed error signature; and
  • a burst trapping decoder, responsive to the reversed error signature, for correcting a burst error in the codeword.
  • 18. The error detection and correction system as recited in claim 17 wherein the error signature is a modified residue comprising modified coefficients T.sub.i related to a remainder polynomial comprising coefficients R.sub.i according to the following transformation: ##EQU42## where: d-1=a number of check symbols; and
  • G.sub.j =coefficients of a generator polynomial.
  • 19. An error correcting system for decoding a received codeword polynomial having coefficients represented by symbols in a finite field GF(2.sup.m), comprising:
  • (a) a first decoder for decoding the received codeword polynomial according to a first representation of the finite field; and
  • (b) a second decoder for decoding the received codeword polynomial according to a second representation of the finite field.
  • 20. The error correcting system as recited in claim 19, wherein m=10.
  • 21. The error correcting system as recited in claim 19, wherein the first decoder is a single burst error burst trapping decoder and the second decoder is a multiple burst error Reed-Solomon decoder.
  • 22. The error correcting system as recited in claim 19, further comprising a code mapper for mapping the symbols of the received codeword from the first representation of the finite field to the second representation of the finite field.
  • 23. The error correcting system as recited in claim 19, wherein the received codeword polynomial is encoded using a code generator polynomial G(x) of the form: ##EQU43## d=a minimum Hamming distance of the code; m.sub.0 =an offset; and
  • .gamma..sup.(m.sub.0 +i) =(.alpha..sup.M).sup.(m.sub.0 +i) ; where:
  • .alpha. is a root of a primitive polynomial;
  • no factor of M divides 2.sup.m -1; and
  • M does not equal 1.
  • 24. The error correcting system as recited in claim 19, wherein the second decoder can correct more errors in the received codeword than the first decoder.
  • 25. The error correcting system as recited in claim 19, wherein the second representation of the finite field is a large field generated by a polynomial over a small field GF(2.sup.k) where k>1.
  • 26. A Reed-Solomon error correcting system for decoding a received codeword polynomial having coefficients represented by symbols in a finite field GF(2.sup.m), comprising:
  • (a) a burst trapping decoder for correcting, on-the-fly, a single burst error in the received codeword polynomial, wherein the burst trapping decoder is a k-bit serial burst trapping decoder where k<m; and
  • (b) a firmware decoder for correcting, not on-the-fly, a plurality of burst errors in the received codeword polynomial.
  • 27. The error correcting system as recited in claim 26, wherein the firmware decoder operates according to a subfield representation of the finite field.
  • 28. The error correcting system as recited in claim 26, wherein the received codeword polynomial comprises user data bits, redundancy bits and pad bits, wherein the number of user data bits plus pad bits is divisible by the number of bits in a symbol.
  • 29. The error correcting system as recited in claim 26, wherein the received codeword polynomial comprises user data bits, redundancy bits and pad bits, wherein the number of user data bits plus redundancy bits plus pad bits is divisible by the number of bits in a byte.
  • 30. The error correcting system as recited in claim 26, wherein the received codeword polynomial is encoded using a code generator polynomial G(x) of the form: ##EQU44## d=a minimum Hamming distance of the code; m.sub.0 =an offset; and
  • .gamma..sup.(m.sub.0 +i) =(.alpha..sup.M).sup.(m.sub.0 +i) ; where:
  • .alpha. is a root of a primitive polynomial; and
  • no factor of M divides 2.sup.m -1.
  • 31. The error correcting system as recited in claim 26, wherein the firmware decoder is responsive to an error signature and an erasure locator polynomial for generating error locations and values used to correct the plurality of burst errors in the received codeword.
  • 32. The error correcting system as recited in claim 31, wherein the firmware decoder initializes an error locator polynomial to the erasure locator polynomial before generating the error locations and values.
Parent Case Info

The present application is a continuation of application Ser. No. 08/056,839, filed May 3, 1993, now U.S. Pat. No. 5,659,557, which is a continuation of application Ser. No. 07/612,430, filed Nov. 8, 1990, issued as U.S. Pat. No. 5,280,488 on Jan. 18, 1994.

US Referenced Citations (42)
Number Name Date Kind
3811108 Howell May 1974
4099160 Flagg Jul 1978
4142174 Chen et al. Feb 1979
4162480 Berlekamp Jul 1979
4355391 Alsop IV Oct 1982
4410989 Berlekamp Oct 1983
4413399 Riggle et al. Nov 1983
4455655 Galen et al. Jun 1984
4494234 Patel Jan 1985
4525838 Patel Jul 1985
4566105 Oisel et al. Jan 1986
4567594 Deodhar Jan 1986
4584686 Fritze Apr 1986
4604750 Manton et al. Aug 1986
4633470 Welch et al. Dec 1986
4706250 Patel Nov 1987
4730321 Machado Mar 1988
4733396 Baldwin et al. Mar 1988
4769818 Mortimer Sep 1988
4777635 Glover Oct 1988
4782490 Tenengolts Nov 1988
4833678 Cohen May 1989
4833679 Anderson et al. May 1989
4839896 Glover et al. Jun 1989
4843607 Tong Jun 1989
4845713 Zook Jul 1989
4849975 Patel Jul 1989
4856003 Weng Aug 1989
4866716 Weng Sep 1989
4890287 Johnson et al. Dec 1989
4916702 Berlekamp Apr 1990
4975915 Sako et al. Dec 1990
4979173 Geldman et al. Dec 1990
5001715 Weng Mar 1991
5099482 Cameron Mar 1992
5107503 Riggle et al. Apr 1992
5107506 Weng et al. Apr 1992
5109385 Karp et al. Apr 1992
5136592 Weng Aug 1992
5267241 Kowal Nov 1993
5280488 Glover et al. Jan 1994
5659557 Glover et al. Aug 1997
Non-Patent Literature Citations (12)
Entry
Glover et al., Practical Error Correction Design for Engineers, Second Edition, Dec. 1988, Data Systems Technology Corp., pp. 11-13, 32-34, 89-90, 112-113, 129-134, 181-184, 186, 194-196, 242, 256-268, 270, 285, 296, 298, 350.
Blahut, "Transform Techniques for Error Control Codes", IBM Journal of R&D, vol. 23, No. 3, May 1989.
Heise et al., "Serial Implementation of b-adjacent Codes", IBM Technical Disclosure Bulletin, vol. 24, No. 5, Oct. 1981.
Bossen et al., "Serial Processing of Interleaved Codes", IBM Technical Disclosure Bulletin, vol. 17, No. 3, Aug. 1974.
Berlekamp, "Algebraic Codes for Improving the Reliability of Tape Storage", National Computer Conference, Dec. 1975, pp. 497-499.
Data Sheet for: "Advanced Burst Error Processor", Part No. Am95C94, Advanced Micro Devices, May, 1989.
Product Description for: "Low-Cost High Performance Error Correcting Code Chip", Part. No. NG-8250, Cirrus Logic, Inc., Jan. 1988.
Clark et al., Error Correction Coding for Digital Communications, Plenum Press, Dec. 1981, Chapter 5, pp. 181-225.
Berlekamp, "Bit-Serial Reed-Solomon Encoders", IEEE Transactions on Information Theory, vol. IT-28, No. 6, Nov. 1982, pp. 869-874.
Maki et al., "A VLSI Reed-Solomon Encoder: An Engineering Approach", IEEE Custom Integrated Circuits Conference, Dec. 1986, pp. 177-181.
Beth et al., "Architectures for Exponentiation in GF(2* *n)", Advances in Cryptology-CRYPTO Dec. 1986 Proceedings, Springer-Verlag, pp. 302-310.
Peterson et al., Error Correcting Codes, MIT Press, Dec. 1972, pp. 277, 472-476.
Continuations (2)
Number Date Country
Parent 056839 May 1993
Parent 612430 Nov 1990