1. Technical Field of the Invention
The invention relates generally to communication systems; and, more particularly, it relates to communication systems employing Reed-Solomon (RS) coding.
2. Description of Related Art
Data communication systems have been under continual development for many years. One type of communication system that has been of significant interest lately is a communication system that employs error correction coding; in particular, communication systems employing Reed-Solomon (RS) codes have received considerable attention in recent years. Communication systems employing such error correction codes are often able to achieve lower bit error rates (BER) than uncoded or weaker-coded alternatives at a given signal-to-noise ratio (SNR).
A primary directive in this area of development has been to lower the SNR required to achieve a given BER within a communication system. The ideal goal has been to approach Shannon's limit for the communication channel. Shannon's limit may be viewed as the data rate that can be used in a communication channel, having a particular SNR, while still achieving arbitrarily low error rates through the communication channel. In other words, the Shannon limit is the theoretical bound on channel capacity for a given modulation and code rate.
There are a wide variety of applications in which RS codes can be employed to attempt to effectuate (ideally) error free transmission and receipt of information. In the context of communication systems having a communication channel over which coded signals are communicated, RS codes can be employed to attempt to effectuate (ideally) error free transmission from a communication device and/or (ideally) error free receipt of information at a communication device. In the context of hard disk drive (HDD) applications, RS codes can be employed to attempt to effectuate (ideally) error free write and/or read of information to and from storage media. With respect to HDD applications, as is known, many varieties of memory storage devices (e.g., disk drives), such as magnetic disk drives, are used to provide data storage for a host device, either directly, or through a network such as a storage area network (SAN) or network attached storage (NAS). Typical host devices include stand-alone computer systems such as a desktop or laptop computer, enterprise storage devices such as servers, storage arrays such as redundant array of independent disks (RAID) arrays, storage routers, storage switches and storage directors, and other consumer devices such as video game systems and digital video recorders. These devices provide high storage capacity in a cost-effective manner.
One of the operations performed in prior art decoding of a RS coded signal is the generation of an error value polynomial (EVP). Prior art RS decoding approaches necessarily require computing the EVP. For large error correction code (ECC) systems (e.g., t=120), this can take anywhere from 1000 to 7000 clock cycles, depending on the amount of ALU parallelism provided in the decoding device. Additionally, computing the EVP typically requires additional multiplexing (MUXing), which can significantly affect the area and speed of a silicon design when implementing an actual communication device capable of performing decoding of a RS coded signal.
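To put a rough number on that cost: in one common formulation (assumed here only for illustration; the root offset and indexing conventions may differ from those used later in this disclosure), the EVP is the error evaluator polynomial

$$ \Omega(x) \;\equiv\; S(x)\,\sigma(x) \bmod x^{2t}, $$

so forming it requires on the order of 2t·v Galois-field multiply-accumulate operations (roughly 20,000 to 30,000 for t = v = 120), which is broadly consistent with the 1000 to 7000 clock cycle range cited above once the available ALU parallelism is taken into account.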
The present invention is directed to apparatus and methods of operation that are further described in the following Brief Description of the Several Views of the Drawings, the Detailed Description of the Invention, and the claims. Other features and advantages of the present invention will become apparent from the following detailed description of the invention made with reference to the accompanying drawings.
A novel approach is presented herein that is operable to save computation clock cycles that would normally be used to compute the error value polynomial (EVP) to be used in Forney's algorithm for computing error values when decoding a Reed-Solomon (RS) coded signal. These clock cycles may be used to reduce the otherwise required parallelism and complexity in the ECC design that may be needed to perform the error correction in the allotted time. Moreover, this reduction in clock cycles may also result in power savings. The typically large hardware costs required to perform multiplexing of signals needed when computing the EVP in accordance with RS decoding are largely avoided. The approach presented herein provides a much less complex solution for decoding RS coded signals. Some advantages related to this approach include lower risk, less design time, and more scalability in an overall design.
Disk drive unit 100 further includes one or more read/write heads 104 that are coupled to arm 106 that is moved by actuator 108 over the surface of the disk 102 either by translation, rotation or both. A disk controller 130 is included for controlling the read and write operations to and from the drive, for controlling the speed of the servo motor and the motion of actuator 108, and for providing an interface to and from the host device.
Disk controller 130 further includes a processing module 132 and memory module 134. Processing module 132 can be implemented using one or more microprocessors, micro-controllers, digital signal processors, microcomputers, central processing units, field programmable gate arrays, programmable logic devices, state machines, logic circuits, analog circuits, digital circuits, and/or any device that manipulates signals (analog and/or digital) based on operational instructions that are stored in memory module 134. When processing module 132 is implemented with two or more devices, each device can perform the same steps, processes or functions in order to provide fault tolerance or redundancy. Alternatively, the functions, steps and processes performed by processing module 132 can be split between different devices to provide greater computational speed and/or efficiency.
Memory module 134 may be a single memory device or a plurality of memory devices. Such a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static random access memory (SRAM), dynamic random access memory (DRAM), flash memory, cache memory, and/or any device that stores digital information. Note that when the processing module 132 implements one or more of its functions via a state machine, analog circuitry, digital circuitry, and/or logic circuitry, the memory module 134 storing the corresponding operational instructions may be embedded within, or external to, the circuitry comprising the state machine, analog circuitry, digital circuitry, and/or logic circuitry. Further note that the memory module 134 stores, and the processing module 132 executes, operational instructions that can correspond to one or more of the steps of a process, method and/or function illustrated herein.
Disk controller 130 includes a plurality of modules, in particular, device controllers 105, processing module 132, memory module 134, read/write channel 140, disk formatter 125, and servo formatter 120 that are interconnected via bus 136 and bus 137. The host interface 150 can be connected to only the bus 137 and communicates with the host device 50. Each of these modules can be implemented in hardware, firmware, software or a combination thereof, in accordance with the broad scope of the present invention. While a particular bus architecture is shown in
In one possible embodiment, one or more modules of disk controller 130 are implemented as part of a system on a chip (SoC) integrated circuit. In an embodiment, this SoC integrated circuit includes a digital portion that can include additional modules such as protocol converters, linear block code encoding and decoding modules, etc., and an analog portion that includes device controllers 105 and optionally additional modules, such as a power supply, etc. In a further embodiment, the various functions and features of disk controller 130 are implemented in a plurality of integrated circuit devices that communicate and combine to perform the functionality of disk controller 130.
When the drive unit 100 is manufactured, disk formatter 125 writes a plurality of servo wedges along with a corresponding plurality of servo address marks at equal radial distance along the disk 102. The servo address marks are used by the timing generator for triggering the “start time” for various events employed when accessing the media of the disk 102 through read/write heads 104.
In a possible embodiment, wireless communication device 53 is capable of communicating via a wireless telephone network such as a cellular, personal communications service (PCS), general packet radio service (GPRS), global system for mobile communications (GSM), or integrated digital enhanced network (iDEN) network, or other wireless communications network capable of sending and receiving telephone calls. Further, wireless communication device 53 is capable of communicating via the Internet to access email, download content, access websites, and provide streaming audio and/or video programming. In this fashion, wireless communication device 53 can place and receive telephone calls, text messages such as emails, short message service (SMS) messages, pages and other data messages that can include attachments such as documents, audio files, video files, images and other graphics.
Referring to
The signals employed within this embodiment of a communication system 400 can be Reed-Solomon (RS) coded signals. Any of a very wide variety of applications that employ RS coding can benefit from various aspects of the invention, including any of those types of communication systems depicted in
A corresponding RS encoder (not shown in this particular embodiment) takes data (e.g., a block of digital data) and adds redundancy or parity bits thereto, thereby generating a codeword (e.g., a codeword to be written, transmitted, and/or launched into a communication channel). This redundancy is generated as a function of the particular RS code employed. Therefore, when the data (after undergoing RS encoding) is provided to some storage media (and/or transmitted via a communication channel and/or launched into a communication channel), and after it is read or received therefrom, in the undesirable event that any errors occurred during either of these processes (write and/or read, or transmit and/or receive), the hope is that the number of errors incurred is less than the error correcting capability of the RS code. The number and types of errors that can be corrected depend on the particular characteristics of the RS code employed.
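As a purely illustrative software sketch of the encoding operation just described (not taken from this disclosure), the following Python fragment builds a toy RS code over GF(2^4) with t=2 and n=15, assuming the L=0 root convention used as an example later in this text, and appends parity symbols computed as the remainder of x^{2t}·m(x) divided by the generator polynomial:

```python
# Minimal GF(2^4) arithmetic via log/antilog tables (primitive polynomial x^4 + x + 1).
PRIM = 0b10011
EXP, LOG = [0] * 30, [0] * 16
x = 1
for i in range(15):
    EXP[i], LOG[x] = x, i
    x <<= 1
    if x & 0b10000:
        x ^= PRIM
for i in range(15, 30):
    EXP[i] = EXP[i - 15]

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def poly_mul(p, q):                          # polynomial product over GF(2^4), lowest degree first
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] ^= gf_mul(a, b)
    return out

def rs_generator(t, L=0):                    # g(x) = (x - a^L)(x - a^(L+1)) ... (x - a^(L+2t-1))
    g = [1]
    for j in range(2 * t):
        g = poly_mul(g, [EXP[L + j], 1])     # (x + a^(L+j)); '+' equals '-' in GF(2^m)
    return g

def rs_encode(msg, t, L=0):
    g = rs_generator(t, L)
    rem = [0] * (2 * t) + list(msg)          # x^(2t) * m(x), lowest degree first
    for i in range(len(rem) - 1, 2 * t - 1, -1):
        coef = rem[i]                        # polynomial long division by the monic g(x)
        if coef:
            for j, gj in enumerate(g):
                rem[i - 2 * t + j] ^= gf_mul(coef, gj)
    return rem[:2 * t] + list(msg)           # parity symbols followed by the message symbols

codeword = rs_encode([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11], t=2)
print(codeword)                              # 15 symbols: 4 parity + 11 data
```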
Looking at
A syndrome calculation module 510 then processes the received codeword 591 to generate syndromes 592. The operation of the syndrome calculation module 510 is analogous to the calculation of the redundancy or parity bits within the RS encoding processing. As a function of the RS code employed, a RS codeword has a predetermined number of syndromes that depend only on errors (i.e., not on the actually written or transmitted codeword). The syndromes can be calculated by substituting a predetermined number of roots (as determined by the RS code) of the generator polynomial (employed within RS encoding) into the received codeword 591.
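A corresponding software sketch of the syndrome computation (again a toy GF(2^4) example with the hypothetical L=0 root convention, not the hardware of the figures) is shown below; injecting a single error into an otherwise valid codeword yields non-zero syndromes that depend only on that error:

```python
# GF(2^4) log/antilog tables (primitive polynomial x^4 + x + 1), as in the encoding sketch.
PRIM = 0b10011
EXP, LOG = [0] * 30, [0] * 16
x = 1
for i in range(15):
    EXP[i], LOG[x] = x, i
    x <<= 1
    if x & 0b10000:
        x ^= PRIM
for i in range(15, 30):
    EXP[i] = EXP[i - 15]

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def gf_pow(a, k):                            # a^k for non-zero a
    return EXP[(LOG[a] * k) % 15]

def syndromes(r, t, L=0):
    """S_j = r(alpha^(L+j)) for j = 0 .. 2t-1; r holds symbols, lowest degree first."""
    S = []
    for j in range(2 * t):
        root = gf_pow(2, L + j)              # alpha is the element 2 in this representation
        acc = 0
        for i, ri in enumerate(r):
            if ri:
                acc ^= gf_mul(ri, gf_pow(root, i))
        S.append(acc)
    return S

recv = [0] * 15                              # the all-zero word is a valid codeword
recv[3] = 7                                  # inject a single error symbol at position 3
print(syndromes(recv, t=2))                  # non-zero; determined by the error alone
```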
An error locator polynomial generation module 520 then receives these calculated syndromes 592. The syndromes 592 are also passed to an error magnitude calculation module 540. The error locator polynomial generation module 520 can generate the error locator polynomial 593 using various means, two of which include the Berlekamp-Massey method 522 and the Euclid method 524.
The error locator polynomial 593 is provided to an error correction module 550. The error locator polynomial 593 is also provided to an error location search module 530 that is operable to solve for the roots of the error locator polynomial 593. One approach is to employ the Chien search function 532.
Once the error locations 594 have been found within the error location search module 530 (i.e., using the Chien search function 532), then the error locations 594 are provided to the error magnitude calculation module 540 as well as to the error correction module 550. The error magnitude calculation module 540 finds the symbol error values, and it can employ a known approach such as the Forney method 542. Once the error locations 594 and the error magnitudes 595 are known, then the error correction module 550 corrects for them and outputs an estimated codeword 596.
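For readers who want a concrete software reference for this classical chain, the sketch below takes the error locator polynomial and the syndromes, performs a brute-force Chien-style sweep over the positions, forms the error evaluator Ω(x) = S(x)·σ(x) mod x^{2t}, and applies Forney's formula. It is a toy GF(2^4) illustration assuming the L=0 syndrome convention; it is not intended to mirror the hardware modules of the figures:

```python
# GF(2^4) helpers (primitive polynomial x^4 + x + 1), as in the earlier sketches.
PRIM = 0b10011
EXP, LOG = [0] * 30, [0] * 16
x = 1
for i in range(15):
    EXP[i], LOG[x] = x, i
    x <<= 1
    if x & 0b10000:
        x ^= PRIM
for i in range(15, 30):
    EXP[i] = EXP[i - 15]

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def gf_inv(a):
    return EXP[(15 - LOG[a]) % 15]

def alpha_pow(k):                            # alpha^k, k may be negative
    return EXP[k % 15]

def poly_eval(p, x0):                        # evaluate p (lowest degree first) by Horner's rule
    acc = 0
    for c in reversed(p):
        acc = gf_mul(acc, x0) ^ c
    return acc

def chien_and_forney(sigma, S, n, t):
    """Locate error positions (roots of sigma at alpha^-i) and compute magnitudes
    with Forney's formula, under the L = 0 syndrome convention."""
    omega = [0] * (2 * t)                    # Omega(x) = S(x) * sigma(x) mod x^(2t)
    for a in range(len(S)):
        for b in range(len(sigma)):
            if a + b < 2 * t:
                omega[a + b] ^= gf_mul(S[a], sigma[b])
    sigma_odd = [c if d % 2 else 0 for d, c in enumerate(sigma)]
    errors = {}
    for i in range(n):                       # brute-force "Chien" sweep over all positions
        x_inv = alpha_pow(-i)
        if poly_eval(sigma, x_inv) == 0:     # position i is an error location
            num = poly_eval(omega, x_inv)
            den = poly_eval(sigma_odd, x_inv)
            errors[i] = gf_mul(num, gf_inv(den))
    return errors

# Quick check: a single error of value 7 at position 3, so sigma(x) = 1 + alpha^3 x.
t, n = 2, 15
S = [gf_mul(7, alpha_pow(3 * j)) for j in range(2 * t)]
print(chien_and_forney([1, alpha_pow(3)], S, n, t))   # expected: {3: 7}
```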
With respect to the various processing modules depicted in this diagram as well as others, it is noted that any such processing module may be a single processing device or a plurality of processing devices. Such a processing device may be a microprocessor, micro-controller, digital signal processor, microcomputer, central processing unit, field programmable gate array, programmable logic device, state machine, logic circuitry, analog circuitry, digital circuitry, and/or any device that manipulates signals (analog and/or digital) based on operational instructions. Any such processing module can also be coupled to a memory. Such a memory may be a single memory device or a plurality of memory devices. Such a memory device may be a read-only memory, random access memory, volatile memory, non-volatile memory, static memory, dynamic memory, flash memory, and/or any device that stores digital information. Note that when such a processing module implements one or more of its functions via a state machine, analog circuitry, digital circuitry, and/or logic circuitry, the memory storing the corresponding operational instructions is embedded within the circuitry comprising the state machine, analog circuitry, digital circuitry, and/or logic circuitry. The memory stores, and the processing module executes, operational instructions corresponding to at least some of the steps and/or functions illustrated herein. Alternatively, it is noted that such a processing module may include an embedded memory (or memories) that is operable to assist in the operations analogous to an external memory as described above.
In this embodiment as well, a corresponding RS encoder (not shown in this particular embodiment) takes data (e.g., a block of digital data) and adds redundancy or parity bits thereto, thereby generating a codeword (e.g., a codeword to be written, transmitted, and/or launched into a communication channel). This redundancy is generated as a function of the particular RS code employed. Therefore, when the data (after undergoing RS encoding) is provided to some storage media (and/or transmitted via a communication channel and/or launched into a communication channel), and after it is read or received therefrom, in the undesirable event that any errors occurred during either of these processes (write and/or read, or transmit and/or receive), the hope is that the number of errors incurred is less than the error correcting capability of the RS code. The number and types of errors that can be corrected depend on the particular characteristics of the RS code employed.
Looking at
A syndrome calculation module 610 then processes the received codeword 691 to generate syndromes 692. The operation of the syndrome calculation module 610 is analogous to the calculation of the redundancy or parity bits within the RS encoding processing. As a function of the RS code employed, a RS codeword has a predetermined number of syndromes that depend only on errors (i.e., not on the actually written or transmitted codeword). The syndromes can be calculated by substituting a predetermined number of roots (as determined by the RS code) of the generator polynomial (employed within RS encoding) into the received codeword 691.
An error locator polynomial generation module 620 then receives these calculated syndromes 692. The syndromes 692 are also passed to an error magnitude calculation module 640. The error locator polynomial generation module 620 can generate the error locator polynomial 693 using various means, two of which include the Berlekamp-Massey method 622 and the Euclid method 624.
The error locator polynomial 693 is provided to a combined error location search and error magnitude calculation module 640. This combined error location search and error magnitude calculation module 640 is operable to locate any errors by solving for the roots of the error locator polynomial 693 (i.e., to identify any error locations 694, if existent). One approach is to employ the Chien search function.
Once the error locations have been found within the combined error location search and error magnitude calculation module 640 (e.g., using the Chien search function or some other search function), then the error locations 694 are also employed by the combined error location search and error magnitude calculation module 640 to perform calculation of any error values (or error magnitudes 695). There is no need to perform the calculation of the EVP in this embodiment, as the combined error location search and error magnitude calculation module 640 is operable to perform calculation of the error values directly without requiring or using an EVP. Once the error locations 694 and the error magnitudes 695 are known, then the error correction module 650 corrects for them and outputs an estimated codeword 696.
In order to implement the suggested modified version of the Koetter theorem for computing ECC error values with a conventional Berlekamp-Massey hardware implementation, some minor changes must first be made to the Berlekamp-Massey algorithm (BMA). Once this is accomplished, an error value computation can be performed by simply executing a three-way Galois field multiply followed by a Galois field inversion at each error location during the Chien search operation. The modifications to Berlekamp-Massey and the additional hardware requirements for executing the modified Koetter Theorem for value computations are described below.
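The per-location arithmetic referred to above is modest: one three-input Galois-field product followed by one Galois-field inversion. The fragment below shows that step in isolation over GF(2^4); the operand names are placeholders for the register values described in the remainder of this section:

```python
# GF(2^4) log/antilog tables (primitive polynomial x^4 + x + 1).
PRIM = 0b10011
EXP, LOG = [0] * 30, [0] * 16
x = 1
for i in range(15):
    EXP[i], LOG[x] = x, i
    x <<= 1
    if x & 0b10000:
        x ^= PRIM
for i in range(15, 30):
    EXP[i] = EXP[i - 15]

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def gf_inv(a):                               # multiplicative inverse of a non-zero element
    return EXP[(15 - LOG[a]) % 15]

def error_value_step(term_a, term_b, term_c):
    """Three-way GF multiply followed by a GF inversion, performed once per error
    location found during the Chien search (the operands stand in for the three
    denominator terms described in the text)."""
    return gf_inv(gf_mul(gf_mul(term_a, term_b), term_c))

print(error_value_step(3, 5, 9))
```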
The following equation taken from reference [1] defines the error value computation that must be performed following each location of the Chien search in which the ELP (σ(x)) evaluates to zero.
In this equation, “i” is the error location index, beginning with zero at the last ECC parity check symbol and ending at the first data symbol; α is an element of the Galois field (GF); and “v” is the degree of the “complete” ELP. To describe this equation in more practical terms, it is broken apart further below:
In this breakdown, we introduce some additional symbolic terms to represent hardware register values stored during the Berlekamp-Massey processing in one implementation. These new symbols are defined below:
σc: the “current” sigma register. This is the working sigma storage register; at the end of the Berlekamp-Massey processing, it holds the coefficients of the “complete” error location polynomial.
σc(odd): the odd terms of the “current” sigma register. During the error location search operation (e.g., Chien search operation), the term σc(odd)(α^{−i}) is used to signify the odd terms of the error location polynomial evaluated at α^{−i}, where “i” is the error location as defined above.
σp: the “previous” sigma register. This is the value of sigma at the previous degree of the error location polynomial, after being appropriately shifted. During the error location search operation (e.g., Chien search operation), the term σp(α^{−i}) is used to signify the next-to-last degree of the error location polynomial evaluated at α^{−i}.
Δp: the discrepancy of the previous sigma. This is the discrepancy of the error location polynomial at the next-to-last degree of the error location polynomial.
By changing the positions of the terms of the above equations, the following modified equation for calculating error value, ei, can be produced:
In one implementation of the Berlekamp-Massey algorithm (BMA) as employed to decode a RS coded signal, one bank of (t) symbol-wide registers is used to store the coefficients of σc(x) and another is used to store the coefficients of σp(x). In addition, the current discrepancy (Δc) is computed each iteration and (when non-zero) an inverted version is calculated for use when the degree of the error location polynomial (ELP) is updated next, at which time it becomes known as 1/Δp. Then, for each iteration for which the discrepancy of the previous iteration was non-zero, the discrepancy ratio (Δc/Δp) is computed by multiplying Δc by 1/Δp, and used when calculating the new σc(x).
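A generic textbook Berlekamp-Massey iteration is sketched below in software for reference; the variables roughly play the roles of the registers named above (cur_sigma ≈ σc, prev_sigma ≈ σp before shifting, prev_disc ≈ Δp, and the gap counter records how many shifts of σp are pending). This is an illustrative formulation over a toy GF(2^4) field, not the specific register-transfer implementation of the figures:

```python
# GF(2^4) helpers (primitive polynomial x^4 + x + 1).
PRIM = 0b10011
EXP, LOG = [0] * 30, [0] * 16
x = 1
for i in range(15):
    EXP[i], LOG[x] = x, i
    x <<= 1
    if x & 0b10000:
        x ^= PRIM
for i in range(15, 30):
    EXP[i] = EXP[i - 15]

def gf_mul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def gf_inv(a):
    return EXP[(15 - LOG[a]) % 15]

def berlekamp_massey(S, t):
    """Return (sigma, L): the error locator polynomial (lowest degree first) and its
    degree, computed from the 2t syndromes S."""
    cur_sigma = [1] + [0] * (2 * t)          # working sigma register (~ sigma_c)
    prev_sigma = [1] + [0] * (2 * t)         # sigma at the last length change (~ sigma_p, unshifted)
    prev_disc = 1                            # discrepancy at that point (~ delta_p)
    L, gap = 0, 1                            # LFSR length and pending shift count
    for r in range(2 * t):
        disc = S[r]                          # discrepancy of cur_sigma against syndrome S[r]
        for j in range(1, L + 1):
            disc ^= gf_mul(cur_sigma[j], S[r - j])
        if disc == 0:
            gap += 1                         # zero discrepancy: defer the shift of sigma_p
            continue
        ratio = gf_mul(disc, gf_inv(prev_disc))      # the discrepancy ratio disc / prev_disc
        new_sigma = cur_sigma[:]
        for j in range(2 * t + 1 - gap):
            new_sigma[j + gap] ^= gf_mul(ratio, prev_sigma[j])
        if 2 * L <= r:                       # length change: capture the pre-update state
            prev_sigma, prev_disc = cur_sigma, disc
            L = r + 1 - L
            gap = 1
        else:
            gap += 1
        cur_sigma = new_sigma
    return cur_sigma[:L + 1], L

# Example: a single error of value 7 at position 3 (L = 0 convention syndromes).
S = [gf_mul(7, EXP[(3 * j) % 15]) for j in range(4)]
print(berlekamp_massey(S, t=2))              # expected: ([1, 8], 1), i.e. sigma(x) = 1 + alpha^3 x
```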
As can be seen, a first plurality of registers 740 (as shown by register 741, register 742, . . . , and register 743) and a second plurality of registers 750 (as shown by register 751, register 752, . . . , and register 753) are employed to store the coefficients of σc(x) and the coefficients of σp(x), respectively, which are used during the searching for error locations and the calculation of any error values. In some desired embodiments, the first plurality of registers 740 and the second plurality of registers 750 each include the same number of registers.
The purpose of shifting σp in this variant of the Berlekamp-Massey decoding approach is to account for the x^{c−p} product in the classical σc computation (e.g., where c is the iteration counter that corresponds to the current iteration, and p corresponds to the previous iteration). When using this variant of the Berlekamp-Massey decoding approach, care must now be taken to ensure that σp is only shifted after iterations with non-zero discrepancies. And, when one or more intermediate zero discrepancies are encountered followed by a non-zero discrepancy, multiple shifts of σp must be performed at that time to account for the intermediate zero discrepancies that occurred.
Another alteration to the Berlekamp-Massey decoding approach is the accumulation of the term α^{(2v−1)}, which is to be used to generate the complete term, (α^{(2v−1)·i}/Δp), from the modified equation for calculating the error value, ei, shown above. This can be accomplished during the Berlekamp-Massey algorithm using the circuit in the upper portion of the
The complete term, (α^{(2v−1)·i}/Δp), is initially calculated and then updated during each search cycle of the Chien search. This is accomplished using the hardware shown on the lower portion of
Another register 830 is operable to store the inverted discrepancy (shown as a disc_stor register). A Galois Field (GF) inverter 840 is operable to generate the inverted discrepancy. To complete the error value computation (i.e., calculation of the error value, ei), the three terms in the denominator of the equation above are multiplied together and the result is inverted using Galois Field arithmetic. The modified equation for calculating the error value, ei, is provided here again for the ease of the reader:
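Although the display equation itself is not reproduced in this text, the three denominator terms just described imply a form along the following lines (an inference from the surrounding description rather than a verbatim reproduction; the exact shift and exponent conventions depend on the BMA variant employed):

$$ e_i \;=\; \frac{1}{\;\sigma_{c(\mathrm{odd})}(\alpha^{-i})\cdot\sigma_{p}(\alpha^{-i})\cdot\dfrac{\alpha^{(2v-1)i}}{\Delta_p}\;} \;=\; \frac{\Delta_p}{\;\alpha^{(2v-1)i}\,\sigma_{c(\mathrm{odd})}(\alpha^{-i})\,\sigma_{p}(\alpha^{-i})\;} $$

which is exactly the three-way Galois field multiply followed by a Galois field inversion noted earlier.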
As can be seen, the embodiment of the
Additional information is provided below to assist the reader in understanding the means by which calculation of the error value polynomial (EVP) can be obviated when decoding a RS coded signal.
In reference [1], Koetter developed an error value computation method for algebraic geometric codes that saves hardware or latency. However, the method in reference [1] cannot be directly used in a typical Reed-Solomon (RS) decoder using the Berlekamp-Massey algorithm (BMA) for decoding processing (i.e., see references [2, 3]) and the Chien search as employed within decoding of a RS coded signal. Herein, the approach of reference [1] is modified to allow for decoding of a RS coded signal in an approach in which the calculation of the error value polynomial (EVP) is obviated.
Syndrome Calculation and Error-Locator
Let g(x) = (x−α^L)(x−α^{L+1}) ⋯ (x−α^{L+2t−1}) be a generator polynomial of a t-error-correcting, m-bit-symbol Reed-Solomon (RS) code of length n, where α is a primitive element of a Galois Field, i.e., GF(2^m), and L is an integer (e.g., such as L=0 if desired in one embodiment). Let the received vector r = c + e, where c is a codeword and e = (e_0, …, e_{n−1}) is an error vector. Then the syndromes of the received vector, r, are defined as follows:

S_j = r(α^{L+j}), j = 0, 1, …, 2t−1 (EQ 1)

Since c(α^{L+j}) = 0 at each root of g(x), the syndromes depend only on the error vector, i.e., S_j = e(α^{L+j}).
Let v≦t and suppose e has v non-zero error locations, e.g., e_{i_1}, …, e_{i_v} ≠ 0. The corresponding error location polynomial is σ(x) = (1−α^{i_1}x)(1−α^{i_2}x) ⋯ (1−α^{i_v}x), whose roots are α^{−i_1}, …, α^{−i_v}.
The relationship between the syndromes and the error location polynomial is as follows (see also reference [1]):

S_k + σ_1 S_{k−1} + … + σ_v S_{k−v} = 0, k = v, v+1, …, 2t−1 (EQ 3)
In other words, the error location polynomial, σ(x), generates the syndromes, S_0, …, S_{2t−1}, and the error location polynomial, σ(x), is the polynomial whose roots define the locations of those errors. The error location polynomial, σ(x), can also be referred to as a linear feedback shift register (LFSR)-connection polynomial because of its particular characteristics.
Berlekamp-Massey Algorithm (BMA)
We present a modified version of the BMA, as described in reference [3].
Let the syndromes, S_0, …, S_{2t−1}, be defined in (EQ 1). Given

σ^{(r)}(x) = σ_0^{(r)} + σ_1^{(r)} x + … + σ_{v_r}^{(r)} x^{v_r},

where v_r = deg(σ^{(r)}(x)), define the discrepancy

Δ_r = σ_0^{(r)} S_r + σ_1^{(r)} S_{r−1} + … + σ_{v_r}^{(r)} S_{r−v_r}
The procedures of BMA decoding processing can be stated as follows.
1) When r=0, initialize σ^{(0)}(x)=1, B^{(0)}(x)=1, L_0=0 and P_0=0
2) Iteratively, conduct the following operations for r=1, . . . , 2t−1
2.1) Compute Δ_{r−1}
2.2) Compute
2.3) Compute
2.4) Compute
2.5) Compute
σ^{(r)}(x) = σ^{(r−1)}(x) − Δ_{r−1} x^{1+P_{r−1}} B^{(r−1)}(x)
2.6) Compute
In the rest of this section, we present some properties of σ^{(r)}(x), B^{(r)}(x) and L_r.
Proposition 1 (see reference [3]) σ^{(r)}(x) satisfies the following equations
Proposition 2 (see reference [3]) L_r is the shortest length of an LFSR that generates the syndromes, S_0, …, S_{r−1}. Moreover, the sequence L_0, L_1, …, L_{2t−1} is a non-decreasing sequence.
Proposition 3 If the number of errors is v≦t, then L_{2t−1}=v and Δ_{2t−1}=0.
Proof. By (EQ 3) and Proposition 2.
Proposition 4 If the number of errors is v≦t, and suppose the error locations are as follows:

α^{l_1}, …, α^{l_v}

Then, for k=1, …, v, σ^{(2t−1)}(α^{−l_k}) = 0 and, moreover, B^{(2t−1)}(α^{−l_k}) ≠ 0.
Proof: The σ^{(2t−1)}(x) part of the proof can be found in reference [3]. Here, proof is given only for the last part of the proposition. Let r_1, …, r_s be the sequence in {1, …, 2t−1} such that the discrepancies Δ_{r_1}, …, Δ_{r_s} are non-zero
where r_0 = r_1 − 1. Moreover, by (EQ 10) and (EQ 11) of the BMA decoding processing approach, the following can be shown:
σ(2t−1)(x)=σ(r
Suppose there is 1≦j≦s such that B(2t−1)(αl
σ(r
Based on the assumption of r1, . . . , rs, we have Δ0= . . . =Δr
Proposition 5 If the number of errors is v≦t, let Q = 1 + P_{2t−1} − 2(t−v) and
λ(x) = x^Q B^{(2t−1)}(x) (EQ 16)
Then
Proof: Let 0≦r≦2t−1 be such that Δ_{r−1}≠0 but Δ_r=Δ_{r+1}= … =Δ_{2t−1}=0. Then δ_r=δ_{r+1}= … =δ_{2t−1}=0. Moreover, by the BMA decoding approach and Proposition 3 above, we have L_r=v.
Let m be the last number in {0, 1, …, 2t−1} such that δ_m=1. Thus Δ_{m−1}≠0 and δ_{m+1}= … =δ_{2t−1}=0. Then by the definition of r, we have r≧m. Moreover, by the BMA decoding approach, we have
v = L_r = L_m = m − L_{m−1} (EQ 18)
P_{m+k} = k, k = 0, …, 2t−1−m (EQ 19)
and
B^{(2t−1)}(x) = B^{(m)}(x) = Δ_{m−1}^{−1} σ^{(m−1)}(x) (EQ 20)
Therefore, by Proposition 1
Since λ(x) = x^Q B^{(2t−1)}(x), we have λ_0 = … = λ_{Q−1} = 0 and λ_{Q+k} = B_k^{(2t−1)}, k = 0, …, deg(B^{(2t−1)}). Thus
Since P_{2t−1} = 2t−1−m by (EQ 10), we have Q = 1 + 2t − 1 − m − 2t + 2v = 2v − m. By this conclusion and (EQ 18), j − Q = L_{m−1} implies j = v and j − Q = m − 1 implies j = 2v − 1. Therefore, (EQ 22) is (EQ 17).
Classical Error Evaluator: Forney's Formula
Let σ(x) be the error location polynomial with deg(σ(x))=v≦t. Compute the following:
which is
Then the error values (e.g., the error magnitudes) can be computed as follows:
where
can be obtained from the odd parts of the Chien search.
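Because the display equations of this subsection are not reproduced here, the following standard statement of the classical evaluator and Forney's formula is supplied for reference, written for the L = 0 convention (other root offsets introduce an additional power of α^{−i}):

$$ \Omega(x) \;\equiv\; S(x)\,\sigma(x) \bmod x^{2t}, \qquad S(x) = \sum_{j=0}^{2t-1} S_j x^j, $$

$$ e_i \;=\; \frac{\Omega(\alpha^{-i})}{\sigma_{\mathrm{odd}}(\alpha^{-i})}, \qquad \sigma_{\mathrm{odd}}(x) = x\,\sigma'(x)\ \text{(the odd-degree terms of }\sigma(x)\text{)}, $$

so σ_odd(α^{−i}) is available from the odd parts of the Chien search, and the formation of Ω(x) is precisely the step that the new evaluator below eliminates.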
New Error Evaluator (Obviates Need for Error Value Polynomial (EVP))
Let σ(x) = σ^{(2t−1)}(x), B^{(2t−1)}(x) and P_{2t−1} be obtained from the BMA decoding approach, and suppose deg(σ(x))=v≦t. Define

λ(x) = x^Q B^{(2t−1)}(x) (EQ 26)

where Q = 1 + P_{2t−1} − 2(t−v).
Theorem 1 Then the error value, ei, can be computed by
Proof: Let i be an error location (i.e., σ(α^{−i})=0) and let e_i be its error value. Modify this error value, e_i, to be as follows:
Then by Proposition 5 the modified error vector E = (…, e_i α^{iv} λ(α^{−i}), …) has the same error locations as the original error vector, i.e., σ(x) is also the error locator polynomial of E. Let T_j be the j-th syndrome of the modified error vector E. Thus, by (EQ 3), we have
Moreover,
Then by (EQ 17) of Proposition 5, we have T_j=0 for j=1, …, v−2, but T_{v−1}=1. With this conclusion and (EQ 29), the following is shown:
By Forney's formula provided above, we have
Thus
The new error evaluator is as follows:
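The closing display is likewise not reproduced in this text; combining Theorem 1's construction with Forney's formula under the L = 0 convention suggests that the new evaluator reduces to the following form (an inference from the proof outline above, consistent with the register-level equation given in the hardware discussion):

$$ e_i \;=\; \frac{1}{\;\alpha^{(2v-1)i}\;\lambda(\alpha^{-i})\;\sigma_{\mathrm{odd}}(\alpha^{-i})\;}, \qquad \lambda(x) = x^{Q}\,B^{(2t-1)}(x), $$

so that no error value polynomial is ever formed; only σ(x), the BMA auxiliary polynomial B^{(2t−1)}(x) (equivalently the shifted σp and Δp registers), and the number of errors v are needed at each located root.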
As shown in a block 920, the method 900 includes performing Chien searching to process the error location polynomial to locate an error within the RS coded signal. Thereafter, the method 900 continues by directly calculating an error magnitude of the located error (identified during Chien searching) within the RS coded signal by evaluating a first plurality of error location polynomial coefficients and a second plurality of error location polynomial coefficients of the error location polynomial provided by a final iteration of Berlekamp-Massey processing, as shown in a block 930. As can be seen, the need to perform generation of the error value polynomial (EVP) is obviated by the ability to calculate any error magnitudes (e.g., any error values) directly. The method 900 then continues by employing the calculated error magnitude to make a best estimate of an information codeword encoded within the RS coded signal, as shown in a block 940.
It is also noted that the specific variant of the Berlekamp-Massey algorithm (BMA) decoding approach may change in alternative embodiments. This might require minor modifications to the error value equation employed herein, yet the principles presented herein can also be applied to those embodiments to allow for direct calculation of error magnitudes (e.g., error values) without requiring the calculation of the EVP. These variants may include, but are not limited to, the following:
(1) computing the discrepancy at the beginning of the decoding rather than the end;
(2) calculating sigma (σ(x)) by multiplying by x^{(r−u)} rather than by shifting sigma-p (σp);
(3) performing the Chien search from the first data symbol to the last ECC parity symbol rather than vice versa; and
(4) storing lambda (λ) rather than sigma-p (σp) and delta-p (Δp) as described herein.
The present invention has also been described above with the aid of method steps illustrating the performance of specified functions and relationships thereof. The boundaries and sequence of these functional building blocks and method steps have been arbitrarily defined herein for convenience of description. Alternate boundaries and sequences can be defined so long as the specified functions and relationships are appropriately performed. Any such alternate boundaries or sequences are thus within the scope and spirit of the claimed invention.
The present invention has been described above with the aid of functional building blocks illustrating the performance of certain significant functions. The boundaries of these functional building blocks have been arbitrarily defined for convenience of description. Alternate boundaries could be defined as long as the certain significant functions are appropriately performed. Similarly, flow diagram blocks may also have been arbitrarily defined herein to illustrate certain significant functionality. To the extent used, the flow diagram block boundaries and sequence could have been defined otherwise and still perform the certain significant functionality. Such alternate definitions of both functional building blocks and flow diagram blocks and sequences are thus within the scope and spirit of the claimed invention.
One of average skill in the art will also recognize that the functional building blocks, and other illustrative blocks, modules and components herein, can be implemented as illustrated or by discrete components, application specific integrated circuits, processors executing appropriate software and the like or any combination thereof.
Moreover, although described in detail for purposes of clarity and understanding by way of the aforementioned embodiments, the present invention is not limited to such embodiments. It will be obvious to one of average skill in the art that various changes and modifications may be practiced within the spirit and scope of the invention, as limited only by the scope of the appended claims.
[1] R. Koetter, “On the determination of error values for codes from a class of maximal curves,” Proceedings Allerton Conference on Communication, Control, and Computing, University of Illinois at Urbana-Champaign, 1997.
[2] R. E. Blahut, Theory and Practice of Error Control Codes, Addison-Wesley Publishing Company, 1983.
[3] J. L. Massey, “Shift-register synthesis and BCH decoding,” IEEE Transactions on Information Theory, Vol. IT-15, No. 1, pp. 122-127, January 1969.
The present U.S. Utility patent application claims priority pursuant to 35 U.S.C. §119(e) to the following U.S. Provisional Patent Applications, which are hereby incorporated herein by reference in their entirety and made part of the present U.S. Utility patent application for all purposes:

1. U.S. Provisional Application Ser. No. 60/878,553, entitled “Area efficient on-the-fly error correction code (ECC) decoder architecture,” filed Jan. 4, 2007.

2. U.S. Provisional Application Ser. No. 60/899,522, entitled “Simplified RS (Reed-Solomon) code decoder that obviates error value polynomial calculation,” filed Feb. 5, 2007.

The following U.S. Utility patent application is hereby incorporated herein by reference in its entirety and is made part of the present U.S. Utility patent application for all purposes:

1. U.S. Utility patent application Ser. No. 11/717,468, entitled “Area efficient on-the-fly error correction code (ECC) decoder architecture,” filed concurrently on Mar. 13, 2007, pending.