ERROR-LOCATOR-POLYNOMIAL GENERATION WITH ERASURE SUPPORT

Abstract
A system and method for correcting errors in an ECC block using erasure-identification data when generating an error-locator polynomial. In an embodiment, an ECC decoding method uses “erasure” data indicative of bits of data that are unable to be deciphered by a decoder. Such a method may use a Berlekamp-Massey algorithm that receives two polynomials as inputs: a first polynomial indicative of erasure locations in the stream of bits and a syndrome polynomial indicative of all bits as initially determined. The Berlekamp-Massey algorithm may use the erasure-identification information to more easily decipher the overall codeword when faced with an error-filled codeword.
Description
BACKGROUND

A data-communications system, such as a computer disk drive or a cell phone, includes a read channel, which recovers data from a received read signal (sometimes called a data signal) by interpreting a stream of bits. Such systems may read and write data to and from storage media and/or communication channels at ever-increasing rates. With the increase in data throughput, software and hardware may need to be increasingly resilient to noise-induced errors. Thus, many communication and computer systems employ error-checking data processing, which may be both hardware and software based, in order to recover data if noise-induced errors arise.





BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the subject matter disclosed herein will become more readily appreciated as the same become better understood by reference to the following detailed description, when taken in conjunction with the accompanying drawings.



FIG. 1 is a block diagram of an embodiment of a hard-disk-drive system that may use soft data for error-checking.



FIG. 2 is a block diagram of an embodiment of an error-correction code (ECC) block that may be part of the controller of FIG. 1.



FIG. 3 is a block diagram of an embodiment of a computer system that may implement the HDD of FIG. 1 and the ECC block of FIG. 2.





DETAILED DESCRIPTION

The following discussion is presented to enable a person skilled in the art to make and use the subject matter disclosed herein. The general principles described herein may be applied to embodiments and applications other than those detailed above without departing from the spirit and scope of the present detailed description. The present disclosure is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features disclosed or suggested herein.


Error Correction Coding (ECC) is a method wherein errors that occur during the reading of data may be corrected through statistical interpolation of the data itself when the data is coded in a known manner. For example, in a computer system having a hard-disk drive, when the computer system (i.e., a host system for the hard-disk system) writes data to or reads data from the disk, the data may be checked for errors that may arise due to noise and inter-symbol interference (ISI). Specifically, during a read-data process, errors in reading the data may result from various problems encountered in the read channel of a disk-drive system. Such errors may cause one or more bits to be read out incorrectly; e.g., the read channel may interpret a bit as “0” when the bit should be “1” and vice versa.


Error Correction Coding (ECC) may be implemented as a function of statistical interpolation based on the data as it is read as well as other meta-data that is stored with the functional data on a storage medium. ECC is a mathematically intensive process, but it may greatly increase the reliability of data being read from a medium. As such, with the increased ability of hard-drive systems and other communications systems to transmit and receive data at greater speeds, improvements in ECC may complement the increased capabilities of such systems.


Hard-disk drive (HDD) systems and communication channels may use an ECC module to attempt to correct errors that may arise from reading data incorrectly due to external noise within a particular system. BCH codes (based upon the error-correction coding algorithms developed by Bose, Ray-Chaudhuri, and Hocquenghem, from whose names the acronym BCH derives) allow algebraic syndrome decoding to be used to find and fix bit-reading errors. One particular BCH coding schema is Reed-Solomon (RS) coding. RS coding is a form of the general BCH code wherein bits of data are grouped into symbols rather than being handled as individual bits. Generally speaking, the number of errors present in the symbols of the RS code is sometimes too large to fix using conventional methods. The errors may typically arise from noise and ISI or from an erasure. An erasure may be a bit or grouping of bits whose value the read channel is unable to determine definitively one way or the other. As such, erasures may be handled differently, as no assumption is made as to the possible validity of the initial interpretation.


Prior to discussing the figures, and by way of initial overview, a summary of the subject matter disclosed herein is presented. Conventional methods for correcting errors in RS codes utilize a hard-decision decoding method wherein errors are corrected on a sector-by-sector basis. Thus, for a given sector of data (e.g., 512 bytes of data), each RS code symbol should correspond to a coefficient in a polynomial that describes the entire sector. If errors arise, then at least one symbol will not correspond to any coefficient in this polynomial, and therefore an error is identified. Conventional methods for dealing with this error (as well as additional identified errors) involve using a Berlekamp-Massey algorithm to identify the locations of the errors in the sector and a Chien-Forney search engine to correct the identified errors. However, conventional methods do not support using the identifications of the erasures. Various Berlekamp-Massey algorithms that are able to take erasure information into account are discussed in detail below.


Depending on how many errors and erasures there are in a sector, the hard-decision ECC decoding method may be able to correct all errors. That is, for a given ECC block, the size and scope of the Chien-Forney search engine determine how many errors may be corrected. In an example, the number of errors that may be corrected is 20. If more than 20 errors are present, then the hard-decision ECC decoding method may be unable to solve for all errors, and other additional decoding methods may then be attempted.


When a codeword is identified as having errors and erasures, the afore-mentioned Berlekamp-Massey algorithm block may attempt to locate the exact positions of the errors in the codewords. Erasure locations may already be known, as the read-channel process may flag these positions initially. Then, an erasure-locator polynomial may be generated that describes the identified erasures in the codeword. Together with the actual codeword (which may contain noise-related errors as well), the Berlekamp-Massey algorithm block may identify the noise-induced error locations and fold in the erasure locations to recover the exact locations of all errors and erasures. These locations, expressed as a polynomial as well, may then be passed to a Chien-Forney search to fix the errors. Details regarding the ECC decoding are discussed in greater detail with respect to FIGS. 1-2.



FIG. 1 is a block diagram of a hard-disk-drive (HDD) system 100 according to an embodiment of the subject matter disclosed herein. Such an HDD system 100 may read data from a hard disk 106 or write data to the hard disk. For the purposes of soft-decision decoding of ECC, only the read operation of the HDD 100 is discussed herein.


Generally speaking, the HDD 100 may include a read channel 109 that may read data from a disk 106 and then pass read data through an ECC block 130 to a buffer manager 150 before eventually being passed along to a host computer (not shown). Each of these components may be controlled by a local HDD controller 105. Further, a skilled artisan will understand that these components (with the exception of the disk 106) may be disposed on a single integrated circuit die, individual integrated circuit dies or any combination of dies thereof. Each of these components is discussed further in the following paragraphs.


When data is to be read from a disk 106, a read head 112 that is part of a front end 110 interprets signals detected on the disk 106 to produce a stream of bits to be sent to a read data path. The front end 110 may include amplification and processing circuitry that assists with the reading of data stored on the disk 106. Such circuitry may include a pre-amplifier 113, a variable-gain amplifier (VGA) 114, and an analog-to-digital converter (ADC) 115. The read head 112 and pre-amplifier 113 convert the data stored on the disk 106 into an analog read signal, and the VGA 114 adjusts the amplitude of the analog read signal to a predetermined value or range of values deemed suitable for the subsequent components of the read circuit 120. The ADC 115 samples the gain-adjusted analog read signal and converts the analog read signal into a digital read signal that may then be passed to the read circuit 120. As was discussed earlier, noise and inter-symbol interference (ISI) may cause read errors wherein bits of data are affected when being read. Such noise-induced errors may be passed to the read circuit 120.


The read circuit 120 includes several data-processing components, such as filters and the like (not all are shown), for interpreting the read signal. Generally speaking, data read from the disk 106 may be stored and processed in groupings of eight or ten bits (or other suitable grouping numbers) depending on the RS code being employed. A grouping of bits may be referred to as an ECC symbol, wherein a sector of data (comprising 512 bytes, i.e., 4,096 bits, for example) may include approximately 410 ten-bit ECC symbols (4,096 bits ÷ 10 bits per symbol ≈ 410). These ECC symbols are used for error correction as discussed further below.


The read circuit 120 then interprets signals from the front end 110 on a bit-by-bit basis to reconstruct the symbols of the RS codeword. One component for accomplishing this interpretation is a Viterbi detector 122 that includes a path-history exchange block 121. The Viterbi detector 122 processes the sampled digital read signal to produce a signal comprising a stream of bits having definitive logical values representing “1” or “0”. An example of a Viterbi detector that may be the same as or similar to the Viterbi detector 122 is disclosed in U.S. Pat. No. 6,662,338 and U.S. Publication Nos. 2004/0010749 and 2004/0010748, which are incorporated by reference. To help the reader understand the present application better, a brief overview of the operation of the Viterbi detector 122 is presented in the following paragraphs.


A Viterbi detector 122 “recovers” data stored on the disk 106 from the digitized samples of the read signal generated by the read head 112. Assuming the stored data is binary data, the read head 112 senses one or more bits at a time as the surface of the disk 106 spins, and generates a series of sense voltages that respectively correspond to the sensed bits. This series of sense voltages composes the read signal, which consequently represents these sensed data bits in the order in which the read head 112 sensed them.


Unfortunately, because the disk 106 spins relatively fast with respect to the read head, the read signal is not a clean logic signal having two distinct levels that respectively represent logic 1 and logic 0. Instead, the read signal is laden with noise and ISI, and thus more closely resembles a continuous analog signal than a digital signal. Using a sample clock (not shown), the front end 110 samples the read signal at points that correspond to the read head 112 being aligned with respective bit storage locations on the surface of the disk. The ADC 115 digitizes these samples, and generally, a signal-conditioning block (e.g., the VGA 114 and ADC 115) adjusts the gain and timing of these samples and further equalizes these samples before passing them to the Viterbi detector 122. The Viterbi detector 122 generates a sequence of bit values that is the most likely interpretation of the sequence of bit values stored on the disk 106.


In determining the output data sequence of the Viterbi detector 122, a dynamic programming detection algorithm (i.e., a Viterbi algorithm) may be used to determine the most probable interpretation of the signals from the front end 110 by passing possible interpretations through various “paths” of a state machine. (Although called a state machine here, those skilled in the art understand that such a reference is made for ease of understanding, as the iterations of various dynamic calculations may be software-based and not embodied in any traditional state machine.) As is discussed in greater detail in the aforementioned U.S. Pat. No. 6,662,338, each bit is deterministically calculated by analyzing previous and subsequent bits with respect to the bit being analyzed. Such a calculation determines a “cost” (in terms of computational iterations) of determining a logic value of a read bit of data. Thus, the Viterbi algorithm continuously calculates the cost of determining a logical state, and the value of each bit is determined by choosing the least costly path to that bit. The least costly path is the most likely interpretation of the actual bit. It is this most likely determination that is sent to the output of the Viterbi detector 122.


Assuming a noiseless read signal and binary stored data, the read circuit 120 may actually generate digitized read-signal samples having no errors. In such a noiseless environment, the cost of the correct path (correct bit sequence) would be zero, thereby indicating that the likelihood of the correct interpretation is at a maximum. However, as noise and ISI are introduced, different bits may be read incorrectly. In a hard-decision ECC decoding method, only the least costly path (i.e., the most likely interpretation) is used for determining the output for each bit.


Further, erasures may still be present and identified by the Viterbi detector 122. The Viterbi detector 122 is able to generate erasure flags to identify bits of data determined to be an erasure, e.g., bits for which it is equally likely that the value is a “1” or a “0”. Such erasure identification is described in detail in U.S. patent application Ser. No. ______ entitled “Tiziano Erasure Identification”, which is herein incorporated by reference. For the purposes herein, the erasure information may be passed to the ECC block 130 for use in the ECC decoding method described below with respect to FIG. 2. After errors are identified and recovered, a corrected codeword may be passed to the buffer manager 150 for additional digital signal processing. Aspects of the ECC block 130 and the various ECC methods are described below with respect to FIG. 2.



FIG. 2 is a block diagram of an ECC block 130 according to an embodiment of the subject matter disclosed herein. As before with respect to FIG. 1, only aspects of a read channel and read operation are discussed with respect to FIG. 2. The ECC block 130 may be a functional computing block having program modules for accomplishing the computations and tasks described. Alternatively, aspects of the ECC block 130 may be hardware-based logic such that computations and tasks are accomplished without the use of a processor executing instructions. Further, the various components described herein may comprise any combination of hardware- or software-based modules disposed on one or more integrated circuits.


The ECC block 130 receives data from the read circuit (120 of FIG. 1) in three different manners at a code word interface 210. Depending on the nature of the received data, the code word interface 210 includes different modules for manipulating the data into a format suitable for an error-code-checking schema. The actual data sequence as interpreted by the hard decisions of the Viterbi detector 122 may be sent in the form of code word symbols 201. Soft data, i.e., the reliability information of the received data, is used to identify the most likely error events in the detected data sequence. Soft-decision data may be sent to the code word interface 210 in the form of flip-bit identifications 203 and erasure identifications 202. The flip-bit identifications 203 may be used in a soft-decision ECC method as discussed in the related U.S. patent application Ser. No. ______, entitled “LOW-COMPLEXITY SOFT-DECISION DECODING OF ERROR CORRECTION CODES”, which is herein incorporated by reference. The erasure identifications 202 may be passed through a soft-error capture block where the location identifications are codified as an erasure-locator polynomial Γ(x), which may be a first input to a first Berlekamp-Massey block 220.


The code word symbols 201 may be an RS-coded series of bits comprising ten-bit symbols of data within a disk sector as read from the disk. Errors may be checked and corrected on a sector-by-sector basis when reading data from the disk (106 of FIG. 1). In one embodiment, sectors comprise 512 or 1024 bytes of data. As such, each symbol in the code word symbols 201 may be coded into ten-bit symbols. Different embodiments may use different lengths for symbols, such as an eight-bit length in one other embodiment. Any bit-length for symbols may be used, but for the purposes of this example, a ten-bit symbol is used. Therefore, if one reads one data sector (which may be 512 bytes, for this example), the data sector may have about 410 ten-bit symbols representing the user data in the data sector.


Further, the read channel 109 (FIG. 1) may also have an ECC encoder (not shown) that adds ECC parity symbols to provide error-correction capability. In an embodiment, the RS encoding used in HDD systems may add a number of parity symbols that is twice the error-correction capability T (i.e., the number of symbol errors the ECC block 130 may handle when using only a hard-decision ECC decoding method). For example, if the error-correction capability T is 20, corresponding to a correction capability of 20 symbols, there will be 40 ECC parity (also called ECC redundancy) symbols. These parity symbols are typically written immediately after the user data in the sector of data. Thus, for T=20 and 410 user-data symbols, a total of 450 symbols will be read from the disk for each sector.


After the Viterbi detector sends its hard decisions on the individual bits read from the disk to the ECC block 130, this 450-symbol grouping of bits may then be used to generate a syndrome at the ECC syndrome generator 212. As syndromes representing a sector of data are computed, syndrome polynomials S(x) are passed to the Berlekamp-Massey algorithm block 220 as a second input.


With the two polynomial inputs (syndrome polynomial S(x) and erasure-locator polynomial Γ(x)), the Berlekamp-Massey algorithm block 220 generates an error-locator polynomial σ(x), which is a polynomial having roots at the error locations (both noise-induced errors and erasures) of the received inputs. As is discussed in greater detail below, there are a number of algorithms that may be used to generate the error-locator polynomial σ(x). In the next paragraphs, however, the remainder of the overall method associated with FIG. 2 is discussed.


After the error-locator polynomial σ(x) is generated, the first 250a of five Chien-Forney search engines may be used to find the roots and error magnitudes by brute force in a hard-ECC decoding method. Depending on the actual number of errors identified, i.e., the degree of the error-locator polynomial σ(x), the first Chien-Forney search engine may or may not be able to correct the specific errors within the sector. If the number of errors does not exceed the error-correction capability T of the ECC block 130, then the Chien-Forney search engine 250a will be able to find and correct all the errors. If the number of errors is greater than the error-correction capability T, then this Chien-Forney search engine 250a will not be able to find the roots, and will indicate a decoding failure.


As such, if the degree of the initial error-locator polynomial σ(x) that was generated by the first Berlekamp-Massey algorithm block 220 is less than a threshold (e.g., 20 errors) for the Chien-Forney search engine 250a, any remaining soft-ECC decoding in a soft-ECC block 230 may not be necessary. However, if the number of erroneous symbols exceeds the error-correction capability T, the hard-decision ECC decoding method fails. Thus, in addition to the hard-decision ECC decoding in this path as just discussed, a soft-decision ECC decoding path may also be employed for attempting to correct symbol errors by methods beyond the hard-decision ECC decoding method.


Briefly, in a soft-decision ECC method, additional reliability information from the Viterbi detector about the detected bits may also be used in the ECC decoding process. The soft-decision ECC method is based on a concept referred to as reliability. Reliability may be a mathematical determination based on a logarithmic likelihood ratio (LLR) as discussed in the aforementioned copending U.S. patent application Ser. No. ______ entitled “LOW-COMPLEXITY SOFT-DECISION DECODING OF ERROR CORRECTION CODES.”


The reliability information from the Viterbi detector 122 may be used to identify and list the least-reliable bit locations that correspond to the most likely error events and most likely locations for erasures. This list may be sent to the code word interface 210 as a stream of flip-bit data 203 from the path-history exchange block 121 (FIG. 1) of the Viterbi detector 122. Here, a soft-error capture block 211 passes this information to a soft-ECC block 230 that may be part of the overall ECC block 130. The soft-ECC block 230 may then use this list to iteratively attempt to correct errors using the initial (hard-decision) error-locator polynomial of the first Berlekamp-Massey algorithm block 220. As the soft-ECC block identifies potential error-correction polynomials, additional hard-decision ECC decoding may be attempted using a second Berlekamp-Massey algorithm block 222 and additional Chien-Forney search engines 250b-250e. If any one method succeeds in correcting all sector errors, a correction vector 260 is generated and all other processes are halted. The second Berlekamp-Massey algorithm block 222 may also use erasure information in determining its error-locator polynomial.


Turning back to the algorithms of the Berlekamp-Massey algorithm block 220 (and 222), erasure information may be identified as a series of positions denoted by j_1, j_2, …, j_s. In an embodiment, up to 32 erasures may be handled in a single sector. Further, the symbols identified as erasures may be passed along as part of the codeword to the ECC block without any additional manipulation. That is, some methods may zero out the symbols known to be incorrect; however, in this embodiment, simply the identification of a symbol as an erasure is sufficient to recover the erasure through the described ECC method. As such, the syndromes, computed over the received symbols r_0, r_1, …, r_(n−1) (which include both errors and erasures), may be denoted as:






S_j = r_0 + r_1·α^j + r_2·α^(2j) + … + r_(n−1)·α^((n−1)j)


Then an erasure-locator polynomial may be generated as:





Γ(x) = Π (1 + α^(j_m)·x) for m = 1 to s
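To make the construction concrete, the short Python sketch below builds Γ(x) from a list of flagged erasure positions. It is a minimal illustration, not the ECC block's implementation; it assumes the small field GF(2^3) (primitive polynomial x^3 + x + 1, α = 2) used in the worked example later in this description, stores polynomial coefficients lowest order first, and uses illustrative function names.

EXP = [1, 2, 4, 3, 6, 7, 5]              # EXP[i] = alpha^i in GF(8); addition is XOR
LOG = {v: i for i, v in enumerate(EXP)}

def gf_mul(a, b):
    # multiply two GF(8) elements via log/antilog tables
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def erasure_locator(positions):
    # Gamma(x) = product over flagged positions j_m of (1 + alpha^(j_m) * x)
    gamma = [1]
    for j in positions:
        aj = EXP[j % 7]
        nxt = gamma + [0]
        for k, c in enumerate(gamma):    # multiply gamma by (1 + aj*x)
            nxt[k + 1] ^= gf_mul(c, aj)
        gamma = nxt
    return gamma

print(erasure_locator([1, 5]))           # [1, 5, 5], i.e., 1 + a^6*x + a^6*x^2

With erasures flagged at positions 1 and 5, this reproduces the polynomial Γ(x) = 1 + α^6x + α^6x^2 derived in the worked example below.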


Several mathematical methods may be used to determine an error-locator polynomial for a given set of syndromes. Five such methods are 1) a Forney algorithm approach, 2) a Blahut algorithm approach, 3) an inversionless Forney algorithm approach, 4) an inversionless Blahut algorithm approach, and 5) a dual-line inversionless Blahut algorithm approach. These are discussed and shown in the following paragraphs.


A first method involves a Forney algorithm approach, wherein one may fold the set of 2t syndromes S_1, S_2, …, S_(2t) into 2t−s modified syndromes Ξ_(s+1), Ξ_(s+2), …, Ξ_(2t), as modified by the erasure-locator polynomial Γ(x). The resulting polynomial is a modified error-locator polynomial Λ(x), which may be used to determine an errata-evaluator polynomial Ω(x) indicative of all errors and erasures in a sector.


Then the key equation:





[1 + S(x)]·Γ(x)·Λ(x) = Ω(x) mod x^(2t+1)


may be viewed in the form:





[1 + Ξ(x)]·Λ(x) = Ω(x) mod x^(2t+1)


Therefore, the modified syndromes may be:





1 + Ξ_1x + … + Ξ_(2t)x^(2t) = (1 + S_1x + … + S_(2t)x^(2t))·(1 + Γ_1x + … + Γ_sx^s) mod x^(2t+1)


and the last 2t−s modified syndromes may be fed to the Berlekamp-Massey algorithm block 220 to determine a modified error-locator polynomial Λ(x). However, the generation of such a modified error-locator polynomial Λ(x) requires the pre-calculation of the modified syndromes, which entails additional hardware and computational effort.
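For illustration, the folding itself is just a truncated polynomial product. The following minimal Python sketch computes [1 + Ξ(x)] = [1 + S(x)]·Γ(x) mod x^(2t+1) using the GF(2^3) values of the worked example later in this description (t = 2, s = 2); the helper names are illustrative, and GF(8) addition is XOR.

EXP = [1, 2, 4, 3, 6, 7, 5]
LOG = {v: i for i, v in enumerate(EXP)}

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def poly_mul_mod(a, b, n):
    # product of two GF(8) polynomials (lowest order first), truncated mod x^n
    out = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if ai and bj and i + j < n:
                out[i + j] ^= gf_mul(ai, bj)
    return out

one_plus_S = [1, 4, 3, 4, 5]     # 1 + a^2*x + a^3*x^2 + a^2*x^3 + a^6*x^4
gamma      = [1, 5, 5]           # 1 + a^6*x + a^6*x^2
print(poly_mul_mod(one_plus_S, gamma, 5))
# [1, 1, 4, 2, 3], i.e., 1 + x + a^2*x^2 + a*x^3 + a^3*x^4

The last 2t−s coefficients (here Ξ_3 = α and Ξ_4 = α^3) are the modified syndromes that would be fed to the Berlekamp-Massey algorithm block.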


In the second approach, the Blahut algorithm does not generate modified syndromes; rather, the 2t regular syndromes are fed directly to the Berlekamp-Massey algorithm block 220. Thus, the error-locator polynomial σ(x) may be defined as:





σ(x) = Λ(x)·Γ(x)


This approach reduces decoder latency in that the polynomial multiplications before and after the Berlekamp-Massey algorithm are avoided. Therefore, the Blahut algorithm may be summarized as follows:

















Λ^(s)(x) = Γ(x),  T^(s)(x) = Γ(x),  L_s = s
for (i = s+1; i <= 2t; i++)
{ Δ_i = Σ Λ_k^(i−1)·S_(i−k) from k = 0 to L_(i−1)
  Λ^(i)(x) = Λ^(i−1)(x) + Δ_i·x·T^(i−1)(x)
  if Δ_i ≠ 0 and 2L_(i−1) ≤ i−1+s, then
    L_i = i − L_(i−1) + s and T^(i)(x) = Δ_i^(−1)·Λ^(i−1)(x)
  else
    L_i = L_(i−1) and T^(i)(x) = x·T^(i−1)(x) }
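One possible software rendering of this recursion is sketched below in Python. It is a sketch under stated assumptions rather than the implementation of block 220: the field is the GF(2^3) of the worked example later in this description, S[j] holds syndrome S_j, polynomials are stored lowest order first, and names such as blahut_bm are illustrative.

EXP = [1, 2, 4, 3, 6, 7, 5]              # EXP[i] = alpha^i in GF(8)
LOG = {v: i for i, v in enumerate(EXP)}

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def gf_inv(a):
    return EXP[-LOG[a] % 7]

def poly_add(a, b):                      # GF(2^m) addition is XOR
    n = max(len(a), len(b))
    return [x ^ y for x, y in zip(a + [0]*(n - len(a)), b + [0]*(n - len(b)))]

def blahut_bm(S, gamma, t, s):
    # Lambda and T both start from Gamma(x); L starts at s
    lam, T, L = list(gamma), list(gamma), s
    for i in range(s + 1, 2*t + 1):
        delta = 0                        # discrepancy Delta_i
        for k in range(L + 1):
            delta ^= gf_mul(lam[k], S[i - k])
        new = poly_add(lam, [gf_mul(delta, c) for c in [0] + T])
        if delta != 0 and 2*L <= i - 1 + s:
            L, T = i - L + s, [gf_mul(gf_inv(delta), c) for c in lam]
        else:
            T = [0] + T                  # T^(i) = x * T^(i-1)
        lam = new
    return lam

# Erasures at positions 1 and 5; S_1..S_4 = a^2, a^3, a^2, a^6 (a = alpha)
print(blahut_bm([None, 4, 3, 4, 5], [1, 5, 5], t=2, s=2))
# [1, 1, 7, 2], i.e., 1 + x + a^5*x^2 + a*x^3

Run on the worked example below, this returns the errata locator 1 + x + α^5x^2 + αx^3 directly, with no separate syndrome-folding step.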










Another approach is to use an inversionless Forney algorithm as follows:

















Λ^(0)(x) = 1,  B^(0)(x) = 1,  L_0 = 0,  ε^(0) = 1
for (i = 1; i <= 2t−s; i++)
{ Δ_i = Σ Λ_k^(i−1)·Ξ_(i−k+s) from k = 0 to L_(i−1)
  Λ^(i)(x) = ε^(i−1)·Λ^(i−1)(x) + Δ_i·x·B^(i−1)(x)
  if Δ_i ≠ 0 and 2L_(i−1) ≤ i−1, then
    L_i = i − L_(i−1) and ε^(i) = Δ_i and B^(i)(x) = Λ^(i−1)(x)
  else
    L_i = L_(i−1) and ε^(i) = ε^(i−1) and B^(i)(x) = x·B^(i−1)(x) }
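A minimal Python sketch of this inversionless recursion follows, again assuming the GF(2^3) field of the worked example later in this description (primitive polynomial x^3 + x + 1, α = 2), with polynomials stored lowest order first; the name inversionless_forney_bm and the argument layout are illustrative. Note that it consumes the modified syndromes Ξ rather than the raw syndromes.

EXP = [1, 2, 4, 3, 6, 7, 5]
LOG = {v: i for i, v in enumerate(EXP)}

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def poly_add(a, b):
    n = max(len(a), len(b))
    return [x ^ y for x, y in zip(a + [0]*(n - len(a)), b + [0]*(n - len(b)))]

def inversionless_forney_bm(Xi, t, s):
    # Xi[j] = modified syndrome for j = s+1..2t; no GF inversion is needed
    lam, B, L, eps = [1], [1], 0, 1
    for i in range(1, 2*t - s + 1):
        delta = 0
        for k in range(L + 1):
            delta ^= gf_mul(lam[k], Xi[i - k + s])
        new = poly_add([gf_mul(eps, c) for c in lam],
                       [gf_mul(delta, c) for c in [0] + B])
        if delta != 0 and 2*L <= i - 1:  # length change
            L, eps, B = i - L, delta, lam
        else:
            B = [0] + B                  # B^(i) = x * B^(i-1)
        lam = new
    return lam

# Modified syndromes Xi_3 = a, Xi_4 = a^3 from the worked example below
print(inversionless_forney_bm([None]*3 + [2, 3], t=2, s=2))
# [2, 3], i.e., a + a^3*x (the error locator scaled by eps)

The result is the error locator scaled by ε; as noted in the worked examples below, the scalar drops out when the errata magnitudes are evaluated.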










Further yet, another approach is to use an inversionless Blahut algorithm as follows:

















Λ^(s)(x) = Γ(x),  B^(s)(x) = Γ(x),  L_s = s,  ε^(s) = 1
for (i = s+1; i <= 2t; i++)
{ Δ_i = Σ Λ_k^(i−1)·S_(i−k) from k = 0 to L_(i−1)
  Λ^(i)(x) = ε^(i−1)·Λ^(i−1)(x) + Δ_i·x·B^(i−1)(x)
  if Δ_i ≠ 0 and 2L_(i−1) ≤ i−1+s, then
    L_i = i − L_(i−1) + s and ε^(i) = Δ_i and B^(i)(x) = Λ^(i−1)(x)
  else
    L_i = L_(i−1) and ε^(i) = ε^(i−1) and B^(i)(x) = x·B^(i−1)(x) }
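The corresponding Python sketch differs from the Blahut sketch above only in the ε bookkeeping that removes the inversion; the same GF(2^3) assumptions and illustrative naming apply.

EXP = [1, 2, 4, 3, 6, 7, 5]
LOG = {v: i for i, v in enumerate(EXP)}

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def poly_add(a, b):
    n = max(len(a), len(b))
    return [x ^ y for x, y in zip(a + [0]*(n - len(a)), b + [0]*(n - len(b)))]

def inversionless_blahut_bm(S, gamma, t, s):
    # Lambda^(s) = B^(s) = Gamma(x), L_s = s, eps^(s) = 1
    lam, B, L, eps = list(gamma), list(gamma), s, 1
    for i in range(s + 1, 2*t + 1):
        delta = 0
        for k in range(L + 1):
            delta ^= gf_mul(lam[k], S[i - k])
        new = poly_add([gf_mul(eps, c) for c in lam],
                       [gf_mul(delta, c) for c in [0] + B])
        if delta != 0 and 2*L <= i - 1 + s:
            L, eps, B = i - L + s, delta, lam
        else:
            B = [0] + B
        lam = new
    return lam

print(inversionless_blahut_bm([None, 4, 3, 4, 5], [1, 5, 5], t=2, s=2))
# [2, 2, 5, 4], i.e., a + a*x + a^6*x^2 + a^2*x^3

The result α + αx + α^6x^2 + α^2x^3 is ψ(x) scaled by ε = α, matching the worked example below.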










Lastly, in these example approaches, a dual-line inversionless Blahut algorithm may be realized. The algorithm may be initialized by the following partial metrics:















C_k^(s) = Σ Γ_i·S_(s+1+k−i) from i = 0 to s,  for k = 0, …, 2t−s−1

C_k^(s) = Γ_(k−(2t−s)),  for k = 2t−s, …, 2t

D_k^(s) = C_(k−1)^(s),  for k = 1, …, 2t+1 and k ≠ 2t−s

D_k^(s) = 0,  for k = 2t−s










Then the dual-line inversionless Blahut algorithm is summarized as:

















L_s = s,  ε^(s) = 1
for (i = s+1; i <= 2t; i++)
{ Δ_i = C_0^(i−1)
  for (k = 0; k <= 2t−1; k++)
    C_k^(i) = ε^(i−1)·C_(k+1)^(i−1) + Δ_i·D_(k+1)^(i−1)
  C_(2t)^(i) = Δ_i·D_(2t+1)^(i−1)
  if Δ_i ≠ 0 and 2L_(i−1) ≤ i−1+s, then
    L_i = i − L_(i−1) + s and ε^(i) = Δ_i
    for (k = 1; k <= 2t; k++)
      D_k^(i) = (k == 2t−i) ? 0 : C_k^(i−1)
    D_(2t+1)^(i) = 0
  else
    L_i = L_(i−1) and ε^(i) = ε^(i−1)
    for (k = 1; k <= 2t; k++)
      D_k^(i) = (k == 2t−i) ? 0 : D_k^(i−1)
    D_(2t+1)^(i) = 0 }
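The dual-line formulation trades the polynomial bookkeeping for two registers C and D that are updated in lockstep using only shifts, GF multiplies, and XORs. The Python sketch below is a minimal model of those updates, again assuming the GF(2^3) example field and illustrative names; it is not a hardware description.

EXP = [1, 2, 4, 3, 6, 7, 5]
LOG = {v: i for i, v in enumerate(EXP)}

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def dual_line_bm(S, gamma, t, s):
    # C and D are registers of length 2t+2, initialized per the partial metrics
    C = [0] * (2*t + 2)
    for k in range(2*t - s):                 # convolution part of C
        for i, g in enumerate(gamma):
            C[k] ^= gf_mul(g, S[s + 1 + k - i])
    for k in range(2*t - s, 2*t + 1):        # Gamma part of C
        C[k] = gamma[k - (2*t - s)]
    D = [0] + C[:-1]                         # D_k = C_(k-1) ...
    D[2*t - s] = 0                           # ... except D_(2t-s) = 0
    L, eps = s, 1
    for i in range(s + 1, 2*t + 1):
        delta = C[0]                         # Delta_i = C_0^(i-1)
        newC = [gf_mul(eps, C[k+1]) ^ gf_mul(delta, D[k+1]) for k in range(2*t)]
        newC += [gf_mul(delta, D[2*t + 1]), 0]
        if delta != 0 and 2*L <= i - 1 + s:  # length change
            L, eps = i - L + s, delta
            D = [0] + [0 if k == 2*t - i else C[k] for k in range(1, 2*t + 1)] + [0]
        else:
            D = [0] + [0 if k == 2*t - i else D[k] for k in range(1, 2*t + 1)] + [0]
        C = newC
    return C[:2*t + 1]                       # psi(x) = C^(2t)

print(dual_line_bm([None, 4, 3, 4, 5], [1, 5, 5], t=2, s=2))
# [2, 2, 5, 4, 0], i.e., a + a*x + a^6*x^2 + a^2*x^3

The returned register contents match the dual-line trace in the worked example below.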










To further illustrate the differences and advantages of the various algorithms discussed above, a simple example is shown for each of the algorithms. In this example, the codeword is a Reed-Solomon code over the Galois field GF(2^3) with a generator polynomial g(x):






g(x) = Π (x + α^i) for i = 1 to 4 = α^3 + αx + x^2 + α^3x^3 + x^4


At the front end 110 (FIG. 1) data may initially be encoded as a message polynomial m(x):






m(x) = α^5 + α^4x + x^2


into the systematic codeword c(x) that describes the data as it should be read from the medium:






c(x) = α^4 + x^3 + α^5x^4 + α^4x^5 + x^6


However, the read channel may corrupt the codeword c(x) with the error polynomial e(x):






e(x) = α^2x + α^6x^2 + αx^5


into the received noisy codeword r(x):






r(x) = α^4 + α^2x + α^6x^2 + x^3 + α^5x^4 + α^2x^5 + x^6


In this example, the Viterbi detector 122 may tag two symbols (e.g., symbols one and five) as being erasures. Assuming the tagging is correct and these symbols are actually erasures, the erasure-locator polynomial Γ(x) is:





Γ(x) = (1 + α^1x)(1 + α^5x) = 1 + α^6x + α^6x^2


The syndromes S_1-S_4 are also computed for the received codeword (having errors and erasures):






S_1 = r(α^1) = α^2,  S_2 = r(α^2) = α^3,  S_3 = r(α^3) = α^2,  S_4 = r(α^4) = α^6


Then, the syndrome polynomial S(x) is:





1 + S(x) = 1 + α^2x + α^3x^2 + α^2x^3 + α^6x^4
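These syndrome values may be reproduced with a few lines of Python. The sketch below assumes the GF(2^3) tables used in the earlier sketches (α = 2, addition is XOR) and simply evaluates r(x) at α^j; the names are illustrative.

EXP = [1, 2, 4, 3, 6, 7, 5]
LOG = {v: i for i, v in enumerate(EXP)}

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

# r(x) = a^4 + a^2*x + a^6*x^2 + x^3 + a^5*x^4 + a^2*x^5 + x^6, lowest order first
r = [6, 4, 5, 1, 7, 4, 1]

def syndrome(r, j):
    # S_j = r(alpha^j) = sum over k of r_k * alpha^(j*k)
    acc = 0
    for k, rk in enumerate(r):
        acc ^= gf_mul(rk, EXP[(j * k) % 7])
    return acc

print([syndrome(r, j) for j in (1, 2, 3, 4)])
# [4, 3, 4, 5], i.e., S_1 = a^2, S_2 = a^3, S_3 = a^2, S_4 = a^6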


Working through each algorithm described above, the first approach uses the Forney algorithm. A modified syndrome polynomial Ξ(x) may be computed as:





1 + Ξ(x) = [1 + S(x)]·Γ(x) = 1 + x + α^2x^2 + αx^3 + α^3x^4


Then implement the Berlekamp-Massey algorithm on the last 2t-s modified syndromes with the initialization of:

















Λ^(0)(x) = 1,  T^(0)(x) = 1,  L_0 = 0

for i = 1:
Δ_1 = Ξ_3 = α
Λ^(1)(x) = Λ^(0)(x) + Δ_1·x·T^(0)(x) = 1 + αx
2L_0 < i, so length change: L_1 = 1,  T^(1)(x) = Δ_1^(−1)·Λ^(0)(x) = α^6

for i = 2:
Δ_2 = Ξ_4 + Λ_1^(1)·Ξ_3 = α^5
Λ^(2)(x) = Λ^(1)(x) + Δ_2·x·T^(1)(x) = 1 + α^2x
2L_1 ≥ i, so no length change: L_2 = 1,  T^(2)(x) = x·T^(1)(x) = α^6x










It can be seen that the Berlekamp-Massey algorithm has correctly identified that an error is in position 2. Given the error-locator polynomial Λ(x) and the erasure-locator polynomial Γ(x), the errata-locator polynomial ψ(x) may be computed:





ψ(x) = Γ(x)·Λ(x) = (1 + α^6x + α^6x^2)(1 + α^2x) = 1 + x + α^5x^2 + αx^3


The formal derivative ψ′(x) of the errata locator polynomial is:





ψ′(x) = 1 + αx^2


The errata evaluator polynomial Ω(x) may then be determined:





Ω(x) = [1 + S(x)]·ψ(x) mod x^(2t+1) = 1 + α^6x + α^2x^3


And the errata magnitudes may be computed:







e_1 = α^1·Ω(α^(−1))/ψ′(α^(−1)) = α^2

e_2 = α^2·Ω(α^(−2))/ψ′(α^(−2)) = α^6

e_5 = α^5·Ω(α^(−5))/ψ′(α^(−5)) = α

Finally the errata magnitudes are added to the received codeword (with errors and erasures) to recover the (estimate of the) original codeword ĉ(x):











ĉ(x) = r(x) + e(x)
  = (α^4 + α^2x + α^6x^2 + x^3 + α^5x^4 + α^2x^5 + x^6) + (α^2x + α^6x^2 + αx^5)
  = α^4 + x^3 + α^5x^4 + α^4x^5 + x^6
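The magnitude evaluation and the final correction may likewise be sketched in Python. The block below assumes the GF(2^3) field of this example and the polynomials just derived (ψ′(x) and Ω(x)); the variable names are illustrative, and the formula is the e_j expression shown above.

EXP = [1, 2, 4, 3, 6, 7, 5]
LOG = {v: i for i, v in enumerate(EXP)}

def gf_mul(a, b):
    return 0 if 0 in (a, b) else EXP[(LOG[a] + LOG[b]) % 7]

def gf_inv(a):
    return EXP[-LOG[a] % 7]

def poly_eval(p, x):
    # evaluate a GF(8) polynomial (lowest order first) at x
    acc, xp = 0, 1
    for c in p:
        acc ^= gf_mul(c, xp)
        xp = gf_mul(xp, x)
    return acc

psi_der = [1, 0, 2]          # psi'(x)  = 1 + a*x^2 (formal derivative)
omega   = [1, 5, 0, 4]       # Omega(x) = 1 + a^6*x + a^2*x^3

r = [6, 4, 5, 1, 7, 4, 1]    # received codeword r(x)
for j in (1, 2, 5):          # errata positions (flagged or located)
    x_inv = EXP[-j % 7]      # alpha^(-j)
    e_j = gf_mul(EXP[j % 7],
                 gf_mul(poly_eval(omega, x_inv), gf_inv(poly_eval(psi_der, x_inv))))
    r[j] ^= e_j              # add the magnitude back to correct the symbol

print(r)                     # [6, 0, 0, 1, 7, 6, 1] = a^4 + x^3 + a^5*x^4 + a^4*x^5 + x^6

The corrected coefficients reproduce the recovered codeword ĉ(x) above.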









A second example uses the Blahut algorithm. Again, the erasure-locator polynomial Γ(x) and the syndrome polynomial S(x) are used to begin:





Γ(x) = (1 + α^1x)(1 + α^5x) = 1 + α^6x + α^6x^2





1 + S(x) = 1 + α^2x + α^3x^2 + α^2x^3 + α^6x^4


Then implement the Berlekamp-Massey algorithm on the 2t regular syndromes with the initialization of:

















Λ^(2)(x) = Γ(x) = 1 + α^6x + α^6x^2,  T^(2)(x) = Γ(x) = 1 + α^6x + α^6x^2,  L_2 = s = 2

for i = 3:
Δ_3 = S_3 + Λ_1^(2)S_2 + Λ_2^(2)S_1 = α
L_2 < i − L_2 + s, so length change: L_3 = i − L_2 + s = 3
Λ^(3)(x) = Λ^(2)(x) + Δ_3·x·T^(2)(x) = 1 + α^5x + α^2x^2 + x^3
T^(3)(x) = Δ_3^(−1)·Λ^(2)(x) = α^6 + α^5x + α^5x^2

for i = 4:
Δ_4 = S_4 + Λ_1^(3)S_3 + Λ_2^(3)S_2 + Λ_3^(3)S_1 = α^5
L_3 ≥ i − L_3 + s, so no length change: L_4 = L_3 = 3
Λ^(4)(x) = Λ^(3)(x) + Δ_4·x·T^(3)(x) = 1 + x + α^5x^2 + αx^3
T^(4)(x) = x·T^(3)(x) = α^6x + α^5x^2 + α^5x^3










The errata locator polynomial is ψ(x):





ψ(x) = Λ^(4)(x) = 1 + x + α^5x^2 + αx^3


The errata positions correspond to the roots:





ψ(α^(−1)) = ψ(α^(−2)) = ψ(α^(−5)) = 0


Similar to the Forney example, the errata magnitudes are evaluated and the original codeword is recovered.







e_1 = α^1·Ω(α^(−1))/ψ′(α^(−1)) = α^2

e_2 = α^2·Ω(α^(−2))/ψ′(α^(−2)) = α^6

e_5 = α^5·Ω(α^(−5))/ψ′(α^(−5)) = α

ĉ(x) = r(x) + e(x)
  = (α^4 + α^2x + α^6x^2 + x^3 + α^5x^4 + α^2x^5 + x^6) + (α^2x + α^6x^2 + αx^5)
  = α^4 + x^3 + α^5x^4 + α^4x^5 + x^6









The third approach uses an inversionless Forney algorithm, as is detailed within the same example as follows. Here, the modified syndrome polynomial Ξ(x) is initially used:





1 + Ξ(x) = [1 + S(x)]·Γ(x) = 1 + x + α^2x^2 + αx^3 + α^3x^4


Then implement the Berlekamp-Massey algorithm on the last 2t-s modified syndromes with the initialization of:

















Λ^(0)(x) = 1,  B^(0)(x) = 1,  L_0 = 0,  ε^(0) = 1

for i = 1:
Δ_1 = Ξ_3 = α
L_0 < i − L_0, so length change:
  L_1 = i − L_0 = 1
  ε^(1) = Δ_1 = α
Λ^(1)(x) = ε^(0)Λ^(0)(x) + Δ_1·x·B^(0)(x) = 1 + αx
B^(1)(x) = Λ^(0)(x) = 1

for i = 2:
Δ_2 = Ξ_4 + Λ_1^(1)·Ξ_3 = α^5
L_1 ≥ i − L_1, so no length change:
  L_2 = L_1 = 1
  ε^(2) = ε^(1) = α
Λ^(2)(x) = ε^(1)Λ^(1)(x) + Δ_2·x·B^(1)(x) = α + α^3x
B^(2)(x) = x·B^(1)(x) = x










Although this polynomial is not monic, the roots are the same as the earlier Forney example. One may allow for an arbitrary scalar factor in the error locator, as it will drop out later when evaluating the errata magnitudes. The errata locator polynomial ψ(x) is:





ψ(x) = Λ(x)·Γ(x) = (α + α^3x)(1 + α^6x + α^6x^2) = α + αx + α^6x^2 + α^2x^3


The errata positions correspond to the roots:





ψ(α^(−1)) = ψ(α^(−2)) = ψ(α^(−5)) = 0


Then compute the errata evaluator polynomial Ω(x):





Ω(x) = [1 + Ξ(x)]·Λ(x) mod x^(2t+1) = α + x + α^3x^3


the errata magnitudes:







e_1 = α^1·Ω(α^(−1))/ψ′(α^(−1)) = α^2

e_2 = α^2·Ω(α^(−2))/ψ′(α^(−2)) = α^6

e_5 = α^5·Ω(α^(−5))/ψ′(α^(−5)) = α









and then recover the original codeword:






ĉ(x) = r(x) + e(x) = α^4 + x^3 + α^5x^4 + α^4x^5 + x^6


The fourth example approach involves an inversionless Blahut algorithm, where the same erasure-locator polynomial Γ(x) and syndrome polynomial S(x) are used to begin:





Γ(x) = (1 + α^1x)(1 + α^5x) = 1 + α^6x + α^6x^2





1 + S(x) = 1 + α^2x + α^3x^2 + α^2x^3 + α^6x^4


Then implement the Berlekamp-Massey algorithm on the 2t regular syndromes with the initialization of:

















Λ^(2)(x) = Γ(x) = 1 + α^6x + α^6x^2,  B^(2)(x) = Γ(x) = 1 + α^6x + α^6x^2
L_2 = 2,  ε^(2) = 1

for i = 3:
Δ_3 = S_3 + Λ_1^(2)S_2 + Λ_2^(2)S_1 = α
L_2 < i − L_2 + s, so length change:
  L_3 = i − L_2 + s = 3
  ε^(3) = Δ_3 = α
Λ^(3)(x) = ε^(2)Λ^(2)(x) + Δ_3·x·B^(2)(x) = 1 + α^5x + α^2x^2 + x^3
B^(3)(x) = Λ^(2)(x) = 1 + α^6x + α^6x^2

for i = 4:
Δ_4 = S_4 + Λ_1^(3)S_3 + Λ_2^(3)S_2 + Λ_3^(3)S_1 = α^5
L_3 ≥ i − L_3 + s, so no length change:
  L_4 = L_3 = 3
  ε^(4) = ε^(3) = α
Λ^(4)(x) = ε^(3)Λ^(3)(x) + Δ_4·x·B^(3)(x) = α + αx + α^6x^2 + α^2x^3
B^(4)(x) = x·B^(3)(x) = x + α^6x^2 + α^6x^3










The errata locator polynomial ψ(x) is:





ψ(x) = Λ^(4)(x) = α + αx + α^6x^2 + α^2x^3


Although this polynomial is not monic, the roots are the same as the earlier Blahut example. We can allow an arbitrary scalar factor in the errata locator polynomial ψ(x), as it will drop out later when we evaluate the errata magnitudes. Thus, the errata evaluator polynomial Ω(x) is computed:





Ω(x) = [1 + S(x)]·ψ(x) mod x^(2t+1) = α + x + α^3x^3


yielding the errata magnitudes:







e_1 = α^1·Ω(α^(−1))/ψ′(α^(−1)) = α^2

e_2 = α^2·Ω(α^(−2))/ψ′(α^(−2)) = α^6

e_5 = α^5·Ω(α^(−5))/ψ′(α^(−5)) = α





Similar to the previous examples one may correctly recover the original codeword:






ĉ(x) = r(x) + e(x) = α^4 + x^3 + α^5x^4 + α^4x^5 + x^6


Finally, the fifth approach corresponds to using an inversionless Blahut dual-line algorithm. The erasure-locator polynomial Γ(x) and syndrome polynomial S(x) are as before:





Γ(x) = (1 + α^1x)(1 + α^5x) = 1 + α^6x + α^6x^2





1 + S(x) = 1 + α^2x + α^3x^2 + α^2x^3 + α^6x^4


Then implement the Berlekamp-Massey algorithm on the 2t regular syndromes with the initialization of:

















L_2 = 2,  ε^(2) = 1

C_0^(2) = Γ_0S_3 + Γ_1S_2 + Γ_2S_1 = α
C_1^(2) = Γ_0S_4 + Γ_1S_3 + Γ_2S_2 = α^3
C_2^(2) = Γ_0 = 1
C_3^(2) = Γ_1 = α^6
C_4^(2) = Γ_2 = α^6
D_1^(2) = C_0^(2) = α
D_2^(2) = 0
D_3^(2) = Γ_0 = 1
D_4^(2) = Γ_1 = α^6
D_5^(2) = Γ_2 = α^6

for i = 3:
Δ_3 = C_0^(2) = α
L_2 < i − L_2 + s, so length change: L_3 = i − L_2 + s = 3,  ε^(3) = Δ_3 = α
C_0^(3) = ε^(2)C_1^(2) + Δ_3D_1^(2) = α^5
C_1^(3) = ε^(2)C_2^(2) + Δ_3D_2^(2) = 1
C_2^(3) = ε^(2)C_3^(2) + Δ_3D_3^(2) = α^5
C_3^(3) = ε^(2)C_4^(2) + Δ_3D_4^(2) = α^2
C_4^(3) = Δ_3D_5^(2) = 1
D_1^(3) = 0
D_2^(3) = C_2^(2) = 1
D_3^(3) = C_3^(2) = α^6
D_4^(3) = C_4^(2) = α^6
D_5^(3) = 0

for i = 4:
Δ_4 = C_0^(3) = α^5
L_3 ≥ i − L_3 + s, so no length change: L_4 = L_3 = 3,  ε^(4) = ε^(3) = α
C_0^(4) = ε^(3)C_1^(3) + Δ_4D_1^(3) = α
C_1^(4) = ε^(3)C_2^(3) + Δ_4D_2^(3) = α
C_2^(4) = ε^(3)C_3^(3) + Δ_4D_3^(3) = α^6
C_3^(4) = ε^(3)C_4^(3) + Δ_4D_4^(3) = α^2
C_4^(4) = Δ_4D_5^(3) = 0
D_1^(4) = D_1^(3) = 0
D_2^(4) = D_2^(3) = 1
D_3^(4) = D_3^(3) = α^6
D_4^(4) = D_4^(3) = α^6
D_5^(4) = 0










The errata locator polynomial ψ(x) is:





ψ(x) = C^(4)(x) = α + αx + α^6x^2 + α^2x^3


This is the same errata-locator polynomial as in the inversionless Blahut example. Thus, this algorithm recovers the original codeword in a similar fashion to that shown before, but does so by using a simpler mathematical approach. Therefore, savings in computational time and effort are realized.



FIG. 3 is a block diagram of an embodiment of a computer system 400 that may implement the HDD system 100 of FIG. 1 and the ECC block 130 of FIG. 2. In this system embodiment, the system 400 may include a processor 410 coupled to a local memory 415 and coupled to the HDD system 100. As can be seen, the HDD system 100 includes a hard disk 106 and a read/write channel 420 having an ECC block 130. The processor 410 may be operable to control the memory 415 and the HDD system 100 in transferring data to and from the disk 106 and to and from the memory 415. Further, additional data stores and communication channels (not shown) may be used to transfer data to and from the HDD system 100 that may be remote from this computer system 400.


Such a computer system may be any number of devices, including a CD player, a DVD player, a Blu-Ray player, a personal computer, a server computer, a smart phone, a wireless personal device, a personal audio player, a media storage and delivery system, or any other system that may read and write data to and from a storage medium or communication channel.


While the subject matter discussed herein is susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the claims to the specific forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the claims.

Claims
  • 1. A code error-locating module, comprising: a first input operable to receive a polynomial indicative of a stream of bits, the stream of bits including at least one error and at least one erasure; a second input operable to receive a polynomial indicative of erasure information about the stream of bits; and an output operable to generate a polynomial indicative of the location of the at least one error and the at least one erasure.
PRIORITY CLAIM TO PROVISIONAL PATENT APPLICATION

This patent application claims priority to U.S. Provisional Patent Application No. 61/142,030 entitled ‘ERROR-LOCATOR-POLYNOMIAL GENERATION WITH ERASURE SUPPORT’ filed on Dec. 31, 2008 and is hereby incorporated by reference.

Provisional Applications (1)
Number Date Country
61142030 Dec 2008 US