The present invention relates to techniques for performing error correction encoding in data recording systems, and more particularly, to techniques for performing error correction encoding using error correction codes that are less computationally intensive.
Error correcting codes are used in data recording systems to ensure data reliability. Parity codes are examples of error correction codes. Parity codes are often used to correct randomly occurring errors.
Short parity codes typically provide good error correction performance, but at a low code rate. Longer parity codes provide a higher code rate. However, high rate parity codes typically have reduced error correction performance and are more likely to propagate errors.
These deficiencies can be overcome by using a tensor product parity code or codes modified from a tensor product code. A tensor product parity (TPP) code is the tensor product of two smaller codes. The parity check matrix of a TPP code is derived by taking the tensor product of the parity check matrices for the two smaller codes.
For example, a tensor product parity code can be the tensor product of a short parity code and a component Bose-Chaudhuri-Hocquenghem (BCH) code. BCH codes are another well-known family of error correcting codes. Such a tensor product parity code has error correction performance equivalent to that of the short parity code, but at a substantially higher code rate. The component BCH code can be replaced by any other error correction code.
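As an illustration of this construction, the tensor product of two parity check matrices can be computed with a short routine. The matrices below are illustrative stand-ins consistent with the (3, 2) single parity code and (7, 4) BCH code discussed later in the text; the construction itself is the standard Kronecker product over GF(2).

```python
# Sketch of the tensor product parity (TPP) construction over GF(2).
# H1 and H2 are illustrative stand-ins for the component parity check
# matrices discussed in the text.

def kron(A, B):
    """Tensor (Kronecker) product of two binary matrices, mod 2."""
    return [
        [(a * b) % 2 for a in row_a for b in row_b]
        for row_a in A for row_b in B
    ]

H1 = [[1, 1, 1]]                      # (3, 2) single parity code
H2 = [[1, 1, 0, 1, 1, 0, 0],          # a (7, 4) parity check matrix
      [1, 1, 1, 0, 0, 1, 0],
      [1, 0, 1, 1, 0, 0, 1]]

H_TPP = kron(H2, H1)                  # 3 checks over a 21-bit codeword
```

Each row of H2 is expanded by H1, so a single global check of H2 over 7 intermediate values becomes one check over all 21 channel bits.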
A Reed-Solomon (RS) error correction code can be combined with a tensor product parity (TPP) code to generate a combined code. The combined code can be used to provide two levels of error correction in a data recording system. While efficient encoding methods exist for encoding the TPP and RS codes separately, no such efficient encoder exists to simultaneously enforce both TPP and RS parity rules. The only known method of encoding a combined RS and TPP code is brute-force matrix multiplication. Combined RS/TPP codes typically have very large parity check matrices, and as a result, they are difficult to encode, because they require extensive matrix multiplication.
It would therefore be desirable to provide combined error correcting codes that are simpler to encode and that require less extensive matrix multiplication.
The present invention provides systems and methods for performing error correction encoding using error correction codes. The error correction encoding techniques of the present invention have a reduced complexity that allows them to be applied to practical data recording systems.
An encoder inserts redundant parity information into a data stream to improve system reliability. According to one embodiment, the encoder can generate the redundant parity information by combining two component codes. Dummy bits are inserted into the data stream in locations reserved for parity information generated by subsequent encoding. The redundant parity information can be generated by applying encoders for each component code successively such that data and parity information from all of the preceding encoders are input into a subsequent encoder.
An error correction code of the present invention can have a uniform or a non-uniform span. The span corresponds to consecutive channel bits that are within a single block of a smaller parity code that is used to form a composite code. The span lengths can vary across the codeword by inserting dummy bits into fewer than all of the spans.
Other objects, features, and advantages of the present invention will become apparent upon consideration of the following detailed description and the accompanying drawings, in which like reference designations represent like features throughout the figures.
In many data storage or communications systems, two separate codes are combined to form a composite code. The most common method of combining two component codes is simple concatenation. In simple concatenation, the composite codeword consists of a sequence of smaller blocks. Each of the smaller blocks is a codeword of an inner component code. The sequence of blocks is a codeword of an outer component code. Simple concatenation combines two component codes to form a composite code that has stronger error correcting capabilities than either component code. However, the composite code incurs the parity overhead of both component codes.
Encoding proceeds by first encoding the data blocks using the outer component code by adding outer parity blocks. Then, every block is encoded using the inner component code by adding inner parity bits within each block.
Decoding proceeds by first decoding each block using the inner component code decoder. The inner component code decoder corrects all errors in blocks with only a few bits in error. The resulting sequence of blocks is then decoded using the outer component code decoder. The outer component code decoder corrects blocks that were decoded incorrectly by the inner component code decoder.
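As a minimal sketch of simple concatenation, the following uses two toy component codes chosen purely for illustration: the outer code appends one block-wise XOR parity block, and the inner code appends a single parity bit to each block.

```python
def encode_simple_concat(data_blocks):
    """Simple concatenation sketch with toy component codes.
    Outer code: append one parity block (bitwise XOR of all data
    blocks).  Inner code: append one parity bit within each block."""
    n = len(data_blocks[0])
    # Outer encoding: add outer parity blocks (one, in this toy code)
    outer_parity = [0] * n
    for block in data_blocks:
        outer_parity = [p ^ b for p, b in zip(outer_parity, block)]
    blocks = [list(b) for b in data_blocks] + [outer_parity]
    # Inner encoding: add inner parity bits within each block
    return [block + [sum(block) % 2] for block in blocks]
```

Note that the composite codeword carries the parity overhead of both component codes, as the text observes.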
Another method for combining two component codes known in the prior art is generalized concatenation. As with simple concatenation, the composite codeword consists of a sequence of smaller blocks. Unlike simple concatenation, however, the blocks are not themselves codewords of the inner component code. The degree to which each block deviates from the parity rules of the inner component code is called the syndrome for that block. The outer component code does not operate over the sequence of blocks as such; rather, the sequence of syndromes is a codeword of the outer component code.
Encoding proceeds by computing the inner component code syndrome for blocks corresponding to data elements of the outer component code. The outer component code encoder then computes the syndromes required for the remaining blocks in order for the complete sequence of syndromes to form a valid codeword of the outer component code. These remaining blocks correspond to parity elements of the outer component code. For the remaining blocks, parity bits are added to force the syndrome to the required value.
Decoding proceeds by first computing the inner block syndrome for each block. The sequence of syndromes is then decoded using the outer component code decoder. Each block is then decoded again using the inner component code decoder and the corresponding syndrome value given by the outer component code decoder.
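The central object in generalized concatenation is the per-block syndrome. A minimal sketch of its computation, assuming a generic binary inner parity check matrix, is:

```python
def syndrome(block, H_inner):
    """Inner-code syndrome of one block: H_inner * block (mod 2).
    A zero syndrome means the block satisfies the inner code's
    parity rules; a nonzero syndrome measures the deviation."""
    return [sum(h * b for h, b in zip(row, block)) % 2 for row in H_inner]
```

In decoding, the sequence of these syndromes, one per block, is itself treated as a codeword of the outer component code.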
According to an embodiment of the present invention, three component codes are combined to form a composite code. First, two codes are combined by generalized concatenation to form a first composite code. The first composite code is then used as the inner code in simple concatenation with an outermost error correction code to form a second composite code.
In the preferred embodiment, a simple parity code is concatenated with a BCH code to form a composite tensor product parity code that is then concatenated with a Reed-Solomon outermost error correction code. It should be understood that the principles of the present invention can encode data using composite codes formed by combining different component codes in a similar fashion.
A composite code formed in this way cannot easily be encoded. This difficulty arises due to the fact that both the composite code formed by generalized concatenation and the outermost error correcting code involve parity checks that span the entire codeword. The present invention describes how simple modifications to the details of the concatenation can render the encoding problem more tractable.
Input data bits are provided to a first level error correction encoder 101. Error correction encoder 101 can apply any error correction or detection code to the input data bits to generate redundant data bits. For example, first level error correction encoder 101 can be a Reed-Solomon (RS) encoder that generates RS check bytes for each block of input data.
The data output blocks of encoder 101 include RS check bytes. Data output blocks of encoder 101 are provided to delay block 102 and second level error correction encoder 104. According to one embodiment of the present invention, second level error correction encoder 104 uses a tensor product parity code (TPPC) to generate a second level of redundant parity bits.
Second level encoder 104 generates a set of redundant parity bits for each block of input data using a composite code, such as a tensor product parity (TPP) code. The parity bits are then inserted into the data block at block 103.
Delay block 102 delays the output data block of encoder 101 so that encoder 104 has enough time to calculate the parity bits and to insert the parity bits into the same data block before the data is written onto a recording medium.
The span of the code corresponding to the HRSTP matrix is the granularity length of each TPP inner component code. In the example of
The example parity check matrix HTPP 202 for the TPP code is the tensor product of a parity check matrix H1 for a single parity code and a parity check matrix H2 for a BCH code. The parity check matrix HTPP 202 shown in
The check matrix H1 corresponds to a (3, 2) single parity code, and the check matrix H2 corresponds to a (7, 4) BCH code. Parity check matrix HTPP 202 is shown below.
The tensor product parity check matrix HTPP can be expressed as two levels of equations using modulo 2 arithmetic. The first level equations are local parity equations that are based on the H1 parity check matrix. The first level equations are used to generate intermediate values ai, where i=1, 2, 3, . . . m, and m is the number of columns in the H2 matrix. Using the example H1 matrix given above, the first level equations can be expressed as shown in equations (1)-(7), where + represents modulo 2 addition (an XOR function).
a1=x1+x2+x3 (1)
a2=x4+x5+x6 (2)
a3=x7+x8+x9 (3)
a4=x10+x11+x12 (4)
a5=x13+x14+x15 (5)
a6=x16+x17+x18 (6)
a7=x19+x20+x21 (7)
The second level equations are global parity equations that are based on the H2 parity check matrix. Each of the second level equations corresponds to one row in the H2 matrix. Using the example H2 matrix given above and the example equations (1)-(7), the second level equations can be expressed as shown in equations (8)-(10), where + represents modulo 2 addition.
a1+a2+a4+a5=0 (8)
a1+a2+a3+a6=0 (9)
a1+a3+a4+a7=0 (10)
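Equations (1)-(10) can be checked directly. The sketch below verifies whether a 21-bit word satisfies both levels of the example TPP code:

```python
def tpp_check(x):
    """Check a 21-bit word against the two levels of TPP equations."""
    assert len(x) == 21
    # First level, eqs (1)-(7): a_i is the parity of each 3-bit segment
    a = [x[3*i] ^ x[3*i + 1] ^ x[3*i + 2] for i in range(7)]
    # Second level, eqs (8)-(10): global parity checks over the a_i
    checks = [
        a[0] ^ a[1] ^ a[3] ^ a[4],   # equation (8)
        a[0] ^ a[1] ^ a[2] ^ a[5],   # equation (9)
        a[0] ^ a[2] ^ a[3] ^ a[6],   # equation (10)
    ]
    return all(c == 0 for c in checks)
```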
The parity check matrix 201 in
TPP check matrix 202 contains three columns of parity bits. The 9th, 12th, and 15th columns in matrix 202 contain the parity bits for the TPP code. The dummy bits in matrix 201 are in the same three columns as the parity bits in matrix 202. Unlike many prior art systems, an RS decoder of the present invention does not check the TPP parity bits. This means that the RS code can be encoded independently of the TPP code.
A parity check matrix completely describes any linear block code. Furthermore, by applying simple algebraic manipulation known to persons skilled in the art, a parity check matrix can be transformed into a generator matrix. A generator matrix can be used to encode data into a codeword that satisfies the parity check rules described in the parity check matrix. However, encoding by direct matrix multiplication is not preferred, because it is computationally expensive for large codes. For the most common codes, more efficient encoders exist that do not require large matrix multiplications.
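When the parity check matrix happens to be in standard form H = [A | I], the algebraic manipulation mentioned above reduces to reading off the systematic generator matrix G = [I | A^T]. A sketch follows; the (7, 4) matrix shown is a hypothetical example already in standard form.

```python
def generator_from_parity(H_std, k):
    """Systematic generator matrix from H in standard form [A | I].
    Over GF(2), G = [I_k | A^T], so every row of G is orthogonal
    (mod 2) to every row of H_std."""
    r = len(H_std)
    A = [row[:k] for row in H_std]
    return [[1 if j == i else 0 for j in range(k)] +
            [A[j][i] for j in range(r)]
            for i in range(k)]

# Example: a (7, 4) parity check matrix in standard form [A | I_3]
H_std = [[1, 1, 0, 1, 1, 0, 0],
         [1, 1, 1, 0, 0, 1, 0],
         [1, 0, 1, 1, 0, 0, 1]]
G = generator_from_parity(H_std, 4)
```

Multiplying a 4-bit data word by G (mod 2) then yields a 7-bit codeword satisfying all three parity checks, though, as noted above, practical encoders avoid the full matrix multiplication.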
Codes used for real hard disk drives are much larger than the example codes shown in
The number of bits in each segment equals the span length. In the example of
At step 301, register 310A is set up, for example, by setting the values stored in the register to zero. The register stores input bits. A set of 12 input bits (e.g., 101011011110) is serially shifted into the register from left to right at step 302. None of the 12 input bits are stored in the 9th, 12th, and 15th bit positions of the shift register. Instead, three zero-value dummy bits are stored in these 3 bit positions. The last two segments of the register remain empty.
At step 303, a first level of error correction encoding is performed. The result of the first level of error correction encoding is a set of redundant bits that is added to the set of input bits. For example, the first level of error correction encoding can be Reed-Solomon (RS) encoding. RS parity data can be efficiently generated by recursive methods well known in the prior art. In
At step 304, a second level of error correction encoding is performed using a composite code to compute additional parity bits. In the example of
The second level encoding is performed in three steps in the example of
In the second step 304B of second level encoding, the second component code encoder generates new intermediate values a3′, a4′, and a5′ such that a1, a2, a3′, a4′, a5′, a6, a7 satisfy parity check matrix H2. In this example, the inputs to the second component code encoder are intermediate values a1, a2, a6, and a7, and the outputs are a3′, a4′, and a5′. In general, the inputs are the intermediate values generated by segments that do not contain a dummy bit, and the outputs correspond to segments that do contain a dummy bit.
In the third step 304C of second level encoding, the final parity bits for the composite code are generated by applying modulo 2 addition (XOR) to the two sets of ai values calculated for the segments with dummy bits. For example, in
In the example of
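Assuming zero-valued dummy bits in the 9th, 12th, and 15th bit positions as described above, the three steps of second level encoding can be sketched as follows. The closed-form expressions for a3', a4', and a5' are obtained by solving equations (8)-(10) for the three unknowns.

```python
def tpp_encode(x):
    """Second level TPP encoding sketch for the 21-bit toy example.
    x holds 21 bits with zero-valued dummy bits already placed in
    the 9th, 12th, and 15th positions (indices 8, 11, 14)."""
    x = list(x)
    # Step 304A: first component code computes segment parities a1..a7
    a = [x[3*i] ^ x[3*i + 1] ^ x[3*i + 2] for i in range(7)]
    # Step 304B: solve eqs (8)-(10) for a3', a4', a5' from a1, a2, a6, a7
    a3p = a[0] ^ a[1] ^ a[5]     # eq (9): a1 + a2 + a3' + a6 = 0
    a4p = a[1] ^ a[5] ^ a[6]     # eq (10), after substituting a3'
    a5p = a[0] ^ a[5] ^ a[6]     # eq (8), after substituting a4'
    # Step 304C: final parity bit = XOR of old and required a_i values
    x[8]  ^= a[2] ^ a3p
    x[11] ^= a[3] ^ a4p
    x[14] ^= a[4] ^ a5p
    return x
```

Because each dummy bit is zero, XORing the old segment parity with the required value and writing the result into the dummy position forces the segment parities to a3', a4', and a5', as described in the third step.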
The present invention provides significant benefits to data recording systems, including hard disk drives. Specifically, the error correction encoding techniques of the present invention use dummy bits in the encoding process to simplify the computations. The encoding techniques of the present invention are simple enough that they can be performed using encoders designed for the two or more codes that are used to form a composite code. For the toy example shown in
The present invention reduces the size of the chipset required to perform the encoding. The present invention also reduces the latency in the controller electronics.
The parity check matrix HRSTP shown in
Three additional columns are added to the RS parity check matrix 401 corresponding to three dummy bits per row, as shown in
The span of the TPP component code varies in the example of
First level error correction encoding (e.g., RS encoding) is then performed to generate first level redundant check bytes 512. The redundant check bytes are loaded into the last two segments of register 510B as shown in
The second level of error correction encoding is performed using a composite code (e.g., a tensor product parity code) to compute the parity bits. In the example of
The first component code encoder is applied to each segment of bits in the codeword to compute intermediate results a1-a7. Subsequently, the second component code encoder is applied to the intermediate results ai computed from the segments that do not contain a dummy bit.
In the example of
The results of these three XOR functions are the correct parity values for the second level composite code. The correct parity values are inserted into the codeword stored in register 510C to replace the dummy bits, as shown in
The foregoing description of the exemplary embodiments of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. A latitude of modification, various changes, and substitutions are intended in the present invention. In some instances, features of the invention can be employed without a corresponding use of other features as set forth. Many modifications and variations are possible in light of the above teachings, without departing from the scope of the invention. It is intended that the scope of the invention be limited not with this detailed description, but rather by the claims appended hereto.
Number | Date | Country | |
---|---|---|---|
20070043997 A1 | Feb 2007 | US |