This application claims priority from European patent application No. 04425486.0, filed Jun. 30, 2004, which is incorporated herein by reference.
Embodiments of the present invention relate to a method and a system for the low-latency correction of errors in read and write non-volatile memories, particularly electronic flash memories.
Embodiments of the invention particularly relate to read and write memories having a NAND structure, and the following description is made with reference to this specific field of application for convenience of illustration only, since the invention can also be applied to memories having a NOR structure, provided that they are equipped with an error correction system.
Even more particularly, embodiments of the invention relate to a method and a system for correcting errors in electronic read and write non-volatile memory devices, particularly flash memories, of the type providing at least the use of a binary BCH error correction code for the information data to be stored.
As is well known in this specific technical field, two-level and multilevel NAND memories have such a Bit Error Rate (BER) as to require an Error Correction (ECC) system in order to allow them to be used as reliably as possible.
Among the many ECC methods currently available, particular interest is attracted by the so-called cyclic correction codes, in particular binary BCH and Reed-Solomon codes.
The main features of these two codes are summarized hereafter by way of comparison.
The binary BCH code will be examined first:
1) Binary BCH
This code operates on a block of binary symbols. If N (4096+128=4224 bits) is the block size, the number of parity bits is P (assuming correction of 4 bits, P is equal to 52 bits).
As will be seen hereafter, the code requires a considerably lower number of parity bits than the Reed-Solomon code.
The canonical coding and decoding structures process the data block by means of sequential operations on the bits to be coded or decoded.
The latency for coding and decoding data blocks is higher than that of the Reed-Solomon code, since the latter operates on symbols rather than on single bits.
The arithmetic operators (sum, multiplication, inversion) in GF(2), and thus those necessary for this kind of code, are extremely simple (XOR, AND, NOT).
The code corrects K bits.
The other code will now be examined:
2) Reed-Solomon
It operates on a block of symbols, each composed of a plurality of bits.
If N ((4096+128)/9) is the symbol block size, the number of parity symbols is P (assuming correction of 4 errors, P is equal to 8 symbols of 9 bits each, i.e. 72 bits).
The canonical coding and decoding structures process the data block by means of sequential operations on the symbols to be coded or decoded.
In this case, the latency for coding and decoding data blocks is lower than that of the binary BCH code, since the code operates on symbols rather than on bits (roughly a factor 1/9 in the example).
Another difference is due to the fact that the arithmetic operators (sum, multiplication, inversion) in GF(2^m) are in this case complex operators with respect to those of the BCH code.
The code corrects K symbols. This is very useful in systems such as hard disks, tape recorders, CD-ROMs, etc., wherein sequential errors are very probable. This latter feature, however, often cannot be fully exploited in NAND memories.
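The difference in operator complexity mentioned above can be illustrated with a minimal software sketch (Python, purely for illustration and not part of the described hardware): in GF(2) the sum is a XOR and the product an AND, while a product in GF(2^m) requires a shift-and-reduce network. The field GF(2^4) and the primitive polynomial x^4+x+1 used below are assumptions chosen only for the example.

# GF(2): one-bit operands, trivial operators
def gf2_add(a, b):
    return a ^ b                      # sum is a XOR

def gf2_mul(a, b):
    return a & b                      # product is an AND (and the inverse of 1 is 1)

# GF(2^m): m-bit symbols; the product needs reduction by a primitive polynomial
# (here m = 4 and p(x) = x^4 + x + 1, an assumed example field)
def gf16_mul(a, b, prim=0b10011):
    res = 0
    while b:
        if b & 1:
            res ^= a                  # conditional XOR of the shifted operand
        b >>= 1
        a <<= 1
        if a & 0b10000:               # degree 4 reached: reduce modulo p(x)
            a ^= prim
    return res

assert gf16_mul(0b0010, 0b0011) == 0b0110   # x * (x + 1) = x^2 + x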
For a better understanding of aspects of the present invention, the structure of error correction systems using BCH coding and decoding will be analyzed first; the structure of Reed-Solomon correction systems will be analyzed afterwards.
The BCH Structure
The typical structure of a BCH code is shown in the attached drawings.
Blocks C, 1 and 3 can be realized by means of known structures (for example, according to what has been described by Shu Lin and Daniel Costello in “Error Control Coding: Fundamentals and Applications”), operating in a serial way and thus having a latency proportional to the length of the message to be stored.
In particular:
BLOCK C: the block latency is equal to the length of the message to be stored (4096 bits);
BLOCK 1: the block latency is equal to the length of the coded message (4096+52 bits for a four-error-correcting code);
BLOCK 3: the block latency is equal to the length of the coded message (4096+52 bits for a four-error-correcting code).
Blocks (2.1) that group or decompose the bits so as to satisfy said requirements are thus required in the architecture.
Consequently, in order not to slow the data flow down, the coder and the decoder are required to operate with a clock frequency eight times higher than the clock of the data storage and reading step.
The other correction mode, of the Reed-Solomon type, will now be examined.
The Reed-Solomon Structure (RS)
Reed-Solomon codes do not operate on bits but on symbols. As shown in the attached drawings, the coding block C and the syndrome calculation block 1 are similar to the ones used for BCH codes, with the only difference that they operate on symbols. The error detector block 3 must determine, besides the error position, also the correction symbol to be applied to the wrong symbol.
Since the RS code operates on symbols, a clearly lower latency is obtained, at the price of a higher hardware complexity due to the fact that the operators are no longer binary.
BLOCK C: the block latency is equal to the number of symbols in the message to be coded (462);
BLOCK 1: the block latency is equal to the number of symbols in the coded message (470);
BLOCK 3: the block latency is equal to the number of symbols in the coded message (470).
Also in this case the same considerations about bit grouping and decomposition apply; this time, however, the Reed-Solomon code does not operate sequentially on bits but on s-bit symbols.
Structures for grouping the bits are thus again required, but to ensure a continuous data flow the clock frequency must be 8/s times the clock of the data storage and reading step. It should be observed that in the case s=8 these structures are not required.
In this way the latency problem is solved, but, by comparing the number of parity bits required by BCH and Reed-Solomon, it can be seen that Reed-Solomon is much more expensive.
In the case being considered by way of example, i.e., 4224 (4096+128) data bits for correcting four errors, Reed-Solomon codes require twenty parity bits more than BCH binary codes.
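The figures quoted above can be checked with a short worked computation, under the assumptions (consistent with the numbers in the text, but not stated in it) that the binary BCH code is built over GF(2^13), since 2^13 − 1 = 8191 is the smallest field length accommodating 4224+52 bits, and that the Reed-Solomon code spends 2t parity symbols of 9 bits each:

t = 4                                  # number of correctable errors
m_bch = 13                             # assumed field GF(2^13): 2^13 - 1 = 8191 >= 4224 + 52
bch_parity_bits = t * m_bch            # at most m parity bits per corrected bit
s_rs = 9                               # Reed-Solomon symbol width of the example
rs_parity_bits = 2 * t * s_rs          # 2t parity symbols of s bits each

print(bch_parity_bits)                 # 52
print(rs_parity_bits)                  # 72
print(rs_parity_bits - bch_parity_bits)  # 20, the difference quoted above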
Although advantageous under several aspects, known systems do not allow the latency due to the sequential bit processing to be reduced while keeping the number of parity bits close to the theoretical minimum.
In substance, the low-latency advantage of the RS code is accompanied by a high demand for parity bits and by a higher structural complexity of the system.
An embodiment of the invention is directed to an error correction method and system having respective functional and structural features such as to allow the coding and decoding burdens to be reduced, reducing both the latency and the system structural complexity, thus overcoming the drawbacks of the solutions provided by the prior art.
The error correction method and system obtain, for each coding and decoding block, a good compromise between speed and occupied circuit area by applying a BCH code of the parallel type, which requires a low number of parity bits and has a low latency.
By using this circuit solution it is possible to choose, for each coding and decoding block, the most convenient degree of parallelism, and thus of latency, taking into account that, in the flash memory, the coding block is only involved in writing operations (only once, since the memory is non-volatile), the first decoding block is involved in all reading operations (and is the block requiring the greatest parallelism), while the correction blocks are called on only in case of error and thus not very often.
In this way it is often possible to optimize the system speed while reducing at the same time the circuit area occupied by the memory device.
Features and advantages of the methods and systems according to the invention will be apparent from the following description of an embodiment thereof given by way of indicative and non limiting example with reference to the attached drawings.
With reference to the figures of the attached drawings, an error correction system according to an embodiment of the invention is globally indicated with 10.
The system 10 comprises a block indicated with C representing the coding step, a block M representing the electronic memory device, and a group of blocks 1, 2 and 3 which are active during the decoding step. In particular, the block 1 is responsible for calculating the so-called code syndromes; the block 2 is a calculation block determining the error detection polynomial; the block 3 is responsible for detecting the errors by means of the Chien algorithm searching for the wrong positions.
The blocks indicated with 20.1 represent the parallelism conversion blocks on the data flow.
This embodiment of the invention is particularly suitable for use in a flash EEPROM memory M having a NAND structure; nevertheless, nothing prevents this embodiment from also being applied to memories with a NOR structure, provided that they are equipped with an error correction system.
Advantageously, the method and system according to this embodiment of the invention are based on processing the information data by means of a BCH code made parallel in the coding step and/or in the decoding step in order to obtain a low latency. The parallelism used for blocks C, 1 and 3 is selected to optimize the system performance in terms of latency and device area.
Two different methods to make a BCH binary code parallel are provided.
In substance, the parallel scanning can be performed in any phase of the data processing flow according to the application requirements.
The mathematical basics whereon the two parallelization methods of a BCH code according to this embodiment of the invention are based will be described hereafter.
First Parallelization Method:
Coding (Block C) and Syndrome Calculation (Block 1)
The structures for the syndrome coding and calculation are very similar since both involve a polynomial division.
With reference to the attached drawings, the message to be coded enters the circuit performing the division and simultaneously exits it, suitably shifted, so that in the end the coded message is composed of the initial data message followed by the parity calculated in the circuit.
The method parallelizes this division, which calculates the parity of the data to be written into the memory.
The structure being proposed, in the case of n input data, is represented in the attached drawings.
Registers 5.1 are initially reset. The words to be coded are applied to the logic network 5.2 in succession. After a word has been applied to the logic network 5.2, the outputs of the logic network 5.2 are stored in the registers 5.1. Once the last word of the message has been applied, the registers 5.1 contain the parity bits to be appended to the data message.
It is observed that the number of adders depends on the number of ones (non-zero coefficients) of the code generator polynomial.
By way of illustration, consider the example of a BCH [15,11] code with generator polynomial g(x)=11011, in the illustrative case of two input data.
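What follows is a minimal software sketch (Python, for illustration only; the function name and the software form are assumptions, not the circuit itself) of the parity computation just described for the BCH [15,11] example with g(x)=11011. The division circuit is emulated bit by bit, and the grouping of the input into words of w bits marks where, in the hardware structure, the outputs of the combinational network 5.2 would be latched into the registers 5.1.

def bch_parity(msg_bits, g=0b11011, w=2):
    """Parity bits of a systematic cyclic (BCH) code, i.e. the remainder of
    x^(n-k)*m(x) modulo g(x), with the message absorbed w bits per step."""
    r = g.bit_length() - 1             # number of parity bits (4 for g = 11011)
    mask = (1 << r) - 1
    g_low = g & mask                   # g(x) without its leading term x^r
    state = 0                          # contents of the registers 5.1
    for i in range(0, len(msg_bits), w):
        # in hardware the w single-bit updates below collapse into one pass
        # through the combinational network 5.2, clocked once per w bits
        for b in msg_bits[i:i + w]:
            feedback = b ^ ((state >> (r - 1)) & 1)
            state = (state << 1) & mask
            if feedback:
                state ^= g_low
    return [(state >> (r - 1 - j)) & 1 for j in range(r)]

msg = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0]               # 11 data bits, highest power first
assert bch_parity(msg, w=1) == bch_parity(msg, w=2)   # the grouping does not change the result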
The syndrome calculation structure is similar to the coding structure. Each syndrome is calculated by dividing the datum read from the memory by suitable polynomial factors of the code generator polynomial (prior art); at the end, the register content is evaluated at α, α^3, α^5 and α^7 by means of a matrix in order to obtain the syndromes. The method shown for parallelizing the parity calculation can thus be similarly used for the syndrome calculation.
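By way of illustration, the following sketch (Python; the field GF(2^4) with primitive polynomial x^4+x+1 is assumed purely for the example and is not the field of the described application) computes a syndrome by direct evaluation of the read word at a power of α, which is the value that the described hardware obtains by first taking the remainder of the division and then applying the evaluation matrix; the same grouping technique used above for the parity can be applied here to process several bits per clock.

def gf16_mul(a, b, prim=0b10011):      # GF(2^4) product (assumed example field)
    res = 0
    while b:
        if b & 1:
            res ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= prim
    return res

def gf16_pow(a, e):
    res = 1
    for _ in range(e):
        res = gf16_mul(res, a)
    return res

ALPHA = 0b0010                         # the field element alpha = x

def syndrome(read_bits, i):
    """S_i = r(alpha^i), where read_bits[j] is the coefficient of x^j."""
    point = gf16_pow(ALPHA, i)
    acc, term = 0, 1                   # term runs over the powers (alpha^i)^j
    for bit in read_bits:
        if bit:
            acc ^= term
        term = gf16_mul(term, point)
    return acc

word = [0] * 15                        # the all-zero word is always a codeword
assert all(syndrome(word, i) == 0 for i in (1, 3, 5, 7))   # all syndromes are zero
word[6] = 1                            # inject a single bit error
assert syndrome(word, 1) != 0          # a nonzero syndrome reveals the error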
Search for the Error Detection Polynomial (Fast BCH)
This block is unchanged with respect to the traditional BCH structure; it is observed that, although it is the most complex part of the decoding algorithm, it is the one requiring the least time.
Search for Error Detection Numbers
Once the syndromes are known, the error detection polynomial is determined, whose roots are the inverses of the wrong positions. Once this polynomial is known, its roots are then found. This search is performed by means of the Chien algorithm (prior art).
The algorithm carries out a test on all the field elements in order to check whether they are roots of the error detection polynomial.
If α^i is a root of the error detection polynomial, then the position n−i is wrong, where n is the code length.
At this point, for each field element, it is determined whether it is a root of the error detection polynomial, i.e. it is checked whether the following equation, in which l1, . . . , lt are the coefficients of the error detection polynomial, holds for some j:
1 + l1·α^j + . . . + lt·α^(jt) = 0
j = 0, 1, . . . , n−1
Consequently, a total sum is performed of all the register contents and the field element ‘1’, as shown in the attached drawings.
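A minimal software model of the Chien test described above follows (Python; the same assumed GF(2^4) field is used, and the two wrong positions are arbitrary illustrative values): every field element α^j is substituted into the error detection polynomial, the products are summed together with the constant term ‘1’, and a zero result marks α^j as a root, i.e. position n−j as wrong.

def gf16_mul(a, b, prim=0b10011):      # GF(2^4) product (assumed example field)
    res = 0
    while b:
        if b & 1:
            res ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= prim
    return res

def gf16_pow(a, e):
    res = 1
    for _ in range(e):
        res = gf16_mul(res, a)
    return res

ALPHA = 0b0010                         # the field element alpha = x

def chien_roots(locator, n=15):
    """Exponents j such that alpha^j is a root of 1 + l1*x + ... + lt*x^t;
    per the text, each such j marks the position n - j as wrong."""
    roots = []
    for j in range(n):
        aj = gf16_pow(ALPHA, j)
        acc, term = 1, 1               # the total sum starts from the element '1'
        for l in locator:
            term = gf16_mul(term, aj)  # term = (alpha^j)^k for the k-th coefficient
            acc ^= gf16_mul(l, term)
        if acc == 0:
            roots.append(j)
    return roots

# build the error detection polynomial for two assumed wrong positions,
# (1 + alpha^p1 * x)(1 + alpha^p2 * x), then recover those positions
p1, p2 = 3, 9
l1 = gf16_pow(ALPHA, p1) ^ gf16_pow(ALPHA, p2)
l2 = gf16_mul(gf16_pow(ALPHA, p1), gf16_pow(ALPHA, p2))
assert sorted((15 - j) % 15 for j in chien_roots([l1, l2])) == [p1, p2]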
Second Parallelization Method:
The structure of the system 10 according to a further embodiment of the invention, incorporating coding and decoding blocks, is similar to the structure of an error correction system having a traditional BCH binary code; nevertheless, the internal structure of each block changes.
According to an embodiment of the invention, it is provided to break the initial information message into n parts and to operate autonomously on each part. The possibility of breaking the initial information block into two blocks is considered by way of example; there will thus be bits in the even positions and bits in the odd positions, so that two bits enter the circuit at a time and the speed doubles.
Generally, parity bits are calculated according to the following relation (1), also shown in the attached drawings:
par = x^(n−k)·m(x) mod g(x)    (1)
where m(x) is the data message and g(x) is the code generator polynomial.
Operating in parallel, parity bits par1 and par2 are calculated according to these relations:
par = par1 + par2, wherein:
par1 = [(x^(n−k)·m(x))_even mod g(x)] evaluated at α^2
par2 = α·[(x^(n−k)·m(x))_odd mod g(x)] evaluated at α^2    (2)
where (·)_even and (·)_odd denote the sub-polynomials formed by the coefficients in the even and odd positions, respectively.
In the general case of q bits processed in parallel, parity bits par1, par2, . . . , parq are calculated according to these relations:
par = par1 + par2 + . . . + parq
par1 = [(x^(n−k)·m(x))_qi mod g(x)] evaluated at α^q, with qi < n
par2 = α·[(x^(n−k)·m(x))_qi+1 mod g(x)] evaluated at α^q, with qi+1 < n
. . .
parq = α^(q−1)·[(x^(n−k)·m(x))_qi+q−1 mod g(x)] evaluated at α^q, with qi+q−1 < n
where (·)_qi, (·)_qi+1, . . . , (·)_qi+q−1 denote the sub-polynomials formed by the coefficients in the positions qi, qi+1, . . . , qi+q−1, respectively.
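The decomposition above can be verified numerically. The sketch below (Python, with GF(2) polynomials held as integers where bit i stands for the coefficient of x^i) checks, for the two-way case of relations (2), that the two partial parities add up to the serial parity of relation (1), under the interpretation that the "evaluation at α^2" is the substitution x → x^2 followed by reduction modulo g(x) (the matrix network mentioned in the description) and that the factor α is a multiplication by x modulo g(x); these interpretations, like the helper names, are assumptions made for the sketch.

import random

def pmod(a, g):
    """Remainder of the GF(2) polynomial a modulo g (polynomials as integers)."""
    while a.bit_length() >= g.bit_length():
        a ^= g << (a.bit_length() - g.bit_length())
    return a

def spread(p):
    """Substitution x -> x^2: move the coefficient of x^i to x^(2i)."""
    out, i = 0, 0
    while p:
        if p & 1:
            out |= 1 << (2 * i)
        p >>= 1
        i += 1
    return out

g = 0b11011                      # generator polynomial of the running example
r = g.bit_length() - 1           # number of parity bits
m = random.getrandbits(11)       # a random 11-bit data message m(x)
M = m << r                       # x^(n-k) * m(x)

par = pmod(M, g)                 # serial parity, relation (1)

# split M(x) = Me(x^2) + x*Mo(x^2) into its even- and odd-position coefficients
Me = sum(((M >> (2 * i)) & 1) << i for i in range(M.bit_length()))
Mo = sum(((M >> (2 * i + 1)) & 1) << i for i in range(M.bit_length()))

par1 = pmod(spread(pmod(Me, g)), g)        # reduce, then "evaluate at alpha^2"
par2 = pmod(spread(pmod(Mo, g)) << 1, g)   # same, then multiply by alpha (i.e. by x)

assert par == par1 ^ par2        # relations (2): the partial parities add up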
An example of a known circuit allowing the coding (1) to be realized is shown in the attached drawings.
On the contrary, for calculating the parity in the double-parallelization case, the structure shown in the attached drawings is used.
The blocks indicated with “cod” perform both the division, as in the traditional algorithm, and the evaluation at α^2. This evaluation can be carried out by means of a logic network described by a matrix.
As regards the odd bits, it is then necessary to multiply the results by α, in the manner already described.
If the circuit is to be further parallelized into a plurality of q blocks, reference can be made to the example of the attached drawings.
In the case of the traditional serial binary BCH coding, it is possible to calculate the so-called code syndromes by means of the following calculation formula (3), corresponding to the circuit block diagram of the attached drawings:
Sj = r(α^j), j = 1, 2, . . . , 2t    (3)
where r(x) is the read word.
On the contrary, according to an embodiment of the present invention, the syndrome calculation is set out on the basis of the following formulas (4):
Sj = S1j + S2j    (4)
where S1j is the contribution of the bits in the even positions and S2j that of the bits in the odd positions, as described hereafter.
A possible implementation of the syndrome calculation according to the prior art is shown in the attached drawings.
In general terms, advantageously according to an embodiment of the present invention, in a q-bit parallel processing each syndrome Sj is set out as the sum of q contributions (S1, S2, . . . , Sq) on the basis of the following relation:
Sj = S1 + S2 + . . . + Sq
wherein r(x) is the read word, possibly containing errors, and the contributions S1, S2, . . . , Sq are calculated from the bits of r(x) taken q at a time, in the same way described hereafter for the two-bit case.
Consequently, a division is performed, similarly to the coding, in order to obtain the remainder in the registers marked with s0, s1, . . . . This remainder (seen as a polynomial) must then be evaluated at α, α^2, α^3, α^4 as described above, for example by using a logic network described by matrices.
The structure realizing the parallel syndrome calculation according to this embodiment is shown in the attached drawings.
In particular, two bits are analyzed simultaneously, the even-position ones and the odd-position ones, and a structure similar to the traditional syndrome calculation is provided for both.
In fact, both for the even bits and for the odd bits, there is a block calculating the remainder of the division of the input message by a polynomial that is a factor of the code generator polynomial.
These remainders must now be evaluated at precise powers of α; differently from the traditional syndrome calculation, this time they are evaluated at α^2, α^4, α^6 and α^8.
In the case of the odd bits, a multiplication by suitable powers of α must also be performed.
The results of the even block and of the odd block are then added in order to obtain the final syndromes.
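The even/odd decomposition of the syndromes just described can also be checked numerically. In the following sketch (Python; the field GF(2^4) with primitive polynomial x^4+x+1 is again an assumption made for the example, and the 15-bit word is arbitrary, used only to exercise the identity) each syndrome is shown to equal the even-position contribution, evaluated at the square of the usual point, plus the odd-position contribution multiplied by the appropriate power of α.

def gf16_mul(a, b, prim=0b10011):      # GF(2^4) product (assumed example field)
    res = 0
    while b:
        if b & 1:
            res ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= prim
    return res

def gf16_pow(a, e):
    res = 1
    for _ in range(e):
        res = gf16_mul(res, a)
    return res

ALPHA = 0b0010                         # the field element alpha = x

def poly_eval(bits, point):
    """Evaluate sum(bits[i] * point^i) over GF(2^4)."""
    acc, term = 0, 1
    for b in bits:
        if b:
            acc ^= term
        term = gf16_mul(term, point)
    return acc

r = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 1]    # read word r(x), r[i] = coeff. of x^i

for j in (1, 2, 3, 4):                               # syndromes of a two-error example
    Sj = poly_eval(r, gf16_pow(ALPHA, j))            # serial computation Sj = r(alpha^j)
    S1j = poly_eval(r[0::2], gf16_pow(ALPHA, 2 * j)) # even bits, evaluated at (alpha^j)^2
    S2j = gf16_mul(gf16_pow(ALPHA, j),               # odd bits, times the power alpha^j
                   poly_eval(r[1::2], gf16_pow(ALPHA, 2 * j)))
    assert Sj == S1j ^ S2j                           # Sj = S1j + S2j, as in formulas (4)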
Now, according to the prior art, a search algorithm for the roots of the error detection polynomial is located in block 3, and it provides for the substitution of all the field elements into the polynomial.
In substance, in the case of a serial BCH code, a test is performed on all the field elements, according to the following formula:
1 + l1·α^j + . . . + lt·α^(jt) = 0
j = 0, 1, . . . , n−1    (5)
In the traditional serial BCH code, always assuming the correction of two errors, a circuit structure like the one shown in the attached drawings is used.
According to an embodiment of the invention, and assuming a single parallelization step (q=2), two circuits are obtained, each checking one half of the field elements, and thus two different tests, TEST1 and TEST2, are carried out.
Consequently, parallelizing this portion means having several circuits substituting different field elements into the error detection polynomial. In particular, by parallelizing twice, the diagram shown in the attached drawings is obtained.
The first circuit performs the first test, i.e. it checks whether the field elements that are even powers of α are roots of the error detection polynomial, while the second checks whether the odd powers of α are roots of the error detection polynomial.
In the general case of a q-bit parallel processing, the search algorithm of the roots of the error detection polynomial is calculated according to the following formula:
1 + l1·α^j + . . . + lt·α^(jt) = 0
j = 0, 1, . . . , n−1
wherein l1, . . . , lt are the coefficients of the error detection polynomial I(x), on which, in the q-bit parallel processing, a plurality of tests (TEST1, TEST2, . . . , TESTq) are performed for all the elements, the i-th test checking the elements α^j with j = i−1, i−1+q, i−1+2q, and so on.
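As a small illustration of this partition (Python; the function name is an assumption), the exponents j = 0, 1, . . . , n−1 are simply distributed among the q test circuits by strides of q, which for q = 2 reproduces the even/odd split described above:

def chien_partition(q, n=15):
    """TEST_i (i = 1..q) checks the field elements alpha^j with
    j = i-1, i-1+q, i-1+2q, ... up to n-1."""
    return [list(range(i, n, q)) for i in range(q)]

print(chien_partition(2))   # q = 2: even exponents for TEST1, odd exponents for TEST2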
The previous description has shown how to realize parallel structures for coding blocks C, syndrome calculation blocks 1 and error correction blocks 3.
It will be shown hereafter how, since there is no correlation between the parallelism of one block and the parallelism of another block, it is very advantageous to give the coding and decoding system 10 an architecture having a hybrid parallelism, and thus a hybrid latency.
Specific reference will be made to the coding and decoding example illustrated in the attached drawings.
Assuming an error probability of 10^−5 on a single bit for the NAND memory M, since the protection code operates on a package of 4096 bits, the probability that the package is wrong is 1 out of 50.
In order to determine whether the message is correct, the syndrome calculation in block 1 is performed; for this reason it is suitable to use a high parallelism for block 1, in order to reduce the overall average latency.
The Chien circuit (block 3) performing the correction is called on only in case of error (1 out of 50); it is thus suitable, in order to reduce the area, to use a low-parallelism structure for this single block 3 circuit.
For the coding block C it is possible to choose the most suitable parallelism for the application in order to optimize the coding speed or the overall system area.
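The trade-off just outlined can be made concrete with a deliberately crude latency model, which is an assumption for illustration and is not taken from the description: a block processing N bits with parallelism p is charged about N/p clock cycles, the syndrome block is charged on every read, and the Chien block only with the error probability quoted above.

N = 4096 + 52              # coded message length of the running example
p_err = 1 / 50             # probability that a read package needs correction (from the text)

def avg_read_latency(p1, p3):
    """Average decode latency in cycles: the syndrome block (parallelism p1)
    always runs, the Chien block (parallelism p3) only in case of error."""
    return N / p1 + p_err * (N / p3)

print(round(avg_read_latency(p1=8, p3=1)))   # about 601 cycles
print(round(avg_read_latency(p1=8, p3=8)))   # about 529 cycles, at 8x the block 3 area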
This solution allows the coding and decoding time to be reduced by varying the parallelism at will.
Another advantage is given by the fact that the independence of the parallelism of each block involved in the coding and decoding operations allows the performance and the area of the system 10 or 11 to be optimized according to the application.
The system 10 of the embodiments described above thus achieves a low coding and decoding latency while keeping the number of parity bits close to the theoretical minimum.
From the foregoing it will be appreciated that, although specific embodiments of the invention have been described herein for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the invention. Accordingly, the invention is not limited except as by the appended claims.