This invention relates generally to mass digital data storage systems using flash electrically erasable and programmable read only memory (“EEPROM”) technology, and, more specifically, to techniques of controlling the use of such systems in order to improve their useful life.
An advantage of using EEPROM technology is that a solid-state, non-volatile memory is provided, which can be repetitively reprogrammed. Each EEPROM cell includes an electrically floating gate positioned over a substrate channel between source and drain regions. A thin gate oxide layer separates the floating gate from the substrate. The threshold level of the cell is controlled by an amount of charge that is placed on the floating gate. If the charge level is above some threshold, the cell is read to have one state, and if below that threshold, is read to have another state.
The desired floating gate charge level is programmed by applying an appropriate combination of voltages to the source, drain, substrate and a separate control gate, for a designated period of time, in order to cause electrons to move from the substrate to the floating gate through the gate oxide layer. Current leakage from the floating gate is very small over time, thereby providing permanent storage. The charge level on the floating gate can be reduced by an appropriate combination of voltages applied to the elements described above, but it is preferable to include a separate erase gate that is positioned adjacent the floating gate with a thin layer of tunnel oxide between them.
A large number of such cells form a memory. The cells are preferably arranged on a semiconductor integrated circuit chip in a two-dimensional array with a common control gate provided for a row of such cells as a word line and the cells in each column having either their drain or source connected to a common bit line. Each cell is then individually addressable by applying the appropriate voltages to the word and bit lines that intersect at the desired cell. Rather than providing for such individual addressing for the purpose of erasing the cells, however, the erase gates of a block of cells are generally connected together in order to allow all of the cells in the block to be erased at the same time, i.e., in a “flash”.
In operating such a memory system, cells can be rewritten with data by either programming with electrons from the substrate or erasing through their erase gates, depending upon the state in which they are found and the state to which they are to be rewritten. However, flash EEPROM systems are generally operated by first erasing all of the cells in an erasable block to a common level, and then reprogramming them to desired new states.
Flash EEPROM mass storage systems have many advantages for a large number of applications. These advantages include their non-volatility, speed, ease of erasure and reprogramming, small physical size and similar factors. Because there are no mechanical moving parts, such systems are not subject to failures of the type most often encountered with hard and floppy disk mass storage systems. However, EEPROM cells do have a limited lifetime in terms of the number of times they can be reprogrammed or erased. As the number of cycles to which a cell is subjected reaches a few tens of thousands, it begins to take more voltage and/or time to both program and erase the cell. This is believed to be due to electrons becoming trapped in the respective gate and tunnel dielectric layers during repetitive programming and erase cycles. After a certain number of cycles, the number of electrons so trapped begins to change the operating characteristics of the cell. At some point, after one hundred thousand or more such cycles, so much voltage or time is required to either program or erase the cell, or both, that it becomes impractical to use it any further. The lifetime of the cell has at that point ended. This characteristic of EEPROM cells is described in European Patent Application Publication No. 349,775-Harari (1990).
Therefore, it is a principal object of the present invention, given a finite lifetime of individual EEPROM cells, to maximize the service lifetime of an entire mass storage EEPROM system.
This and additional objects are accomplished by the present invention, wherein, briefly and generally, the EEPROM array of cells is divided into two or more interchangeable banks of cells, each bank having one or more blocks of cells. A block is the smallest group of cells that is erasable or programmable at one time. A memory controller provides for interchanging such banks over the lifetime of the memory at times when it is detected that they are receiving significantly uneven use.
If such an interchange, or wear leveling, is not carried out where there is significantly uneven use among groups of EEPROM cells, one group will reach its end of lifetime while other groups have significant life left in them. When one group reaches an end of lifetime, the entire memory may have to be replaced unless extra groups of memory cells are included in the system for replacing those that reach their lifetime. However, the techniques of the present invention allow for extending overall memory system lifetime without having to provide such replacement groups of memory cells. The ability to interchange groups of cells to result in more even wear among the groups is particularly advantageous in computer system applications wherein flash EEPROM memory is used in the nature of a disk drive. This is because such a memory is subjected to frequent erase and reprogramming cycles in some groups but not others, and because the large capacity of the memory would require a large number of spare groups in order to obtain a reasonable memory system lifetime without use of the group interchange technique of the present invention.
Additional objects, advantages and features of the various aspects of the present invention will become apparent from the following description of its preferred embodiments, which description should be taken in conjunction with the accompanying drawings.
In order to set forth one environment in which the improved memory system of the present invention may be utilized, the major functions of such a memory system connected to a host computer are first described in conceptual terms.
In a specific form, each block is designed to contain a standard computer sector's worth of data plus some overhead fields. Blocks of data, indicated in dashed outline by a block 23, are received from the computer system over the bus 15, indicated to travel along a path 25. A logical address of a memory location for a block 23 to be written into is also sent by the computer system. This logical address is converted by an address translation table 27 into a physical memory address. A path 29 indicates the block within the memory 11 into which the data is to be written. The address translation table 27 simply converts a given logical address from the computer system into a corresponding physical address of a block within the memory 11 that is to receive that data. As explained later, the translation table 27 is reprogrammable by signals in a path 31 from a processing unit 33 to redirect data blocks of given logical addresses into different physical banks of the memory 11 in order to even out use of the banks.
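The translation table can be understood as a simple reprogrammable mapping from logical to physical block addresses. Below is a minimal sketch of that behavior in Python; the class and method names are illustrative assumptions, not taken from the patent.

```python
# Illustrative model of the address translation table 27: it converts
# a logical block address from the host into a physical block address,
# and individual entries can be reprogrammed (as over the path 31) to
# redirect data into a different physical bank.

class AddressTranslationTable:
    def __init__(self, num_blocks: int):
        # Initially the mapping is the identity: logical block i
        # resides in physical block i.
        self.logical_to_physical = list(range(num_blocks))

    def translate(self, logical_address: int) -> int:
        """Convert a host logical block address to a physical one."""
        return self.logical_to_physical[logical_address]

    def redirect(self, logical_address: int, new_physical: int) -> None:
        """Reprogram one entry, e.g. after banks are interchanged."""
        self.logical_to_physical[logical_address] = new_physical


table = AddressTranslationTable(num_blocks=1024)
assert table.translate(23) == 23
table.redirect(23, 512)          # block 23 now maps into another bank
assert table.translate(23) == 512
```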
In preparation for the processing circuits 33 to decide whether such redirection is required, information is first gathered of memory characteristics and usage. A running record 35 tabulates information from logical addresses of data blocks being directed to the memory system from the computer system. Another running record 37 tabulates information of physical block usage within the memory array 11 itself. The processing circuits 33 take this data from either or both of the records 35 and 37 and determine whether any data shifting among banks in the memory is required. Depending upon the specific application, the information available from monitoring the logical addresses of the data blocks being programmed is used, or that from the physical addresses, or both. The purpose of this conceptual description is to explain these major functions; a specific implementation is described later.
Among the types of information that may beneficially be acquired by the records 35 and 37 are the following:
(a) The total number of blocks of memory with which the computer system is working at the moment. The number of logical blocks recognized by the host computer operating system, noted in the record 35, can obviously be no more than the number of available physical blocks within interchangeable banks, noted in the record 37, and will generally be fewer in order to allow for some memory blocks becoming defective.
(b) For each of the available blocks, a record may be maintained of the number of times that the block was written since operation of the memory array 11 was first started. This number for physical blocks maintained by the record 37 will be higher than the logical number in record 35 because of overhead writes which the memory controller 13 may cause to occur.
(c) A total number of block writing cycles that have been initiated since the memory array 11 was first put into operation, the logical number in record 35 and the physical number in record 37.
(d) The total number of cycles experienced by the interchangeable banks, either by way of a total of all the blocks of each bank, or by way of an average number per bank. Both a logical record 35 and a physical record 37 of this may be maintained.
(e) Related to (d) is to maintain an identification of the banks having the minimum and the maximum number of cycles. The minimum and maximum numbers can then be quickly ascertained.
This provides a great deal of information from which the processing 33 can determine whether there is uneven wear among the various banks of memory cells. The records 35 or 37 may be stored in separate tables or, to the extent possible, maintained as part of the blocks to which the data pertains, in an overhead section of the information stored in the block. Where an accumulation of numbers must be made, it is preferable to keep running totals in order to minimize the amount of processing that is necessary when the wear leveling operation is performed. The processing 33 can use this information in a number of different ways to detect when one or more of the memory banks is being used considerably more frequently than one or more of the other memory banks.
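To make the record keeping concrete, the following is a hedged sketch of how the physical usage record 37 might maintain such running totals; the data layout is an assumption for illustration only.

```python
# Hypothetical model of the physical usage record 37, keeping running
# totals per block and per bank so that little computation is needed
# when the wear leveling decision is made.

from collections import defaultdict

class PhysicalUsageRecord:
    def __init__(self):
        self.block_writes = defaultdict(int)   # (b): writes per block
        self.total_writes = 0                  # (c): system-wide total
        self.bank_writes = defaultdict(int)    # (d): running total per bank

    def note_write(self, bank: int, block: int) -> None:
        """Update all running totals when a block is erased and rewritten."""
        self.block_writes[(bank, block)] += 1
        self.bank_writes[bank] += 1
        self.total_writes += 1

    def min_max_banks(self):
        """(e): identify the banks with minimum and maximum cycles."""
        least = min(self.bank_writes, key=self.bank_writes.get)
        most = max(self.bank_writes, key=self.bank_writes.get)
        return least, most


record = PhysicalUsageRecord()
record.note_write(bank=0, block=7)
record.note_write(bank=1, block=3)
record.note_write(bank=1, block=4)
print(record.min_max_banks())              # (0, 1)
```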
As an example of one alternative, the maximum usage of any of the banks is first noted and a calculation made of the total number of block writes which could have been accomplished if each bank of the memory 11 were used to exactly the same amount. This is the ideal, perfectly even wear of the memory that is a goal of the wear leveling process. This ideal is then compared with the total number of erase and write cycles that have actually occurred in the memory, the arithmetical difference being indicative of how far the system is operating from that ideal. A high difference value indicates a large imbalance in usage among the blocks. It may be calculated either from data acquired from the logical address records 35 or from the physical address records 37. In order to save memory, the logical address records 35 may be omitted entirely if the system speed is not unduly limited by the omission.
A wear leveling operation can be triggered by that difference exceeding a certain magnitude. Alternatively, that difference can be used in conjunction with other data before a wear leveling event is initiated. That other data includes static information of the ideal number of blocks that could be written during the life of the memory system if the wear were perfectly evenly distributed. A target for the total actual number of blocks to be stored over the lifetime of the memory is then determined, taking into account that perfectly even wear is not going to occur under any circumstances. This static difference between the ideal and target numbers of total block writes during the lifetime of the system is then compared with the actual difference number described above. When that calculated difference is about the same as or less than the static target difference, the memory is operating within its target parameters and no action is taken. However, when the calculated difference number exceeds the static target difference, the memory is not operating up to expectations. If it continues to operate in that manner, one or more blocks of the memory will reach their end of lifetime before the targeted number of user writes has been reached. Therefore, when the calculated difference exceeds the target difference by some amount, the wear leveling process is then initiated by the processing 33.
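A sketch of this trigger, assuming per-bank write totals are available from the records; the function name and the target figure used in the example are illustrative assumptions.

```python
# Compare actual usage against the perfectly even ideal: the ideal
# total is what would have been written had every bank been used as
# heavily as the most-used bank, so the shortfall from that ideal
# measures the imbalance among the banks.

def wear_leveling_needed(bank_writes: list[int],
                         static_target_difference: int) -> bool:
    ideal_total = max(bank_writes) * len(bank_writes)
    actual_total = sum(bank_writes)
    calculated_difference = ideal_total - actual_total
    # Act only when the calculated difference exceeds the static
    # difference between the ideal and target lifetime totals.
    return calculated_difference > static_target_difference


# One bank far ahead of the others triggers the process.
print(wear_leveling_needed([90_000, 10_000, 12_000, 11_000],
                           static_target_difference=100_000))   # True
```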
When wear leveling is accomplished, two main events occur. First, as indicated by a path 39, data is shifted among the banks of the memory 11 so that the data of a heavily used bank is exchanged with that of a lightly used bank. Second, the translation table 27 is updated through the path 31 so that the logical addresses of the shifted data are directed to their new physical bank locations.
As can be visualized from the foregoing, logical addresses whose data blocks were previously being written into a heavily used bank are thereafter written into a bank that has received comparatively little use, and vice versa, so that subsequent wear is distributed more evenly among the banks.
Some limitation should be imposed on how often the wear leveling process is allowed to take place. If it is performed again before the memory has experienced many operational cycles, the process will undesirably swap operation of the system back to the previous condition of maximum uneven wear. If the process is allowed to be immediately performed a further time, operation will swap back to the low wear level case, and so on. Unnecessary use of the wear leveling process simply adds to the wear of the memory, shortening its life rather than extending it. Therefore, some limitation is preferably imposed on how often the wear leveling process is performed, such as by allowing it only after many thousands of cycles have occurred since the last time. This is, in effect, a limitation upon the feedback system loop gain.
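Such a limitation might be expressed as a minimum number of cycles between leveling events, as in this short sketch; the interval value is an assumed placeholder for the "many thousands of cycles" mentioned above.

```python
# Cap on the feedback loop gain: refuse to run wear leveling again
# until many operational cycles have passed since the last event.

MIN_CYCLES_BETWEEN_LEVELING = 10_000   # assumed figure

def leveling_permitted(total_cycles_now: int,
                       cycles_at_last_leveling: int) -> bool:
    elapsed = total_cycles_now - cycles_at_last_leveling
    return elapsed >= MIN_CYCLES_BETWEEN_LEVELING
```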
Another example of a way of determining when wear leveling is necessary is to compare the number of block writes which have occurred to the present time in each of the memory banks, either by total number of block writes or some type of average of cycles of blocks within the bank, by monitoring the physical memory usage record 37. When those bank usage numbers are significantly different from each other, uneven wear among the banks is apparent. When these numbers become skewed in excess of a set threshold amount, then the wear leveling processing 33 is initiated.
It is this latter technique that is used in a specific implementation which will now be described.
The EEPROM cells are arranged on each of the integrated circuit chips 51, 53, etc., in four separate two-dimensional arrays of rows and columns of such cells. Referring to the circuit chip 51, for example, a small area 57 contains interfacing circuits, while four areas 59, 61, 63 and 65 provide separate arrays of rows and columns of memory cells arranged as quadrants of the chip. In this specific example, each of the quadrants 59, 61, 63 and 65 is designated as a memory bank, the smallest unit of memory that is swapped in order to improve wear leveling. A large number of such banks are provided in a typical memory system that can employ from a few to many EEPROM integrated circuit chips, with four such banks per chip. Each bank is, in turn, subdivided into memory blocks, such as the block 67 illustrated in the bank 61. Each bank can contain from several to hundreds of such blocks, depending, of course, on the density of the EEPROM cell formations on the chip, its size, and similar factors. Alternatively, but usually not preferred, each bank can have a single block.
The nature of each block is as follows: each block includes a data portion 69, in which a sector's worth of user data is stored, and a header 71 that contains overhead information about the block and its data.
A field 73 is included in the header 71 to maintain a count of the number of times that the block has been erased and rewritten. As part of an erasure and rewrite cycle, this count is updated by one. When data is swapped among memory banks in order to accomplish wear leveling, it is the data stored in the portion 69 of each block of a bank that is swapped. The header 71, including the cycle count field 73, remains with its physical block. The cycle count 73 starts with one the first time its respective block of a new memory is erased and written, and is incremented by one upon the occurrence of each subsequent cycle during the lifetime of the memory. It is the count field 73 of each block that is read and processed periodically throughout the lifetime of the memory in order to determine whether there is uneven wear among the various memory banks and, if so, how a leveling of that uneven use can be accomplished.
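The division of a block into a movable data portion and a header that stays with the physical block might be modeled as follows; the 512-byte sector size and field layout are assumptions for illustration.

```python
# Sketch of one memory block: the data portion 69 is what gets swapped
# during wear leveling, while the header 71, including the cycle count
# field 73, remains with the physical block.

from dataclasses import dataclass, field

@dataclass
class MemoryBlock:
    data: bytearray = field(default_factory=lambda: bytearray(512))  # portion 69
    cycle_count: int = 0    # field 73: erase/rewrite cycles to date

    def erase_and_write(self, new_data: bytes) -> None:
        """One erase and rewrite cycle; the count is updated by one."""
        self.cycle_count += 1
        self.data = bytearray(new_data)


block = MemoryBlock()
block.erase_and_write(b"\x00" * 512)
print(block.cycle_count)    # 1
```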
A process flow for the wear leveling operation begins with steps 75 and 77, which limit and control how often the operation is performed; in the step 77, the process awaits an appropriate command from the host computer system before proceeding.
Once begun, as indicated by a step 81, the cycle count field 73 of each data storage block in the system is read. An average block cycle count for each bank is then calculated from these numbers. The average cycle counts for each bank, as indicated by a step 83, are then compared to determine whether there is such an imbalance of use of the various banks that a wear leveling operation should take place.
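Steps 81 and 83 amount to a simple reduction of the per-block counts, sketched below under the assumption that the counts have already been read out of the count fields 73.

```python
# Step 81 gathers the cycle count of every block; step 83 reduces the
# counts of each bank to an average for comparison among the banks.

def average_bank_counts(per_block_counts: list[list[int]]) -> list[float]:
    """per_block_counts[i] holds the cycle counts of the blocks of bank i."""
    return [sum(counts) / len(counts) for counts in per_block_counts]


print(average_bank_counts([[100, 200], [5_000, 7_000], [10, 30]]))
# [150.0, 6000.0, 20.0]
```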
Even though the steps 75 and 77 provide a limitation and control on how often this process is accomplished, a step 85 shows a further limitation, which references the average bank usage count numbers. As will be explained more fully below, the system includes a spare bank of memory which is used in the wear leveling process. During each implementation of the process, the bank having the highest average block usage count is designated as the current spare bank. Thus, in order to prevent banks from being unnecessarily swapped back and forth, the count of the current spare bank, which has not been used for data storage purposes since the last wear leveling operation, provides a benchmark. Only if the usage of some other bank exceeds the record use carried by the current spare bank does the process continue.
A next step 87, in that case, compares the maximum and minimum bank usage numbers to determine whether they differ by more than some preset number A. If not, the wear leveling process reverts back to the step 77 wherein it awaits another command from the host computer system. If the difference does exceed A, however, then a swapping of banks of memory is accomplished in subsequent steps in order to even out the bank usage during future cycles. An example of the difference number A is 15,000 erase and write cycles. That number can vary considerably, however, depending upon the desired memory system operation. If the number is made too small, wear leveling cycles will occur too frequently, thus adding to the wear of the system since some overhead erase and rewrite cycles are required each time the wear leveling process is accomplished. If the number A is too large, on the other hand, the lifetime of the memory system is likely to be cut short by one or more banks reaching the lifetime limit of erase and rewrite cycles long before other banks approach that limit.
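Taken together, the step 85 benchmark test and the step 87 threshold test might look like the following sketch; the helper name is an assumption, and A carries the example value from the text.

```python
# Step 85: proceed only if some bank has exceeded the record use
# carried by the current spare bank. Step 87: require the difference
# between the maximum and minimum bank usage to exceed A.

A = 15_000   # example difference threshold given in the text

def swap_should_proceed(bank_averages: list[float], spare_bank: int) -> bool:
    in_use = [avg for i, avg in enumerate(bank_averages) if i != spare_bank]
    if max(in_use) <= bank_averages[spare_bank]:
        return False                      # step 85 benchmark not exceeded
    return max(in_use) - min(in_use) > A  # step 87 threshold test
```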
The remaining steps of the process accomplish the actual interchange of banks, as follows.
As a first step 89 of the leveling procedure, data stored in the bank with the minimum count is written into the current spare memory bank. The data of the most heavily used bank is then moved into the bank from which the minimum-count data was taken, and that most heavily used bank is designated as the new spare.
Finally, as indicated by a step 101, the translation table 27′ is updated so that blocks of data within the swapped banks are redirected to those new physical bank locations.
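The interchange and table update might be sketched as follows; only steps 89 and 101 are named in the text, so the intermediate data movement shown here is an assumed reconstruction consistent with the description above.

```python
# Interchange of banks: data from the least-used bank goes into the
# current spare (step 89), data from the most-used bank goes into the
# lightly worn bank it vacated, the most-used bank becomes the new
# spare, and the translation table is updated (step 101).

def level_banks(banks: dict[int, list[bytes]],
                table: dict[int, int],
                spare: int, least_used: int, most_used: int) -> int:
    banks[spare] = list(banks[least_used])       # step 89
    banks[least_used] = list(banks[most_used])   # assumed intermediate move
    # Step 101: redirect logical banks to their new physical homes.
    for logical, physical in table.items():
        if physical == least_used:
            table[logical] = spare
        elif physical == most_used:
            table[logical] = least_used
    return most_used                             # designated as the new spare
```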
Alternatively, a spare bank of EEPROM memory need not be designated for the wear leveling process, thus freeing up another bank for storing data. Data can simply be swapped between the banks experiencing the maximum and minimum cycles to date, and the translation table 27′ is then updated to redirect data accordingly. The controller buffer memory 47 can be used for temporary storage of data from the maximum and minimum use banks as data is being swapped between them. The count of the most heavily used bank is then remembered and used in the comparison step 85 when determining whether the imbalance is sufficient to justify the wear leveling process being performed. However, since the buffer memory 47 is usually volatile RAM, any power failure or significant power glitch occurring during the wear leveling process will cause data held in it to be lost. The use of the spare bank in the manner described above prevents such a data loss, since the data of each block being swapped remains in EEPROM memory at all times.
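For completeness, the spare-less alternative might look like the sketch below, with the controller buffer 47 modeled as an ordinary variable; as noted above, data held in such a volatile buffer is vulnerable to power loss mid-swap.

```python
# Direct exchange between the maximum- and minimum-use banks through a
# RAM buffer, followed by the translation table update.

def swap_via_buffer(banks: dict[int, list[bytes]],
                    table: dict[int, int],
                    least_used: int, most_used: int) -> None:
    buffered = banks[most_used]            # volatile copy in buffer 47
    banks[most_used] = banks[least_used]
    banks[least_used] = buffered
    for logical, physical in table.items():
        if physical == least_used:
            table[logical] = most_used
        elif physical == most_used:
            table[logical] = least_used
```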
Although the various aspects of the present invention have been described with respect to preferred embodiments thereof, it will be understood that the invention is entitled to protection within the full scope of the appended claims.
This application is a continuation of application Ser. No. 10/428,422, filed May 2, 2003, now U.S. Pat. No. 6,850,443, which in turn is a continuation of application Ser. No. 09/108,084, filed Jun. 30, 1998, now U.S. Pat. No. 6,594,183, which in turn is a continuation of application Ser. No. 07/759,212, filed Sep. 13, 1991, now U.S. Pat. No. 6,230,233 B1, issued on May 8, 2001, which applications are incorporated herein in their entirety by this reference.
Number | Name | Date | Kind |
---|---|---|---
4093985 | Das | Jun 1978 | A |
4430727 | Moore et al. | Feb 1984 | A |
4528683 | Henry | Jul 1985 | A |
4530054 | Hamstra et al. | Jul 1985 | A |
4562532 | Nishizawa et al. | Dec 1985 | A |
4563752 | Pelgrom et al. | Jan 1986 | A |
4608671 | Shimizu et al. | Aug 1986 | A |
4612640 | Mehrotra et al. | Sep 1986 | A |
4616311 | Sato | Oct 1986 | A |
4638457 | Schrenk | Jan 1987 | A |
4663770 | Murray et al. | May 1987 | A |
4682287 | Mizuno et al. | Jul 1987 | A |
4718041 | Baglee et al. | Jan 1988 | A |
4803707 | Cordan, Jr. | Feb 1989 | A |
4899272 | Fung et al. | Feb 1990 | A |
4922456 | Naddor et al. | May 1990 | A |
4924375 | Fung et al. | May 1990 | A |
4943962 | Imamiya et al. | Jul 1990 | A |
4947410 | Lippmann et al. | Aug 1990 | A |
4953073 | Moussouris et al. | Aug 1990 | A |
5034926 | Taura et al. | Jul 1991 | A |
5043940 | Harari | Aug 1991 | A |
5053990 | Kreifels et al. | Oct 1991 | A |
5065364 | Atwood et al. | Nov 1991 | A |
5095344 | Harari | Mar 1992 | A |
5103411 | Shida et al. | Apr 1992 | A |
5134589 | Hamano | Jul 1992 | A |
5138580 | Farrugia et al. | Aug 1992 | A |
5155705 | Goto et al. | Oct 1992 | A |
5163021 | Mehrotra et al. | Nov 1992 | A |
5168465 | Harari | Dec 1992 | A |
5172338 | Mehrotra et al. | Dec 1992 | A |
5193071 | Umina et al. | Mar 1993 | A |
5210716 | Takada | May 1993 | A |
5222109 | Pricer | Jun 1993 | A |
5245572 | Kosonocky et al. | Sep 1993 | A |
5263003 | Cowles et al. | Nov 1993 | A |
5267218 | Elbert | Nov 1993 | A |
5268870 | Harari | Dec 1993 | A |
5270979 | Harari et al. | Dec 1993 | A |
5272669 | Samachisa et al. | Dec 1993 | A |
5280447 | Hazen et al. | Jan 1994 | A |
5295255 | Malecek et al. | Mar 1994 | A |
5297148 | Harari et al. | Mar 1994 | A |
5303198 | Adachi et al. | Apr 1994 | A |
5341489 | Heiberger et al. | Aug 1994 | A |
5357473 | Mizuno et al. | Oct 1994 | A |
5371876 | Ewertz et al. | Dec 1994 | A |
5388083 | Assar et al. | Feb 1995 | A |
5430859 | Norman et al. | Jul 1995 | A |
5544118 | Harari | Aug 1996 | A |
5548554 | Pascucci et al. | Aug 1996 | A |
5568626 | Takizawa | Oct 1996 | A |
5630093 | Holzhammer et al. | May 1997 | A |
5663901 | Wallace et al. | Sep 1997 | A |
5726937 | Beard | Mar 1998 | A |
5930167 | Lee et al. | Jul 1999 | A |
6081447 | Lofgren et al. | Jun 2000 | A |
6230233 | Lofgren et al. | May 2001 | B1 |
6594183 | Lofgren et al. | Jul 2003 | B1 |
6850443 | Lofgren et al. | Feb 2005 | B2 |
Number | Date | Country |
---|---|---|
2840305 | Mar 1980 | DE |
3200872 | Jul 1983 | DE |
0349775 | Jan 1990 | EP |
0392895 | Oct 1990 | EP |
0398654 | Nov 1990 | EP |
0424191 | Apr 1991 | EP |
0492106 | Jul 1992 | EP |
0522780 | Jan 1993 | EP |
0569040 | Nov 1993 | EP |
0615193 | Sep 1994 | EP |
2251323 | Jul 1992 | GB |
2251324 | Jul 1992 | GB |
58215794 | Dec 1983 | JP |
58215795 | Dec 1983 | JP |
59045695 | Mar 1984 | JP |
59162695 | Sep 1984 | JP |
60179857 | Sep 1985 | JP |
60-212900 | Oct 1985 | JP |
61-165894 | Jul 1986 | JP |
62-229598 | Oct 1987 | JP |
62-245600 | Oct 1987 | JP |
62283496 | Dec 1987 | JP |
62283497 | Dec 1987 | JP |
S63-6600 | Jan 1988 | JP |
63183700 | Jul 1988 | JP |
1235075 | Jul 1989 | JP |
2189790 | Jul 1990 | JP |
2292798 | Dec 1990 | JP |
3025798 | Feb 1991 | JP |
3030034 | Feb 1991 | JP |
3283094 | Dec 1991 | JP |
4123243 | Apr 1992 | JP |
4243096 | Aug 1992 | JP |
5027924 | Feb 1993 | JP |
5028039 | Feb 1993 | JP |
5204561 | Aug 1993 | JP |
5241741 | Sep 1993 | JP |
2000346670 | Dec 2000 | JP |
9218928 | Oct 1992 | WO |
9311491 | Jun 1993 | WO |
9427382 | Nov 1994 | WO |
Number | Date | Country
---|---|---
20050114589 A1 | May 2005 | US |
Relation | Number | Date | Country
---|---|---|---
Parent | 10428422 | May 2003 | US
Child | 11028882 | | US
Parent | 09108084 | Jun 1998 | US
Child | 10428422 | May 2003 | US
Parent | 07759212 | Sep 1991 | US
Child | 09108084 | Jun 1998 | US