This invention relates generally to the operation of semiconductor non-volatile memory systems such as flash memory, and, more specifically, to the operation of such memory systems having very large erasable memory cell blocks and which access the blocks in much smaller units for programming and reading data.
There are many commercially successful non-volatile memory products being used today, particularly in the form of small form factor cards, which employ an array of flash EEPROM (Electrically Erasable and Programmable Read Only Memory) cells formed on one or more integrated circuit chips. A memory controller, usually but not necessarily on a separate integrated circuit chip, interfaces with a host to which the card is removably connected and controls operation of the memory array within the card. Such a controller typically includes a microprocessor, some non-volatile read-only-memory (ROM), a volatile random-access-memory (RAM) and one or more special circuits such as one that calculates an error-correction-code (ECC) from data as they pass through the controller during the programming and reading of data. Some of the commercially available cards are CompactFlash™ (CF) cards, MultiMedia cards (MMC), Secure Digital (SD) cards, Smart Media cards, personnel tags (P-Tag) and Memory Stick cards. Hosts include personal computers, notebook computers, personal digital assistants (PDAs), various data communication devices, digital cameras, cellular telephones, portable audio players, automobile sound systems, and similar types of equipment. Besides the memory card implementation, this type of memory can alternatively be embedded into various types of host systems.
Two general memory cell array architectures have found commercial application, NOR and NAND. In a typical NOR array, memory cells are connected between adjacent bit line source and drain diffusions that extend in a column direction with control gates connected to word lines extending along rows of cells. A memory cell includes at least one storage element positioned over at least a portion of the cell channel region between the source and drain. A programmed level of charge on the storage elements thus controls an operating characteristic of the cells, which can then be read by applying appropriate voltages to the addressed memory cells. Examples of such cells, their uses in memory systems and methods of manufacturing them are given in U.S. Pat. Nos. 5,070,032, 5,095,344, 5,313,421, 5,315,541, 5,343,063, 5,661,053 and 6,222,762.
The NAND array utilizes series strings of more than two memory cells, such as 16 or 32, connected along with one or more select transistors between individual bit lines and a reference potential to form columns of cells. Word lines extend across cells within a large number of these columns. An individual cell within a column is read and verified during programming by causing the remaining cells in the string to be turned on hard so that the current flowing through a string is dependent upon the level of charge stored in the addressed cell. Examples of NAND architecture arrays and their operation as part of a memory system are found in U.S. Pat. Nos. 5,570,315, 5,774,397, 6,046,935, and 6,522,580.
The charge storage elements of current flash EEPROM arrays, as discussed in the foregoing referenced patents, are most commonly electrically conductive floating gates, typically formed from conductively doped polysilicon material. An alternate type of memory cell useful in flash EEPROM systems utilizes a non-conductive dielectric material in place of the conductive floating gate to store charge in a non-volatile manner. A triple layer dielectric formed of silicon oxide, silicon nitride and silicon oxide (ONO) is sandwiched between a conductive control gate and a surface of a semi-conductive substrate above the memory cell channel. The cell is programmed by injecting electrons from the cell channel into the nitride, where they are trapped and stored in a limited region, and erased by injecting hot holes into the nitride. Several specific cell structures and arrays employing dielectric storage elements are described by Harari et al. in United States patent application publication no. 2003/0109093.
As in almost all integrated circuit applications, the pressure to shrink the silicon substrate area required to implement some integrated circuit function also exists with flash EEPROM memory cell arrays. It is continually desired to increase the amount of digital data that can be stored in a given area of a silicon substrate, in order to increase the storage capacity of a given size memory card and other types of packages, or to both increase capacity and decrease size. One way to increase the storage density of data is to store more than one bit of data per memory cell and/or per storage unit or element. This is accomplished by dividing a window of a storage element charge level voltage range into more than two states. The use of four such states allows each cell to store two bits of data, eight states store three bits of data per storage element, and so on. The charge level of a storage element controls the threshold voltage (commonly referenced as VT) of its memory cell, which is used as a basis of reading the storage state of the cell. A threshold voltage window is commonly divided into a number of ranges, one for each of the two or more storage states of the memory cell. These ranges are separated by guard bands that individually include a nominal sensing reference level for reading the storage states of the individual cells.
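By way of a concrete illustration of the multi-state scheme just described, the following minimal sketch shows how a two-bit-per-cell threshold window divided into four ranges could be read by comparing a cell's threshold voltage against nominal reference levels. The voltage values and the read_state function are assumptions made only for this example and are not taken from the description above.

```python
# Illustrative only: the reference voltages and the four-state encoding are
# assumed values, not taken from this description.
READ_REFERENCES = [0.5, 1.5, 2.5]   # three nominal reference levels divide the window into four ranges

def read_state(cell_vt):
    """Return the storage state (0-3, i.e. two bits) implied by a cell's threshold voltage."""
    state = 0
    for reference in READ_REFERENCES:
        if cell_vt > reference:
            state += 1
    return state

# A cell whose threshold voltage falls in the third range reads back as state 2.
assert read_state(2.0) == 2
```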
Multiple state flash EEPROM structures using floating gates and their operation are described in U.S. Pat. Nos. 5,043,940 and 5,172,338, and for structures using dielectric floating gates in aforementioned U.S. application Ser. No. 10/280,352. Selected portions of a multi-state memory cell array may also be operated in two states (binary) for various reasons, in a manner described in U.S. Pat. Nos. 5,930,167 and 6,456,528.
Memory cells of a typical flash EEPROM array are divided into discrete blocks of cells that are erased together. That is, the block is the erase unit, a minimum number of cells that are simultaneously erasable. Each block typically stores one or more pages of data, the page being the minimum unit of programming and reading, although more than one page may be programmed or read in parallel in different sub-arrays or planes. Each page typically stores one or more sectors of data, the size of the sector being defined by the host system. An example sector includes 512 bytes of user data, following a standard established with magnetic disk drives, plus some number of bytes of overhead information about the user data and/or the block in which they are stored. Such memories are typically configured with 16, 32 or more pages within each block, and each page stores one or just a few host sectors of data.
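The nesting of sectors within pages within erase blocks described above can be summarized with a few lines of arithmetic. In the sketch below, only the 512-byte user sector follows the text; the overhead size, sectors per page and pages per block are assumed example values.

```python
# Assumed example geometry; only the 512-byte user sector follows the text above.
SECTOR_USER_BYTES = 512
SECTOR_OVERHEAD_BYTES = 16        # assumed overhead per sector
SECTORS_PER_PAGE = 1              # "one or just a few" sectors per page
PAGES_PER_BLOCK = 32              # one of the example page counts mentioned above

sector_bytes = SECTOR_USER_BYTES + SECTOR_OVERHEAD_BYTES
block_bytes = sector_bytes * SECTORS_PER_PAGE * PAGES_PER_BLOCK
print(block_bytes)                # 16896 bytes of user plus overhead data per erase block
```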
In order to increase the degree of parallelism during programming user data into the memory array and read user data from it, the array is typically divided into sub-arrays, commonly referred to as planes, which contain their own data registers and other circuits to allow parallel operation such that sectors of data may be programmed to or read from each of several or all the planes simultaneously. An array on a single integrated circuit may be physically divided into planes, or each plane may be formed from a separate one or more integrated circuit chips. Examples of such a memory implementation are described in U.S. Pat. Nos. 5,798,968 and 5,890,192.
To further efficiently manage the memory, blocks may be linked together to form virtual blocks or metablocks. That is, each metablock is defined to include one block from each of several or all of the planes. Use of the metablock is described in United States patent application publication no. 2002-0099904. The metablock is identified by a host logical block address as a destination for programming and reading data. Similarly, all blocks of a metablock are erased together. The controller in a memory system operated with such large blocks and/or metablocks performs a number of functions including the translation between logical block addresses (LBAs) received from a host, and physical block numbers (PBNs) within the memory cell array. An intermediate quantity of logical block numbers (LBNs) may also be used, one LBN typically designating a range of LBAs that includes an amount of data equal to the storage capacity of one or more memory array blocks or of a metablock. Individual pages within the blocks are typically identified by offsets within the block address. Address translation often involves the use of logical block numbers (LBNs) and logical pages.
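The address translation chain just outlined, from a host LBA to an LBN and page offset and then, through a controller-maintained table, to a PBN, can be sketched as follows. The geometry constants, the table contents and the function name are hypothetical and serve only to illustrate the mapping.

```python
# Hypothetical geometry and mapping, for illustration of the translation chain only.
SECTORS_PER_PAGE = 1
PAGES_PER_BLOCK = 32

lbn_to_pbn = {0: 17, 1: 5, 2: 42}           # table maintained by the controller (assumed contents)

def translate(lba):
    """Return (physical block number, page offset) for a host logical block address."""
    logical_page = lba // SECTORS_PER_PAGE
    lbn = logical_page // PAGES_PER_BLOCK   # logical block number
    page_offset = logical_page % PAGES_PER_BLOCK
    return lbn_to_pbn[lbn], page_offset

print(translate(40))   # LBA 40 -> LBN 1, page offset 8 -> physical block 5, page 8
```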
In an ideal case, the data in all the pages of a block would be updated together by writing the updated data to the pages within an unassigned, erased block, and a logical-to-physical block number table would then be updated with the new address. The original block would then be available to be erased and placed in an erased block pool for future use. However, it is more typical that the data stored in a number of pages less than all of the pages within a given block must be updated. The data stored in the remaining pages of the given block remain unchanged. The probability of this occurring is higher in systems in which the number of pages of data stored per block is higher. One technique now used to accomplish such a partial block update is to write the data of the pages to be updated into a corresponding number of the pages of an erased block and then to copy the unchanged pages from the original block into pages of the new block. The original block may then be erased and added to the erased block pool. Over time, as a result of host data files being re-written and updated, many blocks can end up with relatively few of their pages containing valid data, the remaining pages containing data that are no longer current. In order to be able to efficiently use the data storage capacity of the array, logically related data pages of valid data are from time-to-time gathered together from fragments among multiple blocks and consolidated into a smaller number of blocks. This process is commonly termed “garbage collection.”
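The partial block update and its copy step described in this paragraph can be modeled in a few lines. The sketch below is illustrative only; representing blocks as Python lists and the erased block pool as a list of such blocks are assumptions made for the example.

```python
# Illustrative model of a partial block update; blocks are lists of page data and
# the erased block pool is a list of erased blocks (both assumptions).
def update_partial_block(original, updates, erased_pool):
    """original: list of page data; updates: {page_offset: new_data}."""
    new_block = erased_pool.pop()                   # take an unassigned, erased block
    for offset, old_data in enumerate(original):
        # write the updated data where provided, otherwise copy the unchanged page
        new_block[offset] = updates.get(offset, old_data)
    erased_pool.append([None] * len(original))      # original block erased, returned to the pool
    return new_block

pool = [[None] * 4]
print(update_partial_block(["a", "b", "c", "d"], {1: "B"}, pool))   # ['a', 'B', 'c', 'd']
```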
An alternative technique similarly writes the updated pages to a different block than the block containing the original data but eliminates the need to copy the other pages of data into the new block by appropriately marking the data to distinguish the updated data from the superseded data that are identified by the same logical address. This is a subject discussed in the aforementioned United States published application no. 2002-0099904. Then when the data are read, the updated data read from pages of the new block are combined with the unchanged data read from pages of the original block that remain current, and the invalid superseded data are ignored.
The memory system controller is preferably able, by its structure and controlling firmware, to cause data to be programmed and read under a variety of conditions imposed upon it by the host. At one extreme, audio, video or other streaming data can be received at a high rate of speed, and the memory system is called upon to store the data in real time. At another extreme, the host may cause the memory system to occasionally program one sector of data at a time or to program several single data sectors together that have non-sequential logical addresses. The same data sector can also be frequently updated. Such single sector programming can occur, for example, when a file allocation table (FAT) stored in the array is being written or updated. The problem presented by such operations on large erase block memories is that frequent garbage collection is required in order to efficiently utilize the storage capacity of the memory. The controller needs to suspend its primary function of transferring data in and out of the memory in order to perform garbage collection, thus adversely affecting system performance.
Accordingly, at least two different mechanisms are maintained for programming data according to the characteristics of write commands received from a host in order to increase the overall performance of the memory system. In general, the storage of non-sequentially addressed data is treated differently than the storage of sequentially addressed data, in ways that optimize memory system performance with either type of operation.
In an example implementation, a host command to program a single host unit of data (a sector being a common example), a small number of units with sequential logical addresses, or units of data with non-sequential logical addresses is handled differently than a host command to program a number of data units that is large, relative to the storage capacity of the individual logical blocks or metablocks, and which have sequential logical addresses. The single unit, small number of sequential data units or non-sequential data units are written to a first type of designated logical block or metablock while the larger number of sequential data units are written to a second type of designated logical block or metablock. The first type of designated block or metablock (referenced herein as E1) receives updates of data spread over a wide range of logical addresses while the second type of designated block or metablock (referenced herein as E2) receives updates of data stored in a range of logical addresses limited to a single block. Further, if single or non-sequential data units are more frequently updated than other data units with surrounding logical addresses, the updates may be stored in the first type of designated logical block or metablock that is dedicated to the logical address range of those units (referenced herein as dE1).
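A minimal sketch of this routing distinction is given below. The function name, the threshold for what counts as a small write and the notion of a set of "hot" logical block numbers are assumptions; only the three destinations E1, E2 and dE1 follow the description above.

```python
SMALL_WRITE_LIMIT = 4     # assumed threshold for a "small number" of sequential units

def choose_update_block(num_sectors, sequential, lbn, hot_lbns):
    """Return which type of designated block should receive the write."""
    if num_sectors <= SMALL_WRITE_LIMIT or not sequential:
        # frequently updated logical ranges may have their own dedicated block
        return "dE1" if lbn in hot_lbns else "E1"
    return "E2"           # large run of sequentially addressed data

print(choose_update_block(1, sequential=False, lbn=7, hot_lbns={7}))    # dE1
print(choose_update_block(24, sequential=True, lbn=3, hot_lbns={7}))    # E2
```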
A principal result of using these designated blocks is a reduction of the amount of data consolidation that currently becomes necessary when less data than may be stored in a block are updated. The updated data need not be immediately recombined in a single physical block with unchanged data of the same logical block, thus improving system performance. Such a recombination may be postponed until a time when it interferes less with the memory system's acquisition and storage of data. Further, the various designated blocks are preferably dynamically formed to accommodate the programming commands being received, thereby adapting the memory system for high performance operation in a wide variety of data storage applications.
Additional aspects, features and advantages of the present invention are included in the following description of exemplary embodiments, which description should be read in conjunction with the accompanying drawings. All patents, patent applications, articles, publications and other documents referenced herein are hereby incorporated herein by this reference in their entirety for all purposes.
If desired, a plurality of arrays 400, each with its associated X decoders, Y decoders, program/verify circuitry, data registers, and the like are provided, for example as taught by U.S. Pat. No. 5,890,192, issued Mar. 30, 1999, and assigned to SanDisk Corporation, assignee of the present application, which patent is hereby incorporated herein in its entirety by this reference. Related memory system features are described in co-pending patent application Ser. No. 09/505,555, filed Feb. 17, 2000 by Kevin Conley et al., which application is expressly incorporated herein in its entirety by this reference.
The external interface I/O bus 411 and control signals 412 can include the following:
This interface is given only as an example as other signal configurations can be used instead that provide the same functionality.
Data are transferred between the memory array and an external controller through the data register 404, which is coupled to the I/O bus AD[7:0] 411. The data register 404 is also coupled to the sense amplifier/programming circuit 454. The number of elements of the data register coupled to each sense amplifier/programming circuit element may depend on the number of bits stored in each storage element of the memory cells. In one popular form, flash EEPROM cells each contain one or more floating gates as charge storage elements. Each charge storage element may store a plurality of bits, such as 2 or 4, if the memory cells are operated in a multi-state mode. Alternatively, the memory cells may be operated in a binary mode to store one bit of data per storage element.
The row decoder 401 decodes row addresses for the array 400 in order to select the physical page to be accessed. The row decoder 401 receives row addresses via internal row address lines 419 from the memory control logic 450. A column decoder 402 receives column addresses via internal column address lines 429 from the memory control logic 450.
The flash memory array 400 is usually divided into two or more sub-arrays, herein referenced as planes, two such planes 400a and 400b being illustrated in
If the memory system is operated with binary states, where one bit of data is stored in each memory cell storage element, one sector of user data plus overhead data occupies 528 memory cells. If the memory cells are operated in four states, thus storing two bits of data per cell, only one-half as many cells are required to store a single data sector, or the same number of cells can store two data sectors such as where each cell stores one bit from each of two data sectors. Operation with a higher number of states per storage element further increases the data storage density of an array.
In some prior art systems having large capacity memory cell blocks that are divided into multiple pages, as discussed above, data of pages in a block that are not being updated need to be copied from the original block to a new block that also contains the new, updated data that has been written by the host. This technique is illustrated in
With reference to
The LBN of the data in each page may be stored as overhead data in that page, as done in some commercial flash memory products. The controller then builds a table in the form of that shown in
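However that table is ultimately represented, the idea of rebuilding it at initialization from the LBN stored in each page's overhead can be sketched as follows; the data structures and function name are assumptions for illustration rather than the form used in any commercial product.

```python
# Illustrative rebuild of a logical-to-physical table from per-page overhead.
def build_lbn_table(scanned_blocks):
    """scanned_blocks: {pbn: LBN read from the block's page overhead, or None if erased}."""
    table = {}
    for pbn, lbn in scanned_blocks.items():
        if lbn is not None:            # skip erased blocks
            table[lbn] = pbn
    return table

# Hypothetical scan result: physical blocks 0-2 hold logical blocks 2, 0 and 1.
print(build_lbn_table({0: 2, 1: 0, 2: 1, 3: None}))   # {2: 0, 0: 1, 1: 2}
```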
In other prior art systems that operate differently than described with respect to
Various flags are typically stored as overhead in the same physical page as the other associated overhead data, such as the LBN and an ECC field. Thus, to program the old/new flags in pages where the data has been superseded requires that a page support multiple programming cycles. That is, the memory array must have the capability for its individual pages to be programmed in at least two stages between erasures. Furthermore, the block must support the ability to program a page when other pages in the block with higher offsets or addresses have already been programmed. A limitation of some flash memories, however, prevents the usage of such flags by specifying that the pages in a block can only be programmed in a physically sequential manner. Furthermore, in such flash memories, the pages support a finite number of program cycles and in some cases additional programming of programmed pages is not permitted. There are many different types of flash EEPROM, each of which presents its own limitations that must be worked around to operate a high performance memory system formed on a small amount of integrated circuit area.
What is needed is a mechanism of optimally managing data based on host usage data patterns.
The trend in the development of flash EEPROM systems is to increase significantly the number of memory cells, and thus the amount of data stored, in the individual blocks in order to reduce the cost of manufacturing the integrated memory circuit chips. A block size of something like 512 or 1024 sectors of data (528 bytes each) is contemplated, thus having an individual capacity of 270,336 or 540,672 bytes of user and overhead data. If only one sector is included in a page, then there are the same number of pages as sectors, but the trend is also to increase the amount of data that is programmed as part of one programming operation by including two, or perhaps more, data sectors in each page, in which case the number of pages in a block that stores a given number of sectors of data is reduced. But regardless of the details of any particular implementation, the existing techniques described above for updating only a portion of the data in a block increase the adverse effect on memory performance and/or capacity utilization as the data storage capacity of the individual block increases.
It can be seen that if the data in only a few of the 512 or so pages of a block are being updated, the existing technique described with respect to
Therefore, according to one aspect of the present invention, at least one block is provided in each plane of the memory for receiving such small amounts of updates to the data of some or all of the other blocks in the plane. In a memory plane illustrated in
With reference to
At the time of storing the updated data, the original data in pages 7-10 of block 3 become obsolete, in this example. When reading the data of block 3, therefore, the memory system controller needs to also identify the updated pages 7-10 from the E1 block and use their data in place of that of pages 7-10 in the original block. An address map is maintained in a fast volatile memory of the controller for this purpose. Data for the address map are obtained upon initialization of the system from the overhead data of the pages in at least a portion of the system or other stored data in the non-volatile memory, including data in the E1 block. These data include the LBN of each page, which is commonly included as part of its overhead data. Since the pages are not constrained to be written in any particular order in the E1 block, the overhead of each data page also preferably includes its logical page offset within the block. The data of the address map are then updated from the overhead fields of any data pages that are changed in the E1 block.
It has been assumed so far that there is only one update of any given page in the E1 block. This may be the case in some applications but not in others. In the example of
As a more general, alternate way to identify the most current of two pages having the same LBN and page offset, the overhead of each page may additionally contain an indication of its time of programming, at least relative to the time that other pages with the same logical address are programmed. This allows the controller to determine, when reading data from a particular block of the memory, the relative ages of the pages of data that are assigned the same logical address. This technique allows the updated pages to be written into the E1 block in any order, in a memory system that allows this. It can also make it easier to operate a memory system with more than one E1 block in a single plane. This way of distinguishing old from current data is described more fully in aforementioned United States patent application publication no. 2002-0099904.
There are several ways in which the time stamp may be recorded in the individual pages. The most straightforward way is to record, when the data of its associated page is programmed, the output of a real-time clock in the system. Later programmed pages with the same logical address then have a later time recorded. But when such a real-time clock is not available in the system, other techniques can be used. One specific technique is to store the output of a modulo-N counter as the time stamp. The range of the counter should be one more than the number of pages that are contemplated to be stored with the same logical page number. When updating the data of a particular page in the original block 3 of
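A minimal sketch of the modulo-N counter idea, for the common case in which at most two copies of a page (an original and one update) exist at once so that N may be 3, is given below. The assumption that the stamp written with an update is one greater, modulo N, than the stamp of the page it supersedes is made for this example and is consistent with, but not quoted from, the description above.

```python
N = 3   # one more than the number of copies of a page expected to exist at once (assumed)

def next_stamp(previous_stamp):
    """Stamp to write into an updated page, given the stamp of the page it supersedes."""
    return (previous_stamp + 1) % N

def is_newer(stamp_a, stamp_b):
    """True if the page stamped stamp_a was written after the page stamped stamp_b."""
    return stamp_a == (stamp_b + 1) % N

assert next_stamp(2) == 0      # the counter wraps around
assert is_newer(0, 2)          # ...so the copy stamped 0 supersedes the copy stamped 2
assert not is_newer(2, 0)
```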
The controller, when called upon to read the data, easily distinguishes between the new and superseded pages' data by comparing the time stamp counts in the overhead of pages having the same LBA and page offset. In response to a need to read the most recent version of a data file, data from the identified new pages are then assembled, along with original pages that have not been updated, into the most recent version of the data file.
An example of the structure of a single sector of data stored in the individual pages of
The E1 block is used for updates when the number of pages being updated, for example, by a single host command is small in comparison with the total number of pages in the individual block. When a sufficiently large proportion of the pages of a block are being updated, it is then more efficient to use the existing technique instead, described with respect to
The optimum proportion or number of updated pages that serves as a decision criterion between invoking the two updating techniques may differ between memory systems that are constructed and/or operated in different ways. But having a fixed criterion is the most convenient to implement. For example, if the number of pages being updated is less than 50 percent of the total number of pages in the block but at least one page is being updated, the new technique of
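The fixed criterion mentioned as an example can be expressed directly; in the sketch below the 50 percent figure and the requirement that at least one page be updated follow the text, while the function name and the argument names are assumptions.

```python
def use_e1_block(pages_updated, pages_per_block):
    """True if the update is small enough to be written to the E1 block."""
    return 0 < pages_updated < 0.5 * pages_per_block

print(use_e1_block(4, 32))    # True: a small update goes to the E1 block
print(use_e1_block(20, 32))   # False: rewrite the data into a new block instead
```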
Once a decision is made by the controller to direct incoming data to the E1 block, the nature of the programming operation may be detected, after writing one or more pages into the E1 block, to be better directed to the E2 block. An example situation is when sequential write commands are discovered to be writing sequential pages in a single block, one or a few pages at a time. This can be noted by the controller after a few such pages are written into the E1 block, after which further writes to the E1 block are stopped and the remaining sequential writes are directed instead to the E2 block. Those pages already written to the E1 block are then transferred to the E2 block. This procedure reduces the likelihood of having to consolidate the pages of the block E1 as a result of this programming operation. Alternatively, in a case where the sequential page writes begin in an erased E1 block, that block may be converted to an E2 block.
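A minimal sketch of how such a sequential pattern might be detected after a few pages have already gone to the E1 block is shown below; the run length that triggers redirection and the function name are assumptions, not values from the description.

```python
SEQUENTIAL_RUN_LIMIT = 3     # assumed: redirect after three consecutively addressed pages

def should_redirect_to_e2(recent_page_offsets):
    """recent_page_offsets: offsets, in write order, of the last few pages written to E1."""
    if len(recent_page_offsets) < SEQUENTIAL_RUN_LIMIT:
        return False
    tail = recent_page_offsets[-SEQUENTIAL_RUN_LIMIT:]
    return all(b == a + 1 for a, b in zip(tail, tail[1:]))

print(should_redirect_to_e2([5, 0, 1, 2]))   # True: offsets 0, 1, 2 form a sequential run
```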
As described above, updated pages of data of one block are preferably stored in pages of the E2 block having the same offsets as in the original block. As an alternative that is suitable for some applications, however, the system controller can store pages in the E2 block without regard for their offsets within the original block. The pages can, in this alternative, be stored in order beginning with page P0 of the E2 block. This adopts one characteristic of the E1 block that is different from the usual blocks but will still not allow more than one copy of any page of data to be stored in the E2 block. When this alternative type of E2 block is used, data consolidation may be more complex since the out of order pages in the E2 block will be transferred into pages of another block having the page offsets of their original block, in order to combine these updated pages with unchanged pages of the original block.
In order to be able to limit the number of blocks in a system that are set aside to serve as E1 blocks, it is desirable that they be used efficiently so that there are an adequate number of erased E1 block pages available to satisfy an expected demand for small partial block updates. Therefore, the pages of data of a logical block that are stored in a primary physical block and the E1 block are intermittently consolidated. This removes at least some of the updated data pages belonging to that logical block from the E1 block, thus making these pages available for future use. They are consolidated into a single physical block.
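The consolidation just described, gathering the current version of each page of one logical block from its primary physical block and from the E1 block into a single destination block, can be sketched as follows; the dictionary representation of pages is an assumption made for the example.

```python
def consolidate(original_pages, e1_updates, pages_per_block):
    """original_pages and e1_updates: {page_offset: data} for one logical block."""
    destination = {}                               # the single physical block receiving the result
    for offset in range(pages_per_block):
        if offset in e1_updates:                   # prefer the updated copy held in the E1 block
            destination[offset] = e1_updates[offset]
        elif offset in original_pages:
            destination[offset] = original_pages[offset]
    return destination                             # the E1 pages for this block may now be reclaimed

original = {0: "a", 1: "b", 2: "c", 3: "d"}
print(consolidate(original, {1: "B", 2: "C"}, 4))  # {0: 'a', 1: 'B', 2: 'C', 3: 'd'}
```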
An example of such an erase consolidation operation (garbage collection) is given in time sequential block diagrams of
As a first step in an erase consolidation operation to free up some of the pages of the E1 block, data from the four pages P5-P8 of block 10 are copied into pages P7-P10 of the designated E2 block 13 (bottom diagram of
Although the data in pages P5-P8 of the E1 block 10 are no longer necessary (
There are several triggering events that can be used by the memory system controller to initiate the erase consolidation operation described above with respect to
Another event that can be used to trigger the erase consolidation of
Also, when a block of data having updated data pages in the E1 block needs to be refreshed, if this data refresh is part of the memory's operation, its refreshing can include the erase consolidation operation of
The consolidation operation described with respect to
In the example of
A next step, illustrated in
The erase consolidation operation illustrated by
The erase consolidation operations involving the E1 block, as described with respect to
For systems where such frequent updates of significant amounts of data take place, a small amount at a time, the performance of the memory system is improved by designating more than one E1 block for a region of the memory that is subjected to these frequent updates. For a range of LBAs to which the host stores primarily or only such frequently updated data, an E1 block or metablock can even be dedicated for use with only that block. This is done when a resulting improved performance is worth the cost of having to remove one or more additional blocks from general service in order to serve as additional or dedicated E1 (dE1) blocks. This is often the case for blocks storing FAT tables, block overhead data, and the like. The designation of additional or dedicated E1 blocks can result in a substantial reduction in the frequency at which the erase consolidation operation of
The continued use of a dedicated E1 block will, of course, cause it to eventually become full. Certain pages of the data block to which an E1 block is dedicated can be rewritten multiple times before the E1 block becomes full. Each page is written into the next available erased page of the E1 block, in this example, and its page offset within the original data block stored as part of the overhead data for the block. Shortly before or at the time that any dedicated E1 block becomes full, a consolidation operation takes place to rewrite the data block to include the most current pages from its E1 block and any unchanged data pages. An example of this is given in
In
However, it is not required that the data pages in the E2 block 13 have the same address offsets as the pages that are updated or copied. It is sufficient that they remain in the same relative order. For example, the first data page P0 may be stored in the E2 block 13 of
The designation and use of the additional E1 blocks in the memory plane of
As another example of when the need for a dE1 block exists, a much higher use of the consolidation process of
As another example of dynamically establishing dE1 blocks, the memory controller can be programmed to distinguish frequently updated types of data from less frequently updated types of data, and direct such data to the appropriate blocks. For example, the memory controller can recognize when a single sector of data is being sent by the host with individual commands, typical of entries for a FAT table that are frequently updated, as compared with the host sending multiple sectors of data with a single command, typical of user data that is not so frequently updated. When single sectors of data are received, they are then mapped to the physical block(s) to which a dE1 block is dedicated. When multiple sectors of data are received by the memory system as a unit, they are sent to data blocks that share an E1 block with other data blocks. This non-dedicated E1 block contains data from multiple LBNs.
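A minimal sketch of this steering of single-sector writes toward a dedicated dE1 block, with hypothetical names and argument types, is given below:

```python
def select_update_block(sector_count, lbn, de1_lbns):
    """de1_lbns: logical block numbers that have a dedicated dE1 block assigned."""
    if sector_count == 1 and lbn in de1_lbns:
        return ("dE1", lbn)        # dedicated block for this frequently updated logical block
    return ("E1", None)            # shared E1 block holding data from multiple LBNs

print(select_update_block(1, lbn=0, de1_lbns={0}))    # ('dE1', 0): single-sector FAT-style write
print(select_update_block(8, lbn=0, de1_lbns={0}))    # ('E1', None): multi-sector user data write
```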
The techniques of the present invention may also be applied to memory architectures having one or multiple planes that are logically divided into zones but the use of zones is not necessary for implementing the present invention. In the case of multiple planes, the individual zones extend across the planes. Metablocks may or may not be used. An example of a memory system defining logical zones across multiple planes is schematically shown in
The unit of operation of the memory system of
One or more blocks within each zone can be allocated for use as the block(s) E1 and block(s) E2 for the other blocks in the zone. In the example illustrated in
In a typical allocation of physical blocks E1, dE1 and E2 for receiving updated data pages within discrete ranges of logical block addresses, a number of such blocks are allocated for use with each of two or more non-overlapping sets of contiguous logical block addresses, such as typically occurs when the system has its blocks or metablocks organized into the zones described above. The same rules are then most conveniently applied with each set of logical block addresses as to when and how their allocated E1, dE1 and E2 blocks or metablocks are used.
However, the use of any or all of the physical E1, dE1 and E2 blocks need not follow such constraints. For example, the rules of when and how these special blocks are used can be different for one range of logical block addresses than for another range. This allows recognition of different typical host usage patterns or types of data that are stored in different ranges of host logical addresses. Further, the range of logical block addresses to which a particular set of E1, dE1 and E2 blocks is dedicated need not be entirely contiguous. One range of logical block addresses can even be made to overlap with another range, with the rules for using the E1, dE1 and E2 blocks dedicated to each overlapping range being different. In this latter case, programming by the host of data with logical block addresses within the overlapping ranges are eligible for storage in one of two or more sets of physical E1, dE1 or E2 blocks, depending upon which set of rules is satisfied by the host operation. Preferably, the rules are set so that any one programming operation in an overlapping logical address range is eligible for storage in only one of the possible sets of E1, dE1 or E2 blocks.
Data stored in an E2 block, when a portion is further updated, are stored in an E1 block designated for the same data logical addresses. Of course, updated data having logical addresses designated for an E2 block can be stored in an E1 block without first having to store data of that logical address within the E2 block. The selection of the E1 or E2 block for storing updated data is dependent upon the criteria discussed above.
The flowchart of
But if it is determined in the step 105 that the new write is not of sectors that are a continuation of a previous sequence of sectors, it is then determined in a step 109 whether the gap of logical sector addresses involves a small number of sectors, such as one or just a few. If the gap is less than some preset number of sectors, then these unchanged data sectors are copied into the E2 block in order, as indicated by a step 111, and the new data sectors are written to the E2 block per the step 107. This copying of a few sectors of unchanged data into the existing E2 block may be preferable to treating the subsequent write as totally disconnected from the prior write because of a gap of only a small number of sectors.
But if the logical address gap is larger than the preset number of sectors, then a next step 113 considers whether a new E2 block should be created for an LBN range that includes the pending write operation. The statistics considered can include an analysis of stored or recent write transactions, or may be as simple as determining whether or not the number of sectors of the current write operation is greater than a fixed or dynamically determined threshold number. This threshold number may be, for example, less than the number of data sectors stored in a block but more than one-half or three-quarters of a block's capacity. If it is determined from the defined statistics to create a new E2 block, this is done in a step 115 and the data of the pending write are then programmed into the new E2 block in the step 107. The process for allocating a new E2 block is explained below with respect to the flowchart of
This discussion is based upon the determination being made in the step 103 that there is room in an existing E2 block for the number of data sectors of the pending write operation. If there is not enough room, however, the processing jumps to the step 113 to determine whether a new E2 block should be created.
If it is determined in the step 113 that a new E2 block should not be created for the data of the pending write operation, it may likely be because the number of sectors is less than (or equal to) the preset threshold number used in the step 113 and thus more appropriately written to an E1 block. In a next step 117, therefore, it is determined whether an existing dE1 block exists for an LBA range in which the current data sectors reside. If so, these data sectors are programmed into that dE1 block, in a step 119. But if not, it is then determined in a step 121 whether a new dE1 block should be created. This determination may be based upon an analysis of stored operations, logical or physical, or may be based upon instantaneous operations. The statistics considered may include the occurrences of garbage collection operations, number of E1 blocks created or the number of non-sequential single sector write operations within a given LBA range. A dE1 block is dedicated for a certain block when the number and frequency of updates of data in that certain block are much higher than is typical for a majority of the blocks in the system. If a dE1 is to be allocated as a result, this is done in a step 123 in accordance with the flowchart of
But if there is no room in an existing dE1 block and it is determined not to allocate another, then room in an existing E1 block associated with the LBA of the pending write operation is sought, in a step 125. E1 blocks may be individually limited to storing data sectors of a specific range of LBAs in order to make them easier to manage, such as providing one in each plane as described above. Alternately, E1 blocks may store data with LBAs from anywhere within the array. If there is enough space for the data, then, in a next step 127, that data is written into the E1 block associated with the LBA of the data.
If, on the other hand, the E1 block associated with the LBA of the data does not have room, space is sought in another E1 block, as indicated by a step 129. If such space exists, a decision is then made as to whether to expand the range of LBAs handled by the other E1 block to include the LBA of the current data. Depending upon usage patterns of the host, it may be better either to store data with any LBAs in any of several or all of the E1 blocks in the system, or to strictly limit each E1 block to a set range of LBAs. If it is decided to write the data to another E1 block, this is done in the step 127. However, if it is decided not to write the data to another E1 block, then a new E1 block is allocated, per step 131, and the data written into it in the step 127.
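The branching of the write-handling flow described in the preceding paragraphs (steps 103 through 131) can be summarized in one routine. The sketch below returns a routing decision rather than performing any writes; the thresholds, block geometry, argument structure and function name are assumptions, and the step 121 decision of whether to create a new dE1 block is omitted for brevity. Only the order of the branches follows the description.

```python
SMALL_GAP = 4                  # step 109: "one or just a few" sectors, assumed value
NEW_E2_THRESHOLD = 24          # step 113: assumed threshold, e.g. three-quarters of a 32-sector block
SECTORS_PER_BLOCK = 32         # assumed geometry

def route_write(first_lba, count, e2, de1_lbns, e1_has_room):
    """e2: {'free': ..., 'next_lba': ...} for the open E2 block, or None if there is none."""
    if e2 and e2["free"] >= count:                        # step 103: room in an existing E2 block?
        if first_lba == e2["next_lba"]:                   # step 105: continuation of the sequence
            return "write to E2"                          # step 107
        gap = first_lba - e2["next_lba"]
        if 0 < gap <= SMALL_GAP:                          # step 109: only a small address gap
            return "copy gap sectors into E2, then write to E2"   # steps 111, 107
    if count >= NEW_E2_THRESHOLD:                         # step 113: treat as a large sequential write
        return "allocate a new E2 block and write to it"  # steps 115, 107
    lbn = first_lba // SECTORS_PER_BLOCK
    if lbn in de1_lbns:                                   # step 117: a dedicated dE1 block exists
        return "write to dE1"                             # step 119
    # (the step 121 decision of whether to create a new dE1 block is omitted here)
    if e1_has_room:                                       # steps 125, 129: room in an E1 block
        return "write to E1"                              # step 127
    return "allocate a new E1 block and write to it"      # steps 131, 127

print(route_write(64, 1, {"free": 10, "next_lba": 40}, de1_lbns={2}, e1_has_room=True))   # write to dE1
```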
A routine for allocating a new block in any of the steps 115, 123 or 131 is shown by the flowchart of
If a new E1 block is being created, when the processing of
If a new dE1 block is being created (step 123 of
The step 149 is also reached from the step 143 when neither a new dE1 nor a new E1 block is being created, and from the step 139 when the maximum number of E1 blocks has not been created. If it is determined in the step 149 to deallocate a dE1 block, then this is done in the step 147 and that block is reallocated in the step 135 as one of the new E1 or dE1 blocks being created. But if an existing dE1 block is not to be deallocated, then step 151 is reached wherein an E2 block is deallocated, followed by assigning that block as the new E1 or dE1 block. When assigned as a dE1 block, it may be assigned an LBN range that exceeds that of a single other physical block.
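A loose sketch of the allocation idea in steps 133 through 151, namely that when the maximum number of special blocks of the requested kind is already in use an existing dE1 or E2 block is deallocated (after consolidation of its data) and reused, is given below. The limits, data structures and the simple preference for giving up a dE1 block stand in for the flowchart's detailed decisions and are assumptions made only for illustration.

```python
MAX_BLOCKS = {"E1": 2, "dE1": 2, "E2": 2}     # assumed limits on each kind of special block

def allocate_special_block(kind, in_use, erased_pool):
    """in_use: {'E1': [...], 'dE1': [...], 'E2': [...]} of PBNs; erased_pool: free PBNs."""
    if len(in_use[kind]) >= MAX_BLOCKS[kind]:
        # an existing special block must be given up; preferring a dE1 block here is a
        # simple stand-in for the step 149 decision, with step 151 falling back to an E2 block
        victim_kind = "dE1" if in_use["dE1"] else "E2"
        victim = in_use[victim_kind].pop()      # its data are assumed already consolidated
        erased_pool.append(victim)              # the victim block is erased and becomes reusable
    pbn = erased_pool.pop()                     # take an erased block for the new special block
    in_use[kind].append(pbn)
    return pbn

pool = [100]
used = {"E1": [3, 4], "dE1": [7], "E2": [12]}
print(allocate_special_block("E1", used, pool))   # 7: the dE1 block is given up and reused as the new E1 block
```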
Although the invention has been described with respect to various exemplary embodiments, it will be understood that the invention is entitled to protection within the full scope of the appended claims.
This application is a divisional of U.S. application Ser. No. 10/749,831, filed on Dec. 30, 2003, U.S. Publication No. 2005/0144358A1, which application is incorporated herein in its entirety by this reference.
Number | Name | Date | Kind |
---|---|---|---|
5043940 | Harari | Aug 1991 | A |
5070032 | Yuan et al. | Dec 1991 | A |
5095344 | Harari | Mar 1992 | A |
5172338 | Mehrotra et al. | Dec 1992 | A |
5313421 | Guterman et al. | May 1994 | A |
5315541 | Harari et al. | May 1994 | A |
5341330 | Wells et al. | Aug 1994 | A |
5341489 | Heiberger et al. | Aug 1994 | A |
5343063 | Yuan et al. | Aug 1994 | A |
5404485 | Ban | Apr 1995 | A |
5457658 | Niijima et al. | Oct 1995 | A |
5479638 | Assar et al. | Dec 1995 | A |
5481691 | Day, III et al. | Jan 1996 | A |
5485595 | Assar et al. | Jan 1996 | A |
5519843 | Moran et al. | May 1996 | A |
5537577 | Sugimura et al. | Jul 1996 | A |
5541886 | Hasbun | Jul 1996 | A |
5570315 | Tanaka et al. | Oct 1996 | A |
5572466 | Sukegawa | Nov 1996 | A |
5598370 | Niijima et al. | Jan 1997 | A |
5648929 | Miyamoto | Jul 1997 | A |
5649200 | Leblang et al. | Jul 1997 | A |
5661053 | Yuan | Aug 1997 | A |
5687345 | Matsubara et al. | Nov 1997 | A |
5774397 | Endoh et al. | Jun 1998 | A |
5798968 | Lee et al. | Aug 1998 | A |
5835935 | Estakhri et al. | Nov 1998 | A |
5838614 | Estakhri et al. | Nov 1998 | A |
5845313 | Estakhri et al. | Dec 1998 | A |
5860090 | Clark | Jan 1999 | A |
5890192 | Lee et al. | Mar 1999 | A |
5896393 | Yard et al. | Apr 1999 | A |
5907856 | Estakhri et al. | May 1999 | A |
5930167 | Lee et al. | Jul 1999 | A |
5986933 | Takeuchi et al. | Nov 1999 | A |
5987563 | Itoh et al. | Nov 1999 | A |
5999947 | Zollinger et al. | Dec 1999 | A |
6000004 | Fukumoto | Dec 1999 | A |
6040997 | Estakhri | Mar 2000 | A |
6046935 | Takeuchi et al. | Apr 2000 | A |
6115785 | Estakhri et al. | Sep 2000 | A |
6122195 | Estakhri et al. | Sep 2000 | A |
6125435 | Estakhri et al. | Sep 2000 | A |
6134151 | Estakhri et al. | Oct 2000 | A |
6151247 | Estakhri et al. | Nov 2000 | A |
6161163 | Komatsu et al. | Dec 2000 | A |
6219768 | Hirabayashi et al. | Apr 2001 | B1 |
6222762 | Guterman et al. | Apr 2001 | B1 |
6330634 | Fuse et al. | Dec 2001 | B1 |
6426893 | Conley et al. | Jul 2002 | B1 |
6449625 | Wang | Sep 2002 | B1 |
6456528 | Chen | Sep 2002 | B1 |
6507885 | Lakhani et al. | Jan 2003 | B2 |
6522580 | Chen et al. | Feb 2003 | B2 |
6567307 | Estakhri | May 2003 | B1 |
6598171 | Farmwald et al. | Jul 2003 | B1 |
6662264 | Katayama et al. | Dec 2003 | B2 |
6721843 | Estakhri | Apr 2004 | B1 |
6763424 | Conley et al. | Jul 2004 | B2 |
6775423 | Kulkarni et al. | Aug 2004 | B2 |
6834331 | Liu | Dec 2004 | B1 |
6839798 | Nagayoshi et al. | Jan 2005 | B1 |
6871259 | Hagiwara et al. | Mar 2005 | B2 |
6925007 | Harari et al. | Aug 2005 | B2 |
6968421 | Conley | Nov 2005 | B2 |
6978342 | Estakhri et al. | Dec 2005 | B1 |
7020739 | Mukaida et al. | Mar 2006 | B2 |
7024514 | Mukaida et al. | Apr 2006 | B2 |
7062630 | Otake et al. | Jun 2006 | B2 |
7107388 | Zimmer et al. | Sep 2006 | B2 |
7328301 | Eilert et al. | Feb 2008 | B2 |
20020034105 | Kulkarni et al. | Mar 2002 | A1 |
20020099904 | Conley | Jul 2002 | A1 |
20020194451 | Mukaida et al. | Dec 2002 | A1 |
20030018847 | Katayama et al. | Jan 2003 | A1 |
20030028704 | Mukaida et al. | Feb 2003 | A1 |
20030109093 | Harari et al. | Jun 2003 | A1 |
20030110343 | Hagiwara et al. | Jun 2003 | A1 |
20040030825 | Otake et al. | Feb 2004 | A1 |
20050144358 | Conley et al. | Jun 2005 | A1 |
20050144361 | Gonzalez et al. | Jun 2005 | A1 |
Number | Date | Country |
---|---|---|
1 189 139 | Mar 2002 | EP |
2 742 893 | Jun 1997 | FR |
11-053248 | Feb 1999 | JP |
2000-122923 | Apr 2000 | JP |
2000-285017 | Oct 2000 | JP |
2002-202912 | Jul 2002 | JP |
2003-006044 | Oct 2003 | JP |
2004-533029 | Oct 2004 | JP |
WO 9420906 | Sep 1994 | WO |
WO 9422075 | Sep 1994 | WO |
WO 0205285 | Jan 2002 | WO |
WO 0249309 | Jun 2002 | WO |
WO 02058074 | Jul 2002 | WO |
Entry |
---|
EPO/ISA, “Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration”, mailed in corresponding PCT/US2004/042644 on May 10, 2005, 10 pages. |
“Preliminary Notice of Rejection of the IPO” for SanDisk Corporation, Chinese Application No. 093140808 mailed Nov. 10, 2006, 2 pages. |
The Patent Office of the People's Republic of China, “First Office Action,” mailed in related Chinese Patent Application No. 200480039298 on Jun. 15, 2007, 7 pages (including translation). |
USPTO, “Office Action,” mailed in related U.S. Appl. No. 10/750,190 on Jun. 20, 2007, 10 pages. |
Response to Office Action filed in related U.S. Appl. No. 10/750,190 on Oct. 29, 2007, 7 pages. |
“International Preliminary Examination Report”, European Patent Office, Corresponding Application PCT/US02/00366, Jul. 4, 2003, 7 pages. |
Petro Estakhri et al., “Moving Sectors Within a Block of Information in a Flash Memory Mass Storage Architecture”, U.S. Appl. No. 09/620,544, filed Jul. 21, 2000, 48 pages. |
PCT International Search Report, European Patent Office, corresponding PCT Application No. PCT/US02/00366, Apr. 7, 2003, 5 pages. |
“FTL Updates, PCMCIA Document No. 0165”, Personal Computer Memory Card International Association, Version 022, Release 003, Mar. 4, 1996, pp. 1-26. |
Mergi, Aryeh and Schneider, Robert, “M-Systems & SCM Flash Filing Software Flash Translation Layer—FTL”, PCMCIA, Jul. 1994, 15 pages. |
EPO/ISA, “Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration”, mailed in related PCT/US2004/042862 on Sep. 12, 2005, 19 pages. |
European Patent Office, “Office Action,” mailed in related European Patent Application No. 04 814 991.8 on Jun. 6, 2007, 2 pages. |
China State Intellectual Property Office, “First Office Action,” corresponding related Chinese Patent Application No. 2004800416806 on Sep. 28, 2007, 18 pages. |
Taiwan Intellectual Property Office, “Office Action of the IPO,” corresponding related Taiwanese Patent Application No. 093140967 on Nov. 13, 2007, 5 pages. |
USPTO, “Office Action,” mailed in related U.S. Appl. No. 10/750,190 on Dec. 12, 2007, 14 pages. |
EPO, “Office Action,” corresponding European Patent Application No. 04 814 787.0 on Oct. 23, 2007, 8 pages. |
The Patent Office of the People's Republic of China, “Second Office Action,” mailed in related Chinese Patent Application No. 200480039298.1 on Feb. 5, 2008, 3 pages (translation). |
China State Intellectual Property Office, “Office Action,” corresponding Chinese Patent Application No. 200480039298.1, mailed on Jul. 11, 2008, 7 pages (including translation). |
China State Intellectual Property Office, “Office Action,” corresponding Chinese Patent Application No. 200480039298.1, mailed on Nov. 28, 2008, 10 pages (including translation). |
China State Intellectual Property Office, “Decision of Refusal,” corresponding Chinese Patent Application No. 200480039298.1, mailed on May 8, 2009, 12 pages (including translation). |
China State Intellectual Property Office, “Reexamination Notification,” corresponding Chinese Patent Application No. 200480039298.1, mailed on Dec. 14, 2009, 6 pages (including translation). |
Korean Intellectual Property Office, “Notice of the Preliminary Rejection,” corresponding Korean Patent Application No. 2006-7012947, mailed on Mar. 9, 2011, 4 pages (including translation). |
Japanese Patent Office, “Notification of Reasons for Refusal,” corresponding Japanese Patent Application No. 2006-547192, mailed on Mar. 29, 2011, 6 pages (including translation). |
Number | Date | Country | |
---|---|---|---|
20130339585 A1 | Dec 2013 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 10749831 | Dec 2003 | US |
Child | 13931107 | US |