Advanced management of a non-volatile memory

Information

  • Patent Grant
  • Patent Number
    9,195,592
  • Date Filed
    Wednesday, December 4, 2013
  • Date Issued
    Tuesday, November 24, 2015
Abstract
A method of managing a non-volatile memory module, the method may include: allocating, by a memory controller, logically erased physical blocks of a non-volatile memory module to a spare block pool; allocating, by the memory controller, physical blocks from the spare block pool to become buffer blocks of a buffer of the non-volatile memory module; and controlling, by the memory controller, a utilization of the buffer blocks of the buffer by applying a page based buffer management scheme.
Description
BACKGROUND

Flash memory devices store information with high density on Flash cells with ever smaller dimensions. In addition, Multi-Level Cells (MLC) store several bits per cell by setting the amount of charge in a cell. Flash memory devices are organized into (physical) pages. Each page includes a section allocated for data (512 bytes to 16 Kbytes, and expected to grow in the future) and a small amount of spare bytes (64 bytes to 1 Kbyte or more per page) for storing redundancy and metadata. The redundancy bytes are used to store error correcting information, for correcting errors which may have occurred during flash lifetime and the page read process. Each program operation is performed on an entire page. A number of pages are grouped together to form an Erase Block (erase block). A page cannot be erased unless the entire erase block which contains it is erased.


One common application of flash memory devices is Secure Digital (SD) cards and embedded Multi-Media Cards (eMMC). An embedded flash memory storage device (such as an eMMC card) may typically contain flash memory devices and a flash memory controller. The memory controller translates commands coming in through the host interface into actions (Read/Write/Erase) on the flash memory devices. The most common commands for a memory storage device may be Read and Write commands of one or more sectors, where a sector may be, but is not limited to, a sequence of 512 bytes. The Read or Write commands may be of a single sector or multiple sectors. These commands may refer to logical addresses. These addresses may then be redirected to new addresses on the flash memory which need not directly correspond to the logical addresses that might be referenced by the Read or Write commands. This is due to memory management that may be carried out by the flash memory controller in order to support several features such as wear-leveling, bad block management, firmware code and data, error-correction, and others. The erase function is performed on an entire erase block. Because of this functionality, before the data of a certain block may be replaced, such as during a write function, the new data must be written in an alternative location before an erase can occur, to preserve the integrity of the stored data.


Due to the small dimensions of a typical SD or eMMC card and the price limitations, the memory controller may typically have only a small RAM available for storage. The small size of the RAM memory limits the type of memory management which may be carried out by the memory controller with regard to the data stored in the flash memory device and received from the interface.


The memory controller may typically manage the memory at the erase block level, because managing data at a smaller granularity becomes difficult. That is, the logical memory space may be divided into units of memory contained within a single erase block or some constant multiple of erase blocks, such that all logical sector addresses within each said unit of memory may be mapped to the same erase block or some constant multiple thereof.


This type of management has the drawback that for writing random access data sectors to memory or other memory units smaller than an erase block, erase blocks must be frequently rewritten. Because of the characteristics of flash memory, each new piece of information is written into an empty page. In flash memory a page may not be rewritten before the entire erase block is erased first.


If a portion of the memory unit contained within an erase block may need to be rewritten, it is first written into a freshly allocated erased erase block. The remaining, unmodified, contents of the erase block may then be copied into the new erase block and the former erase block may be declared as free and may be further erased. This operation may be referred to as "sealing" or "merging". The operation involves collecting the most recent data of a logical block and then merging it with the rest of the block data in a single erase block. Thus, even if a single sector from an erase block is rewritten, a complete erase block would be rewritten.
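As an illustrative sketch (the function name and page layout are invented, not taken from the patent), the "sealing"/"merging" operation can be modeled as collecting the most recent data and copying the unmodified remainder into a fresh block:

```python
# Illustrative model of "sealing"/"merging": the newest data for a logical
# block is combined with the unmodified pages of the old physical block
# and written into a freshly allocated erased erase block.
def merge_block(old_block, updates):
    """old_block: list of page contents; updates: {page_index: new_data}."""
    new_block = []
    for i, page in enumerate(old_block):
        # Take the updated page if one exists, else copy the old page.
        new_block.append(updates.get(i, page))
    # The former erase block can now be declared free and erased.
    return new_block

old = ["p0", "p1", "p2", "p3"]
merged = merge_block(old, {1: "p1-new"})
```

Note how rewriting a single page still produces a full new block, which is exactly the write-amplification cost described in the following paragraph.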


This may result in causing a significant degradation in the average write speed. It may also impose a significant delay in the response time between random write sector operations. It also may cause excessive P/E (program/erase) cycling, which may be problematic in new generations of flash memory devices where the number of P/E cycles is limited to a few thousand or even a few hundreds.


The memory controller is used to manage the overhead described above, and must always keep track of the data associated with each logical address and the actual memory location. This is usually achieved by implementing a mapping method between the logical address space assigned to the data and the actual memory storage location of the data.


Several methods may be implemented to execute such a mapping. Two approaches implement mapping systems that rely on block mapping and page mapping, respectively. In an approach using block mapping (block based management), each physical block in the flash memory is mapped to a contiguous logical memory block (LMBA) of the same data size. In this approach, when one page in some logical block is updated, the entire associated physical block must be copied to a fresh block, and the new data must be written in place of the obsolete copy. A merge may be an operation where the original content of a logical block is merged with the new data to form a new up to date copy of the block. This up to date copy is the data block that is associated with a logical data block assigned to the data contained within. In the second approach, each logical page (usually, but not necessarily, 4 KB of data) of a logical block is mapped to an arbitrary physical page, where two pages belonging to the same logical block can reside in different physical blocks of the flash memory. The second approach (page based management) requires additional complexity in terms of the amount of management data and memory overhead required for the physical memory to logical address mapping tables. For memory applications where severe limitations exist on available control memory, this approach is less appropriate. Flash memories such as SD or eMMC have a limited amount of memory overhead, and the first mapping approach, or variants thereof, is more practical.
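A rough, back-of-the-envelope comparison illustrates why page based mapping tables strain a small RAM; the device size, block size and entry size below are assumptions for illustration only (the 4 KB page size follows the text):

```python
# Rough comparison of mapping-table sizes for the two approaches.
# All sizes are illustrative assumptions, not figures from the patent.
ENTRY_BYTES = 4            # assumed size of one mapping entry
DEVICE_BYTES = 8 * 2**30   # assumed 8 GiB device
BLOCK_BYTES = 2 * 2**20    # assumed 2 MiB erase block
PAGE_BYTES = 4 * 2**10     # 4 KiB logical page, as in the text

# Block based management: one entry per erase block.
block_map_bytes = (DEVICE_BYTES // BLOCK_BYTES) * ENTRY_BYTES
# Page based management: one entry per logical page.
page_map_bytes = (DEVICE_BYTES // PAGE_BYTES) * ENTRY_BYTES
```

Under these assumptions the block map fits in 16 KiB while the page map needs 8 MiB, a 512x difference, which is why the first approach suits RAM-constrained SD/eMMC controllers.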


In order to benefit from the advantages of both aforementioned flash management approaches, an embedded device can manage the main part of the device capacity in a block based manner, while managing a smaller portion in a page based manner. Such an approach allows keeping most of the device capacity (according to the size allocated for the device user/host) mapped with rather compact tables, translating the logical address of a large contiguous span of memory to a physical location (logical to physical blocks mapping), therefore using less RAM memory, and minimizing the need to access flash for storing and loading such tables or their derivatives. In parallel, such an approach (a page based random data cache portion) allows overcoming the problems relevant to writing of random traffic as described above (performance degradation, delay in response time, excessive P/E cycling, etc.). Typically, some sort of cleaning algorithm should be applied constantly or from time to time in order to vacate data from the random data cache or move data within the cache and free up space for further absorbing random data. The efficiency of the cleaning algorithms, the capacity available for caching random data (overprovisioning) and the efficiency of the random write operations determine the random write performance of the device.
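A minimal sketch of the resulting write-routing decision, assuming an invented threshold for what counts as a "short" random transaction (the cutoff value is hypothetical, not from the patent):

```python
# Hybrid management sketch: short random writes go to the page based cache,
# long sequential writes go to the block based mapped area.
RANDOM_THRESHOLD_SECTORS = 8   # assumed cutoff for "short" transactions

def route_write(length_in_sectors):
    """Return which portion of the device should absorb this write."""
    if length_in_sectors <= RANDOM_THRESHOLD_SECTORS:
        return "page_based_cache"    # random data cache portion
    return "block_based_area"        # block mapped main capacity
```
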


Overprovisioning—usually there is a difference between the physical flash capacity used by the device and the logical capacity available for the device user. This difference allows the memory controller to use some of the device capacity for internal management needs. Overprovisioning in embedded flash memory storage systems is typically defined statically and does not change over the system's lifetime.


Overprovisioning influences Random Write performance—one of the purposes of the capacity not allocated for the user (overprovisioning) is to allow caching of random write traffic (short, non-sequential write transactions). An increase in overprovisioning capacity will typically result in a Random Write performance improvement, due to the ability to perform cleaning of the cached (page based) data into block based mapped data less frequently.


Another host interface command that can be exploited by the flash memory storage system is a logical erase operation, usually called "Trim". In this command the host provides the flash memory controller with a logical address space which was typically previously written by the host and whose user data is no longer required. This logical address range is declared as logically erased and as storing no valid user data.
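A minimal sketch of Trim handling, using an invented in-memory validity set (real controllers track validity in their mapping tables):

```python
# Sketch of a "Trim" command: the given logical address range is declared
# logically erased and holds no valid user data afterwards.
def trim(valid_lbas, first_lba, count):
    """valid_lbas: set of logical addresses currently holding valid data."""
    for lba in range(first_lba, first_lba + count):
        valid_lbas.discard(lba)   # the range no longer stores valid data

valid = {0, 1, 2, 3, 4}
trim(valid, 1, 3)
```
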


The invention suggests that partially written user space (unused since the device life start or trimmed by the device user or available due to changing redundancy rate, to handle different reliability requirements over device lifetime) can free additional temporary overprovisioning.


Some of the data in a typical flash memory device may be stored in a single-level cell (SLC) manner in order to increase the performance or the reliability of the stored information, e.g. metadata (system control information/mapping tables etc.), an enhanced storage partition, random data caches, etc.


A typical flash memory device memory controller needs to account for the free blocks available in the system (spare blocks). Such blocks may be signed by some software indication or managed in some software database. A collection of all or part (distinguished according to some trait) of the spare blocks in the system can be regarded as a "spare blocks pool" (a list of such blocks). MLC/TLC devices may need to distinguish MLC spare blocks from SLC spare blocks and manage several pools of spare blocks of different types. E.g. SLC blocks may need to reach a much higher cycling than the MLC blocks.
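The per-type pools can be sketched as follows (block identifiers and the pool layout are invented for illustration):

```python
# Sketch of separate spare block pools per programming type, as described
# above. Block numbers are hypothetical.
spare_pools = {"SLC": [10, 11], "MLC": [20, 21, 22]}

def allocate_spare(pool_type):
    """Take a free block from the requested pool; None if the pool is empty."""
    pool = spare_pools[pool_type]
    return pool.pop(0) if pool else None
```

Keeping the pools separate lets the controller apply different wear budgets per type, e.g. allowing SLC blocks to accumulate far more P/E cycles than MLC blocks.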


SUMMARY

According to an embodiment of the invention there may be provided a method of managing a non-volatile memory module, the method may include allocating, by a memory controller, logically erased physical blocks of a non-volatile memory module to a spare block pool; allocating, by the memory controller, physical blocks from the spare block pool to become buffer blocks of a buffer of the non-volatile memory module; and controlling, by the memory controller, a utilization of the buffer blocks of the buffer by applying a page based buffer management scheme.


The allocating of the logically erased physical blocks may include initially allocating a predefined number of physical blocks to the spare block pool; and the method may include allocating to the spare block pool additional logically erased physical blocks.


The controlling of the utilization of the buffer blocks may include controlling cleaning operations in response to a status of the buffer.


The method may include controlling, by the memory controller, a utilization of a memory section of the non-volatile memory module that differs from the buffer by applying a block based buffer management scheme.


The method may include managing a logical block to physical block mapping data structure that maps logical blocks to mapped physical blocks.


The method may include receiving data sectors to be written to the non-volatile memory module and determining whether to write the data sectors to the buffer or to a mapped physical block.


The determining may be responsive to a state of the buffer.


The logically erased physical blocks allocated to the spare block pool may include all, or only a part, of the logically erased physical blocks of the non-volatile memory module.


The method may include allocating the logically erased physical blocks to multiple spare block pools, wherein different spare block pools are associated with different types of programming, wherein different types of programming differ from each other by a number of logical bits programmed per flash memory cell (for example—SLC, MLC); and allocating, by the memory controller, physical blocks from the multiple spare block pools to become buffer blocks of the buffer of the non-volatile memory module.


The allocating of the physical blocks from the multiple spare block pools may be followed by independently managing the multiple spare block pools.


The method may include allocating, by the memory controller, physical blocks that have become buffer blocks of a buffer of the non-volatile memory module to a sequential portion of the non-volatile memory module.


The method may include allocating, by the memory controller, physical blocks from the spare block pool to a sequential portion of the non-volatile memory module.


According to an embodiment of the invention there may be provided a non-transitory computer readable medium that stores instructions that once executed by a memory controller may cause the memory controller to allocate logically erased physical blocks of a non-volatile memory module to a spare block pool; allocate physical blocks from the spare block pool to become buffer blocks of a buffer of the non-volatile memory module; and control a utilization of the buffer blocks of the buffer by applying a page based buffer management scheme.


The non-transitory computer readable medium may store instructions that once executed by a memory controller may cause the memory controller to initially allocate a predefined number of physical blocks to the spare block pool; and further allocate to the spare block pool additional logically erased physical blocks.


The non-transitory computer readable medium may store instructions that once executed by a memory controller may cause the memory controller to control cleaning operations in response to a status of the buffer.


The non-transitory computer readable medium may store instructions that once executed by a memory controller may cause the memory controller to control a utilization of a memory section of the non-volatile memory module that differs from the buffer by applying a block based buffer management scheme.


The non-transitory computer readable medium may store instructions that once executed by a memory controller may cause the memory controller to manage a logical block to physical block mapping data structure that maps logical blocks to mapped physical blocks.


The non-transitory computer readable medium may store instructions that once executed by a memory controller may cause the memory controller to receive data sectors to be written to the non-volatile memory module and determine whether to write the data sectors to the buffer or to a mapped physical block.


The non-transitory computer readable medium may store instructions that once executed by a memory controller may cause the memory controller to perform the determining in response to a state of the buffer.


The logically erased physical blocks allocated to the spare block pool may include all, or only a part, of the logically erased physical blocks of the non-volatile memory module.


The non-transitory computer readable medium may store instructions that once executed by a memory controller may cause the memory controller to allocate the logically erased physical blocks to multiple spare block pools, wherein different spare block pools are associated with different types of programming, wherein different types of programming differ from each other by a number of logical bits programmed per flash memory cell (for example—SLC, MLC); and allocate, by the memory controller, physical blocks from the multiple spare block pools to become buffer blocks of the buffer of the non-volatile memory module.


The allocating of the physical blocks from the multiple spare block pools may be followed by independently managing the multiple spare block pools.


The non-transitory computer readable medium may store instructions that once executed by a memory controller may cause the memory controller to allocate, by the memory controller, physical blocks that have become buffer blocks of a buffer of the non-volatile memory module to a sequential portion of the non-volatile memory module.


According to an embodiment of the invention there may be provided a memory controller that may include an allocation circuit that may be arranged to allocate logically erased physical blocks of a non-volatile memory module to a spare block pool and to allocate physical blocks from the spare block pool to become buffer blocks of a buffer of the non-volatile memory module; and a buffer memory controller that may be arranged to control a utilization of the buffer blocks of the buffer by applying a page based buffer management scheme.


The allocation circuit may be arranged to initially allocate a predefined number of physical blocks to the spare block pool; and further allocate to the spare block pool additional logically erased physical blocks.


The buffer memory controller may be arranged to control cleaning operations in response to a status of the buffer.


The memory controller may include a block control circuit that may be arranged to control a utilization of a memory section of the non-volatile memory module that differs from the buffer by applying a block based buffer management scheme.


The memory controller may be arranged to manage a logical block to physical block mapping data structure that maps logical blocks to mapped physical blocks.


The memory controller may include an interface that may be arranged to receive data sectors to be written to the non-volatile memory module and wherein the memory controller may be arranged to determine whether to write the data sectors to the buffer or to a mapped physical block.


The determining may be responsive to a state of the buffer.


The logically erased physical blocks allocated to the spare block pool may include all, or only a part, of the logically erased physical blocks of the non-volatile memory module.


The memory controller may be arranged to allocate the logically erased physical blocks to multiple spare block pools, wherein different spare block pools are associated with different types of programming, wherein different types of programming differ from each other by a number of logical bits programmed per flash memory cell (for example—SLC, MLC); and allocate physical blocks from the multiple spare block pools to become buffer blocks of the buffer of the non-volatile memory module.


The allocating of the physical blocks from the multiple spare block pools may be followed by independently managing the multiple spare block pools.


The memory controller may be arranged to allocate physical blocks that have become buffer blocks of a buffer of the non-volatile memory module to a sequential portion of the non-volatile memory module.





BRIEF DESCRIPTION OF THE DRAWINGS

The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of operation, together with objects, features, and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings in which:



FIG. 1 illustrates a system according to an embodiment of the invention;



FIG. 2 illustrates data structures according to an embodiment of the invention;



FIG. 3 illustrates a method according to an embodiment of the invention;



FIG. 4 illustrates a method according to an embodiment of the invention;



FIG. 5 illustrates allocations of physical blocks between various entities according to an embodiment of the invention; and



FIG. 6 illustrates a method according to an embodiment of the invention.





It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.


DETAILED DESCRIPTION OF THE DRAWINGS

In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, and components have not been described in detail so as not to obscure the present invention.



Because the illustrated embodiments of the present invention may for the most part, be implemented using electronic components and circuits known to those skilled in the art, details will not be explained in any greater extent than that considered necessary as illustrated above, for the understanding and appreciation of the underlying concepts of the present invention and in order not to obfuscate or distract from the teachings of the present invention.


Any reference in the specification to a method should be applied mutatis mutandis to a system capable of executing the method and should be applied mutatis mutandis to a non-transitory computer readable medium that stores instructions that once executed by a computer result in the execution of the method.


Any reference in the specification to a system should be applied mutatis mutandis to a method that may be executed by the system and should be applied mutatis mutandis to a non-transitory computer readable medium that stores instructions that may be executed by the system.


Any reference in the specification to a non-transitory computer readable medium should be applied mutatis mutandis to a system capable of executing the instructions stored in the non-transitory computer readable medium and should be applied mutatis mutandis to method that may be executed by a computer that reads the instructions stored in the non-transitory computer readable medium.


The invention describes methods, systems and computer readable media for dynamically increasing embedded flash storage device overprovisioning and, as a result, increasing the random write performance at the expense of unclaimed user space.



FIG. 1 illustrates a system 10 according to embodiments of the invention. System 10 includes an interface 20 that may be linked to a memory controller 30 and may be also linked to a nonvolatile memory module 60 and a volatile memory module 80.


The nonvolatile memory module 60 may include a random portion 64, a sequential portion 62, a data buffer 70 and a metadata buffer 63.


The random portion 64 may refer to a logically allocated random portion memory, while the sequential portion may refer to a logically allocated sequential portion memory. The metadata buffer 63 and other management portions may be allocated within the nonvolatile memory module 60. In FIG. 1 some data structures such as the metadata buffer 63 may be illustrated as being contained outside the random portion 64 or sequential portion 62, although these structures may be contained within nonvolatile memory. It is noted that the data buffer 70 may be included in the random portion 64.


System 10 may store one or more management data structures that may store metadata about the content of the volatile memory module 80, the content of the nonvolatile memory module 60 or both memory modules. The management data structure can be stored at the volatile memory module 80 and, additionally or alternatively, at the nonvolatile memory module 60.



FIG. 1 illustrates volatile memory module 80 and metadata buffer 63 as storing management data structures such as logical to physical data base 100, spare pool data base 83 and buffer database 84.



FIG. 1 also illustrates a volatile merger portion 81 that can be used when data sectors are merged. Data sectors that were previously stored at the random portion can be merged before being written to the sequential portion. Additionally or alternatively, the merging can occur between data sectors from sequential and random portions. The merging may include copying data sectors to be merged to volatile merger portion 81 and then writing the merged data sectors to the sequential portion.
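The merge path through the volatile merger portion 81 can be sketched as follows (the sector layout and dictionary representation are invented for illustration):

```python
# Sketch of merging via the volatile merger portion 81: sectors to be merged
# are first copied into RAM, then the merged result is written out to the
# sequential portion.
def merge_via_ram(random_sectors, sequential_sectors):
    """Both arguments map sector index -> data; random copies are newest."""
    merger_portion = {}                    # volatile merger portion (RAM)
    merger_portion.update(sequential_sectors)
    merger_portion.update(random_sectors)  # newer random copies win
    return dict(merger_portion)            # written to the sequential portion

out = merge_via_ram({2: "new2"}, {1: "s1", 2: "s2", 3: "s3"})
```
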


The non-volatile memory module 60 includes multiple physical blocks. These physical blocks can be allocated to data buffer 70 (see J physical blocks 60(1)-60(J)), to sequential nonvolatile portion 62 (see K physical blocks 60(J+1)-60(J+1+K)), to random nonvolatile portion 64 (such blocks are not shown for brevity of explanation) and to spare pool 66 (see M physical blocks 60(J+K+2)-60(J+K+M+2)).


The memory controller 30 may include multiple circuits such as:

    • a. Allocation circuit 31 that is arranged to allocate logically erased physical blocks of a non-volatile memory module 60 to a spare block pool 66 and to allocate physical blocks from the spare block pool 66 to become buffer blocks of a buffer 70 of the non-volatile memory module.
    • b. Buffer memory controller 32 that is arranged to control a utilization of the buffer blocks (60(1)-60(J)) of the buffer 70 by applying a page based buffer management scheme.
    • c. Block control circuit 33 that is arranged to control a utilization of a memory section (such as random nonvolatile portion 64 and/or sequential nonvolatile portion 62) of the non-volatile memory module that differs from the buffer by applying a block based buffer management scheme.


The spare pool 66 can include multiple separate spare pools—one per type of programming. FIG. 5 illustrates the spare pool 66 as including MLC and SLC spare pools 66(1) and 66(2). FIG. 5 illustrates the allocation of blocks and their change of allocation between data buffer 70, a block based mapping non-volatile memory space (that may include sequential and/or random nonvolatile portions 62 and 64), MLC spare blocks pool 66(1) and SLC spare blocks pool 66(2).


Random write performance can be significantly improved by allocating some of the block-level mapped user-data blocks which are not in use (since start of life or after logical space trim operation or due to changing redundancy rate) to be used for page-level mapped random data absorption (overprovisioning for random data).


A predefined part of the storage device capacity is usually allocated for page based data caching, which among other things absorbs random write transactions.


Part of the user capacity which is not exploited by the user for data storing at any given time may be used to temporarily increase the overprovisioning part and therefore increase the random write performance.


There may be provided methods, systems and computer readable media for allocating unused part of the user space as overprovisioning and freeing it in time to allow the user to exploit full user capacity whenever needed.


There may be provided methods, systems and computer readable media for achieving the above for a system using separate pools/lists of spare blocks (e.g. for different block types) for user data storing and for the random data cache.


Terminology


MLC—multi level cell (device or block in a device); relates to the ability to store several bits of data in a single memory cell. For example, it relates to 2-bits per cell and 3-bits per cell devices in the same manner (without loss of generality).


L2P mapping—translation from logical (user/host/virtual) address space to physical address space of the device (e.g. translation from logical flash block address to a physical one).


L2P blocks mapping—translation from logical flash block address (aligned to flash block size or some other contiguous mapping unit) to corresponding physical/semi-physical address, usually consisting of one or several flash blocks.


Logical memory block address (LMBA)—user address space unit, mapped by L2P blocks mapping, typically corresponds in size to one or several flash blocks (aligned to such unit start and size).


Physical memory block address (PMBA)—physical/semi-physical flash address unit used in L2P blocks mapping, typically in the size of LMBA, which typically corresponds to one or several flash blocks.


Spare block—free physical block not holding any valid data, which is available for memory controller 30 utilization. Memory controller 30 can allocate such block for storing information to it when needed.


Native overprovisioning—the difference between the memory capacity available for the user and the actual capacity of the total flash memory available for the memory controller 30 usage. Typically exploited by the memory controller 30 for internal management (storing control information, data caching etc.).


Mapped logical block—entry in L2P blocks mapping table, associated with some Physical block address. The Logical block corresponding user space may or may not contain user data (may be erased/unwritten).


Unmapped logical block—free entry in L2P blocks mapping table corresponding to logically erased/unwritten Logical block address (e.g. logical address space mapped to a flash block) which is not associated with any Physical block address.


Block mapping—the act of associating an unmapped logical block to a Physical memory block address.


Cleaning—any algorithm for vacating data from the buffer absorbing the random write traffic, or moving data within the random data cache buffer, or any other method for freeing space in such a buffer. E.g. moving random data from the page based managed portion of the device memory to the block based managed portion.
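One possible cleaning trigger can be sketched as a fill-ratio check; the threshold value is an assumption for illustration, not taken from the patent:

```python
# Sketch of a cleaning policy: when the random-data buffer fills beyond an
# assumed threshold, data should be vacated to the block based managed
# portion (or moved within the cache) to free space for new random writes.
CLEAN_THRESHOLD = 0.8   # assumed fill ratio that triggers cleaning

def needs_cleaning(used_buffer_blocks, total_buffer_blocks):
    return used_buffer_blocks / total_buffer_blocks >= CLEAN_THRESHOLD
```
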


To support variable over-provisioning the memory controller 30 supports unmapped logical blocks, e.g. by mapping logical blocks to an invalid physical location. Such a logical block is naturally logically erased (holds no user/host data). However, the random data absorbing buffer may hold user data which is mapped to the address space of a logical block which is not mapped to a physical block. In that case the logical address space of that logical block is considered partly written. The memory controller 30 can support unmapped logical blocks for the entire logical blocks span or any subset of such blocks. The memory controller 30 should support all required operations on a logically erased block while it is unmapped (e.g. reading erased data).
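A minimal sketch of reading erased data from an unmapped logical block (the 4-byte page size and the data values are invented; flash conventionally reads erased cells as all ones, i.e. 0xFF bytes):

```python
# Sketch of reading from an unmapped logical block: with no physical block
# associated, the controller returns the erased pattern.
NONE = None   # "invalid physical location" marker

def read_page(l2p, flash, logical_block, page):
    phys = l2p.get(logical_block, NONE)
    if phys is NONE:
        return b"\xff" * 4    # erased data (tiny 4-byte pages for brevity)
    return flash[phys][page]

l2p = {2: 264}                          # logical block 2 -> physical 264
flash = {264: [b"\x01\x02\x03\x04"]}    # physical block 264, one page
```
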


The memory controller 30 should support mapping an unmapped block, i.e. associating an unmapped logical block address with a physical memory block acquired from a list (pool) of spare blocks.


The memory controller 30 should support un-mapping a mapped block, i.e. freeing an unused physical memory block associated with a logical block address into a spare blocks list/pool (signing it as spare).


The memory controller 30 should support treating logically erased blocks (some or all of them) as spare blocks, e.g. by un-mapping all or part of the logically erased blocks and signing them as part of the spare blocks pool. Associating logically erased blocks with the spare blocks pool can be either ongoing (e.g. instantly signing a block as spare as soon as it is logically erased) or performed once in a while; e.g. always keeping a count of mapped and erased blocks, and scanning such blocks in order to sign some of them as spare when the spare blocks pool runs out of available blocks.
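The "once in a while" variant of this policy can be sketched as follows. This is an illustrative sketch only: the dict-based L2P layout (with physical and erased fields), the function name, and the wanted threshold are assumptions, not taken from the patent.

```python
def refill_spare_pool(l2p, spare_pool, wanted):
    """Scan the L2P entries and un-map logically erased blocks,
    signing their physical blocks as spare, until the spare pool
    holds 'wanted' blocks (or no candidates remain)."""
    for entry in l2p:
        if len(spare_pool) >= wanted:
            break  # the pool is replenished
        if entry['erased'] and entry['physical'] is not None:
            spare_pool.append(entry['physical'])  # sign as spare
            entry['physical'] = None              # logical block is now unmapped
    return spare_pool
```

The ongoing alternative would instead sign a physical block as spare at the moment its logical block is erased, avoiding the scan.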



FIG. 2 illustrates a logical to physical data base 100 according to an embodiment of the invention. The logical to physical data base 100 includes multiple (N) entries 100(1)-100(N) and three columns, so that each entry has three fields: (a) a logical address 111 of a logical block; (b) a physical address 112 of a physical block if such a physical block is mapped to the logical block, or a NONE value if it is not; and (c) a logically erased flag indicative of whether the physical block is logically erased or not.


The first four entries of the logical to physical data base 100 are: a logically erased and mapped physical block <0, 122, YES>, an unmapped and logically erased block <1, NONE, YES>, and two mapped blocks that are not logically erased, <2, 264, NO> and <3, 313, NO>.
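A minimal model of data base 100, assuming dict-based entries; the key names (physical, erased) and the helper is_spare_candidate are illustrative assumptions, not from the figure:

```python
# Sketch of the logical to physical data base 100 of FIG. 2.
NONE = None  # no physical block mapped to this logical block

l2p = {
    0: {'physical': 122,  'erased': True},   # mapped, logically erased
    1: {'physical': NONE, 'erased': True},   # unmapped, logically erased
    2: {'physical': 264,  'erased': False},  # mapped, holds valid data
    3: {'physical': 313,  'erased': False},  # mapped, holds valid data
}

def is_spare_candidate(entry):
    # A mapped block that is logically erased may be un-mapped and
    # signed as spare (variable overprovisioning).
    return entry['erased'] and entry['physical'] is not NONE
```

Entry 1 shows the unmapped state: its logical address stays in the table while the physical address is NONE.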


Total overprovisioning at any given moment is determined by the amount of the native overprovisioning and the amount of unused user space (or some part of it that the memory controller 30 is able to exploit).


The memory controller 30 should be able to perform cleaning algorithms according to the available overprovisioning. E.g. dynamically adjusting the rate of the cleaning process to the amount of available spare blocks.
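One way to express such a dynamic adjustment is a rate curve over the spare block count. The linear shape, the watermark parameters, and all names below are illustrative assumptions, not the patent's policy:

```python
def cleaning_rate(spare_blocks, min_spares, full_spares, max_rate):
    """Sketch of a cleaning-rate policy: no cleaning while the spare
    pool is full, maximum (rigorous) cleaning at the low watermark,
    and a linear ramp in between."""
    if spare_blocks >= full_spares:
        return 0.0        # plenty of overprovisioning, no cleaning needed
    if spare_blocks <= min_spares:
        return max_rate   # critical state: rigorous cleaning
    span = full_spares - min_spares
    return max_rate * (full_spares - spare_blocks) / span
```

More spare blocks mean the cleaning process can run slower, leaving more bandwidth for host traffic.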


Cleaning algorithms of the buffer absorbing random write transactions (e.g. page based portion of the device memory) should take into account the extra space available due to increased effective overprovisioning and achieve improved performance accordingly.


Physical blocks not associated with any logical block can be signed as spare blocks. The total amount of spare space at any moment is determined by the total overprovisioning which is not being exploited to hold valid data.


When the host first writes to an unmapped block address while the system is in a normal state (not during critical cleaning, i.e. under the regular cleaning policy), the memory controller 30 can map such a logical address to a physical block, thus consuming (removing) an appropriate block from the spare blocks pool.


If the memory controller 30 performs block mapping as soon as the host claims the logical address space (as described in the previous paragraph), cleaning algorithms vacating data to block based managed memory should only vacate data with logical addresses corresponding to mapped blocks. Therefore, the cleaning process should not consume additional blocks from the spare blocks pool for the sake of mapping.


Otherwise, the cleaning algorithm should account for the spare blocks that will be consumed for the sake of mapping as a result of the cleaning process.


If the system is in a critical state which requires immediate freeing of overprovisioning space (e.g. the random writes buffer runs out of available space and the spare blocks amount has to be increased), rigorous cleaning will typically be applied. During such a state, mapping due to host traffic to unmapped addresses should be temporarily avoided.


If the mapping process was stopped for some reason (e.g. cleaning during a critical system state), it should be resumed when the mapping-avoidance policy ends (it is assumed that overprovisioning was freed during that period). E.g. all incoming traffic to unmapped addresses during the period in which mapping was not performed can be mapped as soon as this policy ends.


When traffic is incoming to an unmapped logical address during a stage in which mapping is not allowed (e.g. critical state cleaning is performed to free some spare blocks), the traffic should be routed to the random transactions buffer until mapping is permitted again. It is assumed that the random traffic buffer applies cleaning algorithms in a manner that allows safely absorbing incoming data until enough space is freed.


In a system in which it is significant to keep the blocks of the random data buffer separate from those of the regular block-based-managed disk portion (e.g. the random data buffer exploits SLC blocks while the block-based-managed-space blocks are MLC, and the two therefore have different reliability requirements), the treatment below is proposed.


It is assumed that this section is relevant when:

    • a. The impact of additional wear (e.g. P/E cycling) on the physical blocks freed due to un-mapping and used for increasing overprovisioning (due to the described feature) presents no problem for the device.
    • b. Most of the overprovisioning space is managed in a different manner than most of the general space. E.g. most of the overprovisioning blocks are managed as an SLC buffer while the general memory space is managed as MLC, and the spare blocks are kept in separate pools most of the time so as not to mix up blocks with different physical characteristics.


Memory controller 30 should be able to distinguish the blocks used to store random write traffic that are currently signed as spare as a result of un-mapping (e.g. originally MLC blocks used as SLC blocks in the random data buffer). E.g. a software flag (indication) can be added per used block for that purpose.


Memory controller 30 should be able to consume spare blocks originating in the un-mapping process both as general (block-based) storage blocks and as random write cache blocks. E.g. in the case of MLC and SLC block separation, some amount of MLC blocks can be used as both SLC and MLC blocks as long as they are eventually freed to the MLC spare blocks pool.


Unmapped blocks (spare physical blocks originating in the un-mapping process) should only be consumed for the sake of increasing performance (for storing random write cache data). E.g. memory controller 30 should not use such a block for storing system/control information.


Typically it would be more efficient to use most of the native overprovisioning blocks (not the unmapped ones) before starting to consume the unmapped blocks (e.g. in order not to add wear to such blocks).


Mapping blocks in response to incoming traffic that claims unmapped user space should be held up when the system needs to map a block but cannot do so until such a block is freed.


The memory controller 30 should be able to dynamically adjust the cleaning rate so that the random data cache can absorb the incoming traffic until originally unmapped blocks are freed, allowing writing directly to the general (block-based) storage portion.


For some systems, due to possible clustering of native overprovisioning blocks and unmapped blocks in the random data buffer, the random data cache may need to free many native overprovisioning (e.g. SLC) blocks before freeing a single originally unmapped block (e.g. MLC). Such behavior can result in suboptimal performance. One technique that can be applied to overcome this issue is to let the general storage temporarily use native-overprovisioning blocks (e.g. SLC), which are then swapped back for the originally unmapped blocks (e.g. MLC) when the latter become free.



FIG. 3 illustrates method 200 for erasing or trimming a logical address space that includes multiple logical blocks according to an embodiment of the invention.


Method 200 starts by stage 210 of defining the erased or trimmed logical address space to range between LMBAx and LMBAy.


Stage 210 may be followed by stage 220 of starting to scan the logical blocks: variable LMBA is set to LMBAx (the first logical block in the logical address space).


Stage 220 may be followed by stage 230 of checking whether there are still logical blocks to process: is LMBA<=LMBAy?


If yes (there are logical blocks to process)—jumping to stage 240 of tagging the physical block previously mapped to LMBA as logically erased (e.g. L2P(LMBA).IsLogicallyErased=YES). This stage may also include tagging the LMBA as erased.


If no—jumping to stage 270 of ending the erasure process.


Stage 240 may be followed by stage 250 of un-mapping the physical block (PMBA) that was previously mapped to LMBA. This involves allocating PMBA to the spare block pool and altering the logical to physical data base to indicate that LMBA is not mapped to a physical block—physical address=NONE.


Stage 250 may be followed by stage 260 of preparing to check the next logical block: LMBA=LMBA+1.


Stage 260 may be followed by stage 270 of ending the erasure process.
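The stages of method 200 can be sketched as a single loop. The dict-based L2P layout and the function name are illustrative assumptions; the stage numbers in the comments refer to FIG. 3:

```python
def erase_logical_range(l2p, spare_pool, lmba_x, lmba_y):
    """Sketch of method 200: trim logical blocks LMBAx..LMBAy,
    tagging them as logically erased and un-mapping them."""
    lmba = lmba_x                          # stage 220: start of the scan
    while lmba <= lmba_y:                  # stage 230: blocks left to process?
        l2p[lmba]['erased'] = True         # stage 240: tag as logically erased
        pmba = l2p[lmba]['physical']
        if pmba is not None:               # stage 250: un-map PMBA and
            spare_pool.append(pmba)        # allocate it to the spare block pool
            l2p[lmba]['physical'] = None   # physical address = NONE
        lmba += 1                          # stage 260: next logical block
    # stage 270: erasure process ends
```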



FIG. 4 illustrates method 300 according to an embodiment of the invention.


Method 300 illustrates an execution of a write transaction according to an embodiment of the invention.


Method 300 starts by stage 310 of transaction start by receiving a request to write data to a logical block LMBA.


Stage 310 may be followed by stage 315 of checking if the logical block LMBA is mapped to a physical block.


If no—jumping to stage 320 and if yes jumping to stage 330.


Stage 320 includes checking if the buffer (also referred to as data buffer 70) is in an urgent cleaning state (whereby there is an urgent need to free buffer blocks of the buffer).


If no—jumping to stage 325 and if yes—jumping to stage 340.


If the buffer is not in an urgent cleaning state, a physical block PMBA from the spare block pool is allocated to LMBA—LMBA is now mapped (stage 325) to the PMBA and the logical to physical DB 82 is updated accordingly: L2P(LMBA).PMBA=PMBA from spare pool.


Stage 325 is followed by stage 330 of checking whether to route the transaction data to the buffer (denoted in FIG. 4 as "Is short transaction?"). Stage 330 may include, for example, writing short transactions (random writes) to the data buffer.


If yes—jumping to stage 340 and if no—jumping to stage 335.


Stage 335 includes writing transaction data (associated with LMBA) to a block mapped space (the allocated PMBA from stage 325).


Stage 335 is followed by stage 360 of ending the write transaction.


Stage 340 includes routing transaction data to the buffer. The buffer is utilized on a per-page basis.


Stage 340 is followed by stage 345 of performing cleaning operations if needed.


Stage 345 is followed by stage 350 of checking if the urgent state changed to normal state. If no—jumping to stage 360 and if yes jumping to stage 355.


Stage 355 may include mapping all unmapped LMBAs in the buffer.
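The decision flow of method 300 can be sketched compactly. The return strings, the two boolean inputs (urgent_cleaning, is_short), and the data layout are illustrative assumptions; the stage numbers in the comments refer to FIG. 4:

```python
def write_transaction(l2p, spare_pool, buffer, lmba, data,
                      urgent_cleaning, is_short):
    """Sketch of method 300: handle a write to logical block LMBA."""
    entry = l2p[lmba]                         # stage 310/315: is LMBA mapped?
    if entry['physical'] is None:
        if urgent_cleaning:                   # stage 320: defer mapping and
            buffer.append((lmba, data))       # stage 340: route to the buffer
            return 'buffered-unmapped'
        entry['physical'] = spare_pool.pop()  # stage 325: map LMBA to a PMBA
    if is_short:                              # stage 330: short transaction?
        buffer.append((lmba, data))           # stage 340: page based buffer
        return 'buffered'
    return 'block-mapped'                     # stage 335: write to mapped space
```

Data buffered while unmapped is mapped later, once the urgent state returns to normal (stage 355).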


Accordingly, all LMBAs residing in the buffer at that point which are unmapped (in the L2P DB) are mapped.


FIG. 6 illustrates method 400 according to an embodiment of the invention.


Method 400 includes stage 410 of dynamically allocating by a memory controller physical blocks of a nonvolatile memory module to various purposes.


Stage 410 may include stage 412 of allocating, by a memory controller, logically erased physical blocks of a non-volatile memory module to a spare block pool and stage 414 of allocating, by the memory controller, physical blocks from the spare block pool to become buffer blocks of a buffer of the non-volatile memory module.


The allocation can change over time: physical blocks allocated to the spare block pool can be reallocated for other purposes, such as being used as buffer blocks or being allocated for sequential writing, and can be mapped and unmapped over time.


Stage 410 may be followed by stage 420 of controlling, by the memory controller, a utilization of the buffer blocks of the buffer by applying a page based buffer management scheme. The page based management scheme controls the writing of data pages to the buffer blocks, the reading of data pages from the buffer blocks, and the cleaning and merging of data pages.


Stage 412 may include initially allocating (for example—after the memory controller is powered up) a predefined number of physical blocks to the spare block pool; and further allocating to the spare block pool additional logically erased physical blocks. Stage 420 may include controlling cleaning operations in response to a status of the buffer.


A cleaning operation may include writing data sectors (for example, data pages) that are associated with a same logical memory space portion and are stored in different buffer blocks into a target buffer block. Alternatively, a cleaning operation may include moving all valid data from one physical space portion (an old block) to another (a new block).
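The merge-style cleaning operation can be sketched as follows. The page layout (dicts with a portion key) and the function name are illustrative assumptions:

```python
def merge_pages(buffer_blocks, portion):
    """Sketch of a merge-style cleaning operation: collect all pages
    belonging to one logical memory space portion from several buffer
    blocks into one target block, freeing them from the sources."""
    target = []
    for block in buffer_blocks:
        kept = []
        for page in block:
            (target if page['portion'] == portion else kept).append(page)
        block[:] = kept  # pages moved to the target leave the source block
    return target
```

After enough merges a source block holds no valid pages and can be returned to the spare pool.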


Stage 420 may include receiving data pages to be written to the non-volatile memory module and determining whether to write the data pages to the buffer or to a mapped physical block. The determining may be responsive to a state of the buffer.


Stage 410 may also be followed by stage 430 of controlling, by the memory controller, a utilization of a memory section of the non-volatile memory module that differs from the buffer by applying a block based buffer management scheme. Stage 430 may include managing a logical block to physical block mapping data structure that maps logical blocks to mapped physical blocks.


The invention may also be implemented in a computer program for running on a computer system, at least including code portions for performing steps of a method according to the invention when run on a programmable apparatus, such as a computer system or enabling a programmable apparatus to perform functions of a device or system according to the invention. The computer program may cause the storage system to allocate disk drives to disk drive groups.


A computer program is a list of instructions such as a particular application program and/or an operating system. The computer program may for instance include one or more of: a subroutine, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, a source code, an object code, a shared library/dynamic load library and/or other sequence of instructions designed for execution on a computer system.


The computer program may be stored internally on a non-transitory computer readable medium. All or some of the computer program may be provided on computer readable media permanently, removably or remotely coupled to an information processing system. The computer readable media may include, for example and without limitation, any number of the following: magnetic storage media including disk and tape storage media; optical storage media such as compact disk media (e.g., CD-ROM, CD-R, etc.) and digital video disk storage media; nonvolatile memory storage media including semiconductor-based memory units such as FLASH memory, EEPROM, EPROM, ROM; ferromagnetic digital memories; MRAM; volatile storage media including registers, buffers or caches, main memory, RAM, etc.


A computer process typically includes an executing (running) program or portion of a program, current program values and state information, and the resources used by the operating system to manage the execution of the process. An operating system (OS) is the software that manages the sharing of the resources of a computer and provides programmers with an interface used to access those resources. An operating system processes system data and user input, and responds by allocating and managing tasks and internal system resources as a service to users and programs of the system.


The computer system may for instance include at least one processing unit, associated memory and a number of input/output (I/O) devices. When executing the computer program, the computer system processes information according to the computer program and produces resultant output information via I/O devices.


In the foregoing specification, the invention has been described with reference to specific examples of embodiments of the invention. It will, however, be evident that various modifications and changes may be made therein without departing from the broader spirit and scope of the invention as set forth in the appended claims.


Moreover, the terms “front,” “back,” “top,” “bottom,” “over,” “under” and the like in the description and in the claims, if any, are used for descriptive purposes and not necessarily for describing permanent relative positions. It is understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in other orientations than those illustrated or otherwise described herein.


The connections as discussed herein may be any type of connection suitable to transfer signals from or to the respective nodes, units or devices, for example via intermediate devices. Accordingly, unless implied or stated otherwise, the connections may for example be direct connections or indirect connections. The connections may be illustrated or described in reference to being a single connection, a plurality of connections, unidirectional connections, or bidirectional connections. However, different embodiments may vary the implementation of the connections. For example, separate unidirectional connections may be used rather than bidirectional connections and vice versa. Also, plurality of connections may be replaced with a single connection that transfers multiple signals serially or in a time multiplexed manner. Likewise, single connections carrying multiple signals may be separated out into various different connections carrying subsets of these signals. Therefore, many options exist for transferring signals.


Although specific conductivity types or polarity of potentials have been described in the examples, it will be appreciated that conductivity types and polarities of potentials may be reversed.


Each signal described herein may be designed as positive or negative logic. In the case of a negative logic signal, the signal is active low where the logically true state corresponds to a logic level zero. In the case of a positive logic signal, the signal is active high where the logically true state corresponds to a logic level one. Note that any of the signals described herein may be designed as either negative or positive logic signals. Therefore, in alternate embodiments, those signals described as positive logic signals may be implemented as negative logic signals, and those signals described as negative logic signals may be implemented as positive logic signals.


Furthermore, the terms “assert” or “set” and “negate” (or “deassert” or “clear”) are used herein when referring to the rendering of a signal, status bit, or similar apparatus into its logically true or logically false state, respectively. If the logically true state is a logic level one, the logically false state is a logic level zero. And if the logically true state is a logic level zero, the logically false state is a logic level one.


Those skilled in the art will recognize that the boundaries between logic blocks are merely illustrative and that alternative embodiments may merge logic blocks or circuit elements or impose an alternate decomposition of functionality upon various logic blocks or circuit elements. Thus, it is to be understood that the architectures depicted herein are merely exemplary, and that in fact many other architectures may be implemented which achieve the same functionality.


Any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality may be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being “operably connected,” or “operably coupled,” to each other to achieve the desired functionality.


Furthermore, those skilled in the art will recognize that boundaries between the above described operations are merely illustrative. Multiple operations may be combined into a single operation, a single operation may be distributed among additional operations, and operations may be executed at least partially overlapping in time. Moreover, alternative embodiments may include multiple instances of a particular operation, and the order of operations may be altered in various other embodiments.


Also for example, in one embodiment, the illustrated examples may be implemented as circuitry located on a single integrated circuit or within a same device. Alternatively, the examples may be implemented as any number of separate integrated circuits or separate devices interconnected with each other in a suitable manner.


Also for example, the examples, or portions thereof, may be implemented as software or code representations of physical circuitry, or of logical representations convertible into physical circuitry, such as in a hardware description language of any appropriate type.


Also, the invention is not limited to physical devices or units implemented in non-programmable hardware but can also be applied in programmable devices or units able to perform the desired device functions by operating in accordance with suitable program code, such as mainframes, minicomputers, servers, workstations, personal computers, notepads, personal digital assistants, electronic games, automotive and other embedded systems, cell phones and various other wireless devices, commonly denoted in this application as ‘computer systems’.


However, other modifications, variations and alternatives are also possible. The specifications and drawings are, accordingly, to be regarded in an illustrative rather than in a restrictive sense.


In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word 'comprising' does not exclude the presence of other elements or steps than those listed in a claim. Furthermore, the terms "a" or "an," as used herein, are defined as one or more than one. Also, the use of introductory phrases such as "at least one" and "one or more" in the claims should not be construed to imply that the introduction of another claim element by the indefinite articles "a" or "an" limits any particular claim containing such introduced claim element to inventions containing only one such element, even when the same claim includes the introductory phrases "one or more" or "at least one" and indefinite articles such as "a" or "an." The same holds true for the use of definite articles. Unless stated otherwise, terms such as "first" and "second" are used to arbitrarily distinguish between the elements such terms describe. Thus, these terms are not necessarily intended to indicate temporal or other prioritization of such elements. The mere fact that certain measures are recited in mutually different claims does not indicate that a combination of these measures cannot be used to advantage.


While certain features of the invention have been illustrated and described herein, many modifications, substitutions, changes, and equivalents will now occur to those of ordinary skill in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.

Claims
  • 1. A method of managing a non-volatile memory module, the method comprising: allocating, by a memory controller, logically erased physical blocks of a non-volatile memory module to a spare block pool;allocating, by the memory controller, physical blocks from the spare block pool to become buffer blocks of a buffer of the non-volatile memory module; andcontrolling, by the memory controller, a utilization of the buffer blocks of the buffer by applying a page based buffer management scheme.
  • 2. The method according to claim 1 wherein the allocating of the logically erased physical blocks comprises: initially allocating a predefined number of physical blocks to the spare block pool; andfurther allocating to the spare block pool additional logically erased physical blocks.
  • 3. The method according to claim 1 wherein the controlling of the utilization of the buffer blocks comprises controlling cleaning operations in response to a status of the buffer.
  • 4. The method according to claim 1 further comprising controlling, by the memory controller, a utilization of a memory section of the non-volatile memory module that differs from the buffer by applying a block based buffer management scheme.
  • 5. The method according to claim 4 comprising managing a logical block to physical block mapping data structure that maps logical blocks to mapped physical blocks.
  • 6. The method according to claim 4 comprising receiving data sectors to be written to the non-volatile memory module and determining whether to write the data sectors to the buffer or to a mapped physical block.
  • 7. The method according to claim 6 wherein the determining is responsive to a state of the buffer.
  • 8. The method according to claim 1 wherein the logically erased physical blocks comprise all logically erased physical blocks of the non-volatile memory module.
  • 9. The method according to claim 1 wherein the logically erased physical blocks form only a part of all logically erased physical blocks of the non-volatile memory module.
  • 10. The method according to claim 1 comprising allocating the logically erased physical blocks to multiple spare block pools, wherein different spare block pools are associated with different types of programming, wherein different types of programming differ from each other by a number of logical bits programmed per flash memory cell; and allocating, by the memory controller, physical blocks from the multiple spare block pools to become buffer blocks of the buffer of the non-volatile memory module.
  • 11. The method according to claim 10 wherein the allocating of the physical blocks from the multiple spare block pools comprising independently managing the multiple spare block pools.
  • 12. The method according to claim 1 further comprising allocating, by the memory controller, physical blocks that have become buffer blocks of a buffer of the non-volatile memory module to a sequential portion of the non-volatile memory module.
  • 13. The method according to claim 1 further comprising allocating, by the memory controller, physical blocks from the spare block pool to a sequential portion of the non-volatile memory module.
  • 14. A non-transitory computer readable medium that stores instructions that once executed by a memory controller cause the memory controller to: allocate logically erased physical blocks of a non-volatile memory module to a spare block pool;allocate physical blocks from the spare block pool to become buffer blocks of a buffer of the non-volatile memory module; andcontrol a utilization of the buffer blocks of the buffer by applying a page based buffer management scheme.
  • 15. The non-transitory computer readable medium according to claim 14 that stores instructions that once executed by a memory controller cause the memory controller to initially allocate a predefined number of physical blocks to the spare block pool; and further allocate to the spare block pool additional logically erased physical blocks.
  • 16. The non-transitory computer readable medium according to claim 14 that stores instructions that once executed by a memory controller cause the memory controller to control cleaning operations in response to a status of the buffer.
  • 17. The non-transitory computer readable medium according to claim 14 that stores instructions that once executed by a memory controller cause the memory controller to control a utilization of a memory section of the non-volatile memory module that differs from the buffer by applying a block based buffer management scheme.
  • 18. The non-transitory computer readable medium according to claim 17 that stores instructions that once executed by a memory controller cause the memory controller to manage a logical block to physical block mapping data structure that maps logical blocks to mapped physical blocks.
  • 19. The non-transitory computer readable medium according to claim 14 that stores instructions that once executed by a memory controller cause the memory controller to receive data sectors to be written to the non-volatile memory module and determine whether to write the data sectors to the buffer or to a mapped physical block.
  • 20. The non-transitory computer readable medium according to claim 19 that stores instructions that once executed by a memory controller cause the memory controller to determine in response to a state of the buffer.
  • 21. A memory controller, comprising: an allocation circuit that is arranged to allocate logically erased physical blocks of a non-volatile memory module to a spare block pool and to allocate physical blocks from the spare block pool to become buffer blocks of a buffer of the non-volatile memory module; anda buffer memory controller that is arranged to control a utilization of the buffer blocks of the buffer by applying a page based buffer management scheme.
  • 22. The memory controller according to claim 21 wherein the allocation circuit is arranged to initially allocate a predefined number of physical blocks to the spare block pool; and further allocate to the spare block pool additional logically erased physical blocks.
  • 23. The memory controller according to claim 21 wherein the buffer memory controller is arranged to control cleaning operations in response to a status of the buffer.
  • 24. The memory controller according to claim 21 comprising a block control circuit that is arranged to control a utilization of a memory section of the non-volatile memory module that differs from the buffer by applying a block based buffer management scheme.
  • 25. The memory controller according to claim 24 that is arranged to manage a logical block to physical block mapping data structure that maps logical blocks to mapped physical blocks.
  • 26. The memory controller according to claim 24 comprising an interface that is arranged to receive data sectors to be written to the non-volatile memory module and wherein the memory controller is arranged to determine whether to write the data sectors to the buffer or to a mapped physical block.
  • 27. The memory controller according to claim 26 wherein the determining is responsive to a state of the buffer.
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a continuation in part of U.S. patent application Ser. No. 13/859,497 filing date Apr. 9, 2013 titled “ADVANCED MANAGEMENT OF A NON-VOLATILE MEMORY” which is a continuation in part of U.S. patent application Ser. No. 13/434,083, filing date Mar. 29, 2012, titled “SYSTEM AND METHOD FOR FLASH MEMORY MANAGEMENT”, which claims priority from U.S. provisional patent Ser. No. 61/485,397 filing date May 12, 2011, all applications are incorporated herein by reference.

7079436 Perner et al. Jul 2006 B2
7149950 Spencer et al. Dec 2006 B2
7177977 Chen et al. Feb 2007 B2
7188228 Chang et al. Mar 2007 B1
7191379 Adelmann et al. Mar 2007 B2
7196946 Chen et al. Mar 2007 B2
7203874 Roohparvar Apr 2007 B2
7212426 Park et al. May 2007 B2
7290203 Emma et al. Oct 2007 B2
7292365 Knox Nov 2007 B2
7301928 Nakabayashi et al. Nov 2007 B2
7315916 Bennett et al. Jan 2008 B2
7388781 Litsyn et al. Jun 2008 B2
7395404 Gorobets et al. Jul 2008 B2
7441067 Gorobets et al. Oct 2008 B2
7443729 Li et al. Oct 2008 B2
7450425 Aritome Nov 2008 B2
7454670 Kim et al. Nov 2008 B2
7466575 Shalvi et al. Dec 2008 B2
7533328 Alrod et al. May 2009 B2
7558109 Brandman et al. Jul 2009 B2
7593263 Sokolov et al. Sep 2009 B2
7610433 Randell et al. Oct 2009 B2
7613043 Cornwell et al. Nov 2009 B2
7619922 Li et al. Nov 2009 B2
7697326 Sommer et al. Apr 2010 B2
7706182 Shalvi et al. Apr 2010 B2
7716538 Gonzalez et al. May 2010 B2
7804718 Kim Sep 2010 B2
7805663 Brandman et al. Sep 2010 B2
7805664 Yang et al. Sep 2010 B1
7844877 Litsyn et al. Nov 2010 B2
7911848 Eun et al. Mar 2011 B2
7961797 Yang et al. Jun 2011 B1
7975192 Sommer et al. Jul 2011 B2
8020073 Emma et al. Sep 2011 B2
8108590 Chow et al. Jan 2012 B2
8122328 Liu et al. Feb 2012 B2
8159881 Yang Apr 2012 B2
8190961 Yang et al. May 2012 B1
8250324 Haas et al. Aug 2012 B2
8300823 Bojinov et al. Oct 2012 B2
8305812 Levy et al. Nov 2012 B2
8327246 Weingarten et al. Dec 2012 B2
8407560 Ordentlich et al. Mar 2013 B2
8417893 Khmelnitsky et al. Apr 2013 B2
20010034815 Dugan et al. Oct 2001 A1
20020063774 Hillis et al. May 2002 A1
20020085419 Kwon et al. Jul 2002 A1
20020154769 Petersen et al. Oct 2002 A1
20020156988 Toyama et al. Oct 2002 A1
20020174156 Birru et al. Nov 2002 A1
20030014582 Nakanishi Jan 2003 A1
20030065876 Lasser Apr 2003 A1
20030101404 Zhao et al. May 2003 A1
20030105620 Bowen Jun 2003 A1
20030177300 Lee et al. Sep 2003 A1
20030192007 Miller et al. Oct 2003 A1
20040015771 Lasser et al. Jan 2004 A1
20040030971 Tanaka et al. Feb 2004 A1
20040059768 Denk et al. Mar 2004 A1
20040080985 Chang et al. Apr 2004 A1
20040153722 Lee Aug 2004 A1
20040153817 Norman et al. Aug 2004 A1
20040181735 Xin Sep 2004 A1
20040203591 Lee Oct 2004 A1
20040210706 In et al. Oct 2004 A1
20050013165 Ban Jan 2005 A1
20050018482 Cernea et al. Jan 2005 A1
20050083735 Chen et al. Apr 2005 A1
20050117401 Chen et al. Jun 2005 A1
20050120265 Pline et al. Jun 2005 A1
20050128811 Kato et al. Jun 2005 A1
20050138533 Le-Bars et al. Jun 2005 A1
20050144213 Simkins et al. Jun 2005 A1
20050144368 Chung et al. Jun 2005 A1
20050169057 Shibata et al. Aug 2005 A1
20050172179 Brandenberger et al. Aug 2005 A1
20050213393 Lasser Sep 2005 A1
20050243626 Ronen Nov 2005 A1
20060059406 Micheloni et al. Mar 2006 A1
20060059409 Lee Mar 2006 A1
20060064537 Oshima Mar 2006 A1
20060101193 Murin May 2006 A1
20060195651 Estakhri et al. Aug 2006 A1
20060203587 Li et al. Sep 2006 A1
20060221692 Chen Oct 2006 A1
20060248434 Radke et al. Nov 2006 A1
20060268608 Noguchi et al. Nov 2006 A1
20060282411 Fagin et al. Dec 2006 A1
20060284244 Forbes et al. Dec 2006 A1
20060294312 Walmsley Dec 2006 A1
20070025157 Wan et al. Feb 2007 A1
20070063180 Asano et al. Mar 2007 A1
20070081388 Joo Apr 2007 A1
20070098069 Gordon May 2007 A1
20070103992 Sakui et al. May 2007 A1
20070104004 So et al. May 2007 A1
20070109858 Conley et al. May 2007 A1
20070124652 Litsyn et al. May 2007 A1
20070140006 Chen et al. Jun 2007 A1
20070143561 Gorobets Jun 2007 A1
20070150694 Chang et al. Jun 2007 A1
20070168625 Cornwell et al. Jul 2007 A1
20070171714 Wu et al. Jul 2007 A1
20070171730 Ramamoorthy et al. Jul 2007 A1
20070180346 Murin Aug 2007 A1
20070223277 Tanaka et al. Sep 2007 A1
20070226582 Tang et al. Sep 2007 A1
20070226592 Radke Sep 2007 A1
20070228449 Takano et al. Oct 2007 A1
20070253249 Kang et al. Nov 2007 A1
20070253250 Shibata et al. Nov 2007 A1
20070263439 Cornwell et al. Nov 2007 A1
20070266291 Toda et al. Nov 2007 A1
20070271494 Gorobets Nov 2007 A1
20070297226 Mokhlesi Dec 2007 A1
20080010581 Alrod et al. Jan 2008 A1
20080028014 Hilt et al. Jan 2008 A1
20080049497 Mo Feb 2008 A1
20080055989 Lee et al. Mar 2008 A1
20080082897 Brandman et al. Apr 2008 A1
20080092026 Brandman et al. Apr 2008 A1
20080104309 Cheon et al. May 2008 A1
20080112238 Kim et al. May 2008 A1
20080116509 Harari et al. May 2008 A1
20080126686 Sokolov et al. May 2008 A1
20080127104 Li et al. May 2008 A1
20080128790 Jung Jun 2008 A1
20080130341 Shalvi et al. Jun 2008 A1
20080137413 Kong et al. Jun 2008 A1
20080137414 Park et al. Jun 2008 A1
20080141043 Flynn et al. Jun 2008 A1
20080148115 Sokolov et al. Jun 2008 A1
20080158958 Sokolov et al. Jul 2008 A1
20080159059 Moyer Jul 2008 A1
20080162079 Astigarraga et al. Jul 2008 A1
20080168216 Lee Jul 2008 A1
20080168320 Cassuto et al. Jul 2008 A1
20080181001 Shalvi Jul 2008 A1
20080198650 Shalvi et al. Aug 2008 A1
20080198652 Shalvi et al. Aug 2008 A1
20080201620 Gollub Aug 2008 A1
20080209114 Chow et al. Aug 2008 A1
20080219050 Shalvi et al. Sep 2008 A1
20080225599 Chae Sep 2008 A1
20080250195 Chow et al. Oct 2008 A1
20080263262 Sokolov et al. Oct 2008 A1
20080282106 Shalvi et al. Nov 2008 A1
20080285351 Shlick et al. Nov 2008 A1
20080301532 Uchikawa et al. Dec 2008 A1
20090024905 Shalvi et al. Jan 2009 A1
20090027961 Park et al. Jan 2009 A1
20090043951 Shalvi et al. Feb 2009 A1
20090046507 Aritome Feb 2009 A1
20090072303 Prall et al. Mar 2009 A9
20090091979 Shalvi Apr 2009 A1
20090103358 Sommer et al. Apr 2009 A1
20090106485 Anholt Apr 2009 A1
20090113275 Chen et al. Apr 2009 A1
20090125671 Flynn May 2009 A1
20090132755 Radke May 2009 A1
20090144598 Yoon et al. Jun 2009 A1
20090144600 Perlmutter et al. Jun 2009 A1
20090150599 Bennett Jun 2009 A1
20090150748 Egner et al. Jun 2009 A1
20090157964 Kasorla et al. Jun 2009 A1
20090158126 Perlmutter et al. Jun 2009 A1
20090168524 Golov et al. Jul 2009 A1
20090187803 Anholt et al. Jul 2009 A1
20090199074 Sommer Aug 2009 A1
20090213653 Perlmutter et al. Aug 2009 A1
20090213654 Perlmutter et al. Aug 2009 A1
20090228761 Perlmutter et al. Sep 2009 A1
20090240872 Perlmutter et al. Sep 2009 A1
20090282185 Van Cauwenbergh Nov 2009 A1
20090282186 Mokhlesi et al. Nov 2009 A1
20090287930 Nagaraja Nov 2009 A1
20090300269 Radke et al. Dec 2009 A1
20090323942 Sharon et al. Dec 2009 A1
20100005270 Jiang Jan 2010 A1
20100025811 Bronner et al. Feb 2010 A1
20100030944 Hinz Feb 2010 A1
20100058146 Weingarten et al. Mar 2010 A1
20100064096 Weingarten et al. Mar 2010 A1
20100088557 Weingarten et al. Apr 2010 A1
20100091535 Sommer et al. Apr 2010 A1
20100095186 Weingarten Apr 2010 A1
20100110787 Shalvi et al. May 2010 A1
20100115376 Shalvi et al. May 2010 A1
20100122113 Weingarten et al. May 2010 A1
20100124088 Shalvi et al. May 2010 A1
20100131580 Kanter et al. May 2010 A1
20100131806 Weingarten et al. May 2010 A1
20100131809 Katz May 2010 A1
20100131826 Shalvi et al. May 2010 A1
20100131827 Sokolov et al. May 2010 A1
20100131831 Weingarten et al. May 2010 A1
20100146191 Katz Jun 2010 A1
20100146192 Weingarten et al. Jun 2010 A1
20100149881 Lee et al. Jun 2010 A1
20100172179 Gorobets et al. Jul 2010 A1
20100174853 Lee et al. Jul 2010 A1
20100180073 Weingarten et al. Jul 2010 A1
20100199149 Weingarten et al. Aug 2010 A1
20100211724 Weingarten Aug 2010 A1
20100211833 Weingarten Aug 2010 A1
20100211856 Weingarten Aug 2010 A1
20100241793 Sugimoto et al. Sep 2010 A1
20100246265 Moschiano et al. Sep 2010 A1
20100251066 Radke Sep 2010 A1
20100253555 Weingarten et al. Oct 2010 A1
20100257309 Barsky et al. Oct 2010 A1
20100269008 Leggette et al. Oct 2010 A1
20100293321 Weingarten Nov 2010 A1
20100318724 Yeh Dec 2010 A1
20110051521 Levy et al. Mar 2011 A1
20110055461 Steiner et al. Mar 2011 A1
20110093650 Kwon et al. Apr 2011 A1
20110096612 Steiner et al. Apr 2011 A1
20110099460 Dusija et al. Apr 2011 A1
20110119562 Steiner et al. May 2011 A1
20110153919 Sabbag Jun 2011 A1
20110161775 Weingarten Jun 2011 A1
20110194353 Hwang et al. Aug 2011 A1
20110209028 Post et al. Aug 2011 A1
20110214029 Steiner et al. Sep 2011 A1
20110214039 Steiner et al. Sep 2011 A1
20110246792 Weingarten Oct 2011 A1
20110246852 Sabbag Oct 2011 A1
20110252187 Segal et al. Oct 2011 A1
20110252188 Weingarten Oct 2011 A1
20110271043 Segal et al. Nov 2011 A1
20110302428 Weingarten Dec 2011 A1
20120001778 Steiner et al. Jan 2012 A1
20120005554 Steiner et al. Jan 2012 A1
20120005558 Steiner et al. Jan 2012 A1
20120005560 Steiner et al. Jan 2012 A1
20120008401 Katz et al. Jan 2012 A1
20120008414 Katz et al. Jan 2012 A1
20120017136 Ordentlich et al. Jan 2012 A1
20120051144 Weingarten et al. Mar 2012 A1
20120063227 Weingarten et al. Mar 2012 A1
20120066441 Weingarten Mar 2012 A1
20120110250 Sabbag et al. May 2012 A1
20120124273 Goss et al. May 2012 A1
20120246391 Meir et al. Sep 2012 A1
Non-Patent Literature Citations (37)
Entry
Search Report of PCT Patent Application WO 2009/118720 A3, Mar. 4, 2010.
Search Report of PCT Patent Application WO 2009/095902 A3, Mar. 4, 2010.
Search Report of PCT Patent Application WO 2009/078006 A3, Mar. 4, 2010.
Search Report of PCT Patent Application WO 2009/074979 A3, Mar. 4, 2010.
Search Report of PCT Patent Application WO 2009/074978 A3, Mar. 4, 2010.
Search Report of PCT Patent Application WO 2009/072105 A3, Mar. 4, 2010.
Search Report of PCT Patent Application WO 2009/072104 A3, Mar. 4, 2010.
Search Report of PCT Patent Application WO 2009/072103 A3, Mar. 4, 2010.
Search Report of PCT Patent Application WO 2009/072102 A3, Mar. 4, 2010.
Search Report of PCT Patent Application WO 2009/072101 A3, Mar. 4, 2010.
Search Report of PCT Patent Application WO 2009/072100 A3, Mar. 4, 2010.
Search Report of PCT Patent Application WO 2009/053963 A3, Mar. 4, 2010.
Search Report of PCT Patent Application WO 2009/053962 A3, Mar. 4, 2010.
Search Report of PCT Patent Application WO 2009/053961 A3, Mar. 4, 2010.
Search Report of PCT Patent Application WO 2009/037697 A3, Mar. 4, 2010.
Yani Chen, Keshab K. Parhi, “Small Area Parallel Chien Search Architectures for Long BCH Codes”, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 12, No. 5, May 2004.
Yuejian Wu, “Low Power Decoding of BCH Codes”, Nortel Networks, Ottawa, Ont., Canada, in Proceedings of the 2004 International Symposium on Circuits and Systems (ISCAS '04), May 23-26, 2004, vol. 2, pp. II-369-372.
Michael Purser, “Introduction to Error Correcting Codes”, Artech House Inc., 1995.
Ron M. Roth, “Introduction to Coding Theory”, Cambridge University Press, 2006.
Akash Kumar, Sergei Sawitzki, “High-Throughput and Low Power Architectures for Reed Solomon Decoder”, (a.kumar at tue.nl, Eindhoven University of Technology and sergei.sawitzki at philips.com), Oct. 2005.
Todd K. Moon, “Error Correction Coding: Mathematical Methods and Algorithms”, John Wiley & Sons, Inc., 2005.
Richard E. Blahut, “Algebraic Codes for Data Transmission”, Cambridge University Press, 2003.
David Esseni, Bruno Ricco, “Trading-Off Programming Speed and Current Absorption in Flash Memories with the Ramped-Gate Programming Technique”, IEEE Transactions on Electron Devices, vol. 47, No. 4, Apr. 2000.
Giovanni Campardo, Rino Micheloni, David Novosel, “VLSI-Design of Non-Volatile Memories”, Springer Berlin Heidelberg New York, 2005.
John G. Proakis, “Digital Communications”, 3rd ed., New York: McGraw-Hill, 1995.
J.M. Portal, H. Aziza, D. Nee, “EEPROM Memory: Threshold Voltage Built in Self Diagnosis”, ITC International Test Conference, Paper 2.1, Feb. 2005.
J.M. Portal, H. Aziza, D. Nee, “EEPROM Diagnosis Based on Threshold Voltage Embedded Measurement”, Journal of Electronic Testing: Theory and Applications 21, 33-42, 2005.
G. Tao, A. Scarpa, J. J Dijkstra, W. Stidl, F. Kuper, “Data retention prediction for modern floating gate non-volatile memories”, Microelectronics Reliability 40 (2000), 1561-1566.
T. Himeno, N. Matsukawa, H. Hazama, K. Sakui, M. Oshikiri, K. Masuda, K. Kanda, Y. Itoh, J. Miyamoto, “A New Technique for Measuring Threshold Voltage Distribution in Flash EEPROM Devices”, Proc. IEEE 1995 Int. Conference on Microelectronics Test Structures, vol. 8, Mar. 1995.
Boaz Eitan, Guy Cohen, Assaf Shappir, Eli Lusky, Amichai Givant, Meir Janai, Ilan Bloom, Yan Polansky, Oleg Dadashev, Avi Lavan, Ran Sahar, Eduardo Maayan, “4-bit per Cell NROM Reliability”, appears on the website of Saifun.com, 2005.
Paulo Cappelletti, Clara Golla, Piero Olivo, Enrico Zanoni, “Flash Memories”, Kluwer Academic Publishers, 1999.
JEDEC Standard, “Stress-Test-Driven Qualification of Integrated Circuits”, JEDEC Solid State Technology Association. JEDEC Standard No. 47F pp. 1-26, Dec. 2007.
Dempster, et al., “Maximum Likelihood from Incomplete Data via the EM Algorithm”, Journal of the Royal Statistical Society, Series B (Methodological), vol. 39, No. 1 (1977), pp. 1-38.
Mielke, et al., “Flash EEPROM Threshold Instabilities due to Charge Trapping During Program/Erase Cycling”, IEEE Transactions on Device and Materials Reliability, vol. 4, No. 3, Sep. 2004, pp. 335-344.
Daneshbeh, “Bit Serial Systolic Architectures for Multiplicative Inversion and Division over GF (2)”, A thesis presented to the University of Waterloo, Ontario, Canada, 2005, pp. 1-118.
Chen, Formulas for the solutions of Quadratic Equations over GF (2), IEEE Trans. Inform. Theory, vol. IT-28, No. 5, Sep. 1982, pp. 792-794.
Berlekamp et al., “On the Solution of Algebraic Equations over Finite Fields”, Inform. Cont. 10, Oct. 1967, pp. 553-564.
Provisional Applications (1)
Number Date Country
61485397 May 2011 US
Continuations (1)
Number Date Country
Parent 13434083 Mar 2012 US
Child 13859497 US
Continuation in Parts (1)
Number Date Country
Parent 13859497 Apr 2013 US
Child 14097086 US