Accelerating programming of a flash memory module

Information

  • Patent Grant
  • Patent Number
    9,972,393
  • Date Filed
    Thursday, July 3, 2014
  • Date Issued
    Tuesday, May 15, 2018
Abstract
According to an embodiment of the invention there is provided a method for accelerating programming of data, the method may include receiving multiple input data units that were sent from a host computer; wherein the input data units may include first and second input data units; first level programming the first input data units to cache memory pages and first level programming the second input data units to first level target memory pages; and applying a copy back operation that comprises retrieving the first input data units from the cache memory pages and second level programming the first input data units to second level target memory pages; wherein any target page out of the first level target pages and the second level target pages differs from a cache memory page; and wherein the first level programming is faster than the second level programming.
Description
BACKGROUND OF THE INVENTION

Multi level cells (MLC) flash memory cells may store multiple bits per cell. These multiple bits per cell may include a least significant bit (LSB), a most significant bit (MSB) and zero or more central significant bits (CSBs).


Bits of different order (also referred to as bits of different significance) are stored by programmings of different significance. MSB bits are programmed by MSB programming, LSB bits are programmed by LSB programming and each CSB bit is programmed by the appropriate CSB programming. Higher significance bit programming is faster than lower significance bit programming.


When performing MSB programming a host interface of a memory controller can slow down the programming process (form a bottleneck) while when performing LSB programming (which is slower than MSB programming) the flash memory module can slow down the programming process (form a bottleneck).



FIG. 1 is a prior art timing diagram 100 that shows (a) data being written 10 by a host computer to a host interface of a memory controller, (b) data being written to a flash memory module from a flash memory module interface of a memory controller, (c) a first idle event 31 in which a flash memory module waits for data from a host computer and (d) a second idle event 32 in which the host computer is barred from sending more information—as the programming of data to a flash memory module did not end.


There is a growing need to increase the programming speed especially in devices where an internal volatile memory of a memory controller is not big enough to smooth (by buffering) the incoming data.


SUMMARY

According to an embodiment of the invention there may be provided a method, a non-transitory computer readable medium and a memory controller for acceleration of programming.


According to an embodiment of the invention there may be provided a method for accelerating programming of data, the method may include receiving multiple input data units that were sent from a host computer; wherein the input data units comprise first and second input data units; first level programming the first input data units to cache memory pages and first level programming the second input data units to first level target memory pages; and applying a copy back operation that comprises retrieving the first input data units from the cache memory pages and second level programming the first input data units to second level target memory pages; wherein any target page out of the first level target pages and the second level target pages differs from a cache memory page; and wherein the first level programming may be faster than the second level programming.


The first level programming may be a most significant bit (MSB) programming.


The second level programming may be a least significant bit (LSB) programming.


The first level programming of the first and second input data units may occur in parallel with each other.


The first level programming of the first and second input data units may occur in a partially overlapping manner.


The method may include preventing programming of any input data unit after the input data unit is programmed to a target page.


The ratio between an overall number of dies performing Copy Back and an overall number of dies performing caching may exceed one.


The ratio between an overall number of dies performing Caching and an overall number of dies performing Copy Back may be a fraction of a ratio between programming speeds of the first level and second level programming.


The fraction may be one half.


The input data units may include third input data units; and the method may include first level programming the third input data units to additional cache memory pages; wherein the applying of the copy back operation may include retrieving the third input data units from the additional cache memory pages and third level programming the third input data units to third level target memory pages; and wherein the third level programming differs by speed from the first and second level programming.


According to an embodiment of the invention there may be provided a method for accelerating programming of data, the method may include receiving multiple input data units by a memory controller and from a host computer; wherein the input data units comprise first and second input data units; instructing a programming circuit of a flash memory module to perform first level programming of the first input data units to cache memory pages of the flash memory module and to perform first level programming of the second input data units to first level target memory pages of the flash memory module; and instructing a copy back circuit of the flash memory module to apply a copy back operation that comprises retrieving the first input data units from the cache memory pages and second level programming the first input data units to second level target memory pages; wherein any target page out of the first level target pages and the second level target pages differs from a cache memory page; and wherein the first level programming may be faster than the second level programming. The method may include allocating cache memory pages and target pages.


The allocating may be responsive to programming speeds of the first level and second level programming.


The input data units may include third input data units; wherein the method may include first level programming the third input data units to additional cache memory pages; wherein the applying of the copy back operation may include retrieving the third input data units from the additional cache memory pages and third level programming the third input data units to third level target memory pages; and wherein the third level programming may differ by speed from the first and second level programming.


According to an embodiment of the invention there may be provided a non-transitory computer readable medium that stores instructions that once executed by a computer cause the computer to execute the stages of receiving multiple input data units that were sent from a host computer; wherein the input data units may include first and second input data units; first level programming the first input data units to cache memory pages and first level programming the second input data units to first level target memory pages; and applying a copy back operation that comprises retrieving the first input data units from the cache memory pages and second level programming the first input data units to second level target memory pages; wherein any target page out of the first level target pages and the second level target pages may differ from a cache memory page; and wherein the first level programming may be faster than the second level programming.


According to an embodiment of the invention there may be provided a memory controller that may include a control unit and an interface; wherein the interface may be arranged to receive multiple input data units from a host computer; wherein the input data units may include first and second input data units; wherein the control unit may be arranged to instruct a programming circuit of a flash memory module to perform first level programming of the first input data units to cache memory pages of the flash memory module and to perform first level programming of the second input data units to first level target memory pages of the flash memory module; and instruct a copy back circuit of the flash memory module to apply a copy back operation that comprises retrieving the first input data units from the cache memory pages and second level programming the first input data units to second level target memory pages; wherein any target page out of the first level target pages and the second level target pages differs from a cache memory page; and wherein the first level programming may be faster than the second level programming.


According to an embodiment of the invention there may be provided a flash memory module that may include an interface, a copy back circuit, a programming circuit and flash memory pages; wherein the interface may be arranged to receive multiple input data units from a memory controller; wherein the input data units may include first and second input data units; wherein the programming circuit may be arranged to perform first level programming of the first input data units to cache memory pages of the flash memory module and to perform first level programming of the second input data units to first level target memory pages of the flash memory module; and wherein the copy back circuit may be arranged to apply a copy back operation that may include retrieving the first input data units from the cache memory pages and second level programming the first input data units to second level target memory pages; wherein any target page out of the first level target pages and the second level target pages may differ from a cache memory page; and wherein the first level programming may be faster than the second level programming.





BRIEF DESCRIPTION OF THE DRAWINGS

The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of operation, together with objects, features, and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings in which:



FIG. 1 is a prior art timing diagram;



FIG. 2 illustrates a method according to an embodiment of the invention;



FIG. 3 illustrates a method according to an embodiment of the invention;



FIG. 4 illustrates a system according to an embodiment of the invention; and



FIG. 5 is a timing diagram according to an embodiment of the invention.





DETAILED DESCRIPTION OF THE DRAWINGS

In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention. However, it will be understood by those skilled in the art that the present invention may be practiced without these specific details. In other instances, well-known methods, procedures, and components have not been described in detail so as not to obscure the present invention.


The subject matter regarded as the invention is particularly pointed out and distinctly claimed in the concluding portion of the specification. The invention, however, both as to organization and method of operation, together with objects, features, and advantages thereof, may best be understood by reference to the following detailed description when read with the accompanying drawings.


It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements for clarity. Further, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements.


Because the illustrated embodiments of the present invention may for the most part, be implemented using electronic components and circuits known to those skilled in the art, details will not be explained in any greater extent than that considered necessary as illustrated above, for the understanding and appreciation of the underlying concepts of the present invention and in order not to obfuscate or distract from the teachings of the present invention.


Any reference in the specification to a method should be applied mutatis mutandis to a system capable of executing the method and should be applied mutatis mutandis to a non-transitory computer readable medium that stores instructions that once executed by a computer result in the execution of the method.


Any reference in the specification to a system should be applied mutatis mutandis to a method that may be executed by the system and should be applied mutatis mutandis to a non-transitory computer readable medium that stores instructions that may be executed by the system.


Any reference in the specification to a non-transitory computer readable medium should be applied mutatis mutandis to a system capable of executing the instructions stored in the non-transitory computer readable medium and should be applied mutatis mutandis to method that may be executed by a computer that reads the instructions stored in the non-transitory computer readable medium.



FIG. 2 illustrates method 200 according to an embodiment of the invention.


Method 200 is executed by a flash memory module that may be coupled to a memory controller that in turn is coupled to a host computer.


Method 200 may start by stage 210 of receiving multiple input data units that were sent from a host computer. The input data units comprise first and second input data units. The first input data units are to be cached while the second data units are to be written to their target memory pages.


Stage 210 may be followed by stages 220 and 230.


Stage 220 may include first level programming the first input data units to cache memory pages and first level programming the second input data units to first level target memory pages.


Stage 230 may include applying a copy back operation that comprises retrieving the first input data units from the cache memory pages and second level programming the first input data units to second level target memory pages.


Any target page out of the first level target pages and the second level target pages differs from a cache memory page. The first level programming is faster than the second level programming. Cache memory pages may be SLC-mode pages within an MLC device.
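For illustration only, the receive/program/copy-back flow of stages 210-230 can be sketched as follows; the FlashStub class, the page names and the method names are hypothetical and not part of the patent:

```python
# Hypothetical in-memory model of a flash memory module; illustration only.
class FlashStub:
    def __init__(self):
        self.pages = {}  # page name -> (data, programming level)

    def program(self, page, data, level):
        # On real NAND, level 1 (e.g. MSB / SLC-mode) programming is
        # faster than level 2 (e.g. LSB) programming.
        self.pages[page] = (data, level)

    def read(self, page):
        return self.pages[page][0]


def method_200(flash, first_units, second_units):
    # Stage 220: fast first level programming of both groups of units.
    for i, unit in enumerate(first_units):
        flash.program(f"cache_{i}", unit, level=1)    # cache memory pages
    for i, unit in enumerate(second_units):
        flash.program(f"target1_{i}", unit, level=1)  # first level targets
    # Stage 230: copy back - retrieve the cached units and program them
    # with the slower second level programming to their target pages.
    for i in range(len(first_units)):
        unit = flash.read(f"cache_{i}")
        flash.program(f"target2_{i}", unit, level=2)  # second level targets


flash = FlashStub()
method_200(flash, first_units=["u0", "u1"], second_units=["u2"])
```

The second input data units reach their target pages directly, while the first input data units are staged through cache pages and later rewritten at the second level.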


The first level programming may be a most significant bit (MSB) programming. The second level programming may be a least significant bit (LSB) programming.


It is noted that the first and second level programming may be selected from a group of different bit significance programming that may include MSB programming, LSB programming and at least one CSB programming.


The first level programming of the first and second input data units may occur in parallel with each other, in a partially or fully overlapping manner.


Programming data to a target page may mean that the data is not further programmed to another page. Thus, the method may include preventing further programming of any input data unit after the input data unit is programmed to a target page. Alternatively, further programming of the data may be performed during memory management operations such as cleaning or merging.


The ratio between an overall size (or overall number) of caching dies and an overall size (or overall number) of target dies may exceed one, may equal one or may be lower than one.


The ratio between an overall size (or overall number) of target dies and an overall size (or overall number) of caching dies may be related to (for example may be a fraction of) a ratio between programming speeds of the first level and second level programming. The optimal ratio should equalize the performance of the caching process and the copy-back process.


The fraction may be equal to the ratio between the programming speeds of caching and copy-back, meaning that the faster process will need fewer dies for its operation and vice versa. In case part of the pages are programmed directly without being cached first, the ratio would be one half, one third and the like, according to the directly programmed fraction of the overall pages.


The method may be applied mutatis mutandis to more than two programming levels. For example, the input data units further comprise third input data units; and the method may include first level programming the third input data units to additional cache memory pages. The applying of the copy back operation may also include retrieving the third input data units from the additional cache memory pages and third level programming the third input data units to third level target memory pages. The third level programming differs by speed from the first and second level programming.



FIG. 3 illustrates method 300 according to an embodiment of the invention.


Method 300 is executed by a memory controller that is coupled to a host computer and to a flash memory module.


Method 300 may start by stage 310 of receiving multiple input data units by a memory controller and from a host computer; wherein the input data units comprise first and second input data units.


Stage 310 may be followed by stages 320 and 330.


Stage 320 may include instructing a programming circuit of a flash memory module to perform first level programming the first input data units to cache memory pages of the flash memory module and to perform first level programming the second input data units to first level target memory pages of the flash memory module.


Stage 330 may include instructing a copy back circuit of the flash memory module to apply a copy back operation that comprises retrieving the first input data units from the cache memory pages and second level programming the first input data units to the second level target memory pages. Any target page out of the first level target pages and the second level target pages differs from a cache memory page. The first level programming is faster than the second level programming.


Method 300 may also include stage 305 of allocating cache memory pages and target pages, and may include allocating dies for caching process and for copy-back process.


The allocating of dies may be responsive to programming speeds of the first level and second level programming.


The allocating can include allocating memory dies for caching process and dies for copy back process so that the ratio between an overall size (or overall number) of cache memory dies and an overall size (or overall number) of copy back dies may exceed one, may equal one or may be lower than one.


The allocating can include allocating cache memory dies and dies for copy back so that the ratio between an overall size (or overall number) of copy back dies and an overall size (or overall number) of caching memory dies may be a fraction of a ratio between programming speeds of the first level and second level programming.


If first level programming is done directly while the second level is done via the copy back process, the fraction may be one half, one third, and the like.


The method may be applied mutatis mutandis to more than two programming levels. For example, the input data units may include third input data units. The method may include instructing the programming circuit of the flash memory module to perform first level programming of the third input data units to additional cache memory pages. The applying of the copy back operation further comprises retrieving the third input data units from the additional cache memory pages and third level programming the third input data units to third level target memory pages. The third level programming differs by speed from the first and second level programming.


In order to balance and optimize the process, the Caching and Copy Back stages need to reach approximately the same performance.


Using the ratio ρ between the MSB and LSB page program bandwidths (BW, single die):


MSB BW/LSB BW=ρ




Assume that caching and MSB programming have similar performance, and that the group of caching dies performs both the caching of data designated for the second level and the direct first level programming. The caching process therefore processes twice as much data as the Copy Back process. The Caching to Copy Back ratio (single die) is then:


MSB BW/(2·LSB BW)=½ρ





Optimal balancing would be L/M≈½ρ, where L is the number of dies performing Copy-Back (toward LSB) and M is the number of dies performing Caching (toward MSB).


Other ratios (other than ½) can be applied.
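The balancing rule above can be sketched numerically; the function name and the per-die bandwidth figures (50 and 8⅓ MB/s, taken from the example further below) are illustrative assumptions:

```python
def balance_dies(total_dies, msb_bw, lsb_bw):
    """Split dies between Caching (M) and Copy-Back (L) so that
    L/M ~= rho/2, where rho = msb_bw / lsb_bw.

    A sketch, assuming the caching dies process twice as much data
    as the copy-back dies (direct pages plus cached pages).
    """
    rho = msb_bw / lsb_bw
    # L = (rho / 2) * M  and  L + M = total_dies
    m = round(total_dies / (1 + rho / 2))  # caching dies
    l = total_dies - m                     # copy-back dies
    return m, l


# Per-die MSB BW = 50, LSB BW = 8 1/3 -> rho = 6, so L/M = 3.
m, l = balance_dies(8, 50, 25 / 3)  # -> 2 caching dies, 6 copy-back dies
```

With eight dies this yields the 2/6 split used in the worked example later in the description.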


Average write BW is given by:


Absolute Average Write BW = (Total data written)/(Total time it took) = (Data Written to MSB+Data Written to LSB)/(Time took to write MSB+Time took to write LSB) = (Data Written to MSB+Data Written to LSB)/((Data Written to MSB)/(Write BW MSB)+(Data Written to LSB)/(Write BW LSB))


Assuming the same amount of data is written to each level, we get:


Average Write BW = 2((Write BW MSB)^−1+(Write BW LSB)^−1)^−1








Performance of prior art device: Average Write BW=2((Effective Write BW MSB)^−1+(Effective Write BW LSB)^−1)^−1


Where Effective Write BW=Min(Write BW, Host Interface BW)=Min(Write BW, α), due to the Host interface acting as a bottleneck (here α denotes the Host Interface BW).


Total MSB pages write BW is higher than LSB→Write BW MSB>Write BW LSB


Total MSB pages write BW is higher than Host interface speed→Write BW MSB>α→Effective Write BW MSB=α


Total LSB pages write BW is lower than Host interface speed→Write BW LSB<α→Effective Write BW LSB=Write BW LSB


Thus: Average Write BW=2(α^−1+(Write BW LSB)^−1)^−1


Performance when practicing a method according to an embodiment of the invention:


The flash memory module is virtually divided into two groups:


a. N—number of dies performing Caching to SLC and MSB;


b. M—number of dies performing Copy-back operations.


Data in and Caching BW are given by Effective Cache In BW=2((Effective Write BW MSB(N dies))^−1+(Effective Write BW SLC(N dies))^−1)^−1


Assuming that the number N was chosen in such a manner that the caching performance is near the Host interface speed, we can assume that the effective bandwidth equals the caching bandwidth: Effective Cache In BW=2((N·Write BW MSB)^−1+(N·Write BW SLC)^−1)^−1


Copy Back BW is given by M·Copy Back BW LSB


Total performance is given by the bottleneck of those two processes: Write BW=Min(Effective Cache In BW,M·Copy Back BW LSB)
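This bottleneck model can be sketched in a few lines; the function and parameter names are hypothetical, and the per-die copy back bandwidth is passed in as a parameter rather than derived:

```python
def system_write_bw(n, m, msb_bw, slc_bw, copy_back_bw_lsb, host_bw):
    """Overall write BW as the bottleneck of two concurrent processes:
    cache-in over N dies (capped by the host interface) and copy back
    over M dies. A sketch of the model, not the patent's exact circuit."""
    # Cache-in alternates MSB and SLC programming over the N caching dies.
    cache_in = 2.0 / (1.0 / (n * msb_bw) + 1.0 / (n * slc_bw))
    cache_in = min(cache_in, host_bw)
    # Copy back drains the cached data over the M copy-back dies.
    copy_back = m * copy_back_bw_lsb
    return min(cache_in, copy_back)
```

For example, with N=2 caching dies at 50 MB/s (MSB and SLC alike), M=6 copy-back dies, and a 100 MB/s host interface, both processes reach roughly 100 MB/s and neither starves the other.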


Example:


















Variable            Symbol    Value    Units
Host I/F BW         HBW       100      [MB/s]
NAND I/F BW         NBW       300      [MB/s]
Program Speed SLC   γ         50       [MB/s]
Program Speed MSB   α         50       [MB/s]
Program Speed LSB   β         8⅓       [MB/s]
Number of dies:
  Total                       8
  Caching           N         2
  Copy-Back         M         6










Host Interface BW=100 MB/s


Average NAND die Write BW=2(α^−1+β^−1)^−1=2(50^−1+(8⅓)^−1)^−1≈14.3 MB/s


If there were no Host Interface bottleneck:


Average Array Write BW=(N+M)·Average NAND die Write BW=114.3 MB/s


The Host Interface bottleneck causes:


Actual Write BW=2((MIN(HBW,(N+M)·α))^−1+(MIN(HBW,(N+M)·β))^−1)^−1=2(100^−1+(66⅔)^−1)^−1=80 MB/s








Invention Write BW=MIN (Caching, Copy_Back)=MIN (MIN (HBW, N·α),2·M·β)=MIN (MIN (100,2·50), 2·6·8⅓)=100 MB/s


Accordingly, the application of methods 200 and/or 300 results in full Host Interface BW utilization and a gain of 25% in comparison to the prior art performance.
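The worked example can be reproduced numerically (the variable names are ours; the figures, N=2, M=6, HBW=100 MB/s, α=50 and β=8⅓ MB/s per die, come from the table above):

```python
HBW = 100.0          # host interface BW [MB/s]
ALPHA = 50.0         # per-die MSB program BW [MB/s]
BETA = 25.0 / 3.0    # per-die LSB program BW [MB/s]
N, M = 2, 6          # caching dies, copy-back dies


def harmonic2(a, b):
    # Average BW for equal data amounts written at two different rates.
    return 2.0 / (1.0 / a + 1.0 / b)


# Prior art: each programming phase is capped by the host interface.
prior_art = harmonic2(min(HBW, (N + M) * ALPHA),
                      min(HBW, (N + M) * BETA))     # 80 MB/s

# Invention: bottleneck of the caching and copy back processes.
invention = min(min(HBW, N * ALPHA), 2 * M * BETA)  # 100 MB/s

gain = invention / prior_art - 1.0                  # 0.25, i.e. 25%
```

Running this reproduces the 80 MB/s prior-art figure, the 100 MB/s figure for the described scheme, and the stated 25% gain.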



FIG. 4 illustrates a flash memory module 410, a memory controller 420 and a host computer 430 according to an embodiment of the invention.


The memory controller 420 includes a control unit 422 and an interface 424. The interface may include a host interface 424(1) and a flash memory module interface 424(2).


The interface 424 is arranged to receive multiple input data units from a host computer. The input data units comprise first and second input data units.


The control unit 422 is arranged to (a) instruct a programming circuit of a flash memory module to perform first level programming the first input data units to cache memory pages of the flash memory module and to perform first level programming of the second input data units to first level target memory pages of the flash memory module; and (b) instruct a copy back circuit of the flash memory module to apply a copy back operation that comprises retrieving the first input data units from the cache memory pages and second level programming the first input data units to second level target memory pages. Any target page (out of the first level target pages and the second level target pages) differs from a cache memory page. The first level programming is faster than the second level programming.


Flash memory module 410 includes interface 412, a copy back circuit 414, a programming circuit 416, and flash memory pages 418.


The flash memory pages 418 may include flash memory pages that at a certain point in time are cache memory pages (such as 418(1)) and may include flash memory pages that at the certain point in time are non-cache memory pages and may be target memory pages (such as 418(2)). The allocation may be fixed or change over time.


The interface 412 is arranged to receive multiple input data units from a memory controller; wherein the input data units comprise first and second input data units.


The programming circuit 416 is arranged to perform first level programming the first input data units to cache memory pages of the flash memory module and to perform first level programming of the second input data units to first level target memory pages of the flash memory module.


The copy back circuit 414 is arranged to apply a copy back operation that comprises retrieving the first input data units from the cache memory pages and second level programming the first input data units to second level target memory pages. Any target page out of the first level target pages and the second level target pages may differ from a cache memory page. The first level programming is faster than the second level programming.



FIG. 5 is a timing diagram 500 according to an embodiment of the invention. The timing diagram 500 shows (a) data being written 10 by a host computer to a host interface of a memory controller, (b) first data units being written 12 to cache memory pages, and (c) data being copied back 14 to second level target pages.


The invention may also be implemented in a computer program for running on a computer system, at least including code portions for performing steps of a method according to the invention when run on a programmable apparatus, such as a computer system or enabling a programmable apparatus to perform functions of a device or system according to the invention. The computer program may cause the storage system to allocate disk drives to disk drive groups.


A computer program is a list of instructions such as a particular application program and/or an operating system. The computer program may for instance include one or more of a subroutine, a function, a procedure, an object method, an object implementation, an executable application, an applet, a servlet, a source code, an object code, a shared library/dynamic load library and/or other sequence of instructions designed for execution on a computer system.


The computer program may be stored internally on a non-transitory computer readable medium. All or some of the computer program may be provided on computer readable media permanently, removably or remotely coupled to an information processing system. The computer readable media may include, for example and without limitation, any number of the following magnetic storage media including disk and tape storage media; optical storage media such as compact disk media (e.g., CD-ROM, CD-R, etc.) and digital video disk storage media; nonvolatile memory storage media including semiconductor-based memory units such as FLASH memory, EEPROM, EPROM, ROM; ferromagnetic digital memories; MRAM; volatile storage media including registers, buffers or caches, main memory, RAM, etc.


A computer process typically includes an executing (running) program or portion of a program, current program values and state information, and the resources used by the operating system to manage the execution of the process. An operating system (OS) is the software that manages the sharing of the resources of a computer and provides programmers with an interface used to access those resources. An operating system processes system data and user input, and responds by allocating and managing tasks and internal system resources as a service to users and programs of the system.


The computer system may for instance include at least one processing unit, associated memory and a number of input/output (I/O) devices. When executing the computer program, the computer system processes information according to the computer program and produces resultant output information via I/O devices.


In the foregoing specification, the invention has been described with reference to specific examples of embodiments of the invention. It will, however, be evident that various modifications and changes may be made therein without departing from the broader spirit and scope of the invention as set forth in the appended claims.


Moreover, the terms “front,” “back,” “top,” “bottom,” “over,” “under” and the like in the description and in the claims, if any, are used for descriptive purposes and not necessarily for describing permanent relative positions. It is understood that the terms so used are interchangeable under appropriate circumstances such that the embodiments of the invention described herein are, for example, capable of operation in other orientations than those illustrated or otherwise described herein.


The connections as discussed herein may be any type of connection suitable to transfer signals from or to the respective nodes, units or devices, for example via intermediate devices. Accordingly, unless implied or stated otherwise, the connections may for example be direct connections or indirect connections. The connections may be illustrated or described in reference to being a single connection, a plurality of connections, unidirectional connections, or bidirectional connections. However, different embodiments may vary the implementation of the connections. For example, separate unidirectional connections may be used rather than bidirectional connections and vice versa. Also, plurality of connections may be replaced with a single connection that transfers multiple signals serially or in a time multiplexed manner. Likewise, single connections carrying multiple signals may be separated out into various different connections carrying subsets of these signals. Therefore, many options exist for transferring signals.


Although specific conductivity types or polarity of potentials have been described in the examples, it will be appreciated that conductivity types and polarities of potentials may be reversed.


Each signal described herein may be designed as positive or negative logic. In the case of a negative logic signal, the signal is active low where the logically true state corresponds to a logic level zero. In the case of a positive logic signal, the signal is active high where the logically true state corresponds to a logic level one. Note that any of the signals described herein may be designed as either negative or positive logic signals. Therefore, in alternate embodiments, those signals described as positive logic signals may be implemented as negative logic signals, and those signals described as negative logic signals may be implemented as positive logic signals.


Furthermore, the terms “assert” or “set” and “negate” (or “deassert” or “clear”) are used herein when referring to the rendering of a signal, status bit, or similar apparatus into its logically true or logically false state, respectively. If the logically true state is a logic level one, the logically false state is a logic level zero. And if the logically true state is a logic level zero, the logically false state is a logic level one.


Those skilled in the art will recognize that the boundaries between logic blocks are merely illustrative and that alternative embodiments may merge logic blocks or circuit elements or impose an alternate decomposition of functionality upon various logic blocks or circuit elements. Thus, it is to be understood that the architectures depicted herein are merely exemplary, and that in fact many other architectures may be implemented which achieve the same functionality.


Any arrangement of components to achieve the same functionality is effectively “associated” such that the desired functionality is achieved. Hence, any two components herein combined to achieve a particular functionality may be seen as “associated with” each other such that the desired functionality is achieved, irrespective of architectures or intermedial components. Likewise, any two components so associated can also be viewed as being “operably connected,” or “operably coupled,” to each other to achieve the desired functionality.


Furthermore, those skilled in the art will recognize that the boundaries between the above-described operations are merely illustrative. The multiple operations may be combined into a single operation, a single operation may be distributed over additional operations, and operations may be executed at least partially overlapping in time. Moreover, alternative embodiments may include multiple instances of a particular operation, and the order of operations may be altered in various other embodiments.


Also for example, in one embodiment, the illustrated examples may be implemented as circuitry located on a single integrated circuit or within a same device. Alternatively, the examples may be implemented as any number of separate integrated circuits or separate devices interconnected with each other in a suitable manner.


Also for example, the examples, or portions thereof, may be implemented as software or code representations of physical circuitry or of logical representations convertible into physical circuitry, such as in a hardware description language of any appropriate type.


Also, the invention is not limited to physical devices or units implemented in non-programmable hardware but can also be applied in programmable devices or units able to perform the desired device functions by operating in accordance with suitable program code, such as mainframes, minicomputers, servers, workstations, personal computers, notepads, personal digital assistants, electronic games, automotive and other embedded systems, cell phones and various other wireless devices, commonly denoted in this application as ‘computer systems’.


However, other modifications, variations and alternatives are also possible. The specification and drawings are, accordingly, to be regarded in an illustrative rather than in a restrictive sense.


In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word ‘comprising’ does not exclude the presence of other elements or steps than those listed in a claim. Furthermore, the terms “a” or “an,” as used herein, are defined as one or more than one. Also, the use of introductory phrases such as “at least one” and “one or more” in the claims should not be construed to imply that the introduction of another claim element by the indefinite articles “a” or “an” limits any particular claim containing such introduced claim element to inventions containing only one such element, even when the same claim includes the introductory phrases “one or more” or “at least one” and indefinite articles such as “a” or “an.” The same holds true for the use of definite articles. Unless stated otherwise, terms such as “first” and “second” are used to arbitrarily distinguish between the elements such terms describe. Thus, these terms are not necessarily intended to indicate temporal or other prioritization of such elements. The mere fact that certain measures are recited in mutually different claims does not indicate that a combination of these measures cannot be used to advantage.


While certain features of the invention have been illustrated and described herein, many modifications, substitutions, changes, and equivalents will now occur to those of ordinary skill in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the true spirit of the invention.

Claims
  • 1. A method for accelerating programming of data, comprising: receiving multiple input data units that were sent from a host computer; wherein the input data units comprise first and second input data units; most significant bit (MSB) programming the first input data units to cache memory pages and MSB programming the second input data units to first level target memory pages; and applying a copy back operation that comprises retrieving the first input data units from the cache memory pages and least significant bit (LSB) programming the retrieved first input data units to second level target memory pages; wherein any target page out of the first level target memory pages and the second level target memory pages differs from a cache memory page; wherein the MSB programming is faster than the LSB programming; wherein the input data units further comprise third input data units; wherein the method further comprises MSB programming the third input data units to additional cache memory pages; wherein the applying of the copy back operation further comprises retrieving the third input data units from the additional cache memory pages and third level programming the third input data units to third level target memory pages; and wherein the third level programming differs by speed from the MSB and LSB programming.
  • 2. The method according to claim 1, wherein the MSB programming of the first and second input data units occur in parallel to each other.
  • 3. The method according to claim 1, wherein the MSB programming of the first and second input data units occur in a partially overlapping manner.
  • 4. The method according to claim 1, further comprising preventing further programming of any input data unit after the input data unit is programmed to a target page.
  • 5. The method according to claim 1, wherein a ratio between an overall number of dies performing Copy Back and an overall number of dies caching exceeds one.
  • 6. The method according to claim 1, wherein a ratio between an overall number of dies performing Caching and an overall size of dies performing Copy Back is a fraction of a ratio between programming speeds of the MSB and the LSB programming.
  • 7. The method according to claim 6, wherein the fraction is one half.
  • 8. The method according to claim 1, wherein the cache memory pages and the first level target pages are MSB pages.
  • 9. The method according to claim 8, wherein the second level target pages are LSB pages.
  • 10. The method according to claim 9, wherein the MSB programming of the first and second input data units occur in parallel to each other.
  • 11. The method according to claim 9, wherein the MSB programming of the second input data units and the first input data units occur in a partially overlapping manner.
  • 12. The method according to claim 1, wherein the receiving the multiple input data units from the host computer includes continuously receiving additional ones of the multiple input data units while MSB programming the first input data units.
  • 13. A method for accelerating programming of data, comprising: receiving multiple input data units by a memory controller and from a host computer;
  • 14. The method according to claim 13, further comprising allocating cache memory pages and target pages.
  • 15. The method according to claim 14, wherein the allocating is responsive to programming speeds of the MSB and LSB programming.
  • 16. The method according to claim 13, wherein the cache memory pages and the first level target pages are MSB pages, and wherein the second level target pages are LSB pages.
  • 17. A non-transitory computer readable medium that stores instructions that once executed by a computer cause the computer to execute the stages of: receiving multiple input data units that were sent from a host computer; wherein the input data units comprise first and second input data units; most significant bit (MSB) programming the first input data units to cache memory MSB pages and MSB programming the second input data units to MSB target memory pages; and applying a copy back operation that comprises retrieving the first input data units from the cache memory MSB pages and least significant bit (LSB) programming the retrieved first input data units to LSB target memory pages; wherein any target page out of the MSB target memory pages and the LSB target memory pages differs from a cache memory MSB page; wherein the MSB programming is faster than the LSB programming; and wherein a ratio between an overall number of dies performing the caching and an overall size of dies performing the copy back operation is a fraction of a ratio between programming speeds of the MSB and the LSB programming.
  • 18. A memory controller, comprising: a control unit; and an interface; wherein the interface is arranged to receive multiple input data units from a host computer; wherein the input data units comprise first and second input data units; wherein the control unit is arranged to: instruct a programming circuit of a flash memory module to perform most significant bit (MSB) programming of the first input data units to cache memory pages of the flash memory module and to perform MSB programming of the second input data units to first level target memory pages of the flash memory module; and instruct a copy back circuit of the flash memory module to apply a copy back operation that comprises retrieving the first input data units from the cache memory pages and least significant bit (LSB) programming the retrieved first input data units to second level target memory pages; wherein any target page out of the first level target memory pages and the second level target memory pages differs from a cache memory page; wherein the MSB programming is faster than the LSB programming; and wherein a ratio between an overall number of dies performing the copy back operation and an overall number of dies caching exceeds one.
  • 19. The memory controller according to claim 18, wherein the cache memory pages and the first level target pages are MSB pages, and wherein the second level target pages are LSB pages.
  • 20. A flash memory module, comprising: an interface; a copy back circuit; a programming circuit; and flash memory pages; wherein the interface is arranged to receive multiple input data units from a memory controller; wherein the input data units comprise first and second input data units; wherein the programming circuit is arranged to perform most significant bit (MSB) programming of the first input data units to cache memory pages of the flash memory module and to perform MSB programming of the second input data units to first level target memory pages of the flash memory module; and wherein the copy back circuit is arranged to apply a copy back operation that comprises retrieving the first input data units from the cache memory pages and least significant bit (LSB) programming the retrieved first input data units to second level target memory pages; wherein any target page out of the first level target memory pages and the second level target memory pages differs from a cache memory page; wherein the MSB programming is faster than the LSB programming; and wherein a ratio between an overall number of dies performing the copy back operation and an overall number of dies caching exceeds one.
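The two-stage pipeline recited in claim 1 can be summarized in a minimal Python sketch. This is an illustrative model only, not the patented implementation: the function name, the page dictionaries, and the relative timings (MSB programming costing 1 time unit per page, LSB programming costing 2 units, and the two MSB stages overlapping fully) are all assumptions made for the sake of the example.

```python
# Assumed relative costs: fast (MSB) vs. slow (LSB) programming.
MSB_TIME = 1
LSB_TIME = 2

def program_with_copy_back(first_units, second_units):
    """Model claim 1: MSB-program first units to cache pages and second
    units to first-level target pages, then copy back the cached units
    into distinct second-level (LSB) target pages.

    Returns (total_time, second_level_targets)."""
    timeline = 0
    cache_pages = {}
    first_level_targets = {}
    second_level_targets = {}

    # Stage 1: MSB programming, modeled as fully parallel across the two
    # groups, so its cost is set by the larger group.
    for i, unit in enumerate(first_units):
        cache_pages[i] = unit
    for i, unit in enumerate(second_units):
        first_level_targets[i] = unit
    timeline += MSB_TIME * max(len(first_units), len(second_units))

    # Stage 2: copy back - retrieve each cached unit and LSB-program it
    # to a second-level target page that differs from the cache page.
    for i in sorted(cache_pages):
        second_level_targets[i] = cache_pages[i]
    timeline += LSB_TIME * len(cache_pages)

    return timeline, second_level_targets

t, pages = program_with_copy_back(["a", "b"], ["c", "d"])
print(t, pages)  # → 6 {0: 'a', 1: 'b'}
```

Under these assumed timings the MSB stage costs 2 units (two pages in parallel per group) and the copy back costs 4, which is the trade the claims exploit: the host-facing transfer completes at MSB speed while the slower LSB programming proceeds internally via copy back.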