A solid state drive (SSD) is a data storage device that utilizes solid-state memory to retain data in nonvolatile memory chips. NAND-based flash memories are widely used as the solid-state memory storage in SSDs due to their compactness, low power consumption, low cost, high data throughput and reliability. SSDs commonly employ several NAND-based flash memory chips and a flash controller to manage the flash memory and to transfer data between the flash memory and a host computer. SSDs may be used in place of hard disk drives (HDDs) to provide higher performance and to reduce mechanical reliability issues. An SSD includes a high-speed interface connected to a controller chip and a plurality of storage, or memory, elements. The controller chip translates a high-speed protocol received over the high-speed interface into the protocol required by the storage elements, which include solid state memory devices, such as semiconductor devices. The controller controls the occurrence of program and erase (i.e., program/erase cycles, or P/E cycles) events in the storage elements.
The storage elements in the SSD are organized into a plurality of blocks, which are the smallest erasable units in the memory device. The blocks are subdivided into pages, which are the smallest readable units of the memory device and the pages are subdivided into sectors. In a P/E cycle, all the pages in a block are erased and then some, if not all, of the pages in the block are subsequently programmed.
An issue for SSDs is the reliability of the storage elements over the life of the SSD. Over time, relatively high gate voltages applied to the storage elements during P/E cycles in the SSD may cause cumulative permanent changes to the storage element characteristics. Charge may become trapped in the gate oxide of the storage elements through stress-induced leakage current (SILC). As the charge accumulates, the effect of programming or erasing a storage element becomes less reliable and the overall endurance of the storage element decreases. Additionally, an increasing number of P/E cycles experienced by a storage element decreases the storage element's data retention capacity, as high voltage stress causes charge to be lost from the storage element's floating gate.
Because the cells become unreliable as a result of numerous program and erase (P/E) cycles, and because the number of cycles that a single cell can sustain is limited, there is a need to avoid stressing particular blocks of cells of the memory device. Techniques known as “wear leveling” have been developed to evenly spread the number of P/E cycles among all of the available memory blocks to avoid the overuse of specific blocks of cells, thereby extending the life of the device. The goal of wear leveling is to ensure that no single block of cells prematurely fails as a result of a higher concentration of P/E cycles than the other blocks of the memory storage device. Conventional wear leveling techniques arrange data so that P/E cycles are evenly distributed among all of the blocks in the device. The effect of wear leveling is to maximize the time between two consecutive P/E cycles for each of the blocks of the memory storage device to extend the useful life of the device. In addition to extending the useful life of the device, it is also desirable to minimize the Bit Error Rate (BER) of the data storage device. However, experimental measurements show that conventional wear leveling techniques may not be effective in minimizing the BER of the data storage device.
Accordingly, what is needed in the art is a system and method for wear leveling which also minimizes the BER of the data storage device.
In various embodiments, a nonvolatile memory system includes a nonvolatile memory storage module for storing encoded data. The nonvolatile memory storage module comprises a plurality of memory cells and the memory cells are controlled by a nonvolatile memory controller.
A method for memory block pool wear leveling in a nonvolatile memory system includes identifying a plurality of memory block pools of the nonvolatile memory system, each of the memory block pools comprising a plurality of memory blocks and each of the plurality of memory blocks comprising a plurality of memory cells. The method further includes identifying a relaxation time delay for each of the plurality of memory block pools, wherein the relaxation time delay for each of the plurality of memory block pools is identified as a duration of time between a completion of a programming cycle of the memory block pool and a point in time when the BER (bit error rate) of the memory block pool is at a minimum. Following the identification of the plurality of memory block pools and the associated relaxation time delay for each of the memory block pools, the method further includes executing a predetermined number of program/erase cycles for each of the plurality of memory block pools based upon the relaxation time delay of each of the plurality of memory block pools.
A nonvolatile memory controller for memory block pool wear leveling in a nonvolatile memory system includes a memory block pool wear leveling module configured for identifying a plurality of memory block pools of the nonvolatile memory device and for identifying a relaxation time delay for each of the plurality of memory block pools. The nonvolatile memory controller further includes a program/erase module coupled to the memory block pool wear leveling module, the program/erase module configured for executing a predetermined number of program/erase cycles for each of the plurality of memory block pools based upon the relaxation time delay of each of the plurality of memory block pools.
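By way of illustration only, the cooperation of the two modules can be sketched in C as follows; the type and function names (pool_t, identify_relaxation_delay, run_pe_cycles, now_seconds) are hypothetical placeholders rather than any actual controller firmware interface, and the polling loop stands in for whatever scheduling mechanism a real controller would use.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical description of one memory block pool. */
typedef struct {
    uint32_t first_block;        /* first physical block grouped into this pool    */
    uint32_t num_blocks;         /* number of blocks in the pool                   */
    double   relaxation_delay_s; /* seconds from end of programming to minimum BER */
    double   last_active_end_s;  /* time the pool's previous active cycle finished */
} pool_t;

/* Assumed firmware helpers (not a real API). */
extern double now_seconds(void);
extern double identify_relaxation_delay(const pool_t *pool);
extern void   run_pe_cycles(pool_t *pool, uint32_t pe_cycles);

void pool_wear_level(pool_t *pools, size_t num_pools, uint32_t pe_per_active_cycle)
{
    /* Identify the relaxation time delay of each identified pool. */
    for (size_t i = 0; i < num_pools; i++)
        pools[i].relaxation_delay_s = identify_relaxation_delay(&pools[i]);

    /* Execute the predetermined number of P/E cycles for a pool only after
     * its relaxation time delay has elapsed since its previous active cycle.
     * (Polling shown for brevity; a real controller would be event driven.) */
    for (;;) {
        for (size_t i = 0; i < num_pools; i++) {
            double idle = now_seconds() - pools[i].last_active_end_s;
            if (idle >= pools[i].relaxation_delay_s) {
                run_pe_cycles(&pools[i], pe_per_active_cycle);
                pools[i].last_active_end_s = now_seconds();
            }
        }
    }
}
```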
The use of a relaxation time delay between active cycles in which program and erase operations are performed reduces BER and extends the lifetime of the nonvolatile memory system.
The accompanying drawings are included to provide a further understanding of the invention, and are incorporated in and constitute a part of this specification. The drawings illustrate embodiments of the invention, and together with the description, serve to explain the principles of the invention.
In the operation of a nonvolatile memory system, the storage elements of the memory system are subjected to many program/erase cycles over the lifetime of the device. Over time, the relatively high gate voltages applied to the storage elements during P/E cycles of the storage elements may cause cumulative permanent changes to the storage element characteristics. These cumulative changes to the storage element characteristics may cause a decrease in the reliability of the storage elements and a decrease in the overall endurance of the storage elements, thereby resulting in an undesirable increase in the bit error rate (BER) of the memory system.
The nonvolatile memory system may be a NAND-based flash memory system. NAND flash memories are nonvolatile, and as such, are able to store and keep data even in the absence of a power source. With reference to
In NAND based memories, a logical page is composed of cells belonging to the same WL. The number of pages per WL is related to the storage capability of the memory cell. Depending upon the number of storage levels, flash memories are referred to in different ways: SLC (single level cell) memories store 1 bit per cell, MLC (multi-level cell) memories store 2 bits per cell, 8LC (eight level cell or triple level cell) memories store 3 bits per cell and 16LC (sixteen level cell) memories store 4 bits per cell.
Considering the SLC case with interleaved architecture, wherein one page is composed of even cells and a second page is composed of odd cells, if the page size is 4 kB, it follows that a WL has 32,768+32,768=65,536 cells. In contrast, in the MLC case, there are four pages, as each cell stores one least significant bit (LSB) and one most significant bit (MSB).
In general, a logical page is the smallest addressable unit for reading from and writing to the NAND memory. The number of logical pages within a logical block is typically a multiple of 16 (e.g. 64, 128). Additionally, in a NAND based memory, a logical block is the smallest erasable unit.
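The page and word-line arithmetic above can be checked with a short, self-contained C program; the 4 kB page size and interleaved even/odd page organization are taken from the SLC example given above, and other device geometries would yield different numbers.

```c
#include <stdio.h>

int main(void)
{
    const int page_bytes        = 4 * 1024;       /* 4 kB logical page               */
    const int page_bits         = page_bytes * 8; /* 32,768 cells per SLC page       */
    const int pages_per_wl_slc  = 2;              /* one even page + one odd page    */
    const int bits_per_cell_mlc = 2;              /* LSB page + MSB page per cell    */

    printf("cells per word line (SLC, interleaved): %d\n",
           pages_per_wl_slc * page_bits);                 /* 32,768 + 32,768 = 65,536 */
    printf("pages per word line (MLC, interleaved): %d\n",
           pages_per_wl_slc * bits_per_cell_mlc);         /* 4 pages                  */
    return 0;
}
```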
As shown with reference to
NAND-based flash memories are based on floating gate technology. In a typical floating gate technology, a MOS transistor is built with two overlapping gates, wherein the first gate is completely surrounded by oxide, while the second gate is contacted to form the gate terminal. The isolated gate creates an excellent “trap” for electrons, which guarantees the charge retention of the memory cell for years. In floating gate storage technologies, the two logic states (1 and 0) are achieved by altering the number of electrons within the floating gate. In order to change the logic states of the memory cells of NAND-based flash memories, a strong electric field is applied to the cells, which degrades the charge storage characteristics of the memory cell and negatively affects the ability of the cell to store information after a certain number of program/erase cycles. The cumulative result of the numerous program/erase cycles of the memory cells is a corresponding undesirable increase in the BER of the memory storage device.
NAND-based flash memories are characterized by a fixed number of P/E cycles and generally, in order to uniformly distribute the P/E cycles over all the memory cell blocks of the device, a wear-leveling algorithm is applied. Each block of memory cells can tolerate a finite number of P/E cycles before becoming unreliable. For example, an SLC (single level cell) NAND-based flash memory is typically rated at about 100,000 P/E cycles. Wear-leveling techniques known in the art are designed to extend the life of the NAND-based flash memory device, thereby decreasing the BER of the device, by evenly distributing the P/E cycles over all of the memory blocks of the device. The objective of current wear-leveling techniques is to maximize the time between two consecutive P/E cycles for every block of cells of the memory device. As such, each block of cells is treated equally and the wear-leveling algorithms known in the art are designed to maximize the time between P/E cycle n and P/E cycle n+1 for every block of cells.
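By way of contrast with the pool-based approach described below, such a conventional scheme can be sketched as always selecting the block whose previous P/E cycle is the oldest, thereby maximizing the spacing between consecutive cycles of any single block. The sketch below is a generic illustration with assumed names, not a particular prior-art implementation.

```c
#include <stddef.h>

/* Conventional wear-leveling sketch: cycle the block whose previous P/E cycle
 * finished the longest time ago, maximizing the spacing between consecutive
 * P/E cycles seen by any single block.  Illustrative names only. */
size_t pick_block_conventional(const double *last_pe_time, size_t num_blocks)
{
    size_t oldest = 0;
    for (size_t i = 1; i < num_blocks; i++)
        if (last_pe_time[i] < last_pe_time[oldest])
            oldest = i;          /* smaller timestamp = longer since last cycle */
    return oldest;
}
```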
However, experimental measurements show that these common wear-leveling schemes that evenly distribute the P/E cycles over all of the memory blocks of the device by maximizing the time between two consecutive P/E cycles for every block of cells do not necessarily minimize the BER of the device.
A nonvolatile memory system 300 for performing memory block pool wear-leveling is illustrated with reference to
With reference to
After the memory block pools have been identified 410, the method continues by identifying a relaxation time delay for each of the plurality of memory block pools. In the present embodiment, the relaxation time delay for each of the plurality of memory block pools is identified as a duration of time between a completion of a programming cycle of the memory block pool and a point in time when the BER (bit error rate) of the memory block pool is at a minimum. In one embodiment, identifying the relaxation time delay for each of the plurality of memory block pools 411 is performed by memory block pool wear leveling module 320. The relaxation time delay may be identified experimentally for each of the plurality of memory block pools. In one embodiment, the relaxation time delay for each of the plurality of memory block pools of the nonvolatile memory device may be substantially equivalent. In an additional embodiment, the relaxation time delay may be different for one or more of the memory block pools.
Experimental results may indicate that the relaxation time delay should be adjusted during the lifetime of the device to minimize the BER. In this case, memory block pool wear leveling module 320 is operable to either use different relaxation time delays that are stored in nonvolatile memory system 300 or to retest NAND chips 350 for determining the adjusted relaxation time delay.
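One possible realization of the stored-delay option is a small lookup table in the controller indexed by the accumulated wear of a pool; the break points and delay values below are purely illustrative placeholders (not measured data, and not necessarily increasing with wear in a real device), as are the type and function names.

```c
#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint32_t max_pe_cycles;   /* table entry applies up to this accumulated wear */
    uint32_t relax_delay_sec; /* relaxation time delay to use for the pool       */
} relax_entry_t;

/* Placeholder values only, stored in the controller prior to assembly. */
static const relax_entry_t relax_table[] = {
    {  10000,       3600 },
    { 100000,       7200 },
    { UINT32_MAX,  14400 }   /* end-of-life entry */
};

uint32_t relaxation_delay_for(uint32_t pe_cycles_done)
{
    const size_t n = sizeof(relax_table) / sizeof(relax_table[0]);
    for (size_t i = 0; i + 1 < n; i++)
        if (pe_cycles_done <= relax_table[i].max_pe_cycles)
            return relax_table[i].relax_delay_sec;
    return relax_table[n - 1].relax_delay_sec;
}
```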
As a result of charge leakage from the cells over time, retention errors tend to shift the threshold voltage distribution of the cells such that it is more likely that a logic state of “0” (programmed) becomes a logic state of “1” (erased) and, correspondingly, less likely that a logic state of “1” (erased) becomes a logic state of “0” (programmed). In contrast, it is known that the programming errors that occur as a result of the programming operation of the P/E cycling of the memory cells tend to shift the threshold voltage distribution of the cells in the opposite direction, such that it is more likely that a logic state of “1” (erased) becomes a logic state of “0” (programmed) and, correspondingly, less likely that a logic state of “0” (programmed) becomes a logic state of “1” (erased). As such, while both P/E cycling and data retention contribute to the BER, for P/E cycling it is more likely that an erased cell will become a programmed cell than that a programmed cell will become an erased cell, whereas as the duration of time the data is stored increases, it is more likely that a programmed cell will become an erased cell than that an erased cell will become a programmed cell.
While in this exemplary embodiment, a logic state of “0” is representative of a programmed state and a logic state of “1” is representative of an erased state, in an alternative embodiment, a logic state of “1” may be representative of a programmed state and a logic state of “0” may be representative of an erased state.
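The two error directions may be separated in firmware by comparing written and read-back data bit by bit. The helper below follows the convention of this exemplary embodiment ("0" programmed, "1" erased); its names are illustrative and not taken from any actual controller code.

```c
#include <stddef.h>
#include <stdint.h>

typedef struct {
    unsigned prog_to_erased;  /* 0 -> 1 flips (retention-type errors)   */
    unsigned erased_to_prog;  /* 1 -> 0 flips (programming-type errors) */
} flip_counts_t;

flip_counts_t count_flips(const uint8_t *written, const uint8_t *readback, size_t len)
{
    flip_counts_t c = {0, 0};
    for (size_t i = 0; i < len; i++) {
        uint8_t went_high = (uint8_t)(~written[i] &  readback[i]); /* 0 -> 1 */
        uint8_t went_low  = (uint8_t)( written[i] & ~readback[i]); /* 1 -> 0 */
        for (int b = 0; b < 8; b++) {
            c.prog_to_erased += (went_high >> b) & 1u;
            c.erased_to_prog += (went_low  >> b) & 1u;
        }
    }
    return c;
}
```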
As shown in the graph of
In one embodiment the identified relaxation time delay is determined to be the retention time corresponding to the lowest number of failures (lowest BER) for one or more stored data patterns. In the embodiment shown in
In another embodiment that is illustrated in
As shown in the graph of
Accordingly, when program errors and retention errors are used to calculate the retention time corresponding to the minimum BER, the retention time corresponding to the end of the relaxation phase and the minimum BER can be defined as the time at which data retention has resulted in the transition of the same number of cells from an erased state to a programmed state as have transitioned from a programmed state to an erased state for a data test pattern. Thus, the relaxation time delay will be the retention time at which these two transition counts are equal for the data test pattern. In another embodiment, the relaxation time delay is a time that is within the relaxation phase and that is at or near the end of the relaxation phase. As can be seen from the graph, most of the benefit of relaxation is achieved during the first sixty percent of the relaxation phase. Accordingly, in one embodiment the relaxation time delay is within the relaxation phase and within the last forty percent of the relaxation phase (e.g., for a T3 of 11 hrs., the relaxation time delay would be less than or equal to 11 hours and greater than 6.6 hours).
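From characterization data, this crossover can be located by a routine such as the following sketch, which receives the counts of erased-to-programmed and programmed-to-erased cells observed at a series of retention times and returns the sampled time at which the two counts are closest; the function and parameter names are assumptions for illustration.

```c
#include <stddef.h>
#include <stdlib.h>

/* time_h[i] is the retention time (in hours) of sample i; the two count
 * arrays hold the cells observed in each transition direction at that time. */
double crossover_delay(const double *time_h,
                       const unsigned *erased_to_prog,
                       const unsigned *prog_to_erased,
                       size_t samples)
{
    size_t best = 0;
    long best_gap = labs((long)erased_to_prog[0] - (long)prog_to_erased[0]);
    for (size_t i = 1; i < samples; i++) {
        long gap = labs((long)erased_to_prog[i] - (long)prog_to_erased[i]);
        if (gap < best_gap) {
            best_gap = gap;
            best = i;
        }
    }
    return time_h[best];   /* retention time taken as the relaxation time delay */
}
```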
As illustrated with respect to
In another embodiment, the relaxation time delay is based on a range of numbers that correspond to a minimum BER calculated on a component level as illustrated in
In one embodiment, the relaxation time delay for each of the plurality of memory block pools is determined experimentally and is stored in nonvolatile memory controller 310 prior to assembly of nonvolatile memory system 300, and nonvolatile memory controller 310 is programmable such that each vendor can change the stored relaxation time delay value to conform to the characteristics of NAND chips 350.
Alternatively, at initial start-up of nonvolatile memory system 300, memory block pool wear leveling module 320 is operable to test the memory blocks of each memory block pool identified in step 410 to determine the relaxation time delay. This test may program one or more patterns into the memory blocks of each data pool, read the memory blocks, and determine errors during the retention time of the test. The test may determine the total number of failures and take the time associated with the minimum total number of failures as the relaxation time delay as is illustrated in
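A rough outline of such a start-up characterization test is given below: a known pattern is programmed into a pool, the pool is re-read at successive retention times, and the retention time with the lowest total failure count is kept as the relaxation time delay. The helpers nand_program_pool(), nand_read_pool(), count_bit_errors() and sleep_hours() are assumed firmware primitives, not a real NAND interface.

```c
#include <limits.h>
#include <stddef.h>
#include <stdint.h>

extern void     nand_program_pool(unsigned pool, const uint8_t *pattern, size_t len);
extern void     nand_read_pool(unsigned pool, uint8_t *buf, size_t len);
extern unsigned count_bit_errors(const uint8_t *a, const uint8_t *b, size_t len);
extern void     sleep_hours(double h);

double measure_relaxation_delay(unsigned pool, const uint8_t *pattern,
                                uint8_t *readback, size_t len,
                                double step_h, unsigned steps)
{
    nand_program_pool(pool, pattern, len);

    double   best_time = 0.0;
    unsigned best_fail = UINT_MAX;
    for (unsigned i = 1; i <= steps; i++) {
        sleep_hours(step_h);                        /* wait out one retention step     */
        nand_read_pool(pool, readback, len);
        unsigned fails = count_bit_errors(pattern, readback, len);
        if (fails < best_fail) {
            best_fail = fails;
            best_time = i * step_h;                 /* retention time of minimum failures */
        }
    }
    return best_time;                               /* used as the relaxation time delay */
}
```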
Following the identification of the relaxation time delay for each of the plurality of memory block pools 411, the method continues by executing a predetermined number of program/erase cycles for each of the plurality of memory block pools based upon the relaxation time delay of each of the plurality of memory block pools 412.
In one embodiment, executing a predetermined number of program/erase cycles for each of the plurality of memory block pools based upon the relaxation time delay of each of the plurality of memory block pools is performed by a program/erase module 330 of the nonvolatile memory controller 310. More particularly, program/erase module 330 is configured for executing a predetermined number of program/erase cycles for each of the plurality of memory block pools based upon the relaxation time delay of each of the plurality of memory block pools identified by the memory block pool wear leveling module 320.
The predetermined number of program/erase cycles for each of the plurality of memory block pools may be experimentally determined and the predetermined number of program/erase cycles for each of the plurality of memory block pools may be substantially equivalent or may be different.
In one embodiment the number of program and erase cycles to be used in each cycle of step 412 is determined experimentally and is stored in nonvolatile memory controller 310 prior to assembly of nonvolatile memory system 300; and nonvolatile memory controller 310 is programmable such that memory system vendors can change the predetermined number of P/E cycles in each set of program and erase cycles to conform to the characteristics of NAND chips 350.
In the embodiment shown in
In the present embodiment, the method includes evenly distributing the execution of the predetermined number of program/erase cycles among the plurality of blocks of the memory block pool during the active cycle of the pool. This distribution may use conventional wear leveling techniques.
Executing a predetermined number of program/erase cycles for each of the plurality of memory block pools based upon the relaxation time delay of each of the plurality of memory block pools may be performed at a maximum program/erase cycling rate. Alternatively, program/erase cycles are interrupted by read operations when read operations are to be performed on one or more pages in a memory pool that is undergoing P/E cycling.
Continuing with
When Pool A, Pool B or Pool C becomes active, the predetermined number (x) of P/E cycles could be performed at a maximum program/erase speed of the device.
In the embodiment shown in
In one embodiment the predetermined number (x) of program and erase cycles for each of the plurality of memory block pools is determined experimentally and is stored in nonvolatile memory controller 310 prior to assembly of nonvolatile memory system 300; and nonvolatile memory controller 310 is programmable such that each vendor can change the stored predetermined number of program and erase cycles to conform to the characteristics of NAND chips 350. In the present example, the predetermined number of program and erase cycles is 10,000 for all of pools A, B and C such that 10,000 P/E operations are performed in each of cycles 710-712, 720-722 and 730-731. However, alternatively, pools A, B and C could each have a different number of program and erase cycles performed during each active cycle.
In the embodiment shown in
In the present embodiment, data is only stored in an active data pool, and data is not stored in a data pool that is not active. For example, data received during active periods 710-712 may be stored in pool A, data received during active periods 720-722 may be stored in pool B, and data received during active periods 730-731 may be stored in pool C. As there is no programming during the relaxation time delay for each pool, data received during relaxation time delay A 701 is not stored in pool A, data received during relaxation time delay B 702 is not stored in pool B and data received during relaxation time delay C 703 is not stored in pool C. In the present embodiment, data read operations are performed as required, with any of the blocks of pools A, B or C being read as needed.
It is appreciated that the duration of cycles 710-712, 720-722 and 730-732 will vary when reads are performed during a respective cycle. In the present embodiment, the initial cycles are staggered so as to ensure that at least one pool is active at all times. In one embodiment, if data to be stored is received at a time when no pool is active, the pool having the greatest time measurement is made active to allow for storing the incoming data.
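The staggered rotation of pools A, B and C described above might be organized as in the following sketch, in which writes are directed only to the active pool, a pool leaves the active state after its predetermined number of P/E cycles, and the longest-idle pool is re-activated if data arrives while no pool is active; all names are hypothetical placeholders rather than an actual controller interface.

```c
#include <stdbool.h>
#include <stdint.h>

#define NUM_POOLS 3

typedef struct {
    bool     active;
    uint32_t pe_done_this_cycle;  /* P/E cycles completed in the current active cycle */
    uint32_t pe_per_active_cycle; /* e.g., 10,000 in the example above                */
    double   idle_since;          /* time the pool entered its relaxation time delay  */
} pool_state_t;

extern double now_seconds(void);

/* Choose the pool that should absorb an incoming write. */
int pool_for_write(pool_state_t p[NUM_POOLS])
{
    int longest_idle = 0;
    for (int i = 0; i < NUM_POOLS; i++) {
        if (p[i].active)
            return i;                                 /* write to the active pool       */
        if (p[i].idle_since < p[longest_idle].idle_since)
            longest_idle = i;                         /* smaller timestamp = idle longer */
    }
    /* No pool active: make the longest-idle pool active so the write can proceed. */
    p[longest_idle].active = true;
    return longest_idle;
}

/* Called after each program/erase operation on the active pool. */
void note_pe_done(pool_state_t *pool)
{
    if (++pool->pe_done_this_cycle >= pool->pe_per_active_cycle) {
        pool->active = false;                         /* begin the relaxation time delay */
        pool->pe_done_this_cycle = 0;
        pool->idle_since = now_seconds();
    }
}
```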
The memory block pool wear-leveling module 320 and the program/erase module 330 may also be configured to integrate standard wear-leveling techniques into the memory block pool wear-leveling technique. Standard wear-leveling techniques can be incorporated into the individual memory block pools by evenly distributing the execution of the predetermined number of program/erase cycles among the plurality of blocks of each of memory block pools A, B and C. Alternatively, standard wear leveling could be performed by distributing the predetermined number of program/erase cycles between a different set of pools.
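Within a single active pool, one simple way to spread the predetermined number of P/E cycles evenly over the pool's blocks is to select, for each operation, the block of the pool with the lowest erase count; this is a generic least-worn selection offered as an assumption, not the specific wear-leveling algorithm of any embodiment.

```c
#include <stddef.h>
#include <stdint.h>

/* Return the index, within the active pool, of the block with the lowest
 * erase count, so that P/E operations are distributed evenly over the pool. */
size_t pick_least_worn_block(const uint32_t *erase_counts, size_t num_blocks)
{
    size_t least = 0;
    for (size_t i = 1; i < num_blocks; i++)
        if (erase_counts[i] < erase_counts[least])
            least = i;
    return least;
}
```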
It would not be desirable for the P/E cycling of an active memory block pool to increase the BER of another memory block pool that is operating in a retention state.
The memory block pool technique of the present invention exploits the relaxation phase to minimize the BER for the nonvolatile memory storage module. Using the end of the relaxation phase is a trade-off among relaxation, retention, SSD capacity and program throughput. With the standard wear-leveling approach, the BER continues to increase as the number of P/E cycles increases. In contrast, in the present invention, a leveling-off of the BER is achieved as the number of P/E cycles increases using the memory block pool and associated relaxation time of the present invention. This leveling-off effect has been shown to allow an increase in P/E cycling from 60K to 1M cycles, while still maintaining an acceptable BER of the device.
In various embodiments, the system of the present invention may be implemented in a Field Programmable Gate Array (FPGA) or Application Specific Integrated Circuit (ASIC).
Though the method and apparatus of the present invention are described above with respect to a single level memory cell, it is within the scope of the present invention to extend the described methods and apparatus to MLC (multi-level cell) devices, as would be evident to one of skill in the art.
Although the invention has been described with reference to particular embodiments thereof, it will be apparent to one of ordinary skill in the art that modifications to the described embodiment may be made without departing from the spirit of the invention. Accordingly, the scope of the invention will be defined by the attached claims not by the above detailed description.