This subject matter relates generally to the access and management of managed non-volatile memory.
Flash memory is a type of electrically erasable programmable read-only memory (EEPROM). Because flash memories are non-volatile and relatively dense, they are used to store files and other persistent objects in handheld computers, mobile phones, digital cameras, portable music players, and many other devices in which other storage solutions (e.g., magnetic disks) are inappropriate.
NAND is a type of flash memory that can be accessed like a block device, such as a hard disk or memory card. Each block consists of a number of pages. A typical page size is 512 bytes, and a typical block contains 32 such pages, for a block size of 16 KB. Associated with each page are a number of bytes (e.g., 12-16 bytes) used for storage of error detection and correction checksums. Reading and programming are performed on a page basis, erasure is performed on a block basis, and data in a block can only be written sequentially. NAND relies on an Error Correction Code (ECC) to compensate for bits that may flip during normal device operation. When performing erase or program operations, the NAND device can detect blocks that fail to program or erase and mark those blocks as bad in a bad block map. The data can then be written to a different, good block, and the bad block map updated.
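By way of a rough illustration only (not drawn from this specification), software that handles failing blocks might remap a failed program operation as sketched below; the program_block() helper and its failure condition are invented for the example.

    /* Illustrative sketch of bad block remapping; names and the failure
     * condition are invented for the example. */
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    #define NUM_BLOCKS 4096

    static bool bad_block_map[NUM_BLOCKS];       /* true = block marked bad */

    /* Hypothetical low-level program operation; returns false on failure. */
    static bool program_block(uint32_t block, const uint8_t *data) {
        (void)data;
        return block != 37;                      /* pretend block 37 fails */
    }

    /* Write data, remapping to the next good block when a program fails. */
    static int32_t write_with_remap(uint32_t block, const uint8_t *data) {
        for (uint32_t b = block; b < NUM_BLOCKS; b++) {
            if (bad_block_map[b])
                continue;                        /* skip known-bad blocks */
            if (program_block(b, data))
                return (int32_t)b;               /* programmed successfully */
            bad_block_map[b] = true;             /* mark failed block as bad */
        }
        return -1;                               /* no good block available */
    }

    int main(void) {
        uint8_t page[512] = {0};
        printf("data landed in block %d\n", write_with_remap(37, page));
        return 0;
    }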
Managed NAND devices combine raw NAND with a memory controller to handle error correction and detection, as well as memory management functions of NAND memory. Managed NAND is commercially available in Ball Grid Array (BGA) packages, or other Integrated Circuit (IC) packages that support standardized processor interfaces, such as MultiMediaCard (MMC) and Secure Digital (SD) card interfaces. A managed NAND device can include a number of NAND devices or dies, which can be accessed using one or more chip select signals. A chip select is a control line used in digital electronics to select one chip out of several chips connected to the same bus. The chip select is typically a control pin on most IC packages that connects the input pins on the device to the internal circuitry of that device. When the chip select pin is held in the inactive state, the chip or device ignores changes in the state of its input pins. When the chip select pin is held in the active state, the chip or device responds as if it is the only chip on the bus.
The Open NAND Flash Interface Working Group (ONFI) has developed a standardized low-level interface for NAND flash chips to allow interoperability between conforming NAND devices from different vendors. ONFI specification version 1.0 specifies: a standard physical interface (pin-out) for NAND flash in TSOP-48, WSOP-48, LGA-52, and BGA-63 packages; a standard command set for reading, writing, and erasing NAND flash chips; and a mechanism for self-identification. ONFI specification version 2.0 supports dual channel interfaces, with odd chip selects (also referred to as chip enable or “CE”) connected to channel 1 and even CEs connected to channel 2. The physical interface shall have no more than 8 CEs for the entire package.
While the ONFI specifications allow interoperability, the current ONFI specifications do not take full advantage of Managed NAND solutions.
The disclosed architecture uses address mapping to map a block address on a host interface to an internal block address of a non-volatile memory (NVM) device. The block address is mapped to an internal chip select for selecting a Concurrently Addressable Unit (CAU) identified by the block address. The disclosed architecture supports generic non-volatile memory commands for read, write, erase and get status operations. The architecture also supports an extended command set for supporting read and write operations that leverage a multiple CAU architecture.
In some implementations, the NVM package 104 can include a controller 106 for accessing and managing the NVM devices 108 over internal channels using internal chip select signals. An internal channel is a data path between the controller 106 and an NVM device 108. The controller 106 can perform memory management functions (e.g., wear leveling, bad block management) and can include an error correction code (ECC) engine 110 for detecting and correcting data errors (e.g., flipped bits). In some implementations, the ECC engine 110 can be implemented as a hardware component in the controller 106 or as a software component executed by the controller 106. In some implementations, the ECC engine 110 can be located in the NVM devices 108. A pipeline management module 112 can also be included to efficiently manage data throughput.
In some implementations, the host processor 102 and NVM package 104 can communicate information (e.g., control commands, addresses, data) over a communication channel visible to the host (the “host channel”). The host channel can support standard interfaces, such as a raw NAND interface, or a dual channel interface as described in ONFI specification version 2.0. The host processor 102 can also provide a host chip enable (CE) signal. The host CE is visible to the host processor 102 and is used to select the host channel.
In the example memory system 100, the NVM package 104 supports CE hiding. CE hiding allows a single host CE to be used for each internal channel in the NVM package 104, thus reducing the number of signals required to support the interface of the NVM package 104. Memory accesses can be mapped to internal channels and the NVM devices 108 using an address space and address mapping, as described below.
The run and stride parameters enable the host processor 102 to generate efficient sequences of page addresses. The run parameter identifies a number of CAUs in the NVM package 104 that are concurrently addressable using the host CE and address mapping. A CAU can be a portion of the NVM device 108 accessible from a single host channel that may be written or read at the same time as another CAU. A CAU can also be the entire NVM device 108. The stride parameter identifies a number of blocks for vendor specific operation commands within a CAU.
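As an illustrative sketch only (the loop layout is an assumption, not the disclosed controller logic), a host could use the run and stride parameters to enumerate the page addresses for a multi-page operation that interleaves across CAUs:

    /* Sketch: enumerate page addresses for an operation spanning `run` CAUs
     * and `stride` blocks per CAU. Names and layout are illustrative only. */
    #include <stdio.h>

    int main(void) {
        const unsigned run = 2;      /* number of concurrently addressable CAUs */
        const unsigned stride = 4;   /* blocks per CAU for stride commands      */
        const unsigned page = 0;     /* page offset within each block           */

        /* Interleave across CAUs so each CAU can work in parallel. */
        for (unsigned s = 0; s < stride; s++) {
            for (unsigned c = 0; c < run; c++) {
                unsigned block = s * run + c;   /* block0, block1, block2, ... */
                printf("CAU %u: block%u page%u\n", c, block, page);
            }
        }
        return 0;
    }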
An example block map for the NVM devices 108 is described below.
The MDS parameter identifies the number of metadata bytes associated with each page. A page size is the size of the data area of a page of non-volatile memory. A Perfect Page Size (PPS) is a number of bytes equal to the page size plus MDS. A Raw Page Size (RPS) is the size of a physical page of non-volatile memory.
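For instance, with the 4-kilobyte data area and 16 bytes of metadata used in the example architecture described below, the PPS works out to 4112 bytes. A minimal sketch of that arithmetic follows; the struct and field names are illustrative.

    /* Sketch of the PPS relationship (PPS = page size + MDS). */
    #include <stdint.h>
    #include <stdio.h>

    struct geometry {
        uint32_t page_size;   /* data area of a page, in bytes */
        uint32_t mds;         /* metadata bytes per page       */
    };

    static uint32_t perfect_page_size(const struct geometry *g) {
        return g->page_size + g->mds;   /* PPS = page size + MDS */
    }

    int main(void) {
        struct geometry g = { .page_size = 4096, .mds = 16 };
        printf("PPS = %u bytes\n", perfect_page_size(&g));   /* prints 4112 */
        return 0;
    }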
In some implementations, the controller 106 reads the logical address from the host channel and maps Block Address to a specific internal block address using the address mapping described below.
In this example, even blocks are mapped to NVM device 108a and odd blocks are mapped to CAU 202 in NVM device 108b. When the controller 106 detects an even-numbered Block Address, the controller 106 activates internal chip enable CE0 for NVM device 108a, and when the controller 106 detects an odd-numbered Block Address, the controller 106 activates internal chip enable CE1 for NVM device 108b. This address mapping scheme can be extended to any desired number of CAUs and/or NVM devices in a managed NVM package. In some implementations, the most significant bits of Block Address can be used to select an internal CE, and the remaining Block Address bits or the entire Block Address can be combined with Page Address and Offset to form a physical address used to access a block to perform an operation. In some implementations, decoding logic can be added to the NVM package or controller 106 to decode Block Address for purposes of selecting an internal CE to activate.
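A minimal sketch of the two decoding options just described, assuming two internal CEs for the even/odd case and illustrative bit widths for the most-significant-bits case; the function names are invented for the example.

    /* Sketch: deriving an internal chip enable from a host Block Address. */
    #include <stdint.h>
    #include <stdio.h>

    /* Even blocks -> CE0 (device 108a), odd blocks -> CE1 (device 108b). */
    static unsigned ce_even_odd(uint32_t block_address) {
        return block_address & 1u;
    }

    /* Alternative: select the CE from the top bits of the Block Address,
     * leaving the remaining bits as the block offset within that CAU. */
    static unsigned ce_from_msbs(uint32_t block_address, unsigned block_bits,
                                 unsigned ce_bits) {
        return (block_address >> (block_bits - ce_bits)) & ((1u << ce_bits) - 1u);
    }

    int main(void) {
        printf("block 6 -> CE%u\n", ce_even_odd(6));              /* CE0 */
        printf("block 7 -> CE%u\n", ce_even_odd(7));              /* CE1 */
        printf("block 0x3FFF -> CE%u\n", ce_from_msbs(0x3FFF, 14, 1));
        return 0;
    }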
An advantage of the address mapping scheme described above is that the host interface of the NVM package 104 can be simplified (reduced pin count) and still support generic raw NVM commands (e.g., raw NAND commands) for read, write, erase and get status operations. Additionally, extended commands can be used to leverage the multiple CAU architecture. The NVM package 104 supports concurrent read and write operations similar to interleaving commands used with conventional raw NVM architectures (e.g., raw NAND architectures).
In some implementations, the ECC engine 110 performs error correction on data and sends a status to the host processor through the host interface. The status informs the host processor whether an operation has failed, allowing the host processor to adjust Block Address to access a different CAU or NVM device. For example, if a large number of errors occur in response to operations on a particular CAU, the host processor can modify Block Address to avoid activating the internal CE for the defective NVM device.
The mapping will be described using an example memory architecture. For this example architecture, a block size is defined as a number of pages in an erasable block. In some implementations, 16 bytes of metadata are available for each 4 kilobytes of data. Other memory architectures are also possible. For example, the metadata can be allocated more or fewer bytes.
The address mapping scheme for this example architecture is described below.
The NVM package 104 defines a CAU as an area that can be accessed (e.g., moving data from the NAND memory cells to an internal register) simultaneously with, or in parallel with, other CAUs. In this example architecture, it is assumed that all CAUs include the same number of blocks. In other implementations, CAUs can have different numbers of blocks. Table I below describes an example row address format for accessing a page in a CAU.
Referring to Table I, an example n-bit (e.g., 24 bits) row address can be presented to a controller in the NAND device in the following format: [CAU: Block: Page]. CAU is a number (e.g., an integer) that represents a die or plane. Block is a block offset in the CAU identified by the CAU number, and Page is a page offset in the block identified by Block. For example, in a device with 128 pages per block, 8192 blocks per CAU and 6 CAUs: X will be 7 (2^7=128), Y will be 13 (2^13=8192) and Z will be 3 (2^2<6<2^3).
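A sketch of how such a row address could be packed, using the example bit widths above; the packing order follows the stated [CAU: Block: Page] format, and the macro and function names are illustrative assumptions.

    /* Sketch: packing a [CAU : Block : Page] row address. */
    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_BITS  7u    /* X: 2^7  = 128 pages per block */
    #define BLOCK_BITS 13u   /* Y: 2^13 = 8192 blocks per CAU */
    #define CAU_BITS   3u    /* Z: 2^2 < 6 CAUs < 2^3         */

    static uint32_t pack_row_address(uint32_t cau, uint32_t block, uint32_t page) {
        return (cau << (BLOCK_BITS + PAGE_BITS)) | (block << PAGE_BITS) | page;
    }

    int main(void) {
        /* CAU 5, block 8191, page 127. */
        uint32_t row = pack_row_address(5, 8191, 127);
        printf("row address = 0x%06X\n", row);
        return 0;
    }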
An example configuration of the NVM package 104 is described below.
The NVM package includes an NVM controller 202 which communicates with the CAUs through control bus 208 and address/data bus 210. During operation, the NVM controller 202 receives commands from the host controller (not shown) and, in response to a command, asserts control signals on the control bus 208 and addresses or data on the address/data bus 210 to perform an operation (e.g., read, program, or erase operation) on one or more CAUs. In some implementations, the command includes a row address having the form [CAU: Block: Page], as described above in reference to Table I.
The NVM package 104 is capable of supporting a transparent mode. Transparent mode enables access to a memory array without ECC and can be used to evaluate the performance of the controller 106. The NVM package 104 also supports generic raw NVM commands for read, write and get status operations. Tables 1-3 describe example Read, Write and Commit operations. As with conventional raw NVM, the NVM devices should be ready before a write command is issued. Readiness can be determined using a Status Read operation, as described in reference to Table 4.
In addition to the operations described above, the controller 106 can support various other commands. A Page Parameter Read command returns geometry parameters from the NVM package 104. Some examples of geometry parameters include, but are not limited to: die size, block size, page size, MDS, run and stride. An Abort command causes the controller 106 to monitor the current operation and stop subsequent stride operations in progress. A Reset command stops the current operation, making the contents of the memory cells that are being altered invalid. A command register in the controller 106 is cleared in preparation for the next command. A Read ID command returns a product identification. A Read Timing command returns the setup, hold and delay times for write and erase commands. A Read Device Parameter command returns specific identification for an NVM package 104, including specification support, device version and firmware version.
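One way a host-side driver might hold the geometry parameters returned by a Page Parameter Read is sketched below; the struct layout, the read_page_parameters() helper, and the sample values are assumptions for illustration (the 128-page block and 4096+16-byte page figures follow the example architecture in this section), not the defined command format.

    /* Sketch: host-side representation of Page Parameter Read results. */
    #include <stdint.h>
    #include <stdio.h>

    struct nvm_geometry {
        uint64_t die_size;     /* bytes per die                           */
        uint32_t block_size;   /* pages per erasable block                */
        uint32_t page_size;    /* data bytes per page                     */
        uint32_t mds;          /* metadata bytes per page                 */
        uint32_t run;          /* number of concurrently addressable CAUs */
        uint32_t stride;       /* blocks per CAU for stride commands      */
    };

    /* Hypothetical transport hook: issue the command and fill the struct. */
    static void read_page_parameters(struct nvm_geometry *g) {
        *g = (struct nvm_geometry){
            .die_size = 2ull << 30, .block_size = 128, .page_size = 4096,
            .mds = 16, .run = 2, .stride = 4,
        };
    }

    int main(void) {
        struct nvm_geometry g;
        read_page_parameters(&g);
        printf("run=%u stride=%u page=%u+%u bytes\n",
               g.run, g.stride, g.page_size, g.mds);
        return 0;
    }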
An example command set is described in Table 5 below.
To leverage the multiple CAU architecture in the NVM package 104, the NVM package 104 can support access to all or several CAUs using an extended command set. The NVM package 104 can support the following extended commands, where all addresses are aligned to the PPS: Read with Address, Write with Address, Erase with Address, and Status with Address.
An example Read command with Address operation for a single page across two CAUs (run=2 and stride=1) can be as follows:
(Read)[block0 page0]
(Read)[block1 page0]
(GetPageStatus)[block0 page0]W4R{data+metadata}
(GetPageStatus)[block1 page0]W4R{data+metadata}
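A sketch of the read pattern above from the host's point of view: issue the reads to both CAUs before draining status and data for each page. The issue_read() and get_page_status_and_transfer() helpers stand in for whatever transport the host actually uses; they are assumptions, not part of the defined interface.

    /* Sketch: Read with Address across two CAUs (run=2, stride=1). */
    #include <stdio.h>

    static void issue_read(unsigned block, unsigned page) {
        printf("(Read)[block%u page%u]\n", block, page);
    }

    static void get_page_status_and_transfer(unsigned block, unsigned page) {
        /* Wait-for-ready (W4R), then move PPS bytes of data plus metadata. */
        printf("(GetPageStatus)[block%u page%u]W4R{data+metadata}\n", block, page);
    }

    int main(void) {
        const unsigned run = 2, page = 0;
        for (unsigned b = 0; b < run; b++)
            issue_read(b, page);                     /* both CAUs start working  */
        for (unsigned b = 0; b < run; b++)
            get_page_status_and_transfer(b, page);   /* drain results in order   */
        return 0;
    }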
An example Write command with Address operation for a single page across two CAUs (run=2 and stride=1) can be as follows:
(StrideWrite)[block0 page0]<data+metadata>
(StrideWrite)[block1 page0]<data+metadata>
(GetPageStatus)[block0 page0]W4R{status}
(GetPageStatus)[block1 page0]W4R{status}
(Commit)[block0 page0]
(Commit)[block1 page0]
To leverage vendor specific commands, the NVM package supports multiple page operations within a CAU. Specifically, the NVM package supports StrideRead and StrideWrite commands.
In step 612, the counter I is incremented by one. In step 614, I is compared to S. If I<S, then the operation 600 returns to step 606. If I=S, the operation 600 proceeds to step 616 and starts the transfer of the pages in the stride.
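A sketch of this loop structure, assuming step 606 and the steps that follow it perform the per-stride work (those details are not reproduced here); do_stride_step() and transfer_stride_pages() are placeholders invented for the example.

    /* Sketch of the loop in steps 606-616. */
    #include <stdio.h>

    static void do_stride_step(unsigned i)        { printf("stride step %u\n", i); }
    static void transfer_stride_pages(unsigned s) { printf("transferring %u pages in stride\n", s); }

    int main(void) {
        const unsigned S = 4;            /* stride count */
        unsigned I = 0;
        do {
            do_stride_step(I);           /* step 606 and the steps that follow   */
            I++;                         /* step 612: increment the counter I    */
        } while (I < S);                 /* step 614: if I < S, repeat           */
        transfer_stride_pages(S);        /* step 616: transfer pages in the stride */
        return 0;
    }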
An example StrideRead with Address operation of eight pages spread across two CAUs and four strides (run=2 and stride=4) can be as follows:
(StrideRead)[block0 page0]
(StrideRead)[block1 page0]
(StrideRead)[block2 page0]
(StrideRead)[block3 page0]
(StrideRead)[block4 page0]
(StrideRead)[block5 page0]
(LastStrideRead)[block6 page0]
(LastStrideRead)[block7 page0]
(GetPageStatus)[block0 page0]W4R{data+metadata}
(GetPageStatus)[block1 page0]W4R{data+metadata}
(GetPageStatus)[block2 page0]W4R{data+metadata}
(GetPageStatus)[block3 page0]W4R{data+metadata}
(GetPageStatus)[block4 page0]W4R{data+metadata}
(GetPageStatus)[block5 page0]W4R{data+metadata}
(GetPageStatus)[block6 page0]W4R{data+metadata}
(GetPageStatus)[block7 page0]W4R{data+metadata}
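The sequence above follows a simple rule: all but the final two reads (two being the run count) use StrideRead, the last two use LastStrideRead, and status and data are drained afterwards. The following sketch reproduces the listing for run=2 and stride=4; it is illustrative only.

    /* Sketch: generate the StrideRead sequence for run=2, stride=4. */
    #include <stdio.h>

    int main(void) {
        const unsigned run = 2, stride = 4, page = 0;
        const unsigned pages = run * stride;          /* 8 pages total */

        for (unsigned b = 0; b < pages; b++) {
            const char *cmd = (b < pages - run) ? "StrideRead" : "LastStrideRead";
            printf("(%s)[block%u page%u]\n", cmd, b, page);
        }
        for (unsigned b = 0; b < pages; b++)
            printf("(GetPageStatus)[block%u page%u]W4R{data+metadata}\n", b, page);
        return 0;
    }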
In step 710, the host processor transfers PPS bytes of data to the NVM package. In step 712, the host processor issues a Commit command with Address to commit the writes to memory arrays. In step 714, the host processor performs a Wait for status with Address until the NVM package provides a status indicating that the data was committed to memory. In step 716, the number of pages remaining to be written is decremented by one, and the operation 700 returns to step 704.
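A sketch of this per-page flow, assuming step 704 checks whether any pages remain to be written; the transfer, commit, and wait helpers are placeholder names for the host's actual operations.

    /* Sketch of the write flow in steps 704-716. */
    #include <stdio.h>

    static void transfer_pps_bytes(unsigned p)           { printf("page %u: transfer PPS bytes\n", p); }
    static void commit_with_address(unsigned p)          { printf("page %u: Commit with Address\n", p); }
    static void wait_for_status_with_address(unsigned p) { printf("page %u: status = committed\n", p); }

    int main(void) {
        unsigned pages_remaining = 8;
        unsigned page = 0;
        while (pages_remaining > 0) {                 /* step 704: pages left to write? */
            transfer_pps_bytes(page);                 /* step 710                       */
            commit_with_address(page);                /* step 712                       */
            wait_for_status_with_address(page);       /* step 714: wait for commit status */
            pages_remaining--;                        /* step 716, then back to step 704 */
            page++;
        }
        return 0;
    }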
An example StrideWrite with Address operation of eight pages spread across two CAUs and four strides (run=2 and stride=4) can be as follows:
(StrideWrite)[block0 page0]<data+metadata>
(StrideWrite)[block1 page0]<data+metadata>
(GetPageStatus)[block0 page0]W4R{status}
(StrideWrite)[block2 page0]<data+metadata>
(GetPageStatus)[block1 page0]W4R{status}
(StrideWrite)[block3 page0]<data+metadata>
(GetPageStatus)[block2 page0]W4R{status}
(StrideWrite)[block4 page0]<data+metadata>
(GetPageStatus)[block3 page0]W4R{status}
(StrideWrite)[block5 page0]<data+metadata>
(GetPageStatus)[block4 page0]W4R{status}
(LastStrideWrite)[block6 page0]<data+metadata>
(GetPageStatus)[block5 page0]W4R{status}
(LastStrideWrite)[block7 page0]<data+metadata>
(GetPageStatus)[block6 page0]W4R{status}
(GetPageStatus)[block7 page0]W4R{status}
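The following sketch reproduces the pipelined pattern above for run=2 and stride=4: the first two writes are issued back to back, each subsequent write is interleaved with the status poll for an earlier page, and the final two writes use LastStrideWrite. The helper names are illustrative assumptions.

    /* Sketch: generate the pipelined StrideWrite sequence for run=2, stride=4. */
    #include <stdio.h>

    static void stride_write(unsigned b, int last) {
        printf("(%s)[block%u page0]<data+metadata>\n",
               last ? "LastStrideWrite" : "StrideWrite", b);
    }

    static void get_page_status(unsigned b) {
        printf("(GetPageStatus)[block%u page0]W4R{status}\n", b);
    }

    int main(void) {
        const unsigned run = 2, stride = 4;
        const unsigned pages = run * stride;               /* 8 pages total */

        for (unsigned b = 0; b < run; b++)
            stride_write(b, 0);                            /* fill the pipeline        */
        for (unsigned b = 0; b + run < pages; b++) {
            get_page_status(b);                            /* drain a completed write  */
            stride_write(b + run, b + run >= pages - run); /* issue the next write     */
        }
        for (unsigned b = pages - run; b < pages; b++)
            get_page_status(b);                            /* drain the tail           */
        return 0;
    }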
While this specification contains many specifics, these should not be construed as limitations on the scope of what is being claimed or of what may be claimed, but rather as descriptions of features specific to particular embodiments. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the embodiments described above should not be understood as requiring such separation in all embodiments, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
Thus, particular embodiments have been described. Other embodiments are within the scope of the following claims.
This application claims the benefit of priority from U.S. Provisional Patent Application No. 61/140,436, filed Dec. 23, 2008, which is incorporated by reference herein in its entirety.