This application relates to the operation of re-programmable non-volatile memory systems such as semiconductor flash memory and to the ordering of the commands issued for such systems.
A host computer issues commands to a NAND storage device, such as a solid state storage device (SSD), without knowledge of the internals of the device. This may result in read/write traffic being unevenly distributed among various dies/planes/chips within the NAND storage device, keeping some dies/planes/chips busier than others, reducing the overall throughput. Consequently, such storage devices could benefit from techniques that could keep the different memory access channels busy even when the traffic from the host is not arriving evenly.
Methods are presented for operating a non-volatile memory system that includes one or more non-volatile flash memory circuits. A series of commands, each specifying a physical address on the non-volatile memory, is received, the series of commands including read, write and erase commands for the specified physical addresses. The received series of commands is arranged into a plurality of queues for execution, where separate queues are maintained for read commands, write commands, and erase commands. Sequences of commands to execute are selected from the plurality of queues, where only one of the queues is active at a time, and transmitted to the one or more non-volatile memory circuits to be executed.
Methods are also presented for a non-volatile memory system to provide access for a plurality of user applications to a non-volatile data storage section. The method includes receiving from the plurality of user applications requests for accessing corresponding user partitions of the data storage section as assigned by the memory system, wherein each of the user applications has a specified level of performance and availability for accessing the corresponding user partition, and wherein the user application requests are specified in terms of corresponding logical addresses. The specification of the user application requests in terms of corresponding logical addresses is translated to be expressed in terms of corresponding physical addresses for the non-volatile data storage section. The method arbitrates between requests from different ones of the user applications based upon the requests' corresponding physical addresses and corresponding specified levels of performance and availability to determine an order in which to execute the user application requests. Instructions are issued for the execution of the user application requests based upon the determined order.
Various aspects, advantages, features and embodiments are included in the following description of exemplary examples thereof, which description should be taken in conjunction with the accompanying drawings. All patents, patent applications, articles, other publications, documents and things referenced herein are hereby incorporated herein by this reference in their entirety for all purposes. To the extent of any inconsistency or conflict in the definition or use of terms between any of the incorporated publications, documents or things and the present application, those of the present application shall prevail.
With respect to the memory section 102, semiconductor memory devices include volatile memory devices, such as dynamic random access memory (“DRAM”) or static random access memory (“SRAM”) devices, non-volatile memory devices, such as resistive random access memory (“ReRAM”), electrically erasable programmable read only memory (“EEPROM”), flash memory (which can also be considered a subset of EEPROM), ferroelectric random access memory (“FRAM”), and magnetoresistive random access memory (“MRAM”), and other semiconductor elements capable of storing information. Each type of memory device may have different configurations. For example, flash memory devices may be configured in a NAND or a NOR configuration.
The memory devices can be formed from passive and/or active elements, in any combinations. By way of non-limiting example, passive semiconductor memory elements include ReRAM device elements, which in some embodiments include a resistivity switching storage element, such as an anti-fuse, phase change material, etc., and optionally a steering element, such as a diode, etc. Further by way of non-limiting example, active semiconductor memory elements include EEPROM and flash memory device elements, which in some embodiments include elements containing a charge storage region, such as a floating gate, conductive nanoparticles, or a charge storage dielectric material.
Multiple memory elements may be configured so that they are connected in series or so that each element is individually accessible. By way of non-limiting example, flash memory devices in a NAND configuration (NAND memory) typically contain memory elements connected in series. A NAND memory array may be configured so that the array is composed of multiple strings of memory in which a string is composed of multiple memory elements sharing a single bit line and accessed as a group. Alternatively, memory elements may be configured so that each element is individually accessible, e.g., a NOR memory array. NAND and NOR memory configurations are exemplary, and memory elements may be otherwise configured.
The semiconductor memory elements located within and/or over a substrate may be arranged in two or three dimensions, such as a two dimensional memory structure or a three dimensional memory structure.
In a two dimensional memory structure, the semiconductor memory elements are arranged in a single plane or a single memory device level. Typically, in a two dimensional memory structure, memory elements are arranged in a plane (e.g., in an x-z direction plane) which extends substantially parallel to a major surface of a substrate that supports the memory elements. The substrate may be a wafer over or in which the layers of the memory elements are formed, or it may be a carrier substrate which is attached to the memory elements after they are formed. As a non-limiting example, the substrate may include a semiconductor such as silicon.
The memory elements may be arranged in the single memory device level in an ordered array, such as in a plurality of rows and/or columns. However, the memory elements may be arrayed in non-regular or non-orthogonal configurations. The memory elements may each have two or more electrodes or contact lines, such as bit lines and word lines.
A three dimensional memory array is arranged so that memory elements occupy multiple planes or multiple memory device levels, thereby forming a structure in three dimensions (i.e., in the x, y and z directions, where the y direction is substantially perpendicular and the x and z directions are substantially parallel to the major surface of the substrate).
As a non-limiting example, a three dimensional memory structure may be vertically arranged as a stack of multiple two dimensional memory device levels. As another non-limiting example, a three dimensional memory array may be arranged as multiple vertical columns (e.g., columns extending substantially perpendicular to the major surface of the substrate, i.e., in the y direction), with each column having multiple memory elements. The columns may be arranged in a two dimensional configuration, e.g., in an x-z plane, resulting in a three dimensional arrangement of memory elements with elements on multiple vertically stacked memory planes. Other configurations of memory elements in three dimensions can also constitute a three dimensional memory array.
By way of non-limiting example, in a three dimensional NAND memory array, the memory elements may be coupled together to form a NAND string within a single horizontal (e.g., x-z) memory device level. Alternatively, the memory elements may be coupled together to form a vertical NAND string that traverses across multiple horizontal memory device levels. Other three dimensional configurations can be envisioned wherein some NAND strings contain memory elements in a single memory level while other strings contain memory elements which span through multiple memory levels. Three dimensional memory arrays may also be designed in a NOR configuration and in a ReRAM configuration.
Typically, in a monolithic three dimensional memory array, one or more memory device levels are formed above a single substrate. Optionally, the monolithic three dimensional memory array may also have one or more memory layers at least partially within the single substrate. As a non-limiting example, the substrate may include a semiconductor such as silicon. In a monolithic three dimensional array, the layers constituting each memory device level of the array are typically formed on the layers of the underlying memory device levels of the array. However, layers of adjacent memory device levels of a monolithic three dimensional memory array may be shared or have intervening layers between memory device levels.
Then again, two dimensional arrays may be formed separately and then packaged together to form a non-monolithic memory device having multiple layers of memory. For example, non-monolithic stacked memories can be constructed by forming memory levels on separate substrates and then stacking the memory levels atop each other. The substrates may be thinned or removed from the memory device levels before stacking, but as the memory device levels are initially formed over separate substrates, the resulting memory arrays are not monolithic three dimensional memory arrays. Further, multiple two dimensional memory arrays or three dimensional memory arrays (monolithic or non-monolithic) may be formed on separate chips and then packaged together to form a stacked-chip memory device.
Associated circuitry is typically required for operation of the memory elements and for communication with the memory elements. As non-limiting examples, memory devices may have circuitry used for controlling and driving memory elements to accomplish functions such as programming and reading. This associated circuitry may be on the same substrate as the memory elements and/or on a separate substrate. For example, a controller for memory read-write operations may be located on a separate controller chip and/or on the same substrate as the memory elements.
It will be recognized that the following is not limited to the two dimensional and three dimensional exemplary structures described, but covers all relevant memory structures within the spirit and scope described herein.
There are many commercially successful non-volatile solid-state memory devices being used today. These memory devices may employ different types of memory cells, each type having one or more charge storage elements.
Typical non-volatile memory cells include EEPROM and flash EEPROM cells. Memory devices utilizing dielectric storage elements provide further examples.
In practice, the memory state of a cell is usually read by sensing the conduction current across the source and drain electrodes of the cell when a reference voltage is applied to the control gate. Thus, for each given charge on the floating gate of a cell, a corresponding conduction current with respect to a fixed reference control gate voltage may be detected. Similarly, the range of charge programmable onto the floating gate defines a corresponding threshold voltage window or a corresponding conduction current window.
Alternatively, instead of detecting the conduction current among a partitioned current window, it is possible to set the threshold voltage for a given memory state under test at the control gate and detect if the conduction current is lower or higher than a threshold current (cell-read reference current). In one implementation the detection of the conduction current relative to a threshold current is accomplished by examining the rate the conduction current is discharging through the capacitance of the bit line.
As can be seen from the description above, the more states a memory cell is made to store, the more finely divided is its threshold window. For example, a memory device may have memory cells having a threshold window that ranges from −1.5V to 5V. This provides a maximum width of 6.5V. If the memory cell is to store 16 states, each state may occupy from 200 mV to 300 mV in the threshold window. This will require higher precision in programming and reading operations in order to be able to achieve the required resolution.
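As a worked check of the figures above (using the stated window and state count; the per-state margin is as given in the text):

$$
5\,\mathrm{V} - (-1.5\,\mathrm{V}) = 6.5\,\mathrm{V}, \qquad \frac{6.5\,\mathrm{V}}{16\ \text{states}} \approx 0.4\,\mathrm{V}\ \text{per state},
$$

of which, after allowing guard bands between adjacent states, roughly 200 mV to 300 mV remains for each state.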
When an addressed memory transistor 10 within a NAND string is read or is verified during programming, its control gate 30 is supplied with an appropriate voltage. At the same time, the rest of the non-addressed memory transistors in the NAND string 50 are fully turned on by application of sufficient voltage on their control gates. In this way, a conductive path is effectively created from the source of the individual memory transistor to the source terminal 54 of the NAND string and likewise for the drain of the individual memory transistor to the drain terminal 56 of the cell.
One difference between flash memory and other types of memory is that a cell is programmed from the erased state. That is, the floating gate is first emptied of charge. Programming then adds a desired amount of charge back to the floating gate. It does not support removing a portion of the charge from the floating gate to go from a more programmed state to a lesser one. This means that updated data cannot overwrite existing data and must instead be written to a previously unwritten location.
Furthermore, erasing empties all of the charge from the floating gate and generally takes appreciable time. For that reason, it would be cumbersome and very slow to erase cell by cell or even page by page. In practice, the array of memory cells is divided into a large number of blocks of memory cells. As is common for flash EEPROM systems, the block is the unit of erase. That is, each block contains the minimum number of memory cells that are erased together. While aggregating a large number of cells in a block to be erased in parallel will improve erase performance, a large block size also entails dealing with a larger amount of updated and obsolete data.
Each block is typically divided into a number of physical pages. A logical page is a unit of programming or reading that contains a number of bits equal to the number of cells in a physical page. In a memory that stores one bit per cell, one physical page stores one logical page of data. In memories that store two bits per cell, a physical page stores two logical pages. The number of logical pages stored in a physical page thus reflects the number of bits stored per cell. In one embodiment, the individual pages may be divided into segments and the segments may contain the fewest number of cells that are written at one time as a basic programming operation. One or more logical pages of data are typically stored in one row of memory cells. A page can store one or more sectors. A sector includes user data and overhead data.
A 2-bit code having a lower bit and an upper bit can be used to represent each of the four memory states. For example, the “0”, “1”, “2” and “3” states are respectively represented by “11”, “01”, “00” and “10”. The 2-bit data may be read from the memory by sensing in “full-sequence” mode, where the two bits are sensed together by sensing relative to the read demarcation threshold values rV1, rV2 and rV3 in three sub-passes respectively.
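A minimal sketch of the state-to-code mapping and of a full-sequence decode against the three demarcation thresholds; the function name, the millivolt representation, and the bit packing (lower bit as the least significant bit) are illustrative assumptions, not part of the original description.

```c
/* State-to-code mapping from the text: states 0..3 map to "11", "01", "00", "10".
 * The lower bit is taken as the least significant bit in this sketch. */
static const unsigned char state_to_bits[4] = { 0x3, 0x1, 0x0, 0x2 };

/* Full-sequence read sketch: resolve the memory state by comparing the cell's
 * threshold voltage against the demarcation values rV1 < rV2 < rV3 (in mV),
 * then map the state to its 2-bit code. */
static unsigned char read_two_bits(int cell_vt_mv, int rv1_mv, int rv2_mv, int rv3_mv)
{
    int state;
    if (cell_vt_mv < rv1_mv)      state = 0;
    else if (cell_vt_mv < rv2_mv) state = 1;
    else if (cell_vt_mv < rv3_mv) state = 2;
    else                          state = 3;
    return state_to_bits[state];
}
```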
An alternative arrangement to a conventional two-dimensional (2-D) NAND array is a three-dimensional (3-D) array. In contrast to 2-D NAND arrays, which are formed along a planar surface of a semiconductor wafer, 3-D arrays extend up from the wafer surface and generally include stacks, or columns, of memory cells extending upwards. Various 3-D arrangements are possible. In one arrangement a NAND string is formed vertically with one end (e.g. source) at the wafer surface and the other end (e.g. drain) on top. In another arrangement a NAND string is formed in a U-shape so that both ends of the NAND string are accessible on top, thus facilitating connections between such strings.
As with planar NAND strings, select gates 705, 707, are located at either end of the string to allow the NAND string to be selectively connected to, or isolated from, external elements 709, 711. Such external elements are generally conductive lines such as common source lines or bit lines that serve large numbers of NAND strings. Vertical NAND strings may be operated in a similar manner to planar NAND strings, and both SLC and MLC operation is possible.
A 3D NAND array can, loosely speaking, be formed by tilting up the respective structures 50 and 210 of
To the right of
The next sections look further at the issuance of commands to the memory circuits, whether this is done in the controller circuit (100,
NAND flash is typically arranged as stacked dies within an integrated circuit package. Each die is further organized into planes, each plane containing a memory array addressable as a set of [die, plane, block, page] tuples. Read/Write/Erase commands are addressed to the aforementioned 4-tuple. For instance, [0,0,26,32] is a 4-tuple addressed to die 0, plane 0, block 26, page 32.
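As an illustration of this addressing, the 4-tuple can be represented as a simple structure; this is a minimal sketch in C, and the type name, field names and field widths below are hypothetical, chosen only for clarity.

```c
#include <stdint.h>

/* Hypothetical representation of the [die, plane, block, page] 4-tuple used
 * to address a NAND physical location; field widths are illustrative. */
struct nand_pba {
    uint8_t  die;    /* die within the package */
    uint8_t  plane;  /* plane within the die */
    uint16_t block;  /* block within the plane */
    uint16_t page;   /* page within the block */
};

/* The example from the text: die 0, plane 0, block 26, page 32. */
static const struct nand_pba example_pba = { 0, 0, 26, 32 };
```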
A host computer/controller usually sees NAND Flash as a contiguous set of logical addresses, exposed via an interconnect standard like PCIE or SATA, often called the front-end of a NAND storage device. A Flash Translation Layer (FTL) translates read/write commands issued to logical addresses (LBAs) into physical addresses (PBAs). The techniques described here pertain to traffic from the FTL, which is addressed to PBAs.
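A minimal sketch of the translation step, assuming a simple flat mapping table indexed by LBA and reusing the nand_pba structure sketched above; the table size and function name are hypothetical, and real FTLs typically use paged or hierarchical mapping structures to bound RAM usage.

```c
#include <stdint.h>

#define NUM_LBAS (1u << 20)   /* hypothetical capacity in logical blocks */

/* Hypothetical flat logical-to-physical mapping table. */
static struct nand_pba l2p_table[NUM_LBAS];

/* Translate a logical block address into the physical address that the
 * PBA-addressed queues described below operate on. */
static struct nand_pba ftl_translate(uint32_t lba)
{
    return l2p_table[lba];   /* assumes the entry has been mapped */
}
```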
A queue-picker 321 switches between the various command queues based on a chain-length parameter. For instance, the system may choose to execute, say, 300 read commands before switching to the write queues, whose chain-length is, for instance, 30. The values 300 and 30 were picked because writes often take roughly ten times longer than reads for 2-bit per cell memory arrangements. For commands with a dependency, an in-band SYNC command is inserted, forcing a queue switch (say, from the read to the write queues). (A SYNC command is a non-admin command used to 'synchronize' all queues to a known state, as explained in more detail in later sections.)
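A minimal sketch of the chain-length based queue-picker just described; the erase chain length, the helper names, and the round-robin switching order are assumptions made only for illustration, and SYNC handling is reduced to a single check on the queue head.

```c
enum queue_id { Q_READ, Q_WRITE, Q_ERASE, Q_COUNT };

/* Hypothetical chain lengths: the 300/30 read/write ratio reflects writes
 * taking roughly ten times longer than reads on 2-bit-per-cell NAND; the
 * erase value is purely illustrative. */
static const int chain_len[Q_COUNT] = { 300, 30, 8 };

struct cmd_queue { int count; /* ... queue storage ... */ };

/* Hypothetical helpers, defined elsewhere. */
static int  head_is_sync(const struct cmd_queue *q);
static void issue_next_command(struct cmd_queue *q);

/* Run the active queue until its chain length is exhausted, the queue is
 * empty, or an in-band SYNC forces an early switch; then rotate queues. */
static enum queue_id run_active_queue(enum queue_id active, struct cmd_queue q[Q_COUNT])
{
    for (int issued = 0; issued < chain_len[active] && q[active].count > 0; issued++) {
        if (head_is_sync(&q[active]))
            break;                       /* SYNC at the head: stop early and switch */
        issue_next_command(&q[active]);
    }
    return (enum queue_id)((active + 1) % Q_COUNT);
}
```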
The individual read, write and erase queues for a device can be further divided into die-based queues (D0, D1, D2). Once a queue is picked, say the read-queue, one command can be picked from each die queue and sent to a command-consolidator 323. The command-consolidator logic will combine commands to achieve multi-plane operation, if possible. Otherwise, it may still combine commands in ways that allow for optimal utilization of caching in the NAND. The consolidated command sequences are then sent on to the memory system 300, where the command slots are shown at 331. The memory section can contain multiple (e.g. 8 or 16) devices, each with a controller and memory chips. Coming back from the memory section are a completion queue 325 and a command completer 327 to provide any callbacks. This arrangement can accommodate both commands originating from the host and operations originating within the memory system, such as garbage collection or other housekeeping operations, although in the latter case the data need not be transferred out of the memory and the higher levels just need to be aware of the operations.
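A sketch of how one command per die might be drawn from the per-die queues of the active command type and handed to the consolidator; the die count, the command type, and the helper names are assumptions, and cmd_queue is the hypothetical structure from the sketch above.

```c
#define NUM_DIES 4   /* hypothetical die count */

struct nand_cmd;   /* opaque command type for this sketch */

/* Hypothetical helpers, defined elsewhere. */
struct nand_cmd *dequeue_head(struct cmd_queue *q);
void consolidate_and_send(struct nand_cmd *batch[], int n);

/* One pass over the per-die queues of the active command type: pull at most
 * one command per die so that all dies stay busy, then let the consolidator
 * try to merge them (e.g. into multi-plane operations). */
static void fill_consolidator(struct cmd_queue die_q[NUM_DIES])
{
    struct nand_cmd *batch[NUM_DIES];
    int n = 0;

    for (int die = 0; die < NUM_DIES; die++)
        if (die_q[die].count > 0)
            batch[n++] = dequeue_head(&die_q[die]);

    if (n > 0)
        consolidate_and_send(batch, n);
}
```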
The portions of
Although
The queue picker 321 can select the active request queue based on a state machine, such as that shown in
As described in the previous section, a SYNC forces a queue-switch until all command queues are in sync. Synchronization primitives can be handled by maintaining a set of flags.
A sync primitive is inserted to ensure read-after-write or write-after-erase coherency. The exemplary embodiment maintains a set of two hash tables to keep track of the physical block addresses (PBAs) of commands pending completion. The first hash table corresponds to pending erase commands, and the second to pending write commands. An incoming write or erase command's PBA is passed through a hash function, which points to a location in the corresponding hash table. The entry in the table corresponds to the count of commands issued to the said PBA. Every incoming command increments the count, and completion decrements the count.
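A minimal sketch of the two pending-command hash tables, assuming simple count buckets keyed by a hash of the PBA; the table size, the hash function, and the helper names are illustrative, and nand_pba is the hypothetical structure sketched earlier.

```c
#include <stdint.h>

#define TABLE_SIZE 4096   /* hypothetical number of hash buckets */

/* Counts of pending (issued but not yet completed) commands per hashed PBA:
 * one table for erases, one for writes. */
static uint16_t pending_erase[TABLE_SIZE];
static uint16_t pending_write[TABLE_SIZE];

/* Hypothetical hash over the [die, plane, block, page] tuple. */
static unsigned pba_hash(const struct nand_pba *p)
{
    uint32_t key = ((uint32_t)p->die << 26) ^ ((uint32_t)p->plane << 22) ^
                   ((uint32_t)p->block << 8) ^ (uint32_t)p->page;
    return (key * 2654435761u) % TABLE_SIZE;   /* multiplicative hashing */
}

/* Issuing a command increments the count for its PBA; completion decrements it. */
static void on_write_issued(const struct nand_pba *p)    { pending_write[pba_hash(p)]++; }
static void on_write_completed(const struct nand_pba *p) { pending_write[pba_hash(p)]--; }
static int  write_pending_for(const struct nand_pba *p)  { return pending_write[pba_hash(p)] != 0; }
/* Erase tracking uses pending_erase[] with analogous helpers. */
```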
In
Commands addressed to physical addresses are reordered. An exception is that commands can only be re-ordered within SYNC boundaries. In
A special case of SYNC insertion is worthy of mention here. Commands of the same type (for example reads) can be reordered without restriction within SYNC boundaries. However, commands of different types (say reads and writes) can only be re-ordered if there is no dependency between them. For instance, a read to [die, plane, block, page]=[0,1,356,56] cannot be executed prior to a write issued to the same address. (The scheme to check for such collisions using hash tables has been explained above.)
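A sketch of that collision check, building on the hash-table sketch above; write_pending_for is the hypothetical helper from that sketch, and erase_pending_for and hoist_read are analogous hypothetical helpers.

```c
/* Hypothetical helpers, defined elsewhere. */
static int  erase_pending_for(const struct nand_pba *p);
static void hoist_read(struct nand_cmd *cmd, const struct nand_pba *pba);

/* Within SYNC boundaries, a read may be moved ahead of queued writes/erases
 * only if no write or erase to its PBA is still pending. */
static void maybe_hoist_read(struct nand_cmd *cmd, const struct nand_pba *pba)
{
    if (!write_pending_for(pba) && !erase_pending_for(pba))
        hoist_read(cmd, pba);   /* no dependency: safe to execute the read earlier */
}
```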
In commercially available commodity block storage, the service provider may provide QoS (quality-of-service) guarantees on the performance and availability of the storage. The QoS is codified in an SLA (service-level agreement) describing the guarantees in numerical terms. One example of commodity block storage is Amazon EBS (elastic block store). Amazon EBS can be used to provide a storage infrastructure to additional web services from Amazon. Therefore, from the perspective of the web service, EBS is local storage. Amazon publishes an SLA which governs the performance of EBS.
Existing implementations of QoS in the literature and in products view flash storage as block devices addressable via LBAs (Logical Block Addresses). QoS methods applied to LBAs, while still practical, may not be the optimal solution for flash devices. One reason is that contiguous LBAs represent sequential access in disk drives, but, in multi-threaded hosts with multiple streams operating on a flash device, contiguous LBAs may, in the worst case, serialize the operations onto a single die. Another reason is that, since the LBA to PBA mapping is hidden from the host, QoS algorithms aiming to schedule LBAs in whatever fashion they deem beneficial may not yield the expected gains in flash.
The FTL layer, on account of its access to information (enumerated above) that is hidden from the host, is best suited for optimizing and guaranteeing I/O access times. An exemplary embodiment of this section implements the FTL and command-reordering on the host, which allows for extra information (listed below) about the I/Os to be passed along with the command. If the flash storage interface allows extra information to be added to standard commands, this can later be exploited by the QoS layer. For example, FileSystem meta-data I/O can be detected and routed to the priority read queues. This accelerates I/O, as the host uses the FileSystem I/O to decode the LBAs of the 'data' section of a file. A unique weight parameter can also be associated with every unique requestor; when I/Os are competing for the same [device, die, plane], ones with a higher weight parameter get scheduled earlier, as sketched below. This parameter maps directly to the end application requesting it from the host.
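A sketch of the weight-based arbitration, assuming each pending request carries the weight of its requesting application; the structure and function names are hypothetical, the device index is omitted for brevity, and the tie-breaking rule is an assumption.

```c
/* Hypothetical pending request competing for a [device, die, plane] resource. */
struct qos_request {
    struct nand_pba pba;      /* physical target of the request */
    unsigned        weight;   /* per-requestor weight assigned by the QoS layer */
    /* ... command payload ... */
};

/* Among requests targeting the same [device, die, plane], pick the one with
 * the highest weight; ties fall back to arrival order (lower index wins). */
static int pick_highest_weight(const struct qos_request *reqs, int n)
{
    int best = 0;
    for (int i = 1; i < n; i++)
        if (reqs[i].weight > reqs[best].weight)
            best = i;
    return best;
}
```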
The techniques described in the preceding sections have a number of advantages and useful properties. Having separate queues for reads, writes and erases allows commands without dependencies to execute independently of their order of arrival. Batch execution of each queue allows for higher utilization of NAND caching/multi-plane operations, increasing throughput.
Separate queues per die allow for a scheme that keeps all dies busy (if commands are available for the said die), even though traffic from the host arrived in a different, non-optimal order. The die-queues also reduce computation time for command-reordering. The computation required to fetch a command for a given die includes checking if the relevant queue is non-empty, and de-queuing from the head of the queue.
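The per-die fetch step just described is constant-time; a minimal sketch, assuming the per-die queues are simple singly linked FIFOs and that each command carries a 'next' link (both assumptions made only for this illustration).

```c
#include <stddef.h>

/* Hypothetical per-die FIFO of pending commands of the active type. */
struct die_cmd {
    struct die_cmd *next;
    /* ... command fields ... */
};

struct die_queue {
    struct die_cmd *head;
    struct die_cmd *tail;
};

/* Fetch the next command for a die: check that the queue is non-empty, then
 * dequeue from the head. Both steps are O(1), which keeps per-die reordering
 * cheap even at very large command queue depths. */
static struct die_cmd *fetch_for_die(struct die_queue *q)
{
    if (q->head == NULL)
        return NULL;              /* nothing pending for this die */
    struct die_cmd *cmd = q->head;
    q->head = cmd->next;
    if (q->head == NULL)
        q->tail = NULL;
    return cmd;
}
```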
The use of in-band SYNC commands helps in maintaining read-after-write and write-after-erase coherency, even though incoming commands are being actively re-ordered. Additionally, the use of hash-tables and a hash-function to monitor pending commands reduces computation time, and RAM resources. This design is scalable to very large command queue depths while keeping computation time and RAM usage low.
QoS performed on PBAs vs LBAs offers a more accurate scheme for controlling access to a common storage resource by multiple applications.
The foregoing detailed description has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the above to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. The described embodiments were chosen in order to explain the principles involved and their practical application, to thereby enable others to best utilize the various embodiments, with various modifications as are suited to the particular use contemplated. It is intended that the scope be defined by the claims appended hereto.