Storage Class Memory (SCM) has recently been developed as a non-volatile storage option that provides a fine granularity of data access (i.e., byte-addressable or cache line size). In addition, SCMs typically provide a shorter data access latency than traditional non-volatile storage devices, such as a Solid-State Drive (SSD) using flash memory or a Hard Disk Drive (HDD) using a rotating magnetic disk. SCM can include, for example, memories such as Magnetoresistive Random Access Memory (MRAM), Phase Change Memory (PCM), and Resistive RAM (RRAM).
Although SCM can allow for byte-addressable access of data (i.e., in units less than a page size or a block size), the time to write data to SCM may be much longer than the time to read data from SCM. This has slowed the adoption of SCM as a more affordable and power-efficient alternative to memories conventionally used for host memory, such as Dynamic Random Access Memory (DRAM) or Static Random Access Memory (SRAM).
The features and advantages of the embodiments of the present disclosure will become more apparent from the detailed description set forth below when taken in conjunction with the drawings. The drawings and the associated descriptions are provided to illustrate embodiments of the disclosure and not to limit the scope of what is claimed.
In the following detailed description, numerous specific details are set forth to provide a full understanding of the present disclosure. It will be apparent, however, to one of ordinary skill in the art that the various embodiments disclosed may be practiced without some of these specific details. In other instances, well-known structures and techniques have not been shown in detail to avoid unnecessarily obscuring the various embodiments.
Although SCM can provide faster reading and writing of data than conventional forms of non-volatile storage, SCM generally takes longer to write data than to read data. This can be especially apparent in cases where address indirection is used in the SCM, such as for wear leveling. As noted above, the longer write latency of SCMs can prevent the use of SCM as a replacement for volatile local memory, such as DRAM or SRAM, which are more expensive and consume more power. According to one aspect of the present disclosure, a Base Address Register (BAR) is exposed by device 111 to host 101 so that read commands may be sent for byte-addressable data (e.g., for cache lines or other units less than a page size or block size) using a memory device interface, while write commands for larger blocks of data are sent from host 101 using a block device interface. As discussed in more detail below, data to be written in SCM 120 can be aggregated or modified in buffer 107 of memory 106 of host 101 before being flushed to SCM 120. Host 101 can then send a write command for writing the aggregated or modified block of data in SCM 120. This arrangement reduces the latency for reading and writing data in SCM 120 so that SCM 120 can be used for storing byte-addressable data that would otherwise be stored in memory 106.
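For purposes of illustration only, the following minimal C sketch (not part of the disclosure; all names, sizes, and structures are hypothetical, and SCM 120 is simulated by an in-memory array) shows the division of labor described above: byte-addressable reads are served directly through a read-only BAR mapping, while writes are staged in a host-side block buffer and later flushed as a single block-sized write.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_SIZE 64u   /* illustrative block size */
#define SCM_BLOCKS 16u

static uint8_t scm[SCM_BLOCKS * BLOCK_SIZE];   /* stands in for SCM 120 */
static const uint8_t *bar = scm;               /* read-only BAR view of SCM */

/* Block device interface (cf. interfaces 12 and 22): whole-block transfers. */
static void blk_read(uint64_t b, void *buf)
{
    memcpy(buf, scm + b * BLOCK_SIZE, BLOCK_SIZE);
}
static void blk_write(uint64_t b, const void *buf)
{
    memcpy(scm + b * BLOCK_SIZE, buf, BLOCK_SIZE);
}

struct host_buf {            /* stands in for buffer 107 in memory 106 */
    uint8_t  data[BLOCK_SIZE];
    uint64_t block;
    int      valid, dirty;
};

/* Memory device interface path: byte-addressable read straight from the BAR. */
static uint8_t read_byte(uint64_t addr)
{
    return bar[addr];
}

/* Write path: stage the byte in a host-memory block (read-modify-write);
 * the block is sent to SCM later as a single block-sized write command. */
static void write_byte(struct host_buf *h, uint64_t addr, uint8_t v)
{
    uint64_t b = addr / BLOCK_SIZE;
    if (!h->valid || h->block != b) {
        if (h->valid && h->dirty)
            blk_write(h->block, h->data);  /* flush previously staged block */
        blk_read(b, h->data);              /* fetch current block contents */
        h->block = b;
        h->valid = 1;
        h->dirty = 0;
    }
    h->data[addr % BLOCK_SIZE] = v;
    h->dirty = 1;
}

int main(void)
{
    struct host_buf h = {0};
    write_byte(&h, 5, 42);            /* performed in host memory, not SCM */
    blk_write(h.block, h.data);       /* deferred flush of the whole block */
    printf("%u\n", (unsigned)read_byte(5));  /* byte read via BAR: 42 */
    return 0;
}
```

Note that while a block is staged and dirty, a byte read through the BAR would return stale data; the page table mechanism described below avoids this by redirecting reads for buffered data to host memory.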
In the example shown, host 101 is in communication with device 111 via bus or interconnect 110. Host 101 includes processor circuitry 102, Memory Management Unit (MMU) 104, memory 106, and device interface 108.
As with control circuitry 112 of device 111 discussed below, processor circuitry 102 can include circuitry such as one or more processors for executing instructions, such as a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a microcontroller, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), hard-wired logic, analog circuitry, and/or a combination thereof.
Memory 106 serves as a main memory for host 101 and can include, for example, a volatile RAM such as DRAM or SRAM, a non-volatile RAM, or other solid-state memory. While the description herein refers to solid-state memory generally, it is understood that solid-state memory may comprise one or more of various types of memory devices such as flash integrated circuits, C-RAM, PC-RAM or PRAM, Programmable Metallization Cell RAM (PMC-RAM or PMCm), OUM, RRAM, NAND memory (e.g., Single-Level Cell (SLC) memory, Multi-Level Cell (MLC) memory (i.e., two or more levels), or any combination thereof), NOR memory, EEPROM, FeRAM, MRAM, other discrete Non-Volatile Memory (NVM) chips, or any combination thereof. In some implementations, memory 106 may be located external to host 101, but used as a main memory for host 101.
Processor circuitry 102 also uses MMU 104 to access SCM 120 of device 111 via device interface 108. In some implementations, MMU 104 can access a page table that translates virtual addresses used by processor circuitry 102 into physical addresses (e.g., byte addresses) indicating where data for the virtual addresses is to be stored in or retrieved from memory 106 or SCM 120. In this regard, MMU 104 may keep track of the locations of byte-addressable data. In addition, MMU 104 may execute a memory device interface (e.g., memory device interface 10 discussed below).
Device interface 108 allows host 101 to communicate with device 111 via bus or interconnect 110. In some implementations, device interface 108 may communicate with host interface 118 of device 111 via bus or interconnect 110 using a standard, such as Peripheral Component Interconnect express (PCIe), Ethernet, or Fibre Channel. As discussed in more detail below, bus or interconnect 110 can include a bus or interconnect that can allow commands for both byte-addressable data with a memory device interface and block-addressable data with a block device interface. In other embodiments, host 101 and device 111 may communicate via two or more buses or interconnects, each providing a memory device interface, a block device interface, or both.
In this regard, processor circuitry 102 uses a plurality of logical interfaces for reading data from and writing data to SCM 120 of device 111. For writing data and reading block-addressable data, host 101 interfaces with device 111 using a block device or storage device interface such as, for example, Non-Volatile Memory express (NVMe) that may be implemented, for example, by an OS driver executed by processor circuitry 102. For reading byte-addressable data, host 101 interfaces with device 111 using a memory device interface, such as a PCIe Base Address Register (BAR) interface, Gen-Z, Open Coherent Accelerator Processor Interface (OpenCAPI), or Cache Coherent Interconnect for Accelerators (CCIX), that may be executed by processor circuitry 102. In some implementations, the memory device interface may be implemented by MMU 104, or by other circuitry of processor circuitry 102, such as a hardware accelerator.
Device 111 includes host interface 118, control circuitry 112, memory 116, and SCM 120.
In addition, control circuitry 112 uses a plurality of logical interfaces for receiving and performing read and write commands from host 101 to access data in SCM 120. For reading and writing block-addressable data, control circuitry 112 interfaces with host 101 using a block device interface, which may include, for example, an NVMe interface. For reading byte-addressable data, control circuitry 112 interfaces with host 101 using a memory device interface. The memory device interface may include, for example, a PCIe BAR interface, Gen-Z, OpenCAPI, or CCIX.
Control circuitry 112 can include circuitry such as one or more processors for executing instructions and can include, for example, a CPU, a GPU, a microcontroller, a DSP, an ASIC, an FPGA, hard-wired logic, analog circuitry and/or a combination thereof. In some implementations, control circuitry 112 can include an SoC such that one or both of host interface 118 and memory 116 may be combined in a single chip with control circuitry 112. As with processor circuitry 102 of host 101 discussed above, control circuitry 112 of device 111 in some implementations can include separate components, such as separate hardware accelerators for implementing a memory device interface and a block device interface.
Memory 116 of device 111 can include, for example, a volatile RAM such as DRAM, a non-volatile RAM, or other solid-state memory. Control circuitry 112 can access memory 116 to execute instructions, such as a firmware of device 111 that can include instructions for implementing the memory device interface and the block device interface. In addition, control circuitry 112 may access memory 116 for data used while executing a firmware of device 111, data to be written in SCM 120, and/or data that has been read from SCM 120.
Those of ordinary skill in the art will appreciate that other implementations can include more or fewer elements than those shown in the examples described herein.
Write request B is initially received by memory device interface 10, but is redirected by memory device interface 10 to block device interface 12, since memory device interface 10 is only used for handling read requests for byte-addressable data, as opposed to write requests. In some implementations, MMU 104 hands control of the write request to an OS of host 101 since the memory mapping to SCM 120 is marked as read-only. As noted above, SCM 120 generally performs read commands faster than write commands. In the present disclosure, SCM 120 can serve as a local memory or a partial DRAM replacement for host 101 for read requests, while write requests are performed in memory 106 of host 101. This ordinarily allows for a smaller sized local memory at host 101, which can reduce power consumption and the cost of the overall system including host 101 and device 111.
As used herein, read and write requests refer to data accesses made at a byte level (i.e., byte-addressable data), such as cache line requests made by applications executed by processor circuitry 102 of host 101. On the other hand, read and write commands refer to commands sent from host 101 to device 111 to access data either at a byte level in the case of read commands from memory device interface 10, or at a block level (i.e., page or block-addressable data) from block device interface 12. A page size or block size can correspond to a unit of data in a virtual memory that is managed by an OS of host 101. Data accessed in device 111 by block device interfaces 12 and 22 is accessed in such page or block sized units.
Memory device interface 20 executed by control circuitry 112 of device 111 is configured to only receive and perform read commands for byte-addressable data. The performance of write commands received by memory device interface 20 may be blocked or trigger an error at device 111. Such errors may or may not be reported back to host 101.
Read request A for byte-addressable data is received by memory device interface 10 of host 101, which sends a corresponding read command to memory device interface 20 of device 111. Memory device interface 20 performs the read command by directly accessing the requested byte-addressable data from SCM 120 and returning the data to host 101.
In the case where write request B is to store byte-addressable data, block device interface 12 uses buffer 107 to aggregate or modify one or more portions of the block of data that includes the byte-addressable data to be written to SCM 120. Block device interface 12 sends device 111 a read command for the block of data that includes the byte-addressable data to be written. Block device interface 22 of device 111 receives the read command, performs a read operation on SCM 120, and returns the read block including the byte-addressable data to block device interface 12 of host 101. Block device interface 12 buffers the read block of data in buffer 107 and modifies one or more byte-addressable portions of the buffered block for write request B. In some cases, additional write requests for byte-addressable data included in the buffered block may also be performed while the block is stored in buffer 107.
Block device interface 12 then sends a write command for the modified block including the byte-addressable data to flush the data for write request B from buffer 107 to SCM 120. In some cases, the write command may include additional blocks that have been modified or written, such as data for write request D. Block device interface 22 of device 111 receives the write command and uses optional logical-to-physical mapping module 24 to identify one or more physical addresses in SCM 120 for storing one or more blocks including data B and D for write requests B and D. As noted above, logical-to-physical mapping module 24 may be omitted in other implementations, such as where SCM 120 does not use address indirection. In such implementations, block device interface 22, which may be executed by control circuitry 112, can perform the write command without translating the addresses provided by host 101.
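As a rough sketch of the role of logical-to-physical mapping module 24 (illustrative C only; the naive out-of-place allocator below is one common way such indirection supports wear leveling and is an assumption, not a detail from the disclosure):

```c
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE  4096u
#define NUM_BLOCKS  1024u

static uint8_t  scm_media[NUM_BLOCKS][BLOCK_SIZE]; /* stands in for SCM 120 */
static uint32_t l2p[NUM_BLOCKS];                   /* logical -> physical   */
static uint32_t next_free;                         /* trivial allocator     */

/* Block device interface 22: perform a block write command from the host.
 * The mapping module picks a fresh physical block (out-of-place write,
 * a common wear-leveling tactic) and records the new location. */
static uint32_t blk_write_cmd(uint32_t logical, const void *data)
{
    uint32_t phys = next_free++ % NUM_BLOCKS;      /* naive allocation */
    memcpy(scm_media[phys], data, BLOCK_SIZE);
    l2p[logical] = phys;                           /* update indirection */
    return phys;  /* new byte-addressable base = phys * BLOCK_SIZE,
                     reportable to the host in the completion indication */
}

/* Block device interface 22: perform a block read command from the host. */
static void blk_read_cmd(uint32_t logical, void *data)
{
    memcpy(data, scm_media[l2p[logical]], BLOCK_SIZE);
}

int main(void)
{
    uint8_t buf[BLOCK_SIZE] = {1};
    uint32_t phys = blk_write_cmd(7, buf);
    blk_read_cmd(7, buf);
    return phys == l2p[7] ? 0 : 1;
}
```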
One or more write completion indications are returned to block device interface 22 after completing the write operations. Block device interface 22 may forward or send a write completion indication to block device interface 12 of host 101 to indicate that the write command or write commands have been completed and may also provide the new byte-addressable physical addresses for data stored in the write operations in addition to block-addressable locations for the data. In other implementations, memory device interface 20 may instead provide the updated byte-addressable physical addresses to memory device interface 10 of host 101.
Read request C is also received by block device interface 12 of host 101. The data to be retrieved for read request C is addressed in terms of pages or blocks, as opposed to being a request for byte-addressable data, such as with read request A discussed above. Block device interface 12 repackages the request as read command C and sends read command C to block device interface 22 of device 111. For its part, block device interface 22 performs read command C by using optional logical-to-physical mapping module 24, which provides a physical address for reading block-addressable data C from SCM 120. Block-addressable data C is read from SCM 120 and returned to block device interface 22, which passes the data on to block device interface 12 of host 101 to complete the command. In some cases, data C may be buffered in a memory of device 111, such as memory 116, before sending the data to host 101.
As will be appreciated by those of ordinary skill in the art, other implementations can include different components or modules than those shown in this example.
In some implementations, memory device interface 20 of device 111 may expose a portion of the BAR as a readable and writable address range that maps to a memory of device 111, such as memory 116.
In the example state diagram, an entry in page table 16 begins in a first state, in which the data for the entry is stored in SCM 120 and the entry is set as read-only, pointing to a physical address in SCM 120.
The entry moves to the second state after a write request is received for the data represented by the entry. As discussed above, the write request can be handled as a software event by memory device interface 10 and/or block device interface 12. This ordinarily allows for more flexibility in the design and implementation of host-side buffering than hardware solutions that may rely exclusively on MMU 104.
In the second state, a block or page including the data for the write request has been retrieved by block device interface 12 of host 101 and stored in buffer 107 of memory 106 in host 101. The prior or obsolete version of the block may remain in SCM 120, but the modified block or page in buffer 107 is the current or valid version of the data for the virtual address. Memory device interface 10 or block device interface 12 also updates page table 16 to change the access to read/write and to map the virtual address for the entry to the physical address where the data has been written in buffer 107 of memory 106.
In some implementations, memory device interface 10 or block device interface 12 may identify that there have been no previous writes to the block or page or that the write request is the first write to the block or page. In such implementations, the data to be written for the block or page may be stored in buffer 107 without first retrieving the block or page from device 111. The write request is then performed on the buffered block or page.
While the entry is in the second state, the block or page for the entry stored in memory 106 can be modified or overwritten by the same application that issued the write request or by a different application. Data corresponding to the entry, such as byte-addressable data within the buffered block or page, can also be read from the physical address in memory 106 while the entry is in the second state. Memory device interface 10 may refer to the entry in page table 16 in response to read and write requests to modify or read the byte-addressable data corresponding to the virtual address that is stored in memory 106. Temporarily storing the data in memory 106 ordinarily allows for faster writes than writing the data to SCM 120, and buffered data can also be read back more quickly from memory 106 than from SCM 120. This buffering is therefore especially beneficial for data that is soon reused, such as cache lines, which are often read or modified soon after an initial write.
In addition, aggregating or modifying data in memory 106 and using a separate block device interface to flush an aggregated or modified block of data in one write operation is more efficient than making numerous smaller write operations in SCM 120, which has a greater write latency than read latency. The foregoing use of both a block device interface and a memory device interface with page table 16, together with the buffering of written data in buffer 107, can also provide a more efficient arrangement than switching access of a BAR of SCM 120 from read-only to read/write, or switching or temporarily modifying a single interface of SCM 120 to accommodate both byte-addressed and block-addressed data. Deferring writes to SCM 120 can improve performance of the system including host 101 and device 111, since the writes complete more quickly in memory 106 of host 101, and the aggregated or modified blocks can be written to SCM 120 at a later time when the write latency of SCM 120 is less critical for processes or threads executed by processor circuitry 102 that would otherwise wait for the data to be written before continuing execution.
After the data for the entry has been modified or aggregated into one or more blocks by block device interface 12, the data for the entry is flushed or de-staged by block device interface 12 from buffer 107 to SCM 120 via block device interface 22 of device 111. Block device interface 12 of host 101 updates the entry so that access to the virtual address is unavailable or blocked while the data is being flushed to SCM 120. In some implementations, indicating in the page table that the virtual address is unavailable can include removing or deleting an entry for the virtual address or marking the entry unavailable or obsolete. This can ensure consistency of the data so that different applications are not modifying data in memory 106 before access of the flushed data in SCM 120 is returned to read-only, which could result in reading an old or obsolete version of the data. The use of memory 106 to temporarily buffer write requests provides an asynchronous storage of data where the writing of the data to SCM 120 is deferred to improve system performance in terms of Input/Output Operations Per Second (IOPS), while the foregoing use of access permissions in page table 16 allows for the data to remain consistent.
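The three entry states discussed above and their transitions can be summarized as a small state machine. The following C sketch is illustrative only; the enum and function names are hypothetical.

```c
#include <stdint.h>

/* The three page table entry states described above (names illustrative). */
enum pte_state {
    PTE_READ_ONLY_SCM,   /* first state: data in SCM 120, read-only access */
    PTE_READ_WRITE_BUF,  /* second state: data in buffer 107, read/write   */
    PTE_UNAVAILABLE      /* third state: access blocked during the flush   */
};

struct pte {
    enum pte_state state;
    uint64_t addr;       /* SCM physical address or buffer 107 address */
};

/* A write request arrives: the block is fetched into buffer 107 and the
 * entry is remapped to host memory with read/write access. */
static void on_write_request(struct pte *e, uint64_t buf_addr)
{
    if (e->state == PTE_READ_ONLY_SCM) {
        e->addr = buf_addr;
        e->state = PTE_READ_WRITE_BUF;
    }
}

/* The flush begins: access is blocked so stale data cannot be read or
 * modified while the block is in flight to SCM 120. */
static void on_flush_start(struct pte *e)
{
    if (e->state == PTE_READ_WRITE_BUF)
        e->state = PTE_UNAVAILABLE;
}

/* The flush completes: the entry returns to read-only and points back
 * into SCM 120, possibly at a new physical address from the device. */
static void on_flush_done(struct pte *e, uint64_t new_scm_addr)
{
    if (e->state == PTE_UNAVAILABLE) {
        e->addr = new_scm_addr;
        e->state = PTE_READ_ONLY_SCM;
    }
}

int main(void)
{
    struct pte e = { PTE_READ_ONLY_SCM, 0x1000 };
    on_write_request(&e, 0x2000);  /* first -> second state */
    on_flush_start(&e);            /* second -> third state */
    on_flush_done(&e, 0x3000);     /* third -> first state  */
    return e.state == PTE_READ_ONLY_SCM ? 0 : 1;
}
```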
An example process for setting up byte-addressable access to SCM 120 is described below with reference to blocks 502 to 506. The process may be performed by memory device interface 10 executed by processor circuitry 102 of host 101.
In block 502, memory device interface 10 accesses the BAR of SCM 120. In some implementations, control circuitry 112 of device 111 executing memory device interface 20 may expose a read-only BAR of SCM 120 to memory device interface 10 of host 101. This gives memory device interface 10 size and data type information for SCM 120 for mapping virtual addresses of host 101 to physical addresses of SCM 120, and enables direct memory access of SCM 120 by host 101 for read operations. In addition, device 111 in some implementations may also expose a read/write portion of the BAR that maps to memory 116.
In block 504, memory device interface 10 creates a page table including a plurality of entries corresponding to memory locations in SCM 120. In more detail, the entries in the page table correspond to the exposed BAR of device 111. The page table can include entries for different virtual addresses and the mapped physical addresses in SCM 120. In this regard, the created page table can include entries for virtual addresses or pages that allow memory device interface 10 to determine a physical location in SCM 120 of device 111 for byte-addressable data that is smaller than a page or block size. The created page table can also include an indication of the allowed access for the physical address, as in the case of page table 16 discussed above.
In block 506, memory device interface 10 sets the plurality of entries in the page table as read-only. As discussed above, data can be read from SCM 120 much more quickly than data of the same size can be written to SCM 120. The byte-addressable access to SCM 120 is therefore limited to read-only access, and write requests are instead redirected to block device interface 12 as discussed in more detail below.
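As one concrete (and assumed, Linux-specific) way a host could obtain such a read-only, byte-addressable mapping, the sysfs resource file for a PCIe BAR can be memory-mapped with PROT_READ, so that the OS creates read-only page table entries and stores to the mapping fault. The device path and BAR size below are placeholders.

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Placeholder PCIe device path; substitute the actual device. */
    const char *bar_path = "/sys/bus/pci/devices/0000:01:00.0/resource0";
    size_t bar_size = 1 << 20;   /* assumed 1 MiB BAR for illustration */

    int fd = open(bar_path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* PROT_READ asks the OS to create read-only page table entries,
     * so byte loads go straight to the device while stores fault. */
    const volatile uint8_t *scm = mmap(NULL, bar_size, PROT_READ,
                                       MAP_SHARED, fd, 0);
    if (scm == MAP_FAILED) { perror("mmap"); return 1; }

    uint8_t first = scm[0];      /* byte-addressable read from SCM */
    printf("first byte: %u\n", (unsigned)first);

    munmap((void *)scm, bar_size);
    close(fd);
    return 0;
}
```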
In block 602, memory device interface 10 or block device interface 12 receives a write request to write data corresponding to an entry in a page table (e.g., page table 16).
In block 604, data for the write request is written in buffer 107 of memory 106 of host 101 using block device interface 12. As discussed above, the write request to store byte-addressable data may be received by block device interface 12 of host 101 after redirection from memory device interface 10 or from another module, such as a portion of an OS of host 101. For example, in cases where the write request is initially received by memory device interface 10, the write request may trigger a fault handler that redirects the write request to block device interface 12.
The byte-addressable data written in buffer 107 for the write request received in block 602 may be aggregated into units of a page or block size, or a current version of the block or page including the byte-addressable data may be read from device 111 and stored in buffer 107 for performing the write request. As noted above, write operations take much longer to perform in SCM 120 than read operations for a given amount of data. Performing write requests in buffer 107 can result in fewer overall writes to SCM 120 and in completing the smaller intermediate writes faster in memory 106, improving the efficiency and performance of host 101 and device 111. Write requests for data that is already in units of a block or page size may also be buffered in memory 106 in some implementations to reduce the latency of performing the write operations. As noted above, the faster completion of write requests can allow processes and threads to continue execution rather than wait for data to be written to SCM 120. In other embodiments, block-addressable data may instead be written directly to SCM 120 without being deferred in memory 106. Such an arrangement may be preferred in cases where the size of memory 106 is limited.
In block 606, block device interface 12 changes the entry for the virtual address in the page table from read-only access to both read and write access. Block device interface 12 also changes the entry to point to the location or physical address in memory 106 where the data for the virtual address was written. As discussed above with reference to the page table entry state diagram, the entry is then in the second state, in which subsequent read and write requests for the virtual address are performed in buffer 107 of memory 106.
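The fault-driven redirection described in blocks 602 to 606 can be imitated in user space with a read-only mapping and a SIGSEGV handler. The following Linux C demo is a simplification for illustration: a real implementation would fetch the containing block into buffer 107 and update page table 16 in the handler, rather than merely upgrading page permissions.

```c
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

static size_t page_size;

/* Fault handler standing in for the redirection step: a real host would
 * fetch the containing block via the block device interface into buffer
 * 107 and update page table 16; this demo simply grants read/write access
 * to the faulting page so the write request can proceed in host memory. */
static void on_write_fault(int sig, siginfo_t *si, void *ctx)
{
    (void)sig; (void)ctx;
    uint8_t *p = (uint8_t *)((uintptr_t)si->si_addr & ~(page_size - 1));
    mprotect(p, page_size, PROT_READ | PROT_WRITE);
}

int main(void)
{
    page_size = (size_t)sysconf(_SC_PAGESIZE);

    /* Read-only mapping, like a page table entry in the first state. */
    uint8_t *page = mmap(NULL, page_size, PROT_READ,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (page == MAP_FAILED) return 1;

    struct sigaction sa = {0};
    sa.sa_sigaction = on_write_fault;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    page[0] = 42;   /* write request: faults, is redirected, then succeeds */
    printf("buffered write value: %u\n", (unsigned)page[0]);
    munmap(page, page_size);
    return 0;
}
```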
In block 702, a read request is received by memory device interface 10 at host 101 to read byte-addressable data corresponding to an entry in a page table (e.g., page table 16).
In block 704, memory device interface 10 uses the page table to determine whether the requested data is located in memory 106. If so, memory device interface 10 reads the data from memory 106 in block 706. On the other hand, if it is determined that the requested byte-addressable data is located in SCM 120, memory device interface 10 in block 708 sends a read command to memory device interface 20 of device 111 to directly access the requested data from SCM 120. Unlike data accesses performed by block device interface 12 of host 101, the read commands sent by memory device interface 10 of host 101 may not require use of the OS of host 101.
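A compact sketch of the lookup in blocks 704 to 708 follows (hypothetical structures; a real page table entry carries more state).

```c
#include <stdint.h>
#include <string.h>

enum location { IN_HOST_MEMORY, IN_SCM };

struct pt_entry {                 /* simplified page table 16 entry */
    enum location  where;
    const uint8_t *base;          /* buffer 107 address or BAR address */
};

/* Hypothetical read command handled by memory device interface 20:
 * with a memory-mapped BAR this is just a load from the mapping. */
static void scm_read_cmd(const uint8_t *bar_addr, void *dst, size_t len)
{
    memcpy(dst, bar_addr, len);
}

/* Blocks 704-708: consult the page table, then read locally or from SCM. */
static void read_bytes(const struct pt_entry *e, uint64_t offset,
                       void *dst, size_t len)
{
    if (e->where == IN_HOST_MEMORY)
        memcpy(dst, e->base + offset, len);        /* block 706 */
    else
        scm_read_cmd(e->base + offset, dst, len);  /* block 708 */
}

int main(void)
{
    uint8_t buffered[4] = {1, 2, 3, 4};
    uint8_t out = 0;
    struct pt_entry e = { IN_HOST_MEMORY, buffered };
    read_bytes(&e, 2, &out, 1);   /* reads 3 from the host-side buffer */
    return out == 3 ? 0 : 1;
}
```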
As discussed above, by allowing read-only access to a BAR of SCM 120, it is ordinarily possible to take advantage of the relatively quick read access of SCM 120 for byte-addressable data, without incurring the greater performance penalty of writing byte-addressable data to SCM 120. This can allow for a smaller main memory used by host 101 (e.g., memory 106) or a storage space savings for the host's main memory, which may be internal or external to host 101. As noted above, memory 106 can include a DRAM or SRAM in some implementations that can provide faster read and write access than SCM 120, but costs more and consumes more power for a given amount of data storage.
In block 802, block device interface 12 receives a write request to write byte-addressable data corresponding to an entry in a page table. The byte-addressable data to be written can include data within a page or block represented by the page table entry. Such write data may come from, for example, processes or threads executed by processor circuitry 102 that may flush or de-stage dirty cache lines from a cache of processor circuitry 102.
In block 804, block device interface 12 reads a block of data from SCM 120 for the block or page of data represented by the page table entry. The block or page is stored in buffer 107 of memory 106. In addition, block device interface 12 updates the page table to indicate that an entry or virtual address for the buffered block or page has read/write access and that the data for the entry is located at a physical address in memory 106.
In block 806, block device interface 12 modifies the byte-addressable data for the write request by writing the data to the block or page buffered in buffer 107. As noted above, additional write requests and read requests may also be performed on the same byte-addressable data or on other byte-addressable portions of the buffered block or page while the block or page is stored in memory 106.
In block 808, block device interface 12 indicates in the page table that the buffered block or page is unavailable for reading and writing in preparation for flushing the modified block to SCM 120. As noted above with respect to the page table entry state diagram, this corresponds to the third state and helps ensure consistency of the data while it is being flushed to SCM 120.
In block 810, block device interface 12 sends a write command to device 111 to flush or de-stage the modified block of data from buffer 107 to SCM 120. In some implementations, block device interface 12 may wait until a threshold number of blocks have been aggregated in buffer 107 or may wait a predetermined amount of time with no accesses to the data in a block before flushing the modified block or blocks to SCM 120 via block device interface 22 of device 111. In other cases, block device interface 12 may flush an aggregated block of data in response to reaching a block's worth of data in buffer 107, such as when new write data is collected in buffer 107 for a page or block that has not been previously stored in device 111. In yet other implementations, the flushing of a block or blocks of data from buffer 107 may depend on a remaining storage capacity of buffer 107. For example, block device interface 12 may flush one or more blocks of data from buffer 107 in response to reaching 80% of the storage capacity of buffer 107.
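These flush triggers can be combined into a single predicate, as in the following sketch; the 80% watermark comes from the example above, while the block-count threshold and idle timeout values are arbitrary placeholders.

```c
#include <time.h>

/* Example thresholds only; the disclosure mentions an 80% capacity
 * watermark and leaves the other policies to the implementation. */
#define FLUSH_BLOCK_THRESHOLD  8u
#define FLUSH_IDLE_SECONDS     2
#define BUFFER_CAPACITY_BLOCKS 32u
#define CAPACITY_WATERMARK_PCT 80u

struct buffer_state {
    unsigned dirty_blocks;   /* aggregated, unflushed blocks in buffer 107 */
    time_t   last_access;    /* last read or write touching the buffer */
};

/* Decide whether block device interface 12 should flush to SCM 120. */
static int should_flush(const struct buffer_state *b, time_t now)
{
    if (b->dirty_blocks >= FLUSH_BLOCK_THRESHOLD)
        return 1;                                   /* threshold reached  */
    if (b->dirty_blocks > 0 &&
        now - b->last_access >= FLUSH_IDLE_SECONDS)
        return 1;                                   /* buffer gone idle   */
    if (b->dirty_blocks * 100u >=
        BUFFER_CAPACITY_BLOCKS * CAPACITY_WATERMARK_PCT)
        return 1;                                   /* >= 80% of capacity */
    return 0;
}

int main(void)
{
    struct buffer_state b = { .dirty_blocks = 26, .last_access = time(NULL) };
    return should_flush(&b, time(NULL)) ? 0 : 1;   /* 26/32 >= 80% -> flush */
}
```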
In block 812, block device interface 12 sets the entry for the modified block of data as read-only in the page table in response to completion of the flushing operation. This corresponds to returning to the first state from the third state in the example state diagram discussed above.
In block 814, block device interface 12 or memory device interface 10 updates the entry for the flushed block of data in the page table to point to a location in SCM 120 where the block was flushed. In this regard, the new physical address of the data in SCM 120 may be received by block device interface 12 as part of the flush command completion indication, or may alternatively be received by memory device interface 10 via an update process of memory device interface 20 of device 111.
In block 902, control circuitry 112 uses block device interface 22 for receiving write commands from host 101 to write data in blocks to SCM 120. As discussed above, block device interface 22 is also used to read block-addressable data from SCM 120.
In addition, control circuitry 112 uses memory device interface 20 in block 904 for receiving read commands from host 101 to read byte-addressable data from SCM 120. The use of two interfaces at device 111 allows SCM 120 to be used by host 101 as a main memory for reading byte-addressable data and as a non-volatile storage for blocks of data.
In block 906, memory device interface 20 exposes a read-only BAR for SCM 120 to host 101. As noted above, the exposed BAR for device 111 may also include a read/write portion located in memory 116 of device 111. The BAR may be exposed via, for example, a PCIe bus or interconnect that allows commands for byte-addressable data using a memory device interface and commands for block-addressable data using a block device interface, such as NVMe. The use of a BAR can allow processor circuitry 102 at host 101 to create and update a page table that maps virtual addresses used by applications executed by processor circuitry 102 to physical addresses in SCM 120.
In block 1002, control circuitry 112 performs a write command received from host 101 via block device interface 22 by writing one or more blocks of data in SCM 120.
After receiving a confirmation of the completion of the write operation in SCM 120, control circuitry 112 updates memory device interface 20 in block 1004 with the physical addresses for the data written. In some implementations, the updated addresses are shared with host 101 via memory device interface 20 of device 111 so that memory device interface 10 at host 101 can update a page table.
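A write completion carrying the new byte-addressable locations might be represented as follows; this is a speculative layout for illustration, not a format defined by NVMe or by this disclosure.

```c
#include <stdint.h>

/* Speculative completion payload: where each flushed logical block now
 * lives in SCM, so the host can re-point its read-only page table entries. */
struct write_completion {
    uint32_t num_blocks;
    struct {
        uint64_t logical_block;   /* block addressed by the write command  */
        uint64_t scm_phys_addr;   /* new byte-addressable base in SCM 120  */
    } entries[8];
};

/* Host side: update page table 16 so byte reads target the new locations. */
static void apply_completion(const struct write_completion *c,
                             uint64_t *pt_phys /* indexed by logical block */)
{
    for (uint32_t i = 0; i < c->num_blocks; i++)
        pt_phys[c->entries[i].logical_block] = c->entries[i].scm_phys_addr;
}

int main(void)
{
    uint64_t pt[16] = {0};
    struct write_completion c = { 1, { { 3, 0x4000 } } };
    apply_completion(&c, pt);
    return pt[3] == 0x4000 ? 0 : 1;
}
```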
As discussed above, the use of an SCM for reading byte-addressable data and writing to the SCM in blocks can allow the SCM to replace at least some of a host's main memory, while reducing the effects of the SCM's greater write latency. In addition, the use of the host's main memory for temporarily buffering byte-addressable data that has been modified, and updating a page table for the buffered data, can help ensure that an old or obsolete version of the data is not read from the SCM.
Those of ordinary skill in the art will appreciate that the various illustrative logical blocks, modules, and processes described in connection with the examples disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. Furthermore, the foregoing processes can be embodied on a computer readable medium which causes processor or control circuitry to perform or execute certain functions.
To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, and modules have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Those of ordinary skill in the art may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The various illustrative logical blocks, units, modules, processor circuitry, and control circuitry described in connection with the examples disclosed herein may be implemented or performed with a general-purpose processor, a DSP, an ASIC, an FPGA or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general-purpose processor may be a microprocessor, but in the alternative, the circuitry may be any conventional processor, controller, microcontroller, or state machine. Processor or control circuitry may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, an SoC, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The activities of a method or process described in connection with the examples disclosed herein may be embodied directly in hardware, in a software module executed by processor or control circuitry, or in a combination of the two. The steps of the method or algorithm may also be performed in an alternate order from those provided in the examples. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable media, an optical media, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor or control circuitry such that the circuitry can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to processor or control circuitry. The circuitry and the storage medium may reside in an ASIC or an SoC.
The foregoing description of the disclosed example embodiments is provided to enable any person of ordinary skill in the art to make or use the embodiments in the present disclosure. Various modifications to these examples will be readily apparent to those of ordinary skill in the art, and the principles disclosed herein may be applied to other examples without departing from the spirit or scope of the present disclosure. The described embodiments are to be considered in all respects only as illustrative and not restrictive. In addition, the use of language in the form of “at least one of A and B” in the following claims should be understood to mean “only A, only B, or both A and B.”