File systems typically provide access to data using a page cache stored in a main memory of a host, such as in a Dynamic Random Access Memory (DRAM). The page cache provides a sequence of memory pages used for caching part of the file system's object content. In more detail, the page cache can be used for caching user data as well as metadata in a kernel space of an operating system executed by a host. This typically provides quicker access to the cached data for reading and writing than accessing the data from a Data Storage Device (DSD), such as a Hard Disk Drive (HDD) or Solid-State Drive (SSD).
Although the use of a page cache for accessing data may work well for conventional storage memory such as a rotating magnetic disk in an HDD or a NAND flash memory in an SSD, it can be inefficient for DSDs that include more recently developed Storage Class Memories (SCMs), due to the quicker access times of such SCMs and the operations required to maintain the page cache. Emerging SCMs can include, for example, Phase Change Memory (PCM), Magnetoresistive Random Access Memory (MRAM), or Resistive RAM (RRAM), which can perform read and write operations much faster than conventional memories such as a rotating magnetic disk or a NAND flash memory, and in some cases even faster than a main memory such as DRAM.
For example, a DRAM main memory may have a read latency of 50 nanoseconds and a write latency of 50 nanoseconds. Given that a read latency for a NAND flash secondary memory may be 25 microseconds and a write latency for the NAND flash secondary memory may be 500 microseconds, the use of a page cache in a DRAM main memory in such an example provides much quicker access to the cached data. The cost of maintaining and operating a page cache in DRAM, such as loading and flushing pages of data between the DRAM main memory and a conventional secondary memory such as a NAND flash memory, is outweighed by the quicker access time of the DRAM main memory as compared to the secondary memory.
As noted above, emerging SCMs have data access times significantly faster than conventional memories. For example, an MRAM SCM may have a read latency of 30 nanoseconds and a write latency of 30 nanoseconds. Other emerging SCMs such as RRAM may provide for even faster access times with a read latency of only 3 nanoseconds and a write latency of only 10 nanoseconds.
Despite these quicker access times for SCMs, accessing data from a DSD with an SCM can still take microseconds due to processing by the DSD and/or the host relating to the fixed block or page size currently used to access data. This fixed block or page size may be based on, for example, a smallest writable unit of a secondary memory of the DSD, such as a 512 byte sector size or a 4 KB page size. Accordingly, there is a need for a more efficient way to access data from DSDs that include SCMs to make better use of the faster access times of SCMs. Such DSDs can include, for example, hybrid SSDs that have both an SCM and a conventional memory such as a NAND flash memory.
The features and advantages of the embodiments of the present disclosure will become more apparent from the detailed description set forth below when taken in conjunction with the drawings. The drawings and the associated descriptions are provided to illustrate embodiments of the disclosure and not to limit the scope of what is claimed.
In the following detailed description, numerous specific details are set forth to provide a full understanding of the present disclosure. It will be apparent, however, to one of ordinary skill in the art that the various embodiments disclosed may be practiced without some of these specific details. In other instances, well-known structures and techniques have not been shown in detail to avoid unnecessarily obscuring the various embodiments.
Main memory 104 can be used by host 101 to store data used by processor 102. Data stored in main memory 104 can include instructions loaded from DSD 108 for execution by processor 102, and/or data used in executing instructions from applications, such as OS 12 or application 18. In some implementations, main memory 104 can be a volatile memory, such as a Dynamic Random Access Memory (DRAM).
OS 12 includes kernel 14 and DSD driver 16, and occupies a physical address space in main memory 104. Kernel 14 is a binary image that contains a set of pre-compiled drivers and is loaded into main memory 104 from DSD 108 when OS 12 is loaded. Kernel 14 includes instructions for OS 12 for managing resources of host 101 (e.g., memory allocation) and handling read and write requests from applications, such as application 18, for execution by processor 102. DSD driver 16 provides a software interface to DSD 108 and can include instructions for communicating with DSD 108 in accordance with the processes discussed below. Application 18 can include an application executed by processor 102 that reads and/or writes data in DSD 108.
DSD interface 106 is configured to interface host 101 with DSD 108, and may communicate with DSD 108 using a standard such as, for example, Serial Advanced Technology Attachment (SATA), PCI express (PCIe), Small Computer System Interface (SCSI), Serial Attached SCSI (SAS), Ethernet, Fibre Channel, or WiFi. In this regard, host 101 and DSD 108 may not be physically co-located and may communicate over a network such as a Local Area Network (LAN) or a Wide Area Network (WAN), such as the internet. In addition, DSD interface 106 may also interface with DSD 108 using a logical interface specification such as Non-Volatile Memory express (NVMe) or Advanced Host Controller Interface (AHCI) that may be implemented by DSD driver 16. As will be appreciated by those of ordinary skill in the art, DSD interface 106 can be included as part of processor 102.
Secondary memory 114 can include, for example, a rotating magnetic disk or non-volatile solid-state memory, such as flash memory. While the description herein refers to solid-state memory generally, it is understood that solid-state memory may comprise one or more of various types of memory devices such as flash integrated circuits, NAND memory (e.g., single-level cell (SLC) memory, multi-level cell (MLC) memory (i.e., two or more levels), or any combination thereof), NOR memory, EEPROM, other discrete Non-Volatile Memory (NVM) chips, or any combination thereof.
SCM 116 can include, for example, Chalcogenide RAM (C-RAM), Phase Change Memory (PCM), Programmable Metallization Cell RAM (PMC-RAM or PMCm), Ovonic Unified Memory (OUM), Resistive RAM (RRAM), Ferroelectric Memory (FeRAM), Magnetoresistive RAM (MRAM), or 3D-XPoint memory. SCM 116 has at least one of a faster read time and a faster write time for accessing data than secondary memory 114. DSD 108 can be considered a hybrid DSD in that it includes at least two different types of memory with secondary memory 114 and SCM 116.
In addition, SCM 116 is capable of storing data at a byte-addressable level, as opposed to other types of NVM that have a smallest writable data size, such as a page size of 4 KB or a sector size of 512 bytes. As discussed in more detail below, this can allow SCM 116 or a portion thereof (e.g., address space 24) to be used as an extension or replacement of main memory 104. In cases where main memory 104 is a DRAM, reducing the size of main memory 104 in this way can decrease the amount of power consumed by host 101 and by the overall system including host 101 and DSD 108.
SCM 116 also includes address space 24, which can serve as at least a portion of the address space used by processor 102 of host 101. Address space 24 can store data at a byte-addressable level that can be accessed by processor 102. DSD 108 may provide host 101 with an indication of address space 24. Host 101 may then associate an address range for address space 24 with DSD 108, along with an indication that this address range is to be used as a byte-addressable address space, such as for a page cache, for example. When accessing data in address space 24, OS 12 may provide a special command to DSD 108 that includes an address within the address range and a request, such as a load or store request. Control circuitry 110 of DSD 108 would then recognize the special command as being for address space 24 in SCM 116.
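As one illustration, the C sketch below shows what such a special load/store command might look like. The opcode values, field names, and layout are hypothetical and are not taken from any actual host/DSD interface; the description above only requires that the command carry an address within the byte-addressable range and a load or store request.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical command format for byte-addressable access to address
 * space 24; opcode values, field names, and sizes are illustrative
 * only and are not drawn from any actual interface specification. */
enum scm_opcode {
    SCM_OP_LOAD  = 0x01,  /* copy bytes from address space 24 to host */
    SCM_OP_STORE = 0x02   /* copy bytes from host to address space 24 */
};

struct scm_command {
    uint8_t  opcode;      /* SCM_OP_LOAD or SCM_OP_STORE */
    uint64_t offset;      /* byte offset within address space 24 */
    uint32_t length;      /* transfer length in bytes, not sectors */
    void    *host_buf;    /* host-side source or destination buffer */
};

/* Control circuitry 110 would dispatch on the opcode and recognize the
 * offset as falling within address space 24 rather than within a
 * sector-based logical address range. */
```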
In cases where PCIe is used for communication between host 101 and DSD 108, the byte-addressable access of SCM 116 can also eliminate the need for a page cache at host 101. Such implementations can reduce the overhead and delay associated with caching data in main memory 104, which may also have a greater read and/or write latency than SCM 116. In cases where SATA or SAS is used to access data from DSD 108, a page cache will still be used by OS 12 at host 101. However, the faster access of SCM 116 as compared to secondary memory 114 ordinarily allows for faster access of data at DSD 108. SCM 116 can then be used to store data such as file system metadata or cached user data that would otherwise be stored in the main memory of the host. This can allow for a smaller amount of main memory to be used at the host, which can reduce power consumption in cases where the main memory is a DRAM.
In addition, the byte-addressable nature of SCM 116 can allow for a delta-encoding technique when data stored in byte-addressable memory is changed. In such implementations, if host 101 and DSD 108 both have a piece of data stored in byte-addressable memory (i.e., in main memory 104 and in address space 24 of SCM 116), then main memory 104 and SCM 116 can exchange only the changed portions of the piece of data, or the differences between two states of the piece of data, rather than sending a whole sector's worth of data, for example. This difference or binary patch (e.g., an XOR difference) will typically be much smaller than the whole sector's worth of data, which can reduce the amount of data traffic and processing of data between host 101 and DSD 108.
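A minimal sketch of the XOR-difference idea follows, assuming both sides already hold equal-length copies of the data. A practical implementation would additionally run-length or sparse-encode the mostly-zero patch before transfer; that step is omitted here.

```c
#include <stddef.h>
#include <stdint.h>

/* Build patch = old XOR new over a region both sides already hold.
 * Unchanged byte positions produce zeros in the patch. */
static void xor_delta(const uint8_t *old_data, const uint8_t *new_data,
                      uint8_t *patch, size_t len)
{
    for (size_t i = 0; i < len; i++)
        patch[i] = old_data[i] ^ new_data[i];
}

/* Applying the patch to the old copy reproduces the new copy, since
 * old ^ (old ^ new) == new. */
static void xor_apply(uint8_t *data, const uint8_t *patch, size_t len)
{
    for (size_t i = 0; i < len; i++)
        data[i] ^= patch[i];
}
```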
In yet other implementations, host 101 may manage DSD 108 so that processor 102 can directly access address space 24 in SCM 116. For example, DSD 108 may provide logical to physical address translation information for SCM 116 to host 101, which can be called by host 101 and executed by control circuitry 110. In one example, metadata may be stored and remain in SCM 116 for access by processor 102. Other data, such as user data, may also be stored in SCM 116, but may be flushed to secondary memory 114 in response to a command from host 101 or a flag included in a command from host 101, as discussed in more detail below.
Data from secondary memory 114 may be cached in address space 24 using a page size for secondary memory 114 (e.g., 4 KB, 16 KB). In cases where PCIe is used for communication between host 101 and DSD 108 or in cases where SCM 116 is host-managed, processor 102 may access smaller portions of the cached data since SCM 116, unlike secondary memory 114, is byte-addressable. Processor 102 may directly access these portions of data in SCM 116 or directly perform operations on such data in address space 24, thereby reducing the amount of data being transferred between DSD 108 and host 101 as compared to conventional systems where a full page size of data would be loaded into a local main memory of host 101 (e.g., main memory 104) and flushed from host 101 to DSD 108.
In other implementations, such as when communication between host 101 and DSD 108 is through SATA or SAS, data from secondary memory 114 may be cached in address space 24 using a page size for reading data from secondary memory 114. In such implementations, processor 102 may access data in SCM 116 via a page cache in main memory 104. In some implementations, processor 102 may offload operations on data in address space 24 by requesting control circuitry 110 to perform the operations, thereby reducing the amount of data being transferred between DSD 108 and host 101 as compared to conventional systems where such data would be loaded into a local main memory of host 101 (e.g., main memory 104) for performing the operations and a result flushed from host 101 to DSD 108.
As will be appreciated by those of ordinary skill in the art, other implementations of DSD 108 and host 101 may include a different arrangement of data structures and/or components than those shown.
In some cases, application 18 or another application may only request a portion of a data extent that is smaller than a smallest writable or readable unit of secondary memory 114, such as less than a block size or sector size. In response, processor 102 may execute instructions to copy the block or sector including the requested data into address space 24 of SCM 116. The whole block or sector may be loaded into SCM 116, and only the requested portion of the block or sector may then be returned to host 101, rather than the entire block or sector loaded from secondary memory 114. This ordinarily reduces the amount of data that needs to be transferred between host 101 and DSD 108. In this regard, any copy operations between SCM 116 and main memory 104 or the one or more caches 103 may be made on a byte basis in implementations where host 101 has direct access to SCM 116 (e.g., with a PCIe interface between host 101 and DSD 108 or where SCM 116 is host-managed).
For example, if 64 bytes are requested, a whole sector of data may be read from secondary memory 114 and loaded into address space 24 of SCM 116 before returning the requested 64 bytes to host 101. In some cases, this may take longer than host 101 retrieving a 4 KB page from secondary memory 114 for the same request, but future requests for such data will be faster since the data can remain in address space 24 of SCM 116 without having to be copied again from secondary memory 114. Frequently accessed data, such as file system metadata, may only be stored in address space 24 of SCM 116 or may be prioritized for storage in address space 24 to significantly improve performance with repeated accesses of the cached data over time. In such implementations, mapping table 22 may indicate whether certain portions of data are already cached in address space 24 so that data requested by processor 102 can be accessed without having to load it from secondary memory 114 if the data has already been cached in SCM 116.
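The self-contained C sketch below simulates this flow, with in-memory arrays standing in for secondary memory 114, a single slot of address space 24, and the cached-or-not indication of mapping table 22. The single-slot cache and the sizes are simplifications for illustration only.

```c
#include <stdint.h>
#include <string.h>
#include <stdio.h>

#define SECTOR_SIZE 512

static uint8_t secondary[8][SECTOR_SIZE]; /* stands in for secondary memory 114 */
static uint8_t scm_slot[SECTOR_SIZE];     /* stands in for a slot in address space 24 */
static int cached_lba = -1;               /* stands in for mapping table 22's cached flag */

/* Return only the requested bytes, loading the full sector into the SCM
 * slot once so that later sub-sector requests avoid secondary memory. */
static void read_partial(int lba, size_t offset, size_t len, uint8_t *out)
{
    if (cached_lba != lba) {                      /* miss: one whole-sector load */
        memcpy(scm_slot, secondary[lba], SECTOR_SIZE);
        cached_lba = lba;
    }
    memcpy(out, scm_slot + offset, len);          /* only len bytes go to the host */
}

int main(void)
{
    uint8_t buf[64];
    memset(secondary[3], 0xAB, SECTOR_SIZE);
    read_partial(3, 128, 64, buf);  /* loads sector 3 into SCM, returns 64 bytes */
    read_partial(3, 0, 64, buf);    /* served from SCM, no secondary access */
    printf("first byte: 0x%02X\n", buf[0]);
    return 0;
}
```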
The foregoing use of SCM 116 can improve the performance of the system including host 101 and DSD 108 since latency is reduced by caching frequently accessed data, such as file system metadata, in SCM 116, and in some implementations, by also not having to use a page cache at host 101 to load data from DSD 108 into main memory 104 or write data from main memory 104 to secondary memory 114. The size of main memory 104 can also be reduced with use of address space 24 of SCM 116, which reduces the power usage of the system since SCM 116 uses less power than main memory 104 in the case where main memory 104 is a DRAM.
The unique identifiers can be stored in SCM 116 of DSD 108, while the data portions themselves are stored in secondary memory 114. In some implementations, all of the on-disk file system's metadata structures can be stored in SCM 116. The unique identifiers can be used to locate and access data portions in secondary memory 114. The sizes of the data portions can vary based on data access patterns of different applications executed by processor 102 of host 101, a file type for the data, and/or a size of the data.
In addition to their respective SCMs, each of DSDs 108, 208, and 308 includes a secondary memory, shown as secondary memories 114, 214, and 314, respectively.
When host 101 requests a data portion n that is not already available in SCM 116, portion n is retrieved from secondary memory 114 and loaded into SCM 116. Control circuitry 110 may create reserved stream 34 in SCM 116, based on the request for data portion n from host 101, for sending the requested data to host 101. Processor 102 may then directly access the requested portion n in reserved stream 34 from SCM 116.
In block 602, at least a portion of SCM 116 is provided by control circuitry 110 to serve as at least a portion of an address space of processor 102 of host 101. The provided address space corresponds to address space 24 discussed above, which processor 102 can access at a byte-addressable level.
In block 604, DSD 108 receives an instruction from processor 102 for data smaller than a smallest writable unit of secondary memory 114. For example, the instruction from processor 102 may include a request to load such data into address space 24, or a store request from processor 102 to evict data previously loaded into address space 24 to secondary memory 114. When evicting such data, the data may be treated as volatile data that is not stored or flushed to secondary memory 114, or the evicted data may be treated as persistent data so that changes to the data are stored or flushed to secondary memory 114. In other examples, and as discussed in more detail below, the instruction may be an operation instruction for performing an operation on data stored in secondary memory 114.
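A toy C sketch of that eviction choice is shown below, with a hypothetical persistent flag on the evict request deciding whether a flush to secondary memory 114 occurs. The flag name and request layout are invented for illustration, and the flush is simulated with a print statement.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct evict_request {
    uint64_t offset;     /* location of the data within address space 24 */
    uint32_t len;
    bool     persistent; /* hypothetical flag set by host 101 */
};

static void flush_to_secondary(uint64_t off, uint32_t len)
{
    /* stands in for a write of the evicted bytes to secondary memory 114 */
    printf("flush %u bytes at offset %llu to secondary memory\n",
           (unsigned)len, (unsigned long long)off);
}

static void handle_evict(const struct evict_request *req)
{
    if (req->persistent)
        flush_to_secondary(req->offset, req->len); /* persistent: changes kept */
    /* volatile data is simply dropped; no flush occurs */
}

int main(void)
{
    struct evict_request a = { 4096, 64, true  };
    struct evict_request b = { 8192, 64, false };
    handle_evict(&a);  /* flushed */
    handle_evict(&b);  /* discarded */
    return 0;
}
```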
In block 606, data for the instruction is accessed (i.e., written or read) in the at least a portion of SCM 116 serving as at least a portion of the address space of processor 102 (e.g., address space 24) in response to the instruction received in block 604. In some cases, the data may be accessed in address space 24 in response to a read request from processor 102 to provide processor 102 with the requested data. In other cases, the data may be accessed to flush it to secondary memory 114 with or without keeping a copy of the flushed data in address space 24. In yet other cases, operations or transformations of the data may be performed by control circuitry 110 while the data is stored in SCM 116.
In block 608, control circuitry 110 also receives commands from processor 102 to read or write data in secondary memory 114 in data sizes greater than the smallest writable unit of secondary memory 114. Control circuitry 110 may optionally perform such commands without using SCM 116. In this regard, the use of SCM 116 ordinarily allows DSD 108 to act similarly to a main memory of host 101 with a byte-addressable address space (e.g., address space 24), while secondary memory 114 remains accessible through conventional block-sized commands.
In block 702, data is divided into differently sized portions to be stored in secondary memory 114. The sizes of the data portions can be based on, for example, at least one of data access patterns of different applications executed by processor 102, a file type for the data, and a total size of the data to be stored.
The division of data into differently sized portions can allow, for example, the access of data in units that vary from a fixed page or block size (e.g., 4 KB or 512 bytes) as typically used in current systems. This can provide either more granular data access, in units smaller than a conventional block size, or less metadata, in cases where the data portion is larger than a conventional block size, since more data can then be referenced by a given amount of metadata. In this regard, different applications executed by processor 102 may access data in different sizes. For example, application 18 may generate relatively large amounts of data as compared to other applications executed by processor 102, so larger sized portions may be used for data generated by application 18. This would result in less metadata being generated for the data of application 18 than if a smaller fixed block size were used.
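As a sketch of how such a sizing policy might look, the C function below picks a portion size from an assumed access-pattern hint and the total data size. The categories and the specific sizes (256 bytes, 4 KB, 64 KB) are invented for illustration; block 702 only requires that portion sizes may differ for these kinds of reasons.

```c
#include <stddef.h>

/* Hypothetical access-pattern hint for an application or file type. */
enum access_pattern { ACCESS_SMALL_RANDOM, ACCESS_LARGE_SEQUENTIAL };

static size_t choose_portion_size(enum access_pattern pattern,
                                  size_t total_size)
{
    if (pattern == ACCESS_SMALL_RANDOM)
        return 256;            /* finer-grained than a 512-byte sector */
    if (total_size >= (1u << 20))
        return 64 * 1024;      /* fewer, larger portions -> less metadata */
    return 4096;               /* fall back to a page-sized portion */
}
```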
In block 704, a unique identifier (UID) or fingerprint is calculated for each of the differently sized portions from block 702. As noted above, the unique identifier may be calculated by processor 102 or by control circuitry 110. In some implementations, a file system executed at host 101 may calculate the UID and store the UID with the file system's metadata. In other implementations, the metadata structure including the UID (e.g., tree structure 32) may instead be maintained at DSD 108.
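For illustration, the C function below computes a 64-bit FNV-1a hash as a stand-in fingerprint. The description here does not specify a particular hash, and a system relying on the UID for deduplication would likely use a cryptographic digest such as SHA-256 to make collisions negligible.

```c
#include <stdint.h>
#include <stddef.h>

/* 64-bit FNV-1a over a data portion; a simple, fast stand-in for
 * whatever fingerprint function a real implementation would choose. */
static uint64_t portion_uid(const uint8_t *data, size_t len)
{
    uint64_t h = 0xcbf29ce484222325ULL;   /* FNV offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 0x100000001b3ULL;            /* FNV prime */
    }
    return h;
}
```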
In block 706, the unique identifiers are stored in a tree structure (e.g., tree structure 32) that is used by OS 12 executed by processor 102, as discussed above.
In block 708, a mapping is stored in SCM 116 (e.g., in mapping table 22) associating the unique identifiers with the locations of the corresponding data portions in secondary memory 114.
In addition, mapping table 22 in some implementations may also include an indication of whether the data portion is stored or cached in SCM 116. In block 708, control circuitry 110 of DSD 108 may optionally store such an indication of whether the corresponding data portions are stored in SCM 116 in addition to being stored in secondary memory 114. In some cases, control circuitry 110 may keep some or all of the data portions in SCM 116 for future read access by processor 102 in accordance with a caching policy. The caching policy may be based on, for example, a priority and/or a frequency of access for the respective portions of data.
The storage of the data portions in secondary memory 114 may also be deferred in some cases to more efficiently schedule write operations, or to allow for the modification of the data by processor 102, as discussed in more detail below.
In block 712, control circuitry 110 optionally checks whether the unique identifier for each data portion is already included in mapping table 22, which would indicate that a duplicate of the data portion is already stored in secondary memory 114 and/or SCM 116. If so, control circuitry 110 may prevent the storage of the data portion corresponding to the matching unique identifier. This can ordinarily prevent duplicates of data portions from being stored in DSD 108, which can conserve storage space and reduce wear on secondary memory 114 and SCM 116.
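A toy version of that duplicate check appears below. A flat array and linear scan stand in for mapping table 22, and the 64-bit UID width is assumed; real control circuitry would use an indexed structure and a larger table.

```c
#include <stdint.h>
#include <stdbool.h>

#define TABLE_CAP 128

struct mapping_entry { uint64_t uid; uint64_t location; };

static struct mapping_entry table[TABLE_CAP]; /* stands in for mapping table 22 */
static int table_len;

/* Returns false if the UID is already present, suppressing a duplicate
 * write; returns true after recording a new portion's location. */
static bool store_if_new(uint64_t uid, uint64_t location)
{
    for (int i = 0; i < table_len; i++)
        if (table[i].uid == uid)
            return false;      /* duplicate: skip the write entirely */
    if (table_len == TABLE_CAP)
        return false;          /* table full; real code would evict or grow */
    table[table_len].uid = uid;
    table[table_len].location = location;
    table_len++;
    return true;               /* new portion: caller stores it */
}
```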
In block 802, DSD 108 receives an operation instruction from processor 102 to perform an operation on data stored in secondary memory 114. Such operations can include, for example, searching for data that meets a particular search condition and applying a mathematical function or operation to the data that meets the search condition.
In block 804, control circuitry 110 loads the data for the operation from secondary memory 114 into address space 24 of SCM 116. In block 806, the operation for the instruction is performed on the data loaded into SCM 116. In some implementations, control circuitry 110 of DSD 108 may perform the operation. In implementations where processor 102 of host 101 can directly access the data stored in SCM 116, such as with PCIe or where SCM 116 is host-managed, processor 102 may perform the operation on the data stored in SCM 116.
The result from performing the operation is optionally sent to processor 102 in block 808 for implementations where the operation is performed by control circuitry 110 of DSD 108. The result may include, for example, one or more numerical values or an indication of a logical result from performing the operation (e.g., true or false). By performing the operation through SCM 116 as opposed to through a main memory, such as main memory 104, it is ordinarily possible to reduce the size of main memory 104. In cases where main memory 104 is a DRAM, this can significantly reduce the power requirements of the system. In addition, by only sending the result of the operation from SCM 116 to processor 102 in cases where performance of the operation is offloaded to control circuitry 110, it is ordinarily possible to reduce the amount of data that would otherwise need to be transferred from DSD 108 to host 101 to perform the operation from main memory 104 of host 101. This reduction in data traffic can allow for more data to be transferred for other operations or data accesses at a given time.
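To make the offload concrete, the C sketch below shows the kind of operation described, scanning values assumed to have already been loaded into address space 24 and returning only a small result to the host. The specific condition and function (summing values above a threshold) are invented examples, not an operation named by this description.

```c
#include <stdint.h>
#include <stddef.h>

/* Small result returned to host 101 in place of the scanned data. */
struct op_result { uint64_t sum; uint32_t matches; };

/* Scan values residing in address space 24 (here, a plain array) and
 * apply a mathematical function to those meeting the search condition. */
static struct op_result sum_above_threshold(const uint32_t *vals,
                                            size_t n, uint32_t threshold)
{
    struct op_result r = { 0, 0 };
    for (size_t i = 0; i < n; i++) {
        if (vals[i] > threshold) { /* search condition */
            r.sum += vals[i];      /* function applied to matches */
            r.matches++;
        }
    }
    return r;                      /* only this result crosses to host 101 */
}
```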
As discussed above, the foregoing arrangements of a hybrid DSD including an SCM, and the use of unique identifiers for accessing portions of data, provide more efficient access to data than conventional systems using a less granular or fixed page size. In addition, a byte-addressable SCM, when combined with the disclosed unique identifiers, can allow for direct access of the SCM by a host processor, which can replace the use of a page cache and the associated resources needed for the page cache.
Other Embodiments
Those of ordinary skill in the art will appreciate that the various illustrative logical blocks, modules, and processes described in connection with the examples disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. Furthermore, the foregoing processes can be embodied on a computer readable medium which causes a processor or control circuitry to perform or execute certain functions.
To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, and modules have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Those of ordinary skill in the art may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
The various illustrative logical blocks, units, and modules described in connection with the examples disclosed herein may be implemented or performed with a processor or control circuitry, such as, for example, a Central Processing Unit (CPU), a Microprocessor Unit (MPU), a Microcontroller Unit (MCU), or a DSP, and can include, for example, an FPGA, an ASIC, or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor or control circuitry may also be implemented as a combination of computing devices, e.g., a combination of a DSP and an MPU, a plurality of MPUs, one or more MPUs in conjunction with a DSP core, or any other such configuration. In some implementations, the control circuitry or processor may form at least part of an SoC.
The activities of a method or process described in connection with the examples disclosed herein may be embodied directly in hardware, in a software module executed by a processor or control circuitry, or in a combination of hardware and software. The steps of the method or algorithm may also be performed in an alternate order from those provided in the examples. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, other types of solid state memory, registers, hard disk, removable media, optical media, or any other form of storage medium known in the art. An exemplary storage medium is coupled to a processor or a controller such that the processor or control circuitry can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor or the control circuitry.
The foregoing description of the disclosed example embodiments is provided to enable any person of ordinary skill in the art to make or use the embodiments in the present disclosure. Various modifications to these examples will be readily apparent to those of ordinary skill in the art, and the principles disclosed herein may be applied to other examples without departing from the spirit or scope of the present disclosure. The described embodiments are to be considered in all respects only as illustrative and not restrictive. In addition, the use of language in the form of “at least one of A and B” in the following claims should be understood to mean “only A, only B, or both A and B.”
This application is a continuation of application Ser. No. 16/196,077, filed on Nov. 20, 2018, titled “DATA ACCESS IN DATA STORAGE DEVICE INCLUDING STORAGE CLASS MEMORY”, the contents of which are hereby incorporated by reference in their entirety.