Data access in data storage device including storage class memory

Information

  • Patent Grant
  • Patent Number
    11,169,918
  • Date Filed
    Monday, July 6, 2020
  • Date Issued
    Tuesday, November 9, 2021
Abstract
A device includes a Storage Class Memory (SCM) and a secondary memory having at least one of a greater read latency and a greater write latency than the SCM. At least a portion of the SCM is provided as an address space of a processor. An SCM smallest writable unit for writing data in the SCM is smaller than a secondary memory smallest writable unit for writing data in the secondary memory. An operation instruction is received from the processor to perform an operation on data stored in the secondary memory. The data is loaded from the secondary memory into the SCM for performance of the operation.
Description
BACKGROUND

File systems typically provide access to data using a page cache stored in a main memory of a host, such as in a Dynamic Random Access Memory (DRAM). The page cache can provide a sequence of memory pages used for caching some part of the file system's object content. In more detail, the page cache can be used for caching user data as well as for metadata in a kernel space of an operating system executed by a host. This typically provides quicker access to the cached data for reading and writing than accessing the data from a Data Storage Device (DSD), such as a Hard Disk Drive (HDD) or Solid-State Drive (SSD).


Although the use of a page cache may work well for accessing data in conventional storage memory, such as a rotating magnetic disk in an HDD or a NAND flash memory in an SSD, it can be inefficient for DSDs that include more recently developed Storage Class Memories (SCMs), due to the quicker access times of such SCMs and the operations required to maintain the page cache. Emerging SCMs can include, for example, Phase Change Memory (PCM), Magnetoresistive Random Access Memory (MRAM), or Resistive RAM (RRAM), which can perform read and write operations much faster than conventional memories such as a rotating magnetic disk or a NAND flash memory, and in some cases, even faster than a main memory such as DRAM.


For example, a DRAM main memory may have a read latency of 50 nanoseconds and a write latency of 50 nanoseconds. Given that a read latency for a NAND flash secondary memory may be 25 microseconds and a write latency for the NAND flash secondary memory may be 500 microseconds, the use of a page cache in a DRAM main memory in such an example provides quicker access to the cached data. The cost of maintaining and operating a page cache in DRAM, such as loading and flushing pages of data between the DRAM main memory and a conventional secondary memory such as a NAND flash memory, is outweighed by the quicker access time of the DRAM main memory as compared to the secondary memory.


As noted above, emerging SCMs have data access times significantly faster than conventional memories. For example, an MRAM SCM may have a read latency of 30 nanoseconds and a write latency of 30 nanoseconds. Other emerging SCMs such as RRAM may provide for even faster access times with a read latency of only 3 nanoseconds and a write latency of only 10 nanoseconds.


Despite these quicker access times for SCMs, accessing data from a DSD with an SCM can still take microseconds due to processing by the DSD and/or the host relating to the fixed block or page size currently used to access data. This fixed block or page size may be based on, for example, a smallest writable unit of a secondary memory of the DSD, such as a 512 byte sector size or a 4 KB page size. Accordingly, there is a need for a more efficient way to access data from DSDs that include SCMs to make better use of the faster access times of SCMs. Such DSDs can include, for example, hybrid SSDs that have both an SCM and a conventional memory such as a NAND flash memory.





BRIEF DESCRIPTION OF THE DRAWINGS

The features and advantages of the embodiments of the present disclosure will become more apparent from the detailed description set forth below when taken in conjunction with the drawings. The drawings and the associated descriptions are provided to illustrate embodiments of the disclosure and not to limit the scope of what is claimed.



FIG. 1 is a block diagram of a host and a Data Storage Device (DSD) according to an embodiment.



FIG. 2 illustrates an example of use of a Storage Class Memory (SCM) by a processor according to an embodiment.



FIG. 3 illustrates an example of associating portions of data stored in a secondary memory of a DSD with unique identifiers according to an embodiment.



FIG. 4 illustrates an example of a plurality of DSDs each with an SCM configured as part of an aggregate address space according to an embodiment.



FIG. 5 provides an example of a data access operation according to an embodiment.



FIG. 6 is a flowchart for a data access process according to an embodiment.



FIG. 7 is a flowchart for a unique identifier process according to an embodiment.



FIG. 8 is a flowchart for a data operation process within an SCM of a DSD according to an embodiment.





DETAILED DESCRIPTION

In the following detailed description, numerous specific details are set forth to provide a full understanding of the present disclosure. It will be apparent, however, to one of ordinary skill in the art that the various embodiments disclosed may be practiced without some of these specific details. In other instances, well-known structures and techniques have not been shown in detail to avoid unnecessarily obscuring the various embodiments.


System Overview Examples


FIG. 1 is a block diagram of host 101 and Data Storage Device (DSD) 108 according to an embodiment. As shown in FIG. 1, host 101 includes processor 102, main memory 104, and DSD interface 106. Processor 102 can execute instructions, such as instructions from Operating System (OS) 12 or application 18. Processor 102 can include circuitry such as a microcontroller, a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), hard-wired logic, analog circuitry and/or a combination thereof. In some implementations, processor 102 can include a System on a Chip (SoC), which may be combined with one or both of main memory 104 and DSD interface 106.


Main memory 104 can be used by host 101 to store data used by processor 102. Data stored in main memory 104 can include data instructions loaded from DSD 108 for execution by processor 102, and/or data used in executing instructions from applications, such as OS 12 or application 18. In some implementations, main memory 104 can be a volatile memory, such as a Dynamic Random Access Memory (DRAM).


In the example of FIG. 1, main memory 104 is configured to store OS 12 and application 18. OS 12 creates a byte-addressable, virtual address space for application 18 and other processes executed by processor 102 that maps to locations in main memory 104 for receiving data from files stored in DSD 108. Main memory 104 may be used by OS 12 when executing a process or a thread, such as a subset of instructions in a process.


OS 12 includes kernel 14 and DSD driver 16, and occupies a physical address space in main memory 104. Kernel 14 is a binary image that contains a set of pre-compiled drivers and is loaded into main memory 104 from DSD 108 when OS 12 is loaded. Kernel 14 includes instructions for OS 12 for managing resources of host 101 (e.g., memory allocation) and handling read and write requests from applications, such as application 18, for execution by processor 102. DSD driver 16 provides a software interface to DSD 108 and can include instructions for communicating with DSD 108 in accordance with the processes discussed below. Application 18 can include an application executed by processor 102 that reads and/or writes data in DSD 108.


DSD interface 106 is configured to interface host 101 with DSD 108, and may communicate with DSD 108 using a standard such as, for example, Serial Advanced Technology Attachment (SATA), PCI express (PCIe), Small Computer System Interface (SCSI), Serial Attached SCSI (SAS), Ethernet, Fibre Channel, or WiFi. In this regard, host 101 and DSD 108 may not be physically co-located and may communicate over a network such as a Local Area Network (LAN) or a Wide Area Network (WAN), such as the internet. In addition, DSD interface 106 may also interface with DSD 108 using a logical interface specification such as Non-Volatile Memory express (NVMe) or Advanced Host Controller Interface (AHCI) that may be implemented by DSD driver 16. As will be appreciated by those of ordinary skill in the art, DSD interface 106 can be included as part of processor 102.


As shown in FIG. 1, DSD 108 includes host interface 112, control circuitry 110, secondary memory 114, and Storage Class Memory (SCM) 116. Host interface 112 is configured to interface with host 101 and may communicate with host 101 using a standard such as, for example, SATA, PCIe, SCSI, SAS, Ethernet, Fibre Channel, or WiFi. Control circuitry 110 can include circuitry for executing instructions, such as instructions from DSD firmware 20. In this regard, control circuitry 110 can include circuitry such as one or more processors for executing instructions and can include a microcontroller, a DSP, an ASIC, an FPGA, hard-wired logic, analog circuitry and/or a combination thereof. In some implementations, control circuitry 110 can include an SoC, which may be combined with host interface 112, for example.


Secondary memory 114 can include, for example, a rotating magnetic disk or non-volatile solid-state memory, such as flash memory. While the description herein refers to solid-state memory generally, it is understood that solid-state memory may comprise one or more of various types of memory devices such as flash integrated circuits, NAND memory (e.g., single-level cell (SLC) memory, multi-level cell (MLC) memory (i.e., two or more levels), or any combination thereof), NOR memory, EEPROM, other discrete Non-Volatile Memory (NVM) chips, or any combination thereof.


SCM 116 can include, for example, Chalcogenide RAM (C-RAM), Phase Change Memory (PCM), Programmable Metallization Cell RAM (PMC-RAM or PMCm), Ovonic Unified Memory (OUM), Resistive RAM (RRAM), Ferroelectric Memory (FeRAM), Magnetoresistive RAM (MRAM), or 3D-XPoint memory. SCM 116 has at least one of a faster read time and a faster write time for accessing data than secondary memory 114. DSD 108 can be considered a hybrid DSD in that it includes at least two different types of memory with secondary memory 114 and SCM 116.


In addition, SCM 116 is capable of storing data at a byte-addressable level, as opposed to other types of NVM that have a smallest writable data size such as a page size of 4 KB or a sector size of 512 Bytes. As discussed in more detail below, this can allow SCM 116 or a portion thereof (e.g., address space 24) to be used as an extension or replacement of main memory 104. In cases where main memory 104 is a DRAM, the resulting reduction in the size of main memory 104 can decrease the amount of power consumed by host 101 and the power consumption of the overall system including host 101 and DSD 108.


As shown in the example of FIG. 1, SCM 116 can store DSD firmware 20, mapping table 22, and address space 24. As discussed in more detail below, control circuitry 110 can associate portions of data stored in secondary memory 114 with unique identifiers for the portions of data that are calculated using the portions of data. The unique identifiers may be stored in SCM 116 and used by OS 12 to access data from files. For example, mapping table 22 can provide a mapping of the unique identifiers with indications of physical locations (e.g., Physical Block Addresses (PBAs)) where the corresponding portions of data are stored in secondary memory 114.


SCM 116 also includes address space 24 which can serve as at least a portion of the address space used by processor 102 of host 101. Address space 24 can store data at a byte-addressable level that can be accessed by processor 102. DSD 108 may provide host 101 with an indication of address space 24. Host 101 may then associate an address range for address space 24 with DSD 108 and an indication that this address range is to be used as a byte-addressable address space, such as for a page cache, for example. When accessing data in address space 24, OS 12 may provide a special command to DSD 108 that includes an address for the address range and a request, such as a load or store request. Control circuitry 110 of DSD 108 would then recognize the special command as being for address space 24 in SCM 116.
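The exact format of such a special command is implementation dependent; the following C sketch is a purely illustrative assumption of a byte-addressable load/store command for address space 24, with the field names, widths, and handler invented for this example rather than taken from any particular interface specification.

#include <stdint.h>
#include <string.h>

/* Hypothetical load/store command for the byte-addressable range exposed
 * as address space 24; the field names and widths are assumptions made
 * only for illustration. */
enum scm_op { SCM_LOAD, SCM_STORE };

struct scm_cmd {
    enum scm_op op;      /* load or store request                      */
    uint64_t    offset;  /* byte offset within address space 24        */
    uint32_t    len;     /* byte count, need not be a sector multiple  */
    uint8_t    *buf;     /* host-side buffer                           */
};

/* Control circuitry 110 recognizes the command as targeting SCM 116 and
 * copies bytes directly, with no sector- or page-sized read-modify-write. */
static void handle_scm_cmd(uint8_t *address_space_24, struct scm_cmd *c)
{
    if (c->op == SCM_LOAD)
        memcpy(c->buf, address_space_24 + c->offset, c->len);
    else
        memcpy(address_space_24 + c->offset, c->buf, c->len);
}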


In cases where PCIe is used for communication between host 101 and DSD 108, the byte-addressable access of SCM 116 can also eliminate the need for a page cache at host 101. Such implementations can reduce the overhead and delay associated with caching data in main memory 104, which may also have a greater read and/or write latency than SCM 116. In cases where SATA or SAS is used to access data from DSD 108, a page cache will still be used by OS 12 at host 101. However, the faster access of SCM 116 as compared to secondary memory 114 ordinarily allows for faster access of data at DSD 108. SCM 116 can then be used to store data such as file system metadata or cached user data that would otherwise be stored in the main memory of the host. This can allow for a smaller amount of main memory to be used at the host, which can reduce power consumption in cases where the main memory is a DRAM.


In addition, the byte-addressable nature of SCM 116 can allow for a delta-encoding technique when data stored in byte-addressable memory is changed. In such implementations, if host 101 and DSD 108 have a piece of data stored in byte-addressable memory (i.e., in main memory 104 and in address space 24 of SCM 116), then main memory 104 and SCM 116 can exchange only the changed portions for the piece of data or the differences between two states of the piece of data, rather than sending a whole sector's worth of data, for example. This difference or binary patch (e.g., an XOR difference) will be much smaller than the whole sector's worth of data, which can reduce the amount of data traffic and processing of data between host 101 and DSD 108.
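As one illustration of such a delta encoding, the C sketch below computes an XOR difference between an old and a new copy of a piece of data and applies it on the receiving side; the function names and the assumption of equally sized buffers are made only for this example and are not part of any particular host or device interface.

#include <stddef.h>
#include <stdint.h>

/* XOR difference between two equally sized copies of a piece of data.
 * Only bytes that changed are non-zero in the patch, so the patch can be
 * sparsely or run-length encoded and transferred instead of the whole
 * sector's worth of data. */
static void xor_delta(const uint8_t *old_data, const uint8_t *new_data,
                      uint8_t *patch, size_t len)
{
    for (size_t i = 0; i < len; i++)
        patch[i] = old_data[i] ^ new_data[i];
}

/* Applying the same XOR patch to the old copy reproduces the new copy. */
static void xor_apply(uint8_t *data, const uint8_t *patch, size_t len)
{
    for (size_t i = 0; i < len; i++)
        data[i] ^= patch[i];
}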


In yet other implementations, host 101 may manage DSD 108 so that processor 102 can directly access address space 24 in SCM 116. For example, DSD 108 may provide logical to physical address translation information for SCM 116 to host 101, which can be called by host 101 and executed by control circuitry 110. In one example, metadata may be stored and remain in SCM 116 for access by processor 102. Other data such as user data may also be stored in SCM 116, but may be flushed to secondary memory 114 in response to a command from host 101 or a flag included in a command from host 101. As discussed below in more detail with reference to FIG. 3, a unique identifier (i.e., UID) for the flushed data may be used by DSD 108 with mapping table 22 to identify a physical address for storing the flushed data in secondary memory 114.


Data from secondary memory 114 may be cached in address space 24 using a page size for secondary memory 114 (e.g., 4 KB, 16 KB). In cases where PCIe is used for communication between host 101 and DSD 108 or in cases where SCM 116 is host-managed, processor 102 may access smaller portions of the cached data since SCM 116, unlike secondary memory 114, is byte-addressable. Processor 102 may directly access these portions of data in SCM 116 or directly perform operations on such data in address space 24, thereby reducing the amount of data being transferred between DSD 108 and host 101 as compared to conventional systems where a full page size of data would be loaded into a local main memory of host 101 (e.g., main memory 104) and flushed from host 101 to DSD 108.


In other implementations, such as when communication between host 101 and DSD 108 is through SATA or SAS, data from secondary memory 114 may be cached in address space 24 using a page size for reading data from secondary memory 114. In such implementations, processor 102 may access data in SCM 116 via a page cache in main memory 104. In some implementations, processor 102 may offload operations on data in address space 24 by requesting control circuitry 110 to perform the operations, thereby reducing the amount of data being transferred between DSD 108 and host 101 as compared to conventional systems where such data would be loaded into a local main memory of host 101 (e.g., main memory 104) for performing the operations and a result flushed from host 101 to DSD 108.


As will be appreciated by those of ordinary skill in the art, other implementations of DSD 108 and host 101 may include a different arrangement of data structures and/or components than those shown in FIG. 1. For example, in some implementations, the components of DSD 108 and host 101 may be housed in a single device, as opposed to having a separate host and DSD. In yet other implementations, some of the components of host 101 and DSD 108 may be housed together, such as where processor 102 and SCM 116 may be in a single device with secondary memory 114 attached.



FIG. 2 illustrates an example of use of SCM 116 by processor 102 according to an embodiment. As shown in FIG. 2, processor 102 can include one or more cache levels 103 where data is loaded from, evicted, or flushed to main memory 104 and SCM 116. Such data can include, for example, portions of code and related data being processed by processor 102. File system metadata may also be accessed by the one or more cache levels 103, such as portions of an inodes table (a table that stores information about files and directories, such as ownership, access mode, and file type), dentries table (a table that stores directory entries that relate inode numbers for files to their filenames and parent directories), and an extents table (a table that indicates ranges storing a file). In addition, data extents requested by processor 102 and loaded into address space 24 can be accessed directly by the one or more cache levels 103 by bypassing main memory 104 in implementations where PCIe is used for communication between host 101 and DSD 108 or where SCM 116 is host-managed. In cases where the versions of data or metadata stored in main memory 104 and SCM 116 are the same, the one or more cache levels 103 may access the data or metadata from main memory 104 or directly from SCM 116 by bypassing main memory 104.


In some cases, application 18 or another application may only request a portion of a data extent that is smaller than a smallest writable or readable unit of secondary memory 114, such as less than a block size or sector size. In response, processor 102 may execute instructions to copy the block or sector including the requested data into address space 24 of SCM 116. The whole block or sector may be loaded into SCM 116, and only the requested portion of the block or sector may then be returned to host 101, rather than the entire block or sector read from secondary memory 114. This ordinarily reduces the amount of data that needs to be transferred between host 101 and DSD 108. In this regard, any copy operations between SCM 116 and main memory 104 or the one or more cache levels 103 may be made on a byte basis in implementations where host 101 has direct access to SCM 116 (e.g., with a PCIe interface between host 101 and DSD 108 or where SCM 116 is host-managed).


For example, if 64 bytes are requested, a whole sector of data may be read from secondary memory 114 and loaded into address space 24 of SCM 116 before returning the requested 64 bytes to host 101. In some cases, this may take longer than host 101 retrieving a 4 KB page from secondary memory 114 for the same request, but future requests for such data will be faster since the data can remain in address space 24 of SCM 116 without having to copy it from secondary memory 114. Frequently accessed data, such as file system metadata, may only be stored in address space 24 of SCM 116 or may be prioritized for storage in address space 24 to significantly improve performance with repeated accesses of the cached data over time. In such implementations, mapping table 22 may indicate whether certain portions of data are already cached in address space 24 so that the data requested by processor 102 can be accessed without having to load it from secondary memory 114 if the data has already been cached in SCM 116.


The foregoing use of SCM 116 can improve the performance of the system including host 101 and DSD 108 since latency is reduced by caching frequently accessed data, such as file system metadata, in SCM 116, and in some implementations, by also not having to use a page cache at host 101 to load data from DSD 108 into main memory 104 or write data from main memory 104 to secondary memory 114. The size of main memory 104 can also be reduced with use of address space 24 of SCM 116, which reduces the power usage of the system since SCM 116 uses less power than main memory 104 in the case where main memory 104 is a DRAM.



FIG. 3 illustrates an example of associating portions of data stored in secondary memory 114 of DSD 108 with unique identifiers according to an embodiment. As shown in FIG. 3, a file is divided into four portions indicated with different cross-hatching in the file. A unique identifier or “fingerprint” is calculated for each of the portions and stored in an extents tree, as shown with UID1 to UID4 in FIG. 3. In some implementations, the unique identifiers are calculated by applying a hash function to the data portions. The extents tree can include a UID and a length for the data portion represented by the UID. This extents tree differs from a conventional extents tree in that it includes a UID as opposed to a starting Logical Block Address (LBA) and a number of logical blocks or physical sectors for the extent. The extents tree can be represented by a radix tree or other tree data structure, such as a b-tree.


The unique identifiers can be stored in SCM 116 of DSD 108, while the data portions themselves are stored in secondary memory 114. In some implementations, all of the on-disk file system's metadata structures can be stored in SCM 116. The unique identifiers can be used to locate and access data portions in secondary memory 114. The sizes of the data portions can vary based on data access patterns of different applications executed by processor 102 of host 101, a file type for the data, and/or a size of the data.
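The disclosure does not tie the unique identifiers to any particular hash function; as a minimal sketch, the C code below fingerprints a variable-length data portion with 64-bit FNV-1a purely to illustrate how a UID can be derived from the content of the portion itself. A production implementation would likely use a stronger, collision-resistant hash.

#include <stddef.h>
#include <stdint.h>

/* Illustrative content-derived unique identifier (UID) for a data portion,
 * using 64-bit FNV-1a. Any collision-resistant hash could be substituted;
 * the key property is that the UID is computed from the data itself, so
 * identical portions always yield identical UIDs. */
static uint64_t portion_uid(const uint8_t *data, size_t len)
{
    uint64_t hash = 14695981039346656037ULL;   /* FNV-1a offset basis */
    for (size_t i = 0; i < len; i++) {
        hash ^= data[i];
        hash *= 1099511628211ULL;              /* FNV-1a prime        */
    }
    return hash;
}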



FIG. 4 illustrates an example of a plurality of DSDs, each with its own SCM that includes a reserved address space configured as part of an aggregate address space for use by processor 102 according to an embodiment. As shown in FIG. 4, processor 102 uses address space 11, which maps to or includes at least a portion of main memory 104, SCM 116 of DSD 108, SCM 216 of DSD 208, and SCM 316 of DSD 308. In implementations where PCIe is used or where SCM 116 is host-managed, processor 102 may directly access the address spaces provided by the SCMs so as to extend the aggregate address space of processor 102. In other implementations, such as where SATA or SAS is used, the access of the address spaces provided by the SCMs may not be by a direct or native access of processor 102, such as where a page cache is used by processor 102 in main memory 104.


In addition to their respective SCMs, each of DSDs 108, 208, and 308 includes a secondary memory, shown as secondary memories 114, 214, and 314, respectively. As with secondary memory 114 in the example of FIG. 1, secondary memories 214 and 314 are configured to store data in a smallest writable unit that is greater than a byte, such as in a 4 KB, 8 KB, or 16 KB page size, and have at least one of a greater read latency and a greater write latency than SCMs 216 and 316, respectively.


The example of FIG. 4 illustrates the scalability of the foregoing arrangements of SCM and secondary memory within a DSD when used by a host processor as a byte-addressable address space. Such scalability can further expand the size of the address space without having to add greater power-consuming main memory (e.g., DRAM) at host 101. The use of unique identifiers as discussed above with reference to FIG. 3 further illustrates how OS 12 of host 101 can load and flush data from aggregate address space 11 in some implementations without the use of a page cache in main memory 104, thereby improving throughput and performance of processor 102 in terms of how quickly data can be accessed.


In addition, and as discussed in more detail below with reference to the example of FIG. 5, the use of unique identifiers to access data by the SCM of a DSD can allow processor 102 to load smaller portions of data directly into caches 103 of processor 102, and potentially bypass storing such data in main memory 104. In such implementations, only the result of operations performed by processor 102 may then be stored directly in SCM 116 and bypass main memory 104. The amount of data traffic is then reduced between host 101 and DSD 108 or between processor 102 and SCM 116 due to the smaller amounts of data being transferred, and performance can be improved by not needing to use main memory 104.



FIG. 5 provides an example of a data access operation according to an embodiment. As shown in FIG. 5, a file instance is represented in main memory 104 of host 101 by inode 31 including tree structure 32 (e.g., a radix tree). Tree structure 32 can include UIDs for portions of data instead of pointers to pages in the page cache that store data for the file instance. In other implementations, tree structure 32 may be stored in SCM 116 of DSD 108 if processor 102 has direct access to SCM 116, and the locations of tree structure 32 may point to an address space in SCM 116 and/or in main memory 104 used for storing the portions of the file instance.


The file instance in the example of FIG. 5 includes a data portion n that is represented by a unique identifier, UIDn, in tree structure 32. As noted above, tree structure 32 includes unique identifiers at different nodes for the different portions of data pointed to by tree structure 32, unlike tree structures in conventional inodes.


In the example of FIG. 5, a request for data portion n is sent to DSD 108 from host 101 and includes unique identifier, UIDn, which identifies the portion n for access from DSD 108. DSD 108 receives the request and uses mapping table 22 to identify a physical storage location in secondary memory 114 that stores portion n. In FIG. 5, an entry for UIDn is located in mapping table 22 and a corresponding page address and count of pages for the data portion are associated with UIDn for retrieving portion n from secondary memory 114. In other implementations, mapping table 22 may include a different arrangement, such as an extent or length of the data portion.


Portion n is then retrieved from secondary memory 114 and loaded into SCM 116. Control circuitry 110 may create reserved stream 34 in SCM 116 based on the request for data portion n from host 101 for sending the requested data to host 101. Processor 102 may then directly access the requested portion n in reserved stream 34 from SCM 116.
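A minimal sketch of how mapping table 22 might relate a UID such as UIDn to a physical location in secondary memory 114 is given below in C; the entry layout, the linear search, and the 4 KB page size are illustrative assumptions only, and an actual implementation could instead key a hash table or tree by UID.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SIZE 4096u   /* assumed page size of secondary memory 114 */

/* One entry of a hypothetical mapping table 22: a content-derived UID
 * mapped to a starting page address and a count of pages in secondary
 * memory 114, plus a flag noting whether the portion is also cached in
 * address space 24 of SCM 116. */
struct map_entry {
    uint64_t uid;
    uint64_t page_addr;    /* first physical page in secondary memory    */
    uint32_t page_count;   /* number of pages holding the portion        */
    bool     in_scm;       /* portion already cached in address space 24 */
};

/* Look up a UID; returns the matching entry or NULL if no portion with
 * that UID is stored. */
static struct map_entry *lookup_uid(struct map_entry *table, size_t n,
                                    uint64_t uid)
{
    for (size_t i = 0; i < n; i++)
        if (table[i].uid == uid)
            return &table[i];
    return NULL;
}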


Example Processes


FIG. 6 is a flowchart for a data access process according to an embodiment. The data access process of FIG. 6 can be performed, for example, by control circuitry 110 of DSD 108 executing DSD firmware 20.


In block 602, at least a portion of SCM 116 is provided by control circuitry 110 to serve as at least a portion of an address space of processor 102 of host 101. The provided address space of SCM 116 (i.e., address space 24 in FIG. 1) may extend the address space available to processor 102 in main memory 104 so that data or metadata used by processor 102 may be stored in SCM 116. In some implementations, address space 24 may be part of an aggregate address space formed by portions of main memory 104 at host 101 and/or other SCMs at other DSDs (e.g., SCMs 216 and 316 in FIG. 4).


In block 604, DSD 108 receives an instruction from processor 102 for data smaller than a smallest writable unit of secondary memory 114. For example, the instruction from processor 102 may include a request to load such data into address space 24, or a store request from processor 102 to evict data previously loaded into address space 24 to secondary memory 114. When evicting such data, the data may be treated as volatile data that is not stored or flushed to secondary memory 114 or the evicted data may be treated as persistent data so that changes to the data are stored or flushed to secondary memory 114. In other examples, and as discussed in more detail below with reference to FIG. 8, the instruction from processor 102 in block 604 can include an operation instruction to perform an operation on data stored in secondary memory 114 or SCM 116.


In block 606, data for the instruction is accessed (i.e., written or read) in the at least a portion of SCM 116 serving as at least a portion of the address space of processor 102 (e.g., address space 24) in response to the instruction received in block 604. In some cases, the data may be accessed in address space 24 in response to a read request from processor 102 to provide processor 102 with the requested data. In other cases, the data may be accessed to flush it to secondary memory 114 with or without keeping a copy of the flushed data in address space 24. In yet other cases, operations or transformations of the data may be performed by control circuitry 110 while the data is stored in SCM 116.


In block 608, control circuitry 110 also receives commands from processor 102 to read or write data in secondary memory 114 in data sizes greater than the smallest writable unit of secondary memory 114. Control circuitry 110 may optionally perform such commands without using SCM 116. In this regard, the use of SCM 116 ordinarily allows DSD 108 to act similarly to a main memory of host 101 with a byte-addressable address space (e.g., address space 24 in FIG. 1) and as a persistent storage with secondary memory 114.
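The split between blocks 604 to 606 and block 608 can be pictured with the short C sketch below, in which requests smaller than the secondary memory smallest writable unit are routed to the byte-addressable address space 24 while full-sized commands go to secondary memory 114; the 4 KB threshold and the routing function are assumptions made only for illustration.

#include <stdint.h>

#define SECONDARY_SMALLEST_WRITABLE 4096u   /* assumed 4 KB page size */

enum route { ROUTE_SCM_BYTE_ADDRESSABLE, ROUTE_SECONDARY_BLOCK };

/* Illustrative routing decision: sub-page requests (blocks 604/606) are
 * served byte-addressably from address space 24 in SCM 116, while
 * page-sized or larger commands (block 608) are performed against
 * secondary memory 114, optionally without using SCM 116 at all. */
static enum route route_request(uint32_t len_bytes)
{
    return (len_bytes < SECONDARY_SMALLEST_WRITABLE)
               ? ROUTE_SCM_BYTE_ADDRESSABLE
               : ROUTE_SECONDARY_BLOCK;
}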



FIG. 7 is a flowchart for a unique identifier process according to an embodiment. Parts of the unique identifier process of FIG. 7 may be performed by processor 102 and other parts of the process may be performed by control circuitry 110. For example, in some implementations, one or more of blocks 702, 704, and 706 may be performed by processor 102 of host 101 executing DSD driver 16 or a file system, with the remaining blocks of FIG. 7 being performed by control circuitry 110 of DSD 108 executing DSD firmware 20. In other implementations, all of the blocks of FIG. 7 may be performed by control circuitry 110 of DSD 108.


In block 702, data is divided into differently sized portions to be stored in secondary memory 114. The sizes of the data portions can be based on, for example, at least one of data access patterns of different applications executed by processor 102, a file type for the data, and a total size of the data to be stored.


The division of data into differently sized portions can allow, for example, the access of data in units that vary from a fixed page or block size (e.g., 4 KB or 512 bytes) as typically used in current systems. This can provide either more granular data access, where a data portion is smaller than a conventional block size, or less metadata, where a data portion is larger than a conventional block size, since more data can then be referenced by each item of metadata. In this regard, different applications executed by processor 102 may access data in different sizes. For example, application 18 executed by processor 102 may generate relatively large amounts of data as compared to other applications executed by processor 102, so larger sized portions may be used for data generated by application 18 than for other applications, which would result in less metadata being generated for the larger portions of data from application 18 than if a smaller fixed block size were used.
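One way to picture this sizing policy is a simple selection based on a per-application hint; the hint values and the specific portion sizes in the C sketch below are invented solely for illustration and are not part of the disclosed embodiments.

#include <stdint.h>

/* Assumed, illustrative hints about how an application accesses data. */
enum access_pattern { SMALL_RANDOM, LARGE_SEQUENTIAL, DEFAULT_PATTERN };

/* Pick a portion size for dividing data: applications that generate or
 * stream large amounts of data get larger portions (less metadata per
 * byte), while applications with small random accesses get finer-grained
 * portions. */
static uint32_t portion_size(enum access_pattern pattern, uint64_t total_size)
{
    if (pattern == LARGE_SEQUENTIAL || total_size > (64u << 20))
        return 64u << 10;   /* 64 KB portions      */
    if (pattern == SMALL_RANDOM)
        return 256u;        /* sub-sector portions */
    return 4u << 10;        /* 4 KB default        */
}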


In block 704, a unique identifier or fingerprint is calculated for each of the differently sized portions from block 702. As noted above, the unique identifier may be calculated by processor 102 or by control circuitry 110. In some implementations, a file system executed at host 101 may calculate the UID and store the UID with the file system's metadata. In other implementations, the metadata structure including the UID (e.g., tree structure 32 in FIG. 5) may be shared between host 101 and DSD 108 so that DSD 108 can calculate and store the unique identifier in the metadata structure accessed by the file system. The UID can be calculated, for example, by applying a hash function to the data portion. As discussed in more detail below, the unique identifiers can then be used to access data at a more granular level than when using a page cache as in conventional systems, or to prevent the duplication of data stored in secondary memory 114 and/or SCM 116 of DSD 108.


In block 706, the unique identifiers are stored in a tree structure that is used by OS 12 executed by processor 102. As discussed above with reference to FIG. 5, the tree structure can be, for example, part of an inode used by a file system (e.g., tree structure 32 in FIG. 5). The tree structure may be stored in main memory 104 as in the example of FIG. 5, or in SCM 116. The unique identifiers stored in block 706 may be associated with a stream ID or inode ID for the file that includes the data portion pointed to by the unique identifier so that the file system can access the data portions.


In block 708, a mapping is stored in SCM 116 (e.g., in mapping table 22 in FIG. 1) of unique identifiers with indications of physical locations or physical addresses (e.g., PBAs) where the corresponding data portions for the unique identifiers are stored in secondary memory 114. The mapping in block 708 may be stored when the portions of data corresponding to the unique identifiers are first stored or written in DSD 108. If the content of the data portion is later changed, the unique identifier for the data portion is recalculated and the mapping is updated to indicate the recalculated unique identifier. As shown in the example of FIG. 5 discussed above, the unique identifier (e.g., UID in FIG. 5) may be associated with a page and a count of pages for locating the data portion corresponding to the unique identifier. Other implementations may include a different indication of the physical storage location of the data portion in secondary memory 114.


In addition, mapping table 22 in some implementations may also include an indication of whether the data portion is stored or cached in SCM 116. In block 708, control circuitry 110 of DSD 108 may optionally store such an indication of whether the corresponding data portions are stored in SCM 116 in addition to being stored in secondary memory 114. In some cases, control circuitry 110 may keep some or all of the data portions to be stored in SCM 116 for future read access by processor 102 in accordance with a caching policy. The caching policy may be based on, for example, a priority and/or a frequency of access for the respective portions of data.


The storage of the data portions in secondary memory 114 may also be deferred in some cases to more efficiently schedule write operations, or to allow for the modification of the data by processor 102 as discussed below with reference to FIG. 8.


In block 712, control circuitry 110 optionally checks to see if the unique identifier for each data portion is already included in mapping table 22, which would indicate that a duplicate of the data portion is already stored in secondary memory 114 and/or SCM 116. If so, control circuitry 110 may prevent the storage of the data portion corresponding to the matching unique identifier. This can ordinarily prevent duplicates of data portions from being stored in DSD 108, which can conserve storage space and reduce wear on secondary memory 114 and SCM 116.
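Building on the hypothetical portion_uid and lookup_uid sketches introduced above, the deduplication check of block 712 reduces to a UID lookup before the write, as in the following assumed illustration.

/* Illustrative dedup check for block 712, reusing the hypothetical
 * map_entry, lookup_uid, and portion_uid sketches above: if the UID of
 * the portion to be written is already in mapping table 22, the write is
 * skipped so that no duplicate copy is stored in DSD 108. */
static bool write_portion_dedup(struct map_entry *table, size_t n,
                                const uint8_t *data, size_t len)
{
    uint64_t uid = portion_uid(data, len);   /* content-derived UID   */
    if (lookup_uid(table, n, uid) != NULL)
        return false;                        /* duplicate: not stored */
    /* Otherwise allocate pages in secondary memory 114, write the data,
     * and add a new map_entry for this UID (omitted in this sketch).  */
    return true;
}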



FIG. 8 is a flowchart for a data operation process within SCM 116 of DSD 108 according to an embodiment. The data operation process can be performed, for example, by control circuitry 110 executing DSD firmware 20 or by processor 102 executing DSD driver 16.


In block 802, DSD 108 receives an operation instruction from processor 102 to perform an operation on data stored in secondary memory 114. Examples of such operations include searching for data meeting a particular search condition and applying a mathematical function or operation to data that meets the search condition.


In block 804, control circuitry 110 loads the data for the operation from secondary memory 114 into address space 24 of SCM 116. In block 806, the operation for the instruction is performed on the data loaded into SCM 116. In some implementations, control circuitry 110 of DSD 108 may perform the operation. In implementations where processor 102 of host 101 can directly access the data stored in SCM 116, such as with PCIe or where SCM 116 is host-managed, processor 102 may perform the operation on the data stored in SCM 116.


The result from performing the operation is optionally sent to processor 102 in block 808 for implementations where the operation is performed by control circuitry 110 of DSD 108. The result may include, for example, one or more numerical values or an indication of a logical result from performing the operation (e.g., true or false). By performing the operation through SCM 116 as opposed to through a main memory, such as main memory 104, it is ordinarily possible to reduce the size of main memory 104. In cases where main memory 104 is a DRAM, this can significantly reduce the power requirements of the system. In addition, by only sending the result of the operation from SCM 116 to processor 102 in cases where performance of the operation is offloaded to control circuitry 110, it is ordinarily possible to reduce the amount of data that would otherwise need to be transferred from DSD 108 to host 101 to perform the operation from main memory 104 of host 101. This reduction in data traffic can allow for more data to be transferred for other operations or data accesses at a given time.
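As a concrete, assumed example of such an offloaded operation, the C sketch below scans records that have been loaded into address space 24, applies a search condition, and sums a field of the matching records so that only a single numeric result needs to be returned to the host; the record layout and the particular condition are invented for illustration.

#include <stddef.h>
#include <stdint.h>

/* Hypothetical fixed-size record as it might sit in address space 24
 * after being loaded from secondary memory 114 (layout assumed). */
struct record {
    uint32_t key;
    int64_t  value;
};

/* Offloaded operation performed on data already loaded into SCM 116:
 * find records whose key meets the search condition and apply a
 * mathematical function (here, a sum) to them. Only the final result
 * is sent to processor 102 instead of every record. */
static int64_t sum_matching(const struct record *recs, size_t count,
                            uint32_t key_to_match)
{
    int64_t result = 0;
    for (size_t i = 0; i < count; i++)
        if (recs[i].key == key_to_match)
            result += recs[i].value;
    return result;
}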


As discussed above, the foregoing arrangements of a hybrid DSD including an SCM, together with the use of unique identifiers for accessing portions of data, provide more efficient access to data than conventional systems using a less granular or fixed page size. In addition, a byte-addressable SCM, when combined with the disclosed unique identifiers, can allow for direct access of the SCM by a host processor, which can replace the use of a page cache and the associated resources needed for the page cache.


Other Embodiments


Those of ordinary skill in the art will appreciate that the various illustrative logical blocks, modules, and processes described in connection with the examples disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. Furthermore, the foregoing processes can be embodied on a computer readable medium which causes a processor or control circuitry to perform or execute certain functions.


To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, and modules have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Those of ordinary skill in the art may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.


The various illustrative logical blocks, units, and modules described in connection with the examples disclosed herein may be implemented or performed with a processor or control circuitry, such as, for example, a Central Processing Unit (CPU), a Microprocessor Unit (MPU), a Microcontroller Unit (MCU), or a DSP, and can include, for example, an FPGA, an ASIC, or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor or control circuitry may also be implemented as a combination of computing devices, e.g., a combination of a DSP and an MPU, a plurality of MPUs, one or more MPUs in conjunction with a DSP core, or any other such configuration. In some implementations, the control circuitry or processor may form at least part of an SoC.


The activities of a method or process described in connection with the examples disclosed herein may be embodied directly in hardware, in a software module executed by a processor or control circuitry, or in a combination of hardware and software. The steps of the method or algorithm may also be performed in an alternate order from those provided in the examples. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, other types of solid state memory, registers, hard disk, removable media, optical media, or any other form of storage medium known in the art. An exemplary storage medium is coupled to a processor or a controller such that the processor or control circuitry can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor or the control circuitry.


The foregoing description of the disclosed example embodiments is provided to enable any person of ordinary skill in the art to make or use the embodiments in the present disclosure. Various modifications to these examples will be readily apparent to those of ordinary skill in the art, and the principles disclosed herein may be applied to other examples without departing from the spirit or scope of the present disclosure. The described embodiments are to be considered in all respects only as illustrative and not restrictive. In addition, the use of language in the form of “at least one of A and B” in the following claims should be understood to mean “only A, only B, or both A and B.”

Claims
  • 1. A device, comprising: a Storage Class Memory (SCM) configured for byte-addressable access of data, wherein at least a portion of the SCM is further configured as at least a portion of an address space of a processor; a secondary memory configured to store data using a secondary memory smallest writable unit for writing data in the secondary memory, wherein the secondary memory smallest writable unit is larger than an SCM smallest writable unit for writing data in the SCM, and wherein the secondary memory has at least one of a greater read latency and a greater write latency than the SCM; and control circuitry configured to: receive an operation instruction from the processor to perform an operation on data stored in the secondary memory; and load the data from the secondary memory into the SCM for performance of the operation by the control circuitry of the device or by the processor.
  • 2. The device of claim 1, wherein the operation includes at least one of searching for data meeting a search condition and applying a mathematical function on the data meeting the search condition.
  • 3. The device of claim 1, wherein the control circuitry is further configured to send a result of performing the operation on the data loaded into the SCM to the processor.
  • 4. The device of claim 1, wherein the control circuitry is further configured to associate portions of data stored in the secondary memory with unique identifiers calculated using the portions of data.
  • 5. The device of claim 4, wherein the portions of data stored in the secondary memory have different data sizes based on at least one of data access patterns of different applications executed by the processor, a file type for the data, and a size of the data.
  • 6. The device of claim 4, wherein the control circuitry is further configured to store the unique identifiers in a tree structure that is used by an operating system executed by the processor.
  • 7. The device of claim 4, wherein the control circuitry is further configured to prevent storage of a duplicate portion of data in the device using a unique identifier stored in the SCM.
  • 8. The device of claim 4, wherein the control circuitry is further configured to store a mapping of the unique identifiers with indications of physical locations where the corresponding portions of data are stored in the secondary memory.
  • 9. The device of claim 4, wherein the control circuitry is further configured to store a mapping of the unique identifiers with indications of whether the corresponding portions of data are stored in the SCM.
  • 10. The device of claim 1, wherein the control circuitry is further configured to access data in the SCM that is metadata related to data stored in the secondary memory.
  • 11. A method for operating a device including a Storage Class Memory (SCM) and a secondary memory that has at least one of a greater read latency and a greater write latency than the SCM, the method comprising: providing at least a portion of the SCM to serve as at least a portion of an address space of a processor, wherein an SCM smallest writable unit for writing data in the SCM is smaller than a secondary memory smallest writable unit for writing data in the secondary memory; receiving an operation instruction from the processor to perform an operation on data stored in the secondary memory; and loading the data from the secondary memory into the SCM for performance of the operation by control circuitry of the device or by the processor.
  • 12. The method of claim 11, wherein the operation includes at least one of searching for data meeting a search condition and applying a mathematical function on the data meeting the search condition.
  • 13. The method of claim 11, further comprising calculating unique identifiers for portions of data to be stored in the secondary memory using the portions of data to be stored in the secondary memory.
  • 14. The method of claim 13, further comprising dividing data into differently sized portions to be stored in the secondary memory, wherein the different sizes of the portions are based on at least one of data access patterns of different applications executed by the processor, a file type for the data, and a size for the data.
  • 15. The method of claim 13, further comprising storing the unique identifiers in a tree structure that is used by an operating system executed by the processor.
  • 16. The method of claim 13, further comprising preventing storage of a duplicate portion of data in the device using a unique identifier stored in the SCM.
  • 17. The method of claim 13, further comprising storing a mapping of the unique identifiers with indications of physical locations where the corresponding portions of data are stored in the secondary memory.
  • 18. The method of claim 13, further comprising storing a mapping of the unique identifiers with indications of whether the corresponding portions of data are stored in the SCM.
  • 19. A system, comprising: a processor; a Storage Class Memory (SCM) configured for byte-addressable access of data, wherein at least a portion of the SCM is further configured to serve as at least a portion of an address space of the processor; a secondary memory configured to store data using a secondary memory smallest writable unit for writing data in the secondary memory, wherein the secondary memory smallest writable unit is larger than an SCM smallest writable unit for writing data in the SCM, and wherein the secondary memory has at least one of a greater read latency and a greater write latency than the SCM; and means for: receiving an instruction from the processor to access data smaller than the secondary memory smallest writable unit; in response to the instruction received from the processor to access data smaller than the secondary memory smallest writable unit, accessing the data for the instruction in the at least a portion of the SCM serving as the at least a portion of the address space of the processor; receiving one or more commands from the processor to access data that is greater than the secondary memory smallest writable unit; and in response to the one or more commands received from the processor to access data that is greater than the secondary memory smallest writable unit, accessing data in the secondary memory for the one or more commands received from the processor.
  • 20. The system of claim 19, further comprising: at least one additional SCM configured for byte-addressable access of data, wherein at least a portion of the at least one additional SCM is further configured to serve as at least a portion of the address space of the processor; and at least one additional secondary memory configured to store data using the secondary memory smallest writable unit for writing data in the at least one additional secondary memory, wherein the at least one additional secondary memory has at least one of a greater read latency and a greater write latency than the at least one additional SCM.
CROSS-REFERENCE TO RELATED APPLICATION

This application is a continuation of application Ser. No. 16/196,077, filed on Nov. 20, 2018, titled “DATA ACCESS IN DATA STORAGE DEVICE INCLUDING STORAGE CLASS MEMORY”, the contents of which are hereby incorporated by reference in their entirety.

US Referenced Citations (162)
Number Name Date Kind
4420807 Nolta Dec 1983 A
5608876 Cohen Mar 1997 A
6772296 Mathiske Aug 2004 B1
6856556 Hajeck Feb 2005 B1
7023726 Chen Apr 2006 B1
7126857 Hajeck Oct 2006 B2
7216211 Munguia May 2007 B2
7386655 Gorobets Jun 2008 B2
7430136 Merry, Jr. Sep 2008 B2
7447807 Merry Nov 2008 B1
7502256 Merry, Jr. Mar 2009 B2
7509441 Merry Mar 2009 B1
7596643 Merry, Jr. Sep 2009 B2
7653778 Merry, Jr. Jan 2010 B2
7685337 Merry, Jr. Mar 2010 B2
7685338 Merry, Jr. Mar 2010 B2
7685374 Diggs Mar 2010 B2
7733712 Walston Jun 2010 B1
7765373 Merry Jul 2010 B1
7898855 Merry, Jr. Mar 2011 B2
7904619 Danilak Mar 2011 B2
7912991 Merry Mar 2011 B1
7936603 Merry, Jr. May 2011 B2
7962792 Diggs Jun 2011 B2
7979601 Berenbaum Jul 2011 B2
8078918 Diggs Dec 2011 B2
8090899 Syu Jan 2012 B1
8095851 Diggs Jan 2012 B2
8108692 Merry Jan 2012 B1
8122185 Merry, Jr. Feb 2012 B2
8127048 Merry Feb 2012 B1
8135903 Kan Mar 2012 B1
8151020 Merry, Jr. Apr 2012 B2
8161227 Diggs Apr 2012 B1
8166245 Diggs Apr 2012 B2
8243525 Kan Aug 2012 B1
8254172 Kan Aug 2012 B1
8261012 Kan Sep 2012 B2
8296625 Diggs Oct 2012 B2
8312207 Merry, Jr. Nov 2012 B2
8316176 Phan Nov 2012 B1
8341339 Boyle Dec 2012 B1
8375151 Kan Feb 2013 B1
8392635 Booth Mar 2013 B2
8397107 Syu Mar 2013 B1
8407449 Colon Mar 2013 B1
8423722 Deforest Apr 2013 B1
8433858 Diggs Apr 2013 B1
8443167 Fallone May 2013 B1
8447920 Syu May 2013 B1
8458435 Rainey, III Jun 2013 B1
8478930 Syu Jul 2013 B1
8489854 Colon Jul 2013 B1
8503237 Horn Aug 2013 B1
8521972 Boyle Aug 2013 B1
8549236 Diggs Oct 2013 B2
8583835 Kan Nov 2013 B1
8601311 Horn Dec 2013 B2
8601313 Horn Dec 2013 B1
8612669 Syu Dec 2013 B1
8612804 Kang Dec 2013 B1
8615681 Horn Dec 2013 B2
8631191 Hashimoto Jan 2014 B2
8638602 Horn Jan 2014 B1
8639872 Boyle Jan 2014 B1
8683113 Abasto Mar 2014 B2
8700834 Horn Apr 2014 B2
8700950 Syu Apr 2014 B1
8700951 Call Apr 2014 B1
8706985 Boyle Apr 2014 B1
8707104 Jean Apr 2014 B1
8713066 Lo Apr 2014 B1
8713357 Jean Apr 2014 B1
8719531 Strange May 2014 B2
8724392 Asnaashari May 2014 B1
8724422 Agness May 2014 B1
8725931 Kang May 2014 B1
8745277 Kan Jun 2014 B2
8751728 Syu Jun 2014 B1
8769190 Syu Jul 2014 B1
8769232 Suryabudi Jul 2014 B2
8775720 Meyer Jul 2014 B1
8782327 Kang Jul 2014 B1
8788778 Boyle Jul 2014 B1
8788779 Horn Jul 2014 B1
8788880 Gosla Jul 2014 B1
8793429 Call Jul 2014 B1
8903995 Basak Dec 2014 B1
8947803 Yamakawa Feb 2015 B1
9015123 Mathew et al. Apr 2015 B1
9116800 Post Aug 2015 B2
9189387 Taylor Nov 2015 B1
9342453 Nale May 2016 B2
9619174 Chen Apr 2017 B2
9836404 Ummadi Dec 2017 B2
9857995 Malina Jan 2018 B1
10126981 Malina Nov 2018 B1
10423536 Noguchi Sep 2019 B2
10482009 Sabol et al. Nov 2019 B1
10496544 Wang Dec 2019 B2
20080148048 Govil et al. Jun 2008 A1
20100037002 Bennett Feb 2010 A1
20100174849 Walston Jul 2010 A1
20100250793 Syu Sep 2010 A1
20110099323 Syu Apr 2011 A1
20110283049 Kang Nov 2011 A1
20120166891 Dahlen Jun 2012 A1
20120210068 Joshi Aug 2012 A1
20120260020 Suryabudi Oct 2012 A1
20120027853 Horn Nov 2012 A1
20120278531 Horn Nov 2012 A1
20120284460 Guda Nov 2012 A1
20120317377 Palay et al. Dec 2012 A1
20120324191 Strange Dec 2012 A1
20120331016 Janson Dec 2012 A1
20130024605 Sharon et al. Jan 2013 A1
20130080687 Nemazie Mar 2013 A1
20130091321 Nishtala Apr 2013 A1
20130132638 Horn May 2013 A1
20130145106 Kan Jun 2013 A1
20130290793 Booth Oct 2013 A1
20140006686 Chen et al. Jan 2014 A1
20140059405 Syu Feb 2014 A1
20140101369 Tomlin Apr 2014 A1
20140108703 Cohen et al. Apr 2014 A1
20140115427 Lu Apr 2014 A1
20140133220 Danilak May 2014 A1
20140136753 Tomlin May 2014 A1
20140149826 Lu May 2014 A1
20140157078 Danilak Jun 2014 A1
20140181432 Horn Jun 2014 A1
20140223255 Lu Aug 2014 A1
20140226389 Ebsen et al. Aug 2014 A1
20140351515 Chiu Nov 2014 A1
20150058534 Lin Feb 2015 A1
20150142996 Lu May 2015 A1
20150302903 Chaurasia Oct 2015 A1
20150363320 Kumar Dec 2015 A1
20160027481 Hong Jan 2016 A1
20160118130 Chadha Apr 2016 A1
20160313943 Hashimoto Oct 2016 A1
20160342357 Ramamoorthy Nov 2016 A1
20160357463 DeNeui Dec 2016 A1
20160378337 Horspool et al. Dec 2016 A1
20170024332 Rui Jan 2017 A1
20170147499 Mohan May 2017 A1
20170160987 Royer, Jr. Jun 2017 A1
20170228170 Chen et al. Aug 2017 A1
20170286291 Thomas Oct 2017 A1
20180004438 Hady Jan 2018 A1
20180032432 Kowles Feb 2018 A1
20180095898 Khosravi Apr 2018 A1
20180150404 Kim et al. May 2018 A1
20180307602 Xu et al. Oct 2018 A1
20180307620 Zhou Oct 2018 A1
20180349041 Zhou et al. Dec 2018 A1
20190163375 Amidi et al. May 2019 A1
20190188153 Benisty Jun 2019 A1
20190310796 Perez et al. Oct 2019 A1
20190339904 Myran et al. Nov 2019 A1
20190347204 Du et al. Nov 2019 A1
20200004628 Ben-Rubi et al. Jan 2020 A1
Foreign Referenced Citations (2)
Number Date Country
2012134641 Oct 2012 WO
WO-2012134641 Oct 2012 WO
Non-Patent Literature Citations (12)
Entry
Wu et al.; “Delta-FTL: Improving SSD Lifetime via Exploiting Content Locality”; Apr. 10-13, 2012; 13 pages; available at: https://cis.temple.edu/˜he/publications/Conferences/DeltaFTL-Eurosys12.pdf.
Pending U.S. Appl. No. 16/683,095, filed Nov. 13, 2019, entitled “Storage Class Memory Access”, Manzanares et al.
Xia et al.; “Edelta: A Word-Enlarging Based Fast Delta Compression Approach”; Jul. 2015; 5 pages; available at: https://www.usenix.org/system/files/conference/hotstorage15/hotstorage15-xia.pdf.
Hitachi Vantara; “Hitachi Accelerated Flash 2.0”; Sep. 2018; 20 pages; available at: https://www.hitachivantara.com/en-us/pdfd/white-paper/accelerated-flash-whitepaper.pdf.
Pending U.S. Appl. No. 16/246,425, filed Jan. 11, 2019, entitled “Container Key Value Store for Data Storage Devices”, Sanjay Subbarao.
Pending U.S. Appl. No. 16/246,401, filed Jan. 11, 2019, entitled “Fine Granularity Translation Layer for Data Storage Devices”, Sanjay Subbarao.
Pending U.S. Appl. No. 16/176,997, filed Oct. 31, 2018, entitled "Tiered Storage Using Storage Class Memory", James N. Malina.
Pending U.S. Appl. No. 16/867,793, filed May 6, 2020, entitled “Page Modification Encoding and Caching”, Cassuto et al.
Phil Mills, "Storage Class Memory—the Future of Solid State Storage," SNIA Education, 2009; available at: http://www.snia.org/sites/default/education/tutorials/2009/fall/solid/PhilMills_The_Future_of_Solid_State_Storage.pdf.
International Search Report and Written Opinion dated Nov. 6, 2020 from International Application No. PCT/US2020/037713, 8 pages.
Kang et al.; “Subpage-based Flash Translation Layer For Solid State Drivers”; Jan. 2006; 10 pages; available at https://cs.kaist.ac.kr/fileDownload?kind=tech&sn=340.
Kim et al.; “Partial Page Buffering for Consumer Devices with Flash Storage”; Proceedings 2013 IEEE 3rd International Conference on Consumer Electronics—Berlin, ICCE—Berlin 2013 (p. 177-180); available at: https://hanyang.elsevierpure.com/en/publications/partial-page-buffering-for-consumer-devices-with-flash-storage.
Related Publications (1)
Number Date Country
20200334147 A1 Oct 2020 US
Continuations (1)
Number Date Country
Parent 16196077 Nov 2018 US
Child 16921719 US