The present invention generally relates to solid state storage technologies and more particularly to managing an integrated heterogeneous solid state storage drive.
Non-volatile memory (NVM) is a type of computer memory that retains stored information even after power cycling—powering a device off and then on again. In contrast, volatile memory is a type of computer memory that requires power to maintain the stored information—when the power is off or interrupted, the stored data is lost. A traditional type of non-volatile memory is a hard disk drive (HDD), which stores and accesses data using one or more rotating disks (platters) coated with magnetic material.
Another type of storage memory is a solid state drive (SSD), which differs from an HDD in that digital data is stored and retrieved using electronic circuits, without any moving mechanical parts. SSDs can be based on either volatile memory, such as dynamic random-access memory (DRAM) or static random-access memory (SRAM), or non-volatile memory, such as NAND flash memory. Standard NAND flash memory can be Single Level Cell (SLC) or Multi Level Cell (MLC), the latter including enterprise MLC (eMLC), Triple Level Cell (TLC), and Quad Level Cell (QLC). While the higher data density of MLC memory reduces the cost per unit of storage, SLC memory has faster write speeds, lower power consumption, and higher cell endurance.
While early SSDs retained legacy HDD form factors and connectivity, modern SSDs break away from these limitations. Without the constraints of platter mechanics inherent in HDDs, modern SSDs have superior mechanical ruggedness and can come in much smaller form factors than the standard 1.8-inch, 2.5-inch, and 3.5-inch HDD form factors, such as M.2. M.2 is a specification for internally mounted expansion cards and associated connectors, with a flexible physical specification that allows different module widths and lengths, as well as more advanced interfacing features, such as Peripheral Component Interconnect Express (PCI Express or PCIe).
Due to its low latency and high throughput, PCIe connectivity is becoming more popular and has led to more efficient access protocols and storage stacks. One example is Non-Volatile Memory Express (NVMe), a logical device communications interface, or protocol, specifically developed for SSDs. NVMe provides a parallel, multi-queue application program interface (API) over PCIe, for a local storage device directly connected to a computer through a PCIe bus, or over a switch fabric, using remote direct memory access (RDMA), for a remote storage device connected via a communications network such as Ethernet. NVMe uses the concept of a “namespace” to enable an SSD to divide its storage capacity into multiple separately addressable units, each identified by a namespace ID (NSID). Storage locations within a namespace are addressable by a logical block address (LBA). Thus, any storage location within an NVMe device is identified by the combination of NSID and LBA.
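By way of illustration only, the following Python sketch models the (NSID, LBA) addressing scheme described above; the names used (Namespace, resolve) are hypothetical and are not part of the NVMe specification or of any disclosed embodiment.

```python
# Illustrative sketch: a storage location in an NVMe device is uniquely
# identified by the combination of a namespace ID (NSID) and a logical
# block address (LBA) within that namespace.
from dataclasses import dataclass


@dataclass
class Namespace:
    nsid: int             # namespace ID
    capacity_blocks: int  # number of separately addressable logical blocks


def resolve(namespaces: dict[int, Namespace], nsid: int, lba: int) -> tuple[int, int]:
    """Validate an (NSID, LBA) pair and return it as a unique storage location."""
    ns = namespaces.get(nsid)
    if ns is None:
        raise KeyError(f"unknown namespace {nsid}")
    if not 0 <= lba < ns.capacity_blocks:
        raise IndexError(f"LBA {lba} out of range for namespace {nsid}")
    return (nsid, lba)  # together these uniquely identify a location


# Example: an SSD whose capacity is divided into two separately addressable units.
drive = {1: Namespace(nsid=1, capacity_blocks=1 << 20),
         2: Namespace(nsid=2, capacity_blocks=1 << 18)}
print(resolve(drive, 2, 4096))  # -> (2, 4096)
```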
Despite these advances in solid state storage technologies, existing computer systems and enterprise servers still rely heavily on legacy HDD infrastructure. Adapters that make solid state drives compatible with HDD infrastructure have been developed. One type of solid state drive adapter in the form factor of an HDD is disclosed in U.S. Patent Application Publication 2016/0259754, filed on Oct. 20, 2015, and U.S. Patent Application Publication 2016/0259597, filed on Nov. 24, 2015, which are hereby incorporated by reference. These publications disclose a solid state drive adapter that includes multiple solid state drives, an interface section, and a drive connector. This adapter, however, does not support managing the storage capacity within the solid state drives according to parameters defined by users. What is needed, therefore, is an improved technique for adapting solid state drives having smaller form factors (such as M.2 SSDs) and heterogeneous formats and densities (e.g., SLC, MLC, and TLC) for use with computing systems and servers designed for HDD infrastructure, and for managing solid state drives integrated into such an infrastructure.
In one embodiment, a heterogeneous integrated solid state drive includes a plurality of solid state memory devices including at least one solid state memory device of a first type and at least one solid state memory device of a second type, a controller coupled to each of the plurality of solid state memory devices, and an interface coupled to the controller. The controller is configured to receive at least one user-defined memory parameter and to create at least one namespace satisfying the at least one user-defined memory parameter in at least one of the plurality of solid state memory devices. In one embodiment, the at least one user-defined memory parameter is one of a group consisting of a capacity, a quality of service level, an assured number of I/O operations per second, a bandwidth, a latency, and an endurance. In one embodiment, the controller is configured to create a namespace corresponding to a first set of memory addresses in the at least one solid state memory device of the first type and a second set of memory addresses in the at least one solid state memory device of the second type. In one embodiment, the interface is an Ethernet network interface card configured to communicate using an NVMe over Fabric protocol. In another embodiment the interface is a PCIe bus interface. In one embodiment the controller is configured to autonomously move data within the at least one namespace to implement a quality of service level.
In one embodiment, a method includes determining at least one property of at least one solid state memory device of a first type coupled to a controller, determining at least one property of at least one solid state memory device of a second type different from the first type coupled to the controller, receiving at least one user-defined memory parameter, and creating at least one namespace satisfying the at least one user-defined memory parameter in at least one of the solid state memory devices coupled to the controller. In one embodiment, the at least one user-defined memory parameter is one of a group consisting of a capacity, a quality of service level, an assured number of I/O operations per second, a bandwidth, a latency, and an endurance. In one embodiment, the method further includes the controller autonomously moving data within the at least one namespace to implement a quality of service level.
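The following Python sketch illustrates, under stated assumptions, one way the summarized method might proceed: properties of devices of different types are determined, user-defined memory parameters are received, and a namespace is assembled from devices that satisfy them. All identifiers (DeviceInfo, create_namespace) and the particular properties chosen are hypothetical, not taken from the disclosure.

```python
# Hypothetical sketch of the summarized method: determine device properties,
# receive user-defined parameters, create a satisfying namespace.
from dataclasses import dataclass


@dataclass
class DeviceInfo:
    device_id: int
    media_type: str        # e.g. "SLC", "TLC", "NVRAM"
    free_blocks: int
    write_latency_us: float
    endurance_dwpd: float  # drive writes per day


def create_namespace(devices: list[DeviceInfo], capacity_blocks: int,
                     max_latency_us: float, min_dwpd: float) -> dict:
    """Pick devices whose determined properties satisfy the user-defined
    parameters, then build a namespace from their free capacity."""
    eligible = [d for d in devices
                if d.write_latency_us <= max_latency_us
                and d.endurance_dwpd >= min_dwpd]
    remaining, extents = capacity_blocks, []
    for d in eligible:
        take = min(remaining, d.free_blocks)
        if take:
            extents.append((d.device_id, take))
            remaining -= take
        if remaining == 0:
            return {"nsid": 1, "extents": extents}
    raise RuntimeError("no combination of devices satisfies the parameters")
```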
In the embodiment illustrated in FIG. 1, an integrated drive 100 includes a connector 112, a controller 114, and a plurality of NVM devices 122, 124, 126, and 128.
Controller 114 is coupled to each of NVM devices 122, 124, 126, and 128 through a PCIe connector 116, and is coupled to a connector 112 of integrated drive 100 through a PCIe connector 118. In other embodiments, controller 114 may include a plurality of heterogeneous hardware interfaces, for example, memory bus, PCIe bus, SATA bus, SAS bus, and the like, to connect with NVM devices 122, 124, 126, and 128.
Controller 114 includes firmware configured to control storage and retrieval of data to and from each of NVM devices 122, 124, 126, and 128. Controller 114 is also configured to create and expose namespaces to entities external to integrated drive 100. A namespace is a defined set of logical address locations that are mapped to logical address locations in one or more of NVM devices 122, 124, 126, and 128. For example, in one embodiment a particular namespace may be mapped to logical addresses in NVM device 124. In another embodiment, a particular namespace may be mapped to logical addresses in NVM device 122 and NVM device 126. Controller 114 is further configured to provide other functionalities such as tiering, implementing quality of service (QoS), and implementing service level agreements (SLAs), which are described further below.
In another embodiment, controller 114 is configured to expose NVM devices 122, 124, 126, and 128 to user space 160 transparently. In other words, a host computer coupled to integrated drive 100 would “see” each of NVM devices 122, 124, 126, and 128 as if those devices were located at the host computer. For example, in this embodiment, a namespace presented in user space 160 corresponding to NVM device 122 would have logical addresses that are mapped directly to the logical addresses of NVM device 122.
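A minimal sketch of this mapping follows, with hypothetical names (Extent, to_device_address): a namespace either spans extents in more than one NVM device, or, in the transparent case, holds a single extent covering one device.

```python
# Hypothetical sketch of namespace mapping: a namespace's logical addresses
# map to logical addresses in one or more NVM devices.
from dataclasses import dataclass


@dataclass
class Extent:
    device_id: int   # e.g. 122, 124, 126, or 128
    device_lba: int  # starting logical block address within that device
    length: int      # number of blocks in this extent


def to_device_address(extents: list[Extent], ns_lba: int) -> tuple[int, int]:
    """Translate a namespace LBA into a (device, device LBA) pair."""
    offset = 0
    for ext in extents:
        if offset <= ns_lba < offset + ext.length:
            return ext.device_id, ext.device_lba + (ns_lba - offset)
        offset += ext.length
    raise IndexError("LBA outside namespace")


# A namespace spanning NVM devices 122 and 126, as in one embodiment above;
# a transparent namespace would hold one extent covering a single device.
spanning = [Extent(122, 0, 1 << 16), Extent(126, 0, 1 << 16)]
print(to_device_address(spanning, (1 << 16) + 10))  # -> (126, 10)
```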
In step 214, the controller defines parameters of namespaces that users will be able to select. In one embodiment, the controller-defined parameters of namespaces include capacity, QoS level, and SLA. In one embodiment, a QoS level requires that a given operation be completed within a specified time limit. In one embodiment, SLA parameters include performance (e.g., assured number of I/O operations per second (IOPS), bandwidth, and latency), endurance (e.g., drive writes per day), and cost. In step 216, the controller receives user-defined values for parameters of one or more namespaces to be created. For example, in one embodiment, the controller receives definitions for a namespace's capacity and SLA parameters. In step 218, the controller creates the one or more namespaces having the received user-defined values for parameters.
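The following sketch illustrates steps 214 through 218 under the assumption that the parameters are carried in simple records; the field and type names (SLA, NamespaceSpec, create_namespaces) are hypothetical, though the fields mirror the parameters listed above.

```python
# Hypothetical sketch of steps 214-218: controller-defined parameters,
# user-supplied values, and namespace creation.
from dataclasses import dataclass


@dataclass
class SLA:
    iops: int            # assured I/O operations per second
    bandwidth_mb_s: int
    latency_us: float
    dwpd: float          # endurance: drive writes per day
    cost_per_gb: float


@dataclass
class NamespaceSpec:
    capacity_gb: int
    qos_time_limit_us: float  # QoS: an operation must finish within this limit
    sla: SLA


def create_namespaces(requests: list[NamespaceSpec]) -> dict[int, NamespaceSpec]:
    """Step 218: create one namespace per received user-defined specification."""
    return {nsid: spec for nsid, spec in enumerate(requests, start=1)}


# Step 216: the controller receives user-defined capacity and SLA values.
user_request = NamespaceSpec(capacity_gb=512, qos_time_limit_us=200.0,
                             sla=SLA(iops=50_000, bandwidth_mb_s=1_000,
                                     latency_us=100.0, dwpd=1.0, cost_per_gb=0.10))
namespaces = create_namespaces([user_request])
```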
In the embodiment illustrated in FIG. 3, an integrated drive 300 includes an Ethernet network interface card (NIC) 312, a controller 314, and a plurality of NVM devices 322, 324, 326, and 328.
Controller 314 is coupled to each of NVM devices 322, 324, 326, and 328 through a PCIe connector 316, and is further coupled to an Ethernet network interface card (NIC) 312 of integrated drive 300. In other embodiments, controller 314 may include a plurality of heterogeneous hardware interfaces, for example, memory bus, PCIe bus, SATA bus, SAS bus, and the like, to connect with NVM devices 322, 324, 326, and 328.
Controller 314 includes firmware configured to control storage and retrieval of data to and from each of NVM devices 322, 324, 326, and 328. Controller 314 is also configured to create and expose namespaces to entities external to integrated drive 300. Controller 314 is further configured to provide other functionalities such as tiering, implementing quality of service (QoS), and implementing service level agreements (SLAs), as set forth above.
In one embodiment, when integrated drive 300 receives an RDMA write command from a host over a network, the write command is parsed by controller 314. Controller 314 may store the data in NVM device 328, as NVRAM device 338 may have low latency, and then send an acknowledgement back to the host over the network. In such a case, the latency of performing the write command is determined by the latency of storing data in NVRAM device 338. At some later time, controller 314 may move the data to a lower tier, for example to TLC device 334 of NVM device 324, in accordance with its management rules.
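A minimal sketch of this tiered write path follows, assuming in-memory stand-ins for the NVRAM and TLC tiers; the class and method names (TieredController, handle_rdma_write, demote_cold_data) are hypothetical.

```python
# Hypothetical sketch: land writes in a low-latency NVRAM tier, acknowledge
# the host immediately, and later migrate data to a denser TLC tier
# in accordance with management rules.


class TieredController:
    def __init__(self) -> None:
        self.nvram: dict[int, bytes] = {}  # fast tier (e.g., NVRAM device 338)
        self.tlc: dict[int, bytes] = {}    # capacity tier (e.g., TLC device 334)

    def handle_rdma_write(self, lba: int, data: bytes) -> str:
        # The write latency seen by the host is that of the NVRAM tier only.
        self.nvram[lba] = data
        return "ACK"  # acknowledgement sent back to the host over the network

    def demote_cold_data(self) -> None:
        # Invoked at some later time, e.g., by a background management task.
        for lba, data in list(self.nvram.items()):
            self.tlc[lba] = data
            del self.nvram[lba]


ctrl = TieredController()
assert ctrl.handle_rdma_write(0, b"payload") == "ACK"
ctrl.demote_cold_data()  # the data now resides in the TLC tier
```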
In the embodiment illustrated in FIG. 4, an integrated drive 400 includes an Ethernet switch 414 and a plurality of NVM devices 422, 424, 426, and 428, each having a corresponding one of NICs 452, 454, 456, and 458 and a corresponding one of memory controllers 442, 444, 446, and 448.
Ethernet switch 414 is coupled to each of NICs 452, 454, 456, and 458 and is further coupled to host computers 462 and 464 via a network 460.
Creation and exposure of namespaces to entities external to integrated drive 400 is distributed across memory controllers 442, 444, 446, and 448 and/or one or more of host computers 462 and 464. For example, in one embodiment host computer 462 sends an RDMA write command to NVM device 428, and NIC 458 writes the data to low-latency NVRAM device 438. Thus, the data is written to integrated drive 400 with low latency. At some later time, host computer 462 sends an RDMA move command to NVM device 424 to move the data from NVM device 428 to NVM device 424. The move command includes an identification of the source device, a start address, and the length of the data to be moved. NVM device 424 sends an RDMA read command to NVM device 428, which responds by sending the requested data to NVM device 424. NVM device 424 writes the requested data to TLC device 434, sends an acknowledgement to host computer 462, and sends a trim command to NVM device 428 to evacuate the requested data from its original location. In another example, host computer 462 sends an RDMA copy command to NVM device 424 to copy data from NVM device 428 to NVM device 424. NVM device 424 performs the same functions as in the move operation, except that no trim command is sent to NVM device 428. According to one embodiment, the mapping of the namespaces resides in one of host computers 462 and 464. According to another embodiment, the mapping of the namespaces is distributed across host computers 462 and 464. In one embodiment, memory controllers 442, 444, 446, and 448 and/or one or more of host computers 462 and 464 also provide other functionalities such as tiering, implementing quality of service (QoS), and implementing service level agreements (SLAs), as set forth above.
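The following sketch illustrates the move/copy distinction described above, using hypothetical names (NVMDevice, transfer) and in-memory stand-ins for the devices: a move trims the source after the data is written to the destination, while a copy does not.

```python
# Hypothetical sketch of the distributed move/copy flow: the destination
# device reads the data from the source over RDMA, writes it locally,
# acknowledges the host, and (for a move only) trims the source.


class NVMDevice:
    def __init__(self, name: str) -> None:
        self.name = name
        self.blocks: dict[int, bytes] = {}

    def rdma_read(self, start: int, length: int) -> list[bytes]:
        return [self.blocks[a] for a in range(start, start + length)]

    def trim(self, start: int, length: int) -> None:
        for a in range(start, start + length):
            self.blocks.pop(a, None)  # evacuate the original location


def transfer(src: NVMDevice, dst: NVMDevice, start: int, length: int,
             move: bool) -> str:
    """Move (move=True) or copy (move=False) data from src to dst."""
    data = src.rdma_read(start, length)  # dst issues an RDMA read to src
    for i, block in enumerate(data):
        dst.blocks[start + i] = block    # write into dst's local media
    if move:
        src.trim(start, length)          # move only: trim the source
    return "ACK"                         # acknowledgement to the host


nvm428, nvm424 = NVMDevice("428"), NVMDevice("424")
nvm428.blocks[0] = b"data"
transfer(nvm428, nvm424, 0, 1, move=True)  # RDMA move: source is trimmed
```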
Other objects, advantages and embodiments of the various aspects of the present invention will be apparent to those who are skilled in the field of the invention and are within the scope of the description and the accompanying Figures. For example, but without limitation, structural or functional elements might be rearranged, or method steps reordered, consistent with the present invention. Similarly, a machine may comprise a single instance or a plurality of machines, such plurality possibly encompassing multiple types of machines which together provide the indicated function. The machine types described in various embodiments are not meant to limit the possible types of machines that may be used in embodiments of aspects of the present invention, and other machines that may accomplish similar tasks may be implemented as well. Similarly, principles according to the present invention, and methods and systems that embody them, could be applied to other examples, which, even if not specifically described here in detail, would nevertheless be within the scope of the present invention.
Number | Name | Date | Kind |
---|---|---|---|
6957294 | Saunders et al. | Oct 2005 | B1 |
7509420 | Moulton et al. | Mar 2009 | B2 |
8126939 | Raciborski et al. | Feb 2012 | B2 |
8132168 | Wires et al. | Mar 2012 | B2 |
8301809 | Liu et al. | Oct 2012 | B2 |
8468299 | O'Rourke et al. | Jun 2013 | B2 |
8595431 | Yamagami | Nov 2013 | B2 |
8760922 | Lassa | Jun 2014 | B2 |
8769241 | Chiang et al. | Jul 2014 | B2 |
8775868 | Colgrove et al. | Jul 2014 | B2 |
8903848 | Bahrami et al. | Dec 2014 | B1 |
9047090 | Kottilingal et al. | Jun 2015 | B2 |
9082512 | Davis et al. | Jul 2015 | B1 |
9128820 | Malina | Sep 2015 | B1 |
9146851 | Pittelko | Sep 2015 | B2 |
9164888 | Borchers et al. | Oct 2015 | B2 |
9213603 | Tiziani et al. | Dec 2015 | B2 |
20030051021 | Hirschfeld et al. | Mar 2003 | A1 |
20110035543 | Yang | Feb 2011 | A1 |
20110106802 | Pinkney | May 2011 | A1 |
20120158669 | Morsi | Jun 2012 | A1 |
20140207741 | Morsi | Jul 2014 | A1 |
20150012570 | Le et al. | Jan 2015 | A1 |
20150052176 | Akaike et al. | Feb 2015 | A1 |
20150378613 | Koseki | Dec 2015 | A1 |
20160259597 | Worley et al. | Sep 2016 | A1 |
20170024137 | Kanno | Jan 2017 | A1 |
20170123707 | Carey | May 2017 | A1 |
20200004429 | Schmisseur | Jan 2020 | A1 |
Entry |
---|
John Harker, “Tiered Storage Design Guide”, Best Practices for Cost-effective Designs, Hitachi Data Systems, pp. 1-21, Sep. 2010. |
Yongseok Oh et al., “Hybrid Solid State Drives for Improved Performance and Enhanced Lifetime,” pp. 1-5, Apr. 16, 2013. |
Number | Date | Country | |
---|---|---|
20180260135 A1 | Sep 2018 | US |