The present invention relates to the field of computational storage, and particularly to implementing address mapping for solid-state storage devices with built-in transparent compression.
Solid-state data storage devices, which use non-volatile NAND flash memory technology, are being pervasively deployed in various computing and storage systems. In addition to one or multiple NAND flash memory chips, each solid-state data storage device must contain a controller that manages all the NAND flash memory chips. Within each NAND flash memory chip, all the memory cells are organized in an array→block→page hierarchy, where one NAND flash memory array is partitioned into a large number (e.g., thousands) of blocks, and each block contains a certain number (e.g., 256) of pages. The size of each flash memory page is typically 16 kB or 32 kB, and the size of each flash memory block is typically tens of MBs. Data are programmed and fetched in the unit of a page. However, flash memory cells must be erased before being re-programmed, and the erase operation is carried out in the unit of a block (i.e., all the pages within the same block must be erased at the same time). As a result, NAND flash memory cannot support convenient in-place data updates.
To accommodate the lack of update-in-place support in NAND flash memory, solid-state data storage devices must use indirect address mapping. Internally, solid-state data storage devices manage data storage on NAND flash memory chips in units of constant-size (e.g., 2 k-byte or 4 k-byte) physical sectors. Each physical sector is assigned one unique physical block address (PBA). Instead of directly exposing the PBAs to external hosts, solid-state data storage devices expose an array of logical block addresses (LBAs) and internally manage/maintain an injective mapping between LBAs and PBAs. The software component responsible for managing the LBA-PBA map is called the flash translation layer (FTL).
Lossless data compression is one of the most effective means of reducing data storage cost. A lossless data compression function can be incorporated into solid-state data storage devices transparently to the host. By deploying solid-state storage devices with built-in transparent compression, host servers can conveniently benefit from lower physical storage cost without consuming host CPU cycles on compression computation/management. Nevertheless, implementing solid-state storage devices with built-in transparent compression is non-trivial. In particular, runtime compression ratio variation makes it a significant challenge to implement a storage device FTL that can ensure very high-speed address mapping without sacrificing storage reliability/stability or consuming excessive computing/memory resources inside the storage device.
Accordingly, embodiments of the present disclosure are directed to a system and method for implementing address mapping in solid-state storage devices with built-in transparent compression.
A first aspect of the disclosure provides a solid-state storage device, comprising: a compression system that compresses and decompresses data stored in the storage device; and a controller that utilizes a three-tiered logical block address (LBA)/physical block address (PBA) map to map between logical storage and physical storage, wherein the LBA/PBA map includes: a zone layer having a set of zones that expose an LBA address space of the storage device, wherein each zone spans a contiguous region of LBA addresses; a routing layer having a set of trees, wherein each tree is indexed by an LBA address and includes a root node and a set of leaf nodes, wherein each root node is associated with a zone from the zone layer, and each leaf node includes a pointer; and an mpage layer that includes a set of mpages, each mpage pointed to by a pointer from the routing layer, wherein each mpage contains LBA/PBA mapping information for LBAs within a contiguous range of LBAs.
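For purposes of illustration only, the three layers described in this aspect may be sketched in C roughly as follows; the field names, field sizes, and the fixed per-zone LBA span are assumptions made for the sketch rather than features of the disclosure.

#include <stdint.h>

#define ZONE_LBA_SPAN  (1u << 20)    /* assumed: each zone spans 1M contiguous LBAs */

struct mpage;  /* holds LBA/PBA mapping information for a contiguous range of LBAs */

/* Routing-layer leaf node: points to one mpage. */
struct leaf_node {
    uint64_t      start_lba;   /* first LBA covered by the pointed-to mpage */
    struct mpage *mpage;       /* pointer into the mpage layer */
};

/* Routing-layer tree, indexed by LBA; its root node is associated with one zone. */
struct routing_tree {
    struct leaf_node *leaves;      /* leaf nodes, sorted by start_lba */
    uint32_t          num_leaves;
};

/* Zone-layer entry: one contiguous region of the exposed LBA address space. */
struct zone {
    uint64_t             first_lba;   /* first LBA of the contiguous region */
    uint64_t             num_lbas;    /* number of LBAs spanned by the zone */
    struct routing_tree *tree;        /* routing-layer tree rooted at this zone */
};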
A second aspect of the disclosure provides a method, implemented on a solid-state storage device, for mapping between logical block addresses (LBAs) and physical block addresses (PBAs), comprising: receiving a request that specifies an LBA; determining an applicable zone based on the LBA from a set of zones, wherein the set of zones exposes an LBA address space of the storage device; identifying at least one tree from a set of trees having a root node associated with the applicable zone; traversing the at least one tree to identify a set of leaf nodes based on the LBA, wherein each leaf node points to an mpage; and determining corresponding PBA information for the LBA by examining mapping information contained in each mpage.
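Building on the illustrative structures sketched above, the lookup steps of this aspect might be rendered as follows; the zone-indexing arithmetic, the binary search over leaf nodes, and the helper mpage_lookup are assumptions for illustration only.

/* Assumed shape of the PBA information returned to the caller. */
struct pba_info { uint64_t pba; uint32_t offset; uint32_t length; };

/* Assumed helper: scans the map-entries inside one mpage for the given LBA. */
int mpage_lookup(struct mpage *mp, uint64_t lba, struct pba_info *out);

/* Resolve an LBA to its PBA information through the three-layer map. */
int lookup_lba(struct zone *zones, uint64_t lba, struct pba_info *out)
{
    /* 1. Determine the applicable zone (each zone spans a contiguous LBA region). */
    struct zone *z = &zones[lba / ZONE_LBA_SPAN];

    /* 2. Identify the routing tree whose root node is associated with this zone. */
    struct routing_tree *t = z->tree;
    if (!t || t->num_leaves == 0)
        return -1;                               /* no mapping for this LBA yet */

    /* 3. Traverse the tree: binary-search the leaf nodes for the one covering the LBA. */
    uint32_t lo = 0, hi = t->num_leaves;
    while (lo + 1 < hi) {
        uint32_t mid = (lo + hi) / 2;
        if (t->leaves[mid].start_lba <= lba)
            lo = mid;
        else
            hi = mid;
    }

    /* 4. Examine the mapping information in the selected mpage to obtain the PBA. */
    return mpage_lookup(t->leaves[lo].mpage, lba, out);
}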
The numerous advantages of the present disclosure may be better understood by those skilled in the art by reference to the accompanying figures.
Reference will now be made in detail to embodiments of the disclosure, examples of which are illustrated in the accompanying drawings.
The core of an FTL is to maintain the logical-physical mapping between the LBA (logical) storage address space and the PBA (physical) storage address space. In current practice, each LBA address is associated with a 4 KB data block, and correspondingly the physical storage space inside storage devices is partitioned into 4 KB blocks, each of which is associated with a unique PBA address. For conventional storage devices without built-in transparent compression, the 4 KB data block at one LBA address always entirely occupies the 4 KB space at one PBA address. Therefore, storage devices without built-in transparent compression conveniently use a flat LBA-PBA map in which each LBA address has its own unique map entry that records its corresponding PBA address.
Recall that NAND flash memory does not support in-place data updates. Given one entry {Li, Pi} in the LBA-PBA map, in order to update the data block at the LBA Li, we must write the new data block at another PBA Pj. As a result, we must accordingly update the LBA-PBA map entry from {Li, Pi} to {Li, Pj}. Therefore, the LBA-PBA map keeps changing as users keep writing/updating data on storage devices. To improve data access speed, storage devices always try to keep the flat LBA-PBA map entirely in low-latency memory (e.g., DRAM). Therefore, for conventional storage devices, the memory resource usage is proportional to the total number of LBA addresses exposed by the storage device.
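For contrast, the conventional flat map and the out-of-place update described above can be sketched as a simple array indexed by LBA; the following C fragment assumes 4 KB blocks and uses illustrative names.

#include <stdint.h>

#define INVALID_PBA UINT64_MAX

/* Conventional flat LBA-PBA map: one entry per exposed LBA, kept entirely in DRAM,
 * so memory usage grows in proportion to the number of exposed LBA addresses. */
struct flat_map {
    uint64_t *pba_of;    /* pba_of[lba] holds the current PBA, or INVALID_PBA if unwritten */
    uint64_t  num_lbas;
};

/* Out-of-place update: the new 4 KB block has already been written at new_pba, so the
 * entry {Li, Pi} is simply replaced with {Li, Pj}. The old PBA is returned so its
 * physical space can later be reclaimed by garbage collection. */
uint64_t flat_map_update(struct flat_map *m, uint64_t lba, uint64_t new_pba)
{
    uint64_t old_pba = m->pba_of[lba];
    m->pba_of[lba] = new_pba;
    return old_pba;
}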
In the context of storage devices with built-in transparent compression, the 4 KB data block Di at each LBA address is compressed to a data block Ci whose size can be (much) smaller than 4 KB. Meanwhile, each PBA address is always associated with a fixed 4 KB physical storage space inside storage devices. As a result, multiple compressed blocks Ci's could share the same PBA, and one compressed block Ci could span two adjacent PBAs. Let NP denote the total number of PBA addresses (corresponding to the fixed physical NAND flash memory storage capacity inside storage devices), and let NL denote the total number of LBA addresses exposed/supported by storage devices. In order to fully leverage transparent compression to improve the effective storage capacity, NL should be sufficiently larger (e.g., 2× or 4×) than NP, especially in the presence of highly compressible user data.
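As a purely illustrative example (the numbers here are assumptions, not limitations of the disclosure), suppose a device contains NP = one billion physical sectors and each 4 KB block compresses to roughly 2 KB on average; exposing NL = 2×NP LBA addresses then allows the host to store about twice as much logical data on the same physical capacity, whereas exposing only NL = NP LBA addresses would leave roughly half of the compression benefit unused.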
As a result, if storage devices with built-in transparent compression use the conventional flat LBA-PBA map (i.e., each LBA address has its own map entry to hold its corresponding physical storage location), the address map will have a very large size and hence demand a large amount of memory resources. This leads to higher cost and higher energy consumption. Meanwhile, due to runtime compression ratio variation, not all of the NL LBA addresses exposed by the storage device can always be utilized by the host. As a result, the conventional flat LBA-PBA address map can be very inefficient for storage devices with built-in transparent compression.
Embodiments provided herein present a method to implement a low-cost logical-physical address mapping strategy that can reduce the memory usage for storage devices with built-in transparent compression.
Mapping system 20 generally includes write logic for writing data to memory 40, read logic for reading data from memory 40, trim logic 26 that works along with garbage collection system 28 to manage flash memory usage, splitting logic 32 that splits data storage units (i.e., mpages described herein), and a backup system 24 that stores the map 38 into flash memory 40.
Since all the compressed blocks Ci's are stored contiguously over the physical storage space, knowing the position of C0 and the sizes of all the Ci's is sufficient to locate and access any compressed block Ci. As shown in
We convert the map-entry Ej into a simplified data structure that contains (1) a fixed-size (e.g., 2-byte) concatenation pointer, (2) the value of w, which is represented with a fixed number of bytes (e.g., 1 byte), and (3) the sizes of all the w Ci's, each of which is represented with a fixed number of bytes (e.g., 1 byte).
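As an illustrative rendering only, such a simplified map-entry and the offset computation it enables might look as follows in C; the one-byte size granularity and the names used are assumptions.

#include <stdint.h>

/* One simplified map-entry Ej: w compressed blocks C0..C(w-1) stored contiguously,
 * starting at the physical location identified by the concatenation pointer. */
struct map_entry {
    uint16_t concat_ptr;  /* fixed-size (2-byte) concatenation pointer */
    uint8_t  w;           /* number of compressed blocks covered by this entry */
    uint8_t  size[];      /* size of each Ci, one byte each (storage unit is an assumption) */
};

/* Locate the i-th compressed block: since the Ci's are contiguous, its offset from C0
 * is simply the sum of the sizes of the preceding blocks. */
uint32_t compressed_block_offset(const struct map_entry *e, uint8_t i)
{
    uint32_t off = 0;
    for (uint8_t k = 0; k < i; k++)
        off += e->size[k];
    return off;
}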
As illustrated in
As discussed above, we always try to append new map-entries into mpages without modifying existing map-entries, which minimizes the operational complexity and hence the CPU overhead.
When an mpage becomes almost full and does not contain invalid map-entries, we will split this mpage into two new mpages.
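A minimal sketch of this append-then-split handling is given below; the mpage capacity, the "almost full" threshold, and the split_mpage helper are assumptions made for illustration.

#include <stdint.h>
#include <string.h>

#define MPAGE_CAPACITY   4096u                      /* assumed mpage size in bytes */
#define SPLIT_THRESHOLD  (MPAGE_CAPACITY - 64u)     /* assumed "almost full" watermark */

struct mpage {
    uint64_t start_lba;               /* first LBA of the contiguous range covered */
    uint32_t used_bytes;              /* bytes consumed by appended map-entries */
    uint32_t num_invalid;             /* map-entries superseded by later appends */
    uint8_t  entries[MPAGE_CAPACITY]; /* map-entries, appended back to back */
};

/* Assumed helper: redistributes the map-entries into two new mpages, each covering
 * half of the original contiguous LBA range, and updates the routing-layer leaves. */
void split_mpage(struct mpage *mp);

/* Append a new map-entry without modifying any existing map-entry; when the mpage
 * becomes almost full and contains no invalid map-entries, split it in two. */
int mpage_append(struct mpage *mp, const void *entry, uint32_t entry_len)
{
    if (mp->used_bytes + entry_len > MPAGE_CAPACITY)
        return -1;                    /* no room: caller must split or compact first */

    memcpy(&mp->entries[mp->used_bytes], entry, entry_len);
    mp->used_bytes += entry_len;

    if (mp->used_bytes >= SPLIT_THRESHOLD && mp->num_invalid == 0)
        split_mpage(mp);
    return 0;
}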
During runtime, the three-layer address map entirely resides in low-latency memory (e.g., DRAM), which however may be volatile in nature. Therefore, the three-layer address map must be periodically persisted to NAND flash memory. In order to reduce the overhead, we only write modified content in the address map (e.g., modified mpages, modified routing nodes, and/or modified zones) to NAND flash memory, during which all the cross-content pointers (e.g., a pointer in a routing node that points to an mpage) are accordingly updated to reflect the physical location in NAND flash memory. In the case of a graceful shutdown, all the modified content in the address map can be safely written to NAND flash memory. As a result, storage devices can easily reconstruct the three-layer address map in low-latency memory by reading the persisted address map from NAND flash memory. However, in the case of a sudden power failure, storage devices may not be able to safely write all the modified content of the address map to NAND flash memory. As a result, during system recovery, storage devices have to scan/read more data from NAND flash memory in order to correctly reconstruct the entire three-layer address map. In order to reduce the map reconstruction latency, this invention presents a method: when storage devices write modified content of the address map to NAND flash memory, every time after a certain chunk of content (e.g., 512 KB or 1 MB) has been written to NAND flash memory, storage devices append a “summary” meta-page to NAND flash memory, as illustrated in
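The persistence-with-summary scheme described above might be outlined as follows; the chunk size, the meta-page size, and the flash-layer helpers are assumptions for illustration only.

#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define SUMMARY_CHUNK_BYTES  (512u * 1024u)   /* assumed: one summary per 512 KB of map content */
#define META_PAGE_BYTES      4096u            /* assumed size of a "summary" meta-page */

/* Assumed low-level helpers provided by the flash management layer. */
void flash_append(const void *buf, size_t len);                      /* append to NAND flash */
void summary_note(uint8_t *summary, const void *item, size_t len);   /* record item location */

/* Persist the modified pieces of the address map (mpages, routing nodes, zones).
 * After roughly every SUMMARY_CHUNK_BYTES of map content, append a "summary" meta-page
 * describing the preceding chunk, so crash recovery can rebuild the in-memory map from
 * the summaries instead of scanning/reading the whole persisted map region. */
void persist_modified_map(const void **items, const size_t *lens, size_t n)
{
    uint8_t summary[META_PAGE_BYTES] = {0};
    size_t  written_since_summary = 0;

    for (size_t i = 0; i < n; i++) {
        flash_append(items[i], lens[i]);          /* cross-content pointers already updated */
        summary_note(summary, items[i], lens[i]);
        written_since_summary += lens[i];

        if (written_since_summary >= SUMMARY_CHUNK_BYTES) {
            flash_append(summary, sizeof(summary));   /* append the summary meta-page */
            memset(summary, 0, sizeof(summary));
            written_since_summary = 0;
        }
    }
    if (written_since_summary > 0)
        flash_append(summary, sizeof(summary));       /* final summary for the last partial chunk */
}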
It is understood that aspects of the present disclosure may be implemented in any manner, e.g., as a software program, or an integrated circuit board or a controller card that includes a processing core, I/O and processing logic. Aspects may be implemented in hardware or software, or a combination thereof. For example, aspects of the processing logic may be implemented using field programmable gate arrays (FPGAs), ASIC devices, or other hardware-oriented systems.
Aspects may be implemented with a computer program product stored on a computer readable storage medium. The computer readable storage medium can be a tangible device that can retain and store instructions for use by an instruction execution device. The computer readable storage medium may be, for example, but is not limited to, an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. A non-exhaustive list of more specific examples of the computer readable storage medium includes the following: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, etc. A computer readable storage medium, as used herein, is not to be construed as being transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission media (e.g., light pulses passing through a fiber-optic cable), or electrical signals transmitted through a wire.
Computer readable program instructions for carrying out operations of the present disclosure may be assembler instructions, instruction-set-architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, firmware instructions, state-setting data, or either source code or object code written in any combination of one or more programming languages, including an object oriented programming language such as Java, Python, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The computer readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider). In some embodiments, electronic circuitry including, for example, programmable logic circuitry, field-programmable gate arrays (FPGA), or programmable logic arrays (PLA) may execute the computer readable program instructions by utilizing state information of the computer readable program instructions to personalize the electronic circuitry, in order to perform aspects of the present disclosure.
The computer readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. The computer readable program instructions may also be stored in a computer readable storage medium that can direct a computer, a programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer readable storage medium having instructions stored therein comprises an article of manufacture including instructions which implement aspects of the function/act specified in the flowchart and/or block diagram block or blocks.
Aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by hardware and/or computer readable program instructions.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts or carry out combinations of special purpose hardware and computer instructions.
The foregoing description of various aspects of the present disclosure has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the concepts disclosed herein to the precise form disclosed, and obviously, many modifications and variations are possible. Such modifications and variations that may be apparent to an individual skilled in the art are included within the scope of the present disclosure as defined by the accompanying claims.