This disclosure is generally related to the field of data storage. More specifically, this disclosure is related to a method and system for facilitating a physically isolated storage unit for multi-tenancy virtualization.
Today, various storage systems are being used to store and access the ever-increasing amount of digital content. A storage system can include various storage devices which can provide persistent memory, e.g., a solid state drive (SSD) and a hard disk drive (HDD). A cloud service can provide access to a storage system by using virtualization, in which a single physical storage drive can be used by multiple virtual machines (VMs). When a single VM is destroyed, the system may physically remove all data corresponding to the single VM, e.g., to prevent subsequent access to the data associated with the single VM. The performance of the single physical drive may need to be sufficiently reliable to eliminate the long tail of the latency distribution. Furthermore, an accelerated recycling of physical space in the single physical storage drive may extend the usage of the storage drive, which can result in a revenue increase. Additionally, providing reliable performance may be beneficial for fulfilling service level agreements (SLAs).
One current virtualization method involves implementing input/output (I/O) virtualization to provide logical drives for multiple VMs, using single-root I/O virtualization (SRIOV). This method can expose multiple virtual functions (VFs), which can be instantiated by different VMs to form the logical drives. However, this method can result in some constraints, e.g.: data and I/O from different VMs may be stored in the same NAND block or page, which can result in a time-consuming process for data destruction, and can also trigger garbage collection; I/Os from multiple VMs may be placed in a random layout across the physical storage drives, which can create difficulties in balancing the I/O performance among the multiple VMs; the I/O distribution may be spread randomly across the multiple physical storage drives, which can result in hot spots and a traffic imbalance; and a single storage drive may not provide data recovery protection among the multiple physical storage drives.
Thus, while the SRIOV method can provide logical drives for multiple VMs, the above-described constraints can result in a decrease in the efficiency and performance of the overall storage system.
One embodiment provides a system which facilitates organization of data. During operation, the system allocates, to a function associated with a host, a number of block columns to obtain a physical storage space for the function, wherein a block column corresponds to a block from each of a plurality of dies of a non-volatile storage device. In response to processing an incoming host write instruction and an internal background write instruction, the system allocates a first block column to the incoming host write instruction and a second block column to the internal background write instruction, thereby extending a lifespan of the non-volatile storage device by recycling the first block column when deleting a namespace or virtual machine associated with the function.
In some embodiments, the function is a virtual function. In response to receiving a command to delete a virtual machine associated with the virtual function, the system erases the number of block columns of the physical storage space allocated for the virtual function and returns the number of block columns to a block column pool.
In some embodiments, allocating the number of block columns comprises obtaining the number of block columns from a block column pool.
In some embodiments, in response to receiving the incoming host write instruction, the system writes data associated with the host write to at least the first block column allocated to the function.
In some embodiments, the system identifies a sealed block column which is filled with data. The system executes the internal background write instruction as a garbage collection process based on the second block column, by: copying valid data from blocks of the sealed block column to blocks of the second block column; erasing data stored in the blocks of the sealed block column; and returning the sealed block column to a block column pool.
In some embodiments, the non-volatile storage device is one of a plurality of non-volatile storage devices which communicate with a global flash translation layer. The global flash translation layer allocates the number of block columns to the function, and the allocated block columns correspond to at least two of the non-volatile storage devices. The function is one of a plurality of virtual functions to which the global flash translation layer allocates block columns.
In some embodiments, the global flash translation layer maps each virtual function to an allocated physical storage space, and each physical storage space includes block columns corresponding to at least two of the non-volatile storage devices.
In some embodiments, an erasure coding (EC) encoding/decoding module in a controller performs EC encoding/decoding for the functions. Data associated with the function is stored in the allocated number of block columns across the at least two non-volatile storage devices. The system performs, by the EC encoding/decoding module, EC encoding on the data prior to the data being stored in the allocated number of block columns to obtain an EC codeword. The system distributes the EC codeword to be stored in block columns in the allocated number of block columns across the at least two non-volatile storage devices.
In some embodiments, the system divides a physical storage capacity of a non-volatile storage device into a plurality of block groups, wherein a block group comprises a plurality of block columns.
In the figures, like reference numerals refer to the same figure elements.
The following description is presented to enable any person skilled in the art to make and use the embodiments, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the embodiments described herein are not limited to the embodiments shown, but are to be accorded the widest scope consistent with the principles and features disclosed herein.
The embodiments described herein facilitate multi-tenancy virtualization by using physically isolated storage spaces across multiple storage drives.
As described above, virtualization is a technology in which a single physical storage drive can be used by multiple virtual machines (VMs). When a single VM is destroyed, the system may physically remove all data corresponding to the single VM, e.g., to prevent subsequent access to the data associated with the single VM. The performance of the single physical drive may need to be sufficiently reliable to eliminate the long tail of the latency distribution. Furthermore, an accelerated recycling of physical space in the single physical storage drive may extend the usage of the storage drive, which can result in a revenue increase. Additionally, providing reliable performance may be beneficial for fulfilling service level agreements (SLAs).
One current virtualization method involves implementing input/output (I/O) virtualization to provide logical drives for multiple VMs, using single-root I/O virtualization (SRIOV). This method can expose multiple virtual functions (VFs), which can be instantiated by different VMs to form the logical drives. An exemplary system based on the SRIOV method is described below in relation to
However, this method can result in some constraints, e.g.: data and I/O from different VMs may be stored in the same NAND block or page, which can result in a time-consuming process for data destruction, and can also trigger garbage collection; I/Os from multiple VMs may be placed in a random layout across the physical storage drives, which can create difficulties in balancing the I/O performance among the multiple VMs; the I/O distribution may be spread randomly across the multiple physical storage drives, which can result in hot spots and a traffic imbalance; and a single storage drive may not provide data recovery protection among the multiple physical storage drives. Thus, while the SRIOV method can provide logical drives for multiple VMs, the above-described constraints can result in a decrease in the efficiency and performance of the overall storage system.
The embodiments described herein address these issues by providing a system which divides the physical storage capacity of a non-volatile storage drive into block groups which include block columns, where each block column corresponds to a block from a die of the storage drive. For each virtual function (VF) associated with a host (e.g., an incoming host write instruction), the system can allocate a number of block columns, where the allocated number is based on requirements of a respective VF or its associated virtual machine (VM), as described below in relation to
Similar to this allocation of block columns in processing an incoming host write instruction, the system can also allocate block columns to an internal background write instruction, as described below in relation to
Moreover, the division and organization of the physical storage media into the block columns can result in a more efficient destruction of data, which can accelerate the readiness of the physical storage media to provide service to other VMs, as described below in relation to
Thus, by dividing and organizing the physical storage media into block columns which can be allocated to various host applications or VFs, and by allocating block columns to a host write instruction and an internal background write instruction, the described system can provide physically isolated storage spaces which can facilitate a more efficient multi-tenancy virtualization.
A “storage system infrastructure,” “storage infrastructure,” or “storage system” refers to the overall set of hardware and software components used to facilitate storage for a system. A storage system can include multiple clusters of storage servers and other servers. A “storage server” refers to a computing device which can include multiple storage devices or storage drives. A “storage device” or a “storage drive” refers to a device or a drive with a non-volatile memory which can provide persistent storage of data, e.g., a solid state drive (SSD), a hard disk drive (HDD), or a flash-based storage device.
A “non-volatile storage device” refers to a computing device, entity, server, unit, or component which can store data in a persistent or a non-volatile memory. In the embodiments described herein, the non-volatile storage device is depicted as a solid state drive (SSD), which includes a plurality of dies which can be accessed over a plurality of channels, but other non-volatile storage devices can be used.
A “computing device” refers to any server, device, node, entity, drive, or any other entity which can provide any computing capabilities.
A physical storage capacity of a non-volatile storage device can be divided or organized into “block groups.” A block group can include a plurality of “block columns.” A block column can correspond to a block from each of a plurality of dies of the non-volatile storage device.
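The block-group/block-column organization defined above can be sketched in code. The following is a minimal illustrative model (the names, counts, and block-numbering scheme are assumptions for illustration, not part of the disclosure): each block column of a block group takes one block from every die, so a column spans all dies of the device.

```python
# Illustrative sketch of the block-group / block-column layout: a device
# with N dies is organized so that each block column of a block group
# contains one block from every die. Names and sizes are hypothetical.

def build_block_columns(num_dies, num_groups, columns_per_group):
    """Return {(group, column): [(die, block_id), ...]} covering every die."""
    layout = {}
    for g in range(num_groups):
        for c in range(columns_per_group):
            # One block per die forms a single block column.
            block_id = g * columns_per_group + c
            layout[(g, c)] = [(die, block_id) for die in range(num_dies)]
    return layout

layout = build_block_columns(num_dies=4, num_groups=2, columns_per_group=3)
# Each block column spans all 4 dies, and there are 2 x 3 columns in total.
assert all(len(blocks) == 4 for blocks in layout.values())
assert len(layout) == 2 * 3
```

Because a column touches every die, writes to one column can be spread across all channels, which is consistent with the parallelism described for the dies and channels.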
“Allocating” block columns to a function can also be referred to as “assigning,” “mapping,” or “associating” block columns to a function.
A “sealed block column” refers to a block column which is filled with data and in a state which is ready to be recycled. An “open block column” refers to a block column which includes pages which are available to be written to or programmed. An open block column can be associated with a host write instruction (e.g., block column 432 in
A “virtual machine” or “VM” can be associated with a host, and a VM can instantiate a corresponding “virtual function” or “VF.” The embodiments described herein refer to allocating block columns to a function associated with a host, and can also refer to allocating block columns to a virtual machine corresponding to the function, which can be a VF. Some references to a VF or VM may be described as “VF/VM” or “VM/VF.”
Architecture of Exemplary Virtualization in a System in the Prior Art
The method depicted in conventional system 100 can result in some constraints. First, data and I/O from different VMs may be stored in the same NAND block or even NAND page, which can result in a time-consuming process for data destruction, and can also trigger garbage collection. The garbage collection is an internal background write operation which can result in interference with incoming write I/Os. Second, I/Os from multiple VMs may be placed in a random layout across the physical storage drives, which can create difficulties in balancing the I/O performance among the multiple VMs. Third, the distribution of the I/O may not be managed by one physical server for the multiple NVMe SSDs 150, and the usage of physical capacity on neighboring storage drives may differ greatly. Because the I/O distribution may be spread randomly across the multiple physical storage drives, this can result in hot spots and a traffic imbalance. Fourth, a single storage drive may not provide sufficient data recovery protection among the multiple physical storage drives.
Thus, while the SRIOV method can provide logical drives for multiple VMs, the above-described constraints can result in a decrease in the efficiency and performance of the overall storage system.
Physically Isolated Storage Units: Block Groups with Block Columns Across Multiple Dies
The system can divide or organize the physical space of the depicted NAND dies (i.e., 212, 218, 224, 230, 242, 248, 254, and 260) into a plurality of block groups, where a block group can include a plurality of block columns and where each block column corresponds to a block from each of a plurality of dies. The division or organization of the physical storage space of the storage media can be depicted by a communication 268. For example, a block group a 270 can include a block column 1 272 and a block column k 274. Block column 1 272 can correspond to the following blocks: block group a (“Ga”), block 1 (i.e., “Ga1”) 214 of NAND die 212; Ga2 220 of NAND die 218; GaN−1 226 of NAND die 224; and GaN 232 of NAND die 230. Similarly, block column k 274 can correspond to the following blocks: block group z (“Gz”), block 1 (i.e., “Gz1”) 216 of NAND die 212; Gz2 222 of NAND die 218; GzN−1 228 of NAND die 224; and GzN 234 of NAND die 230. Thus, each of the 1 through k block columns of block group a 270 can include N number of blocks from N number of NAND dies. Furthermore, the system can divide or organize the storage space of the physical storage media into a plurality of block groups, e.g., block group a 270 through block group z 280. Similar to block group a 270, block group z 280 can include 1 through k number of block columns, where each block column corresponds to blocks from NAND dies 1-N.
Each block column can be considered a physical storage space, and the system can allocate a certain number of block columns from the same or different block groups to a certain host, application, virtual function, or virtual machine associated with a host, e.g., in handling an incoming host I/O or write instruction. The depicted block groups with the corresponding block columns can thus form a physically isolated storage space. Any number of block columns from any number of block groups can also form a physically isolated storage space. For example, the NAND blocks from Ga1 to GaN from N different NAND dies on the N channels can form block column 1 272 of block group a 270. The system may allocate each block column only to one VF/VM, and can also allocate multiple block columns to one VF/VM. The system can allocate multiple block columns for multiple VFs/VMs in parallel. Moreover, the system can allocate block columns from the same block group or from different block groups. That is, the allocated block columns may or may not be associated with the same block group.
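The single-owner allocation just described can be sketched as follows. This is a hedged, minimal model (class and variable names are assumptions): whole block columns are handed out from a free pool, and each column records exactly one owning VF.

```python
# Minimal sketch (assumed names) of allocating whole block columns from a
# free pool to a virtual function, so each column belongs to exactly one VF.

class BlockColumnPool:
    def __init__(self, columns):
        self.free = list(columns)   # erased columns, ready to allocate
        self.owner = {}             # column -> the single VF that holds it

    def allocate(self, vf, count):
        """Grant `count` free columns to `vf`; a column is never shared."""
        if count > len(self.free):
            raise RuntimeError("not enough free block columns")
        granted = [self.free.pop() for _ in range(count)]
        for col in granted:
            self.owner[col] = vf
        return granted

pool = BlockColumnPool(columns=[f"col{i}" for i in range(8)])
space = pool.allocate("VF1", 3)     # VF1's physically isolated space
assert len(space) == 3
assert all(pool.owner[c] == "VF1" for c in space)
assert len(pool.free) == 5
```

Because a column is granted to at most one VF, the resulting physical storage space holds only that VF's data, which is the property the isolation argument relies on.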
While processing the incoming host I/O or write instruction, the system can also perform background write instructions by using the physically isolated storage spaces. For example, in a communication 288, the system can “pair” two different types of block columns. The system can allocate a column i 292, which corresponds to blocks 292.1 to 292.N, to a host I/O instruction 291 (and its corresponding VF/VM). The system can also allocate a column i+1 294 to a garbage collection (“GC”) process 293 (i.e., an internal background write instruction). While environment 200 depicts the allocated pair of block columns as neighboring or sequential (i and i+1) and in a 1-to-1 ratio, the system can allocate any two block columns to these two different processes, and the ratio of allocated block columns can be other than 1-to-1. Different scenarios of application and system usage may result in various combinations (i.e., ratios) in the allocation of host I/O block columns (e.g., as allocated to host I/O 291) and background write block columns (e.g., as allocated to GC 293).
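The pairing of the two write streams can be sketched as a small routing step. This is an illustrative assumption about the control path (the stream names and column labels are hypothetical): host writes and background/GC writes are steered to different open columns so the two streams never share a block column.

```python
# Illustrative sketch: host writes and background (GC) writes are steered
# to different open block columns, so the streams never mix in one column.

def route_write(stream, open_columns):
    """Return the open block column serving the given write stream.

    `open_columns` maps a stream kind ('host' or 'gc') to its open column;
    the mapping and its 1-to-1 shape here are illustrative -- the ratio of
    host columns to GC columns can differ in practice.
    """
    if stream not in open_columns:
        raise ValueError("unknown write stream: " + stream)
    return open_columns[stream]

open_columns = {"host": "column_i", "gc": "column_i_plus_1"}
assert route_write("host", open_columns) == "column_i"
assert route_write("gc", open_columns) == "column_i_plus_1"
```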
Exemplary Environment for Virtualization and Block Column Allocation
The system can allocate block columns to each VF based on various conditions, including but not limited to: a size of incoming data associated with a VF/VM; a demand of the VM; a requested bandwidth, storage capacity, or other factor associated with and received from the VM; any performance, latency, or bandwidth requirements associated with a VF/VM; a historical, current, or predicted amount of physical storage space required by a VF/VM; and any other factor which can affect the amount of physical storage space required by a VF/VM. Each physical storage space can thus be a flexible domain with a variable number of block columns based on these exemplary factors and the utilization of a given VM. The system can dynamically allocate block columns to a VM/VF based on these factors, and can also adjust the allocated number of block columns based on these factors and other real-time factors or conditions detected, determined, or monitored by the system.
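One of the sizing inputs above, a requested storage capacity, translates into a whole number of block columns. The arithmetic can be sketched as follows; the block and column sizes used here are purely illustrative assumptions, not values from the disclosure.

```python
# Illustrative calculation (sizes assumed) of how many block columns a
# VF's requested capacity requires, rounding up to whole columns.
import math

def columns_needed(requested_bytes, blocks_per_column, block_bytes):
    """Smallest number of block columns covering the requested capacity."""
    column_bytes = blocks_per_column * block_bytes
    return math.ceil(requested_bytes / column_bytes)

# Example: a 10 GiB request against columns of 64 blocks of 16 MiB each
# (one column = 1 GiB), so ten columns are needed.
assert columns_needed(10 * 2**30, 64, 16 * 2**20) == 10
```

Since the allocation is dynamic, the same calculation can be re-run as the VM's demand changes and columns added to or returned from its physical storage space.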
When the system receives incoming new data to be stored in the non-volatile storage device, or when certain background operations result in moving data to a new block column, the system can allocate a free block column (e.g., with all data erased) from a block column pool 390. The system can allocate the block columns to each VF from block column pool 390, as shown by block columns 392 and 394. For example: the system can allocate, from block column pool 390, block columns 308, 310, 312, and 314 to obtain a physical storage space 306 for VF 304; the system can allocate, from block column pool 390, block columns 328, 330, 332, 334, 336, and 338 to obtain a physical storage space 326 for VF 324; and the system can allocate, from block column pool 390, block columns 348 and 350 to obtain a physical storage space 346 for VF 344.
Furthermore, the system can perform a garbage collection process 362 on an identified block column, and upon recycling and erasing the data in the blocks corresponding to the identified block column, the system can return the recycled block column back to block column pool 390 for future use. Similarly, the system can perform a namespace destruction 364, which can involve removing a VM and deleting all data associated with the VM. Upon deleting the data in the blocks of all the block columns associated with the given namespace (i.e., all data associated with a given VM that is stored in the allocated block columns of a physical storage space for the given VM), the system can also return those block columns back to block column pool 390 for future use. Because each physical storage space contains only data corresponding to its given VM (and does not contain any data corresponding to other VMs), all of the data associated with a given VM is stored in its respective physical storage space, which eliminates the need to implement a garbage collection process on the respective physical storage space or on any other physical storage spaces. This is an improvement over the conventional system, in which overprovisioned space can result in more complicated data erasure procedures due to the data being scattered in various locations in the physical storage media (as depicted above in relation to the prior art environment of
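The namespace-destruction path above reduces to erasing the VM's columns and recycling them, precisely because a physical storage space holds only that one VM's data. A hedged sketch (all structures and names are illustrative):

```python
# Sketch of namespace/VM destruction: since a physical storage space holds
# only one VM's data, deletion is a per-column erase plus a return to the
# pool -- no garbage collection over shared blocks is needed.

def destroy_namespace(vm, spaces, pool):
    """Erase every block column allocated to `vm` and recycle it."""
    for column in spaces.pop(vm, []):
        column["data"].clear()   # physical erase of the column's blocks
        pool.append(column)      # column becomes free for future use

pool = []
spaces = {"VM1": [{"data": ["a", "b"]}, {"data": ["c"]}]}
destroy_namespace("VM1", spaces, pool)
assert "VM1" not in spaces
assert len(pool) == 2 and all(not c["data"] for c in pool)
```

In the scattered-data layout of the conventional system, the same deletion would instead require locating and compacting the VM's pages across shared blocks, which is the costlier procedure the description contrasts against.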
Exemplary Block Columns while Processing Incoming Host Write and Internal Background Process
As described above in relation to
During operation, the system can identify block column 412, in which all pages have been programmed, as a block column to be sealed, and the system can seal identified block column 412. The system can execute an internal background write instruction (e.g., a garbage collection process) based on block column 422 (associated with GC write 420). The system can copy valid data from blocks 414-418 of sealed block column 412 to available blocks 424-428 of open block column 422 (via, e.g., communications 442, 444, and 446). The system can erase the data stored in blocks 414-418 of sealed block column 412, and can return sealed block column 412 back to a block column pool (not shown in
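The copy-erase-recycle sequence of the garbage collection step can be sketched as follows (the page/validity representation is an assumption for illustration):

```python
# Sketch of the background garbage-collection step: valid pages are copied
# from a sealed column into the open GC column, the sealed column's blocks
# are erased, and the column returns to the free pool.

def collect(sealed, gc_open, pool):
    """Relocate valid data out of `sealed`, then erase and recycle it."""
    gc_open.extend(page for page, valid in sealed["pages"] if valid)
    sealed["pages"].clear()      # erase the sealed column's blocks
    pool.append(sealed)          # recycle the column into the pool

sealed = {"pages": [("p0", True), ("p1", False), ("p2", True)]}
gc_open, pool = [], []
collect(sealed, gc_open, pool)
assert gc_open == ["p0", "p2"]   # only valid pages were relocated
assert sealed["pages"] == [] and pool == [sealed]
```

Note that the invalid page (`p1`) is simply dropped by the erase, which is what makes recycling a sealed column reclaim space.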
In addition, the system may seal an open block column upon detecting a certain condition, e.g., detecting that no data has been programmed or written to the open block column within a predetermined time interval or period, or determining that the amount of data stored in the blocks of a block column is greater than a predetermined threshold.
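These two sealing conditions can be sketched as a simple predicate; the threshold values below are illustrative assumptions, since the disclosure leaves them as predetermined parameters.

```python
# Sketch of the sealing conditions above (threshold values are assumed):
# a column may be sealed when it has been idle past a time limit, or when
# its fill level exceeds a threshold.

def should_seal(idle_seconds, fill_ratio,
                max_idle=60.0, fill_threshold=0.95):
    """True if either sealing condition holds for an open block column."""
    return idle_seconds >= max_idle or fill_ratio >= fill_threshold

assert should_seal(idle_seconds=120.0, fill_ratio=0.10)   # idle too long
assert should_seal(idle_seconds=0.0, fill_ratio=0.99)     # nearly full
assert not should_seal(idle_seconds=5.0, fill_ratio=0.50)
```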
Mapping of Virtual Functions to Block Columns Across Storage Devices
When the capacity of a single storage drive is divided into several physically isolated storage spaces (as described above), one challenge is the fixed upper limit on the physical capacity which can be obtained by a single VM. One solution for overcoming such a fixed upper limit is to extend the physical storage space associated with or allocated to a VM by distributing the physical storage spaces across multiple storage devices.
Furthermore, each SSD can implement its own block column pool, and global flash translation layer 512 can collectively manage all of block column pools 528, 538, and 548 from, respectively, each of SSDs 520, 530, and 540. In some embodiments, global flash translation layer 512 can implement the block column pools for each SSD.
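The global flash translation layer's cross-drive allocation can be sketched as drawing columns from several per-drive pools. This is a hedged model (the round-robin policy and all names are assumptions; the disclosure does not fix a placement policy), showing how one VF's space comes to span at least two devices.

```python
# Illustrative sketch of a global flash translation layer drawing block
# columns from several drives' pools, so that one VF's physical storage
# space spans multiple devices. Policy and names are hypothetical.

def allocate_across_drives(pools, count):
    """Round-robin `count` free columns across per-drive pools."""
    granted, drives = [], sorted(pools)
    i = 0
    while len(granted) < count:
        drive = drives[i % len(drives)]
        if pools[drive]:
            granted.append((drive, pools[drive].pop()))
        i += 1
        if i > count * len(drives):   # every pool exhausted
            raise RuntimeError("not enough free block columns")
    return granted

pools = {"ssd0": ["c0", "c1"], "ssd1": ["c2"], "ssd2": ["c3"]}
space = allocate_across_drives(pools, 3)
# The granted space spans more than one drive.
assert len({drive for drive, _ in space}) >= 2
```

Spreading a VF's columns over the drives is also what later lets the erasure-coded groups for that VF be striped across devices.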
Exemplary Erasure Code Encoding
The system can provide a further improvement to the efficiency of the utilization of the physical storage capacity and the reliability of the data by protecting the blocks of data in a physical storage space with erasure coding. If a single storage drive is defective, the number of NAND blocks in that single storage drive should be less than the recovery strength of the erasure codeword. In other words, the constraint is that the number of NAND blocks in a single SSD which belong to a single EC group is less than the maximum recovery capability of the EC codeword. Otherwise, if that single SSD fails, the system cannot recover the data on that single SSD.
For example, given one (n,k) erasure coding scheme, the system can maximally tolerate the failure of n-k blocks. That is, the number of NAND blocks in a single storage drive which belong to the same allocated physical storage space must be less than n-k. If the entire storage drive fails, the system can still perform an EC-based recovery of the data. This can result in a more powerful and flexible protection for data recovery, as compared to the traditional or conventional RAID process. In the embodiments described herein, each EC group can allow a maximum of n-k defective blocks at random locations within the respective EC group. This can result in an extension of the capacity, and can further spread the I/O access across the multiple drives while constructing a powerful EC protection, as described below in relation to
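The (n,k) placement constraint can be checked with simple arithmetic. The sketch below (scheme parameters and drive names are illustrative) tests whether losing any single drive loses fewer than n-k of a codeword's blocks, which is the condition for surviving a whole-drive failure.

```python
# Worked check of the (n, k) constraint: with n total blocks per codeword
# and k data blocks, at most n - k failures are recoverable, so the number
# of a codeword's blocks on any single drive must stay below n - k.

def placement_survives_drive_loss(n, k, blocks_per_drive):
    """True if losing any one drive loses fewer than n - k codeword blocks."""
    return max(blocks_per_drive.values()) < n - k

# A (12, 8) scheme tolerates 4 failures: 3 blocks on one drive is safe,
# but 5 on one drive would exceed the recovery capability.
assert placement_survives_drive_loss(12, 8, {"ssd0": 3, "ssd1": 3, "ssd2": 3})
assert not placement_survives_drive_loss(12, 8, {"ssd0": 5, "ssd1": 4})
```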
Storage controller 602 can also include an erasure coding (EC) encoder module 612 and an erasure coding (EC) decoder module 614. During operation, the system can allocate block columns from at least two of SSDs 520-540 to each VF. EC encoder 612 can perform EC encoding on data received from or associated with each VF, e.g., VF 1 606. The system can write the EC-encoded data to block columns which belong to the physical storage spaces allocated to VF 1 (i.e., physical storage spaces 522, 532, and 542), which results in the EC-encoded data spread out in multiple groups across the SSDs. That is, the system can write the EC-encoded data as EC groups across the SSDs. For example, an EC group 1 622 can be indicated with right-slanting diagonal lines, and can be stored across SSDs 520, 530, and 540. Similarly, an EC group 2 624 can be indicated with vertical lines, and can be stored similarly in SSDs 520, 530, and 540. Finally, an EC group 3 626 can be indicated with a cross-hatch pattern, and can be stored similarly in SSDs 520, 530, and 540. Thus, distributing the EC groups across the plurality of SSDs in the corresponding physical storage spaces allocated to a certain VF can result in an improvement in capacity extension and data protection.
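The striping of an EC group's blocks across the allocated spaces on the SSDs can be sketched as follows. This is an assumed layout for illustration (a simple modulo placement; the disclosure does not prescribe the exact distribution), showing that no single drive accumulates a disproportionate share of one codeword.

```python
# Sketch (assumed layout) of striping one EC codeword's blocks across the
# physical storage spaces allocated to a VF on several SSDs.
from collections import Counter

def stripe_codeword(codeword_blocks, drives):
    """Assign block i of the codeword to drive i mod len(drives)."""
    return {i: drives[i % len(drives)] for i in range(len(codeword_blocks))}

placement = stripe_codeword(codeword_blocks=list(range(6)),
                            drives=["ssd0", "ssd1", "ssd2"])
# Blocks alternate across the three drives...
assert placement[0] == "ssd0" and placement[1] == "ssd1"
assert placement[5] == "ssd2"
# ...and no drive holds more than a third (2) of this codeword's blocks.
assert max(Counter(placement.values()).values()) == 2
```

Keeping each drive's share of a codeword small is exactly what the n-k placement constraint above requires, so striking and constraint-checking work together.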
Exemplary Method for Facilitating Organization of Data
In processing the incoming host write instruction, the system allocates a first block column to the incoming host write instruction (operation 732). In response to receiving the incoming host write instruction, the system writes data associated with the host write instruction to at least the first block column allocated to the function (operation 734). In processing the internal background write instruction, the system identifies a sealed block column which is filled with data (operation 742). The system allocates a second block column to the internal background write instruction (operation 744). The system executes the internal background write instruction as a garbage collection process based on the second block column (operation 746). To execute the garbage collection process, the system can copy valid data from blocks of the sealed block column to blocks of the second block column, erase the data stored in the blocks of the sealed block column, and return the sealed block column to the block column pool (not shown).
Exemplary Computer System and Apparatus
Content-processing system 818 can include instructions, which when executed by computer system 800, can cause computer system 800 or processor 802 to perform methods and/or processes described in this disclosure. Specifically, content-processing system 818 can include instructions for receiving and transmitting data packets, including data to be read or written, an input/output (I/O) request (e.g., a read request or a write request), metadata, a logical block address (LBA), a physical block address (PBA), and an indicator of a VF, a VM, a block group, a block column, or a block (communication module 820).
Content-processing system 818 can include instructions for dividing a physical storage capacity of a non-volatile storage device into a plurality of block groups, wherein a block group comprises a plurality of block columns, and wherein a block column corresponds to a block from each of a plurality of dies of the non-volatile storage device (physical capacity-dividing module 822). Content-processing system 818 can include instructions for allocating, to a function associated with a host, a number of block columns to obtain a physical storage space for the function (block column-allocating module 824). Content-processing system 818 can include instructions for, in response to processing an incoming host write instruction and an internal background write instruction (host write-processing module 826 and background write-processing module 828), allocating a first block column to the incoming host write instruction and a second block column to the internal background write instruction (block column-allocating module 824).
Content-processing system 818 can include instructions for, in response to receiving a command to delete a namespace or virtual machine associated with the virtual function (block column-recycling module 830 and block column-erasing module 832): erasing the number of block columns of the physical storage space allocated for the virtual function (block column-erasing module 832); and returning the number of block columns to the block column pool (block column-allocating module 824).
Data 834 can include any data that is required as input or generated as output by the methods and/or processes described in this disclosure. Specifically, data 834 can store at least: data; a request; a logical block address (LBA); a physical block address (PBA); a mapping between a virtual machine (VM) and a virtual function (VF); a mapping between a VM and one or more physical storage spaces; an indicator of a physical storage space which includes a number of block columns; an indicator of a block group(s) or block column(s) which have been allocated to a given VM or corresponding VF; a write instruction; an incoming host write instruction; an internal background write instruction; a lifespan of a non-volatile storage device; a block column; a block of data; a page of data; an indicator of whether data is valid or invalid; an indicator of whether a block column is sealed or open, and whether the block column is associated with or assigned to a host write or a background write; a namespace corresponding to a function or a virtual function; a command to delete a VM associated with a VF; a block column pool; an indicator of a global flash translation layer; a global block column pool; a block column pool associated with or implemented by a single storage device; erasure code (EC) encoded or decoded data; an EC codeword; and a distributed EC codeword.
Apparatus 900 can comprise modules or units 902-914 which are configured to perform functions or operations similar to modules 820-832 of computer system 800 of
The data structures and code described in this detailed description are typically stored on a computer-readable storage medium, which may be any device or medium that can store code and/or data for use by a computer system. The computer-readable storage medium includes, but is not limited to, volatile memory, non-volatile memory, magnetic and optical storage devices such as disk drives, magnetic tape, CDs (compact discs), DVDs (digital versatile discs or digital video discs), or other media capable of storing computer-readable code and/or data now known or later developed.
The methods and processes described in the detailed description section can be embodied as code and/or data, which can be stored in a computer-readable storage medium as described above. When a computer system reads and executes the code and/or data stored on the computer-readable storage medium, the computer system performs the methods and processes embodied as data structures and code and stored within the computer-readable storage medium.
Furthermore, the methods and processes described above can be included in hardware modules. For example, the hardware modules can include, but are not limited to, application-specific integrated circuit (ASIC) chips, field-programmable gate arrays (FPGAs), and other programmable-logic devices now known or later developed. When the hardware modules are activated, the hardware modules perform the methods and processes included within the hardware modules.
The foregoing embodiments described herein have been presented for purposes of illustration and description only. They are not intended to be exhaustive or to limit the embodiments described herein to the forms disclosed. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art. Additionally, the above disclosure is not intended to limit the embodiments described herein. The scope of the embodiments described herein is defined by the appended claims.
Number | Name | Date | Kind |
---|---|---|---|
3893071 | Bossen | Jul 1975 | A |
4562494 | Bond | Dec 1985 | A |
4718067 | Peters | Jan 1988 | A |
4775932 | Oxley | Oct 1988 | A |
4858040 | Hazebrouck | Aug 1989 | A |
5394382 | Hu | Feb 1995 | A |
5602693 | Brunnett | Feb 1997 | A |
5715471 | Otsuka | Feb 1998 | A |
5732093 | Huang | Mar 1998 | A |
5802551 | Komatsu | Sep 1998 | A |
5930167 | Lee | Jul 1999 | A |
6098185 | Wilson | Aug 2000 | A |
6148377 | Carter | Nov 2000 | A |
6226650 | Mahajan et al. | May 2001 | B1 |
6243795 | Yang | Jun 2001 | B1 |
6457104 | Tremaine | Sep 2002 | B1 |
6658478 | Singhal | Dec 2003 | B1 |
6795894 | Neufeld | Sep 2004 | B1 |
7351072 | Muff | Apr 2008 | B2 |
7565454 | Zuberi | Jul 2009 | B2 |
7599139 | Bombet | Oct 2009 | B1 |
7953899 | Hooper | May 2011 | B1 |
7958433 | Yoon | Jun 2011 | B1 |
8085569 | Kim | Dec 2011 | B2 |
8144512 | Huang | Mar 2012 | B2 |
8166233 | Schibilla | Apr 2012 | B2 |
8260924 | Koretz | Sep 2012 | B2 |
8281061 | Radke | Oct 2012 | B2 |
8452819 | Sorenson, III | May 2013 | B1 |
8516284 | Chan | Aug 2013 | B2 |
8527544 | Colgrove | Sep 2013 | B1 |
8751763 | Ramarao | Jun 2014 | B1 |
8819367 | Fallone | Aug 2014 | B1 |
8825937 | Atkisson | Sep 2014 | B2 |
8832688 | Tang | Sep 2014 | B2 |
8868825 | Hayes | Oct 2014 | B1 |
8904061 | O'Brien, III | Dec 2014 | B1 |
8949208 | Xu | Feb 2015 | B1 |
9015561 | Hu | Apr 2015 | B1 |
9031296 | Kaempfer | May 2015 | B2 |
9043545 | Kimmel | May 2015 | B2 |
9088300 | Chen | Jul 2015 | B1 |
9092223 | Pani | Jul 2015 | B1 |
9129628 | Fallone | Sep 2015 | B1 |
9141176 | Chen | Sep 2015 | B1 |
9208817 | Li | Dec 2015 | B1 |
9213627 | Van Acht | Dec 2015 | B2 |
9280472 | Dang | Mar 2016 | B1 |
9280487 | Candelaria | Mar 2016 | B2 |
9311939 | Malina | Apr 2016 | B1 |
9336340 | Dong | May 2016 | B1 |
9436595 | Benitez | Sep 2016 | B1 |
9495263 | Pang | Nov 2016 | B2 |
9529601 | Dharmadhikari | Dec 2016 | B1 |
9529670 | O'Connor | Dec 2016 | B2 |
9575982 | Sankara Subramanian | Feb 2017 | B1 |
9588698 | Karamcheti | Mar 2017 | B1 |
9588977 | Wang | Mar 2017 | B1 |
9607631 | Rausch | Mar 2017 | B2 |
9671971 | Trika | Jun 2017 | B2 |
9747202 | Shaharabany | Aug 2017 | B1 |
9852076 | Garg | Dec 2017 | B1 |
9875053 | Frid | Jan 2018 | B2 |
9912530 | Singatwaria | Mar 2018 | B2 |
9946596 | Hashimoto | Apr 2018 | B2 |
10013169 | Fisher | Jul 2018 | B2 |
10199066 | Feldman | Feb 2019 | B1 |
10229735 | Natarajan | Mar 2019 | B1 |
10235198 | Qiu | Mar 2019 | B2 |
10268390 | Warfield | Apr 2019 | B2 |
10318467 | Barzik | Jun 2019 | B2 |
10361722 | Lee | Jul 2019 | B2 |
10437670 | Koltsidas | Oct 2019 | B1 |
10459663 | Agombar | Oct 2019 | B2 |
10642522 | Li | May 2020 | B2 |
10649657 | Zaidman | May 2020 | B2 |
10678432 | Dreier | Jun 2020 | B1 |
10756816 | Dreier | Aug 2020 | B1 |
10928847 | Suresh | Feb 2021 | B2 |
11023150 | Pletka | Jun 2021 | B2 |
11068165 | Sharon | Jul 2021 | B2 |
11138124 | Tomic | Oct 2021 | B2 |
20010032324 | Slaughter | Oct 2001 | A1 |
20020010783 | Primak | Jan 2002 | A1 |
20020039260 | Kilmer | Apr 2002 | A1 |
20020073358 | Atkinson | Jun 2002 | A1 |
20020095403 | Chandrasekaran | Jul 2002 | A1 |
20020112085 | Berg | Aug 2002 | A1 |
20020161890 | Chen | Oct 2002 | A1 |
20030074319 | Jaquette | Apr 2003 | A1 |
20030145274 | Hwang | Jul 2003 | A1 |
20030163594 | Aasheim | Aug 2003 | A1 |
20030163633 | Aasheim | Aug 2003 | A1 |
20030217080 | White | Nov 2003 | A1 |
20040010545 | Pandya | Jan 2004 | A1 |
20040066741 | Dinker | Apr 2004 | A1 |
20040103238 | Avraham | May 2004 | A1 |
20040143718 | Chen | Jul 2004 | A1 |
20040255171 | Zimmer | Dec 2004 | A1 |
20040267752 | Wong | Dec 2004 | A1 |
20040268278 | Hoberman | Dec 2004 | A1 |
20050038954 | Saliba | Feb 2005 | A1 |
20050097126 | Cabrera | May 2005 | A1 |
20050138325 | Hofstee | Jun 2005 | A1 |
20050144358 | Conley | Jun 2005 | A1 |
20050149827 | Lambert | Jul 2005 | A1 |
20050174670 | Dunn | Aug 2005 | A1 |
20050177672 | Rao | Aug 2005 | A1 |
20050177755 | Fung | Aug 2005 | A1 |
20050195635 | Conley | Sep 2005 | A1 |
20050235067 | Creta | Oct 2005 | A1 |
20050235171 | Igari | Oct 2005 | A1 |
20060031709 | Hiraiwa | Feb 2006 | A1 |
20060101197 | Georgis | May 2006 | A1 |
20060156012 | Beeson | Jul 2006 | A1 |
20060184813 | Bui | Aug 2006 | A1 |
20070033323 | Gorobets | Feb 2007 | A1 |
20070061502 | Lasser | Mar 2007 | A1 |
20070101096 | Gorobets | May 2007 | A1 |
20070250756 | Gower | Oct 2007 | A1 |
20070266011 | Rohrs | Nov 2007 | A1 |
20070283081 | Lasser | Dec 2007 | A1 |
20070283104 | Wellwood | Dec 2007 | A1 |
20070285980 | Shimizu | Dec 2007 | A1 |
20080034154 | Lee | Feb 2008 | A1 |
20080065805 | Wu | Mar 2008 | A1 |
20080082731 | Karamcheti | Apr 2008 | A1 |
20080112238 | Kim | May 2008 | A1 |
20080163033 | Yim | Jul 2008 | A1 |
20080301532 | Uchikawa | Dec 2008 | A1 |
20090006667 | Lin | Jan 2009 | A1 |
20090089544 | Liu | Apr 2009 | A1 |
20090110078 | Crinon | Apr 2009 | A1 |
20090113219 | Aharonov | Apr 2009 | A1 |
20090125788 | Wheeler | May 2009 | A1 |
20090183052 | Kanno | Jul 2009 | A1 |
20090254705 | Abali | Oct 2009 | A1 |
20090282275 | Yermalayeu | Nov 2009 | A1 |
20090287956 | Flynn | Nov 2009 | A1 |
20090307249 | Koifman | Dec 2009 | A1 |
20090307426 | Galloway | Dec 2009 | A1 |
20090310412 | Jang | Dec 2009 | A1 |
20100031000 | Flynn | Feb 2010 | A1 |
20100169470 | Takashige | Jul 2010 | A1 |
20100217952 | Iyer | Aug 2010 | A1 |
20100229224 | Etchegoyen | Sep 2010 | A1 |
20100241848 | Smith | Sep 2010 | A1 |
20100321999 | Yoo | Dec 2010 | A1 |
20100325367 | Kornegay | Dec 2010 | A1 |
20100332922 | Chang | Dec 2010 | A1 |
20110031546 | Uenaka | Feb 2011 | A1 |
20110055458 | Kuehne | Mar 2011 | A1 |
20110055471 | Thatcher | Mar 2011 | A1 |
20110060722 | Li | Mar 2011 | A1 |
20110072204 | Chang | Mar 2011 | A1 |
20110099418 | Chen | Apr 2011 | A1 |
20110153903 | Hinkle | Jun 2011 | A1 |
20110161784 | Selinger | Jun 2011 | A1 |
20110191525 | Hsu | Aug 2011 | A1 |
20110218969 | Anglin | Sep 2011 | A1 |
20110231598 | Hatsuda | Sep 2011 | A1 |
20110239083 | Kanno | Sep 2011 | A1 |
20110252188 | Weingarten | Oct 2011 | A1 |
20110258514 | Lasser | Oct 2011 | A1 |
20110289263 | McWilliams | Nov 2011 | A1 |
20110289280 | Koseki | Nov 2011 | A1 |
20110292538 | Haga | Dec 2011 | A1 |
20110296411 | Tang | Dec 2011 | A1 |
20110299317 | Shaeffer | Dec 2011 | A1 |
20110302353 | Confalonieri | Dec 2011 | A1 |
20120017037 | Riddle | Jan 2012 | A1 |
20120039117 | Webb | Feb 2012 | A1 |
20120084523 | Littlefield | Apr 2012 | A1 |
20120089774 | Kelkar | Apr 2012 | A1 |
20120096330 | Przybylski | Apr 2012 | A1 |
20120117399 | Chan | May 2012 | A1 |
20120147021 | Cheng | Jun 2012 | A1 |
20120151253 | Horn | Jun 2012 | A1 |
20120159099 | Lindamood | Jun 2012 | A1 |
20120159289 | Piccirillo | Jun 2012 | A1 |
20120173792 | Lassa | Jul 2012 | A1 |
20120203958 | Jones | Aug 2012 | A1 |
20120210095 | Nellans | Aug 2012 | A1 |
20120233523 | Krishnamoorthy | Sep 2012 | A1 |
20120246392 | Cheon | Sep 2012 | A1 |
20120278579 | Goss | Nov 2012 | A1 |
20120284587 | Yu | Nov 2012 | A1 |
20120324312 | Moyer | Dec 2012 | A1 |
20120331207 | Lassa | Dec 2012 | A1 |
20130013880 | Tashiro | Jan 2013 | A1 |
20130016970 | Koka | Jan 2013 | A1 |
20130018852 | Barton | Jan 2013 | A1 |
20130024605 | Sharon | Jan 2013 | A1 |
20130054822 | Mordani | Feb 2013 | A1 |
20130061029 | Huff | Mar 2013 | A1 |
20130073798 | Kang | Mar 2013 | A1 |
20130080391 | Raichstein | Mar 2013 | A1 |
20130145085 | Yu | Jun 2013 | A1 |
20130145089 | Eleftheriou | Jun 2013 | A1 |
20130151759 | Shim | Jun 2013 | A1 |
20130159251 | Skrenta | Jun 2013 | A1 |
20130159723 | Brandt | Jun 2013 | A1 |
20130166820 | Batwara | Jun 2013 | A1 |
20130173845 | Aslam | Jul 2013 | A1 |
20130191601 | Peterson | Jul 2013 | A1 |
20130205183 | Fillingim | Aug 2013 | A1 |
20130219131 | Alexandron | Aug 2013 | A1 |
20130227347 | Cho | Aug 2013 | A1 |
20130238955 | D Abreu | Sep 2013 | A1 |
20130254622 | Kanno | Sep 2013 | A1 |
20130318283 | Small | Nov 2013 | A1 |
20130318395 | Kalavade | Nov 2013 | A1 |
20130329492 | Yang | Dec 2013 | A1 |
20140006688 | Yu | Jan 2014 | A1 |
20140019650 | Li | Jan 2014 | A1 |
20140025638 | Hu | Jan 2014 | A1 |
20140082273 | Segev | Mar 2014 | A1 |
20140082412 | Matsumura | Mar 2014 | A1 |
20140095769 | Borkenhagen | Apr 2014 | A1 |
20140095827 | Wei | Apr 2014 | A1 |
20140108414 | Stillerman | Apr 2014 | A1 |
20140108891 | Strasser | Apr 2014 | A1 |
20140164447 | Tarafdar | Jun 2014 | A1 |
20140164879 | Tam | Jun 2014 | A1 |
20140181532 | Camp | Jun 2014 | A1 |
20140195564 | Talagala | Jul 2014 | A1 |
20140215129 | Kuzmin | Jul 2014 | A1 |
20140223079 | Zhang | Aug 2014 | A1 |
20140233950 | Luo | Aug 2014 | A1 |
20140250259 | Ke | Sep 2014 | A1 |
20140279927 | Constantinescu | Sep 2014 | A1 |
20140304452 | De La Iglesia | Oct 2014 | A1 |
20140310574 | Yu | Oct 2014 | A1 |
20140359229 | Cota-Robles | Dec 2014 | A1 |
20140365707 | Talagala | Dec 2014 | A1 |
20150019798 | Huang | Jan 2015 | A1 |
20150082317 | You | Mar 2015 | A1 |
20150106556 | Yu | Apr 2015 | A1 |
20150106559 | Cho | Apr 2015 | A1 |
20150121031 | Feng | Apr 2015 | A1 |
20150142752 | Chennamsetty | May 2015 | A1 |
20150143030 | Gorobets | May 2015 | A1 |
20150199234 | Choi | Jul 2015 | A1 |
20150227316 | Warfield | Aug 2015 | A1 |
20150234845 | Moore | Aug 2015 | A1 |
20150269964 | Fallone | Sep 2015 | A1 |
20150277937 | Swanson | Oct 2015 | A1 |
20150286477 | Mathur | Oct 2015 | A1 |
20150294684 | Qjang | Oct 2015 | A1 |
20150301964 | Brinicombe | Oct 2015 | A1 |
20150304108 | Obukhov | Oct 2015 | A1 |
20150310916 | Leem | Oct 2015 | A1 |
20150317095 | Voigt | Nov 2015 | A1 |
20150341123 | Nagarajan | Nov 2015 | A1 |
20150347025 | Law | Dec 2015 | A1 |
20150363271 | Haustein | Dec 2015 | A1 |
20150363328 | Candelaria | Dec 2015 | A1 |
20150372597 | Luo | Dec 2015 | A1 |
20160014039 | Reddy | Jan 2016 | A1 |
20160026575 | Samanta | Jan 2016 | A1 |
20160041760 | Kuang | Feb 2016 | A1 |
20160048327 | Jayasena | Feb 2016 | A1 |
20160048341 | Constantinescu | Feb 2016 | A1 |
20160054922 | Awasthi | Feb 2016 | A1 |
20160062885 | Ryu | Mar 2016 | A1 |
20160077749 | Ravimohan | Mar 2016 | A1 |
20160077764 | Ori | Mar 2016 | A1 |
20160077968 | Sela | Mar 2016 | A1 |
20160098344 | Gorobets | Apr 2016 | A1 |
20160098350 | Tang | Apr 2016 | A1 |
20160103631 | Ke | Apr 2016 | A1 |
20160110254 | Cronie | Apr 2016 | A1 |
20160132237 | Jeong | May 2016 | A1 |
20160154601 | Chen | Jun 2016 | A1 |
20160155750 | Yasuda | Jun 2016 | A1 |
20160162187 | Lee | Jun 2016 | A1 |
20160179399 | Melik-Martirosian | Jun 2016 | A1 |
20160188223 | Camp | Jun 2016 | A1 |
20160188890 | Naeimi | Jun 2016 | A1 |
20160203000 | Parmar | Jul 2016 | A1 |
20160224267 | Yang | Aug 2016 | A1 |
20160232103 | Schmisseur | Aug 2016 | A1 |
20160234297 | Ambach | Aug 2016 | A1 |
20160239074 | Lee | Aug 2016 | A1 |
20160239380 | Wideman | Aug 2016 | A1 |
20160274636 | Kim | Sep 2016 | A1 |
20160283140 | Kaushik | Sep 2016 | A1 |
20160306699 | Resch | Oct 2016 | A1 |
20160306853 | Sabaa | Oct 2016 | A1 |
20160321002 | Jung | Nov 2016 | A1 |
20160335085 | Scalabrino | Nov 2016 | A1 |
20160342345 | Kankani | Nov 2016 | A1 |
20160343429 | Nieuwejaar | Nov 2016 | A1 |
20160350002 | Vergis | Dec 2016 | A1 |
20160350385 | Poder | Dec 2016 | A1 |
20160364146 | Kuttner | Dec 2016 | A1 |
20160381442 | Heanue | Dec 2016 | A1 |
20170004037 | Park | Jan 2017 | A1 |
20170010652 | Huang | Jan 2017 | A1 |
20170075583 | Alexander | Mar 2017 | A1 |
20170075594 | Badam | Mar 2017 | A1 |
20170091110 | Ash | Mar 2017 | A1 |
20170109199 | Chen | Apr 2017 | A1 |
20170109232 | Cha | Apr 2017 | A1 |
20170123655 | Sinclair | May 2017 | A1 |
20170147499 | Mohan | May 2017 | A1 |
20170161202 | Erez | Jun 2017 | A1 |
20170162235 | De | Jun 2017 | A1 |
20170168986 | Sajeepa | Jun 2017 | A1 |
20170177217 | Kanno | Jun 2017 | A1 |
20170177259 | Motwani | Jun 2017 | A1 |
20170185498 | Gao | Jun 2017 | A1 |
20170192848 | Pamies-Juarez | Jul 2017 | A1 |
20170199823 | Hayes | Jul 2017 | A1 |
20170212708 | Suhas | Jul 2017 | A1 |
20170220254 | Warfield | Aug 2017 | A1 |
20170221519 | Matsuo | Aug 2017 | A1 |
20170228157 | Yang | Aug 2017 | A1 |
20170242722 | Qiu | Aug 2017 | A1 |
20170249162 | Tsirkin | Aug 2017 | A1 |
20170262176 | Kanno | Sep 2017 | A1 |
20170262178 | Hashimoto | Sep 2017 | A1 |
20170262217 | Pradhan | Sep 2017 | A1 |
20170269998 | Sunwoo | Sep 2017 | A1 |
20170279460 | Camp | Sep 2017 | A1 |
20170285976 | Durham | Oct 2017 | A1 |
20170286311 | Juenemann | Oct 2017 | A1 |
20170322888 | Booth | Nov 2017 | A1 |
20170344470 | Yang | Nov 2017 | A1 |
20170344491 | Pandurangan | Nov 2017 | A1 |
20170353576 | Guim Bernat | Dec 2017 | A1 |
20180024772 | Madraswala | Jan 2018 | A1 |
20180024779 | Kojima | Jan 2018 | A1 |
20180033491 | Marelli | Feb 2018 | A1 |
20180052797 | Barzik | Feb 2018 | A1 |
20180067847 | Oh | Mar 2018 | A1 |
20180069658 | Benisty | Mar 2018 | A1 |
20180074730 | Inoue | Mar 2018 | A1 |
20180076828 | Kanno | Mar 2018 | A1 |
20180088867 | Kaminaga | Mar 2018 | A1 |
20180107591 | Smith | Apr 2018 | A1 |
20180113631 | Zhang | Apr 2018 | A1 |
20180143780 | Cho | May 2018 | A1 |
20180150640 | Li | May 2018 | A1 |
20180165038 | Authement | Jun 2018 | A1 |
20180165169 | Camp | Jun 2018 | A1 |
20180165340 | Agarwal | Jun 2018 | A1 |
20180167268 | Liguori | Jun 2018 | A1 |
20180173620 | Cen | Jun 2018 | A1 |
20180188970 | Liu | Jul 2018 | A1 |
20180189175 | Ji | Jul 2018 | A1 |
20180189182 | Wang | Jul 2018 | A1 |
20180212951 | Goodrum | Jul 2018 | A1 |
20180219561 | Litsyn | Aug 2018 | A1 |
20180226124 | Perner | Aug 2018 | A1 |
20180232151 | Badam | Aug 2018 | A1 |
20180260148 | Klein | Sep 2018 | A1 |
20180270110 | Chugtu | Sep 2018 | A1 |
20180293014 | Ravimohan | Oct 2018 | A1 |
20180300203 | Kathpal | Oct 2018 | A1 |
20180321864 | Benisty | Nov 2018 | A1 |
20180322024 | Nagao | Nov 2018 | A1 |
20180329776 | Lai | Nov 2018 | A1 |
20180336921 | Ryun | Nov 2018 | A1 |
20180349396 | Blagojevic | Dec 2018 | A1 |
20180356992 | Lamberts | Dec 2018 | A1 |
20180357126 | Dhuse | Dec 2018 | A1 |
20180373428 | Kan | Dec 2018 | A1 |
20180373655 | Liu | Dec 2018 | A1 |
20180373664 | Vijayrao | Dec 2018 | A1 |
20190012111 | Li | Jan 2019 | A1 |
20190050327 | Li | Feb 2019 | A1 |
20190065085 | Jean | Feb 2019 | A1 |
20190073261 | Halbert | Mar 2019 | A1 |
20190073262 | Chen | Mar 2019 | A1 |
20190087089 | Yoshida | Mar 2019 | A1 |
20190087115 | Li | Mar 2019 | A1 |
20190087328 | Kanno | Mar 2019 | A1 |
20190116127 | Pismenny | Apr 2019 | A1 |
20190171532 | Abadi | Jun 2019 | A1 |
20190172820 | Meyers | Jun 2019 | A1 |
20190196748 | Badam | Jun 2019 | A1 |
20190196907 | Khan | Jun 2019 | A1 |
20190205206 | Hornung | Jul 2019 | A1 |
20190212949 | Pletka | Jul 2019 | A1 |
20190220392 | Lin | Jul 2019 | A1 |
20190227927 | Miao | Jul 2019 | A1 |
20190272242 | Kachare | Sep 2019 | A1 |
20190278654 | Kaynak | Sep 2019 | A1 |
20190317901 | Kachare | Oct 2019 | A1 |
20190339998 | Momchilov | Nov 2019 | A1 |
20190361611 | Hosogi | Nov 2019 | A1 |
20190377632 | Oh | Dec 2019 | A1 |
20190377821 | Pleshachkov | Dec 2019 | A1 |
20190391748 | Li | Dec 2019 | A1 |
20200004456 | Williams | Jan 2020 | A1 |
20200004674 | Williams | Jan 2020 | A1 |
20200013458 | Schreck | Jan 2020 | A1 |
20200042223 | Li | Feb 2020 | A1 |
20200042387 | Shani | Feb 2020 | A1 |
20200082006 | Rupp | Mar 2020 | A1 |
20200089430 | Kanno | Mar 2020 | A1 |
20200097189 | Tao | Mar 2020 | A1 |
20200133841 | Davis | Apr 2020 | A1 |
20200143885 | Kim | May 2020 | A1 |
20200159425 | Flynn | May 2020 | A1 |
20200167091 | Haridas | May 2020 | A1 |
20200225875 | Oh | Jul 2020 | A1 |
20200242021 | Gholamipour | Jul 2020 | A1 |
20200250032 | Goyal | Aug 2020 | A1 |
20200257598 | Yazovitsky | Aug 2020 | A1 |
20200326855 | Wu | Oct 2020 | A1 |
20200328192 | Zaman | Oct 2020 | A1 |
20200348888 | Kim | Nov 2020 | A1 |
20200387327 | Hsieh | Dec 2020 | A1 |
20200401334 | Saxena | Dec 2020 | A1 |
20200409559 | Sharon | Dec 2020 | A1 |
20200409791 | Devriendt | Dec 2020 | A1 |
20210010338 | Santos | Jan 2021 | A1 |
20210089392 | Shirakawa | Mar 2021 | A1 |
20210103388 | Choi | Apr 2021 | A1 |
20210124488 | Stoica | Apr 2021 | A1 |
Number | Date | Country |
---|---|---|
2003022209 | Jan 2003 | JP |
2011175422 | Sep 2011 | JP |
9418634 | Aug 1994 | WO |
Entry |
---|
C. Wu, D. Wu, H. Chou and C. Cheng, "Rethink the Design of Flash Translation Layers in a Component-Based View", in IEEE Access, vol. 5, pp. 12895-12912, 2017. |
Po-Liang Wu, Yuan-Hao Chang and T. Kuo, “A file-system-aware FTL design for flash-memory storage systems,” 2009, pp. 393-398. |
S. Choudhuri and T. Givargis, "Performance improvement of block based NAND flash translation layer", 2007 5th IEEE/ACM/IFIP International Conference on Hardware/Software Codesign and System Synthesis (CODES+ISSS), Salzburg, 2007, pp. 257-262. |
A. Zuck, O. Kishon and S. Toledo, "LSDM: Improving the Performance of Mobile Storage with a Log-Structured Address Remapping Device Driver", 2014 Eighth International Conference on Next Generation Mobile Apps, Services and Technologies, Oxford, 2014, pp. 221-228. |
J. Jung and Y. Won, “nvramdisk: A Transactional Block Device Driver for Non-Volatile RAM”, in IEEE Transactions on Computers, vol. 65, No. 2, pp. 589-600, Feb. 1, 2016. |
Te I et al., "Pensieve: A Machine Assisted SSD Layer for Extending the Lifetime" (Year: 2018). |
ARM, "Cortex-R5 and Cortex-R5F", Technical Reference Manual, Revision r1p1 (Year: 2011). |
https://web.archive.org/web/20071130235034/http://en.wikipedia.org:80/wiki/logical_block_addressing Wikipedia screenshot retrieved via the Wayback Machine on Nov. 30, 2007, showing both physical and logical addressing used historically to access data on storage devices (Year: 2007). |
Ivan Picoli, Carla Pasco, Bjorn Jonsson, Luc Bouganim, Philippe Bonnet. “uFLIP-OC: Understanding Flash I/O Patterns on Open-Channel Solid-State Drives.” APSys'17, Sep. 2017, Mumbai, India. pp. 1-7, 2017, <10.1145/3124680.3124741>. <hal-01654985>. |
EMC Powerpath Load Balancing and Failover Comparison with native MPIO operating system solutions. Feb. 2011. |
Tsuchiya, Yoshihiro et al. "DBLK: Deduplication for Primary Block Storage", MSST 2011, Denver, CO, May 23-27, 2011, pp. 1-5. |
Chen Feng, et al. "CAFTL: A Content-Aware Flash Translation Layer Enhancing the Lifespan of Flash Memory based Solid State Devices", FAST'11, San Jose, CA, Feb. 15-17, 2011, pp. 1-14. |
Wu, Huijun et al. "HPDedup: A Hybrid Prioritized Data Deduplication Mechanism for Primary Storage in the Cloud", Cornell Univ. arXiv: 1702.08153v2 [cs.DC], Apr. 16, 2017, pp. 1-14. |
WOW: Wise Ordering for Writes—Combining Spatial and Temporal Locality in Non-Volatile Caches by Gill (Year: 2005). |
Helen H. W. Chan et al. "HashKV: Enabling Efficient Updates in KV Storage via Hashing", https://www.usenix.org/conference/atc18/presentation/chan (Year: 2018). |
S. Hong and D. Shin, “NAND Flash-Based Disk Cache Using SLC/MLC Combined Flash Memory,” 2010 International Workshop on Storage Network Architecture and Parallel I/Os, Incline Village, NV, 2010, pp. 21-30. |
Arpaci-Dusseau et al. "Operating Systems: Three Easy Pieces", Originally published 2015; Pertinent: Chapter 44, flash-based SSDs, available at http://pages.cs.wisc.edu/~remzi/OSTEP/. |
Jimenez, X., Novo, D. and P. Ienne, "Phoenix: Reviving MLC Blocks as SLC to Extend NAND Flash Devices Lifetime," Design, Automation & Test in Europe Conference & Exhibition (DATE), 2013. |
Yang, T., Wu, H. and Sun, W., "GD-FTL: Improving the Performance and Lifetime of TLC SSD by Downgrading Worn-out Blocks," IEEE 37th International Performance Computing and Communications Conference (IPCCC), 2018. |
Number | Date | Country |
---|---|---|
20210397547 A1 | Dec 2021 | US |