This disclosure is generally related to the field of data storage. More specifically, this disclosure is related to a method and system for facilitating low-cost high-throughput storage for accessing large-size input/output (I/O) blocks in a hard disk drive (HDD).
Today, various storage systems are being used to store and access the ever-increasing amount of digital content. A storage system can include storage servers with one or more storage devices or drives, and a storage device or drive can include storage media with a non-volatile memory (such as a solid state drive (SSD) or a hard disk drive (HDD)). The need for low-cost high-capacity storage can be especially important in certain technological fields, e.g., artificial intelligence and big data analysis. High-capacity HDDs are commonly used to fulfill the low-cost need, but constraints on throughput due to the mechanical characteristics of HDDs can limit the optimal usage of the high capacity of these HDDs. For example, because of mechanical features of HDDs relating to seek time (e.g., read/write heads moving to a desired location on a track of a platter of an HDD) and the speed of the rotation of the platters (e.g., revolutions per minute (RPM)), the practical throughput of a high-capacity HDD may be limited to less than approximately 200 Megabytes (MB) per second. Thus, despite the continuing increase in the storage capacity of HDDs, the overall throughput may not increase at a similar pace. Furthermore, using a conventional file system and block layers to process data at the finer granularity of an I/O request may not optimize the efficiency and performance of an HDD, which can limit the flexibility and performance of the overall storage system.
Because of these limitations on the throughput of HDDs, current systems may use SSDs as storage drives, e.g., a high-density quad-level cell (QLC) NAND SSD. However, while such high-density SSDs may provide a decreased latency and an increased capacity, these high-density SSDs may also result in a decreased endurance. One technique is to place a high-endurance low-latency storage class memory as a persistent cache between the volatile system memory and the high-density NAND SSDs. However, this technique is constrained by several factors: over-qualification and suboptimal usage; an imbalance between read and write operations; limited use of multiple high-cost Peripheral Component Interconnect Express (PCIe) lanes; and ongoing endurance issues with SSDs.
Thus, due to the mismatch between the increasing HDD capacity and the limited HDD throughput, and despite current solutions and techniques, the efficiency of low-cost high-throughput storage in a hard disk drive remains a challenge.
One embodiment provides a system which facilitates operation of a storage system. During operation, the system receives a first request to write data to a hard disk drive (HDD) which comprises a plurality of platters with corresponding heads, wherein a respective platter includes a plurality of tracks. The system aligns the heads at a same first position on a first track of each platter. The system writes the data to the platters by distributing the data as a plurality of data sectors to track sectors located at the same first position on the first track of each platter.
In some embodiments, the system receives a second request to read the data from the HDD. The system identifies the first track as a location at which the data is stored. The system aligns the heads at a same random position on the first track of each platter. The system reads, during a single rotation of the platters and beginning from the same random position, all data stored on the first track of each platter. The system stores the read data in a data buffer. The system reshuffles the read data in the data buffer to obtain the data requested in the second request.
In some embodiments, the system receives a third request to read the data from the hard disk drive, wherein the data requested in the third request is stored in the first track of each platter. The system determines that the data requested in the third request is stored in the data buffer. The system retrieves the data requested in the third request from the data buffer without reading the data stored in the first track of each platter. The system returns the retrieved data.
In some embodiments, aligning the heads at the same first position or the same random position comprises activating a plurality of arms associated with the plurality of platters of the hard disk drive, wherein a respective arm is attached to a corresponding head of a respective platter. Writing the data to the platters by distributing the data as a plurality of data sectors is based on a logical block address associated with a respective data sector.
In some embodiments, a respective track includes a plurality of pre-allocated spare sectors for remapping data stored in a faulty sector of the respective track.
In some embodiments, the data buffer is maintained by the hard disk drive or a host, and reshuffling the read data in the data buffer is performed by the hard disk drive or the host.
In some embodiments, the system writes replicas of the data to other hard disk drives of a distributed storage system, wherein the replicas are written by distributing a respective replica as a plurality of data sectors to track sectors located at a same second position on a second track of each platter of a respective other hard disk drive. The system receives a third request to read the data stored in the distributed storage system. The system obtains a unique portion of the data requested in the third request from each of the hard disk drives on which the data or replicas of the data are stored. The system concatenates the unique portions in a correct order to form the data requested in the third request. The system returns the concatenated data in response to the third request.
In some embodiments, the unique portion of the data obtained from each of the hard disk drives is determined by a file system of the distributed storage system, and the concatenation of each unique portion comprises an entirety of the data requested in the third request.
In the figures, like reference numerals refer to the same figure elements.
The following description is presented to enable any person skilled in the art to make and use the embodiments, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the embodiments described herein are not limited to the embodiments shown, but are to be accorded the widest scope consistent with the principles and features disclosed herein.
Overview
The embodiments described herein provide a hard disk drive system which facilitates low-cost high-throughput storage for accessing large-size I/O blocks, by: sector remapping; distributing data across multiple platters at a same position in each platter; enhancing parallelism by reading portions of replicas from different storage drives; balancing traffic; and buffering data.
As described above, the need for low-cost high-capacity storage can be especially important in certain technological fields, e.g., artificial intelligence and big data analysis. High-capacity HDDs are commonly used to fulfill the low-cost need, but constraints on throughput due to the mechanical characteristics of HDDs can limit the optimal usage of the high capacity of these HDDs. For example, because of mechanical features of HDDs relating to seek time (e.g., read/write heads moving to a desired location on a track of a platter of an HDD) and the speed of the rotation of the platters (e.g., revolutions per minute (RPM)), the practical throughput of a high-capacity HDD may be limited to less than approximately 200 Megabytes (MB) per second. Thus, despite the continuing increase in the storage capacity of HDDs, the overall throughput may not increase at a similar pace. Furthermore, using a conventional file system and block layers to process data at the finer granularity of an I/O request may not optimize the efficiency and performance of an HDD, which can limit the flexibility and performance of the overall storage system.
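As a rough illustration of this mechanical limit, the best-case sequential throughput of a single recording surface is bounded by how much data passes under the head per rotation. The following back-of-the-envelope sketch assumes illustrative figures (a 7200 RPM spindle and roughly 1.5 MB of data per track) that are not taken from this disclosure:

```python
# Rough, illustrative estimate of single-surface sequential HDD throughput.
# The RPM and per-track capacity below are assumed example values chosen only
# to show why practical throughput stays in the low hundreds of MB/s.

def sequential_throughput_mb_per_s(rpm: float, track_capacity_mb: float) -> float:
    """At best, one full track can be read per platter rotation."""
    rotations_per_second = rpm / 60.0
    return track_capacity_mb * rotations_per_second

if __name__ == "__main__":
    # e.g., 7200 RPM with ~1.5 MB per track gives roughly 180 MB/s from one
    # surface, before any seek time or rotational latency is counted.
    print(sequential_throughput_mb_per_s(rpm=7200, track_capacity_mb=1.5))
```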
A current solution is to use an SSD as a storage drive. However, while SSDs may provide a decreased latency and an increased capacity, SSDs may also result in a decreased endurance. One technique is to place a high-endurance low-latency storage class memory as a persistent cache between the volatile system memory and a high-density NAND SSD, as described below.
Due to the mismatch between the increasing HDD capacity and the limited HDD throughput, and despite current solutions and techniques, the efficiency of low-cost high-throughput storage in a hard disk drive remains a challenge.
The embodiments described herein address this challenge and the above-described limitations by providing a hard disk drive system which provides several distinct and collaborating mechanisms. First, instead of allocating spare sectors on the most internal tracks of a platter (as in the conventional system), the described embodiments can pre-allocate spare sectors within each respective track of the platter, which can result in a decreased latency due to shorter distances moved by the read/write head. This sector remapping is described below.
Second, in the described embodiments, the system can activate all of the HDD platters at once, rather than only one platter at a time. Because the multiple N arms of an HDD share a same actuator, the N arms can move together as one and access (i.e., read from or write to) each of their respective N platters at a given time. For example, the system can write data, as a plurality of data sectors, to track sectors located at a same position on a given track of each platter. This can result in an increase in the parallelism for read/write accesses from 1 to N, as described below.
Third, replicas of data may be stored on different hard disk drives of a same server or of different servers. In the described embodiments, the system can read unique portions of requested data from each respective replica, which can result in a more efficient balancing of the read traffic. This can also result in an improved overall throughput, as described below.
Fourth, to perform a read request of data, instead of moving the read/write head to both a single specific track and a single sector on the specific track (as in the conventional system), the embodiments described herein can move the read/write head to a random position on the identified respective track of all platters (as described above). The system can read, during a single rotation of the platters and beginning from the same random position on each respective track, all data stored on each respective track at which the data is stored. The system can store all the read data in a data buffer, and subsequently reshuffle the stored data as needed to obtain the requested data. This also allows for a faster execution of subsequent I/O operations which need to access the data stored in any of the respective tracks. Instead of reading a single sector from a single track, the system can read all tracks at once, align the data, and store the aligned data in the read buffer for faster retrieval, as described below.
Thus, by increasing the size of the I/O block in the manner described herein, the system can provide an improved support for applications which may require high-throughput without regard for the I/O size. For example, in the fields of artificial intelligence and big data analysis, a tremendous volume of data must be moved back and forth between memory and storage. The majority of the computation is handled by the processors and the memory, while the persistent data storage requires data transfer at a high-throughput rate without being affected by the size of the I/O itself. The embodiments described herein provide an improved hard disk drive system which can more efficiently use high-capacity HDDs (i.e., optimize the usage) with an enhanced throughput (e.g., via sector remapping, distributed data placement, enhanced parallelism, balancing traffic, and buffering data). These mechanisms can result in a more efficient overall storage system. Furthermore, the described embodiments can be used to move and replicate data with high-throughput, which can make the process of migrating a large amount of data in a data center more efficient.
A “distributed storage system” or a “storage system” can include multiple storage servers. A “storage server” or a “storage system” can refer to a computing device which can include multiple storage devices or storage drives. A “storage device” or a “storage drive” refers to a device or a drive with a non-volatile memory which can provide persistent storage of data, e.g., a solid state drive (SSD), a hard disk drive (HDD), or a flash-based storage device. A storage system can also be a computer system.
A “computing device” refers to any server, device, node, entity, drive, or any other entity which can provide any computing capabilities.
The term “hard disk drive system” refers to a system which can store data on at least a hard disk drive.
The terms “block size” and “I/O block size” refer to the size of a data portion (block) associated with a single read or write operation.
Exemplary Operation of a Storage System in the Prior Art
As described above, the need for low-cost high-capacity storage can be especially important in certain technological fields, e.g., artificial intelligence and big data analysis. While high-capacity HDDs are commonly used to fulfill the low-cost need, constraints on throughput due to HDD mechanical characteristics can limit the optimal usage of these high-capacity HDDs. One current solution is to use an SSD as a storage drive. However, while SSDs may provide a decreased latency and an increased capacity, SSDs may also result in a decreased endurance. One technique is to place a high-endurance low-latency storage class memory as a persistent cache between the volatile system memory and a high-density NAND SSD. However, this technique is also constrained by several limitations, as described below.
The exemplary implementation of the tiered storage system can include: a CPU 120; DRAM dual in-line memory modules (DIMMs) 132, 134, 136, and 138; Optane SSDs 142, 144, and 146; and QLC SSDs 152, 154, 156, and 158. CPU 120 can communicate with DRAMs 132-138. CPU 120 can also communicate with Optane SSDs 142-146 and QLC SSDs 152-158 via multiple PCIe lanes 170.
While the tiered storage system depicted in environment 100 may provide a high-throughput, the following factors may limit this tiered storage system. First, the system may be designed to handle workloads with a large I/O size, e.g., 1 Megabyte (MB). In order to meet a corresponding high-throughput requirement (e.g., of 500 MB/sec) of such large workloads, the tiered storage system may be overqualified. That is, the tiered storage system may provide too much power in some instances, and may thus result in a suboptimal usage. Second, an imbalance may exist between read and write operations, i.e., the write latency may be much lower than the read latency. The tiered storage system may write data into Optane SSDs 142-146 with a much lower latency than the latency involved for retrieving or reading data from QLC SSDs 152-158.
Third, in order to provide the required high capacity for storage, the tiered storage system may include a significant number of both Optane SSDs and QLC SSDs. However, all of these devices/components require and share resources from the multiple PCIe lanes (e.g., via PCIe 170), which can be expensive in terms of financial cost and performance. Moreover, constraints due to CPU 120 may limit the number of drives that may be serviced via the multiple PCIe lanes. Thus, the overall cost may increase significantly based on the consumption of the multiple PCIe lanes.
Fourth, both Optane SSDs and QLC SSDs have an individual drive writes per day (DWPD) rating, which can limit the endurance of the physical media itself. The endurance of the physical media can affect both the lifespan and performance stability, which are critical features of the overall storage system. These ongoing endurance issues related to SSDs remain a challenge.
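For context, a DWPD rating relates directly to the total volume of data a drive is rated to absorb over its warranty period. The sketch below is illustrative arithmetic only; the capacity, DWPD rating, and warranty length are assumed example values, not figures from this disclosure:

```python
# Illustrative arithmetic relating a drive's rated DWPD (drive writes per day)
# to the total terabytes that may be written over its warranty period. All
# numeric inputs in the example are hypothetical.

def total_writes_tb(capacity_tb: float, dwpd: float, warranty_years: float) -> float:
    """Total terabytes writable before exceeding the DWPD rating."""
    return capacity_tb * dwpd * 365 * warranty_years

if __name__ == "__main__":
    # A hypothetical 15.36 TB QLC SSD rated at 0.3 DWPD over a 5-year warranty:
    print(total_writes_tb(capacity_tb=15.36, dwpd=0.3, warranty_years=5))  # ~8409.6 TB
```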
Thus, all of these constraints can limit the flexibility and performance of the overall storage system.
Exemplary Allocation of Spare Sectors: Prior Art v. One Embodiment
If an error is detected while using or accessing a certain portion of one normal track (e.g., a “faulty sector” of track 230), any data stored in the faulty sector can be moved or copied to the spare sectors of track 210. However, this requires moving the read/write head from the faulty sector of track 230 to the appropriate sectors of track 210, which can result in an increased access latency with additional seek time and head alignments.
The embodiments of the present application address the issues associated with prior art environment 200 by pre-allocating spare sectors within each respective track of a platter.
Thus, by placing and pre-allocating spare sectors on each respective track, the described embodiments can save time in remapping a certain faulty sector in a given track by only moving the head to the spare sectors in the given track. As a result, the system can avoid moving the head between the given track and internal (spare sector) tracks. This can provide an improved performance in the access latency of the HDD as the HDD no longer needs to spend time on moving the head back and forth between the given track and the different internal spare sector tracks.
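The per-track spare-sector scheme can be pictured as a small remapping table kept for each track, so that a faulty sector is redirected to a spare on the same track and the head never travels to a dedicated spare-sector track. The sketch below is a simplified illustration under that assumption; the class and field names are hypothetical and not part of this disclosure:

```python
# Minimal sketch of per-track spare-sector remapping, assuming each track
# reserves its last `spare_count` sectors as spares (hypothetical layout).

class Track:
    def __init__(self, track_id: int, total_sectors: int, spare_count: int):
        self.track_id = track_id
        self.data_sectors = total_sectors - spare_count
        # Spares live on the same track, so remapping never moves the head
        # to a different (e.g., innermost) track.
        self.free_spares = list(range(self.data_sectors, total_sectors))
        self.remap = {}  # faulty sector index -> spare sector index (same track)

    def remap_faulty_sector(self, sector: int) -> int:
        """Remap a faulty sector to a spare sector on the same track."""
        if sector in self.remap:
            return self.remap[sector]
        if not self.free_spares:
            raise RuntimeError(f"track {self.track_id}: no spare sectors left")
        spare = self.free_spares.pop(0)
        self.remap[sector] = spare
        return spare

    def resolve(self, sector: int) -> int:
        """Return the physical sector to access, following any remapping."""
        return self.remap.get(sector, sector)

# Example: a sector of track 230 goes bad and is remapped within the track.
track_230 = Track(track_id=230, total_sectors=1024, spare_count=16)
track_230.remap_faulty_sector(17)
assert track_230.resolve(17) >= track_230.data_sectors
```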
Exemplary Data Placement in a High-Density HDD
Another mechanism of the described embodiments allows the system to activate all of the HDD platters at once, rather than just a single platter at a time. The system can modify the data placement to write data to track sectors located at a same position on a given track of each platter, which can result in an increase in the parallelism for read/write accesses, as described below.
Thus, the system can write to a same sector of a given track of each platter at essentially the same time, e.g., to the current aligned physical locations of tracks in all of the platters. That is, at a given time, the system can write data to the platters of HDD 300 by distributing the data as a plurality of data sectors to track sectors located at a same first position on a track of each platter. For example, at a time t0, the system can align the heads at a same first position on a track of each platter. The system can then write, at essentially the same time: a sector 1 of the data to a first sector of a track 326 of platter 320; a sector 2 of the data to a first sector of a track 346 of platter 340; a sector 3 of the data to a first sector of a track 366 of platter 360; and a sector N of the data to a first sector of a track 386 of platter 380. As the platters continue to spin or rotate, the position of the N heads on each of the respective N platters moves to the same next location. At a time t1, the system can continue to write, at essentially the same time: a sector N+1 of the data to a second sector of track 326 of platter 320; a sector N+2 of the data to a second sector of track 346 of platter 340; a sector N+3 of the data to a second sector of track 366 of platter 360; and a sector 2N of the data to a second sector of track 386 of platter 380.
As the platters continue to spin or rotate, the position of the N heads on each of the respective N platters moves to the same next location. At a time t2, the system can continue to write, at essentially the same time: a sector 2N+1 of the data to a third sector of track 326 of platter 320; a sector 2N+2 of the data to a third sector of track 346 of platter 340; a sector 2N+3 of the data to a third sector of track 366 of platter 360; and a sector 3N of the data to a third sector of track 386 of platter 380.
Thus, the system can write the data by distributing the data as a plurality of data sectors, based on a logical block address associated with a respective data sector, to track sectors at a given same location on a track of each of the N platters. This can provide an increase in parallelism from 1 to N. At any moment, the system can write N sectors. Moreover, the system can read N sectors in parallel, thus increasing both the read and the write parallelism.
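The placement described above amounts to striping consecutive data sectors round-robin across the N platters, so that each head position holds N consecutive sectors. A minimal sketch of such a mapping, with hypothetical function and variable names, might look as follows (0-based LBAs are used, corresponding to the 1-based sectors 1..N, N+1..2N above):

```python
# Minimal sketch of striped data placement across N platters (hypothetical).

def place_sector(lba: int, num_platters: int):
    """Map a logical block address to (platter index, sector offset on track).

    With N platters and 0-based LBAs, LBAs 0..N-1 land at track-sector offset 0
    of platters 0..N-1, LBAs N..2N-1 land at offset 1, and so on.
    """
    platter = lba % num_platters   # which platter/head receives the sector
    offset = lba // num_platters   # position along the track (t0, t1, t2, ...)
    return platter, offset

# Example with N = 4 platters: LBAs 0..7 fill offset 0, then offset 1, on all platters.
N = 4
for lba in range(8):
    print(lba, place_sector(lba, N))
```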
Exemplary Traffic Balancing in a Read Operation from Multiple Replicas
In a conventional distributed file system, multiple replicas of data may be stored on multiple drives of the same server or of different servers. In order to execute a read request, the conventional system can randomly choose one replica to read from a single drive. However, this may result in the performance of a particular product being reliant upon the throughput of the single drive.
The embodiments described herein provide a mechanism which can implement an even partition of a read request across the multiple copies, which can result in an improvement in the throughput of a read operation and eliminate the reliance on the throughput of a single drive. Moreover, the traffic may be distributed more evenly and predictably, which can result in a more efficient overall storage system.
Upon receiving a request to read Data A, the system can simultaneously read the three replicas of Data A from different starting points. For example, the system can read a first portion “P1” from Data A (replica 1) 426 from HDD 424, starting from a read pointer 428. At the same or essentially the same time, the system can also read a second portion “P2” from Data A (replica 2) 436 from HDD 434, starting from a read pointer 438, and can also read a third portion “P3” from Data A (replica 3) 446 from HDD 444, starting from a read pointer 448. These read portions can all be unique portions of Data A, which allows the system to then concatenate the retrieved portions P1, P2, and P3, in a correct order, to form the requested Data A. The file system can manage the location of the read pointers, or determine these locations in advance upon storing the data as the three replicas (426, 436, and 446) to the different HDDs (424, 434, and 444) of servers 420, 430, and 440.
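One way to picture this traffic balancing is as an even partition of the requested byte range across the replicas, with each replica's read pointer set to the start of its own portion. The sketch below is a simplified host-side illustration, assuming a hypothetical read_range() helper that reads a byte range from one drive; it is not the disclosure's exact implementation:

```python
# Minimal sketch of evenly partitioning one read across multiple replicas.

def read_from_replicas(read_range, data_len: int, replica_drives: list) -> bytes:
    """Read one unique, contiguous portion from each replica and concatenate
    the portions in the correct order to reconstruct the requested data."""
    n = len(replica_drives)
    portion = (data_len + n - 1) // n            # ceiling division
    parts = []
    for i, drive in enumerate(replica_drives):   # issued in parallel in practice
        start = i * portion                      # each replica's read pointer
        length = min(portion, data_len - start)
        if length > 0:
            parts.append(read_range(drive, start, length))
    return b"".join(parts)                       # P1 + P2 + P3, in order

# Example with three in-memory "drives", each holding a full replica of Data A:
data_a = b"0123456789" * 3
drives = [data_a, data_a, data_a]
read_stub = lambda drive, start, length: drive[start:start + length]
assert read_from_replicas(read_stub, len(data_a), drives) == data_a
```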
Thus, the implementation of reading unique portions from the multiple replicas in parallel can result in an improved performance of the overall storage system.
Buffering Data in an Exemplary Read Operation
Generally, in order to execute a read operation, a conventional hard disk drive reads a particular sector or sectors from a particular track, which can result in many rounds of reading in order to obtain all the sectors which comprise the requested read data. For example, given six separate read requests (e.g., I/O 1 through I/O 6), in order to execute an I/O 1 read request, an HDD must first move the head to the correct track. Because the head may not be currently located at the desired location in the correct track, the platter must rotate or spin once in order to read I/O 1 from the desired location in the correct track. The head can pick up the signal from the desired location in the correct track, and return the requested data for the I/O 1 read request. Subsequently, in order to execute the I/O 2 through I/O 6 read requests, the system must repeat the above procedure for each of I/O 2 through I/O 6, even if I/O 2 through I/O 6 are located on the same track as I/O 1. Performing these multiple operations can result in an increased latency and a decreased efficiency.
The embodiments described herein provide a mechanism which improves the above-described latency involved in read operations. The mechanism includes a read buffer which works with the physical actuation of all arms of all platters in a same location, as described above.
Furthermore, applications running on the host side may be designed to have concentrated reads that may occur in the same or nearby data chunks which are stored in a group of tracks, e.g., from the N platters of the HDD.
The system can pick a same random location (i.e., 519, 529, 539, and 549) at which to place the read/write heads, and read, during a single rotation and beginning from the same random location, all data stored on the given track of each respective platter. The system can store all of the read data in a data buffer 550 (e.g., via communications 552, 554, 556, and 558). The data can be transmitted to and received by a reshuffle module 560 (via a communication 570). Reshuffle module 560 can reshuffle the data in the data buffer, e.g., by organizing the data to obtain data corresponding to a request from a host. Reshuffle module 560 can return the requested data to a host or other entity (via a communication 572). The data buffer and the reshuffle module may reside in the hard disk drive (e.g., as part of the HDD controller), in another component of the hard disk drive storage system, or in the host.
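The buffer-and-reshuffle path can be sketched as follows: the raw sectors captured from each platter during one rotation (starting at an arbitrary offset) are reordered into logical-block order and kept in the buffer, so that later requests for data on the same tracks are served without touching the platters. All names below (TrackBuffer, fill_from_track, and so on) are hypothetical illustrations, not elements of this disclosure:

```python
# Minimal sketch of the track-buffer-and-reshuffle read path (hypothetical).

class TrackBuffer:
    def __init__(self, num_platters: int, sectors_per_track: int):
        self.num_platters = num_platters
        self.sectors_per_track = sectors_per_track
        self.buffer = {}  # lba -> sector payload, in logical order after reshuffle

    def fill_from_track(self, raw, start_offset: int):
        """raw[p][k] is the k-th sector read from platter p, where reading began
        at track-sector offset `start_offset` (the random head position)."""
        for p in range(self.num_platters):
            for k in range(self.sectors_per_track):
                offset = (start_offset + k) % self.sectors_per_track
                lba = offset * self.num_platters + p   # inverse of the striping map
                self.buffer[lba] = raw[p][k]           # "reshuffle" into logical order

    def read(self, lba: int):
        """Serve a request from the buffer; no further platter access is needed."""
        return self.buffer.get(lba)

# Example: 4 platters, 8 sectors per track, head lands at random offset 5.
buf = TrackBuffer(num_platters=4, sectors_per_track=8)
raw = [[(p, (5 + k) % 8) for k in range(8)] for p in range(4)]  # simulated sector reads
buf.fill_from_track(raw, start_offset=5)
assert buf.read(0) == (0, 0)   # LBA 0 maps to platter 0, track-sector offset 0
```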
Thus, the system can provide another improvement to the performance of a hard disk drive system, where the improvement includes both a decreased retrieval latency and savings in power consumption by eliminating repeated reads. The system provides this improvement based on reading all the data from the given track of all platters during a single rotation, storing the read data in the data buffer, and making the data in the data buffer available for subsequent retrieval directly from the data buffer.
Method for Facilitating Operation of a Storage System
The system receives a second request to read the data from the HDD (operation 608). The system identifies the first track as a location at which the data is stored (operation 610). The system aligns the heads at a same random position on the first track of each platter (operation 612). The system reads, during a single rotation of the platters and beginning from the same random position, all data stored on the first track of each platter (operation 614). The system stores the read data in a data buffer (operation 616). The system reshuffles the read data in the data buffer to obtain the data requested in the second request (operation 618). The operation continues at Label A.
Exemplary Computer System and Apparatus
Content-processing system 718 can include instructions, which when executed by computer system 700, can cause computer system 700, processor 702, or controller 704 to perform methods and/or processes described in this disclosure. In some embodiments, controller 704 may comprise modules 720-730.
Content-processing system 718 can further include instructions for receiving a first request to write data to a hard disk drive (HDD) which comprises a plurality of platters with corresponding heads, wherein a respective platter includes a plurality of tracks, and wherein a respective track includes a plurality of pre-allocated spare sectors for remapping data stored in a faulty sector of the respective track (communication module 720). Content-processing system 718 can include instructions for aligning the heads at a same first position on a first track of each platter (head-aligning module 722). Content-processing system 718 can include instructions for writing the data to the platters by distributing the data as a plurality of data sectors to track sectors located at the same first position on the first track of each platter (data-writing module 724).
Content-processing system 718 can additionally include instructions for receiving a second request to read the data from the HDD (communication module 720). Content-processing system 718 can include instructions for identifying the first track as a location at which the data is stored (track-selecting module 726). Content-processing system 718 can include instructions for aligning the heads at a same random position on the first track of each platter (head-aligning module 722). Content-processing system 718 can also include instructions for reading, during a single rotation of the platters and beginning from the same random position, all data stored on the first track of each platter (data-reading module 728). Content-processing system 718 can include instructions for storing the read data in a data buffer (data-writing module 724). Content-processing system 718 can include instructions for reshuffling the read data in the data buffer to obtain the data requested in the second request (data-reshuffling module 730).
Content-processing system 718 can further include instructions for writing replicas of the data to other hard disk drives of a distributed storage system, wherein the replicas are written by distributing a respective replica as a plurality of data sectors to track sectors located at a same second position on a second track of each platter of a respective other hard disk drive (data-writing module 724 and replica-managing module 732). Content-processing system 718 can include instructions for receiving a third request to read the data stored in the distributed storage system (communication module 720). Content-processing system 718 can include instructions for obtaining a unique portion of the data requested in the third request from each of the hard disk drives on which the data or replicas of the data are stored (replica-managing module 732 and data-reading module 728). Content-processing system 718 can include instructions for concatenating the unique portions in a correct order to form the data requested in the third request (replica-managing module 732 and data-reading module 728). Content-processing system 718 can include instructions for returning the concatenated data in response to the third request (communication module 720).
Data 734 can include any data that is required as input or generated as output by the methods and/or processes described in this disclosure. Specifically, data 734 can store at least: data; a request; a read request; a write request; an input/output (I/O) request; data or metadata associated with a read request, a write request, or an I/O request; a physical block address (PBA); a logical block address (LBA); an indicator of a platter, a track, a sector, a spare sector, or a position on a track of a platter; a position on a track of a platter; a data sector; a track sector; a replica; a portion of a replica; a unique portion of data; concatenated portions of data; an order for concatenating portions of data; and an indicator of a hard drive or a server.
Apparatus 800 can also include a controller (not shown) with modules/units configured to perform various operations. Specifically, apparatus 800 (or its controller) can comprise modules or units 802-814 which are configured to perform functions or operations similar to modules 720-732 of computer system 700.
The data structures and code described in this detailed description are typically stored on a computer-readable storage medium, which may be any device or medium that can store code and/or data for use by a computer system. The computer-readable storage medium includes, but is not limited to, volatile memory, non-volatile memory, magnetic and optical storage devices such as disk drives, magnetic tape, CDs (compact discs), DVDs (digital versatile discs or digital video discs), or other media capable of storing computer-readable media now known or later developed.
The methods and processes described in the detailed description section can be embodied as code and/or data, which can be stored in a computer-readable storage medium as described above. When a computer system reads and executes the code and/or data stored on the computer-readable storage medium, the computer system performs the methods and processes embodied as data structures and code and stored within the computer-readable storage medium.
Furthermore, the methods and processes described above can be included in hardware modules. For example, the hardware modules can include, but are not limited to, application-specific integrated circuit (ASIC) chips, field-programmable gate arrays (FPGAs), and other programmable-logic devices now known or later developed. When the hardware modules are activated, the hardware modules perform the methods and processes included within the hardware modules.
The foregoing embodiments described herein have been presented for purposes of illustration and description only. They are not intended to be exhaustive or to limit the embodiments described herein to the forms disclosed. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art. Additionally, the above disclosure is not intended to limit the embodiments described herein. The scope of the embodiments described herein is defined by the appended claims.