Flash memory based solid-state drives (SSDs) have been used widely in both consumer computers and enterprise servers. There are two main types of flash memory, which are named after the NAND and NOR logic gates. NAND flash memory may be written and read in blocks, each of which comprises a number of pages.
Because the NAND flash storage cells in SSDs have unique properties, typical usage patterns are inefficient on SSDs. For example, although NAND flash memory can be randomly read or programmed a byte or a word at a time, it can only be erased a block at a time. To rewrite a single NAND flash page, the whole erase block (which contains many flash pages) must be erased first.
Since NAND flash based storage devices (e.g., SSDs) do not allow in-place updating, a garbage collection operation is performed when the available free block count reaches a certain threshold in order to prepare space for subsequent writes. Garbage collection includes reading valid data from one erase block and writing the valid data to another block, while invalid data is not transferred to the new block. Erasing a NAND erase block takes a relatively significant amount of time, and each erase block has a limited number of erase cycles (from about 3K to 10K). Thus, garbage collection overhead is one of the biggest speed limiters in this technology class, incurring higher data I/O latency and lower I/O performance. Therefore, operating systems (OSs) and applications that do not treat hot and cold data differently, and instead store them together, will see performance degradation over time (compared to OSs and applications that do treat hot and cold data differently), as well as a shorter SSD lifetime, as more erase cycles are needed, causing the NAND cells to wear out faster.
SSD vendors and storage technical committees have come up with a new SSD and standard, called “multi-stream SSD,” to overcome this issue by providing OSs and applications with interfaces that separately store data with different lifespans, called “streams.” Streams are host hints that indicate when data writes are associated with one another or have a similar lifetime. That is, a group of individual data writes form a collective stream, and each stream is given a stream ID by the OS or an application. For example, “hot” data can be assigned a unique stream ID, and the data for that stream ID would be written to the same erase block in the SSD. Because the data within an erase block has a similar lifetime or is associated with one another, there is a greater chance that an entire erase block is freed when data is deleted by a host system, significantly reducing garbage collection overhead: an entire target block is either still valid (no need to erase) or entirely invalid (erasable without copying any data). Accordingly, device endurance and performance should increase.
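As a toy illustration of why stream separation reduces garbage collection (this is an assumption-laden sketch, not any vendor's actual flash translation layer; the block size, stream IDs, and page names are invented for the example), consider an append-only placement in which each stream fills its own erase blocks:

```python
# Toy model of stream-based data placement: pages tagged with a stream ID are
# appended to that stream's open erase block, so data with similar lifetimes
# ends up physically together.
from collections import defaultdict

BLOCK_PAGES = 4  # pages per erase block (tiny, for illustration)

def place_writes(writes):
    """writes: list of (stream_id, page) -> {stream_id: list of erase blocks}"""
    blocks = defaultdict(lambda: [[]])
    for stream_id, page in writes:
        open_block = blocks[stream_id][-1]
        if len(open_block) == BLOCK_PAGES:   # block full: open a new one
            blocks[stream_id].append([])
            open_block = blocks[stream_id][-1]
        open_block.append(page)
    return dict(blocks)

# "Hot" temp-file pages (stream 1) interleaved with "cold" media pages (stream 2)
layout = place_writes([(1, "tmp0"), (2, "mp3_0"), (1, "tmp1"), (2, "mp3_1"),
                       (1, "tmp2"), (1, "tmp3")])
# Stream 1's four pages fill one erase block by themselves; deleting all temp
# files invalidates that whole block, so it can be erased with no copying.
```

With mixed placement, the temp pages and media pages would share blocks, and erasing a block after the temp files are deleted would require copying the still-valid media pages first.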
However, to utilize this new interface, many changes within the applications (including their source code) and the OS are required. As a typical computer can have tens or hundreds of software applications installed and running, it is very difficult for all applications, especially legacy and closed-source applications, to adopt those changes in order to use SSDs more efficiently. In addition, multi-stream SSDs have limited applicability in that they can only be used by operating systems and applications.
What is needed is improved data property based data placement in a storage device, and more particularly, an autonomous process that enables computer devices to utilize data property based data placement (e.g., multi-stream) solid-state drives.
The example embodiments provide methods and systems for providing an interface for enabling a computer device to utilize data property-based data placement inside a nonvolatile memory device. Aspects of the example embodiments include: executing a software component at an operating system level in the computer device that monitors update statistics of data item modifications into the nonvolatile memory device, including one or more of update frequencies for at least a portion of the data items, accumulated update and delete frequencies specific to each file type, and an origin of the data item; storing the update statistics for the data items and data item types in a database; and intercepting operations, including create, write, and update, performed by applications on the data items, and automatically assigning a data property identifier to the data items based on current update statistics in the database, such that the data items and assigned data property identifiers are transmitted over a memory channel to the nonvolatile memory device.
The example embodiments further provide a computer device, comprising: a memory; an operating system; and a processor coupled to the memory, the processor executing a software component provided within the operating system, the software component configured to: monitor update statistics of data item modifications into a nonvolatile memory device, including one or more of update frequencies for at least a portion of the data items, accumulated update and delete frequencies specific to each file type, and an origin of the data item; store the update statistics for the data items and data item types in a database; and intercept all operations, including create, write, and update, performed by applications on the data items, and automatically assign a data property identifier to each of the data items based on current update statistics in the database, such that the data items and assigned data property identifiers are transmitted over a memory channel to the nonvolatile memory device for storage, thereby enabling the computer device to utilize data property-based data placement inside a nonvolatile memory device.
The example embodiments also provide an executable software product stored on a non-transitory computer-readable storage medium containing program instructions for providing an interface for enabling a computer device to utilize data property-based data placement inside a nonvolatile memory device, the program instructions for: executing a software component at an operating system level in the computer device that monitors update statistics of data item modifications into the nonvolatile memory device, including one or more of update frequencies for at least a portion of the data items, accumulated update and delete frequencies specific to each file type, and an origin of the data item; storing, by the software component, the update statistics for the data items and the data item types in a database; and intercepting all operations, including create, write, and update, performed by applications on the data items, and automatically assigning a data property identifier to each of the data items based on current update statistics in the database, such that the data items and assigned data property identifiers are transmitted over a memory channel to the nonvolatile memory device for storage.
These and/or other features and utilities of the present general inventive concept will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
Reference will now be made in detail to the embodiments of the present general inventive concept, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the like elements throughout. The embodiments are described below in order to explain the present general inventive concept while referring to the figures.
Advantages and features of the present invention and methods of accomplishing the same may be understood more readily by reference to the following detailed description of embodiments and the accompanying drawings. The present general inventive concept may, however, be embodied in many different forms and should not be construed as being limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will fully convey the concept of the general inventive concept to those skilled in the art, and the present general inventive concept will only be defined by the appended claims. In the drawings, the thicknesses of layers and regions are exaggerated for clarity.
The use of the terms “a” and “an” and “the” and similar referents in the context of describing the invention (especially in the context of the following claims) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted.
The term “algorithm” or “module”, as used herein, means, but is not limited to, a software or hardware component, such as a field programmable gate array (FPGA) or an application specific integrated circuit (ASIC), which performs certain tasks. An algorithm or module may advantageously be configured to reside in the addressable storage medium and configured to execute on one or more processors. Thus, an algorithm or module may include, by way of example, components, such as software components, object-oriented software components, class components and task components, processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables. The functionality provided for in the components and modules may be combined into fewer components or modules, or further separated into additional components or modules.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It is noted that the use of any and all examples, or exemplary terms provided herein, is intended merely to better illuminate the invention and is not a limitation on the scope of the invention unless otherwise specified. Further, unless defined otherwise, terms defined in generally used dictionaries are not to be interpreted in an overly formal sense.
In one aspect, the example embodiments provide a heuristic and autonomous interface for enabling computer systems to utilize a data property-based data placement method (e.g., multi-streaming) in storage devices, such as SSDs, which does not require changes to applications.
The system includes a host system 10 coupled to an SSD 12 over a channel 14. As is well-known, an SSD has no moving parts to store data and does not require constant power to retain that data. Components of the host system 10 that are relevant to this disclosure include a processor 16, which executes computer instructions from a memory 18 including an operating system (OS) 20 and a file system 21. The host system 10 may include other components (not shown), such as a memory controller for interfacing with the channel 14. The host system 10 and the SSD 12 communicate commands and data items 26 over the channel 14. In one embodiment, the host system may be a typical computer or server running any type of OS. Example types of OSs include single- and multi-user, distributed, templated, embedded, real-time, and library. In another embodiment, the system may be a standalone component, such as a device controller, in which case the OS may comprise a lightweight OS (or parts thereof) or even firmware.
The SSD 12 includes a storage controller 22 and a nonvolatile memory (NVM) array 24 to store data from the host system 10. The storage controller 22 manages the data stored in the NVM array 24 and communicates with the host system over the channel 14 via communication protocols. The NVM array 24 may comprise any type of nonvolatile random-access memory (NVRAM) including flash memory, ferroelectric RAM (F-RAM), magnetoresistive RAM (MRAM), phase-change memory (PCM), millipede memory, and the like. Both the SSD 12 and channel 14 may support multi-channel memory architectures, such as dual channel architecture, and may also support single, double, or quad rate data transfers.
According to the example embodiments, in order to reduce garbage collection overhead in the SSD 12, an improved data property-based data placement is provided in the SSD. This is accomplished by providing a heuristic interface 26 that enables both applications and hardware components to separately store data items in the SSD 12 that have different lifespans. In addition, in some embodiments, use of the heuristic interface 26 requires no changes to user applications running on the host system 10.
In one embodiment, the heuristic interface 26 comprises at least one software component installed at the operating system level that continuously monitors and stores usage and update statistics of all data items 28, such as files. After an initial warm-up or training period, any create/write/update operation performed on the data items 28 by the host system 10 is assigned a dynamic data property identifier 30 according to current usage and update statistics. In one embodiment, the actual assignment of the data property identifiers 30 may be performed by software hooks in the file system 21 of the OS 20.
The software component stores the update statistics for the data items and data item types in a database (block 202). In one embodiment, the update statistics may be stored both in the host system memory 18 and in the SSD 12.
The software component intercepts most, if not all, operations, including create, write, and update, performed by applications to the data items, and automatically assigns a data property identifier to each of the data items based on current update statistics in the database, such that the data items and assigned property identifiers are transmitted over the memory channel to the nonvolatile memory device for storage (block 204). In one embodiment, the data property identifier acts as a tag, and need not actually transmit any information on what data property the identifier represents.
According to one embodiment, the heuristic interface 26 uses the current update statistics to associate or assign the data property identifiers 30 to each of the data items 28 based on one or more data properties indicating data similarity, such as a data lifetime, a data type, data size, and a physical data source. In a further embodiment, logical block address (LBA) ranges could also be used as a data property indicating data similarity. For example, a pattern of LBA accesses could be an indicator of similarity and call for a grouping of the data. In this manner, data items 28 having the same or similar data properties are assigned the same data property identifier value.
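A minimal sketch of this grouping idea is shown below; the property tuple, the interning scheme, and all names are assumptions for illustration, not the embodiment's actual mechanism. Items presenting the same similarity properties receive the same identifier value:

```python
# Intern each distinct tuple of similarity properties to a shared ID, so that
# data items with the same properties are assigned the same data property ID.
_assigned = {}

def property_id(lifetime_class, data_type, source, lba_range=None):
    key = (lifetime_class, data_type, source, lba_range)
    return _assigned.setdefault(key, len(_assigned))

a = property_id("hot", "jpg", "camera0")   # first hot/jpg/camera0 item
b = property_id("hot", "jpg", "camera0")   # same properties -> same ID as a
c = property_id("cold", "mp3", "app1")     # different properties -> new ID
```

Any monotonically assigned tag works here, since (as noted above for the identifier 30) the ID only needs to group similar items, not describe the property itself.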
Because the heuristic interface 26 is provided at the operating system level, no changes are required to existing applications in order to make those applications compatible with the data property-based data placement process of the example embodiment. Accordingly, the heuristic interface 26 may be implemented in any type of computing device having a processor and operating system to expand use of conventional multi-streaming beyond applications and operating systems.
Computing devices 300C and 300D may represent hardware devices, such as a switch, a router, a RAID system, a host bus adapter (HBA) system, a sensor system, or a stand-alone device (such as a scanner or camera), in which respective heuristic interfaces 26C and 26D are provided within hardware device controllers 306C and 306D. The heuristic interface 26C intercepts data item operations performed by the device controller 306C, and automatically assigns data property identifiers 30C to each of the data items from the device controller 306C based on current update statistics. Similarly, the heuristic interface 26D intercepts data item operations performed by the device controller 306D, and automatically assigns data property identifiers 30D to each of the data items from the device controller 306D based on current update statistics.
In one embodiment, the data type may include properties outside of the application of origin. In a further embodiment, the one or more data properties could also include logical block address (LBA) ranges. It should be noted that the data properties listed here are not the only types of data properties that can be considered. For example, some other data property, which is currently not known, could be used to determine data similarity in the future. In one embodiment, the data items received from the operating system or the application may include data property identifiers that are associated with the data items by another process besides the heuristic interface 26.
The storage controller receives over the channel from a hardware device controller (e.g., controllers 306C and 306D) another series of data items to be stored, wherein each of the data items includes a second data property identifier that is associated with the data items based on one or more of the data properties indicating data similarity, including a data lifetime, a data type, and a physical data source (block 402).
The storage controller reads the data property identifiers and identifies which blocks of the memory device to store the corresponding data items, such that the data items having the same data property identifiers are stored in a same block (block 404), and stores the data items into the identified blocks (block 406).
According to the heuristic interface 26 of the example embodiments, a data property ID 30 may be assigned to the data items 28 that are output from any type of input device for storage. For example, assume the heuristic interface 26 is implemented within a digital security camera that takes an image periodically, e.g., once a second. The heuristic interface 26 may assign a data property ID 30 to each image file based on data properties indicating data similarity, such as the capture rate of the images, the image file size, and the origin of the images (e.g., device ID and/or GPS location). Note that such data properties need not be associated with a particular application or file system as is the metadata used by conventional multi-streaming.
According to some example embodiments, the heuristic interface 26 comprises two software components: a stats daemon 600 installed within the operating system 20, and map hooks 602 implemented as system call hooks at the file system level. The stats daemon 600 may continuously run in the background to manage and maintain at least one heuristic/statistical database 604 of data item modifications to the SSD 12 (shown in the expanded dotted line box). The map hooks 602 may intercept all file update operations and automatically assign a data property ID 30 to each of the file update operations 610 according to current statistics in the database 604.
In one embodiment, the statistics stored in the heuristic database 604 may comprise two types: filename-based statistics and file type-based statistics. The filename-based statistics may record update frequencies (including rewrite, update, delete, and truncate operations) for each data file. The file type-based statistics may record accumulated update and delete frequencies for specific file types. In one embodiment, both sets of statistics are updated by each file update, and may be stored in both the memory 18 and on the SSD 12.
In one embodiment, the heuristic database 604 may store the two types of statistics using respective tables, referred to herein as a filename-based statistics table 606 and a file type-based statistics table 608. In one embodiment, the tables may be implemented as hash tables. Each entry in the filename-based statistics table 606 may include a key, which may be a file name, and a value, which may comprise a total number of updates (including rewrite, update, and truncate operations) for the file during its whole lifetime. Responsive to the creation of a new file, the stats daemon 600 may create a new entry in the filename-based statistics table 606. Responsive to the file being deleted, the stats daemon 600 may delete the corresponding entry in the filename-based statistics table 606.
Each entry in the file type-based statistics table 608 may include a key, which may be the file type (e.g., mp3, mpg, xls), and a value, which may comprise the total updates (including rewrite, update, truncate, and delete operations) made to files of this type divided by the number of files of this file type. Responsive to the creation of a new file type, the stats daemon 600 may create a new entry in the file type-based statistics table 608, and the stats daemon 600 may not delete any entry after the entry is added.
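The two tables described above can be sketched as plain hash maps. The function names and exact bookkeeping below are assumptions for illustration, following the create/update/delete rules just described:

```python
# Sketch of tables 606 and 608 as plain dicts (hash tables); illustrative only.
filename_stats = {}   # table 606: file name -> total updates over its lifetime
filetype_stats = {}   # table 608: file type -> (total updates, file count)

def record_create(name, ftype):
    filename_stats[name] = 0                       # new per-file entry
    total, count = filetype_stats.get(ftype, (0, 0))
    filetype_stats[ftype] = (total, count + 1)     # type entries are never deleted

def record_update(name, ftype):
    filename_stats[name] = filename_stats.get(name, 0) + 1
    total, count = filetype_stats.get(ftype, (0, 0))
    filetype_stats[ftype] = (total + 1, count)

def record_delete(name, ftype):
    filename_stats.pop(name, None)                 # drop the per-file entry
    total, count = filetype_stats.get(ftype, (0, 0))
    filetype_stats[ftype] = (total + 1, count)     # deletes count as updates

def filetype_frequency(ftype):
    total, count = filetype_stats.get(ftype, (0, 1))
    return total / max(count, 1)                   # accumulated updates per file
```

The value stored per file type is the accumulated-updates-per-file ratio described above, so a frequently rewritten type (e.g., temporary files) scores higher than a write-once type (e.g., media files).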
The stats daemon 600 is also responsible for loading these two hash tables from the SSD 12 into the memory 18 after operating system boot-up, and for flushing the tables to the SSD periodically for permanent storage. Both hash tables can be stored in the SSD 12 as normal data files.
For a newly installed operating system, the heuristic interface 26 may require a configurable warm-up period to collect sufficient statistics to effectively assign data property identifiers 30.
The map hooks 602 may be implemented as system call hooks at the file system level to intercept all file update operations (create/write/update/delete) 610 made to all files by the applications 300 and the OS 20. Most operating systems, such as Windows and Linux, provide file system hooks for system programming purposes.
Responsive to intercepting file update operations 610, the map hooks 602 may perform a heuristic database update operation 612 to send related information to the stats daemon 600. The map hooks 602 then perform a data property ID calculation via block 614 that reads the heuristic database 604, and calculate and assign a data property identifier 30 for the actual file writes according to current statistics via line 616. File creations or fresh file writes may be assigned data property IDs 30 according to the file's type and current file type statistics, while file data updates may be assigned data property IDs 30 according to the file's update frequency and current update frequency statistics. Finally, the map hooks 602 forward the actual file writes to the underlying file system 21 via block 618.
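The interception flow just described can be sketched as a single hook function. The callables standing in for blocks 612, 614, and 618 are hypothetical placeholders, not the actual implementation:

```python
# Minimal sketch of the map-hook write path: intercept a write, update the
# statistics, compute a data property ID, and forward to the real file system.

def on_file_write(path, data, record_update, calc_id, fs_write):
    record_update(path)                      # block 612: heuristic DB update
    stream_id = calc_id(path)                # block 614: data property ID calc
    return fs_write(path, data, stream_id)   # block 618: forward actual write

# Toy wiring: count updates per path and derive the ID from that count.
counts = {}
record = lambda p: counts.__setitem__(p, counts.get(p, 0) + 1)
calc = lambda p: min(counts.get(p, 0), 3)    # cap at 4 streams (IDs 0..3)
write = lambda p, d, sid: (p, sid)

result = on_file_write("foo.jpg", b"...", record, calc, write)
```

The key point is ordering: the statistics are updated before the ID is computed, so the write that triggered the interception is itself reflected in the assignment.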
The map hooks 602 may use an algorithm based on the two statistics hash tables 606 and 608 to calculate and assign data property IDs; a simplified example implementation is described below. The hash table lookup and calculation overhead can be very minimal, so SSD read/write throughput is not affected. More information can be added to these statistics hash tables 606 and 608, and more complicated stream ID calculation algorithms can be used.
In one embodiment, an example simplified data property ID calculation algorithm is as follows:
As an example of the data property ID calculation, assume the following: a user is using a photo app to edit a photo file named foo.jpg (a JPEG file type); and the SSD of the user's computer is configured to handle up to four data property IDs or stream IDs. When the user saves the photo onto the SSD 12, the map hooks 602 intercept the file save request, determine the file type, and update the update frequency of that particular file and of the JPEG file type in the heuristic database 604.
The map hooks 602 also search the current statistics for the write frequency (e.g., per day) for the JPEG file type as well as the maximum write frequency over all file types. In this example, assume that the JPEG file type has a write frequency per day of 10 and that a “.meta” file type has a write frequency per day of 100, which is the highest or maximum write frequency over all file types. The map hooks 602 may then calculate the data property ID to be assigned to the file save operation using the equation:
Data property ID=floor((Write_Frequency_This_FileType/Max_Write_Frequency_in_FileType_HashTable)×Number_of_Available_Streams_in_NVM_Device)
Data property ID=floor((10/100)×4)=0
The data property ID of 0 is then sent with the file data through the file system 21 and block layer 23 to the SSD 12, where it is stored in a block that stores other data having an assigned data property ID of 0.
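The calculation can be sketched directly from the equation above. The clamp on the final line is an added assumption, so that the maximum-frequency file type maps to the highest valid ID rather than exceeding the device's stream count:

```python
import math

def data_property_id(freq_this_type, max_freq, num_streams):
    # floor((Write_Frequency_This_FileType / Max_Write_Frequency) x Streams)
    raw = math.floor(freq_this_type / max_freq * num_streams)
    return min(raw, num_streams - 1)   # clamp to a valid ID (assumption)

jpeg_id = data_property_id(10, 100, 4)    # the worked example: floor(0.4) = 0
meta_id = data_property_id(100, 100, 4)   # max-frequency type clamps to the top ID
```

Frequently updated file types thus land in the higher-numbered streams, while write-once types cluster in stream 0.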
In an alternative embodiment, the heuristic interface 26 (i.e., the stats daemon 600 and the map hooks 602) and the assignment of data property IDs based on update statistics in the heuristic database 604 may be implemented within the OS block storage layer 23 or even inside the SSD 12. In both cases, the stats daemon 600 only needs to maintain a lookup table that stores update statistics for each storage block, and the map hooks 602 may calculate and assign the data property IDs to each block update according to the update frequencies stored in the lookup table. In the case of implementation inside the OS block storage layer 23, the map hooks 602 may be implemented in the OS block storage layer instead of the file system 21, the stats daemon 600 may be a kernel mode daemon that is part of the operating system, and the statistics tables 606 and 608 can be stored both in memory and in a specific partition inside the SSD 12. In the case of implementation inside a data property-based SSD 12, the statistics tables 606 and 608 can be stored in a NAND flash memory spare area to save storage space, and the in-memory part of the statistics tables 606 and 608 can be stored and merged with the current SSD FTL mapping tables.
In one embodiment, the heuristic interface 26 is implemented as a software component. In another embodiment, the heuristic interface 26 could be implemented as a combination of hardware and software. Although the stats demon 600 and the map hooks 602 are shown as separate components, the functionality of each may be combined into a lesser or a greater number of modules/components. For example, in another embodiment, the stats demon 600 and the map hooks 602 may be implemented as one integrated component.
The heuristic interface 26 of the example embodiments may be applied to a broad range of storage markets, from client to enterprise, including a disk for a single standalone machine (such as a desktop, laptop, workstation, or server), a storage array, software-defined storage (SDS), application-specific storage, a virtual machine (VM), virtual desktop infrastructure (VDI), a content distribution network (CDN), and the like.
In one embodiment, for example, the NVM array 24 of the SSD 12 may be formed of a plurality of non-volatile memory chips, i.e., a plurality of flash memories. As another example, the NVM array 24 may be formed of different types of nonvolatile memory chips (e.g., PRAM, FRAM, MRAM, etc.) instead of flash memory chips. Alternatively, the array 24 can be formed of volatile memories (e.g., DRAM or SRAM), and may have a hybrid type where two or more types of memories are mixed.
Methods and systems for a heuristic and autonomous interface for enabling computer systems to utilize the data placement method have been disclosed. The present invention has been described in accordance with the embodiments shown; there could be variations to the embodiments, and any variations would be within the spirit and scope of the present invention. For example, the exemplary embodiments can be implemented using hardware, software, a computer-readable medium containing program instructions, or a combination thereof. Software written according to the present invention is to be stored in some form of computer-readable medium, such as a memory, a hard disk, or a CD/DVD-ROM, and is to be executed by a processor. Accordingly, many modifications may be made by one of ordinary skill in the art without departing from the spirit and scope of the appended claims.
This application is a Continuation application of U.S. patent application Ser. No. 16/676,356, filed Nov. 6, 2019, which claims priority to and the benefit of U.S. patent application Ser. No. 15/090,799 entitled HEURISTIC INTERFACE FOR ENABLING A COMPUTER DEVICE TO UTILIZE DATA PROPERTY-BASED DATA PLACEMENT INSIDE A NONVOLATILE MEMORY DEVICE and filed Apr. 5, 2016, which claims priority to and the benefit of U.S. Provisional Patent Application No. 62/192,045 entitled DATA PROPERTY BASED DATA PLACEMENT IN STORAGE DEVICE and filed Jul. 13, 2015, and U.S. Provisional Patent Application No. 62/245,100 entitled AUTONOMOUS MECHANISM AND ALGORITHM FOR COMPUTER SYSTEM TO UTILIZE MULTI-STREAM SOLID-STATE DRIVE and filed Oct. 22, 2015, the contents all of which are incorporated by reference in their entirety herein.
20160239615 | Dorn | Aug 2016 | A1 |
20160266792 | Amaki et al. | Sep 2016 | A1 |
20160283125 | Hashimoto et al. | Sep 2016 | A1 |
20160313943 | Hashimoto et al. | Oct 2016 | A1 |
20170017663 | Huo et al. | Jan 2017 | A1 |
20170039372 | Koval et al. | Feb 2017 | A1 |
20170109096 | Jean et al. | Apr 2017 | A1 |
20170123666 | Sinclair et al. | May 2017 | A1 |
20170300426 | Chai et al. | Oct 2017 | A1 |
20170308772 | Li et al. | Oct 2017 | A1 |
20170339230 | Yeom et al. | Nov 2017 | A1 |
20180012032 | Radich et al. | Jan 2018 | A1 |
Number | Date | Country |
---|---|---|
103324703 | Sep 2013 | CN |
103620549 | Mar 2014 | CN |
103842962 | Jun 2014 | CN |
103942010 | Jul 2014 | CN |
104111898 | Oct 2014 | CN |
104391569 | Mar 2015 | CN |
104423800 | Mar 2015 | CN |
104572491 | Apr 2015 | CN |
2-302855 | Dec 1990 | JP |
11-327978 | Nov 1999 | JP |
2006-235960 | Sep 2006 | JP |
2007-102998 | Apr 2007 | JP |
2007-172447 | Jul 2007 | JP |
2012-104974 | May 2012 | JP |
2012-170751 | Sep 2012 | JP |
2012-020544 | Oct 2013 | JP |
2014-167790 | Sep 2014 | JP |
2014-522537 | Sep 2014 | JP |
2015-8358 | Jan 2015 | JP |
2015-5723812 | May 2015 | JP |
10-2014-0033099 | Mar 2014 | KR |
10-2014-0094468 | Jul 2014 | KR |
10-2014-0112303 | Sep 2014 | KR |
WO 2012104974 | Aug 2012 | WO |
WO 2012170751 | Dec 2012 | WO |
WO 2013012901 | Jan 2013 | WO |
WO 2015005634 | Jan 2015 | WO |
WO 2015008358 | Jan 2015 | WO |
WO 2015020811 | Feb 2015 | WO |
Entry |
---|
Sun, Chao, et al., “SEA-SSD: A Storage Engine Assisted SSD With Application-Coupled Simulation Platform,” IEEE Transactions on Circuits and Systems I: Regular Papers, vol. 62, Issue 1, 2015, 2 pages. |
Extended European Search Report dated Aug. 9, 2016 for EP16172142. |
Kang et al., "The Multi-streamed Solid-State Drive," HotStorage '14: 6th USENIX Workshop on Hot Topics in Storage and File Systems, Jun. 17-18, 2014. |
Kang et al., "The Multi-Streamed Solid State Drive," Memory Solutions Lab, Memory Division, Samsung Electronics Co., 2014. |
Ryu et al., “FlashStream: a multi-tiered storage architecture for adaptive HTTP streaming,” Proceedings of the 21st ACM international conference on Multimedia, ACM, 2013, https://doi.org/10.1145/2502081.2502122. |
Stoica et al., "Improving Flash Write Performance by Using Update Frequency," The 39th International Conference on Very Large Data Bases, Aug. 26-30, 2013, Riva del Garda, Trento, Italy, Proceedings of the VLDB Endowment, vol. 6, No. 9, pp. 733-744. |
U.S. Appl. No. 15/090,799, filed Apr. 5, 2016. |
Wu et al., "A File System FTL Design for Flash Memory Storage Systems," EDAA, ISBN 978-3-9810801-5-5, Sep. 2009. |
Number | Date | Country | |
---|---|---|
20220171740 A1 | Jun 2022 | US |
Number | Date | Country | |
---|---|---|
62245100 | Oct 2015 | US | |
62192045 | Jul 2015 | US |
Relation | Number | Date | Country |
---|---|---|---|
Parent | 16676356 | Nov 2019 | US |
Child | 17671481 | US | |
Parent | 15090799 | Apr 2016 | US |
Child | 16676356 | US |