This disclosure is generally related to the field of data storage. More specifically, this disclosure is related to a system and method for facilitating reduction of latency and mitigation of write amplification in a multi-tenancy storage drive.
Today, various storage systems are being used to store and access the ever-increasing amount of digital content. A storage system can include storage servers with one or more storage devices, and a storage device can include physical storage media with a non-volatile memory (such as a solid state drive (SSD) or a hard disk drive (HDD)). A storage system can serve thousands of applications, and input/output (I/O) requests may be received by a respective storage drive from tens of different applications. In such a “multi-tenancy” scenario, a single storage drive may serve many different applications. The performance of each storage drive in a multi-tenancy scenario is thus critical in order to sustain and grow the hyperscale infrastructure.
One current method for data placement in a multi-tenant storage system involves treating all incoming I/O requests evenly to avoid I/O starvation. However, this method can result in a significant write amplification, as described below in relation to
One embodiment provides a system and method for facilitating data placement. During operation, the system receives a chunk of data to be written to a non-volatile memory, wherein the chunk includes a plurality of sectors, and wherein the plurality of sectors are assigned with consecutive logical block addresses. The system writes the sectors from a first buffer to the non-volatile memory at a first physical page address. The system creates, in a data structure, a first entry which maps the logical block addresses of the written sectors to the first physical page address.
In some embodiments, prior to writing the sectors from the first buffer to the non-volatile memory, in response to determining that a first sector is associated with an existing stream for the chunk, the system appends the first sector to one or more other sectors stored in the first buffer, wherein the first buffer is associated with the existing stream. Writing the sectors from the first buffer to the non-volatile memory comprises, in response to detecting that a total size of the stored sectors in the first buffer is the same as a first size of a physical page in the non-volatile memory, writing the stored sectors from the first buffer to the non-volatile memory.
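For concreteness, the append-and-flush behavior described above can be sketched as follows, assuming 4 KB sectors and a 16 KB physical page (illustrative sizes; the names here, such as StreamBuffer and append_sector, are hypothetical rather than taken from this disclosure):

SECTORS_PER_PAGE = 4      # assumes 4 KB sectors and a 16 KB physical page

nand = []                 # stand-in for the non-volatile memory: a list of pages
mapping_table = {}        # logical-page index (LBA >> 2) -> physical page address

class StreamBuffer:
    def __init__(self):
        self.sectors = []  # (lba, data) pairs awaiting a full physical page

def append_sector(buf, lba, data):
    buf.sectors.append((lba, data))
    # A write commit can be acknowledged to the application at this point.
    if len(buf.sectors) == SECTORS_PER_PAGE:   # total size equals one physical page
        ppa = len(nand)                        # next free physical page address
        nand.append([d for _, d in buf.sectors])
        mapping_table[buf.sectors[0][0] >> 2] = ppa  # one entry per written page
        buf.sectors.clear()                    # buffer space marked as available

# Usage: four sectors with consecutive LBAs 100..103 fill exactly one page.
buf = StreamBuffer()
for lba in range(100, 104):
    append_sector(buf, lba, "sector-%d" % lba)
print(mapping_table)                           # {25: 0}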
In some embodiments, the system marks as available a space in the first buffer corresponding to the written sectors. The first buffer can be stored in a volatile cache or a non-volatile memory.
In some embodiments, in response to appending the first sector to one or more other sectors stored in the first buffer, the system generates an acknowledgment of a write commit for an application from which the chunk of data is received.
In some embodiments, in response to determining that a second sector is not associated with an existing stream for the chunk, and in response to successfully allocating a second buffer associated with a new stream, the system writes the second sector to the second buffer. In response to unsuccessfully allocating the second buffer, the system successfully obtains a reserved buffer from a reserved pool of buffers and writes the second sector to the reserved buffer.
In some embodiments, in response to unsuccessfully allocating the second buffer, the system performs the following operations: unsuccessfully obtains a reserved buffer from a reserved pool of buffers; identifies a third buffer with sectors of a total size less than the first size; appends dummy data to the third buffer to obtain third data of the first size; writes the third data from the third buffer to the non-volatile memory at a second physical page address; marks as available a space in the third buffer corresponding to the third data; creates, in the data structure, a second entry which maps logical block addresses of sectors of the third data to the second physical page address; allocates the third buffer as the new buffer; and writes the second sector to the third buffer.
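The allocation order described in this paragraph (a new stream buffer first, then the reserved pool, then recycling a partially filled buffer) can be sketched as follows, continuing the sketch above; the function name and the dummy pattern are assumptions, not taken from this disclosure:

DUMMY_SECTOR = (None, b"\x00" * 4096)   # assumed predetermined dummy pattern

def buffer_for_new_stream(free_buffers, reserved_pool, open_buffers, flush):
    if free_buffers:                     # second buffer allocated successfully
        return free_buffers.pop()
    if reserved_pool:                    # otherwise obtain a reserved buffer
        return reserved_pool.pop()
    # Otherwise identify a third buffer holding less than a full page, pad it
    # with dummy data to the first size, write it out (flush programs the page
    # and creates the mapping entry), and reuse it for the new stream.
    victim = min(open_buffers, key=lambda b: len(b.sectors))
    pad = SECTORS_PER_PAGE - len(victim.sectors)
    victim.sectors.extend([DUMMY_SECTOR] * pad)
    flush(victim)
    open_buffers.remove(victim)
    return victim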
In some embodiments, the chunk comprises a plurality of logical extents and is associated with a unique application. A respective logical extent comprises a plurality of logical pages. A respective logical page comprises one or more sectors with consecutive logical block addresses. A logical block address corresponds to a sector of the chunk.
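As an illustration of this hierarchy, an LBA can be decomposed into extent, logical-page, and sector indices. The sizes below are assumptions made for the sake of the example (four 4 KB sectors per logical page; the disclosure does not fix the number of logical pages per extent):

SECTORS_PER_PAGE = 4       # one logical page = four 4 KB sectors
PAGES_PER_EXTENT = 256     # assumed extent size, for illustration only

def locate(lba, chunk_base_lba=0):
    offset = lba - chunk_base_lba
    extent = offset // (PAGES_PER_EXTENT * SECTORS_PER_PAGE)
    page = (offset // SECTORS_PER_PAGE) % PAGES_PER_EXTENT
    sector = offset % SECTORS_PER_PAGE
    return extent, page, sector

print(locate(1030))        # (1, 1, 2): extent 1, its second page, third sector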
In some embodiments, the non-volatile memory comprises a NAND-based flash memory, the respective logical page is a NAND logical page, and the first physical page address corresponds to a NAND physical page.
In some embodiments, the data structure is stored in the volatile cache and the non-volatile memory, and the first entry indicates the logical block addresses of the written sectors based on the two least significant bits.
In the figures, like reference numerals refer to the same figure elements.
The following description is presented to enable any person skilled in the art to make and use the embodiments, and is provided in the context of a particular application and its requirements. Various modifications to the disclosed embodiments will be readily apparent to those skilled in the art, and the general principles defined herein may be applied to other embodiments and applications without departing from the spirit and scope of the present disclosure. Thus, the embodiments described herein are not limited to the embodiments shown, but are to be accorded the widest scope consistent with the principles and features disclosed herein.
Overview
The embodiments described herein facilitate an improved storage system which decreases the latency and mitigates the write amplification of I/O requests in a multi-tenant storage system by: assigning consecutive logical block addresses (LBAs) to sectors in large logical extents of a chunk of data; maintaining a simplified mapping table which uses a smaller amount of memory; and using stream buffers which reshuffle and group data into sizes corresponding to the size of a unit in the physical storage media.
As described above, a storage system can serve thousands of applications, and input/output (I/O) requests may be received by a respective storage drive from tens of different applications. Because a single storage drive may serve many different applications, the performance of each storage drive in such a multi-tenancy scenario is critical in order to sustain and grow the hyperscale infrastructure.
The I/O requests received by a single drive at any given moment can be mixed. The mixed I/O requests can form a large number of combinations which are difficult to predict and, consequently, difficult to optimize in advance. Furthermore, the mixed I/O requests can involve multiple differing features, e.g.: read or write requests of different sizes, priorities, and types (sequential/random); I/O requests which require different amounts of bandwidth, processing, and storage; and applications of different priorities sending mixed requests at differing frequencies and intervals. Thus, it can be challenging to provide optimization on all I/O requests without sacrificing performance for a few. As the number of applications being served continues to increase, the performance of each drive in a multi-tenancy scenario can become more critical to support the development of the storage infrastructure.
One current method for data placement in a multi-tenancy storage system involves treating all incoming I/O requests evenly to avoid I/O starvation. To ensure that the I/Os from all applications can be served to sufficiently satisfy the service level agreements (SLAs), the system can mark a respective I/O request with the corresponding application identifier. This allows a respective storage drive to select an I/O evenly (e.g., based on an equal chance), which can result in consolidating I/O requests from the multiple “tenants” (i.e., applications). However, this method can result in a significant write amplification, as out-of-date sectors can create “holes” in the physical pages. Subsequently, when the physical pages with holes are to be recycled, valid sectors (surrounding the holes or in the same physical page to be recycled) must be copied out and re-programmed. This can lead to a significant write amplification, as described below in relation to
The embodiments described herein address these challenges by providing a system which merges I/O requests from a large number of applications into chunks, which are then written to the storage drives. The system can divide a data chunk into a plurality of logical extents, where a logical extent can include consecutive logical block addresses (LBAs). A logical extent can include a plurality of logical pages, and a logical page can include sectors with consecutive LBAs. The size of a logical page can match the physical access granularity of current physical storage media, such as NAND flash, as described below in relation to
Using a logical page which matches the size of a physical page in NAND flash allows the system to maintain a simplified mapping table, which uses a smaller amount of memory and can also provide a faster query latency, as described below in relation to
Thus, the embodiments described herein can provide a reduced latency, both in accessing the mapping table and the data stored in like groups in physical pages of the storage media. The system can mitigate the write amplification by using stream buffers which hold and organize the sectors based on their application or chunk identifiers, and by writing the data in like groups of a size which matches a physical unit in the NAND flash (e.g., to a physical NAND page). These improvements can result in a multi-tenant storage system with increased efficiency and performance.
A “distributed storage system” can include multiple storage servers. A “storage server” or a “storage system” refers to a computing device which can include multiple storage devices or storage drives. A “storage device” or a “storage drive” refers to a device or a drive with a non-volatile memory which can provide persistent storage of data, e.g., a solid state drive (SSD) or a hard disk drive (HDD).
The terms “multi-tenant storage system” and “multi-tenancy storage system” refer to a scenario in which a single system serves multiple customers or “tenants.” One example is a single storage drive which serves multiple applications, customers, or users.
The term “simplified mapping table” refers to a mapping table which has a shorter depth and width than a conventional mapping table.
The terms “logical page” and “logical NAND page” refer to a unit of data whose size matches a physical access granularity of NAND flash, e.g., of a physical NAND page.
The terms “NAND page address” and “NPA” refer to a physical address or location of a page in the storage media of physical NAND flash.
The term “I/O starvation” refers to an imbalance among I/O requests from multiple applications. Some applications may have I/O requests which incur a longer latency, which may violate a service level agreement (SLA).
Exemplary Data Placement in a Multi-Tenant Storage System in the Prior Art
As described above, one current method for data placement in a multi-tenant storage system involves treating all incoming I/O requests evenly to avoid I/O starvation. To ensure that the I/Os from all applications can be served to sufficiently satisfy the service level agreements (SLAs), the system can mark a respective I/O request with the corresponding application identifier. This allows a respective storage drive to select an I/O evenly or equally (e.g., based on an equal chance), which can result in consolidating I/O requests from the multiple tenants or applications. However, this method can result in a significant write amplification, as out-of-date sectors can create holes in the physical pages. Subsequently, when the physical pages with holes are to be recycled, valid sectors in the physical pages to be recycled must be copied out and re-programmed. This can lead to a significant write amplification, as described below in relation to
For example, the following three portions of data can be written to or placed into a block 140 (via a communication 172): data LBA 11 111 from chunk 1 110; data LBA 21 121 from chunk 2 120; and data LBA 31 131 from chunk 3 130. Similarly, the following three portions of data can be written to or placed into a block 150 (via a communication 174): data LBA 12 112 from chunk 1 110; data LBA 22 122 from chunk 2 120; and data LBA 32 132 from chunk 3 130. Similarly, the following three portions of data can be written to or placed into a block 160 (via a communication 176): data LBA 13 113 from chunk 1 110; data LBA 23 123 from chunk 2 120; and data LBA 33 133 from chunk 3 130.
The system of environment 100 can gather sectors of data from different applications (shown as portions of different chunks) to form a physical page which is the same size as a NAND program unit. While this can achieve execution of a write command, it can also lead to a significant write amplification, which can affect the performance of the storage drive.
When the system updates existing (stored) data, certain stored portions (which are spread across multiple blocks) may be marked as invalid. This can create holes in the physical pages. Subsequently, when the system performs a garbage collection or recycling process, the system must copy out the valid data from the units which hold the invalid data (e.g., the holes) to release the capacity in order to accommodate incoming sectors. For example, when an update 178 occurs related to chunk 2 120, the system can mark the following three portions as invalid: LBA 21 121 in block 140; LBA 22 122 in block 150; and LBA 23 123 in block 160. During a subsequent garbage collection or recycling process, the system must copy out the valid sectors from those blocks (e.g.: LBA 11 111 and LBA 31 131 from block 140; LBA 12 112 and LBA 32 132 from block 150; and LBA 13 113 and LBA 33 133 from block 160) in order to allow the storage media to be re-programmed. This write amplification can result in a decreased performance, as the increase in program/erase cycles can reduce the lifespan of the storage media and also consume bandwidth otherwise available for handling I/O requests (e.g., NAND read/write operations).
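A back-of-envelope figure for the example above, taking write amplification as total NAND writes divided by host writes (a common definition; the per-block sector counts follow the example, and no numbers here come from this disclosure):

host_sectors_written = 9    # initial placement: three sectors into each of three blocks
update_sectors = 3          # the update rewrites LBAs 21, 22, and 23 elsewhere
gc_copied_sectors = 6       # valid sectors copied out of the three recycled blocks

nand_writes = host_sectors_written + update_sectors + gc_copied_sectors
host_writes = host_sectors_written + update_sectors
print(nand_writes / host_writes)    # 1.5: each host write costs 1.5 NAND writes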
Using Logical Extents and Logical Pages to Facilitate Data Placement
For example, logical extent 222 can include logical NAND pages 230, 240, 250, 260, and 270, where each logical NAND page can include four 4 KB sectors with consecutive LBAs. For example: logical NAND page 230 can include LBAs 232, 234, 236, and 238; logical NAND page 240 can include LBAs 242, 244, 246, and 248; logical NAND page 250 can include LBAs 252, 254, 256, and 258; logical NAND page 260 can include LBAs 262, 264, 266, and 268; and logical NAND page 270 can include LBAs 272, 274, 276, and 278. Using these logical NAND pages can also result in an optimization for the flash translation layer (FTL) which can reduce the amount of memory used, as described below in relation to
Exemplary Mapping Table: Prior Art Vs. One Embodiment
In contrast,
Thus, by using the NPA-based mapping table, the embodiments described herein can significantly reduce the usage of memory required for maintaining the mapping table. An FTL (or other) module can maintain mapping table 330 in a volatile cache (such as DRAM) and/or in a persistent media (such as NAND flash).
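A rough sense of the memory savings, assuming an illustrative 4 TB drive, 4 KB LBAs, 16 KB NAND pages, and 4-byte mapping entries (all four figures are assumptions chosen only for the comparison):

CAPACITY = 4 * 2**40               # assumed 4 TB drive
SECTOR, NAND_PAGE = 4 * 2**10, 16 * 2**10
ENTRY_BYTES = 4                    # assumed width of one mapping entry

conventional = CAPACITY // SECTOR * ENTRY_BYTES     # one entry per 4 KB LBA
simplified = CAPACITY // NAND_PAGE * ENTRY_BYTES    # one entry per logical page
print(conventional // 2**20, "MB vs", simplified // 2**20, "MB")   # 4096 MB vs 1024 MB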
Using Stream Buffers to Facilitate Data Placement in a Multi-Tenant Storage System
In addition to using the simplified mapping table of
As the density of NAND flash continues to increase, so does the parallelism of programming NAND flash. A single channel selection can control multiple NAND dies (e.g., three NAND dies), and each NAND die can include multiple planes (e.g., two or four). Thus, selecting a single channel can enable three NAND dies with six total planes. This allows for six NAND physical pages to be programmed together at the same time via one channel. For example, given a NAND physical page of a size of 16 KB, this allows the described system to accumulate 24 LBAs of 4 KB size before programming the NAND physical pages. There is a high likelihood that consecutive LBAs from the same chunk's logical extent can be merged into the NAND page size (e.g., 16 KB). When a single 4 KB I/O enters the data buffer of the storage device, the system can commit that single 4 KB I/O as a success to the corresponding application (e.g., generate an acknowledgment or a notification of a write commit).
Subsequently, the system can asynchronously program or write that single 4 KB I/O from the power-loss protected data buffer to the NAND flash. As long as the data buffer has sufficient capacity protected by charge-backed capacitors, the system can accumulate the small I/Os as described below in relation to
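The accumulation target in this example follows directly from the geometry; the die and plane counts below are the ones used in the example above:

DIES_PER_CHANNEL = 3              # one channel selection controls three NAND dies
PLANES_PER_DIE = 2                # two planes per die
NAND_PAGE_KB, LBA_KB = 16, 4

pages_per_program = DIES_PER_CHANNEL * PLANES_PER_DIE          # 6 pages at once
lbas_accumulated = pages_per_program * NAND_PAGE_KB // LBA_KB  # 24 LBAs of 4 KB
print(pages_per_program, lbas_accumulated)                     # 6 24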
The prior art system of environment 400 can include three NAND dies 430, 436, and 442, and at least two planes per die (e.g., planes 432 and 434 of NAND die 430; planes 438 and 440 of NAND die 436; and planes 444 and 446 of NAND die 442). During operation, the prior art system of environment 400 places the “mixed” data into the various planes of the NAND dies. For example: data 412 is placed into plane 446 of NAND die 442; data 414 is placed into plane 444 of NAND die 442; data 416 is placed into plane 440 of NAND die 436; data 418 is placed into plane 438 of NAND die 436; data 420 is placed into plane 434 of NAND die 430; and data 422 is placed into plane 432 of NAND die 430.
The data placement of
For example, when the system determines that four 4 KB C's are stored in a buffer (not shown), the system writes those four C's to plane 474 of NAND die 470. Similarly, when the system determines that four 4 KB A's are stored in a buffer (not shown), the system writes those four A's to plane 478 of NAND die 476. The snapshot depicted in environment 450 also shows that two D's are held in a stream buffer 454, four A's are held in a stream buffer 456, two E's are held in a stream buffer 458, and four B's are held in a stream buffer 460.
Similarly, the system determines (from
Stream buffer 454 (which holds two sectors of D) and stream buffer 458 (which holds two sectors of E) are currently open or waiting for other similarly identified sectors to form a full NAND physical page. That is, stream buffer 454 is waiting for two more sectors of D, while stream buffer 458 is waiting for two more sectors of E before writing the consequently formed pages of data to the NAND flash.
Once a sector has been written to a stream buffer, the system can generate an acknowledgment of a successful write for a corresponding application. The application can subsequently use the LBA to read and obtain the 4 KB data (e.g., to execute a read request). The system can search the mapping table based on the most significant bits (MSBs) to locate the 16 KB NPA. The NPA points to the physical NAND page with the four LBAs, and the system can subsequently use the two least significant bits (LSBs) to select which 4 KB portion is to be retrieved from or sent out from the NAND flash die. For example, once the correct mapping table entry is located, the system can identify: the first 4 KB sector with LBA LSBs of “00”; the second 4 KB sector with LBA LSBs of “01”; the third 4 KB sector with LBA LSBs of “10”; and the fourth 4 KB sector with LBA LSBs of “11,” e.g., as indicated by logical NAND page 230 (with LBAs and LSBs in
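A minimal sketch of this read path, assuming four 4 KB sectors per 16 KB NAND page (the table layout and names are hypothetical):

SECTOR = 4 * 1024

mapping_table = {25: 7}     # logical-page index (LBA >> 2) -> NPA, e.g., NAND page 7

def read_4k(lba, nand_pages):
    npa = mapping_table[lba >> 2]     # MSBs locate the 16 KB NAND page
    offset = lba & 0b11               # two LSBs select the 4 KB sector ("00".."11")
    page = nand_pages[npa]
    return page[offset * SECTOR:(offset + 1) * SECTOR]

# Usage: LBAs 100..103 share NAND page 7; LBA 102 has LSBs "10" (third sector).
nand_pages = {7: b"".join(bytes([i]) * SECTOR for i in range(4))}
assert read_4k(102, nand_pages) == bytes([2]) * SECTOR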
For example, in stream buffer 510, sectors A1 511, A2 512, A3 513, and A4 514 are accumulated to form a full NAND page, and are thus sent to a physical page (via a communication 552). Also in stream buffer 510, sectors A5 515, A6 516, and A7 517 are waiting for one more sector to form a full page.
Similarly, in stream buffer 520, sectors B1 521, B2 522, B3 523, and B4 524 are accumulated to form a full NAND page, and are thus sent to a physical page (via a communication 554). Also in stream buffer 520, sectors B5 525, B6 526, and B7 527 are waiting for one more sector to form a full page.
Additionally, in stream buffer 530, four sectors (depicted with right-slanting diagonal lines) have already been sent to a physical page (via a communication 556). Also in stream buffer 530, sectors Ki+1 535, Ki+2 536, Ki+3 537, and Ki+4 538 are accumulated to form a full NAND page, and are thus sent to a physical page (via a communication 558).
Because each stream buffer only holds sectors from the logical extent of a given chunk, when the capacity of a given stream buffer approaches a limit, the system must recycle the given stream buffer. If the system experiences a power loss, the system must also recycle the open stream buffers. To recycle a stream buffer, the system can fill an open stream buffer with a predetermined data pattern (e.g., dummy data), and can subsequently program the content as an entire NAND page. The mechanism of allocating and recycling stream buffers is described below in relation to
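A sketch of this recycle step (the 0xA5 fill stands in for the predetermined pattern; the disclosure does not specify one):

SECTORS_PER_PAGE = 4
DUMMY = (None, b"\xa5" * 4096)    # assumed predetermined data pattern

def recycle(stream_buffer, program_page):
    """Pad an open stream buffer to a full page and program it out."""
    pad = SECTORS_PER_PAGE - len(stream_buffer.sectors)
    stream_buffer.sectors.extend([DUMMY] * pad)
    program_page(stream_buffer)   # program the content as an entire NAND page
    stream_buffer.sectors.clear()

def on_power_loss(open_buffers, program_page):
    for buf in open_buffers:      # all open stream buffers must be recycled
        recycle(buf, program_page)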
Exemplary Methods for Facilitating Data Placement in a Multi-Tenant Storage System
The system assigns consecutive logical block addresses to the plurality of sectors (operation 604). If the first sector is not associated with an existing stream for the chunk (decision 606), the operation continues at Label A of
If the first sector is associated with an existing stream for the chunk (decision 606), the system appends the first sector to one or more other sectors stored in a first buffer associated with the existing stream (operation 608). In response to appending the first sector to one or more other sectors stored in the first buffer, the system generates an acknowledgment of a write commit for an application from which the chunk of data is received (not shown). If the system detects that a total size of the stored sectors in the first buffer is not the same as a first size of a physical page in the non-volatile memory (decision 610), the operation returns. In some embodiments, the system checks whether any other sectors remain to be written, and the operation may return to decision 606. Otherwise, if no more sectors remain to be written, the operation returns.
If the system detects that a total size of the stored sectors in the first buffer is the same as a first size of a physical page in the non-volatile memory (decision 610), the system writes the stored sectors from the first buffer to the non-volatile memory at a first physical page address (operation 612). The system marks as available a space in the first buffer corresponding to the written sectors (operation 614). The system creates, in a data structure, a first entry which maps the logical block addresses of the written sectors to the first physical page address (operation 616), and the operation returns.
If the system does not allocate the second buffer successfully (decision 624), the system obtains a reserved buffer from a reserved pool of buffers (operation 628). If the system successfully obtains a reserved buffer (i.e., the reserved pool is not used up) (decision 630), the system writes the first sector to the reserved buffer (operation 632), and the operation continues at operation 610 of
Exemplary Computer System
Content-processing system 718 can include instructions, which when executed by computer system 700, can cause computer system 700 to perform methods and/or processes described in this disclosure. Specifically, content-processing system 718 can include instructions for receiving and transmitting data packets, including data to be read or written, an input/output (I/O) request (e.g., a read request or a write request), a sector, a logical block address, a physical block address, an acknowledgment, and a notification.
Content-processing system 718 can include instructions for receiving a chunk of data to be written to a non-volatile memory, wherein the chunk includes a plurality of sectors (communication module 720). Content-processing system 718 can include instructions for assigning consecutive logical block addresses to the plurality of sectors (LBA-assigning module 722). Content-processing system 718 can include instructions for, in response to determining that a first sector is associated with an existing stream for the chunk (stream buffer-managing module 724), appending the first sector to one or more other sectors stored in a first buffer associated with the existing stream (data-writing module 730). Content-processing system 718 can include instructions for marking as available a space in the first buffer corresponding to the written sectors (stream buffer-managing module 724).
Content-processing system 718 can include instructions for detecting that a total size of the stored sectors in the first buffer is the same as a first size of a physical page in the non-volatile memory (stream buffer-managing module 724). Content-processing system 718 can include instructions for writing the stored sectors from the first buffer to the non-volatile memory at a first physical page address (data-writing module 730). Content-processing system 718 can include instructions for creating, in a data structure, a first entry which maps the logical block addresses of the written sectors to the first physical page address (table-managing module 726).
Content-processing system 718 can include instructions for, in response to appending the first sector to one or more other sectors stored in the first buffer (data-writing module 730), generating an acknowledgment of a write commit for an application from which the chunk of data is received (acknowledgment-generating module 728).
Content-processing system 718 can include instructions for allocating a second buffer associated with a new stream (stream buffer-managing module 724). Content-processing system 718 can include instructions for obtaining a reserved buffer from a reserved pool of buffers (stream buffer-managing module 724).
Data 732 can include any data that is required as input or generated as output by the methods and/or processes described in this disclosure. Specifically, data 732 can store at least: data; a chunk of data; a logical extent of data; a sector of data; a corresponding LBA; a logical page; a PBA; a physical page address (PPA); a NAND physical page address (NPA); a mapping table; an FTL module; an FTL mapping table; an entry; an entry mapping LBAs to an NPA; a request; a read request; a write request; an input/output (I/O) request; data associated with a read request, a write request, or an I/O request; an indicator or marking that a space in a buffer is available to be written to; an acknowledgment or notification of a write commit; a size; a logical page size; a size of a plurality of sectors; a physical page size; a NAND physical page size; a size of a physical granularity in a storage media; a stream buffer; a reserved buffer; a pool of reserved buffers; a most significant bit (MSB); and a least significant bit (LSB).
The data structures and code described in this detailed description are typically stored on a computer-readable storage medium, which may be any device or medium that can store code and/or data for use by a computer system. The computer-readable storage medium includes, but is not limited to, volatile memory, non-volatile memory, magnetic and optical storage devices such as disk drives, magnetic tape, CDs (compact discs), DVDs (digital versatile discs or digital video discs), or other media capable of storing computer-readable media now known or later developed.
The methods and processes described in the detailed description section can be embodied as code and/or data, which can be stored in a computer-readable storage medium as described above. When a computer system reads and executes the code and/or data stored on the computer-readable storage medium, the computer system performs the methods and processes embodied as data structures and code and stored within the computer-readable storage medium.
Furthermore, the methods and processes described above can be included in hardware modules. For example, the hardware modules can include, but are not limited to, application-specific integrated circuit (ASIC) chips, field-programmable gate arrays (FPGAs), and other programmable-logic devices now known or later developed. When the hardware modules are activated, the hardware modules perform the methods and processes included within the hardware modules.
The foregoing embodiments described herein have been presented for purposes of illustration and description only. They are not intended to be exhaustive or to limit the embodiments described herein to the forms disclosed. Accordingly, many modifications and variations will be apparent to practitioners skilled in the art. Additionally, the above disclosure is not intended to limit the embodiments described herein. The scope of the embodiments described herein is defined by the appended claims.