Solid state memory storage devices may be used to store data. Such solid state storage devices may be based on solid state memory such as, for example, Flash Memory, Phase Change Memory (PCM), and Spin Torque Magnetic Random Access Memory. To enhance write performance, a write-back cache may buffer data to be written to the solid state memory.
Specific embodiments of the technology will now be described in detail with reference to the accompanying figures. Like elements in the various figures are denoted by like reference numerals for consistency.
In the following detailed description of embodiments of the technology, numerous specific details are set forth in order to provide a more thorough understanding of the technology. However, it will be apparent to one of ordinary skill in the art that the technology may be practiced without these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description.
Throughout the application, ordinal numbers (e.g., first, second, third, etc.) may be used as an adjective for an element (i.e., any noun in the application). The use of ordinal numbers is not to imply or create any particular ordering of the elements nor to limit any element to being only a single element unless expressly disclosed, such as by the use of the terms “before”, “after”, “single”, and other such terminology. Rather, the use of ordinal numbers is to distinguish between the elements. By way of an example, a first element is distinct from a second element, and the first element may encompass more than one element and succeed (or precede) the second element in an ordering of elements.
In the following description of
In general, embodiments of the technology relate to the implementation of a write-back cache in solid state memory storage systems. A write-back cache may enhance write performance by buffering data to be written to a storage medium, e.g., solid state memory, until the data can be written. The write-back cache may thus serve as a temporary storage where the data to be written may be kept while the solid state memory storage system is serving other write requests and/or other operations.
If no protective measures are in place, a power loss may result in loss and/or corruption of data, because the content of the write-back cache cannot be written to the storage medium, even though the write operations were already acknowledged when the data were received by the write-back cache. Such power losses may occur, for example, due to an electrical problem, or when a solid state storage module is spontaneously removed from a modular solid state storage chassis. The removal of solid state storage modules from the chassis may be a common operation in hot-pluggable storage solutions.
In one or more embodiments of the technology, a backup power source ensures that electric power is provided until all data held by the write-back cache has been written to the storage medium. Accordingly, the backup power source is sized to power a storage module for the amount of time necessary to write all data in the write-back cache to the storage medium. This amount of time depends on the size of the write-back cache and on the rate at which the necessary write operations to the storage medium can be performed. Further, the rate at which these write operations can be performed, in accordance with one or more embodiments of the technology, is variable. If the data remaining in the write-back cache are directed to memory locations whose writing requires different resources that can be accessed simultaneously, the write operations may, at least to some extent, be performed in parallel and may thus be completed rapidly. In contrast, if the data are directed to memory locations whose writing requires the same resources, the write operations can only be performed sequentially, thus requiring more time. Generally, the aggregate throughput for write operations to a storage medium is higher than in the worst case, because typical writes target memory locations whose writing requires different resources that may be accessed in parallel, whereas in the worst case the writes are concentrated on a few resources that become a bottleneck.
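By way of a hedged illustration only (the numbers, die layout, and function names below are assumptions, not part of any embodiment), the following sketch estimates how the time needed to drain a set of buffered writes depends on whether the targeted dies can be written in parallel or whether a single die becomes a bottleneck.

```python
# Hypothetical sketch: estimate how long it takes to drain buffered writes,
# assuming each die absorbs writes at a fixed rate and dies operate in parallel.
from collections import Counter

WRITE_TIME_PER_FRAGMENT_MS = 0.01  # assumed: one die completes one write in 0.01 ms

def drain_time_ms(target_dies):
    """Worst-case drain time is set by the die that received the most fragments."""
    per_die_counts = Counter(target_dies)
    busiest_die_count = max(per_die_counts.values(), default=0)
    return busiest_die_count * WRITE_TIME_PER_FRAGMENT_MS

# 1,000 fragments spread evenly over 10 dies: the dies drain in parallel.
spread_out = [f"die_{i % 10}" for i in range(1000)]
# 1,000 fragments all targeting one die: that die becomes a bottleneck.
concentrated = ["die_0"] * 1000

print(drain_time_ms(spread_out))    # 1.0 ms  (100 fragments per die, written in parallel)
print(drain_time_ms(concentrated))  # 10.0 ms (all 1,000 fragments written sequentially)
```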
In one or more embodiments of the technology, the write-back cache is, therefore, dynamically sized to ensure that all data can be written to the storage medium before the backup power source fails. The write-back cache, in accordance with one or more embodiments of the technology, only accepts new data fragments to be written to the storage medium if it can be guaranteed that, within the time available before the backup power source fails, all data fragments in the write-back cache can be written to the storage medium, based on the mix of write operations to be performed. This may result in a smaller available cache if many write operations require the same resources, thereby causing performance bottlenecks, or in a larger available write-back cache if the writing of the data fragments remaining in the cache requires a mix of resources, thus allowing write operations to be performed in parallel. The bandwidth availability of the targeted resources, in accordance with an embodiment of the technology, is explicitly considered to properly size the available write-back cache.
In one embodiment of the technology, the clients (160A-160M) may be any type of physical system that includes functionality to issue a read request to the storage appliance (100) and/or to issue a write request to the storage appliance (100). Though not shown in
In one embodiment of the technology, the clients (160A-160M) are configured to execute an operating system (OS) that includes a file system, a block device driver, an application programming interface (API) to enable the client to access the storage appliance, and/or a user programming library. The file system, the block device driver and/or the user programming library provide mechanisms for the storage and retrieval of files from the storage appliance (100). More specifically, the file system, the block device driver and/or the user programming library include functionality to perform the necessary actions to issue read requests and write requests to the storage appliance. They may also provide programming interfaces to enable the creation and deletion of files, reading and writing of files, performing seeks within a file, creating and deleting directories, managing directory contents, etc. In addition, they may also provide management interfaces to create and delete file systems. In one embodiment of the technology, to access a file, the operating system (via the file system, the block device driver and/or the user programming library) typically provides file manipulation interfaces to open, close, read, and write the data within each file and/or to manipulate the corresponding metadata.
In one embodiment of the technology, the clients (160A-160M) interface with the fabric (140) of the storage appliance (100) to communicate with the storage appliance (100), as further described below.
In one embodiment of the technology, the storage appliance (100) is a system that includes persistent storage such as solid state memory, and is configured to service read requests and/or write requests from one or more clients (160A-160M).
The storage appliance (100), in accordance with one or more embodiments of the technology, includes one or more storage modules (120A-120N) organized in a storage array (110), a control module (150), and a fabric (140) that interfaces the storage module(s) (120A-120N) with the clients (160A-160M) and the control module (150). Each of these components is described below.
The storage array (110), in accordance with an embodiment of the technology, accommodates one or more storage modules (120A-120N). The storage array may enable a modular configuration of the storage appliance, where storage modules may be added to or removed from the storage appliance (100), as needed or desired. A storage module (120), in accordance with an embodiment of the technology, is described below, with reference to
Continuing with the discussion of the storage appliance (100), the storage appliance includes the fabric (140). The fabric (140) may provide connectivity between the clients (160A-160M), the storage module(s) (120A-120N) and the control module (150) using one or more of the following protocols: Peripheral Component Interconnect (PCI), PCI-Express (PCIe), PCI-eXtended (PCI-X), Non-Volatile Memory Express (NVMe), Non-Volatile Memory Express (NVMe) over a PCI-Express fabric, Non-Volatile Memory Express (NVMe) over an Ethernet fabric, and Non-Volatile Memory Express (NVMe) over an Infiniband fabric. Those skilled in the art will appreciate that the technology is not limited to the aforementioned protocols.
Further, in one or more embodiments of the technology, the storage appliance (100) includes the control module (150). In general, the control module (150) is a hardware module that may be configured to perform administrative tasks such as allocating and de-allocating memory regions in the solid state memory modules (120A-120N) and directing read and write requests issued, e.g., by a client (160A-160M), to the appropriate storage module (120A-120N).
The control module (150) interfaces with the fabric (140) in order to communicate with the storage module(s) (120A-120N) and/or the clients (160A-160M). The control module may support one or more of the following communication standards: PCI, PCIe, PCI-X, Ethernet (including, but not limited to, the various standards defined under the IEEE 802.3a-802.3bj), Infiniband, and Remote Direct Memory Access (RDMA) over Converged Ethernet (RoCE), or any other communication standard necessary to interface with the fabric (140).
In one embodiment of the technology, the storage interface channels (132) establish interfaces between the storage module controller (124) and the solid state memory dies (122) of the storage medium (142). The storage interface channels (132) may be electrical bus interfaces of any type, as needed to support data transfers between the storage module controller and the solid state memory dies (122) of the solid state storage medium (142). A single storage interface channel may establish a one-to-one connection between the storage module controller and a particular solid state memory die, or it may establish a one-to-many switched or non-switched connection between the storage module controller and multiple solid state memory dies. One-to-many configurations may rely, for example, on the address of the memory location to be written to, in order to identify the solid state memory die on which the write operation is to be performed.
In one embodiment of the technology, any component of the storage module (120) that is involved in the writing of a data fragment to the storage medium (142) is considered a resource. For example, a solid state memory die to which the data fragment is written is considered a resource. Further, a storage interface channel connecting to the solid state memory die to which the data fragment is to be written is considered a resource. Other resources may include a partition or block of the memory die, e.g., a particular segment of the solid state memory die to which the data fragment is to be written, or any other independently addressable unit of the solid state memory die. Accordingly, write operations, depending on the targeted memory location, may require different resources. Consider, for example, a first write operation to a memory location in solid state memory die 1 (122.1). In this scenario, the required resources include at least solid state memory die 1 (122.1) and the storage interface channel (132) that connects the storage module controller (124) to solid state memory die 1 (122.1). The other solid state memory dies (122.2-122.9) and their storage interface channels are not considered resources in this scenario because they are not required for the write operation to solid state memory die 1 (122.1). Accordingly, resources, in accordance with one or more embodiments of the technology, are write-operation-specific, i.e., to determine the resources required for the completion of a write operation, the memory location targeted by the write operation needs to be known.
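The write-operation-specific nature of resources may be illustrated with the following sketch; the address layout, the assumed three-dies-per-channel wiring, and the function names are hypothetical and serve only to show how the required resources follow from the targeted memory location.

```python
# Hypothetical sketch: derive the resources required by a write from its physical
# address, assuming the address encodes a die index and a partition index within it.
from typing import NamedTuple, Set

class PhysicalAddress(NamedTuple):
    die: int        # which solid state memory die the write targets
    partition: int  # which partition (independently addressable unit) within the die

def required_resources(addr: PhysicalAddress) -> Set[str]:
    """Resources are write-operation-specific: only the targeted die, its channel,
    and the targeted partition are needed; other dies and channels are not involved."""
    channel = addr.die // 3  # assumed: three dies share one storage interface channel
    return {
        f"die_{addr.die}",
        f"channel_{channel}",
        f"die_{addr.die}_partition_{addr.partition}",
    }

# A write to die 1, partition 4 requires die 1, its channel, and that partition only.
print(required_resources(PhysicalAddress(die=1, partition=4)))
```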
Continuing with the discussion of the storage module (120), shown in
In one embodiment of the technology, the storage module controller (124) includes a processor (128) (e.g., one or more cores, or micro-cores of a processor that are configured to execute instructions) and memory (130) (e.g., volatile memory that may be, but is not limited to, dynamic random-access memory (DRAM), synchronous DRAM, SDR SDRAM, and DDR SDRAM) to perform at least one of the steps described in
In one or more embodiments of the technology, the memory (130) accommodates one or more counters (138). The counter(s) may be manipulated by the processor (128) and/or by the FPGA (126), as described in
In one or more embodiments of the technology, the memory (130) further accommodates one or more counter limits (140). For each counter (138), there may be a counter limit (140). The counter limits may be fixed limits, configured to limit the number of data fragments in the write-back cache such that all data fragments in the write-back cache can be written to the storage medium (142) prior to failure of the backup power source (144).
In one or more embodiments of the technology, counters and counter limits are resource-specific. For example, a counter limit may be set for a particular solid state memory die. Assume, for example, that an exemplary solid state memory die can perform 10,000 write operations in 100 ms. Further assume that the backup power source (144) is capable of powering the storage module (120) for one second. The counter limit may thus be set to 100,000, allowing the buffering of 100,000 data fragments to be written to the solid state memory die. These writes can all be completed before the backup power source fails. The corresponding counter is monitored to prevent the acceptance of additional data fragments in the write-back cache once the limit, set by the counter limit, is reached. Different counter limits may be used for different resources. Consider, for example, the use of additional counters for individual partitions on the previously discussed memory die. Although the memory die is capable of performing 10,000 write operations within 100 ms, assume that a partition of the memory die can only handle 100 write operations in that time. Accordingly, it may be important to dedicate a counter to the partition, because if many write requests are directed to a single partition or to a few partitions, the partition may become the resource that causes a bottleneck, even though the aggregate bandwidth of the entire solid state memory would be sufficient. In this example, a counter limit of 1,000 may thus be used to limit the data fragments in the write-back cache that are directed to the partition. Those skilled in the art will appreciate that counter limits may be set for any resource that may become a bottleneck when write operations are performed. Further, counters and corresponding counter limits may, in addition or alternatively, be implemented for groups of resources. For example, assume that a write operation always involves writes to all solid state memory dies (122.1-122.9). In this scenario, a single counter and a single counter limit are sufficient to track write activity to the solid state memory dies. However, even though the storage interface channels (132) are used to transfer the data fragments to be written to the solid state memory dies, the same counter may not be used to track activity on the storage interface channels, because the storage interface channels may also serve other solid state memory dies. A separate counter may thus still be necessary to track the activity of the storage interface channels.
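The counter limits in this example follow from simple arithmetic: the rate at which a resource can absorb writes, multiplied by the time for which the backup power source can power the storage module. A minimal sketch of that computation, using the figures from the example and hypothetical function names, is shown below.

```python
# Hypothetical sketch: derive a per-resource counter limit from the resource's write
# rate and the time the backup power source can keep the storage module powered.
def counter_limit(writes_per_window, window_ms, backup_power_ms):
    """Maximum number of buffered fragments that can still be written before power fails."""
    writes_per_ms = writes_per_window / window_ms
    return int(writes_per_ms * backup_power_ms)

BACKUP_POWER_MS = 1000  # backup power source powers the storage module for one second

# The die absorbs 10,000 writes per 100 ms -> up to 100,000 fragments may be buffered.
print(counter_limit(10_000, 100, BACKUP_POWER_MS))  # 100000
# A single partition absorbs only 100 writes per 100 ms -> up to 1,000 fragments.
print(counter_limit(100, 100, BACKUP_POWER_MS))     # 1000
```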
In one or more embodiments of the technology, the storage module controller (124) includes the previously introduced write-back cache (136). The write-back cache may be any type of memory, e.g., synchronous dynamic random-access memory (SDRAM), suitable for buffering data to be written to the storage medium (142). In one embodiment of the technology, the write-back cache is volatile memory and therefore needs to be continuously powered in order to maintain the stored data. While the write-back cache (136), in accordance with an embodiment of the technology, is of a fixed size, e.g., multiple megabytes (MB), the amount of data that the write-back cache is actually allowed to hold is dynamically controlled by the counter(s) (138), as further described with reference to
In one or more embodiments of the technology, the storage module includes a backup power source. The backup power source (144) is configured to provide electric power to components of the storage module (120) until the content of the write-back cache has been completely written to the storage medium (142). The backup power source may be any type of electric power source, including, but not limited to, a battery, a capacitor, and a super-capacitor.
One skilled in the art will recognize that the architecture of the system is not limited to the components shown in
Further, those skilled in the art will appreciate that, in the storage module (120), any component may be considered a resource if it is relied upon for the completion of a write operation. As previously noted, combinations of counters and counter limits may be used to track activity of resources and to limit the number of data fragments targeting a particular resource or set of resources. Counters and counter limits may be assigned to any resource or to any combination of resources, without departing from the technology.
Turning to
In Step 202, the physical address corresponding to the logical address is identified. This address translation may enable the storage appliance to identify the memory location where the data fragment is to be stored. The address translation may be performed by the storage module controller, e.g., by an FPGA of the storage module controller that may be specialized in rapidly performing logical to physical address translations during write or read operations. The address translation may be based on the mapping established by the map variables in the memory region record.
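A minimal sketch of such a logical-to-physical translation is shown below; the map variables, the record layout, and the arithmetic are illustrative assumptions, as the actual translation may be performed in hardware, e.g., by an FPGA of the storage module controller.

```python
# Hypothetical sketch: translate a logical address to a physical address using
# map variables from a memory region record (names and layout are assumptions).
class MemoryRegionRecord:
    def __init__(self, base_physical_address, region_offset):
        self.base_physical_address = base_physical_address
        self.region_offset = region_offset

def logical_to_physical(logical_address, record):
    """Apply the mapping established by the map variables in the memory region record."""
    return record.base_physical_address + (logical_address - record.region_offset)

record = MemoryRegionRecord(base_physical_address=0x4000, region_offset=0x100)
print(hex(logical_to_physical(0x180, record)))  # 0x4080
```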
In Step 204, the destination storage module in which the data fragment is to be stored is identified, based on the physical address obtained in Step 202.
In Step 206, a determination is made about whether capacity is available to accommodate the data fragment. In other words, it is determined whether the data fragment can be forwarded directly to the destination storage module, or whether it can be buffered prior to forwarding it to the destination storage module. If free capacity is available, the method may proceed to Step 208, where the data fragment is either stored in a buffer or directly forwarded to the destination storage module. If no free capacity is available, the method may proceed to Step 210.
In Step 210, the execution of the write request is delayed until it can be accommodated. Alternatively, the write request may be denied. If the write request is denied, the client that provided the write request may be notified that the write request was denied.
Alternatively, Steps 206-210 may be skipped, and the data fragment may be directly processed as described in
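Where Steps 206-210 are performed, their control flow might be sketched as follows; the queue, its capacity, and the delay/deny policy are illustrative assumptions rather than a definitive implementation.

```python
# Hypothetical sketch of Steps 206-210: buffer the fragment if capacity is available;
# otherwise either delay the write (block until space frees up) or deny it.
import queue

incoming_buffer = queue.Queue(maxsize=64)  # assumed fixed buffer capacity

def handle_write_request(data_fragment, deny_when_full=False):
    try:
        # Step 206/208: if capacity is available (or once it becomes available when
        # delaying), the fragment is buffered for forwarding to the storage module.
        incoming_buffer.put(data_fragment, block=not deny_when_full)
        return "buffered"
    except queue.Full:
        # Step 210 (alternative): deny the request so the client can be notified.
        return "denied"
```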
Turning to
In Step 302, one resource of the identified required resources is selected. Resources may be selected in any order, without departing from the technology.
In Step 304, a determination is made about whether the counter specific to the selected resource is less than the counter_limit for the selected resource. This determination establishes whether the additional data fragment, if stored in the write-back cache, can still be written to the storage medium in a timely manner in case of a power failure. If a determination is made that the counter has reached the counter_limit, the method may remain in Step 304, i.e., the method may not proceed with storing the data fragment in the write-back cache until the counter has been decremented, e.g., after another data fragment, currently stored in the write-back cache and that required the resource, has been written to the storage medium, thus freeing up bandwidth for the newly received data fragment. If a determination is made that the counter is less than the counter_limit, the method may proceed to Step 306.
In Step 306, a determination is made about whether additional required resources are remaining, i.e., resources that are also required for the writing of the data fragment to the storage medium. If such required resources are remaining, the method may return to Step 302, to repeat Steps 302 and 304 for this required resource. If no required resources are remaining, the method may proceed to Step 308.
Steps 302-306, in combination, ensure that no data fragment is stored in the write-back cache unless all resources required for the writing of the data fragment to the storage medium have sufficient bandwidth to guarantee that the write can be completed within the time during which the storage module can be powered by the backup power source. The storing of a data fragment in the write-back cache may thus be delayed until the required resources have sufficient bandwidth.
In Step 308, the data fragment is obtained from the buffer or directly from the client, via the fabric, and in Step 310, the data fragment is stored in the write-back cache.
In Step 312, the counter(s) are incremented. Specifically, any counter that is associated with the required resource(s) identified in Step 300 is incremented.
In Step 314, the storage module acknowledges the write request. Accordingly, the storage appliance considers the write as completed, even though the data fragment to be written currently resides in the write-back cache and has not yet been written to the storage medium.
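Taken together, Steps 300-314, along with the counter decrement mentioned in Step 304, might be sketched in software as follows; the resource names, counter limits, data structures, and threading model are assumptions for illustration only, not a definitive implementation of the storage module controller.

```python
# Hypothetical sketch of the write-back cache admission path (Steps 300-314) and of
# the counter decrement that frees bandwidth once a buffered fragment has been written.
import threading
from collections import defaultdict

counters = defaultdict(int)   # per-resource counters (138)
counter_limits = {            # per-resource counter limits (example and assumed values)
    "die_1": 100_000,
    "channel_0": 200_000,
    "die_1_partition_4": 1_000,
}
write_back_cache = []
state_changed = threading.Condition()

def admit_fragment(data_fragment, required_resources):
    """Store a fragment in the write-back cache only once every required resource has
    headroom below its counter limit (Steps 302-312), then acknowledge (Step 314)."""
    with state_changed:
        # Steps 302-306: wait until every required resource's counter is below its limit;
        # resources without an explicit limit are treated as unconstrained (an assumption).
        while any(counters[r] >= counter_limits.get(r, float("inf"))
                  for r in required_resources):
            state_changed.wait()
        write_back_cache.append(data_fragment)  # Step 310: buffer the fragment
        for r in required_resources:            # Step 312: increment each counter
            counters[r] += 1
    return "acknowledged"                       # Step 314: acknowledge the write request

def complete_write(data_fragment, required_resources):
    """Once the fragment has been written to the storage medium, decrement the counters,
    freeing headroom for newly received fragments (as noted in Step 304)."""
    with state_changed:
        write_back_cache.remove(data_fragment)
        for r in required_resources:
            counters[r] -= 1
        state_changed.notify_all()

# Example: admit a fragment that requires die 1, its channel, and a partition on die 1.
print(admit_fragment(b"fragment-0", {"die_1", "channel_0", "die_1_partition_4"}))
```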
Turning to
Embodiments of the technology enable solid state storage systems in which spontaneous power failures do not result in the loss of data that are buffered in a write-back cache of a storage module. In a system in accordance with one or more embodiments of the technology, the amount of data allowed in the write-back cache is limited such that, within the time during which a backup power source powers the storage module, the data in the write-back cache can be written to the storage medium. The limitation on the write-back cache is dynamically controlled such that a maximum volume of data, for which the writing to the storage medium can be guaranteed, can always be held by the write-back cache. Embodiments of the technology thus provide performance advantages over solutions in which the write-back cache is limited to a small, constant size. Embodiments of the technology may be employed, in particular, in storage systems that support hot-pluggable storage modules that may be spontaneously removed during operation. In such a system, all data in the write-back cache may be written to the storage medium, without a loss of data, in accordance with one or more embodiments of the technology.
While the technology has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this disclosure, will appreciate that other embodiments can be devised which do not depart from the scope of the technology as disclosed herein. Accordingly, the scope of the technology should be limited only by the attached claims.