This disclosure relates to data storage and, in particular, to systems and methods for efficiently referencing data stored on a non-volatile storage medium.
A storage system may map logical addresses to storage locations of a storage device. Physical addressing metadata used to reference the storage locations may consume significant memory resources. Moreover, the size of the physical addressing metadata may limit the size of the storage resources the system is capable of referencing.
Disclosed herein are embodiments of a method for referencing data on a storage medium. The method may comprise arranging a plurality of data segments for storage at respective offsets within a storage location of a solid-state storage medium, mapping front-end addresses of the data segments to an address of the storage location in a first index, and generating a second index configured for storage on the solid-state storage medium, wherein the second index is configured to associate the front-end addresses of the data segments with respective offsets of the data segments within the storage location. In some embodiments, the method further includes compressing one or more of the data segments for storage on the solid-state storage medium such that a compressed size of the compressed data segments differs from an uncompressed size of the data segments, wherein the offsets of the data segments within the storage location are based on the compressed size of the one or more data segments.
The disclosed method may further comprise storing the second index on the storage medium. The second index may be stored on the storage location that comprises the plurality of data segments. The offsets may be omitted from the first index, which may reduce the overhead of the first index and/or allow the first index to reference a larger storage address space. The storage address of a data segment associated with a particular front-end address may be determined by use of a storage location address mapped to the particular front-end address in the first index and a data segment offset associated with the particular front-end address of the second index stored on the storage location. Accessing a requested data segment of a specified front-end address may include accessing a physical address of a storage location mapped to the specified front-end address in the first index, and reading the second index stored on the storage location to determine an offset of the requested data segment within the storage location.
Disclosed herein are embodiments of an apparatus for referencing data stored on a storage medium. The apparatus may include a storage layer configured to store data packets within storage units of a non-volatile storage medium, wherein the storage units are configured to store a plurality of data packets, a data layout module configured to determine relative locations of the stored data packets within the storage units, and an offset index module configured to generate offset indexes for the storage units based on the determined relative locations of the data packets stored within the storage units, wherein the offset index of a storage unit is configured to associate logical identifiers of data packets stored within the storage unit with the determined relative locations of the data packets within the storage unit.
In some embodiments, the disclosed apparatus further includes a compression module configured to compress data of one or more of the data packets, such that a compressed size of the data differs from an uncompressed size of the data, wherein the offset index module is configured to determine the offsets of the data packets based on the compressed size of the data. The apparatus may further comprise a translation module which may be used to associate logical identifiers with media addresses of storage units comprising data packets corresponding to the logical identifiers, wherein the storage layer is configured to access a data packet corresponding to a logical identifier by use of a media address of a storage unit associated with the logical identifier by the translation module, and an offset index indicating a relative location of the data packet within the storage unit, wherein the offset index is stored at a pre-determined location within the storage unit.
The storage layer may be configured to store the offset indexes of the storage units at pre-determined locations within the storage units. The storage layer may be further configured to store each offset index within the storage unit that comprises data packets indexed by the offset index.
The storage medium may comprise a solid-state storage array comprising a plurality of columns, each column comprising a respective solid-state storage element, and wherein each of the storage units comprises physical storage units on two or more columns of the solid-state storage array. The solid-state storage array may comprise a plurality of columns, each column comprising a respective solid-state storage element. The offset indexes may indicate a relative location of a data packet within a column of the solid-state storage array. In some embodiments, the storage medium is a solid-state storage array comprising a plurality of independent channels, each channel comprising a plurality of solid-state storage elements, and wherein the offset indexes indicate relative locations of data packets within respective independent channels.
Disclosed herein are further embodiments of a method for referencing data stored on a storage medium, by: segmenting physical addresses of data stored on a solid-state storage array into respective first portions and second portions, wherein the first portions of the physical addresses correspond to storage unit addresses, and wherein the second portions correspond to data offsets within respective storage units, mapping logical addresses of the data to respective first portions of the physical addresses, and storing the second portions of the physical addresses within respective storage units. The method may further comprise compressing the data for storage on the solid-state storage device, wherein the data offsets within respective storage units are based on a compressed size of the data.
Data corresponding to a logical address may be accessed by combining a first portion of the physical address mapped to the logical address with a second portion of the physical address stored on a storage unit corresponding to the first portion of the physical address. In some embodiments, each storage unit comprises a plurality of storage units corresponding to respective solid-state storage elements. Alternatively, or in addition, the storage unit may comprise a page on a solid-state storage element, and the second portions of the physical addresses may correspond to a data offsets within the pages.
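By way of non-limiting illustration, the address segmentation described above could be sketched as follows in Python; the 4-bit offset width and the helper names are assumptions used only for illustration:

    # Minimal sketch of segmenting a physical address into a storage-unit
    # address (first portion) and a data offset within the unit (second
    # portion). The 4-bit offset width is an assumed example value.
    OFFSET_BITS = 4

    def split_physical_address(phys_addr):
        """Return (storage_unit_address, offset_within_unit)."""
        return phys_addr >> OFFSET_BITS, phys_addr & ((1 << OFFSET_BITS) - 1)

    def combine(storage_unit_address, offset):
        """Recombine the first portion (kept in volatile metadata) with the
        second portion (stored on the storage unit itself)."""
        return (storage_unit_address << OFFSET_BITS) | offset

    unit_addr, offset = split_physical_address(0b1011_0110_1101)
    assert combine(unit_addr, offset) == 0b1011_0110_1101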
The computing system 100 may comprise a storage layer 130, which may be configured to provide storage services to one or more storage clients 106. The storage clients 106 may include, but are not limited to: operating systems (including bare metal operating systems, guest operating systems, virtual machines, virtualization environments, and the like), file systems, database systems, remote storage clients (e.g., storage clients communicatively coupled to the computing system 100 and/or storage layer 130 through the network 105), and/or the like.
The storage layer 130 (and/or modules thereof) may be implemented in software, hardware and/or a combination thereof. In some embodiments, portions of the storage layer 130 are embodied as executable instructions, such as computer program code, which may be stored on a persistent, non-transitory storage medium, such as the non-volatile storage resources 103. The instructions and/or computer program code may be configured for execution by the processing resources 101. Alternatively, or in addition, portions of the storage layer 130 may be embodied as machine components, such as general and/or application-specific components, programmable hardware, FPGAs, ASICs, hardware controllers, storage controllers, and/or the like.
The storage layer 130 may be configured to perform storage operations on a storage medium 140. The storage medium 140 may comprise any storage medium capable of storing data persistently. As used herein, “persistent” data storage refers to storing information on a persistent, non-volatile storage medium. The storage medium 140 may include non-volatile storage media such as solid-state storage media in one or more solid-state storage devices or drives (SSD), hard disk drives (e.g., Integrated Drive Electronics (IDE) drives, Small Computer System Interface (SCSI) drives, Serial Attached SCSI (SAS) drives, Serial AT Attachment (SATA) drives, etc.), tape drives, writable optical drives (e.g., CD drives, DVD drives, Blu-ray drives, etc.), and/or the like.
In some embodiments, the storage medium 140 comprises non-volatile solid-state memory, which may include, but is not limited to, NAND flash memory, NOR flash memory, nano RAM (NRAM), magneto-resistive RAM (MRAM), phase change RAM (PRAM), Racetrack memory, Memristor memory, nanocrystal wire-based memory, silicon-oxide based sub-10 nanometer process memory, graphene memory, Silicon-Oxide-Nitride-Oxide-Silicon (SONOS), Resistive random-access memory (RRAM), programmable metallization cell (PMC), conductive-bridging RAM (CBRAM), and/or the like. Although particular embodiments of the storage medium 140 are disclosed herein, the teachings of this disclosure could be applied to any suitable form of memory including both non-volatile and volatile forms. Accordingly, although particular embodiments of the storage layer 130 are disclosed in the context of non-volatile, solid-state storage devices 140, the storage layer 130 may be used with other storage devices and/or storage media.
In some embodiments, the storage medium 140 includes volatile memory, which may include, but is not limited to RAM, dynamic RAM (DRAM), static RAM (SRAM), synchronous dynamic RAM (SDRAM), etc. The storage medium 140 may correspond to memory of the processing resources 101, such as a CPU cache (e.g., L1, L2, L3 cache, etc.), graphics memory, and/or the like. In some embodiments, the storage medium 140 is communicatively coupled to the storage layer 130 by use of an interconnect 127. The interconnect 127 may include, but is not limited to peripheral component interconnect (PCI), PCI express (PCI-e), serial advanced technology attachment (serial ATA or SATA), parallel ATA (PATA), small computer system interface (SCSI), IEEE 1394 (FireWire), Fiber Channel, universal serial bus (USB), and/or the like. Alternatively, the storage medium 140 may be a remote storage device that is communicatively coupled to the storage layer 130 through the network 105 (and/or other communication interface, such as a Storage Area Network (SAN), a Virtual Storage Area Network (VSAN), or the like). The interconnect 127 may, therefore, comprise a remote bus, such as a PCI-e bus, a network connection (e.g., Infiniband), a storage network, Fibre Channel Protocol (FCP) network, HyperSCSI, and/or the like.
The storage layer 130 may be configured to manage storage operations on the storage medium 140 by use of, inter alia, a storage controller 139. The storage controller 139 may comprise software and/or hardware components including, but not limited to: one or more drivers and/or other software modules operating on the computing system 100, such as storage drivers, I/O drivers, filter drivers, and/or the like; hardware components, such as hardware controllers, communication interfaces, and/or the like; and so on. The storage medium 140 may be embodied on a storage device 141. Portions of the storage layer 130 (e.g., the storage controller 139) may be implemented as hardware and/or software components (e.g., firmware) of the storage device 141.
The storage controller 139 may be configured to implement storage operations at particular storage locations of the storage medium 140. As used herein, a storage location refers to a unit of storage of a storage resource (e.g., a storage medium and/or device) that is capable of storing data persistently; storage locations may include, but are not limited to: pages, groups of pages (e.g., logical pages and/or offsets within a logical page), storage divisions (e.g., physical erase blocks, logical erase blocks, etc.), sectors, locations on a magnetic disk, battery-backed memory locations, and/or the like. The storage locations may be addressable within a storage address space 144 of the storage medium 140. Storage addresses may correspond to physical addresses, media addresses, back-end addresses, address offsets, and/or the like. Storage addresses may correspond to any suitable storage address space 144, storage addressing scheme, and/or arrangement of storage locations.
The storage layer 130 may comprise an interface 131 through which storage clients 106 may access storage services provided by the storage layer. The storage interface 131 may include one or more of: a block device interface, a virtualized storage interface, an object storage interface, a database storage interface, and/or other suitable interface and/or Application Programming Interface (API).
The storage layer 130 may provide for referencing storage resources through a front-end interface. As used herein, a front-end interface refers to the identifiers used by the storage clients 106 to reference storage resources and/or services of the storage layer 130. A front-end interface may correspond to a front-end address space 132 that comprises a set, range, and/or extent of front-end addresses or identifiers. As used herein, a front-end address refers to an identifier used to reference data and/or storage resources; front-end addresses may include, but are not limited to: names (e.g., file names, distinguished names, etc.), data identifiers, logical identifiers (LIDs), logical addresses, logical block addresses (LBAs), logical unit number (LUN) addresses, virtual storage addresses, storage addresses, physical addresses, media addresses, and/or the like. In some embodiments, the front-end address space 132 comprises a logical address space, comprising a plurality of logical identifiers, LBAs, and/or the like.
The translation module 134 may be configured to map front-end identifiers of the front-end address space 132 to storage resources (e.g., data stored within the storage address space 144 of the storage medium 140). The front-end address space 132 may be independent of the back-end storage resources (e.g., the storage medium 140); accordingly, there may be no set or pre-determined mappings between front-end addresses of the front-end address space 132 and the storage addresses of the storage address space 144 of the storage medium 140. In some embodiments, the front-end address space 132 is sparse, thinly provisioned, and/or over-provisioned, such that the size of the front-end address space 132 differs from the storage address space 144 of the storage medium 140.
The storage layer 130 may be configured to maintain storage metadata 135 pertaining to storage operations performed on the storage medium 140. The storage metadata 135 may include, but is not limited to: a forward index comprising any-to-any mappings between front-end identifiers of the front-end address space 132 and storage addresses within the storage address space 144 of the storage medium 140, a reverse index pertaining to the contents of the storage locations of the storage medium 140, one or more validity bitmaps, reliability testing and/or status metadata, status information (e.g., error rate, retirement status, and so on), and/or the like. Portions of the storage metadata 135 may be maintained within the volatile memory resources 102 of the computing system 100. Alternatively, or in addition, portions of the storage metadata 135 may be stored on non-volatile storage resources 103 and/or the storage medium 140.
The storage layer 130 may be configured to maintain the any-to-any mappings in a forward map 152. The forward map 152 may comprise any suitable data structure, including, but not limited to: an index, a map, a hash map, a hash table, an extended-range tree, a b-tree, and/or the like. The forward map 152 may comprise entries 153 corresponding to front-end identifiers that have been allocated for use to reference data stored on the storage medium 140. The entries 153 of the forward map 152 may associate front-end identifiers 154A-D with respective storage addresses 156A-D within the storage address space 144. The forward map 152 may be sparsely populated, and as such, may omit entries corresponding to front-end identifiers that are not currently allocated by a storage client 106 and/or are not currently in use to reference valid data stored on the storage medium 140. In some embodiments, the forward map 152 comprises a range-encoded data structure, such that one or more of the entries 153 may correspond to a plurality of front-end identifiers (e.g., a range, extent, and/or set of front-end identifiers). In the
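As a non-limiting sketch, a sparsely populated, range-encoded forward map in the spirit of the forward map 152 could be modeled as follows; the data structure and names here are assumptions rather than the disclosed implementation:

    from bisect import bisect_right

    class ForwardMap:
        """Sparse, range-encoded map of front-end identifiers to storage addresses.
        Each entry covers [start_lid, start_lid + count) and maps to a run of
        storage addresses beginning at base_addr."""
        def __init__(self):
            self._starts = []       # sorted start LIDs
            self._entries = {}      # start LID -> (count, base_addr)

        def insert(self, start_lid, count, base_addr):
            self._entries[start_lid] = (count, base_addr)
            self._starts.insert(bisect_right(self._starts, start_lid), start_lid)

        def lookup(self, lid):
            idx = bisect_right(self._starts, lid) - 1
            if idx < 0:
                return None
            start = self._starts[idx]
            count, base_addr = self._entries[start]
            if lid < start + count:
                return base_addr + (lid - start)
            return None             # LID not currently mapped

    fm = ForwardMap()
    fm.insert(1024, 8, 0x2F000)     # LIDs 1024..1031 -> addresses 0x2F000..0x2F007
    assert fm.lookup(1027) == 0x2F003
    assert fm.lookup(2000) is None  # unallocated LIDs have no entry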
Referring to
A solid-state storage array 115 may also be referred to as a logical storage element (LSE). As disclosed in further detail herein, the solid-state storage array 115 may comprise logical storage units (rows 117). As used herein, a “logical storage unit” or row 117 refers to a logical construct combining two or more physical storage units, each physical storage unit on a respective column 118 of the array 115. A logical erase block refers to a set of two or more physical erase blocks, a logical page refers to a set of two or more pages, and so on. In some embodiments, a logical erase block may comprise erase blocks within respective logical storage elements 115 and/or banks. Alternatively, a logical erase block may comprise erase blocks within a plurality of different arrays 115 and/or may span multiple banks of solid-state storage elements.
Referring back to
The log storage module 136 may be configured to store data sequentially at an append point 180 within the storage address space 144. Data may be appended at the append point 180 and, when the storage location 182 is filled, the append point 180 may advance 181 to a next available storage location. As used herein, an “available” logical page refers to a logical page that has been initialized (e.g., erased) and has not yet been programmed. Some types of storage media can only be reliably programmed once after erasure. Accordingly, an available storage location may refer to a storage division 160A-N that is in an initialized (or erased) state. Storage divisions 160A-N may be reclaimed for use in a storage recovery process, which may comprise relocating valid data (if any) on the storage division 160A-N that is being reclaimed to other storage division(s) 160A-N and erasing the storage division 160A-N.
In the
After storing data on the “last” storage location within the storage address space 144 (e.g., storage location N 189 of storage division 160N), the append point 180 wraps back to the first storage division 160A (or the next available storage division, if storage division 160A is unavailable). Accordingly, the log storage module 136 may treat the storage address space 144 as a loop or cycle.
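By way of non-limiting illustration, the append-point behavior described above (advancing through available storage divisions and wrapping so that the storage address space is treated as a cycle) could be sketched as follows; the division counts and the availability test are assumptions:

    class AppendPoint:
        def __init__(self, num_divisions, units_per_division):
            self.num_divisions = num_divisions
            self.units_per_division = units_per_division
            self.division = 0
            self.unit = 0
            self.initialized = [True] * num_divisions   # erased and writable

        def next_address(self):
            """Return the next write address, advancing and wrapping as needed."""
            if self.unit == self.units_per_division:    # current division is full
                self.unit = 0
                self.division = self._next_available(self.division + 1)
            addr = (self.division, self.unit)
            self.unit += 1
            return addr

        def _next_available(self, start):
            for i in range(self.num_divisions):
                candidate = (start + i) % self.num_divisions   # wrap: loop/cycle
                if self.initialized[candidate]:
                    return candidate
            raise RuntimeError("no initialized storage divisions available")

    ap = AppendPoint(num_divisions=4, units_per_division=2)
    print([ap.next_address() for _ in range(6)])
    # [(0, 0), (0, 1), (1, 0), (1, 1), (2, 0), (2, 1)]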
The storage layer 130 may be configured to modify and/or overwrite data out-of-place. As used herein, modifying and/or overwriting data “out-of-place” refers to performing storage operations at different storage addresses rather than modifying and/or overwriting the data at its current storage location (e.g., overwriting the original physical location of the data “in-place”). Performing storage operations out-of-place may avoid write amplification, since existing, valid data on the storage division 160A-N comprising the data that is being modified need not be erased and/or recopied. Moreover, writing data “out-of-place” may remove erasure from the latency path of many storage operations (the erasure latency is no longer part of the “critical path” of a write operation). In the
In some embodiments, the storage layer 130 is configured to scan the storage address space 144 of the storage medium 140 to identify storage divisions 160A-N to reclaim. As disclosed above, reclaiming a storage division 160A-N may comprise relocating valid data on the storage division 160A-N (if any) and erasing the storage division 160A-N. The storage layer 130 may be further configured to store data in association with persistent metadata (e.g., in a self-describing format). The persistent metadata may comprise information about the data, such as the front-end identifier(s) associated with the data, data size, data length, and the like. Embodiments of a packet format comprising persistent, contextual metadata pertaining to data stored within the storage log are disclosed in further detail below in conjunction with
Referring back to
The storage controller 139 may comprise a request module 231 configured to receive storage requests from the storage layer 130 and/or storage clients 106. The request module 231 may be configured to transfer data to/from the storage controller 139 in response to the requests. Accordingly, the request module 231 may comprise and/or be communicatively coupled to one or more direct memory access (DMA) modules, remote DMA modules, interconnect controllers, bus controllers, bridges, buffers, network interfaces, and the like.
The storage controller 139 may comprise a write module 240 configured to process data for storage on the storage medium 140. In some embodiments, the write module 240 comprises one or more stages configured to process and/or format data for storage on the storage medium 140, which may include, but are not limited to: a compression module 242, a packet module 244, an ECC write module 246, and a write buffer 250. In some embodiments, the write module 240 may further comprise a whitening module, configured to whiten data for storage on the storage medium 140, one or more encryption modules configured to encrypt data for storage on the storage medium 140, and so on. The read module 241 may comprise one or more modules configured to process and/or format data read from the storage medium 140, which may include, but are not limited to: a read buffer 251, the data layout module 248, an ECC read module 247, a depacket module 245, and a decompression module 243.
In some embodiments, the write module 240 comprises a write pipeline configured to process data for storage in a plurality of pipeline stages or modules, as disclosed herein. Similarly, in some embodiments, the read module 241 may comprise a read pipeline configured to process data read from the solid-state storage array 115 in a plurality of pipeline stages or modules, as disclosed herein.
The compression module 242 may be configured to compress data for storage on the storage medium 140. Data may be compressed using any suitable compression algorithm and/or technique. The data compression module 242 may be configured to compress the data, such that a compressed size of the data stored on the storage medium 140 differs from the original, uncompressed size of the data. The compression module 242 may be configured to compress data using different compression algorithms and/or compression levels, which may result in variable compression ratios between the original, uncompressed size of certain data segments and the size of the compressed data segments. The compression module 242 may be further configured to perform one or more whitening transformations on the data segments and/or data packets generated by the packet module 244 (disclosed in further detail below). The data whitening transformations may comprise decorrelating the data, which may provide wear-leveling benefits for certain types of storage media. The compression module 242 may be further configured to encrypt data for storage on the storage medium 140 by use of one or more of a media encryption key, a user encryption key, and/or the like.
The packet module 244 may be configured to generate data packets comprising data to be stored on the storage medium 140. As disclosed above, the write module 240 may be configured to store data in a storage log, in which data segments are stored in association with self-describing metadata in a packet format as illustrated in
In some embodiments, the packet module 244 may be configured to generate packets of arbitrary lengths and/or sizes in accordance with the size of storage requests received via the request module 231, data compression performed by the compression module 242, configuration, preferences, and so on. Alternatively, or in addition, the packet module 244 may be configured to generate packets of one or more pre-determined sizes. In one embodiment, in response to a request to write 24k of data to the solid-state storage medium 110, the packet module 244 may be configured to generate six packets, each packet comprising 4k of the data; in another embodiment, the packet module 244 may be configured to generate a single packet comprising 24k of data in response to the request.
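For illustration only, the two packetization policies described above (fixed-size packets versus a single packet per request) could be sketched as follows, with an assumed 4 KiB packet size:

    def make_packets(data, packet_size=4096):
        """Split request data into fixed-size packets; packet_size=None keeps
        the entire request in a single packet."""
        if packet_size is None:
            return [data]
        return [data[i:i + packet_size] for i in range(0, len(data), packet_size)]

    request = bytes(24 * 1024)                  # a 24 KiB write request
    assert len(make_packets(request)) == 6      # six 4 KiB packets
    assert len(make_packets(request, None)) == 1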
The persistent metadata 314 may comprise the front-end identifier(s) 315 corresponding to the packet data segment 312. Accordingly, the persistent metadata 314 may be configured to associate the packet data segment 312 with one or more LIDs, LBAs, and/or the like. The persistent metadata 314 may be used to associate the packet data segment 312 with the front-end identifier(s) independently of the storage metadata 135. Accordingly, the storage layer 130 may be capable of reconstructing the storage metadata 135 (e.g., the forward map 152) by use of the storage log stored on the storage medium 140. The persistent metadata 314 may comprise other persistent metadata, which may include, but is not limited to, data attributes (e.g., an access control list), data segment delimiters, signatures, links, data layout metadata, and/or the like.
In some embodiments, the data packet 170 may be associated with a log sequence indicator 318. The log sequence indicator 318 may be persisted on the storage division 160A-N comprising the data packet 310. Alternatively, the sequence indicator 318 may be persisted elsewhere on the storage medium 140. In some embodiments, the sequence indicator 318 is applied to the storage divisions 160A-N when the storage divisions 160A-N are reclaimed (e.g., erased, when the first or last storage unit is programmed, etc.). The log sequence indicator 318 may be used to determine the log order of packets 310 within the storage log stored on the storage medium 140 (e.g., determine an ordered sequence of data packets 170).
Referring back to
In some embodiments, the ECC write module 246 is configured to generate ECC codewords, each of which may comprise data of length N and a syndrome of length S. For example, the ECC write module 246 may be configured to encode data segments into 240-byte ECC codewords, each ECC codeword comprising 224 bytes of data and 16 bytes of ECC syndrome information. In this embodiment, the ECC encoding may be capable of correcting more bit errors than the manufacturer of the storage medium 140 requires. In other embodiments, the ECC write module 246 may be configured to encode data in a symbolic ECC encoding, such that each data segment of length N produces a symbol of length X. The ECC write module 246 may encode data according to a selected ECC strength. As used herein, the “strength” of an error-correcting code refers to the number of errors that can be detected and/or corrected by use of the error correcting code. In some embodiments, the strength of the ECC encoding implemented by the ECC write module 246 may be adaptive and/or configurable. The strength of the ECC encoding may be selected according to the reliability and/or error rate of the storage medium 140. As disclosed in further detail herein, the strength of the ECC encoding may be independent of the partitioning and/or data layout on the storage medium 140, which may allow the storage layer 130 to select a suitable ECC encoding strength based on the conditions of the storage medium 140, user requirements, and the like, as opposed to static and/or pre-determined ECC settings imposed by the manufacturer of the storage medium 140.
As illustrated in
The ECC write module 246 may be configured to generate ECC codewords 420A-N having a uniform, fixed size; each ECC codeword 420A-N may comprise N bytes of packet data and S syndrome bytes, such that each ECC codeword 420A-N comprises N+S bytes. In some embodiments, each ECC codeword comprises 240 bytes, and includes 224 bytes of packet data (N) and 16 bytes of error correction code (S). The disclosed embodiments are not limited in this regard, however, and could be adapted to generate ECC codewords 420A-N of any suitable size, having any suitable ratio between N and S. Moreover, the ECC write module 246 may be further adapted to generate ECC symbols, or other ECC codewords, comprising any suitable ratio between data and ECC information.
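By way of non-limiting illustration, the fixed-size codeword accounting described above (N = 224 data bytes plus S = 16 syndrome bytes per 240-byte codeword) could be sketched as follows; the 520-byte packet size is an assumed example:

    import math

    N, S = 224, 16                                # data bytes, syndrome bytes
    CODEWORD = N + S                              # 240-byte codeword

    def codewords_for(packet_bytes):
        """Number of fixed-size codewords needed to hold a packet."""
        return math.ceil(packet_bytes / N)

    def on_media_size(packet_bytes):
        """On-media footprint of the packet after ECC encoding."""
        return codewords_for(packet_bytes) * CODEWORD

    assert codewords_for(520) == 3                # a 520-byte packet spans 3 codewords
    assert on_media_size(520) == 720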
As depicted in
Referring back to
In some embodiments, the write module 240 further comprises a write buffer 250 configured to buffer data for storage within respective page write buffers of the storage medium 140. The write buffer 250 may comprise one or more synchronization buffers to synchronize a clock domain of the storage controller 139 with a clock domain of the storage medium 140 (and/or interconnect 127).
The log storage module 136 may be configured to select storage location(s) for data storage operations and/or may provide addressing and/or control information to the storage controller 139. Accordingly, the log storage module 136 may provide for storing data sequentially at an append point 180 within the storage address space 144 of the storage medium 140. The storage address at which a particular data segment is stored may be independent of the front-end identifier(s) associated with the data segment. As disclosed above, the translation module 134 may be configured to associate the front-end interface of data segments (e.g., front-end identifiers of the data segments) with the storage address(es) of the data segments on the storage medium 140. In some embodiments, the translation module 134 may leverage storage metadata 135 to perform logical-to-physical translations; the storage metadata 135 may include, but is not limited to: a forward map 152 comprising arbitrary, any-to-any mappings 150 between front-end identifiers and storage addresses; a reverse map comprising storage address validity indicators and/or any-to-any mappings between storage addresses and front-end identifiers; and so on. The storage metadata 135 may be maintained in volatile memory, such as the volatile memory 102 of the computing system 100. In some embodiments, the storage layer 130 is configured to periodically store portions of the storage metadata 135 on a persistent storage medium, such as the storage medium 140, non-volatile storage resources 103, and/or the like.
The storage controller 139 may further comprise a read module 241 that is configured to read data from the storage medium 140 in response to requests received via the request module 231. The read module 241 may be configured to process data read from the storage medium 140, and provide the processed data to the storage layer 130 and/or a storage client 106 (by use of the request module 231). The read module 241 may comprise one or more modules configured to process and/or format data read from the storage medium 140, which may include, but is not limited to: a read buffer 251, the data layout module 248, an ECC read module 247, a depacket module 245, and a decompression module 243. In some embodiments, the read module 241 further includes a dewhiten module configured to perform one or more dewhitening transforms on the data, a decryption module configured to decrypt encrypted data stored on the storage medium 140, and so on. Data processed by the read module 241 may flow to the storage layer 130 and/or directly to the storage client 106 via the request module 231, and/or other interface or communication channel (e.g., the data may flow directly to/from a storage client via a DMA or remote DMA module of the storage layer 130).
Read requests may comprise and/or reference the data using the front-end interface of the data, such as a front-end identifier (e.g., a logical identifier, an LBA, a range and/or extent of identifiers, and/or the like). The back-end addresses associated with data of the request may be determined based, inter alia, on the any-to-any mappings 150 maintained by the translation module 134 (e.g., forward map 152), metadata pertaining to the layout of the data on the storage medium 140, and so on. Data may stream into the read module 241 via a read buffer 251. The read buffer 251 may correspond to page read buffers of one or more of the solid-state storage arrays 115A-N. The read buffer 251 may comprise one or more synchronization buffers configured to synchronize a clock domain of the read buffer 251 with a clock domain of the storage medium 140 (and/or interconnect 127).
The data layout module 248 may be configured to reconstruct one or more data segments from the contents of the read buffer 251. Reconstructing the data segments may comprise recombining and/or reordering contents of the read buffer (e.g., ECC codewords) read from various columns 118 in accordance with a layout of the data on the solid-state storage arrays 115A-N as indicated by the storage metadata 135. As disclosed in further detail herein, in some embodiments, reconstructing the data may comprise stripping data associated with one or more columns 118 from the read buffer 251, reordering data of one or more columns 118, and so on.
The read module 241 may comprise an ECC read module 247 configured to detect and/or correct errors in data read from the solid-state storage medium 110 using, inter alia, the ECC encoding of the data (e.g., as encoded by the ECC write module 246), parity data (e.g., using parity substitution), and so on. As disclosed above, the ECC encoding may be capable of detecting and/or correcting a pre-determined number of bit errors, in accordance with the strength of the ECC encoding. The ECC read module 247 may be capable of detecting more bit errors than can be corrected.
The ECC read module 247 may be configured to correct any “correctable” errors using the ECC encoding. In some embodiments, the ECC read module 247 may attempt to correct errors that cannot be corrected by use of the ECC encoding using other techniques, such as parity substitution, or the like. Alternatively, or in addition, the ECC read module 247 may attempt to recover data comprising uncorrectable errors from another source. For example, in some embodiments, data may be stored in a RAID configuration. In response to detecting an uncorrectable error, the ECC read module 247 may attempt to recover the data from the RAID, or other source of redundant data (e.g., a mirror, backup copy, or the like).
In some embodiments, the ECC read module 247 may be configured to generate an interrupt in response to reading data comprising uncorrectable errors. The interrupt may comprise a message indicating that the requested data is in error, and may indicate that the ECC read module 247 cannot correct the error using the ECC encoding. The message may comprise the data that includes the error (e.g., the “corrupted data”).
The interrupt may be caught by the storage layer 130 or other process, which, in response, may be configured to reconstruct the data using parity substitution, or other reconstruction technique, as disclosed herein. Parity substitution may comprise iteratively replacing portions of the corrupted data with a “parity mask” (e.g., all ones) until a parity calculation associated with the data is satisfied. The masked data may comprise the uncorrectable errors, and may be reconstructed using other portions of the data in conjunction with the parity data. Parity substitution may further comprise reading one or more ECC codewords from the solid-state storage array 115A-N (in accordance with an adaptive data structure layout on the array 115), correcting errors within the ECC codewords (e.g., decoding the ECC codewords), and reconstructing the data by use of the corrected ECC codewords and/or parity data. In some embodiments, the corrupted data may be reconstructed without first decoding and/or correcting errors within the ECC codewords. Alternatively, uncorrectable data may be replaced with another copy of the data, such as a backup or mirror copy. In another embodiment, the storage layer 130 stores data in a RAID configuration, from which the corrupted data may be recovered.
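As a simplified, non-limiting sketch of recovery by use of parity data, the fragment below rebuilds a corrupted column from XOR parity; the iterative parity-mask search described above is omitted and the failed column index is assumed to be known, and the 24-data-column geometry is illustrative:

    from functools import reduce

    def xor_bytes(chunks):
        """Byte-wise XOR of equal-length byte strings."""
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)

    def rebuild_column(columns, parity, bad_index):
        """Reconstruct the corrupted column from the surviving columns and parity."""
        survivors = [c for i, c in enumerate(columns) if i != bad_index]
        return xor_bytes(survivors + [parity])

    data_columns = [bytes([i] * 4) for i in range(24)]   # 24 data columns
    parity_column = xor_bytes(data_columns)              # parity of the row
    recovered = rebuild_column(data_columns, parity_column, bad_index=7)
    assert recovered == data_columns[7]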
As depicted in
The storage layer 130 may further comprise a groomer module 138 configured to reclaim storage resources of the storage medium 140. The groomer module 138 may operate as an autonomous, background process, which may be suspended and/or deferred while other storage operations are in process. The log storage module 136 and groomer module 138 may manage storage operations so that data is spread throughout the storage address space 144 of the storage medium 140, which may improve performance and data reliability, and avoid overuse and underuse of any particular storage locations, thereby lengthening the useful life of the storage medium 140 (e.g., wear-leveling, etc.). As disclosed above, data may be sequentially appended to a storage log within the storage address space 144 at an append point 180, which may correspond to a particular storage address within one or more of the banks 119A-N (e.g., physical address 0 of bank 119A). Upon reaching the end of the storage address space 144 (e.g., physical address N of bank 119N), the append point 180 may revert to the initial position (or next available storage location).
As disclosed above, operations to overwrite and/or modify data stored on the storage medium 140 may be performed “out-of-place.” The obsolete version of overwritten and/or modified data may remain on the storage medium 140 while the updated version of the data is appended at a different storage location (e.g., at the current append point 180). Similarly, an operation to delete, erase, or TRIM data from the storage medium 140 may comprise indicating that the data is invalid (e.g., does not need to be retained on the storage medium 140). Marking data as invalid may comprise modifying a mapping between the front-end identifier(s) of the data and the storage address(es) comprising the invalid data, marking the storage address as invalid in a reverse map, and/or the like.
The groomer module 138 may be configured to select sections of the solid-state storage medium 140 for grooming operations. As used herein, a “section” of the storage medium 140 may include, but is not limited to: an erase block, a logical erase block, a die, a plane, one or more pages, a portion of a solid-state storage element 116A-Y, a portion of a row 117 of a solid-state storage array 115, a portion of a column 118 of a solid-state storage array 115, and/or the like. A section may be selected for grooming operations in response to various criteria, which may include, but are not limited to: age criteria (e.g., data refresh), error metrics, reliability metrics, wear metrics, resource availability criteria, an invalid data threshold, and/or the like. A grooming operation may comprise relocating valid data on the selected section (if any). The operation may further comprise preparing the section for reuse, which may comprise erasing the section, marking the section with a sequence indicator, such as the sequence indicator 318, and/or placing the section into a queue of storage sections that are available to store data. The groomer module 138 may be configured to schedule grooming operations with other storage operations and/or requests. In some embodiments, the storage controller 139 may comprise a groomer bypass (not shown) configured to relocate data from a storage section by transferring data read from the section from the read module 241 directly into the write module 240 without being routed out of the storage controller 139.
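By way of non-limiting illustration, a grooming pass (select a section, relocate its valid data to the current append point, and reinitialize the section) could be sketched as follows; the selection criterion and data structures are simplified assumptions:

    def groom(sections, append):
        """sections: list of dicts like {'valid': [...], 'invalid_count': int};
        append: callable that writes a packet at the current append point."""
        victim = max(sections, key=lambda s: s["invalid_count"])
        for packet in victim["valid"]:
            append(packet)                      # rewrite valid data out-of-place
        victim["valid"], victim["invalid_count"] = [], 0   # section erased/reused
        return victim

    log = []
    sections = [
        {"valid": ["A", "B"], "invalid_count": 6},
        {"valid": ["C"], "invalid_count": 1},
    ]
    groom(sections, log.append)
    assert log == ["A", "B"]                    # valid data relocated before erase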
The storage layer 130 may be further configured to manage out-of-service conditions on the storage medium 140. As used herein, a section of the storage medium 140 that is “out-of-service” (OOS) refers to a section that is not currently being used to store valid data. The storage layer 130 may be configured to monitor storage operations performed on the storage medium 140 and/or actively scan the solid-state storage medium 140 to identify sections that should be taken out of service. The storage metadata 135 may comprise OOS metadata that identifies OOS sections of the solid-state storage medium 140. The storage layer 130 may be configured to avoid OOS sections by, inter alia, streaming padding (and/or nonce) data to the write buffer 250 such that padding data will map to the identified OOS sections. In some embodiments, the storage layer 130 may be configured to manage OOS conditions by replacing OOS sections of the storage medium 140 with replacement sections. Alternatively, or in addition, a hybrid OOS approach may be used that combines adaptive padding and replacement techniques; the padding approach to managing OOS conditions may be used in portions of the storage medium 140 comprising a relatively small number of OOS sections; as the number of OOS sections increases, the storage layer 130 may replace one or more of the OOS sections with replacement sections. Further embodiments of apparatus, systems, and methods for detecting and/or correcting data errors, and managing OOS conditions, are disclosed in U.S. Patent Application Publication No. 2009/0287956 (U.S. application Ser. No. 12/467,914), entitled, “Apparatus, System, and Method for Detecting and Replacing a Failed Data Storage,” filed May 18, 2009, and U.S. Patent Application Publication No. 2013/0019072 (U.S. application Ser. No. 13/354,215), entitled, “Apparatus, System, and Method for Managing Out-of-Service Conditions,” filed Jan. 19, 2012 for John Strasser et al., each of which is hereby incorporated by reference in its entirety.
As disclosed above, the storage medium 140 may comprise one or more solid-state storage arrays 115A-N. A solid-state storage array 115A-N may comprise a plurality of independent columns 118 (respective solid-state storage elements 116A-Y), which may be coupled to the storage layer 130 in parallel via the interconnect 127. Accordingly, storage operations performed on an array 115A-N may be performed on a plurality of solid-state storage elements 116A-Y. Performing a storage operation on a solid-state storage array 115A-N may comprise performing the storage operation on each of the plurality of solid-state storage elements 116A-Y comprising the array 115A-N: a read operation may comprise reading a physical storage unit (e.g., page) from a plurality of solid-state storage elements 116A-Y; a program operation may comprise programming a physical storage unit (e.g., page) on a plurality of solid-state storage elements 116A-Y; an erase operation may comprise erasing a section (e.g., erase block) on a plurality of solid-state storage elements 116A-Y; and so on. Accordingly, a program operation may comprise the write module 240 streaming data to program buffers of a plurality of solid-state storage elements 116A-Y (via the write buffer 250 and interconnect 127) and, when the respective program buffers are sufficiently full, issuing a program command to the solid-state storage elements 116A-Y. The program command may cause one or more storage units on each of the storage elements 116A-Y to be programmed in parallel.
The solid-state storage elements 116A-Y may be partitioned into sections, such as physical storage divisions 530 (e.g., physical erase blocks). Each erase block may comprise a plurality of physical storage units 532, such as pages. The physical storage units 532 within a physical storage division 530 may be erased as a group. Although
As depicted in
Storage operations performed on the solid-state storage array 115 may operate on multiple solid-state storage elements 116A-Y: an operation to program data to a logical storage unit 542 may comprise programming data to each of 25 physical storage units (e.g., one storage unit per solid-state storage element 116A-Y); an operation to read data from a logical storage unit 542 may comprise reading data from 25 physical storage units (e.g., pages); an operation to erase a logical storage division 540 may comprise erasing 25 physical storage divisions (e.g., erase blocks); and so on. Since the columns 118 are independent, storage operations may be performed across different sets and/or portions of the array 115. For example, a read operation on the array 115 may comprise reading data from a physical storage unit 532 at a first physical address of solid-state storage element 116A and reading data from a physical storage unit 532 at a different physical address of one or more other solid-state storage elements 116B-Y.
Arranging solid-state storage elements 116A-Y into a solid-state storage array 115 may be used to address certain properties of the storage medium 140. Some embodiments may comprise an asymmetric storage medium 140, in which it takes longer to program data onto the solid-state storage elements 116A-Y than it takes to read data therefrom (e.g., 10 times as long). Moreover, in some cases, data may only be programmed to physical storage divisions 530 that have first been initialized (e.g., erased). Initialization operations may take longer than program operations (e.g., 10 times as long as a program, and by extension 100 times as long as a read operation). Managing groups of solid-state storage elements 116A-Y in an array 115 (and/or in independent banks 119A-N as disclosed herein) may allow the storage layer 130 to perform storage operations more efficiently, despite the asymmetric properties of the storage medium 140. In some embodiments, the asymmetry in read, program, and/or erase operations is addressed by performing these operations on multiple solid-state storage elements 116A-Y in parallel. In the embodiment depicted in
In some embodiments, portions of the solid-state storage array 115 may be configured to store data and other portions of the array 115 may be configured to store error detection and/or recovery information. Columns 118 used for data storage may be referred to as “data columns” and/or “data solid-state storage elements.” Columns used to store data error detection and/or recovery information may be referred to as a “parity column” and/or “recovery column.” The array 115 may be configured in an operational mode in which one of the solid-state storage elements 116Y is used to store parity data, whereas other solid-state storage elements 116A-X are used to store data. Accordingly, the array 115 may comprise data solid-state storage elements 116A-X and a recovery solid-state storage element 116Y. In this operational mode, the effective storage capacity of the rows (e.g., logical pages 542) may be reduced by one physical storage unit (e.g., reduced from 25 physical pages to 24 physical pages). As used herein, the “effective storage capacity” of a storage unit refers to the number of storage units or divisions that are available to store data and/or the total amount of data that can be stored on a logical storage unit. The operational mode described above may be referred to as a “24+1” configuration, denoting that twenty-four (24) physical storage units 532 are available to store data, and one (1) of the physical storage units 532 is used for parity. The disclosed embodiments are not limited to any particular operational mode and/or configuration, and could be adapted to use any number of the solid-state storage elements 116A-Y to store error detection and/or recovery data.
As disclosed above, the storage controller 139 may be configured to interleave storage operations between a plurality of independent banks 119A-N of solid-state storage arrays 115A-N, which may further ameliorate asymmetry between erase, program, and read operations.
Some operations performed by the storage controller 139 may cross bank boundaries. The storage controller 139 may be configured to manage groups of logical erase blocks 540 that include erase blocks of multiple arrays 115A-N within different respective banks 119A-N. Each group of logical erase blocks 540 may comprise erase blocks 531A-N on each of the arrays 115A-N. The erase blocks 531A-N comprising the logical erase block group 540 may be erased together (e.g., in response to a single erase command and/or signal or in response to a plurality of separate erase commands and/or signals). Performing erase operations on logical erase block groups 540 comprising large numbers of erase blocks 531A-N within multiple arrays 115A-N may further mask the asymmetric properties of the solid-state storage medium 140, as disclosed herein.
The storage controller 139 may be configured to perform some storage operations within boundaries of the arrays 115A-N and/or banks 119A-N. In some embodiments, the read, write, and/or program operations may be performed within rows 117 of the solid-state storage arrays 115A-N (e.g., on logical pages 542A-N within arrays 115A-N of respective banks 119A-N). As depicted in
The bank interleave module 252 may be configured to append data to the solid-state storage medium 110 by programming data to the arrays 115A-N in accordance with a sequential interleave pattern. The sequential interleave pattern may comprise programming data to a first logical page (LP_0) of array 115A within bank 119A, followed by the first logical page (LP_0) of array 115B within the next bank 119B, and so on, until data is programmed to the first logical page LP_0 of each array 115A-N within each of the banks 119A-N. As depicted in
Sequentially interleaving programming operations as disclosed herein may increase the time between concurrent programming operations on the same array 115A-N and/or bank 119A-N, which may reduce the likelihood that the storage controller 139 will have to stall storage operations while waiting for a programming operation to complete. As disclosed above, programming operations may take significantly longer than other operations, such as read and/or data streaming operations (e.g., operations to stream the contents of the write buffer 250 to an array 115A-N via the bus 127A-N). The interleave pattern of
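The sequential interleave pattern described above could be generated as in the following non-limiting sketch; the bank and page counts are illustrative assumptions:

    def interleave_pattern(num_banks, pages_per_bank):
        """Yield (bank index, logical page index) in sequential interleave order:
        logical page 0 across every bank, then logical page 1, and so on."""
        for page in range(pages_per_bank):
            for bank in range(num_banks):
                yield (bank, page)

    pattern = list(interleave_pattern(num_banks=4, pages_per_bank=2))
    print(pattern)
    # [(0, 0), (1, 0), (2, 0), (3, 0), (0, 1), (1, 1), (2, 1), (3, 1)]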
As depicted in
The erase block groups of the arrays 115A-N may, therefore, be managed as logical erase blocks 540A-N that span the arrays 115A-N. Referring to
Referring back to
The write module 240 may comprise a packet module 244 configured to generate data packets comprising data segments for storage on the array 115, as disclosed above. In the
The ECC write module 246 is configured to generate ECC datastructures (ECC codewords 620) comprising portions of one or more packets 610, as disclosed above. The ECC codewords 620 may be of a fixed size. In the
The data layout module 248 may be configured to lay out data segments for horizontal storage within rows 117 of the array 115. The data layout module 248 may be configured to buffer and/or arrange data segments (e.g., the ECC codewords 621, 622, and 623) into data rows 667 comprising 24 bytes of data. The data layout module 248 may be capable of buffering one or more ECC codewords 620 (by use of the write buffer 250). In the
The data layout module 248 may be further configured to stream 24-byte data rows to a parity module 637, which may be configured to generate a parity byte for each 24-byte group. The data layout module 248 streams the resulting 25-byte data rows 667 to the array 115 via the bank controller 252 and interconnect 127 (and/or write buffer 250, as disclosed above). The storage controller 139 may be configured to stream the data rows 667 to respective program buffers of the solid-state storage array 115 (e.g., stream to program buffers of respective solid-state storage elements 116A-Y). Accordingly, each cycle of the interconnect 127 may comprise transferring a byte of a data row 667 to a program buffer of a respective solid-state storage element 116A-Y. In the
As illustrated in
The storage locations of the solid-state storage array 115 may be capable of storing a large number of ECC codewords 620 and/or packets 610. For example, the solid-state storage elements may comprise 8 KB pages, such that the storage capacity of a storage location (row 117) is 192 KB. Accordingly, each storage location within the array 115 may be capable of storing approximately 819 240-byte ECC codewords (352 packets 610). The storage address of a data segment may, therefore, comprise: a) the address of the storage location on which the ECC codewords 620 and/or packets 610 comprising the data segment are stored, and b) an offset of the ECC codewords 620 and/or packets 610 within the row 117. The location or offset 636 of the packet 610A within the logical page 542A may be determined based on the horizontal layout of the data packet 610A. The offset 636 may identify the location of the ECC codewords 621, 622, and/or 623 comprising the packet 610A (and/or may identify the location of the last ECC codeword 623 comprising data of the packet 610A). Accordingly, in some embodiments, the offset may be relative to one or more datastructures on the solid-state storage array 115 (e.g., a packet offset and/or ECC codeword offset). Another offset 638 may identify the location of the last ECC codeword of a next packet 610 (e.g., packet 610B), and so on.
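By way of non-limiting illustration, the row-capacity and offset arithmetic described above could be expressed as follows; the 8 KB page size is the assumed example used above:

    PAGE_BYTES = 8 * 1024
    DATA_COLUMNS = 24
    CODEWORD_BYTES = 240

    row_capacity = DATA_COLUMNS * PAGE_BYTES            # 196608 bytes (192 KB)
    codewords_per_row = row_capacity // CODEWORD_BYTES  # approximately 819

    def codeword_byte_offset(codeword_index):
        """Byte offset of a fixed-size codeword within the row's data area."""
        return codeword_index * CODEWORD_BYTES

    print(row_capacity, codewords_per_row, codeword_byte_offset(3))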
As depicted in
Since the data is spread across the columns 0-23 (solid-state storage elements 116A-X), reading data of the ECC codeword 621 may require accessing a plurality of columns 118. Moreover, the smallest read unit may be an ECC codeword 620 (and/or packet 610). Referring back to
Portions of the storage metadata 135, including portions of the forward map 152, may be stored in volatile memory of the computing system 100 and/or storage layer 130. The memory footprint of the storage metadata 135 may grow in proportion to the number of entries 153 that are included in the forward map 152, as well as the size of the entries 153 themselves. The memory footprint of the forward map 152 may be related to the size (e.g., number of bits) used to represent the storage address of each entry 153. The memory footprint of the forward map 152 may impact the performance of the computing system 100 hosting the storage layer 130. For example, the computing system 100 may exhaust its volatile memory resources 102, and be forced to page swap memory to non-volatile storage resources 103, or the like. Even small reductions in the size of the entries 153 may have a significant impact on the overall memory footprint of the storage metadata 135 when scaled to a large number of entries 153.
The size of the storage addresses 156A-D may also determine the storage capacity that the forward map 152 is capable of referencing (e.g., may determine the number of unique storage locations that can be referenced by the entries 153 of the forward map 152). In one embodiment, for example, the entries 153 may comprise 32-bit storage addresses 156A-D. As disclosed above, a portion of each 32-bit storage address 156A-D may be used to address a specific storage location (e.g., logical page), and other portions of the storage address 156A-D may determine the offset within the storage location. If 4 bits are needed to represent storage location offsets, the 32-bit storage addresses 156A-D may only be capable of addressing 2^28 unique storage locations. However, if offset information is stored on non-volatile storage media (e.g., on the logical pages themselves), the full 32 bits of the physical address may be used to reference unique logical pages. Therefore, a 32-bit address may address 2^32 unique logical pages rather than only 2^28 logical pages. Accordingly, segmenting storage addresses may effectively increase the number of unique storage locations that can be referenced by the forward map 152.
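This addressing trade-off can be stated directly, as in the following non-limiting sketch; the 4-bit offset width is the example value used above:

    ADDRESS_BITS = 32
    OFFSET_BITS = 4

    # Offset bits kept in the in-memory address reduce the addressable locations.
    locations_with_inline_offset = 2 ** (ADDRESS_BITS - OFFSET_BITS)   # 268,435,456
    # Moving the offset onto the media frees the full address width.
    locations_with_on_media_offset = 2 ** ADDRESS_BITS                 # 4,294,967,296

    print(locations_with_inline_offset, locations_with_on_media_offset)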
Referring to
The data segment mapped to the front-end address 754 may be stored in the packet 610D. The storage location address 757 (first portion of the storage address 756) comprises the media address of the logical page 542 within the array 115. The offset 759D indicates an offset of the packet 610D within the logical page 542.
Referring to the system 701 of
The storage layer 130 may be configured to leverage the on-media offset index 749 to reduce the size of the entries 153 in the forward map 152 and/or enable the entries 153 to reference larger storage address spaces 144. As illustrated in
The storage layer 130 may determine the full storage address of a data segment by use of the storage location address 757 maintained within the forward map 152 and the offset index 749 stored on the storage medium 140. Accordingly, accessing data associated with the front-end address 754D may comprise a) accessing the storage location address 757 within the entry 153 corresponding to the front-end address 754D in the forward map 152, b) reading the offset index 749 from the logical page 542 at the specified storage location address 757, and c) accessing the packet 610D comprising the data segment at offset 759D by use of the offset index 749.
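A minimal, self-contained sketch of this two-step lookup follows. The dictionaries and byte strings are toy stand-ins for the forward map 152, logical page 542, and offset index 749, not the disclosed on-media formats.

```python
# Toy model of the lookup: forward map gives the page address, the page's
# on-media offset index gives the packet offset within that page.
def parse_offset_index(page: dict) -> dict:
    """Return the offset index 749 stored at a known location in the page."""
    return page["offset_index"]          # {front-end address: packet offset}

def read_data_segment(front_end_addr, forward_map, medium):
    # a) forward map 152 yields only the storage-location address 757
    location_addr = forward_map[front_end_addr]
    # b) read the logical page 542 and parse the on-media offset index 749
    page = medium[location_addr]
    offset = parse_offset_index(page)[front_end_addr]
    # c) access the packet 610 stored at the determined offset
    return page["packets"][offset]

# Usage: one logical page holding two packets plus its offset index.
medium = {0x1A2B: {"packets": {0: b"packet-610C", 520: b"packet-610D"},
                   "offset_index": {0x754C: 0, 0x754D: 520}}}
forward_map = {0x754C: 0x1A2B, 0x754D: 0x1A2B}
assert read_data_segment(0x754D, forward_map, medium) == b"packet-610D"
```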
Referring to the system 702 of
As illustrated in
Referring back to
Reading data corresponding to a front-end address may comprise accessing an entry 153 associated with the front-end address to determine the physical address of the storage location comprising the requested data. The read module 241 may be configured to read the storage location by, inter alia, issuing a read command to one of the solid-state storage arrays 115A-N, which may cause the storage elements 116A-Y comprising the array 115A-N to transfer the contents of a particular page into a read buffer. The offset index module 249 may be configured to determine the offset of the requested data by a) streaming the portion of the read buffer 251 comprising the offset index 749 into the read module 241 and b) parsing the offset index 749 to determine the offset of the requested data. The read module 241 may then access the portions of the read buffer 251 comprising the requested data by use of the determined offset.
As disclosed herein, the packet module 244 may be configured to store data segments 312 in a packet format 310 that comprises persistent metadata 314. The persistent metadata 314 may comprise one or more front-end identifiers 315 corresponding to the data segment 312. Inclusion of the front-end identifiers 315 may increase the on-media overhead imposed by the packet format 310. The offset index 749 generated by the offset index module 249, which, in some embodiments, is stored with the corresponding data packets, may also include the front-end identifier(s) of the data segment 312. Accordingly, in some embodiments, the packet format 310 may be modified to omit the front-end identifier metadata from the persistent metadata 314.
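The potential saving can be illustrated with a rough calculation; the field sizes below are assumptions used only to make the overhead reduction concrete, not the disclosed packet layout.

```python
# Illustrative per-location overhead with and without front-end identifiers
# in the persistent metadata 314 (field widths are assumed values).
FRONT_END_ID_BYTES = 8    # assumed size of a front-end identifier 315
OTHER_METADATA     = 8    # assumed remaining persistent metadata per packet
PACKETS_PER_PAGE   = 352  # packets per storage location (from the earlier example)

with_id    = PACKETS_PER_PAGE * (FRONT_END_ID_BYTES + OTHER_METADATA)
without_id = PACKETS_PER_PAGE * OTHER_METADATA   # identifiers live in the offset index 749

print(with_id - without_id)   # header bytes saved per storage location (2816 here)
```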
Referring back to
The read module 241 may be configured to perform a read operation to read a storage location of one of the solid-state storage arrays 115A, transfer the contents of the storage location into respective read buffers of the solid-state storage elements 116A-Y, and stream the data into the read buffer 251 by use of the 24-byte interconnect 127 and/or bank controller 252. The stream time (Ts) of the read operation may refer to the time required to stream the ECC codewords 620 (and/or packets 610) into the read module 241. In the horizontal data layout of
Given the horizontal data arrangement within the solid-state storage array 115, and the latencies disclosed herein, an input/output operations per second (IOPS) metric may be quantified. The IOPS to read an ECC codeword 620 may be expressed as:
In Equation 1, Tr is the read time of the solid-state storage elements 116A-Y, Ts is the stream time (e.g., the bus cycle time multiplied by the number of cycles required), and C is the number of independent columns 118 used to store the data. Equation 1 may be scaled by the number of independent banks 119A-N available to the storage layer 130. In the horizontal data structure layout of
In Equation 2, the number of columns is twenty-four (24), and Sc is the cycle time of the bus 127. The cycle time is scaled by 10 since, as disclosed above, a horizontal 240-byte ECC codeword 620 may be streamed in 10 cycles of the interconnect 127.
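Because the equations themselves are not reproduced above, the following sketch gives one hedged reading of the relationship they describe for the horizontal layout (C = 24 columns per codeword): a read costs the element read time Tr plus the stream time Ts, Ts for a 240-byte codeword over the 24-byte interconnect is ten bus cycles (10·Sc), and the result scales with the number of independent banks. The timing values in the example are illustrative assumptions.

```python
# Hedged IOPS estimate for the horizontal data layout; this is an
# interpretation of the description above, not the patent's Equation 1/2.
def horizontal_read_iops(Tr: float, Sc: float, banks: int = 1) -> float:
    """Estimated reads/second; Tr and Sc in seconds."""
    Ts = 10 * Sc                 # 240-byte codeword / 24 bytes per cycle = 10 cycles
    return banks / (Tr + Ts)     # scaled by the number of independent banks

# Example with assumed timings: 50 us read time, 10 ns bus cycle, one bank.
print(round(horizontal_read_iops(Tr=50e-6, Sc=10e-9)))   # ~19,960 reads/s
```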
The storage layer 130 may be configured to store data in different configurations, layouts, and/or arrangements within a solid-state storage array 115. As disclosed above, in some embodiments, the data layout module 248 is configured to arrange data within respective independent channels, each comprising a subset of the columns 118 of the array 115 (e.g., subsets of the solid-state storage elements 116A-Y). Alternatively, or in addition, the data layout module 248 may be configured to store data vertically within respective “vertical stripes.” The vertical stripes may have a configurable depth, which may be a factor of the page size of the solid-state storage elements 116A-Y comprising the array 115.
As depicted in
In some embodiments, the storage controller 139 may comprise a plurality of packet modules 242 and/or ECC write modules 246 (e.g., multiple, independent write modules 240) configured to operate in parallel. Data of the parallel write modules 240 may flow into the data layout module 248 in a checkerboard pattern such that the data is arranged in the vertical format disclosed herein.
The vertical arrangement of
The vertical data layout of
The reduced IOPS metric may be offset by the increased throughput (reduced read overhead) and/or different Tr and Ts latency times. These considerations may vary from device to device and/or application to application. Moreover, the IOPS metric may be ameliorated by the fact that multiple, independent ECC codewords 620 can be streamed simultaneously. Therefore, in some embodiments, the data layout used by the storage layer 130 (and data layout module 248) may be configurable (e.g., by a user setting or preference, firmware update, or the like).
The pages of the solid-state storage elements 116A-Y may be capable of storing a large number of ECC codewords 620 and/or data packets 610. Accordingly, the vertical data arrangement of
The forward map 152 may be configured to index front-end identifiers to pages of respective solid-state storage elements 116A-Y. Accordingly, the forward map 152 may include a subset of the full storage address 1057 (the portion of the address that identifies the particular page comprising the data segment), and may omit addressing information pertaining to the offset of the data segment within the page. The storage layer 130 may be configured to access the data segment corresponding to front-end address 854B by: a) identifying the page comprising the data segment associated with the front-end address 854B by use of the forward map 152; b) reading the identified page; c) determining the offset of the data packet 810B by use of the offset index 749 stored on the identified page; and d) reading the packet 810B at the determined offset.
In some embodiments, the data layout module 248 may be configured to lay out and/or arrange data in an adaptive channel configuration. As used herein, an adaptive channel configuration refers to a data layout in which the columns 118 of the array 115 are divided into a plurality of independent channels, each channel comprising a set of columns 118 of the solid-state storage array 115. The channels may comprise subsets of the solid-state storage elements 116A-Y. In some embodiments, an adaptive channel configuration may comprise a fully horizontal data layout, in which data segments are stored within a channel comprising 24 columns 118 of the array 115, as disclosed in conjunction with
In alternative adaptive channel configurations, the data layout module 248 may be configured to buffer 24/N ECC codewords 620, where N is the number of columns 118 in each adaptive channel. ECC codewords 620 may be stored within independent channels comprising N columns 118 (e.g., N solid-state storage elements 116A-Y). Accordingly, the horizontal arrangement of
In some embodiments, data segments may be arranged in adjacent columns 118 within the array 115 (e.g., a data structure may be stored in columns 0-4). Alternatively, columns may be non-adjacent and/or interleaved with other data segments (e.g., a data segment may be stored on columns 0, 2, 4, and 6, and another data segment may be stored on columns 1, 3, 5, and 7). The data layout module 248 may be configured to adapt the data layout in accordance with out-of-service (OOS) conditions within the array 115; if a column 118 (or portion thereof) is out of service, the data layout module 248 may be configured to adapt the data layout accordingly (e.g., arrange data to avoid the out-of-service portions of the array 115, as disclosed above).
The stream time Ts of an ECC codeword 620 in the independent channel embodiments of
The IOPS metric may be modified according to the number of data segments that can be read in parallel. The six-column independent channel configuration, for example, may enable four different ECC codewords (and/or packets) to be read from the array 115 concurrently.
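The channel arithmetic implied above can be sketched as follows, assuming 24 data columns, 240-byte ECC codewords, and one byte streamed per column per bus cycle; the helper function is illustrative.

```python
# With N columns per channel, the 24 data columns form 24/N independent
# channels, and a 240-byte codeword streams in 240/N cycles.
def channel_config(n_columns: int, total_columns: int = 24, codeword: int = 240):
    channels      = total_columns // n_columns   # independent channels (parallel reads)
    stream_cycles = codeword // n_columns        # cycles to stream one codeword
    return channels, stream_cycles

print(channel_config(24))   # (1, 10)  fully horizontal layout
print(channel_config(6))    # (4, 40)  four concurrent reads, longer stream time each
```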
In some embodiments, the storage layer 130 may be configured to store data in an adaptive vertical stripe configuration. As used herein, a vertical stripe configuration refers to storing data structures vertically within vertical stripes having a predetermined depth within the columns 118 of the solid-state storage array. Multiple vertical stripes may be stored within rows 117 of the array 115. The depth of the vertical stripes may, therefore, determine read-level parallelism, whereas the vertical ECC configuration may provide error detection, correction, and/or reconstruction benefits.
The depth of the vertical stripes 646A-N and the size of typical read operations may determine, inter alia, the number of channels (columns) needed to perform read operations (e.g., determine the number of channels used to perform a read operation, the stream time Ts, and so on). For example, a 4 kb data packet may be contained within 5 ECC codewords, including ECC codewords 3 through 7. Reading the 4 kb packet from the array 115 may, therefore, comprise reading data from two columns (columns 0 and 1). A larger 8 kb data structure may span 10 ECC codewords (ECC codewords 98-107), and as such, reading the 8 kb data structure may comprise reading data from three columns of the array (columns 0, 1, and 2). Configuring the vertical stripes 646A-N with an increased depth may decrease the number of columns needed for a read operation, which may increase the stream time Ts of the individual read but may allow other independent read operations to be performed in parallel. Decreasing the depth may increase the number of columns needed for read operations, which may decrease the stream time Ts but reduce the number of other, independent read operations that can be performed in parallel.
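A simple way to see this trade-off is to express the stripe depth in ECC codewords per column and count the columns a read must touch. The sketch below assumes the data structure starts at the top of a column (an unaligned start, as in the 4 kb example above, may add at most one column).

```python
# Columns touched by a vertically striped read, given stripe depth in codewords.
import math

def columns_needed(n_codewords: int, depth: int) -> int:
    """Number of columns a read must access when codewords are laid out vertically."""
    return math.ceil(n_codewords / depth)

print(columns_needed(5, 4))    # 5 codewords, depth 4  -> 2 columns
print(columns_needed(10, 4))   # 10 codewords, depth 4 -> 3 columns
print(columns_needed(10, 8))   # deeper stripes        -> fewer columns per read
```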
The depth of the vertical stripe 746B may be increased to 8 kb, which may be sufficient to hold eight vertically aligned ECC codewords. The data structure 610 may be stored within 17 ECC codewords, as disclosed above. However, the modified depth of the vertical stripe 746B may result in the data structure occupying three columns (columns 0 through 2) rather than six. Accordingly, reading the data structure 610 may comprise reading data from an independent channel comprising three columns, which may increase the number of other, independent read operations that can occur in parallel on other columns (e.g., columns 3 and 4). The stream time Ts of the read operation may double as compared to the stream time of the vertical stripe 746A.
The data layout module 248 may be configured to buffer the ECC codewords 620 for storage in vertical stripes, as disclosed herein. The data layout module 248 may comprise a fill module 660 that is configured to rotate the serial stream of ECC codewords 620 into vertical stripes by use of, inter alia, one or more cross point switches, FIFO buffers 662A-X, and the like. The FIFO buffers 662A-X may each correspond to a respective column of the array 115. The fill module 660 may be configured to rotate and/or buffer the ECC codewords 620 according to a particular vertical code word depth, which may be based on the ECC codeword 620 size and/or size of physical storage units of the array 115.
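A toy model of this rotation is sketched below; it captures only the round-robin fill of per-column FIFOs at a fixed vertical depth and ignores cross point switch details, byte-wise streaming, and OOS columns.

```python
# Rotate a serial stream of ECC codewords into per-column FIFOs (fill module 660).
from collections import deque

def fill_vertical_stripes(codewords, n_columns=24, depth=4):
    fifos = [deque() for _ in range(n_columns)]   # FIFO buffers 662A-X
    for i, cw in enumerate(codewords):
        column = (i // depth) % n_columns         # fill `depth` codewords, then advance
        fifos[column].append(cw)
    return fifos

fifos = fill_vertical_stripes(range(96))          # 96 codewords at depth 4
print(list(fifos[0]))                             # [0, 1, 2, 3] land in column 0
print(list(fifos[1]))                             # [4, 5, 6, 7] land in column 1
```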
The data layout module 248 may be further configured to manage OOS conditions within the solid-state storage array 115. As disclosed above, an OOS condition may indicate that one or more columns 118 of the array are not currently in use to store data. The storage metadata 135 may identify columns 118 that are out of service within various portions of the solid-state storage array 115 (e.g., rows 117, logical erase blocks 540, or the like). In the
In some embodiments, the data layout module 248 may comprise a parity module 637 that is configured to generate parity data in accordance with the vertical stripe data configuration. The parity data may be generated horizontally, on a byte-by-byte basis within rows 117 of the array 115, as disclosed above. The parity data P0 may correspond to ECC codewords 0, 4, 8, and so on through 88; the parity data P1 may correspond to ECC codewords 1, 5, 9, and so on through 89; and so on. The data layout module 248 may include a parity control FIFO 662Y configured to manage OOS conditions for parity calculations (e.g., ignore data within OOS columns for the purposes of the parity calculation).
The vertical stripe data configuration generated by the data layout module 248 (and parity module 637) may flow to write buffers of the solid-state storage elements 116A-Y within the array 115 through the write buffer and/or bank controller 252, as disclosed above. In some embodiments, data rows 667 generated by the write module 240 may comprise one byte for each data column in the array 115 (solid-state storage elements 116A-X). Each byte in a data row 667 may correspond to a respective ECC codeword 620, and each data row 667 may include a corresponding parity byte. Accordingly, each data row 667 may comprise horizontal byte-wise parity information from which any of the bytes within the row 667 may be reconstructed, as disclosed herein. A data row 667A may comprise a byte of ECC codeword 0 for storage on column 0, a byte of ECC codeword 4 for storage on column 1, padding data for any OOS column, a byte of ECC codeword 88 for storage on column 23, and so on. The data row 667 may further comprise a parity byte 668A for storage on column 24 (or other column), as disclosed above.
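The byte-wise parity and reconstruction property can be sketched as follows; the four-column row and the choice of OOS column are illustrative only, not the configuration of any particular figure.

```python
# Horizontal, byte-wise parity across a data row 667, excluding OOS columns.
def row_parity(row_bytes: list[int], oos_columns: set[int]) -> int:
    parity = 0
    for column, byte in enumerate(row_bytes):
        if column not in oos_columns:    # parity control ignores OOS (padding) columns
            parity ^= byte
    return parity

row = [0x12, 0x34, 0x00, 0x9A]           # toy 4-column row; column 2 is OOS padding
p = row_parity(row, oos_columns={2})
# Any in-service byte can be reconstructed from the parity and the other bytes:
assert row[1] == p ^ row[0] ^ row[3]
```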
The data may be programmed onto the solid-state storage array 115 as a plurality of vertical stripes 646A-N within a logical page 542, as disclosed above (e.g., by programming the contents of program buffers to physical storage units of the solid-state storage elements 116A-Y within the array 115). In the
As disclosed above, packets may span vertical stripes. In the
In some embodiments, step 1110 may further comprise compressing one or more of the data segments such that a compressed size of the data segments differs from the original, uncompressed size of the data segments. Step 1110 may further include encrypting and/or whitening the data segments, as disclosed herein.
Step 1120 may comprise mapping front-end addresses of the data segments using, inter alia, a forward map 152, as disclosed herein. Step 1120 may comprise segmenting the storage addresses of the data segments into a first portion that addresses the storage location comprising the data segments (e.g., the physical address of the logical page 542 comprising the data segments), and second portions comprising the respective offsets of the data segments within the storage location. Step 1120 may further comprise indexing the front-end addresses to the first portion of the storage address, and omitting the second portion of the storage address from the entries 153 of the forward map 152. Step 1120 may comprise determining the data segment offsets based on a compressed size of the data segments, as disclosed herein. Accordingly, the offsets determined at step 1120 may differ from offsets based on the original, uncompressed size of the data segments.
Step 1130 may comprise generating an offset index for the storage location by use of the offset index module 249, as disclosed herein. Step 1130 may comprise generating an offset index 749 data structure that is configured for storage on the storage medium 140. The offset index 749 may be configured for storage at a predetermined offset and/or location within the storage location comprising the indexed data segments. The offset index 749 may be configured to map front-end addresses of the data segments stored within the storage location to respective offsets of the data segments within the storage location, as disclosed herein. In some embodiments, step 1130 further comprises storing the offset index 749 on the storage medium 140, which may comprise streaming the offset index 749 to program buffers of the storage elements 116A-Y comprising a solid-state storage array 115A-N and/or issuing a program command to the solid-state storage elements 116A-Y, as disclosed herein.
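A minimal sketch of this step follows. The (front-end address, stored length) inputs and the JSON serialization are placeholders for the disclosed packet and index formats, and the compressed lengths shown are illustrative.

```python
# Build the offset index 749 for one storage location: map each data segment's
# front-end address to its byte offset (which reflects compressed sizes), then
# serialize the index for storage at a predetermined location within the page.
import json

def build_offset_index(segments):
    """segments: iterable of (front_end_address, stored_length_in_bytes)."""
    index, offset = {}, 0
    for front_end_addr, stored_len in segments:
        index[front_end_addr] = offset   # offset depends on the compressed size
        offset += stored_len
    return index

segments = [(0x754A, 480), (0x754B, 512), (0x754C, 256)]   # compressed lengths vary
index = build_offset_index(segments)
on_media_bytes = json.dumps(index).encode()   # written alongside the data segments
print(index)   # {30026: 0, 30027: 480, 30028: 992}
```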
Step 1220 may comprise determining an offset of the requested data within the identified storage location. Step 1220 may comprise a) reading the identified storage location, b) accessing an offset index 749 at a predetermined location within the identified storage location, and c) determining the offset of data corresponding to the front-end address by use of the offset index 749. Accordingly, step 1220 may comprise forming the full storage address of the requested data by combining the address of the storage location maintained in the forward map 152 with the offset maintained in the on-media offset index 749.
Step 1230 may comprise accessing the requested data. Step 1230 may include streaming, from read buffers of the storage elements 116A-Y comprising a storage array 115A-N, one or more ECC codewords 620 comprising the data packets 610 in which the requested data was stored. Step 1230 may comprise streaming the data from the offset determined at step 1220. Step 1230 may further include processing the ECC codeword(s) 620 and/or packet(s) 610 comprising the requested data, as disclosed herein (e.g., by use of the ECC read module 247 and/or depacket module 245). Step 1230 may further comprise decompressing the requested data by use of the decompression module 243, decrypting the data, dewhitening the data, and so on, as disclosed herein.
The above description provides numerous specific details for a thorough understanding of the embodiments described herein. However, those of skill in the art will recognize that one or more of the specific details may be omitted, or other methods, components, or materials may be used. In some cases, operations are not shown or described in detail.
Furthermore, the described features, operations, or characteristics may be combined in any suitable manner in one or more embodiments. It will also be readily understood that the order of the steps or actions of the methods described in connection with the embodiments disclosed may be changed as would be apparent to those skilled in the art. Thus, any order in the drawings or Detailed Description is for illustrative purposes only and is not meant to imply a required order, unless specified to require an order.
Embodiments may include various steps, which may be embodied in machine-executable instructions to be executed by a general-purpose or special-purpose computer (or other electronic device). Alternatively, the steps may be performed by hardware components that include specific logic for performing the steps, or by a combination of hardware, software, and/or firmware.
Embodiments may also be provided as a computer program product including a computer-readable storage medium having stored instructions thereon that may be used to program a computer (or other electronic device) to perform processes described herein. The computer-readable storage medium may include, but is not limited to: hard drives, floppy diskettes, optical disks, CD-ROMs, DVD-ROMs, ROMs, RAMs, EPROMs, EEPROMs, magnetic or optical cards, solid-state memory devices, or other types of medium/machine-readable medium suitable for storing electronic instructions.
As used herein, a software module or component may include any type of computer instruction or computer executable code located within a memory device and/or computer-readable storage medium. A software module may, for instance, comprise one or more physical or logical blocks of computer instructions, which may be organized as a routine, program, object, component, data structure, etc., that performs one or more tasks or implements particular abstract data types.
In certain embodiments, a particular software module may comprise disparate instructions stored in different locations of a memory device, which together implement the described functionality of the module. Indeed, a module may comprise a single instruction or many instructions, and may be distributed over several different code segments, among different programs, and across several memory devices. Some embodiments may be practiced in a distributed computing environment where tasks are performed by a remote processing device linked through a communications network. In a distributed computing environment, software modules may be located in local and/or remote memory storage devices. In addition, data being tied or rendered together in a database record may be resident in the same memory device, or across several memory devices, and may be linked together in fields of a record in a database across a network.
It will be understood by those having skill in the art that many changes may be made to the details of the above-described embodiments without departing from the underlying principles of the disclosure.
This application is a continuation-in-part of, and claims priority to, U.S. patent application Ser. No. 13/784,705, entitled, “Systems and Methods for Adaptive Data Storage,” filed Mar. 4, 2013, which claims priority to U.S. Provisional Patent Application Ser. No. 61/606,253, entitled, “Adaptive Data Arrangement,” filed Mar. 2, 2012 for David Flynn et al. and to U.S. Provisional Patent Application Ser. No. 61/606,755, entitled, “Adaptive Data Arrangement,” filed Mar. 5, 2012, for David Flynn et al.; this application also claims priority to U.S. Provisional Patent Application Ser. No. 61/663,464, entitled, “Systems and Methods for Referencing Data on a Non-Volatile Storage Medium,” filed Jun. 22, 2012 for David Flynn et al., each of which is hereby incorporated by reference.
Publication: US 2013/0282953 A1, Oct. 2013 (US).
Related U.S. Application Data: Provisional Application No. 61/663,464, Jun. 2012 (US); Provisional Application No. 61/606,755, Mar. 2012 (US); Provisional Application No. 61/606,253, Mar. 2012 (US); Parent Application No. 13/784,705, Mar. 2013 (US); Child Application No. 13/925,410 (US).