NAND flash memory is available from different vendors, with different flash memory device interfaces and protocols. These protocols include asynchronous SDR (single data rate), synchronous DDR (double data rate), Toggle Mode (also a type of DDR or double data rate, in various release versions and from various manufacturers) and ONFI (Open NAND Flash Interface Working Group standard, also a type of DDR or double data rate, in various release versions and from various manufacturers), and others may be developed. The proliferation of flash memory device interfaces and protocols poses a problem to designers of flash controllers for various storage devices, who generally choose one flash memory device interface and one protocol, and design the flash controller accordingly. It then becomes difficult to change suppliers, or to cope with shortages in the marketplace or advances in flash memory product capabilities during a flash controller product lifetime. Also, flash memory device characteristics may change over the lifespan of a flash die, which can degrade the performance of a storage system that uses a particular flash controller and flash memory die(s). In addition, upgrades to the system or software tend to be disruptive, and the calibration of a system may be lost during a power interruption. It is within this context that the embodiments arise.
In some embodiments, a method for communicating with memory, performed by a memory controller, is provided. The method includes sampling reads from a plurality of memory devices and storing first calibration points in first buffers, based on the sampling, with at least one first calibration point and corresponding first buffer for each of the plurality of memory devices. The method includes sampling a read from a second memory device in the background while performing a read from a first memory device using the first calibration point in the first buffer corresponding to the first memory device. The method includes
storing a second calibration point in a second buffer, for the second memory device, based on the sampling in the background, while the first buffer for the second memory device retains the first calibration point used for ongoing reads of the second memory device. A memory controller that performs the method is also provided.
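A minimal sketch of this double-buffered arrangement follows; the structure and function names are illustrative assumptions, not the controller's actual interface. The point is that background sampling writes a shadow buffer while reads continue from the active buffer, and a swap puts the new calibration point into service.

    /* Hypothetical per-device calibration state: two buffers, one active. */
    typedef struct {
        int sample_point;            /* I/O sample timing for reads */
    } cal_point_t;

    typedef struct {
        cal_point_t buf[2];          /* buf[active] serves ongoing reads */
        int active;                  /* index of the buffer used for reads */
    } device_cal_t;

    /* Record a point found by background sampling; ongoing reads are
     * undisturbed because only the shadow buffer is written. */
    static void store_background_point(device_cal_t *d, cal_point_t p)
    {
        d->buf[1 - d->active] = p;
    }

    /* Switch reads over to the newly calibrated point. */
    static void commit_background_point(device_cal_t *d)
    {
        d->active = 1 - d->active;
    }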
Other aspects and advantages of the embodiments will become apparent from the following detailed description taken in conjunction with the accompanying drawings which illustrate, by way of example, the principles of the described embodiments.
The described embodiments and the advantages thereof may best be understood by reference to the following description taken in conjunction with the accompanying drawings. These drawings in no way limit any changes in form and detail that may be made to the described embodiments by one skilled in the art without departing from the spirit and scope of the described embodiments.
Various storage systems described herein, and further storage systems, can be optimized for distribution of selected data, according to various criteria, in flash or other solid-state memory. The embodiments for the distributed flash wear leveling system are optimized for faster read access to the flash or other solid-state memory. Flash memory that is worn, i.e., that has a large number of program/erase cycles, often has a greater error rate during read accesses, and this adds to read latency because of the processing time overhead of error correction. Various embodiments of the storage system track program/erase cycles, or track read errors or error rates (for example on a page, block, die, package, board, storage unit or storage node basis), are aware of faster and slower types or designs of flash memory or portions of flash memory, or otherwise determine relative access speeds for flash memory. The storage system then places data selectively in faster access or slower access locations in, or portions of, flash memory (or other solid-state memory). One embodiment of the storage system writes data bits to faster access portions of flash memory and parity bits to slower access portions of flash memory. Another embodiment takes advantage of faster and slower access pages of triple level cell flash memory. Principles of operation, variations, and implementation details for distributed flash wear leveling are further discussed below, with reference to
The embodiments below describe a storage cluster that stores user data, such as user data originating from one or more user or client systems or other sources external to the storage cluster. The storage cluster distributes user data across storage nodes housed within a chassis, using erasure coding and redundant copies of metadata. Erasure coding refers to a method of data protection or reconstruction in which data is stored across a set of different locations, such as disks, storage nodes or geographic locations. Flash memory is one type of solid-state memory that may be integrated with the embodiments, although the embodiments may be extended to other types of solid-state memory or other storage media, including non-solid state memory. Control of storage locations and workloads is distributed across the storage locations in a clustered peer-to-peer system. Tasks such as mediating communications between the various storage nodes, detecting when a storage node has become unavailable, and balancing I/Os (inputs and outputs) across the various storage nodes, are all handled on a distributed basis. Data is laid out or distributed across multiple storage nodes in data fragments or stripes that support data recovery in some embodiments. Ownership of data can be reassigned within a cluster, independent of input and output patterns. This architecture, described in more detail below, allows a storage node in the cluster to fail, with the system remaining operational, since the data can be reconstructed from other storage nodes and thus remain available for input and output operations. In various embodiments, a storage node may be referred to as a cluster node, a blade, or a server.
The storage cluster is contained within a chassis, i.e., an enclosure housing one or more storage nodes. A mechanism to provide power to each storage node, such as a power distribution bus, and a communication mechanism, such as a communication bus that enables communication between the storage nodes, are included within the chassis. The storage cluster can run as an independent system in one location according to some embodiments. In one embodiment, a chassis contains at least two instances of both the power distribution and the communication bus which may be enabled or disabled independently. The internal communication bus may be an Ethernet bus; however, other technologies such as Peripheral Component Interconnect (PCI) Express, InfiniBand, and others, are equally suitable. The chassis provides a port for an external communication bus for enabling communication between multiple chassis, directly or through a switch, and with client systems. The external communication may use a technology such as Ethernet, InfiniBand, Fibre Channel, etc. In some embodiments, the external communication bus uses different communication bus technologies for inter-chassis and client communication. If a switch is deployed within or between chassis, the switch may act as a translation layer between multiple protocols or technologies. When multiple chassis are connected to define a storage cluster, the storage cluster may be accessed by a client using either proprietary interfaces or standard interfaces such as network file system (NFS), common internet file system (CIFS), small computer system interface (SCSI) or hypertext transfer protocol (HTTP). Translation from the client protocol may occur at the switch, chassis external communication bus or within each storage node.
Each storage node may be one or more storage servers and each storage server is connected to one or more non-volatile solid state memory units, which may be referred to as storage units or storage devices. One embodiment includes a single storage server in each storage node and between one and eight non-volatile solid state memory units; however, this one example is not meant to be limiting. The storage server may include a processor, dynamic random access memory (DRAM) and interfaces for the internal communication bus and power distribution for each of the power buses. Inside the storage node, the interfaces and storage unit share a communication bus, e.g., PCI Express, in some embodiments. The non-volatile solid state memory units may directly access the internal communication bus interface through a storage node communication bus, or request the storage node to access the bus interface. The non-volatile solid state memory unit contains an embedded central processing unit (CPU), solid state storage controller, and a quantity of solid state mass storage, e.g., between 2 and 32 terabytes (TB) in some embodiments. An embedded volatile storage medium, such as DRAM, and an energy reserve apparatus are included in the non-volatile solid state memory unit. In some embodiments, the energy reserve apparatus is a capacitor, super-capacitor, or battery that enables transferring a subset of DRAM contents to a stable storage medium in the case of power loss. In some embodiments, the non-volatile solid state memory unit is constructed with a storage class memory, such as phase change or magnetoresistive random access memory (MRAM) that substitutes for DRAM and enables a reduced power hold-up apparatus.
One of many features of the storage nodes and non-volatile solid state storage is the ability to proactively rebuild data in a storage cluster. The storage nodes and non-volatile solid state storage can determine when a storage node or non-volatile solid state storage in the storage cluster is unreachable, independent of whether there is an attempt to read data involving that storage node or non-volatile solid state storage. The storage nodes and non-volatile solid state storage then cooperate to recover and rebuild the data in at least partially new locations. This constitutes a proactive rebuild, in that the system rebuilds data without waiting until the data is needed for a read access initiated from a client system employing the storage cluster. These and further details of the storage memory and operation thereof are discussed below.
Each storage node 150 can have multiple components. In the embodiment shown here, the storage node 150 includes a printed circuit board 158 populated by a CPU 156, i.e., processor, a memory 154 coupled to the CPU 156, and a non-volatile solid state storage 152 coupled to the CPU 156, although other mountings and/or components could be used in further embodiments. The memory 154 has instructions which are executed by the CPU 156 and/or data operated on by the CPU 156. As further explained below, the non-volatile solid state storage 152 includes flash or, in further embodiments, other types of solid-state memory.
Referring to
Every piece of data, and every piece of metadata, has redundancy in the system in some embodiments. In addition, every piece of data and every piece of metadata has an owner, which may be referred to as an authority. If that authority is unreachable, for example through failure of a storage node, there is a plan of succession for how to find that data or that metadata. In various embodiments, there are redundant copies of authorities 168. Authorities 168 have a relationship to storage nodes 150 and non-volatile solid state storage 152 in some embodiments. Each authority 168, covering a range of data segment numbers or other identifiers of the data, may be assigned to a specific non-volatile solid state storage 152. In some embodiments, the authorities 168 for all of such ranges are distributed over the non-volatile solid state storages 152 of a storage cluster. Each storage node 150 has a network port that provides access to the non-volatile solid state storage(s) 152 of that storage node 150. Data can be stored in a segment, which is associated with a segment number, and that segment number is an indirection for a configuration of a RAID (redundant array of independent disks) stripe in some embodiments. The assignment and use of the authorities 168 thus establishes an indirection to data. Indirection may be referred to as the ability to reference data indirectly, in this case via an authority 168, in accordance with some embodiments. A segment identifies a set of non-volatile solid state storage 152 and a local identifier into the set of non-volatile solid state storage 152 that may contain data. In some embodiments, the local identifier is an offset into the device and may be reused sequentially by multiple segments. In other embodiments, the local identifier is unique for a specific segment and never reused. The offsets in the non-volatile solid state storage 152 are applied to locating data for writing to or reading from the non-volatile solid state storage 152 (in the form of a RAID stripe). Data is striped across multiple units of non-volatile solid state storage 152, which may include or be different from the non-volatile solid state storage 152 having the authority 168 for a particular data segment.
If there is a change in where a particular segment of data is located, e.g., during a data move or a data reconstruction, the authority 168 for that data segment should be consulted, at that non-volatile solid state storage 152 or storage node 150 having that authority 168. In order to locate a particular piece of data, embodiments calculate a hash value for a data segment or apply an inode number or a data segment number. The output of this operation points to a non-volatile solid state storage 152 having the authority 168 for that particular piece of data. In some embodiments, there are two stages to this operation. The first stage maps an entity identifier (ID), e.g., a segment number, inode number, or directory number, to an authority identifier. This mapping may include a calculation such as a hash or a bit mask. The second stage is mapping the authority identifier to a particular non-volatile solid state storage 152, which may be done through an explicit mapping. The operation is repeatable, so that when the calculation is performed, the result of the calculation repeatably and reliably points to a particular non-volatile solid state storage 152 having that authority 168. The operation may include the set of reachable storage nodes as input. If the set of reachable non-volatile solid state storage units changes, the optimal set changes. In some embodiments, the persisted value is the current assignment (which is always true) and the calculated value is the target assignment the cluster will attempt to reconfigure towards. This calculation may be used to determine the optimal non-volatile solid state storage 152 for an authority in the presence of a set of non-volatile solid state storage 152 that are reachable and constitute the same cluster. The calculation also determines an ordered set of peer non-volatile solid state storage 152 that will also record the authority to non-volatile solid state storage mapping so that the authority may be determined even if the assigned non-volatile solid state storage is unreachable. A duplicate or substitute authority 168 may be consulted if a specific authority 168 is unavailable in some embodiments.
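A minimal sketch of the two-stage lookup follows; the hash, table layout, and authority count are assumptions for illustration, not the actual mapping.

    #include <stdint.h>

    #define NUM_AUTHORITIES 4096     /* assumed power-of-two authority space */

    /* Stage 2: explicit mapping from authority identifier to the
     * non-volatile solid state storage unit currently holding it. */
    static uint16_t authority_to_storage[NUM_AUTHORITIES];

    /* Stage 1: map an entity identifier (segment, inode, or directory
     * number) to an authority identifier via a hash and a bit mask. */
    static uint16_t entity_to_authority(uint64_t entity_id)
    {
        uint64_t h = entity_id * 0x9e3779b97f4a7c15ULL;  /* simple mixing hash */
        return (uint16_t)(h & (NUM_AUTHORITIES - 1));
    }

    /* Repeatable end-to-end lookup: the same entity identifier always
     * points to the storage unit having the authority for it. */
    static uint16_t locate_owner(uint64_t entity_id)
    {
        return authority_to_storage[entity_to_authority(entity_id)];
    }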
With reference to
In some systems, for example in UNIX-style file systems, data is handled with an index node or inode, which specifies a data structure that represents an object in a file system. The object could be a file or a directory, for example. Metadata may accompany the object, as attributes such as permission data and a creation timestamp, among other attributes. A segment number could be assigned to all or a portion of such an object in a file system. In other systems, data segments are handled with a segment number assigned elsewhere. For purposes of discussion, the unit of distribution is an entity, and an entity can be a file, a directory or a segment. That is, entities are units of data or metadata stored by a storage system. Entities are grouped into sets called authorities. Each authority has an authority owner, which is a storage node that has the exclusive right to update the entities in the authority. In other words, a storage node contains the authority, and the authority, in turn, contains entities.
A segment is a logical container of data in accordance with some embodiments. A segment is an address space between the medium address space and physical flash locations; i.e., data segment numbers are in this address space. Segments may also contain metadata, which enables data redundancy to be restored (rewritten to different flash locations or devices) without the involvement of higher level software. In one embodiment, an internal format of a segment contains client data and medium mappings to determine the position of that data. Each data segment is protected, e.g., from memory and other failures, by breaking the segment into a number of data and parity shards, where applicable. The data and parity shards are distributed, i.e., striped, across non-volatile solid state storage 152 coupled to the host CPUs 156 (See
A series of address-space transformations takes place across an entire storage system. At the top are the directory entries (file names) which link to an inode. Inodes point into medium address space, where data is logically stored. Medium addresses may be mapped through a series of indirect mediums to spread the load of large files, or implement data services like deduplication or snapshots. Segment addresses are then translated into physical flash locations. Physical flash locations have an address range bounded by the amount of flash in the system in accordance with some embodiments. Medium addresses and segment addresses are logical containers, and in some embodiments use a 128 bit or larger identifier so as to be practically infinite, with a likelihood of reuse calculated as longer than the expected life of the system. Addresses from logical containers are allocated in a hierarchical fashion in some embodiments. Initially, each non-volatile solid state storage unit 152 may be assigned a range of address space. Within this assigned range, the non-volatile solid state storage 152 is able to allocate addresses without synchronization with other non-volatile solid state storage 152.
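This hierarchical allocation can be sketched as follows; the names are assumptions, and 64-bit addresses are used for brevity even though the text describes 128-bit or larger identifiers. Each storage unit owns a disjoint range and allocates from it with no cross-unit synchronization.

    #include <stdint.h>

    /* Hypothetical per-unit allocator over an assigned address range. */
    typedef struct {
        uint64_t next;               /* next unallocated address */
        uint64_t limit;              /* exclusive end of the assigned range */
    } addr_range_t;

    /* Returns 0 on success, -1 when the local range is exhausted (at
     * which point the unit would be assigned a fresh range). */
    static int alloc_address(addr_range_t *r, uint64_t *out)
    {
        if (r->next >= r->limit)
            return -1;
        *out = r->next++;
        return 0;
    }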
Data and metadata are stored by a set of underlying storage layouts that are optimized for varying workload patterns and storage devices. These layouts incorporate multiple redundancy schemes, compression formats and index algorithms. Some of these layouts store information about authorities and authority masters, while others store file metadata and file data. The redundancy schemes include error correction codes that tolerate corrupted bits within a single storage device (such as a NAND flash chip), erasure codes that tolerate the failure of multiple storage nodes, and replication schemes that tolerate data center or regional failures. In some embodiments, low density parity check (LDPC) code is used within a single storage unit. Reed-Solomon encoding is used within a storage cluster, and mirroring is used within a storage grid in some embodiments. Metadata may be stored using an ordered log structured index (such as a Log Structured Merge Tree), and large data may not be stored in a log structured layout.
In order to maintain consistency across multiple copies of an entity, the storage nodes agree implicitly on two things through calculations: (1) the authority that contains the entity, and (2) the storage node that contains the authority. The assignment of entities to authorities can be done by pseudorandomly assigning entities to authorities, by splitting entities into ranges based upon an externally produced key, or by placing a single entity into each authority. Examples of pseudorandom schemes are linear hashing and the Replication Under Scalable Hashing (RUSH) family of hashes, including Controlled Replication Under Scalable Hashing (CRUSH). In some embodiments, pseudorandom assignment is utilized only for assigning authorities to nodes because the set of nodes can change. The set of authorities cannot change, so any subjective function may be applied in these embodiments. Some placement schemes automatically place authorities on storage nodes, while other placement schemes rely on an explicit mapping of authorities to storage nodes. In some embodiments, a pseudorandom scheme is utilized to map from each authority to a set of candidate authority owners. A pseudorandom data distribution function related to CRUSH may assign authorities to storage nodes and create a list of where the authorities are assigned. Each storage node has a copy of the pseudorandom data distribution function, and can arrive at the same calculation for distributing, and later finding or locating an authority. Each of the pseudorandom schemes requires the reachable set of storage nodes as input in some embodiments in order to arrive at the same target nodes. Once an entity has been placed in an authority, the entity may be stored on physical devices so that no expected failure will lead to unexpected data loss. In some embodiments, rebalancing algorithms attempt to store the copies of all entities within an authority in the same layout and on the same set of machines.
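One way to realize such a repeatable, pseudorandom placement is rendezvous (highest-random-weight) hashing, sketched below as an illustrative stand-in for the CRUSH-family functions named above; the hash and names are assumptions. Because every storage node runs the same calculation over the same reachable set, all nodes arrive at the same target node.

    #include <stdint.h>
    #include <stddef.h>

    /* Toy 64-bit mixer; a production system would use a stronger hash. */
    static uint64_t mix(uint64_t x)
    {
        x ^= x >> 33; x *= 0xff51afd7ed558ccdULL;
        x ^= x >> 33; x *= 0xc4ceb9fe1a85ec53ULL;
        return x ^ (x >> 33);
    }

    /* Choose the owner of an authority from the reachable node set:
     * the node whose hashed pairing with the authority scores highest. */
    static size_t place_authority(uint64_t authority_id,
                                  const uint64_t *node_ids, size_t n_nodes)
    {
        size_t best = 0;
        uint64_t best_score = 0;
        for (size_t i = 0; i < n_nodes; i++) {
            uint64_t score = mix(authority_id ^ mix(node_ids[i]));
            if (score >= best_score) { best_score = score; best = i; }
        }
        return best;                 /* index of the target node */
    }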
Examples of expected failures include device failures, stolen machines, datacenter fires, and regional disasters, such as nuclear or geological events. Different failures lead to different levels of acceptable data loss. In some embodiments, a stolen storage node impacts neither the security nor the reliability of the system, while depending on system configuration, a regional event could lead to no loss of data, a few seconds or minutes of lost updates, or even complete data loss.
In the embodiments, the placement of data for storage redundancy is independent of the placement of authorities for data consistency. In some embodiments, storage nodes that contain authorities do not contain any persistent storage. Instead, the storage nodes are connected to non-volatile solid state storage units that do not contain authorities. The communications interconnect between storage nodes and non-volatile solid state storage units consists of multiple communication technologies and has non-uniform performance and fault tolerance characteristics. In some embodiments, as mentioned above, non-volatile solid state storage units are connected to storage nodes via PCI Express, storage nodes are connected together within a single chassis using an Ethernet backplane, and chassis are connected together to form a storage cluster. Storage clusters are connected to clients using Ethernet or Fibre Channel in some embodiments. If multiple storage clusters are configured into a storage grid, the multiple storage clusters are connected using the Internet or other long-distance networking links, such as a “metro scale” link or private link that does not traverse the Internet.
Authority owners have the exclusive right to modify entities, to migrate entities from one non-volatile solid state storage unit to another non-volatile solid state storage unit, and to add and remove copies of entities. This allows for maintaining the redundancy of the underlying data. When an authority owner fails, is going to be decommissioned, or is overloaded, the authority is transferred to a new storage node. Transient failures make it non-trivial to ensure that all non-faulty machines agree upon the new authority location. The ambiguity that arises due to transient failures can be resolved automatically by a consensus protocol such as Paxos, by hot-warm failover schemes, via manual intervention by a remote system administrator, or by a local hardware administrator (such as by physically removing the failed machine from the cluster, or pressing a button on the failed machine). In some embodiments, a consensus protocol is used, and failover is automatic. If too many failures or replication events occur in too short a time period, the system goes into a self-preservation mode and halts replication and data movement activities until an administrator intervenes in accordance with some embodiments.
As authorities are transferred between storage nodes and authority owners update entities in their authorities, the system transfers messages between the storage nodes and non-volatile solid state storage units. With regard to persistent messages, messages that have different purposes are of different types. Depending on the type of the message, the system maintains different ordering and durability guarantees. As the persistent messages are being processed, the messages are temporarily stored in multiple durable and non-durable storage hardware technologies. In some embodiments, messages are stored in RAM, NVRAM and on NAND flash devices, and a variety of protocols are used in order to make efficient use of each storage medium. Latency-sensitive client requests may be persisted in replicated NVRAM, and then later NAND, while background rebalancing operations are persisted directly to NAND.
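The type-dependent routing can be sketched as a small dispatch; the message types and tier names are assumptions for illustration.

    /* Hypothetical routing of persistent messages: latency-sensitive
     * client writes land in replicated NVRAM (and are staged to NAND
     * later), while background rebalancing goes directly to NAND. */
    typedef enum { MSG_CLIENT_WRITE, MSG_REBALANCE } msg_type_t;
    typedef enum { TIER_NVRAM_REPLICATED, TIER_NAND } tier_t;

    static tier_t persistence_tier(msg_type_t t)
    {
        switch (t) {
        case MSG_CLIENT_WRITE: return TIER_NVRAM_REPLICATED;
        case MSG_REBALANCE:    return TIER_NAND;
        }
        return TIER_NAND;            /* conservative default */
    }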
Persistent messages are persistently stored prior to being transmitted. This allows the system to continue to serve client requests despite failures and component replacement. Although many hardware components contain unique identifiers that are visible to system administrators, manufacturers, the hardware supply chain and ongoing monitoring quality control infrastructure, applications running on top of the infrastructure address virtualized addresses. These virtualized addresses do not change over the lifetime of the storage system, regardless of component failures and replacements. This allows each component of the storage system to be replaced over time without reconfiguration or disruptions of client request processing.
In some embodiments, the virtualized addresses are stored with sufficient redundancy. A continuous monitoring system correlates hardware and software status and the hardware identifiers. This allows detection and prediction of failures due to faulty components and manufacturing details. The monitoring system also enables the proactive transfer of authorities and entities away from impacted devices before failure occurs by removing the component from the critical path in some embodiments.
Storage clusters 160, in various embodiments as disclosed herein, can be contrasted with storage arrays in general. The storage nodes 150 are part of a collection that creates the storage cluster 160. Each storage node 150 owns a slice of data and computing required to provide the data. Multiple storage nodes 150 cooperate to store and retrieve the data. Storage memory or storage devices, as used in storage arrays in general, are less involved with processing and manipulating the data. Storage memory or storage devices in a storage array receive commands to read, write, or erase data. The storage memory or storage devices in a storage array are not aware of a larger system in which they are embedded, or what the data means. Storage memory or storage devices in storage arrays can include various types of storage memory, such as RAM, solid state drives, hard disk drives, etc. The storage units 152 described herein have multiple interfaces active simultaneously and serving multiple purposes. In some embodiments, some of the functionality of a storage node 150 is shifted into a storage unit 152, transforming the storage unit 152 into a combination of storage unit 152 and storage node 150. Placing computing (relative to storage data) into the storage unit 152 places this computing closer to the data itself. The various system embodiments have a hierarchy of storage node layers with different capabilities. By contrast, in a storage array, a controller owns and knows everything about all of the data that the controller manages in a shelf or storage devices. In a storage cluster 160, as described herein, multiple controllers in multiple storage units 152 and/or storage nodes 150 cooperate in various ways (e.g., for erasure coding, data sharding, metadata communication and redundancy, storage capacity expansion or contraction, data recovery, and so on).
The physical storage is divided into named regions based on application usage in some embodiments. The NVRAM 204 is a contiguous block of reserved memory in the storage unit 152 DRAM 216, and is backed by NAND flash. The NVRAM 204 is logically divided into multiple memory regions written as spools (e.g., spool_region). Space within the NVRAM 204 spools is managed by each authority 512 independently. Each device provides an amount of storage space to each authority 512. That authority 512 further manages lifetimes and allocations within that space. Examples of a spool include distributed transactions or notions. When the primary power to a storage unit 152 fails, onboard super-capacitors provide a short duration of power holdup. During this holdup interval, the contents of the NVRAM 204 are flushed to flash memory 206. On the next power-on, the contents of the NVRAM 204 are recovered from the flash memory 206.
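The holdup sequence can be sketched as follows; the function names and flash offsets are assumptions, and the flash_write/flash_read primitives stand in for whatever the storage unit actually uses.

    #include <stddef.h>

    extern void *nvram_region;       /* NVRAM 204: reserved block of DRAM */
    extern size_t nvram_size;
    void flash_write(size_t flash_off, const void *src, size_t len);
    void flash_read(size_t flash_off, void *dst, size_t len);

    /* On primary power failure, the super-capacitor holdup interval is
     * spent copying the NVRAM region of DRAM into reserved flash. */
    void on_power_fail(void)
    {
        flash_write(0, nvram_region, nvram_size);
    }

    /* On the next power-on, the saved contents are recovered. */
    void on_power_up(void)
    {
        flash_read(0, nvram_region, nvram_size);
    }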
As for the storage unit controller, the responsibility of the logical “controller” is distributed across each of the blades containing authorities 512. This distribution of logical control is shown in
Still referring to
There are fundamental differences between the ONFI and Toggle protocols in terms of the physical flash signaling layer. The present flash controller design allows abstraction of much of the low-level complexity away from upper-level software. Upper-level software could, for example, issue “flash read” or “flash write” commands which are in turn processed differently by the controller depending upon the type of flash with which the controller is communicating. The physical controller could decode the command and translate the decoded command to the correct protocol, depending upon the type of flash and corresponding channel configuration.
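A sketch of that decode-and-translate step follows; the protocol tags and handler names are assumptions for illustration.

    /* Hypothetical per-channel dispatch: one device-independent read
     * command is translated to the signaling of whichever protocol the
     * channel was configured for. */
    typedef enum { PROTO_ONFI, PROTO_TOGGLE, PROTO_ASYNC_SDR } proto_t;

    typedef struct {
        proto_t proto;               /* from the channel configuration registers */
    } channel_t;

    void onfi_read(channel_t *ch, unsigned page);
    void toggle_read(channel_t *ch, unsigned page);
    void sdr_read(channel_t *ch, unsigned page);

    void flash_read_cmd(channel_t *ch, unsigned page)
    {
        switch (ch->proto) {
        case PROTO_ONFI:      onfi_read(ch, page);   break;
        case PROTO_TOGGLE:    toggle_read(ch, page); break;
        case PROTO_ASYNC_SDR: sdr_read(ch, page);    break;
        }
    }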
Each channel 215 in the flash controller 102 has its own phy controls 217, 219, channel configuration registers 221 and software calibrated I/O module 223, the combination of which is selectable and tunable on an individual, per-channel basis, as to protocol, operating frequency, and signal timing. The channel 215 labeled channel N is depicted as having Micron™ ONFI (Open NAND Flash Interface) phy controls 219 (i.e., physical device controls for the ONFI protocol according to the Micron™ manufacturer flash devices), per the selected protocol for channel N. Channel N is coupled to multiple NAND flash devices 108, which, in this example, are Micron™ flash memories that use the ONFI protocol. The flash controller 102 could be operated with flash devices 106 that are all the same (or flash devices 108 that are all the same, etc.), or mixes of flash devices 106, 108 of the various protocols, flash memory device interfaces and manufacturers. Each channel 215 should have the same flash memory devices across that channel 215, but which flash memory device and associated flash memory device interface that channel has is independent of each other channel.
Software program commands 110, which are device independent (i.e., not dependent on a particular flash memory protocol or flash memory device interface), are written by an external device (i.e., a device external to the flash controller 102), such as a processor, into the microcode command FIFO 207 of the flash controller 102. Read/write data 203 is read from or written into the data FIFOs 209. More specifically, write data intended for the flash memories is written into one or more write FIFOs, and read data from the flash memories is read from one or more read FIFOs, collectively illustrated as data FIFOs 209. A memory mapped control/configuration interface 211 is used for control/configuration data, which could also be from an external device such as a processor. The microcode command FIFO 207, the data FIFOs 209, and the memory mapped control/configuration interface 211 are coupled to the multithreaded/virtualized microcode sequence engine 213, which couples to the channels 215, e.g., channels 1 through N. Each channel 215 has a dedicated one or more threads, in a multithreaded operation of the multithreaded/virtualized microcode sequence engine 213. This multithreading virtualizes the microcode sequence engine 213, as if each channel 215 had its own microcode sequence engine 213. In further embodiments, there are multiple physical microcode sequence engines 213, e.g., in a multiprocessing multithreaded operation. This would still be considered an embodiment of the multithreaded/virtualized microcode sequence engine 213.
In some embodiments, state machines control the channels 215. These may act as the above-described virtualized microcode sequence engines 213. For example, in various embodiments, each channel has a state machine, or a state machine could control two channels, two state machines could control each channel, etc. These state machines could be implemented in hardware and fed by the multithreaded/virtualized microcode sequence engine 213, or implemented in threads of the multithreaded/virtualized microcode sequence engine 213, or combinations thereof. In some embodiments, software injects commands into state machine queues, and state machines arbitrate for channels, then issue read or write commands to channels, depending upon operations. In some embodiments, the state machines implement reads, writes and erases, with other commands such as reset, initialization sequences, feature settings, etc., communicated from an external processor along a bypass path which could be controlled by a register. Each state machine could have multiple states for a write, further states for a read, and still further states for erasure cycle(s), with timing and/or frequency (i.e., as they affect signal rate) controlled by states, state transitions, and/or an embodiment of the software calibrated I/O module 223.
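One possible shape for such a per-channel state machine is sketched below; the states and their names are illustrative assumptions.

    /* Hypothetical channel states: separate sequences for writes,
     * reads, and erase cycles, advanced by the sequence engine. */
    typedef enum {
        CH_IDLE,
        CH_WRITE_CMD, CH_WRITE_DATA, CH_WRITE_WAIT,
        CH_READ_CMD,  CH_READ_WAIT,  CH_READ_XFER,
        CH_ERASE_CMD, CH_ERASE_WAIT
    } ch_state_t;

    typedef enum { OP_READ, OP_WRITE, OP_ERASE } op_t;

    /* Entry point chosen when a queued command wins channel arbitration. */
    static ch_state_t start_op(op_t op)
    {
        switch (op) {
        case OP_READ:  return CH_READ_CMD;
        case OP_WRITE: return CH_WRITE_CMD;
        case OP_ERASE: return CH_ERASE_CMD;
        }
        return CH_IDLE;
    }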
The microcode command FIFO 207 allows upstream logic to present transactions to the flash controller 102. The format of the command allows for the upstream logic to present entire transactions (with indicators for start of transaction, and end of transaction). The flash controller begins operating upon entire transactions on receipt of end of transaction markers, in some embodiments. In addition to the microcode command FIFO 207, there are two data FIFOs 209, and in some embodiments more than two, to handle data flowing in and out of flash. Also, there is a memory-mapped register interface 211 for the upstream logic to be able to program the different parameters used to set up the flash controller (e.g., calibration, flash mode, flash type, etc.) as described above. The process by which the channel configuration registers 221 are loaded, and the mechanism by which the software calibrated I/O module 223 generates timing for signal rates or generates signals in some embodiments, are further described below with reference to each of
With reference to
The following is an example of a read data command. This involves sending the flash a specified command value, followed by a specified address value. The sequence of events, performed by an external device such as a processor in communication with the flash controller 102, is outlined below.
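As a minimal sketch (the entry layout and field names are assumptions; 00h and 30h are the conventional NAND read-page command cycles), the three microcode entries of such a transaction might be encoded as:

    /* Hypothetical microcode entries for one read transaction:
     * command cycle, address cycles, then the data transfer. */
    typedef enum { E_CMD, E_ADDR, E_READ } etype_t;

    typedef struct {
        etype_t  type;
        unsigned value;              /* opcode, address, or byte count */
        int      sot, eot;           /* start/end of transaction markers */
    } ucode_entry_t;

    static const ucode_entry_t read_xact[3] = {
        { E_CMD,  0x00,   1, 0 },    /* read-page command cycle (00h) */
        { E_ADDR, 0x1234, 0, 0 },    /* page address cycles (assumed address) */
        { E_READ, 4096,   0, 1 },    /* confirm (30h) and clock out 4096 bytes */
    };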
The flash controller 102 waits for the transaction to be programmed in its entirety before beginning to operate on it in some embodiments. The flash controller 102 parses the three microcode entries, and generates the correct signals on the bus between the flash memory controller 102, which could be implemented on an FPGA (field programmable gate array), and the particular flash memory device 106 on the selected channel 215. In some embodiments, both start and end of transaction markers are referenced. In some embodiments only the end of transaction markers are referenced, with the start of transaction markers being implicit. Exact sequencing on the selected channel 215 would then look like the signals seen on a datasheet from a flash memory vendor.
In some embodiments, the calibration logic is split between programmable logic (e.g., implemented in Verilog on an FPGA that implements the flash controller 102) and software that runs on a processor external to the flash controller 102. This external software could enter calibration values (e.g., through the memory mapped control/configuration interface 211), which changes the behavior of the calibration logic (e.g., the software calibrated I/O module 223). The external software then monitors the fidelity of the data coming back from the bus (e.g., by monitoring errors in the read data) and runs through various calibration points before settling on an optimal setting for each channel 215. This could be accomplished with an embodiment of the flash age tracker 602, either internal to the flash controller 102, or external to the flash controller, e.g., coupled to the external processor.
In
Referring to
Referring to
Referring to
With reference to
for channel = 1 to 2^n
    for chip_enable = 1 to 2^n
        for sample_point = 1 to #max_sample_points
            read from each LUN at this sample point and add sample_point to the list of valid samples if the data read back is correct
        end
    end
    from the list of valid samples, create a list of samples that work for all chip_enable and LUN
    pick the middle sample from the list as a final calibration point for the channel and store it in a register
end
The basic flash channel calibration algorithm can be improved to run faster, as described below, in some embodiments. For the first channel, execute the flash channel calibration algorithm as described above and find a final calibration point. Since flash channels should not vary by much, the sample point search for all other channels can be concentrated around the final calibration point of the first channel in this embodiment. The sample_point range can be final_sample_point_channel0−2 to final_sample_point_channel0+2 instead of 1 to #max_sample_points.
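A sketch of the narrowed search follows; the names are assumptions, and the read_errors_at check stands in for the data-fidelity monitoring described above. Channel 0 is swept in full, and the remaining channels search only a small window around its final calibration point, falling back to a full sweep if that window yields nothing.

    #define MAX_SAMPLE_POINTS 32     /* assumed sweep range */

    int  read_errors_at(int channel, int sample_point);   /* 0 if reads are clean */
    void store_cal_register(int channel, int sample_point);

    /* Sweep [lo, hi] and return the middle of the valid run, or -1. */
    static int sweep(int channel, int lo, int hi)
    {
        int first = -1, last = -1;
        for (int sp = lo; sp <= hi; sp++) {
            if (read_errors_at(channel, sp) == 0) {
                if (first < 0) first = sp;
                last = sp;
            }
        }
        return (first < 0) ? -1 : (first + last) / 2;
    }

    void calibrate_all(int n_channels)
    {
        int c0 = sweep(0, 1, MAX_SAMPLE_POINTS);           /* full search */
        store_cal_register(0, c0);
        for (int ch = 1; ch < n_channels; ch++) {
            int p = sweep(ch, c0 - 2, c0 + 2);             /* narrowed window */
            if (p < 0)
                p = sweep(ch, 1, MAX_SAMPLE_POINTS);       /* fall back */
            store_cal_register(ch, p);
        }
    }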
It should be appreciated that the methods described herein may be performed with a digital processing system, such as a conventional, general-purpose computer system. Special purpose computers, which are designed or programmed to perform only one function, may be used in the alternative.
Display 1511 is in communication with CPU 1501, memory 1503, and mass storage device 1507, through bus 1505. Display 1511 is configured to display any visualization tools or reports associated with the system described herein. Input/output device 1509 is coupled to bus 1505 in order to communicate information in command selections to CPU 1501. It should be appreciated that data to and from external devices may be communicated through the input/output device 1509. CPU 1501 can be defined to execute the functionality described herein to enable the functionality described with reference to
Detailed illustrative embodiments are disclosed herein. However, specific functional details disclosed herein are merely representative for purposes of describing embodiments. Embodiments may be embodied in many alternate forms and should not be construed as limited to only the embodiments set forth herein.
It should be understood that although the terms first, second, etc. may be used herein to describe various steps or calculations, these steps or calculations should not be limited by these terms. These terms are only used to distinguish one step or calculation from another. For example, a first calculation could be termed a second calculation, and, similarly, a second step could be termed a first step, without departing from the scope of this disclosure. As used herein, the term “and/or” and the “/” symbol includes any and all combinations of one or more of the associated listed items.
As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises”, “comprising”, “includes”, and/or “including”, when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Therefore, the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting.
It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two blocks shown in succession may in fact be executed substantially concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
With the above embodiments in mind, it should be understood that the embodiments might employ various computer-implemented operations involving data stored in computer systems. These operations are those requiring physical manipulation of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. Further, the manipulations performed are often referred to in terms, such as producing, identifying, determining, or comparing. Any of the operations described herein that form part of the embodiments are useful machine operations. The embodiments also relate to a device or an apparatus for performing these operations. The apparatus can be specially constructed for the required purpose, or the apparatus can be a general-purpose computer selectively activated or configured by a computer program stored in the computer. In particular, various general-purpose machines can be used with computer programs written in accordance with the teachings herein, or it may be more convenient to construct a more specialized apparatus to perform the required operations.
A module, an application, a layer, an agent or other method-operable entity could be implemented as hardware, firmware, or a processor executing software, or combinations thereof. It should be appreciated that, where a software-based embodiment is disclosed herein, the software can be embodied in a physical machine such as a controller. For example, a controller could include a first module and a second module. A controller could be configured to perform various actions, e.g., of a method, an application, a layer or an agent.
The embodiments can also be embodied as computer readable code on a tangible non-transitory computer readable medium. The computer readable medium is any data storage device that can store data, which can be thereafter read by a computer system. Examples of the computer readable medium include hard drives, network attached storage (NAS), read-only memory, random-access memory, CD-ROMs, CD-Rs, CD-RWs, magnetic tapes, and other optical and non-optical data storage devices. The computer readable medium can also be distributed over a network coupled computer system so that the computer readable code is stored and executed in a distributed fashion. Embodiments described herein may be practiced with various computer system configurations including hand-held devices, tablets, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers and the like. The embodiments can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a wire-based or wireless network.
Although the method operations were described in a specific order, it should be understood that other operations may be performed in between described operations, described operations may be adjusted so that they occur at slightly different times or the described operations may be distributed in a system which allows the occurrence of the processing operations at various intervals associated with the processing.
In various embodiments, one or more portions of the methods and mechanisms described herein may form part of a cloud-computing environment. In such embodiments, resources may be provided over the Internet as services according to one or more various models. Such models may include Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). In IaaS, computer infrastructure is delivered as a service. In such a case, the computing equipment is generally owned and operated by the service provider. In the PaaS model, software tools and underlying equipment used by developers to develop software solutions may be provided as a service and hosted by the service provider. SaaS typically includes a service provider licensing software as a service on demand. The service provider may host the software, or may deploy the software to a customer for a given period of time. Numerous combinations of the above models are possible and are contemplated.
Various units, circuits, or other components may be described or claimed as “configured to” perform a task or tasks. In such contexts, the phrase “configured to” is used to connote structure by indicating that the units/circuits/components include structure (e.g., circuitry) that performs the task or tasks during operation. As such, the unit/circuit/component can be said to be configured to perform the task even when the specified unit/circuit/component is not currently operational (e.g., is not on). The units/circuits/components used with the “configured to” language include hardware—for example, circuits, memory storing program instructions executable to implement the operation, etc. Reciting that a unit/circuit/component is “configured to” perform one or more tasks is expressly intended not to invoke 35 U.S.C. 112, sixth paragraph, for that unit/circuit/component. Additionally, “configured to” can include generic structure (e.g., generic circuitry) that is manipulated by software and/or firmware (e.g., an FPGA or a general-purpose processor executing software) to operate in a manner that is capable of performing the task(s) at issue. “Configured to” may also include adapting a manufacturing process (e.g., a semiconductor fabrication facility) to fabricate devices (e.g., integrated circuits) that are adapted to implement or perform one or more tasks.
The foregoing description, for the purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the embodiments and their practical applications, to thereby enable others skilled in the art to best utilize the embodiments and various modifications as may be suited to the particular use contemplated. Accordingly, the present embodiments are to be considered as illustrative and not restrictive, and the invention is not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims.
Number | Name | Date | Kind |
---|---|---|---|
5390327 | Lubbers et al. | Feb 1995 | A |
5479653 | Jones | Dec 1995 | A |
5649093 | Hanko et al. | Jul 1997 | A |
5764767 | Beimel et al. | Jun 1998 | A |
6182214 | Hardjono | Jan 2001 | B1 |
6275898 | DeKoning | Aug 2001 | B1 |
6535417 | Tsuda | Mar 2003 | B2 |
6643748 | Wieland | Nov 2003 | B1 |
6725392 | Frey et al. | Apr 2004 | B1 |
6836816 | Kendall | Dec 2004 | B2 |
6985995 | Holland et al. | Jan 2006 | B2 |
7032125 | Holt et al. | Apr 2006 | B2 |
7051155 | Talagala et al. | May 2006 | B2 |
7065617 | Wang | Jun 2006 | B2 |
7069383 | Yamamoto et al. | Jun 2006 | B2 |
7076606 | Orsley | Jul 2006 | B2 |
7107480 | Moshayedi et al. | Sep 2006 | B1 |
7159150 | Kenchammana-Hosekote et al. | Jan 2007 | B2 |
7162575 | Dalal et al. | Jan 2007 | B2 |
7164608 | Lee | Jan 2007 | B2 |
7334156 | Land et al. | Feb 2008 | B2 |
7370220 | Nguyen et al. | May 2008 | B1 |
7424498 | Patterson | Sep 2008 | B1 |
7424592 | Karr | Sep 2008 | B1 |
7444532 | Masuyama et al. | Oct 2008 | B2 |
7480658 | Heinla et al. | Jan 2009 | B2 |
7536506 | Ashmore et al. | May 2009 | B2 |
7558859 | Kasiolas | Jul 2009 | B2 |
7565446 | Talagala et al. | Jul 2009 | B2 |
7613947 | Coatney | Nov 2009 | B1 |
7681104 | Sim-Tang et al. | Mar 2010 | B1 |
7681105 | Sim-Tang et al. | Mar 2010 | B1 |
7730258 | Smith | Jun 2010 | B1 |
7743276 | Jacobsen et al. | Jun 2010 | B2 |
7757038 | Kitahara | Jul 2010 | B2 |
7778960 | Chatterjee et al. | Aug 2010 | B1 |
7814272 | Barrall et al. | Oct 2010 | B2 |
7814273 | Barrall et al. | Oct 2010 | B2 |
7818531 | Barrall et al. | Oct 2010 | B2 |
7827351 | Suetsugu et al. | Nov 2010 | B2 |
7827439 | Matthew et al. | Nov 2010 | B2 |
7870105 | Arakawa et al. | Jan 2011 | B2 |
7885938 | Greene et al. | Feb 2011 | B1 |
7886111 | Klemm et al. | Feb 2011 | B2 |
7908448 | Chatterjee et al. | Mar 2011 | B1 |
7916538 | Jeon et al. | Mar 2011 | B2 |
7941697 | Mathew et al. | May 2011 | B2 |
7958303 | Shuster | Jun 2011 | B2 |
7971129 | Watson | Jun 2011 | B2 |
7991822 | Bish et al. | Aug 2011 | B2 |
8010485 | Chatterjee et al. | Aug 2011 | B1 |
8010829 | Chatterjee et al. | Aug 2011 | B1 |
8020047 | Courtney | Sep 2011 | B2 |
8046548 | Chatterjee et al. | Oct 2011 | B1 |
8051361 | Sim-Tang et al. | Nov 2011 | B2 |
8051362 | Li et al. | Nov 2011 | B2 |
8082393 | Galloway et al. | Dec 2011 | B2 |
8086634 | Mimatsu | Dec 2011 | B2 |
8086911 | Taylor | Dec 2011 | B1 |
8090837 | Shin et al. | Jan 2012 | B2 |
8108502 | Tabbara et al. | Jan 2012 | B2 |
8117388 | Jernigan, IV | Mar 2012 | B2 |
8140821 | Raizen et al. | Mar 2012 | B1 |
8145736 | Tewari et al. | Mar 2012 | B1 |
8145838 | Miller et al. | Mar 2012 | B1 |
8145840 | Koul et al. | Mar 2012 | B2 |
8176360 | Frost et al. | May 2012 | B2 |
8180855 | Aiello et al. | May 2012 | B2 |
8200922 | McKean et al. | Jun 2012 | B2 |
8225006 | Karamcheti | Jul 2012 | B1 |
8239618 | Kotzur et al. | Aug 2012 | B2 |
8244999 | Chatterjee et al. | Aug 2012 | B1 |
8305811 | Jeon | Nov 2012 | B2 |
8315999 | Chatley et al. | Nov 2012 | B2 |
8327080 | Der | Dec 2012 | B1 |
8351290 | Huang et al. | Jan 2013 | B1 |
8375146 | Sinclair | Feb 2013 | B2 |
8397016 | Talagala et al. | Mar 2013 | B2 |
8402152 | Duran | Mar 2013 | B2 |
8412880 | Leibowitz et al. | Apr 2013 | B2 |
8423739 | Ash et al. | Apr 2013 | B2 |
8429436 | Filingim et al. | Apr 2013 | B2 |
8473778 | Simitci | Jun 2013 | B2 |
8479037 | Chatterjee et al. | Jul 2013 | B1 |
8498967 | Chatterjee et al. | Jul 2013 | B1 |
8522073 | Cohen | Aug 2013 | B2 |
8533527 | Daikokuya et al. | Sep 2013 | B2 |
8544029 | Bakke et al. | Sep 2013 | B2 |
8549224 | Zeryck | Oct 2013 | B1 |
8589625 | Colgrove et al. | Nov 2013 | B2 |
8595455 | Chatterjee et al. | Nov 2013 | B2 |
8615599 | Takefman et al. | Dec 2013 | B1 |
8627136 | Shankar et al. | Jan 2014 | B2 |
8627138 | Clark | Jan 2014 | B1 |
8660131 | Vermunt et al. | Feb 2014 | B2 |
8661218 | Piszczek et al. | Feb 2014 | B1 |
8700875 | Barron et al. | Apr 2014 | B1 |
8706694 | Chatterjee et al. | Apr 2014 | B2 |
8706914 | Duchesneau | Apr 2014 | B2 |
8713405 | Healey et al. | Apr 2014 | B2 |
8725730 | Keeton et al. | May 2014 | B2 |
8756387 | Frost et al. | Jun 2014 | B2 |
8762793 | Grube et al. | Jun 2014 | B2 |
8775858 | Gower et al. | Jul 2014 | B2 |
8775868 | Colgrove et al. | Jul 2014 | B2 |
8788913 | Xin et al. | Jul 2014 | B1 |
8799746 | Baker et al. | Aug 2014 | B2 |
8819311 | Liao | Aug 2014 | B2 |
8819383 | Jobanputra et al. | Aug 2014 | B1 |
8824261 | Miller et al. | Sep 2014 | B1 |
8843700 | Salessi et al. | Sep 2014 | B1 |
8850108 | Hayes et al. | Sep 2014 | B1 |
8850288 | Lazier et al. | Sep 2014 | B1 |
8856593 | Eckhardt et al. | Oct 2014 | B2 |
8856619 | Cypher | Oct 2014 | B1 |
8862847 | Feng et al. | Oct 2014 | B2 |
8862928 | Xavier et al. | Oct 2014 | B2 |
8868825 | Hayes | Oct 2014 | B1 |
8874836 | Hayes | Oct 2014 | B1 |
8886778 | Nedved et al. | Nov 2014 | B2 |
8898383 | Yamamoto et al. | Nov 2014 | B2 |
8898388 | Kimmel | Nov 2014 | B1 |
8904231 | Coatney et al. | Dec 2014 | B2 |
8918478 | Ozzie et al. | Dec 2014 | B2 |
8930307 | Colgrove et al. | Jan 2015 | B2 |
8930633 | Amit et al. | Jan 2015 | B2 |
8949502 | McKnight et al. | Feb 2015 | B2 |
8959110 | Smith et al. | Feb 2015 | B2 |
8977597 | Ganesh et al. | Mar 2015 | B2 |
9003144 | Hayes et al. | Apr 2015 | B1 |
9009724 | Gold et al. | Apr 2015 | B2 |
9021053 | Bernbo et al. | Apr 2015 | B2 |
9021215 | Meir et al. | Apr 2015 | B2 |
9025393 | Wu | May 2015 | B2 |
9043372 | Makkar et al. | May 2015 | B2 |
9053808 | Sprouse | Jun 2015 | B2 |
9058155 | Cepulis et al. | Jun 2015 | B2 |
9116819 | Cope et al. | Aug 2015 | B2 |
9117536 | Yoon | Aug 2015 | B2 |
9122401 | Zaltsman et al. | Sep 2015 | B2 |
9134908 | Horn et al. | Sep 2015 | B2 |
9153337 | Sutardja | Oct 2015 | B2 |
9189650 | Jaye et al. | Nov 2015 | B2 |
9201733 | Verma | Dec 2015 | B2 |
9207876 | Shu et al. | Dec 2015 | B2 |
9251066 | Colgrove et al. | Feb 2016 | B2 |
9323667 | Bennett | Apr 2016 | B2 |
9323681 | Apostolides et al. | Apr 2016 | B2 |
9348538 | Mallaiah et al. | May 2016 | B2 |
9384082 | Lee et al. | Jul 2016 | B1 |
9390019 | Patterson | Jul 2016 | B2 |
9405478 | Koseki et al. | Aug 2016 | B2 |
9432541 | Ishida | Aug 2016 | B2 |
9477632 | Du | Oct 2016 | B2 |
9552299 | Stalzer | Jan 2017 | B2 |
9818478 | Chung | Nov 2017 | B2 |
9829066 | Thomas et al. | Nov 2017 | B2 |
20020144059 | Kendall | Oct 2002 | A1 |
20030105984 | Masuyama et al. | Jun 2003 | A1 |
20030110205 | Johnson | Jun 2003 | A1 |
20040161086 | Buntin et al. | Aug 2004 | A1 |
20050001652 | Malik et al. | Jan 2005 | A1 |
20050076228 | Davis et al. | Apr 2005 | A1 |
20050235132 | Karr et al. | Oct 2005 | A1 |
20050278460 | Shin et al. | Dec 2005 | A1 |
20050283649 | Turner et al. | Dec 2005 | A1 |
20060015683 | Ashmore et al. | Jan 2006 | A1 |
20060114930 | Lucas et al. | Jun 2006 | A1 |
20060174157 | Barrall et al. | Aug 2006 | A1 |
20060248294 | Nedved et al. | Nov 2006 | A1 |
20070033205 | Pradhan | Feb 2007 | A1 |
20070079068 | Draggon | Apr 2007 | A1 |
20070214194 | Reuter | Sep 2007 | A1 |
20070214314 | Reuter | Sep 2007 | A1 |
20070234016 | Davis et al. | Oct 2007 | A1 |
20070268905 | Baker et al. | Nov 2007 | A1 |
20080080709 | Michtchenko et al. | Apr 2008 | A1 |
20080095375 | Takeoka et al. | Apr 2008 | A1 |
20080107274 | Worthy | May 2008 | A1 |
20080155191 | Anderson et al. | Jun 2008 | A1 |
20080295118 | Liao | Nov 2008 | A1 |
20090077208 | Nguyen et al. | Mar 2009 | A1 |
20090138654 | Sutardja | May 2009 | A1 |
20090216910 | Duchesneau | Aug 2009 | A1 |
20090216920 | Lauterbach et al. | Aug 2009 | A1 |
20100017444 | Chatterjee et al. | Jan 2010 | A1 |
20100042636 | Lu | Feb 2010 | A1 |
20100094806 | Apostolides et al. | Apr 2010 | A1 |
20100115070 | Missimilly | May 2010 | A1 |
20100125695 | Wu et al. | May 2010 | A1 |
20100162076 | Sim-Tang et al. | Jun 2010 | A1 |
20100169707 | Mathew et al. | Jul 2010 | A1 |
20100174576 | Naylor | Jul 2010 | A1 |
20100268908 | Ouyang et al. | Oct 2010 | A1 |
20100312915 | Marowsky-Bree et al. | Dec 2010 | A1 |
20110035540 | Fitzgerald et al. | Feb 2011 | A1 |
20110040925 | Frost et al. | Feb 2011 | A1 |
20110060927 | Fillingim et al. | Mar 2011 | A1 |
20110119462 | Leach et al. | May 2011 | A1 |
20110219170 | Frost et al. | Sep 2011 | A1 |
20110238625 | Hamaguchi et al. | Sep 2011 | A1 |
20110264843 | Haines et al. | Oct 2011 | A1 |
20110302369 | Goto et al. | Dec 2011 | A1 |
20120011398 | Eckhardt | Jan 2012 | A1 |
20120079318 | Colgrove et al. | Mar 2012 | A1 |
20120110249 | Jeong et al. | May 2012 | A1 |
20120131253 | McKnight | May 2012 | A1 |
20120150826 | Retnamma et al. | Jun 2012 | A1 |
20120158923 | Mohamed et al. | Jun 2012 | A1 |
20120191900 | Kunimatsu et al. | Jul 2012 | A1 |
20120198152 | Terry et al. | Aug 2012 | A1 |
20120198261 | Brown et al. | Aug 2012 | A1 |
20120209943 | Jung | Aug 2012 | A1 |
20120226934 | Rao | Sep 2012 | A1 |
20120246435 | Meir et al. | Sep 2012 | A1 |
20120260055 | Murase | Oct 2012 | A1 |
20120311557 | Resch | Dec 2012 | A1 |
20130022201 | Glew et al. | Jan 2013 | A1 |
20130036314 | Glew et al. | Feb 2013 | A1 |
20130042056 | Shats | Feb 2013 | A1 |
20130060884 | Bernbo et al. | Mar 2013 | A1 |
20130067188 | Mehra et al. | Mar 2013 | A1 |
20130073894 | Xavier et al. | Mar 2013 | A1 |
20130124776 | Hallak et al. | May 2013 | A1 |
20130132800 | Healy et al. | May 2013 | A1 |
20130151653 | Sawicki et al. | Jun 2013 | A1 |
20130151771 | Tsukahara et al. | Jun 2013 | A1 |
20130173853 | Ungureanu et al. | Jul 2013 | A1 |
20130238554 | Yucel et al. | Sep 2013 | A1 |
20130259234 | Acar et al. | Oct 2013 | A1 |
20130262758 | Smith et al. | Oct 2013 | A1 |
20130339314 | Carpenter et al. | Dec 2013 | A1 |
20130339635 | Amit et al. | Dec 2013 | A1 |
20130339818 | Baker et al. | Dec 2013 | A1 |
20140032815 | Mangalindan | Jan 2014 | A1 |
20140040535 | Lee | Feb 2014 | A1 |
20140040702 | He et al. | Feb 2014 | A1 |
20140047263 | Coatney et al. | Feb 2014 | A1 |
20140047269 | Kim | Feb 2014 | A1 |
20140063721 | Herman et al. | Mar 2014 | A1 |
20140064048 | Cohen et al. | Mar 2014 | A1 |
20140068224 | Fan et al. | Mar 2014 | A1 |
20140075252 | Luo et al. | Mar 2014 | A1 |
20140136880 | Shankar et al. | May 2014 | A1 |
20140181402 | White | Jun 2014 | A1 |
20140237164 | Le et al. | Aug 2014 | A1 |
20140279936 | Bernbo et al. | Sep 2014 | A1 |
20140280025 | Eidson et al. | Sep 2014 | A1 |
20140289588 | Nagadomi et al. | Sep 2014 | A1 |
20140380125 | Calder et al. | Dec 2014 | A1 |
20140380126 | Yekhanin et al. | Dec 2014 | A1 |
20150032720 | James | Jan 2015 | A1 |
20150039645 | Lewis | Feb 2015 | A1 |
20150039849 | Lewis | Feb 2015 | A1 |
20150089283 | Kermarrec et al. | Mar 2015 | A1 |
20150089623 | Sondhi et al. | Mar 2015 | A1 |
20150100746 | Rychlik | Apr 2015 | A1 |
20150134824 | Mickens et al. | May 2015 | A1 |
20150153800 | Lucal et al. | Jun 2015 | A1 |
20150180714 | Chunn | Jun 2015 | A1 |
20150280959 | Vincent | Oct 2015 | A1 |
20160147582 | Karakulak | May 2016 | A1 |
Number | Date | Country |
---|---|---|
2164006 | Mar 2010 | EP |
2256621 | Dec 2010 | EP |
2639997 | Sep 2013 | EP |
02-130033 | Feb 2002 | WO |
2006069235 | Jun 2006 | WO |
2008103569 | Aug 2008 | WO |
2008157081 | Dec 2008 | WO |
2012174427 | Dec 2012 | WO |
2013032544 | Mar 2013 | WO |
2013032825 | Jul 2013 | WO |
Entry |
---|
Wong, Theodore M., et al., "Verifiable secret redistribution for archive systems," In: Proceedings of the First International IEEE Security in Storage Workshop, 2002 (SISW '02), Dec. 11, 2002, pp. 1-12. |
Schmid, Patrick, "RAID Scaling Charts, Part 3: 4-128 kB Stripes Compared," Tom's Hardware, Nov. 27, 2007 (http://www.tomshardware.com/reviews/RAID-SCALING-CHARTS.1735-4.html), pp. 1-2. |
Stalzer, Mark A., “FlashBlades: System Architecture and Applications,” Proceedings of the 2nd Workshop on Architectures and Systems for Big Data, Association for Computing Machinery, New York, NY, 2012, pp. 10-14. |
Kim, Ju-Kyeong, et al., "Data Access Frequency based Data Replication Method using Erasure Codes in Cloud Storage System," Journal of the Institute of Electronics and Information Engineers, Feb. 2014, vol. 51, No. 2, pp. 85-91. |
Hwang, Kai, et al., "RAID-x: A New Distributed Disk Array for I/O-Centric Cluster Computing," HPDC '00: Proceedings of the 9th IEEE International Symposium on High Performance Distributed Computing, IEEE, 2000, pp. 279-286. |
Storer, Mark W., et al., "Pergamum: Replacing Tape with Energy Efficient, Reliable, Disk-Based Archival Storage," FAST '08: 6th USENIX Conference on File and Storage Technologies, San Jose, CA, Feb. 26-29, 2008, pp. 1-16. |
International Search Report and the Written Opinion of the International Searching Authority, PCT/US2015/018169, dated May 15, 2015. |
International Search Report and the Written Opinion of the International Searching Authority, PCT/US2015/034302, dated Sep. 11, 2015. |
International Search Report and the Written Opinion of the International Searching Authority, PCT/US2015/039135, dated Sep. 18, 2015. |
International Search Report and the Written Opinion of the International Searching Authority, PCT/US2015/039136, dated Sep. 23, 2015. |
International Search Report, PCT/US2015/039142, dated Sep. 24, 2015. |
International Search Report, PCT/US2015/034291, dated Sep. 30, 2015. |
International Search Report and the Written Opinion of the International Searching Authority, PCT/US2015/039137, dated Oct. 1, 2015. |
International Search Report, PCT/US2015/044370, dated Dec. 15, 2015. |
International Search Report and the Written Opinion of the International Searching Authority, PCT/US2016/031039, dated May 5, 2016. |
International Search Report, PCT/US2015/014604, dated May 19, 2016. |
International Search Report, PCT/US2015/014361, dated May 30, 2016. |
International Search Report, PCT/US2016/014356, dated Jun. 28, 2016. |
International Search Report, PCT/US2016/014357, dated Jun. 29, 2016. |
International Search Report and the Written Opinion of the International Searching Authority, PCT/US2016/016504, dated Jul. 6, 2016. |
International Search Report and the Written Opinion of the International Searching Authority, PCT/US2016/024391, dated Jul. 12, 2016. |
International Search Report and the Written Opinion of the International Searching Authority, PCT/US2016/026529, dated Jul. 19, 2016. |
International Search Report and the Written Opinion of the International Searching Authority, PCT/US2016/023485, dated Jul. 21, 2016. |
International Search Report and the Written Opinion of the International Searching Authority, PCT/US2016/033306, dated Aug. 19, 2016. |
International Search Report and the Written Opinion of the International Searching Authority, PCT/US2016/047808, dated Nov. 25, 2016. |
International Search Report and the Written Opinion of the International Searching Authority, PCT/US2016/042147, dated Nov. 30, 2016. |
International Search Report and the Written Opinion of the International Searching Authority, PCT/US2016/054080, dated Dec. 21, 2016. |
International Search Report and the Written Opinion of the International Searching Authority, PCT/US2016/056917, dated Jan. 27, 2017. |
Number | Date | Country | |
---|---|---|
62366081 | Jul 2016 | US |