1. Field of the Invention
The present invention relates generally to US Classification 711/216 and, more particularly, to an improved storage array controller for flash-based storage devices.
2. Description of the Related Art
U.S. Pat. No. 6,480,936 describes a cache control unit for a storage apparatus.
U.S. Pat. No. 7,574,556 and U.S. Pat. No. 7,500,050 describe destaging of writes in a non-volatile cache.
U.S. Pat. No. 7,253,981 describes the re-ordering of writes in a disk controller.
U.S. Pat. No. 6,957,302 describes the use of a write stack drive in combination with a normal drive.
U.S. Pat. No. 5,893,164 describes a method of tracking incomplete writes in a disk array.
U.S. Pat. No. 6,219,289 describes a data writing apparatus for a tester to write data to a plurality of electric devices.
U.S. Pat. No. 7,318,118 describes a disk drive controller that completes some writes to flash memory of a hard disk drive for subsequent de-staging to the disk, whereas for other writes the data is written directly to disk.
U.S. Pat. No. 6,427,184 describes a disk controller that detects a sequential I/O stream from a host computer.
U.S. Pat. No. 7,216,199 describes a storage controller that continuously writes write-requested data to a stripe on a disk without using a write buffer.
US Publication 2008/0307192 describes storage address re-mapping.
The invention is an improved storage array controller that adds a level of indirection between the host system and the storage array. The storage array controller controls a storage array comprising at least one solid-state storage device. The storage array controller improvements include: garbage collection, sequentialization of writes, combining of writes, aggregation of writes, increased reliability, improved performance, and addition of resources and functions to a computer system with a storage subsystem.
So that the features of the present invention can be understood, a more detailed description of the invention, briefly summarized above, may be had by reference to typical embodiments, some of which are illustrated in the accompanying drawings. It is to be noted, however, that the accompanying drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of the scope of the invention, for the invention may admit to other equally effective embodiments. The following detailed description makes reference to the accompanying drawings that are now briefly described.
While the invention is susceptible to various modifications, combinations, and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that the accompanying drawings and detailed description are not intended to limit the invention to the particular form disclosed, but on the contrary, the intention is to cover all modifications, combinations, equivalents and alternatives falling within the spirit and scope of the present invention as defined by the accompanying claims.
Glossary and Conventions
Terms that are special to the field of the invention or specific to this description are defined in this description, and the first use of such special terms (which normally includes the definition of that term) is highlighted in italics for the convenience of the reader. Table 1 shows a glossary, also for the convenience of the reader. If any information from Table 1 that is used for claim interpretation or any other purpose conflicts with the description text, figures, or other tables, then the information in the description shall apply.
In this description there are several figures that depict similar structures with similar parts or components; to avoid confusion, such similar elements are given distinct reference labels in different figures.
In the following detailed description and in the accompanying drawings, specific terminology and images are used in order to provide a thorough understanding. In some instances, the terminology and images may imply specific details that are not required to practice all embodiments. Similarly, the embodiments described and illustrated are representative and should not be construed as precise representations, as there are prospective variations on what is disclosed that will be obvious to someone with skill in the art. Thus this disclosure is not limited to the specific embodiments described and shown but embraces all prospective variations that fall within its scope. For brevity, not all steps may be detailed, where such details will be known to someone with skill in the art having benefit of this disclosure.
High-Level Description
This invention focuses on storage arrays that include flash-based storage devices or solid-state storage devices. The solid-state storage device will typically be a solid-state disk (SSD) and we will use an SSD in our examples, but the solid-state storage device does not have to be an SSD. An SSD may, for example, comprise flash devices, but could also comprise other forms of solid-state memory components or devices (e.g. SRAM, DRAM, MRAM, etc.), which may be volatile or non-volatile; a combination of different types of solid-state memory components; or a combination of solid-state memory with other types of storage devices (often called a hybrid disk). Such storage arrays may additionally include hard-disk drives (HD or HDD). We use the term storage device to refer to any kind of HDD, SSD, hybrid disk, etc.
A storage array controller is logically located between a host system (or host) and multiple storage devices (e.g. multiple SSDs, multiple HDDs, or a mix of SSDs and HDDs, or a mix of various storage devices, etc.). An SSD typically contains its own SSD controller (which may be implemented as an SSD Controller Chip), but a storage array controller may have more resources than an SSD controller. This invention allows a storage array controller to use resources, such as larger memory size, non-volatile memory, etc. as well as unique information (because a storage array controller is higher than the SSD controller(s) in the storage array hierarchy, i.e. further from the storage devices) in order to manage and control a storage array as well as provide information to the SSD controller(s). A storage array controller also has additional unique information since it typically connects to multiple SSDs.
A Storage Array Controller
In alternative systems and/or for different uses of Computer System 150, the Operating System 208 may be a real-time OS (RTOS, etc.); an embedded OS for PDAs (e.g. iOS, Palm OS, Symbian OS, Microsoft Windows CE, Embedded Linux, NetBSD, etc.); an embedded OS for smartphones, cell phones, featurephones, etc. (e.g. BlackBerry OS, Embedded Linux, Android, MeeGo, iPhone OS, Palm OS, Symbian OS, etc.); a desktop or platform OS (e.g. Microsoft Windows, Linux, Mac OS, Google Chrome OS, etc.); a proprietary OS, platform, or framework for storage applications (e.g. LSI FastPath, Fusion-MPT, etc.); or other operating system, operating environment (cloud-based etc.), virtual operating system or virtual machine (VM, etc.). In some such systems, e.g. for systems that use proprietary frameworks or for specialized storage appliances or storage systems, Operating System 208 or a part of it, or similar functions to Operating System 208, may be integrated with Storage Array Controller 108, or parts of Storage Array Controller 108.
Other embodiments than those shown in the accompanying drawings are possible without departing from the scope of the invention.
Other embodiments of Storage Array Controller 108 and Solid-State Disk 218 than those shown in the accompanying drawings are also possible without departing from the scope of the invention.
Storage Array Controller Functions
In different and/or alternative systems and/or for different uses of Computer System 150 the Host System 202 may use a different scheme than Host Filesystem Logic 502, Host Filesystem Map 504, Host Filesystem Freelist 506, and their related structures, components, and logic. For example, implementations and embodiments described here are not limited to systems that use a Host Filesystem Map 504. For example, embedded systems, object-oriented filesystems, or other systems may have no notion of files and filenames. These systems will still generate what we call an HBA, or the equivalent of an HBA, or what would be called an LBA in the absence of a storage array controller.
Unless we explicitly state otherwise, we assume that the host block size (HBS) is equal to the disk block size (DBS). In some embodiments, the HBA may be a composite or union of a logical unit number (LUN) that identifies a logical portion of the storage array or disk or other device in the storage array; an LBA; the virtual machine (VM), if any; a UserID that identifies the user application; a VolumeID that identifies a logical volume; and other data that may be used for logical access or management purposes. Note that to simplify the description and clarify the figures, the LUN and other details may be omitted, or shown separately from HBA in some figures. It should still be clear that operations may be performed on different LUNs.
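As an illustration of such a composite HBA, the following C sketch shows one possible layout; the field names and widths are assumptions for illustration only and are not mandated by this description.

```c
#include <stdint.h>

/* Illustrative composite host block address (HBA). The field names and
   widths are assumptions of this sketch; an implementation may pack the
   LUN, LBA, and management identifiers differently, or omit some. */
typedef struct {
    uint16_t lun;       /* logical unit within the storage array     */
    uint64_t lba;       /* logical block address within the LUN      */
    uint16_t vm_id;     /* virtual machine issuing the I/O, if any   */
    uint16_t user_id;   /* identifies the user application           */
    uint16_t volume_id; /* identifies a logical volume               */
} hba_t;
```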
Because the terms just described can be confusing we summarize the above again briefly. With just a single disk, the host provides an LBA directly to the disk; the disk controller converts the LBA to the physical disk sector (for an HDD) or to the PBN (for an SSD). In the presence of a storage array controller the host still provides an LBA, but now to the storage array controller instead of a disk (and thus we call the LBA an HBA to avoid confusion). The storage array controller then maps this HBA to an ABA and provides the ABA to the disk. The disk (HDD or SSD) then converts this ABA to a physical disk address: either the physical disk sector (for an HDD) or PBN (for an SSD).
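The following C sketch illustrates the two mapping steps just summarized, using the hr_map structure defined under Details Common to Several Embodiments below; the ssd_ftl table stands in for the SSD controller's private flash translation layer and is an assumption of the sketch.

```c
#include <stdint.h>

#define NUM_BLOCKS 1024

/* Storage array controller map from host block address (HBA) to array
   block address (ABA); hr_map is defined in the text below. */
static uint64_t hr_map[NUM_BLOCKS];

/* Stand-in for the SSD controller's private flash translation layer,
   mapping an ABA (its DBA portion) to a physical block number (PBN);
   this table is an assumption of the sketch. */
static uint64_t ssd_ftl[NUM_BLOCKS];

/* Step 1, performed by the storage array controller. */
static uint64_t hba_to_aba(uint64_t hba) { return hr_map[hba]; }

/* Step 2, performed by the disk controller inside the SSD. */
static uint64_t aba_to_pbn(uint64_t aba) { return ssd_ftl[aba]; }
```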
It is important to understand the additional layer of hierarchy that a storage array controller introduces. This additional layer of hierarchy (i.e. additional level of indirection, or additional mapping, re-mapping, etc.) permits the improvements described here.
Alternative Implementations
Data structures, fields, commands, etc. and the algorithms, functions, and operations, etc. used in embodiments here are defined in terms of software operations, code and pseudo-code, but it should be noted that the algorithms etc. may be performed in hardware; software; firmware; microcode; a combination of hardware, software, firmware or microcode; or in any other manner that performs the same function and/or has the same effect. The data structures etc. (or parts of them) may be stored in the storage array controller in SRAM, DRAM, embedded flash, or other memory. The data structures (or parts of them) may also be stored outside the storage array controller, for example on any of the storage devices of a storage array (the local storage or remote storage, i.e. remote from the storage array connected to the storage array controller) or on a host system (the local host or a remote host, i.e. remote from the host connected to the storage array controller).
Details Common to Several Embodiments
We will now define some of the data structures used in one or more of the embodiments described here (including a map and a freelist). A map hr_map is defined between HBAs and ABAs, for example, as hr_map[hba]->aba. Thus hr_map takes an HBA as input and returns an ABA. We say that the HBA maps to that ABA (we can also say that the storage array controller maps or re-maps data from the operating system). A special symbol or bit may indicate that an entry in hr_map[hba] is unmapped, and/or we can use a special table entry to indicate an entry in hr_map[hba] is unmapped. A freelist uses a structure aba_free to maintain a list of free blocks that may be used.
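A minimal C sketch of hr_map and aba_free follows, assuming a flat array map, a sentinel value for unmapped entries, and a stack-style freelist; a real storage array controller would trigger garbage collection when the freelist runs low.

```c
#include <stddef.h>
#include <stdint.h>

#define NUM_BLOCKS   1024
#define ABA_UNMAPPED UINT64_MAX   /* sentinel marking an unmapped hr_map entry */

static uint64_t hr_map[NUM_BLOCKS];    /* hr_map[hba] -> aba                 */
static uint64_t aba_free[NUM_BLOCKS];  /* freelist of ABAs available for use */
static size_t   aba_free_count;

void map_init(void)
{
    for (size_t i = 0; i < NUM_BLOCKS; i++) {
        hr_map[i]   = ABA_UNMAPPED;    /* nothing is mapped initially */
        aba_free[i] = i;               /* every ABA starts out free   */
    }
    aba_free_count = NUM_BLOCKS;
}

/* Remap a host write: pop a free ABA, point the HBA at it, and return the
   previously mapped ABA (ABA_UNMAPPED if none) so that its old disk block
   can later be trimmed or garbage collected. Assumes the freelist is not
   empty; a real controller would garbage collect instead. */
uint64_t remap_write(uint64_t hba)
{
    uint64_t old_aba = hr_map[hba];
    hr_map[hba] = aba_free[--aba_free_count];
    return old_aba;
}
```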
Distinction Between Storage Array Controller and Disk Controller Etc.
We have used the term storage array controller throughout this description (rather than, for example, storage controller) in order to avoid confusion with disk controller or SSD controller.
Disk Trim Command and Equivalents
The algorithms, operations, etc. in various embodiments described here may use a disk trim command (trim command or just trim). A disk trim command was proposed to the disk-drive industry in the 2007 timeframe and introduced in the 2009 timeframe. One such disk trim command is a standard storage command, part of the ATA interface standard, and is intended for use with an SSD. A disk trim command is issued to the SSD; the disk trim command specifies a number of disk sectors on the SSD using data ranges and LBAs (or using ABAs or the DBAs contained in ABAs in the presence of a storage array controller); and the disk trim command is directed to the specified disk sectors. The disk trim command allows an OS to tell an SSD that the disk sectors specified in the trim command are no longer required and may be deleted or erased. The disk trim command allows the SSD to increase performance by executing housekeeping functions, such as erasing flash blocks, that the SSD could not otherwise execute without the information in the disk trim command.
It should be noted from the above explanation that when we say, for example, “place an ABA in a disk trim command,” the disk trim command may actually require an LBA (if it is a standard ATA command, for example), and that LBA is the DBA portion of the ABA. To simplify the description we may thus refer to an LBA, a DBA, and an ABA as the same block address at the disk level.
In certain embodiments, the disk trim command and other storage commands have fixed and well-specified formats defined by various standards (e.g. SATA, SCSI, etc.). These standard commands are often complex, with many long fields and intricate rules of use. Storage commands also typically vary in format depending on the type of storage bus (e.g. SCSI, ATA, etc.). In this description we may simplify disk trim commands, storage commands, and other commands in the text and figures so that the explanation is clear (and the format of the storage commands may also vary between different figures and different embodiments). The embodiments described here are intended to work with any standard command set (and/or any non-standard and/or proprietary command set) even though a command or command set shown in the text or in a figure in this description may not exactly follow any standard format.
In certain embodiments, the Storage Array Controller 108 may read from a storage device to determine if a disk trim command (or equivalent command, operation, etc.) is supported, for example by inspecting the IDENTIFY DEVICE data words using the ATA command set.
In certain embodiments, the result of a read from a storage device after a trim command is performed may be configured: e.g. (a) deterministic read data that is always zero; (b) deterministic read data that may be set to any value; (c) indeterminate read data (indeterminate means two successive reads may return different values).
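The following C sketch illustrates such a capability check; the word and bit positions reflect our reading of ATA8-ACS (word 169 bit 0 for trim support; word 69 bits 14 and 5 for the deterministic and read-zero behaviors) and should be verified against the standard.

```c
#include <stdbool.h>
#include <stdint.h>

/* IDENTIFY DEVICE returns 256 little-endian 16-bit words. Word and bit
   positions follow our reading of ATA8-ACS: word 169 bit 0 = trim
   supported in DATA SET MANAGEMENT; word 69 bit 14 = deterministic read
   after trim (DRAT); word 69 bit 5 = trimmed ranges read back as zero
   (RZAT). Verify against the standard before relying on them. */
bool trim_supported(const uint16_t id[256])     { return (id[169] & 0x0001) != 0; }
bool trim_deterministic(const uint16_t id[256]) { return (id[69] & (1u << 14)) != 0; }
bool trim_reads_zero(const uint16_t id[256])    { return (id[69] & (1u << 5)) != 0; }
```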
In certain embodiments, the disk trim command may take the form of a command with a field or bit set to a certain value: e.g. an ATA trim command is a DATA SET MANAGEMENT command with the trim bit set. Other command sets and standards may use similar techniques.
In certain embodiments, the disk trim command may take the form of a command (or command structure, data structure, subroutine call, function call, etc.) issued at a higher level (e.g. within the OS, within a device driver, etc.) that may then be converted to a lower-level disk command. For example, in the Linux OS, a discard at the block device level may be translated to a disk trim command (or equivalent command) directed at a storage device. The translation or conversion of one form of disk trim command (or equivalent command) to another form of disk trim command (or equivalent command) at the same or at a different layer of storage hierarchy may take place in software (e.g. host system OS, storage management systems, firmware, device drivers, frameworks such as LSI Fusion-MPT, etc.) or may be performed by Storage Array Controller 108 (in software, hardware, firmware, or in a combination of these). Of course during the translation or conversion, in various alternative embodiments, it may be necessary and/or advantageous to convert or translate one or more disk trim commands (or equivalent commands) as follows: (a) one or more commands to a single command; (b) one command to one or more commands; (c) convert between command types (e.g. between standards, to/from non-standard, to/from proprietary, different forms, different formats, etc.); (d) translate one or more bits or fields in command(s); (e) merge one or more commands; (f) accumulate commands; (g) aggregate commands; (h) alter, delete, insert or modify commands; (i) alter the timing of commands (e.g. with respect to other disk trim commands, with respect to other storage commands, etc.); (j) modify the form (e.g. insert, change, delete, re-order, accumulate, aggregate, combine, etc.) and function (e.g., purpose, content, format, timing, relative timing, dependence, etc.) of one or more related storage commands; (k) one or more of any of these in combination.
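As a concrete example of the higher-level form, the following C sketch issues a block-device discard on Linux using the BLKDISCARD ioctl, which the block layer translates to the device's native command (ATA DATA SET MANAGEMENT/TRIM, SCSI UNMAP, etc.) where supported; the device path and range are examples only.

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>   /* BLKDISCARD */

int main(void)
{
    int fd = open("/dev/sdb", O_WRONLY);   /* example device only */
    if (fd < 0) { perror("open"); return 1; }

    /* Byte offset and byte length of the region to discard. */
    uint64_t range[2] = { 0, 1 << 20 };
    if (ioctl(fd, BLKDISCARD, &range) < 0)
        perror("BLKDISCARD");

    close(fd);
    return 0;
}
```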
A disk trim command may also take the form of a different storage command that is used or re-used (e.g. re-purposed, re-targeted, etc.) as a disk trim command. For example, many OSs (or storage management systems, firmware, device drivers, frameworks such as LSI Fusion-MPT, etc.) allow regions of storage to be managed (e.g. maintaining disk quotas, thin provisioning, etc.). In order to provide thin provisioning in a storage array, for example, physical storage is allocated to virtual LUNs on demand and is deallocated (or freed; we will not use the terms deallocate or deallocation, to avoid confusion with their use in the context of memory) when no longer required. Physical storage is typically allocated in large chunks (hundreds of kBytes). If the storage array uses SCSI storage devices or a SCSI bus or framework, then a T10 SCSI UNMAP command may allow one or more of the allocated chunks to be freed. A discard command (or other equivalent command) from the OS may be converted to an UNMAP command (or other equivalent command) and effectively used as a disk trim command. Thus, storage management may often require functions similar to disk trim to be performed (discarding unwanted data, freeing flash blocks, deallocating or freeing storage space, etc.). Thus, in certain embodiments, one or more standard commands (e.g. T10 SCSI WRITE SAME command, T10 SCSI UNMAP command, etc.) may be used to perform a function similar or identical to one or more disk trim commands as described here.
In certain embodiments, the disk trim command may be proprietary (e.g. non-standard, unique to a particular vendor, etc.) to one or more storage devices in a storage array. For example, some SSDs may have a unique command to free data. Thus, in certain other embodiments, one or more non-standard commands may be used to perform a function similar or identical to the disk trim commands used in the described examples.
Thus, in certain embodiments, one or more standard commands (e.g. T10 SCSI WRITE SAME command, T10 SCSI UNMAP command, Linux discard, ATA trim command, etc.) or other similar non-standard commands (proprietary commands, private functions and calls, hidden fields and bits, etc.) may be used to perform a function similar or identical to one or more disk trim commands of different form, format, type, or function (e.g. ATA trim command, Linux discard, etc.).
Storage Arrays
A storage array controller performs certain functions instead of (or in addition to) an OS running on a host system; and a storage array controller also performs certain functions instead of (or in addition to) the SSD controller(s) in a storage array. A storage array controller is logically located between a host system and an SSD. An SSD contains its own SSD controller, but a storage array controller may have more resources than an SSD controller. The algorithms described here allow a storage array controller to use resources, such as larger memory size, non-volatile memory, etc. as well as unique information (because a storage array controller is higher than an SSD controller in a storage array hierarchy, i.e. further from the storage devices) in order to manage and control a storage array as well as provide information to an SSD controller. For example, a storage array controller is aware of LUNs but an SSD controller is not. This hierarchical management approach has other advantages and potential uses that are explained throughout this description in the forms of various algorithms that may be employed by themselves or in combination.
Note that the various possibilities for the storage array configuration(s), storage bus(es), and storage device(s) will not necessarily be shown in all of the figures, in order to simplify the description.
Garbage Collection
In the context of solid-state storage, typically flash memory, when a flash page (or some other portion, block, section, subset, etc.) of a storage device is no longer required (i.e. it is obsolete, no longer valid, is invalid, etc.) that flash page is marked as dirty. When an entire flash block (typically, in the 2010 timeframe, between 16 and 256 flash pages) is dirty, the entire flash block is erased and free space reclaimed. If free space on the device is low, a flash block is chosen that has some dirty flash pages and some clean (i.e. pages that are not dirty, are good, valid, etc.) flash pages. The clean flash pages are transferred (i.e. written, moved, copied, etc.) to a new flash block. All the original clean flash pages are marked as dirty and the old flash block is erased. In the context of solid-state storage, this process of transferring flash pages to new flash blocks and erasing old flash blocks is called garbage collection. The exact technique used for garbage collection, well-known to someone skilled in the art, is not a key part of the algorithms, embodiments and examples that are described here. One basic idea is that garbage collection, in certain embodiments, may be performed by the storage array controller.
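The following C sketch illustrates one garbage-collection step as just described; the in-memory block structure and the memset standing in for a flash erase are simplifications for illustration.

```c
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

#define PAGES_PER_BLOCK 64      /* the text cites 16 to 256 pages per block */
#define PAGE_SIZE       4096

struct flash_block {
    bool dirty[PAGES_PER_BLOCK];                    /* true = page obsolete */
    unsigned char data[PAGES_PER_BLOCK][PAGE_SIZE];
};

/* One garbage-collection step: copy the clean pages of a partly dirty
   block into a fresh block (assumed to have room), mark the originals
   dirty, then erase the old block so its space can be reclaimed. */
void gc_block(struct flash_block *victim, struct flash_block *fresh,
              size_t *next_free_page)
{
    for (size_t p = 0; p < PAGES_PER_BLOCK; p++) {
        if (!victim->dirty[p]) {
            memcpy(fresh->data[(*next_free_page)++], victim->data[p], PAGE_SIZE);
            victim->dirty[p] = true;   /* original copy is now obsolete */
        }
    }
    memset(victim, 0, sizeof *victim); /* stands in for the flash erase */
}
```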
Sequentializing Writes
A storage command is more commonly called a disk command or just command, a term we will avoid using in isolation to avoid confusion. To avoid such confusion we will use storage command when we are talking about commands in general; but we will save disk command (or disk write, etc.) for the command as it arrives at (or is received by) the disk (either SSD or HDD, usually via a standard interface or storage bus, e.g. SATA, SCSI, etc.); we will use the term host command (or host write, etc.) for the command as it leaves (or is transmitted by) the OS. A disk command may be the same as a host command, for example, when there is a direct connection between the OS on a host system and a single disk (or other storage device). In the presence of Storage Array Controller 108, the host commands are translated (e.g. converted, re-formatted, etc.) to disk commands by Storage Array Controller 108. We also say the Storage Array Controller 108 creates a sequence (e.g. string, chain, collection, group, one or more, etc.) of disk commands (e.g. for reads, writes, control commands, query commands, status commands, etc.) by using the sequence of host commands (e.g. extracts, translates fields and formats in commands, inserts write data, etc.). (Note that a sequence of commands does not imply sequential writes.) The Storage Array Controller 108 also creates (e.g. translates, converts, etc.) responses (or read responses, query responses, status responses, error responses, etc.) from a storage device to the host system (e.g. creates fields and formats in packets, inserts read data, inserts information, inserts status, inserts error codes, etc.).
Other Embodiments of Filesystems, Maps etc.
In Microsoft Windows, for example, the translation of filename to start sector is in the master file table (MFT) or file allocation table (FAT). In Microsoft Windows, the LBA and HBA corresponding to zero and low-numbered addresses are boot sectors (and thus not used for data) and in our description we have ignored this. In several examples and figures, in order to simplify and clarify our explanations, we have used example sectors whose addresses would normally correspond to boot sectors.
As another example of an area where we were forced to simplify our explanations for clarity, the NT filesystem (or NT file system, NTFS) in a Windows OS uses a more complex map than we have shown here.
Thus an offset into a file is mapped to File Virtual Clusters which are mapped to File Logical Clusters which are mapped to the Physical Disk Clusters. If, for example, a file test.txt were fragmented into four pieces, the MFT record for test.txt will contain four VCNs (1, 2, 3, 4). If the first run begins at cluster 997 and includes clusters 998 and 999, the MFT will list VCN=1, LCN=997, clusters=3. The three other runs in the file would be addressed similarly. Thus the mapping of Windows is, as we explained, more complex than the simple map we have shown.
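The following C sketch illustrates the VCN-to-LCN run lookup just described, using the test.txt example; the structure layout is illustrative and is not the on-disk MFT format.

```c
#include <stddef.h>
#include <stdint.h>

/* One run from an MFT record: 'clusters' contiguous logical (on-disk)
   clusters starting at 'lcn' hold the file's virtual clusters starting
   at 'vcn'. Illustrative layout, not the on-disk MFT format. */
struct run { uint64_t vcn, lcn, clusters; };

#define LCN_NOT_FOUND UINT64_MAX

/* Map a virtual cluster number to its logical cluster number. */
uint64_t vcn_to_lcn(const struct run *runs, size_t n, uint64_t vcn)
{
    for (size_t i = 0; i < n; i++)
        if (vcn >= runs[i].vcn && vcn < runs[i].vcn + runs[i].clusters)
            return runs[i].lcn + (vcn - runs[i].vcn);
    return LCN_NOT_FOUND;
}

/* First run of the fragmented test.txt example: VCN=1, LCN=997,
   clusters=3, so VCN 2 maps to LCN 998 and VCN 3 to LCN 999. */
static const struct run test_txt_runs[] = { { 1, 997, 3 } };
```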
Garbage Collection with an Unused Portion of a Disk
In alternative embodiments: (a) Storage Capacity C3 1106 may be spread across one or more Solid-State Disk 218 in Storage Array 148; (b) one area corresponding to Storage Capacity C3 1106 on a single Solid-State Disk 218 may be used by Storage Array Controller 108 for garbage collection across one or more Solid-State Disk 218 in Storage Array 148; (c) Storage Capacity C3 1106 may be varied according to (i) size of Storage Array 148; (ii) performance required from Solid-State Disk 218 in Storage Array 148; (iii) performance required from Storage Array 148; (iv) performance and/or other characteristics of Storage Array 148; (v) size of Solid-State Disk 218; (d) Storage Capacity C3 1106 may be varied dynamically based on space available or other criteria (performance, load, data patterns, use, etc.); (e) Storage Capacity C3 1106 may be varied according to the wear-level of Solid-State Disk 218 (e.g. length of use and/or service, amount of data written, etc.).
Write Sequentialization in a RAID Array
RAID (Redundant Array of Independent Disks or Redundant Array of Inexpensive Disks) provides increased storage reliability through redundancy by combining multiple disk drives in a storage array into a disk group or volume group. RAID distributes data across multiple disk drives, but the storage array is addressed by the operating system as a single disk. The segment size is the amount of data written to a single drive in a physical (or virtual) disk group before writing data to the next disk drive in the disk group. A set of contiguous data segments that spans all members of the disk group is a stripe.
A RAID-5 storage array uses sector-level striping with parity information distributed across all member disks. RAID 5 may be implemented in: (a) the disk controller; (b) a storage array controller (or other types of RAID cards with onboard processors, etc.); (c) operating systems independently of the disk controller, RAID controller, etc. (known as software RAID), e.g. Microsoft Windows Dynamic Disks, Linux and RAID, RAID-Z, etc.; (d) the system CPU in conjunction with a RAID controller (a form of software RAID); (e) a combination of one or more of these.
For example, in a RAID 5 storage array with 5 member disks in a disk group (4 data+1 parity) using a segment size of 128 kBytes, the first 128 kBytes of a large write command are written to the first drive, the next 128 kBytes to the next drive, and so on. The stripe size is the length of a stripe, in this case 512 kBytes (4 times 128 kBytes). (Sometimes stripe and stripe size are used or defined to include the parity information, but we will not use this alternative definition.) For a RAID 1 array, with a 2+2 disk group, 128 kBytes would be written to each of two drives and the same data to the mirrored drives. If the write command size is larger than the stripe size, writes repeat until all data is written to the storage array.
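The following C sketch shows the segment arithmetic of this example (4 data + 1 parity, 128-kByte segments); the rotation of the parity segment among the member disks in RAID 5 is omitted for simplicity.

```c
#include <stdint.h>

#define SEGMENT_BYTES (128 * 1024)                   /* per-drive segment  */
#define DATA_DRIVES   4                              /* 4 data + 1 parity  */
#define STRIPE_BYTES  (SEGMENT_BYTES * DATA_DRIVES)  /* 512 kBytes         */

/* For a byte offset into the array's data, compute the stripe number, the
   data drive within the stripe, and the offset within that drive's
   segment. RAID 5 parity rotation is omitted from this sketch. */
void locate(uint64_t offset, uint64_t *stripe, unsigned *drive,
            uint64_t *seg_off)
{
    *stripe  = offset / STRIPE_BYTES;
    *drive   = (unsigned)((offset % STRIPE_BYTES) / SEGMENT_BYTES);
    *seg_off = offset % SEGMENT_BYTES;
}
```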
A full write, which corresponds to writing an entire stripe of data, is the most efficient type of write that can be performed. A partial write occurs when less than a full stripe of data is modified and written. In RAID levels 5 and 6 the situation is complex, as parity data must be recalculated and written for a whole stripe. For small writes, this requires a multi-step process to: (a) read the old data segments and read the old parity segment; (b) compare the old data segments with the write data and modify the parity segment; (c) write the new data segments and write the new parity segment. This read-modify-write (RMW) multi-step process for writes smaller than the stripe size is inefficient and slow. An RMW also increases the number of disk writes compared to a full write and thus increases wear on flash memory in an SSD, for example.
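The parity recalculation in steps (a) to (c) reduces to a byte-wise XOR, as the following C sketch shows; this is why a partial write costs extra reads and writes compared to a full write, where parity may be computed from the write data alone.

```c
#include <stddef.h>
#include <stdint.h>

/* Read-modify-write parity update for a sub-stripe write:
   new parity = old parity XOR old data XOR new data, per byte. */
void rmw_parity(uint8_t *parity, const uint8_t *old_data,
                const uint8_t *new_data, size_t len)
{
    for (size_t i = 0; i < len; i++)
        parity[i] ^= old_data[i] ^ new_data[i];
}
```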
One weakness of RAID-5 systems occurs in the event of a failure while there are pending writes (potential lost writes). In such a failure event the parity information may become inconsistent with the associated data in a stripe. Data loss can then occur if a disk fails and the inconsistent parity information is used to reconstruct the missing data from the failed disk. This vulnerability is known as a write hole. A battery-backed write cache (BBWC) is commonly used in a RAID controller to reduce the probability of this failure mechanism occurring.
In still other embodiments: (a) a RAID array may be formed using one or more portions of a single disk or storage device (SSD, HDD, etc.); (b) the Storage Array 148 may use nested RAID, i.e. (i) one RAID level within one or more disks (intra-disk RAID) and (ii) the same or different RAID level across disks (inter-disk RAID); (c) disks may be used in a disk pool that is a subset of Storage Array 148, so that a stripe is not across all disks in the array; (d) the Storage Array Controller 108 may be used to reduce or eliminate any write-hole weakness or vulnerability in a RAID array by using onboard DRAM, flash, or other means to prevent or reduce the probability of lost writes or inconsistent parity information, etc.; (e) one or more alternative RAID levels or configurations may be used (including more than one configuration or level in combination) in Storage Array 148 (or portions of Storage Array 148) such as: (i) combinations of RAID levels (e.g. RAID 50, RAID 10, etc.); (ii) distributed parity; (iii) non-distributed parity; (iv) left asymmetric algorithm, left symmetric algorithm, right asymmetric algorithm, right symmetric algorithm (these algorithms are well known to one skilled in the art), etc.; (f) a mixture of different storage devices with different size or type may be used in Storage Array 148 (e.g. zero, one or more SSD; zero, one or more HDD; zero, one or more other storage devices, etc.); (g) Storage Array Controller 108 may use an SSD (or onboard memory, cache, other storage device, etc.) as storage cache (or other form of staging storage or cache) for Storage Array 148; (h) the embodiment described in (g) or other embodiments may also have the Storage Array 148 (or portions of Storage Array 148) be protected (by RAID or other means); (i) combinations of one or more of these and/or previously described alternative embodiments.
Writes of Varying Lengths
Write Aggregation
In previous figures and in the text we have often simplified the drawings and examples, in order to clarify the description, and ignored a complication that may often occur in practice: if one were to observe write commands on a bus, host write commands would often be seen to over-write each other. Storage Array Controller 108 may collapse or aggregate these over-writing host commands into fewer disk writes.
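The following C sketch illustrates such collapsing with a fixed-size pending-write buffer; the structure and sizes are assumptions for illustration.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE  512
#define MAX_PENDING 64

/* A minimal write-aggregation buffer: a later host write to the same HBA
   replaces the pending data instead of producing a second disk write. */
struct pending { uint64_t hba; uint8_t data[BLOCK_SIZE]; };
static struct pending buf[MAX_PENDING];
static size_t n_pending;

void host_write(uint64_t hba, const uint8_t *data)
{
    for (size_t i = 0; i < n_pending; i++)
        if (buf[i].hba == hba) {                /* over-write: collapse */
            memcpy(buf[i].data, data, BLOCK_SIZE);
            return;
        }
    buf[n_pending].hba = hba;                   /* assumes buffer not full */
    memcpy(buf[n_pending++].data, data, BLOCK_SIZE);
}
```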
Balanced Writes
Data Scrubbing
Data Manipulation
Data Streaming
Write Commit
Conclusion
Numerous variations, combinations, and modifications based on the above description will become apparent to someone with skill in the art once the above description is fully understood. It is intended that the claims that follow be interpreted to embrace all such variations, combinations, and modifications.
This application is a continuation-in-part of U.S. non-provisional patent application Ser. No. 12/876,393, filed Sep. 7, 2010. U.S. non-provisional patent application Ser. No. 12/876,393 is incorporated herein by reference in its entirety and in its description includes information (e.g. definitions of special terms, illustrative examples, data, other information, etc.) that may be relevant to this application. If any definitions (e.g. figure reference signs, specialized terms, examples, data, information, etc.) from any related material (e.g. parent application, other related application, material incorporated by reference, material cited, extrinsic reference, etc.) conflict with this application (e.g. abstract, description, summary, claims, etc.) for any purpose (e.g. prosecution, claim support, claim interpretation, claim construction, etc.), then the definitions in this application shall apply.
Number | Name | Date | Kind |
---|---|---|---|
4506323 | Pusic et al. | Mar 1985 | A |
4598357 | Swenson et al. | Jul 1986 | A |
4779189 | Legvold et al. | Oct 1988 | A |
4805098 | Mills et al. | Feb 1989 | A |
4875155 | Iskiyan et al. | Oct 1989 | A |
4916605 | Beardsley et al. | Apr 1990 | A |
4920536 | Hammond et al. | Apr 1990 | A |
4987533 | Clark et al. | Jan 1991 | A |
5333143 | Blaum et al. | Jul 1994 | A |
5390186 | Murata et al. | Feb 1995 | A |
5404500 | Legvold et al. | Apr 1995 | A |
5410667 | Belsan et al. | Apr 1995 | A |
5418921 | Cortney et al. | May 1995 | A |
5437022 | Beardsley et al. | Jul 1995 | A |
5526511 | Swenson et al. | Jun 1996 | A |
5542066 | Mattson et al. | Jul 1996 | A |
5544343 | Swenson et al. | Aug 1996 | A |
5551003 | Mattson et al. | Aug 1996 | A |
5568628 | Satoh et al. | Oct 1996 | A |
5581724 | Belsan et al. | Dec 1996 | A |
5596736 | Kerns et al. | Jan 1997 | A |
5627990 | Cord et al. | May 1997 | A |
5634109 | Chen et al. | May 1997 | A |
5636359 | Beardsley et al. | Jun 1997 | A |
5640530 | Beardsley et al. | Jun 1997 | A |
5682527 | Cooper et al. | Oct 1997 | A |
5694570 | Beardsley et al. | Dec 1997 | A |
5696932 | Smith et al. | Dec 1997 | A |
5715424 | Jesionowski et al. | Feb 1998 | A |
5717884 | Gzym et al. | Feb 1998 | A |
5717888 | Candelaria et al. | Feb 1998 | A |
5754888 | Yang et al. | May 1998 | A |
5774682 | Benhase et al. | Jun 1998 | A |
5778426 | DeKoning et al. | Jul 1998 | A |
5790828 | Jost et al. | Aug 1998 | A |
5802557 | Vishlitzky et al. | Sep 1998 | A |
5813032 | Bhargava et al. | Sep 1998 | A |
5815656 | Candelaria et al. | Sep 1998 | A |
5822781 | Wells et al. | Oct 1998 | A |
5887199 | Ofer et al. | Mar 1999 | A |
5892978 | Munguia et al. | Apr 1999 | A |
5893164 | Legg et al. | Apr 1999 | A |
5900009 | Vishlitzky et al. | May 1999 | A |
5930481 | Benhase et al. | Jul 1999 | A |
5949970 | Sipple et al. | Sep 1999 | A |
6003114 | Bachmat et al. | Dec 1999 | A |
6006342 | Beardsley et al. | Dec 1999 | A |
6016530 | Auclair et al. | Jan 2000 | A |
6029229 | Vishlitzky et al. | Feb 2000 | A |
6052822 | Kim et al. | Apr 2000 | A |
6073209 | Bergsten et al. | Jun 2000 | A |
6098191 | Yamamoto et al. | Aug 2000 | A |
6101588 | Farley et al. | Aug 2000 | A |
6119209 | Bauman et al. | Sep 2000 | A |
6141731 | Beardsley et al. | Oct 2000 | A |
6157991 | Arnon et al. | Dec 2000 | A |
6189080 | Ofer et al. | Feb 2001 | B1 |
6192450 | Bauman et al. | Feb 2001 | B1 |
6219289 | Satoh et al. | Apr 2001 | B1 |
6243795 | Yang et al. | Jun 2001 | B1 |
6256705 | Li et al. | Jul 2001 | B1 |
6272662 | Jadav et al. | Aug 2001 | B1 |
6275897 | Bachmat et al. | Aug 2001 | B1 |
6311252 | Raz et al. | Oct 2001 | B1 |
6330655 | Vishlitzky et al. | Dec 2001 | B1 |
6332197 | Jadav et al. | Dec 2001 | B1 |
6336164 | Gerdt et al. | Jan 2002 | B1 |
6341331 | McNutt et al. | Jan 2002 | B1 |
6370534 | Odom et al. | Apr 2002 | B1 |
6393426 | Odom et al. | May 2002 | B1 |
6408370 | Yamamoto et al. | Jun 2002 | B2 |
6425050 | Beardsley et al. | Jul 2002 | B1 |
6427184 | Kaneko et al. | Jul 2002 | B1 |
6442659 | Blumenau et al. | Aug 2002 | B1 |
6446141 | Nolan et al. | Sep 2002 | B1 |
6463503 | Jones et al. | Oct 2002 | B1 |
6467022 | Buckland et al. | Oct 2002 | B1 |
6473830 | Li et al. | Oct 2002 | B2 |
6480936 | Ban et al. | Nov 2002 | B1 |
6484234 | Kedem et al. | Nov 2002 | B1 |
6490664 | Jones et al. | Dec 2002 | B1 |
6513097 | Beardsley et al. | Jan 2003 | B1 |
6513102 | Garrett et al. | Jan 2003 | B2 |
6516320 | Odom et al. | Feb 2003 | B1 |
6567888 | Kedem et al. | May 2003 | B2 |
6587921 | Chiu et al. | Jul 2003 | B2 |
6591335 | Sade et al. | Jul 2003 | B1 |
6594726 | Vishlitzky et al. | Jul 2003 | B1 |
6604171 | Sade et al. | Aug 2003 | B1 |
6615318 | Jarvis et al. | Sep 2003 | B2 |
6615332 | Yamamoto et al. | Sep 2003 | B2 |
6629199 | Vishlitzky et al. | Sep 2003 | B1 |
6654830 | Taylor et al. | Nov 2003 | B1 |
6658542 | Beardsley et al. | Dec 2003 | B2 |
6684289 | Gonzalez et al. | Jan 2004 | B1 |
6704837 | Beardsley et al. | Mar 2004 | B2 |
6757790 | Chalmer et al. | Jun 2004 | B2 |
6766414 | Francis et al. | Jul 2004 | B2 |
6775738 | Ash et al. | Aug 2004 | B2 |
6782444 | Vishlitzky et al. | Aug 2004 | B1 |
6785771 | Ash et al. | Aug 2004 | B2 |
6842843 | Vishlitzky et al. | Jan 2005 | B1 |
6857050 | Lee et al. | Feb 2005 | B2 |
6865648 | Naamad et al. | Mar 2005 | B1 |
6871272 | Butterworth et al. | Mar 2005 | B2 |
6948009 | Jarvis et al. | Sep 2005 | B2 |
6948033 | Ninose et al. | Sep 2005 | B2 |
6957302 | Fairchild et al. | Oct 2005 | B2 |
6965979 | Burton et al. | Nov 2005 | B2 |
6993627 | Butterworth et al. | Jan 2006 | B2 |
6996690 | Nakamura et al. | Feb 2006 | B2 |
7007196 | Lee et al. | Feb 2006 | B2 |
7024530 | Jarvis et al. | Apr 2006 | B2 |
7032065 | Gonzalez et al. | Apr 2006 | B2 |
7051174 | Ash et al. | May 2006 | B2 |
7055009 | Factor et al. | May 2006 | B2 |
7058764 | Bearden et al. | Jun 2006 | B2 |
7080207 | Bergsten et al. | Jul 2006 | B2 |
7080208 | Ashmore et al. | Jul 2006 | B2 |
7080232 | Aasheim et al. | Jul 2006 | B2 |
7085892 | Martinez et al. | Aug 2006 | B2 |
7085907 | Ash et al. | Aug 2006 | B2 |
7089357 | Ezra et al. | Aug 2006 | B1 |
7103717 | Abe et al. | Sep 2006 | B2 |
7107403 | Modha et al. | Sep 2006 | B2 |
7124128 | Springer et al. | Oct 2006 | B2 |
7124243 | Burton et al. | Oct 2006 | B2 |
7130956 | Rao et al. | Oct 2006 | B2 |
7130957 | Rao et al. | Oct 2006 | B2 |
7136966 | Hetrick et al. | Nov 2006 | B2 |
7139933 | Hsu et al. | Nov 2006 | B2 |
7159139 | Vishlitzky et al. | Jan 2007 | B2 |
7171513 | Gonzalez et al. | Jan 2007 | B2 |
7171516 | Lowe et al. | Jan 2007 | B2 |
7171610 | Ash et al. | Jan 2007 | B2 |
7191207 | Blount et al. | Mar 2007 | B2 |
7191303 | Yamamoto et al. | Mar 2007 | B2 |
7191306 | Myoung et al. | Mar 2007 | B2 |
7213110 | Nakayama et al. | May 2007 | B2 |
7216199 | Mizuno et al. | May 2007 | B2 |
7216208 | Yamamoto et al. | May 2007 | B2 |
7253981 | Ng et al. | Aug 2007 | B2 |
7254686 | Islam et al. | Aug 2007 | B2 |
7266653 | Tross et al. | Sep 2007 | B2 |
7269690 | Abe et al. | Sep 2007 | B2 |
7275134 | Yang et al. | Sep 2007 | B2 |
7293048 | Cochran et al. | Nov 2007 | B2 |
7293137 | Factor et al. | Nov 2007 | B2 |
7299411 | Blair et al. | Nov 2007 | B2 |
7318118 | Chu et al. | Jan 2008 | B2 |
7360019 | Abe et al. | Apr 2008 | B2 |
7366846 | Boyd et al. | Apr 2008 | B2 |
7380058 | Kanai et al. | May 2008 | B2 |
7380059 | Burton et al. | May 2008 | B2 |
7395377 | Gill et al. | Jul 2008 | B2 |
7411757 | Chu et al. | Aug 2008 | B2 |
7421535 | Jarvis et al. | Sep 2008 | B2 |
7421552 | Long et al. | Sep 2008 | B2 |
7426623 | Lasser et al. | Sep 2008 | B2 |
7437515 | Naamad et al. | Oct 2008 | B1 |
7447843 | Ishikawa et al. | Nov 2008 | B2 |
7454656 | Okada et al. | Nov 2008 | B2 |
7464221 | Nakamura et al. | Dec 2008 | B2 |
7472222 | Auerbach et al. | Dec 2008 | B2 |
7496714 | Gill et al. | Feb 2009 | B2 |
7500050 | Gill et al. | Mar 2009 | B2 |
7539815 | Zohar et al. | May 2009 | B2 |
7543109 | Bell et al. | Jun 2009 | B1 |
7565485 | Factor et al. | Jul 2009 | B2 |
7574556 | Gill et al. | Aug 2009 | B2 |
7577787 | Yochai et al. | Aug 2009 | B1 |
7581063 | Factor et al. | Aug 2009 | B2 |
7594023 | Gemmell | Sep 2009 | B2 |
7624229 | Longinov et al. | Nov 2009 | B1 |
7627714 | Ash et al. | Dec 2009 | B2 |
7650480 | Jiang | Jan 2010 | B2 |
7657707 | Yamamoto et al. | Feb 2010 | B2 |
7660948 | Bates et al. | Feb 2010 | B2 |
7676633 | Fair et al. | Mar 2010 | B1 |
7680982 | Ash et al. | Mar 2010 | B2 |
7689769 | Bates et al. | Mar 2010 | B2 |
7689869 | Terashita et al. | Mar 2010 | B2 |
8055938 | Chatterjee et al. | Nov 2011 | B1 |
8214607 | Williams | Jul 2012 | B2 |
8275933 | Flynn et al. | Sep 2012 | B2 |
20010029570 | Yamamoto et al. | Oct 2001 | A1 |
20020004885 | Francis et al. | Jan 2002 | A1 |
20020032835 | Li et al. | Mar 2002 | A1 |
20020035666 | Beardsley et al. | Mar 2002 | A1 |
20020073277 | Butterworth et al. | Jun 2002 | A1 |
20020073285 | Butterworth et al. | Jun 2002 | A1 |
20020124138 | Garrett et al. | Sep 2002 | A1 |
20020129202 | Yamamoto et al. | Sep 2002 | A1 |
20020194429 | Chiu et al. | Dec 2002 | A1 |
20030028724 | Kedem et al. | Feb 2003 | A1 |
20030037204 | Ash et al. | Feb 2003 | A1 |
20030070041 | Beardsley et al. | Apr 2003 | A1 |
20030105928 | Ash et al. | Jun 2003 | A1 |
20030140198 | Ninose et al. | Jul 2003 | A1 |
20030149843 | Jarvis et al. | Aug 2003 | A1 |
20030158999 | Hauck et al. | Aug 2003 | A1 |
20030159001 | Chalmer et al. | Aug 2003 | A1 |
20030167252 | Odom et al. | Sep 2003 | A1 |
20030204677 | Bergsten et al. | Oct 2003 | A1 |
20030225948 | Jarvis et al. | Dec 2003 | A1 |
20030229767 | Lee et al. | Dec 2003 | A1 |
20030229826 | Lee et al. | Dec 2003 | A1 |
20030233613 | Ash et al. | Dec 2003 | A1 |
20040019740 | Nakayama et al. | Jan 2004 | A1 |
20040049638 | Ashmore et al. | Mar 2004 | A1 |
20040059870 | Ash et al. | Mar 2004 | A1 |
20040085849 | Myoung et al. | May 2004 | A1 |
20040088484 | Yamamoto et al. | May 2004 | A1 |
20040123028 | Kanai et al. | Jun 2004 | A1 |
20040133855 | Blair et al. | Jul 2004 | A1 |
20040148486 | Burton et al. | Jul 2004 | A1 |
20040181639 | Jarvis et al. | Sep 2004 | A1 |
20040181640 | Factor et al. | Sep 2004 | A1 |
20040186968 | Factor et al. | Sep 2004 | A1 |
20040199737 | Yamamoto et al. | Oct 2004 | A1 |
20040205296 | Bearden et al. | Oct 2004 | A1 |
20040205297 | Bearden et al. | Oct 2004 | A1 |
20040230737 | Burton et al. | Nov 2004 | A1 |
20040255026 | Blount et al. | Dec 2004 | A1 |
20040260735 | Martinez et al. | Dec 2004 | A1 |
20040260882 | Martinez et al. | Dec 2004 | A1 |
20040267706 | Springer et al. | Dec 2004 | A1 |
20040267902 | Yang et al. | Dec 2004 | A1 |
20050005188 | Hsu et al. | Jan 2005 | A1 |
20050021906 | Nakamura et al. | Jan 2005 | A1 |
20050071549 | Tross et al. | Mar 2005 | A1 |
20050071550 | Lowe et al. | Mar 2005 | A1 |
20050071599 | Modha et al. | Mar 2005 | A1 |
20050102553 | Cochran et al. | May 2005 | A1 |
20050120168 | Vishlitzky et al. | Jun 2005 | A1 |
20050177687 | Rao et al. | Aug 2005 | A1 |
20050193240 | Ash et al. | Sep 2005 | A1 |
20050198411 | Bakke et al. | Sep 2005 | A1 |
20050228941 | Abe et al. | Oct 2005 | A1 |
20050228954 | Factor et al. | Oct 2005 | A1 |
20050240809 | Ash et al. | Oct 2005 | A1 |
20050251628 | Jarvis et al. | Nov 2005 | A1 |
20050270843 | Koren et al. | Dec 2005 | A1 |
20050273555 | Factor et al. | Dec 2005 | A1 |
20060004957 | Hand et al. | Jan 2006 | A1 |
20060020855 | Okada et al. | Jan 2006 | A1 |
20060031639 | Benhase et al. | Feb 2006 | A1 |
20060069888 | Martinez et al. | Mar 2006 | A1 |
20060080501 | Auerbach et al. | Apr 2006 | A1 |
20060106891 | Mahar et al. | May 2006 | A1 |
20060161700 | Boyd et al. | Jul 2006 | A1 |
20060168403 | Kolovson et al. | Jul 2006 | A1 |
20060184740 | Ishikawa et al. | Aug 2006 | A1 |
20060224849 | Rezaul et al. | Oct 2006 | A1 |
20060265568 | Burton et al. | Nov 2006 | A1 |
20060294301 | Zohar et al. | Dec 2006 | A1 |
20070033356 | Erlikhman | Feb 2007 | A1 |
20070038833 | Yamamoto et al. | Feb 2007 | A1 |
20070050571 | Nakamura et al. | Mar 2007 | A1 |
20070094446 | Sone et al. | Apr 2007 | A1 |
20070118695 | Lowe et al. | May 2007 | A1 |
20070180328 | Cornwell et al. | Aug 2007 | A1 |
20070186047 | Jarvis et al. | Aug 2007 | A1 |
20070220200 | Gill et al. | Sep 2007 | A1 |
20070220201 | Gill et al. | Sep 2007 | A1 |
20070245080 | Abe et al. | Oct 2007 | A1 |
20070250660 | Gill et al. | Oct 2007 | A1 |
20070260846 | Burton et al. | Nov 2007 | A1 |
20070288710 | Boyd et al. | Dec 2007 | A1 |
20070300008 | Rogers et al. | Dec 2007 | A1 |
20080010411 | Yang et al. | Jan 2008 | A1 |
20080021853 | Modha et al. | Jan 2008 | A1 |
20080024899 | Chu et al. | Jan 2008 | A1 |
20080040553 | Ash et al. | Feb 2008 | A1 |
20080052456 | Ash et al. | Feb 2008 | A1 |
20080065669 | Factor et al. | Mar 2008 | A1 |
20080071971 | Kim et al. | Mar 2008 | A1 |
20080071993 | Factor et al. | Mar 2008 | A1 |
20080091875 | Mannenbach et al. | Apr 2008 | A1 |
20080104193 | Mimatsu et al. | May 2008 | A1 |
20080126708 | Gill et al. | May 2008 | A1 |
20080137284 | Flynn et al. | Jun 2008 | A1 |
20080140724 | Flynn et al. | Jun 2008 | A1 |
20080140909 | Flynn et al. | Jun 2008 | A1 |
20080140910 | Flynn et al. | Jun 2008 | A1 |
20080140932 | Flynn et al. | Jun 2008 | A1 |
20080141043 | Flynn et al. | Jun 2008 | A1 |
20080155190 | Ash et al. | Jun 2008 | A1 |
20080155198 | Factor et al. | Jun 2008 | A1 |
20080168220 | Gill et al. | Jul 2008 | A1 |
20080168234 | Gill et al. | Jul 2008 | A1 |
20080168304 | Flynn et al. | Jul 2008 | A1 |
20080183882 | Flynn et al. | Jul 2008 | A1 |
20080183953 | Flynn et al. | Jul 2008 | A1 |
20080195807 | Kubo et al. | Aug 2008 | A1 |
20080201523 | Ash et al. | Aug 2008 | A1 |
20080225474 | Flynn et al. | Sep 2008 | A1 |
20080229010 | Maeda et al. | Sep 2008 | A1 |
20080229079 | Flynn et al. | Sep 2008 | A1 |
20080250200 | Jarvis et al. | Oct 2008 | A1 |
20080250210 | Ash et al. | Oct 2008 | A1 |
20080256183 | Flynn et al. | Oct 2008 | A1 |
20080256286 | Ash et al. | Oct 2008 | A1 |
20080256292 | Flynn et al. | Oct 2008 | A1 |
20080259418 | Bates et al. | Oct 2008 | A1 |
20080259764 | Bates et al. | Oct 2008 | A1 |
20080259765 | Bates et al. | Oct 2008 | A1 |
20080270692 | Cochran et al. | Oct 2008 | A1 |
20080295102 | Akaike et al. | Nov 2008 | A1 |
20080307192 | Sinclair et al. | Dec 2008 | A1 |
20080313312 | Flynn et al. | Dec 2008 | A1 |
20080313364 | Flynn et al. | Dec 2008 | A1 |
20090043961 | Nakamura et al. | Feb 2009 | A1 |
20090077312 | Miura et al. | Mar 2009 | A1 |
20090113112 | Ye et al. | Apr 2009 | A1 |
20090125671 | Flynn et al. | May 2009 | A1 |
20090132760 | Flynn et al. | May 2009 | A1 |
20090150605 | Flynn et al. | Jun 2009 | A1 |
20090150641 | Flynn et al. | Jun 2009 | A1 |
20090150744 | Flynn et al. | Jun 2009 | A1 |
20090150894 | Huang et al. | Jun 2009 | A1 |
20090157950 | Selinger et al. | Jun 2009 | A1 |
20090168525 | Olbrich et al. | Jul 2009 | A1 |
20090172257 | Prins et al. | Jul 2009 | A1 |
20090172258 | Olbrich et al. | Jul 2009 | A1 |
20090172259 | Prins et al. | Jul 2009 | A1 |
20090172260 | Olbrich et al. | Jul 2009 | A1 |
20090172261 | Prins et al. | Jul 2009 | A1 |
20090172262 | Olbrich et al. | Jul 2009 | A1 |
20090172263 | Olbrich et al. | Jul 2009 | A1 |
20090172308 | Prins et al. | Jul 2009 | A1 |
20090172499 | Olbrich et al. | Jul 2009 | A1 |
20090182932 | Tan et al. | Jul 2009 | A1 |
20090193184 | Yu et al. | Jul 2009 | A1 |
20090198885 | Manoj | Aug 2009 | A1 |
20090204872 | Yu et al. | Aug 2009 | A1 |
20090216944 | Gill et al. | Aug 2009 | A1 |
20090216954 | Benhase et al. | Aug 2009 | A1 |
20090222596 | Flynn et al. | Sep 2009 | A1 |
20090222617 | Yano et al. | Sep 2009 | A1 |
20090222621 | Ash et al. | Sep 2009 | A1 |
20090222643 | Chu et al. | Sep 2009 | A1 |
20090228637 | Moon et al. | Sep 2009 | A1 |
20090228660 | Edwards et al. | Sep 2009 | A1 |
20090259800 | Kilzer et al. | Oct 2009 | A1 |
20090282301 | Flynn et al. | Nov 2009 | A1 |
20090287956 | Flynn et al. | Nov 2009 | A1 |
20090300298 | Ash et al. | Dec 2009 | A1 |
20090300408 | Ash et al. | Dec 2009 | A1 |
20090307416 | Luo et al. | Dec 2009 | A1 |
20090327603 | McKean et al. | Dec 2009 | A1 |
20100017556 | Chin et al. | Jan 2010 | A1 |
20100049902 | Kakihara et al. | Feb 2010 | A1 |
20100082882 | Im et al. | Apr 2010 | A1 |
20100082917 | Yang et al. | Apr 2010 | A1 |
20100082931 | Hatfield et al. | Apr 2010 | A1 |
20100088463 | Im et al. | Apr 2010 | A1 |
20100088482 | Hinz et al. | Apr 2010 | A1 |
20100174853 | Lee et al. | Jul 2010 | A1 |
20100235569 | Nishimoto et al. | Sep 2010 | A1 |
20110060863 | Kimura et al. | Mar 2011 | A1 |
20110093648 | Belluomini et al. | Apr 2011 | A1 |
20110138113 | Leach et al. | Jun 2011 | A1 |
20110145306 | Boyd et al. | Jun 2011 | A1 |
20110231594 | Sugimoto et al. | Sep 2011 | A1 |
20120151130 | Merry et al. | Jun 2012 | A1 |
20120151254 | Horn | Jun 2012 | A1 |
20120198127 | Lan et al. | Aug 2012 | A1 |
Entry |
---|
U.S. Appl. No. 13/894,525, Response dated Aug. 4, 2014. |
Yoon Jae Seong et al, Hydra: A Block-Mapped Parallel Flash Memory Solid State Disk Architecture, IEEE Transactions on Computers, vol. 59, No. 7, Jul. 2010. |
Jeong-Uk Kang et al, A Superblock-based Flash Translation Layer for NAND Flash Memory, Oct. 2006. |
Machine Translation for Korean Application KR1020080127633. Applicant INDILINX Co (Jan. 2007-050237-1). Inventor Su-Gil Jeong. Document Date Dec. 16, 2008. |
Number | Date | Country | |
---|---|---|---|
20120059978 A1 | Mar 2012 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 12876393 | Sep 2010 | US |
Child | 12983876 | US |