The present invention relates generally to memory devices, and particularly to methods and systems for memory over-provisioning.
Several types of memory devices, such as Flash memories, use arrays of analog memory cells for storing data. Each analog memory cell stores a quantity of an analog value, also referred to as a storage value, such as an electrical charge or voltage. This analog value represents the information stored in the cell. In Flash memories, for example, each analog memory cell holds a certain amount of electrical charge. The range of possible analog values is typically divided into intervals, each interval corresponding to one or more data bit values. Data is written to an analog memory cell by writing a nominal analog value that corresponds to the desired bit or bits.
Some memory devices, commonly referred to as Single-Level Cell (SLC) devices, store a single bit of information in each memory cell, i.e., each memory cell can be programmed to assume either of two possible programming levels. Higher-density devices, often referred to as Multi-Level Cell (MLC) devices, store two or more bits per memory cell, i.e., can be programmed to assume more than two possible programming levels.
An embodiment of the present invention that is described herein provides a method for data storage, including:
in a memory that includes multiple memory blocks, specifying at a first time a first over-provisioning overhead, and storing data in the memory while retaining in the memory blocks memory areas, which do not hold valid data and whose aggregated size is at least commensurate with the specified first over-provisioning overhead;
compacting portions of the data from one or more previously-programmed memory blocks containing one or more of the retained memory areas; and
at a second time subsequent to the first time, specifying a second over-provisioning overhead that is different from the first over-provisioning overhead, and continuing to store the data and compact the data portions while complying with the second over-provisioning overhead.
In some embodiments, storing the data at the first time includes encoding the data with an Error Correction Code (ECC) having a given redundancy level and storing the encoded data, and specifying the second over-provisioning overhead includes modifying the given redundancy level of the ECC. In another embodiment, storing the data at the first time includes encoding the data with an Error Detection Code (EDC) having a given size and storing the encoded data, and specifying the second over-provisioning overhead includes modifying the given size of the EDC.
In some embodiments, each memory block includes multiple memory cells, storing the data at the first time includes programming the data at a given number of bits per cell, and specifying the second over-provisioning overhead includes modifying the given number of bits per cell. In an embodiment, modifying the given number of bits per cell includes modifying a number of programming levels that are used for programming the memory cells. In another embodiment, modifying the given number of bits per cell includes modifying a coding rate of an Error Correction Code (ECC) that is used for encoding the data.
In yet another embodiment, storing the data at the first time includes storing N pages in a given memory block, and continuing to store the data at the second time includes storing M pages in the given block, M≠N. In still another embodiment, specifying the first and second over-provisioning overheads includes compressing the data and storing the compressed data at one of the first and second times, and storing the data without compression at the other of the first and second times.
In some embodiments, specifying the second over-provisioning overhead includes evaluating a predefined adaptation criterion with respect to at least some of the memory blocks, and setting the second over-provisioning overhead responsively to meeting the adaptation criterion. Evaluating the adaptation criterion may include assessing a wear level of the at least some of the memory blocks, assessing an expected number of errors in the at least some of the memory blocks, and/or assessing a target storage reliability of the data in the at least some of the memory blocks. In an embodiment, the adaptation criterion depends on a preference between programming speed and a capacity of the memory. In another embodiment, the adaptation criterion depends on a frequency at which the data in the at least some of the memory blocks changes.
In a disclosed embodiment, storing the data includes accepting the data from a host for storage in a long-term storage device, and temporarily caching the data in the memory. Specifying the second over-provisioning overhead may include receiving from the host a request to free cache memory resources, and setting the second over-provisioning overhead in response to the request. In an embodiment, the data is received from a host for storage in the memory, the memory has a specified user capacity that is available to the host, and specifying the second over-provisioning overhead does not change the specified user capacity. In another embodiment, specifying the second over-provisioning overhead includes accepting an indication whether a data item that is stored in the memory is also stored in an additional storage location, and setting the second over-provisioning overhead responsively to the indication.
In yet another embodiment, the memory includes multiple memory devices each holding a subset of the memory blocks, and specifying the first and second over-provisioning overheads includes assigning one of the memory devices to serve as a spare memory device for replacing a faulty memory device, and, until the spare memory device replaces the faulty memory device, using the spare memory device to increase the first over-provisioning overhead. In still another embodiment, the memory includes multiple memory portions each holding a subset of the memory blocks, and specifying the first over-provisioning overhead includes individually specifying respective values of the first over-provisioning overhead separately for the memory portions. Specifying the respective values of the over-provisioning overhead may include setting a respective value of the first over-provisioning overhead for a given memory portion based on an expected endurance of the given memory portion.
In some embodiments, the memory includes multiple memory devices each holding a subset of the memory blocks, specifying the first over-provisioning overhead at the first time includes assigning each memory device a respective range of logical addresses, and specifying the second over-provisioning overhead at the second time includes re-assigning the logical addresses among the memory devices in response to a failure of a given memory device. In an embodiment, the memory includes multiple memory devices that are grouped in two or more groups, specifying the first over-provisioning overhead includes individually specifying respective values of the first over-provisioning overhead for the groups, and the method further includes selecting, responsively to the values, one of the groups for storing an input data item, and storing the input data item in the selected group.
In a disclosed embodiment, storing the data at the first time includes storing a first portion of the data at a first storage density and a second portion of the data at a second storage density that is different from the first storage density, and specifying the second over-provisioning overhead includes, at the second time, modifying a ratio between the first and second portions of the data. Storing the data may include storing frequently-changing data at the first storage density, and rarely-changing data at the second storage density. In an embodiment, compacting the portions of the data includes selecting the previously-programmed memory blocks for compaction based on an estimated endurance of the blocks.
There is additionally provided, in accordance with an embodiment of the present invention, a method for data storage, including:
in a memory that includes multiple memory blocks, predefining a range of logical addresses for storing data in the memory;
defining a number of physical storage locations in the memory blocks, such that data storage in the number of the physical storage locations retains in the memory blocks memory areas that do not hold valid data and whose aggregated size is at least commensurate with an initial over-provisioning overhead;
at a first time, storing the data in the memory by mapping the logical addresses to the number of the physical storage locations, while complying with the initial over-provisioning overhead, and compacting portions of the data from one or more previously-programmed memory blocks containing one or more of the retained memory areas; and
at a second time subsequent to the first time, defining a modified over-provisioning overhead that is different from the initial over-provisioning overhead, modifying the number of the physical storage locations so as to comply with the modified over-provisioning overhead, and continuing to store the data by mapping the logical addresses to the modified number of the physical storage locations and compacting the data portions.
In some embodiments, defining and modifying the number of physical storage locations include applying a mapping process, which maps between the logical addresses and the physical storage locations and which varies in accordance with the over-provisioning overhead. Applying the mapping process may include defining a data structure for holding a mapping between the logical addresses and the physical storage locations, and modifying at least one of a size of the data structure and a variable range of the data structure in accordance with the over-provisioning overhead.
There is also provided, in accordance with an embodiment of the present invention, apparatus for data storage, including:
a memory, including multiple memory blocks; and
a processor, which is configured to specify at a first time a first over-provisioning overhead and store data in the memory while retaining in the memory blocks memory areas, which do not hold valid data and whose aggregated size is at least commensurate with the specified first over-provisioning overhead, to compact portions of the data from one or more previously-programmed memory blocks containing one or more of the retained memory areas, and, at a second time subsequent to the first time, to specify a second over-provisioning overhead that is different from the first over-provisioning overhead and to continue to store the data and compact the data portions while complying with the second over-provisioning overhead.
There is further provided, in accordance with an embodiment of the present invention, apparatus for data storage, including:
a memory, including multiple memory blocks; and
a processor, which is configured to predefine a range of logical addresses for storing data in the memory, to define a number of physical storage locations in the memory blocks, such that data storage in the number of the physical storage locations retains in the memory blocks memory areas that do not hold valid data and whose aggregated size is at least commensurate with an initial over-provisioning overhead, to store data in the memory at a first time by mapping the logical addresses to the number of the physical storage locations, while complying with the initial over-provisioning overhead, and compacting portions of the data from one or more previously-programmed memory blocks containing one or more of the retained memory areas, and, at a second time subsequent to the first time, to define a modified over-provisioning overhead that is different from the initial over-provisioning overhead, to modify the number of the physical storage locations so as to comply with the modified over-provisioning overhead, and to continue to store the data by mapping the logical addresses to the modified number of the physical storage locations and compact the data portions.
There is also provided, in accordance with an embodiment of the present invention, apparatus for data storage, including:
an interface, which is configured to communicate with a memory that includes multiple memory blocks; and
a processor, which is configured to specify at a first time a first over-provisioning overhead and store data in the memory while retaining in the memory blocks memory areas, which do not hold valid data and whose aggregated size is at least commensurate with the specified first over-provisioning overhead, to compact portions of the data from one or more previously-programmed memory blocks containing one or more of the retained memory areas, and, at a second time subsequent to the first time, to specify a second over-provisioning overhead that is different from the first over-provisioning overhead, and to continue to store the data and compact the data portions while complying with the second over-provisioning overhead.
There is additionally provided, in accordance with an embodiment of the present invention, apparatus for data storage, including:
an interface, which is configured to communicate with a memory that includes multiple memory blocks; and
a processor, which is configured to predefine a range of logical addresses for storing data in the memory, to define a number of physical storage locations in the memory blocks, such that data storage in the number of the physical storage locations retains in the memory blocks memory areas that do not hold valid data and whose aggregated size is at least commensurate with an initial over-provisioning overhead, to store data in the memory at a first time by mapping the logical addresses to the number of the physical storage locations, while complying with the initial over-provisioning overhead, and compacting portions of the data from one or more previously-programmed memory blocks containing one or more of the retained memory areas, and, at a second time subsequent to the first time, to define a modified over-provisioning overhead that is different from the initial over-provisioning overhead, to modify the number of the physical storage locations so as to comply with the modified over-provisioning overhead, and to continue to store the data by mapping the logical addresses to the modified number of the physical storage locations and compacting the data portions.
The present invention will be more fully understood from the following detailed description of the embodiments thereof, taken together with the drawings in which:
In some types of non-volatile memory, such as NAND Flash memory, memory cells need to be erased before they can be programmed with new data. Erasure of memory cells is typically performed in blocks. As a result, modifying a single page or even a single bit may involve erasure and subsequent programming of an entire block, which can sometimes hold 1 MB of data or more. Some memory systems overcome this problem by using logical addressing. In such a system, when a page having a certain logical address is modified, the modified page is stored in a new physical location in another block, and the previous physical location of the page is marked as not holding valid data. As data storage progresses over time, more and more areas that do not hold valid data (and are therefore ready for erasure) appear as “holes” in the memory blocks. The system typically employs a “garbage collection” process, which compacts valid data from one or more partially-programmed blocks and creates empty blocks that are available for erasure and new programming.
In order to increase the efficiency of the garbage collection process, the memory system is often over-provisioned in terms of memory size. In other words, the actual physical storage capacity of the system is larger than the specified logical capacity available to a host. The aggregated size of the memory areas that do not hold valid data (“holes”) is referred to as an over-provisioning overhead. The over-provisioning overhead can be specified as an over-provisioning ratio, which is defined as a fraction of the specified system capacity. For example, when the system uses an over-provisioning ratio of 5% and the memory is full from the host's perspective, each memory block is only 95% programmed, on average.
When the system is over-provisioned, garbage collection can be performed more efficiently. In other words, the number of copy operations per block compaction or consolidation can be reduced. The efficiency of the garbage collection process increases as a function of the over-provisioning ratio used in the system. Thus, increasing the over-provisioning ratio reduces the wearing of memory cells, and also increases the programming throughput. The effect of the over-provisioning overhead on cell wearing and storage throughput is particularly strong when the memory is full or nearly full.
Embodiments of the present invention that are described herein provide improved methods and systems for data storage. In some embodiments, a memory system comprises a processor, which accepts data from a host and stores the data in a memory comprising multiple memory blocks. The embodiments described herein refer mainly to Solid State Disks (SSDs), but the disclosed methods can also be used in various other types of memory systems.
In some embodiments, the processor modifies the over-provisioning overhead in an adaptive manner, so as to optimize the system performance for given circumstances. Typically, the processor specifies and applies a certain over-provisioning overhead, evaluates a predefined adaptation criterion, and changes the over-provisioning overhead (i.e., specifies a different over-provisioning overhead) if the criterion is met. Several example criteria are described herein. The adaptation criterion may consider, for example, the wear level and/or health level of the memory blocks. As another example, the adaptation criterion may depend on whether the stored data is critical or non-critical, or whether the data in question is already backed-up elsewhere.
Several example techniques for modifying the over-provisioning overhead are described herein. For example, when the stored data is first encoded with an Error Correction Code (ECC), the processor may trade between memory space allocated to ECC redundancy bits and memory space available for over-provisioning. As another example, the storage density (number of bits per cell) used for storing the data can be changed, thereby increasing or decreasing the memory space available for over-provisioning. As yet another example, the processor may trade between data compression and over-provisioning overhead. In some embodiments that are described herein, adaptive over-provisioning is applied in a memory system that serves as cache memory for a long-term storage device.
In some embodiments, the processor stores data in the memory using logical-to-physical address mapping. In these embodiments, the processor stores the data by mapping a predefined range of logical addresses to a certain number of physical storage locations in the memory blocks. In some embodiments, the processor modifies the over-provisioning overhead by modifying the number of physical storage locations without modifying the range of logical addresses.
In an example implementation, the processor decreases the over-provisioning overhead over the lifetime of the memory system. At the beginning of the system's life, the memory blocks are still fresh, and the number of read errors is expected to be small. Therefore, data can be stored with modest ECC redundancy, and more memory resources can be made available for over-provisioning. After the memory undergoes heavy cycling, e.g., after a number of years, higher ECC redundancy may be needed to achieve the desired storage reliability. The over-provisioning overhead can be reduced to enable the higher ECC redundancy. The disclosed techniques enable the system to achieve the highest possible storage throughput for the present conditions, or to achieve any other desired performance trade-off.
System 20 comprises multiple memory devices 28, each comprising multiple analog memory cells. In the present example, devices 28 comprise non-volatile NAND Flash devices, although any other suitable memory type, such as NOR Flash or Charge Trap Flash (CTF) cells, phase change RAM (PRAM, also referred to as Phase Change Memory—PCM), Nitride Read Only Memory (NROM), Ferroelectric RAM (FRAM), magnetic RAM (MRAM) and/or Dynamic RAM (DRAM) cells, can also be used. Each memory device may comprise a packaged device or an unpackaged semiconductor chip or die. A typical SSD may comprise several devices, each providing a storage space of 4 GB. Generally, however, system 20 may comprise any suitable number of memory devices of any desired type and size.
System 20 comprises an SSD controller 32, which accepts data from host 24 and stores it in memory devices 28, and retrieves data from the memory devices and provides it to the host. SSD controller 32 comprises a host interface 36 for communicating with host 24, a memory interface 40 for communicating with memory devices 28, and a processor 44 that processes the stored and retrieved data. In particular, processor 44 carries out adaptive over-provisioning schemes that are described in detail below. In some embodiments, controller 32 encodes the stored data with an Error Correction Code (ECC). In these embodiments, controller 32 comprises an ECC unit 48, which encodes the data before it is stored in devices 28 and decodes the ECC of data retrieved from devices 28.
Each memory device 28 comprises a memory cell array 56. The memory array comprises multiple analog memory cells 60. In the context of the present patent application and in the claims, the term “analog memory cell” is used to describe any memory cell that holds a continuous, analog value of a physical parameter, such as an electrical voltage or charge. Any suitable type of analog memory cells, such as the types listed above, can be used. In the present example, each memory device 28 comprises a non-volatile memory of NAND Flash cells.
The charge levels stored in the cells and/or the analog voltages or currents written into and read out of the cells are referred to herein collectively as analog values or storage values. Although the embodiments described herein mainly address threshold voltages, the methods and systems described herein may be used with any other suitable kind of storage values.
System 20 stores data in the analog memory cells by programming the cells to assume respective memory states, which are also referred to as programming levels. The programming levels are selected from a finite set of possible levels, and each level corresponds to a certain nominal storage value. For example, a 2 bit/cell MLC can be programmed to assume one of four possible programming levels by writing one of four possible nominal storage values into the cell.
In the present example, each memory device 28 comprises a reading/writing (R/W) unit 52, which accepts data for storage from SSD controller 32, converts the data into analog storage values and writes them into memory cells 60 of that memory device. When reading data out of array 56, R/W unit 52 typically converts the storage values of memory cells 60 into digital samples having a resolution of one or more bits, and provides the digital samples to controller 32. Data is typically written to and read from the memory cells in groups that are referred to as pages. In some embodiments, the R/W unit can erase a group of cells 60 by applying one or more negative erasure pulses to the cells.
SSD controller 32, and in particular processor 44, may be implemented in hardware. Alternatively, the SSD controller may comprise a microprocessor that runs suitable software, or a combination of hardware and software elements. In some embodiments, processor 44 comprises a general-purpose processor, which is programmed in software to carry out the functions described herein. The software may be downloaded to the processor in electronic form, over a network, for example, or it may, alternatively or additionally, be provided and/or stored on tangible media, such as magnetic, optical, or electronic memory.
In an example configuration, memory cells 60 in a given array 56 are arranged in multiple rows and columns. The memory cells in each row are connected by word lines, and the memory cells in each column are connected by bit lines. The memory array is typically divided into multiple pages, i.e., groups of memory cells that are programmed and read simultaneously. Pages are sometimes sub-divided into sectors. In some embodiments, each page comprises an entire row of the array. In alternative embodiments, each row (word line) can be divided into two or more pages. For example, in some devices each row is divided into two pages, one comprising the odd-order cells and the other comprising the even-order cells. In a typical implementation, a two-bit-per-cell memory device may have four pages per row, a three-bit-per-cell memory device may have six pages per row, and a four-bit-per-cell memory device may have eight pages per row.
Erasing of cells is usually carried out in blocks that contain multiple pages. Typical memory devices may comprise several thousand erasure blocks (also referred to as memory blocks or simply blocks, for brevity). In a typical two-bit-per-cell MLC device, each erasure block is on the order of 32 or 64 word lines, each comprising hundreds of thousands of memory cells. Each word line of such a device is often partitioned into four pages (odd/even order cells, least/most significant bit of the cells). Three-bit-per-cell devices having 32 word lines per erasure block would have 192 pages per erasure block, and four-bit-per-cell devices would have 256 pages per block. Alternatively, other block sizes and configurations can also be used.
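Purely as an illustration of the page-count arithmetic above, the following Python sketch (a hypothetical helper, not part of the disclosed embodiments) computes the number of pages per erasure block for a device in which each word line is split into even- and odd-order cell groups and each group stores one page per stored bit:

```python
def pages_per_block(word_lines: int, bits_per_cell: int, groups_per_word_line: int = 2) -> int:
    """Pages per erasure block, assuming each word line is split into even/odd
    cell groups and each group stores one page per stored bit."""
    return word_lines * groups_per_word_line * bits_per_cell

print(pages_per_block(32, 2))  # 128 pages per block for a 2 bit/cell device
print(pages_per_block(32, 3))  # 192 pages per block, matching the text
print(pages_per_block(32, 4))  # 256 pages per block, matching the text
```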
Some memory devices comprise two or more separate memory cell arrays, often referred to as planes. Since each plane has a certain “busy” period between successive write operations, data can be written alternately to the different planes in order to increase programming speed.
System 20 has a certain total (physical) capacity, i.e., the total amount of data that memory devices 28 are capable of storing. Some of this total capacity is used for storing user data bits, i.e., data that is accepted for storage from host 24. Other portions of the total capacity may be used for other purposes, e.g., for storing information that is produced internally to system 20. For example, when the stored data is encoded with an ECC, some of the total capacity is used for storing redundancy bits of the ECC, produced by ECC unit 48. Additionally or alternatively, portions of the total capacity of system 20 can be used for storing any other suitable kind of information in addition to user data received from the host. Typically, host 24 is aware only of the specified user capacity of system 20 (e.g., the logical address space used for accessing the memory), and the remaining memory resources are hidden and not available to the host. In other words, the size of the address space available to the host for storing data in system 20 is the user capacity.
In system 20, the actual memory space that is used for storing data is larger than the specified (logical) capacity of the system. When storing data in the different memory blocks of system 20, processor 44 retains in the memory blocks some memory areas that do not hold valid data. The aggregated size of these memory areas (also referred to as “holes”) is referred to as over-provisioning overhead. The over-provisioning overhead is typically expressed as an over-provisioning ratio, which is defined as a fraction of the specified logical system capacity. As explained above, the memory holes are often created when logical data pages are updated and therefore stored in other blocks.
The term “valid data” refers to any data that is useful in subsequent operation of the system. Valid data may comprise, for example, user data, ECC redundancy bits, and/or metadata or other information generated by the system. Thus, a memory area that does not hold any sort of valid data can be considered ready for erasure. Erasure of a memory area that does not hold valid data will not cause damage to any information that is stored in the system. For example, an area of this sort may hold older, obsolete versions of logical pages that were updated and stored in other physical locations.
The memory areas that do not hold valid data are typically distributed among the different memory blocks of the system. For example, when system 20 operates at an over-provisioning ratio of 5% and the memory is fully-programmed from the point of view of the host, only 95% of the pages in each memory block are actually programmed with valid data, on average. The exact percentage may vary from block to block, but on average, 5% of the pages in each block do not hold valid data. The over-provisioning overhead enables system 20 to maintain a pool of memory blocks that are (or can be) erased and ready for programming, by compacting or consolidating partially-filled memory blocks. In some embodiments, processor 44 modifies the over-provisioning ratio (i.e., modifies the aggregate size of the memory holes remaining in the memory blocks) in an adaptive manner, as will be explained in detail below.
As noted above, memory devices 28 comprise multiple memory blocks, and each block comprises multiple pages. Programming is performed page by page, and erasure is performed en-bloc for each block. Thus, a given block may be empty (i.e., contain no valid data, such as immediately after erasure), fully-programmed (i.e., have all its pages programmed with valid data) or partially-programmed (i.e., have only part of its pages programmed with valid data). During operation, system 20 continually carries out three processes, namely data storage, garbage collection and adaptive over-provisioning. These processes are typically performed independently of one another.
In the storage process, processor 44 accepts data for storage from the host, and stores it in one or more selected memory blocks. Processor 44 accepts from host 24 user data for storage, at a user data input step 78. Processor 44 processes the user data, e.g., encodes the data with an ECC using ECC unit 48 and/or adds other sorts of management information. Processor 44 selects a given block for storing the data, at a next block selection step 82. Any suitable selection criteria can be used for this purpose. In some cases, processor 44 may select a block from the pool of erased blocks, i.e., a block that currently contains no data. In other cases, processor 44 may select a block that is partially-programmed but has sufficient space for storing the data in question. In other cases, processor 44 may select two or more blocks, either erased or partially-programmed, for storing the data. Processor 44 then stores the data in the selected block, at a storage step 86.
Note that the data storage process may produce blocks that are fragmented and partially-programmed, since when a certain logical page is updated, the previous version of the page becomes invalid and therefore fragments the block in which it is stored. The extent of partial programming and data fragmentation may depend, for example, on the kind of data programming performed by the host (e.g., sequential vs. random programming) and on the block selection criteria used by processor 44. Deletion of data by the host also contributes to data fragmentation and partial programming of blocks.
In the garbage collection process, processor 44 compacts portions of valid data from one or more partially-programmed blocks, so as to clear blocks for erasure. In an example embodiment, processor 44 selects two or more blocks for consolidation, at a consolidation selection step 90. Processor 44 may use any suitable selection criteria for this purpose. For example, the processor may select the most fragmented blocks, the blocks whose consolidation is closest to producing a fully-programmed block, or make any other suitable selection. Processor 44 consolidates the selected blocks, at a consolidation step 94. Consolidation is typically performed by copying the data from the selected block to one or more new blocks obtained from the pool of erased blocks. Alternatively, data can be copied from one of the selected blocks into non-programmed pages in another selected block. In yet another embodiment, a single block is selected for compaction, and its valid data is copied to another block. Processor 44 then erases the block or blocks whose data was copied elsewhere, at an erasure step 98. Processor 44 adds these blocks to the pool of erased blocks that are available for programming.
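By way of illustration only, the following Python sketch outlines the block-compaction flow described above. The Block structure, the "most fragmented first" selection rule and the page-copy mechanics are hypothetical simplifications; a real implementation would also track ECC, metadata and wear information.

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    pages_per_block: int
    valid_pages: set = field(default_factory=set)   # page indices holding valid data

    @property
    def invalid_count(self) -> int:
        return self.pages_per_block - len(self.valid_pages)

def garbage_collect(partially_programmed: list, erased_pool: list) -> None:
    """Compact the most fragmented block into a fresh block, then erase it."""
    victim = max(partially_programmed, key=lambda b: b.invalid_count)
    partially_programmed.remove(victim)
    target = erased_pool.pop()
    # Copy only the valid pages of the victim into consecutive pages of the target.
    target.valid_pages = set(range(len(victim.valid_pages)))
    partially_programmed.append(target)
    victim.valid_pages.clear()       # erase the victim block...
    erased_pool.append(victim)       # ...and return it to the pool of erased blocks

pool = [Block(128), Block(128)]
blocks = [Block(128, set(range(100))), Block(128, set(range(60)))]
garbage_collect(blocks, pool)        # compacts the block holding only 60 valid pages
```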
Note that the efficiency of the garbage collection (block compaction) process and the data storage process depends on the amount of over-provisioning overhead used in system 20. Consider, for example, a scenario in which the system uses a 5% over-provisioning ratio. In this case, if a fully-programmed block contains M bits, the system stores 5000·M bits of data in 5000/0.95≅5263 memory blocks instead of 5000. When the memory is full from the host's perspective, the memory blocks in system 20 are actually 95% programmed, on average. In this situation, clearing partially-programmed blocks by compaction involves a relatively high number of copy operations.
In contrast, consider another scenario in which the system uses a 15% over-provisioning ratio, i.e., assigns 5000/0.85≅5882 blocks for storing the 5000·M bits of data. In this scenario, when the memory is full from the host's perspective, the memory blocks are only 85% programmed, on average. Consolidating and clearing partially-programmed blocks in this scenario incurs a much smaller number of copy operations than in the former scenario of 5% over-provisioning.
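The arithmetic of these two scenarios can be illustrated with the short Python sketch below. This is an idealized model following the text; the pages-per-block figure and the write-amplification approximation of roughly 1/OP (which also appears later in this description) are assumptions.

```python
def over_provisioning_model(logical_blocks: int, op_ratio: float, pages_per_block: int = 128):
    """Idealized model: with over-provisioning ratio op_ratio the data occupies
    logical_blocks/(1 - op_ratio) physical blocks, so when the drive is logically
    full each block holds (1 - op_ratio) of a block of valid data on average."""
    physical_blocks = logical_blocks / (1.0 - op_ratio)
    copies_per_compaction = (1.0 - op_ratio) * pages_per_block
    write_amplification = 1.0 / op_ratio       # rough approximation used in the text
    return physical_blocks, copies_per_compaction, write_amplification

for op in (0.05, 0.15):
    blocks, copies, wa = over_provisioning_model(5000, op)
    print(f"OP={op:.0%}: ~{blocks:.0f} blocks, ~{copies:.0f} page copies "
          f"per block compaction, write amplification ~{wa:.1f}")
```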
Generally, the average number of additional programming operations needed per each write operation (also referred to as “write amplification”) decreases with the over-provisioning ratio. Increasing the over-provisioning ratio increases the storage throughput of the system, at the expense of larger memory. In addition, a larger over-provisioning ratio increases the lifetime of the memory, reduces the power consumption of the storage process and reduces cell wearing, since it reduces the number of copy operations performed in block compaction. Decreasing the over-provisioning ratio, on the other hand, uses less memory space at the expense of degraded storage throughput, memory lifetime, power consumption and cell wearing.
In some embodiments, processor 44 adaptively modifies the over-provisioning overhead used in system 20, in order to optimize the system performance for given circumstances. In other words, processor 44 sets a certain over-provisioning ratio at a given point in time, and another over-provisioning ratio at a different point in time, based on a certain adaptation criterion. Typically, processor 44 evaluates the adaptation criterion, at a criterion evaluation step 102. Processor 44 checks whether the criterion is met, at a criterion checking step 106. If the criterion is met, processor 44 modifies the over-provisioning ratio, at an over-provisioning adaptation step 110.
Processor 44 may use any suitable criterion in order to decide when, and to what extent, to modify the over-provisioning ratio. The criterion is typically defined over at least some of the memory blocks. The criterion may consider, for example, the number of Programming and Erasure (P/E) cycles that the blocks have gone through or any other suitable measure of the wear level of the blocks. Additionally or alternatively, the criterion may consider the health level of the blocks, e.g., the likelihood of encountering data errors in the storage and retrieval process. As another example, processor 44 changes the over-provisioning ratio as a result of a system preference. For example, at a certain time it may be preferable to increase programming speed at the expense of capacity, in which case the processor sets a relatively high over-provisioning ratio. At another time it may be preferable to increase capacity at the expense of programming speed, in which case the processor sets a relatively low over-provisioning ratio. Additionally or alternatively, any other suitable criterion can be used.
In some embodiments, processor 44 trades-off the amount of over-provisioning with the amount of ECC redundancy. In other words, processor 44 may divide the total storage capacity of system 20 between ECC redundancy and over-provisioning. For example, when the expected number of errors is relatively low (e.g., when the system is in the beginning of its life and the memory cells are not yet heavily cycled), processor 44 can define a relatively low ECC redundancy level (e.g., high ECC code rate) and assign more memory space for data storage at a higher over-provisioning ratio. When the average wear of the memory blocks increases, e.g., after several months or years of service or following a certain number of P/E cycles, processor 44 may decide to increase the ECC redundancy and therefore reduce the over-provisioning ratio. Note that the user capacity, as seen by the host, does not change throughout these adaptations.
Additionally or alternatively, processor 44 may modify the trade-off between ECC redundancy and over-provisioning ratio based on the required storage reliability of the data. When the specified storage reliability is low, processor 44 may reduce the ECC redundancy, and in return increase the over-provisioning ratio and improve the programming throughput. When the specified storage reliability is high, processor 44 may increase the ECC redundancy and decrease the over-provisioning ratio. Further additionally or alternatively, processor 44 may trade ECC redundancy vs. over-provisioning resources using any other suitable criterion.
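As a simple numerical illustration of this trade-off (the capacities and redundancy fractions below are hypothetical, not taken from any specific device), the over-provisioning ratio that remains after reserving part of the physical capacity for ECC redundancy can be computed as follows:

```python
def op_ratio_after_ecc(physical_capacity_gb: float, user_capacity_gb: float,
                       ecc_redundancy_fraction: float) -> float:
    """Over-provisioning ratio (as a fraction of the user capacity) left after a
    fraction of the physical capacity is reserved for ECC redundancy bits."""
    usable_gb = physical_capacity_gb * (1.0 - ecc_redundancy_fraction)
    return (usable_gb - user_capacity_gb) / user_capacity_gb

# Fresh memory: a low redundancy level leaves more room for over-provisioning.
print(f"{op_ratio_after_ecc(128, 100, 0.06):.1%}")   # ~20.3%
# Heavily cycled memory: raising the redundancy shrinks the over-provisioning
# overhead, while the user capacity seen by the host stays unchanged.
print(f"{op_ratio_after_ecc(128, 100, 0.12):.1%}")   # ~12.6%
```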
The above-described technique is also applicable to Error Detection Codes (EDC), either in addition to or instead of ECC. In alternative embodiments, processor 44 encodes the data for storage with a certain EDC, such as a Cyclic Redundancy Check (CRC) code. Processor 44 may set different trade-offs between EDC size (and thus error detection reliability) and over-provisioning overhead, i.e., increase the over-provisioning overhead while reducing EDC size (e.g., the number of bits allocated to EDC per page) or vice versa.
In alternative embodiments, processor 44 can modify the over-provisioning ratio by modifying the storage density per memory cell, i.e., the number of bits per cell. The storage density can be modified, for example, by modifying the number of programming levels (programming states) that are used for storing the data, and/or by modifying the ECC code rate. When using a larger number of bits per cell, a given data size can be stored in fewer memory pages, and more space can be used for over-provisioning.
In some embodiments, processor 44 modifies the over-provisioning ratio by modifying the number of programming levels that are used for storing data, at least for some of the word lines in some of the memory blocks. Processor 44 may increase or decrease the number of programming levels from any suitable initial number to any suitable modified number, such as from two levels to four levels, from four levels to eight levels, or vice versa.
Moreover, the initial and/or modified number of programming levels need not necessarily be a power of two. For example, processor 44 may initially store the data using eight programming levels (i.e., at a density of 3 bits/cell). At a later point in time, the processor may reduce the over-provisioning ratio by decreasing the number of programming levels to six, i.e., reducing the storage density to approximately 2.5 bits/cell. Alternatively, processor 44 may store data using three programming levels, or any other suitable number.
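The effect of the number of programming levels on the space available for over-provisioning can be illustrated as follows; the cell count and user capacity are hypothetical values chosen for the sketch, and ECC and metadata overheads are ignored:

```python
import math

def op_ratio_for_levels(levels: int, total_cells: float, user_capacity_bits: float) -> float:
    """Over-provisioning ratio achievable when each cell stores log2(levels) bits."""
    physical_bits = total_cells * math.log2(levels)
    return (physical_bits - user_capacity_bits) / user_capacity_bits

TOTAL_CELLS = 40e9          # assumed number of memory cells
USER_BITS = 100e9           # assumed user capacity, in bits
print(f"8 levels (3 bits/cell):     OP = {op_ratio_for_levels(8, TOTAL_CELLS, USER_BITS):.1%}")
print(f"6 levels (~2.58 bits/cell): OP = {op_ratio_for_levels(6, TOTAL_CELLS, USER_BITS):.1%}")
```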
At a different point in time, processor 44 reduces the ECC redundancy level, so that fewer memory cells in each memory page 154 are needed for storing ECC redundancy bits 162. As a result, more memory cells are available for increasing the over-provisioning ratio.
In some embodiments, processor 44 decides whether or not to compress the user data prior to storage. The decision may be based, for example, on the type of data and/or wear level of the memory. If the data is compressed, processor 44 can use a relatively high over-provisioning ratio. Otherwise, a lower over-provisioning ratio is typically used. In an example scenario, processor 44 may apply data compression, and a high over-provisioning ratio, at the beginning of the system's lifetime. At a later point in time, processor 44 may store the data without compression, and reduce the over-provisioning ratio accordingly.
Further alternatively, processor 44 may modify the storage configuration used for storing data in the memory in any other suitable manner, in order to clear memory resources and increase the over-provisioning ratio. Example mechanisms for modifying the storage configuration are described, for example, in PCT International Publication WO 2007/132456, whose disclosure is incorporated herein by reference.
In some embodiments, SSD controller 32 stores data in memory devices 28 using logical-to-physical address mapping. In these embodiments, host 24 exchanges data with the SSD controller by addressing a predefined range of logical addresses. Processor 44 in the SSD controller maintains a mapping between the logical addresses and a certain number of physical storage locations (e.g., physical pages) in the memory blocks of memory devices 28. Processor 44 stores incoming data by mapping the logical addresses to the physical storage locations.
In some embodiments, processor 44 adaptively modifies the over-provisioning overhead by modifying the number of physical storage locations without modifying the range of logical addresses. In a typical implementation, processor 44 initially defines a certain number of physical storage locations, so as to comply with a certain initial over-provisioning overhead. In other words, the initial number of physical storage locations is defined so as to retain a sufficient amount of memory areas that do not contain valid data (including user data, ECC redundancy and/or metadata), as derived from the initial over-provisioning overhead. Initially, the controller stores data and performs garbage collection using this initial logical-to-physical address mapping.
At a later point in time, processor 44 defines a modified over-provisioning overhead that is different from the initial over-provisioning overhead. In order to comply with the modified over-provisioning overhead, the processor modifies (increases or decreases) the number of physical storage locations that are used in the logical-to-physical address mapping. Typically, this modification is performed without modifying the range of logical addresses used between the SSD controller and the host. Processor 44 may modify the number of physical storage locations using any of the techniques described above, e.g., by trading-off ECC redundancy resources, by modifying the storage density (the number of bits/cell), or by performing data compression.
After modifying the number of physical storage locations, processor 44 updates the logical-to-physical address mapping accordingly, so as to comply with the modified over-provisioning overhead. Subsequent data storage and garbage collection are performed using the updated mapping.
Typically, processor 44 defines and maintains a certain data structure (e.g., one or more tables) for holding the logical-to-physical address mapping. In some embodiments, upon modifying the over-provisioning overhead, processor 44 modifies the size of this data structure accordingly. Processor 44 may employ a logical-to-physical address mapping process that is designed to operate with a variable over-provisioning overhead. This process is typically used for data storage, data retrieval and garbage collection. In particular, such a process may use one or more logical-to-physical mapping tables whose size and/or variable range varies.
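A minimal Python sketch of such a variable mapping is given below, assuming a flat page-level table. The structures are purely illustrative and ignore block boundaries, garbage collection and persistence of the table itself.

```python
class LogicalToPhysicalMap:
    """Mapping with a fixed logical range and a variable pool of physical pages."""

    def __init__(self, num_logical_pages: int, num_physical_pages: int):
        assert num_physical_pages >= num_logical_pages
        self.num_logical_pages = num_logical_pages   # fixed: the range seen by the host
        self.free = set(range(num_physical_pages))   # variable: grows/shrinks with the overhead
        self.l2p = {}                                # logical page -> physical page
        self.media = {}                              # physical page -> stored data

    def write(self, logical_page: int, data) -> None:
        new = self.free.pop()                 # out-of-place update into any free location
        self.media[new] = data
        old = self.l2p.get(logical_page)
        self.l2p[logical_page] = new
        if old is not None:
            del self.media[old]
            self.free.add(old)                # previous copy becomes an invalid "hole"

    def read(self, logical_page: int):
        return self.media[self.l2p[logical_page]]

    def grow_physical_pool(self, extra_pages: int) -> None:
        """Increase the over-provisioning overhead by enlarging only the physical
        pool; the logical address range exposed to the host never changes."""
        first_new = len(self.l2p) + len(self.free)
        self.free.update(range(first_new, first_new + extra_pages))
```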
In some embodiments, a non-volatile memory system (e.g., SSD) is used as a cache memory for a long-term storage device (e.g., magnetic disk). The adaptive over-provisioning techniques described herein can be applied in such cache applications, as well. As noted above, adaptive over-provisioning is important in maintaining high storage throughput. In cache memory applications, applying adaptive over-provisioning techniques in the cache memory can increase the storage throughput of the entire storage system.
In some embodiments, host 164 may request SSD 172 to delete some of the cached data items in order to free cache memory resources. Various storage protocols support “cache trim” commands, and host 164 may use such a command for this purpose. Host 164 may issue a trim command to SSD 172, for example, upon detecting that the storage throughput of system 160 has deteriorated, or upon deciding that higher throughput is desired. In response to a trim command, SSD 172 may delete one or more of the cached data items, and use the released memory space to increase the over-provisioning ratio. The higher over-provisioning overhead helps to improve the storage throughput of SSD 172, and therefore of system 160 as a whole.
If, on the other hand, the current storage throughput of system 160 is insufficient, the host sends a “cache trim” command to SSD 172, at a trim requesting step 192. The command requests SSD 172 to free some cache memory resources in order to increase the throughput. In response to the trim command, SSD 172 deletes one or more data items and uses the released memory space to increase the over-provisioning overhead, at an over-provisioning increasing step 196. SSD 172 may select data items for deletion based on any suitable criterion, such as the least-accessed items or the oldest items. The increased over-provisioning overhead increases the storage throughput of SSD 172. The method then loops back to step 180 above.
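The sketch below illustrates one possible way to act on such a trim command. The CachedItem structure, the least-recently-accessed eviction policy and the notion of an over-provisioning pool are hypothetical simplifications, not features mandated by the disclosed embodiments.

```python
from dataclasses import dataclass

@dataclass
class CachedItem:
    name: str
    last_access: float     # timestamp of the most recent access
    blocks: list           # physical blocks currently holding this item

def handle_cache_trim(cached_items: list, items_to_drop: int, op_pool: list) -> list:
    """Drop the least-recently-accessed cached items (which are anyway stored on the
    long-term storage device) and hand their blocks to the over-provisioning pool."""
    victims = sorted(cached_items, key=lambda item: item.last_access)[:items_to_drop]
    for item in victims:
        cached_items.remove(item)
        op_pool.extend(item.blocks)   # released space now serves as extra over-provisioning
    return victims
```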
In some applications, some or all of the data that is provided for storage in system 20 is also backed-up in another storage location. For example, system 20 may be part of a redundant storage system (e.g., Redundant Array of Independent Disks—RAID). As another example, system 20 may comprise an SSD in a mobile computer, which backs-up data to another storage location using a network connection when it is connected to a network.
In some embodiments, SSD controller 32 is notified whether or not certain data is backed-up in another location in addition to system 20. In an example embodiment, host 24 and SSD controller 32 support a command interface, using which the host informs the SSD controller whether a given data item that is sent for storage is backed-up in an additional location. The status of a given data item may change over time. For example, in a mobile computer application, a given data item may be stored exclusively in system 20 while the computer is disconnected from a network (e.g., while mobile), and then be backed-up over a network connection when the mobile computer connects to the network. In some embodiments, processor 44 adjusts the over-provisioning overhead based on these notifications.
For example, when the notifications indicate that a given data item is backed-up in another location in addition to system 20, it may be permissible to store this data item in system 20 at reduced storage reliability. Therefore, processor 44 may store the given data item at a denser storage configuration having reduced storage reliability (e.g., using less ECC redundancy and/or using more bits/cell). The extra memory space that is freed by the denser storage configuration can be used to increase the over-provisioning ratio. In some embodiments, SSD controller 32 applies internal RAID in system 20, i.e., stores data items in system 20 using RAID redundancy. When a given data item is known to be backed-up externally to system 20, processor 44 may store this data item without internal RAID redundancy.
In some scenarios, the host informs the SSD controller that a given data item, which was not previously backed-up in another location, is now backed-up. For example, a mobile computer may at some point be connected to a network connection, which enables backup of locally-stored data items to remote storage. Upon receiving such a notification, the SSD controller may change the storage configuration of this data item, and modify the over-provisioning overhead accordingly.
In some embodiments, one or more of memory devices 28 are assigned as spare devices that are not used for normal data storage. If a given memory device 28 fails, it is replaced by one of the spare memory devices. In some embodiments, when a spare device is not used to replace a faulty device, it can be used as an additional over-provisioning area.
Typically, system 20 is specified to provide a certain endurance, e.g., to endure a certain number of programming cycles. Because of the “write amplification” effect described above, the system-level endurance specification translates to a higher endurance requirement from devices 28. When system 20 comprises multiple memory devices 28, the memory devices may differ from one another in their endurance levels, e.g., in the number of programming and erasure cycles they are able to endure. In some cases, the endurance level of each memory device can be estimated or predicted.
In some embodiments, processor 44 individually assigns each memory device 28 a respective range of logical addresses, whose size matches the expected endurance of the memory device. The remaining physical memory space of each memory device is used for over-provisioning. Consider, for example, a system comprising 1 GB memory devices (i.e., the physical storage size of each device is 1 GB). Some of these devices may have high endurance, while others may have poorer endurance, e.g., because of statistical manufacturing process variations among the devices.
In an example embodiment, processor 44 may assign each higher-endurance device a logical address range of 900 MB, and use an over-provisioning ratio of 10% on these devices. For the lower-endurance devices, processor 44 may assign an address range of 800 MB, and use an over-provisioning ratio of 20% on these devices. As a result, a lower-endurance device will need to handle a smaller number of programming cycles relative to a high-endurance device. Assuming a statistical mixture of higher- and lower-endurance devices, the system-level endurance specification can be met without discarding lower-endurance devices. This technique increases manufacturing yield and thus reduces cost.
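An illustrative sketch of this assignment is given below; the device descriptors, the two-way endurance classification and the 900 MB/800 MB figures simply mirror the example above and are assumptions of the sketch, not requirements of the disclosed embodiments.

```python
def assign_logical_ranges(devices):
    """Assign each device a logical range sized according to its expected endurance;
    the remaining physical space of each device serves as over-provisioning."""
    HIGH_ENDURANCE_LOGICAL_MB = 900
    LOW_ENDURANCE_LOGICAL_MB = 800
    ranges, next_mb = [], 0
    for name, physical_mb, endurance in devices:
        logical_mb = HIGH_ENDURANCE_LOGICAL_MB if endurance == "high" else LOW_ENDURANCE_LOGICAL_MB
        spare_mb = physical_mb - logical_mb          # retained as over-provisioning area
        ranges.append((name, next_mb, next_mb + logical_mb, spare_mb))
        next_mb += logical_mb
    return ranges

for entry in assign_logical_ranges([("dev0", 1000, "high"), ("dev1", 1000, "low")]):
    print(entry)    # (device, first MB, last MB, spare MB kept for over-provisioning)
```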
The description above refers to setting separate, possibly different over-provisioning overheads to different memory devices 28. Alternatively, processor 44 may assign separate, possibly different over-provisioning overheads to any other suitable group of memory cells, e.g., to different planes or different dies within a given memory device 28. The appropriate over-provisioning overhead for each cell group or device can be determined, for example, during manufacturing tests.
The following analysis demonstrates the potential value of assigning different over-provisioning overheads to different memory portions. In many practical cases, the write amplification factor can be approximated by 1/OP, wherein OP denotes the over-provisioning ratio. Consider an example scenario in which a certain portion 0<P<1 of the memory can endure C1 P/E cycles, and the remaining 1−P of the memory can endure C2>C1 cycles. This scenario also assumes purely random (i.e., non-sequential) programming of the memory. Let S denote the logical capacity of the memory.
If both portions of the memory are assigned the same over-provisioning ratio OP, the system can first write an amount C1·OP·S of data, after which the portion P of the blocks reaches the end of its life. If P>OP, the entire memory then reaches the end of its life. Otherwise, the memory continues operating with an over-provisioning ratio of OP−P. Thus, the total amount of data that can be written during the memory lifetime is:
A=(C1·OP+MAX{0,(C2−C1)·(OP−P)})·S, [1]
wherein S denotes the logical size of the memory.
Consider, on the other hand, an implementation in which the memory portions P and (1−P) are assigned different over-provisioning ratios OP1 and OP2, respectively. OP1 and OP2 are selected such that both portions end their life after approximately the same number of user cycles, i.e.:
C1·OP1=C2·OP2. [2]
Since only a portion OP of the memory can be used for over-provisioning, P·OP1+(1−P)·OP2≦OP. Therefore, if all the over-provisioning area is utilized, we can write:
OP2=(OP−P·OP1)/(1−P) [3]
Solving Equations [2] and [3] gives:
OP1=C2·OP/((1−P)C1+P·C2)
OP2=C1·OP/((1−P)·C1+P·C2) [4]
The total amount of data that can be written during the lifetime of the memory is thus:
A′=C1·C2/((1−P)C1+P·C2)·OP·S [5]
Consider two numerical examples in which OP=25%. If, for example, C1=20K cycles, C2=50K cycles and P=10%, then A=9500·S and A′=10870·S. If P were 0, A′ would reach 12500·S. As can be seen, A′ is considerably greater than A, meaning that assigning over-provisioning overheads separately to different memory portions based on endurance can potentially increase the total endurance of the memory.
As another example, if C1=30K cycles, C2=50K cycles and P=50%, then A=7500·S and A′=9375·S, i.e., a 25% improvement. Using SLC storage in part of the memory can only improve the endurance by a factor smaller than the reciprocal of the remaining (non-SLC) fraction of the memory, even if the SLC has effectively infinite endurance: If, for example, C1=50K cycles, C2=1000K cycles and P=90%, then A=12500·S and A′≅12500·S/0.9. For cases where P<OP, A′ may be lower than A. For example, if C1=10 cycles, C2=50K cycles and P=0.01, then A=12000·S and A′=250·S.
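The following short Python sketch evaluates Equations [1] and [5] for the first numerical example above. It is only a numerical check of the analysis, with S normalized to 1.

```python
def lifetime_same_op(c1, c2, p, op, s=1.0):
    """Total data writable when both portions share the same over-provisioning
    ratio OP, per Equation [1]."""
    return (c1 * op + max(0.0, (c2 - c1) * (op - p))) * s

def lifetime_split_op(c1, c2, p, op, s=1.0):
    """Total data writable when OP1 and OP2 are chosen per Equations [2]-[4],
    which yields Equation [5]."""
    return c1 * c2 / ((1 - p) * c1 + p * c2) * op * s

print(lifetime_same_op(20e3, 50e3, 0.10, 0.25))    # ~9500  (A, in units of S)
print(lifetime_split_op(20e3, 50e3, 0.10, 0.25))   # ~10870 (A', in units of S)
```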
The above analysis can be generalized to an implementation having n memory segments, whose relative sizes are denoted P1 . . . Pn and whose endurances are denoted C1 . . . Cn. Again, the write amplification factor is approximated by 1/OP. The n segments are assigned over-provisioning ratios denoted OP1 . . . OPn, which are selected such that all n segments end their life approximately after the same number of user cycles:
Ci·OPi=K,i=1 . . . n [6]
wherein K is a constant. If OP denotes the overall over-provisioning ratio of the entire memory, we are limited by the constraint:
Σi=1 . . . n Pi·OPi=OP [7]
By solving Equations [6] and [7], the total amount of data that can be written over the lifetime of the memory can be approximated as
A″=C″·OP·S [8]
wherein S denotes the logical size of the memory, and C″ denotes the harmonic average of the endurances of the segments of the SSD:
C″=1/Σi=1 . . . n(Pi/Ci). [9]
Consider, for example, a memory in which 50% of the blocks can endure 20K cycles and 50% can endure 50K cycles. By appropriate allocation of over-provisioning ratios to the different blocks, the memory can achieve an effective endurance of C″=1/(0.5/20000+0.5/50000)=28570 cycles, instead of 20000 cycles using conventional schemes and even wear leveling.
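The effective endurance of Equation [9] can be checked numerically as follows; this is a plain evaluation of the formula, with segment sizes and endurances taken from the example above.

```python
def effective_endurance(sizes, endurances):
    """Size-weighted harmonic average of the segment endurances, per Equation [9]."""
    return 1.0 / sum(p / c for p, c in zip(sizes, endurances))

print(effective_endurance([0.5, 0.5], [20e3, 50e3]))   # ~28571 cycles, as in the text
```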
In some embodiments, processor 44 selects blocks for garbage collection and erasure based on their estimated endurance. Thus, for example, blocks that are estimated to have long endurance will be compacted and erased when they contain a certain number of invalid pages (e.g., ten pages), whereas blocks that are estimated to have short endurance will be compacted and erased only when they reach a higher number of invalid pages (e.g., twenty pages).
In some embodiments, e.g., in an SSD application, memory devices 28 are grouped in two or more groups that are referred to as channels. Upon receiving a given data item for storage, processor 44 selects one of the channels, and sends the data item for storage in the selected channel. In some embodiments, processor 44 assigns a respective over-provisioning ratio individually for each channel. The over-provisioning ratio may differ from one channel to another. The processor selects a channel for storing a given data item based on the over-provisioning ratios. For example, processor 44 may send an incoming data item for storage in the channel that currently has the highest over-provisioning overhead among all the channels.
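A minimal sketch of this selection rule follows. The Channel structure and its op_ratio attribute are hypothetical, and a real implementation would typically combine this rule with load balancing across channels.

```python
from dataclasses import dataclass

@dataclass
class Channel:
    name: str
    op_ratio: float    # current over-provisioning ratio of this channel

def select_channel(channels):
    """Route an incoming data item to the channel that currently has the largest
    over-provisioning ratio."""
    return max(channels, key=lambda ch: ch.op_ratio)

print(select_channel([Channel("ch0", 0.08), Channel("ch1", 0.14)]).name)   # ch1
```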
In some embodiments, each memory device 28 (or each die within each memory device) is assigned a respective sub-range of logical addresses. In the event that a given device fails, processor 44 may re-map the logical addresses to devices 28, so as to divide the overall range of logical addresses among the remaining functional devices. When re-mapping the logical addresses, processor 44 reduces the over-provisioning ratio slightly. As a result, the system can remain operational without re-formatting. The re-mapping and over-provisioning reduction can be performed gradually, e.g., over subsequent write operations.
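A simplified Python sketch of this re-mapping is given below. It assumes the over-provisioning ratio is defined as (physical capacity − logical capacity)/logical capacity, and the data structures are hypothetical.

```python
def remap_after_failure(devices, logical_capacity):
    """Re-divide the overall logical address range among the remaining
    functional devices and return the (reduced) over-provisioning ratio."""
    functional = [d for d in devices if not d["failed"]]
    physical_capacity = sum(d["physical_capacity"] for d in functional)
    start = 0
    for d in functional:
        # Each functional device receives a logical sub-range proportional to its size.
        share = round(logical_capacity * d["physical_capacity"] / physical_capacity)
        d["logical_range"] = (start, start + share)
        start += share
    # The same logical capacity is now backed by less physical capacity,
    # so the over-provisioning ratio shrinks.
    return (physical_capacity - logical_capacity) / logical_capacity

devices = [{"physical_capacity": 64, "failed": i == 2} for i in range(4)]
print(remap_after_failure(devices, logical_capacity=160))  # 0.2 instead of 0.6
```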
In some embodiments, processor 44 stores some of the data in memory devices 28 at a certain storage density (e.g., SLC) and some of the data at a different storage density (e.g., MLC). In particular, processor 44 may store frequently-changed data ("hot data") in a high-endurance, high-speed storage configuration (e.g., SLC), and rarely-changed data ("cold data") in a lower-endurance, lower-speed but lower-cost storage configuration (e.g., MLC). In some embodiments, the relative portion of the data that is stored at the first storage density changes over time. For example, the proportion between the volumes of "hot" and "cold" data may change over time.
In an embodiment, processor 44 reacts to such a change by modifying the memory spaces that are allocated to the two storage densities (e.g., increasing the SLC space at the expense of the MLC space, or vice versa). As a result of this modification, the ratio between the physical capacity of the memory and the user capacity of the memory changes as well. For example, allocating more memory to SLC storage and less memory to MLC storage reduces the physical capacity of the memory, and vice versa. Therefore, changing the relative memory allocation between the different storage densities changes the over-provisioning ratio of the system.
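The following Python sketch illustrates how shifting blocks between SLC and MLC allocation changes the over-provisioning ratio. The block counts, the two-bit MLC assumption, and the definition of the ratio as (physical − user)/user capacity are illustrative assumptions.

```python
def overprovisioning_ratio(slc_blocks, mlc_blocks, slc_block_capacity, user_capacity,
                           mlc_bits_per_cell=2):
    """Over-provisioning ratio for a given SLC/MLC block allocation.

    Each MLC block stores mlc_bits_per_cell times the data of an SLC block,
    so moving blocks from MLC to SLC shrinks the physical capacity and,
    for a fixed user capacity, reduces the over-provisioning ratio."""
    physical = slc_block_capacity * (slc_blocks + mlc_bits_per_cell * mlc_blocks)
    return (physical - user_capacity) / user_capacity

# Re-allocating 100 blocks from MLC to SLC reduces the over-provisioning ratio.
print(overprovisioning_ratio(100, 900, slc_block_capacity=1, user_capacity=1500))  # ~0.267
print(overprovisioning_ratio(200, 800, slc_block_capacity=1, user_capacity=1500))  # 0.200
```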
It will be appreciated that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and sub-combinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art.
This application claims the benefit of U.S. Provisional Patent Application 61/224,897, filed Jul. 12, 2009, U.S. Provisional Patent Application 61/293,814, filed Jan. 11, 2010, and U.S. Provisional Patent Application 61/334,606, filed May 14, 2010, whose disclosures are incorporated herein by reference.