Reuse of host hibernation storage space by memory controller

Information

  • Patent Grant
  • Patent Number
    8,694,814
  • Date Filed
    Sunday, September 12, 2010
  • Date Issued
    Tuesday, April 8, 2014
Abstract
A method for data storage includes, in a host system that operates alternately in a normal state and a hibernation state, reserving a hibernation storage space in a non-volatile storage device for storage of hibernation-related information in preparation for entering the hibernation state. While the host system is operating in the normal state, a storage task other than storage of the hibernation-related information is performed using at least a portion of the reserved hibernation storage space.
Description
FIELD OF THE INVENTION

The present invention relates generally to memory devices, and particularly to reusing memory space allocated for storing hibernation data.


BACKGROUND OF THE INVENTION

Some computing devices, such as notebook computers, support a hibernation state. The hibernation state is typically a low power consumption state that preserves the state of the computing device and its applications so that operation can later resume without having to restart the applications or the operating system. When preparing to enter hibernation, the computing device stores application data and other information in non-volatile memory. When returning from hibernation to normal operation, the computing device retrieves the stored information, and resumes operation from the point at which it began to hibernate.


Some storage devices, such as Solid-State Disks (SSDs), use arrays of analog memory cells for non-volatile data storage. Each analog memory cell stores an analog value, also referred to as a storage value, such as an electrical charge or voltage. This analog value represents the information stored in the cell. In Flash memories, for example, each analog memory cell holds a certain amount of electrical charge. The range of possible analog values is typically divided into intervals, each interval corresponding to one or more data bit values. Data is written to an analog memory cell by writing a nominal analog value that corresponds to the desired bit or bits.


Some memory devices, commonly referred to as Single-Level Cell (SLC) devices, store a single bit of information in each memory cell, i.e., each memory cell can be programmed to assume two possible programming levels. Higher-density devices, often referred to as Multi-Level Cell (MLC) devices, store two or more bits per memory cell, i.e., can be programmed to assume more than two possible programming levels.


SUMMARY OF THE INVENTION

An embodiment of the present invention that is described herein provides a method for data storage, including:


in a host system that operates alternately in a normal state and a hibernation state, reserving a hibernation storage space in a non-volatile storage device for storage of hibernation-related information in preparation for entering the hibernation state; and


while the host system is operating in the normal state, performing a storage task other than storage of the hibernation-related information using at least a portion of the reserved hibernation storage space.


In some embodiments, the non-volatile storage device includes multiple memory blocks, and performing the storage task includes allocating over-provisioning memory overhead for copying valid data from partially-programmed memory blocks so as to produce memory blocks ready for erasure, such that at least some of the over-provisioning memory overhead is allocated in the hibernation storage space. In a disclosed embodiment, performing the storage task includes caching user data accepted from the host system in the hibernation storage space, and subsequently copying the cached user data to storage locations outside the hibernation storage space. In an embodiment, caching the user data includes writing the user data to the hibernation storage space at a first storage throughput, and copying the cached user data includes storing the user data outside the hibernation storage space at a second storage throughput that is lower than the first storage throughput.


In some embodiments, reserving the hibernation storage space includes allocating a set of the storage locations by the host system to serve as the hibernation storage space, and performing the storage task includes identifying at least part of the storage locations in the set, and performing the storage task using the identified storage locations. In an embodiment, identifying the storage locations in the set includes receiving a notification from the host system indicative of the set of storage locations. In an alternative embodiment, identifying the storage locations in the set includes automatically identifying a file holding the hibernation-related information in a file system of the host system. In yet another embodiment, the method includes, in preparation for entering the hibernation state, receiving the hibernation-related information from the host system using one or more dedicated hibernation write commands, and identifying the storage locations in the set includes detecting the storage locations written to using the dedicated hibernation write commands.


In a disclosed embodiment, the method includes detecting that the host system is preparing to enter the hibernation state. In an embodiment, the method includes switching to store the hibernation-related information using a high-speed storage configuration responsively to detecting that the host system is preparing to enter the hibernation state. In another embodiment, detecting that the host system is preparing to enter the hibernation state includes detecting one or more dedicated hibernation write commands received from the host system. Alternatively, detecting that the host system is preparing to enter the hibernation state includes detecting one or more write commands to storage locations belonging to the hibernation storage space. Further alternatively, detecting that the host system is preparing to enter the hibernation state includes receiving a notification from the host system indicating a preparation to enter the hibernation state.


In some embodiments, the method includes detecting that the host system is preparing to exit the hibernation state. In an embodiment, detecting that the host system is preparing to exit the hibernation state includes detecting one or more read commands from storage locations belonging to the hibernation storage space. In an alternative embodiment, detecting that the host system is preparing to exit the hibernation state includes receiving a notification from the host system indicating a preparation to exit the hibernation state.


In still another embodiment, performing the storage task includes using at least the portion of the hibernation storage space only responsively to verifying that the hibernation-related information is invalid. In another embodiment, the method includes marking the hibernation-related information as invalid after the host system exits from the hibernation state and retrieves the hibernation-related information from the non-volatile storage device. In yet another embodiment, the method includes, in preparation for entering the hibernation state, storing part of the hibernation-related information in the portion of the hibernation storage space used for performing the storage task.


In some embodiments, the method includes writing user data at a first throughput, and, in preparation for entering the hibernation state, writing the hibernation-related information to the hibernation storage space at a second throughput, higher than the first throughput. In an embodiment, the method includes, in preparation for entering the hibernation state, compressing the hibernation-related information and storing the compressed hibernation-related information in the hibernation storage space. In another embodiment, the method includes receiving a notification from the host system indicating that the host system is preparing to enter the hibernation state, and vacating the portion of the hibernation storage space in response to the notification.


There is additionally provided, in accordance with an embodiment of the present invention, a method for data storage, including:


accepting from a host system data, which belongs to one or more files that are organized in accordance with a file system, and storing the data in a non-volatile storage device;


processing the data stored in the non-volatile storage device so as to identify a file that was marked as invalid by the file system of the host system; and


releasing a memory space occupied by the identified file in the non-volatile storage device.


There is also provided, in accordance with an embodiment of the present invention, apparatus for data storage, including:


a non-volatile memory; and


a memory controller, which is configured to store data for a host system that operates alternately in a normal state and a hibernation state, to reserve a hibernation storage space in the non-volatile memory for storage of hibernation-related information in preparation for entering the hibernation state, and, while the host system is operating in the normal state, to perform a storage task other than storage of the hibernation-related information using at least a portion of the reserved hibernation storage space.


There is further provided, in accordance with an embodiment of the present invention, apparatus for data storage, including:


a non-volatile memory; and


a memory controller, which is configured to accept from a host system data belonging to one or more files that are organized in accordance with a file system, to store the data in the non-volatile memory, to process the data stored in the non-volatile memory so as to identify a file that was marked as invalid by the file system of the host system, and to release a memory space occupied by the identified file in the non-volatile memory.


The present invention will be more fully understood from the following detailed description of the embodiments thereof, taken together with the drawings in which:





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram that schematically illustrates a computing device that supports a hibernation state, in accordance with an embodiment of the present invention;



FIG. 2 is a diagram that schematically illustrates a hibernation storage space used for storage management during normal operation, in accordance with an embodiment of the present invention; and



FIG. 3 is a flow chart that schematically illustrates a method for operating a Solid-State Disk (SSD) in a computing device that supports hibernation, in accordance with an embodiment of the present invention.





DETAILED DESCRIPTION OF EMBODIMENTS
Overview

Embodiments of the present invention that are described herein provide improved methods and systems for operating non-volatile storage devices (e.g., Solid-State Disks (SSDs)) in computing systems that support a hibernation state. The term “hibernation state” refers to any operational state in which the computing system reduces its energy consumption by deactivating at least part of its circuitry, and backs up certain information to non-volatile storage before entering this operational state. Operational states or modes that are sometimes referred to as “standby,” “sleep” or “battery-save” are also regarded herein as hibernation states.


In some embodiments, a host system stores data in a non-volatile storage device, which comprises a non-volatile memory and a memory controller. The host system supports a hibernation state and a normal state, and may alternate between the two states. When preparing to enter the hibernation state, the host system creates a hibernation file containing hibernation-related information, and stores the hibernation file in a reserved hibernation storage space in the non-volatile memory. When returning from the hibernation state to the normal state, the host system retrieves the hibernation file and uses the hibernation-related information to resume normal operation.


In some embodiments of the present invention, the memory controller reuses the reserved hibernation storage space for storage tasks other than storing the hibernation-related information while the host system is operating in the normal state. In some embodiments, the memory controller accepts user data for storage from the host system, and stores the user data in storage locations that are outside the hibernation storage space. The memory controller manages the storage of the user data, however, using at least part of the hibernation storage space. The memory controller may use the hibernation storage space for various functions. Several examples of such uses, e.g., over-provisioning and binary caching, are described herein.


In some embodiments, the hibernation storage space is reserved by the host system, and the memory controller automatically identifies at least some storage locations belonging to the hibernation storage space, and uses the identified locations for storage management or other tasks.


In many practical systems, the hibernation storage space occupies a considerable portion of the non-volatile storage device, but is largely unused during normal operation. The disclosed techniques enable the memory controller to exploit this storage resource, and to improve storage speed and reliability by reusing it.


System Description


FIG. 1 is a block diagram that schematically illustrates a host system 20 that supports a hibernation state, in accordance with an embodiment of the present invention. In the present example, system 20 comprises a mobile computing device such as a notebook or laptop computer. Alternatively, the methods and systems described herein can be used in other computing devices such as Personal Digital Assistants (PDAs), in mobile communication terminals such as mobile phones, or in any other suitable host system.


Host system 20 comprises a host processor 24. The host processor typically runs a certain Operating System (OS), and may run any desired number of software applications. Host system 20 comprises a Random Access Memory (RAM) 28, in which host processor 24 stores data, program instructions and/or any other information. In addition, host system 20 comprises a non-volatile storage device, in the present example a Solid-State Disk (SSD) 32. SSD 32 comprises a host interface 36 for communicating with host processor 24, an SSD controller 40, and one or more non-volatile memory devices 44. Each memory device 44 comprises an array 48 of multiple analog memory cells 52. A Read/Write (R/W) unit 56 writes data into memory cells 52 of array 48, and retrieves data from the memory cells.


In the context of the present patent application and in the claims, the term “analog memory cell” is used to describe any memory cell that holds a continuous, analog value of a physical parameter, such as an electrical voltage or charge. Array 48 may comprise analog memory cells of any kind, such as, for example, NAND, NOR and Charge Trap Flash (CTF) Flash cells, phase change RAM (PRAM, also referred to as Phase Change Memory-PCM), Nitride Read Only Memory (NROM), Ferroelectric RAM (FRAM), magnetic RAM (MRAM) and/or Dynamic RAM (DRAM) cells. Flash memory devices are described, for example, by Bez et al., in “Introduction to Flash Memory,” Proceedings of the IEEE, volume 91, number 4, April, 2003, pages 489-502, which is incorporated herein by reference. Multi-level Flash cells and devices are described, for example, by Eitan et al., in “Multilevel Flash Cells and their Trade-Offs,” Proceedings of the 1996 IEEE International Electron Devices Meeting (IEDM), New York, N.Y., pages 169-172, which is incorporated herein by reference. The paper compares several kinds of multilevel Flash cells, such as common ground, DINOR, AND, NOR and NAND cells.


NROM cells are described by Eitan et al., in “Can NROM, a 2-bit, Trapping Storage NVM Cell, Give a Real Challenge to Floating Gate Cells?” Proceedings of the 1999 International Conference on Solid State Devices and Materials (SSDM), Tokyo, Japan, Sep. 21-24, 1999, pages 522-524, which is incorporated herein by reference. NROM cells are also described by Maayan et al., in “A 512 Mb NROM Flash Data Storage Memory with 8 MB/s Data Rate”, Proceedings of the 2002 IEEE International Solid-State Circuits Conference (ISSCC 2002), San Francisco, Calif., Feb. 3-7, 2002, pages 100-101, which is incorporated herein by reference. FRAM, MRAM and PRAM cells are described, for example, by Kim and Koh in “Future Memory Technology including Emerging New Memories,” Proceedings of the 24th International Conference on Microelectronics (MIEL), Nis, Serbia and Montenegro, May 16-19, 2004, volume 1, pages 377-384, which is incorporated herein by reference.


The charge levels stored in memory cells 52 and/or the analog voltages or currents written into and read out of the memory cells are referred to herein collectively as analog values or storage values. The storage values may comprise threshold voltages, electrical charge levels, or any other suitable kind of storage values. R/W unit 56 stores data in the analog memory cells by programming the cells to assume respective memory states, which are also referred to as programming levels. The programming levels are selected from a finite set of possible levels, and each level corresponds to a certain nominal storage value. For example, a 2 bit/cell MLC can be programmed to assume one of four possible programming levels by writing one of four possible nominal storage values into the cell. Typically, R/W unit 56 converts data for storage in the memory device to analog storage values, and writes them into memory cells 52. When reading data out of array 48, R/W unit 56 converts the storage values of memory cells 52 into digital samples. Data is typically written to and read from the memory cells in groups that are referred to as pages. The R/W unit erases a block of cells 52 by applying one or more negative erasure pulses to the cells.


Some or all of the functions of SSD controller 40 may be implemented in hardware. Alternatively, SSD controller 40 may comprise a microprocessor that runs suitable software, or a combination of hardware and software elements. In some embodiments, SSD controller 40 comprises a general-purpose processor, which is programmed in software to carry out the functions described herein. The software may be downloaded to the processor in electronic form, over a network, for example, or it may, alternatively or additionally, be provided and/or stored on non-transitory tangible media, such as magnetic, optical, or electronic memory.


The configuration of FIG. 1 is an exemplary system configuration, which is shown purely for the sake of conceptual clarity. Any other suitable memory system configuration can also be used. Elements that are not necessary for understanding the principles of the present invention have been omitted from the figure for clarity. In the example system configuration shown in FIG. 1, memory devices 44 and SSD controller 40 are implemented as two separate Integrated Circuits (ICs). In alternative embodiments, however, the memory devices and the SSD controller may be integrated on separate semiconductor dies in a single Multi-Chip Package (MCP) or System on Chip (SoC), and may be interconnected by an internal bus. Further alternatively, some or all of the SSD circuitry may reside on the same die on which one or more of the memory devices are disposed. Further alternatively, some or all of the functionality of SSD controller 40 can be implemented in software and carried out by host processor 24. In some embodiments, host processor 24 and SSD controller 40 may be fabricated on the same die, or on separate dies in the same device package.


In an example configuration of array 48, memory cells 52 are arranged in multiple rows and columns, and each memory cell comprises a floating-gate transistor. The gates of the transistors in each row are connected by word lines, and the sources of the transistors in each column are connected by bit lines. The memory array is typically divided into multiple pages, i.e., groups of memory cells that are programmed and read simultaneously. Pages are sometimes sub-divided into sectors. In some embodiments, each page comprises an entire row of the array. In alternative embodiments, each row (word line) can be divided into two or more pages. For example, in some devices each row is divided into two pages, one comprising the odd-order cells and the other comprising the even-order cells. In a typical implementation, a two-bit-per-cell memory device may have four pages per row, a three-bit-per-cell memory device may have six pages per row, and a four-bit-per-cell memory device may have eight pages per row.


Erasing of cells is usually carried out in blocks that contain multiple pages. Typical memory devices may comprise several thousand erasure blocks. In some two-bit-per-cell MLC devices, each erasure block is on the order of thirty-two word lines, each comprising several tens of thousands of memory cells. Each word line of such a device is often partitioned into four pages (odd/even order cells, least/most significant bit of the cells). Three-bit-per-cell devices having thirty-two word lines per erasure block would have 192 pages per erasure block, and four-bit-per-cell devices would have 256 pages per block. Alternatively, other block sizes and configurations can also be used. Some memory devices comprise two or more separate memory cell arrays, often referred to as planes. Since each plane has a certain “busy” period between successive write operations, data can be written alternately to the different planes in order to increase programming speed.


Typically, host processor 24 reads and writes data in SSD 32 by specifying logical addresses of the data (e.g., using Logical Block Addressing—LBA). SSD controller 40 translates the logical addresses into respective physical storage locations in memory devices 44. Typically, the host processor is unaware of the actual physical storage locations in which the data is stored, and the logical-to-physical translation may change over time.
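
To illustrate this translation, here is a minimal sketch in C. The patent does not specify a mapping structure; all names here, such as phys_loc_t and l2p_remap, are hypothetical. The point is only that the host-visible LBA is an index into a table whose entries, naming a device, block and page, can be repointed over time:

```c
#include <stdint.h>

/* Hypothetical logical-to-physical (L2P) mapping sketch. This only
 * illustrates that the LBAs visible to host processor 24 are decoupled
 * from physical locations in memory devices 44, and that the mapping
 * may change over time. */

typedef struct {
    uint8_t  device; /* index of the memory device */
    uint16_t block;  /* erasure block within the device */
    uint16_t page;   /* page within the block */
} phys_loc_t;

typedef struct {
    phys_loc_t *table;    /* indexed by LBA */
    uint32_t    num_lbas; /* size of the logical address space */
} l2p_map_t;

/* Point an LBA at a new physical page, e.g., when its data is rewritten. */
void l2p_remap(l2p_map_t *map, uint32_t lba, phys_loc_t loc)
{
    if (lba < map->num_lbas)
        map->table[lba] = loc;
}

/* Resolve a host read: translate the logical address to a physical one. */
phys_loc_t l2p_lookup(const l2p_map_t *map, uint32_t lba)
{
    return map->table[lba]; /* caller ensures lba < num_lbas */
}
```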


Using Hibernation Storage Space for Storage Management Functions

Host system 20 supports at least two operational states—a normal state and a hibernation state. When operating in the normal state, host processor 24, RAM 28 and SSD 32 are connected to electrical power and are fully operational. When operating in the hibernation state, at least some of the elements of host system 20 are deactivated in order to save power. When preparing to enter the hibernation state, host processor 24 sends hibernation-related information for storage in SSD 32. Upon returning to normal operation, the hibernation-related information enables the host processor to resume operation of host system 20 from the point it began hibernating. The hibernation-related information may comprise, for example, the state and variables of the operating system and of the applications running on processor 24, information stored in RAM 28, or any other suitable information.


In some embodiments, when preparing to enter hibernation, the host processor builds a hibernation file and stores the hibernation file in SSD 32 (main storage). When returning from the hibernation state to the normal state, the host processor retrieves the hibernation file from the SSD, extracts the hibernation-related information from the file, and resumes the host system operation using the extracted information.


Typically, host processor 24 pre-assigns and reserves a certain logical storage space in SSD 32 (i.e., a certain set of logical addresses or LBAs) for the hibernation file. This space is referred to herein as “hibernation storage space.” During normal operation, the user data is not stored in this space. The hibernation storage space is reserved in order to ensure that sufficient space is available for storing the hibernation file when necessary. Any suitable memory size may be reserved. Since the hibernation storage space is specified in terms of logical addresses, the physical storage location of the hibernation file in memory devices 44 may change over time. Typically although not necessarily, the hibernation storage space is on the order of the size of RAM 28.


As can be appreciated, the hibernation storage space may occupy a considerable portion of the total storage capacity of SSD 32. (In an example embodiment, the total RAM size is on the order of 2-4 GB, and the SSD main storage capacity is on the order of 64-256 GB. In alternative embodiments, any other suitable RAM and SSD sizes can also be used.) The hibernation storage space is accessed when preparing to enter hibernation and upon returning from hibernation to normal operation, and is otherwise mostly unused during normal operation.


In some embodiments, SSD controller 40 uses at least part of the hibernation storage space in SSD 32 for performing storage management functions when operating in the normal operation state. When devices 44 comprise Flash devices, the storage management functions are sometimes referred to as “Flash management.” For example, the SSD controller may use the hibernation storage space for increasing the over-provisioning ratio, to serve as a binary cache, or for any other suitable purpose. The over-provisioning and binary cache examples are described in detail further below. These uses of the hibernation storage space are typically temporary in nature, since the SSD controller is usually requested to make the hibernation storage space available for storing the hibernation file upon entering hibernation. The embodiments described herein address mainly storage management (e.g., Flash management) functions. In alternative embodiments, however, the SSD controller may use the hibernation storage space for storing data or for any other suitable purpose when the host system operates in the normal state.



FIG. 2 is a diagram that schematically illustrates a hibernation storage space used for storage management during normal operation, in accordance with an embodiment of the present invention. As seen at the top of the figure, the overall memory space in devices 44 comprises multiple memory blocks 60. A certain portion of the total memory space is reserved as a hibernation storage space 64. The hibernation storage space may comprise any suitable number of pages or parts of pages, and may be distributed among memory blocks 60 in any suitable manner.


The bottom of FIG. 2 shows a portion 68 of hibernation storage space 64, which is used by SSD controller 40 for storage management functions. In some embodiments, SSD controller 40 uses all of space 64 for storage management. Alternatively, the SSD controller may use only part of the hibernation storage space for this purpose.


As explained above, the host processor reserves a certain set of logical addresses to serve as hibernation storage space 64, i.e., for storing the hibernation file. The physical storage location of the hibernation file in memory devices 44 may change over time. Portion 68, which is used for storage management, may also comprise a set of logical addresses whose corresponding physical storage locations may change over time.


SSD controller 40 may identify whether the host system is currently in the normal state or in the hibernation state, and/or whether the host system is currently entering or exiting the hibernation state. Moreover, the SSD controller may identify the logical addresses (e.g., LBAs) in which the hibernation-related information (e.g., hibernation file) is stored. These identification tasks can be performed based on suitable notifications from the host processor, or automatically regardless of any host notification. Several examples of such techniques are described below.


In some embodiments, SSD controller 40 identifies at least some of the (logical or physical) locations of the hibernation storage space automatically, i.e., without being notified of these locations by host processor 24. For example, the SSD controller may automatically identify the size and storage locations (e.g., LBAs) of a file named “HIBERFILE.SYS” that holds the hibernation-related information. The file name and attributes may change with the type of file system used by the host processor.


SSD controller 40 may identify the storage locations of the hibernation file HIBERFILE.SYS using various techniques. In an example embodiment, controller 40 first identifies the Master File Table (MFT) record corresponding to HIBERFILE.SYS in the root folder. The SSD controller may, for example, search exhaustively through the MFT records. This technique does not require support of NT File System (NTFS) features in the SSD controller, but on the other hand is relatively time-consuming. Alternatively, the SSD controller may open the root folder data and search this data for HIBERFILE.SYS. This technique is fast, but may require the SSD controller to support various NTFS features in order to parse the root folder data.


Having identified the MFT record corresponding to HIBERFILE.SYS, the SSD controller opens this record and parses its attached attributes. In some embodiments, the SSD controller verifies, using the record attributes, that the record in question is still valid and still references HIBERFILE.SYS (a precaution against a scenario in which the host decides to stop hibernation and lets a different file occupy this record space).


From the record attributes, the SSD controller finds the fragments (typically start/end cluster pairs) that specify the storage locations of the hibernation file HIBERFILE.SYS. In some cases, the file is not fragmented, in which case a single start/end pair would indicate the storage location of the entire file. In other cases, the file is fragmented into several fragments, in which case the MFT record would contain the descriptions of all the start/end cluster pairs specifying the file's storage locations. In some cases, the file is heavily fragmented, such that the MFT record is too small to hold all the relevant location descriptions. In these cases, the MFT record typically contains an attribute that indicates a cluster range holding the full list of start/end cluster pairs. In any of these cases, the SSD controller analyzes the content of the MFT record in order to find the storage locations assigned to the hibernation file.
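
The following C sketch condenses this lookup flow under loudly labeled assumptions: the MFT record layout is reduced to a hypothetical struct, and the helper read_mft_record() stands in for real NTFS parsing, which the patent does not spell out:

```c
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical, heavily simplified view of an MFT record; real NTFS
 * attribute parsing is considerably more involved. */
#define MAX_RUNS 16

typedef struct { uint64_t start_cluster, end_cluster; } run_t;

typedef struct {
    bool  in_use;             /* record still valid? */
    char  name[64];           /* file name attribute */
    int   num_runs;           /* 0 if the run list lives externally */
    run_t runs[MAX_RUNS];     /* start/end cluster pairs */
    run_t external_run_list;  /* cluster range holding the full list */
} mft_record_t;

/* Assumed helper: fetches record 'idx' from the on-media MFT. */
bool read_mft_record(uint64_t idx, mft_record_t *rec);

/* Exhaustively search the MFT for a valid record named HIBERFILE.SYS
 * and return its fragments, i.e., the clusters holding the file. */
int find_hiberfile_runs(uint64_t num_records, run_t *out, int max_out)
{
    mft_record_t rec;

    for (uint64_t i = 0; i < num_records; i++) {
        if (!read_mft_record(i, &rec))
            continue;
        /* Verify the record is still in use and still references
         * HIBERFILE.SYS (the host may have reused the record). */
        if (!rec.in_use || strcmp(rec.name, "HIBERFILE.SYS") != 0)
            continue;

        if (rec.num_runs == 0) {
            /* Heavily fragmented file: the record points at a cluster
             * range (external_run_list) that holds the full start/end
             * pair list; a real implementation would parse it here. */
            return 0;
        }
        int n = rec.num_runs < max_out ? rec.num_runs : max_out;
        memcpy(out, rec.runs, n * sizeof(run_t));
        return n;
    }
    return -1; /* not found */
}
```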


In alternative embodiments, the host processor may store the hibernation file using a dedicated “WRITE_HIBERNATE_DATA” command, which is different from the write command used for storing user data. In these embodiments, the SSD controller can identify the storage locations (e.g., LBAs) that are written using the “WRITE_HIBERNATE_DATA” command, and conclude that these locations correspond to the hibernation storage space. Having identified the location of the hibernation storage space, the SSD controller may use at least part of this space for storage management. Alternatively, the SSD controller may receive a notification from the host processor, indicating the logical addresses in which the hibernation file is stored.


SSD Operation Method Description


FIG. 3 is a flow chart that schematically illustrates a method for operating SSD 32, in accordance with an embodiment of the present invention. In this method, when the host system is not in hibernation, the SSD controller releases the logical storage space (LBAs in the present example) used by the hibernation file, so that it can serve other purposes. In the present embodiment, the SSD controller automatically detects situations in which the host system prepares to enter the hibernation state, by detecting multiple write commands to the hibernation storage space. Such multiple write commands are assumed to indicate that the host processor has started to copy the content of RAM 28 into the hibernation file. Upon detecting this event, the SSD controller switches to storage using a high-speed configuration, in order to speed up entry to hibernation.


The method of FIG. 3 begins with SSD controller 40 initializing following boot of the host system, at an initialization step 70. At this stage, the host system is assumed to be in the hibernation state. Thus, the SSD controller initializes an internal flag denoted HIBERNATION FLAG to TRUE.


The SSD controller checks whether the host system has ended the hibernation state, at a hibernation checking step 74. In an example embodiment, the SSD controller examines the content of memory devices 44, finds the file system tree, and in particular identifies the hibernation file HIBERFILE.SYS. The SSD controller then examines the data in the first cluster of the hibernation file, and checks whether this data is all-zeros or not. (Typically, examining the first four bytes of the first cluster is sufficient for determining whether the hibernation file is valid or not.) If the beginning of the first cluster is non-zero, then the hibernation file is valid, meaning that the host system is still in the hibernation state. If the beginning of the first cluster is all-zero, the SSD controller concludes that the host processor has marked the hibernation file as invalid, and therefore the host system is no longer in the hibernation state. Alternatively, the SSD controller may decide whether or not hibernation has ended based on a notification from the host processor, or using any other suitable method.
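
A minimal sketch of this check in C (cluster access goes through an assumed read helper; the patent does not define the interface):

```c
#include <stdbool.h>
#include <stdint.h>

/* Assumed helper: reads 'len' bytes from the start of a cluster. */
bool read_cluster(uint64_t cluster, void *buf, uint32_t len);

/* Per the heuristic above: a valid hibernation file begins with
 * non-zero data, while the host zeroes the beginning of the first
 * cluster to mark the file invalid. Examining the first four bytes
 * is typically sufficient. */
bool hibernation_file_is_valid(uint64_t first_cluster)
{
    uint8_t head[4];

    if (!read_cluster(first_cluster, head, sizeof(head)))
        return false; /* unreadable: conservatively treat as invalid */
    return head[0] || head[1] || head[2] || head[3];
}
```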


If the SSD controller concludes that the host system exited the hibernation state and entered the normal state, the SSD controller releases at least some of the logical addresses (LBAs in the present example) that were occupied by the (now invalid) hibernation file, at an LBA releasing step 78. The released LBAs can be used by the SSD controller for other purposes, e.g., for serving as additional over-provisioning space.


In some embodiments, the SSD controller determines which LBAs are occupied by the hibernation file by examining the file system information stored in memory devices 44. In alternative embodiments, the SSD controller is notified by the host processor as to the LBAs used for storing the hibernation file. The SSD controller typically finds the physical storage locations in memory devices 44 that correspond to these LBAs.


In some embodiments, the SSD controller may find the LBAs used by the hibernation file at any other suitable time, not necessarily when exiting from hibernation. Upon releasing the hibernation file LBAs, the SSD controller sets the HIBERNATION FLAG to FALSE. If the SSD controller concludes, at step 74 above, that the host system is still in hibernation, then step 78 is skipped, i.e., the LBAs of the hibernation file are not released.


The SSD controller receives and executes a write command from the host system, at a writing step 82. The SSD controller stores the data received in the write command in memory devices 44. The write command may address an LBA that is part of the hibernation storage space (i.e., an LBA that is used by the hibernation file) or an LBA that is outside the hibernation storage space. The SSD controller now checks whether the host system is in hibernation, at a state checking step 86. The SSD controller may determine the system state using any of the techniques described herein. If the host system is in hibernation, the method loops back to step 74 above.


If the host system is not in hibernation, the SSD controller evaluates a criterion for detecting whether the host system is currently preparing to enter the hibernation state. For this purpose, the SSD controller maintains a counter denoted N, which counts the number of write commands that are addressed to LBAs that belong to the hibernation storage space. If the SSD controller detects a certain number of such write commands, it concludes that the host processor has started backing-up the RAM content to the hibernation file. Upon detecting this event, the SSD controller switches to a high-speed programming configuration, in order to increase the speed at which the hibernation file is stored.


If step 86 concludes that the host system is not in hibernation, the SSD controller checks whether the LBA specified in the write command (received at step 82) is inside or outside the hibernation storage area, at an LBA checking step 90. If step 90 concludes that the data was written into the hibernation storage area, the SSD controller reduces the over-provisioning overhead, at an over-provisioning reduction step 94. The SSD controller then increments N, at an incrementing step 98. If step 90 concludes that the data was written outside the hibernation storage area, steps 94 and 98 are skipped.


The SSD controller checks whether the current value of N indicates that entry to hibernation has begun, at an entry checking step 102. In an embodiment, the SSD controller compares N to a certain threshold and concludes that the host system is preparing to enter hibernation if N exceeds the threshold. In alternative embodiments, the SSD controller may use other techniques for detecting that the host processor has begun storing the hibernation file. For example, the SSD controller may check whether one or more “WRITE_HIBERNATE_DATA” commands are accepted from the host processor. Further alternatively, any other suitable criterion can also be used.


If the SSD controller detects that the host system prepares to enter hibernation (i.e., is in the process of storing the hibernation file), the SSD controller begins storing data using a high-speed storage configuration, at a high-speed storage step 106. Storing the hibernation file at high speed is highly advantageous in many practical cases, such as when the host system enters hibernation because of low battery. Any suitable high-speed storage configuration can be used. In an example embodiment, the SSD controller normally stores data using multiple bits per cell (MLC), and at step 106 switches to store data using a single bit per cell (SLC). As another example, the SSD controller may begin storing data in parallel on a higher number of memory devices (e.g., dies) than the number of devices used for normal storage.


As yet another example, the SSD controller may switch to storing data using a SLC cache, which is later copied to MLC storage.


The method then loops back to step 82 for receiving and executing subsequent write commands from the host system. If the current value of N does not indicate entry to hibernation, step 106 is skipped, and the method loops back directly to step 82.
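
Condensing steps 82-106 above, a sketch of the write-path logic in C follows. The helper names and the threshold value are assumptions for illustration only; the patent leaves the exact entry criterion and the high-speed storage configuration open:

```c
#include <stdbool.h>
#include <stdint.h>

#define HIBER_ENTRY_THRESHOLD 8 /* illustrative value of the N threshold */

typedef enum { MODE_NORMAL_MLC, MODE_HIGH_SPEED_SLC } storage_mode_t;

/* Assumed helpers standing in for controller internals. */
bool lba_in_hibernation_space(uint32_t lba);    /* step 90 test */
void write_data(uint32_t lba, const void *buf); /* step 82 store */
void reduce_over_provisioning(void);            /* step 94 */
void set_storage_mode(storage_mode_t mode);     /* step 106 switch */

static uint32_t n_hiber_writes; /* counter N */

/* Handle one host write after state checking step 86 has concluded
 * that the host system is not in hibernation. */
void handle_write(uint32_t lba, const void *buf)
{
    write_data(lba, buf);                /* writing step 82 */

    if (lba_in_hibernation_space(lba)) { /* LBA checking step 90 */
        reduce_over_provisioning();      /* reduction step 94 */
        n_hiber_writes++;                /* incrementing step 98 */
    }

    /* Entry checking step 102: many writes into the hibernation space
     * suggest the host has started copying RAM 28 into the hibernation
     * file, so switch to a high-speed (e.g., SLC) configuration. */
    if (n_hiber_writes > HIBER_ENTRY_THRESHOLD)
        set_storage_mode(MODE_HIGH_SPEED_SLC); /* step 106 */
}
```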


Additional Embodiments and Variations

In some embodiments, when the SSD controller determines that the host system has returned to the normal state (and assuming the hibernation-related information has already been read by the host processor), the SSD controller marks the hibernation-related information as invalid. In some embodiments, after host processor 24 retrieves the hibernation file from the SSD, SSD controller 40 receives from the host processor a command instructing it to invalidate the hibernation file. The command may comprise a dedicated command that is defined specifically for invalidating the hibernation file. Alternatively, the host processor may invalidate the hibernation file using a command that is also used for other purposes, such as a TRIM command. For example, the Advanced Technology Attachment (ATA) protocol supports an “ATA DATA SET MANAGEMENT (TRIM)” command that can be used for this purpose. Other protocols may support similar commands.


Typically, the SSD controller marks the hibernation-related information as invalid, and sends an acknowledgement to the host processor. In alternative embodiments, the SSD controller can identify the storage locations used by the hibernation-related information, and mark these locations as invalid without explicit instructions from the host processor. Such a mechanism is feasible, for example, in file systems such as File Allocation Table (FAT), FAT32, New Technology File System (NTFS), EXT2, or in any other suitable file system in which the hibernation file has detectable attributes or characteristics.


The SSD controller may decide whether the hibernation-related information is valid or invalid using any suitable method. For example, in some embodiments the hibernation file is stored as a linked list of LBAs. In this configuration, each LBA comprises a portion of the hibernation-related information, and a link to the next LBA. As noted above, the beginning of the first cluster of the hibernation file is all-zero if the hibernation file is invalid, and not all-zero if the hibernation file is valid. Thus, by checking the value at the beginning of the first cluster in the first LBA of the hibernation file, the SSD controller can determine whether the hibernation-related information is valid or not. Alternatively, any other suitable technique can be used.
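
For the linked-list layout just described, a short traversal sketch (the end-of-chain marker and the read_next_lba_link() helper are assumptions; the patent does not define the on-media link format):

```c
#include <stdint.h>

#define LBA_CHAIN_END UINT32_MAX /* assumed end-of-chain marker */

/* Assumed helper: reads the "next LBA" link stored in an LBA's data. */
uint32_t read_next_lba_link(uint32_t lba);

/* Walk the linked chain of LBAs that stores the hibernation file,
 * invoking a callback per LBA (e.g., to release or invalidate it). */
void walk_hibernation_chain(uint32_t first_lba,
                            void (*visit)(uint32_t lba))
{
    uint32_t lba = first_lba;

    while (lba != LBA_CHAIN_END) {
        visit(lba);
        lba = read_next_lba_link(lba);
    }
}
```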


In some embodiments, the SSD controller detects certain data that is not related to hibernation but was nevertheless stored in the hibernation storage space by the host operating system. This sort of data will typically not be marked as invalid upon readout of the hibernation file, and the SSD controller will treat it similarly to user data. The space occupied by such data is typically not used for management functions.


In some embodiments, SSD controller 40 stores and/or retrieves the hibernation file using programming/readout operations that are different from the operations used for storing and retrieving user data. These programming/readout operations are typically faster than the respective operations used for user data, and thus increase the speed of switching to and from hibernation. Other performance parameters, such as power consumption, may be compromised in these operations, in order to increase speed.


In some embodiments, SSD controller 40 compresses the hibernation-related information before storing it in the hibernation storage space. The host processor may not be aware of this compression. The compression enables the SSD controller to reduce the size of the reserved hibernation storage space, and thus free memory resources for storing user data or for any other purpose. In addition, the time needed to store and retrieve the hibernation file can be shortened considerably. In alternative embodiments, compression of the hibernation-related information can be performed by the host processor operating system. Since the content of RAM 28 typically comprises executable code and data, compression ratios on the order of 30-40% can be achieved, providing a corresponding reduction in memory utilization and storage/retrieval time.


As explained above, in some embodiments the SSD controller identifies the LBAs occupied by the hibernation file HIBERFILE.SYS by scanning the file system information of the host system as it is stored in memory devices 44, identifying the linked chain of LBAs that stores this file, and checking whether the beginning of the first cluster of the hibernation file is zero (examining the first four bytes will typically suffice to determine if the hibernation file is valid or not, since usually the first 512 bytes are set to zero when the file is invalid). In some embodiments, the SSD controller can use this technique in order to identify other files that were declared invalid by the host system's file system. Once identified, the memory space occupied by such files can be released.


Example Storage Management Functions Using the Hibernation Storage Space

As explained above, SSD controller 40 can use some or all of the hibernation storage space for performing storage management functions, as well as for storing other kinds of data and for other purposes. For example, the SSD controller may use the hibernation storage space to increase the over-provisioning overhead of the SSD. Over-provisioning is a mechanism deployed in Flash devices and other analog memory cell devices due to the fact that (1) data is written to the device page by page, (2) memory cells cannot be overwritten and need to be erased first, and (3) memory cells are erased in memory block units, each block comprising multiple pages.


When using over-provisioning, the actual physical memory size that is available for storing data is larger than the specified memory capacity (i.e., the size of the address space accessible to the host processor). The ratio between the actual physical capacity and the specified capacity is typically defined as the over-provisioning ratio. Consider, for example, an SSD that is operating at an over-provisioning ratio of 15% and is fully-programmed from the point of view of the host processor. In this scenario, each memory block in the SSD will be, on average, only 85% programmed. On average, 15% of each memory block will comprise invalid data or un-programmed memory cells.


In order to erase memory blocks and make them available for programming, the SSD controller copies valid data from partially-programmed blocks, so as to condense the data and clear memory blocks for erasure. This process is sometimes referred to as garbage collection. As can be appreciated, the SSD needs to perform a number of programming operations for each new page being programmed. The average number of programming operations per new page (sometimes referred to as write amplification) decreases as the over-provisioning ratio increases. Thus, the over-provisioning ratio has a considerable impact on the achievable SSD programming throughput. This effect is particularly noticeable when the SSD is fully or nearly fully programmed.
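
As a back-of-envelope illustration of this dependence (a simplified worst-case model, not a formula given in the patent): if valid data is spread uniformly across blocks, reclaiming one block copies its valid fraction (1 - p) and frees a fraction p for new data, so roughly 1/p program operations are performed per new host page:

```c
#include <stdio.h>

/* Simplified worst-case write-amplification model. In a fully
 * programmed SSD with over-provisioning ratio p, each block averages
 * a valid-data fraction of (1 - p); reclaiming it copies that fraction
 * and frees a fraction p, giving roughly 1/p programs per host page. */
static double write_amplification(double op_ratio)
{
    return 1.0 / op_ratio;
}

int main(void)
{
    /* Raising over-provisioning from 15% to 30%, e.g., by adding the
     * released hibernation space, roughly halves write amplification. */
    printf("WA at 15%% OP: %.2f\n", write_amplification(0.15)); /* ~6.67 */
    printf("WA at 30%% OP: %.2f\n", write_amplification(0.30)); /* ~3.33 */
    return 0;
}
```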


In some embodiments, SSD controller 40 uses the hibernation storage space to increase the over-provisioning ratio of the SSD. In other words, during normal operation, there is no need to reserve memory space for hibernation-related information, and this space can be used as extra over-provisioning space. As a result, the SSD programming throughput can be significantly increased. When preparing to enter hibernation, the hibernation storage area can no longer be used for over-provisioning, and the over-provisioning ratio is reduced accordingly.


Additionally or alternatively, SSD controller 40 can use some or all of the hibernation storage space as a write cache memory. In these embodiments, SSD controller 40 accepts user data for storage from host processor 24, caches the user data temporarily in the hibernation storage space, and later copies the cached data to long-term storage locations outside the hibernation storage space. Write caching can be used in various ways to improve programming performance. Storage schemes that use write caching are described, for example, in U.S. patent application Ser. Nos. 12/186,867, 12/332,370, 12/551,567 and 12/579,430, which are assigned to the assignee of the present patent application and whose disclosures are incorporated herein by reference.


In some embodiments, the SSD controller caches the user data in the hibernation storage space using a storage configuration that is optimized for throughput, possibly at the expense of other performance parameters such as retention or density. Later, the SSD controller copies the cached data to its long-term storage locations outside the hibernation storage space. The SSD controller may store the data in the long-term storage locations using a different storage configuration, typically having lower throughput. The long-term storage configuration may be optimized for retention and/or density. For example, the SSD controller may cache the user data using only two programming levels per memory cell (i.e., at a density of one bit per cell), and later store the data using a higher number of programming levels per memory cell (i.e., at a density of more than one bit per cell). Alternatively, any other caching and/or long-term storage configuration can be used.
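
A minimal sketch of this two-stage scheme in C (one illustrative reading of the caching flow; the function names and the per-LBA granularity are assumptions):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Assumed low-level operations. */
bool slc_write(uint32_t cache_lba, const void *buf, size_t len); /* 1 bit/cell, fast */
bool slc_read(uint32_t cache_lba, void *buf, size_t len);
bool mlc_write(uint32_t lba, const void *buf, size_t len);       /* >1 bit/cell, dense */

/* Stage 1: absorb a host write at high throughput into SLC-configured
 * cells inside the hibernation storage space. */
bool cache_user_data(uint32_t cache_lba, const void *buf, size_t len)
{
    return slc_write(cache_lba, buf, len);
}

/* Stage 2 (background): fold cached data into its long-term MLC
 * location outside the hibernation storage space, trading throughput
 * for density and retention. */
bool flush_cached_data(uint32_t cache_lba, uint32_t long_term_lba,
                       void *scratch, size_t len)
{
    if (!slc_read(cache_lba, scratch, len))
        return false;
    return mlc_write(long_term_lba, scratch, len);
}
```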


In some embodiments, the SSD controller uses the hibernation storage space for storing data, e.g., user data or management data, while the host system is not in hibernation. When the host system prepares to enter hibernation, the SSD controller vacates this storage space, e.g., by copying the data to other storage locations outside the hibernation storage space, or by compressing the data. The SSD controller vacates the storage space, for example, in response to a notification from the host system indicating entry to hibernation.


It will be appreciated that the embodiments described above are cited by way of example, and that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention includes both combinations and sub-combinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art.

Claims
  • 1. A method for data storage, comprising: in a host system that operates alternately in a normal state and a hibernation state, reserving a hibernation storage space in a non-volatile storage device to store hibernation-related information in preparation for entering the hibernation state, wherein the non-volatile storage device includes multiple memory blocks allocated as a user data storage space, over-provisioning memory, and the hibernation storage space; and while the host system is operating in the normal state: allocating at least a portion of the hibernation storage space as over-provisioning memory, thereby allowing valid data to be copied from partially-programmed memory blocks in the user data storage space to the at least a portion of the hibernation storage space, and the valid data from the at least a portion of the hibernation storage space to be copied to the user data storage space subsequent to an erasure operation being performed on one or more locations of the user data storage space.
  • 2. The method according to claim 1, further comprising allocating at least another portion of the hibernation storage space as a cache, and caching user data accepted from the host system in the at least another portion of the hibernation storage space, and subsequently copying the cached user data to storage locations outside the hibernation storage space.
  • 3. The method according to claim 2, wherein caching the user data comprises writing the user data to the hibernation storage space at a first storage throughput, and wherein copying the cached user data comprises storing the user data outside the hibernation storage space at a second storage throughput that is lower than the first storage throughput.
  • 4. The method according to claim 1, wherein reserving the hibernation storage space comprises allocating a set of storage locations of the user data storage space by the host system to serve as the hibernation storage space, and wherein copying valid data to the at least a portion of the hibernation storage space comprises identifying at least part of the storage locations in the set, and using the identified storage locations.
  • 5. The method according to claim 4, wherein identifying the storage locations in the set comprises receiving a notification from the host system indicative of the set of storage locations.
  • 6. The method according to claim 4, wherein identifying the storage locations in the set comprises automatically identifying a file holding the hibernation-related information in a file system of the host system.
  • 7. The method according to claim 4, further comprising, in preparation for entering the hibernation state, receiving the hibernation-related information from the host system using one or more dedicated hibernation write commands, wherein identifying the storage locations in the set comprises detecting the storage locations written to using the dedicated hibernation write commands.
  • 8. The method according to claim 1, further comprising detecting that the host system is preparing to enter the hibernation state.
  • 9. The method according to claim 8, further comprising, responsively to detecting that the host system is preparing to enter the hibernation state, switching to store the hibernation-related information using a high-speed storage configuration.
  • 10. The method according to claim 8, wherein detecting that the host system is preparing to enter the hibernation state comprises detecting one or more dedicated hibernation write commands received from the host system.
  • 11. The method according to claim 8, wherein detecting that the host system is preparing to enter the hibernation state comprises detecting one or more write commands to storage locations belonging to the hibernation storage space.
  • 12. The method according to claim 8, wherein detecting that the host system is preparing to enter the hibernation state comprises receiving a notification from the host system indicating a preparation to enter the hibernation state.
  • 13. The method according to claim 1, further comprising detecting that the host system is preparing to exit the hibernation state.
  • 14. The method according to claim 13, wherein detecting that the host system is preparing to exit the hibernation state comprises detecting one or more read commands from storage locations belonging to the hibernation storage space.
  • 15. The method according to claim 13, wherein detecting that the host system is preparing to exit the hibernation state comprises receiving a notification from the host system indicating a preparation to exit the hibernation state.
  • 16. The method according to claim 1, further comprising copying valid data from the partially-programmed memory blocks in the user data storage space to the at least the portion of the hibernation storage space only responsively to verifying that the hibernation-related information is invalid.
  • 17. The method according to claim 1, further comprising, after the host system exits from the hibernation state and retrieves the hibernation-related information from the non-volatile storage device, marking the hibernation-related information as invalid.
  • 18. The method according to claim 1, further comprising, in preparation for entering the hibernation state, storing part of the hibernation-related information in the at least a portion of the hibernation storage space.
  • 19. The method according to claim 1, further comprising writing user data at a first throughput, and, in preparation for entering the hibernation state, writing the hibernation-related information to the hibernation storage space at a second throughput, higher than the first throughput.
  • 20. The method according to claim 1, further comprising, in preparation for entering the hibernation state, compressing the hibernation-related information and storing the compressed hibernation-related information in the hibernation storage space.
  • 21. The method according to claim 1, further comprising receiving a notification from the host system indicating that the host system is preparing to enter the hibernation state, and vacating the at least a portion of the hibernation storage space in response to the notification.
  • 22. Apparatus for data storage, comprising: a non-volatile memory including multiple memory blocks; and a memory controller coupled to the non-volatile memory and configured to: store data for a host system that operates alternately in a normal state and a hibernation state; allocate the multiple memory blocks as a user data storage space, over-provisioning memory, and to reserve a hibernation storage space in the non-volatile memory to store hibernation-related information in preparation for entering the hibernation state; and while the host system is operating in the normal state, the memory controller is configured to allocate at least a portion of the hibernation storage space as over-provisioning memory, thereby allowing valid data to be copied from partially-programmed memory blocks in the user data storage space to the at least a portion of the hibernation storage space, and the valid data from the at least a portion of the hibernation storage space to be copied to the user data storage space subsequent to an erasure operation being performed on one or more locations of the user data storage space.
  • 23. The apparatus according to claim 22, wherein the memory controller is configured to allocate at least another portion of the hibernation storage space as a cache, and to cache user data accepted from the host system in the at least another portion of the hibernation storage space, and to subsequently copy the cached user data to storage locations outside the hibernation storage space.
  • 24. The apparatus according to claim 23, wherein the memory controller is configured to cache the user data in the hibernation storage space at a first storage throughput, and to store the user data outside the hibernation storage space at a second storage throughput that is lower than the first storage throughput.
  • 25. The apparatus according to claim 22, wherein a set of storage locations of the user data storage space is identified by the host system to serve as the hibernation storage space, and wherein the memory controller is configured to identify at least part of the storage locations in the set, and to manage the storage using the identified storage locations.
  • 26. The apparatus according to claim 25, wherein the memory controller is configured to identify the storage locations in the set by receiving a notification from the host system indicative of the set of storage locations.
  • 27. The apparatus according to claim 25, wherein the memory controller is configured to identify the storage locations in the set by automatically identifying a file holding the hibernation-related information in a file system of the host system.
  • 28. The apparatus according to claim 25, wherein the memory controller is configured to receive the hibernation-related information from the host system using one or more dedicated hibernation write commands, and to identify the storage locations in the set by detecting the storage locations written to using the dedicated hibernation write commands.
  • 29. The apparatus according to claim 22, wherein the memory controller is configured to detect that the host system is preparing to enter the hibernation state.
  • 30. The apparatus according to claim 29, wherein the memory controller is configured to switch to store the hibernation-related information using a high-speed storage configuration responsively to detecting that the host system is preparing to enter the hibernation state.
  • 31. The apparatus according to claim 29, wherein the memory controller is configured to detect that the host system is preparing to enter the hibernation state by detecting one or more dedicated hibernation write commands received from the host system.
  • 32. The apparatus according to claim 29, wherein the memory controller is configured to detect that the host system is preparing to enter the hibernation state by detecting one or more write commands to storage locations belonging to the hibernation storage space.
  • 33. The apparatus according to claim 29, wherein the memory controller is configured to detect that the host system is preparing to enter the hibernation state by receiving a notification from the host system indicating a preparation to enter the hibernation state.
  • 34. The apparatus according to claim 22, wherein the memory controller is configured to detect that the host system is preparing to exit the hibernation state.
  • 35. The apparatus according to claim 34, wherein the memory controller is configured to detect that the host system is preparing to exit the hibernation state by detecting one or more read commands from storage locations belonging to the hibernation storage space.
  • 36. The apparatus according to claim 34, wherein the memory controller is configured to detect that the host system is preparing to exit the hibernation state by receiving a notification from the host system indicating a preparation to exit the hibernation state.
  • 37. The apparatus according to claim 22, wherein the memory controller is configured to copy valid data from the partially-programmed memory blocks in the user data storage space to the at least a portion of the hibernation storage space only responsively to verifying that the hibernation-related information is invalid.
  • 38. The apparatus according to claim 22, wherein, after the host system exits from the hibernation state and retrieves the hibernation-related information from the non-volatile memory, the memory controller is configured to mark the hibernation-related information as invalid.
  • 39. The apparatus according to claim 22, wherein, in preparation for entering the hibernation state, the memory controller is configured to store part of the hibernation-related information in the at least a portion of the hibernation storage space.
  • 40. The apparatus according to claim 22, wherein the memory controller is configured to write user data at a first throughput, and to write the hibernation-related information to the hibernation storage space at a second throughput, higher than the first throughput.
  • 41. The apparatus according to claim 22, wherein, in preparation for entering the hibernation state, the memory controller is configured to compress the hibernation-related information and store the compressed hibernation-related information in the hibernation storage space.
  • 42. The apparatus according to claim 22, wherein the memory controller is configured to receive a notification from the host system indicating that the host system is preparing to enter the hibernation state, and to vacate the at least a portion of the hibernation storage space in response to the notification.
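
The following minimal C sketch is an editorial illustration rather than part of the patent disclosure; every identifier in it (hib_region_t, op_pool_add, and so on) is hypothetical. It shows one way a controller might realize the mechanism of claims 1 and 22: while the host operates in the normal state, the reserved hibernation region is lent to the over-provisioning pool, and the region is vacated again when hibernation becomes imminent (claims 21 and 42), after any valid data relocated into it has been copied back to the user data storage space.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    typedef struct {
        size_t first_block;  /* first physical block of the reserved region */
        size_t num_blocks;   /* size of the reserved hibernation region */
        bool   lent_to_op;   /* region currently serving as over-provisioning? */
        bool   image_valid;  /* stored hibernation image still valid? */
    } hib_region_t;

    /* Stubs standing in for the controller's over-provisioning pool. */
    static void op_pool_add(size_t first, size_t n)
    {
        printf("over-provisioning pool grows by %zu blocks at block %zu\n", n, first);
    }

    static void op_pool_drain(size_t first, size_t n)
    {
        /* A real controller would copy valid data relocated into the region
         * back to the user data storage space here, then erase the region. */
        printf("vacating %zu blocks at block %zu for the hibernation image\n", n, first);
    }

    /* Normal state: reuse the region only while the stored image is stale
     * (cf. claims 16 and 37). */
    static void on_normal_state(hib_region_t *r)
    {
        if (!r->image_valid && !r->lent_to_op) {
            op_pool_add(r->first_block, r->num_blocks);
            r->lent_to_op = true;
        }
    }

    /* Hibernation entry detected or notified: give the region back. */
    static void on_hibernation_imminent(hib_region_t *r)
    {
        if (r->lent_to_op) {
            op_pool_drain(r->first_block, r->num_blocks);
            r->lent_to_op = false;
        }
    }

    int main(void)
    {
        hib_region_t r = { 4096, 512, false, false };
        on_normal_state(&r);         /* region doubles as over-provisioning */
        on_hibernation_imminent(&r); /* region is vacated before the image is written */
        return 0;
    }

Gating the reuse on image_valid mirrors claims 16 and 37, under which the hibernation space is reclaimed only after the stored hibernation-related information has been verified as invalid, for example after the host has resumed and the image has been marked stale per claims 17 and 38.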
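
Claims 23 and 24 use another portion of the hibernation region as a write cache: user data is absorbed there at a first, higher storage throughput and later destaged to its long-term location at a second, lower throughput. The sketch below is likewise hypothetical (cache_slot_t, cache_write and cache_destage are invented names); it models the cache as a small slot array, whereas a real controller would program the region in a fast mode, for instance with fewer bits per cell, and destage during idle periods.

    #include <stdint.h>
    #include <stdio.h>

    #define CACHE_SLOTS 4

    typedef struct {
        uint64_t lba;   /* host logical address of the cached data */
        int      used;  /* slot occupied? */
    } cache_slot_t;

    /* Cache slots living in a portion of the hibernation storage space. */
    static cache_slot_t cache[CACHE_SLOTS];

    /* Fast path: absorb a host write into the hibernation-region cache. */
    static int cache_write(uint64_t lba)
    {
        for (int i = 0; i < CACHE_SLOTS; i++) {
            if (!cache[i].used) {
                cache[i].lba = lba;
                cache[i].used = 1;
                printf("cached LBA %llu in hibernation region (fast mode)\n",
                       (unsigned long long)lba);
                return 0;
            }
        }
        return -1; /* cache full: write directly to the user data storage space */
    }

    /* Background path: destage cached data to its normal location. */
    static void cache_destage(void)
    {
        for (int i = 0; i < CACHE_SLOTS; i++) {
            if (cache[i].used) {
                printf("destaging LBA %llu to user data storage space (slow mode)\n",
                       (unsigned long long)cache[i].lba);
                cache[i].used = 0;
            }
        }
    }

    int main(void)
    {
        cache_write(12345);
        cache_write(67890);
        cache_destage(); /* runs during idle time, and before hibernation entry */
        return 0;
    }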
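
Claims 14, 15 and 29 through 36 let the controller infer hibernation-state transitions from the command stream instead of, or in addition to, explicit host notifications. A hedged sketch under that reading, again with invented names: a write targeting the known hibernation LBA range is taken as a hint that the host is entering hibernation, and a read from that range as a hint that it is resuming.

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef struct {
        uint64_t hib_start;  /* first LBA of the hibernation storage space */
        uint64_t hib_end;    /* last LBA, inclusive */
    } hib_range_t;

    typedef enum { HINT_NONE, HINT_ENTERING_HIB, HINT_EXITING_HIB } hib_hint_t;

    static bool in_hib_range(const hib_range_t *r, uint64_t lba)
    {
        return lba >= r->hib_start && lba <= r->hib_end;
    }

    /* Called from the command dispatcher for every host read or write. */
    static hib_hint_t classify(const hib_range_t *r, bool is_write, uint64_t lba)
    {
        if (!in_hib_range(r, lba))
            return HINT_NONE;
        return is_write ? HINT_ENTERING_HIB : HINT_EXITING_HIB; /* claims 32, 14 */
    }

    int main(void)
    {
        hib_range_t r = { 1000000, 1999999 };
        printf("%d\n", classify(&r, true,  1000123)); /* entering hint */
        printf("%d\n", classify(&r, false, 1000123)); /* exiting hint */
        printf("%d\n", classify(&r, true,  42));      /* unrelated traffic */
        return 0;
    }

On an entry hint the controller could additionally switch to the high-speed storage configuration of claim 30; dedicated hibernation write commands or host notifications (claims 31, 33 and 35) would make the same transition explicit rather than inferred.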
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Patent Application 61/293,676, filed Jan. 10, 2010, and U.S. Provisional Patent Application 61/324,429, filed Apr. 15, 2010, whose disclosures are incorporated herein by reference.

Provisional Applications (2)
Number Date Country
61/324,429 Apr 2010 US
61/293,676 Jan 2010 US