Controlling memory readout reliability and throughput by adjusting distance between read thresholds

Information

  • Patent Grant
  • Patent Number
    11,556,416
  • Date Filed
    Wednesday, December 22, 2021
  • Date Issued
    Tuesday, January 17, 2023
Abstract
An apparatus for data storage includes an interface and a processor. The interface is configured to communicate with a memory device that includes (i) a plurality of memory cells and (ii) a data compression module. The processor is configured to determine a maximal number of errors that are required to be corrected by applying a soft decoding scheme to data retrieved from the memory cells, and based on the maximal number of errors, to determine an interval between multiple read thresholds for reading Code Words (CWs) stored in the memory cells for processing by the soft decoding scheme, so as to meet the following conditions: (i) the soft decoding scheme achieves a specified decoding capability requirement, and (ii) a compression rate of the compression module, when applied to confidence levels corresponding to readouts of the CWs, achieves a specified readout throughput requirement.
Description
TECHNICAL FIELD

Embodiments described herein relate generally to data storage, and particularly to methods and systems for controlling memory readout reliability and throughput by adjusting a distance between read thresholds.


BACKGROUND

In various storage systems, a memory controller stores data in memory cells of a memory device. Upon reading data from the memory cells, the memory device may send to the memory controller confidence levels associated with the data bits read, to be used in soft decoding. The memory device may send the confidence levels to the memory controller in a compressed form, to reduce bandwidth on the channel between the memory device and the memory controller.


Methods for transferring compressed confidence levels are known in the art. For example, U.S. Pat. No. 9,229,861 describes a method for data storage that includes storing data in a group of analog memory cells by writing respective input storage values to the memory cells in the group. After storing the data, respective output storage values are read from the analog memory cells in the group. Respective confidence levels of the output storage values are estimated, and the confidence levels are compressed. The output storage values and the compressed confidence levels are transferred from the memory cells over an interface to a memory controller.


SUMMARY

An embodiment that is described herein provides an apparatus for data storage that includes an interface and a processor. The interface is configured to communicate with a memory device that includes (i) a plurality of memory cells and (ii) a data compression module. The processor is configured to determine a maximal number of errors that are required to be corrected by applying a soft decoding scheme to data retrieved from the memory cells, and based on the maximal number of errors, to determine an interval between multiple read thresholds for reading Code Words (CWs) stored in the memory cells for processing by the soft decoding scheme, so as to meet the following conditions: (i) the soft decoding scheme achieves a specified decoding capability requirement, and (ii) a compression rate of the compression module, when applied to confidence levels corresponding to readouts of the CWs, achieves a specified readout throughput requirement.


In some embodiments, the processor is configured to determine the interval by estimating multiple attainable compression rates for different respective settings of the interval, and to select a setting of the interval that meets the conditions. In other embodiments, the processor is configured to determine the interval for maximizing the readout throughput from the memory device. In yet other embodiments, the processor is configured to determine the interval for minimizing a probability of decoding failure in decoding CWs using the soft decoding scheme.


In an embodiment, the processor is configured to decide to apply to subsequent CWs read from the memory cells a hard decoding scheme or the soft decoding scheme, based on an average number of errors detected in previously read CWs. In another embodiment, the processor is configured to decide to apply the soft decoding scheme to subsequent CWs read from the memory cells, in response to detecting that a first readout throughput achievable using hard decoding is smaller than a second readout throughput achievable using soft decoding with confidence levels that were compressed by the compression module. In yet another embodiment, the processor is configured to set a data rate of the interface depending on the compression rate being configured.


In some embodiments, the processor is configured to identify low parallelism random readout operations that are not constrained by a data rate of the interface, and to set the compression module so as not to compress confidence levels of the identified readout operations. In other embodiments, the memory cells belong to multiple dies, and the processor is configured to read compressed confidence levels from a first die among the multiple dies while one or more other dies among the multiple dies are occupied in compressing local confidence levels. In yet other embodiments, the data compression module supports multiple compression configurations, and the processor is configured to select a compression configuration among the supported compression configurations that meets the readout throughput requirement.


In an embodiment, the multiple compression configurations have multiple respective constant compression rates. In another embodiment, the processor is configured to configure the data compression module to produce compressed confidence levels using a variable-rate compression configuration, and to receive the compressed confidence levels via the interface in multiple data segments having respective data lengths, in accordance with the variable-rate compression configuration. In yet another embodiment, the compression module supports a lossy compression scheme, and the processor is configured to estimate the maximal number of errors, depending on a number of errors contributed by the lossy compression scheme. In further yet another embodiment, the processor is configured to determine the interval so as to achieve a specified tradeoff between soft decoding capability and readout throughput.


There is additionally provided, in accordance with an embodiment that is described herein, a method for data storage, including, in a memory controller that communicates with a memory device that includes (i) a plurality of memory cells and (ii) a data compression module, determining a maximal number of errors that are required to be corrected by applying a soft decoding scheme to data retrieved from the memory cells. Based on the maximal number of errors, an interval between multiple read thresholds for reading Code Words (CWs) stored in the memory cells for processing by the soft decoding scheme is determined, so as to meet the following conditions: (i) the soft decoding scheme achieves a specified decoding capability requirement, and (ii) a compression rate of the compression module, when applied to confidence levels corresponding to readouts of the CWs, achieves a specified readout throughput requirement.


There is additionally provided, in accordance with an embodiment that is described herein, an apparatus for data storage, including an interface and a processor. The interface is configured to communicate with a memory device that includes a plurality of memory cells. The processor is configured to determine a maximal number of errors that are required to be corrected by applying a soft decoding scheme to data retrieved from the memory cells, and based on the maximal number of errors, to determine an interval between multiple read thresholds for reading Code Words (CWs) stored in the memory cells for processing by the soft decoding scheme, so that the soft decoding scheme achieves a specified decoding capability requirement.


In some embodiments, the processor is configured to determine the interval so that the soft decoding scheme aims to correct the maximal number of errors with a lowest decoding failure rate. In other embodiments, the processor is configured to determine the maximal number of errors by modeling underlying voltage distributions as Gaussian distributions and calculating the maximal number of errors based on the estimated Gaussian distributions. In yet other embodiments, the processor is configured to model the Gaussian distributions by determining a number of memory cells that fall between adjacent read thresholds, and calculating a variance parameter of the Gaussian distributions based on the estimated number of memory cells.


In an embodiment the processor is configured to determine the maximal number of errors by retrieving a CW from the memory cells using a single read threshold, decoding the retrieved CW using a hard decoding scheme for producing a decoded CW, and in response to detecting that the CW is successfully decodable using the hard decoding scheme, to calculate the maximal number of errors by comparing between the retrieved CW and the decoded CW. In another embodiment, the processor is configured to determine the interval by mapping the maximal number of errors into the interval using a predefined function. In yet another embodiment, the predefined function is based on finding, for selected numbers of errors, respective intervals that aim to maximize mutual information measures between CWs as stored in the memory cells and respective readouts of the CWs retrieved from the memory cells.


There is additionally provided, in accordance with an embodiment that is described herein, a method for data storage, including, in a memory controller that communicates with a memory device that includes a plurality of memory cells, determining a maximal number of errors that are required to be corrected by applying a soft decoding scheme to data retrieved from the memory cells. Based on the maximal number of errors, an interval between multiple read thresholds for reading Code Words (CWs) stored in the memory cells for processing by the soft decoding scheme is determined, for achieving a specified decoding capability requirement.


There is additionally provided, in accordance with an embodiment that is described herein, an apparatus for data storage, including an interface and a processor. The interface is configured to communicate with a memory device that includes (i) a plurality of memory cells and (ii) a data compression module. The processor is configured to select an interval between multiple read thresholds for reading Code Words (CWs) stored in the memory cells for processing by a soft decoding scheme, based on the selected interval, to estimate statistical properties of confidence levels corresponding to readouts of the CWs, and based on the estimated statistical properties, to determine an attainable compression rate for compressing the confidence levels, wherein the attainable compression rate dictates a corresponding attainable readout throughput, and to configure the compression module in accordance with the attainable compression rate for transmitting the compressed confidence levels at the attainable readout throughput.


In some embodiments, the processor is configured to estimate the statistical properties by estimating a ratio between a first number of the confidence levels indicative of a low confidence level and a second overall number of the confidence levels. In other embodiments, the processor is configured to determine the attainable compression rate by mapping the ratio into the attainable compression rate using a predefined function.


There is additionally provided, in accordance with an embodiment that is described herein, a method for data storage, including, in a memory controller that communicates with a memory device that includes (i) a plurality of memory cells and (ii) a data compression module, selecting an interval between multiple read thresholds for reading Code Words (CWs) stored in the memory cells for processing by a soft decoding scheme. Based on the selected interval, statistical properties of confidence levels corresponding to readouts of the CWs are estimated. Based on the estimated statistical properties, an attainable compression rate for compressing the confidence levels is determined, wherein the attainable compression rate dictates a corresponding attainable readout throughput. The compression module is configured in accordance with the attainable compression rate for transmitting the compressed confidence levels at the attainable readout throughput.


These and other embodiments will be more fully understood from the following detailed description of the embodiments thereof, taken together with the drawings in which:





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram that schematically illustrates a memory system, in accordance with an embodiment that is described herein;



FIG. 2 is a flow chart that schematically illustrates a method for adjusting the interval between read thresholds for meeting soft decoding and readout throughput requirements, in accordance with an embodiment that is described herein;



FIG. 3 is a diagram that schematically illustrates graphs depicting attainable readout throughputs corresponding to stress levels applied to the memory device, in accordance with an embodiment that is described herein;



FIG. 4 is a flow chart that schematically illustrates a method for optimizing soft decoding capabilities by adapting an interval between read thresholds, in accordance with an embodiment that is described herein;



FIG. 5 is a flow chart that schematically illustrates a method for determining an interval between read thresholds that is optimal for a given stress level, in accordance with an embodiment that is described herein;



FIG. 6 is a flow chart that schematically illustrates a method for estimating the attainable compression rate of confidence levels depending on the interval between read thresholds, in accordance with an embodiment that is described herein; and



FIG. 7 is a diagram that schematically illustrates efficient scheduling in sending hard data and compressed confidence levels from two dies to the memory controller over a common channel, in accordance with an embodiment that is described herein.





DETAILED DESCRIPTION OF EMBODIMENTS
Overview

Embodiments that are described herein provide methods and systems for controlling readout reliability and throughput of memory cells, by adjusting a distance between read thresholds.


Memory cells in a memory device may become stressed for various reasons such as aging, the number of program/erase cycles applied, retention, disturbances from neighboring memory cells and the like. As the stress level to which the memory cells are subjected increases, readout reliability degrades, and data retrieved from these memory cells may contain a larger number of errors.


To mitigate readout errors, the data is typically stored in the memory cells encoded using a suitable Error Correction Code (ECC) and decoded using the ECC upon reading to recover the unencoded data. Decoding the ECC may be carried out using various types of decoding schemes, e.g., a hard decoding scheme or a soft decoding scheme. In hard decoding, for a given position of the read threshold, a data unit is read once from a group of memory cells using the given read threshold. In soft decoding, multiple readouts are read from the same group of memory cells using respective multiple read thresholds in the vicinity of the given read threshold, and each readout is assigned corresponding confidence levels that may improve soft decoding performance. Soft decoding schemes typically have higher decoding capabilities than hard decoding schemes, meaning that soft decoding can correct a higher number of errors in a read data unit compared to hard decoding. The decoding capabilities of a soft decoding scheme depend on the positions of the read thresholds used, and more specifically on the distance between adjacent read thresholds, which distance is also referred to herein as a “sampling interval” or simply “interval” for brevity.


Typically, a larger sampling interval between the read thresholds results in better soft decoding performance. Increasing the sampling interval beyond a certain point may, however, degrade the performance of the soft decoding scheme.


The confidence levels associated respectively with the multiple readouts are sent to the memory controller to be used by the ECC soft decoder, which may reduce the readout throughput over the interface between the memory controller and the memory device significantly. Methods for reducing the amount of data sent over the interface by applying data compression to the confidence levels on the memory side and de-compression in the memory controller side are described, for example, in U.S. Pat. No. 9,229,861, whose disclosure is incorporated herein by reference. (In the event of any inconsistencies between any incorporated document and this document, it is intended that this document control.)


As noted above, soft decoding performance and the attainable compression ratio of the confidence levels are affected by the sampling interval between the read thresholds. The attainable compression rate of the confidence levels is also affected by the number of readout errors, which may alter the statistical properties of the confidence levels, and therefore also their attainable compressibility.


As the sampling interval increases, the soft decoder may be able to correct a larger number of errors, but the attainable compression rate of the confidence levels may decrease, and vice versa. A tradeoff thus exists between readout throughput (or compression rate) and the soft decoding performance that may be measured, for example, by the number of errors that can be corrected using soft decoding, e.g., with a lowest decoding failure rate.


As will be described in detail below, in the disclosed techniques, the tradeoff between soft decoding performance and readout throughput can be controlled by adapting the sampling interval between the read thresholds used for the soft decoding. As a result, when the memory cells experience low levels of stress, modest error correction capabilities may be sufficient, which allows achieving a relatively high compression rate and therefore a high readout throughput. When the memory cells experience high levels of stress, larger soft decoding capabilities are required, in which case a low compression rate and therefore lower readout throughput may be attainable.


Consider an embodiment of an apparatus for data storage, comprising an interface and a processor. The interface is configured to communicate with a memory device that comprises (i) a plurality of memory cells and (ii) a data compression module. The processor is configured to determine a maximal number of errors that are required to be corrected by applying a soft decoding scheme to data retrieved from the memory cells, and to determine, based on the maximal number of errors, an interval between multiple read thresholds for reading Code Words (CWs) stored in the memory cells for processing by the soft decoding scheme, so as to meet the following conditions: (i) the soft decoding scheme achieves a specified decoding capability requirement, and (ii) a compression rate of the compression module, when applied to confidence levels corresponding to readouts of the CWs, achieves a specified readout throughput requirement.


In some embodiments, the processor determines the interval by estimating multiple attainable compression rates for different respective settings of the interval and selects a setting of the interval that meets the conditions. In other embodiments, the processor determines the interval for maximizing the readout throughput from the memory device. To maximize decoding capabilities, the processor may determine the interval for minimizing a probability of decoding failure in decoding CWs using the soft decoding scheme.
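
For illustration, the following Python sketch shows one way such an interval-selection loop could look. The helper callables meets_decoding_requirement, estimate_compression_rate and attainable_throughput are hypothetical stand-ins for decoder- and device-specific models; they are not part of the embodiments described here.

```python
# Hypothetical sketch of the interval-selection loop described above.
def select_interval(candidate_intervals, max_errors,
                    meets_decoding_requirement,   # (interval, max_errors) -> bool
                    estimate_compression_rate,    # (interval, max_errors) -> compression rate
                    attainable_throughput,        # compression rate -> readout throughput
                    required_throughput):
    """Return (interval, throughput) with the highest attainable readout
    throughput that satisfies both conditions, or None if no setting does."""
    best = None
    for interval in candidate_intervals:
        # Condition (i): soft decoding must cope with max_errors at this interval.
        if not meets_decoding_requirement(interval, max_errors):
            continue
        # Condition (ii): compressed confidence levels must sustain the required throughput.
        tp = attainable_throughput(estimate_compression_rate(interval, max_errors))
        if tp >= required_throughput and (best is None or tp > best[1]):
            best = (interval, tp)
    return best
```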


The processor applies hard decoding or soft decoding as appropriate. For example, the processor may decide to apply to subsequent CWs read from the memory cells a hard decoding scheme or the soft decoding scheme, based on an average number of errors detected in previously read CWs. In an embodiment, the processor decides to apply the soft decoding scheme to subsequent CWs read from the memory cells, in response to detecting that a first readout throughput achievable using hard decoding is smaller than a second readout throughput achievable using soft decoding with confidence levels that were compressed by the compression module. The throughput in using hard decoding may fall below the throughput in soft decoding because in response to hard decoding failure the processor may trigger additional decoding stages, resulting in increased latency. In another embodiment, the processor sets a data rate (or an operational clock frequency) of the interface depending on the compression rate being configured. For example, when the compression rate is higher, the processor may set a higher data rate to compensate for the read performance loss, at the expense of higher thermal power.
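
As a rough illustration of this decision (not a definitive implementation), the sketch below compares the average error count against the hard-decoding correction capability and falls back to soft decoding when hard decoding is either unreliable or, because of retries, slower; all inputs are assumed to be estimated elsewhere.

```python
# Illustrative decision logic; the inputs are assumed to be estimated elsewhere
# (e.g., from previously decoded CWs and the configured compression rate).
def choose_decoding_scheme(avg_errors_prev_cws,
                           hard_correction_capability,
                           est_hard_throughput,   # includes retries after hard-decode failures
                           est_soft_throughput):  # includes transfer of compressed confidence levels
    # Prefer low-complexity hard decoding while it is reliable enough,
    # unless decode-failure retries make it slower than compressed soft readout.
    if (avg_errors_prev_cws <= hard_correction_capability
            and est_hard_throughput >= est_soft_throughput):
        return "hard"
    return "soft"
```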


The processor may decide to apply data compression to the confidence levels depending on underlying methods used for retrieving data from one or more dies. For example, the processor may identify low parallelism random readout operations that are not constrained by a data rate of the interface, and in response, set the compression module so as not to compress the confidence levels of the identified readout operations. The compression of confidence levels often increases readout latency, and therefore it may be beneficial not to compress at low parallelism. In an embodiment in which the memory cells belong to multiple dies, the processor may read compressed confidence levels from a first die among the multiple dies while one or more other dies among the multiple dies are occupied in compressing local confidence levels.


In some embodiments, the data compression module supports multiple compression configurations, and the processor selects a compression configuration among the supported compression configurations that meets the readout throughput requirement. For example, the multiple compression configurations may have multiple respective constant compression rates.


In some embodiments, the processor configures the data compression module to produce compressed confidence levels using a variable-rate compression configuration. The processor receives the compressed confidence levels via the interface in multiple data segments having respective data lengths, in accordance with the variable-rate compression configuration.


In an embodiment in which the compression module supports a lossy compression scheme, the processor may estimate the maximal number of correctable errors (to be used for determining the interval), depending on a number of errors contributed by the lossy compression scheme.


In an embodiment, as opposed to optimizing only soft decoding performance or readout throughput alone, the processor determines the interval so as to achieve a specified tradeoff between soft decoding capability and readout throughput. For example, it may be required to reduce soft decoding performance in order to increase readout throughput.


In some embodiments, the processor determines the interval to maximize decoding capabilities, without imposing any requirements on the readout throughput. For example, the processor may determine the interval so that the soft decoding scheme aims to correct a specified maximal number of errors with a lowest decoding failure rate.


In some embodiments, the processor determines the maximal number of errors by modeling underlying voltage distributions as Gaussian distributions, and calculating the maximal number of errors based on the estimated Gaussian distributions. To model the Gaussian distributions the processor may determine a number of memory cells that fall between adjacent read thresholds and calculate a variance parameter (or a standard deviation parameter) of the Gaussian distributions based on the estimated number of memory cells.
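
A minimal numerical sketch of this kind of calculation is shown below. It assumes two adjacent Gaussian voltage distributions with known means and a common, unknown standard deviation, equal cell counts per programming level, and read thresholds lying in the tail region of both distributions (so that the expected in-between count grows monotonically with sigma). These assumptions, and the bisection itself, are illustrative and not taken from the embodiments.

```python
import math

def _phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def estimate_sigma(count_between, cells_per_level, t1, t2, mu_left, mu_right,
                   sigma_lo=1e-3, sigma_hi=10.0, iters=60):
    """Solve for a common standard deviation of two adjacent Gaussian voltage
    distributions, given the number of cells observed between read thresholds
    t1 < t2 placed between the level means (illustrative model only)."""
    def expected(sigma):
        left = _phi((t2 - mu_left) / sigma) - _phi((t1 - mu_left) / sigma)
        right = _phi((t2 - mu_right) / sigma) - _phi((t1 - mu_right) / sigma)
        return cells_per_level * (left + right)

    lo, hi = sigma_lo, sigma_hi
    for _ in range(iters):           # bisection: expected count grows with sigma here
        mid = 0.5 * (lo + hi)
        if expected(mid) < count_between:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```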


Using soft decoding for determining the maximal number of errors is not mandatory. In alternative embodiments, the processor may perform hard decoding on a readout sampled using a single read threshold, and if hard decoding succeeds, calculate the maximal number of errors by comparing the readouts before and after the hard decoding operation.


In an embodiment, the processor is configured to map the maximal number of errors into the interval using a predefined function. For example, the predefined function is based on finding, for selected numbers of errors, respective intervals that aim to maximize mutual information measures between CWs as stored in the memory cells and respective readouts of the CWs retrieved from the memory cells.
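
One simple realization of such a predefined function is a lookup table computed offline, as sketched below; the error counts and interval values in the table are placeholders, not values from the embodiments.

```python
import bisect

# Placeholder table: for selected error counts, intervals (in threshold-voltage
# steps) assumed to have been found offline by maximizing the mutual information
# between stored and read CWs.
_ERROR_COUNTS = [50, 100, 200, 400, 800]
_INTERVALS = [2, 3, 4, 6, 8]

def interval_for_errors(max_errors):
    """Map a maximal error count to a read-threshold interval, rounding up to
    the next tabulated error count."""
    i = min(bisect.bisect_left(_ERROR_COUNTS, max_errors), len(_INTERVALS) - 1)
    return _INTERVALS[i]
```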


In some embodiments, the processor selects an interval between read thresholds using any suitable method, and determines the attainable compression for that selected interval. To this end, the processor estimates, based on the selected interval, statistical properties of confidence levels corresponding to readouts of the CWs, and based on the estimated statistical properties, determines the attainable compression rate, which also dictates a corresponding attainable readout throughput. The processor configures the compression module in accordance with the attainable compression rate for transmitting the compressed confidence levels at the attainable readout throughput.


The processor may use any suitable statistical properties of the confidence levels. In an example embodiment, the processor estimates the statistical properties by estimating a ratio between a first number of the confidence levels indicative of a low confidence level and a second overall number of the confidence levels.


In some embodiments, the processor determines the attainable compression rate by mapping the ratio into the attainable compression rate using a predefined function.
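
As an illustrative example of such a mapping (an assumption, not the patent's function), the sketch below treats the confidence levels as binary reliability flags and uses the binary entropy of the low-confidence ratio as a bound on the attainable lossless compression.

```python
import math

def low_confidence_ratio(conf_flags, low_value=0):
    """Fraction of low-confidence entries among all confidence flags
    (binary flags are a simplifying assumption)."""
    return sum(1 for c in conf_flags if c == low_value) / len(conf_flags)

def attainable_compression_rate(ratio):
    """Illustrative mapping: for i.i.d. binary flags, lossless compression needs
    at least H(ratio) bits per flag, so the achievable size-reduction factor is
    roughly 1 / H(ratio). This is a bound used for illustration only."""
    if ratio in (0.0, 1.0):
        return float("inf")  # a constant sequence compresses arbitrarily well
    h = -ratio * math.log2(ratio) - (1.0 - ratio) * math.log2(1.0 - ratio)
    return 1.0 / h
```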


In the disclosed techniques, the interval between read thresholds is set to meet readout reliability and throughput requirements in transmitting compressed confidence levels from the memory device to the memory controller. The memory controller may monitor the stress applied to the memory cells over the lifetime of the memory device, and adapt the interval to retain high reliability while gradually reducing readout throughput as the memory device ages or becomes subjected to higher stress levels. Alternatively, the memory controller may set the interval to meet a high readout reliability requirement or a high readout throughput requirement, independently.


System Description


FIG. 1 is a block diagram that schematically illustrates a memory system 20, in accordance with an embodiment that is described herein. System 20 can be used in various host systems and devices, such as in computing devices, cellular phones or other communication terminals, removable memory modules (e.g., “disk-on-key” devices), Solid State Disks (SSD), digital cameras, music and other media players and/or any other system or device in which data is stored and retrieved.


System 20 comprises a memory device 24, which stores data in a memory cell array 28. The memory cell array comprises multiple memory cells 32, e.g., analog memory cells. In the context of the present patent application and in the claims, the term “memory cell” is used to describe any memory cell that holds a continuous, analog level of a physical quantity, such as an electrical voltage or charge. Memory cell array 28 may comprise memory cells of any kind, such as, for example, NAND, NOR and CTF Flash cells, PCM, NROM, FRAM, MRAM and DRAM cells. Memory cells 32 may comprise Single-Level Cells (SLC) or Multi-Level Cells (MLC, also referred to as multi-bit cells).


The charge levels stored in the memory cells and/or the analog voltages or currents written into and read out of the memory cells are referred to herein collectively as analog values or storage values. Although the embodiments described herein mainly address threshold voltages, the methods and systems described herein may be used with any other suitable kind of storage values.


System 20 stores data in the memory cells by programming the memory cells to assume respective memory states, which are also referred to as programming levels. The programming states are selected from a finite set of possible states, and each state corresponds to a certain nominal storage value. For example, a 2 bit/cell MLC can be programmed to assume one of four possible programming states by writing one of four possible nominal storage values to the cell.


Memory device 24 comprises a reading/writing (R/W) unit 36, which converts data for storage in the memory device to storage values and writes them into memory cells 32. In alternative embodiments, the R/W unit does not perform the conversion, but is provided with voltage samples, i.e., with the storage values for storage in the cells. When reading data out of memory cell array 28, R/W unit 36 converts the storage values of memory cells 32 into digital samples having a resolution of one or more bits. The R/W unit typically reads data from memory cells 32 by comparing the storage values of the memory cells to one or more read thresholds. Data is typically written to and read from the memory cells in groups that are referred to as pages. In some embodiments, the R/W unit can erase a group of memory cells 32 by applying one or more negative erasure pulses to the memory cells.


The storage and retrieval of data in and out of memory device 24 is performed by a memory controller 40, which communicates with device 24 over a suitable interface. In some embodiments, memory controller 40 produces the storage values for storing in the memory cells and provides these values to R/W unit 36. Alternatively, memory controller 40 may provide the data for storage, and the conversion to storage values is carried out by the R/W unit internally to the memory device.


Memory controller 40 communicates with a host 44, for accepting data for storage in the memory device and for outputting data retrieved from the memory device. In some embodiments, some or even all of the functions of controller 40 may be implemented in hardware. Alternatively, controller 40 may comprise a microprocessor that runs suitable software, or a combination of hardware and software elements.


In some embodiments, R/W unit 36 comprises a data compression module 45, which compresses some of the information that is to be sent to memory controller 40. The memory controller comprises a data decompression module 46, which decompresses the compressed information received from memory device 24. In particular, R/W unit 36 may produce confidence levels of the storage values read from memory cells 32, and data compression module 45 may compress these confidence levels and send the compressed confidence levels to memory controller 40. (In some embodiments, data compression module 45 can also be used for compressing other types of information, such as stored data that is retrieved from memory cells 32.)


Memory controller 40 uses the storage values read from cells 32, and the associated confidence levels, to reconstruct the stored data. In an example embodiment, memory controller 40 comprises an Error Correction Code (ECC) module 47, which encodes the data for storage using a suitable ECC, and decodes the ECC of the data retrieved from memory cells 32. ECC module 47 may apply any suitable type of ECC, such as, for example, a Low-Density Parity Check (LDPC) code or a Bose-Chaudhuri-Hocquenghem (BCH) code. In some embodiments, ECC module 47 uses the confidence levels to improve the ECC decoding performance. Several example methods for obtaining and compressing confidence levels, as well as for using the confidence levels in ECC decoding, are described hereinbelow.


The configuration of FIG. 1 is an example system configuration, which is shown purely for the sake of conceptual clarity. Any other suitable memory system configuration can also be used. Elements that are not necessary for understanding the principles of the present invention, such as various interfaces, addressing circuits, timing and sequencing circuits and debugging circuits, have been omitted from the figure for clarity.


In the example system configuration shown in FIG. 1, memory device 24 and memory controller 40 are implemented as two separate Integrated Circuits (ICs). In alternative embodiments, however, the memory device and the memory controller may be integrated on separate semiconductor dies in a single Multi-Chip Package (MCP) or System on Chip (SoC), and may be interconnected by an internal bus. Further alternatively, some or all of the circuitry of the memory controller may reside on the same die on which the memory array is disposed. Further alternatively, some or all of the functionality of controller 40 can be implemented in software and carried out by a processor or other element of the host system. In some embodiments, host 44 and memory controller 40 may be fabricated on the same die, or on separate dies in the same device package.


In some implementations, a single memory controller may be connected to multiple memory devices 24. In yet another embodiment, some or all of the memory controller functionality may be carried out by a separate unit, referred to as a memory extension, which acts as a slave of memory device 24. Typically, memory controller 40 comprises a general-purpose processor, which is programmed in software to carry out the functions described herein. The software may be downloaded to the processor in electronic form, over a network, for example, or it may, alternatively or additionally, be provided and/or stored on tangible media, such as magnetic, optical, or electronic memory.


Memory cells 32 of memory cell array 28 are typically arranged in a grid having multiple rows and columns, commonly referred to as word lines and bit lines, respectively. The array is typically divided into multiple pages, i.e., groups of memory cells that are programmed and read simultaneously. Memory cells are typically erased in groups of word lines that are referred to as erasure blocks. In some embodiments, a given memory device comprises multiple memory cell arrays, which may be fabricated on separate dies.


Compression of Confidence Level Information

The storage values stored in memory cells 32 are often associated with varying confidence levels. In other words, when attempting to retrieve data from memory cells 32 by reading their storage values, different storage values may have different likelihoods of truly representing the data that was originally stored. The memory cells that are programmed to a given programming state typically have storage values that are distributed in accordance with a certain voltage distribution. The properties of the voltage distributions depend on various factors and impairments, such as inaccuracies in the programming process, interference from neighboring cells, aging effects, and many others.


Within a given voltage distribution, some memory cells may have higher confidence levels (i.e., high likelihood of being read correctly) while other memory cells may have lower confidence levels (i.e., higher likelihood of causing read errors). For example, R/W unit 36 typically reads the memory cells by comparing their storage values to one or more read thresholds, which are positioned between adjacent programming states. Thus, a storage value located in a boundary region between adjacent programming states has a relatively high likelihood of falling on the wrong side of a read threshold and causing a read error. A storage value located in the middle of the distribution can usually be regarded as reliable.


In some embodiments, memory controller 40 uses estimates of these confidence levels to improve the performance of the data readout process. For example, in some embodiments, ECC module 47 decodes the ECC by operating on soft metrics, such as Log Likelihood Ratios (LLRs) of the read storage values or of individual bits represented by these storage values. As another example, some of the storage values that are regarded as unreliable or uncertain may be marked as erasures to the ECC module. Estimated confidence levels of the read storage values can be used to mark certain storage values as erasures, and/or to produce soft metrics. Soft metrics, erasures and/or any other suitable metrics that assist the ECC module in decoding the ECC are referred to herein as ECC metrics. Additionally or alternatively, the confidence levels can be used in any suitable way to reconstruct the stored data.


The confidence levels of the storage values can be estimated in various ways. In some embodiments, R/W unit 36 retrieves data from a group of memory cells 32 by comparing their storage values to one or more read thresholds. The R/W unit estimates the confidence levels of these storage values by re-reading the memory cells with a different set of read thresholds, which are positioned so as to identify storage values that are located in boundary regions between adjacent programming states.
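
The boundary-region identification described above can be sketched as follows, assuming direct access to per-cell storage values for clarity (in a real device the comparison is performed by the sense circuitry, not by software):

```python
def flag_low_confidence(storage_values, threshold, delta):
    """Flag cells whose storage value falls in the boundary region around
    'threshold' by comparing two reads taken at (threshold - delta) and
    (threshold + delta); a cell that flips between the two reads is unreliable."""
    read_lo = [v >= threshold - delta for v in storage_values]
    read_hi = [v >= threshold + delta for v in storage_values]
    return [lo != hi for lo, hi in zip(read_lo, read_hi)]
```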


As noted above, ECC module 47 in memory controller 40 decodes the ECC based on the estimated confidence levels of the read storage values. In order to provide this information to ECC module 47, R/W unit 36 transfers the estimated confidence levels from memory device 24 to memory controller 40 over the interface that connects the two devices. As can be appreciated, the additional communication volume created by transferring the estimated confidence levels is high; it reduces the readout throughput and may even be impractical.


In order to reduce the communication volume between the memory device and the memory controller (or otherwise between the memory cells and the ECC decoder), R/W unit 36 compresses the estimated confidence levels before transferring them to the memory controller. The term “data compression” (or simply “compression” for brevity) in this context typically means any process that reduces the communication rate or bandwidth that is used for transferring the estimated confidence levels. Compression may be lossless (i.e., required to maintain the original confidence level values without error) or lossy (i.e., allows a certain error probability due to the compression and decompression process).


R/W unit 36 may compress the estimated confidence levels using any suitable compression scheme. For example, instead of transferring a sequence of estimated confidence levels, the R/W unit may transfer the run lengths of the sequence, i.e., the numbers of successive “0” and “1” runs in the sequence. This compression scheme is commonly known as run-length coding.
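
A minimal run-length coder for a binary sequence of confidence flags, in the spirit of the example above, might look as follows (illustrative only; the on-chip implementation is hardware-specific):

```python
def rle_encode(bits):
    """Encode a binary confidence-level sequence as its first value plus the
    lengths of successive runs of identical values."""
    if not bits:
        return None, []
    runs, count, prev = [], 1, bits[0]
    for b in bits[1:]:
        if b == prev:
            count += 1
        else:
            runs.append(count)
            count, prev = 1, b
    runs.append(count)
    return bits[0], runs

def rle_decode(first_value, runs):
    """Reconstruct the binary sequence from the first value and the run lengths."""
    out, value = [], first_value
    for length in runs:
        out.extend([value] * length)
        value = 1 - value  # binary sequence alternates between runs
    return out
```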


Controlling Memory Readout Reliability and Throughput

Data compression may be applied to confidence levels derived from multiple readouts retrieved from the same group of memory cells, wherein the multiple readouts are retrieved from the memory cells using multiple distinct read thresholds. In the present context and in the claims, the distance between adjacent read thresholds is also referred to as a “sampling interval” or simply “interval” for brevity.


For a certain stress level, the interval between read thresholds that are used for soft decoding may be set to an optimal interval that attains highest soft decoding performance. In some embodiments, to optimize error correction capability in soft decoding, the memory controller continuously monitors the stress level (the stress level may be measured by the average number of errors in the read data), calculates the optimal sampling interval between the read thresholds, and sets the read thresholds in accordance with the optimal sampling interval for subsequent read operations. In this manner, it is possible to maintain near-optimal decoding performance over varying stress conditions (e.g., while ignoring any readout throughput requirement).


Sending the confidence levels in a compressed form (rather than the raw confidence levels) reduces the data volume transferred from the memory device to the memory controller over the interface that connects between them. Higher compression rates are typically desirable because they result in higher readout throughputs.


Applying data compression to the confidence levels can improve the readout throughput only up to a certain maximal readout throughput, because the compression module has limited compression capabilities that depend on the underlying compression scheme used, and on statistical properties of the confidence levels to be compressed. In general, the statistical properties of the confidence levels depend on the average number of errors in the data read, and on the interval between the read thresholds used for soft decoding.


The interval between the read thresholds thus affects both soft decoding performance and the attainable compression rate of the confidence levels. Typically, increasing the interval results in higher soft decoding capabilities but on the other hand reduces the attainable compression rate and the readout throughput. For example, let INT1 denote an optimal interval between the read thresholds. INT1 may be set for correcting an average number N1 of readout errors expected towards the end of life of the memory device, with a minimal probability of decoding failure.


At some time prior to the end of life, the actual average number of readout errors may be N2<N1. If at that time the interval is set equal to INT1, the attainable compression rate may be smaller than the compression rate that would be attainable with an interval INT2 that is optimal for correcting N2 errors (with INT2<INT1). This means that depending on the present stress level, the interval between the read thresholds may be adjusted so that soft decoding capabilities are reduced to a minimal level necessary for reliable readout, while achieving the highest compression rate and therefore the highest readout throughput under these conditions.



FIG. 2 is a flow chart that schematically illustrates a method for adjusting the interval between read thresholds for meeting soft decoding and readout throughput requirements, in accordance with an embodiment that is described herein. The method will be described as executed by memory controller 40.


The method begins at a decoding requirement stage 100, with memory controller 40 determining a maximal number of errors that are required to be corrected by applying a soft decoding scheme to data retrieved from the memory cells.


The memory controller may determine the maximal number of errors, e.g., based on measuring an average number of errors detected in previously read data. The maximal number of errors reflects the health state or the stress level of the memory cells, and is typically expected to increase as the memory device ages. In some embodiments, the memory controller determines the maximal number of errors by estimating the average number of errors at the present stress level. The memory controller may estimate the average number of errors based on retrieving one or more previously stored CWs. In alternative embodiments, the memory controller may estimate the average number of errors based on the confidence levels as will be described below.


At an interval setting stage 104, the memory controller determines, based on the maximal number of errors, an interval between adjacent read thresholds for reading CWs to be processed using a soft decoding scheme, so as to meet the following conditions: (i) the soft decoding scheme achieves a specified decoding capability requirement, and (ii) a compression rate of the compression module, when applied to confidence levels corresponding to readouts of the CWs, achieves a specified readout throughput requirement. Following stage 104 the method loops back to stage 100 to determine another maximal number of errors.


The memory controller may determine the interval between the read thresholds by estimating multiple attainable compression rates for different respective settings of the interval, and selecting a setting of the interval that meets the conditions. In an embodiment, the throughput requirement may specify maximizing the readout throughput from the memory device (by maximizing the attainable compression rate).


In some embodiments, the processor may read data from the memory cells in a hard reading mode or in a soft reading mode. In the hard reading mode, a single read threshold is used for producing a single readout that is decoded using a suitable hard decoding scheme. In the soft reading mode, multiple read thresholds are used for producing multiple respective readouts. Based on the multiple readouts, confidence levels are produced in the memory device, and are typically transferred to the memory controller in a compressed form.


The memory controller may switch between the hard and soft reading modes, e.g., based on the prevailing conditions of the memory cells. The soft reading mode can be invoked, for example, in response to the memory controller detecting that the number of errors exceeds the error correction capability attainable using hard decoding, or in response to detecting that the readout throughput drops below a throughput that is attainable in the soft reading mode. In this manner, the degradation in readout throughput (the degradation is associated with transferring compressed confidence levels to the memory controller) can be minimized depending on the state of life (or stress level) of the memory device. In some embodiments, memory controller 40 of memory system 20 supports switching between the hard reading mode and the soft reading mode, e.g., based on the state of the memory device.


In some embodiments, when the memory device is at a state close to start of life, data read from memory cells is still highly reliable, and therefore using the hard reading mode with a low-complexity hard decoder is sufficient. As the memory device ages, the readout reliability degrades, and soft decoding may be required for coping with the increased error rate.


In some embodiments, the memory controller decides to apply to subsequent data read from the memory cells a hard decoding scheme or the soft decoding scheme, based on an average number of errors detected in previously read data (CWs). In some embodiments, the memory controller decides to apply the soft decoding scheme to subsequent data read from the memory device, in response to detecting that a first readout throughput achievable using hard decoding is smaller than a second readout throughput achievable using soft decoding with confidence levels that were compressed by the compression module.


The compressed confidence levels are typically transferred to the memory controller over the interface with some latency. Such latency may be controlled, for example, by the processor properly setting the data rate of the interface depending on the compression rate. For example, for a lower compression rate, the memory controller configures the interface to a higher data rate, and vice versa. To this end, in an embodiment, the memory controller increases the bus frequency (and therefore the data rate over the interface) when data compression is applied to the confidence levels, in order to compensate for additional compressed data transferred over the bus.


In another embodiment, the memory controller identifies low parallelism random readout operations that are not constrained by the data rate of the interface, and sets the compression module so as not to compress the confidence levels of the identified readout operations. In this embodiment, the confidence levels of the identified readout operation are transferred to the memory controller uncompressed.


In a multi-die memory device, the latency incurred by applying data compression to the confidence levels can be “hidden” by properly ordering the readout operations from the different dies. In an embodiment, the memory controller reads compressed confidence levels from a first die among the multiple dies while one or more other dies among the multiple dies are occupied in compressing local confidence levels. An efficient task scheduling of this sort for a two-die memory system will be described below with reference to FIG. 7.
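
The following sketch illustrates such an ordering for two dies sharing one channel. The command names and strict alternation are simplifying assumptions; actual NAND command sequencing and timing are device-specific.

```python
def interleave_two_dies(pages_die0, pages_die1):
    """Sketch of the ordering described above for two dies on a shared channel:
    while one die reads a page and compresses its confidence levels, the channel
    transfers the other die's already-compressed output."""
    timeline, pending = [], None
    for p0, p1 in zip(pages_die0, pages_die1):
        for die, page in (("die0", p0), ("die1", p1)):
            timeline.append(f"{die}: read page {page}, compress confidence levels")
            if pending is not None:
                timeline.append(f"channel: transfer compressed output of {pending[0]} page {pending[1]}")
            pending = (die, page)
    if pending is not None:
        timeline.append(f"channel: transfer compressed output of {pending[0]} page {pending[1]}")
    return timeline
```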


In some embodiments, data compression module 45 supports multiple compression configurations. In such embodiments, the memory controller may select a compression configuration among the supported compression configurations that meets the throughput requirement. For example, the multiple compression configurations may have multiple respective constant compression rates. When two or more compression configurations meet the throughput requirement, the processor may select among these compression configurations based on any other suitable criterion such as, for example, minimal latency.


In some embodiments, the data compression module supports a variable-rate compression configuration. In such embodiments, the memory controller receives the compressed confidence levels via the interface in multiple data segments having respective data lengths, in accordance with the variable-rate compression configuration. Operating in a variable-rate compression configuration may require coordination between the memory controller and the memory device in transferring the compressed confidence levels. In some embodiments, in memory systems operating with compression schemes having respective fixed compression rates, it may be required to switch among the different compression schemes so as to utilize the compression scheme that maximizes the compression rate in a given state of the memory device.
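
For illustration, variable-rate compressed confidence levels received as length-prefixed segments could be parsed as sketched below; the 16-bit little-endian length header is an assumed framing format, since the embodiments do not specify one.

```python
import struct

def parse_variable_rate_segments(payload: bytes):
    """Split a received buffer into variable-length compressed segments, each
    preceded by an assumed 16-bit little-endian length field."""
    segments, offset = [], 0
    while offset + 2 <= len(payload):
        (length,) = struct.unpack_from("<H", payload, offset)
        offset += 2
        segments.append(payload[offset:offset + length])
        offset += length
    return segments
```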


In some embodiments, the memory controller estimates the attainable compression rate, programs the desired compression scheme in the memory device, and requests transmission of a specific data size in order to spare bandwidth and optimize performance. In such embodiments, a gradual decrease is achieved in the readout throughput as the stress level on the memory device increases. The memory controller may estimate the attainable compression rate based, for example, on the number of measured errors and on the sampling interval between adjacent read thresholds, in an embodiment. In another embodiment, the memory controller may estimate the attainable compression ratio by direct evaluation of the statistical properties of the confidence levels based on the number of memory cells falling between read thresholds.



FIG. 3 is a diagram that schematically illustrates graphs depicting attainable readout throughputs corresponding to stress levels applied to the memory device, in accordance with an embodiment that is described herein.


In graphs 200, 204 and 206 of FIG. 3, the horizontal axis corresponds to stress levels applied to the memory cells (e.g., measured as the average number of readout errors), and the vertical axis corresponds to readout throughput from the memory device. Graphs 200 and 206 correspond to embodiments in which the memory device supports variable-rate compression, and the interval between the read thresholds is adapted to meet error correction and readout throughput requirements, as described above. Graph 204 corresponds to a compression scheme in which no compression is applied in the hard decoding mode, and a single fixed-rate compression is applied in soft decoding mode.


A vertical dotted line 208 separates low stress levels, for which hard decoding is sufficient, from high stress levels that require soft decoding. For stress levels below line 208, operating in the hard reading mode with hard decoding is sufficiently reliable. For stress levels above line 208, operating in the soft reading mode with a suitable soft decoding scheme is required for reliable decoding. It is also assumed that when soft decoding is used, the confidence levels are transferred to the memory controller in a compressed form.


In the range of stress levels below line 208, the memory device transfers to the memory controller only hard data but no confidence levels. Consequently, the memory system achieves a maximal readout throughput denoted Max. TP. When soft decoding is applied and the interval is set to be optimal for the highest expected stress level (indicated using vertical dotted line 212), the readout throughput drops to a value denoted Min. Comp. TP, because the compressed confidence levels that are transferred over the interface require additional bandwidth. As shown in the figure, when soft decoding is applied and the read thresholds are adjusted to meet the error correction and readout throughput requirements, the readout throughput in graph 200 decreases gradually as the stress level increases. This behavior is desirable, compared to the sharp degradation in throughput seen in graphs 204 and 206.


As described above, the data compression module may support multiple data compression configurations having different respective constant compression rates. For example, lines 204 (nonadaptive scheme) and 206 (adaptive scheme) correspond to two different fixed-rate compression configurations. In this example, when switching from hard decoding mode to soft decoding mode, the highest attainable compression rate corresponds to the horizontal line of graph 206. As the stress level increases, the memory controller may need to increase the sampling interval and switch to a compression rate indicated by the horizontal line of graph 204, which is lower than that of graph 206.


Methods for Adjusting the Interval Between Read Thresholds for Maximizing Soft Decoding Capability


FIG. 4 is a flow chart that schematically illustrates a method for optimizing soft decoding capabilities by adapting an interval between read thresholds, in accordance with an embodiment that is described herein. The method will be described as executed by memory controller 40.


The method begins at a decoding requirement stage 250, with memory controller 40 determining a maximal number of errors that are required to be corrected by applying a soft decoding scheme to data retrieved from the memory cells. Stage 250 is essentially similar to stage 100 of the method of FIG. 2 above. The maximal number of errors typically reflects the stress level applied to the memory cells.


At an interval setting stage 254, the memory controller determines, based on the maximal number of errors, an interval between multiple read thresholds for reading Code Words (CWs) stored in the memory cells for processing by the soft decoding scheme, so as to achieve a specified decoding capability requirement. In some embodiments, as will be described with reference to FIG. 5 below, the memory controller determines an optimal interval between the read thresholds so that the soft decoding scheme aims to correct the maximal number of errors with a lowest decoding failure rate.


In some embodiments, the memory controller determines the interval at stage 254 independently of any previous settings of the interval. In other embodiments, the memory controller stores one or more previous values of the interval and uses the stored interval values together with the present interval value to determine a final interval value to be set. For example, the memory controller may apply a smoothing filter or a control loop to the previous and present interval values, so as to smooth the interval setting across multiple setting operations. At a read thresholds setting stage 258, the memory controller configures the read thresholds based on the interval of stage 254 for subsequent read operations. Following stage 258 the method terminates.
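
One possible smoothing filter for the successive interval settings mentioned above is simple exponential smoothing, sketched below; the smoothing coefficient is an illustrative assumption.

```python
def smooth_interval(prev_smoothed, new_interval, alpha=0.25):
    """Exponentially smooth successive interval settings; alpha is an
    illustrative smoothing coefficient."""
    if prev_smoothed is None:
        return float(new_interval)
    return (1.0 - alpha) * prev_smoothed + alpha * new_interval
```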


Methods for Determining an Optimal Interval Between Read Thresholds

Next is described in detail a method for determining an optimal interval between read thresholds for a given stress level imposed on the memory cells.



FIG. 5 is a flow chart that schematically illustrates a method for determining an interval between read thresholds that is optimal for a given stress level, in accordance with an embodiment that is described herein.


In some embodiments, the method of FIG. 5 may be used in implementing stage 254 of the method of FIG. 4 above, when the soft decoding requirement specifies to maximize the decoding performance.


The method begins with the memory controller reading a CW from a group of memory cells using multiple read thresholds, to produce multiple respective readouts, at a soft reading stage 272. In an embodiment, the multiple read thresholds may be centered about an optimal read threshold.


At a number-of-errors estimation stage 276, the memory controller estimates the number of errors (Ne). The estimation may be based, for example, on estimating the underlying voltage distributions; methods for implementing stage 276 are described in detail further below.


Following stage 276, the average number of errors Ne is available, and the memory controller proceeds to an optimal interval determination stage 280. In some embodiments described above, the value of Ne obtained at stage 268 or 276 may be used as the maximal number of errors. At stage 280, the memory controller maps the number of errors Ne into an optimal interval that minimizes the decoding failure rate for Ne. In some embodiments, optimal interval values for respective Ne values are determined beforehand and used at stage 280, e.g., in the form of a formulated function or a lookup table.


At an interval setting stage 284, the memory controller sets the read thresholds with the optimal interval for subsequent soft read operations. In some embodiments, multiple optimal interval values that are produced at stage 280 are subjected to a smoothing process, and the resulting smoothed interval is used as the optimal interval at stage 284. Following stage 284 the method terminates.


Next is described a method for implementing the estimation of the average number of errors of stage 276. It is assumed that the CW is read from a page of a given bit significance value. For example, in a TLC device, the CW may be stored in one of three page types denoted a Least Significant Bit (LSB) page, a Most Significant Bit (MSB) page and an Upper Significant Bit (USB) page.


A middle range (or a zone) of threshold voltages between adjacent PVs is sometimes denoted a "Read Voltage" (RV). For reading data from the memory cells, the memory controller typically sets one or more read thresholds for selected RVs, depending on the underlying page type. For example, for reading an LSB page the memory controller may set one or more read thresholds for a single RV, whereas in reading an MSB or a USB page, the memory controller may set multiple read thresholds in each of multiple relevant RVs. In the present example, for a memory device that stores data in M programming states (PVs), the memory controller may set for an mth RV (denoted RVm, m=1 . . . M−1) two read thresholds denoted T1m and T2m (T1m<T2m).


To estimate the number of errors (Ne) the memory controller models the underlying voltage distributions. In the present example, the mth voltage distribution (m=0 . . . M−1) is modeled as a Gaussian distribution given by:











fm(v) = (1/√(2π·σm²))·exp(−(v − μm)²/(2σm²))        Equation 1

In Equation 1, μm denotes the mth nominal programming voltage PVm, and σm² denotes the variance parameter of the mth distribution about PVm. Since the nominal programming voltages are known, it is sufficient to estimate the variances (or standard deviations) to determine the Gaussian distributions.


In some embodiments, for RVm, the memory controller estimates the number of memory cells (denoted NCRVm) falling between T1m and T2m, and uses NCRVm to solve Equation 2 below for σm.


Let RD1 and RD2 denote readouts corresponding to read thresholds T1m and T2m, respectively. In an embodiment, the memory controller estimates NCRVm by performing a logical XOR operation between RD1 and RD2, and counting the number of ‘1’ values in the outcome of the XOR operation. Next, the memory controller estimates σm by solving the following equation:










NCRVm = (1/M)·[Q((T1m − μm−1)/σm) − Q((T2m − μm−1)/σm) + Q((μm − T2m)/σm) − Q((μm − T1m)/σm)]        Equation 2

wherein in Equation 2:

    • M denotes the total number of PVs, e.g., M=8 for a TLC device.
    • m=0 . . . M−1 denotes the mth PV.
    • RVm for m=1 . . . M−1 denotes the index of the RV corresponding to the zone between PVm and PVm−1.
    • μm and μm-1 denote the nominal programming voltages of PVm and PVm−1.
    • σm denotes a common standard deviation of the Gaussian distributions corresponding to PVm and PVm−1.
    • T1m and T2m denote the left side and right side read threshold used for RVm.
    • NCRVm denotes the number of memory cells falling between read thresholds T1m and T2m.
    • Q(⋅) is the tail distribution function of the standard normal distribution, also known as the Q-function.


As noted above, for certain page types, the memory device reads a CW by setting T1m and T2m for multiple m values of RVm. In this case the number of memory cells between two read thresholds corresponds to multiple RVs and should be divided among the RVs before solving Equation 2 for a specific RVm. In one embodiment, the memory controller divides the number of memory cells evenly among the relevant RVs. In another embodiment, the memory controller divides the number of memory cells in accordance with a predefined ratio among the relevant RVs.


In some embodiments, the memory controller solves Equation 2 numerically to estimate σm. Using the estimated Gaussian distributions fm(v) and fm-1(v), the memory controller estimates the number of errors Ne(m) for each relevant RVm, and maps Ne(m) into the optimal interval between T1m and T2m. It should be noted that, in general, different optimal intervals may be determined for different RVs.
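

As a hedged illustration of the numerical step, the sketch below solves Equation 2 for σm by bisection, using the standard-normal Q-function. The search bounds, tolerance, example voltages and the treatment of NCRVm as a fraction of the cells in the group are assumptions of the example, not values taken from the embodiments above.

```python
import math

def q_func(x):
    """Tail distribution function of the standard normal (Q-function)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def inner_zone_fraction(sigma, t1, t2, mu_lo, mu_hi, m_pvs):
    """Right-hand side of Equation 2 for a trial sigma."""
    return (1.0 / m_pvs) * (
        q_func((t1 - mu_lo) / sigma) - q_func((t2 - mu_lo) / sigma)
        + q_func((mu_hi - t2) / sigma) - q_func((mu_hi - t1) / sigma)
    )

def solve_sigma(nc_rv, t1, t2, mu_lo, mu_hi, m_pvs=8,
                sigma_lo=1e-3, sigma_hi=300.0, tol=1e-6):
    """Bisection on sigma between assumed search bounds.  The sketch assumes
    the measured inner-zone fraction nc_rv lies between the values attained
    at the two bounds, so a sign change exists inside the interval."""
    for _ in range(200):
        mid = 0.5 * (sigma_lo + sigma_hi)
        if inner_zone_fraction(mid, t1, t2, mu_lo, mu_hi, m_pvs) < nc_rv:
            sigma_lo = mid
        else:
            sigma_hi = mid
        if sigma_hi - sigma_lo < tol:
            break
    return 0.5 * (sigma_lo + sigma_hi)

# Example (arbitrary voltages in mV): PVm-1 at 1000, PVm at 1600,
# read thresholds at 1280 and 1320, measured inner-zone fraction 0.004.
print(solve_sigma(0.004, 1280.0, 1320.0, 1000.0, 1600.0))
```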


In some embodiments, the mapping of Ne(m) into the interval between the read thresholds is determined beforehand, e.g., based on evaluating the Mutual Information between the bits of the stored CW and the quantized voltages read from the corresponding memory cells.


In the fields of probability theory and information theory, the mutual information measure quantifies the “amount of information” (e.g., in units of bits) obtained about one random variable by observing the other random variable. In the present context, the mutual information measures the amount of information obtained on the correct CW bits as stored, by observing the CW bits retrieved from the memory device.


The mutual information depends on the interval between the read thresholds and reaches a maximal value for a certain interval value. The interval value that maximizes the mutual information minimizes the probability of decoding failure for Ne and is therefore considered an "optimal interval." The mapping of Ne to the optimal interval can be derived by tabulating respective optimal intervals for several values of the number of errors. Alternatively, the mapping function may be implemented in any suitable form.
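

One simple way to realize such a precomputed mapping is a lookup table with interpolation between tabulated points. In the sketch below the table values are placeholders chosen for illustration; in practice they would be derived offline, e.g., from the mutual-information analysis described above.

```python
import bisect

# Placeholder table: (number of errors Ne, optimal interval in mV).
NE_TO_INTERVAL = [(50, 30), (100, 40), (200, 55), (400, 75), (800, 100)]

def optimal_interval(ne):
    """Map an estimated number of errors to an interval by linear
    interpolation over the precomputed table, clamping at the edges."""
    nes = [p[0] for p in NE_TO_INTERVAL]
    if ne <= nes[0]:
        return NE_TO_INTERVAL[0][1]
    if ne >= nes[-1]:
        return NE_TO_INTERVAL[-1][1]
    i = bisect.bisect_left(nes, ne)
    (x0, y0), (x1, y1) = NE_TO_INTERVAL[i - 1], NE_TO_INTERVAL[i]
    return y0 + (y1 - y0) * (ne - x0) / (x1 - x0)

print(optimal_interval(300))  # -> 65.0 with the placeholder table
```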


In some embodiments, the compression module implements a lossy compression scheme. In such embodiments, the compression operation applied to the confidence levels may increase the number of errors to be corrected. In an embodiment, the memory controller takes into consideration both the average number of errors (Ne) and the number of errors expected to be caused by the lossy compression scheme, in mapping the number of errors to the optimal interval.


Methods for Estimating an Attainable Compression Rate Given an Interval Between Read Thresholds

The confidence levels of a given CW typically comprise a large number of high-confidence values and a much smaller number of low-confidence values. The attainable compression rate of the confidence levels typically depends on their statistical properties, which in turn depend on the interval between the read thresholds and on the underlying voltage distributions about the PVs. By modeling the underlying voltage distributions, and for a selected interval, the memory controller can estimate the attainable compression rate, as described herein.


When a CW is read using read thresholds T1m and T2m per RVm, the corresponding confidence levels are indicative of a high confidence level (a ‘0’ value) for memory cells that fall below T1m and above T2m, and are indicative of a low confidence level (‘1’ value) for memory cells that fall between T1m and T2m. A useful statistical property of the confidence levels is the ratio between the number of ‘1’ values and the overall number of memory cells in the sequence of confidence levels. This ratio is denoted R1 and is also referred to herein as a “ones ratio.”


Typically, a sequence of confidence levels having a low ones ratio has relatively long contiguous subsequences of zeros, and is therefore more compressible than a sequence of confidence levels having a high ones ratio.


When the interval between the read thresholds decreases, R1 decreases and the attainable compression rate increases; conversely, when the interval increases, R1 increases and the attainable compression rate decreases.
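

To make the relation between the ones ratio and compressibility concrete, the toy sketch below run-length encodes a randomly generated confidence bitmap for two ones ratios. It only illustrates the trend and does not reproduce the compression scheme used by the memory device.

```python
import random

def run_length_encode(bits):
    """Encode a 0/1 sequence as (value, run_length) pairs."""
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return runs

def demo(ones_ratio, n=4096, seed=0):
    random.seed(seed)
    bits = [1 if random.random() < ones_ratio else 0 for _ in range(n)]
    runs = run_length_encode(bits)
    # Fewer runs means the sequence compresses better under RLE-like schemes.
    print(f"ones ratio {ones_ratio:.3f}: {len(runs)} runs for {n} bits")

demo(0.01)   # low ones ratio -> few, long runs -> highly compressible
demo(0.20)   # higher ones ratio -> many short runs -> less compressible
```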


In some embodiments, the memory controller estimates R1 directly by counting the number of memory cells falling in the inner zone, e.g., by applying a logical bitwise XOR operation between the readouts corresponding to T1m and T2m and counting the '1' values in the result.
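

A minimal sketch of this direct estimate, assuming the two readouts are available as equal-length byte buffers holding one hard-decision bit per cell:

```python
def ones_ratio_from_readouts(rd1: bytes, rd2: bytes) -> float:
    """Estimate R1 by XOR-ing the readouts taken at T1m and T2m and
    counting the cells whose hard decision differs between the two reads,
    i.e., the cells that fall in the inner zone between the thresholds."""
    assert len(rd1) == len(rd2)
    inner = sum(bin(a ^ b).count("1") for a, b in zip(rd1, rd2))
    return inner / (8 * len(rd1))

# Example with two 4-byte readouts that differ in three bit positions.
print(ones_ratio_from_readouts(b"\xf0\x0f\xaa\x00", b"\xf1\x0f\xaa\x03"))
```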


In other embodiments, the memory controller estimates the ones ratio as given by:










R1 = (|ℙ|/M)·𝔼m∈ℙ[Q((T1m − μm−1)/σm) − Q((T2m − μm−1)/σm) + Q((μm − T2m)/σm) − Q((μm − T1m)/σm)]        Equation 3

wherein in Equation 3:

    • M denotes the total number of PVs, e.g., M=8 for a TLC device.
    • m=0 . . . M−1 denotes the mth PV.
    • ℙ denotes the set of RVs (RVm for selected m values) participating in reading the underlying CW, |ℙ| denotes the cardinality of ℙ, and 𝔼m∈ℙ[⋅] denotes averaging over the RVs in ℙ.
    • μm and μm-1 denote the nominal programming voltages of PVm and PVm−1.
    • T1m and T2m denote the left side and right side read threshold used for RVm.
    • Q(⋅) is the tail distribution function of the standard normal distribution, also known as the Q-function.


It is assumed that prior to applying Equation 3, the memory controller has estimated the underlying voltage distributions, e.g., Gaussian distributions in the present example, meaning that μm, μm-1 and σm are known. The memory controller may estimate the voltage distributions, for example, using the methods described above that make use of Equation 2.
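

For completeness, below is a hedged sketch of the model-based estimate of Equation 3, given the estimated Gaussian parameters. The per-RV parameter layout (a dictionary per RV) and the example numbers are assumptions of the sketch.

```python
import math

def q_func(x):
    """Standard-normal tail probability (Q-function)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def ones_ratio_model(rvs, m_pvs=8):
    """Equation 3: average the inner-zone term over the RVs used for the
    page, then scale by |P|/M.  Each entry of `rvs` holds the thresholds
    and Gaussian parameters of one RV (an assumed layout):
    {'t1':..., 't2':..., 'mu_lo':..., 'mu_hi':..., 'sigma':...}."""
    terms = []
    for rv in rvs:
        s = rv['sigma']
        terms.append(
            q_func((rv['t1'] - rv['mu_lo']) / s)
            - q_func((rv['t2'] - rv['mu_lo']) / s)
            + q_func((rv['mu_hi'] - rv['t2']) / s)
            - q_func((rv['mu_hi'] - rv['t1']) / s)
        )
    return (len(rvs) / m_pvs) * (sum(terms) / len(terms))

# Example: an MSB-like page read over two RVs (arbitrary mV values).
rvs = [
    {'t1': 1280, 't2': 1320, 'mu_lo': 1000, 'mu_hi': 1600, 'sigma': 160},
    {'t1': 2480, 't2': 2520, 'mu_lo': 2200, 'mu_hi': 2800, 'sigma': 170},
]
print(ones_ratio_model(rvs))
```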


In an embodiment, the processor may determine the maximal number of errors using a method alternative to the one described above. In the alternative embodiment, the memory controller estimates the maximal number of errors by retrieving a CW from the memory cells using a single read threshold, decoding the retrieved CW using a hard decoding scheme to produce a decoded CW, and, in response to detecting that the CW is successfully decodable using the hard decoding scheme, calculating the maximal number of errors by comparing the retrieved CW with the decoded CW.
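

A minimal sketch of this alternative estimate, assuming a hard decoder is available as a callable and that codewords are handled as byte buffers:

```python
from typing import Callable, Optional

def count_errors_via_hard_decode(raw_cw: bytes,
                                 hard_decode: Callable[[bytes], Optional[bytes]]
                                 ) -> Optional[int]:
    """Count raw bit errors by comparing the retrieved codeword with its
    hard-decoded version.  `hard_decode` is an assumed callable returning
    the corrected codeword on success and None on decoding failure."""
    decoded = hard_decode(raw_cw)
    if decoded is None:
        return None  # hard decoding failed; another estimation method is needed
    return sum(bin(a ^ b).count("1") for a, b in zip(raw_cw, decoded))

# Example with a stand-in "decoder" that returns the corrected codeword.
print(count_errors_via_hard_decode(b"\x01\x00", lambda cw: b"\x00\x00"))  # -> 1
```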



FIG. 6 is a flow chart that schematically illustrates a method for estimating the attainable compression rate of confidence levels depending on the interval between read thresholds, in accordance with an embodiment that is described herein.


When the memory controller estimates R1 using Equation 3 above, it is assumed that before or during execution of the present method, memory controller 40 estimates the underlying voltage distributions, as described above.


The method of FIG. 6 begins with memory controller 40 selecting an interval between read thresholds for reading CWs stored in the memory cells for processing by a soft decoding scheme, at an interval selection stage 300. The memory device produces, from multiple readouts of a CW, confidence levels that the memory controller uses in decoding the CW using a soft decoding scheme. The memory controller may select the interval using any suitable method. For example, in one embodiment, the memory controller selects an optimal interval for a given stress level using the method of FIG. 5. In other embodiments, the memory controller may select an interval shorter than the optimal interval, e.g., for increasing the attainable compression rate at the expense of reduced soft decoding capability.


Based on the interval selected at stage 300, the memory controller estimates statistical properties of the confidence levels corresponding to the retrieved CW, at a statistical-properties estimation stage 304. For example, the memory controller determines T1m and T2m based on the interval and, assuming the underlying voltage distributions have been modeled, calculates R1, e.g., using Equation 3 above.


At an attainable compression rate determination stage 308, based on the estimated statistical properties (e.g., R1 of Equation 3), the memory controller determines an attainable compression rate for compressing the confidence levels; the attainable compression rate corresponds to an attainable readout throughput. In some embodiments, the memory controller maps the ones ratio R1 to the attainable compression rate using a predefined mapping or function. The mapping depends on the underlying compression scheme and is determined beforehand.


At a compression configuration stage 312, the memory controller configures the compression module in the memory device in accordance with the attainable compression rate, for transmitting the confidence levels at (or close to) the attainable readout throughput. When the compression scheme is a variable-rate compression scheme, the memory controller configures the memory device to transmit the compressed confidence levels subject to a size limitation on the transactions. When the compression scheme is based on multiple fixed-rate compression schemes, the memory controller selects a suitable fixed-rate scheme that achieves the attainable compression rate. Following stage 312 the method terminates.
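

As an illustration of stages 308 and 312, the sketch below maps the ones ratio to an attainable compression rate using a placeholder table and then picks a supported fixed-rate configuration; the table values and the list of supported rates are assumptions of the example, not values from the embodiments above.

```python
# Placeholder mapping from ones ratio to attainable compression rate,
# determined beforehand for the underlying compression scheme.
R1_TO_RATE = [(0.005, 16.0), (0.01, 10.0), (0.02, 6.0), (0.05, 3.0), (0.10, 1.8)]

def attainable_rate(r1):
    """Return an attainable compression rate for a measured ones ratio,
    using the closest tabulated point at or above r1 (conservative)."""
    for threshold, rate in R1_TO_RATE:
        if r1 <= threshold:
            return rate
    return 1.0  # no useful compression expected

def select_fixed_rate_config(r1, supported_rates=(2.0, 4.0, 8.0)):
    """Pick the highest supported fixed compression rate that does not
    exceed what the confidence-level statistics can actually sustain."""
    target = attainable_rate(r1)
    feasible = [r for r in supported_rates if r <= target]
    return max(feasible) if feasible else None

print(attainable_rate(0.015), select_fixed_rate_config(0.015))
```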


Efficient Task Scheduling in a Multi-Die Memory System


FIG. 7 is a diagram that schematically illustrates efficient scheduling in sending hard data and compressed confidence levels from two dies to the memory controller over a common channel, in accordance with an embodiment that is described herein. In describing FIG. 7 it is assumed that two dies denoted “Die 0” and “Die 1” are connected to a memory controller via a common channel (e.g., a bus or link).


In FIG. 7, tasks are depicted as blocks whose lengths represent respective durations of these tasks. In addition, when a second task occurs after a first task, the second task is depicted to the right of the first task.


Tasks related to Die 0 are depicted in the upper part of the figure, and tasks related to Die 1 are depicted in the lower part of the figure. In the present example, each of the two dies comprises four planes denoted P0 . . . P3. Alternatively, other numbers of planes per die can also be used. Each of Die 0 and Die 1 is operated in an independent plane interleaving mode to maximize performance. In this mode, the memory device supports independent read operations across multiple planes (independent both in time and in address spaces). Throughput may be maximized in this mode by proper scheduling of the read operations from the different planes.


The memory controller reads data from Die 0 and Die 1 over the common channel. The various task types in FIG. 7 are summarized herein. In the figure, repeating tasks are numbered only once for the sake of clarity. Task 350 refers to memory array sensing with confidence intervals from one plane. Tasks 354 and 358 refer respectively to copying hard data and soft data to the output buffer of the memory device. Task 362 refers to compression of soft data. Tasks 366 and 370 respectively refer to outputting hard data and soft data to the memory controller.


In some embodiments, memory controller 40 starts a read operation by sending one or more commands indicating to the memory device to read a CW (or multiple CWs) from a selected plane of Die 0 or Die 1, using one or more read thresholds. In case of soft decoding, the memory controller may indicate to the memory device multiple read thresholds to be used, e.g., having a selected interval between adjacent read thresholds. In response to the command(s), the memory device produces hard data and corresponding soft data (e.g., confidence levels) and sends the hard data and the soft data to the memory controller for decoding the CW in question.


As shown in the figure, the memory device sends the hard data and corresponding soft data to the memory controller at different time slots. In the present example, the memory device sequentially transmits the hard data from P0 up to P3, and later sequentially transmits the soft data from P0 up to P3. This scheduling order is given by way of example, and in alternative embodiments other suitable orders and schedules can also be used.


As shown in the figure, the data compression duration (RLE) is very long, e.g., it can be even longer than the time it would take to transfer the uncompressed soft data to the memory controller.


As can be seen, the memory system queues operations within the memory device so that data compression operations in one die are performed in parallel with outputting data from the other die, for efficient utilization of the channel. The resulting periodic order of outputting data from the memory device to the memory controller is given as:

    • Output hard data from Die 0.
    • Output soft data from Die 1 (the memory controller decodes four CWs read from P0 . . . P3 of Die 1, and purges previously stored hard data).
    • Output hard data from Die 1.
    • Output soft data from Die 0 (the memory controller decodes four CWs read from P0 . . . P3 of Die 0, and purges previously stored hard data).


A cycle that follows such an output sequence is depicted in the figure using dotted-line arrows. In alternative embodiments, other efficient output sequences can also be used.
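

A schematic sketch of the periodic channel order listed above is shown below; while one die owns the channel, the other die is assumed to be sensing and/or compressing. The task names are placeholders and the sketch is not an actual controller implementation.

```python
from itertools import cycle, islice

# Periodic order of channel activity described above.
CHANNEL_ORDER = [
    "output hard data from Die 0",
    "output soft (compressed confidence) data from Die 1",
    "output hard data from Die 1",
    "output soft (compressed confidence) data from Die 0",
]

def channel_schedule(cycles=2):
    """Return the channel activity for the requested number of cycles."""
    return list(islice(cycle(CHANNEL_ORDER), len(CHANNEL_ORDER) * cycles))

for slot, activity in enumerate(channel_schedule(1)):
    print(slot, activity)
```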


Using the scheduling of tasks depicted in FIG. 7, the memory controller requires a buffering area sufficient to store two full dies' worth of data (pages read from the two dies across all planes), so that the common channel from the two dies to the memory controller is utilized continuously. Storage space in the memory controller may be further reduced by applying alternative scheduling schemes that trade off some performance.


Although FIG. 7 refers to an embodiment having a single channel shared by two dies, in other embodiments, the scheduling used in FIG. 7 may be extended to support more than two dies. Moreover, in a memory system that comprises multiple channels, wherein each channel connects two or more dies, the scheduling scheme in FIG. 7 (or an extended scheme for more than two dies per channel) can be used in parallel over the multiple channels.


The embodiments described above are given by way of example, and other suitable embodiments can also be used.


It will be appreciated that the embodiments described above are cited by way of example, and that the following claims are not limited to what has been particularly shown and described hereinabove. Rather, the scope includes both combinations and sub-combinations of the various features described hereinabove, as well as variations and modifications thereof which would occur to persons skilled in the art upon reading the foregoing description and which are not disclosed in the prior art. Documents incorporated by reference in the present patent application are to be considered an integral part of the application except that to the extent any terms are defined in these incorporated documents in a manner that conflicts with the definitions made explicitly or implicitly in the present specification, only the definitions in the present specification should be considered.


Various aspects regarding efficient scheduling in FIG. 7 may be summarized as follows:


1. Fully utilizing the parallelism of each die, by parallelizing the compression operations of different planes and/or overlapping compression with data output operations of different planes. With proper scheduling of the plane operations, this parallelism can be fully utilized.


2. Utilizing multiple dies by reading from one die both confidence levels and hard data, while the other die performs data compression.

Claims
  • 1. An apparatus for data storage, comprising: an interface, configured to communicate with a memory device that comprises (i) a plurality of memory cells and (ii) a data compression module; and a processor, configured to: determine a maximal number of errors that are required to be corrected by applying a soft decoding scheme to data retrieved from the memory cells; and based on the maximal number of errors, determine an interval between multiple read thresholds for reading Code Words (CWs) stored in the memory cells for processing by the soft decoding scheme, so as to meet following conditions: (i) the soft decoding scheme achieves a specified decoding capability requirement, and (ii) a compression rate of the compression module when applied to confidence levels corresponding to readouts of the CWs, achieves a specified readout throughput requirement.
  • 2. The apparatus according to claim 1, wherein the processor is configured to determine the interval, by estimating multiple attainable compression rates for different respective settings of the interval, and to select a setting of the interval that meets the conditions.
  • 3. The apparatus according to claim 1, wherein the processor is configured to determine the interval for maximizing the readout throughput from the memory device.
  • 4. The apparatus according to claim 1, wherein the processor is configured to determine the interval for minimizing a probability of decoding failure in decoding CWs using the soft decoding scheme.
  • 5. The apparatus according to claim 1, wherein the processor is configured to decide to apply to subsequent CWs read from the memory cells a hard decoding scheme or the soft decoding scheme, based on an average number of errors detected in previously read CWs.
  • 6. The apparatus according to claim 1, wherein the processor is configured to decide to apply the soft decoding scheme to subsequent CWs read from the memory cells, in response to detecting that a first readout throughput achievable using hard decoding is smaller than a second readout throughput achievable using soft decoding with confidence levels that were compressed by the compression module.
  • 7. The apparatus according to claim 1, wherein the processor is configured to set a data rate of the interface depending on the compression rate being configured.
  • 8. The apparatus according to claim 1, wherein the processor is configured to identify low parallelism random readout operations that are not constrained by a data rate of the interface, and to set the compression module so as not to compress confidence levels of the identified readout operations.
  • 9. The apparatus according to claim 1, wherein the memory cells belong to multiple dies, and wherein the processor is configured to read compressed confidence levels from a first die among the multiple dies while one or more other dies among the multiple dies are occupied in compressing local confidence levels.
  • 10. The apparatus according to claim 1, wherein the data compression module supports multiple compression configurations, and wherein the processor is configured to select a compression configuration among the supported compression configurations that meets the readout throughput requirement.
  • 11. The apparatus according to claim 9, wherein the multiple compression configurations have multiple respective constant compression rates.
  • 12. The apparatus according to claim 1, wherein the processor is configured to configure the data compression module to produce compressed confidence levels using a variable-rate compression configuration, and to receive the compressed confidence levels via the interface in multiple data segments having respective data lengths, in accordance with the variable-rate compression configuration.
  • 13. The apparatus according to claim 1, wherein the compression module supports a lossy compression scheme, and wherein the processor is configured to estimate the maximal number of errors, depending on a number of errors contributed by the lossy compression scheme.
  • 14. The apparatus according to claim 1, wherein the processor is configured to determine the interval so as to achieve a specified tradeoff between soft decoding capability and readout throughput.
  • 15. A method for data storage, comprising: in a memory controller that communicates with a memory device that comprises (i) a plurality of memory cells and (ii) a data compression module, determining a maximal number of errors that are required to be corrected by applying a soft decoding scheme to data retrieved from the memory cells; and based on the maximal number of errors, determining an interval between multiple read thresholds for reading Code Words (CWs) stored in the memory cells for processing by the soft decoding scheme, so as to meet following conditions: (i) the soft decoding scheme achieves a specified decoding capability requirement, and (ii) a compression rate of the compression module when applied to confidence levels corresponding to readouts of the CWs, achieves a specified readout throughput requirement.
  • 16. The method according to claim 15, wherein determining the interval, comprises estimating multiple attainable compression rates for different respective settings of the interval, and selecting a setting of the interval that meets the conditions.
  • 17. The method according to claim 15, wherein determining the interval comprises determining the interval for maximizing the readout throughput from the memory device.
  • 18. The method according to claim 15, wherein determining the interval comprises determining the interval for minimizing a probability of decoding failure in decoding CWs using the soft decoding scheme.
  • 19. The method according to claim 15, and comprising deciding to apply to subsequent CWs read from the memory cells a hard decoding scheme or the soft decoding scheme, based on an average number of errors detected in previously read CWs.
  • 20. The method according to claim 15, and comprising deciding to apply the soft decoding scheme to subsequent CWs read from the memory cells, in response to detecting that a first readout throughput achievable using hard decoding is smaller than a second readout throughput achievable using soft decoding with confidence levels that were compressed by the compression module.
  • 21. The method according to claim 15, and comprising setting a data rate of the interface depending on the compression rate being configured.
  • 22. The method according to claim 15, and comprising identifying low parallelism random readout operations that are not constrained by a data rate of the interface, and setting the compression module so as not to compress confidence levels of the identified readout operations.
  • 23. The method according to claim 15, wherein the memory cells belong to multiple dies, and comprising reading compressed confidence levels from a first die among the multiple dies while one or more other dies among the multiple dies are occupied in compressing local confidence levels.
  • 24. The method according to claim 15, wherein the data compression module supports multiple compression configurations, and comprising selecting a compression configuration among the supported compression configurations that meets the readout throughput requirement.
  • 25. The method according to claim 24, wherein the multiple compression configurations have multiple respective constant compression rates.
  • 26. The method according to claim 15, and comprising configuring the data compression module to produce compressed confidence levels using a variable-rate compression configuration, and receiving the compressed confidence levels in multiple data segments having respective data lengths, in accordance with the variable-rate compression configuration.
  • 27. The method according to claim 15, wherein the compression module supports a lossy compression scheme, and comprising estimating the maximal number of errors, depending on a number of errors contributed by the lossy compression scheme.
  • 28. The method according to claim 15, wherein determining the interval comprises determining the interval so as to achieve a specified tradeoff between soft decoding capability and readout throughput.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims the benefit of U.S. Provisional Patent Application 63/184,230, filed May 5, 2021, whose disclosure is incorporated herein by reference.

Related Publications (1)
Number Date Country
20220374308 A1 Nov 2022 US
Provisional Applications (1)
Number Date Country
63184230 May 2021 US