The present application is related to United States patent application entitled “Methods and Apparatus for Computing a Probability Value of a Received Value in Communication or Storage Systems,” and United States patent application entitled “Methods and Apparatus for Computing Soft Data or Log Likelihood Ratios for Received Values in Communication or Storage Systems,” each filed contemporaneously herewith, and International Patent Application Serial No. PCT/US09/49326, entitled “Methods and Apparatus for Read-Side Intercell Interference Mitigation in Flash Memories,” filed Jun. 30, 2009; International Patent Application Serial No. PCT/US09/49333, entitled “Methods and Apparatus for Soft Demapping and Intercell Interference Mitigation in Flash Memories,” filed Jun. 30, 2009; and International Patent Application Serial No. PCT/US09/59077, entitled “Methods and Apparatus for Soft Data Generation for Memory Devices,” filed Sep. 30, 2009, each incorporated by reference herein.
The present invention relates generally to techniques for detection and decoding in storage and communication systems, and more particularly, to methods and apparatus for approximating a probability density function or distribution for a received value in communication or storage systems.
A number of storage and communication systems use analog values to represent information. For example, storage devices use analog memory cells to store an analog value, such as an electrical charge or voltage, to represent the information stored in the cell. In flash memory devices, for example, each analog memory cell typically stores a certain voltage. The range of possible analog values for each cell is typically divided into threshold regions, with each region corresponding to one or more data bit values. Data is written to an analog memory cell by writing a nominal analog value that corresponds to the desired one or more bits.
In multi-level NAND flash memory devices, for example, floating gate devices are employed with programmable threshold voltages in a range that is divided into multiple intervals with each interval corresponding to a different multibit value. To program a given multibit value into a memory cell, the threshold voltage of the floating gate device in the memory cell is programmed into the threshold voltage interval that corresponds to the value.
The analog values stored in memory cells are often distorted. The distortions are typically due to, for example, back pattern dependency (BPD), noise and intercell interference (ICI). For a more detailed discussion of distortion in flash memory devices, see, for example, J. D. Lee et al., “Effects of Floating-Gate Interference on NAND Flash Memory Cell Operation,” IEEE Electron Device Letters, 264-266 (May 2002) or Ki-Tae Park, et al., “A Zeroing Cell-to-Cell Interference Page Architecture With Temporary LSB Storing and Parallel MSB Program Scheme for MLC NAND Flash Memories,” IEEE J. of Solid State Circuits, Vol. 43, No. 4, 919-928, (April 2008), each incorporated by reference herein.
A probability density function (PDF) of a continuous random variable describes the relative probability that a given value of the random variable will occur at a given point in time. The voltage distributions for memory cells, for example, are often expressed using such probability density functions. Generally, the threshold voltage of a cell is the voltage that needs to be applied to the cell so that the cell conducts a certain amount of current. The threshold voltage is a measure for the data stored in a cell.
Statistical noise in a communication system, for example, is typically approximated using a probability density function having a normal distribution (often referred to as a Gaussian distribution). Computing probability values for a Gaussian distribution is relatively straightforward. The above-described distortions in memory devices, however, as well as imperfections in the write process, may cause the probability density function for received values read from the memory to have an arbitrary or non-Gaussian distribution. The computation of probability values for such arbitrary distributions is significantly more complex than for a Gaussian distribution.
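The relative simplicity of the Gaussian case can be illustrated with a short sketch (the numerical values below are illustrative only): evaluating a Gaussian density is a closed-form computation requiring only the mean and variance.

```python
import math

def gaussian_pdf(x, mean=0.0, variance=1.0):
    """Probability density of a Gaussian with the given mean and variance."""
    return math.exp(-(x - mean) ** 2 / (2.0 * variance)) / math.sqrt(2.0 * math.pi * variance)

# Density of a standard Gaussian at its mean: 1/sqrt(2*pi) ~ 0.3989
print(round(gaussian_pdf(0.0), 4))
```

No comparable closed form exists for an arbitrary distribution, which is what motivates the mapping-based approach described below.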
A need therefore exists for improved methods and apparatus for computing probability values for received or stored values that have an arbitrary probability density function. Yet another need exists for improved methods and apparatus for computing probability values for an arbitrary PDF that are based on techniques for computing probability values for a predefined PDF, such as a Gaussian PDF. Among other benefits, such improved techniques for computing probability values for received or stored values will lower the computational complexity of devices incorporating such techniques. A further need exists for methods and apparatus for approximating a probability density function or distribution for a received value in communication or storage systems.
Generally, methods and apparatus are provided for approximating a probability density function or distribution for a received value in communication or storage systems. According to one aspect of the invention, a target distribution is approximated for a received value in a communication system or a memory device, by substantially minimizing a squared error between the target distribution of the received values and a second distribution obtained by mapping a predefined distribution, such as a Gaussian distribution, through a mapping function, wherein the second distribution has an associated set of parameters. The mapping function can be, for example, a piecewise linear function. The target and second distributions can be, for example, probability density functions.
The second distribution has a plurality of segments and each of the segments has an associated set of parameters. The associated set of parameters are used to compute probability values, soft data values or log likelihood ratios for the received values in the communication system or memory device. The associated set of parameters can be stored in at least one table or expressed using an expression.
The squared error can be (i) computed individually for each of the segments and wherein the parameters are selected for a given segment to substantially minimize a squared error for the given segment; or (ii) collectively computed for substantially all of the segments and wherein the parameters are selected for each segment by substantially minimizing the combined squared error.
The target distribution can optionally be obtained through measurements, such as measurements obtained as a function of at least one performance factor. In this manner, the squared error can be substantially minimized for the at least one performance factor to obtain the parameters for the performance factor.
The set of parameters are obtained during an initial parameter characterization phase or adaptively on an intermittent basis. In an adaptive implementation, the set of parameters are adaptively updated using measured or estimated distributions for the received value. The estimated distributions are based on a measurement of parameters for the distribution, a passage of time or a usage counter. The adaptively updated parameters can be used to update one or more tables or to evaluate an expression that accounts for one or more performance factors.
A more complete understanding of the present invention, as well as further features and advantages of the present invention, will be obtained by reference to the following detailed description and drawings.
The present invention provides methods and apparatus for computing probability values for an arbitrary PDF. As previously indicated, the computation of probability values for a Gaussian distribution is relatively straightforward. Generally, for a Gaussian PDF, the log likelihood calculation simplifies to a distance calculation as a Gaussian PDF is completely defined by its mean and variance. The present invention recognizes that when a random variable has an arbitrary distribution, however, the computation of the probability values is significantly more complex.
According to one aspect of the present invention, methods and apparatus are provided for computing probability values for an arbitrary PDF that are based on techniques for computing probability values for a predefined PDF, such as a Gaussian PDF. In one exemplary implementation, the present invention employs a mapping function, φ, to map a Gaussian PDF, fx(x), to an arbitrary probability function of interest, fr(r). In further variations, non-Gaussian PDFs that can be described with predefined analytical functions can be mapped to arbitrary PDFs.
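As a sketch of this mapping idea, the density of r = φ(x) follows from the change-of-variables rule. The linear φ and its inverse below are illustrative assumptions, not the invention's actual mapping; a linear φ is chosen here only because its transformed density can be checked against a known Gaussian.

```python
import math

def phi(x):
    """Illustrative monotone mapping (an assumption, not the patent's phi)."""
    return 2.0 * x + 1.0

def phi_inv(r):
    return (r - 1.0) / 2.0

def phi_deriv(x):
    return 2.0

def density_of_r(r):
    """Change-of-variables density: p(r) = f_x(phi^-1(r)) / |phi'(phi^-1(r))|."""
    x = phi_inv(r)
    fx = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)  # standard Gaussian f_x
    return fx / abs(phi_deriv(x))

# For this linear phi, r is Gaussian with mean 1 and variance 4,
# so density_of_r(1.0) equals 1/sqrt(2*pi*4)
```

For a genuinely arbitrary φ the same formula applies, provided the inverse and derivative can be evaluated, which is exactly the difficulty the piecewise linear approximation addresses.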
The present invention recognizes that it may be difficult in some applications to find the mapping function, φ. Thus, according to another aspect of the invention, the mapping function, φ, is approximated with a piecewise linear function, L, comprised of a plurality of linear segments. Thus, within each segment, a Gaussian approximation is used to compute the probabilities for the variable r with the arbitrary PDF. In a further variation, the mapping function, φ, is defined over a plurality of segments, where each segment has a set of parameters. Thus, within each segment, the probabilities for the variable r with the arbitrary PDF are computed based on a predefined PDF, such as a Gaussian PDF, and the corresponding set of parameters.
The present invention can be used, for example, to compute probability values in memory devices, such as single-level cell or multi-level cell (MLC) NAND flash memory devices. As used herein, a multi-level cell flash memory comprises a memory where each memory cell stores two or more bits. Typically, the multiple bits stored in one flash cell belong to different pages. While the invention is illustrated herein using memory cells that store an analog value as a voltage, the present invention can be employed with any storage mechanism for memory devices, such as the use of voltages, resistances or currents to represent stored data, as would be apparent to a person of ordinary skill in the art. In addition, while the present invention is illustrated herein in the context of exemplary storage systems, the present invention can also be applied to a communication system, as would be apparent to a person of ordinary skill in the art.
The exemplary flash memory block 160 comprises a memory array 170 and one or more buffers 180 that may each be implemented using well-known commercially available techniques and/or products. The memory array 170 may be embodied as a single-level or multi-level cell flash memory, such as a NAND flash memory, a phase-change memory (PCM), an MRAM memory, a NOR flash memory or another non-volatile flash memory. While the invention is illustrated primarily in the context of a multi-level cell NAND flash memory, the present invention can be applied to single-level cell flash memories and other non-volatile memories as well, as would be apparent to a person of ordinary skill in the art.
Multi-Level Cell Flash Memory
In a multi-level cell NAND flash memory, a threshold detector is typically employed to translate the voltage value associated with a particular cell to a predefined memory state.
In the exemplary embodiment shown in
The peaks 210-213 of the threshold voltage distribution graph 200 are labeled with corresponding binary values. Thus, when a cell is in a first state 210, it represents a “1” for the lower bit (also known as least significant bit, LSB) and a “1” for the upper bit (also known as most significant bit, MSB). State 210 is generally the initial unprogrammed or erased state of the cell. Likewise, when a cell is in the second state 211, it represents a “0” for the lower bit and a “1” for the upper bit. When a cell is in the third state 212, it represents a “0” for the lower bit and a “0” for the upper bit. Finally, when a cell is in the fourth state 213, it represents a “1” for the lower bit and a “0” for the upper bit.
Threshold voltage distribution 210 represents a distribution of the threshold voltages Vt of the cells within the array that are in an erased state (“11” data state), with negative threshold voltage levels below 0 volts. Threshold voltage distributions 211 and 212 of memory cells storing “10” and “00” user data, respectively, are shown to be between 0 and 1 volts and between 1 and 2 volts, respectively. Threshold voltage distribution 213 shows the distribution of cells that have been programmed to the “01” data state, with a threshold voltage level set between 2 and 4.5 volts of the read pass voltage.
Thus, in the exemplary embodiment of
It is further noted that cells are typically programmed using well-known Program/Verify techniques. Generally, during a Program/Verify cycle, the flash memory 160 gradually applies an increasing voltage to store a charge in the cell transistor until a minimum target threshold voltage is exceeded. For example, when programming a ‘10’ data state in the example of
As discussed further below, each of the two bits stored in a single memory cell is from a different page. In other words, each bit of the two bits stored in each memory cell carries a different page address. The right side bit shown in
In addition,
As previously indicated, the analog values stored in memory cells and transmitted in communication systems are often distorted, for example, due to back pattern dependency, noise and intercell interference. Thus, the present invention recognizes that the threshold voltage distributions shown in
In the exemplary embodiment shown in
Threshold voltage distribution 410 represents a distribution of the threshold voltages Vt of the cells within the array that are in an erased state (“11” data state), with negative threshold voltage levels below 0 volts. Threshold voltage distributions 411 and 412 of memory cells storing “10” and “00” user data, respectively, are shown to be between 0 and 1 volts and between 1 and 2 volts, respectively. Threshold voltage distribution 413 shows the distribution of cells that have been programmed to the “01” data state, with a threshold voltage level set between 2 and 4.5 volts of the read pass voltage. Thus, in the exemplary embodiment of
For the exemplary threshold voltage distributions shown in
As indicated above, a flash cell array can be further partitioned into even and odd pages, where for example cells with even numbers (such as cells 2 and 4 in
Intercell Interference and Other Disturbances
WL: wordline;
BL: bitline;
BLo: odd bitline;
BLe: even bitline; and
C: capacitance.
ICI, for example, is caused by aggressor cells 720 that are programmed after the target cell 710 has been programmed. The ICI changes the voltage, Vt, of the target cell 710. In the exemplary embodiment, a “bottom up” programming scheme is assumed and adjacent aggressor cells in wordlines i and i+1 cause ICI for the target cell 710. With such bottom-up programming of a block, ICI from the lower wordline i−1 is removed, and up to five neighboring cells contribute to ICI as aggressor cells 720, as shown in
Generally, Vt is the voltage representing the data stored on a cell and obtained during a read operation. Vt can be obtained by a read operation, for example, as a soft voltage value with more precision than the number of bits stored per cell, or as a value quantized to a hard voltage level with the same resolution as the number of bits stored per cell (e.g., 3 bits for 3 bits/cell flash).
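The distinction between hard and soft read resolution can be sketched as a simple uniform quantizer. The voltage window and bit widths below are illustrative only; actual flash read circuits use device-specific reference levels.

```python
def quantize(voltage, v_min, v_max, bits):
    """Uniformly quantize a threshold voltage into 2**bits levels,
    clamping values outside the [v_min, v_max) window."""
    levels = 2 ** bits
    step = (v_max - v_min) / levels
    idx = int((voltage - v_min) / step)
    return max(0, min(levels - 1, idx))

# Hard read for a 2 bits/cell device: 4 levels; soft read: e.g. 6 bits -> 64 levels
v = 1.3
print(quantize(v, -1.0, 4.5, 2), quantize(v, -1.0, 4.5, 6))
```

The extra soft-read bits carry the position of Vt within its threshold region, which is the information the probability computations below exploit.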
For a more detailed discussion of ICI mitigation techniques, see, for example, International Patent Application Serial No. PCT/US09/49326, entitled “Methods and Apparatus for Read-Side Intercell Interference Mitigation in Flash Memories;” or International Patent Application Serial No. PCT/US09/49327, entitled “Methods and Apparatus for Write-Side Intercell Interference Mitigation in Flash Memories,” each incorporated by reference herein.
While the present invention is illustrated in the context of probability computations for a soft demapper in a flash control system, the present invention can be employed in any system where probabilities are computed for a received value in a storage or communications system, as would be apparent to a person of ordinary skill in the art. For example, the present invention can be employed in MAP detectors and iterative decoders and/or demappers that use probabilities, such as those based on LDPC coding, turbo coding, the Soft-Output Viterbi Algorithm (SOVA) or BCJR algorithm. For a more detailed discussion of exemplary LDPC decoders, see, for example, U.S. Pat. No. 7,647,548, incorporated by reference herein. For a more detailed discussion of exemplary SOVA detectors, see, for example, J. Hagenauer and P. Hoeher, “A Viterbi Algorithm with Soft-decision Outputs and its Applications,” IEEE Global Telecommunications Conference (GLOBECOM), vol. 3, 1680-1686 (November 1989). For a more detailed discussion of exemplary BCJR detectors, see, for example, L. Bahl, J. Cocke, F. Jelinek, and J. Raviv, “Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate,” IEEE Trans. on Information Theory, Vol. IT-20(2), 284-87 (March 1974). For a more detailed discussion of Turbo coding, see, for example, J. Hagenauer, E. Offer and L. Papke, “Iterative decoding of binary block and convolutional codes,” IEEE Transactions on Information Theory, 429-445 (March 1996).
The present invention provides probability computation techniques for communication and storage systems, such as flash memories. In one example, probability values are computed based on data read by the flash memory, where the read data values have a distribution that is arbitrary or non-Gaussian. The generated probability information can optionally be used for soft decision decoding. As used herein, the term “probability density functions” shall include probability density functions, distributions and approximations thereof, such as histograms and Gaussian approximations.
The exemplary read channel 825 comprises a signal processing unit 830, an encoder/decoder block 840 and one or more buffers 845. It is noted that the term “read channel” can encompass the write channel as well. In an alternative embodiment, the encoder/decoder block 840 and some buffers 845 may be implemented inside the flash controller 820. The encoder/decoder block 840 and buffers 845 may be implemented, for example, using well-known commercially available techniques and/or products, as modified herein to provide the features and functions of the present invention.
The exemplary signal processing unit 830 comprises one or more processors that implement one or more probability computation processes 835, discussed further below in conjunction with, for example,
It is noted that the probability computation process 835 may optionally be implemented in the flash memory block 860, as would be apparent to a person of ordinary skill in the art. For a more detailed discussion of this alternate implementation of the signal processing unit in the flash memory, see, International Patent Application Serial No. PCT/US09/59077, entitled “Methods and Apparatus for Soft Data Generation for Memory Devices,” filed Sep. 30, 2009 and incorporated by reference herein.
The exemplary signal processing unit 830 may also include one or more soft demapper and/or soft data generation processes that utilize the computed probability values. The interface 850 may optionally be implemented, for example, in accordance with the teachings of International PCT Patent Application Serial No. PCT/US09/49328, entitled “Methods and Apparatus for Interfacing Between a Flash Memory Controller and a Flash Memory Array”, filed Jun. 30, 2009 and incorporated by reference herein, which increases the information-carrying capacity of the interface 850 using, for example, Double Data Rate (DDR) techniques. During a write operation, the interface 850 transfers the program values to be stored in the target cells, typically using page or wordline level access techniques. For a more detailed discussion of exemplary page or wordline level access techniques for writing and reading, see, for example, International Patent Application Serial No. PCT/US09/36810, filed Mar. 11, 2009, entitled “Methods and Apparatus for Storing Data in a Multi-Level Cell Flash Memory Device with Cross-Page Sectors, Multi-Page Coding and Per-Page Coding,” incorporated by reference herein.
During a read operation, the interface 850 transfers hard and/or soft read values that have been obtained from the memory array 870 for target and aggressor cells. For example, in addition to read values for the page with the target cell, read values for one or more adjacent pages in upper/lower wordlines or neighboring even or odd bit lines are transferred over the interface bus. In the embodiment of
Soft Data Generation Using Probability Computations
The flash memory 860 optionally provides hard or soft read values to the flash control system 810. Enhanced soft data such as log-likelihood ratios is generated from the read values provided by the flash memory 860 to thereby improve the decoding performance in the flash control system 810. In an implementation using soft read values, the flash memory system 860 transmits the measured voltages or a quantized version of the measured voltages to the flash control system 810 as soft information, where a larger number of bits is used to represent the measured voltage than the number of bits stored in the memory cell.
The exemplary flash control system 920 comprises a probability computation block 1200 (
As shown in
It is noted that the parameters ki,bi of the piecewise linear function φ that are determined by the PDF approximation process 1300 for each segment, can optionally be stored in a look-up table, as discussed further below in conjunction with
Soft Demapper/Soft Data Generator 1000
The obtained probability values are then used during step 1030 to compute the LLR(s). The LLR(s) are discussed below in the section entitled “Computation of Soft Data (LLRs).” The computed LLRs are then provided to the decoder 950 during step 1040, or optionally to an interleaver or deinterleaver. For a discussion of suitable interleavers and deinterleavers, see, for example, International Patent Application Serial No. PCT/US09/59077, entitled “Methods and Apparatus for Soft Data Generation for Memory Devices,” filed Sep. 30, 2009, and incorporated by reference herein. The computed LLRs may optionally be used to make a final decision on the read data, for example, based on the sign of the LLRs.
Segment-Dependent LLR Computation Block 975/1050
The segment-dependent LLR computation process 1050 then identifies the segment, i, associated with the read value(s) for a given state during step 1070. The parameters, ki,bi, associated with the identified segment and given state are obtained during step 1080. Steps 1070 and 1080 are optionally repeated for additional states (and typically for all states). It is noted that the segments or parameters can be pattern-dependent as discussed further below in conjunction with
The obtained parameters for at least one state are then used during step 1090 to compute segment-dependent LLR(s) as described in the section entitled “Computation of Soft Data (LLRs).” The computed LLRs are then provided to the decoder 950 during step 1095, or optionally to an interleaver or deinterleaver. The computed LLRs may optionally be used to make a final decision on the read data, for example, based on the sign of the LLRs.
Probability Computation Process
In alternate embodiments, other predefined PDFs can be used. For flash memory devices, it is convenient to use Gaussian PDFs since the arbitrary PDFs associated with threshold voltages can be approximated well with Gaussian PDFs as shown below. Assume that the random variable, r, has a PDF that is too complex for practical probability calculations. Now, assume that a mapping function, φ, exists that maps a random variable, x, with Gaussian distribution having a mean of 0 and a variance of 1 to the random variable of interest, r, as follows:
r=φ(x)
Since the random variable x is assumed to have a Gaussian distribution with mean 0 and variance 1, its density can be described as:
While the exemplary embodiments assume that the Gaussian distribution has a mean of 0 and a variance of 1, the invention can be generalized to the case where the Gaussian distribution has a different mean and variance.
The mapping function, φ, thus allows the probability of the random variable, r, to be calculated using the Gaussian distribution having a mean of 0 and a variance of 1. Other Gaussian or predefined functions can be used without a loss of generality. The probability density of a value r can be computed based on an inverse of the mapping function, φ, as follows:
where |φ′(x)| is the absolute magnitude of the derivative of φ(x). It is noted that in this application, the probability density for the random variable r is denoted both by p(r) and fr(r).
Assuming that x has a Gaussian distribution with mean 0 and variance 1, this probability density can be expressed as:
It is noted that some communications processes and systems employ a log value of the probability density, referred to as a “log likelihood,” rather than a true probability value. Thus, equation (3) can be expressed as follows:
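The equation bodies referenced in this passage did not survive in the text. Under the stated assumptions (x standard Gaussian, r = φ(x)), the standard change-of-variables expressions for equations (2) through (4) would read as follows (a reconstruction from the surrounding definitions, not the original figures):

```latex
p(r) = \frac{f_x\left(\varphi^{-1}(r)\right)}{\left|\varphi'\left(\varphi^{-1}(r)\right)\right|}
\quad (2)

p(r) = \frac{1}{\sqrt{2\pi}}\,
       \frac{\exp\left(-\tfrac{1}{2}\left[\varphi^{-1}(r)\right]^2\right)}
            {\left|\varphi'\left(\varphi^{-1}(r)\right)\right|}
\quad (3)

\log p(r) = -\tfrac{1}{2}\log(2\pi)
            -\tfrac{1}{2}\left[\varphi^{-1}(r)\right]^2
            -\log\left|\varphi'\left(\varphi^{-1}(r)\right)\right|
\quad (4)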
As previously indicated, when the mapping function, φ, cannot be practically obtained, the mapping function, φ, can be approximated with a piecewise linear function, L. Within each segment of the piecewise linear function, L, a Gaussian approximation is employed. As discussed hereinafter, the piecewise linear function is chosen such that the PDF of the variable obtained by applying the piecewise linear function, L, to the random variable x with the Gaussian distribution matches the PDF of the random variable of interest, r.
First, the set of n+1 segments for the piece-wise linear function, L, is chosen in the domain of x, as follows:
(−∞,a1,a2, . . . an,∞), (5)
where each linear segment has an associated set of parameters ki,bi, and boundaries defined by its endpoints (ai, ai+1). In one exemplary implementation, the set of parameters ki,bi, are stored for each segment. Thus, for a given segment, i, the random variable, r, can be defined, for example, in slope-intercept form, as follows:
r=kix+bi,ai≦x<ai+1 (6)
Alternatively, the probability density of the random variable, r, can be expressed using the parameters, ki,bi, of the linear segment, as follows:
The corresponding log likelihoods can be computed as:
As shown in
Thereafter, the probability computation process 1200 identifies the segment, i, of the piece-wise linear function, L, associated with the read value(s) during step 1220 for a given state. The segment i is chosen such that the received value r satisfies the following condition:
kiai+bi≦r<kiai+1+bi (9)
The parameters, ki,bi, associated with the identified segment and given state are obtained during step 1230. As discussed further below in conjunction with
Finally, the probability value for the read data r is calculated during step 1240 for the given state (for example using equation (7) in the exemplary embodiment).
Corresponding log-likelihoods can be computed using equation (8).
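The segment search of equation (9) followed by the per-segment Gaussian evaluation of equation (7) can be sketched as follows. The boundaries ai and the parameters ki, bi below are hypothetical values, chosen only so that the piecewise linear mapping is continuous; in practice they would come from the look-up tables or expressions described herein.

```python
import math

# Illustrative segment boundaries a_i (in the x domain) and per-segment
# slope/intercept (k_i, b_i); hypothetical values forming a continuous mapping.
A = [float('-inf'), -1.0, 0.0, 1.0, float('inf')]   # n+1 boundaries
K = [0.5, 1.0, 2.0, 0.5]                             # k_i per segment
B = [-0.5, 0.0, 0.0, 1.5]                            # b_i per segment

def find_segment(r):
    """Return index i satisfying equation (9): k_i*a_i + b_i <= r < k_i*a_{i+1} + b_i."""
    for i in range(len(K)):
        lo = K[i] * A[i] + B[i]
        hi = K[i] * A[i + 1] + B[i]
        if lo <= r < hi:
            return i
    raise ValueError("no segment matches r")

def probability(r):
    """Equation (7): per-segment Gaussian density with mean b_i, variance k_i^2."""
    i = find_segment(r)
    k, b = K[i], B[i]
    return math.exp(-(r - b) ** 2 / (2.0 * k * k)) / (abs(k) * math.sqrt(2.0 * math.pi))
```

Within each segment the computation reduces to a single Gaussian evaluation, which is the complexity benefit the invention targets.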
Computation of Soft Data (LLRs)
The computation of log-likelihood ratios in step 1030 using probability values computed based on read values is described in further detail in International Patent Application Serial No. PCT/US09/49333, entitled “Methods and Apparatus for Soft Demapping and Intercell Interference Mitigation in Flash Memories”, filed Jun. 30, 2009, and in International Patent Application Serial No. PCT/US09/59077, entitled “Methods and Apparatus for Soft Data Generation for Memory Devices,” filed Sep. 30, 2009, incorporated by reference herein. In one embodiment, for any number of bits per cell, the extrinsic LLR for bit Ci is computed by the Soft Demapper/Soft Data Generator 1000 as
where:
r: received signal
s: original stored state or level given by stored bits (c0, c1, . . . cm)
ci: coded bit
m bits per cell
Le(Ci): extrinsic LLR
Xc
and where La(Ci) is for example provided by the decoder 950, such as an LDPC decoder. In the first iteration, La(Ci) can be initialized to 0. The probability values (probability densities or probabilities) p(r|s) are computed for state s as described above using equations 2, 3 or 7, where the computed probability value p(r) for a state s is inserted as p(r|s) in equation 10. International Patent Application Serial No. PCT/US09/49333, entitled “Methods and Apparatus for Soft Demapping and Intercell Interference Mitigation in Flash Memories”, filed Jun. 30, 2009, and International Patent Application Serial No. PCT/US09/59077, entitled “Methods and Apparatus for Soft Data Generation for Memory Devices,” filed Sep. 30, 2009, describe also alternative LLR computation techniques that can be used here as well.
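A simplified sketch of this LLR computation follows, with the a priori terms La(Ci) initialized to 0 as noted above. The state-to-bits table matches the four-state mapping described earlier; the per-state densities are hypothetical placeholders for values produced by the probability computation process.

```python
import math

# 2 bits/cell: state s stores (lower bit c0, upper bit c1), per the
# four-state mapping described above (state 0 = erased "11").
STATE_BITS = {0: (1, 1), 1: (0, 1), 2: (0, 0), 3: (1, 0)}

def llr(p_r_given_s, bit_index):
    """Simplified LLR for bit c_i with zero a priori terms:
    log( sum of p(r|s) over states with c_i = 0
         / sum of p(r|s) over states with c_i = 1 )."""
    num = sum(p for s, p in p_r_given_s.items() if STATE_BITS[s][bit_index] == 0)
    den = sum(p for s, p in p_r_given_s.items() if STATE_BITS[s][bit_index] == 1)
    return math.log(num / den)

# Hypothetical densities p(r|s) for one read value r
p = {0: 0.01, 1: 0.70, 2: 0.25, 3: 0.04}
print(round(llr(p, 0), 3))  # LLR for the lower bit
```

The sign of the resulting LLR then yields the hard decision, and its magnitude the reliability, consistent with the decoding flow described above.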
Pattern-dependent LLRs for one or more soft values, r, for the target cell and one or more values,
where
The pattern
The probability values p(r|s,
In the alternative embodiment of
where σ(s)=ki(s), and E{r|s}=bi(s). The values ki(s) and bi(s) are the parameters k and b that were obtained for state s for the segment i that was identified based on the read value r. In an alternative embodiment, pattern-dependent, segment-dependent LLRs can be computed as follows:
where σ(s,
PDF Estimation Process
If the targeted distribution p(r)=fr(r) is expressed in closed parametric form, then the optimization problem can be solved in a closed form and one can obtain the set of values (ki,bi) that minimize the squared error. The coefficients are obtained by minimizing the following function with respect to (ki,bi), where frG(r; bi, ki2) is the probability density function of a random variable with Gaussian distribution, mean bi and variance ki2.
Generally, the above equation (14) computes the squared errors for every segment and sums the squared errors for all segments. The first term performs an integration from minus infinity to the first segment point, a1, the second term performs a sum of the integrals for all segments between the first segment point, a1, to the final segment point, an; and the final term performs an integration from the final segment point, an, to positive infinity.
Thus, as shown in
The exemplary PDF approximation process 1300 then obtains a second distribution based on a predefined distribution, such as a Gaussian distribution during step 1320. The second distribution can be obtained for example by transforming a Gaussian distribution with mean 0 and variance 1 using the mapping function φ and parameters k, b. The mapping function φ can be for example a piecewise-linear function, where each segment i has corresponding parameters ki and bi. In an alternate embodiment, another predefined distribution can be used instead of the Gaussian distribution, and other parameters instead of k and b can be used.
The parameters k, b that minimize the squared error between the first and second distributions are identified during step 1330. The parameters can be identified for each segment or globally. In a segment-based identification, the parameters that minimize the squared error between the first and second distributions for a given segment are selected for that segment, and the process is repeated for all segments. In a global identification, the total squared error for all segments is computed as a sum as described in equation 14, and the parameters k and b are then selected for all segments jointly such that the total squared error is minimized. In an alternate embodiment where other parameters are used instead of k and b, these other parameters are optimized such that the squared error between the first and second distributions is minimized as described here.
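The segment-based identification can be sketched as a coarse grid search over candidate (k, b) pairs. The sample points, grids and target density below are illustrative assumptions; a closed-form or iterative optimizer could be substituted, as the surrounding text notes.

```python
import math

def gauss(r, b, k):
    """Gaussian density with mean b and variance k**2."""
    return math.exp(-(r - b) ** 2 / (2.0 * k * k)) / (abs(k) * math.sqrt(2.0 * math.pi))

def fit_segment(rs, targets, b_grid, k_grid):
    """Pick (k, b) minimizing the squared error between target density
    samples and a Gaussian with mean b, variance k**2 over this segment."""
    best = None
    for b in b_grid:
        for k in k_grid:
            err = sum((t - gauss(r, b, k)) ** 2 for r, t in zip(rs, targets))
            if best is None or err < best[0]:
                best = (err, k, b)
    return best[1], best[2]

# Hypothetical target samples: a Gaussian with mean 1.0, std 0.5 on one segment
rs = [0.6 + 0.1 * i for i in range(9)]
targets = [gauss(r, 1.0, 0.5) for r in rs]
k, b = fit_segment(rs, targets, b_grid=[0.8, 0.9, 1.0, 1.1], k_grid=[0.4, 0.5, 0.6])
print(k, b)  # recovers 0.5 1.0
```

Repeating the same search per segment yields the full parameter set, or the per-segment errors can be summed and minimized jointly for the global variant.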
As discussed further below in conjunction with
The number and/or locations of the segments can optionally be changed during step 1340 if there is an insufficient match (based on a predefined standard) between the first and second distributions. The process can then be repeated with the modified segment number and/or location. For example, the number of segments can be increased if there is an insufficient match.
In many practical situations, however, the targeted distribution fr(r) cannot be expressed in closed parametric form but, rather, is obtained through measurements. The parameters k, b can be obtained based on the measurements in advance during a parameter characterization phase, for example, during product development, product prototyping or manufacturing tests, or adaptively on an intermittent or periodic basis. In all cases, the set of values (ki,bi) in step 1330 that minimizes the squared error can be obtained iteratively through computer simulations.
In an adaptive parameter characterization implementation, the parameters k, b can be obtained based on measured or estimated distributions of the received data. The estimated distributions can be based, for example, on a measurement of parameters for the distribution, such as a mean and variance of the distribution, or on a passage of time or a usage counter (for example for program, erase or read cycles). The adaptively updated parameters k, b are then used to update the look-up tables 1800 or used to evaluate the expressions that account for performance factors (as discussed further below in conjunction with
For a more detailed discussion of performance factors and their influence on memories and/or communication systems over time, see, for example, International Patent Application Serial No. PCT/US09/59069, entitled “Methods and Apparatus for Soft Data Generation for Memory Devices Based on Hard Data and Performance Factor Adjustment,” filed Sep. 30, 2009 and incorporated by reference herein.
Finally, a computation block 1200 computes the probability value for the received value, r, using equation (2), (3) or (7) and the process described above in conjunction with
The parameters ki,bi, for each linear segment 1610-i are obtained by the PDF approximation process 1300.
In the exemplary embodiment of
As shown in
The parameters are stored in the exemplary look-up table 1800 in
The probability parameter look-up table 1800 could also indicate additional location-specific performance factors and corresponding parameters, such as separate parameters for even/odd bit lines and/or different wordline locations within a memory array. The exemplary table 1800 is shown for a single state (0) and performance factor (P/E Cycles). It is noted that the exemplary table 1800 can optionally be implemented as a multi-dimensional table to account for pattern-dependency (e.g., the aggressor values in the vicinity of a given target cell) and/or additional performance factors, such as the number of read cycles, process corner and temperature changes. Generally, the probability parameter look-up table 1800 can be extended to include an entry containing the parameters (k and b) for each combination of (1) considered performance factors; (2) state (e.g., 11, 10, 00, 01); and (3) pattern (e.g., the aggressor values in the vicinity of a given target cell). The number and/or location of the segments can be unique for each state and/or each pattern.
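Purely as an illustration of how such a multi-dimensional table could be organized, the sketch below keys the per-segment (ki, bi) parameters on a quantized P/E-cycle count, the programmed state, and an aggressor pattern; all keys, bucket edges and parameter values are hypothetical:

```python
# Hypothetical multi-dimensional parameter table keyed by (P/E-cycle bucket,
# programmed state, aggressor pattern); each entry holds per-segment (k_i, b_i).
param_lut = {
    (0, "11", "low"):  [(1.00, 0.00), (0.95, 0.10)],
    (0, "11", "high"): [(1.05, 0.02), (0.93, 0.12)],
    (1, "11", "low"):  [(1.10, 0.05), (0.90, 0.20)],
    (1, "11", "high"): [(1.15, 0.07), (0.88, 0.22)],
}

def lookup_params(pe_cycles, state, pattern, bucket_edges=(1000,)):
    """Quantize the P/E-cycle count into a bucket, then fetch the segment
    parameters for the given state and aggressor pattern."""
    bucket = sum(pe_cycles >= edge for edge in bucket_edges)
    return param_lut[(bucket, state, pattern)]

params = lookup_params(1500, "11", "low")   # 1500 cycles falls in bucket 1
```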
Rather than storing the parameters in one or more table(s) 1800, it is noted that the parameters can alternatively be computed in real-time based on an expression that accounts for performance factors such as a number of program/erase cycles, retention time, temperature, temperature changes, etc., as would be apparent to a person of ordinary skill in the art.
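One way such a real-time expression might look is sketched below; the functional form, coefficient names and values are entirely hypothetical assumptions for illustration, as the actual expression would be derived from device characterization:

```python
import math

def params_from_expression(pe_cycles, retention_hours,
                           k0=1.0, b0=0.0, alpha=1.0e-5, beta=1.0e-3):
    """Purely illustrative adjustment of (k, b) as a function of two
    performance factors; real coefficients would come from device
    characterization, and additional factors (temperature, read cycles,
    process corner) could be folded in the same way."""
    k = k0 + alpha * pe_cycles                    # drift with program/erase wear
    b = b0 + beta * math.log1p(retention_hours)   # drift with retention time
    return k, b
```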
Whether the parameters are stored in one or more table(s) 1800, or computed in real-time based on an expression, the parameters can optionally be updated over time, as discussed above in conjunction with
For a more detailed discussion of pattern-dependent and location-specific performance factors, see, for example, International Patent Application Serial No. PCT/US09/59077, entitled “Methods and Apparatus for Soft Data Generation in Flash Memories,” filed on Sep. 30, 2009, incorporated by reference herein.
A look-up table, such as the look-up table 1800 of
Generally, each probability density function in
In a further variation, the computation of the probability values for random variables having an arbitrary distribution can be performed using one or more look-up tables (LUTs), where the probability values are pre-computed for a limited number of sample points. This approach, however, may introduce quantization errors, as only a finite number of values is chosen to represent the PDF. On the other hand, the computational complexity and storage requirements may be significantly reduced.
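A minimal sketch of this LUT variation, assuming a uniform grid of sample points and nearest-point rounding; the PDF and table size below are hypothetical:

```python
import math

def build_pdf_lut(pdf, lo, hi, n):
    """Pre-compute pdf values at n uniformly spaced sample points on [lo, hi]."""
    step = (hi - lo) / (n - 1)
    return [pdf(lo + i * step) for i in range(n)], step

def lut_pdf(r, table, lo, step):
    """Return the stored value at the nearest sample point; the rounding here
    is the source of the quantization error noted above."""
    idx = int(round((r - lo) / step))
    idx = max(0, min(len(table) - 1, idx))   # clamp out-of-range reads
    return table[idx]

# Hypothetical example: a unit Gaussian PDF tabulated at 64 sample points.
pdf = lambda x: math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
table, step = build_pdf_lut(pdf, -4.0, 4.0, 64)
approx = lut_pdf(0.3, table, -4.0, step)
```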
Process, System and Article of Manufacture Details
While a number of flow charts herein describe an exemplary sequence of steps, it is also an embodiment of the present invention that the sequence may be varied. Various permutations of the algorithm are contemplated as alternate embodiments of the invention. While exemplary embodiments of the present invention have been described with respect to processing steps in a software program, as would be apparent to one skilled in the art, various functions may be implemented in the digital domain as processing steps in a software program, in hardware by circuit elements or state machines, or in a combination of both software and hardware. Such software may be employed in, for example, a digital signal processor, application specific integrated circuit, micro-controller, or general-purpose computer. Such hardware and software may be embodied within circuits implemented within an integrated circuit.
Thus, the functions of the present invention can be embodied in the form of methods and apparatuses for practicing those methods. One or more aspects of the present invention can be embodied in the form of program code, for example, whether stored in a storage medium, loaded into and/or executed by a machine, or transmitted over some transmission medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention. When implemented on a general-purpose processor, the program code segments combine with the processor to provide a device that operates analogously to specific logic circuits. The invention can also be implemented in one or more of an integrated circuit, a digital signal processor, a microprocessor, and a micro-controller.
As is known in the art, the methods and apparatus discussed herein may be distributed as an article of manufacture that itself comprises a computer readable medium having computer readable code means embodied thereon. The computer readable program code means is operable, in conjunction with a computer system, to carry out all or some of the steps to perform the methods or create the apparatuses discussed herein. The computer readable medium may be a recordable medium (e.g., floppy disks, hard drives, compact disks, memory cards, semiconductor devices, chips, application specific integrated circuits (ASICs)) or may be a transmission medium (e.g., a network comprising fiber-optics, the world-wide web, cables, or a wireless channel using time-division multiple access, code-division multiple access, or other radio-frequency channel). Any medium known or developed that can store information suitable for use with a computer system may be used. The computer-readable code means is any mechanism for allowing a computer to read instructions and data, such as magnetic variations on a magnetic media or height variations on the surface of a compact disk.
The computer systems and servers described herein each contain a memory that will configure associated processors to implement the methods, steps, and functions disclosed herein. The memories could be distributed or local and the processors could be distributed or singular. The memories could be implemented as an electrical, magnetic or optical memory, or any combination of these or other types of storage devices. Moreover, the term “memory” should be construed broadly enough to encompass any information able to be read from or written to an address in the addressable space accessed by an associated processor. With this definition, information on a network is still within a memory because the associated processor can retrieve the information from the network.
It is to be understood that the embodiments and variations shown and described herein are merely illustrative of the principles of this invention and that various modifications may be implemented by those skilled in the art without departing from the scope and spirit of the invention.
Number | Name | Date | Kind |
---|---|---|---|
5867429 | Chen et al. | Feb 1999 | A |
6134141 | Wong | Oct 2000 | A |
6353679 | Cham et al. | Mar 2002 | B1 |
6522580 | Chen et al. | Feb 2003 | B2 |
6892163 | Herzog et al. | May 2005 | B1 |
7388781 | Litsyn et al. | Jun 2008 | B2 |
7647548 | Haratsch et al. | Jan 2010 | B2 |
20040057284 | Widmer et al. | Mar 2004 | A1 |
20060015802 | Hocevar | Jan 2006 | A1 |
20060221714 | Li et al. | Oct 2006 | A1 |
20070089034 | Litsyn et al. | Apr 2007 | A1 |
20070171714 | Wu et al. | Jul 2007 | A1 |
20070189073 | Aritome | Aug 2007 | A1 |
20070208905 | Litsyn et al. | Sep 2007 | A1 |
20070300130 | Gorobets | Dec 2007 | A1 |
20080019188 | Li | Jan 2008 | A1 |
20080123420 | Brandman et al. | May 2008 | A1 |
20080151617 | Alrod et al. | Jun 2008 | A1 |
20080291724 | Litsyn et al. | Nov 2008 | A1 |
20090043951 | Shalvi et al. | Feb 2009 | A1 |
20090220016 | Dangl et al. | Sep 2009 | A1 |
20090319868 | Sharon et al. | Dec 2009 | A1 |
20110051521 | Levy et al. | Mar 2011 | A1 |
20110246136 | Haratsch et al. | Oct 2011 | A1 |
20110246859 | Haratsch et al. | Oct 2011 | A1 |
Number | Date | Country |
---|---|---|
WO 2007132453 | Nov 2007 | WO |
WO 2007149678 | Dec 2007 | WO |
WO 2008042593 | Apr 2008 | WO |
WO 2008042598 | Apr 2008 | WO |
WO 2009114618 | Sep 2009 | WO |
WO 2010002941 | Jan 2010 | WO |
WO 2010002942 | Jan 2010 | WO |
WO 2010002943 | Jan 2010 | WO |
WO 2010002948 | Jan 2010 | WO |
WO 2010039859 | Apr 2010 | WO |
WO 2010039866 | Apr 2010 | WO |
Entry |
---|
Lee et al., “Effects of Floating-Gate Interference on NAND Flash Memory Cell Operation,” IEEE Electron Device Letters, pp. 264-266 (May 2002). |
Park, et al., “A Zeroing Cell-to-Cell Interference Page Architecture With Temporary LSB Storing and Parallel MSB Program Scheme for MLC NAND Flash Memories,” IEEE J. of Solid State Circuits, vol. 43, No. 4, 919-928, (Apr. 2008). |
Hagenauer et al., “A Viterbi Algorithm with Soft-decision Outputs and its Applications,” IEEE Global Telecommunications Conference (GLOBECOM), vol. 3, 1680-1686 (Nov. 1989). |
Bahl et al., “Optimal Decoding of Linear Codes for Minimizing Symbol Error Rate,” IEEE Trans. on Information Theory, vol. IT-20(2), 284-87 (Mar. 1974). |
Hagenauer et al., “Iterative decoding of binary block and convolutional codes,” IEEE Transactions on Information Theory, 429-445 (Mar. 1996). |
Number | Date | Country | |
---|---|---|---|
20110246842 A1 | Oct 2011 | US |