Flash memories are currently by far the most widely used type of non-volatile memory (NVM), and phase-change memories (PCMs) are the most promising emerging NVM technology. For a general discussion of NVM, see materials by WEB-FEET RESEARCH, INC. (available at the Internet address of www.web-feetresearch.com). For a discussion of PCM technology, see G. W. BURR et al., Journal of Vacuum Science and Technology, vol. 28, no. 2, at pp. 223-262 (2010). Flash memories and PCM have many important common properties, including noisy cell programming, limited cell endurance, asymmetric cost in changing a cell state in different directions, the drifting of cell levels after programming, cell heterogeneities, and the like. See the Burr article referenced above. As representative NVMs, they have been, and likely will continue to be, widely used in mobile, embedded, and mass-storage systems. They are partially replacing hard drives and main memories, and are fundamentally changing some computer architectures.
Both PCMs and flash memories use multi-level cells (MLCs) to store data, and increasing their storage capacity is extremely important for their development and commercial application. Current NAND flash memories are typically constructed with 4-level cells in commercially available products, and can achieve 8-level to 16-level cell construction in prototype devices. For PCMs, 4-level cells have been sampled. Each level in an MLC represents a different number that can be stored in one or more iterations of data writing, which is referred to as programming. The pattern of 0's and 1's stored in each cell for a particular level corresponds to a binary representation of data. For flash memories, when the top-most cell level has been programmed for cells in the same block, all the cells in the block must be erased and the data programming operation is started over for programming a new data value. For example, a 4-level flash memory cell can be programmed four times (meaning that four different data values can be stored, at Level 0, Level 1, Level 2, and Level 3) before the cell must be erased to start the programming over at Level 0.
The MLC technology for phase-change memories (PCM) and flash memories faces very serious challenges when more levels are added to cells. As noted, these additional cell levels are needed for higher storage capacity. The challenges to programming cell levels accurately with an increasing number of cell levels are mainly due to: (1) Programming noise. The process of programming cells to change their states is a noisy process (see, e.g., the Burr article referenced previously, and P. CAPPELLETTI, C. GOLLA, P. OLIVO AND E. ZANONI (Eds.), Flash Memories, Kluwer Academic Publishers, 1st Edition (1999)); (2) Cell heterogeneity. Cells display significantly heterogeneous properties due to heterogeneity in cell material and geometry, especially as cell sizes scale down (see the Cappelletti reference above, and see A. JAGMOHAN et al., Proc. International Conference on Communications (ICC), Cape Town, South Africa (2010)). Even if the same voltage is used to program cells, their cell levels may change differently. See, e.g., H. T. LUE et al., Proc. IEEE Int. Symp. on Reliability Physics, vol. 30, no. 11, pp. 693-694 (2008). This poses a significant challenge for parallel programming, because common voltages are used to program cells in parallel for high write speed, but the heterogeneity of the cells causes them to be programmed differently; (3) Necessity or preference to program cells without overshooting. For flash memories, removing charge from any cell requires a block erasure, which can be very costly in terms of device resources; so when cells are programmed, a very conservative approach is typically used to gradually increase the cell levels without overshooting. See, e.g., the Cappelletti reference above. For PCMs, increasing a cell's resistance requires melting the cell to return it to the amorphous state; so to crystallize a cell for a higher level, it is strongly preferred to cautiously increase the level without overshooting.
See, e.g., the Burr article referenced above. Since MLC uses fixed cell levels to represent data, the gaps between cell levels must be sufficiently large to tolerate the worst-case performance of programming. Similar difficulties are confronted by PCMs and flash memories in attempting to increase the levels available for programming.
New techniques for information storage in memory devices would be beneficial by increasing the number of data values that can be programmed for the cells in the memory device.
A memory device having a plurality of cells, each of which stores a value, where the values of the cells are mapped to discrete levels and the discrete levels represent data, is programmed by determining a maximum number of cell levels in the memory device, and determining the set of values that are associated with each of the cell levels. The maximum number of cell levels for the memory device is determined by an adaptive programming system connected to the memory device, based on a plurality of cell values attained by at least one cell of the memory device, in response to voltage applied by the adaptive programming system to the cells of the memory device. The adaptive programming system associates, for each of the cell levels, a different set of cell values of the plurality of cell values attained by the cells to which voltage is applied. This technique increases the number of cell levels that can be configured in a memory device as compared with conventional techniques, and increases the number of data values that can be programmed into the cells of a memory device.
The techniques described herein can be applied to flash memory devices, or similar devices that are programmed with data according to cell voltage level, and also can be applied to phase-change memory (PCM) devices, memristor cells, or similar devices that are programmed with data according to cell resistance value. The techniques can also be applied to memory devices that are configured as patterned-cell devices, which are described further below.
Coding schemes for the techniques described herein can be developed in which the cell levels are mapped to codewords for encoding and decoding data in the memory device. The coding schemes can include constant-weight codes, non-constant-weight codes, and graph connectivity codes.
Other features and advantages of the present invention should be apparent from the following description of exemplary embodiments, which illustrate, by way of example, aspects of the invention.
a) is a schematic cross section of a generic floating gate cell;
a) shows a charge-level distribution of MLC;
a)-(d) relate to a patterned cell with the amorphous-island scheme;
a)-(d) relate to a patterned cell with the crystalline-island scheme.
a)-(c) relate to a data representation for VLC memory.
FIGS. 9(a) and 9(b) illustrate charge-level distributions for an MLC configuration and a VLC configuration, respectively.
a) and (b) illustrate a partial-erasure channel for q levels where q=2 and q=3, respectively.
a)-(d) are illustrations of a patterned cell described by a crystalline-domain model.
FIGS. 15(a) and 15(b) show two types of two-dimensional arrays, a rectangular array and a triangular array, respectively.
FIGS. 19(a) and 19(b) show error models: 19(a) for the case when two diagonal domains overlap, and 19(b) for overreach error.
a)-(c) relate to tiling and coding in rectangular arrays.
a)-(d) relate to programming the cell levels in a VLC, where
a)-(c) relate to changing a stored word in a VLC scheme by increasing cell levels.
a)-(c) relate to the VLC scheme and the patterned cell scheme, where
This Detailed Description is organized according to the following top-level listing of headings:
In the paragraphs below, the text following the headings that are listed above may contain sub-headings, which are not shown above in this top-level listing for simplicity.
This section “A. ADAPTIVE CELL LEVEL PROGRAMMING” describes the technique for adaptively setting the number of levels and adaptively setting the set of cell values of each level in a memory device. Herein, a “memory device” refers to a group of cells in a memory chip that employs the adaptive cell-level programming scheme introduced here. For example, in a flash memory, a memory device can be a page of cells. The cells in a memory chip can be partitioned into many (for example, millions of) such memory devices. That is, the number of levels and the set of cell values of each level in a memory device are dependent on the physical properties of the particular memory device as produced by a memory production process, as well as on the actual values that cells attain during programming; the number of levels and the set of cell values of each level are not determined in advance. The response of the memory device to the cell level programming will determine the number of levels for storing data that are programmed into the memory device. Once the memory level programming is complete, the memory device stores data using the cell levels attained during this memory level programming process. The number of cell levels and the set of cell values that belong to a level may vary from memory device to memory device (that is, from one group of cells to another group of cells) in the same memory chip; and they may also vary from one programming process to another programming process for the same memory device.
To facilitate discussion, first define two terms for memory cells: “value” and “level”. The “value” of a cell as used herein refers to the physical state of a cell. Specifically, for nonvolatile memories, cell value can have the following specific meaning:
The “level” of a cell as used herein refers to a set of “values”. Specifically, the levels of cells in a memory device are denoted by Level 0, Level 1, Level 2, Level 3, and so on. Every “level” consists of a set of “values”, and for two different levels—say level i and level j—their two corresponding sets of values do not overlap. Therefore, a value belongs to at most one level. Specifically, for nonvolatile memories, the term “level” with respect to a cell can have the following meaning:
Next, define the concept of a “coding scheme”. A coding scheme as used herein refers to a mapping from the levels of a group of cells to data. That is, the levels of a group of cells (which together are called a codeword) are used to represent data. Note that the mapping is from cell levels to data, not from cell values to data. So if a cell is changed from one value to another value, as long as the two values correspond to the same level, the represented data remain the same.
I. System for Programming Levels in a Memory Device
The memory device 604 may comprise what is referred to herein as a variable-level cell (VLC) construction, or may comprise a patterned cell construction. The cells of the VLC memory and the cells of the patterned cell memory may be constructed according to the technology for conventional nonvolatile memory, such as the flash memory single-level cell (SLC) and multi-level cell (MLC) technology, the phase-change memory (PCM) single-level cell and multi-level cell technology, the memristor single-level cell and multi-level cell technology, etc. Just as flash memories, PCM and memristors use SLC (where two levels are used) and MLC (where a fixed number—which is more than two—of levels are used, such as 4 levels) for storing data, so too may the VLC and patterned-cell constructions use multiple levels (or analogous concepts) for storing data. Those skilled in the art will understand that storing data into the cells of a nonvolatile memory device is referred to as programming the cells. As noted above, conventional practice dictates that the number of levels in a cell is predetermined and is the same for all cells in memories of a particular design. Conventional commercially available memory devices may have, for example, four levels per cell, or even eight levels per cell or sixteen levels per cell in advanced designs.
For the discussion herein, the general case of a memory device constructed in accordance with the invention will often be described with reference to a flash memory having cells that can be set (i.e., programmed) to multiple voltage levels. It should be understood that the techniques described herein can also be applied to other memory constructions, such as phase-change memory and memristor constructions. For example, in the case of the existing phase-change memory technology (which is different from the patterned-cell technology that is proposed and described further below) and in the case of the memristor technology (which at this time is an emerging technology), every memory cell is a piece of material whose electrical resistance can be changed. That is, the resistance of the cell can be programmed. The “resistance” of the cell is used to store data, in the same way that the threshold voltage (which is often referred to as “voltage” or “voltage value”) of a flash memory cell is used to store a data value. That is, the flash memory cell voltage and the phase-change memory or memristor cell resistance are analogous. And the cells in these constructions are programmed by applying voltage to the cells, which is the same technique used to program flash memory cells. In view of these analogous concepts in the various constructions, the term “value” will be used herein to denote the physical state of a cell across all these constructions. For a flash memory, the “value” of a cell is its voltage value. For a phase-change memory and memristor, the “value” of a cell is its resistance. Thus, in the above cases, the term “value” (which refers to a real number that describes the physical state of a cell) will be used with reference to all the different types of memories with these analogous concepts.
For the patterned-cell scheme (which is a new scheme described in this patent), the “value” of a cell is a discrete state of connectivity for the vertices in a graph that is implemented in a cell. For all memories, the term “level” refers to a set of cell values. For example, for a flash memory cell with a programmable voltage value, a level may be a range (that is, a continuous set) of voltage values, such as [0.8, 1.2], meaning all the voltage values between 0.8 volts and 1.2 volts; for a phase-change memory or memristor cell with a programmable resistance value, a level may be a range (that is, a continuous set) of electrical resistance values; for a patterned-cell scheme, a level may be a set of cell values (that is, a level may be a set of discrete connectivity states).
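As a non-limiting illustration of levels as sets of values, the following sketch (in Python; the function name and the example ranges are illustrative assumptions, not an actual device interface) maps a read cell value to a level when each level is recorded as a range of values:

```python
def level_of(value, level_ranges):
    """level_ranges: a list of (low, high) value ranges, one per level,
    ordered by level and non-overlapping, as in the definition above."""
    for lvl, (low, high) in enumerate(level_ranges):
        if low <= value <= high:
            return lvl
    return None  # the value falls in a gap between levels

# A flash-style example: Level 0 is [0.0, 0.5] volts, Level 1 is [0.8, 1.2].
print(level_of(1.0, [(0.0, 0.5), (0.8, 1.2)]))   # 1
print(level_of(0.65, [(0.0, 0.5), (0.8, 1.2)]))  # None (between levels)
```

Because the ranges do not overlap, a value belongs to at most one level, consistent with the definition of "level" given above.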
As used herein, “memory device” shall refer to a group of cells that store data in response to applied charge or current. A memory device is typically packaged as a memory chip that includes associated circuitry for encoding and decoding data from the cells of the memory device. Depending on the context of discussion, “memory device” may refer to cells and their associated encoding and decoding circuitry. That is, the phrase “memory device” may refer to all the cells of the memory chip, or may be used to refer to a subgroup comprising less than all the cells in the memory chip, depending on the context.
In accordance with the techniques described herein, the memory device 604 is connected to the adaptive programmer system 602, which iteratively applies voltages or electrical currents to the memory device and determines the resulting cell values (such as cell voltage values or cell resistance values). After a resulting cell value is determined for each cell in the current cell level being programmed, a minimum or floor for the cell value of the next cell level is established, and programming continues. Once a cell value comes within a predetermined maximum value for the memory device, the maximum number of levels has been reached. Compared to conventional nonvolatile memory technology (such as the SLC and MLC technology), the adaptive programming technology disclosed here has the unique properties that the number of attained levels and the set of values for each level may vary from one memory device (that is, a group of cells) to another memory device (that is, another group of cells) in a memory chip; and even for the same memory device, the number of attained levels and the set of values for each level may vary from one programming process to another programming process (that is, from one writing operation to another writing operation).
The number of levels for a memory device and the set of cell values for each level may be recorded, either approximately or exactly, in the memory chip in several efficient ways, including the following three, so that the memory can later read the cells and determine the level that each cell is in: (1) In the first method, if the value of a cell is a real number (such as a voltage value or a resistance value), the cells can be programmed such that for two adjacent levels—say, level x and level x+1, where the values of the cells in level x are smaller than the values of the cells in level x+1—the gap between the maximum cell value for level x and the minimum cell value for level x+1 is greater than or equal to a predetermined parameter DELTA. At the same time, for cells of the same level—say, level x—the cells are programmed such that if their values are sorted from small to large, the gap between any two adjacent values in the sorted list is less than a predetermined parameter EPSILON, where EPSILON<DELTA. The predetermined parameters EPSILON and DELTA are recorded in memory cells or in the microcontroller of the memory chip. When the memory later reads the cells' values, it can determine which cells belong to the same level and which cells belong to different levels based on the parameters EPSILON and DELTA, and can also determine which level each cell belongs to. (2) In the second method, if the value of a cell is a real number (such as a voltage value or a resistance value), then for every two adjacent levels—say, level x and level x+1—an additional cell, called a reference cell, is programmed such that its value is greater than the maximum cell value for level x and smaller than the minimum cell value for level x+1. When the memory reads a cell's value, it can determine which level the cell belongs to by comparing the cell's value to the values of the reference cells. (3) In the third method, if a cell is a patterned cell (where a cell value is a discrete graph-connectivity state), the memory can record the number of levels and the set of values for each level as configuration data in additional memory cells. When the memory reads a cell's value, it can determine which level the cell belongs to based on the configuration data. For all types of memories, the memory may also record coding schemes in a microcontroller or in memory cells, where a coding scheme uses the levels of the cells in a memory device (i.e., a group of cells)—which are called a codeword—for encoding and decoding data in the memory. The coding schemes can include constant-weight codes, non-constant-weight codes, and graph connectivity codes, which are described further below. When the memory chip is used to write data, the microcontroller of the memory chip programs the cells of a memory device (i.e., a group of cells) in accordance with the data to write, the coding scheme, and the adaptive programming method. Details of these operations are described in greater detail below.
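As a non-limiting illustration of the first method above, the following sketch (in Python; the function name and the example values are illustrative assumptions) groups read cell values into levels using the recorded parameters EPSILON and DELTA. Values separated by a gap of at least DELTA belong to different levels; values separated by less than EPSILON belong to the same level:

```python
def assign_levels(read_values, epsilon, delta):
    """Recover each cell's level from read values using the recorded
    parameters: within-level gaps are < epsilon, and the gap between
    adjacent levels is >= delta (epsilon < delta)."""
    order = sorted(range(len(read_values)), key=lambda i: read_values[i])
    levels = [0] * len(read_values)  # the smallest value is in level 0
    current = 0
    for prev, idx in zip(order, order[1:]):
        gap = read_values[idx] - read_values[prev]
        if gap >= delta:       # safety gap reached: a new level begins
            current += 1
        elif gap >= epsilon:   # should not occur if programming succeeded
            raise ValueError("ambiguous gap between cell values")
        levels[idx] = current
    return levels

# Two clusters of values separated by a gap of 0.8 >= DELTA = 0.5:
print(assign_levels([0.1, 0.15, 1.0, 1.05, 0.2], epsilon=0.3, delta=0.5))
# [0, 0, 1, 1, 0]
```

Note that the number of levels is not an input: it is discovered from the values the cells actually hold, which is the essence of the adaptive scheme.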
II. Operations for Programming Levels
After the maximum number of levels and corresponding settings are determined for the memory device, the next operation is carried out at box 706, where the adaptive programmer system determines the configuration data set that records the number of levels and the set of cell values of each level. Examples of the configuration data or the analogous configurations are described as the three methods in paragraph [0063] above. In the next operation, indicated at box 708, the adaptive programmer system configures the memory device microcontroller or additional memory cells with the configuration data set. That completes a write of data. The microcontroller can perform decoding of stored data in accordance with the number of cell levels, the cell values for each level, and the corresponding configuration. Those skilled in the art will be familiar with associated configurations that may be necessary for operation of the memory device, given the determined number of cell levels. When a write of data is finished, the memory device may be disconnected from the adaptive programmer system, as indicated by the last box 708. When the next write of data is to begin, the memory device may be connected to the adaptive programmer system again, as indicated by box 702, and the programming process may be repeated.
For example, consider the programming of cell level 1. Initially, all cells are at low values. The low values for cells can be achieved by a block erasure for flash memory cells, or by a RESET operation for PCM cells. (For patterned cells, it can be assumed that initially, all cells have the value that corresponds to the case where all vertices in the graph are in the OFF state, namely, no two vertices are connected.) Let X denote the maximum value of the cells, and set the range of values for level 0 to be all the values less than or equal to X. For patterned cells, let level 0 consist of the single value where all vertices are OFF. Then the minimum value for level 1, denoted by Y, is set to be X plus a safety gap increment that provides a spacing between cell levels that is beyond the expected noise level in the memory device circuitry. For patterned cells, let Y denote a “floor” cell value where only two neighboring vertices are ON and all other vertices in the graph are OFF. Then voltage is applied to a subgroup of cells for a certain time until all their values are greater than or equal to Y. For patterned cells, voltage is applied to a subgroup of cells for a certain time to change vertices from the OFF state to the ON state, until, for every cell in the programmed subgroup, the set of ON vertices includes the two neighboring vertices mentioned above. Note that due to programming noise, vertices in the graph that are supposed to be OFF may also be accidentally programmed to be ON. Multiple rounds of voltage can be applied to program a cell if necessary, and the cells in the subgroup can be programmed either in parallel or sequentially. Subsequent levels—namely, level 2, level 3, and so on—can be programmed in a similar way, and a safety gap increment is always provided between two adjacent levels to tolerate noise. Thus the sets of values for different levels do not overlap, and every cell belongs to one level.
Next, for the cells being programmed, at box 804, the adaptive programmer system determines the maximum value of the cells in the subgroup, and checks whether this maximum value exceeds a maximum permissible value. For patterned cells, the adaptive programmer system checks whether any cell in the subgroup has the value where all vertices in the graph are ON. If the answer is yes, an affirmative outcome at box 804, then cell level programming is terminated. If the answer is no, a negative outcome at the box 804, then cell level programming continues at box 806, where the maximum value of the cells in the subgroup is set as the maximum value for the current level. For example, if the programmed subgroup of cells belongs to level 1, and the maximum cell value is 2.1 after the above application of voltage, then the maximum value of level 1 is set as 2.1. The set of values of the current level is set to be the range of values between the minimum cell value of the subgroup and the maximum cell value of the subgroup. For patterned cells, the set of values of the current level is set to be the set of values attained by the cells in the subgroup.
At the next operation, at box 808, the adaptive programmer system sets the minimum value of the next level of the memory device to be the maximum value of the previous level set at box 806 plus a predetermined delta value that provides a safety spacing between two adjacent levels. The delta value will be determined by noise and inaccuracies in the memory device circuitry, as will be known by those skilled in the art. For example, if the applied voltage is on the order of 3.3 volts, and the voltage is applied for about 10 microseconds, then a typical delta value for a VLC configuration on flash memory cells would be about 0.3 volts. Thus, in the example above, if the maximum voltage value for Level 1 is 2.1 volts, and if the delta value is 0.3 volts, then the minimum voltage value for Level 2 will be (Level 1)+delta, equal to 2.4 volts. (For patterned cells, as an analogous step of box 808, set a “minimum” cell value Y for the next level to be a cell value that does not belong to the previously programmed cell levels.)
At the next operation, indicated by the box numbered 810, the adaptive programmer system checks whether the minimum value for the next level set in box 808 exceeds the maximum permissible value. If the answer is yes, an affirmative outcome at the box 810, then the current number of levels determined thus far is the maximum number of levels for the memory device, and operation proceeds to box 812 to terminate further cell programming, and operation of the adaptive programmer system continues with completion processing (e.g., disconnection of the memory device). If the answer is no, a negative outcome at the box 810, then cell level programming by the adaptive programmer system continues for the next cell level, at the box 802. In box 802, a subgroup of cells will be programmed for the next level until all their values are greater than or equal to the minimum value set for the next level. For patterned cells, a subgroup of cells will be programmed for the next level until for each cell in the subgroup, its value does not belong to any of the previously programmed cell levels and its ON vertices in the graph include all those vertices that need to be ON in the “minimum” cell value for the next level.
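The loop of boxes 802 through 812 can be sketched as the following simulation. This is a hedged illustration only: the uniform noise model, the numeric constants, and the function names are assumptions made for this sketch, not the disclosed hardware implementation.

```python
import random

def program_subgroup(floor, n_cells, rng):
    """Box 802: apply voltage until every cell value >= floor.
    Overshoot past the floor models programming noise (an assumption)."""
    return [floor + rng.uniform(0.0, 0.2) for _ in range(n_cells)]

def adaptive_level_programming(v_max, delta, n_cells, seed=0):
    """Discover the attainable levels of one memory device adaptively."""
    rng = random.Random(seed)
    levels = []            # (min_value, max_value) per attained level
    floor = 0.0            # level 0 starts at the erased state
    while floor <= v_max:  # box 810: does the next floor exceed v_max?
        vals = program_subgroup(floor, n_cells, rng)   # box 802
        if max(vals) > v_max:                          # box 804
            break                                      # box 812: terminate
        levels.append((min(vals), max(vals)))          # box 806
        floor = max(vals) + delta                      # box 808: next floor
    return levels

lvls = adaptive_level_programming(v_max=3.0, delta=0.3, n_cells=4)
# Adjacent levels are always separated by at least the safety delta:
assert all(hi + 0.3 <= lo for (_, hi), (lo, _) in zip(lvls, lvls[1:]))
```

Because the value ranges of the levels depend on the simulated noise, different runs (different seeds) yield different numbers of levels and different value sets, mirroring the device-to-device and write-to-write variation described above.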
III. Summary of Operations for Programming Levels
The sequence of operations as described above and illustrated in
Details of programming values into a cell will be known to those skilled in the art. For example, in current memories (including flash memory, phase-change memory, and memristor memory), a cell is actually programmed with multiple rounds of programming, instead of just one round. The reason is that with one round, the cell generally cannot be programmed with accuracy, so multiple rounds are used instead. The process is generally as follows, where every round of programming proceeds substantially as described above. In the first round, a voltage is applied to a cell for a predetermined period of time; then, the cell is measured to see how far its value is from the target value. If it is far away, then a voltage is applied again to the cell for a predetermined period of time, and the cell is measured again. If it is still far away, then the cell is programmed again in the same way. The process continues until the cell's value is sufficiently close to the target value (i.e., within a predetermined error tolerance). It should be noted that the voltage and time duration used in the different rounds of programming can be different, because they are generated based on how far the cell's value is from the target value. That is, the smaller the difference between the cell's value and the target value, the smaller the voltage and the time duration will be. By programming a cell in this way, its value can be moved closer and closer to the target value with each round of level programming.
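The multi-round program-and-verify process just described can be sketched as follows. This is a minimal illustration under stated assumptions: the cell model (a noisy increment proportional to the remaining gap) and all constants are invented for the sketch, and the pulse is deliberately sized so the value never overshoots the target, per the no-overshoot preference discussed earlier.

```python
import random

def program_to_target(value, target, tolerance=0.05, max_rounds=20, seed=1):
    """Repeatedly pulse and measure until the value is within tolerance
    of the target, shrinking the pulse as the gap shrinks."""
    rng = random.Random(seed)
    rounds = 0
    while target - value > tolerance and rounds < max_rounds:
        # The pulse strength shrinks with the remaining gap; scaling by
        # 0.5 with noise in [0.8, 1.2] guarantees no overshoot.
        pulse = 0.5 * (target - value)
        value += pulse * rng.uniform(0.8, 1.2)  # noisy cell response
        rounds += 1                             # then measure again
    return value, rounds

v, r = program_to_target(0.0, 2.0)
# v is within tolerance below the target, without ever overshooting it.
```

Each round removes roughly half of the remaining gap, so the value converges geometrically toward the target, which is why a handful of rounds suffices in practice.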
Those skilled in the art will also understand that current practice in programming memory cells, which can be implemented consistent with the manner of operations described herein, is that cells are programmed in parallel. That is, many cells are programmed together. The specific approach is that in each round of programming (as noted above, it usually takes multiple rounds to program a cell to attain a desired target value), the same voltage is applied to many cells together for a period of time. In this way, it takes much less time to program the cells as compared to the scheme where the cells are programmed individually one at a time.
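Parallel programming with per-cell verification can be sketched as follows. In this hedged illustration (the heterogeneous response factors, constants, and function names are assumptions for the sketch), one common pulse is applied to all unfinished cells in each round, and cells that have reached their targets are excluded from further pulses:

```python
import random

def parallel_program(targets, tolerance=0.05, seed=2):
    """Program many cells together with a common pulse per round,
    verifying each cell and excluding finished cells from later pulses."""
    rng = random.Random(seed)
    values = [0.0] * len(targets)
    # Cell heterogeneity: each cell responds to the same pulse differently.
    response = [rng.uniform(0.8, 1.2) for _ in targets]
    while True:
        pending = [i for i in range(len(targets))
                   if targets[i] - values[i] > tolerance]
        if not pending:
            break  # every cell has verified against its target
        # Size the common pulse by the smallest remaining gap, scaled
        # down so that no cell overshoots its own target.
        pulse = 0.4 * min(targets[i] - values[i] for i in pending)
        for i in pending:          # excluded cells receive no pulse
            values[i] += pulse * response[i]
    return values
```

The common pulse is what makes the scheme fast, while the per-cell verify step compensates for the heterogeneity noted earlier: a cell drops out of the pulsed group as soon as its own value is close enough to its target.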
In conjunction with the adaptive programming technique described herein, conventional codes may be used for encoding and decoding data stored into the memory device. Such conventional coding schemes are well-known and need no further description. The encoding and decoding are generally performed by the microcontroller of the memory device. In addition, particular types of codes may be useful for encoding and decoding data in a manner that can exploit the adaptive cell level programming described herein, for greater efficiencies. Specifically, particular types of codes for the adaptive cell level programming may be designed in the following way. Let q be an integral parameter that upper bounds the number of levels that cells in a memory device can practically have. Let the set of levels of the cells be called a codeword. Since the number of levels is not predetermined before a write operation, to more efficiently write data, the particular code construction considers not only codewords that use all the q levels (i.e., Level 0, Level 1, . . . , Level q−1), but also codewords that use only the lowest q−1 levels (i.e., Level 0, Level 1, . . . , Level q−2), codewords that use only the lowest q−2 levels (i.e., Level 0, Level 1, . . . , Level q−3), . . . , and codewords that use only the lowest 2 levels (i.e., Level 0 and Level 1). All the considered codewords are used to encode data. (This is very different from conventional coding schemes. In a conventional scheme, the number of levels is predetermined, and only codewords that use all the levels are used to encode data.) For x=2, 3, . . . , q, let a codeword that uses Level 0, Level 1, . . . , Level x−1 be called an “x-level codeword”. So the coding scheme described here uses not only the q-level codewords, but also the 2-level codewords, 3-level codewords, . . . , and (q−1)-level codewords.
To make the code more efficient, the following constraint may be used for the coding scheme: The constraint is that for an x-level codeword and a y-level codeword with x<y, if for every cell in the memory device (i.e., a group of cells), its level in the x-level codeword is less than or equal to its level in the y-level codeword, then the data encoded by the x-level codeword is a subset of the data encoded by the y-level codeword. An example of such a coding scheme is illustrated in
I. Constant Weight Code
In the constant weight code used with the adaptive cell level technique described herein, every codeword refers to the levels of a group of cells in the memory device. The codewords consist of those codewords that have only Level 0 and Level 1 (which will be referred to as “2-level codewords” hereafter), those codewords that have only Level 0, Level 1, and Level 2 (which will be referred to as “3-level codewords” hereafter), . . . , and so forth, up to those codewords that have only Level 0, Level 1, . . . , and Level q−1 (which will be referred to as “q-level codewords” hereafter, where q is an integral parameter that upper bounds the maximum number of levels the cells in the memory device can possibly have). The constant weight code adapted for the memory device described herein maps codewords to data with the following property: for an x-level codeword and a y-level codeword with x<y, if for every cell in the cell group, its level in the x-level codeword is less than or equal to its level in the y-level codeword, then the data encoded by the x-level codeword is a subset of the data encoded by the y-level codeword. For example, if the y-level codeword encodes a sequence of binary bits, then the x-level codeword that satisfies the above condition encodes a subset of those bits. As a special implementation, the data encoded by the x-level codeword can be a prefix of the data encoded by the y-level codeword. A constant-weight code as proposed here is a code with an additional special property: for x=0, 1, . . . , q−1, all the q-level codewords have the same number of cells in Level x. It is shown below that a constant weight code is an optimal code to use for the adaptive cell level technique. A method for constructing a constant-weight code for the adaptive cell level programming technique is as follows: Suppose that there are n cells, and let W_{0}, W_{1}, . . . , W_{q−1} be positive integers such that W_{0}+W_{1}+ . . . +W_{q−1}=n.
The q-level codewords have W_0 cells in Level 0, W_1 cells in Level 1, . . . , and W_{q−1} cells in Level q−1. For x=2, 3, . . . , q−1, an x-level codeword has W_i cells in Level i, for i=1, 2, . . . , x−1, and has W_0 + W_x + W_{x+1} + . . . + W_{q−1} cells in Level 0. The mapping from such codewords to data can be constructed as follows: Since there are “n choose W_1” ways to assign W_1 cells out of the n cells to Level 1, those x-level codewords with x≥2 can use the cells in Level 1 to store a data symbol of alphabet size “n choose W_1”; since there are “n−W_1 choose W_2” ways to assign W_2 cells out of the remaining n−W_1 cells to Level 2, those x-level codewords with x≥3 can use the cells in Level 2 to store an additional data symbol of alphabet size “n−W_1 choose W_2”; since there are “n−W_1−W_2 choose W_3” ways to assign W_3 cells out of the remaining n−W_1−W_2 cells to Level 3, those x-level codewords with x≥4 can use the cells in Level 3 to store an additional data symbol of alphabet size “n−W_1−W_2 choose W_3”; and so on. Those skilled in the art will understand how to generate constant weight codes based on this explanation, without further description.
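As a quick sketch of this construction, the following Python fragment (function name and parameters are illustrative, not from the source) computes the alphabet size of the data symbol that each programmed level can carry:

```python
from math import comb

def level_alphabet_sizes(n, weights):
    """For n cells and level weights [W_0, W_1, ..., W_{q-1}] summing to n,
    return the alphabet size of the symbol stored by levels 1..q-1:
    "remaining cells choose W_i" for each level i in turn."""
    assert sum(weights) == n
    sizes, remaining = [], n
    for w in weights[1:]:            # levels 1, 2, ..., q-1
        sizes.append(comb(remaining, w))
        remaining -= w
    return sizes

# Hypothetical example: n = 8 cells, q = 4 levels, 2 cells per level.
print(level_alphabet_sizes(8, [2, 2, 2, 2]))  # [28, 15, 6]
```

The total number of q-level codewords is the product of these alphabet sizes, so the data capacity in bits is the sum of their base-2 logarithms.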
II. Non-Constant Weight Code
In this coding scheme, the number of cells assigned to different levels can be different. A non-constant weight code is a more general coding scheme than a constant-weight code. An example of a non-constant weight code is illustrated in
III. Scheme for Modifying Data
Another coding scheme is referred to as the “scheme for modifying data”, such as the scheme in
IV. Graph Connectivity
Graph connectivity based coding schemes are designed for the patterned cell scheme invented in this patent. Such a coding scheme, which is suited for resistance-setting configurations, such as PCM constructions, adaptively assigns the values of a cell to discrete levels, where the value of a cell is defined to be the state of connectivity between the vertices in a graph realized in the cell; then, data are stored in cells by mapping the levels of the cells to data (such as a sequence of bits). The code can be a constant-weight code or a non-constant weight code. Furthermore, error-correcting/detecting codes can be used to detect or correct errors in the cells.
I. Introduction
In this work, two novel storage technologies for next-generation PCMs and flash memories are described. The first technology, variable-level cell (VLC), adaptively and robustly controls the number and positions of levels programmed into cells. It eliminates the bottleneck imposed by cell heterogeneities and programming noise, and maximizes the number of levels stored in cells. The second technology, patterned cells, uses the internal structure of amorphous/crystalline domains in PCM cells to store data. It eliminates the high precision and power requirements imposed by programming cell levels, and opens a new direction for data storage in PCMs. Novel coding techniques for data representation, rewriting and error correction are developed. The results are able to substantially improve the storage capacity, speed, reliability and longevity of PCMs and flash memories.
In the following, we first present the basic motivations for developing the variable-level cell (VLC) and patterned cell technologies. We then outline our work on coding schemes.
II. VLC and Patterned Cell Technologies
A. Introduction to Current Flash Memory and PCM Technology
Flash memories use floating-gate cells as their basic storage elements. (See
A PCM consists of chalcogenide-glass cells with two stable states: amorphous and crystalline. The two states have drastically different electric resistance, which is used to store data. Intermediate states, called partially crystalline states, can also be programmed [See G. W. BURR et al., Journal of Vacuum Science and Technology, vol. 28, no. 2, pp. 223-262 (2010)]. To make the cell amorphous (called RESET), a very high temperature (˜600° C.) is used to melt the cell and quench it. To make the cell more crystallized (called SET), a more moderate temperature (˜300° C.) above the crystallization threshold is used to heat the cell. The heat is generated and controlled by the current between the bottom and top electrodes of the cell. See
Flash memories and PCMs have many common properties: (1) Noisy programming. It is hard to control the charge-injection/crystallization of the cells. (2) Cell heterogeneity. Some cells are harder to program, while some are easier. When the same voltage is applied to program cells, the harder-to-program cells will have less charge-injection/crystallization. (3) Asymmetry in state transitions. A cell can gradually change in the direction of charge-injection/crystallization, but to remove charge or make the cell amorphous, the cell must be erased/RESET to the lowest level. This is especially significant for flash memories, which use block erasures. (4) Limited longevity. A flash memory block can endure 10^3˜10^5 erasures. A PCM cell can endure 10^6˜10^8 RESETs [See G. W. BURR et al., Journal of Vacuum Science and Technology, vol. 28, no. 2, pp. 223-262 (2010)], [See P. CAPPELLETTI, C. GOLLA, P. OLIVO AND E. ZANONI (Ed.), Flash memories, Kluwer Academic Publishers, 1st Edition (1999)].
B. Variable-Level Cell (VLC) Coding Scheme for Maximum Storage Capacity
We introduce the VLC scheme for maximum storage capacity. To simplify the terms, we will introduce the concepts based on flash memories. However, all the concepts can be applied to PCMs equally well.
The key to maximizing storage capacity is to maximize the number of (discrete) levels programmed into cells. However, the multi-level cell (MLC) technology uses fixed levels to store data, and its performance is limited by the worst-case performance of cell programming [See P. CAPPELLETTI, C. GOLLA, P. OLIVO AND E. ZANONI (Ed.), Flash memories, Kluwer Academic Publishers, 1st Edition (1999)]. This is illustrated in
The variable-level cell (VLC) scheme maximizes storage capacity by flexibly programming the levels. This has two meanings: (1) the number of levels is flexibly chosen during programming; (2) the charge level for each discrete level is flexibly chosen during programming. Let q denote the maximum number of discrete levels that can be programmed into a cell, and denote the q discrete levels by {0, 1, . . . , q−1}. Let a page denote the set of n cells programmed in parallel. (For NAND flash memories, a page is the basic unit for read/write, and is about 1/64 or 1/128 of a block.) Let c_1, . . . , c_n ∈ ℝ denote the charge levels of the n cells; and let l_1, . . . , l_n ∈ {0, 1, . . . , q−1} denote their discrete levels. The discrete levels of cells are determined by the relative order of the cells' charge levels, instead of the absolute values of the charge levels. In principle, cells of similar charge levels are considered to have the same discrete level. There are many feasible ways to define the mapping from charge levels to discrete levels. One mapping is defined below.
(M
To program the cells in a page (with parallel programming), we program the discrete levels from low to high: initially, all the charge levels are below a certain threshold (after the erasure operation) and are considered to be in discrete level 0; then the memory first programs level 1, then level 2, then level 3, and so on. Let p ≤ q be the integer such that when level p−1 is programmed, its charge levels are already very close to the physical limit; in this case, although the memory will not attempt to program level p, the first p−1 levels—namely, levels 1, 2, . . . , p−1—have been successfully programmed without any ambiguity. The programming has the very nice property that it eliminates the risk of overshooting, because the gap between adjacent levels is only lower bounded, not upper bounded. This enables much more reliable and efficient writing. To better tolerate programming noise and cell heterogeneity, we can further partition a page into an appropriate number of cell groups, and apply the VLC scheme to every group.
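One feasible mapping of charge levels to discrete levels by relative order can be sketched as follows; the threshold rule and function name are illustrative assumptions, not the specific mapping defined in the text:

```python
def vlc_discrete_levels(charges, gap):
    """Assign discrete levels by the relative order of charge levels:
    scan the cells from lowest to highest charge, and start a new
    discrete level whenever the charge gap to the previous cell exceeds
    the safety gap. A minimal sketch of one feasible mapping."""
    order = sorted(range(len(charges)), key=lambda i: charges[i])
    levels = [0] * len(charges)
    level = 0
    for prev, cur in zip(order, order[1:]):
        if charges[cur] - charges[prev] > gap:
            level += 1               # a large jump separates two levels
        levels[cur] = level
    return levels

# Three clusters of charge levels separated by more than the safety gap:
print(vlc_discrete_levels([0.1, 0.2, 1.5, 1.6, 3.0], gap=1.0))  # [0, 0, 1, 1, 2]
```

Because only relative order matters, the mapping tolerates a uniform drift of all charge levels and never penalizes a cell for overshooting within its cluster.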
The VLC scheme maximizes storage capacity for two reasons: (1) More compact charge-level distribution. The MLC scheme applies the same programming algorithm to all pages. So it considers the worst-case charge-level distribution. In contrast, VLC adaptively uses the actual charge-level distribution of the programmed page, which is narrower; (2) Very compact placement of levels. Since level i+1 is programmed after level i, and only their relative charge level is important, the charge levels of level i+1 just need to be above the actual—instead of the worst-possible-case—maximum charge level of level i by D (the safety gap). This is illustrated in
When the VLC scheme is applied to PCMs, the concept of charge injection is replaced by cell crystallization for programming. Although PCMs do not have block erasures, it is still very beneficial to take the level-by-level programming method to place the levels as compactly as possible.
The VLC coding scheme is distinct from conventional coding schemes in that the symbol written into the cells is adaptively chosen during programming. More specifically, the number of programmed levels depends on the actual programming process. So the coding theories developed for VLC are not only important for flash memories and PCMs, but also for other emerging storage media with heterogeneous storage elements that need similar storage schemes. We study a comprehensive set of important coding topics, including data representation, codes for rewriting data with low computational complexity, error-correcting codes, data management, and their integration. The designed codes can substantially improve the storage capacity, writing speed, longevity, reliability and efficiency of flash memories and PCMs.
1) Data Representation: Flash memories and PCMs use the cell levels to represent data. An optimal data representation scheme can not only fully utilize the storage capacity provided by VLC, but also make the encoding and decoding of data very efficient; it is therefore very important. Since, for VLC, the number of levels that will be programmed into a page is not pre-determined, the representation schemes are very distinctive. An example of such a scheme is presented below.
(Definition) A cell-state vector v = (v_1, v_2, . . . , v_n) ∈ {0, 1, . . . , q−1}^n is called a uniform-weight cell-state vector if, for i=0, 1, . . . , q−1, exactly n/q of its cells are in level i; namely, v has the same number of cells allocated to each level. Let C ⊂ {0, 1, . . . , q−1}^n denote the set of uniform-weight cell-state vectors. Let us first consider the scheme where only uniform-weight cell-state vectors are used to store data.
Let S denote a large volume of data to store in cells. Since a page is the basic unit of parallel programming, we will store the bits of S page after page. Consider the first page. Given any v ∈ C, for i=1, 2, . . . , q−1, let L_i(v) = {j | 1 ≤ j ≤ n, v_j = i} denote the set of cells of level i, and let m_i = n − (i−1)n/q denote the number of cells not yet assigned to levels 1, 2, . . . , i−1. For i=1, 2, . . . , q−1, given L_1(v), . . . , L_{i−1}(v), there are “m_i choose n/q” ways to select L_i(v) for a uniform-weight cell-state vector v; and furthermore, those “m_i choose n/q” possible values of L_i(v) can be mapped to the index set {0, 1, . . . , “m_i choose n/q” − 1} efficiently in polynomial time, as follows. Denote the m_i cells that may be assigned to level i (i.e., the cells not in L_1(v) ∪ . . . ∪ L_{i−1}(v)) as a_1, a_2, . . . , a_{m_i}. Every possible value of L_i(v) can be uniquely mapped to an m_i-bit binary vector b = (b_1, b_2, . . . , b_{m_i}) as follows: b_j = 1 if and only if the cell a_j is assigned to level i. (Clearly, the Hamming weight of b is n/q.) Let ƒ be a bijection that maps b to a number in {0, 1, . . . , “m_i choose n/q” − 1} based on the lexical order of b. ƒ(b) can be computed recursively as follows. Let k be the smallest integer in {1, 2, . . . , m_i} such that b_k = 1. Let b′ be the vector obtained by flipping the kth bit of b from 1 to 0. Then ƒ(b), which equals the number of m_i-bit vectors of Hamming weight n/q that are lexically smaller than b, equals “m_i − k choose n/q” plus the number of m_i-bit vectors of Hamming weight n/q − 1 that are lexically smaller than b′.
So we can efficiently store the first ⌊log_2 “m_1 choose n/q”⌋ bits of the data S into level 1 when the memory programs level 1; then store the next ⌊log_2 “m_2 choose n/q”⌋ bits of S into level 2 when the memory programs level 2; and so on, until the memory ends programming the page. The subsequent data of S will be written into the next page. The encoding and decoding are very efficient, and the data are written into the pages sequentially.
The number of cells in a page, n, is often a large number. For NAND flash memories, n˜10^4. The above scheme can be generalized by letting every level have approximately n/q cells. As such cell-state vectors form the “typical set” of all cell-state vectors, the storage capacity of VLC is very well utilized. The scheme can also be generalized to constant-weight cell-state vectors for better performance, where the numbers of cells in the q levels are not necessarily uniform, but are still appropriately-chosen constant numbers.
The optimal data representation schemes can maximize the expected amount of data written into a page by utilizing the probability for each level to be programmed. It is sometimes also desirable to maximize the amount of data that can be written into a page with guarantee. The schemes should also be designed to conveniently support other functions of the memory system.
2) Efficient Codes for Rewriting Data: Codes for rewriting (i.e., modifying) data are very important for flash memories and PCMs [See A. JIANG, V. BOHOSSIAN AND J. BRUCK, Proc. IEEE International Symposium on Information Theory (ISIT), pp. 1166-1170, Nice, France (2007)], [A. JIANG, J. BRUCK AND H. LI, Proc. IEEE Information Theory Workshop (ITW) (2010)], [L. A. LASTRAS-MONTANO et al., Proc. IEEE International Symposium on Information Theory (ISIT), pp. 1224-1228, Seoul, Korea (2009)]. Flash memories use block erasures, where a block contains about 10^6 cells. Modifying even a single bit may require removing charge from a cell, which will lead to the very costly block erasure and reprogramming operations. Although PCMs do not use block erasures, to lower the level (i.e., increase the resistance) of a PCM cell, the cell must be RESET to the lowest level, which is also costly. Codes designed for rewriting data can substantially improve the longevity, speed and power efficiency of flash memories and PCMs [see, e.g., A. JIANG et al., Proc. IEEE International Symposium on Information Theory (ISIT), pp. 1219-1223, Seoul, Korea (2009)].
The VLC technology can maximize the number of levels in cells. And codes for rewriting are a particularly effective way to utilize the levels. Consider a cell with q levels. If it is used to store data at full capacity, the cell can store log2 q bits; however, it has no rewriting capability without erasure/RESET. Let us compare it with a simple rewriting code that uses the cell to store one bit: if the cell level is an even integer, the bit is 0; otherwise, the bit is 1. The code allows the data to be rewritten q−1 times without erasure/RESET, where every rewrite will increase the cell level by only one. To see how effective it is, consider a VLC or MLC technology that improves q from 8 to 16. When the cell stores data at full capacity, the number of bits stored in the cell increases from 3 bits to 4 bits, a 33% improvement; and 4 bits are written into the cell per erase/RESET cycle. When the above rewriting code is used, the number of rewrites supported by the cell increases from seven rewrites to fifteen rewrites, a 114% improvement; and overall 15 bits can be sequentially written into the cell per erase/RESET cycle. Rewriting codes with better performance can be shown to exist [See A. JIANG, V. BOHOSSIAN and J. BRUCK, Proc. IEEE International Symposium on Information Theory (ISIT), pp. 1166-1170, Nice, France (2007)], [R. L. Rivest and A. Shamir, Information and Control, vol. 55, pp. 1-19 (1982)]; and in general, the number of supported rewrites increases linearly (instead of logarithmically) with q, the number of levels. Given the limited endurance of flash memories and PCMs, rewriting codes can substantially increase the amount of data written into them over their lifetime [see A. JIANG, V. BOHOSSIAN and J. BRUCK, Proc. IEEE International Symposium on Information Theory (ISIT), pp. 1166-1170, Nice, France (2007), and L. A. LASTRAS-MONTANO et al., Proc. IEEE International Symposium on Information Theory (ISIT), pp. 1224-1228, Seoul, Korea (2009)].
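The one-bit parity rewriting code described above can be modeled with a toy sketch (the class name is illustrative):

```python
class ParityRewriteCell:
    """One-bit rewriting code from the text: a q-level cell stores the bit
    (level mod 2); rewriting the bit to a different value raises the level
    by exactly 1, so q-1 rewrites fit between erase/RESET cycles."""
    def __init__(self, q):
        self.q = q        # maximum number of levels
        self.level = 0    # cell starts at the lowest level

    def read(self):
        return self.level % 2

    def rewrite(self, bit):
        if bit != self.read():          # only a changed bit costs a level
            if self.level + 1 >= self.q:
                raise RuntimeError("cell exhausted: erase/RESET needed")
            self.level += 1

cell = ParityRewriteCell(q=8)
for bit in [1, 0, 1, 1, 0, 1, 0]:
    cell.rewrite(bit)
print(cell.level, cell.read())  # six of the writes changed the bit: level 6, bit 0
```

With q=8 the cell absorbs up to seven bit changes before an erasure, matching the q−1 rewrites stated in the text.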
We design highly efficient rewriting codes for VLC. The codes are also useful for MLC and SLC (single-level cells). Although high-rate rewriting codes can be shown to exist [R. L. Rivest and A. Shamir, Information and Control, vol. 55, pp. 1-19 (1982)], designing such codes with low computational complexity is a significant challenge. In our work, we have focused on optimal rewriting codes that fully utilize the different properties of flash memories and PCMs.
3) Error-Correcting Codes: Strong error-correcting codes are very important for the data reliability of flash memories and PCMs [G. W. BURR et al., Journal of Vacuum Science and Technology, vol. 28, no. 2, pp. 223-262 (2010)], [P. CAPPELLETTI, C. GOLLA, P. OLIVO AND E. ZANONI (Ed.), Flash memories, Kluwer Academic Publishers, 1st Edition (1999)]. The cell levels of flash memories can be disturbed by write disturbs, read disturbs, charge leakage, cell coupling and other error mechanisms [P. CAPPELLETTI, C. GOLLA, P. OLIVO AND E. ZANONI (Ed.), Flash memories, Kluwer Academic Publishers, 1st Edition (1999)]. The cell levels of PCMs can be disturbed by the drifting of the resistance levels and thermal interference [G. W. BURR et al., Journal of Vacuum Science and Technology, vol. 28, no. 2, pp. 223-262 (2010)]. For cells of multiple levels, the likelihood of errors also depends on the magnitude of the errors. Currently, BCH codes and Hamming codes are widely used in flash memories [see P. CAPPELLETTI, C. GOLLA, P. OLIVO AND E. ZANONI (Ed.), Flash memories, Kluwer Academic Publishers, 1st Edition (1999)]; LDPC codes and other codes are actively under study.
In addition to the types of errors common to VLC and MLC, the VLC scheme also has a unique source of “partial erasure”: the programming of levels. Consider VLC with at most q levels: levels 0, 1, . . . , q−1. Before programming, all cells are in level 0; then levels 1, 2, . . . are programmed from low to high. If the maximum discrete level is p<q−1 when programming ends, then for a cell still in level 0, it may belong to any level in the set {0, p+1, p+2, . . . , q−1} in the original plan. So the cell can be considered partially erased. When data are stored as error-correcting codes, such partial erasures can be corrected by adaptively adjusting the construction of the code (i.e., add more redundant cells to the codeword when less data is written into a page), or by designing codes that can tolerate the partial erasures. An example of the latter codes is presented below.
Error-Correcting Codes For Cell Groups. Partition the n cells in a page into m cell groups, and apply the VLC scheme (i.e., the mapping from charge levels to discrete levels) to every cell group independently. Compared to applying VLC to a whole page, for such a smaller cell group, the number of cells in a level is usually smaller, which makes it easier to program levels and enables more levels to be programmed in expectation. Choose an (m, k) error-correcting code C whose symbols are over large alphabets. For i=1, . . . , m, let the ith cell group store the ith codeword symbol of C. The code C can be Reed-Solomon codes, BCH codes, fountain codes, etc. An appropriate mapping can be used that maps the cell levels of a cell group to the codeword symbol, such that when the partial erasure or errors happen, the number of induced bit erasures/errors in the codeword symbol is minimized. The number of levels programmed into the different cell groups may be different (due to programming noise and cell heterogeneity); so the amount of partial erasure in the different cell groups can be different. An efficient decoding algorithm for C, such as soft-decoding algorithms for Reed-Solomon codes or BCH codes, can be designed to correct the partial erasures due to programming and the errors due to the disturbs in cell levels.
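As a minimal stand-in for the (m, k) code C over cell groups, the sketch below uses a single XOR parity symbol, which can recover one fully erased group at a known position; a practical design would use Reed-Solomon or BCH codes as described above. All names here are illustrative:

```python
def encode_groups(data_symbols):
    """Append one XOR parity symbol over the data symbols (each symbol is
    the integer value read out of one cell group)."""
    parity = 0
    for s in data_symbols:
        parity ^= s
    return data_symbols + [parity]

def recover_erasure(received):
    """Recover a single fully erased group (marked None) at a known
    position: the XOR of all surviving symbols restores its value."""
    i = received.index(None)
    value = 0
    for j, s in enumerate(received):
        if j != i:
            value ^= s
    return i, value

word = encode_groups([0b1011, 0b0010, 0b1110])
word[1] = None                       # one cell group is erased by partial programming
print(recover_erasure(word))         # (1, 2): group 1 held the symbol 0b0010
```

The key property carried over from the text is that a partially programmed cell group behaves as an erasure at a known location, which is the easiest error type for an algebraic code to correct.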
The codeword symbols of the code C can also be mapped to the cell groups in different ways, in order to minimize the number of symbols that contain partial erasures. One method is to map every symbol to a fixed level. Compared to higher levels, lower levels are much more robust against partial erasures.
We have explored optimal code constructions for error correction. The memories have very strong requirements for reliability, and the study of codes for VLC is very important for high-capacity memories with various magnitude-related errors.
4) Data Management: Data management consists of a set of functions that can substantially affect the performance of storage systems. For flash memories and PCMs, data are frequently read, rewritten and relocated. For memories with high storage capacity, these operations can be even more frequent. Due to the limited endurance and data retention of flash memories and PCMs, it is very important to maximize their longevity, reliability and speed with optimized data management schemes. In this work, we study: (1) Data aggregation and movement based on rewriting codes and other novel coding schemes for memory longevity and performance; (2) Data reliability schemes that maintain the global reliability of data, especially for combating the drift of cell levels (charge leakage for flash memories and resistance drift for PCMs); (3) File systems that present an integrated solution for flash memories and PCMs.
C. Patterned Cell Technology for Phase-Change Memories
PCM is the most promising emerging memory technology for the next generation. Currently, PCM prototypes of 512 MB using 4-level cells have been sampled. Despite their great potential, PCMs are facing two significant challenges: (1) Hardness of programming cell levels. It is very challenging to control the crystallization of cells using heat, which makes it hard to increase the number of cell levels for MLC (where fixed cell levels are used); (2) High power requirement. PCMs require extensive power when data are written. This constraint is currently significantly limiting the application of PCMs, especially for mobile and embedded systems [see D. LAMMERS, IEEE Spectrum, pp. 14 (September 2010)].
We develop a novel data storage technology for PCMs named Patterned Cells. It uses the internal structure of amorphous/crystalline domains (i.e., islands) in PCM cells to store data. It eliminates the high precision and power requirements imposed by programming cell levels, and opens a new direction for data storage in PCMs. It should be noted that the internal structure of PCM cells is an active topic of study in industry and research [see G. W. BURR et al., Journal of Vacuum Science and Technology, vol. 28, no. 2, pp. 223-262 (2010)], [see M. FRANCESCHINI et al., Proc. Information Theory Workshop, UCSD (2010)]. However, so far the effort has been for a single island in the PCM cell, and the focus is on controlling (i.e., programming) the position, size and shape of the island [see M. FRANCESCHINI et al., Proc. Information Theory Workshop, UCSD (2010)]. Patterned cell is distinct in that it uses multiple islands in a PCM cell. Clearly, the programming techniques developed for controlling a single island can also be applied to multiple islands. In the following, we present two designs of patterned cells: the amorphous-island scheme and the crystalline-island scheme.
In the following introduction, for simplicity of description, we assume that the electrodes are attached to two sides of a cell—top side and bottom side—and the electrodes that are connected to amorphous/crystalline islands are always on the bottom side of the cell. This can be easily generalized to the case where the electrodes are attached to various sides of the cell in various ways.
1) Amorphous-Island Scheme: In the amorphous-island scheme, the cell as a base is in the crystalline state, and multiple bottom-electrodes are attached to the bottom of the cell that can create multiple amorphous islands. An example with two bottom-electrodes is shown in
The state of a cell can be described by the resistance level measured for each island. In the simplest case, the resistance level can be quantized into two states: low resistance (no island) and high resistance (island exists). If the island has partially crystalline states, then more levels are used. However, it is challenging to program the resistance levels of the islands precisely, and the thermal interference from the SET/RESET operation on an island can affect the state of other islands (e.g., crystallize the other islands). To overcome the difficulty of programming cell levels and the thermal interference during the SET (i.e., crystallization) operation [see A. PIROVANO et al., IEEE Trans. Device and Materials Reliability, vol. 4, no. 3, pp. 422-427 (2004)] or RESET operation, we can use the relative order of the resistance levels of the islands to represent data. Specifically, for a cell with m amorphous islands, let R_1, R_2, . . . , R_m denote their resistance. For PCMs, usually the logarithm of the resistance is used. To achieve robust programming, every time the memory SETs an island, it makes the island more and more crystallized until its resistance is lower than that of all the other islands. Since the thermal interference from the SET operation may partly affect other islands, we choose an integer k&lt;m such that only the order of the k islands with the lowest resistance is used to represent data. For example, if k=1, the programming is very robust. The islands can alternately become more crystallized for rewriting data; and when the islands become nearly fully crystallized (namely, when they disappear), the cell will be RESET to create the amorphous islands again. The state of a cell is represented by a vector s=(s_1, s_2, . . . , s_k), where s_1, . . . , s_k ∈ {1, 2, . . . , m} are the indices of the k islands with the lowest resistance. Namely, R_{s_1} &lt; R_{s_2} &lt; . . . &lt; R_{s_k} &lt; min_{i ∈ {1, 2, . . . , m}∖{s_1, . . . , s_k}} R_i.
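A small sketch of this relative-order representation, with hypothetical resistance readings (the function name is illustrative):

```python
from math import perm

def cell_state(resistances, k):
    """State vector s = (s_1, ..., s_k): the 1-based indices of the k
    islands with the lowest resistance, ordered by increasing resistance.
    Only this relative order carries data, not the absolute values."""
    order = sorted(range(len(resistances)), key=lambda i: resistances[i])
    return tuple(i + 1 for i in order[:k])

R = [5.2, 1.1, 3.7, 9.0]       # hypothetical log-resistance readings, m = 4
print(cell_state(R, k=2))      # (2, 3): islands 2 and 3 are the most crystallized
print(perm(4, 2))              # 12 distinct states for m = 4, k = 2
```

Since the state is an ordered selection of k islands out of m, the cell stores one of m!/(m−k)! states, i.e., log2(m!/(m−k)!) bits.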
In the above scheme, the cell programming is robust because only the relative order of resistance levels is used to store data. The exact value of the resistance does not need to be precisely controlled. This makes it easier to fully utilize the wide resistance range of the PCM material for repeated changing of the cell state and for rewriting data. And since the resistance level does not have to be programmed precisely using multiple cautious SET operations, the power for programming may be reduced.
2) Crystalline-Island Scheme: In the crystalline-island scheme, the cell as a base is in the amorphous state, and multiple bottom-electrodes are attached to the cell that can create multiple crystalline islands. Initially, the cell is RESET using the top and bottom electrodes. Then every bottom electrode can create a crystalline island using SET (the crystallization temperature). The resistance between two bottom electrodes becomes low when their two corresponding islands both exist and overlap, because the crystalline state has a much lower resistance (up to 10^3 times lower) than the amorphous state. Bottom electrodes with low resistance between them are called connected. See
The state of the cell can be represented by the connectivity of the bottom electrodes. There are different geometric ways to place the bottom electrodes. An example using a 2×2 array is shown in
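The connectivity state induced by overlapping islands can be computed with a standard union-find pass; the electrode layout and overlap list below are illustrative assumptions:

```python
def connectivity_state(m, overlapping_pairs):
    """Group the m bottom electrodes into connected components, where two
    electrodes are connected if their crystalline islands overlap, either
    directly or through intermediate electrodes. Uses union-find."""
    parent = list(range(m))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x
    for a, b in overlapping_pairs:
        parent[find(a)] = find(b)
    return [find(i) for i in range(m)]

# 2x2 array of electrodes 0..3; the islands of pairs 0-1 and 1-3 overlap:
labels = connectivity_state(4, [(0, 1), (1, 3)])
print(labels)  # electrodes 0, 1, 3 share one label; electrode 2 is isolated
```

Each distinct labeling of electrodes into connected groups is one cell state, so the number of achievable partitions bounds the data stored per cell.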
The crystalline-island scheme is a novel geometric coding scheme for PCMs. It only uses the existence/nonexistence of crystalline islands to represent data, and there is little requirement on controlling the programming precision of cell levels. This makes programming more robust. The scheme also provides an important way to pre-process the cells during idle time, in order to reduce the power requirement when data are actually written. Since memories are idle most of the time, and power becomes a constraint usually only when a large volume of data is written (i.e., during the peak time) [see D. LAMMERS, IEEE Spectrum, p. 14 (September 2010)], the following strategy can be taken: when the memory is idle, create small crystalline islands in cells such that they are relatively close to each other but are still isolated. This is the preprocessing step. When the memory writes data, the cells just need to expand the islands to connect them, and this costs less power. See
We discuss some natural extensions. In the crystalline-island scheme, we can adaptively control the size of each island, or gradually increase them over time to change the connectivity of the bottom electrodes and to rewrite data. Also, since the different configurations of islands can change the resistance measured between different parts of the cell, the resistance level can also be used to store data.
3) Coding Schemes for Patterned Cells: The patterned cell scheme is very distinct from all existing memory storage schemes [G. W. BURR et al., Journal of Vacuum Science and Technology, vol. 28, no. 2, pp. 223-262 (2010)] because it uses the geometry of cells to store data. So the coding theories developed for patterned cells are not only important for PCMs, but also for other emerging storage media where geometrical structures can be used to represent data [D. LAMMERS, IEEE Spectrum, pp. 14 (September 2010)], [H. J. RICHTER et al., IEEE Trans. Magn., vol. 42, no. 10, pp. 2255-2260 (2006)]. We study a comprehensive set of important coding topics, including data representation, codes for rewriting data with low computational complexity, error-correcting codes, data management, and their integration. The topics are related to the corresponding topics for VLC. The distinction is that for patterned cells, the width of the cell's state-transition diagram is more than one. (For VLC, the diagram can be considered a cycle.) Also, the errors can be geometry related. More details on the coding topics will be presented in the following sections. The codes can substantially improve the reliability, longevity and performance of PCMs.
D. Outline of Coding Schemes
Variable-level cells (VLC) and patterned cells are novel technologies for next-generation PCMs and flash memories. By adaptively programming the cell levels, VLC can maximize the number of levels written into cells. By using the structures of amorphous/crystalline domains in cells, the patterned cell scheme opens a new direction for data representation. In this work, we study the following major coding functions:
1) Efficient and Robust Data Representation. Data representation constructions are explored to maximize the storage capacity, be robust to the uncertainty in programming cell levels, and enable very efficient encoding and decoding of data.
2) Codes for rewriting with high rates and low computational complexity. Codes with very high rates and very low computational complexity for rewriting data are designed. The code construction can fully utilize the many levels provided by VLC and the cell states provided by patterned cells. They can maximize the amount of data a memory can write during its lifetime, which can also optimize its write speed and power efficiency.
3) Error Correction. Error-correcting codes for VLC and patterned cells of high rates and efficient encoding/decoding algorithms are developed. The focus is to explore how to design the codes when cell levels may not be fully programmed, or when the errors are related to geometry. The codes can significantly increase the reliability of PCMs and flash memories.
4) Data Management. New data management schemes are designed to optimally aggregate/migrate data in the memory systems, and maintain the long-term reliability of data.
5) Integration of Coding Schemes. It is very important to integrate the different coding schemes designed for the different functions discussed above. Our results can provide a unified and practical solution for PCMs and flash memories, and fully optimize their performance.
These results provide a fundamental understanding of the VLC and patterned cell technologies, which are aimed at the next generation of nonvolatile memory. The following sections provide more details on the coding functions listed above.
III. Efficient and Robust Data Representation
Data representation is the mapping between the cell states and data. In this work, highly efficient and robust data representation schemes are studied for VLC and patterned cells.
A. Data Representation for VLC
We focus on data representation schemes for VLC with these important properties: (1) the storage capacity provided by VLC is fully utilized; (2) the encoding and decoding of data are very efficient despite partial erasures, namely, the uncertainty in which set of levels will be programmed in a page (the basic unit for parallel programming). A good understanding of such schemes is not only useful for VLC, but also for future storage media where storage elements are heterogeneous and best-effort writing is needed to achieve maximum storage capacity.
The data representation scheme poses an interesting new optimization problem. Consider a page with n cells, where at most q discrete levels can be programmed into the cells using VLC. Let L = (l_1, l_2, …, l_n) ∈ {0, 1, …, q−1}^n be the cell-state vector, where l_i is the discrete level of the ith cell. Before programming, l_i = 0 for all i ∈ {1, …, n}. Let T = (t_1, t_2, …, t_n) ∈ {0, 1, …, q−1}^n be the target vector; namely, assuming all q levels can be programmed, we would like the ith cell to be programmed to level t_i. The levels are programmed sequentially: first level 1, then level 2, and so on. (See the accompanying figure.)
For i = 1, 2, …, q−1, let P_i denote the probability that, when the memory programs the page, level i will be successfully programmed. (This assumes that all q levels contain cells, which is the typical case.) Clearly, 1 = P_1 ≥ P_2 ≥ P_3 ≥ … ≥ P_{q−1}. The distribution of the P_i's is illustrated in the accompanying figure.
We present a new data-representation scheme based on constant-weight cell-state vectors. It generalizes the scheme using uniform-weight cell-state vectors presented in the previous section. Let w = (w_0, w_1, …, w_{q−1}) be a vector such that every w_i is a positive integer and Σ_{i=0}^{q−1} w_i = n. The scheme uses only those target vectors where, for i = 0, 1, …, q−1, the target vector has w_i cells in level i. With the low-to-high programming method of VLC, for i = 1, 2, …, q−1, there are

A_i = C(n − Σ_{j=1}^{i−1} w_j, w_i)

ways to allocate w_i cells to level i given the previously programmed i−1 lower levels (where C(m, k) denotes the binomial coefficient). So level i can store B_i = ⌊log_2 A_i⌋ bits. The expected number of bits stored in the page is Σ_{i=1}^{q−1} P_i B_i. If our objective is to maximize the expected amount of data stored in a page, then we should choose the vector w to maximize the objective function Σ_{i=1}^{q−1} P_i B_i. Alternatively, robust optimization problems can also be defined for more guaranteed performance [S. BOYD AND L. VANDENBERGHE, Convex optimization, Cambridge University Press (2004)]. The above scheme also enables very efficient encoding and decoding of data.
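As a concrete illustration of this optimization, the objective Σ_{i=1}^{q−1} P_i B_i can be evaluated directly for candidate weight vectors, and for small n the best vector can be found by exhaustive search. The following Python sketch assumes the allocation count A_i is the binomial coefficient C(n − Σ_{j=1}^{i−1} w_j, w_i), which is our reading of the construction above; the probabilities and parameters are illustrative, not taken from a real device.

```python
from itertools import product
from math import comb, floor, log2

def expected_bits(n, P, w):
    """Expected stored bits for weight vector w = (w_0, ..., w_{q-1}),
    sum(w) = n, given level-success probabilities P = [P_1, ..., P_{q-1}].
    Assumes A_i = C(n - (w_1 + ... + w_{i-1}), w_i) allocations per level."""
    assert sum(w) == n
    total, used = 0.0, 0
    for i in range(1, len(w)):
        A = comb(n - used, w[i])            # ways to place the level-i cells
        total += P[i - 1] * floor(log2(A))  # B_i = floor(log2 A_i) bits
        used += w[i]
    return total

def best_weight_vector(n, P):
    """Exhaustive search over weight vectors, feasible for small n."""
    q = len(P) + 1
    best, best_w = -1.0, None
    for w in product(range(1, n), repeat=q):
        if sum(w) == n:
            v = expected_bits(n, P, w)
            if v > best:
                best, best_w = v, w
    return best_w, best
```

For example, with n = 8, q = 3 and P = (1, 0.5), placing 4 cells in level 1 gives ⌊log_2 C(8,4)⌋ = 6 bits there, plus 0.5·⌊log_2 C(4,2)⌋ = 1 expected bit in level 2.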
The above scheme can be generalized to the case where every level i contains w_i(1+o(1)) cells. It can be proved that as n → ∞, such a scheme maximizes the storage capacity. The scheme can also be generalized to the case where a page is partitioned into multiple cell groups. Various important functions of memory systems can be explored for optimal solutions.
B. Data Representation for Patterned Cells
Patterned cells use the structures of amorphous/crystalline islands in cells to store data. In the amorphous-island scheme, the relative order of the resistance levels of amorphous islands can be used. In the crystalline-island scheme, the connectivity of the crystalline islands can be used. When the cell states are mapped to data, to achieve robust programming, it is important to understand how robust the data representation is toward noisy programming.
Consider the crystalline-island scheme introduced in the previous section, where the bottom electrodes in a cell form an a×b array. Every island is allowed only to connect to its neighbors in the same row or column. If the programming of islands is noisy, the most common form of error is that two islands that are diagonal from each other grow too large and become connected. (See the accompanying figure.)
Theorem 4. The crystalline-island scheme with islands positioned as any a×b rectangular array can correct all diagonal errors.
Various important error types can be studied for patterned cells, optimal structures and programming algorithms can be designed, and corresponding coding schemes can be explored. The results can achieve high storage capacity, very robust and efficient cell programming, and high power efficiency.
IV. Error Correction And Data Management
Error-correcting codes are very important for flash memories and PCMs. When cells become smaller and more levels are stored in cells for higher storage capacity, errors appear more easily in cells. Currently, flash memories and PCMs use the Hamming codes and BCH codes for error correction [see P. CAPPELLETTI, C. GOLLA, P. OLIVO AND E. ZANONI (Ed.), Flash memories, Kluwer Academic Publishers, 1st Edition (1999)]; and other codes, including LDPC codes and Reed-Solomon codes, are also explored. However, due to the memories' special error mechanisms (read/write disturbs, coupling, charge leakage and drifting of cell levels, thermal interference) and cell properties (multiple levels in cells, programming algorithms, etc.), new codes of better performance are urgently needed.
We study and design strong error-correcting codes that fully utilize the high storage capacity of VLC and the rich structures of patterned cells, and that are fully compatible with the other coding schemes. The maximized number of levels of VLC provides more cell states that can be used to combat errors; on the other hand, the uncertainty in the highest programmable level requires the code to be adaptive. One solution is to encode the information bits in the lower levels, which are more robust for programming, and store the parity-check information in the higher levels. Another solution is to partition a page into cell groups, and concatenate an erasure code (for each cell group) with an MDS code (across the cell groups) for optimized performance. The MDS code can also be replaced by other large-alphabet codes such as BCH codes or fountain codes. The above two solutions can also be combined. For patterned cells, it has been shown in the previous section that they have inherent robustness against geometry-related errors. By exploring more error types, strong error-correcting codes can be designed accordingly.
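To make the cross-group concatenation idea concrete, here is a minimal Python sketch of its simplest form: a single XOR parity group, i.e., an MDS code with one redundant symbol that can rebuild any one partially-erased cell group. The group contents and sizes are illustrative, not the actual construction.

```python
from functools import reduce

def xor_bytes(a, b):
    """Bytewise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def encode(groups):
    """Append one XOR parity group across equal-sized cell groups
    (a single-redundancy MDS code over the groups)."""
    return groups + [reduce(xor_bytes, groups)]

def recover(stored, lost):
    """Rebuild the group at index `lost` by XORing all survivors."""
    survivors = [g for i, g in enumerate(stored) if i != lost]
    return reduce(xor_bytes, survivors)
```

With a stronger MDS code (e.g., Reed-Solomon) in place of the single parity, the same structure tolerates multiple erased groups.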
The coding schemes for different functions can be combined to form a comprehensive data management solution. For example, the above MDS code construction can be generalized for both error correction and rewriting, where every codeword symbol (cell group) is encoded with a small error-detecting rewriting code, which is much easier to design due to the lower computational complexity. Page-level coding is also new and interesting. We have shown that to migrate data among n flash-memory blocks, coding-based solutions over GF(2) can reduce the number of block erasures from O(n log n) to O(n) [see, e.g., A. JIANG et al., Proc. 47th Annual Allerton Conference on Communication, Control and Computing (Allerton), pp. 1031-1038, Monticello, Ill. (2009)], [see also A. JIANG et al., Proc. IEEE International Symposium on Information Theory (ISIT), pp. 1918-1922, Austin, Tex. (2010)], [see also A. JIANG et al., IEEE Transactions on Information Theory, vol. 56, no. 10 (2010)]. The results can be further extended from conventional codes to rewriting codes for better performance. These comprehensive coding schemes are very suitable for VLC and patterned cells. Based on the study of data management systems that integrate different coding schemes, we can significantly improve the overall performance, longevity and reliability of PCMs and flash memories.
V. Conclusion
In this work, we present two novel storage technologies—variable-level cells and patterned cells—for PCMs and flash memories. They can also be used for other storage media with similar properties. The new technologies can maximize the storage capacity of PCMs and flash memories, enable robust and efficient programming, and substantially improve their longevity, speed and power efficiency.
I. Introduction
For nonvolatile memories (NVMs)—including flash memories, phase-change memories (PCMs), and memristors—maximizing the storage capacity is a key challenge. The existing method is to use multi-level cells (MLCs) with more and more levels, where a cell of q discrete levels can store log_2 q bits. See J. E. Brewer and M. Gill (Ed.), Nonvolatile memory technologies with emphasis on flash, John Wiley & Sons, Inc., Hoboken, N.J., 2008. Flash memories with four and eight levels have been used in products, and MLCs with sixteen levels have been demonstrated in prototypes. For PCMs, cells with four or more levels have been in development. How to maximize the number of levels in cells is one of the most important topics of study.
The number of levels that can be programmed into cells is seriously constrained by the noise in cell programming and by cell heterogeneity. See the Brewer document referenced above. We explain this using flash memories as an example; the concepts extend naturally to PCMs and memristors. A flash memory uses the charge stored in floating-gate cells to store data, where the amount of charge in a cell is quantized into q values to represent q discrete levels. Cell programming—the operation of injecting charge into cells—is a noisy process, which means that the actual increase in a cell's level can deviate substantially from the target value. And due to the block erasure property—to remove charge from any cell, a whole block of about 10^5 cells must be erased together to remove all their charge—cell levels are only allowed to increase monotonically, via charge injection, during the writing procedure. That makes it infeasible to correct over-injection errors. See the Brewer document referenced above. Besides cell-programming noise, the difficulty in programming is also caused by cell heterogeneity: even when the same voltage is used to program different cells, the increments in the different cells' levels can differ substantially, due to heterogeneity in cell material and geometry. See H. T. Lue et al., “Study of incremental step pulse programming (ISPP) and STI edge effect of BE-SONOS NAND flash,” Proc. IEEE Int. Symp. on Reliability Physics, vol. 30, no. 11, pp. 693-694, May 2008. Since memories use parallel programming for high write speed, a common voltage is used to program many cells during a programming step, and it cannot be adjusted for individual cells. See the Brewer and Lue documents referenced above. As cell sizes scale down, cell heterogeneity will become even more significant. See the Brewer document referenced above.
The storage capacity of MLC is limited by the worst-case performance of cell-programming noise and cell heterogeneity. See the Brewer and Lue documents referenced above. We illustrate this in the accompanying figure.
In this document, we introduce a new storage scheme named variable-level cells (VLC) for maximum storage capacity. It has two unique properties: the number of levels is not fixed, and the positions of the levels are chosen adaptively during programming. More specifically, we program the levels sequentially from low to high. After level i is programmed, we program level i+1 such that the gap between the two adjacent levels is at least the required safety gap. (There are many ways to differentiate the cells in different levels. For example, we can require the cells of the same level to have charge levels within δ from each other, and require cells in different levels to have charge levels at least Δ away from each other, for appropriately chosen parameters δ, Δ.) We program as many levels into the cells as possible until the highest programmed level reaches the physical limit.
The VLC scheme places the levels as compactly as possible, and maximizes the number of programmed levels, which is determined by the actual, instead of the worst-case, programming performance. It is illustrated in the accompanying figure.
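The adaptive placement described above can be sketched as a simple loop: each new level lands where the (noisy) programming actually puts it, at least a safety gap above the previous level, until the physical limit is reached. The following Python sketch is only an illustrative simulation; the increments, gap, and limit are made-up parameters, not measured device values.

```python
def count_programmable_levels(increments, safety_gap, physical_limit):
    """Place level i+1 at the actual (noisy) increment above level i,
    enforcing a minimum safety gap; stop at the physical limit.
    Returns the number of levels programmed above level 0."""
    pos, count = 0.0, 0
    for inc in increments:
        nxt = pos + max(inc, safety_gap)  # never closer than the safety gap
        if nxt > physical_limit:
            break                          # next level would exceed the limit
        pos = nxt
        count += 1
    return count
```

Because the count depends on the realized increments rather than a worst-case bound, two identical writes can yield different numbers of levels, which is exactly the stochastic behavior the VLC coding schemes must handle.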
The VLC scheme shifts data representation into the stochastic regime, because the number of levels actually used is not determined in advance. New coding schemes are needed for this new paradigm. In this paper, we present a data representation scheme, and prove that it achieves the storage capacity of VLC. We also study rewriting codes, which are important for improving the longevity of flash memories and PCMs, and present bounds for achievable rates. See A. Jiang, V. Bohossian, and J. Bruck, “Floating codes for joint information storage in write asymmetric memories,” Proc. IEEE International Symposium on Information Theory (ISIT), Nice, France, June 2007, pp. 1166-1170 and L. A. Lastras-Montano, M. Franceschini, T. Mittelholzer, J. Karidis and M. Wegman, “On the lifetime of multilevel memories,” Proc. IEEE International Symposium on Information Theory (ISIT), Seoul, Korea, 2009, pp. 1224-1228.
The remainder of this paper is organized as follows. In Section II, data representation schemes are studied, and the storage capacity of VLC is derived. In Section III, data rewriting and the achievable rates are studied. In Section IV, concluding remarks are presented.
II. Data Representation and Capacity of VLC
In this section, we present a probabilistic model for VLC, study its representation scheme, and derive its capacity.
A. Discrete Model for VLC
For a storage scheme, it is key to have a discrete model that not only enables efficient code designs, but is also robust to the physical implementation of the scheme. In this paper, we use the following simple probabilistic model for VLC.
Let q denote the maximum number of levels we can program into cells, and call the q levels level 0, level 1, …, level q−1. Let n denote the number of cells, and for i = 1, 2, …, n, denote the level of the ith cell by c_i ∈ {0, 1, …, q−1}. Before writing, all cells are at level 0. Let L = (l_1, l_2, …, l_n) ∈ {0, 1, …, q−1}^n denote the target levels, which means that for i = 1, …, n, we plan to program c_i to l_i. Since VLC uses the relative positions of charge levels to store data, we usually require that for i = 0, 1, …, max_{1≤j≤n} l_j, at least one cell is assigned to level i. However, when n → ∞, this constraint has a negligible effect on the code rate, so it can be neglected when we analyze capacity. To program cells to the target levels L, we first program level 1 (namely, push some cells from level 0 to level 1), then program level 2, level 3, …, until we reach a certain level i such that its charge levels are so close to the physical limit that we will not be able to program level i+1. All the cells that should belong to levels 1, 2, …, i are successfully programmed to those levels. The cells that should belong to levels {i+1, i+2, …, max_{1≤j≤n} l_j} are still in level 0 (together with the cells that should belong to level 0). So the final cell levels are L_i ≜ (c′_1, c′_2, …, c′_n), where for j = 1, …, n, c′_j = l_j if 1 ≤ l_j ≤ i, and c′_j = 0 otherwise.
For i = 1, 2, …, q−1, let p_i denote the probability that level i can be programmed given that levels 1, 2, …, i−1 are successfully programmed. (For convenience, define p_q = 0.) Let T denote the target levels and S the written levels. So when T = L ∈ {0, 1, …, q−1}^n, for i = 0, 1, …, q−1, we have Pr{S = L_i} = (1 − p_{i+1}) ∏_{j=1}^{i} p_j.
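The probabilities Pr{S = L_i} = (1 − p_{i+1}) ∏_{j=1}^{i} p_j form a distribution over the outcomes i = 0, 1, …, q−1, since the sum telescopes to 1. A quick Python consistency check (the p_i values in the test are illustrative):

```python
def level_distribution(p):
    """Pr{S = L_i} for i = 0..q-1, given conditional success
    probabilities p = [p_1, ..., p_{q-1}] (p_q = 0 by convention)."""
    ppad = list(p) + [0.0]            # append p_q = 0
    dist, prod = [], 1.0
    for pi in ppad:
        dist.append((1 - pi) * prod)  # highest programmed level is i
        prod *= pi                    # prod = p_1 * ... * p_i so far
    return dist
```

Note that the last entry, i = q−1, carries probability ∏_{j=1}^{q−1} p_j because p_q = 0 makes the factor (1 − p_q) equal to one.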
We define the capacity of VLC by

C = lim_{n→∞} max_{P_T(t)} I(T; S)/n,

where P_T(t) is the probability distribution of T, and I(T; S) is the mutual information of T and S. Here we view the n cells as one symbol for the channel, and normalize the capacity by the number of cells. The capacity defined this way equals the expected number of bits a cell can store.
B. Data Representation Schemes
We present a data representation scheme with a nice property: every level i (for i = 1, 2, …, q−1) encodes a separate set of information bits. This enables efficient encoding and decoding of data. The code also achieves capacity and is therefore optimal. The code is of constant weight: the number of cells assigned to each level is fixed for all codewords.
Let μ_1, μ_2, …, μ_{q−1} ∈ (0,1) be parameters. The codewords of our code are the target levels T with this property: nμ_1 cells are assigned to level 1; and for i = 2, 3, …, q−1, nμ_i ∏_{j=1}^{i−1}(1−μ_j) cells are assigned to level i. (This is a general definition of constant-weight codes. Clearly, μ_i equals the number of cells assigned to level i divided by the number of cells assigned to levels {0, i, i+1, …, q−1}. Here we consider n → ∞ and p_i > 0 for 1 ≤ i ≤ q−1.) The constant-weight code enables convenient encoding and decoding methods as follows. Since there are

C(n, nμ_1)

ways to choose the nμ_1 cells in level 1 (where C(m, k) denotes the binomial coefficient), level 1 can encode

log_2 C(n, nμ_1) ≈ nH(μ_1)

information bits. Then, for i = 2, 3, …, q−1, given the cells already assigned to levels {1, 2, …, i−1}, there are

C(n ∏_{j=1}^{i−1}(1−μ_j), nμ_i ∏_{j=1}^{i−1}(1−μ_j))

ways to choose the nμ_i ∏_{j=1}^{i−1}(1−μ_j) cells in level i; so level i can encode approximately

n ∏_{j=1}^{i−1}(1−μ_j) H(μ_i)

information bits. The mapping from the cells in level i to the information bits that level i represents has a well-studied solution in enumerative source coding (see T. M. Cover, “Enumerative source coding,” IEEE Transactions on Information Theory, vol. IT-19, no. 1, pp. 73-77, January 1973), so we skip its details.
Given a stream of information bits, we can store its first nH(μ_1) bits in level 1, its next n(1−μ_1)H(μ_2) bits in level 2, its next n(1−μ_1)(1−μ_2)H(μ_3) bits in level 3, and so on. This makes encoding and decoding convenient despite the nondeterministic behavior of writing. In memories, the n cells represent a page of cells that are programmed in parallel. If the target levels are L and the written levels are L_i, then we have written the first Σ_{k=1}^{i} n (∏_{j=1}^{k−1}(1−μ_j)) H(μ_k) information bits of the stream to the page of n cells. The rest of the stream can be written to the other pages in the memory. The expected number of information bits that can be written into the n cells is Σ_{i=1}^{q−1} (∏_{j=1}^{i} p_j) n (∏_{j=1}^{i−1}(1−μ_j)) H(μ_i). So the rate of the code, measured as the number of stored bits per cell, is

R = Σ_{i=1}^{q−1} (∏_{j=1}^{i} p_j) (∏_{j=1}^{i−1}(1−μ_j)) H(μ_i).
Let us define A_1, A_2, …, A_{q−1} recursively: A_{q−1} = 2^{p_{q−1}}; and for i = q−2, q−3, …, 1, A_i = (1 + A_{i+1})^{p_i}. Theorem 2 below shows the maximum rate of the code and the corresponding optimal configuration of the parameters μ_1, μ_2, …, μ_{q−1}. We first prove the following lemma.
Lemma 1. Let x ∈ [0,1] and y ∈ [0,1] be given numbers. Let

ƒ(μ) = x(H(μ) + (1−μ)y)

for μ ∈ [0,1], where ƒ′(μ) denotes the derivative of ƒ(μ). By setting ƒ′(μ) = 0, we get the maximizing value

μ* = 1/(1 + 2^y).

And we get ƒ(μ*) = log_2 (1 + 2^y)^x = x log_2(1 + 2^y).
Theorem 2. The maximum rate of the constant-weight code is R = log_2 A_1, which is achieved when

μ_i = 1/(1 + A_{i+1})

for i = 1, 2, …, q−2, and μ_{q−1} = 1/2.
Proof: Since
to maximize R, we should have
So in the following discussion, we always assume that
For k=q−2, q−3, . . . , 1, define
We will prove the following property by induction, for k=q−2, q−3, . . . , 1:
Property :
for i=k, k+1, . . . , q−2. And the maximum value of
As the base case, let k=q−2. We have
To maximize
and the maximum value of
We now consider the induction step. For kε {q−3, q−4, . . . , 1}, we have
By the inductive assumption,
for I=k+1, k+2, . . . , q−2 and the maximum value of
and the maximum value of
Consider VLC constant-weight codes with q = 5. We have

A_4 = 2^{p_4},
A_3 = (1 + 2^{p_4})^{p_3},
A_2 = (1 + (1 + 2^{p_4})^{p_3})^{p_2},
A_1 = (1 + (1 + (1 + 2^{p_4})^{p_3})^{p_2})^{p_1}.

By Theorem 2, to maximize the rate of the code, we should choose the parameters μ_1, μ_2, μ_3, μ_4 as follows: μ_i = 1/(1 + A_{i+1}) for i = 1, 2, 3, and μ_4 = 1/2. The above parameters make the code achieve the maximum rate R = log_2 A_1.
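The recursion and the statement of Theorem 2 can be checked numerically. The Python sketch below computes A_{q−1} = 2^{p_{q−1}} and A_i = (1 + A_{i+1})^{p_i}, sets μ_i = 1/(1 + A_{i+1}) for i < q−1 and μ_{q−1} = 1/2, and verifies that the rate Σ_{i=1}^{q−1} (∏_{j=1}^{i} p_j)(∏_{j=1}^{i−1}(1−μ_j)) H(μ_i) equals log_2 A_1; the p_i values used in the test are illustrative.

```python
import math

def H(x):
    """Binary entropy function in bits."""
    if x in (0.0, 1.0):
        return 0.0
    return -x * math.log2(x) - (1 - x) * math.log2(1 - x)

def capacity_and_weights(p):
    """Given p = [p_1, ..., p_{q-1}], return (log2 A_1, optimal mu list)
    per the recursion A_{q-1} = 2^{p_{q-1}}, A_i = (1+A_{i+1})^{p_i}."""
    q = len(p) + 1
    A = [0.0] * q                         # A[i] holds A_i for i = 1..q-1
    A[q - 1] = 2 ** p[q - 2]
    for i in range(q - 2, 0, -1):
        A[i] = (1 + A[i + 1]) ** p[i - 1]
    mu = [1 / (1 + A[i + 1]) for i in range(1, q - 1)] + [0.5]
    return math.log2(A[1]), mu

def rate(p, mu):
    """Rate sum_i (prod_{j<=i} p_j)(prod_{j<i}(1-mu_j)) H(mu_i)."""
    R, pprod, surv = 0.0, 1.0, 1.0
    for i in range(1, len(p) + 1):
        pprod *= p[i - 1]                 # prod_{j=1}^{i} p_j
        R += pprod * surv * H(mu[i - 1])
        surv *= (1 - mu[i - 1])           # prod_{j=1}^{i} (1 - mu_j)
    return R
```

For q = 2 the formulas collapse to the familiar case: A_1 = 2^{p_1}, μ_1 = 1/2, and the rate is p_1 bits per cell.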
We now discuss briefly data representation for VLC when n is small. In this case, it can be beneficial to use codes that are not of constant weight to improve code rates. At the same time, the need for every target level to contain at least one cell no longer has a negligible effect on the code rates. We illustrate such codes with the following example.
Consider n = 4 cells that can have at most q = 3 levels. We show a code in the accompanying figure.
C. Capacity of VLC
We now derive the capacity of VLC, and prove that the constant-weight code shown above is optimal.
We first present a channel model for a single cell. Let X denote the target level for a cell, and let Y denote the actual state of the cell after writing. Clearly, X ∈ {0, 1, …, q−1}. The level X can be successfully programmed with probability p_1 p_2 ⋯ p_X if X ≥ 1, and with probability p_1 p_2 ⋯ p_{q−1} if X = 0; and if so, we get Y = X. It is also possible that level X is not successfully programmed. For i = 0, 1, …, q−2, the highest programmed level will be level i with probability (1 − p_{i+1}) ∏_{j=1}^{i} p_j; and if so, the cells with target levels in {0, i+1, i+2, …, q−1} will all remain in level 0. In that case, if X = 0 or i+1 ≤ X ≤ q−1, we denote the state of the cell after writing (namely, Y) by E_{0, i+1, i+2, …, q−1} and call it a partial erasure, because it is infeasible to tell which level in {0, i+1, i+2, …, q−1} is the target level of the cell. So we have Y ∈ {0, 1, …, q−1} ∪ {E_{0, 1, 2, …, q−1}, E_{0, 2, 3, …, q−1}, …, E_{0, q−1}}. We call this channel the partial-erasure channel. Examples of the channel for q = 2, 3 are shown in the accompanying figure.
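The partial-erasure channel can be built explicitly, and its capacity estimated with the standard Blahut-Arimoto iteration; for small q this lets one check the closed form log_2 A_1 (with A_{q−1} = 2^{p_{q−1}} and A_i = (1 + A_{i+1})^{p_i}) numerically. The following Python sketch uses illustrative p_i values and encodes the partial-erasure state for highest programmed level h as one output symbol E_h.

```python
import math

def partial_erasure_channel(p):
    """Transition matrix W[x][y] of the q-level partial-erasure channel.
    Inputs x = 0..q-1 are target levels; outputs are levels 0..q-1
    followed by partial-erasure symbols E_0..E_{q-2} (indices q..2q-2),
    where E_h means the highest programmed level was h < q-1."""
    q = len(p) + 1
    ppad = list(p) + [0.0]                       # p_q = 0
    ph, prod = [], 1.0                           # Pr{highest level = h}
    for pi in ppad:
        ph.append((1 - pi) * prod)
        prod *= pi
    W = [[0.0] * (2 * q - 1) for _ in range(q)]
    for x in range(q):
        for h in range(q):
            if h == q - 1 or 1 <= x <= h:
                W[x][x] += ph[h]                 # target level readable
            else:
                W[x][q + h] += ph[h]             # partially erased
    return W

def mutual_information(px, W):
    """I(X;Y) in bits for input distribution px over channel W."""
    ny = len(W[0])
    qy = [sum(px[x] * W[x][y] for x in range(len(px))) for y in range(ny)]
    return sum(px[x] * W[x][y] * math.log2(W[x][y] / qy[y])
               for x in range(len(px)) for y in range(ny)
               if px[x] > 0 and W[x][y] > 0)

def capacity_blahut_arimoto(W, iters=20000):
    """Standard Blahut-Arimoto iteration for channel capacity (bits)."""
    nx, ny = len(W), len(W[0])
    px = [1.0 / nx] * nx
    for _ in range(iters):
        qy = [sum(px[x] * W[x][y] for x in range(nx)) for y in range(ny)]
        D = [math.exp(sum(W[x][y] * math.log(W[x][y] / qy[y])
                          for y in range(ny) if W[x][y] > 0))
             for x in range(nx)]
        Z = sum(px[x] * D[x] for x in range(nx))
        px = [px[x] * D[x] / Z for x in range(nx)]
    return mutual_information(px, W)
```

For q = 3 the closed form reduces to p_1 log_2(1 + 2^{p_2}), which the iteration reproduces to within numerical tolerance.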
Lemma 5. The capacity of the partial-erasure channel for q levels is log2 A1 bits per cell.
Proof: The capacity of the partial-erasure channel is max_{P_X(x)} I(X; Y), where P_X(x) is the probability distribution of X. For i = 2, 3, …, q, we define Ch_i to be a partial-erasure channel with i levels and the following change of notation:
Let
Claim: For i = 2, 3, …, q, the capacity of Ch_i is log_2 A_{q−i+1}.
First, consider the base case i = 2. The channel Ch_2 is a binary erasure channel with erasure probability 1 − p_{q−1}, and its capacity is p_{q−1}. We have A_{q−1} = 2^{p_{q−1}}, so log_2 A_{q−1} = p_{q−1}. So the claim holds for i = 2.
As the inductive step, consider i≧3. We have
iε{0, q−i+1, q−i+2, . . . , q−1}
and
For convenience, in the following equation we use P(x) to denote P
We see that B is actually the mutual information between the input and output symbols of the channel Chi−1, namely B=I(
By Lemma 1,
So claim is proved. Since X=
That completes the proof.
Theorem 6. The capacity of VLC is C = log_2 A_1.
Proof. Let T = (x_1, …, x_n) ∈ {0, 1, …, q−1}^n denote the target levels of the n cells, and S = (y_1, …, y_n) ∈ {0, 1, …, q−1, E_{0,1,…,q−1}, E_{0,2,…,q−1}, …, E_{0,q−1}}^n denote the written levels of the n cells. Note that the requirement for every level to have at least one cell has a negligible effect on the capacity, because we can satisfy the requirement by assigning q auxiliary cells a_0, a_1, …, a_{q−1} to the q levels, where for i = 0, 1, …, q−1, we let auxiliary cell a_i's target level be level i. As n → ∞, the q auxiliary cells do not affect the code's rate. So in the following, we can assume that the set of values T can take is exactly the set {0, 1, …, q−1}^n; namely, every cell's target level can be freely chosen from the set {0, 1, …, q−1}. We also assume the q auxiliary cells exist, without loss of generality (w.l.o.g.).
Let h ∈ {0, 1, …, q−1} denote the highest programmed level. Then Pr{h = 0} = 1 − p_1, and for i = 1, 2, …, q−1, Pr{h = i} = (1 − p_{i+1}) p_1 p_2 ⋯ p_i (with p_q = 0). The value of h can be determined after writing in this way: h is the highest written level of the q auxiliary cells. Note that the random variable h is independent of the n target levels x_1, x_2, …, x_n; and for i = 1, …, n, the value of y_i is determined by x_i and h. So max_{P_T(t)} I(T; S) = n max_{P_{x_i}(x)} I(x_i; y_i) = n max_{P_X(x)} I(X; Y) = n log_2 A_1, where X, Y are the input and output symbols of the partial-erasure channel. Since the capacity of VLC is C = lim_{n→∞} max_{P_T(t)} I(T; S)/n (where we view every VLC group of n cells as one symbol for the channel, and the channel has infinitely many such symbols), we have C = log_2 A_1.
The above theorem shows that the constant-weight code introduced in the previous subsection achieves capacity.
III. Rewriting Data in VLC
In this section, we study codes for rewriting data in VLC, and bound its achievable rates. There has been extensive study on rewriting codes for flash memories and PCMs (for both single-level cells (SLCs) and MLCs) for achieving longer memory lifetime. See Jiang and Lastras-Montano. In the well known write-once memory (WOM) model, the cell levels can only increase when data are rewritten. See F. Fu and A. J. Han Vinck, “On the capacity of generalized write-once memory with state transitions described by an arbitrary directed acyclic graph,” IEEE Transactions on Information Theory, vol. 45, no. 1, pp. 308-313, 1999. For flash memories and PCMs, the model describes the behavior of cells between two global erasure operations. Since erasures reduce the quality of cells, it is highly desirable to avoid them. Given the number of rewrites, T, our objective is to maximize the rates of the code for the T rewrites, when cell levels can only increase for rewriting.
A. Codes for Rewriting Data
We first consider some specific code constructions. Consider a VLC cell group that has n cells of q levels. Let p_1, p_2, …, p_{q−1} be the same probabilities as defined before. And for convenience, we define p_q = 0.
Let (c_1, c_2, …, c_n) ∈ {0, 1, …, q−1}^n denote the n cells' levels. Let them represent n bits of data, (b_1, b_2, …, b_n) ∈ {0,1}^n, in this way: for 1 ≤ i ≤ n, b_i = c_i mod 2. For convenience, we assume n → ∞, and we have q auxiliary cells with target levels 0, 1, …, q−1, respectively. The auxiliary cells ensure that every programmed level contains at least one cell, and help us tell the levels of the n cells. Clearly, for every rewrite, a cell's level needs to increase by at most one. The rewriting has to end when we cannot program a higher level. The rate of the code is one bit per cell for each rewrite. And the expected number of rewrites this parity code can support is Σ_{i=1}^{q−1} i · (p_1 p_2 ⋯ p_i)(1 − p_{i+1}) = p_1(1 + p_2(1 + p_3(⋯ + p_{q−2}(1 + p_{q−1})))).
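The closed form can be checked numerically: the expectation Σ_{i=1}^{q−1} i · (p_1 ⋯ p_i)(1 − p_{i+1}) telescopes to the nested product p_1(1 + p_2(1 + ⋯ + p_{q−2}(1 + p_{q−1}))). A small Python check, with illustrative p_i values in the test:

```python
def expected_rewrites_sum(p):
    """E[#rewrites] = sum_i i * Pr{highest programmed level = i},
    for p = [p_1, ..., p_{q-1}] with p_q = 0."""
    ppad = list(p) + [0.0]
    e, prod = 0.0, 1.0
    for i in range(1, len(ppad)):
        prod *= ppad[i - 1]              # p_1 * ... * p_i
        e += i * prod * (1 - ppad[i])    # times (1 - p_{i+1})
    return e

def expected_rewrites_nested(p):
    """Equivalent nested form p_1(1 + p_2(1 + ... (1 + p_{q-1})))."""
    acc = 0.0
    for pi in reversed(p):
        acc = pi * (1 + acc)
    return acc
```

The nested form also makes the qualitative behavior plain: each additional level contributes one more rewrite, discounted by the probability that all levels up to it can be programmed.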
More generally, given a WOM code that rewrites k bits of data t times in n two-level cells, by a similar level-by-level approach, we can obtain a rewriting code in VLC of rate k/n that supports t·p_1(1 + p_2(1 + p_3(⋯ + p_{q−2}(1 + p_{q−1})))) rewrites in expectation. See the Fu document referenced above.
B. Bounding the Capacity Region for Rewriting in VLC
We now study the achievable rates for rewriting in VLC. Note that unlike MLC, which are deterministic, the highest programmable level of a VLC group is a random variable. So we need to define code rates accordingly.
Consider a VLC group of n cells, whose highest programmable level is a random variable hε{1, 2, . . . , q−1}. (We assume h≧1—namely p1=1—for the convenience of presentation. The analysis can be extended to h≧0.) Note that the value of h remains unknown until level h is programmed. To simplify rate analysis, we suppose that there are q auxiliary cells a0, a1, . . . , aq−1 in the same VLC group, whose target levels are 0, 1, . . . , q−1, respectively. For i=1, . . . , h, when level i is programmed, the auxiliary cell ai will be raised to level i and always remain there. If h<q−1, after level h is programmed (at which point we find that level h+1 cannot be programmed), we push ah+1, . . . , aq−1 to level h, too. So having more than one auxiliary cell in a level i indicates h=i. For sufficiently large n, the q auxiliary cells have a negligible effect on the code rate.
Now consider N VLC groups G_1, G_2, …, G_N, each of n cells. (For capacity analysis, we consider N → ∞.) For i = 1, …, N, denote the highest programmable level of G_i by h_i ∈ {1, …, q−1}, and denote its cells by (c_{i,1}, …, c_{i,n}). Here h_1, …, h_N are i.i.d. random variables, where for 1 ≤ i ≤ N and 1 ≤ j ≤ q−1, Pr{h_i = j} = p_1 p_2 ⋯ p_j (1 − p_{j+1}). (Note p_1 = 1 and p_q ≜ 0.) If the target level of cell c_{i,j} is l_{i,j}, we will program it to level min{l_{i,j}, h_i}. Then if h_i < q−1 and the written level of cell c_{i,j} is h_i, we say that the cell is in the partially-erased state E_{h_i}, since its target level could be any value in {h_i, h_i+1, …, q−1}. In addition, for any two vectors x = (x_1, x_2, …, x_k) and y = (y_1, y_2, …, y_k), we say x ≤ y if x_i ≤ y_i for i = 1, …, k.
Definition 8. A (T, V_1, V_2, …, V_T) rewriting code for the N VLC groups consists of T pairs of encoding and decoding functions {(f_t, g_t)}_{t=1}^{T}, with the message index sets I_t = {1, 2, …, V_t}, the encoding functions f_t: I_t × {0, 1, …, q−1}^{Nn} → {0, 1, …, q−1}^{Nn}, and the decoding functions g_t: {0, 1, …, q−1}^{Nn} → I_t. Let x_0^{Nn} = (0, 0, …, 0) ∈ {0, 1, …, q−1}^{Nn}. Given any sequence of T messages m_1 ∈ I_1, m_2 ∈ I_2, …, m_T ∈ I_T for the T rewrites, the target levels for the cells (c_{1,1}, …, c_{1,n}, c_{2,1}, …, c_{2,n}, …, c_{N,1}, …, c_{N,n}) are x_1^{Nn} = f_1(m_1, x_0^{Nn}), x_2^{Nn} = f_2(m_2, x_1^{Nn}), …, x_T^{Nn} = f_T(m_T, x_{T−1}^{Nn}), respectively, where x_{t−1}^{Nn} ≤ x_t^{Nn} for t = 1, …, T. However, while the target cell levels for the tth rewrite (for t = 1, …, T) are x_t^{Nn} = (l_{1,1}, …, l_{1,n}, l_{2,1}, …, l_{2,n}, …, l_{N,1}, …, l_{N,n}), the written cell levels are y_t^{Nn} = (l′_{1,1}, …, l′_{1,n}, l′_{2,1}, …, l′_{2,n}, …, l′_{N,1}, …, l′_{N,n}), where l′_{i,j} = min{l_{i,j}, h_i}. For decoding, it is required that for t = 1, …, T, we have Pr{g_t(y_t^{Nn}) = m_t} → 1 as N → ∞.
For t = 1, …, T, define

R_t = (log_2 V_t)/(Nn).

Then (R_1, R_2, …, R_T) is called the rate vector of the code.
We call the closure of the set of all rate vectors the capacity region, and denote it by AT. We present its inner/outer bounds.
1) Inner Bound to Capacity Region: We consider a sub-channel code for VLC. Let c_1, c_2, …, c_N be N cells, one from each of the N VLC groups. The Nn cells in the N VLC groups can be partitioned into n such “sub-channels.” We define the rewriting code for the N cells in the same way as in Definition 8 (by letting n = 1). We denote its capacity region by Ā_T. Clearly, for any given n, we have Ā_T ⊂ A_T.
Let L = {0, 1, …, q−1} denote the set of target levels. Let E = {E_1, E_2, …, E_{q−2}} denote the set of partially-erased states. Then L ∪ E is the set of written levels. For two random variables X, Y taking values in L, we say “X ≼ Y” if Pr{X = x, Y = y} = 0 for any 0 ≤ y < x ≤ q−1. Let random variables S_1, S_2, …, S_T form a Markov chain that takes values in L. We say “S_1 ≼ S_2 ≼ ⋯ ≼ S_T” if S_{t−1} ≼ S_t for t = 2, 3, …, T. For i = 1, 2, …, T, let (s_{i,0}, s_{i,1}, …, s_{i,q−1}) denote the probability distribution where s_{i,j} = Pr{S_i = j} for j = 0, 1, …, q−1.
Given the random variables S_1, S_2, …, S_T, we define α_{i,j} and B_{i,j} (for i = 1, 2, …, T and j = 1, 2, …, q−2) as follows. Let α_{i,j} = (Σ_{k=j}^{q−1} s_{i,k})(∏_{k=2}^{j} p_k)(1 − p_{j+1}). We define B_{i,j} to be a random variable taking values in {j, j+1, …, q−1}, where Pr{B_{i,j} = k} = s_{i,k}/(Σ_{l=j}^{q−1} s_{i,l}) for k = j, j+1, …, q−1. We now present an inner bound to Ā_T. Since Ā_T ⊂ A_T, it is also an inner bound to A_T.
Theorem 9. Define D_T = {(R_1, R_2, …, R_T) ∈ ℝ^T | there exist Markov-chain random variables S_1, S_2, …, S_T taking values in {0, 1, …, q−1}, such that S_1 ≼ S_2 ≼ ⋯ ≼ S_T and
Then, we have DT⊂ĀT.
Proof: Suppose S_1, S_2, …, S_T are Markov-chain random variables that take values in {0, 1, …, q−1}, and that S_1 ≼ S_2 ≼ ⋯ ≼ S_T. For any constant ε > 0 (which can be arbitrarily small), we set
V_1 = 2^{N[H(S_1) − …]} and V_t = 2^{N[H(S_t | S_{t−1}) − …]} for t = 2, …, T.
We will prove that when N is sufficiently large, there exists an (T, V1, V2, . . . , VT) rewriting code for the N cells c1, c2, . . . , cN.
We first consider the case T=2. Let TS
Similarly, let TS
elements, and denote the selected subset by
elements, and denote the selected subset by
We first prove the following property:
Property : ∀×ε
To prove the property above, consider the channel model for a cell c_i, with its target level X ∈ L as the input symbol and its written level Y ∈ L ∪ E as the output symbol. We have Pr{Y = 0 | X = 0} = 1; for i = 1, 2, …, q−2, we have Pr{Y = i | X = i} = p_2 p_3 ⋯ p_{i+1}, and for j = 1, 2, …, i, Pr{Y = E_j | X = i} = p_2 p_3 ⋯ p_j (1 − p_{j+1}); and we have Pr{Y = q−1 | X = q−1} = p_2 p_3 ⋯ p_{q−1}, and for j = 1, 2, …, q−2, Pr{Y = E_j | X = q−1} = p_2 p_3 ⋯ p_j (1 − p_{j+1}). The channel model for q = 6 is illustrated in the accompanying figure.
We can see that if X has the same distribution as the random variable S1, then for i=1, 2, …, q−2, Pr{Y=Ei} = α1,i; also, for i=1, 2, …, q−2 and j=i, i+1, …, q−1, Pr{X=j|Y=Ei} = s1,j/(Σl=i…q−1 s1,l) = Pr{B1,i=j}, so H(X|Y=Ei) = H(B1,i). For any i ∈ L, if Y=i, then X=i and H(X|Y=i)=0. So we have H(X|Y) = Σj=1…q−2 α1,j H(B1,j).
When N→∞, with probability one we can decode x from y based on their joint typicality. So the property is true. Using the same analysis, a corresponding property holds for sequences typical with respect to S2.
We now discuss the encoding and decoding of the T=2 writes. For the first write, we choose V1 different elements x1, x2, …, xV1 ∈ T_{S1} to represent the V1 possible messages.
Consider the second write. Let {F1, F2, …, FV2} be a partition of the set T_{S2} that satisfies the following property:
Property ⋄: for every u ∈ T_{S1} and every v ∈ {1, 2, …, V2}, Fv ∩ G(u) ≠ ∅, where G(u) is defined below.
To prove Property ⋄, we use the method of random coding. For every z ∈ T_{S2}, assign to z an index v ∈ {1, 2, …, V2} uniformly and independently at random, and for v = 1, 2, …, V2 define
Fv = {z ∈ T_{S2} | z is assigned the index v}.
Then {F1, F2, …, FV2} form a partition of the set T_{S2}.
For any u ≜ (u1, u2, …, uN) ∈ T_{S1}, define G(u) as
G(u) = {v ∈ T_{S2} | v is jointly typical with u and vi ≥ ui for i=1, 2, …, N}.
Since S1 ⪯ S2, the set G(u) contains approximately 2^{NH(S2|S1)} elements when N is sufficiently large.
For any v ∈ I2 = {1, 2, …, V2} and u ∈ T_{S1}, the probability that Fv ∩ G(u) = ∅ decreases exponentially in N. By the union bound, we get
Pr{∃ v ∈ I2 and u ∈ T_{S1} such that Fv ∩ G(u) = ∅} → 0
as N→∞. This implies that Property ⋄ is true.
We now describe the encoding and decoding functions of the second write. Let {F1, F2, …, FV2} be a partition of the set T_{S2} satisfying Property ⋄. To write the message v ∈ I2 when the current target levels are u, the encoder programs the cells toward an element of Fv ∩ G(u); the decoder recovers v as the index of the part of the partition that contains the sequence decoded from the written levels.
The above proof for T=2 can be easily generalized to the proof for general T. The encoding and decoding functions for the tth write (for t=3, 4, . . . , T) can be defined in the same way as for the second write. So we get the conclusion.
Note that if p2=p3=…=pq−1=1 (namely, every cell can be programmed to the highest level q−1 with guarantee), we get αi,j=0 for all i and j. Consequently, the set of achievable rates presented in the above theorem, DT, becomes DT={(R1, R2, …, RT) ∈ ℝT | there exist Markov-chain random variables S1, S2, …, ST, such that S1 ⪯ S2 ⪯ … ⪯ ST and R1 ≤ H(S1), R2 ≤ H(S2|S1), …, RT ≤ H(ST|ST−1)}, which is exactly the capacity region of MLC with q levels. See F. Fu and A. J. Han Vinck, "On the capacity of generalized write-once memory with state transitions described by an arbitrary directed acyclic graph," IEEE Transactions on Information Theory, vol. 45, no. 1, pp. 308-313, 1999.
2) Outer Bound to Capacity Region: To derive an outer bound to the capacity region AT, we consider the rewriting code as defined in Definition 8, but with an additional property: the highest reachable levels h1, h2, . . . , hN for the N VLC groups are known in advance. Thus the encoding and decoding functions can use that information. Let AT* denote its capacity region. Clearly, AT*⊃AT, so it is an outer bound to AT.
Theorem 10. Define GT = {(R1, R2, …, RT) ∈ ℝT | for i=1, 2, …, q−1, there exist (r1,i, r2,i, …, rT,i) ∈ ℝT and Markov-chain random variables S1,i, S2,i, …, ST,i taking values in {0, 1, …, i}, such that
S1,i ⪯ S2,i ⪯ … ⪯ ST,i,
r1,i ≤ H(S1,i), r2,i ≤ H(S2,i|S1,i), …, rT,i ≤ H(ST,i|ST−1,i),
and for j=1, 2, …, T,
Rj = Σi=1…q−1 γi rj,i}.
Let CT be the closed set generated by GT. We have AT* = CT.
Proof: For i=1, 2, …, q−1, let Qi be the set of indices of the VLC groups whose highest reachable level is i. That is, Qi = {j ∈ {1, 2, …, N} | hj = i} ⊂ {1, 2, …, N}. Also, define γi = p1p2…pi(1−pi+1). (As before, pq ≜ 0.) Clearly, |Qi|/N → γi with high probability as N→∞.
We first prove that all rate vectors (R1, R2, …, RT) in GT are achievable rate vectors. It is known that for a WOM of i+1 levels [4], the rate vector (r1,i, r2,i, …, rT,i) is achievable for T writes if and only if there exist Markov-chain random variables S1,i, S2,i, …, ST,i taking values in {0, 1, …, i} such that S1,i ⪯ S2,i ⪯ … ⪯ ST,i and r1,i ≤ H(S1,i), r2,i ≤ H(S2,i|S1,i), …, rT,i ≤ H(ST,i|ST−1,i). So for i=1, 2, …, q−1, we can use the cells in the VLC groups indexed by Qi to achieve T writes with the rate vector (r1,i, r2,i, …, rT,i). Together, the N VLC groups achieve T writes with the rate vector (R1, R2, …, RT).
Next, we prove the converse. Given a (T, V1, V2, …, VT) code, we need to show that its rate vector is contained in CT.
We use the same technique of proof as described in the Fu reference (Theorem 3.1). For t=1, 2, . . . , T, let ƒt, gt denote the encoding and decoding functions of the code for the t-th write, respectively.
Let W1, W2, …, WT be independent random variables, where Wt is uniformly distributed over the message index set It = {1, 2, …, Vt} (for t=1, 2, …, T). Let Y0^Nn ≜ (0, 0, …, 0) denote the all-zero vector of length Nn. Then for t=1, 2, …, T, define Yt^Nn = (Yt,1, Yt,2, …, Yt,Nn) as Yt^Nn = ƒt(Wt, Yt−1^Nn). That is, Yt^Nn denotes the cell levels after the t-th write. It is not hard to see that H(Wt) = H(Yt^Nn|Yt−1^Nn) for t=1, 2, …, T.
For i=1, 2, . . . , q−1, let Qi⊂{1, 2, . . . , Nn} denote the indices of the cells whose highest reachable levels are all i, and let Li be an independent random variable that is uniformly distributed over the index set Qi. Specifically, the indices for cells in VLC group G1 are {1, 2, . . . , n}, the indices for cells in G2 are {n+1, n+2, . . . , 2n}, and so on. Let L be an independent random variable that is uniformly distributed over the index set {1, 2, . . . , Nn}. We get
For i=1, 2, . . . , q−1, define a set of new random variables S1,i, S2,i, . . . , ST,i taking values in {0, 1, . . . , i}, whose joint probability distribution is defined as
Define S0,i ≜ 0. It is not hard to see that S1,i, S2,i, …, ST,i form a Markov chain, and for any t ∈ {1, 2, …, T} the random variables (St−1,i, St,i) and (Yt−1,Li, Yt,Li) have the same probability distribution. So H(S1,i) = H(Y1,Li) and, for t=2, 3, …, T, H(St,i|St−1,i) = H(Yt,Li|Yt−1,Li). Since Yt−1,Li ≤ Yt,Li for t=2, 3, …, T, we have S1,i ⪯ S2,i ⪯ … ⪯ ST,i. Therefore for t=1, 2, …, T,
So we have
That completes the converse part of the proof. So AT*=CT.
Let MT ≜ max{Σt=1…T Rt | (R1, R2, …, RT) ∈ AT} denote the maximum total rate of all rewriting codes for VLC. It is known that for a WOM (i.e., MLC) of i+1 levels, the maximum total rate over T writes is log2 C(T+i, i), where C(·,·) denotes the binomial coefficient. See the Fu reference above. By Theorem 10, we get MT ≤ Σi=1…q−1 γi log2 C(T+i, i).
IV. Conclusion
This paper introduces a new data representation scheme, variable-level cells, for nonvolatile memories. By adaptively choosing the number and positions of levels in cells, higher storage rates can be achieved. The storage capacity of the VLC scheme is proved, and it is shown that it can be achieved by constant-weight codes. Codes for rewriting data are also analyzed for the VLC scheme, and both inner and outer bounds to the capacity region of rewriting are presented.
I. Introduction
Phase-change memory (PCM) is an important emerging nonvolatile memory (NVM) technology that promises high performance. It uses chalcogenide glass as cells, which has two stable states: amorphous and crystalline. See G. W. Burr et al., “Phase change memory technology,” Journal of Vacuum Science and Technology, vol. 28, no. 2, pp. 223-262, March 2010. The amorphous state has very high electrical resistance, and the crystalline state has low resistance. Intermediate states, called partially crystalline states, can also exist. High temperatures induced by electrical currents are used to switch the state of a portion of the cell, which is called a domain. By quantizing cell resistance into multiple discrete levels, one or more bits per cell can be stored. Currently, four-level cells have been developed. To improve data density, more levels are needed. See the Burr article referenced above.
The current multi-level cell (MLC) approach faces a number of challenges, including cell-programming noise, cell-level drifting, and high power consumption. See the Burr article and D. Lammers, "Resistive RAM gains ground," IEEE Spectrum, p. 14, September 2010. It is difficult to program cell levels accurately due to cell heterogeneity and noise. The cell levels can drift significantly after they are programmed, making it even harder to control their accuracy. And the high power requirement for cell programming is hindering PCM's application in mobile devices. See the Lammers article referenced above.
In this paper, we explore a new cell structure and its data representation scheme. In the new structure, called patterned cells, multiple domains per cell are used. An example is shown in the referenced figure.
We let every domain have two basic states: on (crystalline) or off (amorphous). If two neighboring domains are both on, they overlap and become electrically connected (i.e., low resistance). The connectivity of domains can be detected by measuring the resistance between their bottom electrodes, which uses low reading voltage and does not change the state of the domains. We use the connectivity patterns of domains to represent data. As an example, the connectivity patterns of the four domains in
The patterned cell is a new approach to store data using the internal structure of domains in PCM cells. The two basic states of its domains may eliminate the high precision and power requirements imposed by programming cell levels. The data representation scheme is a new type of code defined based on graph connectivity. In this paper, we explore this new scheme, analyze its storage capacity, and study its error-correction capability and the construction of error-control codes.
The rest of the paper is organized as follows. In Section II, we study the storage capacity of patterned cell. In Section III, we study error correction and detection for patterned cell. In Section IV, we present concluding remarks.
II. Storage Capacity of Patterned Cell
In this section, we present the graph model for connectivity-based data representation. Then we analyze the storage capacity of domains that form one or two dimensional arrays.
A. Graph Model for Connectivity-Based Data Representation
Let G=(V, E) be a connected undirected graph, whose vertices V represent the domains in a cell. An edge (u,v) exists if the two domains are adjacent (which means they overlap if they are both on). Let S: V→{0,1} denote the states of vertices: ∀v ∈ V, S(v)=1 if v is on, and S(v)=0 if v is off. Denote the |V| vertices by v1, v2, …, v|V|. We call (S(v1), S(v2), …, S(v|V|)) a configuration of G. Let Ū={0,1}^|V| denote the set of all configurations. Since in the crystalline-domain model the purpose of making a domain crystalline is to connect it to at least one crystalline neighbor, we focus on configurations that satisfy this property: for any v ∈ V that is on, at least one of its neighbors is also on. That is, U={(S(v1), S(v2), …, S(v|V|)) ∈ Ū | ∀1 ≤ i ≤ |V|, if S(vi)=1, then ∃vj ∈ V such that (vi,vj) ∈ E and S(vj)=1}. We call U the set of valid configurations.
Let C: V×V→{0,1} denote the connectivity between vertices: ∀w1≠w2 ∈ V, C(w1,w2)=1 if there exists a sequence of vertices (w1=u1, u2, …, uk=w2) such that (ui, ui+1) ∈ E and S(ui)=S(ui+1)=1 for i=1, 2, …, k−1; otherwise, C(w1,w2)=0. For any w ∈ V, we set C(w,w)=1 by default. Two vertices w1, w2 are connected if C(w1,w2)=1. The vector (C(v1,v1), C(v1,v2), …, C(v1,v|V|); C(v2,v1), C(v2,v2), …, C(v2,v|V|); …; C(v|V|,v1), C(v|V|,v2), …, C(v|V|,v|V|)) is called the connectivity pattern of G. Clearly, not all vectors in {0,1}^(|V|×|V|) are connectivity patterns that correspond to valid configurations (or even just configurations). So, to be specific, let ƒ: U→{0,1}^(|V|×|V|) be the function that maps a valid configuration to its connectivity pattern. Let C = {ƒ(ū) | ū ∈ U}, and we call C the set of valid connectivity patterns.
Lemma 1. The mapping f: U→C is a bijection.
Proof: Given a connectivity pattern in C, the corresponding valid configuration can be recovered uniquely: a vertex v is on if and only if it is connected to some vertex w≠v, because in a valid configuration every on vertex has an on neighbor, while an off vertex is connected to no other vertex. So ƒ is injective, and by the definition of C it is surjective.▪
A PCM can read the connectivity pattern. We store data by mapping elements in C to symbols. The rate of graph G is (log2 |C|)/|V| bits per vertex (i.e., domain).
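For a small graph, the sets U and C, the bijection of Lemma 1 (stated below), and the rate can all be computed exhaustively. A sketch using a hypothetical 4-vertex cycle as the domain graph (the choice of graph is illustrative, not from the text):

```python
from itertools import product
import math

# Hypothetical example graph: a 4-vertex cycle.
V = [0, 1, 2, 3]
E = [(0, 1), (1, 2), (2, 3), (3, 0)]

def neighbors(v):
    return [b if a == v else a for (a, b) in E if v in (a, b)]

def is_valid(s):
    # every "on" vertex must have at least one "on" neighbor
    return all(any(s[u] for u in neighbors(v)) for v in V if s[v])

def pattern(s):
    # connectivity pattern: C(u, v) = 1 iff u = v, or u and v are joined
    # by a path of "on" vertices (computed with a tiny union-find)
    parent = list(V)
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    for (a, b) in E:
        if s[a] and s[b]:
            parent[find(a)] = find(b)
    return tuple(1 if (u == v) or (s[u] and s[v] and find(u) == find(v)) else 0
                 for u in V for v in V)

valid = [s for s in product([0, 1], repeat=len(V)) if is_valid(s)]
patterns = [pattern(s) for s in valid]
assert len(set(patterns)) == len(valid)   # Lemma 1: f is a bijection
rate = math.log2(len(patterns)) / len(V)  # bits per vertex for this graph
```

For this cycle there are 10 valid configurations, so the rate is log2(10)/4 ≈ 0.83 bits per vertex.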
B. Capacity of One-Dimensional Array
It is not difficult to compute the rate of G when |V| is small. In this paper, we focus on large |V| (especially |V|→∞), which corresponds to using numerous domains in a large PCM layer. Let n=|V| and define N(n) ≜ |C| = |U|. We define the capacity of G as cap ≜ limn→∞ (log2 N(n))/n.
We first consider the case where the domains form a one-dimensional array. That is, in graph G=(V,E), we have V={v1, v2, …, vn} and E={(v1,v2), (v2,v3), …, (vn−1,vn)}. We denote the capacity of the one-dimensional array by cap1D.
Theorem 2. Let λ* ≈ 1.7549 be the largest real root of the equation λ³ − 2λ² + λ − 1 = 0. We have cap1D = log2 λ* ≈ 0.8114.
Proof: The valid configurations of a one-dimensional array form a constrained system, where every run of 1s (i.e., "on" vertices) must have length at least two. The Shannon cover of the system is shown in the referenced figure; let A denote its adjacency matrix. By solving |A−λI| = −(λ³−2λ²+λ−1) = 0, we find that the eigenvalue of A with the greatest absolute value is λ* ≈ 1.7549. It is known that the capacity of the constrained system is log2 λ*.
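The eigenvalue computation in the proof is easy to reproduce. A short sketch that finds λ* by bisection (stdlib only) and confirms the stated capacity value:

```python
import math

# lambda* is the largest real root of f(x) = x^3 - 2x^2 + x - 1,
# obtained from |A - lambda*I| = 0 in the proof above.
def f(x):
    return x**3 - 2*x**2 + x - 1

# Bisection on [1, 2]: f(1) = -1 < 0 and f(2) = 1 > 0.
lo, hi = 1.0, 2.0
for _ in range(200):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2
cap_1d = math.log2(lam)

assert abs(lam - 1.7549) < 1e-3
assert abs(cap_1d - 0.8114) < 1e-3
```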
We further present the number of valid configurations for a one-dimensional array with n vertices.
Theorem 3. Let α1, α2, α3 be the three solutions to the equation x³−2x²+x−1=0, and let μ1, μ2, μ3 be the numbers that satisfy the linear equation set
μ1α1 + μ2α2 + μ3α3 = 1,
μ1α1² + μ2α2² + μ3α3² = 2,
μ1α1³ + μ2α2³ + μ3α3³ = 4.
Then for a one-dimensional array with n vertices, we have
N(n) = |C| = |U| = μ1α1^n + μ2α2^n + μ3α3^n.
Proof: We derive the value of N(n) by recursive functions. Define g(n) to be the set of valid configurations for a linear array with n vertices in which the first vertex is "on"; that is, g(n) = {(s1, s2, …, sn) ∈ U | s1=1}.
To compute |g(n)|, notice that for a valid configuration (s1, s2, …, sn) ∈ U with s1=1, we must have s2=1. If s3=0, the remaining n−3 vertices carry an arbitrary valid configuration, giving N(n−3) possibilities; if s3=1, dropping the first vertex yields a configuration in g(n−1), and this correspondence is one-to-one. So we get |g(n)| = N(n−3) + |g(n−1)|.
To compute N(n), notice that for a valid configuration (s1, s2, …, sn) ∈ U, either s1=0, in which case the remaining n−1 vertices carry an arbitrary valid configuration, or s1=1, in which case the configuration belongs to g(n). So we get N(n) = N(n−1) + |g(n)|.
Combining the above two equations, we get the recursive function
N(n) = 2N(n−1) − N(n−2) + N(n−3).
By solving the recursion with the boundary conditions N(1)=1, N(2)=2, N(3)=4, we get the conclusion.▪
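The recursion and its boundary conditions can be checked by brute force. A small sketch that enumerates all valid configurations (every run of on-vertices has length at least two) and compares the count against the recursion:

```python
def brute_count(n):
    """Count length-n binary strings in which every run of 1s has
    length >= 2 (the valid configurations of a one-dimensional array)."""
    count = 0
    for m in range(2**n):
        s = [(m >> i) & 1 for i in range(n)]
        ok, i = True, 0
        while i < n:
            if s[i] == 1:
                j = i
                while j < n and s[j] == 1:
                    j += 1
                if j - i < 2:       # isolated 1: invalid
                    ok = False
                i = j
            else:
                i += 1
        count += ok
    return count

def recursive_count(n):
    # N(1)=1, N(2)=2, N(3)=4 and N(n) = 2N(n-1) - N(n-2) + N(n-3)
    N = {1: 1, 2: 2, 3: 4}
    for k in range(4, n + 1):
        N[k] = 2*N[k-1] - N[k-2] + N[k-3]
    return N[n]

for n in range(1, 13):
    assert brute_count(n) == recursive_count(n)
```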
C. Capacity of Two-Dimensional Arrays
We now consider the case where the domains form a two-dimensional array. Specifically, we study two types: the rectangular array and the triangular array, illustrated in
(1) Lower Bound based on Tiling: If we consider a probability distribution θ on the valid configuration set U, then the rate of G under θ is R(θ) = H(θ)/n bits per vertex, where H(θ) is the entropy of θ. So another expression for the capacity is cap = limn→∞ maxθ R(θ). For any distribution θ, limn→∞ R(θ) is a lower bound for cap, and different ways of constructing θ lead to different lower-bounding methods.
In A. Sharov and R. M. Roth, "Two-dimensional constrained coding based on tiling," IEEE Transactions on Information Theory, vol. 56, no. 4, pp. 1800-1807, 2010, tiling was proposed as a variable-length encoding technique for two-dimensional (2-D) constraints, such as runlength-limited (RLL) constraints and no-isolated-bits (n.i.b.) constraints. The idea of tiling is that the 2-D plane can be divided using shifted copies of two certain shapes, referred to as 'W' and 'B' tiles. Here, we say that a set of vertices A is a shift (or shifted copy) of another set B if and only if their vertices are mapped one-to-one and the position movement (vector) between each vertex in A and its corresponding vertex in B is fixed. The two types of tiles have the following properties:
According to these properties, we can first set the 'W' tiles independently based on a predetermined distribution π, and then configure the 'B' tiles uniformly and independently (given the 'W' tiles). Finally, the maximal information rate maxπ R(π) is a lower bound on the array's capacity.
As discussed previously, our constraint for a valid configuration is that each "on" vertex has at least one "on" neighbor. For the rectangular and triangular arrays, we can use the tiling schemes shown in the referenced figures.
According to Theorem 3.1 in the Sharov and Roth reference above, we obtain a lower bound determined by the tile configuration counts. Here, |W| (or |B|) is the size of each 'W' (or 'B') tile; e.g., |W|=12 in the left-side tiling of the referenced figure.
(2) Lower Bound based on Bit-Stuffing: Another way to obtain lower bounds for the capacities of 2-D constrained codes is based on bit-stuffing; see I. Tal and R. M. Roth, "Bounds on the rate of 2-D bit-stuffing encoders," IEEE Transactions on Information Theory, vol. 56, no. 6, pp. 2561-2567, 2010. In bit-stuffing, let ∂ denote the set of vertices near the left and top boundaries, called boundary vertices. Assume we know the state configuration of ∂; then we can program the remaining vertices one by one such that the ith vertex depends on a set of already-programmed vertices near it, denoted by Di. In this scheme, for different i, j, the set Di∪{i} is a shift of the set Dj∪{j}, and for all i, the conditional distribution P(xi|x(Di)) is fixed and denoted by γ, where x(Di) is the configuration of Di.
Let θ denote the probability distribution of the configuration on all the vertices V, and let δ denote the probability distribution of the configuration on the boundary vertices ∂. Then θ is uniquely determined by δ and the conditional distribution γ. It is not hard to prove that for any conditional distribution γ, when the 2-D array is infinitely large, there exists a distribution δ such that θ is stationary. That means that for any subset A⊂V and any shift σ(A)⊂V of A, the sets A and σ(A) have the same configuration distribution, namely,
Pθ(x(A)=a)=Pθ(x(σ(A))=a)
for any state configuration a. Note that this equation is true only when the block is infinitely large; otherwise, θ is quasi-stationary. See the Tal and Roth reference above.
Given this stationary distribution θ, we would like to calculate the conditional entropy Ri of the ith vertex given the states of the vertices programmed before it, where the ith vertex is not a boundary vertex. Assume the state distribution on Di is φ; then, according to the definition of bit-stuffing,
where |Di| is the same for all i, so we can also write it as |D|. It is not easy to get the exact value of Ri, because φ is unknown (it depends on γ) and there are too many constraints to guarantee that θ is stationary. By relaxing the constraints, we get a set of distributions on Di, denoted {φ′}, such that θ is stationary near the ith vertex (limited to a fixed area T near the ith vertex). Therefore,
such that (1) the configuration distribution on T is stationary, and (2) given some z ∈ {0,1}^|D|, we have γ(0|z)=0 to guarantee that each "on" vertex has at least one "on" neighbor.
Since the inequality above holds for all the vertices except the boundary vertices, a lower bound of the capacity can be written as
under the constraints. For more discussions, please see the Tal article referenced above.
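As an aside, the bit-stuffing idea is easy to see in one dimension, where the constraint matches the one used for our arrays (every on-run has length at least two). The following is an illustrative 1-D sketch of the idea only, not the 2-D encoder of the Tal and Roth reference:

```python
def valid_1d(s):
    """True iff every run of 1s in s has length at least two."""
    i, n = 0, len(s)
    while i < n:
        if s[i] == 1:
            j = i
            while j < n and s[j] == 1:
                j += 1
            if j - i < 2:
                return False
            i = j
        else:
            i += 1
    return True

def stuff_encode(bits):
    """Stuff an extra 1 at the start of every 1-run, so the output always
    satisfies the constraint (each data 1-run of length L costs L+1 bits)."""
    out, prev = [], 0
    for b in bits:
        if b == 1 and prev == 0:
            out.append(1)               # stuffed bit
        out.append(b)
        prev = b
    return out

def stuff_decode(coded):
    """Invert the encoder by dropping the first 1 of every 1-run."""
    out, prev = [], 0
    for b in coded:
        if not (b == 1 and prev == 0):  # skip the stuffed bit
            out.append(b)
        prev = b
    return out

data = [1, 0, 1, 1, 0, 0, 1]
coded = stuff_encode(data)
assert valid_1d(coded)
assert stuff_decode(coded) == data
```

The 2-D schemes discussed in the text apply the same stuffing principle row by row, with the stuffed decision conditioned on the neighborhood Di.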
(1) Upper Bound based on Convex Programming: In I. Tal and R. M. Roth, "Convex programming upper bounds on the capacity of 2-D constraints," IEEE Transactions on Information Theory, vol. 57, no. 1, pp. 381-391, 2011, convex programming was used as a method for calculating an upper bound on the capacity of 2-D constraints. The idea is based on the observation that there exists an optimal distribution θ* that is stationary and symmetric when the array is sufficiently large. The stationary property implies that for any set of vertices A and any shift σ(A) of A, the sets A and σ(A) have the same state (configuration) distribution. The symmetric property depends on the type of the array. For a rectangular array, if two sets of vertices A and B are reflection-symmetric about a horizontal/vertical line or a 45-degree line, then they have the same state (configuration) distribution. Note that reflection symmetry about a 45-degree line is also called transposition invariance in the Tal and Roth reference immediately above. For a triangular array, there are more symmetries: if two sets of vertices A and B are reflection-symmetric about a horizontal/vertical line or a 30/60-degree line, then they have the same state (configuration) distribution.
Now let us consider the distribution over a small region T for both arrays, as shown in the referenced figure.
It is easy to see that if a vertex i is not on the boundary, then
Ri ≤ H(xi | {x1, x2, …, xi−1} ∩ T) = R(φ).
That implies that R(φ) is an upper bound for the capacity.
So our work is to maximize R(φ) such that φ is stationary and symmetric on T. Thus we get the upper bounds for the capacity of the rectangular array in Table I. The same method also applies to the triangular array.
III. Error Correction and Detection
In this section, we study error correction and detection for patterned cells. We focus on one-dimensional arrays and two-dimensional rectangular arrays. When programming domains, a common error is to make a domain too large, such that it changes the connectivity pattern unintentionally. Two types of such errors are shown in the referenced figures.
A. One-Dimensional Array
Let G=(V, E) be a one-dimensional array of n vertices: v1, v2, …, vn. When n→∞ and given the overreach error probability pe, let cap1(pe) denote its capacity.
Theorem 4. For the one-dimensional array, cap1(pe) is lower bounded by the rates of the two constructions given in the proof below; in particular, cap1(pe) ≥ 1/2.
Proof: We prove the theorem constructively by presenting error-correcting codes for one-dimensional arrays.
To see that cap1(pe) ≥ 0.5, consider n to be even. Partition the n vertices into pairs: (v1, v2), (v3, v4), …, (vn−1, vn). Store one bit in every pair (v2i−1, v2i) (for i=1, 2, …, n/2) this way: if the bit is 0, set both vertices as "off"; if the bit is 1, set both vertices as "on." Clearly, the code can correct all overreach errors, and its rate is 0.5 bit per vertex. So cap1(pe) ≥ 0.5. In the following, we prove the stronger lower bound.
Given a valid configuration
For a configuration
sig(
sig(
Given a binary vector
Δ(
Δ(
We first prove the following property:
Property : Let
Due to the symmetry between Um,1 and Um,0 (just replace 1-runs with 0-runs and vice versa), we have |{
To prove that, consider a configuration
Let us obtain a new binary vector
this way: first, for i=1, 2, . . . , m, if the ith element in Δ(
that m+1 1-runs and 0-runs (without any limitation on the lengths of the 1-runs and 0-runs), and there is a one-to-one mapping between configurations in Um,1 of signature
such vectors
We now consider m→∞, let m be even, and let ε be an arbitrarily small constant. Define
So limm→∞ is even
Let K⊂
It is not difficult to see that K is an error-correcting code of length m (with m→∞) and rate 1−H(pe) (letting ε→0) that can correct binary symmetric errors with error probability pe.
Let
and let
be even. By Property , for every vector
configurations in Um,1 (and in Um,0) of signature
we can encode
information bits into the configurations in D1∪D0 as follows:
information bits are mapped to one of the configurations in Um,1 or Um,0 (depending on whether the 1st information bit is 1 or 0) whose signatures equal
We now show how to decode the codewords (i.e., configurations) in D1∪D0 to recover the information bits, where the codewords may contain overreach errors (with error probability pe).
Let
Since every 1-run or 0-run in
Let
let . . . be the length of the ith segment in
let L2i=α2i+1−α2i−1−1. Define the signature of
So the overreach errors have a one-to-one mapping to the 1's in the vector (μ1+n1 mod 2, μ2+n2 mod 2, . . . , μm+nm mod 2). Since sig(
information bits. That concludes the decoding algorithm.
We now analyze the rate R of the above code. When n, m→∞, we have
That leads to the conclusion.
The overreach error is, notably, a type of asymmetric error for graph connectivity. In the following, we present an error-detecting code that can detect all overreach errors. Its underlying idea is closely related to the well-known Berger code for asymmetric errors; see J. M. Berger, "A note on an error detection code for asymmetric channels," Information and Control, vol. 4, pp. 68-73, March 1961.
The framework of the code construction is as follows. We use m information vertices and r redundant vertices, which form a one-dimensional array of n=m+r vertices. The redundant vertices follow the information vertices in the array. Let the constants α1, α2, α3, μ1, μ2, μ3 be as specified in Theorem 3. The m information vertices store data from an alphabet of size N(m) = μ1α1^m + μ2α2^m + μ3α3^m. When m is large, the m information vertices store about 0.8114m information bits, and r ≈ log1.7549 m. (So the redundancy is logarithmic in the codeword length.) Let x denote the number of connected components in the subgraph induced by the information vertices; overreach errors can only decrease x. We use the redundant vertices to record the value of x, and the mapping is constructed such that the recorded value can only be increased by overreach errors. This way, a mismatch between the information vertices and the redundant vertices can be used to detect all overreach errors.
We now present the details of the code. Let v1, v2, …, vm denote the m information vertices. A connected component among them is a maximal segment of vertices (vi, vi+1, …, vj) whose corresponding bottom electrodes are all electrically connected. Let x and x′ denote the number of connected components among the information vertices before and after overreach errors happen (if any), respectively. Clearly, 1 ≤ x′ ≤ x ≤ m. If there are one or more overreach errors among the m information vertices, then x′ &lt; x; otherwise, x′ = x.
Let u1, u2, …, ur denote the r redundant vertices, and let Ur ⊂ {0, 1}^r denote the set of valid configurations for them. A function F: Ur → {1, 2, …, N(r)} assigns a distinct index to every valid configuration. That is, the function F sorts the valid configurations of the redundant vertices based on their lexical order. Let F−1 denote the inverse function of F. We will introduce the specific computations used by F and F−1 at the end of the subsection.
We now introduce how to encode the value of x using the configuration of the r redundant vertices. We choose r to be the smallest positive integer such that N(r) ≥ m.
We introduce details of the decoding (i.e., error detection) process. Let
Similarly, let
The decoding (i.e., error detection) algorithm is as follows:
Theorem 5. The above code can detect all overreach errors.
Proof: If overreach errors happen among the information vertices, we will have
The only remaining case is that no overreach error happens among the information vertices or among the redundant vertices, but there is an overreach error between the two segments (namely, between vm and u1). In this case, xm and y1 will be the true states of the two vertices, and the second step of the algorithm will detect the error.▪
Theorem 6. Let m ≥ 2 be an integer, and let r be the smallest positive integer such that μ1α1^r + μ2α2^r + μ3α3^r ≥ m, where the constants α1, α2, α3, μ1, μ2, μ3 are as specified in Theorem 3. Then there is an error-detecting code of length m+r and rate (log2 N(m))/(m+r) bits per vertex that can detect all overreach errors. When m→∞, we have r = logα1 m ≈ log1.7549 m, and the rate of the code approaches cap1D = log2 α1 ≈ 0.8114, which is the capacity of one-dimensional arrays.
We now introduce how the function F: Ur→{1, 2, . . . , N(r)} maps configurations to integers, and how its inverse function F−1: {1, 2, . . . , N(r)}→Ur maps integers to configurations.
We first show, given any valid configuration (s1, s2, …, sr) ∈ Ur, how to compute F((s1, s2, …, sr)). If the configuration is all zeros, then F((s1, s2, …, sr)) = 1. Otherwise, let
i = min{k ∈ {1, 2, …, r} | sk = 1}.
Let j ∈ {i+1, i+2, …, r} be defined as follows: if si = si+1 = … = sr = 1, then j = r; otherwise, let j be the integer such that si = si+1 = … = sj = 1 and sj+1 = 0. For any two configurations,
F(
By default, let N(0)=1; and if j ≥ r−1, let F((0, …, 0, sj+2, sj+3, …, sr)) = 1. The above recursion can easily be used to compute F((s1, s2, …, sr)).
Next, we show how, given an integer z ∈ {1, 2, …, N(r)}, to compute F−1(z) = (s1, s2, …, sr) ∈ Ur. If z=1, then F−1(z) = (0, 0, …, 0). In the following we assume z &gt; 1. Let i be the greatest integer such that N(r−i+1) ≥ z; then we have
s1 = s2 = … = si−1 = 0 and si = 1.
Let j be the smallest integer such that
(By default, let N(0) = N(−1) = 1.) Then we have
si = si+1 = … = sj = 1.
If j = r−1, we have sr = 0. If j ≤ r−2, we have sj+1 = 0 and (0, …, 0, sj+2, sj+3, …, sr) = F−1(z − N(r−1) − Σk=i+1…j−1 N(r−k−1)).
With the above recursion, we can easily determine F−1(z).
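For small r, the map F and its inverse can also be built directly from the definition (sorting Ur lexically), which is a convenient way to cross-check the recursions above. A table-based sketch:

```python
from itertools import product

def valid_configs(r):
    """U_r: length-r configurations in which every run of 1s has length >= 2."""
    def ok(s):
        i = 0
        while i < r:
            if s[i] == 1:
                j = i
                while j < r and s[j] == 1:
                    j += 1
                if j - i < 2:
                    return False
                i = j
            else:
                i += 1
        return True
    return [s for s in product([0, 1], repeat=r) if ok(s)]

def make_F(r):
    """F sorts U_r lexically and maps each configuration to its 1-based
    index; F_inv is the inverse map (a brute-force stand-in for the
    recursive computations described in the text)."""
    U = sorted(valid_configs(r))
    F = {s: i + 1 for i, s in enumerate(U)}
    F_inv = {i + 1: s for i, s in enumerate(U)}
    return F, F_inv

F, F_inv = make_F(5)
assert F[(0, 0, 0, 0, 0)] == 1      # the all-zero configuration has index 1
assert len(F) == 12                 # N(5) = 12 from the recursion of Theorem 3
for s, z in F.items():              # round trip: F_inv inverts F
    assert F_inv[z] == s
```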
B. Two-Dimensional Array
We now focus on the capacity of two-dimensional rectangular array when i.i.d. overreach errors happen with probability pe between neighboring on and off vertices. Let G=(V, E) be an m×m two-dimensional rectangular array, where m→∞. Let cap2(pe) denote its capacity.
Theorem 7. For any q ∈ [0, 1/2], let n(q,pe) = (1−q³)(pe + (1−pe)(1−(1−(1−q)pe)³)). Then, for the two-dimensional rectangular array,
Proof: The proof is constructive. First, consider a tile of five vertices as shown in the referenced figure, and let q ∈ [0, 1/2] be a parameter that we will optimize. Let the on/off states of the four vertices a, b, c, d be i.i.d., where each of a, b, c, d is on with probability 1−q and off with probability q. We set the state of vertex e (the vertex in the middle) this way: if a, b, c, d are all off, then e is off; otherwise, e is on. Clearly, this approach guarantees that every on vertex has at least one neighboring vertex that is also on. Let S(a), S(b), S(c), S(d) ∈ {0,1} denote the states of the vertices a, b, c, d, respectively. We let each of the four vertices a, b, c, d store a bit, which equals S(a), S(b), S(c), S(d), respectively.
It is well known that the small tiles can be packed perfectly to fill the two-dimensional space, as illustrated in the referenced figure.
Let us focus on the stored bit S(a1). (The analysis applies to the other stored bits in the same way.) After overreach errors happen, let S′(a1) denote our estimation of the bit S(a1). We determine S′(a1) this way:
We can see that if S(a1)=1, there will be no decoding error for this bit because we will have S′(a1)=1. If S(a1)=0, with a certain probability P (which we will analyze later) the overreach errors will make S′(a1) be 1. So the channel for the stored bits is asymmetric, similar to the Z-channel but not memoryless. We first show the following property:
Property: P ≤ (1−q³)(pe + (1−pe)(1−(1−(1−q)pe)³)).
To prove the property, assume S(a1)=0. If S′(a1)=1, then S(e1)=1, and there must be an overreach error that connects a1 to a neighbor that is on. We have Pr{S(e1)=1|S(a1)=0} = Pr{S(b1)=1, or S(c1)=1, or S(d1)=1} = 1−q³. Given that S(e1)=1, the probability that an overreach error connects a1 to either e1 or one of the on vertices among {b3, c2, d2} (see the referenced figure) is at most pe + (1−pe)(1−(1−(1−q)pe)³). Multiplying the two probabilities gives the bound.
We now use N small tiles to form a large tile, and use infinitely many such large tiles to fill the two-dimensional space with the following special arrangement: the large tiles are separated by buffer vertices that are always set to off, and for any two vertices in two different large tiles, any path between them contains at least two consecutive buffer vertices. This arrangement is illustrated in the referenced figure.
Build a sub-channel as follows: take one vertex from each large tile (an ai, bi, ci, or di vertex, but not an ei vertex), and let each such vertex store one bit as described before (i.e., the vertex stores bit 0 with probability q and bit 1 with probability 1−q). For example, we can take the vertex a shown in the referenced figure.
Since in every small tile, four out of the five vertices are used to store bits, we get the conclusion.
It can be seen that when pe→0, the lower bound in the above theorem approaches 4/5.
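The function n(q, pe) of Theorem 7 behaves as expected at the extremes; a quick numerical sketch (the particular parameter values below are illustrative only):

```python
def n_bound(q, pe):
    """The overreach-error probability expression n(q, pe) from Theorem 7."""
    return (1 - q**3) * (pe + (1 - pe) * (1 - (1 - (1 - q) * pe)**3))

# With no overreach errors (pe = 0) the expression vanishes for every q ...
for q in [0.0, 0.1, 0.3, 0.5]:
    assert n_bound(q, 0.0) == 0.0

# ... and, for a fixed q, it grows as the error probability pe grows.
vals = [n_bound(0.3, pe) for pe in (0.01, 0.05, 0.1)]
assert vals == sorted(vals)
```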
IV. Conclusion
In this paper, a new cell structure named patterned cell is introduced for phase-change memories. It has a new data representation scheme based on graph connectivity. The storage capacity of the scheme is analyzed, and its error correction and detection performance is studied.
This section has three parts. In the first part, we consider the VLC scheme, and discuss how to differentiate the different discrete levels. In the second part, we consider the case where VLC is used for rewriting data, and clarify some details. In the third part, we describe the common features of VLC and patterned cells.
I. Part One
In this part, we consider the VLC scheme, and discuss how to differentiate the different discrete levels.
In the VLC scheme, there are various ways to differentiate levels, namely, to tell which cell belongs to which level. We introduce two such methods, based on clustering and on reference voltages, respectively.
A. Clustering-Based Method
In the clustering-based method, we view the range of analog levels as a one-dimensional space (i.e., a line), where the analog level of a cell is a point on the line. The basic idea is that nearby points form a cluster and are considered to be in the same discrete level, while faraway points belong to different clusters and therefore to different discrete levels. See the referenced figure.
There are many ways to define clusters. One of the simplest approaches is to define a parameter Δ>0, and require the gap between two adjacent clusters to be at least Δ; at the same time, we require that for analog levels in the same cluster, the gap between two adjacent analog levels be smaller than Δ. It is simple to determine which cell belongs to which cluster by measuring the analog levels.
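As a minimal sketch (in Python, with function and parameter names of our own choosing), the gap-based clustering rule above can be implemented as a single pass over the sorted analog levels:

```python
def cluster_levels(analog_levels, delta):
    """Group 1-D analog levels into clusters: any gap >= delta starts
    a new cluster, so each cluster corresponds to one discrete level.
    `delta` is the minimum inter-cluster gap described in the text."""
    levels = sorted(analog_levels)
    clusters = [[levels[0]]]
    for v in levels[1:]:
        if v - clusters[-1][-1] >= delta:
            clusters.append([v])     # gap >= delta: start a new discrete level
        else:
            clusters[-1].append(v)   # gap < delta: same discrete level
    return clusters

# Example: three discrete levels separated by gaps of at least 1.0
print(cluster_levels([0.1, 0.2, 1.5, 1.6, 3.0], 1.0))
# → [[0.1, 0.2], [1.5, 1.6], [3.0]]
```

Note that the read operation only measures the analog levels and sorts them; no fixed level positions are assumed, which is what makes the VLC levels flexible.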
B. Reference-Voltage Based Method
In the reference-voltage based method, a reference level is used to separate every two adjacent discrete levels. More specifically, consider level i and level i+1 to be two adjacent discrete levels, where level i is lower than level i+1. After level i is programmed, a reference cell can be programmed such that its analog level is above level i. Then level i+1 can be programmed to be higher than the reference level. With the reference level (i.e., the level of the reference cell), the memory can differentiate level i and level i+1 by comparing them to the reference level. See the corresponding figure.
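A sketch of the corresponding read procedure (in Python, names ours): assuming the reference levels are sorted in increasing order, with reference i lying between discrete levels i and i+1, a cell's discrete level is the number of references its analog level exceeds:

```python
def read_discrete_level(cell_level, reference_levels):
    """Determine a cell's discrete level by comparing its analog level
    against the sorted reference levels; reference i separates
    discrete levels i and i+1 (reference-voltage-based method)."""
    level = 0
    for ref in reference_levels:
        if cell_level > ref:
            level += 1   # the cell lies above this reference
        else:
            break        # references are sorted, so we can stop
    return level

# Example: references at 1.0 and 2.0 separate levels 0/1 and 1/2
print(read_discrete_level(1.4, [1.0, 2.0]))  # cell between the refs → level 1
```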
C. How to Program Levels from Low to High
Finally, we describe a method for programming VLCs. (It will be shown later that patterned cells can be programmed in a similar way.) When we program levels, we program them from low to high, so that there is no risk of overshooting. See the corresponding figure.
Note that the levels in VLC are very flexible, because they need not have fixed positions. So if we need to adjust the position of an existing level (such as for rewriting data or for removing noise from levels), we can easily adjust the other levels accordingly.
II. Part Two
In this part, we consider the case where VLC is used for rewriting data, and clarify some details. Note that by rewriting data, we mean changing the stored data by only increasing the cell levels (without decreasing cell levels). This way, no block erasure is needed. Also note that when rewriting data, new (that is, higher) cell levels can be created. The cell levels created at different times are all considered levels of the VLC scheme.
A. How to Store One Bit Per Cell, and Rewrite Data
We first introduce the following concept: how to store one bit per cell in the VLC scheme, and how to rewrite the stored data. The result here can be extended to storing multiple bits per cell, or storing one or more bits in a cell group that contains multiple cells. We also note that the data stored in the cells can be any type of data, including error-correcting codes.
Consider n VLC cells with levels 0, 1, 2, 3, . . . . For i ∈ {1, 2, . . . , n}, let Li ∈ {0, 1, 2, 3, . . . } denote the discrete level of the ith cell. We let the bit stored in the ith cell be Li mod 2. Alternatively, we can let the bit stored in the ith cell be (Li+1) mod 2, which is very similar.
Given a binary word (x1, x2, . . . , xn) ∈ {0, 1}^n, we can store it in the n cells this way: for i=1, 2, . . . , n, if xi=0, then we let Li=0; if xi=1, then we let Li=1.
After that, we can rewrite data (that is, modify data) by only increasing cell levels (thus avoiding the expensive block erasure operation). Suppose that the word currently stored in the cells is
(y1, y2, . . . , yn) ∈ {0, 1}^n,
and we want to change it to
(z1, z2, . . . , zn) ∈ {0, 1}^n.
We can rewrite data this way: For i=1, 2, . . . , n, if zi=yi, we do not change Li; if zi≠yi, we increase Li by 1.
We illustrate the rewriting process in the corresponding figure.
We can see that with more and more rewrites, the cells occupy more and more levels.
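The 1-bit-per-cell write and rewrite procedure just described can be sketched in Python as follows (a minimal illustration; the function names are ours):

```python
def store_bits(bits):
    """Initial write: set level Li = xi, so the stored bit is Li mod 2."""
    return list(bits)

def read_bits(levels):
    """The bit stored in the ith cell is Li mod 2."""
    return [L % 2 for L in levels]

def rewrite(levels, new_bits):
    """Rewrite by only increasing levels: flip a cell's bit by raising
    its level by one; matching cells are left alone (no block erasure)."""
    return [L if (L % 2) == b else L + 1 for L, b in zip(levels, new_bits)]

levels = store_bits([1, 0, 1, 0])        # levels: [1, 0, 1, 0]
levels = rewrite(levels, [0, 0, 1, 1])   # levels: [2, 0, 1, 1]
levels = rewrite(levels, [1, 1, 1, 0])   # levels: [3, 1, 1, 2]
print(levels, read_bits(levels))         # → [3, 1, 1, 2] [1, 1, 1, 0]
```

As the example shows, each rewrite raises a cell's level by at most one, and with more and more rewrites the cells occupy more and more levels.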
B. Physically Correcting Errors
In this subsection, we introduce the following concept:
The VLC scheme provides the ability to physically correct errors. For example, consider the above 1-bit-per-cell scheme, and suppose that the stored word is an error-correcting code. Say that noise changes a cell level Li from an odd integer to an even integer; the corresponding bit of the error-correcting code is then changed from 1 to 0. After detecting this error using the error-correcting capability of the code, we can physically correct it by increasing the cell level Li by one (thus making it an odd integer again). This approach becomes infeasible only if the cell has already reached the highest level.
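A sketch of the physical correction step (in Python, names ours), assuming ECC decoding has already recovered the true bits:

```python
def physically_correct(levels, decoded_bits, max_level):
    """After ECC decoding recovers the true bits, raise any cell whose
    parity disagrees with its decoded bit by one level, restoring the
    correct parity in the cell itself. A cell already at max_level
    cannot be raised, so its error stays logically corrected only."""
    fixed = []
    for L, b in zip(levels, decoded_bits):
        if L % 2 != b and L < max_level:
            L += 1   # e.g. noise dropped an odd level to an even one
        fixed.append(L)
    return fixed

# Noise changed the second cell's level from 1 to 0 (bit 1 → 0);
# ECC decodes the true word as [1, 1, 0], so we raise that cell to 1.
print(physically_correct([1, 0, 0], [1, 1, 0], max_level=7))  # → [1, 1, 0]
```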
C. How to Fully Use Cell Levels
In the VLC scheme, we try to program as many levels as possible. Eventually, the highest level will reach the physical limit, and no more level can be created. When we rewrite data (as introduced above), some cells will reach the highest level sooner than other cells. But this does not mean that we cannot keep rewriting data. In the following, we introduce a method that allows us to keep rewriting data even though some cells have reached the highest level.
Let the highest level be seen as an “erased state”; more specifically, we see cells in the highest level as non-existent. We use the remaining cells to store data as before. With more and more rewrites, the number of cells we can use becomes smaller and smaller, so we need to store fewer and fewer bits.
D. Storing Multiple Bits per Cell
In this subsection, we introduce the following concept:
The extension from storing one bit per cell to more than one bit per cell is straightforward. For example, if we store two bits per cell, then we say that the symbol stored by a cell has alphabet size 4, because it has four values: s0=0, s1=1, s2=2, s3=3. If we let every cell store a symbol of 3 values (s0=0, s1=1, s2=2), then we say the stored symbol has alphabet size 3.
For i=1, 2, . . . , n, let Li denote the discrete level of the ith cell. If the symbol stored by a cell has alphabet size m, then we can let the symbol represented by the cell level Li be
Li mod m.
To rewrite data, we can increase cell levels similarly as before.
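In code (a Python sketch, names ours), the mod-m representation and its increase-only rewrite look like this; `(s - L) % m` is the smallest non-negative increase that brings the level's residue to the new symbol:

```python
def read_symbols(levels, m):
    """Each cell stores a symbol of alphabet size m as Li mod m."""
    return [L % m for L in levels]

def rewrite_symbols(levels, new_symbols, m):
    """Rewrite by only increasing levels: raise each cell by the
    smallest amount that makes its level mod m equal the new symbol
    (Python's % always returns a non-negative result here)."""
    return [L + (s - L) % m for L, s in zip(levels, new_symbols)]

levels = [0, 1, 2, 3]                             # alphabet size m = 4
levels = rewrite_symbols(levels, [3, 1, 0, 1], 4)
print(levels, read_symbols(levels, 4))            # → [3, 1, 4, 5] [3, 1, 0, 1]
```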
E. Storing One or More Bits per Cell Group
In this subsection, we introduce the following concept:
We can generalize the method introduced previously, where every cell stores one or more bits, in the following way. Partition the n cells into groups (say every group has m cells), and let every cell group store one or more bits. All that we need is a mapping from the states of the m cells in a group to the symbol they store. We show an example.
Let m be 3, and let the discrete levels of the three cells in a group be denoted by L1, L2, L3 ∈ {0, 1, 2, 3, . . . }. Suppose that we store two bits in the cell group, and use the following mapping:
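The specific mapping is not reproduced here; as one hypothetical possibility (our own, not necessarily the one intended above), the group of m = 3 cells can store the 2-bit symbol (L1 + L2 + L3) mod 4, which supports increase-only rewriting:

```python
# Hypothetical group mapping: the 2-bit symbol is the sum of the
# three cell levels modulo 4.
def group_symbol(levels):
    return sum(levels) % 4

def rewrite_group(levels, new_symbol):
    """Rewrite by raising the first cell's level by (new - old) mod 4;
    the increase could equally be spread across the three cells."""
    inc = (new_symbol - group_symbol(levels)) % 4
    return [levels[0] + inc] + levels[1:]

levels = [0, 0, 0]                 # the group stores symbol 0
levels = rewrite_group(levels, 3)  # levels become [3, 0, 0]
print(levels, group_symbol(levels))
```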
III. Part Three
In this part, we describe the common features of VLC and patterned cells.
A. Unified Asymmetric Model for VLC and Patterned Cell
A flash memory cell has this special property: when it is programmed, its level can only be increased, unless the cell block is erased. A conventional PCM cell has a similar property: when it is programmed, its level can only be increased, unless the cell is reset. So in a program/erase (or program/reset) cycle, we can view the states of a cell as an acyclic directed graph, where the cell can only be changed from a lower state to a higher state, but not from a higher state to a lower state. This is illustrated in the corresponding figure.
A patterned PCM cell has this special property: when it is programmed, the domains can only change from amorphous to crystalline, unless the cell is reset. Therefore, we can again view the states of a cell as an acyclic directed graph, where the cell can only be changed from a lower state to a higher state, but not from a higher state to a lower state. This is illustrated in the corresponding figure.
B. Unified Variable-Level Programming Model
In VLC, the discrete levels we want to program for cells are labeled by level 0, level 1, level 2, . . . . We program a lower level before programming a higher level. In other words, level i+1 is programmed after level i. And the relation is as follows:
In other words, when programming cells, overshooting errors can happen. So our method is to program a lower level before programming a higher level, and we make sure that the higher level does not overlap with the lower level in terms of their included cell states.
In patterned cells, we can also denote the discrete levels by level 0, level 1, level 2, . . . . Note that during programming, an overreach error, which connects a crystalline domain to the electrode of a neighboring amorphous domain, can happen. So to program levels robustly, we can use a method that is similar to that of VLC. Namely, we program levels from low to high; and every time we program a level, we make sure that it does not overlap with the lower levels in terms of their included cell states. We illustrate this with an example, which refers to the corresponding figure.
We first note two things:
We now illustrate the robust programming of patterned cells with the following example.
There is a set of n patterned cells, which are initially all in the state where all domains are amorphous. We need to program them to level 0, level 1, level 2, . . . . Our goal is to program as many levels as possible, and for any 1≦i<j, we will program level i before we program level j (same as in VLC).
For illustration, we will consider patterned cells with 2×2 domain arrays, as illustrated in the corresponding figure.
We now program level 1. For those cells that need to be in level 1, we program them to change from state A to state B. Note that we assign level 1 to a state that is as low as possible, without overlapping with the state of level 0. If all those cells are successfully programmed to state B, then level 1 will consist of only state B; and in the next step, to program level 2, we can program cells to change from state A (i.e., level 0) to state C, which is again the lowest state we can choose; and so on (to program levels 3, 4, . . . ).
However, suppose that when we program level 1, due to overreach errors, some of the cells that should have state B actually reach state F or I (which are both higher than B). In this case, we let level 1 consist of three states: B, F, and I. This is similar to VLC, where we let a level be the actual set of states that the cells of this level reach. Then to program level 2, we can choose state D as a state that belongs to level 2, because if we program cells to state D, even if overreach errors happen, the cells will not take on states already assigned to level 0 or level 1, namely states {A, B, F, I}. For those cells that should belong to level 2, we program them to change from state A to state D. For illustration, again consider two possible outcomes:
So we can see that the number of levels we program is determined adaptively based on the actual programming performance. If no error happens during programming, then every single state can be a distinct level. However, if errors happen during programming, we adaptively assign states to levels, and the number of levels that can be programmed will be less than the number of states.
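The adaptive assignment of states to levels can be sketched as follows (a Python illustration of the 2×2 example; the state names 'A', 'B', . . . follow the text, while the function names are ours). Each new level absorbs every state its cells actually reach, including overreached states, and later levels must avoid all states assigned so far:

```python
def assign_level(target_state, actual_states, assigned):
    """Adaptively assign states to a new level: we aim all cells of the
    level at `target_state`, but any states they actually reach (due to
    overreach errors) are absorbed into the level, and all such states
    are marked as taken so future levels avoid them."""
    level_states = {target_state} | set(actual_states)
    assigned |= level_states   # these states now belong to some level
    return sorted(level_states), assigned

assigned = {'A'}                                   # level 0 = state A
lvl1, assigned = assign_level('B', ['B', 'F', 'I'], assigned)
print(lvl1)          # level 1 absorbs the overreached states F and I
lvl2, assigned = assign_level('D', ['D'], assigned)
print(sorted(assigned))
```

Because overreached states consume states that could otherwise have formed distinct levels, the number of programmable levels shrinks exactly as the text describes when errors occur.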
The above programming method for patterned cells has two important properties:
It can be seen that they are also the two important properties of VLC.
The above programming method can be summarized as follows:
It can be seen that the above programming method is very similar to that of VLC.
C. Unified Rewriting Model
The rewriting method for VLC can be applied in a similar way to patterned cells. For simplicity, we skip the details.
The memory controller 2704 operates under control of a microcontroller 2710, which manages communications with the memory 2702 via a memory interface 2712 and manages communications with the host device via a host interface 2714. Thus, the memory controller supervises data transfers from the host 2706 to the memory 2702 and from the memory 2702 to the host 2706. The memory controller 2704 also includes a data buffer 2716 in which data values may be temporarily stored for transmission over the data channel, under control of the data channel controller 2717, between the memory 2702 and the host 2706. The memory controller also includes an ECC block 2718 in which data for the ECC is maintained. For example, the ECC block 2718 may comprise data and program code to perform error correction operations. Such error correction operations are described, for example, in the U.S. patent application Ser. No. 12/275,190 entitled “Error Correcting Codes for Rank Modulation” by Anxiao Jiang et al. filed Nov. 20, 2008. The ECC block 2718 may contain parameters for the error correction code to be used for the memory 2702, such as programmed operations for translating between received symbols and error-corrected symbols, or the ECC block may contain lookup tables for codewords or other data, or the like. The memory controller 2704 performs the operations described above for decoding data and for encoding data.
The operations described above for programming the levels in a memory device and generating and storing a configuration data set, and for programming a data storage device, can be carried out by the operations depicted in the corresponding figures.
The host device 2706 may comprise a conventional computer apparatus and, as noted above, comprises the adaptive programmer system 602 when the levels are being determined and programmed. The conventional computer apparatus also may carry out the operations depicted in the corresponding figures.
In various embodiments, the computer system 2800 typically includes conventional computer components such as the one or more processors 2805. The file storage subsystem 2825 can include a variety of memory storage devices, such as a read only memory (ROM) 2845 and random access memory (RAM) 2850 in the memory subsystem 2820, and direct access storage devices such as disk drives. As noted, the direct access storage device may comprise an adaptive programming data storage device that operates as described herein.
The user interface output devices 2830 can comprise a variety of devices including flat panel displays, touchscreens, indicator lights, audio devices, force feedback devices, and the like. The user interface input devices 2835 can comprise a variety of devices including a computer mouse, trackball, trackpad, joystick, wireless remote, drawing tablet, voice command system, eye tracking system, and the like. The user interface input devices 2835 typically allow a user to select objects, icons, text and the like that appear on the user interface output devices 2830 via a command such as a click of a button or the like.
Embodiments of the communication subsystem 2840 typically include an Ethernet card, a modem (telephone, satellite, cable, ISDN), (asynchronous) digital subscriber line (DSL) unit, FireWire (IEEE 1394) interface, USB interface, and the like. For example, the communications subsystem 2840 may be coupled to communications networks and other external systems 2855 (e.g., a network such as a LAN or the Internet), to a FireWire bus, or the like. In other embodiments, the communications subsystem 2840 may be physically integrated on the motherboard of the computer system 2800, may be a software program, such as soft DSL, or the like.
The RAM 2850 and the file storage subsystem 2825 are examples of tangible media configured to store data such as error correction code parameters, codewords, and program instructions to perform the operations described herein when executed by the one or more processors, including executable computer code, human readable code, or the like. Other types of tangible media include program product media such as floppy disks, removable hard disks, optical storage media such as CDs, DVDs, and bar code media, semiconductor memories such as flash memories, read-only-memories (ROMs), battery-backed volatile memories, networked storage devices, and the like. The file storage subsystem 2825 includes reader subsystems that can transfer data from the program product media to the storage subsystem 2815 for operation and execution by the processors 2805.
The computer system 2800 may also include software that enables communications over a network (e.g., the communications network 2855) such as the DNS, TCP/IP, UDP/IP, and HTTP/HTTPS protocols, and the like. In alternative embodiments, other communications software and transfer protocols may also be used, for example IPX, or the like.
It will be readily apparent to one of ordinary skill in the art that many other hardware and software configurations are suitable for use with the present invention. For example, the computer system 2800 may be a desktop, portable, rack-mounted, or tablet configuration. Additionally, the computer system 2800 may be a series of networked computers. Further, a variety of microprocessors are contemplated and are suitable for the one or more processors 2805, such as CORE 2 DUO™ microprocessors from Intel Corporation of Santa Clara, Calif., USA; OPTERON™ or ATHLON XP™ microprocessors from Advanced Micro Devices, Inc. of Sunnyvale, Calif., USA; and the like. Further, a variety of operating systems are contemplated and are suitable, such as WINDOWS®, WINDOWS XP®, WINDOWS 7®, or the like from Microsoft Corporation of Redmond, Wash., USA, SOLARIS® from Sun Microsystems, Inc. of Santa Clara, Calif., USA, various Linux and UNIX distributions, and the like. In still other embodiments, the techniques described above may be implemented upon a chip or an auxiliary processing board (e.g., a programmable logic device or graphics processor unit).
The present invention can be implemented in the form of control logic in software or hardware or a combination of both. The control logic may be stored in an information storage medium as a plurality of instructions adapted to direct an information-processing device to perform a set of steps disclosed in embodiments of the present invention. Based on the disclosure and teachings provided herein, a person of ordinary skill in the art will appreciate other ways and/or methods to implement the present invention.
The adaptive programming scheme described herein can be implemented in a variety of systems for encoding and decoding data for transmission and storage. That is, codewords are received from a source over an information channel according to an adaptive programming scheme and are decoded into their corresponding data values, which are provided to a destination such as a memory or a processor; conversely, data values for storage or transmission are received from a source over an information channel and are encoded according to an adaptive programming scheme.
The operations of encoding and decoding data according to the adaptive programming scheme can be illustrated as in the corresponding figure.
The information values 2906 comprise the means for physically representing the data values and codewords. For example, the information values 2906 may represent charge levels of memory cells, such that multiple cells are configured to operate as a virtual cell in which charge levels of the cells determine a permutation of the adaptive programming code. Data values are received and encoded to permutations of an adaptive programming code and charge levels of cells are adjusted accordingly, and adaptive programming codewords are determined according to cell charge levels, from which a corresponding data value is determined. Alternatively, the information values 2906 may represent features of a transmitted signal, such as signal frequency, magnitude, or duration, such that the cells or bins are defined by the signal features and determine a permutation of the adaptive programming code. For example, rank ordering of detected cell frequency changes over time can determine a permutation, wherein the highest signal frequency denotes the highest cell level. Other schemes for physical representation of the cells will occur to those skilled in the art, in view of the description herein.
For information values 2906 in the case of cell charge levels, the source/destination 2910 comprises memory cells in which n memory cells provide n cell values whose charge levels define an adaptive programming permutation. For storing a codeword, the memory cells receive an encoded codeword and comprise a destination, and for reading a codeword, the memory cells provide a codeword for decoding and comprise a source. In the case of data transmission, the source/destination 2910 may comprise a transmitter/receiver that processes a signal with signal features such as frequency, magnitude, or duration that define cells or bins such that the signal features determine a permutation. That is, signal components comprising signal frequency, magnitude, or duration may be controlled and modulated by the transmitter such that a highest signal frequency component or greatest magnitude component or greatest time component corresponds to a highest cell level, followed by signal component values that correspond to other cell values and thereby define a permutation of the adaptive programming code. When the source/destination 2910 receives a codeword from the controller 2904, the source/destination comprises a transmitter of the device 2902 for sending an encoded signal. When the source/destination provides a codeword to the controller 2904 from a received signal, the source/destination comprises a receiver of the device for receiving an encoded signal. Those skilled in the art will understand how to suitably modulate signal components of the transmitted signal to define adaptive programming code permutations, in view of the description herein.
The embodiments discussed herein are illustrative of one or more examples of embodiments of the present invention. As these embodiments of the present invention are described with reference to illustrations, various modifications or adaptations of the methods and/or specific structures described may become apparent to those skilled in the art. All such modifications, adaptations, or variations that rely upon the teachings of the present invention, and through which these teachings have advanced the art, are considered to be within the scope of the present invention. Hence, the present descriptions and drawings should not be considered in a limiting sense, as it is understood that the present invention is in no way limited to only the embodiments illustrated.
This application is a non-provisional patent application of co-pending U.S. Provisional Application Ser. No. 61/384,646 filed on Sep. 20, 2010, titled “Information Representation and Coding for Next-Generation Nonvolatile Memories based on Phase-Change and Flash Technologies,” which is hereby expressly incorporated by reference in its entirety for all purposes.
This work was supported by Grant Nos. ECCS-0802107 and CCF-0747415 awarded by the National Science Foundation. The Government of the United States of America may have certain rights in this invention.