This invention relates generally to computer memory, and more specifically to encoding data into a constrained memory.
Phase change memory (PCM) cells exist in one of two states: amorphous, which is associated with a low electrical conductivity state, and crystalline, which is associated with a high electrical conductivity state. Sometimes a PCM cell is unable to switch from one state to the other, thus becoming stuck or having a constrained value that cannot be changed. The inability to switch from one state to another can make a memory cell unusable. More sophisticated uses of PCM cells store multiple bits per cell by writing multiple analog levels. PCM cells exhibit variations in the resistance ranges that they can achieve. These different ranges imply that the number of bits per cell that each cell can support can vary. One way of dealing with the different resistance ranges is to store a number of bits per cell very close to the average of the individual bits per cell, even when the decoder does not know the ranges of the memory cells. This technique may further employ stuck bit codes based on the Luby transform (LT). A drawback of using the LT is that it can be difficult to implement in hardware.
Accordingly, and while existing techniques for dealing with constrained value memory cells may be suitable for their intended purpose, there remains a need in the art for memory systems that overcome these drawbacks.
An embodiment is a method for writing data that includes receiving write data to be encoded into a write word, receiving constraints on symbol values associated with the write word, encoding the write data into the write word, and writing the write word to a memory. The encoding includes: representing the write data and the constraints as a first linear system in a first field of a first size; embedding the first linear system into a second linear system in a second field of a second size, the second size larger than the first size; solving the second linear system in the second field resulting in a solution; and collapsing the solution into the first field resulting in the write word, the write word satisfying the constraints on symbol values associated with the write word.
Another embodiment is a memory system that includes a memory including memory cells and constraints on symbol values stored in the memory cells, and an encoder in communication with the memory. The encoder is configured for receiving write data to be encoded into a write word that satisfies the constraints on the symbol values, encoding the write data into the write word, and transmitting the write word to the memory. The encoding includes: representing the write data and the constraints as a first linear system in a first field of a first size; embedding the first linear system into a second linear system in a second field of a second size, the second size larger than the first size; solving the second linear system in the second field resulting in a solution; and collapsing the solution into the first field resulting in the write word, the write word satisfying the constraints on symbol values in the write word.
Another embodiment is a computer implemented method for transmitting data that includes receiving data to be encoded into a word for transmission across a transmission medium, receiving constraints on symbol values associated with the word, encoding the data into the word, and outputting the word on the transmission medium. The encoding includes: representing the data and the constraints as a first linear system in a first field of a first size; embedding the first linear system into a second linear system in a second field of a second size, the second size larger than the first size; solving the second linear system in the second field resulting in a solution; and collapsing the solution into the first field resulting in the word, the word satisfying the constraints on symbol values associated with the word.
A further embodiment is a computer implemented method for solving linear systems including receiving a linear system of equations in a first field of a first size, embedding the linear system of equations into a second field of a second size, the second size larger than the first size, solving the linear system of equations in the second field resulting in a second field solution, collapsing the second field solution into the first field resulting in a first field solution, and outputting the first field solution.
Additional features and advantages are realized through the techniques of the present embodiment. Other embodiments and aspects are described herein and are considered a part of the claimed invention. For a better understanding of the invention with the advantages and features, refer to the description and to the drawings.
The subject matter that is regarded as the invention is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other features, and advantages of the invention are apparent from the following detailed description taken in conjunction with the accompanying drawings in which:
An embodiment of the present invention is a method for encoding data in memories with memory cells having constrained values (e.g., “stuck cells”). Algorithms for performing the encoding rely on earlier code constructions termed cyclic partitioned linear block codes (PLBCs). For the corresponding q-ary Bose-Chaudhuri-Hocquenghem (BCH) type codes for u constrained values in a codeword of length n, an embodiment of the encoding algorithm has complexity O((u log_q n)^2) F_q operations, which compares favorably to a generic approach based on Gaussian elimination. It is noted that a codeword is regarded as a sequence of symbols. The computational complexity improvements are realized by taking advantage of the algebraic structure of cyclic codes for codeword symbols having constrained symbol values. The algorithms are also applicable to cyclic codes used both for encoding with constrained symbol values and for decoding received words containing errors.
In addition to its meaning as an entry in a codeword, as used herein, the term “symbol” refers to data stored in one or more memory cells. In an embodiment, one memory cell may include two symbols, one symbol may span two memory cells, and/or one memory cell may include one symbol. More generally, one memory cell may include a plurality of symbols or one symbol may span a plurality of memory cells. When used to describe a communication system, one wire (e.g., on a bus or other transmission medium) may carry two symbols, two wires may combine to carry one symbol, and/or one wire may carry one symbol. More generally, one wire may carry a plurality of symbols or one symbol may span a plurality of wires. In addition, a symbol is regarded as an element of a Galois field.
As used herein, the term “constrained symbol value” generally refers to a constraint to be enforced in some positions of a word. For example, a memory may have positions that are stuck at one value, and the data is to be stored in a manner that conforms to the stuck values. Or it may be advantageous to store a particular value in a particular position of a memory in order to save power or increase the reliability of the stored word. Alternately, a memory cell may have a constraint on the range of reachable values, and a word to be written must be found in which the level assigned to each cell fits within its achievable range; this can be handled by solving one or more instances of encoding with codewords having constrained values. In a different application, data is sent through a link in which one of the wires is stuck at a value, and hence the data needs to be encoded for transmission satisfying a suitable constraint on the values of specific positions of the encoded data. Or alternately, one may find it convenient to fix a value of the transmission pattern so as to achieve improved power or reliability. A value may be one bit or multiple bits.
In an embodiment, the memory 102 is modeled as a medium that can store a vector of length n with elements in a finite field F_q in n physical entities referred to herein as “memory cells”. Occasionally, a memory cell may be constrained to a value, and the write word written to the memory 102 must conform to the constrained values that the memory cells have. It is further assumed that the information about the identity and content of constrained value memory cells is known only at encoding time. Typically, this information is obtained by a cell state sensing procedure (e.g., performed by the tester 104) that could be time consuming. Learning the constrained cell information during the read process 114 increases the latency during a read and may involve the destruction of the written contents due to the sensing procedure. Both of these outcomes are undesirable, hence justifying the importance of the assumption that the identity and content of constrained value memory cells are known only at encoding time.
Problem definition. Let F_q denote the finite field with q elements, where q is a power of a prime. A message m ∈ F_q^k is to be encoded and written into n memory cells, each of which can hold a value in F_q. Some of the memory cells are “stuck” or have a “constrained value” in that they hold a value that cannot be changed. It is assumed that there are u constrained cells; the indices of these cells are given by Ψ = {ψ_0, . . . , ψ_{u−1}} (ψ_i ∈ {0, 1, . . . , n−1}) and the values to which they are constrained are given by Φ = {φ_0, . . . , φ_{u−1}} (φ_i ∈ F_q). The locations and constrained values of the constrained cells are known only during the write process 112.
In an embodiment, the encoder 106 performs a mapping that accepts m, Ψ, Φ and returns x ∈ F_q^n, a vector to be written to memory that satisfies:

x_{ψ_i} = φ_i, i ∈ {0, 1, . . . , u−1}  (1)
In an embodiment, the decoder 108 performs a mapping that accepts x and recovers m. In another embodiment, the decoder 108 also has the ability to correct for errors in the standard sense of error correction for memories. Although most of the description herein is focused on the problem of encoding for constrained values, the embodiments are easily extendable to the problem of encoding for error bits.
An embodiment uses linear codes defined through a matrix H_M with dimensions k×n and with entries in F_q. This matrix is referred to herein as “the message retrieval matrix”, since m is obtained from x through the operation:

m = H_M x  (2)
The encoding problem then is to find x satisfying equation (2) with the condition that equation (1) is satisfied. In an embodiment it is assumed that H_M has full row rank, as otherwise there would be instances of the encoding problem that cannot be solved. In an embodiment, the adjustment code is defined as the linear space:
C_A = {y ∈ F_q^n : H_M y = 0}.  (3)
The encoding is then decomposed into two steps: Step One: find a vector x′ such that H_M x′ = m; and Step Two: find a vector d ∈ C_A such that x = x′ + d satisfies equation (1). Because the matrix H_M is known at code design time, implementing Step One requires about O(k^2) F_q multiplication and addition operations; as a matter of fact, the Galois field multiplications involved have one of their operands constant, significantly reducing their implementation complexity.
Let G be an n×(n−k) generator matrix for C_A. Step Two will always succeed if every choice of u rows of G is linearly independent. From this viewpoint, solving Step Two is equivalent to finding a vector ξ ∈ F_q^{n−k} such that:

(d)_{ψ_i} = (Gξ)_{ψ_i} = φ_i − (x′)_{ψ_i}, i ∈ {0, 1, . . . , u−1}  (4)
It will be appreciated that this is not the only possible encoding method. In another embodiment, equation (2) is solved directly as a system of linear equations in which the free variables are the memory cells that are not stuck. The problem size in that embodiment is related to the number of free variables, which in turn is related to the number of message bits. In contrast, in the embodiment where the decomposition into two steps is utilized, the problem size is related to the number of stuck symbols. Thus, the embodiments described herein provide the most benefit when there are fewer stuck bits than message bits. This is an important case because it is most compatible with the mid-to-high coding rates commonly encountered in applications.
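By way of a non-limiting illustration, the following sketch carries out the two-step decomposition over F_2 using plain Gaussian elimination rather than the cyclic-code algorithm described below; the matrix H_M, the message m, the stuck positions, the stuck values, and the function names in the sketch are illustrative choices only.

```python
# Illustrative sketch (not the cyclic-code algorithm described below) of the
# two-step encoding over F_2 using plain Gaussian elimination. The matrix H_M,
# message m, stuck positions psi, and stuck values phi are arbitrary examples.

def gf2_solve(A, b):
    """Return (particular solution, nullspace basis) of A x = b over F_2."""
    A = [row[:] for row in A]
    b = b[:]
    rows, cols = len(A), len(A[0])
    pivots, r = [], 0
    for c in range(cols):
        piv = next((i for i in range(r, rows) if A[i][c]), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        b[r], b[piv] = b[piv], b[r]
        for i in range(rows):
            if i != r and A[i][c]:
                A[i] = [x ^ y for x, y in zip(A[i], A[r])]
                b[i] ^= b[r]
        pivots.append(c)
        r += 1
        if r == rows:
            break
    if any(b[i] for i in range(r, rows)):
        return None, []                           # inconsistent system
    x = [0] * cols
    for i, c in enumerate(pivots):
        x[c] = b[i]
    null_basis = []
    for f in (c for c in range(cols) if c not in pivots):
        v = [0] * cols
        v[f] = 1
        for i, c in enumerate(pivots):
            v[c] = A[i][f]
        null_basis.append(v)
    return x, null_basis

def gf2_matvec(A, v):
    return [sum(a & x for a, x in zip(row, v)) % 2 for row in A]

# Example instance: k = 2 message bits, n = 5 cells, u = 2 stuck cells.
H_M = [[1, 0, 1, 0, 1],
       [0, 1, 1, 1, 0]]
m = [1, 0]
psi, phi = [1, 4], [1, 0]                         # stuck positions and stuck values

# Step One: any x' with H_M x' = m; the nullspace basis spans C_A = {y : H_M y = 0}.
x_prime, null_basis = gf2_solve(H_M, m)
G = [[v[row] for v in null_basis] for row in range(len(H_M[0]))]

# Step Two: find xi with (G xi)_{psi_i} = phi_i - x'_{psi_i}, then d = G xi.
rhs = [(phi[i] - x_prime[p]) % 2 for i, p in enumerate(psi)]
xi, _ = gf2_solve([G[p] for p in psi], rhs)
d = gf2_matvec(G, xi)
x = [(a + b) % 2 for a, b in zip(x_prime, d)]

assert gf2_matvec(H_M, x) == m
assert all(x[p] == phi[i] for i, p in enumerate(psi))
print("write word x =", x)
```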
Preliminaries. Extension fields from finite field theory are relied on herein to describe the cyclic codes for stuck bits. For some integer μ ≥ 1, the degree-μ extension of F_q is denoted by F_{q^μ}.
Every element a ∈ F_{q^μ} can be written as a = (a)_0 + (a)_1 η + . . . + (a)_{μ−1} η^{μ−1}, where {1, η, . . . , η^{μ−1}} is a basis for F_{q^μ} over F_q and (a)_i ∈ F_q denotes the i-th coordinate of a in this basis. The dot product between two elements a, b ∈ F_{q^μ} is defined as <a, b> = Σ_{i=0}^{μ−1} (a)_i (b)_i.
Let ξ ∈ F_{q^μ}. The companion of ξ is defined as the vector

[(ξ)_0 (ξη)_0 . . . (ξη^{μ−1})_0]^T ∈ F_q^μ.
From this definition, it can be easily deduced that if a, b are the companions of α, β, then a + b is the companion of α + β, and if ζ ∈ F_q then ζa is the companion of ζα. As a consequence, the operator that returns the companion of an element is linear, and thus there exists a μ×μ matrix with elements in F_q, referred to as Λ, such that the matrix×vector operation Λξ results in the companion of ξ for any ξ ∈ F_{q^μ} (with ξ identified with its coordinate vector).
Lemma A. For any a, b ∈ F_{q^μ}, (ab)_0 = <a, Λb>.
Proof of Lemma A. Write a = (a)_0 + (a)_1 η + . . . + (a)_{μ−1} η^{μ−1}, so that ab = (a)_0 b + (a)_1 bη + . . . + (a)_{μ−1} bη^{μ−1} and therefore:

(ab)_0 = (a)_0 (b)_0 + (a)_1 (bη)_0 + . . . + (a)_{μ−1} (bη^{μ−1})_0 = <a, Λb>.
Similarly, for a, b ∈ F_{q^μ} and ζ ∈ F_q, it holds that (a + b)^q = a^q + b^q and (ζa)^q = ζ a^q, so raising an element to the q-th power is also a linear operation over F_q. Hence there exists a μ×μ matrix with elements in F_q, referred to as S, such that Sξ = ξ^q for any ξ ∈ F_{q^μ}.
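As a further illustration, the following sketch numerically checks the linearity of the companion operator and the statement of Lemma A in the small field F_16 = F_2[x]/(x^4 + x + 1), taking η = x and μ = 4; the choice of field, basis, and helper names is illustrative only.

```python
# Illustrative numerical check (not part of the described embodiments) of the
# companion operator and Lemma A in the small field F_16 = F_2[x]/(x^4 + x + 1),
# with eta = x and mu = 4; all names and values here are illustrative only.

MU, MOD, ETA = 4, 0b10011, 0b0010

def gf_mul(a, b):                       # multiplication in F_16
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= MOD
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def coord(a, i):                        # i-th coordinate w.r.t. the basis {1, eta, eta^2, eta^3}
    return (a >> i) & 1

def companion(xi):                      # [(xi)_0, (xi eta)_0, (xi eta^2)_0, (xi eta^3)_0]
    return [coord(gf_mul(xi, gf_pow(ETA, i)), 0) for i in range(MU)]

# The matrix Lambda with Lambda[i][j] = (eta^(i+j))_0 realizes the companion operator
LAM = [[coord(gf_pow(ETA, i + j), 0) for j in range(MU)] for i in range(MU)]

for a in range(16):
    # linearity: companion(a) equals Lambda times the coordinate vector of a
    assert companion(a) == [sum(LAM[i][j] & coord(a, j) for j in range(MU)) % 2
                            for i in range(MU)]
    for b in range(16):
        # Lemma A: (ab)_0 equals the dot product <a, Lambda b>
        assert coord(gf_mul(a, b), 0) == \
            sum(coord(a, i) & companion(b)[i] for i in range(MU)) % 2
print("companion-operator linearity and Lemma A verified over F_16")
```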
The following discussion assumes familiarity with basic concepts of the theory of linear and cyclic codes, such as duality and generator/check polynomials. In an embodiment described herein, it is assumed that n is a factor of the integer q^μ − 1. Let ω ∈ F_{q^μ} be such that ω^n = 1 and there is no positive integer j < n with the same property; such an element is known to exist. The Fourier transform V of a vector v ∈ F_q^n is defined by V_j = Σ_{i=0}^{n−1} v_i ω^{ij}, j ∈ {0, 1, . . . , n−1}.
The inverse Fourier transform of a vector V ∈ F_{q^μ}^n is given by v_i = n^{−1} Σ_{j=0}^{n−1} V_j ω^{−ij}, i ∈ {0, 1, . . . , n−1}, where n = 1 + 1 + . . . + 1 (n times) computed in the field. It is known that any vector v ∈ F_q^n has a Fourier transform, V, that satisfies the q-ary conjugacy constraint:
V_j^q = V_{jq mod n}, 0 ≤ j < n.  (5)
Let the q-ary conjugacy classes be defined by Γ_j = {j, jq, jq^2, . . . , jq^{l−1}} mod n for j ∈ {0, 1, . . . , n−1}, respectively, where l is the smallest positive integer satisfying jq^l = j, modulo n. Due to equation (5), if V_j = 0, then V_i = 0 for every i ∈ Γ_j. For a set W ⊂ {0, 1, . . . , n−1}, define Γ_W = ∪_{i∈W} Γ_i.
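For illustration, the following sketch computes the conjugacy classes Γ_j and the set Γ_W for the example parameters q = 2 and n = 15, which are illustrative values only.

```python
# Illustrative computation of the q-ary conjugacy classes Gamma_j and of Gamma_W,
# shown for the example parameters q = 2, n = 15; values are illustrative only.

def conjugacy_class(j, q, n):
    cls, e = set(), j % n
    while e not in cls:
        cls.add(e)
        e = (e * q) % n
    return cls

def gamma(W, q, n):                     # Gamma_W = union of Gamma_i over i in W
    out = set()
    for i in W:
        out |= conjugacy_class(i, q, n)
    return out

q, n = 2, 15
print({j: sorted(conjugacy_class(j, q, n)) for j in range(n)})
print(sorted(gamma({1, 3}, q, n)))      # Gamma_{1,3} = Gamma_1 union Gamma_3
```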
An example. Before describing the general algorithm for cyclic codes, the algorithm will be examined in the context of a simple problem, from which most of the essential insight can be derived. It is assumed that q = 2 in this example. Let a_i = ω^i, i ∈ {0, . . . , n−1}. For this example, the problem considered is the problem of encoding for a code whose adjustment code (ref. (3)) has an n×μ generator matrix over F_2 whose i-th row is the coordinate vector of a_i,
which alternately may be regarded as the transpose of the check matrix of a Hamming code. Any two rows of this matrix are linearly independent and therefore this is a code for up to two stuck bits; in this example assume that u = 2. Recall that ψ_0, ψ_1 are the two distinct indices of bits that are stuck. In view of equation (4), the problem of encoding this code can be reduced to finding c ∈ F_2^μ such that:

<c, a_{ψ_0}> = s_0, <c, a_{ψ_1}> = s_1  (6)

where s_k = φ_k − (x′)_{ψ_k} is the adjustment required at stuck position ψ_k.
To solve this problem, the following problem is solved first:

a_{ψ_0} z_0 + a_{ψ_0}^2 z_1 = w_0, a_{ψ_1} z_0 + a_{ψ_1}^2 z_1 = w_1  (7)

where z_0, z_1 ∈ F_{2^μ} are the unknowns and the right hand sides are given by

w_k = [s_k 0 . . . 0]^T  (8)
for k = 0, 1. The problem described by equation (7) can be solved because the determinant of the associated matrix, which is the product of an invertible diagonal matrix and a Vandermonde matrix, is nonzero. Next, it is shown that a solution for the problem stated in equation (6) is given by:
c = Λz_0 + S^T Λz_1  (9)
From equation (9), it can be deduced that:

<c, a_{ψ_0}> = <Λz_0, a_{ψ_0}> + <S^T Λz_1, a_{ψ_0}> = <a_{ψ_0}, Λz_0> + <S a_{ψ_0}, Λz_1> =_(a) (a_{ψ_0} z_0)_0 + (a_{ψ_0}^2 z_1)_0 = (a_{ψ_0} z_0 + a_{ψ_0}^2 z_1)_0 =_(b) (w_0)_0 = s_0

where (a) follows from Lemma A and (b) follows from equation (7); clearly an identical development may be done for the index ψ_1 as well. Thus c indeed solves equation (6) and thus, by extension, the encoding problem at hand.
Some general remarks, which will help in elucidating generalizations of this technique, follow. The first step in the solution is an embedding in a larger field (in this case, given by equation (7)). The problem in the larger field is, by design, a problem with a known efficient solution, in this case a linear system of equations involving a Vandermonde matrix. The second step in the solution is a collapsing of the solution in the larger field into a solution of the problem in the smaller field. This is implemented by equation (9). The right hand side in equation (7) is arbitrary as long as it satisfies (w_i)_0 = s_i for i = 0, 1. Different choices for it may result in different solutions to equation (6). The Vandermonde problem solved in equation (7) is a polynomial fitting, and thus in more complex problems Lagrange's interpolation formula may be used to solve it.
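The following self-contained sketch illustrates these remarks for the two-stuck-bit example: it performs the embedding of equation (7), solves the resulting 2×2 system in the larger field, collapses the solution via equation (9), and checks equation (6). The field F_16 = F_2[x]/(x^4 + x + 1) (so that μ = 4 and n = 15), the stuck positions, and the adjustment bits are illustrative choices only.

```python
# Illustrative sketch of the two-stuck-bit example: embed (equation (7)),
# solve in the larger field, collapse (equation (9)), and check equation (6).
# Field: F_16 = F_2[x]/(x^4 + x + 1), eta = omega = x, mu = 4, n = 15.
# The stuck positions psi and adjustment bits s below are arbitrary choices.

MU, MOD, ETA, OMEGA = 4, 0b10011, 0b0010, 0b0010

def gf_mul(a, b):                       # multiplication in F_16
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b10000:
            a ^= MOD
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    return next(y for y in range(1, 16) if gf_mul(a, y) == 1)

def coord(a, i):                        # i-th coordinate in the basis {1, eta, eta^2, eta^3}
    return (a >> i) & 1

def coords(a):
    return [coord(a, i) for i in range(MU)]

# Lambda[i][j] = (eta^(i+j))_0 and S[i][j] = (eta^(2j))_i, per the definitions above
LAM = [[coord(gf_pow(ETA, i + j), 0) for j in range(MU)] for i in range(MU)]
S = [[coord(gf_pow(ETA, 2 * j), i) for j in range(MU)] for i in range(MU)]

def matvec(M, v):                       # matrix-vector product over F_2
    return [sum(M[i][j] & v[j] for j in range(MU)) % 2 for i in range(MU)]

def transpose(M):
    return [[M[j][i] for j in range(MU)] for i in range(MU)]

psi = (3, 11)                           # stuck positions psi_0, psi_1
s = (1, 0)                              # required adjustment bits s_0, s_1

# Embedding, equation (7): a_k z_0 + a_k^2 z_1 = w_k with (w_k)_0 = s_k
a0, a1 = gf_pow(OMEGA, psi[0]), gf_pow(OMEGA, psi[1])
w0, w1 = s                              # the field element s_k has coordinates [s_k 0 0 0]
det = gf_mul(a0, gf_mul(a1, a1)) ^ gf_mul(a1, gf_mul(a0, a0))
inv = gf_inv(det)
z0 = gf_mul(inv, gf_mul(w0, gf_mul(a1, a1)) ^ gf_mul(w1, gf_mul(a0, a0)))
z1 = gf_mul(inv, gf_mul(a0, w1) ^ gf_mul(a1, w0))

# Collapsing, equation (9): c = Lambda z_0 + S^T Lambda z_1 over F_2
c = [(x + y) % 2 for x, y in zip(matvec(LAM, coords(z0)),
                                 matvec(transpose(S), matvec(LAM, coords(z1))))]

# Check equation (6): <c, a_{psi_k}> = s_k at both stuck positions
for k in range(2):
    ak = gf_pow(OMEGA, psi[k])
    assert sum(ci & bi for ci, bi in zip(c, coords(ak))) % 2 == s[k]
print("collapsed solution c =", c)
```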
Cyclic codes for stuck bits. In an embodiment, cyclic codes for stuck bits are obtained by using a good cyclic code (in the standard sense of minimum distance) for the dual of the adjustment code; in fact, the minimum distance of this dual code minus one is a lower bound on the number of stuck bits that can be encoded. A cyclic code for stuck bits can be defined through a frequency split. Let W_S, W_M ⊂ {0, 1, . . . , n−1} be disjoint sets such that:

W_S ∪ W_M = {0, 1, . . . , n−1}  (10)
and such that Γ_{W_S} = W_S, Γ_{W_M} = W_M (it can be shown that only one of these conditions is actually necessary, since equation (10) will imply the other one). The frequency split is associated with the polynomials g(ξ) = Π_{j∈W_M} (ξ − ω^j) and h(ξ) = Π_{j∈W_S} (ξ − ω^j), which satisfy g(ξ)h(ξ) = ξ^n − 1.
It is known that the coefficients of these polynomials are in F_q; this is due to the conjugacy-closure conditions Γ_{W_S} = W_S and Γ_{W_M} = W_M. By definition, the adjustment code C_A for this family of cyclic codes is generated by g(ξ). For H_M, choose any parity check matrix for C_A.
The structure of the adjustment code. This subsection is devoted to finding a useful counterpart to equation (6) in the case of general cyclic codes. Instead of using a generator matrix for the adjustment code, a matrix will be constructed that will, in general, span only a subset of the adjustment code, and may also have redundant columns; the latter is done mostly for notational convenience. The notation M will be used for this matrix. This construction will be most efficient when the dual of the adjustment code is a good BCH code. A generator matrix for C_A can be found by identifying it with a parity check matrix for the code dual to C_A, which is labeled C_A⊥ ⊂ F_q^n. Since C_A is generated by g(ξ), it is known that C_A⊥ is generated by
h_0^{−1} ξ^{n−k} h(ξ^{−1}) = h_0^{−1} Π_{j∈W_S} (1 − ω^j ξ),

where h_0 denotes the constant coefficient of h(ξ).
The zeros of the generator polynomial of a code can be used to construct a parity check matrix for that code. Let n−W_S be the set {n−j : j ∈ W_S}. Since ω^{−j} = ω^{n−j}, a way of describing the code C_A⊥ is through the parity check equations C_A⊥ = {v ∈ F_q^n : V_j = 0, j ∈ n−W_S}.
Define run(•) to be the function that accepts a subset of {0, 1, . . . , n−1} and returns the largest run length of consecutive integers (modulo n) within that subset. It is assumed that u = run(W_S). Note that run(W_S) = run(n−W_S).
Let R ⊂ n−W_S be a set with u consecutive integers, modulo n. Also let r_0 ∈ {0, . . . , n−1} be such that R = {r_0, r_1, . . . , r_{u−1}} = {r_0, r_0+1, . . . , r_0+u−1} mod n.
Let L = Γ_R be the q-ary conjugacy closure of R, and write L = {l_0, l_1, . . . , l_{|L|−1}}. Note that {v ∈ F_q^n : V_j = 0, j ∈ L} = {v ∈ F_q^n : V_j = 0, j ∈ R} and that C_A⊥ is a subset of either of these sets. Define the n×(|L|μ) matrix M with entries in F_q by
[M]_{m, iμ+j} = (ω^{m l_i} η^j)_0

for m ∈ {0, . . . , n−1}, i ∈ {0, . . . , |L|−1}, j ∈ {0, . . . , μ−1}. It is noted that M will in general have redundant columns. Therefore {v ∈ F_q^n : V_j = 0, j ∈ L} = {v ∈ F_q^n : M^T v = 0}.
By construction: {v ∈ F_q^n : v = Mξ for some ξ ∈ F_q^{μ|L|}} ⊂ C_A.
The set of adjustment codewords on the left-hand side of this inclusion is employed to perform the encoding task.
At block 208, the encoding problem with constrained symbol values is embedded into a linear system in the larger field, as described next.
Embedding. The first step of the encoding algorithm is the embedding in a recognizable computational problem in a larger field, in this case given by a Vandermonde system of equations. The system of equations to be solved is

Σ_{i=0}^{u−1} ω^{ψ_k r_i} z_i = w_k, i.e., Bz = w with [B]_{k,i} = ω^{ψ_k r_i},

for k ∈ {0, . . . , u−1}, where the unknowns z_0, . . . , z_{u−1} are in F_{q^μ} and the right hand sides w_k ∈ F_{q^μ} satisfy (w_k)_0 = s_k, with s_k = φ_k − (x′)_{ψ_k} denoting the adjustment required at stuck position ψ_k. This is not exactly a Vandermonde system, but since B can be written as the product of a diagonal invertible matrix times a Vandermonde matrix, it is referred to herein as the Vandermonde problem. A u×u Vandermonde matrix has an inverse that can be written analytically, involving no more than O(u^2) operations in F_{q^μ}.
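By way of illustration, the following sketch shows one standard way of solving a u×u Vandermonde system with O(u^2) arithmetic operations, namely Lagrange interpolation; it is shown over the rational numbers for brevity, the identical procedure applying over F_{q^μ}. The function name and the sample points are illustrative only.

```python
# Illustrative sketch: solving V z = w with V[k][i] = x_k**i in O(u^2) operations
# via Lagrange interpolation. Shown over the rationals for brevity; the same
# procedure applies over F_{q^mu} in the algorithm described above.
from fractions import Fraction

def vandermonde_solve(xs, ws):
    """Coefficients z of the degree < u polynomial p with p(x_k) = w_k."""
    u = len(xs)
    # master(x) = prod_k (x - x_k), coefficients stored from low to high degree
    master = [Fraction(1)]
    for xk in xs:
        new = [Fraction(0)] * (len(master) + 1)
        for i, coef in enumerate(master):
            new[i + 1] += coef          # coef * x
            new[i] -= coef * xk         # coef * (-x_k)
        master = new
    z = [Fraction(0)] * u
    for xk, wk in zip(xs, ws):
        # q(x) = master(x) / (x - xk) by synthetic division (degree u - 1)
        q = [Fraction(0)] * u
        q[u - 1] = master[u]
        for i in range(u - 2, -1, -1):
            q[i] = master[i + 1] + xk * q[i + 1]
        denom = sum(c * xk**i for i, c in enumerate(q))   # = prod_{j != k} (xk - xj)
        scale = wk / denom
        for i in range(u):
            z[i] += scale * q[i]
    return z

# Tiny usage check: interpolate three points and verify V z = w.
xs = [Fraction(v) for v in (2, 3, 5)]
ws = [Fraction(v) for v in (1, 4, 9)]
z = vandermonde_solve(xs, ws)
for xk, wk in zip(xs, ws):
    assert sum(c * xk**i for i, c in enumerate(z)) == wk
print("z =", z)
```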
At block 212, the solution of the Vandermonde problem is collapsed from the larger field into the smaller field to obtain the adjustment codeword.
A method for collapsing according to an embodiment follows.
Theorem 1. For i ∈ {0, . . . , |L|−1}, let c_i ∈ F_{q^μ} be obtained from the solution z_0, . . . , z_{u−1} of the Vandermonde problem by the collapsing construction (generalizing equation (9)), and define ξ_{iμ+j} = (c_i)_j for i ∈ {0, . . . , |L|−1}, j ∈ {0, . . . , μ−1}. Then for k ∈ {0, . . . , u−1}, [Mξ]_{ψ_k} = s_k.
Proof of Theorem 1. The proof is a generalization of the ideas in the above example. The quantity (Mξ)_{ψ_k} can be rewritten as Σ_{i=0}^{|L|−1} (ω^{ψ_k l_i} c_i)_0 and developed through a chain of equalities in which (a) follows from the fact that Sξ = ξ^q, (b) follows from Lemma A, (c) follows from equation (16), and (d) follows from equations (13) and (15).
In order to do an asymptotic computational complexity analysis, it is necessary to specify how the problem at hand will be scaled. A natural parameter to scale is μ, which dictates the codeword length via the relation log_q(n+1) = μ. As one scales μ (and hence n), the number of stuck symbols u to be encoded should also be scaled.
Following is a comparison of the computational complexity of the proposed algorithm with that of standard Gaussian elimination. As every F_{q^μ} operation can be implemented with O(μ^2) operations in F_q, the algorithm described herein requires O((uμ)^2) = O((u log_q n)^2) F_q operations, whereas solving directly for the u constrained positions by Gaussian elimination requires on the order of u^3 F_q operations.
As long as u grows faster than μ^2 (u/μ^2 → ∞), the encoding algorithm presented herein will outperform Gaussian elimination. It is important to note that the class of codes considered herein does not possess asymptotics that allow u to grow linearly with n, for the same reason that BCH codes, as a family, have a best case “zero” asymptotic coding rate. Thus, it is important to analyze how fast one can, in principle, have u grow as a function of μ. This concern is addressed using an approximate analysis in order to preserve the readability of the text. Using a rather loose bound, every stuck symbol costs at most μ symbols worth of redundancy. Now, let α ∈ (0,1) be a parameter that is kept constant as the problem is scaled. After investing ⌊αn/μ⌋ redundant symbols, it is possible to encode with at least ⌊αn/μ^2⌋ stuck symbols. Under this scaling law, it can be seen that u can grow significantly faster than μ^2. Thus, this analysis concludes that the algorithm presented herein outperforms Gaussian elimination in a large set of circumstances.
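For illustration, and under the operation-count estimates given above, the scaling comparison may be summarized as follows, where the final limit is taken as μ grows:

```latex
% Illustrative scaling comparison under the estimates stated above.
\frac{\text{Gaussian elimination cost}}{\text{algorithm cost}}
  \approx \frac{u^{3}}{(u\mu)^{2}} = \frac{u}{\mu^{2}},
\qquad
u = \left\lfloor \frac{\alpha n}{\mu^{2}} \right\rfloor,\; n = q^{\mu}-1
\;\Longrightarrow\;
\frac{u}{\mu^{2}} \approx \frac{\alpha\,(q^{\mu}-1)}{\mu^{4}} \longrightarrow \infty .
```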
Technical effects and benefits include the ability to store more data in memory (higher density) by utilizing memory cells or symbols having constrained values.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.