ENCODING DATA INTO CONSTRAINED MEMORY

Abstract
Encoding data into constrained memory using a method for writing data that includes receiving write data to be encoded into a write word, receiving constraints on symbol values associated with the write word, encoding the write data into the write word, and writing the write word to a memory. The encoding includes: representing the write data and the constraints as a first linear system in a first field of a first size; embedding the first linear system into a second linear system in a second field of a second size, the second size larger than the first size; solving the second linear system in the second field resulting in a solution; and collapsing the solution into the first field resulting in the write word, the write word satisfying the constraints on symbol values associated with the write word.
Description
BACKGROUND

This invention relates generally to computer memory, and more specifically to encoding data into a constrained memory.


Phase change memory (PCM) cells exist in one of two states: amorphous, which is associated with a low electrical conductivity state, and crystalline, which is associated with a high electrical conductivity state. Sometimes a PCM cell is unable to switch from one state to another, hence becoming stuck or having a constrained value that cannot be changed. The inability to switch from one state to another can make a memory cell unusable. More sophisticated uses of PCM cells store multiple bits per cell by writing multiple analog levels. PCM cells exhibit variations in what resistance ranges they can achieve. These different ranges imply that the number of bits that each cell can support can vary. One way of dealing with the different resistance ranges is to store a number of bits per cell very close to the average of the individual cell capacities, even when the decoder does not know the ranges of the memory cells. This technique may further employ stuck bit codes based on the Luby transform (LT). A drawback of using the LT is that it can be difficult to implement in hardware.


Accordingly, and while existing techniques for dealing with constrained value memory cells may be suitable for their intended purpose, there remains a need in the art for memory systems that overcome these drawbacks.


SUMMARY

An embodiment is a method for writing data that includes receiving write data to be encoded into a write word, receiving constraints on symbol values associated with the write word, encoding the write data into the write word, and writing the write word to a memory. The encoding includes: representing the write data and the constraints as a first linear system in a first field of a first size; embedding the first linear system into a second linear system in a second field of a second size, the second size larger than the first size; solving the second linear system in the second field resulting in a solution; and collapsing the solution into the first field resulting in the write word, the write word satisfying the constraints on symbol values associated with the write word.


Another embodiment is a memory system that includes a memory including memory cells and constraints on symbol values stored in the memory cells, and an encoder in communication with the memory. The encoder is configured for receiving write data to be encoded into a write word that satisfies the constraints on the symbol values, encoding the write data into the write word, and transmitting the write word to the memory. The encoding includes: representing the write data and the constraints as a first linear system in a first field of a first size; embedding the first linear system into a second linear system in a second field of a second size, the second size larger than the first size; solving the second linear system in the second field resulting in a solution; and collapsing the solution into the first field resulting in the write word, the write word satisfying the constraints on symbol values in the write word.


Another embodiment is a computer implemented method for transmitting data that includes receiving data to be encoded into a word for transmission across a transmission medium, receiving constraints on symbol values associated with the word, encoding the data into the word, and outputting the word on the transmission medium. The encoding includes: representing the data and the constraints as a first linear system in a first field of a first size; embedding the first linear system into a second linear system in a second field of a second size, the second size larger than the first size; solving the second linear system in the second field resulting in a solution; and collapsing the solution into the first field resulting in the word, the word satisfying the constraints on symbol values associated with the word.


A further embodiment is a computer implemented method for solving linear systems including receiving a linear system of equations in a first field of a first size, embedding the linear system of equations into a second field of a second size, the second size larger than the first size, solving the linear system of equations in the second field resulting in a second field solution, collapsing the second field solution into the first field resulting in a first field solution, and outputting the first field solution.


Additional features and advantages are realized through the techniques of the present embodiment. Other embodiments and aspects are described herein and are considered a part of the claimed invention. For a better understanding of the invention with the advantages and features, refer to the description and to the drawings.





BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS

The subject matter that is regarded as the invention is particularly pointed out and distinctly claimed in the claims at the conclusion of the specification. The foregoing and other features, and advantages of the invention are apparent from the following detailed description taken in conjunction with the accompanying drawings in which:



FIG. 1 illustrates a block diagram of a system for storing data in memory in accordance with an embodiment;



FIG. 2 illustrates a process for writing data that is implemented by an embodiment;



FIG. 3 illustrates an example of the relationship between matrices that are utilized by an embodiment;



FIG. 4 illustrates a block diagram of a system for encoding and decoding transmitted data in accordance with an embodiment; and



FIG. 5 illustrates a process for solving a linear system of equations that is implemented by an embodiment.





DETAILED DESCRIPTION

An embodiment of the present invention is a method for encoding data in memories with memory cells having constrained values (e.g., “stuck cells”). Algorithms for performing the encoding rely on earlier code constructions termed cyclic partitioned linear block codes (PLBCs). For the corresponding q-ary Bose Chaudhuri Hocquenghem (BCH) type codes for u constrained values in a codeword of length n, an embodiment of the encoding algorithm has complexity O((u logq n)2) Fq operations, which compares favorably to a generic approach based on Gaussian elimination. It is noted that a codeword is regarded as a sequence of symbols. The computational complexity improvements are realized by taking advantage of the algebraic structure of cyclic codes for codeword symbols having constrained symbol values. The algorithms are also applicable to cyclic codes used both for encoding with constrained symbol values and for decoding received words containing errors.


In addition to its meaning as an entry in a codeword, as used herein, the term “symbol” refers to data stored in one or more memory cells. In an embodiment, one memory cell may include two symbols, one symbol may span two memory cells, and/or one memory cell may include one symbol. More generally, one memory cell may include a plurality of symbols or one symbol may span a plurality of cells. When used to describe a communication system, one wire (e.g., on a bus or other transmission medium) may carry two symbols, two wires may combine to carry one symbol, and/or one wire may carry one symbol. More generally, one wire may carry a plurality of symbols or one symbol may span a plurality of wires. In addition, a symbol is regarded as an element of a Galois field.


As used herein, the term “constrained symbol value” generally refers to a constraint to be enforced in some positions of a word. For example, a memory may have positions that are stuck at one value, and the data is to be stored in a manner that conforms to the stuck values. Or it may be advantageous to store a particular value in a particular position of a memory in order to save power or increase the reliability of the stored word. Alternately, a memory cell may have a constraint on the range of reachable values, and a word to be written must be found in which the level assigned to each cell fits inside of its available range; this can be handled by solving one or more instances of encoding with codewords having constrained values. In a different application, data is sent through a link in which one of the wires is stuck at a value, and hence the data needs to be encoded for transmission satisfying a suitable constraint on the values of specific positions of the encoded data. Or alternately, one may find it convenient to fix a value of the transmission pattern so as to achieve improved power or reliability. A value may be one bit or multiple bits.



FIG. 1 illustrates a block diagram of a system for storing data in a memory in accordance with an embodiment. The system depicted in FIG. 1 includes memory 102 made up of blocks of memory cells, a tester 104 (e.g., for determining physical characteristics of the memory cells), an encoder 106 for receiving write data and generating a write word for storage in the memory cells, and a decoder 108 for receiving a read word from the memory 102 and generating read data. An embodiment of the write process 112 utilizes the tester 104, the encoder 106, and the shared codes 110 to generate the write word to be written to cells in the memory 102. Another embodiment of the system does not include the tester 104 and the write process 112 utilizes the encoder 106 and the shared codes 110 to generate the write word to be written to cells in the memory 102. An exemplary read process 114 utilizes the decoder 108 and the shared codes 110 to translate the read word from the memory 102 into the read data.


In an embodiment, the memory 102 is modeled as a medium that can store a vector of length n with elements in a finite field Fq in n physical entities referred to herein as “memory cells”. Occasionally, a memory cell may be constrained to a value and the write word written to the memory 102 must conform to the constrained values that the memory cells have. It is further assumed that the information of the identity and content of constrained value memory cells is known only at encoding time. Typically, this information is obtained by a cell state sensing procedure (e.g., performed by the tester 104) that could be time consuming. Learning the constrained cell information during the read process 114 increases the latency during a read and may involve the destruction of the written contents due to the sensing procedure. Both of these things are undesirable, hence justifying the importance of the assumption that the identity and content of constrained value memory cells is known only at encoding time.


Problem definition. Let Fq denote the finite field with q elements, where q is a power of a prime. A message mεFqk is to be encoded and written into n memory cells, each of which can hold a value in Fq. Some of the memory cells are “stuck” or have a “constrained value” in that they hold a value that cannot be changed. It is assumed that there are u constrained cells; the indices of these cells are given by Ψ={ψ0, . . . , ψu−1} (ψiε{0, 1, . . . , n−1}) and the values to which they are constrained are given by Φ={φ0, . . . , φu−1} (φiεFq). The location and constrained value of the constrained cells are known only during the write process 112.
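
As a concrete illustration of this model, the following minimal Python sketch (with q=2; the stuck positions and values are hypothetical choices made only for illustration) models a memory whose constrained cells retain their values regardless of what is written; the encoding problem defined here is to choose a write word that already agrees with those cells.

class StuckCellMemory:
    """Toy model of n memory cells, some of which are stuck at a constrained value."""
    def __init__(self, n, stuck):
        self.n = n
        self.stuck = dict(stuck)          # {position: constrained value}
        self.cells = [0] * n
    def write(self, word):
        assert len(word) == self.n
        # a constrained cell keeps its value no matter what the encoder writes
        self.cells = [self.stuck.get(i, word[i]) for i in range(self.n)]
    def read(self):
        return list(self.cells)

mem = StuckCellMemory(8, {2: 1, 5: 0})    # cells 2 and 5 are constrained
mem.write([0] * 8)
print(mem.read())                         # positions 2 and 5 keep their constrained values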


In an embodiment, the encoder 106 performs a mapping that accepts m, Ψ, Φ and returns xεFqn, a vector to be written to memory that satisfies:






xψi = φi, iε{0, 1, . . . , u−1}  (1)


In an embodiment, the decoder 108 performs a mapping that accepts x and recovers m. In another embodiment, the decoder 108 also has the ability to correct for errors in the standard sense of error correction for memories. Although most of the description herein is focused on the problem of encoding for constrained values, the embodiments are easily extendable to the problem of encoding for error bits.


An embodiment uses linear codes including a matrix HM with dimensions k×n and with entries in Fq. This matrix is referred to herein as “the message retrieval matrix”; as m is obtained from x through the operation






m=HMx  (2)


The encoding problem then is to find x satisfying equation (2) with the condition that equation (1) is satisfied. In an embodiment it is assumed that HM has full row rank, as otherwise there would be instances of the encoding problem that cannot be solved. In an embodiment, the adjustment code is defined as the linear space:






CA={yεFqn:HMy=0}.  (3)


The encoding is then decomposed into two steps: Step One—find a vector x′ such that HMx′=m; and Step Two—find a vector dεCA such that x=x′+d satisfies equation (1). Because the matrix HM is known at code design time, implementing Step One requires about O(k2) Fq multiplication and addition operations; as a matter of fact the Galois field multiplications involved have one of the operands constant, significantly reducing their implementation complexity.
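
The two-step decomposition can be exercised with the small brute-force Python sketch below over F2. The message retrieval matrix HM, the message, and the stuck pattern are hypothetical values chosen only for illustration, and exhaustive search stands in for the structured Step One/Step Two solutions described herein.

import itertools

n, k = 7, 3                                # illustrative sizes: 7-symbol write word, 3 data bits
HM = [[1, 0, 0, 1, 1, 0, 1],               # hypothetical k x n message retrieval matrix over F2
      [0, 1, 0, 1, 0, 1, 1],
      [0, 0, 1, 0, 1, 1, 1]]

def matvec(A, v):                          # matrix-vector product over F2
    return [sum(a * b for a, b in zip(row, v)) % 2 for row in A]

def step_one(m):                           # Step One: any x' with HM x' = m (brute force)
    for x in itertools.product([0, 1], repeat=n):
        if matvec(HM, list(x)) == m:
            return list(x)

# adjustment code C_A = null space of HM, enumerated by brute force (equation (3))
CA = [list(y) for y in itertools.product([0, 1], repeat=n) if matvec(HM, list(y)) == [0] * k]

def step_two(xp, stuck):                   # Step Two: d in C_A so that x = x' + d matches the stuck cells
    for d in CA:
        x = [(a + b) % 2 for a, b in zip(xp, d)]
        if all(x[i] == v for i, v in stuck.items()):
            return x
    return None                            # this stuck pattern cannot be honored with this HM

m = [1, 0, 1]
stuck = {2: 0, 5: 1}                       # cells 2 and 5 are stuck at these values
x = step_two(step_one(m), stuck)
print(x, matvec(HM, x) == m)               # write word satisfying the constraints, plus a decode check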


Let G be an n×(n−k) generator matrix for CA. Step Two will always succeed if any u rows of G are linearly independent. From this viewpoint, solving Step Two is equivalent to finding a vector ξεFqn−k such that:





(d ≜ Gξ)ψi = φi−(x′)ψi ≜ si, iε{0, . . . , u−1}  (4)


It will be appreciated that this is not the only type of encoding method. In another embodiment, equation (2) is solved as a system of linear equations where the free variables are those memory cells that are not stuck. The problem size in this embodiment is related to the number of free variables, which in turn is related to the number of message bits. In contrast, in the embodiment where the decomposition into two steps is utilized, the problem size is related to the number of stuck symbols. Thus, the embodiments described herein provide the most benefit when there are fewer stuck bits than message bits. This is an important case because it is most compatible with the mid to high coding rates commonly encountered in applications.


Preliminaries. Extension fields from finite field theory are relied on herein to describe the cyclic codes for stuck bits. For some integer μ≧1, the degree-μ extension of Fq is denoted by Fqμ. In this embodiment, elements of Fqμ are viewed as μ-column vectors whose entries are elements of Fq. If aεFqμ, then the ith element of the column vector associated with a will be denoted by (a)i; thus for example we write a=[(a)0 . . . (a)μ−1]T, where T denotes the transpose operator. Let p(•) be a polynomial of degree μ with coefficients in Fq that is irreducible over Fq, i.e., p(•) cannot be stated as a product of two or more lower-degree polynomials with coefficients in Fq. Let ηεFqμ denote a root of the equation p(η)=0. To an element aεFqμ associate the univariate polynomial Σi=0μ−1(a)iηi. Multiplication and addition of elements of Fqμ is then achieved by the usual rules for polynomial arithmetic followed by a modulus operation that accounts for p(η)=0.
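
A minimal Python sketch of this arithmetic follows, assuming q=2, μ=4, and the irreducible polynomial p(η)=η^4+η+1; a field element is stored as an integer whose bits are the coefficients (a)0, . . . , (a)3.

MU, POLY = 4, 0b10011                      # p(eta) = eta^4 + eta + 1 (assumed irreducible over F2)

def gf_add(a, b):
    return a ^ b                           # addition is coefficient-wise XOR

def gf_mul(a, b):
    r = 0
    for i in range(MU):                    # carry-less (polynomial) multiplication over F2
        if (b >> i) & 1:
            r ^= a << i
    for i in range(2 * MU - 2, MU - 1, -1):
        if (r >> i) & 1:                   # reduce modulo p(eta)
            r ^= POLY << (i - MU)
    return r

eta = 0b10                                 # the root eta itself
x = 1
for _ in range(2 ** MU - 1):
    x = gf_mul(x, eta)
print(x == 1)                              # eta^(2^4 - 1) = 1, as expected in F_16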


The dot product between two elements a,bεFqμ is defined as









<a, b> = Σi=0μ−1 (a)i(b)i εFq.






Let ξεFqμ. The companion to ξ is defined as the Fqμ element:





[(ξ)0(ξη)0 . . . (ξημ−1)0]T


From this definition, it can easily be deduced that if a,b are the companions to α,β then a+b is the companion to α+β, and if ζεFq then ζa is the companion of ζα. As a consequence, the operator that returns the companion of an element is linear, and thus there exists a μ×μ matrix with elements in Fq, referred to as Λ, such that the matrix×vector operation Λξ results in the companion of ξ for any ξεFqμ. The following basic result will be useful in later usage of the Λ linear operator as described herein.


Lemma A. For any a,bεFqμ, <a, Λb>=(ab)0.


Proof of Lemma A. Write a=(a)0+(a)1η+ . . . +(a)μ−1ημ−1, so that ab=(a)0b+(a)1bη+ . . . +(a)μ−1bημ−1 and therefore:











(ab)0 = (a)0(b)0 + (a)1(bη)0 + . . . + (a)μ−1(bημ−1)0 = <a, Λb>.








Similarly, let a,bεFqμ. It is known that (a+b)q=aq+bq, that is, raising to the qth power is a linear operation. Let S denote the μ×μ matrix with elements in Fq such that ξq=Sξ for every ξεFqμ.
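
The Λ and S operators can be made concrete with the short Python sketch below, over the same assumed field F2^4 with p(η)=η^4+η+1; it builds both matrices directly from their definitions and numerically checks Lemma A and the relation Sξ=ξ^q for every field element.

MU, POLY = 4, 0b10011

def gf_mul(a, b):
    r = 0
    for i in range(MU):
        if (b >> i) & 1:
            r ^= a << i
    for i in range(2 * MU - 2, MU - 1, -1):
        if (r >> i) & 1:
            r ^= POLY << (i - MU)
    return r

def bit(x, j):                             # (x)_j, the j-th coefficient of a field element
    return (x >> j) & 1

eta_pow = [1]
for _ in range(MU):
    eta_pow.append(gf_mul(eta_pow[-1], 0b10))

# Lambda maps xi to its companion [(xi)_0, (xi eta)_0, ..., (xi eta^(mu-1))_0]^T,
# so entry (j, i) of Lambda is (eta^(i+j))_0.
LAM = [[bit(gf_mul(eta_pow[i], eta_pow[j]), 0) for i in range(MU)] for j in range(MU)]
# S maps xi to xi^q with q = 2, so column i of S holds the coefficients of (eta^i)^2.
S = [[bit(gf_mul(eta_pow[i], eta_pow[i]), j) for i in range(MU)] for j in range(MU)]

def matvec(M, x):                          # apply an F2 matrix to the coefficient vector of x
    return sum((sum(M[j][i] * bit(x, i) for i in range(MU)) % 2) << j for j in range(MU))

def dot(a, b):                             # <a, b> over F2
    return bin(a & b).count("1") % 2

ok = all(dot(a, matvec(LAM, b)) == bit(gf_mul(a, b), 0) and matvec(S, b) == gf_mul(b, b)
         for a in range(2 ** MU) for b in range(2 ** MU))
print(ok)                                  # Lemma A and S xi = xi^2 hold for all of F_16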


The following discussion assumes familiarity with basic concepts of the theory of linear and cyclic codes such as duality and generator/check polynomials. In an embodiment described herein, it is assumed that n is a factor of the integer qμ−1. Let ωεFqμ be such that ωn=1 and there is no positive integer j<n with the same property; such an element is known to exist. The Fourier transform V of a vector vεFqμn is defined as







Vi = Σj=0n−1 vjωij, iε{0, . . . , n−1}.






The inverse Fourier transform of a vector VεFqμn is given by







vi = (1/n)Σj=0n−1 Vjω−ij, iε{0, . . . , n−1}






where n=1+1+ . . . +1 (n times). It is known that any vector vεFqn has a Fourier transform, V, that satisfies the q-ary conjugacy constraint:






(Vj)q = V(jq mod n), 0≦j<n.  (5)


Let the q-ary conjugacy classes be defined by Γj={j, jq, jq2, . . . , jql−1} mod n for jε{0, 1, . . . , n−1} respectively, where l is the smallest positive integer satisfying jql=j, modulo n. Due to equation (5), if Vj=0, then Vi=0 for every iεΓj. For a set W⊂{0, 1, . . . , n−1} define ΓW=∪iεWΓi.
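
A small Python sketch of the Fourier transform and the q-ary conjugacy classes follows, for the assumed parameters q=2, μ=4, n=15, with ω taken to be a primitive element of F2^4; it checks the transform/inverse-transform round trip (note that 1/n=1 in F2 since n is odd) and the conjugacy constraint of equation (5).

MU, POLY, N = 4, 0b10011, 15

def gf_mul(a, b):
    r = 0
    for i in range(MU):
        if (b >> i) & 1:
            r ^= a << i
    for i in range(2 * MU - 2, MU - 1, -1):
        if (r >> i) & 1:
            r ^= POLY << (i - MU)
    return r

def gf_pow(a, e):                          # a^e for the order-15 element used below
    r = 1
    for _ in range(e % N):
        r = gf_mul(r, a)
    return r

OMEGA = 0b10                               # eta is primitive: omega^15 = 1 and no smaller positive power is 1

def fourier(v):                            # V_i = sum_j v_j omega^(ij)
    V = []
    for i in range(N):
        acc = 0
        for j in range(N):
            acc ^= gf_mul(v[j], gf_pow(OMEGA, i * j))
        V.append(acc)
    return V

def inv_fourier(V):                        # v_i = (1/n) sum_j V_j omega^(-ij); 1/n = 1 here
    v = []
    for i in range(N):
        acc = 0
        for j in range(N):
            acc ^= gf_mul(V[j], gf_pow(OMEGA, (-i * j) % N))
        v.append(acc)
    return v

def conjugacy_class(j):                    # Gamma_j = {j, jq, jq^2, ...} mod n with q = 2
    cls, x = [], j
    while x not in cls:
        cls.append(x)
        x = (2 * x) % N
    return sorted(cls)

v = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0]   # an arbitrary F2 vector of length 15
V = fourier(v)
print(inv_fourier(V) == v)                                          # round trip
print(all(gf_mul(V[j], V[j]) == V[(2 * j) % N] for j in range(N)))  # constraint (5)
print(conjugacy_class(1), conjugacy_class(3), conjugacy_class(7))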


An example. Before describing the general algorithm for cyclic codes, the algorithm will be examined in the context of a simple problem, from which most of the essential insight can be derived. It is assumed that q=2 in this example. Let ai=ωi, iε{0, . . . , n−1}. For this example, the problem considered is that of encoding for a code whose adjustment code (ref. (3)) has an n×μ F2 generator matrix






G = [ (a0)0      (a0)1      . . .  (a0)μ−1
      (a1)0      (a1)1      . . .  (a1)μ−1
        .           .                 .
      (an−1)0    (an−1)1    . . .  (an−1)μ−1 ]





which alternately may be regarded as the transpose of the check matrix of a Hamming code. Any two rows of this matrix are linearly independent and therefore this is a code for up to two stuck bits; in this example assume that u=2. Recall that ψ0, ψ1 are the two distinct indices of bits that are stuck. In view of equation (4), the problem of encoding this code can be reduced to finding cεF2μ such that:





<c,aψ0>=s0, <c,aψ1>=s1  (6)


To solve this problem, the following problem is solved first:











[ aψ0   (aψ0)2 ]   [ z0 ]   [ w0 ]
[ aψ1   (aψ1)2 ] × [ z1 ] = [ w1 ]      (7)







where z0,z1εF2μ and wkεF2μ is given by:






wk = [sk 0 . . . 0]T  (8)


for k=0,1. The problem described by equation (7) can be solved because the determinant of the associated matrix, which is Vandermonde, is nonzero. Next, it is shown that a solution for the problem stated in equation (6) is given by:






c=Λz0+STΛz1  (9)


From equation (9), it can be deduced that:
















<aψ0, c> = <aψ0, Λz0> + <aψ0, STΛz1>
         = <aψ0, Λz0> + <Saψ0, Λz1>
         = <aψ0, Λz0> + <(aψ0)2, Λz1>
         =(a) (aψ0z0)0 + ((aψ0)2z1)0
         =(b) s0

















where (a) follows from Lemma A and (b) follows from equation (7); clearly an identical development may be done for the index ψ1 as well. Thus c indeed solves equation (6) and thus, by extension, the encoding problem at hand.


Some general remarks which will help in elucidating generalizations of this technique follow. The first step in the solution is an embedding in a larger field (in this case, given by equation (7)). The problem in the larger field is, by design, a problem with a known efficient solution—in this case a linear system of equations involving a Vandermonde matrix. The second step in the solution is a collapsing of the solution in the larger field into a solution of the problem in the smaller field. This is implemented by equation (9). The right hand side in equation (7) is arbitrary as long as it satisfies (wi)0=si for i=0,1. Different choices for it may result in different solutions to equation (6). The Vandermonde problem solved in equation (7) is a polynomial fitting problem, and thus in more complex problems Lagrange's interpolation formula may be used to solve it.
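
The embedding-and-collapse of this example can be exercised end to end with the Python sketch below, again over the assumed field F2^4 with p(η)=η^4+η+1; the stuck positions ψ0, ψ1 and right hand sides s0, s1 are illustrative choices, not values from the text. The sketch solves equation (7) by Cramer's rule and collapses the solution with equation (9), then verifies equation (6).

MU, POLY, N = 4, 0b10011, 15

def gf_mul(a, b):
    r = 0
    for i in range(MU):
        if (b >> i) & 1:
            r ^= a << i
    for i in range(2 * MU - 2, MU - 1, -1):
        if (r >> i) & 1:
            r ^= POLY << (i - MU)
    return r

def gf_inv(a):                             # a^(2^4 - 2) = a^(-1) for nonzero a
    r = 1
    for _ in range(2 ** MU - 2):
        r = gf_mul(r, a)
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e % N):
        r = gf_mul(r, a)
    return r

def bit(x, j):
    return (x >> j) & 1

def dot(a, b):                             # <a, b> over F2
    return bin(a & b).count("1") % 2

eta_pow = [1]
for _ in range(MU):
    eta_pow.append(gf_mul(eta_pow[-1], 0b10))
LAM = [[bit(gf_mul(eta_pow[i], eta_pow[j]), 0) for i in range(MU)] for j in range(MU)]
ST = [[bit(gf_mul(eta_pow[j], eta_pow[j]), i) for i in range(MU)] for j in range(MU)]   # S transposed

def matvec(M, x):
    return sum((sum(M[j][i] * bit(x, i) for i in range(MU)) % 2) << j for j in range(MU))

a = [gf_pow(0b10, i) for i in range(N)]    # a_i = omega^i

psi = (2, 5)                               # illustrative stuck positions psi_0, psi_1
s = (1, 0)                                 # illustrative right hand sides s_0, s_1
w = [s[0], s[1]]                           # w_k = [s_k 0 0 0]^T is just the field element s_k

# solve equation (7) by Cramer's rule; subtraction is addition (XOR) in characteristic 2
a0, a1 = a[psi[0]], a[psi[1]]
det = gf_mul(a0, gf_mul(a1, a1)) ^ gf_mul(a1, gf_mul(a0, a0))
inv = gf_inv(det)
z0 = gf_mul(inv, gf_mul(w[0], gf_mul(a1, a1)) ^ gf_mul(w[1], gf_mul(a0, a0)))
z1 = gf_mul(inv, gf_mul(a0, w[1]) ^ gf_mul(a1, w[0]))

# collapse with equation (9): c = Lambda z0 + S^T Lambda z1, then check equation (6)
c = matvec(LAM, z0) ^ matvec(ST, matvec(LAM, z1))
print([dot(c, a[p]) for p in psi] == list(s))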



FIG. 2 illustrates a process for writing data that is implemented by an embodiment of the encoder 106. At block 202, write data to be encoded into a write word is received. At block 204, any constraints on symbol values in the write word are received. In an embodiment, the constraints are determined by the tester 104 depicted in FIG. 1. At block 206, the encoding problem is represented as a first linear system in a first field (e.g., a Galois field) of a first size. One way of deriving the first linear system follows.


Cyclic codes for stuck bits. In an embodiment, cyclic codes for stuck bits are obtained by using a good cyclic code (in the standard sense of minimum distance) for the dual of the adjustment code; in fact the minimum distance of this dual code, minus one, is a lower bound on the number of stuck bits that can be encoded for. A cyclic code for stuck bits can be defined through a frequency split. Let WS,WM⊂{0, 1, . . . , n−1} be disjoint sets such that:






WS∪WM={0, 1, . . . , n−1}  (10)


and such that ΓWS=WS, ΓWM=WM (it can be shown that only one of these conditions is actually necessary since equation (10) will imply the other one). The frequency split is associated with the polynomials g(ξ)=ΠjεWM(ξ−ωj),h(ξ)=ΠjεWS(ξ−ωj).


It is known that the coefficients of these polynomials are in Fq; this is due to the conditions ΓWS=WS and ΓWM=WM. By definition, the adjustment code CA for this family of cyclic codes is generated by g(ξ). For HM, choose any parity check matrix for CA.
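
That the coefficients land in Fq can be checked numerically. The Python sketch below builds g(ξ) and h(ξ) for n=15 over the assumed field F2^4, using an illustrative conjugacy-closed split WS=Γ0∪Γ7={0,7,11,13,14} (an assumption made only for this sketch), and confirms that every coefficient of both polynomials is 0 or 1.

MU, POLY, N = 4, 0b10011, 15

def gf_mul(a, b):
    r = 0
    for i in range(MU):
        if (b >> i) & 1:
            r ^= a << i
    for i in range(2 * MU - 2, MU - 1, -1):
        if (r >> i) & 1:
            r ^= POLY << (i - MU)
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e % N):
        r = gf_mul(r, a)
    return r

def poly_from_roots(roots):                # coefficients of prod (xi - root), lowest degree first
    coeffs = [1]
    for root in roots:
        nxt = [0] * (len(coeffs) + 1)
        for i, c in enumerate(coeffs):
            nxt[i + 1] ^= c                # xi * (c xi^i)
            nxt[i] ^= gf_mul(root, c)      # (-root) * (c xi^i); minus is plus in characteristic 2
        coeffs = nxt
    return coeffs

WS = {0, 7, 11, 13, 14}                    # Gamma_0 union Gamma_7: closed under j -> 2j mod 15
WM = set(range(N)) - WS
g = poly_from_roots([gf_pow(0b10, j) for j in sorted(WM)])
h = poly_from_roots([gf_pow(0b10, j) for j in sorted(WS)])
print(all(c in (0, 1) for c in g + h))     # coefficients lie in F2 because each set is conjugacy-closed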


The structure of the adjustment code. This subsection is devoted to finding a useful counterpart to equation (6) in the case of general cyclic codes. Instead of using a generator matrix for the adjustment code, a matrix will be constructed that will, in general, span only a subset of the adjustment code, and may also have redundant columns; the latter is done mostly for notational convenience. The notation M will be used for this matrix. This construction will be most efficient when the dual of the adjustment code is a good BCH code. A generator matrix for CA can be found by identifying it with a parity check matrix for the code dual to CA, which is labeled CA⊥⊂Fqn. Since CA is generated by g(ξ), it is known that CA⊥ is generated by






h0−1ξn−kh(ξ−1)=h0−1ΠjεWS(ξ−ω−j)  (11)


The zeros of the generator polynomial of a code can be used to construct a parity check matrix for that code. Let n−WS be the set {n−j:jεWS}. Since ω−j=ωn−j, a way of describing the code CA is through the parity check equations CA={vεFqn:Vj=0:jεn−WS}.


Define run(•) to be the function that accepts a subset of {0, 1, . . . , n−1} and returns the largest run length of consecutive integers (modulo n) within that subset. It is assumed that u≦run(WS). Note that run(WS)=run(n−WS).
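
A direct Python sketch of run(•) follows; the subset in the check is the WS of the FIG. 3 example, for which the wrap-around run 11, 12, 13, 14, 0 modulo 15 has length 5.

def run(subset, n):
    """Largest run length of consecutive integers, modulo n, contained in the subset."""
    best = 0
    for start in subset:
        length = 0
        while (start + length) % n in subset:
            length += 1
            if length == n:                # the subset is all of {0, ..., n-1}
                break
        best = max(best, length)
    return best

print(run({0, 11, 12, 13, 14}, 15))        # prints 5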


Let R⊂n−WS be a set with u consecutive integers, modulo n. Also let r0ε{0, . . . , n−1} be such that R={r0, r1, . . . , ru−1}={r0, r0+1, . . . , r0+u−1} mod n.


Let L={l0, . . . , l|L|−1} be a set of representatives of the q-ary conjugacy classes containing the elements of R, and for iε{0, . . . , |L|−1} let Pi=Γli∩R, with the elements of Pi written as qκ(i,j)li mod n for jε{0, . . . , |Pi|−1}. Note that {vεFqn:Vj=0:jεL}={vεFqn:Vj=0:jεR} and that CA is a subset of either of these sets. Define the n×(|L|μ) matrix M with entries in Fq by





[M]m,iμ+j=(ωmli)j  (12)


for mε{0, . . . , n−1}, iε{0, . . . , |L|−1}, jε{0, . . . , μ−1}. It is noted that M will in general have redundant columns. Therefore {vεFqn:Vj=0, jεL}={vεFqn:MTv=0}.


By construction: {vεFqn:v=Mξ for some ξεFqμL}⊂CA.


This set of adjustment codewords is employed in what follows to perform the encoding task.


At block 208 in FIG. 2, the first linear system is embedded into a second linear system (e.g., a second Galois field) having a second size larger than the first size. At block 210, the second linear system is solved, resulting in a second field solution. An embodiment of how blocks 208 and 210 are performed follows.


Embedding. The first step of the encoding algorithm is an embedding into a recognizable computational problem in a larger field, in this case given by a Vandermonde system of equations. The system of equations to be solved is










B[z0 . . . zu−1]T = [w0 . . . wu−1]T  (13)

B ≜ [ ω^(r0 ψ0)      ω^((r0+1) ψ0)      . . .  ω^((r0+u−1) ψ0)
          .                 .                        .
      ω^(r0 ψu−1)    ω^((r0+1) ψu−1)    . . .  ω^((r0+u−1) ψu−1) ]  (14)

wk = [sk 0 . . . 0]T  (15)







for kε{0, . . . , u−1}. This is not exactly a Vandermonde system but since B can be written as the product of a diagonal invertible matrix times a Vandermonde matrix, it is referred to herein as the Vandermonde problem. A u×u Vandermonde matrix has an inverse that can be written analytically involving no more than O(u2) operations in Fqμ, and thus the system of equations can be solved within the same complexity. Note that, strictly speaking, the basic problem is a polynomial fitting problem that may be solved using any available algorithm with the desired computational advantages.
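
Since the basic step is a polynomial fitting, it can be illustrated with Lagrange interpolation over the assumed field F2^4. The Python sketch below is generic: the interpolation points and targets are arbitrary illustrative values rather than the actual B system of equation (14), and the construction uses O(u2) field operations.

MU, POLY = 4, 0b10011

def gf_mul(a, b):
    r = 0
    for i in range(MU):
        if (b >> i) & 1:
            r ^= a << i
    for i in range(2 * MU - 2, MU - 1, -1):
        if (r >> i) & 1:
            r ^= POLY << (i - MU)
    return r

def gf_inv(a):
    r = 1
    for _ in range(2 ** MU - 2):
        r = gf_mul(r, a)
    return r

def lagrange(xs, ys):
    """Coefficients (lowest degree first) of the polynomial of degree < len(xs) with p(xs[k]) = ys[k]."""
    u = len(xs)
    coeffs = [0] * u
    for k in range(u):
        basis, denom = [1], 1              # basis polynomial prod_{j != k} (xi - xs[j]) and its scaling
        for j in range(u):
            if j == k:
                continue
            nxt = [0] * (len(basis) + 1)
            for i, c in enumerate(basis):
                nxt[i + 1] ^= c
                nxt[i] ^= gf_mul(xs[j], c) # minus is plus in characteristic 2
            basis = nxt
            denom = gf_mul(denom, xs[k] ^ xs[j])
        scale = gf_mul(ys[k], gf_inv(denom))
        for i, c in enumerate(basis):
            coeffs[i] ^= gf_mul(scale, c)
    return coeffs

def evaluate(coeffs, x):
    acc, p = 0, 1
    for c in coeffs:
        acc ^= gf_mul(c, p)
        p = gf_mul(p, x)
    return acc

xs, ys = [1, 2, 4, 8], [7, 0, 3, 9]        # arbitrary distinct points and targets in F_16
z = lagrange(xs, ys)
print(all(evaluate(z, x) == y for x, y in zip(xs, ys)))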


At block 212 in FIG. 2, the second field solution is collapsed into the first field, resulting in the write word. At block 214, the write word is written to memory. It is assumed that there exists a solution {z0, . . . , zu−1} satisfying equation (13). The goal is to build a ξεFqμL such that [Mξ]ψk=sk, kε{0, . . . , u−1}. Let τ:{0, . . . , |L|−1}×{0, . . . , μ−1}→{0, . . . , u−1} be the mapping such that rτ(i,j)=qκ(i,j)li mod n. Then for kε{0, . . . , u−1}:













Σj=0u−1 zjω^((r0+j) ψk) = Σi=0|L|−1 Σj=0|Pi|−1 zτ(i,j)ω^(li q^κ(i,j) ψk)  (16)







A method for collapsing according to an embodiment follows.


Theorem 1. Let, for iε{0, . . . , |L|−1}










ci = Σj=0|Pi|−1 (S^κ(i,j))T Λ zτ(i,j)  (17)







and define ξiμ+j=(ci)j for iε{0, . . . , |L|−1}, jε{0, . . . , μ−1}. Then for kε{0, . . . , u−1}, [Mξ]ψk=sk.


Proof of Theorem 1. The proof is a generalization of the ideas in the above example. The quantity (Mξ)ψk can be rewritten as:













Σi=0|L|−1 <ci, ω^(li ψk)>
  = Σi=0|L|−1 Σj=0|Pi|−1 <(S^κ(i,j))TΛzτ(i,j), ω^(li ψk)>
  = Σi=0|L|−1 Σj=0|Pi|−1 <Λzτ(i,j), S^κ(i,j)ω^(li ψk)>
  =(a) Σi=0|L|−1 Σj=0|Pi|−1 <Λzτ(i,j), ω^(li q^κ(i,j) ψk)>
  =(b) Σi=0|L|−1 Σj=0|Pi|−1 (zτ(i,j)ω^(li q^κ(i,j) ψk))0
  = (Σi=0|L|−1 Σj=0|Pi|−1 zτ(i,j)ω^(li q^κ(i,j) ψk))0
  =(c) (Σj=0u−1 zjω^((r0+j) ψk))0
  =(d) sk









where (a) follows from the fact that Sξ=ξq, (b) follows from Lemma A, (c) follows from equation (16), and (d) follows from equations (13) and (15).


In order to do an asymptotic computational complexity analysis, it is necessary to specify how the problem at hand will be scaled. A natural parameter to scale is μ, which dictates the codeword length via the relation logq(n+1)=μ. As one scales μ (and hence n), the number of stuck symbols u to be encoded should also be scaled.


Following is a comparison of the computational complexity of the algorithm described herein with that of standard Gaussian elimination. As every Fqμ operation can be implemented in O(μ2) Fq operations, the suggested algorithm has complexity O(u2μ2) Fq operations. On the other hand, Gaussian elimination solves the problem using Ω(u3) Fq operations (in reality the complexity is in general larger than this, but a lower bound suffices for the comparison).


As long as u grows faster than μ2 (u/μ2→∞), the encoding algorithm presented herein will outperform Gaussian elimination. It is important to note that the class of codes considered herein does not possess asymptotics that allow u to grow linearly with n, for the same reason that BCH codes, as a family, have a best case “zero” asymptotic coding rate. Thus, it is important to analyze how fast one can, in principle, have u grow as a function of μ. The concern is addressed using an approximate analysis in order to preserve the readability of the text. Using a rather loose bound, every stuck symbol costs at most μ symbols worth of redundancy. Now, let αε(0,1) be a parameter that is kept constant as the problem is scaled. After investing ⌊αn/μ⌋ redundant symbols it is possible to encode with at least






u = ⌊αn/μ⌋ ≈ αq^μ/μ






stuck symbols. Under this scaling law, it can be seen that u can grow significantly faster than μ2. Thus, this analysis concludes that the algorithm presented herein outperforms Gaussian elimination in a large set of circumstances.
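
The comparison can be restated compactly (a sketch in LaTeX notation, using the loose scaling law above):

\[
\underbrace{O(u^{2}\mu^{2})}_{\text{algorithm herein}}
\quad\text{vs.}\quad
\underbrace{\Omega(u^{3})}_{\text{Gaussian elimination}},
\qquad
u^{2}\mu^{2} \ll u^{3} \iff \mu^{2} \ll u,
\qquad
\text{and with } u \approx \alpha\,\frac{q^{\mu}}{\mu}:\ \
\frac{u}{\mu^{2}} \approx \frac{\alpha\, q^{\mu}}{\mu^{3}} \to \infty.
\]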



FIG. 3 illustrates an example of the relation between the embedding B and the matrix M as described above in reference to blocks 206, 208 and 210 in FIG. 2. The example in FIG. 3 depicts a code for 15 bit write words, 5 constrained bits (or stuck bits), and 6 data bits. The example parameters are q=2, μ=4, n=15, k=6, u=5, l0=0, l1=1, l2=3, P0={0}, P1={1,2,4}, P2={3}, R={0,1,2,3,4}, L={0,1,3}, WS={0,11,12,13,14}, WM={1,2,3,4,5,6,7,8,9,10}, ψ0=2, ψ1=5, ψ2=6, ψ3=11, ψ4=14, Γ0={0}, Γ1={1,2,4,8}, Γ3={3,6,9,12}, Γ5={5,10}, Γ7={7,11,13,14}. The matrix M has 12 columns but three of them (1,2,3) are redundant since each of their entries is zero. If the redundant columns are eliminated, a generator matrix for the adjustment code is obtained. M is shown with rectangles one quarter of the width of B because, during a matrix multiplication, an F24 element can be thought of as a 4×4 operator with entries in F2. The powers of ω in the matrix B for this example are shown explicitly.



FIG. 4 illustrates a block diagram of a system for encoding and decoding transmitted data in accordance with an embodiment. The system depicted in FIG. 4 includes a transmission medium 408 made up of links (e.g., physical wires, optical fibers, wireless connections, etc.) for transmitting a message between a transmitter 402 and a receiver 404. As shown in FIG. 4, the transmitter 402 includes a sensor 412 for detecting links in the transmission medium 408 that require a constrained value (e.g., a “stuck bit”) and an encoder 406 for performing the encoding described herein in response to the required constrained values. The receiver 404 includes a decoder 410 for decoding the encoded message. In an embodiment, the decoder 410 does not need to know about the constrained values required by the transmission medium 408 and instead may decide on its own how to constrain the values.



FIG. 5 illustrates a process for solving a linear system of equations that is implemented by an embodiment. At block 502, a linear system of equations is received in a first field (e.g., a Galois field) having a first size. At block 504, the linear system of equations is embedded in a linear system of equations in a second field (e.g., a Galois field) that is larger than the size of the first field. In an embodiment, the linear system of equations in the second field is an underspecified system of equations (i.e., more unknowns than equations). The linear system of equations is solved in the second field at block 506. At block 508, the solution in the second field is collapsed into the first field to generate a solution to the linear system of equations received at block 502. At block 510, the solution is output.


Technical effects and benefits include the ability to store more data in memory (higher density) by utilizing memory cells or symbols having constrained values.


The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.


The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiment was chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.


As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method or computer program product. Accordingly, aspects of the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.


Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.


A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.


Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.


Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).


Aspects of the present invention are described below with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.


These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.


The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.


The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.

Claims
  • 1. A computer implemented method for writing data, the method comprising: receiving write data to be encoded into a write word;receiving constraints on symbol values associated with the write word;encoding the write data into the write word, the encoding comprising: representing the write data and the constraints as a first linear system in a first field of a first size;embedding the first linear system into a second linear system in a second field of a second size, the second size larger than the first size;solving the second linear system in the second field resulting in a solution; andcollapsing the solution into the first field resulting in the write word, the write word satisfying the constraints on symbol values associated with the write word; andwriting the write word to a memory.
  • 2. The method of claim 1, wherein the first and second fields are Galois fields.
  • 3. The method of claim 1, wherein the memory is a phase change memory (PCM).
  • 4. The method of claim 1, wherein the constraints are due to physical defects in the memory.
  • 5. The method of claim 1, wherein the second linear system is represented as a Vandermonde matrix.
  • 6. The method of claim 5, wherein the Vandermonde matrix contains powers of symbols denoting locations of constrained symbol values.
  • 7. The method of claim 1, wherein the first linear system includes an underspecified system of equations.
  • 8. The method of claim 1, wherein the collapsing includes multiplying a matrix times the solution.
  • 9. The method of claim 1 wherein the collapsing the solution includes multiplying a matrix times the solution and adding it to a result of multiplying a different matrix times the write data.
  • 10. A memory system comprising: a memory including memory cells and constraints on symbol values stored in the memory cells; andan encoder in communication with the memory, the encoder configured for: receiving write data to be encoded into a write word that satisfies the constraints on the symbol values; encoding the write data into the write word, the encoding comprising:representing the write data and the constraints as a first linear system in a first field of a first size;embedding the first linear system into a second linear system in a second field of a second size, the second size larger than the first size; solving the second linear system in the second field resulting in a solution; andcollapsing the solution into the first field resulting in the write word, the write word satisfying the constraints on symbol values in the write word; andtransmitting the write word to the memory.
  • 11. The system of claim 10, wherein the first and second fields are Galois fields.
  • 12. The system of claim 10, wherein the memory is a phase change memory (PCM).
  • 13. The system of claim 10, wherein the constraints are constrained values.
  • 14. The system of claim 10, wherein the second linear system is represented as a Vandermonde matrix.
  • 15. The system of claim 10, wherein the first linear system includes an underspecified system of equations.
  • 16. The system of claim 10, wherein the collapsing includes multiplying a matrix times the solution.
  • 17. A computer implemented method for transmitting data, the method comprising: receiving data to be encoded into a word for transmission across a transmission medium;receiving constraints on symbol values associated with the word;encoding the data into the word, the encoding comprising: representing the data and the constraints as a first linear system in a first field of a first size;embedding the first linear system into a second linear system in a second field of a second size, the second size larger than the first size;solving the second linear system in the second field resulting in a solution; andcollapsing the solution into the first field resulting in the word, the word satisfying the constraints on symbol values associated with the word; and outputting the word on the transmission medium.
  • 18. The method of claim 17, wherein the first and second fields are Galois fields.
  • 19. The method of claim 17, wherein the constraints are constrained values.
  • 20. The method of claim 17, wherein the first linear system is represented as a Vandermonde matrix.
  • 21. The method of claim 17, wherein the second linear system includes an underspecified system of equations.
  • 22. The method of claim 17, wherein the collapsing includes multiplying a matrix times the solution.
  • 23. A computer implemented method for solving linear systems, the method comprising: receiving a linear system of equations in a first field of a first size;embedding the linear system of equations into a second field of a second size, the second size larger than the first size;solving the linear system of equations in the second field resulting in a second field solution; andcollapsing the second field solution into the first field resulting in a first field solution; andoutputting the first field solution.
  • 24. The method of claim 23, wherein the first and second fields are Galois fields.
  • 25. The method of claim 23 wherein the constraints are constrained values.