Decoding Method for Algebraic Geometric Codes and Associated Device

Information

  • Patent Application
  • 20080270873
  • Publication Number
    20080270873
  • Date Filed
    December 22, 2005
  • Date Published
    October 30, 2008
Abstract
The present invention relates to a method of decoding a one-point algebraic geometric code defined on an algebraic curve of type C(a,b) represented by an equation F(X,Y)=0 of degree b in X and of degree a in Y over Fq, comprising the following steps: —calculating extended error syndromes (σj(i)) associated with a received word (r); —determining the values of errors in each component (r(x, yp(x))) of the received word r, on the basis of the extended error syndromes calculated. Since the error value is determined for each component, it is not necessary to have recourse to an error locating step. The invention also relates to devices and apparatuses associated with the method.
Description

The present invention concerns systems for communication or recording of data in which the data are subjected to a channel encoding in order to improve the fidelity of the transmission or storage. It concerns more particularly a decoding method, as well as the devices and apparatus adapted to implement this method.


It will be recalled that channel "block" encoding consists of transmitting to a receiver (or recording on a data carrier) "codewords" that are formed by introducing a certain level of redundancy into the data to transmit. In more detail, each codeword conveys the information initially contained in a predetermined number k of symbols, termed "information symbols", taken from an "alphabet" of finite size q; from these k information symbols, a number n>k of symbols belonging to that alphabet is calculated, and these constitute the components of the codeword: v=(v1, v2, . . . , vn). The set of codewords obtained when each information symbol takes any value in the alphabet constitutes a sort of dictionary referred to as a "code" of "dimension" k and "length" n.


In particular, when the size q of the alphabet is taken equal to a power of a prime number, the alphabet can be given a field structure known as a "Galois field" denoted Fq. For example, for q=2, Fq is a binary alphabet, and for q=2^8=256, Fq is an alphabet of bytes.
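
By way of purely illustrative example (this is not part of the method itself), the arithmetic of such a Galois field can be implemented with exponential and logarithm tables. The short Python sketch below does so for F16, assuming the primitive polynomial x^4+x+1 (the field and polynomial used in the numerical example given later in this text); the helper names gf_add, gf_mul, gf_inv and gf_pow are illustrative choices.

    # Illustrative sketch only: GF(16) arithmetic with the primitive polynomial
    # x^4 + x + 1 (i.e. alpha^4 = alpha + 1), the field of the numerical example below.
    # Field elements are integers 0..15; addition and subtraction are bitwise XOR.
    EXP, LOG = [0] * 30, {}           # EXP[i] = alpha^i, table doubled to avoid reductions
    x = 1
    for i in range(15):
        EXP[i] = EXP[i + 15] = x
        LOG[x] = i
        x = (x << 1) ^ (0b10011 if x & 0b1000 else 0)   # multiply by alpha and reduce

    def gf_add(a, b):                 # characteristic 2: "+" and "-" coincide
        return a ^ b

    def gf_mul(a, b):
        return 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

    def gf_inv(a):                    # multiplicative inverse of a non-zero element
        return EXP[(15 - LOG[a]) % 15]

    def gf_pow(a, n):
        return 1 if n == 0 else (0 if a == 0 else EXP[(LOG[a] * n) % 15])

    if __name__ == "__main__":
        alpha = 2
        assert gf_pow(alpha, 4) == gf_add(alpha, 1)      # alpha^4 = alpha + 1
        assert all(gf_mul(a, gf_inv(a)) == 1 for a in range(1, 16))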


The “Hamming distance” between two words of the same length is the number of indices for which the component of the first word is different from the component of the second word. For a given code, the smallest Hamming distance between any two different words belonging to the code is termed the “minimum distance” d; this is an important parameter of the code.


Certain codes, termed "linear codes", are such that any linear combination of codewords (with the coefficients taken from the alphabet) is still a codeword. These codes can, conveniently, be associated with a matrix H of dimension (n−k)×n, referred to as a "parity check matrix": a given word v of length n is a codeword if, and only if, it satisfies the equation: H·v^T=0 (where the exponent T indicates the transposition); the code is then said to be "orthogonal" to the matrix H.


There is a transmission error if the difference e between a received word r and the corresponding codeword v sent by the transmitter is non-zero. It is said that the transmission errors are caused by the “channel noise”. To detect possible transmission errors and if possible correct them, a decoding method is implemented at the receiver which judiciously exploits the redundancy mentioned above.


More particularly, the decoding is carried out in two main steps.


The first step consists of associating an "associated codeword" v̂ with the received word. To do this, the decoder first of all calculates the "error syndromes vector" s = H·r^T = H·e^T of length (n−k) (in the context of the present invention, no difference is made between the term "word" and the term "vector"). If the syndromes are all zero, it is assumed that no transmission error has occurred, and the "associated codeword" will then simply be taken to be equal to the received word. If that is not the case, it is thereby deduced that the received word is erroneous, and calculations are then performed that are adapted to estimate the value of the error e; in other words, these calculations provide an estimated value ê of the error such that (r − ê) is a codeword, which will then constitute the "associated codeword" v̂. Usually, this first step of the decoding is divided into two distinct sub-steps: a first so-called "error locating" sub-step, during which the components of the received word whose value is erroneous are determined, and a second so-called "error correction" sub-step, during which an estimation is calculated of the transmission error affecting those components.
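
As a purely illustrative sketch of this first calculation (not a transcription of the patent's own implementation), the error syndromes vector s = H·r^T and the all-zero test can be computed as follows over F16; the parity-check matrix and received word used at the end are hypothetical toy values, and the GF(16) helper repeats the conventions of the sketch given above.

    # Illustrative sketch: s = H . r^T over GF(16), and the "all syndromes zero" test.
    # GF(16) helper (x^4 + x + 1, addition = XOR), same conventions as the earlier sketch.
    EXP, LOG = [0] * 30, {}
    x = 1
    for i in range(15):
        EXP[i] = EXP[i + 15] = x
        LOG[x] = i
        x = (x << 1) ^ (0b10011 if x & 0b1000 else 0)
    gf_mul = lambda a, b: 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

    def syndromes(H, r):
        """Error syndromes vector s = H . r^T (one component per row of H)."""
        s = []
        for row in H:
            acc = 0
            for h, rj in zip(row, r):
                acc ^= gf_mul(h, rj)
            s.append(acc)
        return s

    # hypothetical toy parity-check matrix (2 x 4) and received word
    H = [[1, 2, 4, 8],
         [1, 4, 3, 12]]
    r = [0, 0, 0, 0]
    s = syndromes(H, r)
    print("no error assumed" if not any(s) else "erroneous word, estimate e")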


The second step simply consists in reversing the encoding method. If the received word was correct, or if correction was achieved of all the transmission errors therein during the first step, this second step of course makes it possible to retrieve the k initial information symbols (before encoding) corresponding to that received word.


It will be noted that in the context of the present invention, reference will often be made to “decoding” for brevity, to designate solely the first of those steps, it being understood that the person skilled in the art is capable without difficulty of implementing the second step.


The objective usually assigned to decoding is to associate with the received word the codeword situated at the shortest Hamming distance from that received word. Reasonably, it will be attempted to identify the position of the possible errors in a received word, and to provide the correct replacement symbol (i.e. identical to the one sent by the sender) for each of those positions, each time the number of erroneous positions is at most equal to INT[(d−1)/2] (where “INT” designates the integer part) for a code of minimum distance d. For certain error configurations, it is sometimes possible to do better. However, it is not always simple to perfect a decoding algorithm achieving that performance. It should also be noted that, when the chosen algorithm manages to propose a correction for the received word, that correction is all the more reliable (at least, for most transmission channels) the smaller the number of positions it concerns.


Among known codes, "Reed-Solomon" codes may be cited, which are reputed for their efficiency. They are linear codes, of which the minimum distance d is equal to (n−k+1). The parity-check matrix H of the Reed-Solomon code of dimension k and length n (where n is less than or equal to (q−1)) is a matrix with (n−k) lines and n columns, which has the structure of a Vandermonde matrix. This parity-check matrix H may be defined, for example, by taking H_ij = α^{i(j−1)} (1≦i≦n−k, 1≦j≦n), where α is an nth root of unity in Fq; it is then possible to label the component v_j, where 1≦j≦n, of any codeword v=(v1, v2, . . . , vn) by means of the element α^{j−1} of Fq; it is for this reason that a set such as (1, α, α^2, . . . , α^{n−1}) is termed the "locating set" of the Reed-Solomon code.


Like all codes, Reed-Solomon codes may be “modified” and/or “shortened”. It is said that a given code Cmod is a “modified” version of the code C if there is a square non-singular diagonal matrix A such that each word of Cmod is equal to v·A with v being in C. It is said that a given code is a “shortened” version of the code C if it comprises solely the words of C of which, for a number R of predetermined positions, the components are all zero: as these positions are known to the receiver, their transmission can be obviated, such that the length of the shortened code is (n−R).


As mentioned above, the step of a method of decoding during which a “codeword associated with the received word” is calculated is usually divided into two sub-steps: the first sub-step referred to as an “error locating” sub-step, consists of identifying in the received word any components whose value is erroneous; and the second sub-step consists then of calculating the corrected value of those erroneous components.


For the decoding of Reed-Solomon codes, as regards error locating, use is usually made of the algorithm known as the "Berlekamp-Massey" algorithm, which will now be briefly described: firstly a matrix S is constructed, termed "syndromes matrix", of which each element is a certain component of the error syndromes vector s = H·r^T = H·e^T; next a vector Λ is sought such that Λ·S=0; then an "error locating polynomial" Λ(Z) is formed, of which the coefficients are components of the vector Λ; the inverses of the roots of that polynomial Λ(Z) are then, among the elements ω_i (where i=1, . . . , n) of the locating set, those which label the erroneous components of the received word r.
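
For illustration only, a compact Python sketch of the Berlekamp-Massey algorithm over F16 is given below; the syndrome indexing and polynomial conventions are assumptions rather than a transcription of a specific implementation, and the GF(16) helper repeats the earlier sketch.

    # Illustrative sketch of the Berlekamp-Massey algorithm over GF(16) (conventions assumed).
    EXP, LOG = [0] * 30, {}
    x = 1
    for i in range(15):
        EXP[i] = EXP[i + 15] = x
        LOG[x] = i
        x = (x << 1) ^ (0b10011 if x & 0b1000 else 0)
    gf_mul = lambda a, b: 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]
    gf_inv = lambda a: EXP[(15 - LOG[a]) % 15]

    def berlekamp_massey(s):
        """Return the error locating polynomial [1, L1, L2, ...] for the syndrome list s."""
        C, B = [1], [1]                # current and previous connection polynomials
        L, m, b = 0, 1, 1
        for n in range(len(s)):
            d = s[n]                   # discrepancy between predicted and observed syndrome
            for i in range(1, L + 1):
                d ^= gf_mul(C[i], s[n - i])
            if d == 0:
                m += 1
                continue
            coef = gf_mul(d, gf_inv(b))
            C_new = C + [0] * max(0, len(B) + m - len(C))
            for i, Bi in enumerate(B):
                C_new[i + m] ^= gf_mul(coef, Bi)     # C(Z) <- C(Z) - (d/b) Z^m B(Z)
            if 2 * L <= n:
                L, B, b, m = n + 1 - L, C, d, 1
            else:
                m += 1
            C = C_new
        return C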


As regards the error correction, use is usually made of the algorithm known as the "Forney" algorithm, which will now be briefly described. The error calculating polynomial Ω(Z) = Λ(Z)·S(Z) modulo Z^{n−k} is constructed, where

S(Z) = Σ_{i=0}^{n−k−1} s_i Z^i

and the s_i are the components of the error syndromes vector s; the errors are then given, for i=1, . . . , n, by:

e_i = 0                                        if Λ(ω_i^{−1}) ≠ 0,
e_i = −Ω(ω_i^{−1}) / (p_i · Λ′(ω_i^{−1}))      if Λ(ω_i^{−1}) = 0,

where Λ′(Z) designates the derivative of Λ(Z), and p_i is equal to 1 for a "standard" Reed-Solomon code and to the diagonal element in position (i,i) of the matrix A for a modified code (see above).
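
Purely by way of illustration, and under simplifying assumptions (a "standard" Reed-Solomon code, so that p_i = 1, and the syndrome indexing shown above, which may need adjusting for a given parity-check matrix), the Forney formula can be sketched as follows; in characteristic 2 the minus sign disappears. The GF(16) helper repeats the conventions of the earlier sketches.

    # Illustrative sketch of the Forney error-value formula (standard RS code, p_i = 1),
    # over GF(16); syndrome and locator conventions are assumptions.
    EXP, LOG = [0] * 30, {}
    x = 1
    for i in range(15):
        EXP[i] = EXP[i + 15] = x
        LOG[x] = i
        x = (x << 1) ^ (0b10011 if x & 0b1000 else 0)
    gf_mul = lambda a, b: 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]
    gf_inv = lambda a: EXP[(15 - LOG[a]) % 15]
    gf_div = lambda a, b: gf_mul(a, gf_inv(b))

    def poly_eval(p, z):
        """Evaluate p[0] + p[1] Z + p[2] Z^2 + ... at Z = z (Horner scheme)."""
        y = 0
        for c in reversed(p):
            y = gf_mul(y, z) ^ c
        return y

    def forney(s, locator, omegas):
        """s: syndromes s_0..s_(n-k-1); locator: Lambda(Z); omegas: the locating set."""
        t = len(s)
        Om = [0] * t                                   # Omega(Z) = Lambda(Z) S(Z) mod Z^(n-k)
        for i, li in enumerate(locator):
            for j, sj in enumerate(s):
                if i + j < t:
                    Om[i + j] ^= gf_mul(li, sj)
        # formal derivative of Lambda: in characteristic 2 only odd-degree terms remain
        dL = [locator[i] if i % 2 else 0 for i in range(1, len(locator))]
        errors = []
        for w in omegas:
            winv = gf_inv(w)
            if poly_eval(locator, winv) != 0:
                errors.append(0)                       # position not located as erroneous
            else:
                errors.append(gf_div(poly_eval(Om, winv), poly_eval(dL, winv)))
        return errors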


For more details on Reed-Solomon codes, and in particular the algorithms of Berlekamp-Massey and of Forney, reference may for example be made to the work by R. E. Blahut entitled “Theory and practice of error-control codes”, Addison-Wesley, Reading, Mass., 1983.


For modern information carriers, for example on computer hard disks, CDs (“compact discs”) and DVDs (“digital video discs”), it is sought to increase the density of information. When such a carrier is affected by a physical defect such as a scratch, a high number of information symbols may be rendered unreadable. This problem may nevertheless be remedied by using a very long code. However, as indicated above, the length n of the words in Reed-Solomon codes is less than the size q of the alphabet of the symbols. Consequently, if a Reed-Solomon code is desired having codewords of great length, high values of q must be envisaged, which leads to costly implementations in terms of calculation and storage in memory. Moreover, high values of q are sometimes ill-adapted to the technical application envisaged. For this reason, it has been sought to build codes which naturally provide words of greater length than Reed-Solomon codes without however requiring a longer alphabet.


In particular, so-called "algebraic geometric codes" or "Goppa geometric codes" have been proposed (see for example "Algebraic Geometric Codes" by J. H. van Lint, in "Coding Theory and Design Theory", IMA Volumes Math. Appl., vol. 21, pages 137 to 162, Springer-Verlag, Berlin, 1990). These codes are constructed from a set of n distinct pairs (x,y) of symbols belonging to a chosen Galois field Fq; this set of pairs constitutes the locating set of the algebraic geometric code. In general terms, there is an algebraic equation with two unknowns X and Y such that the pairs (x,y) of that locating set are all solutions of that algebraic equation. The values of x and y of these pairs may be considered as coordinates of "points" Pβ (where β=1, . . . , n) forming an "algebraic curve".


An important parameter of such a curve is its "genus" g. In the particular case where the curve is a simple straight line (the genus g is then zero), the algebraic geometric code reduces to a Reed-Solomon code. For given q and g, certain algebraic curves, termed "maximum", make it possible to achieve a length equal to (q+2g√q), which may be very high; for example, with an alphabet size of 256 and a genus equal to 120, codewords are obtained of length 4096.


Among all the algebraic geometric codes, those usually considered are the ones which are defined on an algebraic curve represented by an equation F(X,Y)=0, with:






F(X,Y) = X^b + cY^a + Σ c_ij Y^j X^i,


where c≠0 and the cij are elements of Fq, a and b are strictly positive mutually prime integers, and where the sum only applies to the integers i and j which satisfy ai+bj<ab. An equation of this form will be referred to as “of type C(a,b)”, and the codes defined on a curve of type C(a,b) will be referred to as “codes of type C(a,b)”. The genus g of a curve of type C(a,b) is equal to (a−1)(b−1)/2. Moreover, in the following portion of the present text, and without losing generality (given that the names “X” and “Y” of the unknowns may be exchanged), it will be assumed that a<b.


With every monomial of the form Y^j X^i, where i and j are positive integers or zero, a "weight" is conventionally associated, which is equal by definition to (ai+bj). It can be shown that the monomials h_α = Y^j X^i of weight ρ_α ≦ m, where α=1, . . . , n−k, and j is an integer between 0 and (a−1), constitute a basis of the vector space L(mP∞) of the polynomials in X and Y with coefficients in Fq of which solely the poles are situated at infinity and are of order less than or equal to m, where 0<m<n. It can also be shown that this vector space is of dimension greater than or equal to (m−g+1) (equal if m>2g−2).
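
As a small illustration of this basis, the sketch below enumerates the monomials Y^j X^i of weight at most m for a C(a,b) curve; the values used at the end (a=4, b=5, m=43, hence g=6) are those of the numerical example given further on and are used here only as an assumption.

    # Illustrative sketch: enumerate the basis monomials Y^j X^i of weight ai + bj <= m,
    # with 0 <= j <= a - 1, for a curve of type C(a,b).
    def basis_monomials(a, b, m):
        """Return the (i, j) exponent pairs, sorted by increasing weight ai + bj."""
        mons = [(i, j) for j in range(a) for i in range(max(0, (m - b * j) // a + 1))]
        return sorted(mons, key=lambda ij: a * ij[0] + b * ij[1])

    a, b, m = 4, 5, 43                 # parameters of the numerical example further on
    g = (a - 1) * (b - 1) // 2         # genus of a C(a,b) curve
    mons = basis_monomials(a, b, m)
    print(len(mons), m - g + 1)        # 38 monomials, equal to m - g + 1 (since m > 2g - 2)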


For any code of type C(a,b), a parity-check matrix H is conventionally defined in the following manner: the element in line α and column β of the parity-check matrix is equal to the monomial hα evaluated at the point Pβ (where, it may be recalled, β=1, . . . , n) of the algebraic curve. Each point Pβ then serves to identify the βth component of any codeword. A code having such a parity-check matrix is termed a “one-point” code since its parity-check matrix is obtained by evaluating (at the n points Pβ) functions (the monomials hα) which have poles only at a single point, i.e. the point at infinity.


The codewords c are such that H·cT=0 and it is possible on account of this fact in particular to designate a component of a codeword c (and also of a received word) by referring to the corresponding point Pβ in the locating set (in general itself expressed by its coordinates x, y). It can thus be said that the components of the word are “labeled” by means of the points of the locating set.


Algebraic geometric codes are advantageous as to their minimum distance d, which is at least equal to (n−k+1−g), and, as has been said, as to the length of the codewords, but they have the drawback of requiring decoding algorithms that are rather complex, and thus rather expensive in terms of equipment (software and/or hardware) and processing time. This complexity is in fact greater or lesser according to the algorithm considered, a greater complexity being in principle the price to pay for increasing the error correction capability of the decoder (see for example the article by Tom Høholdt and Ruud Pellikaan entitled “On the Decoding of Algebraic-Geometric Codes”, IEEE Trans. Inform. Theory, vol. 41 no. 6, pages 1589 to 1614, November 1995). Generally, the higher the genus g of the algebraic curve used, the greater the length of the codewords, but also the greater the complexity of the decoding.


It is sometimes useful to shorten an algebraic geometric code. In particular, it is quite common to delete a point from the locating set, or several points, of which the x coordinate is zero.


Various error locating algorithms are known for algebraic geometric codes (defined on a curve of non-zero genus).


Such an algorithm, termed the "basic" algorithm, has been proposed by A. N. Skorobogatov and S. G. Vlăduţ in the article entitled "On the Decoding of Algebraic-Geometric Codes", IEEE Trans. Inform. Theory, vol. 36, no. 5, pages 1051 to 1060, November 1990. This algorithm uses a "syndromes matrix" S, of dimension (n−k)×(n−k), of which each coefficient Sij, where j is less than or equal to a "boundary" value w(i), is equal to a judiciously chosen linear combination of the elements sv (v=1, 2, . . . , n−k) of the syndrome s, the coefficients Sij beyond the boundary remaining indeterminate. The basic algorithm makes it possible to construct, on the basis of the syndromes matrix S, an "error locating polynomial" Λ(x,y), of which the zeros comprise all the pairs (x,y) labeling the positions of the received word for which the component in that position has suffered a transmission error.


Skorobogatov and Vlăduţ have also proposed, in the same article cited above, a "modified" version of the "basic" algorithm, which generally enables a higher number of errors to be corrected than the "basic" algorithm.


Algorithms are also known which operate using an iterative principle: each new iteration of such an algorithm invokes a supplementary component of the error syndromes vector s=H·rT.


An example of such an iterative decoding algorithm is disclosed in the article by S. Sakata et al. entitled "Generalized Berlekamp-Massey Decoding of Algebraic-Geometric Codes up to Half the Feng-Rao Bound" (IEEE Trans. Inform. Theory, vol. 41, pages 1762 to 1768, November 1995). This algorithm can be viewed as a generalization of the Berlekamp-Massey algorithm to algebraic geometric codes defined on a curve of non-zero genus.


Another example of an iterative decoding algorithm for algebraic geometric codes has been disclosed in the article by M. E. O'Sullivan entitled “Decoding of Codes Defined by a Single Point on a Curve” (IEEE Trans. Inform. Theory, vol. 41, pages 1709 to 1719).


For any given received word r, it is possible to determine error locating polynomials of which the zeros comprise all the pairs (x,y) labeling the erroneous components of that received word. The set of the error locating polynomials has the structure of an ideal termed a “Gröbner ideal” (associated with the transmission errors affecting that word). It is possible to generate the Gröbner ideal by means of a finite set of f polynomials, where f≦a, which constitutes a “Gröbner basis” of the ideal. As the pairs (x,y) labeling the erroneous components of a received word of course all satisfy the equation F(X,Y)=0 of the algebraic curve, the polynomial F(X,Y) forms part of the Gröbner ideal, and it is consequently possible to reduce modulo F(X,Y) the f polynomials Gφ(X,Y) of any given Gröbner basis; the result of this is that there is always a Gröbner basis of which all the elements have a degree in Y less than a.


It is possible to obtain a Gröbner basis G={Gφ(X,Y)|φ=1, . . . f} from a matrix S*, of size n×n, obtained by “extending” the matrix S (in other words, the elements of S and those of S* are identical for j≦w(i) with i≦n−k). This extension is possible each time the number of errors in the received word is less than or equal to (n−k−g)/2.


Thus, when the number of errors in the received word is less than or equal to (n−k−g)/2, it is in general necessary, in order to be able to locate those errors, to know more elements of the syndromes matrix than the elements which we will qualify as "known" due to the fact that they are equal to components of the error syndromes vector s=H·r^T or to simple linear combinations of those components (see the numerical example described below). It is fortunately possible to calculate these elements of "unknown" value by a method comprising a certain number of "majority decisions", for example the algorithm known as the "Feng-Rao" algorithm (see the article by G.-L. Feng and T. R. N. Rao entitled "Decoding Algebraic-Geometric Codes up to the Designed Minimum Distance", IEEE Trans. Inform. Theory, vol. 39, no. 1, January 1993). The object of this algorithm is essentially to extend the matrix S by means of calculation steps having the role of successive iterations. A number of iterations equal to a certain number g′, where g′ is at most equal to 2g, is necessary in order to reach the state where, as explained above, it becomes possible to calculate a Gröbner basis from the "extended" syndromes matrix S* so obtained. At this stage, it is also possible to calculate additional "unknown" elements of the matrix S* from elements obtained previously, either by means of new iterations of a "majority decisions" algorithm, or more conveniently by means of a certain number of relationships, known as "recursion" relationships, using "feedback polynomials" chosen from the Gröbner basis. In relation to this, reference can be made to the article by Sakata et al. cited above.


In the context of the present invention, it will be said that the elements of the syndromes matrix S* (“known” or “unknown”) are “extended error syndromes”.


Moreover, various algorithms are known for algebraic geometric codes making it possible to calculate the corrected value of the erroneous components of the received word; in other words, these algorithms are adapted to provide an estimated value ê of the transmission error e suffered by the transmitted codeword.


The calculation of errors for algebraic geometric codes is prima facie more complicated than for Reed-Solomon codes. This is because:


the error locating sub-step not only produces one error locating polynomial (such as Λ(Z) above), but several polynomials, which form a Gröbner basis of the ideal of the error locating polynomials;


these error locating polynomials are polynomials with two variables instead of one; and


these error locating polynomials have partial derivatives with respect to those two variables, such that the conventional correction algorithms (such as the algorithm of Forney mentioned above), which involve a single derivative, are no longer applicable.


Various error value calculating algorithms are known for algebraic geometric codes.


The article “Algebraic Geometry Codes”, by Tom Høholdt, Jacobus Van Lint and Ruud Pellikaan (Chapter 10 of the “Handbook of Coding Theory”, North Holland, 1998) constructs the product of certain powers of the polynomials of the Gröbner basis. It then performs a linear combination of those products, allocated with appropriate coefficients. Finally it shows that the value of the polynomial so obtained, taken at the point (x,y) of the locating set, is, with the sign being the only possible difference, the value of the error for the component of the received word labeled by that point (x,y).


The article “A Generalized Forney Formula for Algebraic Geometric Codes” by Douglas A. Leonard (IEEE Trans. Inform. Theory, vol. 42, no. 4, pages 1263 to 1268, July 1996) calculates the values of the errors by evaluating a polynomial with two variables at the points of which the coordinates are the common zeros of the error locating polynomials. The article “A Key Equation and the Computation of Error Values for Codes from Order Domains” by John B. Little (published on the Internet on Apr. 7, 2003) calculates the values of the errors by evaluating two polynomials with a single variable at the same points as earlier.


These three algorithms are complex to implement.


European patent application EP-1 434 132 in the name of CANON describes a decoding method applicable in particular to the one point algebraic geometric codes described above defined on an algebraic curve of type C(a,b). This decoding method performs both the location and the correction of errors. It will now be described in some detail.


This decoding method relies on the subdivision of the locating set of the code into subsets termed “aggregates”. By definition, an “aggregate” groups together the pairs (x,y) of the locating set sharing the same value of x, when a<b is taken (still on the assumption that a<b, the aggregates could be defined as grouping together the pairs (x,y) of the locating set sharing the same value of y, but the first definition will be held to in what follows). When it is desired to emphasize this aggregate structure, the pairs of the locating set will be denoted (x,yp(x)), where p=1, . . . , λ(x) and λ(x) is the cardinal of the aggregate considered, and the components of any word c of length n will be denoted c(x,yp(x)); it will be said that the components of c which, labeled in this manner, possess the same value of x form an “aggregate of components” of the word c; in particular, when it is a received word, it will be said that an aggregate associated with a value x of X is “erroneous” when there exists at least one point (x,y) of the locating set of the code such that the component of said received word labeled by that point is erroneous.
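
As a purely illustrative sketch, partitioning the locating set into aggregates amounts to grouping its points by their x coordinate; the toy locating set used below is a hypothetical placeholder.

    # Illustrative sketch: group the points (x, y) of a locating set into aggregates.
    from collections import defaultdict

    def aggregates(locating_set):
        """Return a dict x -> [y_1(x), ..., y_lambda(x)] (the aggregate labeled by x)."""
        agg = defaultdict(list)
        for x, y in locating_set:
            agg[x].append(y)
        return dict(agg)

    points = [(1, 2), (1, 5), (3, 4), (3, 7), (3, 9)]      # hypothetical toy locating set
    agg = aggregates(points)
    lambda_max = max(len(ys) for ys in agg.values())       # the largest aggregate cardinal
    print(agg, lambda_max)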


Let m be the maximum weight of the monomials defining the lines of the parity-check matrix (see above). According to application EP-1 434 132, these monomials are classified in sets of monomials






M_j = {Y^j X^i | 0 ≦ i ≦ (m − bj)/a}


for 0≦j≦jmax, where jmax<a. The cardinal of this set Mj is thus:






t(j)=1+INT[(m−bj)/a].


Let x1, . . . , xμ denote the different values of x in the locating set, and







v = [v(x_1, y_1(x_1)), . . . , v(x_1, y_λ(x_1)(x_1)), . . . , v(x_μ, y_λ(x_μ)(x_μ))],


denote any particular codeword. For each aggregate attached to one of the values x1, x2, . . . , xμ of x, there are constructed (jmax+1) “aggregate symbols”








v_j(x) = Σ_{p=1}^{λ(x)} [y_p(x)]^j v(x, y_p(x))

for j=0, . . . , jmax. These aggregate symbols serve to form (jmax+1) “aggregate words”







v_j = [v_j(x_1), . . . , v_j(x_μ)],


of length μ.
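
For illustration, the aggregate symbols defined above can be computed as in the following sketch (GF(16) conventions repeated from the earlier sketches; the word and aggregate used at the end are hypothetical toy values).

    # Illustrative sketch: aggregate symbols v_j(x) = sum over p of [y_p(x)]^j v(x, y_p(x)).
    EXP, LOG = [0] * 30, {}
    t = 1
    for i in range(15):
        EXP[i] = EXP[i + 15] = t
        LOG[t] = i
        t = (t << 1) ^ (0b10011 if t & 0b1000 else 0)
    gf_mul = lambda a, b: 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]
    gf_pow = lambda a, n: 1 if n == 0 else (0 if a == 0 else EXP[(LOG[a] * n) % 15])

    def aggregate_symbol(word, ys, x, j):
        """word: dict (x, y) -> component; ys: the values y_p(x) of the aggregate labeled by x."""
        acc = 0
        for y in ys:
            acc ^= gf_mul(gf_pow(y, j), word[(x, y)])
        return acc

    # hypothetical usage: one aggregate of cardinal 2
    word = {(2, 3): 7, (2, 5): 9}
    print([aggregate_symbol(word, [3, 5], 2, j) for j in range(2)])    # [v_0(2), v_1(2)]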


It is easily verified that the condition of belonging to the algebraic geometric code (i.e. H·vT=0) is equivalent to the set of (jmax+1) equations:






H_{t(j)} · v_j^T = 0,

where the function t(j) is given above and where, by definition,

H_t = [ 1           1           . . .   1
        x_1         x_2         . . .   x_μ
        . . .       . . .               . . .
        x_1^{t−1}   x_2^{t−1}   . . .   x_μ^{t−1} ].





However, this matrix Ht is a Vandermonde matrix defined over Fq; consequently, if, for each value of j, it is considered that Ht(j) is a parity-check matrix defining a set of codewords vj, that set constitutes a Reed-Solomon code. It is then said that the algebraic geometric code considered has been “broken down” into a certain number of “component” Reed-Solomon codes.


The advantage of this formulation is that it makes it possible to use decoding algorithms for Reed-Solomon codes, which are very simple and have a very high level of performance, at least for certain types of channels. For example, if a word r has been received, calculation is first made, for j=0, . . . , jmax, of the "aggregate received words"







r_j = [r_j(x_1), . . . , r_j(x_μ)],


in which, for x=x1, . . . , xμ, the “aggregate received symbols” rj(x) are given by











r_j(x) = Σ_{p=1}^{λ(x)} [y_p(x)]^j r(x, y_p(x)).   (1)







Next, the Berlekamp-Massey algorithm is used for locating the erroneous symbols r_j(x) of each word r_j. Except for accidental compensation (should the case arise) in equation (1) for a certain value of j, the aggregate received symbols r_j(x) associated with an "erroneous aggregate" (see definition above) will clearly themselves also be erroneous.


Next, the Forney algorithm is implemented for the correction of those erroneous symbols, according to the error syndromes vector s_j = H_{t(j)}·r_j^T. Finally, calculation is made of the symbols v̂(x,y_p(x)) of the associated codeword on the basis of the corrected symbols r̂_j(x) using the system of equations












r̂_j(x) = Σ_{p=1}^{λ(x)} [y_p(x)]^j v̂(x, y_p(x))   (2)







where j takes a number of different values (the number of equations) at least equal to λ(x) (the number of unknowns). This decoding method thus requires that (jmax+1) be at least equal to λmax, where λmax is the greatest among the aggregate cardinals λ(x). In what follows it will be accepted that the same condition is satisfied.


With respect to the known error correction algorithms generally applicable to the algebraic geometric codes considered, the saving in terms of complexity resulting from the implementation of the method according to application EP-1 434 132 is significant, despite the necessity to implement an error correction algorithm for Reed-Solomon code (for example the Forney algorithm) λmax times, and to solve for each erroneous aggregate labeled by a value x of X a system of equations (2); it will be noted in this connection that the number of equations in each of these systems is at most equal to a (the exponent of Y in the equation representing the algebraic curve), since the size of any aggregate (that is to say the number of solutions, for fixed x, of the equation representing the algebraic curve) is at most equal to a.


It will furthermore be noted that the system of equations (2) is a non-singular Vandermonde system: it thus always possesses one, and only one, solution; moreover, as is well known to the person skilled in the art, the solution of this type of system of linear equations is, advantageously, particularly simple.


It may furthermore be noted that the known methods for calculating the values of the errors use the syndromes σ_j(i)=(Y^jX^i, e), 0≦i, 0≦j≦a−1, where e is the error in the received word, Y^jX^i is the word obtained by evaluation of the monomial Y^jX^i at the points of the curve used to define the code, and (u,v) represents the scalar product of the words u and v. For 0≦i≦t(j)−1, these syndromes are directly obtained as components of H·r^T, where H is the parity-check matrix of the algebraic geometric code and r is the received word. For t(j)≦i, these syndromes remain properly calculable as soon as the weight of the error is ≦(n−k−g)/2. In this connection reference may be made to the article by Sakata et al. cited above. Below, the vector of the σ_j(i) placed in order of increasing i will be designated by σ_j.


As mentioned above, the method described in the document EP-1 434 132 is designed to be implemented once the erroneous aggregates of a received word have been located. However, recourse to a preliminary locating step has the following two drawbacks:


the implementation of the method is complicated by the fact that it must include parameters that are adjustable during the individual decoding of each received word, depending on the position of the (possible) erroneous aggregates detected during location, and


the locating step in itself implies an operational cost.


The invention provides a method of decoding a one-point algebraic geometric code defined on an algebraic curve of type C(a,b) represented by an equation F(X,Y)=0 of degree b in X and of degree a in Y over Fq, characterized in that it comprises the following steps:


calculating extended error syndromes σj(i) associated with a received word r;


determining the values of errors in each component r(x,yp(x)) of the received word r, on the basis of the extended error syndromes calculated.


Since the error value is determined for each component, it is not necessary to have recourse to the error locating step.


For example, when the algebraic curve is partitioned into aggregates, the step of determining the values of errors in each component on the basis of the extended error syndromes comprises the following steps for each aggregate:


calculating, on the basis of the extended error syndromes, compound error values each representing a linear combination of the errors in the components of the aggregate;


determining that the errors in the components of the aggregate are zero if all said compound error values are zero;


calculating the errors in the components of the aggregate if at least one of said compound error values is not zero.


Thus the authors of the present invention realized that it is possible to consider that the estimated errors ê(x,yp(x)) in each component represent a “zero error” when the corresponding component r(x,yp(x)) is correct; hence, by using a number of equations at least equal to the size of the aggregate, it is possible to obtain all the values of the error estimations ê(x,yp(x)), whether they be zero or not. The inventors thus wondered what, in practice, would be the cost associated with the decision to process all the aggregates in advance as being potentially erroneous, which decision makes it possible not to have to distinguish between the aggregates containing errors and those which do not. In this case, as usual, the Gröbner basis of the locator polynomials would be calculated, but the trouble would not be taken to explicitly seek which are the points of the curve that are zeros common to those locators.


They then realized that this cost was in fact rather minimal, all the more so since, in certain conditions, the utility of this locating step is particularly limited: these are conditions where all, or almost all, the aggregates are erroneous, for example when the channel considered is sufficiently noisy with respect to the size of the aggregates, since the probability that a given aggregate, labeled by a value x of X, comprises at least one erroneous component increases with the cardinal λ(x) of that aggregate.


The calculation of the extended error syndromes σj(i) is in practice performed for j=0, . . . , j0−1, where j0 is at least equal to the maximum value λmax of the cardinals λ(x) of the aggregates of the code. Sufficient equations are thus obtained to enable the possible calculation of the errors in each aggregate.


From a practical point of view, it is worthwhile deleting the pairs (x,y) for which x is zero from the locating set of the algebraic geometric code.


With this restriction, determination of the values of errors for example comprises the following steps:


calculating τ_j=σ_j·V^{−1} for j=0, . . . , a−1, where V^{−1} is the matrix of dimensions (q−1)×(q−1) of which the element in position (i,j), 1≦i,j≦q−1, is α^{−(i−1)(j−1)}, α being a primitive element of Fq;


for all h such that there is at least one j for which the (h+1)th component τ_j(h) of τ_j is non-zero, calculating the errors ê(x,y_p(x)) in the components of the aggregate defined by x=α^h in the received word r by:







[ê(α^h, y_1(α^h)), ê(α^h, y_2(α^h)), . . . , ê(α^h, y_{λ(α^h)}(α^h))]^T =

[ 1                       . . .   1
  y_1(α^h)                . . .   y_{λ(α^h)}(α^h)
  . . .                           . . .
  [y_1(α^h)]^{λ(α^h)−1}   . . .   [y_{λ(α^h)}(α^h)]^{λ(α^h)−1} ]^{−1} · [τ_0(h), τ_1(h), . . . , τ_{λ(α^h)−1}(h)]^T






In a possible implementation, for each j, the calculation of the elements of the vector τj is performed by a circuit comprising a shift register and receiving as input the elements of the vector σj.


According to another embodiment, determining the values of errors may comprise the following steps:


implementing, for j=0, . . . , λmax−1, by means of the error syndromes polynomial









S_j(Z) = Σ_{i=0}^{q−2} σ_j(i) Z^i,




an error correction algorithm adapted to Reed-Solomon codes, so as to calculate the error Ej(x) in each component labeled by the element x of Fq of a word of Reed-Solomon code defined over Fq, and


calculating the estimations ê(x,yp(x)) of the respective errors in the components r(x,yp(x)) of r by solving the system of equations of type












E_j(x) = Σ_{p=1}^{λ(x)} [y_p(x)]^j ê(x, y_p(x)),   (3)







where equations are chosen that are associated with consecutive values of j at least equal in number to the number of components λ(x) of the aggregate considered.


The calculation of the estimations ê(x,y_p(x)) may then be performed only for the values of x such that there is at least one value of j for which E_j(x) is non-zero. Calculation of the estimations is thus limited to certain aggregates, and it is considered that the error values in the components of the other aggregates are zero.


According to a possible feature of implementation, if there is at least one value of x=αh associated with an erroneous aggregate of cardinal λ(x)<a, comparison is then made between the members of at least one equation









E_j(x) = Σ_{p=1}^{λ(x)} [y_p(x)]^j ê(x, y_p(x)),






where






λ(x) ≦ j ≦ a−1,




associated with said value of x, if need be after having calculated Ej(x) by means of an error correction algorithm adapted to Reed-Solomon codes.


By virtue of these provisions, it will be possible to detect a possible erroneous correction should said at least one equation prove not to be satisfied by the values of ê(x,y_p(x)) obtained previously.


According to a second aspect the invention concerns a device for correcting errors for a one-point algebraic geometric code defined on an algebraic curve of type C(a,b) represented by an equation F(X,Y)=0 of degree b in X and of degree a in Y over Fq, characterized in that it comprises:


means for calculating extended error syndromes σj(i) associated with a received word r;


means for determining the values of errors in each component r(x,yp(x)) of the received word r, on the basis of the extended error syndromes calculated.


When the algebraic curve is partitioned into aggregates, the means for determining the values of errors in each component on the basis of the extended error syndromes may comprise:


means for calculating, for each aggregate and on the basis of the extended error syndromes, compound error values τj(h), Ej(x) each representing a linear combination of the errors in the components of the aggregate;


means for determining that the errors in the components of an aggregate are zero if all said compound error values relative to that aggregate are zero;


means for calculating the errors in the components of an aggregate if at least one of said compound error values relative to that aggregate is not zero.


Such a device may also have features corresponding to those already presented for the decoding method and the corresponding advantages arising therefrom.


The invention also relates to:


a decoder comprising at least one error correction device as described succinctly above, and at least one redundancy removal device,


an apparatus for receiving encoded digital signals comprising a decoder as succinctly described above, and comprising means for demodulating said encoded digital signals,


a computer system comprising a decoder as succinctly described above, and further comprising at least one hard disk as well as at least one means for reading that hard disk,


a non-removable data storage means comprising computer program code instructions for the execution of the steps of any one of the decoding methods succinctly described above,


a partially or wholly removable data storage means comprising computer program code instructions for the execution of the steps of any one of the decoding methods succinctly described above, and


a computer program containing instructions such that, when said program controls a programmable data processing device, said instructions cause said data processing device to implement one of the decoding methods succinctly described above.


The advantages provided by this decoder, this reception apparatus, this computer system, these data storage means and this computer program are essentially the same as those provided by the error correction methods according to the invention.





Other aspects and advantages of the invention will emerge from a reading of the following detailed description of particular embodiments, given by way of non-limiting example. The description refers to the accompanying drawings, in which:



FIG. 1 is a block diagram of a system for transmitting information implementing a method according to the invention,



FIG. 2 represents an apparatus for receiving digital signals incorporating a decoder according to the invention, and



FIG. 3 represents a circuit element that can be used in an embodiment according to the invention.






FIG. 1 is a block diagram of a system for transmitting information implementing a method according to the invention.


The function of this system is to transmit information of any nature from a source 100 to a recipient or user 109. First of all, the source 100 transforms this information into a series of symbols belonging to a certain Galois field Fq (for example bytes of 8 bits for q=2^8), and transmits these symbols to a storage unit 101, which accumulates the symbols so as to form sets each containing k symbols. Next, each of these sets is transmitted by the storage unit 101 to an encoder 102 which incorporates redundancy therein, so as to construct a word of length n belonging to the chosen code.


The codewords so formed are next transmitted to a modulator 103, which associates a modulation symbol with each symbol of the codeword. Next, these modulation symbols are transmitted to a recorder (or a transmitter) 104, which inserts the symbols in a transmission channel. This channel may for example be storage on a suitable carrier such as a DVD, a magnetic disc or a magnetic tape. It may also correspond to a wired or wireless transmission, as is the case with a radio link.


The message transmitted arrives at a reader (or a receiver) 105, after having been affected by a “transmission noise” whose effect is to modify or erase some of the modulation symbols.


The reader (or receiver) 105 then transmits these symbols to the demodulator 106, which transforms them into symbols of Fq. The n symbols resulting from the transmission of the same codeword are next grouped together into a “received word” in an error correction unit 107, which implements a decoding method according to the invention, so as to provide an “associated codeword”. Next, this associated codeword is transmitted to a redundancy removal unit 108, which extracts from it k information symbols by implementing a decoding algorithm that is the reverse of that implemented by the encoder 102. Finally, these information symbols are supplied to their recipient 109.


Units 107 and 108 can be considered to form conjointly a “decoder” 10.


The decoding method according to the invention will now be illustrated with the aid of a numerical example. It should be noted that this example does not necessarily constitute a preferred choice of parameters for the encoding or decoding. It is only provided here to enable the person skilled in the art to understand more easily the operation of the method according to the invention.


The designation Q will be given to the “one-point” algebraic geometric code of dimension 22 and length 60, defined in the following manner.


The alphabet of the symbols is constituted by the Galois field F16. As the cardinal of this field is a power of 2 (16=2^4), the sign "+" is equivalent to the sign "−" before any coefficient of a polynomial with coefficients belonging to that field.


The following "algebraic curve" of genus g=6 is considered, constituted by the set of the solutions (X=x, Y=y) of the equation with two unknowns






Y^4 + Y + X^5 = 0   (4)


over F16. It is found that, on giving to X any particular value x in F16, there are each time 4 values y_p(x) (p=1, 2, 3, 4) in F16 such that the pair (x,y_p(x)) is a solution of equation (4); these solutions of equation (4) are the coordinates of the "finite points of the curve" (the curve also contains a point at infinity, denoted P∞). It is chosen to constitute the locating set by means of all these solutions except those where x=0; the locating set thus has a cardinal equal to 60, and it can be divided into 15 aggregates which each have a cardinal λ(x) equal to 4. Generally, the designation λmax is given to the maximum value among the cardinals λ(x) of the aggregates; here λmax=4. It is recalled that each point Pj of the locating set serves to identify the j-th element of any codeword; the number of such points being here equal to 60, the length n of the code is thus also equal to 60.
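
These cardinals are easily checked by brute force; the purely illustrative sketch below enumerates the solutions of Y^4+Y+X^5=0 over F16 (GF(16) conventions as in the earlier sketches) and verifies that deleting the points with x=0 leaves 60 points forming 15 aggregates of cardinal 4.

    # Illustrative sketch: enumerate the finite points of Y^4 + Y + X^5 = 0 over GF(16).
    EXP, LOG = [0] * 30, {}
    t = 1
    for i in range(15):
        EXP[i] = EXP[i + 15] = t
        LOG[t] = i
        t = (t << 1) ^ (0b10011 if t & 0b1000 else 0)
    gf_pow = lambda a, n: 1 if n == 0 else (0 if a == 0 else EXP[(LOG[a] * n) % 15])

    points = [(x, y) for x in range(16) for y in range(16)
              if gf_pow(y, 4) ^ y ^ gf_pow(x, 5) == 0]           # addition is XOR
    locating_set = [(x, y) for (x, y) in points if x != 0]       # delete the points with x = 0

    aggregates = {}
    for x, y in locating_set:
        aggregates.setdefault(x, []).append(y)

    print(len(locating_set))                                          # 60
    print(len(aggregates), {len(ys) for ys in aggregates.values()})   # 15 aggregates, each of cardinal 4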


Next, the vector space L(mP∞) is considered, of polynomials in X and Y with coefficients in F16 of which solely the poles are situated at P∞ and are of order less than or equal to m, where 0≦m≦65=n+g−1. This vector space, which is of dimension greater than or equal to (m−g+1) (equal if m>2g−2), has a base constituted by the monomials h_i=Y^tX^u, where t is an integer between 0 and 3, u is a positive integer or zero, 4u+5t≦m, and i=1, . . . , n−k. The quantity ρ(h_i)=4u+5t is usually called the "weight" of the monomial h_i.


Take for example: m=43; a set of monomials hi is then obtained where i=1, . . . , 38, since m−g+1=43−6+1=38.


The monomials hi may be classified into ordered subsets of monomials






M_t = {Y^t X^u | 0 ≦ u ≦ (43−5t)/4},


where: 0≦t≦3. These ordered subsets of monomials are explicitly:





M_0 = {1, X, X^2, . . . , X^10},

M_1 = {Y, YX, YX^2, . . . , YX^9},

M_2 = {Y^2, Y^2X, Y^2X^2, . . . , Y^2X^8}, and

M_3 = {Y^3, Y^3X, Y^3X^2, . . . , Y^3X^7}.


It is verified that the total number of monomials hi is indeed equal to: 11+10+9+8=38.


Finally, the parity-check matrix H of the code is defined in the following manner: the element in line i and column j of that matrix is equal to the value taken by the monomial hi at the point Pj of the algebraic curve.


The redundancy (n−k) of the code Q being equal to 38, its dimension is k=60−38=22. The minimum distance d of this code is at least equal to n−k+1−g=33.


Let the received word be the word r of weight 16, given by the following table, where α is a primitive element of F16 satisfying α^4+α+1=0, x and y are the coordinates of the point concerned in the locating set, and r(x, y) is the component associated with (i.e. labeled by) that point. At the points (x,y) of the curve not included in the table, the received symbol r(x,y) has the value 0.

















x        y        r(x, y)
α        α^13     α^12
α^2      α^3      α^10
α^3      α        α^5
α^3      α^2      α^7
α^4      α^13     1
α^7      α^7      α^9
α^7      α^9      α^9
α^8      α^11     α^2
α^8      α^14     1
α^8      α^3      α^2
α^9      α        α^5
α^9      α^8      α^11
α^9      α^2      α^11
α^11     α^12     α^5
α^11     α^11     α^5
α^14     α^14     α^10










Use is then made of the "extended error syndromes" σ_j(i) = Y^jX^i|e, where e is the transmission error affecting the received word r, and Y^jX^i represents the word of which the components are equal to the value taken by the monomial Y^jX^i at the points of the locating set (the notation c1|c2 represents the scalar product of any two words c1 and c2 of length n). In passing, it will be noted that, due to the fact that α^{q−1}=1 for any non-zero element α of Fq, the extended error syndromes σ_j(i) are periodic in i, with a period equal to (q−1).


In the example set out here, given the received word r, it is possible to calculate the 38 syndromes corresponding to the 38 lines of the parity-check matrix H. Let σj (i) be the syndrome corresponding to the monomial YjXi. These syndromes are directly obtained for the non-negative values of i and j which satisfy 0≦i≦(43−5j)/4, using the parity-check matrix H. For the other useful values of i and j, use is made of a method of calculating the syndromes that are referred to as “unknown”. For the first unknown syndromes, this method implies majority decisions. For the other unknown syndromes, it may also be carried out by “majority” decisions which are then unanimous, or more simply by using each time one of the polynomials of the Gröbner basis produced by the decoding algorithm used. It may be recalled here that this basis of polynomials generates the ideal of the locator polynomials. In this connection reference may be made to the article by Sakata et al. mentioned above.


For 0≦j≦3, it is thus possible to calculate σj(i) for i=0, . . . , 14. By giving the designation σj to the vector of the 15 components σj(i) for i=0, . . . , 14, the four following vectors are obtained:






σ_0 = [α^2 α^8 α^2 α α^14 α^11 α^14 α^2 α^2 α^14 1 α^10 α^5 α^14 α^14],

σ_1 = [α^10 α^5 α^12 α α^8 α^13 α^8 α^8 α^11 α^10 α^11 1 α^10 α^3 α^13],

σ_2 = [α α^3 α^10 α^10 α^5 α^12 α^13 α^5 0 α^11 α^13 α^12 α^5 α^8 α^10],

σ_3 = [0 α^12 α^13 1 α^14 α^12 α^12 α^9 α^14 α^11 0 α^6 α^2].
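
Since the decoding carried out below ends with the all-zero codeword, the error pattern of this example coincides with the received word itself; as a purely illustrative check, the sketch below therefore recomputes the extended syndromes σ_j(i) = Y^jX^i|e directly from the table of r (GF(16) conventions as in the earlier sketches; the triples encode the exponents of α read from the table).

    # Illustrative verification: recompute sigma_j(i) = <Y^j X^i, e> for the example, using the
    # fact (established below) that the decoded word is the zero word, so that e = r here.
    EXP, LOG = [0] * 30, {}
    t = 1
    for i in range(15):
        EXP[i] = EXP[i + 15] = t
        LOG[t] = i
        t = (t << 1) ^ (0b10011 if t & 0b1000 else 0)
    gf_mul = lambda a, b: 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]
    gf_pow = lambda a, n: 1 if n == 0 else (0 if a == 0 else EXP[(LOG[a] * n) % 15])

    # (exponent of x, exponent of y, exponent of r(x, y)) for the 16 entries of the table above
    table = [(1, 13, 12), (2, 3, 10), (3, 1, 5), (3, 2, 7), (4, 13, 0), (7, 7, 9), (7, 9, 9),
             (8, 11, 2), (8, 14, 0), (8, 3, 2), (9, 1, 5), (9, 8, 11), (9, 2, 11),
             (11, 12, 5), (11, 11, 5), (14, 14, 10)]
    err = {(EXP[a], EXP[b]): EXP[c] for a, b, c in table}

    def sigma(j, i):
        acc = 0
        for (x, y), e in err.items():
            acc ^= gf_mul(gf_mul(gf_pow(y, j), gf_pow(x, i)), e)
        return acc

    # exponents of alpha of sigma_0(0..14); None marks a zero component; compare with sigma_0 above
    print([LOG.get(sigma(0, i)) for i in range(15)])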


On the basis of the syndromes so obtained, an error correction algorithm is implemented, for j=0, . . . , λmax−1 (i.e. for j going from 0 to 3 in the example described), that is adapted to Reed-Solomon codes, so as to calculate the errors Ej(x) in the components of a word of Reed-Solomon code, defined over the same Galois field as said algebraic geometric code, the components being labeled by the (q−1)=15 values of x each defining an aggregate. Such an error correction is for example performed by using the error syndromes polynomials:








S_j(Z) = Σ_{i=0}^{q−2} σ_j(i) Z^i.







It is noted that the error correction is performed on all the aggregates, without seeking to determine beforehand whether or not the aggregate concerned is an erroneous aggregate for example by means of an error locating algorithm.


In the above example, the matrix V is used, of dimensions 15×15, of which the element in position (i, j), 1≦i, j≦15, is α^{(i−1)(j−1)}, where α is a root of X^4+X+1. This is because it is possible to define the 15-tuples τ_j=σ_jV^{−1}, which are used profitably to calculate the estimated values ê(x, y_p(x)) of the errors. It may be noted that the inverse matrix V^{−1} of V is the matrix having α^{−(i−1)(j−1)} in position (i,j). In the example treated the following is thus obtained:






τ_0 = [0 α^12 α^10 α^13 1 0 0 0 1 α^5 0 0 0 0 α^10],

τ_1 = [0 α^10 α^13 α^5 α^13 0 0 α^9 α α 0 α^5 0 0 α^9],

τ_2 = [0 α^8 α α^8 α^11 0 0 α^9 α α^8 0 α^5 0 0 α^8],

τ_3 = [0 α^6 α^4 α^3 α^9 0 0 α^13 α^10 α^10 0 α^7 0 0 α^7].


It will be noted that the calculation of τj from σj may advantageously be performed using shift registers. Each column of V−1 is in fact a series of consecutive powers of the same element of Fq. For all fixed h, the calculation of the component τj(h) of τj is thus easily performed by the circuit of FIG. 3, where the element D represents a delay element. Advantageously, for all j, the q−1 components τj(h) of τj may then be calculated in parallel by circuits of the FIG. 3 type.
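
Purely for illustration, the component τ_j(h) is simply the value S_j(α^{−h}) of the error syndromes polynomial (this equivalence is also noted further on in the text); the sketch below computes it with a Horner-style accumulator, one possible software analogue of the delay element D of FIG. 3 (GF(16) conventions as in the earlier sketches).

    # Illustrative sketch: tau_j(h) = S_j(alpha^(-h)), computed by a Horner-style accumulator.
    EXP, LOG = [0] * 30, {}
    t = 1
    for i in range(15):
        EXP[i] = EXP[i + 15] = t
        LOG[t] = i
        t = (t << 1) ^ (0b10011 if t & 0b1000 else 0)
    gf_mul = lambda a, b: 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]

    def tau(sigma_j, h):
        """sigma_j: the 15 components sigma_j(0), ..., sigma_j(14); returns tau_j(h)."""
        z = EXP[(-h) % 15]                # alpha^(-h)
        acc = 0
        for s in reversed(sigma_j):       # one syndrome component per "clock tick"
            acc = gf_mul(acc, z) ^ s
        return acc

    # usage on sigma_0 of the example (components written as exponents of alpha, 1 = alpha^0)
    sigma_0 = [EXP[e] for e in [2, 8, 2, 1, 14, 11, 14, 2, 2, 14, 0, 10, 5, 14, 14]]
    tau_0 = [tau(sigma_0, h) for h in range(15)]
    print([LOG.get(v) for v in tau_0])    # exponents of alpha (None = 0); compare with tau_0 above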


Thus let T be the 4×15 matrix of which the four lines are the four 15-tuples τ_j. The knowledge of this matrix T is equivalent to that of the 60 values of E_j(α^h) since, giving the designation τ_j(h) to the element in position (j, 1+h) of T, we have:

τ_j(h) = E_j(α^h).


This leads to a simple calculation of the errors ê(α^h, y_p(α^h)) in the following manner. Let the designation t_h, h=0, . . . , 14, be given to the (h+1)th column of that matrix T, and for any such h, let there be constructed the Vandermonde matrix V_h of type 4×4 based on the four elements y_p(α^h), p=1, . . . , 4. Here is thus the form of these matrices V_h:







[ 1               1               1               1
  y_1(α^h)        y_2(α^h)        y_3(α^h)        y_4(α^h)
  [y_1(α^h)]^2    [y_2(α^h)]^2    [y_3(α^h)]^2    [y_4(α^h)]^2
  [y_1(α^h)]^3    [y_2(α^h)]^3    [y_3(α^h)]^3    [y_4(α^h)]^3 ].




The column vector

ê(α^h) = [ê(α^h, y_1(α^h)), ê(α^h, y_2(α^h)), ê(α^h, y_3(α^h)), ê(α^h, y_4(α^h))]^T

(where the circumflex accent indicates estimated errors and where T indicates the transposition) is then simply given by ê(α^h) = V_h^{−1}·t_h.
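
As a purely illustrative sketch of this last computation, the 4×4 system V_h·ê(α^h)=t_h can be solved by Gaussian elimination over F16; the usage lines take the aggregate labeled by x=α (h=1), whose y values and whose column t_1 of T are those of the example, and should yield a single non-zero error of value α^12 in the last component (GF(16) conventions as in the earlier sketches).

    # Illustrative sketch: solve V_h . e = t_h over GF(16) by Gauss-Jordan elimination.
    EXP, LOG = [0] * 30, {}
    t = 1
    for i in range(15):
        EXP[i] = EXP[i + 15] = t
        LOG[t] = i
        t = (t << 1) ^ (0b10011 if t & 0b1000 else 0)
    gf_mul = lambda a, b: 0 if 0 in (a, b) else EXP[LOG[a] + LOG[b]]
    gf_inv = lambda a: EXP[(15 - LOG[a]) % 15]
    gf_pow = lambda a, n: 1 if n == 0 else (0 if a == 0 else EXP[(LOG[a] * n) % 15])

    def solve(A, b):
        """Solve A.z = b over GF(16) (A square and non-singular)."""
        n = len(b)
        M = [row[:] + [bi] for row, bi in zip(A, b)]
        for c in range(n):
            p = next(r for r in range(c, n) if M[r][c])          # pivot row
            M[c], M[p] = M[p], M[c]
            inv = gf_inv(M[c][c])
            M[c] = [gf_mul(inv, m) for m in M[c]]
            for r in range(n):
                if r != c and M[r][c]:
                    f = M[r][c]
                    M[r] = [mr ^ gf_mul(f, mc) for mr, mc in zip(M[r], M[c])]
        return [M[r][n] for r in range(n)]

    # aggregate labeled by x = alpha (h = 1): y values alpha^6, alpha^7, alpha^9, alpha^13
    ys = [EXP[6], EXP[7], EXP[9], EXP[13]]
    V_h = [[gf_pow(y, j) for y in ys] for j in range(4)]         # row j contains the y_p^j
    t_h = [EXP[12], EXP[10], EXP[8], EXP[6]]                     # column of T for h = 1 (x = alpha)
    print([LOG.get(e) for e in solve(V_h, t_h)])                 # expected: [None, None, None, 12]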


Preferably, it can be decided not to actually perform the calculation of the errors in the aggregate indexed by α^h when the "transformed" errors E_j(α^h) are all zero. This is because, in this case, it is certain in advance that the errors in the aggregate indexed by α^h will also be zero.


In summary, the four lines τ_j of the matrix T are calculated by multiplying the four vectors of syndromes σ_j (completed by the syndrome components referred to as "unknown") by V^{−1}, and the errors in the aggregate indexed by α^h are determined by multiplying the corresponding column of T by the inverse of the Vandermonde matrix based on the four values y_p(α^h), p=1, . . . , 4, of Y such that (α^h, y_p(α^h)) is a point of the curve Y^4+Y+X^5=0 over F16.


It is noted that this determination of the errors ê(α^h, y_p(α^h)) is performed directly, without the aggregates containing at least one error having first been explicitly located.


The passage to a more general case does not give rise to difficulties. It is still possible to calculate the a vectors σ_j of length q−1, and the "transformed" components E_j(x) of the errors are the components of the vector σ_jV^{−1}, where V is the Vandermonde matrix based on the non-zero elements of Fq. If, for given x, all the E_j(x) are zero, it is naturally considered that the components of the received word r in the aggregate of index x are correct. On the contrary, if at fixed x there is at least one component E_j(x) which is non-zero, it is considered that the aggregate of index x is erroneous and it is sought to determine the errors affecting each component r(x, y_p(x)) of the received word in that aggregate.


For this purpose, calculation is made for each such value of x of the estimations ê(x, yp(x)) of the respective errors in the components r(x, yp(x)) of the received word by solving the equation system








E_j(x) = Σ_{p=1}^{λ(x)} [y_p(x)]^j ê(x, y_p(x)),








in which equations have been chosen that are associated with consecutive values of j and are at least equal in number to the number λ(x) of components of the aggregate indexed by x. This system contains a non-singular Vandermonde system of small size of which the vector solution formed by the components of the error in the aggregate indexed by x=αh for α primitive in Fq is given by:







[ê(α^h, y_1(α^h)), ê(α^h, y_2(α^h)), . . . , ê(α^h, y_{λ(α^h)}(α^h))]^T =

[ 1                       . . .   1
  y_1(α^h)                . . .   y_{λ(α^h)}(α^h)
  . . .                           . . .
  [y_1(α^h)]^{λ(α^h)−1}   . . .   [y_{λ(α^h)}(α^h)]^{λ(α^h)−1} ]^{−1} · [τ_{0,h}, τ_{1,h}, . . . , τ_{λ(α^h)−1,h}]^T

= θ^{−1} · [τ_{0,h}, τ_{1,h}, . . . , τ_{λ(α^h)−1,h}]^T.







Returning to the example described here in detail, it may be noted that, for fixed non-zero x in F16, the set of the four solutions in Y of the equation Y^4+Y+x^5=0 is of one of three different types:





[α α^2 α^4 α^8] for x = 1, α^3, α^6, α^9, α^12,

[α^6 α^7 α^9 α^13] for x = α, α^4, α^7, α^10, α^13,

[α^3 α^11 α^12 α^14] for x = α^2, α^5, α^8, α^11, α^14.


For example, the Vandermonde matrix θ constructed on [α^6 α^7 α^9 α^13] (usable for the aggregates corresponding to x=α, α^4, α^7, α^10, α^13) is:






[ 1       1       1       1
  α^6     α^7     α^9     α^13
  α^12    α^14    α^3     α^11
  α^3     α^6     α^12    α^9  ]




and its inverse θ^{−1} is:







[ α^14    α^12    α^6     1
  α^13    α^14    α^7     1
  α^11    α^3     α^9     1
  α^7     α^11    α^13    1 ].




By multiplying this inverse matrix by the second column of the matrix T, corresponding to x=α, the vector [0 0 0 α^12]^T is obtained;

By multiplying this inverse matrix by the fifth column of the matrix T, corresponding to x=α^4, the vector [0 0 0 1]^T is obtained;

By multiplying this inverse matrix by the eighth column of the matrix T, corresponding to x=α^7, the vector [0 α^9 α^9 0]^T is obtained;

By multiplying this inverse matrix by the eleventh column of the matrix T, corresponding to x=α^10, the vector [0 0 0 0]^T is obtained;

By multiplying this inverse matrix by the fourteenth column of the matrix T, corresponding to x=α^13, the vector [0 0 0 0]^T is obtained.


In the aggregate indexed by x=α, there is thus a single non-zero error. It corresponds to y=α^13 and the value of that error is α^12;

in the aggregate indexed by x=α^4, there is thus a single non-zero error. It corresponds to y=α^13 and the value of that error is 1;

in the aggregate indexed by x=α^7, there are thus two non-zero errors, corresponding to y=α^7 and to y=α^9, and the value of both these errors is α^9;

in the aggregate indexed by x=α^10, all the errors are zero;

in the aggregate indexed by x=α^13, all the errors are zero.


When these estimated values of error are subtracted from the corresponding components of the received word, the zero symbol is obtained everywhere.


Similar calculations may be carried out for the two other classes of aggregates, corresponding respectively to x=1, α3, α6, α9, α12 and to x=α2, α5, α8, α11, α14. In all cases, by subtracting the estimated values of error from the corresponding components of the received word, the zero symbol is obtained everywhere. The decoded word is thus the zero word.
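As a side check of this example, the sketch below rebuilds θ on [α6 α7 α9 α13], multiplies it by the inverse displayed above and verifies that the product is the identity over F16; the representation α4=α+1 of F16 is again an assumption made for the illustration.

```python
# Sketch only: rebuilds theta, multiplies it by the displayed inverse and
# checks the product is the identity over F16 (alpha^4 = alpha + 1 assumed).

EXP = [1]
for _ in range(14):
    v = EXP[-1] << 1
    if v & 0x10:
        v ^= 0x13                       # X^4 + X + 1
    EXP.append(v)
LOG = {v: i for i, v in enumerate(EXP)}

def gmul(a, b):
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % 15]

A = lambda e: EXP[e % 15]               # shorthand for alpha^e

theta = [[1,     1,     1,     1    ],
         [A(6),  A(7),  A(9),  A(13)],
         [A(12), A(14), A(3),  A(11)],
         [A(3),  A(6),  A(12), A(9) ]]

theta_inv = [[A(14), A(12), A(6),  1],
             [A(13), A(14), A(7),  1],
             [A(11), A(3),  A(9),  1],
             [A(7),  A(11), A(13), 1]]

for i in range(4):
    for j in range(4):
        s = 0
        for k in range(4):
            s ^= gmul(theta[i][k], theta_inv[k][j])
        assert s == (1 if i == j else 0)
print("theta . theta^-1 = I over F16")
```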


In the general case, the aggregates do not necessarily all have the same size. This is the case if the code, although constructed on a maximum curve of type C(a, b), that is to say one having aq points over Fq, is shortened. This is also the case, more fundamentally, if, for the curve of type C(a, b) given by the equation F(X, Y)=0, there exist distinct values x1 and x2 in Fq for which the number of solutions in Y (in Fq) of F(x1, Y)=0 differs from that of F(x2, Y)=0.


For example, with a=4, suppose there are available the four polynomials









$$
S_j(Z) = \sum_{i=0}^{q-2} \sigma_j(i)\, Z^{i}, \qquad j = 0, \ldots, 3,
$$




from which can be calculated the four vectors τj=σjV−1, where σj is the vector of the σj(i). In an equivalent manner, τj is the vector having in position h, 0≦h≦q−2, the value of the polynomial Sj(Z) evaluated at Z=α−h, with α primitive in Fq.
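The equivalence just stated can be illustrated as follows; the coefficients σj(i) used are arbitrary demonstration values, the helper names are not taken from the patent, and F16 is assumed to be represented by α4=α+1.

```python
# Sketch only: checks that evaluating S_j(Z) at Z = alpha^(-h) gives the same
# vector as sigma_j . V^(-1).  Demonstration coefficients; F16 assumed to be
# represented by alpha^4 = alpha + 1.

EXP = [1]
for _ in range(14):
    v = EXP[-1] << 1
    if v & 0x10:
        v ^= 0x13                       # X^4 + X + 1
    EXP.append(v)
LOG = {v: i for i, v in enumerate(EXP)}

def gmul(a, b):
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % 15]

def horner(coeffs, z):
    """Evaluate sum_i coeffs[i] * Z^i at Z = z by Horner's rule."""
    acc = 0
    for c in reversed(coeffs):
        acc = gmul(acc, z) ^ c
    return acc

sigma_j = [EXP[(3 * i + 1) % 15] for i in range(15)]    # demo values of sigma_j(i)

# tau_j obtained as sigma_j . V^(-1), the element (i, h) of V^(-1) being alpha^(-ih)
tau_matrix = []
for h in range(15):
    s = 0
    for i in range(15):
        s ^= gmul(sigma_j[i], EXP[(-i * h) % 15])
    tau_matrix.append(s)

# tau_j obtained by evaluating S_j(Z) at Z = alpha^(-h)
tau_poly = [horner(sigma_j, EXP[(-h) % 15]) for h in range(15)]

assert tau_poly == tau_matrix
```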


If, for example, the case is taken in which the aggregate corresponding to X=x contains only three points, denoted (x, y1(x)), (x, y2(x)) and (x, y3(x)), there are four equations in the three unknowns ê(x, ys(x)), s=1, 2, 3:






$$
\begin{aligned}
\hat{e}(x,y_1(x)) + \hat{e}(x,y_2(x)) + \hat{e}(x,y_3(x)) &= S_0(x^{-1}) = \tau_0(x),\\
y_1(x)\,\hat{e}(x,y_1(x)) + y_2(x)\,\hat{e}(x,y_2(x)) + y_3(x)\,\hat{e}(x,y_3(x)) &= S_1(x^{-1}) = \tau_1(x),\\
\bigl(y_1(x)\bigr)^{2}\hat{e}(x,y_1(x)) + \bigl(y_2(x)\bigr)^{2}\hat{e}(x,y_2(x)) + \bigl(y_3(x)\bigr)^{2}\hat{e}(x,y_3(x)) &= S_2(x^{-1}) = \tau_2(x),\\
\bigl(y_1(x)\bigr)^{3}\hat{e}(x,y_1(x)) + \bigl(y_2(x)\bigr)^{3}\hat{e}(x,y_2(x)) + \bigl(y_3(x)\bigr)^{3}\hat{e}(x,y_3(x)) &= S_3(x^{-1}) = \tau_3(x),
\end{aligned}
$$


that are written as







$$
\begin{bmatrix} \tau_0\\ \tau_1\\ \tau_2\\ \tau_3 \end{bmatrix}
=
\begin{bmatrix}
1 & 1 & 1\\
y_1 & y_2 & y_3\\
y_1^{2} & y_2^{2} & y_3^{2}\\
y_1^{3} & y_2^{3} & y_3^{3}
\end{bmatrix}
\begin{bmatrix} e_1\\ e_2\\ e_3 \end{bmatrix}
$$






where the arguments x and yi(x) have been omitted from the elements of the matrix and of the two vectors. In such a case, if no yi(x) is zero, it is possible to choose either the first three or the last three of the four equations given above in matrix form. For example, if the last three are taken, the three components ei of the error in the aggregate indexed by x are calculated by solving the system







$$
\begin{bmatrix} \tau_1\\ \tau_2\\ \tau_3 \end{bmatrix}
=
\begin{bmatrix}
y_1 & y_2 & y_3\\
y_1^{2} & y_2^{2} & y_3^{2}\\
y_1^{3} & y_2^{3} & y_3^{3}
\end{bmatrix}
\begin{bmatrix} e_1\\ e_2\\ e_3 \end{bmatrix}.
$$





The first of the four equations can then be used to verify the consistency of the values ei so calculated.
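A possible way of organizing this computation and the consistency check is sketched below; the point coordinates and error values are demonstration data, the helper names are illustrative, and F16 is assumed to be represented by α4=α+1.

```python
# Sketch only: solves the last three of the four equations and uses the first
# one as a consistency check.  Demonstration data; F16 assumed represented by
# alpha^4 = alpha + 1.

EXP = [1]
for _ in range(14):
    v = EXP[-1] << 1
    if v & 0x10:
        v ^= 0x13                       # X^4 + X + 1
    EXP.append(v)
LOG = {v: i for i, v in enumerate(EXP)}

def gmul(a, b):
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % 15]

def ginv(a):
    return EXP[(-LOG[a]) % 15]

def gpow(a, k):
    return 1 if k == 0 else (0 if a == 0 else EXP[(LOG[a] * k) % 15])

def solve(A, b):
    """Gauss-Jordan elimination over F16 (A square and non-singular)."""
    m = len(A)
    M = [A[r][:] + [b[r]] for r in range(m)]
    for c in range(m):
        p = next(r for r in range(c, m) if M[r][c])
        M[c], M[p] = M[p], M[c]
        inv = ginv(M[c][c])
        M[c] = [gmul(inv, v) for v in M[c]]
        for r in range(m):
            if r != c and M[r][c]:
                f = M[r][c]
                M[r] = [M[r][k] ^ gmul(f, M[c][k]) for k in range(m + 1)]
    return [M[r][m] for r in range(m)]

ys  = [EXP[2], EXP[5], EXP[9]]          # y_1(x), y_2(x), y_3(x): demo values
err = [EXP[4], 0, EXP[1]]               # "true" errors e_1, e_2, e_3

tau = [0, 0, 0, 0]                      # tau_j = sum_s [y_s]^j * e_s
for j in range(4):
    for y, e in zip(ys, err):
        tau[j] ^= gmul(gpow(y, j), e)

# Solve with the equations j = 1, 2, 3 ...
A3x3  = [[gpow(y, j) for y in ys] for j in (1, 2, 3)]
e_hat = solve(A3x3, tau[1:4])
assert e_hat == err

# ... and verify consistency with the remaining equation (j = 0).
assert (e_hat[0] ^ e_hat[1] ^ e_hat[2]) == tau[0]
```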


The block diagram of FIG. 2 represents an apparatus for reading encoded digital signals 70, incorporating the decoder 10. This apparatus 70 comprises a keyboard 711, a screen 709, an external recipient of information 109, a data reader 105 and a demodulator 106, conjointly connected to input/output ports 703 of the decoder 10 which is produced here in the form of a logic unit.


The decoder 10 comprises, connected together by an address and data bus 702:


a central processing unit 700,


a random access memory (RAM) 704,


a read only memory (ROM) 705; and


said input/output ports 703.


Each of the elements illustrated in FIG. 2 is well known to the person skilled in the art of microcomputers and mass storage systems and, more generally, of information processing systems. These known elements are therefore not described here. It should be noted, however, that:


the information recipient 109 could be, for example, an interface peripheral, a display, a modulator, an external memory or another information processing system (not shown), and could be adapted to receive sequences of signals representing speech, service messages or multimedia data in particular of the IP or ATM type, in the form of sequences of binary data,


the reader 105 is adapted to read data recorded on a carrier such as a magnetic or magneto-optical disk.


The random access memory 704 stores data, variables and intermediate processing results, in memory registers bearing, in the description, the same names as the data whose values they store. The random access memory 704 contains in particular the following registers:


registers “received_words”, in which the received words are kept,


a register “estimated_symbols”, in which are stored the symbols from a received word in the course of correction,


a register “associated_words”, in which are stored the symbols of the “associated codewords”, and


a register “information_symbols”, in which are stored the symbols resulting from the redundancy removal.


The read only memory 705 is adapted to store, in registers which, for convenience, have the same names as the data which they store:


the operating program of the central processing unit 700, in a register “program”,


the length of each codeword in a register “n”,


the number of information symbols in each code word, in a register “k”, and


a table containing each word YjXi of which the components are equal to the value taken by the monomial YjXi at the points of the locating set, for i=0, . . . , 2L−1 and j=0, . . . , Λmax−1, where L is the total number of aggregates and Λmax is the size of the maximum aggregate, in a register “W”.
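By way of illustration only, such a table could be precomputed as in the sketch below for the example curve Y4+Y+X5=0 over F16, assuming that the locating set is restricted to the points with non-zero x (as in the worked example) and that F16 is represented by α4=α+1.

```python
# Sketch only: one possible precomputation of the table held in register "W"
# for the example curve Y^4 + Y + X^5 = 0 over F16 (assumptions: locating set
# restricted to points with x != 0; F16 represented by alpha^4 = alpha + 1).

EXP = [1]
for _ in range(14):
    v = EXP[-1] << 1
    if v & 0x10:
        v ^= 0x13                       # X^4 + X + 1
    EXP.append(v)
LOG = {v: i for i, v in enumerate(EXP)}

def gmul(a, b):
    return 0 if a == 0 or b == 0 else EXP[(LOG[a] + LOG[b]) % 15]

def gpow(a, k):
    return 1 if k == 0 else (0 if a == 0 else EXP[(LOG[a] * k) % 15])

# Locating set: points (x, y) of the curve with x non-zero.
points = [(x, y) for x in EXP for y in EXP
          if (gpow(y, 4) ^ y ^ gpow(x, 5)) == 0]

L, LAMBDA_MAX = 15, 4                   # number of aggregates, largest aggregate
assert len(points) == L * LAMBDA_MAX    # 60 points, 4 per aggregate here

# W[j][i] is the word whose components are the values of Y^j X^i at the points
# of the locating set, for i = 0..2L-1 and j = 0..LAMBDA_MAX-1.
W = [[[gmul(gpow(y, j), gpow(x, i)) for (x, y) in points]
      for i in range(2 * L)]
     for j in range(LAMBDA_MAX)]
```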


An application of the invention to the mass storage of data has been described above by way of example, but it is clear that the methods according to the invention may equally well be implemented within a telecommunications network, in which case unit 105 could for example be a receiver adapted to implement a protocol for data packet transmission over a radio channel.

Claims
  • 1. A method of decoding a one-point algebraic geometric code defined on an algebraic curve of type C(a,b) represented by an equation F(X,Y)=0 of degree b in X and of degree a in Y over Fq, characterized in that it comprises the following steps: calculating extended error syndromes (σj(i)) associated with a received word (r),determining the values of errors in each component r(x,yp(x)) of the received word r, on the basis of the extended error syndromes calculated.
  • 2. A decoding method according to claim 1, characterized in that, the algebraic curve being partitioned into aggregates, the step of determining the values of errors in each component on the basis of the extended error syndromes comprises the following steps for each aggregate: calculating, on the basis of the extended error syndromes, compound error values (τj(h), Ej(x)) each representing a linear combination of the errors in the components of the aggregate; determining that the errors in the components of the aggregate are zero if all said compound error values are zero; calculating the errors in the components of the aggregate if at least one of said compound error values is not zero.
  • 3. A decoding method according to claim 2, characterized in that the calculation of the extended error syndromes σj(i) is performed for j=0, . . . , jo−1, where jo is at least equal to the maximum value λmax of the cardinals λ(x) of the aggregates of the code.
  • 4. A decoding method according to one of claims 1 to 3, characterized in that determining the values of errors comprises the following steps: calculating τj=σj·V−1 for j=0, . . . , a−1, where V−1 is the matrix of dimensions (q−1)×(q−1) of which the element in position (i,j), 1≦i,j≦q−1, is α−(i−1)(j−1), α being a primitive element of Fq;for all h such that there is j such that τj(h)≠0, calculating the error ê(x,yp(x)) of the components of the aggregate defined by x=αh in the received word by:
  • 5. A decoding method according to claim 4, characterized in that, for each j, the calculation of the elements of the vector τj is performed by a circuit comprising a shift register and receiving as input the elements of the vector σj.
  • 6. A decoding method according to one of claims 1 to 3, characterized in that determining the values of errors comprises the following steps: implementing, for j=0, . . . , λmax−1, by means of the error syndromes polynomial
  • 7. A decoding method according to claim 6, characterized in that the calculation of the estimations ê(x,yp(x)) is performed only for the values of x such that there is at least one value of j for which Ej(x) is non-zero.
  • 8. A decoding method according to claim 4, characterized in that, if there is at least one value of x=αh associated with an erroneous aggregate of cardinal λ(x)<a, comparison is then made between the members of at least one equation
  • 9. An error correction device for a one-point algebraic geometric code defined on an algebraic curve of type C(a,b) represented by an equation F(X,Y)=0 of degree b in X and of degree a in Y over Fq, characterized in that it comprises: means for calculating extended error syndromes (σj(i)) associated with a received word (r); means for determining the values of errors in each component r(x,yp(x)) of the received word r, on the basis of the extended error syndromes calculated.
  • 10. An error correction device according to claim 9, characterized in that, the algebraic curve being partitioned into aggregates, the means for determining the values of errors in each component on the basis of the extended error syndromes comprise: means for calculating, for each aggregate and on the basis of the extended error syndromes, compound error values (τj(h), Ej(x)) each representing a linear combination of the errors in the components of the aggregate;means for determining that the errors in the components of an aggregate are zero if all said compound error values relative to that aggregate are zero;means for calculating the errors in the components of an aggregate if at least one of said compound error values relative to that aggregate is not zero.
  • 11. An error correction device according to claim 10, characterized in that the means for calculating the extended error syndromes σj(i) are implemented for j=0, . . . , jo−1, where jo is at least equal to the maximum value λmax of the cardinals λ(x) of the aggregates of the code.
  • 12. An error correction device according to one of claims 9 to 11, characterized in that the means for determining the values of errors comprise: means for calculating τj=σj·V−1 for j=0, . . . , a−1, where V−1 is the matrix of dimensions (q−1)×(q−1) of which the element in position (i,j), 1≦i,j≦q−1, is α−(i−1)(j−1), α being the primitive element of Fq;means for calculating, for all h such that there is j such that τj(h)≠0, the error ê(x,yp(x)) of the components of the aggregate defined by x=αh in the received word r by:
  • 13. An error correction device according to claim 12, characterized in that, for each j, it comprises a circuit comprising a shift register and receiving as input the elements of the vector σj to calculate the elements of the vector τj.
  • 14. An error correction device according to one of claims 9 to 11, characterized in that the means for determining the values of errors comprise: means for implementing, for j=0, . . . , λmax−1, by means of the error syndromes polynomial
  • 15. An error correction device according to claim 14, characterized in that the means for calculating the estimations ê(x,yp(x)) are implemented only for the values of x such that there is at least one value of j for which Ej(x) is non-zero.
  • 16. An error correction device according to claim 12, characterized by means for comparing, if there is at least one value of x=αh associated with an erroneous aggregate of cardinal λ(x)<a, between the members of at least one equation
  • 17. A decoder (10), characterized in that it comprises: at least one error correction device according to any one of claims 9 to 11, andat least one redundancy removal unit (108).
  • 18. An apparatus for receiving encoded digital signals (70), characterized in that it comprises a decoder according to claim 17, and in that it comprises means (106) for demodulating said encoded digital signals.
  • 19. A computer system (70), characterized in that it comprises a decoder according to claim 17, and in that it further comprises: at least one hard disk, andat least one means (105) for reading that hard disk.
  • 20. Non-removable data storage means, characterized in that it comprises computer program code instructions for the execution of the steps of a method according to any one of claims 1 to 3.
  • 21. Partially or wholly removable data storage means, characterized in that it comprises computer program code instructions for the execution of the steps of a method according to any one of claims 1 to 3.
  • 22. A computer program, characterized in that it contains instructions such that, when said program controls a programmable data processing device, said instructions cause said data processing device to implement a method according to any one of claims 1 to 3.
Priority Claims (1)
Number Date Country Kind
0413840 Dec 2004 FR national
PCT Information
Filing Document Filing Date Country Kind 371c Date
PCT/IB05/04034 12/22/2005 WO 00 6/21/2007