Patent Application 20040194006
Publication Number: 20040194006
Date Filed: December 29, 2003
Date Published: September 30, 2004
Abstract
The present invention concerns channel codes particularly well adapted to transmission in channels in which errors tend to occur in bursts. Moreover, the codes according to one embodiment of the invention, which use an algebraic geometric curve, are easy to decode and have a relatively high minimum distance. The invention also relates to the corresponding encoding and decoding methods, as well as the devices and apparatuses adapted to implement those methods. Application is in particular to mass storage, and to systems of communication by OFDM.
Description
[0001] The present invention concerns communication systems in which, in order to improve the fidelity of the transmission, the data to be transmitted are subjected to a channel encoding. More particularly it relates both to encoding methods and to decoding methods, and also to the devices and apparatuses adapted to implement those methods.
[0002] It will be recalled that so-called “channel” encoding consists, when the “codewords” sent to the receiver are formed, of introducing a certain amount of redundancy in the data to be transmitted. More particularly, by means of each codeword, the information is transmitted that is initially contained in a predetermined number k of symbols taken from an “alphabet” of finite size q; on the basis of these k information symbols, calculation is made of a number n of symbols belonging to that alphabet, so as to form codewords v=[v1,v2, . . . vn]. The set of codewords obtained when each information symbol takes some value in the alphabet constitutes a sort of dictionary referred to as a “code” of “dimension” k and “length” n.
[0003] When the size q of the alphabet is a power of a prime number, the alphabet can be given the structure of a so-called “Galois field” denoted Fq, of which the non-zero elements may conveniently be identified as each being equal to γi−1 for a corresponding value of i, where i=1, . . . , q−1, and where γ is an element of Fq chosen from the so-called “primitive” elements of that field. Where the alphabet is a Galois field, certain codes may conveniently be associated with a matrix H of dimension (n−k)×n known as a “parity matrix”, defined over Fq: a given word v of length n is a codeword if, and only if, it satisfies the relationship: H·vT=0 (where the exponent T indicates the transposition); the code is then said to be “orthogonal” to the matrix H. These codes, which are termed “linear codes”, will be the only codes considered further on.
[0004] At the receiver, the associated decoding method then judiciously uses this redundancy to detect any transmission errors and if possible to correct them. There is a transmission error if the difference e between a received word r and the corresponding codeword v sent by the transmitter is non-zero.
[0005] More particularly, the decoding is carried out in two main steps.
[0006] The first step consists of associating an “associated codeword” {circumflex over (v)}, which is an estimated value of the codeword v, with the received word r. To do this, the decoder first of all calculates the vector of “error syndromes” H·rT=H·eT. If the syndromes are all zero, it is assumed that no transmission error has occurred, and the “associated codeword” {circumflex over (v)} will then simply be taken to be equal to the received word r. If that is not the case, it is thereby deduced that certain symbols in the received word are erroneous, and a correction algorithm is then implemented which is adapted to estimate the value of the error e; the algorithm will thus provide an estimated value ê such that {circumflex over (v)}=r−ê is a codeword, which will then constitute the “associated codeword”.
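By way of illustration, the mechanics of this syndrome check can be sketched in a few lines of Python over the small prime field F5; the parity matrix and the words below are arbitrary toy values, not parameters of the invention.

```python
# Minimal sketch of the syndrome check H.r^T over the prime field F_5.
# H, v and the simulated error are arbitrary illustrative values.
q = 5
H = [[1, 1, 1, 1, 0],
     [0, 1, 2, 3, 1]]            # (n-k) x n = 2 x 5 parity matrix over F_5

def syndrome(H, word, q):
    """Return H.word^T computed modulo q."""
    return [sum(h * w for h, w in zip(row, word)) % q for row in H]

v = [1, 2, 3, 4, 0]              # a word orthogonal to H, i.e. a codeword
print(syndrome(H, v, q))         # [0, 0]: no error detected

r = v[:]
r[2] = (r[2] + 1) % q            # simulate one transmission error
print(syndrome(H, r, q))         # non-zero syndrome: the error is detected
```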
[0007] The second step simply consists in reversing the encoding method. In the ideal situation in which all the transmission errors have been corrected, the initial information symbols are thereby recovered.
[0008] The purpose of an error correction algorithm is to associate with the received word the codeword situated at the shortest Hamming distance from that received word, the “Hamming distance” being, by definition, the number of places where two words of the same length have a different symbol. The shortest Hamming distance between two different codewords of a code is termed the “minimum distance” d of that code. This is an important parameter of the code. More particularly, it is in principle possible to find the position of the possible errors in a received word, and to provide the correct replacement symbol (i.e. that is identical to that sent by the transmitter) for each of those positions, each time the number of erroneous positions is at most equal to INT[(d−1)/2] (where “INT” designates the integer part) for a code of minimum distance d (for certain error configurations, it is sometimes even possible to achieve better). However, this is only a possibility in principle, since it is often difficult to develop a decoding algorithm achieving such performance. It should also be noted that, when the chosen algorithm manages to propose a correction for the received word, that correction is all the more reliable (at least, for most transmission channels) the smaller the number of positions it concerns.
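These two notions can be made concrete with a minimal Python sketch (the words and the value of d used below are arbitrary examples):

```python
# Hamming distance between two words, and the guaranteed correction
# capacity INT[(d-1)/2] of a code of minimum distance d.
def hamming_distance(u, v):
    """Number of positions where the two words differ."""
    return sum(1 for a, b in zip(u, v) if a != b)

print(hamming_distance([1, 2, 3, 4], [1, 0, 3, 1]))   # 2

def correction_capacity(d):
    """Greatest number of errors whose correction is always possible."""
    return (d - 1) // 2

print(correction_capacity(5))    # a code with d = 5 corrects up to 2 errors
```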
[0009] Among known codes, “Reed-Solomon” codes may be cited, which are reputed for their efficiency (for a definition of Reed-Solomon codes, reference may be made to the work by R. E. Blahut entitled “Theory and practice of error-control codes”, Addison-Wesley, Reading, Mass., 1983). These codes are defined over Fq, and their minimum distance d is equal to (n−k+1). To decode them, a so-called “Berlekamp-Massey” algorithm is usually employed for the detection of the erroneous positions in a received word, and a so-called “Forney” algorithm for the correction of the corresponding erroneous symbols (these algorithms are described in the work mentioned above).
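As an illustration of the property d=(n−k+1), the following Python sketch builds a small Reed-Solomon parity matrix over the prime field F7 (a textbook-style construction chosen only for the example; the embodiments described further on work over F256) and checks the minimum distance by brute force:

```python
# Reed-Solomon-type code of length n = q-1 = 6 over F_7, redundancy n-k = 2,
# defined by the parity matrix H_{i,j} = (alpha^j)^i; brute-force check that
# its minimum distance equals n-k+1 = 3.
from itertools import product

q, alpha = 7, 3                  # 3 is a primitive element of F_7
n, red = q - 1, 2                # length 6, redundancy n-k = 2
H = [[pow(alpha, i * j, q) for j in range(n)] for i in range(1, red + 1)]

def is_codeword(w):
    return all(sum(h * c for h, c in zip(row, w)) % q == 0 for row in H)

min_weight = min(sum(1 for c in w if c != 0)
                 for w in product(range(q), repeat=n)
                 if any(w) and is_codeword(w))
print(min_weight, red + 1)       # 3 3: the minimum distance is n-k+1
```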
[0010] For modern information carriers, for example on hard disks, CD's (“compact discs”) and DVD's (“digital video discs”), it is sought to increase the density of information. When such a carrier is affected by a physical defect such as a scratch, a high number of information symbols may be rendered unreadable. This problem may nevertheless be remedied by using a very long code. However, Reed-Solomon codes have the particularity that the length n of the codewords is necessarily less than or equal to the size q of the alphabet of the symbols. Consequently, if a Reed-Solomon code is desired having codewords of great length, high values of q must be envisaged, which leads to costly implementations in terms of calculation and storage in memory. Moreover, high values of q are sometimes ill-adapted to the technical application envisaged. For this reason, it has been sought to build codes which naturally provide words of greater length than Reed-Solomon codes.
[0011] In particular so-called “algebraic geometric codes” or “Goppa geometric codes” have recently been proposed (see for example “Algebraic Geometric Codes” by J. H. van Lint, in “Coding Theory and Design Theory” 1st part, IMA Volumes Math. Appl., volume 21, Springer-Verlag, Berlin, 1990). These codes, also defined over a Galois field Fq, are constructed on the basis of an algebraic equation with two unknowns X and Y. The solutions to this algebraic equation may be considered as the coordinates (x,y) of points on an “algebraic curve”. To define a parity matrix, an ordered set is first of all constituted, termed a “locating set”, based on n such points of which all the coordinates are finite; then each row of the parity matrix is obtained by calculating the value of one judiciously chosen function of X and Y for each element of that locating set. An algebraic geometric code of length n is thus obtained.
[0012] An important parameter of such a curve is its “genus” g. In the particular case where the curve is a simple straight line (the genus g is then zero), the algebraic geometric code reduces to a Reed-Solomon code. In certain cases, algebraic geometric codes make it possible to achieve a length equal to (q+2g√q), which may be very high; for example, with an alphabet size of 256 and a genus equal to 120, codewords are obtained of length 4096. It should moreover be noted that algebraic geometric codes have a minimum distance d greater than or equal to (n−k+1−g).
[0013] Algebraic geometric codes are advantageous as to their minimum distance, and, as has been said, as to the length of the codewords, but they have the drawback of requiring decoding algorithms that are rather complex, and thus rather expensive in terms of equipment (software and/or hardware) and processing time. This complexity is in fact greater or lesser according to the algorithm considered, a greater complexity being in principle the price to pay for increasing the error correction capacity of the decoder (see for example the article by Tom Høholdt and Ruud Pellikaan entitled “On the Decoding of Algebraic-Geometric Codes”, IEEE Trans. Inform. Theory, vol. 41, no. 6, pages 1589 to 1614, November 1995).
[0014] Like all codes, algebraic geometric codes may be “modified” and/or “shortened”. It is said that a given code Cmod is a “modified” version of the code C if there is a square non-singular diagonal matrix A such that each word of Cmod is equal to v·A with v being in C. It is said that a given code is a “shortened” version of the code C if it comprises solely the words of C of which, for a number R of predetermined positions, the components are all zero: as these positions are known to the receiver, their transmission can be obviated, such that the length of the shortened code is (n−R). In particular, it is common to shorten an algebraic geometric code by removing from the locating set, where possible, one or more points for which the x coordinate is zero.
[0015] The object of the invention, inter alia, is to provide a code making it possible to correct a relatively high number of transmission errors in an economic manner, particularly where transmission errors have a tendency to occur in “error bursts” during the transmission of encoded symbols (it should be recalled that an “error burst” is a series of errors of which the frequency is high with respect to the mean frequency of errors over the channel considered; such error bursts are observed both in certain radio transmissions and in certain recordings on hard disk.)
[0016] Thus the creators of the present invention wondered whether, in determining the properties of the code used to transmit information over a given channel, it might be possible to take into account the characteristics of the channel envisaged to choose a well-adapted code. In particular, said creators considered the channels in which the data to transmit are grouped in blocks of predetermined length, and in which the transmission error rate per item of data transmitted is essentially constant within the same block; in other words, such channels are physically characterized in that, most often, transmission “noises” affect the data per block, and may affect different blocks differently; thus, for certain blocks, the probability of error can be very low or even zero, but for certain other blocks the probability of error can be very high and even close to (q−1)/q.
[0017] For such channels, it is advantageous to use a communication system with multiple carriers known as “OFDM” (which stands for “Orthogonal Frequency Division Multiplexing”). OFDM is particularly useful in environments in which the received signal is the sum of multiple transmitted signals which have undergone various reflections, and thus various phase shifts and attenuations, over their path between transmitter and receiver. Interference effects result from this which it is necessary to correct in order to guarantee good reception quality. OFDM achieves this objective by dividing the total bandwidth into a certain number of portions allocated to “subcarriers” of different frequency, such that the OFDM signal results from the superposition of the individual signals, which are mutually orthogonal, associated with those subcarriers.
[0018] More particularly, the data to be transmitted are first of all expressed, in conventional manner, in the form of “elementary symbols”, that is to say complex numbers defined in accordance with a certain modulation method, for example of phase (“Phase Shift Keying” or “PSK”), or of both phase and amplitude in combination (“Quadrature Amplitude Modulation” or “QAM”). In an OFDM system, those elementary symbols are then taken P by P (where P is a predetermined integer) and converted, by means of an IDFT (Inverse Discrete Fourier Transform), into a series of K (where K is a predetermined integer) complex numbers cr (r=0, . . . , K−1) representing as many “carriers”. Finally, there is transmitted the real part of the signal defined by:
[Formula for the transmitted signal, formed from the K complex numbers cr and the window function h(t)]
[0019] where the function h(t) is, by definition, equal to 1 in the interval 0≦t≦T, and zero outside that interval.
[0020] After receiving the modulated signal, a DFT (Discrete Fourier Transform) is implemented which is the inverse of the preceding one, which restores each of the individual elementary symbols.
[0021] For more details on OFDM, reference may for example be made to the book by R. van Nee and R. Prasad entitled “OFDM for Wireless Multimedia Communications” (Artech House, Boston and London, 2000).
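The IDFT/DFT round trip described above can be sketched with numpy as follows; the 4-QAM mapping and the parameter values are arbitrary illustrative choices, and the channel is left noiseless for brevity:

```python
# OFDM principle: K complex carrier amplitudes -> inverse DFT -> channel ->
# DFT -> the original elementary symbols.
import numpy as np

rng = np.random.default_rng(0)
K = P = 48                                   # one elementary symbol per carrier
bits = rng.integers(0, 2, size=2 * P)        # M = 2 bits per 4-QAM symbol
symbols = (2 * bits[0::2] - 1) + 1j * (2 * bits[1::2] - 1)   # 4-QAM mapping

time_samples = np.fft.ifft(symbols, n=K)     # the K complex numbers c_r
received = time_samples                      # noiseless channel for brevity
recovered = np.fft.fft(received, n=K)        # inverse operation at the receiver

print(np.allclose(recovered, symbols))       # True: the symbols are restored
```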
[0022] Thus the noise affecting the signal c(t) during its transmission over the channel will globally affect the block of P elementary symbols from which it issues, and consequently the MP corresponding binary elements, where 2M is the cardinal of the modulation constellation.
[0023] The present invention thus relates to a channel code adapted to take advantage of such a distribution of noise over a transmission channel. At the same time, it is desired for the code to be easy to decode, and to have a relatively high minimum distance.
[0024] Thus, according to a first aspect, the invention relates to a method of encoding information symbols, comprising a step in which a codeword v, of length n and orthogonal to a parity matrix H, is associated with every block of k information symbols belonging to a Galois field Fq, where q is an integer greater than 2 and equal to a power of a prime number. This method of encoding is remarkable in that the element Hαβ at position (α, β) (where α=1, . . . , n−k, and β=1, . . . , n) of said parity matrix H is equal to the value taken by the monomial Mα at the point Pβ, where
[0025] the monomials Mα≡XiYj, where the integers i and j are positive or zero, are such that if, among those monomials, there is one at i>0 and arbitrary j, then there is also one at (i−1) and j, and if there is one at arbitrary i and j>0, then there is also one at i and (j−1), and
[0026] said points Pβ are pairs of non-zero symbols of Fq which have been classified by aggregates: (x1,y1(x1)), (x1,y2(x1)), . . . , (x1,yλ1(x1)); (x2,y1(x2)), (x2,y2(x2)), . . . , (x2,yλ2(x2)); . . . ; (xμ,y1(xμ)), (xμ,y2(xμ)), . . . , (xμ,yλμ(xμ)) (with λ1+λ2+ . . . +λμ=n).
[0028] Thus, according to the invention, the columns of the parity matrix are arranged by “aggregates”, an “aggregate” being defined as being a set of pairs of symbols belonging to Fq which have a common value for the first element of those pairs. Each codeword v being, by definition, orthogonal to the parity matrix, it satisfies:
Σβ=1, . . . ,n Hαβ·vβ=0
[0029] (for α=1, . . . , n−k); it will thus be convenient, under the present invention, to replace the index β by the corresponding point Pβ=(x,y) in order to identify a component of the codewords, such that it will be possible to write:
v=[v(x1,y1(x1)), . . . , v(x1,yλ1(x1)), . . . , v(xμ,yλμ(xμ))];
[0030] furthermore, when components of codewords that are indexed in such manner by pairs (x,y) are such that their indexes have a common value x, it is convenient to state that those components form an “aggregate” of components. As the components belonging to the same aggregate are inserted in adjacent positions in the flow of data to transmit, the method of encoding according to the invention is particularly efficient for channels where the errors tend to occur in error bursts, provided that a method of correcting aggregates is implemented rather than of correcting individual errors. A method of correcting this type is moreover disclosed further on.
[0031] According to particular features applicable when said codewords v are destined to be transmitted in the form of blocks of predetermined length,
[0032] successive codewords v are put end to end so as to form a continuous chain of data to transmit, and
[0033] that chain of data is divided up into blocks of said predetermined length.
[0034] In this case, the components of an aggregate will, generally, be represented in the same data block transmitted; for example, in the case of OFDM, the components of each aggregate will be represented in the same interval of time KT.
[0035] According to still more particular features applicable when the codewords are not exactly divisible into blocks, each incomplete block is completed with a predetermined arbitrary sequence of data.
[0036] In a variant form, when the codewords are not exactly divisible into blocks, each incomplete block is completed by copying the value of the data situated at a predetermined number of positions of the corresponding codeword equal to the number of items of data to complete. It will usually be convenient to copy data situated in the same incomplete block.
[0037] Thanks to these provisions, it is possible to accelerate the process of transmitting blocks, and also to use the supplementary data so inserted for the purposes of synchronization.
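The two ways of completing an incomplete block can be sketched as follows on plain lists of bits; the block length and the data are arbitrary toy values:

```python
# Completing the last, incomplete block either with a predetermined sequence
# (here zeros) or by copying data from predetermined positions of the codeword
# (here the bits immediately preceding the incomplete part).
BLOCK_LEN = 8

def pad_with_sequence(data, block_len=BLOCK_LEN, filler=0):
    missing = (-len(data)) % block_len
    return data + [filler] * missing

def pad_by_copying(data, block_len=BLOCK_LEN):
    missing = (-len(data)) % block_len
    if missing == 0:
        return data[:]
    return data + data[-missing:]

codeword_bits = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0]     # 11 bits, blocks of 8
print(pad_with_sequence(codeword_bits))   # 16 bits, the last 5 are zeros
print(pad_by_copying(codeword_bits))      # 16 bits, the last 5 repeat earlier bits
```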
[0038] According to particular features, on the one hand, said points Pβ form part of the solutions to an algebraic equation
X^b+c Y^a+Σcij X^i Y^j=0,
[0039] where c (≠0) and the cij are elements of Fq, a and b are strictly positive mutually prime integers, and where the sum only applies to the integers i and j which satisfy a i+b j<a b, and, on the other hand, the maximum power jmax of Y in the monomials Mα is strictly less than a.
[0040] The benefit is thus obtained of the large minimum distance guaranteed by the algebraic geometric codes.
[0041] According to features that are still more particular, on the one hand, said monomials Mα=Xi Yj satisfy:
a i+b j≦m,
[0042] where m is a predetermined strictly positive integer, and on the other hand
λ(x)≦jmax+1
[0043] for all x=x1,x2, . . , xμ.
[0044] As explained in detail further on, this particular structure of the parity matrix makes it possible to associate with each codeword a certain number of words encoded according to Reed-Solomon. The correction of errors for the latter words will advantageously be simple and rapid, as is well-known in relation to the algorithms adapted to Reed-Solomon codes.
[0045] In a complementary manner, according to the same first aspect, the invention relates to a method of decoding received data, remarkable in that said received data result from the transmission of encoded data according to any one of the methods of encoding described succinctly above.
[0046] The received data may in particular result from the transmission of data encoded in accordance with the method according to the invention provided with the still more particular features described above. In this case, a word
r≡[r(x1,y1(x1)), . . . , r(x1,yλ1(x1)), . . . , r(xμ,yλμ(xμ))],
[0047] of length n having been received, and an integer smax satisfying
λ(x)−1≦smax≦jmax
[0048] for all x=x1,x2, . . . , xμ having been predetermined, the decoding method, according to particular features, comprises the following steps:
for s=0, . . . , smax:
[0049] calculating the word
rs≡[rs(x1), rs(x2), . . . , rs(xμ)],
[0050] of length μ, in which, for x=x1,x2, . . . ,xμ, the symbol
rs(x)≡Σi=1, . . . ,λ(x) r(x,yi(x))·(yi(x))^s
[0051] is erased if at least one of the symbols r(x,yi(x)) is itself erased, and
[0052] calculating the error syndrome vector as σs≡Ht(s)rsT, where
(Ht)αβ≡(xβ)^(α−1) (α=1, . . . , t; β=1, . . . , μ),
[0053] and where t(s) designates the number of monomials Mα=Xi Yj having j=s,
[0054] attempting to calculate a word {circumflex over (v)}0≡[{circumflex over (v)}0(x1),{circumflex over (v)}0(x2), . . . , {circumflex over (v)}0(xμ)] by correcting the word r0 according to the error syndrome vector σ0 by means of an error correction algorithm adapted to take into account erasures,
for s=1, . . . , smax
[0055] erasing, where the preceding error correction attempt has succeeded, for all x such that {circumflex over (v)}s−1(x)≠rs−1(x), the symbols rp(x) for p=s, . . . , smax, and
[0056] attempting to calculate a word {circumflex over (v)}s≡[{circumflex over (v)}s(x1),{circumflex over (v)}s(x2), . . . ,{circumflex over (v)}s(xμ)] by correcting the word rs according to the error syndrome vector σs by means of an error correction algorithm adapted to take into account erasures, and
[0057] calculating, where the above (smax+1) correction attempts have succeeded, for x=x1,x2, . . . ,xμ, the symbols {circumflex over (v)}(x,yi), where i=1, . . . ,λ(x), which are respectively the estimated values of the transmitted symbols corresponding to the received symbols r(x, yi), by solving the system of (smax+1) equations:
{circumflex over (v)}s(x)=Σi=1, . . . ,λ(x) {circumflex over (v)}(x,yi)·(yi)^s
for s=0, . . . , smax.
[0058] As can be seen, this decoding method explicitly manipulates the symbols received by aggregates. It is thus very sensitive to the number of aggregates of the received word which contain errors, and little sensitive to the total number of erroneous symbols in the received word. Due to this, the efficiency of the error correction is optimized, on condition that, for a received word comprising transmission errors, those errors only affect a limited number of aggregates, which is generally the case.
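The first part of this decoding method, namely forming the s-aggregate words rs and their syndromes, can be sketched as follows; for readability the sketch uses a small prime field and arbitrary toy aggregates and t(s) values, whereas the embodiment described further on works over F256:

```python
# Group a received word by aggregates, form the s-aggregate words
# r_s(x) = sum_i r(x, y_i(x)) * y_i(x)^s, and compute the syndromes
# sigma_s = H_t(s) . r_s^T with (H_t)_{a,b} = x_b^(a-1).
q = 5                                           # toy alphabet F_5
aggregates = {1: [1, 2], 2: [1, 3], 3: [2, 4]}  # x -> its y_i(x) values
xs = list(aggregates)                           # x_1, ..., x_mu
s_max = 1
t = {0: 2, 1: 1}                                # toy t(s) values

# a received word, one symbol per point (x, y), ordered by aggregates
r = {(1, 1): 3, (1, 2): 0, (2, 1): 4, (2, 3): 1, (3, 2): 2, (3, 4): 2}

def aggregate_word(r, s):
    return [sum(r[(x, y)] * pow(y, s, q) for y in aggregates[x]) % q
            for x in xs]

def syndrome(word, rows):
    return [sum(pow(x, a, q) * w for x, w in zip(xs, word)) % q
            for a in range(rows)]

for s in range(s_max + 1):
    rs = aggregate_word(r, s)
    print(s, rs, syndrome(rs, t[s]))
```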
[0059] An additional advantage of this decoding method, by virtue of the flexibility given by the choice of the parameter smax, is the possibility of U.E.P. (Unequal Error Protection) as explained in detail further on.
[0060] Moreover, it will be noted that it is easy to generalize the methods of encoding and decoding succinctly described above in the case in which the parity matrix H as described above is replaced by the parity matrix HA=H·A, where A is a non-singular diagonal matrix:
[0061] the word rA≡r·A−1 is associated with each received word r,
[0062] to said word rA a decoding method as succinctly described above is applied, for the code of which the parity matrix is H, and
[0063] if that application results in an estimated value {circumflex over (v)}A, then {circumflex over (v)}={circumflex over (v)}A·A is taken as the estimated value of the transmitted word corresponding to said received word r.
[0064] Similarly, it should be noted that the codewords v described above may be obtained, in entirely equivalent manner, by two steps instead of one, by commencing with a parity matrix Hπ obtained by applying an arbitrary permutation π−1 to the columns of a matrix H as succinctly described above: first of all, on the basis of the information symbols, words vπ orthogonal to Hπ are constructed, then the permutation π is applied to the components of vπ, so as to obtain the words v destined to be transmitted, in which the components belonging to the same aggregate are adjacent. After receiving the word r corresponding to v, and possible correction of the transmission errors so as to obtain an estimated value {circumflex over (v)}, it suffices to apply the permutation π−1 to that word {circumflex over (v)} to obtain the estimated value {circumflex over (v)}π of the word vπ.
[0065] For the implementation of the invention, it is possible to choose a so-called “hyperelliptic” algebraic code, in which the exponent a of Y is equal to 2. However, from the article (in Japanese) by S. Miura entitled “Hyperelliptic Codes II” (12th Symposium on the Theory of Information and its Applications—SITA '89, Inuyama, Japan, December 1989), a decoding method is known which was designed for a family of codes also defined over a Galois field Fq. These codes will be referred to below as “Miura codes”. These codes are of even length n and are characterized by a parity matrix cleverly chosen so that, to decode a received word, it is possible to apply any decoding algorithm for Reed-Solomon code of length n/2 to two words of length n/2 deduced in a certain manner from the received word.
[0066] A first family of Miura codes, of length n=2q, is defined by the following parity matrix (in what follows, a primitive element of the Galois field Fq will be designated by γ):
[Parity matrix of the first family of Miura codes, built from the blocks {overscore (H)}2r, {overscore (H)}r, Y1 and Y2 described below]
[0067] where
[0068] r is a strictly positive integer,
[0069] {overscore (H)}2r and {overscore (H)}r are the respective instances, for u=2r and u=r, of the matrix {overscore (H)}u with u rows and q columns defined by {overscore (H)}u ij=γ^((i−1)(j−1)) (1≦i≦u, 1≦j≦q−1), {overscore (H)}u iq=0 (2≦i≦u), and {overscore (H)}u 1q=1, and
[0070] Y1 and Y2 are two square matrices of dimension q, proportional to the identity matrix, and different from each other.
[0071] But the drawback of these Miura codes is that their minimum distance, which is equal to (2r+1), is (provided that r is greater than 8) less than the minimum distance of certain known algebraic geometric codes of the same redundancy (they are codes relying on an “attractive” hyperelliptic equation, i.e. having, whatever the value x of X, exactly two solutions (x,y1) and (x,y2) in Fq and where, furthermore, these values y1 and y2 of Y are different from each other).
[0072] As explained in the work by R. E. Blahut cited above, it is simpler to decode a Reed-Solomon code of length (q−1) defined over Fq than a code of length q, still defined over Fq. As the decoding of the codes used by the invention relies on decoding algorithms for Reed-Solomon codes, it is useful, to facilitate the decoding, to have codes shortened to the length n=2(q−1).
[0073] Furthermore, it is possible to define a second family of Miura codes, of length n=2(q−1), by the following parity matrix:
[Parity matrix of the second family of Miura codes, built from the blocks H2r, Hr, Y1 and Y2 described below]
[0074] where
[0075] r is a strictly positive integer,
[0076] H2r and Hr are the respective instances, for u=2r and u=r, of the matrix Hu with u rows and (q−1) columns defined by Hu ij=γ^((i−1)(j−1)) (1≦i≦u, 1≦j≦q−1), and
[0077] Y1 and Y2 are two square matrices of dimension (q−1), proportional to the identity matrix, and different from each other.
[0078] These Miura codes of the second family have the drawback, as for those of the first family, that their minimum distance, which is equal to (2r+1), is (provided that r is greater than 8) less than the minimum distance of known algebraic geometric codes of the same redundancy (which rely on an attractive hyperelliptic equation).
[0079] By comparison, the code according to the invention, in the particular case in which, on the one hand, n=2q or n=2(q−1), and in which, on the other hand, n−k=3r for any strictly positive integer r, has similar properties of “decomposition” into a pair of Reed-Solomon codes as the Miura codes, but may have a greater minimum distance. It can be shown for example that the minimum distance of such a code according to the invention applied to an attractive hyperelliptic equation is equal to (2r+2); it is thus greater by one unit than the minimum distance of the corresponding Miura code. In a received word, a decoding algorithm adapted for such a code is capable of correcting r aggregates containing errors (of which the position and value are unknown before application of that algorithm), even if the two components of those aggregates are erroneous, except in the case in which certain aggregates are such that not only do the two components of such an aggregate contain an error, but these two errors are furthermore equal to each other.
[0080] According to the same first aspect, the invention also relates to a method of communication of data in the form of blocks of predetermined length. This communication method comprises the following steps:
[0081] a) encoding the data to transmit, in accordance with one of the methods of encoding succinctly described above,
[0082] b) transmitting said encoded data blocks by OFDM, and
[0083] c) decoding the received data, in accordance with one of the methods of decoding succinctly described above.
[0084] The advantages of this method of communication are essentially the same as those of the corresponding methods of encoding and decoding succinctly set out above, with, in addition, the particular advantages given by the OFDM.
[0085] According to a second aspect, the invention relates to various devices.
[0086] Thus the invention relates firstly to an encoding device comprising a unit for calculating codewords adapted to associate a codeword v of length n orthogonal to a parity matrix H with any block of k information symbols belonging to a Galois field Fq, where q is an integer greater than 2 and equal to a power of a prime number. This encoding device is remarkable in that the element Hαβ at position (α, β) (where α=1, . . . , n−k, and β=1, . . . , n) of said parity matrix H is equal to the value taken by the monomial Mα at the point Pβ, where
[0087] the monomials Mα≡Xi Yj, where the integers i and j are positive or zero, are such that if, among those monomials, there is one at i>0 and arbitrary j, then there is also one at (i−1) and j, and if there is one at arbitrary i and j>0, then there is also one at i and (j−1), and
[0088] said points Pβ are pairs of non-zero symbols of Fq which have been classified by aggregates: (x1,y1(x1)), (x1,y2(x1)), . . . , (x1,yλ1(x1)); (x2,y1(x2)), (x2,y2(x2)), . . . , (x2,yλ2(x2)); . . . ; (xμ,y1(xμ)), (xμ,y2(xμ)), . . . , (xμ,yλμ(xμ)) (with λ1+λ2+ . . . +λμ=n).
[0090] According to particular features, this encoding device further comprises a formatting unit adapted to put the successive words v end to end so as to form a continuous chain of data to transmit, and to divide up that chain of data into blocks of predetermined length.
[0091] According to still more particular features applicable when the codewords are not exactly divisible into blocks, said formatting unit is capable of completing each incomplete block with a predetermined arbitrary sequence of data.
[0092] According to still more particular features applicable when the codewords are not exactly divisible into blocks, said formatting unit is capable of completing each incomplete block by copying the value of the data situated at a predetermined number of positions of the corresponding codeword equal to the number of items of data to complete. For example these copied data may conveniently be situated in the same incomplete block.
[0093] Secondly, the invention relates to a device for decoding received data resulting from the transmission of data encoded according to any one of the encoding methods succinctly described above. This decoding device comprises:
[0094] an error correction unit adapted to correct the transmission errors of said encoded data, and
[0095] a unit for calculating information symbols.
[0096] Where the received data result from the transmission of data encoded in accordance with the method according to the invention provided with the still more particular features described above, a word
r≡[r(x1,y1(x1)), . . . , r(x1,yλ1(x1)), . . . , r(xμ,yλμ(xμ))],
[0097] of length n having been received, and an integer smax satisfying
λ(x)−1≦smax≦jmax
[0098] for all x=x1,x2, . . . ,xμ having been predetermined, said error correction unit is, according to particular features, adapted to:
for s=0, . . . ,smax:
[0099] calculate the word
rs≡[rs(x1), rs(x2), . . . , rs(xμ)],
[0100] of length μ, in which, for x=x1, x2, . . . ,xμ, each symbol
rs(x)≡Σi=1, . . . ,λ(x) r(x,yi(x))·(yi(x))^s
[0101] is erased if at least one of the symbols r(x,yi(x)) is itself erased, and
[0102] calculate the error syndrome vector σs≡Ht(s)rsT, where
(Ht)αβ≡(xβ)^(α−1) (α=1, . . . , t; β=1, . . . , μ),
[0103] and where t(s) designates the number of monomials Mα=Xi Yj having j=s,
[0104] attempt to calculate a word {circumflex over (v)}0≡[{circumflex over (v)}0(x1),{circumflex over (v)}0(x2), . . . , {circumflex over (v)}0(xμ)] by correcting the word r0 according to the error syndrome vector σ0 by means of an error correction algorithm adapted to take into account erasures,
for s=1, . . . ,smax:
[0105] erase, where the preceding error correction attempt has succeeded, for all x such that {circumflex over (v)}s−1(x)≠rs−1(x), the symbols rp(x) for p=s, . . . , smax, and
[0106] attempt to calculate a word {circumflex over (v)}s≡[{circumflex over (v)}s(x1),{circumflex over (v)}s(x2), . . . ,{circumflex over (v)}s(xμ)] by correcting the word rs according to the error syndrome vector σs by means of an error correction algorithm adapted to take into account erasures, and
[0107] calculate, where the above (smax+1) correction attempts have succeeded, for x=x1,x2, . . . ,xμ, the symbols {circumflex over (v)}(x,yi), where i=1, . . . ,λ(x), which are respectively the estimated values of the transmitted symbols corresponding to the received symbols r(x,yi), by solving the system of (smax+1) equations:
{circumflex over (v)}s(x)=Σi=1, . . . ,λ(x) {circumflex over (v)}(x,yi)·(yi)^s
for s=0, . . . , smax.
[0108] According to particular features applicable when said codewords v have been transmitted in the form of blocks of predetermined length, the decoding device further comprises a reformatting device adapted to put said blocks of received data end to end after having removed, where appropriate, the data added before transmission to complete certain blocks, and to identify in the flow of data so obtained sequences of length n forming “received words” r.
[0109] The advantages of these devices are essentially the same as those of the corresponding encoding and decoding methods described succinctly above.
[0110] The invention also relates to:
[0111] an apparatus for transmitting encoded digital signals, comprising an encoding device as succinctly described above, means for modulating said encoded digital signals, and a modulated data transmitter,
[0112] an apparatus for recording encoded digital signals, comprising an encoding device as succinctly described above, means for modulating said encoded digital signals, and a modulated data recorder,
[0113] an apparatus for receiving encoded digital signals, comprising a decoding device as succinctly described above, means for demodulating said encoded digital signals, and a modulated data receiver,
[0114] an apparatus for reading encoded digital signals, comprising a decoding device as succinctly described above, means for demodulating said encoded digital signals, and a modulated data reader,
[0115] a system for telecommunicating data in the form of blocks of predetermined length comprising at least one apparatus for transmitting encoded digital signals as succinctly described above, and at least one apparatus for receiving encoded digital signals as succinctly described above,
[0116] a system for mass storage comprising at least one apparatus for recording digital signals as succinctly described above, at least one recording medium, and at least one apparatus for reading encoded digital signals as succinctly described above,
[0117] a non-removable data storage means comprising computer program code instructions for the execution of the steps of any one of the methods of encoding and/or decoding and/or communicating succinctly described above,
[0118] a partially or wholly removable data storage means comprising computer program code instructions for the execution of the steps of any one of the methods of encoding and/or decoding and/or communicating succinctly described above, and
[0119] a computer program containing instructions such that, when said program controls a programmable data processing device, said instructions lead to said data processing device implementing one of the methods of encoding and/or of decoding and/or of communicating succinctly described above.
[0120] The advantages provided by these transmitting, recording, receiving or reading apparatuses, these systems for telecommunication or mass storage, these means for data storage and this computer program are essentially the same as those provided by the methods of encoding, decoding and communicating according to the invention.
[0121] Other aspects and advantages of the invention will emerge from a reading of the following detailed description of particular embodiments, given by way of non-limiting example. The description refers to the accompanying drawings, in which:
[0122] FIG. 1 is a block diagram of a system for transmitting information according to one embodiment of the invention,
[0123] FIG. 2 represents an apparatus for transmitting signals incorporating an encoder according to the invention, and
[0124] FIG. 3 represents an apparatus for receiving signals incorporating a decoder according to the invention.
[0125] FIG. 1 is a block diagram of a system for transmitting information according to one embodiment of the invention.
[0126] The function of this system is to transmit information of any nature from a source 100 to a recipient or user 109. First of all, the source 100 puts this information into the form of symbols belonging to a certain alphabet (for example bytes of bits in the case in which the size q of the alphabet is 256), and transmits these symbols to a storage unit 101, which accumulates the symbols so as to form sets each containing k symbols. Next, each of these sets is transmitted by the storage unit 101 to a codeword computation unit 102 which constructs a word v orthogonal to the parity matrix H.
[0127] The methods of encoding and decoding according to the invention will now be illustrated, with the aid of a numerical example. Note that this example does not necessarily constitute a preferred choice of parameters for the encoding or decoding. It is provided here only to enable the person skilled in the art to understand the operation of the invention more easily.
[0128] An algebraic geometric code will thus be considered with length 1020 and dimension 918 defined as follows.
[0129] The alphabet of the symbols is constituted by the 2^8=256 elements of the Galois field F256 (i.e. by bytes of binary symbols) (this field may be constructed with the aid of the polynomial (X^8+X^4+X^3+X^2+1) defined over F2).
[0130] The following algebraic curve of genus g=24 is then considered, of which the points (x,y) are the solutions in F256 of the equation with two unknowns
f(X,Y)=X^17−Y^4−Y=0. (1)
[0131] This equation is said to be “attractive” since, for any value x taken by X in F256, the corresponding equation in Y has λ(x)=4 distinct solutions which are also in F256. Each of the 256 sets of 4 points having a common value of X constitutes an “aggregate” within the meaning of the invention.
[0132] This curve thus comprises 1024 points of finite coordinates (as well as a point P∞ at infinity). Preferably, the code will be “shortened” by removing from that set the four solutions of the equation for which x=0. The set of the remaining points Pβ (where β=1, . . . ,1020) will thus constitute the locating set, each point Pβ serving to identify the βth element of any codeword. In accordance with the invention, by means of the number β, these points are classified such that the points of the same aggregate bear successive values of the number β (here four distinct values for each aggregate).
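The structure of this locating set can be verified directly; the following Python sketch implements multiplication in F256 with the polynomial indicated above and checks that every value of x yields an aggregate of exactly four points, i.e. 1024 points in all and 1020 after shortening at x=0:

```python
# Multiplication in F256 built on the reduction polynomial
# X^8+X^4+X^3+X^2+1 (0x11D), as indicated in this embodiment.
def gf_mul(a, b, poly=0x11D):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

# Points of X^17 - Y^4 - Y = 0, i.e. Y^4 + Y = X^17 in characteristic 2.
total = 0
for x in range(256):
    rhs = gf_pow(x, 17)
    ys = [y for y in range(256) if gf_pow(y, 4) ^ y == rhs]
    assert len(ys) == 4                 # lambda(x) = 4 for every x
    total += len(ys)
print(total, total - 4)                 # 1024 points, 1020 after shortening
```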
[0133] Next, the vector space L(mP∞) is considered, of polynomials in X and Y with coefficients in F256 of which the only poles are situated at P∞ and are of order less than or equal to m, where m is a strictly positive integer (it is thus a so-called “one-point” algebraic geometric code). This vector space, which is of dimension greater than or equal to (m−g+1) (equal if m≧2g−2), has a base constituted by the monomials (X^i Y^j), where i is a positive integer or zero, j is an integer between 0 and 3, and 4i+17j≦m. This quantity W(i,j)≡4i+17j is often referred to as the “weight” of the monomial (X^i Y^j).
[0134] More generally, use could advantageously be made of an algebraic equation
[0135] f(X,Y)≡X^b+c Y^a+Σcij X^i Y^j=0, (2)
[0136] where c (≠0) and the cij are elements of Fq, a and b are strictly positive mutually prime integers, and where the sum only applies to the integers i and j which satisfy ai+bj<ab.
[0137] For such an algebraic equation, only the monomials (X^i Y^j) where the exponent j of Y is strictly less than a are considered, and the weight of such a monomial (X^i Y^j) is defined by W(i,j)≡a i+b j. In this embodiment, a maximum weight m is set, such that the monomials may be classified in the sets of monomials
T(j)≡{X^i Y^j|0≦i≦(m−bj)/a} (3)
[0138] for j≧0, j<a, and j<(m/b). The cardinal of this set T(j) is thus:
t(j)=1+INT[(m−bj)/a]
[0139] In the case of equation (1), where a=4 and b=17, if for example we take m=125, then 4 sets of monomials are obtained:
T(0)≡{X^i|0≦i≦31}, with a maximum weight W(31,0)=124,
T(1)≡{X^i Y|0≦i≦27}, with a maximum weight W(27,1)=125,
T(2)≡{X^i Y^2|0≦i≦22}, with a maximum weight W(22,2)=122, and
T(3)≡{X^i Y^3|0≦i≦18}, with a maximum weight W(18,3)=123.
[0140] The base of the vector space L(mP∞) then comprises: 32+28+23+19=102 monomials.
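This count can be checked in a few lines by applying the formula t(j)=1+INT[(m−bj)/a]:

```python
# Cardinals t(j) of the sets T(j) for a = 4, b = 17 and m = 125.
a, b, m = 4, 17, 125
t = [1 + (m - b * j) // a for j in range(4)]   # j = 0, 1, 2, 3 (j < a)
print(t)          # [32, 28, 23, 19]
print(sum(t))     # 102: the size of the base, i.e. the redundancy n-k
```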
[0141] Finally, a parity matrix H is defined in the following manner: the monomials Mα=X^i Y^j (of weight less than or equal to m, and where the maximum value jmax of j is strictly less than a) are arranged in any order as a function of i and j, and the element in row α (with α=1, . . . , n−k) and column β (with β=1, . . . , n) of the matrix H is equal to the monomial Mα evaluated at point Pβ of the algebraic curve. These points Pβ correspond to distinct solutions to the algebraic equation (2), but the person skilled in the art will decide, as a function of the application envisaged, whether it is useful to include all the solutions in the locating set, or whether on the contrary (as was done in the numerical example above) it is appropriate to set aside certain particular solutions. Whatever the case, each set of points (x,yp) (where p=1, . . . , λ(x)) of the locating set constitutes an aggregate within the meaning of the invention, and naturally λ(x)≦a.
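A direct, unoptimized construction of this parity matrix for the present embodiment can be sketched as follows (in practice the field arithmetic would use log/antilog tables); the row and column orderings below are one possible choice:

```python
# 102 x 1020 parity matrix H of the embodiment: one column per point of the
# locating set (ordered by aggregates), one row per monomial X^i Y^j with
# weight 4i + 17j <= 125 and j < 4.
def gf_mul(a, b, poly=0x11D):               # F256, X^8+X^4+X^3+X^2+1
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def gf_pow(a, e):
    r = 1
    for _ in range(e):
        r = gf_mul(r, a)
    return r

points = []                                  # locating set, x = 0 excluded
for x in range(1, 256):
    rhs = gf_pow(x, 17)
    points += [(x, y) for y in range(256) if gf_pow(y, 4) ^ y == rhs]

monomials = [(i, j) for j in range(4)        # M_alpha = X^i Y^j, j < a = 4
                    for i in range((125 - 17 * j) // 4 + 1)]

H = [[gf_mul(gf_pow(x, i), gf_pow(y, j)) for (x, y) in points]
     for (i, j) in monomials]

print(len(H), len(H[0]))                     # 102 rows (n-k), 1020 columns (n)
```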
[0142] It can thus be seen that the choices of the integers m and n−k are related. Thus, in the numerical example considered, n−k=102, and so k=918.
[0145] In this embodiment, it will moreover be required that the size chosen for each aggregate respects the condition
λ(x)≦jmax+1 (x=x1, x2, . . . ,xμ),
[0146] in order to be able to implement the decoding method described further on.
[0147] The codeword calculation unit 102 constructs a word v, orthogonal to the parity matrix H so defined, on the basis of each set of k information symbols.
[0148] In this embodiment of the invention, the formatting unit 20 puts the words v end to end so as to construct blocks of the length provided for by the transmission system.
[0149] Units 101, 102 and 20 can be considered to form conjointly an “encoder” 30.
[0150] Encoder 30 transmits said blocks to a modulator 103. This modulator 103 associates a modulation symbol with each group of M binary symbols (“bits”). It may for example be a complex amplitude defined according to the 4-QAM, 8-DPSK or 16-QAM constellation; in fact it may be necessary, where appropriate, to limit (if permitted) the size of the constellation in order to limit the number of items of data included in each block of P elementary symbols (where, for example, P=96), since a transmission error affecting a whole block representing a high number of components of v could prove to exceed the correction capacity of the code.
[0151] However, it is then necessary to know how to solve the practical problem which arises when the codewords are not exactly divisible into blocks, i.e. when the length of the codewords, expressed in bits, is not an integer multiple of the MP bits represented in a block.
[0152] To illustrate this problem, consider again our example in which the codewords have a length of 1020 bytes (corresponding to q=256), i.e. 8160 bits, and take P=96. We find that 8160 is not divisible by 96M, whether M be equal to 2, to 3 or to 4. Take, for example, M=2: the bits of a codeword will then “fill” 42 blocks each representing 192 bits, but 96 bits will still remain to be processed, which only occupy half a block.
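A few lines of Python confirm this arithmetic:

```python
# 1020 bytes = 8160 bits; 8160 is a multiple of 96*M for none of M = 2, 3, 4.
codeword_bits = 1020 * 8                 # 8160
P = 96
for M in (2, 3, 4):
    print(M, codeword_bits % (P * M))    # remainder 96 in each case
print(divmod(codeword_bits, 192))        # (42, 96): for M = 2, 42 full blocks
                                         # and one half-filled block remain
```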
[0153] Consider first of all the case in which the components of the codewords are not continuously produced by the encoder 30. If a first block has been commenced at the same time as a codeword, then the last block is half incomplete, since the second half of its contents, which could “accommodate” the start of the following codeword, is not yet available. This causes a delay in the transmission which may be bothersome for the recipient of the transmission.
[0154] To overcome this problem, in this embodiment, the last block is completed by some predetermined sequence, for example a series of zeros. In a variant form, 96 bits read from predetermined positions of the codeword are repeated, for example the last 96 bits of the codeword represented in the first half of the last block. Next the block completed in this way is transmitted without awaiting the following codeword.
[0155] These two ways of associating codewords and blocks to be transmitted can also be used in the case in which the components of the codewords are continuously produced by the encoder 30. This is because one or other of these ways may conveniently be used by the receiver of the transmission for the purposes of synchronization, since it suffices for the receiver to detect a data block of which the second half contains, in the first embodiment, a predetermined sequence, or else, in the second embodiment, a repetition of data already received in certain predetermined positions.
[0156] Next, these modulation symbols are transmitted to a transmitter or to a recorder 104, which inserts the symbols in a transmission channel. This channel may for example be a wired transmission or wireless transmission as is the case with a radio link. It may also correspond to storage on a suitable carrier such as a DVD or magnetic tape.
[0157] As explained above, this may advantageously be an OFDM transmitter, transmitting a superposition of K (where, for example, K=48, 64 or 96) carriers obtained by discrete Fourier transformation of the P elementary symbols.
[0158] This transmission, after having been affected by a “transmission noise” whose effect is to modify or erase certain of the transmitted data at random, arrives at a receiver or a reader 105. It may advantageously be an OFDM receiver which applies a discrete Fourier transformation, inverse to the previous one, to the complex amplitude received, so as to obtain P elementary symbols.
[0159] The receiver (or reader) 105 then transmits these elementary symbols to the demodulator 106, which transforms them into symbols of the alphabet Fq. These symbols of Fq are then transmitted to the reformatting unit 40.
[0160] The reformatting unit 40 commences by erasing the additional data from each successive block, for example series of zeros, which had been added to the data blocks to “complete” those blocks, before transmitting them. Next, it identifies sequences of n successive symbols so obtained, each of these sequences constituting a “received word”.
[0161] That word r is next processed by a unit 107, which implements an error correcting algorithm, so as to provide an “associated codeword”.
[0162] Before presenting such an algorithm, it is useful to briefly reconsider the encoding according to the embodiment, described above, which utilizes an algebraic equation (2).
[0163] A formulation for belonging to the code will be presented which is equivalent to the orthogonal relationship H·vT=0 , and which will be very convenient for the decoding of the received words.
[0164] For every codeword
v=[v(x1,y1(x1)), . . . , v(x1,yλ1(x1)), . . . , v(xμ,yλμ(xμ))],
[0165] for each aggregate attached to one of the values x1,x2, . . . ,xμ of x, there are constructed (jmax+1) “s-aggregate symbols”
vs(x)≡Σi=1, . . . ,λ(x) v(x,yi(x))·(yi(x))^s
[0166] for s=0, . . . ,jmax (it should be recalled that jmax is the maximum exponent of Y among the monomials Mα).
[0167] There are then constructed (jmax+1) “s-aggregate words”
vs≡[vs(x1), vs(x2), . . . , vs(xμ)],
[0168] of length μ, with the use of which the condition of belonging to the code is reduced to the set of (jmax+1) equations:
Ht(s)·vs^T=0,
[0169] where, by definition,
(Ht)αβ≡(xβ)^(α−1) (α=1, . . . , t; β=1, . . . , μ). (4)
[0170] The advantage of this formulation is that the matrix Ht of equation (4) is a Vandermonde matrix defined over Fq; consequently, if Ht(s) is considered as a parity matrix defining codewords vs, we have here, for each value of s, a Reed-Solomon code, for which decoding algorithms are known which are simple as well as providing good performance; for example the Berlekamp-Massey algorithm could be used for locating erroneous symbols, followed by the Forney algorithm for the correction of those erroneous symbols.
[0171] More specifically, according to one embodiment of the invention, it is possible to proceed as follows to correct a received word
r≡[r(x1,y1(x1)), . . . , r(x1,yλ1(x1)), . . . , r(xμ,yλμ(xμ))]
[0172] (of length n) taking into account erasures, i.e. information according to which the value of the symbol in a particular position in the received word is uncertain.
[0173] It is assumed that an integer smax satisfying
λ(x)−1≦smax≦jmax
[0174] for all x=x1,x2, . . . ,xμ, whose utility will appear further on, had been chosen before carrying out the following steps. By default, it is always possible to take smax=jmax.
[0175] Firstly, for s=0, . . ,smax:
[0176] calculation is made of the word
rs≡[rs(x1), rs(x2), . . . , rs(xμ)],
[0177] of length μ, in which, for x=x1,x2, . . . ,xμ, each symbol
rs(x)≡Σi=1, . . . ,λ(x) r(x,yi(x))·(yi(x))^s
[0178] is erased if at least one of the symbols r(x,yi(x)) is considered as doubtful by the receiver, and
[0179] calculation is made of the error syndrome vector σs≡Ht(s)rsT, where
(Ht)αβ≡(xβ)^(α−1) (α=1, . . . , t; β=1, . . . , μ),
[0180] and where t(s) designates the number of monomials Mα=Xi Yj having j=s.
[0181] Next, an attempt is made to calculate a word {circumflex over (v)}0≡[{circumflex over (v)}0(x1),{circumflex over (v)}0(x2), . . . ,{circumflex over (v)}0(xμ)] by correcting the word r0 according to the error syndrome vector σ0 by means of an error correction algorithm adapted to take into account erasures, such as the combination of the Berlekamp-Massey and Forney algorithms.
[0182] If that algorithm has not been able to provide a corrected word, it is thereby concluded that the means implemented do not enable that received word to be corrected, due to too high a number of transmission errors; the operations following (for example, replacing the word with a predetermined word such as the zero word) depend on the applications envisaged for the decoding method.
[0183] If, on the other hand, the correction algorithm is capable of proposing a word {circumflex over (v)}0, then for all x such that {circumflex over (v)}0(x)≠r0(x), the symbols rp(x) are erased for p=1, . . . ,smax.
[0184] Next, an attempt is made to calculate a word {circumflex over (v)}1≡[{circumflex over (v)}1(x1),{circumflex over (v)}1(x2), . . . ,{circumflex over (v)}1(xμ)] by correcting the word r1 according to the error syndrome vector σ1 by means of an error correction algorithm adapted to take into account erasures, such as the combination of the Berlekamp-Massey and Forney algorithms.
[0185] If that algorithm has not been able to provide a corrected word, it is thereby concluded that the means implemented do not enable that received word to be corrected, due to too high a number of transmission errors; the operations following (for example, replacing the word with a predetermined word such as the zero word) depend on the applications envisaged for the decoding method.
[0186] If, on the other hand, the correction algorithm is capable of proposing a word {circumflex over (v)}1, for all x such that {circumflex over (v)}1(x)≠r1(x), the symbols rp(x) are erased for p=2, . . .smax.
[0187] The correction of the words rs is continued in similar manner (if possible) up to s=smax.
[0188] Finally, where the above (smax+1) correction attempts have succeeded, calculation is made, for x=x1,x2, . . . ,xμ, of the symbols {circumflex over (v)}(x,yi), where i=1, . . . ,λ(x), which are respectively the estimated values of the transmitted symbols corresponding to the received symbols r(x,yi), by solving the system of (smax+1) equations:
{circumflex over (v)}s(x)=Σi=1, . . . ,λ(x) {circumflex over (v)}(x,yi)·(yi)^s (5)
[0189] for s=0, . . . ,smax.
[0190] For a given x, is it always possible to solve this system of equations? Note first of all that this system has the matrix
[ 1        1        . . .   1
  y1       y2       . . .   yλ
  y1^2     y2^2     . . .   yλ^2
  . . .
  y1^smax  y2^smax  . . .   yλ^smax ] (6)
[0191] where the symbols y1,y2, . . . , yλ refer to the aggregate considered and are all distinct taken in pairs: it is thus a Vandermonde matrix. Moreover, as indicated above, λ(x)−1≦smax≦jmax<a.
[0192] If λ(x)=smax+1, matrix (6) is square, and the inversion of system (5) produces one and only one solution.
[0193] If, on the other hand, λ(x)≦smax, system (5) is “over determined”. In this case, it is possible for example to use the λ(x) first equations of system (5) to calculate the {circumflex over (v)}(x,yi) symbols, and to use the (smax+1−λ(x)) remaining equations, when one or more of them is not satisfied, to detect wrongly estimated values for the s-aggregate symbols {circumflex over (v)}s(x). It can thus be seen that, in the context of the decoding algorithm according to the invention, the correction of the symbols belonging to small aggregates may be rendered more reliable than that of the symbols belonging to large aggregates. Consequently, the method of decoding according to the invention gives the possibility of “unequal protection” against errors, which is desirable in certain applications as is well known to the person skilled in the art.
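The solution of system (5) for one aggregate can be sketched as follows; for simplicity the sketch works over the prime field F7 with arbitrary toy values, whereas the embodiment above works over F256 (where the same Gaussian elimination would be carried out with the field's own multiplication and inversion):

```python
# Recover the lambda(x) symbols of one aggregate from its s-aggregate symbols
# by solving the Vandermonde system (5), here over the prime field F_7.
p = 7
ys = [1, 2, 4]                        # the distinct y_i of one aggregate
true_symbols = [3, 5, 6]              # v(x, y_i): the symbols to recover

# right-hand side v_s(x) = sum_i v(x, y_i) * y_i^s, s = 0..lambda(x)-1
rhs = [sum(v * pow(y, s, p) for v, y in zip(true_symbols, ys)) % p
       for s in range(len(ys))]

def solve_mod_p(A, b, p):
    """Gauss-Jordan elimination modulo the prime p (A square, invertible)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] % p)
        M[col], M[piv] = M[piv], M[col]
        inv = pow(M[col][col], p - 2, p)
        M[col] = [x * inv % p for x in M[col]]
        for r in range(n):
            if r != col and M[r][col]:
                factor = M[r][col]
                M[r] = [(a - factor * c) % p for a, c in zip(M[r], M[col])]
    return [M[r][n] for r in range(n)]

A = [[pow(y, s, p) for y in ys] for s in range(len(ys))]   # matrix (6)
print(solve_mod_p(A, rhs, p))         # [3, 5, 6]: the aggregate is recovered
```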
[0194] Once the correction has been terminated, the associated codeword {circumflex over (v)} is transmitted to an information symbols calculation unit 108, which extracts from it k information symbols by performing the inverse of the transformation implemented by unit 102. Finally, these information symbols are supplied to their recipient 109.
[0195] Units 40, 107 and 108 can be considered to form conjointly a “decoder” 10.
[0196] The block diagram of FIG. 2 represents, very schematically, a device 48 for transmitting signals incorporating an encoder 30.
[0197] This device 48 comprises a keyboard 911, a screen 909, a source of external information 100, a modulator 103 and a transmitter of modulated data 104, conjointly connected to input/output ports 903 of an encoder 30 which is implemented here in the form of a logic unit.
[0198] The encoder 30 comprises, connected together by an address and data bus 902:
[0199] a central processing unit 900,
[0200] a random access memory RAM 904,
[0201] a read only memory 905, and
[0202] said input/output ports 903.
[0203] Each of the elements illustrated in FIG. 2 is well known to a person skilled in the art of microcomputers and transmission systems and, more generally, of information processing systems. These known elements are therefore not described here. It should be noted, however, that:
[0204] the information source 100 could, for example, be an interface peripheral, a sensor, a demodulator, an external memory or other information processing system (not shown), and could for example supply sequences of signals representing speech, service messages or multimedia data in particular of the IP or ATM type, in the form of sequences of binary data, and
[0205] the transmitter 104 is adapted to transmit signals of the OFDM system.
[0206] The random access memory 904 stores data, variables and intermediate processing results, in memory registers bearing, in the description, the same names as the data whose values they store. It should be noted, in passing, that the word “register” designates, throughout the present description, a memory area of low capacity (a few items of binary data) and equally a memory area of large capacity (making it possible to store a complete program) within a random access memory or read only memory.
[0207] The random access memory 904 contains in particular the following registers:
[0208] a register “information_symbols” in which the information symbols belonging to Fq are stored,
[0209] a register “code_words”, in which are stored the codewords v, and
[0210] a register “data_blocks” in which are stored the data blocks before they are submitted to the modulator 103.
[0211] The read only memory 905 is adapted to store, in registers which, for convenience, have the same names as the data which they store:
[0212] the operating program of the central processing unit 900, in a register “program”,
[0213] the cardinal of the Galois field Fq serving as alphabet for the code used, in a register “q”,
[0214] the number of information symbols serving to construct a codeword, in a register “k”,
[0215] the length of the stored codewords, in a register “n”,
[0216] the parity matrix of the code, in a register “H”, and
[0217] the length of the data blocks transmitted, in a register “block_length”.
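Purely for illustration, and without limiting the hardware implementation described above, the parameters and buffers listed for the encoder 30 could be mirrored in software as follows; the class names are hypothetical, and only the field names are taken from the register names above (the "program" register has no counterpart in this sketch).

from dataclasses import dataclass, field
from typing import List

@dataclass
class EncoderROM:                 # fixed code parameters (read only memory 905)
    q: int                        # cardinal of the Galois field Fq
    k: int                        # number of information symbols per codeword
    n: int                        # length of the codewords
    H: List[List[int]]            # parity matrix of the code, (n - k) rows of n entries
    block_length: int             # length of the transmitted data blocks

@dataclass
class EncoderRAM:                 # working buffers (random access memory 904)
    information_symbols: List[int] = field(default_factory=list)
    code_words: List[List[int]] = field(default_factory=list)
    data_blocks: List[List[int]] = field(default_factory=list)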
[0218] The block diagram of FIG. 3 represents, very schematically, a signal receiving device 70 incorporating the decoder 10.
[0219] This apparatus 70 comprises a keyboard 711, a screen 709, a recipient of external information 109, a modulated data receiver 105 and a demodulator 106, conjointly connected to input/output ports 703 of the decoder 10 which is produced here in the form of a logic unit.
[0220] The decoder 10 comprises, connected together by an address and data bus 702:
[0221] a central processing unit 700,
[0222] a random access memory (RAM) 704,
[0223] a read only memory (ROM) 705, and
[0224] said input/output ports 703.
[0225] Each of the elements illustrated in FIG. 3 is well known to a person skilled in the art of microcomputers and transmission systems and, more generally, of information processing systems. These known elements are therefore not described here. It should be noted, however, that:
[0226] the information recipient 109 could, for example, be an interface peripheral, a display, a modulator, an external memory or other information processing system (not shown), and could be adapted to receive sequences of signals representing speech, service messages or multimedia data in particular of the IP or ATM type, in the form of sequences of binary data, and
[0227] the receiver 105 is adapted to receive signals of the OFDM system.
[0228] The random access memory 704 stores data, variables and intermediate processing results, in memory registers bearing, in the description, the same names as the data whose values they store. The random access memory 704 contains in particular the following registers (a brief sketch of the data flow through these registers is given after this list):
[0229] a register “data_blocks” in which the data blocks issuing from the demodulator 106 are stored,
[0230] a register “received_words”, in which the received words r are stored,
[0231] a register "associated_words" in which, where appropriate, the words {circumflex over (v)} resulting from the correction of r are stored, and
[0232] a register “information_symbols” in which the information symbols calculated by the unit 108 are stored.
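A minimal sketch of the data flow suggested by these registers is given below; the callables remove_padding, correct_word and extract_info are hypothetical placeholders standing in, respectively, for the removal of data added before transmission, the error correction unit 107, and the information symbols calculation unit 108.

# Sketch only: turn demodulated data blocks into information symbols.
# "n" is the codeword length read from the register of the same name.

def decode_stream(data_blocks, n, remove_padding, correct_word, extract_info):
    stream = [sym for block in data_blocks for sym in remove_padding(block)]  # blocks end to end, padding removed
    received_words = [stream[i:i + n] for i in range(0, len(stream), n)]      # register "received_words"
    associated_words = [correct_word(r) for r in received_words]              # register "associated_words"
    return [extract_info(v_hat) for v_hat in associated_words]                # register "information_symbols"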
[0233] The read only memory 705 is adapted to store, in registers which, for convenience, have the same names as the data which they store:
[0234] the operating program of the central processing unit 700, in a register “program”,
[0235] the length of the data blocks transmitted, in a register “block_length”,
[0236] the cardinal of the Galois field Fq serving as alphabet for the code used, in a register “q”,
[0237] the length of the stored codewords, in a register “n”,
[0238] the number of information symbols serving to construct a codeword, in a register “k”, and
[0239] the parity matrix of the code, in a register "H".
[0240] It should be noted that, in certain applications, it will be convenient to use the same computer device (functioning in multitask mode) both for the transmission and for the reception of signals according to the invention; in this case, the units 10 and 30 will be physically identical.
[0241] Finally, it should be noted that, when the code relies on an algebraic equation (2) and when the channel considered produces both error bursts and errors affecting symbols independently (rather than aggregates), it is advisable to use two decoders in parallel: the first will use the decoding algorithm according to the invention, and the second will use any known algorithm appropriate for correcting, for an algebraic geometric code, the errors and/or erasures of symbols individually (for example the algorithm known as that of "Feng and Rao"). If only one of these two algorithms is capable of providing an estimated value of the transmitted word, or if both algorithms provide the same estimated value, it is natural to accept that estimated value; if, on the other hand, the algorithms provide two different estimated values, it is necessary to provide a method of arbitration taking into account, preferably, the characteristics of the channel considered.
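A short sketch of this arbitration, under the assumption that each decoder returns None when it fails and that a channel-dependent tie-break function is available, might look as follows; the names decode_aggregates, decode_symbols and arbitrate are hypothetical.

# Sketch only: run the aggregate-oriented decoder and a per-symbol decoder
# (for example a Feng-Rao type decoder) in parallel and reconcile their outputs.

def decode_with_arbitration(r, decode_aggregates, decode_symbols, arbitrate):
    v1 = decode_aggregates(r)        # algorithm according to the invention
    v2 = decode_symbols(r)           # per-symbol algebraic geometric decoder
    if v1 is None:
        return v2                    # only the second decoder succeeded (or both failed)
    if v2 is None or v1 == v2:
        return v1                    # single success, or both decoders agree
    return arbitrate(r, v1, v2)      # disagreement: channel-dependent arbitration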
[0242] It will also be noted that, even though an application of the invention to the transmission of data over a radio channel has been described above by way of example, the methods according to the invention may equally be applied to mass storage, for example within the same computer; in that case, for example, unit 104 may be a recorder and unit 105 a reader of data on a magnetic or magneto-optical disk. Such an application is all the more appropriate since certain recording media of that type are subject to error bursts.
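Finally, and purely as an elementary illustration of the parity-matrix construction recited in the claims below, the following sketch evaluates monomials X^iY^j at points listed aggregate by aggregate over a small prime field; the field GF(7), the monomial set and the points chosen here are arbitrary examples and are not taken from the description.

# Sketch only: H[a][b] = value of the monomial M_a = X^i * Y^j at the point P_b.

def parity_matrix(monomials, points, p):
    """monomials: list of (i, j) exponent pairs; points: list of (x, y) pairs of
    non-zero elements of GF(p), listed aggregate by aggregate (same x grouped)."""
    return [[(pow(x, i, p) * pow(y, j, p)) % p for (x, y) in points]
            for (i, j) in monomials]

# Example over GF(7): the monomial set {1, X, Y} (closed under lowering of
# either exponent) evaluated at the aggregate of x = 1, then that of x = 2.
H = parity_matrix([(0, 0), (1, 0), (0, 1)],
                  [(1, 2), (1, 4), (2, 3)], 7)
# H == [[1, 1, 1], [1, 1, 2], [2, 4, 3]]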
Claims
- 1. A method of encoding information symbols, comprising a step in which a codeword v of length n and orthogonal to a parity matrix H, is associated with every block of k information symbols belonging to a Galois field Fq, where q is an integer greater than 2 and equal to a power of a prime number, wherein element Hαβ at position (α, β) (where α=1, . . . ,n−k, and β=1, . . . , n) of said parity matrix H is equal to the value taken by the monomial Mα at the point Pβ, where
the monomials Mα ≡ X^iY^j, where the integers i and j are positive or zero, are such that if, among those monomials, there is one at i>0 and arbitrary j, then there is also one at (i−1) and j, and if there is one at arbitrary i and j>0, then there is also one at i and (j−1), and said points Pβ are pairs of non-zero symbols of Fq which have been classified by aggregates:
(x1, y1(x1)), (x1, y2(x1)), . . . , (x1, yλ1(x1)); (x2, y1(x2)), (x2, y2(x2)), . . . , (x2, yλ2(x2)); . . . ; (xμ, y1(xμ)), (xμ, y2(xμ)), . . . , (xμ, yλμ(xμ))
(with λ1 + λ2 + . . . + λμ = n).
- 2. An encoding method according to claim 1, in which said codewords v are destined to be transmitted in the form of blocks of predetermined length, wherein
successive codewords v are put end to end so as to form a continuous chain of data to transmit, and that chain of data is divided up into blocks of said predetermined length.
- 3. An encoding method according to claim 2, in which the codewords are not exactly divisible into blocks, and each incomplete block is completed with a predetermined arbitrary sequence of data.
- 4. An encoding method according to claim 2, in which the codewords are not exactly divisible into blocks, and each incomplete block is completed by copying the value of the data situated at a predetermined number of positions of the corresponding codeword equal to the number of items of data to complete.
- 5. A method of encoding according to claim 4, in which said copied data are situated in the same incomplete block.
- 6. An encoding method according to any one of the preceding claims, in which said points Pβ form part of the solutions to an algebraic equation
- 7. An encoding method according to claim 6, in which said monomials Mα = X^iY^j satisfy:
- 8. An encoding method in which the parity matrix of the code is obtained by post-multiplying a parity matrix according to any one of claims 1-5 by a non-singular diagonal matrix.
- 9. A method of encoding information symbols, comprising a step in which a codeword vπ, of length n and orthogonal to a parity matrix Hπ is associated with every block of k information symbols belonging to a Galois field Fq, where q is an integer greater than 2 and equal to a power of a prime number, and a step in which a predetermined permutation π is applied to the components of vπ so as to obtain words v adapted to be transmitted, wherein said parity matrix Hπ is obtained by applying the permutation π−1 to the columns of a parity matrix according to any one of claims 1-5.
- 10. A method of decoding received data, in which said data result from the transmission of data encoded according to any one of claims 1-5.
- 11. A method of decoding received data resulting from the transmission of data encoded according to claim 7, wherein a word
- 12. (Canceled)
- 13. An encoding device (30) comprising a unit for calculating codewords (102) adapted to associate a codeword v, of length n and orthogonal to a parity matrix H, with every block of k information symbols belonging to a Galois field Fq, where q is an integer greater than 2 and equal to a power of a prime number, wherein element Hαβ at position (α, β) (where α=1, . . . ,n−k, and β=1, . . . , n) of said parity matrix H is equal to the value taken by the monomial Mα at the point Pβ, where
the monomials Mα ≡ X^iY^j, where the integers i and j are positive or zero, are such that if, among those monomials, there is one at i>0 and arbitrary j, then there is also one at (i−1) and j, and if there is one at arbitrary i and j>0, then there is also one at i and (j−1), and said points Pβ are pairs of non-zero symbols of Fq which have been classified by aggregates:
(x1, y1(x1)), (x1, y2(x1)), . . . , (x1, yλ1(x1)); (x2, y1(x2)), (x2, y2(x2)), . . . , (x2, yλ2(x2)); . . . ; (xμ, y1(xμ)), (xμ, y2(xμ)), . . . , (xμ, yλμ(xμ))
(with λ1 + λ2 + . . . + λμ = n).
- 14. An encoding device according to claim 13, further comprising a formatting unit (20) adapted to put the successive words v end to end so as to form a continuous chain of data to transmit, and to divide up that chain of data into blocks of predetermined length.
- 15. An encoding device according to claim 14, in which the codewords are not exactly divisible into blocks, and said formatting unit (20) is capable of completing each incomplete block with a predetermined arbitrary sequence of data.
- 16. An encoding device according to claim 14, in which the codewords are not exactly divisible into blocks, and said formatting unit (20) is capable of completing each incomplete block by copying the value of the data situated at a predetermined number of positions of the corresponding codeword equal to the number of items of data to complete.
- 17. An encoding device according to claim 16, in which said copied data are situated in the same incomplete block.
- 18. A device (10) for decoding received data resulting from the transmission of data encoded according to any one of claims 1 to 5, comprising:
an error correction unit (107) adapted to correct the transmission errors of said encoded data, and a unit (108) for calculating information symbols.
- 19. A device for decoding received data resulting from the transmission of data encoded according to claim 7, wherein, a word
- 20. A decoding device according to claim 18 or 19, in which said codewords v have been transmitted in the form of blocks of predetermined length, further comprising a reformatting device (40) adapted to put said blocks of received data end to end after having removed, where appropriate, the data added before transmission to complete certain blocks, and to identify in the flow of data so obtained sequences of length n forming "received words" r.
- 21. Apparatus for transmitting encoded digital signals (48), comprising an encoding device according to any one of claims 13 to 17, means (103) for modulating said encoded digital signals, and a modulated data transmitter (104).
- 22. Apparatus according to claim 21, in which said modulation is in accordance with OFDM.
- 23. Apparatus for recording encoded digital signals (48), comprising an encoding device according to any one of claims 13 to 17, means (103) for modulating said encoded digital signals, and a modulated data recorder (104).
- 24. Apparatus for receiving encoded digital signals (70), comprising a decoding device according to claim 18, means (106) for demodulating said encoded digital signals, and a modulated data receiver (105).
- 25. Apparatus according to claim 24, in which said demodulation is in accordance with OFDM.
- 26. Apparatus for reading encoded digital signals (70), comprising a decoding device according to claim 18, means (106) for demodulating said encoded digital signals, and a modulated data reader (105).
- 27.-31. (Canceled)
- 32. A method of encoding information symbols, comprising a step of labelling the symbols of a codeword according to the pairs
- 33. A method of decoding a received word, comprising the steps of:
constructing a predetermined number of s-aggregate words rs = [rs(x1), rs(x2), . . . , rs(xμ)], where s = 0, . . . , smax, from a received word r ≡ [r(x1,y1(x1)), . . . , r(x1,yλ1(x1)), . . . , r(xμ,yλμ(xμ))],
successively correcting said s-aggregate words rs for s = 0, . . . , smax to obtain words {circumflex over (v)}s by an error-correcting algorithm for a Reed-Solomon code, while erasing, when a word {circumflex over (v)}s for s = 0, . . . , smax−1 is obtained, the symbols rp(x) for p = s+1, . . . , smax, for all x such that {circumflex over (v)}s(x) is not equal to rs(x), and
calculating symbols {circumflex over (v)}(x,yi), which are respectively the estimated values of the transmitted symbols corresponding to the received symbols r(x,yi), by using said words {circumflex over (v)}s, where s = 0, . . . , smax.
- 34. An encoding device for encoding information symbols, comprising means for labelling the symbols of a codeword according to the pairs
- 35. A decoding device for decoding a received word, comprising:
construction means for constructing a predetermined number of s-aggregate words rs = [rs(x1), rs(x2), . . . , rs(xμ)], where s = 0, . . . , smax, from a received word r ≡ [r(x1,y1(x1)), . . . , r(x1,yλ1(x1)), . . . , r(xμ,yλμ(xμ))],
correction means for successively correcting said s-aggregate words rs for s = 0, . . . , smax to obtain words {circumflex over (v)}s by an error-correcting algorithm for a Reed-Solomon code,
erasing means for erasing, when a word {circumflex over (v)}s for s = 0, . . . , smax−1 is obtained, the symbols rp(x) for p = s+1, . . . , smax, for all x such that {circumflex over (v)}s(x) is not equal to rs(x), and
calculation means for calculating symbols {circumflex over (v)}(x,yi), which are respectively the estimated values of the transmitted symbols corresponding to the received symbols r(x,yi), by using said words {circumflex over (v)}s, where s = 0, . . . , smax.
Priority Claims (2)
Number      Date        Country   Kind
0216714     Dec 2002    FR
0304767     Apr 2003    FR