Properties of a channel affect the amount of data that can be handled by the channel. The so-called “Shannon limit” defines the theoretical limit on the amount of data that a channel can carry.
Different techniques have been used to increase the data rate that can be handled by a channel. “Near Shannon Limit Error-Correcting Coding and Decoding: Turbo Codes,” by Berrou et al., Proc. ICC, pp. 1064-1070 (1993), described a new “turbo code” technique that has revolutionized the field of error correcting codes.
Turbo codes have sufficient randomness to allow reliable communication over the channel at a high data rate near capacity. However, they still retain sufficient structure to allow practical encoding and decoding algorithms. Still, the technique for encoding and decoding turbo codes can be relatively complex.
A standard turbo coder is shown in FIG. 1. A block of k information bits 100 is encoded directly by a first encoder 102 and, after interleaving, by a second encoder 104.
The encoders 102, 104 are also typically recursive convolutional coders.
Three different items are sent over the channel 150: the original k bits 100, first encoded bits 110, and second encoded bits 112.
At the decoding end, two decoders are used: a first constituent decoder 160 and a second constituent decoder 162. Each receives both the original k bits and one of the encoded portions 110, 112. Each decoder sends likelihood estimates of the decoded bits to the other decoder. The estimates are used to decode the uncoded information bits as corrupted by the noisy channel.
The present application describes a new class of codes, coders and decoders, called “turbo-like” codes, coders and decoders. These coders may be less complex to implement than standard turbo coders.
The inner coder of this system is a rate 1 encoder, or a coder that encodes at close to rate 1. This means that the coder puts out approximately the same number of bits as it takes in. Fewer bits are produced than in comparable systems that use an inner coder of rate less than 1.
The system can also use component codes in a serially concatenated system. The individual component codes forming the overall code may be simpler than previous codes; each simple code individually might even be considered useless.
More specifically, the present system uses an outer coder, an interleaver, and an inner coder. Optional components include a middle coder 305, where the middle coder can also include additional interleavers.
The inner coder 210 is a linear rate 1 coder, or a coder whose rate is close to 1.
Unlike turbo coders, which produce excess information in their final coder, the present system uses a final coder that does not increase the number of bits. More specifically, the inner coder can be any of many different kinds of elements.
An embodiment of the present system, in its most general form, is shown in FIG. 2.
Encoder 200 is called an outer encoder, and receives the uncoded data. The outer coder can be an (n,k) binary linear encoder with n>k. This means that the encoder 200 accepts as input a block u of k data bits and produces an output block v of n data bits. The mathematical relationship between u and v is v=T0u, where T0 is an n×k binary matrix. In its simplest form, the outer coder may be a repetition coder. The outer coder codes data at a rate less than 1, and may be, for example, ½ or ⅓.
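As an illustration only (the matrix construction and names below are not from the specification), a minimal Python sketch of a rate ⅓ repetition outer coder in this matrix form:

    import numpy as np

    k = 4
    # T0 for a rate 1/3 repetition code: a 3k x k binary matrix whose rows
    # 3i, 3i+1, 3i+2 each pick out input bit i, so v = T0 u repeats each bit
    T0 = np.kron(np.eye(k, dtype=int), np.ones((3, 1), dtype=int))
    u = np.array([1, 0, 1, 1])
    v = T0 @ u % 2   # v = T0 u over GF(2)
    print(v)         # [1 1 1 0 0 0 1 1 1 1 1 1]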
The interleaver 220 performs a fixed pseudo-random permutation of the block v, yielding a block w having the same length as v. The permutation can be an identity matrix, in which case the output is identical to the input. Alternately, and more preferably, the permutation rearranges the bits in a specified way.
The inner encoder 210 is a linear rate 1 encoder, which means that the n-bit output block x can be written as x=TIw, where TI is a nonsingular n×n matrix. Encoder 210 can have a rate that is close to 1, e.g., within 50%, more preferably 10% and perhaps even more preferably within 1% of 1.
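Continuing the illustration, the interleaver can be viewed as a permutation matrix P and the inner coder as a nonsingular matrix TI. A minimal sketch, assuming the accumulator of the later embodiments as the rate 1 inner code; its matrix is lower triangular with all ones, which is nonsingular over GF(2):

    import numpy as np

    rng = np.random.default_rng(1)
    n = 12
    P = np.eye(n, dtype=int)[rng.permutation(n)]  # w = P v, a fixed permutation
    TI = np.tril(np.ones((n, n), dtype=int))      # accumulator: x[i] = w[1]+...+w[i] mod 2
    v = rng.integers(0, 2, n)
    x = TI @ (P @ v) % 2                          # x = TI w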
The overall structure of coders such as the one in FIG. 2 is a serial concatenation of an outer code and an inner code through an interleaver.
A number of different embodiments will be described herein, all of which follow the general structure of FIG. 2.
More generally, there can be more than 2 encoders: there can be x encoders, and x−1 interleavers. The additional coder can be generically shown as a middle coder.
A number of embodiments of the coders are described, including a repeat and accumulate (“RA”) coder, a repeat double accumulate (“RDD”) coder, and a repeat accumulate accumulate (“RAA”) coder.
The RA coder includes an outer coder and an inner coder connected via a pseudorandom interleaver. The outer code is a simple repetition code, and the inner code is a rate 1 accumulator code. The accumulator code is a truncated rate 1 convolutional code with transfer function 1/(1+D). Further details are provided in the following.
In the q=3 embodiment of the encoder, a block of k data bits (u[1], u[2], . . . , u[k]) (the u-block) is subjected to a three-stage process that produces a block of 3k encoded bits (x[1], x[2], . . . , x[3k]) (the x-block). This process is depicted in FIG. 5.
Stage 1 of the encoding process forms the outer encoder stage. This system uses a repetition code. The input “u” block (u[1], . . . , u[k]) is transformed into a 3k-bit data block (v[1], v[2], . . . , v[3k]) (the v-block). This is done by repeating each data bit 3 times, according to the following rule:
v[3i−2]=v[3i−1]=v[3i]=u[i], for i=1, 2, . . . , k.
Stage 2 of the encoding process is the interleaver 510. The interleaver converts the v-block into the w-block as follows:
w[i]=v[π(i)], for i=1, 2, . . . , 3k, where π is a fixed pseudorandom permutation of {1, 2, . . . , 3k}.
Stage 3 of the encoding process is the accumulator 520. This converts the w-block into the x-block by the following rule:
x[1]=w[1], and x[i]=x[i−1]+w[i] (mod 2), for i=2, 3, . . . , 3k.
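Putting the three stages together, the following is a minimal sketch of the q=3 RA encoder; the function name and the caller-supplied permutation are illustrative conventions:

    import numpy as np

    def ra_encode(u, perm, q=3):
        # stage 1: repeat each data bit q times
        v = np.repeat(u, q)
        # stage 2: fixed pseudorandom permutation, w[i] = v[perm[i]]
        w = v[perm]
        # stage 3: accumulate, x[i] = w[i] + x[i-1] mod 2
        return np.bitwise_xor.accumulate(w)

    rng = np.random.default_rng(0)
    u = rng.integers(0, 2, 4)           # k = 4 data bits
    perm = rng.permutation(3 * len(u))  # the interleaver of stage 2
    x = ra_encode(u, perm)              # 3k = 12 code bits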
The accumulator 520 can alternatively be represented as a digital filter with transfer function equal to 1/(1+D), as shown at 425.
The RA coder is a rate 1/q coder, and hence can only provide certain rates, e.g., ½, ⅓, ¼, ⅕, etc. Other variations of this general system form alternative embodiments that can improve performance and provide flexibility in the desired rate.
One such is the “RDD” code. In the RDD encoder, the accumulator 520 of the RA coder is replaced by a “double accumulator,” which can be viewed as a truncated rate 1 convolutional code with transfer function 1/(1+D+D²).
In another preferred embodiment, called the “RAA” code, there are three component codes: the outer code, the middle code, and the inner code. The outer code is a repetition code, and the middle and inner codes are both rate 1 accumulators, each preceded by an interleaver.
As described above, the “repetition number” q of the first stage of the encoder can be any positive integer greater than or equal to 2. The outer encoder is the encoder for the (q, 1) repetition code.
The outer encoder can carry out coding using coding schemes other than simple repetition. In the most general embodiment, the outer encoder is a (q, k) block code. For example, if k is a multiple of 4, the input block can be partitioned into four-bit subblocks, and each 4-bit subblock can be encoded into 8 bits using an encoder for the (8,4) extended Hamming code. Any other short block code can be used in a similar fashion, for example the (23, 12) Golay code.
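As a sketch of this variant, each four-bit subblock can be multiplied by a generator matrix of the (8,4) extended Hamming code; the particular systematic generator G below is one standard choice, assumed here for illustration:

    import numpy as np

    # one standard systematic generator for the (8,4) extended Hamming code
    G = np.array([[1,0,0,0, 0,1,1,1],
                  [0,1,0,0, 1,0,1,1],
                  [0,0,1,0, 1,1,0,1],
                  [0,0,0,1, 1,1,1,0]])

    def outer_encode(u):
        # k must be a multiple of 4; each 4-bit subblock becomes 8 code bits
        return (np.asarray(u).reshape(-1, 4) @ G % 2).ravel()

    print(outer_encode([1,0,1,1, 0,0,1,0]))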
In general, k can be partitioned into subblocks k1, k2, . . . , km such that k1+k2+ . . . +km=k, and the q output bits can be similarly partitioned. Thus, the k input bits can be encoded by m block codes (qi, ki), i=1, . . . , m. In general, these outer codes can be different. Truncated convolutional codes can be used as the block codes. Repetition codes can also be used as the block codes.
In a similar fashion, the q output bits of the interleaver can be partitioned into j subblocks q′1, q′2, . . . , q′j such that q′1+q′2+ . . . +q′j=q. Then each subblock can be encoded with a rate 1 inner code. In general, these inner codes can be different recursive rate 1 convolutional codes.
The accumulator 520 in stage 3 of the encoder can be replaced by a more general device, for example, an arbitrary digital filter using modulo 2 arithmetic with an infinite impulse response (“i.i.r.”).
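A minimal sketch of such a replacement, assuming the filter is specified by the feedback taps of 1/(1+a1·D+a2·D^2+ . . . ) over GF(2) (an illustrative encoding): taps=(1,) reproduces the accumulator 1/(1+D), and taps=(1,1) gives a double accumulator with transfer function 1/(1+D+D²):

    def iir_rate1(w, taps=(1, 1)):
        # x[i] = w[i] + a1*x[i-1] + a2*x[i-2] + ... (mod 2)
        state = [0] * len(taps)   # delay line holding x[i-1], x[i-2], ...
        x = []
        for wi in w:
            xi = wi
            for a, s in zip(taps, state):
                xi ^= a & s            # feedback taps, arithmetic mod 2
            x.append(xi)
            state = [xi] + state[:-1]  # shift the delay line
        return x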
The system can be a straight tree, or a tree with multiple branches.
Some or all of the output bits from the outer encoder can be sent directly to the channel and/or to a modulator for the channel.
Any of a number of different techniques can be used for decoding such a code. For example, soft-input soft-output decoding can be used with a posteriori probability calculations to decode the code.
A specific described decoding scheme relies on exploiting the Tanner Graph representation of an RA code.
Roughly speaking, a Tanner Graph G=(V,E) is a bipartite graph whose vertices can be partitioned into variable nodes Vm and check nodes Vc, where edges E⊂Vm×Vc. Check nodes in the Tanner Graph represent certain “local constraints” on a subset of variable nodes. An edge indicates that a particular variable is present in a particular constraint.
The Tanner Graph realization for an RA code can be described as follows. Writing (x[1], . . . , x[qk]) for the interleaved block of replicas (the w-block above) and (y[1], . . . , y[qk]) for the code bits, the accumulator imposes the equations y[1]=x[1] and y[i]=y[i−1]+x[i], for i=2, . . . , qk.
Notice that every x[i] is a replica of some u[j]. Therefore, all qk equations in the above can be represented by check nodes ci. Both the information bits ui and the code bits yi are represented by variable nodes with the same symbols.
Edges can be naturally generated by connecting each check node to the ui and yi that are present in its equation. Using the notation C={ci}, U={ui}, Y={yi} provides a Tanner Graph representation of an RA code, with Vm=U∪Y and Vc=C.
Generally, in the Tanner Graph for a repetition q RA code, every ui is present in q check nodes, regardless of the block length k. Hence every vertex u∈U has degree q. Similarly, every vertex c∈C has degree 3 (except the first vertex c1, which has degree 2), and every vertex y∈Y has degree 2 (except the last vertex yqk, which has degree 1).
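These degree counts can be verified from a short sketch that enumerates the edge set; the 0-based indexing and the map j(i)=perm[i]//q from interleaved position to information bit are illustrative conventions, not from the specification:

    def ra_tanner_edges(k, q, perm):
        # check node c_i enforces y_i = y_{i-1} + u_{j(i)}  (no y_{i-1} term for c_0)
        n = q * k
        edges = []
        for i in range(n):
            j = perm[i] // q                 # info bit replicated at position i
            edges.append((f"u{j}", f"c{i}"))
            edges.append((f"y{i}", f"c{i}"))
            if i > 0:
                edges.append((f"y{i-1}", f"c{i}"))
        return edges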
“Belief propagation” on the Tanner Graph realization is used to decode RA codes at 910. Roughly speaking, the belief propagation decoding technique allows the message passed on an edge e to represent a posterior density on the bit associated with the variable node incident to e. A probability density on a bit is a pair of non-negative real numbers p0, p1 satisfying p0+p1=1, where p0 denotes the probability of the bit being 0 and p1 the probability of it being 1. Such a pair can be represented by its log likelihood ratio log(p1/p0); with this convention, positive values favor the bit being 1.
It can be assumed that the messages here use this representation.
There are four distinct classes of messages in the belief propagation decoding of RA codes, namely messages sent (received) by some vertex u∈U to (from) some vertex c∈C, which are denoted by m[u,c] (m[c,u]), and messages sent (received) by some vertex y∈Y to (from) some vertex c∈C, which are denoted by m[y,c] (m[c,y]). Messages are passed along the edges of the graph. Both m[y,c] and m[c,y] carry values of the form log(p1/p0) for the code bit y, conditioned on the evidence accumulated so far; m[u,c] and m[c,u] carry the corresponding values for the information bit u.
Each code bit node y also has the belief provided by the received bit yr, whose value is denoted by B(y)=log(Pr(y=1|yr)/Pr(y=0|yr)).
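The specification does not fix a channel model, so the following sketch computes B(y) under two assumed models: a binary symmetric channel with crossover probability p, and BPSK (0→+1, 1→−1) over an AWGN channel:

    import numpy as np

    def channel_beliefs(yr, channel="bsc", p=0.1, sigma=1.0):
        yr = np.asarray(yr, dtype=float)
        if channel == "bsc":
            # B(y) = +log((1-p)/p) if the received bit is 1, else -log((1-p)/p)
            return np.where(yr == 1, 1.0, -1.0) * np.log((1 - p) / p)
        # AWGN with BPSK: log Pr(y=1|yr)/Pr(y=0|yr) = -2*yr/sigma^2
        return -2.0 * yr / sigma**2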
With all the notations introduced, the belief propagation decoding of an RA code can be described as follows:
Initialize all messages m[u,c], m[c,u], m[y,c], m[c,y] to be zero at 905. Then iterate at 910. The messages are continually updated over K rounds at 920 (the number K is predetermined or is determined dynamically by some halting rule during execution of the algorithm). Each round is a sequential execution of the following script:
Update m[y,c]:
m[y,c]=B(y), if y=yqk; otherwise m[y,c]=B(y)+m[c′,y], where (y,c′)∈E and c′≠c.
Update m[c,u]:
m[c,u]=m[y,c], if c=c1, where (y,c)∈E and y∈Y; otherwise m[c,u]=log((e^m[y,c]+e^m[y′,c])/(1+e^(m[y,c]+m[y′,c]))), where (y,c), (y′,c)∈E and y≠y′∈Y.
Update m[u,c]:
m[u,c]=Σc′ m[u,c′], where (u,c′)∈E and c′≠c.
Update m[c,y]:
m[c,y]=m[u,c], if c=c1, where (u,c)∈E and u∈U; otherwise m[c,y]=log((e^m[u,c]+e^m[y′,c])/(1+e^(m[u,c]+m[y′,c]))), where (u,c), (y′,c)∈E and y≠y′∈Y.
Upon completion of the K iterative propagations, the values are calculated based on votes at 930. Specifically, compute
s(u)=Σc m[c,u]
for every u∈U, where the summation is over all the c such that (u,c)∈E. If s(u)≥0, bit u is decoded to be 1; otherwise, it is decoded to be 0.
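The full procedure can be summarized in a sketch that follows the four-message script above, assuming log(p1/p0) messages, the index map j(i)=perm[i]//q, and a fixed round count K; names are illustrative:

    import numpy as np

    def ra_bp_decode(B, perm, q, K=20):
        # B[i] = channel belief log Pr(y_i=1|yr)/Pr(y_i=0|yr); returns k decisions
        n = len(B); k = n // q
        j = [perm[i] // q for i in range(n)]      # info bit in check c_i's equation

        def chk(a, b):
            # log(p1/p0) of the mod-2 sum of two bits with beliefs a, b
            # (clip messages for numerical stability in practice)
            return np.log((np.exp(a) + np.exp(b)) / (1.0 + np.exp(a + b)))

        m_uc = np.zeros(n); m_cu = np.zeros(n)    # u_{j(i)} <-> c_i
        m_yc0 = np.zeros(n); m_yc1 = np.zeros(n)  # y_i -> c_i, y_i -> c_{i+1}
        m_cy0 = np.zeros(n); m_cy1 = np.zeros(n)  # c_i -> y_i, c_i -> y_{i-1}

        for _ in range(K):
            for i in range(n):                    # update m[y,c]
                m_yc0[i] = B[i] + (m_cy1[i+1] if i + 1 < n else 0.0)  # y_qk: degree 1
                if i + 1 < n:
                    m_yc1[i] = B[i] + m_cy0[i]
            m_cu[0] = m_yc0[0]                    # update m[c,u]; c_1 has degree 2
            for i in range(1, n):
                m_cu[i] = chk(m_yc0[i], m_yc1[i-1])
            tot = np.zeros(k)                     # update m[u,c]: sums over c' != c
            for i in range(n):
                tot[j[i]] += m_cu[i]
            for i in range(n):
                m_uc[i] = tot[j[i]] - m_cu[i]
            m_cy0[0] = m_uc[0]                    # update m[c,y]; c_1 case
            for i in range(1, n):
                m_cy0[i] = chk(m_uc[i], m_yc1[i-1])
                m_cy1[i] = chk(m_uc[i], m_yc0[i])

        s = np.zeros(k)                           # s(u) = sum of m[c,u] over all c
        for i in range(n):
            s[j[i]] += m_cu[i]
        return (s >= 0).astype(int)               # s(u) >= 0 decodes to 1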
Although only a few embodiments have been disclosed herein, other modifications are possible. For example, the inner coder is described as being at or close to rate 1. If the rate of the inner coder is less than one, certain bits can be punctured to increase the rate toward 1.
The present application claims benefit of U.S. Provisional Application No. 60/149,871, filed Aug. 18, 1999.
The work described herein may have been supported by Grant No. NCR-9505975, awarded by the National Science Foundation, and Grant No. 5F49620-97-1-0313, awarded by the Air Force. The U.S. Government may have certain rights in this invention.
The present application is related to parent U.S. application Ser. No. 09/922,852, filed August 2000, and child U.S. application Ser. No. 11/429,083, filed May 2006.