Turbo decoder using parallel processing

Abstract
A method of decoding using a log posterior probability ratio L(uk), which is a function of a forward variable α (.) and a backward variable β (.). The method comprises dividing the forward variable α (.) and the backward variable β (.) into, for example, two segments p and q, where p plus q equals the length of the code word U. The forward segments of α (.) are calculated in parallel, and the backward segments of β (.) are calculated in parallel. The ratio L(uk) is calculated using the parallel calculated segments of α (.) and β (.).
Description


BACKGROUND AND SUMMARY OF THE INVENTION

[0001] The present invention relates generally to decoders and, more specifically, to a turbo decoder that reduces processing time for the computation of A Posteriori Probability (APP) and is suitable for implementation in parallel processing architectures.


[0002] Data delivered over a telecommunication channel are subject to channel noise, channel fading, and interference from other channels. As a result, data received at the destination are usually “altered” by the channel from those delivered at the source. To ensure error-free transmission, the data are encoded before transmission by a channel encoder to allow the data receiver to detect or correct the errors. For example, if bit 0 is encoded as 000 and bit 1 is encoded as 111, then when one bit error occurs, 000 may become 100, and 111 may become 101. The receiver can correct 100 as 000 (bit 0) and 101 as 111 (bit 1) by the “majority rule” or the Hamming distance. The part of the receiver responsible for correcting errors is called a channel decoder.
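The repetition-code example above can be sketched in a few lines. This is an illustrative sketch only; the function names are ours, not from the source.

```python
# Sketch of the (bit -> 3-copy) repetition code described above.
# Majority voting corrects any single bit error, which is equivalent to
# picking the codeword (000 or 111) at the smaller Hamming distance.

def encode(bit: int) -> str:
    """Encode bit 0 as '000' and bit 1 as '111'."""
    return str(bit) * 3

def decode(word: str) -> int:
    """Majority rule: at least two '1's decodes to 1, otherwise 0."""
    return 1 if word.count("1") >= 2 else 0

# One-bit channel errors from the text: 000 -> 100 and 111 -> 101.
assert decode("100") == 0   # corrected back to bit 0
assert decode("101") == 1   # corrected back to bit 1
```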


[0003] Turbo encoders and decoders are used in emerging high-speed telecommunications transmission systems, such as terrestrial digital TV communication systems, and third generation wireless (e.g., WCDMA) communication systems. A turbo decoder has been demonstrated to approach the error correcting limit on both AWGN and Rayleigh fading channels.


[0004] Despite its error-correcting efficiency, however, a turbo decoder is computing intensive. To meet real-time performance requirements (e.g., a few milliseconds), it is usually implemented in an ASIC. If a turbo decoder is to be implemented in software running on a DSP or a CPU, as in the context of software defined radio, meeting its real-time performance requirements becomes a challenge.


[0005] A 3GPP turbo encoder (FIG. 1) consists of a parallel concatenation of two identical RSC (Recursive Systematic Convolutional) encoders separated by an interleaver. The info word U of length K is encoded by the first RSC encoder, and the interleaved info word is encoded by the second RSC encoder. The interleaver de-correlates the inputs to the two RSCs by reordering the input bits to the second RSC, so that it is unlikely that the encoded bits from both RSCs form low-weight code words at the same time. It also helps the encoded bits cope with bursty noise. In the 3GPP turbo encoder, a pseudo-random block interleaver is used. Both RSC encoded words are terminated by a trellis termination. The turbo encoded word consists of the systematic bits and two sets of parity bits (U, Xp1, Xp2).


[0006] As shown in FIG. 2, the standard turbo decoder consists of two concatenated SISO (Soft Input Soft Output) blocks, one for each set of systematic and parity bits, (U′, Xp1′) and (U′, Xp2′), where Xp1′ and Xp2′ denote the noisy versions of Xp1 and Xp2, respectively, and likewise U′ of U (U refers to the info word). The SISO blocks are A Posteriori Probability (APP) decoders, also known as Maximum A Posteriori (MAP) decoders. The two SISO blocks are separated by the same interleaver as in the encoder and its inverse block, the deinterleaver. Upon reception of the bits from the channel and the prior information, each SISO block computes the log posterior ratio of each bit with the well-known forward-backward algorithm. Once a SISO block computes the log posterior ratios of all bits, it separates out the probabilistic entity that was calculated based on its own input from the overall posterior and passes it to the other SISO block. This probabilistic entity is often called extrinsic information (L12 and L21 in FIG. 2), which the other SISO block uses as prior information. The two SISO blocks run in an iterative scheme, mutually exchanging extrinsic information. After the required number of iterations is completed, a hard decision is made based on the soft information accumulated up to that iteration.
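The iterative exchange can be sketched structurally as follows. This is a toy sketch, not a real MAP decoder: `siso` is a hypothetical stand-in that merely combines its soft inputs, and all names are ours. It only illustrates the data flow of the extrinsic information L12/L21 through the interleaver and deinterleaver.

```python
import numpy as np

# Toy sketch of the iterative SISO exchange (not a real MAP decoder):
# soft values are log-likelihood ratios; `siso` stands in for an APP
# decoder and simply strips its inputs from a combined posterior to
# produce "extrinsic" information.

def siso(systematic, parity, prior):
    posterior = systematic + parity + prior   # toy log-domain combination
    return posterior - systematic - prior     # remove input info -> extrinsic

def turbo_decode(u, p1, p2, perm, n_iter=4):
    """Iterate SISO1/SISO2, exchanging extrinsic info through the interleaver."""
    inv = np.argsort(perm)                    # deinterleaver permutation
    l21 = np.zeros_like(u, dtype=float)       # prior for SISO1, initially zero
    for _ in range(n_iter):
        l12 = siso(u, p1, l21)                # extrinsic L12 from SISO1
        l21 = siso(u[perm], p2, l12[perm])[inv]   # SISO2 on interleaved data
    soft = u + p1 + l21                       # accumulated soft information
    return (soft > 0).astype(int)             # hard decision after iterations
```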


[0007] The log posterior probability ratio can be written as:
L(uk) = log( P(uk=+1|y) / P(uk=−1|y) ) = log( [ΣS+ P(Sk−1, Sk, y)/P(y)] / [ΣS− P(Sk−1, Sk, y)/P(y)] ),  (1)


[0008] where S+ and S− denote the set of all possible state transitions caused by data input uk=+1 and uk=−1, respectively, and y denotes the set of observations, y=(y1, . . . , yK), where yk=(uk′, xk′), k=1, . . . , K. Note that y ∈ (U′, Xp1′, Xp2′).


[0009] As usual, the posterior probability can be obtained by computing a weighted likelihood, where the weights are provided by the prior probability of the event uk. Direct evaluation of the weighted likelihood requires summation over a very large number of state patterns, which grows exponentially with the sequence length K. Because of this combinatorial complexity, it is not computationally feasible even for a reasonable sequence length.


[0010] To reduce the computation, an efficient procedure, known as the forward-backward algorithm, is often used. In this algorithm, the posterior probability P(uk|y) is factorized into the following three terms:


[0011] Forward variable, αk(.),


[0012] Backward variable, βk(.),


[0013] State transition probability, γk(.,.).


[0014] The αk(.) is the joint probability of the observations y1, . . . , yk and the state at time k, that is, αk(S)=P(Sk, y1, . . . , yk). The βk(.) represents the conditional probability of the future observations given the state at time k, βk(S)=P(yk+1, . . . , yK|Sk). The γk(.,.) is the probability of the state transition from k−1 to k caused by uk, expressed as γk(S′, S)=P(Sk=S, yk|Sk−1=S′).


[0015] The procedure of recursive calculation of αk(S) is implemented according to
αk(S) = ΣS′ αk−1(S′) γk(S′, S).


[0016] For βk(S), the calculation proceeds recursively as:
βk(S) = ΣS′ γk+1(S, S′) βk+1(S′).


[0017] Since the turbo encoder is expected to start and end in state 1, the initial conditions for αk(.) and βk(.) are known and given as α0(S)=δ{S,1} and βK(S)=δ{S,1}, respectively, where δ{.,.} denotes the Kronecker delta.


[0018] Calculation of the posterior entity L(uk) as a function f(αk(.), βk(.)) is then equivalent to:
P(uk|y) = ΣS* αk−1(S′) γk(S′, S) βk(S) / P(y),  (2)


[0019] where S* is the set of state pairs corresponding to all state transitions caused by uk=+1/−1, and P(y) is a normalization constant.


[0020] The procedure of forward and backward algorithm is summarized as:


[0021] Calculate γk(.,.), k=1, 2, . . . , K;


[0022] Calculate αk(.) forward recursively, k=0, 1, 2, . . . , K;


[0023] Calculate βk(.) backward recursively, k=K, K−1, . . . , 0;


[0024] Calculate (2) to form (1).
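For concreteness, the four steps above can be sketched numerically in Python with NumPy. This is our illustrative sketch under stated assumptions: `gamma[k]` holds the transition probabilities γk+1(S′, S), `plus` marks the transitions caused by uk=+1, and the 2-state toy trellis in the accompanying test is ours (the 3GPP trellis has 8 states, but the recursions are identical).

```python
import numpy as np

# Minimal numeric sketch of steps [0021]-[0024] on a generic trellis.
# gamma: K x S x S array, gamma[k][s', s] = gamma_{k+1}(S', S);
# plus:  S x S boolean mask of transitions caused by u_k = +1.

def forward_backward_llr(gamma, plus):
    K, S, _ = gamma.shape
    alpha = np.zeros((K + 1, S)); alpha[0, 0] = 1.0   # alpha_0(S) = delta{S,1}
    beta = np.zeros((K + 1, S)); beta[K, 0] = 1.0     # beta_K(S) = delta{S,1}
    for k in range(1, K + 1):                         # forward recursion
        alpha[k] = alpha[k - 1] @ gamma[k - 1]        # sum over S' of alpha*gamma
        alpha[k] /= alpha[k].sum()                    # normalize for stability
    for k in range(K - 1, -1, -1):                    # backward recursion
        beta[k] = gamma[k] @ beta[k + 1]              # sum over S' of gamma*beta
        beta[k] /= beta[k].sum()
    llr = np.empty(K)
    for k in range(K):                                # equation (2) -> ratio (1)
        joint = alpha[k][:, None] * gamma[k] * beta[k + 1][None, :]
        llr[k] = np.log(joint[plus].sum() / joint[~plus].sum())
    return llr
```

Normalizing α and β at each step does not affect L(uk), since the common scale cancels in the ratio of equation (1).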


[0025] The present invention is a method of decoding using a log posterior probability ratio L(uk), which is a function of the forward variable α (.) and the backward variable β (.). The method comprises dividing the forward variable α (.) and the backward variable β (.) into, for example, two segments p and q, where p plus q equals the length of the code word U. The forward segments of α (.) are calculated in parallel, and the backward segments of β (.) are calculated in parallel. The ratio L(uk) is then calculated using the parallel calculated segments of α (.) and β (.). The first forward segment α1(.), . . . , αp(.) is calculated starting from α0(.), whereas the second forward segment αp+1(.), . . . , αK(.) is calculated starting from an estimated αp(.). The first backward segment βK(.), . . . , βq+1(.) is calculated starting from βK(.), and the second backward segment βq(.), . . . , β1(.) is calculated starting from an estimated βq+1(.).


[0026] To obtain the estimated starting point αp′(.) for the second forward segment, the forward variable is calculated recursively from time p−d+1, where d is an arbitrary number of time steps and the state at time p−d+1 is treated as a uniform random variable. Similarly, for βq+1′(.), the backward variable is calculated from time q+d, and again the state at time q+d is treated as a uniform random variable. By treating the states at times p−d+1 and q+d as uniform random variables, no informative prior knowledge of the states at those times is assumed.


[0027] The arbitrary number of time steps, d, may be in the range of 1 to 20, or more narrowly in the range of 15 to 20. The starting points for the estimation may also be set to a predetermined state probability, for example one divided by the number of possible states.
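As a sketch, the warm-up estimation of paragraphs [0026]-[0027] can be written as follows. The Python code and the asymmetric two-state test chain are our own illustrative choices; `gamma` is assumed to be a K x S x S array of transition probabilities as in the forward recursion above.

```python
import numpy as np

# Sketch of the starting-point estimation: to estimate alpha_p(.) without
# knowing the true state distribution before time p, start d steps earlier
# from a uniform distribution (one divided by the number of states) and
# run the ordinary forward recursion up to time p.

def estimate_alpha(gamma, p, d):
    num_states = gamma.shape[1]
    a = np.full(num_states, 1.0 / num_states)   # uniform: no prior knowledge
    for k in range(p - d, p):                   # d forward steps ending at p
        a = a @ gamma[k]                        # alpha recursion step
        a = a / a.sum()                         # normalize for stability
    return a                                    # estimated alpha_p'(.)
```

With d around 15 to 20 the estimate is typically close to the exact αp(.), because the influence of the unknown state at the warm-up start washes out over the d steps.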


[0028] The method may include dividing the forward variable α (.) and the backward variable β (.) into more than two segments, with each of the forward and backward segments calculated in parallel.


[0029] The process is performed in a signal receiver including a decoder.


[0030] These and other aspects of the present invention will become apparent from the following detailed description of the invention, when considered in conjunction with accompanying drawings.







BRIEF DESCRIPTION OF DRAWINGS

[0031]
FIG. 1 is a block diagram of a 3GPP turbo encoder of the prior art.


[0032]
FIG. 2 is a block diagram of a 3GPP turbo decoder of the prior art.


[0033]
FIG. 3 is a graph of bit error rate (BER) for a turbo decoder of the prior art and for a turbo decoder of the present invention under various signal to noise ratios (SNRs).


[0034]
FIG. 4 is a graph of block error rate or packet error rate (BLER) of the prior art turbo decoder and a turbo decoder according to the present invention under various SNRs.







DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

[0035] One problem with the standard turbo decoding algorithm is that if the size K of the input sequence is large, the time required for the computation of the above forward and backward variables grows, causing a long latency through the forward and backward recursions. K may reach 5114 bits. To reduce the time for calculating αk(.) and βk(.), the input data is divided into M segments, and the αk(.) and βk(.) for the M segments are calculated simultaneously. Truncation loss may occur with this segment-based approach; however, simulation results show that the loss is negligible for M=2.


[0036] In theory, this parallel scheme reduces the computation time for αk(.) and βk(.) to nearly 1/M of the original (e.g., ½ for M=2). The parallel computing applies to the calculation of αk(.) and βk(.), which is the most computationally intensive part of a turbo decoder.


[0037] The parallel algorithm will be discussed for the case of M=2 as an example. For M>2, the algorithm is similar. The algorithm consists of the following steps:


[0038] 1. Divide the forward and the backward variable into two parallel segments of size p and q, respectively, where p+q=K.


[0039] 2. Calculate the four segments simultaneously with the following four processes:


[0040] Process 1: calculate α1(.), . . . , αp(.) starting from α0(.);


[0041] Process 2: calculate αp+1(.), . . . , αK(.) starting from an estimated αp(.), say αp′(.);


[0042] Process 3: calculate (backward) βK(.), . . . , βq+1(.) starting from βK(.); and


[0043] Process 4: calculate (backward) βq(.), . . . , β1(.) starting from an estimated βq+1(.), say βq+1′(.).


[0044] Processes 1 and 3 run as the regular turbo alpha and beta calculations with known initial points (at reduced size); processes 2 and 4 require estimated initial points.
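As an illustrative sketch of processes 1 and 2 for the forward variable (processes 3 and 4 for the backward variable are symmetric), the two segments can be submitted to a thread pool and run concurrently. All function names and the two-state test chain are ours; process 2 includes the warm-up estimation from a uniform start.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

# Sketch of processes 1 and 2 for M = 2 (the backward side is symmetric).
# Process 1 recurses alpha_1..alpha_p from the known alpha_0; process 2
# first forms a warm-up estimate alpha_p'(.) from a uniform distribution
# d steps before time p, then recurses alpha_{p+1}..alpha_K from it.

def run_alpha(gamma, start, k_from, k_to):
    """Forward recursion from time k_from (distribution `start`) up to k_to."""
    out, a = [], start
    for k in range(k_from, k_to):
        a = a @ gamma[k]
        a = a / a.sum()                 # normalize for numerical stability
        out.append(a)
    return out                          # alpha_{k_from+1} .. alpha_{k_to}

def process2(gamma, p, d, K):
    num_states = gamma.shape[1]
    warm = np.full(num_states, 1.0 / num_states)        # uniform prior, d steps early
    warm = run_alpha(gamma, warm, p - d, p)[-1]         # estimated alpha_p'(.)
    return run_alpha(gamma, warm, p, K)                 # alpha_{p+1} .. alpha_K

def parallel_alpha(gamma, p, d):
    K, num_states = gamma.shape[0], gamma.shape[1]
    a0 = np.zeros(num_states)
    a0[0] = 1.0                                         # known alpha_0 = delta{S,1}
    with ThreadPoolExecutor(max_workers=2) as pool:
        seg1 = pool.submit(run_alpha, gamma, a0, 0, p)  # process 1
        seg2 = pool.submit(process2, gamma, p, d, K)    # process 2
        return seg1.result() + seg2.result()            # alpha_1 .. alpha_K
```

The first segment is identical to the serial computation; the second differs only through the warm-up estimate, and that difference shrinks as the recursion proceeds.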


[0045] The following algorithm can be used to obtain αp′(.) for process 2 and βq+1′(.) for process 4. The first iteration starts from αp−d+1(.), where d is an arbitrary number of time steps. The state at time p−d+1 is treated as a uniform random variable. This implies that the probability that a specific state occurs at time p−d+1 is ⅛, since the 3GPP turbo encoder has 8 states. As a consequence, αp−d+1(.)=⅛, and likewise βq+d(.)=⅛. Starting from this uniform prior, when the process reaches time p, the estimate αp′(.) results, and at time q+1, the estimate βq+1′(.) results.


[0046] The amount of information extracted from the observations during the duration d is proportional to d, so a longer d may give a better initial estimate. However, since the computation during the d steps represents “overhead”, d should not be increased beyond a certain limit. While d may be in the range of 1-20, simulations show that d=15˜20 provides decent results. From the second iteration on, αp−d+1(.) and βq+d(.) can be chosen in the same way as for the first iteration, or the values resulting from process 1 and process 3 in the previous iteration can be used.


[0047] Simulation scenarios are defined by the SNR (signal to noise ratio). For each scenario, 2000 packets of size 5114 bits were randomly generated, turbo encoded, and subjected to AWGN noise. The “spoiled” packets were run through both the regular prior-art turbo decoder and the present parallel turbo decoder. For the parallel turbo decoder of the present invention, the number of divisions was M=2, the length for the initial estimation was d=20, and the appropriate values from the previous iteration were used as the starting points of the initial estimation for the current iteration.


[0048]
FIG. 3 and FIG. 4 compare the BER (bit error rate) and BLER (block error rate, or packet error rate) of the regular turbo decoder and the parallel turbo decoder under various SNRs. The results were so close that they are not distinguishable on the graphs.


[0049] Although the number of segments M=2 has been used as an example, a larger M may be used. In that case, the equations take the general form L(uk)=f(αk(.), βk(.)). The estimated starting points are αp(.), . . . , αw(.) and βw(.), . . . , βq+1(.). The forward variable segments are calculated as follows:
α1(.), . . . , αp(.) starting from α0(.)
αp+1(.), . . . , αq(.) starting from αp(.)
. . .
αw+1(.), . . . , αK(.) starting from αw(.);


[0050] and the reverse variable segments are calculated as follows:
βK(.), . . . , βw+1(.) starting from βK(.)
βw(.), . . . , βv+1(.) starting from βw+1(.)
. . .
βq(.), . . . , β1(.) starting from βq+1(.).


[0051] The starting points for the forward variable are estimated from:
αp−d+1(.), . . . , αp(.) and αw−d+1(.), . . . , αw(.);


[0052] and for the backward segments from:


[0053] βw+d(.), . . . , βw+1(.) and


[0054] βq+d(.), . . . , βq+1(.),


[0055] where d is an arbitrary number of time steps.


[0056] It should also be noted that, although turbo decoders are discussed, any system that uses A Posteriori Probability decoding may use the present invention.


[0057] Although the present invention has been described and illustrated in detail, it is to be clearly understood that this is done by way of illustration and example only and is not to be taken by way of limitation. The spirit and scope of the present invention are to be limited only by the terms of the appended claims.


Claims
  • 1. A method of a turbo decoder using log A Posteriori Probability L(uk), where L(uk)=f(αk(S), βk(S)), the method comprising: dividing a forward variable α (.) and a backward variable β (.) into a plurality M of parallel segments of size p, q, . . . , w, where p+q+ . . . +w equals the length of a coded word U; parallel calculating the segments of the forward variable α (.) and the backward variable β (.); and calculating L(uk) using the parallel calculated segments of α (.) and β (.).
  • 2. The method according to claim 1, wherein the forward variable segments are calculated as follows: α1(.), . . . , αp(.) starting from α0(.); αp+1(.), . . . , αq(.) starting from αp(.); . . . ; αw+1(.), . . . , αK(.) starting from αw(.); and the reverse variable segments are calculated as follows: βK(.), . . . , βw+1(.) starting from βK(.); βw(.), . . . , βv+1(.) starting from βw+1(.); . . . ; βq(.), . . . , β1(.) starting from βq+1(.).
  • 3. The method according to claim 2, wherein the starting points αp(.), . . . , αw(.) are estimated, and the starting points βw(.), . . . , βq+1(.) are estimated.
  • 4. The method according to claim 3, wherein the starting points for the forward variable are estimated from:
  • 5. The method according to claim 4, wherein d is in the range of 1 to 20.
  • 6. The method according to claim 4, wherein d is in the range of 15 to 20.
  • 7. The method according to claim 4, wherein the starting points for the estimated states are uniform random variables.
  • 8. The method according to claim 4, wherein the starting points for the estimated states are a predetermined state.
  • 9. The method according to claim 4, wherein the starting point for the estimated states is one divided by the number of possible states.
  • 10. The method according to claim 1, wherein the segment sizes are set equal.
  • 11. The method according to claim 1, wherein the variables are divided only into two segments p and q.
  • 12. A method of decoding using a log A Posteriori Probability ratio L(uk), where L(uk)=f(αk(S), βk(S)), the method comprising: dividing a forward variable α (.) and a backward variable β (.) into two segments p and q, where p+q equals the length of the code word U; parallel calculating the forward segments: α1(.), . . . , αp(.) starting from a known α0(.) and αp+1(.), . . . , αK(.) starting from an estimated αp(.); parallel calculating the backward segments: βK(.), . . . , βq+1(.) starting from a known βK(.) and βq(.), . . . , β1(.) starting from an estimated βq+1(.); and calculating L(uk) using the parallel calculated segments of α (.) and β (.).
  • 13. A method according to claim 12, wherein the estimated starting point αp(.) for αp+1(.) is estimated from prior probabilities αp−d+1(.), . . . , αp(.), and the estimated starting point βq+1(.) for βq(.) is estimated from prior probabilities βq+d(.), . . . , βq+1(.).
  • 14. A method according to claim 13, wherein d is in the range of 1 to 20.
  • 15. A method according to claim 13, wherein d is in the range of 15 to 20.
  • 16. A method according to claim 13, wherein the starting points for the estimated states are uniform random variables.
  • 17. A method according to claim 13, wherein the starting points for the estimated states are a predetermined state.
  • 18. A method according to claim 13, wherein the starting point for the estimated states is one divided by the number of possible states.
  • 19. A signal receiver including a decoder performing the method according to claim 1.
  • 20. A signal receiver including a decoder performing the method according to claim 11.