Turbo decoder using parallel processing

Abstract
A method of decoding using a log posterior probability ratio L(uk), which is a function of forward variable α (.) and backward variable β (.). The method comprises dividing the forward variable α (.) and the backward variable β (.) into, for example, two segments p and q, where p plus q equals the length of the code word U. The forward segments α (.) are calculated in parallel, and the backward segments β (.) are calculated in parallel. The ratio L(uk) is calculated using the parallel calculated segments of α (.) and β (.).
Description
BACKGROUND AND SUMMARY OF THE INVENTION

The present invention relates generally to decoders and, more specifically, to a turbo decoder that reduces processing time for the computation of A Posteriori Probability (APP) and is suitable for implementation in parallel processing architectures.


Data delivered over a telecommunication channel are subject to channel noise, channel fading, and interference from other channels. As a result, data received at the destination are usually “altered” by the channel from those delivered at the source. To ensure error-free transmission, the data are encoded before transmission by a channel encoder to allow the data receiver to detect or correct the errors. For example, if bit 0 is encoded as 000 and bit 1 is encoded as 111, then when one bit error occurs, 000 may become 100, and 111 may become 101. The receiver can correct 100 as 000 (bit 0) and 101 as 111 (bit 1) by the “majority rule” or the Hamming distance. The part of the receiver responsible for correcting errors is called a channel decoder.
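The majority-rule correction in the example above can be illustrated with a minimal sketch; the function names are illustrative only and are not part of any standard API.

```python
# Minimal sketch of the 3-bit repetition example above (illustrative names).

def encode_repetition(bit: int) -> list[int]:
    """Encode one bit as three identical bits: 0 -> 000, 1 -> 111."""
    return [bit] * 3

def decode_majority(word: list[int]) -> int:
    """Correct a single bit error by majority rule (minimum Hamming distance)."""
    return 1 if sum(word) >= 2 else 0

assert decode_majority([1, 0, 0]) == 0   # 100 is corrected to 000, i.e. bit 0
assert decode_majority([1, 0, 1]) == 1   # 101 is corrected to 111, i.e. bit 1
```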


Turbo encoders and decoders are used in emerging high-speed telecommunications transmission systems, such as terrestrial digital TV communication systems, and third generation wireless (e.g., WCDMA) communication systems. A turbo decoder has been demonstrated to approach the error correcting limit on both AWGN and Rayleigh fading channels.


Despite its error-correcting efficiency, however, a turbo decoder is computing intensive. To meet real-time performance requirements (e.g., a few milliseconds), it is usually implemented in an ASIC. If a turbo decoder is to be implemented in software running on a DSP or a CPU, as in the context of software defined radio, meeting real-time performance becomes a challenge.


A 3GPP turbo encoder (FIG. 1) consists of a parallel concatenation of two identical RSC (Recursive Systematic Convolutional) encoders separated by an interleaver. The info word U of length K is encoded by the first RSC encoder, and the interleaved info word is encoded by the second RSC encoder. The interleaver de-correlates the inputs to the two RSCs by reordering the input bits to the second RSC, so that it is unlikely that the outputs of both RSCs are low-weight code words at the same time. It also helps the encoded bits cope with bursty noise. In the 3GPP turbo encoder, a pseudo-random block interleaver is used. Both RSC encoded words are terminated by a trellis termination. The turbo encoded words consist of the systematic bits and two sets of parity bits (U, Xp1, Xp2).


As shown in FIG. 2, the standard turbo decoder consists of two concatenated SISO (Soft Input Soft Output) blocks, one for each set of systematic and parity bits, (U′, Xp1′) and (U′, Xp2′), where Xp1′ and Xp2′ denote the noisy versions of Xp1 and Xp2, respectively, and likewise U′ for U (U refers to the info words). The SISO blocks are A Posteriori Probability (APP) decoders, also known as Maximum A Posteriori (MAP) decoders. The two SISO blocks are separated by the same interleaver as used in the encoder and its inverse block, the deinterleaver. Upon reception of bits from the channel and a priori information, each SISO block computes the log posterior ratio of each bit with the well-known forward and backward algorithm. Once a SISO block has computed the log posterior ratios of all bits, it separates from the overall posterior a probabilistic entity that was calculated based on its own input, then passes it to the other SISO block. This probabilistic entity is often called extrinsic information (L12 and L21 in FIG. 2) and is used by the other SISO block as prior information. The two SISO blocks run in an iterative scheme, mutually exchanging extrinsic information. After the required number of iterations is completed, a hard decision is made based on the soft information accumulated up to that iteration.
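The iterative exchange just described can be summarized in the following skeleton. This is a hedged sketch, not the 3GPP implementation: siso_decode, interleave, and deinterleave are caller-supplied placeholders standing in for the APP decoder and the pseudo-random block interleaver, and the number of iterations is an arbitrary example.

```python
import numpy as np

def turbo_decode(u_obs, xp1_obs, xp2_obs, siso_decode, interleave, deinterleave,
                 n_iterations: int = 8):
    """Skeleton of the iterative scheme of FIG. 2 (illustrative only).

    siso_decode(systematic, parity, prior) is assumed to return a pair
    (posterior_llr, extrinsic_llr), one value per bit; interleave/deinterleave
    stand in for the 3GPP block interleaver and its inverse.
    """
    K = len(u_obs)
    extrinsic_21 = np.zeros(K)          # L21 in FIG. 2: prior for SISO 1
    posterior = np.zeros(K)
    for _ in range(n_iterations):
        # SISO 1 works on (U', Xp1') with the extrinsic from SISO 2 as prior.
        _, extrinsic_12 = siso_decode(u_obs, xp1_obs, extrinsic_21)
        # SISO 2 works on the interleaved systematic bits and Xp2'.
        posterior_int, extrinsic_21_int = siso_decode(
            interleave(u_obs), xp2_obs, interleave(extrinsic_12))
        extrinsic_21 = deinterleave(extrinsic_21_int)
        posterior = deinterleave(posterior_int)
    # Hard decision based on the accumulated soft information.
    return (posterior > 0).astype(int)
```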


The log posterior probability ratio can be written as:

\[
L(u_k) \;=\; \log\!\left(\frac{P(u_k = +1 \mid y)}{P(u_k = -1 \mid y)}\right)
\;=\; \log\!\left(\frac{\sum_{S^{+}} P(S_{k-1}, S_k, y)\,/\,P(y)}
                      {\sum_{S^{-}} P(S_{k-1}, S_k, y)\,/\,P(y)}\right),
\qquad (1)
\]

where S+ and S− denote the sets of all possible state transitions caused by data input uk=+1 and uk=−1, respectively, and y denotes the set of observations, y=(y1, . . . , yK), where yk=(uk′, xk′), k=1, . . . , K. Note that y ∈ (U′, Xp1′, Xp2′).


As usual, the posterior probability can be obtained by computing a weighted likelihood, where the weights are provided by the prior probability of the event uk. Direct evaluation of the weighted likelihood requires summations over a very large number of state patterns, a number that grows combinatorially with the sequence length K. Because of this combinatorial complexity, direct evaluation is not computationally feasible even for a moderate sequence length.
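For orientation, a rough comparison of operation counts, assuming the 8-state trellis mentioned later in this description (the figures are order-of-magnitude illustrations, not part of the original text):

```latex
% Direct summation over all state sequences vs. the forward-backward recursions
% (8 states, sequence length K); illustrative orders of magnitude only.
\[
  \underbrace{O\!\left(8^{K}\right)}_{\text{direct evaluation over state sequences}}
  \qquad\text{versus}\qquad
  \underbrace{O\!\left(K \cdot 8^{2}\right)}_{\text{forward and backward recursions}}
\]
```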


To reduce the computation, an efficient procedure, known as the forward and backward algorithm, is often used. In this algorithm, the posterior probability P(uk|y) is factorized into the following three terms:

    • Forward variable, αk(.),
    • Backward variable, βk(.),
    • State transition probability, γk(.,.).


The αk(.) is the joint probability of the observations y1, . . . , yk and the state at time k, that is, αk(S)=P(Sk, y1, . . . , yk). The βk(.) represents the conditional probability of the future observations given the state at time k, βk(S)=P(yk+1, . . . , yK|Sk). The γk(.,.) is the probability of the state transition from k−1 to k caused by uk, expressed as γk(S′, S)=P(Sk=S, yk|Sk−1=S′).


The procedure of recursive calculation of αk(S) is implemented according to

\[
\alpha_k(S) \;=\; \sum_{S'} \alpha_{k-1}(S')\,\gamma_k(S', S).
\]

For βk(S), the calculation proceeds recursively as:

\[
\beta_k(S) \;=\; \sum_{S'} \beta_{k+1}(S')\,\gamma_{k+1}(S, S').
\]

Since the turbo encoder is expected to start and end in state 1, the initial conditions for αk(.) and βk(.) are known and given as α0(S)=δ{S,1} and βK(S)=δ{S,1}, respectively, where δ{.,.} denotes the Kronecker delta.


Calculation of the posterior entity L(uk) as a function f(αk(.), βk(.)) is then equivalent to:

\[
P(u_k \mid y) \;=\; \frac{\sum_{S^{*}} \alpha_{k-1}(S')\,\gamma_k(S', S)\,\beta_k(S)}{P(y)},
\qquad (2)
\]
where S* is the set of state pairs corresponding to all state transitions caused by uk=+1/−1, and P(y) is a normalization constant.


The procedure of forward and backward algorithm is summarized as:

    • Calculate γk(.,.), k=1,2, . . . , K;
    • Calculate αk(.) forward recursively, k=1,2, . . . , K;
    • Calculate βk(.) backward recursively, k=K−1, . . . , 0;
    • Calculate (2) to form (1).
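The four steps above can be sketched as follows. This is a minimal illustration, not the 3GPP trellis implementation: the transition probabilities γ are assumed to be supplied already evaluated, split by the sign of uk, and normalization of α and β stands in for the division by P(y).

```python
import numpy as np

def forward_backward_llr(gamma_plus, gamma_minus, n_states):
    """Sketch of steps 1-4 above (illustrative, probability-domain).

    gamma_plus[k] and gamma_minus[k] are assumed to be n_states x n_states
    matrices of gamma_{k+1}(S', S) restricted to transitions caused by
    u = +1 and u = -1, respectively, for k = 0, ..., K-1 (0-based).
    State 1 of the encoder is represented by index 0.
    """
    K = len(gamma_plus)
    gamma = [gp + gm for gp, gm in zip(gamma_plus, gamma_minus)]

    # Forward recursion: alpha_k(S) = sum_{S'} alpha_{k-1}(S') gamma_k(S', S),
    # starting from alpha_0(S) = delta(S, 1).
    alpha = np.zeros((K + 1, n_states))
    alpha[0, 0] = 1.0
    for k in range(1, K + 1):
        alpha[k] = alpha[k - 1] @ gamma[k - 1]
        alpha[k] /= alpha[k].sum()              # normalization in place of P(y)

    # Backward recursion: beta_k(S) = sum_{S'} beta_{k+1}(S') gamma_{k+1}(S, S'),
    # starting from beta_K(S) = delta(S, 1).
    beta = np.zeros((K + 1, n_states))
    beta[K, 0] = 1.0
    for k in range(K - 1, -1, -1):
        beta[k] = gamma[k] @ beta[k + 1]
        beta[k] /= beta[k].sum()

    # Combine per equation (2) and form the log ratio of equation (1).
    llr = np.zeros(K)
    for k in range(K):
        num = alpha[k] @ gamma_plus[k] @ beta[k + 1]
        den = alpha[k] @ gamma_minus[k] @ beta[k + 1]
        llr[k] = np.log(num / den)
    return llr
```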


The present invention is a method of decoding using a log posterior probability ratio L(uk), which is a function of forward variable α (.) and backward variable β (.). The method comprises dividing the forward variable α (.) and the backward variable β (.) into, for example, two segments p and q, where p plus q equals the length of the code word U. The forward segments α (.) are calculated in parallel, and the backward segments β (.) are calculated in parallel. The ratio L(uk) is calculated using the parallel calculated segments of α (.) and β (.). The first forward segment is calculated from α1(.), . . . , αp(.) starting from α0(.), whereas the second forward segment is calculated from αp+1(.), . . . , αK(.) starting from an estimated αp(.). The first backward segment is calculated from βK−1(.), . . . , βq+1(.) starting from βK(.), and the second backward segment is calculated from βq(.), . . . , β1(.) starting from an estimated βq+1(.).


To obtain the estimated initial point αp(.), the forward variable is calculated recursively from time p−d+1, where d is an arbitrary number of time steps, and the state at time p−d+1 is treated as a uniform random variable. Similarly, for βq+1(.), the backward variable is calculated from time q+d, and again the state at time q+d is treated as a uniform random variable. By treating the states at times p−d+1 and q+d as uniform random variables, no informative prior knowledge of the states at those times is assumed.


The arbitrary number of time steps, d, may be in the range of 1 to 20, or more narrowly in the range of 15 to 20. Also, the starting points for the estimation may be set to a predetermined state, and the probability used for this predetermined state may be one divided by the number of possible states.


The method may include dividing the forward variable α (.) and the backward variable β (.) into more than two segments, and each of the forward and backward segments would be calculated in parallel.


The process is performed in a signal receiver including a decoder.


These and other aspects of the present invention will become apparent from the following detailed description of the invention, when considered in conjunction with accompanying drawings.





BRIEF DESCRIPTION OF DRAWINGS


FIG. 1 is a block diagram of a 3GPP turbo encoder of the prior art.



FIG. 2 is a block diagram of a 3GPP turbo decoder of the prior art.



FIG. 3 is a graph of bit error rate (BER) for a turbo decoder of the prior art and for a turbo decoder of the present invention under various signal to noise ratios (SNRs).



FIG. 4 is a graph of block error rate or packet error rate (BLER) of the prior art turbo decoder and a turbo decoder according to the present invention under various SNRs.



FIG. 5 is a block diagram of a Soft Input Soft Output block according to the present disclosure.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

One problem with the standard turbo decoding algorithm is that if the size of the input sequence K is large, the time required for the computation of the above forward and backward variables grows, causing a long latency as the forward and backward recursions are carried out. K may reach 5114 bits. To reduce the time spent calculating αk(.) and βk(.), the input data are divided into M segments, and the αk(.) and βk(.) for the M segments are calculated simultaneously. Truncation loss may occur with this segment-based approach; however, simulation results show that the loss was negligible when M=2.


In theory, this parallel scheme reduces the computation time for αk(.) and βk(.) to nearly 1/M of the original calculation (e.g., ½ for M=2). The parallel computing applies to the calculation of αk(.) and βk(.), which is the most computationally intensive part of a turbo decoder.
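As a rough, illustrative calculation (the specific numbers are taken from the K=5114 packet size and d=20 mentioned elsewhere in this description; the formula counts only recursion steps and treats the warm-up as pure overhead):

```latex
% Illustrative latency estimate, not a measured result.
% Sequential alpha/beta recursions: about K steps each.
% Parallel scheme: about K/M steps per segment plus a warm-up of d steps.
\[
  \text{speedup} \;\approx\; \frac{K}{K/M + d}
  \;=\; \frac{5114}{5114/2 + 20} \;\approx\; 1.98
  \qquad (K = 5114,\; M = 2,\; d = 20).
\]
```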


The parallel algorithm will be discussed for the case of M=2 as an example. For M>2, the algorithm is similar. The algorithm consists of the following steps:


1. Divide the forward and the backward variable into two parallel segments of size p and q, respectively, where p+q=K.


2. Calculate the four segments simultaneously with the following four processes as shown in FIG. 5:

    • Process 1: calculate α1(.), . . . , αp(.) starting from α0(.);
    • Process 2: calculate αp+1(.), . . . , αK(.) starting from an estimated αp(.), say αp′(.);
    • Process 3: calculate (backwardly) βK−1(.), . . . , βq+1(.) starting from βK(.); and
    • Process 4: calculate (backwardly) βq(.), . . . , β1(.) starting from an estimated βq+1(.), say βq+1′(.).


Processes 1 and 3 run as regular turbo alpha and beta calculations with known initial points (over reduced lengths), while processes 2 and 4 require estimated initial points.


The estimated initial points are αp′(.) for process 2 and βq+1′(.) for process 4. The first iteration starts from αp−d+1(.), where d is an arbitrary number of time steps. The state at time p−d+1 is treated as a uniform random variable. This implies that the probability that a specific state occurs at time p−d+1 is ⅛, since the 3GPP turbo encoder has 8 system states. As a consequence, αp−d+1(.)=⅛, and similarly βq+d(.)=⅛. Starting from this uniform prior, when the forward recursion reaches p, the estimate αp′(.) results, and when the backward recursion reaches q+1, the estimate βq+1′(.) results.


The amount of information extracted from the observations over the duration d is proportional to d, so a longer d may give a better initial estimate. However, since the computation during the d steps represents “overhead”, d should not be increased beyond a certain limit. While d may be in the range of 1–20, simulations show that d=15–20 provides good results. From the second iteration on, αp−d+1(.) and βq+d(.) can be chosen in the same way as for the first iteration, or the values resulting from process 1 and process 3 in the previous iteration can be used.
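A compact sketch of the four processes for M=2 follows, using the same γ representation as the sequential sketch above (one matrix per trellis step). It is a hedged illustration, not the claimed implementation: the warm-up of length d starts each estimated segment from a uniform prior (⅛ for the 8-state 3GPP trellis, or 1/n_states in general), and the four calls are independent, so they could be dispatched to separate cores or threads.

```python
import numpy as np

def forward_segment(gamma, alpha_init, start, stop):
    """alpha_{k+1} = alpha_k gamma_{k+1} for k = start, ..., stop-1 (0-based gamma)."""
    alphas = [alpha_init]
    for k in range(start, stop):
        a = alphas[-1] @ gamma[k]
        alphas.append(a / a.sum())
    return alphas                                  # alpha_start ... alpha_stop

def backward_segment(gamma, beta_init, start, stop):
    """beta_k = gamma_{k+1} beta_{k+1} for k = start-1, ..., stop (backwards)."""
    betas = [beta_init]
    for k in range(start - 1, stop - 1, -1):
        b = gamma[k] @ betas[-1]
        betas.append(b / b.sum())
    return betas[::-1]                             # beta_stop ... beta_start

def parallel_alpha_beta(gamma, n_states, p, d):
    """Processes 1-4 of FIG. 5 for M = 2; q = K - p. Assumes d <= p and q + d <= K."""
    K = len(gamma)
    q = K - p
    delta_1 = np.eye(n_states)[0]                  # known boundary: encoder state 1
    uniform = np.full(n_states, 1.0 / n_states)    # uniform prior for the warm-up

    # Warm-up: estimate alpha_p'(.) starting from alpha_{p-d+1}(.) = 1/n_states,
    # and beta_{q+1}'(.) starting from beta_{q+d}(.) = 1/n_states.
    alpha_p_est = forward_segment(gamma, uniform, p - d + 1, p)[-1]
    beta_q1_est = backward_segment(gamma, uniform, q + d, q + 1)[0]

    # The four processes (each call is independent of the others):
    alpha_lo = forward_segment(gamma, delta_1, 0, p)               # process 1
    alpha_hi = forward_segment(gamma, alpha_p_est, p, K)[1:]       # process 2
    beta_hi = backward_segment(gamma, delta_1, K, q + 1)           # process 3
    beta_lo = backward_segment(gamma, beta_q1_est, q + 1, 0)[:-1]  # process 4

    alpha = np.vstack(alpha_lo + alpha_hi)         # alpha_0 ... alpha_K
    beta = np.vstack(beta_lo + beta_hi)            # beta_0 ... beta_K
    return alpha, beta
```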


Simulation scenarios are defined by the SNR (signal to noise ratio). For each scenario, 2000 packets of size 5114 bits were randomly generated, turbo encoded, and subjected to AWGN noise. The “spoiled” packets were run through both the regular prior art turbo decoder and the present parallel turbo decoder. For the parallel turbo decoder of the present invention, the number of divisions was M=2, the length for initial estimation was d=20, and the appropriate values from the previous iteration were used as the starting points for the initial estimation of the current iteration.



FIG. 3 and FIG. 4 compare the BER (bit error rate) and BLER (block error rate, or packet error rate) of the regular turbo decoder and the parallel turbo decoder under various SNRs. The results were so close that they are not distinguishable on the graphs.


Although the number of segments M=2 has been used as an example, a larger number for M may be used. In such a case, the equations would have the general formula L(uk)=f(αk(.), βk(.)). The estimated starting points would be αp(.), . . . , αw(.) and βw+1(.), . . . , βq+1(.). The forward variable segments are calculated as follows:

α1(.), . . . , αp(.) starting from α0(.);
αp+1(.), . . . , αq(.) starting from αp(.);
. . .
αw+1(.), . . . , αK(.) starting from αw(.);

and the backward variable segments are calculated as follows:

βK−1(.), . . . , βw+1(.) starting from βK(.);
βw(.), . . . , βv+1(.) starting from βw+1(.);
. . .
βq(.), . . . , β1(.) starting from βq+1(.).

The starting points for the forward variable are estimated from:

αp−d+1(.), . . . , αp(.),
. . .
αw−d+1(.), . . . , αw(.);

and

for the backward segments from:


βw+d(.), . . . , βw+1(.) and


βq+d(.), . . . , βq+1(.)


where d is an arbitrary number of time steps.


It should also be noted that even though turbo decoders are discussed, any system that uses A Posteriori Probability decoding may use the present invention.


Although the present invention has been described and illustrated in detail, it is to be clearly understood that this is done by way of illustration and example only and is not to be taken by way of limitation. The spirit and scope of the present invention are to be limited only by the terms of the appended claims.

Claims
  • 1. A method of a turbo decoder using log A Posteriori Probability L(uk), where L(uk)=f(αk(.), βk(.)), the method comprising: dividing a forward variable α (.) and a backward variable β (.) into a plurality M of parallel segments of size p, q . . . w, where p+q, . . . +w equals the length of a coded word U;simultaneous, parallel calculating the segments of forward variable α (.) of the code word U as follows:
  • 2. The method according to claim 1, wherein the segment sizes are set equal.
  • 3. A signal receiver including a decoder performing the method according to claim 1.
  • 4. The method according to claim 1, wherein the starting points αp(.), . . . , αw(.) are estimated, and the starting points βw+1(.), . . . , βq+1 are estimated.
  • 5. The method according to claim 4, wherein the starting points for the forward variable are estimated from:
  • 6. The method according to claim 5, wherein d is in the range of 1 to 20.
  • 7. The method according to claim 5, wherein d is in the range of 15 to 20.
  • 8. The method according to claim 5, wherein the probabilities of the first states for estimating the starting points are uniform random variables.
  • 9. The method according to claim 5, wherein the first states for estimating the starting points are a predetermined state.
  • 10. The method according to claim 5, wherein the first states for estimating the starting point is one divided by the number of possible states.
  • 11. A method of decoding using a log A Posteriori Probability ratio L(uk) where L(uk)=f(αk(.), βk(.) ), the method comprising: dividing a forward variable α (.) and a backward variable β (.) into two segments p and q where p+q equals the length of the code word U;parallel calculating the forward segments:α1(.), . . . ,αp(.) starting from a known α0(.) andαp+1(.), . . . ,αk(.) starting from an estimated αp(.);parallel calculating the backward segments:βK−1(.), . . . , βq+1(.) starting from a known βK(.) andβq(.), . . . , β1(.) starting from an estimated βq+1(.); andcalculating L(uk) using the parallel calculated segments of α (.) and β (.).
  • 12. A signal receiver including a decoder performing the method according to claim 11.
  • 13. A method according to claim 11, wherein the estimated starting point αp(.) is estimated from state probabilities αp−d+1(.), . . . , αp(.), and the estimated starting point βq+1(.) is estimated from state probabilities for βq+d(.), . . . , βq+1(.).
  • 14. A method according to claim 13, wherein d is in the range of 1 to 20.
  • 15. A method according to claim 13, wherein d is in the range of 15 to 20.
  • 16. A method according to claim 13, wherein the probabilities of the first states for estimating the starting points are uniform random variables.
  • 17. A method according to claim 13, wherein the first states for estimating the starting points are a predetermined state.
  • 18. A method according to claim 13, wherein the first states for estimating the starting point is one divided by the number of possible states.