This invention relates to a maximum a posteriori probability (MAP) decoding method and to a decoding apparatus that employs this decoding method. More particularly, the invention relates to a maximum a posteriori probability decoding method and apparatus for implementing maximum a posteriori probability decoding in a short calculation time and with a small amount of memory.
Error correction codes, which are for the purpose of correcting errors contained in received information or in reconstructed information so that the original information can be decoded correctly, are applied to a variety of systems. For example, error correction codes are applied in cases where data is to be transmitted without error when performing mobile communication, facsimile or other data communication, and in cases where data is to be reconstructed without error from a large-capacity storage medium such as a magnetic disk or CD.
Among the available error correction codes, it has been decided to adopt turbo codes (see the specification of U.S. Pat. No. 5,446,747) for standardization in 3rd-generation mobile communications. Maximum a posteriori probability decoding (MAP decoding) manifests its effectiveness in such turbo codes. A MAP decoding method is a method of decoding that resembles Viterbi decoding.
(a) Convolutional Encoding
Viterbi decoding is a method of decoding a convolutional code.
The content of the shift register SFR of the convolutional encoder is defined as its “state”. As shown in the figure, the encoder takes one of four states m0 to m3, and the outputs and state transitions for each input are as follows:
(1) If “0” is input in state m0, the output is 00 and the state remains m0; if “1” is input, the output is 11 and the state becomes m2.
(2) If “0” is input in state m1, the output is 11 and the state becomes m0; if “1” is input, the output is 00 and the state becomes m2.
(3) If “0” is input in state m2, the output is 01 and the state becomes m1; if “1” is input, the output is 10 and the state becomes m3.
(4) If “0” is input in state m3, the output is 10 and the state becomes m1; if “1” is input, the output is 01 and the state remains m3.
If the input/output relations and state transitions of the convolutional encoder shown in the figure are expressed in lattice-like form at each input time, the result is a trellis diagram.
Upon referring to this lattice-like representation (a trellis diagram), it will be understood that if the original data is 11001, then state m=2 is reached via the path indicated by the dot-and-dash line in the diagram.
Conversely, when decoding is performed, if data is received in the order 11→10→10→11→11 as receive data (ya,yb), the receive data can be decoded as 11001 by tracing the trellis diagram from the initial state m=0.
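For concreteness, the state-transition rules above can be run as a few lines of code. The following Python sketch is illustrative only and is not part of the specification: it assumes that states m0 to m3 correspond to shift-register contents (s1, s0) = (0,0), (0,1), (1,0), (1,1) and that the output taps are xa = u XOR s0 and xb = u XOR s1 XOR s0, both inferred from the transition table above. Encoding 11001 reproduces the output sequence 11→10→10→11→11 of the example.

def encode(bits):
    # Convolutional-encoder sketch; the state is the register pair (s1, s0), starting in m0.
    s1 = s0 = 0
    out = []
    for u in bits:
        xa = u ^ s0             # first output bit (inferred tap)
        xb = u ^ s1 ^ s0        # second output bit (inferred tap)
        out.append((xa, xb))
        s1, s0 = u, s1          # next state: (u, s1)
    return out

print(encode([1, 1, 0, 0, 1]))  # -> [(1, 1), (1, 0), (1, 0), (1, 1), (1, 1)]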
(b) Viterbi Decoding
If encoded data can be received without error, then the original data can be decoded correctly with facility. However, there are cases where data changes from “1” to “0” or from “0” to “1” during the course of transmission and data that contains an error is received as a result. One method that makes it possible to perform decoding correctly in such case is Viterbi decoding.
Viterbi decoding proceeds as follows for encoded data obtained by encoding information of information length N: for each state (m=0 to m=3) prevailing at the moment the kth item of data is input, it selects whichever of the two paths leading to that state has the fewer errors and discards the other; it proceeds in this fashion up to the final, Nth item of data, and then performs decoding using the path of fewest errors among the paths selected at each of the states. The result of decoding is a hard-decision output.
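As an illustration of this path-selection rule (a sketch over the same trellis as above, not the patent's implementation), the following Python fragment counts bit errors with a Hamming metric and keeps, per state, the surviving path with fewer errors:

TRANS = [
    [(0, (0, 0)), (2, (1, 1))],   # from m0: input 0 / input 1
    [(0, (1, 1)), (2, (0, 0))],   # from m1
    [(1, (0, 1)), (3, (1, 0))],   # from m2
    [(1, (1, 0)), (3, (0, 1))],   # from m3
]

def viterbi(ys):
    INF = float("inf")
    metric = [0, INF, INF, INF]           # decoding starts from state m=0
    paths = [[], [], [], []]
    for ya, yb in ys:
        new_metric = [INF] * 4
        new_paths = [None] * 4
        for m in range(4):
            if metric[m] == INF:
                continue
            for u in (0, 1):
                m2, (xa, xb) = TRANS[m][u]
                d = metric[m] + (ya != xa) + (yb != xb)
                if d < new_metric[m2]:    # keep the path with the fewer errors
                    new_metric[m2] = d
                    new_paths[m2] = paths[m] + [u]
        metric, paths = new_metric, new_paths
    best = min(range(4), key=lambda m: metric[m])
    return paths[best]

print(viterbi([(1, 1), (1, 0), (1, 0), (1, 1), (1, 1)]))  # -> [1, 1, 0, 0, 1]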
With Viterbi decoding, the paths with many errors are discarded in each state and are not at all reflected in the decision regarding the path of fewest errors. Unlike Viterbi decoding, MAP decoding is such that even a path of many errors in each state is reflected in the decision regarding the path of fewest errors, whereby decoded data of higher precision is obtained.
(c) Overview of MAP Decoding
(c-1) First Feature of MAP Decoding
With MAP decoding, the probabilities α0,k(m), α1,k(m) that the decoded data uk is “0” or “1” in each state (m=0, 1, 2, 3) at time k are obtained taking into consideration the receive data and trellises from 1 to k (see the figure).
(c-2) Second Feature of MAP Decoding
With Viterbi decoding, the path of fewest errors leading to each state at a certain time k is obtained taking into account the receive data from 1 to k and the possible paths from 1 to k. However, the receive data from k to N and the paths from k to N are not at all reflected in the decision regarding paths of fewest errors. Unlike Viterbi decoding, MAP decoding is such that receive data from k to N and paths from k to N are reflected in decoding processing to obtain decoded data of higher precision.
More specifically, the probability βk(m) that a path of fewest errors will pass through each state m (=0 to 3) at time k is found taking into consideration the receive data and trellises from N to k. Then, by multiplying the probability βk(m) by the forward probabilities α0,k(m), α1,k(m) of the corresponding state, a more precise probability that the decoded data uk in each state m (m=0, 1, 2, 3) at time k will become “0”, “1” is obtained.
To this end, the probability βk(m) in each state m (m=0, 1, 2, 3) at time k is decided based upon the receive data and trellises from N to k.
Thus, the MAP decoding method is as follows, as illustrated in the figure:
(1) Letting N represent information length, the forward probabilities α0,k(m), α1,k(m) of each state (m=0 to 3) at time k are calculated taking into consideration the encoded data of 1 to k and trellises of 1 to k. That is, the forward probabilities α0,k(m), α1,k(m) of each state are found from the probabilities α0,k−1(m), α1,k−1(m) and shift probability of each state at time (k−1).
(2) Further, the backward probability βk(m) of each state (m=0 to 3) at time k is calculated using the receive data of N to k and the paths of N to k. That is, the backward probability βk(m) of each state is calculated using the backward probability βk+1(m) and shift probability of each state at time (k+1).
(3) Next, the forward probabilities and backward probability of each state at time k are multiplied to obtain the joint probabilities as follows:
λ0,k(m) = α0,k(m)·βk(m),
λ1,k(m) = α1,k(m)·βk(m)
(4) This is followed by finding the sum total Σmλ0,k(m) of the probabilities of “0” and the sum total Σmλ1,k(m) of the probabilities of “1” over the states, deciding whether the kth item of original data uk is “1” or “0” based upon the magnitudes of the sum totals, outputting the result of the decision as the kth item of decoded data and outputting the likelihood. The decoded result is a soft-decision output. Steps (1) to (4) are traced in code form below.
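The following Python sketch is illustrative only: it assumes a binary symmetric channel with crossover probability p for the shift probabilities, folds the uniform prior P(uk)=1/2 into γ, and, as a simplification, initializes βN uniformly instead of assuming a terminated trellis. Decoding the example sequence recovers 11001.

import math

# Trellis of the encoder above: TRANS[m][u] = (next_state, (xa, xb))
TRANS = [
    [(0, (0, 0)), (2, (1, 1))],   # from m0 on input 0 / input 1
    [(0, (1, 1)), (2, (0, 0))],   # from m1
    [(1, (0, 1)), (3, (1, 0))],   # from m2
    [(1, (1, 0)), (3, (0, 1))],   # from m3
]

def shift_prob(y, x, p):
    # gamma: P(receive pair y | transmit pair x) on a BSC(p), times the prior P(u) = 1/2
    g = 0.5
    for yb, xb in zip(y, x):
        g *= (1.0 - p) if yb == xb else p
    return g

def map_decode(ys, p=0.1):
    N = len(ys)
    # alpha[k][i][m]: forward probability that u_k = i and the state is m at time k
    alpha = [[[0.0] * 4 for _ in range(2)] for _ in range(N + 1)]
    alpha[0][0][0] = alpha[0][1][0] = 1.0          # processing always starts from state m = 0
    for k in range(1, N + 1):
        for m in range(4):
            a_prev = alpha[k - 1][0][m] + alpha[k - 1][1][m]
            for u in (0, 1):
                m2, x = TRANS[m][u]
                alpha[k][u][m2] += a_prev * shift_prob(ys[k - 1], x, p)
    # beta[k][m]: backward probability; uniform at k = N (simplification: no trellis termination)
    beta = [[0.0] * 4 for _ in range(N)] + [[1.0] * 4]
    for k in range(N - 1, -1, -1):
        for m in range(4):
            for u in (0, 1):
                m2, x = TRANS[m][u]
                beta[k][m] += shift_prob(ys[k], x, p) * beta[k + 1][m2]
    # joint probabilities lambda_{i,k}(m) = alpha_{i,k}(m) * beta_k(m) and soft decision
    result = []
    for k in range(1, N + 1):
        lam = [sum(alpha[k][i][m] * beta[k][m] for m in range(4)) for i in (0, 1)]
        llr = math.log(lam[1] / lam[0])            # log of the ratio of the two sum totals
        result.append((1 if llr > 0 else 0, llr))
    return result

print(map_decode([(1, 1), (1, 0), (1, 0), (1, 1), (1, 1)]))  # decisions 1, 1, 0, 0, 1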
(d) First MAP Decoding Method According to Prior Art
(d-1) Overall Structure of MAP Decoder
Upon receiving (yak,ybk) at time k, the shift-probability calculation unit 1 calculates the following probabilities and stores them in a memory 2:
probability γ0,k that (xak,xbk) is (0,0)
probability γ1,k that (xak,xbk) is (0,1)
probability γ2,k that (xak,xbk) is (1,0)
probability γ3,k that (xak,xbk) is (1,1)
Using the forward probability α1,k−1(m) that the original data uk−1 is “1” and the forward probability α0,k−1(m) that the original data uk−1 is “0” in each state m (=0 to 3) at the immediately preceding time (k−1), as well as the obtained shift probabilities γ0,k, γ1,k, γ2,k, γ3,k at time k, a forward-probability calculation unit 3 calculates the forward probability α1,k(m) that the original data uk is “1” and the forward probability α0,k(m) that the original data uk is “0” at time k and stores these probabilities in memories 4a to 4d. It should be noted that since processing always starts from state m=0, the initial values of forward probabilities are α0,0(0)=α1,0(0)=1, α0,0(m)=α1,0(m)=0 (where m≠0).
The shift-probability calculation unit 1 and forward-probability calculation unit 3 repeat the above-described calculations at k=k+1, perform the calculations from k=1 to k=N to calculate the shift probabilities γ0,k, γ1,k, γ2,k, γ3,k and forward probabilities α1,k(m), α0,k(m) at each of the times k=1 to N and store these probabilities in memory 2 and memories 4a to 4d, respectively.
Thereafter, a backward-probability calculation unit 5 calculates the backward probability βk(m) (m=0 to 3) in each state m (=0 to 3) at time k using the backward probability βk+1(m) and shift probability γs,k+1 (s=0, 1, 2, 3) at time (k+1), where it is assumed that the initial value of k is N−1, that the trellis end state is m=0 and that βN(0)=1, βN(1)=βN(2)=βN(3)=0 hold.
A first arithmetic unit 6a in a joint-probability calculation unit 6 multiplies the forward probability α1,k(m) and backward probability βk(m) in each state m (=0 to 3) at time k to calculate the probability λ1,k(m) that the kth item of original data uk is “1”, and a second arithmetic unit 6b in the joint-probability calculation unit 6 uses the forward probability α0,k(m) and backward probability βk(m) in each state m (=0 to 3) at time k to calculate the probability λ0,k(m) that the kth item of original data uk is “0”.
A uk and uk likelihood calculation unit 7 adds the “1” probabilities λ1,k(m) (m=0 to 3) in each of the states m (=0 to 3) at time k, adds the “0” probabilities λ0,k(m) (m=0 to 3) in each of the states m (=0 to 3), decides the “1”, “0” of the kth item of data uk based upon the results of addition, namely the magnitudes of Σmλ1,k(m) and Σmλ0,k(m), calculates the confidence (likelihood) L(uk) thereof and outputs the same.
The backward-probability calculation unit 5, joint-probability calculation unit 6 and uk and uk likelihood calculation unit 7 subsequently repeat the foregoing calculations at k=k+1, perform the calculations from k=N to k=1 to decide the “1”, “0” of the original data uk at each of the times k=1 to N, calculate the confidence (likelihood) L(uk) thereof and output the same.
(d-2) Calculation of Forward Probabilities
The forward probability αi,k(m) that the decoded data uk will be i (“0” or “1”) in each state (m=0, 1, 2, 3) at time k is obtained from the forward probabilities and shift probabilities at time k−1 in accordance with the following equation:
αi,k(m) = Σm′ γs,k·{α0,k−1(m′)+α1,k−1(m′)} (1)
where the summation is over the states m′ from which input i causes a transition to m, and γs,k is the shift probability of the output (xa,xb) produced by that transition.
(d-3) Calculation of Backward Probability
In each state (m=0, 1, 2, 3) at time k, the backward probability βk(m) of each state is obtained from the backward probabilities and shift probabilities at time (k+1) in accordance with the following equation:
βk(m) = Σm′ γs,k+1·βk+1(m′) (2)
where the summation is over the states m′ reached from m by inputting “0” or “1”, and γs,k+1 is the shift probability of the output produced by the corresponding transition.
(d-4) Calculation of Joint Probabilities and Likelihood
If the forward probabilities α0,k(m), α1,k(m) and backward probability βk(m) of each state at time k are found, these are multiplied to calculate the joint probabilities as follows:
λ0,k(m) = α0,k(m)·βk(m)
λ1,k(m) = α1,k(m)·βk(m)
The sum total Σmλ0,k(m) of the probabilities of “0” and the sum total Σmλ1,k(m) of the probabilities of “1” over the states are then obtained and the likelihood is output in accordance with the following equation:
L(u)=log[Σmλ1,k(m)/Σmλ0,k(m)] (3)
Further, the decoded result uk=1 is output if L(u)>0 holds and the decoded result uk=0 is output if L(u)<0 holds. That is, whether the kth item of original data uk is “1” or “0” is decided based upon the magnitudes of the sum total Σmλ1,k(m) of the probabilities of “1” and the sum total Σmλ0,k(m) of the probabilities of “0”, and the result of the decision is output as the kth item of decoded data.
(d-5) Problem with First MAP Decoding Method
The problem with the first MAP decoding method of the prior art described above is that the shift probabilities and forward probabilities at all of the times k=1 to N must be held in the memory 2 and the memories 4a to 4d before the backward probabilities can be calculated, so an extremely large amount of memory is required.
(e) Second MAP Decoding Method According to Prior Art
Accordingly, in order to reduce memory, a method that has been proposed is to perform the calculations upon switching the order in which the forward probability and backward probability are calculated.
The shift-probability calculation unit 1 uses receive data (yak,ybk) at time k (=N), calculates the following probabilities and stores them in the memory 2:
probability γ0,k that (xak,xbk) is (0,0)
probability γ1,k that (xak,xbk) is (0,1)
probability γ2,k that (xak,xbk) is (1,0)
probability γ3,k that (xak,xbk) is (1,1)
The backward-probability calculation unit 5 calculates the backward probability βk−1(m) (m=0 to 3) in each state m (=0 to 3) at time k−1 using the backward probability βk(m) and shift probability γs,k (s=0, 1, 2, 3) at time k (=N) and stores the backward probabilities in memory 9.
The shift-probability calculation unit 1 and backward-probability calculation unit 5 subsequently repeat the above-described calculations at k=k−1, perform the calculations from k=N to k=1 to calculate the shift probabilities γ0,k, γ1,k, γ2,k, γ3,k and backward probability βk(m) at each of the times k=1 to N and store these probabilities in memories 2, 9.
Thereafter, using the forward probability α1,k−1(m) that the original data uk−1 is “1” and the forward probability α0,k−1(m) that the original data uk−1 is “0” at time (k−1), as well as the obtained shift probabilities γ0,k, γ1,k, γ2,k, γ3,k at time k, the forward-probability calculation unit 3 calculates the forward probability α1,k(m) that uk is “1” and the forward probability α0,k(m) that uk is “0” in each state m (=0 to 3) at time k. It should be noted that the initial value of k is 1.
The joint-probability calculation unit 6 multiplies the forward probability α1,k(m) and backward probability βk(m) in each state 0 to 3 at time k to calculate the probability λ1,k(m) that the kth item of original data uk is “1”, and similarly uses the forward probability α0,k(m) and backward probability βk(m) in each state 0 to 3 at time k to calculate the probability λ0,k(m) that the original data uk is “0”.
The uk and uk likelihood calculation unit 7 adds the “1” probabilities λ1,k(m) (m=0 to 3) of each of the states 0 to 3 at time k, adds the “0” probabilities λ0,k(m) (m=0 to 3) of each of the states 0 to 3 at time k, decides the “1”, “0” of the kth item of data uk based upon the results of addition, namely the magnitudes of Σmλ1,k(m) and Σmλ0,k(m), calculates the confidence (likelihood) L(uk) thereof and outputs the same.
The forward-probability calculation unit 3, joint-probability calculation unit 6 and uk and uk likelihood calculation unit 7 subsequently repeat the foregoing calculations at k=k+1, perform the calculations from k=1 to k=N to decide the “1”, “0” of uk at each of the times k=1 to N, calculate the confidence (likelihood) L(uk) thereof and output the same.
In accordance with the second MAP decoding method, as shown in the accompanying time chart, the forward probabilities need not be stored over the entire information length; only the backward probabilities βk(m) are stored in the memory 9, so the amount of memory used is reduced in comparison with the first method.
It should be noted that the memory 2 for storing shift probabilities is not necessarily required; the arrangement may be such that the forward probabilities α1,k(m), α0,k(m) are calculated by computing the shift probabilities γs,k (s=0, 1, 2, 3) anew on each occasion.
(f) Third MAP Decoding Method According to Prior Art
With the second MAP decoding method, the backward probability βk(m) need only be stored and therefore the amount of memory is comparatively small. However, it is necessary to calculate all backward probabilities βk(m). If we let N represent the number of data items and Tn the time necessary for processing one node, then the decoding time required will be 2×Tn×N. This represents a problem.
According to the third method, the results of the backward probability calculation B are stored in memory while the calculation is performed from N−1 to N/2. Similarly, the results of the forward probability calculation A are stored in memory while the calculation is performed from 0 to N/2. If we let Tn represent the time necessary for the processing of one node, a time of Tn×N/2 is required for this first phase to be completed. Thereafter, with regard to N/2 to 0, forward probability A has already been calculated, and therefore likelihood is calculated while backward probability B is calculated; with regard to N/2 to N−1, backward probability B has already been calculated, and therefore likelihood is calculated while forward probability A is calculated. These processing operations are executed concurrently, so processing is completed in a further time of Tn×N/2. That is, according to the third MAP decoding method, decoding can be performed in time Tn×N, and decoding time can be shortened in comparison with the second MAP decoding method. However, since forward probability must be stored, a greater amount of memory is used in comparison with the second MAP decoding method.
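The schedule can be made concrete with a toy sketch (illustrative only; the probability recursions themselves are omitted). Assuming Tn = 1 per node, the two calculation units cover all 2N node computations in N time steps:

def third_method_schedule(N):
    half = N // 2
    # phase 1: A is stored over 0..N/2 while B is stored over N-1..N/2
    phase1 = [("A", k, "B", N - 1 - k) for k in range(half)]
    # phase 2: B plus soft decision over N/2..0, A plus soft decision over N/2..N-1
    phase2 = [("B+S", half - 1 - i, "A+S", half + i) for i in range(half)]
    return phase1, phase2

p1, p2 = third_method_schedule(8)
print(p1)   # A visits nodes 0..3 while B visits 7..4
print(p2)   # then B+S visits 3..0 while A+S visits 4..7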
(g) Fourth MAP Decoding Method According to Prior Art
The second and third methods cannot solve both the problem relating to decoding time and the problem relating to amount of memory used. Accordingly, a metric calculation algorithm for shortening decoding time and reducing the amount of memory used has been proposed. The best-known approach, proposed by Viterbi, is referred to as the “sliding window method” (the “SW method” below). (For example, see IEEE JOURNAL ON SELECTED AREAS IN COMMUNICATIONS, VOL. 16, NO. 2, FEBRUARY 1998, “An Intuitive Justification and a Simplified Implementation of the MAP Decoder for Convolutional Codes”, Andrew J. Viterbi.)
In the SW method, the interval k=1 to N is divided equally into windows of length L and MAP decoding is executed as set forth below.
First, (1) the B operation is performed from k=2L to k=1. In the B operation, the backward probability βk(m) is not calculated from k=N; calculation starts from the intermediate position k=2L. As a consequence, the backward probability βk(m) found over k=2L to k=L+1 (a training period) in the first half cannot be trusted and is discarded. The backward probability βk(m) found over k=L to k=1 in the second half can be trusted to some extent and therefore this is stored in memory. (2) Next, the A operation is performed at k=1, the S operation is performed using the results α1,1(m), α0,1(m) of the A operation at k=1 as well as β1(m) that has been stored in memory, and the decoded result u1 and likelihood L(u1) are calculated based upon the joint probabilities. Thereafter, and in similar fashion, the A operation is performed from k=2 to k=L and the S operation is performed based upon the results of the A operation and the results of the B operation in memory. This ends the calculation of the decoded result uk and likelihood L(uk) from k=1 to k=L.
Next, (3) the B operation is performed from k=3L to k=L+1. In the B operation, the backward probability βk(m) is not calculated from k=N; calculation starts from the intermediate position k=3L. As a consequence, the backward probability βk(m) found over k=3L to k=2L+1 (the training period) in the first half cannot be trusted and is discarded. The backward probability βk(m) found over k=2L to k=L+1 in the second half can be trusted to some extent and therefore this is stored in memory. (4) Next, the A operation is performed at k=L+1, the S operation is performed using the results α1,L+1(m), α0,L+1(m) of the A operation at k=L+1 as well as βL+1(m) that has been stored in memory, and the decoded result uL+1 and likelihood L(uL+1) are calculated based upon the joint probabilities. Thereafter, and in similar fashion, the A operation is performed from k=L+2 to k=2L and the S operation is performed based upon the results of the A operation and the results of the B operation in memory. This ends the calculation of the decoded result uk and likelihood L(uk) from k=L+1 to k=2L. Thereafter, and in similar fashion, the calculation of the decoded result uk and likelihood L(uk) up to k=N is performed. The index arithmetic of this schedule is sketched in code below.
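The following sketch enumerates only the window spans of steps (1) to (4); it is illustrative, omits the B, A and S operations themselves, and assumes N is a multiple of L. Indices are 1-based as in the text:

def sw_schedule(N, L):
    for w0 in range(0, N, L):                 # window covers k = w0+1 .. w0+L
        t_end = min(w0 + 2 * L, N)            # B starts at k = w0+2L, or at N near the end
        training = range(t_end, w0 + L, -1)   # discarded: k = w0+2L .. w0+L+1
        stored = range(w0 + L, w0, -1)        # kept in memory: k = w0+L .. w0+1
        forward = range(w0 + 1, w0 + L + 1)   # A and S operations: k = w0+1 .. w0+L
        yield training, stored, forward

for tr, st, fw in sw_schedule(N=12, L=4):
    print(list(tr), list(st), list(fw))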
It should be noted that in the SW method set forth above, the A operation over L is performed after the B operation over 2L. In terms of a time chart, therefore, this is as indicated in the accompanying figure.
In accordance with MAP decoding in the SW method, one forward probability calculation unit, two backward probability calculation units and one soft-decision calculation unit are provided and these are operated in parallel, whereby one block's worth of a soft-decision processing loop can be completed in a length of time of (N+2L)×Tn. Further, the amount of memory necessary is merely that equivalent to 2L nodes of backward probability.
With the SW method, backward probability βk(m) is not calculated starting from k=N. Since the same initial value is set and calculation starts in mid-course, the backward probability βk(m) is not accurate. In order to obtain a good characteristic in the SW method, therefore, it is necessary to provide a satisfactory training period TL. The length of this training portion ordinarily is required to be four to five times the constraint length.
If the encoding rate is raised by puncturing, punctured bits in the training portion can no longer be used in calculation of metrics. Consequently, even a training length that is four to five times the constraint length will no longer be satisfactory and a degraded characteristic will result. In order to maintain a good characteristic, it is necessary to increase the length of the training portion further. A problem which arises is an increase in amount of computation needed for decoding and an increase in amount of memory used.
Accordingly, an object of the present invention is to enable a reduction in memory used and, moreover, to substantially lengthen the training portion so that the backward probability βk(m) can be calculated accurately and the precision of MAP decoding improved.
According to the present invention, the foregoing object is attained by providing a maximum a posteriori probability decoding method (MAP decoding method) and apparatus for repeatedly executing decoding processing using the sliding window (SW) method. The sliding window (SW) method includes dividing encoded data of length N into blocks each of prescribed length L; calculating backward probability from a data position (initial position) backward of a block of interest when the backward probability of the block of interest is calculated; obtaining and storing the backward probability of the block of interest; then calculating forward probability; executing decoding processing of each data item of the block of interest using the forward probability and the stored backward probability; and subsequently executing decoding processing of each block in regular order.
In maximum a posteriori probability decoding for repeatedly executing decoding processing using the sliding window (SW) method, the fundamental principle of the present invention is as follows: Forward probabilities and/or backward probabilities at initial positions, which probabilities have been calculated during a current cycle of MAP decoding processing, are stored as initial values of forward probabilities and/or backward probabilities in MAP decoding executed in the next cycle. Then, in the next cycle of MAP decoding processing, calculation of forward probabilities and/or backward probabilities is started from the stored initial values.
In first maximum a posteriori probability decoding, backward probability at a starting point (initial position) of backward probability calculation of another block, which backward probability is obtained in current decoding processing of each block, is stored as an initial value of backward probability of the other block in decoding processing to be executed next, and calculation of backward probability of each block is started from the stored initial value in decoding processing the next time.
In second maximum a posteriori probability decoding, backward probability at a starting point of another block, which backward probability is obtained in current decoding processing of each block, is stored as an initial value of backward probability of the other block in decoding processing to be executed next, and calculation of backward probability is started, without training, from the starting point of this block using the stored initial value in decoding processing of each block executed next.
In third maximum a posteriori probability decoding, (1) encoded data of length N is divided into blocks each of prescribed length L, and processing for calculating backward probabilities from a data position (backward-probability initial position) backward of each block, obtaining the backward probabilities of this block and storing the backward probabilities, is executed in parallel simultaneously for all blocks; (2) when the forward probability of each block is calculated, processing for calculating forward probability from a data position (forward-probability initial position) ahead of this block and obtaining the forward probabilities of this block is executed in parallel simultaneously for all blocks; (3) decoding processing of the data in each block is executed in parallel simultaneously using the forward probabilities of each block and the stored backward probabilities of each block; (4) a backward probability at the backward-probability initial position of another block, which backward probability is obtained in current decoding processing of each block, is stored as an initial value of backward probability of the other block in decoding processing to be executed next; (5) a forward probability at the forward-probability initial position of another block, which forward probability is obtained in current decoding processing of each block, is stored as an initial value of forward probability of the other block in decoding processing to be executed next; and (6) calculation of forward probability and backward probability of each block is started in parallel using the stored initial values in decoding processing executed next.
In accordance with the present invention, a training period can be substantially secured and deterioration of the characteristic at a high encoding rate can be prevented even if the length of the training portion is short, e.g., even if the length of the training portion is made less than four to five times the constraint length or even if there is no training portion. Further, the amount of calculation performed by a turbo decoder and the amount of memory used can also be reduced.
First maximum a posteriori probability decoding according to the present invention is such that from the second execution of decoding processing onward, backward probabilities for which training has been completed are set as initial values. Though this results in slightly more memory being used in comparison with a case where the initial values are made zero, the effective training length is extended, backward probability can be calculated with excellent precision and deterioration of characteristics can be prevented.
Second maximum a posteriori probability decoding according to the present invention is such that from the second execution of decoding processing onward, backward probability for which training has been completed is set as the initial value. Though this results in slightly more memory being used in comparison with a case where the initial value is made zero, the effective training length is extended, backward probability can be calculated with excellent precision and deterioration of characteristics can be prevented. Further, the amount of calculation in the training portion can be reduced and the time necessary for decoding processing can be shortened.
In accordance with third maximum a posteriori probability decoding according to the present invention, forward and backward probabilities are both calculated using training data in the metric calculation of each sub-block, whereby all sub-blocks can be processed in parallel. This makes high-speed MAP decoding possible. Further, from the second execution of decoding processing onward, the forward and backward probabilities calculated and stored one execution earlier are used as initial values in the calculations of forward and backward probabilities, respectively, and therefore highly precise decoding processing can be executed.
Other features and advantages of the present invention will be apparent from the following description taken in conjunction with the accompanying drawings.
The MAP decoding method manifests its effectiveness in turbo codes.
MAP element decoders can be used as the first and second element decoders DEC1, DEC2 in such a turbo decoder.
According to the first embodiment, processing identical with that of the conventional SW method is performed in the first execution of decoding processing (the upper half of the figure).
In the first execution of decoding processing (the upper half of the figure), backward probabilities are calculated with training exactly as in the conventional SW method, and the backward probabilities obtained at the final data positions of the blocks are stored.
In the second execution of decoding processing (the lower half of the figure), the stored backward probabilities are used, in place of fixed values, as the initial values of the training calculations of backward probability of the corresponding blocks.
As set forth above, values of backward probabilities β0, βL, β2L, β3L, β4L, . . . at final data positions 0, L, 2L, 3L, 4L, . . . of each of the blocks are stored as initial values of backward probabilities for the next time. However, values of backward probabilities β0″, βL″, β2L″, β3L″, β4L″, . . . at intermediate positions can also be stored as initial values of backward probabilities for the next time.
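The initial-value reuse can be sketched with a toy backward recursion. In the following Python fragment (illustrative only), the random matrices T[k] stand in for the γ-weighted trellis step at each node; what matters is the cache: the β reached at the final position of each window in one execution becomes, in the next execution, the initial value at the point where another window begins its training. Blocks are processed in regular order, so each cached value read is one written in the preceding execution.

import numpy as np

rng = np.random.default_rng(0)
N, L, S = 16, 4, 4                        # data length, window length, number of states
T = rng.random((N, S, S))                 # stand-ins for the gamma-weighted trellis steps

def backward_window(lo, hi, beta_hi):
    # run the beta recursion from position hi down to lo; return beta at lo
    beta = beta_hi
    for k in range(hi - 1, lo - 1, -1):
        beta = T[k] @ beta
        beta /= beta.sum()                # normalize to keep values in range
    return beta

beta_cache = {}                           # boundary betas carried to the next execution
for execution in range(2):
    for lo in range(0, N, L):             # block covers positions lo .. lo+L
        hi = min(lo + 2 * L, N)           # training starts one window above the block
        init = beta_cache.get(hi, np.ones(S) / S)   # fixed value in the first execution
        beta_cache[lo] = backward_window(lo, hi, init)
    print("execution", execution, "beta at 0:", np.round(beta_cache[0], 3))

In the second embodiment described below, the only change to this sketch would be to set hi = lo + L from the second execution onward, skipping the training span entirely.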
An input data processor 21 extracts the necessary part of receive data that has been stored in a memory (not shown) and inputs this data to a shift-probability calculation unit 22. The latter calculates the shift probability of the input data and inputs the shift probability to first and second backward-probability calculation units 23, 24, respectively, and to a forward-probability calculation unit 25.
The first backward-probability calculation unit 23 starts the training calculation of backward probabilities in L to 0, 3L to 2L, 5L to 4L, . . . of the odd-numbered blocks BL1, BL3, BL5, . . . in the figure, stores the backward probabilities of each of these blocks in a β storage unit 26 and stores the backward probabilities at the final data positions in a β initial-value storage unit 27.
The second backward-probability calculation unit 24 starts the training calculation of backward probabilities in 2L to L, 4L to 3L, 6L to 5L, . . . of the even-numbered blocks BL2, BL4, BL6, . . . in the figure, stores the backward probabilities of each of these blocks in a β storage unit 28 and stores the backward probabilities at the final data positions in the β initial-value storage unit 27.
The forward-probability calculation unit 25 calculates the forward probabilities of each of the blocks continuously. A selector 29 appropriately selects and outputs backward probabilities that have been stored in the β storage units 26, 28, a joint-probability calculation unit 30 calculates the joint probability, and a uk and uk likelihood calculation unit 31 decides the “1”, “0” of data uk, calculates the confidence (likelihood) L(uk) thereof and outputs the same.
When the first execution of decoding processing of all N data items (1 to N) has been completed, the β initial-value setting unit 32 reads the initial values of β out of the β initial-value storage unit 27 and sets them in the first and second backward-probability calculation units 23, 24 when these units calculate the backward probabilities of each of the blocks in the next execution of decoding processing.
Each of the above units executes decoding processing in order block by block at the timings shown in the accompanying time chart.
Thus, the first embodiment is such that from the second execution of decoding processing onward, backward probabilities β0, βL, β2L, β3L, β4L, . . . for which training has been completed are set as initial values. Though this results in slightly more memory being used in comparison with a case where fixed values are adopted as the initial values, the effective training length is extended threefold, backward probabilities can be calculated with excellent precision and deterioration of characteristics can be prevented.
According to the second embodiment, processing identical with that of the conventional SW method is performed in the first execution of decoding processing (the upper half of the figure).
In the first execution of decoding processing (the upper half of the figure), backward probabilities are calculated with training exactly as in the conventional SW method, and the backward probabilities obtained at the final data positions of the blocks are stored.
In the second execution of decoding processing (the lower half of the figure), the training calculation is omitted; calculation of the backward probabilities of each block is started, without training, directly from the starting point of that block using the stored backward probabilities as initial values.
As set forth above, values of backward probabilities β0, βL, β2L, β3L, β4L, . . . at final data positions 0, L, 2L, 3L, 4L, . . . of each of the blocks are stored as initial values of backward probabilities for the next time. However, values of backward probabilities β0″, βL″, β2L″, β3L″, β4L″, . . . at intermediate positions can also be stored as initial values of backward probabilities for the next time.
A maximum a posteriori probability decoding apparatus according to the second embodiment has a structure identical with that of the first embodiment shown in the figure above.
Thus, the second embodiment is such that from the second execution of decoding processing onward, backward probabilities for which training has been completed are set as initial values. Though this results in slightly more memory being used in comparison with a case where fixed values are adopted as the initial values, the effective training length is twice that of the conventional SW method, backward probabilities can be calculated with excellent precision and deterioration of characteristics can be prevented. In addition, the amount of calculation in the training portion is reduced and the time necessary for decoding processing is shortened.
The third embodiment is premised on the fact that all input receive data of one encoded block has been read in and stored in memory. Further, it is assumed that backward-probability calculation means, forward-probability calculation means and soft-decision calculation means have been provided for each of the blocks, namely block BL1 from L to 0, block BL2 from 2L to L, block BL3 from 3L to 2L, block BL4 from 4L to 3L, block BL5 from 5L to 4L, . . . . The third embodiment is characterized by the following four points: (1) SW-type decoding processing is executed in parallel block by block; (2) the forward-probability calculation means for each block executes a training operation and calculates forward probability; (3) forward probabilities and backward probabilities obtained in the course of the preceding calculations are stored as initial values for the calculations the next time; and (4) the calculations are performed the next time using the stored backward probabilities and forward probabilities as initial values. It should be noted that executing decoding processing in parallel block by block, as in (1) and (2), is itself also new.
In the third embodiment, the decoding processing of each of the blocks is executed in parallel (the upper half of the figure). The backward-probability calculation means for each block calculates backward probabilities in order in parallel fashion from data positions (initial positions) backward of each of the blocks using fixed values as initial values, and the backward probabilities of each of the blocks are obtained and stored.
In parallel with the above, forward-probability calculation means for each block calculates forward probabilities in each of the blocks, namely block BL1 from 0 to L, block BL2 from L to 2L, block BL3 from 2L to 3L, block BL4 from 3L to 4L, block BL5 from 4L to 5L, . . . , in order in parallel fashion from data positions (initial positions) ahead of each block using fixed values as initial values, thereby obtaining forward probabilities at the starting points of each of the blocks. (This represents forward-probability training. However, training is not performed in block BL1.) For example, forward probabilities are trained (calculated) in order in parallel fashion from data positions 0, L, 2L, 3L, 4L, . . . ahead of each of the blocks BL2, BL3, BL4, BL5, . . . , forward probabilities of each of the blocks are calculated in parallel and decoding processing of the data of each of the blocks is executed in parallel using these forward probabilities and the stored backward probabilities.
Further, the values of forward probabilities αL, α2L, α3L, α4L, α5L, . . . at final data positions L, 2L, 3L, 4L, 5L, . . . in each of the blocks, namely block BL1 from 0 to L, block BL2 from L to 2L, block BL3 from 2L to 3L, block BL4 from 3L to 4L, block BL5 from 4L to 5L, are stored as initial values of forward probabilities for the next time. That is, the final forward probability αjL of the jth block is stored as the initial value of forward probability of the (j+2)th block in decoding processing the next time, as sketched below.
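The forward counterpart can be sketched the same way (toy code, illustrative only; the random matrices T[k] again stand in for a γ-weighted trellis step). Note the offset: because the forward training for block j+2 starts at position jL, block j's final α is looked up two blocks ahead, and since all blocks run simultaneously, each execution reads only the cache written by the previous one.

import numpy as np

rng = np.random.default_rng(1)
n_blocks, L, S = 5, 4, 4
T = rng.random((n_blocks * L, S, S))      # stand-ins for the gamma-weighted trellis steps

def forward_block(j, alpha_init):
    # run alpha from the training start (j-2)*L up to the block's final position j*L
    a = alpha_init
    for k in range(max((j - 2) * L, 0), j * L):
        a = a @ T[k]
        a /= a.sum()                      # normalize to keep values in range
    return a

alpha_cache = {}                          # final alphas carried to the next execution
for execution in range(2):
    new_cache = {}
    for j in range(1, n_blocks + 1):      # conceptually, all blocks run in parallel
        # uniform stands in for the fixed (or known) initial values of the first execution
        init = alpha_cache.get((j - 2) * L, np.ones(S) / S)
        new_cache[j * L] = forward_block(j, init)   # alpha at position j*L
    alpha_cache = new_cache
    print("execution", execution, "cached alpha positions:", sorted(alpha_cache))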
In the second execution of decoding processing (the lower half of the figure), calculation of the backward probabilities and forward probabilities of each block is started using the values stored in the first execution as initial values.
Furthermore, in the second execution of decoding processing, values of backward probabilities β0′, βL′, β2L′, β3L′, β4L′, . . . of final data 0, L, 2L, 3L, 4L, . . . in each of the blocks are stored as initial values of backward probabilities for the next time. Further, forward probabilities αL′, α2L′, α3L′, α4L′, . . . of final data L, 2L, 3L, 4L, . . . in each of the blocks are stored as initial values of forward probabilities for the next time.
Each of the decoding processors 421, 422, 423, 424, . . . is identically constructed and has a shift-probability calculation unit 51, a backward-probability calculation unit 52, a forward-probability calculation unit 53, a β storage unit 54, a joint-probability calculation unit 55 and a uk and uk likelihood calculation unit 56.
The forward-probability calculation unit 53 of the jth decoding processor 42j of the jth block stores the forward probability αjL corresponding to the final data position jL of the jth block in a storage unit (not shown) and inputs it to the forward-probability calculation unit 53 of the (j+2)th decoding processor 42j+2 as the initial value of the next forward probability calculation.
Further, the backward-probability calculation unit 52 of the (j+2)th decoding processor 42j+2 of the (j+2)th block stores the backward probability β(j+1)L corresponding to the final data position (j+1)L of the (j+2)th block in a storage unit (not shown) and inputs it to the backward-probability calculation unit 52 of the jth decoding processor 42j as the initial value of the next backward probability calculation.
The maximum a posteriori probability decoding apparatus according to the third embodiment executes decoding processing of each of the blocks in parallel in accordance with the accompanying time chart.
Thus, in the third embodiment, forward and backward probabilities are both calculated using training data in the metric calculation of each sub-block, whereby all sub-blocks can be processed in parallel. This makes high-speed MAP decoding possible. Further, from the second execution of decoding processing onward, the forward and backward probabilities calculated and stored one execution earlier are used as initial values in the calculations of forward and backward probabilities, respectively, and therefore highly precise decoding processing can be executed.
An external-information likelihood calculation unit EPC1 outputs external-information likelihood Le(u1) using the a posteriori probability L(u1) output in the first half of the first cycle of MAP decoding and the input signal ya to the MAP decoder. This external-information likelihood Le(u1) is interleaved and output as the a priori likelihood L(u2′) used in the second half of MAP decoding.
In MAP decoding from the second cycle onward, turbo decoding is such that [signal ya + a priori likelihood L(u3′)] is used as the input signal ya. Accordingly, in the second half of the first cycle of MAP decoding, an external-information likelihood calculation unit EPC2 outputs external-information likelihood Le(u2), which is used in the next MAP decoding, using the a posteriori likelihood L(u2) output from the element decoder DEC2 and the decoder input signal [=signal ya + a priori likelihood L(u2′)]. This external-information likelihood Le(u2) is deinterleaved and output as the a priori likelihood L(u3′) used in the next cycle of MAP decoding.
Thereafter, and in similar fashion, the external-information likelihood calculation unit EPC1 outputs external-information likelihood Le(u3) in the first half of the second cycle, and the external-information likelihood calculation unit EPC2 outputs external-information likelihood Le(u4) in the second half of the second cycle. In other words, the following relation holds among the log values:
L(u)=Lya+L(u′)+Le(u) (4)
The external-information likelihood calculation unit EPC1 therefore is capable of obtaining the external-information likelihood Le(u) in accordance with the following equation:
Le(u)=L(u)−Lya−L(u′) (5)
where L(u′)=0 holds the first time.
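In the log domain, Equations (4) and (5) reduce to a per-bit subtraction followed by the interleaving permutation. A minimal sketch, assuming per-bit lists of log values and an arbitrary illustrative permutation (not a standardized turbo interleaver):

def external_info(L_post, Lc_ya, L_prior):
    # Le(u) = L(u) - Lcya - L(u')                  ... Equation (5)
    return [lp - ly - la for lp, ly, la in zip(L_post, Lc_ya, L_prior)]

def interleave(xs, perm):
    return [xs[i] for i in perm]

# first half of the first cycle: L(u') = 0 holds the first time
L_post = [1.8, -0.7, 2.3]                 # a posteriori values L(u1) (example numbers)
Lc_ya = [1.2, -0.2, 1.5]                  # channel values Lcya (example numbers)
Le = external_info(L_post, Lc_ya, [0.0, 0.0, 0.0])
L_u2_prime = interleave(Le, [2, 0, 1])    # a priori likelihood L(u2') for the second half
print(Le, L_u2_prime)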
To summarize, therefore, in the first half of decoding processing the first time, decoding is performed using receive signals Lcya, Lcyb and the likelihood L(u1) obtained is output. Next, the external-information likelihood Le(u1) is obtained in accordance with Equation (5) [where L(u1′)=0 holds], and this is interleaved to obtain L(u2′).
In the second half of decoding processing the first time, a signal obtained by interleaving the receive signal Lcya and the a priori likelihood L(u2′) obtained in the first half of decoding processing are regarded as being a new receive signal Lcya′, decoding is performed using Lcya′ and Lcyc, and the likelihood L(u2) obtained is output. Next, the external-information likelihood Le(u2) is found in accordance with Equation (5) and this is deinterleaved to obtain L(u3′).
In the first half of decoding processing the second time, the receive signal Lcya and the a priori likelihood L(u3′) obtained in the second half of decoding processing are regarded as being a new receive signal Lcya′, decoding is performed using Lcya′ and Lcyb, and the likelihood L(u3) obtained is output. Next, the external-information likelihood Le(u3) is found in accordance with Equation (5), and this is interleaved to obtain L(u4′).
In the second half of decoding processing the second time, a signal obtained by interleaving the receive signal Lcya and the a priori likelihood L(u4′) obtained in the first half of decoding processing are regarded as being a new receive signal Lcya′, decoding is performed using Lcya′ and Lcyc, and the likelihood L(u4) obtained is output. Next, the external-information likelihood Le(u4) is found in accordance with Equation (5) and this is deinterleaved to obtain L(u5′). The above-described decoding processing is repeated.
In accordance with the present invention, when decoding of code of a high encoding rate using puncturing is performed in a turbo decoder, a substantial training length can be assured and deterioration of characteristics prevented even if the length of a training portion in the calculation of metrics is reduced. Furthermore, the amount of calculation by the turbo decoder and the amount of memory used can be reduced. The invention therefore is ideal for utilization in MAP decoding by a turbo decoder or the like. It should be noted that the invention of this application is applicable to a MAP decoding method for performing not only the decoding of turbo code but also similar repetitive decoding processing.
As many apparently widely different embodiments of the present invention can be made without departing from the spirit and scope thereof, it is to be understood that the invention is not limited to the specific embodiments thereof except as defined in the appended claims.
Foreign Application Priority Data: JP2003-339003, Sep 2003, JP (national)