The present invention relates to decoding, and more particularly, to decoding using a Max-Log-MAP decoding scheme.
In the field of wireless telecommunications, in particular code division multiple access (CDMA), the demand for low-cost, low-power decoder chips, particularly for use in mobile user terminals, has resulted in renewed interest in low-complexity decoders.
Several approaches are known that seek to reduce the complexity of an optimum Maximum A posteriori Probability (MAP) decoder, such as the Log-MAP and Max-Log-MAP schemes.
A method of decoding and a decoding apparatus according to the present invention are defined in the independent claims to which the reader should now refer. Preferred features are laid out in the dependent claims.
An example of the present invention is a method of decoding comprising processing iterations. In each processing iteration, there is a first Max-Log-MAP decoding operation, giving rise to a systematic error due to the Max-Log approximation, and a first weighting operation to weight extrinsic information from the first decoding operation for application as a priori information to a second Max-Log-MAP decoding operation. This is followed by the second Max-Log-MAP decoding operation, also giving rise to a systematic error due to the Max-Log approximation, and a second weighting operation to weight extrinsic information from the second decoding operation for application as a priori information to the first Max-Log-MAP decoding operation of the next iteration. The weights are applied to compensate for the systematic error due to the Max-Log approximation made in the preceding Max-Log-MAP decoding operation.
It can thus be considered that a modification to a known Max-Log-MAP iterative decoder is provided, basically using correction weights for the extrinsic information at each iteration in order to correct the error caused by the Max-Log approximation in the extrinsic information provided by the previous decoding iteration. This can be achieved by applying optimised weight factors to the extrinsic information in each decoding iteration. Applying such weights not only allows the inherent advantages of a Max-Log-MAP decoder to be kept, such as low complexity and insensitivity to input scaling, but tends to result in improved performance.
An embodiment of the present invention will now be described by way of example and with reference to the drawings, in which:
As background, turbo-coding will first be explained generally, before focussing in on Log-MAP decoding and then Max-Log-MAP decoding. An improvement to Max-Log-MAP decoding will then be presented.
Turbo-Decoding
Given systematic (i.e. information) bit xt,0 and parity (i.e. check sequence) bits xt,1 and xt,2, generated at the turbo-encoder (not shown) and assuming transmission through an additive white gaussian noise (AWGN) channel at time t, the corresponding received signals at the turbo-decoder 2 may be written as Λc(xt,0), Λc(xt,1) and Λc(xt,2). Turbo decoding is performed in an iterative manner using two soft-output decoders 4,6, with the objective of improving data estimates from iteration i to iteration i+1. Each soft-output decoder 4,6 generates extrinsic information Λei(xt) on the systematic bits, which then serves as a priori information Λai(xt,0) for the other decoder 6,4. Extrinsic information is the probabilistic information gained on the reliability of the systematic bits. This information is improved upon through the decoding iterations. In order to minimise the probability of error propagation, the decoders 4, 6 are separated by an interleaving process such that extrinsic information bits passing from decoder 4 to decoder 6 are interleaved, and extrinsic information bits passing from decoder 6 to decoder 4 are de-interleaved.
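Purely as an illustrative sketch of the dataflow just described (not part of the claimed subject matter), the exchange of extrinsic information between the two constituent decoders through an interleaver may be pictured as follows. The constituent decoder here is a placeholder stub; a real decoder would run the trellis recursions described later.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                  # toy block length
perm = rng.permutation(N)              # interleaver permutation
inv = np.argsort(perm)                 # de-interleaver (inverse permutation)

def siso_stub(llr_sys, llr_par, llr_apriori):
    # Placeholder for a soft-output (e.g. Max-Log-MAP) constituent decoder:
    # returns extrinsic information on the systematic bits.  A real decoder
    # would run forward/backward recursions over the code trellis.
    return 0.5 * (llr_sys + llr_par) + 0.25 * llr_apriori

llr_sys = rng.normal(size=N)           # channel LLRs for systematic bits
llr_par1 = rng.normal(size=N)          # parity LLRs seen by decoder 1
llr_par2 = rng.normal(size=N)          # parity LLRs seen by decoder 2

apriori1 = np.zeros(N)                 # no a priori knowledge at iteration 1
for it in range(4):                    # turbo iterations
    ext1 = siso_stub(llr_sys, llr_par1, apriori1)
    apriori2 = ext1[perm]              # interleave extrinsic -> a priori for decoder 2
    ext2 = siso_stub(llr_sys[perm], llr_par2, apriori2)
    apriori1 = ext2[inv]               # de-interleave back for decoder 1
```

The interleave/de-interleave pair is an exact inverse, so only the ordering of the extrinsic bits changes between the two decoders, which is what decorrelates the error events.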
As regards the choice of soft-output decoders 4, 6, a maximum a posteriori probability (MAP) scheme would be the optimum decoding scheme in the sense that it results in a minimum probability of bit error. However, the MAP scheme is computationally complex and, as a result, is usually implemented in the logarithmic domain in the form of the Log-MAP or Max-Log-MAP scheme. While the former is a mathematical equivalent of MAP, the latter scheme involves an approximation which results in even lower complexity, albeit at the expense of some degradation in performance.
For further background, the reader is referred to the book by B. Vucetic and J. Yuan, entitled “Turbo codes”, published by Kluwer Academic Publishers, 2000.
Log-MAP Algorithm
The known log-domain implementation of the MAP scheme requires log-likelihood ratios (LLR) of the transmitted bits at the input of the decoder. These LLRs are of the form

Λ(x) = log(Pr(x=1|r)/Pr(x=0|r))

where Pr(A) represents the probability of event A, x is the value of the transmitted bit, r=x+n is the received signal at the output of an additive white gaussian noise (AWGN) channel where x is the data value, and n is the noise, assumed to be zero-mean with E{|n|2}=σn2, where σn2 is the variance of the noise.
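As an illustrative aside (an assumption not spelled out in the text above): for an antipodal (BPSK) mapping x ∈ {+1, −1} over the AWGN channel with equiprobable bits, the channel LLR defined above reduces to the well-known closed form 2r/σn². A minimal sketch:

```python
import numpy as np

def channel_llr(r, sigma2):
    # LLR of a BPSK symbol x in {+1, -1} received as r = x + n over AWGN:
    # log Pr(x=+1|r)/Pr(x=-1|r) = 2*r/sigma^2, assuming equiprobable bits.
    return 2.0 * np.asarray(r, dtype=float) / sigma2
```

Note that the LLR scales inversely with the noise variance, which is why schemes that are insensitive to input scaling (such as Max-Log-MAP, discussed below) avoid the need to estimate σn².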
Given LLRs for the systematic and parity bits as well as a priori LLRs for the systematic bits, the Log-MAP decoder computes new LLRs for the systematic bits as follows:
where γtq(l′,l) denotes the logarithmic transition probability for a transition from state l′ to state l of the encoder trellis at time instant t given that the systematic bit takes on value q∈{1,0}, and Ms is the total number of states in the trellis. (For further explanation of trellis structures the reader is again referred to the Vucetic and Yuan book).
Note that the new information at the decoder output regarding the systematic bits is encapsulated in the extrinsic information term Λe(xt,0). Coefficients αt(l′) are accumulated measures of transition probability at time t in the forward direction in the trellis. Coefficients βt(l) are accumulated measures of transition probability at time t in the backward direction in the trellis. For a data block corresponding to systematic bits x1,0 to xt,0 and parity bits x1,1 to xt,1, these coefficients are calculated as described below.
Using the following initial values in a forward direction:
ᾱ0(0)=0 and ᾱ0(l)=−∞ for l≠0 (4),
the coefficients are calculated as
Using the following initial values in the backward direction:
β̄t(0)=0 and β̄t(l)=−∞ for l≠0 (7),
the coefficients are calculated as
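The recursion formulas themselves (Equations (5), (6) and (8)) are not reproduced above, but in the Max-Log form discussed below they take the standard shape "accumulated metric plus branch metric, maximised over predecessor (or successor) states". Purely as an illustrative sketch under that assumption:

```python
import numpy as np

NEG_INF = -1e30   # stands in for minus infinity in the log domain

def forward_maxlog(gamma):
    # gamma[t, l_prev, l]: branch metric (maximised over the bit value q)
    # for a transition from state l_prev to state l at time t; shape
    # (T, M, M), with impossible transitions set to NEG_INF.
    T, M, _ = gamma.shape
    alpha = np.full((T + 1, M), NEG_INF)
    alpha[0, 0] = 0.0                          # trellis starts in state 0 (Eq. (4))
    for t in range(T):
        # Max-Log recursion: alpha_t(l) = max_{l'} (alpha_{t-1}(l') + gamma_t(l', l))
        alpha[t + 1] = np.max(alpha[t][:, None] + gamma[t], axis=0)
    return alpha

def backward_maxlog(gamma):
    T, M, _ = gamma.shape
    beta = np.full((T + 1, M), NEG_INF)
    beta[T, 0] = 0.0                           # trellis terminates in state 0 (Eq. (7))
    for t in range(T - 1, -1, -1):
        beta[t] = np.max(gamma[t] + beta[t + 1][None, :], axis=1)
    return beta
```

The forward pass accumulates metrics from the known start state; the backward pass does the same from the known end state, exactly mirroring the initial values of Equations (4) and (7).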
Equation (2) is readily implemented using the known Jacobian equality

log(e^δ1 + e^δ2) = max(δ1, δ2) + log(1 + e^−|δ1−δ2|)

and using a look-up table to evaluate the correction function log(1 + e^−|δ1−δ2|).
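As an illustrative sketch of this look-up-table implementation (the table granularity and range below are assumptions for illustration, not values taken from the text):

```python
import math

# Correction values log(1 + exp(-d)) pre-tabulated at step 0.5 for d in [0, 4);
# beyond d = 4 the correction is below about 0.02 and is treated as zero.
STEP = 0.5
TABLE = [math.log1p(math.exp(-STEP * k)) for k in range(8)]

def max_star_exact(d1, d2):
    # Exact Jacobian equality: log(e^d1 + e^d2).
    return max(d1, d2) + math.log1p(math.exp(-abs(d1 - d2)))

def max_star_lut(d1, d2):
    # Log-MAP practical form: max plus a tabulated correction term.
    d = abs(d1 - d2)
    idx = int(d / STEP)
    corr = TABLE[idx] if idx < len(TABLE) else 0.0
    return max(d1, d2) + corr
```

A small table suffices because the correction term decays quickly with |δ1−δ2|, which is precisely what makes the Log-MAP scheme cheap relative to a direct evaluation of the exponentials.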
Max-Log-MAP decoding
It is known that the complexity of the Log-MAP scheme can be further reduced by using the so-called Max-Log approximation, namely
log(e^δ1 + e^δ2) ≈ max(δ1, δ2) (10)

for evaluating Equation (2). (log, of course, denotes the natural logarithm, i.e. loge.) The Max-Log-MAP scheme is often the preferred choice for implementing a MAP decoder, for example as shown in
However, the known Max-Log approximation leads to accumulating a bias in the decoder output (extrinsic information), i.e. the Max-Log approximation results in biased soft outputs. A bias is, of course, an average of errors over time. This is due to the fact that the known Max-Log-MAP scheme uses the mathematical approximation of Equation (10) to simplify the computation of extrinsic information Λe(xt,0). This approximation results in an error which accumulates from iteration to iteration and impedes the convergence of the turbo-decoder. Since in a turbo-decoder, each decoder output becomes a priori information for the following decoding process, the bias leads to sub-optimal combining proportions between the channel input and the a priori information, thereby degrading the performance of the decoder. In consequence, the known turbo-decoding process may not converge when the known Max-Log-MAP scheme is used for the constituent decoding processes.
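The one-sided nature of this error can be seen directly: the Max-Log approximation always under-estimates log(e^δ1 + e^δ2), by exactly the correction term log(1 + e^−|δ1−δ2|) > 0, so the individual errors never cancel and instead average to a bias. A small numerical illustration (the Gaussian test distribution is an arbitrary choice for the demonstration):

```python
import math
import random

random.seed(1)
errs = []
for _ in range(1000):
    d1, d2 = random.gauss(0, 2), random.gauss(0, 2)
    exact = max(d1, d2) + math.log1p(math.exp(-abs(d1 - d2)))  # Jacobian equality
    approx = max(d1, d2)                                       # Max-Log approximation
    errs.append(exact - approx)        # always >= 0: the error is one-sided

bias = sum(errs) / len(errs)           # strictly positive average error
```

Because every sample error has the same sign, the mean error (the bias) is strictly positive rather than zero, and in an iterative decoder this bias is fed forward as corrupted a priori information.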
Maximum Mutual Information Combining
The inventors recognised these errors as a problem and so wanted to correct for such errors in an efficient manner whilst maintaining the benefits of the Max-Log-MAP approach.
As mentioned above (see Equation (5)), the central operation in the known Max-Log-MAP scheme is the computation of logarithmic transition probabilities of the form
where Λa(xt,0), Λc(xt,0) and Λc(xt,1) are inputs to the constituent decoder.
The inventors realised that bias produced by the Max-Log-MAP scheme should be corrected by appropriate scaling of the terms Λa(xt,0) and Λc(xt,0) in the above equation by weights wai and wci, resulting in
where i represents the iteration index. This is illustrated in
This correction is simple and effective, and retains the advantage of Max-Log-MAP decoding in that soft inputs in the form of scaled LLRs are accepted.
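As noted later in the text, both optimum weights are normalised by the channel weight, so in practice only the a priori term requires an explicit multiplication. A minimal sketch of applying the correction (function name and argument order are illustrative):

```python
import numpy as np

def apply_correction_weight(llr_apriori, w_a_opt, w_c_opt):
    # After normalising both optimum weights by w_c (so that the channel
    # LLRs keep their natural ratio), only the a priori term needs scaling:
    # a single multiplication per systematic bit per constituent decoder.
    return (w_a_opt / w_c_opt) * np.asarray(llr_apriori, dtype=float)
```

This is the "two additional multiplications per iteration" cost referred to in the closing discussion: one weighted a priori input for each of the two constituent decoders.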
Determination of Weights
The optimum values of the weights wai and wci to be applied are those that maximise the transfer of mutual information from one Max-Log-MAP decoder to the other at every iteration. Mutual information is, of course, a measure of the information conveyed as to knowledge of a data sequence. These optimum values can be mathematically written as
where eigmax(A) denotes the eigenvector corresponding to the largest eigenvalue of matrix A, and R are correlation matrices defined later on in this text. Equation (26) has more than one solution, but any of the solutions is optimum in maximising mutual information.
The above-mentioned optimum weights are both normalised (divided) by wOPT,ci so that the weights to be applied become
This normalisation is required to ensure that the natural ratio between Λc(xt,0) and Λc(xt,1) remains undisturbed.
Computation of the weights in Equation (26) requires the (2×2) matrices Rεi=Rλ+εi−Rλi and Rλ+εi to be computed. These are:
where Λai(xt,0)=λt,ai+εt,ai and Λci(xt,0)=λt,ci+εt,ci, where λt,ai and λt,ci are the notional uncorrupted LLR values, εt,ai and εt,ci are the errors in the LLRs as detected compared to the notional uncorrupted LLR values, and “E” denotes statistical mean.
It is proposed that the above statistical means be computed by averaging over a number of data blocks containing a total of, say, N information bits. In other words:
The above operations to determine weights are performed only once and off-line.
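As an illustrative off-line sketch only: the (2×2) correlation matrices can be estimated by sample averaging over the N bits, and eigmax then extracted numerically. The exact matrix product inside eigmax in Equation (26) is not reproduced in the text above, so the generalized Rayleigh-quotient form used below (eigenvector of the largest eigenvalue of Rε⁻¹Rλ) is one consistent reading and should be treated as an assumption.

```python
import numpy as np

def optimum_weights(lam_a, eps_a, lam_c, eps_c):
    # lam_*: notional uncorrupted LLR samples; eps_*: the corresponding
    # errors, so that the observed LLRs are lam + eps (as in the text).
    lam = np.vstack([lam_a, lam_c])              # 2 x N uncorrupted values
    obs = lam + np.vstack([eps_a, eps_c])        # 2 x N observed LLRs
    n = lam.shape[1]
    R_lam = lam @ lam.T / n                      # sample average (Eq. (30) style)
    R_obs = obs @ obs.T / n
    R_eps = R_obs - R_lam                        # error correlation matrix
    # Eigenvector of the largest eigenvalue of R_eps^-1 R_lam (assumed
    # reading of eigmax in Equation (26)).
    vals, vecs = np.linalg.eig(np.linalg.solve(R_eps, R_lam))
    w = np.real(vecs[:, np.argmax(np.real(vals))])
    return w / w[1]                              # normalise by the w_c component
```

The final division implements the normalisation by wOPT,ci described above, so that the returned channel weight is unity and only the a priori weight must be applied on-line.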
The averaging operations defined by equation (30) are undertaken in the average determining processor 8 shown in
Example System
Given systematic (i.e. information) bit xt,0 and parity (i.e. check sequence) bits xt,1 and xt,2, generated at the turbo-encoder (not shown) and assuming transmission through an additive white gaussian noise (AWGN) channel at time t, the corresponding received signals at the turbo-decoder 2′ may be written as Λc(xt,0), Λc(xt,1) and Λc(xt,2).
Turbo decoding is performed in an iterative manner using two Max-Log-MAP decoders 4′,6′ of known type as described above, with the objective of improving the data estimates from iteration i to iteration i+1. Each soft-output decoder 4′,6′ generates extrinsic information Λei(xt) on the systematic bits, which then serves as a priori information Λai(xt,0) for the other decoder. Extrinsic information is the probabilistic information gained on the reliability of the systematic bits. This information is improved upon through the decoding iterations. In order to minimise the probability of error propagation, the decoders 4′, 6′ are separated by an interleaving process such that extrinsic information bits passing from decoder 4′ to decoder 6′ are interleaved, and extrinsic information bits passing from decoder 6′ to decoder 4′ are de-interleaved.
Importantly, the a priori information is weighted as described above to remove the bias caused by the Max-log approximation.
The weights are determined by feeding the turbo-decoder with the channel outputs {Λc(xt,0), Λc(xt,1), Λc(xt,2)}t=1 . . . N and performing I turbo iterations as per normal operation. By observing the constituent decoder inputs Λc(xt,0) and Λa(xt,0) at each iteration, the coefficients wai and wci are computed following Equations (26) to (30) as explained above.
To review, it was found that the performance of a decoder can be improved, by using the modified Max-Log-MAP decoder, to approach that of a decoder using the optimum Log-MAP or MAP schemes. This is achieved at the expense of only two additional multiplications per iteration for each systematic bit, each multiplication applying a weight to the a priori information of one of the two constituent decoders. The weights correct for the bias caused by the Max-Log approximation. The advantages of a Max-Log-MAP decoder are maintained, namely its insensitivity to scaling of the log-likelihood ratios, and the fact that an estimate of the noise variance is not required.
Since the weights need only be computed once for a particular turbo-decoder, the improved decoder retains the low complexity of a Max-Log-MAP approach. The values of optimal weights to be applied can be computed off-line.