The present invention relates to a method for source decoding a variable-length soft-input codewords sequence into a soft-output bit sequence.
Such a method may be used in any system using variable-length codes like, for example, a video or audio communication system.
A video communication system typically comprises a source encoding system, a channel and a source decoding system. The source encoding system generates variable-length codewords sequences and transmits them over the channel to the source decoding system that decodes them thanks to a shared code.
Variable-length codes, which are widely used in video coding standards for their compression capabilities, are very sensitive to channel errors. Indeed, when some bits are altered by the channel, synchronisation losses can occur at the receiver side, possibly leading to dramatic symbol error rates. This phenomenon has led to the introduction of modified variable-length codes such as, e.g., self-synchronising Huffman codes and reversible variable-length codes.
Another solution is to re-introduce redundancy into the bitstream by inserting an error correcting code in the chain. The key point of the latter solution is to appropriately use the residual source redundancy at the decoding side. Considered as a form of implicit channel protection by the decoder, this redundancy can be exploited as such to provide error correction capability for the variable-length coded source.
Recent work has shown that low-complexity approximate MAP (Maximum A Posteriori) decoders can be used that provide approximately the same performance as all existing soft decoding algorithms while exhibiting a complexity close to the hard decoding case. Such a decoder is disclosed in [L. Perros-Meilhac and C. Lamy. “Huffman tree based metric derivation for a low-complexity sequential soft VLC decoding”. In Proceedings of ICC'02, volume 2, pages 783–787, New York, USA, April–May 2002].
Even though MAP algorithms exist and provide very good results in terms of error correction, they are very complex. On the other hand, stack algorithms, such as the one disclosed in [Buttigieg: “Variable-length error-correcting codes”, PhD thesis, University of Manchester, United Kingdom, 1995], are much less complex and can reach similar performance. However, they are primarily symbol-level decoding algorithms and thus are not well adapted to provide reliability information on bits.
The aim of the invention is to provide a method for source decoding a variable-length soft-input codewords sequence working at a bit level and at low complexity for VLC codes.
The subject matter of the invention is a method for source decoding a variable-length soft-input codewords sequence as defined in claim 1.
In addition, there is provided a source decoder for decoding a variable-length soft-input codewords sequence as defined in claim 10.
Additional features are disclosed in the other claims.
As will be seen in detail further on, such a method has the advantage of providing reliability information (or soft-output) on the decoded sequences, allowing paths in a tree to be selected in increasing order of their metric value. Thus, by searching only for useful codewords, the proposed method is very efficient in terms of CPU cost, complexity and time, as many other paths in the decoding tree are no longer considered. The soft-output information then makes it possible to perform iterative joint decoding when the VLC encoder has been concatenated with another code.
The invention will be better understood from reading the following description, which is given solely by way of example and with reference to the drawings, in which:
A transmission chain for joint decoding between convolutional code and variable-length code is shown on
This transmission chain comprises a transmitter 1 and a receiver 2.
Variable-length codes (VLC) are used in the transmission chain to reduce the length of the transmitted bitstreams.
The transmitter 1 includes a variable length source 10 which is adapted to output a random VLC symbols sequence s according to chosen VLC codewords probabilities.
At the output of the VLC source, the transmitter 1 comprises a variable length encoder 12, a pseudo-random interleaver 14, a systematic convolution (CC) encoder 16, a puncturer 18 and a BPSK modulator 20.
The random VLC symbols sequence s is encoded in the VLC encoder 12, which is adapted to map the received symbols, for example grouped into packets of R symbols, to a T-bit VLC sequence denoted x [1:T]. The bit sequence x [1:T] is created, as known per se, following the rule dictated by a VLC table. An example of a VLC table is given on
In the VLC table, the code C is defined as follows. A codeword, having a predetermined length, is associated with each symbol Si. The value of a codeword also represents the value that can be taken by the associated symbol Si. Besides, a probability of appearance P (Sk) is associated with each codeword.
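By way of a hedged illustration (the actual code C is defined in the figure; the table below is a hypothetical stand-in with illustrative codewords and probabilities), such a VLC table and the symbol-to-bit mapping of the encoder 12 may be sketched as follows:

```python
# Hypothetical VLC table: each symbol Si maps to a codeword of
# predetermined length and an a priori probability P(Sk).
# (Illustrative values only, not the table of the figure.)
VLC_TABLE = {
    "S1": ("0",   0.5),   # most probable symbol gets the shortest codeword
    "S2": ("10",  0.3),
    "S3": ("110", 0.1),
    "S4": ("111", 0.1),
}

def encode(symbols):
    """Map a symbols sequence s to its T-bit VLC sequence x[1:T]."""
    return "".join(VLC_TABLE[s][0] for s in symbols)

# Example: a packet of R = 3 symbols
bits = encode(["S2", "S1", "S3"])   # -> "100110"
```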
Each sequence x [1:T] is then permuted by the interleaver 14 to obtain an interleaved sequence {tilde over (x)} [1:T].
This interleaved sequence {tilde over (x)} [1:T] is given as input to the systematic convolutional encoder 16. The coded sequence, denoted v [1:T x n/k], at the output of the CC encoder 16 is then punctured, if needed, by the puncturer 18 in order to attain the desired transmission rate, modulated by the BPSK modulator 20 and transmitted over a channel with variance σ2.
The receiver 2 includes a demodulator 30, an optional depuncturer 32 and an iterative decoder or joint decoder 34.
The depuncturer 32 is adapted to introduce into the received sequence zero-valued samples to replace the samples removed from the coded sequence by the puncturing operation. The depunctured sequence denoted z [1:T x n/k] is then decoded by the iterative decoder 34, the flowchart of which is detailed on
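The depuncturing operation can be sketched as follows. This is a hedged sketch: the actual puncturing table of the chain is given in a figure, so the flattened 0/1 pattern used here (1 = sample transmitted, 0 = sample punctured) is purely illustrative.

```python
def depuncture(received, pattern):
    """Re-insert zero-valued samples at the positions removed by puncturing.

    `pattern` is a puncturing table flattened to a periodic list of 0/1
    flags: 1 = sample was transmitted, 0 = sample was punctured.
    (Illustrative layout; the actual table is defined elsewhere.)
    """
    out, it = [], iter(received)
    total = len(received) * len(pattern) // sum(pattern)
    for i in range(total):
        # transmitted positions take the next received sample,
        # punctured positions are filled with a zero-valued sample
        out.append(next(it) if pattern[i % len(pattern)] else 0.0)
    return out

# Example: period-3 pattern keeping 2 samples out of 3
z = depuncture([1.0, -1.0, 0.5, -0.5], [1, 1, 0])
```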
As shown on
The SISO CC decoder 42 is for example as disclosed in [L. Bahl, J. Cocke, F. Jelinek, and J. Raviv. “Optimal decoding of linear codes for minimizing symbol error rate”. IEEE Transactions on Information Theory, 20:284–287, March 1974].
For decoding a sequence y [1:T], and as known per se, the SISO VLC decoder 44 is adapted to compute a soft-output in the form of the log a posteriori ratio Λ(x[t]):
Λ(x[t])=log (P(x[t]=1|y[1:T])/P(x[t]=0|y[1:T]))
and a hard bit estimate sequence denoted {circumflex over (x)}v [1:T] or a hard symbol estimate sequence denoted ŝv [1:R]. The hard symbol estimate sequence is derived from the hard bit estimate sequence by applying the VLC table.
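The derivation of the hard symbol estimates from the hard bit estimates can be sketched as follows. The codebook is a hypothetical stand-in for the VLC table of the figure; the principle is simply that a prefix-free code can be parsed bit by bit:

```python
# Hypothetical prefix-free codebook (stand-in for the VLC table).
CODEBOOK = {"0": "S1", "10": "S2", "110": "S3", "111": "S4"}

def bits_to_symbols(bits):
    """Derive the hard symbol estimate sequence from the hard bit
    estimate sequence by applying the VLC table."""
    symbols, prefix = [], ""
    for b in bits:
        prefix += b
        if prefix in CODEBOOK:          # a complete codeword has been read
            symbols.append(CODEBOOK[prefix])
            prefix = ""
    return symbols

# Example: the bit sequence "100110" parses as S2, S1, S3
syms = bits_to_symbols("100110")
```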
The VLC SISO decoder 44 derives the reliability information on bits Λ (x[t]) by taking into account the a priori knowledge it has of the VLC encoder. In this case, this a priori knowledge consists of a VLC tree structure derived from the VLC table, the occurrence probabilities of the symbols P (Sk) and any other source side information, denoted SSI, such as possibly the number of symbols of the concerned sequence.
A summary of the iterative decoding method is given hereafter.
1. Initialise Φ(0)[t]=0.
2. For iterations r=1, 2, . . . , I, where I is the total number of iterations
for 1≦t≦T,
More precisely, the iterative decoding process takes place as follows. At the rth iteration, the CC decoder's input consists of the depunctured sequence z[1:T x n/k] and the a priori probabilities ratio Φ(r−1) [1:T] of the interleaved sequence denoted {tilde over (x)} [1:T] obtained at the previous iteration. The CC decoder 42 provides the {tilde over (Λ)}c(r) [1:T] output sequence.
At this same rth iteration, the VLC decoder 44 takes as input the observation sequence y(r) [1:T] derived from the CC decoder output sequence, the a priori probabilities of VLC symbols as well as any other available source side information SSI, and provides the Λv(r) [1:T] output sequence.
For each decoder to take advantage of the iterative process, independent information must be exchanged between the two decoders: the so-called extrinsic information. Ec(r) [t] and Ev(r) [t] are defined as the extrinsic information about bit t provided respectively by the CC decoder 42 and by the VLC decoder 44.
The CC extrinsic information sequence Ec(r) [1:T], scaled by σ2/2, is used as observation for the rth iteration of the VLC decoder, thus
y(r)[t]=σ2/2×Ec(r)[t].
The interleaved VLC extrinsic information {tilde over (E)}v(r) [1:T] is used as a priori probabilities ratio estimate for the r+1th iteration of the CC decoder 42,
Φ(r)[t]={tilde over (E)}v(r) [t].
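The data flow of this exchange within one iteration can be sketched as follows. This is a hedged sketch: the two SISO decoders themselves are replaced by trivial stand-ins, and the interleaver permutation and extrinsic values are toy data; only the flow (deinterleaving, scaling by σ2/2, re-interleaving into the a priori ratios) follows the description above.

```python
def interleave(seq, perm):
    """Permute seq according to perm: out[i] = seq[perm[i]]."""
    return [seq[p] for p in perm]

def deinterleave(seq, perm):
    """Inverse permutation of interleave()."""
    out = [0.0] * len(seq)
    for i, p in enumerate(perm):
        out[p] = seq[i]
    return out

sigma2 = 0.8                    # channel variance (toy value)
perm = [2, 0, 3, 1]             # pseudo-random interleaver (toy permutation)
e_c = [1.2, -0.7, 0.3, 2.1]     # CC extrinsic information Ec(r)[1:T] (toy values)

# Observation for the rth VLC iteration: Ec scaled by sigma2/2
y = [sigma2 / 2 * e for e in deinterleave(e_c, perm)]

e_v = y                         # stand-in for the VLC extrinsic output Ev(r)
phi = interleave(e_v, perm)     # a priori ratios Phi(r)[t] = ~Ev(r)[t]
```

With the identity stand-in for the VLC decoder, phi reduces to e_c scaled by σ2/2, which makes the round-trip through the interleaver easy to check.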
The SISO VLC decoder 44 implements a soft-input soft-output (SISO) stack method, the algorithm of which is disclosed on
The method proceeds in two main stages:
The first main stage 100 of the method consists in applying a stack sequential decoding algorithm on a “huge” tree formed by concatenating a considered Huffman tree several times, until, for example, the number of bits and the number of symbols in the considered sequence are reached.
The first step 111 of the hard decoding stage 100 consists in creating a structure tree by defining relationships between nodes and computing an a priori probability associated with each branch of the tree. The unitary Huffman tree corresponding to VLC codes of
Such a tree comprises a plurality of:
A path comprises a plurality of branches B and goes from an initial node N00 to a succeeding node which may be a symbol Si.
Besides, a tree has different levels, the first one being the level 0.
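The tree structure built at step 111 can be sketched as follows. This is a hedged sketch: the node labels Nij (level i, position j) follow the notation of the example further on, but the branch probabilities and the placement of the symbol node are illustrative assumptions.

```python
class Node:
    """One node of the structure tree of step 111."""
    def __init__(self, name, level, symbol=None):
        self.name = name        # node label, e.g. "N00", "N10"
        self.level = level      # level 0 holds the initial node N00
        self.symbol = symbol    # set when the node is a symbol node Si
        self.children = []      # succeeding branches: (bit, a priori prob, child)

# A two-level fragment: the branch "0" reaches a symbol node,
# the branch "1" reaches an internal node (probabilities are toy values).
root = Node("N00", 0)
n10 = Node("N10", 1, symbol="S1")
n11 = Node("N11", 1)
root.children = [(0, 0.5, n10), (1, 0.5, n11)]
```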
At step 112, a stack is defined. This stack is memorized at each step of the hard decoding stage 100 in order to be later used during the post processing stage 102.
The stack contains:
The stack is initialised by placing the initial node N00 with metric 0 in the stack. Initial node N00 is considered as being the top path in the decoding stack.
At step 113, a metric of the succeeding branches of the last node of the top path is computed.
As sequential decoding methods compare sequences of different lengths, a specific metric is used. For each node l in the set Np, the metric associated with the branch leading to this node at time t is defined as follows:
m(l, y[t])=−log P(y[t]|v(l))−log pt(l)+log P0(y[t]).
The term pt(l) will in practice be approximated by the a priori probability of the branch p(l) for simplicity reasons. This last quantity can be directly obtained from the tree representation of the VLC table and the codeword probabilities, which are assumed to be known by the decoder, as explained in [L. Guivarch, J.-C. Carlach, and P. Siohan. “Joint source-channel soft decoding of Huffman codes with turbo-codes”. In Proceedings of the Data Compression Conference (DCC'00), pages 83–91, Snowbird, Utah, USA, March 2000].
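The branch metric can be sketched as follows. This is a hedged sketch assuming BPSK over an AWGN channel with variance σ2, so that v(l) maps to ±1 and P(y[t]|v(l)) is Gaussian; P0(y[t]) is taken as the unconditioned density assuming equiprobable bits, which is one possible choice for this term.

```python
import math

def branch_metric(y_t, v_l, p_l, sigma2):
    """m(l, y[t]) = -log P(y[t]|v(l)) - log p(l) + log P0(y[t]).

    Assumes BPSK (+1 for bit 1, -1 for bit 0) over AWGN with variance
    sigma2; p_l is the a priori probability p(l) of the branch.
    """
    def gauss(y, mean):
        # Gaussian channel transition density
        return math.exp(-(y - mean) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

    s = 1.0 if v_l == 1 else -1.0
    p_y_given_v = gauss(y_t, s)
    p0 = 0.5 * gauss(y_t, 1.0) + 0.5 * gauss(y_t, -1.0)   # equiprobable-bit assumption
    return -math.log(p_y_given_v) - math.log(p_l) + math.log(p0)
```

With a positive observation, the branch carrying bit 1 gets the smaller (better) metric, and a more probable branch is further favoured through the −log p(l) term.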
At step 114, a test is performed to determine whether an extended path reaches a symbol node. An extended path consists of the current path concatenated with a possible succeeding branch.
If such a symbol node is reached by an extended path, the number of symbols associated with this path is increased at step 115.
Increasing the number of symbols consists in concatenating the considered unitary Huffman tree corresponding to the VLC codes, via its initial node, to the symbol node reached by the extended path.
The top path is deleted from the stack at step 116. The top path is the path mentioned in the first line of the stack. It is also the path of the stack having the smallest cumulative metric.
The extended paths are then inserted in the stack at step 117.
The cumulative metric of each newly inserted path is computed and stored in the stack. This cumulative metric is equal to the cumulative metric of the previous top path increased by the metric of the branch added to obtain the extended path.
At step 118, a new top path is selected. The new top path selected is the path of the stack having the smallest cumulative metric among the paths listed in the stack.
Next, it is checked if stop conditions are verified at step 121. The stop conditions are for example that the top path contains the number of bits and the number of symbols of the original sequence.
If the stop conditions are verified at step 121, then step 122 is carried out. Otherwise, step 114 and the following steps are repeated.
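The loop of steps 113 to 121 can be sketched as follows. This is a hedged sketch: it runs on the toy code {S1:"0", S2:"10", S3:"110", S4:"111"} rather than the code of the figures, and the branch metric is simplified to the a priori term −log p alone (no channel observation), whereas the metric of the method also includes the channel likelihood terms. A heap stands in for the ordered stack, so the top path is always the one with the smallest cumulative metric.

```python
import heapq, math

# Toy prefix-free code and symbol probabilities (illustrative assumptions).
CODE = {"0": "S1", "10": "S2", "110": "S3", "111": "S4"}
PROB = {"S1": 0.5, "S2": 0.3, "S3": 0.1, "S4": 0.1}

def stack_decode(n_bits, n_symbols):
    """Stack sequential decoding until a path holds n_bits bits and
    n_symbols symbols (the stop conditions of step 121)."""
    # stack entries: (cumulative metric, bits so far, symbols decoded, open prefix)
    stack = [(0.0, "", 0, "")]
    while stack:
        metric, bits, syms, prefix = heapq.heappop(stack)    # top path (step 118)
        if len(bits) == n_bits and syms == n_symbols and prefix == "":
            return bits, metric                              # stop conditions verified
        if len(bits) >= n_bits:
            continue                                         # over-long path, dropped
        for b in "01":                                       # extend by one branch (step 113)
            p = prefix + b
            if p in CODE:                                    # symbol node reached (steps 114-115)
                heapq.heappush(stack,
                    (metric - math.log(PROB[CODE[p]]), bits + b, syms + 1, ""))
            elif any(cw.startswith(p) for cw in CODE):       # internal node (step 117)
                heapq.heappush(stack, (metric, bits + b, syms, p))
    return None

# Example: the most probable 6-bit, 3-symbol sequence under these priors
result = stack_decode(6, 3)
```

With these toy priors the best 6-bit, 3-symbol path is S2 S2 S2, i.e. the bit sequence "101010", since 0.3^3 exceeds any combination using S1 with S3 or S4.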
During the hard decoding stage 100, the content of the stack is stored at each step.
At step 122 corresponding to post processing stage 102, soft-output values are derived from the cumulative metrics of paths considered in hard decoding stage 100 and stored in the stack. Post processing stage algorithm will be disclosed later.
An illustrated example of hard decoding stage 100 is given hereafter.
Step 111: creation of the structure tree by defining relationships between nodes and computing the a priori probability associated with each branch.
Step 112: initialization of the stack.
Step 113: computation of the metrics of the succeeding branches of the last node N00.
They are calculated as follows:
M(N00−N10)=M10
M(N00−N11)=M11.
Step 114: the extended path reaching node N10 reaches a symbol node. Thus, step 115 is implemented for node N10.
Step 115: concatenation of the considered unitary Huffman tree with the initial node at the reached symbol node N10. The resulting tree is shown on
Steps 116 & 117: deletion of the top path PTH00 from the stack and insertion of the extended paths in the stack.
Step 118: selection of the new top path, i.e. the path having the smallest cumulative metric.
(We assume that the smallest cumulative metric is M11)
Step 121: the stop condition is assumed not to be verified.
Step 113: the last node of the top path is N11.
The metrics of the succeeding branches of the last node of the top path are calculated as follows:
M(N11−N20)=M20
M(N11−N21)=M21.
Step 114: the extended paths reaching nodes N20 and N21 reach a symbol node, thus step 115 is implemented for nodes N20 and N21.
Step 115: concatenation of the considered unitary Huffman tree with the initial node at the reached symbol nodes N20 and N21. The resulting tree is shown on
Steps 116 & 117:
Step 118: selection of the new top path, i.e. the path having the smallest cumulative metric.
(We assume that the smallest cumulative metric is M11+M21.)
Step 121: the stop condition is assumed not to be verified.
Step 113: the last node of the top path is N21=N00. The metrics of the succeeding branches of the last node of the top path are calculated as follows:
M(N21−N34)=M′10
M(N21−N35)=M′11.
Step 114: the extended path reaching node N34 reaches a symbol node, thus step 115 is implemented for N34.
Step 115: concatenation of the considered unitary Huffman tree with the initial node N00 at the reached node N34, as shown on
Steps 116 & 117: deletion of the top path PTH21 from the stack and insertion of the extended paths.
Step 118: selection of the new top path, i.e. the path having the smallest cumulative metric. (We assume that the smallest cumulative metric is M11+M20.)
The steps as above disclosed are repeated until the stop conditions are verified at step 121.
Once the sequential decoding process is finished, the post-processing takes place and generates soft-outputs. These soft-outputs must consequently be extracted from the paths that have been examined and stored by the stack decoding algorithm.
Let {P1, . . . , Pr} be the r examined paths stored in the stack. A given path Pi (1≦i≦r) is characterised by a length in bits TPi, a sequence of bits {{tilde over (x)}Pi [1], . . . , {tilde over (x)}Pi [TPi]} and a cumulative metric μPi.
A first solution proposes to approximate the log-likelihood ratio Λ (x [t]) by:
Λ(x[t])=μ(t, 0)−μ(t, 1),
where μ (t, 1) (resp. μ (t, 0)) is the minimum cumulative metric for all the paths in the stack for which the tth estimated bit is 1 (resp. 0).
If P* is the path selected by the decoding process, then if {tilde over (x)}P*[t]=i (with i∈{0, 1}), we have
μ(t, i)=μP*.
As a consequence, for each time t, only the minimum cumulative metric of the paths whose tth bit is complementary to the estimated sequence has to be determined.
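The first solution can be sketched as follows. This is a hedged sketch: each stored path is reduced to its cumulative metric and bit string, and the clipping value used when no stored path carries the complementary bit is an assumption (the text does not specify how that case is handled).

```python
def soft_output_best(paths, T, clip=10.0):
    """First solution: Lambda(x[t]) = mu(t,0) - mu(t,1), where mu(t,i) is
    the minimum cumulative metric over stored paths whose tth bit is i.

    `paths` is a list of (cumulative metric mu_Pi, bit string x_Pi).
    `clip` bounds the output when one bit value is unrepresented (assumption).
    """
    lam = []
    for t in range(T):
        mu = {0: None, 1: None}
        for metric, bits in paths:
            if len(bits) > t:                      # path long enough to cover bit t
                b = int(bits[t])
                if mu[b] is None or metric < mu[b]:
                    mu[b] = metric
        m0 = mu[0] if mu[0] is not None else mu[1] + clip
        m1 = mu[1] if mu[1] is not None else mu[0] + clip
        lam.append(m0 - m1)
    return lam

# Example with three stored paths (toy metrics and bit strings)
llr = soft_output_best([(1.0, "101"), (2.5, "001"), (3.0, "111")], 3)
```

A positive Λ(x[t]) thus indicates that the best path carrying bit 1 at position t has a smaller metric than the best path carrying bit 0, consistently with the definition above.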
In a second solution, not only the best paths for both values 0 and 1 of the tth estimated bit are taken into account, but all the metrics of the paths stored by the stack algorithm. The log-likelihood ratio Λ (x [t]) is then approximated by
Λ(x[t])=log (Σe−μPi/Σe−μPi),
where the numerator sum runs over the paths Pi with 1≦i≦r, TPi≧t and {tilde over (x)}Pi[t]=1, and the denominator sum over the paths Pi with 1≦i≦r, TPi≧t and {tilde over (x)}Pi[t]=0.
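The second solution can be sketched as follows. As before this is a hedged sketch: the path representation and the clipping value used when one of the two sums is empty are assumptions.

```python
import math

def soft_output_all(paths, T, clip=10.0):
    """Second solution: all stored paths contribute,
    Lambda(x[t]) = log( sum_{x_Pi[t]=1} e^(-mu_Pi) / sum_{x_Pi[t]=0} e^(-mu_Pi) ),
    where each sum runs over paths with TPi >= t+1.

    `paths` is a list of (cumulative metric mu_Pi, bit string x_Pi);
    `clip` handles the case of an empty sum (assumption).
    """
    lam = []
    for t in range(T):
        num = sum(math.exp(-m) for m, bits in paths if len(bits) > t and bits[t] == "1")
        den = sum(math.exp(-m) for m, bits in paths if len(bits) > t and bits[t] == "0")
        if num == 0.0:
            lam.append(-clip)
        elif den == 0.0:
            lam.append(clip)
        else:
            lam.append(math.log(num / den))
    return lam
```

Compared with the first solution, low-metric paths dominate each sum through e−μ, so both approximations agree when one path is clearly best, while this one also captures the weight of near-best competitors.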
The following codes are considered in the communication chain: the VLC code C given in Table 1 followed by the convolutional code CCA, with coded bits punctured using the puncturing table defined in Table 2. The results are given in terms of Frame Error Rate (FER) and Signal to Noise Ratio (SNR).
The stack algorithm is only about 0.5 dB worse than the KMAP algorithm, for a much lower complexity.
Yet, comparing these results with what would be obtained when replacing the SISO VLC decoders in the receiver by a classical hard VLC decoder (hence without iterating), the gain provided by the iterative process was found to be substantial.
Foreign Application Priority Data: 02292223, Sep 2002, EP, regional.
PCT Filing Data: PCT/IB03/03870, filed 9/4/2003, WO, 371(c) date 3/8/2005.
PCT Publication Data: WO2004/025840, published 3/25/2004, WO, A.
U.S. Patent Documents Cited:
4797887, Yamasaki et al., Jan 1989, A.
6246347, Bakhmutsky, Jun 2001, B1.
6851083, Hagenauer et al., Feb 2005, B1.
6891484, Lamy et al., May 2005, B2.
20010007578, Ran et al., Jul 2001, A1.
20040259098, Lamy et al., Dec 2004, A1.
U.S. Publication Data: 20060061498 A1, Mar 2006, US.