The present invention relates to methods for representing and compressing soft metrics in communication systems based on channel coding. In particular, but not exclusively, the present invention relates to signal de-interleaving in a receiver, for example an OFDM receiver.
All modern digital communication systems use channel coding to protect data and allow better reception. This is the case, to name just a few examples, of the several Digital Audio and Video Broadcasting standards (DAB and DVB), of wireless networks, including WiFi and Bluetooth in their various implementations, and of modern cellular communication systems.
It is customary, in these communication systems, to apply several permutation operators to the data stream. Such permutations, generally indicated as interleaving, are often introduced at the transmitter side and have in general the effect of improving the communication bandwidth and reducing the error rate. According to the cases, interleaving can take place at bit or symbol level, or both. Interleaving introduced at the transmitter side must in general be undone by a corresponding inverse deinterleaving operation in the receiver to allow the reconstruction of the original signal.
Known implementations of interleaving and deinterleaving require storing in a memory a sequence of data whose length is equal to the period of the interleaving operator. Since newly proposed communication standards advocate the use of interleaving operators of increasing complexity and length, interleaving and deinterleaving operations place a heavy burden on the memory resources. There is therefore a need for an interleaving and/or deinterleaving method that is less memory demanding than the methods of the art.
According to the invention, these aims are achieved by means of the object of the appended claims.
The invention will be better understood with the aid of the description of an embodiment given by way of example and illustrated by the figures, in which:
This invention concerns methods and devices to represent, compress, de-compress and de-represent data that must be processed by a channel that still allows the data to be de-compressed afterwards, such as, for example: (a) a channel that permutes the order of an incoming signal, (b) a memory where the data are written and then read in another given order, (c) a communication channel.
Functional block 2, or encoder, is adapted to map the d-data generated by source 1 onto words of a given error correcting code. In the field of digital communication systems, many error correcting codes can be used, such as: Low Density Parity Check (LDPC) codes, convolutional codes, block codes, etc. The mapping of d onto words of the error correcting code is named encoding. In the following the output of block 2 will be indicated by c. The encoded data c are modulated by modulator 3. The transmitter 8 comprises the sequence of the three blocks 1, 2 and 3. Its output, denoted as x, goes through a transmission channel 4 that could be a radio propagation process, a cable transmission, or also a generic operator. The signal emerging from the other side of channel 4 is collected by receiver 9; the received signal is denoted by r. The received signal is processed by block 5, which performs the demodulation of the received signal. The output of Block-5 is then processed by block 6, which performs decoding. The output of Block-6 is an estimate of the transmitted data d.
The c-stream is transformed by modulator block 3 into another format suitable for transmission. In the following the process performed by Block-3 will also be named modulation. Many techniques can be used for addressing this goal. Nevertheless most of them can be represented as reported in
The interleaver block 31 takes a set of the values carried by the c-stream and performs a permutation on it. Denoting the output of Block-31 by a, the interleaving rule can be written as follows:
aj=ci, j=π(i),  (1)
where ci is the i-th value carried by the c-stream, aj is the j-th value carried by the a-stream and π is a function, specific to the chosen modulation standard, that defines the permutation performed by Block-31. Since at this stage both signals, c and a, carry binary values (bits), the process performed by Block-31 is also named bit-interleaving.
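The bit-interleaving rule of Eq. (1) can be sketched in a few lines of code. The permutation used below is a toy example chosen for illustration, not one taken from any particular standard.

```python
# Sketch of Eq. (1): a[j] = c[i] with j = pi(i).

def interleave(c, pi):
    """Permute the bit stream c so that output a[pi[i]] = c[i]."""
    a = [None] * len(c)
    for i, bit in enumerate(c):
        a[pi[i]] = bit
    return a

c = [1, 0, 1, 1, 0, 0]      # toy bit stream
pi = [2, 0, 5, 1, 4, 3]     # toy permutation pi(i)
a = interleave(c, pi)
print(a)                    # [0, 1, 1, 0, 0, 1]
```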
The schema represented in
The mapper block 32 maps the bits carried by the a-stream to a finite set of complex numbers, also named constellation. Block-32 takes subsets of values carried by a and associates to them a value of a given constellation. Let z be the output of block 32 and zk its k-th element. zk is an element of a specific constellation set that can differ from standard to standard and from transmission to transmission. Indicating with m the rule used in block 32, the relation between a and z can be written as follows:
zk=m(ai, ai+1, . . . , ai+M−1), where ai, ai+1, . . . , ai+M−1 ∈ B and zk ∈ C.  (2)
ai, ai+1, . . . , ai+M−1 is the subset of values carried by the a-stream and zk is the value onto which they are mapped. In most digital communication systems, zk is a complex number. Possible values of zk depend on the considered digital communication system.
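A minimal sketch of the mapping rule m of Eq. (2) for a QPSK constellation (M=2 bits per point). The Gray-style mapping table below is illustrative only; actual constellations and bit-to-point assignments are defined by each standard.

```python
# Toy QPSK mapping table: each 2-bit tuple -> one constellation point.
QPSK = {
    (0, 0): complex(+1, +1),
    (0, 1): complex(+1, -1),
    (1, 0): complex(-1, +1),
    (1, 1): complex(-1, -1),
}

def map_bits(a, M=2, table=QPSK):
    """Group the bit stream a into M-bit tuples and map each to a point."""
    return [table[tuple(a[i:i + M])] for i in range(0, len(a), M)]

z = map_bits([0, 0, 1, 1, 0, 1])
print(z)   # [(1+1j), (-1-1j), (1-1j)]
```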
where j is the imaginary unit. The map of
The z stream is then processed by the interleaving block 33, which performs, similarly to Block-31, a permutation on values carried by the z stream. The process performed by Block-33 can be written in mathematical form as follows:
wj=zi, j=τ(i),  (3)
where zi is the i-th element of the input, wj is the j-th element of the output and τ is a given permutation specific to the considered system.
The transmitted signal is modified by the environment or channel 4 visible in
In the receiver 9 the received signal r emerging from the channel 4 is presented to demodulator block 5.
Preferably the receiver performs also noise estimation in block-53. The output of Block-53 estimates the power of the noise that affects the receiver signal and will be denoted by σ2.
The meaning of the three signals
E[n2]=σ2,  (4)
where n is a Gaussian variable having power equal to σ2.
The three signals
In practical implementations
τ(i)=
where
The triple (
P(ai=ā|
In the following, for the sake of simplicity, the above reported probability will also be denoted by Pi(ā).
In the usual case of binary transmitted values ={0,1}, block-55 must compute two probability values for each transmitted value ai: Pi(0) and Pi(1). Probabilities Pi(0) and Pi(1) are commonly indicated as 'soft metrics' and in the case of binary values they can be represented by a unique value named Log Likelihood Ratio:
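The soft metrics and the LLR can be sketched for a simple BPSK-like model, r = x + n with x ∈ {+1, −1} (bit 0 mapped to +1) and noise power σ2. The bit-to-amplitude mapping here is an assumption for illustration; for this model the LLR reduces to 2r/σ2.

```python
import math

def soft_metrics(r, sigma2):
    """Return normalized (P(0), P(1)) for a received sample r."""
    p0 = math.exp(-(r - 1.0) ** 2 / (2.0 * sigma2))  # likelihood of bit 0 (+1)
    p1 = math.exp(-(r + 1.0) ** 2 / (2.0 * sigma2))  # likelihood of bit 1 (-1)
    s = p0 + p1
    return p0 / s, p1 / s

def llr(r, sigma2):
    """Log Likelihood Ratio log(P(0)/P(1)); equals 2r/sigma^2 in this model."""
    return 2.0 * r / sigma2

p0, p1 = soft_metrics(0.8, sigma2=0.5)
print(round(llr(0.8, 0.5), 3))   # 3.2
```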
Before starting the decoding in block-6, the receiver 9 must re-organize the sequence {Pi(ā)}i. Block-56 re-arranges the sequence in the following way:
j(ā)=Pi(ā) j=π−1(i), (8)
where π−1(i) is the inverse of the π-permutation used at the transmitter side by Block-31.
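The re-ordering of Eq. (8) can be sketched as follows. With the rule a[π(i)] = c[i] of Eq. (1), de-interleaving reads element π(i) back into position i; the permutation and the metric labels below are toy examples.

```python
def invert(pi):
    """Return inv = pi^-1 so that inv[pi[i]] == i for every i."""
    inv = [0] * len(pi)
    for i, j in enumerate(pi):
        inv[j] = i
    return inv

pi = [2, 0, 5, 1, 4, 3]
# Soft metrics as they arrive, in interleaved order:
# position pi[i] holds the metric of original index i.
arrived = ['P1', 'P3', 'P0', 'P5', 'P4', 'P2']
deinterleaved = [arrived[pi[i]] for i in range(len(pi))]
print(deinterleaved)   # ['P0', 'P1', 'P2', 'P3', 'P4', 'P5']
print(invert(pi))      # [1, 3, 0, 5, 4, 2]
```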
To perform the permutations π−1(i) and τ−1(i), Blocks 54 and 56 need memory. In all the systems the permutations π and τ are applied on a finite sequence:
π(i) and π−1(i) are defined for i=1, . . . , Nπ (9)
τ(i) and τ−1(i) are defined for i=1, . . . , Nτ (10)
The memory used by Block-54 and Block-56 depends on Nπ and Nτ. Assuming an efficient implementation of the de-interleavers performed by Block-54 and Block-56, the memory used by Block-54 and by Block-56 is composed by Nπ and Nτ words, respectively.
For Block-54 one word is represented by the triple (
M54=(Br+Bh+Bσ)×Nπ bits.  (11)
For Block-56 one word is represented by a K-dimensional vector or K-uple [Pi(ā1), Pi(ā2), . . . , Pi(āK)], where K is the cardinality of and ā1, ā2, . . . , āK are all the possible elements of . Using BP bits for the representation of the generic Pi(ā) value, it follows that the memory used by Block-56 is equal to:
M56=BP×K×Nτ bits.  (12)
In the case of a binary -alphabet and using the LLR representation it follows that the deinterleaving operation performed by Block-56 is based on words of one single value. Assuming BLLR bits for the representation of the LLR, it follows that the memory used by Block-56 is equal to:
M56=BLLR×Nτ bits.  (13)
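Equations (11) and (13) can be evaluated for a concrete (purely illustrative) set of parameters; the bit-widths and interleaver lengths below are assumptions, not values from any standard.

```python
# Worked example of Eqs. (11) and (13) with assumed parameters.
B_r, B_h, B_sigma = 10, 10, 10   # bits per received value, channel estimate, noise estimate
B_llr = 6                        # bits per LLR
N_pi = N_tau = 32768             # interleaver lengths

M54 = (B_r + B_h + B_sigma) * N_pi   # Eq. (11): de-interleaver memory of Block-54
M56 = B_llr * N_tau                  # Eq. (13): binary alphabet, LLR representation
print(M54, M56)                      # 983040 196608
```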
Another field in which the invention is intended to be used is illustrated in schematic form in
Block-54 and Block-56 of
In the inventive signal processing method an input signal 305 is fed to a representation conversion block-301 that changes the representation of the values carried by signal 305 into another format. The format change can be a permutation, an interleaving, a mapping, or a general conversion operation, represented by a suitable operator, and may also, in some cases, cause information loss.
The first phase, see Block-301, is the change of the representation used for the incoming signal. Signal-305 is represented using a given number of bits. In most systems each element of Signal-305 is represented using a constant number of bits. Indicating with Sn(305) the generic n-th element of Signal-305 and by Bn(305) the number of bits used to represent it, it follows that:
Bi(305)=Bj(305)=B(305) ∀i,j.  (14)
Block-301 changes the representation used for Signal-305 and generates Signal-306. Possibly, the number of bits used for the representation of Signal-306 is not constant through the stream. Denoting by Sn(306) the generic n-th element of Signal-306 and by Bn(306) the number of bits used to represent it, it can happen that:
Bn(306)≠Bm(306).  (15)
Relaxing the constraint on the constant number of bits used for the signal entering Block-301 makes it possible to optimize the total number of bits used for the representation of Signal-305. The optimization of the used bits depends on the nature of Signal-305. Section "LLR Quantization" reports a possible bit-width optimization in the case of Signal-305 carrying LLR values.
The representation conversion is followed by a compression step carried out by Block-302 that generates Signal-307.
Signal-306 is then compressed by Block-302. The compression is designed taking into account the statistics of the elements of Signal-306. It can be applied on each value Sn(306) or on words composed by M elements of Signal-306. Let wi(306) be the generic word composed by M elements of Signal-306, which can be expressed in mathematical form as follows:
where wi(306) is a vector composed by M elements and j1(i), j2(i), . . . , jM(i) are the indices of the S(306)-values that compose the word wi(306). A simple solution to generate the word wi(306) could be to take M consecutive elements of Signal-306. In that case the word can be written as follows:
Moreover the compression code applied on the n-th value, Sn(306), is in general different from the compression code applied on the m-th value, Sm(306).
Selector block-3021 is the first stage of the compressor Block-302. It assigns the values carried by Signal-306 to the C different compression codes available, each represented by one of the blocks 3022 to 3024. The outputs
In case of pre-computed compression codes, the assignment performed by Block-3021 is done in such a way that the signal, at the input of the n-th compression code, fits as much as possible the statistical description for which the n-th compression code has been designed. In case of adaptive compression codes the assignment is done in such a way that the signal at the input of each compression code will be as much as possible non-uniformly distributed. The goal of Block-3021 is to guarantee a signal, at the input of each compression, having a statistical description suitable for an efficient compression code design.
In the following, the input of the i-th compression code will be denoted by vi, and its n-th element by vi(n). Signal vi can be composed by a sequence of S(306)-values or by a sequence of w(306)-words. The output of the compression codes is then rearranged by Block-3025 in the inverse of the order used by Block-3021 to assign the values of Signal-306 to the different compression codes.
Different kinds of compression codes can be used for the compression, including (but not only) entropy coding algorithms and dictionary-based algorithms. If the distribution of the symbol values is known beforehand, arithmetic coding (or Huffman coding) is very well suited.
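As one example of the entropy coding mentioned above, a Huffman code can be built from the symbol statistics of the quantized values. The symbol frequencies and the resulting code below are illustrative only, not part of any standard.

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Build a prefix code {symbol: bitstring} from symbol frequencies."""
    heap = [(f, i, {s: ''}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)                      # tie-breaker so dicts are never compared
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)  # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: '0' + b for s, b in c1.items()}
        merged.update({s: '1' + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

indices = [0, 0, 0, 1, 0, 2, 1, 0, 3, 0]   # quantized LLR indices (toy data)
code = huffman_code(Counter(indices))
bits = ''.join(code[s] for s in indices)
print(len(bits), "compressed bits vs", 2 * len(indices), "fixed-width bits")
```

Frequent indices get short codewords, so the compressed stream (16 bits here) is shorter than the fixed 2-bit-per-index representation (20 bits).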
Some compression codes, in particular entropy codes, generate code words of variable length, and it is difficult to guarantee that code words generated in correspondence to unusual combinations of input do not exceed a given maximum length. Preferably, the present invention therefore proposes a process to limit the length of the words generated by the compression code.
Let Vi be the alphabet of the words at the input of the i-th compression code. Let Ci be the rule used by the i-th code to generate the output words:
(i)(n)=Ci(v(i)(n)) v(i)(n) ∈ Vi. (18)
where v(i)(n) is the n-th word at the input of the i-th compression code.
Let i={(i)(n)|
The i-th compression code is designed to compress as much as possible the input v(i). Nevertheless it could happen that some words of the set generate code words exceeding a given maximum length Li.
Let Vi(L) be the subset of Vi that does not generate words exceeding the maximum length:
Vi(L)={v|v ∈ Vi and length[Ci(v)]≤Li}.  (21)
The i-th code analyzes the output generated by v(i)(n) using the rule Ci; if the output exceeds the maximum length, the code modifies the input, from v(i)(n) to v̂(i)(n), in such a way that the output generated by v̂(i)(n) has the wanted length. That means that v̂(i)(n) must be in Vi(L).
The generation of v̂(i)(n) must take into account system performance and length constraints. The use of v̂(i)(n) in place of v(i)(n) must generate a performance loss as small as possible. The technique used to map a given v(i)(n) into v̂(i)(n) depends on the system to which the present invention is applied. The impact of using v̂(i)(n) in place of v(i)(n) can be represented as a cost function that must be minimized. It follows that v̂ is selected using the following rule:
where f is a generic cost function and v̂(i)(n) is the element of Vi(L) that minimizes the cost function given v(i)(n).
A simple example of cost function is the distance function.
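The length-limiting substitution described above can be sketched with the distance cost function: if the codeword for a value v exceeds the maximum length Li, v is replaced by the admissible value in Vi(L) closest to it. The codeword-length table below is an assumption for illustration.

```python
# Illustrative table: length[C_i(v)] in bits for each input value v.
code_len = {0: 1, 1: 2, 2: 4, 3: 6}

def limit_length(v, L):
    """Return v if its codeword fits in L bits, else the nearest value that does."""
    if code_len[v] <= L:
        return v
    admissible = [u for u in code_len if code_len[u] <= L]   # the set V_i(L)
    return min(admissible, key=lambda u: abs(u - v))         # distance cost f

print(limit_length(3, 4))   # -> 2 (nearest value whose codeword fits in 4 bits)
print(limit_length(1, 4))   # -> 1 (already short enough)
```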
Another constraint can be imposed on the total lengths of the words generated by the C compression codes. At a given instant n the sum of the length of the words generated by the C compression codes cannot exceed a given value. This constraint can be written in a mathematical form as follows:
It could happen that at a given instant n the above reported constraint is not verified. In that case the invention changes the values v(1)(n), . . . , v(C)(n), that have generated the too long sequence
A compression method subject to the constraint expressed by equation (23) is particularly useful in de-interleaving a stream of soft metrics in a receiver, for example. In this case, as will be seen further on, the constraint (23) can be enforced to ensure that the total length of each group of compressed soft metrics relative to a same constellation symbol is preserved. Thanks to this feature, the compressed soft metrics can be de-interleaved as easily as the uncompressed ones, albeit with reduced memory usage.
The goal of both Block-301 and Block-302 is to reduce the number of bits used to represent the values carried by Signal-305. A set of M values carried by Signal-305 is represented using:
[B(305)×M] bits. (24)
The number of bits used by Signal-307 to carry the same information can hardly be written in closed form.
Nevertheless, considering the worst case, in which all the words generated by the i-th code have maximum length, it follows that the number of bits used by Signal-307 to carry the same information is upper bounded by the following equation:
where Mi is the number of words generated by the i-th compression code for the process associated to the M values carried by Signal-305 and B(L
The compressed Signal-307 then goes through a channel 303 that might be a physical propagation channel, but also a generic operation on the signal, for example a non-distortion process, from which it emerges as Signal-308, and is further processed by Block-304. Block-304 inverts the compression step previously applied by Block-302 in order to generate Signal-309 in the same format as Signal-306.
Signal-308 is decompressed by Block-304, which performs the inverse of the process previously performed by Block-302.
Signal-308 is split into C different signals: ē1, ē2, . . . , ēC. The splitting rules make it possible to rebuild, at the input of the i-th de-compression code (Block-3042, Block-3043, . . . Block-3044), the code words previously coded by the i-th compression code.
The generic signal ē(i) is processed by the i-th de-compression code, which performs the inverse of the mapping performed by the i-th compression code. Denoting by ē(i)(n) and by e(i)(n) the input and the output of the i-th de-compression code, the process performed by Block-3042, Block-3043, . . . Block-3044 can be written in mathematical form as follows:
e(i)(n)=Ci−1(ē(i)(n)),  (27)
where Ci−1 is the inverse of the Ci-function reported in Eq. (18).
The ei signals are reordered by Block-3045 which performs the inverse of the process previously executed by Block-3021. The output of Block-3045 is the Signal-309.
The last step is to represent Signal-309 in a format coherent with the representation used for Signal-305. This task is performed by Block-305 which performs the inverse of the process previously performed by Block-301.
This section focuses on Block-301 in the case of a Signal-305 carrying LLR values. In such a case Signal-305 is a sequence of LLR values. The n-th value of Signal-305 is an LLR value associated to a transmitted/received bit.
All the LLRs are represented using the same number of bits. Block-301 changes the representation of the n-th LLR value.
The change of the representation is based on the position in the constellation of the bit carried by the LLR value.
Signal-305 can be divided into groups of M elements, Sjk(305), with k=1, 2, . . . , M, where M is the number of bits associated with each received constellation point. The set:
{Sj
is the set of the LLRs of the bits associated with the same received/transmitted constellation point.
Block-301 quantizes Signal-305 by a quantizer, where to each input value Sjk(305), one of the Gk=2l
In a possible embodiment, each quantized value is associated to a quantization interval [vk,v−1, vk,v], where vk,0=−∞ and
vk,G
Then, the value associated to Sjk(305) is
Sjk(306)={v*: Sjk(305) ∈ [vk,v*−1, vk,v*]}=Q(Sjk(305)),
where Q is the quantization function, denoting the process performed by Block-301.
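The quantization function Q can be sketched as follows for a uniform 2-bit quantizer (lk=2, Gk=4 levels); the interval edges are assumed values chosen for illustration, not derived from any optimization.

```python
import bisect

# Inner interval edges v_{k,1}, v_{k,2}, v_{k,3}; v_{k,0} = -inf, v_{k,4} = +inf.
edges = [-2.0, 0.0, 2.0]

def Q(llr):
    """Return the quantization index v* in {0, 1, 2, 3} of the interval containing llr."""
    return bisect.bisect_right(edges, llr)

print([Q(x) for x in (-5.0, -1.0, 0.5, 7.0)])   # [0, 1, 2, 3]
```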
Various approaches can be followed for the choice of the quantization interval edges vk,v. One possible implementation provides that they are chosen according to the statistics of Signal-305 in order to maximize the generalized mutual information between the transmitted bits and Signal-306, which provides the maximum achievable rate for a given quantization choice. Considering a decoder having as input the quantized LLRs, and assuming equiprobable inputs, the generalized mutual information can be written as
where P(Sj
for any c>0, and in this case the generalized mutual information coincides with the mutual information between the quantized LLR and the corresponding transmitted bit, i.e.
where P(Sj
In a possible embodiment, the quantization process could be designed in such a way that it maximizes the sum of the mutual information of the LLRs associated to the same transmitted/received constellation point, under a constraint on the total number of bits used for the set reported in Eq. (28):
Assuming a uniform quantization, the solution of the above reported problem requires the computation of M quantization steps, Δ1, . . . , ΔM, and M bit-widths, l1, . . . , lM. Different techniques can be used to solve Eq. (34). The quantizing operation generates a constant number of bits for each group of soft metrics relative to a same constellation symbol, or a different number of bits for each soft metric relative to a same constellation symbol.
Assuming the use of a 16-QAM constellation, each constellation point carries 4 bits (M=4). Let us assume that Btot, the number reported in Eq. (34), is equal to 16 (Btot=16). In the case of constant bit-width it follows that: Bj1=Bj2=Bj3=Bj4=4. Note that Bj1+Bj2+Bj3+Bj4=16. Otherwise, it could happen that, maximizing the mutual information, see Eq. (34), the bit-width is not constant, for example: Bj1=5, Bj2=5, Bj3=3 and Bj4=3. Note that also in this second case the constraint is satisfied: Bj1+Bj2+Bj3+Bj4=16.
As an embodiment of compression, Block-302 provides the use of Huffman coding on each element Sj
where P(vk) is the probability that Sj
length[
then v̂i=
The choice of the LLRs to be substituted and their replacement will have an impact on the system performance. Note that while quantization and entropy coding are performed for each bit separately, the compression is done on the ensemble of the LLRs of all the bits. This problem can be seen as a multidimensional multiple-choice knapsack problem. Unfortunately, this problem is NP-hard, thus a possible embodiment provides the use of a greedy approach for the compression.
We consider the following iterative procedure:
1. Let v(i)(n) be the quantized index values obtained by Block-301.
2. Initialize v̂(i)(n)=v(i)(n), i=1, . . . , M.
3. If (36) is satisfied, terminate the process.
4. Otherwise, find
A possible expression of the cost function ƒ is the MI loss, i.e.
In this case, among all quantized values that have a given length, the one providing the highest MI is selected.
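The greedy procedure of steps 1 to 4 can be sketched as follows. The codeword-length table, the fallback indices and the cost (the magnitude reduction of the index, standing in for the MI loss of the text) are all illustrative assumptions.

```python
# Illustrative codeword length per quantized index, and the shorter-codeword
# fallback each index may be replaced with.
code_len = {0: 1, 1: 2, 2: 3, 3: 5}
shorter = {3: 2, 2: 1, 1: 0, 0: 0}

def greedy_fit(v, B_max):
    """Shorten the M indices of one constellation point until they fit B_max bits."""
    v_hat = list(v)                                   # step 2: v_hat = v
    while sum(code_len[x] for x in v_hat) > B_max:    # step 3: check the budget
        # step 4: replace the index whose substitution costs the least
        candidates = [i for i, x in enumerate(v_hat) if shorter[x] != x]
        i = min(candidates, key=lambda i: v_hat[i] - shorter[v_hat[i]])
        v_hat[i] = shorter[v_hat[i]]
    return v_hat

print(greedy_fit([3, 3, 1, 0], B_max=10))   # -> [1, 3, 1, 0]
```

Starting from 5+5+2+1=13 bits, two substitutions on the first index bring the group within the 10-bit budget while leaving the other indices untouched.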
The invention can be applied to the demodulation process.
Optional block 301 represents a quantizer unit, or any other suitable process block that transforms the representation of the soft metrics generated by the demapper, which are then compressed by compressor unit 302. The output of Block-302 is then processed by Block-56. Since the number of bits used to represent Signal-307 is less than the number of bits used for the P signal, the de-interleaver 56 of this variant of the invention uses less memory. The output of the de-interleaver 56 is then decompressed by Block-304 and optionally further processed by Block-305, for example to change or adapt its representation, according to the needs.
The invention can be applied to a diversity receiver akin to that represented in
The invention can also be applied to the system reported in
This application is a divisional application of and claims the priority benefits of U.S. non-provisional application Ser. No. 14/422,149, filed on Feb. 17, 2015, now allowed. The prior U.S. non-provisional application Ser. No. 14/422,149 is a 371 application of the International PCT application serial no. PCT/EP2012/066286, filed on Aug. 21, 2012. The entirety of each of the above-mentioned patent applications is hereby incorporated by reference herein and made a part of this specification.
Parent: application Ser. No. 14422149, Feb. 2015, US. Child: application Ser. No. 15288739, US.