Soft decision decoding of a scheduled convolutional code

Information

  • Patent Grant
  • 7082173
  • Patent Number
    7,082,173
  • Date Filed
    Wednesday, December 1, 1999
  • Date Issued
    Tuesday, July 25, 2006
Abstract
A method for decoding a predetermined code word is specified in which the code word comprises a number of positions having different values. In this method, encoding is performed, in particular, by way of a terminated convolutional code. Each position of the code word is correlated with a safety measure (soft output) for a most probable Boolean value by performing the correlation on the basis of a trellis representation. The decoding of the code word is determined by the correlation of the individual positions of the code word.
Description
BACKGROUND OF THE INVENTION

1. Field of the Invention


The invention relates to a method and an arrangement for decoding a predetermined code word.


2. Description of the Related Art


In the decoding of a code word which has a predetermined number of positions, the information-carrying positions are restored as completely as possible.


The decoding takes place at the receiving end, which has received the code word via a disturbed channel. Signals are transmitted via the channel, in particular as Boolean values, preferably subdivided into +1 and −1, where they are subject to a disturbance; a demodulator then converts them into analog values which can deviate to a greater or lesser extent from the predetermined Boolean values (±1).


The general assumption is K positions of binary information (“information bits”) without redundancy, uε{±1}K, which is mapped into a code word cε{±1}N by a channel coder by means of a systematic block code or an unsystematic block code. In this arrangement, the code word contains N−K bits (also “check bits”) which can be used as redundant information in addition to the K information bits for restoring the information after transmission via the disturbed channel.


The systematic block code adds to the K information bits N−K check bits which are calculated from the information bits, the information bits themselves remaining unchanged, whereas in the unsystematic block code the information bits themselves are changed, for example the information is spread by an operation carried from one position to the next. Here, too, check bits are provided for reconstructing the information hidden in the operations. In the text which follows, in particular, a technically significant variant of unsystematic block codes, the so-called terminated convolutional codes, is considered.


“Hard” decoding of the received code word (with the positions occupied by analog values), i.e., correlating each position with the nearest Boolean value in each case, has a decisive disadvantage: valuable information is lost in this process.


SUMMARY OF THE INVENTION

It is the object of the invention to determine a decoding of a predetermined code word, with decoding supplying analog values (so-called “soft outputs”) which, in particular, can be taken into consideration in the subsequent decoding method and thus provide for high error correction in the transmission of code words via a disturbed channel.


This object is achieved in accordance with the various embodiments of the method and apparatus discussed below.


To achieve this object, a method for decoding a predetermined code word is specified in which the code word comprises a number of positions having different values. In this arrangement, encoding has taken place, in particular, by way of a terminated convolutional code. Each position of the code word is correlated with a safety measure (soft output) for a most probable Boolean value by performing the correlation on the basis of a trellis representation. The decoding of the code word is determined by the correlation of the individual positions of the code word.


A decisive advantage here is that due to the correlation based on the trellis representation, a distinct reduction in complexity compared with a general representation takes place with the result that decoding of the code word (generation of the soft outputs at the positions of the code word) also becomes possible in real time.


A further development consists in that the decoding rule for each position of the code word is determined by











L(Ui|y) = ln( Σ_{cεΓi(+1)} exp( −(y−c)T(y−c)/(2σ2) ) / Σ_{cεΓi(−1)} exp( −(y−c)T(y−c)/(2σ2) ) ),  (1)







where


L(Ui|y) is a safety measure (soft output) for the i-th position of the code word to be determined;


y is a demodulation result to be decoded;


c is a code word;


Γi(±1) are all code words for ui=±1; and


σ2 is a variance (channel disturbance).


Another further development provides that the equation (1) is solved by utilizing a characteristic of a convolutional code used in the coding (and correspondingly in the decoding) which determines states in accordance with a shift register operation used during the convolution, from which states, in turn, the trellis representation is obtained.


In an additional further development, the trellis representation is run through in a predetermined direction in order to recursively calculate terms Ãm and Am, respectively. Into this calculation rule, node weights μm(s), which are determined by the demodulation result y, enter at the nodes (s, m) of the trellis representation. The terms Ãm and Am are described by













Ãm(E) = Σ_{sεE} Am(s), for mεN,  (2)







with












Am(s) = μm(s) Σ_{tεW(s,Vm)} Am−1(t), for mεN,  (3)







and a starting value











A0(s) = { 1, for s=s0; 0, else },  (4)







A more detailed discussion of the forms of description listed here can also be found in the description of the exemplary embodiment.


One embodiment provides that mappings Bm are determined by way of the trellis representation, the trellis representation being processed in the direction opposite to the predetermined direction. The term Bm is determined by












Bm(s) = μQ−m+1(s) Σ_{tεT(s,VQ−m+2)} Bm−1(t), for 1≦m≦Q,  (5)







where











B0(s) = { 1, for s=s0; 0, else },  (6)







is determined for terminating the recursion.


Furthermore, terms Aαi can be determined by again running through the trellis representation taking into consideration the terms Am and Bm already determined. In particular, the terms Aαi are determined in accordance with











Aαi(y) = Σ_{sεS} Aj−1(s) Σ_{tεT(s,Vji(α))} BQ−j+1(t).  (7)







In a further embodiment, the K positions of the decoded code word are determined in accordance with











L(Ui|y) = ln( A+1i(y) / A−1i(y) ), i=1, . . . , K.  (8)







In particular, an AWGN (Additive White Gaussian Noise) channel model is used for the derivation. The method presented can also be used for other channel models, especially for channel models used in mobile radio.


Another embodiment relates to the use of the method in a mobile radio network, especially the GSM network.


It is also a further development that, after the soft outputs have been determined, there is a “hard” correlation of the analog values with the Boolean values ±1. In this arrangement, the nearest Boolean value is in each case determined for correlating the analog value.


The soft output values determined can be used as input values for further decoding when concatenated codes are used.


To achieve the object, an arrangement for decoding a predetermined code word is also specified in which a processor unit is provided which is set up in such a manner that

    • 1. the code word comprises a number of positions having different values;
    • 2. each position of the code word can be correlated with a soft output value by performing the correlation on the basis of a trellis representation; and
    • 3. the decoding of the code word can be determined by the correlation of the individual positions of the code word.


This arrangement is particularly suitable for performing the method according to the invention or one of its further developments explained above.





BRIEF DESCRIPTION OF THE DRAWINGS

In the text which follows, exemplary embodiments of the invention will be shown and explained with reference to the drawings, in which:



FIG. 1 is a block diagram showing a representation of digital information transmission;



FIG. 2 is an algorithm in pseudocode notation for progressing in the trellis diagram observing all states for the calculation of node weights;



FIG. 3 is an algorithm in pseudocode notation for determining soft outputs (general case);



FIG. 4 is an algorithm in pseudocode notation for determining soft outputs (special case: binary state transition); and



FIG. 5 is a block diagram of a processor unit.





DETAILED DESCRIPTION OF THE INVENTION

The text which follows describes in greater detail, first the convolutional code, then the reduction in complexity in the calculation of soft outputs and, finally, an algorithmic translation of the reduction in complexity.


Terminated Convolutional Code


In communication technology, terminated convolutional codes are mostly used in concatenation with other systematic or unsystematic block codes. In particular, the decoding result of a convolutional decoder is used as the input for another decoder.


To ensure the lowest possible error rate, it is necessary to supply “soft” decoding decisions instead of “hard” ones in the convolutional decoding for the further decoder, i.e., to generate a tuple of “soft” values (soft outputs) from R instead of a tuple of “hard” Boolean (±1) values. The absolute value of the respective “soft” decision then provides a safety measure for the correctness of the decision.


In principle, these soft outputs can be calculated in accordance with equation (1), depending on the channel model. However, the numeric complexity for calculating a soft output is O(2^K), where K specifies the number of information bits. If K is realistically large, then these formulae cannot be evaluated, in particular since such a code word must be calculated again every few milliseconds (a real-time requirement).


One consequence of this is that soft outputs are dispensed with (with all consequences for the word and bit error rates) or, respectively, less elaborate approximations are performed for determining the soft outputs.


In the text which follows, a possibility for terminated convolutional codes is specified with the aid of which this complexity can be reduced to O(K) in a trellis representation for calculating all soft outputs, i.e., this solution provides the possibility for a precise evaluation of equation (1).


In the text which follows, the bits of the code are represented in {±1} representation. In comparison with a {0, 1} representation, which is often used in information technology, −1 corresponds to 1 and 1 corresponds to 0.


On a body {±1}, addition ⊕ and multiplication ⊙ are defined as follows:


















−1 ⊕ −1 = 1
−1 ⊙ −1 = −1



−1 ⊕ 1 = −1
−1 ⊙ 1 = 1



  1 ⊕ −1 = −1
  1 ⊙ −1 = 1



  1 ⊕ 1 = 1
  1 ⊙ 1 = 1










The coding is done with the aid of a “shift register” into which bit blocks (input blocks) of the information bits are written with each clock pulse. The combination of the bits of the shift register then generates one bit block of the code word. The shift register is pre-assigned +1 bits in each case. To terminate the coding (termination), blocks of tail zeros (+1) are shifted in afterwards. As has been mentioned initially, check bits, by way of which bit errors can be corrected, are correlated with the information bits by way of the coding.


The following are defined for the further embodiments:


bεN number of input bits per block;


V:={±1}b set of state transition signs;


aεN number of input blocks;


K:=a·b number of information bits without tail zeros;


kεN, k≧2 block length of the shift register, penetration depth;


L:=k·b bit length of the shift register;


S:={±1}L set of shift register signs;


nεN number of output bits per block;


Q:=a+k−1 number of state transitions, input blocks+zeros;


N:=n·Q number of code bits; and






R:=b/n code rate.






It should be noted here that the code rate is not K/N since the information bits have been counted without the zeros (+1) of the convolutional termination.


Furthermore, s0εS and ν0εV are assumed to be the respective zero elements, i.e.,

s0=(+1, . . . , +1)τ, ν0=(+1, . . . , +1)τ.  (9)


The state transition function of the shift register is assumed to be

T:S×V→S,  (10)
(s,ν) ↦ (sb+1, . . . , sL, ν1, . . . , νb)τ.  (11)


The terminated convolutional code is defined by the characterizing subsets

M1, . . . , Mn ⊂ {1, . . . , L},  (12)


(combination of register bits, alternatively in polynomial representation).


The current register content is coded via

C:S→{±1}n,  (13)











s ↦ C(s),

where

Cj(s) := ⊕_{iεMj} si, for 1≦j≦n,  (14)







where si is the i-th component of s.


Finally, the coding of an information word is defined by way of

φ:{±1}K→{±1}N,  (15)










u ↦ ( C(s1), . . . , C(sQ) ),  (16)








where s0εS is the zero state (zero element),










u = ( v1, . . . , va ), viεV, 1≦i≦a,  (17)








νi:=ν0, a+1≦i≦Q,  (18)


and furthermore

si:=T(si−1, νi), 1≦i≦Q.  (19)


According to the definition of T, the following is obtained

sQ+1:=T(sQ,v0)=s0.  (20)


Accordingly, the set of all code words is

φ({±1}K):={φ(u)ε{±1}N; uε{±1}K}.  (21)


Often, polynomials

pjε{0,1}[D] where deg(pj)≦L−1


are used instead of the sets Mj for code definition, i.e.,











pj(D) = Σ_{i=0}^{L−1} γi,j·D^i,  (22)







with

γi,j ε {0,1} i=0, . . . , L−1, j=1, . . . , n.


The following transformations then apply for j=1, . . . , n:

Mj={iε{1, . . . , L}; γL−i,j=1}  (23)











pj(D) = Σ_{iεMj} D^(L−i).  (24)


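The transformations (23) and (24) can be illustrated with a short Python sketch; the function names are illustrative assumptions, not part of the specification, and the example reuses the SACCH values given further below (L=5, polynomial 1+D3+D4, M1={1,2,5}).

    # Sketch of the transformations (23) and (24); names are illustrative only.
    def poly_to_set(gamma, L):
        """M_j = {i in {1,...,L}; gamma_{L-i,j} = 1}, transformation (23).
        gamma is the coefficient list [gamma_0, ..., gamma_{L-1}] of p_j(D)."""
        return {i for i in range(1, L + 1) if gamma[L - i] == 1}

    def set_to_poly(M_j, L):
        """p_j(D) = sum over i in M_j of D^(L-i), transformation (24)."""
        gamma = [0] * L
        for i in M_j:
            gamma[L - i] = 1
        return gamma

    # The SACCH polynomial 1 + D^3 + D^4 (L = 5) has coefficients (1, 0, 0, 1, 1)
    # and corresponds to the characterizing set M1 = {1, 2, 5} used below.
    assert poly_to_set([1, 0, 0, 1, 1], 5) == {1, 2, 5}
    assert set_to_poly({1, 2, 5}, 5) == [1, 0, 0, 1, 1]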






Block Code Representation


Since a terminated convolutional code is a block code, the code bits cj, 1≦j≦N can also be represented from the information bits ui, 1≦i≦K, with index sets Jj, as follows:












cj := ⊕_{iεJj} ui, for 1≦j≦N,  (25)







where

J1, . . . , JN ⊂ {1, . . . , K}.  (26)


The index sets Jj can be calculated directly from the above index sets Mm of the code definition.


Consider

j=n(q−1)+m,q=1, . . . , Q,m=1, . . . , n.  (27)











cj = Cm(sq) = ⊕_{iεMm} (sq)i = ⊕_{iεMm} u_{i+b(q−k)},  (28)







where ui:=+1 for i ∉{1, . . . , K}.


Furthermore,











cj = ⊕_{i−b(q−k)εMm} ui = ⊕_{iεMm+b(q−k)} ui,  (29)







and it thus follows for j=1, . . . , N that













Jj = {1, . . . , K} ∩ ( Mm + b(q−k) )
   = { iε{1, . . . , K}; i−b(q−k)εMm }.  (30)








Example: SACCH Convolutional Code


In the above terminology, the convolutional code described in section 4.1.3 of the GSM Technical Specification GSM 05.03, Version 5.2.0 (channel coding) is:


b=1 number of input bits per block;


V={±1} set of state transition signs;


a=224 number of input blocks;


K=224 number of information bits without tail zeros;


k=5 block length of the shift register, depth of penetration;


L=5 bit length of the shift register;


S={±1}5 set of shift register signs;


n=2 number of output bits per block;


Q=228 number of state transitions, input blocks+zeros;


N=456 number of code bits;







R=1/2 code rate;




M1={1,2,5} characterizing set; polynomial: 1+D3+D4; and


M2={1,2,4,5} characterizing set; polynomial: 1+D+D3+D4.

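For illustration, a minimal Python sketch of this terminated convolutional encoder in the {±1} representation is given below. It follows the shift register description above (state transition (11), coding map (14) with the characterizing sets M1 and M2, pre-assignment with +1, termination with k−1 tail zeros); the helper names are assumptions made for the sketch and are not part of the specification.

    # Minimal sketch of the terminated SACCH encoder in the {+1,-1} representation
    # (b = 1); helper names are illustrative only.
    K, k, L, n = 224, 5, 5, 2            # information bits, depth, register length, output bits per block
    M = [{1, 2, 5}, {1, 2, 4, 5}]        # characterizing sets M1 and M2 from above

    def T(s, v):
        """State transition (11) for b = 1: drop the oldest bit, shift in v."""
        return s[1:] + (v,)

    def C(s):
        """Coding map (14); the sum defined on {+1,-1} above equals the ordinary product."""
        out = []
        for M_j in M:
            bit = 1
            for i in M_j:
                bit *= s[i - 1]
            out.append(bit)
        return out

    def encode(u):
        """Map K information bits from {+1,-1} to the N = n*(K+k-1) code bits as in (16)."""
        s = (+1,) * L                    # register pre-assigned with +1
        c = []
        for v in u + (+1,) * (k - 1):    # k-1 tail zeros (+1) terminate the code
            s = T(s, v)
            c.extend(C(s))
        return c

    assert len(encode((+1,) * K)) == 456  # N = 456 code bits, code rate 1/2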

Soft Outputs in an AWGN Channel Model


In the text which follows, calculation rules for determining the soft outputs are derived, especially for the sake of clarity.


For this purpose, a probability space (Ω,S,P) and a K-dimensional random variable U:Ω→{±1}K are considered which have the properties


The components U1, . . . , UK: Ω→{±1} are stochastically independent.


The following holds for i=1, . . . ,K

P({ωεΩ; Ui(ω)=−1})=P({ωεΩ; Ui(ω)=+1}).  (31)



FIG. 1 shows a representation of digital information transmission. A unit consisting of source 201, source encoder 202 and crypto-encoder 203 determines an information item uε{±1}K which is used as input for one (or possibly more) channel encoder(s) 204. The channel encoder 204 generates a code word cε{±1}N which is fed into a modulator 205 and is transmitted via a disturbed physical channel 206 to a receiver, where a demodulator 207 turns it into a real-valued code word yεRN. A channel decoder 208 converts this code word into a real-valued information item. If necessary, a “hard” correlation with the Boolean values ±1 can also be made in a further decoder so that the received information is present in Boolean notation. The receiver is completed by a unit of crypto-decoder 209, source decoder 210 and sink 211. The two crypto-encoder 203 and crypto-decoder 209 units are optional in this arrangement.


The information to be reconstructed, uε{±1}K, of the crypto-encoder 203 is interpreted as implementation of the random variables U since nothing is known about the choice of u in the receiver.


Thus, the output cε{±1}N of the channel encoder 204 is an implementation of the random variables φ(U).


The output yεRN of the demodulator 207 is interpreted as implementation of the random variables

Y:Ω→RN,  (32)
ω ↦ φ(U(ω))+Z(ω),  (33)


a random variable Z:Ω→RN representing the channel disturbances in the physical channel 206.


In the text which follows, an AWGN channel model is assumed, i.e., Z is a N(0,σ2IN) normally distributed random variable which is stochastically independent of U and, respectively, φ(U). The variance σ2 is calculated from the ratio between noise power density and mean energy in the channel 206 and is here assumed to be known.


The unknown output uε{±1}K of the crypto-encoder is to be reconstructed on the basis of an implementation y of Y. To estimate the unknown quantities u1, . . . , uK, the distribution of the random variables U is investigated given the condition that y has been received.


Since Y is a continuous random variable, the event that exactly y has been received (Y(ω̂)=y) has probability zero, so that considering U under this condition requires some care.


Firstly, the following is defined for iε{1, . . . , K} and αε{±1}

Γi(α):={φ(u); uε{±1}K; ui=α}.  (34)


In a preparatory step, the following quantities are considered for ε>0, paying attention to the injectivity of the coding map φ:













Lε(Ui|y) := ln( P({ωεΩ; Ui(ω)=+1} | {ωεΩ; Y(ω)εMy,ε}) / P({ωεΩ; Ui(ω)=−1} | {ωεΩ; Y(ω)εMy,ε}) )
          = ln( Σ_{cεΓi(+1)} P({ωεΩ; φ(U(ω))=c} | {ωεΩ; Y(ω)εMy,ε}) / Σ_{cεΓi(−1)} P({ωεΩ; φ(U(ω))=c} | {ωεΩ; Y(ω)εMy,ε}) ),  (35)







for i=1, . . . , K, where My,ε:=[y1,y1+ε]× . . . ×[yN,yN+ε].


Using the theorem by Bayes, the following is obtained:













Lε(Ui|y) = ln( Σ_{cεΓi(+1)} P({ωεΩ; Y(ω)εMy,ε} | {ωεΩ; φ(U(ω))=c}) / Σ_{cεΓi(−1)} P({ωεΩ; Y(ω)εMy,ε} | {ωεΩ; φ(U(ω))=c}) )
         = ln( Σ_{cεΓi(+1)} ∫_{My,ε} exp( −(x−c)T(x−c)/(2σ2) ) dx / Σ_{cεΓi(−1)} ∫_{My,ε} exp( −(x−c)T(x−c)/(2σ2) ) dx ).  (36)







Considering then the limiting process of Lε(Ui|y) for ε↓0 by using L'Hospital's rule several times, the soft output L(Ui|y) is obtained for each symbol as in equation (1).


Since

Γi(+1)∪Γi(−1)=φ({±1}K)


holds true, a total of O(2^K) numeric operations are necessary for evaluating equation (1).


The vector L(U.|y)εRK is the result of decoder 208.

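For toy codes with small K, equation (1) can be evaluated directly. The following Python sketch does exactly that by enumerating all information words; it is only an exponential-complexity reference against which the trellis-based calculation derived below can be checked, and it assumes that encode(u) implements the coding map φ described above (the function names are illustrative).

    import itertools, math

    def soft_outputs_bruteforce(y, K, sigma2, encode):
        """Direct evaluation of equation (1); complexity O(2^K), reference only."""
        num = [0.0] * K                   # sums over Gamma_i(+1)
        den = [0.0] * K                   # sums over Gamma_i(-1)
        for u in itertools.product((+1, -1), repeat=K):
            c = encode(u)                 # assumed coding map phi
            w = math.exp(-sum((yj - cj) ** 2 for yj, cj in zip(y, c)) / (2.0 * sigma2))
            for i, ui in enumerate(u):
                if ui == +1:
                    num[i] += w
                else:
                    den[i] += w
        return [math.log(num[i] / den[i]) for i in range(K)]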

Reduction of Complexity in the Determination of the Soft Outputs


Soft-Output Determination for Convolutional Codes


Firstly, the special characteristics of the terminated convolutional coding are used for providing an organized representation of the soft-output formula (1).


For an arbitrary, but preselected output yεRN of the demodulator 207, the following weighting function (a Viterbi metric) of code words is considered:

F:{±1}N→R0+,  (37)
c ↦ Σ_{j=1}^{N} ( yj − cj )2.  (38)







For permissible code words cε{±1}N, i.e. cεφ({±1}K), F(c) can be reduced as follows, using the shift register representation:










F(c) = Σ_{q=1}^{Q} Σ_{j=1}^{n} ( y_{n(q−1)+j} − Cj(s̃qc) )2 =: Σ_{q=1}^{Q} ΔFq(s̃qc),  (39)







where s̃qc stands for the q-th state of the shift register in the (unambiguous) generation of the word c.


Then the following is defined for i=1, . . . , K and αε{±1}:











Aαi(y) := Σ_{cεΓi(α)} exp( −(y−c)T(y−c)/(2σ2) ) = Σ_{cεΓi(α)} Π_{q=1}^{Q} exp( −ΔFq(s̃qc)/(2σ2) ).  (40)







Thus, the following holds true for the soft outputs









L(Ui|y) = ln( A+1i(y) / A−1i(y) ), i=1, . . . , K.  (41)







In the text which follows, the values Aαi(y) are determined with the aid of a trellis diagram/representation.


To reduce the complexity of calculation, the following procedure is adopted in the following sections:

    • Generalization of Aαi by mappings Ãm.
    • Recursive representation of Ãm by mappings Am, the values of which are calculated with a “from left to right” run through a trellis diagram.
    • Reversal of the recursion by mappings Bm, the values of which are calculated with a “from right to left” run through a trellis diagram.
    • Joint calculation of all Aαi by way of a further run through a trellis diagram by using Am and Bm.


The trellis diagram is here a set

𝒯={(s,q); sεS, q=0, . . . , Q+1}  (42)


The elements (s,q) of this set are also called the nodes in the trellis diagram, s representing a state and q being considered as a dynamic value (especially time).


General Recursive Representation


Firstly, some definitions are needed for representing the Aαi in a generalized form which allows later transformation. For this reason, the following is determined

s1u:=T(s0,u1), uεVm=V× . . . ×V, m≧1,  (43)
sju:=T(sj−1u,uj), uεVm, m≧j≧2,  (44)


i.e., sju represents the state of the shift register after j shifts of the register with the input symbols u1, . . . , uj.


Furthermore, sets Vj⊂V, jεN, which contain the permissible state transition symbols in the j-th step, are considered. Furthermore, product sets are defined as

Um:=V1× . . . ×Vm⊂Vm, mεN,  (45)


i.e., Um contains the first m components of the permissible input words.


For qεN, mappings

μq:S→R  (46)


are considered and for mεN and input word sets Um⊂Vm, mappings are defined as follows

Ãm:℘(S)→R,  (47)










E ↦ Σ_{(uεUm) ∧ (smuεE)} Π_{j=1}^{m} μj(sju),  (48)







i.e., summing over all permissible input words, the shift register of which reaches a final state in E, is performed. If there are no such input words, the sum is determined as 0 over an empty index set.


In addition, a mapping is determined as

W:S×℘(V)→℘(S),  (49)
(t,V̂) ↦ { sεS; ∃ v̂εV̂: T(s,v̂)=t },  (50)


i.e., W maps (t,V̂) onto the set of all states which can reach the state t with a transition symbol from V̂.


The following holds true for m≧2, E⊂S















Ãm(E) = Σ_{(uεUm) ∧ (smuεE)} Π_{j=1}^{m} μj(sju)
      = Σ_{sεE} Σ_{(uεUm) ∧ (smu=s)} Π_{j=1}^{m} μj(sju)
      = Σ_{sεE} μm(s) Σ_{(uεUm) ∧ (smu=s)} Π_{j=1}^{m−1} μj(sju)
      = Σ_{sεE} μm(s) Σ_{(uεUm−1) ∧ (sm−1uεW(s,Vm))} Π_{j=1}^{m−1} μj(sju)
      = Σ_{sεE} μm(s) Ãm−1(W(s,Vm)).  (51)







In the transformation in the last step but one, attention must be paid to the fact that there is exactly one transition symbol νεVm with T(sm−1u,v)=s, if sm−1u is in W(s,Vm), i.e., it is not necessary to take account of any multiplicities.


Consider, then, for m≧2 the following mappings

Am:S→R,  (52)
s ↦ μm(s)·Ãm−1(W(s,Vm)).  (53)


Thus, a recursion formula can be derived for m≧3:














Am(s) = μm(s) Ãm−1(W(s,Vm))
      = μm(s) Σ_{tεW(s,Vm)} μm−1(t) Ãm−2(W(t,Vm−1))
      = μm(s) Σ_{tεW(s,Vm)} Am−1(t).  (54)







Furthermore:














A2(s) = μ2(s) Ã1(W(s,V2))
      = μ2(s) Σ_{(uεU1) ∧ (s1uεW(s,V2))} μ1(s1u)
      = μ2(s) Σ_{tεW(s,V2)} μ1(t) δ_{s0εW(t,V1)}
      = μ2(s) Σ_{tεW(s,V2)} μ1(t) Σ_{t̂εW(t,V1)} δ_{t̂=s0},  (55)

with A0(t̂) := δ_{t̂=s0} and A1(t) := μ1(t) Σ_{t̂εW(t,V1)} A0(t̂).







In summary, the following thus holds true for sεS, E⊂S:











A0(s) = { 1, for s=s0; 0, otherwise },  (56)
Am(s) = μm(s) Σ_{tεW(s,Vm)} Am−1(t), for mεN,  (57)
Ãm(E) = Σ_{sεE} Am(s), for mεN.  (58)







The sets W(s,Vm) can be represented constructively. For this purpose, two further mappings are considered. The following is defined

τ:S→V,  (59)
s=(s1, . . . , sL)τ ↦ (sL−b+1, . . . , sL)τ,  (60)


i.e., if the state s is the result of a state transition, then τ(s) was the associated state transition symbol.


Furthermore

T̂:V×S→S,  (61)
(v,s) ↦ (v1, . . . , vb, s1, . . . , sL−b)τ,  (62)


is defined, i.e., T̂ reverses the direction of the shift register operation.


The following then holds

T(T̂(v,s),τ(s))=s, for all sεS, vεV,  (63)


and for all tεS and V̂⊂V, it also holds true that













W(t,V̂) = { sεS; ∃ v̂εV̂: T(s,v̂)=t }
        = {T̂(v,t); vεV}, if τ(t)εV̂, and W(t,V̂) = ∅ else.  (64)







Thus, the recursion formula (57) for Am(s) can be written down constructively as follows:














Am(s) = μm(s) Σ_{tεW(s,Vm)} Am−1(t)
      = μm(s) Σ_{vεV} Am−1(T̂(v,s)), if τ(s)εVm, and Am(s) = 0 else.  (65)







It should be noted that in this section, no restrictions were set for the set V of the state transition symbols and for the sets Vjε℘(V).

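As an illustration of the constructive recursion (65) together with the starting value (56), the following Python sketch computes the mappings Am over an explicitly enumerated state set S={±1}^L. Here mu(m, s) is assumed to return the node weight μm(s) defined later in (79), and V_sets[m] is assumed to hold the set Vm; the sketch is written under those assumptions and is not the patented implementation.

    import itertools

    def forward_pass(L, b, Q, V_sets, mu):
        """Compute A_0 ... A_Q via (56) and the constructive recursion (65).
        V_sets[m] holds V_m as a set of length-b tuples over {+1,-1};
        mu(m, s) is assumed to return the node weight of state s in segment m."""
        S = list(itertools.product((+1, -1), repeat=L))
        V = list(itertools.product((+1, -1), repeat=b))
        s0 = (+1,) * L
        A = [{s: (1.0 if s == s0 else 0.0) for s in S}]        # A_0, equation (56)
        for m in range(1, Q + 1):
            Am = {}
            for s in S:
                tau_s = s[L - b:]                               # tau(s), equation (60)
                if tau_s in V_sets[m]:
                    # W(s, V_m) = {T_hat(v, s); v in V}, equations (62) and (64)
                    Am[s] = mu(m, s) * sum(A[m - 1][v + s[:L - b]] for v in V)
                else:
                    Am[s] = 0.0                                 # tau(s) not in V_m
            A.append(Am)
        return A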

Reversal of Recursion


In the text which follows, a recursion in the “reverse direction” compared with the above recursion is described. This new recursion is defined with the aid of the recursion formula (57) for Am(s).


The following is assumed for this purpose

T(t,{circumflex over (V)}):={T(t,{circumflex over (v)}); {circumflex over (v)}ε{circumflex over (V)}}, for tεS,{circumflex over (V)}V  (66)


and for mεN, 0≦m≦Q, the following mappings are considered

Bm:S→R,  (67)


with the following recursive characteristic:

















Σ_{sεS} Am(s) Σ_{tεT(s,Vm+1)} BQ−m(t)
  = Σ_{sεS} μm(s) Σ_{t̂εW(s,Vm)} Am−1(t̂) Σ_{tεT(s,Vm+1)} BQ−m(t)
  = Σ_{t̂εS} Σ_{sεT(t̂,Vm)} μm(s) Am−1(t̂) Σ_{tεT(s,Vm+1)} BQ−m(t)
  = Σ_{t̂εS} Am−1(t̂) Σ_{sεT(t̂,Vm)} μm(s) Σ_{tεT(s,Vm+1)} BQ−m(t),

where the inner term μm(s) Σ_{tεT(s,Vm+1)} BQ−m(t) =: BQ−m+1(s), i.e.,

Σ_{sεS} Am(s) Σ_{tεT(s,Vm+1)} BQ−m(t) = Σ_{sεS} Am−1(s) Σ_{tεT(s,Vm)} BQ−m+1(t).  (68)







By applying equation (68) several times, the following is obtained for an arbitrary jε{1, . . . , m+1}













Σ_{sεS} Am(s) Σ_{tεT(s,Vm+1)} BQ−m(t) = Σ_{sεS} Aj−1(s) Σ_{tεT(s,Vj)} BQ−j+1(t).  (69)







According to the above definition, the recursion formula is thus












Bm(s) = μQ−m+1(s) Σ_{tεT(s,VQ−m+2)} Bm−1(t), for 1≦m≦Q.  (70)







To terminate the recursion, the following are defined











B0(s) = { 1, for s=s0; 0, else }.  (71)







Given this termination and the equations (58) and (69),

ÃQ(W(s0,VQ+1))


can be represented for VQ+1:={ν0} and with an arbitrary jε{1, . . . , Q+1} as follows:















ÃQ(W(s0,VQ+1)) = Σ_{sεW(s0,VQ+1)} AQ(s)
               = Σ_{sεS} AQ(s) Σ_{tεT(s,{v0})} B0(t)
               = Σ_{sεS} AQ(s) Σ_{tεT(s,VQ+1)} B0(t)
               = Σ_{sεS} Aj−1(s) Σ_{tεT(s,Vj)} BQ−j+1(t).  (72)







Note: in the evaluation of (72), Vj is not included in the calculation of the Am and Bm needed.

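A corresponding Python sketch of the reversed recursion (70) with the termination (71) is given below; the state set is again enumerated explicitly, mu(q, s) is the same assumed node weight interface as in the forward sketch, and V_sets[j] is assumed to hold Vj for j=1, . . . , Q+1.

    import itertools

    def backward_pass(L, b, Q, V_sets, mu):
        """Compute B_0 ... B_Q via (71) and the recursion (70).
        V_sets[j] holds V_j as a set of length-b tuples (V_sets[Q+1] = {v0});
        mu(q, s) is assumed to return the node weight of state s in segment q."""
        S = list(itertools.product((+1, -1), repeat=L))
        s0 = (+1,) * L
        B = [{s: (1.0 if s == s0 else 0.0) for s in S}]        # B_0, equation (71)
        for m in range(1, Q + 1):
            Bm = {}
            for s in S:
                # T(s, V_{Q-m+2}) = {T(s, v); v in V_{Q-m+2}}, with T from (11)
                successors = (s[b:] + v for v in V_sets[Q - m + 2])
                Bm[s] = mu(Q - m + 1, s) * sum(B[m - 1][t] for t in successors)
            B.append(Bm)
        return B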

Calculation of Aαi


Using the preliminary work from the preceding sections, Aαi can now be calculated in a simple manner.


For this purpose, the following are defined:

Vj:=V, for jε{1, . . . , a},  (73)
Vj:={v0}, for jε{a+1, . . . , Q+1},  (74)


i.e., all permissible code words are defined via the states sju with

uεUQ=V1× . . . ×VQ


The code words used in the calculation of the Aαi are restricted by ui=α. For an arbitrary but fixed choice of iε{1, . . . , K}, there is exactly one jε{1, . . . , a} and exactly one îε{1, . . . , b} with

i=(j−1)·b+î.  (75)


Furthermore, the following are defined for an arbitrary but fixed choice of αε{±1}:

Vji(α):={vεV; vî=α}  (76)
UQi(α):=V1 × . . . ×Vj−1×Vji(α)×Vj+1 × . . . ×VQ⊂UQ,  (77)

i.e., the code words from Γi(α) are determined via the states sju with uεUQi(α).


For an arbitrary, but fixed choice of yεRN, define for qε{1, . . . , Q}

μq:S→R  (78)










s ↦ exp( −(1/(2σ2)) Σ_{j=1}^{n} ( y_{n(q−1)+j} − Cj(s) )2 ) = exp( −ΔFq(s)/(2σ2) ).  (79)







According to the definition of the convolutional code, the following holds true for all sQu with uεUQ:

sQ+1u=T(sQu,uQ+1)=s0, uQ+1εVQ+1={ν0},  (80)


i.e.,

sQuεW(s0,VQ+1).  (81)


Taking account of equation (72), the following thus holds true:














Aαi(y) = Σ_{cεΓi(α)} Π_{q=1}^{Q} exp( −ΔFq(s̃qc)/(2σ2) )
       = Σ_{uεUQi(α)} Π_{q=1}^{Q} μq(squ)
       = Σ_{(uεUQi(α)) ∧ (sQuεW(s0,VQ+1))} Π_{q=1}^{Q} μq(squ)
       = ÃQ(W(s0,VQ+1))
       = Σ_{sεS} Aj−1(s) Σ_{tεT(s,Vji(α))} BQ−j+1(t)  (82)







The important factor is that the Am and Bm needed can be calculated independently of i and α via UQ and, respectively, UQ+1. Above, ÃQ(W(s0,VQ+1)) was formally determined via the auxiliary construct UQi(α) which, however, is no longer needed in the resultant explicit representation.


SUMMARY OF THE PROCEDURE

Define











Vj:=V, for jε{1, . . . , a},
Vj:={v0}, for jε{a+1, . . . , Q+1},
Vji(α):={vεV; vî=α}, for i=(j−1)·b+î, îε{1, . . . , b}, jε{1, . . . , a}, αε{±1}.









For an arbitrary, but fixed choice of yεRN, define for qε{1, . . . , Q}

μq:S→R,
s ↦ exp( −(1/(2σ2)) Σ_{j=1}^{n} ( y_{n(q−1)+j} − Cj(s) )2 ) = exp( −ΔFq(s)/(2σ2) ).





Calculate

Am(s), for sεS, mε{1, . . . , a−1},
Bm(s), for sεS, mε{1, . . . , Q},


according to the recursion formulae (57) and (70) and starting values A0(s), B0(s), specified above, with (56) and (71).


Calculate all Aαi, iε{1, . . . , K}, αε{±1} over











Aαi(y) = Σ_{sεS} Aj−1(s) Σ_{tεT(s,Vji(α))} BQ−j+1(t).  (83)







and determine the soft outputs






L(Ui|y) = ln( A+1i(y) / A−1i(y) ), i=1, . . . , K.






Together with the recursion formula from the preceding section, all Aαi(y) can now be calculated jointly with O(2^L·Q) or, respectively, O(K) operations instead of O(K·2^K) operations.


Note that



L=k·b, Q=a+k−1,K=a·b,


where a is the number of information bits.


The numeric complexity for calculating the soft outputs has thus been reduced from an exponential order to a linear order where a, the number of information bits, is the decisive quantity.


Special Case: Binary State Transition (B=1)


In the important special case of b=1, the set V of state transition symbols only consists of the two elements +1, −1. The GSM codes, for instance, belong to this widespread special case.


Since now i=j and Vji (α)={α} in the above description, the procedure is simplified as follows:


Define

Vj:={±1}, for jε{1, . . . , a},
Vj:={+1}, for jε{a+1, . . . , Q+1}


For an arbitrary, but fixed choice of yεRN define for qε{1, . . . , Q}

μq:S→R







s ↦ exp( −(1/(2σ2)) Σ_{j=1}^{n} ( y_{n(q−1)+j} − Cj(s) )2 ) = exp( −ΔFq(s)/(2σ2) ).





Calculate

Am(s), for sεS, mε{1, . . . , a−1},
Bm(s), for sεS, mε{1, . . . , Q},


according to the recursion formulae (57) and (70) and starting values A0(s), B0(s) with (56) and (71).


Calculate all Aαi, iε{1, . . . , K}, αε{±1} over











Aαi(y) = Σ_{sεS} Ai−1(s)·BQ−i+1( T(s,α) ).  (84)







and determine the soft outputs






L(Ui|y) = ln( A+1i(y) / A−1i(y) ), i=1, . . . , K.







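To tie the b=1 procedure together, the following Python sketch computes all soft outputs L(Ui|y) along the lines of (56)/(57), (70)/(71) and (84); the parameter values shown in the usage comment are the SACCH values from above. The per-step normalization is an added assumption that keeps the products representable in floating point; it cancels in the ratio of (84) and therefore does not change the soft outputs. The sketch favors readability over speed and is not the pseudocode of FIG. 4.

    import itertools, math

    def soft_output_decode(y, K, k, n, M, sigma2):
        """Soft outputs for a terminated convolutional code with b = 1 (sketch).
        y: received real values of length n*(K+k-1); M: characterizing sets M_1..M_n."""
        L_reg, Q = k, K + k - 1
        S = list(itertools.product((+1, -1), repeat=L_reg))
        s0 = (+1,) * L_reg

        def T(s, v):                        # state transition (11), b = 1
            return s[1:] + (v,)

        def mu(q, s):                       # node weight (79)
            d = sum((y[n * (q - 1) + j] - math.prod(s[i - 1] for i in M[j])) ** 2
                    for j in range(n))
            return math.exp(-d / (2.0 * sigma2))

        def normalized(d):                  # scaling cancels in the ratio of (84)
            z = sum(d.values())
            return {s: v / z for s, v in d.items()}

        # forward recursion (56)/(57): inputs free for m <= K, tail (+1) afterwards
        A = [{s: (1.0 if s == s0 else 0.0) for s in S}]
        for m in range(1, Q + 1):
            V_m = (+1, -1) if m <= K else (+1,)
            A.append(normalized({s: (mu(m, s) * sum(A[m - 1][t] for t in S
                                                    if T(t, s[-1]) == s)
                                     if s[-1] in V_m else 0.0) for s in S}))

        # backward recursion (70)/(71)
        B = [{s: (1.0 if s == s0 else 0.0) for s in S}]
        for m in range(1, Q + 1):
            V_j = (+1, -1) if (Q - m + 2) <= K else (+1,)
            B.append(normalized({s: mu(Q - m + 1, s) * sum(B[m - 1][T(s, v)] for v in V_j)
                                 for s in S}))

        # combination (84) and soft outputs as in (8)/(41)
        return [math.log(sum(A[i - 1][s] * B[Q - i + 1][T(s, +1)] for s in S) /
                         sum(A[i - 1][s] * B[Q - i + 1][T(s, -1)] for s in S))
                for i in range(1, K + 1)]

    # Usage with the SACCH parameters from above (illustrative):
    # soft = soft_output_decode(y, K=224, k=5, n=2, M=[{1, 2, 5}, {1, 2, 4, 5}], sigma2=0.5)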
Algorithmic Conversion


For the algorithmic conversion, consider the trellis diagram

𝒯={(s,q); sεS, q=0, . . . , Q+1}


and the mappings


node weights in state s of trellis segment q

μ:𝒯→R,
(s,q) ↦ exp( −ΔFq(s)/(2σ2) )





Subtotals ‘A’ in state s of trellis segment q

A:𝒯→R,
(s,q) ↦ A(s,q)


Subtotals ‘B’ in state s of trellis segment Q−q+1

B:𝒯→R,
(s,q) ↦ B(s,q)


The mappings are only evaluated in the meaningful subsets of the definition domain.



FIG. 2 shows an algorithm in pseudocode notation which represents a progression in the trellis diagram, considering all states for the calculation of the node weights. The algorithm illustrates the above statements and is comprehensible in and of itself. Since the value of ΔFq(s) depends on the state s only indirectly, being formed directly from C(s), the following holds true

|{ΔFq(s); sεS}| ≦ min{2^L, 2^n},


i.e., for n<L, many of the above μ(s,q) have the same value. Depending on the special code, μ(s,q) can thus be determined with far fewer operations in the implementation.

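One possible way to exploit this observation, sketched here in Python under the assumption of a coding map C(s) as above, is to cache the node weights per output pattern C(s) instead of per state (the function and parameter names are illustrative):

    import math

    def node_weights_segment(q, y, S, C, n, sigma2):
        """Node weights mu(s, q) for one trellis segment, cached per output
        pattern C(s); at most min(2^L, 2^n) distinct values occur per segment."""
        cache = {}
        mu_q = {}
        for s in S:
            pattern = tuple(C(s))
            if pattern not in cache:
                d = sum((y[n * (q - 1) + j] - pattern[j]) ** 2 for j in range(n))
                cache[pattern] = math.exp(-d / (2.0 * sigma2))
            mu_q[s] = cache[pattern]
        return mu_q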


FIG. 3 and FIG. 4 each show an algorithm in pseudocode notation for determining soft outputs. FIG. 3 relates to the general case and FIG. 4 relates to the special case for the binary state transition (b=1). Both algorithms illustrate the above statements and are comprehensible in and of themselves.


With a suitable implementation representation of V and, respectively, vji(α), for instance as subsets of N, the above iterations νεV and sεS can be implemented as normal program loops. Naturally, indices which may occur such as, for example, k−1+q, are calculated only once in the implementation and not with every occurrence as is written down here for better clarity.



FIG. 5 shows a processor unit PRZE. The processor unit PRZE comprises a processor CPU, a memory SPE and an input/output interface IOS which is used in various ways via an interface IFC: an output can be displayed on a monitor MON and/or output on a printer PRT via a graphics interface. An input is made via a mouse MAS or a keyboard TAST. The processor unit PRZE also has a data bus BUS which ensures the connection of a memory MEM, the processor CPU and the input/output interface IOS. Furthermore, additional components, for example an additional memory, data store (hard disk) or scanner, can be connected to the data bus BUS.


The above-described method and apparatus are illustrative of the principles of the present invention. Numerous modifications and adaptations will be readily apparent to those skilled in this art without departing from the spirit and scope of the present invention.

Claims
  • 1. A method for decoding a predetermined code word, wherein said code word comprises a number of positions having different values, comprising the steps of: providing a processor comprising a central processing unit, memory, an input/output interface, and a data bus connecting said central processing unit to said memory and said input/output interface, said processor decoding a predetermined code word;determining a calculation rule for a soft-output value for each position of said code word, each said position of said code being correlated with said soft-output value, according to the formula
  • 2. The method as claimed in claim 1, wherein said convolutional code has binary state transitions, said method further comprising the steps of: determining mappings Am recursively by the equation Am(s)=μm(s)(Am−1(T̂(+1,s))+Am−1(T̂(−1,s))), for mεN; determining mappings Bm recursively by the equation Bm(s)=μQ−m+1(s)(Bm−1(T(s,+1))+Bm−1(T(s,−1))), for 1≦m≦Q; and determining terms Aαi, iε{1, . . . , K}, αε{±1} according to the equation
  • 3. The method as claimed in claim 1, further comprising the step of: providing a mobile radio network in which said decoding of a predetermined code word operates.
  • 4. The method as claimed in claim 3, wherein said mobile radio network is a GSM network.
  • 5. The method as claimed in claim 1, wherein said predetermined code word is a concatenated code word, said method further comprising the steps of: providing said calculated soft-output values as input data of another decoder.
Priority Claims (1)
Number Date Country Kind
198 55 453 Dec 1998 DE national
PCT Information
Filing Document Filing Date Country Kind 371c Date
PCT/DE99/03824 12/1/1999 WO 00 5/30/2001
Publishing Document Publishing Date Country Kind
WO00/33467 6/8/2000 WO A
US Referenced Citations (4)
Number Name Date Kind
5566191 Ohnishi et al. Oct 1996 A
5574751 Trelewicz Nov 1996 A
5721746 Hladik et al. Feb 1998 A
5936605 Munjal Aug 1999 A
Foreign Referenced Citations (3)
Number Date Country
40 38 251 Jun 1992 DE
44 37 984 Aug 1996 DE
0 391 354 Oct 1990 EP