Generating log-likelihood values in a maximum a posteriori processor

Information

  • Patent Grant
  • 6760883
  • Patent Number
    6,760,883
  • Date Filed
    Thursday, September 13, 2001
  • Date Issued
    Tuesday, July 6, 2004
Abstract
A maximum a posteriori (MAP) detector/decoder employs an algorithm that computes a log-likelihood value with an a posteriori probability (APP) value employing a number N of previous state sequences greater than or equal to two (N≧2). By defining the APP with more previous state sequences, the set of α values may be calculated for a current state and then reduced. After generating the reduced set of α values, the full set of β values may be generated for calculation of log-likelihood values. By calculating a set of α values that may be decimated by, for example, N, the amount of memory required to store the α values used in subsequent computations is reduced.
Description




BACKGROUND OF THE INVENTION




1. Field of the Invention




The present invention relates to generating, storing, and updating log-likelihood values when processing encoded data with a maximum a posteriori (MAP) algorithm.




2. Description of the Related Art




MAP decoding algorithms are employed for processing data input to a processor for detection and/or decoding operations. The algorithm provides a maximum a posteriori estimate of a state sequence of a finite-state, discrete-time Markov process observed in noise. The MAP algorithm forms a trellis corresponding to possible states (a portion of the received symbol bits in the sequence) for each received output channel symbol per unit increment in time (i.e., per clock cycle).




States, and transitions between states, of the Markov process spanning an interval of time may be represented by a trellis diagram. The number of bits that a state represents is equivalent to the memory of the Markov process. Thus, probabilities (sometimes in the form of log-likelihood values) are associated with each transition within the trellis, and probabilities are also associated with each decision for a symbol bit in the sequence.




The processor using the MAP algorithm computes log-likelihood values using α (forward state probabilities for states in the trellis) and β values (reverse state probabilities in the trellis), as described subsequently. The α values (a vector) are associated with states within the trellis, and these α values are stored in memory. The processor using the MAP algorithm then computes values of β, and the α values are then retrieved from memory to compute the final output log-likelihood values. To compute the log-likelihood values, the entire state metric array of α values is stored by the MAP algorithm.




The variable S_k is defined as the state of the Markov process at time k, y_k is defined as the noisy channel output sample at time k, and y_m^n is defined as the sequence y_m^n = (y_m, y_{m+1}, . . . , y_n). For a data block of length K, probability functions may be defined for the Markov process as given in equations (1) through (3):

α_s^k = p(S_k = s; y_1^k)  (1)

β_s^k = p(y_{k+1}^K | S_k = s)  (2)

γ_{s′,s} = p(S_k = s; y_k | S_{k−1} = s′)  (3)

where s is the sequence defining the state S_k of the Markov process at time k.




In prior art decoders, calculation of the probability associated with a decision generally employs knowledge of a previous state S_{k−1} sequence s′ at time k−1 (complete state decisions and associated probabilities) of the decoder and the current state at time k. Thus, the algorithm computes the a posteriori probability (APP) value σ_k(s′,s) = p(S_{k−1} = s′; S_k = s | y_1^K) using the probabilities defined in equations (1) through (3). The APP value is then as given in equation (4):

σ_k(s′,s) = α_{s′}^{k−1} γ_{s′,s}^k β_s^k  (4)






With the APP value for input u_k, the log-likelihood value may then be calculated as given in equation (5):

L(u_k) = log( p(u_k = +1 | y_1^K) / p(u_k = −1 | y_1^K) ).  (5)













SUMMARY OF THE INVENTION




In accordance with embodiments of the present invention, a maximum a posteriori (MAP) processor employs an algorithm that computes a log-likelihood value with an a posteriori probability (APP) value employing a number N of previous state sequences greater than or equal to two (N≧2). The set of α values may be calculated for a current state and then reduced in accordance with an APP value based on previous state sequences. After forming the reduced set of α values, the full set of β values may be subsequently generated for calculation of log-likelihood values. By calculating a set of α values that may be decimated by, for example, N, the amount of memory required to store the α values used in subsequent computations is reduced.




In accordance with an exemplary embodiment of the present invention, log-likelihood values are generated for data in a processor by a) generating a set of α values for a current state; and b) deleting selected values of the set of α values based on an a posteriori probability (APP) value having at least two previous states (N≧2) to provide a reduced set of α values corresponding to the data. The exemplary embodiment of the present invention further c) generates a set of β values for the current state; and d) calculates the log-likelihood values from the reduced set of α values and the set of β values.











BRIEF DESCRIPTION OF THE DRAWINGS




Other aspects, features, and advantages of the present invention will become more fully apparent from the following detailed description, the appended claims, and the accompanying drawings in which:





FIG. 1 shows an exemplary method for generating log-likelihood values in accordance with the present invention;

FIG. 2 shows an 8-state trellis used to illustrate generation of log-likelihood values in accordance with the exemplary method of FIG. 1;

FIG. 3 shows a processor that may employ the exemplary method shown in FIG. 1.











DETAILED DESCRIPTION




In accordance with embodiments of the present invention, a maximum a posteriori (MAP) processor employs an algorithm that computes a log-likelihood value with an a posteriori probability (APP) value spanning a number N of previous states greater than or equal to two (N≧2). By defining the APP to span two or more previous time intervals, the set of α values may be calculated and then reduced. After the reduced set of α values is calculated, the full set of β values may be generated for calculation of log-likelihood values. By calculating a set of α values that may be decimated by a factor of N, the amount of memory required to store the α values used in subsequent computations is reduced. While the present invention is described for a MAP algorithm generated for turbo codes using the current and two previous states, one skilled in the art may readily extend the teachings herein to maximum a posteriori algorithms derived for other types of codes and for a greater number of previous states.




A MAP decoding algorithm computes the a posteriori probability (APP) value for a Markov process state variable S_k, a sequence y_m^n defined as the sequence y_m^n = (y_m, y_{m+1}, . . . , y_n), and a data block of length K using the probabilities defined in equations (1) through (3). With the APP value, the log-likelihood value for input u_k may be defined as given in equation (5). Equations (1)-(3) and (5) are repeated below for convenience:






α_s^k = p(S_k = s; y_1^k)  (1)

β_s^k = p(y_{k+1}^K | S_k = s)  (2)

γ_{s′,s} = p(S_k = s; y_k | S_{k−1} = s′)  (3)

L(u_k) = log( p(u_k = +1 | y_1^K) / p(u_k = −1 | y_1^K) ).  (5)













In accordance with an exemplary embodiment of the present invention, a MAP decoding algorithm calculates an a posteriori probability (APP) value σ_k based on a current state s at time k as well as the two previous state sequences s′ and s″ at times k−1 and k−2, respectively. The MAP decoding algorithm calculates the APP value σ_k(s″,s′,s) as the probability p(S_{k−2} = s″; S_{k−1} = s′; S_k = s; y_1^K). Using the chain rule, it is straightforward to show the following relationship for the APP value σ_k(s″,s′,s) as given in equation (6):






σ_k(s″,s′,s) = p(S_{k−2}=s″; y_1^{k−2}) · [p(S_{k−2}=s″; S_{k−1}=s′; y_1^{k−1}) / p(S_{k−2}=s″; y_1^{k−2})] · [p(S_{k−2}=s″; S_{k−1}=s′; S_k=s; y_1^k) / p(S_{k−2}=s″; S_{k−1}=s′; y_1^{k−1})] · [p(S_{k−2}=s″; S_{k−1}=s′; S_k=s; y_1^K) / p(S_{k−2}=s″; S_{k−1}=s′; S_k=s; y_1^k)]

= p(S_{k−2}=s″; y_1^{k−2}) · p(S_{k−1}=s′; y_{k−1} | S_{k−2}=s″; y_1^{k−2}) · p(S_k=s; y_k | S_{k−2}=s″; S_{k−1}=s′; y_1^{k−1}) · p(y_{k+1}^K | S_{k−2}=s″; S_{k−1}=s′; S_k=s; y_1^k).  (6)














A property of the Markov process is that, if the current state S_k is known, events after time k do not depend on y_1^k. Using this Markov process property, equation (6) may be expressed as given in equation (7):




σ_k(s″,s′,s) = p(S_{k−2}=s″; y_1^{k−2}) · p(S_{k−1}=s′; y_{k−1} | S_{k−2}=s″) · p(S_k=s; y_k | S_{k−1}=s′) · p(y_{k+1}^K | S_k=s)  (7)




Using the definitions of probabilities given in equations (1) through (3), equation (7) may then be expressed as given in equation (8):






σ_k(s″,s′,s) = α_{s″}^{k−2} γ_{s″,s′}^{k−1} γ_{s′,s}^k β_s^k  (8)






From equation (8), the log-likelihood value as given in equation (5) may then be calculated with the relationship of equation (9):






p(u_{k−1} = +1; y_1^K) = Σ_{(s″,s′,s) ∈ S⁺_{k−1}} p(S_{k−2}; S_{k−1}; S_k; y_1^K)

p(u_{k−1} = −1; y_1^K) = Σ_{(s″,s′,s) ∈ S⁻_{k−1}} p(S_{k−2}; S_{k−1}; S_k; y_1^K)  (9)














where (s″,s′,s) ∈ S⁺_{k−1} ("∈" defined as "an element of") is the set of state sequences which contain transition segments from S_{k−2}=s″ to S_{k−1}=s′ generated by the input u_{k−1}=+1, and (s″,s′,s) ∈ S⁻_{k−1} is the set of state sequences which contain transition segments from S_{k−2}=s″ to S_{k−1}=s′ generated by the input u_{k−1}=−1.




Then, from equation (5), the log-likelihood values L for the inputs u_{k−1} and u_k may be computed as in equations (10) and (11), respectively:










L(u_{k−1}) = log( Σ_{S⁺_{k−1}} α_{s″}^{k−2} γ_{s″,s′}^{k−1} γ_{s′,s}^k β_s^k / Σ_{S⁻_{k−1}} α_{s″}^{k−2} γ_{s″,s′}^{k−1} γ_{s′,s}^k β_s^k )  (10)

L(u_k) = log( Σ_{S⁺_k} α_{s″}^{k−2} γ_{s″,s′}^{k−1} γ_{s′,s}^k β_s^k / Σ_{S⁻_k} α_{s″}^{k−2} γ_{s″,s′}^{k−1} γ_{s′,s}^k β_s^k ).  (11)













MAP algorithms of the prior art employ α_{s″}^{k−2} and α_{s′}^{k−1} to compute L(u_{k−1}) and L(u_k), respectively. However, a MAP algorithm in accordance with an exemplary embodiment of the present invention employing the relationships of equations (10) and (11) for N=2 computes both of the log-likelihood values L(u_{k−1}) and L(u_k) with only α_{s″}^{k−2}. Thus, when storing values for α, only every other value needs to be stored, since each stored value may be used to compute two different log-likelihood values.





FIG. 1

shows an exemplary method for generating log-likelihood values in accordance with the present invention. At step


101


, a set of α values are computed for the trellis with an APP value based on N≧2 time intervals, such as in accordance with the expression of equation (8) for N=2. At step


102


, certain values are deleted (e.g., decimation of the set of α values by N), and, thus, these values are not necessarily stored in memory. While the particular α values that are deleted may be specified by a given design choice, preferred embodiments keeps values of α at regular positions within the set (e.g. every second value for N=2, every third value for N=3, etc.).




In step 103, β values are calculated, for example, as in methods of the prior art. In step 104, the reduced set of α values stored in memory is retrieved from memory. In step 105, the final output log-likelihood value is calculated from the values of β and the reduced set of α values; an algorithm, as described with respect to equations (10) and (11), allows for calculation of the final output value with the reduced set of α values.
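The steps above can be sketched in Python. The 8-state shift-register trellis, the branch-metric values, and all function names below are assumptions made for illustration, not details taken from the patent; the sketch runs the forward pass with decimation by N=2, computes the full set of β values, and then obtains both L(u_{k−1}) and L(u_k) from the single stored set of α values at time k−2, following equations (10) and (11).

```python
import math
import random

M = 3                                   # 2^M = 8 trellis states, as in FIG. 2
NUM_STATES = 1 << M
N = 2                                   # decimation factor (two previous states)

def next_state(s, u):
    # Hypothetical shift-register trellis: input bit u is shifted into state s.
    return ((s << 1) | u) & (NUM_STATES - 1)

def make_gammas(K, seed=1):
    # Hypothetical positive branch metrics for stages k = 1..K.
    rng = random.Random(seed)
    gammas = []
    for _ in range(K):
        stage = {}
        for s in range(NUM_STATES):
            for u in (0, 1):
                stage[(s, next_state(s, u))] = rng.uniform(0.5, 1.5)
        gammas.append(stage)
    return gammas

def forward_pass(gammas, decimate):
    # Steps 101/102: forward recursion for the alpha values; only every
    # `decimate`-th stage is kept (decimation of the alpha set by N).
    stored = {0: {s: (1.0 if s == 0 else 0.0) for s in range(NUM_STATES)}}
    cur = stored[0]
    for k in range(1, len(gammas) + 1):
        nxt = {s: 0.0 for s in range(NUM_STATES)}
        for (sp, s), g in gammas[k - 1].items():
            nxt[s] += cur[sp] * g
        cur = nxt
        if k % decimate == 0:
            stored[k] = cur
    return stored

def backward_pass(gammas):
    # Step 103: backward recursion for the full set of beta values.
    K = len(gammas)
    beta = [None] * (K + 1)
    beta[K] = {s: 1.0 for s in range(NUM_STATES)}
    for k in range(K - 1, -1, -1):
        b = {s: 0.0 for s in range(NUM_STATES)}
        for (sp, s), g in gammas[k].items():
            b[sp] += g * beta[k + 1][s]
        beta[k] = b
    return beta

def llr_pair(alpha_km2, gam_km1, gam_k, beta_k):
    # Steps 104/105: both L(u_{k-1}) and L(u_k) from alpha at time k-2 only,
    # per equations (10) and (11).  In this trellis the input bit is the
    # low bit of the destination state of each transition segment.
    num, den = [0.0, 0.0], [0.0, 0.0]
    for (s2, s1), g1 in gam_km1.items():        # s'' -> s'
        for (s1b, s), g2 in gam_k.items():      # s'  -> s
            if s1b != s1:
                continue
            c = alpha_km2[s2] * g1 * g2 * beta_k[s]
            for i, u in enumerate((s1 & 1, s & 1)):   # u_{k-1}, u_k
                (num if u else den)[i] += c
    return math.log(num[0] / den[0]), math.log(num[1] / den[1])

K = 6
gammas = make_gammas(K)
alpha = forward_pass(gammas, decimate=N)    # only alpha^0, alpha^2, alpha^4, alpha^6 kept
beta = backward_pass(gammas)
L_u5, L_u6 = llr_pair(alpha[4], gammas[4], gammas[5], beta[6])
print(L_u5, L_u6)
```

Because the inner sum over the final trellis segment collapses (Σ_s γ_{s′,s}^k β_s^k = β_{s′}^{k−1}), the two log-likelihoods computed this way agree with the conventional one-step computation that would also have required storing α^{k−1}.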




As an example, FIG. 2 shows an 8-state trellis (states 0-7) from time k−2 to time k. The number next to each state in the trellis identifies the state's number. The total branch metric Γ_2^k from state 2 at time k−2 to state 0 at time k is γ_{2,4}^{k−1} γ_{4,0}^k. Note that, for a 2^M-state trellis (M a positive integer), the path between time k−N and time k is uniquely defined for N≦M. For N>M, there exist multiple paths between two states.
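The uniqueness property is easy to verify by enumeration. The sketch below assumes a hypothetical 8-state (M=3) shift-register trellis in which each input bit is shifted into the state register; it confirms that for N=2≦M exactly one path connects state 2 at time k−2 to state 0 at time k (the 2→4→0 path of the FIG. 2 example), while for N=4>M more than one path connects the same pair of states.

```python
from itertools import product

M = 3                      # 2^M = 8 trellis states
NUM_STATES = 1 << M

def next_state(s, u):
    # Hypothetical shift-register trellis: input bit u is shifted into state s.
    return ((s << 1) | u) & (NUM_STATES - 1)

def paths(start, end, n_steps):
    # Enumerate every state path of length n_steps from `start` to `end`.
    found = []
    for bits in product((0, 1), repeat=n_steps):
        s, trace = start, [start]
        for u in bits:
            s = next_state(s, u)
            trace.append(s)
        if s == end:
            found.append(trace)
    return found

print(paths(2, 0, 2))       # N = 2 <= M = 3: the single path 2 -> 4 -> 0
print(len(paths(2, 0, 4)))  # N = 4 >  M = 3: more than one path
```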




By extending the above equations (6) and (8) to N previous states, the APP value may be generalized as in equation (12):








p(S_{k−N}=s^{(N)}; S_{k−N+1}=s^{(N−1)}; . . . ; S_{k−1}=s′; S_k=s; y_1^K) = α_{s^{(N)}}^{k−N} Γ_N^k β_s^k  (12)






where the total branch metric Γ_N^k is defined as Γ_N^k = γ_{s^{(N)},s^{(N−1)}}^{k−N+1} γ_{s^{(N−1)},s^{(N−2)}}^{k−N+2} . . . γ_{s′,s}^k.

Also, it follows that the log-likelihood values are computed as in equation (13):

L(u_{k−i}) = log( Σ_{S⁺_{k−i}} α_{s^{(N)}}^{k−N} Γ_N^k β_s^k / Σ_{S⁻_{k−i}} α_{s^{(N)}}^{k−N} Γ_N^k β_s^k ) for i = 0, 1, . . . , N−1.  (13)













Thus, this example shows that storing the α values α_{s′}^{k−1}, α_{s″}^{k−2}, . . . , α_{s^{(N−1)}}^{k−N+1} is not necessary when computing the log-likelihood values L(u_{k−i}) for i = 0, 1, . . . , N−1. Not storing these values reduces the memory size by a factor of N.
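As a back-of-the-envelope check of the saving (the block length and decimation factor below are assumed values for illustration, not taken from the patent):

```python
M = 3                          # 2^M = 8 trellis states
K = 5000                       # assumed data block length
N = 2                          # decimation factor

full = K * (1 << M)            # alpha state metrics stored by a conventional MAP algorithm
reduced = (K // N) * (1 << M)  # alpha state metrics stored after decimation by N
print(full, reduced)           # 40000 20000
```

Storing only every Nth stage of α values cuts the state-metric memory by a factor of N, here from 40,000 to 20,000 values.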





FIG. 3 shows a MAP processor 300 that may employ the exemplary method shown in FIG. 1. MAP processor 300 includes α calculator 301, β calculator 302, update module 303, decimator 304, and decision device 305. MAP processor 300 is coupled to memory 306. MAP processor 300 receives input data for detection or decoding, and α calculator 301 determines α values (forward trellis state probabilities). Once the α values are calculated, decimator 304 selects a subset of the α values in accordance with an exemplary embodiment of the present invention for storage in memory 306. β calculator 302 determines β values (reverse trellis state probabilities). Update module 303 retrieves the subset of α values, the β values, and previously stored branch metric values from memory 306. Using the retrieved values, update module 303 calculates log-likelihood values, such as by the method given in equation (13). Decision device 305 makes decisions corresponding to a sequence of the input data based on the log-likelihood values. Update module 303 then updates and stores the branch metric values in memory 306 using the log-likelihood values.




The following pseudo-code may be employed for an exemplary implementation of a MAP algorithm calculating log-likelihood values in accordance with the present invention:


















initialize α_s^0 and β_s^L for s = 0, 1, . . . , M − 1
for k = 1, 2, . . . , L
    compute α_s^k for s = 0, 1, . . . , M − 1
    if mod_N(k) = 0, then store α_s^k
for k = L, L − 1, . . . , 2
    compute β_s^{k−1} for s = 0, 1, . . . , M − 1
    set U_k^+ = U_k^− = U_{k−1}^+ = U_{k−1}^− = . . . = U_{k−N+1}^− = 0
    for all possible states s1 and s2:
        compute C = α_{s1}^{k−N} Γ_N^k β_{s2}^k, where Γ_N^k is the total branch metric between s1 and s2
        for i = 0, 1, . . . , N − 1
            if the path between s1 and s2 contains a transition caused by u_{k−i} = +1, then U_{k−i}^+ = U_{k−i}^+ + C
            if the path between s1 and s2 contains a transition caused by u_{k−i} = −1, then U_{k−i}^− = U_{k−i}^− + C
        compute L(u_{k−i}) = log(U_{k−i}^+ / U_{k−i}^−) for i = 0, 1, . . . , N − 1

While the exemplary embodiments of the present invention have been described with respect to methods implemented within a detector, as would be apparent to one skilled in the art, various functions may be implemented in the digital domain as processing steps in a software program, by digital logic, or in combination of both software and hardware. Such software may be employed in, for example, a digital signal processor, micro-controller or general-purpose computer. Such hardware and software may be embodied within circuits implemented in an integrated circuit.




The present invention can be embodied in the form of methods and apparatuses for practicing those methods. The present invention can also be embodied in the form of program code embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention. The present invention can also be embodied in the form of program code, for example, whether stored in a storage medium, loaded into and/or executed by a machine, or transmitted over some transmission medium, such as over electrical wiring or cabling, through fiber optics, or via electromagnetic radiation, wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the invention. When implemented on a general-purpose processor, the program code segments combine with the processor to provide a unique device that operates analogously to specific logic circuits.




It will be further understood that various changes in the details, materials, and arrangements of the parts which have been described and illustrated in order to explain the nature of this invention may be made by those skilled in the art without departing from the principle and scope of the invention as expressed in the following claims.



Claims
  • 1. A method of generating probabilities to calculate log-likelihood values corresponding to data in a maximum a posteriori processor, the method comprising the steps of:(a) generating a set of α values as forward probabilities of a trellis for a current state; and (b) deleting selected values of the set of α values based on an a posteriori probability (APP) value having at least two previous states to provide a reduced set of α values corresponding to the data.
  • 2. The invention as recited in claim 1, wherein step (b) further includes the step of storing the reduced set of α values in a memory.
  • 3. The invention as recited in claim 2, wherein, for step (a), the APP value has two previous states.
  • 4. The invention as recited in claim 1, further comprising the steps of:(c) generating a set of β values as reverse probabilities of the trellis for the current state; and (d) calculating the log-likelihood values from the reduced set of α values and the set of β values based on the APP value.
  • 5. The invention as recited in claim 4, wherein step (d) further includes the step of retrieving the reduced set of α values from the memory.
  • 6. The invention as recited in claim 4, wherein, for step (a), the APP value for input uk at time k for a Markov process state variable Sk, a sequence ymn=(ym,ym+1, . . . yn), N previous states, and data block of length K is defined as:p(Sk−N=s(N);Sk−N+1=s(N−1); . . . ;Sk−1=s′;Sk=s;y1K)=αs(N)k−NΓNkβsk, where the total branch metric ΓNk is defined as ΓNk=γs(N),s(N−1)k−N+1γs(N−1),s(N−2)k−N+2 . . . γs′,sk, and:αsk=p(Sk=s;y1k),  βsk=p(yk+1K|Sk=s), andγs′,s=p(Sk=s;yk|Sk−1=s′).
  • 7. The invention as recited in claim 6, wherein, for step (d), the log-likelihood value L(uk) for input uk is: L(u_k) = log( Σ_{S⁺_k} α_{s″}^{k−2} γ_{s″,s′}^{k−1} γ_{s′,s}^k β_s^k / Σ_{S⁻_k} α_{s″}^{k−2} γ_{s″,s′}^{k−1} γ_{s′,s}^k β_s^k ).
  • 8. The invention as recited in claim 6, wherein, for step (a), the APP value has N=2 previous states.
  • 9. The invention as recited in claim 1, wherein, for step (a), the APP value for input uk at time k for a Markov process state variable Sk, a sequence ymn=(ym,ym+1, . . . yn), N previous states, and data block of length K is defined as:p(Sk−N=s(N);Sk−N+1=s(N−1); . . . ;Sk−1=s′;Sk=s;y1K)=αs(N)k−NΓNkβsk, where the total branch metric ΓNk is defined as ΓNk=γs(N),s(N−1)k−N+1γs(N−1),s(N−2)k−N+2 . . . γs′,sk, andαsk=p(Sk=s;y1k), βsk=p(yk+1K|Sk=s), and γs′,s=p(Sk=s;yk|Sk−1=s′).
  • 10. The invention as recited in claim 1, wherein the method is employed during a step of either maximum a posteriori (MAP) detection or MAP decoding of received samples.
  • 11. The invention as recited in claim 1, wherein the method is implemented by a processor in an integrated circuit.
  • 12. Apparatus generating probabilities to calculate log-likelihood values corresponding to data in a maximum a posteriori processor, the apparatus comprising:a first calculator that generates a set of α values as forward probabilities of a trellis for a current state; and a decimator that deletes selected values of the set of α values based on an a posteriori probability (APP) value having at least two previous states to provide a reduced set of α values corresponding to the data.
  • 13. The invention as recited in claim 12, further comprising:a second calculator that generates a set of β values as reverse probabilities of the trellis for the current state; and an update module that calculates the log-likelihood values from the reduced set of α values and the set of β values based on the APP value.
  • 14. The invention as recited in claim 12, further comprising a decision device applying a maximum a posteriori algorithm to generate decisions for the data in the MAP processor.
  • 15. The invention as recited in claim 12, wherein the apparatus is a circuit embodied in an integrated circuit.
  • 16. A computer-readable medium having stored thereon a plurality of instructions, the plurality of instructions including instructions which, when executed by a processor, cause the processor to implement a method for generating log-likelihood values for data in a detector, the method comprising the steps of:(a) generating a set of α values as forward probabilities of a trellis for a current state; and (b) deleting selected values of the set of α values based on an a posteriori probability (APP) value having at least two previous states to provide a reduced set of α values corresponding to the data.
  • 17. The invention as recited in claim 16, further comprising the steps of:(c) generating a set of β values for the current state; and (d) calculating the log-likelihood values from the reduced set of α values and the set of β values based on the APP value.
US Referenced Citations (5)
Number Name Date Kind
5933462 Viterbi et al. Aug 1999 A
6145114 Crozier et al. Nov 2000 A
6226773 Sadjadpour May 2001 B1
6400290 Langhammer et al. Jun 2002 B1
6510536 Crozier et al. Jan 2003 B1