Claims
- 1. A method for maximum a posteriori (MAP) decoding of an input information sequence based on a first information sequence received through a channel, comprising: iteratively generating decode results X_i, for i = 1, 2, ..., n, where n is an integer, where each X_i is generated by employing X_{i−1}, and X_1 is generated from said first information sequence and from an initial decode result X_0; and ceasing said step of iteratively generating, and outputting last-generated decode results, when a difference between said last-generated decode results and next-to-last-generated decode results is within a compare threshold.
- 2. A method for maximum a posteriori (MAP) decoding of an input information sequence based on a first information sequence received through a channel, comprising: iteratively generating a sequence of one or more decode results starting with an initial decode result; and outputting one of adjacent decode results as a decode of the input information sequence if the adjacent decode results are within a compare threshold, wherein the step of iteratively generating comprises: a. generating the initial decode result as a first decode result; b. generating a second decode result based on the first decode result and a model of the channel; c. comparing the first and second decode results; d. replacing the first decode result with the second decode result; and e. repeating b-d if the first and second decode results are not within the compare threshold.
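The iterate-compare-replace loop of claims 1 and 2 can be sketched in a few lines. This is a minimal illustration, not the patented method itself: `decode_step` is a hypothetical stand-in for one MAP re-estimation pass (here it simply moves a soft estimate halfway toward the received values), and the compare threshold is taken as a maximum elementwise difference.

```python
def decode_step(estimate, received):
    # Hypothetical stand-in for step b of claim 2: one re-estimation
    # pass that nudges the current estimate toward the received sequence.
    return [(e + r) / 2.0 for e, r in zip(estimate, received)]

def within_threshold(a, b, threshold):
    # Step c: adjacent decode results are "within a compare threshold"
    # when no element differs by more than `threshold`.
    return all(abs(x - y) <= threshold for x, y in zip(a, b))

def iterative_map_decode(received, initial, threshold=1e-6, max_iters=100):
    first = initial                                   # step a
    for _ in range(max_iters):
        second = decode_step(first, received)         # step b
        if within_threshold(first, second, threshold):  # step c
            return second     # output one of the adjacent decode results
        first = second        # step d, then repeat (step e)
    return first
```

Under this toy `decode_step`, successive results converge geometrically toward the received sequence, so the loop terminates well before `max_iters`.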
- 3. The method of claim 2, wherein the generating a second decode result comprises searching for a second information sequence that maximizes a value of an auxiliary function.
- 4. The method of claim 3, wherein the auxiliary function is based on the expectation maximization (EM) algorithm.
- 5. The method of claim 4, wherein the model of the channel is a Hidden Markov Model (HMM) having an initial state probability vector π and a probability density matrix (PDM) P(X, Y), where X ∈ X, Y ∈ Y, and the elements of P(X, Y), p_{ij}(X, Y) = Pr(j, X, Y | i), are conditional probability density functions of an information element X of the second information sequence that corresponds to a received element Y of the first information sequence after the HMM transfers from a state i to a state j, the auxiliary function being expressed as Q(X_1^T, X_{1,p}^T) = Σ_z Ψ(z, X_{1,p}^T, Y_1^T) log(Ψ(z, X_1^T, Y_1^T)), where p is an iteration number, Ψ(z, X_1^T, Y_1^T) = π_{i_0} Π_{t=1}^{T} p_{i_{t−1}i_t}(X_t, Y_t), T is a number of information elements in a particular information sequence, z is an HMM state sequence i_0^T, π_{i_0} is the probability of an initial state i_0, X_1^T is the second information sequence, X_{1,p}^T is a second information sequence estimate corresponding to a pth iteration, and Y_1^T is the first information sequence.
- 6. The method of claim 5, wherein the auxiliary function is expanded to be Q(X_1^T, X_{1,p}^T) = Σ_{t=1}^{T} Σ_{i=1}^{n} Σ_{j=1}^{n} γ_{t,ij}(X_{1,p}^T) log(p_{ij}(X_t, Y_t)) + C, where C does not depend on X_1^T and γ_{t,ij}(X_{1,p}^T) = α_i(X_{1,p}^{t−1}, Y_1^{t−1}) p_{ij}(X_{t,p}, Y_t) β_j(X_{t+1,p}^T, Y_{t+1}^T), where α_i(X_{1,p}^t, Y_1^t) and β_j(X_{t+1,p}^T, Y_{t+1}^T) are the elements of forward and backward probability vectors defined as α(X_1^t, Y_1^t) = π Π_{i=1}^{t} P(X_i, Y_i) and β(X_{t+1}^T, Y_{t+1}^T) = Π_{j=t+1}^{T} P(X_j, Y_j) 1, where π is an initial probability vector and 1 is the column vector of ones.
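The forward and backward vectors and the transition posteriors γ_{t,ij} of claim 6 can be computed numerically. The sketch below is illustrative only and assumes a toy HMM: `Ms[t]` plays the role of the matrix P(X_t, Y_t) with entry (i, j) equal to p_{ij}(X_t, Y_t); all function names are this example's own, not the patent's.

```python
def forward(pi, Ms):
    # alpha(X_1^t, Y_1^t) = pi * prod_{i=1..t} P(X_i, Y_i)  (row vectors)
    alphas = [pi]
    for M in Ms:
        prev = alphas[-1]
        alphas.append([sum(prev[i] * M[i][j] for i in range(len(prev)))
                       for j in range(len(prev))])
    return alphas

def backward(Ms, n):
    # beta(X_{t+1}^T, Y_{t+1}^T) = prod_{j=t+1..T} P(X_j, Y_j) * 1  (columns)
    betas = [[1.0] * n]
    for M in reversed(Ms):
        nxt = betas[0]
        betas.insert(0, [sum(M[i][j] * nxt[j] for j in range(n))
                         for i in range(n)])
    return betas

def gammas(pi, Ms):
    # gamma_{t,ij} = alpha_i(t-1) * p_ij(X_t, Y_t) * beta_j(t+1..T)
    n = len(pi)
    al, be = forward(pi, Ms), backward(Ms, n)
    return [[[al[t][i] * Ms[t][i][j] * be[t + 1][j]
              for j in range(n)] for i in range(n)]
            for t in range(len(Ms))]
```

A useful sanity check on any implementation: Σ_{ij} γ_{t,ij} equals the total sequence likelihood π (Π_t P(X_t, Y_t)) 1 and is therefore the same for every t.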
- 7. The method of claim 6, wherein a source of an encoded sequence is a trellis code modulator (TCM), the TCM receiving a source information sequence I_1^T and outputting X_1^T as an encoded information sequence that is transmitted, the TCM defining X_t = g_t(S_t, I_t), where X_t and I_t are the elements of X_1^T and I_1^T for each time t, respectively, S_t is a state of the TCM at t, and g_t(·) is a function relating X_t to I_t and S_t, the method comprising: generating, for iteration p+1, a source information sequence estimate I_{1,p+1}^T that corresponds to a sequence of TCM state transitions that has a longest cumulative distance L(S_{t−1}) at t = 1, that is, L(S_0), wherein a distance for each of the TCM state transitions is defined by L(S_t) = L(S_{t+1}) + m(Î_{t+1}(S_{t+1})) for the TCM state transitions at each t for t = 1, ..., T, and the cumulative distance is the sum of m(Î_t(S_t)) for all t, m(Î_t(S_t)) being defined as m(Î_t(S_t)) = Σ_{i=1}^{n_c} Σ_{j=1}^{n_c} γ_{t,ij}(I_{1,p}^T) log p_{c,ij}(Y_t | X_t(S_t)) for each t = 1, 2, ..., T, where X_t(S_t) = g_t(S_t, Î_t(S_t)), n_c is a number of states in an HMM of the channel, and p_{c,ij}(Y_t | X_t(S_t)) are channel conditional probability density functions of Y_t when X_t(S_t) is transmitted by the TCM, I_{1,p+1}^T being set to the sequence of Î_t for all t.
- 8. The method of claim 7, wherein for each t = 1, 2, ..., T, the method comprises: generating m(Î_t(S_t)) for each possible state transition of the TCM; selecting state trajectories that correspond to a largest L(S_t) = L(S_{t+1}) + m(Î_{t+1}(S_{t+1})) for each state as survivor state trajectories; and selecting the Î_t(S_t)s that correspond to the selected state trajectories as I_{t,p+1}(S_t).
- 9. The method of claim 8, further comprising: a. assigning L(S_T) = 0 for all states at t = T; b. generating m(Î_t(S_t)) for all state transitions between states S_t and all possible states S_{t+1}; c. selecting state transitions between the states S_t and S_{t+1} that have a largest L(S_t) = L(S_{t+1}) + m(Î_{t+1}(S_{t+1})) and the Î_{t+1}(S_{t+1}) that correspond to the selected state transitions; d. updating the survivor state trajectories at states S_t by adding the selected state transitions to the corresponding survivor state trajectories at states S_{t+1}; e. decrementing t by 1; f. repeating b-e until t = 0; and g. selecting all the Î_t(S_t) that correspond to a survivor state trajectory corresponding to a largest L(S_t) at t = 0 as I_{1,p+1}^T.
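The backward survivor-path recursion of claim 9 runs a Viterbi-style selection from t = T down to t = 0. The sketch below is a simplified illustration under stated assumptions: the branch metrics are supplied as a precomputed table `m[t][s][s_next]`, standing in for m(Î_t(S_t)) on the transition from state s at time t to s_next at time t+1; the table and all names are this example's, not the patent's.

```python
def backward_survivors(m, n_states):
    T = len(m)
    # step a: L(S_T) = 0 for all states at t = T
    L = [0.0] * n_states
    paths = [[s] for s in range(n_states)]  # survivor trajectory per state
    for t in range(T - 1, -1, -1):          # steps e-f: t = T-1 down to 0
        new_L, new_paths = [], []
        for s in range(n_states):
            # steps b-c: pick the successor maximizing L(S_t) = L(S_{t+1}) + m
            best = max(range(n_states), key=lambda s1: L[s1] + m[t][s][s1])
            new_L.append(L[best] + m[t][s][best])
            # step d: extend the survivor trajectory kept at S_{t+1}
            new_paths.append([s] + paths[best])
        L, paths = new_L, new_paths
    # step g: trajectory with the largest cumulative distance at t = 0
    best0 = max(range(n_states), key=lambda s: L[s])
    return paths[best0], L[best0]
```

For a 2-state trellis with T = 2, `backward_survivors` returns the state trajectory from t = 0 to t = T together with its cumulative metric.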
- 10. The method of claim 6, wherein the channel is modeled as P_c(Y|X) = P_c B_c(Y|X), where P_c is a channel state transition probability matrix and B_c(Y|X) is a diagonal matrix of state output probabilities, the method comprising, for each t = 1, 2, ..., T: generating γ_{t,i}(I_{1,p}^T) = α_i(Y_1^t | I_{1,p}^t) β_i(Y_{t+1}^T | I_{t+1,p}^T); selecting an Î_t(S_t) that maximizes L(S_t) = L(S_{t+1}) + m(Î_{t+1}(S_{t+1})), where m(Î_t(S_t)) is defined as m(Î_t(S_t)) = Σ_{i=1}^{n_c} γ_{t,i}(I_{1,p}^T) b_i(Y_t | X_t(S_t)), n_c being a number of states in an HMM of the channel; selecting state transitions between states S_t and S_{t+1} that correspond to a largest L(S_t) = L(S_{t+1}) + m(Î_{t+1}(S_{t+1})); and forming survivor state trajectories by connecting the selected state transitions.
- 11. The method of claim 10, further comprising: selecting the Î_t(S_t) that correspond to a survivor state trajectory at t = 0 that has the largest L(S_t) as I_{1,p+1}^T for each pth iteration; comparing I_{1,p}^T and I_{1,p+1}^T; and outputting I_{1,p+1}^T as the second decode result if I_{1,p}^T and I_{1,p+1}^T are within the compare threshold.
- 12. A maximum a posteriori (MAP) decoder that decodes a transmitted information sequence using a received information sequence received through a channel, comprising: a memory; and a controller coupled to the memory, the controller iteratively generating decode results X_i, for i = 1, 2, ..., n, where n is an integer, where each X_i is generated by employing X_{i−1}, and X_1 is generated from the received information sequence and from an initial decode result X_0, and ceasing the iterative generating, and outputting last-generated decode results, when a difference between the last-generated decode results and next-to-last-generated decode results is within a compare threshold.
- 13. A maximum a posteriori (MAP) decoder that decodes a transmitted information sequence using a received information sequence received through a channel, comprising: a memory; and a controller coupled to the memory, the controller iteratively generating a sequence of one or more decode results starting with an initial decode result, and outputting one of adjacent decode results as a decode of the transmitted information sequence if the adjacent decode results are within a compare threshold, wherein the controller: a. generates the initial decode result as a first decode result; b. generates a second decode result based on the first decode result and a model of the channel; c. compares the first and second decode results; d. replaces the first decode result with the second decode result; and e. repeats b-d if the first and second decode results are not within the compare threshold.
- 14. The decoder of claim 13, wherein the controller searches for an information sequence that maximizes a value of an auxiliary function.
- 15. The decoder of claim 14, wherein the auxiliary function is based on expectation maximization (EM).
- 16. The decoder of claim 15, wherein the model of the channel is a Hidden Markov Model (HMM) having an initial state probability vector π and a probability density matrix (PDM) P(X, Y), where X ∈ X, Y ∈ Y, and the elements of P(X, Y), p_{ij}(X, Y) = Pr(j, X, Y | i), are conditional probability density functions of an information element X of the second information sequence that corresponds to a received element Y of the first information sequence after the HMM transfers from a state i to a state j, the auxiliary function being expressed as Q(X_1^T, X_{1,p}^T) = Σ_z Ψ(z, X_{1,p}^T, Y_1^T) log(Ψ(z, X_1^T, Y_1^T)), where p is an iteration number, Ψ(z, X_1^T, Y_1^T) = π_{i_0} Π_{t=1}^{T} p_{i_{t−1}i_t}(X_t, Y_t), T is a number of information elements in a particular information sequence, z is an HMM state sequence i_0^T, π_{i_0} is the probability of an initial state i_0, X_1^T is the second information sequence, X_{1,p}^T is a second information sequence estimate corresponding to a pth iteration, and Y_1^T is the first information sequence.
- 17. The decoder of claim 16, wherein the auxiliary function is expanded to be Q(X_1^T, X_{1,p}^T) = Σ_{t=1}^{T} Σ_{i=1}^{n} Σ_{j=1}^{n} γ_{t,ij}(X_{1,p}^T) log(p_{ij}(X_t, Y_t)) + C, where C does not depend on X_1^T and γ_{t,ij}(X_{1,p}^T) = α_i(X_{1,p}^{t−1}, Y_1^{t−1}) p_{ij}(X_{t,p}, Y_t) β_j(X_{t+1,p}^T, Y_{t+1}^T), where α_i(X_{1,p}^t, Y_1^t) and β_j(X_{t+1,p}^T, Y_{t+1}^T) are the elements of forward and backward probability vectors defined as α(X_1^t, Y_1^t) = π Π_{i=1}^{t} P(X_i, Y_i) and β(X_{t+1}^T, Y_{t+1}^T) = Π_{j=t+1}^{T} P(X_j, Y_j) 1, where π is an initial probability vector and 1 is the column vector of ones.
- 18. The decoder of claim 17, wherein a source of an encoded sequence is a trellis code modulator (TCM), the TCM receiving a source information sequence I_1^T and outputting X_1^T as an encoded information sequence that is transmitted, the TCM defining X_t = g_t(S_t, I_t), where X_t and I_t are the elements of X_1^T and I_1^T for each time t, respectively, S_t is a state of the TCM at t, and g_t(·) is a function relating X_t to I_t and S_t, the controller generating, for iteration p+1, an input information sequence estimate I_{1,p+1}^T that corresponds to a sequence of TCM state transitions that has a longest cumulative distance L(S_{t−1}) at t = 1, that is, L(S_0), wherein a distance for each of the TCM state transitions is defined by L(S_t) = L(S_{t+1}) + m(Î_{t+1}(S_{t+1})) for the TCM state transitions at each t for t = 1, ..., T, and the cumulative distance is the sum of m(Î_t(S_t)) for all t, m(Î_t(S_t)) being defined as m(Î_t(S_t)) = Σ_{i=1}^{n_c} Σ_{j=1}^{n_c} γ_{t,ij}(I_{1,p}^T) log p_{c,ij}(Y_t | X_t(S_t)) for each t = 1, 2, ..., T, where X_t(S_t) = g_t(S_t, Î_t(S_t)), n_c is a number of states in an HMM of the channel, and p_{c,ij}(Y_t | X_t(S_t)) are channel conditional probability density functions of Y_t when X_t(S_t) is transmitted by the TCM, I_{1,p+1}^T being set to the sequence of Î_t for all t.
- 19. The decoder of claim 18, wherein for each t = 1, 2, ..., T, the controller generates m(Î_t(S_t)) for each possible state transition of the TCM, selects state trajectories that correspond to a largest L(S_t) = L(S_{t+1}) + m(Î_{t+1}(S_{t+1})) for each state as survivor state trajectories, and selects the Î_{t+1}(S_{t+1})s that correspond to the selected state trajectories as I_{t+1,p+1}(S_{t+1}).
- 20. The decoder of claim 19, wherein the controller: a. assigns L(S_T) = 0 for all states at t = T; b. generates m(Î_t(S_t)) for all state transitions between states S_t and all possible states S_{t+1}; c. selects state transitions between the states S_t and S_{t+1} that have a largest L(S_t) = L(S_{t+1}) + m(Î_{t+1}(S_{t+1})) and the Î_{t+1}(S_{t+1}) that correspond to the selected state transitions; d. updates the survivor state trajectories at states S_t by adding the selected state transitions to the corresponding survivor state trajectories at states S_{t+1}; e. decrements t by 1; f. repeats b-e until t = 0; and g. selects all the Î_t(S_t) that correspond to a survivor state trajectory corresponding to a largest L(S_t) at t = 0 as I_{1,p+1}^T.
- 21. The decoder of claim 20, wherein the channel is modeled as P_c(Y|X) = P_c B_c(Y|X), where P_c is a channel state transition probability matrix and B_c(Y|X) is a diagonal matrix of state output probabilities, and for each t = 1, 2, ..., T, the controller: generates γ_{t,i}(I_{1,p}^T) = α_i(Y_1^t | I_{1,p}^t) β_i(Y_{t+1}^T | I_{t+1,p}^T); selects an Î_t(S_t) that maximizes L(S_t) = L(S_{t+1}) + m(Î_{t+1}(S_{t+1})), where m(Î_t(S_t)) is defined as m(Î_t(S_t)) = Σ_{i=1}^{n_c} γ_{t,i}(I_{1,p}^T) b_i(Y_t | X_t(S_t)), n_c being a number of states in an HMM of the channel; selects state transitions between states S_t and S_{t+1} that correspond to a largest L(S_t) = L(S_{t+1}) + m(Î_{t+1}(S_{t+1})); and forms survivor state trajectories by connecting the selected state transitions.
- 22. The decoder of claim 21, wherein the controller selects the Î_t(S_t) that correspond to a survivor state trajectory at t = 0 that has the largest L(S_t) as I_{1,p+1}^T for each pth iteration, compares I_{1,p}^T and I_{1,p+1}^T, and outputs I_{1,p+1}^T as the second decode result if I_{1,p}^T and I_{1,p+1}^T are within the compare threshold.
Parent Case Info
This nonprovisional application claims the benefit of U.S. provisional application No. 60/174,601 entitled “Map Decoding In Channels With Memory” filed on Jan. 5, 2000. The Applicant of the provisional application is William Turin. The above provisional application is hereby incorporated by reference including all references cited therein.
US Referenced Citations (6)
| Number | Name | Date | Kind |
| --- | --- | --- | --- |
| 5721746 | Hladik et al. | Feb 1998 | A |
| 6167552 | Gagnon et al. | Dec 2000 | A |
| 6182261 | Haller et al. | Jan 2001 | B1 |
| 6223319 | Ross et al. | Apr 2001 | B1 |
| 6343368 | Lerzer | Jan 2002 | B1 |
| 6377610 | Hagenauer et al. | Apr 2002 | B1 |
Non-Patent Literature Citations (2)
- Georghiades et al., "Sequence Estimation in the Presence of Random Parameters Via the EM Algorithm", IEEE Transactions on Communications, vol. 45, no. 3, Mar. 1997.
- Turin, Digital Transmission Systems: Performance Analysis and Modeling, pp. 126-143 and 227-228, 1998.
Provisional Applications (1)
| Number | Date | Country |
| --- | --- | --- |
| 60/174601 | Jan 2000 | US |