High-performance sequence estimation system and method of operation

Information

  • Patent Grant
  • Patent Number
    9,191,247
  • Date Filed
    Tuesday, December 9, 2014
  • Date Issued
    Tuesday, November 17, 2015
  • Field of Search
    • US
    • 375/316
    • 375/340
    • 375/341
    • 375/259
    • 375/260
    • 375/261
    • 375/262
    • 375/265
    • CPC
    • G10L15/14
    • G10L15/142
    • H04L1/0054
    • H04L1/0052
    • H04L1/006
    • H04L25/03203
    • H04L25/03235
    • H04L25/03216
    • H04L25/03331
    • H04L27/3416
    • H04L27/3422
    • G06K9/6297
    • H03M13/39
    • H03M13/4169
    • H03M13/4192
  • International Classifications
    • H04L27/06
    • H04L25/03
Abstract
An electronic receiver comprises sequence estimation circuitry operable to implement a sequence estimation algorithm. In the sequence estimation algorithm, each of a plurality of possible current states of the signal may have associated with it a respective Nc possible prior states and a respective M state extensions, where Nc and M are integers greater than 1. Each iteration of the sequence estimation algorithm may comprise extending each of the plurality of possible current states of the signal by its respective Nc possible prior states and its respective M state extensions to generate a respective Nc×M extended states for each of the plurality of possible current states. Each iteration of the sequence estimation algorithm may comprise, for each of the plurality of possible current states of the signal, selecting M of the respective Nc×M extended states to be state extensions for a next iteration of the sequence estimation algorithm.
Description
BACKGROUND

Limitations and disadvantages of conventional methods and systems for electronic communication will become apparent to one of skill in the art, through comparison of such approaches with some aspects of the present method and system set forth in the remainder of this disclosure with reference to the drawings.


BRIEF SUMMARY

Methods and systems are provided for communication system with high tolerance of phase noise and nonlinearity, substantially as illustrated by and/or described in connection with at least one of the figures, as set forth more completely in the claims.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 depicts a transmitter in accordance with an example implementation of this disclosure.



FIG. 2 depicts a receiver in accordance with an example implementation of this disclosure.



FIG. 3 depicts an example Viterbi implementation of the sequence estimation circuitry of FIG. 2.





DETAILED DESCRIPTION

As utilized herein the terms “circuits” and “circuitry” refer to physical electronic components (i.e. hardware) and any software and/or firmware (“code”) which may configure the hardware, be executed by the hardware, and/or otherwise be associated with the hardware. As used herein, for example, a particular processor and memory may comprise a first “circuit” when executing a first one or more lines of code and may comprise a second “circuit” when executing a second one or more lines of code. As utilized herein, “and/or” means any one or more of the items in the list joined by “and/or”. As an example, “x and/or y” means any element of the three-element set {(x), (y), (x, y)}. In other words, “x and/or y” means “one or both of x and y.” As another example, “x, y, and/or z” means any element of the seven-element set {(x), (y), (z), (x, y), (x, z), (y, z), (x, y, z)}. In other words, “x, y and/or z” means “one or more of x, y and z.” As utilized herein, the terms “e.g.,” and “for example” set off lists of one or more non-limiting examples, instances, or illustrations. As utilized herein, circuitry is “operable” to perform a function whenever the circuitry comprises the necessary hardware and code (if any is necessary) to perform the function, regardless of whether performance of the function is disabled, or not enabled, by some user-configurable setting.


A communication system in accordance with an example implementation of this disclosure may use a single-carrier air interface based on faster than Nyquist coded modulation. The signal processing in the system may be tailored for achieving high capacity by handling non-linearity and phase noise. The system may be particularly suitable for cases of high-order transmission constellations (e.g. 1024QAM) where both power amplifier non-linearity and phase noise are significant.


An M-algorithm based reduced state sequence estimation (RSSE) architecture may be used as a near maximum likelihood receiver for such a communication system. The downside of the M-algorithm architecture, however, is that it requires sorting multiple survivor hypotheses and therefore has a bottleneck (sorting) that limits the speedup achievable through parallelization. An alternative approach to RSSE is the use of the Viterbi algorithm, which is a true maximum likelihood solution. However, the number of states (order of complexity) of the full Viterbi algorithm is A^(Nh−1), where the signaling constellation size is denoted by A and the combination of pulse and channel duration (memory) is denoted by Nh. For a communication system in accordance with an example implementation of this disclosure, where A=64 and Nh=24, the resulting full Viterbi state count A^(Nh−1)=64^23 is huge. Such complexity may be reduced through use of a truncated Viterbi algorithm. However, in a communication system in accordance with an example implementation of this disclosure, truncating the Viterbi algorithm to a memory of Nt=6 symbols still results in a huge state count of A^(Nt−1)=64^5, which is not commercially feasible for some applications.
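
The state counts above can be checked with a few lines of arithmetic. A minimal sketch in Python (the variable names are illustrative, not taken from this disclosure):

    # State counts for the example configurations discussed above.
    A = 64   # constellation size
    Nh = 24  # pulse + channel memory, in symbols
    Nt = 6   # truncated Viterbi memory, in symbols

    full_states = A ** (Nh - 1)       # 64**23: far beyond any practical decoder
    truncated_states = A ** (Nt - 1)  # 64**5 = 1,073,741,824: still infeasible
    print(f"full Viterbi:    {full_states:.3e} states")
    print(f"truncated Nt=6:  {truncated_states:,} states")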


A set partitioning scheme (i.e. dividing the received state bits into Nc cosets and parallel transitions) may be used to further reduce the number of Viterbi algorithm states needed. In one example implementation, for each of the Nt−1 symbols corresponding to a Viterbi state, the symbol's in-phase and quadrature LSBs are used to define Nc=4 cosets and the symbol's higher bits (in integer mapping) are parallel transitions, thus reducing the state count to Nc^(Nt−1)=4^5, which is difficult but feasible for many applications.
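
For concreteness, the coset indexing can be sketched as follows, assuming integer in-phase/quadrature symbol indexing (the function below is a hypothetical illustration, not the exact mapping of this disclosure):

    # Illustrative set partitioning: the I and Q LSBs of a symbol's integer
    # indices select one of Nc = 4 cosets; the higher bits are parallel
    # transitions that the trellis does not protect.
    def coset_index(i_idx: int, q_idx: int) -> int:
        return (i_idx & 1) | ((q_idx & 1) << 1)   # coset in {0, 1, 2, 3}

    Nc, Nt = 4, 6
    reduced_state_count = Nc ** (Nt - 1)          # 4**5 = 1024 states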


The performance of the Viterbi algorithm approach is limited due to limited memory (e.g. Nt=6 in the example described above). To further improve performance, this approach may be augmented using multiple survivors per Viterbi state. This approach is referred to in this disclosure as the “Hybrid Viterbi M-Algorithm Approach” (or “M-Viterbi,” for short). In an example implementation, a communication system uses a Viterbi memory of Nt=3 (i.e. Nc^(Nt−1)=4^2 Viterbi states), but allows, at each Viterbi state, multiple (e.g. M=16) survivor paths as used in the M-Algorithm. These Viterbi state extensions are referred to in this disclosure as “tails”. Since the number of survivors required at each Viterbi state is lower than the number of survivors required for the pure M-Algorithm, the sort bottleneck is resolved (since the M-Algorithm has to sort all survivors whereas the M-Viterbi only has to sort survivors per state). Moreover, comparing at equal complexity, this approach achieves consistently better performance than the Viterbi algorithm, and significantly better performance than the M-Algorithm at low SNRs (where the M-Algorithm occasionally loses the correct path and therefore is prone to long error events).
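
The difference in sorting workload can be sketched as follows (a simplified illustration under assumed survivor records, where a lower metric is better):

    import heapq

    def prune_m_algorithm(survivors, M):
        # Pure M-Algorithm: one global selection over ALL survivors,
        # a serial bottleneck that resists parallelization.
        return heapq.nsmallest(M, survivors, key=lambda s: s["metric"])

    def prune_m_viterbi(survivors_by_state, M):
        # M-Viterbi: a small, independent selection per Viterbi state,
        # each of which can run in parallel with the others.
        return {state: heapq.nsmallest(M, cands, key=lambda s: s["metric"])
                for state, cands in survivors_by_state.items()}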


An example implementation of an M-Viterbi receiver in accordance with this disclosure may manage non-linearity and phase noise on top of inter-symbol interference (ISI) due to the channel and faster than Nyquist signaling. Faster than Nyquist signaling, with significant spectral compression (i.e. twice the signal BW), may be used to manage non-linearity. Since the distortion BW is higher than the signal BW, the M-Viterbi RSSE state may be updated at a multiple of the original BW.


In a receiver in accordance with an example implementation of this disclosure, the power amplifier output, rather than the transmitted symbols (accompanied by some post cursor ISI response), is reconstructed at the output of the FFE. Reconstructing the PA output rather than the transmitted symbols exposes the nonlinear distortion and enables compensating for it using a maximum likelihood based approach.


In an example receiver in accordance with an implementation of this disclosure, the M-Viterbi algorithm decodes a double convolution distorted response. That is, original symbols are first convolved with the response of the transmit pulse shaping filter, then non-linearly distorted, and finally convolved with the post cursor of the channel response. In such an implementation, the FFE is used to convert the channel response to a response having a short post cursor portion and having most of its energy at the initial tap. This may reduce the probability of parallel transition error, and result in there being relatively little energy at symbol times beyond the depth of the Viterbi memory (Nt).


In the “Viterbi” approach, the Viterbi RSSE maintains a metric per state, while in an example implementation using the “M-Viterbi” approach, a metric is maintained per (state, tail) pair. In both cases that metric is subsequently used to select the best sequence of states (i.e. the survivor, via traceback). The sequence of states by itself corresponds to the sequence of symbol cosets along the selected path. To update the state metric, the M-Viterbi RSSE uses the original state symbol history and carrier phase estimation. The parallel transitions (most significant bits (MSBs)) are determined per Viterbi state based on that particular state's symbol history and carrier phase estimation. The state history makes it possible to anticipate the nonlinear distortion (typically power amplifier non-linearity) while the carrier phase estimation makes it possible to anticipate the phase noise.



FIG. 1 depicts an example transmitter in accordance with an implementation of this disclosure.


Although the transmitter may use coded modulation, in order to achieve very low BER, it may use an additional outer FEC encoding circuit 102.


In the example transmitter 100, the outer FEC encoder 102 is followed by an interleaver circuit 104 and a QAM mapper circuit 106 that outputs symbols denoted as a[n]. The QAM mapper 106 operates at a faster baud rate than a conventional QAM system using the same bandwidth. In this regard, in an example implementation, if the transmitter 100 is allocated a bandwidth of W0 for transmitting its data, then the QAM mapper 106 uses a baud rate (BR)>W0. For comparison, a conventional Nyquist-rate QAM system using the same bandwidth of W0 and having excess bandwidth of β would use a baud rate of W0/(1+β). In an example implementation, BR may be double the conventional baud rate (i.e. BR=2·W0/(1+β)). The shaping filter circuit 108, characterized by an expression p[ ], is run at the baud rate BR and is used to limit the transmitter spectrum according to the applicable spectral mask (e.g., specified by a standards and/or regulatory body). In an example implementation the shaping filter 108 bandwidth is lower than the baud rate. In an example implementation, the shaping filter 108 is part of the modulation code and is designed to optimize coding gain. The output 109 of the shaping filter 108 is interpolated by interpolator circuit 110, converted from digital to analog by digital-to-analog converter (DAC) circuit 112, upconverted to carrier frequency by filter and upconverter circuit 114, and amplified by the PA 116, resulting in signal yPA[n], which is sent over a wireline or wireless channel.
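
The relationship between the allocated bandwidth and the baud rate can be illustrated numerically (the bandwidth and roll-off values below are assumptions for the example, not values from this disclosure):

    W0 = 100e6    # allocated bandwidth, Hz (assumed)
    beta = 0.25   # excess-bandwidth (roll-off) factor (assumed)

    conventional_br = W0 / (1 + beta)   # Nyquist-rate QAM: 80 MBd
    ftn_br = 2 * W0 / (1 + beta)        # faster than Nyquist: 160 MBd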



FIG. 2 depicts a receiver in accordance with an example implementation of this disclosure. In the receiver 200, the signal 201 (the result of signal yPA[n] passing through the channel) is received by one or more antennas or ports, amplified by an LNA 202, down-converted and filtered by filtering and downconversion circuit 204, sampled by analog-to-digital converter (ADC) circuit 206, and decimated by decimator circuit 208, resulting in signal 209. A carrier recovery loop circuit 230 is used both for driving the downconversion analog phase locked loop (PLL) (frequency conversion) of circuit 204 and for a digital fine frequency correction via mixer 210. The carrier recovery loop 230 is fed by a phase error derived by comparing, in circuit 228, the Viterbi/M-Viterbi algorithm delayed input signal y[n−d] from delay element 226 and the Viterbi/M-Viterbi estimated signal yest[n−d] from RSSE circuit 218 (the delay d compensates for the Viterbi/M-Viterbi processing delay and the short traceback used). In order to quickly track phase changes, the traceback used for driving the carrier recovery loop may be shorter than that used for symbol demodulation.


After being rotated by circuit 210 based on the fine frequency correction output by the carrier tracking loop 230, the signal 209 is filtered by RX filter circuit 212, processed by FFE circuit 214, and coarsely phase corrected in circuit 216, resulting in signal y[n] being input to the RSSE 218. The objective of the FFE is to recover the PA output samples yPA[n] (instead of, or in addition to, recovering the transmitted symbols a[n]). The FFE 214 may adapt to minimize pre-cursor ISI, while being allowed to produce post cursor ISI. The output of the FFE is fed to the Viterbi/M-Viterbi based RSSE circuit 218 that, at the same time, decodes the modulated signal and manages the post cursor ISI, hPC[n], generated by the channel and FFE.


Allowing the FFE 214 to output a signal with post-cursor ISI, hPC[n], where hPC[n] is the residual post cursor channel response not handled by the FFE 214, along with using the Viterbi/M-Viterbi based RSSE circuit 218 to handle the post-cursor ISI may increase complexity of the RSSE 218. Accordingly, in an example implementation where the channel is flat (within determined tolerances) the FFE circuit 214 may be constrained from generating significant post cursor ISI and/or the Viterbi/M-Viterbi-based search algorithm may be configured to expect a pulse that includes the post-cursor ISI. This combined filter is called the composite h[n]=hPC[n]*p[n] (where * denotes convolution).


The output soft-decisions (e.g., log likelihood ratios (LLRs)) of the Viterbi/M-Viterbi-based RSSE circuit 218 are fed to the de-interleaver circuit 222 and outer FEC decoder circuit 224 (for the initial iteration the LLR switch 232 is open, thus inputting zeroes to the RSSE circuit 218). The FEC decoder 224 may output the data bits at this stage. Alternatively, to further improve performance, the receiver may perform additional iterations between the RSSE circuit 218 and the outer FEC decoder 224. For these iterations the LLR switch 232 is closed, and the LLR values are converted to extrinsic LLRs by subtracting the respective decoder LLRs input to the RSSE circuit 218 from interleaver 220.


The Viterbi/M-Viterbi algorithms have two functions: (1) equalizing the channel by handling the post cursor ISI; and (2) decoding the received signal. These two tasks may be performed at the same time so that the equalizing function is provided with the decoded decisions. In an example implementation, the receiver 200 may comprise a decision feedback equalizer (DFE). In cases of a relatively flat/short channel, however, the receiver may disable the DFE (or not have a DFE at all) and may incorporate the DFE into the composite response h[ ], which is described below.


An MLSE Viterbi algorithm models the set of all possible transmissions using a Hidden Markov Model directed graph that is referred to as the “Trellis,” where the hidden states are the transmitted symbol indices a[n]∈{0 . . . A−1} and the visible information is the conditional expectancy of the received signal, denoted here as yest[n] (conditioned on previous and next trellis states). The trellis has a trellis root node and a trellis terminal node, and between them multiple columns of graph nodes. Each column corresponds to a transmission symbol time denoted by n. Graph nodes populating column n correspond to all possible hidden states of the transmission at that symbol time. The set of possible states per column n is called the state space and is denoted by the set s[n]. Directed graph edges, called branches, exist between two states corresponding to successive columns, between the trellis root node and the first column, and between the last column and the trellis terminal node. Every trellis path starting at the trellis root and ending at the trellis terminal node corresponds to a valid transmission, where the symbol indices a[n] along that path are mapped to actual symbols μ(a[n]) using the mapping function μ(a). The branches are indexed here using Ibr, where the root and terminal states of a branch Ibr are denoted sroot(Ibr) and sterm(Ibr), and the set of branches starting at a specific state s0 is denoted B(s0). Every branch Ibr is labeled by a deterministic value that corresponds to the conditional expectancy of the received signal, based deterministically on the sroot(Ibr) and sterm(Ibr) states. In the MLSE case, the state space corresponds to all A^(Nt−1) sequences of the Nt−1 last symbol indices that, in conjunction with the current symbol index a[n], provide a sufficiently good estimate of the received signal yest[n] (i.e. the conditional expectancy). For this to happen, Nt must be large enough that the tail energy of the channel (including the transmission pulse) beyond Nt is low relative to the total channel impulse response energy.


Reducing the state space can reduce complexity and memory requirements of a Viterbi decoder. This may be achieved both by reducing Nt significantly below the full channel duration Nh, and by using a coset representation of each symbol index a[n]∈{0 . . . A−1}. That is, given a set partitioning of the symbol constellation, the state space represents only the coset index Ics=a[n]%Nc instead of the full symbol index a[n]. In this case the coset index Ics refers to the least significant bits (LSBs) of the symbol index, which are protected by the trellis, while the parallel transitions corresponding to the MSBs are not protected by the trellis. To map a symbol coset index Ics and parallel transition index Ims to a symbol, the known mapping function μ(Ics,Ims), based on set partitioning, is used.


Viterbi State Space:


In an example implementation in which the RSSE circuit 218 uses the Viterbi algorithm, the RSSE circuit 218 maintains a set of states that corresponds to all possible symbol coset sequences of length Nt−1. In an example implementation, such as described above, the length of the sequences may be Nt−1=5. Every Viterbi state represents an infinite set of symbol vectors, such that symbol vectors are partitioned according to the LSBs (cosets) of their latest Nt−1 symbols. For example, at symbol n, the symbol index sequence {a[n−k]}k≥0 is represented by the following coset sequence

s[n]≡{a[n]%Nc,a[n−1]%Nc,a[n−2]%Nc,a[n−3]%Nc, . . . ,a[n−Nt+1]%Nc},   (1)

where % denotes modulo, and Nc is the number of cosets (which may also vary according to delay, i.e. Nc[k]). If the RSSE circuit 218 looks far enough into the past (k→∞), there are infinitely many symbol sequences represented by the same small (in our example, of size Nt−1=5) coset vector.


For each state 0≤m≤4^(Nt−1)−1 (representing a short coset vector), the Viterbi memory of the RSSE circuit 218 holds an accumulated state metric M; a history of Nh symbols a[n], a[n−1], . . . , a[n−Nh+1]; and a last phase estimation θ[n].
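
A minimal sketch of this per-state record (the field names are illustrative, not from this disclosure):

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class ViterbiState:
        metric: float       # accumulated state metric M
        history: List[int]  # symbol history a[n], a[n-1], ..., a[n-Nh+1]
        theta: float        # last phase estimation θ[n]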


Hybrid M-Viterbi State Space


In an example implementation in which the RSSE circuit 218 uses the M-Viterbi algorithm, up to Mtails different tails may be held for each Viterbi state. For each tail, the RSSE circuit 218 may keep and maintain: an accumulated state metric M; a history of Nh symbols a[n], a[n−1], . . . , a[n−Nh+1]; and a last phase estimation θ[n]. The RSSE circuit 218 may alternatively maintain a partial symbol history, e.g. a[n−D], a[n−D−1], . . . , a[n−Nh+1], where D>0, in which case a[n], . . . , a[n−D+1] may be determined from the input data when extending the survivor (e.g. using multi-dimensional slicing).


Unlike the Viterbi algorithm, the M-Viterbi algorithm may experience duplicate tails (as does the M-Algorithm) per Viterbi state. Thus, the M-Viterbi algorithm requires a mechanism for pruning these duplicate tails for the same Viterbi state. Duplicate tails correspond to identical symbol histories a[n], a[n−1], . . . , a[n−Nh+1]. When the RSSE circuit 218 detects duplicate tails, the tail(s) having the worse metric (i.e. higher numerical value) may be discarded. This process may run far more slowly than the symbol rate, which is beneficial for complexity reduction.
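
A minimal sketch of the pruning step, assuming each tail is represented as a (metric, history, theta) tuple:

    def prune_duplicate_tails(tails):
        # Tails with identical symbol histories are duplicates; keep only
        # the one with the best (numerically lowest) accumulated metric.
        best = {}
        for metric, history, theta in tails:
            key = tuple(history)
            if key not in best or metric < best[key][0]:
                best[key] = (metric, history, theta)
        return list(best.values())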


Thus, whereas the Viterbi algorithm maintains a single previous Viterbi state for each current state, the M-Viterbi algorithm holds M different possible previous states for each current state. The tails capture state history that would be too old to be captured by a Viterbi trellis (i.e., the M-Viterbi captures state history that is longer than the Viterbi memory). The tails of the M-Viterbi algorithm efficiently describe a small subset of the possible long survivors, whereas the Viterbi trellis describes all possible short survivors and the M-Algorithm holds only M paths that do not include all possible short paths. Since short paths only contain information about the most recent symbols (which is least reliable), the M-Viterbi reduces the probability of losing the correct path, as compared to the M-Algorithm.


Viterbi State Connectivity/Branches


When receiving at time n a new sample y[n], the RSSE 218 may update all the Viterbi states from the time corresponding to symbol n−1 to the time corresponding to symbol n. For each possible state s[n] at time n, the RSSE circuit 218 examines all possible prior states s[n−1] at time n−1 from which s[n] could have originated. In an example implementation, the set of prior states s[n−1] is:

s[n−1]≡{a[n−1]%Nc,a[n−2]%Nc,a[n−3]%Nc . . . a[n−Nt]%Nc}.  (2)


Note that a[n−1]%Nc, a[n−2]%Nc, . . . , a[n−Nt+1]%Nc are common to s[n] and s[n−1] and therefore don't require any algorithm decision. Basically, these common cosets narrow the set of possible prior states from which each particular s[n] may have originated. The different options for the oldest coset a[n−Nt]%Nc complete the s[n−1] definition. Since coset a[n−Nt]%Nc is not defined by s[n], there are Nc different possible prior states (possible values of s[n−1]). Each such directed pair of states s[n−1]→s[n], where s[n−1] is the prior state, is a branch.
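
This predecessor structure is simple to enumerate. A sketch per equation (2), with a state held as a tuple of Nt−1 cosets, newest first:

    def prior_states(state, Nc=4):
        # state = (a[n]%Nc, a[n-1]%Nc, ..., a[n-Nt+1]%Nc). A prior state
        # shares the cosets a[n-1]..a[n-Nt+1] and appends one of the Nc
        # possible values of the oldest coset a[n-Nt]%Nc.
        common = state[1:]
        return [common + (oldest,) for oldest in range(Nc)]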


M-Viterbi State Additional Connectivity


In the M-Viterbi approach the RSSE circuit 218 considers, per such prior state s[n−1], all the Mtails possible tails. Each such tail is a possible sequence of symbol indices that ended in that state s[n−1] (i.e. a survivor). Thus, per s[n] state, we get Mtails·Nc survivors that are candidates that may have preceded that s[n] state. We denote each survivor at time n−1 as the pair (s[n−1], m), where s[n−1] is the prior state and m=1, 2, . . . , Mtails is a tail associated with that prior state. An extended branch Iebr can then be defined as the directed pair (s[n−1], m)→s[n] that associates the new s[n] state with a possible prior survivor tail belonging to the s[n−1] state.


Parallel Transitions (MSBs)


Similarly, the MSBs of the newest symbol a[n] (i.e. floor(a[n]/Nc)) are not defined by s[n]. Therefore, for each state

s[n−1]≡{a[n−1]%Nc,a[n−2]%Nc,a[n−3]%Nc, . . . ,a[n−Nt]%Nc}  (3)

(and, in the M-Viterbi, for each of s[n−1]'s constituent tails m=1, 2, . . . , M), we have several options for the value of the MSBs (A/Nc options, to be exact). These different options for the values of the MSBs do not amount to different branches, since the same set of MSBs may correspond to any branch s[n−1]→s[n]. Instead, these different options are parallel transitions. In a conventional Viterbi decoder, the parallel transitions (i.e. the a[n] MSBs) are based on y[n] and protected only by the decoder having determined these cosets. In contrast, in an example implementation of this disclosure, the RSSE circuit 218 protects the MSB selection based on y[n], y[n−1], and their cosets. In another example implementation of this disclosure, the RSSE circuit 218 first decodes the cosets, and then runs the Viterbi algorithm or the Hybrid M-Viterbi again to decode the MSBs. In this second run, the cosets (LSBs) are fixed (to their decoded result from the first run), thus allowing the RSSE circuit 218 to handle the MSBs.


(Coset) Viterbi Update


Every symbol time n, an example implementation of the RSSE circuit 218 using the Viterbi algorithm updates the metric for each state s[n] based on the incoming branches (Ibr) and the new received sample y[n]. The metric of each branch is minimized over the possible parallel transitions (Ims), and is then used to compute the following metric for state snew:










M(snew)=minIbr∈B(snew){minIms∥y[n]−ζ(snew,sroot(Ibr),Ims)∥²+M(sold(Ibr))}  (4)








where B(snew) is the set of Nc possible incoming branches sroot(Ibr)→snew to the state snew; Ibr is a branch index; sroot(Ibr) is the root state of the branch Ibr, which includes all the recent state history {a[n−k]}k=D..Nh that is relevant for computing the metric (“filter memory” and phase), with D>0 where it is desired to maintain a partial symbol history as explained above; Ims is a parallel transition index, of which there are A/Nc possibilities; and ζ(snew,sroot(Ibr),Ims) is a predictor for y[n] based on snew, sroot(Ibr), and Ims.


Thus the state metric for state s[n] is taken as the minimum over the set of incoming branches Ibr∈B(s[n]); at the same time, the RSSE circuit 218 stores the selected branch (the one providing the minimal metric in the formula above). The index of the selected branch is stored in the traceback memory of the RSSE circuit 218, which indicates for each state s[n] the selected root state s[n−1] (i.e. the selected branch) and also the transmitted symbol a[n] associated with the transition to the terminal state s[n]. Note that the state index itself implies the coset a[n]%Nc (i.e. the LSBs). Thus, the RSSE circuit 218 may only incrementally store the MSBs per state. Also note that the traceback memory need not hold traceback data relating to very old information that exceeds the traceback depth discussed below. Thus, the traceback memory may be implemented as a cyclic buffer of depth at least as big as the traceback depth.
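
A minimal sketch of this update, per equation (4); zeta is an assumed callable returning the predictor ζ(snew, sroot, Ims) for y[n]:

    def update_state(y_n, s_new, incoming_roots, metrics, zeta, n_ms):
        # Minimize over the Nc incoming branches and, within each branch,
        # over the A/Nc parallel transitions; keep traceback information.
        best = None
        for s_root in incoming_roots:      # the Nc possible prior states
            for i_ms in range(n_ms):       # parallel transitions
                m = abs(y_n - zeta(s_new, s_root, i_ms))**2 + metrics[s_root]
                if best is None or m < best[0]:
                    best = (m, s_root, i_ms)
        return best                        # (new metric, root, MSB index)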


M-Viterbi Update


Every symbol time n, an example implementation of the RSSE circuit 218 using the M-Viterbi algorithm computes a set of Mtails tails for each state s[n] based on the incoming Nc·Mtails extended branches (Iebr), and based on the received sample y[n]. For each new state s[n], there are Nc prior states {s[n−1]} and for each such prior state there are Mtails possible tails that correspond to different symbol histories. Thus, in total there are Nc·Mtails candidate (state, tail) pairs that may have preceded that new state. From this set the RSSE circuit 218 using the M-Viterbi algorithm selects a subset of candidates consisting of the best (e.g., having the smallest metrics) Mtails candidates.


For each extended branch Iebr the conditional expectation of y[n] is based on the recent symbol history from the root state sroot(Iebr) and the slightly less recent symbol history corresponding to the tail hanging from sroot(Iebr), denoted tail(Iebr). To avoid excess notation, it is assumed in this disclosure that tail(Iebr) contains all the history, since for each tail there is only one root state. Thus the conditional expectation function is denoted ζtl(sterm(Iebr),tail(Iebr),Ims).


The metric of each extended branch (Iebr) is minimized over possible parallel transitions (Ims), and is used to compute the following state metric:

M(Iebr)=minIms∥y[n]−ζtl(snew(Iebr),tail(Iebr),Ims)∥²+M(sold(Iebr))  (5)

where Iebr is an extended branch index, and there are Nc·Mtails possible incoming extended branches for snew; tail(Iebr) includes the prior tail of the branch Iebr, which includes all the recent state history {a[n−k]}k=D..Nh that is relevant for computing the metric (“filter memory” and phase), with D>0 if it is desired to maintain a partial symbol history as explained above; Ims is a parallel transition index, and there are A/Nc possible parallel transitions; and ζtl(snew,τ,Ims) is the conditional expectation for y[n] based on state snew, tail τ, and Ims.


Using this formula, an example implementation of the RSSE circuit 218 using the M-Viterbi algorithm computes Nc·Mtails metrics M(Iebr), and chooses the Mtails tails yielding the smallest (i.e., best) aggregate state metrics as the appropriate survivors for the new s[n] state. At the same time, for each s[n] tail, the RSSE 218 may store the selected extended branch Iebr (providing the minimal metric for the respective s[n] tail). The index of the selected extended branch is stored in the traceback memory of the RSSE circuit 218, which indicates, for each pair (state s[n], tail), the selected incoming prior state s[n−1] and tail (i.e. the selected extended branch) and also the hypothesized transmitted symbol a[n] at that state s[n]. The state index itself implies the coset a[n]%Nc (i.e. the LSBs). Thus, the RSSE circuit 218 may only incrementally store the MSBs per state. Also, the traceback memory need not hold traceback data relating to very old information that exceeds the traceback depth discussed below. Thus, the traceback memory may be implemented as a cyclic buffer of depth at least as big as the traceback depth.
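
A minimal sketch of the per-state tail selection (the candidate record layout is an assumption for the example):

    import heapq

    def select_tails(candidates, m_tails):
        # candidates: all Nc*Mtails tuples of
        # (metric, prior_state, prior_tail_index, msb_index) for one new state.
        # Keep the Mtails best; this small per-state selection replaces the
        # single global sort of the pure M-Algorithm.
        return heapq.nsmallest(m_tails, candidates, key=lambda c: c[0])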


Viterbi Traceback


Having updated the state metric for every state in symbol time n, an example implementation of the RSSE circuit 218 implementing the Viterbi algorithm may apply traceback to decode/estimate the transmitted symbols. The traceback depth may indicate the delay of the symbol to be decoded with respect to the latest state from which the processing starts (i.e. s[n]). This may be, for example, at least 5-10 times the pulse memory, including channel induced ISI. The traceback may be performed every symbol or every several symbols to reduce complexity.


An example Viterbi traceback operation will now be described. Just after updating all metrics of the state set {s[n]}, the RSSE circuit 218 implementing the Viterbi algorithm finds the best state s[n] based on the aggregate state metrics. Then, using the traceback memory, the RSSE circuit 218 implementing the Viterbi algorithm finds the best prior state for s[n] (denoted s[n−1]). This process repeats until reaching the traceback depth (i.e. using the traceback, the RSSE circuit 218 implementing the Viterbi algorithm finds for s[n−k] the best prior state s[n−k−1], until k=Ntb_depth−1). Finally, the traceback process returns the transmission symbol or soft information (e.g., LLRs) attached to






s[n−Ntb_depth] in the traceback memory (i.e. the decoded data).
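
A minimal sketch of the traceback walk, assuming the traceback memory maps (time, state) to the stored (prior state, symbol decision) pair:

    def viterbi_traceback(tb_mem, metrics, n, tb_depth):
        s = min(metrics, key=metrics.get)   # best state at time n
        for k in range(tb_depth):           # walk back k = 0..tb_depth-1
            s, symbol = tb_mem[(n - k, s)]  # stored (prior state, a[n-k])
        return symbol                       # decision at the traceback depth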


M-Viterbi Traceback


The traceback in an example implementation of the RSSE circuit 218 implementing the M-Viterbi is similar to as described above when implementing the Viterbi algorithm. For the M-Viterbi algorithm, however, the tails need to be considered in addition to the states.


An example M-Viterbi traceback operation will now be described. Just after updating all metrics of the state set {s[n]}, the RSSE circuit 218 finds the best pair of state and tail (denoted (s[n], m[n])). Then, using the traceback memory, the RSSE circuit 218 finds the best prior pair of state and tail, denoted (s[n−1], m[n−1]). The process repeats until reaching the traceback depth (i.e. using the traceback, the RSSE circuit 218 finds, for (s[n−k], m[n−k]), the best prior pair (s[n−k−1], m[n−k−1]), until k=Ntb_depth−1). Finally, the traceback process returns the transmission symbol or soft information (e.g., LLRs) attached to






(s[n−Ntb_depth], m[n−Ntb_depth]) in the traceback memory (i.e. the decoded data).


Viterbi Metric Minimization Process


At the input to the Viterbi algorithm (output of mixer 216) y[n] can be modeled as

ŷ[n]=e[n]·hpc[ ]*fNL(Σk=0..Nh h[k]·a[n−k]),  (6)

where a[n−k] are the previously transmitted symbols; h[ ] is the transmit pulse response; hpc[ ] is the post-cursor ISI that the RSSE circuit 218 implementing the Viterbi algorithm attempts to cancel, where * stands for convolution; e[n] is the phase rotation due to phase noise; and fNL( ) is a non-linear function that models the power amplifier (PA) of the transmitter from which the signal was received (e.g., PA 116 when receiving from transmitter 100). Thus the appropriate branch metric for the Viterbi Algorithm is

Mbr(snew,sold,Ims)=|y[n]−ζ(snew,sold,Ims)|²=  (7)
=|y[n]−e[n]·hpc[ ]*fNL(Σk=0..Nh h[k]·a[n−k])|²=  (8)
=|y[n]−e^(j·sold.θ)·Σl=0..Npc hpc[l]·fNL(h[0]·μ(snew.Ics,Ims)+Σk=1..Nh h[k]·sold.a[n−l−k])|²  (9)

where snew is the target state for which the RSSE circuit 218 is computing the metric; sold is the designated prior state for snew; snew.Ics is the coset value that applies (in a fixed way) to the new state snew; sold.a[n−k] is the sequence of symbols stored in the prior state sold history for k≤Nh; h[ ] is the transmit pulse response; hpc[ ] is the post cursor ISI the RSSE circuit 218 implementing the Viterbi algorithm attempts to cancel, where * stands for convolution; Npc is the length of the post cursor ISI, hpc, that the RSSE circuit 218 implementing the Viterbi algorithm attempts to cancel, assuming that hpc[0]=1 is the FFE/DFE cursor; sold.θ is the latest phase hypothesis for the prior state sold; μ(Ics,Ims) is a function that, given a coset index and an MSB selection index, computes the symbol value; and Ims is the hypothesized MSB bits for a[n].


M-Viterbi Metric Minimization Process


Similarly, the appropriate branch metric for the Hybrid M-Viterbi is

Mbr(s,τ,Ims)=|y[n]−ζtl(s,τ,Ims)|²=  (10)
=|y[n]−e^(j·τ.θ)·Σl=0..Npc hpc[l]·fNL(h[0]·μ(s.Ics,Ims)+Σk=1..Nh h[k]·τ.a[n−l−k])|²  (11)

where τ is a tail corresponding to a prior state of s, τ.a[n−k] is the symbol history of that tail, and τ.θ is a phase estimate of that tail.


Thus, the M-Viterbi metric is similar to the Viterbi metric except for substituting the prior state data sold.a[n−k], sold.θ with the prior tail data τ.a[n−k], τ.θ. The same substitution can be used in the above expressions that are written in terms of the state sold instead of the tail τ.


The branch metric notation can be simplified, and complexity reduced, by denoting the previous PA output estimations as yPA[n]. These estimations may be held in memory as part of state sold or tail τ history to avoid any need to re-compute them.

Mbr(snew,sold,Ims)=|y[n]−e^(j·sold.θ)·(fNL(h[0]·μ(snew.Ics,Ims)+Σk=1..Nh h[k]·sold.a[n−k])+Σl=1..Npc hpc[l]·yPA[n−l])|²  (12)


Based on the branch metric of (12) above, the expectation can be written as shown in (13):

ζ(snew,sold,Ims)=e^(j·sold.θ)·(fNL(h[0]·μ(snew.Ics,Ims)+Σk=1..Nh h[k]·sold.a[n−k])+Σl=1..Npc hpc[l]·yPA[n−l])  (13)
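
A minimal sketch of the predictor of equation (13), with f_nl (the PA model) and mu (the set-partitioning mapper) as assumed callables, and the symbol and cached-PA-output histories ordered newest first:

    import cmath

    def zeta(theta, a_hist, y_pa_hist, h, h_pc, ics, ims, f_nl, mu):
        # a_hist[k-1] = a[n-k]; y_pa_hist[l-1] = yPA[n-l] (cached, not recomputed)
        lin = h[0] * mu(ics, ims) + sum(h[k] * a_hist[k - 1]
                                        for k in range(1, len(h)))
        post = sum(h_pc[l] * y_pa_hist[l - 1] for l in range(1, len(h_pc)))
        return cmath.exp(1j * theta) * (f_nl(lin) + post)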


Parallel Transitions:


As explained previously, to update the metric of state snew, the RSSE circuit 218 implementing the Viterbi algorithm may attempt to minimize every branch Ibr metric over all possible parallel transitions Ims:

M(snew,Ibr)=minIms∥y[n]−ζ(snew,sold(Ibr),Ims)∥²+M(sold(Ibr))  (14)


Similarly, the RSSE circuit 218 implementing the M-Viterbi algorithm may attempt to minimize the extended branch metric over all possible parallel transitions Ims:
M(Iebr)=minIms∥y[n]−ζtl(snew(Iebr),tail(Iebr),Ims)∥²+M(sold(Iebr))  (15)


In both cases, the RSSE circuit 218 may attempt to minimize a metric over all possible different MSBs (parallel transitions) indexed by Ims. In one embodiment, the metrics are computed per state s[n] for every Ims and then the minimum is taken. However, this embodiment has significant computational complexity (proportional to A/Nc).


In another embodiment, in order to reduce complexity, some of the MSBs are determined directly (i.e. by slicing) without the need to compute a metric for each MSB combination. For example, every state s[n] implies a coset for the new symbol a[n], and thus determines the new symbol LSBs. In this manner, the RSSE circuit 218 determines a coset value for the LSBs. With the coset determined, the RSSE circuit 218 may slice the MSBs and compute the metric corresponding to the resulting (sliced) a[n].


For example, if the mapper 106 performs integer mapping of the 4 cosets and of the MSB indices, i.e.

μ(Ics,Ims)=2·Ics+4·Ims−√A+1  (16)

where A is the number of points for the square constellation. Then














Ims=round([fNL⁻¹(y[n]/e^(j·sold.θ)−Σl=1..Npc hpc[l]·yNL[n−l])−Σk=1..Nh h[k]·sold.a[n−k]]/(4·h[0])−(2·Ics−√A+1)/4)  (17)













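A minimal sketch of this slicing step per equations (16)-(17); f_nl_inv (the inverse of the PA model) is an assumed callable, and the histories are ordered newest first:

    import cmath

    def slice_ims(y_n, theta, a_hist, y_nl_hist, h, h_pc, ics, A, f_nl_inv):
        # Remove phase, post-cursor ISI, and pulse-memory contributions,
        # then invert the integer mapping of equation (16) to get Ims.
        target = y_n / cmath.exp(1j * theta)
        target -= sum(h_pc[l] * y_nl_hist[l - 1] for l in range(1, len(h_pc)))
        lin = f_nl_inv(target) - sum(h[k] * a_hist[k - 1]
                                     for k in range(1, len(h)))
        return round((lin / (4 * h[0])).real - (2 * ics - A**0.5 + 1) / 4)
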
In a similar way, in another example implementation, the RSSE circuit 218 may test 4 hypotheses for the LSB of the parallel transitions Ims for each branch (extended branch) and corresponding coset Ics, and, for each such branch (extended branch) and each such hypothesis, slice the rest of the MSBs. Finally, the RSSE circuit 218 may select, from the hypothesized parallel transitions Ims for a given coset Ics and branch (extended branch), the parallel transition having the lowest metric. The downside of slicing parallel transition MSBs is the need to invert the non-linearity, which may increase noise. However, the metric computation to be minimized (i.e. M(snew,Ibr) or M(Iebr)) does not involve fNL⁻¹ and therefore does not increase noise. Thus, as the number of hypotheses of Ims LSBs taken prior to minimizing over all hypotheses per coset per branch metric increases, the probability of error decreases, at the cost of a smaller complexity reduction.


In another example implementation, in order to reduce the Viterbi memory by 1 (i.e. from Nt to Nt−1), the RSSE circuit 218 may use y[n] and y[n−1] to compute the parallel transitions for a[n−1]. In this case it may be desirable to account for y[n] being affected by both a[n−1] and a[n]. Thus the RSSE circuit 218 recovers a[n] only tentatively, to improve slicing of the a[n−1] MSBs. In this case the RSSE circuit 218 may use several compound hypotheses (e.g. Nc·Nc) that include both the a[n−1] LSBs and the a[n] LSBs. For each such compound hypothesis, the RSSE circuit 218 may slice the MSBs of both a[n−1] and a[n] in order to get a robust estimation of a[n−1]. For each compound hypothesis, the RSSE circuit 218 may compute a metric, and finally select, per a[n−1] coset, the a[n−1] MSBs having the best (i.e. lowest) metric.


It should be noted that, in order to reduce complexity for a relatively flat channel, the RSSE circuit 218 may convolve the post cursor response with the composite filter h[ ] and use the trivial post cursor response hpc[ ]=[1,0,0, . . . ,0]. In such an implementation, the channel response in the above equations may be rewritten as h[n]=Σl=1..Npc hpc[l]·p[n−l].


In FIGS. 1 and 2, busses/data lines labeled with an ‘X’ operate at the baud rate.



FIG. 3 depicts an example Viterbi implementation of the sequence estimation circuitry of FIG. 2. In FIG. 3, the circuitry 302 convolves the expectancy ζ(snew,sold,Ims) with the response of shaping filter 108 (FIG. 1) to output signal 303. The circuitry 304 distorts the signal 303 based on a model of the nonlinear distortion present in the signal y[n]. The result of the distortion is signal 305. The circuitry 306 convolves the signal 305 with the post-cursor portion of the channel response to generate signal 307, which is an estimation of the signal y[n] given the expectancy ζ(snew,sold,Ims).
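
The FIG. 3 chain is a straightforward cascade of the three circuits. A minimal sketch, with numpy and an assumed distortion model f_nl:

    import numpy as np

    def estimate_y(expectancy, p, f_nl, h_pc):
        s303 = np.convolve(expectancy, p)   # circuitry 302: shaping filter
        s305 = f_nl(s303)                   # circuitry 304: nonlinear model
        return np.convolve(s305, h_pc)      # circuitry 306: post-cursor ISI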


In accordance with an example implementation of this disclosure, an electronic receiver (e.g., 200) comprises front-end circuitry (e.g., 202, 204, 206, 210, 212, 214, and/or 216) and sequence estimation circuitry (e.g., 218). The front-end circuitry is operable to receive a signal over a communication channel, where the received signal is a result of a sequence of symbols being transmitted by a transmitter (e.g., 100). The sequence estimation circuitry is operable to implement a sequence estimation algorithm. In the sequence estimation algorithm, each of a plurality of possible current states of the signal may have associated with it a respective Nc possible prior states and a respective M state extensions, where Nc and M are integers greater than 1. Each iteration of the sequence estimation algorithm may comprise extending each of the plurality of possible current states of the signal by its respective Nc possible prior states and its respective M state extensions to generate a respective Nc×M extended states for each of the plurality of possible current states. Each iteration of the sequence estimation algorithm may comprise, for each of the plurality of possible current states of the signal, selecting M of the respective Nc×M extended states to be state extensions for a next iteration of the sequence estimation algorithm. The quantity of states in the plurality of possible states may be less than the full Viterbi state count. Each of the plurality of possible states may correspond to a sequence of cosets of the QAM symbol constellation used to generate the symbol sequence. The cosets may correspond to one or more least significant bits of a symbol. The sequence estimation circuitry may be operable to, after determination of the least significant bits based on a plurality of metrics, determine most significant bits of the symbol using slicing. The sequence estimation circuitry may be operable to, after determination of the least significant bits based on the plurality of metrics, determine most significant bits of the symbol using a second iteration of the sequence estimation algorithm in which the determined least significant bits are held fixed. The sequence estimation circuitry may be operable to determine a first one or more most significant bits of the symbol using slicing and a second one or more most significant bits of the symbol using a second iteration of the sequence estimation algorithm in which previously determined least significant bits are held fixed. The state extensions may correspond to previous state information that is older than previous state information represented by the plurality of the possible states. The sequence estimation circuitry may be operable to sort the plurality of extended states for each of the plurality of possible states, where the sort is based on the plurality of metrics.


In accordance with an example implementation of this disclosure, an electronic receiver (e.g., 200) comprises front-end circuitry (e.g., 202, 204, 206, 210, 212, 214, and/or 216) and sequence estimation circuitry (e.g., 218). The front-end circuitry is operable to receive a signal over a communication channel, where the received signal is a result of a sequence of symbols being transmitted by a transmitter (e.g., 100). The sequence estimation circuitry is operable to implement a sequence estimation algorithm. The sequence estimation algorithm may comprise, at symbol time n−1 (an arbitrary symbol time): extending a particular possible state of the signal by the Nc possible prior states for the particular possible state, resulting in Nc extended states; extending each of the Nc extended states by M first state extension tails, resulting in Nc×M first extended states with tails; and selecting M of the Nc×M first extended states with tails as second state extension tails for the particular possible state. The sequence estimation algorithm may comprise, at symbol time n (the symbol time following symbol time n−1): generating second extended states with tails using the M second state extension tails.


The present method and/or system may be realized in hardware, software, or a combination of hardware and software. The present methods and/or systems may be realized in a centralized fashion in at least one computing system, or in a distributed fashion where different elements are spread across several interconnected computing systems. Any kind of computing system or other apparatus adapted for carrying out the methods described herein is suited. A typical combination of hardware and software may be a general-purpose computing system with a program or other code that, when being loaded and executed, controls the computing system such that it carries out the methods described herein. Another typical implementation may comprise an application specific integrated circuit or chip. Some implementations may comprise a non-transitory machine-readable (e.g., computer readable) medium (e.g., FLASH drive, optical disk, magnetic storage disk, or the like) having stored thereon one or more lines of code executable by a machine, thereby causing the machine to perform processes as described herein.


While the present method and/or system has been described with reference to certain implementations, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the present method and/or system. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the present disclosure without departing from its scope. Therefore, it is intended that the present method and/or system not be limited to the particular implementations disclosed, but that the present method and/or system will include all implementations falling within the scope of the appended claims.

Claims
  • 1. A system comprising: an electronic receiver comprising: front-end circuitry operable to receive a signal over a communication channel, wherein said received signal is a result of a sequence of symbols being transmitted by a transmitter; and sequence estimation circuitry operable to implement a sequence estimation algorithm in which: each of a plurality of possible current states of said received signal has associated with it a respective Nc possible prior states and a respective M state extensions, where Nc and M are integers greater than 1; for each iteration of said sequence estimation algorithm: each of said plurality of possible current states of said received signal is extended by its respective Nc possible prior states and its respective M state extensions to generate a respective Nc×M extended states for each of said plurality of possible current states; and for each of said plurality of possible current states of said received signal, M of said respective Nc×M extended states are selected to be state extensions for a next iteration of said sequence estimation algorithm.
  • 2. The system of claim 1, wherein how many states are in said plurality of possible current states is less than a full Viterbi state count.
  • 3. The system of claim 1, wherein each of said plurality of possible current states corresponds to a sequence of cosets of a symbol constellation used to generate said sequence of symbols.
  • 4. The system of claim 3, wherein a coset of said sequence of cosets corresponds to one or more least significant bits of a symbol.
  • 5. The system of claim 4, wherein said sequence estimation circuitry is operable to, after determination of said least significant bits based on a plurality of metrics, determine most significant bits of said symbol using slicing.
  • 6. The system of claim 4, wherein said sequence estimation circuitry is operable to, after determination of said least significant bits based on a plurality of metrics, determine most significant bits of said symbol using a second iteration of said sequence estimation algorithm in which said determined least significant bits are held fixed.
  • 7. The system of claim 4, wherein said sequence estimation circuitry is operable to determine a first one or more most significant bits of said symbol using slicing and a second one or more most significant bits of said symbol using a second iteration of said sequence estimation algorithm in which previously determined least significant bits are held fixed.
  • 8. The system of claim 1, wherein said state extensions correspond to prior state information that is older than prior state information represented by said Nc possible prior states.
  • 9. The system of claim 1, wherein said sequence estimation circuitry is operable to sort said Nc×M extended states for each of said plurality of possible current states.
  • 10. A method comprising: in an electronic receiver: receiving, via front-end circuitry of said electronic receiver, a signal over a communication channel, wherein said received signal is a result of a sequence of symbols being transmitted by a transmitter; and demodulating, in sequence estimation circuitry of said electronic receiver, said received signal using a sequence estimation algorithm in which: each of a plurality of possible current states of said received signal has associated with it a respective Nc possible prior states and a respective M state extensions, where Nc and M are integers greater than 1; and each iteration comprises: extending each of said plurality of possible current states of said received signal by its respective Nc possible prior states and its respective M state extensions to generate a respective Nc×M extended states for each of said plurality of possible current states; and for each of said plurality of possible current states of said received signal, selecting M of said respective Nc×M extended states to be state extensions for a next iteration of said sequence estimation algorithm.
  • 11. The method of claim 10, wherein how many states are in said plurality of possible current states is less than a full Viterbi state count.
  • 12. The method of claim 10, wherein each of said plurality of possible current states corresponds to a sequence of cosets of a symbol constellation used to generate said sequence of symbols.
  • 13. The method of claim 12, wherein a coset of said sequence of cosets corresponds to one or more least significant bits of a symbol.
  • 14. The method of claim 13, comprising: determining, by said sequence estimation circuitry, said least significant bits based on a plurality of metrics; and after said determining said least significant bits, determining, by said sequence estimation circuitry, most significant bits of said symbol using slicing.
  • 15. The method of claim 13, comprising: determining, by said sequence estimation circuitry, said least significant bits based on a plurality of metrics; and after said determining said least significant bits, determining, by said sequence estimation circuitry, most significant bits of said symbol using a second iteration of said sequence estimation algorithm in which said determined least significant bits are held fixed.
  • 16. The method of claim 13, comprising determining, by said sequence estimation circuitry, a first one or more most significant bits of said symbol using slicing and a second one or more most significant bits of said symbol using a second iteration of said sequence estimation algorithm in which previously determined least significant bits are held fixed.
  • 17. The method of claim 10, wherein said state extensions correspond to prior state information that is older than prior state information represented by said Nc possible prior states.
  • 18. The method of claim 10, comprising sorting, by said sequence estimation circuitry, said Nc×M extended states for each of said plurality of possible current states.
  • 19. A system comprising: an electronic receiver comprising: front-end circuitry operable to receive a signal over a communication channel, wherein said received signal is a result of a sequence of symbols being transmitted by a transmitter; and sequence estimation circuitry operable to implement a sequence estimation algorithm in which: at symbol time n−1, a particular possible state of said received signal is extended by Nc possible prior states for said particular possible state, resulting in Nc extended states; at symbol time n−1, each of said Nc extended states is extended by M first state extension tails, resulting in Nc×M first extended states with tails; at symbol time n−1, M of said Nc×M extended states with tails are selected as second state extension tails for said particular possible state; and at symbol time n, said M second state extension tails are used for generating second extended states with tails.
20130121257 He et al. May 2013 A1
20130343480 Eliaz Dec 2013 A1
20130343487 Eliaz Dec 2013 A1
20140036986 Eliaz Feb 2014 A1
20140056387 Asahina Feb 2014 A1
20140098841 Song et al. Apr 2014 A2
20140098907 Eliaz Apr 2014 A1
20140098915 Eliaz Apr 2014 A1
20140105267 Eliaz Apr 2014 A1
20140105268 Eliaz Apr 2014 A1
20140105332 Eliaz Apr 2014 A1
20140105334 Eliaz Apr 2014 A1
20140108892 Eliaz Apr 2014 A1
20140133540 Eliaz May 2014 A1
20140140388 Eliaz May 2014 A1
20140140446 Eliaz May 2014 A1
20140146911 Eliaz May 2014 A1
20140161158 Eliaz Jun 2014 A1
20140161170 Eliaz Jun 2014 A1
20140198255 Kegasawa Jul 2014 A1
20140241477 Eliaz Aug 2014 A1
20140247904 Eliaz Sep 2014 A1
20140266459 Eliaz Sep 2014 A1
20140269861 Eliaz Sep 2014 A1
20140301507 Eliaz Oct 2014 A1
20140321525 Eliaz Oct 2014 A1
20140328428 Eliaz Nov 2014 A1
20140376358 Eder et al. Dec 2014 A1
20150010108 Eliaz Jan 2015 A1
20150043684 Eliaz Feb 2015 A1
20150049843 Reuven et al. Feb 2015 A1
20150055722 Eliaz Feb 2015 A1
20150063499 Eliaz Mar 2015 A1
20150070089 Eliaz Mar 2015 A1
20150071389 Eliaz Mar 2015 A1
20150078491 Eliaz Mar 2015 A1
Foreign Referenced Citations (3)
Number Date Country
2007000495 Jan 2007 WO
2012092647 Jul 2012 WO
2013030815 Mar 2013 WO
Non-Patent Literature Citations (46)
“Reduced-State Sequence Estimation with Set Partitioning and Decision Feedback” by Vedat Eyuboglu, published in 1988.
Equalization: The Correction and Analysis of Degraded Signals, White Paper, Agilent Technologies, Ransom Stephens V1.0, Aug. 15, 2005 (12 pages).
Modulation and Coding for Linear Gaussian Channels, G. David Forney, Jr., and Gottfried Ungerboeck, IEEE Transactions on Information Theory, vol. 44, No. 6, Oct. 1998, pp. 2384-2415 (32 pages).
Intuitive Guide to Principles of Communications, www.complextoreal.com, Inter Symbol Interference (ISI) and Root-raised Cosine (RRC) filtering, (2002), pp. 1-23 (23 pages).
Chan, N., “Partial Response Signaling with a Maximum Likelihood Sequence Estimation Receiver” (1980). Open Access Dissertations and Theses. Paper 2855, (123 pages).
The Viterbi Algorithm, Ryan, M.S. and Nudd, G.R., Department of Computer Science, Univ. of Warwick, Coventry, (1993) (17 pages).
R. A. Gibby and J. W. Smith, “Some extensions of Nyquist's telegraph transmission theory,” Bell Syst. Tech. J., vol. 44, pp. 1487-1510, Sep. 1965.
J. E. Mazo and H. J. Landau, “On the minimum distance problem for faster-than-Nyquist signaling,” IEEE Trans. Inform. Theory, vol. 34, pp. 1420-1427, Nov. 1988.
D. Hajela, “On computing the minimum distance for faster than Nyquist signaling,” IEEE Trans. Inform. Theory, vol. 36, pp. 289-295, Mar. 1990.
G. Ungerboeck, “Adaptive maximum-likelihood receiver for carrier modulated data-transmission systems,” IEEE Trans. Commun., vol. 22, No. 5, pp. 624-636, May 1974.
G. D. Forney, Jr., “Maximum-likelihood sequence estimation of digital sequences in the presence of intersymbol interference,” IEEE Trans. Inform. Theory, vol. 18, No. 2, pp. 363-378, May 1972.
A. Duel-Hallen and C. Heegard, “Delayed decision-feedback sequence estimation,” IEEE Trans. Commun., vol. 37, pp. 428-436, May 1989.
M. V. Eyuboglu and S. U. Qureshi, “Reduced-state sequence estimation with set partitioning and decision feedback,” IEEE Trans. Commun., vol. 36, pp. 13-20, Jan. 1988.
W. H. Gerstacker, F. Obernosterer, R. Meyer, and J. B. Huber, “An efficient method for prefilter computation for reduced-state equalization,” Proc. of the 11th IEEE Int. Symp. Personal, Indoor and Mobile Radio Commun. PIMRC, vol. 1, pp. 604-609, London, UK, Sep. 18-21, 2000.
W. H. Gerstacker, F. Obernosterer, R. Meyer, and J. B. Huber, “On prefilter computation for reduced-state equalization,” IEEE Trans. Wireless Commun., vol. 1, No. 4, pp. 793-800, Oct. 2002.
Joachim Hagenauer and Peter Hoeher, “A Viterbi algorithm with soft-decision outputs and its applications,” in Proc. IEEE Global Telecommunications Conference 1989, Dallas, Texas, pp. 1680-1686, Nov. 1989.
S. Mita, M. Izumita, N. Doi, and Y. Eto, “Automatic equalizer for digital magnetic recording systems,” IEEE Trans. Magn., vol. 25, pp. 3672-3674, 1987.
E. Biglieri, E. Chiaberto, G. P. Maccone, and E. Viterbo, “Compensation of nonlinearities in high-density magnetic recording channels,” IEEE Trans. Magn., vol. 30, pp. 5079-5086, Nov. 1994.
W. E. Ryan and A. Gutierrez, “Performance of adaptive Volterra equalizers on nonlinear magnetic recording channels,” IEEE Trans. Magn., vol. 31, pp. 3054-3056, Nov. 1995.
X. Che, “Nonlinearity measurements and write precompensation studies for a PRML recording channel,” IEEE Trans. Magn., vol. 31, pp. 3021-3026, Nov. 1995.
O. E. Agazzi and N. Seshadri, “On the use of tentative decisions to cancel intersymbol interference and nonlinear distortion (with application to magnetic recording channels),” IEEE Trans. Inform. Theory, vol. 43, pp. 394-408, Mar. 1997.
Miao, George J., Signal Processing for Digital Communications, 2006, Artech House, pp. 375-377.
Xiong, Fuqin. Digital Modulation Techniques, Artech House, 2006, Chapter 9, pp. 447-483.
Faulkner, Michael, “Low-Complex ICI Cancellation for Improving Doppler Performance in OFDM Systems”, Center for Telecommunication and Microelectronics, 1-4244-0063-5/06/$20.00 © 2006 IEEE (5 pgs).
Stefano Tomasin, et al. “Iterative Interference Cancellation and Channel Estimation for Mobile OFDM”, IEEE Transactions on Wireless Communications, vol. 4, No. 1, Jan. 2005, pp. 238-245.
Int'l Search Report and Written Opinion for PCT/IB2013/01866 dated Mar. 21, 2014.
Int'l Search Report and Written Opinion for PCT/IB2013/001923 dated Mar. 21, 2014.
Int'l Search Report and Written Opinion for PCT/IB2013/001878 dated Mar. 21, 2014.
Int'l Search Report and Written Opinion for PCT/IB2013/002383 dated Mar. 21, 2014.
Int'l Search Report and Written Opinion for PCT/IB2013/01860 dated Mar. 21, 2014.
Int'l Search Report and Written Opinion for PCT/IB2013/01970 dated Mar. 27, 2014.
Int'l Search Report and Written Opinion for PCT/IB2013/01930 dated May 15, 2014.
Int'l Search Report and Written Opinion for PCT/IB2013/02081 dated May 22, 2014.
Al-Dhahir, Naofal et al., “MMSE Decision-Feedback Equalizers: Finite-Length Results” IEEE Transactions on Information Theory, vol. 41, No. 4, Jul. 1995.
Cioffi, John M. et al., “MMSE Decision-Feedback Equalizers and Coding—Part I: Equalization Results” IEEE Transactions on Communications, vol. 43, No. 10, Oct. 1995.
Eyuboglu, M. Vedat et al., “Reduced-State Sequence Estimation with Set Partitioning and Decision Feedback” IEEE Transactions on Communications, vol. 36, No. 1, Jan. 1988.
Khaled M. Gharaibeh, Nonlinear Distortion in Wireless Systems, 2011, John Wiley & Sons, 2nd Edition, chapter 3, pp. 59-81.
Forney, G. David Jr., “Coset Codes—Part I: Introduction and Geometrical Classification” IEEE Transactions on Information Theory, vol. 34, No. 5, Sep. 1988.
Prlja, Adnan et al., “Receivers for Faster-than-Nyquist Signaling with and Without Turbo Equalization”, 2008.
Int'l Search Report and Written Opinion for PCT/IB2014/002449 dated Mar. 12, 2015.
Digital predistortion of power amplifiers for wireless applications (Doctoral dissertation, Georgia Institute of Technology). Retrieved from the internet <http://202.28.199.34/multim/3126235.pdf> Ding, L. Mar. 31, 2005.
Digital predistortion for power amplifiers using separable functions. Signal Processing, IEEE Transactions on, 58(8), 4121-4130. Retrieved from the internet <http://arxiv.org/ftp/arxiv/papers/1306/1306.0037.pdf> Jiang, H., & Wilford, P.A. Aug. 8, 2010.
Digital predistortion linearization methods for RF power amplifiers. Teknillinen korkeakoulu. Retrieved from the Internet <http://lib.tkk.fi/Diss/2008/isbn9789512295463/isbn9789512295463.pdf> Teikari I. Sep. 30, 2008.
Kayhan, F., et al., Joint Signal-Labeling Optimization for Pragmatic Capacity under Peak-Power Constraint, 978-1-4244-5637, 2010.
Kayhan, F., et al., Constellation Design for Transmission over Nonlinear Satellite Channels, Oct. 5, 2012.
Kayhan, F., et al., Signal and Labeling Optimization for Non-Linear and Phase Noise Channels, Department of Electronics and Telecommunications, Sep. 24, 2012.