FAST, BLIND EQUALIZATION TECHNIQUES USING RELIABLE SYMBOLS

Information

  • Patent Application
  • Publication Number
    20020080896
  • Date Filed
    July 09, 2001
  • Date Published
    June 27, 2002
Abstract
A fast equalization technique is disclosed for systems using high-order constellations where symbols have been corrupted by data-correlated noise (ISI). The technique permits ISI estimation to begin immediately upon receipt of captured samples. Training symbols are not required for the operation of the equalization technique. ISI estimation is weighted according to a reliability factor of each captured sample.
Description

Brief Description of Drawings

[0012]
FIG. 1 illustrates an exemplary data processing system in which ISI may occur.


[0013]
FIG. 2 is a block diagram of an equalizer according to an embodiment of the present invention.





Detailed Description

[0014] Embodiments of the present invention provide fast equalization techniques for systems using high-order constellations where symbols have been corrupted by ISI. The technique allows ISI estimation to begin immediately upon receipt of captured samples. ISI estimation is weighted according to a reliability factor of each captured sample.


[0015]
FIG. 2 is a block diagram of an equalizer 200 according to an embodiment of the present invention. The equalizer 200 may include a symbol decoder 210, an ISI estimator 220 and a pair of buffers 230, 240. The symbol decoder 210 may estimate decoded symbols d^n from a sequence of captured samples xn based on a current estimate of ISI coefficients (labeled {a^i} in FIG. 2). Decoded symbols d^n may be stored in a first buffer 230; captured samples xn may be stored in a second buffer 240. The ISI estimator 220 may generate new estimates of the ISI coefficients a^i based on the symbols d^n and samples xn from the buffers 230, 240.


[0016] The equalizer 200 shown in FIG. 2 advantageously permits decoding to occur immediately upon receipt of captured samples xn, even before an accurate estimate of the ISI coefficients {a^i} is available. Thus, the decoded symbols d^n output by the symbol decoder 210 may exhibit large errors initially. Over time, however, as more decoded symbols d^n become available, the ISI estimator 220 may develop increasingly improved estimates of the ISI coefficients and improve the accuracy of the decoded symbols d^n estimated by the symbol decoder 210.
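The decode-then-re-estimate loop described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the 4-QAM alphabet, the ISI span K=2, the coefficient values, the noise level, and the unweighted least-squares update are all assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (assumptions, not from the patent): 4-QAM symbols and a
# one-sided ISI span of K=2 with arbitrarily chosen coefficients.
CONST = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
a_true = np.array([0.20 + 0.05j, -0.10 + 0.02j])
K, L = len(a_true), 400

d = rng.choice(CONST, size=L)                     # transmitted symbols
x = d.astype(complex)
for n in range(L):
    for i in range(1, K + 1):
        # symbols before the first sample are taken as 1, as suggested later
        x[n] += a_true[i - 1] * (d[n - i] if n - i >= 0 else 1.0)
x += 0.02 * (rng.standard_normal(L) + 1j * rng.standard_normal(L))

a_hat = np.zeros(K, dtype=complex)                # no ISI knowledge at start
for _ in range(5):
    # 1) decode with the current ISI estimate (subtract, then hard-decide)
    d_hat = np.empty(L, dtype=complex)
    for n in range(L):
        y = x[n] - sum(a_hat[i - 1] * (d_hat[n - i] if n - i >= 0 else 1.0)
                       for i in range(1, K + 1))
        d_hat[n] = CONST[np.argmin(np.abs(CONST - y))]
    # 2) re-estimate the ISI coefficients from the decisions (plain LS here)
    H = np.array([[d_hat[n - i] if n - i >= 0 else 1.0
                   for i in range(1, K + 1)] for n in range(L)])
    a_hat = np.linalg.lstsq(H, x - d_hat, rcond=None)[0]
```

Even though the loop starts with no ISI knowledge, early decisions feed the estimator, and the estimate a_hat approaches a_true over a few iterations.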


[0017] ISI ESTIMATION USING RELIABILITY WEIGHTING


[0018] Having estimated decoded symbols d^n from the captured samples xn, the ISI estimator 220 may revise ISI coefficient estimates. To simplify the nomenclature herein, consider a case where the buffers 240, 230 respectively store a predetermined number L of samples xn and decoded symbols d^n (n=1 to L).


[0019] In an embodiment, the ISI estimator 220 may employ a least squares estimation to update the ISI coefficients according to:


[0020] a^ = (H*WH)^-1 H*WΔ (Equation 3), where H* denotes the conjugate transpose of H


[0021] where: {a^} is a vector of estimated normalized ISI coefficients, Δ is a vector that contains elements Δn=xn-d^n, representing the difference between the received samples xn and the related decisions d^n, H is an LxK matrix containing surrounding symbol estimates, and W is an LxL diagonal weight matrix having weights wn,n that are derived from a reliability factor of an associated captured sample (wi,j≡0 for all i≠j). The weight may increase or remain constant with decreasing reliability factor.
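The weighted least-squares update defined by these quantities can be sketched in a few lines. The closed form used here, a^ = (H*WH)^-1 H*WΔ, is the standard weighted least-squares solution assumed to correspond to Equation 3; the function name and array shapes are illustrative.

```python
import numpy as np

def wls_isi_update(x, d_hat, H, w):
    """Weighted least-squares ISI update (assumed form of Equation 3):
    a^ = (H* W H)^-1 H* W Delta, where Delta_n = x_n - d^_n, W = diag(w),
    and H* is the conjugate transpose of the LxK surrounding-symbol matrix."""
    delta = x - d_hat                # differences between samples and decisions
    HW = H.conj().T * w              # rows of H* scaled by the diagonal of W
    return np.linalg.solve(HW @ H, HW @ delta)
```

With the binary weighting scheme described below, w would simply contain ones for reliable samples and zeros elsewhere.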


[0022] In an embodiment, the H matrix may be populated by symbol estimates obtained from the symbol decoder. It may be constructed as an LxK matrix in which each of the L rows contains symbol estimates surrounding the estimated symbol to which the row refers. For example, an ith row may relate to a symbol estimate d^i. In a simple embodiment, where ISI is known to occur from symbols on only one side of the decoded symbol d^i, the ith row may contain the symbol estimates Hi={d^i-K, d^i-(K-1), ..., d^i-1}. In the more general case, where ISI may occur from symbols on both sides of the decoded symbol d^i, the ith row (Hi) may contain the symbol estimates Hi={d^i-K2, ..., d^i-1, d^i+1, ..., d^i+K1}. K, the width of the H matrix, may be determined by the number of adjacent symbols that are expected to contribute to ISI corruption.


[0023] During ISI estimation, different samples x may be assigned relative weights based upon associated reliability factors R(xn) of the samples. In a first embodiment, a weight wn,n may be assigned according to a binary weighting scheme. If the reliability factor of a sample is equal to or less than a predetermined threshold, the weight wn,n may be assigned a first value, otherwise the weight may be a second value. For example:


[0024]


[0025] In this embodiment, a sample xn contributes to ISI estimation if and only if it is a reliable symbol.


[0026] Alternatively, all samples may contribute to ISI estimation, weighted according to their reliability factors. For example:


[0027]


[0028] In this embodiment, even those samples xn that do not meet the criterion for reliability may contribute to the ISI estimation. However, the contribution of samples with very high reliability factors will be much lower than that of samples with very low reliability factors. In other words, reliable symbols contribute more heavily to ISI estimation than non-reliable samples.


[0029] In another embodiment, all samples may be permitted to contribute to the ISI estimate but reliable symbols may be given a very large weight in comparison to non-reliable samples. For example:


[0030]


[0031] where f is a ceiling factor that prevents f/R(xn) from exceeding 1 for all non-reliable samples. In this embodiment, any sample that meets the criterion for reliability may be assigned a predetermined weighting value ("1" in the example of Equation 6). All reliable symbols would be equally weighted in the ISI estimation. Any sample that fails the criterion for reliability may be assigned a weight that is proportional to its calculated reliability.
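The three weighting schemes of paragraphs [0023]-[0031] can be sketched as simple functions. Since Equations 4-6 are not reproduced here, these follow the prose only; the threshold and ceiling-factor arguments are hypothetical parameter names. Recall that a low reliability factor R(xn) indicates a reliable sample.

```python
def weight_binary(R, threshold):
    # Binary scheme of [0023]: only reliable samples (reliability factor at
    # or below the threshold) contribute to ISI estimation.
    return 1.0 if R <= threshold else 0.0

def weight_proportional(R):
    # Scheme of [0026]-[0028]: every sample contributes, de-emphasized as
    # its reliability factor grows (low R = more reliable).
    return 1.0 / R

def weight_capped(R, threshold, f):
    # Scheme of [0029]-[0031]: reliable symbols get the predetermined
    # weight 1; all others get f / R, held below 1 by the ceiling factor f.
    return 1.0 if R <= threshold else f / R
```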


[0032] ALTERNATIVE EQUALIZER STRUCTURES BASED ON RELIABLE SYMBOLS


[0033] Returning to FIG. 2, an embodiment of the equalizer 200 optionally may include a reliable symbol detector 250 (shown in phantom) to enable the symbol decoder 210. The reliable symbol detector 250 may accept input samples xn and identify which of them, if any, are reliable symbols. In this embodiment, the reliable symbol detector 250 may generate a control signal En that enables the symbol decoder 210 upon detection of a first reliable symbol. Thus, the reliable symbol detector 250 inhibits operation of the equalizer 200 until a first reliable symbol is detected from the sequence X of captured samples.


[0034] Although the foregoing embodiments have described the equalizer 200 as employing a purely blind equalization process, the present invention does not preclude the use of training symbols. Training symbols may be transmitted to provide at the destination 120 a number of received samples that can be used as alternatives to reliable symbols. Following receipt of the training symbol samples, the equalizer 200 may process other samples xn in a blind fashion as described in the foregoing embodiments. This embodiment represents an improvement over other equalizers for high-order constellations because, even though training symbols would be used in the present invention, fewer training symbols would be required than in known systems. Such a short training sequence may not be of sufficient length to allow complete equalization of the channel but may allow ISI adaptation to begin. In such an embodiment, if successive groups of training symbols are used, the period between groups may be long compared to the dynamics of the channel, and the present invention would continue to equalize the channel during the period between training samples.


[0035] THE SYMBOL DECODER


[0036] The Subtractive Equalizer


[0037] Several embodiments of symbol decoders 210, 610 may be employed for use in the equalizers of FIGS. 2 and 4. A first embodiment is shown in phantom in FIG. 4. The symbol decoder 610 may include a subtractive equalizer 680 and a hard decision unit 690. In one embodiment the subtractive equalizer 680 may generate a re-scattered sample yn from the captured sample xn according to:


[0038] yn = xn - Σi a^i·d^n-i, the sum taken over i = 1 to K


[0039] where coefficients a^i represent a current ISI estimate and d^n-i represent previously decoded symbols. Initially, for the first frame, the ISI estimate may be set arbitrarily, such as a^i=0 for all i. Also, the d^n-i that antedate the first captured sample may be set arbitrarily, such as d^n-i=1. The hard decision unit 690 may generate decoded symbols d^n from respective re-scattered samples yn. For example, the hard decision unit 690 may generate a decoded symbol d^n as the constellation point closest to the re-scattered sample yn.
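The subtractive equalizer and hard decision unit described above can be sketched as follows, under assumed conventions: a one-sided ISI span of length K, a small 4-QAM alphabet standing in for a high-order constellation, and antedated symbols defaulted to 1 as suggested.

```python
import numpy as np

CONST = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])  # illustrative 4-QAM

def subtractive_decode(x, a_hat, d_init=1.0):
    """Subtractive equalizer plus hard decision unit (sketch):
    y_n = x_n - sum_i a^_i * d^_{n-i}; the decision d^_n is the constellation
    point nearest y_n. Symbols before the first sample default to d_init."""
    K, N = len(a_hat), len(x)
    d_hat = np.empty(N, dtype=complex)
    y = np.empty(N, dtype=complex)
    for n in range(N):
        isi = sum(a_hat[i] * (d_hat[n - 1 - i] if n - 1 - i >= 0 else d_init)
                  for i in range(K))
        y[n] = x[n] - isi          # re-scattered sample
        d_hat[n] = CONST[np.argmin(np.abs(CONST - y[n]))]  # hard decision
    return d_hat, y
```

Because each decision feeds back into the ISI subtraction for later samples, this structure is the feedback filter of a decision feedback equalizer, as noted below.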


[0040] In an embodiment where the symbol decoder 610 includes a subtractive equalizer 680 and a hard decision unit 690, ISI estimation may be performed using the re-scattered samples yn rather than the estimated symbols d^n. ISI coefficients may be estimated according to the techniques disclosed in Equation 3 but, in this embodiment, the vector Δ may represent differences between the received samples xn and the re-scattered samples yn (Δn={xn-yn} ) and the matrix H may contain surrounding re-scattered samples. In this embodiment, re-scattered samples yn from the subtractive equalizer 680 may be input to the ISI estimator 620 instead of the estimated symbols d^n (shown in phantom in FIG. 4).


[0041] In this embodiment, the H matrix may be populated by re-scattered samples obtained from the subtractive equalizer. Each row of the matrix may contain re-scattered samples surrounding the sample to which the row refers. For example, an ith row may relate to a symbol estimate yi. In a simple embodiment, where ISI is known to occur from symbols on only one side of the rescattered sample yi, the ith row may contain the rescattered samples Hi={yi-K, yi-(K-1), ..., yi-1}. In the more general case, where ISI may occur from symbols on both sides of the rescattered sample yi, the ith row may contain the rescattered samples Hi={yi-K2, ..., yi-1, yi+1, ..., yi+K1}. K, the width of the H matrix, may be determined from the number of adjacent symbols that are expected to contribute to ISI corruption.


[0042] In an embodiment the subtractive equalizer 680 may be used for a feedback filter in a decision feedback equalizer (DFE).


[0043] Symbol Detection Using Maximum Likelihood


[0044] In other embodiments, a symbol decoder 210 (FIG. 2) may operate according to the well-known maximum likelihood estimation framework. The captured sample xn may be given by Equation 2 above:


[0045] xn = dn + Σi≠0 ai·dn-i + ωn (Equation 2)


[0046] The maximum likelihood estimate of the transmitted signals {dn} conditioned on the observations {xn} may be given by maximizing the likelihood of the observations. This is simply the conditional probability of the captured sample xn conditioned on knowing the past transmitted signals {dn} and the ISI coefficients {ai}:


[0047]


[0048] Finding the maximum likelihood estimate of the present transmitted signal dn depends upon knowledge of both the past transmitted signals and the ISI coefficients {ai}. The probability density function of xn given {dn} and {ai} is simply the probability density function of the noise ωn evaluated at:


[0049]


[0050] Then, the probability density function of Equation 19 can be expressed in terms of a series of further conditioned probability functions, which leads to:


[0051]


[0052] where


[0053]


denotes a nested summation of the function f(·), each summation running over the whole set of possible constellation points, and


[0055]


denotes the set of M^(K1+K2) possible sequences of values for the surrounding symbols. This technique averages over all possible past transmitted sequences. The technique also renders lack of knowledge of the ISI coefficients inconsequential, assuming, of course, that the probability distribution of the ISI coefficients is known instead. In what follows, the ISI distribution is taken to be uniform.


[0057] The compound probability rule states that Pr(A,B)=Pr(A|B)Pr(B), which after some straightforward manipulation provides the following for Equation 20:


[0058]


[0059] where Pr(a) is the probability density function (pdf) associated with the ISI coefficients, and the remaining factor is the pdf associated with the related set of surrounding symbols.


[0060] Assuming additive white Gaussian noise of zero mean and variance σ2, then the standard probability density formula for Gaussian noise may be applied:


[0061]
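The zero-mean Gaussian density invoked here can be written out directly; the real-valued form is assumed (a circular complex Gaussian would differ only in its normalizing constants).

```python
import math

def gaussian_pdf(z, sigma2):
    # Real zero-mean Gaussian density with variance sigma2, evaluated at z;
    # assumed to be the "standard probability density formula" referenced
    # in the paragraph above.
    return math.exp(-z * z / (2.0 * sigma2)) / math.sqrt(2.0 * math.pi * sigma2)
```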


[0062] Finally, for the re-scattered received signal:


[0063]


[0064] where the decision on the received symbol is carried out through:


[0065]


[0066] Equation 23, called the "average likelihood" estimation of a hypothesis symbol hk at time n, serves as a basis for decoding symbols. In essence, Equation 23 takes captured signal samples xn and removes the effects of ISI through re-scattering, accomplished through the second term of the exponential. At a destination, for each sample xn, Equation 23 may be performed for every point hkn in the governing constellation. A decoded symbol d^n may be estimated as the point hkn having the largest probability of occurrence.


[0067] The evaluation of Equation 23 is believed to provide near optimal symbol detection when the ISI coefficients and the transmitted data are unknown. However, it is very difficult to implement in a real-time computing device. Accordingly, other embodiments of the present invention are approximations of the evaluation of Equation 23 that are computationally less complex. These embodiments are discussed below.


[0068] Symbol Decoding Using Trellis Based Detection


[0069] In another embodiment of the symbol decoder 210, when decoding a sample xn, probability estimations generated from the surrounding symbol samples xn-1 to xn-N may be used. Thus, probabilities for all possible transmitted symbols may be stored for the surrounding symbols. Where the ISI coefficients are known to be real, these probabilities represent branches in a trellis (i.e., possible combinations of surrounding symbols). For complex ISI coefficients, the trellis may include M^(K1+K2) branches. The probability of an mth branch in the trellis may be represented as:


[0070]


[0071] More conveniently, the calculation may be evaluated for the logarithm of the probabilities (and later converted back to a probability form),


[0072]


[0073] Either of these results may be used with a trellis decoder to obtain the likelihood-based estimate for d^n according to Equation 23.


[0074] Symbol Decoding Using ISI Coefficient Statistics


[0075] Statistical distributions of the ISI coefficients may yield further computational simplifications according to an embodiment of the symbol decoder 210. Unless otherwise known, in this embodiment, the ISI coefficients may be considered to be uniform over their specified ranges. In this case, Equation 23 becomes:


[0076]


[0077] Since the constant is independent of hkn, it may be omitted from calculation.


[0078] Symbol Decoding Using Past Decisions


[0079] In the embodiments discussed previously, the averaging term represents the probability of the various possible symbol sequences that can result in the observed sample xn. The symbol decoder 210 embodiments discussed previously rely upon a maximum likelihood approach: when considering a sample at time n, each of the symbols d^n-i was generated from the maximum probabilities at the previous iterations. In an embodiment in which K1 is not equal to zero but its contributions may be neglected, rather than calculating anew for each sample xn, the most likely symbol sequence may be assumed to be the one that includes the previously estimated symbols; that is, past decisions may be taken to occur with probability one. Therefore, Equation 27 may be simplified further:


[0080]


[0081] Again, the constant is independent of hkn and may be omitted from calculation.


[0082] Eliminating ISI Ranges in Symbol Decoding


[0083] Another embodiment of the symbol decoder 210 simplifies the evaluation of Equation 23 by using the estimate of the ISI coefficients, a^i. In this embodiment, symbol estimation may occur according to maximum likelihood estimation of:


[0084]


[0085] Because of the minus sign in the argument of Equation 29, the estimation may become a minimum likelihood analysis:


[0086]


[0087] It can be observed that this is, in fact, the subtractive equalizer discussed above.
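The minimum-distance decision just described can be sketched as follows; the constellation, the one-sided ISI layout, and the argument ordering are illustrative assumptions.

```python
import numpy as np

CONST = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])  # illustrative 4-QAM

def min_distance_decision(x_n, a_hat, past_decisions):
    # Minimum-distance form of Equation 30 (assumed layout): choose the
    # hypothesis h_k minimizing |x_n - h_k - sum_i a^_i * d^_{n-i}|^2,
    # which is the subtractive equalizer in another guise.
    # past_decisions[i] holds d^_{n-1-i}.
    isi = np.dot(a_hat, past_decisions)
    return CONST[np.argmin(np.abs(x_n - CONST - isi) ** 2)]
```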


[0088] USING 'RELIABLE SYMBOLS' FOR ESTIMATION


[0089] According to an embodiment, identification of a reliable symbol may be made based upon re-scattered symbols yn rather than the captured samples xn. During operation of the equalizer 200, after an arbitrary number of iterations, the equalizer 200 may produce a set of ISI coefficient estimates, a^i, each with an associated error, a~i, such that


[0090] a^i = ai + a~i


[0091] The partially equalized signal may be written as:


[0092]


[0093] Substituting into Equation 2 yields:


[0094]


[0095] which, by examining Equation (28) and defining the error of the estimated symbol as d^i=di+d~i, Equation (30) becomes


[0096]


[0097] This is a generalization of Equation 2, where the ISI estimates are completely unknown, so that y^n=y'n and a~i=-ai .


[0098] From Equation 34, the residual ISI on the partially equalized symbol point, yn, becomes the inner product of the surrounding data symbols with the ISI error coefficients, a~i, and an additional inner product of the decision errors and the ISI coefficients. Since the ISI error coefficients are smaller than the ISI coefficients, surrounding symbols with higher energy will contribute less to the ISI than they would under the structure of Equation 2. Thus, the probability of identifying a sample as a reliable symbol increases, even though the energies of surrounding symbols can be large. As the quality of the estimate increases, the inner product will remain acceptably low even for surrounding symbols of relatively high energy.


[0099] Several embodiments of the present invention are specifically illustrated and described herein. However, it will be appreciated that modifications and variations of the present invention are covered by the above teachings and within the purview of the appended claims without departing from the spirit and intended scope of the invention.

Claims
  • 1. A reliable symbol identification method comprising:
  • 5. The method of claim 1, wherein the estimating comprises:
  • 6. The method of claim 1, wherein the estimating comprises generating estimated symbols according to a maximum likelihood analysis of conditional probabilities of a captured sample conditioned upon all possible sets of surrounding transmitted symbols and the ranges of all possible ISI coefficients, for all possible values of the captured sample.
  • 7. The method of claim 1, wherein the estimation comprises generating estimated symbols according to trellis decoding based upon all possible sets of surrounding transmitted symbols and the ranges of all possible ISI coefficients, for all possible values of the captured sample.
  • 8. The method of claim 1, wherein the estimating comprises generating estimated symbols according to a maximum likelihood analysis of conditional probabilities of a captured sample conditioned upon all possible sets of surrounding transmitted symbols and the ranges of all possible ISI coefficients, and a uniform distribution of ISI coefficients for all possible values of the captured sample.
  • 19. An equalization method, comprising:
  • 20. The equalization method of claim 19, wherein the weighting of a symbol-sample pair comprises:
  • 21. The equalization method of claim 19, wherein the weighting of a symbol-sample pair is proportional to the reliability factor of the candidate sample.
  • 22. The equalization method of claim 19, wherein the weighting of a candidate sample comprises:
  • 23. The equalization method of claim 19, wherein the weighting of a candidate sample comprises:
  • 24. The equalization method of claim 19, wherein the reliability factor of a candidate sample xn is determined from values of neighboring samples.
  • 38. The equalization method of claim 19, wherein the estimation comprises generating decoded symbols according to a computational approximation of:
  • 44. An equalizer, comprising:
  • 45. The equalizer of claim 44, wherein the symbol decoder comprises a subtractive equalizer coupled to a decision unit.
  • 46. The equalizer of claim 44, wherein the symbol decoder comprises a maximum likelihood estimator coupled to a decision unit.
  • 47. The equalizer of claim 46, wherein the maximum likelihood analysis is made having assigned a uniform probability distribution for ISI coefficients over their ranges.
  • 48. The equalizer of claim 46, wherein the maximum likelihood analysis is made having assigned previously decoded symbols to occur with probability equal to one.
  • 49. The equalizer of claim 44, wherein the symbol decoder comprises a trellis decoder coupled to a decision unit.
  • 50. The equalizer of claim 44, wherein the symbol decoder generates decoded symbols according to a computational approximation of:
  • 51. The equalizer of claim 44, further comprising a reliable symbol detector having an input coupled to the first input of the symbol decoder and an output that enables the symbol decoder.
Priority Claims (2)
Number Date Country Kind
WO 00/02634 Jul 2000 XP
16938.3 Jul 2000 GB
Cross Reference to Related Applications

[0001] This application is a continuation-in-part of the following applications: WIPO 00/02634, filed July 10, 2000, and UK application 16938.3, also filed July 10, 2000, the disclosures of which are incorporated herein by reference. Certain claims may benefit from the priority of these applications.