Windowing technique for blind equalization

Information

  • Patent Grant
  • Patent Number
    6,075,816
  • Date Filed
    Wednesday, November 27, 1996
  • Date Issued
    Tuesday, June 13, 2000
Abstract
A blind convergence technique is restricted to using a subset of equalizer output samples. Illustratively, a receiver implements a windowed MMA approach. In this windowed MMA approach, a sample window overlays the two-dimensional plane representing the set of equalizer output samples. Only those equalizer output samples appearing within the sample window are used during filter adaptation.
Description

BACKGROUND OF THE INVENTION
The present invention relates to communications equipment, and, more particularly, to blind equalization in a receiver.
In blind equalization, the equalizer of a receiver is converged without the use of a training signal. As known in the art, there are two techniques for blind equalization: one is referred to herein as the "reduced constellation algorithm" (RCA) (e.g., see Y. Sato, "A Method of Self-Recovering Equalization for Multilevel Amplitude-Modulation Systems," IEEE Trans. Commun., pp. 679-682, June 1975; and U.S. Pat. No. 4,227,152, issued Oct. 7, 1980 to Godard); and the other technique is the so-called "constant modulus algorithm" (CMA) (e.g., see D. N. Godard, "Self-Recovering Equalization and Carrier Tracking in Two-Dimensional Data Communications Systems," IEEE Trans. Commun., vol. 28, no. 11, pp. 1867-1875, November 1980; and N. K. Jablon, "Joint Blind Equalization, Carrier Recovery, and Timing Recovery for High-Order QAM Signal Constellations", IEEE Trans. Signal Processing, vol. 40, no. 6, pp. 1383-1398, 1992.) Further, the co-pending, commonly assigned, U.S. Patent application of Werner et al., entitled "Blind Equalization," Ser. No. 08/646,404, filed on May 7, 1996, presents a new blind equalization technique--the multimodulus algorithm (MMA)--as an alternative to the above-mentioned RCA and CMA approaches.
Unfortunately, whether using the RCA, CMA, or MMA approaches, the ability to blindly converge the equalizer is also affected by the number of symbol levels represented in the signal point constellation. In other words, the difficulty of "opening the eye" (as this term is used in the art) increases when the number of symbol levels increases.
SUMMARY OF THE INVENTION
Any blind convergence technique is affected by the distribution of the output signals, or samples, of the equalizer. As such, an increase in the number of symbol levels increases the distribution of the equalizer output samples, which, in turn, makes it more difficult to blindly converge the equalizer. Therefore, and in accordance with the inventive concept, a blind convergence technique is restricted to using a subset of the equalizer output samples. This improves the ability to blindly converge the equalizer notwithstanding an increase in symbol levels.
In an embodiment of the invention, a receiver implements a windowed MMA approach. In this windowed MMA approach, a sample window overlays the two-dimensional plane representing the set of equalizer output samples. Only those equalizer output samples appearing within the sample window are used during filter adaptation. Two different variations of the windowed MMA approach are described.





BRIEF DESCRIPTION OF THE DRAWING
FIG. 1 is an illustrative block diagram of a portion of a communications system embodying the principles of the invention;
FIG. 2 is an illustrative block diagram of a phase-splitting equalizer;
FIG. 3 is an illustrative block diagram of a portion of an adaptive filter for use in an equalizer;
FIG. 4 is an illustrative block diagram of a cross-coupled equalizer;
FIG. 5 is an illustrative block diagram of a four-filter equalizer;
FIG. 6 is an illustrative signal point plot of an output signal of an equalizer before convergence;
FIG. 7 is an illustrative signal point plot of an output signal of an equalizer for a system using the MMA blind equalization method;
FIG. 8 is an illustrative signal point plot illustrating the reduced signal point constellation of the RCA blind equalization method;
FIG. 9 is an illustrative signal point plot illustrating the circular contour of the CMA blind equalization method;
FIG. 10 is an illustrative signal point plot illustrating the piecewise linear contours of the MMA blind equalization method;
FIGS. 11 and 12 are illustrative block diagrams of a portion of a receiver embodying the principles of the invention;
FIGS. 13, 14, and 15 are illustrative signal point plots illustrating the piecewise linear contours of the MMA blind equalization method for a nonsquare constellation;
FIGS. 16 and 17 are illustrative signal point plots of an output signal of an equalizer for a communications system using a two-step MMA blind equalization method;
FIG. 18 shows a table providing a general comparison between the RCA, CMA, and MMA blind equalization methods, without CHCF;
FIG. 19 shows a table of illustrative data values for use in the RCA, CMA, and MMA blind equalization methods;
FIG. 20 is an illustrative graph of an incorrect diagonal solution for a 64-CAP signal point constellation;
FIG. 21 shows an illustrative partitioning of a signal point constellation using a windowing approach;
FIG. 22 shows an illustrative partitioning of a signal point constellation using a half-constellation windowing approach;
FIG. 23 shows a table of illustrative data values for use in the half-constellation WMMA approach;
FIG. 24 shows a table of illustrative data values for comparing cost functions;
FIG. 25 shows an illustrative partitioning of a signal point constellation using an edge-point constellation windowing approach;
FIG. 26 is an illustrative signal point plot of an output signal of an equalizer using an LMS algorithm; and
FIGS. 27-28 are illustrative signal point plots of an output signal of an equalizer using a Half-Constellation WMMA algorithm and Edge-Point WMMA algorithm, respectively.





DETAILED DESCRIPTION
An illustrative high-level block diagram of a portion of a communications system embodying the principles of the invention is shown in FIG. 1. For illustrative purposes only, it is assumed that receiver 10 receives a CAP (carrierless, amplitude modulation, phase modulation) signal, which can be represented by:
r(t) = Σ_n [a_n p(t-nT) - b_n p̃(t-nT)] + ξ(t), (1)
where a_n and b_n are discrete-valued multilevel symbols, p(t) and p̃(t) are impulse responses which form a Hilbert pair, T is the symbol period, and ξ(t) is additive noise introduced in the channel.
It is assumed that the CAP signal in equation (1) has been distorted while propagating through communications channel 9 and experiences intersymbol interference (ISI). This ISI consists of intrachannel ISI (a.sub.n or b.sub.n symbols interfering with each other) and interchannel ISI (a.sub.n and b.sub.n symbols interfering with each other). The purpose of receiver 10 is to remove the ISI and minimize the effect of the additive noise .xi.(t) to provide signal r'(t). The inventive concept will illustratively be described in the context of a windowed MMA blind equalization algorithm for use within receiver 10. However, before describing the inventive concept, some background information on adaptive filters and the above-mentioned RCA, CMA, and MMA algorithms is presented. Also, as used herein, an adaptive filter is, e.g., a fractionally spaced linear equalizer, which is hereafter simply referred to as an FSLE equalizer or, simply, an equalizer.
Equalizer Structures
An illustrative phase-splitting FSLE equalizer 100 is shown in FIG. 2. It is assumed that FSLE equalizer 100 operates on an input signal comprising two dimensions: an in-phase component and a quadrature component. FSLE equalizer 100 comprises two parallel digital adaptive filters implemented as finite impulse response (FIR) filters 110 and 120. Equalizer 100 is called a "phase-splitting FSLE" because the two FIR filters 110 and 120 converge to in-phase and quadrature filters. Some illustrative details of the equalizer structure are shown in FIG. 3. The two FIR filters 110 and 120 share the same tapped delay line 115, which stores sequences of successive Analog-to-Digital Converter (A/D) 125 samples r_k. The sampling rate 1/T' of A/D 125 is typically three to four times higher than the symbol rate 1/T and is chosen in such a way that it satisfies the sampling theorem for real signals. It is assumed that T/T' = i, where i is an integer.
The outputs of the two adaptive FIR filters 110 and 120 as shown in FIG. 3 are computed at the symbol rate 1/T. The equalizer taps and input samples can be represented by a corresponding N-dimensional vector. As such, the following relationships are now defined:
r_n^T = [r_k, r_{k-1}, . . . , r_{k-N}] = vector of A/D samples in delay line; (2)
c_n^T = [c_0, c_1, c_2, . . . , c_N] = vector of in-phase tap coefficients; and (3)
d_n^T = [d_0, d_1, d_2, . . . , d_N] = vector of quadrature phase tap coefficients; (4)
where the superscript T denotes vector transpose, the subscript n refers to the symbol period nT, and k = in.
Let y_n and ỹ_n be the computed outputs of the in-phase and quadrature filters, respectively, and:
y_n = c_n^T r_n; and (5)
ỹ_n = d_n^T r_n. (6)
An X/Y display of the outputs y_n and ỹ_n or, equivalently, of the complex output Y_n = y_n + jỹ_n, is called a signal constellation. FIGS. 6 and 17 show a 64-CAP constellation before and after illustrative convergence using the MMA algorithm. (The term "64-CAP" refers to the number of predefined symbols in the signal space or signal constellation, each symbol representing 6 bits since 2^6 = 64. Additional information on a CAP communications system can be found in J. J. Werner, "Tutorial on Carrierless AM/PM--Part I--Fundamentals and Digital CAP Transmitter," Contribution to ANSI X3T9.5 TP/PMD Working Group, Minneapolis, Jun. 23, 1992.) After convergence, the signal constellation consists of a display of the complex symbols A_n = a_n + jb_n corrupted by some small noise and ISI.
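As a concrete illustration of equations (5) and (6), the following Python sketch computes one pair of phase-splitting FSLE outputs from a shared delay line. The filter length and the tap and sample values are hypothetical placeholders, not values taken from the patent.

```python
import numpy as np

# Minimal sketch of the phase-splitting FSLE output computation, equations (5) and (6).
N = 8                                  # number of taps per filter (assumed for illustration)
rng = np.random.default_rng(0)
c = rng.normal(size=N + 1)             # in-phase tap vector c_n
d = rng.normal(size=N + 1)             # quadrature tap vector d_n
r = rng.normal(size=N + 1)             # A/D samples currently in the shared delay line

y  = c @ r                             # y_n  = c_n^T r_n   (equation (5))
yt = d @ r                             # ỹ_n  = d_n^T r_n   (equation (6))
Y  = y + 1j * yt                       # complex equalizer output Y_n
print(Y)
```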
In the normal mode of operation, decision devices (or slicers) 130 and 135 shown in FIG. 2 compare the sampled outputs y_n and ỹ_n of equalizer 100 with valid symbol values a_n and b_n and make a decision on which symbols have been transmitted. These sliced symbols will be denoted a_n and b_n. The receiver then computes the following in-phase and quadrature errors e_n and ẽ_n:
e_n = y_n - a_n, (7a)
ẽ_n = ỹ_n - b_n, (7b)
and the tap coefficients of the two adaptive filters are updated using the familiar least-mean-square (LMS) algorithm, i.e.,
c_{n+1} = c_n - α e_n r_n, (8a)
d_{n+1} = d_n - α ẽ_n r_n, (8b)
where α is the step size used in the tap adjustment algorithm.
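A minimal sketch of one decision-directed LMS iteration per equations (7a)-(8b) is shown below. The slicer helper, the step-size value, and the example level set are assumptions made for illustration only.

```python
import numpy as np

def slicer(v, levels):
    """Return the valid symbol level closest to v (decision device of FIG. 2)."""
    levels = np.asarray(levels, dtype=float)
    return levels[np.argmin(np.abs(levels - v))]

def lms_update(c, d, r, levels, alpha=0.001):
    """One decision-directed LMS update per equations (7a)-(8b)."""
    y, yt = c @ r, d @ r               # equalizer outputs y_n and ỹ_n
    e  = y  - slicer(y, levels)        # in-phase error,   equation (7a)
    et = yt - slicer(yt, levels)       # quadrature error, equation (7b)
    c = c - alpha * e  * r             # equation (8a)
    d = d - alpha * et * r             # equation (8b)
    return c, d

# Example: a 64-CAP constellation uses the levels +/-1, +/-3, +/-5, +/-7 on each axis.
levels_64cap = [-7, -5, -3, -1, 1, 3, 5, 7]
```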
Turning now to FIG. 4, a cross-coupled FSLE, 200, is shown. For this equalizer structure, the A/D samples are first fed to two fixed in-phase and quadrature FIR filters, 210 and 205, respectively. In this case, the sampling rate 1/T' of A/D 125 is typically equal to four times the symbol rate 1/T. The outputs of the two fixed FIR filters are computed at a rate 1/T" that is consistent with the sampling theorem for analytic signals as known in the art. The output signals are then fed to equalizer 200 having a so-called cross-coupled structure. Typically, 1/T" is twice the symbol rate 1/T.
The cross-coupled equalizer 200 uses two adaptive FIR filters 215a and 215b, each with tap vectors c_n and d_n. For simplicity, the same tap vector notations c_n and d_n (which have been used for the previously described equalizer 100 of FIG. 2) are used again. However, it should be clear to those skilled in the art that the tap vectors are different for the two types of equalizers. These two filters are each used twice to compute the outputs y_n and ỹ_n of the equalizer. Let r_n and r̃_n be the output vectors of the in-phase and quadrature filters that are used to compute the outputs of the cross-coupled equalizer. The following definitions can be made:
C_n = c_n + jd_n, (9a)
R_n = r_n + jr̃_n, and (9b)
Y_n = y_n + jỹ_n. (9c)
The complex output Y_n of the equalizer can be written in the following compact way:
Y_n = C_n^{*T} R_n, (10)
where the asterisk * denotes complex conjugate. Making the following definitions for the sliced complex symbol A_n and the complex error E_n:
A_n = a_n + jb_n, (11a)
E_n = Y_n - A_n. (11b)
The LMS algorithm for updating the complex tap vector C_n can be written as:
C_{n+1} = C_n - α E_n* R_n. (12)
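The complex-valued update of equation (12) can be sketched as follows; the function name, the step size, and the caller-supplied sliced symbol are illustrative assumptions.

```python
import numpy as np

def complex_lms_update(C, R, A_hat, alpha=0.001):
    """One LMS update of the complex tap vector per equations (10)-(12).

    C     : complex tap vector C_n = c_n + j d_n
    R     : complex input vector R_n = r_n + j r̃_n
    A_hat : sliced complex symbol A_n
    """
    Y = np.vdot(C, R)                  # Y_n = C_n^{*T} R_n (vdot conjugates its first argument)
    E = Y - A_hat                      # complex error, equation (11b)
    C = C - alpha * np.conj(E) * R     # equation (12)
    return C, Y
```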
Turning now to FIG. 5, a four-filter FSLE is shown. Four-filter equalizer 300 has the same general structure as cross-coupled FSLE 200 shown in FIG. 4, except that the adaptive portion consists of four different filters rather than two filters which are used twice. For this reason it is called a four-filter FSLE. The two output signals of equalizer 300 are computed as follows:
y_n = c_{1,n}^T r_n + d_{2,n}^T r̃_n, and (13a)
ỹ_n = c_{2,n}^T r̃_n - d_{1,n}^T r_n. (13b)
Using the definitions for the in-phase and quadrature errors e_n and ẽ_n in equations (7a) and (7b), the following tap updating algorithms for the four filters result:
c_{1,n+1} = c_{1,n} - α e_n r_n, (14a)
d_{1,n+1} = d_{1,n} + α ẽ_n r_n, (14b)
c_{2,n+1} = c_{2,n} - α ẽ_n r̃_n, and (15a)
d_{2,n+1} = d_{2,n} - α e_n r̃_n. (15b)
Having generally described the structure of some prior-art equalizers as shown in FIGS. 2-5, a general overview of the concept of blind equalization will now be described using the equalizer structure of FIG. 2.
Concept of Blind Equalization
In the normal (steady-state) mode of operation, the decision devices in FIG. 2, i.e., slicers 130 and 135, compare the equalizer complex output samples, Y_n (where Y_n = y_n + jỹ_n), with all the possible transmitted complex symbols, A_n (where A_n = a_n + jb_n), and select the symbol A_n which is the closest to Y_n. The receiver then computes an error, E_n, where:
E_n = Y_n - A_n, (16)
which is used to update the tap coefficients of equalizer 100. This type of tap adaptation is called "decision directed", because it uses the decisions of slicers 130 and 135. The most common tap updating algorithm is the LMS algorithm, which is a stochastic gradient algorithm that minimizes the mean square error (MSE), which is defined as:
MSE ≜ E[|E_n|^2] = E[|Y_n - A_n|^2] = E[e_n^2] + E[ẽ_n^2], (17)
where E[·] denotes expectation and e_n and ẽ_n are in-phase and quadrature errors, respectively.
At the beginning of start-up, the output signal of equalizer 100, Y.sub.n, is corrupted by a lot of intersymbol interference, as illustrated in FIG. 6. The latter represents experimental data obtained for a 64-CAP receiver using a phase-splitting FSLE as represented by FIG. 2.
When a training sequence is used during start-up (i.e., a predefined sequence of A.sub.n symbols), the receiver can compute meaningful errors E.sub.n by using the equalizer output signal Y.sub.n and the known sequence of transmitted symbols A.sub.n. In this case, tap adaptation is said to be done with "ideal reference" to distinguish it from decision directed tap adaptation.
However, when no training sequence is available, equalizer 100 has to be converged blindly. In this case, a decision-directed tap updating algorithm cannot be used to converge the equalizer, because the slicer makes too many wrong decisions, as should be apparent from FIG. 6.
As such, the philosophy of blind equalization is to use a tap adaptation algorithm that minimizes a cost function that is better suited to provide initial convergence of equalizer 100 than the MSE represented by equation (17). The cost functions used in the RCA, CMA, and MMA algorithms are described below.
Convergence of an equalizer during blind start-up usually consists of two main steps. First, a blind equalization algorithm is used to open the "eye diagram." (Hereafter, this will be referred to as "it opens the eye.") Once the eye is open enough, the receiver switches to a decision directed tap adaptation algorithm.
Reduced Constellation Algorithm (RCA)
This section provides a general overview of the RCA algorithm. This general overview is then followed with a description of the RCA algorithm in the context of each of the illustrative equalizer structures, described above.
With the RCA algorithm, the error used in the tap updating algorithm is derived with respect to a signal constellation that has a smaller number of points than the received constellation. As illustration, it is again assumed that the signal constellation comprises 64 symbols. In the RCA algorithm, the reduced constellation typically consists of four signal points only, as shown in FIG. 8. It should be noted that the RCA algorithm requires the use of a decision device, e.g., a slicer, to select the closest signal point from the reduced constellation. The error between the received sample Y.sub.n and the closest signal point A.sub.r,n of the reduced constellation is the complex number:
E_{r,n} = e_{r,n} + jẽ_{r,n} = Y_n - A_{r,n}, where (18)
A_{r,n} = a_{r,n} + jb_{r,n} = R[sgn(y_n) + jsgn(ỹ_n)], and (19)
where sgn(·) is the signum function and the expression on the right corresponds to the case where the reduced constellation consists of four points. The reduced constellation algorithm minimizes the following cost function:
CF = E[|E_{r,n}|^2] = E[e_{r,n}^2 + ẽ_{r,n}^2] = E[|Y_n - A_{r,n}|^2], (20)
where E[·] denotes expectation and where e_{r,n} refers to the slicer error.
Now, consider the phase-splitting equalizer structure shown in FIG. 2. Using equations (5), (6), and (20), the following equations result:
e_{r,n} = y_n - a_{r,n} = c_n^T r_n - R sgn(y_n), (21a)
ẽ_{r,n} = ỹ_n - b_{r,n} = d_n^T r_n - R sgn(ỹ_n). (21b)
The gradients of the cost function represented by equation (20) with respect to the tap vectors c_n and d_n are equal to:
∇_c(CF) = 2E[e_{r,n} r_n], and (22a)
∇_d(CF) = 2E[ẽ_{r,n} r_n]. (22b)
These gradients are equal to zero when the channel is perfectly equalized, i.e. when the received samples Y_n are equal to the symbol values A_n. This condition leads to the following value of R:
R = E[a_n^2]/E[|a_n|]. (23)
For example, consider the gradient with respect to the tap vector c.sub.n. From the left of equations (21a) and (21b) there is the condition: E[(y.sub.n -R sgn(y.sub.n))r.sub.n ]=0. With perfect equalization y.sub.n =a.sub.n. Also, if it is assumed that different symbols are uncorrelated, then: E[a.sub.n r.sub.n ]=k.sub.n E[a.sub.n.sup.2 ], where k.sub.n is a fixed vector whose entries are a function of the channel. The above condition can then be written as: E[a.sub.n.sup.2 ]-R E[sgn(a.sub.n)a.sub.n ]=0. Noting that sgn (a.sub.n)a.sub.n =.vertline.a.sub.n .vertline. and solving for R, equation (23) results.
The nonaveraged gradients in equations (22a) and (22b) can be used in a stochastic gradient algorithm to adapt the tap coefficients of the equalizer, so that the following tap updating algorithms result:
c_{n+1} = c_n - α[y_n - R sgn(y_n)] r_n, and (24a)
d_{n+1} = d_n - α[ỹ_n - R sgn(ỹ_n)] r_n. (24b)
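A minimal sketch of one RCA iteration for the phase-splitting structure, per equations (24a) and (24b); the function name and step size are assumptions for illustration.

```python
import numpy as np

def rca_update(c, d, r, R, alpha=0.001):
    """One RCA stochastic-gradient update per equations (24a) and (24b).

    With the four-point reduced constellation, slicing reduces to taking
    the sign of each equalizer output and scaling by the constant R.
    """
    y, yt = c @ r, d @ r
    c = c - alpha * (y  - R * np.sign(y))  * r    # equation (24a)
    d = d - alpha * (yt - R * np.sign(yt)) * r    # equation (24b)
    return c, d
```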
Turning now to the cross-coupled FSLE structure illustrated by FIG. 4, the complex output Y.sub.n of this equalizer is computed from equation (10). Using this expression in equation (20), the gradient of the cost function with respect to the complex tap vector C.sub.n is:
∇_C = E[(Y_n - A_{r,n})* R_n]. (25)
Assuming a perfectly equalized channel the following expression for R results:
R = E[|A_n|^2]/(E[|a_n|] + E[|b_n|]) = E[a_n^2]/E[|a_n|], (26)
where the expression on the right is the same as the one in equation (23) for the usual case where E[|a_n|] = E[|b_n|]. The tap updating algorithm for the complex tap vector C_n is given by
C_{n+1} = C_n - α(Y_n - A_{r,n})* R_n. (27)
Turning now to the four-filter FSLE structure illustrated by FIG. 5, the outputs y.sub.n and y.sub.n of this four-filter equalizer structure are computed from equations (13a) and (13b). The gradients of the cost function in equation (20) with respect to the four tap vectors are similar to the ones given in equations (22a) and (22b) and will not be repeated here. The tap updating algorithms are given by:
c_{1,n+1} = c_{1,n} - α[y_n - R sgn(y_n)] r_n, (28a)
d_{1,n+1} = d_{1,n} + α[ỹ_n - R sgn(ỹ_n)] r_n, (28b)
c_{2,n+1} = c_{2,n} - α[ỹ_n - R sgn(ỹ_n)] r̃_n, and (28c)
d_{2,n+1} = d_{2,n} - α[y_n - R sgn(y_n)] r̃_n, (28d)
where the constant R is the same as in equation (23).
The main advantage of RCA is its low cost of implementation because it is typically the least complex of blind equalization algorithms. The tap updating algorithms represented by equations (24a), (24b), (27) and (28) are the same as the standard LMS algorithms represented by equations (8a) and (8b) except that the slicer uses a different number of points.
The main disadvantages of RCA are its unpredictability and lack of robustness. The algorithm is known to often converge to so-called "wrong solutions." These solutions are quite acceptable from a channel equalization perspective, but do not allow the receiver to recover the right data. It should be pointed out that the equalizer structure in FIG. 2 is much more likely to converge to wrong solutions than the structure in FIG. 4. This is due to the fact that the former has many more degrees of freedom than the latter.
A wrong solution that is often observed with the equalizer structure in FIG. 2 is the so-called diagonal solution. In this case, the in-phase and quadrature filters both converge to the same filter, so that they both generate the same output samples. As a result, the signal constellation at the output of the equalizer consists of points clustered along a diagonal as illustrated in FIG. 20 for a 64-CAP signal point constellation. It has been found that the frequency of occurrence of diagonal solutions is mostly communications-channel dependent. Specifically, a diagonal solution is created when certain fractional propagation delay offsets are introduced in the channel. (As a point of contrast, FIG. 16 shows an illustrative correct solution for a 64-CAP signal point constellation using the MMA blind equalization algorithm.)
Other wrong solutions can occur when the in-phase and quadrature filters introduce propagation delays which differ by an integral number of symbol periods. As an example, at a given sampling instant, a.sub.n may appear at the output of the in-phase filter while b.sub.n-1 appears at the output of the quadrature filter. This kind of wrong solution can generate points in the signal constellation at the output of the equalizer that do not correspond to transmitted symbols. For example, a 32-point signal constellation may be converted into a 36-point constellation and the 128-point constellation in FIGS. 13, 14, and 15 may be converted into a 144-point constellation.
Constant Modulus Algorithm (CMA)
This section provides a general overview of the CMA algorithm. This general overview is then followed with a description of the CMA algorithm in the context of each of the illustrative equalizer structures, described above.
The CMA algorithm minimizes the dispersion of the equalized samples Y_n with respect to a circle with radius R. This is graphically illustrated in FIG. 9. The CMA algorithm minimizes the following cost function:
CF = E[(|Y_n|^L - R^L)^2], (29)
where L is a positive integer. The case L=2 is the most commonly used in practice. The cost function in equation (29) is a true two-dimensional cost function which minimizes the dispersion of the equalizer complex output signal Y_n with respect to a circular two-dimensional contour.
Now, consider the phase-splitting equalizer structure shown in FIG. 2. The gradients of the cost function with respect to the tap vectors c.sub.n and d.sub.n are given by:
∇_c(CF) = 2L × E[(|Y_n|^L - R^L)|Y_n|^{L-2} y_n r_n], and (30a)
∇_d(CF) = 2L × E[(|Y_n|^L - R^L)|Y_n|^{L-2} ỹ_n r_n]. (30b)
Assuming a perfectly equalized channel the following value for R^L results:
R^L = E[|A_n|^{2L}]/E[|A_n|^L], (31)
which holds for the usual case where the statistics of the symbols a_n and b_n are the same. For L=2, the following stochastic gradient tap updating algorithms result:
c_{n+1} = c_n - α(y_n^2 + ỹ_n^2 - R^2) y_n r_n, and (32a)
d_{n+1} = d_n - α(y_n^2 + ỹ_n^2 - R^2) ỹ_n r_n. (32b)
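A minimal sketch of one CMA (L = 2) iteration for the phase-splitting structure, per equations (32a) and (32b); the function name and step size are illustrative assumptions.

```python
import numpy as np

def cma_update(c, d, r, R2, alpha=0.001):
    """One CMA (L = 2) stochastic-gradient update per equations (32a) and (32b).

    R2 is the constant R^2 of equation (31).
    """
    y, yt = c @ r, d @ r
    disp = y * y + yt * yt - R2        # dispersion of |Y_n|^2 about R^2
    c = c - alpha * disp * y  * r      # equation (32a)
    d = d - alpha * disp * yt * r      # equation (32b)
    return c, d
```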
Turning now to the cross-coupled FSLE structure illustrated by FIG. 4, the gradient of the cost function represented by equation (29) with respect to the complex tap vector C.sub.n is equal to:
.gradient..sub.c (CF)=2L.times.E[(.vertline.Y.sub.n .vertline..sup.L -R.sup.L).vertline.Y.sub.n .vertline..sup.L-2 Y.sub.n *R.sub.n ].(33)
For L=2, the tap updating algorithm for the complex tap vector becomes:
C.sub.n+1 =C.sub.n -.alpha.(.vertline.Y.sub.n .vertline..sup.2 -R.sup.2)Y.sub.n *R.sub.n, (34)
where R is given by the expression on the right in equation (31).
Turning now to the four-filter FSLE structure illustrated by FIG. 5, the gradients of the cost function represented by equation (29) with respect to the four tap vectors are similar to the ones given by equations (30a) and (30b). For L=2, the tap updating algorithms become:
c_{1,n+1} = c_{1,n} - α(y_n^2 + ỹ_n^2 - R^2) y_n r_n, (35a)
d_{1,n+1} = d_{1,n} + α(y_n^2 + ỹ_n^2 - R^2) ỹ_n r_n, (35b)
c_{2,n+1} = c_{2,n} - α(y_n^2 + ỹ_n^2 - R^2) ỹ_n r̃_n, and (35c)
d_{2,n+1} = d_{2,n} - α(y_n^2 + ỹ_n^2 - R^2) y_n r̃_n. (35d)
The constant R is the same as in equation (31).
The main advantages of CMA are its robustness and predictability. Unlike RCA, it rarely converges to wrong solutions. For some applications, other than those considered here, it also has the advantage of being able to partially equalize the channel in the presence of carrier phase variations. The main disadvantage of CMA is its cost of implementation. The CMA tap updating algorithm is more complex than that of the RCA algorithm and the MMA algorithm and, in addition, the CMA algorithm requires a so-called "rotator" at the output of the equalizer. As a result, once a certain degree of convergence is achieved, the output signal of the equalizer must be counter-rotated before switching to a decision-directed tap adaptation algorithm. The need to use a rotator after the equalizer increases the cost of implementation of CMA for some types of applications. It should be pointed out, however, that there are other applications, such as voiceband and cable modems, where the rotator function is required anyway for other purposes, such as tracking frequency offset introduced in the channel. In these latter cases, the need to do a rotation does not increase the cost of implementation, and CMA becomes a very attractive approach.
Multimodulus Algorithm (MMA)
The MMA algorithm minimizes the dispersion of the equalizer output samples y.sub.n and y.sub.n around piecewise linear in-phase and quadrature contours. For the special case of square signal constellations of the type used for 16-, 64-, and 256-CAP systems, the contours become straight lines. This is graphically illustrated in FIG. 10 for a 64-point constellation. The multimodulus algorithm minimizes the following cost function:
CF = E[(y_n^L - R^L(Y_n))^2 + (ỹ_n^L - R̃^L(Y_n))^2], (36)
where L is a positive integer and R(Y_n) and R̃(Y_n) take discrete positive values, which depend on the equalizer outputs Y_n.
Multimodulus Algorithm (MMA)--Square Constellations
For square constellations, R(Y_n) = R̃(Y_n) = R = constant, so that the cost function of equation (36) becomes:
CF = CF_I + CF_Q = E[(y_n^L - R^L)^2 + (ỹ_n^L - R^L)^2]. (37)
Unlike the cost function for CMA represented by equation (29), this is not a true two-dimensional cost function. Rather, it is the sum of two independent one-dimensional cost functions CF.sub.I and CF.sub.Q. The application of the MMA algorithm in the context of the three illustrative types of equalizers (described above) will now be described.
For the phase-splitting equalizer structure shown in FIG. 2, the gradients of the cost function in equation (37) with respect to the tap vectors c_n and d_n are equal to:
∇_c(CF) = 2L × E[(|y_n|^L - R^L)|y_n|^{L-2} y_n r_n], and (38a)
∇_d(CF) = 2L × E[(|ỹ_n|^L - R^L)|ỹ_n|^{L-2} ỹ_n r_n]. (38b)
Assuming a perfectly equalized channel, the following value for R^L results:
R^L = E[|a_n|^{2L}]/E[|a_n|^L]. (39)
The best compromise between cost and performance is achieved with L=2, in which case the tap updating algorithms become
c_{n+1} = c_n - α(y_n^2 - R^2) y_n r_n, and (40a)
d_{n+1} = d_n - α(ỹ_n^2 - R^2) ỹ_n r_n. (40b)
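A minimal sketch of one MMA (L = 2) iteration for the phase-splitting structure, per equations (40a) and (40b); the function name and step size are illustrative assumptions.

```python
import numpy as np

def mma_update(c, d, r, R2, alpha=0.001):
    """One MMA (L = 2) stochastic-gradient update per equations (40a) and (40b).

    R2 is the constant R^2 of equation (39); the in-phase and quadrature
    dimensions are adapted independently around their one-dimensional contours.
    """
    y, yt = c @ r, d @ r
    c = c - alpha * (y * y  - R2) * y  * r     # equation (40a)
    d = d - alpha * (yt * yt - R2) * yt * r    # equation (40b)
    return c, d
```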
Turning now to the cross-coupled FSLE structure illustrated by FIG. 4, the gradient of the cost function represented by equation (37) with respect to the complex tap vector C.sub.n is given by:
∇_C(CF) = 2L × E[K* R_n], (41)
where,
K = [(|y_n|^L - R^L)|y_n|^{L-2} y_n] + j[(|ỹ_n|^L - R^L)|ỹ_n|^{L-2} ỹ_n]. (42)
Assuming a perfectly equalized channel, the value for R^L is:
R^L = (E[|a_n|^{2L}] + E[|b_n|^{2L}])/(E[|a_n|^L] + E[|b_n|^L]), (43)
which reduces to equation (39) for the usual case where the symbols a_n and b_n have the same statistics. For L=2, the tap updating algorithm for the complex tap vector C_n becomes:
C_{n+1} = C_n - α K* R_n, (44)
where,
K = (y_n^2 - R^2) y_n + j(ỹ_n^2 - R^2) ỹ_n. (45)
Turning now to the four-filter FSLE structure illustrated by FIG. 5, the gradients of the cost function represented by equation (37) with respect to the four tap vectors are similar to the ones given in equations (38a and 38b). For L=2, the tap updating algorithms become:
c_{1,n+1} = c_{1,n} - α(y_n^2 - R^2) y_n r_n, (46a)
d_{1,n+1} = d_{1,n} + α(ỹ_n^2 - R^2) ỹ_n r_n, (46b)
c_{2,n+1} = c_{2,n} - α(ỹ_n^2 - R^2) ỹ_n r̃_n, and (46c)
d_{2,n+1} = d_{2,n} - α(y_n^2 - R^2) y_n r̃_n. (46d)
The constant R is the same as in equation (39).
The above-mentioned two-step blind equalization procedure utilizing the MMA algorithm is graphically illustrated by FIGS. 6, 7, 16, and 17 for equalizer 100. The output signal of equalizer 100, before any form of convergence, is shown in FIG. 6. As noted above, FIG. 6 represents experimental data obtained for a 64-CAP receiver using a phase-splitting FSLE as represented by FIG. 2. FIG. 7 illustrates the beginning of the MMA process convergence. As shown in FIG. 16, the MMA technique converges the equalizer enough to clearly illustrate the 64-symbol signal space as 64 noisy clusters. Although these noisy clusters would typically not be acceptable for steady-state operation, the eye is open enough to allow the receiver to switch to a 64-point slicer and a decision-directed LMS algorithm. The end result is a much cleaner constellation, as shown in FIG. 17. Typically, a clean transition can be made between the two modes of adaptation, MMA and decision directed, when the symbol error rate is better than 10^{-2}, although successful transitions have been observed for worse symbol error rates. It should be pointed out that the noisy clusters in FIG. 16 could be further reduced by decreasing the step size in the MMA tap adjustment algorithm. Indeed, in some applications it may be possible to eliminate the switching to a decision directed tap adaptation algorithm. However, it should be noted that this would increase the start-up time and the required amount of digital precision.
The MMA algorithm for square constellations can be used without modification for nonsquare constellations. In this case, caution has to be exercised in the computation of the constant R, because the discrete levels for the symbols a.sub.n and b.sub.n do not all have the same probability of occurrence (described below). However, it has been found through computer simulations that convergence of the MMA algorithm is somewhat less reliable for nonsquare constellations than for square constellations. This can be corrected by using the modified MMA discussed in the following section.
Multimodulus Algorithm (MMA)--NonSquare Constellations
The principle of the modified MMA is illustrated in FIGS. 13, 14, and 15, with respect to a 128-CAP signal constellation. (A 128-point signal constellation is obtained in the following way. First define a 144-point signal constellation using the symbol levels .+-.1,.+-.3,.+-.5,.+-.7,.+-.9,.+-.11, and then remove the four corner points in each quadrant.) Minimization of the dispersion of the equalizer output samples y.sub.n and y.sub.n is now done around piecewise straight lines. Again, this is done independently for y.sub.n and y.sub.n. The quadrature cost functions derived from equation (37) are:
CF_Q = E[(ỹ_n^L - R_1^L)^2] if |y_n| < K, and (47a)
CF_Q = E[(ỹ_n^L - R_2^L)^2] if |y_n| > K. (47b)
The in-phase cost functions derived from equation (37) are:
CF_I = E[(y_n^L - R_1^L)^2] if |ỹ_n| < K, and (47c)
CF_I = E[(y_n^L - R_2^L)^2] if |ỹ_n| > K. (47d)
The constant K is a function of the signal constellation under consideration and is determined empirically. In computer simulations for 128-CAP, a suggested value is K=8. Two different moduli R.sub.1 and R.sub.2 are used in equations (47) because the symbols a.sub.n and b.sub.n used in the 128-point constellation have two sets of levels {.+-.1,.+-.3,.+-.5,.+-.7} and {.+-.9,.+-.11} which have a different probability of occurrence. More moduli can be used if there are more than two sets of symbol levels with different statistics.
The moduli R_1 and R_2 in equations (47) are computed from equation (39) by evaluating the moments of the symbols over the set of symbol levels to which a given modulus applies (additionally described below). As an example, consider FIG. 13, which illustrates the moduli for the in-phase dimension and which applies to the real symbols a_n of a 128-CAP signal constellation. The moments of the symbols can be computed by considering the first quadrant only. Consider the subset of 24 symbols in this quadrant that applies to R_1. For these symbols a_n = 1, 3, 5, 7, 9, 11; and b_n = 1, 3, 5, 7; so that each value of a_n occurs with probability 4/24 = 1/6. Similarly, the R_2 subset has 8 symbols for which a_n = 1, 3, 5, 7 and b_n = 9, 11, so that each value of a_n occurs with probability 2/8 = 1/4. Thus, the variance of the symbols becomes:
for R.sub.1 symbols, E[a.sub.n.sup.2 ]=1/6(1.sup.2 +3.sup.2 +5.sup.2 +7.sup.2 +9.sup.2 +11.sup.2).apprxeq.47.67, and (48a)
for R.sub.2 symbols, E[a.sub.n.sup.2 ]=1/4(1.sup.2 +3.sup.2 +5.sup.2 +7.sup.2)=21. (48b)
Other moments of the symbols are computed in a similar fashion and then used in equation (39) to evaluate the values of the various moduli.
The tap updating algorithms for the modified MMA algorithm are the same as the ones given in equations (40), (44), and (46), except that the constant R is replaced by either R_1 or R_2 depending on which equalizer output sample Y_n is received. FIG. 14 illustrates the moduli for the quadrature dimension, which apply to the symbols b_n of the 128-CAP signal constellation. It should be apparent from FIG. 15, which represents the union of FIGS. 13 and 14, that the in-phase and quadrature tap updating algorithms need not use the same moduli R_1 or R_2 in a given symbol period.
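The following sketch computes the two in-phase moduli for the 128-CAP example from the symbol subsets described above, using equation (39) with L = 2; the subset assignment follows the first-quadrant counting in the text, and the helper name is an assumption.

```python
import numpy as np

# Symbol subsets for the in-phase moduli of a 128-CAP constellation (FIG. 13):
# R_1 applies where the other dimension's symbol magnitude is small (a_n takes all
# six levels), R_2 where it is large (a_n restricted to four levels).
a_r1 = np.array([1, 3, 5, 7, 9, 11], dtype=float)   # each level with probability 1/6
a_r2 = np.array([1, 3, 5, 7], dtype=float)          # each level with probability 1/4

def modulus(levels):
    """R from equation (39) with L = 2, evaluated over one symbol subset."""
    return np.sqrt(np.mean(levels ** 4) / np.mean(levels ** 2))

print(np.mean(a_r1 ** 2))             # ~47.67, matching equation (48a)
print(np.mean(a_r2 ** 2))             # 21.0,   matching equation (48b)
print(modulus(a_r1), modulus(a_r2))   # the two in-phase moduli R_1 and R_2
```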
Moments of Data Symbols
The following description discusses the concept of "moments of data symbols." In particular, the closed-form expressions for the moments E[.vertline.a.sub.n .vertline..sup.L ], E[.vertline.b.sub.n .vertline..sup.L ], and E[.vertline.A.sub.n .vertline..sup.L ] when the symbols a.sub.n and b.sub.n take values proportional to the odd integers .+-.1,.+-.3,.+-.5,.+-.7, . . . , are presented. These expressions are then used to get closed-form expressions for the constants R used in the three blind equalization algorithms and illustrated in the table of FIG. 19 (described below).
First, it is assumed that the symbols a_n and b_n have the same statistics, so that E[|a_n|^L] = E[|b_n|^L]. Consider first the following known summations of powers of integers: ##EQU7##
These summations can be used to find closed-form expressions for sums of powers of odd integers. For example, for power one:
Σ_{k=1}^{m} (2k-1) = Σ_{j=1}^{2m} j - 2Σ_{k=1}^{m} k = m(2m+1) - m(m+1) = m^2, (50)
where the two summations in the middle have been evaluated by using the closed-form expression of equation (49a). Similar series manipulations can be used for other sums of powers of odd integers.
Now, consider square signal constellations which use symbols a_n and b_n with values ±1, ±3, ±5, ±7, . . . ±(2m-1), where m is the number of different symbol levels (in magnitude). As an example, for the 4-CAP, 16-CAP, 64-CAP, and 256-CAP square signal constellations, m = 1, 2, 4, and 8, respectively. It is also assumed that all the symbol values are equiprobable. As a result, the moments of the symbols a_n are:
E[|a_n|] = m, (51)
E[a_n^2] = (4m^2 - 1)/3, (52)
E[|a_n|^3] = m(2m^2 - 1), and (53)
E[a_n^4] = (4m^2 - 1)(12m^2 - 7)/15. (54)
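A small numerical check of these closed-form moments is sketched below; the closed forms for E[|a_n|], E[a_n^2], and E[a_n^4] used in the assertions are the ones stated above (the fourth moment also appears later as equation (76)).

```python
import numpy as np

# Verify the symbol-moment closed forms for equiprobable levels +/-1, ..., +/-(2m-1).
for m in (1, 2, 4, 8):                               # 4-, 16-, 64-, 256-CAP
    levels = np.arange(1, 2 * m, 2, dtype=float)     # 1, 3, ..., 2m-1
    e_abs = np.mean(levels)                          # E[|a_n|]
    e2 = np.mean(levels ** 2)                        # E[a_n^2]
    e4 = np.mean(levels ** 4)                        # E[a_n^4]
    assert np.isclose(e_abs, m)
    assert np.isclose(e2, (4 * m * m - 1) / 3)
    assert np.isclose(e4, (4 * m * m - 1) * (12 * m * m - 7) / 15)
    print(m, e_abs, e2, e4)
```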
Next, consider the complex symbols A.sub.n =a.sub.n +jb.sub.n. Assuming that the symbols a.sub.n and b.sub.n are uncorrelated, the following expressions for the even moments of the complex symbols result:
E[|A_n|^2] = 2E[a_n^2], and (55a)
E[|A_n|^4] = 2E[a_n^4] + 2(E[a_n^2])^2. (55b)
Using equations (52) and (54) in equation (55b) results in:
E[|A_n|^4] = (4/45)(4m^2 - 1)(28m^2 - 13). (56)
The above results can now be used to get closed-form expressions for the constants R used in the various blind equalization algorithms. The following (remarkably simple) expressions for these constants result:
R_rca = E[a_n^2]/E[|a_n|] = (4m^2 - 1)/(3m), (57)
R_mma^2 = E[a_n^4]/E[a_n^2] = (12m^2 - 7)/5, and (58)
R_cma^2 = E[|A_n|^4]/E[|A_n|^2] = 2(28m^2 - 13)/15. (59)
With respect to nonsquare signal constellations, the various symbol levels 2k-1 for a.sub.n and b.sub.n have a different probability of occurrence, even when all the complex symbols A.sub.n are equiprobable. This should be apparent from the 128-point constellation illustrated by FIG. 15. In this case, the moments of the symbols have to be computed according to the general formula: ##EQU11## where P.sub.i is the probability of occurrence of the symbol levels appearing in the corresponding summation. For typical 32-CAP and 128-CAP constellations the expression in (60) is restricted to two different probabilities P.sub.1 and P.sub.2.
Everything else being equal (i.e. symbol rate, shaping filters, etc.), it is possible to guarantee a constant average power at the output of a CAP transmitter if E[a_n^2] = E[b_n^2] = constant, independently of the type of signal constellation that is used. Of course, different signal constellations will have to use different symbol values if the average power constraint has to be satisfied. Thus, in general, a signal constellation will use symbol values λ(2k-1) where λ is chosen in such a way that the average power constraint is satisfied. For simplicity, it is assumed that E[a_n^2] = 1. For square constellations, the value of λ can then be determined from equation (52), to result in:
λ^2 (4m^2 - 1)/3 = 1, so that λ = √(3/(4m^2 - 1)). (61)
Using this expression of λ in equations (57), (58), and (59), the following expressions for the normalized constants R result:
R_rca = √((4m^2 - 1)/3)/m, (62)
R_mma = √(3(12m^2 - 7)/(5(4m^2 - 1))), and (63)
R_cma = √(2(28m^2 - 13)/(5(4m^2 - 1))). (64)
Corresponding expressions for nonsquare constellations can be obtained in a similar fashion. When the number of points in the signal constellation becomes very large, the following asymptotic values for the normalized constants result:
As m → ∞: R_rca ≈ 1.155, R_mma ≈ 1.342, R_cma ≈ 1.673. (65)
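The following sketch computes these normalized constants directly from the symbol moments, with λ chosen so that E[a_n^2] = 1, and shows that they approach the asymptotic values of equation (65); the function name and the chosen values of m are illustrative assumptions.

```python
import numpy as np

def constants(m):
    """Normalized R for RCA, MMA (L = 2) and CMA (L = 2), per equations (23), (39), (31)."""
    levels = np.arange(1, 2 * m, 2, dtype=float)
    lam = 1.0 / np.sqrt(np.mean(levels ** 2))         # scale so that E[a^2] = 1
    a = lam * levels                                   # normalized in-phase levels
    A2 = 2 * np.mean(a ** 2)                           # E[|A|^2]
    A4 = 2 * np.mean(a ** 4) + 2 * np.mean(a ** 2) ** 2   # E[|A|^4], equation (55b)
    r_rca = np.mean(a ** 2) / np.mean(np.abs(a))       # equation (23)
    r_mma = np.sqrt(np.mean(a ** 4) / np.mean(a ** 2)) # equation (39), L = 2
    r_cma = np.sqrt(A4 / A2)                           # equation (31), L = 2
    return r_rca, r_mma, r_cma

for m in (2, 4, 8, 100):
    print(m, constants(m))   # approaches (1.155, 1.342, 1.673) as m grows
```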
Summary of RCA, CMA, and MMA Algorithms
A general comparison of the RCA, CMA, and MMA techniques is shown in the table of FIG. 18. In addition, the table shown in FIG. 19 shows illustrative values, for signal constellations of different sizes, of the constants R, R_1, and R_2, which are used in the tap updating algorithms of the RCA, CMA, and MMA blind equalization techniques described above. The data shown in FIG. 19 assumes that the symbols a_n and b_n take the discrete values ±1, ±3, ±5, ±7, . . . The closed-form expressions for these constants are derived as described above.
Generally speaking, the RCA algorithm has less reliable convergence than either the CMA or MMA algorithms. As between the CMA and MMA algorithms, these algorithms have both benefits and drawbacks. For example, the CMA algorithm provides reliable convergence--thus avoiding incorrect diagonal solutions--but the CMA algorithm requires an expensive rotator. In comparison, the MMA algorithm does not require an expensive rotator but is more susceptible than the CMA algorithm to incorrect convergence.
Number of Symbol Levels
Any blind convergence technique is affected by the distribution of the output signals, or samples, of the equalizer. As such, an increase in the number of symbol levels increases the distribution of the equalizer output samples, which--in turn--makes it more difficult to blindly converge the equalizer. This is illustrated by the following comparison between the MMA blind equalization algorithm and the standard LMS algorithm.
For the standard LMS algorithm, the cost function minimizes the error between the equalizer's output signals Y.sub.n and an unknown sequence of transmitted symbols A.sub.n :
CF = E[|Y_n - A_n|^2] = E[(y_n - a_n)^2 + (ỹ_n - b_n)^2] = E[e_{r,n}^2(LMS) + ẽ_{r,n}^2(LMS)], (66)
where Y_n = y_n + jỹ_n, and A_n = a_n + jb_n.
In comparison, for the MMA blind equalization algorithm, the cost function minimizes the dispersion of the constellation:
CF = E[(y_n^2 - R^2)^2 + (ỹ_n^2 - R^2)^2] = E[e_{r,n}^2(CF) + ẽ_{r,n}^2(CF)], (67)
where the expression of the constant R is given by:
R^2 = E[a_n^4]/E[a_n^2]. (68)
Comparing the two cost functions in equations (66) and (67) it can be observed that errors have different interpretations for the LMS algorithm and the MMA algorithm.
In the LMS algorithm, the error e.sub.r,n (LMS) is defined as
e_{r,n}(LMS) = y_n - a_n; (69)
and taps are updated in the opposite direction of the gradient:
c_{n+1} = c_n - μ e_{r,n}(LMS) r_n = c_n - μ(y_n - a_n) r_n. (70)
When y.sub.n and a.sub.n represent the input and output of a slicer, the LMS-based error used during tap updating is equivalent to the error measured at the slicer and is a well-defined quantity. As such, an equalizer can converge to optimal solutions when errors are directly calculated with respect to the inputs and outputs of the slicer.
In contrast, in the MMA algorithm the error e.sub.r,n (CF) is defined differently. It should be noted that since the LMS algorithm uses second-order statistics of the signals, whereas MMA uses fourth-order statistics, a simplified version of the MMA algorithm, with L=1, is used here for comparison purposes. For this one-dimensional MMA the error e.sub.r,n (CF) of the generalized MMA becomes
e_{r,n}(CF) = |y_n| - R; (71)
and the taps of the filter are updated as follows:
c_{n+1} = c_n - μ e_{r,n}(CF) r_n = c_n - μ(|y_n| - R) r_n. (72)
From equation (72), the taps are not exactly updated in the direction of the slicer errors. Instead, error minimization is done with reference to the constant R that has statistical information about the real symbols a.sub.n. The occurrence of filter adaptation depends on the occurrence of R, which depends on m. Consequently, the error e.sub.r,n (CF) has only a statistical meaning and an equalizer is not always guaranteed to converge to optimal solutions in terms of mean-square error (MSE).
If a blind equalization algorithm is not optimum, then CF ≠ 0 when an equalizer has converged. That is, residual values of the cost function exist. Illustratively, the MMA algorithm is examined to explore the residual values of the cost function CF.
For a blind start-up with a perfect convergence, y_n → a_n. Consequently, the cost function CF converges to CF_{a,n}:
CF = E[(y_n^2 - R^2)^2] → CF_{a,n} = E[(a_n^2 - R^2)^2]. (73)
This cost function CF_{a,n} is expanded and simplified as follows:
CF_{a,n} = E[(a_n^2 - R^2)^2] (74a)
= E[a_n^4 - 2R^2 a_n^2 + R^4] (74b)
= E[a_n^4] - 2R^2 E[a_n^2] + R^4 (74c)
= E[a_n^4] - 2E[a_n^4] + R^4 (74d)
= R^4 - E[a_n^4]. (74e)
It should be noted that only analysis for the in-phase dimension is provided, and that the same analysis applies to the quadrature phase dimension. The cost function can be expressed as a function of the number of symbol levels m. From the above description of the calculations of "Moments of Data Symbols," the constant R is given by:
R^2 = E[a_n^4]/E[a_n^2] = (12m^2 - 7)/5, (75)
and the fourth moment of the symbols a_n is given by:
E[a_n^4] = (1/15)(4m^2 - 1)(12m^2 - 7). (76)
Then the cost function CF_{a,n} can be rewritten as:
CF_{a,n} = R^4 - E[a_n^4] = [(12m^2 - 7)/5]^2 - (1/15)(4m^2 - 1)(12m^2 - 7) = (16/75)(m^2 - 1)(12m^2 - 7). (80)
Equation (80) gives a simple way to express the cost function after convergence and steady-state values of the cost function CF_{a,n} can be easily calculated. The number of symbol levels m (in magnitude) can be computed from the number of constellation points C for C-CAP:
m = √C / 2. (81)
Equation (80) shows that CF_{a,n} = 0 for m = 1, and CF_{a,n} ≠ 0 for m ≠ 1. For instance, calculating CF_{a,n} gives the following results: CF_{a,n} = 0 for 4-CAP with m = 1, CF_{a,n} = 14.2 dB for 16-CAP with m = 2, and CF_{a,n} = 27.7 dB for 64-CAP with m = 4, etc. This means that the optimum convergence for a blind equalizer can only be achieved for 4-CAP with m = 1. Residual values of the cost function CF_{a,n} significantly increase with increasing m. Ultimately, due to large values of the number m, residual values of CF_{a,n} become so large that a blind equalizer fails to converge.
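A minimal numerical sketch reproducing these residual values from CF_{a,n} = R^4 - E[a_n^4] (equation (74e)) is shown below; the constellation list and formatting are illustrative choices.

```python
import numpy as np

# Residual MMA cost after perfect convergence, CF_a,n = R^4 - E[a^4], in dB.
for caps, m in ((4, 1), (16, 2), (64, 4), (256, 8)):
    levels = np.arange(1, 2 * m, 2, dtype=float)
    e2, e4 = np.mean(levels ** 2), np.mean(levels ** 4)
    r2 = e4 / e2                                   # R^2 from equation (39), L = 2
    cf = r2 ** 2 - e4                              # residual cost CF_a,n
    db = 10 * np.log10(cf) if cf > 0 else float('-inf')
    print(f"{caps}-CAP: CF_a,n = {cf:.2f} ({db:.1f} dB)")
# 4-CAP gives 0; 16-CAP gives ~14.2 dB; 64-CAP gives ~27.7 dB.
```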
Residual values of CF_{a,n} are increasing functions of the number m, and convergence of an equalizer is directly affected by those values. In conclusion, the reliability of a blind algorithm is highly degraded with increasing values of m. When the residual values of CF_{a,n} increase beyond some quantity, the eye diagram fails to open. It has been experimentally found that a standard MMA is only effective for CAP applications with m less than eight which corresponds to 256-CAP.
A Windowing Approach to Blind Convergence
In accordance with the inventive concept, a blind convergence technique is restricted to using a subset of the equalizer output samples. This improves the ability to blindly converge the equalizer notwithstanding an increase in symbol levels.
In an embodiment of the invention, a receiver implements a windowed MMA (WMMA) approach. In this WMMA approach, a sample window overlays the two-dimensional plane representing the set of equalizer output samples. Only those equalizer output samples appearing within the sample window are used during filter adaptation. An example is shown in FIG. 21, where a sample window is defined by two dotted lines along each dimension--dotted lines 601 and 602 for the in-phase dimension, and dotted lines 603 and 604 for the quadrature phase dimension. These dotted lines form both an exclusion area 600 and a sample window within the signal space. With WMMA, only those samples, y_{n,w}, falling within the window, i.e., outside the exclusion area, are used during filter adaptation. This is in contrast to MMA, which uses all of the samples. Thus, the size of the window determines the set of data, y_{n,w}, used during filter adaptation, and therefore affects the convergence of the equalizer.
As an illustration, two different variations of the windowed MMA approach are described below. The first variation is "Half-Constellation WMMA" and the second variation is "Edge-Point WMMA." For the purposes of these examples, the size of the window is varied as a function of the value of the single parameter m_w, which results in a square exclusion area. However, it should be noted that each dotted line could be associated with a different parameter value, thus yielding non-square exclusion areas.
Half-Constellation WMMA
A half-constellation window is shown in FIG. 22 for a 64-CAP constellation. In the case of the in-phase dimension, the samples y_n are divided into two sets by the window boundary m_w, with |y_n| ≤ m_w and |y_{n,w}| > m_w. With new samples y_{n,w}, the cost function CF is redefined as:
CF=E[(y.sub.n,w.sup.2 -R.sub.w.sup.2).sup.2 ]. (82)
Note that the constant R is changed to R_w because the signals y_{n,w} converge to a different constellation with the symbols a_{n,w}, where, for the 64-CAP example of FIG. 22, a_{n,w} = {±5, ±7}. The taps of the equalizer are updated with samples y_{n,w} and remain unchanged for samples with |y_n| ≤ m_w. For the half-constellation WMMA, the window boundary m_w is defined as:
m.sub.w =m, (83)
where m indicates the number of symbol levels, and the magnitude of the highest symbol level is given by 2m-1. The window boundary m_w is defined in such a way that the same number of inner-point and outer-point symbols a_n are included around the constant R. In other words, the data, y_{n,w}, used to update the taps are symmetrically distributed on both sides of R. It is called half-constellation WMMA because the number of partial symbols, a_{n,w}, that construct the new constellation is half the number of the original symbols a_n. For the half-constellation WMMA, the constant R_w needs to be evaluated with respect to the symbols a_{n,w}. With the samples y_{n,w}, the cost function CF_w now converges to the symbols a_{n,w}:
CF = E[(y_{n,w}^2 - R_w^2)^2] → CF = E[(a_{n,w}^2 - R_w^2)^2]. (84)
Then the constant R_w is computed as:
R_w^2 = E[a_{n,w}^4]/E[a_{n,w}^2]. (85)
Note that the initial index of a.sub.n,w does not start with one. Therefore, the moments for the symbols a.sub.n,w must be capable of derivation with an arbitrary initial index. The following example is for the calculation of E[a.sub.n,w.sup.2 ]. The second-order expectation is rewritten as: ##EQU20##
The parameter w denotes the number of symbol levels required for the constellation including a_{n,w}. For the half-constellation WMMA,
w = m/2. (87)
Equation (87) means that half the number of the original symbol levels m is required. The parameter M denotes the magnitude of the largest symbol level and the parameter M_w denotes the magnitude of the largest symbol level below the window boundary m_w. The parameters are given by:
M = 2m - 1 and M_w = m - 1. (88)
The constant R.sub.w is then calculated as: ##EQU21##
Values of the constant R for MMA and the constant R.sub.w for WMMA are provided in the Table shown in FIG. 23. It can be observed from the Table of FIG. 23 that the values of R.sub.w are always larger than those of R.
Next, the residual values of the cost function for the half-constellation WMMA are computed to show how much reduction can be obtained. From equation (74e):
CF_{a,n} = R^4 - E[a_n^4] → CF_{a,n} = R_w^4 - E[a_{n,w}^4]. (90)
Replacing R.sub.w.sup.2 with: ##EQU22## and replacing E[a.sub.n,w.sup.4 ] with: ##EQU23## the cost function CF.sub.a,n for the half-constellation WMMA is then obtained as ##EQU24##
A comparison of residual values of cost functions for MMA and half-constellation WMMA is provided in the Table shown in FIG. 24. This table shows that the cost function for 16-CAP becomes zero, and that a reduction of about 5 dB can be obtained for other CAP systems. Thus, by using WMMA, the cost function CF.sub.a,n becomes optimum for 16-CAP. For other CAP systems, the cost function CF.sub.a,n is reduced. Reduction of the residual values of the cost function CF.sub.a,n leads to improved reliability and convergence rate of blind equalizers.
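A small numerical sketch of this comparison is given below: it computes the residual R_w^4 - E[a_{n,w}^4] over the outer half of the symbol levels (window boundary m_w = m) and the corresponding MMA residual over all levels. The helper name and constellation list are illustrative choices.

```python
import numpy as np

def residual(levels):
    """Residual cost (R^4 - E[a^4]) for a set of equiprobable symbol levels."""
    e2, e4 = np.mean(levels ** 2), np.mean(levels ** 4)
    return (e4 / e2) ** 2 - e4

for caps, m in ((16, 2), (64, 4), (256, 8)):
    levels = np.arange(1, 2 * m, 2, dtype=float)     # 1, 3, ..., 2m-1
    outer = levels[levels > m]                       # a_n,w: levels above m_w = m
    print(f"{caps}-CAP: MMA residual {residual(levels):.1f}, "
          f"half-constellation WMMA residual {residual(outer):.1f}")
# For 16-CAP the WMMA residual is exactly zero; for larger constellations it is
# roughly 5 dB below the MMA residual.
```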
Edge-Point WMMA
Edge-point WMMA is proposed as the second application of WMMA. The cost function for half-constellation WMMA in equation (82) can be directly applied to edge-point WMMA, except that a modification is made for the sample window parameters. The window parameters are illustrated in FIG. 25, where the window boundary m_w is defined as:
m.sub.w =2(m-1). (94)
With such a definition, the symbols a_{n,w} are given by ±(2m-1). The symbols used for filter adaptation are only those that have the largest values. FIG. 25 shows that these symbols are geometrically located at the edge of the original constellation. Because only one symbol level is involved, the calculation of R_w is simply given by:
R.sub.w =a.sub.n,w =2m-1. (95)
The above equation yields the following equality:
R.sub.w.sup.2 =E[a.sub.n,w.sup.2 ]. (96)
Equation (96) further leads to the result:
CF_{a,w} = E[(a_{n,w}^2 - R_w^2)^2] → 0. (97)
Equation (97) shows that zero value can be achieved with such a cost function. That is, with edge-point WMMA, the cost function becomes optimum for any CAP system.
The edge-point and half-constellation WMMA are basically the same except that they use different window parameters. However, the difference in parameters results in different performance. Theoretically, optimum convergence can only be achieved for 16-CAP with half-constellation WMMA, and can be achieved for any CAP application with edge-point WMMA under conditions to be discussed later.
The design parameters of the edge-point constellation are simple and easy to implement. However, expected performance cannot be achieved for high-level CAP applications because other factors also affect convergence, such as the lack of enough data samples y_n.
Filter Adaptation
This section describes the algorithms for updating the tap coefficients for the Half-Constellation WMMA and the Edge-Point WMMA approaches. For simplicity, the previous analysis for WMMA was only given for the in-phase dimension. The complete two-dimensional cost function for WMMA is given by:
CF = E[(y_{n,w}^2 - R_w^2)^2 + (ỹ_{n,w}^2 - R_w^2)^2]. (98)
The cost functions can be applied to both half-constellation and edge-point WMMAs by using different definitions for y.sub.n,w, as described above. The gradients of the cost function in equation (98) with respect to the tap vectors c.sub.n and d.sub.n are equal to:
∇_c = (y_{n,w}^2 - R_w^2) y_{n,w} r_n, and ∇_d = (ỹ_{n,w}^2 - R_w^2) ỹ_{n,w} r_n. (99)
The taps of the filter are then updated in a stochastic fashion in the opposite direction of the gradient:
c_{n+1} = c_n - μ(y_{n,w}^2 - R_w^2) y_{n,w} r_n, (100)
d_{n+1} = d_n - μ(ỹ_{n,w}^2 - R_w^2) ỹ_{n,w} r_n. (101)
Note that y.sub.n,w and R.sub.w are different for the two versions of WMMA.
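A minimal sketch of one windowed-MMA iteration per equations (100) and (101) is given below; the function name and the step size are illustrative assumptions, and the window test implements the sample-selection rule described above.

```python
import numpy as np

def wmma_update(c, d, r, Rw2, mw, mu=0.001):
    """One windowed-MMA update per equations (100) and (101).

    Only samples falling in the sample window (magnitude above the window
    boundary mw) adapt the taps; samples inside the exclusion area leave the
    taps unchanged.  Rw2 is R_w^2, and mw is m for half-constellation WMMA
    or 2(m-1) for edge-point WMMA.
    """
    y, yt = c @ r, d @ r
    if abs(y) > mw:                                    # y_n is a windowed sample y_n,w
        c = c - mu * (y * y - Rw2) * y * r             # equation (100)
    if abs(yt) > mw:                                   # ỹ_n is a windowed sample
        d = d - mu * (yt * yt - Rw2) * yt * r          # equation (101)
    return c, d
```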
In the implementation of the algorithms, the samples y.sub.n,w are normally calculated by using a comparator, or a look-up table. Alternatively, a nonlinear function .function.(.multidot.) is proposed to determine partial samples y.sub.n,w. The function .function.(.multidot.) is defined as:
f(y_n) = (1/2)[1 + sgn(y_n^2 - m_w^2)], and (102)
f(ỹ_n) = (1/2)[1 + sgn(ỹ_n^2 - m_w^2)]. (103)
So that: ##EQU25## where m.sub.w =m for half-constellation WMMA, and m.sub.w =2(m-1) for edge-point WMMA. The use of the nonlinear equation .function.(.multidot.) gives the following relation:
$f(y_n)\, y_n = y_{n,w}$; and $f(\tilde{y}_n)\, \tilde{y}_n = \tilde{y}_{n,w}$.   (105)
The cost function CF can be rewritten as:
$CF = [f(y_n)(y_n^2 - R_w^2)^2 + f(\tilde{y}_n)(\tilde{y}_n^2 - R_w^2)^2]$,   (106)
and the corresponding tap updating algorithm becomes:
$c_{n+1} = c_n - \mu f(y_n)(y_n^2 - R_w^2)\, y_n\, r_n$, and   (107)
$d_{n+1} = d_n - \mu f(\tilde{y}_n)(\tilde{y}_n^2 - R_w^2)\, \tilde{y}_n\, r_n$.   (108)
In the case of the in-phase dimension, either equation (100) or equation (107) can be used for filter adaptation.
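The gated form of equations (107) and (108) can be sketched in the same way, with the indicator $f(\cdot)$ of equations (102)-(104) multiplying the correction term instead of an explicit comparator branch; again, the names below are illustrative only.

```python
import numpy as np

def f(y, m_w):
    """Indicator of equations (102)-(104): 1 inside the window, 0 outside."""
    return 0.5 * (1.0 + np.sign(y * y - m_w * m_w))

def wmma_tap_update_gated(c, d, r, m_w, R_w, mu):
    """One step of equations (107)-(108): f(.) gates the correction term."""
    r = np.asarray(r, dtype=float)
    y = float(np.dot(c, r))          # in-phase equalizer output
    y_t = float(np.dot(d, r))        # quadrature equalizer output
    c = c - mu * f(y, m_w) * (y ** 2 - R_w ** 2) * y * r
    d = d - mu * f(y_t, m_w) * (y_t ** 2 - R_w ** 2) * y_t * r
    return c, d
```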
FIGS. 16 and 26-28 show illustrative plots of converged signal constellations for the various algorithms, based on computer simulation. FIG. 16 shows the signal constellation after convergence with MMA, and FIG. 26 shows the signal constellation after convergence with LMS. Together, these two figures show that even though initial convergence can be achieved with blind algorithms, the LMS algorithm is needed to obtain optimal convergence. FIG. 27 shows the converged constellation with half-constellation WMMA, whereas FIG. 28 shows the converged constellation with edge-point WMMA. From FIGS. 27 and 28, it can be observed that the convergence performance is improved by using half-constellation WMMA, and is further improved by using edge-point WMMA. In fact, a comparison of FIGS. 26 and 28 shows that nearly the same convergence performance is achieved with edge-point WMMA as with the LMS algorithm. It should be noted that in this example the step size used for edge-point WMMA is about five times larger than for the other algorithms; a step size of $\mu = 0.00001$ is used for MMA.
It is recommended that the use of windowed MMA be limited to applications with a limited number of symbol levels. For very large constellations, good performance may be difficult to obtain because of the lack of sufficient equalizer output samples during filter adaptation.
Illustrative embodiments of the inventive concept are shown in FIGS. 11 and 12. FIG. 11 illustrates an embodiment representative of a digital signal processor 400 that is programmed to implement an FSLE in accordance with the principles of the invention. Digital signal processor 400 comprises a central processing unit (processor) 405 and memory 410. A portion of memory 410 is used to store program instructions that, when executed by processor 405, implement the windowed-MMA type algorithm. This portion of memory is shown as 411. Another portion of memory, 412, is used to store tap coefficient values that are updated by processor 405 in accordance with the inventive concept. It is assumed that a received signal 404 is applied to processor 405, which equalizes this signal in accordance with the inventive concept to provide an output signal 406. For the purposes of example only, it is assumed that output signal 406 represents a sequence of output samples of an equalizer. (As known in the art, a digital signal processor may, additionally, further process received signal 404 before deriving output signal 406.) An illustrative software program is not described herein since, after learning of the windowed-MMA type algorithms as described herein, such a program is within the capability of one skilled in the art. Also, it should be noted that any equalizer structures, such as those described earlier, can be implemented by digital signal processor 400 in accordance with the inventive concept.
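Purely as an illustration of how such a program might be organized (none of the names below come from the patent), the sketch that follows keeps the tap coefficients in an object attribute playing the role of memory portion 412 and applies the windowed update of equations (100)-(101) on each received input vector; the program text itself would correspond to memory portion 411.

```python
import numpy as np

class WindowedMMAEqualizer:
    """Hypothetical software analogue of digital signal processor 400 of FIG. 11."""

    def __init__(self, num_taps, m_w, R_w, mu):
        # Tap-coefficient storage, playing the role of memory portion 412.
        self.c = np.zeros(num_taps)   # in-phase tap vector
        self.d = np.zeros(num_taps)   # quadrature tap vector
        self.m_w, self.R_w, self.mu = m_w, R_w, mu

    def process(self, r):
        """Equalize one input vector r and blindly adapt the taps per equations (100)-(101)."""
        r = np.asarray(r, dtype=float)
        y = float(np.dot(self.c, r))      # in-phase output sample
        y_t = float(np.dot(self.d, r))    # quadrature output sample
        if abs(y) > self.m_w:             # only windowed samples drive adaptation
            self.c -= self.mu * (y ** 2 - self.R_w ** 2) * y * r
        if abs(y_t) > self.m_w:
            self.d -= self.mu * (y_t ** 2 - self.R_w ** 2) * y_t * r
        return y, y_t                     # corresponds to output signal 406
```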
FIG. 12 illustrates another alternative embodiment of the inventive concept. Circuitry 500 comprises a central processing unit (processor) 505, and an equalizer 510. The latter is illustratively assumed to be a phase-splitting FSLE as described above. It is assumed that equalizer 510 includes at least one tap-coefficient register for storing values for corresponding tap coefficient vectors (e.g., as shown in FIG. 3). Processor 505 includes memory, not shown, similar to memory 410 of FIG. 11 for implementing the windowed-MMA type algorithms. Equalizer output signal 511, which represents a sequence of equalizer output samples, is applied to processor 505. The latter analyzes equalizer output signal 511, in accordance with the inventive concept, to adapt values of the tap coefficients in such a way as to converge to a correct solution.
The foregoing merely illustrates the principles of the invention and it will thus be appreciated that those skilled in the art will be able to devise numerous alternative arrangements which, although not explicitly described herein, embody the principles of the invention and are within its spirit and scope.
For example, although the invention is illustrated herein as being implemented with discrete functional building blocks, e.g., an equalizer, etc., the functions of any one or more of those building blocks can be carried out using one or more appropriate programmed processors.
In addition, although the inventive concept was described in the context of an FSLE, it is applicable to other forms of adaptive filters, such as, but not limited to, a decision feedback equalizer (DFE). The inventive concept is applicable to all forms of communications systems, e.g., broadcast networks such as high-definition television (HDTV), point-to-multipoint networks such as fiber-to-the-curb (mentioned above), and signal identification or classification applications such as wire-tapping.
Also, although the inventive concept was described in the context of a modified MMA algorithm, the inventive concept is applicable to other forms of equalization.
Claims
  • 1. An improved method for performing equalization in a receiver, the improvement comprising:
  • using a subset of output samples from an equalizer for converging the equalizer in such a way that slicing is not performed on the subset of the output samples and wherein the using step includes the steps of:
  • partitioning a signal point space into an exclusion region and a sample window region; and
  • using that subset of output samples from the equalizer that fall within the sample window for converging the equalizer.
  • 2. The method of claim 1 wherein the partitioning step includes the step of
  • selecting the exclusion region and the sample window region such that along one dimension of the signal point space a number of symbols within the sample window region is equal to one half of a number of symbols in that dimension.
  • 3. The method of claim 1 wherein the partitioning step includes the step of
  • selecting the exclusion region and the sample window region such that the sample window region only includes outer-most symbols of the signal point space.
  • 4. An improved method for performing equalization in a receiver, the improvement comprising:
  • using a subset of output samples from an equalizer for converging the equalizer in such a way that slicing is not performed on the subset of the output samples and wherein the using step includes the step of:
  • adapting a set of tap coefficients of the equalizer in accordance with a multimodulus-based blind equalization technique operative on the subset of output samples.
  • 5. An improved method for performing equalization in a receiver, the improvement comprising:
  • using a subset of output samples from an equalizer for converging the equalizer in such a way that slicing is not performed on the subset of the output samples and wherein the using step includes the step of:
  • adapting a set of tap coefficients of the equalizer in accordance with a constant modulus-based blind equalization technique operative on the subset of output samples.
  • 6. An improved equalizer for use in a receiver for performing equalization; the improvement comprising:
  • a processor that adapts coefficients of the equalizer by using a subset of output samples from the equalizer in such a way that slicing is not performed on the subset of the output samples
  • wherein the output samples have associated coordinate values within a signal point space and the subset of output samples are those output samples having coordinate values outside of an exclusion region of the signal point space.
  • 7. The apparatus of claim 6 wherein the processor is a digital signal processor.
  • 8. The apparatus of claim 6 wherein the processor adapts the coefficients of the equalizer in accordance with a blind equalization algorithm.
  • 9. The apparatus of claim 8 wherein the blind equalization algorithm is based upon a constant modulus algorithm.
  • 10. The apparatus of claim 8 wherein the blind equalization algorithm is based upon a multimodulus algorithm.
  • 11. An improved equalizer for use in a receiver for performing equalization; the improvement comprising:
  • a processor a) for providing an equalizer function for equalizing a received signal, and b) for adapting coefficients of the equalizer function by using a subset of output samples that represent the equalized received signal in such a way that slicing is not performed on the subset of the output samples
  • wherein the output samples have associated coordinate values within a signal point space and the subset of output samples are those output samples having coordinate values outside of an exclusion region of the signal point space.
  • 12. The apparatus of claim 11 wherein the processor is a digital signal processor.
  • 13. The apparatus of claim 11 wherein the processor adapts the coefficients of the equalizer in accordance with a form of blind equalization algorithm.
  • 14. Apparatus for use in a receiver, the apparatus comprising:
  • an equalizer having a set of tap coefficient values and for providing an equalized version of an applied input signal represented by a plurality of output samples; and
  • a processor for adapting the set of tap coefficients as a function of a subset of the plurality of output samples in such a way that slicing is not performed on the subset of the plurality of output samples
  • wherein the plurality of output samples have associated coordinate values within a signal point space and the subset of output samples are those output samples having coordinate values outside of an exclusion region of the signal point space.
  • 15. The apparatus of claim 14 wherein the exclusion region is such that along one dimension of the signal point space a number of symbols within the exclusion region is equal to one half of a number of symbols in that dimension.
  • 16. The apparatus of claim 14 wherein the exclusion region does not include outer-most symbols of the signal point space.
  • 17. Apparatus for use in a receiver, the apparatus comprising:
  • an equalizer having a set of tap coefficient values and for providing an equalized version of an applied input signal represented by a plurality of output samples; and
  • a processor for adapting the set of tap coefficients as a function of a subset of the plurality of output samples in such a way that slicing is not performed on the subset of the plurality of output samples
  • wherein the form of blind equalization algorithm is a multimodulus-based algorithm.
CROSS-REFERENCE TO RELATED APPLICATION

Related subject matter is disclosed in the following co-pending, commonly assigned, U.S. Patent applications of Werner et al.: Ser. No. 08/646,404, filed on May 7, 1996; Ser. No. 08/717,582, filed on Sep. 18, 1996; and Ser. No. 08/744,908, filed on Nov. 8, 1996.
