The present invention relates, in general, to adaptive filters and, more particularly, to a reduced-complexity recursive least squares lattice structure adaptive filter.
Adaptive filters are found in a wide range of applications and come in a wide variety of configurations, each of which has distinctive properties. The particular configuration chosen may depend on the specific properties needed for a target application. These properties, which include, among others, rate of convergence, mis-adjustment, tracking, and computational requirements, are evaluated and weighed against each other to determine the appropriate configuration for the target application.
Of particular interest when choosing an adaptive filter configuration for use in a non-stationary signal environment are the rate of convergence, the mis-adjustment, and the tracking capability. Good tracking capability is generally a function of the convergence rate and mis-adjustment properties of the corresponding algorithm. However, these properties can be contradictory in nature, in that a higher convergence rate typically results in a higher convergence error, or mis-adjustment, of the resulting filter.
A recursive least squares (RLS) algorithm is generally a good tool for the non-stationary signal environment due to its fast convergence rate and low level of mis-adjustment. A recursive least squares lattice (RLSL) algorithm is one particular version of the RLS algorithm. The initial RLSL algorithm was introduced by Simon Haykin and can be found in his book "Adaptive Filter Theory," Third Edition. The RLS class of adaptive filters exhibits fast convergence rates and is relatively insensitive to variations in the eigenvalue spread. Eigenvalues are a measure of the correlation properties of the reference signal, and the eigenvalue spread is typically defined as the ratio of the highest eigenvalue to the lowest eigenvalue. A large eigenvalue spread significantly slows down the rate of convergence of most adaptive algorithms.
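In symbols, if R denotes the correlation matrix of the reference signal, the eigenvalue spread is the ratio shown below (the eigenvalue symbols here are unrelated to the exponential weighting factor λ used later in the recursions):

```latex
\chi(\mathbf{R}) \;=\; \frac{\lambda_{\max}(\mathbf{R})}{\lambda_{\min}(\mathbf{R})}
```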
However, the RLS algorithm typically requires extensive computational resources and can be prohibitive for embedded systems. Accordingly, there is a need for a mechanism by which the computational requirements of an RLSL structure adaptive filter are reduced.
FIGS. 1a-1d illustrate four schematic diagrams of applications employing an adaptive filter;
Illustrative and exemplary embodiments of the invention are described in further detail below with reference to and in conjunction with the figures.
A method for reducing the computational complexity of an m-stage adaptive filter is provided by expanding the weighted sum of forward prediction error squares into a corresponding binomial expansion series, expanding the weighted sum of backward prediction error squares into a corresponding binomial expansion series, and determining the coefficient updates of the adaptive filter with the weighted sums of forward and backward prediction error squares approximated by a select number of terms of their corresponding binomial expansion series. The present invention is defined by the appended claims. This description addresses some aspects of the present embodiments and should not be used to limit the claims.
FIGS. 1a-1d illustrate four schematic diagrams of filter circuits 100 employing an adaptive filter 10. The filter circuits 100 in general, and the adaptive filter 10 in particular, may be constructed in any suitable manner. For example, the adaptive filter 10 may be formed using electrical components such as digital and analog integrated circuits. In other examples, the adaptive filter 10 is formed using a digital signal processor (DSP) operating in response to stored program code and data maintained in a memory. The DSP and memory may be integrated in a single component, such as an integrated circuit, or may be maintained separately. Further, the DSP and memory may be components of another system, such as a speech processing system or a communication device.
In general, an input signal u(n) is supplied to the filter circuit 100 and to adaptive filter 10. As shown, the adaptive filter 10 may be configured in a multitude of arrangements between a system input and a system output. It is intended that the improvements described herein may be applied to the widest variety of applications for the adaptive filter 10.
Now referring to the lattice structure itself, an RLSL algorithm for the RLSL 100 is defined below in terms of Equation 1 through Equation 8.
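The equation images are not reproduced in this text. For readability, a standard set of a priori-error RLSL recursions consistent with the description that follows is sketched below, after Haykin's formulation for real-valued signals; the mapping to the original Equation 1 through Equation 8 numbering, and the placement of the conversion factor update, are assumptions.

```latex
\begin{align}
\eta_m(n)    &= \eta_{m-1}(n) + K_{f,m-1}(n-1)\,\beta_{m-1}(n-1)          \tag{1}\\
\beta_m(n)   &= \beta_{m-1}(n-1) + K_{b,m-1}(n-1)\,\eta_{m-1}(n)          \tag{2}\\
F_m(n)       &= \lambda F_m(n-1) + \gamma_m(n-1)\,|\eta_m(n)|^2           \tag{3}\\
B_m(n)       &= \lambda B_m(n-1) + \gamma_m(n)\,|\beta_m(n)|^2            \tag{4}\\
K_{f,m-1}(n) &= K_{f,m-1}(n-1)
                - \frac{\gamma_{m-1}(n-1)\,\beta_{m-1}(n-1)}{B_{m-1}(n-1)}\,\eta_m(n) \tag{5}\\
K_{b,m-1}(n) &= K_{b,m-1}(n-1)
                - \frac{\gamma_{m-1}(n-1)\,\eta_{m-1}(n)}{F_{m-1}(n)}\,\beta_m(n)     \tag{6}\\
\zeta_m(n)   &= \zeta_{m-1}(n) - K_{m-1}(n-1)\,\beta_{m-1}(n)             \tag{7}\\
K_{m-1}(n)   &= K_{m-1}(n-1)
                + \frac{\gamma_{m-1}(n)\,\beta_{m-1}(n)}{B_{m-1}(n)}\,\zeta_m(n)      \tag{8}\\
\gamma_m(n)  &= \gamma_{m-1}(n)
                - \frac{\gamma_{m-1}^2(n)\,|\beta_{m-1}(n)|^2}{B_{m-1}(n)} \notag
\end{align}
```

with the stage-zero initializations η0(n) = β0(n) = u(n), ζ0(n) = d(n), and γ0(n) = 1.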
Where the variables are defined as follows: u(n) is the input signal and d(n) is the desired response; ηm(n) and βm(n) are the forward and backward a priori prediction errors of stage m at time n; Fm(n) and Bm(n) are the weighted sums of forward and backward prediction error squares; Kf,m(n) and Kb,m(n) are the forward and backward reflection coefficients; Km(n) is the joint-process regression coefficient; γm(n) is the conversion factor; ζm(n) is the a priori estimation error; and λ is the exponential weighting factor.
At stage zero, the RLSL 100 is supplied with the signals u(n) 12 and d(n) 20. Subsequently, for each stage m, the above-defined filter coefficient updates are recursively computed. For example, at stage m and time n, the forward prediction error ηm(n) 102 is the forward prediction error ηm−1(n) 103 of stage m−1 augmented by a combination of the forward reflection coefficient Kf,m−1(n−1) with the delayed backward prediction error βm−1(n−1).
In a similar fashion, at stage m and time n, the backward prediction error βm(n) 106 is the delayed backward prediction error βm−1(n−1) 105 of stage m−1 augmented by a combination of the backward reflection coefficient Kb,m−1(n−1) with the forward prediction error ηm−1(n).
Moreover, the a priori estimation error ζm(n) 107, for stage m at time n, is the a priori estimation error ζm−1(n) 108 of stage m−1 reduced by a combination of the joint-process regression coefficient Km−1(n−1) 109, of stage m−1 at time n−1, with the backward prediction error βm−1(n) 105.
The adaptive filter 100 may be implemented using any suitable component or combination of components. In one embodiment, the adaptive filter is implemented using a DSP in combination with instructions and data stored in an associated memory. The DSP and memory may be part of any suitable system for speech processing or manipulation. The DSP and memory can be a stand-alone system or embedded in another system.
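By way of illustration only, a minimal C sketch of one time-step of such a recursion is given below. It follows the reconstruction sketched above; the type and function names, the filter order, the real-valued arithmetic, and the exact index bookkeeping are assumptions made for readability, not the literal implementation of the RLSL 100.

```c
#include <stddef.h>

#define M_STAGES 16          /* assumed filter order */

typedef struct {
    double kf[M_STAGES];     /* forward reflection coefficients Kf,m     */
    double kb[M_STAGES];     /* backward reflection coefficients Kb,m    */
    double k[M_STAGES];      /* joint-process regression coefficients Km */
    double F[M_STAGES];      /* forward error prediction squares Fm      */
    double B[M_STAGES];      /* backward error prediction squares Bm     */
    double beta_d[M_STAGES]; /* delayed backward errors beta_m(n-1)      */
    double lambda;           /* exponential weighting factor lambda      */
} rlsl_t;

/* One time-step of the baseline RLSL recursion (divides included). */
double rlsl_step(rlsl_t *f, double u, double d)
{
    double eta  = u;     /* stage-0 forward prediction error  eta_0(n)  = u(n) */
    double beta = u;     /* stage-0 backward prediction error beta_0(n) = u(n) */
    double zeta = d;     /* stage-0 a priori estimation error zeta_0(n) = d(n) */
    double gamma = 1.0;  /* conversion factor gamma_0(n) = 1                   */

    for (size_t m = 0; m < M_STAGES; m++) {
        double beta_prev = f->beta_d[m];      /* beta_m(n-1) from last call */
        double eta_next  = eta + f->kf[m] * beta_prev;
        double beta_next = beta_prev + f->kb[m] * eta;

        /* Time-update of the weighted sums of prediction error squares. */
        f->F[m] = f->lambda * f->F[m] + gamma * eta * eta;
        f->B[m] = f->lambda * f->B[m] + gamma * beta * beta;

        /* Coefficient updates; note that each line below needs a divide. */
        f->kf[m] -= gamma * beta_prev * eta_next  / f->B[m];
        f->kb[m] -= gamma * eta       * beta_next / f->F[m];
        zeta     -= f->k[m] * beta;
        f->k[m]  += gamma * beta * zeta           / f->B[m];
        gamma    -= gamma * gamma * beta * beta   / f->B[m];

        f->beta_d[m] = beta;  /* store beta_m(n) for the next time step */
        eta  = eta_next;
        beta = beta_next;
    }
    return zeta;  /* a priori estimation error after the last stage */
}
```

Every pass through the stage loop performs several divides, which is precisely what the transformation described next is designed to remove.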
This RLSL algorithm requires extensive computational resources and can be prohibitive for embedded systems. As such, a mechanism for reducing the computational requirements of the RLSL structure adaptive filter 100 is obtained by using a binomial expansion series in lieu of the divide function in the updates of the forward error prediction squares Fm(n) and the backward error prediction squares Bm(n).
Typically, processors are efficient at adding, subtracting, and multiplying, but not at dividing. Most processors use a successive approximation technique to implement a divide instruction and may require multiple clock cycles to produce a result. As such, to reduce the computational requirements, both the total number of computations in the filter coefficient updates and the number of divides required in those calculations must be reduced. Thus, the RLSL algorithm filter coefficient updates are transformed to consolidate the divides. First, the time (n) and order (m) indices of the RLSL algorithm are translated to form Equation 9 through Equation 17.
Then, the forward error prediction squares Fm(n) and the backward error prediction squares Bm(n) are inverted and redefined to be their reciprocals, as shown in Equation 18, Equation 20, and Equation 21. Thus, by inverting Equation 9 we get:
Then redefine the forward error prediction squares Fm(n):
Then insert into Equation 18 and simplify:
By the same reasoning, the backward error prediction squares of Equation 10 become:
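The equation images for this passage are likewise absent. Collected, the steps above can be reconstructed as follows, assuming the exponentially weighted update of Fm(n) implied by the substitution used below for Equation 32, with the backward case stated by symmetry:

```latex
% Invert the F update (Equation 9):
\frac{1}{F_m(n)} \;=\; \frac{1}{\lambda F_m(n-1) + \gamma_m(n-1)\,|\eta_m(n)|^2}

% Redefine F'_m(n) = 1/F_m(n), insert, and simplify:
F'_m(n) \;=\; \frac{F'_m(n-1)}{\lambda + F'_m(n-1)\,\gamma_m(n-1)\,|\eta_m(n)|^2}

% By the same reasoning, with B'_m(n) = 1/B_m(n):
B'_m(n) \;=\; \frac{B'_m(n-1)}{\lambda + B'_m(n-1)\,\gamma_m(n)\,|\beta_m(n)|^2}
```

In this form a single divide per update remains, which the binomial expansion described below then removes.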
Further, the new definitions for the forward and backward error prediction squares, F′m(n) and B′m(n), are inserted back into the remaining equations (Equation 13, Equation 14, Equation 15, and Equation 17) to produce the algorithm coefficient updates shown below in Equation 22 through Equation 30.
As stated above, the mechanism for reducing the computational requirements of the RLSL structure adaptive filter 100 is provided by using a binomial expansion in lieu of the divide function in the updates of the forward error prediction squares Fm(n) and the backward error prediction squares Bm(n).
Typical processors use an iterative approach to perform a divide function and therefore require substantially more resources and real time than are needed to calculate a multiply or an add function. As such, the divide function present in each of Equation 22 and Equation 23 for computing the updates of the forward error prediction squares F′m(n) and the backward error prediction squares B′m(n) is replaced with a Taylor series expansion to approximate the forward and backward error prediction squares update recursions.
As such, a general binomial series is introduced in Equation 31 as an expansion of Taylor's theorem to provide a tool to estimate the divide function within a given region of convergence. In general, several terms of the series are needed to achieve a predetermined accuracy.
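Equation 31 itself is not reproduced in this text. For the negative-one power required here, the binomial series takes the standard form:

```latex
(a + bx)^{-1} \;=\; \frac{1}{a} \;-\; \frac{b\,x}{a^{2}} \;+\; \frac{b^{2}x^{2}}{a^{3}} \;-\; \frac{b^{3}x^{3}}{a^{4}} \;+\; \cdots,
\qquad |b\,x| < |a|
```

where the inequality defines the region of convergence.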
In order to replace the divide functions in the RLS recursion updates of the forward error prediction squares F′m(n) and the backward error prediction squares B′m(n) found in Equation 22 and Equation 23, respectively, let:
a = λ and bx = −F′m(n−1)γm(n−1)|ηm(n)|²
Then, using the first two terms in the expansion series of Equation 31, the forward error prediction squares F′m(n) becomes:
Since λ is a constant, the quantities 1/λ and 1/λ² can be pre-calculated, which reduces the computational overhead of the recursion loop of the filter updates. After applying the same reasoning to Equation 23, the backward error prediction squares B′m(n) becomes:
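Written out under the substitution above, with the sign convention chosen so that the results agree with the inverted recursions, the two-term approximations referenced in the two preceding sentences take the following form; the correspondence to Equation 32 and Equation 33 of the source is an assumption:

```latex
F'_m(n) \;\approx\; F'_m(n-1)\left[\frac{1}{\lambda}
        \;-\; \frac{F'_m(n-1)\,\gamma_m(n-1)\,|\eta_m(n)|^{2}}{\lambda^{2}}\right]

B'_m(n) \;\approx\; B'_m(n-1)\left[\frac{1}{\lambda}
        \;-\; \frac{B'_m(n-1)\,\gamma_m(n)\,|\beta_m(n)|^{2}}{\lambda^{2}}\right]
```

Only the pre-calculated constants 1/λ and 1/λ², multiplies, and adds remain.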
The resulting RLS algorithm with all divides eliminated is given in Equation 34 through Equation 42, as follows:
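As an illustration of what such a divide-free recursion loop can look like, the earlier C sketch can be revised as follows. The reciprocal arrays, the helper name inv_update, and the two-term truncation mirror the approximations above and are assumptions, not the literal Equation 34 through Equation 42.

```c
#include <stddef.h>

#define M_STAGES 16            /* assumed filter order */

typedef struct {
    double kf[M_STAGES], kb[M_STAGES], k[M_STAGES];
    double Finv[M_STAGES];     /* reciprocals F'_m(n) = 1/F_m(n)      */
    double Binv[M_STAGES];     /* reciprocals B'_m(n) = 1/B_m(n)      */
    double beta_d[M_STAGES];   /* delayed backward errors beta_m(n-1) */
    double inv_lambda;         /* pre-calculated 1/lambda             */
    double inv_lambda2;        /* pre-calculated 1/(lambda*lambda)    */
} rlsl_nodiv_t;

/* Two-term binomial update of a reciprocal error prediction square:
 * only multiplies and adds are used; no divide is performed.         */
static inline double inv_update(const rlsl_nodiv_t *f, double inv_prev,
                                double gamma, double err)
{
    return inv_prev * (f->inv_lambda
                       - inv_prev * gamma * err * err * f->inv_lambda2);
}

/* One divide-free time-step; every former divide is now a multiply
 * by a stored reciprocal.                                            */
double rlsl_step_nodiv(rlsl_nodiv_t *f, double u, double d)
{
    double eta = u, beta = u, zeta = d, gamma = 1.0;

    for (size_t m = 0; m < M_STAGES; m++) {
        double beta_prev = f->beta_d[m];
        double eta_next  = eta + f->kf[m] * beta_prev;
        double beta_next = beta_prev + f->kb[m] * eta;

        f->Finv[m] = inv_update(f, f->Finv[m], gamma, eta);
        f->Binv[m] = inv_update(f, f->Binv[m], gamma, beta);

        f->kf[m] -= gamma * beta_prev * eta_next  * f->Binv[m];
        f->kb[m] -= gamma * eta       * beta_next * f->Finv[m];
        zeta     -= f->k[m] * beta;
        f->k[m]  += gamma * beta * zeta           * f->Binv[m];
        gamma    -= gamma * gamma * beta * beta   * f->Binv[m];

        f->beta_d[m] = beta;
        eta  = eta_next;
        beta = beta_next;
    }
    return zeta;
}
```

The two reciprocal constants are computed once when the filter is initialized, so the per-sample loop contains no divide instructions at all.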
As given earlier in Equation 31, the region of convergence needs to be satisfied for the binomial expansion to hold true, and the term b²x² needs to be substantially smaller than the term a² for a single term in the series to provide sufficient convergence precision. It was found that as λ approaches 1, b²x² becomes substantially smaller than a².
To satisfy the region of convergence criterion, the ratio b²x²/a² only needs to be less than 1, which implies that as the number of terms in the series expansion summed together increases, the convergence error approaches zero. However, to reduce real-time requirements, as few terms as possible should be used while still achieving the required convergence precision.
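As a standard bound, not taken from the source, the truncation error after k terms of the series decays geometrically in the ratio r = |bx/a|:

```latex
\left|\,(a+bx)^{-1} \;-\; \sum_{i=0}^{k-1} \frac{(-b\,x)^{i}}{a^{\,i+1}}\,\right|
\;\le\; \frac{1}{|a|}\cdot\frac{r^{k}}{1-r},
\qquad r \;=\; \left|\frac{b\,x}{a}\right| \;<\; 1
```

Each additional term therefore multiplies the worst-case error by r, and a single correction term suffices once r is small, as it is when λ approaches 1.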
The adaptive filter performance was then measured for three different values of λ, using the full divide and then one, two, and three terms of the Taylor series approximation for comparison. These results are shown in the figures.
In another embodiment, a communication device 900 includes a microphone 904, a speaker 906, and an analog signal processor 908. The microphone 904 converts sound waves impressed upon it into electrical signals. Conversely, the speaker 906 converts electrical signals into audible sound waves. The analog signal processor 908 serves as an interface between the DSP 902, which operates on digital data representative of the electrical signals, and the analog electrical signals useful to the microphone 904 and the speaker 906. In some embodiments, the analog signal processor 908 may be integrated with the DSP 902.
The network connection 910 provides communication of data and other information between the communication device 900 and other components. This communication may be over a wire line, over a wireless link, or a combination of the two. For example, the communication device 900 may be embodied as a cellular telephone and the adaptive filter 912 operates to process audio information for the user of the cellular telephone. In such an embodiment, the network connection 910 is formed by the radio interface circuit that communicates with a remote base station. In another embodiment, the communication device 900 is embodied as a hands-free, in-vehicle audio system and the adaptive filter 912 is operative to serve as part of a double-talk detector of the system. In such an embodiment, the network connection 910 is formed by a wire line connection over a communication bus of the vehicle.
In operation, the adaptive filter 912 receives an input signal from a source and provides a filtered signal as an output. In the illustrated embodiment, the DSP 902 receives digital data from either the analog signal processor 908 or the network connection 910; the analog signal processor 908 and the network connection 910 thus form means for receiving an input signal, and the digital data forms the input signal. As part of audio processing, the processor 916 of the DSP 902 implements the adaptive filter 912. The data forming the input signal is provided to the instructions and data that form the adaptive filter 912. The adaptive filter 912 produces an output signal in the form of output data. The output data may be further processed by the DSP 902 or passed to the analog signal processor 908 or the network connection 910 for further processing.
The communication device 900 may be modified and adapted to other embodiments as well. The embodiments shown and described herein are intended to be exemplary only.
It is therefore intended that the foregoing detailed description be regarded as illustrative rather than limiting, and that it be understood that it is the following claims, including all equivalents, that are intended to define the spirit and scope of this invention.
This application claims the benefit of U.S. Provisional Application No. 60/692,345, filed Jun. 20, 2005, U.S. Provisional Application No. 60/692,236, filed Jun. 20, 2005, and U.S. Provisional Application No. 60/692,347, filed Jun. 20, 2005, all of which are incorporated herein by reference.