This application claims the priority of Korean Patent Application No. 2003-3258, filed on Jan. 17, 2003, in the Korean Intellectual Property Office, the disclosure of which is incorporated herein in its entirety by reference.
1. Field of the Invention
The present invention relates to an adaptive beamformer, and more particularly, to a method and apparatus for adaptive beamforming using a feedback structure.
2. Description of the Related Art
Mobile robots have applications in health-related fields, security, home networking, entertainment, and so forth, and are attracting increasing interest. Interaction between people and mobile robots is necessary for operating the robots. Like a person, a mobile robot with a vision system has to recognize people and its surroundings, find the position of a person talking in its vicinity, and understand what the person is saying.
A voice input system of the mobile robot is indispensable for interaction between man and robot and is an important factor affecting autonomous mobility. Important factors affecting the voice input system of a mobile robot in an indoor environment are noise, reverberation, and distance. There are a variety of noise sources and reverberation due to walls or other objects in the indoor environment. Low frequency components of a voice are more attenuated than high frequency components with respect to distance. Accordingly, for proper interaction between a person and an autonomous mobile robot within a house, a voice input system has to enable the robot to recognize the person's voice at a distance of several meters.
Such a voice input system generally uses a microphone array comprising at least two microphones to improve voice detection and recognition. In order to remove noise components contained in a speech signal input via the microphone array, a single channel speech enhancement method, an adaptive acoustic noise canceling method, a blind signal separation method, and a generalized sidelobe canceling method are employed.
The single channel speech enhancement method, disclosed in "Spectral Enhancement Based on Global Soft Decision" (IEEE Signal Processing Letters, Vol. 7, No. 5, pp. 108-110, 2000) by Nam-Soo Kim and Joon-Hyuk Chang, uses one microphone and ensures high performance only when the statistical characteristics of the noise do not vary with time, as with stationary background noise. The adaptive acoustic noise canceling method, disclosed in "Adaptive Noise Canceling: Principles and Applications" (Proceedings of the IEEE, Vol. 63, No. 12, pp. 1692-1716, 1975) by B. Widrow et al., uses two microphones. Here, one of the two microphones is a reference microphone intended to receive only noise. Thus, if the reference microphone cannot receive noise alone, or if the noise it receives contains other components, the performance of the adaptive acoustic noise canceling method drops sharply. Also, the blind signal separation method is difficult to use in real environments and to implement in real-time systems.
Referring to FIG. 1, a conventional adaptive beamformer using the generalized sidelobe canceling method includes a fixed beamformer (FBF) 11, an adaptive blocking matrix (ABM) 13, and an adaptive multiple-input canceller (AMC) 15. The FBF 11 is a delay-and-sum beamformer that compensates for time delays of the signals received via a microphone array and adds the compensated signals to output a signal b(k).
The operations of the ABM 13 and the AMC 15 shown in FIG. 1 will now be described in more detail with reference to FIG. 2.
Referring to FIG. 2, an mth channel of the ABM 13 and the AMC 15 includes an adaptive blocking filter (ABF) 21, a first subtractor 23, an adaptive canceling filter (ACF) 25, and a second subtractor 27.
An ABF 21 adaptively filters the signal b(k) output from the FBF 11 according to the signal output from a first subtractor 23 so that a characteristic of speech components of the filtered signal output from the ABF 21 is the same as that of speech components of a microphone signal x′m(k) that is delayed for a predetermined period of time. The first subtractor 23 subtracts the signal output from the ABF 21 from the microphone signal x′m(k), where m is an integer between 1 and M, to obtain and output a signal zm(k) which is generated by canceling speech components S from the microphone signal x′m(k).
An ACF 25 adaptively filters the signal zm(k) output from the first subtractor 23 according to the signal output from a second subtractor 27 so that a characteristic of noise components of the filtered signal output from the ACF 25 is the same as that of noise components of the signal b(k). The second subtractor 27 subtracts the signal output from the ACF 25 from the signal b(k) and outputs a signal y(k) which is generated by canceling noise components N from the signal b(k).
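For illustration only, the per-channel feedforward path described above (ABF 21, first subtractor 23, ACF 25, second subtractor 27) might be sketched as follows. This is a minimal sketch under assumptions: the filter length L, the step size mu, and the NLMS-style adaptation rule are not specified in the related-art description, and the function name gsc_channel is hypothetical.

```python
import numpy as np

def gsc_channel(b, x_m, L=32, mu=1e-3, eps=1e-8):
    """One channel of the feedforward GSC path: b is the FBF output b(k),
    x_m is the delayed microphone signal x'_m(k)."""
    K = len(b)
    a = np.zeros(L)           # ABF 21 coefficients (feedforward FIR)
    c = np.zeros(L)           # ACF 25 coefficients (feedforward FIR)
    b_buf = np.zeros(L)       # current and past samples of b(k)
    z_buf = np.zeros(L)       # current and past samples of z_m(k)
    y = np.zeros(K)
    for k in range(K):
        b_buf = np.roll(b_buf, 1); b_buf[0] = b[k]
        z = x_m[k] - a @ b_buf              # first subtractor 23: block speech
        z_buf = np.roll(z_buf, 1); z_buf[0] = z
        y[k] = b[k] - c @ z_buf             # second subtractor 27: cancel noise
        # NLMS-style adaptation (an assumption; the text only states that the
        # filters are trained adaptively and need speech presence/absence intervals)
        a += mu * z * b_buf / (b_buf @ b_buf + eps)
        c += mu * y[k] * z_buf / (z_buf @ z_buf + eps)
    return y
```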
However, the above-described generalized sidelobe canceling method has the following drawbacks. The delay-and-sum beamformer of the FBF 11 has to generate the signal b(k) with a very high SNR so that only pure noise signals are input to the AMC 15. However, because the delay-and-sum beamformer outputs a signal whose SNR is not very high, the overall performance drops. As a result, since the ABM 13 outputs a noise signal containing a speech signal, the AMC 15, which uses the output of the ABM 13, regards the speech components contained in that output as noise and cancels them. Therefore, the adaptive beamformer finally outputs a speech signal containing noise components. Also, because the filters used in the generalized sidelobe canceling method have a feedforward connection structure, finite impulse response (FIR) filters are employed. When such FIR filters are used in the feedforward connection structure, 1000 or more filter taps are needed in a room reverberation environment. In addition, if the ABF 21 and the ACF 25 are not properly trained, the performance of the adaptive beamformer may deteriorate. Thus, speech presence intervals and speech absence intervals are necessary for training the ABF 21 and the ACF 25; however, such training intervals are generally unavailable in practice. Moreover, because adaptation of the ABM 13 and the AMC 15 has to be performed alternately, a voice activity detector (VAD) is needed. In other words, for adaptation of the ABF 21, a speech component is a desired signal and a noise component is an undesired signal, whereas for adaptation of the ACF 25, a noise component is a desired signal and a speech component is an undesired signal.
The present invention provides a method of adaptive beamforming using a feedback structure capable of almost completely canceling noise components contained in a wideband speech signal input from a microphone array comprising at least two microphones.
The present invention also provides an adaptive beamforming apparatus including a feedback structure to cancel noise components contained in wideband speech signals input from a microphone array.
Additional aspects and/or advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
According to an aspect of the present invention, there is provided an adaptive beamforming method including compensating for time delays of M noise-containing speech signals input via a microphone array having M microphones (M is an integer greater than or equal to 2), and generating a sum signal of the M compensated noise-containing speech signals; and extracting pure noise components from the M compensated noise-containing speech signals using M adaptive blocking filters that are connected to M adaptive canceling filters in a feedback structure and extracting pure speech components from the sum signal using the M adaptive canceling filters that are connected to the M adaptive blocking filters in the feedback structure.
According to another aspect of the present invention, there is also provided an adaptive beamforming apparatus including: a fixed beamformer that compensates for time delays of M noise-containing speech signals input via a microphone array having M microphones (M is an integer greater than or equal to 2), and generates a sum signal of the M compensated noise-containing speech signals; and a multi-channel signal separator that extracts pure noise components from the M compensated noise-containing speech signals using M adaptive blocking filters that are connected to M adaptive canceling filters in a feedback structure and extracts pure speech components from the sum signal using the M adaptive canceling filters that are connected to the M adaptive blocking filters in the feedback structure.
In an aspect of the present invention, the multi-channel signal separator includes a first filter that filters a noise-removed sum signal through the M adaptive blocking filters; a first subtractor that subtracts signals output from the M adaptive blocking filters from the M compensated noise-containing speech signals using M subtractors; a second filter that filters M subtraction results of the first subtractor through the M adaptive canceling filters; a second subtractor that subtracts signals output from the M adaptive canceling filters from the sum signal using M subtractors, and inputs M subtraction results to the M adaptive blocking filters as the noise-removed sum signal; and a second adder that adds signals output from the M subtractors of the second subtractor.
In an aspect of the present invention, the multi-channel signal separator includes a first filter that filters a noise-removed sum signal through the M adaptive blocking filters; a first subtractor that subtracts signals output from the M adaptive blocking filters from the M compensated noise-containing speech signals using M subtractors; a second filter that filters signals output from the M subtractors of the first subtractor through the M adaptive canceling filters; a second adder that adds signals output from the M adaptive canceling filters of the second filter; and a second subtractor that subtracts the signal output from the second adder from the sum signal output from the fixed beamformer and inputs the subtraction result to the M adaptive blocking filters as the noise-removed sum signal.
These and/or other aspects and advantages of the invention will become apparent and more readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings.
Reference will now be made in detail to the embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to the like elements throughout. The embodiments are described below to explain the present invention by referring to the figures.
Hereinafter, embodiments of the present invention will be described in detail with reference to the attached drawings. The term "speech" as used hereinafter implicitly includes any target signal to which the present invention may be applied.
Referring to FIG. 3, in an adaptive beamformer according to an embodiment of the present invention, an ABF 31 and an ACF 35 are connected in a feedback structure. The ABF 31 adaptively filters a signal y(k) output from a second subtractor 37 according to a signal output from a first subtractor 33 so that a characteristic of speech components of the filtered signal output from the ABF 31 is the same as that of speech components of a microphone signal x′m(k) that is delayed for a predetermined period of time. The first subtractor 33 subtracts the signal output from the ABF 31 from the microphone signal x′m(k) and outputs a signal zm(k) in which speech components are canceled.
The ACF 35 adaptively filters the signal zm(k) output from the first subtractor 33 according to the signal output from the second subtractor 37 so that a characteristic of noise components of the filtered signal output from the ACF 35 is the same as that of noise components of the signal b(k) output from the FBF 11 shown in FIG. 1. The second subtractor 37 subtracts the signal output from the ACF 35 from the signal b(k) and outputs the signal y(k), which is fed back to the ABF 31.
Referring to FIG. 4, an adaptive beamforming apparatus according to an embodiment of the present invention includes an FBF 410 and a multi-channel signal separator 430. The FBF 410 compensates for the time delays of M noise-containing speech signals input via a microphone array 411 having M microphones and outputs M compensated speech signals x1′(k), x2′(k) and xM′(k).
The first adder 417 adds the compensated speech signals x1′(k), x2′(k) and xM′(k) and outputs a signal b(k). The signal b(k) output from the first adder 417 can be represented as in Equation 1.
b(k)=x1′(k)+x2′(k)+ . . . +xM′(k) (1)
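A minimal sketch of the delay-compensate-and-sum operation of the FBF 410 (Equation 1) is given below, assuming nonnegative integer sample delays tau that are supplied externally; the text does not specify how the time delays are estimated, and the function name fixed_beamformer is hypothetical.

```python
import numpy as np

def fixed_beamformer(x, tau):
    """x: (M, K) microphone signals; tau: (M,) nonnegative integer delays in samples.
    Returns the delay-compensated signals x'_m(k) and the sum signal b(k)."""
    M, K = x.shape
    x_delayed = np.zeros_like(x)
    for m in range(M):
        d = int(tau[m])
        # delay channel m by d samples so the target speech aligns across channels
        x_delayed[m, d:] = x[m, :K - d]
    b = x_delayed.sum(axis=0)       # Equation 1: b(k) = x'_1(k) + ... + x'_M(k)
    return x_delayed, b
```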
In the multi-channel signal separator 430, the M ABFs 431a and 431b adaptively filter signals output from the M subtractors 437a and 437b of the second subtractor 437 according to signals output from the M subtractors 433a and 433b of the first subtractor 433, so that a characteristic of speech components of the filtered signals output from the M ABFs 431a and 431b is the same as that of speech components of a microphone signal x′m(k) that is delayed for a predetermined period of time.
The M subtractors 433a and 433b of the first subtractor 433 respectively subtract the signals output from the M ABFs 431a and 431b from the speech signals x1′(k) and xM′(k), and respectively output signals u1(k) and uM(k) to the M ACFs 435a and 435b. When a coefficient vector of the mth ABF of the first filter 431 is hm(k) and the number of taps is L, the signal um(k) output from the M subtractors 433a and 433b of the first subtractor 433 can be represented as in Equation 2.
um(k)=x′m(k)−hTm(k)wm(k) (2)
wherein, hTm(k) and wm(k) can be represented as in Equations 3 and 4, respectively.
hm(k)=[hm,1(k), hm,2(k), . . . , hm,L(k)]T (3)
wherein, hm,l(k) denotes the lth coefficient of hm(k).
wm(k)=[wm(k−1), wm(k−2), . . . , wm(k−L)]T (4)
wherein, wm(k) denotes a vector collecting L past values of wm(k), and L denotes the number of filter taps of the M ABFs 431a and 431b.
The M ACFs 435a and 435b of the second filter 435 adaptively filter the signals u1(k) and uM(k) output from the M subtractors 433a and 433b of the first subtractor 433 according to signals output from the M subtractors 437a and 437b of the second subtractor 437, so that a characteristic of noise components of the filtered signals output from the M ACFs 435a and 435b is the same as that of noise components of the signal b(k) output from the FBF 410.
The M subtractors 437a and 437b of the second subtractor 437 respectively subtract the signals output from the M ACFs 435a and 435b of the second filter 435 from the signal b(k) output from the FBF 410, and output w1(k) and wM(k) to the second adder 439. When a coefficient vector of the mth ACF of the second filter 435 is gm(k) and the number of taps is N, the signal wm(k) output from the M subtractors 437a and 437b of the second subtractor 437 can be represented as in Equation 5.
wm(k)=b(k)−gTm(k)um(k) (5)
wherein, gTm(k) and um(k) can be represented as in Equations 6 and 7, respectively.
gm(k)=[gm,1(k), gm,2(k), . . . , gm,N(k)]T (6)
wherein, gm,n(k) denotes the nth coefficient of gm(k).
um(k)=[um(k−1), um(k−2), . . . , um(k−N)]T (7)
wherein, um(k) denotes a vector collecting N past values of um(k) and N denotes the number of filter taps of the M ACFs 435a and 435b.
The second adder 439 adds w1(k) and wM(k) output from the M subtractors 437a and 437b of the second subtractor 437 and outputs a signal y(k) in which noise components are canceled. The signal y(k) output from the second adder 439 can be represented as in Equation 8.
y(k)=w1(k)+w2(k)+ . . . +wM(k) (8)
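As a minimal illustration of the signal flow of Equations 2 through 8, the following sketch processes one sample at a time with the coefficients hm(k) and gm(k) held fixed; the adaptation of Equations 16 and 17 below could be inserted just before the buffers are shifted. The function name separate_fig4 and the array shapes are illustrative assumptions.

```python
import numpy as np

def separate_fig4(x_delayed, b, h, g):
    """x_delayed: (M, K) delayed microphone signals x'_m(k); b: (K,) FBF output b(k);
    h: (M, L) ABF coefficients h_m; g: (M, N) ACF coefficients g_m."""
    M, K = x_delayed.shape
    L, N = h.shape[1], g.shape[1]
    w_buf = np.zeros((M, L))   # [w_m(k-1), ..., w_m(k-L)] for each channel m
    u_buf = np.zeros((M, N))   # [u_m(k-1), ..., u_m(k-N)] for each channel m
    y = np.zeros(K)
    for k in range(K):
        u = x_delayed[:, k] - np.einsum('ml,ml->m', h, w_buf)   # Equation 2
        w = b[k] - np.einsum('mn,mn->m', g, u_buf)               # Equation 5
        y[k] = w.sum()                                           # Equation 8
        # feedback: push the newest samples into the buffers (index 0 = most recent)
        w_buf = np.roll(w_buf, 1, axis=1); w_buf[:, 0] = w
        u_buf = np.roll(u_buf, 1, axis=1); u_buf[:, 0] = u
    return y
```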
Referring to FIG. 5, an adaptive beamforming apparatus according to another embodiment of the present invention includes an FBF 510, which compensates for the time delays of M noise-containing speech signals input via a microphone array 511 and outputs a sum signal b(k), and a multi-channel signal separator having a first filter 531 with M ABFs 531a, 531b and 531c, a first subtractor 533, a second filter 535 with M ACFs 535a, 535b and 535c, a second adder 537, and a second subtractor 539. The M ABFs 531a, 531b and 531c adaptively filter the signal y(k) output from the second subtractor 539 according to the signals output from the M subtractors 533a, 533b and 533c of the first subtractor 533, so that a characteristic of speech components of the filtered signals output from the M ABFs 531a, 531b and 531c is the same as that of speech components of the microphone signals x1′(k), x2′(k) and xM′(k) that are delayed for a predetermined period of time.
The M subtractors 533a, 533b and 533c of the first subtractor 533 respectively subtract the signals output from ABFs 531a, 531b and 531c from microphone signals x1′(k), x2′(k) and xM′(k) delayed for a predetermined period of time and output signals z1(k), z2(k) and zM(k) to the M ACFs 535a, 535b and 535c of the second filter 535. When a coefficient vector of the mth ABF of the first filter 531 is hm(k) and the number of taps is L, the signal zm(k) output from the M subtractors 533a, 533b and 533c of the first subtractor 533 can be represented as in Equation 9.
zm(k)=x′m(k)−hTm(k)y(k), m=1, . . . , M (9)
wherein, hTm(k) and y(k) can be represented as in Equations 10 and 11, respectively.
hm(k)=[hm,1(k), hm,2(k), . . . , hm,L(k)]T (10)
wherein, hm,l(k) denotes the lth coefficient of hm(k).
y(k)=[y(k−1), y(k−2), . . . , y(k−L)]T (11)
wherein, y(k) denotes a vector collecting L past values of y(k) and L denotes the number of filter taps of the M ABFs 531a, 531b and 531c.
The M ACFs 535a, 535b and 535c of the second filter 535 adaptively filter the signals z1(k), z2(k) and zM(k) output from the M subtractors 533a, 533b and 533c of the first subtractor 533 according to a signal output from the second subtractor 539, so that a characteristic of noise components of a signal v(k) output from the second adder 537 is the same as that of noise components of the signal b(k) output from the FBF 510.
The second adder 537 adds the signals output from the M ACFs 535a, 535b and 535c. When a coefficient vector of the mth ACF of the second filter 535 is gm(k) and the number of taps is N, a signal v(k) output from the second adder 537 can be represented as in Equation 12.
v(k)=gT1(k)z1(k)+gT2(k)z2(k)+ . . . +gTM(k)zM(k) (12)
wherein, gTm(k) and zm(k) can be represented as in Equations 13 and 14, respectively.
gm(k)=[gm,1(k), gm,2(k), . . . , gm,N(k)]T (13)
wherein, gm,n(k) denotes the nth coefficient of gm(k).
zm(k)=[zm(k−1), zm(k−2), . . . , zm(k−N)]T (14)
wherein, zm(k) denotes a vector collecting N past values of zm(k) and N denotes the number of filter taps of the M ACFs 535a, 535b and 535c.
The second subtractor 539 subtracts the signal v(k) output from the second adder 537 from the signal b(k) output from the FBF 510 and outputs the signal y(k). The signal y(k) output from the second subtractor 539 can be represented as in Equation 15.
y(k)=b(k)−v(k) (15)
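A corresponding per-sample sketch of the signal flow of Equations 9 through 15 is shown below, again with the coefficients held fixed; the adaptation of Equations 18 and 19 below could be inserted just before the buffers are shifted. The function name separate_fig5 and the array shapes are illustrative assumptions.

```python
import numpy as np

def separate_fig5(x_delayed, b, h, g):
    """x_delayed: (M, K) delayed microphone signals x'_m(k); b: (K,) FBF output b(k);
    h: (M, L) ABF coefficients h_m; g: (M, N) ACF coefficients g_m."""
    M, K = x_delayed.shape
    L, N = h.shape[1], g.shape[1]
    y_buf = np.zeros(L)         # [y(k-1), ..., y(k-L)], shared by all M ABFs
    z_buf = np.zeros((M, N))    # [z_m(k-1), ..., z_m(k-N)] for each channel m
    y = np.zeros(K)
    for k in range(K):
        z = x_delayed[:, k] - h @ y_buf            # Equation 9
        v = np.einsum('mn,mn->', g, z_buf)         # Equation 12
        y[k] = b[k] - v                            # Equation 15
        # feedback: push the newest samples into the buffers (index 0 = most recent)
        y_buf = np.roll(y_buf, 1); y_buf[0] = y[k]
        z_buf = np.roll(z_buf, 1, axis=1); z_buf[:, 0] = z
    return y
```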
In the above-described embodiments, the M ABFs 431a and 431b of the first filter 431, the M ABFs 531a, 531b and 531c of the first filter 531, the M ACFs 435a and 435b of the second filter 435, and the M ACFs 535a, 535b and 535c of the second filter 535 illustrated in FIGS. 4 and 5 can be realized as finite impulse response (FIR) filters.
Coefficients of the FIR filters are updated by the information maximization algorithm proposed by Anthony J. Bell. The information maximization algorithm is a statistical learning rule well known in the field of independent component analysis, by which non-Gaussian data structures of latent sources are found from sensor array observations on the assumption that the latent sources are statistically independent. Because the information maximization algorithm does not need a voice activity detector (VAD), coefficients of ABFs and ACFs can be automatically adapted without knowledge of the desired and undesired signal levels.
According to the information maximization algorithm, coefficients of the M ABFs 431a and 431b and the M ACFs 435a and 435b are updated as in Equations 16 and 17.
hm,l(k+1)=hm,l(k)+αSGN(um(k))wm(k−l) (16)
gm,n(k+1)=gm,n(k)+βSGN(wm(k))um(k−n) (17)
wherein, α and β denote step sizes for learning rules and SGN(·) is a sign function which is +1 if an input is greater than zero and −1 if the input is less than zero.
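Inside the per-sample loop of the FIG. 4 sketch above, the updates of Equations 16 and 17 could be applied immediately before w_buf and u_buf are shifted, as sketched below; note that np.sign returns 0 for a zero-valued input, a minor deviation from the SGN(·) defined above, and the function name update_fig4 is hypothetical.

```python
import numpy as np

def update_fig4(h, g, u, w, w_buf, u_buf, alpha, beta):
    """One adaptation step per Equations 16 and 17.
    u, w: (M,) current u_m(k) and w_m(k); w_buf: (M, L) past w_m; u_buf: (M, N) past u_m."""
    h += alpha * np.sign(u)[:, None] * w_buf   # h_{m,l} += alpha*SGN(u_m(k))*w_m(k-l)
    g += beta * np.sign(w)[:, None] * u_buf    # g_{m,n} += beta*SGN(w_m(k))*u_m(k-n)
    return h, g
```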
According to the information maximization algorithm, coefficients of the M ABFs 531a, 531b and 531c and the M ACFs 535a, 535b and 535c are updated as in Equations 18 and 19.
hm,l(k+1)=hm,l(k)+αSGN(zm(k))y(k−l) (18)
gm,n(k+1)=gm,n(k)+βSGN(y(k))zm(k−n) (19)
wherein, α and β denote step sizes for learning rules and SGN(·) is a sign function which is +1 if an input is greater than zero and −1 if the input is less than zero. The sign function SGN(·) could be replaced by any kind of saturation function, such as a sigmoid function and a tanh(·) function.
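The corresponding step for Equations 18 and 19, with the nonlinearity passed in so that SGN(·) can be replaced by a saturation function such as tanh(·) as noted above, might look as follows; it would be called in the FIG. 5 sketch immediately before y_buf and z_buf are shifted, and the function name update_fig5 is hypothetical.

```python
import numpy as np

def update_fig5(h, g, z, y_k, y_buf, z_buf, alpha, beta, nonlin=np.sign):
    """One adaptation step per Equations 18 and 19; nonlin may be np.sign or np.tanh.
    z: (M,) current z_m(k); y_k: scalar y(k); y_buf: (L,) past y; z_buf: (M, N) past z_m."""
    h += alpha * nonlin(z)[:, None] * y_buf[None, :]   # h_{m,l} += alpha*SGN(z_m(k))*y(k-l)
    g += beta * nonlin(y_k) * z_buf                    # g_{m,n} += beta*SGN(y(k))*z_m(k-n)
    return h, g
```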
In addition, the coefficients of the M ABFs 431a and 431b, the M ABFs 531a, 531b and 531c, the M ACFs 435a and 435b, and the M ACFs 535a, 535b and 535c can be updated using other statistical learning algorithms, such as a least square algorithm or its variant, a normalized least square algorithm.
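By way of illustration only, a normalized least-square-type update for the FIG. 5 quantities might take the following form; the step size mu and the regularizer eps are assumptions, and this is a substitute for, not part of, the information maximization rule described above.

```python
import numpy as np

def update_fig5_nlms(h, g, z, y_k, y_buf, z_buf, mu=0.1, eps=1e-8):
    """Normalized least-square-type adaptation of h_m and g_m (illustrative)."""
    # adapt each ABF to reduce z_m(k), normalized by the energy of the y buffer
    h += mu * z[:, None] * y_buf[None, :] / (y_buf @ y_buf + eps)
    # adapt each ACF to reduce y(k), normalized by the energy of its z buffer
    g += mu * y_k * z_buf / ((z_buf * z_buf).sum(axis=1, keepdims=True) + eps)
    return h, g
```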
As described above, when the M ABFs 431a and 431b and the M ACFs 435a and 435b, and the M ABFs 531a, 531b and 531c and the M ACFs 535a, 535b and 535c are FIR filters connected in a feedback structure, and the number of microphones of each of the microphone arrays 411 and 511 is 8, the number of filter taps of the adaptive beamformer shown in FIGS. 4 and 5 can be greatly reduced compared with the 1000 or more taps required by the conventional feedforward structure in a room reverberation environment.
The results of an objective evaluation of the performance of the two adaptive beamformers in the experimental environment, i.e., a comparison of SNRs, are shown in Table 1 (all values are in dB).
As can be seen in Table 1, the SNR in a beamforming method according to the present invention is roughly double the SNR in a beamforming method according to the prior art.
For a subjective evaluation in the experimental environment, i.e., an AB preference test, ten people listened to the outputs of a beamformer according to the prior art and a beamformer according to the present invention and were then asked to choose one of the following sentences: "A is much better than B", "A is better than B", "A and B are the same", "A is worse than B", and "A is much worse than B". A test program randomly determined which of the two beamformers would output signal A. Two points were given for "much better", one point for "better", and no points for "the same", and the results were summed. The subjective evaluation compared 40 words under fan noise and another 40 words under music noise, and the results of the comparison are shown in Table 2.
As can be seen in Table 2, the outputs of the beamformer according to the present invention are superior to the outputs of the beamformer according to the prior art.
As described above, according to the present invention, by connecting the ABFs and the ACFs in a feedback structure, noise components contained in a wideband speech signal input via a microphone array comprising at least two microphones can be almost completely canceled. Also, although the ABFs and the ACFs are realized as FIR filters, because they are connected in a feedback structure they effectively act as IIR filters, which reduces the number of filter taps. In addition, since an information maximization algorithm can be used to learn the coefficients of the ABFs and the ACFs, the number of parameters necessary for learning can be reduced, and a VAD for detecting whether speech signals exist is not necessary.
Moreover, a method of and apparatus for adaptive beamforming according to the present invention are not greatly affected by the size, arrangement, or structure of a microphone array. Also, the method and apparatus are more robust against look direction errors than the conventional art, regardless of the type of noise.
The present invention can be realized as computer-readable code on a computer-readable recording medium. Such a computer-readable medium may be any kind of recording medium in which computer-readable data is stored. Examples of such computer-readable media include ROMs, RAMs, CD-ROMs, magnetic tapes, floppy discs, optical data storage devices, and carrier waves (e.g., transmission via the Internet). Also, the computer-readable code can be stored on computer-readable media distributed over computers connected via a network. Furthermore, functional programs, code, and code segments for realizing the present invention can be easily construed by programmers skilled in the art.
Moreover, a method of and apparatus for adaptive beamforming according to the present invention can be applied to autonomous mobile robots to which microphone arrays are attached, and to vocal communication with electronic devices in an environment where a user is distant from a microphone. Examples of such electronic devices include personal digital assistants (PDAs), WebPads, and portable phone terminals in automobiles, which have a small number of microphones. With the present invention, the performance of a voice recognizer can be considerably improved.
Although a few embodiments of the present invention have been shown and described, it would be appreciated by those skilled in the art that changes may be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the claims and their equivalents.