System, method and apparatus for cancelling noise

Information

  • Patent Grant
  • Patent Number
    6,363,345
  • Date Filed
    Thursday, February 18, 1999
  • Date Issued
    Tuesday, March 26, 2002
Abstract
A threshold detector precisely detects the positions of the noise elements, even within continuous speech segments, by determining whether frequency spectrum elements, or bins, of the input signal are within a threshold set according to current and future minimum values of the frequency spectrum elements. In addition, the threshold is continuously updated and re-initialized within a predetermined period of time. The estimated magnitude of the input audio signal is obtained using a multiplying combination of the real and imaginary parts of the input in accordance with the higher and lower values of the real and imaginary parts of the signal. In order to further reduce instability of the spectral estimation, a two-dimensional smoothing is applied to the signal estimate using neighboring frequency bins and an exponential average over time. A filter multiplication effects the subtraction, thereby avoiding phase calculation difficulties and effecting a full-wave rectification which further reduces artifacts. Since the noise elements are determined within continuous speech segments, the noise is canceled from the audio signal nearly continuously, thereby providing excellent noise cancellation characteristics. Residual noise reduction reduces the residual noise remaining after noise cancellation. Implementation may be effected in various noise canceling schemes, including adaptive beamforming, and as computer program applications installed as software or in hardware.
Description




FIELD OF THE INVENTION




The present invention relates to noise cancellation and reduction and, more specifically, to noise cancellation and reduction using spectral subtraction.




BACKGROUND OF THE INVENTION




Ambient noise added to speech degrades the performance of speech processing algorithms. Such processing algorithms may include dictation, voice activation, voice compression and other systems. In such systems, it is desired to reduce the noise and improve the signal-to-noise ratio (S/N ratio) without affecting the speech and its characteristics.




Near field noise canceling microphones provide a satisfactory solution but require that the microphone be in the proximity of the voice source (e.g., the mouth). In many cases, this is achieved by mounting the microphone on the boom of a headset, which situates the microphone proximate the mouth of the wearer. However, the headset has proven to be either uncomfortable to wear or too restricting for operation in, for example, an automobile.




Microphone array technology in general, and adaptive beamforming arrays in particular, handle severe directional noises in the most efficient way. These systems map the noise field and create nulls towards the noise sources. The number of nulls is limited by the number of microphone elements and processing power. Such arrays have the benefit of hands-free operation without the necessity of a headset.




However, when the noise sources are diffused, the performance of the adaptive system will be reduced to the performance of a regular delay and sum microphone array, which is not always satisfactory. This is the case where the environment is quite reverberant, such as when the noises are strongly reflected from the walls of a room and reach the array from an infinite number of directions. Such is also the case in a car environment for some of the noises radiated from the car chassis.




OBJECTS AND SUMMARY OF THE INVENTION




The spectral subtraction technique provides a solution to further reduce the noise by estimating the noise magnitude spectrum of the polluted signal. The technique estimates the magnitude spectral level of the noise by measuring it during non-speech time intervals detected by a voice switch, and then subtracting the noise magnitude spectrum from the signal. This method, described in detail in Suppression of Acoustic Noise in Speech Using Spectral Subtraction (Steven F. Boll, IEEE Trans. ASSP-27, No. 2, April 1979), achieves good results for stationary diffused noises that are not correlated with the speech signal. The spectral subtraction method, however, creates artifacts, sometimes described as musical noise, that may reduce the performance of the speech algorithm (such as vocoders or voice activation) if the spectral subtraction is uncontrolled. In addition, the spectral subtraction method erroneously assumes that the voice switch accurately detects the presence of speech and locates the non-speech time intervals. This assumption is reasonable for off-line systems but difficult to achieve in real-time systems.




More particularly, the noise magnitude spectrum is estimated by performing a 256-point FFT of the non-speech time intervals and computing the energy of each frequency bin. The FFT is performed after the time domain signal is multiplied by a shading window (Hanning or other) with an overlap of 50%. The energy of each frequency bin is averaged with neighboring FFT time frames. The number of frames is not fixed but depends on the stability of the noise. For a stationary noise, it is preferred that many frames be averaged to obtain a better noise estimation. For a non-stationary noise, long averaging may be harmful. Problematically, there is no means to know a priori whether the noise is stationary or non-stationary.




Assuming the noise magnitude spectrum estimation is calculated, the input signal is multiplied by a shading window (Hanning or other), an FFT is performed (256 points or other) with an overlap of 50% and the magnitude of each bin is averaged over 2-3 FFT frames. The noise magnitude spectrum is then subtracted from the signal magnitude. If the result is negative, the value is replaced by a zero (Half Wave Rectification). It is recommended, however, to further reduce the residual noise present during non-speech intervals by replacing low values with a minimum value (or zero) or by attenuating the residual noise by 30 dB. The resulting output is the noise free magnitude spectrum.
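
As a minimal sketch of the per-bin subtraction just described, the C fragment below assumes the averaged signal magnitude and the estimated noise magnitude are already available for each bin; the function and array names, and the choice of a -30 dB residual floor rather than outright zeroing, are illustrative assumptions rather than details taken from this description.

    #include <math.h>

    #define NBINS 256

    /* Subtract the estimated noise magnitude from the signal magnitude,
       bin by bin, with half-wave rectification and a -30 dB residual floor. */
    void subtract_magnitude(const double Y[NBINS],  /* averaged signal magnitude per bin */
                            const double N[NBINS],  /* estimated noise magnitude per bin */
                            double out[NBINS])      /* noise-reduced magnitude           */
    {
        const double floor_gain = pow(10.0, -30.0 / 20.0);  /* 30 dB attenuation */

        for (int n = 0; n < NBINS; n++) {
            double d = Y[n] - N[n];
            if (d < 0.0)
                d = 0.0;                    /* half-wave rectification            */
            if (d < floor_gain * Y[n])
                d = floor_gain * Y[n];      /* keep an attenuated residual, not 0 */
            out[n] = d;
        }
    }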




The spectral complex data is reconstructed by applying the phase information of the relevant bin of the signal's FFT to the noise-free magnitude. An IFFT process is then performed on the complex data to obtain the noise-free time domain data. The time domain results are overlapped and summed with the previous frame's results to compensate for the overlap process of the FFT.
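
As a rough illustration of this reconstruction step, the sketch below rebuilds a single complex bin from the noise-free magnitude and the phase of the corresponding noisy bin; the function name and arguments are assumptions for illustration only.

    #include <math.h>

    /* Re-apply the noisy bin's phase to the noise-free magnitude. */
    void rebuild_bin(double clean_mag, double re_in, double im_in,
                     double *re_out, double *im_out)
    {
        double phase = atan2(im_in, re_in);  /* phase of the original noisy bin */
        *re_out = clean_mag * cos(phase);
        *im_out = clean_mag * sin(phase);
    }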




There are several problems associated with the system described. First, the system assumes that there is a prior knowledge of the speech and non-speech time intervals. A voice switch is not practical to detect those periods. Theoretically, a voice switch detects the presence of the speech by measuring the energy level and comparing it to a threshold. If the threshold is too high, there is a risk that some voice time intervals might be regarded as a non-speech time interval and the system will regard voice information as noise. The result is voice distortion, especially in poor signal to noise ratio cases. If, on the other hand, the threshold is too low, there is a risk that the non-speech intervals will be too short especially in poor signal to noise ratio cases and in cases where the voice is continuous with little intermission.




Another problem is that the magnitude calculation of the FFT result is quite complex. This involves square and square root calculations which are very expensive in terms of computation load. Yet another problem is the association of the phase information to the noise free magnitude spectrum in order to obtain the information for the IFFT. This process requires the calculation of the phase, the storage of the information, and applying the information to the magnitude data—all are expensive in terms of computation and memory requirements. Another problem is the estimation of the noise spectral magnitude. The FFT process is a poor and unstable estimator of energy. The averaging-over-time of frames contributes insufficiently to the stability. Shortening the length of the FFT results in a wider bandwidth of each bin and better stability but reduces the performance of the system. Averaging-over-time, moreover, smears the data and, for this reason, cannot be extended to more than a few frames. This means that the noise estimation process proposed is not sufficiently stable.




It is therefore an object of this invention to provide a spectral subtraction system that has a simple, yet efficient mechanism, to estimate the noise magnitude spectrum even in poor signal-to-noise ratio situations and in continuous fast speech cases.




It is another object of this invention to provide an efficient mechanism that can perform the magnitude estimation with little cost, and will overcome the problem of phase association.




It is yet another object of this invention to provide a stable mechanism to estimate the noise spectral magnitude without the smearing of the data.




In accordance with the foregoing objectives, the present invention provides a system that correctly determines the non-speech segments of the audio signal, thereby preventing erroneous processing of the noise canceling signal during the speech segments. In the preferred embodiment, the present invention obviates the need for a voice switch by precisely determining the non-speech segments using a separate threshold detector for each frequency bin. The threshold detector precisely detects the positions of the noise elements, even within continuous speech segments, by determining whether frequency spectrum elements, or bins, of the input signal are within a threshold set according to a minimum value of the frequency spectrum elements over a preset period of time, more precisely according to current and future minimum values of the frequency spectrum elements. Thus, for each syllable, the energy of the noise elements is determined by a separate threshold determination, without examination of the overall signal energy, thereby providing a good and stable estimation of the noise. In addition, the system preferably sets the threshold continuously and resets the threshold within a predetermined period of time of, for example, five seconds.




In order to reduce complex calculations, it is preferred in the present invention to obtain an estimate of the magnitude of the input audio signal using a multiplying combination of the real and imaginary parts of the input in accordance with, for example, the higher and the lower values of the real and imaginary parts of the signal. In order to further reduce instability of the spectral estimation, a two-dimensional (2D) smoothing process is applied to the signal estimation. A two-step smoothing function, which first averages neighboring frequency bins within each time frame and then applies an exponential average over time for each frequency bin, produces excellent results.




In order to reduce the complexity of determining the phase of the frequency bins during subtraction, so as to align the phases of the subtracted elements, the present invention applies a filter multiplication to effect the subtraction. The filter function, for example a Wiener filter function or an approximation of the Wiener filter, is multiplied by the complex data of the frequency domain audio signal. The filter function may effect a full-wave rectification, a half-wave rectification for otherwise negative results of the subtraction process, or a simple subtraction. It will be appreciated that, since the noise elements are determined within continuous speech segments, the noise estimation is accurate and the noise may be canceled from the audio signal continuously, providing excellent noise cancellation characteristics.




The present invention also provides a residual noise reduction process for reducing the residual noise remaining after noise cancellation. The residual noise is reduced by zeroing the non-speech segments, e.g., within the continuous speech, or by decaying the non-speech segments. A voice switch may be used, or another threshold detector that detects the non-speech segments in the time domain.




The present invention is applicable with various noise canceling systems including, but not limited to, those systems described in the U.S. patent applications incorporated herein by reference. The present invention, for example, is applicable with the adaptive beamforming array. In addition, the present invention may be embodied as a computer program for driving a computer processor either installed as application software or as hardware.











BRIEF DESCRIPTION OF THE DRAWINGS




Other objects, features and advantages according to the present invention will become apparent from the following detailed description of the illustrated embodiments when read in conjunction with the accompanying drawings in which corresponding components are identified by the same reference numerals.





FIG. 1 illustrates the present invention;

FIG. 2 illustrates the noise processing of the present invention;

FIG. 3 illustrates the noise estimation processing of the present invention;

FIG. 4 illustrates the subtraction processing of the present invention;

FIG. 5 illustrates the residual noise processing of the present invention;

FIG. 5A illustrates a variant of the residual noise processing of the present invention;

FIG. 6 illustrates a flow diagram of the present invention;

FIG. 7 illustrates a flow diagram of the present invention;

FIG. 8 illustrates a flow diagram of the present invention; and

FIG. 9 illustrates a flow diagram of the present invention.











DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS





FIG. 1 illustrates an embodiment of the present invention 100. The system receives a digital audio signal at input 102 sampled at a frequency which is at least twice the bandwidth of the audio signal. In one embodiment, the signal is derived from a microphone signal that has been processed through an analog front end, A/D converter and a decimation filter to obtain the required sampling frequency. In another embodiment, the input is taken from the output of a beamformer or even an adaptive beamformer. In that case, the signal has been processed to eliminate noises arriving from directions other than the desired one, leaving mainly noises originating from the desired direction. In yet another embodiment, the input signal can be obtained from a sound board when the processing is implemented on a PC processor or similar computer processor.




The input samples are stored in a temporary buffer 104 of 256 points. When the buffer is full, the new 256 points are combined in a combiner 106 with the previous 256 points to provide 512 input points. The 512 input points are multiplied by multiplier 108 with a shading window with a length of 512 points. The shading window contains coefficients that are multiplied with the input data accordingly. The shading window can be Hanning or other, and it serves two goals: the first is to smooth the transients between two processed blocks (together with the overlap process); the second is to reduce the side lobes in the frequency domain and hence prevent the masking of low energy tonals by high energy side lobes. The shaded results are converted to the frequency domain through an FFT (Fast Fourier Transform) processor 110. Other lengths of the FFT samples (and accordingly input buffers) are possible, including 256 points or 1024 points.

The FFT output is a complex vector of 256 significant points (the other 256 points are an anti-symmetric replica of the first 256 points). The points are processed in the noise processing block 112 (200), which includes the noise magnitude estimation for each frequency bin, the subtraction process that estimates the noise-free complex value for each frequency bin, and the residual noise reduction process. An IFFT (Inverse Fast Fourier Transform) processor 114 performs the Inverse Fourier Transform on the complex noise-free data to provide 512 time domain points. The first 256 time domain points are summed by the summer 116 with the last 256 data points of the previous frame to compensate for the input overlap and shading process, and output at output terminal 118. The remaining 256 points are saved for the next iteration.




It will be appreciated that, while specific transforms are utilized in the preferred embodiments, it is of course understood that other transforms may be applied to the present invention to obtain the spectral noise signal.
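
The framing around the transform can be pictured with the following C sketch of the 50%-overlap, window, transform, process, inverse-transform, overlap-add loop of FIG. 1. The fft512, ifft512 and process_block routines are placeholders for whatever transform and noise-processing implementation is used, and the buffer layout is an assumption made for illustration.

    #include <math.h>
    #include <string.h>

    #define FRAME 512
    #define HOP   256

    static double window[FRAME];  /* Hanning (or other) shading window */
    static double prev_in[HOP];   /* last 256 input samples            */
    static double prev_out[HOP];  /* tail of the previous IFFT result  */

    void init_window(void)
    {
        const double pi = 3.14159265358979323846;
        for (int i = 0; i < FRAME; i++)
            window[i] = 0.5 - 0.5 * cos(2.0 * pi * i / (FRAME - 1));
    }

    /* Placeholders assumed to be provided elsewhere. */
    void fft512(const double *in, double *re, double *im);
    void ifft512(const double *re, const double *im, double *out);
    void process_block(double *re, double *im);  /* noise processing of FIG. 2 */

    /* Process 256 new input samples and produce 256 output samples. */
    void handle_frame(const double new_in[HOP], double out[HOP])
    {
        double buf[FRAME], re[FRAME], im[FRAME], td[FRAME];

        memcpy(buf, prev_in, sizeof(prev_in));            /* previous 256 points */
        memcpy(buf + HOP, new_in, HOP * sizeof(double));  /* plus 256 new points */
        memcpy(prev_in, new_in, sizeof(prev_in));

        for (int i = 0; i < FRAME; i++)
            buf[i] *= window[i];                          /* shading window      */

        fft512(buf, re, im);
        process_block(re, im);                            /* subtraction, etc.   */
        ifft512(re, im, td);

        for (int i = 0; i < HOP; i++)
            out[i] = td[i] + prev_out[i];                 /* overlap-add         */
        memcpy(prev_out, td + HOP, sizeof(prev_out));     /* save tail           */
    }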





FIG. 2 is a detailed description of the noise processing block 200 (112). First, the magnitude of each frequency bin (n) 202 is estimated. The straightforward approach is to estimate the magnitude by calculating:








Y(n) = ((Real(n))^2 + (Imag(n))^2)^(1/2)








In order to save processing time and complexity, the signal magnitude (Y) is estimated by an estimator 204 using an approximation formula instead:

Y(n) = Max[|Real(n)|, |Imag(n)|] + 0.4 * Min[|Real(n)|, |Imag(n)|]
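
A direct C rendering of this approximation might look as follows; the function name is an illustrative assumption.

    #include <math.h>

    /* Approximate sqrt(re^2 + im^2) as Max(|re|,|im|) + 0.4 * Min(|re|,|im|). */
    double approx_magnitude(double re, double im)
    {
        double a = fabs(re);
        double b = fabs(im);
        double hi = (a > b) ? a : b;
        double lo = (a > b) ? b : a;
        return hi + 0.4 * lo;
    }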






In order to reduce the instability of the spectral estimation, which typically plagues the FFT process (ref [2], Digital Signal Processing, Oppenheim and Schafer, Prentice Hall, pp. 542-545), the present invention implements a 2D smoothing process. Each bin is replaced with the average of its value and the two neighboring bins' values (of the same time frame) by a first averager 206. In addition, the smoothed value of each smoothed bin is further smoothed by a second averager 208 using a time exponential average with a time constant of 0.7 (which is the equivalent of averaging over 3 time frames). The 2D-smoothed value is then used by two processes: the noise estimation process by noise estimation processor 212 (300) and the subtraction process by subtractor 210. The noise estimation process estimates the noise at each frequency bin and the result is used by the noise subtraction process. The output of the noise subtraction is fed into a residual noise reduction processor 216 to further reduce the noise. In one embodiment, the time domain signal is also used by the residual noise process 216 to determine the speech-free segments. The noise-free signal is moved to the IFFT process to obtain the time domain output 218.
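
As a sketch of the two-dimensional smoothing just described, the fragment below averages each bin with its two neighbors within the frame and then applies an exponential average over time. Interpreting the 0.7 time constant as the weight given to the previous frame is an assumption, as are the array and function names.

    #define NBINS 256

    /* 2D smoothing: 3-bin average within the frame, then an exponential
       average over time with a 0.7 time constant (roughly 3 frames). */
    void smooth_2d(const double Y[NBINS],  /* raw magnitude estimates        */
                   double prev[NBINS],     /* running exponential average    */
                   double S[NBINS])        /* smoothed output for this frame */
    {
        const double alpha = 0.7;
        for (int n = 0; n < NBINS; n++) {
            int lo = (n > 0) ? n - 1 : n;
            int hi = (n < NBINS - 1) ? n + 1 : n;
            double avg = (Y[lo] + Y[n] + Y[hi]) / 3.0;        /* neighbor average */
            prev[n] = alpha * prev[n] + (1.0 - alpha) * avg;  /* time average     */
            S[n] = prev[n];
        }
    }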





FIG. 3 is a detailed description of the noise estimation processor 300 (212). Theoretically, the noise should be estimated by taking a long time average of the signal magnitude (Y) over non-speech time intervals. This requires that a voice switch be used to detect the speech/non-speech intervals. However, too sensitive a switch may result in the use of a speech signal for the noise estimation, which will degrade the voice signal. A less sensitive switch, on the other hand, may dramatically reduce the length of the noise time intervals (especially in continuous speech cases) and degrade the validity of the noise estimation.




In the present invention, a separate adaptive threshold is implemented for each frequency bin 302. This allows the location of noise elements for each bin separately without the examination of the overall signal energy. The logic behind this method is that, for each syllable, the energy may appear at different frequency bands. At the same time, other frequency bands may contain noise elements. It is therefore possible to apply a non-sensitive threshold for the noise and yet locate many non-speech data points for each bin, even within a continuous speech case. The advantage of this method is that it allows the collection of many noise segments for a good and stable estimation of the noise, even within continuous speech segments.




In the threshold determination process, for each frequency bin, two minimum values are calculated. A future minimum value is initiated every 5 seconds at 304 with the value of the current magnitude (Y(n)) and replaced with a smaller minimal value over the next 5 seconds through the following process. The future minimum value of each bin is compared with the current magnitude value of the signal. If the current magnitude is smaller than the future minimum, the future minimum is replaced with the magnitude, which becomes the new future minimum.




At the same time, a current minimum value is calculated at 306. The current minimum is initiated every 5 seconds with the value of the future minimum that was determined over the previous 5 seconds and follows the minimum value of the signal for the next 5 seconds by comparing its value with the current magnitude value. The current minimum value is used by the subtraction process, while the future minimum is used for the initiation and refreshing of the current minimum.




The noise estimation mechanism of the present invention ensures a tight and quick estimation of the noise value, with limited memory of the process (5 seconds), while preventing too high an estimation of the noise.




Each bin's magnitude (Y(n)) is compared with four times the current minimum value of that bin by comparator 308, which serves as the adaptive threshold for that bin. If the magnitude is within the range (hence below the threshold), it is allowed as noise and used by an exponential averaging unit 310 that determines the level of the noise 312 of that frequency. If the magnitude is above the threshold, it is rejected for the noise estimation. The time constant for the exponential averaging is typically 0.95, which may be interpreted as taking the average of the last 20 frames. The threshold of 4*minimum value may be changed for some applications.
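
The per-bin minimum tracking and thresholding of FIG. 3 can be summarized in the C sketch below. The state structure, the frame counter used to approximate the 5-second re-initialization, and the names are illustrative assumptions; the state is assumed to have been initialized from the first frame's magnitudes.

    #define NBINS 256

    typedef struct {
        double cur_min[NBINS];  /* current minimum, basis of the threshold  */
        double fut_min[NBINS];  /* future minimum, refreshes cur_min        */
        double noise[NBINS];    /* running noise-level estimate per bin     */
        int    frame;           /* frames since the last re-initialization  */
        int    frames_per_5s;   /* e.g. 5 * sample_rate / 256               */
    } noise_est_t;

    void noise_estimate(noise_est_t *st, const double Y[NBINS])
    {
        int reset = (st->frame >= st->frames_per_5s);  /* roughly every 5 seconds */

        for (int n = 0; n < NBINS; n++) {
            if (reset) {
                st->cur_min[n] = st->fut_min[n];  /* current min takes over the future min */
                st->fut_min[n] = Y[n];            /* future min restarts from current Y    */
            }
            if (Y[n] < st->fut_min[n]) st->fut_min[n] = Y[n];
            if (Y[n] < st->cur_min[n]) st->cur_min[n] = Y[n];

            if (Y[n] < 4.0 * st->cur_min[n])      /* adaptive threshold: accept as noise   */
                st->noise[n] = 0.95 * st->noise[n] + 0.05 * Y[n];
            /* magnitudes above the threshold are rejected for the noise estimate */
        }
        st->frame = reset ? 1 : st->frame + 1;
    }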





FIG. 4 is a detailed description of the subtraction processor 400 (210). In a straightforward approach, the value of the estimated bin noise magnitude is subtracted from the current bin magnitude. The phase of the current bin is calculated and used in conjunction with the result of the subtraction to obtain the Real and Imaginary parts of the result. This approach is very expensive in terms of processing and memory because it requires the calculation of the Sine and Cosine arguments of the complex vector with consideration of the four quadrants in which the complex vector may be positioned. An alternative approach, used in the present invention, is a filter approach. The subtraction is interpreted as a filter multiplication performed by filter 402, where H (the filter coefficient) is:







H(n) = | |Y(n)| - |N(n)| | / |Y(n)|












Where Y(n) is the magnitude of the current bin and N(n) is the noise estimation of that bin. The value H of the filter coefficient (of each bin separately) is multiplied by the Real and Imaginary parts of the current bin at 404:








E(Real) = Y(Real) * H;  E(Imag) = Y(Imag) * H








Where E is the noise-free complex value. In the straightforward approach, the subtraction may result in a negative value of magnitude. This value can be either replaced with zero (half-wave rectification) or replaced with a positive value equal to the negative one (full-wave rectification). The filter approach, as expressed here, results in the full-wave rectification directly. The full-wave rectification provides a little less noise reduction but introduces far fewer artifacts to the signal. It will be appreciated that this filter can be modified to effect a half-wave rectification by taking the non-absolute value of the numerator and replacing negative values with zeros.




Note also that the values of Y in the figures are the smoothed values of Y after averaging over neighboring spectral bins and over time frames (2D smoothing). Another approach is to use the smoothed Y only for the noise estimation (N), and to use the unsmoothed Y for the calculation of H.
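
Putting the filter approach together, a minimal C sketch of the per-bin multiplication is shown below, using the (smoothed) magnitude Y and the noise estimate N; the guard against a zero magnitude and the names are illustrative assumptions.

    #include <math.h>

    #define NBINS 256

    /* H(n) = | |Y(n)| - |N(n)| | / |Y(n)| applied to the real and imaginary
       parts of each bin; the absolute value gives full-wave rectification. */
    void apply_filter(const double Y[NBINS],  /* (smoothed) magnitude per bin    */
                      const double N[NBINS],  /* noise estimate per bin          */
                      double re[NBINS],       /* complex data, filtered in place */
                      double im[NBINS])
    {
        for (int n = 0; n < NBINS; n++) {
            double h = (Y[n] > 0.0) ? fabs(Y[n] - N[n]) / Y[n] : 0.0;
            re[n] *= h;
            im[n] *= h;
        }
    }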





FIG. 5 illustrates the residual noise reduction processor 500 (216). The residual noise is defined as the noise remaining during non-speech intervals. The noise in these intervals is first reduced by the subtraction process, which does not differentiate between speech and non-speech time intervals. The remaining residual noise can be reduced further by using a voice switch 502 and either multiplying the residual noise by a decaying factor or replacing it with zeros. Another alternative to the zeroing is replacing the residual noise with a minimum value of noise at 504.




Yet another approach, which avoids the voice switch, is illustrated in FIG. 5A. The residual noise reduction processor 506 applies a threshold similar to that used by the noise estimator at 508 on the noise-free output bin and replaces or decays the result when it is lower than the threshold at 510.




The result of the residual noise processing of the present invention is a quieter sound in the non-speech intervals. However, artifacts such as a pumping noise, caused when the noise level switches between the speech interval and the non-speech interval, may occur in some applications.
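
For the FIG. 5A variant, a sketch of the residual-noise step might reuse the same kind of adaptive threshold and simply decay bins that fall below it; the decay factor of 0.1 and the names are assumptions for illustration.

    #define NBINS 256

    /* Decay (or zero) bins whose post-subtraction magnitude stays below an
       adaptive threshold, here taken as 4 times the current minimum. */
    void reduce_residual(const double mag[NBINS],      /* magnitude after subtraction */
                         const double cur_min[NBINS],  /* current minimum per bin     */
                         double re[NBINS],
                         double im[NBINS])
    {
        const double decay = 0.1;  /* assumed decay factor; could also be zero */
        for (int n = 0; n < NBINS; n++) {
            if (mag[n] < 4.0 * cur_min[n]) {  /* treated as residual noise */
                re[n] *= decay;
                im[n] *= decay;
            }
        }
    }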




The spectral subtraction technique of the present invention can be utilized in conjunction with the array techniques, with a close-talk microphone technique, or as a stand-alone system. The spectral subtraction of the present invention can be implemented on embedded hardware (DSP) as a stand-alone system, as part of other embedded algorithms such as adaptive beamforming, or as a software application running on a PC using data obtained from a sound port.




As illustrated in FIGS. 6-9, for example, the present invention may be implemented as a software application. In step 600, the input samples are read. At step 602, the read samples are stored in a buffer. If 256 new points are accumulated in step 604, program control advances to step 606; otherwise control returns to step 600 where additional samples are read. Once 256 new samples are read, the last 512 points are moved to the processing buffer in step 606. The 256 new samples stored are combined with the previous 256 points in step 608 to obtain the 512 points. In step 610, a Fourier Transform is performed on the 512 points. Of course, another transform may be employed to obtain the spectral noise signal. In step 612, the 256 significant complex points resulting from the transformation are stored in the buffer. The second 256 points are a conjugate replica of the first 256 points and are redundant for real inputs. The stored data in step 614 includes the 256 real points and the 256 imaginary points. Next, control advances to FIG. 7 as indicated by the circumscribed letter A.




In FIG. 7, the noise processing is performed wherein the magnitude of the signal is estimated in step 700. Of course, the straightforward approach may be employed but, as discussed with reference to FIG. 2, the straightforward approach requires extra processing time and complexity. In step 702, the stored complex points are read from the buffer and calculated using the estimation equation shown in step 700. The result is stored in step 704. A 2-dimensional (2D) smoothing process is effected in steps 706 and 708 wherein, in step 706, the estimate at each point is averaged with the estimates of adjacent points and, in step 708, the estimate is averaged using an exponential average having the effect of averaging the estimate at each point over, for example, 3 time samples of each bin. In steps 710 and 712, the smoothed estimate is employed to determine the future minimum value and the current minimum value. If the smoothed estimate is less than the calculated future minimum value as determined in step 710, the future minimum value is replaced with the smoothed estimate and stored in step 714.




Meanwhile, if it is determined at step 712 that the smoothed estimate is less than the current minimum value, then the current minimum is replaced with the smoothed estimate value and stored in step 720. The future and current minimum values are calculated continuously and initiated periodically, for example, every 5 seconds as determined in step 724, and control is advanced to steps 722 and 726 wherein the new future and current minimum are calculated. Afterwards, control advances to FIG. 8 as indicated by the circumscribed letter B, where the subtraction and residual noise reduction are effected.




In FIG. 8, it is determined whether the samples are less than a threshold amount in step 800. In step 804, where the samples are within the threshold, the samples undergo an exponential averaging and are stored in the buffer at step 802. Otherwise, control advances directly to step 808. At step 808, the filter coefficients are determined from the signal samples retrieved in step 806 and the estimated noise samples retrieved from step 810. Although the straightforward approach, by which the phase is estimated and applied, may be used, the alternative Wiener filter is preferred since this saves processing time and complexity. In step 814, the filter coefficients are multiplied by the samples retrieved in step 816 and the result is stored in step 812.




In steps 818 and 820, the residual noise reduction process is performed wherein, in step 818, if the processed noise signal is within a threshold, control advances to step 820 wherein the processed noise is subjected to replacement or, for example, a decay. However, the residual noise reduction process may not be suitable in some applications where the application is negatively affected.




It will be appreciated that, while specific values are used in the several equations and calculations employed in the present invention, these values may be different from those shown.




In FIG. 9, the Inverse Fourier Transform is performed in step 902 on the basis of the noise-processed audio signal recovered in step 904 and stored in step 900. In step 906, the time-domain signals are overlapped and summed in order to regenerate the audio signal substantially without noise.




It will be appreciated that the present invention may be practiced as a software application, preferably written using C or any other programming language, which may be embedded on, for example, a programmable memory chip or stored on a computer-readable medium such as, for example, an optical disk, and retrieved therefrom to drive a computer processor. Sample code representative of the present invention is illustrated in Appendix A which, as will be appreciated by those skilled in the art, may be modified to accommodate various operating systems and compilers or to include various bells and whistles without departing from the spirit and scope of the present invention.




With the present invention, a spectral subtraction system is provided that has a simple, yet efficient mechanism, to estimate the noise magnitude spectrum even in poor signal to noise ratio situations and in continuous fast speech cases. An efficient mechanism is provided that can perform the magnitude estimation with little cost, and will overcome the problem of phase association. A stable mechanism is provided to estimate the noise spectral magnitude without the smearing of the data.




Although preferred embodiments of the present invention and modifications thereof have been described in detail herein, it is to be understood that this invention is not limited to those precise embodiments and modifications, and that other modifications and variations may be effected by one skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.



Claims
  • 1. An apparatus for canceling noise, comprising: an input for inputting an audio signal which includes a noise signal; a frequency spectrum generator for generating the frequency spectrum of said audio signal thereby generating frequency bins of said audio signal; and a threshold detector for setting a threshold for each frequency bin using a noise estimation process and for detecting for each frequency bin whether the magnitude of the frequency bin is less than the corresponding threshold, thereby detecting the position of noise elements for each frequency bin.
  • 2. The apparatus according to claim 1, wherein said threshold detector detects the position of a plurality of non-speech data points for said frequency bins.
  • 3. The apparatus according to claim 2, wherein said threshold detector detects the position of said plurality of non-speech data points for said frequency bins within a continuous speech segment of said audio signal.
  • 4. The apparatus according to claim 1, wherein said threshold detector sets the threshold for each frequency bin in accordance with a current minimum value of the magnitude of the corresponding frequency bin; said current minimum value being derived in accordance with a future minimum value of the magnitude of the corresponding frequency bin.
  • 5. The apparatus according to claim 4, wherein said future minimum value is determined as the minimum value of the magnitude of the corresponding frequency bin within a predetermined period of time.
  • 6. The apparatus according to claim 5, wherein said current minimum value is set to said future minimum value periodically.
  • 7. The apparatus according to claim 6, wherein said future minimum value is replaced with the current magnitude value when said future minimum value is greater than said current magnitude value.
  • 8. The apparatus according to claim 6, wherein said current minimum value is replaced with the current magnitude value when said current minimum value is greater than said current magnitude value.
  • 9. The apparatus according to claim 5, wherein said future minimum value is set to a current magnitude value periodically; said current-magnitude value being the value of the magnitude of the corresponding frequency bin.
  • 10. The apparatus according to claim 4, wherein said current minimum value is determined as the minimum value of the magnitude of the corresponding frequency bin within a predetermined period of time.
  • 11. The apparatus according to claim 4, wherein said threshold is set by multiplying said current minimum value by a coefficient.
  • 12. The apparatus according to claim 1, further comprising an averaging unit for determining a level of said noise within said respective frequency bin, wherein said threshold detector detects the position of said noise elements where said level of said noise determined by said averaging unit is less than the corresponding threshold.
  • 13. The apparatus according to claim 1, further comprising a subtractor for subtracting said noise elements estimated at said positions determined by said threshold detector from said audio signal to derive said audio signal substantially without said noise.
  • 14. The apparatus according to claim 13, wherein said subtractor performs subtraction using a filter multiplication which multiplies said audio signal by a filter function.
  • 15. The apparatus according to claim 14, wherein said filter function is a Wiener filter function which is a function of said frequency bins of said noise elements and magnitude.
  • 16. The apparatus according to claim 15, wherein said filter multiplication multiplies the complex elements of said frequency bins by said Wiener filter function.
  • 17. The apparatus according to claim 13, further comprising a residual noise processor for reducing residual noise remaining after said subtractor subtracts said noise elements at said positions determined by said threshold detector from said audio signal.
  • 18. The apparatus according to claim 17, wherein said residual noise processor replaces said frequency bins corresponding to non-speech segments of said audio signal with a minimum value.
  • 19. The apparatus according to claim 18, wherein said residual noise processor includes a voice switch for detecting said non-speech segments.
  • 20. The apparatus according to claim 18, wherein said residual noise processor includes another threshold detector for detecting said non-speech segments by detecting said audio signal is below a predetermined threshold.
  • 21. The apparatus according to claim 1, further comprising an estimator for estimating a magnitude of each frequency bin.
  • 22. The apparatus according to claim 21, wherein said estimator estimates said magnitude of each frequency bin as a function of the maximum and the minimum values of the complex element of said frequency bins for a number n of frequency bins.
  • 23. The apparatus according to claim 21, further comprising a smoothing unit which smoothes the estimate of each frequency bin.
  • 24. The apparatus according to claim 23, wherein said smoothing unit comprises a two-dimensional process which averages each frequency bin in accordance with neighboring frequency bins and averages each frequency bin using an exponential time average which effects an average over a plurality of frequency bins over time.
  • 25. The apparatus according to claim 1, further comprising an adaptive array comprising a plurality of microphones for receiving said audio signal.
  • 26. An apparatus for canceling noise, comprising: input means for inputting an audio signal which includes a noise signal; frequency spectrum generating means for generating the frequency spectrum of said audio signal thereby generating frequency bins of said audio signal; and threshold detecting means for setting a threshold for each frequency bin using a noise estimation process and for detecting for each frequency bin whether the magnitude of the frequency bin is less than the corresponding threshold, thereby detecting the position of noise elements for each frequency bin.
  • 27. The apparatus according to claim 26, wherein said threshold detecting means sets the threshold for each frequency bin in accordance with a current minimum value of the magnitude of the corresponding frequency bin; said current minimum value being derived in accordance with a future minimum value of the magnitude of the corresponding frequency bin.
  • 28. The apparatus according to claim 27, wherein said future minimum value is determined as the minimum value of the magnitude of the corresponding frequency bin within a predetermined period of time.
  • 29. The apparatus according to claim 27, wherein said current minimum value is determined as the minimum value of the magnitude of the corresponding frequency bin within a predetermined period of time.
  • 30. The apparatus according to claim 26, further comprising averaging means for determining a level of said noise within said respective frequency bin, wherein said threshold detecting means detects the position of said noise elements where said level of said noise determined by said averaging means is less than the corresponding threshold.
  • 31. The apparatus according to claim 26, further comprising subtracting means for subtracting said noise elements at said positions determined by said threshold detecting means from said audio signal to derive said audio signal substantially without said noise.
  • 32. The apparatus according to claim 31, wherein said subtracting means performs subtraction using a filter multiplication which multiplies said audio signal by a filter function.
  • 33. The apparatus according to claim 31, further comprising residual noise processing means for reducing residual noise remaining after said subtracting means subtracts said noise elements at said positions determined by said threshold detecting means from said audio signal.
  • 34. The apparatus according to claim 26, further comprising estimating means for estimating a magnitude of each frequency bin.
  • 35. The apparatus according to claim 34, wherein said estimating means estimates said magnitude of each frequency bin as a function of a maximum and a minimum of said frequency bins for a number n of frequency bins.
  • 36. The apparatus according to claim 34, further comprising smoothing means for smoothing the estimate of each frequency bin.
  • 37. The apparatus according to claim 26, further comprising adaptive array means comprising a plurality of microphones for receiving said audio signal.
  • 38. A method for driving a computer processor for generating a noise canceling signal for canceling noise from an audio signal representing audible sound including a noise signal representing audible noise, said method comprising the steps of: inputting said audio signal which includes said noise signal; generating the frequency spectrum of said audio signal thereby generating frequency bins of said audio signal; setting a threshold for each frequency bin using a noise estimation process; detecting for each frequency bin whether the magnitude of the frequency bin is less than the corresponding threshold, thereby detecting the position of noise elements for each frequency bin; and subtracting said noise elements detected in said step of detecting from said audio signal to produce an audio signal representing said audible sound substantially without said audible noise.
  • 39. The method according to claim 38, wherein said setting step sets the threshold for each frequency bin in accordance with a current minimum value of the magnitude of the corresponding frequency bin; said current minimum value being derived in accordance with a future minimum value of the magnitude of the corresponding frequency bin.
  • 40. The method according to claim 39, wherein said setting step further comprises the step of determining said future minimum value as the minimum value of the magnitude of the corresponding frequency bin within a predetermined period of time.
  • 41. The method according to claim 40, wherein said setting step further comprises the step of determining said current minimum value as the minimum value of the magnitude of the corresponding frequency bin within a predetermined period of time.
  • 42. The method according to claim 40, further comprising the step of averaging a level of said noise of said respective frequency bin, wherein said step of detecting detects the position of said noise elements where said level of said noise determined by said step of averaging is less than the corresponding threshold.
  • 43. The method according to claim 40, wherein said step of subtracting performs subtraction using a filter multiplication which multiplies said audio signal by a filter function.
  • 44. The method according to claim 40, further comprising the step of estimating a magnitude of each frequency bin as a function of a maximum and a minimum of said frequency bins for a number n of frequency bins.
  • 45. The method according to claim 44, further comprising the step of smoothing the estimate of each frequency bin.
  • 46. The method according to claim 39, further comprising the step of receiving said audio signal from an adaptive array of a plurality of microphones.
  • 47. The method according to claim 38, further comprising the step of reducing the residual noise remaining after said step of subtracting subtracts said noise elements at said positions determined by said step of detecting from said audio signal.
RELATED APPLICATIONS INCORPORATED BY REFERENCE

The following applications and patent(s) are cited and hereby incorporated herein by reference: U.S. patent application Ser. No. 09/130,923 filed Aug. 6, 1998, U.S. patent application Ser. No. 09/055,709 filed Apr. 7, 1998, U.S. patent application Ser. No. 09/059,503 filed Apr. 13, 1998, U.S. patent application Ser. No. 08/840,159 filed Apr. 14, 1997, and U.S. patent application Ser. No. 08/672,899, now U.S. Pat. No. 5,825,898 issued Oct. 20, 1998. And, all documents cited herein are incorporated herein by reference, as are documents cited or referenced in documents cited herein.

US Referenced Citations (257)
Number Name Date Kind
2379514 Fisher Jul 1945 A
2972018 Hawley et al. Feb 1961 A
3098121 Wadsworth Jul 1963 A
3101744 Warnaka Aug 1963 A
3170046 Leale Feb 1965 A
3247925 Warnaka Apr 1966 A
3262521 Warnaka Jul 1966 A
3298457 Warnaka Jan 1967 A
3330376 Warnaka Jul 1967 A
3394226 Andrews, Jr. Jul 1968 A
3416782 Warnaka Dec 1968 A
3422921 Warnaka Jan 1969 A
3562089 Warnaka et al. Feb 1971 A
3702644 Fowler et al. Nov 1972 A
3830988 Mol et al. Aug 1974 A
3889059 Thompson et al. Jun 1975 A
3890474 Glicksberg Jun 1975 A
4068092 Ikoma et al. Jan 1978 A
4122303 Chaplin et al. Oct 1978 A
4153815 Chaplin et al. May 1979 A
4169257 Smith Sep 1979 A
4239936 Sakoe Dec 1980 A
4241805 Chance, Jr. Dec 1980 A
4243117 Warnaka Jan 1981 A
4261708 Gallagher Apr 1981 A
4321970 Thigpen Mar 1982 A
4334740 Wray Jun 1982 A
4339018 Warnaka Jul 1982 A
4363007 Haramoto et al. Dec 1982 A
4409435 Ono Oct 1983 A
4417098 Chaplin et al. Nov 1983 A
4433435 David Feb 1984 A
4442546 Ishigaki Apr 1984 A
4453600 Thigpen Jun 1984 A
4455675 Bose et al. Jun 1984 A
4459851 Crostack Jul 1984 A
4461025 Franklin Jul 1984 A
4463222 Poradowski Jul 1984 A
4473906 Warnaka et al. Sep 1984 A
4477505 Warnaka Oct 1984 A
4489441 Chaplin et al. Dec 1984 A
4490841 Chaplin et al. Dec 1984 A
4494074 Bose Jan 1985 A
4495643 Orban Jan 1985 A
4517415 Laurence May 1985 A
4527282 Chaplin et al. Jul 1985 A
4530304 Gardos Jul 1985 A
4539708 Norris Sep 1985 A
4559642 Miyaji et al. Dec 1985 A
4562589 Warnaka et al. Dec 1985 A
4566118 Chaplin et al. Jan 1986 A
4570155 Skarman et al. Feb 1986 A
4581758 Coker et al. Apr 1986 A
4589136 Poldy et al. May 1986 A
4589137 Miller May 1986 A
4600863 Chaplin et al. Jul 1986 A
4622692 Cole Nov 1986 A
4628529 Borth et al. Dec 1986 A
4630302 Kryter Dec 1986 A
4630304 Borth et al. Dec 1986 A
4636586 Schiff Jan 1987 A
4649505 Zinser, Jr. et al. Mar 1987 A
4653102 Hansen Mar 1987 A
4653606 Flanagan Mar 1987 A
4654871 Chaplin et al. Mar 1987 A
4658426 Chabries et al. Apr 1987 A
4672674 Clough et al. Jun 1987 A
4683010 Hartmann Jul 1987 A
4696043 Iwahara et al. Sep 1987 A
4718096 Meisel Jan 1988 A
4731850 Levitt et al. Mar 1988 A
4736432 Cantrell Apr 1988 A
4741038 Elko et al. Apr 1988 A
4750207 Gebert et al. Jun 1988 A
4752961 Kahn Jun 1988 A
4769847 Taguchi Sep 1988 A
4771472 Williams, III et al. Sep 1988 A
4783798 Leibholz et al. Nov 1988 A
4783817 Hamada et al. Nov 1988 A
4783818 Graupe et al. Nov 1988 A
4791672 Nunley et al. Dec 1988 A
4802227 Elko et al. Jan 1989 A
4811404 Vilmur et al. Mar 1989 A
4833719 Carme et al. May 1989 A
4837832 Fanshel Jun 1989 A
4847897 Means Jul 1989 A
4862506 Landgarten et al. Aug 1989 A
4878188 Ziegler et al. Oct 1989 A
4908855 Ohga et al. Mar 1990 A
4910718 Horn Mar 1990 A
4910719 Thubert Mar 1990 A
4928307 Lynn May 1990 A
4930156 Norris May 1990 A
4932063 Nakamura Jun 1990 A
4937871 Hattori Jun 1990 A
4947356 Elliott et al. Aug 1990 A
4951954 MacNeill Aug 1990 A
4955055 Fujisaki et al. Sep 1990 A
4956867 Zarek et al. Sep 1990 A
4959865 Stettiner et al. Sep 1990 A
4963071 Larwin et al. Oct 1990 A
4965834 Miller Oct 1990 A
4977600 Ziegler Dec 1990 A
4985925 Langberg et al. Jan 1991 A
4991433 Warnaka et al. Feb 1991 A
5001763 Moseley Mar 1991 A
5010576 Hill Apr 1991 A
5018202 Takahashi et al. May 1991 A
5023002 Schweizer et al. Jun 1991 A
5029218 Nagayasu Jul 1991 A
5046103 Warnaka et al. Sep 1991 A
5052510 Gossman Oct 1991 A
5070527 Lynn Dec 1991 A
5075694 Donnangelo et al. Dec 1991 A
5086385 Launey et al. Feb 1992 A
5086415 Takahashi et al. Feb 1992 A
5091954 Sasaki et al. Feb 1992 A
5097923 Ziegler et al. Mar 1992 A
5105377 Ziegler, Jr. Apr 1992 A
5117461 Moseley May 1992 A
5121426 Bavmhauer Jun 1992 A
5125032 Meister et al. Jun 1992 A
5126681 Ziegler, Jr. et al. Jun 1992 A
5133017 Cain et al. Jul 1992 A
5134659 Moseley Jul 1992 A
5138663 Moseley Aug 1992 A
5138664 Kimura et al. Aug 1992 A
5142585 Taylor Aug 1992 A
5192918 Sugiyama Mar 1993 A
5208864 Kaneda May 1993 A
5209326 Harper May 1993 A
5212764 Ariyoshi May 1993 A
5219037 Smith et al. Jun 1993 A
5226077 Lynn et al. Jul 1993 A
5226087 Ono Jul 1993 A
5241692 Harrison et al. Aug 1993 A
5251263 Andrea et al. Oct 1993 A
5251863 Gossman et al. Oct 1993 A
5260997 Gattey et al. Nov 1993 A
5272286 Cain et al. Dec 1993 A
5276740 Inanaga et al. Jan 1994 A
5311446 Ross et al. May 1994 A
5311453 Denenberg et al. May 1994 A
5313555 Kamiya May 1994 A
5313945 Friedlander May 1994 A
5315661 Gossman et al. May 1994 A
5319736 Hunt Jun 1994 A
5327506 Stites, III Jul 1994 A
5332203 Gossman et al. Jul 1994 A
5335011 Addeo et al. Aug 1994 A
5348124 Harper Sep 1994 A
5353347 Irissou et al. Oct 1994 A
5353376 Oh et al. Oct 1994 A
5361303 Eatwell Nov 1994 A
5365594 Ross et al. Nov 1994 A
5375174 Denenberg Dec 1994 A
5381473 Andrea et al. Jan 1995 A
5381481 Gammie et al. Jan 1995 A
5384843 Masuda et al. Jan 1995 A
5402497 Nishimoto et al. Mar 1995 A
5412735 Engebretson et al. May 1995 A
5414769 Gattey et al. May 1995 A
5414775 Scribner et al. May 1995 A
5416845 Shen May 1995 A
5416847 Boze May 1995 A
5416887 Shimada May 1995 A
5418857 Eatwell May 1995 A
5423523 Gossman et al. Jun 1995 A
5431008 Ross et al. Jul 1995 A
5432859 Yang et al. Jul 1995 A
5434925 Nadim Jul 1995 A
5440642 Denenberg et al. Aug 1995 A
5448637 Yamaguchi et al. Sep 1995 A
5452361 Jones Sep 1995 A
5457749 Cain et al. Oct 1995 A
5469087 Eatwell Nov 1995 A
5471106 Curtis et al. Nov 1995 A
5471538 Sasaki et al. Nov 1995 A
5473214 Hildebrand Dec 1995 A
5473701 Cezanee et al. Dec 1995 A
5473702 Yoshida et al. Dec 1995 A
5475761 Eatwell Dec 1995 A
5479562 Fielder et al. Dec 1995 A
5481615 Eatwell et al. Jan 1996 A
5485515 Allen et al. Jan 1996 A
5493615 Burke et al. Feb 1996 A
5502869 Smith et al. Apr 1996 A
5511127 Warnaka Apr 1996 A
5511128 Lindeman Apr 1996 A
5515378 Roy, III et al. May 1996 A
5524056 Killion et al. Jun 1996 A
5524057 Akiho et al. Jun 1996 A
5526432 Denenberg Jun 1996 A
5546090 Roy, III et al. Aug 1996 A
5546467 Denenberg Aug 1996 A
5550334 Langley Aug 1996 A
5553153 Eatwell Sep 1996 A
5563817 Ziegler et al. Oct 1996 A
5568557 Ross et al. Oct 1996 A
5581620 Brandstein et al. Dec 1996 A
5592181 Cai et al. Jan 1997 A
5592490 Barratt et al. Jan 1997 A
5600106 Langley Feb 1997 A
5604813 Evans et al. Feb 1997 A
5615175 Cater et al. Mar 1997 A
5617479 Hildebrand et al. Apr 1997 A
5619020 Jones et al. Apr 1997 A
5621656 Langley Apr 1997 A
5625697 Bowen et al. Apr 1997 A
5625880 Goldburg et al. Apr 1997 A
5627746 Ziegler, Jr. et al. May 1997 A
5627799 Hoshuyama May 1997 A
5638022 Eatwell Jun 1997 A
5638454 Jones et al. Jun 1997 A
5638456 Conley et al. Jun 1997 A
5642353 Roy, III et al. Jun 1997 A
5644641 Ikeda Jul 1997 A
5649018 Gifford et al. Jul 1997 A
5652770 Eatwell Jul 1997 A
5652799 Ross et al. Jul 1997 A
5657393 Crow Aug 1997 A
5664021 Chu et al. Sep 1997 A
5668747 Obashi Sep 1997 A
5668927 Chan et al. Sep 1997 A
5673325 Andrea et al. Sep 1997 A
5676353 Jones et al. Oct 1997 A
5689572 Ohki et al. Nov 1997 A
5692053 Fuller et al. Nov 1997 A
5692054 Parrella et al. Nov 1997 A
5699436 Claybaugh et al. Dec 1997 A
5701344 Wakui Dec 1997 A
5706394 Wynn Jan 1998 A
5715319 Chu Feb 1998 A
5715321 Andrea et al. Feb 1998 A
5719945 Fuller et al. Feb 1998 A
5724270 Posch Mar 1998 A
5727073 Ikeda Mar 1998 A
5732143 Andrea et al. Mar 1998 A
5745581 Eatwell et al. Apr 1998 A
5748749 Miller et al. May 1998 A
5768473 Eatwell et al. Jun 1998 A
5774859 Houser et al. Jun 1998 A
5787259 Haroun et al. Jul 1998 A
5798983 Kuhn et al. Aug 1998 A
5812682 Ross et al. Sep 1998 A
5815582 Claybaugh et al. Sep 1998 A
5818948 Gulick Oct 1998 A
5825897 Andrea et al. Oct 1998 A
5825898 Marash Oct 1998 A
5828768 Eatwell et al. Oct 1998 A
5835608 Warnaka et al. Nov 1998 A
5838805 Warnaka et al. Nov 1998 A
5874918 Czarnecki et al. Mar 1999 A
5909495 Andrea Jun 1999 A
5914877 Gulick Jun 1999 A
5914912 Yang Jun 1999 A
5995150 Hsieh et al. Nov 1999 A
Foreign Referenced Citations (36)
Number Date Country
2640324 Mar 1978 DE
3719963 Mar 1988 DE
4008595 Sep 1991 DE
0 059 745 Sep 1982 EP
0 380 290 Aug 1990 EP
0 390 386 Oct 1990 EP
0 411 360 Feb 1991 EP
0 509 742 Oct 1992 EP
0 483 845 Jan 1993 EP
0 583 900 Feb 1994 EP
0 595 457 May 1994 EP
0 721 251 Jul 1996 EP
0 724 415 Nov 1996 EP
2305909 Oct 1976 FR
1 160 431 Aug 1969 GB
1 289 993 Sep 1972 GB
1 378 294 Dec 1974 GB
2 172 769 Sep 1986 GB
2 239 971 Jul 1991 GB
2 289 593 Nov 1995 GB
56-89194 Jul 1981 JP
59-64994 Apr 1984 JP
62-189898 Aug 1987 JP
1-149695 Jun 1989 JP
1-314098 Dec 1989 JP
2-070152 Mar 1990 JP
3-169199 Jul 1991 JP
3-231599 Oct 1991 JP
4-16900 Jan 1992 JP
WO 8809512 Dec 1988 WO
WO 9205538 Apr 1992 WO
WO 9217019 Oct 1992 WO
WO 9416517 Jul 1994 WO
WO 9508906 Mar 1995 WO
WO 9615541 May 1996 WO
WO 9723068 Jun 1997 WO
Non-Patent Literature Citations (19)
Entry
B.D. Van Veen and K.M. Buckley, "Beamforming: A Versatile Approach to Spatial Filtering," IEEE ASSP Magazine, vol. 5, No. 2, Apr. 1988, pp. 4-24.
Beranek, Acoustics (American Institute of Physics, 1986) pp. 116-135.
Boll, IEEE Trans. on Acous., vol. ASSP-27, No. 2, Apr. 1979, pp. 113-120.
Daniel Sweeney, “Sound Conditioning Through DSP”, The Equipment Authority, 1994.
Edward J. Foster, “Switched on Silence”, Popular Science, 1994, p. 33.
Kuo, Automatic Control Systems, pp. 504-585.
Luenberger, Optimization by Vector Space Methods, pp. 134-138.
Ogata, Modern Control Engineering, pp. 474-508.
Oppenheim and Schafer, Digital Signal Processing (Prentice Hall), pp. 542-545.
P.P. Vaidyanathan, “Multirate Digital Filters, Filter Banks, Polyphase Networks, and Applications; A Tutorial,” IEEE Proc., vol. 78, No. 1, Jan. 1990.
P.P. Vaidyanathan, “Quadrature Mirror Filter Banks, M-band Extensions and Perfect-Reconstruction Techniques,” IEEE ASSP Magazine, Jul. 1987, pp. 4-20.
Rabiner et al., IEEE Trans. on Acous., vol. ASSP-24, No. 5, Oct. 1976, pp. 399-418.
Rabiner et al., Digital Processing of Speech Signals (Prentice Hall, 1978) pp. 130-135.
Papoulis, Probability, Random Variables and Stochastic Processes, pp. 467-474.
Scott C. Douglas, “A Family of Normalized LMS Algorithms,” IEEE Signal Proc. Letters, vol. 1, No. 3, Mar. 1994.
Sewald et al., “Application of . . . Beamforming to Reject Turbulence Noise in Airducts,” IEEE ICASSP vol. 5, No. CONF-21, May 7, 1996, pp. 2734-2737.
White, Moving-Coil Earphone Design, 1963, pp. 188-194.
Widrow et al., “Adaptive Noise Canceling: Principles and Applications,” Proc. IEEE, vol. 63, No. 12, Dec. 1975, pp. 1692-1716.
Youla et al., IEEE Trans. on Acous., vol. MI-1, No. 2, Oct. 1982, pp. 81-101.