The invention relates generally to detecting and processing acoustic signal data and more specifically to reducing noise in acoustic systems.
Acoustic systems employ acoustic sensors such as microphones to receive audio signals. Often, these systems are used in real-world environments which present desired audio and undesired audio (also referred to as noise) to a receiving microphone simultaneously. Such receiving microphones are part of a variety of systems such as a mobile phone, a handheld microphone, a hearing aid, etc. These systems often perform speech recognition processing on the received acoustic signals. Simultaneous reception of desired audio and undesired audio has a negative impact on the quality of the desired audio. Degradation of the quality of the desired audio can result in desired audio which is output to a user and is hard for the user to understand. Degraded desired audio used by an algorithm such as in Speech Recognition (SR) or Automatic Speech Recognition (ASR) can result in an increased error rate, which can render the reconstructed speech hard to understand. Either of these outcomes presents a problem.
Undesired audio (noise) can originate from a variety of sources which are not the source of the desired audio. Thus, the sources of undesired audio are statistically uncorrelated with the desired audio. The sources can be of a non-stationary origin or a stationary origin. Stationary applies to time and space, where the amplitude, frequency, and direction of an acoustic signal do not vary appreciably. For example, in an automobile environment, engine noise at constant speed is stationary, as is road noise, wind noise, etc. In the case of a non-stationary signal, the noise amplitude, frequency distribution, and direction of the acoustic signal vary as a function of time and/or space. Non-stationary noise originates, for example, from a car stereo, from a transient such as a bump or a door opening or closing, from conversation in the background such as chit chat in a back seat of a vehicle, etc. Stationary and non-stationary sources of undesired audio exist in office environments, concert halls, football stadiums, airplane cabins, and everywhere else that a user will go with an acoustic system (e.g., a mobile phone or tablet computer equipped with a microphone, a headset, an ear bud microphone, etc.). At times, the environment in which the acoustic system is used is reverberant, thereby causing the noise to reverberate within the environment, with multiple paths of undesired audio arriving at the microphone location. Either source of noise, i.e., non-stationary or stationary undesired audio, increases the error rate of speech recognition algorithms such as SR or ASR, or can simply make it difficult for a system to output desired audio to a user which can be understood. All of this can present a problem.
Various noise cancellation approaches have been employed to reduce noise from stationary and non-stationary sources. Existing noise cancellation approaches work better in environments where the magnitude of the noise is less than the magnitude of the desired audio, e.g., in relatively low noise environments. Spectral subtraction is used to reduce noise in speech recognition algorithms and in various acoustic systems such as in hearing aids. Systems employing Spectral Subtraction do not produce acceptable error rates when used in Automatic Speech Recognition (ASR) applications when a magnitude of the undesired audio becomes large. This can present a problem.
Various methods have been used to try to suppress or remove undesired audio from acoustic systems, such as in Speech Recognition (SR) or Automatic Speech Recognition (ASR) applications for example. One approach is known as a Voice Activity Detector (VAD). A VAD attempts to detect when desired speech is present and when undesired audio is present. The VAD thereby accepts only the desired speech and treats the undesired audio as noise by not transmitting it. Traditional voice activity detection only works well for a single sound source or for stationary noise (undesired audio) whose magnitude is small relative to the magnitude of the desired audio. Traditional voice activity detection is therefore a poor performer in a noisy environment. Additionally, using a VAD to remove undesired audio does not work well when the desired audio and the undesired audio arrive simultaneously at a receiving microphone. This can present a problem.
In dual microphone VAD systems, an energy level ratio between a main microphone and a reference microphone is compared with a preset threshold to determine when desired voice activity is present. If the energy level ratio is greater than the preset threshold, then desired voice activity is detected. If the energy level ratio does not exceed the preset threshold, then desired audio is not detected. When the background level of the undesired audio changes, a preset threshold can either fail to detect desired voice activity or can accept undesired audio as desired voice activity. In either case, the system's ability to properly detect desired voice activity is diminished, thereby negatively affecting system performance. This can present a problem.
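By way of a non-limiting illustration, the energy level ratio comparison described above can be sketched as follows. The function name, frame representation, and guard constant below are hypothetical and are not part of the embodiments described herein; each frame is assumed to be an array of audio samples from one microphone channel:

```python
import numpy as np

def detect_voice_activity(main_frame, ref_frame, threshold):
    """Compare the main/reference frame energy ratio against a threshold.

    Returns True when desired voice activity is detected. A fixed
    (preset) threshold is assumed here; the embodiments described
    herein replace it with an adaptively selected value.
    """
    main_energy = np.sum(np.square(main_frame))
    ref_energy = np.sum(np.square(ref_frame))
    ratio = main_energy / max(ref_energy, 1e-12)  # guard against divide-by-zero
    return ratio > threshold
```

With a fixed threshold, a rise in the background noise level raises the reference energy and can push the ratio below the threshold even while desired speech is present, which is the failure mode that motivates the adaptive threshold described below.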
The invention may best be understood by referring to the following description and accompanying drawings that are used to illustrate embodiments of the invention. The invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like references indicate similar elements.
In the following detailed description of embodiments of the invention, reference is made to the accompanying drawings in which like references indicate similar elements, and in which is shown by way of illustration, specific embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those of skill in the art to practice the invention. In some instances, well-known circuits, structures, and techniques have not been shown in detail in order not to obscure the understanding of this description. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the invention is defined only by the appended claims.
Apparatuses and methods are described for detecting and processing acoustic signals containing both desired audio and undesired audio. In one or more embodiments, apparatuses and methods are described which increase the performance of noise cancellation systems by increasing the signal-to-noise ratio difference between multiple channels and adaptively changing a threshold value of a voice activity detector based on the background noise of the environment.
In some embodiments, the main channel 102 has an omni-directional response and the reference channel 104 has an omni-directional response. In some embodiments, the acoustic beam patterns for the acoustic elements of the main channel 102 and the reference channel 104 are different. In other embodiments, the beam patterns for the main channel 102 and the reference channel 104 are the same; however, desired audio received on the main channel 102 is different from desired audio received on the reference channel 104. Therefore, a signal-to-noise ratio for the main channel 102 and a signal-to-noise ratio for the reference channel 104 are different. In general, the signal-to-noise ratio for the reference channel is less than the signal-to-noise ratio of the main channel. In various embodiments, by way of non-limiting examples, a difference between a main channel signal-to-noise ratio and a reference channel signal-to-noise ratio is approximately 1 or 2 decibels (dB) or more. In other non-limiting examples, a difference between a main channel signal-to-noise ratio and a reference channel signal-to-noise ratio is 1 decibel (dB) or less. Thus, embodiments of the invention are suited for high noise environments, which can result in low signal-to-noise ratios with respect to desired audio, as well as low noise environments, which can have higher signal-to-noise ratios. As used in this description of embodiments, signal-to-noise ratio means the ratio of desired audio to undesired audio in a channel. Furthermore, the term “main channel signal-to-noise ratio” is used interchangeably with the term “main signal-to-noise ratio.” Similarly, the term “reference channel signal-to-noise ratio” is used interchangeably with the term “reference signal-to-noise ratio.”
The main channel 102, the reference channel 104, and optionally a second reference channel 104b provide inputs to the noise cancellation module 103. While an optional second reference channel is shown in the figures, in various embodiments, more than two reference channels are used. In some embodiments, the noise cancellation module 103 includes an adaptive noise cancellation unit 106 which filters undesired audio from the main channel 102, thereby providing a first stage of filtering with multiple acoustic channels of input. In various embodiments, the adaptive noise cancellation unit 106 utilizes an adaptive finite impulse response (FIR) filter. The environment in which embodiments of the invention are used can present a reverberant acoustic field. Thus, the adaptive noise cancellation unit 106 includes a delay for the main channel sufficient to approximate the impulse response of the environment in which the system is used. A magnitude of the delay used will vary depending on the particular application that a system is designed for including whether or not reverberation must be considered in the design. In some embodiments, for microphone channels positioned very closely together (and where reverberation is not significant) a magnitude of the delay can be on the order of a fraction of a millisecond. Note that at the low end of a range of values, which could be used for a delay, an acoustic travel time between channels can represent a minimum delay value. Thus, in various embodiments, a delay value can range from approximately a fraction of a millisecond to approximately 500 milliseconds or more depending on the application.
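The first stage of filtering described above can be sketched with a normalized least mean squares (NLMS) update, one common way to adapt an FIR filter; the embodiments are not limited to NLMS, and the function name and parameter values below (number of taps, delay, step size) are hypothetical:

```python
import numpy as np

def adaptive_noise_cancel(main, reference, num_taps=32, delay=16, mu=0.1):
    """Two-channel adaptive noise cancellation sketch (NLMS).

    The main channel is delayed so that the adaptive FIR filter can
    model the (possibly reverberant) acoustic path from the reference
    microphone to the main microphone; the filter's estimate of the
    undesired audio is subtracted from the delayed main signal, and
    the residual is the noise-reduced output.
    """
    w = np.zeros(num_taps)                                  # adaptive FIR taps
    delayed_main = np.concatenate([np.zeros(delay), main])[: len(main)]
    out = np.zeros(len(main))
    for n in range(num_taps, len(main)):
        x = reference[n - num_taps : n][::-1]               # newest sample first
        y = np.dot(w, x)                                    # noise estimate
        e = delayed_main[n] - y                             # residual (output)
        w = w + mu * e * x / (np.dot(x, x) + 1e-12)         # normalized LMS step
        out[n] = e
    return out
```

The delay shown is a per-sample stand-in for the application-dependent delay discussed above, which may range from a fraction of a millisecond to hundreds of milliseconds in reverberant environments.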
An output 107 of the adaptive noise cancellation unit 106 is input into a single channel noise cancellation unit 118. The single channel noise cancellation unit 118 filters the output 107 and provides a further reduction of undesired audio from the output 107, thereby providing a second stage of filtering. The single channel noise cancellation unit 118 filters mostly stationary contributions to undesired audio. The single channel noise cancellation unit 118 includes a linear filter, such as for example a Wiener filter, a Minimum Mean Square Error (MMSE) filter implementation, a linear stationary noise filter, or other Bayesian filtering approaches which use prior information about the parameters to be estimated. Further description of the adaptive noise cancellation unit 106 and the components associated therewith and the filters used in the single channel noise cancellation unit 118 are described in U.S. Pat. No. 9,633,670 B2, titled DUAL STAGE NOISE REDUCTION ARCHITECTURE FOR DESIRED SIGNAL EXTRACTION, which is hereby incorporated by reference. In addition, the implementation and operation of other components of the filter control such as the main channel activity detector, the reference channel activity detector and the inhibit logic are described more fully in U.S. Pat. No. 7,386,135 titled “Cardioid Beam With A Desired Null Based Acoustic Devices, Systems and Methods,” which is hereby incorporated by reference.
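A minimal sketch of a Wiener-style spectral gain, one of the linear filtering approaches named above, follows. The noise power spectral density is assumed to be known in advance (for example, estimated while desired audio is absent); the function name and the particular gain formula are illustrative assumptions, not the specific filter of the incorporated references:

```python
import numpy as np

def wiener_gain(frame_spectrum, noise_psd):
    """Apply a Wiener gain to one frame's spectrum.

    Bins dominated by the estimated (mostly stationary) noise are
    attenuated toward zero; bins dominated by desired audio pass
    through nearly unchanged.
    """
    power = np.abs(frame_spectrum) ** 2
    # Maximum-likelihood style SNR estimate, floored at zero.
    snr_est = np.maximum(power / np.maximum(noise_psd, 1e-12) - 1.0, 0.0)
    gain = snr_est / (snr_est + 1.0)          # Wiener gain in [0, 1)
    return gain * frame_spectrum
```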
Acoustic signals from the main channel 102 are input at 108 into a filter control which includes a desired voice activity detector 114. Similarly, acoustic signals from the reference channel 104 are input at 110 into the desired voice activity detector 114 and into adaptive threshold module 112. An optional second reference channel is input at 108b into desired voice activity detector 114 and into adaptive threshold module 112. The desired voice activity detector 114 provides control signals 116 to the noise cancellation module 103, which can include control signals for the adaptive noise cancellation unit 106 and the single channel noise cancellation unit 118. The desired voice activity detector 114 provides a signal at 122 to the adaptive threshold module 112. The signal 122 indicates when desired voice activity is present and not present. In one or more embodiments a logical convention is used wherein a “1” indicates voice activity is present and a “0” indicates voice activity is not present. In other embodiments other logical conventions can be used for the signal 122.
The adaptive threshold module 112 includes a background noise estimation module and selection logic which provides a threshold value which corresponds to a given estimated average background noise level. A threshold value corresponding to an estimated average background noise level is passed at 118 to the desired voice activity detector 114. The threshold value is used by the desired voice activity detector 114 to determine when voice activity is present.
In various embodiments, the operation of adaptive threshold module 112 is described more completely below in conjunction with the figures that follow. An output 120 of the noise cancellation module 103 provides an acoustic signal which contains mostly desired audio and a reduced amount of undesired audio.
The system architecture shown in
In operation, the amplitude of the reference signals 110/108b will vary depending on the noise environment that the system is used in. For example, in a quiet environment, such as in some office settings, the background noise will be lower than for example in some outdoor environments subject to for example road noise or the noise generated at a construction site. In such varying environments, a different background noise level will be estimated by 202 and different threshold values will be selected by selection logic 210 based on the estimated average background noise level. The relationship between background noise level and threshold value is discussed more fully below in conjunction with
The compressed data is smoothed by a smoothing stage 308, where the high frequency fluctuations are reduced. In various embodiments, different smoothing can be applied. In one embodiment, smoothing is accomplished by a simple moving average, as shown by an equation 320. In another embodiment, smoothing is accomplished by an exponential moving average, as shown by an equation 330. The smoothed frame energy is output at 310 as the estimated average background energy level, which is used by the selection logic to select a threshold value that corresponds to the estimated average background energy level as described above in conjunction with
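Both smoothing choices described above can be sketched as follows (equations 320 and 330 are not reproduced here; the window length and smoothing constant shown are hypothetical values):

```python
import numpy as np

def smooth_sma(frame_energies, window=8):
    """Simple moving average over the most recent `window` frame energies."""
    kernel = np.ones(window) / window
    return np.convolve(frame_energies, kernel, mode="valid")

def smooth_ema(frame_energies, alpha=0.1):
    """Exponential moving average: s[n] = alpha*x[n] + (1-alpha)*s[n-1]."""
    smoothed = np.empty(len(frame_energies), dtype=float)
    smoothed[0] = frame_energies[0]
    for n in range(1, len(frame_energies)):
        smoothed[n] = alpha * frame_energies[n] + (1 - alpha) * smoothed[n - 1]
    return smoothed
```

The exponential moving average requires storage of only the previous smoothed value, which can make it attractive in an integrated circuit implementation; the simple moving average requires a buffer of the last `window` frame energies.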
An estimate of the average estimated background noise level is plotted at 422 with vertical scale 420 plotted with units of dB. The average estimated background noise level 422 has been estimated using the teachings presented above in conjunction with the preceding figures. Note that in the case of
Visual comparison of 422 (
With reference to
The associations of threshold value and estimated average background noise level, embodiments of which are illustrated in
Once the threshold values are obtained and their association with background noise levels established, the threshold values are stored and are available for use by the data processing system. For example, in one or more embodiments, the threshold values are stored in a look-up table at 206 (
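The look-up operation can be sketched as follows. The breakpoints and threshold offsets below are placeholders, not values taken from the figures; in practice they would be obtained empirically as described above and stored in the look-up table:

```python
# Hypothetical association of estimated average background noise
# levels (dB) with empirically derived threshold offsets (dB).
NOISE_DB_BREAKPOINTS = [40.0, 55.0, 70.0, 85.0]   # upper edge of each noise band
THRESHOLD_OFFSETS_DB = [9.0, 7.0, 5.0, 3.0, 2.0]  # one more entry than breakpoints

def select_threshold(background_db):
    """Return the threshold offset for the band the noise level falls in."""
    for edge, offset in zip(NOISE_DB_BREAKPOINTS, THRESHOLD_OFFSETS_DB):
        if background_db <= edge:
            return offset
    return THRESHOLD_OFFSETS_DB[-1]           # loudest band
```

Under this hypothetical table, a quieter environment selects a larger threshold offset and a louder environment selects a smaller one, so that desired voice activity remains detectable as the background noise level changes.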
Implementation of an adaptive threshold for the desired voice detection circuit enables a data processing system employing such functionality to operate over a greater range of background noise operating conditions, ranging from a quiet whisper to loud construction noise. Such functionality improves voice recognition accuracy and decreases the speech recognition error rate.
In operation, the threshold offset 610 is provided as described above, for example at 118 (
At a block 706 a threshold value (used synonymously with the term threshold offset value) is selected based on the estimated average background noise level computed from the channel used in the block 704.
At a block 708 the threshold value selected in block 706 is used to obtain a signal that indicates the presence of desired voice activity. The desired voice activity signal is used during noise cancellation as described in U.S. Pat. No. 9,633,670 B2, titled DUAL STAGE NOISE REDUCTION ARCHITECTURE FOR DESIRED SIGNAL EXTRACTION, which is hereby incorporated by reference.
In some embodiments, the main channel 802 has an omni-directional response and the reference channel 804 has an omni-directional response. In some embodiments, the acoustic beam patterns for the acoustic elements of the main channel 802 and the reference channel 804 are different. In other embodiments, the beam patterns for the main channel 802 and the reference channel 804 are the same; however, desired audio received on the main channel 802 is different from desired audio received on the reference channel 804. Therefore, a signal-to-noise ratio for the main channel 802 and a signal-to-noise ratio for the reference channel 804 are different. In general, the signal-to-noise ratio for the reference channel is less than the signal-to-noise ratio of the main channel. In various embodiments, by way of non-limiting examples, a difference between a main channel signal-to-noise ratio and a reference channel signal-to-noise ratio is approximately 1 or 2 decibels (dB) or more. In other non-limiting examples, a difference between a main channel signal-to-noise ratio and a reference channel signal-to-noise ratio is 1 decibel (dB) or less. Thus, embodiments of the invention are suited for high noise environments, which can result in low signal-to-noise ratios with respect to desired audio, as well as low noise environments, which can have higher signal-to-noise ratios. As used in this description of embodiments, signal-to-noise ratio means the ratio of desired audio to undesired audio in a channel. Furthermore, the term “main channel signal-to-noise ratio” is used interchangeably with the term “main signal-to-noise ratio.” Similarly, the term “reference channel signal-to-noise ratio” is used interchangeably with the term “reference signal-to-noise ratio.”
The main channel 802, the reference channel 804, and optionally a second reference channel 804b provide inputs to the noise cancellation module 803. While an optional second reference channel is shown in the figures, in various embodiments, more than two reference channels are used. In some embodiments, the noise cancellation module 803 includes an adaptive noise cancellation unit 806 which filters undesired audio from the main channel 802, thereby providing a first stage of filtering with multiple acoustic channels of input. In various embodiments, the adaptive noise cancellation unit 806 utilizes an adaptive finite impulse response (FIR) filter. The environment in which embodiments of the invention are used can present a reverberant acoustic field. Thus, the adaptive noise cancellation unit 806 includes a delay for the main channel sufficient to approximate the impulse response of the environment in which the system is used. A magnitude of the delay used will vary depending on the particular application that a system is designed for including whether or not reverberation must be considered in the design. In some embodiments, for microphone channels positioned very closely together (and where reverberation is not significant) a magnitude of the delay can be on the order of a fraction of a millisecond. Note that at the low end of a range of values, which could be used for a delay, an acoustic travel time between channels can represent a minimum delay value. Thus, in various embodiments, a delay value can range from approximately a fraction of a millisecond to approximately 500 milliseconds or more depending on the application.
An output 807 of the adaptive noise cancellation unit 806 is input into a single channel noise cancellation unit 818. The single channel noise cancellation unit 818 filters the output 807 and provides a further reduction of undesired audio from the output 807, thereby providing a second stage of filtering. The single channel noise cancellation unit 818 filters mostly stationary contributions to undesired audio. The single channel noise cancellation unit 818 includes a linear filter, such as for example a Wiener filter, a Minimum Mean Square Error (MMSE) filter implementation, a linear stationary noise filter, or other Bayesian filtering approaches which use prior information about the parameters to be estimated. Further description of the adaptive noise cancellation unit 806 and the components associated therewith and the filters used in the single channel noise cancellation unit 818 are described in U.S. Pat. No. 9,633,670, titled DUAL STAGE NOISE REDUCTION ARCHITECTURE FOR DESIRED SIGNAL EXTRACTION, which is hereby incorporated by reference.
Acoustic signals from the main channel 802 are input at 808 into a filter 840. An output 842 of the filter 840 is input into a filter control which includes a desired voice activity detector 814. Similarly, acoustic signals from the reference channel 804 are input at 810 into a filter 830. An output 832 of the filter 830 is input into the desired voice activity detector 814. The acoustic signals from the reference channel 804 are input at 810 into adaptive threshold module 812. An optional second reference channel is input at 808b into a filter 850. An output 852 of the filter 850 is input into the desired voice activity detector 814 and 808b is input into adaptive threshold module 812. The desired voice activity detector 814 provides control signals 816 to the noise cancellation module 803, which can include control signals for the adaptive noise cancellation unit 806 and the single channel noise cancellation unit 818. The desired voice activity detector 814 provides a signal at 822 to the adaptive threshold module 812. The signal 822 indicates when desired voice activity is present and not present. In one or more embodiments, a logical convention is used wherein a “1” indicates voice activity is present and a “0” indicates voice activity is not present. In other embodiments other logical conventions can be used for the signal 822.
Optionally, the signal input from the reference channel 804 to the adaptive threshold module 812 can be taken from the output of the filter 830, as indicated at 832. Similarly, if optional one or more second reference channels (indicated by 804b) are present in the architecture the filtered version of these signals at 852 can be input to the adaptive threshold module 812 (path not shown to preserve clarity in the illustration). If the filtered version of the signals (e.g., any of 832, 852, or 842) are input into the adaptive threshold module 812 a set of threshold values will be obtained which are different in magnitude from the threshold values which are obtained utilizing the unfiltered version of the signals. Adaptive threshold functionality is still provided in either case.
Each of the filters 830, 840, and 850 provides shaping to its respective input signal, i.e., 810, 808, and 808b; the filters are referred to collectively as shaping filters. As used in this description of embodiments, a shaping filter is used to remove a noise component from the signal that it filters. Each of the shaping filters 830, 840, and 850 applies substantially the same filtering to its respective input signal.
Filter characteristics are selected based on a desired noise mechanism for filtering. For example, road noise from a vehicle is often low frequency in nature and sometimes characterized by a 1/f roll-off where f is frequency. Thus, road noise can have a peak at low-frequency (approximately zero frequency or at some off-set thereto) with a roll-off as frequency increases. In such a case a high pass filter is useful to remove the contribution of road noise from the signals 810, 808, and optionally 808b if present. In one embodiment, a shaping filter used for road noise can have a response as shown in
In some applications a noise component can exist over a band of frequency. In such a case a notch filter is used to filter the signals accordingly. In yet other applications there will be one or more noise mechanisms providing simultaneous contribution to the signals. In such a case, filters are combined such as for example a high-pass filter and a notch filter. In various embodiments, other filter characteristics are combined to present a shaping filter designed for the noise environment that the system is deployed into.
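By way of a non-limiting illustration, a first-order high-pass section of the kind that would attenuate 1/f road noise can be sketched as follows. The coefficient value is hypothetical; for a band of noise at known frequency, a notch section could be cascaded with this filter in the same manner:

```python
import numpy as np

def highpass_shape(x, alpha=0.95):
    """First-order high-pass shaping filter.

    Implements y[n] = alpha * (y[n-1] + x[n] - x[n-1]), which blocks
    DC and attenuates the low-frequency region where road noise with
    a 1/f roll-off concentrates its energy, while passing the higher
    frequencies that carry most speech information.
    """
    y = np.zeros(len(x), dtype=float)
    for n in range(1, len(x)):
        y[n] = alpha * (y[n - 1] + x[n] - x[n - 1])
    return y
```

A larger `alpha` moves the cut-off frequency lower; the value would be chosen (or programmed, as described below) for the noise environment the system is deployed into.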
As implemented in a given data processing system, shaping filters can be programmable so that the data processing system can be adapted for multiple environments where the background noise spectrum is known to have a different structure. In one or more embodiments, the programmable functionality of a shaping filter can be accomplished by external jumpers to the integrated circuit containing the filters, by adjustment through a firmware download, or by programmable functionality which is adjusted by a user via voice command according to the environment the system is deployed in. For example, a user can instruct the data processing system via voice command to adjust for road noise, periodic noise, etc., and the appropriate shaping filter is switched in and out according to the command.
The adaptive threshold module 812 includes a background noise estimation module and selection logic which provides a threshold value which corresponds to a given estimated average background noise level. A threshold value corresponding to an estimated average background noise level is passed at 818 to the desired voice activity detector 814. The threshold value is used by the desired voice activity detector 814 to determine when voice activity is present.
In various embodiments, the operation of adaptive threshold module 812 has been described more completely above in conjunction with the preceding figures. An output 820 of the noise cancellation module 803 provides an acoustic signal which contains mostly desired audio and a reduced amount of undesired audio.
The system architecture shown in
Applying a shaping filter as described above increases a signal-to-noise ratio difference between the two channels, as illustrated in equation 1150. Increasing the signal-to-noise ratio difference between the channels increases the accuracy of the desired voice activity detection module, which in turn increases the noise cancellation performance of the system.
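Under the definition given earlier (signal-to-noise ratio as the ratio of desired audio to undesired audio in a channel), the per-channel SNR and the between-channel difference can be sketched as follows; equation 1150 itself is not reproduced here, and the function names are hypothetical:

```python
import numpy as np

def snr_db(desired, undesired):
    """Signal-to-noise ratio in dB: desired-audio energy over noise energy."""
    return 10.0 * np.log10(np.sum(desired ** 2) / np.sum(undesired ** 2))

def snr_difference_db(main_desired, main_noise, ref_desired, ref_noise):
    """Between-channel SNR difference: main-channel SNR minus reference SNR."""
    return snr_db(main_desired, main_noise) - snr_db(ref_desired, ref_noise)
```

A shaping filter that removes more noise energy from the main channel than desired-audio energy raises the first term, widening the difference on which the desired voice activity detector operates.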
Thus, in various embodiments, acoustic signal data is received at 1229 for processing by the acoustic signal processing system 1200. Such data can be transmitted at 1232 via communications interface 1230 for further processing in a remote location. Connection with a network, such as an intranet or the Internet, is obtained via 1232, as is recognized by those of skill in the art, which enables the acoustic signal processing system 1200 to communicate with other data processing devices or systems in remote locations.
For example, embodiments of the invention can be implemented on a computer system 1200 configured as a desktop computer or work station, for example on a WINDOWS® compatible computer running operating systems such as WINDOWS® XP Home or WINDOWS® XP Professional, Linux, Unix, etc., as well as on computers from APPLE COMPUTER, Inc. running operating systems such as OS X, etc. Alternatively, or in conjunction with such an implementation, embodiments of the invention can be configured with devices such as speakers, earphones, video monitors, etc. configured for use with a Bluetooth communication channel. In yet other implementations, embodiments of the invention are configured to be implemented by mobile devices such as a smart phone, a tablet computer, or a wearable device, such as eye glasses, a near-to-eye (NTE) headset, or the like.
Algorithms used to process speech, such as Speech Recognition (SR) algorithms or Automatic Speech Recognition (ASR) algorithms benefit from increased signal-to-noise ratio difference between main and reference channels. As such, the error rates of speech recognition engines are greatly reduced through application of embodiments of the invention.
In various embodiments, different types of microphones can be used to provide the acoustic signals needed for the embodiments of the invention presented herein. Any transducer that converts a sound wave to an electrical signal is suitable for use with embodiments of the invention. Some non-limiting examples of microphones are a dynamic microphone, a condenser microphone, an Electret Condenser Microphone (ECM), and a microelectromechanical systems (MEMS) microphone. In other embodiments a condenser microphone (CM) is used. In yet other embodiments micro-machined microphones are used. Microphones based on a piezoelectric film are used with other embodiments. Piezoelectric elements are made out of ceramic materials, plastic material, or film. In yet other embodiments, micro-machined arrays of microphones are used. In yet other embodiments, silicon or polysilicon micro-machined microphones are used. In some embodiments, bi-directional pressure gradient microphones are used to provide multiple acoustic channels. Various microphones or microphone arrays, including the systems described herein, can be mounted on or within structures such as eyeglasses, headsets, wearable devices, etc. Various directional microphones can be used, such as, but not limited to, microphones having a cardioid beam pattern, a dipole beam pattern, an omni-directional beam pattern, or a user defined beam pattern. In some embodiments, one or more acoustic elements are configured to provide the microphone inputs.
In various embodiments, the components of the adaptive threshold module, such as shown in the figures above are implemented in an integrated circuit device, which may include an integrated circuit package containing the integrated circuit. In some embodiments, the adaptive threshold module is implemented in a single integrated circuit die. In other embodiments, the adaptive threshold module is implemented in more than one integrated circuit die of an integrated circuit device which may include a multi-chip package containing the integrated circuit.
In various embodiments, the components of the desired voice activity detector, such as shown in the figures above are implemented in an integrated circuit device, which may include an integrated circuit package containing the integrated circuit. In some embodiments, the desired voice activity detector is implemented in a single integrated circuit die. In other embodiments, the desired voice activity detector is implemented in more than one integrated circuit die of an integrated circuit device which may include a multi-chip package containing the integrated circuit.
In various embodiments, the components of the background noise estimation module, such as shown in the figures above are implemented in an integrated circuit device, which may include an integrated circuit package containing the integrated circuit. In some embodiments, the background noise estimation module is implemented in a single integrated circuit die. In other embodiments, the background noise estimation module is implemented in more than one integrated circuit die of an integrated circuit device which may include a multi-chip package containing the integrated circuit.
In various embodiments, the components of the noise cancellation module, such as shown in the figures above, are implemented in an integrated circuit device, which may include an integrated circuit package containing the integrated circuit. In some embodiments, the noise cancellation module is implemented in a single integrated circuit die. In other embodiments, the noise cancellation module is implemented in more than one integrated circuit die of an integrated circuit device, which may include a multi-chip package containing the integrated circuit.
In various embodiments, the components of the selection logic, such as shown in the figures above, are implemented in an integrated circuit device, which may include an integrated circuit package containing the integrated circuit. In some embodiments, the selection logic is implemented in a single integrated circuit die. In other embodiments, the selection logic is implemented in more than one integrated circuit die of an integrated circuit device, which may include a multi-chip package containing the integrated circuit.
In various embodiments, the components of the shaping filter, such as shown in the figures above, are implemented in an integrated circuit device, which may include an integrated circuit package containing the integrated circuit. In some embodiments, the shaping filter is implemented in a single integrated circuit die. In other embodiments, the shaping filter is implemented in more than one integrated circuit die of an integrated circuit device, which may include a multi-chip package containing the integrated circuit.
For purposes of discussing and understanding the embodiments of the invention, it is to be understood that various terms are used by those knowledgeable in the art to describe techniques and approaches. Furthermore, in the description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be evident, however, to one of ordinary skill in the art that the present invention may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that logical, mechanical, electrical, and other changes may be made without departing from the scope of the present invention.
Some portions of the description may be presented in terms of algorithms and symbolic representations of operations on, for example, data bits within a computer memory. These algorithmic descriptions and representations are the means used by those of ordinary skill in the data processing arts to most effectively convey the substance of their work to others of ordinary skill in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of acts leading to a desired result. The acts are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, waveforms, data, time series or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, can refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission, or display devices.
An apparatus for performing the operations herein can implement the present invention. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer, selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, hard disks, optical disks, compact disk read-only memories (CD-ROMs), and magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), electrically programmable read-only memories (EPROM)s, electrically erasable programmable read-only memories (EEPROMs), FLASH memories, magnetic or optical cards, etc., or any type of media suitable for storing electronic instructions either local to the computer or remote to the computer.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method. For example, any of the methods according to the present invention can be implemented in hard-wired circuitry, by programming a general-purpose processor, or by any combination of hardware and software. One of ordinary skill in the art will immediately appreciate that the invention can be practiced with computer system configurations other than those described, including hand-held devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, digital signal processing (DSP) devices, network PCs, minicomputers, mainframe computers, and the like. The invention can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network.
The methods of the invention may be implemented using computer software. If written in a programming language conforming to a recognized standard, sequences of instructions designed to implement the methods can be compiled for execution on a variety of hardware platforms and for interface to a variety of operating systems. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein. Furthermore, it is common in the art to speak of software, in one form or another (e.g., program, procedure, application, driver, . . . ), as taking an action or causing a result. Such expressions are merely a shorthand way of saying that execution of the software by a computer causes the processor of the computer to perform an action or produce a result.
It is to be understood that various terms and techniques are used by those knowledgeable in the art to describe communications, protocols, applications, implementations, mechanisms, etc. One such technique is the description of an implementation of a technique in terms of an algorithm or mathematical expression. That is, while the technique may be, for example, implemented as executing code on a computer, the expression of that technique may be more aptly and succinctly conveyed and communicated as a formula, algorithm, mathematical expression, flow diagram or flow chart. Thus, one of ordinary skill in the art would recognize a block denoting A+B=C as an additive function whose implementation in hardware and/or software would take two inputs (A and B) and produce a summation output (C). Thus, the use of formula, algorithm, or mathematical expression as descriptions is to be understood as having a physical embodiment in at least hardware and/or software (such as a computer system in which the techniques of the present invention may be practiced as well as implemented as an embodiment).
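As a purely illustrative sketch of the correspondence described above (the function name below is hypothetical and not part of the disclosure), the block denoting A+B=C could be embodied in software as:

```python
def summation_block(a: float, b: float) -> float:
    """Additive function: takes two inputs (A and B) and
    produces a summation output (C), i.e., C = A + B."""
    return a + b

# The block-diagram notation A + B = C maps directly onto this call:
c = summation_block(2.0, 3.0)  # C = 5.0
```

The same block has an equally direct physical embodiment in hardware, e.g., as an adder circuit with two input lines and one output line.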
Non-transitory machine-readable media are understood to include any mechanism for storing information in a form readable by a machine (e.g., a computer). For example, a machine-readable medium, synonymously referred to as a computer-readable medium, includes read only memory (ROM); random access memory (RAM); magnetic disk storage media; optical storage media; flash memory devices; and the like, but excludes electrical, optical, acoustical, or other forms of transmitting information via propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.).
As used in this description, “one embodiment” or “an embodiment” or similar phrases means that the feature(s) being described are included in at least one embodiment of the invention. References to “one embodiment” in this description do not necessarily refer to the same embodiment; however, neither are such embodiments mutually exclusive. Nor does “one embodiment” imply that there is but a single embodiment of the invention. For example, a feature, structure, act, etc. described in “one embodiment” may also be included in other embodiments. Thus, the invention may include a variety of combinations and/or integrations of the embodiments described herein.
Thus, embodiments of the invention can be used to reduce or eliminate undesired audio from acoustic systems that process and deliver desired audio. Some non-limiting examples of such systems include: short boom headsets, such as an audio headset for telephony suitable for enterprise call centers and for industrial and general mobile usage; an in-line “ear buds” headset with an input line (wire, cable, or other connector); microphones mounted on or within the frame of eyeglasses; a near-to-eye (NTE) headset display, headset computing device, or wearable device; a long boom headset for very noisy environments such as industrial, military, and aviation applications; and a gooseneck desktop-style microphone, which can be used to provide theater- or symphony-hall-type quality acoustics without the structural costs.
While the invention has been described in terms of several embodiments, those of skill in the art will recognize that the invention is not limited to the embodiments described, but can be practiced with modification and alteration within the spirit and scope of the appended claims. The description is thus to be regarded as illustrative instead of limiting.
Number | Name | Date | Kind |
---|---|---|---|
3378649 | Mawby | Apr 1968 | A |
3789163 | Dunlavy | Jan 1974 | A |
3919481 | Kalfaian | Nov 1975 | A |
3946168 | Preves | Mar 1976 | A |
4773095 | Zwicker et al. | Sep 1988 | A |
4904078 | Gorike | Feb 1990 | A |
4966252 | Drever | Oct 1990 | A |
5657420 | Jacobs | Aug 1997 | A |
5825898 | Marash | Oct 1998 | A |
6023674 | Mekuria | Feb 2000 | A |
6091546 | Spitzer | Jul 2000 | A |
6266422 | Ikeda | Jul 2001 | B1 |
6349001 | Spitzer | Feb 2002 | B1 |
6678657 | Bruckner | Jan 2004 | B1 |
6694293 | Benyassine | Feb 2004 | B2 |
6707910 | Valve | Mar 2004 | B1 |
7174022 | Zhang et al. | Feb 2007 | B1 |
7359504 | Reuss et al. | Apr 2008 | B1 |
7881927 | Reuss | Feb 2011 | B1 |
7929714 | Bazarjani et al. | Apr 2011 | B2 |
8184983 | Ho et al. | May 2012 | B1 |
8543061 | Suhami | Sep 2013 | B2 |
8744113 | Rickards | Jun 2014 | B1 |
8958572 | Solbach | Feb 2015 | B1 |
9280982 | Kushner | Mar 2016 | B1 |
20020106091 | Furst | Aug 2002 | A1 |
20020184015 | Li | Dec 2002 | A1 |
20030040908 | Yang | Feb 2003 | A1 |
20030147538 | Elko | Aug 2003 | A1 |
20030179888 | Burnett | Sep 2003 | A1 |
20040111258 | Zangi | Jun 2004 | A1 |
20050063552 | Shuttleworth | Mar 2005 | A1 |
20050069156 | Haapapuro et al. | Mar 2005 | A1 |
20050096899 | Padhi | May 2005 | A1 |
20050248717 | Howell et al. | Nov 2005 | A1 |
20060020451 | Kushner | Jan 2006 | A1 |
20060217973 | Gao | Sep 2006 | A1 |
20060285714 | Akino | Dec 2006 | A1 |
20070160254 | Ritter et al. | Jul 2007 | A1 |
20080137874 | Christoph | Jun 2008 | A1 |
20080249779 | Hennecke | Oct 2008 | A1 |
20080260189 | Schobben | Oct 2008 | A1 |
20080267427 | Johnston | Oct 2008 | A1 |
20080317259 | Zhang et al. | Dec 2008 | A1 |
20080317260 | Short | Dec 2008 | A1 |
20090089053 | Wang | Apr 2009 | A1 |
20090089054 | Wang | Apr 2009 | A1 |
20090112579 | Li | Apr 2009 | A1 |
20090129582 | Chandran | May 2009 | A1 |
20090154726 | Taenzer | Jun 2009 | A1 |
20090190774 | Wang | Jul 2009 | A1 |
20090299739 | Chan | Dec 2009 | A1 |
20100100386 | Yu | Apr 2010 | A1 |
20100198590 | Tackin | Aug 2010 | A1 |
20100208928 | Chene et al. | Aug 2010 | A1 |
20100241426 | Zhang | Sep 2010 | A1 |
20100278352 | Petit | Nov 2010 | A1 |
20100280824 | Petit | Nov 2010 | A1 |
20110038489 | Visser | Feb 2011 | A1 |
20110066429 | Shperling | Mar 2011 | A1 |
20110071825 | Emori | Mar 2011 | A1 |
20110081026 | Ramakrishnan | Apr 2011 | A1 |
20110091057 | Derkx | Apr 2011 | A1 |
20110099010 | Zhang | Apr 2011 | A1 |
20110106533 | Yu | May 2011 | A1 |
20110243349 | Zavarehei | Oct 2011 | A1 |
20110293103 | Park | Dec 2011 | A1 |
20120010881 | Avendano | Jan 2012 | A1 |
20120051548 | Visser | Mar 2012 | A1 |
20120075168 | Osterhout et al. | Mar 2012 | A1 |
20120084084 | Zhu | Apr 2012 | A1 |
20120123773 | Zeng | May 2012 | A1 |
20120123775 | Murgia | May 2012 | A1 |
20120162259 | Sakai | Jun 2012 | A1 |
20120209601 | Jing | Aug 2012 | A1 |
20120215519 | Park | Aug 2012 | A1 |
20120215536 | Sehlstedt | Aug 2012 | A1 |
20120239394 | Matsumoto | Sep 2012 | A1 |
20120259631 | Lloyd | Oct 2012 | A1 |
20120282976 | Suhami | Nov 2012 | A1 |
20130030803 | Liao | Jan 2013 | A1 |
20130034243 | Yermeche | Feb 2013 | A1 |
20130142343 | Matsui | Jun 2013 | A1 |
20130314280 | Maltsev et al. | Nov 2013 | A1 |
20130332157 | Iyengar | Dec 2013 | A1 |
20140003622 | Ikizyan | Jan 2014 | A1 |
20140006019 | Paajanen | Jan 2014 | A1 |
20140010373 | Gran | Jan 2014 | A1 |
20140056435 | Kjems | Feb 2014 | A1 |
20140081631 | Zhu | Mar 2014 | A1 |
20140236590 | Hu | Aug 2014 | A1 |
20140268016 | Chow et al. | Sep 2014 | A1 |
20140270244 | Fan | Sep 2014 | A1 |
20140270316 | Fan | Sep 2014 | A1 |
20140278391 | Braho | Sep 2014 | A1 |
20140337021 | Kim | Nov 2014 | A1 |
20140358526 | Abdelal | Dec 2014 | A1 |
20150012269 | Nakadai | Jan 2015 | A1 |
20150032451 | Gunn | Jan 2015 | A1 |
20150106088 | Jarvinen | Apr 2015 | A1 |
20150172807 | Olsson | Jun 2015 | A1 |
20150215700 | Sun | Jul 2015 | A1 |
20150221322 | Iyengar | Aug 2015 | A1 |
20150230023 | Fujieda | Aug 2015 | A1 |
20150262590 | Joder | Sep 2015 | A1 |
20150262591 | Hill | Sep 2015 | A1 |
20150269954 | Ryan | Sep 2015 | A1 |
20150287406 | Kristjansson | Oct 2015 | A1 |
20150294674 | Takahashi | Oct 2015 | A1 |
20150318902 | Sugiyama | Nov 2015 | A1 |
20160005422 | Zad Issa | Jan 2016 | A1 |
20160029121 | Nesta | Jan 2016 | A1 |
Number | Date | Country |
---|---|---|
202 102 188 | Jan 2012 | CN |
2 323 422 | May 2011 | EP |
2 469 323 | Jun 2012 | EP |
58013008 | Jan 1983 | JP |
06-338827 | Dec 1994 | JP |
9252340 | Sep 1997 | JP |
10-301600 | Nov 1998 | JP |
2003-271191 | Sep 2003 | JP |
2011015018 | Jan 2011 | JP |
10-0857822 | Sep 2008 | KR |
10-0936772 | Jan 2010 | KR |
WO 200002419 | Jan 2000 | WO |
WO 2009076016 | Jun 2009 | WO |
WO 2011087770 | Jul 2011 | WO |
WO 2012040386 | Mar 2012 | WO |
WO 2012097014 | Jul 2012 | WO |
WO 2014158426 | Oct 2014 | WO |
WO 2014163794 | Oct 2014 | WO |
WO 2014163796 | Oct 2014 | WO |
WO 2014163797 | Oct 2014 | WO |
Entry |
---|
International Search Report & Written Opinion for PCT/US2014/026332, Entitled “Dual Stage Noise Reduction Architecture for Desired Signal Extraction,” dated Jul. 24, 2014. |
International Search Report & Written Opinion for PCT/US2014/028605, Entitled “Apparatuses and Methods for Multi-Channel Signal Compression During Desired . . . ,” dated Jul. 24, 2014. |
International Search Report & Written Opinion for PCT/US2014/026332, Entitled “Apparatuses and Methods for Acoustic Channel Auto-Balancing During Mult- . . . ,” dated Jul. 30, 2014. |
Zhang, Xianxian, Noise Estimation Based on an Adaptive Smoothing Factor for Improving Speech Quality in a Dual-Microphone Noise-Suppression System, 2011, IEEE, 5 PGS, US. |
International Search Report & Written Opinion, PCT/US2014/016547, Entitled, “Sound Induction Ear Speaker for Eye Glasses,” dated Apr. 29, 2014 (15 pages). |
International Search Report & Written Opinion, PCT/US2014/016557, Entitled, “Sound Induction Ear Speaker for Eye Glasses,” dated Sep. 24, 2014 (15 pages). |
International Search Report & Written Opinion, PCT/US2014/016558, Entitled, “Eye Glasses With Microphone Array,” dated Jun. 12, 2014 (12 pages). |
International Search Report & Written Opinion, PCT/US2014/016570, Entitled, “Noise Cancelling Microphone Apparatus,” dated Jun. 25, 2014 (19 pages). |
Number | Date | Country | |
---|---|---|---|
20170110142 A1 | Apr 2017 | US |