Directional audio capture

Information

  • Patent Grant
  • Patent Number
    9,838,784
  • Date Filed
    Wednesday, December 2, 2015
  • Date Issued
    Tuesday, December 5, 2017
Abstract
Systems and methods for improving performance of a directional audio capture system are provided. An example method includes correlating phase plots of at least two audio inputs, with the audio inputs being captured by at least two microphones. The method can further include generating, based on the correlation, estimates of salience at different directional angles to localize a direction of a source of sound. The method can provide cues to the directional audio capture system based on the estimates. The cues include attenuation levels. A rate of change of the levels of attenuation is controlled by attack and release time constants to avoid sound artifacts. The method also includes determining a mode based on an absence or presence of one or more peaks in the estimates of salience. The method also provides for configuring the directional audio capture system based on the determined mode.
Description
FIELD

The present disclosure relates generally to audio processing and, more particularly, to systems and methods for improving performance of directional audio capture.


BACKGROUND

Existing systems for directional audio capture are typically configured to capture an audio signal within an area of interest (e.g., within a lobe) and to suppress anything outside the lobe. Furthermore, the existing systems for directional audio capture do not utilize the directionality of the speaker being recorded. This results in non-uniform suppression throughout the lobe. The robustness of such systems can be compromised, especially in cases of varying distances between a talker (i.e., speaker) and an audio capturing device for a given angle. If the talker moves closer to or farther away from the device, the suppression can become non-uniform.


In the existing solutions for directional audio capture, out-of-the-box calibration and customer requirements may disagree. This disagreement may result in more or less suppression being needed at a certain range of angles. With non-uniform suppression, deploying such solutions becomes even more challenging where suppressing or boosting certain angles is desirable to maintain uniform noise suppression across the lobe.


The existing directional audio capture solutions can also be very sensitive to microphone sealing: better microphone sealing results in more uniform suppression, while poor microphone sealing results in non-uniform suppression. Microphone sealing, in general, can make one device differ from another even when the devices come from the same manufacturing batch. A solution that remains robust to microphone sealing as the distance between a talker and an audio capture system changes is therefore desirable.


SUMMARY

This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.


Provided are systems and methods for improving performance of a directional audio capture system. An example method includes correlating phase plots of at least two audio inputs. The method allows for generating, based on the correlation, estimates of salience at different directional angles to localize at least one direction associated with at least one source of a sound. The method also includes determining cues, based on the estimates of salience, and providing the cues to the directional audio capture system.


In some embodiments, the cues are used by the directional audio capture system to attenuate or amplify the at least two audio inputs at the different directional angles. In certain embodiments, the cues include at least attenuation levels for the different directional angles. In some embodiments, the estimates of salience include a vector of saliences at directional angles from 0 to 360 degrees in a plane parallel to the ground.


In some embodiments, generating the cues includes mapping the different directional angles to relative levels of attenuation for the directional audio capture system. In certain embodiments, the method includes controlling the rate of change of the levels of attenuation in real time by attack and release time constants to avoid sound artifacts.


In some embodiments, the method includes determining, based on the absence or presence of one or more peaks in the estimates of salience, a mode from a plurality of operational modes. The method allows configuring, based on the determined mode, the directional audio capture system. In certain embodiments, the method allows controlling a rate of switching between modes from the plurality of operational modes in real time by applying attack and release time constants. In some embodiments, the audio inputs are captured by at least two microphones having different qualities of sealing.


According to another example embodiment of the present disclosure, the steps of the method for improving performance of directional audio capture systems are stored on a machine-readable medium comprising instructions, which, when executed by one or more processors, perform the recited steps.


Other example embodiments of the disclosure and aspects will become apparent from the following description taken in conjunction with the following drawings.





BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements.



FIG. 1 is a block diagram of an exemplary environment in which the present technology can be used.



FIG. 2 is a block diagram of an exemplary audio device.



FIG. 3 is a block diagram of an exemplary audio processing system.



FIG. 4 is a block diagram of an exemplary beam former module.



FIG. 5 is a flow chart of an exemplary method for performing an audio zoom.



FIG. 6 is a flow chart of an exemplary method for enhancing acoustic signal components.



FIG. 7 is a flow chart of an exemplary method for generating a multiplicative mask.



FIG. 8 is a block diagram of an exemplary audio processing system suitable for improving performance of directional audio capture.



FIG. 9 is a flow chart of an exemplary method for improving performance of directional audio capture.



FIG. 10 is a computer system that can be used to implement methods disclosed herein, according to various example embodiments.





DETAILED DESCRIPTION

The technology disclosed herein relates to systems and methods for improving performance of directional audio capture. Embodiments of the present technology may be practiced with audio devices operable at least to capture and process acoustic signals. The audio devices can include: radio frequency (RF) receivers, transmitters, and transceivers; wired and/or wireless telecommunications and/or networking devices; amplifiers; audio and/or video players; encoders; decoders; speakers; inputs; outputs; storage devices; and user input devices. Audio devices may include input devices such as buttons, switches, keys, keyboards, trackballs, sliders, touch screens, one or more microphones, gyroscopes, accelerometers, global positioning system (GPS) receivers, and the like. The audio devices may include outputs, such as Light-Emitting Diode (LED) indicators, video displays, touchscreens, speakers, and the like. In some embodiments, the audio devices include hand-held devices, such as wired and/or wireless remote controls, notebook computers, tablet computers, phablets, smart phones, personal digital assistants, media players, mobile telephones, and the like. In certain embodiments, the audio devices include Television (TV) sets, car control and audio systems, smart thermostats, light switches, dimmers, and so on.


In various embodiments, the audio devices operate in stationary and portable environments. Stationary environments can include residential and commercial buildings or structures, and the like. For example, the stationary embodiments can include living rooms, bedrooms, home theaters, conference rooms, auditoriums, business premises, and the like. Portable environments can include moving vehicles, moving persons, other transportation means, and the like.


According to an example embodiment, a method for improving a directional audio capture system includes correlating phase plots of at least two audio inputs. The method allows for generating, based on the correlation, estimates of salience at different directional angles to localize at least one direction associated with at least one source of a sound. The method includes determining cues, based on the estimates of salience, and providing the cues to the directional audio capture system. The cues include at least levels of attenuation.



FIG. 1 is a block diagram of an exemplary environment 100 in which the present technology can be used. The environment 100 of FIG. 1 includes audio device 104 and audio sources 112, 114, and 116, all within an area bounded by walls 132 and 134.


A user of the audio device 104 may choose to focus on or “zoom” into a particular audio source from the multiple audio sources within environment 100. Environment 100 includes audio sources 112, 114, and 116, which all provide audio in multiple directions, including towards audio device 104. Additionally, audio from sources 112 and 116, as well as from other audio sources, may reflect off the walls 132 and 134 of the environment 100 and be directed at audio device 104. For example, reflection 128 is a reflection of an audio signal provided by audio source 112 and reflected from wall 132, and reflection 129 is a reflection of an audio signal provided by audio source 116 and reflected from wall 134, both of which travel towards audio device 104.


The present technology allows the user to select an area to “zoom.” By performing an audio zoom on a particular area, the present technology detects audio signals having a source within the particular area and enhances those signals with respect to signals from audio sources outside the particular area. The area may be defined using a beam, such as, for example, beam 140 in FIG. 1. In FIG. 1, beam 140 contains an area that includes audio source 114. Audio sources 112 and 116 are contained outside the beam area. As such, the present technology would emphasize or “zoom” into the audio signal provided by audio source 114 and de-emphasize the audio provided by audio sources 112 and 116, including any reflections provided by environment 100, such as reflections 128 and 129.


A primary microphone 106 and secondary microphone 108 of audio device 104 may be omni-directional microphones. Alternate embodiments may utilize other forms of microphones or acoustic sensors, such as directional microphones.


While the microphones 106 and 108 receive sound (i.e., acoustic signals) from the audio source 114, they also pick up noise 122 from audio source 112. Although the noise 122 is shown coming from a single location in FIG. 1, it may include any sounds from one or more locations that differ from the location of audio source 114, and may include reverberations and echoes. The noise 122 may be stationary, non-stationary, or a combination of both.


Some embodiments may utilize level differences (e.g., energy differences) between the acoustic signals received by the two microphones 106 and 108. Because the primary microphone 106 is much closer to the audio source than the secondary microphone 108 in a close-talk use case, the intensity level is higher at the primary microphone 106, resulting in a larger energy level being received by the primary microphone 106 during a speech/voice segment, for example.


The level difference may then be used to discriminate speech and noise in the time-frequency domain. Further embodiments may use a combination of energy level differences and time delays to discriminate speech. Based on binaural cue encoding, speech signal extraction or speech enhancement may be performed.
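
As a rough illustration of this level-difference cue (not the patent's implementation), the sketch below thresholds a per-cell inter-microphone level difference to flag likely speech cells; the function name, the STFT input format, and the 6 dB threshold are assumptions.

```python
import numpy as np

def ild_speech_mask(primary_spec, secondary_spec, threshold_db=6.0):
    """Flag time-frequency cells where the primary microphone is
    sufficiently louder than the secondary one (close-talk speech).

    primary_spec, secondary_spec: complex STFT arrays of shape
    (frames, sub_bands) from the two microphones.
    """
    eps = 1e-12
    p_energy = np.abs(primary_spec) ** 2
    s_energy = np.abs(secondary_spec) ** 2
    ild_db = 10.0 * np.log10((p_energy + eps) / (s_energy + eps))
    # Cells with a large positive level difference are treated as speech.
    return ild_db > threshold_db
```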



FIG. 2 is a block diagram of an exemplary audio device. In some embodiments, the audio device of FIG. 2 provides more detail for audio device 104 of FIG. 1.


In the illustrated embodiment, the audio device 104 includes a receiver 210, a processor 220, the primary microphone 106, an optional secondary microphone 108, an audio processing system 230, and an output device 240. The audio device 104 may include further or other components needed for audio device 104 operations. Similarly, the audio device 104 may include fewer components that perform similar or equivalent functions to those depicted in FIG. 2.


Processor 220 may execute instructions and modules stored in a memory (not illustrated in FIG. 2) in the audio device 104 to perform functionality described herein, including noise reduction for an acoustic signal. Processor 220 may include hardware and software implemented as a processing unit, which may process floating point operations and other operations for the processor 220.


The exemplary receiver 210 is an acoustic sensor configured to receive a signal from a communications network. In some embodiments, the receiver 210 may include an antenna device. The signal may then be forwarded to the audio processing system 230 to reduce noise using the techniques described herein, and provide an audio signal to the output device 240. The present technology may be used in one or both of the transmitting and receiving paths of the audio device 104.


The audio processing system 230 is configured to receive the acoustic signals from an acoustic source via the primary microphone 106 and secondary microphone 108 and process the acoustic signals. Processing may include performing noise reduction within an acoustic signal. The audio processing system 230 is discussed in more detail below. The primary and secondary microphones 106, 108 may be spaced a distance apart in order to allow for detecting an energy level difference, time difference, or phase difference between them. The acoustic signals received by primary microphone 106 and secondary microphone 108 may be converted into electrical signals (i.e., a primary electrical signal and a secondary electrical signal). The electrical signals may themselves be converted by an analog-to-digital converter (not shown) into digital signals for processing, in accordance with some embodiments. In order to differentiate the acoustic signals for clarity purposes, the acoustic signal received by the primary microphone 106 is herein referred to as the primary acoustic signal, while the acoustic signal received by the secondary microphone 108 is herein referred to as the secondary acoustic signal. The primary acoustic signal and the secondary acoustic signal may be processed by the audio processing system 230 to produce a signal with an improved signal-to-noise ratio. It should be noted that embodiments of the technology described herein may be practiced utilizing only the primary microphone 106.


The output device 240 is any device that provides an audio output to the user. For example, the output device 240 may include a speaker, an earpiece of a headset or handset, or a speaker on a conference device.


In various embodiments, where the primary and secondary microphones 106 and 108 are omni-directional microphones that are closely spaced (e.g., 1-2 cm apart), a beamforming technique may be used to simulate forward-facing and backward-facing directional microphones. The level difference may be used to discriminate speech and noise in the time-frequency domain, which can be used in noise reduction.



FIG. 3 is a block diagram of an exemplary audio processing system. The block diagram of FIG. 3 provides more detail for the audio processing system 230 in the block diagram of FIG. 2. Audio processing system 230 includes fast cosine transform (FCT) modules 302 and 304, beam former module 310, multiplicative gain expansion module 320, reverb module 330, mixer module 340, and zoom control module 350.


FCT modules 302 and 304 may receive acoustic signals from the audio device microphones and convert the acoustic signals to frequency-range sub-band signals. In some embodiments, FCT modules 302 and 304 are implemented as one or more modules that create one or more sub-band signals for each received microphone signal. FCT modules 302 and 304 receive an acoustic signal from each microphone contained in audio device 104. These received signals are illustrated as signals X1-XI, wherein X1 is a primary microphone signal and XI represents the remaining microphone signals. In some embodiments, the audio processing system 230 of FIG. 3 performs an audio zoom on a per-frame and per-sub-band basis.
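
For illustration only, a minimal sketch of a per-frame cosine-transform sub-band decomposition of one microphone signal; the frame length, hop size, window, and DCT type are assumptions, since the text does not specify the FCT internals.

```python
import numpy as np
from scipy.fft import dct

def to_subband_frames(x, frame_len=256, hop=128):
    """Split a 1-D microphone signal into windowed frames and return
    a (num_frames, frame_len) array of cosine-transform sub-band
    coefficients, one row per frame."""
    window = np.hanning(frame_len)
    num_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop: i * hop + frame_len] * window
                       for i in range(num_frames)])
    return dct(frames, type=2, norm="ortho", axis=1)
```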


In some embodiments, beam former module 310 receives the frequency sub-band signals as well as a zoom indication signal. The zoom indication is received from zoom control module 350. The zoom indication communicated by zoom indicator signal K may be generated in response to user input, analysis of a primary microphone signal or other acoustic signals received by audio device 104, a video zoom feature selection, or some other data. In operation, beam former module 310 receives sub-band signals, processes the sub-band signals to identify which signals are within a particular area to enhance (or “zoom”), and provides data for the selected signals as output to multiplicative gain expansion module 320. The output may include sub-band signals for the audio source within the area to enhance. Beam former module 310 also provides a gain factor to multiplicative gain expansion module 320. The gain factor may indicate whether multiplicative gain expansion module 320 should perform additional gain or reduction to the signals received from beam former module 310. In some embodiments, the gain factor is generated as an energy ratio based on the received microphone signals and components. The gain indication output by beam former module 310 may be a ratio of how much energy is reduced in the signal from the primary microphone versus the energy in the signals from the other microphones. Hence, the gain may be a boost or cancellation gain expansion factor. The gain factor is discussed in more detail below.


Beam former module 310 can be implemented as a null processing noise subtraction (NPNS) module, a multiplicative module, or a combination of these modules. When an NPNS module is used with the microphones to generate a beam and achieve beam forming, the beam is focused by narrowing the constraints of alpha and gamma. For a wider beam, the constraints may be made larger. Hence, a beam may be manipulated by putting a protective range around the preferred direction. Beam former module 310 may be implemented by a system described in U.S. Patent Application No. 61/325,764, entitled “Multi-Microphone Robust Noise Suppression System,” the disclosure of which is incorporated herein by reference. Additional techniques for reducing undesired audio components of a signal are discussed in U.S. patent application Ser. No. 12/693,998 (now U.S. Pat. No. 8,718,290), entitled “Adaptive Noise Reduction Using Level Cues,” the disclosure of which is incorporated herein by reference.


Multiplicative gain expansion module 320 receives the sub-band signals associated with audio sources within the selected beam, the gain factor from beam former module 310, and the zoom indicator signal. Multiplicative gain expansion module 320 applies a multiplicative gain based on the gain factor received. In effect, multiplicative gain expansion module 320 filters the beam former signal provided by beam former module 310.


The gain factor may be implemented as one of several different energy ratios. For example, the energy ratio may be the ratio of a noise-reduced signal to a primary acoustic signal received from a primary microphone, the ratio of a noise-reduced signal and a detected noise component within the primary microphone signal, the ratio of a noise-reduced signal and a secondary acoustic signal, or the ratio of a noise-reduced signal compared to the inter-level difference between a primary signal and another signal. The gain factors may be an indication of signal strength in a target direction versus all other directions. Put another way, the gain factor may be an indication of the multiplicative expansion that is due, and whether additional expansion or subtraction should be performed at the multiplicative gain expansion module 320. Multiplicative gain expansion module 320 outputs the modified signal and provides the signal to reverb module 330 (which may also function to de-reverb).
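
A minimal sketch of one of these variants, the ratio of noise-reduced energy to primary-microphone energy, is shown below; the function name and the framing convention are assumptions.

```python
import numpy as np

def energy_ratio_gain(noise_reduced, primary, eps=1e-12):
    """One possible gain factor: per-frame ratio of noise-reduced
    signal energy to primary-microphone signal energy. A value near
    1 means little was removed; a small value means heavy reduction."""
    nr_energy = np.sum(np.abs(noise_reduced) ** 2, axis=-1)
    pri_energy = np.sum(np.abs(primary) ** 2, axis=-1)
    return nr_energy / (pri_energy + eps)
```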


Reverb module 330 receives the sub-band signals output by multiplicative gain expansion module 320, as well as the microphone signals which were also received by beam former module 310, and performs reverberation or dereverberation to the sub-band signals output by multiplicative gain expansion module 320. Reverb module 330 may adjust a ratio of direct energy to remaining energy within a signal based on the zoom control indicator provided by zoom control module 350.


Adjusting the reverb for a signal may involve adjusting the energy of different components of the signal. An audio signal has several components in a frequency domain, including a direct component, early reflections, and a tail component. The direct component typically has the highest energy level, followed by the somewhat lower energy levels of the reflections within the signal. The signal also includes a tail, which may include noise and other low-energy data or low-energy audio. A reverberation is defined as the reflections of the direct audio component. Hence, a signal with many reflections over a broad frequency range produces a more noticeable reverberation, while a signal with fewer reflection components has a smaller reverberation component.


Typically, the further away a listener is from an audio source, the larger the reverberation in the signal; the closer a listener is to an audio source, the smaller the intensity of the reverberation signal (reflection components). Hence, based on the zoom indicator received from zoom control module 350, reverb module 330 may adjust the reverberation components in the signal received from multiplicative gain expansion module 320. If the received zoom indicator indicates that a zoom-in operation is to be performed on the audio, the reverberation is decreased by minimizing the reflection components of the received signal. If the zoom indicator indicates that a zoom-out is to be performed, the early reflection components are boosted to make it appear as if there is additional reverberation within the signal. After adjusting the reverberation of the received signal, reverb module 330 provides the modified signal to mixer module 340.
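
As a hedged sketch of this zoom-driven re-balancing, assuming the direct and reflection components have already been separated (the text does not say how the gain is derived), one might scale the reflections as follows; the ±6 dB-per-step mapping is illustrative only.

```python
def adjust_reverb(direct, reflections, zoom):
    """Re-balance direct vs. reflection energy for a zoom gesture.

    direct, reflections: separated signal components (same shape).
    zoom: > 0 to zoom in (less reverb), < 0 to zoom out (more reverb).
    """
    # Zooming in attenuates reflections; zooming out boosts the early
    # reflections so the source sounds farther away.
    reflection_gain = 10.0 ** (-zoom * 6.0 / 20.0)  # ±6 dB per zoom step
    return direct + reflection_gain * reflections
```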


The mixer module 340 receives the reverberation adjusted signal and mixes the signal with the signal from the primary microphone. In some embodiments, mixer module 340 increases the energy of the signal appropriately when there is audio present in the frame and decreases it where there is little audio energy present in the frame.



FIG. 4 is a block diagram of an exemplary beam former module. The beam former module 310 may be implemented per tap (i.e., per sub-band). Beam former module 310 receives FCT output signals for a first microphone (such as a primary microphone) and a second microphone. The first microphone FCT signal is processed by module 410 according to the function:

b_ω · e^(jωτ_f)

to generate a first differential array with parameters.


The secondary microphone FCT signal is processed by module 420 according to the function:

a_ω · e^(jωτ_b)


to generate a second differential array with parameters. Further details regarding the generation of exemplary first and second differential arrays are described in U.S. patent application Ser. No. 11/699,732, entitled “System and Method for Utilizing Omni-Directional Microphones for Speech Enhancement,” now U.S. Pat. No. 8,194,880, issued Jun. 5, 2012, the disclosure of which is incorporated herein by reference.


The output of module 410 is then subtracted from the secondary microphone FCT signal at combiner 440, and the output of module 420 is then subtracted from the primary microphone FCT signal at combiner 430. A cardioid signal C_f is output from combiner 430 and provided to module 450, where the following function is applied:

Log(|C_f|²).


A cardioid signal C_b is output from combiner 440 and provided to module 460, where the following function is applied:

Log(|C_b|²).


The difference of the outputs of modules 450 and 460 is determined by element 470 and output as an ILD cue. The ILD cue may be output by beam former module 310 to a post filter (for example, a filter implemented by multiplicative gain expansion module 320).
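
A minimal sketch of this FIG. 4 signal flow for one sub-band is given below; the weights a_ω and b_ω and the delays τ_f and τ_b are calibration parameters whose values are not given in the text, so they appear here as plain function arguments.

```python
import numpy as np

def ild_cue(x1, x2, omega, a_w, b_w, tau_f, tau_b):
    """ILD cue for one sub-band, following the FIG. 4 structure.

    x1, x2: complex FCT coefficients from the primary and secondary
    microphones for this sub-band; omega is the sub-band frequency.
    a_w, b_w, tau_f, tau_b: weights and delays (calibration values).
    """
    eps = 1e-12
    # Combiner 430: weighted, delayed secondary subtracted from primary.
    c_f = x1 - a_w * np.exp(1j * omega * tau_b) * x2
    # Combiner 440: weighted, delayed primary subtracted from secondary.
    c_b = x2 - b_w * np.exp(1j * omega * tau_f) * x1
    # Element 470: difference of log energies (modules 450 and 460).
    return np.log(np.abs(c_f) ** 2 + eps) - np.log(np.abs(c_b) ** 2 + eps)
```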



FIG. 5 is a flow chart of an exemplary method for performing an audio zoom. An acoustic signal is received from one or more sources at step 510. In some embodiments, the acoustic signals are received through one or more microphones on audio device 104. For example, acoustic signals from audio sources 112-116 and reflections 128-129 are received through microphones 106 and 108 of audio device 104.


A zoom indication is then received for a spatial area at step 520. In some embodiments, the zoom indication is received from a user or determined based on other data. For example, the zoom indication is received from a user via a video zoom setting, pointing an audio device in a particular direction, an input for video zoom, or in some other manner.


Acoustic signal component energy levels are enhanced based on the zoom indication at step 530. In some embodiments, acoustic signal component energy levels are enhanced by increasing the energy levels for audio source sub-band signals that originate from a source device within a selected beam area. Audio signals from a device outside a selected beam area are de-emphasized. Enhancing acoustic signal component energy levels is discussed in more detail below with respect to the method of FIG. 6.


Reverberation signal components associated with a position inside the spatial area are adjusted based on the received indication at step 540. As discussed above, the adjustments may include modifying the ratio of a direct component with respect to reflection components for the particular signal. When a zoom in function is to be performed, reverberation should be decreased by increasing the ratio of the direct component to the reflection components in the audio signal. When a zoom out function is performed for the audio signal, the direct component is reduced with respect to the reflection components to decrease the ratio of direct to reflection components of the audio signal.


A modulated gain is applied to the signal component at step 550. The gain may be applied by mixing a reverb processed acoustic signal with a primary acoustic signal (or another audio signal received by audio device 104). The mixed signal that has been processed by audio zoom is output at step 560.


As discussed above, sub-band signals are enhanced based on a zoom indication. FIG. 6 is a flow chart of an exemplary method for enhancing acoustic signal components. In some embodiments, the method in FIG. 6 provides more detail for step 530 of the method in FIG. 5. An audio source is detected in the direction of a beam at step 610. This detection may be performed by a null-processing noise subtraction mechanism or some other module that is able to identify a spatial position of a source based on audio signals received by two or more microphones.


Acoustic signal sources located outside the spatial area are attenuated at step 620. In various embodiments, the acoustic sources outside the spatial area include certain audio sources (e.g., 112 in FIG. 1) and reflected audio signals, such as reflections 128 and 129. Adaptation constraints are then used to steer the beam based on the zoom indication at step 630. In some embodiments, the adaptation constraints include the alpha and gamma constraints used in a null processing noise suppression system. The adaptation constraints may also be derived from multiplicative expansion or from selection of a region around a preferred direction based on a beam pattern.


Energy ratios are then determined at step 640. The energy ratios may be used to derive multiplicative masks that boost or reduce a beam former cancellation gain for signal components. Next, multiplicative masks are generated based on energy ratios at step 650. Generating multiplicative masks based on an energy ratio is discussed in more detail below with respect to the method of FIG. 7.



FIG. 7 is a flow chart of an exemplary method for generating a multiplicative mask. The method of FIG. 7 provides more detail for step 650 in the method of FIG. 6. Differential arrays are generated from microphone signals at step 710. The arrays may be generated as part of beam former module 310. Next, a beam pattern is generated from the differential arrays at step 720. The beam pattern may be a cardioid pattern generated based at least in part on the differential output signals. Energy ratios are then generated from the beam patterns at step 730. The energy ratios may be generated from any of a combination of signals. Once generated, an ILD map may be generated per frequency from the energy ratios. An ILD range corresponding to the desired selection may be selected. An ILD window may then be applied to the map by boosting the signal components within the window and attenuating the signal components positioned outside the window. A filter, such as a post filter, may be derived from the energy ratio at step 740.
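
For illustration, a hedged sketch of the ILD-window step: cells whose ILD falls inside the selected window are boosted and the rest are attenuated. The window bounds and the boost/cut gains are assumptions, not values from the text.

```python
import numpy as np

def ild_window_mask(ild_map, lo, hi, boost=2.0, cut=0.25):
    """Multiplicative mask from a per-frequency ILD map: cells whose
    ILD falls inside the selected [lo, hi] window are boosted, all
    others are attenuated. Gains here are illustrative."""
    inside = (ild_map >= lo) & (ild_map <= hi)
    return np.where(inside, boost, cut)
```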


The above described modules, including those discussed with respect to FIG. 3, may include instructions stored in a storage media such as a machine readable medium (e.g., computer readable medium). These instructions may be retrieved and executed by the processor 220 to perform the functionality discussed herein. Some examples of instructions include software, program code, and firmware. Some examples of storage media include memory devices and integrated circuits.



FIG. 8 is a block diagram illustrating an audio processing system 800, according to another example embodiment. The example audio processing system 800 includes a source estimation subsystem 830 coupled to various elements of an example AZA subsystem. The example AZA subsystem includes limiters 802a, 802b, . . . , and 802n, FCT modules 804a, 804b, . . . , and 804n, analysis module 806, zoom control module 810, signal modifier 812, element 818, and a limiter 820. The source estimation subsystem 830 may include a source direction estimator (SDE) module 808, also referred to as a target estimator, a gain module 816, and an automatic gain control (AGC) module 814. The example audio processing system 800 processes acoustic audio signals from microphones 106a, 106b, . . . , and 106n.


In various exemplary embodiments, the SDE module 808 is operable to localize a source of sound. The SDE module 808 may generate cues based on correlation of phase plots between different microphone inputs. Based on the correlation of the phase plots, the example SDE module 808 can compute a vector of salience estimates at different angles. Based on the salience estimates, the SDE module 808 can determine a direction of the source. In other words, according to various embodiments, a peak in the vector of salience estimates indicates a source in a particular direction. At the same time, sources of a diffused nature, i.e., non-directional sources, may be represented by poor salience estimates at all angles. Various embodiments may rely upon the cues (estimates of salience) to improve the performance of an existing directional audio solution, which is carried out by the analysis module 806, signal modifier 812, and zoom control module 810.


According to an example embodiment, estimates of salience are used to localize the angle of the source in the range of 0 to 360 degrees in a plane parallel to the ground, when, for example, the audio device 104 is placed on a table top. The estimates of salience can be used to attenuate/amplify the signals at different angles as required by the customer/user.
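
The text does not give the salience formula. The sketch below shows one conventional way to obtain a salience-versus-angle vector from inter-microphone phase: a phase-normalized cross-spectrum is steered over candidate angles (a GCC-PHAT-style estimate). The array geometry, the frequencies, and the normalization are assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def salience_vector(spec1, spec2, freqs, mic_offset, angles_deg):
    """Salience estimate per candidate angle from one microphone pair.

    spec1, spec2: complex spectra (sub_bands,) for the two microphones.
    freqs: center frequency in Hz of each sub-band (numpy array).
    mic_offset: 2-D vector from mic 1 to mic 2, in meters.
    angles_deg: candidate source angles in the horizontal plane.
    """
    cross = spec1 * np.conj(spec2)
    cross /= np.abs(cross) + 1e-12          # phase-only (PHAT) weighting
    saliences = []
    for ang in np.deg2rad(angles_deg):
        direction = np.array([np.cos(ang), np.sin(ang)])
        delay = np.dot(mic_offset, direction) / SPEED_OF_SOUND
        steering = np.exp(2j * np.pi * freqs * delay)
        saliences.append(np.real(np.sum(cross * steering)))
    # A sharp peak suggests a directional source at that angle;
    # uniformly low values suggest diffuse (non-directional) sound.
    return np.asarray(saliences)
```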


In various embodiments, the SDE module 808 is configured to operate in two or more modes. The modes of operation can include “normal,” “noisy,” and “simultaneous talkers.” The characterization of these modes is driven by an SDE salience parameter.


Normal Mode

A “normal” mode of operation is defined by a single directional speech source without the presence of any kind of strong speech distractors, with or without the presence of noise. A vector of salience estimates in such a case can be characterized by a single peak (above a salience threshold). The single peak can indicate the presence of a single source of sound. The location of the peak in the vector of salience estimates may characterize the angle of the source. In such cases, both a diffused source detector and a simultaneous talker detector may be set to a “no” state. Based on these states, the target estimator, in various embodiments, drives the level of suppression/amplification as desired by the user on a per-angle basis.


In some embodiments, the target estimator generates a mapping of angle to relative levels of attenuation in the AZA subsystem. For example, if a range of angles from 240 to 270 degrees requires 10 dB of incremental suppression relative to the AZA subsystem's performance, the target estimator contains an array with 0 dB throughout except for the entries between 240 and 270 degrees.
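
A minimal sketch of such a per-angle suppression array for this example; the angle indexing and sign convention are assumptions.

```python
import numpy as np

# Illustrative per-angle suppression table matching the example above:
# 0 dB everywhere except 10 dB of extra suppression at 240-270 degrees.
suppression_db = np.zeros(360)
suppression_db[240:271] = -10.0

def attenuation_for_angle(angle_deg):
    """Look up the relative attenuation cue for an estimated angle."""
    return suppression_db[int(round(angle_deg)) % 360]
```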


Although an immediate relative suppression level of 10 dB is achievable on detection, in a real-time speech system such suppression may cause audible distortion to a listener due to sudden jumps in signal levels. In some embodiments, to alleviate this distortion problem, the AGC module 814 can control the rate of roll-off by means of attack and release time constants. A smooth roll-off can effectively stabilize the speech system without audible distortions in the audio. In some embodiments, noise, if present along with the directional speech, is alleviated by the AZA subsystem.
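
As an illustrative sketch of attack/release smoothing (the text gives no formula, so the one-pole form and the coefficient values are assumptions), one smoothing step toward a target suppression level might look like this:

```python
def smooth_gain(current_db, target_db, attack=0.9, release=0.99):
    """One step of attack/release smoothing toward a target gain.

    The attack coefficient is used when suppression is increasing
    (gain falling), the release coefficient when it is recovering;
    values closer to 1.0 give slower, smoother transitions.
    """
    coeff = attack if target_db < current_db else release
    return coeff * current_db + (1.0 - coeff) * target_db
```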


Noisy Mode

A noisy mode of operation can be characterized by a diffused noise source with no directional speech. The noisy mode can result in poor salience estimates for all angles. Since there is no directional information for such a source, the signal can be processed solely by the AZA subsystem. In some embodiments, interactions between the noisy mode and the normal mode of operation are handled smoothly, without sudden switchovers, to avoid pumping or other gain-related artifacts. For a smooth handover, the target estimator can provide a target of 0 dB to the AGC module 814. By appropriately handling the attack and release times, a smooth handover can be achieved. It should be noted, however, that the attack and release times in the noisy mode are different from those used in the normal mode.


Simultaneous Talkers Mode

A simultaneous talkers mode is characterized by multiple simultaneous talkers/side distractors, either with or without noise. The salience vector for the simultaneous talkers mode can be characterized by multiple peaks (above a salience threshold). The simultaneous talkers mode can be handled in a way similar to the noisy mode: when the SDE module operates in the simultaneous talkers mode, acoustic signals from the microphones are processed solely by the AZA subsystem. In various embodiments, a handover between the above modes can be carried out in a graceful manner with the help of the AGC subsystem.
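
A hedged sketch of this peak-based mode selection, assuming a simple local-maximum peak count over the salience vector and an arbitrary threshold:

```python
import numpy as np

def classify_mode(salience, threshold):
    """Pick an operating mode from the salience vector: no peak above
    the threshold suggests diffuse noise, one peak a single talker,
    and several peaks simultaneous talkers."""
    above = salience > threshold
    # Count local maxima that exceed the threshold.
    peaks = np.sum(above[1:-1] & (salience[1:-1] > salience[:-2])
                   & (salience[1:-1] > salience[2:]))
    if peaks == 0:
        return "noisy"
    return "normal" if peaks == 1 else "simultaneous talkers"
```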


Various embodiments of the technology described herein, having the AZA subsystem augmented with a source estimation subsystem, can avoid the microphone sealing problem by ignoring any inter-microphone signal level differences. Various embodiments focus instead on the time-of-arrival/phase cues between the microphones. However, it should be noted that even though various embodiments can be insensitive to microphone sealing, the underlying AZA subsystem may still be sensitive to it, and therefore the overall system performance may depend on the microphone sealing. In some embodiments, to alleviate the microphone sealing problem, the AZA subsystem may be tuned based on the sealing characteristics of the microphones utilized, to reduce the sensitivity to microphone sealing. Further details regarding exemplary tuning of the AZA subsystem may be found in U.S. patent application Ser. No. 12/896,725, filed Oct. 1, 2010, incorporated by reference herein.


Various embodiments of the present technology may utilize the fact that SDE salience varies very little with changes in the distance between a talker/speaker and an audio device when the distance is in the range of 0.5-2 m and the speaker's mouth is around 30 cm above the audio device. This can make the audio processing system 800 more robust to distance variance and can result in even, similar performance for a talker speaking at these distances. In some embodiments, the AZA subsystem may be tuned to take full advantage of this robustness to distance.


The target estimator block (also referred to as SDE module) 808 can provide relative levels of suppression based on the angle of arrival of sounds, independently of the AZA subsystem. In some embodiments, the target estimator block can be controlled independently, without any interactions with other subsystems. This independently controllable (e.g., “island”) architecture can empower field-tuning engineers to match the performance desired by a customer/user.


As described with regard to various embodiments, the array of target estimators used during the “normal” mode of operation provides a powerful tool: the above architecture can be implemented by manipulating the per-angle suppression level array in the target estimator block.



FIG. 9 is a flow chart showing steps of a method 900 for improving performance of a directional audio capture system, according to an example embodiment. In block 910, the example method 900 includes correlating phase plots of at least two audio inputs. In some embodiments, the audio inputs can be captured by at least two microphones having different qualities of sealing.


In block 920, the example method 900 allows generating, based on the correlation, estimates of salience at different directional angles to localize at least one direction associated with at least one source of a sound. In some embodiments, the estimates of salience include a vector of saliences at directional angles from 0 to 360 degrees in a plane parallel to the ground.


In block 930, the example method 900 includes determining cues based on the estimates of salience. In block 940, the example method 900 includes providing the cues, which are based on the estimates of salience, to a directional audio capture system.


In further embodiments, the example method 900 includes determining, based on the estimates of salience (e.g., the absence or presence of one or more peaks in the estimates of salience), a mode from a plurality of operational modes. In certain embodiments, the operational modes include a “normal” mode characterized by a single directional speech source, a “simultaneous talkers” mode characterized by the presence of at least two directional speech sources, and a “noisy” mode characterized by a diffused noise source with no directional speech.


In block 960, the example method 900 includes configuring, based on the determined mode, the directional audio capture system.


In block 970, the example method 900 includes determining, based on the estimates of salience and the determined mode, other cues including at least levels of attenuation.


In block 980, the example method 900 includes controlling a rate of switching between modes from the plurality of the operational modes in real time by applying attack and release time constants.



FIG. 10 illustrates an exemplary computer system 1000 that may be used to implement some embodiments of the present disclosure. The computer system 1000 of FIG. 10 may be implemented in the contexts of the likes of computing systems, networks, servers, or combinations thereof. The computer system 1000 of FIG. 10 includes one or more processor units 1010 and main memory 1020. Main memory 1020 stores, in part, instructions and data for execution by processor units 1010. Main memory 1020 stores the executable code when in operation, in this example. The computer system 1000 of FIG. 10 further includes a mass data storage 1030, portable storage device 1040, output devices 1050, user input devices 1060, a graphics display system 1070, and peripheral devices 1080.


The components shown in FIG. 10 are depicted as being connected via a single bus 1090. The components may be connected through one or more data transport means. Processor unit 1010 and main memory 1020 are connected via a local microprocessor bus, and the mass data storage 1030, peripheral device(s) 1080, portable storage device 1040, and graphics display system 1070 are connected via one or more input/output (I/O) buses.


Mass data storage 1030, which can be implemented with a magnetic disk drive, solid state drive, or an optical disk drive, is a non-volatile storage device for storing data and instructions for use by processor unit 1010. Mass data storage 1030 stores the system software for implementing embodiments of the present disclosure for purposes of loading that software into main memory 1020.


Portable storage device 1040 operates in conjunction with a portable non-volatile storage medium, such as a flash drive, floppy disk, compact disk, digital video disc, or Universal Serial Bus (USB) storage device, to input and output data and code to and from the computer system 1000 of FIG. 10. The system software for implementing embodiments of the present disclosure is stored on such a portable medium and input to the computer system 1000 via the portable storage device 1040.


User input devices 1060 can provide a portion of a user interface. User input devices 1060 may include one or more microphones, an alphanumeric keypad, such as a keyboard, for inputting alphanumeric and other information, or a pointing device, such as a mouse, a trackball, stylus, or cursor direction keys. User input devices 1060 can also include a touchscreen. Additionally, the computer system 1000 as shown in FIG. 10 includes output devices 1050. Suitable output devices 1050 include speakers, printers, network interfaces, and monitors.


Graphics display system 1070 includes a liquid crystal display (LCD) or other suitable display device. Graphics display system 1070 is configurable to receive textual and graphical information and process the information for output to the display device.


Peripheral devices 1080 may include any type of computer support device to add additional functionality to the computer system.


The components provided in the computer system 1000 of FIG. 10 are those typically found in computer systems that may be suitable for use with embodiments of the present disclosure and are intended to represent a broad category of such computer components that are well known in the art. Thus, the computer system 1000 of FIG. 10 can be a personal computer (PC), handheld computer system, telephone, mobile computer system, workstation, tablet, phablet, mobile phone, server, minicomputer, mainframe computer, wearable, or any other computer system. The computer may also include different bus configurations, networked platforms, multi-processor platforms, and the like. Various operating systems may be used, including UNIX, LINUX, WINDOWS, MAC OS, PALM OS, QNX, ANDROID, IOS, CHROME, TIZEN, and other suitable operating systems.


The processing for various embodiments may be implemented in software that is cloud-based. In some embodiments, the computer system 1000 is implemented as a cloud-based computing environment, such as a virtual machine operating within a computing cloud. In other embodiments, the computer system 1000 may itself include a cloud-based computing environment, where the functionalities of the computer system 1000 are executed in a distributed fashion. Thus, the computer system 1000, when configured as a computing cloud, may include pluralities of computing devices in various forms, as will be described in greater detail below.


In general, a cloud-based computing environment is a resource that typically combines the computational power of a large grouping of processors (such as within web servers) and/or that combines the storage capacity of a large grouping of computer memories or storage devices. Systems that provide cloud-based resources may be utilized exclusively by their owners, or such systems may be accessible to outside users who deploy applications within the computing infrastructure to obtain the benefit of large computational or storage resources.


The cloud may be formed, for example, by a network of web servers that comprise a plurality of computing devices, such as the computer system 1000, with each server (or at least a plurality thereof) providing processor and/or storage resources. These servers may manage workloads provided by multiple users (e.g., cloud resource customers or other users). Typically, each user places workload demands upon the cloud that vary in real-time, sometimes dramatically. The nature and extent of these variations typically depends on the type of business associated with the user.


The present technology is described above with reference to example embodiments. Therefore, other variations upon the example embodiments are intended to be covered by the present disclosure.

Claims
  • 1. A method for improving performance of a directional audio capture system, the method comprising: correlating phase plots of at least two audio inputs; generating, based on the correlation, estimates of salience at different directional angles to localize at least one direction associated with at least one source of a sound; determining cues based on the estimates of salience; providing the cues to the directional audio capture system; and determining, based on the estimates of salience, a mode selected from a plurality of operational modes, the plurality of operational modes including a first operational mode wherein the at least one source of sound includes a single directional speech source.
  • 2. The method of claim 1, wherein the cues are used by the directional audio capture system to attenuate or amplify the at least two audio inputs at the different directional angles.
  • 3. The method of claim 1, wherein the cues include at least attenuation levels of the different directional angles.
  • 4. The method of claim 1, wherein the estimates of salience include a vector of saliences at directional angles from 0 to 360 in a plane parallel to a ground.
  • 5. The method of claim 1, wherein generating the cues includes mapping the different directional angles to relative levels of attenuation for the directional audio capture system.
  • 6. The method of claim 5, further comprising controlling the rate of changing the levels of attenuation in a real time by attack and release time constants to avoid sound artifacts.
  • 7. The method of claim 1, wherein the plurality of operational modes further includes a second operational mode wherein the at least one source of sound includes at least two single directional speech sources, and a third operational mode wherein the at least one source of sound includes a diffused noise source having no directional speech.
  • 8. The method of claim 1, wherein determining the mode is based on absence or presence of one or more peaks in the estimates of salience.
  • 9. The method of claim 8, further comprising configuring, based on the determined mode, the directional audio capture system.
  • 10. The method of claim 1, further comprising controlling a rate of switching between modes from the plurality of the operational modes in a real time by applying attack and release time constants.
  • 11. The method of claim 1, wherein the at least two audio inputs are captured by at least two microphones.
  • 12. The method of claim 11, wherein one of the at least two microphones is sealed better than other ones of the at least two microphones.
  • 13. A system for improving performance of a directional audio capture system, the system comprising: at least one processor; and a memory communicatively coupled with the at least one processor, the memory storing instructions, which when executed by the at least one processor performs a method comprising: correlating phase plots of at least two audio inputs; generating, based on the correlation, estimates of salience at different directional angles to localize at least one direction associated with at least one source of a sound; determining cues based on the estimates of salience; providing the cues to the directional audio capture system; and determining, based on absence or presence of one or more peaks in the estimates of salience, a mode selected from a plurality of operational modes, the plurality of operational modes including a first operational mode wherein the at least one source of sound includes a single directional speech source.
  • 14. The system of claim 13, wherein the cues are used by the directional audio capture system to attenuate or amplify the at least two audio inputs at the different directional angles.
  • 15. The system of claim 13, wherein the cues include at least attenuation levels for the different directional angles.
  • 16. The system of claim 13, wherein generating the cues includes mapping the different directional angles to relative levels of attenuation for the directional audio capture system.
  • 17. The system of claim 13, wherein the plurality of operational modes further includes a second operational mode wherein the at least one source of sound includes at least two single directional speech sources, and a third operational mode wherein the at least one source of sound includes a diffused noise source having no directional speech.
  • 18. The system of claim 17, further comprising: configuring, based on the determined mode, the directional audio capture system, and controlling a rate of switching between modes in a real time by applying attack and release time constants.
  • 19. A non-transitory computer-readable storage medium having embodied thereon instructions, which when executed by at least one processor, perform steps of a method, the method comprising: correlating phase plots of at least two audio inputs; generating, based on the correlation, estimates of salience at different directional angles to localize at least one direction associated with at least one source of a sound; determining cues based on the estimates of salience; providing the cues to the directional audio capture system; and determining, based on absence or presence of one or more peaks in the estimates of salience, a mode selected from a plurality of operational modes, the plurality of operational modes including a first operational mode wherein the at least one source of sound includes a single directional speech source.
CROSS-REFERENCE TO RELATED APPLICATIONS

The present application is a Continuation-in-Part of U.S. patent application Ser. No. 12/896,725, filed Oct. 1, 2010, which claims the benefit of U.S. Provisional Application No. 61/266,131, filed Dec. 2, 2009; the present application also claims the benefit of U.S. Provisional Application No. 62/098,247, filed Dec. 30, 2014. The subject matter of the aforementioned applications is incorporated herein by reference for all purposes.

US Referenced Citations (366)
Number Name Date Kind
4025724 Davidson, Jr. et al. May 1977 A
4137510 Iwahara Jan 1979 A
4802227 Elko et al. Jan 1989 A
4969203 Herman Nov 1990 A
5115404 Lo et al. May 1992 A
5204906 Nohara et al. Apr 1993 A
5224170 Waite, Jr. Jun 1993 A
5230022 Sakata Jul 1993 A
5289273 Lang Feb 1994 A
5400409 Linhard Mar 1995 A
5440751 Santeler et al. Aug 1995 A
5544346 Amini et al. Aug 1996 A
5555306 Gerzon Sep 1996 A
5583784 Kapust et al. Dec 1996 A
5598505 Austin et al. Jan 1997 A
5625697 Bowen et al. Apr 1997 A
5682463 Allen et al. Oct 1997 A
5715319 Chu Feb 1998 A
5734713 Mauney et al. Mar 1998 A
5774837 Yeldener et al. Jun 1998 A
5796850 Shiono et al. Aug 1998 A
5806025 Vis et al. Sep 1998 A
5819215 Dobson et al. Oct 1998 A
5850453 Klayman et al. Dec 1998 A
5937070 Todter et al. Aug 1999 A
5956674 Smyth et al. Sep 1999 A
5974379 Hatanaka et al. Oct 1999 A
5974380 Smyth et al. Oct 1999 A
5978567 Rebane et al. Nov 1999 A
5978824 Ikeda Nov 1999 A
5991385 Dunn et al. Nov 1999 A
6011853 Koski et al. Jan 2000 A
6035177 Moses et al. Mar 2000 A
6065883 Herring et al. May 2000 A
6084916 Ott Jul 2000 A
6104993 Ashley Aug 2000 A
6144937 Ali Nov 2000 A
6188769 Jot et al. Feb 2001 B1
6202047 Ephraim et al. Mar 2001 B1
6219408 Kurth Apr 2001 B1
6226616 You et al. May 2001 B1
6240386 Thyssen et al. May 2001 B1
6263307 Arslan et al. Jul 2001 B1
6281749 Klayman et al. Aug 2001 B1
6327370 Killion et al. Dec 2001 B1
6377637 Berdugo Apr 2002 B1
6381284 Strizhevskiy Apr 2002 B1
6381469 Wojick Apr 2002 B1
6389142 Hagen et al. May 2002 B1
6421388 Parizhsky et al. Jul 2002 B1
6477489 Lockwood et al. Nov 2002 B1
6480610 Fang et al. Nov 2002 B1
6490556 Graumann et al. Dec 2002 B1
6496795 Malvar Dec 2002 B1
6504926 Edelson et al. Jan 2003 B1
6584438 Manjunath et al. Jun 2003 B1
6615169 Ojala et al. Sep 2003 B1
6717991 Gustafsson et al. Apr 2004 B1
6748095 Goss Jun 2004 B1
6768979 Menendez-Pidal et al. Jul 2004 B1
6772117 Laurila et al. Aug 2004 B1
6810273 Mattila et al. Oct 2004 B1
6862567 Gao Mar 2005 B1
6873837 Yoshioka et al. Mar 2005 B1
6882736 Dickel et al. Apr 2005 B2
6907045 Robinson et al. Jun 2005 B1
6931123 Hughes Aug 2005 B1
6980528 LeBlanc et al. Dec 2005 B1
7010134 Jensen Mar 2006 B2
RE39080 Johnston Apr 2006 E
7035666 Silberfenig et al. Apr 2006 B2
7054809 Gao May 2006 B1
7058572 Nemer Jun 2006 B1
7058574 Taniguchi et al. Jun 2006 B2
7103176 Rodriguez et al. Sep 2006 B2
7145710 Holmes Dec 2006 B2
7190775 Rambo Mar 2007 B2
7221622 Matsuo et al. May 2007 B2
7245710 Hughes Jul 2007 B1
7254242 Ise et al. Aug 2007 B2
7283956 Ashley et al. Oct 2007 B2
7366658 Moogi et al. Apr 2008 B2
7383179 Alves et al. Jun 2008 B2
7433907 Nagai et al. Oct 2008 B2
7447631 Truman et al. Nov 2008 B2
7472059 Huang Dec 2008 B2
7548791 Johnston Jun 2009 B1
7555434 Nomura et al. Jun 2009 B2
7562140 Clemm et al. Jul 2009 B2
7590250 Ellis et al. Sep 2009 B2
7617099 Yang et al. Nov 2009 B2
7617282 Han Nov 2009 B2
7657427 Jelinek Feb 2010 B2
7664495 Bonner et al. Feb 2010 B1
7685132 Hyman Mar 2010 B2
7773741 LeBlanc et al. Aug 2010 B1
7791508 Wegener Sep 2010 B2
7796978 Jones et al. Sep 2010 B2
7899565 Johnston Mar 2011 B1
7970123 Beaucoup Jun 2011 B2
8032369 Manjunath et al. Oct 2011 B2
8036767 Soulodre Oct 2011 B2
8046219 Zurek et al. Oct 2011 B2
8060363 Ramo et al. Nov 2011 B2
8098844 Elko Jan 2012 B2
8150065 Solbach et al. Apr 2012 B2
8175291 Chan et al. May 2012 B2
8189429 Chen et al. May 2012 B2
8194880 Avendano Jun 2012 B2
8194882 Every et al. Jun 2012 B2
8195454 Muesch Jun 2012 B2
8204253 Solbach Jun 2012 B1
8229137 Romesburg Jul 2012 B2
8233352 Beaucoup Jul 2012 B2
8311817 Murgia et al. Nov 2012 B2
8345890 Avendano et al. Jan 2013 B2
8363823 Santos Jan 2013 B1
8363850 Amada Jan 2013 B2
8369973 Risbo Feb 2013 B2
8467891 Huang et al. Jun 2013 B2
8473287 Every et al. Jun 2013 B2
8531286 Friar et al. Sep 2013 B2
8606249 Goodwin Dec 2013 B1
8615392 Goodwin Dec 2013 B1
8615394 Avendano et al. Dec 2013 B1
8639516 Lindahl et al. Jan 2014 B2
8694310 Taylor Apr 2014 B2
8705759 Wolff et al. Apr 2014 B2
8744844 Klein Jun 2014 B2
8750526 Santos et al. Jun 2014 B1
8774423 Solbach Jul 2014 B1
8798290 Choi et al. Aug 2014 B1
8831937 Murgia et al. Sep 2014 B2
8880396 Laroche et al. Nov 2014 B1
8903721 Cowan Dec 2014 B1
8908882 Goodwin et al. Dec 2014 B2
8934641 Avendano et al. Jan 2015 B2
8989401 Ojanpera Mar 2015 B2
9007416 Murgia et al. Apr 2015 B1
9094496 Teutsch Jul 2015 B2
9185487 Solbach et al. Nov 2015 B2
9197974 Clark et al. Nov 2015 B1
9210503 Avendano et al. Dec 2015 B2
9247192 Lee et al. Jan 2016 B2
20010041976 Taniguchi et al. Nov 2001 A1
20020041678 Basburg-Ertem et al. Apr 2002 A1
20020071342 Marple et al. Jun 2002 A1
20020097884 Cairns Jul 2002 A1
20020097885 Birchfield Jul 2002 A1
20020138263 Deligne et al. Sep 2002 A1
20020160751 Sun et al. Oct 2002 A1
20020177995 Walker Nov 2002 A1
20030023430 Wang et al. Jan 2003 A1
20030056220 Thornton et al. Mar 2003 A1
20030093279 Malah et al. May 2003 A1
20030099370 Moore May 2003 A1
20030118200 Beaucoup et al. Jun 2003 A1
20030138116 Jones Jul 2003 A1
20030147538 Elko Aug 2003 A1
20030177006 Ichikawa et al. Sep 2003 A1
20030179888 Burnett et al. Sep 2003 A1
20030228019 Eichler et al. Dec 2003 A1
20040001450 He et al. Jan 2004 A1
20040066940 Amir Apr 2004 A1
20040076190 Goel et al. Apr 2004 A1
20040083110 Wang Apr 2004 A1
20040102967 Furuta et al. May 2004 A1
20040133421 Burnett et al. Jul 2004 A1
20040145871 Lee Jul 2004 A1
20040165736 Hetherington et al. Aug 2004 A1
20040184882 Cosgrove Sep 2004 A1
20050008169 Muren et al. Jan 2005 A1
20050008179 Quinn Jan 2005 A1
20050043959 Stemerdink et al. Feb 2005 A1
20050080616 Leung et al. Apr 2005 A1
20050096904 Taniguchi et al. May 2005 A1
20050114123 Lukac et al. May 2005 A1
20050143989 Jelinek Jun 2005 A1
20050213739 Rodman et al. Sep 2005 A1
20050240399 Makinen Oct 2005 A1
20050249292 Zhu Nov 2005 A1
20050261896 Schuijers et al. Nov 2005 A1
20050267369 Lazenby et al. Dec 2005 A1
20050276363 Joublin et al. Dec 2005 A1
20050281410 Grosvenor et al. Dec 2005 A1
20050283544 Yee Dec 2005 A1
20060063560 Herle Mar 2006 A1
20060092918 Talalai May 2006 A1
20060100868 Hetherington et al. May 2006 A1
20060122832 Takiguchi et al. Jun 2006 A1
20060136203 Ichikawa Jun 2006 A1
20060198542 Benjelloun Touimi et al. Sep 2006 A1
20060206320 Li Sep 2006 A1
20060224382 Taneda Oct 2006 A1
20060242071 Stebbings Oct 2006 A1
20060270468 Hui et al. Nov 2006 A1
20060282263 Vos et al. Dec 2006 A1
20060293882 Giesbrecht et al. Dec 2006 A1
20070003097 Langberg et al. Jan 2007 A1
20070005351 Sathyendra et al. Jan 2007 A1
20070025562 Zalewski et al. Feb 2007 A1
20070033020 (Kelleher) Francois et al. Feb 2007 A1
20070033494 Wenger et al. Feb 2007 A1
20070038440 Sung et al. Feb 2007 A1
20070041589 Patel et al. Feb 2007 A1
20070058822 Ozawa Mar 2007 A1
20070064817 Dunne et al. Mar 2007 A1
20070067166 Pan et al. Mar 2007 A1
20070081075 Canova, Jr. et al. Apr 2007 A1
20070088544 Acero et al. Apr 2007 A1
20070100612 Ekstrand et al. May 2007 A1
20070127668 Ahya et al. Jun 2007 A1
20070136056 Moogi et al. Jun 2007 A1
20070136059 Gadbois Jun 2007 A1
20070150268 Acero et al. Jun 2007 A1
20070154031 Avendano et al. Jul 2007 A1
20070185587 Kondo Aug 2007 A1
20070198254 Goto et al. Aug 2007 A1
20070237271 Pessoa et al. Oct 2007 A1
20070244695 Manjunath et al. Oct 2007 A1
20070253574 Soulodre Nov 2007 A1
20070276656 Solbach et al. Nov 2007 A1
20070282604 Gartner et al. Dec 2007 A1
20070287490 Green et al. Dec 2007 A1
20080019548 Avendano Jan 2008 A1
20080069366 Soulodre Mar 2008 A1
20080111734 Fam et al. May 2008 A1
20080117901 Klammer May 2008 A1
20080118082 Seltzer et al. May 2008 A1
20080140396 Grosse-Schulte et al. Jun 2008 A1
20080159507 Virolainen et al. Jul 2008 A1
20080160977 Ahmaniemi et al. Jul 2008 A1
20080187143 Mak-Fan Aug 2008 A1
20080192955 Merks Aug 2008 A1
20080192956 Kazama Aug 2008 A1
20080195384 Jabri et al. Aug 2008 A1
20080208575 Laaksonen et al. Aug 2008 A1
20080212795 Goodwin et al. Sep 2008 A1
20080233934 Diethorn Sep 2008 A1
20080247567 Kjolerbakken et al. Oct 2008 A1
20080259731 Happonen Oct 2008 A1
20080298571 Kurtz et al. Dec 2008 A1
20080304677 Abolfathi et al. Dec 2008 A1
20080310646 Amada Dec 2008 A1
20080317259 Zhang et al. Dec 2008 A1
20080317261 Yoshida et al. Dec 2008 A1
20090012783 Klein Jan 2009 A1
20090012784 Murgia et al. Jan 2009 A1
20090018828 Nakadai et al. Jan 2009 A1
20090034755 Short et al. Feb 2009 A1
20090048824 Amada Feb 2009 A1
20090060222 Jeong et al. Mar 2009 A1
20090063143 Schmidt et al. Mar 2009 A1
20090070118 Den Brinker et al. Mar 2009 A1
20090086986 Schmidt et al. Apr 2009 A1
20090089054 Wang et al. Apr 2009 A1
20090106021 Zurek et al. Apr 2009 A1
20090112579 Li et al. Apr 2009 A1
20090116656 Lee et al. May 2009 A1
20090119096 Gerl et al. May 2009 A1
20090119099 Lee et al. May 2009 A1
20090134829 Baumann et al. May 2009 A1
20090141908 Jeong et al. Jun 2009 A1
20090144053 Tamura et al. Jun 2009 A1
20090144058 Sorin Jun 2009 A1
20090147942 Culter Jun 2009 A1
20090150149 Culter et al. Jun 2009 A1
20090164905 Ko Jun 2009 A1
20090192790 El-Maleh et al. Jul 2009 A1
20090192791 El-Maleh et al. Jul 2009 A1
20090204413 Sintes et al. Aug 2009 A1
20090216526 Schmidt et al. Aug 2009 A1
20090226005 Acero et al. Sep 2009 A1
20090226010 Schnell et al. Sep 2009 A1
20090228272 Herbig et al. Sep 2009 A1
20090240497 Usher et al. Sep 2009 A1
20090257609 Gerkmann et al. Oct 2009 A1
20090262969 Short et al. Oct 2009 A1
20090264114 Virolainen et al. Oct 2009 A1
20090287481 Paranjpe et al. Nov 2009 A1
20090292536 Hetherington et al. Nov 2009 A1
20090303350 Terada Dec 2009 A1
20090323655 Cardona et al. Dec 2009 A1
20090323925 Sweeney et al. Dec 2009 A1
20090323981 Cutler Dec 2009 A1
20090323982 Solbach et al. Dec 2009 A1
20100004929 Baik Jan 2010 A1
20100017205 Visser et al. Jan 2010 A1
20100033427 Marks et al. Feb 2010 A1
20100036659 Haulick et al. Feb 2010 A1
20100092007 Sun Apr 2010 A1
20100094643 Avendano et al. Apr 2010 A1
20100105447 Sibbald et al. Apr 2010 A1
20100128123 DiPoala May 2010 A1
20100130198 Kannappan et al. May 2010 A1
20100211385 Sehlstedt Aug 2010 A1
20100215184 Buck et al. Aug 2010 A1
20100217837 Ansari et al. Aug 2010 A1
20100228545 Ito et al. Sep 2010 A1
20100245624 Beaucoup Sep 2010 A1
20100278352 Petit et al. Nov 2010 A1
20100280824 Petit et al. Nov 2010 A1
20100296668 Lee et al. Nov 2010 A1
20100303298 Marks et al. Dec 2010 A1
20100315482 Rosenfeld et al. Dec 2010 A1
20110038486 Beaucoup Feb 2011 A1
20110038489 Visser Feb 2011 A1
20110038557 Closset et al. Feb 2011 A1
20110044324 Li et al. Feb 2011 A1
20110075857 Aoyagi Mar 2011 A1
20110081024 Soulodre Apr 2011 A1
20110081026 Ramakrishnan et al. Apr 2011 A1
20110107367 Georgis et al. May 2011 A1
20110129095 Avendano et al. Jun 2011 A1
20110137646 Ahgren et al. Jun 2011 A1
20110142257 Goodwin et al. Jun 2011 A1
20110173006 Nagel et al. Jul 2011 A1
20110173542 Imes et al. Jul 2011 A1
20110182436 Murgia et al. Jul 2011 A1
20110184732 Godavarti Jul 2011 A1
20110184734 Wang et al. Jul 2011 A1
20110191101 Uhle et al. Aug 2011 A1
20110208520 Lee Aug 2011 A1
20110224994 Norvell et al. Sep 2011 A1
20110257965 Hardwick Oct 2011 A1
20110257967 Every et al. Oct 2011 A1
20110264449 Sehlstedt Oct 2011 A1
20110280154 Silverstrim et al. Nov 2011 A1
20110286605 Furuta et al. Nov 2011 A1
20110300806 Lindahl et al. Dec 2011 A1
20110305345 Bouchard et al. Dec 2011 A1
20120027217 Jun et al. Feb 2012 A1
20120050582 Seshadri et al. Mar 2012 A1
20120062729 Hart et al. Mar 2012 A1
20120116758 Murgia et al. May 2012 A1
20120116769 Malah et al. May 2012 A1
20120123775 Murgia et al. May 2012 A1
20120133728 Lee May 2012 A1
20120182429 Forutanpour et al. Jul 2012 A1
20120202485 Mirbaha et al. Aug 2012 A1
20120209611 Furuta et al. Aug 2012 A1
20120231778 Chen et al. Sep 2012 A1
20120249785 Sudo et al. Oct 2012 A1
20120250882 Mohammad et al. Oct 2012 A1
20120257778 Hall et al. Oct 2012 A1
20130034243 Yermeche et al. Feb 2013 A1
20130051543 McDysan et al. Feb 2013 A1
20130182857 Namba et al. Jul 2013 A1
20130289988 Fry Oct 2013 A1
20130289996 Fry Oct 2013 A1
20130322461 Poulsen Dec 2013 A1
20130332156 Tackin et al. Dec 2013 A1
20130332171 Avendano et al. Dec 2013 A1
20130343549 Vemireddy et al. Dec 2013 A1
20140003622 Ikizyan et al. Jan 2014 A1
20140126745 Dickins May 2014 A1
20140350926 Schuster et al. Nov 2014 A1
20150025881 Carlos et al. Jan 2015 A1
20150078555 Zhang et al. Mar 2015 A1
20150078606 Zhang et al. Mar 2015 A1
20150208165 Volk et al. Jul 2015 A1
20150304766 Delikaris-Manias Oct 2015 A1
20160037245 Harrington Feb 2016 A1
20160061934 Woodruff et al. Mar 2016 A1
20160078880 Avendano et al. Mar 2016 A1
20160093307 Warren et al. Mar 2016 A1
Foreign Referenced Citations (61)
Number Date Country
105474311 Apr 2016 CN
112014003337 Mar 2016 DE
1081685 Mar 2001 EP
1536660 Jun 2005 EP
20080623 Nov 2008 FI
20110428 Dec 2011 FI
20125600 Jun 2012 FI
123080 Oct 2012 FI
05172865 Jul 1993 JP
H05300419 Nov 1993 JP
H07336793 Dec 1995 JP
2004053895 Feb 2004 JP
2004531767 Oct 2004 JP
2004533155 Oct 2004 JP
2005148274 Jun 2005 JP
2005518118 Jun 2005 JP
2005309096 Nov 2005 JP
2006515490 May 2006 JP
2007201818 Aug 2007 JP
2008518257 May 2008 JP
2008542798 Nov 2008 JP
2009037042 Feb 2009 JP
2009538450 Nov 2009 JP
2012514233 Jun 2012 JP
5081903 Sep 2012 JP
2013513306 Apr 2013 JP
2013527479 Jun 2013 JP
5718251 Mar 2015 JP
5855571 Dec 2015 JP
1020070068270 Jun 2007 KR
101050379 Dec 2008 KR
1020080109048 Dec 2008 KR
1020090013221 Feb 2009 KR
1020110111409 Oct 2011 KR
1020120094892 Aug 2012 KR
1020120101457 Sep 2012 KR
101294634 Aug 2013 KR
101610662 Apr 2016 KR
519615 Feb 2003 TW
200847133 Dec 2008 TW
201113873 Apr 2011 TW
201143475 Dec 2011 TW
I421858 Jan 2014 TW
201513099 Apr 2015 TW
WO8400634 Feb 1984 WO
WO0207061 Jan 2002 WO
WO02080362 Oct 2002 WO
WO02103676 Dec 2002 WO
WO03069499 Aug 2003 WO
WO2004010415 Jan 2004 WO
WO2005086138 Sep 2005 WO
WO2007140003 Dec 2007 WO
WO2008034221 Mar 2008 WO
WO2010077361 Jul 2010 WO
WO2011002489 Jan 2011 WO
WO2011068901 Jun 2011 WO
WO2012094422 Jul 2012 WO
WO2013188562 Dec 2013 WO
WO2015010129 Jan 2015 WO
WO2016040885 Mar 2016 WO
WO2016049566 Mar 2016 WO
Non-Patent Literature Citations (124)
Final Office Action, dated Jan. 12, 2016, U.S. Appl. No. 12/959,994, filed Dec. 3, 2010.
Non-Final Office Action, dated Dec. 28, 2015, U.S. Appl. No. 14/081,723, filed Nov. 15, 2013.
Final Office Action, dated Dec. 10, 2015, U.S. Appl. No. 13/916,388, filed Jun. 12, 2013.
Non-Final Office Action, dated Nov. 3, 2015, U.S. Appl. No. 12/962,519, filed Dec. 7, 2010.
Non-Final Office Action, dated Sep. 14, 2015, U.S. Appl. No. 14/094,347, filed Dec. 2, 2013.
Notice of Allowance, dated Jul. 30, 2015, U.S. Appl. No. 12/896,725, filed Oct. 1, 2010.
Notice of Allowance, dated Jul. 15, 2015, U.S. Appl. No. 13/735,446, filed Jan. 7, 2013.
Non-Final Office Action, dated May 20, 2015, U.S. Appl. No. 12/959,994, filed Dec. 3, 2010.
Non-Final Office Action, dated May 15, 2015, U.S. Appl. No. 12/963,493, filed Dec. 8, 2010.
Non-Final Office Action, dated Mar. 30, 2015, U.S. Appl. No. 12/958,710, filed Dec. 2, 2010.
Final Office Action, dated Mar. 18, 2015, U.S. Appl. No. 12/904,010, filed Oct. 13, 2010.
Non-Final Office Action, dated Mar. 3, 2015, U.S. Appl. No. 13/916,388, filed Jun. 12, 2013.
Final Office Action, dated Feb. 10, 2015, U.S. Appl. No. 12/962,519, filed Dec. 7, 2010.
Notice of Allowance, dated Dec. 3, 2014, U.S. Appl. No. 13/415,535, filed Mar. 8, 2012.
Non-Final Office Action, dated Nov. 19, 2014, U.S. Appl. No. 12/896,725, filed Oct. 1, 2010.
Non-Final Office Action, dated Sep. 29, 2014, U.S. Appl. No. 13/735,446, filed Jan. 7, 2013.
Final Office Action, dated Sep. 17, 2014, U.S. Appl. No. 13/415,535, filed Mar. 8, 2012.
Non-Final Office Action, dated Aug. 29, 2014, U.S. Appl. No. 12/904,010, filed Oct. 13, 2010.
Non-Final Office Action, dated Jul. 31, 2014, U.S. Appl. No. 12/963,493, filed Dec. 8, 2010.
Notice of Allowance, dated Jul. 23, 2014, U.S. Appl. No. 12/908,746, filed Oct. 20, 2010.
Non-Final Office Action, dated Jul. 21, 2014, U.S. Appl. No. 12/959,994, filed Dec. 3, 2010.
Non-Final Office Action, dated Jun. 6, 2014, U.S. Appl. No. 12/958,710, filed Dec. 2, 2010.
Non-Final Office Action, dated May 13, 2014, U.S. Appl. No. 12/962,519, filed Dec. 7, 2010.
Final Office Action, dated Apr. 9, 2014, U.S. Appl. No. 13/735,446, filed Jan. 7, 2013.
Notice of Allowance, dated Mar. 26, 2014, U.S. Appl. No. 12/841,098, filed Jul. 21, 2010.
Final Office Action, dated Mar. 13, 2014, U.S. Appl. No. 12/904,010, filed Oct. 13, 2010.
Non-Final Office Action, dated Mar. 4, 2014, U.S. Appl. No. 13/415,535, filed Mar. 8, 2012.
Notice of Allowance, dated Jan. 31, 2014, U.S. Appl. No. 13/734,208, filed Jan. 4, 2013.
Non-Final Office Action, dated Jan. 30, 2014, U.S. Appl. No. 12/896,725, filed Oct. 1, 2010.
Non-Final Office Action, dated Dec. 13, 2013, U.S. Appl. No. 13/735,446, filed Jan. 7, 2013.
Non-Final Office Action, dated Nov. 20, 2013, U.S. Appl. No. 12/908,746, filed Oct. 20, 2010.
Non-Final Office Action, dated Oct. 8, 2013, U.S. Appl. No. 13/734,208, filed Jan. 4, 2013.
Notice of Allowance, dated Sep. 17, 2013, U.S. Appl. No. 13/751,907, filed Jan. 28, 2013.
Non-Final Office Action, dated Aug. 23, 2013, U.S. Appl. No. 12/904,010, filed Oct. 13, 2010.
Notice of Allowance, dated Aug. 15, 2013, U.S. Appl. No. 12/893,208, filed Sep. 29, 2010.
Notice of Allowance, dated Jul. 29, 2013, U.S. Appl. No. 13/414,121, filed Mar. 7, 2012.
Non-Final Office Action, dated Jun. 26, 2013, U.S. Appl. No. 12/959,994, filed Dec. 3, 2010.
Final Office Action, dated Jun. 3, 2013, U.S. Appl. No. 12/841,098, filed Jul. 21, 2010.
Non-Final Office Action, dated May 28, 2013, U.S. Appl. No. 13/735,446, filed Jan. 7, 2013.
Final Office Action, dated May 22, 2013, U.S. Appl. No. 12/896,725, filed Oct. 1, 2010.
Non-Final Office Action, dated May 10, 2013, U.S. Appl. No. 13/751,907, filed Jan. 28, 2013.
Final Office Action, dated May 7, 2013, U.S. Appl. No. 12/963,493, filed Dec. 8, 2010.
Non-Final Office Action, dated Apr. 25, 2013, U.S. Appl. No. 12/904,010, filed Oct. 13, 2010.
Final Office Action, dated Apr. 24, 2013, U.S. Appl. No. 12/958,710, filed Dec. 2, 2010.
Final Office Action, dated Apr. 9, 2013, U.S. Appl. No. 12/908,746, filed Oct. 20, 2010.
Non-Final Office Action, dated Apr. 8, 2013, U.S. Appl. No. 12/893,208, filed Sep. 29, 2010.
Non-Final Office Action, dated Jan. 31, 2013, U.S. Appl. No. 13/414,121, filed Mar. 7, 2012.
Non-Final Office Action, dated Jan. 2, 2013, U.S. Appl. No. 12/963,493, filed Dec. 8, 2010.
Non-Final Office Action, dated Dec. 12, 2012, U.S. Appl. No. 12/908,746, filed Oct. 20, 2010.
Non-Final Office Action, dated Nov. 9, 2012, U.S. Appl. No. 12/841,098, filed Jul. 21, 2010.
Non-Final Office Action, dated Oct. 24, 2012, U.S. Appl. No. 12/958,710, filed Dec. 2, 2010.
Non-Final Office Action, dated Oct. 11, 2012, U.S. Appl. No. 12/896,725, filed Oct. 1, 2010.
Notice of Allowance, dated Sep. 27, 2012, U.S. Appl. No. 13/568,989, filed Aug. 7, 2012.
Non-Final Office Action, dated Feb. 4, 2016, U.S. Appl. No. 14/341,697, filed Jul. 25, 2014.
Office Action dated Jan. 30, 2015 in Finland Patent Application No. 20080623, filed May 24, 2007.
Office Action dated Mar. 27, 2015 in Korean Patent Application No. 10-2011-7016591, filed Dec. 30, 2009.
Notice of Allowance dated Aug. 13, 2015 in Finnish Patent Application 20080623, filed May 24, 2007.
Office Action dated Oct. 15, 2015 in Korean Patent Application 10-2011-7016591.
Notice of Allowance dated Jan. 14, 2016 in South Korean Patent Application No. 10-2011-7016591 filed Jul. 15, 2011.
International Search Report & Written Opinion dated Feb. 12, 2016 in Patent Cooperation Treaty Application No. PCT/US2015/064523, filed Dec. 8, 2015.
Klein, David, “Noise-Robust Multi-Lingual Keyword Spotting with a Deep Neural Network Based Architecture”, U.S. Appl. No. 14/614,348, filed Feb. 4, 2015.
Vitus, Deborah Kathleen et al., “Method for Modeling User Possession of Mobile Device for User Authentication Framework”, U.S. Appl. No. 14/548,207, filed Nov. 19, 2014.
Murgia, Carlo, “Selection of System Parameters Based on Non-Acoustic Sensor Information”, U.S. Appl. No. 14/331,205, filed Jul. 14, 2014.
Goodwin, Michael M. et al., “Key Click Suppression”, U.S. Appl. No. 14/745,176, filed Jun. 19, 2015.
International Search Report and Written Opinion dated Feb. 7, 2011 in Patent Cooperation Treaty Application No. PCT/US10/58600.
International Search Report dated Dec. 20, 2013 in Patent Cooperation Treaty Application No. PCT/US2013/045462, filed Jun. 12, 2013.
Office Action dated Aug. 26, 2014 in Japan Application No. 2012-542167, filed Dec. 1, 2010.
Office Action dated Oct. 31, 2014 in Finland Patent Application No. 20125600, filed Jun. 1, 2012.
Office Action dated Jul. 21, 2015 in Japan Patent Application No. 2012-542167, filed Dec. 1, 2010.
Office Action dated Sep. 29, 2015 in Finland Patent Application No. 20125600, filed Jun. 1, 2012.
Allowance dated Nov. 17, 2015 in Japan Patent Application No. 2012-542167, filed Dec. 1, 2010.
International Search Report & Written Opinion dated Dec. 14, 2015 in Patent Cooperation Treaty Application No. PCT/US2015/049816, filed Sep. 11, 2015.
International Search Report & Written Opinion dated Dec. 22, 2015 in Patent Cooperation Treaty Application No. PCT/US2015/052433, filed Sep. 25, 2015.
International Search Report & Written Opinion dated Feb. 11, 2016 in Patent Cooperation Treaty Application No. PCT/US2015/063519, filed Dec. 2, 2015.
Boll, Steven F. "Suppression of Acoustic Noise in Speech Using Spectral Subtraction", IEEE Transactions on Acoustics, Speech and Signal Processing, vol. ASSP-27, No. 2, Apr. 1979, pp. 113-120.
“ENT 172.” Instructional Module. Prince George's Community College Department of Engineering Technology. Accessed: Oct. 15, 2011. Subsection: “Polar and Rectangular Notation”. <http://academic.ppgcc.edu/ent/ent172_instr_mod.html>.
Fulghum, D. P. et al., “LPC Voice Digitizer with Background Noise Suppression”, 1979 IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. 220-223.
Haykin, Simon et al., “Appendix A.2 Complex Numbers.” Signals and Systems. 2nd Ed. 2003. p. 764.
Hohmann, V. "Frequency Analysis and Synthesis Using a Gammatone Filterbank", Acta Acustica United with Acustica, 2002, vol. 88, pp. 433-442.
Martin, Rainer "Spectral Subtraction Based on Minimum Statistics", in Proc. European Signal Processing Conf., 1994, pp. 1182-1185.
Mitra, Sanjit K. Digital Signal Processing: a Computer-based Approach. 2nd Ed. 2001. pp. 131-133.
Cosi, Piero et al., (1996), “Lyon's Auditory Model Inversion: a Tool for Sound Separation and Speech Enhancement,” Proceedings of ESCA Workshop on ‘The Auditory Basis of Speech Perception,’ Keele University, Keele (UK), Jul. 15-19, 1996, pp. 194-197.
Rabiner, Lawrence R. et al., “Digital Processing of Speech Signals”, (Prentice-Hall Series in Signal Processing). Upper Saddle River, NJ: Prentice Hall, 1978.
Schimmel, Steven et al., “Coherent Envelope Detection for Modulation Filtering of Speech,” 2005 IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 1, No. 7, pp. 221-224.
Slaney, Malcolm, et al., "Auditory Model Inversion for Sound Separation," 1994 IEEE International Conference on Acoustics, Speech and Signal Processing, Apr. 19-22, vol. 2, pp. 77-80.
Slaney, Malcolm. "An Introduction to Auditory Model Inversion", Interval Technical Report IRC 1994-014, http://coweb.ecn.purdue.edu/~malcolm/interval/1994-014/, Sep. 1994, accessed on Jul. 6, 2010.
Solbach, Ludger “An Architecture for Robust Partial Tracking and Onset Localization in Single Channel Audio Signal Mixes”, Technical University Hamburg-Harburg, 1998.
International Search Report and Written Opinion dated Sep. 16, 2008 in Patent Cooperation Treaty Application No. PCT/US2007/012628.
International Search Report and Written Opinion dated May 20, 2010 in Patent Cooperation Treaty Application No. PCT/US2009/006754.
Fast Cochlea Transform, US Trademark Reg. No. 2,875,755 (Aug. 17, 2004).
3GPP2 “Enhanced Variable Rate Codec, Speech Service Options 3, 68, 70, and 73 for Wideband Spread Spectrum Digital Systems”, May 2009, pp. 1-308.
3GPP2 “Selectable Mode Vocoder (SMV) Service Option for Wideband Spread Spectrum Communication Systems”, Jan. 2004, pp. 1-231.
3GPP2 “Source-Controlled Variable-Rate Multimode Wideband Speech Codec (VMR-WB) Service Option 62 for Spread Spectrum Systems”, Jun. 11, 2004, pp. 1-164.
3GPP “3GPP Specification 26.071 Mandatory Speech CODEC Speech Processing Functions; AMR Speech Codec; General Description”, http://www.3gpp.org/ftp/Specs/html-info/26071.htm, accessed on Jan. 25, 2012.
3GPP "3GPP Specification 26.094 Mandatory Speech Codec Speech Processing Functions; Adaptive Multi-Rate (AMR) Speech Codec; Voice Activity Detector (VAD)", http://www.3gpp.org/ftp/Specs/html-info/26094.htm, accessed on Jan. 25, 2012.
3GPP "3GPP Specification 26.171 Speech Codec Speech Processing Functions; Adaptive Multi-Rate—Wideband (AMR-WB) Speech Codec; General Description", http://www.3gpp.org/ftp/Specs/html-info/26171.htm, accessed on Jan. 25, 2012.
3GPP "3GPP Specification 26.194 Speech Codec Speech Processing Functions; Adaptive Multi-Rate—Wideband (AMR-WB) Speech Codec; Voice Activity Detector (VAD)", http://www.3gpp.org/ftp/Specs/html-info/26194.htm, accessed on Jan. 25, 2012.
International Telecommunication Union “Coding of Speech at 8 kbit/s Using Conjugate-Structure Algebraic-code-excited Linear-prediction (CS-ACELP)”, Mar. 19, 1996, pp. 1-39.
International Telecommunication Union “Coding of Speech at 8 kbit/s Using Conjugate Structure Algebraic-code-excited Linear-prediction (CS-ACELP) Annex B: A Silence Compression Scheme for G.729 Optimized for Terminals Conforming to Recommendation V.70”, Nov. 8, 1996, pp. 1-23.
International Search Report and Written Opinion dated Aug. 19, 2010 in Patent Cooperation Treaty Application No. PCT/US2010/001786.
Cisco, “Understanding How Digital T1 CAS (Robbed Bit Signaling) Works in IOS Gateways”, Jan. 17, 2007, http://www.cisco.com/image/gif/paws/22444/t1-cas-ios.pdf, accessed on Apr. 3, 2012.
Jelinek et al., “Noise Reduction Method for Wideband Speech Coding” Proc. Eusipco, Vienna, Austria, Sep. 2004, pp. 1959-1962.
Widjaja et al., "Application of Differential Microphone Array for IS-127 EVRC Rate Determination Algorithm", Interspeech 2009, 10th Annual Conference of the International Speech Communication Association, Brighton, United Kingdom, Sep. 6-10, 2009, pp. 1123-1126.
Sugiyama et al., “Single-Microphone Noise Suppression for 3G Handsets Based on Weighted Noise Estimation” in Benesty et al., “Speech Enhancement”, 2005, pp. 115-133, Springer Berlin Heidelberg.
Watts, “Real-Time, High-Resolution Simulation of the Auditory Pathway, with Application to Cell-Phone Noise Reduction” Proceedings of 2010 IEEE International Symposium on Circuits and Systems (ISCAS), May 30-Jun. 2, 2010, pp. 3821-3824.
3GPP2 "Minimum Performance Specification for the Enhanced Variable Rate Codec, Speech Service Options 3 and 68 for Wideband Spread Spectrum Digital Systems", Jul. 2007, pp. 1-83.
Ramakrishnan, 2000. Reconstruction of Incomplete Spectrograms for Robust Speech Recognition. Ph.D. thesis, Carnegie Mellon University, Pittsburgh, Pennsylvania.
Kim et al., "Missing-Feature Reconstruction by Leveraging Temporal Spectral Correlation for Robust Speech Recognition in Background Noise Conditions," Audio, Speech, and Language Processing, IEEE Transactions on, vol. 18, No. 8, pp. 2111-2120, Nov. 2010.
Cooke et al., "Robust Automatic Speech Recognition with Missing and Unreliable Acoustic Data," Speech Commun., vol. 34, No. 3, pp. 267-285, 2001.
Liu et al., “Efficient cepstral normalization for robust speech recognition.” Proceedings of the workshop on Human Language Technology. Association for Computational Linguistics, 1993.
Yoshizawa et al., "Cepstral gain normalization for noise robust speech recognition." Acoustics, Speech, and Signal Processing, 2004, Proceedings (ICASSP '04), IEEE International Conference on, vol. 1, IEEE, 2004.
Office Action dated Apr. 8, 2014 in Japan Patent Application 2011-544416, filed Dec. 30, 2009.
Elhilali et al., "A cocktail party with a cortical twist: How cortical mechanisms contribute to sound segregation." J. Acoust. Soc. Am., vol. 124, No. 6, Dec. 2008, pp. 3751-3771.
Jin et al., “HMM-Based Multipitch Tracking for Noisy and Reverberant Speech.” Jul. 2011.
Kawahara, H., et al., "TANDEM-STRAIGHT: A temporally stable power spectral representation for periodic signals and applications to interference-free spectrum, F0, and aperiodicity estimation." IEEE ICASSP 2008.
Lu et al. “A Robust Audio Classification and Segmentation Method.” Microsoft Research, 2001, pp. 203, 206, and 207.
International Search Report & Written Opinion dated Nov. 12, 2014 in Patent Cooperation Treaty Application No. PCT/US2014/047458, filed Jul. 21, 2014.
Krini, Mohamed et al., “Model-Based Speech Enhancement,” in Speech and Audio Processing in Adverse Environments; Signals and Communication Technology, edited by Hansler et al., 2008, Chapter 4, pp. 89-134.
Office Action dated Dec. 9, 2014 in Japan Patent Application No. 2012-518521, filed Jun. 21, 2010.
Office Action dated Dec. 10, 2014 in Taiwan Patent Application No. 099121290, filed Jun. 29, 2010.
Purnhagen, Heiko, "Low Complexity Parametric Stereo Coding in MPEG-4," Proc. of the 7th Int. Conference on Digital Audio Effects (DAFx'04), Naples, Italy, Oct. 5-8, 2004.
Chang, Chun-Ming et al., "Voltage-Mode Multifunction Filter with Single Input and Three Outputs Using Two Compound Current Conveyors", IEEE Transactions on Circuits and Systems-I: Fundamental Theory and Applications, vol. 46, No. 11, Nov. 1999.
Nayebi et al., "Low delay FIR filter banks: design and evaluation", IEEE Transactions on Signal Processing, vol. 42, No. 1, pp. 24-31, Jan. 1994.
Notice of Allowance dated Feb. 17, 2015 in Japan Patent Application No. 2011-544416, filed Dec. 30, 2009.
Related Publications (1)
Number Date Country
20160094910 A1 Mar 2016 US
Provisional Applications (2)
Number Date Country
61266131 Dec 2009 US
62098247 Dec 2014 US
Continuation in Parts (1)
Number Date Country
Parent 12896725 Oct 2010 US
Child 14957447 US