The present invention relates generally to audio processing. More specifically, the present invention relates to controlling adaptivity of noise cancellation in an audio signal.
Presently, there are many methods for reducing background noise in an adverse audio environment. Some audio devices that suppress noise utilize two or more microphones to receive an audio signal. Audio signals received by the microphones may be used in noise cancellation processing, which eliminates at least a portion of a noise component of a signal. Noise cancellation may be achieved by utilizing one or more spatial attributes derived from two or more microphone signals. In realistic scenarios, the spatial attributes of a wanted signal such as speech and an unwanted signal such as noise from the surroundings are usually different. Robustness of a noise reduction system can be adversely affected by unanticipated variations in the spatial attributes of both wanted and unwanted signals. These unanticipated variations may result from variations in microphone sensitivity, variations in microphone positioning on audio devices, occlusion of one or more of the microphones, or movement of the device during normal usage. Accordingly, robust noise cancellation is needed that can adapt to various circumstances such as these.
Embodiments of the present technology allow control of adaptivity of noise cancellation in an audio signal.
In a first claimed embodiment, a method for controlling adaptivity of noise cancellation is disclosed. The method includes receiving an audio signal at a first microphone, wherein the audio signal comprises a speech component and a noise component. A pitch salience of the audio signal may then be determined. Accordingly, a coefficient applied to the audio signal may be adapted to obtain a modified audio signal when the pitch salience satisfies a threshold. In turn, the modified audio signal is outputted via an output device.
In a second claimed embodiment, a method is set forth. The method includes receiving a primary audio signal at a first microphone and a secondary audio signal at a second microphone. The primary audio signal and the secondary audio signal both comprise a speech component. An energy estimate is determined from the primary audio signal or the secondary audio signal. A first coefficient to be applied to the primary audio signal may be adapted to generate a modified primary audio signal, wherein the application of the first coefficient may be based on the energy estimate. The modified primary audio signal is then outputted via an output device.
A third claimed embodiment discloses a method for controlling adaptivity of noise cancellation. The method includes receiving a primary audio signal at a first microphone and a secondary audio signal at a second microphone, wherein the primary audio signal and the secondary audio signal both comprise a speech component. A first coefficient to be applied to the primary audio signal is adapted to generate a modified primary audio signal. The modified primary audio signal is outputted via an output device, wherein adaptation of the first coefficient is halted based on an echo component within the primary audio signal.
In a fourth claimed embodiment, a method for controlling adaptivity of noise cancellation is set forth. The method includes receiving an audio signal at a first microphone. The audio signal comprises a speech component and a noise component. A coefficient is adapted to suppress the noise component of the audio signal and form a modified audio signal. Adapting the coefficient may include reducing the value of the coefficient based on an audio noise energy estimate. The modified audio signal may then be outputted via an output device.
A fifth claimed embodiment discloses a method for controlling adaptivity of noise cancellation. The method includes receiving a primary audio signal at a first microphone and a secondary audio signal at a second microphone, wherein the primary audio signal and the secondary audio signal both comprise a speech component and a noise component. A first transfer function is determined between the speech component of the primary audio signal and the speech component of the secondary audio signal, while a second transfer function is determined between the noise component of the primary audio signal and the noise component of the secondary audio signal. Next, a difference between the first transfer function and the second transfer function is determined. A coefficient applied to the primary audio signal is adapted to generate a modified primary audio signal when the difference exceeds a threshold. The modified primary audio signal may be outputted via an output device.
Embodiments of the present technology may further include systems and computer-readable storage media. Such systems can perform methods associated with controlling adaptivity of noise cancellation. The computer-readable media has programs embodied thereon. The programs may be executed by a processor to perform methods associated with controlling adaptivity of noise cancellation.
The present technology provides methods and systems for controlling adaptivity of noise cancellation of an audio signal. More specifically, these methods and systems allow noise cancellation to adapt to changing or unpredictable conditions. These conditions include differences in hardware resulting from manufacturing tolerances. Additionally, these conditions include unpredictable environmental factors such as changing relative positions of sources of wanted and unwanted audio signals.
Controlling adaptivity of noise cancellation can be performed by controlling how a noise component is canceled in an audio signal received from one of two or more microphones. All or most of a speech component can be removed from an audio signal received from one of two or more microphones, resulting in a noise reference signal or a residual audio signal. The resulting residual audio signal may then be processed or modified and subtracted from the original primary audio signal, thereby reducing noise in the primary audio signal and generating a modified audio signal. One or more coefficients can be applied to cancel or suppress the speech component in the primary signal (to generate the residual audio signal) and then to cancel or suppress at least a portion of the noise component in the primary signal (to generate the modified primary audio signal).
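The two-stage structure described above can be summarized in the following simplified sketch, provided for illustration only. The names sigma_hat and alpha correspond to the coefficients σ̂ and α discussed later in this description; the function name and single-sub-band framing are illustrative and not part of any claimed embodiment.

```python
def cancel_noise(primary, secondary, sigma_hat, alpha):
    """Two-stage subtractive noise cancellation for one sub-band of one frame.

    primary, secondary -- complex sub-band samples c(k) and f(k)
    sigma_hat          -- coefficient used to cancel speech and form a noise reference
    alpha              -- coefficient applied to the noise reference before subtraction
    """
    # Stage 1: cancel the speech component to obtain a residual (noise reference) signal.
    noise_reference = secondary - sigma_hat * primary
    # Stage 2: subtract the scaled noise reference from the primary signal,
    # yielding the modified (noise-reduced) primary signal.
    modified_primary = primary - alpha * noise_reference
    return modified_primary, noise_reference
```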
Referring now to
The audio device 102 may include a microphone array. In exemplary embodiments, the microphone array may comprise a primary microphone 108 relative to the user 104 and a secondary microphone 110 located a distance away from the primary microphone 108. The primary microphone 108 may be located near the mouth of the user 104 in a nominal usage position, which is described in connection with
In exemplary embodiments, the primary and secondary microphones 108 and 110 are spaced a distance apart. This spatial separation allows various differences to be determined between received acoustic signals. These differences may be used to determine relative locations of the user 104 and the noise source 106. Upon receipt by the primary and secondary microphones 108 and 110, the acoustic signals may be converted into electric signals. The electric signals may, themselves, be converted by an analog-to-digital converter (not shown) into digital signals for processing in accordance with some embodiments. In order to differentiate the acoustic signals, the acoustic signal received by the primary microphone 108 is herein referred to as the primary signal, while the acoustic signal received by the secondary microphone 110 is herein referred to as the secondary signal.
The primary microphone 108 and the secondary microphone 110 both receive a speech signal from the mouth of the user 104 and a noise signal from the noise source 106. These signals may be converted from the time-domain to the frequency-domain, and be divided into frequency sub-bands, as described further herein. The total signal received by the primary microphone 108 (i.e., the primary signal c) may be represented as a superposition of the speech signal s and of the noise signal n as c=s+n. In other words, the primary signal is a mixture of a speech component and a noise component.
Due to the spatial separation of the primary microphone 108 and the secondary microphone 110, the speech signal received by the secondary microphone 110 may have an amplitude difference and a phase difference relative to the speech signal received by the primary microphone 108. Similarly, the noise signal received by the secondary microphone 110 may have an amplitude difference and a phase difference relative to the noise signal received by the primary microphone 108. These amplitude and phase differences can be represented by complex coefficients. Therefore, the total signal received by the secondary microphone 110 (i.e., the secondary signal f) may be represented as a superposition of the speech signal s scaled by a first complex coefficient σ and of the noise signal n scaled by a second complex coefficient ν as f=σs+νn. Put differently, the secondary signal is a mixture of the speech component and noise component of the primary signal, wherein both the speech component and noise component are independently scaled in amplitude and shifted in phase relative to the primary signal. It is noteworthy that a diffuse noise component may be present in both the primary and secondary signals. In such a case, the primary signal may be represented as c=s+n+d, while the secondary signal may be represented as f=σs+νn+e.
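For illustration, the mixing model above can be exercised numerically for a single sub-band. The particular values of s, n, σ, and ν below are arbitrary and serve only to show how the primary and secondary signals are formed.

```python
import cmath

# Hypothetical single sub-band values, chosen only to illustrate the mixing model.
s = 1.0 + 0.0j                      # speech component at the primary microphone
n = 0.3 - 0.2j                      # noise component at the primary microphone
sigma = 0.8 * cmath.exp(0.4j)       # amplitude/phase change of speech at the secondary mic
nu = 1.1 * cmath.exp(-1.0j)         # amplitude/phase change of noise at the secondary mic

c = s + n                           # primary signal:   c = s + n
f = sigma * s + nu * n              # secondary signal: f = sigma*s + nu*n
print(c, f)
```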
The output device 206 is any device which provides an audio output to users such as the user 104. For example, the output device 206 may comprise an earpiece of a headset or handset, or a speaker on a conferencing device. In some embodiments, the output device 206 may also be a device that outputs or transmits audio signals to other devices or users.
Referring now to
The primary signal c and the secondary signal f are received by the frequency analysis module 302. The frequency analysis module 302 decomposes the primary and secondary signals into frequency sub-bands. Because most sounds are complex and comprise more than one frequency, a sub-band analysis on the primary and secondary signals determines what individual frequencies are present. This analysis may be performed on a frame by frame basis. A frame is a predetermined period of time. According to one embodiment, the frame is 8 ms long. Alternative embodiments may utilize other frame lengths or no frame at all.
A sub-band results from a filtering operation on an input signal (e.g., the primary signal or the secondary signal) where the bandwidth of the filter is narrower than the bandwidth of the signal received by the frequency analysis module 302. In one embodiment, the frequency analysis module 302 utilizes a filter bank to mimic the frequency response of a human cochlea. This is described in further detail in U.S. Pat. No. 7,076,315 filed Mar. 24, 2000 and entitled “Efficient Computation of Log-Frequency-Scale Digital Filter Cascade,” and U.S. patent application Ser. No. 11/441,675 filed May 25, 2006 and entitled “System and Method for Processing an Audio Signal,” both of which have been incorporated herein by reference. Alternatively, other filters such as short-time Fourier transform (STFT), sub-band filter banks, modulated complex lapped transforms, cochlear models, wavelets, etc., can be used by the frequency analysis module 302. The decomposed primary signal is expressed as c(k), while the decomposed secondary signal is expressed as f(k), where k indicates the specific sub-band.
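As one simplified illustration of this decomposition step, the sketch below uses a windowed short-time Fourier transform, which is among the alternatives listed above; a cochlea-mimicking filter bank could be substituted without changing the rest of the pipeline. The frame length, hop size, window choice, and the assumed 8 kHz sampling rate are illustrative, not values required by any embodiment.

```python
import numpy as np

def stft_subbands(signal, frame_len=64, hop=32):
    """Decompose a time-domain signal into complex sub-band values, frame by frame.

    Returns an array of shape (num_frames, num_subbands), where each row holds
    the complex sub-band values for one frame.
    """
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frames.append(np.fft.rfft(signal[start:start + frame_len] * window))
    return np.array(frames)

# Assuming an 8 kHz sampling rate (an assumption, not stated above),
# an 8 ms frame corresponds to 64 samples.
subbands = stft_subbands(np.random.randn(8000))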
The decomposed signals c(k) and f(k) are received by the noise cancellation module 304 from the frequency analysis module 302. The noise cancellation module 304 performs noise cancellation on the decomposed signals using subtractive approaches. In exemplary embodiments, the noise cancellation engine 304 may adaptively subtract out some or all of the noise signal from the primary signal for one or more sub-bands. The results of the noise cancellation engine 304 may be outputted to the user or processed through a further noise suppression system (e.g., the noise suppression engine 306). For purposes of illustration, embodiments of the present technology will discuss the output of the noise cancellation engine 304 as being processed through a further noise suppression system. The noise cancellation module 304 is discussed in further detail in connection with
As depicted in
Next, the decomposed primary signal c″(k) is reconstructed by the frequency synthesis module 310. The reconstruction may include phase shifting the sub-bands of the primary signal in the frequency synthesis module 310. This is described further in U.S. patent application Ser. No. 12/319,107 filed Dec. 31, 2008 and entitled “Systems and Methods for Reconstructing Decomposed Audio Signals,” which has been incorporated herein by reference. An inverse of the decomposition process of the frequency analysis module 302 may be utilized by the frequency synthesis module 310. Once reconstruction is completed, the noise suppressed primary signal may be outputted by the audio processing system 204.
The pitch salience module 402 is executable by the processor 202 to determine the pitch salience of the primary signal. In exemplary embodiments, pitch salience may be determined from the primary signal in the time-domain. In other exemplary embodiments, determining pitch salience includes converting the primary signal from the time-domain to the frequency-domain. Pitch salience can be viewed as an estimate of how periodic the primary signal is and, by extension, how predictable the primary signal is. To illustrate, pitch salience of a perfect sine wave is contrasted with pitch salience of white noise. Since a perfect sine wave is purely periodic and has no noise component, the pitch salience of the sine wave has a large value. White noise, on the other hand, has no periodicity by definition, so the pitch salience of white noise has a small value. Voiced components of speech typically have a high pitch salience, and can thus be distinguished from many types of noise, which have a low pitch salience. It is noted that the pitch salience module 402 may also determine the pitch salience of the secondary signal.
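The description above does not mandate a particular salience estimator. The sketch below uses one common proxy, the peak of the normalized autocorrelation over a plausible pitch-lag range, simply to make the periodicity intuition concrete; the lag bounds (which assume roughly an 8 kHz sampling rate) are illustrative.

```python
import numpy as np

def pitch_salience(frame, min_lag=20, max_lag=160):
    """Crude periodicity estimate in [0, 1]: the peak of the normalized
    autocorrelation over a plausible pitch-lag range.
    """
    frame = frame - np.mean(frame)
    energy = np.dot(frame, frame)
    if energy == 0.0:
        return 0.0
    best = 0.0
    for lag in range(min_lag, min(max_lag, len(frame) - 1)):
        r = np.dot(frame[:-lag], frame[lag:]) / energy
        best = max(best, r)
    return max(0.0, min(1.0, best))

# A pure sine wave yields a value near 1; white noise yields a value near 0.
```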
The cross correlation module 404 is executable by the processor 202 to determine transfer functions between the primary signal and the secondary signal. The transfer functions include complex values or coefficients for each sub-band. One of these complex values denoted by σ̂ is associated with the speech signal from the user 104, while another complex value denoted by ν̂ is associated with the noise signal from the noise source 106. More specifically, the first complex value σ̂ for each sub-band represents the difference in amplitude and phase between the speech signal in the primary signal and the speech signal in the secondary signal for the respective sub-band. In contrast, the second complex value ν̂ for each sub-band represents the difference in amplitude and phase between the noise signal in the primary signal and the noise signal in the secondary signal for the respective sub-band. In exemplary embodiments, the transfer function may be obtained by performing a cross-correlation between the primary signal and the secondary signal.
The first complex value σ̂ of the transfer function may have a default value or reference value σ_ref that is determined empirically through calibration. A head and torso simulator (HATS) may be used for such calibration. A HATS system generally includes a mannequin with built-in ear and mouth simulators that provides a realistic reproduction of acoustic properties of an average adult human head and torso. HATS systems are commonly used for in situ performance tests on telephone handsets. An exemplary HATS system is available from Brüel & Kjær Sound & Vibration Measurement A/S of Nærum, Denmark. The audio device 102 can be mounted to a mannequin of a HATS system. Sounds produced by the mannequin and received by the primary and secondary microphones 108 and 110 can then be measured to obtain the reference value σ_ref of the transfer function. Obtaining the phase difference between the primary signal and the secondary signal can be illustrated by assuming that the primary microphone 108 is separated from the secondary microphone 110 by a distance d. The phase difference of a sound wave (of a single frequency) incident on the two microphones is proportional to the frequency f_sw of the sound wave and the distance d. This phase difference can be approximated analytically as φ ≈ 2π f_sw d cos(β)/c, where c is the speed of sound and β is the angle of incidence of the sound wave upon the microphone array.
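A short worked example of the phase-difference approximation follows. The microphone spacing, frequency, and angle of incidence below are illustrative values, not values taken from any embodiment.

```python
import math

# Worked example of phi ~= 2*pi*f_sw*d*cos(beta)/c.
f_sw = 1000.0      # frequency of the incident sound wave, Hz (illustrative)
d = 0.02           # microphone spacing, metres (assumed)
beta = 0.0         # angle of incidence, radians (end-fire arrival assumed)
c_sound = 343.0    # speed of sound, m/s

phi = 2 * math.pi * f_sw * d * math.cos(beta) / c_sound
print(phi)         # ~0.37 rad for these values
```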
The voice cancellation module 406 is executable by the processor 202 to cancel out or suppress the speech component of the primary signal. According to exemplary embodiments, the voice cancellation module 406 achieves this by utilizing the first complex value σ̂ of the transfer function determined by the cross correlation module 404. A signal entirely or mostly devoid of speech may be obtained by subtracting the product of the primary signal c(k) and σ̂ from the secondary signal on a sub-band by sub-band basis. This can be expressed as
f(k) − σ̂·c(k) ≈ f(k) − σ·c(k) = (ν − σ)n(k)
when σ̂ is approximately equal to σ. The signal expressed by (ν − σ)n(k) is a noise reference signal or a residual audio signal, and may be referred to as a speech-devoid signal.
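The identity above can be checked numerically, as in the sketch below. The values are arbitrary, and σ̂ is set exactly equal to σ only to verify the algebra.

```python
import numpy as np

# Numerical check of f(k) - sigma_hat*c(k) ~= (nu - sigma)*n(k) when sigma_hat ~= sigma.
s, n = 0.7 + 0.1j, 0.2 - 0.3j                  # illustrative speech and noise values
sigma, nu = 0.9 * np.exp(0.3j), 1.2 * np.exp(-0.8j)

c = s + n
f = sigma * s + nu * n
sigma_hat = sigma                              # perfectly adapted for this check

residual = f - sigma_hat * c                   # speech-devoid noise reference
assert np.allclose(residual, (nu - sigma) * n)
```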
Under certain conditions, the value of σ̂ may be adapted to a value that is more effective in canceling the speech component of the primary signal. This adaptation may be subject to one or more constraints. Generally speaking, adaptation may be desirable to adjust for unpredicted occurrences. For example, since the audio device 102 can be moved around as illustrated in
The constraints for adaptation of σ̂ by the voice cancellation module 406 may be divided into sub-band constraints and global constraints. Sub-band constraints are considered individually per sub-band, while global constraints are considered over multiple sub-bands. Sub-band constraints may also be divided into level and spatial constraints. All constraints are considered on a frame by frame basis in exemplary embodiments. If a constraint is not met, adaptation of σ̂ may not be performed. Furthermore, in general, σ̂ is adapted within frames and sub-bands that are dominated by speech.
One sub-band level constraint is that the energy of the primary signal is some distance above the stationary noise estimate. This may help prevent maladaptation with quasi-stationary noise. Another sub-band level constraint is that the primary signal energy is at least as large as the minimum expected speech level for a given frame and sub-band. This may help prevent maladaptation with low-level noise. Yet another sub-band level constraint is that σ̂ should not be adapted when a transfer function or energy difference between the primary and secondary microphones indicates that echoes are dominating a particular sub-band or frame. In one exemplary embodiment, for microphone configurations where the secondary microphone is closer to a loudspeaker or earpiece than the primary microphone, σ̂ should not be adapted when the secondary signal has a greater magnitude than the primary signal. This may help prevent adaptation to echoes.
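The sub-band level constraints can be expressed as a simple gating test, as in the sketch below. The margin applied to the stationary noise estimate and the exact form of each comparison are assumptions; the description above only requires that the primary energy be sufficiently above the stationary noise estimate and at least as large as the minimum expected speech level.

```python
def subband_level_constraints_ok(primary_energy, secondary_energy,
                                 stationary_noise_estimate,
                                 min_speech_level,
                                 noise_margin=4.0):
    """Illustrative check of the sub-band level constraints for one frame and sub-band."""
    # Primary energy must be well above the stationary noise estimate
    # (the margin value is an assumption).
    if primary_energy < noise_margin * stationary_noise_estimate:
        return False
    # Primary energy must reach the minimum expected speech level.
    if primary_energy < min_speech_level:
        return False
    # Do not adapt when echo appears to dominate (secondary louder than primary,
    # for configurations where the secondary microphone is nearer the loudspeaker).
    if secondary_energy > primary_energy:
        return False
    return True
```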
A sub-band spatial constraint for adaptation of σ̂ by the voice cancellation module 406 may be applied for various frequency ranges.
Another sub-band spatial constraint is that the magnitude of σ−1 for the speech signal |σ−1| should be greater than the magnitude of ν−1 for the noise signal |ν−1| in a given frame and sub-band. Furthermore, ν may be adapted when speech is not active based on any or all of the individual sub-band and global constraints controlling adaptation of σ̂ and other constraints not embodied in adaptation of σ̂. This constraint may help prevent maladaptation within noise that may arrive from a spatial location that is within the permitted σ adaptation region defined by the first sub-band spatial constraint.
As mentioned, global constraints are considered over multiple sub-bands. One global constraint for adaptation of σ̂ by the voice cancellation module 406 is that the pitch salience of the primary signal determined by the pitch salience module 402 exceeds a threshold. In exemplary embodiments, this threshold is 0.7, where a value of 1 indicates perfect periodicity, and a value of zero indicates no periodicity. A pitch salience threshold may also be applied to individual sub-bands and, therefore, be used as a sub-band constraint rather than a global constraint. Another global constraint for adaptation of σ̂ may be that a minimum number of low frequency sub-bands (e.g., sub-bands below approximately 0.5-1 kHz) must satisfy the sub-band level constraints described herein. In one embodiment, this minimum number equals half of the sub-bands. Yet another global constraint is that a minimum number of low frequency sub-bands that satisfy the sub-band level constraints should also satisfy the sub-band spatial constraint described in connection with
Referring again to
Returning to
The coefficient α can be adapted for changes in noise conditions in the environment 100 such as a moving noise source 106, multiple noise sources, or multiple reflections of a single noise source. One constraint is that the noise cancellation module 408 only adapts α when there is no speech activity. Thus, α is only adapted when σ̂ is not being adapted by the voice cancellation module 406. Another constraint is that α should adapt toward zero (i.e., no noise cancellation) if the primary signal, secondary signal, or speech-devoid signal (i.e., (ν − σ)n(k)) of the voice cancellation module 406 is below some minimum energy threshold. In exemplary embodiments, the minimum energy threshold may be based upon an energy estimate of the primary or secondary microphone self-noise.
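The constraints on α can likewise be expressed as a gating function, sketched below. The fade factor is an assumption, and proposed_alpha stands in for whatever value the adaptation rule (not reproduced here) would otherwise produce.

```python
def gate_alpha_update(alpha, proposed_alpha, adapting_sigma,
                      primary_energy, secondary_energy, reference_energy,
                      min_energy, fade=0.9):
    """Illustrative gating of alpha adaptation for one frame and sub-band."""
    # Do not adapt alpha while sigma_hat is being adapted, i.e., during speech activity.
    if adapting_sigma:
        return alpha
    # Fade alpha toward zero (no noise cancellation) when the primary, secondary,
    # or speech-devoid signal falls below a minimum energy threshold, e.g., an
    # estimate of microphone self-noise.
    if min(primary_energy, secondary_energy, reference_energy) < min_energy:
        return fade * alpha
    # Otherwise accept the value produced by the adaptation rule.
    return proposed_alpha
```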
Yet another constraint for adapting α is that the following equation is satisfied:
where γ = √2/|ν̂ − σ̂|² and ν̂ is a complex value which estimates the transfer function between the primary and secondary microphone signals for the noise source. The value of ν̂ may be adapted based upon a noise activity detector, or any or all of the constraints that are applied to adaptation of the voice cancellation module 406. This condition implies that more noise is being canceled relative to speech. Conceptually, this may be viewed as noise activity detection. The left side of the above equation (g₂·γ) is related to the signal-to-noise ratio (SNR) of the output of the noise cancellation engine 304, while the right side of the equation (g₁/γ) is related to the SNR of the input of the noise cancellation engine 304. It is noteworthy that γ is not a fixed value in exemplary embodiments since actual values of ν̂ and σ̂ can be estimated using the cross correlation module 404 and voice cancellation module 406. As such, the difference between ν̂ and σ̂ must be less than a threshold to satisfy this condition.
In step 502, one or more signals are received. In exemplary embodiments, these signals comprise the primary signal received by the primary microphone 108 and the secondary signal received by the secondary microphone 110. These signals may originate at a user 104 and/or a noise source 106. Furthermore, the received one or more signals may each include a noise component and a speech component.
In step 504, the received one or more signals are decomposed into frequency sub-bands. In exemplary embodiments, step 504 is performed by execution of the frequency analysis module 302 by the processor 202.
In step 506, information related to amplitude and phase is determined for the received one or more signals. This information may be expressed by complex values. Moreover, this information may include transfer functions that indicate amplitude and phase differences between two signals or corresponding frequency sub-bands of two signals. Step 506 may be performed by the cross correlation module 404.
In step 508, adaptation constraints are identified. The adaptation constraints may control adaptation of one or more coefficients applied to the one or more received signals. The one or more coefficients (e.g., σ̂ or α) may be applied to suppress a noise component or a speech component.
One adaptation constraint may be that a determined pitch salience of the one or more received signals should exceed a threshold in order to adapt a coefficient (e.g., σ̂).
Another adaptation constraint may be that a coefficient (e.g., σ̂) should be adapted when an amplitude difference between two received signals is within a first predetermined range and a phase difference between the two received signals is within a second predetermined range.
Yet another adaptation constraint may be that adaptation of a coefficient (e.g., σ̂) should be halted when echo is determined to be present in either microphone signal, for example, based upon a comparison between the amplitude of a primary signal and an amplitude of a secondary signal.
Still another adaptation constraint is that a coefficient (e.g., α) should be adjusted to zero when an amplitude of a noise component is less than a threshold. The adjustment of the coefficient to zero may be gradual so as to fade the value of the coefficient to zero over time. Alternatively, the adjustment of the coefficient to zero may be abrupt or instantaneous.
One other adaptation constraint is that a coefficient (e.g., α) should be adapted when a difference between two transfer functions exceeds or is less than a threshold, one of the transfer functions being an estimate of the transfer function between a speech component of a primary signal and a speech component of a secondary signal, and the other transfer function being an estimate of the transfer function between a noise component of the primary signal and a noise component of the secondary signal.
In step 510, noise cancellation consistent with the identified adaptation constraints is performed on the one or more received signals. In exemplary embodiments, the noise cancellation engine 304 performs step 510.
In step 512, the one or more received signals are reconstructed from the frequency sub-bands. The frequency synthesis module 310 performs step 512 in accordance with exemplary embodiments.
In step 514, at least one reconstructed signal is outputted. In exemplary embodiments, the reconstructed signal is outputted via the output device 206.
It is noteworthy that any hardware platform suitable for performing the processing described herein is suitable for use with the technology. Computer-readable storage media refer to any medium or media that participate in providing instructions to a central processing unit (CPU) such as the processor 202 for execution. Such media can take many forms, including, but not limited to, non-volatile and volatile media such as optical or magnetic disks and dynamic memory, respectively. Common forms of computer-readable storage media include a floppy disk, a flexible disk, a hard disk, magnetic tape, any other magnetic medium, a CD-ROM disk, digital video disk (DVD), any other optical medium, RAM, PROM, EPROM, a FLASH EPROM, any other memory chip or cartridge.
Various forms of transmission media may be involved in carrying one or more sequences of one or more instructions to a CPU for execution. A bus carries the data to system RAM, from which a CPU retrieves and executes the instructions. The instructions received by system RAM can optionally be stored on a fixed disk either before or after execution by a CPU.
While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. The descriptions are not intended to limit the scope of the technology to the particular forms set forth herein. Thus, the breadth and scope of a preferred embodiment should not be limited by any of the above-described exemplary embodiments. It should be understood that the above description is illustrative and not restrictive. To the contrary, the present descriptions are intended to cover such alternatives, modifications, and equivalents as may be included within the spirit and scope of the technology as defined by the appended claims and otherwise appreciated by one of ordinary skill in the art. The scope of the technology should, therefore, be determined not with reference to the above description, but instead should be determined with reference to the appended claims along with their full scope of equivalents.
The present application is a continuation of U.S. patent application Ser. No. 12/422,917 filed Apr. 13, 2009, which is herein incorporated by reference. The present application is also related to U.S. patent application Ser. No. 12/215,980 filed Jun. 30, 2008, U.S. Pat. No. 7,076,315, U.S. Pat. No. 8,150,065, U.S. Pat. No. 8,204,253, and U.S. patent application Ser. No. 12/319,107 filed Dec. 31, 2008, all of which are herein incorporated by reference.
Number | Name | Date | Kind |
---|---|---|---|
3976863 | Engel | Aug 1976 | A |
3978287 | Fletcher et al. | Aug 1976 | A |
4137510 | Iwahara | Jan 1979 | A |
4433604 | Ott | Feb 1984 | A |
4516259 | Yato et al. | May 1985 | A |
4535473 | Sakata | Aug 1985 | A |
4536844 | Lyon | Aug 1985 | A |
4581758 | Coker et al. | Apr 1986 | A |
4628529 | Borth et al. | Dec 1986 | A |
4630304 | Borth et al. | Dec 1986 | A |
4649505 | Zinser, Jr. et al. | Mar 1987 | A |
4658426 | Chabries et al. | Apr 1987 | A |
4674125 | Carlson et al. | Jun 1987 | A |
4718104 | Anderson | Jan 1988 | A |
4811404 | Vilmur et al. | Mar 1989 | A |
4812996 | Stubbs | Mar 1989 | A |
4864620 | Bialick | Sep 1989 | A |
4920508 | Yassaie et al. | Apr 1990 | A |
4991166 | Julstrom | Feb 1991 | A |
5027306 | Dattorro et al. | Jun 1991 | A |
5027410 | Williamson et al. | Jun 1991 | A |
5054085 | Meisel et al. | Oct 1991 | A |
5058419 | Nordstrom et al. | Oct 1991 | A |
5099738 | Hotz | Mar 1992 | A |
5103229 | Ribner | Apr 1992 | A |
5119711 | Bell et al. | Jun 1992 | A |
5142961 | Paroutaud | Sep 1992 | A |
5150413 | Nakatani et al. | Sep 1992 | A |
5175769 | Hejna, Jr. et al. | Dec 1992 | A |
5177482 | Cideciyan et al. | Jan 1993 | A |
5187776 | Yanker | Feb 1993 | A |
5208864 | Kaneda | May 1993 | A |
5210366 | Sykes, Jr. | May 1993 | A |
5216423 | Mukherjee | Jun 1993 | A |
5222251 | Roney, IV et al. | Jun 1993 | A |
5224170 | Waite, Jr. | Jun 1993 | A |
5230022 | Sakata | Jul 1993 | A |
5319736 | Hunt | Jun 1994 | A |
5323459 | Hirano | Jun 1994 | A |
5341432 | Suzuki et al. | Aug 1994 | A |
5381473 | Andrea et al. | Jan 1995 | A |
5381512 | Holton et al. | Jan 1995 | A |
5400409 | Linhard | Mar 1995 | A |
5402493 | Goldstein | Mar 1995 | A |
5402496 | Soli et al. | Mar 1995 | A |
5406635 | Jarvinen | Apr 1995 | A |
5408235 | Doyle et al. | Apr 1995 | A |
5416847 | Boze | May 1995 | A |
5471195 | Rickman | Nov 1995 | A |
5473759 | Slaney et al. | Dec 1995 | A |
5479564 | Vogten et al. | Dec 1995 | A |
5502663 | Lyon | Mar 1996 | A |
5544250 | Urbanski | Aug 1996 | A |
5550924 | Helf et al. | Aug 1996 | A |
5574824 | Slyh et al. | Nov 1996 | A |
5590241 | Park et al. | Dec 1996 | A |
5602962 | Kellermann | Feb 1997 | A |
5633631 | Teckman | May 1997 | A |
5675778 | Jones | Oct 1997 | A |
5694474 | Ngo et al. | Dec 1997 | A |
5701350 | Popovich | Dec 1997 | A |
5706395 | Arslan et al. | Jan 1998 | A |
5717829 | Takagi | Feb 1998 | A |
5729612 | Abel et al. | Mar 1998 | A |
5732189 | Johnston et al. | Mar 1998 | A |
5749064 | Pawate et al. | May 1998 | A |
5757937 | Itoh et al. | May 1998 | A |
5777658 | Kerr et al. | Jul 1998 | A |
5792971 | Timis et al. | Aug 1998 | A |
5796819 | Romesburg | Aug 1998 | A |
5806025 | Vis et al. | Sep 1998 | A |
5809463 | Gupta et al. | Sep 1998 | A |
5819217 | Raman | Oct 1998 | A |
5839101 | Vahatalo et al. | Nov 1998 | A |
5845243 | Smart et al. | Dec 1998 | A |
5887032 | Cioffi | Mar 1999 | A |
5920840 | Satyamurti et al. | Jul 1999 | A |
5933495 | Oh | Aug 1999 | A |
5937060 | Oh | Aug 1999 | A |
5943429 | Handel | Aug 1999 | A |
5963651 | Van Veen et al. | Oct 1999 | A |
5978824 | Ikeda | Nov 1999 | A |
5983139 | Zierhofer | Nov 1999 | A |
5990405 | Auten et al. | Nov 1999 | A |
6002776 | Bhadkamkar et al. | Dec 1999 | A |
6011501 | Gong et al. | Jan 2000 | A |
6061456 | Andrea et al. | May 2000 | A |
6072881 | Linder | Jun 2000 | A |
6092126 | Rossum | Jul 2000 | A |
6097820 | Turner | Aug 2000 | A |
6098038 | Hermansky et al. | Aug 2000 | A |
6108626 | Cellario et al. | Aug 2000 | A |
6122384 | Mauro | Sep 2000 | A |
6122610 | Isabelle | Sep 2000 | A |
6125175 | Goldberg et al. | Sep 2000 | A |
6134524 | Peters et al. | Oct 2000 | A |
6137349 | Menkhoff et al. | Oct 2000 | A |
6140809 | Doi | Oct 2000 | A |
6160265 | Bacchi et al. | Dec 2000 | A |
6160886 | Romesburg et al. | Dec 2000 | A |
6173255 | Wilson et al. | Jan 2001 | B1 |
6188797 | Moledina et al. | Feb 2001 | B1 |
6205421 | Morii | Mar 2001 | B1 |
6205422 | Gu et al. | Mar 2001 | B1 |
6208671 | Paulos et al. | Mar 2001 | B1 |
6216103 | Wu et al. | Apr 2001 | B1 |
6222927 | Feng et al. | Apr 2001 | B1 |
6223090 | Brungart | Apr 2001 | B1 |
6263307 | Arslan et al. | Jul 2001 | B1 |
6266633 | Higgins et al. | Jul 2001 | B1 |
6317501 | Matsuo | Nov 2001 | B1 |
6321193 | Nystrom et al. | Nov 2001 | B1 |
6324235 | Savell et al. | Nov 2001 | B1 |
6326912 | Fujimori | Dec 2001 | B1 |
6339706 | Tillgren et al. | Jan 2002 | B1 |
6339758 | Kanazawa et al. | Jan 2002 | B1 |
6355869 | Mitton | Mar 2002 | B1 |
6363345 | Marash et al. | Mar 2002 | B1 |
6381570 | Li et al. | Apr 2002 | B2 |
6424938 | Johansson et al. | Jul 2002 | B1 |
6430295 | Handel et al. | Aug 2002 | B1 |
6434417 | Lovett | Aug 2002 | B1 |
6449586 | Hoshuyama | Sep 2002 | B1 |
6453289 | Ertem et al. | Sep 2002 | B1 |
6456209 | Savari | Sep 2002 | B1 |
6469732 | Chang et al. | Oct 2002 | B1 |
6477489 | Lockwood et al. | Nov 2002 | B1 |
6487257 | Gustafsson et al. | Nov 2002 | B1 |
6496795 | Malvar | Dec 2002 | B1 |
6513004 | Rigazio et al. | Jan 2003 | B1 |
6516066 | Hayashi | Feb 2003 | B2 |
6516136 | Lee | Feb 2003 | B1 |
6526140 | Marchok et al. | Feb 2003 | B1 |
6529606 | Jackson et al. | Mar 2003 | B1 |
6531970 | McLaughlin et al. | Mar 2003 | B2 |
6549630 | Bobisuthi | Apr 2003 | B1 |
6584203 | Elko et al. | Jun 2003 | B2 |
6647067 | Hjelm et al. | Nov 2003 | B1 |
6683938 | Henderson | Jan 2004 | B1 |
6717991 | Gustafsson et al. | Apr 2004 | B1 |
6718309 | Selly | Apr 2004 | B1 |
6735303 | Okuda | May 2004 | B1 |
6738482 | Jaber | May 2004 | B1 |
6745155 | Andringa et al. | Jun 2004 | B1 |
6760450 | Matsuo | Jul 2004 | B2 |
6785381 | Gartner et al. | Aug 2004 | B2 |
6792118 | Watts | Sep 2004 | B2 |
6795558 | Matsuo | Sep 2004 | B2 |
6798886 | Smith et al. | Sep 2004 | B1 |
6804203 | Benyassine et al. | Oct 2004 | B1 |
6804651 | Juric et al. | Oct 2004 | B2 |
6810273 | Mattila et al. | Oct 2004 | B1 |
6859508 | Koyama et al. | Feb 2005 | B1 |
6882736 | Dickel et al. | Apr 2005 | B2 |
6915257 | Heikkinen et al. | Jul 2005 | B2 |
6915264 | Baumgarte | Jul 2005 | B2 |
6917688 | Yu et al. | Jul 2005 | B2 |
6934387 | Kim | Aug 2005 | B1 |
6978159 | Feng et al. | Dec 2005 | B2 |
6982377 | Sakurai et al. | Jan 2006 | B2 |
6990196 | Zeng et al. | Jan 2006 | B2 |
7003099 | Zhang et al. | Feb 2006 | B1 |
7016507 | Brennan | Mar 2006 | B1 |
7020605 | Gao | Mar 2006 | B2 |
7031478 | Belt et al. | Apr 2006 | B2 |
7039197 | Venkatesh | May 2006 | B1 |
7042934 | Zamir | May 2006 | B2 |
7050388 | Kim et al. | May 2006 | B2 |
7054452 | Ukita | May 2006 | B2 |
7065485 | Chong-White et al. | Jun 2006 | B1 |
7076315 | Watts | Jul 2006 | B1 |
7092529 | Yu et al. | Aug 2006 | B2 |
7092882 | Arrowood et al. | Aug 2006 | B2 |
7099821 | Visser et al. | Aug 2006 | B2 |
7127072 | Rademacher et al. | Oct 2006 | B2 |
7142677 | Gonopolskiy et al. | Nov 2006 | B2 |
7146013 | Saito et al. | Dec 2006 | B1 |
7146316 | Alves | Dec 2006 | B2 |
7155019 | Hou | Dec 2006 | B2 |
7165026 | Acero et al. | Jan 2007 | B2 |
7171008 | Elko | Jan 2007 | B2 |
7171246 | Mattila et al. | Jan 2007 | B2 |
7174022 | Zhang et al. | Feb 2007 | B1 |
7190665 | Warke et al. | Mar 2007 | B2 |
7206418 | Yang et al. | Apr 2007 | B2 |
7209567 | Kozel et al. | Apr 2007 | B1 |
7225001 | Eriksson et al. | May 2007 | B1 |
7242762 | He et al. | Jul 2007 | B2 |
7246058 | Burnett | Jul 2007 | B2 |
7254242 | Ise et al. | Aug 2007 | B2 |
7289554 | Alloin | Oct 2007 | B2 |
7289955 | Deng et al. | Oct 2007 | B2 |
7327985 | Morfitt, III et al. | Feb 2008 | B2 |
7330138 | Mallinson et al. | Feb 2008 | B2 |
7339503 | Elenes | Mar 2008 | B1 |
7359504 | Reuss et al. | Apr 2008 | B1 |
7359520 | Brennan et al. | Apr 2008 | B2 |
7376558 | Gemello et al. | May 2008 | B2 |
7383179 | Alves et al. | Jun 2008 | B2 |
7395298 | Debes et al. | Jul 2008 | B2 |
7412379 | Taori et al. | Aug 2008 | B2 |
7433907 | Nagai et al. | Oct 2008 | B2 |
7436333 | Forman et al. | Oct 2008 | B2 |
7555075 | Pessoa et al. | Jun 2009 | B2 |
7555434 | Nomura et al. | Jun 2009 | B2 |
7561627 | Chow et al. | Jul 2009 | B2 |
7577084 | Tang et al. | Aug 2009 | B2 |
7617099 | Yang et al. | Nov 2009 | B2 |
7657038 | Doclo et al. | Feb 2010 | B2 |
7725314 | Wu et al. | May 2010 | B2 |
7764752 | Langberg et al. | Jul 2010 | B2 |
7777658 | Nguyen et al. | Aug 2010 | B2 |
7783032 | Abutalebi et al. | Aug 2010 | B2 |
7783481 | Endo et al. | Aug 2010 | B2 |
7895036 | Hetherington et al. | Feb 2011 | B2 |
7912567 | Chhatwal et al. | Mar 2011 | B2 |
7949522 | Hetherington et al. | May 2011 | B2 |
7953596 | Pinto | May 2011 | B2 |
8010355 | Rahbar | Aug 2011 | B2 |
8032364 | Watts | Oct 2011 | B1 |
8046219 | Zurek et al. | Oct 2011 | B2 |
8081878 | Zhang et al. | Dec 2011 | B1 |
8098812 | Fadili et al. | Jan 2012 | B2 |
8103011 | Mohammad et al. | Jan 2012 | B2 |
8107656 | Dreßler et al. | Jan 2012 | B2 |
8126159 | Goose et al. | Feb 2012 | B2 |
8143620 | Malinowski et al. | Mar 2012 | B1 |
8150065 | Solbach et al. | Apr 2012 | B2 |
8160265 | Mao et al. | Apr 2012 | B2 |
8180062 | Turku et al. | May 2012 | B2 |
8180064 | Avendano et al. | May 2012 | B1 |
8184818 | Ishiguro | May 2012 | B2 |
8189766 | Klein | May 2012 | B1 |
8194880 | Avendano | Jun 2012 | B2 |
8194882 | Every et al. | Jun 2012 | B2 |
8204252 | Avendano | Jun 2012 | B1 |
8204253 | Solbach | Jun 2012 | B1 |
8280731 | Yu | Oct 2012 | B2 |
8345890 | Avendano et al. | Jan 2013 | B2 |
8355511 | Klein | Jan 2013 | B2 |
8359195 | Li | Jan 2013 | B2 |
8378871 | Bapat | Feb 2013 | B1 |
8411872 | Stothers et al. | Apr 2013 | B2 |
8447045 | Laroche | May 2013 | B1 |
8473287 | Every et al. | Jun 2013 | B2 |
8488805 | Santos et al. | Jul 2013 | B1 |
8494193 | Zhang et al. | Jul 2013 | B2 |
8521530 | Every et al. | Aug 2013 | B1 |
8526628 | Massie et al. | Sep 2013 | B1 |
8538035 | Every et al. | Sep 2013 | B2 |
8611551 | Massie et al. | Dec 2013 | B1 |
8611552 | Murgia et al. | Dec 2013 | B1 |
8718290 | Murgia et al. | May 2014 | B2 |
8737188 | Murgia et al. | May 2014 | B1 |
8737532 | Green et al. | May 2014 | B2 |
8744844 | Klein | Jun 2014 | B2 |
8761385 | Sugiyama | Jun 2014 | B2 |
8774423 | Solbach | Jul 2014 | B1 |
8804865 | Elenes et al. | Aug 2014 | B2 |
8848935 | Massie et al. | Sep 2014 | B1 |
8867759 | Avendano et al. | Oct 2014 | B2 |
8886525 | Klein | Nov 2014 | B2 |
8934641 | Avendano et al. | Jan 2015 | B2 |
8949120 | Every et al. | Feb 2015 | B1 |
8965942 | Rossum et al. | Feb 2015 | B1 |
9049282 | Murgia et al. | Jun 2015 | B1 |
9076456 | Avendano et al. | Jul 2015 | B1 |
9185487 | Solbach et al. | Nov 2015 | B2 |
9236874 | Rossum | Jan 2016 | B1 |
20010016020 | Gustafsson et al. | Aug 2001 | A1 |
20010031053 | Feng et al. | Oct 2001 | A1 |
20010046304 | Rast | Nov 2001 | A1 |
20010053228 | Jones | Dec 2001 | A1 |
20020002455 | Accardi et al. | Jan 2002 | A1 |
20020009203 | Erten | Jan 2002 | A1 |
20020036578 | Reefman | Mar 2002 | A1 |
20020041693 | Matsuo | Apr 2002 | A1 |
20020080980 | Matsuo | Jun 2002 | A1 |
20020106092 | Matsuo | Aug 2002 | A1 |
20020116187 | Erten | Aug 2002 | A1 |
20020133334 | Coorman et al. | Sep 2002 | A1 |
20020147595 | Baumgarte | Oct 2002 | A1 |
20020156624 | Gigi | Oct 2002 | A1 |
20020176589 | Buck et al. | Nov 2002 | A1 |
20030014248 | Vetter | Jan 2003 | A1 |
20030026437 | Janse et al. | Feb 2003 | A1 |
20030033140 | Taori et al. | Feb 2003 | A1 |
20030038736 | Becker et al. | Feb 2003 | A1 |
20030039369 | Bullen | Feb 2003 | A1 |
20030040908 | Yang et al. | Feb 2003 | A1 |
20030061032 | Gonopolskiy | Mar 2003 | A1 |
20030063759 | Brennan et al. | Apr 2003 | A1 |
20030072382 | Raleigh et al. | Apr 2003 | A1 |
20030072460 | Gonopolskiy et al. | Apr 2003 | A1 |
20030095667 | Watts | May 2003 | A1 |
20030099345 | Gartner et al. | May 2003 | A1 |
20030101048 | Liu | May 2003 | A1 |
20030103632 | Goubran et al. | Jun 2003 | A1 |
20030128851 | Furuta | Jul 2003 | A1 |
20030138116 | Jones et al. | Jul 2003 | A1 |
20030147538 | Elko | Aug 2003 | A1 |
20030169891 | Ryan et al. | Sep 2003 | A1 |
20030191641 | Acero et al. | Oct 2003 | A1 |
20030219130 | Baumgarte et al. | Nov 2003 | A1 |
20030228023 | Burnett et al. | Dec 2003 | A1 |
20040001450 | He et al. | Jan 2004 | A1 |
20040013276 | Ellis et al. | Jan 2004 | A1 |
20040015348 | McArthur et al. | Jan 2004 | A1 |
20040042616 | Matsuo | Mar 2004 | A1 |
20040047464 | Yu et al. | Mar 2004 | A1 |
20040047474 | Vries et al. | Mar 2004 | A1 |
20040078199 | Kremer et al. | Apr 2004 | A1 |
20040105550 | Aylward et al. | Jun 2004 | A1 |
20040111258 | Zangi et al. | Jun 2004 | A1 |
20040125965 | Alberth, Jr. et al. | Jul 2004 | A1 |
20040131178 | Shahaf et al. | Jul 2004 | A1 |
20040133421 | Burnett et al. | Jul 2004 | A1 |
20040165736 | Hetherington et al. | Aug 2004 | A1 |
20040185804 | Kanamori et al. | Sep 2004 | A1 |
20040196989 | Friedman et al. | Oct 2004 | A1 |
20040220800 | Kong et al. | Nov 2004 | A1 |
20040247111 | Popovic et al. | Dec 2004 | A1 |
20040263636 | Cutler et al. | Dec 2004 | A1 |
20050008179 | Quinn | Jan 2005 | A1 |
20050025263 | Wu | Feb 2005 | A1 |
20050027520 | Mattila et al. | Feb 2005 | A1 |
20050049864 | Kaltenmeier et al. | Mar 2005 | A1 |
20050060142 | Visser et al. | Mar 2005 | A1 |
20050066279 | LeBarton et al. | Mar 2005 | A1 |
20050114128 | Hetherington et al. | May 2005 | A1 |
20050152559 | Gierl et al. | Jul 2005 | A1 |
20050152563 | Amada et al. | Jul 2005 | A1 |
20050185813 | Sinclair et al. | Aug 2005 | A1 |
20050203735 | Ichikawa | Sep 2005 | A1 |
20050213778 | Buck et al. | Sep 2005 | A1 |
20050216259 | Watts | Sep 2005 | A1 |
20050226426 | Oomen et al. | Oct 2005 | A1 |
20050228518 | Watts | Oct 2005 | A1 |
20050261894 | Balan et al. | Nov 2005 | A1 |
20050276423 | Aubauer et al. | Dec 2005 | A1 |
20050288923 | Kok | Dec 2005 | A1 |
20060072768 | Schwartz et al. | Apr 2006 | A1 |
20060074646 | Alves et al. | Apr 2006 | A1 |
20060098809 | Nongpiur et al. | May 2006 | A1 |
20060120537 | Burnett et al. | Jun 2006 | A1 |
20060133621 | Chen et al. | Jun 2006 | A1 |
20060149535 | Choi et al. | Jul 2006 | A1 |
20060153391 | Hooley et al. | Jul 2006 | A1 |
20060160581 | Beaugeant et al. | Jul 2006 | A1 |
20060184363 | McCree et al. | Aug 2006 | A1 |
20060222184 | Buck et al. | Oct 2006 | A1 |
20070021958 | Visser et al. | Jan 2007 | A1 |
20070027685 | Arakawa et al. | Feb 2007 | A1 |
20070033020 | (Kelleher) Francois et al. | Feb 2007 | A1 |
20070041589 | Patel et al. | Feb 2007 | A1 |
20070055505 | Doclo et al. | Mar 2007 | A1 |
20070071206 | Gainsboro et al. | Mar 2007 | A1 |
20070078649 | Hetherington et al. | Apr 2007 | A1 |
20070094031 | Chen | Apr 2007 | A1 |
20070110263 | Brox | May 2007 | A1 |
20070116300 | Chen | May 2007 | A1 |
20070136059 | Gadbois | Jun 2007 | A1 |
20070150268 | Acero et al. | Jun 2007 | A1 |
20070154031 | Avendano et al. | Jul 2007 | A1 |
20070165879 | Deng et al. | Jul 2007 | A1 |
20070195968 | Jaber | Aug 2007 | A1 |
20070230712 | Belt et al. | Oct 2007 | A1 |
20070230913 | Ichimura | Oct 2007 | A1 |
20070233479 | Burnett | Oct 2007 | A1 |
20070276656 | Solbach et al. | Nov 2007 | A1 |
20070294263 | Punj et al. | Dec 2007 | A1 |
20080019548 | Avendano | Jan 2008 | A1 |
20080031466 | Buck | Feb 2008 | A1 |
20080033723 | Jang et al. | Feb 2008 | A1 |
20080037801 | Alves et al. | Feb 2008 | A1 |
20080059163 | Ding et al. | Mar 2008 | A1 |
20080069374 | Zhang | Mar 2008 | A1 |
20080071540 | Nakano et al. | Mar 2008 | A1 |
20080140391 | Yen et al. | Jun 2008 | A1 |
20080152157 | Lin et al. | Jun 2008 | A1 |
20080159573 | Dressler et al. | Jul 2008 | A1 |
20080162123 | Goldin | Jul 2008 | A1 |
20080170703 | Zivney | Jul 2008 | A1 |
20080186218 | Ohkuri et al. | Aug 2008 | A1 |
20080187148 | Itabashi et al. | Aug 2008 | A1 |
20080201138 | Visser et al. | Aug 2008 | A1 |
20080228478 | Hetherington et al. | Sep 2008 | A1 |
20080247556 | Hess | Oct 2008 | A1 |
20080260175 | Elko | Oct 2008 | A1 |
20080273476 | Cohen et al. | Nov 2008 | A1 |
20080306736 | Sanyal et al. | Dec 2008 | A1 |
20080317257 | Furge et al. | Dec 2008 | A1 |
20090003640 | Burnett | Jan 2009 | A1 |
20090012783 | Klein | Jan 2009 | A1 |
20090012786 | Zhang et al. | Jan 2009 | A1 |
20090022335 | Konchitsky | Jan 2009 | A1 |
20090048824 | Amada | Feb 2009 | A1 |
20090063142 | Sukkar | Mar 2009 | A1 |
20090080632 | Zhang et al. | Mar 2009 | A1 |
20090089053 | Wang et al. | Apr 2009 | A1 |
20090089054 | Wang et al. | Apr 2009 | A1 |
20090116652 | Kirkeby et al. | May 2009 | A1 |
20090129610 | Kim et al. | May 2009 | A1 |
20090144053 | Tamura et al. | Jun 2009 | A1 |
20090154717 | Hoshuyama | Jun 2009 | A1 |
20090164212 | Chan et al. | Jun 2009 | A1 |
20090177464 | Gao et al. | Jul 2009 | A1 |
20090220107 | Every et al. | Sep 2009 | A1 |
20090220197 | Gniadek et al. | Sep 2009 | A1 |
20090228272 | Herbig et al. | Sep 2009 | A1 |
20090238373 | Klein | Sep 2009 | A1 |
20090238377 | Ramakrishnan et al. | Sep 2009 | A1 |
20090240495 | Ramakrishnan | Sep 2009 | A1 |
20090245335 | Fang | Oct 2009 | A1 |
20090245444 | Fang | Oct 2009 | A1 |
20090248411 | Konchitsky et al. | Oct 2009 | A1 |
20090253418 | Makinen | Oct 2009 | A1 |
20090271187 | Yen et al. | Oct 2009 | A1 |
20090296958 | Sugiyama | Dec 2009 | A1 |
20090316918 | Niemisto et al. | Dec 2009 | A1 |
20090323982 | Solbach et al. | Dec 2009 | A1 |
20100017205 | Visser et al. | Jan 2010 | A1 |
20100027799 | Romesburg et al. | Feb 2010 | A1 |
20100067710 | Hendriks et al. | Mar 2010 | A1 |
20100076769 | Yu | Mar 2010 | A1 |
20100094643 | Avendano et al. | Apr 2010 | A1 |
20100138220 | Matsumoto et al. | Jun 2010 | A1 |
20100158267 | Thormundsson et al. | Jun 2010 | A1 |
20100166199 | Seydoux | Jul 2010 | A1 |
20100177916 | Gerkmann et al. | Jul 2010 | A1 |
20100239105 | Pan | Sep 2010 | A1 |
20100246849 | Sudo et al. | Sep 2010 | A1 |
20100267340 | Lee | Oct 2010 | A1 |
20100272275 | Carreras et al. | Oct 2010 | A1 |
20100272276 | Carreras et al. | Oct 2010 | A1 |
20100278352 | Petit et al. | Nov 2010 | A1 |
20100290615 | Takahashi | Nov 2010 | A1 |
20100290636 | Mao et al. | Nov 2010 | A1 |
20100309774 | Astrom | Dec 2010 | A1 |
20110007907 | Park et al. | Jan 2011 | A1 |
20110019833 | Kuech et al. | Jan 2011 | A1 |
20110035213 | Malenovsky et al. | Feb 2011 | A1 |
20110123019 | Gowreesunker et al. | May 2011 | A1 |
20110158419 | Theverapperuma et al. | Jun 2011 | A1 |
20110178800 | Watts | Jul 2011 | A1 |
20110182436 | Murgia et al. | Jul 2011 | A1 |
20110243344 | Bakalos et al. | Oct 2011 | A1 |
20110257967 | Every et al. | Oct 2011 | A1 |
20110261150 | Goyal et al. | Oct 2011 | A1 |
20110299695 | Nicholson | Dec 2011 | A1 |
20120027218 | Every et al. | Feb 2012 | A1 |
20120063609 | Triki et al. | Mar 2012 | A1 |
20120087514 | Williams et al. | Apr 2012 | A1 |
20120116758 | Murgia et al. | May 2012 | A1 |
20120121096 | Chen et al. | May 2012 | A1 |
20120140917 | Nicholson et al. | Jun 2012 | A1 |
20120179462 | Klein | Jul 2012 | A1 |
20120197898 | Pandey et al. | Aug 2012 | A1 |
20120220347 | Davidson | Aug 2012 | A1 |
20120237037 | Ninan et al. | Sep 2012 | A1 |
20120250871 | Lu et al. | Oct 2012 | A1 |
20130011111 | Abraham et al. | Jan 2013 | A1 |
20130024190 | Fairey | Jan 2013 | A1 |
20130096914 | Avendano et al. | Apr 2013 | A1 |
20140098964 | Rosca et al. | Apr 2014 | A1 |
20140205107 | Murgia et al. | Jul 2014 | A1 |
20140241702 | Solbach et al. | Aug 2014 | A1 |
20150025881 | Carlos et al. | Jan 2015 | A1 |
20160027451 | Solbach et al. | Jan 2016 | A1 |
20160064009 | Every et al. | Mar 2016 | A1 |
Number | Date | Country |
---|---|---|
0756437 | Jan 1997 | EP |
1232496 | Aug 2002 | EP |
1474755 | Nov 2004 | EP |
20080428 | Jul 2008 | FI |
20100431 | Dec 2010 | FI |
20125814 | Oct 2012 | FI |
20126083 | Oct 2012 | FI |
124716 | Dec 2014 | FI |
62110349 | May 1987 | JP |
4184400 | Jul 1992 | JP |
5053587 | Mar 1993 | JP |
6269083 | Sep 1994 | JP |
H07248793 | Sep 1995 | JP |
H10313497 | Nov 1998 | JP |
H11249693 | Sep 1999 | JP |
2001159899 | Jun 2001 | JP |
2002366200 | Dec 2002 | JP |
2002542689 | Dec 2002 | JP |
2003514473 | Apr 2003 | JP |
2003271191 | Sep 2003 | JP |
2004187283 | Jul 2004 | JP |
2005110127 | Apr 2005 | JP |
2005518118 | Jun 2005 | JP |
2005195955 | Jul 2005 | JP |
2006094522 | Apr 2006 | JP |
2006337415 | Dec 2006 | JP |
2007006525 | Jan 2007 | JP |
2008015443 | Jan 2008 | JP |
2008065090 | Mar 2008 | JP |
2008135933 | Jun 2008 | JP |
2009522942 | Jun 2009 | JP |
2010532879 | Oct 2010 | JP |
2011527025 | Oct 2011 | JP |
5007442 | Jun 2012 | JP |
2013518477 | May 2013 | JP |
2013525843 | Jun 2013 | JP |
5675848 | Jan 2015 | JP |
5762956 | Jun 2015 | JP |
1020080092404 | Oct 2008 | KR |
1020100041741 | Apr 2010 | KR |
1020110038024 | Apr 2011 | KR |
101210313 | Dec 2012 | KR |
1020120114327 | Jun 2013 | KR |
1020130061673 | Jun 2013 | KR |
101461141 | Nov 2014 | KR |
526468 | Apr 2003 | TW |
200305854 | Nov 2003 | TW |
200629240 | Aug 2006 | TW |
200705389 | Feb 2007 | TW |
I279776 | Apr 2007 | TW |
200910793 | Mar 2009 | TW |
201009817 | Mar 2010 | TW |
201142829 | Dec 2011 | TW |
201207845 | Feb 2012 | TW |
I463817 | Dec 2014 | TW |
I465121 | Dec 2014 | TW |
201513099 | Apr 2015 | TW |
I488179 | Jun 2015 | TW |
WO0137265 | May 2001 | WO |
WO0141504 | Jun 2001 | WO |
WO0156328 | Aug 2001 | WO |
WO0174118 | Oct 2001 | WO |
WO03043374 | May 2003 | WO |
WO03069499 | Aug 2003 | WO |
WO2008045476 | Oct 2004 | WO |
WO2006027707 | Mar 2006 | WO |
WO2007001068 | Jan 2007 | WO |
WO2007049644 | May 2007 | WO |
WO2007081916 | Jul 2007 | WO |
WO2009008998 | Jan 2009 | WO |
WO2009035614 | Mar 2009 | WO |
WO2010005493 | Jan 2010 | WO |
WO2011091068 | Jul 2011 | WO |
WO2011094232 | Aug 2011 | WO |
WO2011133405 | Oct 2011 | WO |
WO2012097016 | Jul 2012 | WO |
WO2014131054 | Aug 2014 | WO |
WO2015010129 | Jan 2015 | WO |
Entry |
---|
Allen, Jont B. “Short Term Spectral Analysis, Synthesis, and Modification by Discrete Fourier Transform”, IEEE Transactions on Acoustics, Speech, and Signal Processing. vol. ASSP-25, No. 3, Jun. 1977. pp. 235-238. |
Allen, Jont B. et al., “A Unified Approach to Short-Time Fourier Analysis and Synthesis”, Proceedings of the IEEE. vol. 65, No. 11, Nov. 1977. pp. 1558-1564. |
Avendano, Carlos, “Frequency-Domain Source Identification and Manipulation in Stereo Mixes for Enhancement, Suppression and Re-Panning Applications,” 2003 IEEE Workshop on Application of Signal Processing to Audio and Acoustics, Oct. 19-22, pp. 55-58, New Paltz, New York, USA. |
Boll, Steven F. “Suppression of Acoustic Noise in Speech using Spectral Subtraction”, IEEE Transactions on Acoustics, Speech and Signal Processing, vol. ASSP-27, No. 2, Apr. 1979, pp. 113-120. |
Boll, Steven F. et al., “Suppression of Acoustic Noise in Speech Using Two Microphone Adaptive Noise Cancellation”, IEEE Transactions on Acoustic, Speech, and Signal Processing, vol. ASSP-28, No. 6, Dec. 1980, pp. 752-753. |
Boll, Steven F. “Suppression of Acoustic Noise in Speech Using Spectral Subtraction”, Dept. of Computer Science, University of Utah Salt Lake City, Utah, Apr. 1979, pp. 18-19. |
Chen, Jingdong et al., “New Insights into the Noise Reduction Wiener Filter”, IEEE Transactions on Audio, Speech, and Language Processing. vol. 14, No. 4, Jul. 2006, pp. 1218-1234. |
Cohen, Israel et al., “Microphone Array Post-Filtering for Non-Stationary Noise Suppression”, IEEE International Conference on Acoustics, Speech, and Signal Processing, May 2002, pp. 1-4. |
Cohen, Israel, “Multichannel Post-Filtering in Nonstationary Noise Environments”, IEEE Transactions on Signal Processing, vol. 52, No. 5, May 2004, pp. 1149-1160. |
Dahl, Mattias et al., “Simultaneous Echo Cancellation and Car Noise Suppression Employing a Microphone Array”, 1997 IEEE International Conference on Acoustics, Speech, and Signal Processing, Apr. 21-24, pp. 239-242. |
Elko, Gary W., “Chapter 2: Differential Microphone Arrays”, “Audio Signal Processing for Next-Generation Multimedia Communication Systems”, 2004, pp. 12-65, Kluwer Academic Publishers, Norwell, Massachusetts, USA. |
“ENT 172.” Instructional Module. Prince George's Community College Department of Engineering Technology. Accessed: Oct. 15, 2011. Subsection: “Polar and Rectangular Notation”. <http://academic.ppgcc.edu/ent/ent172—instr—mod.html>. |
Fuchs, Martin et al., “Noise Suppression for Automotive Applications Based on Directional Information”, 2004 IEEE International Conference on Acoustics, Speech, and Signal Processing, May 17-21, pp. 237-240. |
Fulghum, D. P. et al., “LPC Voice Digitizer with Background Noise Suppression”, 1979 IEEE International Conference on Acoustics, Speech, and Signal Processing, pp. 220-223. |
Goubran, R.A. et al., “Acoustic Noise Suppression Using Regressive Adaptive Filtering”, 1990 IEEE 40th Vehicular Technology Conference, May 6-9, pp. 48-53. |
Graupe, Daniel et al., “Blind Adaptive Filtering of Speech from Noise of Unknown Spectrum Using a Virtual Feedback Configuration”, IEEE Transactions on Speech and Audio Processing, Mar. 2000, vol. 8, No. 2, pp. 146-158. |
Haykin, Simon et al., “Appendix A.2 Complex Numbers.” Signals and Systems. 2nd Ed. 2003. p. 764. |
Hermansky, Hynek “Should Recognizers Have Ears?”, In Proc. ESCA Tutorial and Research Workshop on Robust Speech Recognition for Unknown Communication Channels, pp. 1-10, France 1997. |
Hohmann, V. “Frequency Analysis and Synthesis Using a Gammatone Filterbank”, ACTA Acustica United with Acustica, 2002, vol. 88, pp. 433-442. |
Jeffress, Lloyd A. et al., “A Place Theory of Sound Localization,” Journal of Comparative and Physiological Psychology, 1948, vol. 41, p. 35-39. |
Jeong, Hyuk et al., “Implementation of a New Algorithm Using the STFT with Variable Frequency Resolution for the Time-Frequency Auditory Model”, J. Audio Eng. Soc., Apr. 1999, vol. 47, No. 4., pp. 240-251. |
Kates, James M. “A Time-Domain Digital Cochlear Model”, IEEE Transactions on Signal Processing, Dec. 1991, vol. 39, No. 12, pp. 2573-2592. |
Kato et al., “Noise Suppression with High Speech Quality Based on Weighted Noise Estimation and MMSE STSA” Proc. IWAENC [Online] 2001, pp. 183-186. |
Lazzaro, John et al., “A Silicon Model of Auditory Localization,” Neural Computation Spring 1989, vol. 1, pp. 17-57, Massachusetts Institute of Technology. |
Lippmann, Richard P. “Speech Recognition by Machines and Humans”, Speech Communication, Jul. 1997, vol. 22, No. 1, pp. 1-15. |
Liu, Chen et al., “A Two-Microphone Dual Delay-Line Approach for Extraction of a Speech Sound in the Presence of Multiple Interferers”, Journal of the Acoustical Society of America, vol. 110, No. 6, Dec. 2001, pp. 3218-3231. |
Martin, Rainer et al., “Combined Acoustic Echo Cancellation, Dereverberation and Noise Reduction: A two Microphone Approach”, Annales des Telecommunications/Annals of Telecommunications. vol. 49, No. 7-8, Jul.-Aug. 1994, pp. 429-438. |
Martin, Rainer “Spectral Subtraction Based on Minimum Statistics”, in Proceedings Europe. Signal Processing Conf., 1994, pp. 1182-1185. |
Mitra, Sanjit K. Digital Signal Processing: a Computer-based Approach. 2nd Ed. 2001. pp. 131-133. |
Mizumachi, Mitsunori et al., “Noise Reduction by Paired-Microphones Using Spectral Subtraction”, 1998 IEEE International Conference on Acoustics, Speech and Signal Processing, May 12-15, pp. 1001-1004. |
Moonen, Marc et al., “Multi-Microphone Signal Enhancement Techniques for Noise Suppression and Dereverbration,” http://www.esat.kuleuven.ac.be/sista/yearreport97//node37.html, accessed on Apr. 21, 1998. |
Watts, Lloyd Narrative of Prior Disclosure of Audio Display on Feb. 15, 2000 and May 31, 2000. |
Cosi, Piero et al., (1996), “Lyon's Auditory Model Inversion: a Tool for Sound Separation and Speech Enhancement,” Proceedings of ESCA Workshop on ‘The Auditory Basis of Speech Perception,’ Keele University, Keele (UK), Jul. 15-19, 1996, pp. 194-197. |
Parra, Lucas et al., “Convolutive Blind Separation of Non-Stationary Sources”, IEEE Transactions on Speech and Audio Processing. vol. 8, No. 3, May 2008, pp. 320-327. |
Rabiner, Lawrence R. et al., “Digital Processing of Speech Signals”, (Prentice-Hall Series in Signal Processing). Upper Saddle River, NJ: Prentice Hall, 1978. |
Weiss Ron et al., “Estimating Single-Channel Source Separation Masks: Revelance Vector Machine Classifiers vs. Pitch-Based Masking”, Workshop on Statistical and Perceptual Audio Processing, 2006. |
Schimmel, Steven et al., “Coherent Envelope Detection for Modulation Filtering of Speech,” 2005 IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 1, No. 7, pp. 221-224. |
Slaney, Malcom, “Lyon's Cochlear Model”, Advanced Technology Group, Apple Technical Report #13, Apple Computer, Inc., 1988, pp. 1-79. |
Slaney, Malcom, et al., “Auditory Model Inversion for Sound Separation,” 1994 IEEE International Conference on Acoustics, Speech and Signal Processing, Apr. 19-22, vol. 2, pp. 77-80. |
Slaney, Malcom. “An Introduction to Auditory Model Inversion”, Interval Technical Report IRC 1994-014, http://coweb.ecn.purdue.edu/-maclom/interval/1994-014/, Sep. 1994, accessed on Jul. 6, 2010. |
Solbach, Ludger “An Architecture for Robust Partial Tracking and Onset Localization in Single Channel Audio Signal Mixes”, Technical University Hamburg-Harburg, 1998. |
Soon et al., “Low Distortion Speech Enhancement” Proc. Inst. Elect. Eng. [Online] 2000, vol. 147, pp. 247-253. |
Stahl, V. et al., “Quantile Based Noise Estimation for Spectral Subtraction and Wiener Filtering,” 2000 IEEE International Conference on Acoustics, Speech, and Signal Processing, Jun. 5-9, vo1.3, pp. 1875-1878. |
Syntrillium Software Corporation, “Cool Edit User's Manual”, 1996, pp. 1-74. |
Tashev, Ivan et al., “Microphone Array for Headset with Spatial Noise Suppressor”, http://research.microsoft.com/users/ivantash/Documents/Tashev—MAforHeadset—HSCMA—05.pdf. (4 pages). |
Tchorz, Jurgen et al., “SNR Estimation Based on Amplitude Modulation Analysis with Applications to Noise Suppression”, IEEE Transactions on Speech and Audio Processing, vol. 11, No. 3, May 2003, pp. 184-192. |
Valin, Jean-Marc et al., “Enhanced Robot Audition Based on Microphone Array Source Separation with Post-Filter”, Proceedings of 2004 IEEE/RSJ International Conference on Intelligent Robots and Systems, Sep. 28-Oct. 2, 2004, Sendai, Japan. pp. 2123-2128. |
Watts, Lloyd, “Robust Hearing Systems for Intelligent Machines,” Applied Neurosystems Corporation, 2001, pp. 1-5. |
Widrow, B. et al., “Adaptive Antenna Systems,” Proceedings of the IEEE, vol. 55, No. 12, pp. 2143-2159, Dec. 1967. |
Yoo, Heejong et al., “Continuous-Time Audio Noise Suppression and Real-Time Implementation”, 2002 IEEE International Conference on Acoustics, Speech, and Signal Processing, May 13-17, pp. IV3980-IV3983. |
International Search Report & Written Opinion dated Nov. 27, 2015 in Patent Cooperation Treaty Application No. PCT/US2015/047263, filed Aug. 27, 2015. |
International Search Report dated Jun. 8, 2001 in Patent Cooperation Treaty Application No. PCT/US2001/008372. |
International Search Report dated Apr. 3, 2003 in Patent Cooperation Treaty Application No. PCT/US2002/036946. |
International Search Report dated May 29, 2003 in Patent Cooperation Treaty Application No. PCT/US2003/004124. |
International Search Report and Written Opinion dated Oct. 19, 2007 in Patent Cooperation Treaty Application No. PCT/US2007/000463. |
International Search Report and Written Opinion dated Apr. 9, 2008 in Patent Cooperation Treaty Application No. PCT/US2007/021654. |
International Search Report and Written Opinion dated Sep. 16, 2008 in Patent Cooperation Treaty Application No. PCT/US2007/012628. |
International Search Report and Written Opinion dated Oct. 1, 2008 in Patent Cooperation Treaty Application No. PCT/US2008/008249. |
International Search Report and Written Opinion dated Aug. 27, 2009 in Patent Cooperation Treaty Application No. PCT/US2009/003813. |
Dahl, Mattias et al., “Acoustic Echo and Noise Cancelling Using Microphone Arrays”, International Symposium on Signal Processing and its Applications, ISSPA, Gold coast, Australia, Aug. 25-30, 1996, pp. 379-382. |
Demol, M. et al., “Efficient Non-Uniform Time-Scaling of Speech With WSOLA for CALL Applications”, Proceedings of InSTIL/ICALL2004—NLP and Speech Technologies in Advanced Language Learning Systems—Venice Jun. 17-19, 2004. |
Laroche, Jean. “Time and Pitch Scale Modification of Audio Signals”, in “Applications of Digital Signal Processing to Audio and Acoustics”, The Kluwer International Series in Engineering and Computer Science, vol. 437, pp. 279-309, 2002. |
Moulines, Eric et al., “Non-Parametric Techniques for Pitch-Scale and Time-Scale Modification of Speech”, Speech Communication, vol. 16, pp. 175-205, 1995. |
Verhelst, Werner, “Overlap-Add Methods for Time-Scaling of Speech”, Speech Communication vol. 30, pp. 207-221, 2000. |
Bach et al., Learning Spectral Clustering with application to spech separation, Journal of machine learning research, 2006. |
Mokbel et al., 1995, IEEE Transactions of Speech and Audio Processing, vol. 3, No. 5, Sep. 1995, pp. 346-356. |
Office Action dated Oct. 14, 2013 in Taiwanese Patent Application 097125481, filed Jul. 4, 2008. |
Office Action dated Oct. 29, 2013 in Japanese Patent Application 2011-516313, filed Jun. 26, 2009. |
Office Action dated Dec. 20, 2013 in Taiwanese Patent Application 096146144, filed Dec. 4, 2007. |
Office Action dated Dec. 9, 2013 in Finnish Patent Application 20100431, filed Jun. 26, 2009. |
Office Action dated Jan. 20, 2014 in Finnish Patent Application 20100001, filed Jul. 3, 2008. |
Office Action dated Mar. 10, 2014 in Taiwanese Patent Application 097125481, filed Jul. 4, 2008. |
Bai et al., “Upmixing and Downmixing Two-channel Stereo Audio for Consumer Electronics”. IEEE Transactions on Consumer Electronics [Online] 2007, vol. 53, Issue 3, pp. 1011-1019. |
Jo et al., “Crosstalk cancellation for spatial sound reproduction in portable devices with stereo loudspeakers”. Communications in Computer and Information Science [Online] 2011, vol. 266, pp. 114-123. |
Nongpuir et al., “NEXT cancellation system with improved convergence rate and tracking performance”. IEEE Proceedings—Communications [Online] 2005, vol. 152, Issue 3, pp. 378-384. |
Ahmed et al., “Blind Crosstalk Cancellation for DMT Systems” IEEE—Emergent Technologies Technical Committee. Sep. 2002. pp. 1-5. |
Allowance dated May 21, 2014 in Finnish Patent Application 20100001, filed Jan. 4, 2010. |
Office Action dated May 2, 2014 in Taiwanese Patent Application 098121933, filed Jun. 29, 2009. |
Office Action dated Apr. 15, 2014 in Japanese Patent Application 2010-514871, filed Jul. 3, 2008. |
Elhilali et al.,“A cocktail party with a cortical twist: How cortical mechanisms contribute to sound segregation.” J Acoust Soc Am. Dec. 2008 124(6): 3751-3771). |
Jin et al., “HMM-Based Multipitch Tracking for Noisy and Reverberant Speech.” Jul. 2011. |
Kawahara, W., et al., “TANDEM-STRAIGHT: A temporally stable power spectral representation for periodic signals and applications to interference-free spectrum, F0, and aperiodicity estimation.” IEEE ICASSP 2008. |
Office Action dated Jun. 27, 2014 in Korean Patent Application No. 10-2010-7000194, filed Jan. 6, 2010. |
Office Action dated Jun. 18, 2014 in Finnish Patent Application No. 20080428, filed Jul. 4, 2008. |
International Search Report & Written Opinion dated Jul. 15, 2014 in Patent Cooperation Treaty Application No. PCT/US2014/018443, filed Feb. 25, 2014. |
Notice of Allowance dated Aug. 26, 2014 in Taiwanese Application No. 096146144, filed Dec. 4, 2007. |
Notice of Allowance dated Sep. 16, 2014 in Korean Application No. 10-2010-7000194, filed Jul. 3, 2008. |
Notice of Allowance dated Sep. 29, 2014 in Taiwanese Application No. 097125481, filed Jul. 4, 2008. |
Notice of Allowance dated Oct. 10, 2014 in Finnish Application No. 20100001, filed Jul. 3, 2008. |
International Search Report & Written Opinion dated Nov. 12, 2014 in Patent Cooperation Treaty Application No. PCT/US2014/047458, filed Jul. 21, 2014. |
Office Action dated Oct. 28, 2014 in Japanese Patent Application No. 2011-516313, filed Dec. 27, 2012. |
Heiko Pumhagen, “Low Complexity Parametric Stereo Coding in MPEG-4,” Proc. of the 7th Int. Conference on Digital Audio Effects (DAFx'04), Naples, Italy, Oct. 5-8, 2004. |
Dhun-Ming Chang et al., “Voltage-Mode Multifunction Filter with Single Input and Three Outputs Using Two Compound Current Conveyors” IEEE Transactions on Circuits and Systems-I: Fundamental Theory and Applications, vol. 46, No. 11, Nov. 1999. |
Notice of Allowance dated Feb. 10, 2015 in Taiwanese Patent Application No. 098121933, filed Jun. 29, 2009. |
Office Action dated Jan. 30, 2015 in Finnish Patent Application No. 20080623, filed May 24, 2007. |
Office Action dated Mar. 24, 2015 in Japanese Patent Application No. 2011-516313, filed Jun. 26, 2009. |
Office Action dated Apr. 16, 2015 in Korean Patent Application No. 10-2011-7000440, filed Jun. 26, 2009. |
Notice of Allowance dated Jun. 2, 2015 in Japanese Patent Application 2011-516313, filed Jun. 26, 2009. |
Office Action dated Jun. 4, 2015 in Finnish Patent Application 20080428, filed Jan. 5, 2007. |
Office Action dated Jun. 9, 2015 in Japanese Patent Application 2014-165477 filed Jul. 3, 2008. |
Notice of Allowance dated Aug. 13, 2015 in Finnish Patent Application 20080623, filed May 24, 2007. |
International Search Report and Written Opinion dated Mar. 31, 2011 in Patent Cooperation Treaty Application No. PCT/US2011/022462, filed Jan. 25, 2011. |
International Search Report and Written Opinion dated Jul. 5, 2011 in Patent Cooperation Treaty Application No. PCT/US11/32578. |
Office Action dated Oct. 30, 2014 in Korean Patent Application No. 10-2012-7027238, filed Apr. 14, 2011. |
Jung et al., “Feature Extraction through the Post Processing of WFBA Based on MMSE-STSA for Robust Speech Recognition,” Proceedings of the Acoustical Society of Korea Fall Conference, vol. 23, No. 2(s), pp. 39-42, Nov. 2004. |
Notice of Allowance dated Nov. 25, 2014 in Japan Application No. 2012-550214, filed Jul. 24, 2012. |
Office Action dated Dec. 10, 2014 in Finland Patent Application No. 20126083, filed Apr. 14, 2011. |
Lu et al., “Speech Enhancement Using Hybrid Gain Factor in Critical-Band-Wavelet-Packet Transform”, Digital Signal Processing, vol. 17, Jan. 2007, pp. 172-188. |
Office Action dated Apr. 17, 2015 in Taiwan Patent Application No. 100102945, filed Jan. 26, 2011. |
Office Action dated May 11, 2015 in Finland Patent Application 20125814, filed Jan. 25, 2011. |
Office Action dated Jun. 26, 2015 in South Korean Patent Application 1020127027238 filed Apr. 14, 2011. |
Office Action dated Jul. 2, 2015 in Finland Patent Application 20126083 filed Apr. 14, 2011. |
Office Action dated Jun. 23, 2015 in Japan Patent Application 2013-506188 filed Apr. 14, 2011. |
Office Action dated Oct. 29, 2015 in Korean Patent Application 1020127027238, filed Apr. 14, 2011. |
| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 12422917 | Apr 2009 | US |
| Child | 14591802 | | US |