1. Field of Invention
The present invention relates generally to audio processing and more particularly to acoustic echo cancellation in an audio system.
2. Description of Related Art
Conventionally, when audio from a far-end environment is presented through a loudspeaker of a communication device, the audio may be picked up by microphones or other audio sensors of the communication device. As such, the far-end audio may be sent back to the far-end resulting in an echo to a far-end listener. In order to reduce or eliminate this echo, an acoustic echo canceller may be utilized.
In traditional acoustic echo cancellers, knowledge of the far-end signal (e.g., strength and magnitude of the far-end signal) is required in order to be able to cancel the far-end signal. These traditional acoustic echo cancellers typically utilize one microphone. With knowledge of the far-end signal, a derivation of the transmission path from the loudspeaker to the microphone is performed. Then, the result of the derivation may be inverted, or a pattern of the derivation may be modeled and applied to the far-end signal.
Some conventional acoustic echo cancellation systems utilize two microphones. One of the key disadvantages with conventional acoustic echo cancellers is that in order to implement an adaptive filter between a client and each of the two microphones, two acoustic echo cancellers are needed.
Various embodiments of the present invention overcome or substantially alleviate prior problems associated with acoustic echo cancellation. Instead of cancelling between a loudspeaker and each of two microphones, a prediction between the two microphones is performed and a noise suppressor placed between the two microphones. Additionally, embodiments of the present invention do not require knowledge of a far-end signal being played through a loudspeaker (e.g., strength and magnitude), only a direction that the far-end signal is coming from. Because the loudspeaker is typically in a fixed location relative to the two microphones, this direction is known and a communication device may be appropriately calibrated prior to actual operation.
In exemplary embodiments, primary and secondary acoustic signals are received by primary and secondary microphones of the communication device. Because a loudspeaker may provide audio that may be picked up by the primary and secondary microphones, the acoustic signals may include loudspeaker leakage. A null coefficient is then adaptively determined for each subband. At least one adaptation constraint may be applied by a null module to determine the null coefficient. The adaptation constraint may comprise a frequency range constraint or a far-end signal energy constraint, for example. The null coefficient is applied to the secondary acoustic signal to generate a coefficient-modified signal.
The coefficient-modified signal may be subtracted from the primary acoustic signal. In some embodiments, a masked acoustic signal may be generated based, at least in part, on the modified primary signal, by an adder or masking module. The masked acoustic signal may comprise reduced or no echo. In some embodiments, the masked acoustic signal may also comprise noise suppression masking. The masked acoustic signal may then be output.
The present invention provides exemplary systems and methods for acoustic echo cancellation (AEC). Various embodiments place a null in a direction of the loudspeaker, thus canceling a signal received from the loudspeaker. The direction of the null may be generated by a plurality of microphones prior to transmission of a signal back to a far-end. Embodiments of the present invention may be applied to both 2-channel and 3-channel AEC systems. In the 2-channel embodiment, the system utilizes a two-microphone system. In the 3-channel embodiment, the system utilizes the 2-channel AEC system and also has knowledge of an additional channel associated with a loudspeaker signal.
Exemplary embodiments are configured to prevent the loudspeaker signal from leaking through the two microphones to the far-end, such that the far-end does not perceive an echo. These embodiments can perform AEC without having access to far-end signal information. In further embodiments, optimal near-end speech preservation may be achieved if the far-end signal is utilized. While the following description will focus on a two-microphone system, alternative embodiments may utilize any number of microphones in a microphone array.
As such, various embodiments of the present invention do not require knowledge of a far-end signal being played through a loudspeaker (e.g., strength and magnitude), only a direction of the far-end signal. Because the loudspeaker is typically in a fixed location relative to the two microphones, this general range of direction from the loudspeaker and the region of speech may be easily determined. As a result, in some embodiments, a communication device may be calibrated prior to practical operation. In one example, this general range of direction from the loudspeaker may be considered a blackout region where the null should not be placed.
Embodiments of the present invention may be practiced on any device that is configured to receive audio such as, but not limited to, cellular phones, phone handsets, headsets, and conferencing systems. While some embodiments of the present invention will be described in reference to operation on a speakerphone, the present invention may be practiced on any audio device.
Referring to
While the microphones 106 and 108 receive sound (i.e., acoustic signals) from the acoustic source 102, the microphones 106 and 108 also pick up noise 110 in the near-end environment 100. Although the noise 110 is shown coming from a single location in
Some embodiments of the present invention utilize level differences (e.g., energy differences) between the acoustic signals received by the two microphones 106 and 108. Because the primary microphone 106 is closer to the acoustic source 102 than the secondary microphone 108, the intensity level may be higher for the primary microphone 106 resulting in a larger energy level during a speech/voice segment, for example.
The level difference may then be used to discriminate speech and noise in the time-frequency domain. Further embodiments may use a combination of energy level differences and time delays to discriminate speech. Based on binaural cue decoding, speech signal extraction or speech enhancement may be performed.
An acoustic signal (e.g., comprising speech) from a far-end environment 112 may be received via a communication network 114 by the communication device 104. The received acoustic signal may then be provided to the near-end environment 100 via a loudspeaker 116 associated with the communication device 104. To attenuate or otherwise reduce leakage of a signal from the loudspeaker 116 into the microphones 106 and 108, a directivity pattern may be generated to emphasize signals received by the microphones 106 and 108 from the acoustic source 102 and generate a null in the direction of the loudspeaker 116.
Referring now to
The exemplary receiver 200 is an acoustic sensor configured to receive a far-end signal from the network 114. In some embodiments, the receiver 200 may comprise an antenna device. The received far-end signal may then be forwarded to the audio processing system 204.
The audio processing system 204 is configured to receive the acoustic signals from the primary and secondary microphones 106 and 108 (e.g., primary and secondary acoustic sensors) and process the acoustic signals. As previously discussed, the primary and secondary microphones 106 and 108, respectively, are spaced a distance apart in order to allow for energy level differences and phase differences between them. After reception by the microphones 106 and 108, the acoustic signals may be converted into electric signals (i.e., a primary electric signal and a secondary electric signal). The electric signals may themselves be converted by an analog-to-digital converter (not shown) into digital signals for processing in accordance with some embodiments. In order to differentiate the acoustic signals, the acoustic signal received by the primary microphone 106 is herein referred to as the primary acoustic signal, while the acoustic signal received by the secondary microphone 108 is herein referred to as the secondary acoustic signal. It should be noted that embodiments of the present invention may be practiced utilizing a plurality of microphones.
The audio processing system 204 may also be configured to receive the far-end signal from the network 114 and provide the far-end signal to the output device 206 discussed below. In various embodiments, the audio processing system 204 may attenuate noise within the far-end signal and/or emphasize speech prior to providing the processed far-end signal to the output device 206.
The output device 206 is any device which provides an audio output to a listener (e.g., the acoustic source 102). For example, the output device 206 may comprise the loudspeaker 116, an earpiece of a headset, or handset on the communication device 104, for example.
In operation, the acoustic signals received from the primary and secondary microphones 106 and 108 are converted to electric signals and processed through a frequency analysis module 302. In one embodiment, the frequency analysis module 302 takes the acoustic signals and mimics the frequency analysis of the cochlea (i.e., cochlear domain) simulated by a filter bank. In one example, the frequency analysis module 302 separates the acoustic signals into frequency bands. Alternatively, other filters such as short-time Fourier transform (STFT), Fast Fourier Transform, Fast Cochlea transform, sub-band filter banks, modulated complex lapped transforms, cochlear models, a gamma-tone filter bank, wavelets, or any generalized spectral analysis filter/method, can be used for the frequency analysis and synthesis. Because most sounds (e.g., acoustic signals) are complex and comprise more than one frequency, a sub-band analysis on the acoustic signal may be performed to determine what individual frequencies are present in the acoustic signal during a frame (e.g., a predetermined period of time). According to one embodiment, the frame is 5-10 ms long. Alternative embodiments may utilize other frame lengths.
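As a concrete stand-in for the filter bank, a short-time Fourier transform can produce the subband signals. The sketch below uses 10 ms frames at an assumed 8 kHz sample rate; the frame length, hop size, and window are illustrative choices, not values stated in the specification.

```python
import numpy as np

def analyze_subbands(x, frame_len=80, hop=40):
    """Split a time-domain signal into overlapping windowed frames and take
    an FFT per frame (a simple STFT stand-in for the cochlear filter bank).
    At fs = 8 kHz, frame_len = 80 samples is a 10 ms frame."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # rfft yields frame_len // 2 + 1 complex subband samples per frame
    return np.fft.rfft(frames, axis=1)

fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 1000 * t)   # one second of a 1 kHz tone
X = analyze_subbands(x)
print(X.shape)                     # → (199, 41)
```

With a 100 Hz bin spacing (fs / frame_len), the tone's energy concentrates in subband index 10, which is how a per-subband analysis reveals which frequencies are present in each frame.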
After frequency analysis, the signals are provided to an acoustic echo cancellation (AEC) engine 304. The AEC engine 304 is configured to reduce echo resulting from loudspeaker leakage through the primary and secondary microphones 106 and 108. The AEC engine 304 is discussed in more detail in connection with
The results of the AEC engine 304 may be provided to a noise suppression system 306 which incorporates AEC engine 304 results with noise suppression. More details on exemplary noise suppression systems 306 may be found in co-pending U.S. patent application Ser. No. 11/825,563 filed Jul. 6, 2007 and entitled “System and Method for Adaptive Intelligent Noise Suppression,” U.S. patent application Ser. No. 11/343,524, filed Jan. 30, 2006 and entitled “System and Method for Utilizing Inter-Microphone Level Differences for Speech Enhancement,” and U.S. patent application Ser. No. 11/699,732 filed Jan. 29, 2007 and entitled “System And Method For Utilizing Omni-Directional Microphones For Speech Enhancement,” all of which are incorporated by reference.
In some embodiments, the results of the AEC engine 304 (i.e., AEC masked signal) may comprise residual echo. As such, exemplary embodiments utilize a blind subband AEC postfilter (BSAP) system (not depicted) to render the residual echo inaudible.
The results of the AEC engine 304, the noise suppression system 306, and optionally, the BSAP system, may then be combined in a masking module 308. Accordingly in exemplary embodiments, gain masks may be applied to an associated frequency band of the primary acoustic signal in the masking module 308.
Next, the post-AEC frequency bands are converted back into time domain from the cochlea domain. The conversion may comprise taking the post-AEC frequency bands and adding together phase shifted signals of the cochlea channels in a frequency synthesis module 310. Once conversion is completed, the synthesized acoustic signal may be output (e.g., forwarded to the communication network 114 and sent to the far-end environment 112).
It should be noted that the system architecture of the audio processing system 204 of
Referring now to
The exemplary AEC engine 300 is configured to use a subband differential array to create a null in an effective direction of the loudspeaker signal with respect to the primary and secondary microphone 106 and 108. In exemplary embodiments, a null module 402 determines and applies a time-varying complex coefficient (i.e., null coefficient) to the secondary microphone channel acoustic signal to generate a coefficient-modified acoustic signal. In exemplary embodiments the complex coefficient may be continuously adapted to minimize residual echo. The result of the null module 402 is then sent to an adder 404 which subtracts the coefficient-modified acoustic signal from the primary microphone channel acoustic signal.
A model for narrow-spaced microphone cross transfer function of exemplary embodiments is that of a fractional shift in time (e.g., positive or negative) with attenuation. This may be justified by physics of sound propagation if a microphone spacing, D, (i.e., between the primary and secondary microphone 106 and 108) obeys an equation D<c/fs, where c is a speed of sound and fs is a sample rate. Those skilled in the art will appreciate that it is not required for the delay of the model for narrow-spaced microphone cross transfer function to be fractional without an integer part.
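The spacing bound D < c/fs can be evaluated directly. For example, at an assumed narrowband telephony sample rate of 8 kHz and a nominal speed of sound of 343 m/s (both illustrative values):

```python
c = 343.0    # approximate speed of sound in air, m/s
fs = 8000.0  # assumed narrowband sample rate, Hz

# The fractional-shift model is justified when the spacing D obeys D < c / fs
max_spacing = c / fs   # just under 4.3 cm at these values
print(max_spacing)
```

So at an 8 kHz rate the microphones would need to be closer than roughly 4.3 cm for the cross transfer function to be well modeled as a sub-sample delay with attenuation; higher sample rates tighten the bound proportionally.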
Both a gain and a phase of a cross-microphone transfer function may be dependent on a signal frequency and an angle of arrival theta, θ, as follows:
For a given frequency subband with index n, the transfer function may be approximated based on a magnitude and phase at a subband center frequency as follows:
In various embodiments, higher-order approximations to the fractional delay may be used as well. The above equation relates the primary acoustic signal with angle of arrival theta in the primary microphone 106 to the secondary acoustic signal in the secondary microphone 108 in the frequency subband with index n as follows:
In order to subtractively cancel the primary acoustic signal with angle of arrival theta, the secondary acoustic signal multiplied by an inverse cross-transfer function may be subtracted as mathematically described by:
X1,n(θ) − X2,n(θ)·an(θ)^−1·e^−iφn(θ)
In embodiments where the signal is a pure sinusoid at the subband's center frequency, the cancellation may remove approximately the entire signal picked up by the microphones 106 and 108 from the loudspeaker 116. The same may hold true for all signals if the angle of arrival theta is equal to 0 or 180 degrees since a fractional shift at these angles is zero.
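This exactness for a tone at the subband center frequency can be checked numerically. The sketch below represents the two microphone signals as complex exponentials, so that a single complex multiplier (inverse gain times a compensating phase) cancels the secondary signal against the primary; the attenuation, delay, and frequency values are illustrative assumptions.

```python
import numpy as np

fs, f0 = 8000, 1000                 # assumed sample rate and subband center frequency
t = np.arange(0, 0.01, 1 / fs)
a, tau = 0.8, 2.5e-4                # assumed attenuation and fractional delay (s)

# Complex (analytic) subband signals at the center frequency f0
z1 = np.exp(1j * 2 * np.pi * f0 * t)              # primary microphone
z2 = a * np.exp(1j * 2 * np.pi * f0 * (t - tau))  # attenuated, delayed secondary

# Single complex multiplier: inverse of the cross-transfer function at f0
H_inv = (1 / a) * np.exp(1j * 2 * np.pi * f0 * tau)

residual = z1 - z2 * H_inv          # subtractive cancellation
print(bool(np.max(np.abs(residual)) < 1e-10))  # → True
```

For a pure tone the multiply-and-subtract is exact; for broadband content within a subband it is an approximation at the subband center frequency, as the surrounding text notes.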
As an inverse of an approximate inter-channel cross-transfer function in each subband is a single complex multiplier, it may be computed using zero-lag correlations as follows:
In exemplary embodiments, the null module 402 may perform null coefficient adaptation. A frame-based recursive approximation may be used, where m is a running frame and N is a frame length as follows:
The result of this computation is an estimate of the time-varying inverse of the cross-transfer function between the primary and secondary microphone 106 and 108. This estimate may then be multiplied with frame samples of the secondary microphone 108 in the null module 402. The result may then be subtracted from the frame samples of the primary microphone 106 in the adder 404 resulting in the following exemplary equations:
y(k) = x1(k) − x2(k)·Hn^−1(m).
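Since the correlation equations themselves are not reproduced above, the following is a sketch of one standard form of such an estimator: the inverse cross-transfer coefficient computed as a ratio of recursively smoothed zero-lag correlations, updated once per frame. The function name, smoothing constant, and simulation values are illustrative assumptions.

```python
import numpy as np

def update_null_coeff(x1, x2, r12, r22, lam=0.9):
    """One frame of null-coefficient adaptation in a single subband.
    x1, x2: complex subband frames from the primary and secondary
    microphones; r12, r22: running zero-lag correlations; lam is an
    assumed recursive smoothing constant."""
    r12 = lam * r12 + (1 - lam) * np.vdot(x2, x1)  # accumulates sum(x1 * conj(x2))
    r22 = lam * r22 + (1 - lam) * np.vdot(x2, x2)  # accumulates sum(|x2|^2)
    return r12 / r22, r12, r22                     # inverse cross-transfer estimate

rng = np.random.default_rng(0)
true_H = 0.8 * np.exp(1j * 0.3)     # simulated cross-transfer: x2 = true_H * x1
r12 = r22 = 1e-6                    # small nonzero start to avoid division by zero
for m in range(50):                 # 50 frames of length N = 64
    x1 = rng.standard_normal(64) + 1j * rng.standard_normal(64)
    x2 = true_H * x1
    H_inv, r12, r22 = update_null_coeff(x1, x2, r12, r22)
    y = x1 - x2 * H_inv             # echo-cancelled output frame
print(bool(abs(H_inv - 1 / true_H) < 1e-6))  # → True
```

The ratio converges to the inverse of the cross-transfer function, so multiplying the secondary frame by it and subtracting from the primary frame drives the residual toward zero, matching y(k) = x1(k) − x2(k)·Hn^−1(m).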
In order to prevent cancellation of a near-end speech signal from the acoustic source 102, various adaptation constraints may be applied by the null module 402. That is, the null module 402 attempts to constrain the null coefficient such that the null coefficient cancels the echo with little or no effect on the near-end speech.
One such constraint comprises a frequency range constraint. Loudspeakers associated with conventional mobile handsets may comprise limited capabilities for transmitting low frequency content. For example, in a speakerphone mode, a low corner frequency may be 600 Hz. As such, echo cancellation may be constrained to a range of subbands covering the frequency range of the loudspeaker. Any subbands outside of the loudspeaker frequency range are not subject to echo cancellation.
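A minimal sketch of the frequency range constraint follows, assuming a 40-band analysis at an 8 kHz sample rate. The source gives only the 600 Hz low corner; the 3400 Hz upper edge and the band layout are assumptions for illustration.

```python
import numpy as np

# Assumed subband center frequencies for a 40-band analysis at fs = 8 kHz
centers = np.linspace(100, 4000, 40)

low_corner, high_corner = 600.0, 3400.0   # assumed loudspeaker passband

# Coefficient adaptation is permitted only inside the loudspeaker's range;
# subbands outside it keep their null coefficients fixed.
adapt_allowed = (centers >= low_corner) & (centers <= high_corner)
print(int(adapt_allowed.sum()))           # → 29
```

Freezing the out-of-range subbands avoids adapting the null on content the loudspeaker cannot have produced, which would otherwise distort near-end speech in those bands.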
Some embodiments of the present invention apply AEC based on subbands rather than globally. This allows the system to preserve the ability to adapt in regions of the spectrum where a far-end signal is faint with respect to the near-end signal, and vice-versa. As a result, calibration may be performed prior to implementation in the communication device 104. Calibration may comprise playing pink noise in a sound room through an industrial design loudspeaker. A resulting primary acoustic signal is then fed through a frequency analysis module. Average subband frame energies may then be stored in a communication device configuration file (e.g., stored in the null module 402).
In various embodiments, the AEC is not applied when there is no far-end signal present. In one example, the far-end is never entirely quiet because of quantization noise in the encoding process and comfort noise being inserted by typical codecs during times of silence. Further, what may qualify as “no signal” at one volume setting may not be negligible at another. The application of the AEC based on subbands may avoid global errors that attenuate desired speech or emphasize noise.
Once the device is in operation, the null module 402 may determine whether the observed predictor between the two microphones is in a near-end acoustic source class or a loudspeaker class. During calibration, an estimate of the leakage from the loudspeaker to the two microphones may be determined. In one example, calibration may occur offline prior to practical use of the communication device. In another example, calibration may occur online (e.g., the coefficients may be learned online). While the device is in operation, a leakage curve may be utilized by the null module 402. Based on an applied energy, an expected leakage in the two microphones may be determined. If the signal observed in the secondary microphone 108 is above the curve, then an assumption may be made that the signal is mainly from the acoustic source 102.
For example, if a primary microphone energy exceeds the average subband frame energy level, the primary microphone energy is higher than expected given a current far-end energy. In exemplary embodiments, this signifies that most of the signal in the given subband is not due to echo. As such, the adaptation rule for this predictor between the two microphones is based on location. If the signal is in the loudspeaker class, AEC may occur in order to cancel echo. If the signal is in the near-end acoustic source class, AEC may not occur. There may also be a variant where the coefficient is frozen, and a variant where the coefficient may fade, essentially driving its magnitude to zero.
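The leakage-curve decision and the freeze/fade variants might be sketched as follows. The function name, the scalar leakage curve, and the decay factor are illustrative assumptions; in the specification the expected leakage is calibrated per subband rather than being a single number.

```python
import numpy as np

def classify_and_update(e_observed, e_farend, leak_curve, H_inv, decay=0.95):
    """Decide whether a subband frame is dominated by loudspeaker echo or
    near-end speech, using a calibrated leakage estimate (expected microphone
    energy per unit of far-end energy), and update the null coefficient.
    All names and the decay factor are assumed for illustration."""
    expected_leak = leak_curve * e_farend
    if e_observed > expected_leak:
        # Above the curve: mostly near-end speech. Freeze the coefficient's
        # angle and let its magnitude fade toward zero (the "fade" variant).
        return H_inv * decay, False      # no adaptation this frame
    return H_inv, True                   # loudspeaker class: adapt as usual

H = 0.8 * np.exp(1j * 0.3)
H_new, adapt = classify_and_update(e_observed=5.0, e_farend=1.0,
                                   leak_curve=2.0, H_inv=H)
print(adapt, round(abs(H_new), 3))       # → False 0.76
```

Multiplying by a real decay factor leaves the coefficient's angle untouched while shrinking its magnitude, which matches the described variant of fading the coefficient rather than zeroing it abruptly.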
In order to compensate for delay in an echo path, the far-end energy signal (from Weighted Energy Module 406 in
In order to prevent cancellation of a near-end speaker, another calibration method may be performed. In this calibration method, null coefficients may be computed for Ne echo paths by playing pink noise through the loudspeaker and computing equation (1) for each null coefficient. This may also be performed for a total of five head and torso simulator (HATS) pink noise recordings (e.g., one for a nominal near-end loudspeaker position and four for angles +/−30 degrees up and down from the nominal position). The result is 5+Ne null coefficients, where Ne of the coefficients are in an echo class and five are in a near-end class.
As a difference metric between two null-coefficients, the following equation may be used:
d=|log(a1)−log(a2)|
An observation may then be assigned to one of the two classes (i.e., echo class or near-end class). In exemplary embodiments, adaptation in a particular frequency subband may be allowed if a current null coefficient update is closer to the echo class than it is to the near-end class as determined by the AEC engine 300.
In some embodiments, various adaptation criteria may be satisfied in order for the AEC engine 300 to allow a particular subband to update related coefficients. For example, where one or more adaptation criteria are not satisfied, the current coefficient angle may be frozen and its magnitude may be subject to an exponential decay. This may minimize signal alterations during single talk in the near-end environment 100. In other embodiments, both the current coefficient angle and the magnitude may be frozen.
Once the coefficient-modified acoustic signal is subtracted from the primary acoustic signal, the resulting signal is forwarded to the frequency synthesis module 310. In various embodiments, the resulting signal is forwarded to the masking module 308 prior to resynthesis. The frequency synthesis module 310 converts the resulting signal back into time domain from the cochlea domain. Once conversion is complete, the synthesized acoustic signal may be output.
In some embodiments, a switch 410 may be provided. Thus, if AEC is being performed, then the resulting signal may be fed back into the null module 402. If AEC is not being performed, the switch 410 may be closed.
It should be noted that the system architecture of the audio processing system 400 of
Referring now to
The acoustic signals are then converted to electric signals and processed through the frequency analysis module 302 to obtain a primary (microphone channel) acoustic signal and a secondary (microphone channel) acoustic signal in step 504. In one embodiment, the frequency analysis module 302 takes the acoustic signals and mimics the frequency analysis of a cochlea (i.e., cochlear domain) simulated by a filter bank. The result comprises frequency subbands.
In step 506, a null coefficient per subband is determined. As discussed, the exemplary AEC engine 300 is configured to use a subband differential array to create a null in an effective direction of the loudspeaker signal with respect to the primary and secondary microphone 106 and 108. In exemplary embodiments, the null module 402 determines a time-varying complex coefficient (i.e., null coefficient). In some embodiments, this complex coefficient may be continuously adapted to minimize residual echo.
The null coefficient is then applied to the secondary acoustic signal per subband in step 508 to generate a coefficient-modified acoustic signal. The coefficient-modified acoustic signal is then sent to the adder 404 which subtracts the coefficient-modified acoustic signal from the primary acoustic signal per subband in step 510. In an embodiment comprising a noise suppression system, the masking module 308 may apply noise suppression masking to the primary acoustic signal with the result comprising masked frequency bands.
The masked frequency bands may then be output in step 512. In accordance with exemplary embodiments, the masked frequency bands are converted back into time domain from the cochlea domain. The conversion may comprise taking the masked frequency bands and adding together phase shifted signals of the cochlea channels in the frequency synthesis module 310. Once conversion is completed, the synthesized acoustic signal may be output (e.g., forwarded to the communication network 114 and sent to the far-end environment 112).
The above-described modules can be comprised of instructions that are stored on storage media. The instructions can be retrieved and executed by the processor 202. Some examples of instructions include software, program code, and firmware. Some examples of storage media comprise memory devices and integrated circuits. The instructions are operational when executed by the processor 202 to direct the processor 202 to operate in accordance with embodiments of the present invention. Those skilled in the art are familiar with instructions, processor(s), and storage media.
The present invention is described above with reference to exemplary embodiments. It will be apparent to those skilled in the art that various modifications may be made and other embodiments can be used without departing from the broader scope of the present invention. For example, embodiments of the present invention may be applied to any system (e.g., non speech enhancement system) as long as a noise power spectrum estimate is available. Therefore, these and other variations upon the exemplary embodiments are intended to be covered by the present invention.
The present application claims the benefit of U.S. Provisional Patent Application No. 60/903,066, filed Feb. 23, 2007, entitled “Null Processing for AEC” and U.S. Provisional Patent Application No. 60/962,198, filed Jul. 26, 2007, entitled “2-Channel and 3-Channel Acoustic Echo Cancellation,” both of which are hereby incorporated by reference. The present application is also related to U.S. patent application Ser. No. 11/825,563 filed Jul. 6, 2007 and entitled “System and Method for Adaptive Intelligent Noise Suppression,” U.S. patent application Ser. No. 11/343,524, filed Jan. 30, 2006 and entitled “System and Method for Utilizing Inter-Microphone Level Differences for Speech Enhancement,” U.S. patent application Ser. No. 11/699,732 filed Jan. 29, 2007 and entitled “System And Method For Utilizing Omni-Directional Microphones For Speech Enhancement,” co-pending U.S. patent application Ser. No. 12/004,896 filed Dec. 21, 2007, entitled “System and Method for Blind Subband Acoustic Echo Cancellation Postfiltering,” and co-pending U.S. patent application Ser. No. 12/004,788 filed Dec. 21, 2007, entitled “System and Method for Providing Voice Equalization,” all of which are herein incorporated by reference.
20050152559 | Gierl et al. | Jul 2005 | A1 |
20050185813 | Sinclair et al. | Aug 2005 | A1 |
20050213778 | Buck et al. | Sep 2005 | A1 |
20050216259 | Watts | Sep 2005 | A1 |
20050228518 | Watts | Oct 2005 | A1 |
20050276423 | Aubauer et al. | Dec 2005 | A1 |
20050288923 | Kok | Dec 2005 | A1 |
20060072768 | Schwartz et al. | Apr 2006 | A1 |
20060074646 | Alves et al. | Apr 2006 | A1 |
20060098809 | Nongpiur et al. | May 2006 | A1 |
20060120537 | Burnett et al. | Jun 2006 | A1 |
20060133621 | Chen et al. | Jun 2006 | A1 |
20060149535 | Choi et al. | Jul 2006 | A1 |
20060160581 | Beaugeant et al. | Jul 2006 | A1 |
20060184363 | McCree et al. | Aug 2006 | A1 |
20060198542 | Benjelloun Touimi et al. | Sep 2006 | A1 |
20060222184 | Buck et al. | Oct 2006 | A1 |
20070021958 | Visser et al. | Jan 2007 | A1 |
20070027685 | Arakawa et al. | Feb 2007 | A1 |
20070033020 | (Kelleher) Francois et al. | Feb 2007 | A1 |
20070067166 | Pan et al. | Mar 2007 | A1 |
20070078649 | Hetherington et al. | Apr 2007 | A1 |
20070094031 | Chen | Apr 2007 | A1 |
20070100612 | Ekstrand et al. | May 2007 | A1 |
20070116300 | Chen | May 2007 | A1 |
20070150268 | Acero et al. | Jun 2007 | A1 |
20070154031 | Avendano et al. | Jul 2007 | A1 |
20070165879 | Deng et al. | Jul 2007 | A1 |
20070195968 | Jaber | Aug 2007 | A1 |
20070230712 | Belt et al. | Oct 2007 | A1 |
20070276656 | Solbach et al. | Nov 2007 | A1 |
20080019548 | Avendano | Jan 2008 | A1 |
20080033723 | Jang et al. | Feb 2008 | A1 |
20080140391 | Yen et al. | Jun 2008 | A1 |
20080201138 | Visser et al. | Aug 2008 | A1 |
20080228478 | Hetherington et al. | Sep 2008 | A1 |
20080260175 | Elko | Oct 2008 | A1 |
20090012783 | Klein | Jan 2009 | A1 |
20090012786 | Zhang et al. | Jan 2009 | A1 |
20090129610 | Kim et al. | May 2009 | A1 |
20090220107 | Every et al. | Sep 2009 | A1 |
20090238373 | Klein | Sep 2009 | A1 |
20090253418 | Makinen | Oct 2009 | A1 |
20090271187 | Yen et al. | Oct 2009 | A1 |
20090323982 | Solbach et al. | Dec 2009 | A1 |
20100094643 | Avendano et al. | Apr 2010 | A1 |
20100278352 | Petit et al. | Nov 2010 | A1 |
20110178800 | Watts | Jul 2011 | A1 |
20120121096 | Chen et al. | May 2012 | A1 |
20120140917 | Nicholson et al. | Jun 2012 | A1 |
Number | Date | Country |
---|---|---|
2005172865 | Jul 2005 | JP
62110349 | May 1987 | JP |
4184400 | Jul 1992 | JP |
5053587 | Mar 1993 | JP |
6269083 | Sep 1994 | JP |
10-313497 | Nov 1998 | JP |
11-249693 | Sep 1999 | JP |
2004053895 | Feb 2004 | JP |
2004531767 | Oct 2004 | JP |
2004533155 | Oct 2004 | JP |
2005110127 | Apr 2005 | JP |
2005148274 | Jun 2005 | JP |
2005518118 | Jun 2005 | JP |
2005195955 | Jul 2005 | JP |
0174118 | Oct 2001 | WO |
02080362 | Oct 2002 | WO |
02103676 | Dec 2002 | WO |
03043374 | May 2003 | WO |
03069499 | Aug 2003 | WO |
2004010415 | Jan 2004 | WO |
2007081916 | Jul 2007 | WO |
2007140003 | Dec 2007 | WO |
2010005493 | Jan 2010 | WO |
Number | Date | Country | |
---|---|---|---|
60903066 | Feb 2007 | US | |
60962198 | Jul 2007 | US |