Presently, there are numerous methods for reducing background noise in speech recordings made in adverse environments. One such method is to use two or more microphones on an audio device. These microphones are placed at known positions and allow the device to determine differences between the microphone signals. For example, due to the spatial separation between the microphones, the difference in the times of arrival of the signals from a speech source to the microphones may be utilized to localize the speech source. Once localized, the signals can be spatially filtered to suppress the noise originating from other directions.
Beamforming techniques utilizing a linear array of microphones may create an “acoustic beam” in the direction of the source and thus can be used as spatial filters. This method, however, suffers from several disadvantages. First, it is necessary to identify the direction of the speech source. The time delay, however, is difficult to estimate due to such factors as reverberation, which may create ambiguous or incorrect information. Second, the number of sensors needed to achieve adequate spatial filtering is generally large (e.g., more than two). Additionally, if the microphone array is used on a small device, such as a cellular phone, beamforming is more difficult at lower frequencies because the distance between the microphones of the array is small compared to the wavelength.
Spatial separation and directivity of the microphones provide not only arrival-time differences but also inter-microphone level differences (ILD) that can be more easily identified than time differences in some applications. Therefore, there is a need for a system and method for utilizing the ILD for noise suppression and speech enhancement.
Embodiments of the present invention overcome or substantially alleviate prior problems associated with noise suppression and speech enhancement. In general, systems and methods for utilizing inter-microphone level differences (ILD) to attenuate noise and enhance speech are provided. In exemplary embodiments, the ILD is based on energy level differences.
In exemplary embodiments, energy estimates of acoustic signals received from a primary microphone and a secondary microphone are determined for each channel of a cochlea frequency analyzer for each time frame. The energy estimates may be based on a current acoustic signal and an energy estimate of a previous frame. Based on these energy estimates, the ILD may be calculated.
The ILD information is used to determine time-frequency components where speech is likely to be present and to derive a noise estimate from the primary microphone acoustic signal. The energy and noise estimates allow a filter estimate to be derived. In one embodiment, a noise estimate of the acoustic signal from the primary microphone is determined based on minimum statistics of the current energy estimate of the primary microphone signal and a noise estimate of the previous frame. In some embodiments, the derived filter estimate may be smoothed to reduce acoustic artifacts.
The filter estimate is then applied to the cochlea representation of the acoustic signal from the primary microphone to generate a speech estimate. The speech estimate is then converted into the time domain for output. The conversion may be performed by applying an inverse frequency transformation to the speech estimate.
FIGS. 1a and 1b are diagrams of two environments in which embodiments of the present invention may be practiced;
The present invention provides exemplary systems and methods for recording and utilizing inter-microphone level differences to identify time-frequency regions dominated by speech in order to attenuate background noise and far-field distractors. Embodiments of the present invention may be practiced on any communication device that is configured to receive sound such as, but not limited to, cellular phones, phone handsets, headsets, and conferencing systems. Advantageously, exemplary embodiments are configured to provide improved noise suppression on small devices where prior art microphone arrays will not function well. While embodiments of the present invention will be described in reference to operation on a cellular phone, the present invention may be practiced on any communication device.
Referring to
While the microphones 106 and 108 receive sound information from the speech source 102, the microphones 106 and 108 also pick up noise 110. While the noise 110 is shown coming from a single location, the noise may comprise any sounds from one or more locations different than the speech and may include reverberations and echoes.
Embodiments of the present invention exploit level differences (e.g., energy differences) between the two microphones 106 and 108 independent of how the level differences are obtained. In
The level differences may then be used to discriminate speech and noise in the time-frequency domain. Further embodiments may use a combination of energy level difference and time delays to discriminate speech. Based on binaural cue decoding, speech signal extraction or speech enhancement may be performed.
Referring now to
As previously discussed, the primary and secondary microphones 106 and 108, respectively, are spaced a distance apart in order to allow for an energy level difference between them. It should be noted that the microphones 106 and 108 may comprise any type of acoustic receiving device or sensor, and may be omni-directional, unidirectional, or have other directional characteristics or polar patterns. Once received by the microphones 106 and 108, the acoustic signals are converted by an analog-to-digital converter (not shown) into digital signals for processing in accordance with some embodiments. In order to differentiate the acoustic signals, the acoustic signal received by the primary microphone 106 is herein referred to as the primary acoustic signal, while the acoustic signal received by the secondary microphone 108 is herein referred to as the secondary acoustic signal.
The output device 206 is any device which provides an audio output to the user. For example, the output device 206 may be an earpiece of a headset or handset, or a speaker on a conferencing device.
Once the frequencies are determined, the signals are forwarded to an energy module 304 which computes energy level estimates during an interval of time. The energy estimate may be based on the bandwidth of the cochlea channel and the acoustic signal. The exemplary energy module 304 is a component which, in some embodiments, can be represented mathematically. Thus, the energy level of the acoustic signal received at the primary microphone 106 may be approximated, in one embodiment, by the following equation
E1(t,ω)=λE|X1(t,ω)|2+(1−λE)E1(t−1,ω)
where λE is a number between zero and one that determines an averaging time constant, X1(t,ω) is the acoustic signal of the primary microphone 106 in the cochlea domain, ω represents the frequency, and t represents time. As shown, a present energy level of the primary microphone 106, E1(t,ω), is dependent upon a previous energy level of the primary microphone 106, E1(t−1,ω). In some other embodiments, the value of λE can be different for different frequency channels. Given a desired time constant T (e.g., 4 ms) and the sampling frequency ƒs (e.g., 16 kHz), the value of λE can be approximated as
The energy level of the acoustic signal received from the secondary microphone 108 may be approximated by a similar exemplary equation
E2(t,ω)=λE|X2(t,ω)|2+(1−λE)E2(t−1,ω)
where X2(t,ω) is the acoustic signal of the secondary microphone 108 in the cochlea domain. Similar to the calculation of the energy level for the primary microphone 106, the energy level for the secondary microphone 108, E2(t,ω), is dependent upon a previous energy level of the secondary microphone 108, E2(t−1,ω).
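The two energy recursions above can be sketched as a per-frame leaky integrator. The mapping from the time constant T and sampling frequency ƒs to λE is an assumption here (a standard first-order approximation), since the document's own expression for λE is not reproduced in the text:

```python
import math

def update_energy(prev_energy, x, lam_e):
    """One frame of the recursion E(t) = lam_e*|x(t)|^2 + (1 - lam_e)*E(t-1)."""
    return lam_e * abs(x) ** 2 + (1.0 - lam_e) * prev_energy

def lam_e_from_time_constant(T, fs):
    """Assumed mapping: a first-order smoother with time constant T
    at sampling rate fs has lam_e = 1 - exp(-1 / (T * fs))."""
    return 1.0 - math.exp(-1.0 / (T * fs))

# Track the energies of both microphone channels frame by frame.
lam_e = lam_e_from_time_constant(T=0.004, fs=16000)  # T = 4 ms, fs = 16 kHz
e1 = e2 = 0.0
for x1, x2 in [(0.5, 0.1), (0.4, 0.1), (0.6, 0.2)]:
    e1 = update_energy(e1, x1, lam_e)
    e2 = update_energy(e2, x2, lam_e)
```

With a constant input the estimate converges to |x|², so λE controls only how quickly the estimate forgets older frames.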
Given the calculated energy levels, an inter-microphone level difference (ILD) may be determined by an ILD module 306. The ILD module 306 is a component which may be approximated mathematically, in one embodiment, as
where E1 is the energy level of the primary microphone 106 and E2 is the energy level of the secondary microphone 108, both of which are obtained from the energy module 304. This equation provides a bounded result between −1 and 1. For example, the ILD goes to 1 when E2 goes to 0, and the ILD goes to −1 when E1 goes to 0. Thus, when the speech source is close to the primary microphone 106 and there is no noise, ILD = 1, but as more noise is added, the ILD will change. Further, as more noise is picked up by both of the microphones 106 and 108, it becomes more difficult to discriminate speech from noise.
The above equation is preferable to an ILD calculated via a ratio of the energy levels, such as
where ILD is not bounded and may go to infinity as the energy level of the primary microphone gets smaller.
In an alternative embodiment, the ILD may be approximated by
Here, the ILD calculation is also bounded between −1 and 1. Therefore, this alternative ILD calculation may be used in one embodiment of the present invention.
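The displayed ILD formulas themselves did not survive in this text. The normalized energy difference below matches every stated property (bounded in [−1, 1], approaching 1 as E2 goes to 0 and −1 as E1 goes to 0), but it is an assumption standing in for the document's exact expression:

```python
def ild(e1, e2, eps=1e-12):
    """Bounded level difference consistent with the stated limits:
    +1 when e2 -> 0, -1 when e1 -> 0, 0 when the energies match.
    eps guards against division by zero when both energies vanish."""
    return (e1 - e2) / (e1 + e2 + eps)

print(ild(1.0, 0.0))  # speech close to the primary microphone: near +1
print(ild(0.5, 0.5))  # equal energy at both microphones: 0
```

Unlike a plain energy ratio, this form cannot blow up as either energy shrinks, which is exactly the advantage the text claims for the bounded calculation.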
According to an exemplary embodiment of the present invention, a Wiener filter is used to suppress noise/enhance speech. In order to derive a Wiener filter estimate, however, specific inputs are required. These inputs comprise a power spectral density of noise and a power spectral density of the source signal. As such, a noise estimate module 308 may be provided to determine a noise estimate for the acoustic signals.
According to exemplary embodiments, the noise estimate module 308 attempts to estimate the noise components in the microphone signals. In exemplary embodiments, the noise estimate is based only on the acoustic signal received by the primary microphone 106. The exemplary noise estimate module 308 is a component which can be approximated mathematically by
N(t,ω)=λI(t,ω)E1(t,ω)+(1−λI(t,ω))min[N(t−1,ω),E1(t,ω)]
according to one embodiment of the present invention. As shown, the noise estimate in this embodiment is based on minimum statistics of a current energy estimate of the primary microphone 106, E1(t,ω), and a noise estimate of a previous time frame, N(t−1,ω). Therefore, the noise estimation is performed efficiently and with low latency.
λI(t,ω) in the above equation is derived from the ILD approximated by the ILD module 306, as
That is, when the ILD is smaller than a threshold value (e.g., threshold = 0.5) above which speech is expected to be present, λI is large, and thus the noise estimate closely tracks the energy at the primary microphone 106. When the ILD starts to rise (e.g., because speech is detected), however, λI decreases and the minimum-statistics term dominates. As a result, the noise estimate module 308 slows down the noise estimation process, and the speech energy does not contribute significantly to the final noise estimate. Therefore, exemplary embodiments of the present invention may use a combination of minimum statistics and voice activity detection to determine the noise estimate.
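The noise recursion can be sketched directly from the displayed equation. The document's equation mapping the ILD to λI is not shown here, so the two-level mapping below is a hypothetical stand-in; its only commitment is the behavior stated in the text, namely that the estimate is frozen or slowed down when the ILD increases:

```python
def update_noise(prev_noise, e1, lam_i):
    """The stated recursion:
    N(t) = lam_i*E1(t) + (1 - lam_i)*min(N(t-1), E1(t))."""
    return lam_i * e1 + (1.0 - lam_i) * min(prev_noise, e1)

def lam_i_from_ild(ild_value, threshold=0.5, track=0.2):
    """Hypothetical mapping: track the primary energy while the ILD is
    below the threshold; freeze onto the minimum statistic (lam_i = 0)
    once speech is likely, so speech energy does not leak into N."""
    return track if ild_value < threshold else 0.0

n = 1.0
# Noise-only region (low ILD): the estimate moves toward the primary energy.
n = update_noise(n, 0.5, lam_i_from_ild(0.1))
# Speech region (high ILD): the estimate stays at the running minimum.
n = update_noise(n, 4.0, lam_i_from_ild(0.9))
```

Because only E1 and the previous frame's estimate are needed, the update runs in constant time per frame, which is the low-latency property the text highlights.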
A filter module 310 then derives a filter estimate based on the noise estimate. In one embodiment, the filter is a Wiener filter; alternative embodiments may contemplate other filters. Accordingly, the Wiener filter may be approximated, according to one embodiment, as
where Ps is a power spectral density of speech and Pn is a power spectral density of noise. According to one embodiment, Pn is the noise estimate, N(t,ω), which is calculated by the noise estimate module 308. In an exemplary embodiment, Ps = E1(t,ω) − βN(t,ω), where E1(t,ω) is the energy estimate of the primary microphone 106 from the energy module 304, and N(t,ω) is the noise estimate provided by the noise estimate module 308. Because the noise estimate changes with each frame, the filter estimate will also change with each frame.
β is an over-subtraction term which is a function of the ILD. β compensates for the bias of the minimum statistics of the noise estimate module 308 and forms a perceptual weighting. Because the time constants are different, the bias will differ between portions of pure noise and portions of noise and speech; therefore, in some embodiments, compensation for this bias may be necessary. In exemplary embodiments, β is determined empirically (e.g., 2–3 dB at a large ILD and 6–9 dB at a low ILD).
α in the above exemplary Wiener filter equation is a factor which further suppresses the noise estimate. α can be any positive value. In one embodiment, nonlinear expansion may be obtained by setting α to 2. According to exemplary embodiments, α is determined empirically and applied when a body of
falls below a prescribed value (e.g., 12 dB down from the maximum possible value of W, which is unity).
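The Wiener filter equation itself is not reproduced in this text. The sketch below uses the standard form W = Ps/(Ps + Pn) with Ps = E1 − βN, and applies α as an extra exponent once W drops below the prescribed value (12 dB below unity), all as described in the surrounding paragraphs; note that β here is a linear factor, whereas the text quotes it in dB:

```python
def wiener_gain(e1, noise, beta=2.0, alpha=2.0, knee_db=-12.0):
    """Sketch of the described gain: Ps = E1 - beta*N, W = Ps/(Ps + Pn),
    with alpha further suppressing gains below the knee.
    beta is linear here; the text quotes 2-9 dB depending on the ILD."""
    ps = max(e1 - beta * noise, 0.0)   # speech PSD estimate, floored at zero
    denom = ps + noise
    w = ps / denom if denom > 0.0 else 0.0
    if w < 10.0 ** (knee_db / 20.0):   # e.g., 12 dB below unity gain
        w = w ** alpha                 # nonlinear expansion of small gains
    return w
```

With no noise the gain is unity (speech passes untouched); with no speech it collapses to zero, and α steepens the roll-off between the two regimes.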
Because the Wiener filter estimation may change quickly (e.g., from one frame to the next frame) and noise and speech estimates can vary greatly between each frame, application of the Wiener filter estimate, as is, may result in artifacts (e.g., discontinuities, blips, transients, etc.). Therefore, an optional filter smoothing module 312 is provided to smooth the Wiener filter estimate applied to the acoustic signals as a function of time. In one embodiment, the filter smoothing module 312 may be mathematically approximated as
M(t,ω)=λs(t,ω)W(t,ω)+(1−λs(t,ω))M(t−1,ω),
where λs is a function of the Wiener filter estimate and the primary microphone energy, E1.
As shown, the filter smoothing module 312 at time (t) smooths the Wiener filter estimate using the value of the smoothed Wiener filter estimate from the previous frame at time (t−1). To respond quickly when the acoustic signal changes rapidly, the filter smoothing module 312 performs less smoothing on quickly changing signals and more smoothing on slowly changing signals. This is accomplished by varying the value of λs according to a weighted first-order derivative of E1 with respect to time: if the derivative and the energy change are large, λs is set to a large value; if the derivative is small, λs is set to a smaller value.
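The smoothing equation above is a first-order recursion in the gain. The exact weighting of the energy derivative that produces λs is not given in the text, so the mapping below is a hypothetical stand-in with the described behavior (large relative energy change yields λs near 1, i.e., little smoothing):

```python
def smooth_gain(prev_m, w, lam_s):
    """M(t) = lam_s*W(t) + (1 - lam_s)*M(t-1)."""
    return lam_s * w + (1.0 - lam_s) * prev_m

def lam_s_from_energy(e_now, e_prev, slope=4.0, eps=1e-12):
    """Hypothetical mapping of the first-order derivative of E1:
    a large relative change yields lam_s near 1 (track the new gain),
    a small change yields a small lam_s (smooth heavily)."""
    rel_change = abs(e_now - e_prev) / (e_now + e_prev + eps)
    return min(1.0, slope * rel_change)

m = 0.0
for w, e_now, e_prev in [(0.9, 4.0, 0.5), (0.8, 4.1, 4.0)]:
    m = smooth_gain(m, w, lam_s_from_energy(e_now, e_prev))
```

Tying λs to the energy derivative lets speech onsets pass through the filter immediately while suppressing frame-to-frame gain jitter during steadier segments.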
After smoothing by the filter smoothing module 312, the primary acoustic signal is multiplied by the smoothed Wiener filter estimate to estimate the speech. In the above Wiener filter embodiment, the speech estimate is approximated by S(t,ω) = X1(t,ω)·M(t,ω), where X1 is the acoustic signal from the primary microphone 106. In exemplary embodiments, the speech estimation occurs in a masking module 314.
Next, the speech estimate is converted back into the time domain from the cochlea domain. The conversion comprises applying an inverse frequency transformation of the cochlea channels to the speech estimate, S(t,ω), in a frequency synthesis module 316. Once the conversion is completed, the signal is output to the user.
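The modules above can be chained into a toy per-frame walk-through for a single cochlea channel: energy, ILD, noise estimate, gain, and mask, in that order. All constants, the bounded ILD form, and the ILD-to-λI mapping are illustrative assumptions, and real-valued frame samples stand in for the cochlea-domain signal:

```python
def enhance_channel(x1_frames, x2_frames):
    """Single-channel sketch of the described chain:
    energy -> ILD -> noise estimate -> Wiener-style gain -> mask.
    Constants and the ILD-to-lam_i mapping are illustrative assumptions."""
    lam_e, thresh = 0.3, 0.5
    e1 = e2 = 0.0
    n = 1e-3                                        # small initial noise floor
    out = []
    for x1, x2 in zip(x1_frames, x2_frames):
        e1 = lam_e * x1 * x1 + (1 - lam_e) * e1     # energy module
        e2 = lam_e * x2 * x2 + (1 - lam_e) * e2
        ild = (e1 - e2) / (e1 + e2 + 1e-12)         # assumed bounded ILD
        lam_i = 0.2 if ild < thresh else 0.0        # freeze noise on speech
        n = lam_i * e1 + (1 - lam_i) * min(n, e1)   # noise estimate module
        ps = max(e1 - n, 0.0)
        w = ps / (ps + n + 1e-12)                   # Wiener-style gain
        out.append(w * x1)                          # masking module
    return out
```

Feeding low-level frames at both microphones followed by a high-level burst at the primary only, the gain stays near zero during the noise segment and approaches unity once the ILD marks the burst as speech.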
It should be noted that the system architecture of the audio processing engine 204 of
Referring now to
Frequency analysis is then performed on the acoustic signals by the frequency analysis module 302 (
In step 406, energy estimates for acoustic signals received at both the primary and secondary microphones 106 and 108 are computed. In one embodiment, the energy estimates are determined by an energy module 304 (
Once the energy estimates are calculated, inter-microphone level differences (ILD) are computed in step 408. In one embodiment, the ILD is calculated based on the energy estimates of both the primary and secondary acoustic signals. In exemplary embodiments, the ILD is computed by the ILD module 306 (
Based on the calculated ILD, noise is estimated in step 410. According to embodiments of the present invention, the noise estimate is based only on the acoustic signal received at the primary microphone 106. The noise estimate may be based on the present energy estimate of the acoustic signal from the primary microphone 106 and a previously computed noise estimate. In determining the noise estimate, the noise estimation is frozen or slowed down when the ILD increases, according to exemplary embodiments of the present invention.
In step 412, a filter estimate is computed by the filter module 310 (
In step 418, the speech estimate is converted back to the time domain. Exemplary conversion techniques apply an inverse frequency transformation of the cochlea channels to the speech estimate. Once the speech estimate is converted, the audio signal may be output to the user in step 420. In some embodiments, the digital acoustic signal is converted to an analog signal for output. The output may be via a speaker, an earpiece, or other similar device.
The above-described modules may comprise instructions stored on storage media. The instructions can be retrieved and executed by the processor 202 (
The present invention is described above with reference to exemplary embodiments. It will be apparent to those skilled in the art that various modifications may be made and other embodiments can be used without departing from the broader scope of the present invention. Therefore, these and other variations upon the exemplary embodiments are intended to be covered by the present invention.
This application claims the priority and benefit of U.S. Provisional Patent Application Ser. No. 60/756,826, filed January 5, 2006, and entitled “Inter-Microphone Level Difference Suppressor,” which is incorporated herein by reference.
Number | Date | Country | |
---|---|---|---|
20070154031 A1 | Jul 2007 | US |