The technology described in this patent application relates generally to directional microphone systems. More specifically, the patent application describes a low-noise directional microphone system that is particularly well suited for use in a digital hearing instrument.
Directional microphone systems are known.
Typical directional hearing instruments include a directional microphone system 1, such as the one illustrated in
A low-noise directional microphone system includes a front microphone, a rear microphone, a low-noise phase-shifting circuit and a summation circuit. The front microphone generates a front microphone signal, and the rear microphone generates a rear microphone signal. The low-noise phase-shifting circuit implements a frequency-dependent phase difference between the front microphone signal and the rear microphone signal to create a controlled loss in directional gain and to maintain a maximum level of noise amplification over a pre-determined frequency band. The summation circuit combines the front and rear microphone signals to generate a directional microphone signal.
Referring now to the remaining drawing figures,
Sound is received by the pair of microphones 24, 26, and converted into electrical signals that are coupled to the FMIC 12C and RMIC 12D inputs to the IC 12A. FMIC refers to “front microphone,” and RMIC refers to “rear microphone.” The microphones 24, 26 are biased between a regulated voltage output from the RREG and FREG pins 12B, and the ground nodes FGND 12F, RGND 12G. The regulated voltage output on FREG and RREG is generated internally to the IC 12A by regulator 30.
The tele-coil 28 is a device used in a hearing aid that magnetically couples to a telephone handset and produces an input current that is proportional to the telephone signal. This input current from the tele-coil 28 is coupled into the rear microphone A/D converter 32B on the IC 12A when the switch 76 is connected to the “T” input pin 12E, indicating that the user of the hearing aid is talking on a telephone. The tele-coil 28 is used to prevent acoustic feedback into the system when talking on the telephone.
The volume control potentiometer 14 is coupled to the volume control input 12N of the IC. This variable resistor is used to set the volume sensitivity of the digital hearing aid.
The memory-select toggle switch 16 is coupled between the positive voltage supply VB 18 to the IC 12A and the memory-select input pin 12L. This switch 16 is used to toggle the digital hearing aid system 12 between a series of setup configurations. For example, the device may have been previously programmed for a variety of environmental settings, such as quiet listening, listening to music, a noisy setting, etc. For each of these settings, the system parameters of the IC 12A may have been optimally configured for the particular user. By repeatedly pressing the toggle switch 16, the user may then toggle through the various configurations stored in the EEPROM 44 of the IC 12A.
The battery terminals 12K, 12H of the IC 12A are preferably coupled to a single 1.3 volt zinc-air battery. This battery provides the primary power source for the digital hearing aid system.
The last external component is the speaker 20. This element is coupled to the differential outputs at pins 12J, 12I of the IC 12A, and converts the processed digital input signals from the two microphones 24, 26 into an audible signal for the user of the digital hearing aid system 12.
There are many circuit blocks within the IC 12A. Primary sound processing within the system is carried out by the sound processor 38. A pair of A/D converters 32A, 32B are coupled between the front and rear microphones 24, 26, and the sound processor 38, and convert the analog input signals into the digital domain for digital processing by the sound processor 38. A single D/A converter 48 converts the processed digital signals back into the analog domain for output by the speaker 20. Other system elements include a regulator 30, a volume control A/D 40, an interface/system controller 42, an EEPROM memory 44, a power-on reset circuit 46, and an oscillator/system clock 36.
The sound processor 38 preferably includes a directional processor 50, a pre-filter 52, a wide-band twin detector 54, a band-split filter 56, a plurality of narrow-band channel processing and twin detectors 58A-58D, a summer 60, a post filter 62, a notch filter 64, a volume control circuit 66, an automatic gain control output circuit 68, a peak clipping circuit 70, a squelch circuit 72, and a tone generator 74.
Operationally, the sound processor 38 processes digital sound as follows. Sound signals input to the front and rear microphones 24, 26 are coupled to the front and rear A/D converters 32A, 32B, which are preferably Sigma-Delta modulators followed by decimation filters that convert the analog sound inputs from the two microphones into a digital equivalent. Note that when a user of the digital hearing aid system is talking on the telephone, the rear A/D converter 32B is coupled to the tele-coil input “T” 12E via switch 76. Both of the front and rear A/D converters 32A, 32B are clocked with the output clock signal from the oscillator/system clock 36 (discussed in more detail below). This same output clock signal is also coupled to the sound processor 38 and the D/A converter 48.
The front and rear digital sound signals from the two A/D converters 32A, 32B are coupled to the directional processor and headroom expander 50 of the sound processor 38. The rear A/D converter 32B is coupled to the processor 50 through switch 75. In a first position, the switch 75 couples the digital output of the rear A/D converter 32B to the processor 50, and in a second position, the switch 75 couples the digital output of the rear A/D converter 32B to summation block 71 for the purpose of compensating for occlusion.
Occlusion is the amplification of the user's own voice within the ear canal. The rear microphone can be moved inside the ear canal to receive this unwanted signal created by the occlusion effect. The occlusion effect is usually reduced in these types of systems by putting a mechanical vent in the hearing aid. This vent, however, can cause an oscillation problem as the speaker signal feeds back to the microphone(s) through the vent aperture. The system shown in
The directional processor and headroom expander 50 includes a combination of filtering and delay elements that, when applied to the two digital input signals, forms a single, directionally-sensitive response. This directionally-sensitive response is generated such that the gain of the directional processor 50 will be a maximum value for sounds coming from the front of the hearing instrument and will be a minimum value for sounds coming from the rear.
The headroom expander portion of the processor 50 significantly extends the dynamic range of the A/D conversion. It does this by dynamically adjusting the operating points of the A/D converters 32A, 32B. The headroom expander 50 adjusts the gain before and after the A/D conversion so that the total gain remains unchanged, but the intrinsic dynamic range of the A/D converter block 32A, 32B is optimized to the level of the signal being processed.
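The patent does not specify a concrete converter model, but the gain-before/gain-after idea can be sketched with a toy uniform quantizer. The step size, full-scale value, and function names below are illustrative assumptions, not taken from the text:

```python
import numpy as np

def quantize(x, step=1.0 / 128, full_scale=1.0):
    """Toy uniform A/D model: clip to full scale, round to the nearest step."""
    return np.round(np.clip(x, -full_scale, full_scale) / step) * step

def convert_with_headroom(x, gain_db):
    """Boost a quiet signal into the converter's working range, convert,
    then cut by the same amount, so the total gain is unity but the signal
    exercises more of the converter's intrinsic dynamic range."""
    g = 10.0 ** (gain_db / 20.0)
    return quantize(x * g) / g
```

For a signal well below one quantization step, boosting before conversion sharply reduces the effective quantization error, which is the benefit the headroom expander exploits.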
The output from the directional processor and headroom expander 50 is coupled to a pre-filter 52, which is a general-purpose filter for pre-conditioning the sound signal prior to any further signal processing steps. This “pre-conditioning” can take many forms, and, in combination with corresponding “post-conditioning” in the post filter 62, can be used to generate special effects that may be suited to only a particular class of users. For example, the pre-filter 52 could be configured to mimic the transfer function of the user's middle ear, effectively putting the sound signal into the “cochlear domain.” Signal processing algorithms to correct a hearing impairment based on, for example, inner hair cell loss and outer hair cell loss, could be applied by the sound processor 38. Subsequently, the post-filter 62 could be configured with the inverse response of the pre-filter 52 in order to convert the sound signal back into the “acoustic domain” from the “cochlear domain.” Of course, other pre-conditioning/post-conditioning configurations and corresponding signal processing algorithms could be utilized.
The pre-conditioned digital sound signal is then coupled to the band-split filter 56, which preferably includes a bank of filters with variable corner frequencies and pass-band gains. These filters are used to split the single input signal into four distinct frequency bands. The four output signals from the band-split filter 56 are preferably in-phase so that when they are summed together in block 60, after channel processing, nulls or peaks in the composite signal (from the summer) are minimized.
Channel processing of the four distinct frequency bands from the band-split filter 56 is accomplished by a plurality of channel processing/twin detector blocks 58A-58D. Although four blocks are shown in
Each of the channel processing/twin detectors 58A-58D provides an automatic gain control (“AGC”) function that applies compression and gain to the particular frequency band (channel) being processed. Compression of the channel signals permits quieter sounds to be amplified at a higher gain than louder sounds, for which the gain is compressed. In this manner, the user of the system can hear the full range of sounds since the circuits 58A-58D compress the full range of normal hearing into the reduced dynamic range of the individual user as a function of the individual user's hearing loss within the particular frequency band of the channel.
The channel processing blocks 58A-58D can be configured to employ a twin detector average detection scheme while compressing the input signals. This twin detection scheme includes both slow and fast attack/release tracking modules: the fast tracking module allows a fast response to transients, while the slow tracking module prevents the annoying pumping of the input signal that a fast time constant alone would produce. The outputs of the fast and slow tracking modules are compared, and the compression slope is then adjusted accordingly. The compression ratio, channel gain, lower and upper thresholds (return to linear point), and the fast and slow time constants (of the fast and slow tracking modules) can be independently programmed and saved in memory 44 for each of the plurality of channel processing blocks 58A-58D.
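As a rough illustration (not the patent's actual algorithm: the time constants, threshold, compression ratio, and the max-of-trackers combination are all assumptions), a twin-detector compressor might look like:

```python
import numpy as np

def twin_detector_agc(x, fs, fast_tc=0.005, slow_tc=0.200,
                      threshold=0.1, ratio=4.0):
    """Compress x using fast and slow envelope trackers.

    The fast tracker responds quickly to transients; the slow tracker keeps
    the gain steady on sustained signals so the output does not "pump".
    Levels above the threshold follow a ratio:1 compression law.
    """
    a_fast = np.exp(-1.0 / (fast_tc * fs))
    a_slow = np.exp(-1.0 / (slow_tc * fs))
    env_fast = env_slow = 0.0
    y = np.empty_like(x)
    for n, mag in enumerate(np.abs(x)):
        env_fast = a_fast * env_fast + (1.0 - a_fast) * mag
        env_slow = a_slow * env_slow + (1.0 - a_slow) * mag
        env = max(env_fast, env_slow)   # compare trackers, take the larger
        if env > threshold:
            gain = (threshold / env) ** (1.0 - 1.0 / ratio)
        else:
            gain = 1.0                  # linear below threshold
        y[n] = x[n] * gain
    return y
```

A quiet input below the threshold passes through at unity gain, while a loud input is attenuated once the envelope trackers settle.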
After channel processing is complete, the four channel signals are summed by summer 60 to form a composite signal. This composite signal is then coupled to the post-filter 62, which may apply a post-processing filter function as discussed above. Following post-processing, the composite signal is then applied to a notch filter 64, which attenuates an adjustable narrow band of frequencies in the range where hearing aids tend to oscillate. This notch filter 64 is used to reduce feedback and prevent unwanted “whistling” of the device. Preferably, the notch filter 64 may include a dynamic transfer function that changes the depth of the notch based upon the magnitude of the input signal.
Following the notch filter 64, the composite signal is then coupled to a volume control circuit 66. The volume control circuit 66 receives a digital value from the volume control A/D 40, which indicates the desired volume level set by the user via potentiometer 14, and uses this stored digital value to set the gain of an included amplifier circuit.
From the volume control circuit, the composite signal is then coupled to the AGC-output block 68. The AGC-output circuit 68 is a high compression ratio, low distortion limiter that is used to prevent pathological signals from causing large scale distorted output signals from the speaker 20 that could be painful and annoying to the user of the device. The composite signal is coupled from the AGC-output circuit 68 to a squelch circuit 72, which performs an expansion on low-level signals below an adjustable threshold. The squelch circuit 72 uses an output signal from the wide-band detector 54 for this purpose. The expansion of the low-level signals attenuates noise from the microphones and other circuits when the input S/N ratio is small, thus producing a lower noise signal during quiet situations. Also shown coupled to the squelch circuit 72 is a tone generator block 74, which is included for calibration and testing of the system.
The output of the squelch circuit 72 is coupled to one input of summer 71. The other input to the summer 71 is from the output of the rear A/D converter 32B, when the switch 75 is in the second position. These two signals are summed in summer 71, and passed along to the interpolator and peak clipping circuit 70. This circuit 70 also operates on pathological signals, but it responds almost instantaneously to large peak signals and provides high-distortion limiting. The interpolator shifts the signal up in frequency as part of the D/A process, and then the signal is clipped so that the distortion products do not alias back into the baseband frequency range.
The output of the interpolator and peak clipping circuit 70 is coupled from the sound processor 38 to the D/A H-Bridge 48. This circuit 48 converts the digital representation of the input sound signals to a pulse density modulated representation with complementary outputs. These outputs are coupled off-chip through outputs 12J, 12I to the speaker 20, which low-pass filters the outputs and produces an acoustic analog of the output signals. The D/A H-Bridge 48 includes an interpolator, a digital Delta-Sigma modulator, and an H-Bridge output stage. The D/A H-Bridge 48 is also coupled to and receives the clock signal from the oscillator/system clock 36 (described below).
The interface/system controller 42 is coupled between a serial data interface pin 12M on the IC 12A, and the sound processor 38. This interface is used to communicate with an external controller for the purpose of setting the parameters of the system. These parameters can be stored on-chip in the EEPROM 44. If a “black-out” or “brown-out” condition occurs, then the power-on reset circuit 46 can be used to signal the interface/system controller 42 to configure the system into a known state. Such a condition can occur, for example, if the battery fails.
The front and rear microphones 81, 82 are preferably omnidirectional microphones that receive an acoustical waveform and generate a front and rear microphone signal, respectively. The front microphone signal is coupled to the summation circuit 85, and the rear microphone signal is coupled to the low-noise phase-shifting circuit 84. The low-noise phase-shifting circuit 84 implements a frequency-dependent phase shift, θLN, that maintains a maximum desired noise amplification level (GN) in the resultant directional microphone signal. Exemplary maximum noise amplification levels (GN) are described below with reference to
The phase shift implemented by the low-noise phase-shifting circuit 84 may be calculated from array processing theory. This theory states that the directional gain (D) of an arbitrary array at a frequency f can be expressed in matrix notation as:
In this expression, RS(f) and RN(f) are matrices describing the signal and noise correlation properties, respectively. The term w(f) is the sensor-weight vector, and the superscript “H” denotes the conjugate transpose of a matrix. The sensor-weight vector, w(f), is a mathematical description of the actual signal modifications that result from the application of the low-noise phase-shifting circuit 84.
Expressions for the matrix quantities, RS(f) and RN(f), can be obtained by assuming a specific array geometry. For the purposes of directional microphone processing, the signal wavefront is assumed to arrive from a single, fixed direction (usually to the front of a hearing instrument user). Thus, the signal correlation matrix, RS(f), can be expressed as:
RS(f) = s(f)s(f)^H
s(f) in the above equation is the signal propagation vector:
where k is the wavenumber and d is the distance between the front and rear microphones 81, 82.
Assuming a spherically isotropic (or diffuse) noise field, the noise correlation matrix, RN(f), can be expressed as:
The sensor-weight vector, w(f), may be expressed in terms of the front and rear microphone filter responses, as follows:
where Hf(f) is a complex frequency response associated with the front microphone filter, and Hr(f) is a complex frequency response associated with the rear microphone filter.
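The equation images for these quantities are not reproduced in this text, but the standard two-microphone forms under the stated assumptions (plane-wave signal from the front, spherically isotropic noise with sinc inter-microphone coherence, and an assumed speed of sound of 343 m/s) can be sketched numerically:

```python
import numpy as np

def two_mic_model(f, d, c_sound=343.0):
    """Signal propagation vector s(f), signal correlation Rs(f), and
    diffuse-field noise correlation Rn(f) for two microphones spaced
    d meters apart, with the signal arriving from the front."""
    k = 2.0 * np.pi * f / c_sound                 # wavenumber
    s = np.array([1.0, np.exp(-1j * k * d)])      # signal propagation vector
    Rs = np.outer(s, s.conj())                    # Rs(f) = s(f) s(f)^H
    rho = np.sinc(k * d / np.pi)                  # sin(kd)/(kd) coherence
    Rn = np.array([[1.0, rho], [rho, 1.0]])
    return s, Rs, Rn

def directional_gain(w, Rs, Rn):
    """D(f) = (w(f)^H Rs(f) w(f)) / (w(f)^H Rn(f) w(f))."""
    return (w.conj() @ Rs @ w).real / (w.conj() @ Rn @ w).real
```

With these building blocks, the regularized optimal weights described next, wO(f) = [Rn(f) + δ(f)I]^−1 s(f), can be checked to never do worse than the plain omnidirectional weighting w = s.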
The sensor-weight vector, wO(f), that maximizes the directional gain may be calculated as follows:
wO(f) = [RN(f) + δ(f)I]^−1 s(f), where I is an identity matrix of the same size as RN(f), and δ(f) is a small positive value that controls the amount of noise amplification.
By substituting the previous expressions for RN(f) and s(f), a closed form expression for the optimal sensor-weight vector, wO(f), can be derived as follows:
and Δ = (1 + δ(f))^2 − ρ^2
The optimal sensor-weight vector, wO(f), may thus be calculated by determining values for the parameter δ(f) that produce the desired maximum noise amplification over the frequency band of interest. Given a desired level of maximum noise amplification, GN, the parameter δ(f) may be calculated for each frequency in the frequency band of interest, as follows:
T = 1/GN
δ(f) = x − 1
a = 2 − T
b = (2T − 4)ρ cos(ωd/ν)
c = ρ^2(2 cos^2(ωd/ν) − T)
where ω is the radian frequency (2πf), d is the spacing between the front and rear microphones 81, 82, ν is the speed of sound, and
In order to implement a directional microphone array using the optimal sensor-weight vector, wO(f), as described above, filters with the specified magnitude and phase responses may be constructed for both the front and rear microphone signals. The filters required for this implementation, however, may not be practical for some applications. A considerable simplification results by normalizing the front and rear microphone filter responses by the front microphone response, as the array processing equations are invariant to a constant multiplied by the sensor-weight vector. The result of this normalization is to eliminate the front microphone filter and reduce the rear microphone filter to an allpass filter, as follows:
Using the result from the above equations, the frequency-dependent phase shift, θLN, implemented by the low-noise phase-shifting circuit 84 may be calculated for each frequency in the band of interest, as follows:
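The closed-form expression itself is not reproduced in this text. Numerically, however, θLN is simply the phase of the rear element of wO(f) after the front-normalization just described. A sketch, using the two-microphone diffuse-noise model from the preceding discussion (sinc coherence and a 343 m/s speed of sound are assumptions):

```python
import numpy as np

def theta_ln(f, d, delta, c_sound=343.0):
    """Low-noise phase shift theta_LN at frequency f: the phase of the
    rear microphone filter after normalizing the optimal sensor-weight
    vector w_O(f) = [Rn + delta*I]^-1 s(f) by its front element."""
    k = 2.0 * np.pi * f / c_sound
    s = np.array([1.0, np.exp(-1j * k * d)])
    rho = np.sinc(k * d / np.pi)                  # diffuse-field coherence
    Rn = np.array([[1.0, rho], [rho, 1.0]])
    w = np.linalg.solve(Rn + delta * np.eye(2), s)
    return np.angle(w[1] / w[0])                  # rear phase; front filter = 1
```

As δ(f) grows large, regularization dominates and θLN approaches the plain propagation phase −ωd/ν; smaller values of δ(f) bend the phase away from this to cap the noise amplification.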
The front and rear microphones 110, 112 are preferably omnidirectional microphones that receive an acoustical waveform and generate a front and rear microphone signal, respectively. The front microphone signal is coupled to the front allpass filter 114, and the rear microphone signal is coupled to the time delay circuit 115. The time delay circuit 115 implements a time-of-flight delay that compensates for the distance between the front and rear microphones 110, 112 and determines the specific nature of the directional microphone pattern (i.e., cardioid, hyper-cardioid, bi-directional, etc.).
The front and rear allpass filters 114, 116 are infinite impulse response (IIR) filters that apply a frequency-specific phase shift without significantly affecting the magnitudes of the microphone signals. More specifically, the front and rear allpass filters 114, 116 apply an additional frequency-dependent phase shift (Δθ), beyond that required for conventional directional microphone operation (see, e.g.,
The inter-microphone phase shift, Δθ, is obtained by subtracting the conventional phase shift, θC, from the low-noise phase shift, θLN. It is this inter-microphone phase shift, Δθ=θLN−θC, that is implemented by the front and rear allpass filters 114, 116. An exemplary method for implementing the front and rear allpass filters 114, 116 is described below with reference to
The frequency-dependent phase shift, Δθ, will produce a low-noise version of any desired directional microphone pattern, such as cardioid, super-cardioid, or hyper-cardioid. That is, the low-noise phase shift, Δθ, is effective regardless of the exact directional microphone time delay.
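A first-order digital allpass section illustrates the property relied on here: exactly unit magnitude at every frequency, with the phase controlled by a single coefficient. This is a generic textbook structure, not the patent's specific filter:

```python
import numpy as np

def allpass1_response(a, w):
    """Frequency response of the first-order allpass
    H(z) = (a + z^-1) / (1 + a*z^-1) at radian frequencies w
    (|a| < 1 for stability)."""
    zinv = np.exp(-1j * w)
    return (a + zinv) / (1.0 + a * zinv)
```

Cascading sections with different coefficients in the front and rear signal paths shapes the inter-microphone phase difference Δθ without disturbing the magnitudes of the microphone signals.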
The directional microphone signal is generated by the summation circuit 118 as the difference between the filtered outputs from front and rear allpass filters 114, 116, and is input to the equalization (EQ) filter 120. The equalization filter 120 equalizes the on-axis frequency response of the directional microphone signal to match that of a single, omnidirectional microphone, and generates the microphone system output signal 122. More particularly, the on-axis frequency response of the directional microphone signal will typically exhibit a +6 dB/octave slope over some frequency regions and an irregular response over other regions. The equalization filter 120 is implemented using standard audio equalization methods to flatten this response shape. The equalization filter 120 will therefore typically include a combination of low-pass and other audio equalization filters, such as graphic or parametric equalizers.
In step 136, a stable allpass IIR filter is selected for both the front and rear allpass filters 114, 116. Then, in step 138, either the front allpass filter 114, the rear allpass filter 116 or both are modified to approximate the desired inter-microphone phase shift, Δθ. For example, the rear allpass filter 116 phase target may be obtained by adding Δθ to the phase response of the stable front allpass filter 114 selected in step 136. This phase target may then be used to modify the rear allpass filter 116. Techniques for selecting a stable allpass IIR filter and for modifying one of a pair of filters to achieve a desired phase difference are known to those skilled in the art. For example, standard allpass IIR filter design techniques are described in S.S. Kidambi, “Weighted least-square design of recursive allpass filters”, IEEE Trans. on Signal Processing, Vol. 44, No. 6, pp. 1553-1557, June 1996.
In step 140, the stability of the front and rear allpass filters 114, 116 is verified using known techniques. Then in step 142, the on-axis frequency response, GS(f), of the directional microphone signal is calculated at a number of selected frequency points within the frequency band of interest, as follows:
GS(f) = wO(f)^H s(f)
If the resulting frequency response, GS(f), matches the desired frequency response within acceptable limits (for example, ±3 dB) at step 144, then the method ends at step 148. If, however, it is determined at step 144 that the frequency response, GS(f), is not within acceptable limits, then an equalization filter 120 is designed at step 146 with a combination of low-pass and other audio equalization filters, using known techniques as described above. That is, the equalization filter 120 shown in
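Step 142's check can be sketched with the same two-microphone diffuse-noise model used earlier (the sinc coherence and 343 m/s speed of sound remain assumptions):

```python
import numpy as np

def on_axis_response_db(f, d, delta, c_sound=343.0):
    """On-axis response G_S(f) = w_O(f)^H s(f), in dB, for the
    two-microphone diffuse-noise model."""
    k = 2.0 * np.pi * f / c_sound
    s = np.array([1.0, np.exp(-1j * k * d)])
    rho = np.sinc(k * d / np.pi)
    Rn = np.array([[1.0, rho], [rho, 1.0]])
    w = np.linalg.solve(Rn + delta * np.eye(2), s)
    return 20.0 * np.log10(abs(w.conj() @ s))
```

Evaluating this at a grid of frequencies and comparing the shape against the desired response (within, for example, ±3 dB) decides whether an equalization filter 120 must be designed.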
As described above, the specific implementation of a low-noise directional microphone system is driven by the target value chosen for the maximum noise amplification level, GN. This concept is best illustrated with an example.
Referring first to
A comparison of the maximum DI levels 174, 176, 178, 180, 182 in the exemplary low-noise directional microphone system with the maximum DI 172 in a conventional directional microphone system illustrates the loss of directionality at low frequencies in the low-noise directional microphone system. This loss of directionality may be balanced with the corresponding reduction in noise amplification in order to choose a maximum noise amplification target that is suitable for a particular application.
Also illustrated in
Operationally, the front and rear microphones 1210, 1212 receive an acoustical waveform and generate front and rear microphone signals, respectively. The front and rear microphones 1210, 1212 are preferably omnidirectional microphones, but matched, directional microphones could also be used. The front microphone signal is coupled to the front FIR filter 1214 and the rear microphone signal is coupled to the rear FIR filter 1216. The filtered signals from the front and rear FIR filters 1214, 1216 are then combined by the summation circuit 1218 to generate the directional microphone signal 1220.
The front and rear FIR filters 1214, 1216 implement a frequency-dependent phase-response that compensates for the time-of-flight delay between the front and rear microphones 1210, 1212 and also maintains a maximum desired noise amplification level (GN) in the resultant directional microphone signal, similar to the directional microphone systems described above with respect to
More specifically, the front and rear FIR filters 1214, 1216 may be implemented from the above-described expression for the optimal sensor-weight vector, wO(f):
and Δ = (1 + δ(f))^2 − ρ^2
As noted above, the optimal sensor-weight vector, wO(f), may be calculated by determining values for the parameter δ(f) that produce the desired maximum noise amplification over the frequency band of interest. Given a desired level of maximum noise amplification, GN, the parameter δ(f) may be calculated for each frequency in the frequency band of interest, as described above. In contrast to the allpass IIR filters 114, 116 of
The design target for the rear FIR filter 1216 may be expressed as:
Using the above design targets for the front and rear FIR filters 1214, 1216, FIR filters may be designed using known FIR filter design techniques, such as described in T. W. Parks & C. S. Burrus, Digital Filter Design, John Wiley & Sons, Inc., New York, N.Y., 1987.
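The cited design techniques are not reproduced here. As a minimal stand-in, a frequency-sampling design turns a sampled complex target response directly into FIR taps; it is adequate for causal targets such as delays, and the function name and simple truncation strategy are illustrative assumptions:

```python
import numpy as np

def fir_from_target(H_target, n_taps):
    """Frequency-sampling FIR design. H_target is the desired complex
    response sampled on the np.fft.rfftfreq grid of some even length N.
    The inverse real FFT yields a length-N impulse response, which is
    truncated to the first n_taps coefficients."""
    h = np.fft.irfft(H_target)
    return h[:n_taps]
```

For example, a target consisting of a pure m-sample delay comes back as a unit impulse at tap m.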
In addition, if the on-axis frequency response of the directional microphone signal 1220 does not match the desired frequency response within acceptable limits (for example, ±3 dB), then the above design targets may be modified to include amplitude response equalization for the directional microphone output 1220. For example, amplitude response equalization may be incorporated into the FIR filter design targets by normalizing the target responses in each microphone by the on-axis frequency response, GS(f), as follows:
In step 1340, the on-axis frequency response of the resultant directional microphone output 1220 is calculated, as described above. If the on-axis frequency response is within acceptable design limits (step 1350), then the method proceeds to step 1385, described below. If the on-axis frequency response calculated in step 1340 is not within acceptable design limits, however, then in 1360 the design targets for the front and rear FIR filters 1214, 1216 are modified to provide amplitude response equalization for the directional microphone output 1220, and the method returns to step 1334.
In step 1385, the actual directivity (DI) and noise amplification (GN) levels for the directional microphone system 1200 are evaluated. If the directivity (DI) and maximum noise amplification (GN) are within the acceptable design parameters (step 1387), then the method ends at step 1395. If the directional microphone performance is not within acceptable design limits, however, then the selected number of FIR filter taps may be increased at step 1390, and the method repeated from step 1330. For example, the design limits may require the maximum noise amplification level (GN) achieved by the directional microphone system 1200 to fall within 1 dB of the target level chosen in step 1310. If the system 1200 does not perform within the design parameters, then the number of FIR filter taps may be increased at step 1390 in order to increase the resolution of the filters 1214, 1216 and better approximate the design targets.
The method begins at 1402 and repeats for each frequency within the frequency band of interest. At step 1404 the target maximum noise amplification level, GN, is selected as described above. Then, an initial value for δ(f) is selected at step 1406, and the sensor-weight vector, wO(f), is calculated at step 1408 using the initialized value for δ(f). The resultant noise amplification, GN, for the particular frequency is then calculated at step 1410, as follows:
If the calculated value for GN is greater than the target value (step 1412), then the value of δ(f) is increased at step 1414, and the method is repeated from step 1408. Similarly, if the calculated value for GN is less than the target value (step 1416), then the value of δ(f) is decreased at step 1418, and the method is repeated from step 1408. Otherwise, if the calculated value for GN is within acceptable design limits, then the value for δ(f) at the particular frequency is set, and the method repeats (step 1420) until a value for δ(f) is set for each frequency in the band of interest.
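The GN formula at step 1410 is not reproduced in this text. The sketch below assumes the usual white-noise-gain definition GN = (w^H w)/|w^H s|^2 and replaces the raise/lower loop with an equivalent bisection on δ(f), using the two-microphone diffuse-noise model from earlier (sinc coherence and a 343 m/s speed of sound are assumptions):

```python
import numpy as np

def noise_amp_db(delta, f, d, c_sound=343.0):
    """Noise amplification G_N (dB) of w_O = [Rn + delta*I]^-1 s(f),
    using the assumed definition G_N = (w^H w) / |w^H s|^2."""
    k = 2.0 * np.pi * f / c_sound
    s = np.array([1.0, np.exp(-1j * k * d)])
    rho = np.sinc(k * d / np.pi)
    Rn = np.array([[1.0, rho], [rho, 1.0]])
    w = np.linalg.solve(Rn + delta * np.eye(2), s)
    return 10.0 * np.log10((w.conj() @ w).real / abs(w.conj() @ s) ** 2)

def solve_delta(f, d, gn_target_db, tol_db=0.05, max_iter=200):
    """Find delta(f) whose noise amplification meets the target level:
    raise delta when G_N is too high, lower it when G_N is too low."""
    lo, hi = 1e-12, 1e6              # G_N decreases as delta increases
    mid = 1.0
    for _ in range(max_iter):
        mid = np.sqrt(lo * hi)       # geometric midpoint of the bracket
        g = noise_amp_db(mid, f, d)
        if abs(g - gn_target_db) < tol_db:
            break
        if g > gn_target_db:
            lo = mid                 # too much noise gain: increase delta
        else:
            hi = mid                 # too little: decrease delta
    return mid
```

Running this once per frequency in the band of interest yields the full δ(f) profile used to build the low-noise filters.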
This written description uses examples to disclose the invention, including the best mode, and also to enable a person skilled in the art to make and use the invention. The patentable scope of the invention is defined by the claims, and may include other examples that occur to those skilled in the art.
For example,
More particularly, the front and rear microphones 1602, 1604 receive an acoustical waveform and generate a front and rear microphone signal, respectively. The front microphone signal is coupled to the low-noise phase-shifting circuit 1608 and the rear microphone signal is coupled to the time-of-flight delay circuit 1606. The low-noise phase-shifting circuit 1608 implements a frequency-dependent phase shift (−Δθ) in order to maintain the maximum desired noise amplification level, as described above. The time-of-flight delay circuit 1606 implements a frequency-dependent time delay to compensate for the time-of-flight delay between the front and rear microphones 1602, 1604, similar to the delay circuit 115 described above with reference to
This application claims priority from and is related to the following prior application: “Low-Noise, First Order Differential Microphone Array,” U.S. Provisional Application No. 60/362,677, filed Mar. 8, 2002. This prior application, including the entire written description and drawing figures, is hereby incorporated into the present application by reference.
Number | Date | Country
---|---|---
20030169891 A1 | Sep 2003 | US

Number | Date | Country
---|---|---
60362677 | Mar 2002 | US