The present invention pertains to sound reproduction, sound recording, audio communications and hearing protection using earphone devices designed to provide variable acoustical isolation from ambient sounds while being able to audition both environmental and desired audio stimuli. Particularly, the present invention describes a method and device for suppressing echo in an ear canal when capturing a user's voice using an ambient sound microphone and an ear canal microphone.
People use headsets or earpieces primarily for voice communications and music listening enjoyment. A headset or earpiece generally includes a microphone and a speaker for allowing the user to speak and listen. An ambient sound microphone mounted on the earpiece can capture ambient sounds in the environment, sounds that can include the user's voice. An ear canal microphone mounted internally on the earpiece can capture voice resonant within the ear canal, sounds generated when the user is speaking.
An earpiece that provides sufficient occlusion can utilize both the ambient sound microphone and the ear canal microphone to enhance the user's voice. An ear canal receiver mounted internal to the ear canal can loop back sound captured at the ambient sound microphone or the ear canal microphone to allow the user to listen to captured sound. If, however, the earpiece is not properly sealed within the ear canal, ambient sounds can leak into the ear canal and create an echo feedback condition with the ear canal microphone and ear canal receiver. In such cases, the feedback loop can generate an annoying “howling” sound that degrades the quality of the voice communication and listening experience.
Embodiments in accordance with the present invention provide a method and device for background noise control, ambient sound mixing and other audio control methods associated with an earphone. Note that although this application is filed as a continuation in part of U.S. patent application Ser. No. 16/247,186, the subject matter material can be found in U.S. patent application Ser. No. 12/170,171, filed on 9 Jul. 2008, now U.S. Pat. No. 8,526,645, application Ser. No. 12/115,349 filed on May 5, 2008, now U.S. Pat. No. 8,081,780, and Application No. 60/916,271 filed on May 4, 2007, all of which were incorporated by reference in U.S. patent application Ser. No. 16/247,186 and are incorporated by reference in their entirety herein.
In a first embodiment, a method for in-ear canal echo suppression control can include the steps of capturing an ambient acoustic signal from at least one Ambient Sound Microphone (ASM) to produce an electronic ambient signal, capturing in an ear canal an internal sound from at least one Ear Canal Microphone (ECM) to produce an electronic internal signal, and measuring a background noise signal from the electronic ambient signal and the electronic internal signal. The electronic internal signal includes an echo of a spoken voice generated by a wearer of the earpiece. The echo in the electronic internal signal can be suppressed to produce a modified electronic internal signal containing primarily the spoken voice. A voice activity level can be generated for the spoken voice based on characteristics of the modified electronic internal signal and a level of the background noise signal. The electronic ambient signal and the electronic internal signal can then be mixed in a ratio dependent on the background noise signal to produce a mixed signal without echo that is delivered to the ear canal by way of an Ear Canal Receiver (ECR).
An internal gain of the electronic internal signal can be increased as background noise levels increase, while an external gain of the electronic ambient signal can be decreased as the background noise levels increase. Conversely, the internal gain of the electronic internal signal can be decreased as background noise levels decrease, while the external gain of the electronic ambient signal can be increased as the background noise levels decrease. The step of mixing can include filtering the electronic ambient signal and the electronic internal signal based on a characteristic of the background noise signal. The characteristic can be a background noise level, a spectral profile, or an envelope fluctuation.
At low background noise levels and low voice activity levels, the electronic ambient signal can be amplified relative to the electronic internal signal in producing the mixed signal. At medium background noise levels and medium voice activity levels, low frequencies in the electronic ambient signal and high frequencies in the electronic internal signal can be attenuated. At high background noise levels and high voice activity levels, the electronic internal signal can be amplified relative to the electronic ambient signal in producing the mixed signal.
The method can include adapting a first set of filter coefficients of a Least Mean Squares (LMS) filter to model an inner ear-canal microphone transfer function (ECTF). The voice activity level of the modified electronic internal signal can be monitored, and an adaptation of the first set of filter coefficients for the modified electronic internal signal can be frozen if the voice activity level is above a predetermined threshold. The voice activity level can be determined by an energy level characteristic and a frequency response characteristic. A second set of filter coefficients for a replica of the LMS filter can be generated during the freezing and substituted back for the first set of filter coefficients when the voice activity level is below another predetermined threshold. The modified electronic internal signal can be transmitted to another voice communication device and looped back to the ear canal.
In a second embodiment, a method for in-ear canal echo suppression control can include capturing an ambient sound from at least one Ambient Sound Microphone (ASM) to produce an electronic ambient signal, delivering audio content to an ear canal by way of an Ear Canal Receiver (ECR) to produce an acoustic audio content, capturing in the ear canal by way of an Ear Canal Microphone (ECM) the acoustic audio content to produce an electronic internal signal, generating a voice activity level of a spoken voice in the presence of the acoustic audio content, suppressing an echo of the spoken voice in the electronic internal signal to produce a modified electronic internal signal, and controlling a mixing of the electronic ambient signal and the electronic internal signal based on the voice activity level. At least one voice operation of the earpiece can be controlled based on the voice activity level. The modified electronic internal signal can be transmitted to another voice communication device and looped back to the ear canal.
The method can include measuring a background noise signal from the electronic ambient signal and the electronic internal signal, and mixing the electronic ambient signal with the electronic internal signal in a ratio dependent on the background noise signal to produce a mixed signal that is delivered to the ear canal by way of the ECR. An acoustic attenuation level of the earpiece and a level of the audio content reproduced can be accounted for when adjusting the mixing based on the level of the audio content, the background noise level, and the acoustic attenuation level. The electronic ambient signal and the electronic internal signal can be filtered based on a characteristic of the background noise signal. The characteristic can be a background noise level, a spectral profile, or an envelope fluctuation. The method can include applying a first gain (G1) to the electronic ambient signal, and applying a second gain (G2) to the electronic internal signal. The first gain and second gain can be a function of the background noise level and the voice activity level.
The method can include adapting a first set of filter coefficients of a Least Mean Squares (LMS) filter to model an inner ear-canal microphone transfer function (ECTF). The adaptation of the first set of filter coefficients can be frozen for the modified electronic internal signal if the voice activity level is above a predetermined threshold. A second set of filter coefficients for a replica of the LMS filter can be adapted during the freezing. The second set can be substituted back for the first set of filter coefficients when the voice activity level is below another predetermined threshold. The adaptation of the first set of filter coefficients can then be unfrozen.
In a third embodiment, an earpiece to provide in-ear canal echo suppression can include an Ambient Sound Microphone (ASM) configured to capture ambient sound and produce an electronic ambient signal, an Ear Canal Receiver (ECR) to deliver audio content to an ear canal to produce an acoustic audio content, an Ear Canal Microphone (ECM) configured to capture internal sound including spoken voice in an ear canal and produce an electronic internal signal, and a processor operatively coupled to the ASM, the ECM and the ECR. The audio content can be a phone call, a voice message, a music signal, or the spoken voice. The processor can be configured to suppress an echo of the spoken voice in the electronic internal signal to produce a modified electronic internal signal, generate a voice activity level for the spoken voice based on characteristics of the modified electronic internal signal and a level of the background noise signal, and mix the electronic ambient signal with the electronic internal signal in a ratio dependent on the background noise signal to produce a mixed signal that is delivered to the ear canal by way of the ECR. The processor can play the mixed signal back to the ECR for loopback listening. A transceiver operatively coupled to the processor can transmit the mixed signal to a second communication device.
A Least Mean Squares (LMS) echo suppressor can model an inner ear-canal microphone transfer function (ECTF) between the ECR and the ECM. A voice activity detector operatively coupled to the echo suppressor can adapt a first set of filter coefficients of the echo suppressor to model the ECTF, and freeze an adaptation of the first set of filter coefficients for the modified electronic internal signal if the voice activity level is above a predetermined threshold. The voice activity detector during the freezing can also adapt a second set of filter coefficients for the echo suppressor, and substitute the second set of filter coefficients for the first set of filter coefficients when the voice activity level is below another predetermined threshold. Upon completing the substitution, the processor can unfreeze the adaptation of the first set of filter coefficients.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Processes, techniques, apparatus, and materials as known by one of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the enabling description where appropriate, for example the fabrication and use of transducers.
In all of the examples illustrated and discussed herein, any specific values, for example the sound pressure level change, should be interpreted to be illustrative only and non-limiting. Thus, other examples of the exemplary embodiments could have different values.
Note that similar reference numerals and letters refer to similar items in the following figures, and thus once an item is defined in one figure, it may not be discussed for following figures.
Note that herein when referring to correcting or preventing an error or damage (e.g., hearing damage), a reduction of the damage or error and/or a correction of the damage or error are intended.
Various embodiments herein provide a method and device for automatically mixing audio signals produced by a pair of microphone signals that monitor a first ambient sound field and a second ear canal sound field, to create a third new mixed signal. An Ambient Sound Microphone (ASM) and an Ear Canal Microphone (ECM) can be housed in an earpiece that forms a seal in the ear of a user. The third mixed signal can be auditioned by the user with an Ear Canal Receiver (ECR) mounted in the earpiece, which creates a sound pressure in the occluded ear canal of the user. A voice activity detector can determine when the user is speaking and control an echo suppressor to suppress associated feedback in the ECR.
When the user engages in a voice communication, the echo suppressor can suppress feedback of the spoken voice from the ECR. The echo suppressor can contain two sets of filter coefficients: a first set that adapts when voice is not present and becomes fixed when voice is present, and a second set that adapts when the first set is fixed. The voice activity detector can discriminate between audible content, such as music, that the user is listening to, and spoken voice generated by the user when engaged in voice communication. The third mixed signal contains primarily the spoken voice captured at the ASM and ECM without echo, and can be transmitted to a remote voice communications system, such as a mobile phone, personal media player, recording device, walkie-talkie radio, etc. Before the ASM and ECM signals are mixed, they can be echo suppressed and subjected to different filters and optional additional gains. This permits a single earpiece to provide full-duplex voice communication with proper or improper acoustic sealing.
The characteristic responses of the ASM and ECM filters can differ based on characteristics of the background noise and the voice activity level. In some exemplary embodiments, the filter response can depend on the measured Background Noise Level (BNL). A gain of a filtered ASM and a filtered ECM signal can also depend on the BNL. The BNL can be calculated using either or both the conditioned ASM and/or ECM signal(s). The BNL can be a slow time-weighted average of the level of the ASM and/or ECM signals, and can be weighted using a frequency-weighting system, e.g., to give an A-weighted SPL level (i.e., the high and low frequencies are attenuated before the level of the microphone signals is calculated).
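The slow time-weighted BNL computation described above can be illustrated with the following sketch, which applies an exponential time weighting to per-frame RMS levels. The frame size, time constant, and the omission of the A-weighting pre-filter are simplifying assumptions for illustration only; frames are assumed to be already frequency weighted.

```python
import numpy as np

def background_noise_level(frames, fs=8000, tau=1.0):
    """Estimate the Background Noise Level (BNL) as a slow,
    exponentially time-weighted average of per-frame RMS level in dB.

    frames: sequence of 1-D numpy arrays (conditioned ASM or ECM audio).
    tau:    smoothing time constant in seconds (hypothetical value).
    """
    frame_len = len(frames[0])
    alpha = np.exp(-frame_len / (tau * fs))  # per-frame smoothing factor
    bnl = None
    for frame in frames:
        rms = np.sqrt(np.mean(frame ** 2) + 1e-12)
        level_db = 20.0 * np.log10(rms + 1e-12)
        # Exponential moving average: slow tracking of the noise floor.
        bnl = level_db if bnl is None else alpha * bnl + (1 - alpha) * level_db
    return bnl
```

Because the time constant is long relative to a frame, short bursts such as speech transients perturb the estimate only slightly, which is why a slow average is suitable for a noise-floor measure.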
At least one exemplary embodiment of the invention is directed to an earpiece for voice operated control. Reference is made to
Earpiece 100 includes an Ambient Sound Microphone (ASM) 111 to capture ambient sound, an Ear Canal Receiver (ECR) 125 to deliver audio to an ear canal 131, and an Ear Canal Microphone (ECM) 123 to assess a sound exposure level within the ear canal 131. The earpiece 100 can partially or fully occlude the ear canal 131 to provide various degrees of acoustic isolation. The assembly is designed to be inserted into the user's ear canal 131, and to form an acoustic seal with the walls 129 of the ear canal at a location 127 between the entrance 117 to the ear canal and the tympanic membrane (or ear drum) 133. Such a seal is typically achieved by means of a soft and compliant housing of assembly 113. Such a seal creates a closed cavity 131 of approximately 5 cc between the in-ear assembly 113 and the tympanic membrane 133. As a result of this seal, the ECR (speaker) 125 is able to generate a full range frequency response when reproducing sounds for the user. This seal also serves to significantly reduce the sound pressure level at the user's eardrum resulting from the sound field at the entrance to the ear canal 131. This seal is also a basis for a sound isolating performance of the electro-acoustic assembly.
Located adjacent to the ECR 125, is the ECM 123, which is acoustically coupled to the (closed or partially closed) ear canal cavity 131. One of its functions is that of measuring the sound pressure level in the ear canal cavity 131 as a part of testing the hearing acuity of the user as well as confirming the integrity of the acoustic seal and the working condition of the earpiece 100. In one arrangement, the ASM 111 can be housed in the assembly 113 to monitor sound pressure at the entrance to the occluded or partially occluded ear canal. All transducers shown can receive or transmit audio signals to a processor 121 that undertakes audio signal processing and provides a transceiver for audio via the wired or wireless communication path 119.
The earpiece 100 can actively monitor a sound pressure level both inside and outside an ear canal and enhance spatial and timbral sound quality while maintaining supervision to ensure safe sound reproduction levels. The earpiece 100 in various embodiments can conduct listening tests, filter sounds in the environment, monitor warning sounds in the environment, present notification based on identified warning sounds, maintain constant audio content to ambient sound levels, and filter sound in accordance with a Personalized Hearing Level (PHL).
The earpiece 100 can measure ambient sounds in the environment received at the ASM 111. Ambient sounds correspond to sounds within the environment such as the sound of traffic noise, street noise, conversation babble, or any other acoustic sound. Ambient sounds can also correspond to industrial sounds present in an industrial setting, such as, factory noise, lifting vehicles, automobiles, and robots to name a few.
The earpiece 100 can generate an Ear Canal Transfer Function (ECTF) to model the ear canal 131 using ECR 125 and ECM 123, as well as an Outer Ear Canal Transfer Function (OETF) using ASM 111. For instance, the ECR 125 can deliver an impulse within the ear canal and generate the ECTF via cross correlation of the impulse with the impulse response of the ear canal. The earpiece 100 can also determine a sealing profile with the user's ear to compensate for any leakage. The earpiece 100 can further include a Sound Pressure Level Dosimeter to estimate sound exposure and recovery times. This permits the earpiece 100 to safely administer and monitor sound exposure to the ear.
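The ECTF measurement described above can be sketched with a cross-correlation estimate. The sketch below uses a white-noise stimulus in place of a literal impulse (a common practical substitute); the function name, tap count, and stimulus choice are illustrative assumptions, not details from the specification.

```python
import numpy as np

def estimate_ectf(stimulus, recorded, taps=64):
    """Estimate the ear canal impulse response (ECTF) by
    cross-correlating a known ECR stimulus with the ECM recording.

    For a white-noise stimulus x(n), the normalized cross-correlation
    r_xy[k] = sum_n x(n) * y(n + k) / sum_n x(n)^2
    approximates the impulse response h[k] of the ear-canal path.
    (Sketch only; a deployed system would also window and average.)
    """
    x = np.asarray(stimulus, dtype=float)
    y = np.asarray(recorded, dtype=float)
    power = np.dot(x, x)                      # stimulus energy
    h = np.empty(taps)
    for k in range(taps):
        h[k] = np.dot(x[: len(x) - k], y[k:]) / power
    return h
```

Because the off-lag autocorrelation of white noise averages toward zero, each lag of the cross-correlation isolates one tap of the echo path.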
Referring to
As illustrated, the earpiece 100 can include an acoustic management module 201 to mix sounds captured at the ASM 111 and ECM 123 to produce a mixed sound. The processor 121 can then provide the mixed signal to one or more subsystems, such as a voice recognition system, a voice dictation system, a voice recorder, or any other voice related processor or communication device. The acoustic management module 201 can be a hardware component implemented by discrete or analog electronic components or a software component. In one arrangement, the functionality of the acoustic management module 201 can be provided by way of software, such as program code, assembly language, or machine language.
The memory 208 can also store program instructions for execution on the processor 121 as well as captured audio processing data and filter coefficient data. The memory 208 can be off-chip and external to the processor 121 and include a data buffer to temporarily capture the ambient sound and the internal sound, and a storage memory to save from the data buffer the recent portion of the history in a compressed format responsive to a directive by the processor 121. The data buffer can be a circular buffer that temporarily stores audio sound from a current time point back to a previous time point. It should also be noted that the data buffer can in one configuration reside on the processor 121 to provide high speed data access. The storage memory can be non-volatile memory such as flash memory to store captured or compressed audio data.
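The data buffer described above can be sketched as a simple circular buffer that overwrites its oldest contents once capacity is reached. The class name and frame-level granularity below are illustrative assumptions, not an on-chip implementation.

```python
from collections import deque

class CircularAudioBuffer:
    """Circular data buffer holding audio from a current time point
    back to a previous time point; the oldest frame is overwritten
    once the capacity is reached. (Minimal sketch.)
    """
    def __init__(self, capacity):
        self._buf = deque(maxlen=capacity)   # bounded FIFO

    def write(self, frame):
        self._buf.append(frame)              # oldest frame dropped if full

    def history(self):
        return list(self._buf)               # oldest-to-newest snapshot
```

A directive from the processor would then copy `history()` out to the storage memory, optionally compressing it first.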
The earpiece 100 can include an audio interface 212 operatively coupled to the processor 121 and acoustic management module 201 to receive audio content, for example from a media player, cell phone, or any other communication device, and deliver the audio content to the processor 121. The processor 121 responsive to detecting spoken voice from the acoustic management module 201 can adjust the audio content delivered to the ear canal. For instance, the processor 121 (or acoustic management module 201) can lower a volume of the audio content responsive to detecting a spoken voice. The processor 121 by way of the ECM 123 can also actively monitor the sound exposure level inside the ear canal and adjust the audio to within a safe and subjectively optimized listening level range based on voice operating decisions made by the acoustic management module 201.
The earpiece 100 can further include a transceiver 204 that can support singly or in combination any number of wireless access technologies including without limitation Bluetooth™, Wireless Fidelity (WiFi), Worldwide Interoperability for Microwave Access (WiMAX), and/or other short or long range communication protocols. The transceiver 204 can also provide support for dynamic downloading over-the-air to the earpiece 100. It should be noted also that next generation access technologies can also be applied to the present disclosure.
The location receiver 232 can utilize common technology such as a common GPS (Global Positioning System) receiver that can intercept satellite signals and therefrom determine a location fix of the earpiece 100.
The power supply 210 can utilize common power management technologies such as replaceable batteries, supply regulation technologies, and charging system technologies for supplying energy to the components of the earpiece 100 and to facilitate portable applications. A motor (not shown) can be a single supply motor driver coupled to the power supply 210 to improve sensory input via haptic vibration. As an example, the processor 121 can direct the motor to vibrate responsive to an action, such as a detection of a warning sound or an incoming voice call.
The earpiece 100 can further represent a single operational device or a family of devices configured in a master-slave arrangement, for example, a mobile device and an earpiece. In the latter embodiment, the components of the earpiece 100 can be reused in different form factors for the master and slave devices.
As illustrated, the ASM 111 is configured to capture ambient sound and produce an electronic ambient signal 426, the ECR 125 is configured to pass, process, or play acoustic audio content 402 (e.g., audio content 321, mixed signal 323) to the ear canal, and the ECM 123 is configured to capture internal sound in the ear canal and produce an electronic internal signal 410. The acoustic management module 201 is configured to measure a background noise signal from the electronic ambient signal 426 or the electronic internal signal 410, and mix the electronic ambient signal 426 with the electronic internal signal 410 in a ratio dependent on the background noise signal to produce the mixed signal 323. The acoustic management module 201 filters the electronic ambient signal 426 and the electronic internal signal 410 based on a characteristic of the background noise signal using filter coefficients stored in memory or filter coefficients generated algorithmically.
In practice, the acoustic management module 201 mixes sounds captured at the ASM 111 and the ECM 123 to produce the mixed signal 323 based on characteristics of the background noise in the environment and a voice activity level. The characteristics can be a background noise level, a spectral profile, or an envelope fluctuation. The acoustic management module 201 manages echo feedback conditions affecting the voice activity level when the ASM 111, the ECM 123, and the ECR 125 are used together in a single earpiece for full-duplex communication, when the user is speaking to generate spoken voice (captured by the ASM 111 and ECM 123) and simultaneously listening to audio content (delivered by ECR 125).
In noisy ambient environments, the voice captured at the ASM 111 includes the background noise from the environment, whereas the internal voice created in the ear canal 131 and captured by the ECM 123 has fewer noise artifacts, since the noise is blocked by the occlusion of the earpiece 100 in the ear. It should be noted that the background noise can enter the ear canal if the earpiece 100 is not completely sealed. In this case, when the user is speaking, sound can leak through and cause an echo feedback condition that the acoustic management module 201 mitigates.
The acoustic management module 201 includes a first gain (G1) 304 applied to the AGC processed electronic ambient signal 426. A second gain (G2) 308 is applied to the VAD processed electronic internal signal 410. The acoustic management module 201 applies the first gain (G1) 304 and the second gain (G2) 308 as a function of the background noise level and the voice activity level to produce the mixed signal 323, where
G1 = f(BNL) + f(VAL) and G2 = f(BNL) + f(VAL)
As illustrated, the mixed signal 323 is the sum 310 of the G1 scaled electronic ambient signal and the G2 scaled electronic internal signal. The mixed signal 323 can then be transmitted to a second communication device (e.g. second cell phone, voice recorder, etc.) to receive the enhanced voice signal. The acoustic management module 201 can also play the mixed signal 323 back to the ECR for loopback listening. The loopback allows the user to hear himself or herself when speaking, as though the earpiece 100 and associated occlusion effect were absent. The loopback can also be mixed with the audio content 321 based on the background noise level, the VAL, and audio content level. The acoustic management module 201 can also account for an acoustic attenuation level of the earpiece, and account for the audio content level reproduced by the ECR when measuring background noise characteristics. Echo conditions created as a result of the loopback can be mitigated to ensure that the voice activity level is accurate.
Mixed signal = (1−β) * electronic ambient signal + β * electronic internal signal
where (1−β) is an external gain, (β) is an internal gain, and the mixing is performed with 0<β<1.
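A minimal sketch of the mixing equation above, assuming a simple linear mapping from the BNL to the internal gain β; the mapping endpoints below are hypothetical values, since the specification states only that the ratio depends on the background noise signal.

```python
def mix_signals(ambient, internal, bnl_db, bnl_lo=-10.0, bnl_hi=40.0):
    """Mix per Mixed = (1 - beta) * ambient + beta * internal, 0 < beta < 1.

    beta (the internal gain) grows with the Background Noise Level so
    that the ECM signal dominates in loud environments. The linear
    BNL-to-beta mapping and its endpoints are illustrative assumptions.
    """
    beta = (bnl_db - bnl_lo) / (bnl_hi - bnl_lo)
    beta = min(max(beta, 0.01), 0.99)        # keep strictly inside (0, 1)
    mixed = [(1.0 - beta) * a + beta * i for a, i in zip(ambient, internal)]
    return mixed, beta
```

In quiet conditions β sits near its lower clamp, so the mixed signal is dominated by the external gain (1−β) applied to the ambient signal, matching the behavior described for low background noise levels.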
As illustrated, the VAD produces a VAL that can be used to set a third gain 326 for the processed electronic ambient signal 311 and a fourth gain 328 for the processed electronic internal signal 312. For instance, when the VAL is low (e.g., 0-3), gain 326 and gain 328 are set low so as to attenuate the electronic ambient signal 311 and the electronic internal signal 312 when spoken voice is not detected. When the VAL is high (e.g., 7-10), gain 326 and gain 328 are set high so as to amplify the electronic ambient signal 311 and the electronic internal signal 312 when spoken voice is detected.
The gain scaled processed electronic ambient signal 311 and the gain scaled processed electronic internal signal 312 are then summed at adder 320 to produce the mixed signal 323. The mixed signal 323, as indicated previously, can be transmitted to another communication device, or provided as loopback to allow the user to hear himself or herself.
The echo suppressor 610 can be a Least Mean Squares (LMS) or Normalized Least Mean Squares (NLMS) adaptive filter that models an ear canal transfer function (ECTF) between the ECR 125 and the ECM 123. The echo suppressor 610 generates the modified electronic signal, e(n), which is provided as an input to the voice decision logic 620; e(n) is also termed the error signal e(n) of the echo suppressor 610. Briefly, the error signal e(n) 412 is used to update the filter H(w) to model the ECTF of the echo path. The error signal e(n) 412 closely approximates the user's spoken voice signal u(n) 607 when the echo suppressor 610 accurately models the ECTF.
In the configuration shown, the echo suppressor 610 minimizes the error between the filtered signal ỹ(n) and the electronic internal signal z(n) in an effort to obtain a transfer function H′ that is a best approximation to H(w) (i.e., the ECTF). H(w) represents the transfer function of the ear canal and models the echo response. (z(n)=u(n)+y(n)+v(n), where u(n) is the spoken voice 607, y(n) is the echo 609, and v(n) is background noise, if present, for instance due to improper sealing.)
During operation, the echo suppressor 610 monitors the mixed signal 323 delivered to the ECR 125 and produces an echo estimate ỹ(n) of an echo y(n) 609 based on the captured electronic internal signal 410 and the mixed signal 323. The echo suppressor 610, upon learning the ECTF by an adaptive process, can then suppress the echo y(n) 609 of the acoustic audio content 603 (e.g., output mixed signal 323) in the electronic internal signal z(n) 410. It subtracts the echo estimate ỹ(n) from the electronic internal signal 410 to produce the modified electronic internal signal e(n) 412.
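The adaptive process above can be sketched as a sample-by-sample NLMS update: the reference x(n) is the mixed signal sent to the ECR, z(n) is the ECM capture, and the filter outputs the echo estimate ỹ(n) and the modified internal signal e(n) = z(n) − ỹ(n). The tap count and step size below are illustrative choices, not values from the specification.

```python
import numpy as np

class NlmsEchoSuppressor:
    """NLMS adaptive filter modeling the ECTF between ECR and ECM."""

    def __init__(self, taps=32, mu=0.5):
        self.h = np.zeros(taps)           # adaptive ECTF model H'(w)
        self.x_hist = np.zeros(taps)      # recent reference samples
        self.mu = mu                      # NLMS step size
        self.frozen = False               # set True while voice is active

    def process(self, x_n, z_n):
        """Consume one reference sample x(n) and one ECM sample z(n);
        return the modified electronic internal signal e(n)."""
        self.x_hist = np.roll(self.x_hist, 1)
        self.x_hist[0] = x_n
        y_hat = float(self.h @ self.x_hist)           # echo estimate
        e_n = z_n - y_hat                             # residual = voice + error
        if not self.frozen:
            norm = self.x_hist @ self.x_hist + 1e-8   # regularized power
            self.h += self.mu * e_n * self.x_hist / norm
        return e_n
```

When the ECTF is well modeled and no voice is present, e(n) stays near zero; the `frozen` flag corresponds to the coefficient freeze applied when voice activity is detected.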
The voice decision logic 620 analyzes the modified electronic signal e(n) 412 and the electronic ambient signal 426 to produce a voice activity level 622, α. The voice activity level α identifies a probability that the user is speaking, for example, when the user is using the earpiece for two-way voice communication. The voice activity level 622 can also indicate a degree of voicing (e.g., periodicity, amplitude). When the user is speaking, voice is captured externally (such as from acoustic ambient signal 424) by the ASM 111 in the ambient environment and also by the ECM 123 in the ear canal. The voice decision logic provides the voice activity level α to the acoustic management module 201 as an input parameter for mixing the ASM 111 and ECM 123 signals. Briefly referring back to
For instance, at low background noise levels and low voice activity levels, the acoustic management module 201 amplifies the electronic ambient signal 426 from the ASM 111 relative to the electronic internal signal 410 from the ECM 123 in producing the mixed signal 323. At medium background noise levels and medium voice activity levels, the acoustic management module 201 attenuates low frequencies in the electronic ambient signal 426 and attenuates high frequencies in the electronic internal signal 410. At high background noise levels and high voice activity levels, the acoustic management module 201 amplifies the electronic internal signal 410 from the ECM 123 relative to the electronic ambient signal 426 from the ASM 111 in producing the mixed signal. The acoustic management module 201 can additionally apply frequency specific filters based on the characteristics of the background noise.
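The three mixing regimes above can be sketched as a gain and filter selection policy. The numeric thresholds (20/60 dB, VAL 3/7) and gain values below are hypothetical, since the specification states the regimes only qualitatively.

```python
def mixing_policy(bnl_db, val):
    """Select relative ASM/ECM emphasis and filtering per the three
    regimes: quiet, medium, and loud. Returns
    (asm_gain, ecm_gain, filter_action); all numbers are illustrative.
    """
    if bnl_db < 20 and val < 3:
        return 1.0, 0.5, "none"                       # favor ambient signal
    if bnl_db < 60 and val < 7:
        # Attenuate lows in the ASM path and highs in the ECM path.
        return 0.8, 0.8, "asm_highpass_ecm_lowpass"
    return 0.5, 1.0, "none"                           # loud: favor internal
```

The medium regime's complementary filtering reflects that ambient noise energy is concentrated at low frequencies while the occluded ear canal signal is band-limited at high frequencies.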
When the user is not speaking, the ECR 125 can pass through ambient sound captured at the ASM 111, thereby allowing the user to hear environmental ambient sounds. As previously discussed, the echo suppressor 610 models an ECTF and suppresses an echo of the mixed signal 323 that is looped back to the ECR 125 and captured by the ECM 123 (see dotted line Loop Back path). When the user is not speaking, the echo suppressor continually adapts to model the ECTF. When the ECTF is properly modeled, the echo suppressor 610 produces a modified internal electronic signal e(n) that is low in amplitude level (i.e., low in error), and it adapts its weights to keep the error signal low. When the user speaks, however, the echo suppressor initially produces a high-level e(n) (i.e., the error signal increases). This happens because the speaker's voice is uncorrelated with the audio signal played out of the ECR 125, which disrupts the echo suppressor's ECTF modeling ability.
The control unit 700, upon detecting a rise in e(n), freezes the weights of the echo suppressor 610 to produce the fixed filter H′(w) 738. Upon detecting the rise in e(n), the control unit also adjusts the gain 734 for the ASM signal and the gain 732 for the mixed signal 323 that is looped back to the ECR 125. The mixed signal 323 fed back to the ECR 125 permits the user to hear himself or herself speak. Although the weights of the first filter are frozen while the user is speaking, a second filter H′(w) 736 continually adapts its weights to generate a second e(n) that is used to determine a presence of spoken voice. That is, the control unit 700 monitors the second error signal e(n) produced by the second filter 736 for a presence of the spoken voice.
The first error signal e(n) (in a parallel path) generated by the first filter 738 is used as the mixed signal 323. The first error signal contains primarily the spoken voice since the ECTF model has been fixed by freezing the weights. That is, the second (adaptive) filter is used to monitor a presence of spoken voice, and the first (fixed) filter is used to generate the mixed signal 323.
Upon detecting a fall of e(n), the control unit restores the gains 734 and 732 and unfreezes the weights of the echo suppressor, and the first filter H′(w) returns to being an adaptive filter. The second filter H′(w) 736 remains on stand-by until spoken voice is detected, at which point the first filter H′(w) 738 becomes fixed and the second filter H′(w) 736 begins adaptation for producing the e(n) signal that is monitored for voice activity. Notably, the control unit 700 monitors e(n) from the first filter 738 or the second filter 736 for changes in amplitude to determine when spoken voice is detected based on the state of voice activity.
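The freeze-and-swap behavior of the control unit can be summarized as a small two-state machine keyed on the level of e(n). This is an illustrative sketch; the hysteresis thresholds and state names are assumptions, not values from the patent.

```python
# Sketch of the control unit's freeze/unfreeze logic: a rise in e(n) freezes
# the primary filter (voice active), a fall unfreezes it. RISE/FALL thresholds
# are hypothetical; real systems would derive them from running e(n) statistics.

RISE, FALL = 2.0, 0.5   # assumed e(n) level thresholds (hysteresis)

class EchoControlUnit:
    def __init__(self):
        self.voice_active = False   # primary filter weights frozen while True

    def update(self, e_level):
        """Feed the current |e(n)| level; returns True while voice is active."""
        if not self.voice_active and e_level > RISE:
            self.voice_active = True    # freeze H'(w) 738, shadow 736 adapts
        elif self.voice_active and e_level < FALL:
            self.voice_active = False   # restore gains, resume adaptation
        return self.voice_active
```

The gap between RISE and FALL provides hysteresis, so small fluctuations in e(n) do not toggle the filters rapidly.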
As illustrated, the mixing circuitry 816 (shown in center) receives an estimate of the background noise level 812 for mixing either or both the right earpiece ASM signal 802 and the left earpiece ASM signal 804 with the left earpiece ECM signal 806. (The right earpiece ECM signal can be used similarly.) An operating mode selection system 814 selects a switching 808 (e.g., 2-in, 1-out) between the left earpiece ASM signal 804 and the right earpiece ASM signal 802. As indicated earlier, the ASM signals and ECM signals can be first amplified with a gain system and then filtered with a filter system (the filtering may be accomplished using either analog or digital electronics or both). The audio input signals 802, 804, and 806 are therefore taken after this gain and filtering process, if any gain and filtering are used.
The Acoustic Echo Cancellation (AEC) system 810 can be activated with the operating mode selection system 814 when the mixed signal audio output 828 is reproduced with the ECR 125 in the same ear as the ECM 123 signal used to create the mixed signal audio output 828. The acoustic echo cancellation platform 810 can also suppress an echo of a spoken voice generated by the wearer of the earpiece 100. This guards against acoustic feedback (“howlback”).
The Voice Activated System (VOX) 818 in conjunction with a de-bouncing circuit 822 activates the electronic switch 826 to control the mixed signal output 828 from the mixing circuitry 816; the mixed signal is a combination of the left ASM signal 804 or right ASM signal 802 with the left ECM signal 806. Though not shown, the same arrangement applies for the other earphone device for the right ear, if present. Note that earphones can be used in both ears simultaneously. In a contra-lateral operating mode, as selected by operating mode selection system 814, the ASM and ECM signals are taken from opposite earphone devices, and the mix of these signals is reproduced with the ECR in the earphone that is contra-lateral to the ECM signal, and the same as the ASM signal.
For instance, in the contra-lateral operating mode, the ASM signal from the right earphone device is mixed with the ECM signal from the left earphone device, and the audio signal corresponding to a mix of these two signals is reproduced with the Ear Canal Receiver (ECR) in the right earphone device. The mixed signal audio output 828 therefore can contain a mix of the ASM and ECM signals when the user's voice is detected by the VOX. This mixed signal audio output can be used in loopback as a user Self-Monitor System to allow the user to hear their own voice as reproduced with the ECR 125, or it may be transmitted to another voice system, such as a mobile phone, walkie-talkie radio, etc. The VOX system 818 that activates the switch 826 may be one of a number of VOX embodiments.
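The ipsilateral/contra-lateral routing described above can be sketched as a small routing function. The mode names and side labels are illustrative conventions, not identifiers from the patent.

```python
# Sketch of operating-mode routing: given the side the ECM signal comes from,
# return which earpiece supplies the ASM signal and which ECR reproduces the
# mix. Mode and side names ('L'/'R') are illustrative, not from the patent.

def route(mode, ecm_side):
    """Return (asm_side, ecr_side) for an ECM signal taken from ecm_side."""
    other = 'R' if ecm_side == 'L' else 'L'
    if mode == 'ipsilateral':
        return ecm_side, ecm_side          # everything in the same ear
    elif mode == 'contralateral':
        return other, other                # ASM and ECR opposite the ECM
    raise ValueError("unknown mode: %s" % mode)
```

In the contra-lateral mode the ECR reproducing the mix sits in a different ear from the ECM feeding it, which itself reduces the direct echo path between them.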
In a particular operating mode, specified by unit 814, the conditioned ASM signal is mixed with the conditioned ECM signal with a ratio dependent on the BNL using audio signal mixing circuitry and the method described in either
As illustrated, modules 922-928 provide exemplary steps for calculating a base reference background noise level. The ECM or ASM audio input signal 922 can be buffered 923 in real-time to estimate signal parameters. An envelope detector 924 can estimate a temporal envelope of the ASM or ECM signal. A smoothing filter 925 can smooth abrupt transitions in the temporal envelope. (A smoothing window 926 can be stored in memory.) An optional peak detector 927 can remove outlier peaks to further smooth the envelope. An averaging system 928 can then estimate the average background noise level (BNL_1) from the smoothed envelope.
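The envelope-detect, smooth, and average chain of modules 922-928 can be sketched as below. This is an illustrative pipeline only; the moving-average smoother and window length are assumptions standing in for whatever smoothing window 926 holds.

```python
# Sketch of the base BNL estimate (modules 922-928): rectify to an envelope,
# smooth with a short moving average, then average and convert to dB.
# The 4-sample window is an assumed stand-in for smoothing window 926.
import math

def estimate_bnl(samples, win=4):
    env = [abs(s) for s in samples]                       # envelope detector 924
    sm = [sum(env[max(0, i - win + 1):i + 1]) /           # smoothing filter 925
          len(env[max(0, i - win + 1):i + 1])
          for i in range(len(env))]
    avg = sum(sm) / len(sm)                               # averaging system 928
    return 20.0 * math.log10(avg + 1e-12)                 # BNL_1 in dB
```

An outlier peak detector (module 927) would clip or discard envelope samples far above the running average before the final averaging step.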
If at step 929, it is determined that the signal from the ECM was used to calculate the BNL_1, an audio content level 932 (ACL) and noise reduction rating 933 (NRR) can be subtracted from the BNL_1 estimate to produce the updated BNL 931. This is done to account for the audio content level reproduced by the ECR 125 that delivers acoustic audio content to the earpiece 100, and to account for an acoustic attenuation level (i.e. Noise Reduction Rating 933) of the earpiece. For example, if the user is listening to music, the acoustic management module 201 takes into account the audio content level delivered to the user when measuring the BNL. If the ECM is not used to calculate the BNL at step 929, the previous real-time frame estimate of the BNL 930 is used.
At step 936, the acoustic management module 201 updates the BNL based on the current measured BNL and previous BNL measurements 935. For instance, the updated BNL 937 can be a weighted estimate 934 of previous BNL estimates according to BNL=w*previous BNL+(1−w)*current BNL, where 0<w<1. The BNL can be a slow time weighted average of the level of the ASM and/or ECM signals, and may be weighted using a frequency-weighting system, e.g. to give an A-weighted SPL level.
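The weighted BNL recursion, together with the ACL/NRR correction applied to ECM-derived estimates in the preceding paragraph, can be sketched directly. The default weight value is an assumption for illustration.

```python
# Sketch of the BNL update (step 936) and the ECM correction (step 931).
# The weight w (0 < w < 1) controls how slowly the estimate tracks; w=0.9
# is an assumed example value, not from the patent.

def update_bnl(prev_bnl, current_bnl, w=0.9):
    """Weighted recursion: BNL = w*previous BNL + (1 - w)*current BNL."""
    return w * prev_bnl + (1.0 - w) * current_bnl

def ecm_bnl(raw_bnl, acl_db, nrr_db):
    """Correct an ECM-derived estimate: subtract the audio content level (ACL)
    and the earpiece's Noise Reduction Rating (NRR), both in dB."""
    return raw_bnl - acl_db - nrr_db
```

A w close to 1 gives the slow time-weighted average the text describes; applied per frequency band after an A-weighting filter, the same recursion yields an A-weighted SPL estimate.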
As shown, the filter selection module 1045 can select one or more filters to apply to the microphone signals before mixing. For instance, the filter selection module 1045 can apply an ASM filter 1048 to the ASM signal 1047 and an ECM filter 1051 to the ECM signal 1052 based on the background noise level 1042. The ASM and ECM filters can be retrieved from memory based on the characteristics of the background noise. An operating mode 1046 can determine whether the ASM and ECM filters are look-up curves 1043 from memory or filters whose coefficients are determined in real-time based on the background noise levels.
Prior to mixing with summing unit 1049 to produce output signal 1050, the ASM signal 1047 is filtered with ASM filter 1048, and the ECM signal 1052 is filtered with ECM filter 1051. The filtering can be accomplished by a time-domain transversal filter (FIR-type filter), an IIR-type filter, or with frequency-domain multiplication. The filter can be adaptive (i.e. time-variant), and the filter coefficients can be updated on a frame-by-frame basis depending on the BNL. The filter coefficients for a particular BNL can be loaded from computer memory using pre-defined filter curves 1043, or can be calculated using a predefined algorithm 1044, or using a combination of both (e.g. using an interpolation algorithm to create a filter curve for both the ASM filter 1048 and ECM filter 1051 from predefined filters).
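The filter-then-sum structure can be sketched with a plain time-domain transversal (FIR) filter, one of the options the text names. The coefficients below are placeholders; a real system would select them per frame from the BNL-indexed curves 1043 or algorithm 1044.

```python
# Sketch of the filter-and-mix path: FIR-filter the ASM and ECM signals with
# their selected coefficient sets, then sum (summing unit 1049). Coefficient
# values here are illustrative placeholders.

def fir(x, h):
    """Time-domain transversal (FIR) filter: y[n] = sum_k h[k]*x[n-k]."""
    return [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
            for n in range(len(x))]

def filter_and_mix(asm, ecm, h_asm, h_ecm):
    a = fir(asm, h_asm)      # ASM filter 1048
    e = fir(ecm, h_ecm)      # ECM filter 1051
    return [ai + ei for ai, ei in zip(a, e)]   # output signal 1050
```

Updating h_asm/h_ecm once per frame gives the time-variant behavior described; an IIR or frequency-domain implementation would trade latency and cost differently but produce the same signal path.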
In particular,
For low BNLs (e.g. when BNL<L1 1170, where L1 is a predetermined level threshold 1171), a gain G1 is determined for both the ECM signal and the ASM signal. The gain G1 for the ECM signal is approximately zero; i.e. no ECM signal would be present in the output signal 1175. For the ASM input signal, G1 would be approximately unity for low BNL.
For medium BNLs (e.g. when BNL<L2 1172, where L2 is a predetermined level threshold 1173), a gain G2 is determined for both the ECM signal and the ASM signal. The gain G2 for the ECM signal and the ASM signal is approximately the same. In another embodiment, the gain G2 can be frequency dependent so as to emphasize low frequency content in the ECM signal and emphasize high frequency content in the ASM signal in the mix. For high BNLs, a gain G3 1165 is high for the ECM signal and low for the ASM signal. The switches 1166, 1167, and 1168 ensure that only one gain channel is applied to the ECM signal and ASM signal. The gain-scaled ASM signal and ECM signal are then summed at junction 1174 to produce the mixed output signal 1175.
Examples of filter response curves for three different BNLs are shown in
The basic trend for the ASM and ECM filter responses at different BNLs is that at low BNLs (e.g. <60 dBA), the ASM signal is primarily used for voice communication. At medium BNLs, the ASM and ECM signals are mixed in a ratio depending on the BNL, though the ASM filter can attenuate low frequencies of the ASM signal and attenuate high frequencies of the ECM signal. At high BNLs (e.g. >85 dB), the ASM filter attenuates most all the low frequencies of the ASM signal, and the ECM filter attenuates most all the high frequencies of the ECM signal. In another embodiment of the Acoustic Management System, the ASM and ECM filters may be adjusted by the spectral profile of the background noise measurement. For instance, if there is strong low-frequency noise in the ambient sound field of the user, then the ASM filter can reduce the low frequencies of the ASM signal accordingly, and boost the low frequencies of the ECM signal using the ECM filter.
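The spectral-profile-driven adjustment in the last sentence can be sketched as a band-energy comparison. Everything here is an assumption for illustration: the two-band split, the dominance ratio, and the specific gain values.

```python
# Sketch of spectral-profile-driven filter adjustment: if the ambient noise
# spectrum is dominated by low frequencies, cut the ASM low band and boost the
# ECM low band. Band split, ratio, and gain values are illustrative only.

def band_energies(spectrum, split):
    """Split a magnitude spectrum at bin `split` into (low, high) energies."""
    low = sum(m * m for m in spectrum[:split])
    high = sum(m * m for m in spectrum[split:])
    return low, high

def choose_low_band_gains(noise_spectrum, split, ratio=2.0):
    """Pick low-band gains for the ASM and ECM filters from the noise shape."""
    low, high = band_energies(noise_spectrum, split)
    if low > ratio * high:                      # low-frequency-dominated noise
        return {'asm_low': 0.25, 'ecm_low': 1.5}   # cut ASM lows, boost ECM lows
    return {'asm_low': 1.0, 'ecm_low': 1.0}        # leave the curves unchanged
```

The same comparison can be run per frame so the filter curves track changes in the ambient noise spectrum.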
Where applicable, the present embodiments of the invention can be realized in hardware, software or a combination of hardware and software. Any kind of computer system or other apparatus adapted for carrying out the methods described herein is suitable. A typical combination of hardware and software can be a mobile communications device with a computer program that, when being loaded and executed, can control the mobile communications device such that it carries out the methods described herein. Portions of the present method and system may also be embedded in a computer program product, which comprises all the features enabling the implementation of the methods described herein and which, when loaded in a computer system, is able to carry out these methods.
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all modifications, equivalent structures and functions of the relevant exemplary embodiments. Thus, the description of the invention is merely exemplary in nature and, thus, variations that do not depart from the gist of the invention are intended to be within the scope of the exemplary embodiments of the present invention. Such variations are not to be regarded as a departure from the spirit and scope of the present invention.
This application is a Continuation in Part of U.S. patent application Ser. No. 16/247,186, filed 14 Jan. 2019, which is a Continuation of U.S. patent application Ser. No. 13/956,767, filed on 1 Aug. 2013, now U.S. Pat. No. 10,182,289, which is a Continuation of U.S. patent application Ser. No. 12/170,171, filed on 9 Jul. 2008, now U.S. Pat. No. 8,526,645, which is a Continuation in Part of application Ser. No. 12/115,349, filed on May 5, 2008, now U.S. Pat. No. 8,081,780, which claims the priority benefit of Provisional Application No. 60/916,271, filed on May 4, 2007, the entire disclosures of all of which are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
3876843 | Moen | Apr 1975 | A |
4054749 | Suzuki et al. | Oct 1977 | A |
4088849 | Usami et al. | May 1978 | A |
4533795 | Baumhauer, Jr. et al. | Aug 1985 | A |
4809262 | Klose et al. | Feb 1989 | A |
4947440 | Bateman et al. | Aug 1990 | A |
5002151 | Oliveira et al. | Mar 1991 | A |
5131032 | Esaki et al. | Jul 1992 | A |
5208867 | Stites, III | May 1993 | A |
5251263 | Andrea | Oct 1993 | A |
5259033 | Goodings et al. | Nov 1993 | A |
5267321 | Langberg | Nov 1993 | A |
5276740 | Inanaga et al. | Jan 1994 | A |
5317273 | Hanson | May 1994 | A |
5327506 | Stites | Jul 1994 | A |
5524056 | Killion et al. | Jun 1996 | A |
5550923 | Hotvet | Aug 1996 | A |
5577511 | Killion | Nov 1996 | A |
5692059 | Kruger | Nov 1997 | A |
5796819 | Romesburg | Aug 1998 | A |
5903868 | Yuen et al. | May 1999 | A |
5923624 | Groeger | Jul 1999 | A |
5933510 | Bryant | Aug 1999 | A |
5946050 | Wolff | Aug 1999 | A |
5963901 | Vanatalo et al. | Oct 1999 | A |
5999828 | Sih et al. | Dec 1999 | A |
6005525 | Kivela | Dec 1999 | A |
6021207 | Puthuff | Feb 2000 | A |
6021325 | Hall | Feb 2000 | A |
6028514 | Lemelson | Feb 2000 | A |
6056698 | Iseberg | May 2000 | A |
6081732 | Suvanen et al. | Jun 2000 | A |
6118877 | Lindemann | Sep 2000 | A |
6118878 | Jones | Sep 2000 | A |
6163338 | Johnson et al. | Dec 2000 | A |
6163508 | Kim et al. | Dec 2000 | A |
6169912 | Zuckerman | Jan 2001 | B1 |
6226389 | Lemelson et al. | May 2001 | B1 |
6298323 | Kaemmerer | Oct 2001 | B1 |
6304648 | Chang | Oct 2001 | B1 |
6359993 | Brimhall | Mar 2002 | B2 |
6381572 | Ishimitsu et al. | Apr 2002 | B1 |
6400652 | Goldberg et al. | Jun 2002 | B1 |
6408272 | White | Jun 2002 | B1 |
6415034 | Hietanen | Jul 2002 | B1 |
6466666 | Eriksson | Oct 2002 | B1 |
6567524 | Svean et al. | May 2003 | B1 |
6570985 | Romesburg | May 2003 | B1 |
6606598 | Holthouse | Aug 2003 | B1 |
6631196 | Taenzer et al. | Oct 2003 | B1 |
6639987 | McIntosh | Oct 2003 | B2 |
6647368 | Nemirovski | Nov 2003 | B2 |
RE38351 | Iseberg et al. | Dec 2003 | E |
6661901 | Svean et al. | Dec 2003 | B1 |
6671379 | Nemirovski | Dec 2003 | B2 |
6728385 | Kval | Apr 2004 | B2 |
6738482 | Jaber | May 2004 | B1 |
6748238 | Lau | Jun 2004 | B1 |
6754359 | Svean et al. | Jun 2004 | B1 |
6760453 | Banno | Jul 2004 | B1 |
6804638 | Fiedler | Oct 2004 | B2 |
6804643 | Kiss | Oct 2004 | B1 |
6870807 | Chan et al. | Mar 2005 | B1 |
7003097 | Marchok et al. | Feb 2006 | B2 |
7003099 | Zhang | Feb 2006 | B1 |
7039195 | Svean et al. | May 2006 | B1 |
7039585 | Wilmot | May 2006 | B2 |
7050592 | Iseberg | May 2006 | B1 |
7072482 | Van Doorn et al. | Jul 2006 | B2 |
7107109 | Nathan et al. | Sep 2006 | B1 |
7158933 | Balan | Jan 2007 | B2 |
7177433 | Sibbald | Feb 2007 | B2 |
7209569 | Boesen | Apr 2007 | B2 |
7236580 | Sarkar et al. | Jun 2007 | B1 |
7280849 | Bailey | Oct 2007 | B1 |
7349353 | Guduru et al. | Mar 2008 | B2 |
7403608 | Auvray et al. | Jul 2008 | B2 |
7430299 | Armstrong et al. | Sep 2008 | B2 |
7433714 | Howard et al. | Oct 2008 | B2 |
7444353 | Chen | Oct 2008 | B1 |
7450730 | Bertg et al. | Nov 2008 | B2 |
7464029 | Visser | Dec 2008 | B2 |
7477756 | Wickstrom et al. | Jan 2009 | B2 |
7502484 | Ngia et al. | Mar 2009 | B2 |
7512245 | Rasmussen | Mar 2009 | B2 |
7529379 | Zurek | May 2009 | B2 |
7562020 | Le et al. | Jun 2009 | B2 |
7574917 | Von Dach | Aug 2009 | B2 |
7756285 | Sjursen et al. | Jul 2010 | B2 |
7778434 | Juneau et al. | Aug 2010 | B2 |
7783054 | Ringlstetter et al. | Aug 2010 | B2 |
7801318 | Bartel | Sep 2010 | B2 |
7817803 | Goldstein | Oct 2010 | B2 |
7853031 | Hamacher | Dec 2010 | B2 |
7903825 | Melanson | Mar 2011 | B1 |
7903826 | Boersma | Mar 2011 | B2 |
7920557 | Moote | Apr 2011 | B2 |
7936885 | Frank | May 2011 | B2 |
7953241 | Jorgensen et al. | May 2011 | B2 |
7983433 | Nemirovski | Jul 2011 | B2 |
7983907 | Visser | Jul 2011 | B2 |
7986802 | Ziller | Jul 2011 | B2 |
8014553 | Radivojevic et al. | Sep 2011 | B2 |
8018337 | Jones | Sep 2011 | B2 |
8027481 | Beard | Sep 2011 | B2 |
8045840 | Murata | Oct 2011 | B2 |
8060366 | Maganti et al. | Nov 2011 | B1 |
8081780 | Goldstein et al. | Dec 2011 | B2 |
8086093 | Stuckman | Dec 2011 | B2 |
8140325 | Kanevsky et al. | Mar 2012 | B2 |
8150044 | Goldstein | Apr 2012 | B2 |
8150084 | Jessen et al. | Apr 2012 | B2 |
8160261 | Schulein | Apr 2012 | B2 |
8160273 | Visser | Apr 2012 | B2 |
8162846 | Epley | Apr 2012 | B2 |
8189803 | Bergeron | May 2012 | B2 |
8218784 | Schulein | Jul 2012 | B2 |
8254591 | Goldstein | Aug 2012 | B2 |
8270629 | Bothra | Sep 2012 | B2 |
8275145 | Buck et al. | Sep 2012 | B2 |
8351634 | Khenkin | Jan 2013 | B2 |
8380521 | Maganti et al. | Feb 2013 | B1 |
8401178 | Chen et al. | Mar 2013 | B2 |
8401200 | Tiscareno | Mar 2013 | B2 |
8477955 | Engle | Jul 2013 | B2 |
8493204 | Wong et al. | Jul 2013 | B2 |
8577062 | Goldstein | Nov 2013 | B2 |
8600085 | Chen et al. | Dec 2013 | B2 |
8611560 | Goldstein | Dec 2013 | B2 |
8625818 | Stultz | Jan 2014 | B2 |
8718305 | Usher | May 2014 | B2 |
8750295 | Liron | Jun 2014 | B2 |
8774433 | Goldstein | Jul 2014 | B2 |
8798278 | Isabelle | Aug 2014 | B2 |
8798283 | Gauger et al. | Aug 2014 | B2 |
8838184 | Burnett et al. | Sep 2014 | B2 |
8851372 | Zhou | Oct 2014 | B2 |
8855343 | Usher | Oct 2014 | B2 |
8917894 | Goldstein | Dec 2014 | B2 |
8983081 | Bayley | Mar 2015 | B2 |
9013351 | Park | Apr 2015 | B2 |
9037458 | Park et al. | May 2015 | B2 |
9053697 | Park | Jun 2015 | B2 |
9112701 | Sano | Aug 2015 | B2 |
9113240 | Ramakrishman | Aug 2015 | B2 |
9123343 | Kurki-Suonio | Sep 2015 | B2 |
9135797 | Couper et al. | Sep 2015 | B2 |
9191740 | McIntosh | Nov 2015 | B2 |
9196247 | Harada | Nov 2015 | B2 |
9270244 | Usher et al. | Feb 2016 | B2 |
9384726 | Le Faucheur | Jul 2016 | B2 |
9491542 | Usher | Nov 2016 | B2 |
9508335 | Benattar et al. | Nov 2016 | B2 |
9584896 | Kennedy | Feb 2017 | B1 |
9628896 | Ichimura | Apr 2017 | B2 |
9684778 | Tharappel | Jun 2017 | B2 |
9936297 | Dennis | Apr 2018 | B2 |
9953626 | Gauger, Jr. et al. | Apr 2018 | B2 |
10142332 | Ravindran | Nov 2018 | B2 |
10499139 | Ganeshkumar | Dec 2019 | B2 |
10709339 | Lusted | Jul 2020 | B1 |
10970375 | Manikantan | Apr 2021 | B2 |
20010046304 | Rast | Nov 2001 | A1 |
20020076057 | Voix | Jun 2002 | A1 |
20020098878 | Mooney | Jul 2002 | A1 |
20020106091 | Furst et al. | Aug 2002 | A1 |
20020111798 | Huang | Aug 2002 | A1 |
20020118798 | Langhart et al. | Aug 2002 | A1 |
20020165719 | Wang | Nov 2002 | A1 |
20020193130 | Yang | Dec 2002 | A1 |
20030033152 | Cameron | Feb 2003 | A1 |
20030035551 | Light | Feb 2003 | A1 |
20030112947 | Cohen | Jun 2003 | A1 |
20030130016 | Matsuura | Jul 2003 | A1 |
20030152359 | Kim | Aug 2003 | A1 |
20030161097 | Le et al. | Aug 2003 | A1 |
20030165246 | Kvaloy et al. | Sep 2003 | A1 |
20030165319 | Barber | Sep 2003 | A1 |
20030198359 | Killion | Oct 2003 | A1 |
20040042103 | Mayer | Mar 2004 | A1 |
20040047486 | Van Doom et al. | Mar 2004 | A1 |
20040086138 | Kuth | May 2004 | A1 |
20040109668 | Stuckman | Jun 2004 | A1 |
20040109579 | Izuchi | Jul 2004 | A1 |
20040125965 | Alberth, Jr. et al. | Jul 2004 | A1 |
20040133421 | Burnett | Jul 2004 | A1 |
20040137969 | Nassimi | Jul 2004 | A1 |
20040190737 | Kuhnel et al. | Sep 2004 | A1 |
20040196992 | Ryan | Oct 2004 | A1 |
20040202340 | Armstrong | Oct 2004 | A1 |
20040203351 | Shearer et al. | Oct 2004 | A1 |
20040264938 | Felder | Dec 2004 | A1 |
20050028212 | Laronne | Feb 2005 | A1 |
20050058313 | Victorian et al. | Mar 2005 | A1 |
20050068171 | Kelliher | Mar 2005 | A1 |
20050069161 | Kaltenbach et al. | Mar 2005 | A1 |
20050071158 | Byford | Mar 2005 | A1 |
20050078838 | Simon | Apr 2005 | A1 |
20050096899 | Padhi et al. | May 2005 | A1 |
20050102133 | Rees | May 2005 | A1 |
20050102142 | Soufflet | May 2005 | A1 |
20050123146 | Voix et al. | Jun 2005 | A1 |
20050168824 | Travers | Aug 2005 | A1 |
20050207605 | Dehe | Sep 2005 | A1 |
20050227674 | Kopra | Oct 2005 | A1 |
20050281422 | Armstrong | Dec 2005 | A1 |
20050281423 | Armstrong | Dec 2005 | A1 |
20050283369 | Clauser et al. | Dec 2005 | A1 |
20050288057 | Lai et al. | Dec 2005 | A1 |
20060062395 | Klayman et al. | Mar 2006 | A1 |
20060064037 | Shalon et al. | Mar 2006 | A1 |
20060067512 | Boillot et al. | Mar 2006 | A1 |
20060067551 | Cartwright et al. | Mar 2006 | A1 |
20060083387 | Emoto | Apr 2006 | A1 |
20060083388 | Rothschild | Apr 2006 | A1 |
20060083390 | Kaderavek | Apr 2006 | A1 |
20060083395 | Allen et al. | Apr 2006 | A1 |
20060092043 | Lagassey | May 2006 | A1 |
20060135085 | Chen | Jun 2006 | A1 |
20060140425 | Berg | Jun 2006 | A1 |
20060153394 | Beasley | Jul 2006 | A1 |
20060167687 | Kates | Jul 2006 | A1 |
20060173563 | Borovitski | Aug 2006 | A1 |
20060182287 | Schulein | Aug 2006 | A1 |
20060188075 | Peterson | Aug 2006 | A1 |
20060188105 | Baskerville et al. | Aug 2006 | A1 |
20060195322 | Broussard et al. | Aug 2006 | A1 |
20060204014 | Isenberg et al. | Sep 2006 | A1 |
20060264176 | Hong | Nov 2006 | A1 |
20060287014 | Matsuura | Dec 2006 | A1 |
20070003090 | Anderson | Jan 2007 | A1 |
20070019817 | Siltmann | Jan 2007 | A1 |
20070021958 | Visser et al. | Jan 2007 | A1 |
20070036342 | Boillot et al. | Feb 2007 | A1 |
20070036377 | Stirnemann | Feb 2007 | A1 |
20070043563 | Comerford et al. | Feb 2007 | A1 |
20070014423 | Darbut | Apr 2007 | A1 |
20070086600 | Boesen | Apr 2007 | A1 |
20070092087 | Bothra | Apr 2007 | A1 |
20070100637 | McCune | May 2007 | A1 |
20070143820 | Pawlowski | Jun 2007 | A1 |
20070189544 | Rosenberg | Jun 2007 | A1 |
20070160243 | Dijkstra | Jul 2007 | A1 |
20070177741 | Williamson | Aug 2007 | A1 |
20070223717 | Boersma | Sep 2007 | A1 |
20070253569 | Bose | Nov 2007 | A1 |
20070255435 | Cohen | Nov 2007 | A1 |
20070291953 | Ngia et al. | Dec 2007 | A1 |
20080019539 | Patel et al. | Jan 2008 | A1 |
20080037801 | Alves et al. | Feb 2008 | A1 |
20080063228 | Mejia | Mar 2008 | A1 |
20080130908 | Cohen | Jun 2008 | A1 |
20080137873 | Goldstein | Jun 2008 | A1 |
20080145032 | Lindroos | Jun 2008 | A1 |
20080159547 | Schuler | Jul 2008 | A1 |
20080165988 | Terlizzi et al. | Jul 2008 | A1 |
20080205664 | Kim et al. | Aug 2008 | A1 |
20080221880 | Cerra et al. | Sep 2008 | A1 |
20090010444 | Goldstein et al. | Jan 2009 | A1 |
20090010456 | Goldstein et al. | Jan 2009 | A1 |
20090024234 | Archibald | Jan 2009 | A1 |
20090034748 | Sibbald | Feb 2009 | A1 |
20090076821 | Brenner | Mar 2009 | A1 |
20090085873 | Betts | Apr 2009 | A1 |
20090122996 | Klein | May 2009 | A1 |
20090286515 | Othmer | May 2009 | A1 |
20090147966 | McIntosh et al. | Jun 2009 | A1 |
20100061564 | Clemow et al. | Mar 2010 | A1 |
20100119077 | Platz | May 2010 | A1 |
20100296668 | Lee et al. | Nov 2010 | A1 |
20100316033 | Atwal | Dec 2010 | A1 |
20100328224 | Kerr et al. | Dec 2010 | A1 |
20110055256 | Phillips | Mar 2011 | A1 |
20110096939 | Ichimura | Apr 2011 | A1 |
20110103606 | Silber | May 2011 | A1 |
20110116643 | Tiscareno | May 2011 | A1 |
20110187640 | Jacobsen et al. | Aug 2011 | A1 |
20110264447 | Visser et al. | Oct 2011 | A1 |
20110293103 | Park et al. | Dec 2011 | A1 |
20120170412 | Calhoun | Jul 2012 | A1 |
20120184337 | Burnett et al. | Jul 2012 | A1 |
20130051543 | McDysan et al. | Feb 2013 | A1 |
20140023203 | Rotschild | Jan 2014 | A1 |
20140089672 | Luna | Mar 2014 | A1 |
20140122092 | Goldstein | May 2014 | A1 |
20140163976 | Park | Jun 2014 | A1 |
20140241553 | Tiscareno et al. | Aug 2014 | A1 |
20140370838 | Kim | Dec 2014 | A1 |
20150215701 | Usher | Jul 2015 | A1 |
20150288823 | Burnett et al. | Oct 2015 | A1 |
20160058378 | Wisby et al. | Mar 2016 | A1 |
20160104452 | Guan et al. | Apr 2016 | A1 |
20170345406 | Georgiou et al. | Nov 2017 | A1 |
20190038224 | Zhang | Feb 2019 | A1 |
20190227767 | Yang | Jul 2019 | A1 |
20210211801 | Perez | Jul 2021 | A1 |
Number | Date | Country |
---|---|---|
203761556 | Aug 2014 | CN |
105554610 | Jan 2019 | CN |
105637892 | Mar 2020 | CN |
1385324 | Jan 2004 | EP |
1401240 | Mar 2004 | EP |
1519625 | Mar 2005 | EP |
1519625 | May 2005 | EP |
1640972 | Mar 2006 | EP |
2963647 | Jul 2019 | EP |
2273616 | Dec 2007 | ES |
H0877468 | Mar 1996 | JP |
H10162283 | Jun 1998 | JP |
3353701 | Dec 2002 | JP |
2013501969 | Jan 2013 | JP |
6389232 | Sep 2018 | JP |
20080111004 | Dec 2008 | KR |
M568011 | Oct 2018 | TW |
WO9326085 | Dec 1993 | WO |
2004114722 | Dec 2004 | WO |
2006037156 | Apr 2006 | WO |
2006054698 | May 2006 | WO |
2007092660 | Aug 2007 | WO |
2008050583 | May 2008 | WO |
WO2008077981 | Jul 2008 | WO |
2009023784 | Feb 2009 | WO |
2012097150 | Jul 2012 | WO |
Entry |
---|
Olwal, A. and Feiner S. Interaction Techniques Using Prosodic Features of Speech and Audio Localization. Proceedings of IUI 2005 (International Conference on Intelligent User Interfaces), San Diego, CA, Jan. 9-12, 2005, p. 284-286. |
Bernard Widrow, John R. Glover Jr., John M. McCool, John Kaunitz, Charles S. Williams, Robert H. Hearn, James R. Zeidler, Eugene Dong Jr, and Robert C. Goodlin, Adaptive Noise Cancelling: Principles and Applications, Proceedings of the IEEE, vol. 63, No. 12, Dec. 1975. |
Mauro Dentino, John M. McCool, and Bernard Widrow, Adaptive Filtering in the Frequency Domain, Proceedings of the IEEE, vol. 66, No. 12, Dec. 1978. |
Samsung Electronics Co., Ltd., and Samsung Electronics, America, Inc., v. Staton Techiya, LLC, IPR2022-00282, Dec. 21, 2021. |
Samsung Electronics Co., Ltd., and Samsung Electronics, America, Inc., v. Staton Techiya, LLC, IPR2022-00242, Dec. 23, 2021. |
Samsung Electronics Co., Ltd., and Samsung Electronics, America, Inc., v. Staton Techiya, LLC, IPR2022-00243, Dec. 23, 2021. |
Samsung Electronics Co., Ltd., and Samsung Electronics, America, Inc., v. Staton Techiya, LLC, IPR2022-00234, Dec. 21, 2021. |
Samsung Electronics Co., Ltd., and Samsung Electronics, America, Inc., v. Staton Techiya, LLC, IPR2022-00253, Jan. 18, 2022. |
Samsung Electronics Co., Ltd., and Samsung Electronics, America, Inc., v. Staton Techiya, LLC, IPR2022-00324, Jan. 13, 2022. |
Samsung Electronics Co., Ltd., and Samsung Electronics, America, Inc., v. Staton Techiya, LLC, IPR2022-00281, Jan. 18, 2022. |
Samsung Electronics Co., Ltd., and Samsung Electronics, America, Inc., v. Staton Techiya, LLC, IPR2022-00302, Jan. 13, 2022. |
Samsung Electronics Co., Ltd., and Samsung Electronics, America, Inc., v. Staton Techiya, LLC, IPR2022-00369, Feb. 18, 2022. |
Samsung Electronics Co., Ltd., and Samsung Electronics, America, Inc., v. Staton Techiya, LLC, IPR2022-00388, Feb. 18, 2022. |
Samsung Electronics Co., Ltd., and Samsung Electronics, America, Inc., v. Staton Techiya, LLC, IPR2022-00410, Feb. 18, 2022. |
Samsung Electronics Co., Ltd., and Samsung Electronics, America, Inc., v. Staton Techiya, LLC, IPR2022-01078, Jun. 9, 2022. |
Samsung Electronics Co., Ltd., and Samsung Electronics, America, Inc., v. Staton Techiya, LLC, IPR2022-01099, Jun. 9, 2022. |
Samsung Electronics Co., Ltd., and Samsung Electronics, America, Inc., v. Staton Techiya, LLC, IPR2022-01106, Jun. 9, 2022. |
Samsung Electronics Co., Ltd., and Samsung Electronics, America, Inc., v. Staton Techiya, LLC, IPR2022-01098, Jun. 9, 2022. |
U.S. Appl. No. 90/015,146, Samsung Electronics Co., Ltd. and Samsung Electronics, America, Inc., Request for Ex Parte Reexamination of U.S. Pat. No. 10,979,836. |
U.S. Appl. No. 90/019,169, Samsung Electronics Co., Ltd. and Samsung Electronics, America, Inc., Request for Ex Parte Reexamination of U.S. Pat. No. 11,244,666. |
Oshana, DSP Software Development Techniques for Embedded and Real-Time Systems, Introduction, pp. xi-xvii (2006). |
Mulgrew et al., Digital Signal Processing: Concepts and Applications, Introduction, pp. xxiii-xxvi (2nd ed. 2002). |
Ronald M. Aarts, Roy Irwan, and Augustus J. E. Janssen, Efficient Tracking of the Cross-Correlation Coefficient, IEEE Transactions on Speech and Audio Processing, vol. 10, No. 6, Sep. 2002. |
Robert Oshana, DSP Software Development Techniques for Embedded and Real-Time Systems, Embedded Technology Series, Elsevier Inc., 2006, ISBN-10: 0-7506-7759-7. |
Number | Date | Country | |
---|---|---|---|
20210219051 A1 | Jul 2021 | US |
Number | Date | Country | |
---|---|---|---|
60916271 | May 2007 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 13956767 | Aug 2013 | US |
Child | 16247186 | US | |
Parent | 12170171 | Jul 2008 | US |
Child | 13956767 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 16247186 | Jan 2019 | US |
Child | 17215804 | US | |
Parent | 12115349 | May 2008 | US |
Child | 12170171 | US |