The present invention relates to a device that monitors sound directed to an occluded ear, and more particularly, though not exclusively, to an earpiece and method of operating an earpiece that detects acute sounds and allows the acute sounds to be reproduced in an ear canal of the occluded ear.
Since the advent of industrialization over two centuries ago, the human auditory system has been increasingly stressed to tolerate high noise levels to which it had hitherto been unexposed. Recently, the causes of hearing damage have been researched intensively, and models for predicting hearing loss have been developed and verified with empirical data from decades of scientific research. Yet it can be strongly argued that the danger of permanent hearing damage is more present in our daily lives than ever, and that sound levels from personal audio systems in particular (i.e., from portable audio devices), live sound events, and the urban environment are a ubiquitous threat to healthy auditory functioning across the global population.
Environmental noise is constantly present in industrialized societies given the ubiquity of external sound intrusions. Examples include people talking on their cell phones, blaring music in health clubs, and the constant hum of air conditioning systems in schools and office buildings.
Excess noise exposure can also induce auditory fatigue, possibly compromising a person's listening abilities. On a daily basis, people are exposed to various environmental sounds and noises, such as the sounds from traffic, construction, and industry.
To combat the undesired cacophony of annoying sounds, people are arming themselves with portable audio playback devices to drown out intrusive noise. The majority of devices providing the person with audio content do so using insert (or in-ear) earbuds. Because the earbuds generally provide little to no ambient sound isolation, they deliver sound directly to the ear canal at high sound levels over the background noise. Moreover, when people wear earbuds (or headphones) to listen to music, or engage in a call using a telephone, they can effectively impair their auditory judgment and their ability to discriminate between sounds. With such devices, the person is immersed in the audio experience and generally less likely to hear warning sounds within their environment. In some cases, the user may even turn up the volume to hear their personal audio over environmental noises. This also puts them at high sound exposure risk, which can cause long-term hearing damage.
With earbuds, personal audio reproduction levels can reach in excess of 100 dB. This is enough to exceed recommended daily sound exposure levels in less than a minute and to cause permanent acoustic trauma. Furthermore, rising population densities have continually increased sound levels in society. According to researchers, 40% of the European community is continuously exposed to transportation noise of 55 dBA, and 20% is exposed to levels greater than 65 dBA. The 65 dBA level is considered by the World Health Organization to be intrusive or annoying and, as mentioned, can lead users of personal audio devices to increase reproduction levels to compensate for ambient noise.
A need therefore exists for enhancing the user's ability to listen in the environment without harming his or her hearing faculties.
Embodiments in accordance with the present invention provide a method and device for acute sound detection and reproduction.
In a first embodiment, an earpiece can include an Ambient Sound Microphone (ASM) to capture ambient sound, at least one Ear Canal Receiver (ECR) to deliver audio to an ear canal, and a processor operatively coupled to the ASM and the at least one ECR. The processor can monitor a change in the ambient sound level to detect an acute sound from the change. The acute sound can be reproduced within the ear canal via the ECR responsive to detecting the acute sound.
The processor can pass (transmit) sound from the ASM directly to the ECR to produce sound within the ear canal at a same sound pressure level (SPL) as the acute sound measured at an entrance to the ear canal. In one arrangement, the processor can maintain an approximately constant ratio between an audio content level (ACL) and an internal ambient sound level (iASL) measured within the ear canal. In one arrangement, the processor can measure an external ambient sound level (xASL) of the ambient sound with the ASM and subtract an attenuation level of the earpiece from the xASL to estimate the internal ambient sound level (iASL) within the ear canal.
The earpiece can further include an Ear Canal Microphone (ECM) to measure an ear canal sound level (ECL) within the ear canal. In this configuration, the processor can estimate the internal ambient sound level (iASL) within the ear canal by subtracting an estimated audio content sound level (ACL) from the ECL. For instance, the processor can measure a voltage level of the audio content sent to the ECR, and apply a transfer function of the ECR to convert the voltage level to the ACL. The processor can be located external to the earpiece on a portable computing device.
In a second embodiment, an earpiece can comprise an Ambient Sound Microphone (ASM) to capture ambient sound, at least one Ear Canal Receiver (ECR) to deliver audio to an ear canal, a processor operatively coupled to the ASM and the at least one ECR, and an audio interface operatively coupled to the processor to receive audio content. The processor can monitor a change in the ambient sound level to detect an acute sound from the change, adjust an audio content level (ACL) of the audio content delivered to the ear canal, and reproduce the acute sound within the ear canal via the ECR responsive to detecting the acute sound and based on the ACL.
The audio interface can receive the audio content from at least one among a portable music player, a cell phone, and a portable communication device. During operation, the processor can maintain an approximately constant ratio between an audio content level (ACL) and an internal ambient sound level (iASL) measured within the ear canal. In one arrangement, the processor can mute the audio content and pass the acute sound to the ECR for reproducing the acute sound within the ear canal. In another arrangement, the processor can amplify the acute sound with respect to the audio content level (ACL).
In a third embodiment, a method for acute sound detection and reproduction can include the steps of measuring an ambient sound level (xASL) of ambient sound external to an ear canal at least partially occluded by an earpiece, monitoring a change in the xASL for detecting an acute sound, and reproducing the acute sound within the ear canal responsive to detecting the acute sound. The reproducing can include enhancing the acute sound over the ambient sound. The step of reproducing can produce sound within the ear canal at a same sound pressure level (SPL) as the acute sound measured at an entrance to the ear canal.
The method can further include receiving audio content from an audio interface that is directed to the ear canal, and maintaining an approximately constant ratio between an audio content level (ACL) and an internal ambient sound level (iASL) measured within the ear canal. The ACL can be determined by measuring a voltage level of the audio content sent to the ECR, and applying a transfer function of the ECR to convert the voltage level to the ACL. The method can further include measuring an Ear Canal Level (ECL) within the ear canal, and subtracting the ACL from the ECL to estimate the iASL. The iASL can also be estimated by subtracting an attenuation level of the earpiece from the xASL.
In a fourth embodiment, a method for acute sound detection and reproduction suitable for use with an earpiece can include the steps of measuring an external ambient sound level (xASL) outside an ear canal at least partially occluded by the earpiece, monitoring a change in the xASL for detecting an acute sound, estimating a proximity of the acute sound, and reproducing the acute sound within the ear canal responsive to detecting the acute sound based on the proximity. The step of estimating a proximity can include performing a cross correlation analysis between at least two microphones, identifying a peak in the cross correlation and an associated time lag, and determining the direction from the associated time lag. The method can further include identifying whether the acute sound is a vocal signal produced by a user operating the earpiece or a sound source external from the user.
In a fifth embodiment, a method for acute sound detection and reproduction suitable for use with an earpiece can include measuring an external ambient sound level (xASL) due to ambient sound outside of an ear canal at least partially occluded by the earpiece, measuring an internal ambient sound level (iASL) due to residual ambient sound within the ear canal at least partially occluded by the earpiece, monitoring a high frequency change between the xASL and the iASL with respect to a low frequency change between the xASL and the iASL for detecting an acute sound, and reproducing the xASL within the ear canal responsive to detecting the high frequency change. The method can further include determining a proximity of a sound source producing the acute sound.
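By way of illustration only, the following Python sketch shows one way the band-split monitoring of this embodiment might be realized; the sampling rate, band edges, frame handling, and the 6 dB decision threshold are assumptions made for the example and are not specified herein.

```python
import numpy as np
from scipy.signal import butter, sosfilt

FS = 16_000  # assumed sampling rate (Hz)
LOW_SOS = butter(4, 500, btype="low", fs=FS, output="sos")      # low band: < 500 Hz
HIGH_SOS = butter(4, 2_000, btype="high", fs=FS, output="sos")  # high band: > 2 kHz

def band_levels_db(frame: np.ndarray) -> tuple[float, float]:
    """Return the (low-band, high-band) RMS levels of one frame in dB."""
    rms_db = lambda x: 20 * np.log10(np.sqrt(np.mean(x ** 2)) + 1e-12)
    return rms_db(sosfilt(LOW_SOS, frame)), rms_db(sosfilt(HIGH_SOS, frame))

def acute_sound_detected(asm_frame: np.ndarray, ecm_frame: np.ndarray,
                         threshold_db: float = 6.0) -> bool:
    """Compare the external-vs-internal (xASL - iASL) level difference in the
    high band against the same difference in the low band; a high-frequency
    change that outpaces the low-frequency change flags an acute sound."""
    x_lo, x_hi = band_levels_db(asm_frame)
    i_lo, i_hi = band_levels_db(ecm_frame)
    return (x_hi - i_hi) - (x_lo - i_lo) > threshold_db
```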
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the invention, its application, or uses.
Processes, techniques, apparatus, and materials as known by one of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the enabling description where appropriate, for example the fabrication and use of transducers. Additionally, in at least one exemplary embodiment, the sampling rate of the transducers can be varied to pick up pulses of sound, for example pulses shorter than 50 milliseconds.
In all of the examples illustrated and discussed herein, any specific values, for example the sound pressure level change, should be interpreted to be illustrative only and non-limiting. Thus, other examples of the exemplary embodiments could have different values.
Note that similar reference numerals and letters refer to similar items in the following figures, and thus once an item is defined in one figure, it may not be discussed for following figures.
Note that herein when referring to correcting or preventing an error or damage (e.g., hearing damage), a reduction of the damage or error and/or a correction of the damage or error are intended.
At least one exemplary embodiment of the invention is directed to an earpiece for ambient sound monitoring and warning detection, an example of which is described below.
Earpiece 100 includes an Ambient Sound Microphone (ASM) 111 to capture ambient sound, an Ear Canal Receiver (ECR) 125 to deliver audio to an ear canal 131, and an Ear Canal Microphone (ECM) 123 to assess a sound exposure level within the ear canal. The earpiece 100 can partially or fully occlude the ear canal 131 to provide various degrees of acoustic isolation. The assembly is designed to be inserted into the user's ear canal 131, and to form an acoustic seal with the walls 129 of the ear canal at a location 127 between the entrance 117 to the ear canal and the tympanic membrane (or ear drum) 133. Such a seal is typically achieved by means of a soft and compliant housing of assembly 113. Such a seal is pertinent to the performance of the system in that it creates a closed cavity 131 of approximately 5 cc between the in-ear assembly 113 and the tympanic membrane 133. As a result of this seal, the ECR (speaker) 125 is able to generate a full range bass response when reproducing sounds for the user. This seal also serves to significantly reduce the sound pressure level at the user's eardrum 133 resulting from the sound field at the entrance to the ear canal. This seal is also the basis for the sound isolating performance of the electro-acoustic assembly 113.
Located adjacent to the ECR 125 is the ECM 123, which is acoustically coupled to the (closed) ear canal cavity 131. One of its functions is to measure the sound pressure level in the ear canal cavity 131 as a part of testing the hearing acuity of the user, as well as to confirm the integrity of the acoustic seal and the working condition of the ECM itself and the ECR. The ASM 111 is housed in the assembly 113 and monitors sound pressure at the entrance to the occluded or partially occluded ear canal. All transducers shown can receive or transmit audio signals to a processor 121 that undertakes audio signal processing and provides a transceiver for audio via the wired or wireless communication path 119.
The earpiece 100 can include an audio interface 212 operatively coupled to the processor 206 to receive audio content, for example from a media player or cell phone, and deliver the audio content to the processor 206. The processor 206 responsive to detecting acute sounds can adjust the audio content and pass the acute sounds directly to the ear canal. For instance, the processor can lower a volume of the audio content responsive to detecting an acute sound for transmitting the acute sound to the ear canal. The processor 206 can also actively monitor the sound exposure level inside the ear canal and adjust the audio to within a safe and subjectively optimized listening level range.
The earpiece 100 can further include a transceiver 204 that can support singly or in combination any number of wireless access technologies including without limitation Bluetooth™, Wireless Fidelity (WiFi), Worldwide Interoperability for Microwave Access (WiMAX), and/or other short or long range communication protocols. The transceiver 204 can also provide support for dynamic downloading over-the-air to the earpiece 100. It should be noted that next generation access technologies can also be applied to the present disclosure.
The power supply 210 can utilize common power management technologies such as replaceable batteries, supply regulation technologies, and charging system technologies for supplying energy to the components of the earpiece 100 and to facilitate portable applications. A motor (not shown), driven by a single supply motor driver coupled to the power supply 210, can improve sensory input via haptic vibration. As an example, the processor 206 can direct the motor to vibrate responsive to an action, such as a detection of a warning sound or an incoming voice call.
The earpiece 100 can further represent a single operational device or a family of devices configured in a master-slave arrangement, for example, a mobile device and an earpiece. In the latter embodiment, the components of the earpiece 100 can be reused in different form factors for the master and slave devices.
The method 300 can start in a state wherein the earpiece 100 has been inserted and powered on. As shown in step 302, the earpiece 100 can monitor the environment for ambient sounds received at the ASM 111. Ambient sounds correspond to sounds within the environment such as the sound of traffic noise, street noise, conversation babble, or any other acoustic sound. Ambient sounds can also correspond to industrial sounds present in an industrial setting, such as factory noise, lifting vehicles, automobiles, and robots to name a few.
Although the earpiece 100 when inserted in the ear can partially occlude the ear canal, the earpiece 100 may not completely attenuate the ambient sound. During the monitoring of ambient sounds in the environment, the earpiece 100 also monitors ear canal levels via the ECM 123 as shown in step 304. The passive aspect of the physical earpiece 100, due to its mechanical and sealing properties, can provide upwards of 22-26 dB of noise reduction. However, portions of the ambient sound that exceed this attenuation can still pass through the earpiece 100 into the ear canal. For instance, high energy low frequency sounds are not completely attenuated. Accordingly, residual sound may be resident in the ear canal and heard by the user.
Sound within the ear canal 131 can also be provided via the audio interface 212. The audio interface 212 can receive the audio content from at least one among a portable music player, a cell phone, and a portable communication device. The audio interface 212 responsive to user input can direct sound to the ECR 125. For instance, a user can elect to play music through the earpiece 100 which can be audibly presented to the ear canal 131 for listening. The user can also elect to receive voice communications (e.g., cell phone, voice mail, messaging) via the earpiece 100. For instance, the user can receive audio content for voice mail or a phone call directed to the ear canal via the ECR 125. As shown in step 304, the earpiece 100 can monitor ear canal levels due to ambient sound and user selected sound via the ECM 123.
If at step 306 audio is playing (e.g., music, cell phone, etc.), the earpiece 100 adjusts a sound level of the audio based on the ambient sound to maintain a constant signal to noise ratio with respect to the ear canal level at step 308. For instance, the processor 206 can selectively amplify or attenuate audio content received from the audio interface 212 before it is delivered to the ECR 125. The processor 206 estimates a background noise level from the ambient sound received at the ASM 111, and adjusts the audio level of the delivered audio content (e.g., music, cell phone audio) to maintain a constant ratio of signal (e.g., audio content) to noise (e.g., ambient sound). By way of example, if the background noise level increases due to traffic sounds, the earpiece 100 automatically increases the volume of the audio content. Similarly, if the background noise level decreases, the earpiece 100 automatically decreases the volume of the audio content. The processor 206 can track variations in the ambient sound level to adjust the audio content level.
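By way of illustration only, the following sketch outlines one form such a constant signal-to-noise volume control could take; the target ratio, smoothing factor, and initial noise estimate are assumptions made for the example.

```python
import numpy as np

def level_db(frame: np.ndarray) -> float:
    """RMS level of a frame in dB (relative to digital full scale)."""
    return 20 * np.log10(np.sqrt(np.mean(frame ** 2)) + 1e-12)

class ConstantSnrVolume:
    """Scale audio content so its level tracks the ambient level plus a
    fixed target ratio, mirroring steps 306-308."""
    def __init__(self, target_snr_db: float = 15.0, smooth: float = 0.9):
        self.target_snr_db = target_snr_db
        self.smooth = smooth
        self.bnl_db = -60.0  # assumed initial background-noise estimate

    def process(self, audio_frame: np.ndarray, asm_frame: np.ndarray) -> np.ndarray:
        # Smooth the ambient estimate so brief transients do not pump the volume.
        self.bnl_db = self.smooth * self.bnl_db + (1 - self.smooth) * level_db(asm_frame)
        gain_db = (self.bnl_db + self.target_snr_db) - level_db(audio_frame)
        return audio_frame * 10 ** (gain_db / 20)
```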
If at step 310 an acute sound is detected within the ambient sound, the earpiece 100 activates "sound pass-through" to reproduce the ambient sound in the ear canal by way of the ECR 125. The processor 206 permits the ambient sound to pass through the ECR 125 to the ear canal 131 directly, for example by replicating the ambient sound external to the ear canal within the ear canal. This is important if the acute sound corresponds to an onset of a warning sound such as a bell, a car, or another object. In such regard, the ambient sound containing the acute sound is presented directly to the ear canal in an original form. Although the earpiece 100 inherently provides attenuation due to the physical and mechanical aspects of the earpiece and its sealing properties, the processor 206 can reproduce the ambient sound within the ear canal 131 at an original amplitude level and frequency content to provide "transparency". For instance, the processor 206 measures and applies a transfer function of the ear canal to the passed ambient sound signal to provide an accurate reproduction of the ambient sound within the ear canal.
In one embodiment, the earpiece 100 looks for temporal and spectral characteristics in the ambient sound for detecting acute sounds. For instance, as will be explained ahead, the processor 206 looks for an abrupt change in the Sound Pressure Level (SPL) of an ambient sound across a small time period. The processor 206 can also detect abrupt magnitude changes across frequency sub-bands (e.g., filter-bank, FFT, etc.). Notably, the processor 206 can search for onsets (e.g., a fast rising amplitude wave-front) of an acute sound or other abrupt feature characteristics without initially attempting to identify or recognize the sound source. That is, the processor 206 is actively listening for a presence of acute sounds before identifying the type of sound source.
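By way of illustration only, the following sketch shows one way onset detection across FFT sub-bands might be implemented; the band count and the 12 dB jump threshold are assumptions made for the example.

```python
import numpy as np

def subband_levels_db(frame: np.ndarray, n_bands: int = 8) -> np.ndarray:
    """Short-term magnitude levels (dB) in n_bands FFT sub-bands."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    bands = np.array_split(spectrum, n_bands)
    return np.array([20 * np.log10(np.sqrt(np.mean(b ** 2)) + 1e-12) for b in bands])

def is_onset(prev_frame: np.ndarray, frame: np.ndarray, jump_db: float = 12.0) -> bool:
    """Flag a fast-rising wave-front: any sub-band jumping by more than
    jump_db between consecutive short frames counts as an acute onset,
    before any attempt to classify the sound source."""
    return bool(np.any(subband_levels_db(frame) - subband_levels_db(prev_frame) > jump_db))
```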
Even though the earpiece inherently provides a certain attenuation level (e.g., a noise reduction rating), the processor 206, in view of the ear canal level (ECL) and the ambient sound level (ASL), can reproduce the ambient sound within the ear canal to allow the user to make an informed decision with regard to the acute sound. The ECL corresponds to all sounds within the ear canal and includes the internal ambient sound level (iASL) resulting from residual ambient sound passing through the earpiece and the audio content level (ACL) resulting from the audio delivered via the audio interface 212. Briefly, the xASL is the ambient sound level external to the ear canal and the earpiece, and the iASL is the level of the residual ambient sound that remains within the ear canal. The following equations describe the relationship among the terms:
iASL = xASL − NRR (EQ 1)
iASL = ECL − ACL (EQ 2)
As EQ 1 shows, the iASL is the difference between the external ambient sound (xASL) and the attenuation of the earpiece (Noise Reduction Rating) due to the physical and sealing properties of the earpiece. The processor 206 can measure an external ambient sound level (xASL) of the ambient sound with the ASM 111 and subtracts an attenuation level of the earpiece (NRR) from the xASL to estimate the internal ambient sound level (iASL) within the ear canal.
EQ 2 is an alternate, or supplemental, method for calculating the iASL as the difference between the ECL and the Audio Content Level (ACL). By way of the ECM 123, the processor 206 can estimate an internal ambient sound level (iASL) within the ear canal by subtracting the estimated audio content sound level (ACL) from the ECL. The processor 206 measures a voltage level of the audio content sent to the ECR 125, and applies a transfer function of the ECR 125 to convert the voltage level to the ACL.
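By way of illustration only, EQ 1, EQ 2, and the voltage-to-ACL conversion might be evaluated as in the following sketch; the ECR sensitivity constant is an assumption made for the example and would in practice come from the measured transfer function of the ECR 125.

```python
import math

def iasl_from_ambient(xasl_db: float, nrr_db: float) -> float:
    """EQ 1: iASL = xASL - NRR (all terms in dB)."""
    return xasl_db - nrr_db

def acl_from_voltage(v_rms: float, ecr_spl_per_vrms_db: float = 94.0) -> float:
    """Apply an assumed ECR transfer function: dB SPL produced in the
    occluded canal per 1 Vrms of drive voltage (illustrative constant)."""
    return ecr_spl_per_vrms_db + 20 * math.log10(max(v_rms, 1e-9))

def iasl_from_ecm(ecl_db: float, acl_db: float) -> float:
    """EQ 2: iASL = ECL - ACL, with all terms expressed as levels."""
    return ecl_db - acl_db
```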
The processor 206 evaluates the equations above to pass sound from the ASM 111 directly to the ECR 125 to produce sound within the ear canal at a same sound pressure level (SPL) and frequency representation as the acute sound measured at an entrance to the ear canal. Further, the processor 206 can maintain an approximately constant ratio between an audio content level (ACL) and an internal ambient sound level (iASL) measured within the ear canal.
At step 314, the earpiece 100 can estimate a proximity of the acute sound. For instance, as will be shown ahead, the processor 206 can perform a correlation analysis on at least two microphones to determine whether the sound source is internal (e.g., the user) or external (e.g., an object other than the user). At step 316, the earpiece 100 determines whether it is the user's voice that generates the acute sound when the user speaks, or whether it is an external sound such as a vehicle approaching the user. If at step 316, the processor 206 determines that the acute sound is a result of the user speaking, the processor 206 does not activate a pass-through mode, since this is not considered an external warning sound. The pass-through mode permits ambient sound detected at the ASM 111 to be transmitted directly to the ear canal. If however, the acute sound corresponds to an external sound source, such as an on-set of a warning sound, the earpiece at step 318 activates “sound pass-through” to reproduce the ambient sound in the ear canal by way of the ECR 125. The earpiece 100 can also present an audible notification to the user indicating that an external sound source generating the acute sound has been detected. The method 300 can proceed back to step 302 to continually monitor for acute sounds in the environment.
At step 402, the earpiece 100 captures ambient sound signals from the ASM 111. At step 404, the processor 206 applies analog and discrete time signal processing to condition and compensate the ambient sound signal for the ASM 111 transducer. At step 406, the processor 206 estimates a background noise level (BNL) as will be discussed ahead. At step 408, the processor 206 identifies at least one peak in a data buffer storing a portion of the ambient sound signal. The processor 206 at step 410 obtains a level of the peak (e.g., in dBV). Block 412 presents a method for warning signal detection (e.g., car horns, klaxons). When a warning signal is detected at step 416, the processor 206 invokes at step 418 a pass-through mode whereby the ASM signal is reproduced with the ECR 125. Upon activating pass-through mode, the processor 206 can perform a safe level check at step 452. If a warning signal is not detected, the method 400 proceeds to step 420.
At step 420, the processor 206 subtracts the estimated BNL from an SPL of the ambient sound signal to produce signal "A"; a high energy transient in signal "A" is indicative of an acute sound. A frequency dependent threshold, retrieved at step 424, is then subtracted from signal "A" at step 422 to produce signal "B". At step 426, the processor 206 determines if signal "B" is positive. If not, the processor 206 applies hysteresis at step 432 to determine if the acute sound has already been detected. If signal "B" is positive, the processor at step 428 determines if the SPL of the ambient sound is greater than a signal "C" (e.g., a threshold). If the SPL is greater than signal "C", the earpiece tests for a user generated sound at step 434. The signal "C" is used to ensure that the SPL difference between the signal and the background noise is positive and greater than a predetermined amount. For instance, a low SPL threshold (e.g., "C" = 40 dB) can be used as shown in step 430, although it can adapt to different environmental conditions; the low SPL threshold provides an absolute measure for the SPL difference. At step 436, a proximity of a sound source generating the acute sound can be estimated as will be discussed ahead. The method 400 can continue to step 432.
Briefly, if a transient, high-level (acute) sound is detected in the ambient sound signal (the ASM input signal), it is converted to a level, and the difference between that level and the BNL is calculated. The magnitude of this difference (signal "A") is compared with the threshold (see step 422). If the result is positive, and the level of the transient is greater than a predefined threshold (see step 428), the processor 206 invokes the optional Source Proximity Detector at step 436, which determines if the acute sound was created by the user's voice (i.e., a user generated sound). If a user generated sound is NOT detected, then pass-through operation at step 438 is invoked, whereby the ambient sound signal is reproduced with the ECR 125. If the difference signal at step 428 is not positive, or the level of the identified transient is too low, then the hysteresis is invoked at step 432. The processor 206 decides if pass-through was recently used at step 440 (e.g., in the last 10 ms). If pass-through mode was recently activated, then the processor 206 invokes the pass-through system at step 438; otherwise there is no pass-through of the ASM signal to the ECR as shown at step 442. Upon activating pass-through mode, the processor 206 can perform a safe level check at step 452.
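By way of illustration only, the following condensed sketch mirrors this decision chain on per-frame scalar levels; the fixed threshold standing in for the frequency dependent one, and the one-frame (roughly 10 ms) hysteresis hold, are assumptions made for the example.

```python
class AcuteSoundDetector:
    """Condensed decision chain of method 400 on per-frame scalar dB levels."""
    def __init__(self, threshold_db: float = 10.0, low_spl_db: float = 40.0,
                 hold_frames: int = 1):
        self.threshold_db = threshold_db  # frequency dependent in the text; scalar here
        self.low_spl_db = low_spl_db      # signal "C" (e.g., 40 dB)
        self.hold_frames = hold_frames    # hysteresis: roughly 10 ms of frames
        self.frames_since_detect = 10 ** 9

    def step(self, spl_db: float, bnl_db: float) -> bool:
        """Return True when the ASM signal should pass through to the ECR."""
        a = spl_db - bnl_db               # signal "A": level over background
        b = a - self.threshold_db         # signal "B": margin over threshold
        if b > 0 and spl_db > self.low_spl_db:
            self.frames_since_detect = 0  # detection: activate pass-through
            return True
        self.frames_since_detect += 1     # hysteresis: hold pass-through briefly
        return self.frames_since_detect <= self.hold_frames
```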
Briefly, a method 500 can estimate the proximity of an acute sound using binaural cross correlation analysis. For instance, at step 502 a left ASM signal from a left headset incorporating the earpiece 100 assembly is received. Simultaneously, at step 504 a right ASM signal from a right headset is received. At step 510, the processor 206 performs a binaural cross correlation on the left ASM signal and the right ASM signal to evaluate a pass through mode 516. At step 506 a left ECM signal from the left headset is received. At step 508, a right ECM signal from the right headset is received. At step 514, the processor 206 performs a binaural cross correlation on the left ECM signal and the right ECM signal to evaluate a pass through mode 518. A pass through mode 524 is invoked if both the ASM and ECM cross correlation analyses are the same, as determined in step 520. A safe level check can be performed by the processor 206 at step 522.
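By way of illustration only, the following sketch shows one way the binaural cross correlation test might separate the user's own voice, which is centered between the ears and therefore yields a near-zero interaural time lag, from an external source; the lag tolerance is an assumption made for the example.

```python
import numpy as np

def xcorr_lag(left: np.ndarray, right: np.ndarray) -> int:
    """Lag (in samples) of the peak of the cross correlation of two signals."""
    c = np.correlate(left, right, mode="full")
    return int(np.argmax(c)) - (len(right) - 1)

def is_user_voice(l_asm: np.ndarray, r_asm: np.ndarray,
                  l_ecm: np.ndarray, r_ecm: np.ndarray,
                  max_lag: int = 2) -> bool:
    """If both the ASM and the ECM correlations peak at nearly the same,
    near-zero lag, the source is centered on the user (e.g., the user's
    own voice); otherwise it is treated as external."""
    asm_lag = xcorr_lag(l_asm, r_asm)
    ecm_lag = xcorr_lag(l_ecm, r_ecm)
    return abs(asm_lag) <= max_lag and abs(asm_lag - ecm_lag) <= max_lag
```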
Briefly, method 800 receives as its input 802 either or both of the ASM signal from the ASM 111 and a signal from the ECM 123. An audio buffer 804 of the input audio signal is accumulated (e.g., 10 ms of data), which is then processed by squaring at step 806 to obtain the temporal envelope. The envelope is smoothed (e.g., with an FIR-type low-pass digital filter) at step 808 using a smoothing window 810 stored in data memory (e.g., a Hanning or Hamming shaped window). At step 812, transient peaks in the input buffer can be identified and removed to determine a "steady-state" Background Noise Level (BNL). At step 814 an average BNL 816 can be obtained (similar to, or the same as, the RMS) that is frequency dependent or a single value averaged over all frequencies. If the ASM 111 is used to determine the BNL, then decision step 818 adjusts the ambient BNL estimation to provide an equivalent ear-canal BNL SPL by deducting an Earpiece Noise Reduction Rating 828 from the BNL estimate 826. Alternatively, if the ECM 123 is used, then the Audio Content SPL level (ACL) 822 of any reproduced Audio Content 820 is deducted from the ECM level at step 824. The updated BNL estimate is then converted to a Sound Pressure Level (SPL) equivalent 832 (i.e., substantially equal to the SPL at the ear-drum in which the earphone device is inserted) by taking into account the sensitivity (e.g., measured in V/Pa) of either the ASM 111 or the ECM 123 at steps 830 and 834, respectively. The resulting BNL SPL is then combined at step 842 with the previous BNL estimate 840, by averaging 838 a weighted previous BNL (weighted with coefficient 836), to give a new ear-canal BNL 844.
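By way of illustration only, the following sketch condenses the method 800 pipeline into a single function; the window length, the percentile used to remove transient peaks, and the smoothing weight are assumptions made for the example, and the NRR or ACL correction of steps 818-828 is omitted for brevity.

```python
import numpy as np

def estimate_bnl_db(buffer: np.ndarray, prev_bnl_db: float,
                    weight: float = 0.8) -> float:
    """One pass of the method 800 pipeline over a short input buffer."""
    envelope = buffer ** 2                               # step 806: temporal envelope
    window = np.hanning(33)
    window /= window.sum()
    envelope = np.convolve(envelope, window, "same")     # step 808: smooth the envelope
    steady = envelope[envelope <= np.percentile(envelope, 90)]  # step 812: drop peaks
    bnl_db = 10 * np.log10(np.mean(steady) + 1e-12)      # step 814: average level
    # steps 836-842: blend with the weighted previous estimate
    return weight * prev_bnl_db + (1 - weight) * bnl_db
```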
Briefly, method 900 can mix audio content with the ASM signal while regulating the ear-canal signal-to-noise ratio, as follows.
In one exemplary embodiment, method 900 calculates an RMS value over a window (e.g., the last 100 ms). The RMS value can then be weighted with a first weighting coefficient and averaged with a weighted previous level estimate. The result is converted to an equivalent SPL value (the ACL), which may use either a look-up table or an algorithm to calculate the ear-canal SPL of the signal if it were reproduced with the ECR 125. To calculate the equivalent ear canal SPL, the sensitivity of the ear canal receiver can be factored in during processing.
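By way of illustration only, the ACL estimate might be computed as in the following sketch; the ECR sensitivity constant and the weighting coefficient are assumptions made for the example.

```python
import numpy as np

def acl_spl_db(audio: np.ndarray, fs: int, prev_acl_db: float,
               ecr_sens_db: float = 100.0, weight: float = 0.7) -> float:
    """Estimate the equivalent ear-canal SPL of the audio content signal."""
    window = audio[-int(0.1 * fs):]                 # RMS over the last 100 ms
    rms_db = 20 * np.log10(np.sqrt(np.mean(window ** 2)) + 1e-12)
    acl_db = rms_db + ecr_sens_db                   # assumed ECR sensitivity:
                                                    # dB SPL at digital full scale
    return weight * prev_acl_db + (1 - weight) * acl_db  # average with prior estimate
```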
At step 922, the BNL is estimated using inputs from either or both of the ASM signal at step 902 and the ECM signal at step 906; the BNL may be adjusted by the earpiece noise reduction rating 924. These signals are selected using the BNL input switch at step 918, which may be controlled automatically or with a specific user-generated manual operation at step 926. The Ear-Canal SNR is calculated at step 920 as the difference between the ACL from step 914 and the BNL from step 922, and the resulting SNR 930 is passed to step 932 for AGC coefficient calculation. The AGC coefficient calculation at step 932 calculates gains for the Automatic Gain Control steps 928 and 936, which process the Audio Content signal and the ASM signal, respectively. The AGC coefficient calculation 932 may use a default preferred SNR 938 or a user-preferred SNR 934 in its calculation. After the ASM signal and the Audio Content signal have been processed by the AGCs 928 and 936, the two signals are mixed at step 940.
At step 942, a safe-level check determines if the resulting mixed signal would be too high if it were reproduced with the ECR 125, as shown in block 944. The safe-level check can use information regarding the user's listening history to determine if the user's sound exposure is such that it may cause a temporary or a permanent hearing threshold shift. If such high levels are measured, then the safe-level check reduces the signal level of the mixed signals via a feedback path to step 940. The resulting audio signal generated after step 942 is then reproduced with the ECR 125.
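By way of illustration only, the mixing and safe-level stages might be sketched as follows; the 85 dB SPL ceiling is an illustrative safe-listening figure and not a value given herein, and a full implementation would derive the exposure limit from the user's listening history as described above.

```python
import numpy as np

def mix_and_limit(audio: np.ndarray, asm: np.ndarray,
                  audio_gain_db: float, asm_gain_db: float,
                  est_spl_db: float, ceiling_db: float = 85.0) -> np.ndarray:
    """Apply the AGC gains, mix (step 940), then limit (steps 942-944)."""
    mixed = (audio * 10 ** (audio_gain_db / 20)
             + asm * 10 ** (asm_gain_db / 20))
    if est_spl_db > ceiling_db:                          # safe-level check
        mixed *= 10 ** ((ceiling_db - est_spl_db) / 20)  # feed a reduction back
    return mixed
```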
Method 950 describes the calculation of the AGC coefficients. The method 950 receives as its inputs an Ear Canal SNR 952 and a target SNR 960 to provide an SNR mismatch 958. The target SNR 964 is chosen from a pre-defined SNR 954 stored in computer memory or a manually defined SNR 956. At step 958, a difference is calculated between the actual ear-canal SNR and the target SNR to produce the mismatch 962. The mismatch level 962 is smoothed over time at step 968, which uses a previous mismatch 970 that is weighted using single or multiple weighting coefficients 966, to give a new time-smoothed SNR mismatch 974. Depending on the magnitude of this mismatch, various operating modes 972, 978 can be invoked, for example, as described with reference to the AGC decision module 976 (step 932 of method 900).
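By way of illustration only, the mismatch smoothing of method 950 reduces to a short recurrence; the single smoothing coefficient is an assumption made for the example (the text permits single or multiple weighting coefficients).

```python
def smoothed_snr_mismatch(ear_canal_snr_db: float, target_snr_db: float,
                          prev_mismatch_db: float, alpha: float = 0.9) -> float:
    """Steps 958-974: difference the actual and target SNR, then smooth the
    mismatch over time with a weighted previous value."""
    mismatch_db = ear_canal_snr_db - target_snr_db
    return alpha * prev_mismatch_db + (1 - alpha) * mismatch_db
```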
While the present invention has been described with reference to exemplary embodiments, it is to be understood that the invention is not limited to the disclosed exemplary embodiments. The scope of the following claims is to be accorded the broadest interpretation so as to encompass all modifications, equivalent structures and functions of the relevant exemplary embodiments. Thus, the description of the invention is merely exemplary in nature and, thus, variations that do not depart from the gist of the invention are intended to be within the scope of the exemplary embodiments of the present invention. Such variations are not to be regarded as a departure from the spirit and scope of the present invention.
This is a continuation of and claims priority to U.S. patent application Ser. No. 17/321,892, filed 17 May 2021, which is a continuation of and claims priority to U.S. patent application Ser. No. 16/987,396, filed 7 Aug. 2020, which is a continuation of and claims priority to U.S. patent application Ser. No. 16/669,490, filed 30 Oct. 2019, which is a continuation of and claims priority to U.S. patent application Ser. No. 16/193,568, filed 16 Nov. 2018, now U.S. Pat. No. 10,535,334, which is a continuation of and claims priority to U.S. patent application Ser. No. 14/574,589, filed on Dec. 18, 2014, now U.S. Pat. No. 10,134,377, which claims priority to and is a continuation of U.S. patent application Ser. No. 12/017,878, filed on Jan. 22, 2008, now U.S. Pat. No. 8,917,894, which claims the benefit of U.S. Provisional Patent Application Ser. No. 60/885,917, filed on Jan. 22, 2007, all of which are herein incorporated by reference in their entireties.