Headphone responsive to optical signaling

Information

  • Patent Grant
  • Patent Number
    9,609,416
  • Date Filed
    Monday, June 9, 2014
  • Date Issued
    Tuesday, March 28, 2017
Abstract
An optical sensor may be integrated into headphones and feedback from the sensor used to adjust an audio output from the headphones. For example, an emergency vehicle traffic preemption signal may be detected by the optical sensor. Optical signals may be processed in a pattern discriminator, which may be integrated with an audio controller integrated circuit (IC). When the signal is detected, the playback of music through the headphones may be muted and/or a noise cancellation function turned off. The optical sensor may be integrated in a music player, a smart phone, a tablet, a cord-mounted module, or the earpieces of the headphones.
Description
FIELD OF THE DISCLOSURE

The instant disclosure relates to mobile devices. More specifically, this disclosure relates to audio output of mobile devices.


BACKGROUND

Mobile devices, such as smart phones, are carried by a user throughout most or all of a day. These devices include the capability of playing music, videos, or other audio through headphones. Users often take advantage of having a source of music available throughout the day. For example, users often walk along the streets, ride bicycles, or ride motorized vehicles with headphones around their ears or headphone earbuds inserted in their ears. The use of the headphones impairs the user's ability to receive audible cues about the environment around them. For example, a user may be unable to hear the siren of an emergency vehicle while wearing the headphones with audio playing from the mobile device.


In addition to the physical blocking of ambient sound caused by wearing headphones, the mobile device and/or the headphones may implement noise cancellation. With noise cancellation, a microphone near the mobile device or headphones is used to detect sounds in the surrounding environment and intentionally subtract the sounds from what the user hears. Thus, when noise cancellation is active, the user only hears the audio from the device. For example, the mobile device or headphones may generate a signal that is out-of-phase with the sounds and add the out-of-phase signal to the music played through the headphones. Thus, when the environmental sound reaches the user's ear, the cancellation signal added to the music offsets the environmental sound and the user does not hear the environment. When the audible sound is the siren of an emergency vehicle, the user may be unaware of an emergency around him or may be unaware of an approaching high speed vehicle. This has become a particularly dangerous situation as noise cancellation in headphones has improved.
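

The phase-inversion principle can be illustrated with a minimal sketch (a hypothetical Python fragment, not the circuit of any particular device; it assumes a perfectly estimated ambient signal, whereas real systems must model the acoustic path with adaptive filters):

    import numpy as np

    def mix_with_anti_noise(music: np.ndarray, ambient: np.ndarray) -> np.ndarray:
        """Idealized noise cancellation: add the phase-inverted ambient
        signal to the program audio so the two cancel at the ear."""
        anti_noise = -ambient        # 180-degree phase inversion
        return music + anti_noise    # ambient + anti_noise sums to silence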


One conventional solution is for the mobile device to detect certain sounds, such as an emergency siren, through the microphone and mute the audio output through the headphones while the particular sounds are detected. However, this solution requires advance knowledge of each of the sounds. For example, a database of all emergency sirens would need to be created and updated regularly in order to recognize all emergency vehicles. Furthermore, the input from the microphone is noisy, and the emergency siren may be masked by other nearby audible sounds, such as nearby car engines, generators, wildlife, etc. Thus, audibly detecting warning sounds may be difficult, and mute functionality based on audible detection of sounds may not be reliable.


Shortcomings mentioned here are only representative and are included simply to highlight that a need exists for improved audio devices and headphones, particularly for consumer-level devices. Embodiments described here address certain shortcomings but not necessarily each and every one described here or known in the art.


SUMMARY

Optical detection of particular signals identifying activity in a user's environment may be used to alert the user to certain activities. For example, emergency vehicles often include systems that generate optical signals, such as strobe lights. These optical signals may be detected and their presence used to take action by adjusting audio output of the headphones. These headphones may be paired with smart phones, tablets, media players, and other electronic devices. Sensors may be added to the headphones or to a device coupled to the headphones to detect optical signaling and take action in response to the detected optical signaling.


According to one embodiment, an apparatus may include an optical sensor and an audio controller coupled to the optical sensor. The audio controller may be configured to output an audio signal to an audio transducing device; detect an optical pattern corresponding to a presence of a vehicle in a signal received through the optical sensor; and/or adjust the output audio signal based, at least in part, on the detection of the optical pattern corresponding to the presence of the vehicle.


In some embodiments, the apparatus may also include a microphone coupled to the audio controller, and the microphone may receive an audio signal from the environment around the audio transducing device.


In certain embodiments, the audio controller may be configured to adjust the output audio signal by muting the output audio signal after the optical pattern is detected, turning off a noise cancellation signal within the audio signal after the optical pattern is detected, and/or adding to the output audio signal an audio signal corresponding to an audio signal representative of an environment around the audio transducing device after the optical pattern is detected; the optical sensor may be a visible light sensor or an infrared (IR) sensor; the audio controller may also be configured to generate an anti-noise signal for canceling audio, received through the microphone, in the environment around the audio transducing device using at least one adaptive filter, add to the output audio signal the anti-noise signal, and adjust the output audio signal by disabling the adding of the anti-noise signal to the output audio signal after the optical pattern is detected; the audio controller may also be configured to disable the detection of the optical pattern; the detected optical signal may correspond to a strobe of a traffic control preemption signal of an emergency vehicle; the optical sensor may be attached to a cord-mounted module attached to the apparatus; and/or the optical sensor may be attached to the audio transducing device.


According to another embodiment, a method may include receiving, at an audio controller, a first input corresponding to a signal received from an optical sensor; receiving, at the audio controller, a second input corresponding to an audio signal for playback through an audio transducing device; detecting, by the audio controller, a pattern indicating a presence of a vehicle in the first input; and/or adjusting, by the audio controller, the audio signal for playback through the audio transducing device after the pattern is detected.


In some embodiments, the method may also include receiving, at an audio controller, a third input corresponding to an audio signal received from a microphone in an environment around the audio transducing device; generating, by the audio controller, an anti-noise signal for canceling audio in the environment around the audio transducing device using at least one adaptive filter; detecting, by the audio controller, a vehicle strobe pattern in the first input; and/or disabling the detection of the pattern.


In certain embodiments, the step of adjusting the audio signal may include muting the output audio signal when the pattern is detected, turning off a noise cancellation signal within the audio signal when the pattern is detected, and/or adding to the output audio signal an audio signal corresponding to an audio signal representative of an environment around the audio transducing device when the pattern is detected; and/or the pattern may correspond to a strobe of a traffic control preemption signal of an emergency vehicle.


According to a further embodiment, an apparatus may include an optical sensor; an audio input node configured to receive an audio signal; an audio transducing device coupled to the audio input node; and/or a pattern discriminator coupled to the optical sensor and coupled to the audio transducing device. The pattern discriminator may be configured to detect a pattern indicating a presence of a vehicle at the optical sensor and/or mute the audio transducing device when the pattern is detected.


In some embodiments, the apparatus may also include a controller configured to adjust an output audio signal of the audio transducing device based, at least in part, on the detection of the pattern.


In certain embodiments, the detected pattern may include a strobe of a traffic control preemption signal of an emergency vehicle; the optical sensor may include a visible light sensor or an infrared (IR) sensor; the optical sensor, the audio transducing device, and the pattern discriminator may be integrated into headphones; and/or the audio controller may be configured to adjust the output audio signal by turning off a noise cancellation signal within the audio signal after the pattern is detected or adding to the output audio signal an audio signal corresponding to an audio signal representative of an environment around the audio transducing device after the pattern is detected.


The foregoing has outlined rather broadly certain features and technical advantages of embodiments of the present invention in order that the detailed description that follows may be better understood. Additional features and advantages will be described hereinafter that form the subject of the claims of the invention. It should be appreciated by those having ordinary skill in the art that the conception and specific embodiment disclosed may be readily utilized as a basis for modifying or designing other structures for carrying out the same or similar purposes. It should also be realized by those having ordinary skill in the art that such equivalent constructions do not depart from the spirit and scope of the invention as set forth in the appended claims. Additional features will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended to limit the present invention.





BRIEF DESCRIPTION OF THE DRAWINGS

For a more complete understanding of the disclosed system and methods, reference is now made to the following descriptions taken in conjunction with the accompanying drawings.



FIG. 1 is a drawing illustrating an audio system with an optical sensor embedded in the headphones, a cord-mounted module, and/or an electronic device according to one embodiment of the disclosure.



FIG. 2 is a drawing illustrating an emergency vehicle pattern as one optical signal that an optical sensor may detect according to one embodiment of the disclosure.



FIG. 3 is a block diagram illustrating an audio controller and optical sensor for controlling an output of a speaker according to one embodiment of the disclosure.



FIG. 4 is a flow chart illustrating a method of controlling headphones based on a pattern detected from an optical signal according to one embodiment of the disclosure.



FIG. 5 is a block diagram illustrating an audio controller for mixing several signals for output to headphones based on a pattern detected from an optical signal according to one embodiment of the disclosure.



FIG. 6 is a flow chart illustrating a method of adjusting audio output with an anti-noise signal according to one embodiment of the disclosure.





DETAILED DESCRIPTION


FIG. 1 is a drawing illustrating an audio system with an optical sensor embedded in the headphones, a cord-mounted module, and/or an electronic device according to one embodiment of the disclosure. Headphones 102L and 102R may be coupled to an electronic device 120, such as an MP3 player, a smart phone, or a tablet computer. The headphones 102L and 102R may include speakers 104L and 104R, respectively. The speakers 104R and 104L transduce an audio signal provided by the electronic device 120 into sound waves that a user can hear. The headphones 102L and 102R may also include optical sensors 106L and 106R, respectively. The optical sensors 106L and 106R may be, for example, infrared (IR) sensors or visible light sensors. The headphones 102L and 102R may further include microphones 108L and 108R, respectively.


Optical sensors may be included on components other than the headphones 102L and 102R. A cord-mounted module 110 may be attached to a wire for the headphones 102L and 102R and may include an optical sensor 112. The electronic device 120 coupled to the headphones 102L and 102R may also include an optical sensor 122. Although optical sensors 106L, 106R, 112, and 122 are illustrated, not all the optical sensors may be present. For example, in one embodiment the optical sensor 112 is the only optical sensor. In another embodiment, the optical sensor 122 is the only optical sensor.


Microphones may be included in the audio system for detecting environmental sounds. The microphone may be located on components other than the headphones 102L and 102R. The cord-mounted module 110 may also include a microphone 114, and the electronic device 120 may also include a microphone 124. Although microphones 108L, 108R, 114, and 124 are illustrated, not all the microphones may be present. For example, in one embodiment, the microphone 124 is the only microphone. In another embodiment, the microphone 114 is the only microphone.


Output from optical sensors 106L, 106R, 112, and 122 and microphones 108L, 108R, 114, and 124 may be provided to an audio controller (not shown) located in the headphones 102L and 102R, in the cord-mounted module 110, or in the electronic device 120. In one embodiment, the audio controller may be part of the electronic device 120 and constructed as an integrated circuit (IC) for the electronic device 120. The IC may include other components such as a generic central processing unit (CPU), digital signal processor (DSP), audio amplification circuitry, digital-to-analog converters (DACs), analog-to-digital converters (ADCs), and/or an audio coder/decoder (CODEC).


The audio controller may process signals including an internal audio signal containing music, sound effects, and/or audio, an external audio signal, such as from a microphone signal, a down-stream audio signal for a telephone call, or a down-stream audio signal for streamed music, and/or a generated audio signal, such as an anti-noise signal. The audio controller may generate or control generation of an audio signal for output to the headphones 102L and 102R. The headphones 102L and 102R then transduce the generated audio signal into audible sound recognized by the user's ears. The audio controller may utilize signals from the optical sensors 106L, 106R, 112, and 122 to recognize specific patterns and take an action based on the detection of a specific pattern. For example, the audio controller may select input signals used to generate the audio signal based, at least in part, on the detection of a specific pattern in the signal from the optical sensors 106L, 106R, 112, and/or 122.


In one example, the specific pattern may be a signal corresponding to the presence of a vehicle, such as an emergency vehicle strobe signal. The optical sensors 106L, 106R, 112, and 122 may be configured to receive the optical signal, and the audio controller may be configured to discriminate and identify the optical signal. In one embodiment, the pattern discriminator is configured to recognize a strobe signal corresponding to an emergency vehicle traffic preemption signal. FIG. 2 is a drawing illustrating an emergency vehicle strobe as one optical signal that an optical sensor may detect according to one embodiment of the disclosure. An emergency vehicle 202, such as a fire truck or an ambulance, may generate strobe signals 204A from light elements 204. The strobe signal 204A activates a strobe signal detector 208 mounted with traffic light 206. The strobe signal detector 208 may cycle the traffic light 206 upon detection of the strobe signal 204A to allow the emergency vehicle 202 to pass through the intersection unimpeded.


A user may be walking alongside the road using smart phone 210 and headphones 214. With music playing through the headphones 214, the user may be unable to hear the approach of the emergency vehicle 202. An optical sensor 212 in the smart phone 210 may detect strobe signal 204A. When the smart phone 210 detects the strobe signal 204A, the smart phone 210 may adjust audio output through the headphones 214. For example, the smart phone 210 may mute the audio output through the headphones 214. In another example, the smart phone 210 may disable noise cancelling within the headphones 214 to allow the user to hear the emergency siren broadcast by the emergency vehicle 202. In a further example, the smart phone 210 may pass to the headphones 214 an audio signal from a microphone that is receiving the emergency siren.


Although the optical sensor 212 is shown on the smart phone 210, the optical sensor 212 may be alternatively placed on a cord-mounted module (not shown) or the headphones 214, as described above with reference to FIG. 1. Further, although the smart phone 210 is described as performing discrimination on the signal of optical sensor 212 and adjusting the audio output to the headphones 214, the processing may be performed by an audio controller housed in the headphones 214 or a cord-mounted module.


An audio controller, regardless of where it is located, may be configured to include several blocks or circuits for performing certain functions. FIG. 3 is a block diagram illustrating an audio controller and optical sensor for controlling an output of a speaker according to one embodiment of the disclosure. An audio controller 310 may include a pattern discriminator 312 and a control block 314. The pattern discriminator 312 may be coupled to an optical sensor 302 and be configured to detect certain patterns within the signals received from the optical sensor 302. For example, the pattern discriminator 312 may include a database of known patterns of emergency vehicles and attempt to match signals from the optical sensor 302 to a known pattern. The patterns may be set by standards or local authorities and may be a repeated flashing of light at a set frequency or a specific pattern of frequencies.


Signals may be identified by processing data received from the optical sensor 302 at the pattern discriminator 312 and/or the control block 314. In one example, the pattern discriminator 312 may count a number of flashes of the strobe signal within a fixed time window. In another example, a message in the received optical signal may be decoded using clock and data recovery. In a further example, the pattern discriminator 312 may perform analysis on a signal from the optical sensor 302 to determine the presence of a certain pattern. In one embodiment, the pattern discriminator 312 may perform a Fast Fourier Transform (FFT) on a signal received by the optical sensor 302 and determine whether the received signal has a particular frequency component. The pattern discriminator 312 may also use an FFT to detect a pattern of frequencies in the signal from the optical sensor 302.
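

As one illustration of the FFT approach, a discriminator might test for a dominant component near an expected strobe rate. The sketch below is hypothetical; the 14 Hz rate, 1 Hz search band, and threshold are assumptions for illustration, not values taken from this disclosure:

    import numpy as np

    def strobe_detected(samples: np.ndarray, sample_rate_hz: float,
                        strobe_hz: float = 14.0, threshold: float = 10.0) -> bool:
        """Return True if the optical-sensor signal has a dominant
        frequency component near the expected strobe rate."""
        spectrum = np.abs(np.fft.rfft(samples - samples.mean()))  # remove DC first
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate_hz)
        band = (freqs > strobe_hz - 1.0) & (freqs < strobe_hz + 1.0)
        if not band.any():
            return False
        rest_mean = spectrum[~band].mean() + 1e-12  # average off-band energy
        return spectrum[band].max() > threshold * rest_mean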


When the pattern discriminator 312 makes a positive match, it transmits a control signal to the control block 314. The control block 314 may also receive an audio input from input node 316, which may be an internal audio signal such as music selected for playback on an electronic device. Further, the control block 314 may receive a microphone input from input node 318. The control block 314 may generate an audio signal for transmission to the audio amplifier 320 for output to the speaker 322. The control block 314 may generate the audio signal based on the match signal from the pattern discriminator 312. In one example, when a positive match signal is received, the control block 314 may adjust an audio signal output to the speaker 322. In one embodiment, when a positive match signal is received, the control block 314 may include only the microphone input in the audio signal transmitted to the speaker 322. This may allow the user to hear the emergency vehicle passing by. When a negative match signal is later received, the control block 314 may include only the audio input in the audio signal transmitted to the speaker 322, which allows the user to return to music playback.


A flow chart for operation of the control block 314 is shown in FIG. 4. FIG. 4 is a flow chart illustrating a method of controlling headphones based on a pattern detected from an optical signal according to one embodiment of the disclosure. A method 400 begins at block 402 with outputting an audio signal to an audio transducing device, such as speaker 322 of a headphone. At block 404, the optical sensor is monitored, such as through the pattern discriminator 312, to detect a particular signal. At block 406, it is determined whether the signal is detected. If no signal is detected, the method 400 returns to blocks 402 and 404. If the signal is detected at block 406, then the method 400 continues to block 408 to adjust the audio output signal, such as by muting an internal audio signal.
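

A software rendering of method 400 could be a simple polling loop, as in the hypothetical sketch below; optical_sensor, audio_source, and speaker are placeholder objects standing in for platform-specific I/O, and strobe_detected is the discriminator sketched earlier:

    def run_method_400(audio_source, optical_sensor, speaker):
        """Blocks 402-408: output audio, monitor the optical sensor,
        and adjust (mute) the output while the pattern is detected."""
        while True:
            samples = optical_sensor.read()          # block 404: monitor the sensor
            frame = audio_source.next_frame()
            if strobe_detected(samples, optical_sensor.rate):
                frame = 0 * frame                    # block 408: mute internal audio
            speaker.play(frame)                      # block 402: output audio signal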


An audio controller may have several alternative actions available to adjust an audio signal when a signal is detected by the optical sensor. The action taken may be based, for example, on which particular pattern is detected within the optical sensor signal and/or a user preference indicated through a setting in the electronic device or a switch on the headphones. FIG. 5 is a block diagram illustrating an audio controller for mixing several signals for output to headphones based on a pattern detected from an optical signal according to one embodiment of the disclosure. A control block 520 may be coupled to an optical sensor signal through input node 522, such as through a pattern discriminator. The control block 520 may control the operation of a mux 502, which generates an audio signal for output to an audio amplifier 530 and a headphone speaker 532.


The mux 502 may include a summation block 510 with one or more input signals. The input signals may include an internal audio signal, such as music, received at an input node 504, a noise cancellation signal received at input node 506, and/or a microphone audio signal received at input node 508. The mux 502 may include switches 512, 514, and 516 to couple or decouple the input nodes 504, 506, and 508 from the summation block 510. The switches 512, 514, and 516 may be controlled by the control block 520 based, at least in part, on a match signal that may be received from the input node 522. For example, the control block 520 may mute the internal audio signal by disconnecting switch 512. In another example, the control block 520 may disable a noise cancellation signal by deactivating the switch 514. In a further example, the control block 520 may disable a noise cancellation signal by deactivating the switch 514 and pass through a microphone signal by activating the switch 516. In one embodiment, the noise cancellation signal received at input node 506 may be an adaptive noise cancellation (ANC) signal generated by an ANC circuit. Additional disclosure regarding adaptive noise cancellation (ANC) may be found in U.S. Patent Application Publication No. 2012/0207317 corresponding to U.S. patent application Ser. No. 13/310,380 filed Dec. 2, 2011 and entitled “Ear-Coupling Detection and Adjustment of Adaptive Response in Noise-Canceling in Personal Audio Devices” and may also be found in U.S. patent application Ser. No. 13/943,454 filed on Jul. 16, 2013, both of which are incorporated by reference herein.
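

A software analogue of the mux 502 might gate each input with a boolean before summing, roughly as in the following hypothetical sketch (the actual block is hardware; the booleans stand in for switches 512, 514, and 516):

    import numpy as np

    def mux_output(audio: np.ndarray, anti_noise: np.ndarray, mic: np.ndarray,
                   sw_audio: bool, sw_anc: bool, sw_mic: bool) -> np.ndarray:
        """Model of summation block 510: each switch couples or
        decouples one input node from the sum."""
        total = np.zeros_like(audio)
        if sw_audio:
            total = total + audio        # switch 512: internal audio (music)
        if sw_anc:
            total = total + anti_noise   # switch 514: noise cancellation signal
        if sw_mic:
            total = total + mic          # switch 516: microphone pass-through
        return total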


When the control block 520 is configured, whether by user preference or in response to a particular detected optical pattern, to control noise cancellation, the control block 520 may be configured to execute the method shown in FIG. 6. FIG. 6 is a flow chart illustrating a method of adjusting audio output with an anti-noise signal according to one embodiment of the disclosure. A method 600 begins at block 602 with receiving a first input of a signal from an optical sensor, at block 604 with receiving a second input of an audio signal for playback, and at block 606 with receiving a third input from a microphone. At block 608, an anti-noise signal may be generated from the third input, either by the control block 520 or by another circuit under control of the control block 520. At block 610, the control block 520 may control a multiplexer to sum the audio signal received at the second input at block 604 and the anti-noise signal generated at block 608. This summed audio signal may be transmitted to an amplifier for output at headphones.


At block 612, the control block 520 determines whether an optical pattern is detected. When the optical pattern is not detected, the control block 520 returns to block 610 to continue providing audio playback. When the optical pattern is detected, the method 600 continues to block 614 where the control block 520 may disable the anti-noise signal and select the microphone signal received at block 606 for output to the audio transducing device, such as the headphones. In one embodiment shown in FIG. 5, block 614 may involve the control block 520 deactivating the switches 512 and 514 and activating the switch 516.


At block 616, it is determined whether the optical pattern is still detected. As long as the optical pattern is detected, the method 600 may return to block 614 where the microphone signal is output to the headphones. When the optical pattern is no longer detected, such as after the emergency vehicle has passed the user, the method 600 may proceed to block 618. At block 618, the anti-noise signal and the audio signal are re-enabled and a sum of the audio signal and the anti-noise signal is output to the headphones. In one embodiment shown in FIG. 5, block 618 may involve activating the switches 512 and 514 and deactivating the switch 516. After the anti-noise signal and the audio signal are re-enabled, the method 600 may return to block 610 to playback the audio signal until an optical pattern is detected again at block 612.
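

The switch settings across blocks 610 through 618 reduce to two states, as in the hypothetical sketch below (reusing the mux_output model above; normal playback sums music and anti-noise, while detection passes only the microphone):

    def method_600_switches(pattern_present: bool):
        """Return (sw_audio, sw_anc, sw_mic) per FIG. 6: blocks 610/618
        enable music plus anti-noise; block 614 passes only the mic."""
        if pattern_present:
            return (False, False, True)   # block 614: let the siren through
        return (True, True, False)        # blocks 610/618: normal playback

    # Per audio frame:
    #   sw = method_600_switches(strobe_detected(samples, rate))
    #   out = mux_output(music, anti_noise, mic, *sw)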


If implemented in firmware and/or software, the functions described above, such as with reference to FIG. 4 and FIG. 6, may be stored as one or more instructions or code on a computer-readable medium. Examples include non-transitory computer-readable media encoded with a data structure and computer-readable media encoded with a computer program. Computer-readable media includes physical computer storage media. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), compact disc read-only memory (CD-ROM) or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc include compact discs (CDs), laser discs, optical discs, digital versatile discs (DVDs), floppy disks, and Blu-ray discs. Generally, disks reproduce data magnetically, and discs reproduce data optically. Combinations of the above should also be included within the scope of computer-readable media.


In addition to storage on computer readable medium, instructions and/or data may be provided as signals on transmission media included in a communication apparatus. For example, a communication apparatus may include a transceiver having signals indicative of instructions and data. The instructions and data are configured to cause one or more processors to implement the functions outlined in the claims.


Although the present disclosure and certain representative advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the spirit and scope of the disclosure as defined by the appended claims. For example, although a strobe signal is described as one type of optical signal for detecting the presence of a vehicle, an audio controller may be configured to discriminate other types of optical signals. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the present disclosure, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.

Claims
  • 1. A headphone device, comprising: an optical sensor configured to (a) receive an optical signal comprising a strobe pattern that corresponds to an emergency vehicle and (b) output a sensor signal; and an audio controller coupled to the optical sensor, wherein the audio controller is configured to: output an audio signal to a transducer; decode the sensor signal using clock and data recovery to obtain the strobe pattern from the sensor signal and to compare a characteristic of the decoded strobe pattern with a known pattern to detect a presence of the emergency vehicle; and adjust the output audio signal based, at least in part, on the detection of the presence of the emergency vehicle.
  • 2. The headphone device of claim 1, wherein the audio controller is configured to adjust the output audio signal by at least one of: muting the output audio signal after the presence of the emergency vehicle is detected; turning off a noise cancellation signal within the audio signal after the presence of the emergency vehicle is detected; and adding to the output audio signal an audio signal corresponding to an audio signal representative of an environment around the transducer after the presence of the emergency vehicle is detected.
  • 3. The headphone device of claim 1, wherein the optical sensor comprises at least one of a visible light sensor and an infrared (IR) sensor.
  • 4. The headphone device of claim 1, wherein the apparatus further comprises a microphone coupled to the audio controller, wherein the microphone receives an audio signal from the environment around the transducer.
  • 5. The headphone device of claim 4, wherein the audio controller is further configured to: generate an anti-noise signal for canceling sounds in the environment around the transducer based, at least in part, on the microphone audio signal; add to the output audio signal the anti-noise signal; and adjust the output audio signal by disabling the adding of the anti-noise signal to the output audio signal after the presence of the emergency vehicle is detected.
  • 6. The headphone device of claim 1, wherein the audio controller is configured to disable the detection of the presence of the emergency vehicle.
  • 7. The headphone device of claim 1, wherein the strobe pattern corresponds to a strobe of a traffic control preemption signal of an emergency vehicle.
  • 8. The headphone device of claim 1, further comprising: a first headphone; a second headphone; and a wire coupling the first headphone and the second headphone to the audio controller, wherein the optical sensor is integrated with the wire.
  • 9. A method, comprising: receiving, at an optical sensor integrated into a headphone device, an optical signal comprising a strobe pattern that corresponds to an emergency vehicle; receiving, at an audio controller, a first input comprising a sensor signal from the optical sensor; receiving, at the audio controller, a second input corresponding to an audio signal for playback through a transducer of the headphone device; decoding, by the audio controller, the sensor signal using clock and data recovery to obtain the strobe pattern from the sensor signal and to compare a characteristic of the decoded strobe pattern with a known pattern to detect the presence of the emergency vehicle; and adjusting, by the audio controller, the audio signal for playback through the transducer after the presence of the emergency vehicle is detected.
  • 10. The method of claim 9, wherein the step of adjusting the audio signal comprises at least one of: muting the output audio signal when the presence of the emergency vehicle is detected; turning off a noise cancellation signal within the audio signal when the presence of the emergency vehicle is detected; and adding to the output audio signal an audio signal corresponding to an audio signal representative of an environment around the transducer when the presence of the emergency vehicle is detected.
  • 11. The method of claim 9, further comprising: receiving, at an audio controller, a third input corresponding to an audio signal received from a microphone in an environment around the transducer; generating, by the audio controller, an anti-noise signal for canceling audio in the environment around the transducer based, at least in part, on the audio signal received from the microphone; adding the anti-noise signal to the audio signal for playback through the transducer; and disabling the adding of the anti-noise signal to the output audio signal after the presence of the emergency vehicle is detected.
  • 12. The method of claim 9, further comprising disabling detection of the presence of the emergency vehicle.
  • 13. The method of claim 9, wherein the strobe pattern corresponds to a vehicle strobe of a traffic control preemption signal of an emergency vehicle.
  • 14. A headphone device, comprising: an optical sensor configured to (a) receive an optical signal comprising a strobe pattern that corresponds to an emergency vehicle and (b) output a sensor signal; an audio input node configured to receive an audio signal; and a pattern discriminator coupled to the optical sensor to receive the sensor signal and configured to couple to a transducer, wherein the pattern discriminator is configured to: decode the sensor signal using clock and data recovery to obtain the strobe pattern from the sensor signal and to compare a characteristic of the decoded strobe pattern with a known pattern to detect a presence of the emergency vehicle; and mute the transducer when the presence of the emergency vehicle is detected.
  • 15. The headphone device of claim 14, wherein the strobe pattern comprises a strobe of a traffic control preemption signal of an emergency vehicle.
  • 16. The headphone device of claim 14, wherein the optical sensor comprises at least one of a visible light sensor and an infrared (IR) sensor.
  • 17. The headphone device of claim 14, further comprising a controller configured to adjust an output audio signal of the transducer based, at least in part, on the presence of the emergency vehicle.
  • 18. The headphone device of claim 17, wherein the audio controller is configured to adjust the output audio signal by at least one of: turning off a noise cancellation signal within the audio signal after the presence of the emergency vehicle is detected; andadding to the output audio signal an audio signal corresponding to an audio signal representative of an environment around the transducer after the presence of the emergency vehicle is detected.
  • 19. The headphone device of claim 1, wherein the audio controller is configured to detect the presence of the emergency vehicle by performing a Fast Fourier Transform (FFT) on the sensor signal received from the optical sensor to determine whether the signal has a particular frequency component indicating the presence of an emergency vehicle.
  • 20. The method of claim 9, wherein the step of detecting the presence of the emergency vehicle comprises performing a Fast Fourier Transform (FFT) on the sensor signal received from the optical sensor to determine whether the signal has a particular frequency component indicating the presence of an emergency vehicle.
  • 21. The headphone device of claim 14, wherein the pattern discriminator is configured to detect the presence of the emergency vehicle by performing a Fast Fourier Transform (FFT) on the sensor signal received from the optical sensor to determine whether the signal has a particular frequency component indicating the presence of an emergency vehicle.
  • 22. The headphone device of claim 1, wherein the audio controller is an integrated circuit comprising an audio coder/decoder (CODEC).
  • 23. The headphone device of claim 14, wherein the pattern discriminator is integrated with an audio coder/decoder (CODEC).
US Referenced Citations (218)
Number Name Date Kind
3550078 Long Dec 1970 A
3831039 Henschel Aug 1974 A
5044373 Northeved et al. Sep 1991 A
5172113 Hamer Dec 1992 A
5187476 Hamer Feb 1993 A
5251263 Andrea et al. Oct 1993 A
5278913 Delfosse et al. Jan 1994 A
5321759 Yuan Jun 1994 A
5337365 Hamabe et al. Aug 1994 A
5359662 Yuan et al. Oct 1994 A
5410605 Sawada et al. Apr 1995 A
5425105 Lo et al. Jun 1995 A
5445517 Kondou et al. Aug 1995 A
5465413 Enge et al. Nov 1995 A
5495243 McKenna Feb 1996 A
5548681 Gleaves et al. Aug 1996 A
5586190 Trantow et al. Dec 1996 A
5640450 Watanabe Jun 1997 A
5699437 Finn Dec 1997 A
5706344 Finn Jan 1998 A
5740256 Castello Da Costa et al. Apr 1998 A
5768124 Stothers et al. Jun 1998 A
5815582 Claybaugh et al. Sep 1998 A
5832095 Daniels Nov 1998 A
5946391 Dragwidge et al. Aug 1999 A
5991418 Kuo Nov 1999 A
6041126 Terai et al. Mar 2000 A
6118878 Jones Sep 2000 A
6219427 Kates et al. Apr 2001 B1
6278786 McIntosh Aug 2001 B1
6282176 Hemkumar Aug 2001 B1
6326903 Gross et al. Dec 2001 B1
6418228 Terai et al. Jul 2002 B1
6434246 Kates et al. Aug 2002 B1
6434247 Kates et al. Aug 2002 B1
6522746 Marchok et al. Feb 2003 B1
6683960 Fujii et al. Jan 2004 B1
6766292 Chandran et al. Jul 2004 B1
6768795 Feltstrom et al. Jul 2004 B2
6850617 Weigand Feb 2005 B1
6940982 Watkins Sep 2005 B1
7058463 Ruha et al. Jun 2006 B1
7103188 Jones Sep 2006 B1
7181030 Rasmussen et al. Feb 2007 B2
7330739 Somayajula Feb 2008 B2
7365669 Melanson Apr 2008 B1
7446674 McKenna Nov 2008 B2
7680456 Muhammad et al. Mar 2010 B2
7742790 Konchitsky et al. Jun 2010 B2
7817808 Konchitsky et al. Oct 2010 B2
7903825 Melanson Mar 2011 B1
8019050 Mactavish et al. Sep 2011 B2
D666169 Tucker et al. Aug 2012 S
8249262 Chua et al. Aug 2012 B2
8251903 LeBoeuf et al. Aug 2012 B2
8290537 Lee et al. Oct 2012 B2
8325934 Kuo Dec 2012 B2
8379884 Horibe et al. Feb 2013 B2
8401200 Tiscareno et al. Mar 2013 B2
8442251 Jensen et al. May 2013 B2
8526627 Asao et al. Sep 2013 B2
8848936 Kwatra et al. Sep 2014 B2
8907829 Naderi Dec 2014 B1
8908877 Abdollahzadeh Milani et al. Dec 2014 B2
8948407 Alderson et al. Feb 2015 B2
8958571 Kwatra et al. Feb 2015 B2
20010053228 Jones Dec 2001 A1
20020003887 Zhang et al. Jan 2002 A1
20030063759 Brennan et al. Apr 2003 A1
20030185403 Sibbald Oct 2003 A1
20040047464 Yu et al. Mar 2004 A1
20040165736 Hetherington et al. Aug 2004 A1
20040167777 Hetherington et al. Aug 2004 A1
20040202333 Csermak et al. Oct 2004 A1
20040264706 Ray et al. Dec 2004 A1
20050004796 Trump et al. Jan 2005 A1
20050018862 Fisher Jan 2005 A1
20050117754 Sakawaki Jun 2005 A1
20050207585 Christoph Sep 2005 A1
20050240401 Ebenezer Oct 2005 A1
20060035593 Leeds Feb 2006 A1
20060069556 Nadjar et al. Mar 2006 A1
20060153400 Fujita et al. Jul 2006 A1
20070030989 Kates Feb 2007 A1
20070033029 Sakawaki Feb 2007 A1
20070038441 Inoue et al. Feb 2007 A1
20070047742 Taenzer et al. Mar 2007 A1
20070053524 Haulick et al. Mar 2007 A1
20070076896 Hosaka et al. Apr 2007 A1
20070127879 Frank Jun 2007 A1
20070154031 Avendano et al. Jul 2007 A1
20070258597 Rasmussen et al. Nov 2007 A1
20070297620 Choy Dec 2007 A1
20080019548 Avendano Jan 2008 A1
20080079571 Samadani Apr 2008 A1
20080101589 Horowitz et al. May 2008 A1
20080107281 Togami et al. May 2008 A1
20080144853 Sommerfeldt et al. Jun 2008 A1
20080177532 Greiss et al. Jul 2008 A1
20080181422 Christoph Jul 2008 A1
20080226098 Haulick et al. Sep 2008 A1
20080240455 Inoue et al. Oct 2008 A1
20080240457 Inoue et al. Oct 2008 A1
20090012783 Klein Jan 2009 A1
20090034748 Sibbald Feb 2009 A1
20090041260 Jorgensen et al. Feb 2009 A1
20090046867 Clemow Feb 2009 A1
20090060222 Jeong et al. Mar 2009 A1
20090080670 Solbeck et al. Mar 2009 A1
20090086990 Christoph Apr 2009 A1
20090175466 Elko et al. Jul 2009 A1
20090196429 Ramakrishnan et al. Aug 2009 A1
20090220107 Every et al. Sep 2009 A1
20090238369 Ramakrishnan et al. Sep 2009 A1
20090245529 Asada et al. Oct 2009 A1
20090254340 Sun et al. Oct 2009 A1
20090290718 Kahn et al. Nov 2009 A1
20090296965 Kojima Dec 2009 A1
20090304200 Kim et al. Dec 2009 A1
20090311979 Husted et al. Dec 2009 A1
20100014683 Maeda et al. Jan 2010 A1
20100014685 Wurm Jan 2010 A1
20100061564 Clemow et al. Mar 2010 A1
20100069114 Lee et al. Mar 2010 A1
20100082339 Konchitsky et al. Apr 2010 A1
20100098263 Pan et al. Apr 2010 A1
20100098265 Pan et al. Apr 2010 A1
20100124336 Shridhar et al. May 2010 A1
20100124337 Wertz et al. May 2010 A1
20100131269 Park et al. May 2010 A1
20100150367 Mizuno Jun 2010 A1
20100158330 Guissin et al. Jun 2010 A1
20100166203 Peissig et al. Jul 2010 A1
20100195838 Bright Aug 2010 A1
20100195844 Christoph et al. Aug 2010 A1
20100207317 Iwami et al. Aug 2010 A1
20100239126 Grafenberg et al. Sep 2010 A1
20100246855 Chen Sep 2010 A1
20100266137 Sibbald et al. Oct 2010 A1
20100272276 Carreras et al. Oct 2010 A1
20100272283 Carreras et al. Oct 2010 A1
20100274564 Bakalos et al. Oct 2010 A1
20100284546 DeBrunner et al. Nov 2010 A1
20100291891 Ridgers et al. Nov 2010 A1
20100296666 Lin Nov 2010 A1
20100296668 Lee et al. Nov 2010 A1
20100310086 Magrath et al. Dec 2010 A1
20100322430 Isberg Dec 2010 A1
20110007907 Park et al. Jan 2011 A1
20110106533 Yu May 2011 A1
20110116687 McDonald May 2011 A1
20110129098 Delano et al. Jun 2011 A1
20110130176 Magrath et al. Jun 2011 A1
20110142247 Fellers et al. Jun 2011 A1
20110144984 Konchitsky Jun 2011 A1
20110158419 Theverapperuma et al. Jun 2011 A1
20110206214 Christoph et al. Aug 2011 A1
20110222698 Asao et al. Sep 2011 A1
20110249826 Van Leest Oct 2011 A1
20110273374 Wood Nov 2011 A1
20110288860 Schevciw et al. Nov 2011 A1
20110293103 Park et al. Dec 2011 A1
20110299695 Nicholson Dec 2011 A1
20110305347 Wurm Dec 2011 A1
20110317848 Ivanov et al. Dec 2011 A1
20120120287 Funamoto May 2012 A1
20120135787 Kusunoki et al. May 2012 A1
20120140917 Nicholson et al. Jun 2012 A1
20120140942 Loeda Jun 2012 A1
20120140943 Hendrix et al. Jun 2012 A1
20120148062 Scarlett et al. Jun 2012 A1
20120155666 Nair Jun 2012 A1
20120170766 Alves et al. Jul 2012 A1
20120207317 Abdollahzadeh Milani et al. Aug 2012 A1
20120215519 Park et al. Aug 2012 A1
20120250873 Bakalos et al. Oct 2012 A1
20120259626 Li et al. Oct 2012 A1
20120263317 Shin et al. Oct 2012 A1
20120281850 Hyatt Nov 2012 A1
20120300958 Klemmensen Nov 2012 A1
20120300960 Mackay et al. Nov 2012 A1
20120308021 Kwatra et al. Dec 2012 A1
20120308024 Alderson et al. Dec 2012 A1
20120308025 Hendrix et al. Dec 2012 A1
20120308026 Kamath et al. Dec 2012 A1
20120308027 Kwatra Dec 2012 A1
20120308028 Kwatra et al. Dec 2012 A1
20120310640 Kwatra et al. Dec 2012 A1
20130010982 Elko et al. Jan 2013 A1
20130083939 Fellers et al. Apr 2013 A1
20130243198 Van Rumpt Sep 2013 A1
20130243225 Yokota Sep 2013 A1
20130272539 Kim et al. Oct 2013 A1
20130287218 Alderson et al. Oct 2013 A1
20130287219 Hendrix et al. Oct 2013 A1
20130293723 Benson Nov 2013 A1
20130301842 Hendrix et al. Nov 2013 A1
20130301846 Alderson et al. Nov 2013 A1
20130301847 Alderson et al. Nov 2013 A1
20130301848 Zhou et al. Nov 2013 A1
20130301849 Alderson et al. Nov 2013 A1
20130343556 Bright Dec 2013 A1
20130343571 Rayala et al. Dec 2013 A1
20140044275 Goldstein et al. Feb 2014 A1
20140050332 Nielsen et al. Feb 2014 A1
20140086425 Jensen et al. Mar 2014 A1
20140177851 Kitazawa et al. Jun 2014 A1
20140185828 Helbling Jul 2014 A1
20140211953 Alderson et al. Jul 2014 A1
20140226827 Abdollahzadeh Milani Aug 2014 A1
20140254830 Tomono Sep 2014 A1
20140270222 Hendrix et al. Sep 2014 A1
20140270223 Li et al. Sep 2014 A1
20140270224 Zhou et al. Sep 2014 A1
20140270248 Ivanov Sep 2014 A1
20140314246 Hellman Oct 2014 A1
20150092953 Abdollahzadeh Milani et al. Apr 2015 A1
20150104032 Kwatra et al. Apr 2015 A1
Foreign Referenced Citations (21)
Number Date Country
102011013343 Sep 2012 DE
1880699 Jan 2008 EP
1947642 Jul 2008 EP
2133866 Dec 2009 EP
2216774 Aug 2010 EP
2395500 Dec 2011 EP
2395501 Dec 2011 EP
2401744 Nov 2004 GB
2455821 Jun 2009 GB
2455824 Jun 2009 GB
2455828 Jun 2009 GB
2484722 Apr 2012 GB
H06186985 Jul 1994 JP
03015074 Feb 2003 WO
03015275 Feb 2003 WO
2004009007 Jan 2004 WO
2004017303 Feb 2004 WO
2007007916 Jan 2007 WO
2007113487 Oct 2007 WO
2010117714 Oct 2010 WO
2012134874 Oct 2012 WO
Non-Patent Literature Citations (62)
Entry
U.S. Appl. No. 13/686,353, Hendrix et al.
U.S. Appl. No. 13/721,832, Lu et al.
U.S. Appl. No. 13/724,656, Lu et al.
U.S. Appl. No. 13/794,931, Lu et al.
U.S. Appl. No. 13/794,979, Alderson et al.
U.S. Appl. No. 13/968,007, Hendrix et al.
U.S. Appl. No. 13/968,013, Abdollahzadeh Milani et al.
U.S. Appl. No. 14/101,777, Alderson et al.
U.S. Appl. No. 14/101,955, Alderson.
U.S. Appl. No. 14/197,814, Kaller et al.
U.S. Appl. No. 14/210,537, Abdollahzadeh Milani et al.
U.S. Appl. No. 14/210,589, Abdollahzadeh Milani et al.
U.S. Appl. No. 14/252,235, Lu et al.
Emergency Vehicle Strobe Detector, Hoover Fence, http://www.hooverfence.com/catalog/entry_systems/fs2000.htm.
Global Traffic Technologies Data sheet for Opticom™ Infrared System Model 792 Emitter, Oct. 2007.
Global Traffic Technologies Data sheet for Opticom™ Model 792M Multimode Strobe Emitter.
Global Traffic Technologies Data sheet for Opticom™ Infrared System Model 794 LED Emitter.
Global Traffic Technologies Data sheet for Opticom™ Model 794M Multimode LED Emitter.
Chapter 4 of the 2003 Manual on Uniform Traffic Control Devices (MUTCD) with Revision 1 only, Nov. 2004.
Benet et al., Using infrared sensors for distance measurement in mobile robots, Robotics and Autonomous Systems, 2002, vol. 40, pp. 255-266.
Campbell, Mikey, “Apple looking into self-adjusting earbud headphones with noise cancellation tech”, Apple Insider, Jul. 4, 2013, pp. 1-10 (10 pages in pdf), downloaded on May 14, 2014 from http://appleinsider.com/articles/13/07/04/apple-looking-into-self-adjusting-earbud-headphones-with-noise-cancellation-tech.
Pfann, et al., “LMS Adaptive Filtering with Delta-Sigma Modulated Input Signals,” IEEE Signal Processing Letters, Apr. 1998, pp. 95-97, vol. 5, No. 4, IEEE Press, Piscataway, NJ.
Toochinda, et al. “A Single-Input Two-Output Feedback Formulation for ANC Problems,” Proceedings of the 2001 American Control Conference, Jun. 2001, pp. 923-928, vol. 2, Arlington, VA.
Kuo, et al., “Active Noise Control: A Tutorial Review,” Proceedings of the IEEE, Jun. 1999, pp. 943-973, vol. 87, No. 6, IEEE Press, Piscataway, NJ.
Johns, et al., “Continuous-Time LMS Adaptive Recursive Filters,” IEEE Transactions on Circuits and Systems, Jul. 1991, pp. 769-778, vol. 38, No. 7, IEEE Press, Piscataway, NJ.
Shoval, et al., “Comparison of DC Offset Effects in Four LMS Adaptive Algorithms,” IEEE Transactions on Circuits and Systems II: Analog and Digital Processing, Mar. 1995, pp. 176-185, vol. 42, Issue 3, IEEE Press, Piscataway, NJ.
Mali, Dilip, “Comparison of DC Offset Effects on LMS Algorithm and its Derivatives,” International Journal of Recent Trends in Engineering, May 2009, pp. 323-328, vol. 1, No. 1, Academy Publisher.
Kates, James M., “Principles of Digital Dynamic Range Compression,” Trends in Amplification, Spring 2005, pp. 45-76, vol. 9, No. 2, Sage Publications.
Gao, et al., “Adaptive Linearization of a Loudspeaker,” IEEE International Conference on Acoustics, Speech, and Signal Processing, Apr. 14-17, 1991, pp. 3589-3592, Toronto, Ontario, CA.
Silva, et al., “Convex Combination of Adaptive Filters With Different Tracking Capabilities,” IEEE International Conference on Acoustics, Speech, and Signal Processing, Apr. 15-20, 2007, pp. III 925-928, vol. 3, Honolulu, HI, USA.
Akhtar, et al., “A Method for Online Secondary Path Modeling in Active Noise Control Systems,” IEEE International Symposium on Circuits and Systems, May 23-26, 2005, pp. 264-267, vol. 1, Kobe, Japan.
Davari, et al., “A New Online Secondary Path Modeling Method for Feedforward Active Noise Control Systems,” IEEE International Conference on Industrial Technology, Apr. 21-24, 2008, pp. 1-6, Chengdu, China.
Lan, et al., “An Active Noise Control System Using Online Secondary Path Modeling With Reduced Auxiliary Noise,” IEEE Signal Processing Letters, Jan. 2002, pp. 16-18, vol. 9, Issue 1, IEEE Press, Piscataway, NJ.
Liu, et al., “Analysis of Online Secondary Path Modeling With Auxiliary Noise Scaled by Residual Noise Signal,” IEEE Transactions on Audio, Speech and Language Processing, Nov. 2010, pp. 1978-1993, vol. 18, Issue 8, IEEE Press, Piscataway, NJ.
Black, John W., “An Application of Side-Tone in Subjective Tests of Microphones and Headsets”, Project Report No. NM 001 064.01.20, Research Report of the U.S. Naval School of Aviation Medicine, Feb. 1, 1954, 12 pages (pp. 1-12 in pdf), Pensacola, FL, US.
Peters, Robert W., “The Effect of High-Pass and Low-Pass Filtering of Side-Tone Upon Speaker Intelligibility”, Project Report No. NM 001 064.01.25, Research Report of the U.S. Naval School of Aviation Medicine, Aug. 16, 1954, 13 pages (pp. 1-13 in pdf), Pensacola, FL, US.
Lane, et al., “Voice Level: Autophonic Scale, Perceived Loudness, and the Effects of Sidetone”, The Journal of the Acoustical Society of America, Feb. 1961, pp. 160-167, vol. 33, No. 2., Cambridge, MA, US.
Liu, et al., “Compensatory Responses to Loudness-shifted Voice Feedback During Production of Mandarin Speech”, Journal of the Acoustical Society of America, Oct. 2007, pp. 2405-2412, vol. 122, No. 4.
Paepcke, et al., “Yelling in the Hall: Using Sidetone to Address a Problem with Mobile Remote Presence Systems”, Symposium on User Interface Software and Technology, Oct. 16-19, 2011, 10 pages (pp. 1-10 in pdf), Santa Barbara, CA, US.
Therrien, et al., “Sensory Attenuation of Self-Produced Feedback: The Lombard Effect Revisited”, PLOS ONE, Nov. 2012, pp. 1-7, vol. 7, Issue 11, e49370, Ontario, Canada.
Abdollahzadeh Milani, et al., “On Maximum Achievable Noise Reduction in ANC Systems”, 2010 IEEE International Conference on Acoustics Speech and Signal Processing, Mar. 14-19, 2010, pp. 349-352, Dallas, TX, US.
Cohen, Israel, “Noise Spectrum Estimation in Adverse Environments: Improved Minima Controlled Recursive Averaging”, IEEE Transactions on Speech and Audio Processing, Sep. 2003, pp. 1-11, vol. 11, Issue 5, Piscataway, NJ, US.
Ryan, et al., “Optimum Near-Field Performance of Microphone Arrays Subject to a Far-Field Beampattern Constraint”, J. Acoust. Soc. Am., Nov. 2000, pp. 2248-2255, 108 (5), Pt. 1, Ottawa, Ontario, Canada.
Cohen, et al., “Noise Estimation by Minima Controlled Recursive Averaging for Robust Speech Enhancement”, IEEE Signal Processing Letters, Jan. 2002, pp. 12-15, vol. 9, No. 1, Piscataway, NJ, US.
Martin, Rainer, “Noise Power Spectral Density Estimation Based on Optimal Smoothing and Minimum Statistics”, IEEE Transactions on Speech and Audio Processing, Jul. 2001, pp. 504-512, vol. 9, No. 5, Piscataway, NJ, US.
Martin, Rainer, “Spectral Subtraction Based on Minimum Statistics”, Signal Processing VII Theories and Applications, Proceedings of EUSIPCO-94, 7th European Signal Processing Conference, Sep. 13-16, 1994, pp. 1182-1185, vol. III, Edinburgh, Scotland, U.K.
Booij, et al., “Virtual sensors for local, three dimensional, broadband multiple-channel active noise control and the effects on the quiet zones”, Proceedings of the International Conference on Noise and Vibration Engineering, ISMA 2010, Sep. 20-22, 2010, pp. 151-166, Leuven.
Kuo, et al., “Residual noise shaping technique for active noise control systems”, J. Acoust. Soc. Am. 95 (3), Mar. 1994, pp. 1665-1668.
Lopez-Caudana, Edgar Omar, “Active Noise Cancellation: The Unwanted Signal and the Hybrid Solution”, Adaptive Filtering Applications, Dr. Lino Garcia (Ed.), Jul. 2011, pp. 49-84, ISBN: 978-953-307-306-4, InTech.
Senderowicz, et al., “Low-Voltage Double-Sampled Delta-Sigma Converters”, IEEE Journal on Solid-State Circuits, Dec. 1997, pp. 1907-1919, vol. 32, No. 12, Piscataway, NJ.
Hurst, et al., “An improved double sampling scheme for switched-capacitor delta-sigma modulators”, 1992 IEEE Int. Symp. on Circuits and Systems, May 10-13, 1992, vol. 3, pp. 1179-1182, San Diego, CA.
Parkins, John W., “Narrowband and broadband active control in an enclosure using the acoustic energy density” Acoustical Society of America, Jul. 2000, vol. 108, No. 1, pp. 192-203.
Jin, et al. “A simultaneous equation method-based online secondary path modeling algorithm for active noise control”, Journal of Sound and Vibration, Apr. 25, 2007, pp. 455-474, vol. 303, No. 3-5, London, GB.
Erkelens, et al., “Tracking of Nonstationary Noise Based on Data-Driven Recursive Noise Power Estimation”, IEEE Transactions on Audio Speech and Language Processing, Aug. 2008, pp. 1112-1123, vol. 16, No. 6, Piscataway, NJ, US.
Rao, et al., “A Novel Two State Single Channel Speech Enhancement Technique”, India Conference (Indicon) 2011 Annual IEEE, IEEE, Dec. 2011, 6 pages (pp. 1-6 in pdf), Piscataway, NJ, US.
Rangachari, et al., “A noise-estimation algorithm for highly non-stationary environments”, Speech Communication, Feb. 2006, pp. 220-231, vol. 48, No. 2, Elsevier Science Publishers.
Parkins, et al., “Narrowband and broadband active control in an enclosure using the acoustic energy density”, J. Acoust. Soc. Am. Jul. 2000, pp. 192-203, vol. 108, issue 1, US.
Feng, Jinwei et al., “A broadband self-tuning active noise equaliser”, Signal Processing, Elsevier Science Publishers B.V. Amsterdam, NL, vol. 62, No. 2, Oct. 1, 1997, pp. 251-256.
Zhang, Ming et al., “A Robust Online Secondary Path Modeling Method with Auxiliary Noise Power Scheduling Strategy and Norm Constraint Manipulation”, IEEE Transactions on Speech and Audio Processing, IEEE Service Center, New York, NY, vol. 11, No. 1, Jan. 1, 2003.
Lopez-Caudana, Edgar et al., “A hybrid active noise cancelling with secondary path modeling”, 51st Midwest Symposium on Circuits and Systems, 2008, MWSCAS 2008, Aug. 10, 2008, pp. 277-280.
Widrow, B., et al., Adaptive Noise Cancelling: Principles and Applications, Proceedings of the IEEE, Dec. 1975, pp. 1692-1716, vol. 63, No. 12, IEEE, New York, NY, US.
Morgan, et al., A Delayless Subband Adaptive Filter Architecture, IEEE Transactions on Signal Processing, IEEE Service Center, Aug. 1995, pp. 1819-1829, vol. 43, No. 8, New York, NY, US.
Related Publications (1)
Number Date Country
20150358718 A1 Dec 2015 US