When a user listens to music with headphones, audio signals that are mixed to come from the left or right side sound to the user as if they are located adjacent to the left and right ears. Audio signals that are mixed to come from the center sound to the listener as if they are located in the middle of the listener's head. This placement effect is due to the recording process, which assumes that audio signals will be played through speakers that will create a natural dispersion of the reproduced audio signals within a room, where the room provides a sound path to both ears. Playing audio signals through headphones sounds unnatural in part because there is no sound path to both ears.
For purposes of summarizing the disclosure, certain aspects, advantages and novel features of several embodiments are described herein. It is to be understood that not necessarily all such advantages can be achieved in accordance with any particular embodiment of the embodiments disclosed herein. Thus, the embodiments disclosed herein can be embodied or carried out in a manner that achieves or optimizes one advantage or group of advantages as taught herein without necessarily achieving other advantages as may be taught or suggested herein.
In certain embodiments, a method of enhancing audio for headphones can be implemented under control of a hardware processor. The method can include receiving a left input audio signal, receiving a right input audio signal, obtaining a difference signal from the left and right input audio signals, filtering the difference signal at least with a notch filter to produce a spatially-enhanced audio signal, filtering the left and right input audio signals with at least two band pass filters to produce bass-enhanced audio signals, filtering the left and right input audio signals with a high pass filter to produce high-frequency enhanced audio signals, mixing the spatially-enhanced audio signal, the bass-enhanced audio signals, and the high-frequency enhanced audio signals to produce left and right headphone output signals, and outputting the left and right headphone output signals to headphones for playback to a listener.
The method of the preceding paragraph may be implemented with any combination of the following features: the notch filter of the spatial enhancer can attenuate frequencies in a frequency band associated with speech; the notch filter can attenuate frequencies in a frequency band centered at about 2500 Hz; the notch filter can attenuate frequencies in a frequency band of at least about 2100 Hz to about 2900 Hz; a spatial enhancement provided by the notch filter can be effective when the headphones are closely coupled with the listener's ears; the band pass filters can emphasize harmonics of a fundamental that may be attenuated or unreproducible by headphones; and the high pass filter can have a cutoff frequency of about 5 kHz.
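The method and features above can be sketched in code. This is an illustrative, non-authoritative sketch: the notch center (about 2500 Hz) and high-pass cutoff (about 5 kHz) come from the features listed, but the sample rate, filter orders, notch Q, band-pass corner frequencies, and mixing gains are assumptions chosen only for the example.

```python
import numpy as np
from scipy import signal

FS = 48000  # assumed sample rate


def enhance_for_headphones(left, right, fs=FS):
    """Sketch of the method: a spatial path (notch-filtered difference
    signal), a bass path (two band pass filters), and a treble path
    (high pass filter), mixed into left and right headphone outputs."""
    # Spatial path: difference signal through a notch centered near 2500 Hz
    b, a = signal.iirnotch(w0=2500.0, Q=2.0, fs=fs)
    diff = signal.lfilter(b, a, left - right)

    # Bass path: two band pass filters (center frequencies here are
    # illustrative assumptions meant to emphasize bass harmonics)
    sos1 = signal.butter(2, [60, 120], "bandpass", fs=fs, output="sos")
    sos2 = signal.butter(2, [120, 240], "bandpass", fs=fs, output="sos")
    bass_l = signal.sosfilt(sos1, left) + signal.sosfilt(sos2, left)
    bass_r = signal.sosfilt(sos1, right) + signal.sosfilt(sos2, right)

    # Treble path: high pass filter with a cutoff of about 5 kHz
    sos_hp = signal.butter(2, 5000, "highpass", fs=fs, output="sos")
    hi_l = signal.sosfilt(sos_hp, left)
    hi_r = signal.sosfilt(sos_hp, right)

    # Mix: recombine sum and enhanced difference, then add bass and treble
    mono = left + right
    out_l = 0.5 * (mono + diff) + bass_l + hi_l
    out_r = 0.5 * (mono - diff) + bass_r + hi_r
    return out_l, out_r
```

The three paths are computed independently from the same inputs and only combined at the final mix, mirroring the parallel structure described later in the disclosure.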
In certain embodiments, a system for enhancing audio for headphones can include a spatial enhancer that can obtain a difference signal from a left input channel of audio and a right input channel of audio and to process the difference signal with a notch filter to produce a spatially-enhanced channel of audio. The system can further include a low frequency enhancer that can process the left input channel of audio and the right input channel of audio to produce bass-enhanced channels of audio. The system may also include a high frequency enhancer that can process the left input channel of audio and the right input channel of audio to produce high-frequency enhanced channels of audio. In addition, the system can include a mixer that can combine the spatially-enhanced channel of audio, the bass-enhanced channels of audio, and the high-frequency enhanced channels of audio to produce left and right headphone output channels. Moreover, the spatial enhancer, the low frequency enhancer, the high frequency enhancer, and the mixer can be implemented by one or more hardware processors.
The system of the preceding paragraph may be implemented with any combination of the following features: the notch filter of the spatial enhancer can attenuate frequencies in a frequency band associated with speech; the notch filter can attenuate frequencies in a frequency band centered at about 2500 Hz; the notch filter can attenuate frequencies in a frequency band of at least about 2100 Hz to about 2900 Hz; a spatial enhancement provided by the notch filter can be effective when the headphones are closely coupled with the listener's ears; the band pass filters can emphasize harmonics of a fundamental that may be attenuated or unreproducible by headphones; and the high pass filter can have a cutoff frequency of about 5 kHz.
In various embodiments, non-transitory physical computer storage includes instructions stored thereon that, when executed by a hardware processor, can implement a system for enhancing audio for headphones. The system can filter left and right input audio signals with a notch filter to produce spatially-enhanced audio signals. The system can also obtain a difference signal from the spatially-enhanced audio signals. The system may also filter the left and right input audio signals with at least two band pass filters to produce bass-enhanced audio signals. Moreover, the system may filter the left and right input audio signals with a high pass filter to produce high-frequency enhanced audio signals. Additionally, the system may mix the difference signal, the bass-enhanced audio signals, and the high-frequency enhanced audio signals to produce left and right headphone output signals.
Throughout the drawings, reference numbers are re-used to indicate correspondence between referenced elements. The drawings are provided to illustrate embodiments of the features described herein and not to limit the scope thereof.
I. Introduction
With loudspeakers placed in a room, the width between the loudspeakers can create a stereo effect that may be perceived by a listener as providing a spatial, ambient sound. With headphones, due to the close position of the headphone speakers to a listener's ears and the bypassing of the outer ear, an inaccurate, overly discrete stereo effect may be perceived by the listener. This discrete stereo effect may be less immersive than a stereo effect provided by stereo loudspeakers. Many headphones are also poor at reproducing certain low-bass and high frequencies, resulting in a poor listening experience for many listeners.
This disclosure describes embodiments of an audio enhancement system that can provide spatial enhancement, low frequency enhancement, and/or high frequency enhancement for headphone audio. In an embodiment, the spatial enhancement can increase the sense of spaciousness or stereo separation between left and right headphone channels and eliminate the “in the head” effect typically presented by headphones. The low frequency enhancement can enhance bass frequencies that are unreproducible or attenuated in headphone speakers by emphasizing harmonics of the low bass frequencies. The high frequency enhancement can emphasize higher frequencies that may be less reproducible or poorly tuned for headphone speakers. In some embodiments, the audio enhancement system can provide a user interface that enables a user to control the amount (e.g., gains) of each enhancement applied to headphone input signals. The audio enhancement system may also be designed to provide one or more of these enhancements more effectively when headphones with good coupling to the ear are used.
II. Example Embodiments
Advantageously, in certain embodiments, the audio enhancement system 114 can provide low-frequency enhancements, high-frequency enhancements, and/or spatial enhancements to audio. These audio enhancements can be used to improve headphone audio for music, videos, television, movies, gaming, conference calls, and the like.
The user device 110 can be any device that includes a hardware processor that can perform the functions associated with the audio enhancement system 114 and/or the audio playback application 112. For instance, the user device 110 can be any computing device or any consumer electronics device, some examples including a television, laptop, desktop, phone (e.g., smartphone or other cell phone), tablet computer, phablet, gaming station, ebook reader, and the like.
The audio playback application 112 can include hardware and/or software for playing back audio, including audio that may be locally stored, downloaded or streamed over a network (not shown), such as the Internet. In the example where the user device 110 is a television or an audio/visual system, the audio playback application 112 can access audio from a media disc, such as a Blu-ray disc or the like. Alternatively, the audio playback application 112 can access the audio from a hard drive or, as described above, from a remote network application or web site over the Internet.
The audio enhancement system 114 can be implemented as software and/or hardware. For example, the audio enhancement system 114 can be implemented as software or firmware executing on a hardware processor, such as a general purpose processor programmed with specific instructions to become a specific purpose processor, a digital signal processor programmed with specific instructions to become a specific purpose processor, or the like. The processor may be a fixed or floating-point processor. In another embodiment, the audio enhancement system 114 can be implemented as programmed logic in a logic-programmable processor, such as a field programmable gate array (FPGA) or the like. Additional examples of processors are described in greater detail below in the “Terminology” section.
In an embodiment, the audio enhancement system 114 is an application that may be downloaded from an online application store, such as the Apple™ App Store or the Google Play store for Android™ devices. The audio enhancement system 114 can interact with an audio library in the user device 110 to access audio functionality of the device 110. In an embodiment, the audio playback application 112 executes program call(s) to the audio enhancement system 114 to cause the audio enhancement system 114 to enhance audio for playback. Conversely, the audio enhancement system 114 may execute program call(s) to the audio playback application 112 to cause playback of enhanced audio to occur. In another embodiment, the audio playback application 112 is part of the audio enhancement system 114 or vice versa.
Advantageously, in certain embodiments, the audio enhancement system 114 can provide one or more audio enhancements that are designed to work well with headphones. In some embodiments, these audio enhancements may be more effective when headphones have good coupling to the ear. An example of headphones 120 connected to the user device 110 via a cable 122 is shown. These headphones 120 are example ear-bud headphones (described in greater detail below with respect to
In other embodiments, some or all of the features described herein as being implemented by the audio enhancement system 114 may also be implemented when the user device 110 is connected to loudspeakers instead of headphones 120. In loudspeaker embodiments, the audio enhancement system 114 may also perform cross-talk canceling to reduce speaker crosstalk between a listener's ears.
As described above, the audio enhancement system 114 can provide a low-frequency enhancement that can enhance the low-frequency response of the headphones 120. Enhancing the low frequency response may be beneficial for headphone speakers because speakers in headphones 120 are relatively small and may have a poor low-bass response. In addition, the audio enhancement system 114 can enhance high frequencies of the headphone speakers 120. Further, the audio enhancement system 114 can provide a spatial enhancement that may increase the sense of spaciousness or stereo separation between headphone channels. Further, the audio enhancement system 114 may implement any sub-combination of low-frequency, high-frequency, and spatial enhancements, among other enhancements.
Referring to
In some embodiments, it can be useful to provide the headphones 120 with the audio enhancement system 114 in the cable 122 or earpieces 124, as opposed to in the user device 110. One example use case for doing so is to enable compatibility of the audio enhancement system 114 with some user devices 110 that do not have open access to audio libraries, such that the audio enhancement system 114 cannot run completely or even at all on the user device 110. In addition, in some embodiments, even when the user device 110 may be compatible with running the audio enhancement system 114, it may still be useful to have the audio enhancement system 114 in the headphones 120.
Further, although not shown, the user device 110 in
Turning to
Turning to
The audio enhancement system 300 receives left and right inputs and outputs left and right outputs. The left and right inputs may be input audio signals, input audio channels, or the like. The left and right stereo inputs may be obtained from a locally-stored audio file, a downloaded audio file, or a streamed audio file, as described above. The audio from the left and right inputs is provided to three separate enhancement modules 310, 320 and 330. These modules 310, 320, 330 are shown logically in parallel, indicating that their processing may be performed independently of each other. Independent processing or logically parallel processing can ensure or attempt to ensure that user adjustment of a gain in one of the enhancements does not cause overload or clipping in another enhancement (due to multiplication of gains in logically serial processing). The processing of these modules 310, 320, 330 may be actually performed in parallel (e.g., in separate processor cores, in separate logic paths of an FPGA, or in DSP or computer programming code), or they may be processed serially although logically implemented in parallel.
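The clipping risk mentioned above can be seen with a toy calculation (the numbers are purely illustrative): in a serial chain each enhancement operates on the previous one's output, so user gains multiply, whereas logically parallel paths only sum their separate contributions at the mixer.

```python
def serial_peak(g1, g2, x=1.0):
    # Serial chain: enhancement 2 processes enhancement 1's output,
    # so the two user gains multiply
    return x * g1 * g2


def parallel_peak(g1, g2, x=1.0):
    # Parallel paths: each enhancement processes the input directly,
    # and the mixer sums the two contributions
    return x * g1 + x * g2
```

Raising one gain from 2.0 to 4.0 doubles the serial peak (4.0 to 8.0), but in the parallel case it adds only that one path's increase (4.0 to 6.0), leaving the other enhancement's level untouched.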
The enhancement modules 310, 320, 330 shown include a spatial enhancer 310, a low-frequency enhancer 320, and a high-frequency enhancer 330. Each of the enhancements 310, 320 or 330 can be tuned independently by the user or by a provider of the audio enhancement system 300 to sound better based on the particular type of headphones used, user device used, or simply based on user preferences.
In an embodiment, the spatial enhancer 310 can enhance difference information in the stereo signals to create a sense of ambiance or greater stereo separation. The difference information present in the stereo signals can naturally include a sense of ambiance or separation between the channels, which can provide a pleasing stereo effect when played over loudspeakers. However, since the speakers in headphones are close to or in the listener's ears and bypass the outer ear or pinna, the stereo separation actually experienced by a listener in existing audio playback systems may be inaccurate and overly discrete. Thus, the spatial enhancer 310 can emphasize the difference information so as to create a greater sense of spaciousness to achieve an improved stereo effect and sense of ambience with headphones.
The low-frequency enhancer 320 can boost low-bass frequencies by emphasizing one or more harmonics of an unreproducible or attenuated fundamental frequency. Low-bass signals, like other signals, can include one or more fundamental frequencies and one or more harmonics of each fundamental frequency. One or more of the fundamental frequencies may be unreproducible, or only producible in part by a headphone speaker. However, when a listener hears one or more harmonics of a missing or attenuated fundamental frequency, the listener can perceive the fundamental to be present, even though it is not. Thus, by emphasizing one or more of the harmonics, the low-frequency enhancer 320 can create a greater perception of low bass frequencies than are actually present in the signal.
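The missing-fundamental effect described above can be illustrated numerically (the 50 Hz fundamental is an assumed example of a frequency too low for a small headphone driver): the harmonics that a small driver can reproduce are spaced exactly one fundamental apart, which is the cue the ear uses to perceive the absent fundamental.

```python
FUNDAMENTAL = 50.0  # Hz; assumed too low for a small headphone driver

# Harmonics of the fundamental that a small driver can still reproduce
harmonics = [FUNDAMENTAL * n for n in range(2, 5)]  # 100, 150, 200 Hz

# Adjacent harmonics are spaced exactly one fundamental apart
spacings = [h2 - h1 for h1, h2 in zip(harmonics, harmonics[1:])]
```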
The high-frequency enhancer 330 can emphasize high frequencies relative to the low frequencies emphasized by the low-frequency enhancer 320. This high-frequency enhancement can adjust a poor high-frequency response of a headphone speaker.
Each of the enhancers 310, 320 and 330 can provide left and right outputs, which can be mixed by a mixer 340 down to the left and right outputs provided to the headphones (or to subsequent processing prior to being output to the headphones). The mixer 340 may, for instance, mix each of the left outputs provided by the enhancers 310, 320 and 330 into the left output and similarly mix each of the right outputs provided by the enhancers 310, 320 and 330 into the right output.
Advantageously, in certain embodiments, because the enhancers 310, 320 and 330 are operated in different processing paths, they can be independently tuned and are not required to interact with each other. Thus, a user (who may be the listener or a provider of the user device, audio enhancement system 300, or headphones) can independently tune each of the enhancements in one embodiment. This independent tuning can allow for greater customizability and control over the enhancements to respond to a variety of different types of audio, as well as different types of headphones and user devices.
Although not shown, the audio enhancement system 300 may also include acoustic noise cancellation (ANC) or attenuation features in some embodiments, among possibly other enhancements.
Turning to
In the depicted embodiment, the left and right inputs are provided to an input gain block 402, which can provide an overall gain value to the inputs, which may affect the overall output volume at the outputs. Similarly, an output gain block may be provided before the outputs, although not shown, instead of or in addition to the input gain block 402. An example −6 dB default gain is shown for the input gain block 402, but a different gain may be set by the user (or the block 402 may be omitted entirely). The output of the input gain block 402 is provided to the spatial enhancement components, low-frequency enhancement components, and high-frequency enhancement components referred to above.
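The −6 dB default corresponds to roughly halving the signal amplitude. A minimal helper shows why (the conversion formula is the standard decibel-to-amplitude relation; the helper name is ours):

```python
def db_to_linear(db):
    # Standard amplitude conversion: dB = 20 * log10(linear gain)
    return 10.0 ** (db / 20.0)


input_gain = db_to_linear(-6.0)  # ~0.501, i.e. about half amplitude
```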
Starting with the spatial enhancement components, the left (L) and right (R) outputs are provided from the gain block 402 to a sum block 411, where they are summed to provide an L+R signal. The L+R signal may include the mono or common portion of the left and right signals. The L+R signal is supplied to a gain block 412, which applies a gain to the L+R signal, the output of which is provided to another sum block 413. The gain block 412 may be user-settable, or it may have a fixed gain.
In addition, the left input signal is supplied from the input gain block 402 to a sum block 415, and the right input signal is provided from the input gain block 402 to an inverter 414, which inverts the right input signal and supplies the inverted right input signal to the sum block 415. The sum block 415 produces an L−R signal, or a difference signal, that is then supplied to the gain block 416. The L−R signal can include difference information between the two signals. This difference information can provide a sense of ambience between the two signals.
The gain block 416 may be user-settable, or it may have a fixed gain. The output of the gain block 416 is provided to an L−R filter 417, also referred to herein as a difference filter 417. The difference filter 417 can produce a spatial effect by spatially enhancing the difference information included in the L−R signal. The output of the L−R filter 417 is supplied to the sum block 413 and to an inverter 418, which inverts the output of the L−R signal. The inverter 418 supplies an output to another sum block 419. Thus, the sum block 413 sums inputs from the L+R gain block 412 and the output of the L−R filter 417, while the sum block 419 sums the output of the L+R gain block 412 and the inverted output of the inverter 418.
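The sum/difference matrixing of blocks 411 through 419 can be sketched numerically. With the difference filter omitted and both gains set to 0.5 (assumptions made only for this example), the matrix reconstructs the original channels exactly, showing that the recombination itself is lossless before any spatial filtering is applied:

```python
import numpy as np

left_in = np.array([1.0, 0.5, -0.25])
right_in = np.array([0.2, -0.4, 0.75])

g_sum, g_diff = 0.5, 0.5  # illustrative settings for gain blocks 412 and 416

mid = (left_in + right_in) * g_sum    # sum block 411 and gain block 412
side = (left_in - right_in) * g_diff  # inverter 414, sum block 415, gain 416
# The L-R filter 417 is omitted here; with no filtering, the matrix
# below reconstructs the inputs exactly.

left_out = mid + side                 # sum block 413
right_out = mid - side                # inverter 418 and sum block 419
```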
Each of the sum blocks 413, 419 supplies an output to the output mixer 440. The output of the sum block 413 can be a left output signal that can be mixed down to the overall left output provided by the output mixer 440, while the output of the sum block 419 can be a right output that the output mixer 440 mixes down to the overall right output.
Referring to the low-frequency enhancement components, the output of the input gain block 402 is provided to low-frequency filters 422 including a low-frequency filter for the left input signal (LF FilterL) and a low-frequency filter for the right input signal (LF FilterR). Each of the low-frequency filters 422 can provide a low-frequency enhancement. The output of each filter is provided to a low-frequency gain block 424, which may be user-adjustable or which may be a fixed gain. The outputs of the low-frequency gain block 424 are provided to the output mixer 440, which mixes the left output from the low-frequency left filter down to the overall left output provided by the output mixer 440 and mixes the right output of the low-frequency right filter down to the overall right output provided by the output mixer 440.
Regarding the high-frequency enhancement components, the left and right inputs that have been supplied through the input gain block 402 are then applied also to the high-frequency filters 432 for both left (HF FilterL) and right inputs (HF FilterR). The high-frequency filters 432 can provide a high-frequency enhancement, which may emphasize certain high frequencies. The output of the high-frequency filters 432 is provided to high-frequency gain block 434, which may apply a user-adjustable or fixed gain. The output of the high-frequency gain block 434 is supplied to the output mixer 440 which, like the other enhancement blocks above, can mix the left output from the left high-frequency filter down to the left overall output from the output mixer 440 and can mix the right output from the right high-frequency filter 432 to the overall right output provided by the output mixer 440. Thus, the output mixer 440 can sum each of the inputs from the left filters and sum block 413 to a left overall output and can sum each of the inputs from the right filters and sum block 419 to a right overall output. In other embodiments, the output mixer 440 may also include one or more gain controls in any of the signal paths to adjust the amount of mixing of each input into the overall output signals.
In another embodiment, the filters shown, including the L−R filter 417, the low-frequency filters 422, and/or the high-frequency filters 432, can be implemented as infinite impulse response (IIR) filters. Each filter may be implemented by one or more first- or second-order filters and, in one embodiment, is implemented with second-order filters in a bi-quad IIR configuration. IIR filters can provide advantages such as low processing requirements and higher resolution at low frequencies, which may be useful both for implementation in a low-end processor of a user device or headphone and for providing finer control over low-frequency enhancement.
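A single bi-quad section implements the standard second-order difference equation. The sketch below illustrates that structure with example coefficients from a Butterworth design; the routine is a generic direct-form-I biquad written for illustration, not the disclosed implementation, and the 48 kHz sample rate is assumed.

```python
import numpy as np
from scipy import signal


def biquad(x, b0, b1, b2, a1, a2):
    """Direct-form-I biquad:
    y[n] = b0*x[n] + b1*x[n-1] + b2*x[n-2] - a1*y[n-1] - a2*y[n-2]"""
    y = np.zeros_like(x)
    x1 = x2 = y1 = y2 = 0.0
    for n, xn in enumerate(x):
        yn = b0 * xn + b1 * x1 + b2 * x2 - a1 * y1 - a2 * y2
        x2, x1 = x1, xn
        y2, y1 = y1, yn
        y[n] = yn
    return y


# Example second-order coefficients: a 5 kHz high pass at an assumed
# 48 kHz sample rate (a[0] is normalized to 1 by the design routine)
b, a = signal.butter(2, 5000, btype="highpass", fs=48000)
```

Running a signal through `biquad` with these coefficients matches scipy's reference `lfilter`, confirming the difference equation above.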
In other embodiments, finite impulse response (FIR) filters may be used instead of IIR filters, or some of the filters shown may be IIR filters while others are FIR filters. However, while FIR filters provide useful passband phase linearity, such linearity may not be required in certain embodiments of the audio enhancement system 400. Thus, it may be desirable to use IIR filters in place of FIR filters in some implementations.
Conceptually, although two filters are shown as low-frequency filters 422 in
Turning to
Although only two band pass filters 523 and 524 are shown, fewer or more than two band pass filters may be provided in other embodiments. The band pass filters 523 and 524 may have different center frequencies. Each of the band pass filters 523 and 524 can emphasize a different aspect of the low-frequency information in the signal. For instance, one of the band pass filters 523 or 524 can emphasize the first harmonics of a typical bass signal, and the other band pass filter can emphasize other harmonics. The harmonics emphasized by the two band pass filters can cause the ear to nonlinearly mix the frequencies filtered by the band pass filters 523 and 524 so as to trick the ear into hearing the missing fundamental. The difference of the harmonics emphasized by the band pass filters 523 and 524 can be heard by the ears as the missing fundamental.
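One plausible realization of the two band pass filters (a sketch under stated assumptions, not the disclosed design) centers them on the second and third harmonics of an assumed 50 Hz fundamental, so that the difference between the two center frequencies equals the missing fundamental; the sample rate, filter order, and bandwidth fractions are illustrative.

```python
import numpy as np
from scipy import signal

FS = 48000    # assumed sample rate
FUND = 50.0   # Hz; an assumed fundamental below the driver's range

# Two band pass filters with different center frequencies: here the
# second and third harmonics of the fundamental
c1, c2 = 2 * FUND, 3 * FUND  # 100 Hz and 150 Hz; their difference is FUND
sos1 = signal.butter(2, [0.75 * c1, 1.25 * c1], "bandpass", fs=FS, output="sos")
sos2 = signal.butter(2, [0.75 * c2, 1.25 * c2], "bandpass", fs=FS, output="sos")


def bass_enhance(x):
    # Summing the two band-passed signals emphasizes the harmonics that
    # cue the ear to perceive the missing fundamental
    return signal.sosfilt(sos1, x) + signal.sosfilt(sos2, x)
```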
Referring to
Referring again to
The frequency response 830 of the low-pass filter 526 of
Turning to
The notch filter 619 is an example of a band stop filter. The combined notch filter 619, gain block 618, and sum block 620 can create a spatial enhancement effect in one embodiment by de-emphasizing certain frequencies that many listeners perceive as coming from the front of a listener. For instance, referring to
For many people, the ear is very sensitive to speech coming from the front of a listener in a range around about 2500 Hz or about 2600 Hz. Because speech predominantly occurs in a range centered at about 2500 Hz or about 2600 Hz, and because people typically talk to people directly in front of them, the ears tend to be adept at distinguishing sound coming from the front of a listener at these frequencies. Thus, by attenuating these frequencies, the difference filter 617 of
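One way to realize such a notch (a sketch, not the disclosed design) is a standard IIR notch with the quality factor Q chosen so the stop band roughly spans the about 2100 Hz to about 2900 Hz range stated elsewhere in this disclosure; the sample rate is an assumption.

```python
import numpy as np
from scipy import signal

FS = 48000                    # assumed sample rate
CENTER = 2500.0               # Hz: band associated with frontal speech
BANDWIDTH = 2900.0 - 2100.0   # target stop-band width

Q = CENTER / BANDWIDTH        # ~3.1, spanning roughly 2100-2900 Hz
b, a = signal.iirnotch(CENTER, Q, fs=FS)

# Gain is near zero at the notch center and near unity well away from it
w, h = signal.freqz(b, a, worN=np.array([500.0, 2500.0]), fs=FS)
```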
Turning to
The low-frequency response 720, as described above, includes two pass bands 712 and 714 and a valley 617 caused by the band pass filters, followed by a roll-off after the pass band 714. The bandwidth of the first pass band 712 is relatively wider than the bandwidth of the second pass band 714 in the example embodiment shown due to the truncation of the second peak by the low pass filter response 830 (see
The frequency response 710 of the difference filters described above includes a notch 722 that reflects both the deep notch 912 of
Turning to
Playback controls 1020 are also shown on the display 1000, which can allow a user to control playback of audio. Enhancement gain controls 1030 on the display 1000 can allow a user to adjust the gain values applied to the separate enhancements. Each enhancement gain control includes a slider, so that the gain is selected based on the position of the slider. In one embodiment, moving the slider to the right increases the gain applied to that enhancement, whereas moving the slider to the left decreases the gain applied to that enhancement. Thus, a user can selectively emphasize one of the enhancements over the others, or emphasize them equally together.
Selection of the gain controls by a user can cause adjustment of the gain controls shown in
Although sliders and buttons are shown as example user interface controls, many other types of user interface controls may be used in place of sliders and buttons in other embodiments.
III. Terminology
Many other variations than those described herein will be apparent from this disclosure. For example, depending on the embodiment, certain acts, events, or functions of any of the algorithms described herein can be performed in a different sequence, can be added, merged, or left out altogether (e.g., not all described acts or events are necessary for the practice of the algorithms). Moreover, in certain embodiments, acts or events can be performed concurrently, e.g., through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially. In addition, different tasks or processes can be performed by different machines and/or computing systems that can function together.
The various illustrative logical blocks, modules, and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or combinations of both. To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, modules, and steps have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. The described functionality can be implemented in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the disclosure.
The various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine, such as a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor can be a microprocessor, but in the alternative, the processor can be a controller, microcontroller, or state machine, combinations of the same, or the like. A processor can include electrical circuitry configured to process computer-executable instructions. In another embodiment, a processor includes an FPGA or other programmable device that performs logic operations without processing computer-executable instructions. A processor can also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an appliance, to name a few.
The steps of a method, process, or algorithm described in connection with the embodiments disclosed herein can be embodied directly in hardware, in a software module stored in one or more memory devices and executed by one or more processors, or in a combination of the two. A software module can reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable disk, a CD-ROM, or any other form of non-transitory computer-readable storage medium, media, or physical computer storage known in the art. An example storage medium can be coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium can be integral to the processor. The storage medium can be volatile or nonvolatile. The processor and the storage medium can reside in an ASIC.
Conditional language used herein, such as, among others, “can,” “might,” “may,” “e.g.,” and the like, unless specifically stated otherwise, or otherwise understood within the context as used, is generally intended to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or states. Thus, such conditional language is not generally intended to imply that features, elements and/or states are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without author input or prompting, whether these features, elements and/or states are included or are to be performed in any particular embodiment. The terms “comprising,” “including,” “having,” and the like are synonymous and are used inclusively, in an open-ended fashion, and do not exclude additional elements, features, acts, operations, and so forth. Also, the term “or” is used in its inclusive sense (and not in its exclusive sense) so that when used, for example, to connect a list of elements, the term “or” means one, some, or all of the elements in the list. Further, the term “each,” as used herein, in addition to having its ordinary meaning, can mean any subset of a set of elements to which the term “each” is applied.
Disjunctive language such as the phrase “at least one of X, Y and Z,” unless specifically stated otherwise, is to be understood with the context as used in general to convey that an item, term, etc. may be either X, Y, or Z, or a combination thereof. Thus, such disjunctive language is not generally intended to imply that certain embodiments require at least one of X, at least one of Y, and at least one of Z to each be present.
Unless otherwise explicitly stated, articles such as “a” or “an” should generally be interpreted to include one or more described items. Accordingly, phrases such as “a device configured to” are intended to include one or more recited devices. Such one or more recited devices can also be collectively configured to carry out the stated recitations. For example, “a processor configured to carry out recitations A, B and C” can include a first processor configured to carry out recitation A working in conjunction with a second processor configured to carry out recitations B and C.
While the above detailed description has shown, described, and pointed out novel features as applied to various embodiments, it will be understood that various omissions, substitutions, and changes in the form and details of the devices or algorithms illustrated can be made without departing from the spirit of the disclosure. As will be recognized, certain embodiments of the inventions described herein can be embodied within a form that does not provide all of the features and benefits set forth herein, as some features can be used or practiced separately from others.
This application is a continuation of U.S. application Ser. No. 14/992,860, titled “Headphone Audio Enhancement System,” which is a continuation of U.S. application Ser. No. 14/284,832, filed May 22, 2014 and titled “Headphone Audio Enhancement System,” which claims priority under 35 U.S.C. § 119(e) as a nonprovisional application of U.S. Provisional Application No. 61/826,679, filed May 23, 2013 and titled “Audio Processor.” The disclosures of all of the above applications are hereby incorporated by reference in their entireties.
U.S. Patent Documents

Number | Name | Date | Kind |
---|---|---|---|
1616639 | Sprague | Feb 1927 | A |
1951669 | Ramsey | Mar 1934 | A |
2113976 | Bagno | Apr 1938 | A |
2315248 | De Rosa | Mar 1943 | A |
2315249 | De Rosa | Mar 1943 | A |
2461344 | Olson | Feb 1949 | A |
3170991 | Glasgal | Feb 1965 | A |
3229038 | Richter | Jan 1966 | A |
3246081 | Edwards | Apr 1966 | A |
3249696 | Van Sickle | May 1966 | A |
3397285 | Golonski | Aug 1968 | A |
3398810 | Clark, III | Aug 1968 | A |
3612211 | Clark, III | Oct 1971 | A |
3665105 | Hafler | May 1972 | A |
3697692 | Iida | Oct 1972 | A |
3725586 | Iida | Apr 1973 | A |
3745254 | Ohta et al. | Jul 1973 | A |
3757047 | Ito et al. | Sep 1973 | A |
3761631 | Ito et al. | Sep 1973 | A |
3772479 | Hilbert | Nov 1973 | A |
3849600 | Ohshima | Nov 1974 | A |
3860951 | Camras | Jan 1975 | A |
3883692 | Tsurushima | May 1975 | A |
3885101 | Ito et al. | May 1975 | A |
3892624 | Shimada | Jul 1975 | A |
3911220 | Tsurushima | Oct 1975 | A |
3916104 | Anazawa et al. | Oct 1975 | A |
3921104 | Gundry | Nov 1975 | A |
3925615 | Nakano | Dec 1975 | A |
3944748 | Kuhn | Mar 1976 | A |
3970787 | Searle | Jul 1976 | A |
3943293 | Bailey | Sep 1976 | A |
3989897 | Carver | Nov 1976 | A |
4024344 | Dolby et al. | May 1977 | A |
4027101 | DeFreitas et al. | May 1977 | A |
4030342 | Bond et al. | Jun 1977 | A |
4045748 | Filliman | Aug 1977 | A |
4052560 | Santmann | Oct 1977 | A |
4063034 | Peters | Dec 1977 | A |
4069394 | Dol et al. | Jan 1978 | A |
4085291 | Cooper | Apr 1978 | A |
4087629 | Atoji et al. | May 1978 | A |
4087631 | Yamada et al. | May 1978 | A |
4097689 | Yamada et al. | Jun 1978 | A |
4118599 | Iwahara et al. | Oct 1978 | A |
4118600 | Stahl | Oct 1978 | A |
4135158 | Parmet | Jan 1979 | A |
4139728 | Haramoto et al. | Feb 1979 | A |
4149031 | Cooper | Apr 1979 | A |
4149036 | Okamoto et al. | Apr 1979 | A |
4152542 | Cooper | May 1979 | A |
4162457 | Grodinsky | Jul 1979 | A |
4177356 | Jaeger et al. | Dec 1979 | A |
4182930 | Blackmer | Jan 1980 | A |
4185239 | Filoux | Jan 1980 | A |
4188504 | Kasuga et al. | Feb 1980 | A |
4191852 | Nishikawa | Mar 1980 | A |
4192969 | Iwahara | Mar 1980 | A |
4204092 | Bruney | May 1980 | A |
4208546 | Laupman | Jun 1980 | A |
4209665 | Iwahara | Jun 1980 | A |
4214267 | Roese | Jul 1980 | A |
4218583 | Poulo | Aug 1980 | A |
4218585 | Carver | Aug 1980 | A |
4219696 | Kogure et al. | Aug 1980 | A |
4237343 | Kurtin et al. | Dec 1980 | A |
4239937 | Kampmann | Dec 1980 | A |
4239939 | Griffis | Dec 1980 | A |
4251688 | Furner | Feb 1981 | A |
4268915 | Parmet | May 1981 | A |
4303800 | DeFreitas | Dec 1981 | A |
4306113 | Morton | Dec 1981 | A |
4308423 | Cohen | Dec 1981 | A |
4308424 | Bice, Jr. | Dec 1981 | A |
4308426 | Kikuchi | Dec 1981 | A |
4309570 | Carver | Jan 1982 | A |
4316058 | Christensen | Feb 1982 | A |
4329544 | Yamada | May 1982 | A |
4332979 | Fischer | Jun 1982 | A |
4334740 | Wray | Jun 1982 | A |
4349698 | Iwahara | Sep 1982 | A |
4352953 | Emmer | Oct 1982 | A |
4355203 | Cohen | Oct 1982 | A |
4356349 | Robinson | Oct 1982 | A |
4388494 | Schone et al. | Jun 1983 | A |
4393270 | van de Berg | Jul 1983 | A |
4394536 | Shima et al. | Jul 1983 | A |
4398158 | Rodgers | Aug 1983 | A |
4408095 | Ariga et al. | Oct 1983 | A |
4446488 | Suzuki | May 1984 | A |
4479235 | Griffis | Oct 1984 | A |
4481662 | Long et al. | Nov 1984 | A |
4489432 | Polk | Dec 1984 | A |
4495637 | Bruney | Jan 1985 | A |
4497064 | Polk | Jan 1985 | A |
4503554 | Davis | Mar 1985 | A |
4546389 | Gibson et al. | Oct 1985 | A |
4549228 | Dieterich | Oct 1985 | A |
4551770 | Palmer et al. | Nov 1985 | A |
4553176 | Mendrala | Nov 1985 | A |
4562487 | Hurst et al. | Dec 1985 | A |
4567607 | Bruney et al. | Jan 1986 | A |
4569074 | Polk | Feb 1986 | A |
4589129 | Blackmer et al. | May 1986 | A |
4593696 | Hochmair et al. | Jun 1986 | A |
4594610 | Patel | Jun 1986 | A |
4594729 | Weingartner | Jun 1986 | A |
4594730 | Rosen | Jun 1986 | A |
4599611 | Bowker et al. | Jul 1986 | A |
4622691 | Tokumo et al. | Nov 1986 | A |
4648117 | Kunugi et al. | Mar 1987 | A |
4683496 | Tom | Jul 1987 | A |
4696036 | Julstrom | Sep 1987 | A |
4698842 | Mackie et al. | Oct 1987 | A |
4703502 | Kasai et al. | Oct 1987 | A |
4739514 | Short et al. | Apr 1988 | A |
4748669 | Klayman | May 1988 | A |
4790014 | Watanabe et al. | Dec 1988 | A |
4803727 | Holt et al. | Feb 1989 | A |
4817149 | Myers | Mar 1989 | A |
4817479 | Myers | Apr 1989 | A |
4819269 | Klayman | Apr 1989 | A |
4831652 | Anderson et al. | May 1989 | A |
4836329 | Klayman | Jun 1989 | A |
4837824 | Orban | Jun 1989 | A |
4841572 | Klayman | Jun 1989 | A |
4856064 | Iwamatsu | Aug 1989 | A |
4866774 | Klayman | Sep 1989 | A |
4866776 | Kasai et al. | Sep 1989 | A |
4888809 | Knibbeler | Dec 1989 | A |
4891560 | Okumura et al. | Jan 1990 | A |
4891841 | Bohn | Jan 1990 | A |
4893342 | Cooper | Jan 1990 | A |
4910779 | Cooper et al. | Mar 1990 | A |
4953213 | Tasaki et al. | Aug 1990 | A |
4955058 | Rimkeit et al. | Sep 1990 | A |
5018205 | Takagi et al. | May 1991 | A |
5033092 | Sadaie | Jul 1991 | A |
5042068 | Scholten et al. | Aug 1991 | A |
5046097 | Lowe et al. | Sep 1991 | A |
5067157 | Ishida et al. | Nov 1991 | A |
5105462 | Lowe et al. | Apr 1992 | A |
5124668 | Christian | Jun 1992 | A |
5146507 | Satoh et al. | Sep 1992 | A |
5172415 | Fosgate | Dec 1992 | A |
5177329 | Klayman | Jan 1993 | A |
5180990 | Ohkuma | Jan 1993 | A |
5208493 | Lendaro et al. | May 1993 | A |
5208860 | Lowe et al. | May 1993 | A |
5228085 | Aylward | Jul 1993 | A |
5251260 | Gates | Oct 1993 | A |
5255326 | Stevenson | Oct 1993 | A |
5319713 | Waller, Jr. et al. | Jun 1994 | A |
5325435 | Date et al. | Jun 1994 | A |
5333201 | Waller, Jr. | Jul 1994 | A |
5359665 | Werrbach | Oct 1994 | A |
5371799 | Lowe et al. | Dec 1994 | A |
5377272 | Albean | Dec 1994 | A |
5386082 | Higashi | Jan 1995 | A |
5390364 | Webster et al. | Feb 1995 | A |
5400405 | Petroff | Mar 1995 | A |
5412731 | Desper | May 1995 | A |
5420929 | Geddes et al. | May 1995 | A |
5452364 | Bonham | Sep 1995 | A |
5459813 | Klayman | Oct 1995 | A |
5533129 | Gefvert | Jul 1996 | A |
5596931 | Rossler et al. | Jan 1997 | A |
5610986 | Miles | Mar 1997 | A |
5638452 | Waller et al. | Jun 1997 | A |
5661808 | Klayman | Aug 1997 | A |
5668885 | Oda | Sep 1997 | A |
5771295 | Waller, Jr. | Jun 1998 | A |
5771296 | Unemura | Jun 1998 | A |
5784468 | Klayman | Jul 1998 | A |
5822438 | Sekine et al. | Oct 1998 | A |
5832438 | Bauer | Nov 1998 | A |
5841879 | Scofield et al. | Nov 1998 | A |
5850453 | Klayman et al. | Dec 1998 | A |
5862228 | Davis | Jan 1999 | A |
5872851 | Petroff | Feb 1999 | A |
5892830 | Klayman | Apr 1999 | A |
5912976 | Klayman | Jun 1999 | A |
5930370 | Ruzicka | Jul 1999 | A |
5930375 | East et al. | Jul 1999 | A |
5999630 | Iwamatsu | Dec 1999 | A |
6134330 | De Poortere et al. | Oct 2000 | A |
6175631 | Davis et al. | Jan 2001 | B1 |
6281749 | Klayman et al. | Aug 2001 | B1 |
6285767 | Klayman | Sep 2001 | B1 |
6430301 | Petrovic | Aug 2002 | B1 |
6470087 | Heo et al. | Oct 2002 | B1 |
6504933 | Chung | Jan 2003 | B1 |
6522265 | Hillman et al. | Feb 2003 | B1 |
6590983 | Kraemer | Jul 2003 | B1 |
6597791 | Klayman | Jul 2003 | B1 |
6614914 | Rhoads et al. | Sep 2003 | B1 |
6647389 | Fitch et al. | Nov 2003 | B1 |
6694027 | Schneider | Feb 2004 | B1 |
6718039 | Klayman et al. | Apr 2004 | B1 |
6737957 | Petrovic et al. | May 2004 | B1 |
6766305 | Fucarile et al. | Jul 2004 | B1 |
7031474 | Yuen et al. | Apr 2006 | B1 |
7043031 | Klayman et al. | May 2006 | B2 |
7200236 | Klayman et al. | Apr 2007 | B1 |
7212872 | Smith et al. | May 2007 | B1 |
7277767 | Yuen et al. | Oct 2007 | B2 |
7451093 | Kraemer | Nov 2008 | B2 |
7457415 | Reitmeier et al. | Nov 2008 | B2 |
7467021 | Yuen et al. | Dec 2008 | B2 |
7492907 | Klayman et al. | Feb 2009 | B2 |
7522733 | Kraemer et al. | Apr 2009 | B2 |
7555130 | Klayman et al. | Jun 2009 | B2 |
7606716 | Kraemer | Oct 2009 | B2 |
7720240 | Wang | May 2010 | B2 |
7801734 | Kraemer | Sep 2010 | B2 |
7907736 | Yuen et al. | Mar 2011 | B2 |
7987281 | Yuen et al. | Jul 2011 | B2 |
8046093 | Yuen et al. | Oct 2011 | B2 |
8050434 | Kato et al. | Nov 2011 | B1 |
8396575 | Kraemer et al. | Mar 2013 | B2 |
8396576 | Kraemer et al. | Mar 2013 | B2 |
8396577 | Kraemer et al. | Mar 2013 | B2 |
8472631 | Klayman et al. | Jun 2013 | B2 |
8509464 | Kato et al. | Aug 2013 | B1 |
20010012370 | Klayman et al. | Aug 2001 | A1 |
20010020193 | Teramachi et al. | Sep 2001 | A1 |
20020129151 | Yuen et al. | Sep 2002 | A1 |
20020157005 | Brunk et al. | Oct 2002 | A1 |
20030115282 | Rose | Jun 2003 | A1 |
20040005066 | Kraemer | Jan 2004 | A1 |
20040136554 | Kirkeby | Jul 2004 | A1 |
20040247132 | Klayman et al. | Dec 2004 | A1 |
20050071028 | Yuen et al. | Mar 2005 | A1 |
20050129248 | Kraemer et al. | Jun 2005 | A1 |
20050246179 | Kraemer | Nov 2005 | A1 |
20060062395 | Klayman et al. | Mar 2006 | A1 |
20060126851 | Yuen et al. | Jun 2006 | A1 |
20060206618 | Zimmer et al. | Sep 2006 | A1 |
20060215848 | Ambourn | Sep 2006 | A1 |
20070147638 | Moon | Jun 2007 | A1 |
20070165868 | Klayman et al. | Jul 2007 | A1 |
20070250194 | Rhoads et al. | Oct 2007 | A1 |
20080015867 | Kraemer | Jan 2008 | A1 |
20080022009 | Yuen et al. | Jan 2008 | A1 |
20090094519 | Yuen et al. | Apr 2009 | A1 |
20090132259 | Kraemer | May 2009 | A1 |
20090190766 | Klayman et al. | Jul 2009 | A1 |
20090252356 | Goodwin | Oct 2009 | A1 |
20100303246 | Walsh et al. | Dec 2010 | A1 |
20110040395 | Kraemer et al. | Feb 2011 | A1 |
20110040396 | Kraemer et al. | Feb 2011 | A1 |
20110040397 | Kraemer et al. | Feb 2011 | A1 |
20110274279 | Yuen et al. | Nov 2011 | A1 |
20110286602 | Yuen et al. | Nov 2011 | A1 |
20120170756 | Kraemer et al. | Jul 2012 | A1 |
20120170757 | Kraemer et al. | Jul 2012 | A1 |
20120170759 | Yuen et al. | Jul 2012 | A1 |
20120230497 | Dressler et al. | Sep 2012 | A1 |
20120232910 | Dressler et al. | Sep 2012 | A1 |
20130202117 | Brungart | Aug 2013 | A1 |
20130202129 | Kraemer et al. | Aug 2013 | A1 |
20140044288 | Kato et al. | Feb 2014 | A1 |
Foreign Patent Documents

Number | Date | Country |
---|---|---|
3331352 | Mar 1985 | DE |
0729287 | Dec 1983 | EP |
0546619 | Jun 1993 | EP |
0095902 | Aug 1996 | EP |
0756437 | Mar 2006 | EP |
S58146200 | Aug 1983 | JP |
H05300596 | Nov 1993 | JP |
09224300 | Aug 1997 | JP |
40-29936 | Jan 2008 | JP |
4-312585 | Aug 2009 | JP |
WO 9634509 | Apr 1996 | WO |
WO 9742789 | Nov 1997 | WO |
WO 9820709 | May 1998 | WO |
WO 9821915 | May 1998 | WO |
WO 9846044 | Oct 1998 | WO |
WO 9926454 | May 1999 | WO |
WO 0161987 | Aug 2001 | WO |
Other Publications

Entry |
---|
Allison, R., “The Loudspeaker/Living Room System,” Audio, pp. 18-22, Nov. 1971. |
Boney L. et al., “Digital Watermarks for Audio Signals,” Proceedings of the International Conference on Multimedia Computing and Systems, Los Alamitos, CA, US; Jun. 17, 1996, pp. 473-480. |
Davies, Jeff and Bohn, Dennis, “Squeeze Me, Stretch Me: The DC 24 Users Guide,” Rane Note 130 [online], Rane Corporation, 1993 [retrieved Apr. 26, 2005]. Retrieved from the Internet: http://www.rane.com/pdf/note130.pdf, pp. 2-3. |
Eargle, J., “Multichannel Stereo Matrix Systems: An Overview,” Journal of the Audio Engineering Society, pp. 552-558 (no date listed). |
Gilman, “Some Factors Affecting the Performance of Airline Entertainment Headsets”, J. Audio Eng. Soc., vol. 31, No. 12, Dec. 1983. |
Ishihara, M., “A new Analog Signal Processor for a Stereo Enhancement System,” IEEE Transactions on Consumer Electronics, vol. 37, No. 4, pp. 806-813, Nov. 1991. |
Japanese Office Action Final Notice of Rejection issued in application No. 2001-528430 dated Feb. 2, 2010. |
Kauffman, Richard J., “Frequency Contouring for Image Enhancement,” Audio, pp. 34-39, Feb. 1985. |
Kurozumi, K., et al., “A New Sound Image Broadening Control System Using a Correlation Coefficient Variation Method,” Electronics and Communications in Japan, vol. 67-A, No. 3, pp. 204-211, Mar. 1984. |
PCT International Search Report and Preliminary Examination Report; International Application No. PCT/US00/27323 dated Jul. 11, 2001. |
Philips Components, “Integrated Circuits Data Handbook: Radio, audio and associated systems, Bipolar, MOS, CA3089 to TDA1510A,” Oct. 7, 1987, pp. 103-110. |
Schroeder, M.R., “An Artificial Stereophonic Effect Obtained from a Single Audio Signal,” Journal of the Audio Engineering Society, vol. 6, No. 2, pp. 74-79, Apr. 1958. |
Stevens, S., et al., “Chapter 5: The Two-Eared Man,” Sound and Hearing, pp. 98-106 and 196, 1965. |
Stock, “The New Featherweight Headphones”, Audio, pp. 30-32, May 1981. |
Sundberg, J., “The Acoustics of the Singing Voice,” The Physics of Music, pp. 16-23, 1978. |
Vaughan, D., “How We Hear Direction,” Audio, pp. 51-55, Dec. 1983. |
Wilson, Kim, “AC-3 Is Here! But Are You Ready to Pay the Price?” Home Theater, pp. 60-65, Jun. 1995. |
Linkwitz, “Reference Earphones”, Linkwitz Lab—Sensible Reproduction and Recording of Auditory Scenes, http://web.archive.org/web/20120118185312/http://www.linkwitzlab.com/reference_earphones.htm (1999-2011). |
International Search Report and Written Opinion issued in application No. PCT/US2014/039115 dated Oct. 10, 2014. |
Prior Publication Data

Number | Date | Country | |
---|---|---|---|
20180213327 A1 | Jul 2018 | US |
Provisional Applications

Number | Date | Country | |
---|---|---|---|
61826679 | May 2013 | US |
Continuations

Number | Date | Country | |
---|---|---|---|
Parent | 14992860 | Jan 2016 | US |
Child | 15848965 | US | |
Parent | 14284832 | May 2014 | US |
Child | 14992860 | US |