1. Field
The present disclosure generally relates to audio signal processing.
2. Description of the Related Art
Sound signals can be processed to provide enhanced listening effects. For example, various processing techniques can make a sound source be perceived as being positioned or moving relative to a listener. Such techniques allow the listener to enjoy a simulated three-dimensional listening experience even when using speakers having limited configuration and performance.
However, many sound perception enhancing techniques are complicated and often require substantial computing power and resources. Thus, use of these techniques is impractical in many electronic devices having limited computing power and resources. Many portable devices, such as cell phones, PDAs, MP3 players, and the like, generally fall into this category.
At least some of the foregoing problems can be addressed by various embodiments of systems and methods for audio signal processing as disclosed herein.
In one embodiment, a discrete number of simple digital filters can be generated for particular portions of an audio frequency range. Studies have shown that certain frequency ranges are particularly important for human ears' location-discriminating capability, while other ranges are generally ignored. Head-Related Transfer Functions (HRTFs) are examples of response functions that characterize how ears perceive sound positioned at different locations. By selecting one or more “location-relevant” portions of such response functions, one can construct relatively simple filters that can be used to simulate hearing where location-discriminating capability is substantially maintained. Because the complexity of the filters can be reduced, they can be implemented in devices having limited computing power and resources to provide location-discrimination responses that form the basis for many desirable audio effects.
One embodiment of the present disclosure relates to a method for processing audio signals for a set of headphones, which includes receiving a plurality of audio signal inputs, each audio signal input including information about a spatial position of a sound source relative to a listener, mixing two or more of the audio signal inputs to produce a plurality of mixed audio signals, providing each of the mixed audio signals to a plurality of positional filters, each including a head-related transfer function that provides a simulated hearing response, passing each of the audio signal inputs as unmixed audio signals to one or more of the plurality of positional filters, wherein the mixed and unmixed audio signals are arranged such that each audio signal input is provided in mixed and unmixed form to two or more of the positional filters, applying the positional filters to the mixed audio signals and to the unmixed audio signals to create a plurality of left channel filtered signals and a plurality of right channel filtered signals, and downmixing the plurality of left channel filtered signals into a left audio output channel and downmixing the plurality of right channel filtered signals into a right audio output channel, such that the spatial positions of the plurality of sound sources are perceptible from the left and right output channels of a set of headphones.
In another embodiment, a method for processing audio signals includes receiving multiple audio signals including information about spatial positions of sound sources relative to a listener, applying at least one audio filter to each audio signal so as to yield two corresponding filtered signals for each audio signal, and mixing the filtered signals to create a left audio output and a right audio output, wherein the spatial positions of the sound sources are perceptible from the right and left output channels.
Various embodiments of the disclosure contemplate an apparatus for processing audio signals including multiple audio signal inputs, each including information about the spatial position of a sound source relative to a listener, a plurality of positional filters, wherein each audio signal input is provided to two or more of the positional filters to create at least one right channel filtered signal and at least one left channel filtered signal for each audio signal, and a downmixer that downmixes the right channel filtered signals into a right audio output channel and that downmixes the left channel filtered signals into a left audio output channel, such that the spatial positions of the plurality of sound sources are perceptible from the right and left output channels.
Moreover, in another embodiment an apparatus for processing audio signals includes means for receiving an audio signal including information about spatial position of a sound source relative to a listener, means for selecting at least one audio filter including a head-related transfer function that provides a simulated hearing response, means for applying the at least one audio filter to the audio signal so as to yield two corresponding filtered signals, each of the filtered signals having a simulated effect of the head-related transfer function applied to the sound source, and means for providing one of the filtered signals to a left audio channel and the other filtered signal to a right audio channel, such that the spatial position of the sound source is perceptible from each channel.
These and other aspects, advantages, and novel features of the present teachings will become apparent upon reading the following detailed description and upon reference to the accompanying drawings. In the drawings, similar elements have similar reference numerals.
The present disclosure generally relates to audio signal processing technology. In some embodiments, various features and techniques of the present disclosure can be implemented on audio or audio/visual devices. As described herein, various features of the present disclosure allow efficient processing of sound signals, so that in some applications, realistic positional sound imaging can be achieved even with reduced signal processing resources. As such, in some embodiments, sound having realistic impact on the listener can be output by portable devices such as handheld devices where computing power may be limited. It will be understood that various features and concepts disclosed herein are not limited to implementations in portable devices, but can be implemented in a wide variety of electronic devices that process sound signals.
In some embodiments, such audio perception combined with corresponding visual perception (from a screen, for example) can provide an effective and powerful sensory effect to the listener. Thus, for example, a surround-sound effect can be created for a listener listening to a handheld device through headphones, speakers, or the like. Various embodiments and features of the positional audio engine 104 are described below in greater detail.
Other configurations are possible. For example, various concepts and features of the present disclosure can be implemented for processing of signals in analog systems. In such systems, analog equivalents of various filters in the positional audio engine 130 can be configured based on location-relevant information in a manner similar to the various techniques described herein. Thus, it will be understood that various concepts and features of the present disclosure are not limited to digital systems.
The decoder 142 is a component that decodes a relatively smaller number of audio channel inputs 141 to provide a relatively larger number of audio channel outputs 143. In the example embodiment, the decoder 142 receives left and right audio channel inputs 141 and provides six audio channel outputs 143 to the positional audio engine 130. The audio channel outputs 143 may correspond to surround sound channels. The audio channel inputs 141 can include, for example, a Circle Surround 5.1 encoded source, a Dolby Surround encoded source, a conventional two-channel stereo source (encoded as raw audio, MP3 audio, RealAudio, WMA audio, etc.), and/or a single-channel monaural source.
In one embodiment, the decoder 142 is a decoder for Circle Surround 5.1. Circle Surround 5.1 (CS 5.1) technology, as disclosed in U.S. Pat. No. 5,771,295 (the '295 patent), titled “5-2-5 MATRIX SYSTEM,” which is hereby incorporated by reference in its entirety, is adaptable for use as a multi-channel audio delivery technology. CS 5.1 enables the matrix encoding of 5.1 high-quality channels on two channels of audio. These two channels can then be efficiently transmitted to the decoder 142 using any of the popular compression schemes available (MP3, RealAudio, WMA, etc.), or alternatively, without using a compression scheme. The decoder 142 may be used to decode a full multi-channel audio output from the two channels, which in one embodiment are streamed over the Internet. The CS 5.1 system is referred to as a 5-2-5 system in the '295 patent because five channels are encoded into two channels, and the two channels are then decoded back into five channels. The “5.1” designation, as used in “CS 5.1,” typically refers to the five channels (e.g., left, right, center, left-rear (also known as left-surround), and right-rear (also known as right-surround)) and an optional subwoofer channel derived from the five channels.
Although the '295 patent describes the CS 5.1 system using hardware terminology and diagrams, one of ordinary skill in the art will recognize that a hardware-oriented description of signal processing systems, even signal processing systems intended to be implemented in software, is common in the art, convenient, and efficiently provides a clear disclosure of the signal processing algorithms. One of ordinary skill in the art will recognize that the CS 5.1 system described in the '295 patent can be implemented in software by using digital signal processing algorithms that mimic the operation of the described hardware.
Use of CS 5.1 technology to encode multi-channel audio signals creates a backwardly compatible, fully upgradeable audio delivery system. For example, because a decoder 142 implemented as a CS 5.1 decoder can create a multi-channel output from any audio source, the original format of the audio source can include a wide variety of encoded and non-encoded source formats including Dolby Surround, conventional stereo, or a monaural source. When CS 5.1 technology is used to stream audio signals over the Internet, CS 5.1 creates a seamless architecture for both the website developer performing Internet audio streaming and the listener receiving the audio signals over the Internet. If the website developer wants an even higher quality audio experience at the client side, the audio source can first be encoded with CS 5.1 prior to streaming. The CS 5.1 decoding system can then generate 5.1 channels of full bandwidth audio providing an optimal audio experience.
The surround channels that are derived from the CS 5.1 decoder are of higher quality as compared to other available systems. While the bandwidth of the surround channels in a Dolby ProLogic system is limited to 7 kHz monaural, CS 5.1 provides stereo surround channels that are limited only by the bandwidth of the transmission media.
The channel decoders 144, 146, and 148 are various implementations of surround-sound decoders that provide multiple channels of sound. For example, the channel decoder 144 provides 5.1 surround sound channels. The “5” in 5.1 typically refers to left, right, center, left surround, and right surround channels. The “1” in 5.1 typically refers to a subwoofer. Accordingly, the 5.1 channel decoder 144 provides six inputs to the positional audio engine 130. Similarly, the 6.1 channel decoder 146 provides seven channels to the positional audio engine 130, adding a center surround channel. In place of the center surround channel, the 7.1 channel decoder 148 adds left back and right back channels, thereby providing eight channels to the positional audio engine 130. More or fewer channels than shown in the depicted embodiments, including for example 3.0, 4.0, 4.1, 10.2, or 22.2 channels, may be provided to the positional audio engine 130.
The positional audio engine 130 provides two outputs 150, which correspond to left and right headphone speakers. However, the sounds transmitted to the speakers are perceived by the listener as coming from virtual speaker locations corresponding to the number of input channels to the positional audio engine 130. In many implementations, the sound location of the subwoofer is indiscernible to the human ear. Thus, for example, if the 5.1 channel decoder is used to provide inputs to the positional audio engine 130, a listener will perceive up to 5 sound sources at substantially fixed locations relative to the listener.
The inputs 180 are provided to a premixer 182 within the positional audio engine 130. The premixer 182 may be implemented in hardware or software to include summation blocks, gain blocks, and delay blocks. The premixer 182 mixes one or more of the inputs 180 and provides mixed inputs 184 to one or more positional filters 186. In an alternative embodiment, the premixer 182 passes certain inputs 180, in unmixed form, directly to one or more of the positional filters 186. In still other embodiments, certain of the inputs 180 are passed through the premixer 182 and other inputs 180 bypass the premixer 182 and are provided directly to the positional filters 186. A more detailed example of a premixer is described below.
The depicted positional filters 186 are components that perform signal processing functions. The positional filters 186 of various embodiments filter the premixed outputs 184 to provide sounds that are perceived by the listener as coming from virtual speaker locations corresponding to the number of inputs 180.
The positional filters 186 may be implemented in various ways. For instance, the positional filters 186 may comprise analog or digital circuitry, software, firmware, or the like. The positional filters 186 may also be passive or active, discrete-time (e.g., sampled) or continuous time, linear or non-linear, infinite impulse-response (IIR) or finite impulse-response (FIR), or some combination of the above. Additionally, the positional filters 186 may have a transfer function implemented in a variety of ways. For example, the positional filter 186 may be implemented as a Butterworth filter, Chebyshev filter, Bessel filter, elliptical filter, or as another type of filter.
The positional filters 186 may be formed from a combination of two, three, or more filters, examples of which are described below. In addition, the number of positional filters 186 included in the positional audio engine 130 may be varied to filter a different number of premixed outputs 184. Alternatively, the positional audio engine 130 includes a set number of positional filters 186 that filter a varying number of premixed outputs 184.
In one embodiment, the positional filter 186 is a head-related transfer function (HRTF) configured based on location-relevant information, such as an HRTF described in U.S. patent application Ser. No. 11/531,624, titled “Systems and Methods for Audio Processing,” which is hereby incorporated by reference in its entirety. For the purpose of description, “location-relevant” means a portion of the human hearing response spectrum (for example, a frequency response spectrum) where sound source location discrimination is found to be particularly acute. An HRTF is an example of a human hearing response spectrum. Studies (for example, “A comparison of spectral correlation and local feature-matching models of pinna cue processing” by E. A. Macpherson, Journal of the Acoustical Society of America, 101, 3105, 1997) have shown that human listeners generally do not process the entire HRTF information to distinguish where sound is coming from. Instead, they appear to focus on certain features in HRTFs. For example, local feature matches and gradient correlations in frequencies over 4 kHz appear to be particularly important for sound direction discrimination, while other portions of HRTFs are generally ignored.
The positional filters 186 of various embodiments are linear filters. Linearity provides that the filtered sum of the inputs is equivalent to the sum of the filtered inputs. Accordingly, in one implementation the premixer 182 is not included in the positional audio engine 130. Rather, the outputs of one or more positional filters 186 are combined to achieve the same or substantially the same result as the premixer 182. In other embodiments, the premixer 182 may also be included in addition to combining the outputs of the positional filters 186.
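The equivalence described above can be sketched as follows. The small FIR filter below is a hypothetical stand-in for a positional filter 186, not a filter from the disclosure; it simply demonstrates that premixing inputs before a linear filter matches filtering each input and summing afterward.

```python
# Hypothetical sketch: a small linear FIR filter standing in for a
# positional filter 186. Because the filter is linear, premixing inputs
# before filtering equals filtering each input and summing afterward.

def fir_filter(x, coeffs=(0.5, 0.3, 0.2)):
    """Apply a small linear FIR filter to a list of samples."""
    out = []
    for n in range(len(x)):
        acc = 0.0
        for k, c in enumerate(coeffs):
            if n - k >= 0:
                acc += c * x[n - k]
        out.append(acc)
    return out

center = [1.0, 0.0, -1.0, 0.5]
left = [0.2, 0.4, 0.1, -0.3]

# Path 1: premix first (premixer 182 present), then filter.
path1 = fir_filter([c + l for c, l in zip(center, left)])

# Path 2: filter each input, then combine (premixer removed).
path2 = [a + b for a, b in zip(fir_filter(center), fir_filter(left))]

# The two paths agree to within rounding error.
assert all(abs(a - b) < 1e-12 for a, b in zip(path1, path2))
```

This linearity is what permits the premixer to be omitted without changing the downmixed result.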
The positional filters 186 provide filtered outputs to a downmixer 188. Like the premixer 182, the downmixer 188 includes one or more summation blocks, gain blocks, or both. In addition, the downmixer 188 may include delay blocks and reverb blocks. The downmixer 188 may be implemented in analog or digital hardware or software. In various embodiments, the downmixer 188 combines the filtered outputs into two output signals 190. In alternative embodiments, the downmixer 188 provides fewer or more output signals 190.
For the example surround-sound configuration 200, the positional filtering can be configured to process five sound sources (for example, from five channels of a 5.1 surround decoder). Information about the location of the sound sources (for example, which of the five virtual speakers 210) is provided in some embodiments by the positional filters 186.
In one particular implementation, two positional filters are employed for each input 180. Consequently, in this implementation, two positional filters are used for each virtual speaker 210. In one embodiment, one of the two positional filters corresponds to a sound perceived by the left ear, and the other corresponds to a sound perceived by the right ear.
The inputs 304 are provided to an input gain bank 306. In the depicted embodiment, the input gain bank 306 attenuates the inputs 304 by 6 dB (decibels). Attenuating the inputs 304 provides added headroom, which is a higher possible signal level without compression or distortion, for later signal processing. The input gain bank 306 provides a left output 314, a center output 316, a right output 318, a subwoofer output 320, a left surround output 322, and a right surround output 324.
A premixer 308 receives the outputs from the input gain bank 306. The premixer 308 includes summers 310, 312. In the depicted embodiment, the premixer 308 combines the center output 316 with the left output 314 through summer 310 to produce a left center output 326. Likewise, the premixer 308 combines the center output 316 with the right output 318 through summer 312 to produce a right center output 328. Advantageously, by premixing the center output 316 with the left and right outputs 314, 318, the premixer 308 blends the left, center, and right sounds. As a result, these sounds may be more accurately perceived as coming from a virtual left, center, or right speaker, respectively, without additional processing on the center channel. However, in the depicted embodiment, the premixer 308 does not mix the subwoofer, left surround, and right surround outputs 320, 322, 324. Alternatively, the premixer 308 performs some mixing on one or more of these outputs 320, 322, 324.
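The premixing stage above can be sketched with simple per-sample addition; the sample values below are invented for illustration, and real signals would be streams of PCM audio.

```python
# Illustrative sketch of summers 310 and 312, assuming simple per-sample
# addition. The sample values are invented; this is not the disclosed
# implementation, only the summation it describes.

def premix(left, center, right):
    """Blend the center channel into the left and right channels."""
    left_center = [l + c for l, c in zip(left, center)]    # summer 310 -> output 326
    right_center = [r + c for r, c in zip(right, center)]  # summer 312 -> output 328
    return left_center, right_center

left_out = [0.1, 0.2]
center_out = [0.5, 0.5]
right_out = [-0.1, 0.0]
lc, rc = premix(left_out, center_out, right_out)
```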
The premixer 308 provides at least some of the outputs to one or more positional filters 330. Specifically, the left center output 326 is provided to a front left positional filter 332, and the left output 314 is provided to a front right positional filter 334. The right output 318 is provided to a front left positional filter 336, and the right center output 328 is provided to a front right positional filter 338. Likewise, the left surround output 322 is provided to both a rear left positional filter 340 and a rear right positional filter 342, and the right surround output 324 is provided to both a rear left positional filter 344 and a rear right positional filter 346. In contrast, the subwoofer output 320 is not provided to a positional filter 330 in the depicted embodiments; however, the subwoofer output 320 may be provided to a positional filter 330 in an alternative implementation.
The positional filters 330 may be combined in pairs to simulate virtual speaker locations. Within a pair of positional filters 330, one positional filter 330 represents the virtual speaker location heard at a listener's left ear, and the other positional filter 330 represents the virtual speaker location heard at the right ear. Because a real speaker is ordinarily heard by both ears, certain embodiments of this pairing mechanism enhance the realism of the simulated virtual speaker locations.
Turning to the specific positional filter 330 pairs, the front left positional filter 332 and the front right positional filter 334 correspond to a virtual front left speaker. The front left positional filter 336 and the front right positional filter 338 correspond to a virtual front right speaker. The front left positional filters 332, 336 correspond to left channels of the virtual front speakers, and the front right positional filters 334, 338 correspond to right channels of the virtual front speakers. Similarly, the rear left positional filter 340 and the rear right positional filter 342 correspond to a left surround virtual speaker, and the rear left positional filter 344 and the rear right positional filter 346 correspond to a right surround virtual speaker. The rear left positional filters 340, 344 and the rear right positional filters 342, 346 correspond to left and right channels of the virtual left and right surround speaker locations, respectively.
The center output 316 is mixed with the left and right outputs 314, 318, such that the front left positional filter 332 and the front right positional filter 338 correspond to left and right channels from a virtual center speaker. As a result, the front left and front right positional filters 332, 338 are used to generate multiple pairs of virtual speaker locations. Consequently, rather than using ten positional filters 330 to represent five virtual speakers, the positional audio engine 300 employs eight positional filters 330. Separate positional filters 330 may be used for the center virtual speaker location in an alternative embodiment.
Outputs 350 of the positional filters 330 are provided to a downmixer 360. The downmixer 360 includes gain blocks 362, 363, 368, 370, summers 364, 366, 372, and reverberation components 374. The various components of the downmixer 360 mix the filtered outputs 350 down to two outputs, including a left channel output 380 and a right channel output 382.
The outputs 350 pass through gain blocks 362. The gain blocks 362 adjust the left and right channels separately to account for any interaural intensity differences (IID) that may exist and that are not accounted for by the application of one or more of the positional filters 330. In one embodiment, the various gain blocks 362 may have different values so as to compensate for IID. This adjustment to account for IID includes determining whether the sound source is positioned at left or right speaker locations relative to the listener. The adjustment further includes assigning as a weaker signal the left or right filtered signal that is on the opposite side from the sound source.
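One possible reading of this IID adjustment is sketched below with an assumed 6 dB intensity difference; the disclosure does not specify the gain values, so both the function and the numbers are illustrative only.

```python
# Sketch of one possible IID gain adjustment: the filtered signal at the ear
# opposite the virtual source is made weaker. The 6 dB difference is an
# assumed example value, not a value from the disclosure.

def apply_iid(near_ear, far_ear, iid_db=6.0):
    """Attenuate the far-ear signal by iid_db decibels relative to the near ear."""
    far_gain = 10.0 ** (-iid_db / 20.0)  # about 0.501 for 6 dB
    return list(near_ear), [s * far_gain for s in far_ear]

# A source at the virtual left speaker: the right (far) ear gets the weaker signal.
left_sig, right_sig = apply_iid([1.0, 0.8], [1.0, 0.8])
```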
Various gain blocks 362 provide outputs to the summers 364. Summer 364a combines the gained outputs of the front left positional filters 332, 336 to create a left channel output from each virtual front speaker. Summer 364b likewise combines the gained outputs of the front right positional filters 334, 338 to create a right channel output from each virtual front speaker. Summers 364c and 364d similarly combine the gained positional filter outputs corresponding to left and right outputs from the left surround and right surround virtual speakers, respectively.
Summer 366a combines the gained outputs of the front left positional filters 332, 336 with the gained outputs of the left surround positional filters 340, 344 to create a left channel signal 367a. Summer 366b combines the gained outputs of the front right positional filters 334, 338 with the gained outputs of the right surround positional filters 342, 346 to create a right channel signal 367b.
The left and right channel signals 367a, 367b are processed further by reverberation components 374 to provide a reverberation effect in the output signals. The reverberation components 374 are used in various implementations to enhance the effect of moving the sound image out of the head and also to further spatialize the sound images in 3-D space. The left and right channel signals 367a, 367b are multiplied by gain blocks 370a, 370b having a value 1−G1. In parallel, the reverberated left and right channel signals are multiplied by gain blocks 368a, 368b having a value G1. Thereafter, the outputs of the gain blocks 368a, 368b and the gain blocks 370a, 370b are combined at summers 372a, 372b to produce a left channel output 380 and a right channel output 382.
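A minimal sketch of a parallel wet/dry mix of this kind follows, assuming the reverberated path is weighted by G1 and the unprocessed path by 1−G1; that weighting assignment, the value of G1, and the sample values are assumptions for illustration, not details from the disclosure.

```python
# Sketch of the parallel gain structure: dry path scaled by 1 - G1,
# reverberated path scaled by G1, then summed. All values are assumed.

def mix_wet_dry(dry, wet, g1=0.3):
    """Crossfade a dry signal with its reverberated copy using gain G1."""
    scaled_dry = [(1.0 - g1) * s for s in dry]  # gain blocks 370a, 370b
    scaled_wet = [g1 * s for s in wet]          # gain blocks 368a, 368b
    return [d + w for d, w in zip(scaled_dry, scaled_wet)]  # summers 372a, 372b

dry = [1.0, 0.5, -0.25]   # e.g., a channel signal 367a
wet = [0.0, 0.6, 0.3]     # e.g., output of reverberation components 374
out = mix_wet_dry(dry, wet)
```

With this structure, setting G1 to zero passes the dry signal through unchanged, while raising G1 increases the reverberant contribution without raising the overall level.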
Thus, the positional audio engine 300 of various embodiments receives multiple inputs corresponding to a surround-sound system and filters and combines the inputs to provide two channels of sound. The positional audio engine 300 of various embodiments therefore enhances the listening experience of headphones or other two-speaker listening devices.
The premixer 408 in one embodiment is similar to the premixer 308 described above.
The premixer 408 combines the center surround output 410 with the left surround output 322 through summer 402 to produce a left surround center output 432. Likewise, the premixer 408 combines the center surround output 410 with the right surround output 324 through summer 404 to produce a right surround center output 434. Advantageously, by premixing the center surround output 410 with the left and right surround outputs 322, 324, the premixer 408 blends the left, center, and right surround sounds. As a result, these sounds may be more accurately perceived as coming from a virtual left, center, or right surround speaker, respectively, without additional processing on the center surround channel.
Turning to the positional filters 430, some or all of the positional filters 430 are the same or substantially the same as the positional filters 330 described above.
Consequently, rather than using twelve positional filters 430 to represent six virtual speakers, the positional audio engine 400 employs eight positional filters 430. Separate positional filters 430, however, may be used for the center and center surround virtual speaker location in alternative embodiments.
The various positional filters 430 provide filtered outputs 450 to the downmixer 460. The downmixer 460 in the depicted embodiment includes the same components as the downmixer 360 described above.
The premixer 508 in one embodiment is similar to the premixer 308 described above.
The delay blocks 506 are components that provide delayed signals to the gain blocks 514. The delay blocks 506 receive output signals from the input gain bank 306. Specifically, the left surround output 322 is provided to the delay block 506a, the left back output 502 is provided to the delay block 506b, the right back output 504 is provided to the delay block 506d, and the right surround output 324 is provided to the delay block 506c. The various delay blocks 506 are used to simulate an interaural time difference (ITD) based on the spatial positions of the virtual speakers in 3D space relative to the listener.
The delay blocks 506 provide the delayed output signals 322, 324, 502, 504 to the gain blocks 514. Specifically, the left surround output 322 is provided to the gain block 514a, the left back output 502 is provided to the gain blocks 514b and 514c, the right back output 504 is provided to the gain blocks 514e and 514f, and the right surround output 324 is provided to the gain block 514d. The gain blocks 514 are used to adjust the IID from the virtual surround and back speakers, which are placed at different locations in 3D space.
Thereafter, the gain blocks 514 provide the gained output signals 322, 324, 502, 504 to the summers 520. Summer 520a mixes delayed left surround output 322 with delayed left back output 502. Summer 520b mixes the left surround output 322 with the left back output 502. Summer 520c mixes the right surround output 324 with the right back output 504. Finally, summer 520d mixes the delayed right surround output 324 with the delayed right back output 504.
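The delay-and-sum arrangement above might be sketched as follows. The two-sample delay is an arbitrary example value, since the actual ITD delays depend on the simulated speaker positions and the sample rate.

```python
# Sketch of a delay block 506 as a whole-sample delay; the two-sample delay
# is an arbitrary example, not a value from the disclosure.

def delay(signal, samples):
    """Delay a signal by a whole number of samples, zero-padding the front."""
    if samples <= 0:
        return list(signal)
    return [0.0] * samples + signal[:len(signal) - samples]

left_surround = [1.0, 0.0, 0.0, 0.0]
left_back = [0.0, 1.0, 0.0, 0.0]

delayed = delay(left_surround, 2)  # e.g., delay block 506a
# Summer 520a mixes the delayed pair; summer 520b mixes the undelayed pair.
mixed_delayed = [a + b for a, b in zip(delay(left_surround, 2), delay(left_back, 2))]
mixed_direct = [a + b for a, b in zip(left_surround, left_back)]
```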
The summers 520 provide the combined outputs to the positional filters 540, 542, 546, and 548. Some or all of the positional filters in the depicted embodiment are the same or substantially the same as the positional filters 330 described above.
Each of the four output signals 322, 324, 502, 504 is therefore provided to one of the four positional filters 540, 542, 546, 548 twice. As a result, these positional filters 540, 542, 546, 548 are used to generate multiple pairs of virtual speaker locations. Thus, rather than using fourteen positional filters 530 to represent seven virtual speakers, the positional audio engine 500 employs eight positional filters 530. Separate positional filters 530, however, may be used for the left back and right back virtual speaker locations in alternative embodiments.
The various positional filters 530 provide filtered outputs 550 to the downmixer 560. The downmixer 560 in the depicted embodiment includes the same components as the downmixer 360 described above.
Of the component filters 610 shown, there are three types, including band-stop filters, band-pass filters, and high pass filters. In addition, though not shown, in some embodiments low pass filters are employed. The characteristics of the component filters 610 may be varied to produce a desired positional filter 330, 430, or 530. These characteristics may include cutoff frequencies, bandwidth, amplitude, attenuation, phase, rolloff, Q factor, and the like. Moreover, the component filters 610 may be implemented as single-pole or multi-pole filters, according to a Fourier, Laplace, or Z-transform representation of the component filters 610.
More particularly, various implementations of a band-stop component filter 610 stop or attenuate certain frequencies and pass others. The width of the stopband, which attenuates certain frequencies, may be adjusted to deemphasize certain frequencies. Likewise, the passband may be adjusted to emphasize certain frequencies. Advantageously, the band-stop component filter 610 shapes sound frequencies such that a listener associates those frequencies with a virtual speaker location.
In a similar vein, various implementations of a band-pass component filter 610 pass certain frequencies and attenuate others. The width of the passband may be adjusted to emphasize certain frequencies, and the stopband may be adjusted to deemphasize certain frequencies. Thus, like the band-stop component filter 610, the band-pass component filter 610 shapes sound frequencies such that a listener associates those frequencies with a virtual speaker location.
Various implementations of a high pass or low pass component filter 610 also pass certain frequencies and attenuate others. The width of the passband of these filters may be adjusted to emphasize certain frequencies, and the stopband may be adjusted to deemphasize certain frequencies. High and low pass component filters 610 therefore also shape sound frequencies such that a listener associates those frequencies with a virtual speaker location.
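As an illustration of how such a component filter might be realized, the sketch below designs a single band-pass biquad section using the widely known Audio EQ Cookbook formulas rather than coefficients from the disclosure; the center frequency, Q, and sample rate are assumed example values. The check at the end confirms that the passband center is emphasized while frequencies far outside the passband are attenuated.

```python
import cmath
import math

# Sketch of a band-pass component filter 610 realized as one biquad section.
# Design equations follow the Audio EQ Cookbook (constant 0 dB peak gain);
# f0, Q, and fs below are assumed example values.

def bandpass_biquad(f0, q, fs):
    """Return normalized (b, a) coefficients for a 0 dB peak-gain band-pass."""
    w0 = 2.0 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2.0 * q)
    a0 = 1.0 + alpha
    b = [alpha / a0, 0.0, -alpha / a0]
    a = [1.0, -2.0 * math.cos(w0) / a0, (1.0 - alpha) / a0]
    return b, a

def magnitude(b, a, f, fs):
    """Evaluate |H(e^jw)| of a biquad at frequency f."""
    z = cmath.exp(-1j * 2.0 * math.pi * f / fs)
    num = b[0] + b[1] * z + b[2] * z * z
    den = a[0] + a[1] * z + a[2] * z * z
    return abs(num / den)

b, a = bandpass_biquad(f0=4000.0, q=2.0, fs=44100.0)
# Passband center is passed at unity gain; distant frequencies are attenuated.
assert abs(magnitude(b, a, 4000.0, 44100.0) - 1.0) < 1e-6
assert magnitude(b, a, 100.0, 44100.0) < 0.1
```

Band-stop, high-pass, and low-pass sections can be built the same way with the corresponding cookbook coefficient formulas, and cascaded to form a complete positional filter.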
Particular examples of the positional filters 330, 430, and 530 are depicted in the accompanying figures as combinations of the component filters 610.
The graphs are plotted on a logarithmic frequency scale 840 and an amplitude scale 850. While phase graphs are not shown, in one embodiment, each depicted graph has a corresponding phase graph. Different graphs may have different amplitude scales 850, reflecting that different filters may have different amplitudes, so as to emphasize certain components of sound and deemphasize others.
In the depicted embodiments, each graph shows a trace 810 having a passband 820 and a stopband 830. In some of the depicted graphs, the passband 820 and the stopband 830 are less well-defined, as the transition between passband 820 and stopband 830 is less apparent. By including a passband 820 and stopband 830, the traces 810 graphically illustrate how the component filters 610 emphasize certain frequencies and deemphasize others.
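A trace of this kind can be computed numerically by evaluating a filter's transfer function on the unit circle. The sketch below (an illustrative helper with a hypothetical name, not part of the disclosure) samples the magnitude response on a logarithmic frequency axis, matching the logarithmic scale 840 of the graphs:

```python
import cmath
import math

def magnitude_trace(b, a, fs, n_points=64):
    """Evaluate |H(e^{jw})| of a digital filter at log-spaced frequencies.

    b, a: feedforward/feedback coefficient lists (a[0] assumed to be 1).
    Returns a list of (frequency_hz, magnitude) pairs, the numeric
    equivalent of a trace with a passband and a stopband.
    """
    out = []
    for k in range(n_points):
        # log spacing from 20 Hz up to the Nyquist frequency fs/2
        f = 20.0 * (fs / 2.0 / 20.0) ** (k / (n_points - 1))
        z = cmath.exp(1j * 2.0 * math.pi * f / fs)
        num = sum(bi * z ** -i for i, bi in enumerate(b))
        den = sum(ai * z ** -i for i, ai in enumerate(a))
        out.append((f, abs(num / den)))
    return out
```

For example, the two-tap average `b = [0.5, 0.5]`, `a = [1.0]` yields a trace near 1 at low frequencies that falls toward 0 at the Nyquist frequency, a simple low-pass passband/stopband shape.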
Turning to more detailed examples, the graph 702 of
The graph 704 of
The graph 706 of
The graph 708 of
The graph 710 of
The graph 712 of
The graph 742 of
The graph 744 of
The graph 746 of
The graph 748 of
The graph 750 of
The graph 752 of
In the example embodiments shown, the component filters 610 are implemented with infinite impulse response (IIR) filters. IIR filters are recursive filters that sum weighted inputs and previous outputs. Because IIR filters reuse previous outputs, they can typically achieve a given frequency response with fewer computations per sample than convolution-based FIR filters. Thus, some implementations of IIR filters are able to process audio signals more easily on handheld devices, which often have less processing power than other devices.
An IIR filter may be represented by a difference equation, which defines how an input signal is related to an output signal. An example difference equation for a second-order IIR filter has the form:
y(n) = b0·x(n) + a1·y(n−1) + b1·x(n−1) + a2·y(n−2) + b2·x(n−2)   (1)
where x(n) is the input signal, y(n) is the output signal, the coefficients b0, b1, b2 are feedforward filter coefficients, and the coefficients a1, a2 are feedback filter coefficients.
In certain of the example positional audio engines described above, the input signal x(n) is the input to the component filter 610, and the output signal y(n) is the output of the component filter 610. Example filter coefficients 870 for the twelve example component filters 610 shown in
The filter coefficients 870 shown in the table 860 enable embodiments of the component filters 610, and in turn embodiments of the various positional filters 330, 430, 530, to simulate virtual speaker locations. The coefficients 870 may be varied to simulate different virtual speaker locations or to emphasize or deemphasize certain virtual speaker locations. Thus, the example component filters 610 provide an enhanced virtual listening experience.
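For concreteness, equation (1) can be applied sample by sample as in the following sketch. This is a generic direct-form implementation, not code from the disclosure, and it keeps equation (1)'s convention of adding, rather than subtracting, the feedback terms:

```python
def biquad_filter(x, b0, b1, b2, a1, a2):
    """Apply the second-order IIR difference equation (1):

        y(n) = b0*x(n) + b1*x(n-1) + b2*x(n-2) + a1*y(n-1) + a2*y(n-2)

    Note: the feedback terms are *added*, following the sign convention
    of equation (1); many references instead subtract them.
    """
    y = []
    x1 = x2 = y1 = y2 = 0.0  # delayed input/output samples, zero initial state
    for xn in x:
        yn = b0 * xn + b1 * x1 + b2 * x2 + a1 * y1 + a2 * y2
        y.append(yn)
        x2, x1 = x1, xn  # shift the input delay line
        y2, y1 = y1, yn  # shift the output delay line
    return y
```

For example, with b0 = 1 and a1 = 0.5 (all other coefficients zero), an impulse input produces the decaying output 1, 0.5, 0.25, 0.125, ..., illustrating the recursive feedback.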
In one embodiment, at least some portion of the 3D sound API 920 can reside in the program memory 916 of the system 910, and be under the control of a processor 914. In one embodiment, the system 910 can also include a display 912 component that can provide visual input to the listener. Visual cues provided by the display 912 and the sound processing provided by the API 920 can enhance the audio-visual effect to the listener/viewer.
As described herein, various features of positional filtering and associated processing techniques allow generation of realistic three-dimensional sound effects without heavy computation requirements. As such, various features of the present disclosure can be particularly useful for implementations in portable devices where computation power and resources may be limited.
Other implementations on portable as well as non-portable devices are possible.
In the description herein, various functionalities are described and depicted in terms of components or modules. Such depictions are for the purpose of description, and do not necessarily represent physical boundaries or packaging configurations. It will be understood that the functionalities of these components can be implemented in a single device or software module, in separate devices or software modules, or in any combination thereof. Moreover, for a given component such as the positional filters, its functionalities can be implemented in a single device or software module, in a plurality of devices or software modules, or in any combination thereof.
In general, it will be appreciated that the processors can include, by way of example, computers, program logic, or other substrate configurations representing data and instructions, which operate as described herein. In other embodiments, the processors can include controller circuitry, processor circuitry, processors, general purpose single-chip or multi-chip microprocessors, digital signal processors, embedded microprocessors, microcontrollers and the like.
Furthermore, it will be appreciated that in one embodiment, the program logic may advantageously be implemented as one or more components. The components may advantageously be configured to execute on one or more processors. The components include, but are not limited to, software or hardware components, modules such as software modules, object-oriented software components, class components and task components, processes, methods, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuitry, data, databases, data structures, tables, arrays, and variables.
Although the above-disclosed embodiments have shown, described, and pointed out the fundamental novel features of the invention as applied to the above-disclosed embodiments, it should be understood that various omissions, substitutions, and changes in the form and details of the devices, systems, and/or methods shown may be made by those skilled in the art without departing from the scope of the invention. Consequently, the scope of the invention should not be limited to the foregoing description, but should be defined by the appended claims.
This application is a continuation of U.S. application Ser. No. 11/696,128, filed Apr. 3, 2007, the disclosure of which is hereby incorporated by reference in its entirety. This application also claims the benefit of priority under 35 U.S.C. §119(e) of U.S. Provisional Application No. 60/788,614 filed on Apr. 3, 2006 and titled MULTI-CHANNEL AUDIO ENHANCEMENT SYSTEM, the disclosure of which is hereby incorporated by reference in its entirety.
Number | Date | Country | |
---|---|---|---|
20100226500 A1 | Sep 2010 | US |
Number | Date | Country | |
---|---|---|---|
60788614 | Apr 2006 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 11696128 | Apr 2007 | US |
Child | 12781741 | US |