This application relates to co-pending application “Methods and Apparatus for Providing a Distinct Perceptual Location for an Audio Source within an Audio Mixture” Ser. No. 11/946,365, co-filed with this application.
The present disclosure relates generally to audio processing. More specifically, the present disclosure relates to processing audio sources in an audio mixture.
The term audio processing may refer to the processing of audio signals. Audio signals are electrical signals that represent audio, i.e., sounds that are within the range of human hearing. Audio signals may be either digital or analog.
Many different types of devices may utilize audio processing techniques. Examples of such devices include music players, desktop and laptop computers, workstations, wireless communication devices, wireless mobile devices, radio telephones, direct two-way communication devices, satellite radio devices, intercom devices, radio broadcasting devices, on-board computers used in automobiles, watercraft and aircraft, and a wide variety of other devices.
Many devices, such as the ones just listed, may utilize audio processing techniques for the purpose of delivering audio to users. Users may listen to the audio through audio output devices, such as stereo headphones or speakers. Audio output devices may have multiple output channels. For example, a stereo output device (e.g., stereo headphones) may have two output channels, a left output channel and a right output channel.
Under some circumstances, multiple audio signals may be summed together. The result of this summation may be referred to as an audio mixture. The audio signals before the summation occurs may be referred to as audio sources. As mentioned above, the present disclosure relates generally to audio processing, and more specifically, to processing audio sources in an audio mixture.
A method for providing an interface to a processing engine that utilizes intelligent audio mixing techniques is disclosed. The method may include triggering by an event a request to change a perceptual location of an audio source within an audio mixture from a current perceptual location relative to a listener to a new perceptual location relative to the listener. The audio mixture may include at least two audio sources. The method may also include generating one or more control signals that are configured to cause the processing engine to change the perceptual location of the audio source from the current perceptual location to the new perceptual location via separate foreground processing and background processing. The method may also include providing the one or more control signals to the processing engine.
An apparatus for providing an interface to a processing engine that utilizes intelligent audio mixing techniques is also disclosed. The apparatus includes a processor and memory in electronic communication with the processor. Instructions are stored in the memory. The instructions may be executable to trigger by an event a request to change a perceptual location of an audio source within an audio mixture from a current perceptual location relative to a listener to a new perceptual location relative to the listener. The audio mixture may include at least two audio sources. The instructions may also be executable to generate one or more control signals that are configured to cause the processing engine to change the perceptual location of the audio source from the current perceptual location to the new perceptual location via separate foreground processing and background processing. The instructions may also be executable to provide the one or more control signals to the processing engine.
A computer-readable medium is also disclosed. The computer-readable medium may include instructions providing an interface to a processing engine that utilizes audio mixing techniques on a mobile device. When executed by a processor, the instructions may cause the processor to trigger by an event a request to change a perceptual location of an audio source within an audio mixture from a current perceptual location relative to a listener to a new perceptual location relative to the listener. The audio mixture may include at least two audio sources. The instructions may also cause the processor to generate one or more control signals that are configured to cause the processing engine to change the perceptual location of the audio source from the current perceptual location to the new perceptual location via separate foreground processing and background processing. The instructions may also cause the processor to provide the one or more control signals to the processing engine.
An apparatus for providing an interface to a processing engine that utilizes intelligent audio mixing techniques is also disclosed. The apparatus may include means for triggering by an event a request to change a perceptual location of an audio source within an audio mixture from a current perceptual location relative to a listener to a new perceptual location relative to the listener. The audio mixture may include at least two audio sources. The apparatus may also include means for generating one or more control signals that are configured to cause the processing engine to change the perceptual location of the audio source from the current perceptual location to the new perceptual location via separate foreground processing and background processing. The apparatus may also include means for providing the one or more control signals to the processing engine.
The present disclosure relates to intelligent audio mixing techniques. More specifically, the present disclosure relates to techniques for providing the audio sources within an audio mixture with distinct perceptual locations, so that a listener may be better able to distinguish between the different audio sources while listening to the audio mixture. To take a simple example, a first audio source may be provided with a perceptual location that is in front of the listener, while a second audio source may be provided with a perceptual location that is behind the listener. Thus, the listener may perceive the first audio source as coming from a location in front of him/her, while the listener may perceive the second audio source as coming from a location behind him/her. In addition to providing ways for listeners to distinguish between locations in the front and back, different audio sources may also be provided with different angles, or degrees of skew. For example, a first audio source may be provided with a perceptual location that is in front of the listener and to the left, while a second audio source may be provided with a perceptual location that is in front of the listener and to the right. Providing the different audio sources in an audio mixture with different perceptual locations may help the user to better distinguish between the audio sources.
There are many situations in which the techniques described herein may be utilized. One example is when a user of a wireless communication device is listening to music on the wireless communication device when the user receives a phone call. It may be desirable for the user to continue listening to the music during the phone call, without the music interfering with the phone call. Another example is when a user is participating in an instant messaging (IM) conversation on a computer while listening to music or to another type of audio program. It may be desirable for the user to be able to hear the sounds that are played by the IM client while still listening to the music or audio program. Of course, there are many other examples that may be relevant to the present disclosure. The techniques described herein may be applied to any situation in which it may be desirable for a user to be able to perceptually distinguish between the audio sources within an audio mixture.
As indicated above, under some circumstances multiple audio signals may be summed together. The result of this summation may be referred to as an audio mixture. The audio signals before the summation occurs may be referred to as audio sources.
Audio sources may be broadband audio signals, and may therefore have multiple frequency components when analyzed in the frequency domain. As used herein, the term “mixing” refers to combining the time-domain values (either analog or digital) of two or more audio sources by addition.
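As a concrete illustration of this definition, the following sketch mixes two digital audio sources by sample-wise addition of their time-domain values; the NumPy representation, sample rate, and tone frequencies are assumptions chosen only for the example.

```python
import numpy as np

def mix(*sources: np.ndarray) -> np.ndarray:
    """Mix audio sources by sample-wise addition of their time-domain values.

    Assumes all sources have the same length and sample rate.
    """
    mixture = np.zeros_like(sources[0], dtype=np.float64)
    for source in sources:
        mixture += source
    return mixture

# Two hypothetical one-second audio sources sampled at 8 kHz.
t = np.arange(8000) / 8000.0
source_a = 0.5 * np.sin(2 * np.pi * 440.0 * t)   # 440 Hz tone
source_b = 0.5 * np.sin(2 * np.pi * 880.0 * t)   # 880 Hz tone
audio_mixture = mix(source_a, source_b)
```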
The definition of a perceptual angle that was just described will be used throughout the present disclosure. However, perceptual angles may be defined differently and still be consistent with the present disclosure.
The terms “foreground region” and “background region” should not be limited to the specific foreground region 106 and background region 108 shown in
The processing engine 210 may be configured to utilize intelligent audio mixing techniques. The processing engine 210 is also shown with several audio source processors 216. Each audio source processor 216 may be configured to process an input audio source 202′, and to output an audio source 202 that includes a distinct perceptual location relative to the listener 104. In particular, the processing engine 210 is shown with a first audio source processor 216a that processes the first input audio source 202a′, and that outputs a first audio source 202a that includes a distinct perceptual location relative to the listener 104. The processing engine 210 is also shown with a second audio source processor 216b that processes the second input audio source 202b′, and that outputs a second audio source 202b that includes a distinct perceptual location relative to the listener 104. The processing engine 210 is also shown with an Nth audio source processor 216n that processes the Nth input audio source 202n′, and that outputs an Nth audio source 202n that includes a distinct perceptual location relative to the listener 104. An adder 220 may combine the audio sources 202 into the audio mixture 212 that is output by the processing engine 210.
Each of the audio source processors 216 may be configured to utilize methods that are described in the present disclosure for providing an audio source 202 with a distinct perceptual location relative to a listener 104. Alternatively, the audio source processors 216 may be configured to utilize other methods for providing an audio source 202 with a distinct perceptual location relative to a listener 104. For example, the audio source processors 216 may be configured to utilize methods that are based on head related transfer functions (HRTFs).
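For illustration only, a minimal sketch of this arrangement follows, assuming Python/NumPy and modeling each audio source processor 216 as a generic callable (the internals of the processors are not modeled here):

```python
import numpy as np
from typing import Callable, Sequence

# Hypothetical signature: an audio source processor takes an input audio
# source and returns a processed audio source with a distinct perceptual
# location relative to the listener.
AudioSourceProcessor = Callable[[np.ndarray], np.ndarray]

class ProcessingEngine:
    """Sketch of processing engine 210: one audio source processor per input
    audio source, followed by an adder that combines the processed audio
    sources into the audio mixture."""

    def __init__(self, processors: Sequence[AudioSourceProcessor]):
        self.processors = list(processors)

    def process(self, input_sources: Sequence[np.ndarray]) -> np.ndarray:
        # Each input audio source is processed by its own audio source processor.
        processed = [proc(src) for proc, src in zip(self.processors, input_sources)]
        # Adder 220: sum the processed audio sources into the audio mixture 212.
        return np.sum(processed, axis=0)

# Example usage with trivial pass-through processors for two sources.
engine = ProcessingEngine([lambda s: s, lambda s: s])
mixture = engine.process([np.ones(4), 2 * np.ones(4)])  # -> array of 3.0
```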
The apparatus 200 shown in
In response to receiving the request 224, the control unit 222 may generate one or more control signals 232 to provide to the processing engine 210. The control signal(s) 232 may be configured to cause the processing engine 210 to change the perceptual location of the applicable audio source 202 from its current perceptual location to the new perceptual location that is specified in the request 224. The control unit 222 may provide the control signal(s) 232 to the processing engine 210. In response to receiving the control signal(s) 232, the processing engine 210 (and more specifically, the applicable audio source processor 216) may change the perceptual location of the applicable audio source 202 from its current perceptual location to the new perceptual location that is specified in the request 224.
In one possible implementation, the control unit 222 may be an ARM processor, and the processing engine 210 may be a digital signal processor (DSP). With such an implementation, the control signals 232 may be control commands that the ARM processor sends to the DSP.
Alternatively, the control unit 222 may be an application programming interface (API). The processing engine 210 may be a software component (e.g., an application, module, routine, subroutine, procedure, function, etc.) that is being executed by a processor. With such an implementation, the request 224 may come from a software component (either the software component that serves as the processing engine 210 or another software component). The software component that sends the request 224 may be part of a user interface.
In some implementations, the processing engine 210 and/or the control unit 222 may be implemented within a mobile device. Some examples of mobile devices include cellular telephones, personal digital assistants (PDAs), laptop computers, smartphones, portable media players, handheld game consoles, etc.
The audio source unit engine 210A may be configured to utilize intelligent audio mixing techniques. The audio source unit engine 210A is also shown with several audio source units 216A. Each audio source unit 216A may be configured to process an input audio source 202A′, and to output an audio source 202A that includes a distinct perceptual location relative to the listener 104. In particular, the audio source unit engine 210A is shown with a first audio source unit 216A(1) that processes the first input audio source 202A(1)′, and that outputs a first audio source 202A(1) that includes a distinct perceptual location relative to the listener 104. The audio source unit engine 210A is also shown with a second audio source unit 216A(2) that processes the second input audio source 202A(2)′, and that outputs a second audio source 202A(2) that includes a distinct perceptual location relative to the listener 104. The audio source unit engine 210A is also shown with an Nth audio source unit 216A(N) that processes the Nth input audio source 202A(N)′, and that outputs an Nth audio source 202A(N) that includes a distinct perceptual location relative to the listener 104. An adder 220A may combine the audio sources 202A into the audio mixture 212A that is output by the audio source unit engine 210A.
Each of the audio source units 216A may be configured to utilize methods that are described in the present disclosure for providing an audio source 202A with a distinct perceptual location relative to a listener 104. Alternatively, the audio source units 216A may be configured to utilize other methods for providing an audio source 202A with a distinct perceptual location relative to a listener 104. For example, the audio source units 216A may be configured to utilize methods that are based on head related transfer functions (HRTFs).
The processor 201A shown in
In response to receiving the request 224A, the control unit 222A may generate one or more control signals 232A to provide to the audio source unit engine 210A. The control signal(s) 232A may be configured to cause the audio source unit engine 210A to change the perceptual location of the applicable audio source 202A from its current perceptual location to the new perceptual location that is specified in the request 224A. The control unit 222A may provide the control signal(s) 232A to the audio source unit engine 210A. In response to receiving the control signal(s) 232A, the audio source unit engine 210A (and more specifically, the applicable audio source unit 216A) may change the perceptual location of the applicable audio source 202A from its current perceptual location to the new perceptual location that is specified in the request 224A.
In accordance with the method 300, a request 224 to change the perceptual location of an audio source 202 may be received 302. Values of parameters of the processing engine 210 that are associated with the new perceptual location may be determined 304. Commands may be generated 306 for setting the parameters to the new values. Control signal(s) 232 may be generated 308. The control signal(s) 232 may include the commands for setting the parameters to the new values, and thus the control signal(s) 232 may be configured to cause the processing engine 210 to change the perceptual location of the audio source 202 from its current perceptual location to the new perceptual location that is specified in the request 224. The control signal(s) 232 may be provided 310 to the processing engine 210. In response to receiving the control signal(s) 232, the processing engine 210 may change the perceptual location of the audio source 202 to the new perceptual location.
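A hedged sketch of these steps follows. The request fields, parameter names, and command format are hypothetical placeholders, since the disclosure does not prescribe a specific encoding for the control signals 232:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class LocationChangeRequest:
    # Hypothetical request fields: which source, and its new perceptual location.
    source_id: int
    new_region: str       # "foreground" or "background"
    new_angle_deg: float  # perceptual angle within the region

def determine_parameter_values(request: LocationChangeRequest) -> Dict[str, float]:
    """Step 304: determine parameter values associated with the new location.
    Scalar gains here use 1.0 for pass-through and 0.0 for complete attenuation;
    the parameter names are illustrative, not taken from the disclosure."""
    if request.new_region == "foreground":
        return {"fg_attenuation": 1.0, "bg_attenuation": 0.0,
                "fg_angle_deg": request.new_angle_deg}
    return {"fg_attenuation": 0.0, "bg_attenuation": 1.0,
            "bg_angle_deg": request.new_angle_deg}

def generate_control_signals(request: LocationChangeRequest) -> List[str]:
    """Steps 306-308: generate commands for setting the parameters to the new
    values and wrap them in control signals, which would then be provided to
    the processing engine (step 310)."""
    values = determine_parameter_values(request)
    return [f"SET {request.source_id} {name} {value}"
            for name, value in values.items()]
```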
The method of
The audio source processor 516 is shown with a foreground angle control component 534 and a foreground attenuation component 536 for processing the foreground signal. The audio source processor 516 is also shown with a background angle control component 538 and a background attenuation component 540 for processing the background signal.
The foreground angle control component 534 may be configured to process the foreground signal so that the foreground signal includes a perceptual angle within the foreground region 106. This perceptual angle may be referred to as a foreground perceptual angle. The foreground attenuation component 536 may be configured to process the foreground signal in order to provide a desired level of attenuation for the foreground signal.
The background angle control component 538 may be configured to process the background signal so that the background signal includes a perceptual angle within the background region 108. This perceptual angle may be referred to as a background perceptual angle. The background attenuation component 540 may be configured to process the background signal in order to provide a desired level of attenuation for the background signal.
The foreground angle control component 534, foreground attenuation component 536, background angle control component 538, and background attenuation component 540 may function together to provide a perceptual location for an audio source 202. For example, to provide a perceptual location that is within the foreground region 106, the background attenuation component 540 may be configured to attenuate the background signal, while the foreground attenuation component 536 may be configured to allow the foreground signal to pass without being attenuated. The foreground angle control component 534 may be configured to provide the appropriate perceptual angle within the foreground region 106. Conversely, to provide a perceptual location that is within the background region 108, the foreground attenuation component 536 may be configured to attenuate the foreground signal, while the background attenuation component 540 may be configured to allow the background signal to pass without being attenuated. The background angle control component 538 may be configured to provide the appropriate perceptual angle within the background region 108.
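The following minimal sketch illustrates this cooperation for the attenuation components alone, assuming that complete attenuation corresponds to a gain of zero and pass-through to a gain of one; the angle control processing is omitted here:

```python
import numpy as np

def place_source(foreground: np.ndarray, background: np.ndarray,
                 region: str) -> tuple[np.ndarray, np.ndarray]:
    """Attenuate one path completely and pass the other, so that the
    perceptual location of the source falls in the requested region.
    The 0.0/1.0 gain values are illustrative stand-ins for the attenuation
    components described above."""
    if region == "foreground":
        fg_gain, bg_gain = 1.0, 0.0   # pass the foreground path, attenuate the background path
    else:
        fg_gain, bg_gain = 0.0, 1.0   # attenuate the foreground path, pass the background path
    return fg_gain * foreground, bg_gain * background
```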
As indicated above, the control unit 522 may generate the control signals 532 in response to receiving a request 224 to change the perceptual location of an audio source 202. As part of generating the control signals 532, the control unit 522 may be configured to determine new values for parameters associated with the processing engine 210, and more specifically, with the audio source processor 516. The control signals 532 may include commands for setting the parameters to the new values.
The control signals 532 are shown with foreground angle control commands 542, foreground attenuation commands 544, background angle control commands 546, and background attenuation commands 548. The foreground angle control commands 542 may be commands for setting parameters associated with the foreground angle control component 534. The foreground attenuation commands 544 may be commands for setting parameters associated with the foreground attenuation component 536. The background angle control commands 546 may be commands for setting parameters associated with the background angle control component 538. The background attenuation commands 548 may be commands for setting parameters associated with the background attenuation component 540.
The audio source processor 616 is shown receiving an input audio source 602′. The input audio source 602′ is a stereo audio source with two channels, a left channel 602a′ and a right channel 602b′. The input audio source 602′ is shown being split into two signals, a foreground signal 650 and a background signal 652. The foreground signal 650 is shown with two channels, a left channel 650a and a right channel 650b. Similarly, the background signal 652 is shown with two channels, a left channel 652a and a right channel 652b. The foreground signal is shown being processed along a foreground path, while the background signal 652 is shown being processed along a background path.
The left channel 652a and the right channel 652b of the background signal 652 are shown being processed by two low pass filters (LPFs) 662, 664. The right channel 652b of the background signal 652 is then shown being processed by a delay line 666. The length of the delay line 666 may be relatively short (e.g., 10 milliseconds). Due to the precedence effect, the interaural time difference (ITD) introduced by the delay line 666 could result in a sound image skew (i.e., the sound is not perceived as centered) when both channels 652a, 652b are set to the same level. To counteract this, the left channel 652a of the background signal 652 is then shown being processed by an interaural intensity difference (IID) attenuation component 668. The gain of the IID attenuation component 668 may be tuned according to the sampling rate and the length of the delay line 666. The processing that is done by the LPFs 662, 664, the delay line 666, and the IID attenuation component 668 may make the background signal 652 sound more diffuse than the foreground signal 650.
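A sketch of this background processing follows, assuming a simple one-pole low-pass filter in place of the LPFs and illustrative values for the cutoff frequency and IID gain, neither of which is specified by the disclosure:

```python
import numpy as np

def one_pole_lowpass(x: np.ndarray, cutoff_hz: float, fs: float) -> np.ndarray:
    """Simple one-pole low-pass filter, standing in for the LPFs."""
    a = np.exp(-2.0 * np.pi * cutoff_hz / fs)
    y = np.zeros_like(x)
    prev = 0.0
    for n in range(len(x)):
        prev = (1.0 - a) * x[n] + a * prev
        y[n] = prev
    return y

def diffuse_background(left: np.ndarray, right: np.ndarray, fs: float = 44100.0,
                       cutoff_hz: float = 4000.0, delay_ms: float = 10.0,
                       iid_gain: float = 0.7) -> tuple[np.ndarray, np.ndarray]:
    """Sketch of the background path: low-pass both channels, delay the right
    channel by roughly 10 ms (the delay line), and attenuate the left channel
    (the IID attenuation) to counteract the precedence-effect image skew.
    The cutoff frequency and IID gain values are illustrative assumptions."""
    left = one_pole_lowpass(left, cutoff_hz, fs)
    right = one_pole_lowpass(right, cutoff_hz, fs)
    delay_samples = int(round(delay_ms * 1e-3 * fs))
    right = np.concatenate([np.zeros(delay_samples), right])[: len(right)]
    left = iid_gain * left
    return left, right
```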
The audio source processor 616 is shown with a foreground angle control component 634. As indicated above, the foreground angle control component 634 may be configured to provide a foreground perceptual angle for the foreground signal 650. In addition, because the input audio source 602′ is a stereo audio source, the foreground angle control component 634 may also be configured to balance the contents of the left channel 650a and the right channel 650b of the foreground signal 650. This may be done for the purpose of preserving contents of the left channel 650a and the right channel 650b of the foreground signal 650 for any perceptual angle that the foreground signal 650 may be set to.
The audio source processor 616 is also shown with a background angle control component 638. As indicated above, the background angle control component 638 may be configured to provide a background perceptual angle for the background signal 652. In addition, because the input audio source 602′ is a stereo audio source, the background angle control component 638 may also be configured to balance the contents of the left channel 652a and the right channel 652b of the background signal 652. This may be done for the purpose of preserving contents of the left channel 652a and the right channel 652b of the background signal 652 for any perceptual angle that the background signal 652 may be set to.
The audio source processor 616 is also shown with a foreground attenuation component 636. As indicated above, the foreground attenuation component 636 may be configured to process the foreground signal 650 in order to provide a desired level of attenuation for the foreground signal 650. The foreground attenuation component 636 is shown with two scalars 654, 656. Collectively, these scalars 654, 656 may be referred to as foreground attenuation scalars 654, 656.
The audio source processor 616 is also shown with a background attenuation component 640. As indicated above, the background attenuation component 640 may be configured to process the background signal 652 in order to provide a desired level of attenuation for the background signal 652. The background attenuation component 640 is shown with two scalars 658, 660. Collectively, these scalars 658, 660 may be referred to as background attenuation scalars 658, 660.
The values of the foreground attenuation scalars 654, 656 may be set to achieve the desired level of attenuation for the foreground signal 650. Similarly, the values of the background attenuation scalars 658, 660 may be set to achieve the desired level of attenuation for the background signal 652. For example, to completely attenuate the foreground signal 650, the foreground attenuation scalars 654, 656 may be set to a minimum value (e.g., zero). In contrast, to allow the foreground signal 650 to pass without being attenuated, these scalars 654, 656 may be set to a maximum value (e.g., unity).
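As an illustration, a small helper that maps a desired attenuation level to such a scalar follows; the decibel-to-linear mapping for intermediate levels is an assumption, since the disclosure only specifies the minimum and maximum values:

```python
from typing import Optional

def attenuation_scalar(attenuation_db: Optional[float]) -> float:
    """Map a desired attenuation to a scalar in [0, 1].

    None  -> complete attenuation (scalar 0, the minimum value)
    0 dB  -> no attenuation (scalar 1, the maximum value, i.e., unity)
    x dB  -> 10 ** (-x / 20), an assumed linear-gain mapping for
             intermediate levels (the disclosure does not specify one)
    """
    if attenuation_db is None:
        return 0.0
    return min(1.0, 10.0 ** (-attenuation_db / 20.0))

# Example: completely attenuate the foreground path, pass the background path.
fg_scalar = attenuation_scalar(None)   # 0.0
bg_scalar = attenuation_scalar(0.0)    # 1.0
```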
An adder 670 is shown combining the left channel 650a of the foreground signal 650 with the left channel 652a of the background signal 652. The adder 670 is shown outputting the left channel 602a of the output audio source 602. Another adder 672 is shown combining the right channel 650b of the foreground signal 650 with the right channel 652b of the background signal 652. This adder 672 is shown outputting the right channel 602b of the output audio source 602.
The audio source processor 616 illustrates how separate foreground processing and background processing may be implemented in order to change the perceptual location of an audio source 602. An input audio source 602′ is shown being split into two signals, a foreground signal 650 and a background signal 652. The foreground signal 650 and the background signal 652 are then processed separately. In other words, there are differences between the way that the foreground signal 650 is processed as compared to the way that the background signal 652 is processed. The specific differences shown in
The audio source processor 616 of
As indicated above, the foreground angle control component 734 may be configured to balance contents of the left channel 750a and the right channel 750b of the foreground signal 750. This may be accomplished by redistributing the contents of the left channel 750a and the right channel 750b of the foreground signal 750 to two signals 774a, 774b. These signals 774a, 774b may be referred to as content-balanced signals 774a, 774b. The content-balanced signals 774a, 774b may both include a substantially equal mixture of the contents of the left channel 750a and the right channel 750b of the foreground signal 750. To distinguish the content-balanced signals 774 from each other, one content-balanced signal 774a may be referred to as a left content-balanced signal 774a, while the other content-balanced signal 774b may be referred to as a right content-balanced signal 774b.
Mixing scalars 776 may be used to redistribute the contents of the left channel 750a and the right channel 750b of the foreground signal 750 to the two content-balanced signals 774a, 774b. In
As indicated above, the foreground angle control component 734 may also be configured to provide a perceptual angle within the foreground region 106 for the foreground signal 750. This may be accomplished through the use of two scalars 778, which may be referred to as foreground angle control scalars 778. In
To achieve a perceptual angle between 270° and 0° (i.e., on the left side of the foreground region 106), the values of the foreground angle control scalars 778 may be set so that the right content-balanced signal 774b is more greatly attenuated than the left content-balanced signal 774a. Conversely, to achieve a perceptual angle between 0° and 90° (i.e., on the right side of the foreground region 106), the values of the foreground angle control scalars 778 may be set so that the left content-balanced signal 774a is more greatly attenuated than the right content-balanced signal 774b. To achieve a perceptual location that is directly in front of the listener 104 (0°), the values of the foreground angle control scalars 778 may be set so that the left content-balanced signal 774a and the right content-balanced signal 774b are equally attenuated.
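A sketch of this behavior follows, assuming 0.5 mixing scalars for the content-balanced signals and a constant-power pan law for the angle control scalars g_L and g_R; the disclosure specifies only which side is attenuated more for a given angle, not the exact law:

```python
import numpy as np

def foreground_angle_control(left: np.ndarray, right: np.ndarray,
                             angle_deg: float) -> tuple[np.ndarray, np.ndarray]:
    """Sketch of a foreground angle control component.

    Mixing scalars (0.5 each here) redistribute the left and right channels
    into two content-balanced signals; angle control scalars g_L and g_R then
    attenuate one side more than the other to set the foreground perceptual
    angle (0 deg = straight ahead, 90 deg = right, 270 deg = left)."""
    # Content-balanced signals: substantially equal mixture of both channels.
    balanced_left = 0.5 * left + 0.5 * right
    balanced_right = 0.5 * left + 0.5 * right

    # Pan position in [-1, 1]: negative = left of center, positive = right.
    pan = np.sin(np.radians(angle_deg))
    theta = (pan + 1.0) * np.pi / 4.0
    g_left, g_right = np.cos(theta), np.sin(theta)  # equal at 0 deg (pan = 0)

    return g_left * balanced_left, g_right * balanced_right
```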
As indicated above, the background angle control component 838 may be configured to balance contents of the left channel 852a and the right channel 852b of the background signal 852. This may be accomplished by redistributing the contents of the left channel 852a and the right channel 852b of the background signal 852 to two content-balanced signals 880, which may be referred to as a left content-balanced signal 880a and a right content-balanced signal 880b. The content-balanced signals 880a, 880b may both include a substantially equal mixture of the contents of the left channel 852a and the right channel 852b of the background signal 852.
Mixing scalars 882 may be used to redistribute the contents of the left channel 852a and the right channel 852b of the background signal 852 to the two content-balanced signals 880a, 880b. In
As indicated above, the background angle control component 838 may also be configured to provide a perceptual angle within the background region 108 for the background signal 852. This may be accomplished by tuning the values of the four mixing scalars 882 so that these scalars 882 also perform the function of providing a perceptual angle for the background signal 852 in addition to the function of redistributing contents of the left and right channels 852a, 852b of the background signal 852. Thus, the background angle control component 838 is shown without any dedicated angle control scalars (such as the g_L scalar 778a and the g_R scalar 778b in the foreground angle control component 734 shown in
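A sketch of this combined behavior follows, in which assumed angle gains are folded into the four mixing scalars; the specific values and the pan law are illustrative assumptions:

```python
import numpy as np

def background_angle_control(left: np.ndarray, right: np.ndarray,
                             angle_deg: float) -> tuple[np.ndarray, np.ndarray]:
    """Sketch of a background angle control component.

    The four mixing/angle control scalars both redistribute the left and right
    channels into content-balanced signals and set the background perceptual
    angle, so no dedicated angle control scalars are needed. Folding an
    assumed constant-power pan law into the mixing scalars is one illustrative
    way to achieve that combined behavior."""
    # Pan position behind the listener (180 deg = directly behind; angles are
    # measured clockwise from the front, so 90 deg is right and 270 deg is left).
    pan = np.sin(np.radians(angle_deg))
    theta = (pan + 1.0) * np.pi / 4.0
    g_left, g_right = np.cos(theta), np.sin(theta)

    # Each of the four mixing/angle control scalars carries both the 0.5
    # content-balancing weight and the angle gain for its output channel.
    scalars = [[0.5 * g_left, 0.5 * g_left],
               [0.5 * g_right, 0.5 * g_right]]
    out_left = scalars[0][0] * left + scalars[0][1] * right
    out_right = scalars[1][0] * left + scalars[1][1] * right
    return out_left, out_right
```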
As indicated above, the control signals 532 that the control unit 522 sends to the audio source processor 516 may include foreground attenuation commands 544 and background attenuation commands 548. The foreground attenuation commands 544 may include commands for setting the values of the foreground attenuation scalars 654, 656 in accordance with the values shown in
The values of the foreground attenuation scalars 654, 656 and the background attenuation scalars 658, 660 shown in
The table 1084 includes a column 1086 that shows examples of values for the foreground attenuation scalars 654, 656 and the background attenuation scalars 658, 660 when the perceptual location of an audio source 202 is changed from a current location in the foreground region 106 to a new location that is also in the foreground region 106. Another column 1088 shows examples of values for the foreground attenuation scalars 654, 656 and the background attenuation scalars 658, 660 when the perceptual location of an audio source 202 is changed from a current location in the background region 108 to a new location that is also in the background region 108.
As indicated above, the control signals 532 that the control unit 522 sends to the audio source processor 516 may include foreground angle control commands 542. The foreground angle control commands 542 may include commands for setting the values of the foreground angle control scalars 778a, 778b in accordance with the values shown in
As indicated above, the control signals 532 that the control unit 522 sends to the audio source processor 516 may include foreground angle control commands 542. The foreground angle control commands 542 may include commands for setting the values of the mixing scalars 776 in accordance with the values shown in
As indicated above, the control signals 532 that the control unit 522 sends to the audio source processor 516 may include background angle control commands 546. The background angle control commands 546 may include commands for setting the values of the mixing/angle control scalars 882 in accordance with the values shown in
In accordance with the method 1400, an input audio source 602′ may be split 1402 into a foreground signal 650 and a background signal 652. The foreground signal 650 may be processed differently than the background signal 652.
The processing of the foreground signal 650 will be discussed first. If the input audio source 602′ is a stereo audio source, the foreground signal 650 may be processed 1404 to balance contents of the left channel 650a and the right channel 650b of the foreground signal 650. The foreground signal 650 may also be processed 1406 to provide a foreground perceptual angle for the foreground signal 650. The foreground signal 650 may also be processed 1408 to provide a desired level of attenuation for the foreground signal 650.
The processing of the background signal 652 will now be discussed. The background signal 652 may be processed 1410 so that the background signal 652 sounds more diffuse than the foreground signal 650. If the input audio source 602′ is a stereo audio source, the background signal 652 may be processed 1412 to balance contents of the left channel 652a and the right channel 652b of the background signal 652. The background signal 652 may also be processed 1414 to provide a background perceptual angle for the background signal 652. The background signal 652 may also be processed 1416 to provide a desired level of attenuation for the background signal 652.
The foreground signal 650 and the background signal 652 may then be combined 1418 into an output audio source 602. The output audio source 602 may then be combined with other output audio sources to create an audio mixture 212.
The method 1400 of
Although the method 1400 of
The method 1400 of
In accordance with the method 1600, control signals 532 may be received 1602 from a control unit 522. These control signals 532 may include commands for setting various parameters of the audio source processor 616.
For example, suppose that the perceptual location of an audio source 602 is being changed from the foreground region 106 to the background region 108. The control signals 532 may include commands 546 to immediately set the mixing/angle control scalars 882 within the background angle control component 838 to values that correspond to the new perceptual location of the audio source 602. The values of the mixing/angle control scalars 882 may be changed 1604 in accordance with these commands 546.
The control signals 532 may also include commands 548 to gradually transition the values of the background attenuation scalars 658, 660 from values that result in complete attenuation of the background signal 652 to values that result in no attenuation of the background signal 652. The values of the background attenuation scalars 658, 660 may be changed 1606 in accordance with these commands 548.
The control signals 532 may also include commands 544 to gradually transition the values of the foreground attenuation scalars 654, 656 from values that result in no attenuation of the foreground signal 650 to values that result in complete attenuation of the foreground signal 650. The values of the foreground attenuation scalars 654, 656 may be changed 1608 in accordance with these commands 544.
Conversely, suppose that the perceptual location of an audio source 602 is being changed from the background region 108 to the foreground region 106. The control signals 532 may include commands 542 to immediately set the foreground mixing scalars 776 and the foreground angle control scalars 778 within the foreground angle control component 734 to values that correspond to the new perceptual location of the audio source 602. The values of the foreground mixing scalars 776 and the foreground angle control scalars 778 may be changed 1610 in accordance with these commands 542.
The control signals 532 may also include commands 544 to gradually transition the values of the foreground attenuation scalars 654, 656 from values that result in complete attenuation of the foreground signal 650 to values that result in no attenuation of the foreground signal 650. The values of the foreground attenuation scalars 654, 656 may be changed 1612 in accordance with these commands 544.
The control signals 532 may also include commands 548 to gradually transition the values of the background attenuation scalars 658, 660 from values that result in no attenuation of the background signal 652 to values that result in complete attenuation of the background signal 652. The values of the background attenuation scalars 658, 660 may be changed 1614 in accordance with these commands 548.
If the perceptual location of an audio source 602 is being changed within the background region 108, the control signals 532 may also include commands 546 to gradually transition the values of the mixing/angle control scalars 882 within the background angle control component 838 from values that correspond to the current perceptual location to values that correspond to the new perceptual location. The values of the mixing/angle control scalars 882 may be changed 1616 in accordance with these commands 546.
If the perceptual location of an audio source 602 is being changed within the foreground region 106, the control signals 532 may also include commands 542 to gradually transition the values of the foreground mixing scalars 776 and the foreground angle control scalars 778 within the foreground angle control component 734 from values that correspond to the current perceptual location to values that correspond to the new perceptual location. The values of the foreground mixing scalars 776 and the foreground angle control scalars 778 may be changed 1618 in accordance with these commands 542.
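The gradual transitions described above might be realized with per-sample gain ramps, as in the following sketch; the linear ramp shape and the 200 millisecond duration are assumptions, since the disclosure states only that the changes are gradual rather than immediate:

```python
import numpy as np

def scalar_ramp(start: float, end: float, duration_ms: float,
                fs: float = 44100.0) -> np.ndarray:
    """Sketch of a gradual transition of an attenuation scalar.

    Returns a per-sample gain ramp from `start` to `end` (e.g., 1.0 -> 0.0 to
    completely attenuate a path, or 0.0 -> 1.0 to bring it in without a click)."""
    length = int(round(duration_ms * 1e-3 * fs))
    return np.linspace(start, end, length)

# Moving a source from the foreground region to the background region:
# the foreground attenuation scalars ramp from no attenuation to complete
# attenuation while the background attenuation scalars ramp the other way.
fg_ramp = scalar_ramp(1.0, 0.0, duration_ms=200.0)
bg_ramp = scalar_ramp(0.0, 1.0, duration_ms=200.0)
```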
The method 1600 of
The method 1600 of
The audio source processor 1816 shown in
There are some differences between the audio source processor 1816 shown in
The input audio source 1802′ is shown being split into a foreground signal 1850 and a background signal 1852. Because the input audio source 1802′ includes one channel, the foreground signal 1850 and the background signal 1852 both initially include one channel.
Because the foreground signal 1850 initially includes just one channel, the foreground angle control component 1834 may be configured to receive just one input 1850. In contrast, as discussed above, the foreground angle control component 634 in the audio source processor 616 of
The foreground angle control component 1834 in the audio source processor 1816 of
As mentioned, the background signal 1852 also initially includes just one channel. Thus, the audio source processor 1816 of
The audio source processor 1816 shown in
The foreground angle control component 1934 is shown receiving the single channel of a foreground signal 1950 as input. The foreground angle control component 1934 may be configured to provide a foreground perceptual angle for the foreground signal 1950. This may be accomplished through the use of two foreground angle control scalars 1978a, 1978b, which in
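A sketch of this one-channel case follows, assuming a constant-power mapping for the two angle control scalars:

```python
import numpy as np

def mono_foreground_angle_control(mono: np.ndarray,
                                  angle_deg: float) -> tuple[np.ndarray, np.ndarray]:
    """Sketch of a foreground angle control component for a one-channel source:
    the single foreground channel is scaled by two angle control scalars to
    produce left and right outputs at the desired foreground perceptual angle.
    The constant-power pan law is an assumption."""
    pan = np.sin(np.radians(angle_deg))        # -1 = left, +1 = right
    theta = (pan + 1.0) * np.pi / 4.0
    g_left, g_right = np.cos(theta), np.sin(theta)
    return g_left * mono, g_right * mono
```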
The apparatus 2001 is shown with a processor 2003 and memory 2005. The processor 2003 may control the operation of the apparatus 2001 and may be embodied as a microprocessor, a microcontroller, a digital signal processor (DSP) or other device known in the art. The processor 2003 typically performs logical and arithmetic operations based on program instructions stored within the memory 2005. The instructions in the memory 2005 may be executable to implement the methods described herein.
The apparatus 2001 may also include one or more communication interfaces 2007 and/or network interfaces 2013 for communicating with other electronic devices. The communication interface(s) 2007 and the network interface(s) 2013 may be based on wired communication technology, wireless communication technology, or both.
The apparatus 2001 may also include one or more input devices 2009 and one or more output devices 2011. The input devices 2009 and output devices 2011 may facilitate user input. Other components 2015 may also be provided as part of the apparatus 2001.
As used herein, the term “determining” (and grammatical variants thereof) is used in an extremely broad sense. The term “determining” encompasses a wide variety of actions and, therefore, “determining” can include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database or another data structure), ascertaining and the like. Also, “determining” can include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Also, “determining” can include resolving, selecting, choosing, establishing and the like.
Information and signals may be represented using any of a variety of different technologies and techniques. For example, data, instructions, commands, information, signals and the like that may be referenced throughout the above description may be represented by voltages, currents, electromagnetic waves, magnetic fields or particles, optical fields or particles or any combination thereof.
The various illustrative logical blocks, modules and circuits described in connection with the present disclosure may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core or any other such configuration.
The steps of a method or algorithm described in connection with the present disclosure may be embodied directly in hardware, in a software module executed by a processor or in a combination of the two. A software module may reside in any form of storage medium that is known in the art. Some examples of storage media that may be used include RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, a hard disk, a removable disk, a CD-ROM and so forth. A software module may comprise a single instruction, or many instructions, and may be distributed over several different code segments, among different programs and across multiple storage media. A storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.
The methods disclosed herein comprise one or more steps or actions for achieving the described method. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.
The functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media, including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
It is to be understood that the claims are not limited to the precise configuration and components illustrated above. Various modifications, changes and variations may be made in the arrangement, operation and details of the methods and apparatus described above without departing from the scope of the claims.