Systems and methods for processing audio data are known in the art. Most of these systems and methods are used to process audio data for a known audio environment, such as a two-channel stereo environment, a four-channel quadraphonic environment, a five channel surround sound environment (also known as a 5.1 channel environment), or other suitable formats or environments.
One problem posed by the increasing number of formats or environments is that audio data that is processed for optimal audio quality in a first environment is often not able to be readily used in a different audio environment. One example of this problem is the conversion of stereo sound data to surround sound data. A listener can perceive a noticeable change in sound quality when programming changes from surround sound encoding to stereo encoding. However, as the additional channels of audio data for surround sound encoding are not present in the stereo two-channel data, existing surround systems are unable to change the way such sound is processed.
In accordance with the present invention, a system and method for an audio spatial environment engine are provided that overcome known problems with converting between spatial audio environments.
In particular, a system and method for an audio spatial environment engine are provided that allows conversion between N-channel data and M-channel data, where N and M are integers.
In accordance with an exemplary embodiment of the present invention, an audio spatial environment engine for converting from an N channel audio system to an M channel audio system, where N is an integer greater than M, is provided. The audio spatial environment engine includes one or more correlators receiving two or more of the N channels of audio data and eliminating delays between the channels that are irrelevant to an average human listener. One or more Hilbert transform systems each perform a Hilbert transform on one or more of the correlated channels of audio data. One or more summers receive at least one of the correlated channels of audio data and at least one of the Hilbert transformed correlated channels of audio data and generate one of the M channels of audio data.
The present invention provides many important technical advantages. One important technical advantage of the present invention is a system and method for an audio spatial environment engine that uses magnitude and phase functions for each speaker in an audio system to allow sound that is optimized for an N-speaker system to be converted into sound that is optimized for an M-speaker system.
Those skilled in the art will further appreciate the advantages and superior features of the invention together with other important aspects thereof on reading the detailed description that follows in conjunction with the drawings.
In the description that follows, like parts are marked throughout the specification and drawings with the same reference numerals, respectively. The drawing figures might not be to scale, and certain components can be shown in generalized or schematic form and identified by commercial designations in the interest of clarity and conciseness.
In the exemplary embodiment of system 100, five channel or so-called 5.1 sound is being converted into stereo left and right channel output, but other suitable numbers of input channels can also or alternatively be converted into stereo output.
System 100 includes correlator 102a, which receives left front channel and right front channel input and correlates the data to eliminate time delays below a predetermined maximum time delay, such as seven milliseconds. In this exemplary embodiment, correlator 102a can receive analog data, digital data, or other suitable data and can correlate the signals to eliminate delays between the channels of data that are irrelevant to an average human listener. Correlator 102a outputs front correlated left front channel data and front correlated right front channel data.
In the same manner, correlator 102b receives left rear channel data and right rear channel data and correlates the data to eliminate time delays below a predetermined maximum time delay, such as seven milliseconds. In this exemplary embodiment, correlator 102b can receive analog data, digital data, frequency domain data, or other suitable data and can correlate the signals to eliminate delays between the channels of data that are irrelevant to human listeners. Correlator 102b outputs rear correlated left rear channel data and rear correlated right rear channel data.
Correlator 102c receives front correlated left front channel data and rear correlated left rear channel data and correlates the data to eliminate time offsets below a predetermined maximum time delay, such as seven milliseconds. In this exemplary embodiment, correlator 102c can receive analog data, digital data, frequency domain data, or other suitable data and can correlate the signals to eliminate time offsets between the channels of data that are below levels perceptible by human listeners. Correlator 102c outputs front-rear correlated left front channel data and rear-front correlated left rear channel data.
Correlator 102d receives front correlated right front channel data and rear correlated right rear channel data and correlates the data to eliminate time offsets below a predetermined maximum time delay, such as seven milliseconds. In this exemplary embodiment, correlator 102d can receive analog data, digital data, or other suitable data and can correlate the signals to eliminate delays between the channels of data that are irrelevant to human listeners. Correlator 102d outputs front-rear correlated right front channel data and rear-front correlated right rear channel data.
Center channel data is provided to multiplier 104, the output of which is the multiplied value of the center channel data. In one exemplary embodiment, multiplier 104 can multiply the center channel data by 0.707, altering the “gain” of the center channel data, such as where the output of the system is a stereo channel output.
Hilbert transform 106a receives the rear-front correlated left rear channel data and performs a Hilbert transform on the data.
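The specific expression is not reproduced in this text. As a presumed reconstruction, the standard continuous-time Hilbert transform is:

```latex
\hat{x}(t) = \mathcal{H}\{x\}(t) = \frac{1}{\pi}\,\mathrm{p.v.}\!\int_{-\infty}^{\infty} \frac{x(\tau)}{t - \tau}\, d\tau
```

which shifts each frequency component of the input by 90 degrees while leaving its magnitude unchanged.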
The output of Hilbert transform 106a is summed at summer 108a with the front-rear correlated left front channel data and the multiplied center channel data to generate the stereo left output signal. Hilbert transform 106b receives the rear-front correlated right rear channel data and performs a Hilbert transform on the data.
The output of Hilbert transform 106b is subtracted at summer 108b from the sum of the front-rear correlated right front channel data and the multiplied center channel data to generate the stereo right output signal.
In operation, system 100 allows an N-channel input to be converted into a 2-dimensional stereo output, such as by eliminating time delays between the front and rear channel signals that are below the level of human perception, by performing a suitable transform on one or more channels of the correlated data, such as a Hilbert transform, by multiplying one or more channels of the data by a suitable scaling factor, and then combining the processed N-channel data to generate stereo left and right channel data. In this manner, a system that is configured to generate stereo channel data, such as a system having two speakers, can receive N-channel data and output sound having a spatial quality that is compatible with sound that has been processed for two channel, left-right stereo speaker delivery.
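As an illustrative sketch of this combination (not the patent's implementation), the downmix can be expressed in Python. The correlation stage is omitted, the Hilbert transform is computed with a pure-Python DFT for clarity, and all function names are hypothetical:

```python
import cmath

def dft(x):
    # Direct O(n^2) discrete Fourier transform, adequate for illustration.
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    # Inverse DFT, returning the real part of each time-domain sample.
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)).real / n
            for t in range(n)]

def hilbert(x):
    # Hilbert transform: -90 degree shift for positive-frequency bins,
    # +90 degrees for negative-frequency bins.
    n = len(x)
    X = dft(x)
    H = []
    for k, Xk in enumerate(X):
        if k == 0 or (n % 2 == 0 and k == n // 2):
            H.append(0.0)       # DC and Nyquist bins are zeroed
        elif k < n / 2:
            H.append(-1j * Xk)  # positive-frequency bins
        else:
            H.append(1j * Xk)   # negative-frequency bins
    return idft(H)

def downmix_51_to_stereo(lf, rf, c, ls, rs):
    # Lt = Lf + 0.707*C + H(Ls);  Rt = Rf + 0.707*C - H(Rs),
    # mirroring summers 108a/108b and multiplier 104 in the text.
    hls, hrs = hilbert(ls), hilbert(rs)
    left = [a + 0.707 * b + h for a, b, h in zip(lf, c, hls)]
    right = [a + 0.707 * b - h for a, b, h in zip(rf, c, hrs)]
    return left, right
```

For a pure cosine surround channel, `hilbert` returns the corresponding sine, i.e. the 90 degree phase shift the transform is used for here.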
System 200 includes left and right magnitude function 202, which generates a value ranging between 0.0 and 1.0, or between other suitable values, based on an input value from magnitude ratio value system 230, which determines the percentage of energy in the left and right channels, also referred to as the magnitude ratio value or M.R.V. The magnitude ratio value can be determined by dividing the left magnitude by the sum of the left and right magnitudes for each sub-band or frequency bin of a time to frequency transformed sample, such as a fast Fourier transformed sample or other suitable samples. In one exemplary embodiment, the time to frequency conversion can be a 2048 point transform over a fixed sample, such as a 23.5 millisecond sample, resulting in a set of frequency bins each having a magnitude value, such as 1024 bins. Other suitable sample sizes of time to frequency transforms can be used. The M.R.V. for each transform bin can be independently processed, can be processed in predetermined groups, or other suitable processing can be used. Left and right magnitude function 202 can be empirically or analytically determined, and outputs a maximum left channel value at an M.R.V. of 0.0 that decreases as the M.R.V. approaches 50%. Likewise, the right channel value is zero as the M.R.V. increases from 0.0 until it reaches 50%, after which it increases to a maximum when the M.R.V. equals 100%.
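The per-bin computation described above can be sketched as follows (hypothetical names; the FFT framing and windowing are omitted, and the small guard value against silent bins is an assumption not stated in the text):

```python
def magnitude_ratio_values(left_mags, right_mags, eps=1e-12):
    # Per-bin M.R.V.: left magnitude divided by the sum of the left and
    # right magnitudes for that bin. eps avoids division by zero when a
    # bin is silent in both channels (an assumption for robustness).
    return [l / (l + r + eps) for l, r in zip(left_mags, right_mags)]
```

With this convention, 1.0 indicates all energy in the left channel, 0.0 all energy in the right channel, and 0.5 equal energy in both.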
Center magnitude function 204 receives the output from the magnitude ratio value system 230 and generates a maximum value when the magnitude of the left and right channels is equal, which occurs when M.R.V. is around 50%. The center magnitude function 204 falls off as the M.R.V. moves away from 50% towards 0.0% and 100%, and the slope of the fall-off can be determined analytically or empirically, can be symmetric or asymmetric, can be linear or non-linear, and can have other suitable characteristics.
Left and right surround magnitude function 206 receives the output from the magnitude ratio value system 230 and generates an output based on functions determined from encoder parameters, either empirically or analytically. In one exemplary embodiment, as the M.R.V. increases from 0.0%, the left surround magnitude function begins to increase until it reaches a maximum value at a point less than an M.R.V. value of 50%. Likewise, the right surround magnitude function begins to increase from 0.0 at some point between 0.0% and 50.0% M.R.V. The left surround function then begins to drop off as the M.R.V. continues to increase, and eventually reaches zero before the M.R.V. reaches 100%. The right surround function increases until it reaches a maximum between 50.0% M.R.V. and 100.0% M.R.V. and then falls off to zero by the time the M.R.V. reaches 100.0%. The slope of the rise and fall-off of the left surround function and the right surround function can be determined analytically or empirically, can be symmetric or asymmetric, can be linear or non-linear, and can have other suitable characteristics.
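One illustrative piecewise-linear realization of magnitude functions 202, 204, and 206 is sketched below. The breakpoints and slopes are assumptions; the text leaves them to empirical or analytical determination:

```python
def left_right_magnitude(mrv):
    # Function 202: left is maximal at M.R.V. 0.0 and falls to zero at 50%;
    # right is zero until 50% and rises to its maximum at 100%.
    left = max(0.0, 1.0 - 2.0 * mrv)
    right = max(0.0, 2.0 * mrv - 1.0)
    return left, right

def center_magnitude(mrv):
    # Function 204: triangular peak at M.R.V. 50%, falling off toward
    # 0% and 100% (a symmetric, linear fall-off is assumed here).
    return max(0.0, 1.0 - 2.0 * abs(mrv - 0.5))

def surround_magnitude(mrv):
    # Function 206: left surround peaks at an assumed 25% and reaches zero
    # before 100%; right surround peaks at an assumed 75% and is zero at
    # both extremes.
    left_s = max(0.0, 1.0 - abs(mrv - 0.25) / 0.25)
    right_s = max(0.0, 1.0 - abs(mrv - 0.75) / 0.25)
    return left_s, right_s
```

Asymmetric or non-linear shapes would be substituted where an empirically tuned encoder calls for them.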
In addition to the magnitude functions, phase functions are also used to generate phase information. Phase difference system 232 generates phase difference data and provides the phase difference data to front phase function 208, left-right phase function 210, and rear phase function 212. The phase input for each frequency bin or sub-band can be the phase difference between the stereo left and right channels, a running-average coherence based on whether the left and right channels are in phase or up to 180 degrees out of phase, or other suitable data. Front phase function 208 receives the phase difference between the left and the right channels and increases from a minimum value as the difference increases from 0.0 degrees, reaching a maximum as the difference approaches 90 degrees. The front phase function 208 then remains at the maximum over a range before starting to fall off as the phase difference increases towards 180 degrees. The slope of the rise, the point at which the maximum is reached, the section over which the maximum is maintained, and the slope of the fall-off of the front phase function can be determined analytically or empirically, can be symmetric or asymmetric, can be linear or non-linear, and can have other suitable characteristics.
Left-right phase function 210 starts at a maximum value when the phase difference between the left and right channels is zero, and decreases as the phase difference increases until it reaches 0.0 when the phase difference is 180 degrees. The slope of the fall-off of the left-right phase function can be determined analytically or empirically, can be linear or non-linear, and can have other suitable characteristics.
Rear phase function 212 starts at a minimum value when the phase difference between the left and right channels is zero, and increases as the phase difference increases until it reaches the maximum when the phase difference is 180 degrees. The slope of the increase of the rear phase function can be determined analytically or empirically, can be linear or non-linear, and can have other suitable characteristics.
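An illustrative piecewise-linear realization of phase functions 208, 210, and 212 is sketched below; the 90 and 135 degree breakpoints of the front function, and all slopes, are assumptions:

```python
def front_phase(pd_degrees):
    # Function 208: rises from a minimum at 0 degrees to a maximum near
    # 90 degrees, holds over an assumed range, then falls toward 180.
    if pd_degrees <= 90.0:
        return pd_degrees / 90.0
    if pd_degrees <= 135.0:
        return 1.0
    return max(0.0, (180.0 - pd_degrees) / 45.0)

def left_right_phase(pd_degrees):
    # Function 210: maximal when the channels are in phase (0 degrees),
    # zero when they are fully out of phase (180 degrees).
    return 1.0 - pd_degrees / 180.0

def rear_phase(pd_degrees):
    # Function 212: the mirror of 210, minimal at 0 and maximal at 180.
    return pd_degrees / 180.0
```

In-phase content is thereby steered to the front channels and out-of-phase content to the rear channels.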
Multiplier 214 receives the output from left and right magnitude function 202, which includes a left channel value and a right channel value, and multiplies these values by the corresponding value from left-right phase function 210 for the corresponding frequency bin or sub-band. The output from multiplier 214 is then provided to adder 222. Likewise, the output from center magnitude function 204 is multiplied with the output from front phase function 208 by multiplier 216, and the resultant outputs are provided to adder 222 for combination with the output from multiplier 214. The output from adder 222 is then provided to multiplier 228, which multiplies the left channel value times the stereo left channel input to generate the left front channel output for a 5.1 sound system. Likewise, multiplier 228 multiplies the right channel value times the stereo right channel input to generate the right front channel output for a 5.1 sound system, such as after performing a frequency to time transformation, such as an inverse FFT.
Multiplier 218 receives the output from left and right surround magnitude function 206 and multiplies it by the output from rear phase function 212 for the corresponding frequency bin or sub-band. The outputs from multiplier 218 are then provided to multiplier 224, which receives the stereo left channel input minus the stereo right channel input and multiplies this value times the outputs from multiplier 218 to generate the left rear channel output and the right rear channel output for a 5.1 sound system, such as after performing a frequency to time transformation.
The center output for a 5.1 sound system is generated by multiplying the output from center magnitude function 204 with the output from left-right phase function 210 for the corresponding frequency bin or sub-band. The resultant value is then multiplied times the sum of the stereo left channel plus the stereo right channel. A frequency to time transform or other suitable processing is then performed to generate the center output for the 5.1 sound system.
In operation, system 200 allows stereo input to be converted into N-channel input, such as 5.1 sound system input which includes a front left and right speaker, a rear left and right speaker, and a center speaker (as well as typically a sub-woofer that is optional and not a factor in forming the sound image). System 200 thus allows a stereo signal that is optimized for a listener at the apex of an equilateral triangle between a left speaker and a right speaker to be converted for a system where there are N-speakers, such as a 5.1 sound system or other suitable systems.
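The per-bin recombination performed by multipliers 214 through 228 and adder 222 can be sketched as follows (hypothetical signature; the per-bin magnitude and phase values are taken as inputs, and the final frequency to time transform is omitted):

```python
def upmix_bin(l_bin, r_bin, lr_mag, c_mag, s_mag, f_ph, lr_ph, r_ph):
    # l_bin, r_bin: stereo FFT values for this bin.
    # lr_mag, s_mag: (left, right) outputs of functions 202 and 206.
    # c_mag: output of function 204; f_ph, lr_ph, r_ph: outputs of 208-212.
    front_gain_l = lr_mag[0] * lr_ph + c_mag * f_ph   # multipliers 214/216, adder 222
    front_gain_r = lr_mag[1] * lr_ph + c_mag * f_ph
    left_front = front_gain_l * l_bin                 # multiplier 228
    right_front = front_gain_r * r_bin
    diff = l_bin - r_bin                              # input to multiplier 224
    left_rear = s_mag[0] * r_ph * diff                # multipliers 218/224
    right_rear = s_mag[1] * r_ph * diff
    center = c_mag * lr_ph * (l_bin + r_bin)          # center path
    return left_front, right_front, center, left_rear, right_rear
```

A bin with all energy panned left and no center or surround weighting passes straight through to the left front output.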
At 304, the left rear and right rear signals are correlated, such as to eliminate time offsets between channels of data that are irrelevant to human listeners. In one exemplary embodiment, such as for 5.1 sound, there can be a single left rear signal and a single right rear signal, but other suitable numbers of channels can be used. After correlation, rear correlated left rear channel data and rear correlated right rear channel data are generated. The method then proceeds to 306.
At 306, the front correlated left front channel data and the rear correlated left rear channel data are received and correlated to eliminate time offsets that are irrelevant to human listeners, such as offsets below seven milliseconds. In this exemplary embodiment, using 5.1 sound, front-rear correlated left front channel data and rear-front correlated left rear channel data are output, but other suitable combinations of channel data can also or alternatively be generated. The method then proceeds to 308.
At 308, the front correlated right front channel data and the rear correlated right rear channel data are received and correlated to eliminate time offsets that are irrelevant to human listeners, such as offsets below seven milliseconds. In this exemplary embodiment, using 5.1 sound, front-rear correlated right front channel data and rear-front correlated right rear channel data are output, but other suitable combinations of channel data can also or alternatively be generated. The method then proceeds to 310.
At 310, the center channel is multiplied by a suitable factor, such as to generate the root mean square value of the center channel data for broadcast through two stereo channels; other multiplication factors can also be used. The method then proceeds to 312 where a Hilbert transform is performed on the rear-front correlated left rear channel data. Likewise, other suitable transforms can be performed on other suitable channels of data. The method then proceeds to 314.
At 314, a Hilbert transform is performed on the rear-front correlated right rear channel data. Likewise, other suitable transforms can be performed on other suitable channels of data. The method then proceeds to 316. At 316, the front-rear correlated left front channel data and Hilbert-transformed rear-front correlated left rear channel data are summed with the amplitude-adjusted center channel data to generate stereo left channel output. The method then proceeds to 318, where the Hilbert-transformed rear-front correlated right rear channel data is subtracted from the front-rear correlated right front channel data and the amplitude-adjusted center channel data to generate stereo right channel output.
In operation, method 300 can be used to transform N-channel sound, such as 5.1 sound, into 2-channel stereo sound, by using the known, predetermined spatial relationships of the N-channel sound to process the sound for transmission over left and right channel stereo speakers. Likewise, other suitable processes can be used, such as to convert from N-channel sound to M-channel sound, where N is an integer greater than 2 and M is an integer greater than or equal to 1.
Method 400 begins at 402 where a time to frequency transform, such as a fast Fourier transform, is performed on a suitable sample, such as a 23.5 millisecond sample. The method then proceeds to 404, where a first frequency sub-band or bin is selected. The method then proceeds to 406 where the left-right magnitude difference is determined. The method then proceeds to 408 where the percentage of energy in the left and right channels is determined, such as by dividing the left channel energy by the sum of the left and right channel energy to generate the magnitude ratio value or M.R.V. The method then proceeds to 410.
At 410, the left and right magnitude values are selected as a function of the M.R.V. for the given frequency bin. The method then proceeds to 412 where the center magnitude values are selected as a function of the M.R.V. The method then proceeds to 414 where the left and right surround values are selected as a function of the M.R.V. The method then proceeds to 416.
At 416, the left-right phase difference or P.D. is determined. The P.D. is then used at 418, 420 and 422 to determine the front phase value, left-right phase value, and the rear phase value as a function of the P.D. The method then proceeds to 424 where the left and right magnitude values for the selected frequency bin are multiplied by the left-right phase value for the frequency bin to generate a first output. The method then proceeds to 426 where the center magnitude value for the frequency bin is multiplied by the front phase function value for the frequency bin to generate a second output. The method then proceeds to 428 where the left and right surround magnitude values for the selected frequency bin are multiplied times the rear phase value for the selected frequency bin to generate a third output. The method then proceeds to 430.
At 430, the first and second output are added, and are then multiplied times the stereo left channel and stereo right channel values for the corresponding frequency bins to generate the left front channel output and the right front channel output for a 5.1 sound system. The method then proceeds to 432 where the center magnitude function for the frequency bin is multiplied times the left-right phase value for the frequency bin, which is then multiplied by the stereo left channel summed with the stereo right channel, to generate the center channel output for 5.1 sound. The method then proceeds to 434, where the left and right rear 5.1 sound outputs are generated for the selected frequency bin by subtracting the stereo right channel from the stereo left channel and multiplying the resultant value for the selected frequency bin times the third output value for the selected frequency bin. The method then proceeds to 436.
At 436, it is determined whether all sub-bands or bins for a time sample have been processed. If not, the method proceeds to 438 where the next sub-band or frequency bin is selected, the calculated values are stored, and the method returns to 406. Otherwise, the method proceeds to 440 where the frequency data is integrated over time to generate 5.1 sound.
In operation, method 400 allows stereo sound to be processed to generate N-channel sound, such as 5.1 sound. Extension of system 200 and method 400 to N-channels will be described in relation to the generation of the magnitude and phase functions for a predetermined integer number of input channels and output channels.
Processes 500A through 500D include speaker location diagram 502, showing relative listener location and image width for a speaker, as well as corresponding graphs 504 and 506 for the magnitude function and phase function, respectively. The magnitude function 504 reaches a peak relative to full left pan and full right pan based on the lateral location of the speaker, with a window width at the base determined by the image width rendered by the speaker. Likewise, the window center for the phase function 506 is determined by the depth location of the loudspeaker, with the window width again being determined in relation to the image width rendered by the loudspeaker.
Speaker location diagrams 508, 514 and 520 further exemplify the magnitude functions 510, 516, and 522, and phase functions 512, 518 and 524, respectively, for various locations of a speaker relative to a listener and the relative image width to be rendered by that speaker. Thus, in the previous exemplary embodiments of
Although exemplary embodiments of a system and method of the present invention have been described in detail herein, those skilled in the art will also recognize that various substitutions and modifications can be made to the systems and methods without departing from the scope and spirit of the appended claims.
Number | Date | Country
---|---|---
60535843 | Jan 2004 | US