The present invention relates to multi-channel audio processing and, in particular, to multi-channel encoding and synthesizing using parametric side information.
In recent times, multi-channel audio reproduction techniques have become more and more popular. This may be due to the fact that audio compression/encoding techniques such as the well-known MPEG-1 Layer 3 (also known as mp3) technique have made it possible to distribute audio content via the Internet or other transmission channels having a limited bandwidth.
A further reason for this popularity is the increased availability of multi-channel content and the increased penetration of multi-channel playback devices in the home environment.
The mp3 coding technique has become so popular because it allows distribution of all records in a stereo format, i.e., a digital representation of the audio record including a first or left stereo channel and a second or right stereo channel. Furthermore, the mp3 technique created new possibilities for audio distribution given the available storage and transmission bandwidths.
Nevertheless, there are basic shortcomings of conventional two-channel sound systems: they provide only limited spatial imaging, due to the fact that only two loudspeakers are used. Therefore, surround techniques have been developed. A recommended multi-channel surround representation includes, in addition to the two stereo channels L and R, an additional center channel C, two surround channels Ls and Rs, and optionally a low frequency enhancement channel or sub-woofer channel. This reference sound format is also referred to as three/two-stereo (or, including the low frequency enhancement channel, 5.1 format), which means three front channels and two surround channels. Generally, five transmission channels are required. In a playback environment, at least five loudspeakers at the respective five different places are needed to get an optimum sweet spot at a certain distance from the five well-placed loudspeakers.
Several techniques are known in the art for reducing the amount of data required for transmission of a multi-channel audio signal. Such techniques are called joint stereo techniques. To this end, reference is made to
Normally, the carrier channel will include subband samples, spectral coefficients, time domain samples, etc., which provide a comparatively fine representation of the underlying signal, while the parametric data do not include such samples or spectral coefficients but include control parameters for controlling a certain reconstruction algorithm, such as weighting by multiplication, time shifting, frequency shifting or phase shifting. The parametric data, therefore, include only a comparatively coarse representation of the signal of the associated channel. Stated in numbers, the amount of data required by a carrier channel encoded using a conventional lossy audio coder will be in the range of 60-70 kbit/s, while the amount of data required by parametric side information for one channel will be in the range of 1.5-2.5 kbit/s. Examples of parametric data are the well-known scale factors, intensity stereo information or binaural cue parameters, as will be described below.
Intensity stereo coding is described in AES preprint 3799, “Intensity Stereo Coding”, J. Herre, K. H. Brandenburg, D. Lederer, 96th AES Convention, February 1994, Amsterdam. Generally, the concept of intensity stereo is based on a main axis transform applied to the data of both stereophonic audio channels. If most of the data points are concentrated around the first principal axis, a coding gain can be achieved by rotating both signals by a certain angle prior to coding and excluding the second orthogonal component from transmission in the bit stream. The reconstructed signals for the left and right channels consist of differently weighted or scaled versions of the same transmitted signal. Hence, the reconstructed signals differ in their amplitude but are identical regarding their phase information. The energy-time envelopes of both original audio channels, however, are preserved by means of the selective scaling operation, which typically operates in a frequency-selective manner. This conforms to the human perception of sound at high frequencies, where the dominant spatial cues are determined by the energy envelopes.
Additionally, in practical implementations, the transmitted signal, i.e., the carrier channel, is generated from the sum signal of the left channel and the right channel instead of rotating both components. Furthermore, this processing, i.e., generating intensity stereo parameters for performing the scaling operation, is performed in a frequency-selective manner, i.e., independently for each scale factor band, i.e., encoder frequency partition. Preferably, both channels are combined to form a combined or “carrier” channel, and, in addition to the combined channel, the intensity stereo information is determined, which depends on the energy of the first channel, the energy of the second channel or the energy of the combined channel.
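The frequency-selective combination and scaling described above may be sketched as follows. This is only an illustrative sketch under simplifying assumptions: the band boundaries, the energy-fraction parameter and all function names are hypothetical and do not reproduce any particular standardized implementation.

```python
import numpy as np

def intensity_stereo_encode(left, right, band_edges):
    """Downmix to a carrier channel plus per-band intensity information (sketch)."""
    carrier = left + right
    factors = []
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        e_l = np.sum(left[lo:hi] ** 2)
        e_r = np.sum(right[lo:hi] ** 2)
        # Per-band fraction of the channel energy located in the left channel.
        factors.append(e_l / (e_l + e_r + 1e-12))
    return carrier, factors

def intensity_stereo_decode(carrier, factors, band_edges):
    """Reconstruct both channels as differently scaled versions of the carrier."""
    left = np.zeros_like(carrier)
    right = np.zeros_like(carrier)
    for (lo, hi), f in zip(zip(band_edges[:-1], band_edges[1:]), factors):
        # Distribute the carrier between the channels per the transmitted fraction.
        left[lo:hi] = np.sqrt(f) * carrier[lo:hi]
        right[lo:hi] = np.sqrt(1.0 - f) * carrier[lo:hi]
    return left, right
```

Consistent with the description above, the two reconstructed channels are scaled copies of one and the same carrier and therefore differ only in amplitude, not in phase.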
The BCC technique is described in AES convention paper 5574, “Binaural cue coding applied to stereo and multi-channel audio compression”, C. Faller, F. Baumgarte, May 2002, Munich. In BCC encoding, a number of audio input channels are converted to a spectral representation using a DFT-based transform with overlapping windows. The resulting uniform spectrum is divided into non-overlapping partitions, each having an index. Each partition has a bandwidth proportional to the equivalent rectangular bandwidth (ERB). The inter-channel level differences (ICLD) and the inter-channel time differences (ICTD) are estimated for each partition and each frame k. The ICLD and ICTD values are quantized and coded, resulting in a BCC bit stream. The inter-channel level differences and inter-channel time differences are given for each channel relative to a reference channel. Then, the parameters are calculated in accordance with prescribed formulae, which depend on the certain partitions of the signal to be processed.
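The per-partition estimation just described may be illustrated by the following sketch, operating on one frame of two time-domain channels. The partition boundaries and the broadband cross-correlation time-difference estimate are simplifying assumptions chosen for illustration; the cited convention paper defines the exact ERB-based partitioning and estimation formulae.

```python
import numpy as np

def icld_ictd(ref, chan, partitions):
    """Estimate per-partition ICLD (dB) and a broadband ICTD (samples), sketch."""
    # ICLD per spectral partition, relative to the reference channel.
    R = np.fft.rfft(ref)
    C = np.fft.rfft(chan)
    iclds = []
    for lo, hi in zip(partitions[:-1], partitions[1:]):
        e_ref = np.sum(np.abs(R[lo:hi]) ** 2) + 1e-12
        e_ch = np.sum(np.abs(C[lo:hi]) ** 2) + 1e-12
        iclds.append(10.0 * np.log10(e_ch / e_ref))
    # ICTD as the lag maximizing the cross-correlation with the reference.
    corr = np.correlate(chan, ref, mode="full")
    ictd = int(np.argmax(corr)) - (len(ref) - 1)
    return iclds, ictd
```

A channel at half the reference amplitude yields an ICLD of about -6 dB in every partition, and a channel delayed by d samples yields an ICTD of d.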
At a decoder-side, the decoder receives a mono signal and the BCC bit stream. The mono signal is transformed into the frequency domain and input into a spatial synthesis block, which also receives decoded ICLD and ICTD values. In the spatial synthesis block, the BCC parameters (ICLD and ICTD) values are used to perform a weighting operation of the mono signal in order to synthesize the multi-channel signals, which, after a frequency/time conversion, represent a reconstruction of the original multi-channel audio signal.
In case of BCC, the joint stereo module 60 is operative to output the channel side information such that the parametric channel data are quantized and encoded ICLD or ICTD parameters, wherein one of the original channels is used as the reference channel for coding the channel side information.
Typically, in the most simple embodiment, the carrier channel is formed of the sum of the participating original channels.
Naturally, the above techniques only provide a mono representation for a decoder that can only process the carrier channel and is not able to process the parametric data for generating one or more approximations of more than one input channel.
The audio coding technique known as binaural cue coding (BCC) is also well described in the United States patent application publications US 2003/0219130 A1, US 2003/0026441 A1 and US 2003/0035553 A1. Additional reference is also made to “Binaural Cue Coding. Part II: Schemes and Applications”, C. Faller and F. Baumgarte, IEEE Trans. on Speech and Audio Proc., Vol. 11, No. 6, November 2003. The cited United States patent application publications and the two cited technical publications on the BCC technique authored by Faller and Baumgarte are incorporated herein by reference in their entireties.
Significant improvements of binaural cue coding schemes that make parametric schemes applicable to a much wider bit-rate range are known as ‘parametric stereo’ (PS), such as standardized in MPEG-4 high-efficiency AAC v2. One of the important extensions of parametric stereo is the inclusion of a spatial ‘diffuseness’ parameter. This percept is captured in the mathematical property of inter-channel correlation or inter-channel coherence (ICC). The analysis, perceptual quantization, transmission and synthesis processes of PS parameters are described in detail in “Parametric coding of stereo audio”, J. Breebaart, S. van de Par, A. Kohlrausch and E. Schuijers, EURASIP J. Appl. Sign. Proc. 2005:9, 1305-1322. Further reference is made to J. Breebaart, S. van de Par, A. Kohlrausch, E. Schuijers, “High-Quality Parametric Spatial Audio Coding at Low Bitrates”, AES 116th Convention, Berlin, Preprint 6072, May 2004, and E. Schuijers, J. Breebaart, H. Purnhagen, J. Engdegard, “Low Complexity Parametric Stereo Coding”, AES 116th Convention, Berlin, Preprint 6073, May 2004.
In the following, a typical generic BCC scheme for multi-channel audio coding is elaborated in more detail with reference to
In the following, the internal construction of the BCC synthesis block 122 is explained with reference to
The BCC synthesis block 122 further comprises a delay stage 126, a level modification stage 127, a correlation processing stage 128 and an inverse filter bank stage IFB 129. At the output of stage 129, the reconstructed multi-channel audio signal having for example five channels in case of a 5-channel surround system, can be output to a set of loudspeakers 124 as illustrated in
As shown in
The same is true for the multiplication parameters a1, a2, . . . , ai, . . . , aN, which are also calculated by the side information processing block 123 based on the inter-channel level differences as calculated by the BCC analysis block 116.
The ICC parameters calculated by the BCC analysis block 116 are used for controlling the functionality of block 128 such that certain correlations between the delayed and level-manipulated signals are obtained at the outputs of block 128. It is to be noted here that the ordering of the stages 126, 127, 128 may be different from the case shown in
It is to be noted here that, in a frame-wise processing of an audio signal, the BCC analysis is performed frame-wise, i.e., time-varying, and also frequency-wise. This means that, for each spectral band, the BCC parameters are obtained. Thus, in case the audio filter bank 125 decomposes the input signal into, for example, 32 bandpass signals, the BCC analysis block obtains a set of BCC parameters for each of the 32 bands. Naturally, the BCC synthesis block 122 from
In the following, reference is made to
ICC parameters can be defined in different ways. Most generally, one could estimate ICC parameters in the encoder between all possible channel pairs as indicated in
Regarding the calculation of, for example, the multiplication parameters a1, . . . , aN based on transmitted ICLD parameters, reference is made to AES convention paper 5574 cited above. The ICLD parameters represent an energy distribution in an original multi-channel signal. Without loss of generality, it is shown in
Naturally, there are other methods for calculating the multiplication factors, which do not rely on the 2-stage process but which only need a 1-stage process. A 1-stage method is described in AES preprint “The reference model architecture for MPEG spatial audio coding”, J. Herre et al., 2005, Barcelona.
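A hedged sketch of such a 2-stage derivation of multiplication factors is given below: stage 1 converts the transmitted ICLD values (given relative to a reference channel) into relative amplitudes, and stage 2 rescales them. The normalization to unit total energy is an illustrative assumption, not the exact formula of the cited papers.

```python
import numpy as np

def multiplication_factors(iclds_db):
    """Derive N channel weights from N-1 ICLDs given in dB relative to channel 1.

    Stage 1: relative amplitudes from the transmitted level differences.
    Stage 2: rescale so that the squared weights sum to one, i.e., the
    synthesized channels together carry the energy of the carrier channel
    (illustrative normalization choice).
    """
    rel = np.concatenate(([1.0], 10.0 ** (np.asarray(iclds_db) / 20.0)))
    norm = np.sqrt(np.sum(rel ** 2))
    return rel / norm
```

For equal level differences of 0 dB all channels receive identical weights, and an ICLD of about -6 dB halves the amplitude of the corresponding channel relative to the reference.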
Regarding the delay parameters, it is to be noted that the delay parameters ICTD, which are transmitted from a BCC encoder, can be used directly when the delay parameter d1 for the left front channel is set to zero. No rescaling has to be done here, since a delay does not alter the energy of the signal.
Regarding the inter-channel coherence measure ICC transmitted from the BCC encoder to the BCC decoder, it is to be noted here that a coherence manipulation can be done by modifying the multiplication factors a1, . . . , an, such as by multiplying the weighting factors of all subbands with random numbers whose values, expressed as 20 log10 of the weighting factor, lie between −6 dB and +6 dB. The pseudo-random sequence is preferably chosen such that the variance is approximately constant for all critical bands, and the average is zero within each critical band. The same sequence is applied to the spectral coefficients for each different frame. Thus, the auditory image width is controlled by modifying the variance of the pseudo-random sequence. A larger variance creates a larger image width. The variance modification can be performed in individual bands that are critical-band wide. This enables the simultaneous existence of multiple objects in an auditory scene, each object having a different image width. A suitable amplitude distribution for the pseudo-random sequence is a uniform distribution on a logarithmic scale, as outlined in the US patent application publication 2003/0219130 A1. Nevertheless, all BCC synthesis processing is related to a single input channel transmitted as the sum signal from the BCC encoder to the BCC decoder as shown in
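The pseudo-random weighting described above may be sketched as follows. The uniform distribution on a logarithmic (dB) scale and the zero average within a band follow the description above, while the generator and its seed are arbitrary illustrative choices.

```python
import numpy as np

def decorrelation_weights(num_bins, width_db, seed=0):
    """Per-bin random gain modification, uniform on a dB scale (sketch).

    The variance of the sequence controls the perceived auditory
    image width: a larger variance creates a larger image width.
    """
    rng = np.random.default_rng(seed)
    gains_db = rng.uniform(-width_db, width_db, size=num_bins)
    gains_db -= gains_db.mean()              # zero average within the band
    return 10.0 ** (gains_db / 20.0)         # back to linear weighting factors
```

With the same seed, widening the dB range increases the variance of the sequence and thus the synthesized image width.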
As has been outlined above with respect to
As has been outlined above with respect to
To this end, the encoder-side calculated reconstruction parameters are quantized in accordance with a certain quantization rule. This means that unquantized reconstruction parameters are mapped onto a limited set of quantization levels or quantization indices, as is known in the art and described specifically for parametric coding in detail in “Parametric coding of stereo audio”, J. Breebaart, S. van de Par, A. Kohlrausch and E. Schuijers, EURASIP J. Appl. Sign. Proc. 2005:9, 1305-1322, and in C. Faller and F. Baumgarte, “Binaural cue coding applied to audio compression with flexible rendering,” AES 113th Convention, Los Angeles, Preprint 5686, October 2002.
Quantization can have the effect that parameter values smaller than the quantization step size are quantized to zero, depending on whether the quantizer is of the mid-tread or mid-riser type. By mapping a large set of unquantized values to a small set of quantized values, additional data savings are obtained. These data rate savings are further enhanced by entropy-encoding the quantized reconstruction parameters on the encoder-side. Preferred entropy-encoding methods are Huffman methods based on predefined code tables or based on an actual determination of signal statistics and signal-adaptive construction of codebooks. Alternatively, other entropy-encoding tools can be used, such as arithmetic encoding.
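By way of a hypothetical illustration of such entropy-encoding gains, a predefined code table may assign short codes to frequent quantizer indices. The table below is invented for illustration only and is not a code table of any cited standard.

```python
# Hypothetical prefix-free code table: frequent small quantizer
# indices receive short codes, rare large indices receive long codes.
code_table = {0: "0", 1: "10", -1: "110", 2: "1110", -2: "1111"}

def entropy_encode(indices):
    """Concatenate variable-length codes for a sequence of quantizer indices."""
    return "".join(code_table[i] for i in indices)

bits = entropy_encode([0, 0, 1, 0, -1])   # 1 + 1 + 2 + 1 + 3 = 8 bits
fixed = 3 * 5                              # 3-bit fixed-length coding: 15 bits
```

For the typical case of many near-zero indices, the variable-length representation clearly undercuts fixed-length coding.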
Generally, one has the rule that the data rate required for the reconstruction parameters decreases with increasing quantizer step size. Differently stated, a coarser quantization results in a lower data rate, and a finer quantization results in a higher data rate.
Since parametric signal representations are normally required for low data rate environments, one tries to quantize the reconstruction parameters as coarsely as possible to obtain a signal representation having a certain amount of data in the base channel, and also having a reasonably small amount of data for the side information, which includes the quantized and entropy-encoded reconstruction parameters.
Prior art methods, therefore, derive the reconstruction parameters to be transmitted directly from the multi-channel signal to be encoded. A coarse quantization as discussed above results in reconstruction parameter distortions, i.e., in large rounding errors when the quantized reconstruction parameter is inversely quantized in a decoder and used for multi-channel synthesis. Naturally, the rounding error increases with the quantizer step size, i.e., with the selected “quantizer coarseness”. Such rounding errors may result in a quantization level change, i.e., in a change from a first quantization level at a first time instant to a second quantization level at a later time instant, wherein the difference between the two quantizer levels is defined by the quite large quantizer step size that is preferable for a coarse quantization. Unfortunately, such a quantizer level change amounting to the full quantizer step size can be triggered by only a small change in the parameter, when the unquantized parameter lies in the middle between two quantization levels. It is clear that such quantizer index changes in the side information result in equally strong changes in the signal synthesis stage. When, as an example, the inter-channel level difference is considered, it becomes clear that a large change results in a large decrease of the loudness of a certain loudspeaker signal and an accompanying large increase of the loudness of the signal for another loudspeaker. This situation, which is triggered by a single quantization level change for a coarse quantization, can be perceived as an immediate relocation of a sound source from a (virtual) first place to a (virtual) second place. Such an immediate relocation from one time instant to another sounds unnatural, i.e., is perceived as a modulation effect, since sound sources of, in particular, tonal signals do not change their location very fast.
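The triggering of a full-step jump by a small parameter change can be illustrated numerically. The 6 dB step size and the mid-tread rounding are illustrative assumptions.

```python
def quantize(value, step):
    """Mid-tread uniform quantizer: returns index and requantized value."""
    index = round(value / step)
    return index, index * step

# A coarse 6 dB step: two nearly identical level differences that
# straddle the decision threshold land on quantizer levels 6 dB apart.
step = 6.0
_, before = quantize(2.99, step)   # just below the 3 dB decision threshold
_, after = quantize(3.01, step)    # just above it
jump = after - before              # the full step size, despite a 0.02 dB change
```

A change of only 0.02 dB in the unquantized parameter thus produces a 6 dB change at the decoder, which for an inter-channel level difference is perceived as an immediate relocation of the sound source.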
Generally, transmission errors may also result in large changes of quantizer indices, which immediately result in large changes in the multi-channel output signal; this is even more true in situations in which a coarse quantizer has been adopted for data rate reasons.
State-of-the-art techniques for the parametric coding of two (“stereo”) or more (“multi-channel”) audio input channels derive the spatial parameters directly from the input signals. Examples of such parameters are, as outlined above, inter-channel level differences (ICLD) or inter-channel intensity differences (IID), inter-channel time delays (ICTD) or inter-channel phase differences (IPD), and inter-channel correlation/coherence (ICC), each of which is transmitted in a time- and frequency-selective fashion, i.e., per frequency band and as a function of time. For the transmission of such parameters to the decoder, a coarse quantization of these parameters is desirable to keep the side information rate at a minimum. As a consequence, considerable rounding errors occur when comparing the transmitted parameter values to their original values. This means that even a soft and gradual change of one parameter in the original signal may lead to an abrupt change in the parameter value used in the decoder if the decision threshold from one quantized parameter value to the next is exceeded. Since these parameter values are used for the synthesis of the output signal, abrupt changes in parameter values may also cause “jumps” in the output signal, which are perceived as annoying “switching” or “modulation” artifacts for certain types of signals (depending on the temporal granularity and quantization resolution of the parameters).
The U.S. patent application Ser. No. 10/883,538 describes a process for post processing transmitted parameter values in the context of BCC-type methods in order to avoid artifacts for certain types of signals when representing parameters at low resolution. These discontinuities in the synthesis process lead to artifacts for tonal signals. Therefore, the US patent application proposes to use a tonality detector in the decoder, which is used to analyze the transmitted down-mix signal. When the signal is found to be tonal, then a smoothing operation over time is performed on the transmitted parameters. Consequently, this type of processing represents a means for efficient transmission of parameters for tonal signals.
There are, however, classes of input signals other than tonal input signals, which are equally sensitive to a coarse quantization of spatial parameters.
One example of such a case is a point source that moves slowly between two positions (e.g., a noise signal panned very slowly to move between the center and left front speakers). A coarse quantization of level parameters will lead to perceptible “jumps” (discontinuities) in the spatial position and trajectory of the sound source. Since these signals are generally not detected as tonal in the decoder, prior-art smoothing will obviously not help in this case.
Other examples are rapidly moving point sources carrying tonal material, such as fast-moving sinusoids. Prior-art smoothing will detect these components as tonal and thus invoke a smoothing operation. However, as the speed of movement is not known to the prior-art smoothing algorithm, the applied smoothing time constant will generally be inappropriate and may, for example, reproduce a moving point source with a much too slow speed of movement and a significant lag of the reproduced spatial position as compared to the originally intended position.
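The lag introduced by an inappropriate smoothing time constant can be illustrated with a simple first-order recursive smoother applied to a per-frame level parameter. The frame values and smoothing coefficients are invented for illustration only.

```python
def smooth(params, alpha):
    """First-order recursive smoother: y[k] = alpha*x[k] + (1-alpha)*y[k-1]."""
    out, y = [], params[0]
    for x in params:
        y = alpha * x + (1.0 - alpha) * y
        out.append(y)
    return out

# A fast pan from 0 dB to 12 dB within a few frames: a slow smoother
# (small alpha) reproduces the movement with a significant lag, while a
# smoothing constant matched to the movement (large alpha) follows it.
pan = [0.0] * 4 + [12.0] * 8
slow = smooth(pan, 0.1)
fast = smooth(pan, 0.9)
```

After eight frames at the target position, the slow smoother has covered only about half the 12 dB movement, i.e., the reproduced position lags far behind the intended one.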
It is the object of the present invention to provide an improved audio signal processing concept allowing a low data rate on the one hand and a good subjective quality on the other hand.
In accordance with a first aspect of the present invention, this object is achieved by an apparatus for generating a multi-channel synthesizer control signal, comprising: a signal analyzer for analyzing a multi-channel input signal; a smoothing information calculator for determining smoothing control information in response to the signal analyzer, the smoothing information calculator being operative to determine the smoothing control information such that, in response to the smoothing control information, a synthesizer-side post-processor generates a post-processed reconstruction parameter or a post-processed quantity derived from the reconstruction parameter for a time portion of an input signal to be processed; and a data generator for generating a control signal representing the smoothing control information as the multi-channel synthesizer control signal.
In accordance with a second aspect of the present invention, this object is achieved by a multi-channel synthesizer for generating an output signal from an input signal, the input signal having at least one input channel and a sequence of quantized reconstruction parameters, the quantized reconstruction parameters being quantized in accordance with a quantization rule, and being associated with subsequent time portions of the input signal, the output signal having a number of synthesized output channels, and the number of synthesized output channels being greater than one or greater than the number of input channels, the input channel having a multi-channel synthesizer control signal representing smoothing control information, the smoothing control information depending on an encoder-side signal analysis, the smoothing control information being determined such that a synthesizer-side post-processor generates, in response to the synthesizer control signal, a post-processed reconstruction parameter or a post-processed quantity derived from the reconstruction parameter, comprising: a control signal provider for providing the control signal having the smoothing control information; a post-processor for determining, in response to the control signal, the post-processed reconstruction parameter or the post-processed quantity derived from the reconstruction parameter for a time portion of the input signal to be processed, wherein the post-processor is operative to determine the post-processed reconstruction parameter or the post-processed quantity such that the value of the post-processed reconstruction parameter or the post-processed quantity is different from a value obtainable using requantization in accordance with the quantization rule; and a multi-channel reconstructor for reconstructing a time portion of the number of synthesized output channels using the time portion of the input channel and the post-processed reconstruction parameter or the post-processed value.
Further aspects of the present invention relate to a method of generating a multi-channel synthesizer control signal, a method of generating an output signal from an input signal, corresponding computer programs, or a multi-channel synthesizer control signal.
The present invention is based on the finding that an encoder-side directed smoothing of reconstruction parameters will result in an improved audio quality of the synthesized multi-channel output signal. This substantial improvement of the audio quality can be obtained by an additional encoder-side processing to determine the smoothing control information, which can, in preferred embodiments of the present invention, be transmitted to the decoder, which transmission only requires a limited (small) number of bits.
On the decoder-side, the smoothing control information is used to control the smoothing operation. This encoder-guided parameter smoothing on the decoder-side can be used instead of the decoder-side parameter smoothing, which is based on for example tonality/transient detection, or can be used in combination with the decoder-side parameter smoothing. Which method is applied for a certain time portion and a certain frequency band of the transmitted down-mix signal can also be signaled using the smoothing control information as determined by a signal analyzer on the encoder-side.
To summarize, the present invention is advantageous in that an encoder-side controlled adaptive smoothing of reconstruction parameters is performed within a multi-channel synthesizer, which results in a substantial increase of audio quality on the one hand and requires only a small number of additional bits on the other hand. Due to the fact that the inherent quality deterioration of quantization is mitigated using the additional smoothing control information, the inventive concepts can even be applied without any increase, and even with a decrease, of transmitted bits, since the bits for the smoothing control information can be saved by applying an even coarser quantization, so that fewer bits are required for encoding the quantized values. Thus, the smoothing control information together with the encoded quantized values can even require the same or a lower bit rate than quantized values without smoothing control information, as outlined in the non-prepublished US patent application, while keeping the same or a higher level of subjective audio quality.
Generally, the post processing for quantized reconstruction parameters used in a multi-channel synthesizer is operative to reduce or even eliminate problems associated with coarse quantization on the one hand and quantization level changes on the other hand.
While, in prior art systems, a small parameter change in an encoder may result in a strong parameter change at the decoder, since a requantization in the synthesizer is only admissible for the limited set of quantized values, the inventive device performs a post processing of reconstruction parameters so that the post-processed reconstruction parameter for a time portion to be processed of the input signal is not determined by the encoder-adopted quantization raster, but has a value different from any value obtainable by quantization in accordance with the quantization rule.
While, in a linear quantizer case, the prior art method only allows inversely quantized values being integer multiples of the quantizer step size, the inventive post processing allows inversely quantized values to be non-integer multiples of the quantizer step size. This means that the inventive post processing alleviates the quantizer step size limitation, since also post processed reconstruction parameters lying between two adjacent quantizer levels can be obtained by post processing and used by the inventive multi-channel reconstructor, which makes use of the post processed reconstruction parameter.
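An illustrative sketch of how a smoothing post processing after requantization yields values lying between allowed requantization levels, i.e., non-integer multiples of the quantizer step size, is given below. The 6 dB step size and the smoothing coefficient are illustrative assumptions.

```python
def requantize(indices, step):
    """Straight-forward inverse quantizer: integer multiples of the step."""
    return [i * step for i in indices]

def post_process(values, alpha=0.5):
    """Smoothing after requantization: outputs may fall between levels."""
    out, y = [], values[0]
    for v in values:
        y = alpha * v + (1.0 - alpha) * y
        out.append(y)
    return out

step = 6.0
levels = requantize([0, 0, 1, 1], step)   # 0.0, 0.0, 6.0, 6.0
smoothed = post_process(levels)           # 0.0, 0.0, 3.0, 4.5
```

The values 3.0 and 4.5 lie between the two adjacent requantization levels 0.0 and 6.0, which no inverse quantizer restricted to integer multiples of the step size could produce.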
This post processing can be performed before or after requantization in a multi-channel synthesizer. When the post processing is performed with the quantized parameters, i.e., with the quantizer indices, an inverse quantizer is needed, which can inversely quantize not only to quantizer step multiples, but which can also inversely quantize to inversely quantized values between multiples of the quantizer step size.
In case the post processing is performed using inversely quantized reconstruction parameters, a straight-forward inverse quantizer can be used, and an interpolation/filtering/smoothing is performed with the inversely quantized values.
In case of a non-linear quantization rule, such as a logarithmic quantization rule, a post processing of the quantized reconstruction parameters before requantization is preferred, since the logarithmic quantization is similar to the human ear's perception of sound, which is more accurate for low-level sound and less accurate for high-level sound, i.e., makes a kind of a logarithmic compression.
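A logarithmic quantization rule may be sketched as follows; the 1.5 dB step size is an illustrative assumption. The absolute reconstruction error grows with the signal level, mirroring the ear's lower accuracy for high-level sound.

```python
import math

def log_quantize(amplitude, step_db=1.5):
    """Quantize an amplitude ratio on a logarithmic (dB) grid (sketch)."""
    db = 20.0 * math.log10(amplitude)
    return round(db / step_db)

def log_dequantize(index, step_db=1.5):
    """Inverse quantization back to a linear amplitude ratio."""
    return 10.0 ** (index * step_db / 20.0)

# Absolute quantization error grows with level: fine resolution for
# low-level sound, coarse resolution for high-level sound.
err_small = abs(log_dequantize(log_quantize(0.1)) - 0.1)
err_large = abs(log_dequantize(log_quantize(10.0)) - 10.0)
```

The relative error is the same in both cases, but the absolute error at the high level is roughly a hundred times larger, which is the logarithmic-compression behavior referred to above.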
It is to be noted here that the inventive merits are not only obtained by modifying the reconstruction parameter itself that is included in the bit stream as the quantized parameter. The advantages can also be obtained by deriving a post processed quantity from the reconstruction parameter. This is especially useful, when the reconstruction parameter is a difference parameter and a manipulation such as smoothing is performed on an absolute parameter derived from the difference parameter.
In a preferred embodiment of the present invention, the post processing for the reconstruction parameters is controlled by means of a signal analyzer, which analyzes the signal portion associated with a reconstruction parameter to find out which signal characteristic is present. In a preferred embodiment, the decoder-controlled post processing is activated only for tonal portions of the signal (with respect to frequency and/or time) or, when the tonal portions are generated by a point source, only for slowly moving point sources, while the post processing is deactivated for non-tonal portions, i.e., transient portions of the input signal or rapidly moving point sources having tonal material. This makes sure that the full dynamic of reconstruction parameter changes is transmitted for transient sections of the audio signal, while this is not the case for tonal portions of the signal.
Preferably, the post processor performs a modification in the form of a smoothing of the reconstruction parameters, where this makes sense from a psycho-acoustic point of view, without affecting important spatial detection cues, which are of special importance for non-tonal, i.e., transient signal portions.
The present invention results in a low data rate, since the encoder-side quantization of reconstruction parameters can be a coarse quantization: the system designer does not have to fear significant changes in the decoder caused by a change of a reconstruction parameter from one inversely quantized level to another, since such a change is reduced by the inventive processing by mapping to a value between two requantization levels.
Another advantage of the present invention is that the quality of the system is improved, since audible artefacts caused by a change from one requantization level to the next allowed requantization level are reduced by the inventive post processing, which is operative to map to a value between two allowed requantization levels.
Naturally, the inventive post processing of quantized reconstruction parameters represents a further information loss, in addition to the information loss caused by parameterization in the encoder and subsequent quantization of the reconstruction parameter. This, however, is not a problem, since the inventive post processor preferably uses the actual or preceding quantized reconstruction parameters for determining a post-processed reconstruction parameter to be used for reconstruction of the actual time portion of the input signal, i.e., the base channel. It has been shown that this results in an improved subjective quality, since encoder-induced errors can be compensated to a certain degree. Even when encoder-side induced errors are not compensated by the post processing of the reconstruction parameters, strong changes of the spatial perception in the reconstructed multi-channel audio signal are reduced, preferably only for tonal signal portions, so that the subjective listening quality is improved in any case, irrespective of whether this results in a further information loss or not.
Other features which are considered as characteristic for the invention are set forth in the appended claims.
Although the invention is illustrated and described herein as embodied in an apparatus and a method for generating a multi-channel synthesizer control signal, a multi-channel synthesizer, a method of generating an output signal from an input signal and a machine-readable storage medium, it is nevertheless not intended to be limited to the details shown, since various modifications and structural changes may be made therein without departing from the spirit of the invention and within the scope and range of equivalents of the claims.
The construction and method of operation of the invention, however, together with additional objects and advantages thereof will be best understood from the following description of specific embodiments when read in connection with the accompanying drawings.
a is a schematic diagram of an encoder-side device and the corresponding decoder-side device in accordance with the first embodiment of the present invention;
b is a schematic diagram of an encoder-side device and the corresponding decoder-side device in accordance with a further preferred embodiment of the present invention;
c is a schematic block diagram of a preferred control signal generator;
a is a schematic representation for determining the spatial position of a sound source;
b is a flow chart of a preferred embodiment for calculating a smoothing time constant as an example for smoothing information;
a is an alternative embodiment for calculating quantized inter-channel intensity differences and corresponding smoothing parameters;
b is an exemplary diagram illustrating the difference between a measured IID parameter per frame and a quantized IID parameter per frame and a processed quantized IID parameter per frame for various time constants;
c is a flow chart of a preferred embodiment of the concept as applied in
a is a schematic representation illustrating a decoder-side directed system;
b is a schematic diagram of a post processor/signal analyzer combination to be used in the inventive multi-channel synthesizer of
c is a schematic representation of time portions of the input signal and associated quantized reconstruction parameters for past signal portions, actual signal portions to be processed and future signal portions;
a is another embodiment of the encoder guided parameter smoothing device shown in
b is another preferred embodiment of the encoder guided parameter smoothing device;
a is another embodiment of the encoder guided parameter smoothing device shown in
b is a schematic indication of the parameters to be post processed in accordance with the invention showing that also a quantity derived from the reconstruction parameter can be smoothed;
a is an exemplary time course of quantized reconstruction parameters associated with subsequent input signal portions;
b is a time course of post processed reconstruction parameters, which have been post-processed by the post processor implementing a smoothing (low-pass) function;
1a and 1b show block diagrams of inventive multi-channel encoder/synthesizer scenarios. As will be shown later with respect to
In the BCC case, the number of input channels will be 1 or generally not more than 2, while the number of output channels will be 5 (left-surround, left, center, right, right surround) or 6 (5 surround channels plus 1 sub-woofer channel) or even more in case of a 7.1 or 9.1 multi-channel format. Generally stated, the number of output sources will be higher than the number of input sources.
a illustrates, on the left side, an apparatus 1 for generating a multi-channel synthesizer control signal. Box 1 titled “Smoothing Parameter Extraction” comprises a signal analyzer, a smoothing information calculator and a data generator. As shown in
Furthermore, the smoothing parameter extraction device 1 in
In particular, the control signal representing the smoothing control information can be a smoothing mask, a smoothing time constant, or any other value controlling a decoder-side smoothing operation so that a reconstructed multi-channel output signal, which is based on smoothed values, has an improved quality compared to reconstructed multi-channel output signals, which are based on non-smoothed values.
The smoothing mask includes the signaling information consisting e.g. of flags that indicate the “on/off” state of each frequency band used for smoothing. Thus, the smoothing mask can be seen as a vector associated with one frame having a bit for each band, wherein this bit controls whether the encoder-guided smoothing is active for this band or not.
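By way of illustration, such a per-frame smoothing mask can be held as one bit per frequency band; the packing below is a hypothetical sketch, not a layout fixed by the invention:

```python
def pack_smoothing_mask(band_flags):
    """Pack per-band on/off smoothing flags into an integer mask.

    band_flags: list of bools, one per frequency band of the current frame.
    Bit i of the result controls whether encoder-guided smoothing is
    active in band i. (Hypothetical packing order.)
    """
    mask = 0
    for i, on in enumerate(band_flags):
        if on:
            mask |= 1 << i
    return mask

def unpack_smoothing_mask(mask, num_bands):
    """Recover the per-band on/off flags from the packed mask."""
    return [bool(mask >> i & 1) for i in range(num_bands)]
```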
A spatial audio encoder as shown in
The down-mixer 3 may be constructed as outlined for item 114 in
Furthermore, the audio encoder 4 is not necessarily required. This device, however, is used, when the data rate of the down-mix signal at the output of element 3 is too high for a transmission of the down-mix signal via the transmission/storage means.
A spatial audio decoder includes an encoder-guided parameter smoothing device 9a, which is coupled to multi-channel up-mixer 12. The input signal for the multi-channel up-mixer 12 is normally the output signal of an audio decoder 8 for decoding the transmitted/stored down-mix signal.
Preferably, the inventive multi-channel synthesizer for generating an output signal from an input signal, the input signal having at least one input channel and a sequence of quantized reconstruction parameters, the quantized reconstruction parameters being quantized in accordance with a quantization rule, and being associated with subsequent time portions of the input signal, the output signal having a number of synthesized output channels, and the number of synthesized output channels being greater than one or greater than a number of input channels, comprises a control signal provider for providing a control signal having the smoothing control information. This control signal provider can be a data stream demultiplexer, when the control information is multiplexed with the parameter information. When, however, the smoothing control information is transmitted from device 1 to device 9a in
Furthermore, the inventive multi-channel synthesizer comprises a post processor 9a, which is also termed an “encoder-guided parameter smoothing device”. The post processor is for determining a post processed reconstruction parameter or a post processed quantity derived from the reconstruction parameter for a time portion of the input signal to be processed, wherein the post processor is operative to determine the post processed reconstruction parameter or the post processed quantity such that a value of the post processed reconstruction parameter or the post processed quantity is different from a value obtainable using requantization in accordance with the quantization rule. The post processed reconstruction parameter or the post processed quantity is forwarded from device 9a to the multi-channel up mixer 12 so that the multi-channel up mixer or multi-channel reconstructor 12 can perform a reconstruction operation for reconstructing a time portion of the number of synthesized output channels using the time portion of the input channel and the post processed reconstruction parameter or the post processed value.
Subsequently, reference is made to the preferred embodiment of the present invention illustrated in
The
The multi-channel reconstructor 12 is used for reconstructing a time portion of each of the number of synthesis output channels using the time portions of the processed input channel and the post processed reconstruction parameter.
In preferred embodiments of the present invention, the quantized reconstruction parameters are quantized BCC parameters such as inter-channel level differences, inter-channel time differences or inter-channel coherence parameters or inter-channel phase differences or inter-channel intensity differences. Naturally, all other reconstruction parameters such as stereo parameters for intensity stereo or parameters for parametric stereo can be processed in accordance with the present invention as well.
The encoder/decoder control flag transmitted via line 5a is operative to control the switch or combine device 9b to forward either decoder-guided smoothing values or encoder-guided smoothing values to the multi-channel up mixer 12.
In the following, reference will be made to
The inventive method successfully handles problematic situations with slowly moving point sources preferably having noise-like properties or rapidly moving point sources having tonal material such as fast moving sinusoids by allowing a more explicit encoder control of the smoothing operation carried out in the decoder.
As outlined before, the preferred way of performing a postprocessing operation within the encoder-guided parameter smoothing device 9a or the decoder-guided parameter smoothing device 10 is a smoothing operation carried out in a frequency-band oriented way.
Furthermore, in order to actively control the post processing in the decoder performed by the encoder-guided parameter smoothing device 9a, the encoder conveys signaling information preferably as part of the side information to the synthesizer/decoder. The multi-channel synthesizer control signal can, however, also be transmitted separately to the decoder without being part of side information of parametric information or down-mix signal information.
In a preferred embodiment, this signaling information consists of flags that indicate the “on/off” state of each frequency band used for smoothing. In order to allow an efficient transmission of this information, a preferred embodiment can also use a set of “short cuts” to signal certain frequently used configurations with very few bits.
To this end, the smoothing information calculator 1b in
Furthermore, the smoothing information calculator 1b may determine that in all frequency bands, an encoder-guided smoothing operation is to be performed. To this end, the data generator 1c generates an “all-on” short cut signal, which signals that smoothing is applied in all frequency bands. This signal can be a certain bit pattern or a flag.
Furthermore, when the signal analyzer 1a determines that the signal did not very much change from one time portion to the next time portion, i.e. from a current time portion to a future time portion, the smoothing information calculator 1b may determine that no change in the encoder-guided parameter smoothing operation has to be performed. Then, the data generator 1c will generate a “repeat last mask” short cut signal, which will signal to the decoder/synthesizer that the same band-wise on/off status shall be used for smoothing as it was employed for the processing of the previous frame.
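A minimal sketch of this short-cut signaling, with hypothetical code names (the description names the short cuts, but not their binary representation):

```python
# Hypothetical short-cut codes; only the short cuts themselves are
# described above, not how they are encoded into bits.
ALL_ON = "all_on"          # smoothing active in every frequency band
REPEAT = "repeat_last"     # reuse the previous frame's band-wise mask
EXPLICIT = "explicit"      # a full per-band flag vector follows

def encode_mask(mask, prev_mask):
    """Choose the cheapest signaling for this frame's smoothing mask.

    mask, prev_mask: lists of bools, one flag per frequency band.
    Returns (code, payload); payload is None for the few-bit short cuts.
    """
    if mask == prev_mask:
        return (REPEAT, None)        # "repeat last mask" short cut
    if all(mask):
        return (ALL_ON, None)        # "all-on" short cut
    return (EXPLICIT, list(mask))    # one bit per band
```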
In a preferred embodiment, the signal analyzer 1a is operative to estimate the speed of movement so that the impact of the decoder smoothing is adapted to the speed of a spatial movement of a point source. As a result of this process, a suitable smoothing time constant is determined by the smoothing information calculator 1b and signaled to the decoder by dedicated side information via data generator 1c. In a preferred embodiment, the data generator 1c generates and transmits an index value to a decoder, which allows the decoder to select between different pre-defined smoothing time constants (such as 125 ms, 250 ms, 500 ms, . . . ). In a further preferred embodiment, only one time constant is transmitted for all frequency bands. This reduces the amount of signaling information for smoothing time constants and is sufficient for the frequently occurring case of one dominant moving point source in the spectrum. An exemplary process of determining a suitable smoothing time constant is described in connection with
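A possible decoder-side interpretation of the transmitted index, assuming a one-pole low-pass smoother of the form s[n] = a·s[n−1] + (1−a)·p[n] (the exact filter form is an assumption; the description only specifies smoothing with a selectable time constant):

```python
import math

# Pre-defined smoothing time constants in seconds (125 ms, 250 ms,
# 500 ms, ...); the decoder selects one via the transmitted index.
TIME_CONSTANTS = [0.125, 0.250, 0.500, 1.000]

def smoothing_coefficient(index, frame_duration):
    """One-pole low-pass coefficient for the signaled time constant.

    With tau = TIME_CONSTANTS[index], the assumed smoother
        s[n] = a * s[n-1] + (1 - a) * p[n]
    uses a = exp(-frame_duration / tau); a larger time constant gives a
    coefficient closer to 1, i.e., stronger smoothing.
    """
    tau = TIME_CONSTANTS[index]
    return math.exp(-frame_duration / tau)
```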
The explicit control of the decoder smoothing process requires a transmission of some additional side information compared to a decoder-guided smoothing method. Since this control may only be necessary for a certain fraction of all input signals with specific properties, both approaches are preferably combined into a single method, which is also called the “hybrid method”. This can be done by transmitting signaling information such as one bit determining whether smoothing is to be carried out based on a tonality/transient estimation in the decoder as performed by device 16 in
Subsequently, preferred embodiments for identifying slowly moving point sources and estimating appropriate time constants to be signaled to a decoder are discussed. Preferably, all estimations are carried out in the encoder and can, thus, access non-quantized versions of signal parameters, which are, of course, not available in the decoder because of the fact that device 2 in
Subsequently, reference is made to
The spatial position of the sound event within a certain frequency band and time frame is calculated as the energy-weighted average of these vectors as outlined in the equation of
As outlined in step 40 of
Then, in step 41, it is determined, whether the source having the spatial positions p1, p2 is slowly moving. When the distance between subsequent spatial positions is below a predetermined threshold, then the source is determined to be a slowly moving source. When, however, it is determined that the displacement is above a certain maximum displacement threshold, then it is determined that the source is not slowly moving, and the process in
Values L, C, R, Ls, and Rs in
In step 42, it is determined, whether the source is a point or a near point source. Preferably, point sources are detected, when the relevant ICC parameters exceed a certain minimum threshold such as 0.85. When it is determined that the ICC parameter is below the predetermined threshold, then the source is not a point source and the process in
In a step 44, the slope of an ICLD curve for subsequent time instances is calculated. Then, in step 45, a smoothing time constant is chosen, which is inversely proportional to the slope of the curve.
Then, in step 45, a smoothing time constant as an example of a smoothing information is output and used in a decoder-side smoothing device, which, as it becomes clear from
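Under assumed speaker direction vectors and thresholds (only the ICC threshold of 0.85 is given above; the displacement threshold and the proportionality constant are hypothetical), the decision flow of steps 40 to 45 may be sketched as follows:

```python
import math

# Hypothetical unit direction vectors for the five channels of a 3/2
# layout (the azimuth angles are assumptions, not fixed by the invention).
SPEAKER_ANGLES = {"L": -30.0, "C": 0.0, "R": 30.0, "Ls": -110.0, "Rs": 110.0}
SPEAKER_VECS = {ch: (math.cos(math.radians(a)), math.sin(math.radians(a)))
                for ch, a in SPEAKER_ANGLES.items()}

def spatial_position(band_energies):
    """Step 40: energy-weighted average of the speaker direction vectors,
    giving the spatial position of the sound event in one band/frame."""
    total = sum(band_energies.values())
    x = sum(e * SPEAKER_VECS[ch][0] for ch, e in band_energies.items()) / total
    y = sum(e * SPEAKER_VECS[ch][1] for ch, e in band_energies.items()) / total
    return (x, y)

def choose_time_constant(energies_1, energies_2, icc, icld_slope,
                         max_displacement=0.1, icc_threshold=0.85):
    """Steps 41 to 45: return a smoothing time constant for the band, or
    None when the band holds no slowly moving point source."""
    p1, p2 = spatial_position(energies_1), spatial_position(energies_2)
    if math.dist(p1, p2) > max_displacement:   # step 41: not slowly moving
        return None
    if icc < icc_threshold:                    # step 42: not a point source
        return None
    # steps 44/45: time constant inversely proportional to the ICLD slope
    return 1.0 / max(abs(icld_slope), 1e-6)
```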
Regarding
When only one common smoothing time constant is signaled for all frequency bands, the individual results for each band can be combined into an overall result e.g. by averaging or energy-weighted averaging. In this case, the decoder applies the same (energy-weighted) averaged smoothing time constant to each band so that only a single smoothing time constant for the whole spectrum needs to be transmitted. When bands are found with a significant deviation from the combined time constant, smoothing may be disabled for these bands using the corresponding “on/off” flags.
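A sketch of this combination step, with a hypothetical deviation criterion for the band-wise “on/off” flags:

```python
def combine_time_constants(taus, energies, max_rel_deviation=0.5):
    """Energy-weighted average of per-band smoothing time constants,
    plus per-band on/off flags disabling bands whose individual time
    constant deviates strongly from the combined value.
    (max_rel_deviation is an assumed criterion.)
    """
    total = sum(energies)
    tau_avg = sum(t * e for t, e in zip(taus, energies)) / total
    flags = [abs(t - tau_avg) <= max_rel_deviation * tau_avg for t in taus]
    return tau_avg, flags
```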
Subsequently, reference is made to
Thus,
Subsequently, reference is made to the flow chart in
In a more elaborate embodiment, which is preferred for advanced devices, this process can also be performed for a set of quantized IID/ICLD parameters selected from the repertoire of possible IID values from the quantizer. In that case, the comparison and selection procedure would comprise a comparison of processed IID and unprocessed IID parameters for various combinations of transmitted (quantized) IID parameters and smoothing time constants. Thus, as outlined by the square brackets in step 47, in contrast to the first embodiment, the second embodiment uses different quantization rules or the same quantization rules but different quantization step sizes to quantize the IID parameters. Then, in step 51, an error is calculated for each quantization way and each time constant. Thus, the number of candidates to be decided in step 52 compared to step 50 of
Then, in step 52, a two-dimensional optimization for (1) error and (2) bit rate is performed to search for a sequence of quantized values and a matching time constant. Finally, in step 53, the sequence of quantized values is entropy-encoded using a Huffman code or an arithmetic code. Step 53 finally results in a bit sequence to be transmitted to a decoder or multi-channel synthesizer.
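An illustrative brute-force version of this joint search (the rate model and the weighting factor are assumptions; a real encoder would use the actual Huffman or arithmetic code lengths of step 53):

```python
import itertools

def smooth(seq, alpha, init):
    """One-pole low-pass over a parameter sequence (assumed smoother form)."""
    out, s = [], init
    for v in seq:
        s = alpha * s + (1 - alpha) * v
        out.append(s)
    return out

def optimize(measured, levels, alphas, lam=0.1):
    """Exhaustive sketch of the joint search of step 52: pick the sequence
    of quantized IID values and the smoothing coefficient minimizing
    squared reconstruction error plus lam times a crude rate estimate."""
    best = None
    for cand in itertools.product(levels, repeat=len(measured)):
        # crude rate model: cost grows with level changes from frame to frame
        bits = sum(1 + abs(a - b) for a, b in zip(cand, (cand[0],) + cand))
        for alpha in alphas:
            sm = smooth(cand, alpha, cand[0])
            err = sum((m - s) ** 2 for m, s in zip(measured, sm))
            cost = err + lam * bits
            if best is None or cost < best[0]:
                best = (cost, cand, alpha)
    return best[1], best[2]
```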
b illustrates the effect of post processing by smoothing. Item 77 illustrates a quantized IID parameter for frame n. Item 78 illustrates a quantized IID parameter for a frame having a frame index n+1. The quantized IID parameter 78 has been derived by a quantization from the measured IID parameter per frame indicated by reference number 79. Smoothing of this parameter sequence of quantized parameters 77 and 78 with different time constants results in smaller post-processed parameter values at 80a and 80b. The time constant for smoothing the parameter sequence 77, 78, which resulted in the post-processed (smoothed) parameter 80a was smaller than the smoothing time constant, which resulted in a post-processed parameter 80b. As known in the art, the smoothing time constant is inversely proportional to the cut-off frequency of a corresponding low-pass filter.
The embodiment illustrated in connection with steps 51 to 53 in
For example, a large difference in (quantized) IID from frame to frame, in combination with a large smoothing time constant effectively results in only a small net effect of the processed IID. The same net effect may be constructed by a small difference in IID parameters, compared with a smaller time constant. This additional degree of freedom enables the encoder to optimize both the reconstructed IID as well as the resulting bit rate simultaneously (given the fact that transmission of a certain IID value can be more expensive than transmission of a certain alternative IID parameter).
As outlined above, the effect of the smoothing on IID trajectories is illustrated in
The examples above are all related to IID parameters. In principle, all described methods can also be applied to IPD, ITD, or ICC parameters.
The present invention, therefore, relates to an encoder-side processing and a decoder-side processing, which form a system using a smoothing enable/disable mask and a time constant signaled via a smoothing control signal. Furthermore, a band-wise signaling per frequency band is performed, wherein, furthermore, short cuts are preferred, which may include an all bands on, an all bands off or a repeat previous status short cut. Furthermore, it is preferred to use one common smoothing time constant for all bands. Furthermore, in addition or alternatively, a signal for automatic tonality-based smoothing versus explicit encoder control can be transmitted to implement a hybrid method.
Subsequently, reference is made to the decoder-side implementation, which works in connection with the encoder-guided parameter smoothing.
a shows an encoder-side 21 and a decoder-side 22. In the encoder, N original input channels are input into a down mixer stage 23. The down mixer stage is operative to reduce the number of channels to e.g. a single mono-channel or, possibly, to two stereo channels. The down mixed signal representation at the output of down mixer 23 is, then, input into a source encoder 24, the source encoder being implemented for example as an mp3 encoder or as an AAC encoder producing an output bit stream. The encoder-side 21 further comprises a parameter extractor 25, which, in accordance with the present invention, performs the BCC analysis (block 116 in
The decoder 22 includes a source decoder 26, which is operative to reconstruct a signal from the received bit stream (originating from the source encoder 24). To this end, the source decoder 26 supplies, at its output, subsequent time portions of the input signal to an up-mixer 12, which performs the same functionality as the multi-channel reconstructor 12 in
Contrary to
It can be seen from
b shows a preferred embodiment of the signal-adaptive reconstruction parameter processing formed by the signal analyser 16 and the ICLD smoother 10.
The signal analyser 16 is formed from a tonality determination unit 16a and a subsequent thresholding device 16b. Additionally, the reconstruction parameter post processor 10 from
When, however, the tonality determination means in a decoder-controlled implementation determines that a certain frequency band of an actual time portion of the input signal, i.e., a certain frequency band of an input signal portion to be processed, has a tonality lower than the specified threshold, i.e., is transient, the switch is actuated such that the smoothing filter 10a is by-passed.
In the latter case, the signal-adaptive post processing by the smoothing filter 10a makes sure that the reconstruction parameter changes for transient signals pass the post processing stage unmodified and result in fast changes in the reconstructed output signal with respect to the spatial image, which corresponds to real situations with a high degree of probability for transient signals.
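A sketch of this signal-adaptive by-pass, with hypothetical tonality threshold and smoothing coefficient:

```python
def adaptive_smooth(params, tonality, prev, alpha=0.9, threshold=0.6):
    """Decoder-guided smoothing sketch: smooth a band's reconstruction
    parameter only when the band's tonality exceeds a threshold; for
    transient (low-tonality) bands the smoothing filter 10a is by-passed
    and the parameter passes unmodified. Threshold and alpha are assumed.

    params:   current reconstruction parameter per band
    tonality: tonality measure per band (higher = more tonal)
    prev:     previous smoothed parameter per band
    """
    out = []
    for p, t, s in zip(params, tonality, prev):
        if t >= threshold:                       # tonal: smooth
            out.append(alpha * s + (1 - alpha) * p)
        else:                                    # transient: by-pass
            out.append(p)
    return out
```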
It is to be noted here that the
Naturally, one could also detect transient portions and exaggerate the changes in the parameters to values between predefined quantized values or quantization indices so that, for strong transient signals, the post processing for the reconstruction parameters results in an even more exaggerated change of the spatial image of a multi-channel signal. In this case, a quantization step size of 1 as instructed by subsequent reconstruction parameters for subsequent time portions can be enhanced to, for example, 1.5, 1.4 or 1.3, which results in an even more dramatically changing spatial image of the reconstructed multi-channel signal.
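A sketch of such an exaggeration of parameter changes for transient portions (the default factor is one of the example values given above):

```python
def exaggerate(prev_level, cur_level, factor=1.4):
    """Opposite of smoothing: for strong transient signals, enlarge the
    step between subsequent quantizer levels by a factor (e.g. 1.3-1.5),
    yielding a 'level' between or beyond the predefined quantized values."""
    return prev_level + factor * (cur_level - prev_level)
```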
It is to be noted here that a tonal signal characteristic, a transient signal characteristic or other signal characteristics are only examples for signal characteristics, based on which a signal analysis can be performed to control a reconstruction parameter post processor. In response to this control, the reconstruction parameter post processor determines a post processed reconstruction parameter having a value which is different from any values for quantization indices on the one hand or requantization values on the other hand as determined by a predetermined quantization rule.
It is to be noted here that post processing of reconstruction parameters dependent on a signal characteristic, i.e., a signal-adaptive parameter post processing, is only optional. A signal-independent post processing also provides advantages for many signals. A certain post processing function could, for example, be selected by the user so that the user gets enhanced changes (in case of an exaggeration function) or damped changes (in case of a smoothing function). Alternatively, a post processing independent of any user selection and independent of signal characteristics can also provide certain advantages with respect to error resilience. It becomes clear that, especially in case of a large quantizer step size, a transmission error in a quantizer index may result in audible artefacts. To this end, one would perform a forward error correction or another similar operation, when the signal has to be transmitted over error-prone channels. In accordance with the present invention, the post processing can obviate the need for any bit-inefficient error correction codes, since the post processing of the reconstruction parameters based on reconstruction parameters in the past will result in a detection of erroneously transmitted quantized reconstruction parameters and will result in suitable countermeasures against such errors. Additionally, when the post processing function is a smoothing function, quantized reconstruction parameters strongly differing from former or later reconstruction parameters will automatically be manipulated as will be outlined later.
It has to be noted that the enhanced quantizer 10e (
With respect to the present invention, it basically makes no difference, whether the manipulation is performed before requantization (see
b shows an embodiment in which the enhanced inverse quantizer 10e in
Generally, the post processor 10 is implemented as a post processor as indicated in
b shows an example implementation, in which the post processed value is not derived from the inversely quantized reconstruction parameter but from a value derived from the inversely quantized reconstruction parameter. The processing for deriving is performed by the means 700 for deriving which, in this case, can receive the quantized reconstruction parameter via line 702 or can receive an inversely quantized parameter via line 704. One could for example receive as a quantized parameter an amplitude value, which is used by the means for deriving for calculating an energy value. Then, it is this energy value which is subjected to the post processing (e.g. smoothing) operation. The quantized parameter is forwarded to block 706 via line 708. Thus, postprocessing can be performed using the quantized parameter directly as shown by line 710, or using the inversely quantized parameter as shown by line 712, or using the value derived from the inversely quantized parameter as shown by line 714.
As has been outlined above, the data manipulation to overcome artefacts due to quantization step sizes in a coarse quantization environment can also be performed on a quantity derived from the reconstruction parameter attached to the base channel in the parametrically encoded multi channel signal. When for example the quantized reconstruction parameter is a difference parameter (ICLD), this parameter can be inversely quantized without any modification. Then an absolute level value for an output channel can be derived and the inventive data manipulation is performed on the absolute value. This procedure also results in the inventive artefact reduction, as long as a data manipulation in the processing path between the quantized reconstruction parameter and the actual reconstruction is performed so that a value of the post processed reconstruction parameter or the post processed quantity is different from a value obtainable using requantization in accordance with the quantization rule, i.e. without manipulation to overcome the “step size limitation”.
Many mapping functions for deriving the eventually manipulated quantity from the quantized reconstruction parameter can be devised and are used in the art, wherein these mapping functions include functions for uniquely mapping an input value to an output value in accordance with a mapping rule to obtain a non post processed quantity, which is then post processed to obtain the post-processed quantity used in the multi-channel reconstruction (synthesis) algorithm.
In the following, reference is made to
A possible inverse quantizer function is to map a quantizer level of 0 to an inversely quantized value of 0. A quantizer level of 1 would be mapped to an inversely quantized value of 10. Analogously, a quantizer level of 2 would be mapped to an inversely quantized value of 20 for example. Requantization is, therefore, controlled by an inverse quantizer function indicated by reference number 31. It is to be noted that, for a straightforward inverse quantizer, only the crossing points of line 30 and line 31 are possible. This means that, for a straightforward inverse quantizer having an inverse quantizer rule of
This is different in the enhanced inverse quantizer 10e, since the enhanced inverse quantizer receives, as an input, values between 0 and 1 or 1 and 2 such as value 0.5. The advanced requantization of value 0.5 obtained by the manipulator 10d will result in an inversely quantized output value of 5, i.e., in a post processed reconstruction parameter which has a value which is different from a value obtainable by requantization in accordance with the quantization rule. While the normal quantization rule only allows values of 0 or 10, the preferred inverse quantizer working in accordance with the preferred quantizer function 31 results in a different value, i.e., the value of 5 as indicated in
While the straight-forward inverse quantizer maps integer quantizer levels to quantized levels only, the enhanced inverse quantizer receives non-integer quantizer “levels” to map these values to “inversely quantized values” between the values determined by the inverse quantizer rule.
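Using the numerical example above (quantizer levels 0, 1, 2 mapping to 0, 10, 20), the enhanced inverse quantizer may be sketched as a linear interpolation between the requantization levels of the inverse quantizer rule:

```python
import math

def inverse_quantize(level, step=10.0):
    """Straightforward inverse quantizer: integer level -> level * step
    (level 0 -> 0, level 1 -> 10, level 2 -> 20, as in the example)."""
    return level * step

def enhanced_inverse_quantize(level, step=10.0):
    """Enhanced inverse quantizer 10e: also accepts non-integer 'levels'
    produced by the post processor and maps them linearly between the
    values of the inverse quantizer rule (e.g. level 0.5 -> value 5)."""
    lo = math.floor(level)
    frac = level - lo
    return ((1 - frac) * inverse_quantize(lo, step)
            + frac * inverse_quantize(lo + 1, step))
```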
The present invention is advantageous in that the inventive post processing smoothes fluctuations or smoothes short extreme values. The situation especially arises in a case, in which signal portions from several input channels having a similar energy are super-positioned in a frequency band of a signal, i.e., the base channel or input signal channel. This frequency band is then, per time portion and depending on the instantaneous situation, mixed to the respective output channels in a highly fluctuating manner. From the psycho-acoustic point of view, it would, however, be better to smooth these fluctuations, since these fluctuations do not contribute substantially to a detection of a location of a source but affect the subjective listening impression in a negative manner.
In accordance with a preferred embodiment of the present invention, such audible artefacts are reduced or even eliminated without incurring any quality losses at a different place in the system or without requiring a higher resolution/quantization (and, thus, a higher data rate) of the transmitted reconstruction parameters. The present invention reaches this object by performing a signal-adaptive modification (smoothing) of the parameters without substantially influencing important spatial localization detection cues.
The suddenly occurring changes in the characteristic of the reconstructed output signal result in audible artefacts in particular for audio signals having a highly constant stationary characteristic. This is the case with tonal signals. Therefore, it is important to provide a “smoother” transition between quantized reconstruction parameters for such signals. This can be obtained for example by smoothing, interpolation, etc.
Additionally, such a parameter value modification can introduce audible distortions for other audio signal types. This is the case for signals, which include fast fluctuations in their characteristic. Such a characteristic can be found in the transient part or attack of a percussive instrument. In this case, the embodiment provides for a deactivation of parameter smoothing.
This is obtained by post processing the transmitted quantized reconstruction parameters in a signal-adaptive way.
The adaptivity can be linear or non-linear. When the adaptivity is non-linear, a thresholding procedure as described in
Another criterion for controlling the adaptivity is a determination of the stationarity of a signal characteristic. A certain form for determining the stationarity of a signal characteristic is the evaluation of the signal envelope or, in particular, the tonality of the signal. It is to be noted here that the tonality can be determined for the whole frequency range or, preferably, individually for different frequency bands of an audio signal.
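As one possible band-wise tonality measure (an assumption; the description does not prescribe a specific estimator), the spectral flatness of a band's power spectrum may be used, which is close to 1 for noise-like and close to 0 for tonal signal portions:

```python
import math

def spectral_flatness(power_spectrum):
    """Ratio of geometric to arithmetic mean of the power spectrum of a
    band: near 1 for noise-like (flat) spectra, near 0 for tonal spectra.
    A tonality measure may then be derived, e.g., as 1 - flatness."""
    n = len(power_spectrum)
    geometric = math.exp(sum(math.log(p) for p in power_spectrum) / n)
    arithmetic = sum(power_spectrum) / n
    return geometric / arithmetic
```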
This embodiment results in a reduction or even elimination of artefacts, which were, up to now, unavoidable, without incurring an increase of the required data rate for transmitting the parameter values.
As has been outlined above with respect to
It is to be noted here that the inventive post processing can also be used for other concepts of parametric encoding of multi-channel signals such as for parametric stereo, MP3 surround, and similar methods.
The inventive methods or devices or computer programs can be implemented or included in several devices.
Depending on certain implementation requirements of the inventive methods, the inventive methods can be implemented in hardware or in software. The implementation can be performed using a digital storage medium, in particular a disk or a CD having electronically readable control signals stored thereon, which can cooperate with a programmable computer system such that the inventive methods are performed. Generally, the present invention is, therefore, a computer program product with a program code stored on a machine-readable carrier, the program code being configured for performing at least one of the inventive methods, when the computer program product runs on a computer. In other words, the inventive methods are, therefore, a computer program having a program code for performing the inventive methods, when the computer program runs on a computer.
While the foregoing has been particularly shown and described with reference to particular embodiments thereof, it will be understood by those skilled in the art that various other changes in the form and details may be made without departing from the spirit and scope thereof. It is to be understood that various changes may be made in adapting to different embodiments without departing from the broader concepts disclosed herein and comprehended by the claims that follow.
This is a divisional of application Ser. No. 11/212,395, filed Aug. 25, 2005; the application also claims the priority benefit under 35 U.S.C. §119 (e), of copending U.S. Provisional Application No. 60/671,582, filed Apr. 15, 2005; the prior applications are herewith incorporated by reference in their entirety.
Related application data: provisional application No. 60/671,582, Apr. 2005 (US); parent application Ser. No. 11/212,395, Aug. 2005 (US); child application Ser. No. 13/158,863 (US).