Transform coding is a compression technique often used in digital media compression systems. Uncompressed digital media, such as an audio or video signal, is typically represented as a stream of amplitude samples of a signal taken at regular time intervals. For example, a typical format for audio on compact disks consists of a stream of sixteen-bit samples per channel of the audio (e.g., the original analog audio signal from a microphone) captured at a rate of 44.1 kHz. Each sample is a sixteen-bit number representing the amplitude of the audio signal at the time of capture. Other digital media systems may use other amplitude and time resolutions of signal sampling.
Uncompressed digital media can consume considerable storage and transmission capacity. Transform coding reduces the size of digital media by transforming the time-domain representation of the digital media into a frequency-domain (or other like transform domain) representation, and then reducing resolution of certain generally less perceptible frequency components of the frequency-domain representation. This generally produces much less perceptible degradation of the signal compared to reducing amplitude or time resolution of digital media in the time domain.
More specifically, a typical audio transform coding technique divides the uncompressed digital audio's stream of time samples into fixed-size subsets or blocks, each block possibly overlapping with other blocks. A linear transform that performs time-frequency analysis is applied to each block, which converts the time-interval audio samples within the block to a set of frequency (or transform) coefficients generally representing the strength of the audio signal in corresponding frequency bands over the block interval. For compression, the transform coefficients may be selectively quantized (i.e., reduced in resolution, such as by dropping least significant bits of the coefficient values or otherwise mapping values in a higher resolution number set to a lower resolution), and also entropy or variable-length coded into a compressed audio data stream. At decoding, the transform coefficients are inverse-transformed to nearly reconstruct the original amplitude/time-sampled audio signal.
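To make this pipeline concrete, the following minimal sketch (illustrative only; the block size, quantization step, and choice of an orthonormal DCT-II are assumptions, not any particular codec's design) transform codes and reconstructs one block:

import numpy as np
from scipy.fft import dct, idct

def transform_code_block(block, step=0.05):
    # Time samples -> frequency coefficients (orthonormal DCT-II).
    coeffs = dct(block, norm='ortho')
    # Quantization: map coefficients to a lower-resolution integer set.
    quantized = np.round(coeffs / step).astype(int)
    # (The integers would be entropy coded here.) Decoding inverts the steps:
    reconstructed = idct(quantized * step, norm='ortho')
    return quantized, reconstructed

t = np.arange(1024) / 44100.0
signal = np.sin(2 * np.pi * 440 * t)            # one 1024-sample block
q, rec = transform_code_block(signal)
print("max reconstruction error:", np.max(np.abs(rec - signal)))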
Many audio compression systems utilize the Modulated Lapped Transform (MLT, also known as Modified Discrete Cosine Transform or MDCT) to perform the time-frequency analysis in audio transform coding. MLT reduces blocking artifacts introduced into the reconstructed audio signal by quantization. More particularly, when non-overlapping blocks are independently transform coded, quantization errors will produce discontinuities in the signal at the block boundaries upon reconstruction of the audio signal at the decoder.
One problem in audio coding is commonly referred to as “pre-echo.” Pre-echo occurs when the audio undergoes a sudden change (referred to as a “changing signal characteristic”), such as a transient. In transform coding, particular frequency coefficients commonly are quantized (i.e., reduced in resolution). When the transform coefficients are later inverse-transformed to reproduce the audio signal, this quantization introduces quantization noise that is spread over the entire block in the time domain. This inherently causes rather uniform smearing of noise within the coding frame. The noise, which generally is tolerable for some part of the frame, can be audible and disastrous to auditory quality during portions of the frame where the masking level is low. In practice, this effect shows up most prominently when a signal has a sharp attack immediately following a region of low energy, hence the term “pre-echo.” The related effect, “post-echo,” occurs when the signal transitions from high to low energy; post-echo is less of a problem for perceptible auditory quality due to a property of the human auditory system (forward temporal masking extends longer in time than backward masking).
Thus, what is needed is a system that addresses the pre-echo effect by reducing the smearing of quantization noise over a large signal frame.
A transform coder is described that performs an additional time-split transform selectively based on characteristics of media data. A transient detection component identifies changing signal characteristic locations, such as transient locations, at which to apply a time-split transform. For example, a slow transition between two types of signals is usually not considered a transient, and yet the described technology provides benefits for such changing signal characteristics. An encoding component transforms an input signal from a time domain to a transform domain. A time-splitting transformer component selectively performs an orthogonal sum-difference transform on adjacent coefficients indicated by the identified changing signal characteristic location. The orthogonal sum-difference transform results in transforming a vector of coefficients in the transform domain as if it were multiplied selectively by one or more exemplary time-split transform matrices.
In other examples, a window configuration component configures window sizes so as to place one or more small window sizes in areas of transient locations and large window sizes in other areas. The encoding component inverse-transforms to produce a reconstructed version of the input signal and a quality measurement component measures the achieved quality of the reconstructed signal. The window configuration component adjusts window sizes according to the achieved quality. The quality measurement component further operates to measure achieved perceptual quantization noise of the reconstructed signal. The window configuration component further operates to increase a window size where the measure of achieved perceptual quantization noise exceeds an acceptable threshold. The quality measurement component further operates to detect pre-echo in the reconstructed signal and the window configuration component further operates to decrease window size where pre-echo is detected.
A transform decoder provides an inverse time-splitting transformer and an inverse transformer. The inverse time-splitting transformer receives side information and coefficient data in a transform domain and selectively performs an inverse orthogonal sum-difference transformation on adjacent coefficients indicated in received side information. Next, the inverse transformer transforms coefficient data from the transform domain to a time domain.
In other examples, an inverse window configuration component receives side information about window and sub-frame sizes and the inverse transformer transforms coefficient data according to the window and sub-frame sizes. In one such example, the inverse orthogonal sum-difference transformation results in transforming a vector of coefficients in the transform domain as if it were multiplied by an inverse of a time-splitting transform. In another example, the inverse time-splitting transformer component receives side information indicating that there are no time-splits in at least one sub-frame, and in another example, the side information indicates whether or not there is a time-split in an extended band.
A method of decoding receives side information and coefficient data in a transform domain. The method selectively performs an inverse time-split transform on adjacent coefficients as indicated in received side information and further transforms the coefficient data from the transform domain to a time domain. In another example, the method identifies sub-frame sizes in received side information and the inverse transform is performed according to the identified sub-frame sizes. In yet another example, the side information indicates whether there is a time-split in a sub-band, or whether or not there is a time-split in each sub-band in an extended band. In another example, the method determines a pair of adjacent coefficients in a transform domain on which to perform an inverse sum-difference transform.
Additional features and advantages of the invention will be made apparent from the following detailed description of embodiments that proceeds with reference to the accompanying drawings.
The following describes a transform coder capable of performing an additional time-split transform selectively based on characteristics of spectral digital media data.
Optionally, an adaptive window size is provided when a selective time-split transform does not produce a sufficient benefit. The coder selects one or more window sizes within a frame of spectral digital media data. Spectral data analysis (e.g., changing signal characteristic detection) identifies one or more frequencies for a time-split transform. If the results of a time-split transform are not sufficient, then a window size may be adapted. Optionally, using one or more passes of time-split transform, data energy analysis, and/or window size adaptation provides improved coding efficiency overall.
When providing a sub-band decomposition for coding of data with an overlapped or block-based transform, or when using a filterbank (which can also be represented as an overlapped transform), the sub-band structure is typically fixed. When providing an overlapped transform (such as the modulated lapped transform (MLT)), the sub-frame size can be varied, which results in adapting the time/frequency resolution depending on signal characteristics. However, there are certain cases in which using a large sub-frame size (better frequency resolution, lower time resolution) provides efficient coding, but results in noticeable artifacts at higher frequencies. Various possible features are described for reducing these artifacts. For example, a block-based transform (e.g., a time-split transform) is applied subsequent to an existing fixed transform (e.g., a discrete cosine transform (DCT), an MLT, etc.). In one example, the time-split transform is used selectively to provide better time resolution upon determining that the time-split transform is beneficial for one or more select groups of frequency coefficients. The frequency selection is based on detected energy change.
If only certain regions of the frequency spectrum need better time resolution, then using a smaller time window can result in a significant increase in the number of bits needed to code the spectral data. If sufficient bits are available this is not an issue, and a smaller time window should be used. However, when there are not enough bits, using a selective time-split transform on only those frequency ranges where it is needed can provide improved quality.
A time-split transform improves data coding when better time resolution is needed for coding of certain frequencies. A time-split transform and/or various other features described herein can be used in any media encoder or decoder. For example, a time-split transform can be used with the digital media codec techniques described by Mehrotra et al., “Efficient Coding of Digital Media Spectral Data Using Wide-Sense Perceptual Similarity,” U.S. patent application Ser. No. 10/882,801, filed Jun. 29, 2004. For example, a time-split transform can be used to improve coding of high, medium, or low frequencies.
The generalized audio encoder (100) includes a frequency transformer (110), a multi-channel transformer (120), a perception modeler (130), a weighter (140), a quantizer (150), an entropy encoder (160), a rate/quality controller (170), and a bitstream multiplexer [“MUX”] (180).
The encoder (100) receives a time series of input audio samples (105). For input with multiple channels (e.g., stereo mode), the encoder (100) processes channels independently, and can work with jointly coded channels following the multi-channel transformer (120). The encoder (100) compresses the audio samples (105) and multiplexes information produced by the various modules of the encoder (100) to output a bitstream (195) in a format such as Windows Media Audio [“WMA”] or Advanced Streaming Format [“ASF”]. Alternatively, the encoder (100) works with other input and/or output formats.
The frequency transformer (110) receives the audio samples (105) and converts them into data in the frequency domain. The frequency transformer (110) splits the audio samples (105) into blocks, which can have variable size to allow variable temporal resolution. Small blocks allow for greater preservation of time detail at short but active transition segments in the input audio samples (105), but sacrifice some frequency resolution. In contrast, large blocks have better frequency resolution and worse time resolution, and usually allow for greater compression efficiency at longer and less active segments. Blocks can overlap to reduce perceptible discontinuities between blocks that could otherwise be introduced by later quantization. The frequency transformer selectively applies a time-split transform based on characteristics of the data. The frequency transformer (110) outputs blocks of frequency coefficient data to the multi-channel transformer (120) and outputs side information such as block sizes to the MUX (180). The frequency transformer (110) outputs both the frequency coefficient data and the side information to the perception modeler (130).
The frequency transformer (110) partitions a frame of audio input samples (105) into overlapping sub-frame blocks with time-varying size and applies a time-varying MLT to the sub-frame blocks. Possible sub-frame sizes include 128, 256, 512, 1024, 2048, and 4096 samples. The MLT operates like a DCT modulated by a time window function, where the window function is time varying and depends on the sequence of sub-frame sizes. The MLT transforms a given overlapping block of samples x[n], 0≤n<subframe_size, into a block of frequency coefficients X[k], 0≤k<subframe_size/2. The frequency transformer (110) can also output estimates of the complexity of future frames to the rate/quality controller (170). Alternative embodiments use other varieties of MLT. In still other alternative embodiments, the frequency transformer (110) applies a DCT, FFT, or other type of modulated or non-modulated, overlapped or non-overlapped frequency transform, or uses subband or wavelet coding. Typically after the transform to the frequency domain, the frequency transformer (110) selectively applies a time-split transform based on characteristics of the data.
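The lapped transform can be sketched directly from its definition as follows (the sine window and normalization used here are common choices but are assumptions, not necessarily the codec's actual window):

import numpy as np

def mlt_forward(x2m):
    # 2M windowed time samples -> M frequency coefficients (MLT/MDCT).
    M = len(x2m) // 2
    n = np.arange(2 * M)
    h = np.sin(np.pi * (n + 0.5) / (2 * M))            # sine window
    k = np.arange(M).reshape(-1, 1)
    basis = np.cos(np.pi / M * (n + 0.5 + M / 2.0) * (k + 0.5))
    return np.sqrt(2.0 / M) * (basis @ (h * x2m))

X = mlt_forward(np.random.randn(2048))                 # 2048 samples -> 1024 coefficients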
For multi-channel audio data, the multiple channels of frequency coefficient data produced by the frequency transformer (110) often correlate. To exploit this correlation, the multi-channel transformer (120) can convert the multiple original, independently coded channels into jointly coded channels. For example, if the input is stereo mode, the multi-channel transformer (120) can convert the left and right channels into sum and difference channels:
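XSum[k]=(XLeft[k]+XRight[k])/2
XDiff[k]=(XLeft[k]−XRight[k])/2

(One conventional normalization is shown; the exact scaling is an implementation choice.)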
Or, the multi-channel transformer (120) can pass the left and right channels through as independently coded channels. More generally, for a number of input channels greater than one, the multi-channel transformer (120) passes original, independently coded channels through unchanged or converts the original channels into jointly coded channels. The decision to use independently or jointly coded channels can be predetermined, or the decision can be made adaptively on a block by block or other basis during encoding. The multi-channel transformer (120) produces side information to the MUX (180) indicating the channel mode used.
The perception modeler (130) models properties of the human auditory system to improve the quality of the reconstructed audio signal for a given bitrate. The perception modeler (130) computes the excitation pattern of a variable-size block of frequency coefficients. First, the perception modeler (130) normalizes the size and amplitude scale of the block. This enables subsequent temporal smearing and establishes a consistent scale for quality measures. Optionally, the perception modeler (130) attenuates the coefficients at certain frequencies to model the outer/middle ear transfer function. The perception modeler (130) computes the energy of the coefficients in the block and aggregates the energies by 25 critical bands. Alternatively, the perception modeler (130) uses another number of critical bands (e.g., 55 or 109). The frequency ranges for the critical bands are implementation-dependent, and numerous options are well known. For example, see ITU-R BS 1387 or a reference mentioned therein. The perception modeler (130) processes the band energies to account for simultaneous and temporal masking. In alternative embodiments, the perception modeler (130) processes the audio data according to a different auditory model, such as one described or mentioned in ITU-R BS 1387.
The weighter (140) generates weighting factors (alternatively called a quantization matrix) based upon the excitation pattern received from the perception modeler (130) and applies the weighting factors to the data received from the multi-channel transformer (120). The weighting factors include a weight for each of multiple quantization bands in the audio data. The quantization bands can be the same or different in number or position from the critical bands used elsewhere in the encoder (100). The weighting factors indicate proportions at which noise is spread across the quantization bands, with the goal of minimizing the audibility of the noise by putting more noise in bands where it is less audible, and vice versa. The weighting factors can vary in amplitudes and number of quantization bands from block to block. In one implementation, the number of quantization bands varies according to block size; smaller blocks have fewer quantization bands than larger blocks. For example, blocks with 128 coefficients have 13 quantization bands, blocks with 256 coefficients have 15 quantization bands, up to 25 quantization bands for blocks with 2048 coefficients. The weighter (140) generates a set of weighting factors for each channel of multi-channel audio data in independently coded channels, or generates a single set of weighting factors for jointly coded channels. In alternative embodiments, the weighter (140) generates the weighting factors from information other than or in addition to excitation patterns.
The weighter (140) outputs weighted blocks of coefficient data to the quantizer (150) and outputs side information such as the set of weighting factors to the MUX (180). The weighter (140) can also output the weighting factors to the rate/quality controller (170) or other modules in the encoder (100). The set of weighting factors can be compressed for more efficient representation. If the weighting factors are lossy compressed, the reconstructed weighting factors are typically used to weight the blocks of coefficient data. If audio information in a band of a block is completely eliminated for some reason (e.g., noise substitution or band truncation), the encoder (100) may be able to further improve the compression of the quantization matrix for the block.
The quantizer (150) quantizes the output of the weighter (140), producing quantized coefficient data to the entropy encoder (160) and side information including quantization step size to the MUX (180). Quantization introduces irreversible loss of information, but also allows the encoder (100) to regulate the bitrate of the output bitstream (195) in conjunction with the rate/quality controller (170).
The entropy encoder (160) losslessly compresses quantized coefficient data received from the quantizer (150). For example, the entropy encoder (160) uses multi-level run length coding, variable-to-variable length coding, run length coding, Huffman coding, dictionary coding, arithmetic coding, LZ coding, a combination of the above, or some other entropy encoding technique.
The rate/quality controller (170) works with the quantizer (150) to regulate the bitrate and quality of the output of the encoder (100). The rate/quality controller (170) receives information from other modules of the encoder (100). In one implementation, the rate/quality controller (170) receives estimates of future complexity from the frequency transformer (110), sampling rate, block size information, the excitation pattern of original audio data from the perception modeler (130), weighting factors from the weighter (140), a block of quantized audio information in some form (e.g., quantized, reconstructed, or encoded), and buffer status information from the MUX (180). The rate/quality controller (170) can include an inverse quantizer, an inverse weighter, an inverse multi-channel transformer, and, potentially, an entropy decoder and other modules, to reconstruct the audio data from a quantized form.
The rate/quality controller (170) processes the information to determine a desired quantization step size given current conditions and outputs the quantization step size to the quantizer (150). The rate/quality controller (170) then measures the quality of a block of reconstructed audio data as quantized with the quantization step size, as described below. Using the measured quality as well as bitrate information, the rate/quality controller (170) adjusts the quantization step size with the goal of satisfying bitrate and quality constraints, both instantaneous and long-term. In alternative embodiments, the rate/quality controller (170) works with different or additional information, or applies different techniques to regulate quality and bitrate.
In conjunction with the rate/quality controller (170), the encoder (100) can apply noise substitution, band truncation, and/or multi-channel rematrixing to a block of audio data. At low and mid-bitrates, the audio encoder (100) can use noise substitution to convey information in certain bands. In band truncation, if the measured quality for a block indicates poor quality, the encoder (100) can completely eliminate the coefficients in certain (usually higher frequency) bands to improve the overall quality in the remaining bands. In multi-channel rematrixing, for low bitrate, multi-channel audio data in jointly coded channels, the encoder (100) can suppress information in certain channels (e.g., the difference channel) to improve the quality of the remaining channel(s) (e.g., the sum channel).
The MUX (180) multiplexes the side information received from the other modules of the audio encoder (100) along with the entropy encoded data received from the entropy encoder (160). The MUX (180) outputs the information in WMA or in another format that an audio decoder recognizes.
The MUX (180) includes a virtual buffer that stores the bitstream (195) to be output by the encoder (100). The virtual buffer stores a pre-determined duration of audio information (e.g., 5 seconds for streaming audio) in order to smooth over short-term fluctuations in bitrate due to complexity changes in the audio. The virtual buffer then outputs data at a relatively constant bitrate. The current fullness of the buffer, the rate of change of fullness of the buffer, and other characteristics of the buffer can be used by the rate/quality controller (170) to regulate quality and bitrate.
The generalized audio decoder (200) includes a bitstream demultiplexer [“DEMUX”] (210), an entropy decoder (220), an inverse quantizer (230), a noise generator (240), an inverse weighter (250), an inverse multi-channel transformer (260), and an inverse frequency transformer (270).
The decoder (200) receives a bitstream (205) of compressed audio data in WMA or another format. The bitstream (205) includes entropy encoded data as well as side information from which the decoder (200) reconstructs audio samples (295). For audio data with multiple channels, the decoder (200) processes each channel independently, and can work with jointly coded channels before the inverse multi-channel transformer (260).
The DEMUX (210) parses information in the bitstream (205) and sends information to the modules of the decoder (200). The DEMUX (210) includes one or more buffers to compensate for short-term variations in bitrate due to fluctuations in complexity of the audio, network jitter, and/or other factors.
The entropy decoder (220) losslessly decompresses entropy codes received from the DEMUX (210), producing quantized frequency coefficient data. The entropy decoder (220) typically applies the inverse of the entropy encoding technique used in the encoder.
The inverse quantizer (230) receives a quantization step size from the DEMUX (210) and receives quantized frequency coefficient data from the entropy decoder (220). The inverse quantizer (230) applies the quantization step size to the quantized frequency coefficient data to partially reconstruct the frequency coefficient data. In alternative embodiments, the inverse quantizer applies the inverse of some other quantization technique used in the encoder.
The noise generator (240) receives from the DEMUX (210) indication of which bands in a block of data are noise substituted as well as any parameters for the form of the noise. The noise generator (240) generates the patterns for the indicated bands, and passes the information to the inverse weighter (250).
The inverse weighter (250) receives the weighting factors from the DEMUX (210), patterns for any noise-substituted bands from the noise generator (240), and the partially reconstructed frequency coefficient data from the inverse quantizer (230). As necessary, the inverse weighter (250) decompresses the weighting factors. The inverse weighter (250) applies the weighting factors to the partially reconstructed frequency coefficient data for bands that have not been noise substituted. The inverse weighter (250) then adds in the noise patterns received from the noise generator (240).
The inverse multi-channel transformer (260) receives the reconstructed frequency coefficient data from the inverse weighter (250) and channel mode information from the DEMUX (210). If multi-channel data is in independently coded channels, the inverse multi-channel transformer (260) passes the channels through. If multi-channel data is in jointly coded channels, the inverse multi-channel transformer (260) converts the data into independently coded channels. If desired, the decoder (200) can measure the quality of the reconstructed frequency coefficient data at this point.
The inverse frequency transformer (270) receives the frequency coefficient data output by the inverse multi-channel transformer (260) as well as side information such as block sizes from the DEMUX (210). The inverse frequency transformer (270) applies the inverse time-split transform selectively (as indicated by the side information), applies the inverse of the frequency transform used in the encoder, and outputs blocks of reconstructed audio samples (295).
A transform coder 110 need not employ adaptive window sizing. In one such example, a default window size is used to transform coefficients from the time domain to the transform domain (e.g., frequency domain). Changing signal characteristic detection is used to determine where to selectively apply a time-split transform to coefficients in the frequency domain.
Optionally, a time-split transform may be used in conjunction with adaptive window sizing. The transform coder 300 utilizes a process of one or more passes to select window sizes for transform coding. In a first, open-loop pass, the transform coder detects changing signal characteristics in the input signal and selectively performs a time-split transform. An initial window configuration may or may not take changing signal characteristic detection into consideration. Optionally, window sizes may be adapted before or after selectively applying a time-split transform.
When window size adaptation is employed for an initial window-size configuration, the transform coder places one or more small windows over changing signal characteristic regions and places large windows in frames without changing signal characteristics. The transform coder first transform codes, selectively time-split transforms, and then reconstructs the signal using the initial window configuration, so that it can then analyze the auditory quality of transform coding using the initial window configuration. Based on the quality measurement, the transform coder adjusts window sizes, either combining windows to form larger windows to improve coding efficiency and achieve a desired bit rate, or dividing windows to form smaller windows to avoid pre-echo. To save on computation, the transform coder 300 can use the quality measured on the previous frame to make adjustments to the window configuration of the current frame, thereby merging the functionality of the two passes without having to re-code.
In a particular example, the transform coder 300 includes a changing signal characteristic detection component 320, a windows configuration component 330, an encoding component 335, a time-split component 340, and a quality measurement component 350.
The changing signal characteristic detection component 320 detects regions of the input signal that exhibit characteristics of a changing signal characteristic, and identifies such regions to the windows configuration component 330. The changing signal characteristic detection component 320 can use various conventional techniques to detect changing signal characteristic regions in the input signal. An exemplary changing signal characteristic detection process 400 is described below.
The windows configuration component 330 configures windows sizes for transform coding. An initial window configuration may be provided based on results of changing signal characteristic detection. An initial window configuration may also be provided by a default configuration without considering changing signal characteristic detection. An initial configuration may be determined on an open-loop basis based on the changing signal characteristic locations identified by the changing signal characteristic detector component 320. An exemplary open-loop windows configuration process 500 is described below.
The encoding component 335 implements processes for transform coding (e.g., DCT transform, etc.), rate control, quantization and their inverse processes, and may encompass the various components that implement these processes in the generalized audio encoder 100 and decoder 200 described above. The encoding component 335 initially transform codes (with rate control and quantization) the input signal using the initial window size configuration produced by the windows configuration component 330. The time-split component 340 then selectively performs a time-split transform, as described below. Optionally, when a decoder is employing feedback 365, the encoding component 335 then decodes to provide a reconstructed signal for auditory quality analysis by the quality measurement component 350. The encoding component 335 again transform codes (with rate-control and quantization) the input signal using the second-pass window size configuration provided by the windows configuration component 330 to produce the compressed stream 360.
The quality measurement component 350 analyzes the auditory quality of the reconstructed signal produced from transform coding using the initial or next window size configuration, so as to provide closed-loop quality measurement feedback to the windows configuration component 330. The quality measurement component analyzes the quality of each coding window, such as by measuring the noise-to-excitation ratio achieved for the coding window. Alternatively, various other quality measures (e.g., the noise-to-mask ratio) can be used to assess the quality achieved with the selected window size. Optionally, this quality measure is used by the windows configuration component 330 in its second-pass to select particular window sizes to increase for rate control, with minimal loss of quality.
The quality measurement component 350 may also use the quality analysis to detect pre-echo. An exemplary process for detecting pre-echo is described below.
In the case of multi-channel audio encoding, the transform coder 300 in one implementation produces a common window size configuration for the multiple coding channels. In an alternative implementation for multi-channel audio encoding, the transform coder 300 separately configures transform window sizes for individual coding channels.
The changing signal characteristic detection process 400 first band-pass filters (at first stage 410) the input signal frame. The changing signal characteristic detection process 400 uses three filters with pass bands in different audio ranges, i.e., low, middle and high-pass ranges. The filters may be elliptic filters, such as may be designed using a standard filter design tool (e.g., MATLAB), although other filter shapes alternatively can be used. The squared output of the filters represents the power of the input signal in the respective audio spectrum range at each sample. The low-pass, mid-pass and high-pass power outputs are denoted herein as Pl(n), Pm(n), and Ph(n), where n is the sample number within the frame.
Next (at stage 420), the changing signal characteristic detection process 400 further low-pass filters (i.e., smoothes) the power outputs of the band-pass filter stage for each sample. The changing signal characteristic detection process 400 performs low-pass filtering by computing the following sums (denoted Ql(n), Qm(n) and Qh(n)) of the low-pass, mid-pass and high-pass filtered power outputs at each sample n, for example as shown in the following equations (the exact summation bounds are an implementation choice):
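Ql(n)=Pl(n−t)+Pl(n−t+1)+ . . . +Pl(n+s)
Qm(n)=Pm(n−t)+Pm(n−t+1)+ . . . +Pm(n+s)
Qh(n)=Ph(n−t)+Ph(n−t+1)+ . . . +Ph(n+s)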
where s and t are predefined constants and t≥s. Examples of suitable values for the constants are t=288 and s=256.
The changing signal characteristic detection process 400 then (at stage 430) calculates the local power at each sample by again summing the power outputs of the three bands over a smaller interval centered at each sample, as shown by the following equations:
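Sl(n)=Pl(n−u)+Pl(n−u+1)+ . . . +Pl(n+v)
Sm(n)=Pm(n−u)+Pm(n−u+1)+ . . . +Pm(n+v)
Sh(n)=Ph(n−u)+Ph(n−u+1)+ . . . +Ph(n+v)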
where u and v are predefined constants smaller than t and s. Examples of suitable values of the constants are u=32 and v=32.
At stage 440, the changing signal characteristic detection process 400 compares the local power at each sample to the low-pass filter power output, by calculating the ratios shown in the following equations:
Rl(n)=Sl(n)/Ql(n)
Rm(n)=Sm(n)/Qm(n)
Rh(n)=Sh(n)/Qh(n)
Finally, at decision stages 450 and 460, the changing signal characteristic detection process 400 determines that a changing signal characteristic exists if a ratio calculated at stage 440 exceeds predetermined thresholds, Tl, Tm, and Th, for the respective bands. In other words, if any of Rl(n)>Tl, or 1/Rl(n)>T′l, or Rm(n)>Tm, or 1/Rm(n)>T′m, or Rh(n)>Th, or 1/Rh(n)>T′h, where Tl, T′l, Tm, T′m, Th, T′h are thresholds, then the sample location n is marked as a changing signal characteristic location. An example of suitable threshold values is in the range of 10 to 40. It is important to note that a changing signal characteristic is declared so long as there is sufficient change in energy in any of the three bands, so coding efficiency may be reduced if there are certain frequency ranges in which no changing signal characteristic exists.
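The detection process 400 can be sketched end-to-end as follows (the elliptic filter orders and band edges are placeholders for the actual design, and a single threshold T stands in for the per-band thresholds Tl, Tm, Th and their reciprocal thresholds):

import numpy as np
from scipy.signal import ellip, lfilter

def detect_csc(x, t=288, s=256, u=32, v=32, T=20.0):
    # Three band filters (low/mid/high); orders and edges are placeholders.
    filters = [ellip(4, 1, 40, 0.2, 'lowpass'),
               ellip(4, 1, 40, [0.2, 0.6], 'bandpass'),
               ellip(4, 1, 40, 0.6, 'highpass')]
    flags = np.zeros(len(x), dtype=bool)
    for b, a in filters:
        P = lfilter(b, a, x) ** 2                        # per-sample band power
        Q = np.convolve(P, np.ones(t + s + 1), 'same')   # smoothed power
        S = np.convolve(P, np.ones(u + v + 1), 'same')   # local power
        R = S / np.maximum(Q, 1e-12)
        # Declare a changing signal characteristic if the ratio (or its
        # reciprocal) exceeds the threshold in any band.
        flags |= (R > T) | (1.0 / np.maximum(R, 1e-12) > T)
    return flags                                         # True at CSC sample locations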
As indicated at step 510 in the window configuration process 500, the process 500 determines whether any changing signal characteristics (CSC), such as transients, were detected in the frame. If so, the window configuration process places windows of the smallest size over changing signal characteristic-containing regions of the frame (as indicated at 520), such that the changing signal characteristics are completely encompassed by one or more smallest size windows. Then (at 530), the process 500 fills gaps before and after the smallest size windows with one or more transition windows.
If no changing signal characteristics are detected in a frame, the window configuration process 500 configures the frame to contain a largest size window (as indicated at 540). The process 500 continues on a frame-by-frame basis as indicated at step 550.
As discussed above, an optional quality measurement component 350 analyzes the achieved quality of audio information and feeds back the quality measurements to the window configuration component 330 for use in adjusting window sizes. The window configuration component 330 may take two actions depending on the achieved quality of the signal. First, when the quantization noise is not acceptable, the window configuration component 330 trades time resolution for better quantization by increasing the smallest window size. Further, when pre-echo is detected, the window configuration component splits the corresponding windows to increase time resolution, provided there are sufficient spare bits to meet bit rate constraints.
More specifically, an exemplary window configuration process 700 adjusts the window configuration based on the achieved quality in two processing loops, as follows.
At a first processing step 720 in the first processing loop 720-750, the window configuration process 700 measures the achieved quality of the transform coded signal. In one implementation, the process 700 measures the achieved Noise-To-Excitation Ratio (NER) for each coding window. The NER of the coding window of the reconstructed, transform coded signal can be calculated as described in the Perceptual Audio Quality Measurement Patent Application, U.S. patent application Ser. No. 10/017,861, filed Dec. 14, 2001. Alternatively, other quality measures applicable to assessing acceptability or perceptibility of quantization noise can be used, such as the noise-to-mask ratio described or referenced in “Method for objective measurements of perceived audio quality,” International Telecommunication Union-Recommendation Broadcasting Service (Sound) Series (ITU-R BS) 1387 (1998).
Next (at 730), the window configuration process 700 compares the quality measurement to a threshold. If the quantization noise is not acceptable, the window configuration process 700 (at 750) increases the minimum allowed window size for the frame. As an example, in one implementation, the window configuration process 700 increases the minimum allowed window size for the frame by a factor of 2 if the NER of a coding window in the frame exceeds 0.5. If the NER is greater than 1.0, the minimum allowed window size is increased by a factor of 4. The acceptable quantization noise threshold and the increase in minimum allowed window size are parameters that can be varied in alternative implementations.
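In code form, this adjustment rule (using the example thresholds and growth factors above; the cap at the largest window size is an added assumption) reduces to:

def adjust_min_window_size(min_size, ner, max_size=4096):
    # Grow the minimum allowed window size when quantization noise is high.
    if ner > 1.0:
        return min(min_size * 4, max_size)   # very poor quality: quadruple
    if ner > 0.5:
        return min(min_size * 2, max_size)   # poor quality: double
    return min_size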
As indicated at decision 740, the window configuration process 700 also can increase the window size when the quantization noise is acceptable, but the rate control buffer of the transform coder is nearly full (e.g., 95% or other like amount depending on size of buffer, variance in bit rate, and other factors).
In an alternative implementation of the process 700, the window configuration process 700 at processing step 720 uses a delayed quality measurement. As examples, the quality of coding of the preceding frame or average quality of previous few frames could be used to determine the minimum allowed window size for the current frame. In one implementation, the final NER obtained at the preceding frame is used to determine the minimum window size (at 750) used in the configuration process 500. Such use of a delayed quality measurement reduces the implementation complexity, albeit with some sacrifice in accuracy.
In the second processing loop 820-840, the window configuration process 700 also performs measurements to detect pre-echo in the frame. For pre-echo detection, the process 700 divides the frame of the reconstructed, transform coded signal into a set of very small windows (smaller than the smallest coding window), and calculates the quality measure (e.g., the NMR or NER) for each of the very small windows. This produces a quality measure vector (e.g., a vector of NMR or NER values). The process 700 also calculates a global achieved quality measure for the frame (e.g., the NMR or NER of the frame). The process 700 determines that pre-echo exists if any component of the vector is significantly higher (e.g., by a threshold factor) than the achieved global quality measure for the frame. A suitable threshold factor is in the range of 4 to 10. Alternative implementations can use other values for the threshold.
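The pre-echo test itself reduces to a comparison of the per-window quality vector against the frame's global measure (a sketch, assuming NER values and a mid-range threshold factor):

import numpy as np

def pre_echo_detected(ner_vector, global_ner, factor=6.0):
    # Pre-echo: some very-small-window quality measure is much worse
    # (higher NER) than the frame's global quality measure.
    return bool(np.any(np.asarray(ner_vector) > factor * global_ner))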
In the case where pre-echo is detected and there is sufficient spare coding capacity (e.g., rate control buffer not full or nearly full), the window configuration process 700 (at 840) adjusts the window configuration in the frame to further reduce the window size. In one implementation, the process 700 decomposes the frame into a series of smallest size windows (e.g., the size of window 620).
In order to determine whether to apply time-splitting, the data is programmatically examined for certain characteristics.
Optionally, an input signal is coded into a baseband and then the baseband shape is examined to determine similar shapes in an extended band. A similar shape in the baseband provides a shape model for similar shapes being coded at other frequencies. The baseband shapes provide synthetic models or codewords used to code the higher frequencies. The coded baseband is used to create an extended band or enhanced layer. In one such example, a time envelope is created by reconstructing with the enhancement layer and comparing with the original time envelope. If there is a big difference between the original and reconstructed signals, then a determination is made to time-split at or near sub-bands where signal quality is compromised between the enhanced and original signal.
A changing signal characteristic detection routine (320) should also look for large energy differences in a high band which is being coded in the enhancement layer. If there are significant energy differences present only in the high band (such as those being coded with enhancement in the extended band), and not in frequencies which are being coded with the baseband codec, then this is the ideal case in which a large window size should be used for the baseband. Time-splitting can then be used for the enhancement to get better time resolution in high frequencies without requiring a shorter window in the baseband. This gives the best compression efficiency without causing undesirable artifacts due to poor time resolution at high frequencies.
However, there might be cases in which artifacts remain even after performing the time-split transform. Although the time-split results in energy compaction in the time domain, it does not always work as well as truly using a smaller time window.
Additionally, it will be apparent that any similarly suitable and invertible transform can be used to alter or dampen the artifacts created by spreading the error across the spectrum. Here, since the MLT is an orthogonal transform, applying an orthogonal transform afterwards keeps the overall transform orthogonal. Its effect is to modify the basis functions.
When utilizing an MLT (e.g., MDCT), overlapping windows are used to segment the data into blocks. For each of these overlapping blocks, a DCT is performed on the data in the window. Optionally, plural overlapping window sizes can be used. The window sizes can be applied based upon signal characteristics, where small windows are used at changing signal characteristics (e.g., where signal characteristics such as energy change), and larger windows are used elsewhere to obtain better compression efficiency.
For each block (or sub-frame), an invertible transform is computed, transforming input audio samples from the time domain to the transform domain (e.g., a DCT or another known transform). The M resulting MLT coefficients from the 2M window are used for each M-size sub-frame. The overlap ensures that this 2M-to-M transformation can be inverted without any loss; of course, there will be some loss during quantization. The 2M-to-M transformation can be represented as a projection of the 2M-dimensional signal vector onto the basis vectors. The shapes of the M basis vectors depend on the window shape. Neither overlapping windows nor any particular segmentation methodology is required to time-split adjacent coefficients. However, if overlapping windows are used, the basis vectors typically vary based on the current sub-frame size, the previous sub-frame size, and the next sub-frame size. If the cosine basis vectors are to provide good time resolution, then they should have localized support in the time domain. If the basis vectors are viewed as functions of the time index, then they should have most of their energy concentrated around the center of the frame.
Consider an example with a 32-dimensional vector (e.g., 32 input samples, such as audio/video) that has been split into 4 sub-frames of size 8 (e.g., a segmentation of [8 8 8 8]). Often, the frame would be larger (e.g., 2048 samples) and the segmentation (e.g., sub-frame sizes) would be larger and possibly variable in size within the frame (e.g., [8, 8, 64, 64, 32, 128, 128]). As will be discussed, the time-splitting transform can be selectively performed without regard to sub-frame size and whether or not the segmentation size is variable. However, the 32-dimensional vector provides an example for the following discussion, with the understanding that the described technology is not limited to any such configuration.
A time-splitting transform is selectively applied at adjacent coefficients where better time resolution is desired. Instead of just using the coefficients obtained from the MLT, a post block transform on a subset of the M spectral coefficients is performed, such as a time-splitting transform. By imposing constraints on the structure of the transform, better time resolution is selectively obtained for some frequency coefficients, but not others.
As can be seen, the post block transform (e.g., time-split transform) results in time separation. Although the time span of the resulting basis vectors is the same as before, the energy concentration has been more localized. This is better understood in view of the following analysis.
The MLT coefficients for a sub-frame of size M are defined (in one standard formulation, with the normalization factor omitted) as:
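X[k]=Σ(n=0 to 2M−1) x[n]h[n] cos [(n+(M+1)/2)(k+1/2)π/M], 0≤k<M  (1)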
where h[n] is the window. The time index n=0 is defined to be M/2 samples to the left of the start of the current sub-frame, so that x[M/2] is the start of the current sub-frame. Notice that the equation operates over an overlapping window of samples (e.g., 2M samples). Starting with X[k]+X[k+1], and then using the known relationship cos(a)+cos(b)=2 cos((a−b)/2)cos((a+b)/2), the following is obtained:
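X[k]+X[k+1]=Σ(n=0 to 2M−1) 2x[n]h[n] cos [(n+(M+1)/2)π/(2M)] cos [(n+(M+1)/2)(k+1)π/M]  (2)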
Similarly, starting with X[k]−X[k+1], and using the known relationship cos(a)−cos(b)=−2 sin((a−b)/2)sin((a+b)/2), the following is obtained:
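X[k]−X[k+1]=Σ(n=0 to 2M−1) 2x[n]h[n] sin [(n+(M+1)/2)π/(2M)] sin [(n+(M+1)/2)(k+1)π/M]  (3)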
Equations 2 and 3 can be rewritten as equations 4 and 5, respectively, as follows,
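X[k]+X[k+1]=Σ(n=0 to 2M−1) x[n]h1[n] cos [(n+(M+1)/2)(k+1)π/M]  (4)
X[k]−X[k+1]=Σ(n=0 to 2M−1) x[n]h2[n] sin [(n+(M+1)/2)(k+1)π/M]  (5)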
such that h1[n] and h2[n] are defined as shown in equations 7 and 8.
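h1[n]=2h[n] cos [(n+(M+1)/2)π/(2M)]  (7)
h2[n]=2h[n] sin [(n+(M+1)/2)π/(2M)]  (8)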
Thus, the two original frequency-domain coefficients X[k] and X[k+1], which corresponded to the modulating frequencies (k+1/2)π/M and (k+3/2)π/M, respectively, are replaced by the coefficients (X[k]+X[k+1]) and (X[k]−X[k+1]). These two new frequency-domain coefficients now correspond to the same frequency (k+1)π/M (but with a 90 degree phase shift, since one is modulated by a cosine function and the other by a sine function), modulated by the different windows h1[n] and h2[n], respectively.
A sub-band merging approach was first described in R. Cox, “The Design of Uniformly and Nonuniformly Spaced Pseudoquadrature Mirror Filters,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 34, pp. 1090-1096, October 1986, and was applied to the MLT in H. S. Malvar, “Enhancing the Performance of Sub-Band Audio Coders for Speech Signals,” Proc. 1998 IEEE International Symposium on Circuits and Systems, vol. 5, pp. 98-101, June 1998 (“Malvar”). Contrary to the decomposition in Malvar, where all high frequencies (after a predetermined value of k) are pairwise split according to the construction above, a time-splitting transform is performed only on selected pairs of coefficients, according to a selection criterion. Thus, fixed time-splitting is replaced by selective time-splitting based upon the characteristics of the input signal or derivations thereof. In practice one can combine more than two sub-bands, but the quality of time-splitting suffers; that is, time selectivity will not be as good. See, e.g., O. A. Niamut and R. Heusdens, “Sub-band Merging in Cosine-modulated Filter Banks,” IEEE Signal Processing Letters, vol. 10, pp. 111-114, April 2003, and see ADAPTIVE WINDOW-SIZE SELECTION IN TRANSFORM CODING, U.S. patent application Ser. No. 10/020,708, filed Dec. 14, 2001.
In one example, a portion of the output of the transform called the baseband (1502) is encoded by the baseband coder. Then the extended band (1504) is divided into sub-bands of homogeneous or varied sizes (1506). Shapes in the baseband (1508) (e.g., shapes as represented by a series of coefficients) are compared to shapes in the extended band (1510), and an offset (1512) representing a similar shape in the baseband is used to encode a shape (e.g., sub-band) in the extended band, so that fewer bits need to be encoded and sent to the decoder.
Sub-bands may vary from subframe to subframe. Similarly, a baseband (1502) size may vary, and a resulting extended band (1504) may vary in size based on the baseband. The extended band may be divided into sub-bands of various and multiple sizes (1506).
In this example, a baseband segment is used to identify a codeword for a particular shape (1508) to simulate a sub-band in the extended band (1510); the codeword may be transformed to create other shapes (e.g., other series of coefficients) that might more closely provide a model for the vector (1510) being coded. Thus, plural segments in the baseband are used as potential models to code data in the extended band. Instead of sending the actual coefficients (1510) in a sub-band in the extended band, an identifier such as a motion vector offset (1512) is sent to the decoder to represent the data for the extended band. However, sometimes there are no close matches in the baseband for the data being modeled in a sub-band. This may be because of low bitrate constraints that allow only a limited-size baseband. The baseband size (1502) relative to the extended band may vary based on computing resources such as time, output device, or bandwidth.
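The shape-matching step can be sketched as follows (a hypothetical helper; real implementations also code a per-sub-band scale factor and may transform the candidate shapes, both omitted here):

import numpy as np

def best_baseband_offset(baseband, subband):
    # Find the baseband offset whose shape best matches the extended-band
    # sub-band, by normalized correlation over sliding windows.
    n = len(subband)
    target = subband / (np.linalg.norm(subband) + 1e-12)
    best_off, best_score = 0, -np.inf
    for off in range(len(baseband) - n + 1):
        seg = baseband[off:off + n]
        score = float(np.dot(seg, target) / (np.linalg.norm(seg) + 1e-12))
        if score > best_score:
            best_off, best_score = off, score
    return best_off          # the offset is transmitted instead of coefficients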
One channel of audio/video is split into time segments.
Thus, at the decoder, when the frequency domain coefficients X[k] are transformed back to the time domain, each vector 1708 is multiplied by similar orthogonal matrices. The selected frequencies within the basis vectors 1704 are effectively multiplied by the corresponding time-split basis vectors.
The time-split transform should be done prior to quantization, but after the first transform 1704. For example, a time-split transform 1706 could also be applied before or after the channel transform 120, but before quantization 150 and before the weighter 140. A 2×2 block can be placed along the diagonal selectively.
A time-split transform is a selectively applied sum and difference of adjacent coefficients. For example, a time-split transform may also be called a selectively applied sum-difference of adjacent coefficients orthogonal transform (e.g., a SASDACO transform). Additionally, the coder signals the decoder, in an output stream, where to apply the orthogonal inverse transform. For example, a side-information bit for each frequency pair signals where to apply the time-splitting SASDACO transform, and eligible blocks may be anywhere along the diagonal.
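A sketch of the selective pairwise transform and its per-pair signaling follows (illustrative helpers; the 1/√2 scaling keeps each 2×2 block orthonormal, and the side-information layout is an assumption):

import numpy as np

def time_split(X, pair_starts):
    # Replace selected coefficient pairs (X[k], X[k+1]) with their
    # orthonormal sum and difference; record a side-information bit per pair.
    Y = np.array(X, dtype=float)
    side_bits = np.zeros(len(X) // 2, dtype=np.uint8)
    for k in pair_starts:                        # k is even: first of a pair
        Y[k]     = (X[k] + X[k + 1]) / np.sqrt(2.0)
        Y[k + 1] = (X[k] - X[k + 1]) / np.sqrt(2.0)
        side_bits[k // 2] = 1
    return Y, side_bits

def inverse_time_split(Y, side_bits):
    # The 2x2 sum-difference block is symmetric and orthonormal,
    # so it is its own inverse.
    X = np.array(Y, dtype=float)
    for i, bit in enumerate(side_bits):
        if bit:
            k = 2 * i
            X[k]     = (Y[k] + Y[k + 1]) / np.sqrt(2.0)
            X[k + 1] = (Y[k] - Y[k + 1]) / np.sqrt(2.0)
    return X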
Of course, a sum-difference orthogonal transform 2×2 block is not limited to the form shown above. For example, the 2×2 block may take a more general form (one plausible form is shown below):
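[ c    c ]
[ c   −c ]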
where c is a scale factor selected to vary the properties of the transform.
In one example, an extended portion 1504 of a sub-frame is signaled (e.g., with a bit) as being with or without time-split. A signaled sub-frame may further signal a sub-band 706 as time-split, and signal blocks on which to perform a SASDACO transform. In one such example, a signaled block implicitly indicates applying a SASDACO transform to the other sub-bands in the sub-frame. In another example, a signal(s) is provided for each sub-band 706. Pre-echo/post-echo decisions can be used to decide where to apply the time-split transform. A changing signal characteristic detection component may also be used to break a signal up into frequency ranges, such as high, medium, and low. For these distinctions, the transform coder determines whether there is a change in energy and applies a SASDACO transform accordingly.
Thus, a block transform (e.g., time-split transform) is used after MLT decomposition to selectively get better time resolution for only some frequency components. This is useful when larger time windows can be used to get better compression efficiency, for example with low, medium, or high frequency coefficients, while still providing better time resolution only where needed. A decision is used to select where to perform the time-split, by programmatically examining characteristics of the spectral data, for example, a time envelope, energy change, changing signal characteristic detection, pre-echo, or post-echo. A decision where to perform the time-split may instead be made by programmatically examining characteristics of changing signal characteristic detection. In another example, modification (reduction) of sub-frame size for base coding is made by programmatically examining the output of enhancement layer coding. These various ways of deciding where to apply a time-split transform may also be used to determine, in a second pass at coding, where to vary window size.
The techniques described above can be implemented in a suitable computing environment. An exemplary computing environment (1800) includes at least one processing unit and memory (1820).
A computing environment may have additional features. For example, the computing environment (1800) includes storage (1840), one or more input devices (1850), one or more output devices (1860), and one or more communication connections (1870). An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computing environment (1800). Typically, operating system software (not shown) provides an operating environment for other software executing in the computing environment (1800), and coordinates activities of the components of the computing environment (1800).
The storage (1840) may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, CD-RWs, DVDs, or any other medium which can be used to store information and which can be accessed within the computing environment (1800). The storage (1840) stores instructions for the software (1880) implementing the audio encoder.
The input device(s) (1850) may be a touch input device such as a keyboard, mouse, pen, or trackball, a voice input device, a scanning device, or another device that provides input to the computing environment (1800). For audio, the input device(s) (1850) may be a sound card or similar device that accepts audio input in analog or digital form. The output device(s) (1860) may be a display, printer, speaker, or another device that provides output from the computing environment (1800).
The communication connection(s) (1870) enable communication over a communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions, compressed audio or video information, or other data in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired or wireless techniques implemented with an electrical, optical, RF, infrared, acoustic, or other carrier.
The invention can be described in the general context of computer-readable media. Computer-readable media are any available media that can be accessed within a computing environment. By way of example, and not limitation, with the computing environment (1800), computer-readable media include memory (1820), storage (1840), communication media, and combinations of any of the above.
The invention can be described in the general context of computer-executable instructions, such as those included in program modules, being executed in a computing environment on a target real or virtual processor. Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or split between program modules as desired in various embodiments. Computer-executable instructions for program modules may be executed within a local or distributed computing environment.
For the sake of presentation, the detailed description uses terms like “determine,” “get,” “adjust,” and “apply” to describe computer operations in a computing environment. These terms are high-level abstractions for operations performed by a computer, and should not be confused with acts performed by a human being. The actual computer operations corresponding to these terms vary depending on implementation.
Having described and illustrated the principles of our invention with reference to an illustrative embodiment, it will be recognized that the illustrative embodiment can be modified in arrangement and detail without departing from such principles. It should be understood that the programs, processes, or methods described herein are not related or limited to any particular type of computing environment, unless indicated otherwise. Various types of general purpose or specialized computing environments may be used with or perform operations in accordance with the teachings described herein. Elements of the illustrative embodiment shown in software may be implemented in hardware and vice versa.
In view of the many possible embodiments to which the principles of our invention may be applied, we claim as our invention all such embodiments as may come within the scope and spirit of the following claims and equivalents thereto.