There are a number of possible encoding methods which may be used to compress audio files. Constant bitrate (CBR) encoding provides a constant rate output from a codec, i.e., a CBR encoder uses the same frame size for every frame. This may be beneficial when audio files are to be streamed across a medium of fixed bandwidth (e.g. over a wireless channel) because an audio file can be encoded at a bitrate which matches the available bandwidth. However, as the nature of an audio stream is typically very non-uniform, such CBR coding techniques use more bits than are required for simple passages whilst being limited in bit allocation for complex passages. Where a particular frame has a complex sound in it, the encoder reduces the quality of the signal until it can be encoded in the available number of bits.
Variable bitrate (VBR) encoding, by contrast, can respond to the complexity of any particular passage and allocate more bits to complex passages and fewer bits to less complex passages. Problems may occur, however, when streaming VBR encoded files because the resultant bitrate is unpredictable and the receiver may only have a limited buffer.
A compromise between CBR and VBR is average bitrate (ABR) encoding. In ABR the encoder has flexibility in allocating bits to frames dependent on the complexity of the signal in any particular frame whilst maintaining a target average bitrate over a defined time period. This results in a higher quality signal than CBR and a more predictable bitrate than VBR. However, as the encoder does not know in advance which portions of the audio are more complex and therefore require more bits, some form of bitrate adjustment is usually required in order to ensure that the target average bitrate is achieved. This bitrate adjustment, which may be referred to as ‘post-processing’, often requires many iterations around a loop before the target average bitrate is achieved and these iterations may be computationally intensive.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Methods of encoding a signal using a perceptual model are described in which a signal to mask ratio parameter within the perceptual model is tuned. The signal to mask ratio parameter is tuned based on a function of the bitrate of the part of the signal which has already been encoded and the target bitrate for the encoding process. The tuned signal to mask ratio parameter is used to compute a masking threshold for the signal which is then used to quantise the signal.
A first aspect provides a method of encoding a signal comprising: inputting the signal to a perceptual model; generating a masking threshold for the signal based on the signal and a signal to mask ratio parameter; quantising and encoding the signal based on the masking threshold; and tuning the signal to mask ratio parameter based on at least a function of a bitrate of an encoded portion of the signal and a target bitrate.
The method may further comprise: repeating the tuning of the signal to mask ratio parameter periodically. The signal may be divided into a sequence of frames and the signal to mask ratio parameter may be tuned every N frames, where N is an integer.
The signal to mask ratio parameter may be tuned by calculating an average bitrate of the encoded portion; and adjusting the signal to mask ratio parameter based on at least a function of the average bitrate and the target bitrate for the signal.
The adjustment of the signal to mask ratio parameter may be further based on a function of a short-term average bitrate calculated over a part of the encoded portion. The part of the encoded portion may comprise N frames, where N is an integer.
The adjustment of the signal to mask ratio parameter may also be based on a tuning factor. The tuning factor may be updated based on a measured change in bitrate.
The signal to mask ratio parameter may be adjusted using:
where BT is the target bitrate,
The tuning factor may be updated using:
where ΔSMR is a previous change in the signal to mask ratio parameter, Δb(n) is a corresponding resultant change in the short-term average bitrate and M is a smoothing factor.
The method may further comprise limiting any change in signal to mask ratio parameter and/or limiting any change in tuning factor.
The perceptual model may comprise a psychoacoustic model and the signal may comprise an audio signal.
A second aspect provides a method of encoding substantially as described with reference to any of
A third aspect provides an encoder comprising: a perceptual model arranged to generate a masking threshold for a signal based on the signal and a signal to mask ratio parameter; means for quantising and encoding the signal based on the masking threshold; and means for tuning the signal to mask ratio parameter based on at least a function of a bitrate of an encoded portion of the signal and a target bitrate.
The means for tuning may be arranged to: calculate an average bitrate of the encoded portion; and adjust the signal to mask ratio parameter based on at least a function of the average bitrate and the target bitrate for the signal.
The adjustment of the signal to mask ratio parameter may be further based on a function of a short-term average bitrate calculated over a part of the encoded portion. The part of the encoded portion may comprise N frames, where N is an integer.
The adjustment of the signal to mask ratio parameter may also be based on a tuning factor. The tuning factor may be updated based on a measured change in bitrate.
The means for tuning may be arranged to adjust the signal to mask ratio parameter by computing:
where BT is the target bitrate,
The means for tuning may be further arranged to: limit any change in the signal to mask ratio parameter and/or any change in the tuning factor.
The perceptual model may comprise a psychoacoustic model and the signal may comprise an audio signal.
The methods described herein may be performed by firmware or software in machine readable form on a storage medium. The software can be suitable for execution on a parallel processor or a serial processor such that the method steps may be carried out in any suitable order, or simultaneously.
A fourth aspect provides a computer program arranged to perform any of the methods described herein. The computer program may be stored on a tangible machine readable medium.
This acknowledges that firmware and software can be valuable, separately tradable commodities. It is intended to encompass software, which runs on or controls “dumb” or standard hardware, to carry out the desired functions. It is also intended to encompass software which “describes” or defines the configuration of hardware, such as HDL (hardware description language) software, as is used for designing silicon chips, or for configuring universal programmable chips, to carry out desired functions.
The preferred features may be combined as appropriate, as would be apparent to a skilled person, and may be combined with any of the aspects of the invention.
Embodiments of the invention will be described, by way of example, with reference to the following drawings, in which:
Common reference numerals are used throughout the figures to indicate similar features.
Embodiments of the present invention are described below by way of example only. These examples represent the best ways of putting the invention into practice that are currently known to the Applicant although they are not the only ways in which this could be achieved. The description sets forth the functions of the example and the sequence of steps for constructing and operating the example. However, the same or equivalent functions and sequences may be accomplished by different examples.
It will be appreciated that
In a perceptual encoder, such as shown in
In order to achieve a target bitrate (particularly in ABR) post-processing may be required. This post-processing involves iterating the encoding of signal frames (e.g. through adjusting the quantisation step size and/or scaling factors of sub-bands) until the target bitrate is achieved. These iterations are processor intensive. In an example, the post-processing may involve nested loops, e.g. an inner loop which changes the quantisation step size until the bit requirements for Huffman coding of a frame are small enough (as defined by the target bitrate), and an outer loop which applies scaling factors if the quantisation noise in a band exceeds the masking threshold. As these two loops are related (i.e. changes in quantisation step size affect the quantisation noise as well as the bitrate), the iteration process is complex.
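For illustration only, the kind of nested rate/noise control described above might be sketched as below. This is a minimal, hypothetical sketch rather than any real codec's implementation; the callables `quantise`, `huffman_bits` and `quant_noise`, the step-size multiplier and the scale increment are all assumptions introduced for the example.

```python
# Illustrative sketch of the nested post-processing loops described above: an
# inner loop coarsens the quantiser until the Huffman-coded frame fits the bit
# budget, and an outer loop raises per-band scale factors while the
# quantisation noise exceeds the masking threshold. The callables passed in
# (quantise, huffman_bits, quant_noise) are hypothetical placeholders.

def post_process_frame(spectrum, masking_threshold, bit_budget,
                       quantise, huffman_bits, quant_noise, max_iterations=100):
    step = 1.0                                  # global quantisation step size
    scale = [0.0] * len(masking_threshold)      # per sub-band scale factors

    for _ in range(max_iterations):             # outer loop: noise shaping
        # Inner loop: coarsen quantisation until the frame fits the bit budget.
        while huffman_bits(quantise(spectrum, step, scale)) > bit_budget:
            step *= 1.06                        # fewer bits, more quantisation noise

        # Bands whose quantisation noise still exceeds the masking threshold.
        noisy = [b for b, noise in enumerate(quant_noise(spectrum, step, scale))
                 if noise > masking_threshold[b]]
        if not noisy:
            break                               # both constraints are met
        for b in noisy:
            scale[b] += 1.5                     # amplify band: lower noise, more bits

    return step, scale
```

Because coarsening the step size changes both the bit count and the quantisation noise, the two loops interact and the number of iterations is hard to predict; this is the cost which the adaptive tuning described below aims to reduce.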
The masking thresholds are determined within the psychoacoustic model 102 using a signal to mask ratio (SMR) parameter, which determines the ratio of signal energy to the energy of ‘just noticeable noise’. The SMR is based on the principle that a sound may be made inaudible due to the presence of another sound and factors which may influence this include the frequencies of the sounds and the volume (or sound pressure level (SPL)) of the sounds. The nature of the sound, i.e. whether it is a tone or noise, can also affect the masking effect of the sound and the determination of the masking thresholds (by the psychoacoustic model) also includes analysis of the audio signal to identify potential noise maskers and tone maskers. SMR, noise maskers and tone maskers are described in more detail below with reference to
Within the psychoacoustic model, after a frame of audio is transformed into a frequency domain representation, it is analysed in the following manner. Every potential tone/noise masker is determined and for each critical band one masker type (either tone or noise) is selected. The masking effect of each masker is then spread over neighbouring frequencies. The functions used for spreading the masking effect depend on the type (noise/tone), energy and central frequency of the masker. A typical spreading function, which gives the masking effect at frequency bin i of a masker at frequency bin j, is:
where: T(i,j) is the noise threshold at frequency i due to the masker at frequency j (in dB);
Having obtained spreading functions (in dB) for all of the maskers (e.g. using equation (1)), the spreading functions are overlap-added in the linear domain to obtain the global masking threshold. The effect of the absolute threshold of hearing (ATH), which represents the sensitivity of the human ear to sounds at different frequencies, is also included in the calculation of the global masking threshold (e.g. by taking the maximum of the overlap-added spreading functions and the ATH at each point in frequency).
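A brief sketch of this combination step is given below: the per-masker thresholds (in dB) are summed in the linear power domain and the result is floored by the ATH. The array names are assumptions introduced for the example.

```python
import numpy as np

def global_masking_threshold(spread_db, ath_db):
    """Combine per-masker spreading functions into a global threshold.

    spread_db: array of shape (num_maskers, num_bins), each row being the
               masking contribution of one masker in dB (e.g. from eq. (1)).
    ath_db:    absolute threshold of hearing per frequency bin, in dB.
    """
    # Overlap-add in the linear (power) domain, then return to dB.
    linear_sum = np.sum(10.0 ** (spread_db / 10.0), axis=0)
    combined_db = 10.0 * np.log10(linear_sum)
    # Sounds below the ATH are inaudible, so take the maximum at each bin.
    return np.maximum(combined_db, ath_db)
```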
The constants in equations (1) and (2) are obtained through exhaustive psychoacoustic experiments and, while the constant values in equation (2) can change the characteristic of the masking spread functions in different ranges, those of equation (1) are more global. Specifically, the value of K changes the behaviour of the spreading function across different frequencies, whilst the SMR parameter has an even broader effect and determines a fixed offset applied to the whole masking threshold.
The encoder of
Typically, audio encoders use a value of SMR parameter in the psychoacoustic model which is based on a lookup table, which may have different SMR values for different target bitrates. These lookup tables may be based on values reported in literature. However, use of such a value of SMR to determine the quantisation levels results in a very variable bitrate. As described above, post-processing is then required to ensure that an average bitrate target is met over a predefined number of frames (which may be the whole file). Some encoders use a bitrate pool to limit the variability in bitrate between frames. In such an encoder, each frame is allowed to use a certain percentage of the bitrate pool and post-processing is still required to meet the target bitrate.
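For illustration only, such a lookup might resemble the sketch below; the bitrates and SMR values shown are hypothetical placeholders and are not taken from any published table.

```python
# Hypothetical fixed SMR lookup indexed by target bitrate (kbps). The values
# are placeholders for illustration only; real tables are derived from
# psychoacoustic experiments reported in the literature.
SMR_TABLE_DB = {96: 12.0, 128: 14.0, 160: 16.0, 192: 18.0}

def initial_smr(target_kbps):
    # Use the entry for the closest tabulated target bitrate.
    closest = min(SMR_TABLE_DB, key=lambda bitrate: abs(bitrate - target_kbps))
    return SMR_TABLE_DB[closest]
```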
Whilst the method of
The methods described below use audio signals and a psychoacoustic model by way of example only. The methods described herein are applicable to any signals and any perceptual model.
The SMR parameter may be tuned based on the short-term and/or long-term bitrates and the target bitrate, including being tuned based on any function of one or more of the bitrates, e.g. functions of the square of one or more of the bitrates, logarithms of one or more of the bitrates, etc., and/or based on functions of other parameters such as the number of samples encoded (which increases with increasing number of iterations). A function of a bitrate may, in an example, be the bitrate itself.
An example implementation of the second step (block 402) of the method of
In an implementation of the method of
where b(i) is the bitrate of frame i. If the average bitrate after another αn frames is to be equal to the target bitrate, BT, the average bitrate for the next αn frames, bA, should be equal to:
The instantaneous bitrate, b(n), therefore should change by:
And the change in SMR should be:
where β(n) is a measure of the amount of change in bitrate which results from a 1 dB change in SMR and is measured in kbps/dB. As a result, the new SMR for frame n+1 is given by:
The value of β(n) may be a predefined parameter and may be a fixed value or a value which is dependent on n. In some examples, the value of β(n) may be dependent upon the music type and/or the target bitrate. In an example, β(n) may be 10 kbps/dB at 160 kbps. The value of β(n) may also be tuned, as described below.
Although in the above description and equations (5)-(7), b(n) is described as the instantaneous bitrate, as also described above, the process may be repeated every frame or every N frames. Where the process is repeated every N frames, b(n) may be a short-term average bitrate, averaged over the N frames (e.g. a short-term average bitrate, averaged over 10 frames where N=10). In the limit where N=1, the short-term average bitrate is the same as an instantaneous bitrate. The value of
In the above description, the averages are described as normal average values. However, in other embodiments, different forms of average values may be used. For example,
In an embodiment, the value of α may be equal to two. This parameter sets the period over which the tuning of the SMR aims to correct the mismatch between the average bitrate calculated so far for the signal and the target bitrate. The value of this parameter may be selected so that ABR encoding with adaptive tuning performs better than using an internal bit reservoir. The value may be fixed or variable and may be selected based on the file size and/or based on the current position in the file (i.e. based on the value of n). In an example of a variable α, the value may be given by:
α=max(1000−n, 2)
Such a variable value of α would prevent large changes in SMR at the start of the encoding process and would decrease with time until it reaches a minimum value (in this case equal to two).
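The relationships above can be pulled together into a short sketch. As the equations themselves are not reproduced in the text, the expressions below (the long-term average b̄(n), the required average bA over the next αn frames, Δb(n) = bA − b(n) and ΔSMR = Δb(n)/β(n)) are an interpretation of the surrounding description rather than a verbatim implementation, and all names are illustrative assumptions.

```python
def tune_smr(smr, frame_bitrates, target_bitrate, beta,
             n_short=10, alpha_start=1000.0, alpha_min=2.0):
    """One adaptive-tuning update of the SMR parameter (illustrative sketch).

    frame_bitrates: per-frame bitrates (e.g. kbps) of all frames encoded so far.
    target_bitrate: target average bitrate BT, in the same units.
    beta:           tuning factor beta(n), in kbps per dB of SMR change.
    n_short:        number of frames N used for the short-term average b(n).
    """
    n = len(frame_bitrates)
    if n == 0:
        return smr

    long_term = sum(frame_bitrates) / n            # long-term average, b_bar(n)
    recent = frame_bitrates[-n_short:]
    short_term = sum(recent) / len(recent)         # short-term average, b(n)

    # Variable alpha as in the example above: large at the start of the file
    # (small corrections), decaying to a minimum of two.
    alpha = max(alpha_start - n, alpha_min)

    # Average bitrate b_A needed over the next alpha*n frames so that the
    # overall average reaches the target:
    #   (n*b_bar(n) + alpha*n*b_A) / ((1 + alpha)*n) = BT
    b_a = ((1.0 + alpha) * target_bitrate - long_term) / alpha

    delta_b = b_a - short_term                     # required change in b(n)
    delta_smr = delta_b / beta                     # resulting change in SMR, in dB

    return smr + delta_smr                         # SMR used for frame n+1
```

In practice the change ΔSMR may additionally be gated and limited, and β(n) itself may be adapted, as described further below.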
By adjusting the value of the SMR in this way, it is tuned to the statistics of the actual signal, rather than being fixed at values taken from the literature. As the tuning results in the resultant bitrate being closer to the target bitrate, the amount of post-processing (i.e. the number of iterations) required is reduced and the quality vs. bitrate compromise is made using a long term soft decision. Furthermore, as the number of iterations is reduced, the number also becomes more predictable and this provides a reasonably predictable processing time for the encoding of a signal.
In a further variation of the method, the value of β(n) may be tuned based on a measured change in bitrate as a result of a change in SMR. This enables the parameter β(n) to be made more accurate and to be adapted to the statistics of the actual signal. Such a method is shown in
In an example implementation:
where the bitrate change, Δb(n), is a measured value and is the change in the short-term average bitrate since the last change in SMR, ΔSMR is known (e.g. from equation (6) above) and M is a smoothing factor (and in an example, M=10).
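One possible form of this update, consistent with the description above but not confirmed by it (equation (8) is not reproduced here), is sketched below as an assumption: the observed bitrate change per dB is blended with the previous estimate using the smoothing factor M.

```python
def update_beta(beta_prev, measured_delta_b, applied_delta_smr, m=10):
    """Smoothed update of the tuning factor beta(n) (illustrative assumption).

    measured_delta_b:  measured change in the short-term average bitrate since
                       the last SMR change (kbps).
    applied_delta_smr: the SMR change which caused it (dB), e.g. from eq. (6).
    m:                 smoothing factor M (M = 10 in the example above).
    """
    if applied_delta_smr == 0:
        return beta_prev                                   # no new information
    observed = measured_delta_b / applied_delta_smr        # kbps per dB observed
    # First-order smoothing: keep (M - 1)/M of the previous estimate.
    return ((m - 1) * beta_prev + observed) / m
```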
In a further variation of the methods described above, the change in SMR may be controlled dependent on the position of the frame n in the signal. This may result in the controlled change in SMR (ΔSMR′) being given by:
ΔSMR′=f(n)·ΔSMR  (9)
where ΔSMR is determined by equation (6) above and f(n) is a function which is dependent on the position of the frame. The value of this function may be chosen so that for a first set of frames in the signal, there is no change in the value of SMR, e.g.:
For n=1 to 50: f(n)=0
For n>50: f(n)=1
In another example, the value of f(n) may change gradually and an example curve is shown in
In addition to, or instead of, controlling the change in SMR as described above (i.e. using function, f(n)), the maximum change in SMR may be limited, i.e. the value of ΔSMR (or ΔSMR′ where appropriate) may have a maximum permissible value. By limiting the step change in SMR, any over-compensation which might occur when going from passages of silence to speech/music is reduced.
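A short sketch combining the gating function f(n) of equation (9) with a cap on the step size is given below. The 50-frame threshold comes from the example above, while the cap value and function names are illustrative assumptions.

```python
def controlled_delta_smr(delta_smr, frame_index, warmup_frames=50,
                         max_step_db=2.0):
    """Apply f(n) gating and a step-size limit to a raw SMR change.

    warmup_frames: f(n) = 0 for the first 50 frames (per the example above)
                   and f(n) = 1 afterwards; a gradual ramp could be used instead.
    max_step_db:   illustrative cap on |delta SMR| to reduce over-compensation
                   when moving from silence to speech/music.
    """
    f_n = 0.0 if frame_index <= warmup_frames else 1.0
    gated = f_n * delta_smr                    # eq. (9): dSMR' = f(n) * dSMR
    return max(-max_step_db, min(max_step_db, gated))
```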
In a similar manner, where the tuning factor β(n) is also tuned (e.g. as shown in
Experimental results obtained using the method of
The ‘average number of iterations’ in the table above is the average number of iterations applied to sub-bands (e.g. by changing the quantisation step size or scaling factor) which are required in order to achieve the target bitrate. If a single sub-band is iterated more than once, each iteration is included within this figure.
In these results, the frame by frame bitrate variance is similar but the file by file variance is reduced substantially by the use of adaptive tuning methods as described herein. This has the result that the overall bitrate of each file is much closer to the mean value when adaptive tuning is applied, compared to without using adaptive tuning. This may be particularly important when the audio signal is encoded for transmission over a medium of limited bandwidth or power or to a receiver with a limited buffer, because the system may not be able to receive signals with an average bitrate which varies by a large amount. The number of iterations has also been reduced by around 10%, which is significant because the bitrate adjustment is one of the most computationally intensive parts of an encoder.
The methods described above relate to a single value of the SMR parameter and this parameter may be either the SMR(TMN) or the SMR(NMT). Where one SMR parameter (e.g. SMR(TMN)) is adaptively tuned using one of the methods described above, the value of the other SMR parameter (SMR(NMT) in this example) may be adjusted in a corresponding manner to maintain an approximate relationship between the two SMR parameters (e.g. a constant difference between the two).
In another example, however, the two SMR parameters (SMR(TMN) and SMR(NMT)) may be tuned independently, as shown in the example method of
In a variation of the method shown in
ΔSMR(TMN)′=γ·ΔSMR(TMN) (10)
ΔSMR(NMT)′=(1−γ)·ΔSMR(NMT) (11)
where the values of ΔSMR(TMN) and ΔSMR(NMT) may be calculated, for example, using equation (6) or (9).
The proportion, γ, may be calculated in many different ways and may be based on data for a single frame or for multiple frames (e.g. N frames). In an example:
where nTM is the number of tone maskers during the past N frames and nNM is the number of noise maskers in the past N frames. In another variation the determination of the numbers of maskers may be performed over N′ frames, where N′≠N.
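A sketch of splitting the adjustment between the two SMR parameters is given below. The specific form γ = nTM/(nTM + nNM) is an assumption consistent with, but not stated verbatim in, the text above, and the fallback for the case of no maskers is illustrative.

```python
def split_smr_adjustment(delta_smr_tmn, delta_smr_nmt,
                         num_tone_maskers, num_noise_maskers):
    """Apportion SMR changes between SMR(TMN) and SMR(NMT), cf. eqs. (10)-(11).

    gamma is assumed here to be the fraction of tone maskers seen over the
    last N frames; other definitions of the proportion are possible.
    """
    total = num_tone_maskers + num_noise_maskers
    gamma = num_tone_maskers / total if total else 0.5   # even split if no maskers
    return gamma * delta_smr_tmn, (1.0 - gamma) * delta_smr_nmt
```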
Whilst the above description refers to the tuning of the SMR parameter within the psychoacoustic model, in a further embodiment a different parameter within the model may be tuned in a similar manner. For example, the parameter K (from equation (1)) may be tuned instead of, or in addition to, the SMR parameter.
Where the psychoacoustic model (or equivalent for non-audio applications) uses a different spreading function from that shown in equation (1) above, parameters within that spreading function may be tuned in a corresponding manner to that described above.
Whilst the above description refers to the methods being useful in ABR encoding, the methods are also applicable to other coding techniques such as CBR encoding. In such an embodiment, the frame may be initially encoded using the parameters output by the psychoacoustic model and the post-processing may be used to ensure that the particular bitrate of the frame is the same as the target bitrate. Use of the methods described herein, which tune parameters within the psychoacoustic model, reduces the amount of post-processing required to meet the target bitrate. In an example implementation, the same equations may be used (as described above) but the short-term and long-term bitrates may be obtained from the bitrate resulting from the first iteration of the quantisation, i.e. the bitrate suggested by the psychoacoustic model. Use of such techniques for coding techniques other than ABR (such as CBR) reduces the number of iterations required and also reduces the computational requirements.
The methods are described above in relation to encoding audio signals, however this is by way of example only and the methods are also applicable to encoding other signals which use a perceptual model. Any reference to audio signals or psychoacoustic models may alternatively relate to any signal and any perceptual model. For video signals, the psychoacoustic model may be replaced by a perceptual model which is based on the physiology of the human eye and human visual acuity, rather than the physiology of the human ear and human aural perceptive abilities. As described above, the SMR parameter may also be interpreted as the desired perceptual SNR.
Any range or device value given herein may be extended or altered without losing the effect sought, as will be apparent to the skilled person.
It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. It will further be understood that reference to ‘an’ item refers to one or more of those items.
The steps of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate. Additionally, individual blocks may be deleted from any of the methods without departing from the spirit and scope of the subject matter described herein. Aspects of any of the examples described above may be combined with aspects of any of the other examples described to form further examples without losing the effect sought.
It will be understood that the above description of a preferred embodiment is given by way of example only and that various modifications may be made by those skilled in the art.
Number | Date | Country | Kind |
---|---|---|---|
0721376.2 | Oct 2007 | GB | national |
This application is a continuation of U.S. application Ser. No. 12/679,729, filed Mar. 24, 2010, which claims priority to PCT International Application No. PCT/GB2008/050804, filed Sep. 9, 2008, the contents of which are incorporated herein.
Publication Number | Date | Country
---|---|---
20130024201 A1 | Jan 2013 | US

Relation | Application Number | Country
---|---|---
Parent | 12679729 | US
Child | 13562841 | US