The present document relates to methods and systems for audio encoding. In particular, the present document relates to methods and systems for fast audio encoding using a parallel encoder architecture.
Today's media players support various audio formats such as mp3, mp4, WMA (Windows Media Audio), AAC (Advanced Audio Coding), HE-AAC (High Efficiency AAC), etc. At the same time, media databases (such as Simfy) provide millions of audio files for download. Typically, it is not economical to encode and store these millions of audio files in all of the audio formats and at all of the bit-rates that may be supported by the different media players. As such, it is beneficial to provide fast audio encoding schemes which enable the encoding of audio files “on the fly”, thereby enabling media databases to generate an audio file in a particular encoding (i.e. in a particular audio format, at a particular bit-rate) as and when it is requested.
According to an aspect, a frame-based audio encoder is described. The audio encoder may be configured to divide an audio signal comprising a plurality of time-domain samples into a sequence of frames, wherein each frame typically comprises a pre-determined number of samples. By way of example, a frame may comprise a fixed number M (e.g. M=1024) of samples. In an embodiment, the audio encoder is configured to perform Advanced Audio Coding (AAC).
The audio encoder may comprise K parallel transform units processing K frames of the audio signal (e.g. K successive frames of the audio signal) in parallel. The K parallel transform units may be implemented on K different processing units (e.g. graphics processing units), thereby accelerating the transform process by a factor of K (compared to a sequential processing of the K frames). A transform unit may be configured to transform a frame into a set of frequency coefficients. In other words, a transform unit may perform a time-domain to frequency-domain transformation, such as a Modified Discrete Cosine Transform (MDCT).
As such, each of the K parallel transform units may be configured to transform a respective one of the group of K frames (also referred to as a frame group) of the audio signal into a respective one of K sets of frequency coefficients. K may be greater than 1, e.g. greater than 2, 3, 4, 5, 10, 20, 50 or 100.
As indicated above, the K parallel transform units may be configured to apply an MDCT to the K frames of the frame group, respectively. In addition, the K parallel transform units may be configured to apply a window function to the K frames of the frame group, respectively. It should be noted that the type of transform and/or the type of window applied to a frame typically depends on a type of the frame (i.e. the frame-type, which is also referred to herein as the block-type). As such, the K parallel transform units may be configured to transform the K frames into K frame-type dependent sets of frequency coefficients, respectively.
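By way of illustration only, the following sketch (not part of the original disclosure) shows how K frames might be windowed and transformed in parallel. The sine window and the direct MDCT formula are standard textbook definitions; the function names, the use of a Python process pool as a stand-in for K processing units, and the 50% overlap handling are assumptions.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

M = 1024  # samples per frame

def sine_window(length):
    # Standard sine window, one of the window shapes commonly used in AAC.
    return np.sin(np.pi / length * (np.arange(length) + 0.5))

def mdct(block):
    # Direct (O(M^2)) MDCT of a 2M-sample windowed block -> M coefficients.
    # A production encoder would use an FFT-based fast MDCT instead.
    two_m = block.shape[0]
    m = two_m // 2
    n, k = np.arange(two_m), np.arange(m)
    return np.cos(np.pi / m * (n[None, :] + 0.5 + m / 2) * (k[:, None] + 0.5)) @ block

def transform_frame(args):
    # One "transform unit": windows frame k together with frame k-1
    # (50% overlap) and returns the set of frequency coefficients of frame k.
    prev_frame, frame = args
    return mdct(np.concatenate([prev_frame, frame]) * sine_window(2 * M))

def transform_group(frames):
    # frames: K+1 consecutive frames (one leading frame for the overlap).
    jobs = [(frames[k - 1], frames[k]) for k in range(1, len(frames))]
    with ProcessPoolExecutor() as pool:  # K parallel transform units
        return list(pool.map(transform_frame, jobs))
```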
The audio encoder may comprise K parallel signal-attack detection units. A signal-attack detection unit may be configured to classify a frame of the audio signal as a frame comprising an acoustic attack (e.g. a transient frame) or as a frame which does not comprise an acoustic attack (e.g. a tonal frame). As such, the K parallel signal-attack detection units may be configured to classify the K frames of the frame group, respectively, based on the presence or absence of an acoustic attack within the respective one of the K frames. The K parallel signal-attack detection units may be implemented on at least K different processing units. In particular, the K parallel signal-attack detection units may be implemented on the same respective processing units as the K parallel transform units.
The audio encoder may further comprise a frame-type detection unit configured to determine a frame-type of each of the K frames based on the classification of the K frames. Examples of frame-types are a short-block type (which is typically used for frames comprising a transient audio signal), a long-block type (which is typically used for frames comprising a tonal audio signal), a start-block type (which is typically used as a transition frame from a long-block type to a short-block type) and/or a stop-block type (which is typically used as a transition frame from a short-block type to a long-block type). As such, the frame-type of a frame may be dependent on the frame-type of one or more preceding frames. Consequently, the frame-type detection unit may be configured to determine a frame-type of a frame k, k=1, . . . , K, of the K frames also based on the frame-type of the preceding frame k−1.
By way of example, the frame-type detection unit may be configured to determine that a frame k, k=1, . . . , K, is of a short-block type if the frame k is classified as comprising an attack and if its preceding frame k−1 is of a short-block type or of a start-block type. The frame-type detection unit may be configured to determine that a frame k, k=1, . . . , K, is of a long-block type if the frame k is classified as not comprising an attack and if its preceding frame k−1 is of a long-block type or of a stop-block type. The frame-type detection unit may be configured to determine that a frame k, k=1, . . . , K, is of a start-block type if the frame k is classified as comprising an attack and if its preceding frame k−1 is of a long-block type. Furthermore, the frame-type detection unit may be configured to determine that a frame k, k=1, . . . , K, is of a stop-block type if the frame k is classified as not comprising an attack and if its preceding frame k−1 is of a short-block type.
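The four rules above define a small state machine. A minimal sketch follows (illustrative names; the two transitions not covered by the stated rules, i.e. an attack after a stop-block and no attack after a start-block, are resolved here by assumption as start and stop, respectively):

```python
LONG, SHORT, START, STOP = "long", "short", "start", "stop"

def next_frame_type(prev_type, has_attack):
    # Frame-type of frame k from its attack classification and the
    # frame-type of the preceding frame k-1.
    if has_attack:
        # Short blocks may only follow short or start blocks; otherwise a
        # start block is inserted as a transition from long to short.
        return SHORT if prev_type in (SHORT, START) else START
    # Long blocks may only follow long or stop blocks; otherwise a stop
    # block is inserted as a transition from short back to long.
    return LONG if prev_type in (LONG, STOP) else STOP

def frame_types(attacks, initial_type=LONG):
    # Sequential frame-type detection over the K attack classifications
    # delivered by the K parallel signal-attack detection units.
    types, prev = [], initial_type
    for has_attack in attacks:
        prev = next_frame_type(prev, has_attack)
        types.append(prev)
    return types

# Example: frame_types([False, True, True, False])
# yields ["long", "start", "short", "stop"].
```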
The K parallel transform units may be operated in parallel to the K parallel signal-attack detection units and the frame-type detection unit. As such, the K parallel transform units may be implemented on different processing units than the K parallel signal-attack detection units, thereby enabling a further parallelization of the encoder on at least 2K processing units. In such cases, the transform units may be configured to perform speculative execution of the frame-type dependent windowing and/or transform processing. In particular, the transform units may be configured to determine a plurality of frame-type dependent sets of frequency coefficients for a respective frame of the frame group. Even more particularly, the transform units may be configured to determine a frame-type dependent set of frequency coefficients for each of the possible frame-types of the frame. The audio encoder may then comprise a selection unit configured to select (for each one of the K frames) the appropriate set of frequency coefficients from the plurality of frame-type dependent sets of frequency coefficients, wherein the appropriate set of frequency coefficients corresponds to the frame-type of the respective frame.
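A minimal sketch of this speculative variant (transform_for_type is a stand-in for the frame-type dependent windowing and MDCT; all names are illustrative):

```python
FRAME_TYPES = ("long", "short", "start", "stop")

def speculative_transform(prev_frame, frame, transform_for_type):
    # Speculative execution: determine one frame-type dependent set of
    # frequency coefficients per possible frame-type, before the actual
    # frame-type of the frame is known.
    return {ft: transform_for_type(prev_frame, frame, ft) for ft in FRAME_TYPES}

def select_coefficients(speculative_sets, detected_types):
    # Selection unit: for each of the K frames, keep only the set of
    # coefficients matching the frame-type determined by the (parallel)
    # signal-attack detection and frame-type detection path.
    return [sets[ft] for sets, ft in zip(speculative_sets, detected_types)]
```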
Alternatively, the K parallel signal-attack detection units may be operated in sequence with the frame-type detection unit and in sequence with the K parallel transform units. As such, the K parallel signal-attack detection units may be implemented on the same respective processing units as the K parallel transform units. In this case, the K parallel transform units may know the frame-type of the respective frame, such that the K parallel transform units may be configured to transform the K frames into the respective frame-type dependent sets of frequency coefficients which correspond to the frame-type of the respective frame.
The audio encoder may comprise K parallel quantization and encoding units. The K parallel quantization and encoding units may be implemented on at least K different processing units (e.g. the respective processing units of the K parallel transform units). The quantization and encoding units may be configured to quantize and entropy encode (e.g. Huffman encode) the sets of frequency coefficients, respectively, under consideration of a respective number of allocated bits. In other words, the quantization and encoding of the K frames of the frame group may be performed independently by K parallel quantization and encoding units. For this purpose, the K parallel quantization and encoding units are provided with K indications of respective numbers of allocated bits. The indications of respective numbers of allocated bits may be determined jointly for the frame group in a joint bit allocation process, as will be outlined below.
The audio encoder may further comprise K parallel psychoacoustic units. The K parallel psychoacoustic units may be implemented on at least K different processing units. Typically, the K parallel psychoacoustic units may be implemented on the same respective processing units as the K parallel transform units, as the K parallel psychoacoustic units typically further process the respective K sets of frequency coefficients provided by the K parallel transform units. The K parallel psychoacoustic units may be configured to determine one or more frame dependent (and typically frequency dependent) masking thresholds based on the K sets of frequency coefficients, respectively. Alternatively or in addition, the K parallel psychoacoustic units may be configured to determine K perceptual entropy values for the corresponding K frames of the frame group. In general terms, a perceptual entropy value provides an indication of the informational content of a corresponding frame. Typically, the perceptual entropy value corresponds to an estimate of a number of bits which should be used to encode the corresponding frame. In particular, the perceptual entropy value for a given frame may indicate how many bits are needed to quantize and encode the given frame, under the assumption that the noise which is allocated to the quantized frame lies just below the one or more masking thresholds.
The K parallel quantization and encoding units may be configured to quantize and entropy encode the K sets of frequency coefficients, respectively, under consideration of the respective one or more frame dependent masking thresholds. As such, it can be ensured that the quantization of the sets of frequency coefficients is performed under psychoacoustic considerations, thereby reducing the audible quantization noise.
The audio encoder may comprise a bit allocation unit configured to allocate the respective number of bits to the K parallel quantization and encoding units, respectively. For this purpose, the bit allocation unit may consider a total number of available bits for the frame group and distribute the total number of available bits to the respective frames of the frame group. The bit allocation unit may be configured to allocate the respective number of bits under consideration of the frame-type of the respective frame of the frame group. Furthermore, the bit allocation unit may take into account the frame-types of some or all of the frames of the frame group, in order to improve the allocation of bits to the frames of the frame group. Alternatively or in addition, the bit allocation unit may take into account the K perceptual entropy values for the K frames of the frame group determined by the K parallel psychoacoustic units, in order to allocate the respective number of bits to the K frames. In particular, the bit allocation unit may be configured to scale or modify the K perceptual entropy values in dependence on the total number of available bits for the frame group, thereby adapting the bit allocation to the perceptual entropy of the K frames of the frame group.
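One possible realization of such an allocation is sketched below, under the assumption (permitted but not mandated by the text) that the total budget is distributed proportionally to the perceptual entropy values:

```python
def allocate_bits(total_bits, perceptual_entropies, min_bits=0):
    # Scale the perceptual entropy values of the K frames so that they
    # sum to the total number of bits available for the frame group.
    pe_sum = sum(perceptual_entropies) or 1.0
    alloc = [max(min_bits, round(total_bits * pe / pe_sum))
             for pe in perceptual_entropies]
    # Absorb rounding drift so that exactly total_bits are handed out
    # (a sketch-level simplification; may slightly favor the last frame).
    alloc[-1] += total_bits - sum(alloc)
    return alloc

# Example: allocate_bits(8000, [2.0, 6.0]) yields [2000, 6000].
```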
The audio encoder may further comprise a bit reservoir tracking unit configured to track a number of previously consumed bits used for encoding frames of the audio signal preceding the K frames. Typically, the audio encoder is provided with a target bit-rate for the encoded audio signal. As such, the bit reservoir tracking unit may be configured to track the number of previously consumed bits in relation to the number of targeted bits. Furthermore, the bit reservoir tracking unit may be configured to update the number of previously consumed bits with a number of bits used by the K parallel quantization and encoding units for encoding the K sets of frequency coefficients, thereby yielding a number of currently consumed bits. The number of currently consumed bits may then be the basis for the bit allocation process for the subsequent frame group of subsequent K frames.
The bit allocation unit may be configured to allocate the respective number of bits (i.e. the respective number of bits allocated for the encoding of the K frames of the frame group) under consideration of the number of previously consumed bits (provided by the bit reservoir tracking unit). Furthermore, the bit allocation unit may be configured to allocate the respective number of bits under consideration of the target bit-rate for encoding the audio signal.
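By way of illustration, a minimal bit-reservoir tracker is sketched below; the class interface, the fill rule and the initial fill level are assumptions, not taken from the text:

```python
class BitReservoir:
    # Tracks the previously consumed bits against the target bit-rate:
    # each frame adds its nominal per-frame budget, each encoded frame
    # group drains the bits actually used.
    def __init__(self, bits_per_frame, capacity):
        self.bits_per_frame = bits_per_frame
        self.capacity = capacity
        self.level = capacity // 2  # assumed initial fill level

    def available_for_group(self, k):
        # Total number T of bits that may be allocated to a group of K
        # frames: the nominal group budget plus the current surplus.
        return k * self.bits_per_frame + self.level

    def update(self, used_bits, k):
        # Update with the number of bits actually consumed for K frames,
        # yielding the basis for the allocation of the next frame group.
        self.level += k * self.bits_per_frame - used_bits
        self.level = max(0, min(self.level, self.capacity))
```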
As such, the bit allocation unit may be configured to allocate the respective bits to the frames of a frame group in a group-wise manner (in contrast to a frame-by-frame manner). In order to further improve the allocation of bits, the bit allocation unit may be configured to allocate the respective number of bits to the K quantization and encoding units in an analysis-by-synthesis manner by taking into account the number of currently consumed bits. In other words, for a frame group, several iterations of bit allocation and quantization & encoding may be performed, wherein at subsequent iterations, the bit allocation unit may take into account the number of currently consumed bits used by the K quantization and encoding units.
As such, the bit allocation unit may be configured to allocate the respective number of bits under consideration of the number of currently consumed bits, thereby yielding a respective updated number of allocated bits for the K parallel quantization and encoding units, respectively. The K parallel quantization and encoding units may be configured to quantize and entropy encode the respective K sets of frequency coefficients, under consideration of the respective updated number of allocated bits. This iterative bit allocation process may be repeated for a pre-determined number of iterations, in order to improve the bit allocation among the frames of the frame group.
The K parallel quantization and encoding units and the K parallel transform units may be configured to operate in a pipeline architecture. This means that the K parallel transform units may be configured to process a succeeding frame group comprising K succeeding frames, while the K parallel quantization and encoding units encode the sets of frequency coefficients of the current frame group. In other words, the K parallel quantization and encoding units may quantize and encode K preceding sets of frequency coefficients corresponding to K preceding frames of the group of K frames, while the K parallel transform units transform the frames of the group of K frames.
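A minimal sketch of such a two-stage pipeline using Python threads and a hand-off queue (transform_group and encode_group stand for the K parallel transform units and the K parallel quantization and encoding units, respectively):

```python
import queue
import threading

def pipeline(frame_groups, transform_group, encode_group):
    # Stage 1 transforms frame group n+1 while stage 2 quantizes and
    # encodes the coefficient sets of frame group n.
    handoff = queue.Queue(maxsize=1)
    encoded = []

    def stage1():
        for group in frame_groups:
            handoff.put(transform_group(group))   # K parallel transforms
        handoff.put(None)                         # end-of-stream marker

    producer = threading.Thread(target=stage1)
    producer.start()
    while (coeff_sets := handoff.get()) is not None:
        encoded.append(encode_group(coeff_sets))  # K parallel Q&E units
    producer.join()
    return encoded
```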
According to a further aspect, a frame-based audio encoder configured to encode K frames (i.e. a frame group) of an audio signal in parallel on at least K different processing units is described. Any of the features related to audio encoders described in the present document are applicable. The audio encoder may comprise at least one of: K parallel transform units, wherein the K parallel transform units are configured to transform the K frames into K sets of frequency coefficients, respectively; K parallel signal-attack detection units, wherein the signal-attack detection units are configured to classify the K frames, respectively, based on the presence or absence of an acoustic attack within the respective one of the K frames; and/or K parallel quantization and encoding units, wherein the K parallel quantization and encoding units are configured to quantize and entropy encode the K sets of frequency coefficients, respectively.
According to a further aspect, a frame-based audio encoder configured to encode K frames (i.e. a frame group) of an audio signal in parallel on at least K different processing units is described. Any of the features related to audio encoders described in the present document are applicable. The audio encoder comprises a transform unit configured to transform the K frames into K corresponding sets of frequency coefficients, respectively. Furthermore, the audio encoder comprises K parallel quantization and encoding units, wherein the K parallel quantization and encoding units are configured to quantize and entropy encode the K sets of frequency coefficients, respectively, under consideration of a respective number of allocated bits. In addition, the audio encoder comprises a bit allocation unit configured to allocate the respective number of bits to the K parallel quantization and encoding units, respectively, based on a previously consumed number of bits used for encoding frames of the audio signal preceding the K frames.
According to another aspect, a frame-based audio encoder configured to encode K frames of an audio signal in parallel on at least K different processing units is described. Any of the features related to audio encoders described in the present document are applicable. The audio encoder comprises K parallel signal-attack detection units, wherein the signal-attack detection units are configured to classify the K frames, respectively, based on the presence or absence of an acoustic attack within the respective frame. Furthermore, the audio encoder comprises a frame-type detection unit configured to determine a frame-type of frame k, k=1, . . . , K, of the frame group based on the classification of the frame k and based on the frame-type of the previous frame k−1. In addition, the audio encoder comprises K parallel transform units, wherein the K parallel transform units are configured to transform the K frames into K sets of frequency coefficients, respectively. Typically, the set of frequency coefficients corresponding to a frame depends on the frame-type of that frame. In other words, the transform units are configured to perform a frame-type dependent transformation.
According to a further aspect, a method for encoding an audio signal comprising a sequence of frames is described. The method may comprise any one or more of: transforming K frames of the audio signal into corresponding K sets of frequency coefficients in parallel; classifying in parallel each of the K frames based on the presence or absence of an acoustic attack within the respective one of the K frames; and quantizing and entropy encoding in parallel each one of the K sets of frequency coefficients, under consideration of a respective number of allocated bits.
According to another aspect, a method for encoding an audio signal comprising a sequence of frames is described. The method may comprise transforming K frames of the audio signal into K corresponding sets of frequency coefficients; quantizing and entropy encoding each of the K sets of frequency coefficients in parallel, under consideration of a respective number of allocated bits; and allocating the respective number of bits based on a previously consumed number of bits used for encoding frames of the audio signal preceding the K frames.
According to a further aspect, a method for encoding an audio signal comprising a sequence of frames is described. The method may comprise classifying each of K frames of the audio signal in parallel, based on the presence or absence of an acoustic attack within a respective one of the K frames; determining a frame-type of each frame k, k=1, . . . , K, of the K frames based on the classification of the frame k and based on the frame-type of the frame k−1; and transforming each of the K frames in parallel into a respective one of K sets of frequency coefficients; wherein the set k of frequency coefficients corresponding to frame k depends on the frame-type of frame k.
According to a further aspect, a software program is described. The software program may be adapted for execution on a processor and for performing the method steps outlined in the present document when carried out on a computing device.
According to another aspect, a storage medium is described. The storage medium may comprise a software program adapted for execution on a processor and for performing the method steps outlined in the present document when carried out on a computing device.
According to a further aspect, a computer program product is described. The computer program may comprise executable instructions for performing the method steps outlined in the present document when executed on a computer.
It should be noted that the methods and systems including their preferred embodiments as outlined in the present document may be used stand-alone or in combination with the other methods and systems disclosed in this document. Furthermore, all aspects of the methods and systems outlined in the present document may be arbitrarily combined. In particular, the features of the claims may be combined with one another in an arbitrary manner.
The invention is explained below in an exemplary manner with reference to the accompanying drawings.
Each frame of samples is converted into the frequency domain using a Modified Discrete Cosine Transform (MDCT). In order to circumvent the problem of spectral leakage, which typically occurs in the context of block-based (also referred to as frame-based) time frequency transformations, the MDCT makes use of overlapping windows, i.e. the MDCT is an example of a so-called overlapped transform.
In the present document, various measures for accelerating such a frame-based audio encoding scheme are described.
Subsequent to the decision on the block-type, an appropriate window is applied to the frame of the audio signal 101 (windowing unit 202). As outlined above, the MDCT transform is an overlapped transform, i.e. the window is applied to the current frame k of the audio signal 101 and to the previous frame k−1 (i.e. to a total of 2M=2048 samples). The windowing unit 202 typically applies a type of window which is adapted to the block-type determined in the block-type decision unit 201. This means that the shape of the window is dependent on the actual type of the frame k. Subsequent to applying a window to a group of adjacent frames, the appropriate MDCT transform is applied to the windowed group of adjacent frames, in order to yield the set of frequency coefficients corresponding to the frame of the audio signal 101. By way of example, if the block-type of the current frame k is “short-block”, a sequence of eight short blocks of windowed samples of the current frame k is converted into eight sets of frequency coefficients using eight consecutive MDCT transforms 203. On the other hand, if the block-type of the current frame k is “long-block”, the windowed samples of the current frame k are converted into a single set of frequency coefficients using a single MDCT transform.
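A sketch of this block-type dependent stage is given below, reusing the mdct() helper from the earlier sketch; the centred placement of the eight 50%-overlapped short blocks follows the usual AAC layout, and the dedicated start/stop window shapes are omitted for brevity:

```python
import numpy as np

M, M_SHORT = 1024, 128  # long frame length; short-block length (M/8)

def sine_window(length):
    return np.sin(np.pi / length * (np.arange(length) + 0.5))

def block_transform(prev_frame, frame, block_type):
    both = np.concatenate([prev_frame, frame])  # 2M = 2048 samples
    if block_type == "short":
        # Eight 50%-overlapped short MDCTs over 2*M_SHORT samples each,
        # centred within the 2M-sample region: eight sets of coefficients.
        w = sine_window(2 * M_SHORT)
        start = (2 * M - 9 * M_SHORT) // 2  # = 448 for M = 1024
        return [mdct(both[s:s + 2 * M_SHORT] * w)
                for s in range(start, start + 8 * M_SHORT, M_SHORT)]
    # "long" (and, simplified, "start"/"stop"): a single MDCT over the
    # full 2M windowed samples, yielding one set of M coefficients.
    return [mdct(both * sine_window(2 * M))]
```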
The above process is repeated for all of the frames of the audio signal 101, thereby yielding a sequence of sets of frequency coefficients which are quantized and encoded in a sequential manner. Due to the sequential encoding scheme, the overall encoding speed is limited by the processing power of the processing unit which is used to encode the audio signal 101.
It is proposed in the present document to break up the dependency chain of a conventional audio encoder 100, 200 described above, thereby enabling a plurality of K frames 305 of the audio signal 101 to be processed in parallel on different processing units.
Having determined the respective block-type, the window-and-transform unit 303 may apply the appropriate window and the appropriate MDCT transform to each of the plurality of K frames 305. This may be done in parallel for the K frames 305. In view of the overlap between adjacent frames, the K parallel windowing and transform processes may be fed with groups of adjacent frames. By way of example, the K parallel windowing and transform processes may be identified by the index k=1, . . . , K. A kth process handles the kth frame of the plurality of K frames. As the windowing and the transform typically overlap, the kth process may in addition be provided with one or more preceding frames of the kth frame (e.g. with the (k−1)th frame). As such, the K processes may be performed in parallel, thereby providing K sets of frequency coefficients for the K frames 305 of the audio signal 101.
In contrast to the sequential architecture 200 described above, such a parallel architecture 300 allows the windowing and transform processing of the K frames 305 to be distributed over K parallel processing units, thereby accelerating the transform stage of the encoder.
Alternatively or in addition, the architecture 200 may be parallelized by speculatively executing the block-type dependent windowing and transform processing, as described in the following.
As outlined above, the block-type of a frame typically depends on the block-type of the preceding frame, such that the block-type dependent windowing and transformation of a frame cannot start before the preceding frames have been classified. In order to break this dependency, the windowing and transformation may be executed speculatively, i.e. a set of frequency coefficients may be determined for each of the four possible block-types of a frame before the actual block-type is known.
As a result of such speculative execution, L frames of the audio signal may be submitted to windowing and transformation processing 403 in parallel using different processing units. Each of the processing units (e.g. the lth processing unit, l=1, . . . , L) determines four sets of frequency coefficients for the lth frame handled by the processing unit, i.e. each processing unit performs about four times more processing steps compared to the windowing and transformation 301 performed when the block-type is already known. Nevertheless, the overall encoding speed can be increased by a factor of L/4 by the parallelized architecture 400.
The parallel architecture 400 may be used alternatively or in combination with the parallel architecture 300. It should be noted, however, that as a result of parallelization, the encoding latency will typically increase. On the other hand, the encoding speed may be significantly increased, thereby making the parallelized architectures interesting in the context of audio download applications, where fast (“on the fly”) downloads can be achieved by massive parallelization of the encoding process.
Furthermore, the quantization and encoding 508 of the K sets of frequency coefficients may be performed in K parallel quantization and encoding processes, wherein each process typically comprises the selection of an appropriate Huffman table for entropy encoding the respective set of quantized frequency coefficients.
It should be noted that the search for a particular (optimum) Huffman table may be further parallelized. It is assumed that P is the total number of possible Huffman tables. For the kth frame (k=1, . . . , K), the kth set of frequency coefficients may be encoded using a different one of the P Huffman tables in P parallel processes (running on P parallel processing units). This leads to P encoded sets of frequency coefficients, wherein each of the P encoded sets of frequency coefficients has a corresponding bit-length. The Huffman table which leads to the encoded set of frequency coefficients with the lowest bit-length may be selected as the particular (optimum) Huffman table for the kth frame. Alternatively to a full parallelization scheme, intermediate parallelization schemes such as a divide-and-conquer strategy with alpha/beta pruning of branches (wherein each branch is executed in a separate parallel processing unit) may be used to determine the particular (optimum) Huffman table for the kth frame.
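A sketch of the parallel table search (illustrative: real AAC codebooks operate per scalefactor band on tuples of coefficients, whereas here each candidate “table” is reduced to a per-value code-length map, with an assumed escape cost for unmapped values):

```python
from concurrent.futures import ProcessPoolExecutor

def encoded_length(args):
    # Bit-length of a set of quantized coefficients when encoded with one
    # candidate Huffman table (here: a dict mapping value -> code length).
    table, coeffs = args
    escape_cost = max(table.values()) + 8  # assumed cost for misses
    return sum(table.get(c, escape_cost) for c in coeffs)

def best_huffman_table(tables, coeffs):
    # Evaluate all P candidate tables in parallel (one process per table)
    # and select the table yielding the shortest encoded representation.
    with ProcessPoolExecutor() as pool:
        lengths = list(pool.map(encoded_length, [(t, coeffs) for t in tables]))
    best = min(range(len(tables)), key=lengths.__getitem__)
    return best, lengths[best]
```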
Since Huffman coding is a variable code length method and since noise shaping should be performed to keep the quantization noise below the frequency dependent masking threshold, a global gain value (determining the quantization step size) and scalefactors (determining noise shaping factors for each scalefactor (i.e. frequency) band) are typically applied prior to the actual quantization. The process for determining an optimum tradeoff between the global gain value and the scalefactors for a given frame of the audio signal 101 (under the constraint of a target bit-rate and/or target perceptual distortion) is usually performed by two nested iteration loops in an analysis-by-synthesis manner. In other words, the quantization and encoding process 152 typically comprises two nested iteration loops, a so-called inner iteration loop (or rate loop) and an outer iteration loop (or noise control loop).
In the context of the inner iteration loop (rate loop), a global gain value is determined such that the quantized and encoded set of frequency coefficients meets the target bit-rate (or meets the allocated number of bits for the particular frame k). In general, the Huffman code tables assign shorter code words to (more frequent) smaller quantized values. If the number of bits resulting from the coding operation exceeds the number of bits available to code a given frame k, this can be corrected by adjusting the global gain to result in a larger quantization step size, thus leading to smaller quantized values. This operation is repeated with different quantization step sizes until the number of bits required for the Huffman coding is smaller than or equal to the number of bits allocated to the frame. This loop is called the rate loop because it modifies the overall encoder bit-rate until the bit-rate meets the target bit-rate.
In the context of the outer iteration loop (noise control loop), the frequency dependent scalefactors are adapted to the frequency dependent masking thresholds to control the overall perceptual distortion. In order to shape the quantization noise according to the frequency dependent masking thresholds, scalefactors are applied to each scalefactor band. The scalefactor bands correspond to frequency intervals within the audio signal, and each scalefactor band comprises a different subset of a set of frequency coefficients. Typically, the scalefactor bands correspond to a perceptually motivated fragmentation of the overall frequency range of the audio signal into critical subbands. The encoder typically starts with a default scalefactor of 1 for each scalefactor band. If the quantization noise in a given band is found to exceed the frequency dependent masking threshold (i.e. the allowed noise in this band), the scalefactor for this band is adjusted to reduce the quantization noise. As such, the scalefactor corresponds to a frequency dependent gain value (in contrast to the overall gain value adjusted in the rate loop), which may be used to control the quantization step in each scalefactor band individually.
Since achieving a smaller quantization noise requires a larger number of quantization steps and thus a higher bit-rate, the rate adjustment loop may need to be repeated every time new scalefactors are used. In other words, the rate loop is nested within the noise control loop. The outer (noise control) loop is executed until the actual noise (computed from the difference of the original spectral values minus the quantized spectral values) is below the masking threshold for every scalefactor band (i.e. critical band).
While the inner iteration loop always converges, this is not true for the combination of both iteration loops. By way of example, if the perceptual model requires quantization step sizes so small that the rate loop always has to increase the quantization step sizes to enable coding at the target bit-rate, both loops will not converge. Conditions may be set to stop the iterations if no convergence is achieved. Alternatively or in addition, the determination of the masking thresholds may be based on the target bit-rate. In other words, the masking thresholds determined e.g. in the perceptual processing unit 506 may be dependent on the target bit-rate. This typically enables a convergence of the quantization and encoding scheme to the target bit-rate.
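A schematic sketch of the two nested loops is given below. It is greatly simplified: quantize(), encoded_bits() and band_noise() are stand-ins for the actual quantization, Huffman coding and per-band distortion measurement, and the unit step-size and scalefactor updates are assumptions:

```python
def rate_loop(coeffs, scalefactors, allocated_bits, global_gain=0):
    # Inner (rate) loop: coarsen the quantization via the global gain
    # until the Huffman-encoded frame fits the allocated number of bits.
    while True:
        q = quantize(coeffs, global_gain, scalefactors)
        if encoded_bits(q) <= allocated_bits:
            return q, global_gain
        global_gain += 1  # larger step size -> smaller quantized values

def noise_control_loop(coeffs, thresholds, allocated_bits, max_iter=32):
    # Outer (noise control) loop: amplify scalefactor bands whose
    # quantization noise exceeds the masking threshold, then rerun the
    # nested rate loop; max_iter guards against non-convergence.
    scalefactors = [0] * len(thresholds)  # 0 ~ unity gain (no amplification)
    for _ in range(max_iter):
        q, gain = rate_loop(coeffs, scalefactors, allocated_bits)
        noisy = [b for b, thr in enumerate(thresholds)
                 if band_noise(coeffs, q, b) > thr]
        if not noisy:
            return q, gain, scalefactors  # all bands below threshold
        for b in noisy:
            scalefactors[b] += 1          # finer quantization in band b
    return q, gain, scalefactors          # stopped without convergence
```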
It should be noted that the above mentioned iterative quantization and encoding process (also referred to as noise allocation process) is only one possible process for determining a set of quantized and encoded frequency coefficients. The parallelization schemes described in the present document equally apply to other implementations of the parallel noise allocation processes within the quantization and encoding unit 508.
As a result of the quantization and encoding process, a set of quantized and encoded frequency coefficients is obtained for a corresponding frame of the audio signal 101. This set of quantized and encoded frequency coefficients is represented as a certain number of bits which typically depends on the number of bits allocated to the frame. The acoustic content of an audio signal 101 may vary significantly from one frame to the next, e.g. a frame comprising tonal content versus a frame comprising transient content. Accordingly, the number of bits required to encode the frames (given a certain allowed perceptual distortion) may vary from frame to frame. By way of example, a frame comprising tonal content may require a reduced number of bits compared to a frame comprising transient content. At the same time, the overall encoded audio signal should meet a certain target bit-rate, i.e. the average number of bits per frame should meet a pre-determined target value.
In order to ensure a pre-determined target bit-rate and in order to take into account the varying bit requirements of the frames, the AAC encoder 100 typically makes use of a bit allocation process which works in conjunction with an overall bit reservoir. The overall bit reservoir is filled with a number of bits on a frame-by-frame basis in accordance with the target bit-rate. At the same time, the overall bit reservoir is updated with the number of bits which were used to encode a past frame. As such, the overall bit reservoir tracks the amount of bits which have already been used to encode the audio signal 101 and thereby provides an indication of the number of bits which are available for encoding a current frame of the audio signal 101. This information is used by the bit allocation process to allocate a number of bits for encoding of the current frame. For this allocation process, the block-type of the current frame may be taken into account. As a result, the bit allocation process may provide the quantization and encoding unit 152 with an indication of the number of bits which are available for the encoding of the current frame. This indication may comprise a minimum number of allocated bits, a maximum number of allocated bits and/or an average number of allocated bits.
The quantization and encoding unit 152 uses the indication of the number of allocated bits to quantize and encode the set of frequency coefficients corresponding to the current frame and thereby determines a set of quantized and encoded frequency coefficients which takes up an actual number of bits. This actual number of bits is typically only known after execution of the above explained quantization and encoding (including the nested loops), and may vary within the bounds provided by the indication of the number of allocated bits. The overall bit reservoir is updated using the actual number of bits and the bit allocation process is repeated for the succeeding frame.
An example bit allocation process 507 may comprise the step of updating the bit reservoir subsequent to the actual quantization and encoding 508 of K sets of frequency coefficients. The updated bit reservoir may then be the basis for a bit allocation process 507 which provides the allocation of bits to the subsequent K sets of frequency coefficients in parallel. In other words, the bit reservoir update process 509 and the bit allocation process 507 may be performed per groups of K frames (instead of performing the process on a per frame basis). More particularly, the bit allocation process 507 may comprise the step of obtaining a total number T of available bits for a group of K frames (instead of obtaining the number of available bits on a frame-by-frame basis) from the bit reservoir. Subsequently, the bit allocation process 507 may distribute the total number T of available bits to the individual frames of the group of K frames, thereby yielding a respective number Tk, k=1, . . . , K, of allocated bits for the respective kth frame of the group of K frames. The bit allocation process 507 may take into account the block-type of the frames of the K frames. In particular, the bit allocation process 507 may take into account the block-type of all the frames of the K frames in conjunction, in contrast to a sequential bit allocation process 507, where only the block-type of each individual frame is taken into account. This additional information regarding the block-type of adjacent frames within a group of K frames may be taken into account to provide an improved allocation of bits.
In order to further improve the allocation of bits to the frames of the group of K frames, the bit allocation/bit reservoir update process may be performed in an analysis-by-synthesis manner, thereby optimizing the overall bit allocation. An example iterative bit allocation process 700 making use of an analysis-by-synthesis scheme is described in the following. The total number T of available bits for the frame group is first obtained from the bit reservoir and distributed to the K frames (step 702), thereby yielding respective allocated numbers Tk of bits, k=1, . . . , K. The K frames are then quantized and encoded in parallel using the respective allocated bits, thereby yielding the actually used-up numbers Uk of bits.
Subsequently, it is verified if a stop criterion for the iterative bit allocation process 700 is fulfilled (step 704). Example stop criteria may comprise AND or OR combinations of one or more of the following criteria: the iterative bit allocation process has performed a pre-determined maximum number of iterations; the sum of the used-up bits, i.e. ΣUk, meets a pre-determined relation to the available number T of bits; the numbers Uk and Tk meet a pre-determined relationship for some or all of k=1, . . . , K; etc. By way of example, if U1<T1 for a frame 1, it may be beneficial to perform another iteration of the bit allocation process 700, wherein T1 is reduced by the difference of T1 and U1 and the available bits (T1-U1) are allocated to another frame.
If the stop criterion is not met (reference numeral 705), a further iteration of the bit allocation process 700 is performed, wherein the distribution of the T bits (step 702) is performed under consideration of the used-up bits Uk, k=1, . . . , K, of the previous iteration. On the other hand, if the stop criterion is met (reference numeral 706), then the iterative process is terminated and the bit reservoir is updated with the actually used-up number Uk of bits (i.e. the used-up bits of the last iteration).
In other words, for a group of K frames, preliminary bits may first be allocated to each of the K parallel quantization and encoding processes 508. As a result, K sets of quantized and encoded frequency coefficients and K actual numbers of used bits are determined. The distribution of the K actual numbers of bits may then be analyzed and the bit allocations to the K parallel quantization and encoding processes 508 may be modified. By way of example, allocated bits which were not used by a particular frame may be assigned to another frame (e.g. a frame which has used up all of the allocated bits). The K parallel quantization and encoding processes 508 may be repeated using the modified bit allocation, and so on. Several iterations (e.g. two or three iterations) of this process may be performed, in order to optimize the group-wise bit allocation process 507.
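A compact sketch of this group-wise loop (encode_frame() stands for one of the K parallel quantization and encoding processes 508 and is assumed to report the number of bits it actually used; the redistribution rule, moving unused bits to frames that exhausted their budget, is one possible choice):

```python
def group_bit_allocation(frames, T, allocate, encode_frame, max_iter=3):
    # Analysis-by-synthesis bit allocation for a group of K frames:
    # allocate Tk, encode, inspect the used bits Uk, re-allocate, repeat.
    Tk = allocate(T, frames)  # initial distribution of the T bits
    for _ in range(max_iter):
        # Stands in for the K parallel quantization/encoding processes.
        results = [encode_frame(f, t) for f, t in zip(frames, Tk)]
        Uk = [r.used_bits for r in results]
        spare = sum(t - u for t, u in zip(Tk, Uk))  # unused bits
        hungry = [k for k, (t, u) in enumerate(zip(Tk, Uk)) if u >= t]
        if spare == 0 or not hungry:  # stop criterion (see text)
            break
        # Hand unused bits to the frames that exhausted their budget.
        share = spare // len(hungry)
        Tk = [u + (share if k in hungry else 0) for k, u in enumerate(Uk)]
    return results, Uk
```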
In the present document, various methods and systems for fast audio encoding are described. Several parallel encoder architectures are presented which enable the implementation of various components of an audio encoder on parallel processing units, thereby reducing the overall encoding time. The methods and systems for fast audio encoding may be used for faster-than-realtime audio encoding e.g. in the context of audio download applications.
It should be noted that the description and drawings merely illustrate the principles of the proposed methods and systems. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the invention and are included within its spirit and scope. Furthermore, all examples recited herein are principally intended expressly to be only for pedagogical purposes to aid the reader in understanding the principles of the proposed methods and systems and the concepts contributed by the inventors to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Moreover, all statements herein reciting principles, aspects, and embodiments of the invention, as well as specific examples thereof, are intended to encompass equivalents thereof.
The methods and systems described in the present document may be implemented as software, firmware and/or hardware. Certain components may e.g. be implemented as software running on a digital signal processor or microprocessor. Other components may e.g. be implemented as hardware and/or as application-specific integrated circuits. The signals encountered in the described methods and systems may be stored on media such as random access memory or optical storage media. They may be transferred via networks, such as radio networks, satellite networks, wireless networks or wireline networks, e.g. the Internet. Typical devices making use of the methods and systems described in the present document are portable electronic devices or other consumer equipment which are used to store and/or render audio signals.
This application claims priority to U.S. Provisional Patent Application No. 61/565,037 filed 30 Nov. 2011, hereby incorporated by reference in its entirety.