Block Transform-Based Coding
Transform coding is a compression technique used in many audio, image and video compression systems. Uncompressed digital image and video are typically represented or captured as samples of picture elements or colors at locations in an image or video frame arranged in a two-dimensional (2D) grid. This is referred to as a spatial-domain representation of the image or video. For example, a typical format for images consists of a stream of 24-bit color picture element samples arranged as a grid. Each sample is a number representing color components at a pixel location in the grid within a color space, such as RGB or YIQ, among others. Various image and video systems may use different color, spatial and time resolutions of sampling. Similarly, digital audio is typically represented as a time-sampled audio signal stream. For example, a typical audio format consists of a stream of 16-bit amplitude samples of an audio signal taken at regular time intervals.
Uncompressed digital audio, image and video signals can consume considerable storage and transmission capacity. Transform coding reduces the size of digital audio, images and video by transforming the spatial-domain representation of the signal into a frequency-domain (or other like transform domain) representation, and then reducing resolution of certain generally less perceptible frequency components of the transform-domain representation. This generally produces much less perceptible degradation of the digital signal compared to reducing color or spatial resolution of images or video in the spatial domain, or of audio in the time domain.
More specifically, a typical block transform-based codec 100 shown in
The block transform 120-121 can be defined as a mathematical operation on a vector x of size N. Most often, the operation is a linear multiplication, producing the transform domain output y=M x, M being the transform matrix. When the input data is arbitrarily long, it is segmented into N sized vectors and a block transform is applied to each segment. For the purpose of data compression, reversible block transforms are chosen. In other words, the matrix M is invertible. In multiple dimensions (e.g., for image and video), block transforms are typically implemented as separable operations. The matrix multiplication is applied separably along each dimension of the data (i.e., both rows and columns).
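The separable block transform described above can be sketched as follows. This is a minimal illustration, not the codec's actual transform: it uses an orthonormal 4-point DCT-II matrix as the invertible matrix M and applies it separably along both dimensions of a 4×4 block.

```python
import math

N = 4

def dct_matrix(n):
    # Orthonormal DCT-II matrix: M[u][x] = c(u) * cos((2x+1) * u * pi / (2n)).
    # Orthonormality makes the transform trivially invertible (M^-1 = M^T).
    m = []
    for u in range(n):
        c = math.sqrt(1.0 / n) if u == 0 else math.sqrt(2.0 / n)
        m.append([c * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                  for x in range(n)])
    return m

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(row) for row in zip(*a)]

M = dct_matrix(N)

def forward_2d(block):
    # Separable 2D transform: apply M along one dimension, then the other
    # (Y = M X M^T).
    return matmul(matmul(M, block), transpose(M))

def inverse_2d(coeffs):
    # Inverse uses the transposed matrix, since M is orthonormal (X = M^T Y M).
    return matmul(matmul(transpose(M), coeffs), M)
```

A round trip through `forward_2d` and `inverse_2d` reproduces the input block up to floating-point precision, illustrating the reversibility requirement.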
For compression, the transform coefficients (components of vector y) may be selectively quantized (i.e., reduced in resolution, such as by dropping least significant bits of the coefficient values or otherwise mapping values in a higher resolution number set to a lower resolution), and also entropy or variable-length coded into a compressed data stream.
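One simple instance of the quantization mapping described above — mapping values from a higher-resolution number set to a lower-resolution one by rounding division with a quantization factor — can be sketched as:

```python
def quantize(coeff, q):
    # Map a higher-resolution coefficient to a lower-resolution index.
    # Rounding division; q = 1 leaves the coefficient unchanged (lossless).
    sign = -1 if coeff < 0 else 1
    return sign * ((abs(coeff) + q // 2) // q)

def dequantize(index, q):
    # The decoder-side inverse mapping; precision lost to rounding is not recovered.
    return index * q
```

With a quantization factor of 1 the mapping is exact, which is the lossless case noted below; larger factors trade reconstruction precision for fewer bits.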
At decoding in the decoder 150, the inverse of these operations (dequantization/entropy decoding 160 and inverse block transform 170-171) are applied on the decoder 150 side, as shown in
In many block transform-based coding applications, the transform is desirably reversible to support both lossy and lossless compression depending on the quantization factor. With no quantization (generally represented as a quantization factor of 1) for example, a codec utilizing a reversible transform can exactly reproduce the input data at decoding. However, the requirement of reversibility in these applications constrains the choice of transforms upon which the codec can be designed.
Many image and video compression systems, such as MPEG and Windows Media, among others, utilize transforms based on the Discrete Cosine Transform (DCT). The DCT is known to have favorable energy compaction properties that result in near-optimal data compression. In these compression systems, the inverse DCT (IDCT) is employed in the reconstruction loops in both the encoder and the decoder of the compression system for reconstructing individual image blocks.
Entropy Coding of Wide-Range Transform Coefficients
Wide dynamic range input data leads to even wider dynamic range transform coefficients generated during the process of encoding an image. For instance, the transform coefficients generated by an N by N DCT operation have a dynamic range greater than N times the dynamic range of the original data. With small or unity quantization factors (used to realize low-loss or lossless compression), the range of quantized transform coefficients is also large. Statistically, these coefficients have a Laplacian distribution as shown in
Conventional transform coding is tuned for a small dynamic range of input data (typically 8 bits), and relatively large quantizers (such as numeric values of 4 and above).
On the other hand, conventional transform coding is less suited to compressing wide dynamic range distributions such as that shown in
The wide dynamic range distribution also has an increased alphabet of symbols, as compared to the narrow range distribution. Due to this increased symbol alphabet, the entropy table(s) used to encode the symbols will need to be large. Otherwise, many of the symbols will end up being escape coded, which is inefficient. The larger tables require more memory and may also result in higher complexity.
The conventional transform coding therefore lacks versatility—working well for input data with the narrow dynamic range distribution, but not on the wide dynamic range distribution.
On narrow-range data, however, efficient entropy coding of the quantized transform coefficients is a critical process. Any performance gains that can be achieved in this step (gains both in terms of compression efficiency and encoding/decoding speed) translate to overall quality gains.
Different entropy encoding schemes are marked by their ability to successfully take advantage of such disparate efficiency criteria as: use of contextual information, higher compression (such as arithmetic coding), lower computational requirements (such as found in Huffman coding techniques), and using a concise set of code tables to minimize encoder/decoder memory overhead. Conventional entropy encoding methods, which do not satisfy all of these criteria, fail to encode transform coefficients with full efficiency.
A digital media coding and decoding technique and realization of the technique in a digital media codec described herein achieves more effective compression of transform coefficients. For example, one exemplary block transform-based digital media codec illustrated herein more efficiently encodes transform coefficients by jointly-coding non-zero coefficients along with succeeding runs of zero-value coefficients. When a non-zero coefficient is the last in its block, a last indicator is substituted for the run value in the symbol for that coefficient. Initial non-zero coefficients are indicated in a special symbol which jointly-codes the non-zero coefficient along with initial and subsequent runs of zeroes.
The exemplary codec allows for multiple coding contexts by recognizing breaks in runs of non-zero coefficients and coding non-zero coefficients on either side of such a break separately. Additional contexts are provided by context switching based on inner, intermediate, and outer transforms as well as by context switching based on whether transforms correspond to luminance or chrominance channels. This allows code tables to have smaller entropy, without creating so many contexts as to dilute their usefulness.
The exemplary codec also reduces code table size by indicating in each symbol whether a non-zero coefficient has absolute value greater than 1 and whether runs of zeros have positive value, and separately encodes the level of the coefficients and the length of the runs outside of the symbols. The codec can take advantage of context switching for these separately-coded runs and levels.
The various techniques and systems can be used in combination or independently.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
Additional features and advantages will be made apparent from the following detailed description of embodiments that proceeds with reference to the accompanying drawings.
The following description relates to coding and decoding techniques that adaptively adjust for more efficient entropy coding of wide-range transform coefficients, as well as for more efficient entropy coding of transform coefficients in general. It describes an example implementation of the technique in the context of a digital media compression system or codec. The digital media system codes digital media data in a compressed form for transmission or storage, and decodes the data for playback or other processing. For purposes of illustration, this exemplary compression system incorporating this adaptive coding of wide range coefficients is an image or video compression system. Alternatively, the technique also can be incorporated into compression systems or codecs for other 2D data. The adaptive coding of wide range coefficients technique does not require that the digital media compression system encodes the compressed digital media data in a particular coding format.
1. Encoder/Decoder
The 2D data encoder 400 produces a compressed bitstream 420 that is a more compact representation (for typical input) of 2D data 410 presented as input to the encoder. For example, the 2D data input can be an image, a frame of a video sequence, or other data having two dimensions. The 2D data encoder tiles 430 the input data into macroblocks, which are 16×16 pixels in size in this representative encoder. The 2D data encoder further tiles each macroblock into 4×4 blocks. A “forward overlap” operator 440 is applied to each edge between blocks, after which each 4×4 block is transformed using a block transform 450. This block transform 450 can be the reversible, scale-free 2D transform described by Srinivasan, U.S. patent application Ser. No. 11/015,707, entitled, “Reversible Transform For Lossy And Lossless 2-D Data Compression,” filed Dec. 17, 2004. The overlap operator 440 can be the reversible overlap operator described by Tu et al., U.S. patent application Ser. No. 11/015,148, entitled, “Reversible Overlap Operator for Efficient Lossless Data Compression,” filed Dec. 17, 2004; and by Tu et al., U.S. patent application Ser. No. 11/035,991, entitled, “Reversible 2-Dimensional Pre-/Post-Filtering For Lapped Biorthogonal Transform,” filed Jan. 14, 2005. Alternatively, the discrete cosine transform or other block transforms and overlap operators can be used. Subsequent to the transform, the DC coefficient 460 of each 4×4 transform block is subject to a similar processing chain (tiling, forward overlap, followed by 4×4 block transform). The resulting DC transform coefficients and the AC transform coefficients are quantized 470, entropy coded 480 and packetized 490.
The decoder performs the reverse process. On the decoder side, the transform coefficient bits are extracted 510 from their respective packets, from which the coefficients are themselves decoded 520 and dequantized 530. The DC coefficients 540 are regenerated by applying an inverse transform, and the plane of DC coefficients is “inverse overlapped” using a suitable smoothing operator applied across the DC block edges. Subsequently, the entire data is regenerated by applying the 4×4 inverse transform 550 to the DC coefficients, and the AC coefficients 542 decoded from the bitstream. Finally, the block edges in the resulting image planes are inverse overlap filtered 560. This produces a reconstructed 2D data output.
In an exemplary implementation, the encoder 400 (
The illustrated LT and the ILT are inverses of each other, in an exact sense, and therefore can be collectively referred to as a reversible lapped transform. As a reversible transform, the LT/ILT pair can be used for lossless image compression.
The input data 410 compressed by the illustrated encoder 400/decoder 500 can be images of various color formats (e.g., RGB/YUV4:4:4, YUV4:2:2 or YUV4:2:0 color image formats). The input image always has a luminance (Y) component. If it is a RGB/YUV4:4:4, YUV4:2:2 or YUV4:2:0 image, the image also has chrominance components, such as a U component and a V component. The separate color planes or components of the image can have different spatial resolutions. In the case of an input image in the YUV 4:2:0 color format for example, the U and V components have half of the width and height of the Y component.
As discussed above, the encoder 400 tiles the input image or picture into macroblocks. In an exemplary implementation, the encoder 400 tiles the input image into 16×16 macroblocks in the Y channel (which may be 16×16, 16×8 or 8×8 areas in the U and V channels depending on the color format). Each macroblock color plane is tiled into 4×4 regions or blocks. Therefore, a macroblock is composed for the various color formats in the following manner for this exemplary encoder implementation:
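The tiling into macroblocks and blocks described above can be sketched as follows (a minimal illustration; it assumes plane dimensions that are exact multiples of the tile size, whereas a real encoder must also handle edge padding):

```python
def tile(plane, size):
    # Split a 2D list into a grid of size x size tiles.
    # Assumes len(plane) and len(plane[0]) are multiples of size.
    h, w = len(plane), len(plane[0])
    return [[[row[x:x + size] for row in plane[y:y + size]]
             for x in range(0, w, size)]
            for y in range(0, h, size)]

# A 16x16 luminance plane tiles into one macroblock...
plane = [[y * 16 + x for x in range(16)] for y in range(16)]
macroblocks = tile(plane, 16)
# ...and each macroblock color plane tiles into a 4x4 grid of 4x4 blocks.
blocks = tile(macroblocks[0][0], 4)
```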
2. Adaptive Coding of Wide-Range Coefficients
In the case of wide dynamic range data, especially decorrelated transform data (such as, the coefficients 460, 462 in the encoder of
2.1 Grouping
Further, the Laplacian probability distribution function of wide range transform coefficients shown in
(for convenience, the random variable corresponding to the transform coefficients is treated as a continuous value). For wide dynamic range data, λ is small, and the absolute mean 1/λ is large. The slope of this distribution is bounded in magnitude by λ²/2, which is very small. This means that the probability of a transform coefficient being equal to x is very close to the probability of x+ζ for a small shift ζ. In the discrete domain, this translates to the claim, “the probability of a transform coefficient taking on adjacent values j and (j+1) is almost identical.”
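The claim about adjacent values can be checked numerically. The sketch below uses the Laplacian density f(x) = (λ/2)·e^(−λ|x|) with an illustrative small λ (the value 0.01 is assumed for illustration, not taken from the codec):

```python
import math

lam = 0.01  # small lambda corresponds to wide dynamic range data (illustrative)

def laplacian_pdf(x, lam):
    # Zero-mean Laplacian density: f(x) = (lam / 2) * exp(-lam * |x|)
    return 0.5 * lam * math.exp(-lam * abs(x))

# Probabilities of two adjacent coefficient values are nearly identical:
p_j = laplacian_pdf(100, lam)
p_j1 = laplacian_pdf(101, lam)
ratio = p_j1 / p_j          # equals exp(-lam), very close to 1 for small lam

# The slope is bounded in magnitude by lam**2 / 2, which is tiny here:
max_slope = lam ** 2 / 2
```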
With reference now to
This grouping has the benefit that with a suitable choice of N, the probability distribution of the bin index for wide range coefficients more closely resembles the probability distribution of narrow range data, e.g., that shown in
Based on the grouping of coefficients into bins, the encoder can then encode a transform coefficient 615 using an index of its bin (also referred to herein as the normalized coefficient 620) and its address within the bin (referred to herein as the bin address 625). The normalized coefficient is encoded using variable length entropy coding, while the bin address is encoded by means of a fixed length code.
The choice of N (or equivalently, the number of bits k for the fixed length coding of the bin address) determines the granularity of grouping. In general, the wider the range of the transform coefficients, the larger the value of k that should be chosen. When k is carefully chosen, the normalized coefficient Y is zero with high probability, which matches the entropy coding scheme for Y.
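The grouping of a coefficient into a normalized coefficient (bin index) and a k-bit bin address can be sketched as below. This is a minimal illustration of the split; here the sign is simply carried alongside the bin address, consistent with the layering described later, and the exact bit-level packing is an assumption:

```python
def group(coeff, k):
    # Split a coefficient into (normalized coefficient, bin address, sign).
    # Bin size N = 2**k; the bin address is a k-bit fixed-length code.
    sign = coeff < 0
    mag = abs(coeff)
    return mag >> k, mag & ((1 << k) - 1), sign

def ungroup(normalized, address, sign, k):
    # Decoder-side reconstruction of the original coefficient.
    mag = (normalized << k) | address
    return -mag if sign else mag
```

For example, with k = 3 (bin size N = 8), the coefficient −37 splits into normalized coefficient 4, bin address 5, and a negative sign; only the small bin index need be entropy coded.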
As described below, the value k can be varied adaptively (in a backward-adaptive manner) in the encoder and decoder. More specifically, the value of k on both the encoder and decoder varies based on the previously encoded/decoded data only.
In one particular example of this encoding shown in
Continuing this example,
2.2 Layering
With reference again to
For layering, sections of the compressed bitstream containing the flexbits portion are signaled by a separate layer header or other indication in the bitstream so that the decoder can identify and separate (i.e., parse) the Flexbits layer 645 (when not omitted) from the core bitstream 640.
Layering presents a further challenge in the design of backward adaptive grouping (described in the following section). Since the Flexbits layer may be present or absent in a given bitstream, the backward-adaptive grouping model cannot reliably refer to any information in the Flexbits layer. All information needed to determine the number of fixed length code bits k (corresponding to the bin size N=2k) should reside in the causal, core bitstream.
2.3 Adaptation
The encoder and decoder further provide a backward-adapting process to adaptively adjust the choice of the number k of fixed length code bits, and correspondingly the bin size N of the grouping described above, during encoding and decoding. In one implementation, the adaptation process can be based on modeling the transform coefficients as a Laplacian distribution, such that the value of k is derived from the Laplacian parameter λ. However, such a sophisticated model would require that the decoder perform the inverse of the grouping 610 (reconstructing the transform coefficients from both the normalized coefficients in the core bitstream 640 and the bin address/sign in the Flexbits layer 645) in
In the example implementation shown in
In its adaptation process, the example encoder and decoder performs the adaptation on a backward adaptation basis. That is to say, a current iteration of the adaptation is based on information previously seen in the encoding or decoding process, such as in the previous block or macroblock. In the example encoder and decoder, the adaptation update occurs once per macroblock for a given transform band, which is intended to minimize latency and cross dependence. Alternative codec implementations can perform the adaptation at different intervals, such as after each transform block.
In the example encoder and decoder, the adaptation process 900 updates the value k. If the number of non-zero normalized coefficients is too large, then k is bumped up so that this number will tend to drop in future blocks. If the number of non-zero normalized coefficients is too small, then k is reduced with the expectation that future blocks will then produce more non-zero normalized coefficients because the bin size N is smaller. The example adaptation process constrains the value k to be within the set of numbers {0, 1, . . . 16}, but alternative implementations could use other ranges of values for k. At each adaptation update, the encoder and decoder either increments, decrements, or leaves k unchanged. The example encoder and decoder increments or decrements k by one, but alternative implementations could use other step sizes.
The adaptation process 900 in the example encoder and decoder further uses an internal model parameter or state variable (M) to control updating of the grouping parameter k with a hysteresis effect. This model parameter provides a lag before updating the grouping parameter k, so as to avoid causing rapid fluctuation in the grouping parameter. The model parameter in the example adaptation process has 17 integer steps, from −8 to 8.
With reference now to
At action 920, the adaptation process then counts the number of non-zero normalized coefficients of the transform band within the immediately preceding encoded/decoded macroblock. At action 930, this raw count is normalized to reflect the integerized number of non-zero coefficients in a regular size area. The adaptation process then calculates (action 940) the deviation of the count from the desired model (i.e., the “sweet-spot” of one quarter of the coefficients being non-zero). For example, a macroblock of AC coefficients in the example encoder shown in
At next actions 960, 965, 970, 975, the adaptation process then adapts the value k according to any change in the internal model parameter. If the model parameter is less than a negative threshold, the value k is decremented (within its permissible bounds). This adaptation should produce more non-zero coefficients. On the other hand, if the model parameter exceeds a positive threshold, the value k is incremented (within permissible bounds). Such adaptation should produce fewer non-zero coefficients. The value k is otherwise left unchanged.
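A sketch of this backward-adaptive update follows. The bounds on k ({0..16}) and on the model parameter M (−8..8) come from the description above; the hysteresis threshold value and the resetting of M after a change of k are illustrative assumptions, not the codec's defined behavior:

```python
K_MIN, K_MAX = 0, 16     # permissible bounds on the grouping parameter k
M_MIN, M_MAX = -8, 8     # 17 integer steps of the internal model parameter
THRESH = 4               # hysteresis threshold (illustrative value)

def adapt(k, M, nonzero_count, area):
    # One backward-adaptive update, run once per macroblock per transform band.
    # Targets the "sweet spot" of one quarter of coefficients being non-zero.
    deviation = nonzero_count - area // 4
    # Accumulate the sign of the deviation into the lagging model parameter M.
    if deviation > 0:
        M = min(M_MAX, M + 1)
    elif deviation < 0:
        M = max(M_MIN, M - 1)
    # Only adjust k once M crosses a threshold, avoiding rapid fluctuation.
    if M > THRESH and k < K_MAX:
        k, M = k + 1, 0      # too many non-zeros: coarser bins
    elif M < -THRESH and k > K_MIN:
        k, M = k - 1, 0      # too few non-zeros: finer bins
    return k, M
```

Because the update depends only on previously decoded counts, the decoder tracks the encoder's k exactly without any side information.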
Again, as indicated at actions 910, 980, the adaptation process is repeated separately for each channel and sub-band of the data, such as separately for the chrominance and luminance channels.
The example adaptation process 900 is further detailed in the pseudo-code listing 1000 shown in
3. Efficient Entropy Encoding
3.1 Prior Art Methods
In various encoding standards, the process of coding of transform blocks reduces to the coding of a string of coefficients. One example of such a string is given in
Certain properties traditionally hold for such a string of transform coefficients:
Various encoding techniques take advantage of the fact that the zero-value coefficients, which typically occur rather frequently, can be coded with run length codes. However, when the input image being encoded is high dynamic range data (e.g. greater than 8 bits), or when the quantization parameter is unity or small, fewer transform coefficients are zero, as discussed above. In such a situation the adaptive coding and decoding techniques described above may be used to condition the data such that the conditioned data has these characteristics. Other techniques can also produce transform coefficient sets similar to those of transform coefficients example 1200 through other means, such as, for example, by setting a high quantization level.
Another alternative encoding scheme is 3D coding, an example of which is illustrated in example 1240. In 3D coding, the run of zeros is typically coded jointly with the succeeding non-zero coefficient, as in 2D coding. Further, a Boolean data element, “last,” indicating whether this non-zero coefficient is the last non-zero coefficient in the block is encoded. The symbol 1245 therefore jointly-encodes run, level and last; in the illustrated case the symbol <2, C1, not last> indicates that two zeroes precede the non-zero coefficient C1, and that it is not the last non-zero coefficient in the series. Since each of these elements can freely take on all values, the symbol encodes three independent dimensions, giving rise to the name “3D coding.”
Each of these techniques has separate advantages. Each symbol in the 2D coding technique has smaller entropy than the symbol used in 3D coding, because the former conveys less information than the latter. Thus, the number of possible symbols in a given 3D coding scheme will be twice as large as for a comparable 2D coding scheme. This increases code table size, and can slow down encoding and decoding for the 3D coding scheme. However, in 2D coding an additional symbol is sent to signal the end of block, and requiring the sending of an entire additional symbol is expensive from the perspective of the size of the bitstream. In fact, in practice, 3D coding is more efficient than 2D coding, despite the larger code table sizes.
3.2 3½D-2½D Coding
While the prior art techniques illustrated in
Besides taking advantage of the strong correlation between non-zero coefficients and runs of succeeding zeros, this method provides a further advantage when a non-zero coefficient is the last non-zero coefficient in the block, by utilizing a special value of run to signal that the non-zero coefficient is the last one in the series. Thus, in the joint-coding of a symbol, the information being sent is a level value and another value indicating either the length of a run of zeros, or a “last” value. This is illustrated in
This feature of 2½D coding is not necessarily required of a joint-coding scheme which combines levels and succeeding runs; in an alternative implementation, the final symbol transmitted could simply encode the length of the final run of zeros, although this would be undesirable because it could substantially increase the size of the coded bitstream. In another alternative, an EOB symbol, like that used in 2D coding, could be used. However, as in 3D coding, the 2½D coding use of a “last” value carries an advantage over 2D coding in that there is no need to code an extra symbol to denote end of block. Additionally, 2½D coding carries advantages over 3D coding in that (1) the entropy of each symbol of the former is less than that of the latter and (2) the code table design of the former is simpler than that of the latter. Both these advantages are a result of the 2½D code having fewer possibilities than the 3D code.
However, 2½D coding alone cannot describe an entire run of transform coefficients because it does not provide for a way to send a run length prior to the first non-zero coefficient. As
Although the extra information in 3½D coding might seem, at first glance, to negate some of the advantages of 2½D coding, the different handling of the first symbol is actually advantageous from the coding efficiency perspective. A 3½D symbol necessarily has a different alphabet from the other, 2½D, symbols, which means it is encoded separately from the other symbols and does not increase the 2½D entropy.
The process begins at action 1420, where the first non-zero transform coefficient is identified. Then, at action 1430, a 3½D symbol is created using the length of the initial run of zeroes (which could either be of length 0 or of positive length) and the first non-zero coefficient. At this point, the 3½D symbol is not complete. Next, the process reaches decision action 1435, where it determines if the non-zero coefficient which is currently identified is the last non-zero coefficient in the series of transform coefficients. If this is the last non-zero coefficient, the process continues to action 1480, where the “last” indicator is inserted into the symbol rather than a run of succeeding zeroes. The process then encodes the symbol using entropy encoding at action 1490, and the process ends. One example of such a process of encoding symbols is given below with reference to
If, however, the process determines at decision action 1435 that this is not the last non-zero coefficient, then at action 1440 the length of the succeeding run of zeros (which could either be 0 or a positive number) is inserted into the symbol, and the symbol is encoded at action 1450. One example of such a process of encoding symbols is given below with reference to
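The symbol construction traced through actions 1420-1490 can be sketched as follows. This is an illustrative model of the symbol contents only (the subsequent entropy coding of each symbol is omitted), with a string value standing in for the "last" indicator:

```python
LAST = "last"

def symbolize(coeffs):
    # Turn a series of transform coefficients into one 3.5D symbol followed
    # by 2.5D symbols. Assumes at least one non-zero coefficient.
    nz = [i for i, c in enumerate(coeffs) if c != 0]
    symbols = []
    for pos, i in enumerate(nz):
        if pos == len(nz) - 1:
            succ = LAST                    # special value replaces the run
        else:
            succ = nz[pos + 1] - i - 1     # run of zeros before next non-zero
        if pos == 0:
            # 3.5D symbol also carries the initial run of zeros.
            symbols.append((nz[0], coeffs[i], succ))
        else:
            # 2.5D symbol: level plus succeeding run (or "last").
            symbols.append((coeffs[i], succ))
    return symbols
```

For instance, the series 0, 0, 5, 0, 3, 2 yields the 3.5D symbol (2, 5, 1) — two initial zeros, level 5, one succeeding zero — followed by the 2.5D symbols (3, 0) and (2, "last").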
3.3 Context Information
In addition to encoding symbols according to 2½D and 3½D coding, several pieces of causal information may be used to generate a context for the symbol being encoded. This context may be used by the encoder 400 (
With these points in mind the context model described herein is chosen to consult three factors to determine which context is chosen for each symbol. In one implementation these factors are (1) the level of transformation—whether the transform is an inner, intermediate, or outer transformation, (2) whether the coefficients are of the luminance or chrominance channels, and (3) whether there has been any break in the run of non-zero coefficients within the series of coefficients. In alternative implementations one or more of these factors may not be used for determining coding context, and/or other factors may be considered.
Thus, by (1), an inner transform uses a different set of code tables than an intermediate transform, which uses a different set of code tables than an outer transform. In other implementations, context models may only differentiate between two levels of transformation. Similarly, by (2) luminance coefficients use a different set of code tables than chrominance coefficients. Both of these context factors do not change within a given set of transform coefficients.
However, factor (3) does change within a set of transform coefficients.
As all three example illustrate, the first symbol in a block, being a 3½D symbol, is necessarily coded with a different table than the other symbols because its alphabet is different from the others. This forms a “natural” context for the first symbol. Thus, coefficient A, being the first non-zero coefficient of all three examples is coded with a 3½D code. Additionally, because the 3½D symbol encodes preceding and succeeding runs of zeroes around the first non-zero coefficient, the first two coefficients of example 1520 (A, 0) and the first two coefficients of example 1540 (0, A) are jointly-coded in a 3½D symbol. Because of this, in one implementation, factor (3) does not apply to determine the context of 3½D symbols.
The 2½D symbols, by contrast, are encoded differently depending on factor (3). Thus, in example 1500, it can be seen that because there is no break in the run of non-zero coefficients until after coefficient D, coefficients B, C, and D (as well as the zeroes following D) are encoded with the first context model. However, the zeroes after D constitute a break in the run of non-zero coefficients. Therefore, the remaining coefficients E, F, G, H, (and any which follow) . . . are coded using the second context model. This means that while each non-zero coefficient other than A is encoded with a 2½D symbol, different code tables will be used for coefficients B, C, and D (and any associated zero-value runs) than are used for coefficients E, F, G, and H.
By contrast, in example 1520, there is a break between A and B. This constitutes a break in the run of non-zero coefficients, and hence coefficient B, and all subsequent non-zero coefficients are encoded with the second context model. Likewise, in example 1540, there is a break before A. Thus, as in example 1520, the coefficients B, C, D, . . . are coded with the second context model.
If the symbol is not a 3½D symbol, the process continues to decision action 1615, where the encoder determines whether at least one zero has preceded the non-zero coefficient which is jointly-coded in the symbol. If not, the process continues to action 1620, where the symbol is encoded using 2½D code tables from the first context model and the process ends. If there has been a break, then at action 1630 the symbol is encoded using 2½D code tables from the second context model and the process ends.
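The context selection for run-break factor (3) can be sketched as below. The context names and the coefficient values are illustrative; the logic follows examples 1500, 1520 and 1540 above, where a symbol switches to the second context once any zero precedes its non-zero coefficient:

```python
def context_for(coeffs, i):
    # Choose the run-break coding context for the non-zero coefficient at
    # index i. The first non-zero coefficient is a 3.5D symbol with its own
    # "natural" context; later (2.5D) symbols switch context once any zero
    # has preceded them.
    first_nz = next(j for j, c in enumerate(coeffs) if c != 0)
    if i == first_nz:
        return "3.5D"
    broken = any(c == 0 for c in coeffs[:i])
    return "context2" if broken else "context1"
```

Replaying example 1500 (A B C D 0 0 E F G H, with letters standing for non-zero values): B, C and D take the first context, while E onward take the second.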
3.4 Code Table Size Reduction
While the techniques described above create efficiencies over traditional techniques, they are still not able, on their own, to reduce code table size significantly. Code tables created for the techniques should be able to transmit all combinations of (max_level×(max_run+2)) for the 2½D symbols, and (max_level×(max_run+1)×(max_run+2)) for the 3½D symbols, where max_level is the maximum (absolute) value of a non-zero coefficient and max_run is the maximum possible length of a run of zeroes. The value (max_run+1) is derived for the initial run of a 3½D symbol because the possible values for a run of zeroes run from 0 to max_run, for a total of (max_run+1). Similarly, each symbol encodes a succeeding run of zeros of length between 0 and max_run, as well as a “last” symbol, for a total of (max_run+2) values. Even with escape coding (where rarely occurring symbols are grouped together into one or multiple meta-symbols signaled through escape codes), code table sizes can be formidable.
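The table-size arithmetic above can be made concrete with assumed bounds (the values of max_level and max_run below are illustrative, not the codec's actual limits):

```python
max_level, max_run = 16, 15   # illustrative bounds only

# 2.5D symbols: every (level, succeeding-run-or-last) combination.
# A succeeding run takes max_run + 1 values (0..max_run), plus "last".
size_2_5d = max_level * (max_run + 2)

# 3.5D symbols: every (initial-run, level, succeeding-run-or-last) combination.
# The initial run has no "last" case, so it takes only max_run + 1 values.
size_3_5d = max_level * (max_run + 1) * (max_run + 2)
```

Even for these modest bounds the 3.5D alphabet runs to thousands of symbols, which motivates the refinement that follows.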
In order to reduce code table size the techniques described above can be further refined. First, each run and each level is broken into a symbol pair:
run=nonZero_run(+run1)
level=nonOne_level(+level1)
In this symbol pair, the symbols nonZero_run and nonOne_level are Booleans, indicating respectively whether the run is greater than zero, and the absolute level is greater than 1. The values run1 and level1 are used only when the Booleans are true, and indicate the run (between 1 and the max_run) and level (between 2 and the max_level). However, because the case of “last” must also be coded, the value (run OR last) of any succeeding run of zeroes in a jointly-coded symbol is sent as a ternary symbol nonZero_run_last, which takes on the value 0 when the run has zero-length, 1 when the run has non-zero length, and 2 when the non-zero coefficient of the symbol is the last in the series.
Therefore, to utilize this reduced encoding the first, 3½D symbol takes on form <nonZero_run, nonOne_level, nonZero_run_last>. This creates an alphabet of size 2×2×3=12. Subsequent 2½D symbols take the form <nonOne_level, nonZero_run_last>, creating an alphabet of size 2×3=6. In one implementation, these symbols are referred to as the “Index.” In some implementations, run1 is also called NonzeroRun and level1 is called SignificantLevel.
Because the Index only contains information about whether levels and runs are significant, additional information may need to be sent along with the symbols in order to allow a decoder to accurately recreate a series of transform coefficients. Thus, after each symbol from the index, if the level is a significant level, the value of the level is separately encoded and sent after the symbol. Likewise, if a symbol indicates that a run of zeroes is of non-zero (positive) length, that length is separately encoded and sent after the symbol.
Because some symbols require that additional information be sent after them, symbols from the Index must be analyzed to determine whether such additional information accompanies them.
Regardless of the determination at action 1820, at decision action 1840, the encoder determines whether the symbol is of the form <1, x, x>. This determination is equivalent to asking whether the non-zero coefficient represented by the symbol has any preceding zeroes. If so, at action 1850, the encoder encodes the length of the run of zeroes preceding the non-zero coefficient and sends this value.
Next, at decision action 1860, the encoder considers the value of t where the symbol is <x, x, t>. This determination is equivalent to asking whether the non-zero coefficient represented by the symbol has any zeroes following it. If t=0, then the encoder knows there are no succeeding zeroes, and continues to send more symbols at action 1880 and process 1800 ends. In one implementation, the process 1900 of
Next, at decision action 1940, the encoder considers the value of t where the symbol is <x, t>. This determination is equivalent to asking whether the non-zero coefficient represented by the symbol has any zeroes following it. If t=0, then the encoder knows there are no succeeding zeroes, and continues to send more symbols at action 1960 and process 1900 ends. In one implementation, the process 1900 of
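The encoding decisions described above can be sketched as a token-emission order for one 3½D symbol: the Index symbol itself, then the level if significant, then the preceding run length if non-zero, then the succeeding run length when one follows. The token tuples below are abstract placeholders, not actual bitstream codes:

```python
def encode_first_symbol(symbol, level, run_before, run_after):
    """Emit the payload order for a 3.5D symbol <a, b, c>:
    Index, then level1 (if |level| > 1), then the preceding run length
    (if non-zero), then the succeeding run length (if non-zero and not last).
    Illustrative sketch; token names are assumptions."""
    a, b, c = symbol
    tokens = [("INDEX", symbol)]
    if b == 1:                       # |level| > 1: send SignificantLevel
        tokens.append(("LEVEL1", level))
    if a == 1:                       # preceding run is non-zero: send its length
        tokens.append(("RUN1", run_before))
    if c == 1:                       # succeeding run non-zero (c == 2 means last)
        tokens.append(("RUN1", run_after))
    return tokens
```

A symbol such as <0, 0, 2> (a ±1 coefficient with no preceding zeroes that is last in the series) thus needs no payload at all beyond the Index symbol.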
3.5 Additional Efficiencies
Besides the code table size reduction discussed above, one benefit of breaking down run and level symbols is that, subsequent to the transmission of the 3½D joint symbol, the decoder can determine whether or not there are any leading zeroes in the block. Thus, context information describing whether the first or second context model holds is known on the decoder side and constitutes a valid context for encoding the level1 value of the first non-zero coefficient. As a result, the contexts which apply to the level1 values of the 2½D symbols can apply equally to level1 values of 3½D symbols, even while the jointly-coded Index symbols utilize different alphabets.
Moreover, since the total number of transform coefficients in a block is a constant, each successive run is bounded by a monotonically decreasing sequence. In a preferred implementation, this information is exploited in the encoding of run values. For example, a code table may include one set of run value codes for runs starting in the first half of a set of coefficients and a different set for runs starting in the second half. Because the length of any possible run starting in the second half is necessarily smaller than the possible lengths of runs starting in the first half, the second set of codes does not have to be as large, reducing the entropy and improving coding performance.
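A sketch of this two-table selection follows; the block size of 16 coefficients and the table names are hypothetical, chosen only to illustrate the halves example above:

```python
BLOCK_SIZE = 16  # hypothetical number of coefficients per block

def max_possible_run(start_pos):
    """A run starting at start_pos can cover at most the remaining positions,
    so the bound decreases monotonically as decoding advances."""
    return BLOCK_SIZE - 1 - start_pos

def run_table(start_pos):
    """Two-table selection: runs starting in the second half can never be
    longer than BLOCK_SIZE/2 - 1, so their table needs fewer entries."""
    return "first_half" if start_pos < BLOCK_SIZE // 2 else "second_half"
```

With 16 coefficients, a run starting at position 12 can be at most 3 long, so the second-half table only needs codes for short runs.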
Other information can be gleaned by careful observation of coefficient placement. For example, if the non-zero coefficient represented by a symbol occurs at the last location in the series of coefficients, then “last” is always true. Similarly, if the non-zero coefficient represented by a symbol occurs at the penultimate location in the array, then either “last” is true or the succeeding run is zero. Each of these observations allows for coding via shorter tables.
3.6 Index Implementation Example
The first Index has an alphabet size of 12. In one implementation, five Huffman tables are available for this symbol, which is defined as FirstIndex = a + 2b + 4c, where the symbol is <a, b, c>, a and b are 0 or 1, and c can take on the values 0, 1 or 2. One implementation of code word lengths for the twelve symbols for each of the tables is given below. Standard Huffman code construction procedures may, in one implementation, be applied to derive these sets of prefix codewords:
Table 1: 5,6,7,7,5,3,5,1,5,4,5,3
Table 2: 4,5,6,6,4,3,5,2,3,3,5,3
Table 3: 2,3,7,7,5,3,7,3,3,3,7,4
Table 4: 3,2,7,5,5,3,7,3,5,3,6,3
Table 5: 3,1,7,4,7,3,8,4,7,4,8,5
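One way to realize the standard Huffman construction mentioned above is the canonical assignment sketched below; the function is illustrative, and the tie-breaking order (by FirstIndex value within equal lengths) is an assumption:

```python
def canonical_codes(lengths):
    """Assign canonical prefix codewords from a list of code lengths.
    Symbols are sorted by (length, symbol index); the code counter is
    shifted left whenever the length increases, yielding a prefix-free code."""
    order = sorted(range(len(lengths)), key=lambda s: (lengths[s], s))
    codes, code, prev_len = {}, 0, 0
    for s in order:
        code <<= (lengths[s] - prev_len)
        codes[s] = format(code, "0{}b".format(lengths[s]))
        prev_len = lengths[s]
        code += 1
    return codes

# Table 1 code lengths for the twelve FirstIndex symbols, from the text:
table1 = [5, 6, 7, 7, 5, 3, 5, 1, 5, 4, 5, 3]
codes = canonical_codes(table1)

# These lengths satisfy Kraft equality, so the code is complete.
assert sum(2 ** -l for l in table1) == 1.0
```

For Table 1 this assigns the single-bit code "0" to symbol 7 (FirstIndex = 7, i.e. <1, 1, 1>), consistent with it receiving the shortest listed length.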
Subsequent Index symbols have an alphabet size of 6. In one implementation, Index is defined as Index = a + 2b, where the symbol is <a, b>, a is Boolean, and b can take on the values 0, 1 or 2. Four Huffman tables are defined for Index, as shown below:
Table 1: 1,5,3,5,2,4
Table 2: 2,4,2,4,2,3
Table 3: 4,4,2,2,2,3
Table 4: 5,5,2,1,4,3
Additionally, in one implementation, in order to take advantage of some of the information described in Section 3.5 above, when the coefficient is located at the last array position, a one bit code (defined by a) is used (b is uniquely 2 in this case). In one implementation, when the coefficient is in the penultimate position, a two bit code is used since it is known that b≠1.
One implementation of SignificantLevel codes the level using a binning procedure that collapses a range of levels into seven bins. Levels within a bin are coded using fixed length codes, and the bins themselves are coded using Huffman codes. This can be done, in one implementation, through the grouping techniques described above. Similarly, in one implementation, NonzeroRun is coded using a binning procedure that indexes into five bins based on the location of the current symbol.
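A sketch of such a binning scheme follows. The seven-bin count matches the text, but the power-of-two bin boundaries and the cap on the top bin are illustrative assumptions, not taken from the source:

```python
# Hypothetical 7-bin layout: BIN_BASES[i] is the smallest level in bin i.
BIN_BASES = [2, 3, 5, 9, 17, 33, 65]

def bin_for_level(level):
    """Return (bin_index, offset, offset_bits) for a significant level >= 2.
    The bin index would be Huffman coded; the offset within the bin is sent
    as a fixed-length code of offset_bits bits."""
    assert level >= 2
    for i in range(len(BIN_BASES) - 1, -1, -1):
        if level >= BIN_BASES[i]:
            if i + 1 < len(BIN_BASES):
                width = BIN_BASES[i + 1] - BIN_BASES[i]
            else:
                width = 64          # illustrative cap for the open-ended top bin
            offset = level - BIN_BASES[i]
            bits = (width - 1).bit_length()   # fixed-length offset code size
            return i, offset, bits
```

Level 7, for instance, falls in the bin covering 5–8, so only a short Huffman code for the bin plus a 2-bit offset is needed, rather than a distinct codeword per level.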
3.7 Decoding 3½D-2½D Symbols
If the symbol is not for the last non-zero coefficient, the process continues to decision action 2040, where the decoder determines if any zero coefficients have been indicated by any symbol thus far. If not, the process continues to action 2050, where the next symbol is received and decoded using 2½D code tables following the first context model. If instead zero coefficients have been indicated at decision action 2040, then at action 2060, the decoder receives and decodes the next symbol using 2½D code tables following the second context model. Regardless of which context model was used, the process then proceeds to action 2070, where transform coefficients are populated based on the decoded symbol (including any level or run information also present in the compressed bitstream). As in action 2020, one implementation of this action is described in greater detail below with respect to
Next, the process continues to decision action 2140, where the decoder determines if the symbol indicates that its non-zero coefficient has absolute value greater than 1. This can be done by determining if the value of nonOne_level in the symbol is 1, indicating the level has absolute value greater than 1, or 0, indicating that the non-zero coefficient is either −1 or 1. If the symbol does not indicate a coefficient with absolute value greater than 1, the process continues to action 2150, where the next coefficient is populated with either a −1 or a 1, depending on the sign of the non-zero coefficient. If the symbol does indicate a coefficient with absolute value greater than 1, the process instead continues to action 2160, where the level of the coefficient is decoded and the coefficient is populated with the level value, along with its sign. As discussed above, the sign may be indicated in various ways, thus decoding of the coefficient sign is not explicitly discussed in actions 2150 or 2160.
Next, at decision action 2170, the decoder determines if the symbol indicates a positive-length subsequent run of zero coefficients. This can be done by determining if the value of nonzero_run_last in the symbol is 1, indicating a positive-length run, or 0, indicating a zero-length run. (The case of nonzero_run_last equaling 2 is not shown, as that case is caught in process 2000.) If the symbol does indicate a positive-length run of zero coefficients, the process continues to action 2180, where the length of the run is decoded, based on the encoded run1 following the symbol, and subsequent transform coefficients are populated with zeroes according to the run length and process 2100 ends.
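The population steps described above can be sketched for one 2½D symbol as follows; the function shape, argument names, and return convention are assumptions for illustration:

```python
def decode_2_5d(symbol, next_level, next_run, sign, coeffs):
    """Populate transform coefficients from one decoded 2.5D symbol
    <nonOne_level, nonZero_run_last>. next_level and next_run are the
    separately coded level1/run1 values (None when not present in the
    bitstream). Returns True when this symbol was for the last coefficient."""
    nonone_level, nonzero_run_last = symbol
    # Coefficient value: either +/-1, or the separately coded level.
    level = next_level if nonone_level else 1
    coeffs.append(sign * level)
    if nonzero_run_last == 2:          # "last": no trailing run is coded
        return True
    if nonzero_run_last == 1:          # positive-length run of zeroes follows
        coeffs.extend([0] * next_run)
    return False
```

Decoding <0, 1> with run1 = 2 and a negative sign thus appends −1 followed by two zeroes, while <1, 2> with level1 = 5 appends 5 and ends the series.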
4. Computing Environment
The above described encoder 400 (
With reference to
A computing environment may have additional features. For example, the computing environment (2200) includes storage (2240), one or more input devices (2250), one or more output devices (2260), and one or more communication connections (2270). An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computing environment (2200). Typically, operating system software (not shown) provides an operating environment for other software executing in the computing environment (2200), and coordinates activities of the components of the computing environment (2200).
The storage (2240) may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, CD-RWs, DVDs, or any other medium which can be used to store information and which can be accessed within the computing environment (2200). The storage (2240) stores instructions for the software (2280) implementing the described encoder/decoder and efficient transform coefficient encoding/decoding techniques.
The input device(s) (2250) may be a touch input device such as a keyboard, mouse, pen, or trackball, a voice input device, a scanning device, or another device that provides input to the computing environment (2200). For audio, the input device(s) (2250) may be a sound card or similar device that accepts audio input in analog or digital form, or a CD-ROM reader that provides audio samples to the computing environment. The output device(s) (2260) may be a display, printer, speaker, CD-writer, or another device that provides output from the computing environment (2200).
The communication connection(s) (2270) enable communication over a communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions, compressed audio or video information, or other data in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired or wireless techniques implemented with an electrical, optical, RF, infrared, acoustic, or other carrier.
The digital media processing techniques herein can be described in the general context of computer-readable media. Computer-readable media are any available media that can be accessed within a computing environment. By way of example, and not limitation, with the computing environment (2200), computer-readable media include memory (2220), storage (2240), communication media, and combinations of any of the above.
The digital media processing techniques herein can be described in the general context of computer-executable instructions, such as those included in program modules, being executed in a computing environment on a target real or virtual processor. Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or split between program modules as desired in various embodiments. Computer-executable instructions for program modules may be executed within a local or distributed computing environment.
For the sake of presentation, the detailed description uses terms like “determine,” “generate,” “adjust,” and “apply” to describe computer operations in a computing environment. These terms are high-level abstractions for operations performed by a computer, and should not be confused with acts performed by a human being. The actual computer operations corresponding to these terms vary depending on implementation.
In view of the many possible variations of the subject matter described herein, we claim as our invention all such embodiments as may come within the scope of the following claims and equivalents thereto.
US Patent Application Publication No. 20070036223 A1, published Feb. 2007.