The following co-pending U.S. patent applications relate to the present application and are hereby incorporated herein by reference: 1) U.S. patent application Ser. No. 10/622,378 entitled, “Advanced Bi-Directional Predictive Coding of Video Frames,” filed concurrently herewith; 2) U.S. patent application Ser. No. 10/622,284, entitled, “Intraframe and Interframe Interlace Coding and Decoding,” filed concurrently herewith; 3) U.S. patent application Ser. No. 10/622,841 entitled, “Coding of Motion Vector Information,” filed concurrently herewith; 4) U.S. patent application Ser. No. 10/321,415, entitled, “Skip Macroblock Coding,” filed Dec. 16, 2002; and 5) U.S. patent application Ser. No. 10/379,615, entitled “Chrominance Motion Vector Rounding,” filed Mar. 4, 2003.
A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent and Trademark Office patent files or records, but otherwise reserves all copyright rights whatsoever.
The invention relates generally to differential quantization in digital video coding or compression.
Digital video consumes large amounts of storage and transmission capacity. A typical raw digital video sequence includes 15 or 30 frames per second. Each frame can include tens or hundreds of thousands of pixels (also called pels). Each pixel represents a tiny element of the picture. In raw form, a computer commonly represents a pixel with 24 bits. Thus, the number of bits per second, or bit rate, of a typical raw digital video sequence can be 5 million bits/second or more.
Most computers and computer networks lack the resources to process raw digital video. For this reason, engineers use compression (also called coding or encoding) to reduce the bit rate of digital video. Compression can be lossless, in which quality of the video does not suffer but decreases in bit rate are limited by the complexity of the video. Or, compression can be lossy, in which quality of the video suffers but decreases in bit rate are more dramatic. Decompression reverses compression.
In general, video compression techniques include intraframe compression and interframe compression. Intraframe compression techniques compress individual frames, typically called I-frames or key frames. Interframe compression techniques compress frames with reference to preceding and/or following frames, which are typically called predicted frames, P-frames, or B-frames.
Microsoft Corporation's Windows Media Video, Version 8 [“WMV8”] includes a video encoder and a video decoder. The WMV8 encoder uses intraframe and interframe compression, and the WMV8 decoder uses intraframe and interframe decompression.
A. Intraframe Compression in WMV8
The encoder then quantizes 120 the DCT coefficients, resulting in an 8×8 block of quantized DCT coefficients 125. For example, the encoder applies a uniform, scalar quantization step size to each coefficient. Quantization is lossy. Since low frequency DCT coefficients tend to have higher values, quantization results in loss of precision but not complete loss of the information for the coefficients. On the other hand, since high frequency DCT coefficients tend to have values of zero or close to zero, quantization of the high frequency coefficients typically results in contiguous regions of zero values. In addition, in some cases high frequency DCT coefficients are quantized more coarsely than low frequency DCT coefficients, resulting in greater loss of precision/information for the high frequency DCT coefficients.
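For concreteness, the following minimal sketch (illustrative only; it is not the WMV8 quantizer, and the toy step size and coefficient statistics are assumptions) shows uniform, scalar quantization of an 8×8 block of DCT coefficients and why the high-frequency coefficients collapse to runs of zeros:

```python
import numpy as np

def quantize_block(dct_coeffs: np.ndarray, step: int) -> np.ndarray:
    """Uniform scalar quantization: map each coefficient to the integer
    level of its nearest multiple of the step size."""
    return np.round(dct_coeffs / step).astype(int)

def dequantize_block(levels: np.ndarray, step: int) -> np.ndarray:
    """Inverse quantization: the rounding error lost above is the distortion."""
    return levels * step

# Toy 8x8 block: large low-frequency values (top-left corner), near-zero
# high-frequency values elsewhere, mimicking typical DCT statistics.
rng = np.random.default_rng(0)
block = np.zeros((8, 8))
block[:2, :2] = rng.uniform(200, 400, size=(2, 2))
block[2:, 2:] = rng.uniform(-3, 3, size=(6, 6))

levels = quantize_block(block, step=8)
print(levels)                       # high-frequency region is mostly zeros
print(dequantize_block(levels, 8))  # low-frequency values survive, less precisely
```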
The encoder then prepares the 8×8 block of quantized DCT coefficients 125 for entropy encoding, which is a form of lossless compression. The exact type of entropy encoding can vary depending on whether a coefficient is a DC coefficient (lowest frequency), an AC coefficient (other frequencies) in the top row or left column, or another AC coefficient.
The encoder encodes the DC coefficient 126 as a differential from the DC coefficient 136 of a neighboring 8×8 block, which is a previously encoded neighbor (e.g., top or left) of the block being encoded.
The entropy encoder can encode the left column or top row of AC coefficients as a differential from a corresponding column or row of the neighboring 8×8 block.
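A simplified sketch of this neighbor-based prediction follows (illustrative only; the actual WMV8 rules for selecting and signaling the neighbor are more involved):

```python
import numpy as np

def predict_from_neighbor(block: np.ndarray, neighbor: np.ndarray,
                          predict_top_row: bool = True) -> np.ndarray:
    """Replace the DC coefficient (and optionally the top row of AC
    coefficients) with differentials from a previously coded neighboring
    block. Differencing the left column against a left neighbor is
    analogous. Small differentials entropy code more cheaply."""
    residual = block.copy()
    residual[0, 0] = block[0, 0] - neighbor[0, 0]         # DC differential
    if predict_top_row:
        residual[0, 1:] = block[0, 1:] - neighbor[0, 1:]  # top-row AC differentials
    return residual
```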
The encoder scans 150 the 8×8 block 145 of predicted, quantized AC DCT coefficients into a one-dimensional array 155 and then entropy encodes the scanned AC coefficients using a variation of run length coding 160. The encoder selects an entropy code from one or more run/level/last tables 165 and outputs the entropy code.
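The scan and run/level/last symbol formation can be sketched as follows (one common zigzag order is shown for illustration; the WMV8 scan arrays and the code tables 165 differ in detail):

```python
import numpy as np

# One common zigzag order for an 8x8 block, lowest to highest frequency.
ZIGZAG = sorted(((r, c) for r in range(8) for c in range(8)),
                key=lambda rc: (rc[0] + rc[1],
                                rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def run_level_last(block: np.ndarray):
    """Scan quantized AC coefficients into 1-D order, then emit
    (run, level, last) symbols: run = number of zeros preceding a nonzero
    level; last = flag marking the final nonzero coefficient."""
    scanned = [int(block[r, c]) for r, c in ZIGZAG][1:]  # skip the DC position
    symbols, run = [], 0
    for value in scanned:
        if value == 0:
            run += 1
        else:
            symbols.append([run, value, False])
            run = 0
    if symbols:
        symbols[-1][2] = True
    # Each (run, level, last) triple would then index an entropy code table.
    return symbols
```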
B. Interframe Compression in WMV8
Interframe compression in the WMV8 encoder uses block-based motion compensated prediction coding followed by transform coding of the residual error.
For example, the WMV8 encoder splits a predicted frame into 8×8 blocks of pixels. Groups of four 8×8 blocks form macroblocks. For each macroblock, a motion estimation process is performed. The motion estimation approximates the motion of the macroblock of pixels relative to a reference frame, for example, a previously coded, preceding frame.
The encoder then prepares the 8×8 block 355 of quantized DCT coefficients for entropy encoding. The encoder scans 360 the 8×8 block 355 into a one-dimensional array 365 with 64 elements, such that coefficients are generally ordered from lowest frequency to highest frequency, which typically creates long runs of zero values.
The encoder entropy encodes the scanned coefficients using a variation of run length coding 370. The encoder selects an entropy code from one or more run/level/last tables 375 and outputs the entropy code.
The amount of change between the original and reconstructed frame is termed the distortion and the number of bits required to code the frame is termed the rate for the frame. The amount of distortion is roughly inversely proportional to the rate. In other words, coding a frame with fewer bits (greater compression) will result in greater distortion, and vice versa.
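The trade-off can be seen numerically. The sketch below (illustrative; a Laplacian distribution is a standard stand-in for real DCT coefficient statistics) quantizes the same data at several step sizes, estimating rate by the entropy of the quantized levels and distortion by mean squared error:

```python
import numpy as np

rng = np.random.default_rng(1)
coeffs = rng.laplace(scale=20.0, size=10_000)  # DCT-like coefficient statistics

for step in (2, 8, 32):
    levels = np.round(coeffs / step).astype(int)
    recon = levels * step
    mse = float(np.mean((coeffs - recon) ** 2))   # distortion
    _, counts = np.unique(levels, return_counts=True)
    p = counts / counts.sum()
    bits = float(-(p * np.log2(p)).sum())         # entropy, bits per coefficient
    print(f"step={step:2d}  rate~{bits:4.2f} bits/coeff  distortion(MSE)={mse:7.2f}")
```

Larger step sizes produce fewer bits per coefficient but more distortion, and vice versa.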
C. Bi-directional Prediction
Bi-directionally coded images (e.g., B-frames) use two images from the source video as reference (or anchor) images.
Some conventional encoders use five prediction modes (forward, backward, direct, interpolated and intra) to predict regions in a current B-frame. In intra mode, an encoder does not predict a macroblock from either reference image, and therefore calculates no motion vectors for the macroblock. In forward and backward modes, an encoder predicts a macroblock using either the previous or future reference frame, and therefore calculates one motion vector for the macroblock. In direct and interpolated modes, an encoder predicts a macroblock in a current frame using both reference frames. In interpolated mode, the encoder explicitly calculates two motion vectors for the macroblock. In direct mode, the encoder derives implied motion vectors by scaling the co-located motion vector in the future reference frame, and therefore does not explicitly calculate any motion vectors for the macroblock.
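For example, a direct-mode derivation in the style of MPEG-4 (a sketch; the exact scaling rule and rounding are codec-specific, and the integer division here is illustrative only) scales the co-located motion vector by the temporal distances between the B-frame and its references:

```python
def direct_mode_mvs(colocated_mv, t_b, t_d):
    """Derive implied forward/backward motion vectors for a direct-mode
    B-frame macroblock. colocated_mv: motion vector of the co-located
    macroblock in the future reference frame; t_b: temporal distance from
    the past reference to the B-frame; t_d: temporal distance between the
    two reference frames."""
    mvx, mvy = colocated_mv
    forward = (mvx * t_b // t_d, mvy * t_b // t_d)
    backward = ((t_b - t_d) * mvx // t_d, (t_b - t_d) * mvy // t_d)
    return forward, backward

# A B-frame one third of the way between its references:
print(direct_mode_mvs((12, -6), t_b=1, t_d=3))  # ((4, -2), (-8, 4))
```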
D. Interlace Coding
A typical interlaced video frame consists of two fields scanned at different times.
E. Standards for Video Compression and Decompression
Aside from WMV8, several international standards relate to video compression and decompression. These standards include the Moving Picture Experts Group [“MPEG”] 1, 2, and 4 standards and the H.261, H.262, and H.263 standards from the International Telecommunication Union [“ITU”]. Like WMV8, these standards use a combination of intraframe and interframe compression. The MPEG 4 standard describes coding of macroblocks in 4:2:0 format using, for example, frame DCT coding, where each luminance block is composed of lines from two fields alternately, and field DCT coding, where each luminance block is composed of lines from only one of two fields.
F. Differential Quantization
Differential quantization is a technique in which the amount of quantization applied to various blocks within a single video frame can vary. It has been adopted or used in various standards. Its key benefit is control of the bit rate at a finer resolution than the frame level, which helps meet hardware requirements. A common problem, however, is that differential quantization can compromise visual quality, especially in low bit rate encoding. For example, signaling quantization parameters individually for each block in a frame of video can consume a significant number of bits in the compressed bitstream, bits that could otherwise be used to encode better quality video.
Given the critical importance of video compression and decompression to digital video, it is not surprising that video compression and decompression are richly developed fields. Whatever the benefits of previous video compression and decompression techniques, however, they do not have the advantages of the following techniques and tools.
A video compression encoder/decoder (codec) described herein includes techniques for intelligent differential quantization. With these techniques, video can be intelligently quantized at differing strengths within a frame, such as on a macroblock (MB) or group-of-MBs basis. The key benefit of intelligent differential quantization is the ability to control bit usage at a finer granularity than the frame level, which helps meet hardware constraints (e.g., in a CD player, DVD player, etc.). In addition, intelligent differential quantization allows perceptual optimization by coarsely quantizing unimportant regions while finely quantizing important regions within a frame.
The intelligent differential quantization techniques are particularly beneficial in consumer devices that have a fixed reading/writing speed requirement and cannot handle a sudden burst of data. By allowing the codec to control the amount of data generated on a finer scale, manufacturers can build consumer devices that more readily handle the compressed bitstream. In addition, intelligent differential quantization helps improve the perceptual quality of the video.
The intelligent differential quantization techniques described herein address this quality loss issue. The techniques use information gathered from encoding and analysis of the video to classify the importance of different regions of the image and quantize them accordingly. In addition, the techniques include an efficient way to signal the necessary differential quantization strength information in the compressed bit stream.
Additional features and advantages of the invention will be made apparent from the following detailed description of embodiments that proceeds with reference to the accompanying drawings.
For purposes of illustration, the innovations summarized above are incorporated into embodiments of a video encoder and decoder (codec) described in detail below with reference to the accompanying figures.
I. Computing Environment
With reference to FIG. 7, the computing environment 700 includes at least one processing unit 710 and memory 720. The processing unit 710 executes computer-executable instructions. The memory 720 may be volatile memory (e.g., registers, cache, RAM), non-volatile memory (e.g., ROM, EEPROM, flash memory), or some combination of the two. The memory 720 stores software 780 implementing the video encoder or decoder.
A computing environment may have additional features. For example, the computing environment 700 includes storage 740, one or more input devices 750, one or more output devices 760, and one or more communication connections 770. An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computing environment 700. Typically, operating system software (not shown) provides an operating environment for other software executing in the computing environment 700, and coordinates activities of the components of the computing environment 700.
The storage 740 may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, DVDs, or any other medium which can be used to store information and which can be accessed within the computing environment 700. The storage 740 stores instructions for the software 780 implementing the video encoder or decoder.
The input device(s) 750 may be a touch input device such as a keyboard, mouse, pen, or trackball, a voice input device, a scanning device, or another device that provides input to the computing environment 700. For audio or video encoding, the input device(s) 750 may be a sound card, video card, TV tuner card, or similar device that accepts audio or video input in analog or digital form, or a CD-ROM or CD-RW that reads audio or video samples into the computing environment 700. The output device(s) 760 may be a display, printer, speaker, CD-writer, or another device that provides output from the computing environment 700.
The communication connection(s) 770 enable communication over a communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions, audio or video input or output, or other data in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media include wired or wireless techniques implemented with an electrical, optical, RF, infrared, acoustic, or other carrier.
The techniques and tools can be described in the general context of computer-readable media. Computer-readable media are any available media that can be accessed within a computing environment. By way of example, and not limitation, with the computing environment 700, computer-readable media include memory 720, storage 740, communication media, and combinations of any of the above.
The techniques and tools can be described in the general context of computer-executable instructions, such as those included in program modules, being executed in a computing environment on a target real or virtual processor. Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or split between program modules as desired in various embodiments. Computer-executable instructions for program modules may be executed within a local or distributed computing environment.
For the sake of presentation, the detailed description uses terms like “indicate,” “choose,” “obtain,” and “apply” to describe computer operations in a computing environment. These terms are high-level abstractions for operations performed by a computer, and should not be confused with acts performed by a human being. The actual computer operations corresponding to these terms vary depending on implementation.
II. Generalized Video Encoder and Decoder
The relationships shown between modules within the encoder and decoder indicate the main flow of information in the encoder and decoder; other relationships are not shown for the sake of simplicity. In particular, side information indicating the encoder settings, modes, and tables used for a video sequence, frame, macroblock, or block is generally not shown; such side information is sent in the output bitstream, typically after entropy encoding.
The encoder 800 and decoder 900 are block-based and use a 4:1:1 macroblock format. Each macroblock includes four 8×8 luminance blocks and four 4×8 chrominance blocks. Further details regarding the 4:1:1 format are provided below. The encoder 800 and decoder 900 also can use a 4:2:0 macroblock format with each macroblock including four 8×8 luminance blocks (at times treated as one 16×16 macroblock) and two 8×8 chrominance blocks. Alternatively, the encoder 800 and decoder 900 are object-based, use a different macroblock or block format, or perform operations on sets of pixels of different size or configuration.
Depending on implementation and the type of compression desired, modules of the encoder or decoder can be added, omitted, split into multiple modules, combined with other modules, and/or replaced with like modules. In alternative embodiments, encoders or decoders with different modules and/or other configurations of modules perform one or more of the described techniques.
A. Video Encoder
The encoder system 800 compresses predicted frames and key frames.
A predicted frame (also called P-frame, B-frame, or inter-coded frame) is represented in terms of prediction (or difference) from one or more reference (or anchor) frames. A prediction residual is the difference between what was predicted and the original frame. In contrast, a key frame (also called I-frame, intra-coded frame) is compressed without reference to other frames.
If the current frame 805 is a forward-predicted frame, a motion estimator 810 estimates motion of macroblocks or other sets of pixels of the current frame 805 with respect to a reference frame, which is the reconstructed previous frame 825 buffered in a frame store (e.g., frame store 820). If the current frame 805 is a bi-directionally-predicted frame (a B-frame), a motion estimator 810 estimates motion in the current frame 805 with respect to two reconstructed reference frames. Typically, a motion estimator estimates motion in a B-frame with respect to a temporally previous reference frame and a temporally future reference frame. Accordingly, the encoder system 800 can comprise separate stores 820 and 822 for backward and forward reference frames. For more information on bi-directionally predicted frames, see U.S. patent application Ser. No. 10/622,378, entitled, “Advanced Bi-Directional Predictive Coding of Video Frames,” filed concurrently herewith.
The motion estimator 810 can estimate motion by pixel, ½ pixel, ¼ pixel, or other increments, and can switch the resolution of the motion estimation on a frame-by-frame basis or other basis. The resolution of the motion estimation can be the same or different horizontally and vertically. The motion estimator 810 outputs as side information motion information 815 such as motion vectors. A motion compensator 830 applies the motion information 815 to the reconstructed frame(s) 825 to form a motion-compensated current frame 835. The prediction is rarely perfect, however, and the difference between the motion-compensated current frame 835 and the original current frame 805 is the prediction residual 845. Alternatively, a motion estimator and motion compensator apply another type of motion estimation/compensation.
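A minimal integer-pixel version of such a search (illustrative only; the encoder described here also supports ½- and ¼-pixel accuracy, and practical encoders use far more efficient search strategies than exhaustive search) might look like:

```python
import numpy as np

def sad(a: np.ndarray, b: np.ndarray) -> int:
    """Sum of absolute differences, a common motion-estimation cost."""
    return int(np.abs(a.astype(int) - b.astype(int)).sum())

def full_search(cur_block, ref_frame, top, left, radius=4):
    """Exhaustive integer-pixel search within +/-radius of the co-located
    position. Returns the best motion vector (dx, dy) and the prediction
    residual that would then be transform coded."""
    h, w = cur_block.shape
    best_mv = (0, 0)
    best_cost = sad(cur_block, ref_frame[top:top + h, left:left + w])
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = top + dy, left + dx
            if 0 <= y <= ref_frame.shape[0] - h and 0 <= x <= ref_frame.shape[1] - w:
                cost = sad(cur_block, ref_frame[y:y + h, x:x + w])
                if cost < best_cost:
                    best_mv, best_cost = (dx, dy), cost
    dx, dy = best_mv
    prediction = ref_frame[top + dy:top + dy + h, left + dx:left + dx + w]
    residual = cur_block.astype(int) - prediction.astype(int)
    return best_mv, residual
```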
A frequency transformer 860 converts the spatial domain video information into frequency domain (i.e., spectral) data. For block-based video frames, the frequency transformer 860 applies a discrete cosine transform [“DCT”] or variant of DCT to blocks of the pixel data or prediction residual data, producing blocks of DCT coefficients. Alternatively, the frequency transformer 860 applies another conventional frequency transform such as a Fourier transform or uses wavelet or subband analysis.
A quantizer 870 then quantizes the blocks of spectral data coefficients. The quantizer applies uniform, scalar quantization to the spectral data with a step-size that varies on a frame-by-frame basis or other basis. Alternatively, the quantizer applies another type of quantization to the spectral data coefficients, for example, a non-uniform, vector, or non-adaptive quantization, or directly quantizes spatial domain data in an encoder system that does not use frequency transformations. In addition to adaptive quantization, the encoder 800 can use frame dropping, adaptive filtering, or other techniques for rate control.
If a given macroblock in a predicted frame has no information of certain types (e.g., no motion information for the macroblock and no residual information), the encoder 800 may encode the macroblock as a skipped macroblock. If so, the encoder signals the skipped macroblock in the output bit stream of compressed video information 895.
When a reconstructed current frame is needed for subsequent motion estimation/compensation, an inverse quantizer 876 performs inverse quantization on the quantized spectral data coefficients. An inverse frequency transformer 866 then performs the inverse of the operations of the frequency transformer 860, producing a reconstructed prediction residual (for a predicted frame) or a reconstructed key frame. If the current frame 805 was a key frame, the reconstructed key frame is taken as the reconstructed current frame (not shown). If the current frame 805 was a predicted frame, the reconstructed prediction residual is added to the motion-compensated current frame 835 to form the reconstructed current frame. A frame store (e.g., frame store 820) buffers the reconstructed current frame for use in predicting another frame. In some embodiments, the encoder applies a deblocking filter to the reconstructed frame to adaptively smooth discontinuities in the blocks of the frame.
The entropy coder 880 compresses the output of the quantizer 870 as well as certain side information (e.g., motion information 815, spatial extrapolation modes, quantization step size). Typical entropy coding techniques include arithmetic coding, differential coding, Huffman coding, run length coding, LZ coding, dictionary coding, and combinations of the above. The entropy coder 880 typically uses different coding techniques for different kinds of information (e.g., DC coefficients, AC coefficients, different kinds of side information), and can choose from among multiple code tables within a particular coding technique.
The entropy coder 880 puts compressed video information 895 in the buffer 890. A buffer level indicator is fed back to bit rate adaptive modules.
The compressed video information 895 is depleted from the buffer 890 at a constant or relatively constant bit rate and stored for subsequent streaming at that bit rate. Therefore, the level of the buffer 890 is primarily a function of the entropy of the filtered, quantized video information, which affects the efficiency of the entropy coding. Alternatively, the encoder system 800 streams compressed video information immediately following compression, and the level of the buffer 890 also depends on the rate at which information is depleted from the buffer 890 for transmission.
Before or after the buffer 890, the compressed video information 895 can be channel coded for transmission over the network. The channel coding can apply error detection and correction data to the compressed video information 895.
B. Video Decoder
The decoder system 900 decompresses predicted frames and key frames.
A buffer 990 receives the information 995 for the compressed video sequence and makes the received information available to the entropy decoder 980. The buffer 990 typically receives the information at a rate that is fairly constant over time, and includes a jitter buffer to smooth short-term variations in bandwidth or transmission. The buffer 990 can include a playback buffer and other buffers as well. Alternatively, the buffer 990 receives information at a varying rate. Before or after the buffer 990, the compressed video information can be channel decoded and processed for error detection and correction.
The entropy decoder 980 entropy decodes entropy-coded quantized data as well as entropy-coded side information (e.g., motion information 915, spatial extrapolation modes, quantization step size), typically applying the inverse of the entropy encoding performed in the encoder. Entropy decoding techniques include arithmetic decoding, differential decoding, Huffman decoding, run length decoding, LZ decoding, dictionary decoding, and combinations of the above. The entropy decoder 980 frequently uses different decoding techniques for different kinds of information (e.g., DC coefficients, AC coefficients, different kinds of side information), and can choose from among multiple code tables within a particular decoding technique.
A motion compensator 930 applies motion information 915 to one or more reference frames 925 to form a prediction 935 of the frame 905 being reconstructed. For example, the motion compensator 930 uses a macroblock motion vector to find a macroblock in a reference frame 925. A frame buffer (e.g., frame buffer 920) stores previously reconstructed frames for use as reference frames. Typically, B-frames have more than one reference frame (e.g., a temporally previous reference frame and a temporally future reference frame). Accordingly, the decoder system 900 can comprise separate frame buffers 920 and 922 for backward and forward reference frames.
The motion compensator 930 can compensate for motion at pixel, ½ pixel, ¼ pixel, or other increments, and can switch the resolution of the motion compensation on a frame-by-frame basis or other basis. The resolution of the motion compensation can be the same or different horizontally and vertically. Alternatively, a motion compensator applies another type of motion compensation. The prediction by the motion compensator is rarely perfect, so the decoder 900 also reconstructs prediction residuals.
When the decoder needs a reconstructed frame for subsequent motion compensation, a frame buffer (e.g., frame buffer 920) buffers the reconstructed frame for use in predicting another frame. In some embodiments, the decoder applies a deblocking filter to the reconstructed frame to adaptively smooth discontinuities in the blocks of the frame.
An inverse quantizer 970 inverse quantizes entropy-decoded data. In general, the inverse quantizer applies uniform, scalar inverse quantization to the entropy-decoded data with a step-size that varies on a frame-by-frame basis or other basis. Alternatively, the inverse quantizer applies another type of inverse quantization to the data, for example, a non-uniform, vector, or non-adaptive quantization, or directly inverse quantizes spatial domain data in a decoder system that does not use inverse frequency transformations.
An inverse frequency transformer 960 converts the quantized, frequency domain data into spatial domain video information. For block-based video frames, the inverse frequency transformer 960 applies an inverse DCT [“IDCT”] or variant of IDCT to blocks of the DCT coefficients, producing pixel data or prediction residual data for key frames or predicted frames, respectively. Alternatively, the inverse frequency transformer 960 applies another conventional inverse frequency transform such as an inverse Fourier transform or uses wavelet or subband synthesis.
When a skipped macroblock is signaled in the bit stream of information 995 for a compressed sequence of video frames, the decoder 900 reconstructs the skipped macroblock without using the information (e.g., motion information and/or residual information) normally included in the bit stream for non-skipped macroblocks.
III. Intelligent Differential Quantization
With reference to FIG. 10, the video encoder 800 and decoder 900 employ intelligent differential quantization techniques, which are described below.
More particularly, the video encoder 800/decoder 900 analyzes the global motion of the video to classify the importance of the regions within a frame. As discussed above, the video encoder 800 gathers motion vector information in the encoding process, which is used in encoding the video (e.g., for predictive interframe coding). This motion vector information is encoded as side information in the compressed bit stream. Based on the motion vector information gathered in the encoding process, the video encoder 800/decoder 900 estimates the global motion of the video (at action 1010), including whether the video is panning left, right, up, down, or diagonally, or zooming in or out.
In one embodiment, video panning detection can be performed by calculating an aggregate value of the motion vectors within the video frame and comparing this aggregate value to a motion threshold value. If the aggregate motion vector exceeds the threshold, the video is determined to be panning in the direction opposite the aggregate motion vector. Zoom detection in some embodiments of the invention can be performed by calculating an aggregate of the motion vectors for separate quadrants of the video frame and testing whether the quadrants' aggregate motion vectors are directed inwardly or outwardly. In alternative embodiments, other methods of video panning and zoom detection based on the motion vectors can be used.
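The following sketch illustrates one plausible form of these detectors (hypothetical helpers, not the patented implementation; it assumes per-macroblock motion vectors stored as (x, y) pairs with y increasing downward, and leaves the threshold as a parameter):

```python
import numpy as np

def detect_pan(mvs: np.ndarray, threshold: float):
    """mvs: array of shape (rows, cols, 2) of per-macroblock motion vectors.
    A large aggregate motion vector implies the camera is panning in the
    opposite direction (background blocks move against the pan)."""
    agg = mvs.reshape(-1, 2).sum(axis=0)
    if np.hypot(*agg) > threshold:
        return -agg  # estimated pan direction
    return None

def detect_zoom(mvs: np.ndarray):
    """Split the frame into quadrants and test whether each quadrant's
    aggregate motion points outward (zoom in) or inward (zoom out)."""
    rows, cols, _ = mvs.shape
    quads = [
        (mvs[:rows // 2, :cols // 2], np.array([-1, -1])),  # top-left, outward = up-left
        (mvs[:rows // 2, cols // 2:], np.array([1, -1])),   # top-right
        (mvs[rows // 2:, :cols // 2], np.array([-1, 1])),   # bottom-left
        (mvs[rows // 2:, cols // 2:], np.array([1, 1])),    # bottom-right
    ]
    outward = [np.dot(q.reshape(-1, 2).sum(axis=0), d) > 0 for q, d in quads]
    if all(outward):
        return "zoom_in"
    if not any(outward):
        return "zoom_out"
    return None
```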
Based on this global motion estimate, the intelligent differential quantization technique then classifies which regions of the video frame may be less important to the perceptual quality of the video (action 1020). In particular, if the video is panning in some direction, the opposite side of the image has less perceptual significance and can be more coarsely quantized without much impact on overall perceptual quality. For example, if the video is panning toward the left, then the right edge of the image will quickly disappear in the following frames. Therefore, the quality of the disappearing edge macroblocks can be compromised (compressed more) to save bits, either to meet the bit rate requirement or to improve the quality of other parts of the image, without much perceptual degradation. Likewise, if the video is zooming in, all edges of the image will quickly disappear in the following frames, and the quality of all these disappearing edge macroblocks can be compromised.
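Continuing the sketch above, the global motion estimate can be mapped to a set of edge macroblocks that may be quantized more coarsely (the mapping policy shown is an assumption that follows the pan/zoom examples in the text):

```python
def less_important_mbs(mb_rows, mb_cols, pan=None, zoom=None):
    """Return the set of (row, col) macroblock positions whose content will
    soon leave the frame and can therefore be quantized more coarsely.
    pan: (dx, dy) pan direction as returned by detect_pan (y increases down)."""
    mbs = set()
    if zoom == "zoom_in":  # all edges disappear as the view narrows
        mbs |= {(r, c) for r in range(mb_rows) for c in range(mb_cols)
                if r in (0, mb_rows - 1) or c in (0, mb_cols - 1)}
    elif pan is not None:
        dx, dy = pan
        if dx < 0:                   # panning left: right edge disappears
            mbs |= {(r, mb_cols - 1) for r in range(mb_rows)}
        elif dx > 0:                 # panning right: left edge disappears
            mbs |= {(r, 0) for r in range(mb_rows)}
        if dy < 0:                   # panning up: bottom edge disappears
            mbs |= {(mb_rows - 1, c) for c in range(mb_cols)}
        elif dy > 0:                 # panning down: top edge disappears
            mbs |= {(0, c) for c in range(mb_cols)}
    return mbs
```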
According to the intelligent differential quantization technique, the video encoder 800 determines the differential quantization to apply to macroblocks in the frame (action 1030). The regions classified as less perceptually significant are quantized more strongly, which saves bits that can be used to meet bit rate requirements or to quantize more finely the macroblocks in regions that are not classified as less perceptually significant.
At action 1040, the video encoder 800 encodes information in the compressed bit stream using a signaling scheme described below for signaling the differential quantization to the video decoder 900. During decoding, the video decoder 900 reads the signaled differential quantization information and dequantizes the macroblocks accordingly to decompress the video.
A. Differential Quantization Signaling Scheme
With reference to FIG. 11, the codec uses the following bitstream syntax to signal the differential quantization information in the compressed bit stream.
In the sequence header (which is sent once per video sequence), this syntax includes a DQUANT field 1120, which is a 2-bit field that indicates whether or not the quantization step size can vary within a frame. In this syntax, there are three possible values for DQUANT. If DQUANT=0, then only one quantization step size (i.e., the frame quantization step size PQUANT) is used per frame. If DQUANT=1 or 2, it is possible to quantize each macroblock in the frame differently.
At the frame level, a VOPDQUANT field 1110 is made up of several bitstream syntax elements, as shown in FIG. 11. Its composition depends on the value of DQUANT, as described in the two cases below.
Case 1: DQUANT=1.
In this case, the syntax provides four possibilities for the location of the differentially quantized region: all four edges of the frame, two edges (double edge), a single edge, or an arbitrary set of macroblocks signaled on a per-macroblock basis.
Case 2: DQUANT=2.
The macroblocks located on the boundary are quantized with ALTPQUANT while the rest of the macroblocks are quantized with PQUANT.
The bitstream syntax for case 1 includes the following fields:
DQUANTFRM (1 bit)
The DQUANTFRM field 1131 is a 1-bit value that is present only when DQUANT=1. If DQUANTFRM=0, then the current picture is only quantized with PQUANT.
DQPROFILE (2 bits)
The DQPROFILE field 1132 is a 2-bit value that is present only when DQUANT=1 and DQUANTFRM=1. It indicates where the quantization step size is allowed to change within the current picture. This field is coded to represent the location of the differentially quantized region as shown in Table 1 below.

Table 1: DQPROFILE code table

| DQPROFILE FLC | Location |
|---|---|
| 00 | All four edges |
| 01 | Double edge |
| 10 | Single edge |
| 11 | All macroblocks |
DQSBEDGE (2 bits)
The DQSBEDGE field 1133 is a 2-bit value that is present when DQPROFILE=Single edge. It indicates which edge will be quantized with ALTPQUANT, as shown in Table 2 below.

Table 2: DQSBEDGE code table

| DQSBEDGE FLC | Edge |
|---|---|
| 00 | Left |
| 01 | Top |
| 10 | Right |
| 11 | Bottom |
DQDBEDGE (2 bits)
The DQDBEDGE field 1134 is a 2-bit value that is present when DQPROFILE=Double edge. It indicates which two edges will be quantized with ALTPQUANT, as shown in Table 3 below.

Table 3: DQDBEDGE code table

| DQDBEDGE FLC | Edges |
|---|---|
| 00 | Left and top |
| 01 | Top and right |
| 10 | Right and bottom |
| 11 | Bottom and left |
DQBILEVEL (1 bit)
The DQBILEVEL field 1135 is a 1-bit value that is present when DQPROFILE=All macroblocks. If DQBILEVEL=1, then each macroblock in the picture can take one of two possible quantization step sizes (PQUANT or ALTPQUANT). If DQBILEVEL=0, then each macroblock in the picture can take on any quantization step size.
PQDIFF (3 bits)
The PQDIFF field 1136 is a 3-bit field that encodes either the PQUANT differential or an escape code.
If the PQDIFF field does not equal 7, then the PQDIFF field encodes the differential and the ABSPQ field does not follow in the bitstream. In this case:

ALTPQUANT = PQUANT + PQDIFF + 1
If the PQDIFF field equals 7, then the ABSPQ field follows in the bitstream and the ALTPQUANT value is decoded as:

ALTPQUANT = ABSPQ
ABSPQ (5 bits)

The ABSPQ field 1137 is present in the bitstream if PQDIFF equals 7. In this case, ABSPQ directly encodes the value of ALTPQUANT as described above.
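Putting the fields together, a decoder-side sketch of recovering ALTPQUANT from the frame header might look as follows (the `read_bits` helper is hypothetical, and the 5-bit ABSPQ width and exact field-presence conditions are assumptions based on the description above):

```python
def decode_vopdquant(read_bits, dquant: int, pquant: int):
    """Sketch of frame-level VOPDQUANT decoding. read_bits(n) returns the
    next n bits of the frame header as an unsigned integer."""
    info = {"profile": None, "edge_code": None, "bilevel": True, "altpquant": None}
    if dquant == 2:
        # Case 2: boundary macroblocks use ALTPQUANT, the rest use PQUANT.
        info["profile"] = "all four edges"
    else:  # DQUANT == 1
        if read_bits(1) == 0:                    # DQUANTFRM
            return None                          # whole picture uses PQUANT
        profile = read_bits(2)                   # DQPROFILE, per Table 1
        info["profile"] = ("all four edges", "double edge",
                           "single edge", "all macroblocks")[profile]
        if info["profile"] == "single edge":
            info["edge_code"] = read_bits(2)     # DQSBEDGE, per Table 2
        elif info["profile"] == "double edge":
            info["edge_code"] = read_bits(2)     # DQDBEDGE, per Table 3
        elif info["profile"] == "all macroblocks":
            info["bilevel"] = read_bits(1) == 1  # DQBILEVEL
    pqdiff = read_bits(3)                        # PQDIFF
    if pqdiff != 7:
        info["altpquant"] = pquant + pqdiff + 1  # differential case
    else:
        info["altpquant"] = read_bits(5)         # escape: ABSPQ (width assumed)
    return info

# Example: DQUANTFRM=1, DQPROFILE=10 (single edge), DQSBEDGE=01 (top), PQDIFF=3.
stream = iter("11001011")
read_bits = lambda n: int("".join(next(stream) for _ in range(n)), 2)
print(decode_vopdquant(read_bits, dquant=1, pquant=8))  # altpquant = 8 + 3 + 1 = 12
```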
In view of the many possible embodiments to which the principles of our invention may be applied, we claim as our invention all such embodiments as may come within the scope and spirit of the following claims and equivalents thereto.