Computer systems that capture, edit, and play back motion video typically process motion video data as digital data representing a sequence of digital images. Such data typically is stored in computer data files on a random access computer readable medium. An image may represent a single frame, i.e., two fields, or a single field of motion video data. Such systems generally allow any particular image in the sequence of still images to be randomly accessed for editing and for playback.
Since digital data representing motion video may consume large amounts of computer memory, particularly for full motion broadcast quality video (e.g., sixty fields per second for NTSC and fifty fields per second for PAL), the digital data typically is compressed to reduce storage requirements. There are several kinds of compression for motion video information. One kind of compression is called “intraframe” compression, which involves compressing the data representing each image independently of other images. Commonly used intraframe compression techniques employ a transformation from the spatial domain to the frequency domain, for example using discrete cosine transforms, to generate a set of coefficients in the frequency domain that represent the image or portions of the image. These coefficients generally are quantized, placed in a specified order (commonly called a zig-zag ordering), then entropy encoded. Entropy encoding is a lossless process that typically involves generating code words that represent the coefficients, using a form of Huffman coding. Image quality of compressed images is primarily affected by the loss of information through quantization.
Some compression techniques involve additional operations that further affect image quality. For example, some compression techniques reduce the size of an image before it is transformed and quantized. Other compression techniques reduce the bit depth, for example by rounding from 10 bits to 8 bits.
More compression can be obtained for motion video sequences by using what is commonly called “interframe” compression. Interframe compression involves predicting one image using another. This kind of compression often is used in combination with intraframe compression. For example, a first image may be compressed using intraframe compression, and typically is called a key frame. The subsequent images may be compressed by generating predictive information that, when combined with other image data, results in the desired image. Intraframe compressed images may occur periodically throughout the sequence. For interframe compressed image sequences, the interframe compressed images in the sequence can be accessed and decompressed only with reference to other images in the sequence.
Compression techniques for video also may provide a variable bit rate per image or a fixed bit rate per image. Either type of technique generally uses a desired bit rate in a control loop to adjust parameters of the compression algorithm, typically parameters for quantization, so that the desired bit rate is met. For fixed bit rate compression, the desired bit rate must be met by each compressed image or by the compressed data for each subset of each image. For variable bit rate compression, the desired bit rate is generally the average bit rate (in terms of bits per image) that is sought.
High quality, fixed bit rate, intraframe-only compression of video can be achieved using rate distortion optimization. The compression process involves transforming portions of the image to generate frequency domain coefficients for each portion. A bit rate is determined for each transformed portion for each of a plurality of scale factors. Distortion for each portion is estimated for each of the plurality of scale factors. A scale factor is selected for each portion so as to minimize the total distortion in the image while achieving a desired bit rate. A quantization matrix is selected according to the desired bit rate. The frequency domain coefficients for each portion are quantized using the selected quantization matrix as scaled by the selected scale factor for the portion. The quantized frequency domain coefficients are encoded using a variable length encoding to provide compressed data for each of the defined portions. The compressed data is output for each of the defined portions to provide a compressed bitstream at the desired bit rate.
Rate-distortion optimization may be performed by obtaining a bit rate for each of a plurality of scale factors, each of which is a power of two. The selected scale factor also may be limited to a scale factor that is a power of two. Portions of the rate-distortion curve that extend beyond the data available also may be estimated. In particular, for any portion of an image and a quantization matrix, there is a scale factor, called the maximum scale factor, that makes all of the quantizers large enough that all of the coefficients are quantized to zero. The maximum scale factor provides the minimum bit rate. Bit rates corresponding to scale factors between the maximum scale factor and another scale factor for which a computed bit rate is available can be estimated by interpolation.
A weighting factor may be used to scale the values in the selected quantization matrix for the bit depth of the image data. Thus, the numerical accuracy of subsequent operations can be controlled for data of multiple bit depths, such as both 8-bit and 10-bit data.
Entropy encoding of the AC coefficients may be performed in the following manner. The range of potential amplitudes for quantized coefficients is split into two parts. The first part is a base range for amplitudes between 1 and a convenient value AB. The second part is an index range for the remaining amplitudes [AB+1, . . . , Amax], where Amax is the maximum quantized coefficient amplitude. Amplitudes in the base range are encoded with a Huffman code word that represents that amplitude. The index range is further divided into a number of segments, each having a range of values corresponding to AB. Amplitudes in the index range are encoded with a Huffman code word that represents the amplitude and an index value that indicates the segment from which they originate. If there are one or more preceding zero valued coefficients, the amplitude is encoded by a Huffman code word, followed, if the amplitude is in the index range, by an index value, and then by another Huffman code word representing the length of the preceding run of zeros. This encoding may be applicable to forms of data other than quantized coefficient data.
The coefficients are then quantized (by quantizer 106) using a set of quantizers, one quantizer for each frequency, to provide a quantized coefficient 108 for each frequency. The set of quantizers typically is referred to as a quantization table or quantization matrix. The quantization matrices appropriate for a particular bit rate, for example 220 Mbits per second and 140 Mbits per second, can be defined experimentally using sample images and a procedure defined in: “RD-OPT: An Efficient Algorithm for Optimizing DCT Quantization Tables,” by Viresh Ratnakar and Miron Livny, in 1995 Data Compression Conference, pp. 332-341 (“Ratnakar”). Ratnakar teaches how to optimize a quantization table for a single image; however, this procedure may be extended to optimize a quantization table using statistics for multiple example images selected as “typical” images. Such a quantization table can be developed for each of a set of different desired output bit rates.
The quantization table quantizes the frequency data by dividing each coefficient by its corresponding quantizer and rounding. For example, the following formula may be used:
round[S(u,v)/Q(u,v)];
where S(u,v) is the value at position u,v in the matrix of frequency coefficients, and Q(u,v) is the quantizer at position u,v in the quantization matrix.
The values Q(u,v) in the quantization matrix may be a function of a fixed quantization matrix, a scale factor and a weighting factor. The weighting factor scales the values in the quantization matrix so that they are appropriate for the bit depth of the image data, accounting for the difference in dynamic range between data of multiple bit depths, such as 8-bit and 10-bit data.
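For illustration, the following Python sketch applies this quantization to an 8x8 block of coefficients; the base quantization matrix, scale factor, and weighting factor values are placeholders chosen for the example, not values taken from this description.

```python
import numpy as np

def quantize_block(S, Q_base, scale_factor=1, weight=1.0):
    """Quantize an 8x8 block of frequency coefficients S(u,v).

    Q(u,v) is formed from a fixed quantization matrix, a scale factor
    (e.g., a power of two chosen by the rate controller) and a weighting
    factor that adapts the matrix to the bit depth of the source data."""
    Q = Q_base.astype(np.float64) * scale_factor * weight
    return np.round(S / Q).astype(np.int32)   # round[S(u,v)/Q(u,v)]

# Example with a flat base matrix, a scale factor of 2, and a weighting factor
# of 4 to account for 10-bit rather than 8-bit input (illustrative values only).
S = np.random.default_rng(0).integers(-2048, 2048, size=(8, 8))
print(quantize_block(S, Q_base=np.full((8, 8), 16), scale_factor=2, weight=4.0))
```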
The quantization also may be performed to provide a variable width “deadzone”. The deadzone is the area around zero that is quantized to zero. In the equation above, using rounding, the deadzone has a width of the quantizer value Q(u,v). Noise can be reduced by increasing the deadzone as a function of quantizer value, for example, using the following equations:
The quantized coefficient, c, is defined as:
The dequantized value, x̂, would be:
where δ is typically one-half.
Then the width of the deadzone equals 2(1−k)Q(u,v).
With these equations, if k=0.5 and δ=0.5, the quantization and dequantization are conventional, with a deadzone of width Q(u,v). For other values of k the width of the deadzone varies; for k in the range (−1, 0.5) the deadzone is larger. To reduce noise, a value of k in the range (−0.5, 0.25) might be used to produce a deadzone between 1.5 Q(u,v) and 3.0 Q(u,v).
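A short Python sketch of such a deadzone quantizer follows. The exact equations are not reproduced above, so the formulas used here, c = sign(x)*floor(|x|/Q + k) with midpoint reconstruction, are an assumption chosen to match the stated properties: the quantization reduces to conventional rounding at k = δ = 0.5, and the deadzone has width 2(1−k)Q(u,v).

```python
import math

def quantize_deadzone(x, Q, k=0.5):
    """Assumed deadzone quantizer: c = sign(x) * floor(|x|/Q + k).
    Inputs with |x| < (1 - k) * Q quantize to zero, so the deadzone has
    width 2*(1-k)*Q; k = 0.5 is conventional rounding."""
    if x == 0:
        return 0
    return int(math.copysign(math.floor(abs(x) / Q + k), x))

def dequantize_deadzone(c, Q, k=0.5, delta=0.5):
    """Assumed reconstruction at the interval midpoint:
    x_hat = sign(c) * (|c| - k + delta) * Q; with k = delta = 0.5 this is
    the conventional x_hat = c * Q."""
    if c == 0:
        return 0.0
    return math.copysign((abs(c) - k + delta) * Q, c)

if __name__ == "__main__":
    Q = 16
    for k in (0.5, 0.25, -0.5):
        # Largest integer |x| that still quantizes to zero, i.e. half the deadzone.
        half = max(x for x in range(0, 4 * Q) if quantize_deadzone(x, Q, k) == 0)
        print(f"k={k:5.2f}  deadzone width ~ {2 * (half + 1)}  (theory {2 * (1 - k) * Q})")
```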
The scale factor may be controlled by a rate controller 114, described in more detail below. In one embodiment, a set of scale factors that are powers of two, e.g., 1, 2, 4, 8, 16 . . . , may be used.
An entropy encoder 110 encodes the quantized values using entropy encoding to produce code words that are formatted to provide the compressed data 112. Prior to entropy encoding a pre-defined coefficient ordering process is applied to the matrix of quantized coefficients to provide a one-dimensional sequence of coefficients. A set of patterns, called symbols, is identified from the sequence of coefficients. The symbols, in turn, are mapped to code words. The symbols may be defined, for example, using a form of run length encoding. Huffman encoding is generally employed to encode the sequence of symbols to variable length codes. The compressed data 112 includes the entropy encoded data and any other data for each block, macroblock or image that may be used to decode it, such as scale factors. A form of entropy encoding is described in more detail below in connection with
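For illustration, the following Python sketch generates one common zig-zag ordering (the JPEG-style scan) and uses it to flatten a block of quantized coefficients into a one-dimensional sequence; the actual pre-defined ordering may differ.

```python
def zigzag_order(n=8):
    """Return the (row, col) visit order for one common zig-zag scan of an
    n-by-n block: coefficients are taken along anti-diagonals, alternating
    direction, so that low-frequency coefficients come first."""
    coords = [(i, j) for i in range(n) for j in range(n)]
    return sorted(coords, key=lambda ij: (ij[0] + ij[1],
                                          ij[0] if (ij[0] + ij[1]) % 2 else ij[1]))

def to_sequence(block):
    """Flatten a 2-D block of quantized coefficients into a 1-D sequence."""
    return [block[i][j] for i, j in zigzag_order(len(block))]

# Example on a 4x4 block; entry values encode their own positions for clarity.
block = [[10 * i + j for j in range(4)] for i in range(4)]
print(to_sequence(block))
# -> [0, 1, 10, 20, 11, 2, 3, 12, 21, 30, 31, 22, 13, 23, 32, 33]
```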
Compression parameters can be changed to affect both the bit rate and the image quality of decompressed data. In DCT-based image compression, compression parameters that may be changed include the quantizers, either within an image (between portions of the image) or from one image to the next. Typically, a portion of an image is a set of DCT blocks called a macroblock. A change to the quantizers affects the compressed bit rate and the image quality upon decompression. An increase in a quantizer value typically decreases the bit rate but also reduces the image quality. Conversely, a decrease in a quantizer value typically increases the bit rate but also improves the image quality. Quantizers may be adapted individually, or the set of quantizers may be scaled uniformly by a scale factor. In one embodiment, the scale factor is adjusted for each macroblock to ensure that each frame has an amount of data that matches a desired fixed bit rate.
A rate controller 114 generally receives the bit rate 122 of the compressed data produced by compressing an image, any constraints 116 on the compression (such as buffer size, bit rate, etc.), and a distortion metric 120. The bit rate and distortion are determined for each macroblock for a number of scale factors in a statistics gathering pass on the image. The rate controller then determines, for each macroblock, an appropriate scale factor 118 to apply to the quantization matrix. The rate controller 114 seeks to minimize the distortion metric 120 over the image according to the constraints 116 by using a technique that is called “rate-distortion optimization,” such as described in “Rate-distortion optimized mode selection for very low bit rate video coding and the emerging H.263 standard,” by T. Wiegand, M. Lightstone, D. Mukherjee, T. G. Campbell, and S. K. Mitra, in IEEE Trans. Circuits Syst. Video Tech., Vol. 6, No. 2, pp. 182-190, April 1996, and in “Optimal bit allocation under multiple rate constraints,” by Antonio Ortega, in Proc. of the Data Compression Conference (DCC 1996), April 1996. In particular, the total distortion over all macroblocks in the image is minimized subject to a desired bit rate, and a scale factor is thereby selected for each macroblock.
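The following Python sketch illustrates one way such an optimization might be carried out, using a Lagrangian search over per-macroblock statistics and the distortion model d = q² described below; the search strategy and the toy bit counts are assumptions of this sketch rather than requirements of the rate controller.

```python
def choose_scale_factors(rates, target_bits, scales=(1, 2, 4, 8, 16, 32, 64, 128)):
    """Pick one scale factor per macroblock so that total distortion is
    (approximately) minimized subject to a total bit budget.

    rates[m][q] is the bit count measured for macroblock m at scale factor q
    in the statistics gathering pass; distortion is modeled as d = q**2.
    The Lagrangian search below (minimize d + lam*r per macroblock and bisect
    on lam to meet the budget) is one standard way to solve this kind of
    allocation problem, assumed here for illustration."""
    def allocate(lam):
        choice, bits, dist = [], 0, 0
        for r in rates:
            q = min(scales, key=lambda s: s * s + lam * r[s])
            choice.append(q)
            bits += r[q]
            dist += q * q
        return choice, bits, dist

    lo, hi = 0.0, 1e9              # lam = 0 favors quality, large lam favors size
    for _ in range(60):            # bisect on the Lagrange multiplier
        mid = 0.5 * (lo + hi)
        if allocate(mid)[1] > target_bits:
            lo = mid               # too many bits: penalize rate more strongly
        else:
            hi = mid
    return allocate(hi)

# Toy example: three macroblocks whose bit cost falls as the scale factor grows.
rates = [{q: 6000 // q + 100 for q in (1, 2, 4, 8, 16, 32, 64, 128)} for _ in range(3)]
scale_factors, total_bits, total_distortion = choose_scale_factors(rates, target_bits=5000)
print(scale_factors, total_bits, total_distortion)
```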
There are several ways to compute a distortion metric. For example, but not limited to this example, the distortion metric 120 (d) may be estimated as the square of the scale factor (q), i.e., d=q². Thus, the distortion metric is known for each scale factor without analyzing the compressed image data.
The bit rate and distortion metric corresponding to a scale factor for which quantization is not performed may be estimated by interpolating measured rate and distortion values obtained from other scale factors. Such a technique is described in “Bit-rate control using piecewise approximated rate-distortion characteristics,” by L-J. Lin and A. Ortega, in IEEE Trans. Circuits Syst. Video Tech., Vol. 8, No. 4, pp. 446-459, August 1998, and in “Cubic Spline Approximation of Rate and Distortion Functions for MPEG Video,” by L-J. Lin, A. Ortega and C.-C. Jay Kuo, in Proceedings of IST/SPIE, Digital Video Compression Algorithms and Technologies 1996, vol. 2668, pp. 169-180, and in “Video Bit-Rate Control with Spline Approximated Rate-Distortion Characteristics,” by Liang-Jin Lin, PhD Thesis, University of Southern California, 1997. For example, bit rates may be computed for two scale factors, one small and one large such as 2 and 128. Interpolation between these two points may be used to obtain a suitable scale factor with a corresponding desired bit rate. If the resulting compressed image data exceeds the desired bit rate, the image data can be compressed again using a different scale factor.
Portions of the rate-distortion curve that extend beyond the data available also may be estimated. In particular, for any portion of an image and a quantization matrix, there is a scale factor, called the maximum scale factor, that makes all of the quantizers large enough that all of the coefficients are quantized to zero. The maximum scale factor provides the minimum bit rate. Bit rates corresponding to scale factors between the maximum scale factor and a scale factor for which an actual bit rate is available can be estimated by interpolation, such as linear interpolation.
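A small Python sketch of this estimation follows; interpolating against the logarithm of the scale factor and the particular measured values are assumptions made for the example.

```python
import math

def estimate_rate(scale, measured, max_scale, min_rate):
    """Estimate the bit count for an untried scale factor by linear
    interpolation between known points.

    measured maps a few scale factors (e.g., 2 and 128) to measured bit
    counts; max_scale is the scale factor at which every coefficient
    quantizes to zero, so the rate falls to the minimum min_rate.
    Interpolating against log2(scale) is an assumption of this sketch."""
    points = dict(measured)
    points[max_scale] = min_rate
    xs = sorted(points)                       # known scale factors
    if scale <= xs[0]:
        return points[xs[0]]
    if scale >= xs[-1]:
        return min_rate
    for lo, hi in zip(xs, xs[1:]):            # find the bracketing pair
        if lo <= scale <= hi:
            t = (math.log2(scale) - math.log2(lo)) / (math.log2(hi) - math.log2(lo))
            return points[lo] + t * (points[hi] - points[lo])

print(estimate_rate(32, measured={2: 90000, 128: 12000}, max_scale=512, min_rate=800))
```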
A more specific example of a rate controller is described in more detail below in connection with
Referring now to
The image processing application 907 performs operations on the image data to produce uncompressed image data 908. For example, such image processing operations may include, but are not limited to, operations for combining images, such as compositing, blending, and keying, or operations within an image, such as resizing, filtering, and color correction, or operations between two images, such as motion estimation. The image processing application also may be an application that captures and/or creates digital image data, without using any input image data 906. The image processing application also may manipulate metadata about the image data, for example to define a sequence of scenes of motion video information. The image processing application also may play back image data in one or more formats, without providing any output data 908.
Although
Entropy encoding and decoding will now be described in connection with
Therefore, for the AC coefficients, there are six types of symbol sets: four for amplitude symbols, one for run lengths, and one for end of block, as follows. In this example, AB=64 and Amax=4096, but this can easily be generalized to other partitionings of the quantized coefficient amplitude range.
1. Anrb={A1nrb, A2nrb, . . . , A64nrb}: Non-zero amplitude coefficients in the base range, with no preceding run of zero valued coefficients. The amplitudes vary from A1nrb=1 to A64nrb=64.
2. Awrb={A1wrb, A2wrb, . . . , A64wrb}: Non-zero amplitude coefficients in the base range, with preceding run of zero valued coefficients. The amplitudes vary from A1wrb=1 to A64wrb=64.
3. Anri={A1nri, A2nri, . . . , A64nri}: Non-zero amplitude coefficients in the index range, with no preceding run of zero valued coefficients. The amplitudes vary from 65 to 4096.
4. Awri={A1wri, A2wri, . . . , A64wri}: Non-zero amplitude coefficients in the index range, with preceding run of zero valued coefficients. The amplitudes vary from 65 to 4096.
5. R={R1, R2, . . . , Rmax}: a run of 1 or more zero valued coefficients. R1=1 and Rmax=62.
6. E={EOB}: the end of block symbol.
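For illustration, the following Python sketch converts a zig-zag ordered sequence of quantized AC coefficients into such a symbol stream (an amplitude symbol in one of the four categories, an index value for amplitudes above AB, the length of any preceding run of zeros, and a final end-of-block symbol); how sign bits are packed into the bitstream is not modeled here.

```python
AB = 64  # size of the base amplitude range, as in the text

def symbolize_ac(coeffs):
    """Turn a zig-zag ordered sequence of quantized AC coefficients into a
    symbol stream: amplitude symbol, then (for the index range) an index
    value, then the length of the preceding run of zeros, then EOB."""
    symbols, run = [], 0
    # Trailing zeros are covered by the end-of-block symbol.
    last = max((i for i, c in enumerate(coeffs) if c != 0), default=-1)
    for c in coeffs[: last + 1]:
        if c == 0:
            run += 1
            continue
        amp, sign = abs(c), (c < 0)
        category = ("Awr" if run else "Anr") + ("b" if amp <= AB else "i")
        if amp <= AB:
            symbols.append((category, amp, sign))
        else:
            p = (amp - 1) >> 6                 # index of the 64-value segment
            symbols.append((category, amp - (p << 6), sign))
            symbols.append(("index", p))
        if run:
            symbols.append(("run", run))
            run = 0
    symbols.append(("EOB",))
    return symbols

# Example: 3, then two zeros, then -200 (index range), then zeros to the end.
print(symbolize_ac([3, 0, 0, -200, 0, 0, 0]))
# -> [('Anrb', 3, False), ('Awri', 8, True), ('index', 3), ('run', 2), ('EOB',)]
```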
If the amplitude of a coefficient maps to one of the index ranges, either Anri 310 or Awri 312, it is encoded by a variable length code word and an index value. The index value, P, is computed from the amplitude A by:
P=((A−1)>>6), 65≦A≦4096.
The value used to determine the variable length code word, V, is computed according to:
Â=A−(P<<6), 1≦Â≦64; V=VLCLUT(Â).
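These relationships can be checked directly; the short loop below verifies that, for every amplitude in the index range, the index value P and the base-range value Â reconstruct the original amplitude.

```python
# Check of the P / A_hat mapping over the whole index range [65, 4096].
for A in range(65, 4097):
    P = (A - 1) >> 6            # segment index, 1..63
    A_hat = A - (P << 6)        # value used to look up the code word, 1..64
    assert 1 <= P <= 63 and 1 <= A_hat <= 64 and (P << 6) + A_hat == A
print("index-range mapping verified")
```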
Using these techniques, a set of Huffman code words is generated for the symbols in the five sets Anrb, Anri, Awrb, Awri, and E, which results in a set of amplitude code words VA={Vnrb, Vnri, Vwrb, Vwri, VE}. There are 4*64+1=257 code words in VA. Another set of Huffman code words is generated for the 62 symbols in R, which results in a set of zero-run code words VR. The set of code words and how they map to amplitude values or run length values can be defined using statistics from sample data according to Huffman coding principles.
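The construction of such a code can be sketched with the standard Huffman procedure, as in the following Python example; the symbol frequencies shown are invented for illustration and are not statistics from actual sample data.

```python
import heapq
from itertools import count

def huffman_code(freqs):
    """Standard Huffman construction: returns {symbol: bitstring}.
    freqs maps each symbol to its frequency in the training data."""
    tie = count()                 # tie-breaker so the heap never compares dicts
    heap = [(f, next(tie), {s: ""}) for s, f in freqs.items()]
    heapq.heapify(heap)
    if len(heap) == 1:            # degenerate case: a single symbol
        return {s: "0" for s in freqs}
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]

# Illustrative only: frequencies of a few amplitude symbols and EOB from
# hypothetical sample data; the real tables cover all four 64-symbol
# amplitude sets plus EOB, and the 62 run-length symbols.
freqs = {"A1nrb": 900, "A2nrb": 400, "A1wrb": 250, "A1nri": 40, "EOB": 600}
for sym, code in sorted(huffman_code(freqs).items(), key=lambda kv: (len(kv[1]), kv[0])):
    print(f"{sym:6s} -> {code}")
```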
The format of such code words will now be described in connection with
Such variable length encoding may be performed using two lookup tables, examples of which are shown in
Each entry, e.g., 502, in the amplitude table 500 uses sixteen bits for the code word 504 and five bits that represent the length 506 of the code word. The maximum storage requirement for one entry, e.g., 502, is twenty-one bits. Thus, each entry can be stored in three successive bytes. In some instances, it may be useful to store the value as a 32-bit word. The total number of bytes required for the amplitude encoding table is
Given an amplitude, it can be converted to a value between 1 and 64, an indication of whether it is preceded by a run, an indication of whether it is in the base range or the index range, and the index value P. This information is applied to the lookup table 500 to retrieve the code word Vnrb, Vnri, Vwrb, or Vwri, which can be combined with a sign bit, the index value P, and, if appropriate, the subsequent code word VR for the run length.
The run-length table 600 has entries, e.g., 602, that require a maximum of 14 bits, including 10 bits for the code word 604 and 4 bits for the length 606 of the code word, which can be stored in two bytes. There are a total of 62 entries, which means that the table requires 124 bytes.
Given a run length, the code word corresponding to that run length is simply retrieved from the table.
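The following Python sketch shows one way such table entries might be packed and used to emit a bitstream; the 32-bit entry layout and the code word values are placeholders for illustration, not the actual tables.

```python
CODE_BITS, LEN_BITS = 16, 5        # 16-bit code word plus 5-bit length, as described

def pack_entry(code, length):
    """Pack one table entry into a 32-bit word (one possible layout:
    the length stored in the bits above the 16-bit code field)."""
    assert 0 < length <= CODE_BITS and length < (1 << LEN_BITS) and code < (1 << length)
    return (length << CODE_BITS) | code

def unpack_entry(word):
    return word & ((1 << CODE_BITS) - 1), word >> CODE_BITS

class BitWriter:
    """Append variable-length code words to a growing byte buffer (MSB first)."""
    def __init__(self):
        self.buf, self.acc, self.nbits = bytearray(), 0, 0
    def put(self, code, length):
        self.acc = (self.acc << length) | code
        self.nbits += length
        while self.nbits >= 8:
            self.nbits -= 8
            self.buf.append((self.acc >> self.nbits) & 0xFF)
            self.acc &= (1 << self.nbits) - 1
    def getvalue(self):
        pad = (-self.nbits) % 8
        return bytes(self.buf) + (bytes([(self.acc << pad) & 0xFF]) if self.nbits else b"")

# Placeholder entries, not the real tables: amplitude 3 -> code 0b101 (3 bits),
# run length 2 -> code 0b0110 (4 bits).
w = BitWriter()
for entry in (pack_entry(0b101, 3), pack_entry(0b0110, 4)):
    w.put(*unpack_entry(entry))
print(w.getvalue().hex())   # bits 101 + 0110, zero-padded to a byte -> 'ac'
```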
An example format for decoding tables is shown in
For run length values, either table 700 or 702 receives as an input 704 a run length code word, and provides as an output the corresponding value. The corresponding value includes a number 706 or 710 representing the length of the run and a length 708 or 712 representing the length in bits of the number 706 or 710.
For amplitude values, either table 800 or 802 receives as an input 804 the amplitude code, and provides as an output the corresponding values, including a number 806 or 814 representing the length in bits of the value to be output, a number 808 or 816 representing the amplitude, a run flag 810 or 818 indicating whether a run code will follow, and an index flag 812 or 820 indicating whether an index code will follow.
Using these encoding principles, the first code word for AC coefficients of a block is an amplitude code word. The run flag and index flag indicate whether the subsequent code word is another amplitude code word, an index value or a run length code word. If both the run flag and index flag are set, the amplitude code word is followed by an index code word, then a run length code word, which are then followed by another amplitude code word.
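The decoding flow can be sketched as follows in Python; the bit-level Huffman decoding is abstracted away, so each token stands for the already-decoded output of one table lookup, with hypothetical field layouts.

```python
def decode_block_tokens(tokens, block_size=64):
    """Rebuild the AC coefficient sequence from already-decoded table outputs.

    Each token stands for the output of one table lookup described above:
      ("amp", value, sign, run_flag, index_flag)  -- amplitude code word
      ("idx", p)                                  -- index value
      ("run", n)                                  -- run-length code word
      ("eob",)                                    -- end of block
    The run and index flags of an amplitude token say what follows it; the
    run of zeros it announces precedes the coefficient in the output."""
    coeffs, it = [], iter(tokens)
    for tok in it:
        if tok[0] == "eob":
            break
        _, value, sign, run_flag, index_flag = tok
        if index_flag:                       # amplitude lies in the index range
            p = next(it)[1]
            value += p << 6                  # A = (P << 6) + A_hat
        run = next(it)[1] if run_flag else 0
        coeffs.extend([0] * run)             # preceding run of zeros
        coeffs.append(-value if sign else value)
    coeffs.extend([0] * (block_size - 1 - len(coeffs)))   # zeros implied by EOB
    return coeffs

tokens = [("amp", 3, False, False, False),
          ("amp", 8, True, True, True), ("idx", 3), ("run", 2),
          ("eob",)]
print(decode_block_tokens(tokens, block_size=8))
# -> [3, 0, 0, -200, 0, 0, 0]
```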
An example implementation of a rate controller will now be described in connection with
In particular, in
Such encoding and decoding may be used for, for example, but not limited to, high definition video, in which images have from 720 to 1080 lines and 1280 to 1920 pixels per line. Frame rates generally vary from 23.976 to 60, with higher frame rates typically representing the field rate of an interlaced frame. Each pixel may be represented using a number of components, for example, but not limited to, luminance and chrominance (Y, Cr, Cb) or red, green and blue, with each component represented using a number of bits (called the bit depth). The bit depth typically is 8 or 10 bits, but could be 12 or 16 bits. Such data has a significantly higher bandwidth than standard definition video. By providing the pre-scale factor as described above, the same encoder may be used to encode both 8-bit and 10-bit data. A fixed quantization matrix may be provided for each of a number of different desired bit rates.
The various components of the system described herein may be implemented as a computer program using a general-purpose computer system. Such a computer system typically includes a main unit connected to both an output device that displays information to a user and an input device that receives input from a user. The main unit generally includes a processor connected to a memory system via an interconnection mechanism. The input device and output device also are connected to the processor and memory system via the interconnection mechanism.
One or more output devices may be connected to the computer system. Example output devices include, but are not limited to, a cathode ray tube (CRT) display, liquid crystal displays (LCD) and other video output devices, printers, communication devices such as a modem, and storage devices such as disk or tape. One or more input devices may be connected to the computer system. Example input devices include, but are not limited to, a keyboard, keypad, track ball, mouse, pen and tablet, communication device, and data input devices. The invention is not limited to the particular input or output devices used in combination with the computer system or to those described herein.
The computer system may be a general purpose computer system which is programmable using a computer programming language, such as “C++,” Visual Basic, JAVA or other language, such as a scripting language or even assembly language. The computer system may also be specially programmed, special purpose hardware. In a general-purpose computer system, the processor is typically a commercially available processor, such as various processors available from Intel, AMD, Cyrix, Motorola, and IBM. The general-purpose computer also typically has an operating system, which controls the execution of other computer programs and provides scheduling, debugging, input/output control, accounting, compilation, storage assignment, data management and memory management, and communication control and related services. Example operating systems include, but are not limited to, the UNIX operating system and those available from Microsoft and Apple Computer.
A memory system typically includes a computer readable medium. The medium may be volatile or nonvolatile, writeable or nonwriteable, and/or rewriteable or not rewriteable. A memory system stores data typically in binary form. Such data may define an application program to be executed by the microprocessor, or information stored on the disk to be processed by the application program. The invention is not limited to a particular memory system.
A system such as described herein may be implemented in software or hardware or firmware, or a combination of the three. The various elements of the system, either individually or in combination may be implemented as one or more computer program products in which computer program instructions are stored on a computer readable medium for execution by a computer. Various steps of a process may be performed by a computer executing such computer program instructions. The computer system may be a multiprocessor computer system or may include multiple computers connected over a computer network. The components shown in
Having now described an example embodiment, it should be apparent to those skilled in the art that the foregoing is merely illustrative and not limiting, having been presented by way of example only. Numerous modifications and other embodiments are within the scope of one of ordinary skill in the art and are contemplated as falling within the scope of the invention.
This application claims priority to and the benefit of, under 35 U.S.C. §120, and is a continuation application of application Ser. No. 10/817,217, filed on Apr. 2, 2004, now U.S. Pat. No. 7,403,561, which is a nonprovisional application claiming priority under 35 U.S.C. §119 to provisional Application Ser. No. 60/460,517, filed on Apr. 4, 2003, abandoned; both of which are incorporated herein by reference. This application claims priority to and the benefit of, under 35 U.S.C. §119, provisional Application Ser. No. 60/460,517.