Video communications systems are continually being enhanced to meet requirements such as reduced cost, reduced size, improved quality of service, and increased data rate. In addition to quantitative measures, video can also be judged subjectively. Human response to visual stimulus is not uniform. The eye can perceive some aspects of a picture with more acuity than others.
Many advanced processing techniques can be specified in a video compression standard. Typically, however, the standard does not specify the design of a compliant video encoder. Meeting the communication system's requirements therefore depends on the design of the video encoder.
Limitations and disadvantages of conventional and traditional approaches will become apparent to one of ordinary skill in the art through comparison of such systems with the present invention as set forth in the remainder of the present application with reference to the drawings.
Described herein are system(s) and method(s) for encoding video data, substantially as shown in and/or described in connection with at least one of the figures, as set forth more completely in the claims.
These and other advantages and novel features of the present invention will be more fully understood from the following description.
According to certain aspects of the present invention, a system and method for quantization in a video encoder are presented.
Human eyes have different sensitivities to different frequency components. The eye is generally less sensitive to variation at higher spatial frequencies, so the corresponding portion of the video has a low perceptual value. The perceptual value of a coded video section may also depend on the method of coding.
Referring now to FIG. 2, there is illustrated a block diagram of an exemplary quantizer 200 in accordance with an embodiment of the present invention.
The quantizer 200 can be illustrated with an equation. A frequency-based coefficient X 215 is quantized by a step size Y 219 chosen from a set of step sizes 207. The quantizer output Z 225 is given by Z=(X+R)/Y. The summer 211 and the divider 213 operate in fixed-point arithmetic; accordingly, the result 225 of the division loses precision when the step size 219 is larger than the least significant bit of (X+R) 223. R 221 can be chosen as C 217 multiplied 209 by Y 219, where C 217 is one of a set of rounding factors 205. The rounding factor C 217 is typically between 0 and ½.
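For illustration only, the following is a minimal fixed-point sketch of this equation in C, assuming 32-bit coefficients and a rounding factor carried as the rational c_num/c_den so that R = C·Y stays in integer arithmetic; the names and the sign handling are illustrative assumptions, not taken from the figures.

```c
#include <stdint.h>

/* Sketch of Z = (X + R)/Y with R = C * Y, C between 0 and 1/2
 * (e.g. c_num = 1, c_den = 3). Truncating integer division
 * supplies the rounding behavior described above. */
static int32_t quantize(int32_t x,     /* frequency coefficient X 215 */
                        int32_t y,     /* step size Y 219, y > 0      */
                        int32_t c_num, /* rounding factor numerator   */
                        int32_t c_den) /* rounding factor denominator */
{
    int32_t sign = x < 0 ? -1 : 1;     /* bias the magnitude, keep sign */
    int32_t mag  = sign * x;
    int32_t r    = (c_num * y) / c_den;  /* R 221 = C 217 * Y 219 */
    return sign * ((mag + r) / y);       /* truncating division   */
}
```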
Some factors that are considered in video encoding include subjective quality, bandwidth, and peak signal-to-noise ratio. The subjective quality comprises a perceptual value of video. The bandwidth is a measure of the number of bits required to encode a picture.
In the frequency domain, many coefficients can approach a value of zero. If a coefficient is rounded to zero, the corresponding number of bits required for transmission is reduced. A smaller rounding factor can, on average, increase the number of coefficients that are rounded to zero. Therefore, if the perceptual value of a frequency coefficient is low, the rounding factor can be lowered without a detrimental effect on the subjective quality; however, setting the rounding factor lower for all frequency coefficients can reduce peak signal-to-noise ratio.
A different rounding factor 217 can be used for different coefficients 215. A set of rounding factors 205 can be used to emphasize spatial frequency components with higher perceptual value, so that a spatial frequency with low perceptual value is more likely to be rounded to zero.
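As one hypothetical illustration of such a set, the 4×4 matrix below expresses rounding factors in 1/64ths (21/64 is roughly 1/3) so the bias stays in fixed point. The values are invented for illustration and are not the TABLE 1 or TABLE 2 values given later.

```c
/* Hypothetical rounding factors 205 in 1/64ths. Upper-left
 * (low-frequency) entries are larger; lower-right (high-frequency)
 * entries, where perceptual value is low, are smaller, so those
 * coefficients round to zero more often. */
static const int32_t round_64ths[4][4] = {
    { 21, 21, 16, 10 },
    { 21, 16, 10,  5 },
    { 16, 10,  5,  5 },
    { 10,  5,  5,  5 },
};
```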
At 301, a frequency sample is produced by transforming a video sample into the frequency domain. The video sample is a time-domain representation that may comprise a set of prediction errors produced by a motion estimator or spatial predictor.
At 303, the frequency sample is biased by a rounding factor that is based on a perceptual value of the frequency sample. If the perceptual value is low, the rounding factor can be low; human eyesight generally assigns a low perceptual value to high spatial frequencies. At 305, the biased frequency sample is quantized. Dividing a fixed-point biased frequency sample by a fixed-point step size and keeping only the integer result accomplishes both quantizing and rounding.
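A sketch of steps 301 through 305 for one 4×4 block, assuming the hypothetical round_64ths matrix above; transform_4x4() stands in for the frequency transform of step 301 (one possible form is sketched later).

```c
/* Assumed to be provided elsewhere (see the transform sketch below). */
void transform_4x4(const int32_t in[4][4], int32_t out[4][4]);

/* Steps 301 (transform), 303 (bias), and 305 (quantize) for a block. */
static void quantize_block(const int32_t err[4][4], /* prediction errors */
                           int32_t y,               /* step size Y       */
                           int32_t z[4][4])         /* quantized output  */
{
    int32_t coef[4][4];
    transform_4x4(err, coef);                       /* step 301 */
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) {
            int32_t sign = coef[i][j] < 0 ? -1 : 1;
            int32_t mag  = sign * coef[i][j];
            mag += (round_64ths[i][j] * y) / 64;    /* step 303: add R */
            z[i][j] = sign * (mag / y);             /* step 305: Z     */
        }
}
```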
The invention can be applied to video data encoded with a wide variety of standards, one of which is H.264. An overview of H.264 will now be given. A description of an exemplary quantizer for H.264 will also be given.
H.264 Video Coding Standard
The ITU-T Video Coding Experts Group (VCEG) and the ISO/IEC Moving Picture Experts Group (MPEG) drafted a video coding standard titled ITU-T Recommendation H.264 and ISO/IEC MPEG-4 Advanced Video Coding, which is incorporated herein by reference for all purposes. In the H.264 standard, video is encoded on a macroblock-by-macroblock basis. The generic term “picture” is used throughout this specification to refer to frames, fields, slices, blocks, macroblocks, or portions thereof.
The specific algorithms used for video encoding and compression form a Video Coding Layer (VCL), and the protocol for transmitting the VCL is called the Network Abstraction Layer (NAL). The H.264 standard allows a clean interface between the signal processing technology of the VCL and the transport-oriented mechanisms of the NAL, so no source-based encoding is necessary in networks that may employ multiple standards.
By using the H.264 compression standard, video can be compressed while preserving image quality through a combination of spatial, temporal, and spectral compression techniques. To achieve a given Quality of Service (QoS) within a small data bandwidth, video compression systems exploit the redundancies in video sources to de-correlate spatial, temporal, and spectral sample dependencies. Statistical redundancies that remain embedded in the video stream are then removed by entropy coders, which exploit higher-order correlations. Advanced entropy coders can take advantage of context modeling to adapt to changes in the source and achieve better compaction.
Referring now to FIG. 4, there is illustrated an exemplary picture comprising a luma grid 409, a chroma red grid 411, and a chroma blue grid 413.
Generally, the human eye is more perceptive to the luma characteristics of video than to the chroma red and chroma blue characteristics. Accordingly, there are more pixels in the luma grid 409 than in the chroma red grid 411 and the chroma blue grid 413. The set of rounding factors 205 in FIG. 2 can be chosen to account for this difference in sensitivity.
The luma grid 409 can be divided into 16×16 pixel blocks. For a luma block 415, there is a corresponding 8×8 chroma red block 417 in the chroma red grid 411 and a corresponding 8×8 chroma blue block 419 in the chroma blue grid 413. Blocks 415, 417, and 419 are collectively known as a macroblock, which can be part of a slice group. Currently, 4:2:0 subsampling is the only color format used in the H.264 specification. This means that a macroblock consists of a 16×16 luminance block 415 and two subsampled 8×8 chrominance blocks 417 and 419.
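A minimal C sketch of this 4:2:0 layout, with illustrative field names:

```c
#include <stdint.h>

/* A 4:2:0 macroblock as described above: one 16x16 luma block and
 * two subsampled 8x8 chroma blocks. */
struct macroblock {
    uint8_t luma[16][16];    /* luma block 415        */
    uint8_t chroma_r[8][8];  /* chroma red block 417  */
    uint8_t chroma_b[8][8];  /* chroma blue block 419 */
};
```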
Referring now to FIG. 5, there is illustrated an exemplary weighted temporal prediction in which a partition is predicted from weighted prediction partitions in one or more reference pictures.
The weights can be encoded explicitly, or implied from an identification of the picture containing the prediction partitions. In the latter case, the weights can be implied from the distance between the pictures containing the prediction partitions and the picture containing the partition.
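The following simplified sketch illustrates distance-implied weights: the nearer reference picture receives the larger weight. H.264's actual implicit mode uses scaled integer arithmetic; this floating-point form only conveys the idea, and assumes the current picture lies between the two references.

```c
/* Weights implied from display-order distances; t_ref0 < t_cur < t_ref1.
 * The weights sum to one, with the nearer reference weighted higher. */
static void implied_weights(int t_cur,  /* display order, current picture  */
                            int t_ref0, /* display order, past reference   */
                            int t_ref1, /* display order, future reference */
                            double *w0, double *w1)
{
    double span = (double)(t_ref1 - t_ref0);
    *w1 = (t_cur - t_ref0) / span;  /* current near ref0 -> small w1 */
    *w0 = 1.0 - *w1;
}
```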
Referring now to FIG. 6, there is illustrated an exemplary spatial prediction of a macroblock 601.
In the 4×4 mode, a macroblock 601 is divided into 4×4 partitions. The 4×4 partitions of the macroblock 601 are predicted from a combination of left edge partitions 603, a corner partition 605, top edge partitions 607, and top right partitions 609. The difference between the macroblock 601 and prediction pixels in the partitions 603, 605, 607, and 609 is known as the prediction error. The prediction error is encoded along with an identification of the prediction pixels and prediction mode.
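As one illustration, the sketch below computes the prediction error for a single intra 4×4 mode (vertical), in which each column of the current partition is predicted from the reconstructed pixel directly above it in a top edge partition 607; the function name is illustrative.

```c
#include <stdint.h>

/* Vertical intra 4x4 prediction: subtract the pixel above each column
 * from the current pixels; the difference is the prediction error
 * that gets transformed and coded. */
static void intra4x4_vertical(const uint8_t top[4],    /* pixels above     */
                              const uint8_t cur[4][4], /* current 4x4      */
                              int16_t err[4][4])       /* prediction error */
{
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++)
            err[i][j] = (int16_t)(cur[i][j] - top[j]);
}
```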
The prediction error is transformed to the frequency domain, thereby resulting in a corresponding set of frequency coefficients, which are quantized to produce a set of quantized frequency coefficients, as shown in FIG. 2.
An H.264 encoder can generate three types of coded pictures: Intra-coded (I), Predictive (P), and Bi-directional (B) pictures. An I picture is encoded independently of other pictures based on transformation, quantization, and entropy coding. I pictures are referenced during the encoding of other picture types and are coded with the least amount of compression. P picture coding includes motion compensation with respect to the previous I or P picture. A B picture is an interpolated picture that requires both a past and a future reference picture (I or P). I pictures exploit spatial redundancies, while P and B pictures exploit both spatial and temporal redundancies. Typically, I pictures require more bits than P pictures, and P pictures require more bits than B pictures.
Macroblocks in an I picture are all Intra-coded. Macroblocks in P and B pictures can be either Intra-coded or Inter-coded. For Intra-coded macroblocks, the rounding factor 217 of FIG. 2 can differ from the rounding factor applied to Inter-coded macroblocks.
Using the following rounding factors for each 4×4 frequency coefficient matrix can result in bit savings without loss of peak signal-to-noise ratio or subjective quality.
For Intra-Coded Macroblocks Add (TABLE 1):
For Inter-Coded Macroblocks Add (TABLE 2):
Specific rounding factors are given in TABLE 1 and TABLE 2 as an example. Rounding factors may be applied as a matrix larger or smaller than 4×4. Rounding factors may be adapted based on the perceptual results. High definition pictures and standard definition pictures may require different rounding factors. Adaptation of rounding factors based on content may be open loop or closed loop.
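A sketch of selecting between the two matrices per macroblock type follows. The contents of TABLE 1 and TABLE 2 are not reproduced here; the zero-filled arrays are placeholders to be filled with the tables' values, or with adapted values per the paragraph above.

```c
/* Placeholders for the TABLE 1 and TABLE 2 rounding factors
 * (in 1/64ths, matching the earlier sketches). */
static const int32_t intra_round_64ths[4][4] = { { 0 } }; /* TABLE 1 */
static const int32_t inter_round_64ths[4][4] = { { 0 } }; /* TABLE 2 */

/* Return a pointer to the 4x4 matrix for the given macroblock type. */
static const int32_t (*select_rounding(int is_intra))[4]
{
    return is_intra ? intra_round_64ths : inter_round_64ths;
}
```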
Referring now to FIG. 7, there is illustrated a block diagram of an exemplary video encoder in accordance with an embodiment of the present invention.
The spatial predictor 701 requires only the content of a current picture 719. The spatial predictor 701 receives the current picture 719 and produces spatial predictions 751 corresponding to reference blocks, as described with reference to FIG. 6.
Spatially predicted pictures are intra-coded. Luma macroblocks can be divided into 4×4 blocks or 16×16 blocks. There are 9 prediction modes available for 4×4 blocks and 4 prediction modes available for 16×16 blocks. Chroma macroblocks are 8×8 blocks and have 4 possible prediction modes.
In the temporal predictor 703 (i.e., the motion estimator), the current picture 719 is estimated from reference blocks 749 using a set of motion vectors 747. The temporal predictor 703 receives the current picture 719 and a set of reference blocks 749 that are stored in the frame buffer 713. A temporally encoded macroblock can be divided into 16×16, 16×8, 8×16, 8×8, 8×4, 4×8, or 4×4 blocks. Each block of a macroblock is compared to one or more prediction blocks in another picture (or pictures) that may be temporally located before or after the current picture. Motion vectors describe the spatial displacement between blocks and identify the prediction block(s).
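One common form of this comparison is full-search block matching, sketched below for a 4×4 block: the motion vector minimizing the sum of absolute differences (SAD) is kept. The parameters are illustrative, and bounds checking is omitted; the caller is assumed to keep the search window inside the reference picture.

```c
#include <stdint.h>
#include <limits.h>

/* Full search over a (2*range+1)^2 window; returns the best vector. */
static void motion_search_4x4(const uint8_t *cur, const uint8_t *ref,
                              int stride, int range,
                              int *best_dx, int *best_dy)
{
    int best_sad = INT_MAX;
    for (int dy = -range; dy <= range; dy++)
        for (int dx = -range; dx <= range; dx++) {
            int sad = 0;
            for (int i = 0; i < 4; i++)
                for (int j = 0; j < 4; j++) {
                    int d = cur[i * stride + j]
                          - ref[(i + dy) * stride + (j + dx)];
                    sad += d < 0 ? -d : d;  /* absolute difference */
                }
            if (sad < best_sad) {
                best_sad = sad;
                *best_dx = dx;
                *best_dy = dy;
            }
        }
}
```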
The Mode Decision Engine 705 receives the spatial predictions 751 and temporal predictions 747 and selects a prediction mode according to a rate-distortion optimization. A selected prediction 721 is output.
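Rate-distortion optimization conventionally minimizes a Lagrangian cost J = D + λR over the candidate modes; the sketch below shows this selection with an illustrative candidate structure.

```c
/* One candidate prediction mode's measured cost components. */
struct mode_candidate {
    int distortion; /* e.g. SAD or SSD of the prediction error */
    int rate_bits;  /* estimated bits to code mode + residual  */
};

/* Return the index of the candidate minimizing J = D + lambda * R. */
static int pick_mode(const struct mode_candidate *cand, int n, double lambda)
{
    int best = 0;
    double best_j = cand[0].distortion + lambda * cand[0].rate_bits;
    for (int m = 1; m < n; m++) {
        double j = cand[m].distortion + lambda * cand[m].rate_bits;
        if (j < best_j) { best_j = j; best = m; }
    }
    return best;
}
```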
Once the mode is selected, a corresponding prediction error 725 is the difference 723 between the current picture 719 and the selected prediction 721. The transformer 707 transforms the prediction errors 725 representing blocks into transform values 727. In the case of temporal prediction, the prediction error 725 is transformed along with the motion vectors.
Transformation in H.264 utilizes Adaptive Block-size Transforms (ABT). The block size used for transform coding of the prediction error 725 corresponds to the block size used for prediction. The prediction error is transformed independently of the block mode by means of a low-complexity 4×4 matrix that, together with an appropriate scaling in the quantization stage, approximates the 4×4 Discrete Cosine Transform (DCT). The transform is applied in both horizontal and vertical directions. When a macroblock is encoded as intra 16×16, the DC coefficients of all sixteen 4×4 blocks are further transformed with a 4×4 Hadamard transform.
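The 4×4 core transform itself is small. The sketch below computes W = Cf·X·Cf' using H.264's integer core matrix, with the standard's post-scaling deferred to the quantization stage as the paragraph notes.

```c
#include <stdint.h>

/* H.264's 4x4 integer core transform, an integer approximation of
 * the 4x4 DCT (scaling folded into quantization, not applied here). */
void transform_4x4(const int32_t x[4][4], int32_t w[4][4])
{
    static const int cf[4][4] = {
        { 1,  1,  1,  1 },
        { 2,  1, -1, -2 },
        { 1, -1, -1,  1 },
        { 1, -2,  2, -1 },
    };
    int32_t t[4][4];

    /* rows: t = Cf * x */
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) {
            t[i][j] = 0;
            for (int k = 0; k < 4; k++)
                t[i][j] += cf[i][k] * x[k][j];
        }
    /* columns: w = t * Cf' */
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) {
            w[i][j] = 0;
            for (int k = 0; k < 4; k++)
                w[i][j] += t[i][k] * cf[j][k];
        }
}
```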
The quantizer 708 quantizes the transformed values 727. In H.264, there are 52 quantization levels. In certain embodiments of the present invention, the quantizer 708 can comprise the quantizer 200 described with reference to FIG. 2.
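The 52 levels follow a roughly exponential spacing: the quantization step size approximately doubles for every increase of 6 in the quantization parameter QP (0..51). A sketch, assuming the conventional base step of 0.625 at QP 0; real encoders use integer scaling tables rather than pow().

```c
#include <math.h>

/* Approximate step size for a given QP: doubles every 6 levels. */
static double qstep(int qp)
{
    return 0.625 * pow(2.0, qp / 6.0);
}
```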
For each 4×4 frequency coefficient matrix in an intra-coded macroblock, the quantizer 708 can use the rounding factors in TABLE 1; for each 4×4 frequency coefficient matrix in an inter-coded macroblock, it can use the rounding factors in TABLE 2. In either case, bit savings can be achieved without loss of peak signal-to-noise ratio or subjective quality.
H.264 specifies two types of entropy coding: Context-based Adaptive Binary Arithmetic Coding (CABAC) and Context-based Adaptive Variable-Length Coding (CAVLC). The entropy encoder 711 receives the quantized transform coefficients 729 and produces a video output 730. The quantized transform coefficients 729 are also fed into an inverse quantizer 710 to produce an output 731. The output 731 is sent to the inverse transformer 709 to produce a regenerated error 735. The original prediction 721 and the regenerated error 735 are summed 737 to regenerate reference pictures 739 that are stored in the frame buffer 713.
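A sketch of this reconstruction path for one 4×4 block follows, assuming inverse_quantize_4x4() and inverse_transform_4x4() as counterparts of the quantizer and transform sketched above; keeping the loop inside the encoder ensures the frame buffer 713 holds exactly what a decoder would reconstruct.

```c
#include <stdint.h>

/* Assumed counterparts of the forward stages sketched earlier. */
void inverse_quantize_4x4(const int32_t z[4][4], int32_t y,
                          int32_t coef[4][4]);
void inverse_transform_4x4(const int32_t coef[4][4], int32_t err[4][4]);

/* Regenerate reference pixels: prediction 721 + regenerated error 735. */
static void reconstruct_4x4(const int32_t z[4][4],    /* quantized coeffs */
                            int32_t y,                /* step size        */
                            const uint8_t pred[4][4], /* prediction 721   */
                            uint8_t recon[4][4])      /* reference pixels */
{
    int32_t coef[4][4], err[4][4];
    inverse_quantize_4x4(z, y, coef);   /* inverse quantizer 710   */
    inverse_transform_4x4(coef, err);   /* inverse transformer 709 */
    for (int i = 0; i < 4; i++)
        for (int j = 0; j < 4; j++) {
            int v = pred[i][j] + err[i][j];              /* summer 737 */
            recon[i][j] = v < 0 ? 0 : v > 255 ? 255 : (uint8_t)v;
        }
}
```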
The embodiments described herein may be implemented as a board-level product, as a single chip or application-specific integrated circuit (ASIC), or with varying levels of a video classification circuit integrated with other portions of the system as separate components. An integrated circuit may store a supplemental unit in memory and use arithmetic logic to encode, detect, and format the video output.
The degree of integration of the video classification circuit will primarily be determined by speed and cost considerations. Because of the sophisticated nature of modern processors, it is possible to utilize a commercially available processor, which may be implemented external to an ASIC implementation.
If the processor is available as an ASIC core or logic block, then the commercially available processor can be implemented as part of an ASIC device wherein certain functions can be implemented in firmware as instructions stored in a memory. Alternatively, the functions can be implemented as hardware accelerator units controlled by the processor.
While the present invention has been described with reference to certain embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted without departing from the scope of the present invention.
Additionally, many modifications may be made to adapt a particular situation or material to the teachings of the present invention without departing from its scope. For example, although the invention has been described with a particular emphasis on H.264 encoded video data, the invention can be applied to video data encoded with a wide variety of standards.
Therefore, it is intended that the present invention not be limited to the particular embodiment disclosed, but that the present invention will include all embodiments falling within the scope of the appended claims.