The disclosed embodiments of the present invention relate to video coding, and more particularly, to a video coding method using at least evaluated visual quality determined by one or more visual quality metrics and a related video coding apparatus.
The conventional video coding standards generally adopt a block-based (or coding-unit-based) coding technique to exploit spatial redundancy. For example, the basic approach is to divide the whole source frame into a plurality of blocks (coding units), perform prediction on each block (coding unit), transform the residues of each block (coding unit) using a discrete cosine transform, and perform quantization and entropy encoding. In addition, a reconstructed frame is generated in the coding loop to provide reference pixel data used for coding subsequent blocks (coding units). For certain video coding standards, in-loop filter(s) may be used for enhancing the image quality of the reconstructed frame. For example, a de-blocking filter is included in an H.264 coding loop, and a de-blocking filter and a sample adaptive offset (SAO) filter are included in an HEVC (High Efficiency Video Coding) coding loop.
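By way of illustration only, the following Python sketch (using NumPy and SciPy, which are not part of the described embodiments) shows the transform and quantization stage of such a block-based pipeline; the 8x8 block size, the quantization step, and the helper names are illustrative assumptions rather than requirements of any particular standard.

```python
import numpy as np
from scipy.fft import dctn, idctn

def encode_block(block, q_step):
    """Transform the residues of one block with a 2-D DCT and quantize
    the resulting coefficients (entropy coding is omitted here)."""
    coeffs = dctn(block.astype(np.float64), norm="ortho")
    return np.round(coeffs / q_step)

def reconstruct_block(levels, q_step):
    """Inverse quantization and inverse transform, as performed in the
    coding loop to produce reference pixel data for later blocks."""
    return idctn(levels * q_step, norm="ortho")

# Hypothetical 8x8 residue block.
rng = np.random.default_rng(0)
residues = rng.integers(-16, 16, size=(8, 8))
recon = reconstruct_block(encode_block(residues, q_step=4.0), q_step=4.0)
```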
For many applications (e.g., a video streaming application), the transmission channel in use typically has a limited transmission bandwidth. Under this circumstance, the encoder's output bitrate must be regulated to meet the transmission bandwidth requirement. Thus, rate control may play an important role in video coding. In general, the conventional rate control algorithm performs bit allocation based on pixel-based distortion, such as the spatial activity (image complexity) of a source frame to be encoded. However, the pixel-based distortion merely considers source content complexity, and is sometimes not correlated with the actual visual quality of a reconstructed frame generated from decoding an encoded frame. Specifically, based on experimental results, different processed images, each derived from an original image and having the same distortion (e.g., the same mean square error (MSE)) with respect to the original image, may present different visual quality to a viewer. That is, smaller pixel-based distortion does not necessarily mean better visual quality as perceived by the human visual system. Hence, an encoded frame generated by the conventional distortion-based rate control mechanism does not guarantee that a reconstructed frame generated from decoding the encoded frame has the best possible visual quality.
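By way of illustration only, a pixel-based distortion (MSE) and a simple spatial-activity measure of the kind relied upon by conventional rate control might be computed as in the following Python sketch; the particular activity measure is an illustrative assumption, as different encoders use different complexity estimates.

```python
import numpy as np

def mse(original, reconstructed):
    """Pixel-based distortion: mean square error between source pixels
    and reconstructed pixels."""
    diff = original.astype(np.float64) - reconstructed.astype(np.float64)
    return float(np.mean(diff * diff))

def spatial_activity(block):
    """Crude image-complexity estimate: mean absolute horizontal and
    vertical gradients; conventional rate control tends to allocate
    more bits to units with higher activity."""
    b = block.astype(np.float64)
    return float(np.mean(np.abs(np.diff(b, axis=0))) +
                 np.mean(np.abs(np.diff(b, axis=1))))
```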
In accordance with exemplary embodiments of the present invention, a video coding method using at least evaluated visual quality obtained by one or more visual quality metrics and a related video coding apparatus are proposed.
According to a first aspect of the present invention, an exemplary video coding method is disclosed. The exemplary video coding method includes: utilizing a visual quality evaluation module for evaluating visual quality based on data involved in a coding loop; and referring to at least the evaluated visual quality for deciding a target bit allocation of a rate-controlled unit in video coding.
According to a second aspect of the present invention, an exemplary video coding apparatus is disclosed. The exemplary video coding apparatus includes a visual quality evaluation module, a rate controller and a coding circuit. The visual quality evaluation module is arranged to evaluate visual quality based on data involved in a coding loop. The rate controller is arranged to refer to at least the evaluated visual quality for deciding a target bit allocation of a rate-controlled unit. The coding circuit has the coding loop included therein, and is arranged to encode the rate-controlled unit according to the target bit allocation.
These and other objectives of the present invention will no doubt become obvious to those of ordinary skill in the art after reading the following detailed description of the preferred embodiment that is illustrated in the various figures and drawings.
Certain terms are used throughout the description and following claims to refer to particular components. As one skilled in the art will appreciate, manufacturers may refer to a component by different names. This document does not intend to distinguish between components that differ in name but not function. In the following description and in the claims, the terms “include” and “comprise” are used in an open-ended fashion, and thus should be interpreted to mean “include, but not limited to . . . ”. Also, the term “couple” is intended to mean either an indirect or direct electrical connection. Accordingly, if one device is coupled to another device, that connection may be through a direct electrical connection, or through an indirect electrical connection via other devices and connections.
The concept of the present invention is to incorporate characteristics of the human visual system into a video coding procedure to improve the video compression efficiency or visual quality. More specifically, visual quality evaluation is involved in the video coding procedure such that a reconstructed frame generated from decoding an encoded frame is capable of having enhanced visual quality. Further details of the proposed visual quality based video coding design are described below.
As shown in
One key feature of the present invention is using the visual quality evaluation module 104 to evaluate visual quality based on data involved in the coding loop of the coding circuit 102. In one embodiment, the data involved in the coding loop and processed by the visual quality evaluation module 104 may be raw data of the source frame IMGIN. In another embodiment, the data involved in the coding loop and processed by the visual quality evaluation module 104 may be processed data derived from raw data of the source frame IMGIN. For example, the processed data used to evaluate the visual quality may be transformed coefficients generated by the transform module 113, quantized coefficients generated by the quantization module 114, reconstructed pixel data before the optional de-blocking filter 119, reconstructed pixel data after the optional de-blocking filter 119, reconstructed pixel data before the optional SAO filter 120, reconstructed pixel data after the optional SAO filter 120, reconstructed pixel data stored in the frame buffer 121, motion-compensated pixel data generated by the motion compensation unit 125, or intra-predicted pixel data generated by the intra prediction module 123.
The visual quality evaluation performed by the visual quality evaluation module 104 may calculate one or more visual quality metrics to decide one evaluated visual quality. For example, the evaluated visual quality is derived from checking at least one image characteristic that affects human visual perception, and the at least one image characteristic may include sharpness, noise, blur, edge, dynamic range, blocking artifact, mean intensity (e.g., brightness/luminance), color temperature, scene composition (e.g., landscape, portrait, night scene, etc.), human face, animal presence, image content that attracts more or less interest (e.g., region of interest (ROI)), spatial masking (i.e., the human visual system's insensitivity to more complex texture), temporal masking (i.e., the human visual system's insensitivity to high-speed moving objects), or frequency masking (i.e., the human visual system's insensitivity to higher pixel value variation). By way of example, the noise metric may be obtained by calculating an ISO 15739 visual noise value VN, where VN=σL*+0.852·σu*+0.323·σv*. Alternatively, the noise metric may be obtained by calculating another visual noise metric, such as an S-CIELAB metric, a vSNR (visual signal-to-noise ratio) metric, or a Keelan NPS (noise power spectrum) based metric. The sharpness/blur metric may be obtained by measuring edge widths. The edge metric may be a ringing metric obtained by measuring ripples or oscillations around edges.
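By way of illustration only, the visual-noise combination quoted above and a crude edge/ringing measure could be realized as in the following Python sketch; the ringing measure is an illustrative assumption and is not prescribed by ISO 15739 or by the embodiments.

```python
import numpy as np

def iso15739_visual_noise(sigma_L, sigma_u, sigma_v):
    """Weighted combination of channel noise standard deviations
    (sigma_L*, sigma_u*, sigma_v*) per the formula quoted above."""
    return sigma_L + 0.852 * sigma_u + 0.323 * sigma_v

def ringing_metric(row):
    """Illustrative ringing measure: peak-to-peak ripple on either side
    of the strongest edge in a 1-D pixel row."""
    row = np.asarray(row, dtype=np.float64)
    edge = int(np.argmax(np.abs(np.diff(row))))
    ripple = lambda s: float(np.ptp(s)) if s.size else 0.0
    return ripple(row[:edge]) + ripple(row[edge + 2:])
```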
In one exemplary design, the visual quality evaluation module 104 calculates a single visual quality metric (e.g., one of the aforementioned visual quality metrics) according to the data involved in the coding loop of the coding circuit 102, and determines each evaluated visual quality solely based on the single visual quality metric. In other words, one evaluated visual quality may be obtained by referring to a single visual quality metric only.
In another exemplary design, the visual quality evaluation module 104 calculates a plurality of distinct visual quality metrics (e.g., many of the aforementioned visual quality metrics) according to the data involved in the coding loop of the coding circuit 102, and determines each evaluated visual quality based on the distinct visual quality metrics. In other words, one evaluated visual quality may be obtained by referring to a composition of multiple visual quality metrics. For example, the visual quality evaluation module 104 may be configured to assign a plurality of pre-defined weighting factors to multiple visual quality metrics (e.g., a noise metric and a sharpness metric), and decide one evaluated visual quality by a weighted sum derived from the weighting factors and the visual quality metrics. For another example, the visual quality evaluation module 104 may employ a Minkowski equation to determine a plurality of non-linear weighting factors for the distinct visual quality metrics, respectively; and then determine one evaluated visual quality by combining the distinct visual quality metrics according to the respective non-linear weighting factors. Specifically, based on the Minkowski equation, the evaluated visual quality ΔQm is calculated using the following equation:
ΔQm=(Σi(ΔQi)^nm)^(1/nm), where nm=1+2·tanh(ΔQmax/16.9); ΔQi is derived from each of the distinct visual quality metrics, ΔQmax is the largest of the ΔQi values, and 16.9 is a single universal parameter based on psychophysical experiments. For yet another example, the visual quality evaluation module 104 may employ a training-based manner (e.g., a support vector machine (SVM)) to determine a plurality of trained weighting factors for the distinct visual quality metrics, respectively; and then determine one evaluated visual quality by combining the distinct visual quality metrics according to the respective trained weighting factors. Specifically, supervised learning models with associated learning algorithms are employed to analyze the distinct visual quality metrics and recognize patterns, and accordingly determine the trained weighting factors.
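By way of illustration only, the weighted-sum and Minkowski combinations described above might be sketched in Python as follows, assuming the exponent form given above; the example metric values and weights are hypothetical.

```python
import numpy as np

def weighted_sum_quality(metrics, weights):
    """Evaluated visual quality as a weighted sum of individual metrics
    using pre-defined weighting factors."""
    return float(np.dot(metrics, weights))

def minkowski_quality(delta_q, universal_parameter=16.9):
    """Minkowski combination of per-metric quality differences delta_Q_i,
    with exponent n_m = 1 + 2*tanh(delta_Q_max / 16.9)."""
    dq = np.abs(np.asarray(delta_q, dtype=np.float64))
    n_m = 1.0 + 2.0 * np.tanh(dq.max() / universal_parameter)
    return float(np.sum(dq ** n_m) ** (1.0 / n_m))

# Hypothetical noise and sharpness metric values.
vq_weighted = weighted_sum_quality([0.7, 0.4], [0.6, 0.4])
vq_minkowski = minkowski_quality([0.7, 0.4])
```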
After the evaluated visual quality is generated by the visual quality evaluation module 104, the evaluated visual quality is referenced by the rate controller 106 to perform rate control for regulating the bitrate of the bitstream BS generated from the video coding apparatus 100. As the evaluated visual quality is involved in making the bit allocation decision for rate control, the source frame IMGIN is encoded based on characteristics of the human visual system to thereby allow a decoded/reconstructed frame to have enhanced visual quality. By way of example, but not limitation, the bitrate of the bitstream BS may be regulated to achieve constant visual quality, thus allowing smooth video playback at a receiving end of the bitstream BS.
More specifically, the rate controller 106 is arranged for referring to at least the evaluated visual quality to decide a target bit allocation of a rate-controlled unit, where the rate-controlled unit may be one coding unit of the source frame IMGIN or the whole source frame IMGIN. The coding circuit 102 is arranged to encode the rate-controlled unit according to the target bit allocation. Specifically, after the target bit allocation of the rate-controlled unit is determined by the rate controller 106, the size of the quantization step for quantizing transformed coefficients generated from the transform module 113 is properly selected in response to the target bit allocation.
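By way of illustration only, the selection of a quantization step from a target bit allocation could be sketched as follows, assuming a simple first-order rate model bits ≈ gain·complexity/q_step; the model and its parameters are illustrative assumptions rather than the rule fixed by the embodiments.

```python
def quant_step_for_budget(target_bits, complexity, model_gain=1.0):
    """Pick a quantization step for the rate-controlled unit from its
    target bit allocation under the assumed first-order rate model."""
    q_step = model_gain * complexity / max(float(target_bits), 1.0)
    return max(q_step, 1.0)

# A larger bit budget maps to a smaller (finer) quantization step.
assert quant_step_for_budget(8000, complexity=16000) < \
       quant_step_for_budget(2000, complexity=16000)
```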
Please refer to
In contrast to the distortion-based rate control, the visual quality based rate control proposed by the present invention uses the evaluated visual quality VQ (C or R′) derived from data involved in the coding loop of the coding circuit 102 to decide a target bit allocation of a rate-controlled unit (e.g., one coding unit or one frame), where the evaluated visual quality VQ (C or R′) may be obtained by a single visual quality metric or a composition of multiple visual quality metrics, R′ represents processed data derived from raw data of the source frame IMGIN (particularly, processed data derived from processing pixel data of the rate-controlled unit in the source frame IMGIN), and C represents raw data of the source frame IMGIN (particularly, pixel data of the rate-controlled unit in the source frame IMGIN). In one exemplary design, the rate controller 106 shown in
In an alternative design, both of the evaluated visual quality (e.g., a single visual quality metric or a composition of multiple visual quality metrics) and the pixel-based distortion (e.g., spatial activity or image complexity) may be involved in deciding a target bit allocation for a rate-controlled unit of the source frame IMGIN, where the rate-controlled unit may be one coding unit or one frame. Please refer to
Concerning the bit allocation unit 402, it decides the target bit allocation BAVQ,D according to the evaluated visual quality and the calculated pixel-based distortion for the rate-controlled unit. For example, the bit allocation unit 402 refers to the evaluated visual quality to find a first bit allocation (e.g., BAVQ in
For another example, the bit allocation unit 402 performs a coarse decision according to one of the evaluated visual quality and the pixel-based distortion to select M coarse bit allocations (i.e., rough coded bit lengths) from all possible N bit allocations, and performs a fine decision according to the other of the evaluated visual quality and the pixel-based distortion to determine P fine bit allocations (i.e., accurate coded bit lengths) from the coarse bit allocations (N>M>P≥1), wherein the target bit allocation BAVQ,D is derived from the P fine bit allocations. In a case where P=1, the target bit allocation BAVQ,D is directly determined by the fine decision based on the pixel-based distortion if the coarse decision is made based on the evaluated visual quality; or the target bit allocation BAVQ,D is directly determined by the fine decision based on the evaluated visual quality if the coarse decision is made based on the pixel-based distortion.
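By way of illustration only, the following Python sketch shows two hypothetical ways of combining the two criteria: a weighted average of a visual-quality-based allocation and a distortion-based allocation, and the coarse/fine two-stage selection described above. The weights, candidate allocations, and cost functions are illustrative assumptions.

```python
def fused_bit_allocation(ba_vq, ba_d, w_vq=0.5, w_d=0.5):
    """One plausible fusion of a visual-quality-based allocation BA_VQ
    and a distortion-based allocation BA_D into BA_VQ,D."""
    return w_vq * ba_vq + w_d * ba_d

def two_stage_allocation(candidates, coarse_cost, fine_cost, m=4, p=1):
    """Coarse decision keeps the m candidates scoring best under one
    criterion (e.g., evaluated visual quality); fine decision keeps the
    p best of those under the other criterion (N > m > p >= 1)."""
    coarse = sorted(candidates, key=coarse_cost)[:m]
    fine = sorted(coarse, key=fine_cost)[:p]
    return fine[0] if p == 1 else fine

# Hypothetical candidate bit allocations and cost functions.
candidates = [2000, 4000, 6000, 8000, 10000, 12000]
target = two_stage_allocation(candidates,
                              coarse_cost=lambda b: abs(b - 7000),
                              fine_cost=lambda b: abs(b - 6500))
```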
Step 500: Start.
Step 502: Evaluate visual quality based on data involved in a coding loop, wherein the data involved in the coding loop may be raw data of a source frame or processed data derived from the raw data of the source frame, and each evaluated visual quality may be obtained from a single visual quality metric or a composition of multiple visual quality metrics.
Step 504: Check if pixel-based distortion should be used for bit allocation decision. If yes, go to step 506; otherwise, go to step 510.
Step 506: Calculate the pixel-based distortion based on at least a portion of raw data of the source frame.
Step 508: Refer to both of the evaluated visual quality and the calculated pixel-based distortion for deciding a target bit allocation of a rate-controlled unit. For example, the rate-controlled unit may be one coding unit or one frame. Go to step 512.
Step 510: Refer to the evaluated visual quality for deciding a target bit allocation of a rate-controlled unit. For example, the rate-controlled unit may be one coding unit or one frame.
Step 512: End.
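By way of illustration only, the flow of steps 502 through 510 may be summarized in the following Python sketch; all helper functions and the allocation rule are placeholders standing in for the metrics and decisions described in the preceding paragraphs.

```python
def evaluate_visual_quality(coding_loop_data):
    # Placeholder for one visual quality metric or a composition of metrics.
    return sum(coding_loop_data) / len(coding_loop_data)

def pixel_based_distortion(raw_pixels):
    # Placeholder spatial-activity style measure.
    return sum(abs(a - b) for a, b in zip(raw_pixels, raw_pixels[1:]))

def allocate_bits(vq, distortion=None, budget=1000.0):
    # Illustrative allocation rule, not the rule claimed by the embodiments.
    score = vq if distortion is None else 0.5 * vq + 0.5 * distortion
    return budget * score / (1.0 + score)

def decide_target_bits(coding_loop_data, raw_pixels, use_distortion):
    """Steps 502-510: evaluate visual quality, optionally add pixel-based
    distortion, then decide the target bit allocation."""
    vq = evaluate_visual_quality(coding_loop_data)      # step 502
    if use_distortion:                                  # step 504
        d = pixel_based_distortion(raw_pixels)          # step 506
        return allocate_bits(vq, distortion=d)          # step 508
    return allocate_bits(vq)                            # step 510
```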
As a person skilled in the art can readily understand details of each step in
Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.
This application claims the benefit of U.S. provisional application No. 61/776,053, filed on Mar. 11, 2013 and incorporated herein by reference.