This application claims the benefit, under 35 U.S.C. §365 of International Application PCT/EP2007/051500, filed Feb. 16, 2007, which was published in accordance with PCT Article 21(2) on Sep. 7, 2007 in English and which claims the benefit of European patent application No. 06300184.6, filed Mar. 2, 2006.
The invention relates to a method and to an apparatus for determining in picture signal encoding the bit allocation for groups of pixel blocks, whereby these groups of pixel blocks belong to different attention importance levels of corresponding arbitrary-shaped areas of pixel blocks in a picture.
Optimised bit allocation is an important issue in video compression to increase the coding efficiency, i.e. to make optimum use of the available data rate. Achieving the best perceptual quality with respect to the limited bit rate is the target of optimised bit allocation. In view of the human visual system, a human usually pays more attention to some part of a picture rather than to other parts of that picture. The ‘attention area’, which is the perceptual sensitive area in a picture, tends to catch more human attention, as is described e.g. in L. Itti, Ch. Koch, E. Niebur, “A Model of Saliency-Based Visual Attention for Rapid Scene Analysis”, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 20, No. 11, November 1998. Therefore optimum bit allocation based on different perceptual importance of different attention picture areas is a research topic in video compression technology.
E.g. the macroblock-layer bit rate control in MPEG2 and MPEG4 selects the values of the quantisation (following transformation) step sizes Qstep for all the macroblocks in a current frame so that the sum of the bits used for these macroblocks is close to the frame target bit rate B. MPEG2 and MPEG4 support 31 different values of Qstep. In MPEG4 AVC a total of 52 Qstep values are supported by the standard and these are indexed by a quantisation parameter value QP. Ch. W. Tang, Ch. H. Chen, Y. H. Yu, Ch. J. Tsai, “A novel visual distortion sensitivity analysis for video encoder bit allocation”, ICIP 2004, Volume 5, 24-27 Oct. 2004, pp. 3225-3228, propose a description of the visual distortion sensitivity, namely the capability of human vision to detect distortion in moving scenes, whereas the bit allocation scheme itself is very simple, using the formula:
QP_N = QP + (1 − VDS_{i,j}/255) × Δq    (1)
where QP is the initial quantisation parameter assigned by the rate control, Δq is a parameter limiting the modification of QP, VDS_{i,j} is the visual distortion sensitivity value of the (i,j)-th macroblock (denoted MB) in a picture, and QP_N is the refined quantisation parameter. This bit allocation scheme is rough and simple and does not provide accurate bit rate control or control of the distortion distribution.
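The following is a minimal sketch of the per-macroblock QP refinement of equation (1). The clipping to the H.264/AVC QP range [0, 51] and the function and variable names are illustrative assumptions; only the formula itself comes from the cited reference.

```python
# Minimal sketch of the QP refinement of equation (1). The clipping to the
# AVC QP range [0, 51] and all names are illustrative assumptions.

def refine_qp(qp_initial: int, vds: float, delta_q: float) -> int:
    """Refine the rate-control QP of one macroblock from its visual
    distortion sensitivity value vds (0..255)."""
    qp_new = qp_initial + (1.0 - vds / 255.0) * delta_q
    return int(round(min(max(qp_new, 0), 51)))  # keep inside the AVC QP range

# Example: a highly sensitive MB (vds = 230) keeps roughly the initial QP,
# an insensitive MB (vds = 20) is quantised more coarsely.
print(refine_qp(28, 230, 6))   # -> 29
print(refine_qp(28, 20, 6))    # -> 34
```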
In S. Sengupta, S. K. Gupta, J. M. Hannah, “Perceptually motivated bit-allocation for H.264 encoded video sequences”, ICIP 2003, a picture is divided into a foreground and a background area, and then a target distortion D_tar is pre-decided for the foreground quality, without any guarantee for the background quality. The quality of the background varies as a function of the distance from the foreground. The rate of degradation is controlled by a visual sensitivity factor
S = e^(−d/a)    (2)
where a is a constant controlling the rate of fall of the background degradation and d is the distance of a background pixel from the nearest foreground pixel. This scheme aims at a distortion distribution consistent with the human visual system, but its performance depends on the accuracy of the model used, and it does not explain how to obtain D_tar and how to keep the quality degradation according to the given equations under a pre-determined bit budget.
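A small sketch of the background sensitivity factor of equation (2), assuming a binary foreground mask is available; the use of scipy's Euclidean distance transform and all names are illustrative choices, not part of the cited paper.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def sensitivity_map(foreground: np.ndarray, a: float) -> np.ndarray:
    """foreground: boolean mask, True for foreground pixels.
    Returns S = exp(-d/a) per pixel, where d is the distance to the
    nearest foreground pixel (d = 0, hence S = 1, inside the foreground)."""
    d = distance_transform_edt(~foreground)
    return np.exp(-d / a)

mask = np.zeros((64, 64), dtype=bool)
mask[24:40, 24:40] = True            # a square "foreground" object
S = sensitivity_map(mask, a=8.0)
print(S[32, 32], S[32, 63])          # 1.0 in the foreground, small far away
```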
In order to solve the problem of optimised bit allocation under a bit budget constraint, typical bit allocation algorithms are based on a Rate-Distortion optimisation with Lagrangian multiplier processing, which can be described as a constrained optimisation problem of minimising the total distortion D under the constraint that the rate R stays below R_T, using an expression like:
min Σ_i D_i   subject to   Σ_i R_i ≤ R_T    (3)
where D_i and R_i are the distortion and the bit rate of each unit i (MB or attention area).
Assuming that the rate and distortion of each MB are only dependent on the choice of the encoding parameters as described above, the optimisation of equation (3) can be simplified to minimise the cost of each MB separately:
min J_i :   J_i = D_i + λ × R_i    (4)
It has been proposed to use an optimised bit allocation scheme based on modifying the Lagrange multiplier in the coding mode decision of each MB according to formula:
λ′ = α × λ    (5)
where α is a scaling factor for modifying the Lagrange multiplier according to different levels of perceptual importance.
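The following is an illustrative sketch of a mode decision with the attention-scaled Lagrange multiplier of equation (5); the candidate-mode interface is a hypothetical stand-in, and only the cost J = D + λ′ × R with λ′ = α × λ follows the text above.

```python
def best_mode(candidates, lam: float, alpha: float):
    """candidates: iterable of (mode, distortion, rate) triples for one MB.
    alpha < 1 in highly attended areas spends more bits there,
    alpha > 1 saves bits in less important areas."""
    lam_scaled = alpha * lam                      # equation (5)
    return min(candidates, key=lambda c: c[1] + lam_scaled * c[2])

modes = [("SKIP", 900.0, 2), ("Inter16x16", 400.0, 40), ("Intra4x4", 250.0, 110)]
print(best_mode(modes, lam=5.0, alpha=0.4))       # favours low distortion
print(best_mode(modes, lam=5.0, alpha=2.0))       # favours low rate
```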
It has also been proposed to add a different weighting factor W_i to the distortion of different attention areas in order to perform optimised bit allocation:
min Σ_i W_i × D_i   subject to   Σ_i R_i ≤ R_T    (6)
whereby the rate and distortion model can be deduced based on a ρ-domain bit rate control model like this one:
R_i = A × ρ_i + B
D_i = 384 × σ_i² × e^(−θ × ρ_i)    (7)
Equations (7) can be put into equation (6) to get the optimised solution for bit allocation.
It is also known to use a Rate-Distortion model based on Gaussian distribution to get an optimum result for bit allocation, as shown in formula (8):
D_i = σ_i² × e^(−γ × R_i)    (8)
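A sketch of the classical optimum bit allocation under the Gaussian rate-distortion model of formula (8), minimising the total distortion for a total budget R_T (without weighting and without non-negativity handling); the closed form below is the textbook solution under these assumptions, not the inventive scheme, and all names are illustrative.

```python
import numpy as np

def allocate_bits(sigma2: np.ndarray, gamma: float, r_total: float) -> np.ndarray:
    """Equalises the per-unit distortions sigma_i^2 * exp(-gamma * R_i)."""
    n = len(sigma2)
    log_s = np.log(sigma2)
    return r_total / n + (log_s - log_s.mean()) / gamma

sigma2 = np.array([4.0, 16.0, 64.0])          # unit variances
r = allocate_bits(sigma2, gamma=0.2, r_total=300.0)
print(r, r.sum())                              # more bits go to higher variance
print(sigma2 * np.exp(-0.2 * r))               # equal resulting distortions
```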
However, known methods cannot significantly influence the distortion distribution so as to make it more consistent with the properties of the human visual system. For example, using a different weighting factor W_i for different attention areas according to their different importance to human eyes does not really influence the distortion distribution accordingly. In addition, known methods are not accurate enough (especially on MB level) to achieve a better performance. Further, known methods have problems in keeping a good trade-off between the distortion distribution and the bit allocation constraint.
A problem to be solved by the invention is to provide an improved distortion-driven bit allocation for bit rate control in video signal encoding. This problem is solved by the method disclosed in claim 1. An apparatus that utilises this method is disclosed in claim 2.
The inventive distortion-driven bit allocation scheme allocates the coding/decoding error distortion to picture areas consistently with the human visual system, and satisfies the constraint of bit rate as well. “Distortion-driven” means a bit allocation achieving an optimised coding/decoding error distortion distribution under bit rate constraint, whereby the primary target is not to minimise the total distortion. The invention makes the distortion distribution inside a picture more consistent with the human visual properties, and thus achieves a much better subjective quality with limited bit rate, or allows decreasing the bit rate while keeping a consistent subjective quality.
The invention uses a distortion/bit rate/ρ versus quantisation parameter histogram analysis. Based on such histogram analysis, the relationships between quantisation parameter, rate, distortion and percentage of non-zero coefficients are determined. The distortion allocation scheme is based on the assumption of a Gaussian distribution of the residual signals (following mode decision and intra or inter prediction, before transform and quantisation). Then a ρ-domain (rho-domain) bit rate control is used for calculating the bit allocation inside each group of macroblocks (GOB).
In principle, the inventive method is suited for determining in picture signal encoding, which includes transform and quantisation of transform coefficient blocks, the bit allocation for groups of pixel blocks in a picture, whereby said picture is divided into a regular grid of pixel blocks and said groups of pixel blocks belong to different attention importance levels of corresponding arbitrary-shaped areas of pixel blocks in said picture, and wherein the encoding/decoding distortion of each group of pixel blocks is to be controlled such that it is basically proportional to the attention importance level for that group of pixel blocks, and wherein a bit budget is given for the encoded picture and said quantisation is controlled by a quantisation parameter, said method including the steps:
In principle the inventive apparatus is suited for determining in picture signal encoding, which includes transform and quantisation of transform coefficient blocks, the bit allocation for groups of pixel blocks in a picture, whereby said picture is divided into a regular grid of pixel blocks and said groups of pixel blocks belong to different attention importance levels of corresponding arbitrary-shaped areas of pixel blocks in said picture, and wherein the encoding/decoding distortion of each group of pixel blocks is to be controlled such that it is basically proportional to the attention importance level for that group of pixel blocks, and wherein a bit budget is given for the encoded picture and said quantisation is controlled by a quantisation parameter, said apparatus including:
Advantageous additional embodiments of the invention are disclosed in the respective dependent claims.
Exemplary embodiments of the invention are described with reference to the accompanying drawings, which show in:
As mentioned above, humans pay more attention to some parts of a picture than to other parts of that picture. The parts receiving more human attention are called the ‘attention area’. In the above-mentioned article by L. Itti et al. and in EP application 05300974.2, a set of grey-level feature maps is extracted from the visual input of a given image. These features include intensity, colour, orientation, and so on. Then, in each feature map, the most salient areas are picked out. Finally, all feature maps are integrated, in a purely bottom-up manner, into a master ‘saliency map’ which is regarded as the attention information of a picture. Therefrom an attention mask can be obtained for each picture, which mask describes the different attention importance levels of corresponding areas of that picture.
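The following is a heavily simplified, hedged illustration of the bottom-up combination of feature maps into a master saliency map and its quantisation into an attention mask; real saliency models such as the one of Itti et al. use multi-scale centre-surround features, whereas here each map is merely normalised and averaged for illustration, and all names are assumptions.

```python
import numpy as np

def master_saliency(feature_maps: list[np.ndarray]) -> np.ndarray:
    """Normalise each grey-level feature map to [0, 1] and average them."""
    normed = []
    for fm in feature_maps:
        fm = fm.astype(np.float64)
        rng = fm.max() - fm.min()
        normed.append((fm - fm.min()) / rng if rng > 0 else np.zeros_like(fm))
    return np.mean(normed, axis=0)

def attention_mask(saliency: np.ndarray, levels: int) -> np.ndarray:
    """Quantise the saliency map into attention importance levels 1..levels."""
    return np.digitize(saliency, np.linspace(0, 1, levels + 1)[1:-1]) + 1

rng = np.random.default_rng(0)
intensity, colour, orientation = (rng.random((36, 44)) for _ in range(3))
mask = attention_mask(master_saliency([intensity, colour, orientation]), levels=4)
print(np.unique(mask))          # the quantised attention levels present in the mask
```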
How to calculate the level sets and the corresponding visual importance values M_i is not described in detail here. The bit allocation scheme can be based on the assumption that the attention mask has been obtained with M_i (i = 1 ... N), where N is the number of attention levels.
Basically, the distortion-driven bit allocation problem can be described as follows:
According to the attention mask, the picture can be divided into N GOBs (groups of macroblocks) according to the different attention or mask values M_i, each GOB_i containing K_i macroblocks (denoted MB), with
Σ_{i=1...N} K_i = N_MB
where N_MB is the total number of MBs in one picture. With a frame bit budget R_f, which can be derived from any frame-level bit rate control, the inventive processing allocates the distortion
where
In
Step 1) Pre-Analysis and Look-Up Table Initialisation PREALUTI
For the currently processed picture, for each GOB_i (i = 1 ... N), based on the histogram analysis algorithms, three look-up tables or lists or functions D_GOB_i[QP_n], R_GOB_i[QP_n] and ρ_GOB_i[QP_n] are established.
Step 2) Distortion Allocation DISALL
Step 2.1) Estimate Distortion Distribution Under Constant Bit Rate ESTDDISTR
Assume each GOB_i contains K_i MBs, with
Σ_{i=1...N} K_i = N_MB
where N_MB is the total number of MBs in one picture. Then the allocated bit rate for each GOB_i in case of Constant Bit Rate (CBR) output (i.e. without regarding the different distortion levels in the different attention level areas) should be
R_i^CBR = R_f × K_i / N_MB
Therefore the corresponding initial distortion value of each GOB_i can be obtained from the look-up tables established in Step 1).
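A minimal sketch of Step 2.1), assuming the per-GOB look-up tables of Step 1) are available as parallel lists indexed by QP; the inverse lookup used to turn the CBR rate share into an initial distortion is an illustrative implementation, and only the proportional split of the frame budget follows directly from the text.

```python
def cbr_rate_share(r_frame: float, k: list[int]) -> list[float]:
    """Split the frame budget R_f proportionally to the MB count of each GOB."""
    n_mb = sum(k)
    return [r_frame * k_i / n_mb for k_i in k]

def initial_distortion(r_target: float, r_table: list[float], d_table: list[float]) -> float:
    """Pick the QP whose table rate is closest to the CBR share and return
    the distortion stored for that QP (both tables indexed by the same QP_n)."""
    qp = min(range(len(r_table)), key=lambda n: abs(r_table[n] - r_target))
    return d_table[qp]

k = [120, 600, 880]                              # K_i per GOB, N_MB = 1600
shares = cbr_rate_share(48000.0, k)
print(shares)                                    # [3600.0, 18000.0, 26400.0]
```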
Step 2.2) Calculate Distortion Unit CALLDU
Based on the assumption that the residual signal amplitudes (i.e. the transformed prediction error signals) have a Gaussian distribution and a corresponding rate distortion model (cf. W. Lai, X. D. Gu, R. H. Wang, W. Y. Ma, H. J. Zhang, “A content-based bit allocation model for video streaming”, ICME '04, 2004 IEEE International Conference on Multimedia and Expo, Volume 2, 27-30 Jun. 2004, pp. 1315-1318), the following formula can be used to obtain the distortion unit D_u:
Step 2.3) Get the Refined Distortion Distribution GREFDDIRTR
Get the allocated distortion
Since the distortion or the bit rate under each quantisation parameter is discrete, one cannot achieve the exact distortion value given by formula
Concerning the feedback from Step 3), where the estimated bit rate
Step 3) Bit Rate Allocation RALL
Based on the look-up table D_GOB_i[QP_n]
Step 4) Rho-Domain Bit Allocation Inside Each GOBi RDBALL
Step 4.1) Quantisation Parameter Determination QPDET
Similar to the ρ-domain rate control model in Z. He and S. K. Mitra, “A linear source model and a unified rate control algorithm for DCT video coding”, IEEE Transactions on Circuits and Systems for Video Technology, Vol. 12, No. 11, November 2002, pp. 970-982, the number of zeros to be produced by quantising the remaining MBs inside GOB_i should be:
where N_i^c is the number of the already coded MBs of GOB_i in the current picture, R_i^c is the number of bits used to encode these N_i^c MBs, and K_i is the total number of MBs of GOB_i. Looking up table ρ_GOB_i
Step 4.2) Update Parameters UPDPAR
Updating of the parameters θ and ρ_GOB_i
where ρ_i^c is the number of quantised zero coefficients in these coded N_i^c MBs. Thereafter the next macroblock is encoded.
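A hedged sketch of the per-MB loop of Step 4) in the spirit of the He/Mitra ρ-domain model (coded bits roughly linear in the number of non-zero coefficients); the patent's exact update formulas are not reproduced here, and θ, the table layout, the toy encoder and all names are illustrative assumptions.

```python
def rho_domain_gob(r_gob: float, rho_table: list[float], encode_mb, k_i: int,
                   theta: float = 1.0) -> float:
    """rho_table[qp]: expected fraction of non-zero coefficients at QP qp.
    encode_mb(qp) must return (bits_used, nonzero_coeffs, total_coeffs).
    Returns the total number of bits spent on the GOB."""
    bits_used, nz_coded = 0.0, 0
    for m in range(k_i):
        remaining_bits = max(r_gob - bits_used, 0.0)
        # target non-zero fraction for the remaining MBs, assuming bits ~ theta
        # per non-zero coefficient and 384 transform coefficients per 4:2:0 MB
        rho_target = remaining_bits / (theta * 384 * (k_i - m))
        qp = min(range(len(rho_table)), key=lambda q: abs(rho_table[q] - rho_target))
        bits, nz, _coeffs = encode_mb(qp)
        bits_used += bits
        nz_coded += nz
        if nz_coded > 0:                 # refine theta from the MBs coded so far
            theta = bits_used / nz_coded
    return bits_used

def fake_encode(qp):
    nz = max(60 - qp, 1)                 # toy behaviour: fewer non-zeros at high QP
    return 1.2 * nz, nz, 384

rho = [min((52 - q) / 52.0, 1.0) for q in range(52)]
print(rho_domain_gob(r_gob=2000.0, rho_table=rho, encode_mb=fake_encode, k_i=30))
```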
In the following, additional explanations for steps 1) to 4) are given:
Step 1) Pre-Analysis and Look-Up Table Initialisation
Based on the algorithm proposed by S. H. Hong, S. D. Kim, “Histogram-based rate-distortion estimation for MPEG-2 video”, IEE Proceedings - Vision, Image and Signal Processing, Vol. 146, No. 4, August 1999, pp. 198-205, a histogram based rate-distortion estimation for the ISO/IEC 14496-10 AVC/H.264 standard is carried out. The basic assumptions are: the distortion in terms of mean squared error (MSE) is proportional to the square of the applied quantisation parameter, and a uniform quantisation is normally used in existing video coding standards. Then an iterative algorithm can be used to estimate the distortion for the different quantisation parameters according to D(QP_n) = D(QP_{n−1}) + ΔD(QP_{n−1}, QP_n), where ΔD(QP_{n−1}, QP_n) is the distortion increment when the quantisation parameter increases from QP_{n−1} to QP_n.
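A small sketch of the iterative distortion estimation described above, D(QP_n) = D(QP_{n−1}) + ΔD(QP_{n−1}, QP_n). How the per-step increment is obtained (the three cases and δ_n below) is only represented by a placeholder callable here; the names, the toy increment and the QP range are illustrative.

```python
def distortion_table(d_initial: float, qp_values: list[int], increment) -> dict[int, float]:
    """increment(qp_prev, qp_next) must return DeltaD(qp_prev, qp_next) >= 0."""
    table = {qp_values[0]: d_initial}
    for qp_prev, qp_next in zip(qp_values, qp_values[1:]):
        table[qp_next] = table[qp_prev] + increment(qp_prev, qp_next)
    return table

# toy increment that grows with the quantisation parameter, for illustration only
toy = distortion_table(10.0, list(range(20, 31)), lambda a, b: 0.5 * (b - a) * a)
print(toy[20], toy[30])
```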
Thereafter a histogram or list Hist_{1→0}[QP_n] based on DCT coefficient magnitudes and the related QP is established for the current GOB (group of macroblocks). This histogram represents the number of DCT coefficients which are quantised to amplitude ‘1’ when applying QP_n, but will be quantised to amplitude ‘0’ when applying the next quantisation parameter QP_{n+1}.
Based on the above histogram or list Hist_{1→0}[QP_n], for H.264/AVC one can easily establish the look-up table NZC[QP_n] of the number of non-zero magnitude coefficients under each quantisation parameter by formula
wherein N_Coeff is the total number of coefficients in each unit (which is a GOB in this invention).
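A hedged sketch of deriving the non-zero-coefficient table NZC[QP_n] from Hist_{1→0}[QP_n]: since the histogram counts coefficients that drop from amplitude ‘1’ to ‘0’ when going from QP_n to QP_{n+1}, one plausible construction (an assumption here, since the patent's exact formula is not reproduced above) subtracts the histogram cumulatively from the non-zero count at the smallest QP; names are illustrative.

```python
def nzc_table(nzc_at_min_qp: int, hist_1_to_0: list[int]) -> list[int]:
    """Returns NZC for QP_0 .. QP_N given Hist_1->0 for QP_0 .. QP_(N-1).
    Assumption: each histogram entry removes that many coefficients from the
    non-zero count when the quantisation parameter is increased by one step."""
    nzc = [nzc_at_min_qp]
    for dropped in hist_1_to_0:
        nzc.append(max(nzc[-1] - dropped, 0))
    return nzc

print(nzc_table(500, [40, 35, 30, 25, 20]))   # [500, 460, 425, 395, 370, 350]
```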
To each quantisation parameter QP_n belongs a corresponding quantisation step QPS_n that can be derived from a corresponding look-up table, e.g. in the H.264/AVC standard.
In order to estimate the distortion increment ΔD(QP_{n−1}, QP_n) between two successive QP values accurately, three cases are considered:
For Case 1) the distortion increment can be estimated using the following formula, based on the assumption that the residual signal is uniformly distributed inside the quantisation level gap:
where U_D(QP_n) is the distortion introduced by QP_n in Case 1), where the assumption of uniform distribution holds, and ΔE_1(QP_n) represents the distortion increase from QP_{n−1} to QP_n in Case 1).
For Case 2) the distortion increment can be estimated using the following formula:
For Case 3) there is no distortion increment.
In H.264/AVC, δ_n is
in case of DC transform used in the intra 16×16 mode and
is used in other cases of intra coding, and
is used in case of inter coding. Then the final distortion increment is
ΔD_e(QP_{n−1}, QP_n) = ΔE_1(QP_n) + ΔE_2(QP_n).
The initial value of the iterative distortion estimation method is
Then the QP-Distortion table D_GOB_i[QP_n] can be established.
Because, as is known from the above-cited article of Z. He and S. K. Mitra, the number of coded bits is proportional to the number of non-zero amplitude coefficients, a linear model fits very well. Therefore a linear model is used to estimate the bit rate based on the established list NZC.
Assuming that the number of coded bits is zero when the number of non-zero coefficients is zero, a corresponding interpolation line can be fixed if at least two points on the line are given. One reference point QP is set to QP_ref, then the real number of coded bits R_bits is determined by entropy coding or by estimating it through frequency analysis (e.g. according to Q. Chen, Y. He, “A Fast Bits Estimation Method for Rate-Distortion Optimisation in H.264/AVC”, Picture Coding Symposium, 15-17 Dec. 2004, San Francisco, Calif., USA), according to different complexity and accuracy requirements. Combining this linear model and the look-up table NZC[QP_n], the QP-Rate table R_GOB_i[QP_n] can be established.
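A minimal sketch of the linear rate estimate described above: the line is assumed to pass through the origin (zero non-zero coefficients, zero bits) and through one measured reference point (NZC[QP_ref], R_bits), so the rate for any QP is obtained by scaling; variable names are illustrative.

```python
def rate_table(nzc: list[int], qp_ref: int, r_bits_ref: float) -> list[float]:
    """R_GOB[QP_n] estimated from the non-zero coefficient counts NZC[QP_n]
    via a line through the origin and the measured reference point."""
    slope = r_bits_ref / nzc[qp_ref] if nzc[qp_ref] else 0.0
    return [slope * n for n in nzc]

nzc = [500, 460, 425, 395, 370, 350]
print(rate_table(nzc, qp_ref=2, r_bits_ref=3400.0))   # 8 bits per coefficient
```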
Then, based on the non-zero list or table NZC[QP_n], one can easily get the QP-ρ table ρ_GOB_i[QP_n].
Step 2) Distortion Allocation
Based on the assumption of a Gaussian distribution of the residual signals, a simplified rate-distortion model like equation (8) has been used in optimum bit allocation. Based on equations (8) and (10), one basic conclusion can be drawn:
Therefore the product of the distortions of the individual GOBs should remain constant for a given bit budget R_f. As described in Step 2.1), the
equation (11) can be derived from equation (15).
Since equation (8) is not always accurate enough, due to content property differences inside a picture and to the quantisation level being limited to a discrete finite set of values, a minor modification can be done based on feedback from Step 3) in order to reach the bit budget. The result from equation (11) should always be a good starting point for searching an optimised distortion allocation using the look-up tables.
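A hedged sketch of the refinement loop implied by Steps 2.3) and 3): starting from target distortions per GOB, pick for every GOB the QP whose table distortion is closest to the target, sum the corresponding table rates, and nudge the targets until the frame budget R_f is met. The scaling rule used for the nudge and all names are illustrative choices, not the patent's exact procedure.

```python
def allocate(d_targets, d_tables, r_tables, r_frame, iters=20):
    """d_tables[i][qp], r_tables[i][qp]: per-GOB look-up tables from Step 1)."""
    for _ in range(iters):
        qps = [min(range(len(d_tables[i])),
                   key=lambda q: abs(d_tables[i][q] - d_targets[i]))
               for i in range(len(d_targets))]
        r_est = sum(r_tables[i][qp] for i, qp in enumerate(qps))
        if abs(r_est - r_frame) / r_frame < 0.02:      # close enough to the budget
            break
        # feedback from Step 3): raise all targets if over budget, lower them if under
        d_targets = [d * (r_est / r_frame) for d in d_targets]
    return qps, r_est

d_tab = [[10, 20, 40, 80, 160]] * 3                    # toy QP-distortion tables
r_tab = [[900, 600, 400, 250, 150]] * 3                # toy QP-rate tables
print(allocate([15.0, 30.0, 90.0], d_tab, r_tab, r_frame=1200.0))
```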
Step 3) Rate Allocation
The result of this step is fed back to Step 2.3) in order to refine the bit allocation.
Step 4) Bit Allocation Inside Each GOBi
This step is used to accurately meet the bit budget constraint. Usually, if the bit budget is already met in Step 3), the corresponding quantisation parameter derived from the look-up table R_GOB_i[QP_n] can be used directly for the macroblocks of that GOB_i.
Instead of being frame based, the invention can also be carried out in a field based manner for interlaced video signals.
The invention can also be used in multi-pass rate control video coding and in error resilience video coding.