This patent document relates to video coding techniques, devices and systems.
Currently, efforts are underway to improve the performance of current video codec technologies to provide better compression ratios or to provide video coding and decoding schemes that allow for lower complexity or parallelized implementations. Industry experts have recently proposed several new video coding tools, and tests are currently underway to determine their effectiveness.
Devices, systems and methods related to digital video coding, and specifically, to management of motion vectors are described. The described methods may be applied to existing video coding standards (e.g., High Efficiency Video Coding (HEVC) or Versatile Video Coding) and future video coding standards or video codecs.
In one representative aspect, the disclosed technology may be used to provide a method for video processing. This method includes performing a conversion between a current video unit and a bitstream representation of the current video unit, wherein, during the conversion, a decision is made to selectively apply a same filtering operation on multiple color components of the current video unit, wherein the decision to apply the filtering operation is binary-valued based on satisfying at least one condition.
In one representative aspect, the disclosed technology may be used to provide a method for video processing. This method includes deriving, for a conversion between a video processing unit of the video and a bitstream representation of the video processing unit, at least one decision result associated with decisions in a chroma deblocking filter decision process of the video processing unit; applying a same decision result from the at least one decision result for all chroma components of the video processing unit; and performing the conversion based on the same decision result.
In one representative aspect, the disclosed technology may be used to provide a method for video processing. This method includes deriving, for a conversion between a video processing unit of the video and a bitstream representation of the video processing unit, at least one deblocking filter associated with a chroma deblocking filter process of the video processing unit; applying a same deblocking filter from the at least one deblocking filter for all chroma components of the video processing unit; and performing the conversion based on the same deblocking filter.
In one representative aspect, the disclosed technology may be used to provide a method for video processing. This method includes deriving, for a conversion between a video processing unit of the video and a bitstream representation of the video processing unit, deblocking parameters associated with a chroma deblocking filter decision process and/or a chroma deblocking filter process of the video processing unit; applying same deblocking parameters from deblocking parameters for all chroma components of the video processing unit; and performing the conversion based on the same deblocking parameters.
Further, in a representative aspect, an apparatus in a video system comprising a processor and a non-transitory memory with instructions thereon is disclosed. The instructions upon execution by the processor, cause the processor to implement any one or more of the disclosed methods.
Also disclosed is a computer program product stored on a non-transitory computer readable medium, the computer program product including program code for carrying out any one or more of the disclosed methods.
The above and other aspects and features of the disclosed technology are described in greater detail in the drawings, the description and the claims.
Video coding standards have evolved primarily through the development of the well-known ITU-T and ISO/IEC standards. The ITU-T produced H.261 and H.263, ISO/IEC produced MPEG-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video, H.264/MPEG-4 Advanced Video Coding (AVC) and H.265/HEVC standards. Since H.262, video coding standards have been based on a hybrid video coding structure wherein temporal prediction plus transform coding is utilized. To explore future video coding technologies beyond HEVC, the Joint Video Exploration Team (JVET) was founded jointly by VCEG and MPEG in 2015. Since then, many new methods have been adopted by JVET and put into the reference software named Joint Exploration Model (JEM). In April 2018, the Joint Video Expert Team (JVET) between VCEG (Q6/16) and ISO/IEC JTC1 SC29/WG11 (MPEG) was created to work on the VVC standard, targeting a 50% bitrate reduction compared to HEVC.
A deblocking filter process is performed for each CU in the same order as the decoding process. First, vertical edges are filtered (horizontal filtering), then horizontal edges are filtered (vertical filtering). Filtering is applied to 8×8 block boundaries which are determined to be filtered, for both luma and chroma components. 4×4 block boundaries are not processed in order to reduce the complexity.
Three kinds of boundaries may be involved in the filtering process: CU boundaries, TU boundaries and PU boundaries. CU boundaries, which are the outer edges of a CU, are always involved in the filtering, since CU boundaries are always also TU or PU boundaries. When the PU shape is 2N×N (N>4) and the RQT depth is equal to 1, the TU boundaries on the 8×8 block grid and the PU boundaries between the PUs inside the CU are involved in the filtering. One exception is that when a PU boundary is inside a TU, that boundary is not filtered.
Generally speaking, boundary strength (Bs) reflects how strong filtering is needed for the boundary. If Bs is large, strong filtering should be considered.
Let P and Q be defined as the blocks involved in the filtering, where P represents the block located on the left (vertical edge case) or above (horizontal edge case) side of the boundary, and Q represents the block located on the right (vertical edge case) or below (horizontal edge case) side of the boundary.
Bs is calculated on a 4×4 block basis, but it is re-mapped to an 8×8 grid. The maximum of the two Bs values that correspond to the 8 pixels forming a line on the 4×4 grid is selected as the Bs for the boundary on the 8×8 grid.
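As an illustration only, this re-mapping might be sketched in C as follows; the array layout (one Bs value per 4-sample boundary segment on the 4×4 grid) is an assumption made for this example.

```c
/* Hypothetical sketch: re-map Bs values computed on the 4x4 grid to the
 * 8x8 grid by taking the maximum of each pair of adjacent values, as
 * described above. bs4 holds one value per 4-sample boundary segment. */
void remap_bs_to_8x8(const int *bs4, int *bs8, int n8)
{
    for (int i = 0; i < n8; i++) {
        int a = bs4[2 * i], b = bs4[2 * i + 1];
        bs8[i] = a > b ? a : b;
    }
}
```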
In order to reduce the line buffer memory requirement, only for the CTU boundary, the information in every second block (on the 4×4 grid) on the left or above side is re-used, as depicted in the corresponding figure.
Threshold values β and tC, which are involved in the filter on/off decision, strong/weak filter selection and the weak filtering process, are derived based on the luma quantization parameters of the P and Q blocks, QPP and QPQ, respectively. The value Q used to derive β and tC is calculated as follows.
Q=((QPP+QPQ+1)>>1).
A variable β is derived as shown in Table 1, based on Q. If Bs is greater than 1, the variable tC is specified as in Table 1 with Clip3(0, 55, Q+2) as input. Otherwise (Bs is less than or equal to 1), the variable tC is specified as in Table 1 with Q as input.
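A minimal C sketch of this derivation follows; the table contents are those of Table 1 and are supplied by the caller here, and the table sizes and the use of the same clip range for both lookups are assumptions of this example.

```c
/* Sketch of the beta/tC derivation described above. The tables hold the
 * Table 1 entries (elided here); the tC index clipping follows the text:
 * Clip3(0, 55, Q + 2) when Bs > 1, else Q. */
static int clip3(int lo, int hi, int v) { return v < lo ? lo : (v > hi ? hi : v); }

void derive_beta_tc(int qp_p, int qp_q, int bs,
                    const int beta_table[56], const int tc_table[56],
                    int *beta, int *tc)
{
    int q = (qp_p + qp_q + 1) >> 1;          /* Q = ((QPP + QPQ + 1) >> 1) */
    *beta = beta_table[clip3(0, 55, q)];
    *tc = tc_table[clip3(0, 55, (bs > 1) ? q + 2 : q)];
}
```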
The filter on/off decision is made for four lines as a unit.
If dp0+dq0+dp3+dq3<β, filtering for the first four lines is turned on and the strong/weak filter selection process is applied. Each variable is derived as follows.
dp0=|p2,0−2*p1,0+p0,0|, dp3=|p2,3−2*p1,3+p0,3|, dp4=|p2,4−2*p1,4+p0,4|, dp7=|p2,7−2*p1,7+p0,7|
dq0=|q2,0−2*q1,0+q0,0|, dq3=|q2,3−2*q1,3+q0,3|, dq4=|q2,4−2*q1,4+q0,4|, dq7=|q2,7−2*q1,7+q0,7|
If the condition is not met, no filtering is done for the first 4 lines. Additionally, if the condition is met, dE, dEp1 and dEq1 are derived for the weak filtering process. The variable dE is set equal to 1. If dp0+dp3<(β+(β>>1))>>3, the variable dEp1 is set equal to 1. If dq0+dq3<(β+(β>>1))>>3, the variable dEq1 is set equal to 1.
For the second four lines, the decision is made in the same fashion as above.
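A hedged C sketch of this on/off decision for one group of four lines follows; the p[i][k]/q[i][k] indexing (i is the distance from the edge, k is the line index) is a convention chosen for this example.

```c
#include <stdlib.h>

/* Second-derivative activity of three samples: |a - 2b + c|. */
static int d2(int a, int b, int c) { return abs(a - 2 * b + c); }

/* Sketch of the filter on/off decision for one group of four lines.
 * Returns 1 when filtering is on and sets the weak-filter side flags. */
int filter_on_and_flags(int p[4][8], int q[4][8], int beta,
                        int *dEp1, int *dEq1)
{
    int dp0 = d2(p[2][0], p[1][0], p[0][0]);
    int dp3 = d2(p[2][3], p[1][3], p[0][3]);
    int dq0 = d2(q[2][0], q[1][0], q[0][0]);
    int dq3 = d2(q[2][3], q[1][3], q[0][3]);

    if (dp0 + dq0 + dp3 + dq3 >= beta)
        return 0;                      /* filtering off for these 4 lines */

    /* dE = 1; the side flags enable the extra weak-filter taps per side. */
    int thr = (beta + (beta >> 1)) >> 3;
    *dEp1 = (dp0 + dp3 < thr);
    *dEq1 = (dq0 + dq3 < thr);
    return 1;
}
```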
Once the first four lines are determined to be filtered in the filter on/off decision, the strong filter is used for filtering the first four lines if the following two conditions are met. Otherwise, the weak filter is used for filtering. The pixels involved are the same as those used for the filter on/off decision, as depicted in the corresponding figure.
2*(dp0+dq0)<(β>>2), |p30−p00|+|q00−q30|<(β>>3) and |p00−q00|<(5*tC+1)>>1 1)
2*(dp3+dq3)<(β>>2), |p33−p03|+|q03−q33|<(β>>3) and |p03−q03|<(5*tC+1)>>1 2)
In the same fashion, if the following two conditions are met, the strong filter is used for filtering the second 4 lines. Otherwise, the weak filter is used for filtering.
2*(dp4+dq4)<(β>>2), |p34−p04|+|q04−q34|<(β>>3) and |p04−q04|<(5*tC+1)>>1 1)
2*(dp7+dq7)<(β>>2), |p37−p07|+|q07−q37|<(β>>3) and |p07−q07|<(5*tC+1)>>1 2)
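For illustration, the two-line check behind this selection can be sketched in C as follows; the indexing convention matches the earlier sketch and is an assumption of this example.

```c
#include <stdlib.h>

/* Sketch of the strong/weak selection for one group of four lines: both
 * boundary lines of the group (e.g., lines 0 and 3) must pass all three
 * sub-conditions listed above. */
static int line_ok(int p[4][8], int q[4][8], int k, int dp, int dq,
                   int beta, int tc)
{
    return 2 * (dp + dq) < (beta >> 2)
        && abs(p[3][k] - p[0][k]) + abs(q[0][k] - q[3][k]) < (beta >> 3)
        && abs(p[0][k] - q[0][k]) < ((5 * tc + 1) >> 1);
}

int use_strong_filter(int p[4][8], int q[4][8],
                      int dp0, int dq0, int dp3, int dq3, int beta, int tc)
{
    return line_ok(p, q, 0, dp0, dq0, beta, tc)
        && line_ok(p, q, 3, dp3, dq3, beta, tc);
}
```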
For strong filtering, filtered pixel values are obtained by the following equations. It is worth noting that three pixels are modified using four pixels as input for each of the P and Q blocks, respectively.
p0′=(p2+2*p1+2*p0+2*q0+q1+4)>>3
q0′=(p1+2*p0+2*q0+2*q1+q2+4)>>3
p1′=(p2+p1+p0+q0+2)>>2
q1′=(p0+q0+q1+q2+2)>>2
p2′=(2*p3+3*p2+p1+p0+q0+4)>>3
q2′=(p0+q0+q1+3*q2+2*q3+4)>>3
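A minimal C rendering of these six equations follows; the fixed-size array interface is an assumption of this example, and clipping of the outputs is omitted here, as it is in the equations above.

```c
/* Sketch of the HEVC luma strong filter for one line: three output samples
 * per side are computed from four input samples per side. */
void strong_filter_line(const int p[4], const int q[4], int pf[3], int qf[3])
{
    pf[0] = (p[2] + 2 * p[1] + 2 * p[0] + 2 * q[0] + q[1] + 4) >> 3;
    qf[0] = (p[1] + 2 * p[0] + 2 * q[0] + 2 * q[1] + q[2] + 4) >> 3;
    pf[1] = (p[2] + p[1] + p[0] + q[0] + 2) >> 2;
    qf[1] = (p[0] + q[0] + q[1] + q[2] + 2) >> 2;
    pf[2] = (2 * p[3] + 3 * p[2] + p[1] + p[0] + q[0] + 4) >> 3;
    qf[2] = (p[0] + q[0] + q[1] + 3 * q[2] + 2 * q[3] + 4) >> 3;
}
```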
In some embodiments, Δ may be defined as follows.
Δ=(9*(q0−p0)−3*(q1−p1)+8)>>4
When abs(Δ) is less than tC*10,
Δ=Clip3(−tC,tC,Δ)
p0′=Clip1Y(p0+Δ)
q0′=Clip1Y(q0−Δ)
If dEp1 is equal to 1,
Δp=Clip3(−(tC>>1),tC>>1,(((p2+p0+1)>>1)−p1+Δ)>>1)
p1′=Clip1Y(p1+Δp)
If dEq1 is equal to 1,
Δq=Clip3(−(tC>>1),tC>>1,(((q2+q0+1)>>1)−q1−Δ)>>1)
q1′=Clip1Y(q1+Δq)
It is worth noting that at most two pixels are modified using three pixels as input for each of the P and Q blocks, respectively.
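The weak filtering process above can be sketched in C as follows; clip1y is a stand-in for Clip1Y with the maximum luma value passed explicitly, which is an assumption of this example.

```c
#include <stdlib.h>

/* Sketch of the HEVC luma weak filter for one line, following the delta
 * equations above. max_val would be (1 << BitDepthY) - 1. */
static int clip3(int lo, int hi, int v) { return v < lo ? lo : (v > hi ? hi : v); }
static int clip1y(int v, int max_val)   { return clip3(0, max_val, v); }

void weak_filter_line(int p[3], int q[3], int tc, int dEp1, int dEq1,
                      int max_val)
{
    int delta = (9 * (q[0] - p[0]) - 3 * (q[1] - p[1]) + 8) >> 4;
    if (abs(delta) >= tc * 10)
        return;                              /* line left unmodified */
    delta = clip3(-tc, tc, delta);
    p[0] = clip1y(p[0] + delta, max_val);
    q[0] = clip1y(q[0] - delta, max_val);
    if (dEp1) {
        int dp = clip3(-(tc >> 1), tc >> 1,
                       (((p[2] + p[0] + 1) >> 1) - p[1] + delta) >> 1);
        p[1] = clip1y(p[1] + dp, max_val);
    }
    if (dEq1) {
        int dq = clip3(-(tc >> 1), tc >> 1,
                       (((q[2] + q[0] + 1) >> 1) - q[1] - delta) >> 1);
        q[1] = clip1y(q[1] + dq, max_val);
    }
}
```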
The Bs of chroma filtering is inherited from luma. Chroma filtering is performed if Bs is greater than 1 or if coded chroma coefficients exist. There is no other filtering decision, and only one filter is applied for chroma; no filter selection process is used for chroma. The filtered sample values p0′ and q0′ are derived as follows.
Δ=Clip3(−tC,tC,((((q0−p0)<<2)+p1−q1+4)>>3))
p0′=Clip1C(p0+Δ)
q0′=Clip1C(q0−Δ)
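A short C sketch of this chroma filter follows, with clip1c standing in for Clip1C and the explicit max_val parameter being an assumption of this example.

```c
/* Sketch of the HEVC chroma filter above: a single delta modifies only p0
 * and q0. max_val would be (1 << BitDepthC) - 1. */
static int clip3(int lo, int hi, int v) { return v < lo ? lo : (v > hi ? hi : v); }
static int clip1c(int v, int max_val)   { return clip3(0, max_val, v); }

void chroma_filter_line(int *p0, int *q0, int p1, int q1, int tc, int max_val)
{
    int delta = clip3(-tc, tc, ((((*q0 - *p0) << 2) + p1 - q1 + 4) >> 3));
    *p0 = clip1c(*p0 + delta, max_val);
    *q0 = clip1c(*q0 - delta, max_val);
}
```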
In the current VTM, i.e., VTM-4.0, the following deblocking scheme is used.
The proposal uses a bilinear filter when the samples at either side of a boundary belong to a large block. A sample belonging to a large block is defined as one for which the width is >=32 for a vertical edge, or the height is >=32 for a horizontal edge.
The bilinear filter is listed below.
Block boundary samples pi for i=0 to Sp−1 and qj for j=0 to Sq−1 (pi and qj follow the definitions in the HEVC deblocking described above) are then replaced by linear interpolation as follows:
pi′=(fi*Middles,t+(64−fi)*Ps+32)>>6, clipped to pi±tcPDi
qj′=(gj*Middles,t+(64−gj)*Qs+32)>>6, clipped to qj±tcPDj
where the tcPDi and tcPDj terms are position-dependent clippings described in Section 2.2.5, and gj, fi, Middles,t, Ps and Qs are given below:
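A possible C sketch of this interpolation for one side of the boundary follows; f, the middle value, the side reference (Ps or Qs) and the tcPD thresholds are assumed to have been derived as described in the text.

```c
/* Sketch of the bilinear long filter above: each sample is blended between
 * Middle and the side reference, then clipped to +/- tcPD around the input
 * sample. */
static int clip3(int lo, int hi, int v) { return v < lo ? lo : (v > hi ? hi : v); }

void bilinear_long_filter(int *s, int len, const int *f, int middle,
                          int side_ref, const int *tc_pd)
{
    for (int i = 0; i < len; i++) {
        int v = (f[i] * middle + (64 - f[i]) * side_ref + 32) >> 6;
        s[i] = clip3(s[i] - tc_pd[i], s[i] + tc_pd[i], v);
    }
}
```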
The deblocking decision process is described in this sub-section.
The wider-stronger luma filter is used only if all of Condition1, Condition2 and Condition3 are TRUE.
Condition 1 is the "large block condition". This condition detects whether the samples at the P side and Q side belong to large blocks, represented by the variables bSidePisLargeBlk and bSideQisLargeBlk, respectively. bSidePisLargeBlk and bSideQisLargeBlk are defined as follows.
bSidePisLargeBlk=((edge type is vertical and p0 belongs to CU with width>=32)∥(edge type is horizontal and p0 belongs to CU with height>=32))?TRUE: FALSE
bSideQisLargeBlk=((edge type is vertical and q0 belongs to CU with width>=32)∥(edge type is horizontal and q0 belongs to CU with height>=32))?TRUE: FALSE
Based on bSidePisLargeBlk and bSideQisLargeBlk, the condition 1 is defined as follows.
Condition1=(bSidePisLargeBlk∥bSideQisLargeBlk)?TRUE: FALSE
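Condition 1 can be sketched in C as follows; passing the CU dimensions in explicitly is an assumption of this example.

```c
/* Sketch of the large-block condition above: each side is "large" when the
 * CU dimension orthogonal to the edge is at least 32. */
int condition1(int edge_is_vertical,
               int p_cu_width, int p_cu_height,
               int q_cu_width, int q_cu_height)
{
    int p_large = edge_is_vertical ? (p_cu_width >= 32) : (p_cu_height >= 32);
    int q_large = edge_is_vertical ? (q_cu_width >= 32) : (q_cu_height >= 32);
    return p_large || q_large;                 /* Condition1 */
}
```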
Next, if Condition 1 is true, the condition 2 will be further checked. First, the following variables are derived:
dp0, dp3, dq0, dq3 are first derived as in HEVC
if (p side is greater than or equal to 32)
dp0=(dp0+Abs(p5,0−2*p4,0+p3,0)+1)>>1
dp3=(dp3+Abs(p5,3−2*p4,3+p3,3)+1)>>1
if (q side is greater than or equal to 32)
dq0=(dq0+Abs(q5,0−2*q4,0+q3,0)+1)>>1
dq3=(dq3+Abs(q5,3−2*q4,3+q3,3)+1)>>1
dpq0, dpq3, dp, dq, d are then derived as in HEVC.
Then the condition 2 is defined as follows.
Condition2=(d<β)?TRUE: FALSE
where d=dp0+dq0+dp3+dq3, as shown in Section 2.1.4.
If Condition1 and Condition2 are valid, it is checked whether any of the blocks uses sub-blocks:
Finally, if both Condition 1 and Condition 2 are valid, the proposed deblocking method checks Condition 3 (the large-block strong filter condition), which is defined as follows.
In Condition 3 (StrongFilterCondition), the following variables are derived:
Derived as in HEVC: StrongFilterCondition=(dpq is less than (β>>2), sp3+sq3 is less than (3*β>>5), and Abs(p0−q0) is less than (5*tC+1)>>1)?TRUE: FALSE
The following strong deblocking filter for chroma is defined:
p2′=(3*p3+2*p2+p1+p0+q0+4)>>3
p1′=(2*p3+p2+2*p1+p0+q0+q1+4)>>3
p0′=(p3+p2+p1+2*p0+q0+q1+q2+4)>>3
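A minimal C sketch of the P side of this filter follows; the Q side is obtained by symmetry (swap the roles of p and q), which is an assumption of this example rather than stated in the equations above.

```c
/* Sketch of the strong chroma filter defined above, P side only. */
void chroma_strong_filter_p(const int p[4], const int q[3], int pf[3])
{
    pf[2] = (3 * p[3] + 2 * p[2] + p[1] + p[0] + q[0] + 4) >> 3;
    pf[1] = (2 * p[3] + p[2] + 2 * p[1] + p[0] + q[0] + q[1] + 4) >> 3;
    pf[0] = (p[3] + p[2] + p[1] + 2 * p[0] + q[0] + q[1] + q[2] + 4) >> 3;
}
```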
The proposed chroma filter performs deblocking on a 4×4 chroma sample grid.
The chroma strong filters are used on both sides of the block boundary. Here, the chroma filter is selected when both sides of the chroma edge are greater than or equal to 8 (in chroma sample positions) and the following decision with three conditions is satisfied. The first condition concerns the boundary strength as well as large blocks: the proposed filter can be applied when the block width or height that orthogonally crosses the block edge is equal to or larger than 8 in the chroma sample domain. The second and third conditions are basically the same as the HEVC luma deblocking decisions, namely the on/off decision and the strong filter decision, respectively.
In the first decision, the boundary strength (bS) is modified for chroma filtering as shown in Table 2. The conditions in Table 2 are checked sequentially. If a condition is satisfied, the remaining conditions with lower priorities are skipped.
Chroma deblocking is performed when bS is equal to 2, or bS is equal to 1 when a large block boundary is detected.
The second and third conditions are basically the same as the HEVC luma strong filter decision, as follows.
In the second condition:
d is then derived as in HEVC luma deblocking.
The second condition will be TRUE when d is less than β.
In the third condition StrongFilterCondition is derived as follows:
dpq is derived as in HEVC.
sp3=Abs(p3−p0), derived as in HEVC
sq3=Abs(q0−q3), derived as in HEVC
Derived as in HEVC: StrongFilterCondition=(dpq is less than (β>>2), sp3+sq3 is less than (β>>3), and Abs(p0−q0) is less than (5*tC+1)>>1)
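A C sketch of this chroma strong-filter decision follows, assuming dpq, sp3 and sq3 have been derived as in HEVC.

```c
#include <stdlib.h>

/* Sketch of the chroma StrongFilterCondition above; note the (beta >> 3)
 * spatial threshold used for chroma, versus (3*beta >> 5) for the
 * large-block luma decision earlier. */
int chroma_strong_condition(int dpq, int sp3, int sq3, int p0, int q0,
                            int beta, int tc)
{
    return dpq < (beta >> 2)
        && (sp3 + sq3) < (beta >> 3)
        && abs(p0 - q0) < ((5 * tc + 1) >> 1);
}
```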
The proposal also introduces a position-dependent clipping tcPD, which is applied to the output samples of the luma filtering process involving the strong and long filters that modify 7, 5 and 3 samples at the boundary. Assuming a quantization error distribution, it is proposed to increase the clipping value for samples that are expected to have higher quantization noise, and thus a higher deviation of the reconstructed sample value from the true sample value.
For each P or Q boundary filtered with the proposed asymmetrical filter, depending on the result of the decision-making process described in Section 2.2, a position-dependent threshold table is selected from the Tc7 and Tc3 tables that are provided to the decoder as side information:
Tc7={6,5,4,3,2,1,1};
Tc3={6,4,2};
tcPD=(SP==3)?Tc3:Tc7;
tcQD=(SQ==3)?Tc3:Tc7;
For P or Q boundaries being filtered with a short symmetrical filter, a position-dependent threshold of lower magnitude is applied:
Tc3={3,2,1};
After defining the threshold, the filtered p′i and q′j sample values are clipped according to the tcP and tcQ clipping values:
p″i=clip3(p′i+tcPi,p′i−tcPi,p′i);
q″j=clip3(q′j+tcQj,q′j−tcQj,q′j);
where p′i and q′j are the filtered sample values, p″i and q″j are the output sample values after the clipping, and tcPi and tcQj are the clipping thresholds derived from the VVC tc parameter and tcPD and tcQD. The term clip3 is a clipping function as specified in VVC.
To enable parallel-friendly deblocking using both long filters and sub-block deblocking, the long filters are restricted to modify at most 5 samples on a side that uses sub-block deblocking (AFFINE or ATMVP), as shown in the luma control for long filters. Additionally, the sub-block deblocking is adjusted such that sub-block boundaries on an 8×8 grid that are close to a CU or an implicit TU boundary are restricted to modify at most two samples on each side.
The following applies to sub-block boundaries that are not aligned with the CU boundary.
Here, edge equal to 0 corresponds to a CU boundary, and edge equal to 2 or to orthogonalLength−2 corresponds to a sub-block boundary 8 samples from a CU boundary, etc. Implicit TU is true if an implicit split of the TU is used.
Filtering of a horizontal boundary limits Sp to 3 for luma, and Sp to 1 and Sq to 1 for chroma, when the horizontal boundary is aligned with the CTU boundary.
Inputs to this process are the reconstructed picture prior to deblocking, i.e., the array recPictureL and, when ChromaArrayType is not equal to 0, the arrays recPictureCb and recPictureCr.
Outputs of this process are the modified reconstructed picture after deblocking, i.e., the array recPictureL and, when ChromaArrayType is not equal to 0, the arrays recPictureCb and recPictureCr. The vertical edges in a picture are filtered first. Then the horizontal edges in a picture are filtered with samples modified by the vertical edge filtering process as input. The vertical and horizontal edges in the CTBs of each CTU are processed separately on a coding unit basis. The vertical edges of the coding blocks in a coding unit are filtered starting with the edge on the left-hand side of the coding blocks proceeding through the edges towards the right-hand side of the coding blocks in their geometrical order. The horizontal edges of the coding blocks in a coding unit are filtered starting with the edge on the top of the coding blocks proceeding through the edges towards the bottom of the coding blocks in their geometrical order.
NOTE—Although the filtering process is specified on a picture basis in this Specification, the filtering process can be implemented on a coding unit basis with an equivalent result, provided the decoder properly accounts for the processing dependency order so as to produce the same output values. The deblocking filter process is applied to all coding subblock edges and transform block edges of a picture, except the following types of edges:
When slice_deblocking_filter_disabled_flag of the current slice is equal to 0, the following applies:
Inputs to this process are:
firstCompIdx=(treeType==DUAL_TREE_CHROMA)?1:0 (8-1004)
lastCompIdx=(treeType==DUAL_TREE_LUMA∥ChromaArrayType==0)?0:2 (8-1005)
For each coding unit and each coding block per colour component of a coding unit indicated by the colour component index cIdx ranging from firstCompIdx to lastCompIdx, inclusive, with coding block width nCbW, coding block height nCbH and location of top-left sample of the coding block (xCb, yCb), when edgeType is equal to EDGE_VER and xCb%8 is equal to 0, or when edgeType is equal to EDGE_HOR and yCb%8 is equal to 0, the edges are filtered by the following ordered steps:
1. The variable filterEdgeFlag is derived as follows:
Inputs to this process are:
a variable edgeType specifying whether a vertical (EDGE_VER) or a horizontal (EDGE_HOR) edge is filtered.
Outputs of this process are:
Inputs to this process are:
edgeFlags[x][y]=0 (8-1006)
edgeFlags[x][y]=1 (8-1007)
maxFilterLengthQs[x][y]=Min(5,maxFilterLengthQs[x][y]) (8-1008)
maxFilterLengthPs[x][y]=Min(5,maxFilterLengthPs[x][y]) (8-1009)
maxFilterLengthPs[x][y]=Min(5,maxFilterLengthPs[x][y]) (8-1010)
maxFilterLengthQs[x][y]=Min(5,maxFilterLengthQs[x][y]) (8-1011)
maxFilterLengthPs[x][y]=2 (8-1012)
maxFilterLengthQs[x][y]=2 (8-1013)
maxFilterLengthPs[x][y]=3 (8-1014)
maxFilterLengthQs[x][y]=3 (8-1015)
edgeFlags[x][y]=0 (8-1016)
edgeFlags[x][y]=1 (8-1017)
maxFilterLengthQs[x][y]=Min(5,maxFilterLengthQs[x][y]) (8-1018)
maxFilterLengthPs[x][y]=Min(5,maxFilterLengthPs[x][y]) (8-1019)
maxFilterLengthPs[x][y]=Min(5,maxFilterLengthPs[x][y]) (8-1020)
maxFilterLengthQs[x][y]=Min(5,maxFilterLengthQs[x][y]) (8-1021)
maxFilterLengthPs[x][y]=2 (8-1022)
maxFilterLengthQs[x][y]=2 (8-1023)
maxFilterLengthPs[x][y]=3 (8-1024)
maxFilterLengthQs[x][y]=3 (8-1025)
Inputs to this process are:
xDi=(i<<3) (8-1026)
yDj=cIdx==0?(j<<2):(j<<1) (8-1027)
xN is set equal to Max(0,(nCbW/8)−1) (8-1028)
yN=cIdx==0?(nCbH/4)−1:(nCbH/2)−1 (8-1029)
xDi=cIdx==0?(i<<2):(i<<1) (8-1030)
yDj=(j<<3) (8-1031)
xN=cIdx==0?(nCbW/4)−1:(nCbW/2)−1 (8-1032)
yN=Max(0,(nCbH/8)−1) (8-1033)
For xDi with i=0 . . . xN and yDj with j=0 . . . yN, the following applies:
Inputs to this process are:
subW=cIdx==0?1:SubWidthC (8-1034)
subH=cIdx==0?1:SubHeightC (8-1035)
xN=edgeType==EDGE_VER?Max(0,(nCbW/8)−1):(nCbW/4/subW)−1 (8-1036)
yN=edgeType==EDGE_VER?(nCbH/4/subH)−1:Max(0,(nCbH/8)−1) (8-1037)
xDk=edgeType==EDGE_VER?(k<<3):(k<<(2/subW)) (8-1038)
yDm=edgeType==EDGE_VER?(m<<(2/subH)):(m<<3) (8-1039)
cQpPicOffset=cIdx==1?pps_cb_qp_offset:pps_cr_qp_offset (8-1040)
Inputs to this process are:
qj,k=recPictureL[xCb+xBl+j][yCb+yBl+k] (8-1041)
pi,k=recPictureL[xCb+xBl−i−1][yCb+yBl+k] (8-1042)
qj,k=recPicture[xCb+xBl+k][yCb+yBl+j] (8-1043)
pi,k=recPicture[xCb+xBl+k][yCb+yBl−i−1] (8-1044)
The variable qpOffset is derived as follows:
lumaLevel=((p0,0+p0,3+q0,0+q0,3)>>2), (8-1045)
qP=((QpQ+QpP+1)>>1)+qpOffset (8-1047)
The value of the variable β′ is determined as specified in Table 8-20 based on the quantization parameter Q derived as follows:
Q=Clip3(0,63,qP+(slice_beta_offset_div2<<1)) (8-1048)
where slice_beta_offset_div2 is the value of the syntax element slice_beta_offset_div2 for the slice that contains sample q0,0.
The variable β is derived as follows:
β=β′*(1<<(BitDepthY−8)) (8-1049)
The value of the variable tC′ is determined as specified in Table 8-20 based on the quantization parameter Q derived as follows:
Q=Clip3(0,65,qP+2*(bS−1)+(slice_tc_offset_div2<<1)) (8-1050)
where slice_tc_offset_div2 is the value of the syntax element slice_tc_offset_div2 for the slice that contains sample q0,0.
The variable tC is derived as follows:
tC=tC′*(1<<(BitDepthY−8)) (8-1051)
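For illustration, the index and scaling computations in equations (8-1048) through (8-1051) can be sketched in C as follows; the helper names are hypothetical.

```c
/* Sketch of the table-index and bit-depth scaling steps above. */
static int clip3(int lo, int hi, int v) { return v < lo ? lo : (v > hi ? hi : v); }

int beta_index(int qp, int slice_beta_offset_div2)
{
    return clip3(0, 63, qp + (slice_beta_offset_div2 << 1));          /* (8-1048) */
}

int tc_index(int qp, int bs, int slice_tc_offset_div2)
{
    return clip3(0, 65, qp + 2 * (bs - 1) + (slice_tc_offset_div2 << 1)); /* (8-1050) */
}

int scale_to_bit_depth(int table_value, int bit_depth)   /* (8-1049)/(8-1051) */
{
    return table_value * (1 << (bit_depth - 8));
}
```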
The following ordered steps apply:
1. The variables dp0, dp3, dq0 and dq3 are derived as follows:
dp0=Abs(p2,0−2*p1,0+p0,0) (8-1052)
dp3=Abs(p2,3−2*p1,3+p0,3) (8-1053)
dq0=Abs(q2,0−2*q1,0+q0,0) (8-1054)
dq3=Abs(q2,3−2*q1,3+q0,3) (8-1055)
2. When maxFilterLengthP and maxFilterLengthQ are both equal to or greater than 3, the variables sp0, sq0, spq0, sp3, sq3 and spq3 are derived as follows:
sp0=Abs(p3,0−p0,0) (8-1056)
sq0=Abs(q0,0−q3,0) (8-1057)
spq0=Abs(p0,0−q0,0) (8-1058)
sp3=Abs(p3,3−p0,3) (8-1059)
sq3=Abs(q0,3−q3,3) (8-1060)
spq3=Abs(p0,3−q0,3) (8-1061)
3. The variables sidePisLargeBlk and sideQisLargeBlk are set equal to 0.
4. When maxFilterLengthP is larger than 3, sidePisLargeBlk is set equal to 1:
5. When maxFilterLengthQ is larger than 3, sideQisLargeBlk is set equal to 1:
6. When edgeType is equal to EDGE_HOR and (yCb+yBl)%CtbSizeY is equal to 0, sidePisLargeBlk is set equal to 0.
7. The variables dSam0 and dSam3 are initialized to 0.
8. When sidePisLargeBlk or sideQisLargeBlk is greater than 0, the following applies:
dp0L=(dp0+Abs(p5,0−2*p4,0+p3,0)+1)>>1 (8-1062)
dp3L=(dp3+Abs(p5,3−2*p4,3+p3,3)+1)>>1 (8-1063)
dp0L=dp0 (8-1064)
dp3L=dp3 (8-1065)
maxFilterLengthP=3 (8-1066)
dq0L=(dq0+Abs(q5,0−2*q4,0+q3,0)+1)>>1 (8-1067)
dq3L=(dq3+Abs(q5,3−2*q4,3+q3,3)+1)>>1 (8-1068)
dq0L=dq0 (8-1069)
dq3L=dq3 (8-1070)
dpq0L=dp0L+dq0L (8-1071)
dpq3L=dp3L+dq3L (8-1072)
dL=dpq0L+dpq3L (8-1073)
p3=p3,0 (8-1074)
p0=pmaxFilterLengthP,0 (8-1075)
q3=q3,0 (8-1076)
q0=qmaxFilterLengthQ,0 (8-1077)
p3=p3,3 (8-1078)
p0=pmaxFilterLengthP,3 (8-1079)
q3=q3,3 (8-1080)
q0=qmaxFilterLengthQ,3 (8-1081)
dpq0=dp0+dq0 (8-1082)
dpq3=dp3+dq3 (8-1083)
dp=dp0+dp3 (8-1084)
dq=dq0+dq3 (8-1085)
d=dpq0+dpq3 (8-1086)
Inputs to this process are:
qj,k=recPictureL[xCb+xBl+j][yCb+yBl+k] (8-1087)
pi,k=recPictureL[xCb+xBl−i−1][yCb+yBl+k] (8-1088)
recPicture[xCb+xBl−i−1][yCb+yBl+k]=pi′ (8-1089)
recPicture[xCb+xBl+j][yCb+yBl+k]=qj′ (8-1090)
recPicture[xCb+xBl−i−1][yCb+yBl+k]=pi′ (8-1091)
recPicture[xCb+xBl+j][yCb+yBl+k]=qj′ (8-1092)
qj,k=recPictureL[xCb+xBl+k][yCb+yBl+j] (8-1093)
pi,k=recPictureL[xCb+xBl+k][yCb+yBl−i−1] (8-1094)
recPicture[xCb+xBl+k][yCb+yBl−i−1]=pi′ (8-1095)
recPicture[xCb+xBl+k][yCb+yBl+j]=qj′ (8-1096)
recPicture[xCb+xBl+k][yCb+yBl−i−1]=pi′ (8-1097)
recPicture[xCb+xBl+k][yCb+yBl+j]=qj′ (8-1098)
This process is only invoked when ChromaArrayType is not equal to 0.
Inputs to this process are:
qi,k=recPicture[xCb+xBl+i][yCb+yBl+k] (8-1099)
pi,k=recPicture[xCb+xBl−i−1][yCb+yBl+k] (8-1100)
qi,k=recPicture[xCb+xBl+k][yCb+yBl+i] (8-1101)
pi,k=recPicture[xCb+xBl+k][yCb+yBl−i−1] (8-1102)
The variables QpQ and QpP are set equal to the QpY values of the coding units which include the coding blocks containing the sample q0,0 and p0,0, respectively.
The variable QpC is derived as follows:
If ChromaArrayType is equal to 1, the variable QpC is determined as specified in Table 8-15 based on the index qPi derived as follows:
qPi=((QpQ+QpP+1)>>1)+cQpPicOffset (8-1103)
Q=Clip3(0,63,QpC+(slice_beta_offset_div2<<1)) (8-1104)
where slice_beta_offset_div2 is the value of the syntax element slice_beta_offset_div2 for the slice that contains sample q0,0.
The variable β is derived as follows:
β=(β′*(1<<(BitDepthC−8)) (8-1105)
The value of the variable tC′ is determined as specified in Table 8-20 based on the chroma quantization parameter Q derived as follows:
Q=Clip3(0,65,QpC+2*(bS−1)+(slice_tc_offset_div2<<1)) (8-1106)
where slice_tc_offset_div2 is the value of the syntax element slice_tc_offset_div2 for the slice that contains sample q0,0.
The variable tC is derived as follows:
tC=tC′*(1<<(BitDepthC−8)) (8-1107)
When maxFilterLengthCbCr is equal to 1 and bS is not equal to 2, maxFilterLengthCbCr is set equal to 0.
When maxFilterLengthCbCr is equal to 3, the following ordered steps apply:
1. The variables dpq0, dpq1, dp, dq and d are derived as follows:
dp0=Abs(p2,0−2*p1,0+p0,0) (8-1108)
dp1=Abs(p2,1−2*p1,1+p0,1) (8-1109)
dq0=Abs(q2,0−2*q1,0+q0,0) (8-1110)
dq1=Abs(q2,1−2*q1,1+q0,1) (8-1111)
dpq0=dp0+dq0 (8-1112)
dpq1=dp1+dq1 (8-1113)
dp=dp0+dp1 (8-1114)
dq=dq0+dq1 (8-1115)
d=dpq0+dpq1 (8-1116)
2. The variables dSam0 and dSam1 are both set equal to 0.
3. When d is less than β, the following ordered steps apply:
This process is only invoked when ChromaArrayType is not equal to 0.
Inputs to this process are:
maxK=(SubHeightC==1)?3:1 (8-1117)
maxK=(SubWidthC==1)?3:1 (8-1118)
The values pi and qi with i=0 . . . maxFilterLengthCbCr and k=0 . . . maxK are derived as follows:
qi,k=recPicture[xCb+xBl+i][yCb+yBl+k] (8-1119)
pi,k=recPicture[xCb+xBl−i−1][yCb+yBl+k] (8-1120)
qi,k=recPicture[xCb+xBl+k][yCb+yBl+i] (8-1121)
pi,k=recPicture[xCb+xBl+k][yCb+yBl−i−1] (8-1122)
Depending on the value of edgeType, the following applies:
recPicture[xCb+xBl+i][yCb+yBl+k]=qi′ (8-1123)
recPicture[xCb+xBl−i−1][yCb+yBl+k]=pi′ (8-1124)
recPicture[xCb+xBl+k][yCb+yBl+i]=qi′ (8-1125)
recPicture[xCb+xBl+k][yCb+yBl−i−1]=pi′ (8-1126)
Inputs to this process are:
sp=(sp+Abs(p3−p0)+1)>>1 (8-1127)
sq=(sq+Abs(q3−q0)+1)>>1 (8-1128)
The variable sThr is derived as follows:
sThr=3*β>>5 (8-1129)
sThr=β>>3 (8-1130)
The variable dSam is specified as follows:
Inputs to this process are:
p0′=Clip3(p0−3*tC,p0+3*tC,(p2+2*p1+2*p0+2*q0+q1+4)>>3) (8-1131)
p1′=Clip3(p1−2*tC,p1+2*tC,(p2+p1+p0+q0+2)>>2) (8-1132)
p2′=Clip3(p2−1*tC,p2+1*tC,(2*p3+3*p2+p1+p0+q0+4)>>3) (8-1133)
q0′=Clip3(q0−3*tC,q0+3*tC,(p1+2*p0+2*q0+2*q1+q2+4)>>3) (8-1134)
q1′=Clip3(q1−2*tC,q1+2*tC,(p0+q0+q1+q2+2)>>2) (8-1135)
q2′=Clip3(q2−1*tC,q2+1*tC,(p0+q0+q1+3*q2+2*q3+4)>>3) (8-1136)
Δ=(9*(q0−p0)−3*(q1−p1)+8)>>4 (8-1137)
Δ=Clip3(−tC,tC,Δ) (8-1138)
p0′=Clip1Y(p0+Δ) (8-1139)
q0′=Clip1Y(q0−Δ) (8-1140)
Δp=Clip3(−(tC>>1),tC>>1,(((p2+p0+1)>>1)−p1+Δ)>>1) (8-1141)
p1′=Clip1Y(p1+Δp) (8-1142)
Δq=Clip3(−(tC>>1),tC>>1,(((q2+q0+1)>>1)−q1−Δ)>>1) (8-1143)
q1′=Clip1Y(q1+Δq) (8-1144)
Inputs to this process are:
refMiddle=(p4+p3+2*(p2+p1+p0+q0+q1+q2)+q3+q4+8)>>4 (8-1145)
refMiddle=(p6+p5+p4+p3+p2+p1+2*(p0+q0)+q1+q2+q3+q4+q5+q6+8)>>4 (8-1146)
refMiddle=(p4+p3+2*(p2+p1+p0+q0+q1+q2)+q3+q4+8)>>4 (8-1147)
refMiddle=(p3+p2+p1+p0+q0+q1+q2+q3+4)>>3 (8-1148)
refMiddle=(2*(p2+p1+p0+q0)+p0+p1+q1+q2+q3+q4+q5+q6+8)>>4 (8-1149)
refMiddle=(p6+p5+p4+p3+p2+p1+2*(q2+q1+q0+p0)+q0+q1+8)>>4 (8-1150)
The variables refP and refQ are derived as follows:
refP=(pmaxFilterLengthP+pmaxFilterLengthP−1+1)>>1 (8-1151)
refQ=(qmaxFilterLengthQ+qmaxFilterLengthQ−1+1)>>1 (8-1152)
The variables fi and tCPDi are defined as follows:
f0..6={59,50,41,32,23,14,5} (8-1153)
tCPD0..6={6,5,4,3,2,1,1} (8-1154)
f0..4={58,45,32,19,6} (8-1155)
tCPD0..4={6,5,4,3,2} (8-1156)
f0..2={53,32,11} (8-1157)
tCPD0..2={6,4,2} (8-1158)
The variables gj and tCQDj are defined as follows:
g0..6={59,50,41,32,23,14,5} (8-1159)
tCQD0..6={6,5,4,3,2,1,1} (8-1160)
g0..4={58,45,32,19,6} (8-1161)
tCQD0..4={6,5,4,3,2} (8-1162)
g0..2={53,32,11} (8-1163)
tCQD0..2={6,4,2} (8-1164)
The filtered sample values pi′ and qj′ with i=0 . . . maxFilterLengthP−1 and j=0 . . . maxFilterLengthQ−1 are derived as follows:
pi′=Clip3(pi−(tC*tCPDi)>>1,pi+(tC*tCPDi)>>1,(refMiddle*fi+refP*(64−fi)+32)>>6) (8-1165)
qj′=Clip3(qj−(tC*tCQDj)>>1,qj+(tC*tCQDj)>>1,(refMiddle*gj+refQ*(64−gj)+32)>>6) (8-1166)
When one or more of the following conditions are true, the filtered sample value pi′ is substituted by the corresponding input sample value pi with i=0 . . . maxFilterLengthP−1:
Inputs to this process are:
The variable dSam is specified as follows:
If all of the following conditions are true, dSam is set equal to 1:
This process is only invoked when ChromaArrayType is not equal to 0.
Inputs to this process are:
p0′=Clip3(p0−tC,p0+tC,(p3+p2+p1+2*p0+q0+q1+q2+4)>>3) (8-1167)
p1′=Clip3(p1−tC,p1+tC,(2*p3+p2+2*p1+p0+q0+q1+4)>>3) (8-1168)
p2′=Clip3(p2−tC,p2+tC,(3*p3+2*p2+p1+p0+q0+4)>>3) (8-1169)
q0′=Clip3(q0−tC,q0+tC,(p2+p1+p0+2*q0+q1+q2+q3+4)>>3) (8-1170)
q1′=Clip3(q1−tC,q1+tC,(p1+p0+q0+2*q1+q2+2*q3+4)>>3) (8-1171)
q2′=Clip3(q2−tC,q2+tC,(p0+q0+q1+2*q2+3*q3+4)>>3) (8-1172)
Δ=Clip3(−tC,tC,((((q0−p0)<<2)+p1−q1+4)>>3)) (8-1173)
p0′=Clip1C(p0+Δ) (8-1174)
q0′=Clip1C(q0−Δ) (8-1175)
When one or more of the following conditions are true, the filtered sample value pi′ is substituted by the corresponding input sample value pi with i=0 . . . maxFilterLengthCbCr−1:
In the current VVC/VTM deblocking design, for chroma, the decision and filtering operations can differ significantly from component to component, which may make parallel processing of the chroma components difficult.
It is proposed to harmonize deblocking for all chroma components so that the same deblocking decisions and operations can be applied to the different chroma components. This enables a uniform procedure across the chroma components, benefiting parallelism and throughput.
It is noted that the chroma components may represent the Cb/Cr colour components, or the B/R colour components for the RGB format. In the following descriptions, 'Cb/Cr' is used as an example.
The detailed embodiments described below should be considered as examples to explain general concepts. These embodiments should not be interpreted in a narrow way. Furthermore, these embodiments can be combined in any manner.
The methods described below may be also applicable to other decoder motion information derivation technologies in addition to the DMVR and BIO mentioned below.
The Modified Boundary Strength
Chroma deblocking is performed when bS is equal to 2, or bS is equal to 1 when a large block boundary is detected.
If ChromaArrayType is equal to 1, the variable Qpc is determined as specified in Table 8-15 based on the index qPi derived as follows:
qPi=((QpQ+QpP+1)>>1)+((pps_cb_qp_offset+pps_cr_qp_offset+1)>>1)
or qPi=((QpQ+QpP+pps_cb_qp_offset+pps_cr_qp_offset+1)>>1)
Otherwise (ChromaArrayType is greater than 1), the variable Qpc is set equal to Min(qPi, 63).
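The two alternative qPi derivations proposed above can be sketched in C as follows; the helper names are hypothetical.

```c
/* Sketch of the two alternative uniform qPi derivations: both chroma
 * components share one index built from the Cb and Cr PPS offsets. */
int qpi_avg_then_offset(int qp_q, int qp_p, int cb_offset, int cr_offset)
{
    return ((qp_q + qp_p + 1) >> 1) + ((cb_offset + cr_offset + 1) >> 1);
}

int qpi_joint_average(int qp_q, int qp_p, int cb_offset, int cr_offset)
{
    return (qp_q + qp_p + cb_offset + cr_offset + 1) >> 1;
}
```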
The value of the variable β′ is determined as specified in Table 8-20 based on the quantization parameter Q derived as follows:
Q=Clip3(0,63,QpC+(slice_beta_offset_div2<<1))
where slice_beta_offset_div2 is the value of the syntax element slice_beta_offset_div2 for the slice that contains sample q0,0.
The variable β is derived as follows:
β=β′*(1<<(BitDepthC−8))
The decision of a long/normal/no deblocking filter for Cb or Cr follows VVC draft 5, and the uniform deblocking filter follows the table below.
In the present document, the term “video processing” may refer to video encoding, video decoding, video compression or video decompression. For example, video compression algorithms may be applied during conversion from pixel representation of a video to a corresponding bitstream representation or vice versa. The bitstream representation of a current video block may, for example, correspond to bits that are either co-located or spread in different places within the bitstream, as is defined by the syntax. For example, a macroblock may be encoded in terms of transformed and coded error residual values and also using bits in headers and other fields in the bitstream.
It will be appreciated that the disclosed methods and techniques will benefit video encoder and/or decoder embodiments incorporated within video processing devices such as smartphones, laptops, desktops, and similar devices by allowing the use of the techniques disclosed in the present document.
Some embodiments may be described using the following clause-based format.
1. A method of visual media processing, comprising: performing a conversion between a current video unit and a bitstream representation of the current video unit, wherein, during the conversion, a decision is made to selectively apply a same filtering operation on multiple color components of the current video unit, wherein the decision to apply the filtering operation is binary-valued based on satisfying at least one condition.
2. The method of clause 1, wherein the same filtering operation is applied to boundaries of the current video unit.
3. The method of clause 1, wherein the at least one condition relates to a length of a boundary of the current video unit.
4. The method of clause 1, wherein the at least one condition is associated with only one of the multiple color components.
5. The method of clause 1, wherein the at least one condition is associated with all of the multiple color components.
6. The method of clause 1, wherein the decision to apply the same filtering operation on the multiple color components of the current video unit is based on individual outcomes of decisions to apply the same filtering operation to each of the multiple color components.
7. The method of clause 1, further comprising: upon detecting that at least one color component of a video unit adjacent to the current video unit has non-zero transform coefficients, setting a boundary strength value of the current video unit to a predefined number.
8. The method of clause 1, further comprising: upon detecting that at least one color component of a video unit adjacent to the current video unit has non-zero transform coefficients and the at least one color component is not intra-coded, setting a boundary strength value of the current video unit to a predefined number.
9. The method of clause 1, further comprising: upon detecting that multiple color components of a video unit adjacent to the current video unit have non-zero transform coefficients and none of the multiple color components is intra-coded, setting a boundary strength value of the current video unit to a predefined number.
10. The method of clause 1, wherein, when the decision to apply the filtering operation is true for one of the multiple color components of the current video unit, the filtering operation is applied on each of the multiple color components of the current video unit.
11. The method of clause 1, wherein information related to one of the multiple color components of the current video unit is used to derive filtering operations related to the multiple color components of the current video unit.
12. The method of clause 1, wherein, when a boundary strength value of the current video unit is not equal to a predefined number, the filtering operation is enabled on the multiple color components of the current video unit.
13. The method of clause 1, wherein, when a boundary strength value of the current video unit is not equal to a predefined number, the filtering operation is disabled on each of the multiple color components of the current video unit.
14. The method of clause 1, wherein the at least one condition is related to a color format.
15. The method of clause 14, wherein the color format is 4:2:0 or 4:2:2.
16. The method of clause 1, wherein the multiple color components of the current video unit are chroma components.
17. The method of clause 1, wherein the multiple color components of the current video unit are in Cb, Cr format.
18. The method of clause 1, wherein the multiple color components of the current video unit are in RGB format.
19. An apparatus in a video system comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor cause the processor to implement the method in any one of clauses 1 to 18.
20. A computer program product stored on a non-transitory computer readable medium, the computer program product including program code for carrying out the method in any one of clauses 1 to 18.
In some examples, the chroma components of the video processing unit include a first chroma component and a second chroma component.
In some examples, the first chroma component is a Cb colour component and the second chroma component is a Cr colour component when the video processing unit is in a YCbCr format.
In some examples, the first chroma component is a G colour component and the second chroma component is a B colour component when the video processing unit is in a RGB format.
In some examples, the decision result indicates a decision that deblocking filter shall be performed to chroma block boundaries.
In some examples, the decision result indicates a decision of boundary strength.
In some examples, information of only one colour component is utilized to derive a decision for both the first and second chroma components.
In some examples, the decision made for the first chroma component is applied to the second chroma component.
In some examples, the decision made for the second chroma component is applied to the first chroma component.
In some examples, information of both the first and second chroma components is utilized to derive a decision for both the first and second chroma components.
In some examples, the decision is applied to both the first and second chroma components.
In some examples, the decision includes a first decision and a final decision, wherein the first decision is made for the first and second chroma components respectively, and the final decision applied to both the first and second chroma components is based on the first decision.
In some examples, when at least one of the adjacent first or second chroma component blocks has non-zero transform coefficients, the boundary strength for the first and second chroma component blocks is set to 1.
In some examples, when at least one of the adjacent first or second chroma component blocks has non-zero transform coefficients and none of the adjacent first and second chroma component blocks is intra coded, the boundary strength for the first and second chroma component blocks is set to 1.
In some examples, when at least one of the adjacent first chroma component blocks has non-zero transform coefficients and at least one of the adjacent second chroma component blocks has non-zero transform coefficients, the boundary strength for the first and second chroma component blocks is set to 1.
In some examples, when both adjacent first chroma component blocks have no non-zero transform coefficients, or both adjacent second chroma component blocks have no non-zero transform coefficients, the boundary strength for the first and second chroma component blocks is set to 0.
In some examples, when at least one of the adjacent first chroma component blocks has non-zero transform coefficients, at least one of the adjacent second chroma component blocks has non-zero transform coefficients, and none of the adjacent first and second chroma component blocks is intra coded, the boundary strength for the first and second chroma component blocks is set to 1.
In some examples, the decision result indicates a decision on whether to apply a deblocking filter for one chroma component, and when the decision indicates that a deblocking filter is applied for one chroma component, the decision is applied to all chroma components.
In some examples, the decision result indicates a decision on whether to apply a deblocking filter for one colour component, and when the decision indicates that a deblocking filter is applied for one colour component, the decision is applied to all colour components.
In some examples, the decision result indicates a decision on whether to apply a strong deblocking filter for one chroma component, and when the decision indicates that a strong deblocking filter is applied for one chroma component, the decision is applied to all chroma components.
In some examples, the decision result indicates a decision on whether to apply a strong deblocking filter for one colour component, and when the decision indicates that a strong deblocking filter is applied for one colour component, the decision is applied to all colour components.
In some examples, the chroma components of the video processing unit include a first chroma component and a second chroma component.
In some examples, the first chroma component is a Cb colour component and the second chroma component is a Cr colour component when the video processing unit is in a YCbCr format.
In some examples, the first chroma component is a G colour component and the second chroma component is a B colour component when the video processing unit is in a RGB format.
In some examples, information of only one colour component is utilized to derive the deblocking filter applied to all chroma components.
In some examples, the deblocking filter is derived from signals of the first chroma component.
In some examples, the deblocking filter is derived from signals of the second chroma component.
In some examples, the deblocking filter is derived from signals of both the first and second chroma components.
In some examples, when boundary strength for the first chroma component blocks is not equal to 0 or boundary strength for the second chroma component blocks is not equal to 0, the chroma deblocking filter process is performed on both the first and second chroma components.
In some examples, when boundary strength for the first chroma component blocks is equal to 0 or boundary strength for the second chroma component blocks is equal to 0, the chroma deblocking filter process is disallowed for both the first and second chroma components.
In some examples, when an indication of strong deblocking filter is true for one chroma component, the strong deblocking filter is applied to all chroma components, wherein the indication is StrongFilterCondition.
In some examples, when an indication of strong deblocking filter is false for one chroma component, the strong deblocking filter is disallowed for all chroma components, wherein the indication is StrongFilterCondition.
In some examples, when it is decided for one chroma component to apply normal deblocking filter and no deblocking filter is applied for the other chroma component, the normal deblocking filter is applied to both chroma components.
In some examples, when it is decided for one chroma component to apply normal deblocking filter and no deblocking filter is applied for the other chroma component, no deblocking filter is applied to both chroma components.
In some examples, when it is decided for one chroma component to apply strong or long deblocking filter and no deblocking filter is applied for the other chroma component, normal deblocking filter is applied to both chroma components.
In some examples, when it is decided for one chroma component to apply strong or long deblocking filter and no deblocking filter is applied for the other chroma component, no deblocking filter is applied to both chroma components.
In some examples, when it is decided for one chroma component to apply strong or long deblocking filter and no deblocking filter is applied for the other chroma component, the strong or long deblocking filter is applied to both chroma components.
In some examples, when it is decided for one chroma component to apply strong or long deblocking filter and normal deblocking filter is applied for the other chroma component, the strong or long deblocking filter is applied to both chroma components.
In some examples, when it is decided for one chroma component to apply strong or long deblocking filter and normal deblocking filter is applied for the other chroma component, the normal deblocking filter is applied to both chroma components.
In some examples, the chroma components of the video processing unit include a first chroma component and a second chroma component.
In some examples, the first chroma component is a Cb colour component and the second chroma component is a Cr colour component when the video processing unit is in a YCbCr format.
In some examples, the first chroma component is a G colour component and the second chroma component is a B colour component when the video processing unit is in a RGB format.
In some examples, the deblocking parameters include at least one of parameters β and tC involved in the chroma deblocking filter decision process and the chroma deblocking filter process, wherein the parameters β and tC are derived based on quantization parameters of blocks on both sides of the boundary.
In some examples, the parameters β and tC for all chroma components follow one chroma component.
In some examples, the parameters β and tC for all chroma components depend on the average of pps_cb_qp_offset and pps_cr_qp_offset, wherein pps_cb_qp_offset and pps_cr_qp_offset are syntax elements that specify the offsets to the luma quantization parameter used for deriving the chroma quantization parameters of the Cb and Cr components, respectively.
In some examples, the parameters β and tC for all chroma components depend on pps_joint_cbcr_qp_offset, wherein pps_joint_cbcr_qp_offset is a syntax element which specifies the offset to the luma quantization parameter used for deriving joint chroma quantization parameter.
In some examples, the parameters β and tC for all chroma components depend on the average of (pps_cb_qp_offset+slice_cb_qp_offset) and (pps_cr_qp_offset+slice_cr_qp_offset);
wherein pps_cb_qp_offset and pps_cr_qp_offset are syntax elements signaled in the picture parameter set, which specify the offsets to the luma quantization parameter used for deriving the chroma quantization parameters of the Cb and Cr components, respectively;
wherein slice_cb_qp_offset and slice_cr_qp_offset are syntax elements signaled in the slice header, which specify a difference to be added to the values of pps_cb_qp_offset and pps_cr_qp_offset when determining the values of the chroma quantization parameters of the Cb and Cr components, respectively.
In some examples, the parameters β and tC for all chroma components depend on slice_joint_cbcr_qp_offset, wherein slice_joint_cbcr_qp_offset is a syntax element signaled in the slice header, which specifies a difference to be added to the value of pps_joint_cbcr_qp_offset when determining the value of the joint chroma quantization parameter, and pps_joint_cbcr_qp_offset is a syntax element signaled in the picture parameter set, which specifies the offset to the luma quantization parameter used for deriving the joint chroma quantization parameter.
In some examples, whether to apply the chroma deblocking filter decision process and/or the chroma deblocking filter process depends on certain conditions.
In some examples, the condition is that the colour format of the video processing unit is 4:2:0 and/or 4:2:2.
In some examples, indications of usage of the chroma deblocking filter decision process and/or the chroma deblocking filter process are signaled in at least one of sequence, picture, slice, tile group, tile, brick and a video region-level.
In some examples, indications of usage of the chroma deblocking filter decision process and/or the chroma deblocking filter process are signaled in at least one of video parameter set (VPS), sequence parameter set (SPS) and picture parameter set (PPS), picture header, slice header, and tile group header.
In some examples, the video processing unit includes at least one of Coding Unit (CU), Prediction Unit (PU) and Transform Unit (TU).
In some examples, the conversion generates the video processing unit of video from the bitstream representation.
In some examples, the conversion generates the bitstream representation from the video processing unit of video.
The disclosed and other solutions, examples, embodiments, modules and the functional operations described in this document can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this document and their structural equivalents, or in combinations of one or more of them. The disclosed and other embodiments can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term "data processing apparatus" encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus.
A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this document can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random-access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
While this patent document contains many specifics, these should not be construed as limitations on the scope of any subject matter or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular techniques. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.
Only a few implementations and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.
Number | Date | Country | Kind |
---|---|---|---|
PCT/CN2019/085511 | May 2019 | CN | national |
PCT/CN2019/092818 | Jun 2019 | CN | national |
This application is a continuation of International Patent Application No. PCT/CN2020/088733 filed on May 6, 2020 which claims the priority to and benefits of International Patent Application No. PCT/CN2019/085511, filed on May 5, 2019 and No. PCT/CN2019/092818, filed on Jun. 25, 2019. All the aforementioned patent applications are hereby incorporated by reference in their entireties.
Number | Date | Country | |
---|---|---|---|
Parent | PCT/CN2020/088733 | May 2020 | US |
Child | 17519269 | US |