The present disclosure is directed generally to video coding and decoding technologies.
Video coding standards have evolved primarily through the development of the well-known International Telecommunication Union-Telecommunication Standardization Sector (ITU-T) and International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) standards. The ITU-T produced H.261 and H.263, ISO/IEC produced Moving Picture Experts Group (MPEG)-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video, H.264/MPEG-4 Advanced Video Coding (AVC), and H.265/High Efficiency Video Coding (HEVC) standards. Since H.262, video coding standards have been based on the hybrid video coding structure wherein temporal prediction plus transform coding are utilized. To explore future video coding technologies beyond HEVC, the Joint Video Exploration Team (JVET) was founded jointly by the Video Coding Experts Group (VCEG) and MPEG in 2015. Since then, many new methods have been adopted by JVET and put into the reference software named Joint Exploration Model (JEM). In April 2018, the JVET between VCEG (Q6/16) and ISO/IEC JTC1 SC29/WG11 (MPEG) was created to work on the next generation Versatile Video Coding (VVC) standard targeting a 50% bitrate reduction compared to HEVC.
Using the disclosed video coding, transcoding or decoding techniques, embodiments of video encoders or decoders can handle virtual boundaries of coding tree blocks to provide better compression efficiency and simpler implementations of coding or decoding tools.
In one example aspect, a method of video processing is disclosed. The method includes determining, for a conversion between a current block of a video and a bitstream representation of the video, that one or more samples of the video outside the current block are unavailable for a coding process of the conversion. The coding process comprises an adaptive loop filter (ALF) coding process. The method includes performing, based on the determining, the conversion by using padded samples for the one or more samples of the video. The padded samples are generated by checking for availability of samples in an order.
In another example aspect, a method of video processing is disclosed. The method includes making a first determination, for a conversion between a current block of a video and a bitstream representation of the video, about whether a sample in a neighboring block of the current block is in a same video region as the current block. The neighboring block is located (1) above and to the left of the current block, (2) above and to the right of the current block, (3) below and to the left of the current block, or (4) below and to the right of the current block. The method includes using the first determination to make a second determination about applicability of a coding tool that uses samples outside the current block to the conversion of the current block. The coding tool comprises an adaptive loop filter (ALF) tool that comprises an ALF classification process and/or an ALF filtering process. The method also includes performing the conversion according to the first determination and the second determination.
In another example aspect, a method of video processing is disclosed. The method includes determining, for a conversion between a current block of a video and a bitstream representation of the video, that one or more samples of the video outside the current block are unavailable for a coding process of the conversion. The method also includes performing, based on the determining, the conversion by using padded samples for the one or more samples of the video. The padded samples are generated by checking for availability of samples in an order.
In another example aspect, a method of video processing is disclosed. The method includes performing a conversion between a video comprising video pictures comprising a video unit and a bitstream representation of the video. A first set of syntax elements is included in the bitstream representation to indicate whether samples across boundaries of the video unit are accessible in a filtering process applicable to boundaries of the video unit, and the first set of syntax elements is included at different levels.
In another example aspect, a method of video processing is disclosed. The method includes performing a conversion between video blocks of a video picture and a bitstream representation thereof. Here, the video blocks are processed using logical groupings of coding tree blocks and the coding tree blocks are processed based on whether a bottom boundary of a bottom coding tree block is outside a bottom boundary of the video picture.
In another example aspect, another video processing method is disclosed. The method includes determining, based on a condition of a coding tree block of a current video block, a usage status of virtual samples during an in-loop filtering and performing a conversion between the video block and a bitstream representation of the video block consistent with the usage status of virtual samples.
In yet another example aspect, another video processing method is disclosed. The method includes determining, during a conversion between a video picture that is logically grouped into one or more video slices or video bricks, and a bitstream representation of the video picture, to disable a use of samples in another slice or brick in the adaptive loop filter process and performing the conversion consistent with the determining.
In yet another example aspect, another video processing method is disclosed. The method includes determining, during a conversion between a current video block of a video picture and a bitstream representation of the current video block, that the current video block includes samples located at a boundary of a video unit of the video picture and performing the conversion based on the determining, wherein the performing the conversion includes generating virtual samples for an in-loop filtering process using a unified method that is the same for all boundary types in the video picture.
In yet another example aspect, another method of video processing is disclosed. The method includes determining to apply, during a conversion between a current video block of a video picture and a bitstream representation thereof, one of multiple adaptive loop filter (ALF) sample selection methods available for the video picture during the conversion and performing the conversion by applying the one of multiple ALF sample selection methods.
In yet another example aspect, another method of video processing is disclosed. The method includes performing, based on a boundary rule, an in-loop filtering operation over samples of a current video block of a video picture during a conversion between the current video block and a bitstream representation of a current video block; wherein the boundary rule disables using samples that cross a virtual pipeline data unit (VPDU) of the video picture and performing the conversion using a result of the in-loop filtering operation.
In yet another example aspect, another method of video processing is disclosed. The method includes performing, based on a boundary rule, an in-loop filtering operation over samples of a current video block of a video picture during a conversion between the current video block and a bitstream representation of a current video block; wherein the boundary rule specifies to use, for locations of the current video block across a video unit boundary, samples that are generated without using padding and performing the conversion using a result of the in-loop filtering operation.
In yet another example aspect, another method of video processing is disclosed. The method includes performing, based on a boundary rule, an in-loop filtering operation over samples of a current video block of a video picture during a conversion between the current video block and a bitstream representation of a current video block; wherein the boundary rule specifies selecting, for the in-loop filtering operation, a filter having dimensions such that samples of current video block used during the in-loop filtering do not cross a boundary of a video unit of the video picture and performing the conversion using a result of the in-loop filtering operation.
In yet another example aspect, another method of video processing is disclosed. The method includes performing, based on a boundary rule, an in-loop filtering operation over samples of a current video block of a video picture during a conversion between the current video block and a bitstream representation of a current video block; wherein the boundary rule specifies selecting, for the in-loop filtering operation, clipping parameters or filter coefficients based on whether or not padded samples are needed for the in-loop filtering and performing the conversion using a result of the in-loop filtering operation.
In yet another example aspect, another method of video processing is disclosed. The method includes performing, based on a boundary rule, an in-loop filtering operation over samples of a current video block of a video picture during a conversion between the current video block and a bitstream representation of a current video block; wherein the boundary rule depends on a color component identity of the current video block and performing the conversion using a result of the in-loop filtering operation.
In yet another example aspect, a video encoding apparatus configured to perform an above-described method is disclosed.
In yet another example aspect, a video decoder that is configured to perform an above-described method is disclosed.
In yet another example aspect, a machine-readable medium is disclosed. The medium stores code which, upon execution, causes a processor to implement one or more of the above-described methods.
The above and other aspects and features of the disclosed technology are described in greater detail in the drawings, the description and the claims.
Section headings are used in the present disclosure to facilitate ease of understanding and do not limit the embodiments disclosed in a section to only that section. Furthermore, while certain embodiments are described with reference to Versatile Video Coding or other specific video codecs, the disclosed techniques are applicable to other video coding technologies also. Furthermore, while some embodiments describe video coding steps in detail, it will be understood that corresponding decoding steps that undo the coding will be implemented by a decoder. Furthermore, the term video processing encompasses video coding or compression, video decoding or decompression, and video transcoding in which video pixels are converted from one compressed format into another compressed format or represented at a different compressed bitrate.
This disclosure is related to video coding technologies. Specifically, it is related to picture/slice/tile/brick boundary and virtual boundary coding, especially for the non-linear adaptive loop filter. It may be applied to an existing video coding standard such as HEVC, or to the Versatile Video Coding (VVC) standard to be finalized. It may also be applicable to future video coding standards or video codecs.
Video coding standards have evolved primarily through the development of the well-known ITU-T and ISO/IEC standards. The ITU-T produced H.261 and H.263, ISO/IEC produced MPEG-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video, H.264/MPEG-4 Advanced Video Coding (AVC), and H.265/HEVC standards. Since H.262, video coding standards have been based on the hybrid video coding structure wherein temporal prediction plus transform coding are utilized. To explore future video coding technologies beyond HEVC, the Joint Video Exploration Team (JVET) was founded jointly by VCEG and MPEG in 2015. Since then, many new methods have been adopted by JVET and put into the reference software named Joint Exploration Model (JEM). In April 2018, the JVET between VCEG (Q6/16) and ISO/IEC JTC1 SC29/WG11 (MPEG) was created to work on the VVC standard targeting a 50% bitrate reduction compared to HEVC.
Color space, also known as the color model (or color system), is an abstract mathematical model which simply describes the range of colors as tuples of numbers, typically as 3 or 4 values or color components (e.g., red green blue (RGB)). Basically speaking, color space is an elaboration of the coordinate system and sub-space.
For video compression, the most frequently used color spaces are YCbCr and RGB.
YCbCr, Y′CbCr, or Y Pb/Cb Pr/Cr, also written as YCBCR or Y′CBCR, is a family of color spaces used as a part of the color image pipeline in video and digital photography systems. Y′ is the luma component and CB and CR are the blue-difference and red-difference chroma components. Y′ (with prime) is distinguished from Y, which is luminance, meaning that light intensity is nonlinearly encoded based on gamma corrected RGB primaries.
Chroma subsampling is the practice of encoding images by implementing less resolution for chroma information than for luma information, taking advantage of the human visual system's lower acuity for color differences than for luminance.
In 4:4:4, each of the three Y′CbCr components has the same sample rate; thus there is no chroma subsampling. This scheme is sometimes used in high-end film scanners and cinematic post production.
In 4:2:2, the two chroma components are sampled at half the sample rate of luma: the horizontal chroma resolution is halved. This reduces the bandwidth of an uncompressed video signal by one-third with little to no visual difference.
In 4:2:0, the horizontal sampling is doubled compared to 4:1:1, but as the Cb and Cr channels are only sampled on each alternate line in this scheme, the vertical resolution is halved. The data rate is thus the same. Cb and Cr are each subsampled at a factor of 2 both horizontally and vertically. There are three variants of 4:2:0 schemes, having different horizontal and vertical siting.
In MPEG-2, Cb and Cr are cosited horizontally. Cb and Cr are sited between pixels in the vertical direction (sited interstitially).
In JPEG/JFIF, H.261, and MPEG-1, Cb and Cr are sited interstitially, halfway between alternate luma samples.
In 4:2:0 DV, Cb and Cr are co-sited in the horizontal direction. In the vertical direction, they are co-sited on alternating lines.
A picture is divided into one or more tile rows and one or more tile columns. A tile is a sequence of CTUs that covers a rectangular region of a picture.
A tile is divided into one or more bricks, each of which consists of a number of CTU rows within the tile.
A tile that is not partitioned into multiple bricks is also referred to as a brick. However, a brick that is a true subset of a tile is not referred to as a tile.
A slice either contains a number of tiles of a picture or a number of bricks of a tile.
Two modes of slices are supported, namely the raster-scan slice mode and the rectangular slice mode. In the raster-scan slice mode, a slice contains a sequence of tiles in a tile raster scan of a picture. In the rectangular slice mode, a slice contains a number of bricks of a picture that collectively form a rectangular region of the picture. The bricks within a rectangular slice are in the order of brick raster scan of the slice.
In VVC, the CTU size, signaled in the sequence parameter set (SPS) by the syntax element log2_ctu_size_minus2, could be as small as 4×4.
sps_decoding_parameter_set_id
sps_video_parameter_set_id
sps_max_sub_layers_minus1
sps_reserved_zero_5bits
gra_enabled_flag
sps_seq_parameter_set_id
chroma_format_idc
separate_colour_plane_flag
pic_width_in_luma_samples
pic_height_in_luma_samples
conformance_window_flag
conf_win_left_offset
conf_win_right_offset
conf_win_top_offset
conf_win_bottom_offset
bit_depth_luma_minus8
bit_depth_chroma_minus8
log2_max_pic_order_cnt_lsb_minus4
sps_sub_layer_ordering_info_present_flag
sps_max_dec_pic_buffering_minus1[ i ]
sps_max_num_reorder_pics[ i ]
sps_max_latency_increase_plus1[ i ]
}
long_term_ref_pics_flag
sps_idr_rpl_present_flag
rpl1_same_as_rpl0_flag
qtbtt_dual_tree_intra_flag
log2_ctu_size_minus2
log2_min_luma_coding_block_size_minus2
partition_constraints_override_enabled_flag
sps_log2_diff_min_qt_min_cb_intra_slice_luma
sps_log2_diff_min_qt_min_cb_inter_slice
sps_max_mtt_hierarchy_depth_inter_slice
sps_max_mtt_hierarchy_depth_intra_slice_luma
sps_log2_diff_max_bt_min_qt_intra_slice_luma
sps_log2_diff_max_tt_min_qt_intra_slice_luma
sps_log2_diff_max_bt_min_qt_inter_slice
sps_log2_diff_max_tt_min_qt_inter_slice
sps_log2_diff_min_qt_min_cb_intra_slice_chroma
sps_max_mtt_hierarchy_depth_intra_slice_chroma
log2_ctu_size_minus2 plus 2 specifies the luma coding tree block size of each CTU.
log2_min_luma_coding_block_size_minus2 plus 2 specifies the minimum luma coding block size.
The variables CtbLog2SizeY, CtbSizeY, MinCbLog2SizeY, MinCbSizeY, MinTbLog2SizeY, MaxTbLog2SizeY, MinTbSizeY, MaxTbSizeY, PicWidthInCtbsY, PicHeightInCtbsY, PicSizeInCtbsY, PicWidthInMinCbsY, PicHeightInMinCbsY, PicSizeInMinCbsY, PicSizeInSamplesY, PicWidthInSamplesC and PicHeightInSamplesC are derived as follows:
CtbLog2SizeY=log2_ctu_size_minus2+2 (7-9)
CtbSizeY=1<<CtbLog2SizeY (7-10)
MinCbLog2SizeY=log2_min_luma_coding_block_size_minus2+2 (7-11)
MinCbSizeY=1<<MinCbLog2SizeY (7-12)
MinTbLog2SizeY=2 (7-13)
MaxTbLog2SizeY=6 (7-14)
MinCbSizeY=1<<MinTbLog2SizeY (7-15)
MaxTbSizeY=1<<MaxTbLog2SizeY (7-16)
PicWidthInCtbsY=Ceil(pic_width_in_luma_samples÷CtbSizeY) (7-17)
PicHeightInCtbsY=Ceil(pic_height_in_luma_samples÷CtbSizeY) (7-18)
PicSizeInCtbsY=PicWidthInCtbsY*PicHeightInCtbsY (7-19)
PicWidthInMinCbsY=pic_width_in_luma_samples/MinCbSizeY (7-20)
PicHeightInMinCbsY=pic_height_in_luma_samples/MinCbSizeY (7-21)
PicSizeInMinCbsY=PicWidthInMinCbsY*PicHeightInMinCbsY (7-22)
PicSizeInSamplesY=pic_width_in_luma_samples*pic_height_in_luma_samples (7-23)
PicWidthInSamplesC=pic_width_in_luma_samples/SubWidthC (7-24)
PicHeightInSamplesC=pic_height_in_luma_samples/SubHeightC (7-25)
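As an illustrative, non-normative C++ sketch of the derivation above (the input values are hypothetical and not taken from any real bitstream), the derived variables may be computed as follows:

#include <cstdio>

int main() {
    // Hypothetical signaled values.
    int log2_ctu_size_minus2 = 5;                   // CTU size = 1 << (5 + 2) = 128
    int log2_min_luma_coding_block_size_minus2 = 0; // MinCbSizeY = 4
    int pic_width_in_luma_samples = 1920;
    int pic_height_in_luma_samples = 1080;
    int SubWidthC = 2, SubHeightC = 2;              // 4:2:0 chroma format

    int CtbLog2SizeY = log2_ctu_size_minus2 + 2;                        // (7-9)
    int CtbSizeY = 1 << CtbLog2SizeY;                                   // (7-10)
    int MinCbLog2SizeY = log2_min_luma_coding_block_size_minus2 + 2;    // (7-11)
    int MinCbSizeY = 1 << MinCbLog2SizeY;                               // (7-12)
    // Ceiling division: the picture size need not be a multiple of the CTB size.
    int PicWidthInCtbsY  = (pic_width_in_luma_samples  + CtbSizeY - 1) / CtbSizeY;  // (7-17)
    int PicHeightInCtbsY = (pic_height_in_luma_samples + CtbSizeY - 1) / CtbSizeY;  // (7-18)
    int PicSizeInCtbsY = PicWidthInCtbsY * PicHeightInCtbsY;            // (7-19)
    int PicWidthInSamplesC  = pic_width_in_luma_samples  / SubWidthC;   // (7-24)
    int PicHeightInSamplesC = pic_height_in_luma_samples / SubHeightC;  // (7-25)

    std::printf("CtbSizeY=%d MinCbSizeY=%d CTBs=%dx%d (%d total) chroma=%dx%d\n",
                CtbSizeY, MinCbSizeY, PicWidthInCtbsY, PicHeightInCtbsY,
                PicSizeInCtbsY, PicWidthInSamplesC, PicHeightInSamplesC);
    return 0;
}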
Suppose the CTB/largest coding unit (LCU) size is indicated by M×N (typically M is equal to N, as defined in HEVC/VVC), and for a CTB located at a picture (or tile, or slice, or other kinds of boundaries; the picture border is taken as an example) border, K×L samples are within the picture border, wherein either K&lt;M or L&lt;N. For those CTBs as depicted in
The input of the deblocking filter (DB) is the reconstructed samples before the in-loop filters.
The vertical edges in a picture are filtered first. Then the horizontal edges in a picture are filtered with samples modified by the vertical edge filtering process as input. The vertical and horizontal edges in the CTBs of each CTU are processed separately on a coding unit basis. The vertical edges of the coding blocks in a coding unit are filtered starting with the edge on the left-hand side of the coding blocks proceeding through the edges towards the right-hand side of the coding blocks in their geometrical order. The horizontal edges of the coding blocks in a coding unit are filtered starting with the edge on the top of the coding blocks proceeding through the edges towards the bottom of the coding blocks in their geometrical order.
Filtering is applied to 8×8 block boundaries. In addition, the boundary must be a transform block boundary or a coding subblock boundary (e.g., due to usage of affine motion prediction or advanced temporal motion vector prediction (ATMVP)). For those which are not such boundaries, the filter is disabled.
For a transform block boundary/coding subblock boundary, if it is located in the 8×8 grid, it may be filtered, and the setting of bS[xDi][yDj] (wherein [xDi][yDj] denotes the coordinate) for this edge is defined in Table 1 and Table 2, respectively.
The deblocking decision process is described in this sub-section.
Wider-stronger luma filters are used only if Condition 1, Condition 2 and Condition 3 are all TRUE.
Condition 1 is the "large block condition". This condition detects whether the samples at the P-side and Q-side belong to large blocks, which are represented by the variables bSidePisLargeBlk and bSideQisLargeBlk, respectively. The bSidePisLargeBlk and bSideQisLargeBlk are defined as follows.
Based on bSidePisLargeBlk and bSideQisLargeBlk, the condition 1 is defined as follows.
Condition1=(bSidePisLargeBlk∥bSideQisLargeBlk)? TRUE: FALSE
Next, if Condition 1 is true, the condition 2 will be further checked. First, the following variables are derived:
where d=dp0+dq0+dp3+dq3.
If Condition1 and Condition2 are valid, whether any of the blocks uses sub-blocks is further checked:
Finally, if both the Condition 1 and Condition 2 are valid, the proposed deblocking method will check the condition 3 (the large block strong filter condition), which is defined as follows.
In the Condition3 StrongFilterCondition, the following variables are derived:
dpq is derived as in HEVC.
sp3=Abs(p3−p0), derived as in HEVC
if (p side is greater than or equal to 32)
As in HEVC, StrongFilterCondition=(dpq is less than (β>>2), sp3+sq3 is less than (3*β>>5), and Abs(p0−q0) is less than (5*tC+1)>>1)? TRUE:FALSE.
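As an illustrative C++ sketch (not the specification text), the large-block strong filter condition quoted above may be checked as follows, assuming dpq, sp3, sq3, β and tC have already been derived as in HEVC:

#include <cstdlib>

// Illustrative check of the large-block strong filter condition (Condition 3).
bool strongFilterCondition(int dpq, int sp3, int sq3, int beta, int tC, int p0, int q0) {
    return (dpq < (beta >> 2)) &&
           ((sp3 + sq3) < ((3 * beta) >> 5)) &&
           (std::abs(p0 - q0) < ((5 * tC + 1) >> 1));
}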
2.4.4 Stronger Deblocking Filter for Luma (Designed for Larger Blocks)
A bilinear filter is used when samples at either side of a boundary belong to a large block. A sample is defined as belonging to a large block when the width&gt;=32 for a vertical edge, or when the height&gt;=32 for a horizontal edge.
The bilinear filter is listed below.
Block boundary samples pi for i=0 to Sp−1 and qj for j=0 to Sq−1 (pi and qj are the i-th sample within a row for filtering a vertical edge, or the i-th sample within a column for filtering a horizontal edge, in the HEVC deblocking described above) are then replaced by linear interpolation as follows:
pi′=(fi*Middles,t+(64−fi)*Ps+32)&gt;&gt;6, clipped to pi±tcPDi
qj′=(gj*Middles,t+(64−gj)*Qs+32)&gt;&gt;6, clipped to qj±tcPDj
where the tcPDi and tcPDj terms are position dependent clippings described in Section 2.4.7, and gj, fi, Middles,t, Ps, and Qs are given below:
The chroma strong filters are used on both sides of the block boundary. Here, the chroma filter is selected when both sides of the chroma edge are greater than or equal to 8 (in chroma sample positions), and the following decision with three conditions is satisfied: the first one is for the decision of boundary strength as well as large block. The proposed filter can be applied when the block width or height which orthogonally crosses the block edge is equal to or larger than 8 in the chroma sample domain. The second and third ones are basically the same as for the HEVC luma deblocking decision, which are the on/off decision and the strong filter decision, respectively.
In the first decision, boundary strength (bS) is modified for chroma filtering and the conditions are checked sequentially. If a condition is satisfied, then the remaining conditions with lower priorities are skipped.
Chroma deblocking is performed when bS is equal to 2, or bS is equal to 1 when a large block boundary is detected.
The second and third conditions are basically the same as the HEVC luma strong filter decision, as follows.
In the second condition:
The second condition will be TRUE when d is less than β.
In the third condition StrongFilterCondition is derived as follows:
As in HEVC design, StrongFilterCondition=(dpq is less than (β>>2), sp3+sq3 is less than (β>>3), and Abs(p0−q0) is less than (5*tC+1)>>1)
The following strong deblocking filter for chroma is defined:
p2′=(3*p3+2*p2+p1+p0+q0+4)>>3
p1′=(2*p3+p2+2*p1+p0+q0+q1+4)>>3
p0′=(p3+p2+p1+2*p0+q0+q1+q2+4)>>3
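The three equations above translate directly into integer arithmetic; the following C++ sketch is illustrative, assuming p[0..3] and q[0..2] hold the chroma samples on the two sides of the edge:

// Illustrative strong chroma deblocking of the P side, following the equations above.
void chromaStrongFilterPSide(const int p[4], const int q[3], int pOut[3]) {
    pOut[2] = (3 * p[3] + 2 * p[2] + p[1] + p[0] + q[0] + 4) >> 3;           // p2'
    pOut[1] = (2 * p[3] + p[2] + 2 * p[1] + p[0] + q[0] + q[1] + 4) >> 3;    // p1'
    pOut[0] = (p[3] + p[2] + p[1] + 2 * p[0] + q[0] + q[1] + q[2] + 4) >> 3; // p0'
}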
The proposed chroma filter performs deblocking on a 4×4 chroma sample grid.
The position dependent clipping tcPD is applied to the output samples of the luma filtering process involving strong and long filters that modify 7, 5 and 3 samples at the boundary. Assuming a quantization error distribution, it is proposed to increase the clipping value for samples which are expected to have higher quantization noise, and thus expected to have a higher deviation of the reconstructed sample value from the true sample value.
For each P or Q boundary filtered with an asymmetrical filter, depending on the result of the decision-making process in section 2.4.2, a position dependent threshold table is selected from two tables (e.g., Tc7 and Tc3 tabulated below) that are provided to the decoder as side information:
Tc7={6, 5, 4, 3, 2, 1, 1}; Tc3={6, 4, 2};
tcPD=(Sp==3)? Tc3:Tc7;
tcQD=(Sq==3)? Tc3:Tc7;
For the P or Q boundaries being filtered with a short symmetrical filter, position dependent threshold of lower magnitude is applied:
Tc3={3, 2, 1};
After defining the threshold, the filtered p′i and q′j sample values are clipped according to tcP and tcQ clipping values:
p″i=Clip3(p′i+tcPi, p′i−tcPi, p′i);
q″j=Clip3(q′j+tcQj, q′j−tcQj, q′j);
where p′i and q′j are the filtered sample values, p″i and q″j are the output sample values after the clipping, and tcPi, tcQj are clipping thresholds that are derived from the VVC tc parameter and tcPD and tcQD. The function Clip3 is a clipping function as specified in VVC.
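As an illustrative C++ sketch of the clipping machinery described above (the threshold selection follows the Tc7/Tc3 tables quoted earlier; the Clip3 signature mirrors the VVC convention):

#include <algorithm>
#include <vector>

// Clip3(x, y, z): clip z to the range bounded by x and y, as used in the VVC text.
int Clip3(int x, int y, int z) {
    int lo = std::min(x, y), hi = std::max(x, y);
    return std::min(std::max(z, lo), hi);
}

// Selection of the position dependent threshold table for the P side,
// following "tcPD = (Sp == 3) ? Tc3 : Tc7" quoted above.
std::vector<int> selectTcPD(int Sp) {
    const std::vector<int> Tc7 = {6, 5, 4, 3, 2, 1, 1};
    const std::vector<int> Tc3 = {6, 4, 2};
    return (Sp == 3) ? Tc3 : Tc7;
}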
To enable parallel-friendly deblocking using both long filters and sub-block deblocking, the long filters are restricted to modify at most 5 samples on a side that uses sub-block deblocking (AFFINE or ATMVP or DMVR), as shown in the luma control for long filters. Additionally, the sub-block deblocking is adjusted such that sub-block boundaries on an 8×8 grid that are close to a coding unit (CU) or an implicit transform unit (TU) boundary are restricted to modify at most two samples on each side.
The following applies to sub-block boundaries that are not aligned with the CU boundary.
Here, edge equal to 0 corresponds to a CU boundary, and edge equal to 2 or equal to orthogonalLength−2 corresponds to a sub-block boundary 8 samples from a CU boundary, etc. Implicit TU is true if an implicit split of the TU is used.
The input of SAO is the reconstructed samples after DB. The concept of SAO is to reduce mean sample distortion of a region by first classifying the region samples into multiple categories with a selected classifier, obtaining an offset for each category, and then adding the offset to each sample of the category, where the classifier index and the offsets of the region are coded in the bitstream. In HEVC and VVC, the region (the unit for SAO parameters signaling) is defined to be a CTU.
Two SAO types that can satisfy the requirements of low complexity are adopted in HEVC. Those two types are edge offset (EO) and band offset (BO), which are discussed in further detail below. An index of an SAO type is coded (which is in the range of [0, 2]). For EO, the sample classification is based on comparison between current samples and neighboring samples according to one dimensional (1-D) directional patterns: horizontal, vertical, 135° diagonal, and 45° diagonal.
For a given EO class, each sample inside the CTB is classified into one of five categories. The current sample value, labeled as “c,” is compared with its two neighbors along the selected 1-D pattern. The classification rules for each sample are summarized in Table I. Categories 1 and 4 are associated with a local valley and a local peak along the selected 1-D pattern, respectively. Categories 2 and 3 are associated with concave and convex corners along the selected 1-D pattern, respectively. If the current sample does not belong to EO categories 1-4, then it is category 0 and SAO is not applied.
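The category decision for one sample can be sketched as follows (C++); the rules mirror the valley/peak/corner classification summarized above, with a and b being the two neighbors along the selected 1-D pattern:

// Edge offset (EO) category of sample c with neighbors a and b along the
// selected 1-D pattern: 1 = local valley, 4 = local peak, 2/3 = corners,
// 0 = none of the above (SAO not applied).
int saoEdgeOffsetCategory(int a, int c, int b) {
    if (c < a && c < b) return 1;                          // local valley
    if ((c < a && c == b) || (c == a && c < b)) return 2;  // concave corner
    if ((c > a && c == b) || (c == a && c > b)) return 3;  // convex corner
    if (c > a && c > b) return 4;                          // local peak
    return 0;
}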
The input of ALF is the reconstructed samples after DB and SAO. The sample classification and filtering process are based on the reconstructed samples after DB and SAO.
In some embodiments, a geometry transformation-based adaptive loop filter (GALF) with block-based filter adaption is applied. For the luma component, one among 25 filters is selected for each 2×2 block, based on the direction and activity of local gradients.
In some embodiments, up to three diamond filter shapes (as shown in
Each 2×2 block is categorized into one out of 25 classes. The classification index C is derived based on its directionality D and a quantized value of activity A, as follows:
C=5D+Â. (1)
To calculate D and A, gradients of the horizontal, vertical and two diagonal directions are first calculated using 1-D Laplacians:
Indices i and j refer to the coordinates of the upper left sample in the 2×2 block and R(i, j) indicates a reconstructed sample at coordinate (i,j).
Then the maximum and minimum values of the gradients of the horizontal and vertical directions are set as:
and the maximum and minimum values of the gradient of two diagonal directions are set as:
To derive the value of the directionality D, these values are compared against each other and with two thresholds t1 and t2:
Step 1. If both
are true, D is set to 0.
Step 2. If
continue from Step 3; otherwise continue from Step 4.
Step 3. If
D is set to 2; otherwise D is set to 1.
Step 4. If
D is set to 3.
The activity value A is calculated as:
A is further quantized to the range of 0 to 4, inclusively, and the quantized value is denoted as Â.
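A non-normative C++ sketch of the resulting class index derivation is given below. Since the exact inequalities of Steps 1-4 are not reproduced above, the branching here follows the commonly described GALF classification; gh, gv, gd0, gd1 denote the 1-D Laplacian gradient sums and t1, t2 the thresholds:

#include <algorithm>

// Illustrative GALF 2x2-block classification: C = 5*D + A_hat (equation (1)).
int galfClassIndex(int gh, int gv, int gd0, int gd1, int t1, int t2, int aHat /* 0..4 */) {
    int hvMax = std::max(gh, gv), hvMin = std::min(gh, gv);
    int dMax = std::max(gd0, gd1), dMin = std::min(gd0, gd1);
    int D;
    if (hvMax <= t1 * hvMin && dMax <= t1 * dMin) {
        D = 0;                                  // Step 1: no dominant direction
    } else if (hvMax * dMin > dMax * hvMin) {   // Step 2: horizontal/vertical dominates
        D = (hvMax > t2 * hvMin) ? 2 : 1;       // Step 3
    } else {
        D = (dMax > t2 * dMin) ? 4 : 3;         // Step 4
    }
    return 5 * D + aHat;
}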
For both chroma components in a picture, no classification method is applied, e.g., a single set of ALF coefficients is applied for each chroma component.
Before filtering each 2×2 block, geometric transformations such as rotation or diagonal and vertical flipping are applied to the filter coefficients f(k, l), which is associated with the coordinate (k, l), depending on gradient values calculated for that block. This is equivalent to applying these transformations to the samples in the filter support region. The idea is to make different blocks to which ALF is applied more similar by aligning their directionality.
Three geometric transformations, including diagonal, vertical flip and rotation are introduced:
where K is the size of the filter and 0≤k, l≤K−1 are coefficient coordinates, such that location (0,0) is at the upper left corner and location (K−1, K−1) is at the lower right corner. The transformations are applied to the filter coefficients f(k, l) depending on gradient values calculated for that block. The relationship between the transformation and the four gradients of the four directions is summarized in Table 4.
In some embodiments, GALF filter parameters are signalled for the first CTU, e.g., after the slice header and before the SAO parameters of the first CTU. Up to 25 sets of luma filter coefficients could be signalled. To reduce bits overhead, filter coefficients of different classification can be merged. Also, the GALF coefficients of reference pictures are stored and allowed to be reused as GALF coefficients of a current picture. The current picture may choose to use GALF coefficients stored for the reference pictures and bypass the GALF coefficients signalling. In this case, only an index to one of the reference pictures is signalled, and the stored GALF coefficients of the indicated reference picture are inherited for the current picture.
To support GALF temporal prediction, a candidate list of GALF filter sets is maintained. At the beginning of decoding a new sequence, the candidate list is empty. After decoding one picture, the corresponding set of filters may be added to the candidate list. Once the size of the candidate list reaches the maximum allowed value (e.g., 6), a new set of filters overwrites the oldest set in decoding order, that is, the first-in-first-out (FIFO) rule is applied to update the candidate list. To avoid duplications, a set can only be added to the list when the corresponding picture does not use GALF temporal prediction. To support temporal scalability, there are multiple candidate lists of filter sets, and each candidate list is associated with a temporal layer. More specifically, each array assigned by temporal layer index (TempIdx) may be composed of filter sets of previously decoded pictures with TempIdx equal to or lower than that index. For example, the k-th array is assigned to be associated with TempIdx equal to k, and it only contains filter sets from pictures with TempIdx smaller than or equal to k. After coding a certain picture, the filter sets associated with the picture will be used to update those arrays associated with equal or higher TempIdx.
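An illustrative C++ sketch of this candidate list maintenance follows (FilterSet and the list type are hypothetical data structures, not the reference software implementation):

#include <cstddef>
#include <deque>
#include <vector>

struct FilterSet { std::vector<int> coeffs; };  // hypothetical container for one filter set

// FIFO update of the GALF temporal prediction candidate list, as described above:
// a set is added only if its picture did not itself use GALF temporal prediction,
// and the oldest set is dropped once the maximum size (e.g., 6) is reached.
void updateCandidateList(std::deque<FilterSet>& candList, const FilterSet& decoded,
                         bool pictureUsedTemporalPrediction, std::size_t maxSize = 6) {
    if (pictureUsedTemporalPrediction)
        return;                       // avoid duplications
    if (candList.size() >= maxSize)
        candList.pop_front();         // overwrite the oldest set (FIFO)
    candList.push_back(decoded);
}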
Temporal prediction of GALF coefficients is used for inter coded frames to minimize signalling overhead. For intra frames, temporal prediction is not available, and a set of 16 fixed filters is assigned to each class. To indicate the usage of the fixed filter, a flag for each class is signalled and, if required, the index of the chosen fixed filter. Even when the fixed filter is selected for a given class, the coefficients of the adaptive filter f(k, l) can still be sent for this class, in which case the coefficients of the filter which will be applied to the reconstructed image are the sum of both sets of coefficients.
The filtering process of the luma component can be controlled at the CU level. A flag is signalled to indicate whether GALF is applied to the luma component of a CU. For the chroma component, whether GALF is applied or not is indicated at the picture level only.
At the decoder side, when GALF is enabled for a block, each sample R(i, j) within the block is filtered, resulting in sample value R′(i, j) as shown below, where L denotes the filter length, fm,n represents a filter coefficient, and f(k, l) denotes the decoded filter coefficients.
In some embodiments, the filtering process of the Adaptive Loop Filter is performed as follows:
O(x, y)=Σ(i,j)w(i, j)·I(x+i,y+j), (11)
where samples I(x+i, y+j) are input samples, O(x, y) is the filtered output sample (e.g., the filter result), and w(i, j) denotes the filter coefficients. In practice, in VTM4.0 it is implemented using integer arithmetic for fixed point precision computations:
where L denotes the filter length, and where w(i, j) are the filter coefficients in fixed point precision.
The current design of GALF in VVC has the following major changes:
(1) The adaptive filter shape is removed. Only the 7×7 filter shape is allowed for the luma component and the 5×5 filter shape is allowed for the chroma components.
(2) Signaling of ALF parameters is moved from the slice/picture level to the CTU level.
(3) Calculation of the class index is performed at the 4×4 level instead of 2×2. In addition, in some embodiments, a sub-sampled Laplacian calculation method for ALF classification is utilized. More specifically, there is no need to calculate the horizontal/vertical/45-degree diagonal/135-degree diagonal gradients for each sample within one block. Instead, 1:2 subsampling is utilized.
Equation (11) can be reformulated, without coding efficiency impact, in the following expression:
O(x, y)=I(x, y)+Σ(i,j)≠(0,0)w(i, j)·(I(x+i, y+j)−I(x, y)), (13)
where w(i, j) are the same filter coefficients as in equation (11) [except w(0, 0), which is equal to 1 in equation (13) while it is equal to 1−Σ(i,j)≠(0,0)w(i, j) in equation (11)].
Using the above filter formula (13), VVC introduces non-linearity to make ALF more efficient by using a simple clipping function to reduce the impact of neighboring sample values (I(x+i, y+j)) when they are too different from the current sample value (I(x, y)) being filtered.
More specifically, the ALF filter is modified as follows:
O′(x, y)=I(x, y)+Σ(i,j)≠(0,0)w(i, j)·K(I(x+i, y+j)−I(x, y), k(i, j)), (14)
where K(d, b)=min(b, max(−b, d)) is the clipping function, and k(i, j) are clipping parameters, which depend on the (i, j) filter coefficient. The encoder performs the optimization to find the best k(i, j).
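An illustrative C++ sketch of the clipping function and of equation (14) follows; the tap list, clipping values and sample accessor are hypothetical, and the fixed-point rounding/normalization of a real implementation is omitted:

#include <algorithm>
#include <functional>
#include <vector>

// Clipping function from equation (14): K(d, b) = min(b, max(-b, d)).
int K(int d, int b) { return std::min(b, std::max(-b, d)); }

struct Tap { int di, dj, w, k; };  // offset (di, dj), coefficient w(i, j), clip value k(i, j)

// Non-linear ALF of one sample per equation (14); I(x, y) returns an input sample.
int filterSampleNonLinearALF(const std::function<int(int, int)>& I, int x, int y,
                             const std::vector<Tap>& taps) {
    int sum = 0;
    for (const Tap& t : taps)
        sum += t.w * K(I(x + t.di, y + t.dj) - I(x, y), t.k);
    return I(x, y) + sum;  // rounding/shift of a fixed-point implementation omitted
}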
In some embodiments, the clipping parameters k(i, j) are specified for each ALF filter; one clipping value is signaled per filter coefficient. This means that up to 12 clipping values can be signalled in the bitstream per luma filter and up to 6 clipping values for the chroma filter.
In order to limit the signaling cost and the encoder complexity, only 4 fixed values which are the same for INTER and INTRA slices are used.
Because the variance of the local differences is often higher for Luma than for Chroma, two different sets for the Luma and Chroma filters are applied. The maximum sample value (here 1024 for 10 bits bit-depth) in each set is also introduced, so that clipping can be disabled if it is not necessary.
The sets of clipping values used in some embodiments are provided in the Table 5. The 4 values have been selected by roughly equally splitting, in the logarithmic domain, the full range of the sample values (coded on 10 bits) for Luma, and the range from 4 to 1024 for Chroma.
More precisely, the luma table of clipping values has been obtained by the following formula:
Similarly, the chroma tables of clipping values are obtained according to the following formula:
The selected clipping values are coded in the “alf_data” syntax element by using a Golomb encoding scheme corresponding to the index of the clipping value in the above Table 5. This encoding scheme is the same as the encoding scheme for the filter index.
In hardware and embedded software, picture-based processing is practically unacceptable due to its high picture buffer requirement. Using on-chip picture buffers is very expensive and using off-chip picture buffers significantly increases external memory access, power consumption, and data access latency. Therefore, DF, SAO, and ALF will be changed from picture-based to LCU-based decoding in real products. When LCU-based processing is used for DF, SAO, and ALF, the entire decoding process can be done LCU by LCU in a raster scan with an LCU-pipelining fashion for parallel processing of multiple LCUs. In this case, line buffers are required for DF, SAO, and ALF because processing one LCU row requires pixels from the above LCU row. If off-chip line buffers (e.g., dynamic random-access memory (DRAM)) are used, the external memory bandwidth and power consumption will be increased; if on-chip line buffers (e.g., static random-access memory (SRAM)) are used, the chip area will be increased. Therefore, although line buffers are already much smaller than picture buffers, it is still desirable to reduce line buffers.
In some embodiments, as shown in
Therefore, the block classification of the 4×4 block overlapping with lines G, H, I, J needs SAO filtered samples below the virtual boundary. In addition, the SAO filtered samples of lines D, E, F are required for ALF classification. Moreover, the ALF filtering of line G needs three SAO filtered lines D, E, F from the above lines. Therefore, the total line buffer requirement is as follows:
Therefore, the total number of luma lines required is 7+4+0.25=11.25.
Similarly, the line buffer requirement of the Chroma component is illustrated in
In order to eliminate the line buffer requirements of SAO and ALF, the concept of virtual boundary (VB) is introduced in the latest VVC. As shown in
Similarly, as depicted in
As depicted in
The padding method used for ALF virtual boundaries may be denoted as ‘Two-side Padding’ wherein if one sample located at (i, j) (e.g., the P0A with dash line in
Similarly, as depicted in
When the non-linear ALF is disabled for a CTB, e.g., the clipping parameters k(i, j) in equation (14) are equal to (1&lt;&lt;Bitdepth), the padding process could be replaced by modifying the filter coefficients (a.k.a., modified-coefficient based ALF (MALF)). For example, when filtering samples in line L/I, the filter coefficient c5 is modified to c5′, and in this case there is no need to copy the luma samples from the solid P0A to the dashed P0A and from the solid P3B to the dashed P3B in FIG. 18A. The two-side padding and MALF will then generate the same results, assuming the current sample to be filtered is located at (x, y).
c5·K(I(x−1, y−1)−I(x, y), k(−1, −1))+c1·K(I(x−1, y−2)−I(x, y), k(−1, −2))=(c5+c1)·K(I(x−1, y−1)−I(x, y), k(−1, −1)) (17)
since K(d, b)=d and I(x−1, y−1)=I(x−1, y−2) due to padding.
However, when the non-linear ALF is enabled, MALF and two-side padding may generate different filtered results, since the non-linear parameters are associated with each coefficient, such as for filter coefficients c5 and c1, the clipping parameters are different. Therefore,
c5·K(I(x−1, y−1)−I(x, y), k(−1, −1))+c1·K(I(x−1, y−2)−I(x, y), k(−1, −2))!=(c5+c1)·K(I(x−1, y−1)−I(x, y), k(−1, −1)) (18)
since K(d, b) != d in general, even though I(x−1, y−1)=I(x−1, y−2) due to padding.
Newly added parts are indicated in bold italicized underlined text. The deleted parts are indicated using.
pps_pic_parameter_set_id
pps_seq_parameter_set_id
output_flag_present_flag
single_tile_in_pic_flag
uniform_tile_spacing_flag
tile_cols_width_minus1
tile_rows_height_minus1
num_tile_columns_minus1
num_tile_rows_minus1
tile_column_width_minus1[ i ]
tile_row_height_minus1[ i ]
}
brick_splitting_present_flag
brick_split_flag[ i ]
uniform_brick_spacing_flag[ i ]
brick_height_minus1[ i ]
num_brick_rows_minus1[ i ]
brick_row_height_minus1[ i ][ j ]
single_brick_per_slice_flag
rect_slice_flag
num_slices_in_pic_minus1
top_left_brick_idx[ i ]
bottom_right_brick_idx_delta[ i ]
loop_filter_across_bricks_enabled_flag u(1)
if( loop_filter_across_bricks_enabled_flag )
loop_filter_across_slices_enabled_flag u(1)
}
signalled_slice_id_flag
signalled_slice_id_length_minus1
slice_id[ i ]
entropy_coding_sync_enabled_flag
cabac_init_present_flag
num_ref_idx_default_active_minus1[ i ]
rpl1_idx_present_flag
init_qp_minus26
transform_skip_enabled_flag
log2_transform_skip_max_size_minus2
cu_qp_delta_enabled_flag
cu_qp_delta_subdiv
pps_cb_qp_offset
pps_cr_qp_offset
pps_joint_cbcr_qp_offset
pps_slice_chroma_qp_offsets_present_flag
weighted_pred_flag
weighted_bipred_flag
deblocking_filter_control_present_flag
deblocking_filter_override_enabled_flag
pps_deblocking_filter_disabled_flag
pps_beta_offset_div2
pps_tc_offset_div2
pps_loop_filter_across_virtual_boundaries_disabled_flag
pps_virtual_boundaries_pos_x[ i ]
pps_num_hor_virtual_boundaries
pps_virtual_boundaries_pos_y[ i ]
pps_extension_flag
pps_extension_data_flag
loop_filter_across_bricks_enabled_flag equal to 1 specifies that in-loop filtering operations may be performed across brick boundaries in pictures referring to the PPS. loop_filter_across_bricks_enabled_flag equal to 0 specifies that in-loop filtering operations are not performed across brick boundaries in pictures referring to the PPS. The in-loop filtering operations include the deblocking filter, sample adaptive offset filter, and adaptive loop filter operations. When not present, the value of loop_filter_across_bricks_enabled_flag is inferred to be equal to 1.
loop_filter_across_slices_enabled_flag equal to 1 specifies that in-loop filtering operations may be performed across slice boundaries in pictures referring to the PPS. loop_filter_across_slice_enabled_flag equal to 0 specifies that in-loop filtering operations are not performed across slice boundaries in pictures referring to the PPS. The in-loop filtering operations include the deblocking filter, sample adaptive offset filter, and adaptive loop filter operations. When not present, the value of loop_filter_across_slices_enabled_flag is inferred to be equal to 0.
pps_loop_filter_across_virtual_boundaries_disabled_flag equal to 1 specifies that the in-loop filtering operations are disabled across the virtual boundaries in pictures referring to the PPS. pps_loop_filter_across_virtual_boundaries_disabled_flag equal to 0 specifies that no such disabling of in-loop filtering operations is applied in pictures referring to the PPS. The in-loop filtering operations include the deblocking filter, sample adaptive offset filter, and adaptive loop filter operations. When not present, the value of pps_loop_filter_across_virtual_boundaries_disabled_flag is inferred to be equal to 0.
pps_num_ver_virtual_boundaries specifies the number of pps_virtual_boundaries_pos_x[i] syntax elements that are present in the PPS. When pps_num_ver_virtual_boundaries is not present, it is inferred to be equal to 0.
Inputs of this process are:
Inputs of this process are:
According to the current VVC design, if the bottom boundary of one CTB is a bottom boundary of a slice/brick, the ALF virtual boundary handling method is disabled. For example, one picture is split into multiple CTUs and 2 slices as depicted
Suppose the CTU size is M×M (e.g., M=64). According to the virtual boundary definition, the last 4 lines within a CTB are treated as being below a virtual boundary. In hardware implementations, the following apply:
The horizontal wrap around motion compensation in the VTM5 is a 360-specific coding tool designed to improve the visual quality of reconstructed 360-degree video in the equi-rectangular (ERP) projection format. In conventional motion compensation, when a motion vector refers to samples beyond the picture boundaries of the reference picture, repetitive padding is applied to derive the values of the out-of-bounds samples by copying from those nearest neighbors on the corresponding picture boundary. For 360-degree video, this method of repetitive padding is not suitable, and could cause visual artefacts called “seam artefacts” in a reconstructed viewport video. Because a 360-degree video is captured on a sphere and inherently has no “boundary,” the reference samples that are out of the boundaries of a reference picture in the projected domain can always be obtained from neighboring samples in the spherical domain. For a general projection format, it may be difficult to derive the corresponding neighboring samples in the spherical domain, because it involves two dimensional (2D)-to-three dimensional (3D) and 3D-to-2D coordinate conversion, as well as sample interpolation for fractional sample positions. This problem is much simpler for the left and right boundaries of the ERP projection format, as the spherical neighbors outside of the left picture boundary can be obtained from samples inside the right picture boundary, and vice versa.
The horizontal wrap around motion compensation process is as depicted in
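The core of the wrap around can be sketched as follows (C++); this is an illustrative simplification, with picW and wrapOffset as hypothetical inputs, rather than the exact specification equations:

// Illustrative horizontal wrap around of a reference x position for ERP content:
// an out-of-range position is mapped to the opposite side of the picture
// instead of being clamped by repetitive padding.
int wrapAroundX(int x, int picW, int wrapOffset) {
    if (x < 0)     return x + wrapOffset;
    if (x >= picW) return x - wrapOffset;
    return x;
}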
For projection formats composed of a plurality of faces, no matter what kind of compact frame packing arrangement is used, discontinuities appear between two or more adjacent faces in the frame packed picture. For example, considering the 3×2 frame packing configuration depicted in
To alleviate face seam artifacts, in-loop filtering operations may be disabled across discontinuities in the frame-packed picture. A syntax was proposed to signal vertical and/or horizontal virtual boundaries across which the in-loop filtering operations are disabled. Compared to using two tiles, one for each set of continuous faces, and to disable in-loop filtering operations across tiles, the proposed signaling method is more flexible as it does not require the face size to be a multiple of the CTU size.
In some embodiments, the following features are included:
1) Pictures may be divided into sub-pictures.
2) The indication of existence of sub-pictures is indicated in the SPS, along with other sequence-level information of sub-pictures.
3) Whether a sub-picture is treated as a picture in the decoding process (excluding in-loop filtering operations) can be controlled by the bitstream.
4) Whether in-loop filtering across sub-picture boundaries is disabled can be controlled by the bitstream for each sub-picture. The DBF, SAO, and ALF processes are updated for controlling of in-loop filtering operations across sub-picture boundaries.
5) For simplicity, as a starting point, the sub-picture width, height, horizontal offset, and vertical offset are signalled in units of luma samples in SPS. Sub-picture boundaries are constrained to be slice boundaries.
6) Treating a sub-picture as a picture in the decoding process (excluding in-loop filtering operations) is specified by slightly updating the coding_tree_unit( ) syntax, and updates to the following decoding processes:
7) Sub-picture IDs are explicitly specified in the SPS and included in the tile group headers to enable extraction of sub-picture sequences without the need of changing VCL NAL units.
Output sub-picture sets (OSPS) are proposed to specify normative extraction and conformance points for sub-pictures and sets thereof.
The current VVC design has the following problems:
1. The current setting of enabling ALF virtual boundary is dependent on whether the bottom boundary of a CTB is a bottom boundary of a picture. If it is true, then ALF virtual boundary is disabled, such as CTU-D in
2. The ALF virtual boundary handling method is disabled for the bottom picture boundary and for slice/tile/brick boundaries. Disabling the VB along a slice/brick boundary may create a pipeline bubble or require processing 68 lines per virtual pipeline data unit (VPDU, 64×64 in VVC), assuming the LCU size to be 64×64. For example:
3. Different ways exist for handling a virtual boundary and a video unit boundary, e.g., different padding methods. Meanwhile, more than one padding method may be performed for a line when it is at multiple boundaries.
4. The way of handling a virtual boundary may be sub-optimal, since padded samples are utilized, which may be less efficient.
5. When the non-linear ALF is disabled, the MALF and two-side padding methods are able to generate the same results for filtering a sample which requires access to samples crossing a virtual boundary. However, when the non-linear ALF is enabled, the two methods would bring different results. It would be beneficial to align the two cases.
6. A slice could be a rectangular one, or a non-rectangular one, such as depicted in
7. A subpicture is a rectangular region of one or more slices within a picture. A subpicture contains one or more slices that collectively cover a rectangular region of a picture. The syntax table is modified as follows to include the concept of subpictures (bold, italicized, and underlined).
sps_decoding_parameter_set_id
sps_video_parameter_set_id
sps_max_sub_layers_minus1
sps_reserved_zero_5bits
gra_enabled_flag
sps_seq_parameter_set_id
chroma_format_idc
separate_colour_plane_flag
pic_width_max_in_luma_samples
pic_height_max_in_luma_samples
subpics_present_flag u(1)
if( subpics_present_flag ) {
max_grid_idxs_minus1 u(8)
subpic_grid_col_width_minus1 u(v)
subpic_grid_row_height_minus1 u(v)
for( i = 0; i &lt; NumSubPicGridRows; i++ )
for( j = 0; j &lt; NumSubPicGridCols; j++ )
subpic_grid_idx[ i ][ j ] u(1)
for( i = 0; i &lt;= NumSubPics; i++ ) {
subpic_treated_as_pic_flag[ i ] u(1)
loop_filter_across_subpic_enabled_flag[ i ] u(1)
}
}
bit_depth_luma_minus8
bit_depth_chroma_minus8
log2_max_pic_order_cnt_lsb_minus4
It is noted that the enabling of filtering crossing subpictures is controlled for each subpicture. However, the enabling of filtering crossing slices/tiles/bricks is controlled at the picture level, which is signaled once to control all slices/tiles/bricks within one picture.
8. ALF classification is performed in 4×4 units, that is, all samples within one 4×4 unit share the same classification results. However, to be more precise, samples in an 8×8 window containing the current 4×4 block need their gradients calculated. In this case, 10×10 samples need to be accessed, as depicted in
9. In the VVC design, the four boundary positions (e.g., left vertical/right vertical/above horizontal/below horizontal) are identified. If a sample is located within the four boundary positions, it is marked as available. However, in VVC, a slice could cover a non-rectangular region, as shown in
The listing below should be considered as examples to explain general concepts. The listed techniques should not be interpreted in a narrow way. Furthermore, these techniques can be combined in any manner.
The padding method used for ALF virtual boundaries may be denoted as 'Two-side Padding' wherein if one sample located at (i, j) is padded, then the corresponding sample located at (m, n) which shares the same filter coefficient is also padded, even if the sample is available, as depicted in
The padding method used for picture boundaries/360-degree video virtual boundaries, normal boundaries (e.g., top and bottom boundaries) may be denoted as ‘One-side Padding’ wherein if one sample to be used is outside the boundaries, it is copied from an available one inside the picture.
The padding method used for 360-degree video left and right boundaries may be denoted as 'wrapping-based Padding' wherein if one sample to be used is outside the boundaries, it is copied using the motion compensated results.
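For illustration only, the one-side and two-side padding ideas can be sketched as follows (C++); the 1-D line layout and indices are hypothetical simplifications of the 2-D filtering described elsewhere in this disclosure:

#include <algorithm>
#include <vector>

// One-side padding: an out-of-boundary position is replaced by the nearest
// available sample inside the boundary (picture/normal boundaries).
int padOneSide(const std::vector<int>& line, int pos) {
    int clamped = std::min(std::max(pos, 0), static_cast<int>(line.size()) - 1);
    return line[clamped];
}

// Two-side padding (ALF virtual boundaries): if the sample at offset +d from the
// current position is padded, the mirrored sample at offset -d that shares the
// same filter coefficient is padded as well, even if it is available.
void padTwoSide(std::vector<int>& line, int curPos, int d, int firstUnavailablePos) {
    if (curPos + d >= firstUnavailablePos &&
        curPos + d < static_cast<int>(line.size()) && curPos - d >= 0) {
        line[curPos + d] = line[curPos];  // pad the unavailable sample
        line[curPos - d] = line[curPos];  // pad its mirror sharing the coefficient
    }
}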
In the following discussion, a sample being "at a boundary of a video unit" may mean that the distance between the sample and the boundary of the video unit is less than or no greater than a threshold. A "line" may refer to samples at one same horizontal position or samples at one same vertical position (e.g., samples in the same row and/or samples in the same column). The function Abs(x) is defined as follows: Abs(x) is equal to x when x is greater than or equal to 0, and is equal to −x otherwise.
In the following discussion, a “virtual sample” refers to a generated sample which may be different from the reconstructed sample (may be processed by deblocking and/or SAO). A virtual sample may be used to conduct ALF for another sample. The virtual sample may be generated by padding.
‘ALF virtual boundary handling method is enabled for one block’ may indicate that applyVirtualBoundary in the specification is set to true. ‘Enabling virtual boundary’ may indicate that the current block is split to at least two parts by a virtual boundary and the samples located in one part are disallowed to utilize samples in the other part in the filtering process (e.g., ALF). The virtual boundary may be K rows above the bottom boundary of one block.
In the following descriptions, the neighboring samples may be those which are required for the filter classification and/or filtering process.
In the disclosure, a neighboring sample is “unavailable” if it is out of the current picture, or current sub-picture, or current tile, or current slice, or current brick, or current CTU, or current processing unit (such as ALF processing unit or narrow ALF processing unit), or any other current video unit.
1. The determination of ‘The bottom boundary of the current coding tree block is the bottom boundary of the picture’ is replaced by ‘The bottom boundary of the current coding tree block is the bottom boundary of the picture or outside the picture’.
2. Whether to enable the usage of virtual samples (e.g., whether to enable virtual boundary (e.g., set applyVirtualBoundary to true or false)) in the in-loop filtering process may depend on the CTB size.
3. Whether to enable the usage of virtual samples (e.g., padded from reconstructed samples) in the in-loop filtering processes (e.g., ALF) may depend on whether the bottom boundary of the block is the bottom boundary of a video unit which is in a finer granularity compared to a picture (e.g., slice/tile/brick) or a virtual boundary.
4. It is proposed to disable the usage of samples across brick/slice boundaries in the filtering process (e.g., ALF) even when the signaled controlling usage flags for loop filters crossing brick/slice boundaries (e.g., loop_filter_across_bricks_enabled_flag/loop_filter_across_slices_enabled_flag) is true.
5. When one block (e.g., CTB) contains a sample located at a boundary of a video unit (such as slice/brick/tile/360-degree video virtual or normal boundaries/picture boundary), how to generate the virtual sample inside or outside the video unit (e.g., padding methods) for in-loop filtering such as ALF may be unified for different kinds of boundaries.
6. When a sample is located at at least two boundaries of one block (e.g., at least one boundary above the current line is the ALF virtual boundary, and the other boundary is below), how many lines to be padded is not purely decided by the distance between the current line and the ALF virtual boundary. Instead, it is determined by the distances between the current line and the two boundaries.
7. At least two ways of selecting samples in the ALF classification and/or ALF linear or non-linear filtering process may be defined, with one of them selecting samples before any in-loop filtering method is applied, and the other selecting samples after one or multiple in-loop filtering methods are applied but before ALF is applied.
8. It is proposed to disable the usage of samples crossing a VPDU boundary (e.g., a 64×64 region) in the filtering process.
VPDU boundary, or below the virtual boundary, it may be replaced by a virtual sample, such as padded from available ones.
9. Instead of using padded samples (e.g., samples that are not available, or are above/below virtual boundaries, or above/below boundaries of a video unit) in the ALF classification/filtering process, it is proposed to use the reconstructed samples before all in-loop filters.
10. Instead of using padded samples (e.g., samples that are not available, or are above/below virtual boundaries, or above/below boundaries of a video unit) in the ALF filtering process, it is proposed to employ different ALF filter supports.
11. Selection of clipping parameters/filter coefficients/filter supports may be dependent on whether filtering a sample needs to access padded samples (e.g., samples that are not available, or are above/below virtual boundaries, or above/below boundaries of a video unit).
12. How to handle a sample at a boundary for in-loop filtering (such as ALF) may depend on the color component and/or color format.
13. When bottom/top/left/right boundary of one CTU/VPDU is also a boundary of a slice/tile/brick/sub-region with independent coding, a fixed order of multiple padding processes is applied.
14. The proposed methods may be applied to one or multiple boundaries between two sub-pictures.
15. The above proposed methods may be applied to samples/blocks at vertical boundaries.
16. Whether and/or how the proposed method is applied at a "360 virtual boundary" may depend on the position of the "360 virtual boundary".
17. When a reference sample required in the ALF filtering process (e.g., P0i with i being A/B/C/D in the corresponding figure) is unavailable, it may be padded, for example based on a distance measured as Abs(x1−x2)+Abs(y1−y2).
18. How to derive the padded sample of unavailable reference samples may depend on whether the CTU coincides with any boundaries.
19. Whether the filtering process (e.g., deblocking, SAO, ALF, bilateral filtering, Hadamard transform filtering etc.) can access samples across boundaries of a video unit (e.g., slice/brick/tile/sub-picture boundary) may be controlled at different levels, such as being controlled by itself, instead of being controlled for all video units in a sequence/picture.
20. It is proposed to check whether samples located at above-left/above-right/below-left/below-right neighboring regions of the current block are in the same video unit (e.g., slice/brick/tile/subpicture/360 virtual boundaries) as the current block in the ALF processes (e.g., classification and/or filtering processes). Denote the top-left sample of the current luma coding tree block relative to the top-left sample of the current picture by (x0, y0), and denote ctbXSize and ctbYSize as the CTU width and height, respectively.
21. The above proposed methods may be applied not only to ALF, but also to other kinds of filtering methods that require access to samples outside the current block.
22. Whether to and/or how to apply the above methods may be determined by:
In the sections below, some examples of how the current version of the VVC standard can be modified to accommodate some embodiments of the disclosed technology are described. Newly added parts are indicated in bold italicized underlined text. The deleted parts are indicated using strikethrough text.
loop_filter_across_bricks_enabled_flag equal to 1 specifies that in-loop filtering operations may be performed across brick boundaries in pictures referring to the PPS. loop_filter_across_bricks_enabled_flag equal to 0 specifies that in-loop filtering operations are not performed across brick boundaries in pictures referring to the PPS. The in-loop filtering operations include the deblocking filter, sample adaptive offset filter operations. When not present, the value of loop_filter_across_bricks_enabled_flag is inferred to be equal to 1.
loop_filter_across_slices_enabled_flag equal to 1 specifies that in-loop filtering operations may be performed across slice boundaries in pictures referring to the PPS. loop_filter_across_slices_enabled_flag equal to 0 specifies that in-loop filtering operations are not performed across slice boundaries in pictures referring to the PPS. The in-loop filtering operations include the deblocking filter, sample adaptive offset filter operations. When not present, the value of loop_filter_across_slices_enabled_flag is inferred to be equal to 0.
Inputs of this process are:
Inputs of this process are:
Alternatively, the condition “the bottom boundary of the current coding tree block is the bottom boundary of the picture” can be replaced by “the bottom boundary of the current coding tree block is the bottom boundary of the picture or outside the picture.”
This embodiment shows an example of disallowing using samples below the VPDU region in the ALF classification process (corresponding to bullet 7 in section 4).
Inputs of this process are:
2. The variables minY, maxY and ac are derived as follows:
3. The variables varTempH1[x][y], varTempV1[x][y], varTempD01[x][y], varTempD11[x][y] and varTemp[x][y] with x, y=0 . . . (CtbSizeY−1)>>2 are derived as follows:
sumH[x][y]=ΣiΣj filtH[h(x<<2)+i−xCtb][v(y<<2)+j−yCtb] with i=−2 . . . 5, j=minY . . . maxY (8-1205)
sumV[x][y]=ΣiΣj filtV[h(x<<2)+i−xCtb][v(y<<2)+j−yCtb] with i=−2 . . . 5, j=minY . . . maxY (8-1206)
sumD0[x][y]=ΣiΣj filtD0[h(x<<2)+i−xCtb][v(y<<2)+j−yCtb] with i=−2 . . . 5, j=minY . . . maxY (8-1207)
sumD1[x][y]=ΣiΣj filtD1[h(x<<2)+i−xCtb][v(y<<2)+j−yCtb] with i=−2 . . . 5, j=minY . . . maxY (8-1208)
sumOfHV[x][y]=sumH[x][y]+sumV[x][y] (8-1209)
4. The variables dir1[x][y], dir2[x][y] and dirS[x][y] with x, y=0 . . . CtbSizeY−1 are derived as follows:
. . .
. . .
5. The variable avgVar[x][y] with x, y=0 . . . CtbSizeY−1 is derived as follows:
varTab[ ]={0, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 4} (8-1227)
avgVar[x][y]=varTab[Clip3(0, 15, (sumOfHV[x>>2][y>>2]*ac)>>(3+BitDepthY))] (8-1228)
6. The classification filter index array filtIdx[x][y] and the transpose index array transposeIdx[x][y] with x, y=0 . . . CtbSizeY−1 are derived as follows:
transposeTable[ ]={0, 1, 0, 2, 2, 3, 1, 3}
transposeIdx[x][y]=transposeTable[dir1[x][y]*2+(dir2[x][y]>>1)]
filtIdx[x][y]=avgVar[x][y]
When dirS[x][y] is not equal to 0, filtIdx[x][y] is modified as follows:
filtIdx[x][y]+=(((dir1[x][y]& 0x1)<<1)+dirS[x][y])*5 (8-1229)
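As a hedged, non-normative sketch, the per-4×4-unit classification described by equations (8-1227) through (8-1229) above can be expressed as follows; the variable names are illustrative, and dir1, dir2, dirS, sumOfHV and ac are assumed to have been derived as in the preceding steps.

    def classify_4x4(sum_of_hv, ac, bit_depth_y, dir1, dir2, dir_s):
        var_tab = [0, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 4]
        transpose_table = [0, 1, 0, 2, 2, 3, 1, 3]

        def clip3(lo, hi, v):
            return min(max(v, lo), hi)

        # avgVar per equation (8-1228)
        avg_var = var_tab[clip3(0, 15, (sum_of_hv * ac) >> (3 + bit_depth_y))]
        # transposeIdx and filtIdx per the text above and equation (8-1229)
        transpose_idx = transpose_table[dir1 * 2 + (dir2 >> 1)]
        filt_idx = avg_var
        if dir_s != 0:
            filt_idx += (((dir1 & 0x1) << 1) + dir_s) * 5
        return filt_idx, transpose_idx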
For samples located at multiple kinds of boundaries (e.g., slice/brick boundary, 360-degree virtual boundary), the padding process is only invoked once, and the number of lines to be padded per side depends on the location of the current sample relative to the boundaries.
In one example, the ALF 2-side padding method is applied. Alternatively or additionally, in the symmetric 2-side padding method, when a sample is at two boundaries, e.g., one boundary on the above side and one boundary on the below side, the number of samples to be padded is decided by the nearer boundary, as shown in the corresponding figure.
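The following non-normative sketch illustrates bullet 6 and the symmetric 2-side padding idea: the number of lines padded on each side is driven by the distance of the current line to the nearer boundary, and the same amount of padding is mirrored to the opposite side. The filter reach of 3 rows and all names are assumptions for illustration only.

    def symmetric_two_side_padding(cur_line, top_forbidden, bottom_forbidden, reach=3):
        # Rows <= top_forbidden and rows >= bottom_forbidden may not be used by
        # the filter centered on cur_line (a 7x7 luma ALF reaches 3 rows each way).
        usable_above = cur_line - (top_forbidden + 1)
        usable_below = (bottom_forbidden - 1) - cur_line
        need_above = max(0, reach - usable_above)
        need_below = max(0, reach - usable_below)
        # Symmetric 2-side padding: the nearer boundary decides, and the same
        # number of rows is padded on both sides so the filter shape stays symmetric.
        return max(need_above, need_below)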
Inputs of this process are:
condition | r1 | r2 | r3
(y == boundaryPos1 − 1 || y == boundaryPos1) && (boundaryPos1 > −1 && (boundaryPos2 == −1 || boundaryPos2 >= boundaryPos1 + 8)) | 0 | 0 | 0
(y == boundaryPos1 − 2 || y == boundaryPos1 + 1) && (boundaryPos1 > −1 && (boundaryPos2 == −1 || boundaryPos2 >= boundaryPos1 + 8)) | 1 | 1 | 1
(y == boundaryPos1 − 3 || y == boundaryPos1 + 2) && (boundaryPos1 > −1 && (boundaryPos2 == −1 || boundaryPos2 >= boundaryPos1 + 8)) | 1 | 2 | 2
(y == boundaryPos1 − 1 || y == boundaryPos1 || y == boundaryPos2 − 1 || y == boundaryPos2) && (boundaryPos1 > −1 && (boundaryPos2 == boundaryPos1 + 4)) | 0 | 0 | 0
(y == boundaryPos1 − 2 || y == boundaryPos1 + 1 || y == boundaryPos2 − 2 || y == boundaryPos2 + 1) && (boundaryPos1 > −1 && (boundaryPos2 == boundaryPos1 + 4)) | 1 | 1 | 1
(y == boundaryPos1 − 3 || y == boundaryPos2 + 2) && (boundaryPos1 > −1 && (boundaryPos2 == boundaryPos1 + 4)) | 1 | 2 | 2
otherwise | 1 | 2 | 3
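A non-normative Python sketch of the table lookup above (the first matching row wins); boundaryPos1 and boundaryPos2 are assumed to be −1 when the corresponding boundary is absent, and all names are illustrative.

    def alf_row_offsets(y, b1, b2):
        single = b1 > -1 and (b2 == -1 or b2 >= b1 + 8)  # only boundaryPos1 is relevant
        double = b1 > -1 and b2 == b1 + 4                # two close boundaries
        rows = [
            (single and y in (b1 - 1, b1),                     (0, 0, 0)),
            (single and y in (b1 - 2, b1 + 1),                 (1, 1, 1)),
            (single and y in (b1 - 3, b1 + 2),                 (1, 2, 2)),
            (double and y in (b1 - 1, b1, b2 - 1, b2),         (0, 0, 0)),
            (double and y in (b1 - 2, b1 + 1, b2 - 2, b2 + 1), (1, 1, 1)),
            (double and y in (b1 - 3, b2 + 2),                 (1, 2, 2)),
        ]
        for cond, offsets in rows:
            if cond:
                return offsets          # (r1, r2, r3)
        return (1, 2, 3)                # "otherwise" row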
Inputs of this process are:
1. The variables filtH[x][y], filtV[x][y], filtD0[x][y] and filtD1[x][y] with x, y=−2 . . . CtbSizeY+1 are derived as follows:
2. The variables minY, maxY and ac are derived as follows:
3. The variables sumH[x][y], sumV[x][y], sumD0[x][y], sumD1[x][y] and sumOfHV[x][y] with x, y=0 . . . (CtbSizeY−1)>>2 are derived as follows:
sumH[x][y]=ΣiΣj filtH[h(x<<2)+i−xCtb][v(y<<2)+j−yCtb] with i=−2 . . . 5, j=minY . . . maxY (8-1220)
sumV[x][y]=ΣiΣj filtV[h(x<<2)+i−xCtb][v(y<<2)+j−yCtb] with i=−2 . . . 5, j=minY . . . maxY (8-1221)
sumD0[x][y]=ΣiΣj filtD0[h(x<<2)+i−xCtb][v(y<<2)+j−yCtb] with i=−2 . . . 5, j=minY . . . maxY (8-1222)
sumD1[x][y]=ΣiΣj filtD1[h(x<<2)+i−xCtb][v(y<<2)+j−yCtb] with i=−2 . . . 5, j=minY . . . maxY (8-1223)
sumOfHV[x][y]=sumH[x][y]+sumV[x][y] (8-1224)
Inputs of this process are:
Inputs of this process are:
A CTU may not coincide with any boundaries (e.g., picture/slice/tile/brick/sub-picture boundary) and yet may still need to access samples outside the current unit (e.g., picture/slice/tile/brick/sub-picture). If filtering across the slice boundary is disabled (e.g., loop_filter_across_slices_enabled_flag is false), the samples outside the current unit need to be padded.
For example, for the sample 2801 (take luma sample for example) in
In this embodiment, the following main ideas are applied:
On enabling ALF virtual boundaries:
On padding of boundaries (including ALF virtual boundaries, slice/tile/brick/sub-picture boundaries, “360 virtual boundaries”) in the classification process:
For a sample at one (or multiple kinds of) boundary, when neighboring samples across the boundary are disallowed to be used, 1-side padding is performed to pad such neighboring samples.
On padding of boundaries (including ALF virtual boundaries, slice/tile/brick/sub-picture boundaries, “360 virtual boundaries”) in the ALF filtering process:
Inputs of this process are:
condition | r1 | r2 | r3
(y == clipBottomPos − 1 || y == clipTopPos) | 0 | 0 | 0
(y == clipBottomPos − 2 || y == clipTopPos + 1) | 1 | 1 | 1
(y == clipBottomPos − 3 || y == clipTopPos + 2) && (clipBottomPos != clipTopPos + 4) | 1 | 2 | 2
(y == clipTopPos + 2) && (clipBottomPos == clipTopPos + 4) | 1 | 1 | 1
otherwise | 1 | 2 | 3
Table 8-24—Specification of c1, c2, and c3 according to the horizontal luma sample position x, clipLeftPos and clipRightPos
condition | c1 | c2 | c3
(xCtb + x == clipLeftPos || xCtb + x == clipRightPos − 1) | 0 | 0 | 0
(xCtb + x == clipLeftPos + 1 || xCtb + x == clipRightPos − 2) | 1 | 1 | 1
(xCtb + x == clipLeftPos + 2 || xCtb + x == clipRightPos − 3) | 1 | 2 | 2
otherwise | 1 | 2 | 3
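A small non-normative sketch of the c1/c2/c3 lookup in the table above; clipLeftPos and clipRightPos are assumed to be −1 (or otherwise out of range) when no horizontal clipping applies, in which case the "otherwise" row is selected.

    def alf_column_offsets(x_ctb, x, clip_left_pos, clip_right_pos):
        pos = x_ctb + x
        if pos == clip_left_pos or pos == clip_right_pos - 1:
            return 0, 0, 0
        if pos == clip_left_pos + 1 or pos == clip_right_pos - 2:
            return 1, 1, 1
        if pos == clip_left_pos + 2 or pos == clip_right_pos - 3:
            return 1, 2, 2
        return 1, 2, 3   # "otherwise" row: full filter support horizontally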
Inputs of this process are:
1. The variables filtH[x][y], filtV[x][y], filtD0[x][y] and filtD1[x][y] with x, y=−2 . . . CtbSizeY+1 are derived as follows:
filtD0[x][y]=Abs((recPicture[hx, vy]<<1)−recPicture[hx−1, vy−1]−recPicture[hx+1, vy+1]) (8-1218)
filtD1[x][y]=Abs((recPicture[hx, vy]<<1)−recPicture[hx+1, vy−1]−recPicture[hx−1, vy+1]) (8-1219)
2.
3. The variables sumH[x][y], sumV[x][y], sumD0[x][y], sumD1[x][y] and sumOfHV[x][y] with x, y=0 . . . (CtbSizeY−1)>>2 are derived as follows:
The variables minY, maxY and ac are derived as follows:
The variable ac is derived according to Table 8-24.
(maxY − minY + 1) * (maxX − minX + 1) | ac
8 * 8 | 64
8 * 6 | 96
8 * 4 | 128
6 * 6 | 112
6 * 4 | 192
sumH[x][y]=ΣiΣj filtH[h(x<<2)+i−xCtb][v(y<<2)+j−yCtb] with i=minX . . . maxX, j=minY . . . maxY (8-1220)
sumV[x][y]=ΣiΣj filtV[h(x<<2)+i−xCtb][v(y<<2)+j−yCtb] with i=minX . . . maxX, j=minY . . . maxY (8-1221)
sumD0[x][y]=ΣiΣj filtD0[h(x<<2)+i−xCtb][v(y<<2)+j−yCtb] with i=minX . . . maxX, j=minY . . . maxY (8-1222)
sumD1[x][y]=ΣiΣj filtD1[h(x<<2)+i−xCtb][v(y<<2)+j−yCtb] with i=minX . . . maxX, j=minY . . . maxY (8-1223)
sumOfHV[x][y]=sumH[x][y]+sumV[x][y] (8-1224)
4. The variables dir1 [x][y], dir2[x][y] and dirS[x][y] with x, y=0 . . . CtbSizeY−1 are derived as follows:
5. The variable avgVar[x][y] with x, y=0 . . . CtbSizeY−1 is derived as follows:
varTab[ ]={0, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 4} (8-1242)
avgVar[x][y]=varTab[Clip3(0, 15, (sumOfHV[x>>2][y>>2]*ac)>>(3+BitDepthY))] (8-1243)
6. The classification filter index array filtIdx[x][y] and the transpose index array transposeIdx[x][y] with x, y=0 . . . CtbSizeY−1 are derived as follows:
transposeTable[ ]={0, 1, 0, 2, 2, 3, 1, 3}
transposeIdx[x][y]=transposeTable[dir1[x][y]*2+(dir2[x][y]>>1)]
filtIdx[x][y]=avgVar[x][y]
When dirS[x][y] is not equal to 0, filtIdx[x][y] is modified as follows:
filtIdx[x][y]+=(((dir1[x][y]& 0x1)<<1)+dirS[x][y])*5 (8-1244)
Inputs of this process are:
For the derivation of the filtered reconstructed chroma samples alfPicture[x][y], each reconstructed chroma sample inside the current chroma coding tree block recPicture[x][y] is filtered as follows with x=0 . . . ctbWidthC−1, y=0 . . . ctbHeightC−1:
condition | r1 | r2
(y == clipBottomPos − 1 || y == clipTopPos) | 0 | 0
(y == clipBottomPos − 2 || y == clipTopPos + 1) && (clipBottomPos != clipTopPos + 2) | 1 | 1
(y == clipTopPos + 1) && (clipBottomPos == clipTopPos + 2) | 0 | 0
otherwise | 1 | 2
condition | c1 | c2
(xCtbC + x == clipLeftPos || xCtbC + x == clipRightPos − 1) | 0 | 0
(xCtbC + x == clipLeftPos + 1 || xCtbC + x == clipRightPos − 2) | 1 | 1
otherwise | 1 | 2
Inputs of this process are:
The specific value −128 used in the above embodiment may be replaced by other values, such as −K wherein, for example, K is greater than or no smaller than the number of lines shifted from a CTU bottom boundary (e.g., K=−5).
Alternatively, the conditional check of "PpsVirtualBoundariesPosY[n] is not equal to pic_height_in_luma_samples−1 or 0" could be further removed based on PpsVirtualBoundariesPosY[n] being in the range of 1 to Ceil(pic_height_in_luma_samples÷8)−1, inclusive.
Alternatively, one flag may be used to mark whether each sample needs to be handled in a different way if it is located at video unit boundaries.
In this embodiment, the following main ideas are applied:
On enabling ALF virtual boundaries:
On padding of boundaries (including ALF virtual boundaries, slice/tile/brick/sub-picture boundaries, “360 virtual boundaries”) in the classification process:
On padding of boundaries (including ALF virtual boundaries, slice/tile/brick/sub-picture boundaries, “360 virtual boundaries”) in the ALF filtering process:
Inputs of this process are:
The variable sum is derived as follows:
sum=f[idx[0]]*(Clip3(−c[idx[0]], c[idx[0]], recPictureL[hx, vy+r3]−curr)+Clip3(−c[idx[0]], c[idx[0]], recPictureL[hx, vy−r3]−curr))+
f[idx[1]]*(Clip3(−c[idx[1]], c[idx[1]], recPictureL[hx+c1, vy+r2]−curr)+Clip3(−c[idx[1]], c[idx[1]], recPictureL[hx−c1, vy−r2]−curr))+
f[idx[2]]*(Clip3(−c[idx[2]], c[idx[2]], recPictureL[hx, vy+r2]−curr)+Clip3(−c[idx[2]], c[idx[2]], recPictureL[hx, vy−r2]−curr))+
f[idx[3]]*(Clip3(−c[idx[3]], c[idx[3]], recPictureL[hx−c1, vy+r2]−curr)+Clip3(−c[idx[3]], c[idx[3]], recPictureL[hx+c1, vy−r2]−curr))+
f[idx[4]]*(Clip3(−c[idx[4]], c[idx[4]], recPictureL[hx+c2, vy+r1]−curr)+Clip3(−c[idx[4]], c[idx[4]], recPictureL[hx−c2, vy−r1]−curr))+
f[idx[5]]*(Clip3(−c[idx[5]], c[idx[5]], recPictureL[hx+c1, vy+r1]−curr)+Clip3(−c[idx[5]], c[idx[5]], recPictureL[hx−c1, vy−r1]−curr))+
f[idx[6]]*(Clip3(−c[idx[6]], c[idx[6]], recPictureL[hx, vy+r1]−curr)+Clip3(−c[idx[6]], c[idx[6]], recPictureL[hx, vy−r1]−curr))+
f[idx[7]]*(Clip3(−c[idx[7]], c[idx[7]], recPictureL[hx−c1, vy+r1]−curr)+Clip3(−c[idx[7]], c[idx[7]], recPictureL[hx+c1, vy−r1]−curr))+
f[idx[8]]*(Clip3(−c[idx[8]], c[idx[8]], recPictureL[hx−c2, vy+r1]−curr)+Clip3(−c[idx[8]], c[idx[8]], recPictureL[hx+c2, vy−r1]−curr))+
f[idx[9]]*(Clip3(−c[idx[9]], c[idx[9]], recPictureL[hx+c3, vy]−curr)+Clip3(−c[idx[9]], c[idx[9]], recPictureL[hx−c3, vy]−curr))+
f[idx[10]]*(Clip3(−c[idx[10]], c[idx[10]], recPictureL[hx+c2, vy]−curr)+Clip3(−c[idx[10]], c[idx[10]], recPictureL[hx−c2, vy]−curr))+
f[idx[11]]*(Clip3(−c[idx[11]], c[idx[11]], recPictureL[hx+c1, vy]−curr)+Clip3(−c[idx[11]], c[idx[11]], recPictureL[hx−c1, vy]−curr)) (8-1204)
sum=curr+((sum+64)>>7) (8-1205)
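As a compact, non-normative restatement of equations (8-1204) and (8-1205), the clipped 7×7 luma ALF sum can be sketched as below. Here rec is assumed to be a random-access map of reconstructed luma samples keyed by (x, y), and f/c hold the 12 filter coefficients and clipping values already selected for this block (i.e., f[k] stands for f[idx[k]]); the tap offsets c1, c2, c3 and r1, r2, r3 come from the boundary-dependent tables.

    def clip3(lo, hi, v):
        return min(max(v, lo), hi)

    def alf_luma_filter_sample(rec, hx, vy, f, c, c1, c2, c3, r1, r2, r3):
        curr = rec[(hx, vy)]
        # (dx, dy) offsets of the 12 symmetric tap pairs, in the order idx[0..11]
        taps = [(0, r3), (c1, r2), (0, r2), (-c1, r2),
                (c2, r1), (c1, r1), (0, r1), (-c1, r1), (-c2, r1),
                (c3, 0), (c2, 0), (c1, 0)]
        total = 0
        for k, (dx, dy) in enumerate(taps):
            d_pos = rec[(hx + dx, vy + dy)] - curr
            d_neg = rec[(hx - dx, vy - dy)] - curr
            total += f[k] * (clip3(-c[k], c[k], d_pos) + clip3(-c[k], c[k], d_neg))
        return curr + ((total + 64) >> 7)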
Condition | r1 | r2 | r3
(yCtb + y == clipBottomPos − 1 || yCtb + y == clipTopPos) | 0 | 0 | 0
(yCtb + y == clipBottomPos − 2 || yCtb + y == clipTopPos + 1) | 1 | 1 | 1
(yCtb + y == clipBottomPos − 3 || yCtb + y == clipTopPos + 2) && (clipBottomPos != clipTopPos + 4) | 1 | 2 | 2
(yCtb + y == clipTopPos + 2) && (clipBottomPos == clipTopPos + 4) | 1 | 1 | 1
Otherwise | 1 | 2 | 3
Condition | c1 | c2 | c3
(xCtb + x == clipLeftPos || xCtb + x == clipRightPos − 1) | 0 | 0 | 0
(xCtb + x == clipLeftPos + 1 || xCtb + x == clipRightPos − 2) | 1 | 1 | 1
(xCtb + x == clipLeftPos + 2 || xCtb + x == clipRightPos − 3) | 1 | 2 | 2
Otherwise | 1 | 2 | 3
Inputs of this process are:
(maxY − minY + 1) * (maxX − minX + 1) | ac
8 * 8 | 64
8 * 6 | 96
8 * 4 | 128
6 * 6 | 112
6 * 4 | 192
sumH[x][y]=ΣiΣj filtH[h(x<<2)+i−xCtb][v(y<<2)+j−yCtb] with i=minX . . . maxX, j=minY . . . maxY (8-1220)
sumV[x][y]=ΣiΣj filtV[h(x<<2)+i−xCtb][v(y<<2)+j−yCtb] with i=minX . . . maxX, j=minY . . . maxY (8-1221)
sumD0[x][y]=ΣiΣj filtD0[h(x<<2)+i−xCtb][v(y<<2)+j−yCtb] with i=minX . . . maxX, j=minY . . . maxY (8-1222)
sumD1[x][y]=ΣiΣj filtD1[h(x<<2)+i−xCtb][v(y<<2)+j−yCtb] with i=minX . . . maxX, j=minY . . . maxY (8-1223)
sumOfHV[x][y]=sumH[x][y]+sumV[x][y] (8-1224)
4. The variables dir1[x][y], dir2[x][y] and dirS[x][y] with x, y=0 . . . CtbSizeY−1 are derived as follows:
When dirS[x][y] is not equal to 0, filtIdx[x][y] is modified as follows:
filtIdx[x][y]+=(((dir1[x][y]&0x1)<<1)+dirS[x][y])*5 (8-1244)
Inputs of this process are:
Specification of r1 and r2 according to the vertical chroma sample position
Condition | r1 | r2
(yCtbC + y == clipBottomPos − 1 || yCtbC + y == clipTopPos) | 0 | 0
(yCtbC + y == clipBottomPos − 2 || yCtbC + y == clipTopPos + 1) && (clipBottomPos != clipTopPos + 2) | 1 | 1
(yCtbC + y == clipTopPos + 1) && (clipBottomPos == clipTopPos + 2) | 0 | 0
Otherwise | 1 | 2
Condition | c1 | c2
(xCtb + x == clipLeftPos || xCtb + x == clipRightPos − 1) | 0 | 0
(xCtb + x == clipLeftPos + 1 || xCtb + x == clipRightPos − 2) | 1 | 1
Otherwise | 1 | 2
Inputs of this process are:
The specific value −128 used in the above embodiment may be replaced by other values, such as −K wherein, for example, K is greater than or no smaller than the number of lines shifted from a CTU bottom boundary (e.g., K=−5).
Alternatively, one flag may be used to mark whether each sample needs to be handled in a different way if it is located at video unit boundaries.
Inputs of this process are:
Inputs of this process are:
. . .
Inputs of this process are:
. . .
8.5.5.5 ALF boundary position derivation process
Inputs of this process are:
a luma location (xCtb, yCtb) specifying the top-left sample of the current luma coding tree block relative to the top left sample of the current picture,
a luma location (x, y) specifying the current sample relative to the top-left sample of the current luma coding tree block.
Output of this process are:
In some embodiments, the video coding methods may be implemented using an apparatus that is implemented on a hardware platform as described with respect to
Various solutions and embodiments described in the present disclosure are further described using a list of solutions.
Section 4, item 1 provides additional examples of the following solutions.
1. A method of video processing, comprising: performing a conversion between video blocks of a video picture and a bitstream representation thereof, wherein the video blocks are processed using logical groupings of coding tree blocks, wherein the coding tree blocks are processed based on whether a bottom boundary of a bottom coding tree block is outside a bottom boundary of the video picture.
2. The method of solution 1, wherein the processing the coding tree block includes performing an adaptive loop filtering of sample values of the coding tree block by using samples within the coding tree block.
3. The method of solution 1, wherein the processing the coding tree block includes performing an adaptive loop filtering of sample values of the coding tree block by disabling splitting the coding tree block into two parts according to virtual boundaries.
Section 4, item 2 provides additional examples of the following solutions.
4. A method of video processing, comprising: determining, based on a condition of a coding tree block of a current video block, a usage status of virtual samples during an in-loop filtering; and performing a conversion between the video block and a bitstream representation of the video block consistent with the usage status of virtual samples.
5. The method of solution 4, wherein a logical true value of the usage status indicates that the current video block is split into at least two parts by a virtual boundary, and filtering samples in one part is disallowed from utilizing information from another part.
6. The method of solution 4, wherein, a logical true value of the usage status indicates virtual samples are used during the in-loop filtering, and wherein the in-loop filtering is performed using modified values of reconstructed samples of the current video block.
7. The method of solution 4, wherein a logical false value of the usage status indicates that filtering samples in the block is allowed to utilize the information in the same block.
8. The method of solution 4, wherein, a logical true value of the usage status indicates the in-loop filtering is performed on reconstructed samples of the current video block without further modifying the reconstructed samples.
9. The method of any of solutions 4-8, wherein the condition specifies to set the usage status to the logical false value due to the coding tree block having a specific size.
10. The method of any of solutions 4-8, wherein the condition specifies to set the usage status to the logical false value due to the coding tree block having a size greater than a specific size.
11. The method of any of solutions 4-8, wherein the condition specifies to set the usage status to the logical false value due to the coding tree block having a size less than a specific size.
Section 4, item 3 provides additional examples of the following solutions.
12. The method of solution 5, wherein the condition depends on whether a bottom boundary of the current video block is a bottom boundary of a video unit that is smaller than the video picture or the bottom boundary of the current video block is a virtual boundary.
13. The method of solution 12, wherein the condition depends on whether a bottom boundary of the current video block is a bottom boundary of a slice or tile or brick boundary.
14. The method of solution 12, wherein the condition specifies to set the usage status to the logical true value when the bottom boundary of the current video block is a bottom boundary of a slice or tile or brick boundary.
15. The method of any of solutions 4-12, wherein the condition specifies to set the usage status to the logical false value when the bottom boundary of the current video block is the bottom boundary of the picture or is outside the bottom boundary of the picture.
Section 4, item 4 provides additional examples of the following solutions.
16. A method of video processing, comprising: determining, during a conversion between a video picture that is logically grouped into one or more video slices or video bricks, and a bitstream representation of the video picture, to disable a use of samples in another slice or brick in the adaptive loop filter process; and performing the conversion consistent with the determining.
Section 4, item 5 provides additional examples of the following solutions.
17. A method of video processing, comprising: determining, during a conversion between a current video block of a video picture and a bitstream representation of the current video block, that the current video block includes samples located at a boundary of a video unit of the video picture; and performing the conversion based on the determining, wherein the performing the conversion includes generating virtual samples for an in-loop filtering process using a unified method that is same for all boundary types in the video picture.
18. The method of solution 17, wherein the video unit is a slice or tile or 360-degree video.
19. The method of solution 17, wherein the in-loop filtering includes adaptive loop filtering.
20. The method of any of solutions 17-19, wherein the unified method is a two-side padding method.
21. The method of any of solutions 17-20, wherein the unified method is when accessing samples below a first line is disallowed and padding is utilized to generate virtual samples for those below the first line, then accessing samples above a second line is also set to be disallowed and padding is utilized to generate virtual samples for those above the second line.
22. The method of any of solutions 17-20, wherein the unified method is when accessing samples above a first line is disallowed and padding is utilized to generate virtual samples for those above the first line, then accessing samples below a second line is also set to be disallowed and padding is utilized to generate virtual samples for those below the second line.
23. The method of any of solutions 21-22, wherein the distance between the first line and a current line, where the current sample to be filtered is located, and the distance between the second line and the first line are equal.
Section 4, item 6 provides additional examples of the following solutions.
24. A method of video processing, comprising: determining to apply, during a conversion between a current video block of a video picture and a bitstream representation thereof, one of multiple adaptive loop filter (ALF) sample selection methods available for the video picture during the conversion; and performing the conversion by applying the one of multiple ALF sample selection methods.
25. The method of solution 24, wherein the multiple ALF sample selection methods include a first method in which samples are selected before an in-loop filter is applied to the current video block during the conversion and a second method in which samples are selected after an in-loop filter is applied to the current video block during the conversion.
Section 4, item 7 provides additional examples of the following solutions.
26. A method of video processing, comprising: performing, based on a boundary rule, an in-loop filtering operation over samples of a current video block of a video picture during a conversion between the current video block and a bitstream representation of a current video block; wherein the boundary rule disables using samples that cross a virtual pipeline data unit (VPDU) of the video picture, and performing the conversion using a result of the in-loop filtering operation.
27. The method of solution 26, wherein the VPDU corresponds to a region of the video picture having a fixed size.
28. The method of any of solutions 26-27, wherein the boundary rule further specifies to use virtual samples for the in-loop filtering in place of disabled samples.
29. The method of solution 28, wherein the virtual samples are generated by padding.
Section 4, item 8 provides additional examples of the following solutions.
30. A method of video processing, comprising: performing, based on a boundary rule, an in-loop filtering operation over samples of a current video block of a video picture during a conversion between the current video block and a bitstream representation of a current video block; wherein the boundary rule specifies to use, for locations of the current video block across a video unit boundary, samples that are generated without using padding; and performing the conversion using a result of the in-loop filtering operation.
31. The method of solution 30, wherein the samples are generated using a two-side padding technique.
32. The method of solution 30, wherein the in-loop filtering operation comprises using a same virtual sample generation technique for symmetrically located samples during the in-loop filtering operation.
33. The method of any of solutions 30-32, wherein the in-loop filtering operation over samples of the current video block includes performing reshaping of the samples of the current video block prior to applying the in-loop filtering.
Section 4, item 9 provides additional examples of the following solutions.
34. A method of video processing, comprising: performing, based on a boundary rule, an in-loop filtering operation over samples of a current video block of a video picture during a conversion between the current video block and a bitstream representation of a current video block; wherein the boundary rule specifies selecting, for the in-loop filtering operation, a filter having dimensions such that samples of current video block used during the in-loop filtering do not cross a boundary of a video unit of the video picture; and performing the conversion using a result of the in-loop filtering operation.
Section 4, item 10 provides additional examples of the following solutions.
35. A method of video processing, comprising: performing, based on a boundary rule, an in-loop filtering operation over samples of a current video block of a video picture during a conversion between the current video block and a bitstream representation of a current video block; wherein the boundary rule specifies selecting, for the in-loop filtering operation, clipping parameters or filter coefficients based on whether or not padded samples are needed for the in-loop filtering; and performing the conversion using a result of the in-loop filtering operation.
36. The method of solution 35, wherein the clipping parameters or filter coefficients are included in the bitstream representation.
Section 4, item 11 provides additional examples of the following solutions.
37. A method of video processing, comprising: performing, based on a boundary rule, an in-loop filtering operation over samples of a current video block of a video picture during a conversion between the current video block and a bitstream representation of a current video block; wherein the boundary rule depends on a color component identity of the current video block; and performing the conversion using a result of the in-loop filtering operation.
38. The method of solution 37, wherein the boundary rule is different for luma and/or different color components.
39. The method of any of solutions 1-38, wherein the conversion includes encoding the current video block into the bitstream representation.
40. The method of any of solutions 1-38, wherein the conversion includes decoding the bitstream representation to generate sample values of the current video block.
41. A video encoding apparatus comprising a processor configured to implement a method recited in any one or more of solutions 1-38.
42. A video decoding apparatus comprising a processor configured to implement a method recited in any one or more of solutions 1-38.
43. A computer-readable medium having code stored thereon, the code, upon execution by a processor, causing the processor to implement a method recited in any one or more of solutions 1-38.
The system 3100 may include a coding component 3104 that may implement the various coding or encoding methods described in the present disclosure. The coding component 3104 may reduce the average bitrate of video from the input 3102 to the output of the coding component 3104 to produce a coded representation of the video. The coding techniques are therefore sometimes called video compression or video transcoding techniques. The output of the coding component 3104 may be either stored, or transmitted via a communication connection, as represented by the component 3106. The stored or communicated bitstream (or coded) representation of the video received at the input 3102 may be used by the component 3108 for generating pixel values or displayable video that is sent to a display interface 3110. The process of generating user-viewable video from the bitstream representation is sometimes called video decompression. Furthermore, while certain video processing operations are referred to as "coding" operations or tools, it will be appreciated that the coding tools or operations are used at an encoder and corresponding decoding tools or operations that reverse the results of the coding will be performed by a decoder.
Examples of a peripheral bus interface or a display interface may include universal serial bus (USB) or high definition multimedia interface (HDMI) or Displayport, and so on. Examples of storage interfaces include serial advanced technology attachment (SATA), peripheral component interconnect (PCI), integrated drive electronics (IDE) interface, and the like. The techniques described in the present disclosure may be embodied in various electronic devices such as mobile phones, laptops, smartphones or other devices that are capable of performing digital data processing and/or video display.
In some embodiments, the current block comprises a coding unit, a picture unit, or a coding tree unit. In some embodiments, the one or more samples outside the current block that are unavailable comprise a sample located in a first neighboring block that is located above and to the left of the current block. The order specifies that (1) samples in a second neighboring block that is above the current block are checked first, (2) in case the samples in the second neighboring block are unavailable, samples in a third neighboring block that is positioned to the left of the current block are checked next, and (3) in case the samples in the third neighboring block are also unavailable, a top-left sample of the current block is used to generate a padded sample.
In some embodiments, the one or more samples outside the current block that are unavailable comprise a sample located in a first neighboring block that is located below and to the right of the current block. The order specifies that (1) samples in a second neighboring block that is below the current block are checked first, (2) in case the samples in the second neighboring block are unavailable, samples in a third neighboring block that is positioned to the right of the current block are checked next, and (3) in case the samples in the third neighboring block are also unavailable, a bottom-right sample of the current block is used to generate a padded sample.
In some embodiments, the one or more samples outside the current block that are unavailable comprise a sample located in a first neighboring block that is located above and to the right of the current block. The order specifies that: (1) samples in a second neighboring block that is above the current block are checked first, (2) in case the samples in the second neighboring block are unavailable, samples in a third neighboring block that is positioned to the right of the current block are checked next, and (3) in case the samples in the third neighboring block are also unavailable, a top-right sample of the current block is used to generate a padded sample.
In some embodiments, the one or more samples outside the current block that are unavailable comprise a sample located in a first neighboring block that is located below and to the left of the current block. The order specifies that: (1) samples in a second neighboring block that is below the current block are checked first, (2) in case the samples in the second neighboring block are unavailable, samples in a third neighboring block that is positioned to the left of the current block are checked next, and (3) in case the samples in the third neighboring block are also unavailable, a top-left sample of the current block is used to generate a padded sample.
In some embodiments, the one or more samples outside the current block that are unavailable comprise a sample located in a first neighboring block that is located above and to the left of the current block. The order specifies that: (1) samples in a second neighboring block that is positioned to the left of the current block are checked first, (2) in case the samples in the second neighboring block are unavailable, samples in a third neighboring block that is above the current block are checked next, and (3) in case the samples in the third neighboring block are also unavailable, a top-left sample of the current block is used to generate a padded sample.
In some embodiments, the one or more samples outside the current block that are unavailable comprise a sample located in a first neighboring block that is located above and to the right of the current block. The order specifies that (1) samples in a second neighboring block that is positioned to the right of the current block are checked first, (2) in case the samples in the second neighboring block are unavailable, samples in a third neighboring block above the right of the current block are checked next, and (3) in case the samples in the third neighboring block are also unavailable, a top-right sample of the current block is used to generate a padded sample.
In some embodiments, the one or more samples outside the current block that are unavailable comprise a sample located in a first neighboring block that is located below and to the left of the current block. The order specifies that (1) samples in a second neighboring block that is positioned to the left of the current block are checked first, (2) in case the samples in the second neighboring block are unavailable, samples in a third neighboring block below the left of the current block are checked next, and (3) in case the samples in the third neighboring block are also unavailable, a top-left sample of the current block is used to generate a padded sample.
In some embodiments, the one or more samples outside the current block that are unavailable comprise a sample located in a first neighboring block that is located below and to the right of the current block. The order specifies that: (1) samples in a second neighboring block that is positioned to the right of the current block are checked first, (2) in case the samples in the second neighboring block are unavailable, samples in a third neighboring block below the right of the current block are checked next, and (3) in case the samples in the third neighboring block are also unavailable, a bottom-right sample of the current block is used to generate a padded sample.
In some embodiments, the order specifies that (1) vertical available samples are checked first, and (2) horizontal available samples are optionally checked next. In some embodiments, the order specifies that (1) horizontal available samples are checked first, and (2) vertical available samples are optionally checked next. In some embodiments, samples in each neighboring block of the current block are checked in an order. In some embodiments, only one sample is checked in each neighboring block of current block.
In some embodiments, a current sample in the current block is used in the ALF coding process in case no available sample is found to generate a padded sample. In some embodiments, for an unavailable sample in a neighboring block that is located (1) above and to the left of the current block, (2) above and to the right of the current block, (3) below and to the left of the current block, or (4) below and to the right of the current block, the sample is padded using one or more samples in the current block. In some embodiments, a top-left sample of the current block is used to pad a sample in a neighboring block that is located above and to the left of the current block. In some embodiments, a top-right sample of the current block is used to pad a sample in a neighboring block that is located above and to the right of the current block. In some embodiments, a below-left sample of the current block is used to pad a sample in a neighboring block that is located below and to the left of the current block. In some embodiments, a below-right sample of the current block is used to pad a sample in a neighboring block that is located below and to the right of the current block. In some embodiments, the coding process further comprises another filtering coding process that accesses the one or more samples outside the current block.
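The ordered fallback described in the preceding paragraphs can be sketched, non-normatively, as follows; the region names and accessor functions are placeholders supplied by the caller, and the commented call shows the above-left corner as one example.

    def pad_corner_sample(get_sample, is_available, candidates, fallback):
        # Check candidate neighboring regions in the stated order; if none is
        # available, fall back to a sample inside the current block.
        for region in candidates:
            if is_available(region):
                return get_sample(region)
        return get_sample(fallback)

    # Example for the above-left corner, per the order described above:
    # pad_corner_sample(get_sample, is_available,
    #                   candidates=("above", "left"), fallback="current_top_left")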
In some embodiments, the current block comprises a coding unit, a picture unit, or a coding tree unit. In some embodiments, a video region comprises a slice, a brick, a tile, or a sub-picture. In some embodiments, a sample is considered to be unavailable for the coding tool in case (1) the sample and a current sample of the current block are in different video regions, and (2) accessing samples across different video regions of the video is disabled for the coding tool. In some embodiments, a syntax element is signaled in the coded representation to indicate whether accessing samples across different video regions of the video is disabled. In some embodiments, the method further includes applying a padding process in response to the sample being unavailable to derive a padded sample for the coding tool.
In some embodiments, a top-left sample of the current block is represented as (x0, y0), and a sample (x0−offsetX0, y0−offsetY0) in the neighboring block that is located above and to the left of the current block is checked, offsetX0 and offsetY0 being integers. In some embodiments, (offsetX0, offsetY0) comprises (1, 1), (2, 1), or (1, 2).
In some embodiments, a top-left sample of the current block is represented as (x0, y0), and a sample (x0+offsetX1, y0−offsetY1) in the neighboring block that is located above and to the right of the current block is checked, offsetX1 and offsetY1 being integers. In some embodiments, (offsetX1, offsetY1) comprises (BlockWidth, 1), (BlockWidth+1, 1), or (BlockWidth, 2), where BlockWidth is a width of the current block.
In some embodiments, a top-left sample of the current block is represented as (x0, y0), and a sample (x0−offsetX2, y0+offsetY2) in the neighboring block that is located below and to the left of the current block is checked, offsetX2 and offsetY2 being integers. In some embodiments, (offsetX2, offsetY2) comprises (1, BlockHeight), (2, BlockHeight), or (1, BlockHeight+1), where BlockHeight is a height of the current block.
In some embodiments, a top-left sample of the current block is represented as (x0, y0), and a sample (x0+offsetX3, y0+offsetY3) in the neighboring block that is located below and to the right of the current block is checked, offsetX3 and offsetY3 being integers. In some embodiments, (offsetX3, offsetY3) comprises (BlockWidth, BlockHeight), (BlockWidth+1, BlockHeight), or (BlockWidth, BlockHeight+1), where BlockWidth is a width of the current block and BlockHeight is a height of the current block.
In some embodiments, offsetXk or offsetYk are determined based on a width or a height of the current block, wherein k=0, 1, or 3. In some embodiments, the coding tool further comprises another filtering coding tool.
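Putting the example offsets above together, the following non-normative sketch lists the positions that might be checked for the four corner neighbors; the default offsets correspond to the first example listed for each corner, and all names are illustrative.

    def corner_check_positions(x0, y0, block_width, block_height,
                               off0=(1, 1), off1=None, off2=None, off3=None):
        # Defaults: (1,1), (BlockWidth,1), (1,BlockHeight), (BlockWidth,BlockHeight)
        off1 = off1 or (block_width, 1)
        off2 = off2 or (1, block_height)
        off3 = off3 or (block_width, block_height)
        return {
            "above_left":  (x0 - off0[0], y0 - off0[1]),
            "above_right": (x0 + off1[0], y0 - off1[1]),
            "below_left":  (x0 - off2[0], y0 + off2[1]),
            "below_right": (x0 + off3[0], y0 + off3[1]),
        }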
In some embodiments, applicability of one or more of the methods to the current block is determined based on a characteristic of the video. In some embodiments, the characteristic of the video comprises information signaled in a decoder parameter set, a slice parameter set, a video parameter set, a picture parameter set, an adaptation parameter set, a picture header, a slice header, a tile group header, a largest coding unit (LCU), a coding unit, a LCU row, a group of LCUs, a transform unit, a picture unit, or a video coding unit in the coded representation. In some embodiments, the characteristic of the video comprises a position of a coding unit, a picture unit, a transform unit, a block, or a video coding unit within the video. In some embodiments, the characteristic of the video comprises a characteristic of a current block or a neighboring block of the current block. In some embodiments, the characteristic of a current block or neighboring blocks of the current block comprises a dimension of the current block or a dimension of the neighboring block of the current block. In some embodiments, the characteristic of a current block or neighboring blocks of the current block comprises a shape of the current block or a shape of the neighboring block of the current block. In some embodiments, the characteristic of the video comprises an indication of a color format of the video. In some embodiments, the characteristic of the video comprises a coding tree structure applicable to the video. In some embodiments, the characteristic of the video comprises a slice type, a tile group type, or a picture type of the video. In some embodiments, the characteristic of the video comprises color component of the video. In some embodiments, the characteristic of the video comprises a temporal layer identifier of the video. In some embodiments, the characteristic of the video comprises a profile, a level, or a tier of a video standard.
In some embodiments, the conversion includes encoding the video into the bitstream representation. In some embodiments, the conversion includes decoding the bitstream representation into the video.
As shown in
Source device 110 may include a video source 112, a video encoder 114, and an input/output (I/O) interface 116.
Video source 112 may include a source such as a video capture device, an interface to receive video data from a video content provider, and/or a computer graphics system for generating video data, or a combination of such sources. The video data may comprise one or more pictures. Video encoder 114 encodes the video data from video source 112 to generate a bitstream. The bitstream may include a sequence of bits that form a coded representation of the video data. The bitstream may include coded pictures and associated data. The coded picture is a coded representation of a picture. The associated data may include sequence parameter sets, picture parameter sets, and other syntax structures. I/O interface 116 may include a modulator/demodulator (modem) and/or a transmitter. The encoded video data may be transmitted directly to destination device 120 via I/O interface 116 through network 130a. The encoded video data may also be stored onto a storage medium/server 130b for access by destination device 120.
Destination device 120 may include an I/O interface 126, a video decoder 124, and a display device 122.
I/O interface 126 may include a receiver and/or a modem. I/O interface 126 may acquire encoded video data from the source device 110 or the storage medium/server 130b. Video decoder 124 may decode the encoded video data. Display device 122 may display the decoded video data to a user. Display device 122 may be integrated with the destination device 120, or may be external to destination device 120, which may be configured to interface with an external display device.
Video encoder 114 and video decoder 124 may operate according to a video compression standard, such as the High Efficiency Video Coding (HEVC) standard, Versatile Video Coding (VVC) standard and other current and/or further standards.
Video encoder 200 may be configured to perform any or all of the techniques of this disclosure. In the example of
The functional components of video encoder 200 may include a partition unit 201, a prediction unit 202 which may include a mode select unit 203, a motion estimation unit 204, a motion compensation unit 205 and an intra prediction unit 206, a residual generation unit 207, a transform unit 208, a quantization unit 209, an inverse quantization unit 210, an inverse transform unit 211, a reconstruction unit 212, a buffer 213, and an entropy encoding unit 214.
In other examples, video encoder 200 may include more, fewer, or different functional components. In an example, prediction unit 202 may include an intra block copy (IBC) unit. The IBC unit may perform prediction in an IBC mode in which at least one reference picture is a picture where the current video block is located.
Furthermore, some components, such as motion estimation unit 204 and motion compensation unit 205 may be highly integrated, but are represented in the example of
Partition unit 201 may partition a picture into one or more video blocks. Video encoder 200 and video decoder 300 may support various video block sizes.
Mode select unit 203 may select one of the coding modes, intra or inter, e.g., based on error results, and provide the resulting intra- or inter-coded block to a residual generation unit 207 to generate residual block data and to a reconstruction unit 212 to reconstruct the encoded block for use as a reference picture. In some examples, mode select unit 203 may select a combination of intra and inter prediction (CIIP) mode in which the prediction is based on an inter prediction signal and an intra prediction signal. Mode select unit 203 may also select a resolution for a motion vector (e.g., a sub-pixel or integer pixel precision) for the block in the case of inter-prediction.
To perform inter prediction on a current video block, motion estimation unit 204 may generate motion information for the current video block by comparing one or more reference frames from buffer 213 to the current video block. Motion compensation unit 205 may determine a predicted video block for the current video block based on the motion information and decoded samples of pictures from buffer 213 other than the picture associated with the current video block.
Motion estimation unit 204 and motion compensation unit 205 may perform different operations for a current video block, for example, depending on whether the current video block is in an I slice, a P slice, or a B slice.
In some examples, motion estimation unit 204 may perform uni-directional prediction for the current video block, and motion estimation unit 204 may search reference pictures of list 0 or list 1 for a reference video block for the current video block. Motion estimation unit 204 may then generate a reference index that indicates the reference picture in list 0 or list 1 that contains the reference video block and a motion vector that indicates a spatial displacement between the current video block and the reference video block. Motion estimation unit 204 may output the reference index, a prediction direction indicator, and the motion vector as the motion information of the current video block. Motion compensation unit 205 may generate the predicted video block of the current block based on the reference video block indicated by the motion information of the current video block.
In other examples, motion estimation unit 204 may perform bi-directional prediction for the current video block. Motion estimation unit 204 may search the reference pictures in list 0 for a reference video block for the current video block and may also search the reference pictures in list 1 for another reference video block for the current video block. Motion estimation unit 204 may then generate reference indexes that indicate the reference pictures in list 0 and list 1 containing the reference video blocks and motion vectors that indicate spatial displacements between the reference video blocks and the current video block. Motion estimation unit 204 may output the reference indexes and the motion vectors of the current video block as the motion information of the current video block. Motion compensation unit 205 may generate the predicted video block of the current video block based on the reference video blocks indicated by the motion information of the current video block.
In some examples, motion estimation unit 204 may output a full set of motion information for decoding processing of a decoder.
In some examples, motion estimation unit 204 may not output a full set of motion information for the current video block. Rather, motion estimation unit 204 may signal the motion information of the current video block with reference to the motion information of another video block. For example, motion estimation unit 204 may determine that the motion information of the current video block is sufficiently similar to the motion information of a neighboring video block.
In one example, motion estimation unit 204 may indicate, in a syntax structure associated with the current video block, a value that indicates to the video decoder 300 that the current video block has the same motion information as another video block.
In another example, motion estimation unit 204 may identify, in a syntax structure associated with the current video block, another video block and a motion vector difference (MVD). The motion vector difference indicates a difference between the motion vector of the current video block and the motion vector of the indicated video block. The video decoder 300 may use the motion vector of the indicated video block and the motion vector difference to determine the motion vector of the current video block.
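As a simplified, non-normative illustration of the MVD mechanism just described (merge mode would instead reuse the predictor unchanged):

    def decode_motion_vector(predictor_mv, mvd):
        # The decoder reconstructs the motion vector by adding the signaled
        # motion vector difference to the predictor from the indicated block.
        return (predictor_mv[0] + mvd[0], predictor_mv[1] + mvd[1])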
As discussed above, video encoder 200 may predictively signal the motion vector. Two examples of predictive signaling techniques that may be implemented by video encoder 200 include advanced motion vector prediction (AMVP) and merge mode signaling.
Intra prediction unit 206 may perform intra prediction on the current video block. When intra prediction unit 206 performs intra prediction on the current video block, intra prediction unit 206 may generate prediction data for the current video block based on decoded samples of other video blocks in the same picture. The prediction data for the current video block may include a predicted video block and various syntax elements.
Residual generation unit 207 may generate residual data for the current video block by subtracting (e.g., indicated by the minus sign) the predicted video block(s) of the current video block from the current video block. The residual data of the current video block may include residual video blocks that correspond to different sample components of the samples in the current video block.
In other examples, there may be no residual data for the current video block, for example in a skip mode, and residual generation unit 207 may not perform the subtracting operation.
Transform processing unit 208 may generate one or more transform coefficient video blocks for the current video block by applying one or more transforms to a residual video block associated with the current video block.
After transform processing unit 208 generates a transform coefficient video block associated with the current video block, quantization unit 209 may quantize the transform coefficient video block associated with the current video block based on one or more quantization parameter (QP) values associated with the current video block.
Inverse quantization unit 210 and inverse transform unit 211 may apply inverse quantization and inverse transforms to the transform coefficient video block, respectively, to reconstruct a residual video block from the transform coefficient video block. Reconstruction unit 212 may add the reconstructed residual video block to corresponding samples from one or more predicted video blocks generated by the prediction unit 202 to produce a reconstructed video block associated with the current block for storage in the buffer 213.
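The quantization, inverse quantization and reconstruction steps above can be illustrated with a deliberately simplified scalar model; the real codec derives the step size from the QP and applies rounding offsets and bit-depth clipping that are omitted here.

    def quantize(coeff, step):
        # Map a transform coefficient to an integer level (simplified rounding).
        return int(round(coeff / step))

    def dequantize(level, step):
        # Reconstruct an approximation of the original coefficient.
        return level * step

    def reconstruct(pred, residual, max_val=255):
        # Add the reconstructed residual to the prediction and clip to range.
        return min(max(pred + residual, 0), max_val)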
After reconstruction unit 212 reconstructs the video block, a loop filtering operation may be performed to reduce video blocking artifacts in the video block.
Entropy encoding unit 214 may receive data from other functional components of the video encoder 200. When entropy encoding unit 214 receives the data, entropy encoding unit 214 may perform one or more entropy encoding operations to generate entropy encoded data and output a bitstream that includes the entropy encoded data.
The video decoder 300 may be configured to perform any or all of the techniques of this disclosure. In the example of
In the example of
Entropy decoding unit 301 may retrieve an encoded bitstream. The encoded bitstream may include entropy coded video data (e.g., encoded blocks of video data). Entropy decoding unit 301 may decode the entropy coded video data, and from the entropy decoded video data, motion compensation unit 302 may determine motion information including motion vectors, motion vector precision, reference picture list indexes, and other motion information. Motion compensation unit 302 may, for example, determine such information by performing the AMVP and merge mode.
Motion compensation unit 302 may produce motion compensated blocks, possibly performing interpolation based on interpolation filters. Identifiers for interpolation filters to be used with sub-pixel precision may be included in the syntax elements.
Motion compensation unit 302 may use interpolation filters as used by video encoder 200 during encoding of the video block to calculate interpolated values for sub-integer pixels of a reference block. Motion compensation unit 302 may determine the interpolation filters used by video encoder 200 according to received syntax information and use the interpolation filters to produce predictive blocks.
Motion compensation unit 302 may use some of the syntax information to determine sizes of blocks used to encode frame(s) and/or slice(s) of the encoded video sequence, partition information that describes how each macroblock of a picture of the encoded video sequence is partitioned, modes indicating how each partition is encoded, one or more reference frames (and reference frame lists) for each inter-encoded block, and other information to decode the encoded video sequence.
Intra prediction unit 303 may use intra prediction modes, for example received in the bitstream, to form a prediction block from spatially adjacent blocks. Inverse quantization unit 304 inverse quantizes, i.e., de-quantizes, the quantized video block coefficients provided in the bitstream and decoded by entropy decoding unit 301. Inverse transform unit 305 applies an inverse transform.
Reconstruction unit 306 may sum the residual blocks with the corresponding prediction blocks generated by motion compensation unit 302 or intra-prediction unit 303 to form decoded blocks. If desired, a deblocking filter may also be applied to filter the decoded blocks in order to remove blockiness artifacts. The decoded video blocks are then stored in buffer 307, which provides reference blocks for subsequent motion compensation.
From the foregoing, it will be appreciated that specific embodiments of the presently disclosed technology have been described herein for purposes of illustration, but that various modifications may be made without deviating from the scope of the disclosed embodiments. Accordingly, the presently disclosed technology is not limited except as by the appended claims.
Some embodiments of the disclosed technology include making a decision or determination to enable a video processing tool or mode. In an example, when the video processing tool or mode is enabled, the encoder will use or implement the tool or mode in the processing of a block of video, but may not necessarily modify the resulting bitstream based on the usage of the tool or mode. That is, a conversion from the block of video to the bitstream representation of the video will use the video processing tool or mode when it is enabled based on the decision or determination. In another example, when the video processing tool or mode is enabled, the decoder will process the bitstream with the knowledge that the bitstream has been modified based on the video processing tool or mode. That is, a conversion from the bitstream representation of the video to the block of video will be performed using the video processing tool or mode that was enabled based on the decision or determination.
Some embodiments of the disclosed technology include making a decision or determination to disable a video processing tool or mode. In an example, when the video processing tool or mode is disabled, the encoder will not use the tool or mode in the conversion of the block of video to the bitstream representation of the video. In another example, when the video processing tool or mode is disabled, the decoder will process the bitstream with the knowledge that the bitstream has not been modified using the video processing tool or mode that was disabled based on the decision or determination.
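As a loose illustration of the two determinations described above, the following hypothetical encoder-side helper applies a tool only when it has been enabled, while whether that decision is also reflected in the bitstream (for example, via a flag) is kept as a separate choice; this is a sketch, not a disclosed embodiment.

    def convert_block(block, tool_enabled, apply_tool, signal_in_bitstream=True):
        # Use the tool in processing the block only when it is enabled.
        processed = apply_tool(block) if tool_enabled else block
        # The enable/disable decision may or may not be expressed as bitstream syntax.
        syntax = {'tool_flag': int(tool_enabled)} if signal_in_bitstream else {}
        return processed, syntax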
Implementations of the subject matter and the functional operations described in the present disclosure can be implemented in various systems, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be implemented as one or more computer program products, e.g., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term “data processing unit” or “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., a field programmable gate array (FPGA) or an application specific integrated circuit (ASIC).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and flash memory devices. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
It is intended that the specification, together with the drawings, be considered exemplary only, where exemplary means an example. As used herein, the use of “or” is intended to include “and/or”, unless the context clearly indicates otherwise.
While the present disclosure contains many specifics, these should not be construed as limitations on the scope of any embodiment or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments. Certain features that are described in the present disclosure in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in the present disclosure should not be understood as requiring such separation in all embodiments.
Only a few implementations and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in the present disclosure.
Number | Date | Country | Kind |
---|---|---|---|
PCT/CN2019/107160 | Sep 2019 | WO | international |
This application is a continuation of U.S. application No. 17/701,224, filed on Mar. 22, 2022, which is a continuation of International Patent Application No. PCT/CN2020/116707, filed on Sep. 22, 2020, which claims priority to and benefits of International Patent Application No. PCT/CN2019/107160, filed on Sep. 22, 2019. All the aforementioned patent applications are hereby incorporated by reference in their entireties.
Number | Name | Date | Kind |
---|---|---|---|
9077998 | Wang | Jul 2015 | B2 |
9247258 | Coban | Jan 2016 | B2 |
9473779 | Rapaka | Oct 2016 | B2 |
9591325 | Li | Mar 2017 | B2 |
9628792 | Rapaka | Apr 2017 | B2 |
9807406 | Ramasubramonian | Oct 2017 | B2 |
10057574 | Li | Aug 2018 | B2 |
10321130 | Dong | Jun 2019 | B2 |
10404999 | Liu | Sep 2019 | B2 |
10506230 | Zhang | Dec 2019 | B2 |
10531111 | Li | Jan 2020 | B2 |
10708592 | Dong | Jul 2020 | B2 |
10721469 | Zhang | Jul 2020 | B2 |
10728573 | Sun | Jul 2020 | B2 |
10778974 | Karczewicz | Sep 2020 | B2 |
10819987 | Jang | Oct 2020 | B2 |
10855985 | Zhang | Dec 2020 | B2 |
10965941 | Zhao | Mar 2021 | B2 |
11095922 | Zhang | Aug 2021 | B2 |
11190765 | Bordes | Nov 2021 | B2 |
11277635 | Xiu | Mar 2022 | B2 |
11284116 | Gisquet | Mar 2022 | B2 |
11303890 | Hu | Apr 2022 | B2 |
11589042 | Zhang | Feb 2023 | B2 |
11652998 | Liu | May 2023 | B2 |
11700368 | Zhang | Jul 2023 | B2 |
11706462 | Liu | Jul 2023 | B2 |
12003712 | Zhang | Jun 2024 | B2 |
20040252759 | John Winder | Dec 2004 | A1 |
20100202262 | Adams | Aug 2010 | A1 |
20110274158 | Fu | Nov 2011 | A1 |
20110280304 | Jeon | Nov 2011 | A1 |
20120082244 | Chen | Apr 2012 | A1 |
20120106624 | Huang | May 2012 | A1 |
20120320973 | Xu | Dec 2012 | A1 |
20130044809 | Chong | Feb 2013 | A1 |
20130094569 | Chong | Apr 2013 | A1 |
20130107973 | Wang | May 2013 | A1 |
20130128986 | Tsai | May 2013 | A1 |
20130272624 | Budagavi | Oct 2013 | A1 |
20130322523 | Huang | Dec 2013 | A1 |
20130343447 | Chen | Dec 2013 | A1 |
20140146875 | Chong | May 2014 | A1 |
20140146881 | Kim | May 2014 | A1 |
20140153844 | Jeon | Jun 2014 | A1 |
20140198844 | Hsu | Jul 2014 | A1 |
20150016506 | Fu | Jan 2015 | A1 |
20150016543 | Rapaka | Jan 2015 | A1 |
20150071357 | Pang | Mar 2015 | A1 |
20150172724 | Minezawa | Jun 2015 | A1 |
20160227224 | Hsieh | Aug 2016 | A1 |
20160234492 | Li | Aug 2016 | A1 |
20160241881 | Chao | Aug 2016 | A1 |
20160360210 | Xiu | Dec 2016 | A1 |
20170085917 | Hannuksela | Mar 2017 | A1 |
20170195670 | Budagavi | Jul 2017 | A1 |
20170238020 | Karczewicz | Aug 2017 | A1 |
20170332075 | Karczewicz | Nov 2017 | A1 |
20170374385 | Huang | Dec 2017 | A1 |
20180020215 | Ramamurthy | Jan 2018 | A1 |
20180041778 | Zhang | Feb 2018 | A1 |
20180041779 | Zhang | Feb 2018 | A1 |
20180048907 | Rusanovskyy | Feb 2018 | A1 |
20180054613 | Lin | Feb 2018 | A1 |
20180063527 | Chen | Mar 2018 | A1 |
20180091825 | Zhao | Mar 2018 | A1 |
20180115787 | Koo | Apr 2018 | A1 |
20180184127 | Zhang | Jun 2018 | A1 |
20180192050 | Zhang | Jul 2018 | A1 |
20190044809 | Willis | Feb 2019 | A1 |
20190082193 | Sun | Mar 2019 | A1 |
20190141318 | Li | May 2019 | A1 |
20190141321 | Yin | May 2019 | A1 |
20190166363 | Zhang | May 2019 | A1 |
20190166375 | Jun | May 2019 | A1 |
20190215518 | Alagappan | Jul 2019 | A1 |
20190215532 | He | Jul 2019 | A1 |
20190230353 | Gadde | Jul 2019 | A1 |
20190238845 | Zhang | Aug 2019 | A1 |
20190281273 | Lin | Sep 2019 | A1 |
20190306502 | Gadde | Oct 2019 | A1 |
20190335207 | Abe | Oct 2019 | A1 |
20190373258 | Karczewicz | Dec 2019 | A1 |
20200029080 | Kim | Jan 2020 | A1 |
20200074687 | Lin | Mar 2020 | A1 |
20200092574 | Li | Mar 2020 | A1 |
20200120359 | Hanhart | Apr 2020 | A1 |
20200204801 | Hu | Jun 2020 | A1 |
20200236353 | Zhang | Jul 2020 | A1 |
20200260120 | Hanhart | Aug 2020 | A1 |
20200267381 | Vanam | Aug 2020 | A1 |
20200296425 | Seregin | Sep 2020 | A1 |
20200314418 | Wang | Oct 2020 | A1 |
20200322632 | Hanhart | Oct 2020 | A1 |
20200329239 | Hsiao | Oct 2020 | A1 |
20200374540 | Wang | Nov 2020 | A1 |
20200413038 | Zhang | Dec 2020 | A1 |
20210014537 | Hu | Jan 2021 | A1 |
20210044809 | Abe | Feb 2021 | A1 |
20210076033 | Hu | Mar 2021 | A1 |
20210076034 | Misra | Mar 2021 | A1 |
20210120275 | Misra | Apr 2021 | A1 |
20210136407 | Aono | May 2021 | A1 |
20210136413 | He | May 2021 | A1 |
20210185353 | Xiu | Jun 2021 | A1 |
20210195171 | Rath | Jun 2021 | A1 |
20210211662 | Wang | Jul 2021 | A1 |
20210218962 | Lim | Jul 2021 | A1 |
20210235109 | Liu | Jul 2021 | A1 |
20210274223 | Lim | Sep 2021 | A1 |
20210281838 | Lee | Sep 2021 | A1 |
20210314628 | Zhang | Oct 2021 | A1 |
20210314630 | Misra | Oct 2021 | A1 |
20210321095 | Zhang | Oct 2021 | A1 |
20210321121 | Zhang | Oct 2021 | A1 |
20210337228 | Wang | Oct 2021 | A1 |
20210337239 | Zhang | Oct 2021 | A1 |
20210360238 | Chen | Nov 2021 | A1 |
20210368171 | Zhang | Nov 2021 | A1 |
20210377524 | Zhang | Dec 2021 | A1 |
20210385446 | Liu | Dec 2021 | A1 |
20210392381 | Wang | Dec 2021 | A1 |
20210400267 | Kotra | Dec 2021 | A1 |
20210409699 | Andersson | Dec 2021 | A1 |
20210409703 | Wang | Dec 2021 | A1 |
20220007014 | Wang | Jan 2022 | A1 |
20220103817 | Zhang | Mar 2022 | A1 |
20220116596 | Zhang | Apr 2022 | A1 |
20220132117 | Zhang | Apr 2022 | A1 |
20220132145 | Choi | Apr 2022 | A1 |
20220141461 | Zhang | May 2022 | A1 |
Number | Date | Country |
---|---|---|
101207812 | Jun 2008 | CN |
102804776 | Nov 2012 | CN |
103141106 | Jun 2013 | CN |
103503456 | Jan 2014 | CN |
103518375 | Jan 2014 | CN |
103891292 | Jun 2014 | CN |
104054339 | Sep 2014 | CN |
104205829 | Dec 2014 | CN |
104813661 | Jul 2015 | CN |
105409221 | Mar 2016 | CN |
105847843 | Aug 2016 | CN |
106105227 | Nov 2016 | CN |
106878729 | Jun 2017 | CN |
107211154 | Sep 2017 | CN |
108111851 | Jun 2018 | CN |
108293136 | Jul 2018 | CN |
108432247 | Aug 2018 | CN |
108449591 | Aug 2018 | CN |
108605143 | Sep 2018 | CN |
109076218 | Dec 2018 | CN |
109417632 | Mar 2019 | CN |
109479130 | Mar 2019 | CN |
109600611 | Apr 2019 | CN |
109660797 | Apr 2019 | CN |
109691099 | Apr 2019 | CN |
109792525 | May 2019 | CN |
109996069 | Jul 2019 | CN |
109996070 | Jul 2019 | CN |
114503594 | May 2022 | CN |
114097225 | Apr 2024 | CN |
114175637 | Apr 2024 | CN |
113994671 | May 2024 | CN |
114450954 | Jun 2024 | CN |
114424539 | Jul 2024 | CN |
2772051 | Sep 2014 | EP |
3057320 | Aug 2016 | EP |
3496399 | Jun 2019 | EP |
3984223 | Apr 2022 | EP |
550007 | Sep 2024 | IN |
2014517555 | Jul 2014 | JP |
7549082 | Sep 2024 | JP |
7560227 | Oct 2024 | JP |
102669852 | May 2024 | KR |
102707854 | Sep 2024 | KR |
2521081 | Jun 2014 | RU |
2639958 | Dec 2017 | RU |
11202200257 | Aug 2024 | SG |
2012092777 | Jul 2012 | WO |
2013063455 | May 2013 | WO |
2013109946 | Jul 2013 | WO |
2013148466 | Oct 2013 | WO |
2015011339 | Jan 2015 | WO |
2015070772 | May 2015 | WO |
2015165030 | Nov 2015 | WO |
2016066093 | May 2016 | WO |
2016200777 | Dec 2016 | WO |
2017045580 | Mar 2017 | WO |
2018097607 | May 2018 | WO |
2018119429 | Jun 2018 | WO |
2018182377 | Oct 2018 | WO |
2019089695 | May 2019 | WO |
2019131400 | Jul 2019 | WO |
2019147813 | Aug 2019 | WO |
2019160763 | Aug 2019 | WO |
2021004542 | Jan 2021 | WO |
2021046096 | Mar 2021 | WO |
Entry |
---|
Inter prediction and motion vector coding (CE4); Yang; et al.—Jul. 2018; (Year: 2018). |
Results of reference picture boundary padding; Samsung—Jul. 2018; (Year: 2018). |
Adaptive loop filter with virtual boundary process; Chen—Mediatek—Jan. 2019; (Year: 2019). |
Padding method for samples at variant boundaries in ALF; Liu—Bytedance—Jul. 2019; (Year: 2019). |
Foreign Communication From a Related Counterpart Application, Canadian Application No. 3,146,773, Canadian Office Action dated May 23, 2023, 4 pages. |
Non-Final Office Action dated Mar. 20, 2024 from U.S. Appl. No. 18/169,442. |
Retrieved from the internet: https://vcgit.hhi.fraunhofer.de/jvet/VVCSoftware_VTM/tags/VTM-5.0, Mar. 21, 2022. |
Lim et al. “CE2: Subsampled Laplacian Calculation (Test 6.1, 6.2, 6.3, and 6.4),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting, Macao, CN, Oct. 3-12, 2018, document JVET L0147, 2018. |
Taquet et al. “CE5: Results of Tests CE5-3.1 to CE5-3.4 on Non-Linear Adaptive Loop Filter.” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 14th Meeting: Geneva, CH, Mar. 19-27, 2019, document JVET-N0242, 2019. |
Chen et al. “Algorithm Description of Joint Exploration Test Model 7 (JEM 7),” Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 7th Meeting: Torino, IT, Jul. 13-21, 2017, document JVET-G1001, 2011. |
M. Karczewicz, L. Zhang, W. Chien and X. Li, “Geometry transformation based adaptive in-loop filter,” Picture Coding Symposium (PCS), 2016. |
Wang et al. “AHG12: Sub-Picture Based Motion-Constrained Independent Regions,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 15th Meeting: Gothenburg, SE, Jul. 3-12, 2019, document JVET-O0141, 2019. |
Wang et al. “AHG12: Harmonized Proposal for Sub-Picture-Based Coding for VVC,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 14th Meeting: Geneva, CH, Mar. 19-27, 2019, document JVET-N0826, 2019. |
Chen et al. “CE5-1: Adaptive Loop Filter with Virtual Boundary Processing,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 14th Meeting: Geneva, CH, Mar. 19-27, 2019, document JVET-N0088, 2019. |
Park et al. “CE4: Results on Reference Picture Boundary Padding in J0025,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 11th Meeting: Ljubljana, SI, Jul. 10-18, 2018, document JVET-K0117, 2018. |
Hu et al. “CE5-Related: Unification of Picture Boundary and Line Buffer Handling for ALF,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 14th Meeting: Geneva, CH, Mar. 19-27, 2019, document JVET-N0416, 2019. |
Liu et al. “Non-CE5: Padding Method for Samples at Variant Boundaries in ALF,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 15th Meeting: Gothenburg, SE, Jul. 3-12, 2019, document JVET-O0625, 2019. |
Kotra et al. “CE5-2: Loop filter Line Buffer Reduction,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 14th Meeting: Geneva, CH, Mar. 19-27, 2019, document JVET-N0180, 2019. |
Chen et al. “Description of Core Experiment 5 (CE5): Adaptive Loop Filter,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 13th Meeting: Marrakech, MA, Jan. 9-18, 2019, document JVET-M1025, 2019. |
Chen et al. “Adaptive Loop Filter with Virtual Boundary Processing,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 13th Meeting: Marrakech, MA, Jan. 9-18, 2019, document JVET-M0164, 2019. |
Liu et al. “Non-CE5: Fixes of ALF Sample Padding,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 16th Meeting: Geneva, CH, Oct. 1-11, 2019, document JVET-P0492, 2019. |
Sauer et al. “Geometry Padding for Cube Based 360 Degree Video Using Uncoded Areas,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 15th Meeting: Gothenburg, SE, Jul. 3-12, 2019, document JVET-O0487, 2019. |
Helmrich et al. “CE11-Related: Very Strong Deblocking Filtering with Conditional Activation Signaling,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 12th Meeting: Macao, CN, Oct. 3-12, 2019, document JVET-L0523, 2018. |
Heng et al. “Non-CE8: Comments on Current Picture Referencing,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 13th Meeting: Marrakech, MA, Jan. 9-18, 2019, document JVET-M0402, 2019. |
Tsai et al. “AHG6: Alf with Modified Padding Process,” Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 10th Meeting, Stockholm, SE, Jul. 11-20, 2012, document JCTVC-J0050, 2012. |
Wang et al. “AHG12: Text for Subpicture Agreements Integration,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 15th Meeting, Gothenburg, SE, Jul. 3-12, 2019, document JVET-O1143, 2019. |
Chen et al. “On Padding Process of Adaptive Loop Filter,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 16th Meeting, Geneva, CH, Oct. 1-11, 2019, document JVET-P0121, 2019. |
Lai et al. “CE5-Related: ALF Padding Process When Raster Scan Slices are Used,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 16th Meeting, Geneva, CH Oct. 1-11, 2019, document JVET-P0156, 2019. |
Hu et al. “AHG12/Non-CE5: Extending Slice Boundary Processing for Adaptive Loop Filter for Raster Scanned Slices,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 16th Meeting, Geneva, CH, Oct. 1-11, 2019, document JVET-P0552, 2019. |
Liu et al. “Non-CE5: Suggested Text Changes for ALF Padding Process,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 16th Meeting, Geneva, CH, Oct. 1-11, 2019, document JVET-P1038, 2019. |
Lee et al. “Non-CE6: Simplified Reference Samples Padding for Intra Prediction,” Joint Collaborative Team on Video Coding of ISO/IEC JTC 1/SC 29/WG 11 and ITU-T SG 16, Nov. 21-30, 2011, Geneva, document JCTVC-G791, 2011. |
Document: JVET-M0385, Taquet, J., et al., “Non-Linear Adaptive Loop Filter,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 13th Meeting: Marrakech, MA, Jan. 9-18, 2019, 6 pages. |
Document: JCTVC-G212, Chen, C., et al., “Non-CE8.c.7: Single-source SAO and ALF virtual boundary processing with cross9×9,” Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11 7th Meeting: Geneva, CH, Nov. 21-30, 2011, 25 pages. |
Document: JVET-P0053-v1, Zhou, M., “AHG16/HLS: A clean-up for the ALF sample padding,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 16th Meeting: Geneva, CH, Oct. 1-11, 2019, 3 pages. |
Document: JVET-O0636_r1, Misra, K., et al., “Cross-Component Adaptive Loop Filter for chroma,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 15th Meeting: Gothenburg, SE, Jul. 3-12, 2019, 9 pages. |
Examination Report from Indian Patent Application No. 202247015882 mailed Feb. 13, 2023 (5 pages). |
Non Final Office Action from U.S. Appl. No. 17/856,601 dated Mar. 14, 2023. |
Extended European Search Report from European Patent Application No. 20865073.9 dated Oct. 5, 2022 (7 pages). |
Extended European Search Report from European Patent Application No. 20868382.1 dated Oct. 14, 2022 (11 pages). |
Extended European Search Report from European Patent Application No. 20875114.9 dated Nov. 7, 2022 (12 pages). |
Notice of Allowance from U.S. Appl. No. 17/575,754 dated Oct. 28, 2022. |
Notice of Allowance from U.S. Appl. No. 17/716,380 dated Dec. 8, 2022. |
Extended European Search Report from European Patent Application No. 20836696.3 dated Jul. 5, 2022 (12 pages). |
Partial European Search Report from European Patent Application No. 20836703.7 dated Jul. 11, 2022 (18 pages). |
Extended European Search Report from European Patent Application No. 220840191.9 dated Jul. 19, 2022 (16 pages). |
Examination Report from Indian Patent Application No. 202247015795 mailed Aug. 26, 2022 (6 pages). |
Final Office Action from U.S. Appl. No. 17/548,187 dated Jun. 22, 2022. |
Non Final Office Action from U.S. Appl. No. 17/705,488 dated Jul. 12, 2022. |
International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/096044 dated Sep. 17, 2020 (9 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/096045 dated Sep. 24, 2020 (13 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/100962 dated Oct. 12, 2020 (10 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/101589 dated Oct. 14, 2020 (12 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/102003 dated Oct. 19, 2020 (9 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/102040 dated Oct. 19, 2020 (10 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/116707 dated Dec. 21, 2020 (9 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/116708 dated Dec. 10, 2020 (11 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/118029 dated Dec. 30, 2020 (10 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/120063 dated Jan. 4, 2021 (9 pages). |
Non Final Office Action from U.S. Appl. No. 17/548,187 dated Mar. 2, 2022. |
Non Final Office Action from U.S. Appl. No. 17/548,134 dated Mar. 3, 2022. |
Non Final Office Action from U.S. Appl. No. 17/575,754 dated Mar. 28, 2022. |
Non Final Office Action from U.S. Appl. No. 17/701,224 dated Jul. 5, 2022. |
Notice of Allowance from U.S. Appl. No. 17/701,224 dated Oct. 27, 2022. |
Document: JVET-O0142-v1, Wang, Y.K., et al., “AHG12: On turning off ALF filtering at brick and slice boundaries,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 15th Meeting: Gothenburg, SE, Jul. 3-12, 2019, 2 pages. |
Chinese Notice of Allowance from Chinese Patent Application No. 202080051539.3 dated Jul. 19, 2024, 14 pages. |
Korean Notice of Allowance from Korean Patent Application No. 10-2022-7008192 dated Jul. 19, 2024, 8 pages. |
Non-Final Office Action from U.S. Appl. No. 18/503,773 dated Aug. 19, 2024, 46 pages. |
Final Office Action from U.S. Appl. No. 18/174,961 dated Aug. 13, 2024, 18 pages. |
Chinese Office Action from Chinese Patent Application No. 202080071449.0 dated Sep. 29, 2024, 7 pages. |
Indonesian Office Action from Indonesian Patent Application No. P00202200872 dated Sep. 25, 2024, 5 pages. |
Document: JVET-K1024-v2, Yang, H., et al., “Description of Core Experiment 4 (CE4): Inter prediction and motion vector coding,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 11th Meeting: Ljubljana, SI, Jul. 10-18, 2018, 46 pages. |
Non-Final Office Action from U.S. Appl. No. 18/174,961 dated Apr. 26, 2024, 46 pages. |
Singaporean Notice of Allowance from Singapore Application No. 11202200257S dated Jun. 12, 2024, 6 pages. |
Singaporean Office Action from Singapore Patent Application No. 11202200379W dated May 27, 2024, 10 pages. |
Chinese Notice of Allowance from Chinese Patent Application No. 202080068120.9 dated Apr. 10, 2024, 7 pages. |
Number | Date | Country | |
---|---|---|---|
20230199190 A1 | Jun 2023 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 17701224 | Mar 2022 | US |
Child | 18172028 | US | |
Parent | PCT/CN2020/116707 | Sep 2020 | WO |
Child | 17701224 | US |