Interaction between LUT and AMVP

Information

  • Patent Number
    12,167,018
  • Date Filed
    Thursday, October 27, 2022
  • Date Issued
    Tuesday, December 10, 2024
Abstract
A method of video decoding is provided to include maintaining tables, wherein each table includes a set of motion candidates and each motion candidate is associated with corresponding motion information; and performing a conversion between a first video block and a bitstream representation of a video including the first video block, the performing of the conversion including using at least some of the set of motion candidates as a predictor to process motion information of the first video block.
Description
TECHNICAL FIELD

This patent document relates to video coding and decoding techniques, devices and systems.


BACKGROUND

In spite of the advances in video compression, digital video still accounts for the largest bandwidth use on the internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video increases, it is expected that the bandwidth demand for digital video usage will continue to grow.


SUMMARY

This document discloses methods, systems, and devices for encoding and decoding digital video.


In one example aspect, a method of video decoding is provided to include maintaining tables, wherein each table includes a set of motion candidates and each motion candidate is associated with corresponding motion information; and performing a conversion between a first video block and a bitstream representation of a video including the first video block, the performing of the conversion including using at least some of the set of motion candidates as a predictor to process motion information of the first video block.


In yet another representative aspect, the various techniques described herein may be embodied as a computer program product stored on a non-transitory computer readable medium. The computer program product includes program code for carrying out the methods described herein.


The details of one or more implementations are set forth in the accompanying attachments, the drawings, and the description below. Other features will be apparent from the description and drawings, and from the claims.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram showing an example of a video encoder implementation.



FIG. 2 illustrates macroblock partitioning in the H.264 video coding standard.



FIG. 3 illustrates an example of splitting coding blocks (CB) into prediction blocks (PU).



FIG. 4 illustrates an example implementation for subdivision of a CTB into CBs and transform block (TBs). Solid lines indicate CB boundaries and dotted lines indicate TB boundaries, including an example CTB with its partitioning, and a corresponding quadtree.



FIG. 5 shows an example of a Quad Tree Binary Tree (QTBT) structure for partitioning video data.



FIG. 6 shows an example of video block partitioning.



FIG. 7 shows an example of quad-tree partitioning.



FIG. 8 shows an example of tree-type signaling.



FIG. 9 shows an example of a derivation process for merge candidate list construction.



FIG. 10 shows example positions of spatial merge candidates.



FIG. 11 shows examples of candidate pairs considered for redundancy check of spatial merge candidates.



FIG. 12 shows examples of positions for the second PU of N×2N and 2N×N partitions.



FIG. 13 illustrates motion vector scaling for temporal merge candidates.



FIG. 14 shows candidate positions for temporal merge candidates, and their co-located picture.



FIG. 15 shows an example of a combined bi-predictive merge candidate.



FIG. 16 shows an example of a derivation process for motion vector prediction candidates.



FIG. 17 shows an example of motion vector scaling for spatial motion vector candidates.



FIG. 18 shows an example Alternative Temporal Motion Vector Prediction (ATMVP) for motion prediction of a CU.



FIG. 19 pictorially depicts an example of identification of a source block and a source picture.



FIG. 20 shows an example of one CU with four sub-blocks and neighboring blocks.



FIG. 21 illustrates an example of bilateral matching.



FIG. 22 illustrates an example of template matching.



FIG. 23 depicts an example of unilateral Motion Estimation (ME) in Frame Rate Up Conversion (FRUC).



FIG. 24 shows an example of decoder-side motion vector refinement (DMVR) based on bilateral template matching.



FIG. 25 shows an example of spatially neighboring blocks used to derive spatial merge candidates.



FIG. 26 depicts an example of the selection of a representative position for look-up table updates.



FIGS. 27A and 27B illustrate examples of updating a look-up table with a new set of motion information.



FIG. 28 is a block diagram of an example of a hardware platform for implementing a visual media decoding or a visual media encoding technique described in the present document.



FIG. 29 is a flowchart for another example method of video bitstream processing.



FIG. 30 shows an example of a decoding flow chart with the proposed HMVP method.



FIG. 31 shows examples of updating tables using the proposed HMVP method.



FIGS. 32A and 32B show examples of a redundancy-removal based look up table (LUT) updating method (with one redundancy motion candidate removed).



FIGS. 33A and 33B show examples of a redundancy-removal based LUT updating method (with multiple redundancy motion candidates removed).



FIG. 34 shows an example of differences between Type 1 and Type 2 blocks.





DETAILED DESCRIPTION

To improve compression ratio of video, researchers are continually looking for new techniques by which to encode video.


1. INTRODUCTION

The present document is related to video coding technologies. Specifically, it is related to motion information coding (such as merge mode and Advanced Motion Vector Prediction (AMVP) mode) in video coding. It may be applied to existing video coding standards like HEVC, or to the standard to be finalized (Versatile Video Coding). It may also be applicable to future video coding standards or video codecs.


Brief Discussion


Video coding standards have evolved primarily through the development of the well-known International Telecommunication Union-Telecommunication Standardization Sector (ITU-T) and International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) standards. The ITU-T produced H.261 and H.263, ISO/IEC produced Moving Picture Experts Group (MPEG)-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video, H.264/MPEG-4 Advanced Video Coding (AVC), and H.265/High Efficiency Video Coding (HEVC) standards. Since H.262, video coding standards have been based on the hybrid video coding structure, wherein temporal prediction plus transform coding are utilized. An example of a typical HEVC encoder framework is depicted in FIG. 1.


2.1 Partition Structure


2.1.1 Partition Tree Structure in H.264/AVC


The core of the coding layer in previous standards was the macroblock, containing a 16×16 block of luma samples and, in the usual case of 4:2:0 color sampling, two corresponding 8×8 blocks of chroma samples.


An intra-coded block uses spatial prediction to exploit spatial correlation among pixels. Two partitions are defined: 16×16 and 4×4.


An inter-coded block uses temporal prediction, instead of spatial prediction, by estimating motion among pictures. Motion can be estimated independently for either the 16×16 macroblock or any of its sub-macroblock partitions: 16×8, 8×16, 8×8, 8×4, 4×8, or 4×4 (see FIG. 2). Only one motion vector (MV) per sub-macroblock partition is allowed.


2.1.2 Partition Tree Structure in HEVC


In HEVC, a CTU is split into CUs by using a quadtree structure denoted as the coding tree to adapt to various local characteristics. The decision whether to code a picture area using inter-picture (temporal) or intra-picture (spatial) prediction is made at the CU level. Each CU can be further split into one, two or four PUs according to the PU splitting type. Inside one PU, the same prediction process is applied and the relevant information is transmitted to the decoder on a PU basis. After obtaining the residual block by applying the prediction process based on the PU splitting type, a CU can be partitioned into transform units (TUs) according to another quadtree structure similar to the coding tree for the CU. One key feature of the HEVC structure is that it has multiple partition concepts, including CU, PU, and TU.


The various features involved in hybrid video coding using HEVC are highlighted as follows.


1) Coding tree units and coding tree block (CTB) structure: The analogous structure in HEVC is the coding tree unit (CTU), which has a size selected by the encoder and can be larger than a traditional macroblock. The CTU consists of a luma CTB and the corresponding chroma CTBs and syntax elements. The size L×L of a luma CTB can be chosen as L=16, 32, or 64 samples, with the larger sizes typically enabling better compression. HEVC then supports a partitioning of the CTBs into smaller blocks using a tree structure and quadtree-like signaling.


2) Coding units (CUs) and coding blocks (CBs): The quadtree syntax of the CTU specifies the size and positions of its luma and chroma CBs. The root of the quadtree is associated with the CTU. Hence, the size of the luma CTB is the largest supported size for a luma CB. The splitting of a CTU into luma and chroma CBs is signaled jointly. One luma CB and ordinarily two chroma CBs, together with associated syntax, form a coding unit (CU). A CTB may contain only one CU or may be split to form multiple CUs, and each CU has an associated partitioning into prediction units (PUs) and a tree of transform units (TUs).


3) Prediction units and prediction blocks (PBs): The decision whether to code a picture area using inter picture or intra picture prediction is made at the CU level. A PU partitioning structure has its root at the CU level. Depending on the basic prediction-type decision, the luma and chroma CBs can then be further split in size and predicted from luma and chroma prediction blocks (PBs). HEVC supports variable PB sizes from 64×64 down to 4×4 samples. FIG. 3 shows examples of allowed PBs for a M×M CU.


4) TUs and transform blocks: The prediction residual is coded using block transforms. A TU tree structure has its root at the CU level. The luma CB residual may be identical to the luma transform block (TB) or may be further split into smaller luma TBs. The same applies to the chroma TBs. Integer basis functions similar to those of a discrete cosine transform (DCT) are defined for the square TB sizes 4×4, 8×8, 16×16, and 32×32. For the 4×4 transform of luma intra picture prediction residuals, an integer transform derived from a form of discrete sine transform (DST) is alternatively specified.



FIG. 4 shows an example of a subdivision of a CTB into CBs and transform blocks (TBs). Solid lines indicate CB borders and dotted lines indicate TB borders. (a) CTB with its partitioning. (b) Corresponding quadtree.


2.1.2.1 Tree-Structured Partitioning into Transform Blocks and Units


For residual coding, a CB can be recursively partitioned into transform blocks (TBs). The partitioning is signaled by a residual quadtree. Only square CB and TB partitioning is specified, where a block can be recursively split into quadrants, as illustrated in FIG. 4. For a given luma CB of size M×M, a flag signals whether it is split into four blocks of size M/2×M/2. If further splitting is possible, as signaled by a maximum depth of the residual quadtree indicated in the SPS, each quadrant is assigned a flag that indicates whether it is split into four quadrants. The leaf node blocks resulting from the residual quadtree are the transform blocks that are further processed by transform coding. The encoder indicates the maximum and minimum luma TB sizes that it will use. Splitting is implicit when the CB size is larger than the maximum TB size. Not splitting is implicit when splitting would result in a luma TB size smaller than the indicated minimum. The chroma TB size is half the luma TB size in each dimension, except when the luma TB size is 4×4, in which case a single 4×4 chroma TB is used for the region covered by four 4×4 luma TBs. In the case of intra-picture-predicted CUs, the decoded samples of the nearest-neighboring TBs (within or outside the CB) are used as reference data for intra picture prediction.
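The implicit split rules just described can be summarized in a short sketch. The following Python fragment is a minimal illustration under the stated rules, not the HEVC reference implementation; the function name and return labels are invented for this example.

def rqt_split_decision(cb_size, max_tb_size, min_tb_size):
    """Decide the residual-quadtree split behaviour for a luma block.
    Splitting is implicit when the block exceeds the maximum TB size;
    not splitting is implicit when a split would fall below the minimum."""
    if cb_size > max_tb_size:
        return "must_split"        # implicit split
    if cb_size // 2 < min_tb_size:
        return "cannot_split"      # implicit no-split
    return "signalled"             # a split flag in the bitstream decides

# Example: a 64x64 CB with a 32x32 maximum TB size must split, while an
# 8x8 block with an 8x8 minimum TB size cannot split further.
assert rqt_split_decision(64, 32, 4) == "must_split"
assert rqt_split_decision(8, 32, 8) == "cannot_split"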


In contrast to previous standards, the HEVC design allows a TB to span across multiple PBs for inter-picture predicted CUs to maximize the potential coding efficiency benefits of the quadtree-structured TB partitioning.


2.1.2.2 Parent and Child Nodes


A CTB is divided according to a quad-tree structure, the nodes of which are coding units. The plurality of nodes in a quad-tree structure includes leaf nodes and non-leaf nodes. The leaf nodes have no child nodes in the tree structure (i.e., the leaf nodes are not further split). The non-leaf nodes include a root node of the tree structure. The root node corresponds to an initial video block of the video data (e.g., a CTB). For each respective non-root node of the plurality of nodes, the respective non-root node corresponds to a video block that is a sub-block of a video block corresponding to a parent node in the tree structure of the respective non-root node. Each respective non-leaf node of the plurality of non-leaf nodes has one or more child nodes in the tree structure.


2.1.3 Quadtree Plus Binary Tree Block Structure with Larger CTUs in JEM


To explore future video coding technologies beyond HEVC, the Joint Video Exploration Team (JVET) was founded by the Video Coding Experts Group (VCEG) and MPEG jointly in 2015. Since then, many new methods have been adopted by JVET and put into the reference software named Joint Exploration Model (JEM).


2.1.3.1 QTBT Block Partitioning Structure


Different from HEVC, the QTBT structure removes the concepts of multiple partition types, i.e. it removes the separation of the CU, PU and TU concepts, and supports more flexibility for CU partition shapes. In the QTBT block structure, a CU can have either a square or rectangular shape. As shown in FIG. 5, a coding tree unit (CTU) is first partitioned by a quadtree structure. The quadtree leaf nodes are further partitioned by a binary tree structure. There are two splitting types, symmetric horizontal splitting and symmetric vertical splitting, in the binary tree splitting. The binary tree leaf nodes are called coding units (CUs), and that segmentation is used for prediction and transform processing without any further partitioning. This means that the CU, PU and TU have the same block size in the QTBT coding block structure. In the JEM, a CU sometimes consists of coding blocks (CBs) of different colour components, e.g. one CU contains one luma CB and two chroma CBs in the case of P and B slices of the 4:2:0 chroma format and sometimes consists of a CB of a single component, e.g., one CU contains only one luma CB or just two chroma CBs in the case of I slices.


The following parameters are defined for the QTBT partitioning scheme.

    • CTU size: the root node size of a quadtree, the same concept as in HEVC
    • MinQTSize: the minimally allowed quadtree leaf node size
    • MaxBTSize: the maximally allowed binary tree root node size
    • MaxBTDepth: the maximally allowed binary tree depth
    • MinBTSize: the minimally allowed binary tree leaf node size


In one example of the QTBT partitioning structure, the CTU size is set as 128×128 luma samples with two corresponding 64×64 blocks of chroma samples, the MinQTSize is set as 16×16, the MaxBTSize is set as 64×64, the MinBTSize (for both width and height) is set as 4, and the MaxBTDepth is set as 4. The quadtree partitioning is applied to the CTU first to generate quadtree leaf nodes. The quadtree leaf nodes may have a size from 16×16 (i.e., the MinQTSize) to 128×128 (i.e., the CTU size). If the leaf quadtree node is 128×128, it will not be further split by the binary tree since the size exceeds the MaxBTSize (i.e., 64×64). Otherwise, the leaf quadtree node can be further partitioned by the binary tree. Therefore, the quadtree leaf node is also the root node for the binary tree, and its binary tree depth is 0. When the binary tree depth reaches MaxBTDepth (i.e., 4), no further splitting is considered. When the binary tree node has a width equal to MinBTSize (i.e., 4), no further horizontal splitting is considered. Similarly, when the binary tree node has a height equal to MinBTSize, no further vertical splitting is considered. The leaf nodes of the binary tree are further processed by prediction and transform processing without any further partitioning. In the JEM, the maximum CTU size is 256×256 luma samples.
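As a concrete illustration of these parameters, the following Python sketch (an assumed helper, not taken from any reference software) checks which binary-tree splits remain allowed under the example settings above. The pairing of width with horizontal splitting follows the wording of the paragraph above.

# Example QTBT settings from the paragraph above.
MAX_BT_SIZE, MAX_BT_DEPTH, MIN_BT_SIZE = 64, 4, 4

def allowed_bt_splits(width, height, bt_depth):
    """Binary-tree split types still allowed for a node, following the
    wording above (width at MinBTSize blocks horizontal splitting,
    height at MinBTSize blocks vertical splitting)."""
    if width > MAX_BT_SIZE or height > MAX_BT_SIZE:
        return []                  # node exceeds MaxBTSize: no BT splitting
    if bt_depth >= MAX_BT_DEPTH:
        return []                  # MaxBTDepth reached
    splits = []
    if width > MIN_BT_SIZE:
        splits.append("horizontal")
    if height > MIN_BT_SIZE:
        splits.append("vertical")
    return splits

assert allowed_bt_splits(128, 128, 0) == []        # exceeds MaxBTSize (64)
assert allowed_bt_splits(4, 16, 1) == ["vertical"] # width at MinBTSize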



FIG. 5 (left) illustrates an example of block partitioning by using QTBT, and FIG. 5 (right) illustrates the corresponding tree representation. The solid lines indicate quadtree splitting and dotted lines indicate binary tree splitting. In each splitting (i.e., non-leaf) node of the binary tree, one flag is signalled to indicate which splitting type (i.e., horizontal or vertical) is used, where 0 indicates horizontal splitting and 1 indicates vertical splitting. For the quadtree splitting, there is no need to indicate the splitting type since quadtree splitting always splits a block both horizontally and vertically to produce 4 sub-blocks with an equal size.


In addition, the QTBT scheme supports the ability for the luma and chroma to have a separate QTBT structure. Currently, for P and B slices, the luma and chroma CTBs in one CTU share the same QTBT structure. However, for I slices, the luma CTB is partitioned into CUs by a QTBT structure, and the chroma CTBs are partitioned into chroma CUs by another QTBT structure. This means that a CU in an I slice consists of a coding block of the luma component or coding blocks of two chroma components, and a CU in a P or B slice consists of coding blocks of all three colour components.


In HEVC, inter prediction for small blocks is restricted to reduce the memory access of motion compensation, such that bi-prediction is not supported for 4×8 and 8×4 blocks, and inter prediction is not supported for 4×4 blocks. In the QTBT of the JEM, these restrictions are removed.


2.1.4 Ternary-Tree for Versatile Video Coding (VVC)


In some embodiments, tree types other than quad-tree and binary-tree are supported. In the implementation, two more ternary tree (TT) partitions, i.e., horizontal and vertical center-side ternary-trees are introduced, as shown in FIGS. 6 (d) and (e).



FIG. 6 shows: (a) quad-tree partitioning (b) vertical binary-tree partitioning (c) horizontal binary-tree partitioning (d) vertical center-side ternary-tree partitioning (e) horizontal center-side ternary-tree partitioning.


In some implementations, there are two levels of trees, region tree (quad-tree) and prediction tree (binary-tree or ternary-tree). A CTU is firstly partitioned by region tree (RT). A RT leaf may be further split with prediction tree (PT). A PT leaf may also be further split with PT until max PT depth is reached. A PT leaf is the basic coding unit. It is still called CU for convenience. A CU cannot be further split. Prediction and transform are both applied on CU in the same way as JEM. The whole partition structure is named ‘multiple-type-tree’.


2.1.5 Partitioning Structure


The tree structure used in this response, called Multi-Tree Type (MTT), is a generalization of the QTBT. In QTBT, as shown in FIG. 5, a Coding Tree Unit (CTU) is firstly partitioned by a quad-tree structure. The quad-tree leaf nodes are further partitioned by a binary-tree structure.


The fundamental structure of MTT consists of two types of tree nodes: Region Tree (RT) and Prediction Tree (PT), supporting nine types of partitions, as shown in FIG. 7.



FIG. 7 shows: (a) quad-tree partitioning (b) vertical binary-tree partitioning (c) horizontal binary-tree partitioning (d) vertical ternary-tree partitioning (e) horizontal ternary-tree partitioning (f) horizontal-up asymmetric binary-tree partitioning (g) horizontal-down asymmetric binary-tree partitioning (h) vertical-left asymmetric binary-tree partitioning (i) vertical-right asymmetric binary-tree partitioning.


A region tree can recursively split a CTU into square blocks down to a 4×4 size region tree leaf node. At each node in a region tree, a prediction tree can be formed from one of three tree types: Binary Tree (BT), Ternary Tree (TT), and Asymmetric Binary Tree (ABT). In a PT split, it is prohibited to have a quadtree partition in branches of the prediction tree. As in JEM, the luma tree and the chroma tree are separated in I slices. The signaling methods for RT and PT are illustrated in FIG. 8.


2.2 Inter Prediction in HEVC/H.265


Each inter-predicted PU has motion parameters for one or two reference picture lists. Motion parameters include a motion vector and a reference picture index. Usage of one of the two reference picture lists may also be signalled using inter_pred_idc. Motion vectors may be explicitly coded as deltas relative to predictors; such a coding mode is called AMVP mode.


When a CU is coded with skip mode, one PU is associated with the CU, and there are no significant residual coefficients, no coded motion vector delta, and no reference picture index. A merge mode is specified whereby the motion parameters for the current PU are obtained from neighbouring PUs, including spatial and temporal candidates. The merge mode can be applied to any inter-predicted PU, not only to skip mode. The alternative to merge mode is the explicit transmission of motion parameters, where the motion vector, the corresponding reference picture index for each reference picture list, and the reference picture list usage are signalled explicitly for each PU.


When signalling indicates that one of the two reference picture lists is to be used, the PU is produced from one block of samples. This is referred to as ‘uni-prediction’. Uni-prediction is available both for P-slices and B-slices.


When signalling indicates that both of the reference picture lists are to be used, the PU is produced from two blocks of samples. This is referred to as ‘bi-prediction’. Bi-prediction is available for B-slices only.


The following text provides the details on the inter prediction modes specified in HEVC. The description will start with the merge mode.


2.2.1 Merge Mode


2.2.1.1 Derivation of Candidates for Merge Mode


When a PU is predicted using merge mode, an index pointing to an entry in the merge candidates list is parsed from the bitstream and used to retrieve the motion information. The construction of this list is specified in the HEVC standard and can be summarized according to the following sequence of steps:

    • Step 1: Initial candidates derivation
      • Step 1.1: Spatial candidates derivation
      • Step 1.2: Redundancy check for spatial candidates
      • Step 1.3: Temporal candidates derivation
    • Step 2: Additional candidates insertion
      • Step 2.1: Creation of bi-predictive candidates
      • Step 2.2: Insertion of zero motion candidates


These steps are also schematically depicted in FIG. 9. For spatial merge candidate derivation, a maximum of four merge candidates are selected among candidates that are located in five different positions. For temporal merge candidate derivation, a maximum of one merge candidate is selected among two candidates. Since a constant number of candidates is assumed for each PU at the decoder, additional candidates are generated when the number of candidates does not reach the maximum number of merge candidates (MaxNumMergeCand), which is signalled in the slice header. Since the number of candidates is constant, the index of the best merge candidate is encoded using truncated unary binarization (TU). If the size of the CU is equal to 8, all the PUs of the current CU share a single merge candidate list, which is identical to the merge candidate list of the 2N×2N prediction unit.
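As a small illustration of the truncated unary binarization mentioned above (a sketch, not the normative CABAC binarization process), the merge index can be binarized as follows in Python:

def truncated_unary(value, c_max):
    """Truncated unary binarization: 'value' ones, then a terminating zero,
    except that the terminating zero is dropped when value == c_max."""
    assert 0 <= value <= c_max
    bins = [1] * value
    if value < c_max:
        bins.append(0)
    return bins

# With MaxNumMergeCand = 5, merge indices 0..4 are binarized as:
# 0 -> [0], 1 -> [1,0], 2 -> [1,1,0], 3 -> [1,1,1,0], 4 -> [1,1,1,1]
for idx in range(5):
    print(idx, truncated_unary(idx, 4))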


In the following, the operations associated with the aforementioned steps are detailed.


2.2.1.2 Spatial Candidate Derivation


In the derivation of spatial merge candidates, a maximum of four merge candidates are selected among candidates located in the positions depicted in FIG. 10. The order of derivation is A1, B1, B0, A0 and B2. Position B2 is considered only when any PU of positions A1, B1, B0, A0 is not available (e.g., because it belongs to another slice or tile) or is intra coded. After the candidate at position A1 is added, the addition of the remaining candidates is subject to a redundancy check which ensures that candidates with the same motion information are excluded from the list, so that coding efficiency is improved. To reduce computational complexity, not all possible candidate pairs are considered in the mentioned redundancy check. Instead, only the pairs linked with an arrow in FIG. 11 are considered, and a candidate is only added to the list if the corresponding candidate used for the redundancy check does not have the same motion information. Another source of duplicate motion information is the "second PU" associated with partitions different from 2N×2N. As an example, FIG. 12 depicts the second PU for the cases of N×2N and 2N×N, respectively. When the current PU is partitioned as N×2N, the candidate at position A1 is not considered for list construction, since adding this candidate would lead to two prediction units having the same motion information, which is redundant to having just one PU in the coding unit. Similarly, position B1 is not considered when the current PU is partitioned as 2N×N.
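A minimal sketch of this derivation order and the limited pairwise redundancy check is given below. The pair relations follow the description of FIG. 11 above; the helper names are invented for the example.

# Checking order and the limited redundancy pairs described above.
ORDER = ("A1", "B1", "B0", "A0", "B2")
PAIRS = {"B1": ("A1",), "B0": ("B1",), "A0": ("A1",), "B2": ("A1", "B1")}

def spatial_merge_candidates(motion):
    """motion maps a position to its motion info, or None if the PU is
    unavailable or intra coded. Returns up to four spatial candidates."""
    cands = []
    for pos in ORDER:
        if pos == "B2" and all(
                motion.get(p) is not None for p in ("A1", "B1", "B0", "A0")):
            continue                  # B2 only if an earlier PU is missing
        mi = motion.get(pos)
        if mi is None:
            continue
        if any(mi == motion.get(p) for p in PAIRS.get(pos, ())):
            continue                  # duplicate found by the pairwise check
        cands.append(mi)
    return cands[:4]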


2.2.1.3 Temporal Candidate Derivation


In this step, only one candidate is added to the list. Particularly, in the derivation of this temporal merge candidate, a scaled motion vector is derived based on the co-located PU belonging to the picture which has the smallest picture order count (POC) difference with the current picture within the given reference picture list. The reference picture list to be used for derivation of the co-located PU is explicitly signaled in the slice header. The scaled motion vector for the temporal merge candidate is obtained as illustrated by the dashed line in FIG. 13. It is scaled from the motion vector of the co-located PU using the POC distances tb and td, where tb is defined as the POC difference between the reference picture of the current picture and the current picture, and td is defined as the POC difference between the reference picture of the co-located picture and the co-located picture. The reference picture index of the temporal merge candidate is set equal to zero. A practical realization of the scaling process is described in the HEVC specification. For a B-slice, two motion vectors, one for reference picture list 0 and the other for reference picture list 1, are obtained and combined to make the bi-predictive merge candidate.
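A sketch of the POC-based scaling mentioned above, following the fixed-point arithmetic style of the HEVC scaling process (a simplified rendering, not the normative text):

def clip3(lo, hi, x):
    return max(lo, min(hi, x))

def tdiv(a, b):
    # C-style integer division truncating toward zero, as in the spec.
    q = abs(a) // abs(b)
    return q if (a >= 0) == (b >= 0) else -q

def scale_mv(mv, tb, td):
    """Scale one MV component by the POC distances tb and td using
    HEVC-style fixed-point arithmetic (a simplified sketch)."""
    tb, td = clip3(-128, 127, tb), clip3(-128, 127, td)
    tx = tdiv(16384 + (abs(td) >> 1), td)
    dist_scale = clip3(-4096, 4095, (tb * tx + 32) >> 6)
    prod = dist_scale * mv
    sign = 1 if prod >= 0 else -1
    return clip3(-32768, 32767, sign * ((abs(prod) + 127) >> 8))

# Halving a motion vector when the current POC distance is half the
# co-located one.
print(scale_mv(64, 2, 4))   # 32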


In the co-located PU (Y) belonging to the reference frame, the position for the temporal candidate is selected between candidates C0 and C1, as depicted in FIG. 14. If the PU at position C0 is not available, is intra coded, or is outside of the current CTU, position C1 is used. Otherwise, position C0 is used in the derivation of the temporal merge candidate.


2.2.1.4 Additional Candidate Insertion


Besides spatio-temporal merge candidates, there are two additional types of merge candidates: combined bi-predictive merge candidates and zero merge candidates. Combined bi-predictive merge candidates are generated by utilizing spatio-temporal merge candidates and are used for B-slices only. The combined bi-predictive candidates are generated by combining the first-reference-picture-list motion parameters of an initial candidate with the second-reference-picture-list motion parameters of another. If these two tuples provide different motion hypotheses, they form a new bi-predictive candidate. As an example, FIG. 15 depicts the case when two candidates in the original list (on the left), which have mvL0 and refIdxL0 or mvL1 and refIdxL1, are used to create a combined bi-predictive merge candidate added to the final list (on the right). There are numerous predefined rules regarding which combinations are considered to generate these additional merge candidates.


Zero motion candidates are inserted to fill the remaining entries in the merge candidate list and thereby reach the MaxNumMergeCand capacity. These candidates have zero spatial displacement and a reference picture index which starts from zero and increases every time a new zero motion candidate is added to the list. The number of reference frames used by these candidates is one and two for uni- and bi-directional prediction, respectively. Finally, no redundancy check is performed on these candidates.


2.2.1.5 Motion Estimation Regions for Parallel Processing


To speed up the encoding process, motion estimation can be performed in parallel, whereby the motion vectors for all prediction units inside a given region are derived simultaneously. The derivation of merge candidates from the spatial neighbourhood may interfere with parallel processing, as one prediction unit cannot derive the motion parameters from an adjacent PU until its associated motion estimation is completed. To mitigate the trade-off between coding efficiency and processing latency, HEVC defines the motion estimation region (MER), whose size is signalled in the picture parameter set using the "log2_parallel_merge_level_minus2" syntax element. When a MER is defined, merge candidates falling in the same region are marked as unavailable and are therefore not considered in the list construction.


7.3.2.3 Picture Parameter Set Raw Byte Sequence Payload (RBSP) Syntax


7.3.2.3.1 General Picture Parameter Set RBSP Syntax


pic_parameter_set_rbsp( ) {                                Descriptor
 pps_pic_parameter_set_id                                  ue(v)
 pps_seq_parameter_set_id                                  ue(v)
 dependent_slice_segments_enabled_flag                     u(1)
 ...
 pps_scaling_list_data_present_flag                        u(1)
 if( pps_scaling_list_data_present_flag )
  scaling_list_data( )
 lists_modification_present_flag                           u(1)
 log2_parallel_merge_level_minus2                          ue(v)
 slice_segment_header_extension_present_flag               u(1)
 pps_extension_present_flag                                u(1)
 ...
 rbsp_trailing_bits( )
}












    • log2_parallel_merge_level_minus2 plus 2 specifies the value of the variable Log2ParMrgLevel, which is used in the derivation process for luma motion vectors for merge mode as specified in clause 8.5.3.2.2 and the derivation process for spatial merging candidates as specified in clause 8.5.3.2.3. The value of log2_parallel_merge_level_minus2 shall be in the range of 0 to CtbLog2SizeY−2, inclusive.

    • The variable Log2ParMrgLevel is derived as follows:

      Log2ParMrgLevel = log2_parallel_merge_level_minus2 + 2  (7-37)

    • NOTE 3—The value of Log2ParMrgLevel indicates the built-in capability of parallel derivation of the merging candidate lists. For example, when Log2ParMrgLevel is equal to 6, the merging candidate lists for all the prediction units (PUs) and coding units (CUs) contained in a 64×64 block can be derived in parallel.
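As an illustration of how Log2ParMrgLevel defines the motion estimation region, the following sketch (the helper name is invented) tests whether a neighbouring position falls into the same MER as the current PU, in which case the corresponding merge candidate is treated as unavailable:

def same_mer(x_curr, y_curr, x_nb, y_nb, log2_par_mrg_level):
    """True if the neighbouring position lies in the same motion
    estimation region (MER) as the current PU."""
    return (x_curr >> log2_par_mrg_level) == (x_nb >> log2_par_mrg_level) \
        and (y_curr >> log2_par_mrg_level) == (y_nb >> log2_par_mrg_level)

# With Log2ParMrgLevel = 6 (64x64 MER), a neighbour at (60, 60) is in the
# same region as a PU at (32, 32) and would be marked unavailable.
print(same_mer(32, 32, 60, 60, 6))   # True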


2.2.2 Motion Vector Prediction in AMVP Mode





Motion vector prediction exploits the spatial-temporal correlation of a motion vector with neighbouring PUs, and is used for the explicit transmission of motion parameters. A motion vector candidate list is constructed by first checking the availability of left and above spatially neighbouring PU positions and of temporally neighbouring PU positions, removing redundant candidates, and adding zero vectors to make the candidate list a constant length. Then, the encoder can select the best predictor from the candidate list and transmit the corresponding index indicating the chosen candidate. Similarly to merge index signaling, the index of the best motion vector candidate is encoded using truncated unary binarization. The maximum value to be encoded in this case is 2. In the following sections, details about the derivation process of motion vector prediction candidates are provided.


2.2.2.1 Derivation of Motion Vector Prediction Candidates



FIG. 16 summarizes the derivation process for motion vector prediction candidates.


In motion vector prediction, two types of motion vector candidates are considered: spatial motion vector candidates and temporal motion vector candidates. For spatial motion vector candidate derivation, two motion vector candidates are eventually derived based on the motion vectors of each PU located in the five different positions depicted in FIG. 11.


For temporal motion vector candidate derivation, one motion vector candidate is selected from two candidates, which are derived based on two different co-located positions. After the first list of spatio-temporal candidates is made, duplicated motion vector candidates in the list are removed. If the number of potential candidates is larger than two, motion vector candidates whose reference picture index within the associated reference picture list is larger than 1 are removed from the list. If the number of spatio-temporal motion vector candidates is smaller than two, additional zero motion vector candidates are added to the list.
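The overall AMVP list assembly described above can be summarized in a short sketch (illustrative only; the spatial and temporal derivation inputs are placeholders for the processes detailed in the following sections):

def build_amvp_list(spatial_cands, temporal_cands, list_size=2):
    """Assemble an AMVP candidate list: spatial candidates first, then
    temporal, with duplicate removal, trimming, and zero-MV padding."""
    cands = []
    for mv in spatial_cands + temporal_cands:
        if mv not in cands:            # remove duplicated candidates
            cands.append(mv)
    cands = cands[:list_size]          # keep at most two candidates
    while len(cands) < list_size:
        cands.append((0, 0))           # pad with zero motion vectors
    return cands

# Two identical spatial candidates collapse to one; a zero MV fills the gap.
print(build_amvp_list([(3, -1), (3, -1)], []))   # [(3, -1), (0, 0)]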


2.2.2.2 Spatial Motion Vector Candidates


In the derivation of spatial motion vector candidates, a maximum of two candidates are considered among five potential candidates, which are derived from PUs located in the positions depicted in FIG. 11; those positions are the same as those of motion merge. The order of derivation for the left side of the current PU is defined as A0, A1, scaled A0, scaled A1. The order of derivation for the above side of the current PU is defined as B0, B1, B2, scaled B0, scaled B1, scaled B2. For each side there are therefore four cases that can be used as motion vector candidates: two cases that do not require spatial scaling, and two cases where spatial scaling is used. The four different cases are summarized as follows.

    • No spatial scaling
      • (1) Same reference picture list, and same reference picture index (same POC)
      • (2) Different reference picture list, but same reference picture (same POC)
    • Spatial scaling
      • (3) Same reference picture list, but different reference picture (different POC)
      • (4) Different reference picture list, and different reference picture (different POC)


The no-spatial-scaling cases are checked first, followed by the spatial scaling cases. Spatial scaling is considered when the POC differs between the reference picture of the neighbouring PU and that of the current PU, regardless of the reference picture list. If all PUs of the left candidates are not available or are intra coded, scaling for the above motion vector is allowed to help parallel derivation of the left and above MV candidates. Otherwise, spatial scaling is not allowed for the above motion vector.
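A small sketch classifying a neighbouring candidate into the four cases above (function and argument names invented for illustration):

def spatial_case(same_list, same_poc):
    """Map a neighbouring PU's reference to one of the four cases above
    and report whether spatial scaling is needed. Only the POC decides
    scaling, not the reference picture list."""
    if same_poc:
        return (1 if same_list else 2), False    # no spatial scaling
    return (3 if same_list else 4), True         # spatial scaling required

print(spatial_case(same_list=True, same_poc=False))   # (3, True)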


In a spatial scaling process, the motion vector of the neighbouring PU is scaled in a similar manner as for temporal scaling, as depicted in FIG. 17. The main difference is that the reference picture list and index of the current PU are given as input; the actual scaling process is the same as that of temporal scaling.


2.2.2.3 Temporal Motion Vector Candidates


Apart from the reference picture index derivation, all processes for the derivation of temporal merge candidates are the same as for the derivation of spatial motion vector candidates (see, e.g., FIG. 6). The reference picture index is signalled to the decoder.


2.2.2.4 Signaling of AMVP Information


For the AMVP mode, four parts may be signalled in the bitstream: the prediction direction, the reference index, the MVD, and the MV predictor candidate index.


Syntax Tables:


prediction_unit( x0, y0, nPbW, nPbH ) {                    Descriptor
 if( cu_skip_flag[ x0 ][ y0 ] ) {
  if( MaxNumMergeCand > 1 )
   merge_idx[ x0 ][ y0 ]                                   ae(v)
 } else { /* MODE_INTER */
  merge_flag[ x0 ][ y0 ]                                   ae(v)
  if( merge_flag[ x0 ][ y0 ] ) {
   if( MaxNumMergeCand > 1 )
    merge_idx[ x0 ][ y0 ]                                  ae(v)
  } else {
   if( slice_type = = B )
    inter_pred_idc[ x0 ][ y0 ]                             ae(v)
   if( inter_pred_idc[ x0 ][ y0 ] != PRED_L1 ) {
    if( num_ref_idx_l0_active_minus1 > 0 )
     ref_idx_l0[ x0 ][ y0 ]                                ae(v)
    mvd_coding( x0, y0, 0 )
    mvp_l0_flag[ x0 ][ y0 ]                                ae(v)
   }
   if( inter_pred_idc[ x0 ][ y0 ] != PRED_L0 ) {
    if( num_ref_idx_l1_active_minus1 > 0 )
     ref_idx_l1[ x0 ][ y0 ]                                ae(v)
    if( mvd_l1_zero_flag && inter_pred_idc[ x0 ][ y0 ] = = PRED_BI ) {
     MvdL1[ x0 ][ y0 ][ 0 ] = 0
     MvdL1[ x0 ][ y0 ][ 1 ] = 0
    } else
     mvd_coding( x0, y0, 1 )
    mvp_l1_flag[ x0 ][ y0 ]                                ae(v)
   }
  }
 }
}










7.3.8.9 Motion Vector Difference Syntax


mvd_coding( x0, y0, refList ) {                            Descriptor
 abs_mvd_greater0_flag[ 0 ]                                ae(v)
 abs_mvd_greater0_flag[ 1 ]                                ae(v)
 if( abs_mvd_greater0_flag[ 0 ] )
  abs_mvd_greater1_flag[ 0 ]                               ae(v)
 if( abs_mvd_greater0_flag[ 1 ] )
  abs_mvd_greater1_flag[ 1 ]                               ae(v)
 if( abs_mvd_greater0_flag[ 0 ] ) {
  if( abs_mvd_greater1_flag[ 0 ] )
   abs_mvd_minus2[ 0 ]                                     ae(v)
  mvd_sign_flag[ 0 ]                                       ae(v)
 }
 if( abs_mvd_greater0_flag[ 1 ] ) {
  if( abs_mvd_greater1_flag[ 1 ] )
   abs_mvd_minus2[ 1 ]                                     ae(v)
  mvd_sign_flag[ 1 ]                                       ae(v)
 }
}











2.3 New Inter Prediction Methods in JEM (Joint Exploration Model)


2.3.1 Sub-CU Based Motion Vector Prediction


In the JEM with QTBT, each CU can have at most one set of motion parameters for each prediction direction. Two sub-CU level motion vector prediction methods are considered in the encoder by splitting a large CU into sub-CUs and deriving motion information for all the sub-CUs of the large CU. The alternative temporal motion vector prediction (ATMVP) method allows each CU to fetch multiple sets of motion information from multiple blocks smaller than the current CU in the collocated reference picture. In the spatial-temporal motion vector prediction (STMVP) method, the motion vectors of the sub-CUs are derived recursively by using the temporal motion vector predictor and spatially neighbouring motion vectors.


To preserve a more accurate motion field for sub-CU motion prediction, motion compression for the reference frames is currently disabled.


2.3.1.1 Alternative Temporal Motion Vector Prediction


In the alternative temporal motion vector prediction (ATMVP) method, the temporal motion vector prediction (TMVP) is modified by fetching multiple sets of motion information (including motion vectors and reference indices) from blocks smaller than the current CU. As shown in FIG. 18, the sub-CUs are square N×N blocks (N is set to 4 by default).


ATMVP predicts the motion vectors of the sub-CUs within a CU in two steps. The first step is to identify the corresponding block in a reference picture with a so-called temporal vector. The reference picture is called the motion source picture. The second step is to split the current CU into sub-CUs and obtain the motion vectors as well as the reference indices of each sub-CU from the block corresponding to each sub-CU, as shown in FIG. 18.


In the first step, a reference picture and the corresponding block are determined by the motion information of the spatially neighbouring blocks of the current CU. To avoid the repetitive scanning process of neighbouring blocks, the first merge candidate in the merge candidate list of the current CU is used. The first available motion vector as well as its associated reference index are set to be the temporal vector and the index to the motion source picture. This way, in ATMVP, the corresponding block may be more accurately identified, compared with TMVP, wherein the corresponding block (sometimes called the collocated block) is always in a bottom-right or center position relative to the current CU. In one example, if the first merge candidate is from the left neighbouring block (i.e., A1 in FIG. 19), the associated MV and reference picture are utilized to identify the source block and source picture.



FIG. 19 shows an example of the identification of source block and source picture.


In the second step, a corresponding block of the sub-CU is identified by the temporal vector in the motion source picture, by adding the temporal vector to the coordinates of the current CU. For each sub-CU, the motion information of its corresponding block (the smallest motion grid that covers the center sample) is used to derive the motion information for the sub-CU. After the motion information of a corresponding N×N block is identified, it is converted to the motion vectors and reference indices of the current sub-CU, in the same way as TMVP of HEVC, wherein motion scaling and other procedures apply. For example, the decoder checks whether the low-delay condition (i.e., the POCs of all reference pictures of the current picture are smaller than the POC of the current picture) is fulfilled and possibly uses motion vector MVx (the motion vector corresponding to reference picture list X) to predict motion vector MVy (with X being equal to 0 or 1 and Y being equal to 1−X) for each sub-CU.


2.3.1.2 Spatial-Temporal Motion Vector Prediction


In this method, the motion vectors of the sub-CUs are derived recursively, following raster scan order. FIG. 20 illustrates this concept. Let us consider an 8×8 CU which contains four 4×4 sub-CUs A, B, C, and D. The neighbouring 4×4 blocks in the current frame are labelled as a, b, c, and d.


The motion derivation for sub-CU A starts by identifying its two spatial neighbours. The first neighbour is the N×N block above sub-CU A (block c). If this block c is not available or is intra coded, the other N×N blocks above sub-CU A are checked (from left to right, starting at block c). The second neighbour is a block to the left of sub-CU A (block b). If block b is not available or is intra coded, other blocks to the left of sub-CU A are checked (from top to bottom, starting at block b). The motion information obtained from the neighbouring blocks for each list is scaled to the first reference frame for a given list. Next, the temporal motion vector predictor (TMVP) of sub-block A is derived by following the same procedure of TMVP derivation as specified in HEVC. The motion information of the collocated block at location D is fetched and scaled accordingly. Finally, after retrieving and scaling the motion information, all available motion vectors (up to 3) are averaged separately for each reference list. The averaged motion vector is assigned as the motion vector of the current sub-CU.
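A minimal sketch of the final averaging step for one sub-CU and one reference list; the inputs are assumed to be already scaled per list, and the names are invented for this example:

def stmvp_for_sub_cu(above_mv, left_mv, tmvp_mv):
    """Average the up-to-three available (already scaled) MVs for one
    reference list. Each argument is an (mvx, mvy) tuple, or None if
    the corresponding neighbour/TMVP is unavailable."""
    available = [mv for mv in (above_mv, left_mv, tmvp_mv) if mv is not None]
    if not available:
        return None
    n = len(available)
    # Average all available motion vectors (up to 3).
    return (sum(mv[0] for mv in available) // n,
            sum(mv[1] for mv in available) // n)

print(stmvp_for_sub_cu((4, 0), (2, 2), None))   # (3, 1)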



FIG. 20 shows an example of one CU with four sub-blocks (A-D) and its neighbouring blocks (a-d).


2.3.1.3 Sub-CU Motion Prediction Mode Signalling


The sub-CU modes are enabled as additional merge candidates, and no additional syntax element is required to signal the modes. Two additional merge candidates are added to the merge candidate list of each CU to represent the ATMVP mode and the STMVP mode. Up to seven merge candidates are used if the sequence parameter set indicates that ATMVP and STMVP are enabled. The encoding logic of the additional merge candidates is the same as for the merge candidates in the HM, which means that, for each CU in a P or B slice, two more rate distortion (RD) checks are needed for the two additional merge candidates.


In the JEM, all bins of the merge index are context coded by context-adaptive binary arithmetic coding (CABAC), while in HEVC, only the first bin is context coded and the remaining bins are bypass coded.


2.3.2 Adaptive Motion Vector Difference Resolution


In HEVC, motion vector differences (MVDs) (between the motion vector and the predicted motion vector of a PU) are signalled in units of quarter luma samples when use_integer_mv_flag is equal to 0 in the slice header. In the JEM, a locally adaptive motion vector resolution (LAMVR) is introduced. In the JEM, an MVD can be coded in units of quarter luma samples, integer luma samples or four luma samples. The MVD resolution is controlled at the coding unit (CU) level, and MVD resolution flags are conditionally signalled for each CU that has at least one non-zero MVD component.


For a CU that has at least one non-zero MVD component, a first flag is signalled to indicate whether quarter luma sample MV precision is used in the CU. When the first flag (equal to 1) indicates that quarter luma sample MV precision is not used, another flag is signalled to indicate whether integer luma sample MV precision or four luma sample MV precision is used.


When the first MVD resolution flag of a CU is zero, or not coded for a CU (meaning all MVDs in the CU are zero), the quarter luma sample MV resolution is used for the CU. When a CU uses integer-luma sample MV precision or four-luma-sample MV precision, the MVPs in the AMVP candidate list for the CU are rounded to the corresponding precision.
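A sketch of the precision rounding mentioned above, assuming MVs stored in quarter-luma-sample units; the rounding convention shown (half away from zero) is one plausible choice for illustration, not necessarily the exact JEM rule:

def round_mv_to_precision(mv, shift):
    """Round an MV component stored in quarter-sample units to a coarser
    precision: shift = 2 for integer-sample, shift = 4 for four-sample."""
    offset = 1 << (shift - 1)
    # Round half away from zero, then return to quarter-sample units.
    if mv >= 0:
        return ((mv + offset) >> shift) << shift
    return -(((-mv + offset) >> shift) << shift)

# 9 quarter-samples (2.25 luma samples) rounds to 8 (2 integer samples).
print(round_mv_to_precision(9, 2))    # 8
print(round_mv_to_precision(9, 4))    # 16 (one four-sample unit)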


In the encoder, CU-level RD checks are used to determine which MVD resolution is to be used for a CU. That is, the CU-level RD check is performed three times for each CU, once per MVD resolution. To accelerate encoder speed, the following encoding schemes are applied in the JEM.


During RD check of a CU with normal quarter luma sample MVD resolution, the motion information of the current CU (integer luma sample accuracy) is stored. The stored motion information (after rounding) is used as the starting point for further small range motion vector refinement during the RD check for the same CU with integer luma sample and 4 luma sample MVD resolution so that the time-consuming motion estimation process is not duplicated three times.


The RD check of a CU with 4 luma sample MVD resolution is conditionally invoked. For a CU, when the RD cost of integer luma sample MVD resolution is much larger than that of quarter luma sample MVD resolution, the RD check of 4 luma sample MVD resolution for the CU is skipped.


2.3.3 Pattern Matched Motion Vector Derivation


Pattern matched motion vector derivation (PMMVD) mode is a special merge mode based on Frame-Rate Up Conversion (FRUC) techniques. With this mode, motion information of a block is not signalled but derived at decoder side.


A FRUC flag is signalled for a CU when its merge flag is true. When the FRUC flag is false, a merge index is signaled and the regular merge mode is used. When the FRUC flag is true, an additional FRUC mode flag is signalled to indicate which method (bilateral matching or template matching) is to be used to derive motion information for the block.


At the encoder side, the decision on whether to use FRUC merge mode for a CU is based on RD cost selection, as done for normal merge candidates. That is, the two matching modes (bilateral matching and template matching) are both checked for a CU by using RD cost selection. The one leading to the minimal cost is further compared to other CU modes. If a FRUC matching mode is the most efficient one, the FRUC flag is set to true for the CU and the related matching mode is used.


The motion derivation process in FRUC merge mode has two steps: a CU-level motion search is performed first, followed by sub-CU level motion refinement. At the CU level, an initial motion vector is derived for the whole CU based on bilateral matching or template matching. First, a list of MV candidates is generated, and the candidate which leads to the minimum matching cost is selected as the starting point for further CU-level refinement. Then a local search based on bilateral matching or template matching around the starting point is performed, and the MV that results in the minimum matching cost is taken as the MV for the whole CU. Subsequently, the motion information is further refined at the sub-CU level with the derived CU motion vectors as the starting points.


For example, the following derivation process is performed for a W×H CU motion information derivation. At the first stage, the MV for the whole W×H CU is derived. At the second stage, the CU is further split into M×M sub-CUs. The value of M is calculated as in (1), where D is a predefined splitting depth which is set to 3 by default in the JEM. Then the MV for each sub-CU is derived.









M = max{4, min{W/2^D, H/2^D}}  (1)
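In code, the sub-CU size of equation (1) can be computed as follows (a direct transcription under the W×H CU dimensions named in the text; the function name is invented):

def sub_cu_size(w, h, d=3):
    """Sub-CU size M for FRUC refinement per equation (1); D defaults to 3."""
    return max(4, min(w >> d, h >> d))

# A 64x32 CU with D = 3 is refined on 4x4 sub-CUs (min(8, 4) = 4).
print(sub_cu_size(64, 32))   # 4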







As shown in FIG. 21, bilateral matching is used to derive motion information of the current CU by finding the closest match between two blocks along the motion trajectory of the current CU in two different reference pictures. Under the assumption of a continuous motion trajectory, the motion vectors MV0 and MV1 pointing to the two reference blocks shall be proportional to the temporal distances, i.e., temporal distance zero (TD0) and temporal distance one (TD1), between the current picture and the two reference pictures. As a special case, when the current picture is temporally between the two reference pictures and the temporal distances from the current picture to the two reference pictures are the same, the bilateral matching becomes a mirror-based bi-directional MV.


As shown in FIG. 22, template matching is used to derive motion information of the current CU by finding the closest match between a template (top and/or left neighbouring blocks of the current CU) in the current picture and a block (of the same size as the template) in a reference picture. In addition to the aforementioned FRUC merge mode, template matching is also applied to AMVP mode. In the JEM, as in HEVC, AMVP has two candidates. With the template matching method, a new candidate is derived. If the newly derived candidate from template matching is different from the first existing AMVP candidate, it is inserted at the very beginning of the AMVP candidate list, and then the list size is set to two (meaning the second existing AMVP candidate is removed). When applied to AMVP mode, only the CU-level search is applied.


2.3.3.1 CU Level MV Candidate Set


The MV candidate set at CU level consists of

    • (i) Original AMVP candidates if the current CU is in AMVP mode,
    • (ii) all merge candidates,
    • (iii) several MVs in the interpolated MV field, and
    • (iv) top and left neighbouring motion vectors.


When using bilateral matching, each valid MV of a merge candidate is used as an input to generate an MV pair under the assumption of bilateral matching. For example, one valid MV of a merge candidate is (MVa, refa) in reference list A. Then the reference picture refb of its paired bilateral MV is found in the other reference list B, so that refa and refb are temporally on different sides of the current picture. If such a refb is not available in reference list B, refb is determined as a reference which is different from refa and whose temporal distance to the current picture is the minimal one in list B. After refb is determined, MVb is derived by scaling MVa based on the temporal distances between the current picture and refa, refb.
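A sketch of this pairing step using POC-based distances (helper names invented; a real implementation would use the fixed-point scaling process shown earlier rather than floating-point arithmetic):

def bilateral_mv_pair(mva, ref_a_poc, cur_poc, list_b_pocs):
    """Derive the paired MV for bilateral matching: pick refb on the
    opposite temporal side of the current picture (or, failing that,
    the temporally closest different reference), then scale MVa by the
    ratio of temporal distances."""
    td_a = cur_poc - ref_a_poc
    # Prefer references on the opposite side of the current picture.
    opposite = [p for p in list_b_pocs if (cur_poc - p) * td_a < 0]
    pool = opposite if opposite else [p for p in list_b_pocs if p != ref_a_poc]
    ref_b_poc = min(pool, key=lambda p: abs(cur_poc - p))
    scale = (cur_poc - ref_b_poc) / td_a
    return (round(mva[0] * scale), round(mva[1] * scale)), ref_b_poc

# MVa points 2 pictures back; the paired MV points 2 pictures forward.
print(bilateral_mv_pair((8, -4), ref_a_poc=2, cur_poc=4, list_b_pocs=[6, 8]))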


Four MVs from the interpolated MV field are also added to the CU level candidate list. More specifically, the interpolated MVs at the position (0, 0), (W/2, 0), (0, H/2) and (W/2, H/2) of the current CU are added.


When FRUC is applied in AMVP mode, the original AMVP candidates are also added to CU level MV candidate set.


At the CU level, up to 15 MVs for AMVP CUs and up to 13 MVs for merge CUs are added to the candidate list.


2.3.3.2 Sub-CU Level MV Candidate Set


The MV candidate set at sub-CU level consists of

    • (i) an MV determined from a CU-level search,
    • (ii) top, left, top-left and top-right neighbouring MVs,
    • (iii) scaled versions of collocated MVs from reference pictures,
    • (iv) up to 4 ATMVP candidates, and
    • (v) up to 4 STMVP candidates.


The scaled MVs from reference pictures are derived as follows. All the reference pictures in both lists are traversed. The MVs at a collocated position of the sub-CU in a reference picture are scaled to the reference of the starting CU-level MV.


ATMVP and STMVP candidates are limited to the four first ones.


At the sub-CU level, up to 17 MVs are added to the candidate list.


2.3.3.3 Generation of Interpolated MV Field


Before coding a frame, an interpolated motion field is generated for the whole picture based on unilateral ME. Then the motion field may be used later as CU-level or sub-CU-level MV candidates.


First, the motion field of each reference picture in both reference lists is traversed at the 4×4 block level. For each 4×4 block, if the motion associated with the block passes through a 4×4 block in the current picture (as shown in FIG. 23) and the block has not been assigned any interpolated motion, the motion of the reference block is scaled to the current picture according to the temporal distances TD0 and TD1 (in the same way as the MV scaling of TMVP in HEVC), and the scaled motion is assigned to the block in the current frame. If no scaled MV is assigned to a 4×4 block, the block's motion is marked as unavailable in the interpolated motion field.


2.3.3.4 Interpolation and Matching Cost


When a motion vector points to a fractional sample position, motion compensated interpolation is needed. To reduce complexity, bi-linear interpolation instead of regular 8-tap HEVC interpolation is used for both bilateral matching and template matching.


The calculation of the matching cost is a bit different at different steps. When selecting the candidate from the candidate set at the CU level, the matching cost is the sum of absolute differences (SAD) of bilateral matching or template matching. After the starting MV is determined, the matching cost C of bilateral matching at the sub-CU level search is calculated as follows:

C = SAD + w·(|MVx − MVxs| + |MVy − MVys|)  (2)


where w is a weighting factor which is empirically set to 4, and MV and MVs indicate the current MV and the starting MV, respectively. SAD is still used as the matching cost of template matching at the sub-CU level search.
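A direct transcription of equation (2) (the SAD computation itself is left as an input):

def bilateral_cost(sad, mv, mv_start, w=4):
    """Matching cost C of equation (2): SAD plus a weighted MV-deviation
    penalty that keeps the refinement close to the starting MV."""
    return sad + w * (abs(mv[0] - mv_start[0]) + abs(mv[1] - mv_start[1]))

# A candidate one unit away in each direction adds 8 to the SAD.
print(bilateral_cost(100, (5, 3), (4, 2)))   # 108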


In FRUC mode, the MV is derived by using luma samples only. The derived motion will be used for both luma and chroma for MC inter prediction. After the MV is decided, final MC is performed using an 8-tap interpolation filter for luma and a 4-tap interpolation filter for chroma.


2.3.3.5 MV Refinement


MV refinement is a pattern-based MV search with the criterion of bilateral matching cost or template matching cost. In the JEM, two search patterns are supported: an unrestricted center-biased diamond search (UCBDS) and an adaptive cross search, for MV refinement at the CU level and sub-CU level, respectively. For both CU-level and sub-CU-level MV refinement, the MV is directly searched at quarter luma sample MV accuracy, and this is followed by one-eighth luma sample MV refinement. The search ranges of MV refinement for the CU and sub-CU steps are set equal to 8 luma samples.


2.3.3.6 Selection of Prediction Direction in Template Matching FRUC Merge Mode


In the bilateral matching merge mode, bi-prediction is always applied since the motion information of a CU is derived based on the closest match between two blocks along the motion trajectory of the current CU in two different reference pictures. There is no such limitation for the template matching merge mode. In the template matching merge mode, the encoder can choose among uni-prediction from list0, uni-prediction from list1 or bi-prediction for a CU. The selection is based on a template matching cost as follows:

    • If costBi<=factor*min (cost0, cost1)
      • bi-prediction is used;
    • Otherwise, if cost0<=cost1
      • uni-prediction from list0 is used;
    • Otherwise,
      • uni-prediction from list1 is used;


where cost0 is the SAD of list0 template matching, cost1 is the SAD of list1 template matching and costBi is the SAD of bi-prediction template matching. The value of factor is equal to 1.25, which means that the selection process is biased toward bi-prediction. The inter prediction direction selection is only applied to the CU-level template matching process.
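The selection rule above transcribes directly into code (the cost values are the template matching SADs defined in the text):

def select_prediction_direction(cost0, cost1, cost_bi, factor=1.25):
    """Choose uni-/bi-prediction in template matching FRUC merge mode.
    The factor biases the decision toward bi-prediction."""
    if cost_bi <= factor * min(cost0, cost1):
        return "bi-prediction"
    return "uni-prediction list0" if cost0 <= cost1 else "uni-prediction list1"

# Bi-prediction wins even with a slightly higher SAD than the best list.
print(select_prediction_direction(100, 120, 115))   # bi-prediction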


2.3.4 Decoder-Side Motion Vector Refinement


In the bi-prediction operation, for the prediction of one block region, two prediction blocks, formed using a motion vector (MV) of list0 and an MV of list1, respectively, are combined to form a single prediction signal. In the decoder-side motion vector refinement (DMVR) method, the two motion vectors of the bi-prediction are further refined by a bilateral template matching process. The bilateral template matching is applied in the decoder to perform a distortion-based search between a bilateral template and the reconstruction samples in the reference pictures, in order to obtain a refined MV without the transmission of additional motion information.


In DMVR, a bilateral template is generated as the weighted combination (i.e. average) of the two prediction blocks, from the initial MV0 of list0 and MV1 of list1, respectively, as shown in FIG. 23. The template matching operation consists of calculating cost measures between the generated template and the sample region (around the initial prediction block) in the reference picture. For each of the two reference pictures, the MV that yields the minimum template cost is considered as the updated MV of that list to replace the original one. In the JEM, nine MV candidates are searched for each list. The nine MV candidates include the original MV and 8 surrounding MVs with one luma sample offset to the original MV in either the horizontal or vertical direction, or both. Finally, the two new MVs, i.e., MV0′ and MV1′ as shown in FIG. 24, are used for generating the final bi-prediction results. A sum of absolute differences (SAD) is used as the cost measure.
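A rough sketch of the DMVR step described above is given below, assuming NumPy arrays for the prediction blocks and assumed fetch helpers that return the prediction block at a given MV; the averaging, the nine-candidate search, and the SAD cost follow the description, while all names are illustrative.

```python
import numpy as np

def dmvr_refine(pred0, pred1, fetch0, fetch1, mv0, mv1):
    """Bilateral template = average of the two initial predictions; for each
    list, test the original MV and the 8 one-luma-sample offsets and keep
    the MV with minimum SAD against the template."""
    template = (pred0.astype(np.int32) + pred1.astype(np.int32) + 1) >> 1
    offsets = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]  # 9 candidates

    def refine(mv, fetch):
        def sad(m):
            return int(np.abs(template - fetch(m).astype(np.int32)).sum())
        return min(((mv[0] + dx, mv[1] + dy) for dx, dy in offsets), key=sad)

    return refine(mv0, fetch0), refine(mv1, fetch1)  # MV0' and MV1'
```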


DMVR is applied for the merge mode of bi-prediction with one MV from a reference picture in the past and another from a reference picture in the future, without the transmission of additional syntax elements. In the JEM, when LIC, affine motion, FRUC, or sub-CU merge candidate is enabled for a CU, DMVR is not applied.


2.3.5 Merge/Skip Mode with Bilateral Matching Refinement


A merge candidate list is first constructed by inserting the motion vectors and reference indices of the spatial neighboring and temporal neighboring blocks into the candidate list with redundancy checking until the number of the available candidates reaches the maximum candidate size of 19. The merge candidate list for the merge/skip mode is constructed by inserting spatial candidates (FIG. 11), temporal candidates, affine candidates, advanced temporal MVP (ATMVP) candidate, spatial temporal MVP (STMVP) candidate and the additional candidates as used in HEVC (Combined candidates and Zero candidates) according to a pre-defined insertion order:

    • Spatial candidates for blocks 1-4.
    • Extrapolated affine candidates for blocks 1-4.
    • ATMVP.
    • STMVP.
    • Virtual affine candidate.
    • Spatial candidate (block 5) (used only when the number of the available candidates is smaller than 6).
    • Extrapolated affine candidate (block 5).
    • Temporal candidate (derived as in HEVC).
    • Non-adjacent spatial candidates followed by extrapolated affine candidate (blocks 6 to 49, as depicted in FIG. 25).
    • Combined candidates.
    • Zero candidates.


It is noted that IC flags are also inherited from merge candidates except for STMVP and affine. Moreover, for the first four spatial candidates, the bi-prediction ones are inserted before the ones with uni-prediction.
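For illustration, the pre-defined insertion order above can be sketched as a simple staged loop. Every stage stands in for one of the derivation processes named in the list, the redundancy check is simplified to a membership test, and the condition that spatial block 5 is used only when fewer than six candidates are available is left to the caller that builds the stage sequence.

```python
def build_merge_list(stages, max_size=19):
    """stages: ordered iterable of candidate lists, one per bullet above
    (spatial 1-4, extrapolated affine 1-4, ATMVP, STMVP, ...)."""
    merge_list = []
    for stage in stages:
        for cand in stage:
            if len(merge_list) >= max_size:
                return merge_list
            if cand not in merge_list:     # redundancy check (simplified)
                merge_list.append(cand)
    return merge_list
```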


In some implementations, blocks which are not connected with the current block may be accessed. If a non-adjacent block is coded with non-intra mode, the associated motion information may be added as an additional merge candidate.


2.3.6 Shared Merge List (JVET-M0170)


JVET-M0170 proposes sharing the same merging candidate list for all leaf coding units (CUs) of one ancestor node in the CU split tree, enabling parallel processing of small skip/merge-coded CUs. The ancestor node is named the merge sharing node. The shared merging candidate list is generated at the merge sharing node, pretending the merge sharing node is a leaf CU.


For the Type-2 definition, the merge sharing node is decided for each CU inside a CTU during the parsing stage of decoding; moreover, the merge sharing node is an ancestor node of leaf CUs which must satisfy the following two criteria:


(1) The merge sharing node size is equal to or larger than the size threshold.


(2) Within the merge sharing node, at least one child CU is smaller than the size threshold.


Moreover, it has to be guaranteed that no samples of the merge sharing node are outside the picture boundary. During the parsing stage, if an ancestor node satisfies criteria (1) and (2) but has some samples outside the picture boundary, this ancestor node will not be the merge sharing node, and the process proceeds to find the merge sharing node for its child CUs.



FIG. 35 shows an example of the difference between the Type-1 and Type-2 definitions. In this example, the parent node is ternary-split into 3 child CUs. The size of the parent node is 128. For the Type-1 definition, the 3 child CUs are merge sharing nodes separately. But for the Type-2 definition, the parent node is the merge sharing node.
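A sketch of the Type-2 decision follows, under the assumption of simple node objects with size, coordinate, and children fields (all names illustrative):

```python
def is_merge_sharing_node(node, size_threshold, pic_w, pic_h):
    """Type-2 criteria: (1) node size >= threshold, (2) at least one child CU
    smaller than the threshold, and no sample outside the picture boundary."""
    if node.size < size_threshold:                                    # criterion (1)
        return False
    if all(child.size >= size_threshold for child in node.children):  # criterion (2)
        return False
    return node.x + node.width <= pic_w and node.y + node.height <= pic_h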


The proposed shared merging candidate list algorithm supports translational merge (including merge mode and triangle merge mode; history-based candidates are also supported) and subblock-based merge mode. For all kinds of merge mode, the behavior of the shared merging candidate list algorithm is basically the same: it simply generates candidates at the merge sharing node, pretending the merge sharing node is a leaf CU. This has two major benefits. The first benefit is to enable parallel processing for merge mode, and the second benefit is to share all computations of all leaf CUs at the merge sharing node, which significantly reduces the hardware cost of all merge modes for a hardware codec. With the proposed shared merging candidate list algorithm, the encoder and decoder can easily support parallel encoding for merge mode, and it relieves the cycle budget problem of merge mode.


2.3.7 Tile Groups


JVET-L0686 was adopted in which slices are removed in favor of tile groups and the HEVC syntax element slice_address is substituted with tile_group_address in the tile_group_header (if there is more than one tile in the picture) as address of the first tile in the tile group.


3. EXAMPLES OF PROBLEMS ADDRESSED BY EMBODIMENTS DISCLOSED HEREIN

The current HEVC design could exploit the correlation between the current block and its neighbouring blocks (next to the current block) to better code the motion information. However, it is possible that the neighbouring blocks correspond to different objects with different motion trajectories. In this case, prediction from the neighbouring blocks is not efficient.


Prediction from motion information of non-adjacent blocks could bring additional coding gain at the cost of storing all the motion information (typically at the 4×4 level) into a cache, which significantly increases the complexity for hardware implementation.


4. SOME EXAMPLES

To overcome the drawbacks of existing implementations, LUT-based motion vector prediction techniques using one or more tables (e.g., look up tables) with at least one motion candidate stored to predict motion information of a block can be implemented in various embodiments to provide video coding with higher coding efficiencies. A look up table is an example of a table which can be used to include motion candidates to predict motion information of a block, and other implementations are also possible. Each LUT can include one or more motion candidates, each associated with corresponding motion information. Motion information of a motion candidate can include part or all of the prediction direction, reference indices/pictures, motion vectors, local illumination compensation (LIC) flags, affine flags, motion vector difference (MVD) precisions, and/or MVD values. Motion information may further include block position information to indicate from which position the motion information is coming.


The LUT-based motion vector prediction based on the disclosed technology, which may enhance both existing and future video coding standards, is elucidated in the following examples described for various implementations. Because the LUTs allow the encoding/decoding process to be performed based on historical data (e.g., the blocks that have been processed), the LUT-based motion vector prediction can also be referred to as a History-based Motion Vector Prediction (HMVP) method. In the LUT-based motion vector prediction method, one or multiple tables with motion information from previously coded blocks are maintained during the encoding/decoding process. The motion candidates stored in the LUTs are named HMVP candidates. During the encoding/decoding of one block, the associated motion information in the LUTs may be added to the motion candidate lists (e.g., merge/AMVP candidate lists), and after encoding/decoding one block, the LUTs may be updated. The updated LUTs are then used to code subsequent blocks. Thus, the updating of motion candidates in the LUTs is based on the encoding/decoding order of blocks. The examples below should be considered as illustrations of general concepts and should not be interpreted in a narrow way. Furthermore, these examples can be combined in any manner.
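The overall HMVP flow described in this paragraph can be summarized with the following sketch. The derive_candidate_list and code_block callables are assumed placeholders for the merge/AMVP list derivation and for the actual block coding; the table size and update rule here are the simple FIFO variant.

```python
def code_blocks_in_order(blocks, derive_candidate_list, code_block, lut_size=6):
    """derive_candidate_list(block, lut) and code_block(block, cand_list) are
    assumed placeholders for list derivation and actual block coding."""
    lut = []                                         # HMVP candidates, oldest first
    for blk in blocks:
        cand_list = derive_candidate_list(blk, lut)  # LUT entries may be appended
        motion = code_block(blk, cand_list)          # returns None for intra blocks
        if motion is not None:                       # inter-coded: update the LUT
            if len(lut) == lut_size:
                lut.pop(0)                           # drop the oldest entry (FIFO)
            lut.append(motion)                       # newest entry goes last
    return lut
```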


Some embodiments may use one or more look up tables with at least one motion candidate stored to predict motion information of a block. Embodiments may use the term motion candidate to indicate a set of motion information stored in a look up table. For conventional AMVP or merge modes, embodiments may use AMVP or merge candidates for storing the motion information.


The examples below explain general concepts.


Examples of Look-Up Tables


Example A1: Each look up table may contain one or more motion candidates wherein each candidate is associated with its motion information.

    • a. Motion information of a motion candidate here may include part or all of the prediction direction, reference indices/pictures, motion vectors, LIC flag, affine flag, MVD precision, and MVD values.
    • b. Motion information may further include the block position information and/or block shape to indicate where the motion information is coming from.
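One possible shape for such a table entry, assuming Python and entirely illustrative field names, is sketched below.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class MotionCandidate:
    """One look up table entry per Example A1; field names are illustrative."""
    pred_direction: int                            # list0, list1 or bi
    ref_indices: Tuple[int, int]                   # one per reference picture list
    mvs: Tuple[Tuple[int, int], Tuple[int, int]]   # (x, y) MV per list
    lic_flag: bool = False
    affine_flag: bool = False
    mvd_precision: int = 0
    position: Optional[Tuple[int, int]] = None     # optional source block position
    block_shape: Optional[Tuple[int, int]] = None  # optional source block shape
```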


      Selection of LUTs


Example B1: For coding a block, part or all of the motion candidates from one look up table may be checked in order. When one motion candidate is checked during coding a block, it may be added to the motion candidate list (e.g., AMVP or merge candidate lists).


Example B2: The selection of look up tables may depend on the position of a block.


Usage of Look Up Tables


Example C1: The total number of motion candidates in a look up table to be checked may be pre-defined.


Example C2: The motion candidate(s) included in a look up table may be directly inherited by a block.

    • a. They may be used for the merge mode coding, i.e., motion candidates may be checked in the merge candidate list derivation process.
    • b. They may be used for the affine merge mode coding.
      • i. A motion candidate in a look up table can be added as an affine merge candidate if its affine flag is one.
    • c. They may be used for other kinds of merge modes, such as sub-block merge mode, affine merge mode, triangular merge mode, inter-intra merge mode, merge with MVD (MMVD) mode.
    • d. Checking of motion candidates in look up tables may be enabled when:
      • i. the merge candidate list is not full after inserting the TMVP candidate;
      • ii. the merge candidate list is not full after checking a certain spatial neighboring block for spatial merge candidate derivation;
      • iii. the merge candidate list is not full after all spatial merge candidates;
      • iv. the merge candidate list is not full after combined bi-predictive merge candidates;
      • v. when the number of spatial or temporal (e.g., including adjacent spatial and non-adjacent spatial, TMVP, STMVP, ATMVP, etc.) merge candidates that have been put into the merge candidate list from other coding methods (e.g., the merge derivation process of the HEVC design, or the JEM design) is less than the maximally allowed merge candidates minus a given threshold.
        • 1. in one example, the threshold is set to 1 or 0.
        • 2. Alternatively, the threshold may be signaled or pre-defined in the sequence parameter set (SPS), picture parameter set (PPS), slice header, or tile.
        • 3. Alternatively, the threshold may be adaptively changed from block to block. For example, it may be dependent on coded block information, like block size/block shape/slice type, and/or dependent on the number of available spatial or temporal merge candidates.
        • 4. In another example, the checking is enabled when the number of a certain kind of merge candidates that have been put into the merge candidate list is less than the maximally allowed merge candidates minus a given threshold. The “certain kind of merge candidates” may be spatial candidates as in HEVC or non-adjacent merge candidates.
      • vi. Pruning may be applied before adding a motion candidate to the merge candidate list. In various implementations of this example and other examples disclosed in this patent document, the pruning may include a) comparing the motion information with existing entries for uniqueness, or b) if unique, then adding the motion information to the list, or c) if not unique, then either c1) not adding or c2) adding the motion information and deleting existing entry that matched. In some implementations, the pruning operation is not invoked when adding a motion candidate from a table to a candidate list.
        • 1. In one example, a motion candidate may be pruned to all or part of the available spatial or temporal (e.g., including adjacent spatial and non-adjacent spatial, TMVP, STMVP, ATMVP, etc.) merge candidates from other coding methods in the merge candidate list.
        • 2. A motion candidate may NOT be pruned to sub-block based motion candidates, e.g., ATMVP, STMVP.
        • 3. In one example, a current motion candidate may be pruned to all or part of the available motion candidates (inserted before the current motion candidate) in the merge candidate list.
        • 4. The number of pruning operations related to motion candidates (e.g., how many times motion candidates need to be compared to other candidates in the merge list) may depend on the number of available spatial or temporal merge candidates. For example, when checking a new motion candidate, if there are M candidates available in the merge list, the new motion candidate may be compared only to the first K (K<=M) candidates. If the pruning function returns false (e.g., the candidate is not identical to any of the first K candidates), the new motion candidate is considered to be different from all of the M candidates and it could be added to the merge candidate list. In one example, K is set to min (K, 2); a sketch of this bounded pruning is given after this list.
        • 5. In one example, a newly appended motion candidate is only compared with the first N candidates in the merge candidate list. For example, N=3, 4 or 5. N may be signaled from the encoder to the decoder.
        • 6. In one example, a new motion candidate to be checked is only compared with the last N candidates in the merge candidate list. For example, N=3, 4 or 5. N may be signaled from the encoder to the decoder.
        • 7. In one example, how to select candidates previously added in the list to be compared with a new motion candidate from a table may depend on where the previously added candidates were derived from.
          • a. In one example, a motion candidate in a look-up table may be compared to candidates derived from a given temporal and/or spatial neighboring block.
          • b. In one example, different entries of motion candidates in a look-up table may be compared to different previously added candidates (i.e., derived from different locations).
    • e. Checking of motion candidates in the lookup table may be enabled before checking other merge (or affine merge or other inter coding methods) candidates, such as derived from adjacent/non-adjacent spatial or temporal blocks.
    • f. Checking of motion candidates in the lookup table may be enabled when there is at least one motion candidate in a look up table.
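A minimal sketch of the bounded pruning in item d.vi.4 above follows, assuming an identity check same_motion() on MVs and reference indices (both names illustrative):

```python
def try_add_with_pruning(new_cand, merge_list, same_motion, k_max=2):
    """Compare the table candidate only with the first K existing entries,
    K = min(number of available candidates, k_max); if none matches, the
    candidate is treated as unique and appended."""
    k = min(len(merge_list), k_max)
    if any(same_motion(new_cand, merge_list[i]) for i in range(k)):
        return False                      # pruned as a duplicate of an early entry
    merge_list.append(new_cand)
    return True
```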


Example C3: The motion candidate(s) included in a look up table may be used as a predictor for coding motion information of a block.

    • a. They may be used for the AMVP mode coding, i.e., motion candidates may be checked in the AMVP candidate list derivation process.
    • b. They may be used for symmetric motion vector difference (SMVD) coding, wherein only part of the MVDs is signaled (e.g., the MVD is signaled for one reference picture list and derived for the other reference picture list).
    • c. They may be used for symmetric motion vector (SMV) coding, wherein only part of the MVs is signaled (e.g., the MV is signaled for one reference picture list and derived for the other reference picture list).
    • d. Checking of motion candidates in look up tables may be enabled when:
      • i. the AMVP candidate list is not full after checking or inserting the TMVP candidate;
      • ii. the AMVP candidate list is not full after selecting from spatial neighbors and pruning, right before inserting the TMVP candidate;
      • iii. when there is no AMVP candidate from above neighboring blocks without scaling and/or when there is no AMVP candidate from left neighboring blocks without scaling; and
      • iv. the AMVP candidate list is not full after inserting a certain AMVP candidate.
      • v. Pruning may be applied before adding a motion candidate to the AMVP candidate list.
      • vi. Similar rules as mentioned in Example C2. vi. 3 and 4 may be applied to AMVP mode.
    • e. Checking of motion candidates may be enabled before checking other AMVP (or SMVD/SMV/affine inter or other inter coding methods) candidates, such as derived from adjacent/non-adjacent spatial or temporal blocks.
    • f. Checking of motion candidates may be enabled when there is at least one motion candidate in a look up table.
    • g. Motion candidates with an identical reference picture to the current reference picture (i.e., the picture-order-count (POC) is the same) are checked. That is, when a motion candidate includes an identical reference picture to the current reference picture, the corresponding motion vector may be taken into consideration in the AMVP candidate list construction process.
      • i. Alternatively, in addition, motion candidates with different reference pictures from the current reference picture are also checked (with MV scaled). That is, when a motion candidate has a different reference picture to the current reference picture, the corresponding motion vector may be taken into consideration in the AMVP candidate list construction process.
      • ii. Alternatively, all motion candidates with an identical reference picture to the current reference picture are first checked; then, motion candidates with different reference pictures from the current reference picture are checked. That is, higher priority is assigned to those motion candidates having the identical reference picture (a sketch of this two-pass order is given after this list).
      • iii. Alternatively, motion candidates are checked following the same order as in merge mode.
      • iv. When one motion candidate is a bi-prediction candidate, the reference picture (such as the reference picture index or the picture order count of the reference picture) of reference picture list X may be checked first, followed by the reference picture of reference picture list Y (Y!=X, e.g., Y=1−X), if the current target reference picture list is X.
      • v. Alternatively, when one motion candidate is a bi-prediction candidate, the reference picture (such as the reference picture index or the picture order count of the reference picture) of reference picture list Y (Y!=X, e.g., Y=1−X) may be checked first, followed by the reference picture of reference picture list X, if the current target reference picture list is X.
      • vi. Alternatively, reference pictures of reference picture list X associated with all motion candidates to be checked may be checked before reference pictures of reference picture list Y (Y!=X, e.g., Y=1−X) associated with all motion candidates to be checked.
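A sketch of the two-pass checking order in items g and g.ii above (same-reference-picture candidates first, then candidates with other reference pictures with their MVs scaled) follows; poc_of() and scale_mv() are assumed helpers, and each table entry is assumed to carry an mv attribute:

```python
def check_lut_for_amvp(lut, target_poc, amvp_list, poc_of, scale_mv, max_size=2):
    """First pass takes candidates whose reference POC equals the target;
    second pass takes the remaining candidates with scaled MVs."""
    for same_poc_pass in (True, False):
        for cand in lut:
            if len(amvp_list) >= max_size:
                return
            if (poc_of(cand) == target_poc) == same_poc_pass:
                mv = cand.mv if same_poc_pass else scale_mv(cand, target_poc)
                if mv not in amvp_list:           # simple pruning
                    amvp_list.append(mv)
```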


Example C4: The checking order of motion candidates in a look up table is defined as follows (suppose K (K>=1) motion candidates are allowed to be checked):

    • a. The last K motion candidates in the look up table.
    • b. The first K % L candidates wherein L is the look up table size when K>=L.
    • c. All the candidates (L candidates) in the look up table when K>=L.
    • d. Alternatively, furthermore, based on the descending order of motion candidate indices.
    • e. Alternatively, furthermore, based on the ascending order of motion candidate indices.
    • f. Alternatively, selecting K motion candidates based on the candidate information, such as the distance of positions associated with the motion candidates and current block.
      • i. In one example, K nearest motion candidates are selected.
      • ii. in one example, the candidate information may further consider block shape when calculating the distance.
    • g. In one example, the checking order of K motion candidates from the table which includes L candidates may be defined as: selecting those candidates with index equal to a0, a0+T0, a0+T0+T1, a0+T0+T1+T2, . . . , a0+T0+T1+T2+ . . . +TK-1 in order, wherein a0 and Ti (i being 0 . . . K−1) are integer values (a sketch with a constant interval is given after this list).
      • i. In one example, a0 is set to 0 (i.e., the first entry of motion candidate in the table). Alternatively, a0 is set to (K−L/K). The arithmetic operation ‘/’ is defined as integer division with truncation of the result toward zero. Alternatively, a0 is set to any integer between 0 and L/K.
        • 1. Alternatively, the value of a0 may depend on coding information of the current block and neighbouring blocks.
      • ii. In one example, all the intervals Ti (i being 0 . . . K−1) are the same, such as L/K. The arithmetic operation ‘/’ is defined as integer division with truncation of the result toward zero.
      • iii. In one example, (K, L, a0, Ti) is set to (4, 16, 0, 4), or (4, 12, 0, 3) or (4, 8, 0, 1) or (4, 16, 3, 4) or (4, 12, 2, 3), or (4, 8, 1, 2). Ti are the same for all i.
      • iv. Such method may be only applied when K is smaller than L.
      • v. Alternatively, furthermore, when K is larger than or equal to a threshold, bullet 7.c. may be applied. The threshold may be defined as L, or it may depend on K or be adaptively changed from block to block. In one example, the threshold may depend on the number of available motion candidates in the list before adding a new one from the look-up table.
    • h. In one example, the checking order of K motion candidates from the table which includes L candidates may be defined as: selecting those candidates with index equal to a0, a0-T0, a0-T0-T1, a0-T0-T1-T2, . . . , a0-T0-T1-T2- . . . -TK-1 in order, wherein a0 and Ti (i being 0 . . . K−1) are integer values.
      • i. In one example, a0 is set to L−1 (i.e., the last entry of motion candidate in the table). Alternatively, a0 is set to any integer between L−1−L/K and L−1.
      • ii. In one example, all the intervals Ti (i being 0 . . . K−1) are the same, such as L/K.
      • iii. In one example, (K, L, a0, Ti) is set to (4, 16, L−1, 4), or (4, 12, L−1, 3) or (4, 8, L−1, 1) or (4, 16, L−4, 4) or (4, 12, L−3, 3), or (4, 8, L−2, 2). Ti are the same for all i.
      • iv. Such method may be only applied when K is smaller than L.
      • v. Alternatively, furthermore, when K is larger than or equal to a threshold, bullet 7.c. may be applied. The threshold may be defined as L, or it may depend on K or be adaptively changed from block to block. In one example, the threshold may depend on the number of available motion candidates in the list before adding a new one from the look-up table.
    • i. How many and/or how to select motion candidates from a lookup table may depend on the coded information, such as block size/block shape.
      • i. In one example, for a smaller block size, instead of choosing the last K motion candidates, the other K motion candidates (starting not from the last one) may be chosen.
      • ii. In one example, the coded information may be the AMVP or merge mode.
      • iii. In one example, the coded information may be the affine mode or non-affine AMVP mode or non-affine merge mode.
      • iv. In one example, the coded information may be the affine AMVP (inter) mode affine merge mode or non-affine AMVP mode or non-affine merge mode.
      • v. In one example, the coded information may be Current Picture Reference (CPR) mode or not CPR mode.
      • vi. Alternatively, how to select motion candidates from a look-up table may further depend on the number of motion candidates in the look-up table, and/or number of available motion candidates in the list before adding a new one from the look-up table.
    • j. In one example, the maximum number of motion candidates in a look up table to be checked (i.e., which may be added to the merge/AMVP candidate list) may depend on the number of available motion candidates (denoted by NavaiMCinLUT) in a look up table, and/or the maximally allowed motion candidates (denoted by NUMmaxMC) to be added (which may be pre-defined or signaled), and/or the number of available candidates (denoted by NavaiC) in a candidate list before checking the candidates from the look up table.
      • i. In one example, maximum number of motion candidates in the look up table to be checked is set to minimum value of (NavaiMCinLUT, NUMmaxMC, NavaiC).
      • ii. Alternatively, maximum number of motion candidates in the look up table to be checked is set to minimum value of (NavaiMCinLUT, NUMmaxMC−NavaiC).
      • iii. In one example, NavaiC denotes the number of inserted candidates derived from spatial or temporal (adjacent and/or non-adjacent) neighboring blocks. Alternatively, furthermore, the number of sub-block candidates (like ATMVP, STMVP) is not counted in NavaiC.
      • iv. NUMmaxMC may depend on the coded mode, e.g., for merge mode and AMVP mode, NUMmaxMC may be set to different values. In one example, for merge mode, NUMmaxMC may be set to 4, 6, 8, 10, etc.; for AMVP mode, NUMmaxMC may be set to 1, 2, 4, etc.
      • v. Alternatively, NUMmaxMC may depend on other coded information, like block size, block shape, slice type, etc.
    • k. The checking order of different look up tables is defined in usage of look up tables in the next subsection.
    • l. The checking process will terminate once the merge/AMVP candidate list reaches the maximally allowed candidate numbers.
    • m. The checking process will terminate once the merge/AMVP candidate list reaches the maximally allowed candidate numbers minus a threshold (Th). In one example, Th may be pre-defined as a positive integer value, e.g., 1, 2, or 3. Alternatively, Th may be adaptively changed from block to block. Alternatively, Th may be signaled in the SPS/PPS/slice header, etc. Alternatively, Th may further depend on block shape/block size/coded modes, etc. Alternatively, Th may depend on how many candidates are available before adding the motion candidates from LUTs.
    • n. Alternatively, the checking process will terminate once the number of added motion candidates reaches the maximally allowed motion candidate numbers. The maximally allowed motion candidate numbers may be signaled or pre-defined. Alternatively, the maximally allowed motion candidate numbers may further depend on block shape/block size/coded modes, etc.
    • o. One syntax element to indicate the table size as well as the number of motion candidates (i.e., K=L) allowed to be checked may be signaled in SPS, PPS, Slice header, tile header.
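For item g above with a constant interval, the checked indices are simply an arithmetic progression; a sketch (names illustrative):

```python
def checking_indices(L, K, a0):
    """Indices a0, a0+T, a0+2T, ... with T = L // K, clipped to the table size.
    E.g. (L, K, a0) = (16, 4, 0) yields [0, 4, 8, 12]."""
    T = L // K                       # integer division, truncated toward zero
    return [a0 + i * T for i in range(K) if 0 <= a0 + i * T < L]
```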


In some implementations, the motion candidates in a look up table may be utilized to derive other candidates and the derived candidates may be utilized for coding a block.


In some implementations, enabling/disabling the usage of look up tables for motion information coding of a block may be signaled in SPS, PPS, Slice header, tile header, CTU, CTB, CU or PU, region covering multiple CTU/CTB/CU/PUs.


In some implementations, whether to apply prediction from look up tables may further depend on the coded information. When it is inferred not to apply for a block, additional signaling of indications of the prediction is skipped. Alternatively, when it is inferred not to apply for a block, there is no need to access motion candidates of look up tables, and the checking of related motion candidates is omitted.


In some implementations, the motion candidates of a look up table in previously coded frames/slices/tiles may be used to predict motion information of a block in a different frame/slice/tile.

    • a. In one example, only look up tables associated with reference pictures of current block may be utilized for coding current block.
    • b. In one example, only look up tables associated with pictures with the same slice type and/or same quantization parameters of current block may be utilized for coding current block.


      Update of Look Up Tables


After coding a block with motion information (e.g., in intra block copy (IntraBC) mode or an inter coded mode), one or multiple look up tables may be updated.


For all above examples and implementations, the look up tables indicate the coded information or information derived from coded information from previously coded blocks in a decoding order.

    • a. A look up table may include the translational motion information, or affine motion information, or affine model parameters, or intra mode information, or illumination compensation information, etc.
    • b. Alternatively, a look up table may include at least two kinds of information, such as translational motion information, affine motion information, affine model parameters, intra mode information, or illumination compensation information.


Additional Example Embodiments

A history-based MVP (HMVP) method is proposed wherein a HMVP candidate is defined as the motion information of a previously coded block. A table with multiple HMVP candidates is maintained during the encoding/decoding process. The table is emptied when a new slice is encountered. Whenever there is an inter-coded block, the associated motion information is added to the last entry of the table as a new HMVP candidate. The overall coding flow is depicted in FIG. 30.


In one example, the table size is set to be L (e.g., L=16 or 6, or 44), which indicates up to L HMVP candidates may be added to the table.


In one embodiment (corresponding to example 11.g.i), if there are more than L HMVP candidates from the previously coded blocks, a First-In-First-Out (FIFO) rule is applied so that the table always contains the latest previously coded L motion candidates. FIG. 31 depicts an example wherein the FIFO rule is applied to remove a HMVP candidate and add a new one to the table used in the proposed method.


In another embodiment (corresponding to example 11.g.iii), whenever a new motion candidate is added (e.g., when the current block is inter-coded and in non-affine mode), a redundancy checking process is first applied to identify whether there are identical or similar motion candidates in the LUTs.
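A sketch of this redundancy-removal based updating follows, with same_motion() as an assumed identity/similarity check and an illustrative table size:

```python
def update_lut(lut, new_cand, same_motion, lut_size=16):
    """Remove a redundant entry if one exists (later entries move forward);
    otherwise fall back to FIFO when the table is full; then append the new
    candidate as the latest entry."""
    for i, cand in enumerate(lut):
        if same_motion(cand, new_cand):
            del lut[i]               # remove the redundant entry
            break
    else:
        if len(lut) == lut_size:
            lut.pop(0)               # no redundancy found: drop the oldest (FIFO)
    lut.append(new_cand)             # the new candidate becomes the latest entry
```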


Some examples are depicted as follows:



FIG. 32A shows an example when the LUT is full before adding a new motion candidate.



FIG. 32B shows an example when the LUT is not full before adding a new motion candidate.



FIGS. 32A and 32B together show an example of the redundancy-removal based LUT updating method (with one redundant motion candidate removed).



FIGS. 33A and 33B show example implementations for two cases of the redundancy-removal based LUT updating method (with multiple redundant motion candidates removed; 2 candidates in the figures).



FIG. 33A shows an example case of when the LUT is full before adding a new motion candidate.



FIG. 33B shows an example case of when the LUT is not full before adding a new motion candidate.


HMVP candidates could be used in the merge candidate list construction process. All HMVP candidates from the last entry to the first entry (or the last K0 HMVP candidates, e.g., K0 equal to 16 or 6) in the table are inserted after the TMVP candidate. Pruning is applied on the HMVP candidates. Once the total number of available merge candidates reaches the signaled maximally allowed merge candidates, the merge candidate list construction process is terminated. Alternatively, once the total number of added motion candidates reaches a given value, the fetching of motion candidates from LUTs is terminated.
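A sketch of this insertion step (after the TMVP candidate), with same_motion() as an assumed pruning check:

```python
def insert_hmvp_into_merge(merge_list, lut, max_merge, same_motion, k0=None):
    """Visit HMVP candidates from the last (most recent) entry to the first,
    optionally limited to the last K0 entries, pruning against the list."""
    cands = lut if k0 is None else lut[-k0:]
    for cand in reversed(cands):
        if len(merge_list) >= max_merge:
            break                                # list full: construction terminates
        if not any(same_motion(cand, c) for c in merge_list):
            merge_list.append(cand)
```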


Similarly, HMVP candidates could also be used in the AMVP candidate list construction process. The motion vectors of the last K1 HMVP candidates in the table are inserted after the TMVP candidate. Only HMVP candidates with the same reference picture as the AMVP target reference picture are used to construct the AMVP candidate list. Pruning is applied on the HMVP candidates. In one example, K1 is set to 4.
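And a matching sketch for the AMVP case, where only the last K1 table entries whose reference picture equals the AMVP target reference picture contribute; poc_of() is an assumed helper and each entry is assumed to carry an mv attribute:

```python
def insert_hmvp_into_amvp(amvp_list, lut, target_poc, poc_of, k1=4, max_amvp=2):
    """Insert MVs of the last K1 HMVP candidates with the same reference
    picture as the AMVP target, after the TMVP candidate, with pruning."""
    for cand in reversed(lut[-k1:]):             # most recent entries first
        if len(amvp_list) >= max_amvp:
            break
        if poc_of(cand) == target_poc and cand.mv not in amvp_list:
            amvp_list.append(cand.mv)            # pruning via the membership test
```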



FIG. 28 is a block diagram of a video processing apparatus 2800. The apparatus 2800 may be used to implement one or more of the methods described herein. The apparatus 2800 may be embodied in a smartphone, tablet, computer, Internet of Things (IoT) receiver, and so on. The apparatus 2800 may include one or more processors 2802, one or more memories 2804 and video processing hardware 2806. The processor(s) 2802 may be configured to implement one or more methods described in the present document. The memory (memories) 2804 may be used for storing data and code used for implementing the methods and techniques described herein. The video processing hardware 2806 may be used to implement, in hardware circuitry, some techniques described in the present document.



FIG. 29 is a flowchart for an example of a video decoding method 2900. The method 2900 includes maintaining tables (2902) wherein each table includes a set of motion candidates and each motion candidate is associated with corresponding motion information. The method 2900 further includes performing a conversion (2904) between a first video block and a bitstream representation of a video including the first video block, the performing of the conversion including using at least some of the set of motion candidates as a predictor to process motion information of the first video block.


With respect to method 2900, in some embodiments, the motion information includes at least one of a prediction direction, a reference picture index, motion vector values, intensity compensation flag, affine flag, motion vector difference precision, and motion vector difference value. Further, the motion information may further include block position information indicating source of the motion information. In some embodiments, the video block may be a CU or a PU and the portion of video may correspond to one or more video slices or one or more video pictures.


In some embodiments, each LUT includes an associated counter, wherein the counter is initialized to a zero value at beginning of the portion of video and increased for each encoded video region in the portion of the video. The video region comprises one of a coding tree unit, a coding tree block, a coding unit, a coding block or a prediction unit. In some embodiments, the counter indicates, for a corresponding LUT, a number of motion candidates that were removed from the corresponding LUT. In some embodiments, the set of motion candidates may have a same size for all LUTs. In some embodiments, the portion of video corresponds to a slice of video, and wherein the number of LUTs is equal to N*P, wherein N is an integer representing LUTs per decoding thread, and P is an integer representing a number of Largest Coding Unit rows or a number of tiles in the slice of video. Additional details of the method 2900 are described in the examples provided in Section 4 and the examples listed below.


Features and embodiments of the above-described methods/techniques are described below.


1. A video processing method, comprising: maintaining tables, wherein each table includes a set of motion candidates and each motion candidate is associated with corresponding motion information; and performing a conversion between a first video block and a bitstream representation of a video including the first video block, the performing of the conversion including using at least some of the set of motion candidates as a predictor to process motion information of the first video block.


2. The method of clause 1, wherein the tables include motion candidates derived from previously decoded video blocks that are decoded prior to the first video block.


3. The method of clause 1, wherein the performing of the conversion includes performing an Advanced Motion Vector Prediction (AMVP) candidate list derivation process using at least some of the set of motion candidates.


4. The method of clause 3, wherein the AMVP candidate list derivation process includes checking motion candidates from one or more tables.


5. The method of any one of clauses 1 to 4, wherein the performing of the conversion includes checking a motion candidate and a motion vector associated with the checked motion candidate is used as a motion vector predictor for coding the motion vector of the first video block.


6. The method of clause 4, wherein a motion vector associated with a checked motion candidate is added to the AMVP motion candidate list.


7. The method of clause 1, wherein the performing of the conversion includes checking at least some of the motion candidates based on a rule.


8. The method of clause 7, wherein the rule enables the checking when an AMVP candidate list is not full after checking a temporal motion vector prediction (TMVP) candidate.


9. The method of clause 7, wherein the rule enables the checking when an AMVP candidate list is not full after selecting from spatial neighbors and pruning, before inserting a TMVP candidate.


10. The method of clause 7, wherein the rule enables the checking when i) there is no AMVP candidate from above neighboring blocks without scaling, or ii) when there is no AMVP candidate from left neighboring blocks without scaling.


11. The method of clause 7, wherein the rule enables the checking when a pruning is applied before adding a motion candidate to a AMVP candidate list.


12. The method of clause 1, wherein motion candidates with an identical reference picture to a current reference picture are checked.


13. The method of clause 12, wherein motion candidates with a different reference picture from the current reference picture are further checked.


14. The method of clause 13, wherein the checking of the motion candidates with the identical reference picture is performed prior to the checking of the motion candidates with the different reference picture.


15. The method of clause 1, further comprising an AMVP candidate list construction process including a pruning operation before adding a motion vector from a motion candidate in a table.


16. The method of clause 15, wherein the pruning operation includes comparing a motion candidate to at least a part of available motion candidates in an AMVP candidate list.


17. The method of clause 15, wherein the pruning operation includes a number of operations, the number being a function of a number of spatial or temporal AMVP candidates.


18. The method of clause 17, wherein the number of operations is such that in case that M candidates are available in an AMVP candidate list, the pruning is applied only to K AMVP candidates where K<=M and where K and M are integers.


19. The method of clause 1, wherein the performing of the conversion includes performing a symmetric motion vector difference (SMVD) process using some of the motion vector differences.


20. The method of clause 1, wherein the performing of the conversion includes performing a symmetric motion vector (SMV) process using some of motion vectors.


21. The method of clause 7, wherein the rule enables the checking when an AMVP candidate list is not full after inserting a certain AMVP candidate.


22. The method of clause 1, further comprising enabling checking of motion candidates in the table, wherein the checking is enabled before checking other candidates derived from a spatial or temporal block and other candidates include AMVP candidates, SMVD candidates, SMV candidates, or affine inter candidates.


23. The method of clause 1, further comprising enabling checking of motion candidates in the table, wherein the checking is enabled when there is at least one motion candidate in the table.


24. The method of clause 1, wherein, for a motion candidate that is a bi-prediction candidate, a reference picture of a first reference picture list is checked before a reference picture of a second reference picture list is checked, the first reference picture list being a current target reference picture list.


25. The method of clause 1 or 2, wherein, for a motion candidate that is a bi-prediction candidate, a reference picture of a first reference picture list is checked before a reference picture of a second reference picture list is checked, the second reference picture list being a current target reference picture list.


26. The method of clause 1, wherein reference pictures of a first reference picture list are checked before reference pictures of a second reference picture list.


27. The method of clause 1, wherein the performing of the conversion includes generating the bitstream representation from the first video block.


28. The method of clause 1, wherein the performing of the conversion includes generating the first video block from the bitstream representation.


29. The method of any one of clauses 1 to 28, wherein a motion candidate is associated with motion information including at least one of a prediction direction, a reference picture index, motion vector values, an intensity compensation flag, an affine flag, a motion vector difference precision, or motion vector difference value.


30. The method of any of clauses 1 to 29, wherein a motion candidate is associated with intra prediction modes used for intra-coded blocks.


31. The method of any of clauses 1 to 29, wherein a motion candidate is associated with multiple illumination compensation (IC) parameters used for IC-coded blocks.


32. The method of any of clauses 1 to 29, wherein a motion candidate is associated with filter parameters used in the filtering process.


33. The method of any one of clauses 1 to 29, further comprising updating, based on the conversion, one or more tables.


34. The method of any one of clauses 1 to 33, wherein the updating of one or more tables includes updating one or more tables based on the motion information of the first video block after performing the conversion.


35. The method of clause 34, further comprising: performing a conversion between a subsequent video block of the video and the bitstream representation of the video based on the updated tables.


36. An apparatus comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to implement the method in any one of clauses 1 to 35.


37. A computer program product stored on a non-transitory computer readable media, the computer program product including program code for carrying out the method in any one of clauses 1 to 35.


From the foregoing, it will be appreciated that specific embodiments of the presently disclosed technology have been described herein for purposes of illustration, but that various modifications may be made without deviating from the scope of the invention. Accordingly, the presently disclosed technology is not limited except as by the appended claims.


The disclosed and other embodiments, modules and the functional operations described in this document can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this document and their structural equivalents, or in combinations of one or more of them. The disclosed and other embodiments can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus.


A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.


The processes and logic flows described in this document can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., a field programmable gate array (FPGA) or an application specific integrated circuit (ASIC).


Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random-access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and compact disc, read-only memory (CD ROM) and digital versatile disc read-only memory (DVD-ROM) disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.


While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.


Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.


Only a few implementations and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.

Claims
  • 1. A video processing method, comprising: maintaining one or more tables, wherein each table includes a set of motion candidates, each of which is associated with corresponding motion information and is derived from a previous video block, and arrangement of the motion candidates in the table is based on a sequence of addition of the motion candidates into the table;performing a motion candidate list derivation process to derive a motion candidate list for a first video block, wherein the motion candidate list derivation process comprises selectively checking one or more motion candidates in a table of the one or more tables in an order;deriving, based on the motion candidate list, motion information which is used as a motion vector predictor; andperforming a conversion between the first video block and a bitstream of a video including the first video block based on the motion information and a motion vector difference (MVD) between a motion vector and the motion vector predictor, wherein the MVD is indicated in the bitstream,wherein the checking of the one or more motion candidates in the table is enabled when the motion candidate list is not full after checking a temporal block in a picture different from a picture comprising the first video block to derive a temporal motion vector prediction (TMVP) motion candidate; or wherein the checking of the one or more motion candidates in the table is enabled when the motion candidate list is not full after inserting a certain motion candidate; or wherein the checking of the one or more motion candidates in the table is enabled when i) there is no motion candidate from above neighboring blocks without scaling, or ii) when there is no motion candidate from left neighboring blocks without scaling.
  • 2. The method of claim 1, wherein the performing of the conversion includes encoding the first video block into the bitstream.
  • 3. The method of claim 1, wherein the performing of the conversion includes decoding the first video block from the bitstream.
  • 4. The method of claim 1, wherein the checking of the one or more motion candidates in the table is enabled when there is at least one motion candidate in the table.
  • 5. The method of claim 1, wherein the motion candidate list is an Advanced Motion Vector Prediction (AMVP) candidate list.
  • 6. The method of claim 1, wherein whether to update the motion candidate list using one or more checked motion candidates is based on a checking result.
  • 7. The method of claim 1, wherein updating the motion candidate list comprises: adding a motion vector associated with a checked motion candidate into the motion candidate list.
  • 8. The method of claim 1, wherein at least one motion candidate of checked motion candidates used to update the motion candidate list has a same reference picture as a reference picture of the first video block.
  • 9. The method of claim 1, wherein the performing of the conversion includes performing at least one of a symmetric motion vector difference (SMVD) process using motion vector differences, or a symmetric motion vector (SMV) process using motion vectors.
  • 10. The method of claim 1, wherein during the checking of the one or more motion candidates in the table, a reference picture of a first reference picture list is checked and then a reference picture of a second reference picture list is checked.
  • 11. The method of claim 10, wherein the first reference picture list is a current target reference picture list.
  • 12. The method of claim 10, wherein a motion candidate to be checked is a bi-predicted motion candidate.
  • 13. The method of claim 1, wherein motion candidates with an identical reference picture in the table to a current reference picture are checked.
  • 14. The method of claim 13, wherein motion candidates with a different reference picture from the current reference picture are further checked, wherein the checking of the motion candidates with the identical reference picture is performed prior to the checking of the motion candidates with the different reference picture.
  • 15. The method of claim 1, wherein a motion candidate list construction process comprises a pruning operation before updating the motion candidate list based on at least one checked motion candidate in the table.
  • 16. The method of claim 15, wherein the pruning operation includes comparing a motion candidate to be checked to part or all of available motion candidates in the motion candidate list.
  • 17. The method of claim 15, wherein the pruning operation includes a number of operations, the number being a function of a number of spatial or temporal motion candidates.
  • 18. The method of claim 1, wherein the checking of the one or more motion candidates in the table is enabled before checking other motion candidates comprising at least one of: motion candidates derived from a spatial or temporal block, AMVP motion candidates, SMVD motion candidates, SMV motion candidates, or affine inter motion candidates.
  • 19. The method of claim 1, wherein a motion candidate in the table is associated with motion information including at least one of: a prediction direction, a reference picture index, motion vector values, an intensity compensation flag, an affine flag, a motion vector difference precision, an intra mode information, an illumination compensation (IC) parameter, filter parameters used in a filtering process or motion vector difference value.
  • 20. The method of claim 1, wherein the method further comprises: updating a table of one or more tables using a motion candidate which is based on the motion information of the first video block; andwherein an index of the motion candidate in the table corresponding to the motion information of the first video block has an index larger than other motion candidates in the table.
  • 21. The method of claim 1, wherein the one or more motion candidates in the table is checked in an order of one or more indices of the one or more motion candidates.
  • 22. An apparatus for processing video data comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions upon execution by the processor, cause the processor to: maintain one or more tables, wherein each table includes a set of motion candidates, each of which is associated with corresponding motion information and is derived from a previous video block, and arrangement of the motion candidates in the table is based on a sequence of addition of the motion candidates into the table;perform a motion candidate list derivation process to derive a motion candidate list for a first video block, wherein the motion candidate list derivation process comprises selectively checking one or more motion candidates in a table of the one or more tables in an order;derive, based on the motion candidate list, motion information which is used as a motion vector predictor; andperform a conversion between the first video block and a bitstream of a video including the first video block based on the motion information and a motion vector difference (MVD) between a motion vector and the motion vector predictor, wherein the MVD is indicated in the bitstream,wherein the checking of the one or more motion candidates in the table is enabled when the motion candidate list is not full after checking a temporal block in a picture different from a picture comprising the first video block to derive a temporal motion vector prediction (TMVP) motion candidate; or wherein the checking of the one or more motion candidates in the table is enabled when the motion candidate list is not full after inserting a certain motion candidate; or wherein the checking of the one or more motion candidates in the table is enabled when i) there is no motion candidate from above neighboring blocks without scaling, or ii) when there is no motion candidate from left neighboring blocks without scaling.
  • 23. A non-transitory computer-readable storage medium storing instructions that cause a processor to: maintain one or more tables, wherein each table includes a set of motion candidates, each of which is associated with corresponding motion information and is derived from a previous video block, and arrangement of the motion candidates in the table is based on a sequence of addition of the motion candidates into the table; perform a motion candidate list derivation process to derive a motion candidate list for a first video block, wherein the motion candidate list derivation process comprises selectively checking one or more motion candidates in a table of the one or more tables in an order; derive, based on the motion candidate list, motion information which is used as a motion vector predictor; and perform a conversion between the first video block and a bitstream of a video including the first video block based on the motion information and a motion vector difference (MVD) between a motion vector and the motion vector predictor, wherein the MVD is indicated in the bitstream, wherein the checking of the one or more motion candidates in the table is enabled when the motion candidate list is not full after checking a temporal block in a picture different from a picture comprising the first video block to derive a temporal motion vector prediction (TMVP) motion candidate; or wherein the checking of the one or more motion candidates in the table is enabled when the motion candidate list is not full after inserting a certain motion candidate; or wherein the checking of the one or more motion candidates in the table is enabled when i) there is no motion candidate from above neighboring blocks without scaling, or ii) there is no motion candidate from left neighboring blocks without scaling.
  • 24. A non-transitory computer-readable recording medium storing a bitstream which is generated by a method performed by a video processing apparatus, wherein the method comprises: maintaining one or more tables, wherein each table includes a set of motion candidates, each of which is associated with corresponding motion information and is derived from a previous video block, and arrangement of the motion candidates in the table is based on a sequence of addition of the motion candidates into the table; performing a motion candidate list derivation process to derive a motion candidate list for a first video block, wherein the motion candidate list derivation process comprises selectively checking one or more motion candidates in a table of the one or more tables in an order; deriving, based on the motion candidate list, motion information which is used as a motion vector predictor; and generating the bitstream from the first video block based on the motion information and a motion vector difference (MVD) between a motion vector and the motion vector predictor, wherein the MVD is indicated in the bitstream, wherein the checking of the one or more motion candidates in the table is enabled when the motion candidate list is not full after checking a temporal block in a picture different from a picture comprising the first video block to derive a temporal motion vector prediction (TMVP) motion candidate; or wherein the checking of the one or more motion candidates in the table is enabled when the motion candidate list is not full after inserting a certain motion candidate; or wherein the checking of the one or more motion candidates in the table is enabled when i) there is no motion candidate from above neighboring blocks without scaling, or ii) there is no motion candidate from left neighboring blocks without scaling.
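
To make the table-based scheme recited above concrete, the following minimal Python sketch illustrates the behavior described in claims 15-24: the table is ordered by insertion so that the most recently added candidate carries the largest index (claim 20); table candidates are checked in index order (claim 21), and only when the candidate list is still not full after the spatial and TMVP candidates (claims 22-24); and pruning compares a candidate against entries already in the list (claim 16). This is an illustrative sketch only, not the claimed implementation or any reference software; all identifiers (MotionCandidate, TABLE_SIZE, LIST_SIZE, update_table, derive_candidate_list) are hypothetical names introduced for the example.

    # Illustrative sketch only; names and sizes are assumptions, not from the patent.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class MotionCandidate:
        mv: tuple       # motion vector values (x, y)
        ref_idx: int    # reference picture index
        pred_dir: int   # prediction direction

    TABLE_SIZE = 6      # assumed bound on the number of table entries
    LIST_SIZE = 2       # assumed size of the AMVP-style candidate list

    def update_table(table, cand):
        # Claim 20: the candidate based on the just-processed block is appended,
        # so its index is larger than that of every other candidate in the table.
        if cand in table:              # prune an identical existing entry first
            table.remove(cand)
        table.append(cand)
        if len(table) > TABLE_SIZE:
            table.pop(0)               # evict the oldest (smallest-index) entry

    def derive_candidate_list(spatial_cands, tmvp_cand, table):
        # Claims 22-24: table candidates are checked only if the list is not
        # full after the spatial candidates and the TMVP candidate.
        cand_list = []
        for cand in spatial_cands + ([tmvp_cand] if tmvp_cand is not None else []):
            if cand not in cand_list:  # claim 16: compare against list entries
                cand_list.append(cand)
            if len(cand_list) == LIST_SIZE:
                return cand_list
        for cand in table:             # claim 21: checked in index order
            if cand not in cand_list:
                cand_list.append(cand)
            if len(cand_list) == LIST_SIZE:
                break
        return cand_list

    # Example: one spatial candidate and no TMVP; a table entry fills the list.
    table = []
    update_table(table, MotionCandidate((4, -2), 0, 1))
    print(derive_candidate_list([MotionCandidate((1, 1), 0, 1)], None, table))

Appending new candidates at the tail and evicting from the head keeps the table ordered by recency, so the index relation recited in claim 20 holds by construction.
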
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a continuation application of U.S. application Ser. No. 17/019,675, filed on Sep. 14, 2020, which is a continuation application of International Application No. PCT/IB2019/055595, filed on Jul. 1, 2019, which claims priority to and benefits of International Patent Application No. PCT/CN2018/093663, filed on Jun. 29, 2018, International Patent Application No. PCT/CN2018/105193, filed on Sep. 12, 2018, and International Patent Application No. PCT/CN2019/072058, filed on Jan. 16, 2019. All of the aforementioned patent applications are hereby incorporated by reference in their entireties.

US Referenced Citations (344)
Number Name Date Kind
7023922 Xu et al. Apr 2006 B1
7653134 Xu et al. Jan 2010 B2
7675976 Xu et al. Mar 2010 B2
7680189 Xu et al. Mar 2010 B2
7680190 Xu et al. Mar 2010 B2
7801220 Zhang et al. Sep 2010 B2
8804816 Li et al. Aug 2014 B2
9350970 Kang et al. May 2016 B2
9445076 Zhang et al. Sep 2016 B2
9485503 Zhang et al. Nov 2016 B2
9503702 Chen et al. Nov 2016 B2
9621888 Jeon Apr 2017 B2
9667996 Chen et al. May 2017 B2
9699450 Zhang et al. Jul 2017 B2
9762882 Zhang et al. Sep 2017 B2
9762900 Park et al. Sep 2017 B2
9807431 Hannuksela et al. Oct 2017 B2
9872016 Chuang et al. Jan 2018 B2
9900615 Li et al. Feb 2018 B2
9918102 Kohn et al. Mar 2018 B1
9967592 Zhang et al. May 2018 B2
9998727 Zhang et al. Jun 2018 B2
10021414 Seregin et al. Jul 2018 B2
10085041 Zhang et al. Sep 2018 B2
10116934 Zan et al. Oct 2018 B2
10154286 He et al. Dec 2018 B2
10158876 Chen et al. Dec 2018 B2
10200709 Chen et al. Feb 2019 B2
10200711 Li et al. Feb 2019 B2
10230980 Liu et al. Mar 2019 B2
10271064 Chien et al. Apr 2019 B2
10277909 Ye et al. Apr 2019 B2
10284869 Han et al. May 2019 B2
10306225 Zhang et al. May 2019 B2
10349083 Chen et al. Jul 2019 B2
10362330 Li et al. Jul 2019 B1
10368072 Zhang et al. Jul 2019 B2
10390029 Ye et al. Aug 2019 B2
10440378 Xu et al. Oct 2019 B1
10448010 Chen et al. Oct 2019 B2
10462439 He et al. Oct 2019 B2
10491902 Xu et al. Nov 2019 B1
10491917 Chen et al. Nov 2019 B2
10531118 Li et al. Jan 2020 B2
10560718 Lee et al. Feb 2020 B2
10595035 Karczewicz et al. Mar 2020 B2
10681383 Ye et al. Jun 2020 B2
10687077 Zhang et al. Jun 2020 B2
10694204 Chen et al. Jun 2020 B2
10701366 Chen et al. Jun 2020 B2
10771811 Liu et al. Sep 2020 B2
10778997 Zhang et al. Sep 2020 B2
10778999 Li et al. Sep 2020 B2
10805650 Wang et al. Oct 2020 B2
10812791 Chien et al. Oct 2020 B2
10841615 He et al. Nov 2020 B2
10873756 Zhang et al. Nov 2020 B2
10911769 Zhang et al. Feb 2021 B2
11128887 Lee Sep 2021 B2
11134243 Zhang et al. Sep 2021 B2
11134244 Zhang et al. Sep 2021 B2
11134267 Zhang et al. Sep 2021 B2
11140383 Zhang et al. Oct 2021 B2
11140385 Zhang Oct 2021 B2
11146785 Zhang et al. Oct 2021 B2
11146786 Zhang et al. Oct 2021 B2
11153557 Zhang et al. Oct 2021 B2
11153558 Zhang et al. Oct 2021 B2
11153559 Zhang et al. Oct 2021 B2
11159787 Zhang et al. Oct 2021 B2
11159807 Zhang et al. Oct 2021 B2
11159817 Zhang et al. Oct 2021 B2
11245892 Zhang et al. Feb 2022 B2
11412211 Lee Aug 2022 B2
11528501 Zhang Dec 2022 B2
11997253 Zhang May 2024 B2
12034914 Zhang Jul 2024 B2
20050105812 Molino et al. May 2005 A1
20060233243 Ridge et al. Oct 2006 A1
20070025444 Okada et al. Feb 2007 A1
20090180538 Visharam et al. Jul 2009 A1
20100080296 Lee et al. Apr 2010 A1
20110109964 Kim et al. May 2011 A1
20110116546 Guo et al. May 2011 A1
20110170600 Ishikawa Jul 2011 A1
20110194608 Rusert et al. Aug 2011 A1
20110194609 Rusert et al. Aug 2011 A1
20110200107 Ryu et al. Aug 2011 A1
20120082229 Su et al. Apr 2012 A1
20120134415 Lin et al. May 2012 A1
20120195366 Liu et al. Aug 2012 A1
20120195368 Chien et al. Aug 2012 A1
20120257678 Zhou et al. Oct 2012 A1
20120263231 Zhou Oct 2012 A1
20120287999 Li et al. Nov 2012 A1
20120300846 Sugio et al. Nov 2012 A1
20120307903 Sugio et al. Dec 2012 A1
20120320984 Zhou Dec 2012 A1
20130064301 Guo et al. Mar 2013 A1
20130070855 Zheng et al. Mar 2013 A1
20130094580 Zhou et al. Apr 2013 A1
20130101041 Fishwick Apr 2013 A1
20130114717 Zheng et al. May 2013 A1
20130114723 Bici et al. May 2013 A1
20130128982 Kim et al. May 2013 A1
20130163668 Chen et al. Jun 2013 A1
20130188013 Chen et al. Jul 2013 A1
20130188715 Seregin et al. Jul 2013 A1
20130208799 Srinivasamurthy et al. Aug 2013 A1
20130243093 Chen et al. Sep 2013 A1
20130265388 Zhang et al. Oct 2013 A1
20130272377 Karczewicz et al. Oct 2013 A1
20130272410 Seregin et al. Oct 2013 A1
20130272412 Seregin et al. Oct 2013 A1
20130272413 Seregin et al. Oct 2013 A1
20130294513 Seregin Nov 2013 A1
20130301734 Gisquet et al. Nov 2013 A1
20130336406 Zhang et al. Dec 2013 A1
20140049605 Chen Feb 2014 A1
20140064372 Laroche et al. Mar 2014 A1
20140078251 Kang et al. Mar 2014 A1
20140086327 Ugur et al. Mar 2014 A1
20140105295 Shiodera et al. Apr 2014 A1
20140105302 Takehara et al. Apr 2014 A1
20140126629 Park et al. May 2014 A1
20140133558 Seregin et al. May 2014 A1
20140161186 Zhang et al. Jun 2014 A1
20140185685 Asaka et al. Jul 2014 A1
20140219356 Nishitani et al. Aug 2014 A1
20140241434 Lin et al. Aug 2014 A1
20140286427 Fukushima et al. Sep 2014 A1
20140286433 He et al. Sep 2014 A1
20140321547 Takehara Oct 2014 A1
20140334557 Schierl et al. Nov 2014 A1
20140341289 Schwarz et al. Nov 2014 A1
20140355685 Chen et al. Dec 2014 A1
20140376614 Fukushima et al. Dec 2014 A1
20140376626 Lee Dec 2014 A1
20140376638 Nakamura et al. Dec 2014 A1
20150085932 Lin Mar 2015 A1
20150110197 Kim et al. Apr 2015 A1
20150189313 Shimada et al. Jul 2015 A1
20150195558 Kim Jul 2015 A1
20150237370 Zhou Aug 2015 A1
20150256853 Li et al. Sep 2015 A1
20150264386 Pang Sep 2015 A1
20150281733 Fu et al. Oct 2015 A1
20150312588 Yamamoto et al. Oct 2015 A1
20150326880 He et al. Nov 2015 A1
20150341635 Seregin et al. Nov 2015 A1
20150358635 Xiu et al. Dec 2015 A1
20160044332 Maaninen Feb 2016 A1
20160050430 Xiu et al. Feb 2016 A1
20160219278 Chen et al. Jul 2016 A1
20160227214 Rapaka et al. Aug 2016 A1
20160234492 Li et al. Aug 2016 A1
20160241867 Sugio et al. Aug 2016 A1
20160277761 Li et al. Sep 2016 A1
20160286230 Li et al. Sep 2016 A1
20160286232 Li et al. Sep 2016 A1
20160295240 Kim et al. Oct 2016 A1
20160301936 Chen et al. Oct 2016 A1
20160330471 Zhu et al. Nov 2016 A1
20160337661 Pang et al. Nov 2016 A1
20160366416 Liu et al. Dec 2016 A1
20160366442 Liu et al. Dec 2016 A1
20160373784 Bang Dec 2016 A1
20160381374 Bang Dec 2016 A1
20170006302 Lee et al. Jan 2017 A1
20170013269 Kim et al. Jan 2017 A1
20170048550 Hannuksela Feb 2017 A1
20170054995 Kim Feb 2017 A1
20170054996 Xu et al. Feb 2017 A1
20170078699 Park et al. Mar 2017 A1
20170099495 Rapaka et al. Apr 2017 A1
20170127082 Chen et al. May 2017 A1
20170127086 Lai et al. May 2017 A1
20170150168 Nakamura et al. May 2017 A1
20170163999 Li et al. Jun 2017 A1
20170188045 Zhou et al. Jun 2017 A1
20170214932 Huang et al. Jul 2017 A1
20170223352 Kim et al. Aug 2017 A1
20170238005 Chien et al. Aug 2017 A1
20170238011 Pettersson et al. Aug 2017 A1
20170264895 Takehara et al. Sep 2017 A1
20170272746 Sugio et al. Sep 2017 A1
20170280159 Xu et al. Sep 2017 A1
20170289570 Zhou et al. Oct 2017 A1
20170332084 Seregin et al. Nov 2017 A1
20170332095 Zou et al. Nov 2017 A1
20170332099 Lee et al. Nov 2017 A1
20170339425 Jeong et al. Nov 2017 A1
20180014017 Li et al. Jan 2018 A1
20180041769 Chuang et al. Feb 2018 A1
20180070100 Chen et al. Mar 2018 A1
20180077417 Huang Mar 2018 A1
20180084260 Chien Mar 2018 A1
20180098063 Chen et al. Apr 2018 A1
20180124398 Park May 2018 A1
20180184085 Yang et al. Jun 2018 A1
20180192069 Chen et al. Jul 2018 A1
20180192071 Chuang et al. Jul 2018 A1
20180242024 Chen et al. Aug 2018 A1
20180262753 Sugio et al. Sep 2018 A1
20180270500 Li Sep 2018 A1
20180278949 Karczewicz et al. Sep 2018 A1
20180310018 Guo et al. Oct 2018 A1
20180332284 Liu et al. Nov 2018 A1
20180332312 Liu et al. Nov 2018 A1
20180343467 Lin Nov 2018 A1
20180352223 Chen et al. Dec 2018 A1
20180352247 Park Dec 2018 A1
20180352256 Bang Dec 2018 A1
20180359483 Chen et al. Dec 2018 A1
20180376149 Zhang et al. Dec 2018 A1
20180376160 Zhang et al. Dec 2018 A1
20180376164 Zhang et al. Dec 2018 A1
20190098329 Han et al. Mar 2019 A1
20190116374 Zhang et al. Apr 2019 A1
20190116381 Lee et al. Apr 2019 A1
20190141334 Lim et al. May 2019 A1
20190158827 Sim et al. May 2019 A1
20190158866 Kim May 2019 A1
20190200040 Lim et al. Jun 2019 A1
20190215529 Laroche Jul 2019 A1
20190222848 Chen et al. Jul 2019 A1
20190222865 Zhang et al. Jul 2019 A1
20190230362 Chen et al. Jul 2019 A1
20190230376 Hu et al. Jul 2019 A1
20190297325 Lim et al. Sep 2019 A1
20190297343 Seo et al. Sep 2019 A1
20190320180 Yu et al. Oct 2019 A1
20190342557 Robert Nov 2019 A1
20190356925 Ye et al. Nov 2019 A1
20200014948 Lai et al. Jan 2020 A1
20200021839 Pham Van et al. Jan 2020 A1
20200021845 Lin et al. Jan 2020 A1
20200029088 Xu et al. Jan 2020 A1
20200036997 Li et al. Jan 2020 A1
20200077106 Jhu et al. Mar 2020 A1
20200077116 Lee et al. Mar 2020 A1
20200099951 Hung et al. Mar 2020 A1
20200112715 Hung et al. Apr 2020 A1
20200112741 Han et al. Apr 2020 A1
20200120334 Xu et al. Apr 2020 A1
20200128238 Lee et al. Apr 2020 A1
20200128266 Xu et al. Apr 2020 A1
20200145690 Li et al. May 2020 A1
20200154124 Lee May 2020 A1
20200169726 Kim et al. May 2020 A1
20200169745 Han et al. May 2020 A1
20200169748 Chen et al. May 2020 A1
20200186793 Racape et al. Jun 2020 A1
20200186820 Park et al. Jun 2020 A1
20200195920 Racape et al. Jun 2020 A1
20200195959 Zhang et al. Jun 2020 A1
20200195960 Zhang et al. Jun 2020 A1
20200204820 Zhang et al. Jun 2020 A1
20200396466 Zhang et al. Jun 2020 A1
20200221108 Xu et al. Jul 2020 A1
20200228815 Xu Jul 2020 A1
20200228825 Lim et al. Jul 2020 A1
20200236353 Zhang et al. Jul 2020 A1
20200244954 Heo et al. Jul 2020 A1
20200244979 Li Jul 2020 A1
20200267408 Lee et al. Aug 2020 A1
20200275124 Ko et al. Aug 2020 A1
20200280733 Li Sep 2020 A1
20200280735 Lim et al. Sep 2020 A1
20200280736 Wang Sep 2020 A1
20200288150 Jun et al. Sep 2020 A1
20200288157 Li Sep 2020 A1
20200288168 Zhang et al. Sep 2020 A1
20200296411 Li Sep 2020 A1
20200296414 Park et al. Sep 2020 A1
20200304805 Li Sep 2020 A1
20200322628 Lee et al. Oct 2020 A1
20200336726 Wang et al. Oct 2020 A1
20200366923 Zhang et al. Nov 2020 A1
20200374542 Zhang et al. Nov 2020 A1
20200374543 Liu et al. Nov 2020 A1
20200374544 Liu et al. Nov 2020 A1
20200382770 Zhang et al. Dec 2020 A1
20200396446 Zhang et al. Dec 2020 A1
20200396447 Zhang et al. Dec 2020 A1
20200396462 Zhang et al. Dec 2020 A1
20200404253 Chen Dec 2020 A1
20200404254 Zhao et al. Dec 2020 A1
20200404285 Zhang et al. Dec 2020 A1
20200404305 Ye Dec 2020 A1
20200404306 Auyeung Dec 2020 A1
20200404316 Zhang et al. Dec 2020 A1
20200404319 Zhang et al. Dec 2020 A1
20200404320 Zhang et al. Dec 2020 A1
20200413038 Zhang et al. Dec 2020 A1
20200413044 Zhang et al. Dec 2020 A1
20200413045 Zhang et al. Dec 2020 A1
20210006787 Zhang et al. Jan 2021 A1
20210006788 Zhang et al. Jan 2021 A1
20210006790 Zhang et al. Jan 2021 A1
20210006819 Zhang et al. Jan 2021 A1
20210006823 Zhang et al. Jan 2021 A1
20210014520 Zhang et al. Jan 2021 A1
20210014525 Zhang et al. Jan 2021 A1
20210021856 Zheng Jan 2021 A1
20210029351 Zhang et al. Jan 2021 A1
20210029352 Zhang et al. Jan 2021 A1
20210029362 Liu et al. Jan 2021 A1
20210029366 Zhang et al. Jan 2021 A1
20210029372 Zhang et al. Jan 2021 A1
20210029374 Zhang et al. Jan 2021 A1
20210051324 Zhang et al. Feb 2021 A1
20210051339 Liu et al. Feb 2021 A1
20210067783 Liu et al. Mar 2021 A1
20210076063 Liu et al. Mar 2021 A1
20210092357 Wang Mar 2021 A1
20210092379 Zhang et al. Mar 2021 A1
20210092436 Zhang et al. Mar 2021 A1
20210105482 Zhang et al. Apr 2021 A1
20210120234 Zhang et al. Apr 2021 A1
20210168368 Xu Jun 2021 A1
20210185326 Wang et al. Jun 2021 A1
20210203984 Salehifar et al. Jul 2021 A1
20210235108 Zhang et al. Jul 2021 A1
20210243476 Ko Aug 2021 A1
20210258569 Chen et al. Aug 2021 A1
20210297659 Zhang et al. Sep 2021 A1
20210321089 Lin Oct 2021 A1
20210329292 Jeong Oct 2021 A1
20210337216 Zhang et al. Oct 2021 A1
20210344947 Zhang et al. Nov 2021 A1
20210352312 Zhang et al. Nov 2021 A1
20210360230 Zhang et al. Nov 2021 A1
20210360277 Jeong Nov 2021 A1
20210360278 Zhang et al. Nov 2021 A1
20210368180 Park Nov 2021 A1
20210377518 Zhang et al. Dec 2021 A1
20210377545 Zhang et al. Dec 2021 A1
20210377558 Xiu Dec 2021 A1
20210400298 Zhao Dec 2021 A1
20220007047 Zhang et al. Jan 2022 A1
20220021900 Jeong Jan 2022 A1
20220385887 Jun Dec 2022 A1
20220417551 Lim Dec 2022 A1
Foreign Referenced Citations (159)
Number Date Country
2019293670 Jun 2023 AU
112020024142 Mar 2021 BR
3020265 Nov 2017 CA
1898715 Jan 2007 CN
1925614 Mar 2007 CN
101193302 Jun 2008 CN
101933328 Dec 2010 CN
102474619 May 2012 CN
102860006 Jan 2013 CN
102907098 Jan 2013 CN
102946536 Feb 2013 CN
103004204 Mar 2013 CN
103096071 May 2013 CN
103096073 May 2013 CN
103339938 Oct 2013 CN
103370937 Oct 2013 CN
103404143 Nov 2013 CN
103444182 Dec 2013 CN
103518374 Jan 2014 CN
103535039 Jan 2014 CN
103535040 Jan 2014 CN
103609123 Feb 2014 CN
103797799 May 2014 CN
103828364 May 2014 CN
103858428 Jun 2014 CN
103891281 Jun 2014 CN
103931192 Jul 2014 CN
104041042 Sep 2014 CN
104054350 Sep 2014 CN
104079944 Oct 2014 CN
104126302 Oct 2014 CN
104247434 Dec 2014 CN
104272743 Jan 2015 CN
104350749 Feb 2015 CN
104365102 Feb 2015 CN
104396248 Mar 2015 CN
104539950 Apr 2015 CN
104584549 Apr 2015 CN
104662909 May 2015 CN
104756499 Jul 2015 CN
104915966 Sep 2015 CN
105245900 Jan 2016 CN
105324996 Feb 2016 CN
105556971 May 2016 CN
105681807 Jun 2016 CN
105917650 Aug 2016 CN
106464864 Feb 2017 CN
106471806 Mar 2017 CN
106716997 May 2017 CN
106797477 May 2017 CN
106851046 Jun 2017 CN
106851267 Jun 2017 CN
106851269 Jun 2017 CN
107071458 Aug 2017 CN
107079161 Aug 2017 CN
107079162 Aug 2017 CN
107087165 Aug 2017 CN
107113424 Aug 2017 CN
107113442 Aug 2017 CN
107113446 Aug 2017 CN
107197301 Sep 2017 CN
107211156 Sep 2017 CN
107295348 Oct 2017 CN
107347159 Nov 2017 CN
107431820 Dec 2017 CN
107493473 Dec 2017 CN
107592529 Jan 2018 CN
107690809 Feb 2018 CN
107690810 Feb 2018 CN
107710764 Feb 2018 CN
107959853 Apr 2018 CN
108134934 Jun 2018 CN
108200437 Jun 2018 CN
108235009 Jun 2018 CN
108293127 Jul 2018 CN
108353184 Jul 2018 CN
109076218 Dec 2018 CN
109089119 Dec 2018 CN
110169073 Aug 2019 CN
113615193 Nov 2021 CN
2532160 Dec 2012 EP
2668784 Dec 2013 EP
2741499 Jun 2014 EP
2983365 Feb 2016 EP
3791585 Mar 2021 EP
3791588 Mar 2021 EP
3794825 Mar 2021 EP
201111867 Aug 2011 GB
2488815 Sep 2012 GB
2492778 Jan 2013 GB
2588006 Apr 2021 GB
2013110766 Jun 2013 JP
2013537772 Oct 2013 JP
2014501091 Jan 2014 JP
2014509480 Apr 2014 JP
2014197883 Oct 2014 JP
2016059066 Apr 2016 JP
2017123542 Jul 2017 JP
2017028712 Jan 2019 JP
2019515587 Jun 2019 JP
2020523853 Aug 2020 JP
2021052373 Apr 2021 JP
2021510265 Apr 2021 JP
2021513795 May 2021 JP
2022504073 Jan 2022 JP
2022507682 Jan 2022 JP
2022507683 Jan 2022 JP
7502380 Jun 2024 JP
20170058871 May 2017 KR
20170115969 Oct 2017 KR
102680903 Jul 2024 KR
2550554 May 2015 RU
2571572 Dec 2015 RU
2632158 Oct 2017 RU
2669005 Oct 2018 RU
201444349 Nov 2014 TW
201832556 Sep 2018 TW
2011095259 Aug 2011 WO
2011095260 Aug 2011 WO
2012074344 Jun 2012 WO
2012095467 Jul 2012 WO
2012172668 Dec 2012 WO
2013081365 Jun 2013 WO
2013157251 Oct 2013 WO
2014007058 Jan 2014 WO
2014054267 Apr 2014 WO
2015006920 Jan 2015 WO
2015010226 Jan 2015 WO
2015042432 Mar 2015 WO
2015052273 Apr 2015 WO
2015100726 Jul 2015 WO
2015180014 Dec 2015 WO
2016008409 Jan 2016 WO
2016054979 Apr 2016 WO
2016091161 Jun 2016 WO
2017043734 Mar 2017 WO
2017058633 Apr 2017 WO
2017076221 May 2017 WO
2017084512 May 2017 WO
2017147765 Sep 2017 WO
2017197126 Nov 2017 WO
2017222237 Dec 2017 WO
2018012886 Jan 2018 WO
2018026148 Feb 2018 WO
2018045944 Mar 2018 WO
2018048904 Mar 2018 WO
2018058526 Apr 2018 WO
2018061522 Apr 2018 WO
2018065397 Apr 2018 WO
2018070107 Apr 2018 WO
2018127119 Jul 2018 WO
2018231700 Dec 2018 WO
2018237299 Dec 2018 WO
2019223746 Nov 2019 WO
2020003275 Jan 2020 WO
2020003279 Jan 2020 WO
2020003284 Jan 2020 WO
2020113051 Jun 2020 WO
Non-Patent Literature Citations (149)
Entry
US 11,089,321 B2, 08/2021, Zhang (withdrawn)
Enhanced AMVP Mechanism Based Adaptive Motion Search Range Decision, HEVC; 2014. (Year: 2014).
Parallel AMVP candidate list construction for HEVC; Yu; 2016. (Year: 2016).
Enhanced AMVP Mechanism Based Adaptive Motion Search Range, HEVC; 2014. (Year: 2014).
Description of SDR & HDR video coding technology proposed by Ericsson and Nokia; 2018 (Year: 2018).
Reducing coding cost of merge index by dynamic merge reallocation; 2012.
Non-Final Office Action from U.S. Appl. No. 16/998,258 dated Nov. 25, 2020.
Notice of Allowance from U.S. Appl. No. 17/229,019 dated Oct. 12, 2022.
Notice of Eligibility of Grant from Singapore Patent Application No. 11202011714R dated Jul. 25, 2022.
Final Office Action from U.S. Appl. No. 17/480,184 dated May 2, 2022.
Examination Report from Patent Application GB2020091.1 mailed Mar. 21, 2022.
Examination Report from Patent Application GB2018263.0 mailed Mar. 30, 2022.
Examination Report from Patent Application GB2019557.4 mailed Apr. 1, 2022.
Extended European Search Report from European Patent Application No. 20737921.5 dated Feb. 22, 2022 (9 pages).
Notice of Allowance from U.S. Appl. No. 17/019,753 dated Dec. 1, 2021.
Non-Final Office Action from U.S. Appl. No. 17/480,184 dated Dec. 29, 2021.
Non-Final Office Action from U.S. Appl. No. 17/796,708 dated Aug. 11, 2021.
Non-Final Office Action from U.S. Appl. No. 17/019,753 dated Jul. 22, 2021.
Notice of Allowance from U.S. Appl. No. 16/998,296 dated Mar. 23, 2021.
Notice of Allowance from U.S. Appl. No. 16/998,258 dated Mar. 24, 2021.
Non-Final Office Action from U.S. Appl. No. 17/011,058 dated Apr. 13, 2021.
Final Office Action from U.S. Appl. No. 17/071,139 dated Apr. 16, 2021.
Non-Final Office Action from U.S. Appl. No. 17/229,019 dated Jun. 25, 2021.
Notice of Allowance from U.S. Appl. No. 17/011,068 dated Mar. 1, 2021.
Notice of Allowance from U.S. Appl. No. 17/018,200 dated Mar. 1, 2021.
Final Office Action from U.S. Appl. No. 17/019,753 dated Mar. 8, 2021.
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055588 dated Sep. 16, 2019 (21 pages).
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055591 dated Jan. 10, 2019 (16 pages).
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055593 dated Sep. 16, 2019 (23 pages).
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055595 dated Sep. 16, 2019 (25 pages).
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055619 dated Sep. 16, 2019 (26 pages).
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055620 dated Sep. 25, 2019 (18 pages).
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055621 dated Sep. 30, 2019 (18 pages).
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055622 dated Sep. 16, 2019 (13 pages).
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055623 dated Sep. 26, 2019 (17 pages).
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055624 dated Sep. 26, 2019 (17 pages).
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055625 dated Sep. 26, 2019 (19 pages).
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055626 dated Sep. 16, 2019 (17 pages).
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/057690 dated Dec. 16, 2019 (17 pages).
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/057692 dated Jan. 7, 2020 (16 pages).
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055571 dated Sep. 16, 2019 (20 pages).
International Search Report and Written Opinion from International Patent Application No. PCT/IB2020/080597 dated Jun. 30, 2020 (11 pages).
Non-Final Office Action from U.S. Appl. No. 16/803,706 dated Apr. 17, 2020.
Non-Final Office Action from U.S. Appl. No. 16/796,693 dated Apr. 28, 2020.
Non-Final Office Action from U.S. Appl. No. 16/796,708 dated May 29, 2020.
Non-Final Office Action from U.S. Appl. No. 16/993,598 dated Oct. 14, 2020.
Final Office Action from U.S. Appl. No. 16/796,693 dated Oct. 27, 2020.
Non-Final Office Action from U.S. Appl. No. 17/005,634 dated Nov. 13, 2020.
Non-Final Office Action from U.S. Appl. No. 17/019,753 dated Nov. 17, 2020.
Non-Final Office Action from U.S. Appl. No. 17/037,322 dated Nov. 17, 2020.
Non-Final Office Action from U.S. Appl. No. 17/011,068 dated Nov. 19, 2020.
Non-Final Office Action from U.S. Appl. No. 17/018,200 dated Nov. 20, 2020.
Non-Final Office Action from U.S. Appl. No. 16/998,296 dated Nov. 24, 2020.
Document: JCT3V-B0078, Guionnet et al., "CE5.h: Reducing the Coding Cost of Merge Index by Dynamic Merge Index Reallocation," Joint Collaborative Team on 3D Video Coding Extension Development of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 2nd Meeting: Shanghai, CN, Oct. 13-19, 2012.
Document: JVET-C0035, Lee et al., "Modification of Merge Candidate Derivation: ATMVP Simplification and Merge Pruning," Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 3rd Meeting: Geneva, CH, May 26-Jun. 1, 2016.
Non-Final Office Action from U.S. Appl. No. 17/005,702 dated Nov. 27, 2020.
Non-Final Office Action from U.S. Appl. No. 17/005,574 dated Dec. 1, 2020.
Non-Final Office Action from U.S. Appl. No. 17/011,058 dated Dec. 15, 2020.
Non-Final Office Action from U.S. Appl. No. 17/071,139 dated Dec. 15, 2020.
Non-Final Office Action from U.S. Appl. No. 16/993,561 dated Dec. 24, 2020.
Non-Final Office Action from U.S. Appl. No. 17/031,404 dated Dec. 24, 2020.
Notice of Allowance from U.S. Appl. No. 16/796,693 dated Feb. 10, 2021.
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055549 dated Aug. 20, 2019 (16 pages).
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055575 dated Aug. 20, 2019 (12 pages).
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055576 dated Sep. 16, 2019 (15 pages).
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055582 dated Sep. 20, 2019 (18 pages).
International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/071332 dated Apr. 9, 2020 (9 pages).
International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/071656 dated Apr. 3, 2020 (12 pages).
International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/072387 dated Apr. 20, 2020 (10 pages).
International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/072391 dated Mar. 6, 2020 (11 pages).
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055554 dated Aug. 20, 2019 (16 pages).
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055556 dated Aug. 29, 2019 (15 pages).
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055581 dated Aug. 29, 2019 (25 pages).
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055586 dated Sep. 16, 2019 (16 pages).
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055587 dated Sep. 16, 2019 (23 pages).
Non-Final Office Action from U.S. Appl. No. 17/457,868 dated Nov. 25, 2022.
Non-Final Office Action from U.S. Appl. No. 16/796,708 dated Nov. 23, 2022.
Non-Final Office Action from U.S. Appl. No. 17/135,054 dated Nov. 25, 2022.
Non-Final Office Action dated Nov. 10, 2020, 11 pages, U.S. Appl. No. 17/019,675, filed Sep. 14, 2020.
Final Office Action dated Mar. 19, 2021, 50 pages, U.S. Appl. No. 17/019,675, filed Sep. 14, 2020.
Non-Final Office Action dated Nov. 18, 2021, 39 pages, U.S. Appl. No. 17/019,675, filed Sep. 14, 2020.
Notice of Allowance dated Mar. 11, 2022, 23 pages, U.S. Appl. No. 17/019,675, filed Sep. 14, 2020.
Notice of Allowance dated Jun. 16, 2022, 19 pages, U.S. Appl. No. 17/019,675, filed Sep. 14, 2020.
Nevdyaev, “Telecommunication Technologies, English-Russian Explanatory Dictionary and Reference Book,” Communications and Business, Moscow, 2002, 5 pages.
Non-Final Office Action dated Mar. 30, 2023, 98 pages, U.S. Appl. No. 17/369,132 filed Jul. 7, 2021.
Chen et al. "Description of SDR, HDR and 360° video coding technology proposal by Qualcomm and Technicolor - low and high complexity versions", Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 10th Meeting: San Diego, US, JVET-J0021 (Apr. 2018).
Chen et al. “CE4.3.1: Shared merging candidate list”, JVET 13th Meeting, JVET-M0170-v1 (Jan. 2019).
Wang et al. “Spec text for the agreed starting point on slicing and tiling”, JVET 12th Meeting, JVET-L0686-v2 (Oct. 2018).
Han et al. “A dynamic motion vector referencing scheme for video coding” IEEE International Conference on Image Processing (ICIP), (Sep. 2016).
Rapaka et al. "On intra block copy merge vector handling" JCT-VC Meeting, JCTVC-V0049 (Oct. 2015).
Chen et al. “Symmetrical mode for bi-prediction” JVET Meeting,JVET-J0063 (Apr. 2018).
Wang et al. “Description of Core Experiment 4 (CE4): Inter prediction and motion vector coding” JVET-K1024 (Jul. 2018).
Zhang et al. “CE4-related: Restrictions on History-based Motion Vector Prediction”, JVET-M0272 (Jan. 2019).
Zhang et al. “CE2-related: Early awareness of accessing temporal blocks in sub-block merge list construction”, JVET-M0273 (Jan. 2019).
Robert et al. “High precision FRUC with additional candidates” JVET Meeting JVET-D0046 (Oct. 2016).
Toma et al. "Description of SDR video coding technology proposal by Panasonic", Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 10th Meeting: San Diego, US, JVET-J0020-v1 and v2 (Apr. 2018).
Zhang et al. "CE4-related: History-based Motion Vector Prediction", Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, Document JVET-K0104-v5, Meeting Report of the 11th meeting of the Joint Video Experts Team (JVET), Ljubljana, SI, Jul. 10-18, 2018.
Zhang et al., “History-Based Motion Vector Prediction in Versatile Video Coding”, 2019 Data Compression Conference (DCC), IEEE, pp. 43-52, XP033548557 (Mar. 2019).
Esenlik et al. "Description of Core Experiment 9 (CE9): Decoder Side Motion Vector Derivation" JVET-J1029-r4 (Apr. 2018).
Sjoberg et al. "Description of SDR and HDR video coding technology proposal by Ericsson and Nokia" JVET Meeting, JVET-J0012-v1 (Apr. 2018).
Xu et al. "Intra block copy improvement on top of Tencent's CfP response" JVET Meeting, JVET-J0050-r2 (Apr. 2018).
Lin et al. “CE3: Summary report on motion prediction for texture coding” JCT-3V Meeting, JCT3V-G0023, Jan. 2014.
Sprljan et al. "TE3 subtest 3: Local intensity compensation (LIC) for inter prediction", JCT-VC of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 3rd Meeting: Guangzhou, CN, JCTVC-C233 (Oct. 2010).
Document: JVET-J0022, Bordes, et al., "Description of SDR, HDR and 360° video coding technology proposal by Qualcomm and Technicolor—medium complexity version," Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 10th Meeting: San Diego, US, Apr. 10-20, 2018.
Zhang et al. "CE4: History-based Motion Vector Prediction (Test 4.4.7)", Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting: Macao, CN, Oct. 3-12, 2018, Document JVET-L0266-v1 and v2, Oct. 12, 2018.
Wang et al. "Description of Core Experiment 4 (CE4): Inter prediction and Motion Vector Coding", JVET Meeting, The Joint Video Exploration Team of ISO/IEC JTC1/SC29/WG11 and ITU-T SG.16, 10th Meeting: San Diego, Apr. 20, 2018, Document JVET-J1024, Apr. 20, 2018.
Lee et al., “Non-CE4: HMVP Unification between the Merge and MVP List,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting, Geneva, CH, Mar. 19-27, 2019, document JVET-N0373, Mar. 2019.
Zhu et al. "Simplified HMVP," Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 13th Meeting: Marrakech, MA, Jan. 9-18, 2019, Document JVET-M0473, Jan. 2019.
Document: JVET-M0562, Bandyopadhyay, S., "Cross-Check of JVET-M0436: AHG2: Regarding HMVP Table Size," Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 13th Meeting: Marrakech, MA, Jan. 9-18, 2019.
Zhang et al. "CE4-4.4: Merge List Construction for Triangular Prediction Mode," Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting: Geneva, CH, Mar. 19-27, 2019, document JVET-N0269, Mar. 2019.
Solovyev et al. “CE-4.6: Simplification for Merge List Derivation in Triangular Prediction Mode,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting: Geneva, CH, Mar. 19-27, 2019, document JVET-N0454, Mar. 2019.
Zhang et al. "CE10-related: Merge List Construction Process for Triangular Prediction Mode," Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 13th Meeting: Marrakech, MA, Jan. 9-18, 2019, document JVET-M0271, Jan. 2019.
Ma et al. “Eleventh Five-Year Plan” teaching materials for ordinary colleges and universities, Principle and Application of S7-200 PLC and Digital Speed Control Systems, Jul. 31, 2009.
Enhanced AMVP Mechanism Based Adaptive Motion Search for Fast HEVC Coding; 2014.
Motion vector prediction methods considering prediction continuity in HEVC; 2016.
Hardware-friendly Advanced Motion Vector Predictor for hevc; 2018.
Parallel AMVP candidate list construction for HEVC; 2016.
Guionnet et al. "CE5.h: Reducing the Coding Cost of Merge Index by Dynamic Merge Index Reallocation," Joint Collaborative Team on 3D Video Coding Extension Development of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 2nd Meeting: Shanghai, CN, Oct. 13-19, 2012, document JCT3V-B0078, 2012.
Lee et al. "EE2.6: Modification of Merge Candidate Derivation: ATMVP Simplification and Merge Pruning," Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 3rd Meeting: Geneva, CH, May 26-Jun. 1, 2016, document JVET-C0035, 2016.
Document: JVET-L1002, Chen et al., "Algorithm Description for Versatile Video Coding and Test Model 3 (VTM 3)," Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting, Macao, CN, Oct. 3-12, 2018.
Library—USPTO search query; 2022.
Document: JVET-L0401-r3 Chien, et al., “CE4-related: Modification on History-based Mode Vector Prediction,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting, Macao, CN, Oct. 3-12, 2018.
Xin, Y., "Exploration and Optimization of Merge Mode Candidate Decision in HEVC," 2016 Microcomputers and Applications No. 15, (School of Information Engineering, Shanghai Maritime University, Shanghai 201306), Sep. 1, 2016.
Document: JVET-K1000, Sullivan et al., “Meeting Report of the 11th Meeting of the Joint Video Experts Team (JVET), Ljubljana, SI, Jul. 10-18, 2018,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 11th Meeting:Ljubljana, SI, Jul. 10-18, 2018.
Document: JVET-J0024-v2, Akula, S., et al., “Description of SDR, HDR and 360° video coding technology proposal considering mobile application scenario by Samsung, Huawei, GoPro, and HiSilicon,” Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 10th Meeting: San Diego, US, Apr. 10-20, 2018, 139 pages.
Document: JVET-M0124, Zhao, J., et al., “CE4: Methods of Reducing No. of Pruning Checks of History Based Motion Vector Prediction (Test 4.1.1),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 13th Meeting: Marrakech, MA, Jan. 9-18, 2019, 6 pages.
Jia, J., et al., “A fast candidate selection method for Merge mode based on adaptive threshold,” Journal of Optoelectronics—Laser, vol. 27, No. 9, Sep. 2016, 7 pages.
Non-Final Office Action dated Jul. 3, 2023, 101 pages, U.S. Appl. No. 17/374,160 filed Jul. 13, 2021.
Non-Final Office Action dated Aug. 7, 2023, 101 pages, U.S. Appl. No. 17/374,311, filed Jul. 13, 2021.
Non-Final Office Action dated Aug. 21, 2023, 126 pages, U.S. Appl. No. 17/374,208, filed Jul. 13, 2021.
Canadian Office Action from Canadian Application No. 3,101,730 dated Aug. 10, 2023.
Document: JVET-J0024-v2, Akula, S., et al., “Description of SDR, HDR and 360 video coding technology proposal by Samsung, Huawei, GoPro, and HiSilicon,” Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, JVET-J0024-v2, 10th Meeting: San Diego, Apr. 10-20, 2018, 139 pages.
Document: JCTVC-S1014, Joshi, R., et al., “Screen content coding test model 3 (SCM 3),” Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 19th Meeting: Strasbourg, FR, Oct. 17-24, 2014, 12 pages.
Document: JVET-K0104-v5, Zhang, L., et al., “CE4-related: History-based Motion Vector Prediction,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 11th Meeting: Ljubljana, SI, Jul. 10-18, 2018, 7 pages.
Document: JVET-L0124-v2, Liao, R., et al., “CE10.3.1.b: Triangular prediction unit mode,” Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 12th Meeting: Macao, CN, Oct. 3-12, 2018, 8 pages.
Document: JCTVC-G157, Hendry, “Reference List Construction for Random Access Settings,” Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11 7th Meeting: Geneva, CH, Nov. 21-30, 2011, 5 pages.
Partial European Search Report from European Application No. 23210728.4 dated Jan. 10, 2024, 19 pages.
Document: JVET-J0012-v1, Sjoberg, R., et al., “Description of SDR and HDR video coding technology proposal by Ericsson and Nokia,” Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 10th Meeting: San Diego, CA, USA, Apr. 10-20, 2018, 32 pages.
Document: JVET-L1002-v1, Chen, J., et al., “Algorithm description for Versatile Video Coding and Test Model 3 (VTM 3),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 12th Meeting: Macao, CN, Oct. 3-12, 2018, 48 pages.
Document: JVET-L0266-v2, Zhang, L., et al., “CE4: History-based Motion Vector Prediction (Test 4.4.7),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 12th Meeting: Macao, CN, Oct. 3-12, 2018, 8 pages.
Taiwanese Office Action from Taiwan Patent Application No. 108133113 dated Apr. 26, 2024, 23 pages.
Japanese Notice of Reasons for Refusal from Japanese Patent Application No. 2023-072498 dated May 28, 2024, 8 pages.
Extended European Search Report from European Application No. 23213700.0 dated May 16, 2024, 24 pages.
Final Office Action from U.S. Appl. No. 17/388,146 dated Jun. 5, 2024, 34 pages.
Chinese Notice of Allowance from Chinese Patent Application No. 202080009387.0 dated May 16, 2024, 6 pages.
Chinese Notice of Allowance from Chinese Patent Application No. 202210307588.x dated Aug. 1, 2024, 6 pages.
Non-Final Office Action from U.S. Appl. No. 17/380,225 dated Jul. 10, 2024, 21 pages.
Non-Final Office Action from U.S. Appl. No. 17/388,146 dated Jun. 5, 2024, 34 pages.
Notice of Allowance from U.S. Appl. No. 18/156,666 dated Jun. 13, 2024, 22 pages.
Related Publications (1)
Number Date Country
20230064498 A1 Mar 2023 US
Continuations (2)
Number Date Country
Parent 17019675 Sep 2020 US
Child 17975323 US
Parent PCT/IB2019/055595 Jul 2019 WO
Child 17019675 US