This patent document relates to video coding and decoding techniques, devices and systems.
In spite of the advances in video compression, digital video still accounts for the largest bandwidth use on the internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video increases, it is expected that the bandwidth demand for digital video usage will continue to grow.
This document discloses methods, systems, and devices for encoding and decoding digital video.
In one example aspect, a method of video decoding is provided to include maintaining tables, wherein each table includes a set of motion candidates and each motion candidate is associated with corresponding motion information; and performing a conversion between a first video block and a bitstream representation of a video including the first video block, the performing of the conversion including using at least some of the set of motion candidates as a predictor to process motion information of the first video block.
In yet another representative aspect, the various techniques described herein may be embodied as a computer program product stored on a non-transitory computer readable medium. The computer program product includes program code for carrying out the methods described herein.
The details of one or more implementations are set forth in the accompanying attachments, the drawings, and the description below. Other features will be apparent from the description and drawings, and from the claims.
To improve compression ratio of video, researchers are continually looking for new techniques by which to encode video.
The present document is related to video coding technologies. Specifically, it is related to motion information coding (such as merge mode and Advanced Motion Vector Prediction (AMVP) mode) in video coding. It may be applied to an existing video coding standard such as HEVC, to the Versatile Video Coding (VVC) standard being finalized, or to future video coding standards and codecs.
Brief Discussion
Video coding standards have evolved primarily through the development of the well-known International Telecommunication Union-Telecommunication Standardization Sector (ITU-T) and International Organization for Standardization (ISO)/International Electrotechnical Commission (IEC) standards. The ITU-T produced H.261 and H.263, ISO/IEC produced Moving Picture Experts Group (MPEG)-1 and MPEG-4 Visual, and the two organizations jointly produced the H.262/MPEG-2 Video and H.264/MPEG-4 Advanced Video Coding (AVC) and H.265/High Efficiency Video Coding (HEVC) standards. Since H.262, video coding standards have been based on the hybrid video coding structure wherein temporal prediction plus transform coding are utilized. An example of a typical HEVC encoder framework is depicted in
2.1 Partition Structure
2.1.1 Partition Tree Structure in H.264/AVC
The core of the coding layer in previous standards was the macroblock, containing a 16×16 block of luma samples and, in the usual case of 4:2:0 color sampling, two corresponding 8×8 blocks of chroma samples.
An intra-coded block uses spatial prediction to exploit spatial correlation among pixels. Two partitions are defined: 16×16 and 4×4.
An inter-coded block uses temporal prediction, instead of spatial prediction, by estimating motion among pictures. Motion can be estimated independently for either the 16×16 macroblock or any of its sub-macroblock partitions: 16×8, 8×16, 8×8, 8×4, 4×8, 4×4 (see
2.1.2 Partition Tree Structure in HEVC
In HEVC, a CTU is split into CUs by using a quadtree structure denoted as coding tree to adapt to various local characteristics. The decision whether to code a picture area using inter-picture (temporal) or intra-picture (spatial) prediction is made at the CU level. Each CU can be further split into one, two or four PUs according to the PU splitting type. Inside one PU, the same prediction process is applied and the relevant information is transmitted to the decoder on a PU basis. After obtaining the residual block by applying the prediction process based on the PU splitting type, a CU can be partitioned into transform units (TUs) according to another quadtree structure similar to the coding tree for the CU. One key feature of the HEVC structure is that it has multiple partition concepts including CU, PU, and TU.
In the following, the various features involved in hybrid video coding using HEVC are highlighted as follows.
1) Coding tree units and coding tree block (CTB) structure: The analogous structure in HEVC is the coding tree unit (CTU), which has a size selected by the encoder and can be larger than a traditional macroblock. The CTU consists of a luma CTB and the corresponding chroma CTBs and syntax elements. The size L×L of a luma CTB can be chosen as L=16, 32, or 64 samples, with the larger sizes typically enabling better compression. HEVC then supports a partitioning of the CTBs into smaller blocks using a tree structure and quadtree-like signaling.
2) Coding units (CUs) and coding blocks (CBs): The quadtree syntax of the CTU specifies the size and positions of its luma and chroma CBs. The root of the quadtree is associated with the CTU. Hence, the size of the luma CTB is the largest supported size for a luma CB. The splitting of a CTU into luma and chroma CBs is signaled jointly. One luma CB and ordinarily two chroma CBs, together with associated syntax, form a coding unit (CU). A CTB may contain only one CU or may be split to form multiple CUs, and each CU has an associated partitioning into prediction units (PUs) and a tree of transform units (TUs).
3) Prediction units and prediction blocks (PBs): The decision whether to code a picture area using inter picture or intra picture prediction is made at the CU level. A PU partitioning structure has its root at the CU level. Depending on the basic prediction-type decision, the luma and chroma CBs can then be further split in size and predicted from luma and chroma prediction blocks (PBs). HEVC supports variable PB sizes from 64×64 down to 4×4 samples.
4) TUs and transform blocks: The prediction residual is coded using block transforms. A TU tree structure has its root at the CU level. The luma CB residual may be identical to the luma transform block (TB) or may be further split into smaller luma TBs. The same applies to the chroma TBs. Integer basis functions similar to those of a discrete cosine transform (DCT) are defined for the square TB sizes 4×4, 8×8, 16×16, and 32×32. For the 4×4 transform of luma intra picture prediction residuals, an integer transform derived from a form of discrete sine transform (DST) is alternatively specified.
2.1.2.1 Tree-Structured Partitioning into Transform Blocks and Units
For residual coding, a CB can be recursively partitioned into transform blocks (TBs). The partitioning is signaled by a residual quadtree. Only square CB and TB partitioning is specified, where a block can be recursively split into quadrants, as illustrated in
In contrast to previous standards, the HEVC design allows a TB to span across multiple PBs for inter-picture predicted CUs to maximize the potential coding efficiency benefits of the quadtree-structured TB partitioning.
2.1.2.2 Parent and Child Nodes
A CTB is divided according to a quad-tree structure, the nodes of which are coding units. The plurality of nodes in a quad-tree structure includes leaf nodes and non-leaf nodes. The leaf nodes have no child nodes in the tree structure (i.e., the leaf nodes are not further split). The non-leaf nodes include a root node of the tree structure. The root node corresponds to an initial video block of the video data (e.g., a CTB). For each respective non-root node of the plurality of nodes, the respective non-root node corresponds to a video block that is a sub-block of a video block corresponding to a parent node in the tree structure of the respective non-root node. Each respective non-leaf node of the plurality of non-leaf nodes has one or more child nodes in the tree structure.
2.1.3 Quadtree Plus Binary Tree Block Structure with Larger CTUs in JEM
To explore the future video coding technologies beyond HEVC, the Joint Video Exploration Team (JVET) was founded by the Video Coding Experts Group (VCEG) and MPEG jointly in 2015. Since then, many new methods have been adopted by JVET and put into the reference software named Joint Exploration Model (JEM).
2.1.3.1 QTBT Block Partitioning Structure
Different from HEVC, the QTBT structure removes the concepts of multiple partition types, i.e. it removes the separation of the CU, PU and TU concepts, and supports more flexibility for CU partition shapes. In the QTBT block structure, a CU can have either a square or rectangular shape. As shown in
The following parameters are defined for the QTBT partitioning scheme.
In one example of the QTBT partitioning structure, the CTU size is set as 128×128 luma samples with two corresponding 64×64 blocks of chroma samples, the MinQTSize is set as 16×16, the MaxBTSize is set as 64×64, the MinBTSize (for both width and height) is set as 4×4, and the MaxBTDepth is set as 4. The quadtree partitioning is applied to the CTU first to generate quadtree leaf nodes. The quadtree leaf nodes may have a size from 16×16 (i.e., the MinQTSize) to 128×128 (i.e., the CTU size). If the leaf quadtree node is 128×128, it will not be further split by the binary tree since the size exceeds the MaxBTSize (i.e., 64×64). Otherwise, the leaf quadtree node could be further partitioned by the binary tree. Therefore, the quadtree leaf node is also the root node for the binary tree and it has the binary tree depth as 0. When the binary tree depth reaches MaxBTDepth (i.e., 4), no further splitting is considered. When the binary tree node has width equal to MinBTSize (i.e., 4), no further horizontal splitting is considered. Similarly, when the binary tree node has height equal to MinBTSize, no further vertical splitting is considered. The leaf nodes of the binary tree are further processed by prediction and transform processing without any further partitioning. In the JEM, the maximum CTU size is 256×256 luma samples.
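The interplay of these parameters can be made concrete with a short sketch. The following Python fragment is a minimal, hypothetical rendering of the split-legality rules described above; the function and variable names are assumptions for exposition, not code from the JEM reference software.

```python
# Illustrative sketch (not from any reference software) of the QTBT
# split-legality rules under the example parameters above.
MIN_QT_SIZE = 16
MAX_BT_SIZE = 64
MIN_BT_SIZE = 4
MAX_BT_DEPTH = 4

def allowed_splits(width, height, bt_depth, in_quadtree):
    """Return the set of splits permitted for a node (names assumed)."""
    splits = set()
    # Quadtree splitting applies to square quadtree nodes above MinQTSize.
    if in_quadtree and width == height and width > MIN_QT_SIZE:
        splits.add("QT")
    # Binary splitting requires size <= MaxBTSize and depth < MaxBTDepth.
    if max(width, height) <= MAX_BT_SIZE and bt_depth < MAX_BT_DEPTH:
        if width > MIN_BT_SIZE:   # horizontal splitting halves the width
            splits.add("BT_HOR")
        if height > MIN_BT_SIZE:  # vertical splitting halves the height
            splits.add("BT_VER")
    return splits

# A 128x128 quadtree leaf exceeds MaxBTSize, so only "QT" is allowed.
print(allowed_splits(128, 128, bt_depth=0, in_quadtree=True))
```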
In addition, the QTBT scheme supports the ability for the luma and chroma to have a separate QTBT structure. Currently, for P and B slices, the luma and chroma CTBs in one CTU share the same QTBT structure. However, for I slices, the luma CTB is partitioned into CUs by a QTBT structure, and the chroma CTBs are partitioned into chroma CUs by another QTBT structure. This means that a CU in an I slice consists of a coding block of the luma component or coding blocks of two chroma components, and a CU in a P or B slice consists of coding blocks of all three colour components.
In HEVC, inter prediction for small blocks is restricted to reduce the memory access of motion compensation, such that bi-prediction is not supported for 4×8 and 8×4 blocks, and inter prediction is not supported for 4×4 blocks. In the QTBT of the JEM, these restrictions are removed.
2.1.4 Ternary-Tree for Versatile Video Coding (VVC)
In some embodiments, tree types other than quad-tree and binary-tree are supported. In the implementation, two more ternary tree (TT) partitions, i.e., horizontal and vertical center-side ternary-trees are introduced, as shown in
In some implementations, there are two levels of trees: region tree (quad-tree) and prediction tree (binary-tree or ternary-tree). A CTU is first partitioned by a region tree (RT). An RT leaf may be further split with a prediction tree (PT). A PT leaf may also be further split with PT until the maximum PT depth is reached. A PT leaf is the basic coding unit. It is still called CU for convenience. A CU cannot be further split. Prediction and transform are both applied on the CU in the same way as in JEM. The whole partition structure is named ‘multiple-type-tree’.
2.1.5 Partitioning Structure
The tree structure described here, called Multi-Tree Type (MTT), is a generalization of the QTBT. In QTBT, as shown in
The fundamental structure of MTT consists of two types of tree nodes: Region Tree (RT) and Prediction Tree (PT), supporting nine types of partitions, as shown in
A region tree can recursively split a CTU into square blocks down to a 4×4 size region tree leaf node. At each node in a region tree, a prediction tree can be formed from one of three tree types: Binary Tree (BT), Ternary Tree (TT), and Asymmetric Binary Tree (ABT). In a PT split, it is prohibited to have a quadtree partition in branches of the prediction tree. As in JEM, the luma tree and the chroma tree are separated in I slices. The signaling methods for RT and PT are illustrated in
2.2 Inter Prediction in HEVC/H.265
Each inter-predicted PU has motion parameters for one or two reference picture lists. Motion parameters include a motion vector and a reference picture index. Usage of one of the two reference picture lists may also be signalled using inter_pred_idc. Motion vectors may be explicitly coded as deltas relative to predictors; such a coding mode is called AMVP mode.
When a CU is coded with skip mode, one PU is associated with the CU, and there are no significant residual coefficients and no coded motion vector delta or reference picture index. A merge mode is specified whereby the motion parameters for the current PU are obtained from neighbouring PUs, including spatial and temporal candidates. The merge mode can be applied to any inter-predicted PU, not only for skip mode. The alternative to merge mode is the explicit transmission of motion parameters, where motion vector, corresponding reference picture index for each reference picture list and reference picture list usage are signalled explicitly per each PU.
When signalling indicates that one of the two reference picture lists is to be used, the PU is produced from one block of samples. This is referred to as ‘uni-prediction’. Uni-prediction is available both for P-slices and B-slices.
When signalling indicates that both of the reference picture lists are to be used, the PU is produced from two blocks of samples. This is referred to as ‘bi-prediction’. Bi-prediction is available for B-slices only.
The following text provides the details on the inter prediction modes specified in HEVC. The description will start with the merge mode.
2.2.1 Merge Mode
2.2.1.1 Derivation of Candidates for Merge Mode
When a PU is predicted using merge mode, an index pointing to an entry in the merge candidates list is parsed from the bitstream and used to retrieve the motion information. The construction of this list is specified in the HEVC standard and can be summarized according to the following sequence of steps:
Step 1: Initial candidates derivation
Step 1.1: Spatial candidates derivation
Step 1.2: Redundancy check for spatial candidates
Step 1.3: Temporal candidates derivation
Step 2: Additional candidates insertion
Step 2.1: Creation of bi-predictive candidates
Step 2.2: Insertion of zero motion candidates
These steps are also schematically depicted in
In the following, the operations associated with the aforementioned steps are detailed.
2.2.1.2 Spatial Candidate Derivation
In the derivation of spatial merge candidates, a maximum of four merge candidates are selected among candidates located in the positions depicted in
2.2.1.3 Temporal Candidate Derivation
In this step, only one candidate is added to the list. Particularly, in the derivation of this temporal merge candidate, a scaled motion vector is derived based on the co-located PU belonging to the picture which has the smallest picture order count (POC) difference with the current picture within the given reference picture list. The reference picture list to be used for derivation of the co-located PU is explicitly signaled in the slice header. The scaled motion vector for the temporal merge candidate is obtained as illustrated by the dashed line in
In the co-located PU (Y) belonging to the reference frame, the position for the temporal candidate is selected between candidates C0 and C1, as depicted in
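The POC-distance scaling that produces the temporal candidate can be sketched as follows. This is a simplified floating-point illustration with assumed names; the HEVC specification performs the equivalent proportional scaling in clipped integer arithmetic.

```python
def scale_temporal_mv(mv, cur_poc, cur_ref_poc, col_poc, col_ref_poc):
    """Scale the co-located PU's MV by the ratio of POC distances
    (tb/td); simplified sketch, not the spec's integer arithmetic."""
    tb = cur_poc - cur_ref_poc   # current picture -> its reference
    td = col_poc - col_ref_poc   # co-located picture -> its reference
    if td == 0:
        return mv
    scale = tb / td
    return (round(mv[0] * scale), round(mv[1] * scale))

# The co-located MV is stretched when the current reference is
# temporally farther away than the co-located reference.
print(scale_temporal_mv((8, -4), 8, 0, 4, 2))  # -> (32, -16)
```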
2.2.1.4 Additional Candidate Insertion
Besides spatio-temporal merge candidates, there are two additional types of merge candidates: combined bi-predictive merge candidates and zero merge candidates. Combined bi-predictive merge candidates are generated by utilizing spatio-temporal merge candidates. The combined bi-predictive merge candidate is used for B-slices only. The combined bi-predictive candidates are generated by combining the first reference picture list motion parameters of an initial candidate with the second reference picture list motion parameters of another. If these two tuples provide different motion hypotheses, they will form a new bi-predictive candidate. As an example,
Zero motion candidates are inserted to fill the remaining entries in the merge candidates list, thereby reaching the MaxNumMergeCand capacity. These candidates have zero spatial displacement and a reference picture index which starts from zero and increases every time a new zero motion candidate is added to the list. The number of reference frames used by these candidates is one and two for uni- and bi-directional prediction, respectively. Finally, no redundancy check is performed on these candidates.
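The two additional candidate types can be sketched as below. This is a simplified, hypothetical illustration: candidates are plain dictionaries with optional 'l0'/'l1' = (mv, ref_idx) entries, and the exhaustive pairing loop stands in for HEVC's predefined pairing order.

```python
def add_combined_and_zero(cands, max_num, num_ref_l0, num_ref_l1, is_b_slice):
    """Sketch of the two additional merge candidate types described above."""
    # Combined bi-predictive candidates (B slices only): list-0 motion of
    # one original candidate paired with list-1 motion of another.
    if is_b_slice:
        for a in list(cands):
            for b in list(cands):
                if len(cands) >= max_num:
                    return cands
                if a is b or "l0" not in a or "l1" not in b:
                    continue
                if a["l0"] != b["l1"]:          # different motion hypotheses
                    cands.append({"l0": a["l0"], "l1": b["l1"]})
    # Zero motion candidates: zero MV, reference index counting up; no
    # redundancy check is performed on these, per the text.
    max_ref = min(num_ref_l0, num_ref_l1) if is_b_slice else num_ref_l0
    ref_idx = 0
    while len(cands) < max_num:
        idx = min(ref_idx, max_ref - 1)
        zero = {"l0": ((0, 0), idx)}
        if is_b_slice:
            zero["l1"] = ((0, 0), idx)
        cands.append(zero)
        ref_idx += 1
    return cands
```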
2.2.1.5 Motion Estimation Regions for Parallel Processing
To speed up the encoding process, motion estimation can be performed in parallel whereby the motion vectors for all prediction units inside a given region are derived simultaneously. The derivation of merge candidates from spatial neighbourhood may interfere with parallel processing as one prediction unit cannot derive the motion parameters from an adjacent PU until its associated motion estimation is completed. To mitigate the trade-off between coding efficiency and processing latency, HEVC defines the motion estimation region (MER) whose size is signalled in the picture parameter set using the “log2_parallel_merge_level_minus2” syntax element. When a MER is defined, merge candidates falling in the same region are marked as unavailable and therefore not considered in the list construction.
7.3.2.3 Picture Parameter Set Raw Byte Sequence Payload (RBSP) Syntax
7.3.2.3.1 General Picture Parameter Set RBSP Syntax
2.2.2 AMVP
Motion vector prediction exploits the spatial-temporal correlation of motion vectors with neighboring PUs, which is used for explicit transmission of motion parameters. It constructs a motion vector candidate list by firstly checking availability of left, above, and temporally neighboring PU positions, removing redundant candidates and adding zero vectors to make the candidate list a constant length. Then, the encoder can select the best predictor from the candidate list and transmit the corresponding index indicating the chosen candidate. Similar to merge index signaling, the index of the best motion vector candidate is encoded using truncated unary. The maximum value to be encoded in this case is 2 (e.g.,
2.2.2.1 Derivation of Motion Vector Prediction Candidates
In motion vector prediction, two types of motion vector candidates are considered: spatial motion vector candidate and temporal motion vector candidate. For spatial motion vector candidate derivation, two motion vector candidates are eventually derived based on motion vectors of each PU located in five different positions as depicted in
For temporal motion vector candidate derivation, one motion vector candidate is selected from two candidates, which are derived based on two different co-located positions. After the first list of spatio-temporal candidates is made, duplicated motion vector candidates in the list are removed. If the number of potential candidates is larger than two, motion vector candidates whose reference picture index within the associated reference picture list is larger than 1 are removed from the list. If the number of spatio-temporal motion vector candidates is smaller than two, additional zero motion vector candidates are added to the list.
2.2.2.2 Spatial Motion Vector Candidates
In the derivation of spatial motion vector candidates, a maximum of two candidates are considered among five potential candidates, which are derived from PUs located in positions as depicted in
The no-spatial-scaling cases are checked first, followed by the cases that require spatial scaling. Spatial scaling is considered when the POC differs between the reference picture of the neighbouring PU and that of the current PU, regardless of reference picture list. If all PUs of the left candidates are not available or are intra coded, scaling for the above motion vector is allowed to help parallel derivation of left and above MV candidates. Otherwise, spatial scaling is not allowed for the above motion vector.
In a spatial scaling process, the motion vector of the neighbouring PU is scaled in a similar manner as for temporal scaling, as depicted as
2.2.2.3 Temporal Motion Vector Candidates
Apart from the reference picture index derivation, all processes for the derivation of temporal merge candidates are the same as for the derivation of spatial motion vector candidates (see, e.g.,
2.2.2.4 Signaling of AMVP Information
For the AMVP mode, four parts may be signalled in the bitstream, i.e., prediction direction, reference index, MVD, and MV predictor candidate index.
Syntax Tables:
7.3.8.9 Motion Vector Difference Syntax
2.3 New Inter Prediction Methods in JEM (Joint Exploration Model)
2.3.1 Sub-CU Based Motion Vector Prediction
In the JEM with QTBT, each CU can have at most one set of motion parameters for each prediction direction. Two sub-CU level motion vector prediction methods are considered in the encoder by splitting a large CU into sub-CUs and deriving motion information for all the sub-CUs of the large CU. The alternative temporal motion vector prediction (ATMVP) method allows each CU to fetch multiple sets of motion information from multiple blocks smaller than the current CU in the collocated reference picture. In the spatial-temporal motion vector prediction (STMVP) method, the motion vectors of the sub-CUs are derived recursively by using the temporal motion vector predictor and spatial neighbouring motion vectors.
To preserve a more accurate motion field for sub-CU motion prediction, the motion compression for the reference frames is currently disabled.
2.3.1.1 Alternative Temporal Motion Vector Prediction
In the alternative temporal motion vector prediction (ATMVP) method, the temporal motion vector prediction (TMVP) is modified by fetching multiple sets of motion information (including motion vectors and reference indices) from blocks smaller than the current CU. As shown in
ATMVP predicts the motion vectors of the sub-CUs within a CU in two steps. The first step is to identify the corresponding block in a reference picture with a so-called temporal vector. The reference picture is called the motion source picture. The second step is to split the current CU into sub-CUs and obtain the motion vectors as well as the reference indices of each sub-CU from the block corresponding to each sub-CU, as shown in
In the first step, a reference picture and the corresponding block is determined by the motion information of the spatial neighbouring blocks of the current CU. To avoid the repetitive scanning process of neighbouring blocks, the first merge candidate in the merge candidate list of the current CU is used. The first available motion vector as well as its associated reference index are set to be the temporal vector and the index to the motion source picture. This way, in ATMVP, the corresponding block may be more accurately identified, compared with TMVP, wherein the corresponding block (sometimes called collocated block) is always in a bottom-right or center position relative to the current CU. In one example, if the first merge candidate is from the left neighboring block (i.e., A1 in
In the second step, a corresponding block of the sub-CU is identified by the temporal vector in the motion source picture, by adding to the coordinate of the current CU the temporal vector. For each sub-CU, the motion information of its corresponding block (the smallest motion grid that covers the center sample) is used to derive the motion information for the sub-CU. After the motion information of a corresponding N×N block is identified, it is converted to the motion vectors and reference indices of the current sub-CU, in the same way as TMVP of HEVC, wherein motion scaling and other procedures apply. For example, the decoder checks whether the low-delay condition (i.e. the POCs of all reference pictures of the current picture are smaller than the POC of the current picture) is fulfilled and possibly uses motion vector MVx (the motion vector corresponding to reference picture list X) to predict motion vector MVy (with X being equal to 0 or 1 and Y being equal to 1−X) for each sub-CU.
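A compact sketch of this two-step flow follows. The helper objects, notably the motion_at accessor of the motion source picture, are assumptions for exposition; the TMVP-style scaling and list derivation are indicated by a comment rather than implemented.

```python
def atmvp_sub_cu_motion(cu, merge_list, motion_source, sub=4):
    """Illustrative two-step ATMVP flow (hypothetical helpers, not JEM code).
    cu: dict with 'x', 'y', 'width', 'height'.
    merge_list: merge candidates; the first supplies the temporal vector.
    motion_source: object with a motion_at(x, y) accessor (assumed)."""
    if not merge_list:
        return None
    tvx, tvy = merge_list[0]["mv"]        # step 1: temporal vector
    sub_motion = {}
    for y in range(0, cu["height"], sub): # step 2: per-sub-CU fetch
        for x in range(0, cu["width"], sub):
            # Center sample of this sub-CU, displaced by the temporal vector.
            cx = cu["x"] + x + sub // 2 + tvx
            cy = cu["y"] + y + sub // 2 + tvy
            # Motion of the smallest motion grid covering (cx, cy); TMVP-style
            # scaling to the sub-CU's reference would follow here.
            sub_motion[(x, y)] = motion_source.motion_at(cx, cy)
    return sub_motion
```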
2.3.1.2 Spatial-Temporal Motion Vector Prediction
In this method, the motion vectors of the sub-CUs are derived recursively, following raster scan order.
The motion derivation for sub-CU A starts by identifying its two spatial neighbours. The first neighbour is the N×N block above sub-CU A (block c). If this block c is not available or is intra coded, the other N×N blocks above sub-CU A are checked (from left to right, starting at block c). The second neighbour is a block to the left of sub-CU A (block b). If block b is not available or is intra coded, other blocks to the left of sub-CU A are checked (from top to bottom, starting at block b). The motion information obtained from the neighbouring blocks for each list is scaled to the first reference frame for a given list. Next, the temporal motion vector predictor (TMVP) of sub-block A is derived by following the same procedure of TMVP derivation as specified in HEVC. The motion information of the collocated block at location D is fetched and scaled accordingly. Finally, after retrieving and scaling the motion information, all available motion vectors (up to 3) are averaged separately for each reference list. The averaged motion vector is assigned as the motion vector of the current sub-CU.
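The final averaging step can be shown in a few lines. This is a hedged sketch that assumes the three predictors have already been derived and scaled as described above; None marks an unavailable predictor.

```python
def stmvp_sub_cu(above_mv, left_mv, tmvp_mv):
    """Average up to three already-scaled predictors for one sub-CU,
    per the STMVP description above (None = unavailable)."""
    available = [mv for mv in (above_mv, left_mv, tmvp_mv) if mv is not None]
    if not available:
        return None
    n = len(available)
    return (sum(mv[0] for mv in available) / n,
            sum(mv[1] for mv in available) / n)

# Two of three predictors available: their mean becomes the sub-CU MV.
print(stmvp_sub_cu((4, 0), None, (2, 2)))  # -> (3.0, 1.0)
```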
2.3.1.3 Sub-CU Motion Prediction Mode Signalling
The sub-CU modes are enabled as additional merge candidates and there is no additional syntax element required to signal the modes. Two additional merge candidates are added to the merge candidate list of each CU to represent the ATMVP mode and the STMVP mode. Up to seven merge candidates are used if the sequence parameter set indicates that ATMVP and STMVP are enabled. The encoding logic of the additional merge candidates is the same as for the merge candidates in the HM, which means that, for each CU in a P or B slice, two more rate distortion (RD) checks are needed for the two additional merge candidates.
In the JEM, all bins of the merge index are context coded by context adaptive binary arithmetic coding (CABAC). In HEVC, only the first bin is context coded and the remaining bins are bypass coded.
2.3.2 Adaptive Motion Vector Difference Resolution
In HEVC, motion vector differences (MVDs) (between the motion vector and predicted motion vector of a PU) are signalled in units of quarter luma samples when use_integer_mv_flag is equal to 0 in the slice header. In the JEM, a locally adaptive motion vector resolution (LAMVR) is introduced. In the JEM, MVD can be coded in units of quarter luma samples, integer luma samples or four luma samples. The MVD resolution is controlled at the coding unit (CU) level, and MVD resolution flags are conditionally signalled for each CU that has at least one non-zero MVD component.
For a CU that has at least one non-zero MVD component, a first flag is signalled to indicate whether quarter luma sample MV precision is used in the CU. When the first flag (equal to 1) indicates that quarter luma sample MV precision is not used, another flag is signalled to indicate whether integer luma sample MV precision or four luma sample MV precision is used.
When the first MVD resolution flag of a CU is zero, or not coded for a CU (meaning all MVDs in the CU are zero), the quarter luma sample MV resolution is used for the CU. When a CU uses integer-luma sample MV precision or four-luma-sample MV precision, the MVPs in the AMVP candidate list for the CU are rounded to the corresponding precision.
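A minimal sketch of the two-flag parsing and the predictor rounding is given below. The read_flag callable and the flag-to-precision mapping of the second flag are assumptions; MV components are in quarter-luma-sample units.

```python
def parse_mvd_resolution(read_flag, has_nonzero_mvd):
    """Decode the CU-level MVD resolution per the two-flag scheme above.
    `read_flag` is an assumed callable returning the next coded flag."""
    if not has_nonzero_mvd:
        return "quarter"          # flags not coded -> quarter-sample MVD
    if read_flag() == 0:          # first flag: quarter precision used?
        return "quarter"
    # Second flag distinguishes integer vs. four-sample precision
    # (the flag-to-precision mapping here is an assumption).
    return "four" if read_flag() else "integer"

def round_mvp(mv, resolution):
    """Round an AMVP predictor to the active precision; MV components are
    in quarter-luma-sample units (4 = one sample, 16 = four samples)."""
    step = {"quarter": 1, "integer": 4, "four": 16}[resolution]
    return tuple((c // step) * step for c in mv)  # truncation for simplicity

print(round_mvp((13, -7), "integer"))  # -> (12, -8)
```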
In the encoder, CU-level RD checks are used to determine which MVD resolution is to be used for a CU. That is, the CU-level RD check is performed three times, once for each MVD resolution. To accelerate the encoder speed, the following encoding schemes are applied in the JEM.
During RD check of a CU with normal quarter luma sample MVD resolution, the motion information of the current CU (integer luma sample accuracy) is stored. The stored motion information (after rounding) is used as the starting point for further small range motion vector refinement during the RD check for the same CU with integer luma sample and 4 luma sample MVD resolution so that the time-consuming motion estimation process is not duplicated three times.
RD check of a CU with 4 luma sample MVD resolution is conditionally invoked. For a CU, when the RD cost of integer luma sample MVD resolution is much larger than that of quarter luma sample MVD resolution, the RD check of 4 luma sample MVD resolution for the CU is skipped.
2.3.3 Pattern Matched Motion Vector Derivation
Pattern matched motion vector derivation (PMMVD) mode is a special merge mode based on Frame-Rate Up Conversion (FRUC) techniques. With this mode, motion information of a block is not signalled but derived at decoder side.
A FRUC flag is signalled for a CU when its merge flag is true. When the FRUC flag is false, a merge index is signaled and the regular merge mode is used. When the FRUC flag is true, an additional FRUC mode flag is signalled to indicate which method (bilateral matching or template matching) is to be used to derive motion information for the block.
At the encoder side, the decision on whether to use FRUC merge mode for a CU is based on RD cost selection, as done for normal merge candidates. That is, the two matching modes (bilateral matching and template matching) are both checked for a CU by using RD cost selection. The one leading to the minimal cost is further compared to other CU modes. If a FRUC matching mode is the most efficient one, the FRUC flag is set to true for the CU and the related matching mode is used.
The motion derivation process in FRUC merge mode has two steps. A CU-level motion search is first performed, followed by sub-CU level motion refinement. At the CU level, an initial motion vector is derived for the whole CU based on bilateral matching or template matching. First, a list of MV candidates is generated and the candidate which leads to the minimum matching cost is selected as the starting point for further CU-level refinement. Then a local search based on bilateral matching or template matching around the starting point is performed, and the MV that results in the minimum matching cost is taken as the MV for the whole CU. Subsequently, the motion information is further refined at the sub-CU level with the derived CU motion vectors as the starting points.
For example, the following derivation process is performed for a W×H CU motion information derivation. At the first stage, the MV for the whole W×H CU is derived. At the second stage, the CU is further split into M×M sub-CUs. The value of M is calculated as in (16), where D is a predefined splitting depth which is set to 3 by default in the JEM. Then the MV for each sub-CU is derived.

M=max{4, min{W/2^D, H/2^D}}  (16)
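Under this relation, the sub-CU size can be computed as in the following sketch (the function name is illustrative):

```python
def fruc_sub_cu_size(width, height, depth=3):
    """Sub-CU size M: split the CU `depth` times, but never below 4x4."""
    return max(4, min(width, height) >> depth)

print(fruc_sub_cu_size(64, 32))  # -> 4 (32 >> 3 = 4)
```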
As shown in the
As shown in
2.3.3.1 CU Level MV Candidate Set
The MV candidate set at CU level consists of: (i) the original AMVP candidates if the current CU is in AMVP mode, (ii) all merge candidates, (iii) several MVs in the interpolated MV field (described in Section 2.3.3.3), and (iv) top and left neighbouring motion vectors.
When using bilateral matching, each valid MV of a merge candidate is used as an input to generate a MV pair with the assumption of bilateral matching. For example, one valid MV of a merge candidate is (MVa, refa) at reference list A. Then the reference picture refb of its paired bilateral MV is found in the other reference list B so that refa and refb are temporally at different sides of the current picture. If such a refb is not available in reference list B, refb is determined as a reference which is different from refa and its temporal distance to the current picture is the minimal one in list B. After refb is determined, MVb is derived by scaling MVa based on the temporal distance between the current picture and refa, refb.
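The MV-pair generation can be sketched as follows. The choice of the temporally nearest opposite-side reference is an assumption where the text leaves the tie-break open, and the scaling is shown in simplified floating point.

```python
def pair_bilateral_mv(mv_a, ref_a_poc, cur_poc, list_b_pocs):
    """Form the paired MV for bilateral matching: pick refb on the other
    temporal side of the current picture (else the temporally nearest
    list-B reference differing from refa), then scale MVa by POC distance.
    Assumes list B holds at least one reference different from refa."""
    other_side = [p for p in list_b_pocs
                  if (p - cur_poc) * (ref_a_poc - cur_poc) < 0]
    if other_side:
        ref_b_poc = min(other_side, key=lambda p: abs(p - cur_poc))
    else:
        candidates = [p for p in list_b_pocs if p != ref_a_poc]
        ref_b_poc = min(candidates, key=lambda p: abs(p - cur_poc))
    scale = (ref_b_poc - cur_poc) / (ref_a_poc - cur_poc)
    mv_b = (round(mv_a[0] * scale), round(mv_a[1] * scale))
    return ref_b_poc, mv_b
```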
Four MVs from the interpolated MV field are also added to the CU level candidate list. More specifically, the interpolated MVs at the position (0, 0), (W/2, 0), (0, H/2) and (W/2, H/2) of the current CU are added.
When FRUC is applied in AMVP mode, the original AMVP candidates are also added to CU level MV candidate set.
At the CU level, up to 15 MVs for AMVP CUs and up to 13 MVs for merge CUs are added to the candidate list.
2.3.3.2 Sub-CU Level MV Candidate Set
The MV candidate set at sub-CU level consists of: (i) an MV determined from a CU-level search, (ii) top, left, top-left and top-right neighbouring MVs, (iii) scaled versions of collocated MVs from reference pictures, (iv) up to 4 ATMVP candidates, and (v) up to 4 STMVP candidates.
The scaled MVs from reference pictures are derived as follows. All the reference pictures in both lists are traversed. The MVs at a collocated position of the sub-CU in a reference picture are scaled to the reference of the starting CU-level MV.
The ATMVP and STMVP candidates are limited to the first four.
At the sub-CU level, up to 17 MVs are added to the candidate list.
2.3.3.3 Generation of Interpolated MV Field
Before coding a frame, an interpolated motion field is generated for the whole picture based on unilateral motion estimation (ME). The motion field may then be used later as CU-level or sub-CU-level MV candidates.
First, the motion field of each reference picture in both reference lists is traversed at 4×4 block level. For each 4×4 block, if the motion associated with the block passing through a 4×4 block in the current picture (as shown in
2.3.3.4 Interpolation and Matching Cost
When a motion vector points to a fractional sample position, motion compensated interpolation is needed. To reduce complexity, bi-linear interpolation instead of regular 8-tap HEVC interpolation is used for both bilateral matching and template matching.
The calculation of matching cost is a bit different at different steps. When selecting the candidate from the candidate set at the CU level, the matching cost is the sum of absolute differences (SAD) of bilateral matching or template matching. After the starting MV is determined, the matching cost C of bilateral matching at the sub-CU level search is calculated as follows:
C=SAD+w·(|MVx−MVxs|+|MVy−MVys|)  (2)
where w is a weighting factor which is empirically set to 4, and MV and MVs indicate the current MV and the starting MV, respectively. SAD is still used as the matching cost of template matching at the sub-CU level search.
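Equation (2) translates directly into code; the small sketch below assumes MV components are given in the same (e.g., quarter-sample) units.

```python
def bilateral_cost(sad, mv, start_mv, w=4):
    """C = SAD + w * (|MVx - MVxs| + |MVy - MVys|), per equation (2)."""
    return sad + w * (abs(mv[0] - start_mv[0]) + abs(mv[1] - start_mv[1]))

print(bilateral_cost(sad=100, mv=(6, -2), start_mv=(4, 0)))  # -> 116
```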
In FRUC mode, the MV is derived by using luma samples only. The derived motion will be used for both luma and chroma for MC inter prediction. After the MV is decided, final MC is performed using an 8-tap interpolation filter for luma and a 4-tap interpolation filter for chroma.
2.3.3.5 MV Refinement
MV refinement is a pattern-based MV search with the criterion of bilateral matching cost or template matching cost. In the JEM, two search patterns are supported: an unrestricted center-biased diamond search (UCBDS) and an adaptive cross search for MV refinement at the CU level and sub-CU level, respectively. For both CU and sub-CU level MV refinement, the MV is directly searched at quarter luma sample MV accuracy, and this is followed by one-eighth luma sample MV refinement. The search range of MV refinement for the CU and sub-CU steps is set equal to 8 luma samples.
2.3.3.6 Selection of Prediction Direction in Template Matching FRUC Merge Mode
In the bilateral matching merge mode, bi-prediction is always applied since the motion information of a CU is derived based on the closest match between two blocks along the motion trajectory of the current CU in two different reference pictures. There is no such limitation for the template matching merge mode. In the template matching merge mode, the encoder can choose among uni-prediction from list0, uni-prediction from list1 or bi-prediction for a CU. The selection is based on a template matching cost as follows:
If costBi <= factor × min(cost0, cost1), bi-prediction is used; otherwise, if cost0 <= cost1, uni-prediction from list0 is used; otherwise, uni-prediction from list1 is used. Here, cost0 is the SAD of list0 template matching, cost1 is the SAD of list1 template matching and costBi is the SAD of bi-prediction template matching. The value of factor is equal to 1.25, which means that the selection process is biased toward bi-prediction. The inter prediction direction selection is only applied to the CU-level template matching process.
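The selection rule amounts to the following few lines, a direct transcription of the rule above with illustrative names:

```python
def select_direction(cost0, cost1, cost_bi, factor=1.25):
    """FRUC template-matching direction decision: the factor of 1.25
    biases the choice toward bi-prediction."""
    if cost_bi <= factor * min(cost0, cost1):
        return "bi-prediction"
    return "uni-prediction list0" if cost0 <= cost1 else "uni-prediction list1"

print(select_direction(cost0=100, cost1=90, cost_bi=110))  # -> bi-prediction
```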
2.3.4 Decoder-Side Motion Vector Refinement
In bi-prediction operation, for the prediction of one block region, two prediction blocks, formed using a motion vector (MV) of list0 and a MV of list1, respectively, are combined to form a single prediction signal. In the decoder-side motion vector refinement (DMVR) method, the two motion vectors of the bi-prediction are further refined by a bilateral template matching process. The bilateral template matching is applied in the decoder to perform a distortion-based search between a bilateral template and the reconstruction samples in the reference pictures in order to obtain a refined MV without transmission of additional motion information.
In DMVR, a bilateral template is generated as the weighted combination (i.e. average) of the two prediction blocks, from the initial MV0 of list0 and MV1 of list1, respectively, as shown in
DMVR is applied for the merge mode of bi-prediction with one MV from a reference picture in the past and another from a reference picture in the future, without the transmission of additional syntax elements. In the JEM, when LIC, affine motion, FRUC, or sub-CU merge candidate is enabled for a CU, DMVR is not applied.
2.3.5 Merge/Skip Mode with Bilateral Matching Refinement
A merge candidate list is first constructed by inserting the motion vectors and reference indices of the spatial neighboring and temporal neighboring blocks into the candidate list with redundancy checking until the number of the available candidates reaches the maximum candidate size of 19. The merge candidate list for the merge/skip mode is constructed by inserting spatial candidates (
It is noted that IC flags are also inherited from merge candidates except for STMVP and affine. Moreover, for the first four spatial candidates, the bi-prediction ones are inserted before the ones with uni-prediction.
In some implementations, blocks which are not connected with the current block may be accessed. If a non-adjacent block is coded with non-intra mode, the associated motion information may be added as an additional merge candidate.
2.3.6 Shared Merge List JVET-M0170
JVET-M0170 proposes to share the same merging candidate list for all leaf coding units (CUs) of one ancestor node in the CU split tree, enabling parallel processing of small skip/merge-coded CUs. The ancestor node is named the merge sharing node. The shared merging candidate list is generated at the merge sharing node, pretending the merge sharing node is a leaf CU.
For the Type-2 definition, the merge sharing node is decided for each CU inside a CTU during the parsing stage of decoding; moreover, the merge sharing node is an ancestor node of leaf CUs which must satisfy the following two criteria:
(1) The merge sharing node size is equal to or larger than the size threshold.
(2) In the merge sharing node, one of the child CU sizes is smaller than the size threshold.
Moreover, it has to be guaranteed that no samples of the merge sharing node are outside the picture boundary. During the parsing stage, if an ancestor node satisfies criteria (1) and (2) but has some samples outside the picture boundary, this ancestor node will not be the merge sharing node, and the process proceeds to find the merge sharing node for its child CUs.
The proposed shared merging candidate list algorithm supports translational merge (including merge mode and triangle merge mode; history-based candidates are also supported) and subblock-based merge mode. For all kinds of merge modes, the behavior of the shared merging candidate list algorithm is basically the same: it generates candidates at the merge sharing node pretending the merge sharing node is a leaf CU. This has two major benefits. The first benefit is to enable parallel processing for merge mode, and the second benefit is to share all computations of all leaf CUs at the merge sharing node. Therefore, it significantly reduces the hardware cost of all merge modes for a hardware codec. With the proposed shared merging candidate list algorithm, the encoder and decoder can easily support parallel encoding for merge mode, and it relieves the cycle budget problem of merge mode.
2.3.7 Tile Groups
JVET-L0686 was adopted in which slices are removed in favor of tile groups and the HEVC syntax element slice_address is substituted with tile_group_address in the tile_group_header (if there is more than one tile in the picture) as address of the first tile in the tile group.
The current HEVC design can exploit the correlation of the current block with its neighbouring blocks (next to the current block) to better code the motion information. However, it is possible that the neighbouring blocks correspond to different objects with different motion trajectories. In this case, prediction from neighbouring blocks is not efficient.
Prediction from motion information of non-adjacent blocks could bring additional coding gain, at the cost of storing all the motion information (typically at the 4×4 level) in a cache, which significantly increases the complexity for hardware implementation.
To overcome the drawbacks of existing implementations, LUT-based motion vector prediction techniques using one or more tables (e.g., look up tables) with at least one motion candidate stored to predict motion information of a block can be implemented in various embodiments to provide video coding with higher coding efficiencies. A look up table is an example of a table which can be used to include motion candidates to predict motion information of a block, and other implementations are also possible. Each LUT can include one or more motion candidates, each associated with corresponding motion information. Motion information of a motion candidate can include part or all of the prediction direction, reference indices/pictures, motion vectors, local illumination compensation (LIC) flags, affine flags, Motion Vector Difference (MVD) precisions, and/or MVD values. Motion information may further include block position information indicating from where the motion information is coming.
The LUT-based motion vector prediction based on the disclosed technology, which may enhance both existing and future video coding standards, is elucidated in the following examples described for various implementations. Because the LUTs allow the encoding/decoding process to be performed based on historical data (e.g., the blocks that have been processed), the LUT-based motion vector prediction can also be referred to as the History-based Motion Vector Prediction (HMVP) method. In the LUT-based motion vector prediction method, one or multiple tables with motion information from previously coded blocks are maintained during the encoding/decoding process. The motion candidates stored in the LUTs are named HMVP candidates. During the encoding/decoding of one block, the associated motion information in the LUTs may be added to the motion candidate lists (e.g., merge/AMVP candidate lists), and after encoding/decoding one block, the LUTs may be updated. The updated LUTs are then used to code the subsequent blocks. Thus, the updating of motion candidates in the LUTs is based on the encoding/decoding order of blocks. The examples below should be considered as examples to explain general concepts. These examples should not be interpreted in a narrow way. Furthermore, these examples can be combined in any manner.
Some embodiments may use one or more look up tables with at least one motion candidate stored to predict motion information of a block. Embodiments may use the term motion candidate to indicate a set of motion information stored in a look up table. For conventional AMVP or merge modes, embodiments may use AMVP or merge candidates for storing the motion information.
The examples below explain general concepts.
Examples of Look-Up Tables
Example A1: Each look up table may contain one or more motion candidates wherein each candidate is associated with its motion information.
Example B1: For coding a block, partial or all of the motion candidates from one look up table may be checked in order. When one motion candidate is checked during coding a block, it may be added to the motion candidate list (e.g., AMVP or merge candidate lists).
Example B2: The selection of look up tables may depend on the position of a block.
Usage of Look Up Tables
Example C1: The total number of motion candidates in a look up table to be checked may be pre-defined.
Example C2: The motion candidate(s) included in a look up table may be directly inherited by a block.
Example C3: The motion candidate(s) included in a look up table may be used as a predictor for coding motion information of a block.
Example C4: The checking order of motion candidates in a look up table is defined as follows (suppose K (K>=1) motion candidates are allowed to be checked):
In some implementations, the motion candidates in a look up table may be utilized to derive other candidates and the derived candidates may be utilized for coding a block.
In some implementations, enabling/disabling the usage of look up tables for motion information coding of a block may be signaled in the SPS, PPS, slice header, tile header, CTU, CTB, CU or PU, or in a region covering multiple CTUs/CTBs/CUs/PUs.
In some implementations, whether to apply prediction from look up tables may further depend on the coded information. When it is inferred not to apply for a block, additional signaling of indications of the prediction is skipped. Alternatively, when it is inferred not to apply for a block, there is no need to access motion candidates of look up tables, and the checking of related motion candidates is omitted.
In some implementations, the motion candidates of a look up table in previously coded frames/slices/tiles may be used to predict motion information of a block in a different frame/slice/tile.
After coding a block with motion information (e.g., intra block copy (IntraBC) mode or inter coded mode), one or multiple look up tables may be updated.
For all above examples and implementations, the look up tables indicate the coded information or information derived from coded information from previously coded blocks in a decoding order.
A history-based MVP (HMVP) method is proposed wherein a HMVP candidate is defined as the motion information of a previously coded block. A table with multiple HMVP candidates is maintained during the encoding/decoding process. The table is emptied when a new slice is encountered. Whenever there is an inter-coded block, the associated motion information is added to the last entry of the table as a new HMVP candidate. The overall coding flow is depicted in
In one example, the table size is set to be L (e.g., L=16 or 6, or 44), which indicates up to L HMVP candidates may be added to the table.
In one embodiment (corresponding to example 11.g.i), if there are more than L HMVP candidates from the previously coded blocks, a First-In-First-Out (FIFO) rule is applied so that the table always contains the latest previously coded L motion candidates.
In another embodiment (corresponding to example 11.g.iii), whenever a new motion candidate is added (such as when the current block is inter-coded and in non-affine mode), a redundancy checking process is first applied to identify whether there are identical or similar motion candidates in the LUTs.
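A minimal sketch of such a table update, combining the FIFO rule with the redundancy check, is shown below; treating candidates as directly comparable values is an assumption for exposition.

```python
def update_hmvp_table(table, new_cand, max_size=6):
    """Update the HMVP table after an inter-coded block: remove an
    identical existing entry (redundancy check), append the new
    candidate, and drop the oldest entry when the table is full (FIFO)."""
    if new_cand in table:          # redundancy check (identical candidates)
        table.remove(new_cand)     # removal keeps the latest occurrence last
    table.append(new_cand)
    if len(table) > max_size:
        table.pop(0)               # first-in, first-out
    return table
```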
Some examples are depicted as follows:
HMVP candidates could be used in the merge candidate list construction process. All HMVP candidates from the last entry to the first entry (or the last K0 HMVP candidates, e.g., K0 equal to 16 or 6) in the table are inserted after the TMVP candidate. Pruning is applied on the HMVP candidates. Once the total number of available merge candidates reaches the signaled maximally allowed merge candidates, the merge candidate list construction process is terminated. Alternatively, once the total number of added motion candidates reaches a given value, the fetching of motion candidates from the LUTs is terminated.
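The merge-list insertion described above can be sketched as follows, with a simple membership test standing in for the actual pruning comparison:

```python
def insert_hmvp_into_merge(merge_list, hmvp_table, max_merge_cands):
    """Insert HMVP candidates after the TMVP candidate, from the last
    table entry to the first, pruning against existing candidates."""
    for cand in reversed(hmvp_table):           # last entry first
        if len(merge_list) >= max_merge_cands:  # list full -> terminate
            break
        if cand not in merge_list:              # pruning
            merge_list.append(cand)
    return merge_list
```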
Similarly, HMVP candidates could also be used in the AMVP candidate list construction process. The motion vectors of the last K1 HMVP candidates in the table are inserted after the TMVP candidate. Only HMVP candidates with the same reference picture as the AMVP target reference picture are used to construct the AMVP candidate list. Pruning is applied on the HMVP candidates. In one example, K1 is set to 4.
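Similarly, a hedged sketch of the AMVP insertion follows; the dictionary fields and the newest-first traversal of the last K1 entries are assumptions for exposition.

```python
def insert_hmvp_into_amvp(amvp_list, hmvp_table, target_ref, k1=4, max_amvp=2):
    """Append MVs of the last K1 HMVP candidates whose reference picture
    equals the AMVP target reference, after the TMVP candidate, with
    pruning against MVs already in the list."""
    for cand in reversed(hmvp_table[-k1:]):     # newest first (assumed order)
        if len(amvp_list) >= max_amvp:
            break
        if cand["ref"] == target_ref and cand["mv"] not in amvp_list:
            amvp_list.append(cand["mv"])
    return amvp_list
```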
With respect to method 2900, in some embodiments, the motion information includes at least one of a prediction direction, a reference picture index, motion vector values, intensity compensation flag, affine flag, motion vector difference precision, and motion vector difference value. Further, the motion information may further include block position information indicating source of the motion information. In some embodiments, the video block may be a CU or a PU and the portion of video may correspond to one or more video slices or one or more video pictures.
In some embodiments, each LUT includes an associated counter, wherein the counter is initialized to a zero value at the beginning of the portion of video and increased for each encoded video region in the portion of the video. The video region comprises one of a coding tree unit, a coding tree block, a coding unit, a coding block or a prediction unit. In some embodiments, the counter indicates, for a corresponding LUT, a number of motion candidates that were removed from the corresponding LUT. In some embodiments, the set of motion candidates may have a same size for all LUTs. In some embodiments, the portion of video corresponds to a slice of video, and wherein the number of LUTs is equal to N*P, wherein N is an integer representing LUTs per decoding thread, and P is an integer representing a number of Largest Coding Unit rows or a number of tiles in the slice of video. Additional details of the method 2900 are described in the examples provided in Section 4 and the examples listed below.
Features and embodiments of the above-described methods/techniques are described below.
1. A video processing method, comprising: maintaining tables, wherein each table includes a set of motion candidates and each motion candidate is associated with corresponding motion information; and performing a conversion between a first video block and a bitstream representation of a video including the first video block, the performing of the conversion including using at least some of the set of motion candidates as a predictor to process motion information of the first video block.
2. The method of clause 1, wherein the tables include motion candidates derived from previously decoded video blocks that are decoded prior to the first video block.
3. The method of clause 1, wherein the performing of the conversion includes performing an Advanced Motion Vector Prediction (AMVP) candidate list derivation process using at least some of the set of motion candidates.
4. The method of clause 3, wherein the AMVP candidate list derivation process includes checking motion candidates from one or more tables.
5. The method of any one of clauses 1 to 4, wherein the performing of the conversion includes checking a motion candidate, and a motion vector associated with the checked motion candidate is used as a motion vector predictor for coding the motion vector of the first video block.
6. The method of clause 4, wherein a motion vector associated with a checked motion candidate is added to the AMVP motion candidate list.
7. The method of clause 1, wherein the performing of the conversion includes checking at least some of the motion candidates based on a rule.
8. The method of clause 7, wherein the rule enables the checking when an AMVP candidate list is not full after checking a temporal motion vector prediction (TMVP) candidate.
9. The method of clause 7, wherein the rule enables the checking when an AMVP candidate list is not full after selecting from spatial neighbors and pruning, before inserting a TMVP candidate.
10. The method of clause 7, wherein the rule enables the checking when i) there is no AMVP candidate from above neighboring blocks without scaling, or ii) when there is no AMVP candidate from left neighboring blocks without scaling.
11. The method of clause 7, wherein the rule enables the checking when a pruning is applied before adding a motion candidate to an AMVP candidate list.
12. The method of clause 1, wherein motion candidates with an identical reference picture to a current reference picture are checked.
13. The method of clause 12, wherein motion candidates with a different reference picture from the current reference picture are further checked.
14. The method of clause 13, wherein the checking of the motion candidates with the identical reference picture is performed prior to the checking of the motion candidates with the different reference picture.
15. The method of clause 1, further comprising an AMVP candidate list construction process including a pruning operation before adding a motion vector from a motion candidate in a table.
16. The method of clause 15, wherein the pruning operation includes comparing a motion candidate to at least a part of available motion candidates in an AMVP candidate list.
17. The method of clause 15, wherein the pruning operation includes a number of operations, the number being a function of a number of spatial or temporal AMVP candidates.
18. The method of clause 17, wherein the number of operations is such that in case that M candidates are available in an AMVP candidate list, the pruning is applied only to K AMVP candidates where K<=M and where K and M are integers.
19. The method of clause 1, wherein the performing of the conversion includes performing a symmetric motion vector difference (SMVD) process using some of the motion vector differences.
20. The method of clause 1, wherein the performing of the conversion includes performing a symmetric motion vector (SMV) process using some of motion vectors.
21. The method of clause 7, wherein the rule enables the checking when an AMVP candidate list is not full after inserting a certain AMVP candidate.
22. The method of clause 1, further comprising enabling checking of motion candidates in the table, wherein the checking is enabled before checking other candidates derived from a spatial or temporal block, and the other candidates include AMVP candidates, SMVD candidates, SMV candidates, or affine inter candidates.
23. The method of clause 1, further comprising enabling checking of motion candidates in the table, wherein the checking is enabled when there is at least one motion candidate in the table.
24. The method of clause 1, wherein, for a motion candidate that is a bi-prediction candidate, a reference picture of a first reference picture list is checked before a reference picture of a second reference picture list is checked, the first reference picture list being a current target reference picture list.
25. The method of clause 1 or 2, wherein, for a motion candidate that is a bi-prediction candidate, a reference picture of a first reference picture list is checked before a reference picture of a second reference picture list is checked, the second reference picture list being a current target reference picture list.
26. The method of clause 1, wherein reference pictures of a first reference picture list are checked before reference pictures of a second reference picture list.
27. The method of clause 1, wherein the performing of the conversion includes generating the bitstream representation from the first video block.
28. The method of clause 1, wherein the performing of the conversion includes generating the first video block from the bitstream representation.
29. The method of any one of clauses 1 to 28, wherein a motion candidate is associated with motion information including at least one of a prediction direction, a reference picture index, motion vector values, an intensity compensation flag, an affine flag, a motion vector difference precision, or motion vector difference value.
30. The method of any of clauses 1 to 29, wherein a motion candidate is associated with intra prediction modes used for intra-coded blocks.
31. The method of any of clauses 1 to 29, wherein a motion candidate is associated with multiple illumination compensation (IC) parameters used for IC-coded blocks.
32. The method of any of clauses 1 to 29, wherein a motion candidate is associated with filter parameters used in the filtering process.
33. The method of any one of clauses 1 to 29, further comprising updating, based on the conversion, one or more tables.
34. The method of any one of clauses 1 to 33, wherein the updating of one or more tables includes updating one or more tables based on the motion information of the first video block after performing the conversion.
35. The method of clause 34, further comprising: performing a conversion between a subsequent video block of the video and the bitstream representation of the video based on the updated tables.
36. An apparatus comprising a processor and a non-transitory memory with instructions thereon, wherein the instructions, upon execution by the processor, cause the processor to implement the method in any one of clauses 1 to 35.
37. A computer program product stored on a non-transitory computer readable media, the computer program product including program code for carrying out the method in any one of clauses 1 to 35.
From the foregoing, it will be appreciated that specific embodiments of the presently disclosed technology have been described herein for purposes of illustration, but that various modifications may be made without deviating from the scope of the invention. Accordingly, the presently disclosed technology is not limited except as by the appended claims.
The disclosed and other embodiments, modules and the functional operations described in this document can be implemented in digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this document and their structural equivalents, or in combinations of one or more of them. The disclosed and other embodiments can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them. A propagated signal is an artificially generated signal, e.g., a machine-generated electrical, optical, or electromagnetic signal, that is generated to encode information for transmission to suitable receiver apparatus.
A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this document can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., a field programmable gate array (FPGA) or an application specific integrated circuit (ASIC).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random-access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and compact disc, read-only memory (CD ROM) and digital versatile disc read-only memory (DVD-ROM) disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.
Only a few implementations and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.
This application is a continuation application of U.S. application Ser. No. 17/019,675 filed on Sep. 14, 2020, which is a continuation application of International Application No. PCT/IB2019/055595, filed on Jul. 1, 2019, which claims priority to and benefits of International Patent Application No. PCT/CN2018/093663, filed on Jun. 29, 2018, International Patent Application No. PCT/CN2018/105193, filed on Sep. 12, 2018, and International Patent Application No. PCT/CN2019/072058, filed on Jan. 16, 2019. All of the aforementioned patent applications are hereby incorporated by reference in their entireties.
Number | Name | Date | Kind |
---|---|---|---|
7023922 | Xu et al. | Apr 2006 | B1 |
7653134 | Xu et al. | Jan 2010 | B2 |
7675976 | Xu et al. | Mar 2010 | B2 |
7680189 | Xu et al. | Mar 2010 | B2 |
7680190 | Xu et al. | Mar 2010 | B2 |
7801220 | Zhang et al. | Sep 2010 | B2 |
8804816 | Li et al. | Aug 2014 | B2 |
9350970 | Kang et al. | May 2016 | B2 |
9445076 | Zhang et al. | Sep 2016 | B2 |
9485503 | Zhang et al. | Nov 2016 | B2 |
9503702 | Chen et al. | Nov 2016 | B2 |
9621888 | Jeon | Apr 2017 | B2 |
9667996 | Chen et al. | May 2017 | B2 |
9699450 | Zhang et al. | Jul 2017 | B2 |
9762882 | Zhang et al. | Sep 2017 | B2 |
9762900 | Park et al. | Sep 2017 | B2 |
9807431 | Hannuksela et al. | Oct 2017 | B2 |
9872016 | Chuang et al. | Jan 2018 | B2 |
9900615 | Li et al. | Feb 2018 | B2 |
9918102 | Kohn et al. | Mar 2018 | B1 |
9967592 | Zhang et al. | May 2018 | B2 |
9998727 | Zhang et al. | Jun 2018 | B2 |
10021414 | Seregin et al. | Jul 2018 | B2 |
10085041 | Zhang et al. | Sep 2018 | B2 |
10116934 | Zan et al. | Oct 2018 | B2 |
10154286 | He et al. | Dec 2018 | B2 |
10158876 | Chen et al. | Dec 2018 | B2 |
10200709 | Chen et al. | Feb 2019 | B2 |
10200711 | Li et al. | Feb 2019 | B2 |
10230980 | Liu et al. | Mar 2019 | B2 |
10271064 | Chien et al. | Apr 2019 | B2 |
10277909 | Ye et al. | Apr 2019 | B2 |
10284869 | Han et al. | May 2019 | B2 |
10306225 | Zhang et al. | May 2019 | B2 |
10349083 | Chen et al. | Jul 2019 | B2 |
10362330 | Li et al. | Jul 2019 | B1 |
10368072 | Zhang et al. | Jul 2019 | B2 |
10390029 | Ye et al. | Aug 2019 | B2 |
10440378 | Xu et al. | Oct 2019 | B1 |
10448010 | Chen et al. | Oct 2019 | B2 |
10462439 | He et al. | Oct 2019 | B2 |
10491902 | Xu et al. | Nov 2019 | B1 |
10491917 | Chen et al. | Nov 2019 | B2 |
10531118 | Li et al. | Jan 2020 | B2 |
10560718 | Lee et al. | Feb 2020 | B2 |
10595035 | Karczewicz et al. | Mar 2020 | B2 |
10681383 | Ye et al. | Jun 2020 | B2 |
10687077 | Zhang et al. | Jun 2020 | B2 |
10694204 | Chen et al. | Jun 2020 | B2 |
10701366 | Chen et al. | Jun 2020 | B2 |
10771811 | Liu et al. | Sep 2020 | B2 |
10778997 | Zhang et al. | Sep 2020 | B2 |
10778999 | Li et al. | Sep 2020 | B2 |
10805650 | Wang et al. | Oct 2020 | B2 |
10812791 | Chien et al. | Oct 2020 | B2 |
10841615 | He et al. | Nov 2020 | B2 |
10873756 | Zhang et al. | Nov 2020 | B2 |
10911769 | Zhang et al. | Feb 2021 | B2 |
11128887 | Lee | Sep 2021 | B2 |
11134243 | Zhang et al. | Sep 2021 | B2 |
11134244 | Zhang et al. | Sep 2021 | B2 |
11134267 | Zhang et al. | Sep 2021 | B2 |
11140383 | Zhang et al. | Oct 2021 | B2 |
11140385 | Zhang | Oct 2021 | B2 |
11146785 | Zhang et al. | Oct 2021 | B2 |
11146786 | Zhang et al. | Oct 2021 | B2 |
11153557 | Zhang et al. | Oct 2021 | B2 |
11153558 | Zhang et al. | Oct 2021 | B2 |
11153559 | Zhang et al. | Oct 2021 | B2 |
11159787 | Zhang et al. | Oct 2021 | B2 |
11159807 | Zhang et al. | Oct 2021 | B2 |
11159817 | Zhang et al. | Oct 2021 | B2 |
11245892 | Zhang et al. | Feb 2022 | B2 |
11412211 | Lee | Aug 2022 | B2 |
11528501 | Zhang | Dec 2022 | B2 |
11997253 | Zhang | May 2024 | B2 |
12034914 | Zhang | Jul 2024 | B2 |
20050105812 | Molino et al. | May 2005 | A1 |
20060233243 | Ridge et al. | Oct 2006 | A1 |
20070025444 | Okada et al. | Feb 2007 | A1 |
20090180538 | Visharam et al. | Jul 2009 | A1 |
20100080296 | Lee et al. | Apr 2010 | A1 |
20110109964 | Kim et al. | May 2011 | A1 |
20110116546 | Guo et al. | May 2011 | A1 |
20110170600 | Ishikawa | Jul 2011 | A1 |
20110194608 | Rusert et al. | Aug 2011 | A1 |
20110194609 | Rusert et al. | Aug 2011 | A1 |
20110200107 | Ryu et al. | Aug 2011 | A1 |
20120082229 | Su et al. | Apr 2012 | A1 |
20120134415 | Lin et al. | May 2012 | A1 |
20120195366 | Liu et al. | Aug 2012 | A1 |
20120195368 | Chien et al. | Aug 2012 | A1 |
20120257678 | Zhou et al. | Oct 2012 | A1 |
20120263231 | Zhou | Oct 2012 | A1 |
20120287999 | Li et al. | Nov 2012 | A1 |
20120300846 | Sugio et al. | Nov 2012 | A1 |
20120307903 | Sugio et al. | Dec 2012 | A1 |
20120320984 | Zhou | Dec 2012 | A1 |
20130064301 | Guo et al. | Mar 2013 | A1 |
20130070855 | Zheng et al. | Mar 2013 | A1 |
20130094580 | Zhou et al. | Apr 2013 | A1 |
20130101041 | Fishwick | Apr 2013 | A1 |
20130114717 | Zheng et al. | May 2013 | A1 |
20130114723 | Bici et al. | May 2013 | A1 |
20130128982 | Kim et al. | May 2013 | A1 |
20130163668 | Chen et al. | Jun 2013 | A1 |
20130188013 | Chen et al. | Jul 2013 | A1 |
20130188715 | Seregin et al. | Jul 2013 | A1 |
20130208799 | Srinivasamurthy et al. | Aug 2013 | A1 |
20130243093 | Chen et al. | Sep 2013 | A1 |
20130265388 | Zhang et al. | Oct 2013 | A1 |
20130272377 | Karczewicz et al. | Oct 2013 | A1 |
20130272410 | Seregin et al. | Oct 2013 | A1 |
20130272412 | Seregin et al. | Oct 2013 | A1 |
20130272413 | Seregin et al. | Oct 2013 | A1 |
20130294513 | Seregin | Nov 2013 | A1 |
20130301734 | Gisquet et al. | Nov 2013 | A1 |
20130336406 | Zhang et al. | Dec 2013 | A1 |
20140049605 | Chen | Feb 2014 | A1 |
20140064372 | Laroche et al. | Mar 2014 | A1 |
20140078251 | Kang et al. | Mar 2014 | A1 |
20140086327 | Ugur et al. | Mar 2014 | A1 |
20140105295 | Shiodera et al. | Apr 2014 | A1 |
20140105302 | Takehara et al. | Apr 2014 | A1 |
20140126629 | Park et al. | May 2014 | A1 |
20140133558 | Seregin et al. | May 2014 | A1 |
20140161186 | Zhang et al. | Jun 2014 | A1 |
20140185685 | Asaka et al. | Jul 2014 | A1 |
20140219356 | Nishitani et al. | Aug 2014 | A1 |
20140241434 | Lin et al. | Aug 2014 | A1 |
20140286427 | Fukushima et al. | Sep 2014 | A1 |
20140286433 | He et al. | Sep 2014 | A1 |
20140321547 | Takehara | Oct 2014 | A1 |
20140334557 | Schierl et al. | Nov 2014 | A1 |
20140341289 | Schwarz et al. | Nov 2014 | A1 |
20140355685 | Chen et al. | Dec 2014 | A1 |
20140376614 | Fukushima et al. | Dec 2014 | A1 |
20140376626 | Lee | Dec 2014 | A1 |
20140376638 | Nakamura et al. | Dec 2014 | A1 |
20150085932 | Lin | Mar 2015 | A1 |
20150110197 | Kim et al. | Apr 2015 | A1 |
20150189313 | Shimada et al. | Jul 2015 | A1 |
20150195558 | Kim | Jul 2015 | A1 |
20150237370 | Zhou | Aug 2015 | A1 |
20150256853 | Li et al. | Sep 2015 | A1 |
20150264386 | Pang | Sep 2015 | A1 |
20150281733 | Fu et al. | Oct 2015 | A1 |
20150312588 | Yamamoto et al. | Oct 2015 | A1 |
20150326880 | He et al. | Nov 2015 | A1 |
20150341635 | Seregin et al. | Nov 2015 | A1 |
20150358635 | Xiu et al. | Dec 2015 | A1 |
20160044332 | Maaninen | Feb 2016 | A1 |
20160050430 | Xiu et al. | Feb 2016 | A1 |
20160219278 | Chen et al. | Jul 2016 | A1 |
20160227214 | Rapaka et al. | Aug 2016 | A1 |
20160234492 | Li et al. | Aug 2016 | A1 |
20160241867 | Sugio et al. | Aug 2016 | A1 |
20160277761 | Li et al. | Sep 2016 | A1 |
20160286230 | Li et al. | Sep 2016 | A1 |
20160286232 | Li et al. | Sep 2016 | A1 |
20160295240 | Kim et al. | Oct 2016 | A1 |
20160301936 | Chen et al. | Oct 2016 | A1 |
20160330471 | Zhu et al. | Nov 2016 | A1 |
20160337661 | Pang et al. | Nov 2016 | A1 |
20160366416 | Liu et al. | Dec 2016 | A1 |
20160366442 | Liu et al. | Dec 2016 | A1 |
20160373784 | Bang | Dec 2016 | A1 |
20160381374 | Bang | Dec 2016 | A1 |
20170006302 | Lee et al. | Jan 2017 | A1 |
20170013269 | Kim et al. | Jan 2017 | A1 |
20170048550 | Hannuksela | Feb 2017 | A1 |
20170054995 | Kim | Feb 2017 | A1 |
20170054996 | Xu et al. | Feb 2017 | A1 |
20170078699 | Park et al. | Mar 2017 | A1 |
20170099495 | Rapaka et al. | Apr 2017 | A1 |
20170127082 | Chen et al. | May 2017 | A1 |
20170127086 | Lai et al. | May 2017 | A1 |
20170150168 | Nakamura et al. | May 2017 | A1 |
20170163999 | Li et al. | Jun 2017 | A1 |
20170188045 | Zhou et al. | Jun 2017 | A1 |
20170214932 | Huang et al. | Jul 2017 | A1 |
20170223352 | Kim et al. | Aug 2017 | A1 |
20170238005 | Chien et al. | Aug 2017 | A1 |
20170238011 | Pettersson et al. | Aug 2017 | A1 |
20170264895 | Takehara et al. | Sep 2017 | A1 |
20170272746 | Sugio et al. | Sep 2017 | A1 |
20170280159 | Xu et al. | Sep 2017 | A1 |
20170289570 | Zhou et al. | Oct 2017 | A1 |
20170332084 | Seregin et al. | Nov 2017 | A1 |
20170332095 | Zou et al. | Nov 2017 | A1 |
20170332099 | Lee et al. | Nov 2017 | A1 |
20170339425 | Jeong et al. | Nov 2017 | A1 |
20180014017 | Li et al. | Jan 2018 | A1 |
20180041769 | Chuang et al. | Feb 2018 | A1 |
20180070100 | Chen et al. | Mar 2018 | A1 |
20180077417 | Huang | Mar 2018 | A1 |
20180084260 | Chien | Mar 2018 | A1 |
20180098063 | Chen et al. | Apr 2018 | A1 |
20180124398 | Park | May 2018 | A1 |
20180184085 | Yang et al. | Jun 2018 | A1 |
20180192069 | Chen et al. | Jul 2018 | A1 |
20180192071 | Chuang et al. | Jul 2018 | A1 |
20180242024 | Chen et al. | Aug 2018 | A1 |
20180262753 | Sugio et al. | Sep 2018 | A1 |
20180270500 | Li | Sep 2018 | A1 |
20180278949 | Karczewicz et al. | Sep 2018 | A1 |
20180310018 | Guo et al. | Oct 2018 | A1 |
20180332284 | Liu et al. | Nov 2018 | A1 |
20180332312 | Liu et al. | Nov 2018 | A1 |
20180343467 | Lin | Nov 2018 | A1 |
20180352223 | Chen et al. | Dec 2018 | A1 |
20180352247 | Park | Dec 2018 | A1 |
20180352256 | Bang | Dec 2018 | A1 |
20180359483 | Chen et al. | Dec 2018 | A1 |
20180376149 | Zhang et al. | Dec 2018 | A1 |
20180376160 | Zhang et al. | Dec 2018 | A1 |
20180376164 | Zhang et al. | Dec 2018 | A1 |
20190098329 | Han et al. | Mar 2019 | A1 |
20190116374 | Zhang et al. | Apr 2019 | A1 |
20190116381 | Lee et al. | Apr 2019 | A1 |
20190141334 | Lim et al. | May 2019 | A1 |
20190158827 | Sim et al. | May 2019 | A1 |
20190158866 | Kim | May 2019 | A1 |
20190200040 | Lim et al. | Jun 2019 | A1 |
20190215529 | Laroche | Jul 2019 | A1 |
20190222848 | Chen et al. | Jul 2019 | A1 |
20190222865 | Zhang et al. | Jul 2019 | A1 |
20190230362 | Chen et al. | Jul 2019 | A1 |
20190230376 | Hu et al. | Jul 2019 | A1 |
20190297325 | Lim et al. | Sep 2019 | A1 |
20190297343 | Seo et al. | Sep 2019 | A1 |
20190320180 | Yu et al. | Oct 2019 | A1 |
20190342557 | Robert | Nov 2019 | A1 |
20190356925 | Ye et al. | Nov 2019 | A1 |
20200014948 | Lai et al. | Jan 2020 | A1 |
20200021839 | Pham Van et al. | Jan 2020 | A1 |
20200021845 | Lin et al. | Jan 2020 | A1 |
20200029088 | Xu et al. | Jan 2020 | A1 |
20200036997 | Li et al. | Jan 2020 | A1 |
20200077106 | Jhu et al. | Mar 2020 | A1 |
20200077116 | Lee et al. | Mar 2020 | A1 |
20200099951 | Hung et al. | Mar 2020 | A1 |
20200112715 | Hung et al. | Apr 2020 | A1 |
20200112741 | Han et al. | Apr 2020 | A1 |
20200120334 | Xu et al. | Apr 2020 | A1 |
20200128238 | Lee et al. | Apr 2020 | A1 |
20200128266 | Xu et al. | Apr 2020 | A1 |
20200145690 | Li et al. | May 2020 | A1 |
20200154124 | Lee | May 2020 | A1 |
20200169726 | Kim et al. | May 2020 | A1 |
20200169745 | Han et al. | May 2020 | A1 |
20200169748 | Chen et al. | May 2020 | A1 |
20200186793 | Racape et al. | Jun 2020 | A1 |
20200186820 | Park et al. | Jun 2020 | A1 |
20200195920 | Racape et al. | Jun 2020 | A1 |
20200195959 | Zhang et al. | Jun 2020 | A1 |
20200195960 | Zhang et al. | Jun 2020 | A1 |
20200204820 | Zhang et al. | Jun 2020 | A1 |
20200396466 | Zhang et al. | Jun 2020 | A1 |
20200221108 | Xu et al. | Jul 2020 | A1 |
20200228815 | Xu | Jul 2020 | A1 |
20200228825 | Lim et al. | Jul 2020 | A1 |
20200236353 | Zhang et al. | Jul 2020 | A1 |
20200244954 | Heo et al. | Jul 2020 | A1 |
20200244979 | Li | Jul 2020 | A1 |
20200267408 | Lee et al. | Aug 2020 | A1 |
20200275124 | Ko et al. | Aug 2020 | A1 |
20200280733 | Li | Sep 2020 | A1 |
20200280735 | Lim et al. | Sep 2020 | A1 |
20200280736 | Wang | Sep 2020 | A1 |
20200288150 | Jun et al. | Sep 2020 | A1 |
20200288157 | Li | Sep 2020 | A1 |
20200288168 | Zhang et al. | Sep 2020 | A1 |
20200296411 | Li | Sep 2020 | A1 |
20200296414 | Park et al. | Sep 2020 | A1 |
20200304805 | Li | Sep 2020 | A1 |
20200322628 | Lee et al. | Oct 2020 | A1 |
20200336726 | Wang et al. | Oct 2020 | A1 |
20200366923 | Zhang et al. | Nov 2020 | A1 |
20200374542 | Zhang et al. | Nov 2020 | A1 |
20200374543 | Liu et al. | Nov 2020 | A1 |
20200374544 | Liu et al. | Nov 2020 | A1 |
20200382770 | Zhang et al. | Dec 2020 | A1 |
20200396446 | Zhang et al. | Dec 2020 | A1 |
20200396447 | Zhang et al. | Dec 2020 | A1 |
20200396462 | Zhang et al. | Dec 2020 | A1 |
20200404253 | Chen | Dec 2020 | A1 |
20200404254 | Zhao et al. | Dec 2020 | A1 |
20200404285 | Zhang et al. | Dec 2020 | A1 |
20200404305 | Ye | Dec 2020 | A1 |
20200404306 | Auyeung | Dec 2020 | A1 |
20200404316 | Zhang et al. | Dec 2020 | A1 |
20200404319 | Zhang et al. | Dec 2020 | A1 |
20200404320 | Zhang et al. | Dec 2020 | A1 |
20200413038 | Zhang et al. | Dec 2020 | A1 |
20200413044 | Zhang et al. | Dec 2020 | A1 |
20200413045 | Zhang et al. | Dec 2020 | A1 |
20210006787 | Zhang et al. | Jan 2021 | A1 |
20210006788 | Zhang et al. | Jan 2021 | A1 |
20210006790 | Zhang et al. | Jan 2021 | A1 |
20210006819 | Zhang et al. | Jan 2021 | A1 |
20210006823 | Zhang et al. | Jan 2021 | A1 |
20210014520 | Zhang et al. | Jan 2021 | A1 |
20210014525 | Zhang et al. | Jan 2021 | A1 |
20210021856 | Zheng | Jan 2021 | A1 |
20210029351 | Zhang et al. | Jan 2021 | A1 |
20210029352 | Zhang et al. | Jan 2021 | A1 |
20210029362 | Liu et al. | Jan 2021 | A1 |
20210029366 | Zhang et al. | Jan 2021 | A1 |
20210029372 | Zhang et al. | Jan 2021 | A1 |
20210029374 | Zhang et al. | Jan 2021 | A1 |
20210051324 | Zhang et al. | Feb 2021 | A1 |
20210051339 | Liu et al. | Feb 2021 | A1 |
20210067783 | Liu et al. | Mar 2021 | A1 |
20210076063 | Liu et al. | Mar 2021 | A1 |
20210092357 | Wang | Mar 2021 | A1 |
20210092379 | Zhang et al. | Mar 2021 | A1 |
20210092436 | Zhang et al. | Mar 2021 | A1 |
20210105482 | Zhang et al. | Apr 2021 | A1 |
20210120234 | Zhang et al. | Apr 2021 | A1 |
20210168368 | Xu | Jun 2021 | A1 |
20210185326 | Wang et al. | Jun 2021 | A1 |
20210203984 | Salehifar et al. | Jul 2021 | A1 |
20210235108 | Zhang et al. | Jul 2021 | A1 |
20210243476 | Ko | Aug 2021 | A1 |
20210258569 | Chen et al. | Aug 2021 | A1 |
20210297659 | Zhang et al. | Sep 2021 | A1 |
20210321089 | Lin | Oct 2021 | A1 |
20210329292 | Jeong | Oct 2021 | A1 |
20210337216 | Zhang et al. | Oct 2021 | A1 |
20210344947 | Zhang et al. | Nov 2021 | A1 |
20210352312 | Zhang et al. | Nov 2021 | A1 |
20210360230 | Zhang et al. | Nov 2021 | A1 |
20210360277 | Jeong | Nov 2021 | A1 |
20210360278 | Zhang et al. | Nov 2021 | A1 |
20210368180 | Park | Nov 2021 | A1 |
20210377518 | Zhang et al. | Dec 2021 | A1 |
20210377545 | Zhang et al. | Dec 2021 | A1 |
20210377558 | Xiu | Dec 2021 | A1 |
20210400298 | Zhao | Dec 2021 | A1 |
20220007047 | Zhang et al. | Jan 2022 | A1 |
20220021900 | Jeong | Jan 2022 | A1 |
20220385887 | Jun | Dec 2022 | A1 |
20220417551 | Lim | Dec 2022 | A1 |
Number | Date | Country |
---|---|---|
2019293670 | Jun 2023 | AU |
112020024142 | Mar 2021 | BR |
3020265 | Nov 2017 | CA |
1898715 | Jan 2007 | CN |
1925614 | Mar 2007 | CN |
101193302 | Jun 2008 | CN |
101933328 | Dec 2010 | CN |
102474619 | May 2012 | CN |
102860006 | Jan 2013 | CN |
102907098 | Jan 2013 | CN |
102946536 | Feb 2013 | CN |
103004204 | Mar 2013 | CN |
103096071 | May 2013 | CN |
103096073 | May 2013 | CN |
103339938 | Oct 2013 | CN |
103370937 | Oct 2013 | CN |
103404143 | Nov 2013 | CN |
103444182 | Dec 2013 | CN |
103518374 | Jan 2014 | CN |
103535039 | Jan 2014 | CN |
103535040 | Jan 2014 | CN |
103609123 | Feb 2014 | CN |
103797799 | May 2014 | CN |
103828364 | May 2014 | CN |
103858428 | Jun 2014 | CN |
103891281 | Jun 2014 | CN |
103931192 | Jul 2014 | CN |
104041042 | Sep 2014 | CN |
104054350 | Sep 2014 | CN |
104079944 | Oct 2014 | CN |
104126302 | Oct 2014 | CN |
104247434 | Dec 2014 | CN |
104272743 | Jan 2015 | CN |
104350749 | Feb 2015 | CN |
104365102 | Feb 2015 | CN |
104396248 | Mar 2015 | CN |
104539950 | Apr 2015 | CN |
104584549 | Apr 2015 | CN |
104662909 | May 2015 | CN |
104756499 | Jul 2015 | CN |
104915966 | Sep 2015 | CN |
105245900 | Jan 2016 | CN |
105324996 | Feb 2016 | CN |
105556971 | May 2016 | CN |
105681807 | Jun 2016 | CN |
105917650 | Aug 2016 | CN |
106464864 | Feb 2017 | CN |
106471806 | Mar 2017 | CN |
106716997 | May 2017 | CN |
106797477 | May 2017 | CN |
106851046 | Jun 2017 | CN |
106851267 | Jun 2017 | CN |
106851269 | Jun 2017 | CN |
107071458 | Aug 2017 | CN |
107079161 | Aug 2017 | CN |
107079162 | Aug 2017 | CN |
107087165 | Aug 2017 | CN |
107113424 | Aug 2017 | CN |
107113442 | Aug 2017 | CN |
107113446 | Aug 2017 | CN |
107197301 | Sep 2017 | CN |
107211156 | Sep 2017 | CN |
107295348 | Oct 2017 | CN |
107347159 | Nov 2017 | CN |
107431820 | Dec 2017 | CN |
107493473 | Dec 2017 | CN |
107592529 | Jan 2018 | CN |
107690809 | Feb 2018 | CN |
107690810 | Feb 2018 | CN |
107710764 | Feb 2018 | CN |
107959853 | Apr 2018 | CN |
108134934 | Jun 2018 | CN |
108200437 | Jun 2018 | CN |
108235009 | Jun 2018 | CN |
108293127 | Jul 2018 | CN |
108353184 | Jul 2018 | CN |
109076218 | Dec 2018 | CN |
109089119 | Dec 2018 | CN |
110169073 | Aug 2019 | CN |
113615193 | Nov 2021 | CN |
2532160 | Dec 2012 | EP |
2668784 | Dec 2013 | EP |
2741499 | Jun 2014 | EP |
2983365 | Feb 2016 | EP |
3791585 | Mar 2021 | EP |
3791588 | Mar 2021 | EP |
3794825 | Mar 2021 | EP |
201111867 | Aug 2011 | GB |
2488815 | Sep 2012 | GB |
2492778 | Jan 2013 | GB |
2588006 | Apr 2021 | GB |
2013110766 | Jun 2013 | JP |
2013537772 | Oct 2013 | JP |
2014501091 | Jan 2014 | JP |
2014509480 | Apr 2014 | JP |
2014197883 | Oct 2014 | JP |
2016059066 | Apr 2016 | JP |
2017123542 | Jul 2017 | JP |
2017028712 | Jan 2019 | JP |
2019515587 | Jun 2019 | JP |
2020523853 | Aug 2020 | JP |
2021052373 | Apr 2021 | JP |
2021510265 | Apr 2021 | JP |
2021513795 | May 2021 | JP |
2022504073 | Jan 2022 | JP |
2022507682 | Jan 2022 | JP |
2022507683 | Jan 2022 | JP |
7502380 | Jun 2024 | JP |
20170058871 | May 2017 | KR |
20170115969 | Oct 2017 | KR |
102680903 | Jul 2024 | KR |
2550554 | May 2015 | RU |
2571572 | Dec 2015 | RU |
2632158 | Oct 2017 | RU |
2669005 | Oct 2018 | RU |
201444349 | Nov 2014 | TW |
201832556 | Sep 2018 | TW |
2011095259 | Aug 2011 | WO |
2011095260 | Aug 2011 | WO |
2012074344 | Jun 2012 | WO |
2012095467 | Jul 2012 | WO |
2012172668 | Dec 2012 | WO |
2013081365 | Jun 2013 | WO |
2013157251 | Oct 2013 | WO |
2014007058 | Jan 2014 | WO |
2014054267 | Apr 2014 | WO |
2015006920 | Jan 2015 | WO |
2015010226 | Jan 2015 | WO |
2015042432 | Mar 2015 | WO |
2015052273 | Apr 2015 | WO |
2015100726 | Jul 2015 | WO |
2015180014 | Dec 2015 | WO |
2016008409 | Jan 2016 | WO |
2016054979 | Apr 2016 | WO |
2016091161 | Jun 2016 | WO |
2017043734 | Mar 2017 | WO |
2017058633 | Apr 2017 | WO |
2017076221 | May 2017 | WO |
2017084512 | May 2017 | WO |
2017147765 | Sep 2017 | WO |
2017197126 | Nov 2017 | WO |
2017222237 | Dec 2017 | WO |
2018012886 | Jan 2018 | WO |
2018026148 | Feb 2018 | WO |
2018045944 | Mar 2018 | WO |
2018048904 | Mar 2018 | WO |
2018058526 | Apr 2018 | WO |
2018061522 | Apr 2018 | WO |
2018065397 | Apr 2018 | WO |
2018070107 | Apr 2018 | WO |
2018127119 | Jul 2018 | WO |
2018231700 | Dec 2018 | WO |
2018237299 | Dec 2018 | WO |
2019223746 | Nov 2019 | WO |
2020003275 | Jan 2020 | WO |
2020003279 | Jan 2020 | WO |
2020003284 | Jan 2020 | WO |
2020113051 | Jun 2020 | WO |
Entry |
---|
US 11,089,321 B2, 08/2021, Zhang (withdrawn) |
Enhanced AMVP Mechanism Based Adaptive Motion Search Range Decision, HEVC; 2014. (Year: 2014). |
Parallel AMVP candidate list construction for HEVC; Yu; 2016. (Year: 2016). |
Enhanced AMVP Mechanism Based Adaptive Motion Search Range, HEVC; 2014. (Year: 2014). |
Description of SDR & HDR video coding technology proposed by Ericsson and Nokia; 2018 (Year: 2018). |
Reducing coding cost of merge index by dynamic merge reallocation; 2012. |
Non-Final Office Action from U.S. Appl. No. 16/998,258 dated Nov. 25, 2020. |
Notice of Allowance from U.S. Appl. No. 17/229,019 dated Oct. 12, 2022. |
Notice of Eligibility of Grant from Singapore Patent Application No. 11202011714R dated Jul. 25, 2022. |
Final Office Action from U.S. Appl. No. 17/480,184 dated May 2, 2022. |
Examination Report from Patent Application GB2020091.1 mailed Mar. 21, 2022. |
Examination Report from Patent Application GB2018263.0 mailed Mar. 30, 2022. |
Examination Report from Patent Application GB2019557.4 mailed Apr. 1, 2022. |
Extended European Search Report European Patent Application No. 20737921.5 dated Feb. 22, 2022 (9 pages). |
Notice of Allowance from U.S. Appl. No. 17/019,753 dated Dec. 1, 2021. |
Non-Final Office Action from U.S. Appl. No. 17/480,184 dated Dec. 29, 2021. |
Non-Final Office Action from U.S. Appl. No. 17/796,708 dated Aug. 11, 2021. |
Non-Final Office Action from U.S. Appl. No. 17/019,753 dated Jul. 22, 2021. |
Notice of Allowance from U.S. Appl. No. 16/998,296 dated Mar. 23, 2021. |
Notice of Allowance from U.S. Appl. No. 16/998,258 dated Mar. 24, 2021. |
Non-Final Office Action from U.S. Appl. No. 17/011,058 dated Apr. 13, 2021. |
Final Office Action from U.S. Appl. No. 17/071,139 dated Apr. 16, 2021. |
Non-Final Office Action from U.S. Appl. No. 17/229,019 dated Jun. 25, 2021. |
Notice of Allowance from U.S. Appl. No. 17/011,068 dated Mar. 1, 2021. |
Notice of Allowance from U.S. Appl. No. 17/018,200 dated Mar. 1, 2021. |
Final Office Action from U.S. Appl. No. 17/019,753 dated Mar. 8, 2021. |
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055588 dated Sep. 16, 2019 (21 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055591 dated Jan. 10, 2019 (16 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055593 dated Sep. 16, 2019 (23 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055595 dated Sep. 16, 2019 (25 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055619 dated Sep. 16, 2019 (26 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055620 dated Sep. 25, 2019 (18 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055621 dated Sep. 30, 2019 (18 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055622 dated Sep. 16, 2019 (13 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055623 dated Sep. 26, 2019 (17 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055624 dated Sep. 26, 2019 (17 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055625 dated Sep. 26, 2019 (19 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055626 dated Sep. 16, 2019 (17 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/057690 dated Dec. 16, 2019 (17 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/057692 dated Jan. 7, 2020 (16 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055571 dated Sep. 16, 2019 (20 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/IB2020/080597 dated Jun. 30, 2020 (11 pages). |
Non-Final Office Action from U.S. Appl. No. 16/803,706 dated Apr. 17, 2020. |
Non-Final Office Action from U.S. Appl. No. 16/796,693 dated Apr. 28, 2020. |
Non-Final Office Action from U.S. Appl. No. 16/796,708 dated May 29, 2020. |
Non-Final Office Action from U.S. Appl. No. 16/993,598 dated Oct. 14, 2020. |
Final Office Action from U.S. Appl. No. 16/796,693 dated Oct. 27, 2020. |
Non-Final Office Action from U.S. Appl. No. 17/005,634 dated Nov. 13, 2020. |
Non-Final Office Action from U.S. Appl. No. 17/019,753 dated Nov. 17, 2020. |
Non-Final Office Action from U.S. Appl. No. 17/037,322 dated Nov. 17, 2020. |
Non-Final Office Action from U.S. Appl. No. 17/011,068 dated Nov. 19, 2020. |
Non-Final Office Action from U.S. Appl. No. 17/018,200 dated Nov. 20, 2020. |
Non-Final Office Action from U.S. Appl. No. 16/998,296 dated Nov. 24, 2020. |
Document: JCT3V-B0078, Guionnet et al., “CE5.h: Reducing the Coding Cost of Merge Index by Dynamic Merge Index Reallocation,” Joint Collaborative Team on 3D Video Coding Extension Development of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 2nd Meeting: Shanghai, CN, Oct. 13-19, 2012. |
Document: JVET-C0035, Lee et al., “Modification of Merge Candidate Derivation: ATMVP Simplification and Merge Pruning,” Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 3rd Meeting: Geneva, CH, May 26-Jun. 1, 2016. |
Non-Final Office Action from U.S. Appl. No. 17/005,702 dated Nov. 27, 2020. |
Non-Final Office Action from U.S. Appl. No. 17/005,574 dated Dec. 1, 2020. |
Non-Final Office Action from U.S. Appl. No. 17/011,058 dated Dec. 15, 2020. |
Non-Final Office Action from U.S. Appl. No. 17/071,139 dated Dec. 15, 2020. |
Non-Final Office Action from U.S. Appl. No. 16/993,561 dated Dec. 24, 2020. |
Non-Final Office Action from U.S. Appl. No. 17/031,404 dated Dec. 24, 2020. |
Notice of Allowance from U.S. Appl. No. 16/796,693 dated Feb. 10, 2021. |
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055549 dated Aug. 20, 2019 (16 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055575 dated Aug. 20, 2019 (12 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055576 dated Sep. 16, 2019 (15 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055582 dated Sep. 20, 2019 (18 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/071332 dated Apr. 9, 2020 (9 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/071656 dated Apr. 3, 2020 (12 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/072387 dated Apr. 20, 2020 (10 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/072391 dated Mar. 6, 2020 (11 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055554 dated Aug. 20, 2019 (16 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055556 dated Aug. 29, 2019 (15 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055581 dated Aug. 29, 2019 (25 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055586 dated Sep. 16, 2019 (16 pages). |
International Search Report and Written Opinion from International Patent Application No. PCT/IB2019/055587 dated Sep. 16, 2019 (23 pages). |
Non-Final Office Action from U.S. Appl. No. 17/457,868 dated Nov. 25, 2022. |
Non-Final Office Action from U.S. Appl. No. 16/796,708 dated Nov. 23, 2022. |
Non-Final Office Action from U.S. Appl. No. 17/135,054 dated Nov. 25, 2022. |
Non-Final Office Action dated Nov. 10, 2020, 11 pages, U.S. Appl. No. 17/019,675, filed Sep. 14, 2020. |
Final Office Action dated Mar. 19, 2021, 50 pages, U.S. Appl. No. 17/019,675, filed Sep. 14, 2020. |
Non-Final Office Action dated Nov. 18, 2021, 39 pages, U.S. Appl. No. 17/019,675, filed Sep. 14, 2020. |
Notice of Allowance dated Mar. 11, 2022, 23 pages, U.S. Appl. No. 17/019,675, filed Sep. 14, 2020. |
Notice of Allowance dated Jun. 16, 2022, 19 pages, U.S. Appl. No. 17/019,675, filed Sep. 14, 2020. |
Nevdyaev, “Telecommunication Technologies, English-Russian Explanatory Dictionary and Reference Book,” Communications and Business, Moscow, 2002, 5 pages. |
Non-Final Office Action dated Mar. 30, 2023, 98 pages, U.S. Appl. No. 17/369,132 filed Jul. 7, 2021. |
Chen et al. “Description of SDR, HDR and 360° video coding technology proposal by Qualcomm and Technicolor – low and high complexity versions”, Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 10th Meeting: San Diego, US, JVET-J0021 (Apr. 2018). |
Chen et al. “CE4.3.1: Shared merging candidate list”, JVET 13th Meeting, JVET-M0170-v1 (Jan. 2019). |
Wang et al. “Spec text for the agreed starting point on slicing and tiling”, JVET 12th Meeting, JVET-L0686-v2 (Oct. 2018). |
Han et al. “A dynamic motion vector referencing scheme for video coding” IEEE International Conference on Image Processing (ICIP), (Sep. 2016). |
Rapaka et al. “On intra block copy merge vector handling” JCT-VC Meeting, JCTVC-V0049 (Oct. 2015). |
Chen et al. “Symmetrical mode for bi-prediction” JVET Meeting,JVET-J0063 (Apr. 2018). |
Wang et al. “Description of Core Experiment 4 (CE4): Inter prediction and motion vector coding” JVET-K1024 (Jul. 2018). |
Zhang et al. “CE4-related: Restrictions on History-based Motion Vector Prediction”, JVET-M0272 (Jan. 2019). |
Zhang et al. “CE2-related: Early awareness of accessing temporal blocks in sub-block merge list construction”, JVET-M0273 (Jan. 2019). |
Robert et al. “High precision FRUC with additional candidates” JVET Meeting JVET-D0046 (Oct. 2016). |
Toma et al. “Description of SDR video coding technology proposal by Panasonic”, Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 10th Meeting: San Diego, US, JVET-J0020-v1 and v2 (Apr. 2018). |
Zhang et al. “CE4-related: History-based Motion Vector Prediction”, Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, Document JVET-K0104-v5, Meeting Report of the 11th Meeting of the Joint Video Experts Team (JVET), Ljubljana, SI, Jul. 10-18, 2018. |
Zhang et al., “History-Based Motion Vector Prediction in Versatile Video Coding”, 2019 Data Compression Conference (DCC), IEEE, pp. 43-52, XP033548557 (Mar. 2019). |
Esenlik et al. “Description of Core Experiment 9 (CE9): Decoder Side Motion Vector Derivation” JVET-J1029-r4 (Apr. 2018). |
Sjoberg et al. “Description of SDR and HDR video coding technology proposal by Ericsson and Nokia” JVET Meeting, JVET-J0012-v1 (Apr. 2018). |
Xu et al. “Intra block copy improvement on top of Tencent's CfP response” JVET Meeting, JVET-J0050-r2 (Apr. 2018). |
Lin et al. “CE3: Summary report on motion prediction for texture coding” JCT-3V Meeting, JCT3V-G0023, Jan. 2014. |
Sprljan et al. “TE3 subtest 3: Local intensity compensation (LIC) for inter prediction”, JCT-VC of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11, 3rd Meeting: Guangzhou, CN, JCTVC-C233 (Oct. 2010). |
Document: JVET-J0022, Bordes, et al., “Description of SDR, HDR and 360° video coding technology proposal by Qualcomm and Technicolor—medium complexity version,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 10th Meeting: San Diego, US, Apr. 10-20, 2018. |
Zhang et al. “CE4: History-based Motion Vector Prediction (Test 4.4.7)”, Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting: Macao, CN, Oct. 3-12, 2018, Document JVET-L0266-v1 and v2, Oct. 12, 2018. |
Wang et al. “Description of Core Experiment 4 (CE4): Inter prediction and Motion Vector Coding,” JVET Meeting, The Joint Video Exploration Team of ISO/IEC JTC1/SC29/WG11 and ITU-T SG.16, 10th Meeting: San Diego, Apr. 20, 2018, Document JVET-J1024, Apr. 20, 2018. |
Lee et al., “Non-CE4: HMVP Unification between the Merge and MVP List,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting, Geneva, CH, Mar. 19-27, 2019, document JVET-N0373, Mar. 2019. |
Zhu et al. “Simplified HMVP,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 13th Meeting: Marrakech, MA, Jan. 9-18, 2019, Document JVET-M0473, Jan. 2019. |
Document: JVET-M0562, Bandyopadhyay, S., “Cross-Check of JVET-M0436: AHG2: Regarding HMVP Table Size,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 13th Meeting: Marrakech, MA, Jan. 9-18, 2019. |
Zhang et al. “CE4-4.4: Merge List Construction for Triangular Prediction Mode,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting: Geneva, CH, Mar. 19-27, 2019, document JVET-N0269, Mar. 2019. |
Solovyev et al. “CE-4.6: Simplification for Merge List Derivation in Triangular Prediction Mode,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting: Geneva, CH, Mar. 19-27, 2019, document JVET-N0454, Mar. 2019. |
Zhang et al. “CE10-related: Merge List Construction Process for Triangular Prediction Mode,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 13th Meeting: Marrakech, MA, Jan. 9-18, 2019, document JVET-M0271, Jan. 2019. |
Ma et al. “Eleventh Five-Year Plan” teaching materials for ordinary colleges and universities, Principle and Application of S7-200 PLC and Digital Speed Control Systems, Jul. 31, 2009. |
Enhanced AMVP Mechanism Based Adaptive Motion Search for Fast HEVC Coding; 2014. |
Motion vector prediction methods considering prediction continuity in HEVC; 2016. |
Hardware-friendly Advanced Motion Vector Predictor for hevc; 2018. |
Parallel AMVP candidate list construction for HEVC; 2016. |
Guionnet et al. “CE5.h: Reducing the Coding Cost of Merge Index by Dynamic Merge Index Reallocation,” Joint Collaborative Team on 3D Video Coding Extension Development of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 2nd Meeting: Shanghai, CN, Oct. 13-19, 2012, document JCT3V-B0078, 2012. |
Lee et al. “EE2.6: Modification of Merge Candidate Derivation: ATMVP Simplification and Merge Pruning,” Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 3rd Meeting: Geneva, CH, May 26-Jun. 1, 2016, document JVET-C0035, 2016. |
Document: JVET-L1002, Chen et al., “Algorithm Description for Versatile Video Coding and Test Model 3 (VTM 3),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting, Macao, CN, Oct. 3-12, 2018. |
Library—USPTO search query; 2022. |
Document: JVET-L0401-r3, Chien, et al., “CE4-related: Modification on History-based Motion Vector Prediction,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting, Macao, CN, Oct. 3-12, 2018. |
Xin, Y., “Exploration and Optimization of Merge Mode Candidate Decision in HEVC,” 2016 Microcomputers and Applications No. 15, (School Information Engineering, Shanghai Maritime University, Shanghai 201306) , Sep. 1, 2016. |
Document: JVET-K1000, Sullivan et al., “Meeting Report of the 11th Meeting of the Joint Video Experts Team (JVET), Ljubljana, SI, Jul. 10-18, 2018,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 11th Meeting:Ljubljana, SI, Jul. 10-18, 2018. |
Document: JVET-J0024-v2, Akula, S., et al., “Description of SDR, HDR and 360° video coding technology proposal considering mobile application scenario by Samsung, Huawei, GoPro, and HiSilicon,” Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 10th Meeting: San Diego, US, Apr. 10-20, 2018, 139 pages. |
Document: JVET-M0124, Zhao, J., et al., “CE4: Methods of Reducing No. of Pruning Checks of History Based Motion Vector Prediction (Test 4.1.1),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 13th Meeting: Marrakech, MA, Jan. 9-18, 2019, 6 pages. |
Jia, J., et al., “A fast candidate selection method for Merge mode based on adaptive threshold,” Journal of Optoelectronics—Laser, vol. 27, No. 9, Sep. 2016, 7 pages. |
Non-Final Office Action dated Jul. 3, 2023, 101 pages, U.S. Appl. No. 17/374,160 filed Jul. 13, 2021. |
Non-Final Office Action dated Aug. 7, 2023, 101 pages, U.S. Appl. No. 17/374,311, filed Jul. 13, 2021. |
Non-Final Office Action dated Aug. 21, 2023, 126 pages, U.S. Appl. No. 17/374,208, filed Jul. 13, 2021. |
Canadian Office Action from Canadian Application No. 3,101,730 dated Aug. 10, 2023. |
Document: JVET-J0024-v2, Akula, S., et al., “Description of SDR, HDR and 360 video coding technology proposal by Samsung, Huawei, GoPro, and HiSilicon,” Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, JVET-J0024-v2, 10th Meeting: San Diego, Apr. 10-20, 2018, 139 pages. |
Document: JCTVC-S1014, Joshi, R., et al., “Screen content coding test model 3 (SCM 3),” Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 19th Meeting: Strasbourg, FR, Oct. 17-24, 2014, 12 pages. |
Document: JVET-K0104-v5, Zhang, L., et al., “CE4-related: History-based Motion Vector Prediction,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 11th Meeting: Ljubljana, SI, Jul. 10-18, 2018, 7 pages. |
Document: JVET-L0124-v2, Liao, R., et al., “CE10.3.1.b: Triangular prediction unit mode,” Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 12th Meeting: Macao, CN, Oct. 3-12, 2018, 8 pages. |
Document: JCTVC-G157, Hendry, “Reference List Construction for Random Access Settings,” Joint Collaborative Team on Video Coding (JCT-VC) of ITU-T SG16 WP3 and ISO/IEC JTC1/SC29/WG11 7th Meeting: Geneva, CH, Nov. 21-30, 2011, 5 pages. |
Partial European Search Report from European Application No. 23210728.4 dated Jan. 10, 2024, 19 pages. |
Document: JVET-J0012-v1, Sjoberg, R., et al., “Description of SDR and HDR video coding technology proposal by Ericsson and Nokia,” Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 10th Meeting: San Diego, CA, USA, Apr. 10-20, 2018, 32 pages. |
Document: JVET-L1002-v1, Chen, J., et al., “Algorithm description for Versatile Video Coding and Test Model 3 (VTM 3),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 12th Meeting: Macao, CN, Oct. 3-12, 2018, 48 pages. |
Document: JVET-L0266-v2, Zhang, L., et al., “CE4: History-based Motion Vector Prediction (Test 4.4.7),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 12th Meeting: Macao, CN, Oct. 3-12, 2018, 8 pages. |
Taiwanese Office Action from Taiwan Patent Application No. 108133113 dated Apr. 26, 2024, 23 pages. |
Japanese Notice of Reasons for Refusal from Japanese Patent Application No. 2023-072498 dated May 28, 2024, 8 pages. |
Extended European Search Report from European Application No. 23213700.0 dated May 16, 2024, 24 pages. |
Final Office Action from U.S. Appl. No. 17/388,146 dated Jun. 5, 2024, 34 pages. |
Chinese Notice of Allowance from Chinese Patent Application No. 202080009387.0 dated May 16, 2024, 6 pages. |
Chinese Notice of Allowance from Chinese Patent Application No. 202210307588.x dated Aug. 1, 2024, 6 pages. |
Non-Final Office Action from U.S. Appl. No. 17/380,225 dated Jul. 10, 2024, 21 pages. |
Non-Final Office Action from U.S. Appl. No. 17/388,146 dated Jun. 5, 2024, 34 pages. |
Notice of Allowance from U.S. Appl. No. 18/156,666 dated Jun. 13, 2024, 22 pages. |
Number | Date | Country | |
---|---|---|
20230064498 A1 | Mar 2023 | US |
Relation | Number | Date | Country |
---|---|---|---|
Parent | 17019675 | Sep 2020 | US |
Child | 17975323 | US | |
Parent | PCT/IB2019/055595 | Jul 2019 | WO |
Child | 17019675 | US |