This patent document relates to video coding and decoding techniques, devices and systems.
In spite of the advances in video compression, digital video still accounts for the largest bandwidth use on the internet and other digital communication networks. As the number of connected user devices capable of receiving and displaying video increases, it is expected that the bandwidth demand for digital video usage will continue to grow.
Devices, systems and methods related to digital video coding/decoding, and specifically, simplified linear model derivations for the cross-component linear model (CCLM) prediction mode in video coding/decoding are described. The described methods may be applied to both the existing video coding standards (e.g., High Efficiency Video Coding (HEVC)) and future video coding standards (e.g., Versatile Video Coding (VVC)) or codecs.
In one representative aspect, a method for visual media processing is disclosed. The method includes computing, during a conversion between a current video block of visual media data and a bitstream representation of the current video block, a cross-component linear model (CCLM) and/or a chroma residual scaling (CRS) factor for the current video block based, at least in part, on neighboring samples of a corresponding luma block which covers a top-left sample of a collocated luma block associated with the current video block, wherein one or more characteristics of the current video block are used for identifying the corresponding luma block.
In another representative aspect, a method for visual media processing is disclosed. The method includes using a rule to make a determination of selectively enabling or disabling a chroma residual scaling (CRS) on color components of a current video block of visual media data, wherein the rule is based on coding mode information of the current video block and/or coding mode information of one or more neighbouring video blocks; and performing a conversion between the current video block and a bitstream representation, based on the determination.
In yet another representative aspect, a method for visual media processing is disclosed. The method includes using a single chroma residual scaling factor for at least one chroma block associated with video blocks in a slice or a tile group associated with a current video block of visual media data; and performing a conversion between the current video block and a bitstream representation of the current video block.
In another representative aspect, a method for visual media processing is disclosed. The method includes deriving a chroma residual scaling factor during a conversion between a current video block of visual media data and a bitstream representation of the current video block; storing the chroma residual scaling factor for use with other video blocks of the visual media data; and applying the chroma residual scaling factor for the conversion of the current video block and the other video blocks into the bitstream representation.
In another representative aspect, a method for visual media processing is disclosed. The method includes, during a conversion between a current video block of visual media data and a bitstream representation of the visual media data: computing a chroma residual scaling factor of the current video block; storing, in a buffer, the chroma residual scaling factor for use with a second video block of the visual media data; and subsequent to the use, removing the chroma residual scaling factor from the buffer.
In yet another example aspect, a video encoder or decoder apparatus comprising a processor configured to implement an above-described method is disclosed.
In another example aspect, a computer readable program medium is disclosed. The medium stores code that embodies processor executable instructions for implementing one of the disclosed methods.
In yet another representative aspect, the above-described method is embodied in the form of processor-executable code and stored in a computer-readable program medium.
2.1 A Brief Review on HEVC
2.1.1 Intra Prediction in HEVC/H.265
Intra prediction involves producing samples for a given TB (transform block) using samples previously reconstructed in the considered colour channel. The intra prediction mode is separately signalled for the luma and chroma channels, with the chroma channel intra prediction mode optionally dependent on the luma channel intra prediction mode via the ‘DM_CHROMA’ mode. Although the intra prediction mode is signalled at the PB (prediction block) level, the intra prediction process is applied at the TB level, in accordance with the residual quad-tree hierarchy for the CU, thereby allowing the coding of one TB to have an effect on the coding of the next TB within the CU, and therefore reducing the distance to the samples used as reference values.
HEVC includes 35 intra prediction modes—a DC mode, a planar mode and 33 directional, or ‘angular’ intra prediction modes. The 33 angular intra prediction modes are illustrated in
For PBs associated with chroma colour channels, the intra prediction mode is specified as either planar, DC, horizontal, vertical, ‘DM_CHROMA’ mode or sometimes diagonal mode ‘34’.
Note that for chroma formats 4:2:2 and 4:2:0, the chroma PB may overlap two or four (respectively) luma PBs; in this case, the luma direction for DM_CHROMA is taken from the top-left of these luma PBs.
The DM_CHROMA mode indicates that the intra prediction mode of the luma colour channel PB is applied to the chroma colour channel PBs. Since this is relatively common, the most-probable-mode coding scheme of the intra_chroma_pred_mode is biased in favor of this mode being selected.
2.2 Versatile Video Coding (VVC) Algorithm Description
2.2.1 VVC Coding Architecture
To explore future video coding technologies beyond HEVC, the Joint Video Exploration Team (JVET) was founded jointly by VCEG and MPEG in 2015. JVET meetings are held once every quarter, and the new coding standard targets a 50% bitrate reduction compared to HEVC. The new video coding standard was officially named Versatile Video Coding (VVC) at the April 2018 JVET meeting, and the first version of the VVC test model (VTM) was released at that time. As continuous efforts contribute to VVC standardization, new coding techniques are adopted into the VVC standard at every JVET meeting, and the VVC working draft and the test model VTM are updated after every meeting. The VVC project is now aiming for technical completion (FDIS) at the July 2020 meeting.
As in most preceding standards, VVC has a block-based hybrid coding architecture, combining inter-picture and intra-picture prediction and transform coding with entropy coding. The picture partitioning structure divides the input video into blocks called coding tree units (CTUs). A CTU is split using a quadtree with nested multi-type tree structure into coding units (CUs), with a leaf coding unit (CU) defining a region sharing the same prediction mode (e.g. intra or inter). In this document, the term ‘unit’ defines a region of an image covering all colour components; the term ‘block’ is used to define a region covering a particular colour component (e.g. luma), and may differ in spatial location when considering the chroma sampling format such as 4:2:0.
2.2.2 Dual/Separate Tree Partition in VVC
The luma component and the chroma components can have separate partition trees for I slices. Separate tree partitioning is applied at the 64×64 block level instead of the CTU level. In the VTM software, there is an SPS flag to control switching the dual tree on and off.
2.2.3 Intra Prediction in VVC
2.2.3.1 67 Intra Prediction Modes
To capture the arbitrary edge directions present in natural video, the number of directional intra modes in VTM4 is extended from 33, as used in HEVC, to 65. The new directional modes not in HEVC are depicted as red dotted arrows in
2.2.3.2 Cross-Component Linear Model Prediction (CCLM)
To reduce the cross-component redundancy, a cross-component linear model (CCLM) prediction mode is used in the VTM4, for which the chroma samples are predicted based on the reconstructed luma samples of the same CU by using a linear model as follows:
predC(i,j)=α·recL′(i,j)+β
where predC(i,j) represents the predicted chroma samples in a CU and recL′(i,j) represents the downsampled reconstructed luma samples of the same CU. The linear model parameters α and β are derived from the relation between the luma and chroma values of two samples, namely the luma samples with the minimum and the maximum sample value inside the set of downsampled neighbouring luma samples, together with their corresponding chroma samples. The linear model parameters α and β are obtained according to the following equations.
Here, Xa and Ya represent the luma value and chroma value of the luma sample with the maximum luma sample value, and Xb and Yb represent the luma value and chroma value of the luma sample with the minimum luma sample value, respectively.
The division operation to calculate parameter α is implemented with a look-up table. To reduce the memory required for storing the table, the diff value (difference between maximum and minimum values) and the parameter α are expressed by an exponential notation. For example, diff is approximated with a 4-bit significant part and an exponent. Consequently, the table for 1/diff is reduced into 16 elements for 16 values of the significand as follows:
DivTable[ ]={0,7,6,5,5,4,4,3,3,2,2,1,1,1,1,0}
This has the benefit of both reducing the complexity of the calculation and reducing the memory size required for storing the needed tables.
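For illustration only, the two-point derivation described above can be sketched as follows. This is a minimal floating-point sketch, not the fixed-point procedure of the working draft (which replaces the division by the DivTable look-up); the function names are illustrative.

```python
def derive_cclm_params(luma_samples, chroma_samples):
    """Sketch of the CCLM two-point linear-model derivation.

    luma_samples:   downsampled reconstructed neighbouring luma values
    chroma_samples: the corresponding neighbouring chroma values
    Returns (alpha, beta) such that predC = alpha * recL' + beta.
    Floating-point is used for clarity; the spec approximates 1/diff
    with the 16-entry DivTable instead of a true division.
    """
    # locate the neighbouring luma samples with the maximum and minimum value
    i_max = max(range(len(luma_samples)), key=lambda i: luma_samples[i])
    i_min = min(range(len(luma_samples)), key=lambda i: luma_samples[i])
    xa, ya = luma_samples[i_max], chroma_samples[i_max]  # max-luma point
    xb, yb = luma_samples[i_min], chroma_samples[i_min]  # min-luma point

    diff = xa - xb
    alpha = 0.0 if diff == 0 else (ya - yb) / diff
    beta = yb - alpha * xb
    return alpha, beta


def predict_chroma(rec_luma_ds, alpha, beta):
    """Apply predC(i, j) = alpha * recL'(i, j) + beta to a downsampled luma block."""
    return [[alpha * v + beta for v in row] for row in rec_luma_ds]
```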
Besides being used together to calculate the linear model coefficients, the above template and the left template can also be used alternatively in the other two LM modes, called the LM_A and LM_L modes.
In LM_A mode, only the above template is used to calculate the linear model coefficients. To get more samples, the above template is extended to (W+H). In LM_L mode, only the left template is used to calculate the linear model coefficients. To get more samples, the left template is extended to (H+W).
For a non-square block, the above template is extended to W+W, and the left template is extended to H+H.
To match the chroma sample locations for 4:2:0 video sequences, two types of downsampling filters are applied to the luma samples to achieve a 2-to-1 downsampling ratio in both the horizontal and vertical directions. The selection of the downsampling filter is specified by an SPS-level flag. The two downsampling filters, which correspond to “type-0” and “type-2” content respectively, are as follows.
Note that only one luma line (general line buffer in intra prediction) is used to make the downsampled luma samples when the upper reference line is at the CTU boundary.
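For orientation, the two downsampling filters can be sketched as below. The tap values follow the commonly cited VVC forms for “type-0” and “type-2” chroma siting but are assumptions here and should be checked against the working draft; pY is assumed to be a luma array indexed as pY[x][y] with sufficient border padding.

```python
def downsample_type0(pY, x, y):
    # "Type-0" siting: 6-tap filter averaging over two luma rows
    # (tap values assumed from the commonly cited VVC form).
    return (pY[2*x - 1][2*y] + pY[2*x - 1][2*y + 1]
            + 2 * pY[2*x][2*y] + 2 * pY[2*x][2*y + 1]
            + pY[2*x + 1][2*y] + pY[2*x + 1][2*y + 1] + 4) >> 3


def downsample_type2(pY, x, y):
    # "Type-2" (collocated) siting: 5-tap cross-shaped filter, same caveat.
    return (pY[2*x][2*y - 1] + pY[2*x - 1][2*y] + 4 * pY[2*x][2*y]
            + pY[2*x + 1][2*y] + pY[2*x][2*y + 1] + 4) >> 3
```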
This parameter computation is performed as part of the decoding process, and is not just an encoder search operation. As a result, no syntax is used to convey the α and β values to the decoder.
For chroma intra mode coding, a total of 8 intra modes are allowed. These modes include five traditional intra modes and three cross-component linear model modes (CCLM, LM_A, and LM_L). Chroma mode coding directly depends on the intra prediction mode of the corresponding luma block. Since a separate block partitioning structure for the luma and chroma components is enabled in I slices, one chroma block may correspond to multiple luma blocks. Therefore, for the Chroma DM mode, the intra prediction mode of the corresponding luma block covering the center position of the current chroma block is directly inherited.
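As a rough illustration of the last sentence, the luma position consulted for the Chroma DM mode in 4:2:0 can be sketched as follows; the helper name and the assumption that chroma coordinates are simply doubled are illustrative.

```python
def dm_luma_centre(x_tb_c, y_tb_c, w_c, h_c):
    """Luma sample position covering the centre of a chroma block (4:2:0).

    (x_tb_c, y_tb_c) is the top-left chroma position and (w_c, h_c) the chroma
    block size; the returned luma position is the one whose intra prediction
    mode would be inherited for Chroma DM.
    """
    return ((x_tb_c + w_c // 2) << 1, (y_tb_c + h_c // 2) << 1)
```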
2.2.3.2.1 Corresponding Modified Working Draft (JVET-N0271)
The following spec is based on the modified working draft of JVET-M1001 and the adoption in JVET-N0271. The modifications of the adopted JVET-N0220 are shown in bold and underlined text.
Syntax Table
Sequence Parameter Set RBSP Syntax
Semantics
sps_cclm_enabled_flag equal to 0 specifies that the cross-component linear model intra prediction from luma component to chroma component is disabled. sps_cclm_enabled_flag equal to 1 specifies that the cross-component linear model intra prediction from luma component to chroma component is enabled.
Decoding Process
In 8.4.4.2.8 Specification of INTRA_LT_CCLM, INTRA_L_CCLM and INTRA_T_CCLM Intra Prediction Mode
Inputs to this process are:
Output of this process are predicted samples predSamples[x][y], with x=0 . . . nTbW−1, y=0 . . . nTbH−1.
The current luma location (xTbY, yTbY) is derived as follows:
(xTbY,yTbY)=(xTbC<<1,yTbC<<1) (8-156)
The variables availL, availT and availTL are derived as follows:
The number of available neighbouring chroma samples on the top and top-right numTopSamp and the number of available neighbouring chroma samples on the left and left-below nLeftSamp are derived as follows:
The prediction samples predSamples[x][y] with x=0 . . . nTbW−1, y=0 . . . nTbH−1 are derived as follows:
VTM4 includes many intra coding tools that are different from HEVC; for example, the following features have been included in the VVC test model 3 on top of the block tree structure.
In VTM4, when a CU is coded in merge mode, and if the CU contains at least 64 luma samples (that is, CU width times CU height is equal to or larger than 64), an additional flag is signalled to indicate if the combined inter/intra prediction (CIIP) mode is applied to the current CU.
In order to form the CIIP prediction, an intra prediction mode is first derived from two additional syntax elements. Up to four possible intra prediction modes can be used: DC, planar, horizontal, or vertical. Then, the inter prediction and intra prediction signals are derived using regular intra and inter decoding processes. Finally, weighted averaging of the inter and intra prediction signals is performed to obtain the CIIP prediction.
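The weighted averaging can be pictured as in the sketch below. The weight value is a placeholder, since the actual VTM weights depend on the coding modes of neighbouring blocks.

```python
def ciip_blend(pred_inter, pred_intra, w_intra=2):
    """Combine inter and intra prediction blocks with integer weights summing to 4.

    w_intra is a placeholder weight; the real derivation depends on the
    intra/inter status of the neighbouring blocks.
    """
    w_inter = 4 - w_intra
    return [[(w_inter * pi + w_intra * pa + 2) >> 2
             for pi, pa in zip(row_inter, row_intra)]
            for row_inter, row_intra in zip(pred_inter, pred_intra)]
```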
2.2.4.2 Miscellaneous Inter Prediction Aspects
VTM4 includes many inter coding tools that are different from HEVC; for example, the following features have been included in the VVC test model 3 on top of the block tree structure.
There are in total three in-loop filters in VTM4. Besides the deblocking filter and SAO (the two loop filters in HEVC), an adaptive loop filter (ALF) is applied in VTM4. The order of the filtering process in VTM4 is: deblocking filter, SAO, and ALF.
In VTM4, the SAO and deblocking filtering processes are almost the same as those in HEVC.
In VTM4, a new process called luma mapping with chroma scaling was added (this process was previously known as the adaptive in-loop reshaper). This new process is performed before deblocking.
2.2.6 Luma Mapping with Chroma Scaling (LMCS, aka. In-Loop Reshaping)
In VTM4, a coding tool called the luma mapping with chroma scaling (LMCS) is added as a new processing block before the loop filters. LMCS has two main components: 1) in-loop mapping of the luma component based on adaptive piecewise linear models; 2) for the chroma components, luma-dependent chroma residual scaling is applied.
2.2.6.1 Luma Mapping with Piecewise Linear Model
The in-loop mapping of the luma component adjusts the dynamic range of the input signal by redistributing the codewords across the dynamic range to improve compression efficiency. Luma mapping makes use of a forward mapping function, FwdMap, and a corresponding inverse mapping function, InvMap. The FwdMap function is signalled using a piecewise linear model with 16 equal pieces. InvMap function does not need to be signalled and is instead derived from the FwdMap function.
The luma mapping model is signalled at the tile group level. A presence flag is signalled first. If the luma mapping model is present in the current tile group, the corresponding piecewise linear model parameters are signalled. The piecewise linear model partitions the input signal's dynamic range into 16 equal pieces, and for each piece, its linear mapping parameters are expressed using the number of codewords assigned to that piece. Take a 10-bit input as an example: each of the 16 pieces will have 64 codewords assigned to it by default. The signalled number of codewords is used to calculate the scaling factor and adjust the mapping function accordingly for that piece. At the tile group level, another LMCS enable flag is signalled to indicate if the LMCS process as depicted in
Each i-th piece, i=0 . . . 15, of the FwdMap piecewise linear model is defined by two input pivot points InputPivot[ ] and two output (mapped) pivot points MappedPivot[ ].
The InputPivot[ ] and MappedPivot[ ] are computed as follows (assuming 10-bit video):
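A sketch of this computation under the stated 10-bit assumption (OrgCW = 64) is given below; signalled_cw stands for the per-piece codeword counts recovered from the bitstream and is an illustrative name.

```python
ORG_CW = 64  # 10-bit video: (1 << 10) codewords / 16 pieces


def build_pivots(signalled_cw):
    """Compute InputPivot[] and MappedPivot[] for the 16-piece FwdMap model."""
    assert len(signalled_cw) == 16
    input_pivot = [i * ORG_CW for i in range(17)]   # equal pieces in the input domain
    mapped_pivot = [0] * 17
    for i in range(16):                              # accumulate signalled codewords
        mapped_pivot[i + 1] = mapped_pivot[i] + signalled_cw[i]
    return input_pivot, mapped_pivot
```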
As shown in
The luma mapping process (forward and/or inverse mapping) can be implemented using either look-up-tables (LUT) or using on-the-fly computation. If LUT is used, then FwdMapLUT and InvMapLUT can be pre-calculated and pre-stored for use at the tile group level, and forward and inverse mapping can be simply implemented as FwdMap(Ypred)=FwdMapLUT [Ypred] and InvMap(Yr)=InvMapLUT[Yr], respectively. Alternatively, on-the-fly computation may be used. Take forward mapping function FwdMap as an example. In order to figure out the piece to which a luma sample belongs, the sample value is right shifted by 6 bits (which corresponds to 16 equal pieces). Then, the linear model parameters for that piece are retrieved and applied on-the-fly to compute the mapped luma value. Let i be the piece index, a1, a2 be InputPivot[i] and InputPivot[i+1], respectively, and b1, b2 be MappedPivot[i] and MappedPivot[i+1], respectively. The FwdMap function is evaluated as follows:
FwdMap(Ypred)=((b2−b1)/(a2−a1))*(Ypred−a1)+b1
The InvMap function can be computed on-the-fly in a similar manner, except that conditional checks need to be applied instead of a simple right bit-shift when figuring out the piece to which the sample value belongs, because the pieces in the mapped domain are not equal sized.
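The following floating-point sketch illustrates both directions of the on-the-fly evaluation; the working draft instead uses fixed-point ScaleCoeff/InvScaleCoeff values, so this is only meant to show the piece selection and interpolation.

```python
def fwd_map(y_pred, input_pivot, mapped_pivot):
    """Forward mapping: piece index found by a simple shift (16 equal input pieces)."""
    i = min(y_pred >> 6, 15)  # 10-bit input assumed: 1024 / 16 = 64 = 1 << 6
    a1, a2 = input_pivot[i], input_pivot[i + 1]
    b1, b2 = mapped_pivot[i], mapped_pivot[i + 1]
    return (b2 - b1) / (a2 - a1) * (y_pred - a1) + b1


def inv_map(y_r, input_pivot, mapped_pivot):
    """Inverse mapping: mapped-domain pieces are unequal, so the pivots are searched."""
    i = 0
    while i < 15 and y_r >= mapped_pivot[i + 1]:
        i += 1
    a1, a2 = input_pivot[i], input_pivot[i + 1]
    b1, b2 = mapped_pivot[i], mapped_pivot[i + 1]
    if b2 == b1:                 # piece with zero codewords assigned
        return a1
    return (a2 - a1) / (b2 - b1) * (y_r - b1) + a1
```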
2.2.6.2 Luma-Dependent Chroma Residual Scaling
Chroma residual scaling is designed to compensate for the interaction between the luma signal and its corresponding chroma signals. Whether chroma residual scaling is enabled or not is also signalled at the tile group level. If luma mapping is enabled and if dual tree partition (also known as separate chroma tree) is not applied to the current tile group, an additional flag is signalled to indicate if luma-dependent chroma residual scaling is enabled or not. When luma mapping is not used, or when dual tree partition is used in the current tile group, luma-dependent chroma residual scaling is disabled. Further, luma-dependent chroma residual scaling is always disabled for the chroma blocks whose area is less than or equal to 4.
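These enabling conditions can be summarized by a small check such as the one below; the parameter names are illustrative rather than taken from the draft syntax.

```python
def chroma_residual_scaling_enabled(luma_mapping_enabled, dual_tree_used,
                                    crs_flag, chroma_block_area):
    """Whether luma-dependent chroma residual scaling applies to a chroma block."""
    if not luma_mapping_enabled or dual_tree_used:
        return False            # disabled without luma mapping or with separate trees
    if chroma_block_area <= 4:
        return False            # always disabled for very small chroma blocks
    return crs_flag             # otherwise controlled by the signalled flag
```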
Chroma residual scaling depends on the average value of the corresponding luma prediction block (for both intra- and inter-coded blocks). Denote avgY′ as the average of the luma prediction block. The value of CScaleInv is computed in the following steps:
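The steps themselves follow the adopted draft text; conceptually they amount to locating the piece that contains avgY′ in the mapped domain and taking the corresponding inverse chroma scale, roughly as sketched below (chroma_scale_inv plays the role of ChromaScaleCoeff).

```python
def chroma_scale_inv_for_block(avg_y_mapped, mapped_pivot, chroma_scale_inv):
    """Pick the inverse chroma scaling factor for a block from the average avgY'
    of its (mapped-domain) luma prediction samples."""
    idx = 0
    while idx < 15 and avg_y_mapped >= mapped_pivot[idx + 1]:
        idx += 1
    return chroma_scale_inv[idx]
```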
The following spec is based on the modified working draft of JVET-M1001 and the adoption in JVET-N0220. The modification in the adopted JVET-N0220 is shown in bold and underlined text.
Syntax Tables
In 7.3.2.1 Sequence Parameter Set RBSP Syntax
In 7.3.4.1 General Tile Group Header Syntax
if( NumTilesInCurrTileGroup > 1 ) {
ue(v)
u(v)
}
In 7.3.4.4 Luma Mapping with Chroma Scaling Data Syntax
Semantics
In 7.4.3.1 Sequence Parameter Set RBSP Semantics
sps_lmcs_enabled_flag equal to 1 specifies that luma mapping with chroma scaling is used in the CVS. sps_lmcs_enabled_flag equal to 0 specifies that luma mapping with chroma scaling is not used in the CVS.
tile_group_lmcs_model_present_flag equal to 1 specifies that lmcs_data( ) is present in the tile group header. tile_group_lmcs_model_present_flag equal to 0 specifies that lmcs_data( ) is not present in the tile group header. When tile_group_lmcs_model_present_flag is not present, it is inferred to be equal to 0.
tile_group_lmcs_enabled_flag equal to 1 specifies that luma mapping with chroma scaling is enabled for the current tile group. tile_group_lmcs_enabled_flag equal to 0 specifies that luma mapping with chroma scaling is not enabled for the current tile group. When tile_group_lmcs_enabled_flag is not present, it is inferred to be equal to 0.
tile_group_chroma_residual_scale_flag equal to 1 specifies that chroma residual scaling is enabled for the current tile group. tile_group_chroma_residual_scale_flag equal to 0 specifies that chroma residual scaling is not enabled for the current tile group. When tile_group_chroma_residual_scale_flag is not present, it is inferred to be equal to 0.
In 7.4.5.4 Luma Mapping with Chroma Scaling Data Semantics
lmcs_min_bin_idx specifies the minimum bin index used in the luma mapping with chroma scaling construction process. The value of lmcs_min_bin_idx shall be in the range of 0 to 15, inclusive.
lmcs_delta_max_bin_idx specifies the delta value between 15 and the maximum bin index LmcsMaxBinIdx used in the luma mapping with chroma scaling construction process. The value of lmcs_delta_max_bin_idx shall be in the range of 0 to 15, inclusive. The value of LmcsMaxBinIdx is set equal to 15−lmcs_delta_max_bin_idx. The value of LmcsMaxBinIdx shall be larger than or equal to lmcs_min_bin_idx.
lmcs_delta_cw_prec_minus1 plus 1 specifies the number of bits used for the representation of the syntax lmcs_delta_abs_cw[i]. The value of lmcs_delta_cw_prec_minus1 shall be in the range of 0 to BitDepthY−2, inclusive.
lmcs_delta_abs_cw[i] specifies the absolute delta codeword value for the ith bin.
lmcs_delta_sign_cw_flag[i] specifies the sign of the variable lmcsDeltaCW[i] as follows:
The variable InputPivot[i], with i=0 . . . 16, is derived as follows:
InputPivot[i]=i*OrgCW (7-74)
The variable LmcsPivot[i] with i=0 . . . 16, the variables ScaleCoeff[i] and InvScaleCoeff[i] with i=0 . . . 15, are derived as follows:
LmcsPivot[ 0 ] = 0;
for( i = 0; i <= 15; i++ ) {
 LmcsPivot[ i + 1 ] = LmcsPivot[ i ] + lmcsCW[ i ]
}
The variable ChromaScaleCoeff[i], with i=0 . . . 15, is derived as follows:
if ( lmcsCW[ i ] = = 0 )
else {
}
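In floating-point terms, the intent of the ScaleCoeff/InvScaleCoeff/ChromaScaleCoeff derivation can be sketched as follows; the fixed-point precision of the actual draft variables is deliberately omitted, so this is an approximation rather than the normative computation.

```python
def scale_coeffs(lmcs_cw, org_cw=64):
    """Per-piece forward scale, inverse scale, and chroma scale (float sketch)."""
    scale, inv_scale, chroma_scale = [], [], []
    for cw in lmcs_cw:
        s = cw / org_cw                      # forward slope of the piece
        scale.append(s)
        inv_scale.append(0.0 if cw == 0 else 1.0 / s)
        # pieces with zero codewords get a neutral chroma scale of 1.0 here
        chroma_scale.append(1.0 if cw == 0 else 1.0 / s)
    return scale, inv_scale, chroma_scale
```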
The variables ClipRange, LmcsMinVal, and LmcsMaxVal are derived as follows:
ClipRange=((lmcs_min_bin_idx>0)&&(LmcsMaxBinIdx<15)) (7-77)
LmcsMinVal=16<<(BitDepthY−8) (7-78)
LmcsMaxVal=235<<(BitDepthY−8) (7-79)
The current design of LMCS/CCLM may have the following problems:
To tackle the problems, we propose several methods to remove/reduce/restrict the cross-component dependency in luma-dependent chroma residual scaling, CCLM, and other coding tools that rely on information from a different colour component.
The detailed embodiments described below should be considered as examples to explain general concepts. These embodiments should not be interpreted in a narrow way. Furthermore, these embodiments can be combined in any manner.
It is noted that although the bullets described below explicitly mention LMCS/CCLM, the methods may be also applicable to other coding tools that rely on information from a different colour component. In addition, the term ‘luma’ and ‘chroma’ mentioned below may be replaced by ‘a first color component’ and ‘a second color component’ respectively, such as ‘G component’ and ‘B/R component’ in the RGB color format.
In the following discussion, the definition of a “collocated sample/block” aligns with the definition of collocated sample/block in the VVC working draft JVET-M1001. To be more specific, in the 4:2:0 colour format, suppose the top-left sample of a chroma block is at position (xTbC, yTbC); then the top-left sample of the collocated luma block location (xTbY, yTbY) is derived as follows: (xTbY, yTbY)=(xTbC<<1, yTbC<<1). As illustrated in
In the following discussion, a “corresponding block” may have a different location from the current block. For example, there might be a motion shift between the current block and its corresponding block in the reference frame. As illustrated in
Hereinafter, DMVD (decoder-side motion vector derivation) is used to represent BDOF (a.k.a. BIO) and/or DMVR (decoder-side motion vector refinement) and/or FRUC (frame rate up-conversion) and/or other methods that refine the motion vector and/or prediction sample value at the decoder.
Removal of the Chroma Scaling Latency of LMCS and Model Computation of CCLM
The embodiment below is for the method in item 11 of the example embodiments in Section 4 of this document.
Newly added parts are highlighted in bolded, underlined, italicized font, and the deleted parts from the VVC working draft are highlighted in capitalized font. The modifications are based on the latest VVC working draft (JVET-M1007-v7) and the new adoption in JVET-N0220-v3.
8.7.5.4 Picture Reconstruction with Luma Dependent Chroma Residual Scaling Process for Chroma Samples
Inputs to this process are:
Output of this process is a reconstructed chroma picture sample array recSamples.
The reconstructed chroma picture sample recSamples is derived as follows for i=0 . . . nCurrSw−1, j=0 . . . nCurrSh−1:
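The normative derivation is given in the draft text; conceptually, the scaled reconstruction of each chroma sample can be pictured as in the sketch below, where clip1_c stands for clipping to the chroma bit-depth range and the scaling is shown in floating point rather than the fixed-point form of the draft.

```python
def reconstruct_chroma(pred, res, c_scale_inv, bit_depth_c=10):
    """Reconstruct a chroma block with luma-dependent residual scaling (sketch)."""
    max_val = (1 << bit_depth_c) - 1
    clip1_c = lambda v: max(0, min(max_val, v))
    return [[clip1_c(int(p + r * c_scale_inv))        # scale residual, add, clip
             for p, r in zip(pred_row, res_row)]
            for pred_row, res_row in zip(pred, res)]
```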
The embodiment below is for the method in item 11 of the example embodiments in Section 4 of this document.
Newly added parts are highlighted in bolded, underlined, italicized font, and the deleted parts from the VVC working draft are highlighted in capitalized font. The modifications are based on the latest VVC working draft (JVET-M1007-v7) and the new adoption in JVET-N0220-v3.
The differences between Embodiment #2 and #1 are listed as follows:
Inputs to this process are:
Output of this process is a reconstructed chroma picture sample array recSamples.
The reconstructed chroma picture sample recSamples is derived as follows for i=0 . . . nCurrSw−1, j=0 . . . nCurrSh−1:
In some embodiments, the video coding methods may be implemented using an apparatus that is implemented on a hardware platform as described with respect to
Some embodiments may be described using the following clause-based format.
The system 1200 may include a coding component 1204 that may implement the various coding or encoding methods described in the present document. The coding component 1204 may reduce the average bitrate of video from the input 1202 to the output of the coding component 1204 to produce a coded representation of the video. The coding techniques are therefore sometimes called video compression or video transcoding techniques. The output of the coding component 1204 may be either stored, or transmitted via a communication connection, as represented by the component 1206. The stored or communicated bitstream (or coded) representation of the video received at the input 1202 may be used by the component 1208 for generating pixel values or displayable video that is sent to a display interface 1210. The process of generating user-viewable video from the bitstream representation is sometimes called video decompression. Furthermore, while certain video processing operations are referred to as “coding” operations or tools, it will be appreciated that the coding tools or operations are used at an encoder and corresponding decoding tools or operations that reverse the results of the coding will be performed by a decoder.
Examples of a peripheral bus interface or a display interface may include universal serial bus (USB) or high definition multimedia interface (HDMI) or Displayport, and so on. Examples of storage interfaces include SATA (serial advanced technology attachment), PCI, IDE interface, and the like. The techniques described in the present document may be embodied in various electronic devices such as mobile phones, laptops, smartphones or other devices that are capable of performing digital data processing and/or video display.
Some embodiments discussed in this document are now presented in clause-based format.
In the present document, the term “video processing” or “visual media processing” may refer to video encoding, video decoding, video compression or video decompression. For example, video compression algorithms may be applied during conversion from pixel representation of a video to a corresponding bitstream representation or vice versa. The bitstream representation of a current video block may, for example, correspond to bits that are either co-located or spread in different places within the bitstream, as is defined by the syntax. For example, a macroblock may be encoded in terms of transformed and coded error residual values and also using bits in headers and other fields in the bitstream. Furthermore, during conversion, a decoder may parse a bitstream with the knowledge that some fields may be present, or absent, based on the determination, as is described in the above solutions. Similarly, an encoder may determine that certain syntax fields are or are not to be included and generate the coded representation accordingly by including or excluding the syntax fields from the coded representation.
From the foregoing, it will be appreciated that specific embodiments of the presently disclosed technology have been described herein for purposes of illustration, but that various modifications may be made without deviating from the scope of the invention. Accordingly, the presently disclosed technology is not limited except as by the appended claims.
Implementations of the subject matter and the functional operations described in this patent document can be implemented in various systems, digital electronic circuitry, or in computer software, firmware, or hardware, including the structures disclosed in this specification and their structural equivalents, or in combinations of one or more of them. Implementations of the subject matter described in this specification can be implemented as one or more computer program products, i.e., one or more modules of computer program instructions encoded on a tangible and non-transitory computer readable medium for execution by, or to control the operation of, data processing apparatus. The computer readable medium can be a machine-readable storage device, a machine-readable storage substrate, a memory device, a composition of matter effecting a machine-readable propagated signal, or a combination of one or more of them. The term “data processing unit” or “data processing apparatus” encompasses all apparatus, devices, and machines for processing data, including by way of example a programmable processor, a computer, or multiple processors or computers. The apparatus can include, in addition to hardware, code that creates an execution environment for the computer program in question, e.g., code that constitutes processor firmware, a protocol stack, a database management system, an operating system, or a combination of one or more of them.
A computer program (also known as a program, software, software application, script, or code) can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program does not necessarily correspond to a file in a file system. A program can be stored in a portion of a file that holds other programs or data (e.g., one or more scripts stored in a markup language document), in a single file dedicated to the program in question, or in multiple coordinated files (e.g., files that store one or more modules, sub programs, or portions of code). A computer program can be deployed to be executed on one computer or on multiple computers that are located at one site or distributed across multiple sites and interconnected by a communication network.
The processes and logic flows described in this specification can be performed by one or more programmable processors executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by, and apparatus can also be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of nonvolatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
It is intended that the specification, together with the drawings, be considered exemplary only, where exemplary means an example. As used herein, the use of “or” is intended to include “and/or”, unless the context clearly indicates otherwise.
While this patent document contains many specifics, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this patent document in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
Similarly, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Moreover, the separation of various system components in the embodiments described in this patent document should not be understood as requiring such separation in all embodiments.
Only a few implementations and examples are described and other implementations, enhancements and variations can be made based on what is described and illustrated in this patent document.
| Number | Date | Country | Kind |
|---|---|---|---|
| PCT/CN2019/083846 | Apr 2019 | WO | international |
This application is a continuation of Ser. No. 17/740,753, filed on May 10, 2022, which is a continuation of U.S. patent application Ser. No. 17/406,284, filed on Aug. 19, 2021, which is based on International Application No. PCT/CN2020/086111, filed on Apr. 22, 2020, which claims the priority to and benefit of International Patent Application No. PCT/CN2019/083846, filed on Apr. 23, 2019. All the aforementioned patent applications are hereby incorporated by reference in their entireties.
| Entry |
|---|
| Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11,14th Meeting: Geneva, CH, Mar. 19-27, 2019, Document JVET-N0847 (Year: 2019). |
| Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11,14th Meeting: Geneva, CH, Mar. 19-27, 2019, Document JVET-N0389 (Year: 2019). |
| Document: JVET-M0427-v2, Lu, T., et al., “CE12: Mapping functions (test CE12-1 and CE12-2),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 13th Meeting: Marrakech, MA, Jan. 9-18, 2019, 17 pages. |
| Document: JVET-Q0182-v1, Lai, et al., “AHG9: Allowing slice-level scaling list and LMCS,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 17th Meeting: Brussels, BE, Jan. 7-17, 2020, 5 pages. |
| Document: JVET-M0292, Koo, M., et al., “CE6: Reduced Secondary Transform (RST) (test 6.5.1),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 13th Meeting: Marrakech, MA, Jan. 9-18, 2019, 14 pages. |
| Document: JVET-N0193, Koo, M., et al., “CE6: Reduced Secondary Transform (RST) (CE6-3.1),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting: Geneva, CH, Mar. 19-27, 2019, 19 pages. |
| Document: JVET-N0555-v3, Siekmann, M., et al., “CE6—related: Simplification of the Reduced Secondary Transform,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting: Geneva, CH, Mar. 19-27, 2019, 10 pages. |
| Document: JVET-P0379-v1, Fan, K., et al., “Non-CE6: A unified zero-out range for 4x4 LFNST,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 16th Meeting: Geneva, CH, Oct. 1-11, 2019, 8 pages. |
| Recommendation ITU-T H.265 (Feb. 2018), [online], ITU-T, Feb. 13, 2018, pp. 77-78 URL: https://www.itu.int/rec/TREC-H.265-201802-S/en. |
| Document: JVET-N0805-v1, Heng, B., et al., “AHG17: Design for signalling reshaper model,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 14th Meeting: Geneva, CH, Mar. 19-27, 2019, 4 pages. |
| Document: JVET-N0805-v2, Wan, W., et al., “AHG17: Design for signalling reshaper model,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 14th Meeting: Geneva, CH, Mar. 19-27, 2019, 6 pages. |
| Notice of Allowance from U.S. Appl. No. 17/546,621 dated Nov. 30, 2022. |
| Notice of Allowance from U.S. Appl. No. 17/357,166 dated Dec. 27, 2022. |
| Notice of Allowance from U.S. Appl. No. 17/406,284 dated Jan. 5, 2023. |
| International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/085654 dated Jul. 20, 2020 (9 pages). |
| International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/085655 dated Jul. 21, 2020 (10 pages). |
| International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/085672 dated Jul. 20, 2020 (10 pages). |
| International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/085674 dated Jul. 14, 2020 (12 pages). |
| International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/086111 dated Jul. 22, 2020 (12 pages). |
| International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/089096 dated Aug. 14, 2020 (12 pages). |
| International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/097374 dated Oct. 10, 2020 (10 pages). |
| International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/100573 dated Oct. 13, 2020 (9 pages). |
| Non Final Office Action from U.S. Appl. No. 17/405,236 dated Dec. 17, 2021. |
| Non Final Office Action from U.S. Appl. No. 17/405,212 dated Dec. 24, 2021. |
| Non Final Office Action from U.S. Appl. No. 17/479,192 dated Dec. 24, 2021. |
| Extended European Search Report from European Patent Application No. 20832921.9 dated Aug. 1, 2022 (11 pages). |
| Extended European Search Report from European Patent Application No. 20795818.2 dated Aug. 19, 2022 (14 pages). |
| Non-Final Office Action from U.S. Appl. No. 17/406,284 dated Aug. 30, 2022. |
| Partial Supplementary European Search Report from European Patent Application No. 20791687.5 dated Mar. 30, 2022 (14 pages). |
| International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/078385 dated Jun. 4, 2020 (10 pages). |
| International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/078388 dated May 29, 2020 (10 pages). |
| International Search Report and Written Opinion from International Patent Application No. PCT/CN2020/078393 dated Jun. 5, 2020 (9 pages). |
| Non Final Office Action from U.S. Appl. No. 17/357,093 dated Aug. 19, 2021. |
| Non Final Office Action from U.S. Appl. No. 17/357,166 dated Oct. 5, 2021. |
| Final Office Action from U.S. Appl. No. 17/357,166 dated Jan. 28, 2022. |
| Non Final Office Action from U.S. Appl. No. 17/494,974 dated Jan. 24, 2022. |
| Non Final Office Action from U.S. Appl. No. 17/546,621 dated Apr. 4, 2022. |
| Notice of Allowance from U.S. Appl. No. 17/406,284 dated Jan. 4, 2022. |
| Extended European Search Report from European Patent Application No. 20801570.1 dated May 27, 2022 (15 pages). |
| Extended European Search Report from European Patent Application No. 20792015.8 dated Jun. 15, 2022 (9 pages). |
| Extended European Search Report from European Patent Application No. 20836333.3 dated Jul. 14, 2022 (8 pages). |
| Extended European Search Report from European Patent Application No. 20791687.5 dated Jul. 18, 2022 (16 pages). |
| Non Final Office Action from U.S. Appl. No. 17/357,166 dated Jul. 8, 2022. |
| Final Office Action from U.S. Appl. No. 17/546,621 dated Jul. 26, 2022. |
| High Efficiency Video Coding, Series H: Audiovisual and Multimedia Systems: Infrastructure of Audiovisual Services Coding of Moving Video, ITU-T Telecommunication Standardization Sector of ITU, H.265, Nov. 2019.Rec. ITU-T H.265 | ISO/IEC 23008-2. |
| Rosewarne et al. “High Efficiency Video Coding (HEVC) Test Model 16 (HM 16) Improved Encoder Description Update 7,” Joint Collaborative Team on Video Coding (JCT-VC) ITU-T SG 16 WP3 and ISO/IEC JTC1/SC29/WG11, 25th Meeting, Chengdu, CN, Oct. 14-21, 2016, document JCTVC-Y1002, 2016. |
| Lu et al. “CE12: Mapping Functions (Test CE12-1 and CE12-2),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 13th Meeting, Marrakech, MA, Jan. 9-18, 2019, document JVET-M0427, 2019. |
| Bross et al. “Versatile Video Coding (Draft 4),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 13th Meeting, Marrakech, MA, Jan. 9-18, 2019, document JVET-M1001, 2019. |
| Chen et al. “Algorithm Description for Versatile Video Coding and Test Model 4 (VTM 4),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 13th Meeting, Marrakech, MA, Jan. 9-18, 2019, document JVET-M1002, 2019. |
| Lin et al. “AHG16: Subblock-Based Chroma Residual Scaling,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 14th Meeting: Geneva, CH, Mar. 19-27, 2019, document JVET-N0113, 2019. |
| Wang et al. “AHG17: Signalling of reshaper parameters in APS,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 1114th Meeting: Geneva, CH, Mar. 19-27, 2019, document JVET-N0117, 2019. |
| Pfaff et al. “CE3: Affine Linear Weighted Intra Prediction (CE3-4.1, CE3-4.2),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting, Geneva, CH, Mar. 19-27, 2019, document JVET-N0217, 2019. |
| Lu et al. “AHG16: Simplification of Reshaper Information,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting, Geneva, CH, Mar. 19-27, 2019, document JVET-N0220, 2019. |
| Luo et al. “CE2-Related: Prediction Refinement with Optical Flow for Affine Mode,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting, Geneva, CH, Mar. 19-27, 2019, document JVET-N0236, 2019. |
| Huo et al. “CE3-1.5: CCLM Derived with Four Neighbouring Samples,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 1114th Meeting: Geneva, CH, Mar. 19-27, 2019, document JVET-N0271, 2019. |
| Zhao et al., “On Luma Dependent Chroma Residual Scaling of In-loop Reshaper,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 1114th Meeting: Geneva, CH, Mar. 19-27, 2019, document JVET-N0299, 2019. |
| Francois et al. “Chroma Residual Scaling with Separate Luma/Chroma Tree,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting, Geneva, Mar. 19-27, 2019, document JVET-N0389, 2019. |
| Francois et al. “AHG16/Non-CE3: Study of CCLM Restrictions in Case of Separate Luma/Chroma Tree,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting, Geneva, Mar. 27, 2019, document JVET-N0390, 2019. |
| Ye et al. “On Luma Mapping with Chroma Scaling,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 1114th Meeting: Geneva, CH, Mar. 19-27, 2019, document JVET-N0477, 2019. |
| Francois et al. “Suggested Luma Mapping with Chroma Scaling Modifications in N0220/N0389/N0477,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 1114th Meeting: Geneva, CH, Mar. 19-27, 2019, document JVET-N0806, 2019. |
| Bross et al. “Versatile Video Coding (Draft 5),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting, Geneva, CH, Mar. 19-27, 2019, document JVET-N1001, 2019. |
| Chen et al. “Algorithm description for Versatile Video Coding and Test Model 5 (VTM 5),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 1114th Meeting: Geneva, CH, Mar. 19-27, 2019, document JVET-N1002, 2019. |
| Chen et al. “Description of Core Experiment 2 (CE2): Luma-Chroma Dependency Reduction,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting: Geneva, CH, Mar. 19-27, 2019, document JVET-N1022, 2019. |
| VTM software: https://vcgit.hhi.fraunhofer.de/jvet/VVCSoftware_VTM.git, May 9, 2022. |
| Bross et al., “Versatile Video Coding (Draft 2),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 11th Meeting, Ljubljana, SI, Jul. 10-18, 2018, document JVET-K1001, 2018. |
| Bross et al., “Versatile Video Coding (Draft 3),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting, Macao, CN, Oct. 3-12, 2018, document JVET-L1001, 2018. |
| Chen et al “Algorithm Description of Joint Exploration Test Model 7 (JEM 7),” Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 7th Meeting: Torino, IT, Jul. 13-21, 2017, document JVET-G1001, 2017. |
| Chen et al. “CE4: Separate List for Sub-Block Merge Candidates (Test 4.2.8),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting, Macao, CN, Oct. 3-12, 2018, document JVET-L0369, 2018. |
| Chen et al., “Crosscheck of JVET-L0142 (CE4: Simplification of the Common Base for Affine Merge (Test 4.2.6)),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting, Macao, CN, Oct. 3-12, 2018, document JVET-L0632, 2018. |
| Chen et al. “Description of Core Experiment 2 (CE2): Luma-Chroma Dependency Reduction,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11 14th Meeting: Geneva, CH, Mar. 19-27, 2019, document JVET-M1022, 2019. |
| Chiang et al. “CE10.1.1: Multi-Hypothesis Prediction for Improving AMVP Mode, Skip or Merge Mode, and Intra Mode,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting, Macao, CN, Oct. 3-12, 2018, document JVET-L0100, 2018. |
| Francois et al. “CE12-Related: Block-Based In-Loop Luma Reshaping,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 13th Meeting, Marrakech, MA Jan. 9-18, 2019, document JVET-M0109, 2019. |
| Francois et al. “CE12-Related: In-Loop Luma Reshaping with Approximate Inverse Mapping Function,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 13th Meeting, Marrakech, MA Jan. 9-18, 2019, document JVET-M0640, 2019. |
| Han et al. “CE4.1.3: Affine Motion Compensation Prediction,” Joint Video Exploration Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 11th Meeting, Ljubljana, SI, Jul. 10-18, 2018, document JVET-K0337, 2018. |
| “Information Technology—High Efficiency Coding and Media Delivery in Heterogeneous Environments—Part 2: High Efficiency Video Coding” Apr. 20, 2018, ISO/DIS 23008, 4th Edition. |
| Jeong et al. “CE4 Ultimate Motion Vector Expression (Test 4.5.4),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting, Macao, CN, Oct. 3-12, 2018, document JVET-L0054, 2018. |
| Jung et al. “AHG16/CE3-Related: CCLM Mode Restriction for Increasing Decoder Throughput,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting, Geneva, CH, Mar. 19-27, 2019, document JVET-N0376, 2019. |
| Lee et al. “CE4: Simplification of the Common Base for Affine Merge (Test 4.2.2),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting, Macao, CN, Oct. 3-12, 2018, document JVET-L0142, 2018. |
| Pu et al. “CE12-4: SDR In-Loop Reshaping,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting, Macao, CN, Oct. 3-12, 2018, document JVET-L0246, 2018. |
| Rasch et al. “CE10: Uniform Directional Diffusion Filters for Video Coding,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting, Macao, CN, Oct. 3-12, 2018, document JVET-L0157, 2018. |
| Rusanovskyy et al. “CE14: Test on In-Loop Bilateral Filter From JVET-J0021/JVET-K0384 with Parametrization (CE14.2),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting, Macao, CN, Oct. 3-12, 2018, document JVET-L0406, 2018. |
| Sethuraman, Sriram, “CE9: Results of DMVR Related Tests CE9.2.1 and CE9.2.2,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 13th Meeting, Marrakech, MA, Jan. 9-18, 2019, document JVET-M0147, 2019. |
| Stepin et al. “CE2 Related: Hadamard Transform Domain Filter,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 11th Meeting, Ljubljana, SI, Jul. 10-18, 2018, document JVET-K0068, 2018. |
| Wang et al. “AHG17: On Header Parameter Set (HPS),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 13th Meeting, Marrakech, MA, Jan. 9-18, 2019, document JVET-M0132, 2019. |
| JEM-7.0: https://jvet.hhi.fraunhofer.de/svn/svn_HMJEMSoftware/tags/HM-16.6-JEM-7.0, May 9, 2022. |
| https://vcgit.hhi.fraunhofer.de/jvet/VVCSoftware_VTM/tags/VTM-2.1, May 9, 2022. |
| English Translation of JP2021002780A (Takeshi et al. Video Decoding Device and Video Coding Device), Jan. 7, 2021. |
| Zhao et al. “CE2-Related: CCLM for Dual Tree with 32x32 Latency,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 15th Meeting, Gothenburg, SE, Jul. 3-12, 2019, document JVET-O0196, 2019. |
| Chubach et al. “CE7-Related: Support of Quantization Matrices for VVC,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting: Geneva, CH, Mar. 19-27, 2019, document JVET-N0847, 2019. |
| Chen et al. “Non-CE2: Unification of Chroma Residual Scaling Design,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 15th Meeting, Gothenburg, SE, Jul. 3-12, 2019, document JVET-O1109, 2019. |
| Lu et al. “Non-CE2: Alternative Solutions for Chroma Residual Scaling Factors Derivation for Dual Tree,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 15th Meeting, Gothenburg, SE, Jul. 3-12, 2019, document JVET-O0429, 2019. |
| Chen et al. “CE2-Related: Luma-Chroma Dependency Reduction on Chroma Scaling,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 15th Meeting, Gothenburg, SE, Jul. 3-12, 2019, document JVET-O0627, 2019. |
| Lu et al. “CE12-Related: Universal Low Complexity Reshaper for SDR and HDR Video,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting: Macao, CN, Oct. 3-12, 2018, document JVET-L0247, 2018. |
| Chen et al. “Single Depth Intra Coding Mode in 3D-HEVC,” Proceedings of the 2015 IEEE International Symposium on Circuits and Systems (May 27, 2015), IEEE, pp. 1130-1133, ISBN: 978-1-4799-8391-9, doi: 10.1109/ISCAS.2015.7168837. |
| Document: JVET-N0390r1, Francois, E., et al., “AHG16/non-CE3: Study of CCLM restrictions in case of separate luma/chroma tree,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting: Geneva, CH, Mar. 19-27, 2019, 7 pages. |
| Document: JVET-N0376, Jung, J., et al., “AHG16/CE3-related: CCLM mode restriction for increasing decoder throughput,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting: Geneva, CH, Mar. 19-27, 2019, 6 pages. |
| Document: JVET-M0427-v2, Lu, T., “CE12: Mapping functions (test CE12-1 and CE12-2),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 13th Meeting: Marrakech, MA, Jan. 9-18, 2019, 13 pages. |
| Canadian Office Action from Canadian Application No. 3,135,521 dated Oct. 25, 2023, 8 pages. |
| Singapore Eligibility of Grant from Singapore Application No. 11202110999P dated Sep. 15, 2023, 7 pages. |
| Non-Final Office Action from U.S. Appl. No. 17/873,973 dated Sep. 14, 2023, 62 pages. |
| Final Office Action from U.S. Appl. No. 18/295,568 dated Jul. 18, 2024, 26 pages. |
| Chinese Notice of Allowance from Chinese Patent Application No. 202080045462.9 dated Jul. 29, 2024, 7 pages. |
| Japanese Office Action from Japanese Patent Application No. 2023-117436 dated Jul. 30, 2024, 6 pages. |
| Canadian Office Action from Canadian Patent Application No. 3135521 dated Aug. 21, 2024, 7 pages. |
| Bross B., et al., “Versatile Video Coding (Draft 2),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC29/WG 11, 11th Meeting: Ljubljana, SI, Jul. 10-18, 2018, Document: JVET-K1001-v1, 43 Pages. |
| Bross B., et al., “Versatile Video Coding (Draft 2),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC29/WG 11, 11th Meeting: Ljubljana, SI, Jul. 10-18, 2018, Document: JVET-K1001-v6, 141 Pages. |
| Bross B., et al., “Versatile Video Coding (Draft 3),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting, Macao, CN, Oct. 3-12, 2018, Document: JVET-L1001-v9, 247 Pages. |
| Bross B., et al., “Versatile Video Coding (Draft 5),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting: Geneva, CH, Mar. 19-27, 2019, Document: JVET-N1001-v10, 407 Pages. |
| Bross B., et al., “Versatile Video Coding (Draft 5),” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting: Geneva, CH, Mar. 19-27, 2019, Document: JVET-N1001-v8, 400 Pages. |
| Bross B., et al., “Versatile Video Coding (Draft 6),” The Joint Video Exploration Team of ISO/IEC JTC1/SC29/WG11 and ITU-T SG.16, 15. JVET Meeting, Gothenburg, Jul. 3-12, 2019, Document: JVET-O2001-v2, XP030293932, (Jul. 13, 2019), Retrieved from URL: https://jvet-experts.org/doc_end_user/documents/15_Gothenburg/wg11/JVET-O2001-v2.zip_JVET-O2001-v2.docx. |
| Chen J., et al., “CE2-Related: Luma-Chroma Dependency Reduction on Chroma Scaling,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 15th Meeting: Gothenburg, SE, Jul. 3-12, 2019, Document: JVET-O0627-v1, 7 Pages. |
| Chiang M-S., et al., “CE10.1.1: Multi-Hypothesis Prediction for Improving AMVP Mode, Skip or Merge Mode, and Intra Mode,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 12th Meeting: Macao, CN, Oct. 3-12, 2018, Document: JVET-L0100-v3, 14 Pages. |
| Communication pursuant to Article 94(3) EPC for European Application No. 20770821.5, mailed Feb. 12, 2024, 4 pages. |
| Communication pursuant to Article 94(3) EPC for European Application No. 20836333.3, mailed Jul. 2, 2024, 6 pages. |
| Communication pursuant to Article 94(3) EPC for European Application No. 20839291.9, mailed Apr. 17, 2024, 4 pages. |
| Document: JVET-N0389r1, Francois, E., et al., “Chroma residual scaling with separate luma/chroma tree”, Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC29/WG 11 14th Meeting: Geneva, CH, Mar. 19-27, 2019, 11 pages. |
| Extended European Search Report for European Application No. 20769932.3, mailed Jun. 17, 2022, 8 pages. |
| Extended European Search Report for European Application No. 20770821.5, mailed Jun. 9, 2022, 8 Pages. |
| Final Office Action from U.S. Appl. No. 18/501,497 dated Oct. 1, 2024, 43 pages. |
| Foreign Communication from a Counterpart Application, Office Action for Canadian Application No. 3,135,968, mailed Jun. 28, 2023, 4 Pages. |
| Foreign Communication from a Counterpart Application, Office Action for Japanese Application No. 2021-559969, mailed Nov. 15, 2022, 20 Pages. |
| Foreign Communication from a related Counterpart Application, Office Action for Japanese Application No. 2021-564420, mailed Nov. 22, 2022, 14 Pages. |
| International Preliminary Report on Patentability for International Application No. PCT/CN2020/078385, mailed Sep. 23, 2021, 7 pages. |
| International Preliminary Report on Patentability for International Application No. PCT/CN2020/078388, mailed Sep. 23, 2024, 6 pages. |
| International Preliminary Report on Patentability for International Application No. PCT/CN2020/078393, mailed Sep. 23, 2021, 6 pages. |
| International Preliminary Report on Patentability for International Application No. PCT/CN2020/085654, mailed Oct. 28, 2021, 6 pages. |
| International Preliminary Report on Patentability for International Application No. PCT/CN2020/085655, mailed Oct. 28, 2021, 7 pages. |
| International Preliminary Report on Patentability for International Application No. PCT/CN2020/085672, mailed Oct. 28, 2021, 7 pages. |
| International Preliminary Report on Patentability for International Application No. PCT/CN2020/085674, mailed Oct. 28, 2021, 7 pages. |
| International Preliminary Report on Patentability for International Application No. PCT/CN2020/086111, mailed Nov. 4, 2021, 7 pages. |
| International Preliminary Report on Patentability for International Application No. PCT/CN2020/089096, mailed Nov. 18, 2021, 7 pages. |
| International Preliminary Report on Patentability for International Application No. PCT/CN2020/097374, mailed Jan. 6, 2022, 6 pages. |
| International Preliminary Report on Patentability for International Application No. PCT/CN2020/100573, mailed Jan. 20, 2022, 6 pages. |
| Invitation to Respond to Written Opinion for Singaporean Application No. 11202109150Q, dated Aug. 22, 2022, 9 pages. |
| Invitation to Respond to Written Opinion for Singaporean Application No. 11202110713X, dated Jan. 2, 2024, 14 pages. |
| Invitation to Respond to Written Opinion for Singaporean Application No. 11202111000X, dated Jan. 2, 2024, 9 pages. |
| ITU-T: “Series H: Audiovisual and Multimedia Systems Infrastructure of Audiovisual Services—Coding of moving Video,” Versatile Video Coding, Recommendation ITU-T H.266, Apr. 2022, 536 Pages, [Retrieved on Feb. 28, 2023] Retrieved from URL: https://handle.itu.int/11.1002/1000/14336. |
| Japanese Office Action from Japanese Patent Application No. 2021-559969 dated Dec. 17, 2024, 35 pages. |
| JVET-K0556-v2, Hsu, C., et al., “CE1-related: Constraint for binary and ternary partitions,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 11th Meeting: Ljubljana, SI, Jul. 10-18, 2018, 3 pages. |
| Lin Z-Y., et al., “AHG16: Subblock-Based Chroma Residual Scaling,” Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC 29/WG 11, 14th Meeting: Geneva, CH, Mar. 19-27, 2019, Document: JVET-N0113-v2, 6 Pages. |
| Mexican Office Action from Mexican Patent Application No. MX/a/2021/012470 dated Aug. 23, 2024, 6 pages. |
| Non-Final Office Action for U.S. Appl. No. 17/357,166, mailed Aug. 8, 2022, 11 Pages. |
| Notice of Allowance for U.S. Appl. No. 17/357,166, mailed Jul. 20, 2023, 18 Pages. |
| Notice of Preliminary Rejection for Korean Application No. 10-2021-7027093, mailed Jul. 10, 2024, 7 pages. |
| Office Action for Canadian Application No. 3135973, mailed on Nov. 21, 2023, 5 pages. |
| Office Action for Japanese Patent Application No. 2021-559969, mailed on May 23, 2023, 10 pages. |
| Office Action for Japanese Patent Application No. 2023-158409, mailed on Jun. 18, 2024, 8 pages. |
| Request for the Submission of an Opinion for Korean Application No. 10-2021-7040624, mailed Jul. 15, 2024, 12 pages. |
| Substantive Examination Adverse Report for Malaysian Application No. PI2021005977, mailed on Aug. 27, 2024, 4 pages. |
| Written Decision on Registration for Korean Application No. 10-2021-7027093, mailed Dec. 2, 2024, 4 pages. |
| Written Decision on Registration for Korean Application No. 10-2021-7040624, mailed Dec. 2, 2024, 7 pages. |
| Document: JVET-N0389r2, Francois, E., et al., “Chroma residual scaling with separate luma/chroma tree”, Joint Video Experts Team (JVET) of ITU-T SG 16 WP 3 and ISO/IEC JTC 1/SC29/WG 11 14th Meeting: Geneva, CH, Mar. 19-27, 2019, 20 pages. |
| Japanese Office Action from Japanese Patent Application No. 2023-16044 dated Jan. 19, 2025, 35 pages. |
| Number | Date | Country |
|---|---|---|
| 20230344990 A1 | Oct. 2023 | US |
| | Number | Date | Country |
|---|---|---|---|
| Parent | 17740753 | May 2022 | US |
| Child | 18339063 | | US |
| Parent | 17406284 | Aug. 2021 | US |
| Child | 17740753 | | US |
| Parent | PCT/CN2020/086111 | Apr. 2020 | WO |
| Child | 17406284 | | US |