The present disclosure generally relates to video processing, and more particularly, to methods and apparatuses for motion template-matching-based motion refinement.
A video is a set of static pictures (or “frames”) capturing visual information. To reduce the storage memory and the transmission bandwidth, a video can be compressed before storage or transmission and decompressed before display. The compression process is usually referred to as encoding and the decompression process is usually referred to as decoding. There are various video coding formats which use standardized video coding technologies, most commonly based on prediction, transform, quantization, entropy coding, and in-loop filtering. Video coding standards, such as the High Efficiency Video Coding (HEVC/H.265) standard, the Versatile Video Coding (VVC/H.266) standard, and the AVS standards, which specify the specific video coding formats, are developed by standardization organizations. With more and more advanced video coding technologies being adopted in the video standards, the coding efficiency of the new video coding standards gets higher and higher.
Embodiments of the present disclosure provide methods and apparatuses for motion template-matching-based motion refinement.
According to some exemplary embodiments, there is provided a method of encoding video content. The method includes: dividing a target coding block into a plurality of subblocks; determining a plurality of sub-templates based on the plurality of subblocks; and refining motion vectors of the plurality of subblocks based on the plurality of sub-templates.
According to some exemplary embodiments, there is provided a method of decoding a bitstream associated with video content. The method includes: decoding the bitstream to reconstruct a target coding block, wherein the target coding block is divided into a plurality of subblocks; determining a plurality of sub-templates based on the plurality of subblocks; and refining motion vectors of the plurality of subblocks based on the plurality of sub-templates.
According to some exemplary embodiments, there is provided a method of storing a bitstream associated with video content. The method includes: dividing a target coding block into a plurality of subblocks; determining a plurality of sub-templates based on the plurality of subblocks; refining motion vectors of the plurality of subblocks based on the plurality of sub-templates; generating the bitstream based on the refined motion vectors; and storing the bitstream in a non-transitory computer-readable storage medium.
Embodiments and various aspects of the present disclosure are illustrated in the following detailed description and the accompanying figures. Various features shown in the figures are not drawn to scale.
Among other things, the figures illustrate template-based motion refinement, according to some embodiments of the present disclosure.
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. The following description refers to the accompanying drawings in which the same numbers in different drawings represent the same or similar elements unless otherwise represented. The implementations set forth in the following description of exemplary embodiments do not represent all implementations consistent with the invention. Instead, they are merely examples of apparatuses and methods consistent with aspects related to the invention as recited in the appended claims. Particular aspects of the present disclosure are described in greater detail below. The terms and definitions provided herein control, if in conflict with terms and/or definitions incorporated by reference.
Video coding systems are often used to compress digital video signals, for instance to reduce storage space consumed or to reduce transmission bandwidth consumption associated with such signals. With high-definition (HD) videos (e.g., having a resolution of 1920×1080 pixels) gaining popularity in various applications of video compression, such as online video streaming, video conferencing, or video monitoring, there is a continuous need to develop video coding tools that can increase the compression efficiency of video data.
For example, video monitoring applications are increasingly and extensively used in many application scenarios (e.g., security, traffic, environment monitoring, or the like), and the numbers and resolutions of the monitoring devices keep growing rapidly. Many video monitoring application scenarios prefer to provide HD videos to users, because HD videos have more pixels per frame and thus capture more information. However, an HD video bitstream can have a high bitrate that demands high bandwidth for transmission and large space for storage. For example, a monitoring video stream having an average 1920×1080 resolution can require a bandwidth as high as 4 Mbps for real-time transmission. Also, video monitoring generally operates continuously, 24 hours a day and 7 days a week, which can greatly challenge a storage system if the video data is to be stored. The demand for high bandwidth and large storage of HD videos has therefore become a major limitation to their large-scale deployment in video monitoring.
A video is a set of static pictures (or “frames”) arranged in a temporal sequence to store visual information. A video capture device (e.g., a camera) can be used to capture and store those pictures in a temporal sequence, and a video playback device (e.g., a television, a computer, a smartphone, a tablet computer, a video player, or any end-user terminal with a function of display) can be used to display such pictures in the temporal sequence. Also, in some applications, a video capturing device can transmit the captured video to the video playback device (e.g., a computer with a monitor) in real-time, such as for monitoring, conferencing, or live broadcasting.
For reducing the storage space and the transmission bandwidth needed by such applications, the video can be compressed before storage and transmission and decompressed before the display. The compression and decompression can be implemented by software executed by a processor (e.g., a processor of a generic computer) or specialized hardware. The module for compression is generally referred to as an “encoder,” and the module for decompression is generally referred to as a “decoder.” The encoder and decoder can be collectively referred to as a “codec.” The encoder and decoder can be implemented as any of a variety of suitable hardware, software, or a combination thereof. For example, the hardware implementation of the encoder and decoder can include circuitry, such as one or more microprocessors, digital signal processors (DSPs), application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), discrete logic, or any combinations thereof. The software implementation of the encoder and decoder can include program codes, computer-executable instructions, firmware, or any suitable computer-implemented algorithm or process fixed in a computer-readable medium. Video compression and decompression can be implemented by various algorithms or standards, such as MPEG-1, MPEG-2, MPEG-4, H.26x series, or the like. In some applications, the codec can decompress the video from a first coding standard and re-compress the decompressed video using a second coding standard, in which case the codec can be referred to as a “transcoder.”
The video encoding process can identify and keep useful information that can be used to reconstruct a picture and disregard unimportant information for the reconstruction. If the disregarded, unimportant information cannot be fully reconstructed, such an encoding process can be referred to as “lossy.” Otherwise, it can be referred to as “lossless.” Most encoding processes are lossy, which is a tradeoff to reduce the needed storage space and the transmission bandwidth.
The useful information of a picture being encoded (referred to as a “current picture”) includes changes with respect to a reference picture (e.g., a picture previously encoded and reconstructed). Such changes can include position changes, luminosity changes, or color changes of the pixels, among which the position changes are of the most concern. Position changes of a group of pixels that represent an object can reflect the motion of the object between the reference picture and the current picture.
A picture coded without referencing another picture (i.e., it is its own reference picture) is referred to as an “I-picture.” A picture coded using a previous picture as a reference picture is referred to as a “P-picture.” A picture coded using both a previous picture and a future picture as reference pictures (i.e., the reference is “bi-directional”) is referred to as a “B-picture.”
As previously mentioned, video monitoring that uses HD videos faces challenges of demands of high bandwidth and large storage. For addressing such challenges, the bitrate of the encoded video can be reduced. Among the I-, P-, and B-pictures, I-pictures have the highest bitrate. Because the backgrounds of most monitoring videos are nearly static, one way to reduce the overall bitrate of the encoded video can be using fewer I-pictures for video encoding.
However, the improvement of using fewer I-pictures can be trivial because the I-pictures are typically not dominant in the encoded video. For example, in a typical video bitstream, the ratio of I-, B-, and P-pictures can be 1:20:9, in which the I-pictures can account for less than 10% of the total bitrate. In other words, in such an example, even if all I-pictures are removed, the reduced bitrate can be no more than 10%.
As shown in
Typically, video codecs do not encode or decode an entire picture at one time due to the computing complexity of such tasks. Rather, they can split the picture into basic segments, and encode or decode the picture segment by segment. Such basic segments are referred to as basic processing units (“BPUs”) in this disclosure. For example, structure 110 in
The basic processing units can be logical units, which can include a group of different types of video data stored in a computer memory (e.g., in a video frame buffer). For example, a basic processing unit of a color picture can include a luma component (Y) representing achromatic brightness information, one or more chroma components (e.g., Cb and Cr) representing color information, and associated syntax elements, in which the luma and chroma components can have the same size as the basic processing unit. The luma and chroma components can be referred to as “coding tree blocks” (“CTBs”) in some video coding standards (e.g., H.265/HEVC or H.266/VVC). Any operation performed to a basic processing unit can be repeatedly performed to each of its luma and chroma components.
Video coding has multiple stages of operations, examples of which will be detailed in
For example, at a mode decision stage (an example of which will be detailed in
For another example, at a prediction stage (an example of which will be detailed in
For another example, at a transform stage (an example of which will be detailed in
In structure 110 of
In some implementations, to provide the capability of parallel processing and error resilience to video encoding and decoding, a picture can be divided into regions for processing, such that, for a region of the picture, the encoding or decoding process can depend on no information from any other region of the picture. In other words, each region of the picture can be processed independently. By doing so, the codec can process different regions of a picture in parallel, thus increasing the coding efficiency. Also, when data of a region is corrupted in the processing or lost in network transmission, the codec can correctly encode or decode other regions of the same picture without reliance on the corrupted or lost data, thus providing the capability of error resilience. In some video coding standards, a picture can be divided into different types of regions. For example, H.265/HEVC and H.266/VVC provide two types of regions: “slices” and “tiles.” It should also be noted that different pictures of video sequence 100 can have different partition schemes for dividing a picture into regions.
For example, in
In
The encoder can perform process 200A iteratively to encode each original BPU of the original picture (in the forward path) and generate predicted reference 224 for encoding the next original BPU of the original picture (in the reconstruction path). After encoding all original BPUs of the original picture, the encoder can proceed to encode the next picture in video sequence 202.
Referring to process 200A, the encoder can receive video sequence 202 generated by a video capturing device (e.g., a camera). The term “receive” used herein can refer to receiving, inputting, acquiring, retrieving, obtaining, reading, accessing, or any action in any manner for inputting data.
At prediction stage 204, at a current iteration, the encoder can receive an original BPU and prediction reference 224, and perform a prediction operation to generate prediction data 206 and predicted BPU 208. Prediction reference 224 can be generated from the reconstruction path of the previous iteration of process 200A. The purpose of prediction stage 204 is to reduce information redundancy by extracting prediction data 206 that can be used to reconstruct the original BPU as predicted BPU 208 from prediction data 206 and prediction reference 224.
Ideally, predicted BPU 208 can be identical to the original BPU. However, due to non-ideal prediction and reconstruction operations, predicted BPU 208 is generally slightly different from the original BPU. For recording such differences, after generating predicted BPU 208, the encoder can subtract it from the original BPU to generate residual BPU 210. For example, the encoder can subtract values (e.g., greyscale values or RGB values) of pixels of predicted BPU 208 from values of corresponding pixels of the original BPU. Each pixel of residual BPU 210 can have a residual value as a result of such subtraction between the corresponding pixels of the original BPU and predicted BPU 208. Compared with the original BPU, prediction data 206 and residual BPU 210 can have fewer bits, but they can be used to reconstruct the original BPU without significant quality deterioration. Thus, the original BPU is compressed.
To further compress residual BPU 210, at transform stage 212, the encoder can reduce spatial redundancy of residual BPU 210 by decomposing it into a set of two-dimensional “base patterns,” each base pattern being associated with a “transform coefficient.” The base patterns can have the same size (e.g., the size of residual BPU 210). Each base pattern can represent a variation frequency (e.g., frequency of brightness variation) component of residual BPU 210. None of the base patterns can be reproduced from any combinations (e.g., linear combinations) of any other base patterns. In other words, the decomposition can decompose variations of residual BPU 210 into a frequency domain. Such a decomposition is analogous to a discrete Fourier transform of a function, in which the base patterns are analogous to the base functions (e.g., trigonometry functions) of the discrete Fourier transform, and the transform coefficients are analogous to the coefficients associated with the base functions.
Different transform algorithms can use different base patterns. Various transform algorithms can be used at transform stage 212, such as, for example, a discrete cosine transform, a discrete sine transform, or the like. The transform at transform stage 212 is invertible. That is, the encoder can restore residual BPU 210 by an inverse operation of the transform (referred to as an “inverse transform”). For example, to restore a pixel of residual BPU 210, the inverse transform can be multiplying values of corresponding pixels of the base patterns by respective associated coefficients and adding the products to produce a weighted sum. For a video coding standard, both the encoder and decoder can use the same transform algorithm (thus the same base patterns). Thus, the encoder can record only the transform coefficients, from which the decoder can reconstruct residual BPU 210 without receiving the base patterns from the encoder. Compared with residual BPU 210, the transform coefficients can have fewer bits, but they can be used to reconstruct residual BPU 210 without significant quality deterioration. Thus, residual BPU 210 is further compressed.
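By way of illustration, the following Python sketch (hypothetical helper names, not part of any standard) builds a set of 2-D DCT base patterns, projects a residual block onto them to obtain transform coefficients, and restores the block by the weighted sum of base patterns described above; the round trip is lossless, showing that the transform itself is invertible.

```python
import numpy as np

def dct_base_patterns(n):
    """Build the n x n set of 2-D DCT-II base patterns (each of size n x n)."""
    k = np.arange(n)
    # 1-D orthonormal DCT-II basis vectors
    basis = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0, :] = np.sqrt(1.0 / n)
    # Outer products of the 1-D basis vectors give the 2-D base patterns
    return np.einsum('ui,vj->uvij', basis, basis)

def forward_transform(residual, patterns):
    """Project the residual onto each base pattern to obtain its transform coefficient."""
    return np.einsum('uvij,ij->uv', patterns, residual)

def inverse_transform(coeffs, patterns):
    """Weighted sum: multiply each base pattern by its coefficient and add the products."""
    return np.einsum('uv,uvij->ij', coeffs, patterns)

if __name__ == "__main__":
    n = 4
    patterns = dct_base_patterns(n)
    residual = np.random.randint(-16, 16, size=(n, n)).astype(float)
    coeffs = forward_transform(residual, patterns)
    restored = inverse_transform(coeffs, patterns)
    print(np.allclose(residual, restored))  # True: the transform is invertible
```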
The encoder can further compress the transform coefficients at quantization stage 214. In the transform process, different base patterns can represent different variation frequencies (e.g., brightness variation frequencies). Because human eyes are generally better at recognizing low-frequency variation, the encoder can disregard information of high-frequency variation without causing significant quality deterioration in decoding. For example, at quantization stage 214, the encoder can generate quantized transform coefficients 216 by dividing each transform coefficient by an integer value (referred to as a “quantization parameter”) and rounding the quotient to its nearest integer. After such an operation, some transform coefficients of the high-frequency base patterns can be converted to zero, and the transform coefficients of the low-frequency base patterns can be converted to smaller integers. The encoder can disregard the zero-value quantized transform coefficients 216, by which the transform coefficients are further compressed. The quantization process is also invertible, in which quantized transform coefficients 216 can be reconstructed to the transform coefficients in an inverse operation of the quantization (referred to as “inverse quantization”).
Because the encoder disregards the remainders of such divisions in the rounding operation, quantization stage 214 can be lossy. Typically, quantization stage 214 can contribute the most information loss in process 200A. The larger the information loss is, the fewer bits the quantized transform coefficients 216 can need. For obtaining different levels of information loss, the encoder can use different values of the quantization parameter or any other parameter of the quantization process.
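A minimal sketch of the quantization just described, assuming a single scalar quantization parameter (real codecs derive scaling from the QP and may use quantization matrices); the rounding step discards the remainders, which is where the loss occurs.

```python
import numpy as np

def quantize(coeffs, qp):
    """Divide each transform coefficient by the quantization parameter and round."""
    return np.round(coeffs / qp).astype(int)

def inverse_quantize(levels, qp):
    """Approximate reconstruction; the rounding remainders are lost, so this is lossy."""
    return levels * qp

if __name__ == "__main__":
    coeffs = np.array([[310.0, 42.0, -7.0, 1.5],
                       [ 55.0, -9.0,  2.0, 0.4],
                       [ -6.0,  3.0, -0.8, 0.1],
                       [  2.0, -0.5,  0.2, 0.0]])
    qp = 16
    levels = quantize(coeffs, qp)        # high-frequency coefficients become zero
    rec = inverse_quantize(levels, qp)
    print(levels)
    print(np.max(np.abs(rec - coeffs)))  # reconstruction error is bounded by qp / 2
```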
At binary coding stage 226, the encoder can encode prediction data 206 and quantized transform coefficients 216 using a binary coding technique, such as, for example, entropy coding, variable length coding, arithmetic coding, Huffman coding, context-adaptive binary arithmetic coding, or any other lossless or lossy compression algorithm. In some embodiments, besides prediction data 206 and quantized transform coefficients 216, the encoder can encode other information at binary coding stage 226, such as, for example, a prediction mode used at prediction stage 204, parameters of the prediction operation, a transform type at transform stage 212, parameters of the quantization process (e.g., quantization parameters), an encoder control parameter (e.g., a bitrate control parameter), or the like. The encoder can use the output data of binary coding stage 226 to generate video bitstream 228. In some embodiments, video bitstream 228 can be further packetized for network transmission.
Referring to the reconstruction path of process 200A, at inverse quantization stage 218, the encoder can perform inverse quantization on quantized transform coefficients 216 to generate reconstructed transform coefficients. At inverse transform stage 220, the encoder can generate reconstructed residual BPU 222 based on the reconstructed transform coefficients. The encoder can add reconstructed residual BPU 222 to predicted BPU 208 to generate prediction reference 224 that is to be used in the next iteration of process 200A.
It should be noted that other variations of the process 200A can be used to encode video sequence 202. In some embodiments, stages of process 200A can be performed by the encoder in different orders. In some embodiments, one or more stages of process 200A can be combined into a single stage. In some embodiments, a single stage of process 200A can be divided into multiple stages. For example, transform stage 212 and quantization stage 214 can be combined into a single stage. In some embodiments, process 200A can include additional stages. In some embodiments, process 200A can omit one or more stages in
Generally, prediction techniques can be categorized into two types: spatial prediction and temporal prediction. Spatial prediction (e.g., an intra-picture prediction or “intra prediction”) can use pixels from one or more already coded neighboring BPUs in the same picture to predict the current BPU. That is, prediction reference 224 in the spatial prediction can include the neighboring BPUs. The spatial prediction can reduce the inherent spatial redundancy of the picture. Temporal prediction (e.g., an inter-picture prediction or “inter prediction”) can use regions from one or more already coded pictures to predict the current BPU. That is, prediction reference 224 in the temporal prediction can include the coded pictures. The temporal prediction can reduce the inherent temporal redundancy of the pictures.
Referring to process 200B, in the forward path, the encoder performs the prediction operation at spatial prediction stage 2042 and temporal prediction stage 2044. For example, at spatial prediction stage 2042, the encoder can perform the intra prediction. For an original BPU of a picture being encoded, prediction reference 224 can include one or more neighboring BPUs that have been encoded (in the forward path) and reconstructed (in the reconstructed path) in the same picture. The encoder can generate predicted BPU 208 by extrapolating the neighboring BPUs. The extrapolation technique can include, for example, a linear extrapolation or interpolation, a polynomial extrapolation or interpolation, or the like. In some embodiments, the encoder can perform the extrapolation at the pixel level, such as by extrapolating values of corresponding pixels for each pixel of predicted BPU 208. The neighboring BPUs used for extrapolation can be located with respect to the original BPU from various directions, such as in a vertical direction (e.g., on top of the original BPU), a horizontal direction (e.g., to the left of the original BPU), a diagonal direction (e.g., to the down-left, down-right, up-left, or up-right of the original BPU), or any direction defined in the used video coding standard. For the intra prediction, prediction data 206 can include, for example, locations (e.g., coordinates) of the used neighboring BPUs, sizes of the used neighboring BPUs, parameters of the extrapolation, a direction of the used neighboring BPUs with respect to the original BPU, or the like.
For another example, at temporal prediction stage 2044, the encoder can perform the inter prediction. For an original BPU of a current picture, prediction reference 224 can include one or more pictures (referred to as “reference pictures”) that have been encoded (in the forward path) and reconstructed (in the reconstructed path). In some embodiments, a reference picture can be encoded and reconstructed BPU by BPU. For example, the encoder can add reconstructed residual BPU 222 to predicted BPU 208 to generate a reconstructed BPU. When all reconstructed BPUs of the same picture are generated, the encoder can generate a reconstructed picture as a reference picture. The encoder can perform an operation of “motion estimation” to search for a matching region in a scope (referred to as a “search window”) of the reference picture. The location of the search window in the reference picture can be determined based on the location of the original BPU in the current picture. For example, the search window can be centered at a location having the same coordinates in the reference picture as the original BPU in the current picture and can be extended out for a predetermined distance. When the encoder identifies (e.g., by using a pel-recursive algorithm, a block-matching algorithm, or the like) a region similar to the original BPU in the search window, the encoder can determine such a region as the matching region. The matching region can have different dimensions (e.g., being smaller than, equal to, larger than, or in a different shape) from the original BPU. Because the reference picture and the current picture are temporally separated in the timeline (e.g., as shown in
The motion estimation can be used to identify various types of motions, such as, for example, translations, rotations, zooming, or the like. For inter prediction, prediction data 206 can include, for example, locations (e.g., coordinates) of the matching region, the motion vectors associated with the matching region, the number of reference pictures, weights associated with the reference pictures, or the like.
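As a concrete illustration of motion estimation, the following sketch performs a full search over a square search window centered at the collocated position and returns the motion vector with the smallest SAD; the function name and window handling are illustrative rather than taken from any standard.

```python
import numpy as np

def motion_estimate(cur_block, ref_pic, block_pos, search_range):
    """Full-search block matching: scan a search window centered at the collocated
    position in the reference picture and return the offset with the minimum SAD."""
    h, w = cur_block.shape
    y0, x0 = block_pos
    best_sad, best_mv = None, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            y, x = y0 + dy, x0 + dx
            if y < 0 or x < 0 or y + h > ref_pic.shape[0] or x + w > ref_pic.shape[1]:
                continue  # candidate region falls outside the reference picture
            cand = ref_pic[y:y + h, x:x + w]
            sad = np.abs(cur_block.astype(int) - cand.astype(int)).sum()
            if best_sad is None or sad < best_sad:
                best_sad, best_mv = sad, (dy, dx)
    return best_mv, best_sad

if __name__ == "__main__":
    ref = np.random.randint(0, 256, size=(64, 64))
    cur = ref[10:18, 13:21]                      # a block displaced by (2, 5) from (8, 8)
    mv, sad = motion_estimate(cur, ref, (8, 8), search_range=8)
    print(mv, sad)                               # expected: (2, 5) with SAD 0
```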
For generating predicted BPU 208, the encoder can perform an operation of “motion compensation.” The motion compensation can be used to reconstruct predicted BPU 208 based on prediction data 206 (e.g., the motion vector) and prediction reference 224. For example, the encoder can move the matching region of the reference picture according to the motion vector, by which the encoder can predict the original BPU of the current picture. When multiple reference pictures are used (e.g., as picture 106 in
In some embodiments, the inter prediction can be unidirectional or bidirectional. Unidirectional inter predictions can use one or more reference pictures in the same temporal direction with respect to the current picture. For example, picture 104 in
Still referring to the forward path of process 200B, after spatial prediction stage 2042 and temporal prediction stage 2044, at mode decision stage 230, the encoder can select a prediction mode (e.g., one of the intra prediction or the inter prediction) for the current iteration of process 200B. For example, the encoder can perform a rate-distortion optimization technique, in which the encoder can select a prediction mode to minimize a value of a cost function depending on a bit rate of a candidate prediction mode and the distortion of the reconstructed reference picture under the candidate prediction mode. Depending on the selected prediction mode, the encoder can generate the corresponding predicted BPU 208 and prediction data 206.
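The rate-distortion decision described above can be sketched as follows; the candidate structure and lambda value are hypothetical, and a real encoder would measure distortion and bit cost from an actual trial encoding of each candidate mode.

```python
def select_prediction_mode(candidates, lam):
    """Rate-distortion optimization: pick the candidate prediction mode that minimizes
    distortion + lambda * rate, where rate is the bit cost of the candidate mode."""
    def rd_cost(c):
        return c["distortion"] + lam * c["rate"]
    return min(candidates, key=rd_cost)

if __name__ == "__main__":
    candidates = [
        {"mode": "intra", "distortion": 1500.0, "rate": 90.0},
        {"mode": "inter", "distortion": 1200.0, "rate": 140.0},
    ]
    print(select_prediction_mode(candidates, lam=4.0)["mode"])  # "inter" (1760 < 1860)
```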
In the reconstruction path of process 200B, if intra prediction mode has been selected in the forward path, after generating prediction reference 224 (e.g., the current BPU that has been encoded and reconstructed in the current picture), the encoder can directly feed prediction reference 224 to spatial prediction stage 2042 for later usage (e.g., for extrapolation of a next BPU of the current picture). If the inter prediction mode has been selected in the forward path, after generating prediction reference 224 (e.g., the current picture in which all BPUs have been encoded and reconstructed), the encoder can feed prediction reference 224 to loop filter stage 232, at which the encoder can apply a loop filter to prediction reference 224 to reduce or eliminate distortion (e.g., blocking artifacts) introduced by the inter prediction. The encoder can apply various loop filter techniques at loop filter stage 232, such as, for example, deblocking, sample adaptive offsets, adaptive loop filters, or the like. The loop-filtered reference picture can be stored in buffer 234 (or “decoded picture buffer”) for later use (e.g., to be used as an inter-prediction reference picture for a future picture of video sequence 202). The encoder can store one or more reference pictures in buffer 234 to be used at temporal prediction stage 2044. In some embodiments, the encoder can encode parameters of the loop filter (e.g., a loop filter strength) at binary coding stage 226, along with quantized transform coefficients 216, prediction data 206, and other information.
In
The decoder can perform process 300A iteratively to decode each encoded BPU of the encoded picture and generate prediction reference 224 for decoding the next encoded BPU of the encoded picture. After decoding all encoded BPUs of the encoded picture, the decoder can output the picture to video stream 304 for display and proceed to decode the next encoded picture in video bitstream 228.
At binary decoding stage 302, the decoder can perform an inverse operation of the binary coding technique used by the encoder (e.g., entropy coding, variable length coding, arithmetic coding, Huffman coding, context-adaptive binary arithmetic coding, or any other lossless compression algorithm). In some embodiments, besides prediction data 206 and quantized transform coefficients 216, the decoder can decode other information at binary decoding stage 302, such as, for example, a prediction mode, parameters of the prediction operation, a transform type, parameters of the quantization process (e.g., quantization parameters), an encoder control parameter (e.g., a bitrate control parameter), or the like. In some embodiments, if video bitstream 228 is transmitted over a network in packets, the decoder can depacketize video bitstream 228 before feeding it to binary decoding stage 302.
In process 300B, for an encoded basic processing unit (referred to as a “current BPU”) of an encoded picture (referred to as a “current picture”) that is being decoded, prediction data 206 decoded from binary decoding stage 302 by the decoder can include various types of data, depending on what prediction mode was used to encode the current BPU by the encoder. For example, if intra prediction was used by the encoder to encode the current BPU, prediction data 206 can include a prediction mode indicator (e.g., a flag value) indicative of the intra prediction, parameters of the intra prediction operation, or the like. The parameters of the intra prediction operation can include, for example, locations (e.g., coordinates) of one or more neighboring BPUs used as a reference, sizes of the neighboring BPUs, parameters of extrapolation, a direction of the neighboring BPUs with respect to the original BPU, or the like. For another example, if inter prediction was used by the encoder to encode the current BPU, prediction data 206 can include a prediction mode indicator (e.g., a flag value) indicative of the inter prediction, parameters of the inter prediction operation, or the like. The parameters of the inter prediction operation can include, for example, the number of reference pictures associated with the current BPU, weights respectively associated with the reference pictures, locations (e.g., coordinates) of one or more matching regions in the respective reference pictures, one or more motion vectors respectively associated with the matching regions, or the like.
Based on the prediction mode indicator, the decoder can decide whether to perform a spatial prediction (e.g., the intra prediction) at spatial prediction stage 2042 or a temporal prediction (e.g., the inter prediction) at temporal prediction stage 2044. The details of performing such spatial prediction or temporal prediction are described in
In process 300B, the decoder can feed prediction reference 224 to spatial prediction stage 2042 or temporal prediction stage 2044 for performing a prediction operation in the next iteration of process 300B. For example, if the current BPU is decoded using the intra prediction at spatial prediction stage 2042, after generating prediction reference 224 (e.g., the decoded current BPU), the decoder can directly feed prediction reference 224 to spatial prediction stage 2042 for later usage (e.g., for extrapolation of a next BPU of the current picture). If the current BPU is decoded using the inter prediction at temporal prediction stage 2044, after generating prediction reference 224 (e.g., a reference picture in which all BPUs have been decoded), the decoder can feed prediction reference 224 to loop filter stage 232 to reduce or eliminate distortion (e.g., blocking artifacts). The decoder can apply a loop filter to prediction reference 224, in a way as described in
Apparatus 400 can also include memory 404 configured to store data (e.g., a set of instructions, computer codes, intermediate data, or the like). For example, as shown in
Bus 410 can be a communication device that transfers data between components inside apparatus 400, such as an internal bus (e.g., a CPU-memory bus), an external bus (e.g., a universal serial bus port, a peripheral component interconnect express port), or the like.
For ease of explanation without causing ambiguity, processor 402 and other data processing circuits are collectively referred to as a “data processing circuit” in this disclosure. The data processing circuit can be implemented entirely as hardware, or as a combination of software, hardware, or firmware. In addition, the data processing circuit can be a single independent module or can be combined entirely or partially into any other component of apparatus 400.
Apparatus 400 can further include network interface 406 to provide wired or wireless communication with a network (e.g., the Internet, an intranet, a local area network, a mobile communications network, or the like). In some embodiments, network interface 406 can include any combination of any number of a network interface controller (NIC), a radio frequency (RF) module, a transponder, a transceiver, a modem, a router, a gateway, a wired network adapter, a wireless network adapter, a Bluetooth adapter, an infrared adapter, a near-field communication (“NFC”) adapter, a cellular network chip, or the like.
In some embodiments, optionally, apparatus 400 can further include peripheral interface 408 to provide a connection to one or more peripheral devices. As shown in
It should be noted that video codecs (e.g., a codec performing process 200A, 200B, 300A, or 300B) can be implemented as any combination of any software or hardware modules in apparatus 400. For example, some or all stages of process 200A, 200B, 300A, or 300B can be implemented as one or more software modules of apparatus 400, such as program instructions that can be loaded into memory 404. For another example, some or all stages of process 200A, 200B, 300A, or 300B can be implemented as one or more hardware modules of apparatus 400, such as a specialized data processing circuit (e.g., an FPGA, an ASIC, an NPU, or the like).
The present disclosure provides methods for refining the motion information (e.g., motion vectors) used in the above-described encoding or decoding process 200A, 200B, 300A, or 300B.
Next, decoder-side motion vector refinement (DMVR) is described. VVC adopts a bilateral-matching (BM) based decoder side motion vector refinement in bi-prediction operation to increase the accuracy of the MVs of the merge mode. In DMVR, refined MVs are searched around the initial MVs in the reference picture list L0 and reference picture list L1. The BM method calculates the distortion between the two candidate blocks in the reference picture list L0 and reference picture list L1. As illustrated in
In VVC, the application of DMVR is restricted and is only applied for the CUs that are coded with some or all of the following modes and features:
The refined MV derived by the DMVR process is used to generate the inter prediction samples and is also used in temporal motion vector prediction for future picture coding, while the original MV is used in the deblocking process and in spatial motion vector prediction for future CU coding.
The additional features of DMVR are mentioned in the following sub-clauses.
In DMVR, the search points surround the initial MV and the MV offsets obey the MV difference mirroring rule. In other words, any points that are checked by DMVR, denoted by a candidate MV pair (MV0, MV1), obey the following two equations:
where MV_offset represents the refinement offset between the initial MV and the refined MV in one of the reference pictures. The refinement search range is two integer luma samples from the initial MV. The searching includes the integer sample offset search stage and fractional sample refinement stage.
A 25-point full search is applied for the integer sample offset search. The SAD of the initial MV pair is first calculated. If the SAD of the initial MV pair is smaller than a threshold, the integer sample stage of DMVR is terminated. Otherwise, the SADs of the remaining 24 points are calculated and checked in raster scanning order. The point with the smallest SAD is selected as the output of the integer sample offset searching stage. To reduce the penalty of the uncertainty of DMVR refinement, it is proposed to favor the original MV during the DMVR process. The SAD between the reference blocks referred to by the initial MV candidates is decreased by ¼ of the SAD value.
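The integer sample offset search, together with the MV difference mirroring rule and the ¼ SAD reduction that favors the initial MV, can be sketched as follows (a simplified illustration; the cost callback and the toy values in the example are hypothetical, and the real process operates on interpolated reference blocks).

```python
def dmvr_integer_search(sad_of_pair, mv0, mv1, threshold):
    """Integer-sample offset search of DMVR (a simplified sketch).

    sad_of_pair(mv0, mv1) returns the bilateral SAD between the L0 and L1 blocks
    referred to by the candidate MV pair.  Candidate pairs follow the MV difference
    mirroring rule: the offset added to MV0 is subtracted from MV1.
    """
    init_sad = sad_of_pair(mv0, mv1)
    if init_sad < threshold:
        return (0, 0), init_sad           # early termination of the integer stage
    init_sad -= init_sad // 4             # favor the original MV: decrease its SAD by 1/4
    best_off, best_sad = (0, 0), init_sad
    for dy in range(-2, 3):               # 25-point full search, 2 integer samples
        for dx in range(-2, 3):           # around the initial MV, in raster-scan order
            if (dy, dx) == (0, 0):
                continue
            cand0 = (mv0[0] + dx, mv0[1] + dy)   # MV0' = MV0 + MV_offset
            cand1 = (mv1[0] - dx, mv1[1] - dy)   # MV1' = MV1 - MV_offset (mirroring)
            sad = sad_of_pair(cand0, cand1)
            if sad < best_sad:
                best_sad, best_off = sad, (dx, dy)
    return best_off, best_sad

if __name__ == "__main__":
    # Toy bilateral cost with its minimum at offset (1, -1)
    cost = lambda a, b: abs(a[0] - 5) + abs(a[1] + 3) + abs(b[0] - 1) + abs(b[1] - 7)
    off, sad = dmvr_integer_search(cost, mv0=(4, -2), mv1=(2, 6), threshold=0)
    print(off, sad)                        # expected: (1, -1) 0
```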
The integer sample search is followed by fractional sample refinement. To reduce the computational complexity, the fractional sample refinement is derived by using a parametric error surface equation, instead of an additional search with SAD comparison. The fractional sample refinement is conditionally invoked based on the output of the integer sample search stage. When the integer sample search stage is terminated with the center having the smallest SAD in either the first iteration or the second iteration search, the fractional sample refinement is further applied.
In parametric error surface based sub-pixel offsets estimation, the center position cost and the costs at four neighboring positions from the center are used to fit a 2-D parabolic error surface equation of the following form:
where (xmin, ymin) corresponds to the fractional position with the least cost and C corresponds to the minimum cost value. By solving the above equations using the cost values of the five search points, the (xmin, ymin) is computed as:
The values of xmin and ymin are automatically constrained to be between −8 and 8 since all cost values are positive and the smallest value is E(0,0). This corresponds to a half-pel offset with 1/16th-pel MV accuracy in VVC. The computed fractional (xmin, ymin) are added to the integer distance refinement MV to get the sub-pixel accurate refinement delta MV.
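A minimal sketch of the parametric-error-surface fractional refinement, assuming the standard closed-form solution for the minimum of a 2-D parabola fitted to the center cost and its four neighbors; the result is expressed in 1/16-pel units and clamped to the half-pel range stated above, while the exact integer arithmetic of the specification is omitted.

```python
def error_surface_offset(e_center, e_left, e_right, e_top, e_bottom):
    """Fractional sample refinement from a parametric error surface (a sketch).

    The five integer-position costs are fitted to the 2-D parabolic surface
    E(x, y) = A*(x - x_min)^2 + B*(y - y_min)^2 + C, and the position of the
    minimum is solved in closed form.  The result is returned in 1/16-pel units
    and constrained to [-8, 8] (i.e., at most a half-pel offset).
    """
    def solve(e_minus, e_plus):
        denom = 2 * (e_minus + e_plus - 2 * e_center)
        if denom <= 0:
            return 0                              # flat surface: keep the center
        frac = 16 * (e_minus - e_plus) / denom    # offset in 1/16-pel units
        return int(max(-8, min(8, round(frac))))
    return solve(e_left, e_right), solve(e_top, e_bottom)

if __name__ == "__main__":
    # Costs sampled from E(x, y) = 4*(x - 0.25)^2 + 4*(y + 0.5)^2 + 10
    e = lambda x, y: 4 * (x - 0.25) ** 2 + 4 * (y + 0.5) ** 2 + 10
    print(error_surface_offset(e(0, 0), e(-1, 0), e(1, 0), e(0, -1), e(0, 1)))
    # expected: (4, -8), i.e., +0.25 pel horizontally and -0.5 pel vertically
```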
Next, bilinear-interpolation and sample padding is described. In VVC, the resolution of the MVs is 1/16 luma samples. The samples at fractional positions are interpolated using an 8-tap interpolation filter. In DMVR, the search points surround the initial fractional-pel MV with integer sample offsets, so the samples at those fractional positions need to be interpolated for the DMVR search process. To reduce the calculation complexity, the bi-linear interpolation filter is used to generate the fractional samples for the searching process in DMVR. Another important effect is that, by using the bi-linear filter with a 2-sample search range, DMVR does not access more reference samples compared to the normal motion compensation process. After the refined MV is attained with the DMVR search process, the normal 8-tap interpolation filter is applied to generate the final prediction. In order not to access more reference samples than the normal MC process, the samples that are not needed for the interpolation process based on the original MV but are needed for the interpolation process based on the refined MV are padded from the available samples.
When the width or height of a CU is larger than 16 luma samples, it will be further split into subblocks with width or height equal to 16 luma samples. The maximum unit size for the DMVR searching process is limited to 16×16.
Next, multi-pass decoder-side motion vector refinement (MP-DMVR) is described. In ECM, to further improve the coding efficiency, a multi-pass decoder-side motion vector refinement is applied. In the first pass, bilateral matching (BM) is applied to the coding block. In the second pass, BM is applied to each 16×16 subblock within the coding block. In the third pass, MV in each 8×8 subblock is refined by applying bi-directional optical flow (BDOF). The refined MVs are stored for both spatial and temporal motion vector prediction.
In the first pass, a refined MV is derived by applying BM to a coding block. Similar to decoder-side motion vector refinement (DMVR), in bi-prediction operation, a refined MV is searched around the two initial MVs (MV0 and MV1) in the reference picture lists L0 and L1. The refined MVs (MV0_pass1 and MV1_pass1) are derived around the initial MVs based on the minimum bilateral matching cost between the two reference blocks in L0 and L1.
BM performs local search to derive integer sample precision intDeltaMV. The local search applies a 3×3 square search pattern to loop through the search range [−sHor, sHor] in horizontal direction and [−sVer, sVer] in vertical direction, wherein, the values of sHor and sVer are determined by the block dimension, and the maximum value of sHor and sVer is 8 or other values. For example, as in
The bilateral matching cost is calculated as: bilCost=mvDistanceCost+sadCost, wherein sadCost is the SAD between the L0 predictor and the L1 predictor at each search point and mvDistanceCost is based on intDeltaMV (i.e., the distance between the search point and the initial position). When the block size cbW×cbH is greater than 64, the MRSAD cost function is applied to remove the DC effect of distortion between reference blocks. When the bilCost at the center point of the 3×3 search pattern has the minimum cost, the intDeltaMV local search is terminated. Otherwise, the current minimum cost search point becomes the new center point of the 3×3 search pattern and the search for the minimum cost continues, until it reaches the end of the search range.
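The first-pass local search can be sketched as follows, assuming hypothetical callbacks that return the SAD and the MV-distance cost for a candidate offset; the 3×3 pattern is re-centered on the current best point until the center has the minimum bilCost or the edge of the search range is reached.

```python
def bm_local_search(sad_cost, mv_dist_cost, s_hor, s_ver):
    """First-pass bilateral-matching local search (a simplified sketch).

    A 3x3 square pattern is looped through [-s_hor, s_hor] x [-s_ver, s_ver];
    bilCost = mvDistanceCost + sadCost at each point.  When the center of the
    3x3 pattern has the minimum cost, the search terminates; otherwise the best
    point becomes the new center, until the edge of the search range is reached.
    """
    def bil_cost(dmv):
        return mv_dist_cost(dmv) + sad_cost(dmv)

    center = (0, 0)
    while True:
        best, best_cost = center, bil_cost(center)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                cand = (center[0] + dx, center[1] + dy)
                if abs(cand[0]) > s_hor or abs(cand[1]) > s_ver:
                    continue                  # stay inside the search range
                c = bil_cost(cand)
                if c < best_cost:
                    best, best_cost = cand, c
        if best == center:                    # the center point has the minimum cost
            return center, best_cost
        center = best                         # re-center the 3x3 pattern and continue

if __name__ == "__main__":
    sad = lambda d: (d[0] - 3) ** 2 + (d[1] + 2) ** 2   # toy SAD surface
    dist = lambda d: abs(d[0]) + abs(d[1])               # toy MV-distance cost
    print(bm_local_search(sad, dist, s_hor=8, s_ver=8))
```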
The existing fractional sample refinement is further applied to derive the final deltaMV. The refined MVs after the first pass are then derived as:
In the second pass, a refined MV is derived by applying BM to a 16×16 grid subblock. For each subblock, a refined MV is searched around the two MVs (MV0_pass1 and MV1_pass1), obtained on the first pass, in the reference picture list L0 and L1. The refined MVs (MV0_pass2 (sbIdx2) and MV1_pass2 (sbIdx2)) are derived based on the minimum bilateral matching cost between the two reference subblocks in L0 and L1.
For each subblock, BM performs full search to derive integer sample precision intDeltaMV (sbIdx2). The full search has a search range [−sHor, sHor] in horizontal direction and [−sVer, sVer] in vertical direction, wherein, the values of sHor and sVer are determined by the block dimension, and the maximum value of sHor and sVer is 8 or other values.
The bilateral matching cost is calculated by applying a cost factor to the SATD cost between two reference subblocks, as: bilCost=satdCost×costFactor. The search area (2×sHor+1)×(2×sVer+1) is divided into 5 diamond-shaped search regions shown in
The existing VVC DMVR fractional sample refinement is further applied to derive the final deltaMV(sbIdx2). The refined MVs at the second pass are then derived as:
In the third pass, a refined MV is derived by applying BDOF to an 8×8 grid subblock. For each 8×8 subblock, BDOF refinement is applied to derive scaled Vx and Vy without clipping starting from the refined MV of the parent subblock of the second pass. The derived bioMv(Vx, Vy) is rounded to 1/16 sample precision and clipped between −32 and 32.
The refined MVs (MV0_pass3(sbIdx3) and MV1_pass3(sbIdx3)) at the third pass are derived as:
In ECM, the adaptive decoder-side motion vector refinement method is an extension of multi-pass DMVR, which consists of two new merge modes that refine the MV only in one direction, either L0 or L1, of the bi-prediction for the merge candidates that meet the DMVR conditions. The multi-pass DMVR process is applied for the selected merge candidate to refine the motion vectors, however either MVD0 or MVD1 is set to zero in the first pass (i.e., PU level) DMVR. Thus, a new merge candidate list is constructed for adaptive decoder-side motion vector refinement, and the new merge mode for the new merge candidate list is called BM merge in ECM.
The merge candidates for BM merge mode are derived from spatial neighboring coded blocks, TMVPs, non-adjacent blocks, history-based motion vector predictors (HMVPs), and pair-wise candidates, similar to the derivation process in the regular merge mode. The difference is that only those meeting the DMVR conditions are added into the candidate list. The same merge candidate list is used by the two new merge modes. The list of BM candidates contains the inherited BCW weights, and the DMVR process is unchanged except that the computation of the distortion is made using MRSAD or MRSATD if the weights are non-equal and the bi-prediction is weighted with the BCW weights. The merge index is coded as in regular merge mode.
Template matching (TM) is a decoder-side MV derivation method to refine the motion information of the current CU by finding the closest match between a template (i.e., top or left neighboring blocks of the current CU) in the current picture and a block (i.e., same size to the template) in a reference picture.
In advanced motion vector prediction (AMVP) mode, an MVP candidate is determined based on the template matching error, to select the one that reaches the minimum cost. The cost is calculated as the difference between the current block template and the reference block template. TM is then performed only for this particular MVP candidate for MV refinement. TM refines this MVP candidate, starting from full-pel MVD precision (or 4-pel for 4-pel AMVR mode) within a [−8, +8]-pel search range by using an iterative diamond search. The AMVP candidate may be further refined by using a cross search with full-pel MVD precision (or 4-pel for 4-pel AMVR mode), followed sequentially by half-pel and quarter-pel ones depending on the AMVR mode as specified in Table 1. This search process ensures that the MVP candidate still keeps the same MV precision as indicated by the AMVR mode after the TM process. In the search process, if the difference between the previous minimum cost and the current minimum cost in the iteration is less than a threshold that is equal to the area of the block, the search process terminates.
In merge mode, a similar search method is applied to the merge candidate indicated by the merge index. As Table 1 shows, TM may perform refinement all the way down to ⅛-pel MVD precision or skip the precisions beyond half-pel MVD precision, depending on whether the alternative interpolation filter (that is used when AMVR is of half-pel mode) is used according to the merged motion information. Besides, when TM mode is enabled, template matching may work as an independent process or an extra MV refinement process between block-based and subblock-based bilateral matching (BM) methods, depending on whether BM can be enabled or not according to its enabling condition check.
The sum of absolute difference (SAD) or the sum of absolute transformed difference (SATD) between templates of the current block and the reference block may be used as the template matching cost, i.e., the cost of a candidate motion vector which refers to the reference block. In some other cases, mean removed SAD or mean removed SATD may be used as the template matching cost. The template matching cost is calculated as the difference between the template of the current block and the template of the reference block.
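A simplified sketch of the template matching cost, assuming a one-sample-thick above and left template and SAD as the difference measure (SATD or mean-removed variants would replace the absolute-difference sum); the picture layout and positions in the example are hypothetical.

```python
import numpy as np

def template(pic, y, x, h, w, tsize=1):
    """Above and left neighboring samples of the h x w block located at (y, x)."""
    above = pic[y - tsize:y, x:x + w]
    left = pic[y:y + h, x - tsize:x]
    return above, left

def tm_cost(cur_pic, ref_pic, block_pos, block_size, mv):
    """Template matching cost of a candidate MV: SAD between the template of the
    current block and the template of the reference block the MV points to."""
    y, x = block_pos
    h, w = block_size
    ca, cl = template(cur_pic, y, x, h, w)
    ra, rl = template(ref_pic, y + mv[0], x + mv[1], h, w)
    return int(np.abs(ca - ra).sum() + np.abs(cl - rl).sum())

if __name__ == "__main__":
    ref = np.random.randint(0, 256, size=(32, 32)).astype(int)
    cur = ref.copy()
    cur[4:28, 4:28] = ref[5:29, 6:30]          # pretend the content shifted by (1, 2)
    costs = {mv: tm_cost(cur, ref, (8, 8), (8, 8), mv)
             for mv in [(0, 0), (1, 2), (2, 1)]}
    print(min(costs, key=costs.get))           # (1, 2) is expected to win with cost 0
```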
For a bi-prediction candidate, the two MVs, one for reference picture list 0 and the other for reference picture list 1, are first refined independently and then an iteration process is performed to jointly refine the two MVs.
In HEVC, only a translational motion model is applied for motion compensation prediction (MCP), while in the real world there are many kinds of motion, e.g., zoom in/out, rotation, perspective motions, and other irregular motions. In VVC, a block-based affine transform motion compensation prediction is applied. For example,
For the 4-parameter affine motion model, motion vector at sample location (x, y) in a block is derived as:
For the 6-parameter affine motion model, motion vector at sample location (x, y) in a block is derived as:
where (mv0x, mv0y) is motion vector of the top-left corner control point, (mv1x, mv1y) is motion vector of the top-right corner control point, and (mv2x, mv2y) is motion vector of the bottom-left corner control point.
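The subblock MVs can then be computed from the control point MVs with the standard 4-parameter and 6-parameter affine formulas; the following sketch evaluates both models at an arbitrary sample position, using floating-point arithmetic rather than the fixed-point precision of the specification, and the control point values in the example are hypothetical.

```python
def affine_mv_4param(mv0, mv1, w, x, y):
    """4-parameter affine model: MV at sample position (x, y) from the top-left
    and top-right control point MVs of a block of width w."""
    ax = (mv1[0] - mv0[0]) / w
    ay = (mv1[1] - mv0[1]) / w
    return (ax * x - ay * y + mv0[0],
            ay * x + ax * y + mv0[1])

def affine_mv_6param(mv0, mv1, mv2, w, h, x, y):
    """6-parameter affine model: MV at (x, y) from the top-left, top-right and
    bottom-left control point MVs of a block of size w x h."""
    return ((mv1[0] - mv0[0]) / w * x + (mv2[0] - mv0[0]) / h * y + mv0[0],
            (mv1[1] - mv0[1]) / w * x + (mv2[1] - mv0[1]) / h * y + mv0[1])

if __name__ == "__main__":
    w, h = 16, 16
    mv0, mv1, mv2 = (4.0, 2.0), (6.0, 2.0), (4.0, 5.0)
    # MV evaluated at position (2, 2) relative to the block origin,
    # e.g., near the center of the first 4x4 subblock
    print(affine_mv_4param(mv0, mv1, w, 2, 2))
    print(affine_mv_6param(mv0, mv1, mv2, w, h, 2, 2))
```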
In order to simplify the motion compensation prediction, block based affine transform prediction is applied. To derive motion vector of each 4×4 luma subblock, the motion vector of the center sample of each subblock, as shown in
As done for translational motion inter prediction, there are also two affine motion inter prediction modes: affine merge mode and affine AMVP mode.
Affine merge mode (AF_MERGE) can be applied for CUs with both width and height larger than or equal to 8. In this mode, the control point motion vectors (CPMVs) of the current CU are generated based on the motion information of the spatial neighboring CUs. There can be up to 15 affine candidates and an index is signaled to indicate the one to be used for the current CU. The following 8 types of candidates are used to form the affine merge candidate list:
The inherited affine candidates are derived from affine motion model of the adjacent or non-adjacent blocks. When an adjacent or non-adjacent affine CU is identified, its control point motion vectors are used to derive the CPMVP candidate in the affine merge list of the current CU. As shown in
For inherited candidates from non-adjacent neighbors, the non-adjacent spatial neighbors are checked based on their distances to the current block, i.e., from near to far. At a specific distance, only the first available neighbor (that is coded with the affine mode) from each side (e.g., the left and above) of the current block is included for inherited candidate derivation.
Constructed affine candidates from adjacent neighbors are the candidates constructed by combining the neighbor translational motion information of each control point. The motion information for the control points is derived from the specified spatial neighbors and temporal neighbor shown in
After the MVs of the four control points are attained, affine merge candidates are constructed based on that motion information. The following combinations of control point MVs are used, in order, to construct the candidates:
The combination of 3 CPMVs constructs a 6-parameter affine merge candidate and the combination of 2 CPMVs constructs a 4-parameter affine merge candidate. To avoid motion scaling process, if the reference indices of control points are different, the related combination of control point MVs is discarded.
For the 4-parameter affine merge/AMVP candidates constructed based on a combination of 2 CPMVs, the non-translational affine parameters are inherited from the non-adjacent spatial neighbors. Specifically, the 4-parameter affine merge/AMVP candidates are generated from the combination of 1) the translational affine parameters of adjacent neighboring 4×4 blocks; and 2) the non-translational affine parameters inherited from the non-adjacent spatial neighbors as defined in
For the regression based affine merge candidates, subblock motion field from a previously coded affine CU and motion information from adjacent subblocks of a current CU are used as the input to the regression process to derive proposed affine candidates. The previously coded affine CU can be identified from scanning through non-adjacent positions and the affine HMVP table. Adjacent subblock information of current CU is fetched from 4×4 sub-blocks represented by the grey filled zone depicted in
After inserting all the above candidates into the candidate list, if the list is still not full, zero MVs are inserted to the end of the list.
In the disclosed embodiments, prediction refinement with optical flow for affine mode can be used. Subblock based affine motion compensation can save memory access bandwidth and reduce computation complexity compared to pixel-based motion compensation, at the cost of a prediction accuracy penalty. To achieve a finer granularity of motion compensation, prediction refinement with optical flow (PROF) is used to refine the subblock based affine motion compensated prediction without increasing the memory access bandwidth for motion compensation. In VVC, after the subblock based affine motion compensation is performed, the luma prediction sample is refined by adding a difference derived by the optical flow equation. The PROF is described in the following four steps:
Step 1) The subblock-based affine motion compensation is performed to generate subblock prediction I(i, j).
Step 2) The spatial gradients gx(i, j) and gy(i,j) of the subblock prediction are calculated at each sample location using a 3-tap filter [−1, 0, 1]. The gradient calculation is exactly the same as gradient calculation in BDOF.
shift1 is used to control the gradient's precision. The subblock (i.e., 4×4) prediction is extended by one sample on each side for the gradient calculation. To avoid additional memory bandwidth and additional interpolation computation, those extended samples on the extended borders are copied from the nearest integer pixel position in the reference picture.
Step 3) The luma prediction refinement is calculated by the following optical flow equation.
where the Δv(i, j) is the difference between sample MV computed for sample location (i,j), denoted by v(i,j), and the subblock MV of the subblock to which sample (i,j) belongs, as shown in
Since the affine model parameters and the sample location relative to the subblock center are not changed from subblock to subblock, Δv(i,j) can be calculated for the first subblock, and reused for other subblocks in the same CU. Let dx(i,j) and dy(i,j) be the horizontal and vertical offsets from the sample location (i,j) to the center of the subblock (xSB, ySB); Δv(x,y) can be derived by the following equations:
In order to keep accuracy, the center of the subblock (xSB, ySB) is calculated as ((WSB−1)/2, (HSB−1)/2), where WSB and HSB are the subblock width and height, respectively. For 4-parameter affine model, it satisfies:
For 6-parameter affine model, it satisfies:
In the above equations (19) and (20), (v0x, v0y), (v1x, v1y), (v2x, v2y) are the top-left, top-right and bottom-left control point motion vectors, w and h are the width and height of the CU.
Step 4) Finally, the luma prediction refinement ΔI(i,j) is added to the subblock prediction I(i, j). The final prediction I′ is generated as the following equation.
There are two cases in which the PROF is not applied to an affine coded CU: 1) all control point MVs are the same, which indicates the CU only has translational motion; and 2) the affine motion parameters are greater than a specified limit because the subblock based affine MC is degraded to CU based MC to avoid large memory access bandwidth requirement.
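The four PROF steps can be sketched as follows in simplified floating-point form, omitting the bit shifts, clipping, and precision handling of the specification; the per-sample MV difference is derived here for a 4-parameter model from hypothetical affine parameters a and b.

```python
import numpy as np

def prof_refine(sub_pred, dmv_x, dmv_y):
    """Prediction refinement with optical flow for one subblock (a simplified sketch).

    sub_pred is the subblock prediction extended by one sample on each side so
    that the 3-tap [-1, 0, 1] gradients can be computed at every inner sample.
    dmv_x / dmv_y hold the per-sample MV difference dv(i, j) between the sample
    MV and the subblock MV.  The refinement dI = gx * dvx + gy * dvy is added
    to the subblock prediction.
    """
    gx = sub_pred[1:-1, 2:] - sub_pred[1:-1, :-2]      # horizontal gradient
    gy = sub_pred[2:, 1:-1] - sub_pred[:-2, 1:-1]      # vertical gradient
    inner = sub_pred[1:-1, 1:-1]
    return inner + gx * dmv_x + gy * dmv_y

def delta_mv_4param(wsb, hsb, a, b):
    """Per-sample MV difference for a 4-parameter affine subblock, using the
    offset of each sample from the subblock center ((wsb-1)/2, (hsb-1)/2)."""
    xs = np.arange(wsb) - (wsb - 1) / 2.0
    ys = np.arange(hsb) - (hsb - 1) / 2.0
    dx, dy = np.meshgrid(xs, ys)
    dvx = a * dx - b * dy
    dvy = b * dx + a * dy
    return dvx, dvy

if __name__ == "__main__":
    pred = np.arange(36, dtype=float).reshape(6, 6)    # 4x4 subblock padded to 6x6
    dvx, dvy = delta_mv_4param(4, 4, a=0.05, b=0.0)
    print(prof_refine(pred, dvx, dvy))
```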
The merge candidates are adaptively reordered with template matching (TM). The reordering method is applied to regular merge mode, TM merge mode, and affine merge mode (excluding the SbTMVP candidate).
An initial merge candidate list is first constructed according to a given checking order, such as spatial, temporal motion vector predictor (TMVP), non-adjacent, history-based motion vector predictor (HMVP), pairwise, and virtual merge candidates. Then the candidates in the initial list are divided into several subgroups. Merge candidates in each subgroup are reordered to generate a reordered merge candidate list, and the reordering is according to cost values based on template matching. The index of the selected merge candidate in the reordered merge candidate list is signaled to the decoder. For simplification, merge candidates in the last but not the first subgroup are not reordered. All the zero candidates from the ARMC reordering process are excluded during the construction of the merge motion vector candidate list. The subgroup size is set to 5 for regular merge mode and TM merge mode. The subgroup size is set to 3 for affine merge mode.
The template matching cost of a merge candidate during the reordering process is measured by the SAD between samples of a template of the current block and their corresponding reference samples.
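A minimal sketch of the ARMC reordering, assuming a hypothetical tm_cost callback that returns the template matching cost of a candidate; candidates are reordered within each subgroup while the last (non-first) subgroup keeps its original order, as described above.

```python
def armc_reorder(candidates, tm_cost, subgroup_size):
    """Adaptive reordering of merge candidates with template matching (a sketch).

    The initial list is split into subgroups of subgroup_size; candidates inside
    each subgroup are sorted by ascending TM cost.  The last subgroup, when it is
    not the first one, is kept in its original order.
    """
    groups = [candidates[i:i + subgroup_size]
              for i in range(0, len(candidates), subgroup_size)]
    reordered = []
    for idx, group in enumerate(groups):
        is_last_not_first = (idx == len(groups) - 1) and (idx != 0)
        reordered.extend(group if is_last_not_first else sorted(group, key=tm_cost))
    return reordered

if __name__ == "__main__":
    cands = ["c0", "c1", "c2", "c3", "c4", "c5", "c6"]
    costs = {"c0": 40, "c1": 10, "c2": 25, "c3": 5, "c4": 60, "c5": 1, "c6": 2}
    print(armc_reorder(cands, lambda c: costs[c], subgroup_size=5))
    # first subgroup reordered by cost; the trailing subgroup ["c5", "c6"] untouched
```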
When template matching is used to derive the refined motion, the template size is set equal to 1. Only the above or left template is used during the motion refinement of TM when the block is flat (block width greater than 2 times the height) or narrow (block height greater than 2 times the width). TM is extended to perform 1/16-pel MVD precision. The first four merge candidates are reordered with the refined motion in TM merge mode.
For affine merge candidates with subblock size equal to Wsub×Hsub, the above template comprises several sub-templates with the size of Wsub×1, and the left template comprises several sub-templates with the size of 1×Hsub. As shown in
In the reordering process, a candidate is considered as redundant if the cost difference between a candidate and its predecessor is inferior to a lambda value, e.g., |D1−D2|<λ, where D1 and D2 are the costs obtained during the first ARMC ordering and λ is the Lagrangian parameter used in the RD criterion at encoder side.
The proposed algorithm is defined as the following:
This algorithm is applied to the Regular, TM, BM and Affine merge modes. A similar algorithm is applied to the Merge MMVD and sign MVD prediction methods which also use ARMC for the reordering.
The value of λ is set equal to the λ of the rate distortion criterion used to select the best merge candidate at the encoder side for the low delay configuration, and to the value of λ corresponding to another QP for the Random Access configuration. A set of λ values corresponding to each signaled QP offset is provided in the SPS or in the Slice Header for the QP offsets which are not present in the SPS.
The template-based reorder can also be applied in the TM merge mode.
The ARMC design is also applicable to the AMVP mode wherein the AMVP candidates are reordered according to the TM cost. For the template matching for advanced motion vector prediction (TM-AMVP) mode, an initial AMVP candidate list is constructed, followed by a refinement from TM to construct a refined AMVP candidate list. In addition, an MVP candidate with a TM cost larger than a threshold, which is equal to five times the cost of the first MVP candidate, is skipped.
It is noted that when wrap around motion compensation is enabled, the MV candidate is clipped with wrap around offset taken into consideration.
Merge candidates of one single candidate type, e.g., TMVP or non-adjacent MVP (NA-MVP), are reordered based on the ARMC TM cost values. The reordered candidates are then added into the merge candidate list. The TMVP candidate type adds more TMVP candidates with more temporal positions and different inter prediction directions to perform the reordering and the selection. Moreover, the NA-MVP candidate type is further extended with more spatially non-adjacent positions. The target reference picture of the TMVP candidate can be selected from any one of the reference pictures in the list according to the scaling factor. The selected reference picture is the one whose scaling factor is the closest to 1.
During the development of the motion refinement technique, the following problems and areas for improvements are recognized.
First, the TM cost is calculated as the sample difference between the current template and the reference template. The difference between the initial MV and the refined MV is not considered. The initial MV in merge mode is inherited from the spatial or temporal neighboring blocks, which have high correlation with the current block. Although a large refinement of the initial MV may be good for the template, it may not be good for the current block itself. Therefore, the MV difference between the initial MV and the refined MV should be taken into consideration during the refinement.
Second, the order of performing the TM and multi-pass DMVR may affect the quality of the resulting refined MV.
Third, for bi-prediction, the TM cost of bi-prediction (denoted as cost_bi) is compared with the TM cost of uni-prediction of list 0 or list 1 (denoted as cost0_uni and cost1_uni, respectively). If cost_bi is much larger than the uni-prediction cost, the bi-prediction is converted to uni-prediction. However, which of the list 0 or list 1 uni-prediction TM costs is chosen for the comparison depends on which one is refined in the last step, which cannot guarantee that the one selected for comparison is the smaller one between cost0_uni and cost1_uni.
Fourth, in the ARMC-TM reordering process, a candidate is considered as redundant if the cost difference between the candidate and its predecessor is inferior to a lambda value, and the candidate is reordered so that the difference between the costs of two consecutive candidates is larger than the lambda value, which guarantees the diversity of the candidates. However, in the current design, the diversity-based reordering is performed differently for the TM based merge list and the regular merge list, which creates inconsistency.
Fifth, the affine merge mode is used to capture objects with more complex motion than translation, and TM is a tool to further improve the accuracy of motion that is inherited from the previously coded blocks without MV offset signaling. In the current design, TM is only applied in regular merge mode, but not in affine merge mode. However, the affine motion inherited from the previously coded blocks may not perfectly match the current block. So, template matching based refinement is helpful.
The present disclosure provides solutions to one or more of the above-described problems.
In some embodiments, the TM cost is extended by taking the MV offset into consideration to give a penalty to a search position far away from the initial position. The MV offset here refers to the difference between the refined MV and the initial MV. Thus, a large MV refinement itself gives a big cost, which prevents the refined MV from going too far away from the initial MV derived from the neighboring blocks.
Assume MV0 = (mv0x, mv0y) denotes the initial MV before TM refinement and MV = (mvx, mvy) denotes the MV of each search point. Then the MV cost, denoted as cost(MV), can be derived as:
and the TM cost, denoted as cost(TM), can be a weighted sum of the MV cost and the sample cost:
The sample cost is derived according to the sample difference between the template of the current block and the template of the reference block. It can be the SAD or SATD of the two templates.
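The exact formulas for the MV cost and the weighted TM cost are not reproduced above; the following is a minimal sketch, assuming the MV cost is the sum of absolute component differences between the search-point MV and the initial MV, and that the sample cost is the SAD of the two templates. The function names and the weight w are illustrative only.

```python
# Hedged sketch of a TM cost with an MV-offset penalty (not the normative formula).
def mv_cost(mv, mv0):
    """Penalty for moving away from the initial MV: sum of absolute component differences."""
    return abs(mv[0] - mv0[0]) + abs(mv[1] - mv0[1])

def sample_cost(template_cur, template_ref):
    """Sample cost: SAD between the current-block template and the reference template."""
    return sum(abs(a - b) for a, b in zip(template_cur, template_ref))

def tm_cost(mv, mv0, template_cur, template_ref, w=4):
    """Total TM cost as a weighted sum of MV cost and sample cost; w is an assumed weight."""
    return w * mv_cost(mv, mv0) + sample_cost(template_cur, template_ref)
```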
The template matching based refinement and the bilateral matching based refinement can both be applied on a coding block. In some embodiments, when TM and multi-pass DMVR are both applied on a coding block, the TM is performed first, as the MV offsets derived by TM are usually larger than those derived by DMVR. Conducting TM before DMVR could make it easier to reach an optimal MV value. So, for each merge candidate, the TM refinement is performed based on the initial MV and a TM refined MV is output. Then, based on the TM refined MV, if the coding block satisfies the DMVR condition, the DMVR is performed based on the TM refined MV, and a DMVR refined MV is output and used as the final MV for motion compensation. The process is shown as process 2300A in
As TM refinement checks the uni-prediction TM cost and the bi-prediction TM cost, it may also convert a bi-prediction block into uni-prediction, while DMVR can only be applied on bi-prediction blocks. So, performing TM before DMVR will make some coding blocks lose the chance of being refined by DMVR if these coding blocks are converted into uni-prediction. Thus, in some other embodiments, the TM is performed after DMVR or after the second pass of multi-pass DMVR. The process is shown as process 2300B in
When TM is applied on a bi-prediction coding block, each of the two MVs is refined separately as uni-prediction. The two MVs refined as uni-prediction are denoted as MV0_uni and MV1_uni, and the corresponding TM costs are denoted as cost0_uni and cost1_uni. After the refinement of each MV, the two MVs are further refined jointly as bi-prediction. To reduce the complexity of the joint refinement of two MVs, the refinement is implemented with an iteration process. That is, one MV is fixed while the other MV is refined, and then the refined MV is fixed while the previously fixed MV is refined. The refinement process can be the process 1000 shown in
In some embodiments, it is proposed to compare the TM cost of bi-prediction with the smaller one of the TM costs of uni-prediction, which does not depend on which MV is refined last. The process can be described in pseudo code as follows.
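The original pseudo code is not reproduced here; the following is a minimal sketch of the described comparison, under the assumption that bi-prediction is converted to uni-prediction when its cost exceeds a scaled version of the smaller uni-prediction cost. The threshold factor th and the function name are illustrative.

```python
# Hedged sketch: compare the bi-prediction TM cost with the smaller uni-prediction cost.
def select_prediction(cost_bi, cost0_uni, cost1_uni, th=1.25):
    cost_uni_min = min(cost0_uni, cost1_uni)      # independent of the refinement order
    if cost_bi > th * cost_uni_min:               # bi-prediction is much worse
        best_list = 0 if cost0_uni <= cost1_uni else 1
        return ("uni", best_list)                 # convert to uni-prediction
    return ("bi", None)                           # keep bi-prediction
```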
The merge candidates are adaptively reordered with template matching (TM). For regular merge candidates, the diversity of the candidates is considered in the reordering. A candidate is considered as redundant if the cost difference between the candidate and its predecessor is inferior to a lambda value. A redundant candidate is moved to a further position in the list, namely the first position where the candidate is diverse enough compared to its predecessor. So, the cost difference between two consecutive candidates is compared with the lambda value, and a candidate is moved to a further position in the list if the difference between its cost and that of its predecessor is less than the lambda value. For TM merge candidates, however, the smaller one between the cost of the first candidate and the cost difference of two consecutive candidates is compared with the lambda value. Even if the cost difference of any consecutive candidates is larger than the lambda value, the first candidate is moved to a further position if the cost of the first candidate itself is less than the lambda value.
In some embodiments, in order to make the design consistent between regular merge candidate reordering and TM candidate reordering, the first candidate cost is not considered in the TM candidate reordering. That is, the diversity based reordering method is the same for regular merge candidates and TM merge candidates. Only the cost difference of two consecutive candidates is compared with the lambda value, and if the cost difference is less than the lambda value, the later one of the two consecutive candidates is moved to a further position that has a cost sufficiently different from its new predecessor.
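A single-pass sketch of this consistent diversity-based reordering is given below, assuming candidates are (cost, data) pairs already sorted by ascending TM cost; the function name and list representation are illustrative.

```python
# Hedged single-pass sketch: a candidate whose cost is within lam of its predecessor is
# moved to the first later position where it differs from its new predecessor by >= lam.
def diversity_reorder(cands, lam):
    cands = list(cands)
    i = 1
    while i < len(cands):
        if abs(cands[i][0] - cands[i - 1][0]) < lam:          # redundant candidate
            c = cands.pop(i)
            j = i
            while j < len(cands) and abs(c[0] - cands[j - 1][0]) < lam:
                j += 1                                        # look for a diverse spot
            cands.insert(j, c)
        i += 1
    return cands
```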
In some embodiments, the diversity-based reordering is not applied for TM merge candidates. After constructing the TM merge candidates list, the candidates are reordered based on the template cost, and then each candidate is refined by template matching based refinement. There is no diversity-based reordering conducted.
In some embodiments, the template matching based refinement is applied to affine merge mode to improve the accuracy of the affine motion which is inherited from the previously coded blocks.
To apply the template matching based refinement, the TM affine merge list is derived first. In one example, the TM affine merge list is the same as the regular affine merge list. That is, the same candidates are used for regular affine merge mode and TM affine merge mode. For regular affine merge mode, one of the candidates is selected and indicated in the bitstream, and the motion of the selected candidate is used for motion compensation. For TM affine merge mode, the motion of the candidates is refined by TM, and the refined motion of the selected candidate is used for motion compensation.
As in TM merge mode, the motion will be refined by TM. So, in another example, a different affine merge candidate list is constructed by considering the TM influence. In the TM merge candidate list, the similarity of the candidates is checked. If a candidate to be inserted into the list is similar to an existing candidate in the list, the candidate will not be inserted, as the similar candidate may produce the same motion after TM refinement. To check the similarity of two affine merge candidates, the differences of the CPMVs of the affine candidates are calculated and compared with thresholds. Suppose a first affine merge candidate has three CPMVs as CPMV0=(mv0_x, mv0_y), CPMV1=(mv1_x, mv1_y), CPMV2=(mv2_x, mv2_y), and a second merge candidate has three CPMVs as CPMV0′=(mv0_x′, mv0_y′), CPMV1′=(mv1_x′, mv1_y′), CPMV2′=(mv2_x′, mv2_y′). The first affine candidate and the second affine candidate are considered similar to each other if
where TH0_x, TH0_y, TH1_x, TH1_y, TH2_x, TH2_y are thresholds which may be dependent on the coding block size. This similarity check may be applied to all types of affine candidates, including inherited candidates from adjacent neighbors and non-adjacent neighbors, constructed candidates from adjacent neighbors, the first type of constructed candidates from non-adjacent neighbors, the second type of constructed candidates from non-adjacent neighbors, regression-based candidates, and pairwise affine candidates.
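The similarity condition itself is not reproduced above; a minimal sketch, assuming it compares each CPMV component difference against the corresponding threshold, is given below. The helper name and data layout are illustrative.

```python
# Hedged sketch of the CPMV-difference similarity check between two affine candidates.
def affine_candidates_similar(cpmvs_a, cpmvs_b, thresholds):
    """cpmvs_a, cpmvs_b: [CPMV0, CPMV1, CPMV2] as (x, y) pairs;
    thresholds: [(TH0_x, TH0_y), (TH1_x, TH1_y), (TH2_x, TH2_y)]."""
    for (ax, ay), (bx, by), (th_x, th_y) in zip(cpmvs_a, cpmvs_b, thresholds):
        if abs(ax - bx) > th_x or abs(ay - by) > th_y:
            return False          # at least one CPMV differs by more than the threshold
    return True                   # all CPMV differences are within the thresholds
```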
As affine motion compensation is performed in subblock level, a template of an affine merge candidate can also comprise multiple sub-templates.
To get the template of the reference block, the MV of each subblock template can be derived. In one example, the MV of each sub-template is borrowed from the boundary subblock. That is, the MV of a sub-template is the same as the MV of the adjacent subblock within the current coding block. This example is illustrated in
For an affine model, the motion vector at sample location (x, y) can be formulated as
wherein (mvx, mvy) is the derived motion vector at sample location (x, y), (mv0x, mv0y) is called the base MV in the model, which is the motion vector at sample location (0, 0), and a, b, c, d are the parameters of the affine model, which can be derived based on the motion vectors at two other sample locations in the plane. Generally, the base MV in the model can be the motion vector at any sample location, not necessarily at location (0, 0). If the motion vector at sample location (w, h) is chosen as the base MV (denoted as (mvwx, mvhy)), then the motion vector at sample location (x, y) can be formulated as
For the 4-parameter affine model, b is equal to −c and d is equal to a. Thus, the 4-parameter affine model can be formulated as
Theoretically, all the parameters of the affine model, including a, b, c, d and (mvwx, mvhy), can be refined in DMVR. However, to restrict the complexity, in some embodiments of this disclosure, it is proposed to fix the affine parameters a, b, c and d, and only refine the base MV (mvwx, mvhy). That is, the template only has translational motion in the searching process. In each search position, all the sub-templates have the same MV offset compared with the initial MV. Thus, the three CPMVs and the subblock MVs also have the same MV offset after refinement. If CPMV0, CPMV1 and CPMV2 are the three initial CPMVs and sbMV is a subblock MV before refinement, then after refinement, the refined CPMVs, denoted as CPMV0′, CPMV1′ and CPMV2′, and the refined subblock MV sbMV′ obey the following equations.
where MV_offset is the MV refinement in the TM refinement process (i.e., an MV offset producing the best TM cost).
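A minimal sketch of this base-MV-only refinement step is shown below: the single best MV offset is added to every CPMV and every subblock MV. The function name is illustrative.

```python
# Sketch: base-MV-only refinement adds the same best MV offset to all CPMVs and subblock MVs.
def apply_base_mv_offset(cpmvs, subblock_mvs, mv_offset):
    ox, oy = mv_offset
    refined_cpmvs = [(x + ox, y + oy) for (x, y) in cpmvs]
    refined_sb_mvs = [(x + ox, y + oy) for (x, y) in subblock_mvs]
    return refined_cpmvs, refined_sb_mvs
```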
All the search patterns, including the cross search, the 8-position diamond search, and the 16-position diamond search pattern, can be used. For example, to reduce the search complexity,
The affine TM refinement can also be applied together with affine DMVR on an affine coded block. In that case, the TM refinement process can be performed before affine DMVR, after the base MV refinement of affine DMVR but before the affine model parameter refinement of affine DMVR, or after affine DMVR.
Template-based reordering of merge candidates can also be applied to TM affine merge candidates. For example, after the TM affine merge candidate list construction, the candidates are reordered based on the template cost. Then the TM refinement is applied on the candidates in the list, and after the TM refinement, another template-based reordering and candidate similarity check can be performed to remove redundant candidates. A second TM refinement can then be applied.
In some embodiments, to further improve the affine model accuracy, the non-translation parameters are also refined in the TM refinement. One way to refine the non-translation parameters is to add offsets to the initial parameters to get the refined parameters, and then derive the CPMVs, subblock MVs or sub-template MVs from the refined non-translation parameters. The template matching cost is obtained by calculating the difference between the template of the current block and the template of the reference block, which is fetched according to the sub-template MVs.
In some embodiments, affine non-translation parameter search is performed. For affine model
the non-translation parameters a, b, c and d are searched in the parameter space. A search position with parameter values equal to a′, b′, c′ and d′ can be represented as:
where offset_a, offset_b, offset_c and offset_d are the parameter offsets searched in the TM refinement process. After getting the values of a′, b′, c′, d′, the subblock MVs and sub-template MVs can be derived according to the affine model with a′, b′, c′, d′, and the TM cost can be calculated as the difference between the template of the reference block and the template of the current block. By comparing the TM costs corresponding to different values of offset_a, offset_b, offset_c and offset_d, the best non-translation parameters a′, b′, c′, d′ can be obtained as the refined non-translation parameters, and the corresponding CPMVs can be calculated as the refined CPMVs.
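A hedged sketch of this non-translation parameter search is shown below. The helpers derive_subtemplate_mvs and tm_cost_for_mvs are assumptions standing in for the affine MV derivation (with the base MV kept fixed) and the template cost evaluation described above.

```python
# Hedged sketch of the affine non-translation parameter offset search.
def search_affine_params(a, b, c, d, offset_candidates,
                         derive_subtemplate_mvs, tm_cost_for_mvs):
    best_params, best_cost = (a, b, c, d), float("inf")
    for (oa, ob, oc, od) in offset_candidates:
        ap, bp, cp, dp = a + oa, b + ob, c + oc, d + od   # a', b', c', d'
        sub_mvs = derive_subtemplate_mvs(ap, bp, cp, dp)  # base MV stays fixed
        cost = tm_cost_for_mvs(sub_mvs)                   # template matching cost
        if cost < best_cost:
            best_params, best_cost = (ap, bp, cp, dp), cost
    return best_params
```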
To reduce the search complexity, a 2-parameter search can be applied. That is, offset_b is constrained to be equal to −offset_c and offset_d is constrained to be equal to offset_a. So, the encoder and the decoder only need to search for offset_a and offset_b, and derive offset_c and offset_d from offset_a and offset_b. This is called 2-parameter refinement in this disclosure.
The MV search method can be applied in the parameter search. For example, for 2-parameter refinement, as shown in
wherein T1 and T2 are two thresholds, which can be 1/16, 1/8, 1/4 or other values. The thresholds define the MV difference that can be generated, during each step of the search, for the sample in the current coding block that is farthest away from the sample with the base MV. It is noted that, in this example, different parameters have different search steps.
For the cost of each search point, the parameter offsets could also be considered. That is, the cost could be a weighted sum of the parameter offset cost and the SAD or SATD between the template of the reference block and the template of the current block, as TMCost = w*ParameterOffsetCost + sadCost, wherein w is a weight, sadCost is the SAD/SATD or mean-removed SAD/SATD cost of the templates, and ParameterOffsetCost is a cost dependent on the parameter offsets of the refined parameters. When w is equal to 0, only sadCost is considered.
When searching for the affine parameters, the base MV can be fixed. Theoretically, the MV at any point in the plane can be fixed as the base MV. In some embodiments, a CPMV is fixed as the base MV. For example, as shown in
As described above, the affine parameter refinement process is similar to the base MV refinement process. The search process is conducted round by round. For each round, if the template matching cost of the central position is less than those of all the neighboring positions, the current central position is taken as the best position and the search process terminates; otherwise, the neighboring position with the least template matching cost is set as the new center position and the search goes to the next round. To control the search complexity, a maximum number of search rounds is set at both the encoder and the decoder side. Thus, the search process terminates either when the central position has the least cost or when the search round number reaches the pre-set maximum number. A larger maximum search round number can give more coding performance gain but takes longer encoding and decoding time. Accordingly, to make a good trade-off between complexity and performance, the maximum search round may be dependent on the fixed base MV, QP, temporal layer, CU size, etc. For example, referring to the search processes shown in
In some embodiments, the maximum search round number of the later steps is dependent on the actual search round number of the previous steps. For example, in the first step when the top-left CPMV is set as the base MV, the maximum search round number is set to N. However, during the first step search, the search process terminates in the k-th (k<N) search round as the central position has the minimum template matching cost. Then in the second step, the maximum search round number is set to k/2 (or another value dependent on k and less than P). If in the first step the search process reaches the maximum search round number, then in the second step the maximum search round number is set to P, which is a value less than N. A similar method can be applied in the third step. If the actual search round number of the second step reaches the maximum number, the maximum search round number of the third step is set to L, where L is less than P; if the actual search round number is t and does not reach the maximum number, the maximum search round number of the third step is set to t/2. Thus, the maximum search round number is adaptively determined by the previous search process.
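A hedged sketch of the round-based search loop and the adaptive limit on the number of rounds is shown below; the halving rule and the fallback maximum follow the example above, while neighbors, cost and the function names are assumptions.

```python
# Hedged sketch: round-based search with a cap on the number of rounds.
def round_based_search(center, neighbors, cost, max_rounds):
    rounds_used = 0
    best_cost = cost(center)
    while rounds_used < max_rounds:
        rounds_used += 1
        best_neighbor = min(neighbors(center), key=cost)
        best_neighbor_cost = cost(best_neighbor)
        if best_neighbor_cost >= best_cost:       # center already has the least cost
            break
        center, best_cost = best_neighbor, best_neighbor_cost
    return center, rounds_used

def next_step_max_rounds(prev_rounds_used, prev_max_rounds, fallback_max):
    """If the previous step terminated early after k rounds, allow about k/2 rounds next;
    otherwise fall back to a smaller preset maximum (policy sketched from the text)."""
    if prev_rounds_used < prev_max_rounds:
        return max(1, prev_rounds_used // 2)
    return fallback_max
```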
In some embodiments, to reduce the complexity, the neighboring positions searched in a round are reduced adaptively according to the previous search round. For example, in the 3×3×3×3 cross search scheme, there are eight neighboring positions to be searched in each search round. Suppose the current center is (a, b, c, d) and the eight neighboring positions to be checked are pa0=(a+s, b, c, d), pa1=(a−s, b, c, d), pb0=(a, b+s, c, d), pb1=(a, b−s, c, d), pc0=(a, b, c+s, d), pc1=(a, b, c−s, d), pd0=(a, b, c, d+s) and pd1=(a, b, c, d−s), respectively. The template matching costs of the eight neighboring positions are denoted as cost_pa0, cost_pa1, cost_pb0, cost_pb1, cost_pc0, cost_pc1, cost_pd0, and cost_pd1. Comparing cost_pa0 and cost_pa1, if cost_pa0 is less than cost_pa1, only the positive offset is considered for parameter a in the next round; if cost_pa0 is greater than cost_pa1, only the negative offset is considered for parameter a in the next round. The same rule is applied to parameters b, c and d based on the comparisons of cost_pb0 and cost_pb1, cost_pc0 and cost_pc1, and cost_pd0 and cost_pd1, respectively. Suppose that for the current search round, cost_pa0 is less than cost_pa1, cost_pb0 is greater than cost_pb1, cost_pc0 is less than cost_pc1 and cost_pd0 is greater than cost_pd1; then in the next round the four neighboring positions to be checked are (a′+s, b′, c′, d′), (a′, b′−s, c′, d′), (a′, b′, c′+s, d′) and (a′, b′, c′, d′−s), where (a′, b′, c′, d′) is the center position of the next round search.
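The following sketch illustrates this adaptive pruning: for each of the four parameters, only the offset sign that gave the smaller cost in the current round is kept for the next round. The function names are illustrative.

```python
# Sketch of adaptive neighbor pruning in the 3x3x3x3 cross search over (a, b, c, d).
def keep_offset_signs(costs_pos, costs_neg):
    """costs_pos/costs_neg: per-parameter costs of the +s and -s neighbors."""
    return [+1 if cp < cn else -1 for cp, cn in zip(costs_pos, costs_neg)]

def next_round_neighbors(center, step, keep_signs):
    """Build four neighbors (instead of eight) around the next-round center."""
    neighbors = []
    for i, sign in enumerate(keep_signs):
        pos = list(center)
        pos[i] += sign * step
        neighbors.append(tuple(pos))
    return neighbors
```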
In some embodiments, the minimum template matching cost of the current search round is compared with that of the last search round. If the cost reduction is small, the search process terminates. For example, suppose the cost of the last search round is A, which means the cost of the current search center is A, and the minimum cost of the neighboring positions is B at position posb, where B<A. According to the search rule, the search goes to the next round with search center posb. However, in some embodiments, if A−B<K or B>A×f, the search process terminates and posb is selected as the best position in this search step. K and f are pre-set thresholds. For example, f is a factor less than 1, like 0.95, 0.9 or 0.8.
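A one-function sketch of this early-termination check is shown below; K and f are the preset thresholds mentioned above, and the function name is illustrative.

```python
# Sketch: stop the round-based search when the cost reduction between rounds is small.
def terminate_early(prev_round_cost, best_neighbor_cost, K, f):
    a, b = prev_round_cost, best_neighbor_cost
    return (a - b) < K or b > a * f    # small absolute or relative improvement
```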
The Quantization Parameter (QP) controls the quantization in video coding. With a higher QP, a bigger quantization step is used, and thus more distortion is introduced. For a higher QP, more search rounds are needed in the refinement, which increases the encoding time. To reduce the total coding time, in some embodiments, it is proposed to use a smaller maximum search round number for a higher QP than for a lower QP. Other methods for reducing complexity may also be used at high QP, for example, reducing the neighboring positions to be searched, adaptively reducing the search rounds, or early terminating the search process dependent on the previous search process. Thus, in the disclosed embodiments, different search strategies may be adopted for different QPs. In some other embodiments, as a high QP introduces more distortion which requires more refinement, a smaller maximum search round number is set for a low QP and a greater maximum search round number is set for a high QP, to keep the coding efficiency and reduce the complexity at the same time. Other methods for reducing complexity may also be used at low QP, as the low QP case may not need too much refinement.
The search rounds may also be dependent on the sequence resolution. For example, for video sequences with a large resolution, the maximum search round number or the number of neighboring positions to be searched in each round is set to a larger value, and for video sequences with a small resolution, the maximum search round number or the number of neighboring positions to be searched in each round is set to a smaller value.
An inter-coded frame, such as a B frame or a P frame, has one or more reference frames. The time distance between the current frame and the reference frame impacts the accuracy of the inter prediction. The time distance between two frames in video coding is usually represented by the picture order count (POC) distance. Usually, with a longer POC distance, the inter prediction accuracy is lower and the motion information accuracy is also lower, and thus more refinement is needed. Thus, in the disclosed embodiments, the search process depends on the POC distance between the current frame and the reference frame. For hierarchical B frames, a frame in a higher temporal layer has a shorter POC distance to its reference frame, and a frame in a lower temporal layer has a longer POC distance to its reference frame. So, the search process can also depend on the temporal layer of the current frame. For example, the affine parameter refinement can be disabled for a high temporal layer, as a high temporal layer has a short POC distance to the reference frame and may not need refinement. In another example, a small search round number or a reduced set of neighboring search positions is used for a high temporal layer frame. Other methods to reduce the complexity of the parameter refinement could also be used for high temporal layer frames. So, in the disclosed embodiments, the parameter refinement process depends on the temporal layer or the POC distance between the current frame and the reference frame.
In some embodiments, affine model search can be used. In the above embodiments, the affine parameters are directly refined. However, affine motion includes translation, rotation and zooming. The translation is represented by the base MV, and the rotation and zooming are represented by the affine parameters. So, in some embodiments, the motion of rotation and zooming is refined. That is, based on the original affine model, an additional rotation and scaling is added. Suppose the original affine model is described by the following equation.
wherein (mvx, mvy) is the derived motion vector at sample location (x, y). Then a rotation with angle t and scaling with factor k is applied as the following equation
t and k are two parameters to be searched during the DMVR process. The current search methods can be applied to get the best value of t and k. Then the subblock MV is derived with equation (35) and subblock level motion compensation is performed to get the predictor of the current affine-coded block.
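Equation (35) is not reproduced above, so the following is only a plausible sketch under the assumption that the extra rotation (angle t) and scaling (factor k) are applied to the displaced reference position produced by the original affine model; the callable affine_mv and this particular composition are illustrative, not the normative formulation.

```python
import math

# Hedged sketch only: one plausible composition of an extra rotation/scaling with an
# existing affine MV field (NOT necessarily equation (35)). 'affine_mv' is a hypothetical
# callable returning the original model's (mvx, mvy) at sample location (x, y).
def rotated_scaled_mv(affine_mv, x, y, t, k):
    mvx, mvy = affine_mv(x, y)
    rx, ry = x + mvx, y + mvy                   # reference position under the original model
    cos_t, sin_t = math.cos(t), math.sin(t)
    rx2 = k * (cos_t * rx - sin_t * ry)         # rotate by t and scale by k
    ry2 = k * (sin_t * rx + cos_t * ry)
    return rx2 - x, ry2 - y                     # refined MV in displacement form
```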
All the existing early termination methods in MV refinement can also be applied in the parameter refinement process. For example, during the refinement process, if the SAD/SATD between two predictors is less than a threshold, the search process is terminated.
In some embodiments, CPMV search can be performed. The TM search is not conducted directly on the non-translation parameters, but on the CPMVs. As the non-translation parameters are refined, each CPMV may have a different offset in the refinement, which is different from the base MV refinement in which all the CPMVs have the same offset. If CPMV0, CPMV1 and CPMV2 are the three initial CPMVs, the refined CPMVs are denoted as CPMV0′, CPMV1′ and CPMV2′. The refined CPMVs are represented as:
where MV_offset0, MV_offset1 and MV_offset2 are the three MV offsets searched in the TM refinement process for the three CPMVs. For each search position, the sub-template MVs are derived according to the CPMVs corresponding to the search position, and the TM cost is calculated accordingly. The CPMVs producing the minimum TM cost are treated as the refined CPMVs output by the TM refinement process.
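A hedged sketch of the CPMV search is shown below: each search position assigns an individual offset to each control point MV, and the position with the minimum template cost wins. The helper names are assumptions.

```python
# Hedged sketch of the CPMV search with one independent offset per control point MV.
def search_cpmvs(init_cpmvs, offset_triples,
                 derive_subtemplate_mvs_from_cpmvs, tm_cost_for_mvs):
    best_cpmvs, best_cost = list(init_cpmvs), float("inf")
    for offsets in offset_triples:                       # one (ox, oy) offset per CPMV
        cpmvs = [(x + ox, y + oy)
                 for (x, y), (ox, oy) in zip(init_cpmvs, offsets)]
        cost = tm_cost_for_mvs(derive_subtemplate_mvs_from_cpmvs(cpmvs))
        if cost < best_cost:
            best_cpmvs, best_cost = cpmvs, cost
    return best_cpmvs
```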
All the search methods and the complexity reduction methods used in non-translation parameter search can be used in the CPMV search.
In some embodiments, an optical-flow-based search can be used to reduce the search complexity. In this scheme, the next search position is calculated from the optical flow equation.
For an affine model
construct the coefficient matrix as
where Gxi and Gyi are the horizontal and vertical gradients of the i-th sample in the template of the predicted block for a search position, and (xi, yi) is the coordinate of the i-th sample in the template of the predicted block for the search position. Then the equation is constructed as
where X is the vector of the affine model parameters and R is the residual vector of template samples. Thus, the values of X and R can be the following:
where Ri is the difference between the i-th sample in the template of the reference block and the i-th sample in the template of the current block.
By solving the equation AX=R, the resulting solution X can be used as the affine parameter for the next search position.
For each search position, the equation (40) is solved to obtain the next position, until the number of searched positions reaches a maximum or the next search position has already been searched before.
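The following is a hedged sketch of one optical-flow update step, assuming a 6-parameter model of the form mvx = a + b·x + c·y, mvy = d + e·x + f·y so that each template sample contributes one optical-flow equation; the exact layout of the coefficient matrix in equation (40) is not reproduced, and the helper name is illustrative.

```python
import numpy as np

# Hedged sketch of one optical-flow-based parameter update (least-squares solve of A X = R).
def optical_flow_affine_update(gx, gy, xs, ys, residuals):
    """gx, gy: horizontal/vertical gradients of the predicted-template samples;
    xs, ys: sample coordinates; residuals: Ri values (reference minus current template)."""
    gx = np.asarray(gx, dtype=np.float64)
    gy = np.asarray(gy, dtype=np.float64)
    xs = np.asarray(xs, dtype=np.float64)
    ys = np.asarray(ys, dtype=np.float64)
    A = np.stack([gx, gx * xs, gx * ys, gy, gy * xs, gy * ys], axis=1)
    R = np.asarray(residuals, dtype=np.float64)
    X, *_ = np.linalg.lstsq(A, R, rcond=None)     # least-squares solution of A X = R
    return X                                       # parameters for the next search position
```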
In some embodiments, template matching for bi-predicted affine block can be performed. Similar to non-affine coded block, for bi-prediction affine merge candidate, the refinement of the list 0 motion and list 1 motion can be performed iteratively.
First, the initial subblock MVs of the two reference picture lists are derived based on the initial CPMVs, and then the list 0 template (i.e., the template of the reference block in the reference picture of list 0) and the list 1 template (i.e., the template of the reference block in the reference picture of list 1) are obtained from the reference pictures by interpolation. Then TMcost0 (the SAD or SATD between the list 0 template and the current block template) and TMcost1 (the SAD or SATD between the list 1 template and the current block template) are calculated. If TMcost0 is less than TMcost1, the list 0 motion is fixed and the list 1 motion is refined in the first step. If TMcost0 is larger than TMcost1, the list 1 motion is fixed and the list 0 motion is refined in the first step.
In the template matching refinement for each list, the base MV refinement and the affine model (non-translation parameter) refinement can be applied. To refine the list 0 motion, the base MV offset MV_offset and the parameter offsets offset_a, offset_b, offset_c, offset_d searched in the refinement process are for list 0. To refine the list 1 motion, the base MV offset MV_offset and the parameter offsets offset_a, offset_b, offset_c, offset_d searched in the refinement process are for list 1.
After the list i motion is refined, the list 1−i motion can be further refined. The same base MV refinement and affine model (non-translation parameter) refinement can be applied to the list 1−i motion.
After list 1-i motion is refined, the list i motion can be further refined. The iteration can be performed to further refine the affine motion for two reference lists.
After the iterative bi-prediction affine motion refinement, the bi-prediction cost can also be compared with the uni-prediction cost to determine whether to convert the bi-predicted affine block into a uni-predicted affine block.
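A hedged sketch of the iterative bi-prediction refinement described above is given below: the list whose initial template cost is larger is refined first, and the two lists then alternate. refine_list and tm_cost are assumed helpers, and the number of iterations is illustrative.

```python
# Hedged sketch of the iterative affine TM refinement of a bi-predicted block.
def refine_bi_affine(motion0, motion1, refine_list, tm_cost, iterations=2):
    motions = [motion0, motion1]
    # If the list 0 template cost is smaller, list 0 is fixed and list 1 is refined first.
    first = 1 if tm_cost(motions[0]) <= tm_cost(motions[1]) else 0
    for it in range(iterations):
        i = (first + it) % 2                       # alternate between the two lists
        motions[i] = refine_list(i, motions[i], fixed_other=motions[1 - i])
    return motions[0], motions[1]
```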
In some embodiments, certain methods can be used to reduce the complexity of the affine non-translation parameter search. For example, the complexity reducing methods that can be applied in the affine model search or the CPMV search can also be used in the bi-predicted affine block template matching to reduce the complexity. In some embodiments, whether the template matching is performed on a bi-predicted affine block is dependent on the Quantization Parameter (QP), and if the template matching is performed on a bi-predicted affine block, the iteration number is dependent on the QP. Since the QP controls the quantization in video coding and a higher QP may introduce more coding errors, more refinement is needed in the high QP case. Thus, to reduce the complexity and maintain the coding efficiency, the template matching is disabled for bi-predicted affine blocks in the low QP case, and the iteration number is smaller in the lower QP case and greater in the higher QP case.
In some embodiments, whether the template matching is performed on a bi-predicted affine block is dependent on the video sequence resolution. For example, for video sequences with a high resolution, the template matching is disabled or the iteration number is set to a smaller value, and for video sequences with a low resolution, the template matching is enabled or the iteration number is set to a greater value. Alternatively, for video sequences with a low resolution, the template matching is disabled or the iteration number is set to a smaller value, and for a high resolution, the template matching is enabled or the iteration number is set to a greater value.
In some embodiments, whether the template matching is performed on a bi-predicted affine block is dependent on the picture order count distance and/or the temporal layer. An inter-coded frame, e.g., a B frame or a P frame, has one or more reference frames. The time distance between the current frame and the reference frame impacts the accuracy of the inter prediction. The time distance between two frames in video coding is usually represented by the picture order count (POC) distance. Usually, with a longer POC distance, the inter prediction accuracy is lower and the motion information accuracy is also lower, and thus more refinement is needed. Thus, template-matching based refinement can be enabled in the large POC distance case and disabled in the short POC distance case. That is, if the POC distance between the current frame and the reference frame is larger than a threshold, the template-matching based refinement is used; and if the POC distance between the current frame and the reference frame is smaller than the threshold, the template-matching based refinement is disabled. As another example, the iteration number is larger in the longer POC distance case and smaller in the shorter POC distance case. That is, if the POC distance between the current frame and the reference frame is longer, the iteration number of the bi-predicted affine block refinement is greater; and if the POC distance between the current frame and the reference frame is shorter, the iteration number of the bi-predicted affine block refinement is smaller. For hierarchical B frames, a frame in a higher temporal layer has a shorter POC distance to its reference frame, and a frame in a lower temporal layer has a longer POC distance to its reference frame. Thus, whether to enable or disable the template-matching based refinement on a bi-predicted affine block may depend on the temporal layer of the current frame. For example, template matching on a bi-predicted affine block can be disabled for a high temporal layer, because the higher temporal layer has a shorter POC distance to the reference frame and may not need refinement, while template matching on a bi-predicted affine block can be enabled for a lower temporal layer, because the lower temporal layer has a longer POC distance to the reference frame and needs refinement. As another example, the iteration number of the template-matching based refinement for a bi-predicted affine block can be set to a smaller value for a higher temporal layer, and set to a greater value for a lower temporal layer. Consistent with the disclosed embodiments, other methods to reduce the complexity can be used for the higher temporal layer frames, and are not limited by the present disclosure.
In some embodiments, base MV refinement and affine model (non-translation parameter) refinement can be combined.
The base MV refinement and affine model (i.e., affine non-translation parameter) refinement can be applied to an affine coded block at the same time.
In some embodiments, the base MV refinement and affine non-translation parameter refinement are performed sequentially as in
In some embodiments, the base MV refinement and the affine non-translation parameter refinement are performed in parallel. For example,
When affine TM is applied on the bi-prediction block, the iterative refinement can be combined with the base MV refinement and non-translation parameter refinement. For example,
To reduce the complexity, in some embodiments, the non-translation parameter refinement is not applied on bi-prediction blocks. That is, for a uni-prediction affine block, both the base MV refinement and the non-translation parameter refinement are applied, and for a bi-prediction affine block, only the base MV refinement is applied. In some other embodiments, the non-translation parameter refinement is not applied on uni-prediction blocks. That is, for a bi-prediction affine block, both the base MV refinement and the non-translation parameter refinement are applied, and for a uni-prediction affine block, only the base MV refinement is applied. So, whether to apply the non-translation parameter refinement is dependent on the prediction direction.
In some other embodiments, the TM cost obtained in the TM process is used to determine whether to skip or continue the following TM process. For example, in the case where the base MV refinement and the non-translation parameter refinement are performed sequentially, denote the TM cost of the initial motion as cost0 and the TM cost after the base MV refinement as cost1. If cost1<cost0, or cost1<k×cost0 where k is a factor less than 1, the non-translation parameter refinement is performed; otherwise the non-translation parameter refinement is skipped. The condition cost1<cost0 or cost1<k×cost0 means the base MV refinement does improve the affine motion for the current block, so TM may be suitable for the current block and the non-translation parameter refinement is worth performing. In another example, in the case where the base MV refinement and the non-translation parameter refinement are performed sequentially, denote the TM cost of the initial motion as cost0 and the TM cost after the base MV refinement as cost1. If cost1<h×cost0 where h is a factor less than 1, the non-translation parameter refinement is skipped; otherwise the non-translation parameter refinement is performed. The condition cost1<h×cost0 means the base MV refinement already improves the affine model significantly and thus the non-translation parameter refinement may not be needed.
After the base MV refinement and the non-translation parameter refinement, the TM cost is denoted as costA, and costA can be compared with the TM cost of the initial motion, cost0. Only if costA<h×cost0, where h is a factor less than 1, the refined affine motion is used for the motion compensation; otherwise the initial motion is used for the motion compensation.
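A small sketch of these TM-cost-based gating rules is shown below, following the first example above (the non-translation refinement is continued when the base MV refinement reduces the cost) together with the final check on costA; the factors k and h are illustrative values less than 1.

```python
# Hedged sketch of TM-cost gating around the affine refinements.
def gate_non_translation_refinement(cost0, cost1, k=0.9):
    """Run the non-translation parameter refinement only if base MV refinement helped."""
    return cost1 < k * cost0

def use_refined_motion(cost0, costA, h=0.75):
    """Keep the refined affine motion only if it clearly beats the initial motion."""
    return costA < h * cost0
```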
In some embodiments, one or more high-level control flags can be used for the template-based refinement. For example, to control the template-matching based refinement, a control flag can be signaled in a sequence parameter set (SPS). The value of the flag can be set by the encoder and signaled to the decoder, to indicate whether the TM based refinement is enabled or disabled. When the flag is equal to 1, TM is enabled for the sequence; and when the flag is equal to 0, TM is disabled for the sequence. The encoder has the flexibility to set the value of the flag. For example, to reduce the complexity, the encoder may set the flag to 0 in the low QP case and set the flag to 1 in the high QP case. Consistent with the disclosed embodiments, the encoder may set the flag in other ways, and the present disclosure does not limit the specific ways of setting the flag values.
In some embodiments, there can be multiple control flags in the SPS to control the template-matching based refinement. For example, a first SPS flag can be used to control the template-matching based refinement for a conventional inter-prediction block, a second SPS flag can be used to control the template-matching based refinement for the affine parameters of an affine coded block, a third SPS flag can be used to control the template-matching based refinement for a bi-predicted affine block, and/or a fourth SPS flag can be used to control the template-matching based refinement for a subblock temporal motion vector prediction (SbTMVP) block.
In some embodiments, to have a finer control granularity, another control flag can be signaled in the picture parameter set (PPS) to control the template-matching based refinement at the picture level. Thus, different frames within one sequence may have different choices: TM can be enabled on some frames and disabled on the other frames. Similar to the control flag(s) in the SPS, multiple PPS control flags can be used for the TM based refinement. For example, a first PPS flag can be used to control the template-matching based refinement for a conventional inter-prediction block, a second PPS flag can be used to control the template-matching based refinement for the affine parameters of an affine coded block, a third PPS flag can be used to control the template-matching based refinement for a bi-predicted affine block, and/or a fourth PPS flag can be used to control the template-matching based refinement for a subblock temporal motion vector prediction (SbTMVP) block.
The embodiments described in the present disclosure can be freely combined.
It is contemplated that the motion information refinement methods described in the present disclosure can be performed by a decoder (e.g., by process 300A of
In some embodiments, a non-transitory computer-readable storage medium storing a bitstream is also provided. The bitstream can be encoded and decoded according to the disclosed motion template-matching-based motion refinement methods.
The embodiments may further be described using the following clauses:
1. A method of encoding video content, the method comprising:
2. The method according to clause 1, wherein refining the motion vectors of the plurality of subblocks comprises matching the plurality of sub-templates to a template of the target coding block.
3. The method according to clause 1, wherein determining the plurality of sub-templates comprises:
4. The method according to clause 1, wherein determining the plurality of sub-templates comprises:
5. The method according to clause 1, wherein the plurality of subblocks comprises left boundary subblocks and top boundary subblocks of the target coding block, and determining the plurality of sub-templates comprises:
6. The method according to clause 1, wherein refining the motion vectors of the plurality of subblocks further comprises:
7. The method according to clause 1, wherein refining the motion vectors of the plurality of subblocks further comprises:
8. The method according to clause 1, wherein refining the motion vectors of the plurality of subblocks further comprises:
9. The method according to clause 1, wherein refining the motion vectors of the plurality of subblocks comprises:
10. The method according to clause 1, wherein refining the motion vectors of the plurality of subblocks comprises:
11. The method according to clause 1, wherein the motion vectors of the plurality of subblocks comprises a first set of motion vectors associated with a reference picture list 0 and a second set of motion vectors associated with a reference picture list 1, wherein the method further comprises:
12. A method of decoding a bitstream associated with video content, the method comprising:
13. The method according to clause 12, wherein refining the motion vectors of the plurality of subblocks comprises matching the plurality of sub-templates to a template of the target coding block.
14. The method according to clause 12, wherein determining the plurality of sub-templates comprises:
15. The method according to clause 12, wherein determining the plurality of sub-templates comprises:
16. The method according to clause 12, wherein the plurality of subblocks comprises left boundary subblocks and top boundary subblocks of the target coding block, and determining the plurality of sub-templates comprises:
17. The method according to clause 12, wherein refining the motion vectors of the plurality of subblocks further comprises:
18. The method according to clause 12, wherein refining the motion vectors of the plurality of subblocks further comprises:
19. The method according to clause 12, wherein refining the motion vectors of the plurality of subblocks further comprises:
20. The method according to clause 12, wherein refining the motion vectors of the plurality of subblocks comprises:
21. The method according to clause 12, wherein refining the motion vectors of the plurality of subblocks comprises:
22. The method according to clause 12, wherein the motion vectors of the plurality of subblocks comprises a first set of motion vectors associated with a reference picture list 0 and a second set of motion vectors associated with a reference picture list 1, wherein the method further comprises:
23. A method of storing a bitstream associated with video content, the method comprising:
24. The method according to clause 23, wherein refining the motion vectors of the plurality of subblock comprises matching the plurality of sub-templates to a template of the target coding block.
25. The method according to clause 23, wherein determining the plurality of sub-templates comprises:
26. The method according to clause 23, wherein determining the plurality of sub-templates comprises:
27. The method according to clause 23, wherein the plurality of subblocks comprises left boundary subblocks and top boundary subblocks of the target coding block, and determining the plurality of sub-templates comprises:
28. The method according to clause 23, wherein refining the motion vectors of the plurality of subblocks further comprises:
29. The method according to clause 23, wherein refining the motion vectors of the plurality of subblocks further comprises:
30. The method according to clause 23, wherein refining the motion vectors of the plurality of subblocks further comprises:
31. The method according to clause 23, wherein refining the motion vectors of the plurality of subblocks comprises:
32. The method according to clause 23, wherein refining the motion vectors of the plurality of subblocks comprises:
33. The method according to clause 23, wherein the motion vectors of the plurality of subblocks comprises a first set of motion vectors associated with a reference picture list 0 and a second set of motion vectors associated with a reference picture list 1, wherein the method further comprises:
34. A data signal representing a bitstream comprising coded information for decoding according to:
35. The data signal according to clause 34, wherein refining the motion vectors of the plurality of subblocks comprises matching the plurality of sub-templates to a template of the target coding block.
36. The data signal according to clause 34, wherein determining the plurality of sub-templates comprises:
37. The data signal according to clause 34, wherein determining the plurality of sub-templates comprises:
38. The data signal according to clause 34, wherein the plurality of subblocks comprises left boundary subblocks and top boundary subblocks of the target coding block, and determining the plurality of sub-templates comprises:
39. The data signal according to clause 34, wherein refining the motion vectors of the plurality of subblocks further comprises:
40. The data signal according to clause 34, wherein refining the motion vectors of the plurality of subblocks further comprises:
41. The data signal according to clause 34, wherein refining the motion vectors of the plurality of subblocks further comprises:
42. The data signal according to clause 34, wherein refining the motion vectors of the plurality of subblocks comprises:
43. The data signal according to clause 34, wherein refining the motion vectors of the plurality of subblocks comprises:
44. The data signal according to clause 34, wherein the motion vectors of the plurality of subblocks comprises a first set of motion vectors associated with a reference picture list 0 and a second set of motion vectors associated with a reference picture list 1, wherein the method further comprises:
45. A computer readable medium storing a bitstream, wherein the bitstream is generated according to:
46. The computer readable medium according to clause 45, wherein refining the motion vectors of the plurality of subblocks comprises matching the plurality of sub-templates to a template of the target coding block.
47. The computer readable medium according to clause 45, wherein determining the plurality of sub-templates comprises:
48. The computer readable medium according to clause 45, wherein determining the plurality of sub-templates comprises:
49. The computer readable medium according to clause 45, wherein the plurality of subblocks comprises left boundary subblocks and top boundary subblocks of the target coding block, and determining the plurality of sub-templates comprises:
50. The computer readable medium according to clause 45, wherein refining the motion vectors of the plurality of subblocks further comprises:
51. The computer readable medium according to clause 45, wherein refining the motion vectors of the plurality of subblocks further comprises:
52. The computer readable medium according to clause 45, wherein refining the motion vectors of the plurality of subblocks further comprises:
53. The computer readable medium according to clause 45, wherein refining the motion vectors of the plurality of subblocks comprises:
54. The computer readable medium according to clause 45, wherein refining the motion vectors of the plurality of subblocks comprises:
55. The computer readable medium according to clause 45, wherein the motion vectors of the plurality of subblocks comprises a first set of motion vectors associated with a reference picture list 0 and a second set of motion vectors associated with a reference picture list 1, wherein the method further comprises:
In some embodiments, a non-transitory computer-readable storage medium including instructions is also provided, and the instructions may be executed by a device (such as the disclosed encoder and decoder), for performing the above-described methods. Common forms of non-transitory media include, for example, a floppy disk, a flexible disk, a hard disk, a solid state drive, magnetic tape, or any other magnetic data storage medium, a CD-ROM, any other optical data storage medium, any physical medium with patterns of holes, a RAM, a PROM, an EPROM, a FLASH-EPROM or any other flash memory, NVRAM, a cache, a register, any other memory chip or cartridge, and networked versions of the same. The device may include one or more processors (CPUs), an input/output interface, a network interface, and/or a memory.
It should be noted that, the relational terms herein such as “first” and “second” are used only to differentiate an entity or operation from another entity or operation, and do not require or imply any actual relationship or sequence between these entities or operations. Moreover, the words “comprising,” “having,” “containing,” and “including,” and other similar forms are intended to be equivalent in meaning and be open ended in that an item or items following any one of these words is not meant to be an exhaustive listing of such item or items, or meant to be limited to only the listed item or items.
As used herein, unless specifically stated otherwise, the term “or” encompasses all possible combinations, except where infeasible. For example, if it is stated that a database may include A or B, then, unless specifically stated otherwise or infeasible, the database may include A, or B, or A and B. As a second example, if it is stated that a database may include A, B, or C, then, unless specifically stated otherwise or infeasible, the database may include A, or B, or C, or A and B, or A and C, or B and C, or A and B and C.
It is appreciated that the above-described embodiments can be implemented by hardware, or software (program codes), or a combination of hardware and software. If implemented by software, it may be stored in the above-described computer-readable media. The software, when executed by the processor can perform the disclosed methods. The computing units and other functional units described in the present disclosure can be implemented by hardware, or software, or a combination of hardware and software. One of ordinary skill in the art will also understand that multiple ones of the above described modules/units may be combined as one module/unit, and each of the above described modules/units may be further divided into a plurality of sub-modules/sub-units.
In the foregoing specification, embodiments have been described with reference to numerous specific details that can vary from implementation to implementation. Certain adaptations and modifications of the described embodiments can be made. Other embodiments can be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims. It is also intended that the sequence of steps shown in figures are only for illustrative purposes and are not intended to be limited to any particular sequence of steps. As such, those skilled in the art can appreciate that these steps can be performed in a different order while implementing the same method.
In the drawings and specification, there have been disclosed exemplary embodiments. However, many variations and modifications can be made to these embodiments. Accordingly, although specific terms are employed, they are used in a generic and descriptive sense only and not for purposes of limitation.
The disclosure claims the benefits of priority to U.S. Provisional Application No. 63/587,492, filed on Oct. 3, 2023; U.S. Provisional Application No. 63/619,059, filed on Jan. 9, 2024; and U.S. Provisional Application No. 63/569,681, filed on Mar. 25, 2024. All the claimed provisional applications are incorporated herein by reference in their entireties.