This disclosure describes a set of advanced video/streaming coding and decoding technologies. More specifically, the disclosed technology involves enhancements to the signaling and entropy coding of a zero residual flag, or a zero transform coefficient flag, associated with a transform block.
Uncompressed digital video can include a series of pictures and can have demanding bitrate requirements for storage, data processing, and transmission bandwidth in streaming applications. One purpose of video coding and decoding can be the reduction of signaling overhead in the video bitstream through various compression and encoding techniques.
The present disclosure describes various embodiments of methods, apparatus, and computer-readable storage media for enhancing the signaling and entropy coding of a zero residual flag or a zero transform coefficient flag.
According to one aspect, an embodiment of the present disclosure provides a method for decoding a video bitstream, performed by a decoder. The method includes receiving the video bitstream comprising a current picture, the current picture comprising a current block, and the current block comprising a current transform block; extracting a syntax element for a skip transform flag, the skip transform flag indicating whether the current transform block has all zero coefficients, noting that the syntax element extracted in this step may be in a format (e.g., raw video bits) that needs to be entropy decoded; deriving a context for entropy decoding the skip transform flag based on at least one of: a prediction mode of the current transform block; quantization index information for the current transform block; whether the current transform block is calculated by a secondary transform that is applied jointly on a transform block of a second component, the second component being different from a first component to which the current transform block belongs; or whether the current transform block is an output of a transform applied jointly on a residual block of the first component and a residual block of the second component; decoding the skip transform flag using the context; and reconstructing the current block based on the skip transform flag.
According to another aspect, an embodiment of the present disclosure provides a method for encoding a video bitstream, performed by an encoder. The method includes determining a skip transform flag indicating whether a current transform block in a current block of a current picture has all zero coefficients; deriving a context for entropy encoding the skip transform flag based on at least one of: a prediction mode of the current transform block; quantization index information for the current transform block; whether the current transform block is calculated by a secondary transform that is applied jointly on a transform block of a second component, the second component being different from a first component to which the current transform block belongs; or whether the current transform block is an output of a transform applied jointly on a residual block of the first component and a residual block of the second component; entropy encoding the skip transform flag using the derived context; and constructing the current block based on the encoded skip transform flag.
According to another aspect, an embodiment of the present disclosure provides a method for processing a visual media file. The method includes performing a conversion between a visual media file and a bitstream of visual media data, wherein the bitstream comprises a current transform block in a current block of a current picture, wherein the bitstream comprises a syntax element for a skip transform flag indicating whether the current transform block has all zero coefficients, and wherein the skip transform flag is entropy encoded using a context that is derived based on at least one of: a prediction mode of the current transform block; quantization index information for the current transform block; whether the current transform block is calculated by a secondary transform that is applied jointly on a transform block of a second component, the second component being different from a first component to which the current transform block belongs; or whether the current transform block is an output of a transform applied jointly on a residual block of the first component and a residual block of the second component.
In another aspect, an embodiment of the present disclosure provides an apparatus/decoder for decoding a video bitstream. The apparatus/decoder includes a memory storing instructions; and a processor in communication with the memory. When the processor executes the instructions, the processor is configured to cause the apparatus/decoder to perform the above methods for video decoding and/or encoding.
In another aspect, an embodiment of the present disclosure provides a non-transitory computer-readable medium storing instructions which, when executed by a computer for video decoding and/or encoding, cause the computer to perform the above methods for video decoding and/or encoding.
The above and other aspects and their implementations are described in greater detail in the drawings, the descriptions, and the claims.
Further features, the nature, and various advantages of the disclosed subject matter will be more apparent from the following detailed description and the accompanying drawings in which:
The invention will now be described in detail hereinafter with reference to the accompanying drawings, which form a part of the present invention, and which show, by way of illustration, specific examples of embodiments. Please note that the invention may, however, be embodied in a variety of different forms and, therefore, the covered or claimed subject matter is intended to be construed as not being limited to any of the embodiments set forth below. Please also note that the invention may be embodied as methods, devices, components, or systems. Accordingly, embodiments of the invention may, for example, take the form of hardware, software, firmware, or any combination thereof.
Throughout the specification and claims, terms may have nuanced meanings suggested or implied in context beyond an explicitly stated meaning. The phrase “in one embodiment” or “in some embodiments” as used herein does not necessarily refer to the same embodiment and the phrase “in another embodiment” or “in other embodiments” as used herein does not necessarily refer to a different embodiment. Likewise, the phrase “in one implementation” or “in some implementations” as used herein does not necessarily refer to the same implementation and the phrase “in another implementation” or “in other implementations” as used herein does not necessarily refer to a different implementation. It is intended, for example, that claimed subject matter includes combinations of exemplary embodiments/implementations in whole or in part.
In general, terminology may be understood at least in part from usage in context. For example, terms, such as “and”, “or”, or “and/or,” as used herein may include a variety of meanings that may depend at least in part upon the context in which such terms are used. Typically, “or” if used to associate a list, such as A, B or C, is intended to mean A, B, and C, here used in the inclusive sense, as well as A, B or C, here used in the exclusive sense. In addition, the term “one or more” or “at least one” as used herein, depending at least in part upon context, may be used to describe any feature, structure, or characteristic in a singular sense or may be used to describe combinations of features, structures or characteristics in a plural sense. Similarly, terms, such as “a”, “an”, or “the”, again, may be understood to convey a singular usage or to convey a plural usage, depending at least in part upon context. In addition, the term “based on” or “determined by” may be understood as not necessarily intended to convey an exclusive set of factors and may, instead, allow for existence of additional factors not necessarily expressly described, again, depending at least in part on context.
As shown in
As shown in
As shown in
A first unit may include the scaler/inverse transform unit (351). The scaler/inverse transform unit (351) may receive quantized transform coefficients as well as control information, including information indicating which type of inverse transform to use, block size, quantization factor/parameters, quantization scaling matrices, and the like, as symbol(s) (321) from the parser (320). The scaler/inverse transform unit (351) can output blocks comprising sample values that can be input into the aggregator (355).
In some cases, the output samples of the scaler/inverse transform unit (351) can pertain to an intra coded block, i.e., a block that does not use predictive information from previously reconstructed pictures, but can use predictive information from previously reconstructed parts of the current picture. Such predictive information can be provided by an intra picture prediction unit (352). In some cases, the intra picture prediction unit (352) may generate a block of the same size and shape as the block under reconstruction using surrounding block information that is already reconstructed and stored in the current picture buffer (358). The current picture buffer (358) buffers, for example, a partly reconstructed current picture and/or a fully reconstructed current picture. The aggregator (355), in some implementations, may add, on a per sample basis, the prediction information the intra prediction unit (352) has generated to the output sample information as provided by the scaler/inverse transform unit (351).
In other cases, the output samples of the scaler/inverse transform unit (351) can pertain to an inter coded, and potentially motion compensated, block. In such a case, a motion compensation prediction unit (353) can access the reference picture memory (357) based on a motion vector to fetch samples used for inter-picture prediction. After motion compensating the fetched reference samples in accordance with the symbols (321) pertaining to the block, these samples can be added by the aggregator (355) to the output of the scaler/inverse transform unit (351) (the output of unit 351 may be referred to as the residual samples or residual signal) so as to generate output sample information.
The output samples of the aggregator (355) can be subject to various loop filtering techniques in the loop filter unit (356) including several types of loop filters. The output of the loop filter unit (356) can be a sample stream that can be output to the rendering device (312) as well as stored in the reference picture memory (357) for use in future inter-picture prediction.
The video encoder (403) may receive video samples from a video source (401). According to some example embodiments, the video encoder (403) may code and compress the pictures of the source video sequence into a coded video sequence (443) in real time or under any other time constraints as required by the application. Enforcing appropriate coding speed constitutes one function of a controller (450). In some embodiments, the controller (450) may be functionally coupled to and control other functional units as described below. Parameters set by the controller (450) can include rate control related parameters (picture skip, quantizer, lambda value of rate-distortion optimization techniques, . . . ), picture size, group of pictures (GOP) layout, maximum motion vector search range, and the like.
In some example embodiments, the video encoder (403) may be configured to operate in a coding loop. The coding loop can include a source coder (430) and a (local) decoder (433) embedded in the video encoder (403). The decoder (433) reconstructs the symbols to create the sample data in a similar manner as a (remote) decoder would, even though the embedded decoder (433) processes the coded video stream created by the source coder (430) without entropy coding (as any compression between symbols and the coded video bitstream in entropy coding may be lossless in the video compression technologies considered in the disclosed subject matter). An observation that can be made at this point is that any decoder technology, except the parsing/entropy decoding that may only be present in a decoder, also may necessarily need to be present, in substantially identical functional form, in a corresponding encoder. For this reason, the disclosed subject matter may at times focus on decoder operation, which applies to the decoding portion of the encoder. The description of encoder technologies can thus be abbreviated as they are the inverse of the comprehensively described decoder technologies. A more detailed description of the encoder is provided below only in certain areas or aspects.
During operation in some example implementations, the source coder (430) may perform motion compensated predictive coding, which codes an input picture predictively with reference to one or more previously coded pictures from the video sequence that were designated as “reference pictures.”
The local video decoder (433) may decode coded video data of pictures that may be designated as reference pictures. The local video decoder (433) replicates decoding processes that may be performed by the video decoder on reference pictures and may cause reconstructed reference pictures to be stored in a reference picture cache (434). In this manner, the video encoder (403) may locally store copies of reconstructed reference pictures that have content common with the reconstructed reference pictures that will be obtained by a far-end (remote) video decoder (absent transmission errors).
The predictor (435) may perform prediction searches for the coding engine (432). That is, for a new picture to be coded, the predictor (435) may search the reference picture memory (434) for sample data (as candidate reference pixel blocks) or certain metadata such as reference picture motion vectors, block shapes, and so on, that may serve as an appropriate prediction reference for the new picture.
The controller (450) may manage coding operations of the source coder (430), including, for example, setting of parameters and subgroup parameters used for encoding the video data.
Output of all aforementioned functional units may be subjected to entropy coding in the entropy coder (445). The transmitter (440) may buffer the coded video sequence(s) as created by the entropy coder (445) to prepare for transmission via a communication channel (460), which may be a hardware/software link to a storage device which would store the encoded video data. The transmitter (440) may merge coded video data from the video coder (403) with other data to be transmitted, for example, coded audio data and/or ancillary data streams (sources not shown).
The controller (450) may manage operation of the video encoder (403). During coding, the controller (450) may assign to each coded picture a certain coded picture type, which may affect the coding techniques that may be applied to the respective picture. For example, pictures often may be assigned as one of the following picture types: an Intra Picture (I picture), a predictive picture (P picture), a bi-directionally predictive picture (B Picture), a multiple-predictive picture. Source pictures commonly may be subdivided spatially into a plurality of sample coding blocks as described in further detail below.
For example, the video encoder (503) receives a matrix of sample values for a processing block. The video encoder (503) then determines whether the processing block is best coded using intra mode, inter mode, or bi-prediction mode using, for example, rate-distortion optimization (RDO).
In the example of
The inter encoder (530) is configured to receive the samples of the current block (e.g., a processing block), compare the block to one or more reference blocks in reference pictures (e.g., blocks in previous pictures and later pictures in display order), generate inter prediction information (e.g., description of redundant information according to inter encoding technique, motion vectors, merge mode information), and calculate inter prediction results (e.g., predicted block) based on the inter prediction information using any suitable technique.
The intra encoder (522) is configured to receive the samples of the current block (e.g., a processing block), compare the block to blocks already coded in the same picture, and generate quantized coefficients after transform, and in some cases also to generate intra prediction information (e.g., an intra prediction direction information according to one or more intra encoding techniques).
The general controller (521) may be configured to determine general control data and control other components of the video encoder (503) based on the general control data to, for example, determine the prediction mode of the block and provide a control signal to the switch (526) based on the prediction mode.
The residue calculator (523) may be configured to calculate a difference (residue data) between the received block and prediction results for the block selected from the intra encoder (522) or the inter encoder (530). The residue encoder (524) may be configured to encode the residue data to generate transform coefficients. The transform coefficients are then subject to quantization processing to obtain quantized transform coefficients. In various example embodiments, the video encoder (503) also includes a residual decoder (528). The residual decoder (528) is configured to perform an inverse transform and generate the decoded residue data. The entropy encoder (525) may be configured to format the bitstream to include the encoded block and perform entropy coding.
In the example of
The entropy decoder (671) can be configured to reconstruct, from the coded picture, certain symbols that represent the syntax elements of which the coded picture is made up. The inter decoder (680) may be configured to receive the inter prediction information, and generate inter prediction results based on the inter prediction information. The intra decoder (672) may be configured to receive the intra prediction information, and generate prediction results based on the intra prediction information. The residual decoder (673) may be configured to perform inverse quantization to extract de-quantized transform coefficients, and process the de-quantized transform coefficients to convert the residual from the frequency domain to the spatial domain. The reconstruction module (674) may be configured to combine, in the spatial domain, the residual as output by the residual decoder (673) and the prediction results (as output by the inter or intra prediction modules as the case may be) to form a reconstructed block forming part of the reconstructed picture as part of the reconstructed video.
It is noted that the video encoders (203), (403), and (503), and the video decoders (210), (310), and (610) can be implemented using any suitable technique. In some example embodiments, the video encoders (203), (403), and (503), and the video decoders (210), (310), and (610) can be implemented using one or more integrated circuits. In another embodiment, the video encoders (203), (403), and (503), and the video decoders (210), (310), and (610) can be implemented using one or more processors that execute software instructions.
Turning to block partitioning for coding and decoding, general partitioning may start from a base block and may follow a predefined ruleset, particular patterns, partition trees, or any partition structure or scheme. The partitioning may be hierarchical and recursive. After dividing or partitioning a base block following any of the example partitioning procedures or other procedures described below, or a combination thereof, a final set of partitions or coding blocks may be obtained. Each of these partitions may be at one of various partitioning levels in the partitioning hierarchy, and may be of various shapes. Each of the partitions may be referred to as a coding block (CB). For the various example partitioning implementations described further below, each resulting CB may be of any of the allowed sizes and partitioning levels. Such partitions are referred to as coding blocks because they may form units for which some basic coding/decoding decisions may be made and coding/decoding parameters may be optimized, determined, and signaled in an encoded video bitstream. The highest or deepest level in the final partitions represents the depth of the coding block partitioning tree structure. A coding block may be a luma coding block or a chroma coding block. The CB tree structure of each color may be referred to as a coding block tree (CBT). The coding blocks of all color channels may collectively be referred to as a coding unit (CU). The hierarchical structure for all color channels may be collectively referred to as a coding tree unit (CTU). The partitioning patterns or structures for the various color channels in a CTU may or may not be the same.
In some implementations, the partition tree schemes or structures used for the luma and chroma channels may not need to be the same. In other words, the luma and chroma channels may have separate coding tree structures or patterns. Further, whether the luma and chroma channels use the same or different coding partition tree structures, and the actual coding partition tree structures to be used, may depend on whether the slice being coded is a P, B, or I slice. For example, for an I slice, the chroma channels and the luma channel may have separate coding partition tree structures or coding partition tree structure modes, whereas for a P or B slice, the luma and chroma channels may share a same coding partition tree scheme. When separate coding partition tree structures or modes are applied, a luma channel may be partitioned into CBs by one coding partition tree structure, and a chroma channel may be partitioned into chroma CBs by another coding partition tree structure.
In some example implementations, a predetermined partitioning pattern may be applied to a base block. As shown in
As shown in
In some example implementations, a coding tree unit (CTU) may be split into coding units (CUs) by using a quadtree structure denoted as a coding tree to adapt to various local characteristics. The decision on whether to code a picture area using inter-picture (temporal) or intra-picture (spatial) prediction is made at the CU level. Each CU can be further split into one, two, or four prediction units (PUs) according to the PU splitting type. Inside one PU, the same prediction process is applied, and the relevant information is transmitted to the decoder on a PU basis. After obtaining the residual block by applying the prediction process based on the PU splitting type, a CU can be partitioned into transform units (TUs) according to another quadtree structure like the coding tree for the CU. In some example implementations, a CU or a TU can only be of square shape, while a PU may be of square or rectangular shape for an inter predicted block. In some example implementations, one coding block may be further split into four square sub-blocks, and a transform is performed on each sub-block, i.e., TU. Each TU can be further split recursively (using quadtree split) into smaller TUs, in a structure called a Residual Quad-Tree (RQT).
In some example implementations, at a picture boundary, an implicit quad-tree split may be employed so that a block will keep quad-tree splitting until its size fits the picture boundary.
Such quadtree splitting may be applied hierarchically and recursively to any square shaped partitions. Whether a base block or an intermediate block or partition is further quadtree split may be adapted to various local characteristics of the base block or intermediate block/partition.
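As an illustrative, non-normative sketch of the recursive quadtree splitting described above, the following function divides a square base block until each leaf either satisfies a split decision callback or fits the picture boundary. The want_split() callback and the min_size limit are hypothetical parameters introduced only for this sketch.

```python
# A minimal sketch of recursive, boundary-aware quadtree partitioning.
# want_split() and min_size are hypothetical, not from any specific codec.

def quadtree_partition(x, y, w, h, pic_w, pic_h, min_size, want_split):
    """Recursively split the square block at (x, y) of size w (= h)."""
    crosses_boundary = (x + w > pic_w) or (y + h > pic_h)
    if w <= min_size or not (crosses_boundary or want_split(x, y, w, h)):
        return [(x, y, w, h)]  # leaf coding block
    half = w // 2
    blocks = []
    for dy in (0, half):
        for dx in (0, half):
            # skip quadrants that lie entirely outside the picture
            if x + dx < pic_w and y + dy < pic_h:
                blocks += quadtree_partition(x + dx, y + dy, half, half,
                                             pic_w, pic_h, min_size, want_split)
    return blocks

# Example: a 128x128 base block over a 100x100 picture keeps splitting
# (implicitly, per the boundary rule above) until every leaf fits.
leaves = quadtree_partition(0, 0, 128, 128, 100, 100, 4, lambda *_: False)
```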
Another example implementation for partitioning of a base block into CBs, PBs, and/or TBs is further described below. For example, rather than using multiple partition unit types such as those shown in
One specific example of the quadtree with nested multi-type tree coding block structure for block partitioning of one base block is shown in
For the specific example above, the maximum luma transform size may be 64×64 and the maximum supported chroma transform size could be different from the luma at, e.g., 32×32. Even though the example CBs above in
In the specific example for partitioning of a base block into CBs above, and as described above, the coding tree scheme may support the ability for the luma and chroma to have separate block tree structures. For example, for P and B slices, the luma and chroma CTBs in one CTU may share the same coding tree structure. For I slices, for example, the luma and chroma may have separate coding block tree structures. When separate block tree structures are applied, a luma CTB may be partitioned into luma CBs by one coding tree structure, and the chroma CTBs are partitioned into chroma CBs by another coding tree structure. This means that a CU in an I slice may consist of a coding block of the luma component or coding blocks of two chroma components, and a CU in a P or B slice always consists of coding blocks of all three color components unless the video is monochrome.
When a coding block is further partitioned into multiple transform blocks, the transform blocks therein may be ordered in the bitstream following various orders or scanning manners. Example implementations for partitioning a coding block or prediction block into transform blocks, and a coding order of the transform blocks, are described in further detail below. In some example implementations, as described above, a transform partitioning may support transform blocks of multiple shapes, e.g., 1:1 (square), 1:2/2:1, and 1:4/4:1, with transform block sizes ranging from, e.g., 4×4 to 64×64. In some implementations, if the coding block is smaller than or equal to 64×64, the transform block partitioning may only apply to the luma component, such that for chroma blocks, the transform block size is identical to the coding block size. Otherwise, if the coding block width or height is greater than 64, then both the luma and chroma coding blocks may be implicitly split into multiples of min(W, 64)×min(H, 64) and min(W, 32)×min(H, 32) transform blocks, respectively.
In some example implementations of transform block partitioning, for both intra and inter coded blocks, a coding block may be further partitioned into multiple transform blocks with a partitioning depth up to a predefined number of levels (e.g., 2 levels). The transform block partitioning depth and sizes may be related. For some example implementations, a mapping from the transform size of the current depth to the transform size of the next depth is shown in Table 1 below.
Based on the example mapping of Table 1, for a 1:1 square block, the next level transform split may create four 1:1 square sub-transform blocks. Transform partitioning may stop, for example, at 4×4. As such, a transform size of 4×4 for the current depth corresponds to the same size of 4×4 for the next depth. In the example of Table 1, for a 1:2/2:1 non-square block, the next level transform split may create two 1:1 square sub-transform blocks, whereas for a 1:4/4:1 non-square block, the next level transform split may create two 1:2/2:1 sub-transform blocks.
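To make the mapping concrete, the following sketch implements the current-depth to next-depth split rule just described; the shape checks and the 4×4 stopping size follow the text above, and the function is illustrative rather than a normative table lookup.

```python
# A sketch of the Table 1 split rule described above: squares split into four
# quarters, 1:2/2:1 blocks into two squares, and 1:4/4:1 blocks into two
# 1:2/2:1 blocks, with partitioning stopping at 4x4.

def next_depth_transforms(w, h):
    """Return the sub-transform sizes produced by one more split level."""
    if w == 4 and h == 4:
        return [(4, 4)]                    # partitioning stops at 4x4
    if w == h:                             # 1:1 square -> four quarters
        return [(w // 2, h // 2)] * 4
    if w == 2 * h or h == 2 * w:           # 1:2 / 2:1 -> two squares
        s = min(w, h)
        return [(s, s)] * 2
    if w == 4 * h or h == 4 * w:           # 1:4 / 4:1 -> two 1:2 / 2:1 blocks
        return [(w // 2, h) if w > h else (w, h // 2)] * 2
    raise ValueError("unsupported transform shape")

print(next_depth_transforms(16, 16))  # four 8x8
print(next_depth_transforms(32, 16))  # two 16x16
print(next_depth_transforms(64, 16))  # two 32x16
```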
In some example implementations, for the luma component of an intra coded block, an additional restriction may be applied with respect to transform block partitioning. For example, for each level of transform partitioning, all the sub-transform blocks may be restricted to having equal size. For example, for a 32×16 coding block, a level 1 transform split creates two 16×16 sub-transform blocks, and a level 2 transform split creates eight 8×8 sub-transform blocks. In other words, the second level splitting must be applied to all first level sub-blocks to keep the transform units at equal sizes. An example of the transform block partitioning for an intra coded square block following Table 1 is shown in
In some example implementations, for the luma component of an inter coded block, the above restriction for intra coding may not be applied. For example, after the first level of transform splitting, any one of the sub-transform blocks may be further split independently with one more level. The resulting transform blocks thus may or may not be of the same size. An example split of an inter coded block into transform blocks with their coding order is shown in
As shown in
In some example implementations, for chroma component(s), some additional restriction for transform blocks may apply. For example, for chroma component(s) the transform block size can be as large as the coding block size, but not smaller than a predefined size, e.g., 8×8.
In some other example implementations, for a coding block with either width (W) or height (H) being greater than 64, both the luma and chroma coding blocks may be implicitly split into multiples of min(W, 64)×min(H, 64) and min(W, 32)×min(H, 32) transform units, respectively. Here, in the present disclosure, "min(a, b)" returns the smaller value of a and b.
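The implicit split described in this and the preceding implementation can be sketched as follows; the function simply applies the stated min(W, 64)/min(H, 64) and min(W, 32)/min(H, 32) rules and is illustrative rather than normative.

```python
# A sketch of the implicit transform-unit split described above: when a coding
# block's width or height exceeds 64, luma is tiled into min(W,64) x min(H,64)
# units and chroma into min(W,32) x min(H,32) units.

def implicit_transform_units(w, h):
    if w <= 64 and h <= 64:
        return {"luma": (w, h), "chroma": (w, h)}  # no implicit split
    return {"luma": (min(w, 64), min(h, 64)),
            "chroma": (min(w, 32), min(h, 32))}

print(implicit_transform_units(128, 64))  # luma 64x64 units, chroma 32x32 units
```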
In video coding technologies such as AV1, for each intra and inter coding block, a flag, namely the skip_txfm flag, is signaled, as shown in the following tables (Tables 2-4) indicated by the read_skip( ) function. This flag indicates whether the transform coefficients are all zero in the current coding block. If this flag is signaled with a value of 1, then the transform coefficients are all zero, and transform coefficient related syntaxes, e.g., EOB (End of Block), are not signaled but are instead derived as the values associated with an all-zero transform coefficient block. In some example implementations, for an inter coding block, signaling of this flag may depend on the skip mode flag (skip_mode). When skip_mode is true, the skip_txfm flag is not signaled but is inferred as 1; otherwise, the skip_txfm flag is signaled. Table 2 below shows example intra frame mode information syntax.
Table 3 below shows example inter frame mode info syntax.
Table 4 below shows example skip syntax.
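To complement the syntax tables, the signaling dependency described above may be sketched as follows; read_symbol() is a hypothetical stand-in for the codec's entropy-decoding primitive, and this sketch is illustrative rather than a normative read_skip( ) definition.

```python
# A sketch of the skip_txfm signaling dependency described above: when
# skip_mode is true, skip_txfm is not read from the bitstream but inferred
# as 1 (all coefficients zero); otherwise it is entropy decoded.

def read_skip(skip_mode, read_symbol, ctx):
    if skip_mode:
        return 1                   # inferred: all transform coefficients zero
    return read_symbol(ctx)        # otherwise entropy-decode skip_txfm
```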
In some example implementations, for a block such as a transform block, a skip flag may be used to indicate whether there may be transform coefficient(s) to read for this block. When the skip flag is equal to 0, it indicates that there are transform coefficient(s) to read (or the block has at least one non-zero transform coefficient). Whereas when the skip flag is equal to 1, it indicates that there are no transform coefficients to read (or the block has all zero transform coefficients).
In some example implementations, a context may be derived for entropy encoding/decoding the above skip flag, and the derivation of the context may depend on the skip flag values of the above and/or left neighboring blocks. Exemplarily, there may be a total of 3 candidate contexts, and they may be stored in an array. If neither the above nor the left neighboring block is coded with a nonzero skip flag, context value 0 (i.e., array index 0) is used. If one of the above and left neighboring blocks is coded with a nonzero skip flag, context value 1 (i.e., array index 1) is used. If both the above and left neighboring blocks are coded with a nonzero skip flag, context value 2 (i.e., array index 2) is used.
In some example implementations, the aforementioned context array may include, for example, TileSkipCdf[ctx]. Here ctx is the index which may be computed by a function as illustrated in Table 5 below.
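In a minimal sketch of this derivation, the context index is simply the count of above/left neighboring blocks coded with a nonzero skip flag, which then indexes an array of CDFs such as TileSkipCdf:

```python
# A sketch of the 3-candidate-context selection described above.

def skip_flag_ctx(above_skip, left_skip):
    """above_skip/left_skip: neighbor skip flags, or None if unavailable."""
    ctx = 0
    if above_skip:   # above neighbor exists and has a nonzero skip flag
        ctx += 1
    if left_skip:    # left neighbor exists and has a nonzero skip flag
        ctx += 1
    return ctx       # 0, 1, or 2 -> index into TileSkipCdf[ctx]
```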
In this disclosure, various embodiments are disclosed for improving video encoding/decoding technologies, including AV1, HEVC, VVC, VP9, and the like. These embodiments aim to improve entropy coding efficiency, optimize the entropy coding context, reduce signaling overhead, and enhance prediction accuracy with minimal signaling cost. Further, these embodiments may be implemented in a decoder and/or an encoder.
In video coding technologies, a block is coded by a prediction mode selected from various prediction modes. The prediction modes may include, for example, intra prediction; inter prediction; combined intra and inter prediction (CIIP); prediction using a block vector or using templates (e.g., a group of neighboring reconstruction samples of the current block, and/or a group of neighboring reconstruction samples of the candidate prediction blocks for the current block) to identify a prediction block in the reconstructed area of the same picture; a weighted average of multiple prediction modes; or a prediction mode derived from neighboring reconstructed area.
In some example implementations, CIIP combines inter prediction and intra prediction with derived weights to form a final prediction. As an example, the weights used for combining the final prediction are derived from the prediction modes of the two adjacent blocks, left and above, and only the planar mode is used as the intra prediction mode of CIIP.
In some example implementations, to reduce the cross-component redundancy, a cross-component prediction mode may be used. In this scheme, samples of one component (e.g., chroma) may be predicted based on the reconstructed samples of another component (e.g., luma). The samples used for the prediction may be co-located and may be in the same coding block or the same coding unit.
A block vector or a motion vector is used to identify a reference block. In some example implementations, the reference block is restricted to be in a picture (or frame) different from the current picture (or current frame) to which the current block belongs. In some example implementations, the reference block is restricted to be in the same picture as the current block. In some example implementations, the reference block identified by the motion vector is within a predetermined distance of the current block.
Based on investigation and statistical observation of video coding technologies such as AV1, it is discovered that the statistical characteristics of a residual block are strongly correlated with the prediction mode associated with the residual block. Under different prediction modes, the statistics are different. The statistical characteristics may include, for example, the values or magnitudes of residuals/coefficients and their distributions in the residual block. The same observation may also apply to transform coefficients and quantized transform coefficients in transform block(s). Based on this observation, it is possible to exploit this correlation to derive a context and achieve more efficient entropy coding. For example, the correlation may be utilized to derive the context for entropy coding of specific syntax(es) associated with the coding block or residual block.
In this disclosure, a skip flag is used to indicate whether a residual block (or transform block, or quantized transform block) is all zero. Note that when a block (e.g., a residual block, a transform block, or a quantized transform block) is all zero, it means the values (residuals, transform coefficients, or quantized transform coefficients) in the block are all zero. The skip flag may be coded as a syntax element. When the skip flag is true, it indicates that the associated block is all zero, and there are no transform coefficients (or quantized transform coefficients, or residuals, depending on the block type) that need to be read for the associated block.
In this disclosure, a prediction mode may be a smooth mode. For example, there are non-directional smooth intra prediction modes, which may include, for example, DC_PRED and TM_PRED. The smooth modes may further include: SMOOTH_V_PRED, SMOOTH_H_PRED, and SMOOTH_PRED.
In this disclosure, if a prediction mode is not a smooth mode, or the prediction mode generates prediction samples according to a given prediction direction, this mode is called an angular mode or a directional mode.
In this disclosure, the term block may refer to a transform block, a coded block, a prediction block, a coding block, a coding unit (CU), etc. In this disclosure, block size may refer to the block width or height, the maximum of the width and height, the minimum of the width and height, the area size (width×height), or the aspect ratio (width:height, or height:width) of the block. The term chroma block may refer to a block in any of the chrominance (color) channels. The direction of a reference frame is determined by whether the reference frame is prior to the current frame in display order or after the current frame in display order.
In this disclosure, a sample may be interpreted as the value of a pixel. It may generally refer to any component (luma or chroma).
In this disclosure, unless otherwise specified, a signaling may include one or more sub-signalings. The one or more sub-signalings may be transmitted together, or separately.
In one embodiment, the prediction mode of a current block (i.e., the block currently being encoded/decoded) is used to derive the context for entropy coding the skip flag. Alternatively, the prediction mode of a current transform block (i.e., the transform block currently being encoded/decoded) is used to derive the context for entropy coding the skip flag. For illustration purposes, the description below is made using the prediction mode of the current block, in which case all transform block(s) in the current block share the same prediction mode. The same underlying principle applies at a finer level, in which the prediction mode applies at a finer, transform block granularity, and each transform block in the current block may use its own prediction mode.
In one embodiment, the prediction mode of a current transform block (i.e., the transform block currently being encoded/decoded) is used to derive the context for entropy coding the skip flag associated with the current transform block. Alternatively, the prediction mode of the current block (i.e., the block containing one or more transform blocks, including the current transform block; refer to blocks 1102, 1104, and 1106 in
As an example,
In some example implementations, the prediction mode is categorized by whether the transform block is coded by an intra prediction mode. The intra prediction mode may include but is not limited to: a regular intra mode; a derived intra mode; or a prediction mode that references the current picture, which includes at least one of: an intra block copy mode, or an intra template matching mode. Note that the current picture is the picture currently being encoded/decoded, which includes the current block and the transform block(s) inside the current block. There may be two sets of contexts, with each set including one or more candidate contexts for entropy coding the skip flag. If the prediction mode belongs to any one of the intra prediction modes as specified here, then a context from a first set is selected; otherwise a context from a second set is selected, as sketched below. In some example implementations, the two sets have no intersection.
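The two-set selection described above may be sketched as follows. All mode names and context indices in this sketch are illustrative placeholders, not normative values.

```python
# A minimal sketch of the two disjoint context sets described above.
# Mode names and the set contents are hypothetical placeholders.

INTRA_LIKE_MODES = {"regular_intra", "derived_intra",
                    "intra_block_copy", "intra_template_matching"}

INTRA_CTX_SET = [0, 1, 2]   # first set: candidate contexts for intra-like modes
OTHER_CTX_SET = [3, 4, 5]   # second set: candidate contexts for all other modes

def select_ctx_set(prediction_mode):
    if prediction_mode in INTRA_LIKE_MODES:
        return INTRA_CTX_SET
    return OTHER_CTX_SET
```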
In some example implementations, the derived intra prediction mode refers to an intra mode derived by the decoder, based on already reconstructed information, for example, reconstructed neighboring blocks of the current block.
In some example implementations, the prediction mode is categorized by whether the transform block is coded by one of: an inter prediction mode, or a prediction mode using a block vector to identify a prediction block in the reconstructed area of the same picture. The block vector may be either signaled or derived. That is, if the transform block is coded by a mode as specified here, then a first context is selected; otherwise a second context that is different from the first context is selected.
In some example implementations, the prediction mode is categorized by whether the transform block is coded by one of: an inter prediction mode, a combined inter-intra prediction mode, or a prediction mode using a block vector to identify a prediction block in the reconstructed area of the same picture. The block vector may be either signaled or derived. A context may be similarly selected as previous embodiments based on the categorization.
In some example implementations, the prediction mode is categorized by whether the transform block is coded by a cross component mode or not.
In some example implementations, the prediction mode is categorized by whether the transform block is coded by a mode derived using texture analysis of previously coded neighboring pixels.
In some example implementations, the prediction mode is categorized by whether the transform block is coded by a mode derived using a weighted average of multiple prediction modes. The weighted average is used to combine prediction results from different prediction modes. That is, the prediction may be conducted using various different prediction modes, and the results may be combined using a weighted average.
In some example implementations, the prediction mode is categorized by whether the transform block is coded by using one of the following modes: intra prediction, single prediction inter prediction, or compound prediction inter prediction. Compound prediction inter prediction uses two reference frames and generates a prediction by combining two prediction blocks from these reference frames, whereas in single prediction inter prediction, only one reference frame is used.
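For illustration, compound inter prediction may combine its two prediction blocks as sketched below; the equal weights are an assumption for this sketch, and real codecs may derive the weights differently (e.g., from reference distance or a mask).

```python
# A sketch of compound inter prediction: two prediction blocks from two
# reference frames are combined into one prediction. Equal weights are an
# illustrative assumption, not a normative choice.

def compound_predict(pred0, pred1, w0=0.5, w1=0.5):
    """Combine two same-sized prediction blocks (lists of rows of samples)."""
    return [[int(w0 * a + w1 * b + 0.5) for a, b in zip(r0, r1)]
            for r0, r1 in zip(pred0, pred1)]

print(compound_predict([[100, 120]], [[110, 130]]))  # [[105, 125]]
```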
In one embodiment, each prediction mode, such as the prediction mode specified in the previous embodiments, may be assigned a unique value (e.g., an index value). To derive the entropy coding context for the skip flag, the prediction mode value can be used jointly with other coded information, e.g., the transform block size of the current transform block; the values of the skip flags associated with neighboring transform blocks, each of these skip flags indicating whether its associated transform block has any nonzero coefficient or not.
In some example implementations, depending on different values of prediction mode, the calculation of context value using other coded information is different. Exemplarily, a context derivation function may be used to calculate the context. For example, when current block (or current transform block) is coded by intra prediction mode, the context derivation does not depend on the values of the skip flags associated with neighboring coded blocks (i.e., the values of these skip flags are not considered in context derivation or calculation). In this case, the context derivation function does not use these skip flag values as input. For another example, when current block (or current transform block) is not coded by intra prediction mode, e.g., if the current block is coded by one of: inter prediction mode; combined intra inter prediction mode; or a prediction mode using a block vector (either signaled or derived) to identify a prediction block in the reconstructed area of the same picture, the context derivation depends on both the transform block size and the values of the skip flags associated with neighboring coded blocks.
In some example implementations, depending on different transform block sizes, the calculation of context value may be different. For example, the calculation may use various coded information, such as prediction mode of the current transform block, and/or the values of the skip flags associated with neighboring coded blocks. For different transform block size, a different context derivation function, and/or different input to the context derivation function may be used.
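Combining the two preceding implementations, a mode- and size-dependent context derivation may look like the following sketch; the size buckets and context offsets are illustrative assumptions introduced only for this example.

```python
# A sketch of the mode-dependent context derivation described above: for intra
# blocks the neighbor skip flags are ignored, while for inter-type blocks the
# context combines the transform-block size with the neighbor skip flags.

def size_bucket(w, h):
    area = w * h
    return 0 if area <= 64 else (1 if area <= 256 else 2)

def derive_skip_ctx(is_intra, tx_w, tx_h, above_skip, left_skip):
    if is_intra:
        # context depends on the transform size only, not on neighbors
        return size_bucket(tx_w, tx_h)
    # inter / CIIP / block-vector modes: combine size with neighbor flags
    neighbor_cnt = int(bool(above_skip)) + int(bool(left_skip))
    return 3 + size_bucket(tx_w, tx_h) * 3 + neighbor_cnt
```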
In one embodiment, the context derivation may depend on whether a secondary transform is applied jointly on the transform coefficient blocks of two components (e.g., the two chroma components, a luma component and a chroma component). In some example implementations, the transform coefficient blocks of the two components may be co-located (i.e., these transform coefficient blocks have a same spatial location).
In one embodiment, the context derivation may depend on whether a transform is applied jointly on the residual blocks of two components (e.g., the two chroma components). The jointly applied transform may include a primary transform, and the residual blocks may or may not go through a secondary transform.
In one embodiment, the quantization index information may be used to derive the context for entropy coding the skip flag. The quantization index information may include a base quantization index (e.g., reference quantization index; picture quantization index; or frame quantization index).
In some example implementations, it is observed that non-zero coefficients are more probable in a transform block with a larger base quantization index, and zero coefficients are more probable in a transform block with a smaller base quantization index. Therefore, the probability for a transform block with a smaller base quantization index to have all zero coefficients is higher compared with a transform block with a larger base quantization index.
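A context derived from quantization index information may, for example, bucket the base quantization index as in the sketch below; the thresholds are illustrative assumptions rather than values from any codec.

```python
# A sketch of deriving a skip-flag context from the base quantization index,
# per the observation above. Bucket thresholds are illustrative assumptions;
# a real codec could tune them or use the index more directly.

def qindex_ctx(base_q_idx, thresholds=(64, 128, 192)):
    ctx = 0
    for t in thresholds:
        if base_q_idx > t:
            ctx += 1
    return ctx  # 0..3: coarse bucket of the base quantization index

print(qindex_ctx(40), qindex_ctx(150))  # 0 2
```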
In some example implementations, the quantization index information may come from the already available information, such as the already coded/decoded blocks in the picture.
In this disclosure, embodiments and/or implementations may be described from the decoder side. The same underlying principles apply equally to the encoder side, and vice versa. Additionally, the same underlying principles apply equally to an encoded video bitstream. For example, the same principles may apply for performing a conversion between a media source (e.g., a media file) and a bitstream (i.e., a video bitstream), where the bitstream may be included in visual media data.
In this disclosure, the embodiments are described for exemplary purposes. Various embodiments and/or implementations described in the present disclosure may be performed separately or combined in any order. The described features, advantages, and characteristics of the present solution may be combined in any suitable manner in one or more embodiments. One of ordinary skill in the relevant art will recognize, in light of the description herein, that the present solution may be practiced without one or more of the specific features or advantages of a particular embodiment. In other instances, additional features and advantages may be recognized in certain embodiments that may not be present in all embodiments of the present solution. Further, each of the methods (or embodiments), encoder, and decoder may be implemented by processing circuitry (e.g., one or more processors or one or more integrated circuits). The one or more processors execute a program that is stored in a non-transitory computer-readable medium. In the present disclosure, the term block may be interpreted as a prediction block, a coding block, or a coding unit (CU). In the above exemplary methods for deriving the context for entropy coding/decoding the skip flag, the specified "based on" information may be the only information needed for the derivation; alternatively, the specified "based on" information may include additional information.
An exemplary method for decoding a current block in a video bitstream following the principles described in the above embodiments may include a portion or all of the following steps: step 1: receiving the video bitstream comprising a current transform block in a current block of a current picture; step 2: extracting a syntax element for a skip transform flag, and/or determining a skip transform flag, the skip transform flag indicating whether the current transform block has all zero coefficients; step 3: deriving a context for entropy decoding the skip transform flag based on at least one of: a prediction mode of the current transform block; quantization index information for the current transform block; whether the current transform block is calculated by a secondary transform that is applied jointly on a transform block of a second component, the second component being different from a first component to which the current transform block belongs; and whether the current transform block is an output of a transform applied jointly on a residual block of the first component and a residual block of the second component; step 4: entropy decoding the skip transform flag using the derived context; and step 5: reconstructing the current block based on the skip transform flag.
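The five steps above can be tied together as in the following sketch. Every codec stage is passed in as a callable so the control flow runs on its own; the trivial stand-ins used in the example are hypothetical placeholders, not a real entropy decoder.

```python
# A minimal sketch of the five decoding steps above, with each codec stage
# injected as a callable so the skip-flag control flow is runnable by itself.

def decode_transform_block(bits, derive_ctx, entropy_decode,
                           decode_coeffs, reconstruct, info):
    ctx = derive_ctx(info)                    # step 3: derive the context
    skip_txfm = entropy_decode(bits, ctx)     # steps 2+4: extract and decode flag
    if skip_txfm:                             # all-zero block: nothing to read
        coeffs = [[0] * info["w"] for _ in range(info["h"])]
    else:
        coeffs = decode_coeffs(bits, info)
    return reconstruct(coeffs, info)          # step 5: reconstruct the block

# Exercise the flow with trivial stand-ins (all hypothetical).
info = {"w": 4, "h": 4, "mode": "intra", "q_idx": 100}
out = decode_transform_block(
    bits=iter([1]),
    derive_ctx=lambda i: 0 if i["mode"] == "intra" else 1,
    entropy_decode=lambda b, ctx: next(b),    # pretend-decode one symbol
    decode_coeffs=lambda b, i: [[0] * i["w"] for _ in range(i["h"])],
    reconstruct=lambda c, i: c,
    info=info,
)
assert out == [[0] * 4 for _ in range(4)]     # skip_txfm == 1 -> all-zero block
```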
Operations above may be combined or arranged in any amount or order, as desired. Two or more of the steps and/or operations may be performed in parallel. Embodiments and implementations in the disclosure may be used separately or combined in any order. Steps in one embodiment/method may be split to form multiple sub-methods, each of the sub-methods may be independent of other steps in the embodiment and may form a standalone solution. Further, each of the methods (or embodiments), an encoder, and a decoder may be implemented by processing circuitry (e.g., one or more processors or one or more integrated circuits). In one example, the one or more processors execute a program that is stored in a non-transitory computer-readable medium. Embodiments in the disclosure may be applied to a luma block or a chroma block. The term block may be interpreted as a prediction block, a coding block, or a coding unit, i.e. CU. The term block here may also be used to refer to the transform block.
The techniques described above can be implemented as computer software using computer-readable instructions and physically stored in one or more computer-readable media. For example,
The computer software can be coded using any suitable machine code or computer language that may be subject to assembly, compilation, linking, or like mechanisms to create code comprising instructions that can be executed directly, or through interpretation, micro-code execution, and the like, by one or more computer central processing units (CPUs), Graphics Processing Units (GPUs), and the like.
The instructions can be executed on various types of computers or components thereof, including, for example, personal computers, tablet computers, servers, smartphones, gaming devices, internet of things devices, and the like.
The components shown in
Computer system (1800) may include certain human interface input devices. Input human interface devices may include one or more of (only one of each depicted): keyboard (1801), mouse (1802), trackpad (1803), touch screen (1810), data-glove (not shown), joystick (1805), microphone (1806), scanner (1807), camera (1808).
Computer system (1800) may also include certain human interface output devices. Such human interface output devices may be stimulating the senses of one or more human users through, for example, tactile output, sound, light, and smell/taste. Such human interface output devices may include tactile output devices (for example tactile feedback by the touch-screen (1810), data-glove (not shown), or joystick (1805), but there can also be tactile feedback devices that do not serve as input devices), audio output devices (such as: speakers (1809), headphones (not depicted)), visual output devices (such as screens (1810), including CRT screens, LCD screens, plasma screens, and OLED screens, each with or without touch-screen input capability and each with or without tactile feedback capability, some of which may be capable of outputting two-dimensional visual output or more than three-dimensional output through means such as stereographic output; virtual-reality glasses (not depicted), holographic displays and smoke tanks (not depicted)), and printers (not depicted).
Computer system (1800) can also include human accessible storage devices and their associated media such as optical media including CD/DVD ROM/RW (1820) with CD/DVD or the like media (1821), thumb-drive (1822), removable hard drive or solid state drive (1823), legacy magnetic media such as tape and floppy disc (not depicted), specialized ROM/ASIC/PLD based devices such as security dongles (not depicted), and the like.
Those skilled in the art should also understand that term “computer readable media” as used in connection with the presently disclosed subject matter does not encompass transmission media, carrier waves, or other transitory signals.
Computer system (1800) can also include an interface (1854) to one or more communication networks (1855). Networks can for example be wireless, wireline, optical. Networks can further be local, wide-area, metropolitan, vehicular and industrial, real-time, delay-tolerant, and so on. Examples of networks include local area networks such as Ethernet, wireless LANs, cellular networks to include GSM, 3G, 4G, 5G, LTE and the like, TV wireline or wireless wide area digital networks to include cable TV, satellite TV, and terrestrial broadcast TV, vehicular and industrial to include CAN bus, and so forth.
Aforementioned human interface devices, human-accessible storage devices, and network interfaces can be attached to a core (1840) of the computer system (1800).
The core (1840) can include one or more Central Processing Units (CPU) (1841), Graphics Processing Units (GPU) (1842), specialized programmable processing units in the form of Field Programmable Gate Arrays (FPGA) (1843), hardware accelerators for certain tasks (1844), graphics adapters (1850), and so forth. These devices, along with Read-only memory (ROM) (1845), Random-access memory (1846), internal mass storage such as internal non-user accessible hard drives, SSDs, and the like (1847), may be connected through a system bus (1848). In some computer systems, the system bus (1848) can be accessible in the form of one or more physical plugs to enable extensions by additional CPUs, GPUs, and the like. The peripheral devices can be attached either directly to the core's system bus (1848), or through a peripheral bus (1849). In an example, the screen (1810) can be connected to the graphics adapter (1850). Architectures for a peripheral bus include PCI, USB, and the like.
The computer readable media can have computer code thereon for performing various computer-implemented operations. The media and computer code can be those specially designed and constructed for the purposes of the present disclosure, or they can be of the kind well known and available to those having skill in the computer software arts.
While this disclosure has described several exemplary embodiments, there are alterations, permutations, and various substitute equivalents, which fall within the scope of the disclosure. It will thus be appreciated that those skilled in the art will be able to devise numerous systems and methods which, although not explicitly shown or described herein, embody the principles of the disclosure and are thus within the spirit and scope thereof.
This application is based on and claims the benefit of priority to U.S. Provisional Application No. 63/529,111, filed on Jul. 26, 2023, which is herein incorporated by reference in its entirety.