Film grain is typically defined as a random optical texture in processed photographic film due to the presence of small particles of metallic silver, or dye clouds, developed from silver halide crystals that have received enough photons. In the entertainment industry, and especially in motion pictures, film grain is considered part of the creative process and intent. Thus, while digital cameras do not generate film grain, it is not uncommon for simulated film grain to be added to captured material from digital video cameras to emulate a “film look.”
Because of its random nature, film grain poses a challenge to image and video compression algorithms, since a) like random noise, it may reduce the compression efficiency of a coding algorithm used for the coding and distribution of motion pictures, and b) original film grain may be filtered and/or altered due to the lossy compression characteristics of coding algorithms, thus distorting the director's creative intent. It is therefore important when encoding motion pictures to maintain the director's intent regarding the film look of a movie while also maintaining coding efficiency during compression.
To handle film grain more efficiently, coding standards like AVC, HEVC, VVC, AV1, and the like (see Refs. [1-4]) have adopted Film Grain Technology (FGT). FGT in a media workflow consists of two major components, film grain modeling and film grain synthesis. At an encoder, film grain is removed from the content, it is modeled according to a film-grain model, and the film grain model parameters are sent in the bitstream as metadata. This part allows for more efficient coding. At a decoder, film grain is simulated according to the model parameters and re-inserted into the decoded images prior to display, thus preserving the creative intent.
The term “metadata” herein relates to any auxiliary information transmitted as part of the coded bitstream and assists a decoder to render a decoded image. Such metadata may include, but are not limited to, color space or gamut information, reference display parameters, and film grain modeling parameters, as those described herein.
Film grain technology is not limited to content which contains true film grain. By adding artificial film grain, FGT can also be used to hide compression artifacts at a decoder, which is very useful for very-low-bitrate applications, especially for mobile media. As appreciated by the inventors here, improved techniques for metadata signaling for film grain encoding and synthesis are described herein.
The approaches described in this section are approaches that could be pursued, but not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section. Similarly, issues identified with respect to one or more approaches should not be assumed to have been recognized in any prior art on the basis of this section, unless otherwise indicated.
An embodiment of the present invention is illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings, in which like reference numerals refer to similar elements, and in which:
FIG. 1A depicts an example end-to-end flow of film grain technology when film grain may be part of the original input video;
Example embodiments that relate to metadata signaling and conversion for film grain encoding are described herein. In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the various embodiments of present invention. It will be apparent, however, that the various embodiments of the present invention may be practiced without these specific details. In other instances, well-known structures and devices are not described in exhaustive detail, in order to avoid unnecessarily occluding, obscuring, or obfuscating embodiments of the present invention.
Example embodiments described herein relate to metadata signaling and conversion for film grain technology. In an embodiment, a processor receives an input video bitstream and associated input film grain information according to a first coding format. The processor parses the input film grain information to generate input film grain parameters for film grain synthesis in the first coding format. Next, it generates output film grain parameters for noise synthesis in a second coding format based on the input film grain parameters, wherein the second coding format is different than the first coding format. The processor generates output film noise according to the output film grain parameters and decodes the input video bitstream according to the first coding format to generate decoded video pictures. Finally, the processor adds the output film noise to the decoded video pictures to generate output video pictures.
In reference to existing coding standards, in AVC, HEVC and VVC (Refs. [1-3] and Ref. [6]) (collectively to be referred to as MPEG or MPEG video standards), film-grain model parameters are carried in a film-grain specific supplemental enhancement information (SEI) message. SEI messaging, including film-grain SEI messaging, is not normative. SMPTE-RDD-5-2006 (Ref. [5]), the Film Grain Technology Decoder Specification, specifies bit-accurate film grain simulation. In AV1 (Ref. [4]), film-grain model parameters are carried as part of the “Film grain params syntax” section in the bitstream. Unlike the MPEG standards, film grain synthesis in AV1 is normative.
During the decoding process (90), a video decoder (130) (e.g., an AVC, HEVC, AV1, and the like decoder) receives the coded bitstream (112) and the corresponding film-grain metadata (122), to generate a decoded video bitstream (132) and FG parameters (134), typically the same as the parameters generated in step 120 in the encoding process. A film-grain synthesis process (140) applies those FG parameters to generate synthetic film grain (142), which, when added to the decoded, film-grain-free video (132), generates the output video (152), which is a close approximation of the input video (102).
The decoding process (90) in process 100B is identical to the one in process 100A. After decoding the coded bitstream (112), a film-grain synthesis process (140) applies the extracted FG parameters to generate synthetic film grain (142), which, when added to the decoded film-grain-free video (132), generates the output video (152), which is a close approximation of the input video (102).
In AVC, HEVC, and VVC (Refs. [1-3] and Ref. [6]), collectively, for ease of discussion, to be referred to as MPEG or as MPEG video, the film grain model parameters are part of the syntax related to film grain characteristics (FGC) or film-grain (FG) SEI messaging. Film Grain Synthesis (FGS) is primarily characterized by the following set of parameters:
where
Range_0: [0, limit_model_0] for film_grain_model_id = 0;
Range_1: [−limit_model_1, limit_model_1 − 1] for film_grain_model_id = 1;
with limit_model_0 = 2^(filmGrainBitDepth[c]) − 1 and limit_model_1 = 2^(filmGrainBitDepth[c] − 1).
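These ranges can be computed as in the following sketch, assuming the symmetric signed-range convention limit_model_1 = 2^(filmGrainBitDepth[c] − 1) for the AR model (function name is illustrative):

```python
def comp_model_value_range(film_grain_bit_depth: int, film_grain_model_id: int):
    """Return the (min, max) supported range for fg_comp_model_value.

    Assumes limit_model_1 = 2**(film_grain_bit_depth - 1), i.e. a symmetric
    signed range for film_grain_model_id = 1.
    """
    if film_grain_model_id == 0:
        limit_model_0 = 2 ** film_grain_bit_depth - 1
        return (0, limit_model_0)                      # Range_0
    limit_model_1 = 2 ** (film_grain_bit_depth - 1)
    return (-limit_model_1, limit_model_1 - 1)         # Range_1
```

For example, at an 8-bit film grain bit depth, model 0 values lie in [0, 255] and AR-model (model 1) values in [−128, 127].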
MPEG video SEI messaging supports two models: a frequency filtering model and an auto-regression (AR) model. No bit-accurate FG synthesis process has been specified in the AVC, HEVC and VVC specifications. For the frequency filtering model, the SMPTE-RDD-5-2006 document (also to be referred to as RDD-5), “Film Grain Technology—Specifications for H.264 | MPEG-4 AVC Bit streams” (Ref. [5]), specifies a bit-accurate process for film grain simulation. Table 2 depicts the FGC parameters being used in Ref. [5] and their supported range.
FGC parameters are additionally constrained when used in Ref. [5], as follows:
These constraints are interpreted as follows:
The film grain synthesis in RDD-5 contains two major steps:
Film grain synthesis in AV1 is a normative process and uses an auto-regression (AR) model for the simulation of noise and an additive blend mode for adding the noise to the decoded source samples. The synthesis process is primarily controlled by grain pattern and grain intensity.
AV1 determines the FG process in the following steps: a) a random number generation process, b) a grain generation process, c) a scaling lookup initialization process, and d) an adding noise synthesis process.
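The random number generation step (a) can be illustrated by the following minimal Python sketch of a 16-bit LFSR generator in the style of the AV1 specification's get_random_number function (the class name and the example seed are illustrative):

```python
class Av1Rng:
    """Sketch of an AV1-style film-grain pseudo-random generator (16-bit LFSR)."""

    def __init__(self, grain_seed: int):
        self.reg = grain_seed & 0xFFFF  # RandomRegister, initialized from grain_seed

    def get_random_number(self, bits: int) -> int:
        r = self.reg
        # Feedback bit: XOR of register bits 0, 1, 3 and 12.
        bit = ((r >> 0) ^ (r >> 1) ^ (r >> 3) ^ (r >> 12)) & 1
        r = (r >> 1) | (bit << 15)
        self.reg = r
        # Return the top `bits` bits of the updated register.
        return (r >> (16 - bits)) & ((1 << bits) - 1)
```

Given the same seed, the generator produces the same deterministic sequence, which is what makes the normative AV1 synthesis reproducible across decoders.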
Comparing the two film grain synthesis models, MPEG video SEI supports the AR film model by setting film_grain_model_id to 1 and supports additive blend mode by setting blending_mode_id to 0. The major FGS differences between the MPEG video SEI AR model and the AV1 AR model are as follows:
Given that decoding devices may already have hardware accelerators to enable film-grain synthesis for either AV1 or MPEG video content (e.g., AVC, HEVC, and VVC), and given that film grain metadata may be generated regardless of the video coding technology, as appreciated by the inventors, transcoding between MPEG and AV1 film-grain metadata may offer users a richer viewing experience using existing content. Furthermore, proposed embodiments allow the existing MPEG film-grain (FG) SEI messaging syntax to be used as is, to allow decoders to apply AV1 film-grain synthesis on MPEG-decoded video.
If, as depicted in
Thus, given the two MPEG SEI flags: fg_characteristics_cancel_flag and fg_characteristics_persistence_flag, the following data flow in pseudo code summarizes the proposed actions according to an embodiment:
In addition, one can set the overlap flag equal to 0 or 1, to either disallow or allow spatial overlapping between film grain blocks. Because this value is signaled in the bitstream in AV1 but cannot be signaled through the MPEG FG SEI messaging, a default value of 1 is suggested for improved video quality. The AR model parameters can be mapped in a variety of ways. In one embodiment, the mapping may be indicated as follows:
In step 515, the decoder needs to decide whether the provided SEI messaging can be used to perform FG synthesis using AV1 film synthesis, and in particular, using the AR, additive, model. For example, process 200 (in
Next, in step 525, one needs to set up the parameters for the AV1 AR model, such as the value of shifts, the AR coefficients, and the like. For the shift operation, AV1 has three related parameters: grain_scaling_minus_8, ar_coeff_shift_minus_6 and grain_scale_shift. In MPEG FG, there is only one parameter, fg_log2_scale_factor (in AVC, also referred to as log2_scale_factor). In an embodiment, fg_log2_scale_factor is set equal to grain_scaling_minus_8 + 8, grain_scale_shift is set equal to 0, and ar_coeff_shift_minus_6 is set to a constant value among {0, 1, 2, 3}. For example, one can set ar_coeff_shift_minus_6 equal to 2.
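This shift-parameter mapping can be sketched as follows (the constant 2 for ar_coeff_shift_minus_6 follows the example given above; any value in {0, 1, 2, 3} would do):

```python
def map_shift_parameters(grain_scaling_minus_8: int):
    """Map the AV1 shift parameters onto the single MPEG FG SEI scale factor,
    per the embodiment described above."""
    fg_log2_scale_factor = grain_scaling_minus_8 + 8
    grain_scale_shift = 0
    ar_coeff_shift_minus_6 = 2  # fixed constant chosen from {0, 1, 2, 3}
    return fg_log2_scale_factor, grain_scale_shift, ar_coeff_shift_minus_6
```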
For the MPEG AR model parameters, fg_comp_model_value[c][i][0] models the variance or standard deviation of the film grain noise. In an embodiment, one can use it to generate the ScalingLUT table or map it to the point_y/cb/cr values described in step 540. The ScalingLUT is generated by holding a constant value within each intensity interval instead of using linear interpolation. ScalingLUT is generated as follows: The variable NumPoints[cIdx] for cIdx = 0..2 is set as follows:
To obtain values of the scaling function, the following procedure is invoked with the color plane index cIdx and the input value pointVal as inputs. The output is the value of the scaling function pointScal.
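The piecewise-constant ScalingLUT construction just described can be sketched as follows (the interval-bound array names `lower` and `upper` are illustrative; `comp_model_value_0[i]` stands for fg_comp_model_value[c][i][0]):

```python
def build_scaling_lut(lower, upper, comp_model_value_0, size=256):
    """Build a piecewise-constant scaling LUT: within each intensity interval
    [lower[i], upper[i]] the LUT holds the constant comp_model_value_0[i]
    (no linear interpolation); values outside every interval scale to 0."""
    lut = [0] * size
    for lo, hi, val in zip(lower, upper, comp_model_value_0):
        for v in range(lo, min(hi, size - 1) + 1):
            lut[v] = val
    return lut
```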
When ar_coeff_lag is set equal to 2, fg_comp_model_value[c][i][j], j=1, 2, 3, 4, 5, are set as follows:
The array aRCoeffs[cIdx][pos], for pos = 0..numPos[cIdx] − 1, is set equal to 0, except the following.
In AV1, the variable grain_seed (also to be referred to as GrainSeed) specifies the starting value for the pseudo-random number generator used in film grain synthesis. In particular, GrainSeed contains the value used to initialize a 16-bit variable RandomRegister. Without limitation, the GrainSeed generation step (530) may follow section “1.5.2.1 Seed initialization” in SMPTE RDD-5 for the luma component. An example implementation is also provided in “Section 8.5.3.2” in Appendix 1.
In SMPTE RDD-5, the seed is generated based on the variables PicOrderCnt and idr_pic_id for IDR frames (frames that can be decoded without reference to other frames). In AVC, idr_pic_id can be read from the slice header, and successive IDR frames should not have the same idr_pic_id. Additionally, any two IDR frames that are 32 or fewer frames apart (in decoding order) should not have the same idr_pic_id. This is to avoid film grain repetition in temporally-adjacent frames. However, this syntax does not exist in HEVC or VVC. In an embodiment, one may define a new “idr_pic_id”-like variable (herein named for convenience and with no limitation idr_pic_id), which can be updated by setting idr_pic_id = 0 initially and increasing idr_pic_id by 1 for every IRAP picture. A potential shortcoming of this approach is that for trick modes (e.g., fast forwarding and the like) such an idr_pic_id variable will lose proper synchronization and may have invalid values. In another embodiment, it is proposed not to use the idr_pic_id information. For example, it is suggested to put constraints on the PicOrderCnt value. For example, the PicOrderCnt value can have the following constraints: 1) successive IRAP frames shall not have the same PicOrderCnt value, and 2) any two IRAP frames which are 32 or fewer frames apart (in decoding order) shall not have the same PicOrderCnt value. This approach could be applied to future revisions of SMPTE-RDD-5 to support HEVC and VVC.
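The two proposed PicOrderCnt constraints can be checked as in the following sketch (function and variable names are illustrative; `irap_frames` lists (decode_order_index, PicOrderCnt) pairs for IRAP pictures, in decoding order):

```python
def check_poc_constraints(irap_frames):
    """Return True iff the proposed constraints hold:
    1) successive IRAP frames have distinct PicOrderCnt values, and
    2) any two IRAP frames 32 or fewer frames apart (in decoding order)
       have distinct PicOrderCnt values."""
    # Constraint 1: successive IRAP frames must differ.
    for (_, poc_a), (_, poc_b) in zip(irap_frames, irap_frames[1:]):
        if poc_a == poc_b:
            return False
    # Constraint 2: any pair within 32 frames in decoding order must differ.
    for i, (dec_i, poc_i) in enumerate(irap_frames):
        for dec_j, poc_j in irap_frames[i + 1:]:
            if dec_j - dec_i <= 32 and poc_i == poc_j:
                return False
    return True
```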
The film grain synthesis process can be described as follows. Given AR coefficients and a Gaussian sequence, a film grain template is generated first (530). With a pseudo-random generator, the decoder generates offsets for a 32×32 block (or +2 additional row/columns when overlap is used) inside the film grain template. The pseudo-random generator is initialized in the beginning of each row of 32×32 blocks to allow parallel row processing. The generator is initialized based on a GrainSeed element. The 32×32 block of the film grain is scaled, and added to the reconstructed samples, and then clipping is applied (545). For 4:2:0, the block size for chroma components is half of the luma block size horizontally and vertically. For more details, one is referred to the proposed example operations provided in Appendix 1 in “Section 8.5.3.3.”
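The per-sample scaling, addition, and clipping of step 545 can be sketched as follows (a simplification for one block of any size; the LUT-indexed scaling of the grain by the reconstructed sample's intensity, and the shift by the log2 scale factor, follow the AR-model blending described herein, and all names are illustrative):

```python
def add_grain_block(rec_block, grain_block, scaling_lut, log2_scale_factor,
                    min_val, max_val):
    """Scale a grain block by the per-intensity LUT, add it to the
    reconstructed samples, and clip to [min_val, max_val]."""
    out = []
    for rec_row, gr_row in zip(rec_block, grain_block):
        row = []
        for rec, gr in zip(rec_row, gr_row):
            scaled = (scaling_lut[rec] * gr) >> log2_scale_factor
            row.append(min(max(rec + scaled, min_val), max_val))
        out.append(row)
    return out
```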
Following step 535, as in AV1, the process continues with initializing a scaling look-up table (LUT) in step 540 and adding the noise to the current decoded picture (step 545).
In summary, the process of applying the AV1 FGS process using the MPEG SEI AR model is as follows:
Process 500B includes also the additional steps 570 and 575. In step 570, the decoder checks the value of fg_characteristics_cancel_flag. If it is set to zero, then the remaining FG SEI message is parsed and new film-synthesis parameters are generated to proceed with step 525. Otherwise, in step 575, the stored AR parameters are cleared and the film-grain synthesis and addition process is skipped.
For greater interoperability, in some embodiments, it may be desirable to translate AV1 film-grain parameters to MPEG FG SEI parameters. For example, a content creator may want to take advantage of existing film-grain synthesis hardware, or they may want to allow users to experience a similar film look regardless of whether they view MPEG- or AV1-compressed content. As before, one may establish a flag (e.g., denoted as AV12FGCenableFlag), which shall be set equal to 1 when all of the following constraints apply, and shall be set equal to 0 otherwise. When AV12FGCenableFlag is equal to 1, the AV1 film grain model parameter values may be signaled in an MPEG FG SEI message. When AV12FGCenableFlag is equal to 0, the AV1 film grain model parameter values should not be signaled in an MPEG SEI message.
When AV12FGCenableFlag is equal to 1 and the AV1 film grain model parameters are signaled in an MPEG SEI message, the following constraints apply for the values of the MPEG SEI syntax elements:
By way of example, the value of fg_comp_model_value[0][i][j] may be determined from ar_coeff_y_plus_128[pos], when present, as indicated in Table 6 for the case in which ar_coeff_lag is equal to 1, and Table 7 for the case in which ar_coeff_lag is equal to 2.
As discussed earlier, RDD-5 (Ref. [5]) provides a bit-accurate technology implementation of film-grain technology for AVC (H.264). This section provides some recommendations for an updated version of RDD-5 as it may apply to HEVC and VVC. Some of the recommendations have already been discussed earlier, thus their details may be omitted from this section. Appendix 2 of this document provides an example of proposed amendments to the RDD-5 frequency model in MPEG film-grain synthesis for HEVC and VVC.
As appreciated by the inventors, future versions of RDD-5 should consider at least the following:
Equation (1.6) in RDD-5 Now Reads
In an embodiment, a power-of-two scale factor, e.g., scale_factor = 2^x, can be determined by rounding the value of comp_model_value[c][s][0], as illustrated below
If 2^x is closest to comp_model_value[c][s][0], then the above set of equations reduces to
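The choice of the exponent x in this rounding can be sketched as follows (tie-breaking toward the larger power is an assumption, since the source does not specify it):

```python
def nearest_pow2_exponent(v: int) -> int:
    """Return x such that 2**x is the power of two closest to v > 0,
    e.g. for rounding comp_model_value[c][s][0] to a power-of-two
    scale factor. Ties round up."""
    assert v > 0
    lo = v.bit_length() - 1           # 2**lo <= v < 2**(lo + 1)
    return lo if (v - 2**lo) < (2**(lo + 1) - v) else lo + 1
```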
As appreciated by the inventors, among the places where SMPTE RDD-5 needs to be generalized to support a blockAvgSize greater than 8×8 are the routines for: 1) refinement of the grain pattern database
Each one of the references listed herein is incorporated by reference in its entirety.
Example Computer System Implementation
Embodiments of the present invention may be implemented with a computer system, systems configured in electronic circuitry and components, an integrated circuit (IC) device such as a microcontroller, a field programmable gate array (FPGA), or another configurable or programmable logic device (PLD), a discrete time or digital signal processor (DSP), an application specific IC (ASIC), and/or apparatus that includes one or more of such systems, devices or components. The computer and/or IC may perform, control, or execute instructions relating to film-grain metadata signaling and conversion in image and video coding, such as those described herein. The computer and/or IC may compute any of a variety of parameters or values that relate to film-grain metadata signaling and conversion in image and video coding described herein. The image and video embodiments may be implemented in hardware, software, firmware and various combinations thereof.
Certain implementations of the invention comprise computer processors which execute software instructions which cause the processors to perform a method of the invention. For example, one or more processors in a display, an encoder, a set top box, a transcoder, or the like may implement methods related to film-grain metadata signaling and conversion in image and video coding as described above by executing software instructions in a program memory accessible to the processors. Embodiments of the invention may also be provided in the form of a program product. The program product may comprise any non-transitory and tangible medium which carries a set of computer-readable signals comprising instructions which, when executed by a data processor, cause the data processor to execute a method of the invention. Program products according to the invention may be in any of a wide variety of non-transitory and tangible forms. The program product may comprise, for example, physical media such as magnetic data storage media including floppy diskettes, hard disk drives, optical data storage media including CD ROMs, DVDs, electronic data storage media including ROMs, flash RAM, or the like. The computer-readable signals on the program product may optionally be compressed or encrypted.
Where a component (e.g. a software module, processor, assembly, device, circuit, etc.) is referred to above, unless otherwise indicated, reference to that component (including a reference to a “means”) should be interpreted as including as equivalents of that component any component which performs the function of the described component (e.g., that is functionally equivalent), including components which are not structurally equivalent to the disclosed structure which performs the function in the illustrated example embodiments of the invention.
Example embodiments that relate to film-grain metadata signaling and conversion in image and video coding are thus described. In the foregoing specification, embodiments of the present invention have been described with reference to numerous specific details that may vary from implementation to implementation. Thus, the sole and exclusive indicator of what is the invention, and what is intended by the applicants to be the invention, is the set of claims that issue from this application, in the specific form in which such claims issue, including any subsequent correction. Any definitions expressly set forth herein for terms contained in such claims shall govern the meaning of such terms as used in the claims. Hence, no limitation, element, property, feature, advantage or attribute that is not expressly recited in a claim should limit the scope of such claim in any way. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense.
Example addendum to Section 8.5, Film grain characteristics SEI message, of VVC SEI messaging (Ref. [6]) to support film grain synthesis during MPEG decoding according to the AV1 specification.
8.5.3 Film Grain Synthesis Process for AR Model with Additive Blending Mode
The process requires the definition of the following variables:
The process requires the further constraint of the following variables:
The input of the process is the variable nBits. The output of the process is the variable randResult.
The random number generation process is specified with the function randResult = getRandNum(nBits).
The output of this process is the film grain template GrainTemplate[cIdx][y][x], where the variable cIdx is the color component index, y = 0..nCurrSh, and x = 0..nCurrSw. When cIdx is equal to 0 (luma component), nCurrSw = 82 and nCurrSh = 73. When ChromaFormatIdc = 1, nCurrSw = 44 and nCurrSh = 38 for cIdx equal to 1 or 2 (chroma components).
The array gaussianSequence contains random samples from a Gaussian distribution with zero mean and standard deviation of about 512, clipped to the range [−2048, 2047] and rounded to the nearest multiple of 4.
The variable scaleShift is set to 12 − BitDepth.
The film grain templates are generated as follows.
GrainTemplate[cIdx][y][x] = (gaussianSequence[getRandNum(11)] + 2^(scaleShift − 1)) >> scaleShift
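A sketch of the per-sample computation implied by this formula, where the addition of 2^(scaleShift − 1) before the right shift implements round-to-nearest (the guard for a zero shift at 12-bit depth is an assumption):

```python
def template_sample(gaussian_value: int, bit_depth: int) -> int:
    """Compute one grain-template sample from a Gaussian-sequence value,
    using scaleShift = 12 - BitDepth and a rounding right shift."""
    scale_shift = 12 - bit_depth
    if scale_shift <= 0:
        return gaussian_value  # assumed: no scaling needed at 12-bit depth
    return (gaussian_value + (1 << (scale_shift - 1))) >> scale_shift
```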
The variable ar_coeff_lag is set equal to 2.
The variable aRCoeffShift is set equal to 8.
The variable numPos[cIdx] is set as follows:
The array aRCoeffs[cIdx][pos], for pos = 0..numPos[cIdx] − 1, is set equal to 0, except the following.
The grain template for each cIdx is generated as follows:
8.5.3.4 scalingLut Initialization
For a color plane cIdx, the following initialization procedure is invoked to initialize ScalingLut[cIdx][256].
The variable NumPoints[cIdx] for cIdx = 0..2 is set as follows:
To obtain values of the scaling function, the following procedure is invoked with the color plane index cIdx and the input value pointVal as inputs. The output is the value of the scaling function pointScal.
8.5.3.5 Add Noise Process
Inputs to this process are the reconstructed picture prior to adding film grain, i.e., the array recPictureL and, when ChromaFormatIdc is not equal to 0, the arrays recPictureCb and recPictureCr.
Outputs of this process are the modified reconstructed picture after adding film grain, i.e., the array recPictureL and, when ChromaFormatIdc is not equal to 0, the arrays recPictureCb and recPictureCr.
The pseudo-random generator is initialized at the beginning of each row of 32×32 blocks to allow parallel row processing (denoted by the variable rowNum). The generator is initialized based on a GrainSeed element. For every 32×32 block, with the pseudo-random generator getRandNum(8), a 34×34 array grSCur[cIdx] is generated from GrainTemplate[cIdx]. grSCur[cIdx] is then assigned to the picture array grPlanes[cIdx]. The variable cIdx specifies the color component index.
When NumPoints[0] is larger than 0, the film grain is added to the luma component recPictureL.
When ChromaFormatIdc is larger than 0 and NumPoints[cIdx] is larger than 0, where cIdx is equal to 1 or 2, the film grain is added to the chroma samples of the chroma components recPictureC. recPictureC refers to recPictureCb when cIdx is equal to 1 and refers to recPictureCr when cIdx is equal to 2.
The variables minVal[cIdx] and maxVal[cIdx] are specified as follows: If fg_full_range_flag is equal to 1, minVal[cIdx] is equal to 0 and maxVal[cIdx] is equal to 2^BitDepth − 1 for cIdx equal to 0, 1, or 2.
Otherwise (fg_full_range_flag is equal to 0), minVal[cIdx] is equal to 16*2^(BitDepth − 8) for cIdx equal to 0, 1, or 2, maxVal[0] is equal to 235*2^(BitDepth − 8), and maxVal[1] and maxVal[2] are equal to 240*2^(BitDepth − 8).
Example addendum to Section 8.5, Film grain characteristics SEI message, of VVC SEI messaging (Ref. [6]) to support film grain synthesis during MPEG decoding for different picture sizes and the frequency filtering model. Proposed amendments are shown in an Italic font.
Modify the location and text of Note 3 in the following text:
Use of this SEI message requires the definition of the following variables:
As follows:
“Use of this SEI message requires the definition of the following variables:
Modify the following:
“Depending on the value of fg_model_id, the selection of the one or more intensity intervals for the sample value Idecoded[c][x][y] is specified as follows:
As follows:
“Depending on the value of fg_model_id, the selection of the one or more intensity intervals for the sample value Idecoded[c][x][y] is specified as follows:
1.1.3 Example Embodiment for a bit-accurate process for grain blending
The bit-accurate grain blending process and constraints are specified such that all decoders that conform to this version of this Specification will produce numerically identical cropped decoded output pictures.
Bitstreams conforming to the bit-accurate grain generation process shall obey the following constraints:
Inputs to the process are the decoded picture sample arrays before grain blending decPictureL, decPictureCb, and decPictureCr.
Outputs of this process are modified decoded picture sample array after grain blending blendPictureL, blendPictureCb, and blendPictureCr.
The variable BlockSize is derived as follows:
The grain blending process is derived as the following ordered steps:
1.1.3.1 Grain Pattern Database Generation
Output of this process is a 13×13×64×64 grain pattern database array GrainDb.
The function Prng(x), with x = 0..2^32 − 1, is defined as follows:
Prng(x) = (x << 1) + ((1 + ((x & (1 << 2)) > 0) + ((x & (1 << 30)) > 0)) % 2)
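A Python sketch of this generator follows. The (1 + b2 + b30) % 2 term is the XNOR of register bits 2 and 30; the wrap of the left shift to 32 bits is an assumption, since the stated domain of x is 0..2^32 − 1:

```python
MASK32 = 0xFFFFFFFF

def prng(x: int) -> int:
    """Sketch of the Prng(x) defined above: a 32-bit shift register whose
    new low bit is the XNOR of bits 2 and 30 of the previous state."""
    b2 = 1 if (x & (1 << 2)) else 0
    b30 = 1 if (x & (1 << 30)) else 0
    new_bit = (1 + b2 + b30) % 2
    return ((x << 1) & MASK32) + new_bit  # 32-bit wrap assumed
```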
The pseudo-random number generator array SeedLUT[i] with i=0..255 is specified as follows:
The Gaussian pseudo-random array gaussianLUT[m], with m=0..2047 is specified as follows:
The variable fgCutFreqH[h] = ((h + 3) << 2) − 1, with h = 0..12.
The variable fgCutFreqV[v] = ((v + 3) << 2) − 1, with v = 0..12.
The variable fgCoeffs[h][v][i][j] is initially set equal to 0, with h=0..12, v=0..12, i=0..63, j=0..63, and is derived as follows:
For given h and v, with h = 0..12, v = 0..12, the array GrainDb[h][v][i][j] is derived from fgCoeffs[h][v][i][j] with i = 0..63, j = 0..63, by invoking the transformation process as specified in Rec. ITU-T H.266 | ISO/IEC 23090-3, clause 8.7.4.4, with trType inferred to equal 0. The grain pattern database GrainDb is further refined as follows:
1.1.3.2 Grain Blending Process
Inputs to this process are the decoded picture prior to grain blending, i.e., the arrays decPictureL, decPictureCb and decPictureCr.
Outputs of this process are the modified decoded picture after grain blending, i.e., the array blendPictureL, blendPictureCb and blendPictureCr.
Depending on the value of the colour component cIdx, the following assignment is made:
For given cIdx,
1.1.3.2.1 Seed Initialization for Current Picture
Input to this process is a variable cIdx specifying the colour component.
The output of this process is the picture seed array prngArray.
The function MSB16(x), with x = 0..2^32 − 1, is defined as follows:
MSB16(x) = (x & 0xFFFF0000) >> 16
The function LSB16(x), with x = 0..2^32 − 1, is defined as follows:
LSB16(x) = x & 0x0000FFFF
The function BIT0(x), with x = 0..2^32 − 1, is defined as follows:
BIT0(x) = x & 0x1
The variable picOffset = PicOrderCnt (e.g., PicOrderCnt = PicOrderCntVal).
The variable cOffset = (cIdx == 0) ? 0 : ((cIdx == 1) ? 85 : 170)
The variable prngInit = SeedLUT[(picOffset + cOffset) % 256]
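The helper functions and per-component seed selection above can be sketched as follows (the SeedLUT contents are taken as given by the specification; a placeholder table is used here purely for illustration):

```python
def msb16(x: int) -> int:
    """Upper 16 bits of a 32-bit value."""
    return (x & 0xFFFF0000) >> 16

def lsb16(x: int) -> int:
    """Lower 16 bits of a 32-bit value."""
    return x & 0x0000FFFF

def bit0(x: int) -> int:
    """Least significant bit."""
    return x & 0x1

def prng_init(seed_lut, pic_order_cnt: int, c_idx: int) -> int:
    """Select the initial generator state for a picture and colour component:
    cOffset is 0/85/170 for luma/Cb/Cr, and the state is
    SeedLUT[(picOffset + cOffset) % 256]."""
    c_offset = 0 if c_idx == 0 else (85 if c_idx == 1 else 170)
    return seed_lut[(pic_order_cnt + c_offset) % 256]
```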
The array prngArray[i][j], with i = 0..PicHeightInBlock − 1, j = 0..PicWidthInBlock − 1, is derived as follows:
1.1.3.2.2 Grain Block Blending Process
Inputs to this process are:
Outputs of this process are the blended sample array blendSamples.
The variable picWidth = (cIdx == 0) ? PicWidthInLumaSamples : PicWidthInLumaSamples / 2
The variable picHeight = (cIdx == 0) ? PicHeightInLumaSamples : PicHeightInLumaSamples / 2
The variable intensityIntervalIdx for the current block is initialized to −1 and is derived as follows:
The derivation of decSamples for the current block is as follows:
The grainSample is further refined as follows:
Number | Date | Country | Kind |
---|---|---|---|
202141015755 | Apr 2021 | IN | national |
This application claims priority to U.S. Provisional Application No. 63/249,401, filed Sep. 28, 2021, U.S. Provisional Application No. 63/210,789, filed Jun. 15, 2021, Indian Application No. 202141015755, filed Apr. 2, 2021, and Indian Application No. 202141029381, filed Jun. 30, 2021, all of which are incorporated herein by reference in their entirety.

TECHNOLOGY

The present document relates generally to images. More particularly, an embodiment of the present invention relates to metadata signaling and conversion for film grain encoding and synthesis in images and video sequences.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US2022/022958 | 3/31/2022 | WO |
Number | Date | Country | |
---|---|---|---|
63249401 | Sep 2021 | US | |
63210789 | Jun 2021 | US |