The present disclosure generally relates to the field of video compression. For example, the present disclosure relates to compression of video sequences in dynamic groups of pictures.
Video coding (video encoding and decoding) is used in a wide range of digital video applications, for example broadcast digital TV, video transmission over internet and mobile networks, real-time conversational applications such as video chat and video conferencing, DVD and Blu-ray discs, video content acquisition and editing systems, camcorders, and security applications.
The amount of video data needed to depict even a relatively short video can be substantial, which may result in difficulties when the data is to be streamed or otherwise communicated across a communications network with limited bandwidth capacity. Thus, video data is generally compressed before being communicated across modern-day telecommunications networks. The size of a video could also be an issue when the video is stored on a storage device because memory resources may be limited. Video compression devices often use software and/or hardware at the source to code the video data prior to transmission or storage, thereby decreasing the quantity of data needed to represent digital video pictures. The compressed data is then received at the destination by a video decompression device that decodes the video data. With limited network resources and ever increasing demands of higher video quality, improved compression and decompression techniques that improve compression ratio with little to no sacrifice in picture quality are desirable.
The encoding and decoding of the video may be performed by standard video encoders and decoders, compatible with H.264/AVC, HEVC (H.265), VVC (H.266) or other video coding technologies, for example.
Transmission resources are typically limited so that compression of the transferred data may be desirable. In general, compression may be lossless (e.g. entropy coding) or lossy (e.g. applying quantization). The lossy compression typically provides a higher compression ratio. However, it is in general irreversible, i.e. some information may be irrecoverably lost.
Methods and apparatuses according to this disclosure allow compression of video sequences where synthetic frames are used in dynamic groups of pictures.
Some implementations of the present disclosure relate to compression of video sequences where synthetic frames are used in dynamic groups of pictures so as to generate, at low bit cost, a bitstream that includes only a few encoded input frames together with position indications for synthetic frames, without encoding the synthetic frames themselves.
In this disclosure a synthetic frame may be also referred to as synthesized frame. A frame may also refer to a picture or a frame of a picture or picture frame.
According to an aspect of the present disclosure, an apparatus is provided for generating a bitstream representing input frames of a video sequence, the apparatus comprising: a processing circuitry configured to generate the bitstream including: generating a synthesized frame at a first position of a first input frame based on two or more input frames; determining a quality measure for the synthesized frame; when the quality measure fulfills a predetermined condition, including an indication of said first position into a bitstream portion; and when the quality measure does not fulfill the predetermined condition, encoding content of said first input frame into the bitstream portion. Accordingly, the bit cost of the generated bitstream is reduced, improving the compression without increasing the encoding latency.
In one exemplary implementation, the generating of the synthesized frame includes interpolating the synthesized frame based on one or more input frames preceding the synthesized frame and on one or more input frames succeeding the synthesized frame in a display order. Accordingly, the generated S frame may entail time correlation from preceding and succeeding frames, and thus lead to a more accurate interpolation and thus closer match with the original frame.
For example, the generating of the synthesized frame is performed by a neural network. Accordingly, the generating of the bitstream may be performed with a trained network optimized for frame interpolation. Hence, the interpolation and thus determination of whether or not an S frame should be generated may be performed accurately. This may lead to a larger reduction of rate of the coded bitstream.
According to an implementation, the quality measure is any of peak signal to noise ratio, PSNR, resolution, bit depth, or a perceptual quality metric. Accordingly, different kinds of known quality measures may be used for the synthesized frame. A particular quality metric may be selected, based on a specific application. Using resolution and/or bit depth may be simple and thus efficient as they do not require any additional computation effort. On the other hand, PSNR is a widely used objective metric, whereas a perceptual metric (e.g. an estimator of subjective opinion score) may further reduce bitrate without compromising user experience.
In another implementation example, said first input frame, when the quality measure does not fulfill the predetermined condition, can be coded as any of an intra-predicted frame I, a unidirectional inter-prediction frame P, and a bidirectional inter-prediction frame B, corresponding respectively to frame types I, P, and B. Accordingly, an input frame to be coded may be assigned any of frame types I, B, or P, which have different bit costs. Therefore, the bit costs may be optimized based on the assigned frame type.
According to an implementation, the synthesized frame is generated from the two or more input frames, and the indication indicates said first position within a Group of Pictures, GOP, at which the synthesized frame is to be generated. Accordingly, the first position is signaled in the bitstream with a simple indication. This reduces the signaling overhead.
For example, a number of synthesized frames generated for the GOP is based on a predefined look ahead constant determining a maximum number of synthesized frames that can be generated for the GOP. Accordingly, the number of synthesized frames may be adapted within a range of the look ahead constant. Therefore, the S frame determination may be performed in a flexible manner, enabling a dynamic adaptivity of the GOP.
In a further implementation, the GOP includes two or more coded frames comprising a starting frame having the frame type I and an ending frame of the GOP, and the processing circuitry is further configured to: assign one of frame types P and B in accordance with a predefined GOP pattern of frame types to each of the frames within the GOP different from the starting frame; and encode the content of each of the frames within the GOP into the bitstream portion. Accordingly, the frames to be encoded are determined based on a GOP pattern, allowing for tuning the sequence of coded frames within a predefined GOP structure. This may be beneficial especially for applications which may require a fixed GOP size.
In a further implementation, the indication of said first position includes positions of the coded frames. Therefore, by generating a bitstream according to a predefined look ahead constant and a GOP pattern, the GOP may be adaptively enlarged.
According to an implementation, the processing circuitry is further configured to: detect a scene change based on the frames of the GOP; and assign the frame type I to the frame to be encoded at which the scene change occurs. Accordingly, frame types within the GOP may be adapted depending on the degree of scene change. Hence, the bit costs may be optimized taking into account the degree and occurrence of scene changes.
For example, said first input frame pertains to a Group of Pictures, GOP, of a predefined GOP structure, and the processing circuitry is configured to: when the quality measure does not fulfill the predetermined condition, encode the content of said first input frame into the bitstream portion with a frame type according to a GOP pattern of frame types pre-configured for said GOP; and when the quality measure fulfills the predetermined condition, not encode the content of said first input frame into the bitstream portion.
According to an exemplary implementation, the processing circuitry is configured to: determine a set of one or more positions including said first position within the GOP, the GOP including, as coded input frames, a starting frame with a start position and an ending frame with an end position, wherein the start position and the end position are in display order, generate recursively, in coding order, the synthesized frame at a current position between the start position and the end position from the starting frame and the ending frame; determine the quality measure for the synthesized frame; when the quality measure fulfills the predetermined condition, include the indication of the current position into the bitstream portion; when the quality measure does not fulfill the predetermined condition, encode the content of an input frame at the current position into the bitstream portion; and continue the recursion using the coded frames or the synthesized frames at the start position and the current position, and/or at the current position and the end position. Accordingly, the determination which frame of the GOP should be replaced is performed during the encoding process (on-the-fly determination), because both coded frames and already synthesized frames are used. Hence, pre-processing may be reduced, accelerating the generation of the bitstream.
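By way of illustration only, the following Python sketch shows one possible realization of this on-the-fly recursion, assuming a midpoint choice for the current position; the helpers interpolate, quality, encode_frame and signal_position, as well as the threshold, are hypothetical and not part of any specific codec API.

```python
# Illustrative sketch (not a normative implementation) of the recursive,
# on-the-fly determination: the frame in the middle of the current interval is
# either synthesized (only its position is signaled) or encoded, and the
# recursion continues on both halves using coded or already synthesized frames
# as references.

def process_interval(frames, start, end, ref_start, ref_end, threshold,
                     interpolate, quality, encode_frame, signal_position,
                     bitstream):
    if end - start <= 1:
        return
    current = (start + end) // 2                      # current position
    synthesized = interpolate(ref_start, ref_end)     # candidate S frame
    if quality(synthesized, frames[current]) >= threshold:
        signal_position(bitstream, current)           # S frame: no content coded
        ref_current = synthesized
    else:
        # Encode the input frame; the reconstructed frame serves as reference.
        ref_current = encode_frame(bitstream, frames[current])
    # Continue the recursion on both sub-intervals.
    process_interval(frames, start, current, ref_start, ref_current, threshold,
                     interpolate, quality, encode_frame, signal_position, bitstream)
    process_interval(frames, current, end, ref_current, ref_end, threshold,
                     interpolate, quality, encode_frame, signal_position, bitstream)
```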
In an example, the processing circuitry is further configured to put the synthesized frame and/or the coded frame at the first position into a decoded frame buffer if one or more frames depend on the frame at said first position. Accordingly, coded and/or synthesized frames are available in case of frame dependencies. Thus, the determination of S frames at higher hierarchy level may be performed while preserving frame dependencies.
According to an implementation, the processing circuitry is configured to determine a set of one or more positions, including said first position, of a next coded frame within a Group of Pictures, GOP, the GOP including, as coded input frames, a starting frame with a start position and an ending frame with an end position, wherein the start position and the end position are in display order, including: generating for a GOP size and a predefined look ahead constant recursively in coding order one or more synthesized frames at a respective current position between the start position and the end position from the starting frame and the ending frame; determining the quality measure for each of the synthesized frames; when the quality measure fulfills the predetermined condition for each of the synthesized frames: determining the ending frame as the next frame to be encoded and encoding the content of the ending frame at the end position into the bitstream portion; when the quality measure does not fulfill the predetermined condition for any of the one or more synthesized frames: continuing the recursion by bisecting the GOP size and using the start position and the end position of the input frames of the respective bisected GOP; or determining the coded input frame immediately following the starting frame in the display order as the next frame to be encoded and encoding the content of an input frame at the respective current position into the bitstream portion; wherein the predefined look ahead constant determines a maximum number of the synthesized frames that can be generated for the GOP. Accordingly, the number of synthesized frames that are to be generated for the GOP is determined and dynamically adapted during the encoding process.
For example, the GOP size and bisected GOP sizes correspond to differences in position between successive next frames to be encoded into the bitstream portion. Accordingly, positions of next coded frames can be determined easily from GOP sizes.
In another implementation, the one or more preceding frames and the one or more succeeding frames in display order are neighboring frames of the synthesized frame, respectively.
Moreover, the number of the one or more neighboring frames may be any number ranging from 1 to 64. Accordingly, the S frame may be generated via simple bi-directional interpolation or higher order schemes by including an increased number of preceding and succeeding frames, which may be neighboring the S frame. As a result, an S frame may be generated entailing time correlation from preceding and/or succeeding frames to varying degrees. Hence, the inter-frame correlation may be tunable e.g. depending on the content of the S frame and/or content of the preceding and succeeding frames.
According to an aspect of the present disclosure, an apparatus is provided for generating frames of a video sequence from a bitstream representing the video sequence, the apparatus comprising: a processing circuitry configured to generate the frames including: decoding, from a bitstream portion of the bitstream, content of two or more frames of the video sequence; parsing the bitstream portion for an indication of a first position; and generating, based on the parsed indication, a synthesized frame as a frame of the video sequence at said first position based on two or more previously generated frames. Accordingly, the decoder is able to decode content and position indication from a lightweight bitstream. Thus, the generating of frames for a GOP may be accelerated.
According to an implementation, the generating of the synthesized frame includes interpolating the synthesized frame based on one or more previously generated frames preceding the synthesized frame and on one or more previously generated frames succeeding the synthesized frame in a display order. Accordingly, the generated S frame may entail time correlation from preceding and succeeding frames, and thus lead to a more accurate interpolation and thus closer match with the original frame.
For example, the generating of the synthesized frame is performed by a neural network. Accordingly, the generating of the frames may be performed with a trained network optimized for frame interpolation. Hence, the S frame may be generated accurately.
In another implementation, the decoded two or more frames may be any of an intra-predicted frame I, a unidirectional inter-prediction frame P, and a bidirectional inter-prediction frame B, corresponding to frame types I, P, and B.
In a further implementation, the indication indicates said first position within a Group of Pictures, GOP, at which the synthesized frame is generated. Accordingly, the decoder knows where to generate a synthesized frame within the GOP.
For example, the indication of said first position includes positions of the decoded two or more frames, with said first position and the positions of the decoded two or more frames being in display order of the GOP. Accordingly, the decoder knows the positions of decoded frames within the GOP.
According to an implementation, the processing circuitry is further configured to put the decoded two or more frames into a decoded frame buffer if one or more frames depend on the frame at said first position. Accordingly, decoded frames are available in case of frame dependencies. Thus, S frames which may depend on decoded frames may be generated accurately at a higher hierarchy level while preserving frame dependencies.
In a further implementation, the GOP includes two or more decoded frames comprising a starting frame and an ending frame of the GOP, the starting frame having frame type I and the ending frame having a frame type B or P, and the processing circuitry is further configured to: determine, based on the parsed indication, a position difference between the two decoded frames having successive positions in display order; determine, based on the position difference, a number of synthesized frames generated in display order between the two decoded frames; and generate in decoding order the synthesized frames in accordance with the number at respective positions between the two decoded frames based on the position difference.
In another implementation, the two or more decoded frames of the GOP include one or more decoded frames of frame type I having a corresponding position in display order between the starting frame and the ending frame of the GOP. Accordingly, the frames of the GOP include frames of frame type I between the starting frame and the ending frame, representing access points for video content at which a scene change occurs.
According to a further implementation, the synthesized frame pertains to a Group of Pictures, GOP, of a predefined GOP structure, and the indication indicates said first position within the GOP at which the synthesized frame is generated. Accordingly, the decoder knows where to generate a synthesized frame within the GOP with a fixed (i.e. predefined) GOP structure.
For example, the GOP includes two or more already generated frames, the GOP comprising a starting frame with a start position and an ending frame with an end position, wherein the start position and the end position are in display order, and the processing circuitry is further configured to recursively: parse the bitstream for the indication of a current position, the current position being between the start position and the end position; when said indication is parsed: generate in decoding order the synthesized frame at the current position from the starting frame and the ending frame; when said indication is not parsed: decode from the bitstream portion content of a current frame being at the current position; continue the recursion using the starting frame and, as the ending frame, the synthesized frame or the decoded frame at the current position, and/or respectively using, as the starting frame, the synthesized frame or the decoded frame at the current position, and the ending frame. Accordingly, the frames of the GOP are generated using both decoded frames and already synthesized frames. Hence, frames of the GOP may be generated from a reduced number of decoded frames as a result of the lightweight bitstream.
In an implementation, the processing circuitry is further configured to put the generated frame into a decoded frame buffer if one or more generated frames depend on the frame at said first position. Accordingly, decoded frames are available in case of frame dependencies. Thus, S frames which may depend on decoded frames may be generated accurately at higher hierarchy level while preserving frame dependencies.
In another implementation, the indication indicates said first position within a Group of Pictures, GOP, at which the synthesized frame is generated and the bitstream portion includes an indication of one or more GOP sizes. Accordingly, the decoder knows the position of the S frame to be generated, as well as the positions of next coded frames based on the GOP sizes.
For example, the GOP size corresponds to a difference in position between successive decoded frames. Accordingly, the positions at which S frames are generated within the GOP may be easily determined from the GOP sizes.
According to an implementation, the GOP already comprises two decoded frames used as a starting frame with a start position and an ending frame with an end position, wherein the start position and the end position are in display order, and the processing circuitry is further configured to recursively: parse the bitstream portion for the indication of a first GOP size among the one or more GOP sizes; generate for said first GOP size in decoding order one or more synthesized frames at a respective current position in display order between the start position and the end position from the starting frame and the ending frame; continue the recursion using, as the starting frame, the ending frame of the previous recursion step and using, as the ending frame, the decoded frame based on a next GOP size subsequent to said first GOP size.
In a further implementation, the one or more preceding and/or succeeding already generated frames in display order are neighboring frames of the synthesized frame, respectively.
For example, the number of the one or more neighboring frames may be any number ranging from 1 to 64. Thus, an S frame may be generated in a flexible manner, exploiting previously generated frames in varying numbers (both preceding and succeeding frames) as well as the degree of being neighboring to the S frame to be generated. Accordingly, the frames of the video sequence may be generated accounting for inter-frame correlations to different degrees.
According to an aspect of the present disclosure, a method is provided for generating a bitstream representing input frames of a video sequence, the method comprising steps of generating the bitstream including: generating a synthesized frame at a first position of a first input frame based on two or more input frames; determining a quality measure for the synthesized frame; when the quality measure fulfills a predetermined condition: including an indication of said first position into a bitstream portion; and when the quality measure does not fulfill the predetermined condition: encoding content of said first input frame into the bitstream portion.
According to an aspect of the present disclosure, a method is provided for generating frames of a video sequence from a bitstream representing the video sequence, the method comprising steps of generating the frames including: decoding, from a bitstream portion of the bitstream, content of two or more frames of the video sequence; parsing the bitstream portion for an indication of a first position; and generating, based on the parsed indication, a synthesized frame as a frame of the video sequence at said first position based on two or more previously generated frames.
The methods provide similar advantages as the apparatuses performing the corresponding steps and described above.
According to an aspect of the present disclosure, provided is a computer-readable non-transitory medium storing a program, including instructions which when executed on one or more processors cause the one or more processors to perform the method according to any of the above implementations.
According to an aspect of the present disclosure, an apparatus is provided for generating a bitstream representing input frames of a video sequence, the apparatus comprising: one or more processors; and a non-transitory computer-readable storage medium coupled to the one or more processors and storing programming for execution by the one or more processors, wherein the programming, when executed by the one or more processors, configures the apparatus to carry out the method for generating a bitstream representing input frames of a video sequence.
According to an aspect of the present disclosure, an apparatus is provided for generating frames of a video sequence from a bitstream representing the video sequence, the apparatus comprising: one or more processors; and a non-transitory computer-readable storage medium coupled to the one or more processors and storing programming for execution by the one or more processors, wherein the programming, when executed by the one or more processors, configures the apparatus to carry out the method for generating frames of a video sequence from a bitstream representing the video sequence.
According to an aspect of the present disclosure, provided is a computer program comprising a program code for performing, when executed on a computer, the method according to any one of the above methods.
The aspects of the present disclosure and examples mentioned above can be implemented in hardware (HW) and/or software (SW) or in any combination thereof. Moreover, HW-based implementations may be combined with SW-based implementations.
In the following, embodiments of the invention are described in more detail with reference to the attached figures and drawings.
In the following description, reference is made to the accompanying figures, which form part of the disclosure, and which show, by way of illustration, aspects of embodiments of the present disclosure or aspects in which embodiments of the present disclosure may be used. It is understood that embodiments of the present disclosure may be used in other aspects and comprise structural or logical changes not depicted in the figures. The following detailed description, therefore, is not to be taken in a limiting sense.
For instance, it is understood that a disclosure in connection with a described method may also hold true for a corresponding device or system configured to perform the method and vice versa. For example, if one or a plurality of method steps are described, a corresponding device may include one or a plurality of units, e.g. functional units, to perform the described one or plurality of method steps (e.g. one unit performing the one or plurality of steps, or a plurality of units each performing one or more of the plurality of steps), even if such one or more units are not explicitly described or illustrated in the figures. On the other hand, for example, if an apparatus is described based on one or a plurality of units, e.g. functional units, a corresponding method may include one step to perform the functionality of the one or plurality of units (e.g. one step performing the functionality of the one or plurality of units, or a plurality of steps each performing the functionality of one or more of the plurality of units), even if such one or plurality of steps are not explicitly described or illustrated in the figures. Further, it is understood that the features of the various exemplary embodiments and/or aspects described herein may be combined with each other, unless noted otherwise.
Video coding typically refers to the processing of a sequence of pictures, which form the video or video sequence. Instead of the term picture, the terms frame or image may be used as synonyms in the field of video coding. Video coding comprises two parts, video encoding and video decoding. Video encoding is performed at the source side, typically comprising processing (e.g. by compression) the original video pictures to reduce the amount of data that may be required for representing the video pictures (for more efficient storage and/or transmission). Video decoding is performed at the destination side and typically comprises the inverse processing compared to the encoder to reconstruct the video pictures. Embodiments referring to “coding” of video pictures (or pictures in general) shall be understood to relate to both, “encoding” and “decoding” of video pictures. The combination of the encoding part and the decoding part is also referred to as CODEC (COding and DECoding).
In case of lossless video coding, the original video pictures can be reconstructed, i.e. the reconstructed video pictures have the same quality as the original video pictures (assuming no transmission errors or other data loss during storage or transmission). In case of lossy video coding, further compression, e.g. by quantization, is performed, to reduce the amount of data representing the video pictures, which cannot be completely reconstructed at the decoder, i.e. the quality of the reconstructed video pictures is lower or worse compared to the quality of the original video pictures.
Several video coding standards since H.261 belong to the group of “lossy hybrid video codecs” (i.e. combine spatial and temporal prediction in the sample domain and 2-D transform coding for applying quantization in the transform domain). Each picture of a video sequence is typically partitioned into a set of non-overlapping blocks and the coding is typically performed on a block level. In other words, at the encoder the video is typically processed, i.e. encoded, on a block (video block) level, e.g. by using spatial (intra picture) prediction and temporal (inter picture) prediction to generate a prediction block, subtracting the prediction block from the current block (block currently processed/to be processed) to obtain a residual block, transforming the residual block and quantizing the residual block in the transform domain to reduce the amount of data to be transmitted (compression), whereas at the decoder the inverse processing compared to the encoder is applied to the encoded or compressed block to reconstruct the current block for representation. Furthermore, the encoder duplicates the decoder processing loop such that both will generate identical predictions (e.g. intra- and inter-predictions) and/or re-constructions for processing, i.e. coding, the subsequent blocks.
As video picture processing (also referred to as moving picture processing) and still picture processing (the term processing comprising coding), share many concepts and technologies or tools, in the following the term “picture” is used to refer to a video picture of a video sequence (as explained above) and/or to a still picture to avoid unnecessary repetitions and distinctions between video pictures and still pictures, where not necessary. In case the description refers to still pictures (or still images) only, the term “still picture” shall be used.
In the following, an overview over some of the used technical terms is provided.
Intra-prediction: Predicting a block of samples for a current frame using samples only within the same current frame. Said same frame has no dependency on other frames within a video sequence. This is also referred to as spatial prediction.
Inter-prediction: Predicting a block of samples for a current frame of a video sequence using samples of one or more other frames temporally different from the current frame. The current frame has a temporal dependency with the other frames. This is also referred to as temporal prediction.
Frame types: As will be explained further below, in the present disclosure, four frame types are used, namely I, P, B, and S frames. In some embodiments, a new frame type, i.e. the synthetic frame (S frame, may also be referred to as synthesized frame), is introduced. The synthetic frame is generated, for example, by frame interpolation from a frame generation module, taking a frame backward (preceding) and a frame forward (following) of the synthetic frame as input. The preceding frame may be a frame directly (immediately) preceding the position of the synthesized frame in some embodiments. In other embodiments, the preceding frame may be any (predetermined) frame from the same GOP preceding the S-frame. The following frame may be a frame directly (immediately) following the position of the synthesized frame in some embodiments. In other embodiments, the following frame may be any (predetermined) frame from the same GOP following the S-frame.
Display order: Refers to the order in which frames are displayed after decoding.
Coding order: Refers to the order in which frames are coded, encoded, or decoded.
Group Of Pictures (GOP): Refers to a group of successive pictures within a video sequence. The coding order of frames in a GOP might not be the same as their display order.
Encoding/decoding delay: Encoding/decoding delay occurs when the display order and the coding order in a GOP are different. A frame in the GOP might depend not only on previous frames in display order, but also on future frames in display order. Therefore, an encoding/decoding latency between the current frame and the future frame occurs, because the encoder/decoder has to encode/decode first the future frame and then the current frame to fulfill the dependency between them.
GOP structure: Refers to the frame types assigned to each frame in a GOP, and their coding order/dependency in between.
All Intra configuration: Refers to a video coding configuration wherein only intra-prediction is allowed. Each frame has no dependency on any other frame. Within such an encoded sequence, the decoder can seek any frame and start decoding.
Random access configuration: Refers to a video coding configuration in which the display order and the coding order of frames are different. In this configuration, typically I frames are spread across a video sequence; the decoder can seek an I frame and start decoding from it. Typically, a GOP structure is defined in this configuration for every N frames, with every N frames forming a GOP.
Low delay configuration: Refers to a video coding configuration in which the display order and the coding order of frames are the same. A current frame only depends on one or more previous frames in display order. No future frame in display order is coded before the current frame. Therefore, no decoding latency exists between a given frame and its future frames. A low delay configuration usually has only one I frame at the beginning, and the decoder cannot seek a random frame to be decoded, because the frame at the sought position depends on previous frames, which might not be decoded yet.
Frame rate: Refers to how many frames are processed (e.g. decoded, encoded, or displayed) within a given time interval T. It is usually quantified in frames per second (fps).
Peak Signal-to-Noise Ratio (PSNR): PSNR is a metric commonly used to quantify the reconstruction quality of an image compared to its original image. The higher the PSNR, the better the quality of the reconstructed image.
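For reference, for an original image I and its reconstruction Î, both of size W × H samples with maximum sample value MAX (e.g. 255 for 8-bit content), the PSNR is commonly computed as

```latex
\mathrm{MSE} = \frac{1}{W\,H}\sum_{x=1}^{W}\sum_{y=1}^{H}\bigl(I(x,y)-\hat{I}(x,y)\bigr)^{2},
\qquad
\mathrm{PSNR} = 10\,\log_{10}\!\left(\frac{\mathrm{MAX}^{2}}{\mathrm{MSE}}\right)\ \mathrm{dB}.
```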
Coded frame: Refers to a frame encoded by an existing codec, but not synthesized by an interpolation module. A coded frame can be an I, P, or B frame.
Video compression is achieved by removing redundancy in a video sequence. Two kinds of redundancy are usually exploited: temporal redundancy and spatial redundancy.
In most instances, the content of successive frames in a video sequence may not change too much. This means that, within such kind of successive frames, only small motions of specific objects are observed. In such cases, only motion information is recorded/encoded for those frames (except for the first "anchor" frame, e.g. an I frame). To recover these frames in the decoder (except for the first "anchor" frame, e.g. an I frame), a temporal prediction (i.e. inter-prediction) is performed with the decoded anchor frame and the motion information as inputs. In this way, encoding all the information of these non-anchor frames is avoided, which reduces the temporal redundancy. By video coding convention, non-anchor frames are usually called inter frames because temporal prediction relies on an inter-picture correlation. Inter-picture correlation means that a temporal correlation between at least two pictures (or frames) at different time instances (time points) exists. The anchor frame is referred to as reference picture, and motion information is represented by motion vectors. Sometimes, the inter-prediction can be performed not only uni-directionally, but also bi-directionally. Bi-directional means that a current picture is inter-predicted using reference pictures which are temporally forward (future) and backward (past) pictures of the current picture.
The anchor frame does not rely on any other pictures, and thus it is usually called an intra frame. An intra frame reduces the spatial redundancy by performing intra-prediction between a current block and its neighboring blocks within the same current frame. Intra-prediction exploits the fact that pixels in a current block are often similar to the pixels in its neighboring blocks.
Inter-prediction and intra-prediction are two techniques widely used by different generations of video codecs, such as HEVC (described e.g. in JCT-VC, High Efficiency Video Coding (HEVC), ITU-T Recommendation H.265 and ISO/IEC 23008-2, ITU-T and ISO/IEC JTC, 1 Apr. 2013), and VVC (described e.g. in JVET, Versatile Video Coding (VVC), ITU-T Recommendation H.266 | ISO/IEC 23090-3, Apr. 2020). Both inter- and intra-prediction are performed on a block basis.
Having introduced the terms inter- and intra-prediction, in the following different frame types are explained based on the types of prediction blocks they comprise. Typically, three main frame types are defined in video coding: I, P, and B frames.
In the present disclosure detailed further below, a new frame type S frame is introduced. As illustrated in
Since an I frame is the most expensive frame type in terms of bit cost, it might be assumed that an I frame should be used as rarely as possible in order to achieve the best compression efficiency. In some application scenarios, such as those requiring low delay where the content of a frame usually does not change much, this is true. In
However, an I frame might be necessary at other positions instead of the first frame in one of the following cases:
In view of the above cases, I frames are usually spread over a video sequence and the distance between two I frames determines the random access point granularity. In early video coding standards, such as MPEG-2, the length between two I frames is called the group of pictures (GOP) size. The structure of a GOP is defined by two parameters M and N. The first parameter M represents the delay of encoding and decoding of frames, and the second parameter N represents the distance between two I frames.
For example, in a sequence with a frame pattern IBBBBPBBBBPBBBBI, the GOP size (N value) is equal to 15 (length between two I frames) and the encoding/decoding delay (M value) is 5 (length between I and P frames or length between two consecutive P frames). Regarding the second parameter M, the encoder or decoder cannot process the B frame directly following the I frame unless the P frame five frames away has been encoded/decoded beforehand, so as to provide a reference frame for the inter-prediction of the B frame.
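Purely as an illustration, the small Python helper below derives the N and M values from such a frame-type pattern given as a string; the function name gop_parameters is a made-up example and not part of any standard.

```python
# Illustrative helper deriving the MPEG-2 style GOP parameters from a pattern:
# N is the distance between two I frames, M is the distance between anchor
# frames (I or P), i.e. the encoding/decoding delay.

def gop_parameters(pattern: str):
    i_positions = [i for i, t in enumerate(pattern) if t == "I"]
    anchor_positions = [i for i, t in enumerate(pattern) if t in ("I", "P")]
    n = i_positions[1] - i_positions[0] if len(i_positions) > 1 else len(pattern)
    m = min(b - a for a, b in zip(anchor_positions, anchor_positions[1:]))
    return n, m

print(gop_parameters("IBBBBPBBBBPBBBBI"))  # prints (15, 5)
```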
The delay of encoding/decoding, rather than the distance between two I frames, is the important determining parameter for the GOP size. The distance between two I frames is called the I period in recent video coding standards such as HEVC or VVC. In addition, in recent video coding standards, B frames can also be used as reference frames, and a hierarchical coding structure within a GOP has been developed.
An example is depicted in
Conventionally, inter-frame interpolation is performed by motion compensation, exploiting motion information as mentioned before. Motion compensation derives a prediction block based on a most similar block from other already decoded frames. The movement between the current block and the reference blocks is represented by motion vectors, and the reference frame is indicated by a reference index. In case of bi-directional prediction, the final prediction result might be a weighted sum of two prediction results. The motion vectors and the reference indices are coded in the bitstream. Therefore, inter-prediction by motion compensation costs bits.
Frame Rate Up Conversion (FRUC) refers to a technique in which the frame interpolation may be performed without any bit cost. Taking the example shown in
In recent years, neural networks (NN) have been designed to learn the process of frame interpolation. For example, H. Lee et al., "AdaCoF: Adaptive Collaboration of Flows for Video Frame Interpolation", CVPR 2020, describes a neural network learning frame interpolation from a dataset consisting of tens of thousands of frame triplets. Each frame triplet consists of three consecutive frames, indexed as 0, 1, and 2. The NN performs frame interpolation using the frames with indices 0 and 2, and generates a synthesized frame for frame 1. The loss function is defined based on the quality loss between the synthesized frame and the original frame 1. By minimizing the loss function during training, the interpolated frames achieve reasonable image quality after about 50 epochs.
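A minimal PyTorch-style training sketch of this triplet-based learning is given below; the network architecture, the dataset returning (frame 0, frame 1, frame 2) tensors, and the hyper-parameters (L1 loss, Adam, batch size) are assumptions for illustration and do not reproduce the exact AdaCoF setup.

```python
# Minimal sketch: train an interpolation network on frame triplets so that the
# middle frame (index 1) is synthesized from its neighbors (indices 0 and 2).

import torch
from torch.utils.data import DataLoader

def train(interp_net, triplet_dataset, epochs=50, lr=1e-4, device="cuda"):
    loader = DataLoader(triplet_dataset, batch_size=8, shuffle=True)
    optimizer = torch.optim.Adam(interp_net.parameters(), lr=lr)
    loss_fn = torch.nn.L1Loss()   # quality loss between synthesized and original frame 1
    interp_net.to(device).train()
    for _ in range(epochs):
        for frame0, frame1, frame2 in loader:
            frame0, frame1, frame2 = (t.to(device) for t in (frame0, frame1, frame2))
            synthesized = interp_net(frame0, frame2)   # interpolate the middle frame
            loss = loss_fn(synthesized, frame1)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```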
Both approaches of frame-interpolation using FRUC and the neural network are bi-directional, i.e. they use two frames, one backward and one forward (temporal), to interpolate the frame in the middle at zero bit cost.
As may be discerned from the above discussion, the choice of the best suitable frame types across the video sequence is crucial for the overall coding (and decoding) efficiency. The problem of choosing the best suitable frames may be illustrated as follows: introducing, for example, I frames at positions of a scene change (i.e. decreasing N for rapid scene changes) mitigates errors as the number of random access points is increased, at the expense of increased bit costs as a result of the poor compression. In turn, increasing the intra period (N value) in case of infrequent scene changes reduces the number of least compressed I frames, at the expense of reducing the number of random access points. Similarly, increasing the GOP size (M value), and hence the number of B frames within a GOP, can save bits, at the expense of increasing the encoder/decoder delay to an extent that may not be tolerable in some applications.
Therefore, a solution to the above problem is needed that leverages the selection of the right frame types to improve coding efficiency without further increasing the encoder/decoder delay (M) and intra period (N).
The present disclosure addresses the above problem in that a new frame type, i.e. a synthesized frame (S frame), is introduced. As further discussed below, the use of S frames allows to adaptively insert one or more S frames into the GOP structure or to replace a frame of a fixed GOP structure with an S frame. It is noted that the terms "insert" and "replace" should not be interpreted in a literal manner, but rather have a particular meaning within the context of GOP structures being adaptive or fixed. This will be explained in more detail below.
Synthesized frames are not encoded, but instead generated based on reference frames using a frame interpolation module and hence incur no bit cost. Rather, an indication of a position is included into a bitstream (or a portion thereof) while the reference frames are encoded. A synthesized frame may also be referred to as a synthetic frame. A frame may also refer to a picture or a frame of a picture or picture frame.
The adaptive insertion is done according to a criterion and a maximum look ahead constant. Synthetic frames are inserted into the GOP only if the criterion is met, e.g. only if a minimum threshold in terms of an image quality metric is reached between the original frame and the synthetic frame. This makes the insertion of S frames potentially adaptive to the video content. The maximum look ahead constant sets a limit on the number of neighboring S frames, i.e. the look ahead value equals the maximum allowed distance between two coded frames (B, P or I).
In this way, the GOP size is potentially adaptive as it depends on the number of inserted or replaced S frames. The GOP may be larger or smaller, with the encoder/decoder delay being dependent on the maximum look ahead value and hence limited. Compared to enforcing an upper limit on the M and N values to limit the encoder/decoder delay, as in existing adaptive GOP size techniques, the present disclosure still allows saving bits by further inserting S frames which are not encoded. Therefore, bits can still be saved by replacing B frames with S frames, even if the maximum M value of the GOP (the distance between P or I frames) is chosen to be small. Compared to FRUC, the benefit of skipping encoding of frames and interpolating them on the decoder side is kept. However, the disadvantages of lowering the frame rate, e.g. by dropping frames that are difficult to interpolate, are eliminated. The method of the present disclosure is orthogonal to the existing adaptive GOP techniques, as they can be used in conjunction.
An input frame refers to a frame which is input to the encoding. It may be an original frame such as an uncompressed frame output by a camera, e.g. a frame that is not processed, for example, by encoding or decoding or the like. In other words, input frames may correspond to raw (possibly demosaicked) video data. Further, a frame may be also referred to as picture or picture frame or image.
According to the present disclosure, not all of said input frames may need to be encoded into the bitstream or a bitstream portion. For that purpose, a synthesized frame is generated at a predetermined (e.g. a first) position instead of a predetermined (e.g. a first) input frame, the synthetization being based on two or more input frames. A synthesized frame is obtained by interpolation from one or more other frames. The one or more other frames may be, e.g., frames which are encoded into the bitstream (their content is encoded into the bitstream) or other synthesized frames.
This is illustrated in
Moreover, the one or more frames preceding and/or succeeding the S frame in display order may be neighboring frames of the S frame. Neighboring means that the respective preceding and succeeding frames are temporally located (w.r.t. display order) directly before or after the S frame. Also, the number (i.e. the amount) of the one or more neighboring frames may be any number ranging from 1 to 64. It is noted that the range [1 to 64] depends on the current technology and it is understood that, with progressing CODEC technology, said interval may be extended.
Accordingly, the S frame may be generated via simple bi-directional interpolation or higher order schemes by including an increased number of preceding and succeeding frames, which may be neighboring the S frame. As a result, an S frame may be generated entailing time correlation from preceding and/or succeeding frames to varying degrees. Hence, the inter-frame correlation may be tunable e.g. depending on the content of the S frame and/or content of the preceding and succeeding frames.
In the example of
The S frame may be generated on the encoder side, for example, by a neural network (NN). The NN may be any network that is trained by training data as part of a learning process. Likewise, a NN may be used for generating the S frame on the decoder side. Accordingly, the generating of the bitstream may be performed with a trained network optimized for frame interpolation. Hence, the interpolation, and thus the determination of whether or not an S frame should be generated, may be performed accurately. This may lead to a larger rate reduction of the coded bitstream.
According to an aspect, frames in a video sequence are adaptively selected on top of an existing video CODEC, and the selected frames are not coded using the existing video CODEC, but are replaced with synthesized frames generated from a neural network trained for frame interpolation. The frames synthesized by frame interpolation from the neural network do not cost bits. Further, the adaptive frame selection process is based on synthetic frames generated from the same neural network using the original frames (i.e. input frames) in a video sequence. In another aspect, the adaptive frame selection process takes a criterion as an input, from which it is determined whether or not the frames are to be replaced. The criterion is compared against one or more features of one or more synthesized frames generated using the original frames in a video sequence. In other words, based on a criterion such as the quality measure for the synthesized frame, it is determined whether or not an S frame is to be generated.
At this stage, the first input frame at said first position is not processed (e.g. encoded) and may not be needed. This is because a quality measure for the synthesized frame is determined beforehand. As quality measure (QM), any of peak signal to noise ratio, PSNR, resolution, bit depth, or a perceptual quality metric may be suitable. The perceptual quality metric may be a structural similarity index measure (SSIM). Accordingly, different kinds of known quality measures may be used for the synthesized frame. A particular quality metric may be selected based on an application. Using resolution and/or bit depth may be simple and thus efficient as they do not require any additional computation effort. On the other hand, PSNR is a widely used objective metric, whereas a perceptual metric (e.g. an estimator of subjective opinion score) may further reduce bitrate without compromising user experience. The quality measures are not limited to those listed above. Other kinds of QMs may be used in addition and/or may be combined. In the following, the PSNR is used as a mere example to illustrate aspects of the processing of the present disclosure.
Using PSNR, the QM for the synthesized frame is determined by calculating the PSNR of the synthesized frame. Whether or not the first input frame (e.g. at position idx=1 in
In turn, when the quality measure does not fulfill the predetermined condition, content of said first input frame is encoded into the bitstream portion. For example, the predetermined condition may be that the PSNR difference between the S frame and the first input frame is larger than a predefined threshold. In this case, a PSNR of the first input frame is calculated as well. Alternatively, the PSNR may be compared directly with the predefined threshold, i.e. said threshold is fixed at least during the processing. Another option for the QM is using a PSNR of an input frame that is different from the first input frame. In other words, the PSNR of the S frame generated at the first position is compared with the PSNR of a frame at a position different from the first position. The term content refers to video data of the first input frame. In the following, the terms "content" and "coded frame" are used synonymously, meaning that the (en)coding of an (input) frame refers to coding the respective video data of the frame.
In the example, if the PSNR of the S frame is of sufficient quality (i.e. the QM fulfills the criterion on quality), an indication of the first position is included into the bitstream portion. Said indication may be any suitable indicator, such as a simple FLAG (e.g. a binary flag with "1" indicating that an S frame is to be generated at the first position and "0" indicating that no S frame is to be generated at the first position). In addition or alternatively, the indication may be the index of the first position (i.e. the picture index). Including the indication means, for example, including the indication into a header of the bitstream or the bitstream portion. Alternatively, the indication may be encoded into the bitstream portion. In this case, the content of the first input frame is not encoded into the bitstream portion. Accordingly, the bit cost of the generated bitstream is reduced, improving the compression without increasing the encoding latency.
In turn, when the PSNR of the S frame is not of sufficient quality (i.e. the QM does not fulfill the criterion on quality), the content of the first input frame is encoded into the bitstream portion. In this case, the first input frame can be coded as any of an intra-predicted frame I, a unidirectional inter-prediction frame P, and a bidirectional inter-prediction frame B, corresponding respectively to frame types I, P, and B. The different kinds of frame types are shown in
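The per-frame encoder decision described above may be sketched as follows, using PSNR as the quality measure; the helper names interpolate, encode_frame and write_position_indication as well as the threshold value are hypothetical and only serve to illustrate the decision logic.

```python
import numpy as np

def psnr(original: np.ndarray, synthesized: np.ndarray, max_val: float = 255.0) -> float:
    """PSNR of the synthesized frame with respect to the original input frame."""
    mse = np.mean((original.astype(np.float64) - synthesized.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

def process_frame(position, input_frame, prev_frame, next_frame, threshold_db,
                  interpolate, encode_frame, write_position_indication, bitstream):
    synthesized = interpolate(prev_frame, next_frame)     # candidate S frame
    if psnr(input_frame, synthesized) >= threshold_db:
        # Quality sufficient: only the position is signaled, no content is coded.
        write_position_indication(bitstream, position)
    else:
        # Quality insufficient: encode the input frame (as I, P or B frame).
        encode_frame(bitstream, input_frame)
```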
The above described generation of the bitstream may be realized in a hardware implementation.
With the basic processing performed on the encoding side, the decoding side takes the generated bitstream as input, and then generates frames of the video sequence from the bitstream. This may be performed by decoding the content of two or more frames of the video sequence from the bitstream portion. Similar to the encoding side, the term "content" refers here to the video data of a decoded frame. In other words, the terms "content", "decoded content", and "decoded frame" are used synonymously. Further, the bitstream portion is parsed for an indication of a first position. The indication may be a FLAG indicating whether or not an S frame is generated at said position. For example, a FLAG "1" may indicate generating the S frame, whereas "0" indicates no S frame generation. Alternatively, the indication may be the position in terms of the picture index of an S frame generated at said position.
The two or more decoded frames may be any of an intra-predicted frame I, a unidirectional inter-prediction frame P, and a bidirectional inter-prediction frame B, corresponding to frame types I, P, and B. With the two decoded frames and the parsed indication, a synthesized frame is generated as a frame of the video sequence, based on two or more previously generated frames. Accordingly, the decoder is able to decode content and position indication from a lightweight bitstream. Thus, the generating of frames for a GOP may be accelerated.
The previously generated frames may include both decoded frames and S frames, with said S frames having been generated before. In a typical situation, at least two decoded frames are available initially, which are then used to generate an S frame. This means that at the decoding side there are now two decoded frames and one S frame corresponding to frames of the video sequence. In this example step, any two of the S frame and the two decoded frames may be used to generate a next S frame. Assuming for the purpose of demonstration that bi-directional prediction is performed, the next S frame is generated from frame pairs (D,D), (D,S), (S,D), and (S,S), with "D" referring to a decoded frame and "S" to an S frame. When the previously generated frames are more than two, then the S frame is generated from triplets, quadruples, etc. of D and S frames. For example, in the triplet case, such triplets could be (DDD), (DDS), (DSD), (SDD), (SDS), and (SSS).
Similar to the processing on the encoding side, the synthesized frame may be generated by interpolating the synthesized frame based on one or more previously generated frames preceding the synthesized frame and on one or more previously generated frames succeeding the synthesized frame in a display order. Accordingly, the generated S frame may entail time correlation from preceding and succeeding frames and thus lead to a more accurate interpolation and thus closer match with the original frame.
Also, the one or more preceding and/or succeeding already generated frames in display order may be neighboring frames of the synthesized frame, respectively. Moreover, the number of the one or more neighboring frames may be any number ranging from 1 to 64. As mentioned before, the range may evolve with advancing CODEC technology. The S frame generation using two or more (neighboring) preceding/succeeding frames may be illustrated for a frame S to be generated for the following sequences: 1. triplets (DSD), (SSD), (DSS), and (SSS); 2. quadruples (DSDD), (DSSD), (DSDS), (DSSS), (SSDD), (SSSD), (SSDS), (SSSS), etc. Thus, an S frame may be generated in a flexible manner, exploiting previously generated frames in varying numbers (both preceding and succeeding frames) as well as the degree of being neighboring to the S frame to be generated. Accordingly, the frames of the video sequence may be generated accounting for inter-frame correlations to different degrees.
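As a simple illustration of this decoder-side generation, the sketch below synthesizes a frame at a signaled position from the nearest already available frames, which may themselves be decoded frames (D) or earlier S frames; the interpolate helper and the dictionary-based frame store are assumptions.

```python
# Illustrative decoder-side sketch: generate an S frame at a signaled position
# from previously generated frames (decoded or synthesized).

def generate_s_frame(position, generated_frames, interpolate):
    """generated_frames maps display position -> already available frame."""
    preceding = max(p for p in generated_frames if p < position)
    succeeding = min(p for p in generated_frames if p > position)
    # Bi-directional interpolation from the nearest available neighbors;
    # either neighbor may itself be a synthesized frame.
    s_frame = interpolate(generated_frames[preceding], generated_frames[succeeding])
    generated_frames[position] = s_frame
    return s_frame
```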
The above described frame generation may be realized in a hardware implementation.
The encoding and decoding processing may be performed by separate apparatuses such as encoder 20 in
In the following embodiments, preferred implementations of the present disclosure are discussed.
In the following, an example is discussed with reference to
In the example discussed, a synthesized frame 410 is generated from two input frames 411 and 412 (O frames). It is noted that there may be more than two input frames. Further, the frames shown in
In the example of
According to an aspect, the adaptive frame selection process takes a constant number N of look-ahead frames as an input. The constant number of look-ahead frames determines the maximum number of frames that could be replaced by synthetic frames over a video sequence.
The original frames are ordered according to the display order, and have a picture index (i.e. positions) idx=0 to idx=7.
According to the look ahead value of 2 and with eight original frames O in the buffer (step 0), the process starts in step 1 by considering the first three frames and adding them to the buffer. The frame at index idx=0 is set to be a coded frame. In some embodiments, the frame with idx=0 is assigned the frame type I since it is the first frame in the frame sequence of the GOP. In other words, I frame 401 in
To perform the bi-directional interpolation, the first and last frame in the buffer (idx=0 and idx=2) are used to interpolate a frame in the middle (idx=1). As the synthesized frame meets the criterion it is assigned an S type, which corresponds to S3 frame 406 in FIG. 4A. This means that the indication of the first position, namely picture index idx=1 is included into the bitstream portion. Therefore, the indication indicates that an S frame is to be generated at the position of the GOP as indicated by the indication. As a result, the respective input frame (i.e. original frame) at position idx=1 is not encoded, i.e. its content (video) is not encoded.
The frame type of the frame at position idx=2, which is different from the I type of the starting frame of I type of the GOP, still needs to be specified. The assignment of one of frame types P and B is performed in accordance with a predefined GOP pattern of frame types. In this example, the GOP pattern is IBP, whereby the type at the very beginning of said pattern is an I type as it refers to the access point of the respective GOP. As noted earlier, an access point refers to a reference frame and is the first frame of a GOP.
The frame at idx=2 is assigned a coded frame type (B, P) to satisfy the look ahead constant of 2. Therefore, frame 412 in
At this point, three key frames have already been determined, which together with the inserted S frame equals the GOP size of 4 for the fixed GOP pattern (IBP). Accordingly, in step 3 each coded frame K within the GOP structure KSKK is assigned its appropriate type according to the defined GOP pattern. With reference to the GOP pattern IBP, S frames are skipped in this process, resulting in an effective increase of the final GOP size because it includes an additional S frame. Hence, the final sequence for the first GOP is ISBP (as opposed to IBP). In
For each coded frame, the indication of the positions of the S frames includes positions of the coded frames. In one example, the indication may be the distance to the next coded frame, which is signaled in the frame header. Alternatively, the position of the coded frames may be signaled in the frame header. Further, the position of one or more frames and the distances may be signaled in the frame header. The distance may be a difference between indices of respective coded frames.
Next, in step 4, frames with idx=3 and idx=4 are kept in the buffer and frame idx=5 is added to the buffer. The above process of interpolation in accordance with the encoding/decoding delay and the GOP pattern is now repeated for frame idx=4 to idx=7, until one ends up with the following total GOP sequence, with inserted S frames: ISBPSBSI. Therefore, by generating a bitstream according to a predefined look ahead constant and a GOP pattern, the GOP may be adaptively enlarged. In the example of
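The adaptive GOP construction with a look-ahead constant described above may be sketched as follows; the PSNR helper, the plain-average interpolator and the decision labels are simplifying assumptions, not the actual encoder implementation.

```python
import numpy as np

def psnr(a: np.ndarray, b: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two frames (the quality measure)."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak * peak / mse)

def interpolate(f0: np.ndarray, f1: np.ndarray) -> np.ndarray:
    """Stand-in bi-directional interpolator: plain average of the two anchors."""
    return ((f0.astype(np.float64) + f1.astype(np.float64)) / 2).astype(f0.dtype)

def build_gop_with_s_frames(frames, lookahead=2, quality_threshold=35.0):
    """Slide a window of `lookahead` frames over the sequence; for each middle
    position try a synthesized frame and either signal only its position ('S')
    or mark the frame as a coded key frame ('K', type assigned by the GOP pattern).
    Remaining tail frames would be handled analogously."""
    decisions = {0: 'K'}                      # first frame of the GOP is always coded
    start = 0
    while start + lookahead < len(frames):
        end = start + lookahead
        decisions[end] = 'K'                  # look-ahead bound: window end is coded
        for mid in range(start + 1, end):
            candidate = interpolate(frames[start], frames[end])
            decisions[mid] = 'S' if psnr(candidate, frames[mid]) > quality_threshold else 'K'
        start = end
    return decisions
```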
With the bitstream being generated by the encoding side described before, the decoding side receives the bitstream which includes coded frames as well as indications of a first position at which a synthesized frame is to be generated. In some embodiments, the indication indicates a first position within a GOP where the S frame is generated. Accordingly, the decoder knows where to generate a synthesized frame within the GOP.
With reference to the previous example, the decoder receives the following encoded bitstream (IBPBI), which can be decoded in the usual manner. Thus, the GOP includes two or more decoded frames comprising a starting frame and an ending frame. The starting frame has a frame type I and the ending frame a frame type B or P, respectively. The decoded two or more frames may be put into a decoded frame buffer in case one or more frames of the GOP depend on the frame at the first position. Accordingly, decoded frames are available in case of frame dependencies. Thus, S frames which may depend on decoded frames may be generated accurately at a higher hierarchy level while preserving frame dependencies. In order that the decoder knows if and where to generate S frames within the GOP, it determines, based on the parsed indication, a position difference between two decoded frames which have successive positions in display order.
For each decoded GOP, the decoder checks for gaps between every pair of consecutive coded frames, as the signaled indication of the first position includes positions of the decoded two or more frames. Accordingly, the decoder knows the positions of decoded frames within the GOP. Based on the indication, the decoder checks gaps in position between successive decoded frames based on position differences and/or positions of the respective coded frames. In other words, the signaled indication enables the decoding side to indirectly determine the first position via the signaled position(s) or position difference(s) between consecutive decoded frames. In other words, based on the position difference, the decoder determines a number of S frames generated between the two decoded frames.
The decoder then fills the gap at the respective positions by generating S frames in decoding order between successive coded frames of the GOP based on the position difference, yielding the following final GOP structure: ISBPSBSI of the video sequence. This structure fulfills all predefined conditions, i.e. N=7, M=3 for the first GOP, and M=4 for the second GOP of this GOP sequence.
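A minimal sketch of this decoder-side gap filling is given below, assuming the positions of the coded frames have already been derived from the signaled indications and that a simple distance-weighted blend stands in for the actual frame synthesizer.

```python
import numpy as np

def synthesize(f0, f1, p0, p1, pos):
    """Stand-in interpolator: distance-weighted blend of the two coded frames."""
    w = (pos - p0) / (p1 - p0)
    return ((1 - w) * f0.astype(np.float64) + w * f1.astype(np.float64)).astype(f0.dtype)

def fill_gaps(coded_positions, decoded_frames):
    """Detect gaps between consecutive coded frames (position difference > 1)
    and fill them with S frames, yielding the full display-order GOP."""
    frames = dict(zip(coded_positions, decoded_frames))
    for p0, p1 in zip(coded_positions[:-1], coded_positions[1:]):
        for pos in range(p0 + 1, p1):           # a gap implies S frames to generate
            frames[pos] = synthesize(frames[p0], frames[p1], p0, p1, pos)
    return [frames[p] for p in sorted(frames)]
```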
Herein, the encoding/decoding delay (i.e. the M value) can change depending on the number of inserted S frames, and hence the GOP structure can be dynamically changed as well. However, the maximum encoding/decoding delay is confined by the original encoding pattern IBPBI and the given look-ahead constant. Given the maximum look ahead constant equal to 2 in the above example, the structure with the maximum number of permissible S frame insertions would be ISBSPSBSI.
Instead of combining a fixed GOP pattern (e.g. IBPBI) with the adaptive generation of S frames as discussed in embodiment 1, embodiment 2 relates to how to adaptively generate S frames within an already adaptive GOP structure. To this end, the previous example is modified so as to use the adaptive GOP method proposed by B. Zatt et al. instead of the fixed IBPBI pattern, as illustrated in
To ensure an encoder/decoder delay of 4, the size of the interval of inserted P frames is changed in the example of
The difference is that now in step 3 the frame types are assigned to the frames based on the policy in the work of B. Zatt et al. Without employing the policy of B. Zatt et al., the following structure IBP is produced, since the algorithm detects a scene change at index idx=3. In contrast, in embodiment 2 of the present disclosure, the scene change is detected based on the frames of the GOP, and the frame at which the scene change occurs is instead assigned an I type. This yields a GOP structure of IBI, with the respective frame being I1 503 in
This is illustrated in the table of
Since the I frame is also encoded into the bitstream portion (i.e. its video content), the two or more decoded frames of the GOP include one or more decoded frames of frame type I. Accordingly, the frames of the GOP include frames of frame type I between the starting frame and the ending frame, representing access points for video content at which a scene change occurs. In
Compared to embodiments 1 and 2, where the GOP structure is changed after the generation of S frames, in embodiment 3 the coding structure in a GOP may not be changed, but rather the frames in a given GOP (i.e. the GOP is fixed) are selectively replaced with the synthetic frame. It is noted that the term “replace” is not to be understood literally. Rather, the term means that, for a fixed GOP, an input frame at a first position may not be encoded, but rather an S frame is generated.
Therefore, in embodiment 3, a look-ahead constant may not be needed as input, as it is assumed to be the same as the considered GOP size. In particular, the first input frame (i.e. a first original O frame) pertains to a GOP of a predefined GOP structure, which is fixed.
To recall, in embodiments 1 and 2 discussed before, the synthesized frames are generated using original frames (i.e. input frames), after positions of S frames are determined, and this information is used during encoding. The processing in embodiment 3 may not decouple the determination process and encoding process. Rather, it is determined which frame in a GOP shall be replaced during the encoding process, and the determined (one or more) frames are synthesized using the previously coded frames instead of original frames (i.e. input frames).
Again, the quality measure (e.g. PSNR) is used to determine whether or not a first input frame is to be encoded. In an implementation, when the QM does not fulfill the predetermined condition, the content of a first input frame is encoded into the bitstream portion. The frame type is assigned to the first input frame in accordance with a GOP pattern of frame types, which is pre-configured for the respective GOP. In turn, when the QM fulfills the predetermined condition, the content of the first input picture is not encoded into the bitstream portion.
Since the GOP structure is fixed, a set of one or more positions which includes the first position within the GOP is determined in a recursive manner, using coded input frames of the GOP. The coded input frames include a starting frame with a start position (i.e. start index) and an ending frame with an end position (i.e. end index). The start position and end position are in display order. This is illustrated in
The determination of the positions includes to recursively generate in coding order the synthesized frame at a current position between the start and end position, using the starting frame and the ending frame. The term coding order refers to the order in which a frame is generated by interpolation using two or more (neighboring) frames. This is illustrated in
Then, the quality measure (e.g. PSNR) is determined for frame B2 and it is determined whether or not the QM fulfills the predetermined condition. In the example shown in
The recursion continues using the coded frames or synthesized frames at the start position and the current position, and/or at the current position and the end position. Accordingly, the determination which frame of the GOP should be replaced is performed during the encoding process (on-the-fly determination), because both coded frames and already synthesized frames are used. Hence, pre-processing may be reduced, accelerating the generation of the bitstream.
According to an aspect, the adaptively selecting frame process takes as inputs a starting frame with index S and an ending frame with index E, and synthesizes a frame in the middle of the starting and ending frames with index (S+E)/2. The quality of the synthesized frame in the middle is compared to the input criterion quality. If its quality is higher than the given criterion, the frame in the middle is determined to be synthesized. The synthesized frame in the middle is used as an anchor frame to synthesize other frames (if any). Otherwise, the frame in the middle is determined to be coded using an existing CODEC, and the original frame (i.e. input frame) is used to synthesize other frames (if any). Further, the adaptively selecting frame process is repeated for two new frame intervals, where in the first interval the starting and ending frame indices are equal to S and (S+E)/2, and in the second interval the starting and ending frame indices are equal to (S+E)/2 and E, respectively. It is worth noting that the frame in the middle may be a synthetic frame or an original frame. This bi-directional generation of synthetic frames is repeated, and in each iteration the gap between anchor frames is halved (bisected). In the end, it can be determined for each frame between the frames with index S and E whether it is to be replaced or not. The recursion is started by initially setting the starting frame index to idx=0 and the ending frame index to idx=N, and then N frames are processed in each iteration until the end of the video sequence. For each iteration, it is determined for each frame between the starting and ending frames whether or not the frame is to be synthesized. Moreover, a list of sets of binary flags listSet[L][N−1] is collected. For each set, the size is fixed as N−1. A flag with a value of 0 indicates using an existing coded frame, and a flag with a value of 1 indicates using a synthetic frame. Suppose the length of a video sequence is K; then the length L of the list is determined as (K−1)/N. The list of sets of binary flags is written into a bitstream. The sets of flags might be written to a sequence-level header of the video coding at one place, or a set might be written every N frames, from which it can be determined for each of the next N−1 frames whether a synthetic or an encoded frame is used. N shall be larger than or equal to 2.
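The recursive bisection described in this aspect may be sketched as follows; the PSNR computation and the plain-average interpolator are placeholders for the actual quality measure and synthesizer, and the flag convention (1 = synthesized, 0 = coded) follows the description above.

```python
import numpy as np

def psnr(a, b, peak=255.0):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak * peak / mse)

def interpolate(f0, f1):
    """Stand-in for the real interpolator (e.g. a neural-network synthesizer)."""
    return ((f0.astype(np.float64) + f1.astype(np.float64)) / 2).astype(f0.dtype)

def select_frames(frames, s, e, anchors, flags, threshold=35.0):
    """Recursively bisect [s, e]: synthesize the middle frame from the current
    anchors, keep it as an S frame (flag=1) if the quality is sufficient,
    otherwise mark it as a coded frame (flag=0) and use the original frame as
    the anchor for the deeper recursion levels."""
    if e - s < 2:
        return
    mid = (s + e) // 2
    candidate = interpolate(anchors[s], anchors[e])
    if psnr(candidate, frames[mid]) >= threshold:
        flags[mid] = 1                       # synthesized: only the position is signaled
        anchors[mid] = candidate             # the synthetic frame becomes an anchor
    else:
        flags[mid] = 0                       # coded: content goes into the bitstream
        anchors[mid] = frames[mid]
    select_frames(frames, s, mid, anchors, flags, threshold)
    select_frames(frames, mid, e, anchors, flags, threshold)

def determine_flags(frames, n, threshold=35.0):
    """Process one window of frames with indices 0..N and return the N-1 flags."""
    anchors = {0: frames[0], n: frames[n]}   # starting and ending frames are coded
    flags = {}
    select_frames(frames, 0, n, anchors, flags, threshold)
    return [flags[i] for i in range(1, n)]
```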
With reference to the example in
The processing on the encoder side may be also described as follows with reference to
The determination process starts after frame I0 601 and P1 602 are coded. It uses the encoded I0 and P1 frames to synthesize frame B2 603. If the PSNR of synthesized frame B2 is larger than a given PSNR value, then the synthesized frame B2 is picked up. This means that an indication of the current position of B2 is included into the bitstream portion. Otherwise, frame B2 is encoded. This means that the content of an input frame at the current position of B2 is encoded.
If there are one or more frames depending on B2, then the frame (either synthesized or encoded) is put into a decoded frame buffer (DFB). Frames in the decoded frame buffer are also used in the encoder, providing reference pictures for frames that depend on them. As
The above process can be described in a more general way using the following steps:
After determining the frame type (whether synthesized or encoded normally) for the frame at the current position with index idx=4 (B2) at hierarchy level 1, the processing repeats the determination process in a recursive manner for frame B3 at H level 2, using as inputs a starting and an ending frame with indices equal to 0 and 4, respectively. In other words, at this step of the recursion, I0 601 is the starting frame and B2 603 the ending frame. In this example, I0 is a coded frame, as is B2, which is a coded input frame as a result of the PSNR of B2 not fulfilling the predetermined condition. The result of the determination at the next recursion step shows that the PSNR of B3 is not high enough (i.e. is not larger than the given threshold, e.g. 35), and hence B3 is encoded. In other words, the content of the input frame at the current position idx=2 is encoded into the bitstream portion.
After B3 is determined, the determination process again continues the recursion to determine the frame types of b4 and b5 at the highest hierarchy level 3, using two GOPs each having a GOP size equal to 2. The frame type of b4 is determined from the GOP having frame I0 601 at idx=0 as starting frame and frame B3 604 at idx=2 as ending frame. Likewise, the frame type of b5 is determined from the GOP having frame B3 604 at idx=2 as starting frame and frame B2 603 at idx=4 as ending frame. Thus, the positions of the starting and ending frames used at the respective H levels may be specified in terms of pairs of picture indices of the respective GOP. In the example, these pairs are (0, 2) and (2, 4), respectively. In this round of determination, both b4 605 and b5 606 satisfy the PSNR condition, and hence it is determined that a synthesized frame is generated at the current positions idx=1 and idx=3, respectively. It is noted that, since no frame depends on b4 or b5, frames b4 and b5 do not have to be put into the decoded picture buffer. This is a result of the highest H level in the example of
A similar recursive determination process can be applied to a GOP between frames with indices 4 to 8 (the index in display order). Among frames b7 608, B6 607, and b8 609, frame b7 is determined to be used in its synthesized form, as shown in the figure inset at the top right in
The determined positions of those frames need to be written into the bitstream in order to signal to the decoder which frame needs to be synthesized. Usually, each encoded frame corresponds to two packages in the bitstream. The first bitstream package is the frame header, indicating some high-level syntax, such as the frame type. The second package includes the syntax for decoding the content of the frame (frame content), i.e. for reconstructing the frame from the bitstream. The first and second bitstream packages are together referred to as the bitstream portion.
Normally, the header is lightweight, and most bits are written to the second package. For each frame, a flag may be written at the beginning of the frame's picture header to indicate whether said frame shall be synthesized or not. For example, if the flag is true, no other bits need to be written to the picture header, in which case the second package for the frame reconstruction (i.e. content decoding) is skipped as well. In the above example, the set flag=“true” indicates that the respective frame is to be synthesized. Therefore, said frame is not encoded. Alternatively, the flag may be set to “false” to indicate that the respective frame is to be synthesized.
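A hypothetical sketch of such flag-based signaling is given below; the bit layout (a 1-bit flag followed, for coded frames, by a 2-bit frame type code) is an illustrative assumption and not the actual header syntax of any particular CODEC.

```python
def write_picture_header(bits: list, synthesize: bool, frame_type_code: int = 0):
    """Write a minimal picture header: a 1-bit flag first; if the frame is to be
    synthesized, no further header bits and no content package are written."""
    bits.append(1 if synthesize else 0)
    if not synthesize:
        bits.extend(int(b) for b in format(frame_type_code, '02b'))  # e.g. I/P/B type code

def read_picture_header(bits: list, pos: int):
    """Parse the flag; return (synthesize, frame_type_code or None, next bit position)."""
    synthesize = bits[pos] == 1
    if synthesize:
        return True, None, pos + 1
    frame_type_code = int(''.join(str(b) for b in bits[pos + 1:pos + 3]), 2)
    return False, frame_type_code, pos + 3
```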
In the above discussed example, the structure of the GOP is not changed. The unchanged GOP structure means that the frame sequence [I BBB B BBB P] is neither shortened nor extended after one or more B frames are replaced within the GOP. Replacing means that, for the fixed GOP structure, S frames are generated at those positions of B frames of the GOP. Furthermore, the dependency/coding order between these frames is not changed, as shown in
With the positions of S frames being signaled to the decoder via indications of positions within the bitstream, the decoder knows at which positions S frame should be generated within a GOP having a predefined GOP structure (i.e. fixed GOP structure). The decoder then generates frames of the GOP in a recursive manner, similar to the encoding side except for checking the quality measure. The GOP includes two or more frames, which are already generated, and the GOP comprises a starting frame with a start position and an ending frame with an end position. The respective positions are in display order.
At the beginning of the recursion, the two generated frames are frames whose content is decoded from the bitstream portion. The bitstream portion is then parsed for the indication of a current position between the start and end position. If the position indication is parsed (i.e. said indication is actually included in the bitstream portion), a synthesized frame is generated in decoding order at the current position from the starting frame and the ending frame. If the position indication is not parsed (i.e. said position indication is not included in the bitstream portion), then the content of a current frame at the current position is decoded from the bitstream portion. Hence, the generated frames of the GOP include decoded frames and synthesized frames. The recursion is continued using the starting frame and, as the ending frame, the S frame or the decoded frame at the current position. This refers to continuing the recursion to the left. In turn, when continuing to the right, the S frame or the decoded frame at the current position is used as the starting frame, together with the original ending frame. Accordingly, the frames of the GOP are generated using both decoded frames and already synthesized frames. Hence, frames of the GOP may be generated from a reduced number of decoded frames as a result of the lightweight bitstream.
According to an aspect, a list of sets of binary flags listSet[L][N−1] is obtained from a sequence-level header in a bitstream, from which it can be determined for every N−1 frames whether they are to be synthesized or whether an existing coded frame is used. Further, a set of binary flags Set[N−1] is obtained from the picture header of a current frame, the current frame being the first frame of every N frames. From the Set[N−1], it can be determined for the next N−1 frames following the current frame whether they are to be synthesized or not. Moreover, a variable frame_idx is set to zero, and a current frame is decoded. The decoded frame is added into a decoded picture buffer, and the frame_idx is added into a list of decoded frame indices (frame_idx_list). It is then checked whether the length of frame_idx_list is larger than 2: if this is the case, frames in between the two decoded frames are synthesized, using the corresponding coded frames in the decoded picture buffer, in a hierarchical way (i.e. in decoding order).
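The decoder-side loop of this aspect may be sketched as follows, assuming `decode_next()` stands in for the normal content decoder and a plain average stands in for the hierarchical interpolator; trailing S frames of a window are completed once the first coded frame of the next window has been decoded.

```python
import numpy as np

def synthesize(f0, f1):
    """Stand-in hierarchical interpolator (plain average of the two anchors)."""
    return ((f0.astype(np.float64) + f1.astype(np.float64)) / 2).astype(f0.dtype)

def fill(frames, s, e):
    """Bisect (s, e) and synthesize the middle frame from the two anchors (decoding order)."""
    if e - s < 2:
        return
    mid = (s + e) // 2
    frames[mid] = synthesize(frames[s], frames[e])
    fill(frames, s, mid)
    fill(frames, mid, e)

def generate_gop(flags, decode_next):
    """Reconstruct one window of frames. flags[i] (i = 1..N-1) is 1 when frame i
    is synthesized; frame 0 and flagged-0 frames are decoded via decode_next().
    Gaps between consecutive decoded frames are filled recursively."""
    n = len(flags) + 1
    frames = {0: decode_next()}
    decoded_idx = [0]
    for idx in range(1, n):
        if flags[idx - 1] == 0:                  # coded frame: decode its content
            frames[idx] = decode_next()
            decoded_idx.append(idx)
            fill(frames, decoded_idx[-2], decoded_idx[-1])
    # Any S frames after the last decoded frame of this window are generated
    # once the first coded frame of the next window becomes available.
    return [frames[i] for i in sorted(frames)]
```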
In
The frame-level based approach discussed in the previous embodiment imposes a limitation on the coding order. For example, the determined frame type for frames with indices 0 to 8 in display order is [I B S B B B S B P], wherein S represents a synthetic frame. Such a determination result can be applied for a random access configuration with a GOP size of 8, for example. However, such a frame sequence of a GOP is not compatible with a low delay configuration. This is because, in the sequence [I B S B B B S B P], the frame with index idx=2 is synthesized, and hence may require that the frame with index idx=4 shall be decoded first. However, in a low delay configuration, the coding order and display order shall be the same, and the frame with index idx=4 shall not be decoded before the frame with index idx=2.
For this reason, a more coarse-grained determination method is generalized for the low delay scenario. For a low delay configuration, the input parameter of a lookahead constant may be required. The lookahead constant is basically the maximum allowed GOP size. The look ahead constant may be predefined and determines a maximum number of synthesized frames that can be generated for the GOP.
A low delay scenario may be implemented in that a set of one or more positions of a next coded frame within a GOP is determined. The set of positions includes a first position at which an S frame may be generated. Further, the GOP includes, as coded input frames, a starting frame with a start position and an ending frame with an end position, with said positions being in display order. In
Then, one or more S frames (frames 703 to 709 in
When the QM fulfills the predetermined condition for each of the S frames (e.g. PSNR larger than 35), the ending frame is determined as the next frame that is encoded, in which case the content of the ending frame at the end position is encoded into the bitstream portion. For example, if the PSNR of all S frames 703 to 709 is sufficiently high, frame 702 at idx=8 would be the next frame whose content is encoded.
In turn, when any of the QM of the S frames does not fulfill the predetermined condition, the recursion is continued by bisecting the GOP size, and the start position and end position of the input frames of the respective bisected GOP are used. In
According to an aspect, the adaptively selecting frame process takes as inputs a starting frame with index S and an ending frame with index E, and synthesizes a frame in the middle of the starting and ending frames with index (S+E)/2. The synthesized frame in the middle could again be used as an anchor frame to generate synthesized frames in two new intervals. In the first interval, the starting and ending frame indices equal S and (S+E)/2, respectively, and in the second interval the starting and ending frame indices equal (S+E)/2 and E, respectively. This bi-directional generation of synthetic frames is repeated, and in each iteration the gap between anchor frames is halved. In the end, the synthetic frames are generated hierarchically. For example, the adaptively selecting frame process sets the starting frame and ending frame to be the original frames (i.e. input frames) with index=0 and index=N, respectively. When the quality of all synthetic frames between the starting frame and the ending frame is larger than or equal to the input criterion quality, these frames are determined to be replaced with synthetic frames. In turn, when the quality of at least one of the frames between the starting frame and the ending frame is smaller than the input criterion quality, the input parameter N is shrunk by half (bisected), the starting and ending frames are set to be the frames with index=0 and index=N/2, and the quality of the synthesized frames between frames 0 and N/2 is determined. If there is still at least one synthetic frame whose quality is smaller than the given criterion quality, the input N is further shrunk by half, until the ending frame becomes the frame with index=1. This frame would correspond to frame 705 in
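The coarse-grained, low delay determination of this aspect may be sketched as follows; the PSNR helper and the distance-weighted interpolator are simplifying assumptions, and the routine assumes that enough look-ahead frames are available in the buffer.

```python
import numpy as np

def psnr(a, b, peak=255.0):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak * peak / mse)

def interpolate(f0, f1, w):
    """Stand-in interpolator with a distance weight w in (0, 1)."""
    return ((1 - w) * f0.astype(np.float64) + w * f1.astype(np.float64)).astype(f0.dtype)

def next_coded_distance(frames, start, lookahead, threshold=35.0):
    """Return the distance M to the next coded frame: try the full window first;
    if any synthesized frame in between fails the quality criterion, bisect the
    window until M == 1 (i.e. the next frame itself must be coded)."""
    assert start + lookahead < len(frames), "enough look-ahead frames must be available"
    m = lookahead
    while m > 1:
        end = start + m
        ok = all(
            psnr(interpolate(frames[start], frames[end], (pos - start) / m), frames[pos]) >= threshold
            for pos in range(start + 1, end)
        )
        if ok:
            return m          # all frames between start and start+m are synthesized
        m //= 2               # shrink (bisect) the candidate GOP size
    return 1
```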
As may be discerned from
According to an aspect, the adaptively selecting frame process determines a number M, indicating the position of the next frame which shall be coded by the existing codec (i.e. the position of the next coded frame), whereas the frames between frame 0 and frame M are all synthesized. The number M refers to the GOP size. Moreover, the adaptively selecting frame process sets new starting and ending frames with index=M and index=M+N to determine the position of the next coded frame in a new window (M, M+N) (i.e. a second GOP size), and repeats the process until the end of the video sequence.
According to an aspect, the determined coded frame positions M are collected in a list of coded frame positions (M0, M1, M2 . . . ME), where ME indicates the last coded frame position in the video sequence. The coded frame position list is written into a bitstream. As frame 0 is always a coded frame, M0 corresponds to the position of the second coded frame. The coded frame positions might be written to the bitstream in a direct way (the index of the coded frame) or in an indirect way (the difference to the previous coded frame), i.e. (M0, M1−M0, M2−M1, . . . ). The coded frame position list might be written to a sequence-level header of the video coding at one place, or the coded frame position might be written to a frame-level header, from which the next coded frame position can be obtained. For example, for frame 0, M0 is signaled in its header. For frame M0, M1 or M1−M0 is signaled in its picture header.
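The direct and indirect (difference-based) representations of the coded frame position list can be converted into one another as sketched below; the example values match the case discussed further below, with coded frames at positions 4 and 12 after the always-coded frame 0.

```python
def positions_to_deltas(positions):
    """Encode the coded frame position list indirectly, as differences to the
    previous coded frame: (M0, M1-M0, M2-M1, ...). Frame 0 is always coded,
    so the first entry M0 is the position of the second coded frame."""
    return [positions[0]] + [b - a for a, b in zip(positions, positions[1:])]

def deltas_to_positions(deltas):
    """Recover (M0, M1, M2, ...) from the signaled differences."""
    positions, acc = [], 0
    for d in deltas:
        acc += d
        positions.append(acc)
    return positions

# Example: coded frames at positions 4 and 12 correspond to the deltas [4, 8].
assert positions_to_deltas([4, 12]) == [4, 8]
assert deltas_to_positions([4, 8]) == [4, 12]
```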
The processing on the encoder side may be also described as follows with reference to
The above coarse-grained determination process would return a GOP size corresponding to one value in the set (8, 4, 2, 1) for a given input of the lookahead constant number of 8 in this example. The returned GOP size determines the position of the next coded frame, i.e. a normally encoded frame. If one assumes for illustrative purposes that the next coded frame is the frame with index idx=4 (frame 703), the encoder would repeat the same determination process, but would proceed with the next GOP using frame 703 as the starting frame with index idx=4 and an ending frame with index idx=4+8=12, i.e. the starting index plus the lookahead constant. The above determination process is repeated for the whole GOP sequence, and a list of GOP sizes is collected, from which the coded frame positions can be determined.
For example, suppose that 13 frames of a GOP sequence are encoded, and the determined GOP size list is [4, 8]. Said GOP size list indicates that, apart from the first frame with index idx=0, the next two coded frames are those with index idx=0+4=4 (the next coded frame after frame 0) and index idx=4+8=12 (the next coded frame after frame 4). All other frames between 0 and 12, i.e. all frames except those with indices 0, 4, and 12, are synthesized as they are not coded frames.
Regarding the signaling in the bitstream, when coding the frame with index idx=0, the first determined GOP size (e.g. 4) can be signaled as well. This means that the bitstream includes an indication of a first position within a GOP at which an S frame is generated, along with an indication of one or more GOP sizes. Accordingly, the positions at which S frames are generated within the GOP may be easily determined from the GOP sizes.
The processing to generate the frames of the GOP is similar to the encoding side, except for the determination and check of the QM. After the content of coded frames is decoded from the bitstream, the GOP already comprises two decoded frames. The frames of the GOP of the video sequence are generated in a recursive manner in that the two decoded frames are used as a starting frame with a start position and an ending frame with an end position. Start and end position are in display order. Then, the bitstream portion is parsed for the indication of a first GOP size among the one or more GOP sizes.
According to an aspect, a list of coded frame positions (M0, M1, M2 . . . ME) or (M0, M1−M0, M2−M1, . . . ) is obtained from a sequence-level header in a bitstream (e.g. by parsing the bitstream), from which all coded frame positions in a video sequence can be determined. Further, a difference D between the next coded frame and the current frame is obtained from the picture header of the current frame, for example M0 for frame 0, and M1−M0 for frame M0, etc. From D, the next coded frame position can be determined. Moreover, a variable frame_idx is set to zero. A current frame is decoded, and the decoded frame is added into a decoded picture buffer. The frame_idx is added into a list of decoded frame indices (frame_idx_list). It is checked whether the length of frame_idx_list is larger than 2 and whether the difference between the last two elements in the frame_idx_list is larger than 1. If this is the case, frames are synthesized between the two coded frames that were just decoded, using the corresponding coded frames in the decoded picture buffer, in a hierarchical way (i.e. in decoding order).
Using this information, the decoder can then generate the frames in between, after decoding the frames at indices idx=0 and idx=4. In this case, the first GOP size is 4. The decoded frames are 701 and 703 at idx=0 and idx=4 in
The recursion continues by using as the starting frame the ending frame of the previous recursion step. In
From the above discussion, it is important that the distance to the next coded frame is signaled into the bitstream portion, since the decoder may not know if any and how many frames of the GOP have to be synthesized. As compared to FRUC, the amount of dropped frames is not predetermined (i.e. fixed), but decided and dynamically adjusted during the encoding process.
For all-intra configurations, i.e. configurations in which all images are encoded with intra prediction only and without inter-prediction, the coding order and display order are the same, and the same algorithm can be applied.
The present disclosure may be summarized as follows: The above discussed embodiments adaptively interpolate frames without bit cost for the content generation, for which a neural network may be used. The results for the positions of synthesized frames are written into the bitstream portion, so that it can be indicated to the decoder which frames of the video sequence should be synthesized at the respective position. As already discussed, the determination of the frame type initially uses original frames (i.e. input frames of the video sequence) as inputs for synthesizing S frames, and might use also those generated synthesized frames recursively to generate another synthesized frame at a higher hierarchical level. However, during encoding and decoding, the input of synthesized frames are encoded or decoded frames as the initial input. It is also noted that, for random access and low delay configurations, the determination of synthesized frames is slightly different.
Main aspects of the present disclosure include content adaptivity, being generic to existing CODECs, enabling bi-directional interpolation for all intra-prediction and uni-prediction CODECs, and improved compression without increasing the encoding/decoding latency. This is a result of employing a quality measure for generated S frames to determine whether it may be required to encode content of an input frame at a position of the generated S frame (low S frame quality) or to simply include a position indication of the respective position into the bitstream portion (high S frame quality). The more S frames are determined to be generated, the less content (i.e. input frames) needs to be encoded into the bitstream, which saves bit cost of the encoding. In that respect, even though the quality measure refers to the quality of the generated S frame, it may be also interpreted as being a measure for the quality of those frames (either input frames or already synthesized frames) that are used to generate the S frame. Therefore, the bitstream includes encoded content of those frames which (i) ensure generating S frames of a sufficient quality and/or (ii) are needed at certain positions since the quality of an S frame would be too low otherwise.
As discussed above, in embodiment 3 it is determined which frames should be synthesized. For a random access configuration, the approach uses the same GOP structure at the encoder, and determines whether a frame in a GOP should be synthesized or not based on a frame quality criterion.
In turn, in the low delay configuration of embodiment 4, the approach takes as input a maximum GOP size and the frame quality criterion. If all synthesized frames between the first frame and the end frame with index idx=max_GOP satisfy the criterion, then the given maximum GOP size is returned. Otherwise, the GOP size is shrunk (bisected), and the encoder determines whether all frames between the first frame and the ending frame with index idx=max_GOP/2 satisfy the criterion. The max_GOP can be shrunk until it becomes one, indicating that the next coded frame is the frame right next to the very first frame. In
Both of the above approaches can determine which frames should be synthesized adaptively, based on the given criterion. The position of synthetic frames can be adapted to the frame content in a video sequence. For example, if in a GOP the motion changes strongly, then the frame interpolation is likely not to work very well. In such a case, the encoder might determine that no frames should be synthesized. On the other hand, if in a GOP the content is rather stationary (i.e. still-like) and frame interpolation works well as a normal encoder does, then a majority of frames in a GOP are likely determined to be synthesized.
By contrast, FRUC would only synthesize frames evenly across a GOP sequence, and hence no content adaptation is achieved.
As discussed above, a new type of frame, namely a synthetic frame, is added and their positions are determined at the encoder side, and the respective positions are signaled into a bitstream to the decoder. The decoder then derives the positions of these synthetic frames by parsing the bitstream for an indication of those positions. It is noted that the present disclosure is suitable for any existing CODEC, making the approach of the present disclosure generic. Over and above, any of the approaches discussed above is well defined and can be easily implemented on top of a target CODEC.
It is noted further that, in some cases, a target CODEC might only be capable of performing intra prediction or uni-directional inter prediction. This is particularly true in the case of artificial intelligence based video codecs, where most of the work is based on an image CODEC (no temporal inter prediction). When applying the approaches of the present disclosure to such codecs, bi-directional interpolation capability for the target CODEC is enabled, because a synthetic frame uses bi-directional interpolation from two key frames encoded by the target CODEC.
The approaches of the present disclosure discussed above target a better compression by replacing low quality input frames with synthesized frames. Replacing means that, instead of encoding low quality input frames at a current position, an S frame is to be generated at the current position. Since coded frames are simply replaced with synthetic frames, the encoding and decoding latency would not increase. Suppose, for example, a target CODEC with GOP size 4 in a random access configuration has a coding structure IBBBPBBBPBBBP . . . , and the determination approach changes the frame types to ISBSPSBSPSBSP . . . , i.e. all odd frames (frame index starting from 0) are determined to be replaced. In this case, on the encoder side, the S frames do not need to be encoded; just a position of these synthetic frames is included into the bitstream portion, and hence signaled to the decoder. Therefore, the encoder essentially encodes input frames of the video sequence having the picture types of a structure IBPBPBP . . . . From the encoder perspective, the encoding latency is reduced by half. On the decoder side, the latency is still four since S frames are to be generated.
The above described generation of the bitstream (encoding side) and the generation of the frames (decoding side) may be realized in a software implementation.
Likewise,
In a further embodiment, a computer-readable non-transitory medium stores a program, including instructions which when executed on one or more processors cause the one or more processors to perform any of the above methods.
In another embodiment of the present disclosure, an apparatus for generating a bitstream representing input frames of a video sequence, comprises: one or more processors; and a non-transitory computer-readable storage medium coupled to the one or more processors and storing programming for execution by the one or more processors, wherein the programming, when executed by the one or more processors, configures the apparatus to carry out the method for generating a bitstream representing input frames of a video sequence.
In another embodiment of the present disclosure, an apparatus for generating frames of a video sequence from a bitstream representing the video sequence, comprises: one or more processors; and a non-transitory computer-readable storage medium coupled to the one or more processors and storing programming for execution by the one or more processors, wherein the programming, when executed by the one or more processors, configures the apparatus to carry out the method for generating frames of a video sequence from a bitstream representing the video sequence.
In a further embodiment, a computer program comprises program code for performing any one of the above methods when executed on a computer.
The person skilled in the art will understand that the “blocks” (“units”) or “modules” of the various figures (method and apparatus) represent or describe functionalities of embodiments of the present disclosure (rather than necessarily individual “units” in hardware or software) and thus describe equally functions or features of apparatus embodiments as well as method embodiments.
The terminology of “units” and/or “modules” is merely used for illustrative purposes of the functionality of embodiments of the encoder/decoder and is not intended to limit the disclosure.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiment is merely exemplary. For example, the unit division is merely logical function division and may be other division in actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented by using some interfaces. The indirect couplings or communication connections between the apparatuses or units may be implemented in electronic, optical, mechanical, or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the solutions of the embodiments.
In addition, the functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units are integrated into one unit.
Some further implementations in hardware and software are described in the following.
As mentioned above, HEVC may be used to encode the content of input frames, such as the first input frame in some embodiments. Likewise, HEVC may be used also for decoding the content from the bitstream portion. The present disclosure is not limited to the examples presented above. It is conceivable to also employ embodiments of the present disclosure within a codec such as the HEVC or another codec. Accordingly, in the following, the HEVC function is briefly described.
An implementation example of a HEVC encoder and decoder is shown
The inverse quantization unit 210, the inverse transform processing unit 212, the reconstruction unit 214, the loop filter 220, the decoded picture buffer (DPB) 230, the inter prediction unit 244 and the intra-prediction unit 254 are also referred to as forming the “built-in decoder” of video encoder 20.
The encoder 20 may be configured to receive, e.g. via input 201, a picture 17 (or picture data 17), e.g. picture of a sequence of pictures forming a video or video sequence. The received picture or picture data may also be a pre-processed picture 19 (or pre-processed picture data 19). For sake of simplicity the following description refers to the picture 17. The picture 17 may also be referred to as current picture or picture to be coded (in particular in video coding to distinguish the current picture from other pictures, e.g. previously encoded and/or decoded pictures of the same video sequence, i.e. the video sequence which also comprises the current picture).
A (digital) picture is or can be regarded as a two-dimensional array or matrix of samples with intensity values. A sample in the array may also be referred to as pixel (short form of picture element) or a pel. The number of samples in horizontal and vertical direction (or axis) of the array or picture define the size and/or resolution of the picture. For representation of color, typically three color components are employed, i.e. the picture may be represented or include three sample arrays. In RGB format or color space a picture comprises a corresponding red, green and blue sample array. However, in video coding each pixel is typically represented in a luminance and chrominance format or color space, e.g. YCbCr, which comprises a luminance component indicated by Y (sometimes also L is used instead) and two chrominance components indicated by Cb and Cr. The luminance (or short luma) component Y represents the brightness or grey level intensity (e.g. like in a grey-scale picture), while the two chrominance (or short chroma) components Cb and Cr represent the chromaticity or color information components.
Embodiments of the video encoder 20 may comprise a picture partitioning unit 262 configured to partition the picture 17 into a plurality of (typically non-overlapping) picture blocks 203. These blocks may also be referred to as root blocks, macro blocks (H.264/AVC) or coding tree blocks (CTB) or coding tree units (CTU) (H.265/HEVC and VVC). The picture partitioning unit may be configured to use the same block size for all pictures of a video sequence and the corresponding grid defining the block size, or to change the block size between pictures or subsets or groups of pictures, and partition each picture into the corresponding blocks.
Embodiments of the video encoder 20 as shown in
The quantization unit 208 may be configured to quantize the transform coefficients 207 to obtain quantized coefficients 209, e.g. by applying scalar quantization or vector quantization. The quantized coefficients 209 may also be referred to as quantized transform coefficients 209 or quantized residual coefficients 209.
The quantization process may reduce the bit depth associated with some or all of the transform coefficients 207. For example, an n-bit transform coefficient may be rounded down to an m-bit Transform coefficient during quantization, where n is greater than m. The degree of quantization may be modified by adjusting a quantization parameter (QP). For example for scalar quantization, different scaling may be applied to achieve finer or coarser quantization. Smaller quantization step sizes correspond to finer quantization, whereas larger quantization step sizes correspond to coarser quantization. The applicable quantization step size may be indicated by a quantization parameter (QP). The quantization parameter may for example be an index to a predefined set of applicable quantization step sizes. For example, small quantization parameters may correspond to fine quantization (small quantization step sizes) and large quantization parameters may correspond to coarse quantization (large quantization step sizes) or vice versa. The quantization may include division by a quantization step size and a corresponding and/or the inverse dequantization, e.g. by inverse quantization unit 210, may include multiplication by the quantization step size. Embodiments according to some standards, e.g. HEVC, may be configured to use a quantization parameter to determine the quantization step size. Generally, the quantization step size may be calculated based on a quantization parameter using a fixed point approximation of an equation including division. Additional scaling factors may be introduced for quantization and dequantization to restore the norm of the residual block, which might get modified because of the scaling used in the fixed point approximation of the equation for quantization step size and quantization parameter. In one example implementation, the scaling of the inverse transform and dequantization might be combined. Alternatively, customized quantization tables may be used and signaled from an encoder to a decoder, e.g. in a bitstream. The quantization is a lossy operation, wherein the loss increases with increasing quantization step sizes.
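As a simplified illustration of the relation between the quantization parameter and the quantization step size, the following sketch uses the approximate rule that the step size doubles about every 6 QP values; it deliberately omits the fixed point arithmetic and additional scaling factors of an actual HEVC implementation.

```python
import numpy as np

def qp_to_step(qp: int) -> float:
    """Approximate HEVC-style relation: the step size roughly doubles every 6 QP values."""
    return 2.0 ** ((qp - 4) / 6.0)

def quantize(coeffs: np.ndarray, qp: int) -> np.ndarray:
    """Scalar quantization: divide by the step size and round (lossy)."""
    return np.round(coeffs / qp_to_step(qp)).astype(np.int32)

def dequantize(levels: np.ndarray, qp: int) -> np.ndarray:
    """Inverse quantization: multiply the quantized levels by the same step size."""
    return levels.astype(np.float64) * qp_to_step(qp)

coeffs = np.array([100.3, -7.8, 0.4, 12.1])
levels = quantize(coeffs, qp=22)           # smaller QP -> finer quantization
reconstructed = dequantize(levels, qp=22)  # close to, but not identical to, coeffs
```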
Embodiments of the video encoder 20 (respectively quantization unit 208) may be configured to output quantization parameters (QP), e.g. directly or encoded via the entropy encoding unit 270, so that, e.g., the video decoder 30 may receive and apply the quantization parameters for decoding.
The inverse quantization unit 210 is configured to apply the inverse quantization of the quantization unit 208 on the quantized coefficients to obtain dequantized coefficients 211, e.g. by applying the inverse of the quantization scheme applied by the quantization unit 208 based on or using the same quantization step size as the quantization unit 208. The dequantized coefficients 211 may also be referred to as dequantized residual coefficients 211 and correspond—although typically not identical to the transform coefficients due to the loss by quantization—to the transform coefficients 207.
The reconstruction unit 214 (e.g. adder or summer 214) is configured to add the transform block 213 (i.e. reconstructed residual block 213) to the prediction block 265 to obtain a reconstructed block 215 in the sample domain, e.g. by adding—sample by sample—the sample values of the reconstructed residual block 213 and the sample values of the prediction block 265.
The above mentioned quantization parameter is one of the possible encoding parameters that may be set based on the importance according to some embodiments. Alternatively or in addition, the partitioning, the prediction type or loop-filtering may be used.
The loop filter unit 220 (or short “loop filter” 220) is configured to filter the reconstructed block 215 to obtain a filtered block 221, or in general, to filter reconstructed samples to obtain filtered samples. The loop filter unit is, e.g., configured to smooth pixel transitions, or otherwise improve the video quality. The loop filter unit 220 may comprise one or more loop filters such as a de-blocking filter, a sample-adaptive offset (SAO) filter or one or more other filters, e.g. a bilateral filter, an adaptive loop filter (ALF), a sharpening filter, a smoothing filter or a collaborative filter, or any combination thereof. Although the loop filter unit 220 is shown in
Embodiments of the video encoder 20 (respectively loop filter unit 220) may be configured to output loop filter parameters (such as sample adaptive offset information), e.g. directly or encoded via the entropy encoding unit 270, so that, e.g., a decoder 30 may receive and apply the same loop filter parameters or respective loop filters for decoding.
The decoded picture buffer (DPB) 230 may be a memory that stores reference pictures, or in general reference picture data, for encoding video data by video encoder 20. The DPB 230 may be formed by any of a variety of memory devices, such as dynamic random access memory (DRAM), including synchronous DRAM (SDRAM), magneto-resistive RAM (MRAM), resistive RAM (RRAM), or other types of memory devices.
The mode selection unit 260 comprises partitioning unit 262, inter-prediction unit 244 and intra-prediction unit 254, and is configured to receive or obtain original picture data, e.g. an original block 203 (current block 203 of the current picture 17), and reconstructed picture data, e.g. filtered and/or unfiltered reconstructed samples or blocks of the same (current) picture and/or from one or a plurality of previously decoded pictures, e.g. from decoded picture buffer 230 or other buffers (e.g. line buffer, not shown). The reconstructed picture data is used as reference picture data for prediction, e.g. inter-prediction or intra-prediction, to obtain a prediction block 265 or predictor 265.
Mode selection unit 260 may be configured to determine or select a partitioning for a current block prediction mode (including no partitioning) and a prediction mode (e.g. an intra or inter prediction mode) and generate a corresponding prediction block 265, which is used for the calculation of the residual block 205 and for the reconstruction of the reconstructed block 215.
Embodiments of the mode selection unit 260 may be configured to select the partitioning and the prediction mode (e.g. from those supported by or available for mode selection unit 260), which provide the best match or in other words the minimum residual (minimum residual means better compression for transmission or storage), or a minimum signaling overhead (minimum signaling overhead means better compression for transmission or storage), or which considers or balances both. The mode selection unit 260 may be configured to determine the partitioning and prediction mode based on rate distortion optimization (RDO), i.e. select the prediction mode which provides a minimum rate distortion. Terms like “best”, “minimum”, “optimum” etc. in this context do not necessarily refer to an overall “best”, “minimum”, “optimum”, etc. but may also refer to the fulfillment of a termination or selection criterion like a value exceeding or falling below a threshold or other constraints leading potentially to a “sub-optimum selection” but reducing complexity and processing time. The RDO may be also used to select one or more parameters based on the importance determined.
In other words, the partitioning unit 262 may be configured to partition the block 203 into smaller block partitions or sub-blocks (which form again blocks), e.g. iteratively using quad-tree-partitioning (QT), binary partitioning (BT) or triple-tree-partitioning (TT) or any combination thereof, and to perform, e.g., the prediction for each of the block partitions or sub-blocks, wherein the mode selection comprises the selection of the tree-structure of the partitioned block 203 and the prediction modes are applied to each of the block partitions or sub-blocks.
The partitioning unit 262 may partition (or split) a current block 203 into smaller partitions, e.g. smaller blocks of square or rectangular size. These smaller blocks (which may also be referred to as sub-blocks) may be further partitioned into even smaller partitions. This is also referred to as tree-partitioning or hierarchical tree-partitioning, wherein a root block, e.g. at root tree-level 0 (hierarchy-level 0, depth 0), may be recursively partitioned, e.g. partitioned into two or more blocks of a next lower tree-level, e.g. nodes at tree-level 1 (hierarchy-level 1, depth 1), wherein these blocks may be again partitioned into two or more blocks of a next lower level, e.g. tree-level 2 (hierarchy-level 2, depth 2), etc. until the partitioning is terminated, e.g. because a termination criterion is fulfilled, e.g. a maximum tree depth or minimum block size is reached. Blocks which are not further partitioned are also referred to as leaf-blocks or leaf nodes of the tree. A tree using partitioning into two partitions is referred to as binary-tree (BT), a tree using partitioning into three partitions is referred to as ternary-tree (TT), and a tree using partitioning into four partitions is referred to as quad-tree (QT).
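A minimal sketch of such hierarchical quad-tree partitioning is given below; the termination criteria (minimum block size and maximum depth) and the optional per-block split decision callback are illustrative assumptions rather than the actual partitioning logic of a particular codec.

```python
def quad_tree_partition(x, y, size, min_size=8, max_depth=3, depth=0, split_decision=None):
    """Recursively split a square block into four quadrants (QT). `split_decision`
    decides per block whether to split further; by default a block is split until
    min_size or max_depth is reached. Returns the leaf blocks as (x, y, size) tuples."""
    should_split = (split_decision(x, y, size) if split_decision
                    else size > min_size and depth < max_depth)
    if not should_split or size <= min_size or depth >= max_depth:
        return [(x, y, size)]                       # leaf block (not further partitioned)
    half = size // 2
    leaves = []
    for dy in (0, half):
        for dx in (0, half):
            leaves += quad_tree_partition(x + dx, y + dy, half, min_size, max_depth,
                                          depth + 1, split_decision)
    return leaves

# Example: partition a 64x64 root block (e.g. a CTU) down to 16x16 leaf blocks.
blocks = quad_tree_partition(0, 0, 64, min_size=16, max_depth=2)
```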
As mentioned before, the term “block” as used herein may be a portion, in particular a square or rectangular portion, of a picture. With reference, for example, to HEVC and VVC, the block may be or correspond to a coding tree unit (CTU), a coding unit (CU), prediction unit (PU), and transform unit (TU) and/or to the corresponding blocks, e.g. a coding tree block (CTB), a coding block (CB), a transform block (TB) or prediction block (PB).
For example, a coding tree unit (CTU) may be or comprise a CTB of luma samples, two corresponding CTBs of chroma samples of a picture that has three sample arrays, or a CTB of samples of a monochrome picture or a picture that is coded using three separate colour planes and syntax structures used to code the samples. Correspondingly, a coding tree block (CTB) may be an N×N block of samples for some value of N such that the division of a component into CTBs is a partitioning. A coding unit (CU) may be or comprise a coding block of luma samples, two corresponding coding blocks of chroma samples of a picture that has three sample arrays, or a coding block of samples of a monochrome picture or a picture that is coded using three separate colour planes and syntax structures used to code the samples. Correspondingly a coding block (CB) may be an M×N block of samples for some values of M and N such that the division of a CTB into coding blocks is a partitioning.
In embodiments, e.g., according to HEVC, a coding tree unit (CTU) may be split into CUs by using a quad-tree structure denoted as coding tree. The decision whether to code a picture area using inter-picture (temporal) or intra-picture (spatial) prediction is made at the CU level. Each CU can be further split into one, two or four PUs according to the PU splitting type. Inside one PU, the same prediction process is applied and the relevant information is transmitted to the decoder on a PU basis. After obtaining the residual block by applying the prediction process based on the PU splitting type, a CU can be partitioned into transform units (TUs) according to another quadtree structure similar to the coding tree for the CU.
Different sizes of the blocks, or maximum and/or minimum of the blocks obtained by partitioning may be also part of the encoding parameters, as different sizes of blocks will result in different coding efficiencies.
In one example, the mode selection unit 260 of video encoder 20 may be configured to perform any combination of the partitioning techniques described herein.
As described above, the video encoder 20 is configured to determine or select the best or an optimum prediction mode from a set of (e.g. pre-determined) prediction modes. The set of prediction modes may comprise, e.g., intra-prediction modes and/or inter-prediction modes.
In the example of
As explained with regard to the encoder 20, the inverse quantization unit 210, the inverse transform processing unit 212, the reconstruction unit 214, the loop filter 220, the decoded picture buffer (DPB) 230, the inter prediction unit 344 and the intra prediction unit 354 are also referred to as forming the “built-in decoder” of video encoder 20. Accordingly, the inverse quantization unit 310 may be identical in function to the inverse quantization unit 210, the inverse transform processing unit 312 may be identical in function to the inverse transform processing unit 212, the reconstruction unit 314 may be identical in function to reconstruction unit 214, the loop filter 320 may be identical in function to the loop filter 220, and the decoded picture buffer 330 may be identical in function to the decoded picture buffer 230. Therefore, the explanations provided for the respective units and functions of the video encoder 20 apply correspondingly to the respective units and functions of the video decoder 30.
The entropy decoding unit 304 is configured to parse the bitstream 21 (or in general encoded picture data 21) and perform, for example, entropy decoding to the encoded picture data 21 to obtain, e.g., quantized coefficients 309 and/or decoded coding parameters (not shown in
The inverse quantization unit 310 may be configured to receive quantization parameters (QP) (or in general information related to the inverse quantization) and quantized coefficients from the encoded picture data 21 (e.g. by parsing and/or decoding, e.g. by entropy decoding unit 304) and to apply based on the quantization parameters an inverse quantization on the decoded quantized coefficients 309 to obtain dequantized coefficients 311, which may also be referred to as transform coefficients 311. The inverse quantization process may include use of a quantization parameter determined by video encoder 20 for each video block in the video slice (or tile or tile group) to determine a degree of quantization and, likewise, a degree of inverse quantization that should be applied.
Inverse transform processing unit 312 may be configured to receive dequantized coefficients 311, also referred to as transform coefficients 311, and to apply a transform to the dequantized coefficients 311 in order to obtain reconstructed residual blocks 313 in the sample domain. The reconstructed residual blocks 313 may also be referred to as transform blocks 313.
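For illustration only, such a transform may be sketched as a floating-point separable 2-D inverse DCT-II over a square block of coefficients; actual codecs such as HEVC or VVC use integer approximations of such transforms, so the sketch below is not the normative transform.

    import numpy as np

    # Illustrative sketch of a separable 2-D inverse DCT-II.
    def dct_matrix(n):
        # Orthonormal forward DCT-II matrix of size n x n.
        k = np.arange(n)
        m = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
        m[0, :] /= np.sqrt(2)
        return m * np.sqrt(2.0 / n)

    def inverse_transform_2d(coeffs):
        t = dct_matrix(coeffs.shape[0])
        # For an orthonormal matrix the inverse equals the transpose.
        return t.T @ coeffs @ t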
The reconstruction unit 314 (e.g. adder or summer 314) may be configured to add the reconstructed residual block 313 to the prediction block 365 to obtain a reconstructed block 315 in the sample domain, e.g. by adding the sample values of the reconstructed residual block 313 and the sample values of the prediction block 365.
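This addition may be illustrated by the following sketch, which assumes that reconstructed samples are clipped to the valid range for a given bit depth (e.g. 0..255 for 8-bit samples).

    import numpy as np

    # Illustrative sketch of reconstruction: residual plus prediction, clipped
    # to the valid sample range for the assumed bit depth.
    def reconstruct(residual, prediction, bit_depth=8):
        recon = residual.astype(np.int32) + prediction.astype(np.int32)
        return np.clip(recon, 0, (1 << bit_depth) - 1)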
The loop filter unit 320 (either in the coding loop or after the coding loop) is configured to filter the reconstructed block 315 to obtain a filtered block 321, e.g. to smooth pixel transitions or otherwise improve the video quality. The loop filter unit 320 may comprise one or more loop filters such as a de-blocking filter, a sample-adaptive offset (SAO) filter or one or more other filters, e.g. a bilateral filter, an adaptive loop filter (ALF), a sharpening filter, a smoothing filter or a collaborative filter, or any combination thereof. Although the loop filter unit 320 is shown in
The inter prediction unit 344 may be identical in function to the inter prediction unit 244 (in particular to the motion compensation unit), and the intra prediction unit 354 may be identical in function to the intra prediction unit 254; they perform splitting or partitioning decisions and prediction based on the partitioning and/or prediction parameters or respective information received from the encoded picture data 21 (e.g. by parsing and/or decoding, e.g. by entropy decoding unit 304). Mode application unit 360 may be configured to perform the prediction (intra or inter prediction) per block based on reconstructed pictures, blocks or respective samples (filtered or unfiltered) to obtain the prediction block 365.
Mode application unit 360 is configured to determine the prediction information for a video block of the current video slice by parsing the motion vectors or related information and other syntax elements, and to use the prediction information to produce the prediction blocks for the current video block being decoded.
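Purely as an illustration of producing an inter prediction block from a decoded motion vector, the following sketch copies the block from the reference picture at the displaced position; it assumes integer-pel accuracy only, whereas real decoders additionally interpolate fractional-pel positions.

    import numpy as np

    # Illustrative sketch of integer-pel motion compensation: the prediction
    # block is read from the reference picture at the current block position
    # displaced by the motion vector (mvx, mvy), clamped to picture boundaries.
    def motion_compensate(ref, x, y, w, h, mvx, mvy):
        rx = int(np.clip(x + mvx, 0, ref.shape[1] - w))
        ry = int(np.clip(y + mvy, 0, ref.shape[0] - h))
        return ref[ry:ry + h, rx:rx + w]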
The embodiments of the video decoder 30 as shown in
Embodiments of the video decoder 30 as shown in
Other variations of the video decoder 30 can be used to decode the encoded picture data 21. For example, the decoder 30 can produce the output video stream without the loop filtering unit 320. For example, a non-transform based decoder 30 can inverse-quantize the residual signal directly without the inverse-transform processing unit 312 for certain blocks or frames. In another implementation, the video decoder 30 can have the inverse-quantization unit 310 and the inverse-transform processing unit 312 combined into a single unit.
In the following embodiments of a video coding system 10, a video encoder 20 and a video decoder 30 are described based on
As shown in
The source device 12 comprises an encoder 20, and may additionally, i.e. optionally, comprise a picture source 16, a pre-processor (or pre-processing unit) 18, e.g. a picture pre-processor 18, and a communication interface or communication unit 22.
The picture source 16 may comprise or be any kind of picture capturing device, for example a camera for capturing a real-world picture, and/or any kind of a picture generating device, for example a computer-graphics processor for generating a computer animated picture, or any kind of other device for obtaining and/or providing a real-world picture, a computer generated picture (e.g. a screen content, a virtual reality (VR) picture) and/or any combination thereof (e.g. an augmented reality (AR) picture). The picture source may be any kind of memory or storage storing any of the aforementioned pictures.
In distinction to the pre-processor 18 and the processing performed by the pre-processing unit 18, the picture or picture data 17 may also be referred to as raw picture or raw picture data 17.
Pre-processor 18 is configured to receive the (raw) picture data 17 and to perform pre-processing on the picture data 17 to obtain a pre-processed picture 19 or pre-processed picture data 19. Pre-processing performed by the pre-processor 18 may, e.g., comprise trimming, color format conversion (e.g. from RGB to YCbCr), color correction, or de-noising. It can be understood that the pre-processing unit 18 may be an optional component.
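One such pre-processing step, the conversion from RGB to YCbCr, may be sketched as follows; the BT.601 weights and the chroma offset of 128 used here are an illustrative assumption, as the exact conversion depends on the colour space actually used for the sequence.

    import numpy as np

    # Illustrative sketch of RGB-to-YCbCr conversion using BT.601 weights,
    # assuming 8-bit sample values in the range 0..255.
    def rgb_to_ycbcr(rgb):
        r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
        y  = 0.299 * r + 0.587 * g + 0.114 * b
        cb = 0.564 * (b - y) + 128.0
        cr = 0.713 * (r - y) + 128.0
        return np.stack([y, cb, cr], axis=-1)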
The video encoder 20 is configured to receive the pre-processed picture data 19 and provide encoded picture data 21 (further details were described above, e.g., based on
Communication interface 22 of the source device 12 may be configured to receive the encoded picture data 21 and to transmit the encoded picture data 21 (or any further processed version thereof) over communication channel 13 to another device, e.g. the destination device 14 or any other device, for storage or direct reconstruction.
The destination device 14 comprises a decoder 30 (e.g. a video decoder 30), and may additionally, i.e. optionally, comprise a communication interface or communication unit 28, a post-processor 32 (or post-processing unit 32) and a display device 34.
The communication interface 28 of the destination device 14 is configured to receive the encoded picture data 21 (or any further processed version thereof), e.g. directly from the source device 12 or from any other source, e.g. a storage device, e.g. an encoded picture data storage device, and to provide the encoded picture data 21 to the decoder 30.
The communication interface 22 and the communication interface 28 may be configured to transmit or receive the encoded picture data 21 or encoded data 13 via a direct communication link between the source device 12 and the destination device 14, e.g. a direct wired or wireless connection, or via any kind of network, e.g. a wired or wireless network or any combination thereof, or any kind of private and public network, or any kind of combination thereof.
The communication interface 22 may be, e.g., configured to package the encoded picture data 21 into an appropriate format, e.g. packets, and/or process the encoded picture data using any kind of transmission encoding or processing for transmission over a communication link or communication network.
The communication interface 28, forming the counterpart of the communication interface 22, may be, e.g., configured to receive the transmitted data and process the transmission data using any kind of corresponding transmission decoding or processing and/or de-packaging to obtain the encoded picture data 21.
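Purely as a hypothetical illustration of such packaging and de-packaging (not any standardized transport format), the encoded data may be split into fixed-size payloads carrying sequence numbers and reassembled in order at the receiving side.

    # Hypothetical packetization helpers, for illustration only.
    def packetize(data: bytes, payload_size: int = 1400):
        return [(seq, data[i:i + payload_size])
                for seq, i in enumerate(range(0, len(data), payload_size))]

    def depacketize(packets):
        # Reassemble by sequence number, tolerating out-of-order arrival.
        return b"".join(payload for _, payload in sorted(packets))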
Both, communication interface 22 and communication interface 28 may be configured as unidirectional communication interfaces as indicated by the arrow for the communication channel 13 in
The decoder 30 is configured to receive the encoded picture data 21 and provide decoded picture data 31 or a decoded picture 31 (further details were described above, e.g., based on
The post-processor 32 of destination device 14 is configured to post-process the decoded picture data 31 (also called reconstructed picture data), e.g. the decoded picture 31, to obtain post-processed picture data 33, e.g. a post-processed picture 33. The post-processing performed by the post-processing unit 32 may comprise, e.g. color format conversion (e.g. from YCbCr to RGB), color correction, trimming, or re-sampling, or any other processing, e.g. for preparing the decoded picture data 31 for display, e.g. by display device 34.
The display device 34 of the destination device 14 is configured to receive the post-processed picture data 33 for displaying the picture, e.g. to a user or viewer. The display device 34 may be or comprise any kind of display for representing the reconstructed picture, e.g. an integrated or external display or monitor. The display may, e.g., comprise a liquid crystal display (LCD), an organic light emitting diode (OLED) display, a plasma display, a projector, a micro LED display, liquid crystal on silicon (LCoS), a digital light processor (DLP), or any kind of other display.
Although
As will be apparent for the skilled person based on the description, the existence and (exact) split of functionalities of the different units or functionalities within the source device 12 and/or destination device 14 as shown in
The encoder 20 (e.g. a video encoder 20) or the decoder 30 (e.g. a video decoder 30) or both encoder 20 and decoder 30 may be implemented via processing circuitry as shown in
Source device 12 and destination device 14 may comprise any of a wide range of devices, including any kind of handheld or stationary devices, e.g. notebook or laptop computers, mobile phones, smart phones, tablets or tablet computers, cameras, desktop computers, set-top boxes, televisions, display devices, digital media players, video gaming consoles, video streaming devices (such as content services servers or content delivery servers), broadcast receiver device, broadcast transmitter device, or the like and may use no or any kind of operating system. In some cases, the source device 12 and the destination device 14 may be equipped for wireless communication. Thus, the source device 12 and the destination device 14 may be wireless communication devices.
In some cases, video coding system 10 illustrated in
For convenience of description, embodiments of the present disclosure are described herein, for example, by reference to High-Efficiency Video Coding (HEVC) or to the reference software of Versatile Video Coding (VVC), the next generation video coding standard developed by the Joint Collaboration Team on Video Coding (JCT-VC) of the ITU-T Video Coding Experts Group (VCEG) and ISO/IEC Moving Picture Experts Group (MPEG). One of ordinary skill in the art will understand that embodiments of the present disclosure are not limited to HEVC or VVC.
The video coding device 400 comprises ingress ports 410 (or input ports 410) and receiver units (Rx) 420 for receiving data; a processor, logic unit, or central processing unit (CPU) 430 to process the data; transmitter units (Tx) 440 and egress ports 450 (or output ports 450) for transmitting the data; and a memory 460 for storing the data. The video coding device 400 may also comprise optical-to-electrical (OE) components and electrical-to-optical (EO) components coupled to the ingress ports 410, the receiver units 420, the transmitter units 440, and the egress ports 450 for egress or ingress of optical or electrical signals.
The processor 430 is implemented by hardware and software. The processor 430 may be implemented as one or more CPU chips, cores (e.g., as a multi-core processor), FPGAs, ASICs, and DSPs. The processor 430 is in communication with the ingress ports 410, receiver units 420, transmitter units 440, egress ports 450, and memory 460. The processor 430 comprises a coding module 470. The coding module 470 implements the disclosed embodiments described above. For instance, the coding module 470 implements, processes, prepares, or provides the various coding operations. The inclusion of the coding module 470 therefore provides a substantial improvement to the functionality of the video coding device 400 and effects a transformation of the video coding device 400 to a different state. Alternatively, the coding module 470 is implemented as instructions stored in the memory 460 and executed by the processor 430.
The memory 460 may comprise one or more disks, tape drives, and solid-state drives and may be used as an over-flow data storage device, to store programs when such programs are selected for execution, and to store instructions and data that are read during program execution. The memory 460 may be, for example, volatile and/or non-volatile and may be a read-only memory (ROM), random access memory (RAM), ternary content-addressable memory (TCAM), and/or static random-access memory (SRAM).
A processor 502 in the apparatus 500 can be a central processing unit. Alternatively, the processor 502 can be any other type of device, or multiple devices, capable of manipulating or processing information now-existing or hereafter developed. Although the disclosed implementations can be practiced with a single processor as shown, e.g., the processor 502, advantages in speed and efficiency can be achieved using more than one processor.
A memory 504 in the apparatus 500 can be a read only memory (ROM) device or a random access memory (RAM) device in an implementation. Any other suitable type of storage device can be used as the memory 504. The memory 504 can include code and data 506 that is accessed by the processor 502 using a bus 512. The memory 504 can further include an operating system 508 and application programs 510, the application programs 510 including at least one program that permits the processor 502 to perform the methods described here. For example, the application programs 510 can include applications 1 through N, which further include a video coding application that performs the methods described herein, including the generation of a bitstream using synthesized frames in dynamic groups of pictures and the reconstruction of the video sequence from such a bitstream.
The apparatus 500 can also include one or more output devices, such as a display 518. The display 518 may be, in one example, a touch sensitive display that combines a display with a touch sensitive element that is operable to sense touch inputs. The display 518 can be coupled to the processor 502 via the bus 512.
Although depicted here as a single bus, the bus 512 of the apparatus 500 can be composed of multiple buses. Further, the secondary storage 514 can be directly coupled to the other components of the apparatus 500 or can be accessed via a network and can comprise a single integrated unit such as a memory card or multiple units such as multiple memory cards. The apparatus 500 can thus be implemented in a wide variety of configurations.
Although embodiments of the present disclosure have been primarily described based on video coding, it should be noted that embodiments of the coding system 10, encoder 20 and decoder 30 (and correspondingly the system 10) and the other embodiments described herein may also be configured for still picture processing or coding, i.e. the processing or coding of an individual picture independent of any preceding or consecutive picture as in video coding. In general, only the inter-prediction units 244 (encoder) and 344 (decoder) may not be available in case the picture processing or coding is limited to a single picture 17. All other functionalities (also referred to as tools or technologies) of the video encoder 20 and video decoder 30 may equally be used for still picture processing, e.g. residual calculation 204/304, transform 206, quantization 208, inverse quantization 210/310, (inverse) transform 212/312, partitioning 262/362, intra-prediction 254/354, and/or loop filtering 220, 320, and entropy coding 270 and entropy decoding 304.
Embodiments, e.g. of the encoder 20 and the decoder 30, and functions described herein, e.g. with reference to the encoder 20 and the decoder 30, may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored on a computer-readable medium or transmitted over communication media as one or more instructions or code and executed by a hardware-based processing unit. Computer-readable media may include computer-readable storage media, which corresponds to a tangible medium such as data storage media, or communication media including any medium that facilitates transfer of a computer program from one place to another, e.g., according to a communication protocol. In this manner, computer-readable media generally may correspond to (1) tangible computer-readable storage media which is non-transitory or (2) a communication medium such as a signal or carrier wave. Data storage media may be any available media that can be accessed by one or more computers or one or more processors to retrieve instructions, code and/or data structures for implementation of the techniques described in this disclosure. A computer program product may include a computer-readable medium.
By way of example, and not limiting, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage, or other magnetic storage devices, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, any connection is properly termed a computer-readable medium. For example, if instructions are transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. It should be understood, however, that computer-readable storage media and data storage media do not include connections, carrier waves, signals, or other transitory media, but are instead directed to non-transitory, tangible storage media. Disk and disc, as used herein, includes compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Combinations of the above should also be included within the scope of computer-readable media.
Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor,” as used herein may refer to any of the foregoing structure or any other structure suitable for implementation of the techniques described herein. In addition, in some aspects, the functionality described herein may be provided within dedicated hardware and/or software modules configured for encoding and decoding, or incorporated in a combined codec. Also, the techniques could be fully implemented in one or more circuits or logic elements.
The techniques of this disclosure may be implemented in a wide variety of devices or apparatuses, including a wireless handset, an integrated circuit (IC) or a set of ICs (e.g., a chip set). Various components, modules, or units are described in this disclosure to emphasize functional aspects of devices configured to perform the disclosed techniques, but do not necessarily require realization by different hardware units. Rather, as described above, various units may be combined in a codec hardware unit or provided by a collection of inter-operative hardware units, including one or more processors as described above, in conjunction with suitable software and/or firmware.
Summarizing, the present disclosure relates to methods and apparatuses for generating a bitstream from input frames of a video sequence and for generating, from that bitstream, the frames of said video sequence. For that purpose, synthesized frames are generated at a position by interpolation using input frames, and a quality of the synthesized frame is determined. Whether an indication of the position is included into the bitstream or the input frame at said position is encoded into the bitstream depends on the synthesized frame's quality. When the synthesized frame meets a quality criterion, the position indication is included into the bitstream. Otherwise, the content of the input frame at said position is encoded. Hence, only a minimal number of input frames is encoded, which, together with the position information of the synthesized frames, is sufficient to generate the frames of the video sequence. Such a bitstream generating method may be advantageous in highly efficient codecs where bitstreams are generated at strongly reduced bit cost.
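A minimal sketch of this decision is given below under illustrative assumptions that are not mandated by the present disclosure: the synthesized frame is obtained by temporal averaging of the two neighbouring input frames, the quality measure is the PSNR with respect to the input frame at that position, the predetermined condition is a fixed PSNR threshold, and encode_frame stands for a hypothetical conventional encoder call.

    import numpy as np

    # Minimal, illustrative sketch of the per-position decision described above.
    def psnr(a, b, max_val=255.0):
        mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
        return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

    def build_bitstream_portion(prev_frame, cur_frame, next_frame, position,
                                encode_frame, threshold_db=35.0):
        # Synthesize the frame at `position` from its neighbours; simple
        # averaging is used here as a stand-in for any interpolation method.
        synthesized = (prev_frame.astype(float) + next_frame.astype(float)) / 2.0
        if psnr(synthesized, cur_frame) >= threshold_db:
            # Quality condition fulfilled: include only the position indication.
            return {"type": "position_indication", "position": position}
        # Otherwise, encode the content of the input frame at this position.
        return {"type": "encoded_frame", "data": encode_frame(cur_frame)}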
This application is a continuation of International Application No. PCT/RU2021/000299, filed on Jul. 13, 2021, the disclosure of which is hereby incorporated by reference in its entirety.
Relationship | Number | Date | Country
---|---|---|---
Parent | PCT/RU2021/000299 | Jul 2021 | WO
Child | 18412589 |  | US