MACHINE LEARNING BASED VIDEO COMPRESSION

Information

  • Patent Application
  • Publication Number
    20230077379
  • Date Filed
    October 24, 2022
  • Date Published
    March 16, 2023
Abstract
Systems and methods are disclosed for compressing a target video. A computer-implemented method may use a computer system that includes one or more physical computer processors and non-transient electronic storage. The computer-implemented method may include: obtaining the target video, extracting one or more frames from the target video, and generating an estimated optical flow based on a displacement of pixels between the one or more frames. The one or more frames may include one or more of a key frame and a target frame.
Description
TECHNICAL FIELD

The present disclosure relates generally to video compression.


BRIEF SUMMARY OF THE EMBODIMENTS

Embodiments of the present disclosure include systems and methods of compressing video using machine learning. In accordance with the technology described herein, a computer-implemented method for compressing a target video is disclosed. The computer-implemented method may be implemented in a computer system that may include one or more physical computer processors and non-transient electronic storage. The computer-implemented method may include obtaining, from the non-transient electronic storage, the target video. The computer-implemented method may include extracting, with the one or more physical computer processors, one or more frames from the target video. The one or more frames may include one or more of a key frame and a target frame. The computer-implemented method may also include generating, with the one or more physical computer processors, an estimated optical flow based on a displacement of pixels between the one or more frames.


In embodiments, the displacement of pixels may be between a key frame and/or the target frame.


In embodiments, the computer-implemented method may further include applying, with the one or more physical computer processors, the estimated optical flow to a trained optical flow model to generate a refined optical flow. The trained optical flow model may have been trained by using optical flow training data. The optical flow training data may include (i) optical flow data, (ii) a corresponding residual, (iii) a corresponding warped frame, and/or (iv) a corresponding target frame.


In embodiments, the computer-implemented method may further include generating, with the one or more physical computer processors, a warped target frame by applying the estimated optical flow to the key frame. The warped target frame may include a missing element not visible in the key frame. The computer-implemented method may also include identifying, with the one or more physical computer processors, the missing element in the warped target frame using supplemental information. The computer-implemented method may include synthesizing, with the one or more physical computer processors, the missing element from the warped target frame by applying the warped target frame to a trained interpolation model. The trained interpolation model may have been trained using interpolation training data. The interpolation training data may include (i) a user-defined value and/or (ii) multiple sets of frames. A given set of frames may include a previous training frame, a target training frame, and/or a subsequent training frame. The computer-implemented method may also include generating, with the one or more physical computer processors, a synthesized target frame.


In embodiments, the supplemental information may include one or more of a mask, the target frame, a given magnitude of a given estimated optical flow for a given object in the warped target frame, and/or a depth corresponding to the missing element.


In embodiments, identifying the missing element may include, based on the given magnitude of the given estimated optical flow of the given object, identifying, with the one or more physical computer processors, the given object as a foreground object when the magnitude reaches a threshold value. Identifying the missing element may also include identifying, with the one or more physical computer processors, the missing element in a background of the warped target frame using the displacement of the foreground object between the one or more frames.


In embodiments, identifying the missing element may include, based on a change of depth of an object between the one or more frames, identifying, with the one or more physical computer processors, the missing element using the estimated optical flow. Identifying the missing element may also include generating, with the one or more physical computer processors, an element to apply to the missing element. Identifying the missing element may include generating, with the one or more physical computer processors, a synthesized target frame.


In embodiments, the trained optical flow model and/or the trained interpolation model may include a convolutional neural network.


In embodiments, the computer-implemented method may further include encoding, with the one or more physical computer processors, the synthesized target frame. The computer-implemented method may include encoding, with the one or more physical computer processors, side information based on the encoded synthesized target frame. The side information may include one or more of the optical flow and/or a mask.


In accordance with additional aspects of the present disclosure, a system may include non-transient electronic storage and one or more physical computer processors. The one or more physical computer processors may be configured by machine-readable instructions to perform a number of operations. One operation may be to obtain, from the non-transient electronic storage, the target video. Another operation may be to extract, with the one or more physical computer processors, one or more frames from the target video. The one or more frames may include one or more of a key frame and/or a target frame. Yet another operation may be to generate, with the one or more physical computer processors, an estimated optical flow based on a displacement of pixels between the one or more frames.


In embodiments, another operation may be to apply, with the one or more physical computer processors, the estimated optical flow to a trained optical flow model to generate a refined optical flow. The trained optical flow model may have been trained by using optical flow training data. The optical flow training data may include (i) optical flow data, (ii) a corresponding residual, (iii) a corresponding warped frame, and/or (iv) a corresponding target frame.


In embodiments, another such operation is to generate, with the one or more physical computer processors, a warped target frame by applying the estimated optical flow to the key frame. The warped target frame may include a missing element not visible in the key frame. Yet another such operation is to identify, with the one or more physical computer processors, the missing element in the warped target frame using supplemental information. Another operation is to synthesize, with the one or more physical computer processors, the missing element from the warped target frame by applying the warped target frame to a trained interpolation model. The trained interpolation model may have been trained using interpolation training data. The interpolation training data may include (i) a user-defined value and (ii) multiple sets of frames. A given set of frames may include a previous training frame, a target training frame, and/or a subsequent training frame. Another operation is to generate, with the one or more physical computer processors, a synthesized target frame.


In embodiments, the supplemental information may include one or more of a mask, the target frame, a given magnitude of a given estimated optical flow for a given object in the warped target frame, and/or a depth corresponding to the missing element.


In embodiments, identifying the missing element may include, based on the given magnitude of the given estimated optical flow of the given object, identifying, with the one or more physical computer processors, the given object as a foreground object when the magnitude reaches a threshold value. Identifying the missing element may also include identifying, with the one or more physical computer processors, the missing element in a background of the warped target frame using the displacement of the foreground object between the one or more frames.


In embodiments, identifying the missing element may include, based on a change of depth of an object between the one or more frames, identifying, with the one or more physical computer processors, the missing element using the estimated optical flow. Identifying the missing element may also include generating, with the one or more physical computer processors, an element to apply to the missing element. Identifying the missing element may include generating, with the one or more physical computer processors, a synthesized target frame.


In embodiments, the trained optical flow model and/or the trained interpolation model may include a convolutional neural network.


In embodiments, the operation may include encoding, with the one or more physical computer processors, the synthesized target frame. In embodiments, the operation may also include encoding, with the one or more physical computer processors, side information based on the encoded synthesized target frame. The side information may include one or more of the optical flow and/or a mask.


In embodiments, the key frame may include one or more of a previous frame and/or a subsequent frame.


In embodiments, generating the estimated optical flow may include using, with the one or more physical computer processors, the previous frame and the target frame.


In accordance with additional aspects of the present disclosure, a non-transitory computer-readable medium may have executable instructions stored thereon that, when executed by one or more physical computer processors, cause the one or more physical computer processors to perform a number of operations. One operation may be to obtain, from the non-transient electronic storage, the target video. Another operation may be to extract, with the one or more physical computer processors, one or more frames from the target video. The one or more frames may include one or more of a key frame and a target frame. Yet another operation may be to generate, with the one or more physical computer processors, an estimated optical flow based on a displacement of pixels between the one or more frames.





BRIEF DESCRIPTION OF THE DRAWINGS

Aspects of the present disclosure will be appreciated upon review of the detailed description of the various disclosed embodiments, described below, when taken in conjunction with the accompanying figures.



FIG. 1A illustrates an example method for encoding and compressing video in accordance with embodiments disclosed herein.



FIG. 1B illustrates an example method for encoding and compressing video in accordance with embodiments disclosed herein.



FIG. 1C illustrates an example method for encoding and compressing video in accordance with embodiments disclosed herein.



FIG. 1D illustrates an example method for encoding and compressing video in accordance with embodiments disclosed herein.



FIG. 2 illustrates an example method for flow prediction in accordance with embodiments disclosed herein.



FIG. 3A illustrates an example method for forward image warping in accordance with embodiments disclosed herein.



FIG. 3B illustrates an example method for handling occlusions in accordance with embodiments disclosed herein.



FIG. 4 illustrates an example method for frame synthesis for video compression in accordance with embodiments disclosed herein.



FIG. 5 illustrates an example method for video compression in accordance with embodiments disclosed herein.



FIG. 6 illustrates an example method for encoding interpolated frames in accordance with embodiments disclosed herein.



FIG. 7 illustrates an example method for guided compression for side information in accordance with embodiments disclosed herein.



FIG. 8 illustrates an example convolutional neural network architecture in accordance with embodiments disclosed herein.



FIG. 9 illustrates an example method for joint optical flow and image compression, in accordance with embodiments disclosed herein.



FIG. 10 illustrates an example computing module that may be used to implement features of various embodiments of the disclosure.





The figures are described in greater detail in the description and examples below, are provided for purposes of illustration only, and merely depict typical or example embodiments of the disclosure. The figures are not intended to be exhaustive or to limit the disclosure to the precise form disclosed. It should also be understood that the disclosure may be practiced with modification or alteration, and that the disclosure may be limited only by the claims and the equivalents thereof.


DETAILED DESCRIPTION

The present disclosure relates to systems and methods for machine learning based video compression. For example, neural autoencoders have been applied to single-image compression, but machine learning (i.e., deep learning) approaches to video compression have largely focused on frame interpolation and its application to video compression.


Embodiments disclosed herein are directed towards frame synthesis methods that include interpolation and extrapolation with multiple warping approaches, compression schemes that use intermediate frame interpolation results and/or compression schemes that employ correlation between images and related information, such as optical flow.


Video codecs used for video compression generally decompose video into a set of key frames encoded as single images, and a set of frames for which interpolation is used. In contrast, the present disclosure applies deep learning (e.g., neural networks) to encode, compress, and decode video. For example, the disclosed method may include interpolating frames using deep learning and applying various frame warping methods to correct image occlusions and/or other artifacts from using the optical flow. The method may use the deep learning algorithm to predict the interpolation result. Embodiments disclosed herein may further apply forward warping to the interpolation to correlate flow maps and images for improved compression. In some embodiments, a video compression scheme may predict a current frame by encoding already available video frames, e.g., the current frame and one or more reference frames. This is comparable to video frame interpolation and extrapolation, with the difference that the image to be predicted is available at encoding time. Example video compression schemes may include motion estimation, image synthesis, and data encoding, as will be described herein.



FIGS. 1A, 1B, 1C, and 1D illustrate an example method for encoding and compressing video. Assuming some reference frame r1 is available, it is possible to use motion to efficiently encode a frame I (e.g., adjacent, within five frames, and/or other numbers of frames) by: computing a 2D displacement map F_{r1→I} (e.g., FIG. 1A); using the motion field and the reference frame to synthesize an estimate Î of the image to encode (e.g., FIG. 1B); encoding and transmitting the motion field—depending on the chosen encoding/decoding scheme, the frame r1 might be available at decoding time (e.g., FIG. 1C); and encoding I taking into account the synthesis result Î, which is used as supplementary input (e.g., FIG. 1D). In some examples, a residual to Î is explicitly encoded. The residual may be the difference between I and Î. For reference frames, image and flow coding/decoding may be accomplished with a single network as there is correlation between flow and image content. For example, the single network may receive one or more inputs (e.g., optical flow, images, video, residuals, supplemental information, etc.) and output information used in a decoder network to produce playable content.
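
By way of illustration only, the following Python sketch outlines the encoding steps just described under simplifying assumptions; the helper functions (estimate_flow, forward_warp, synthesis_net, encode) are hypothetical placeholders and do not correspond to any specific implementation disclosed herein.

```python
# Minimal sketch of the per-frame encoding steps described above.
# All helpers passed in (estimate_flow, forward_warp, synthesis_net, encode)
# are hypothetical placeholders, not the disclosed implementation.

def encode_frame(I, r1, estimate_flow, forward_warp, synthesis_net, encode):
    """Encode target frame I using a single reference frame r1."""
    # 1. Compute the 2D displacement map F_{r1->I} (FIG. 1A).
    flow_r1_to_I = estimate_flow(r1, I)

    # 2. Synthesize an estimate of I from the motion field and reference (FIG. 1B).
    warped = forward_warp(r1, flow_r1_to_I)      # W_{r1->I}
    I_hat = synthesis_net(warped)                # Î

    # 3. Encode and transmit the motion field (FIG. 1C).
    flow_bits = encode(flow_r1_to_I)

    # 4. Encode I taking the synthesis result Î into account (FIG. 1D),
    #    here as an explicitly encoded residual I - Î.
    residual_bits = encode(I - I_hat)
    return flow_bits, residual_bits
```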


In some embodiments, using available reference frames {ri | i ∈ 1 . . . n} (usually n=2), a new frame, or target frame, I, may be encoded. The reference frames may be selected to have some overlap with the content of I. Motion vector maps, or optical flow, may be estimated between the reference frames and the target frame. For example, a motion vector map may correspond to a 2D displacement of pixels from ri to I.


Frame synthesis may use the estimated optical flow to forward warp (e.g., from an earlier frame of the video to a later frame of the video) the reference frames ri and compute a prediction of the image to encode. The forward mapped image may be denoted by W_{ri→I} and the prediction by Î. Using forward mapping may help identify image regions of I that are not present in the reference frame, or are otherwise occluded, indicating image regions where the machine learning algorithm may synthesize missing pixels. In the case of prediction from several reference frames, this also helps select which reference frame to use. A network, using machine learning (e.g., a convolutional neural network (CNN)), may select the reference frame with the smallest corresponding residual value or may select multiple frames to improve the predicted image, etc. It should be appreciated that other heuristics may be used to select the reference frame. In embodiments, these heuristics may be based on the residual. In addition to this, working with forward displacement maps allows the usage of the available reference frames ri for encoding and decoding. In some examples, the machine learning algorithm is a CNN.


Two types of frames may be used at encoding time: (1) the key frames, which rely entirely on single image compression, and (2) interpolated frames, which are the result of image synthesis. Encoding interpolated frames is more efficient because it takes advantage of the intermediate synthesis result Î. Any frame that is used as a reference frame ri must also encode the displacement map F_{ri→I} from ri to I. F_{ri→I} may be correlated to ri.


Optical Flow


Methods for estimating optical flow are disclosed herein. In some embodiments, for each reference frame ri, the 2D displacement for each pixel location may be predicted to match pixels from I.


A ground truth displacement map may be used to estimate optical flow. In this case, optical flow may be computed at encoding time, between the reference frame ri and the frame to encode I. This optical flow estimate may be encoded and transferred as part of the video data. In this example, the decoder only decodes the data to obtain the displacement map.



FIG. 2 illustrates a predicted displacement map that may be used to estimate optical flow using several reference frames. The pixels p1, p2, and p may correspond to frames r1, r2, and I, respectively. The optical flow vector F_{r1→I} can be predicted from the available reference frames r1 and r2. Residual motion may be needed to correct the prediction, as illustrated in FIG. 2. In embodiments, F_{r1→r2} and the residual R_{r1→I} can be used to infer F_{r1→I}.


In some embodiments, the reference frames r1 and r2 are respectively situated before and after I. Assuming linear motion, optical flow can be estimated as:






F_{r1→I} = 0.5 * F_{r1→r2} + R_{r1→I}   (1)


Where the term R_{r1→I} is the residual to be encoded, correcting errors in the optical flow or cases where the linear motion assumption is not respected. This scheme may increase decoding time but reduce the amount of data to be transferred.


Some example embodiments include predicting multiple displacement maps. When predicting multiple displacement maps, the correlation between displacement maps may be used for better flow prediction and to reduce the size of the residual information needed. This is illustrated in FIG. 2. It is possible to use the different displacement maps related to r1 for predicting motion in the opposite direction, F_{r2→I}. For example, the displacement map F_{r1→r2} may be inverted and/or computed using one or more reference frames, the inverted displacement map may be scaled by a value, such as, for example, 0.5, and the residual, R_{r1→I}, may be added. It should be appreciated that other techniques may be used to predict motion in the opposite direction.
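
As a rough numerical illustration of Equation (1) and the opposite-direction heuristic above, the following sketch assumes dense flow fields stored as (H, W, 2) arrays; the function names are illustrative only, and the simple negation used for "inversion" is a crude stand-in for a full flow inversion, which would also resample the field at the displaced locations.

```python
# Sketch of the linear-motion flow prediction of Eq. (1) and the
# opposite-direction heuristic. Pure numpy; names and shapes are assumptions.
import numpy as np

def predict_flow_r1_to_I(flow_r1_to_r2, residual_r1_to_I):
    # Eq. (1): F_{r1->I} = 0.5 * F_{r1->r2} + R_{r1->I}
    return 0.5 * flow_r1_to_r2 + residual_r1_to_I

def predict_flow_r2_to_I(flow_r1_to_r2, residual):
    # Crude opposite-direction prediction: negate F_{r1->r2} as an approximate
    # inversion, scale by 0.5, and add a residual correction.
    return 0.5 * (-flow_r1_to_r2) + residual
```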


Frame Synthesis


Some examples of frame prediction include estimating a prediction from a single image. In the case where a single reference frame r1 is available, the motion field F_{r1→I} may be used to forward warp the reference frame and obtain an initial estimate W_{r1→I}. The resulting image may contain holes in regions occluded or not visible in r1. Using machine learning (e.g., a convolutional neural network), the missing parts may be synthesized and used to compute an approximation Î of I:






Î = F_s(W_{r1→I}; Λ_s)   (2)


Some example embodiments include a method for predicting residual motion from multiple images. Video compression may involve synthesis from a single frame using larger time intervals. These images may then be used for predicting in-between short-range frames. The proposed synthesis algorithm can take an optional supplementary input when available. Embodiments of the present disclosure include warping one or more reference frames using optical flow and providing the warping results as input for synthesis.






Î = F_s(W_{r1→I}, . . . , W_{rn→I}; Λ_s)   (3)


Image Warping


In some embodiments, before using machine learning (e.g., a convolutional neural network) to synthesize the frame Î, the reference image may be warped using the estimated optical flow.



FIGS. 3A and 3B illustrate methods for warping the reference image. Referring to FIG. 3A, a method for warping a reference image may include: (a) using the optical flow to warp pixel p from a reference frame r1 to a new position in Î (pixel p may contribute to the color of pixel q, with weight wp corresponding to bilinear interpolation); and (b) to address artifacts from direct bilinear weighting, a ground truth displacement process, an image-based filtering process, and/or other process may be applied, but may require an additional binary mask applied to occluded regions. In some embodiments, a flow-based filtering process may be applied to address occlusions and/or artifacts without the need to transfer additional data.


In some embodiments, a forward approach may be used. For example, a pixel p from the reference frame r1 will contribute to the four pixel locations around its end position in Î. In embodiments, for a pixel location q, the resulting color is










W_{r1→I}(q) = Σ_{p ∈ S_q} (w_p / W_q) r1(p),   with   W_q = Σ_{p ∈ S_q} w_p   (4)







Sq is the set of pixels from r1 contributing to location q with weight wp. Bilinear weights may be used as illustrated in FIG. 3A. Regions corresponding to disocclusion in Î will not receive any color from the reference image, and they will be identified as regions the synthesis network has to inpaint.
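
A minimal sketch of the forward (splatting) warp of Equation (4) is given below, assuming numpy arrays for the reference image and flow; pixels that receive no contribution are returned as a hole mask for the synthesis network to inpaint. The function signature and flow channel ordering are illustrative assumptions, not the disclosed implementation.

```python
# Forward warp per Eq. (4): each source pixel splats its color onto the four
# pixels around its end position with bilinear weights, and the accumulated
# colors are normalized by the summed weights W_q.
import numpy as np

def forward_warp(ref, flow):
    """ref: (H, W, 3) image; flow: (H, W, 2) displacement, channel 0 = dx, 1 = dy."""
    H, W, C = ref.shape
    acc = np.zeros((H, W, C))      # sum of w_p * r1(p) per target pixel q
    wsum = np.zeros((H, W))        # W_q = sum of w_p
    ys, xs = np.mgrid[0:H, 0:W]
    xe = xs + flow[..., 0]         # end positions of each source pixel
    ye = ys + flow[..., 1]
    x0, y0 = np.floor(xe).astype(int), np.floor(ye).astype(int)
    fx, fy = xe - x0, ye - y0
    for dx, dy, w in [(0, 0, (1 - fx) * (1 - fy)), (1, 0, fx * (1 - fy)),
                      (0, 1, (1 - fx) * fy), (1, 1, fx * fy)]:
        qx, qy = x0 + dx, y0 + dy
        valid = (qx >= 0) & (qx < W) & (qy >= 0) & (qy < H)
        np.add.at(acc, (qy[valid], qx[valid]),
                  w[valid, None] * ref[ys[valid], xs[valid]])
        np.add.at(wsum, (qy[valid], qx[valid]), w[valid])
    hole = wsum < 1e-6             # disoccluded regions to be inpainted
    out = acc / np.maximum(wsum, 1e-6)[..., None]
    return out, hole
```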


If an occlusion occurs between r1 and I, using all candidate pixels in the contributing sets Sq will create ghosting artifacts (see FIG. 3B). The following methods may be used to reduce these ghosting artifacts and create the set Sq. In the following, Aq may represent the set of all pixels from r1 with end destinations near pixel q. "Near" may include within 10 pixels, 50 pixels, 100 pixels, etc.


In some examples, the filling in of occlusions may be estimated from the image. Contrary to frame interpolation, during video coding the ground truth colors of destination pixels are available and can be used to build the set Sq. The first element is the pixel p* defined as:










p* = arg min_{p ∈ A_q} ∥I(q) − r1(p)∥   (5)







From this, Sq is defined as the set of pixels p ∈ Aq satisfying:













∥I(q) − r1(p)∥ < ∥I(q) − r1(p*)∥ + ϵ   (6)







In embodiments, the sets Sq need not be explicitly built. Instead, pixels p that are not used may be marked and ignored in the warping. A morphological operation may be used to smooth the resulting mask around the occlusion by consecutively applying opening and closing with a kernel size of about 5 pixels. It should be appreciated that other processes may be applied to smooth the mask. At decoding time, the same warping approach may be used, but the mask may be transmitted with the optical flow.
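
The following sketch illustrates, under stated assumptions, the color-consistency test of Equations (5) and (6) for the candidate set Aq of a single location q, together with the morphological smoothing of the resulting mask. The data layout, the value of ϵ (colors assumed in [0, 1]), and the use of scipy's binary opening/closing are assumptions made for illustration.

```python
# Color-consistency candidate selection (Eqs. (5)-(6)) and mask smoothing.
import numpy as np
from scipy.ndimage import binary_opening, binary_closing

def keep_candidates(I_q, ref_colors, eps=0.05):
    """I_q: (3,) ground-truth color at q; ref_colors: (N, 3) colors r1(p) for p in A_q."""
    d = np.linalg.norm(ref_colors - I_q, axis=1)   # ||I(q) - r1(p)||
    best = d.min()                                  # ||I(q) - r1(p*)||, Eq. (5)
    return d < best + eps                           # pixels kept in S_q, Eq. (6)

def smooth_mask(used_mask, size=5):
    """Smooth the per-pixel 'used in warping' mask by opening then closing."""
    k = np.ones((size, size), dtype=bool)
    return binary_closing(binary_opening(used_mask, structure=k), structure=k)
```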


In some examples, locations and colors of occlusions may be estimated from displacement. The previous solution requires the use of a supplementary mask which is also encoded. In the present approach, the magnitude of the optical flow may be used to resolve occlusions. For example, a large motion is more likely to correspond to foreground objects. In this case, the first element is the pixel p* defined as:










p* = arg max_{p ∈ A_q} ∥F_{r1→I}(p)∥   (7)







Sq is defined as the set of pixels p ∈ Aq satisfying:





∥F_{r1→I}(p) − F_{r1→I}(p*)∥ < ϵ   (8)


Where ϵ may represent a user-defined threshold (e.g., based on the statistics of background motion). In embodiments, additional filtering may be used.
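
A short sketch of the displacement-based selection of Equations (7) and (8) follows; it keeps the candidates whose flow is within ϵ of the largest-motion (likely foreground) candidate. The array layout, the default ϵ, and the function name are assumptions for illustration.

```python
# Displacement-based occlusion resolution (Eqs. (7)-(8)).
import numpy as np

def keep_by_flow(flows_Aq, eps=1.0):
    """flows_Aq: (N, 2) flow vectors F_{r1->I}(p) for the candidates in A_q."""
    mag = np.linalg.norm(flows_Aq, axis=1)
    p_star = np.argmax(mag)                              # largest motion ~ foreground
    d = np.linalg.norm(flows_Aq - flows_Aq[p_star], axis=1)
    return d < eps                                       # Eq. (8): kept in S_q
```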


In some examples, occlusion may be estimated from depth. Depth ordering may be estimated with a machine learning process (e.g., a convolutional neural network). For example, a depth map network may estimate depth maps from an image or one or more monocular image sequences. Training data for the depth map network may include image sequences, depth maps, stereo image sequences, monocular sequences, and/or other content. After training an initial depth map network using the training data, a trained depth map network may receive content and estimate a depth map for the content and estimate occlusions based on the depth maps. Occluded pixels are identified with a depth test and simply ignored during warping. With sufficient computation power, more precise depth information can also be obtained using multi-view geometry techniques.
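
As an illustration of the depth test mentioned above, the following sketch performs a simple z-buffer style splat in which, when several source pixels land on the same target location, only the nearest one is kept; the data layout and helper name are assumptions, not the disclosed implementation.

```python
# Z-buffer style depth test during forward warping: nearer pixels win,
# occluded pixels are simply ignored.
import numpy as np

def splat_with_depth_test(colors, targets, depths, H, W):
    """colors: (N, 3); targets: (N, 2) integer (y, x) end positions; depths: (N,)."""
    out = np.zeros((H, W, 3))
    zbuf = np.full((H, W), np.inf)
    for c, (y, x), z in zip(colors, targets, depths):
        if 0 <= y < H and 0 <= x < W and z < zbuf[y, x]:
            zbuf[y, x] = z        # keep the nearest contributor at this location
            out[y, x] = c
    return out
```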


The warping techniques described herein are complementary and can be combined in different ways. For example, displacement and depth may be correlated. Many of the computations may be shared between the two modalities, and obtaining depth represents a relatively minor increment in computation time. Occlusion may be estimated from the ground truth image. Deciding whether the warping mask should be used may be based on comparing the encoding cost of the mask with that of the image residual after synthesis. In embodiments, these choices may be user selected based on the given application.


Synthesis Network



FIG. 4 illustrates an example synthesis network. Referring to FIG. 4, to synthesize the frame Î, the reference frame r1 may be warped using the optical flow map F_{r1→I}. The resulting image W_{r1→I} may be processed by the frame synthesis network to predict the image Î. When an additional reference frame r2 is available, a forward mapped image W_{r2→I} may be calculated and provided as a supplementary channel to the synthesis network. The network architecture may, for example, be a GridNet network, and/or other network types.


Still referring to FIG. 4, the network may take as input the concatenation of the warped reference frames. The training data may be multiple videos. The warped video frames may include holes and/or other occlusions. More inputs may be provided to the network, such as backward warped reference frames and/or warped feature maps.


Training depends on the application case. For example, for interpolation from two reference frames r1 and r2, the network may be trained to minimize the objective function L over the dataset D consisting of triplets of input images (r1, r2) and the corresponding ground truth interpolation frame, I:










Λ_s* = arg min_{Λ_s} 𝔼_{r1, r2, I ∼ D} [ L(F_s(W_{r1→I}, W_{r2→I}; Λ_s), I) ]   (9)







For the loss L, we use the ℓ1-norm of pixel differences, which may lead to sharper results than the ℓ2-norm.






L(Î, I) = ∥I − Î∥_1   (10)
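
A minimal PyTorch-style training step consistent with Equations (9) and (10) is sketched below; the synthesis network, optimizer, and tensor layout are placeholders and not the disclosed architecture.

```python
# One training step for the synthesis network: the two forward-warped
# references are concatenated along channels and the prediction is trained
# with an L1 loss against the ground-truth middle frame (Eqs. (9)-(10)).
import torch
import torch.nn.functional as F

def training_step(synthesis_net, optimizer, w_r1_to_I, w_r2_to_I, target_I):
    optimizer.zero_grad()
    pred = synthesis_net(torch.cat([w_r1_to_I, w_r2_to_I], dim=1))   # Î
    loss = F.l1_loss(pred, target_I)      # Eq. (10): ||I - Î||_1
    loss.backward()
    optimizer.step()
    return loss.item()
```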


Compression



FIG. 5 illustrates a video compression method. For example, a set of key frames may be defined. These key frames may be encoded using single image compression. For intermediate frames, encoding may take advantage of the intermediate synthesized result. Any frame used as reference for interpolation may also encode optical flow.


In some embodiments, image compression may be implemented through a compression network. In the following, C and D denote compression and decoding functions, respectively.


In some embodiments, key frames, which are not interpolated, may be compressed using a single image compression method (see FIG. 5). Key frames may be selected based on a selected interval spacing or a selection algorithm based on motion and scene content. Key frame compression may be implemented using the architecture and training loss described below:






L(I, I′) = R(I, I′) + γ ε(ỹ)   (11)


with ỹ = C(I) and I′ = D(ỹ). The total loss takes into account the reconstruction loss R(I, I′) and the rate loss given by the entropy ε(ỹ). In some embodiments, example video compression techniques may be described in greater detail in U.S. patent application Ser. No. 16/254,475, which is incorporated by reference in its entirety herein.
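
The following sketch shows one way the rate-distortion objective of Equation (11) might be written, with the reconstruction term taken as a mean squared error and the rate term approximated by the entropy of the quantized latent under a learned probability model; both choices are assumptions for illustration rather than the disclosed training loss.

```python
# Rate-distortion objective in the spirit of Eq. (11).
import torch

def rd_loss(I, I_rec, likelihoods, gamma=0.01):
    """I, I_rec: original and decoded images; likelihoods: p(ỹ) for the quantized latent."""
    recon = torch.mean((I - I_rec) ** 2)          # reconstruction loss R(I, I') (MSE assumed)
    bits = -torch.sum(torch.log2(likelihoods))    # entropy estimate ε(ỹ) in bits
    return recon + gamma * bits
```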



FIG. 6 illustrates a method for encoding interpolated frames. Encoding predicted frames may take advantage of the prediction Î, which is available both at encoding and decoding time. Multiple options are possible for encoding the interpolated frames, e.g., decoding the image I directly or decoding a residual to be added to the prediction Î.


In some examples, for predicted frames, the compression process may include multiple steps, e.g., interpolation and image coding, to make the process more efficient.


In one example, the residual information relative to the interpolation result may be explicitly encoded, or the network may be allowed to learn a better scheme. Training data for the network may be multiple videos. Training may include, for example, using a warped frame and generating multiple predictions of the warped frame. Residuals may be generated based on differences between the multiple predictions and the original frame. The residuals may be used to train the network to improve itself. In embodiments, the network may include a variational autoencoder including one or more convolutions, downscaling operations, upscaling operations, and/or other processes. It should be appreciated that other components may be used instead of, or in addition to, the network. In both cases, the network as illustrated in FIG. 6 may be used. In addition to the encoder/decoder architecture from [reference to DIS 295], features may also be extracted from Î and added as extra channels for the encoder. Encoder and feature extraction layers may follow the same architecture, but different weights can be used for each component.
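
As a hedged sketch of the guided encoder described above, the module below extracts features from the prediction Î and concatenates them as extra channels with the target frame before encoding; the layer sizes and structure are arbitrary assumptions and are not the disclosed architecture.

```python
# Encoder for interpolated frames guided by features from the prediction Î.
import torch
import torch.nn as nn

class GuidedEncoder(nn.Module):
    def __init__(self, feat_ch=32, latent_ch=64):
        super().__init__()
        # Feature extraction from Î (separate weights from the encoder path).
        self.feat = nn.Sequential(nn.Conv2d(3, feat_ch, 3, padding=1), nn.ReLU())
        self.enc = nn.Sequential(
            nn.Conv2d(3 + feat_ch, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(64, latent_ch, 5, stride=2, padding=2))

    def forward(self, I, I_hat):
        guide = self.feat(I_hat)                        # extra channels from Î
        return self.enc(torch.cat([I, guide], dim=1))   # latent to be quantized/coded

# Example usage: latent = GuidedEncoder()(I, I_hat) for (N, 3, H, W) tensors.
```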



FIG. 7 illustrates a method of performing image compression guided by side information. For example, the presently disclosed compression may take advantage of the correlation between the images and the side information (e.g., flow and masks). The image (when available) may be used to guide encoding and decoding. In some examples, a video compression method may use forward optical flow and forward warping to take advantage of the correlation between the side information and image content. For example, an image may be encoded and decoded separately, and the encoding and decoding may be guided by optical flow and binary masks, as illustrated in FIG. 7.


In some embodiments, the image and the side information may be encoded at the same time. In this case, image colors and side information may be concatenated along channels and the compression network may predict the same number of channels.
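
The following sketch illustrates joint encoding of the image and the side information by concatenating them along the channel dimension, with the network predicting the same number of channels; the autoencoder shown is an arbitrary placeholder chosen for illustration, not the disclosed compression network.

```python
# Joint coding of image colors and side information (flow, mask) concatenated
# along channels; the network reconstructs the same number of channels.
import torch
import torch.nn as nn

class JointCoder(nn.Module):
    def __init__(self, ch=3 + 2 + 1):                   # RGB + 2-channel flow + mask
        super().__init__()
        self.enc = nn.Conv2d(ch, 64, 5, stride=2, padding=2)
        # Transposed convolution restores the input resolution for even H, W.
        self.dec = nn.ConvTranspose2d(64, ch, 5, stride=2, padding=2, output_padding=1)

    def forward(self, image, flow, mask):
        x = torch.cat([image, flow, mask], dim=1)        # concatenate along channels
        return self.dec(self.enc(x))                     # reconstructs all channels
```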


In one embodiment, optical flow and image compression may be combined in one forward pass, as illustrated in FIG. 8. In some embodiments, the image residual may be compressed as well. Combining multiple features in a compression step may provide a method that can be trained end to end. As illustrated in FIG. 8, the optical flow may be estimated between frame I and two reference frames r1 and r2. The resulting flow maps F_{I→r1} and F_{I→r2} may be encoded. In embodiments, the flow encoder can take as supplementary input the warped frames W_{r1→I}, W_{r2→I} and the image I. The network may identify regions where the optical flow is less accurate (e.g., where its error is above a certain threshold) and where better compression can be achieved without loss in quality. In embodiments, backward warping may be used. Referring back to FIG. 8, the optical flow may be decoded. The frame synthesis network may be used to compute Î, a prediction for the image I. In embodiments, residual information may be encoded relative to the interpolation result. In some embodiments, the network may be trained to learn a better scheme, as described herein. In embodiments, synthesis and compression can be grouped into a single step.


Some embodiments of the present disclosure may be implemented using a convolutional neural network as illustrated in FIG. 9. As illustrated in FIG. 9, the architecture of an example Progressive Artifact Removal Network is depicted. DConv 3×3×64 refers to a 2D convolution with kernel size 3×3 and 64 output channels. The number of input channels is given from the previous layer. Output Set Index is the output for different distortion quality levels as described herein. There are skip connections from the input to all outputs and simple convolutional layers in between. Other types of convolutional networks and/or deep learning/machine learning methodologies may be used.
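
A hedged sketch of a network in the style described for FIG. 9 is given below: stacked 3×3, 64-channel convolutions with skip connections from the input to every output, and one output per distortion quality level; the depth and the number of quality levels are assumptions for illustration, not the disclosed design.

```python
# Progressive artifact-removal style network: simple convolutional blocks,
# skip connections from the input to all outputs, one head per quality level.
import torch
import torch.nn as nn

class ProgressiveArtifactRemoval(nn.Module):
    def __init__(self, in_ch=3, width=64, num_levels=4):
        super().__init__()
        self.blocks = nn.ModuleList()
        self.heads = nn.ModuleList()
        ch = in_ch
        for _ in range(num_levels):
            self.blocks.append(nn.Sequential(
                nn.Conv2d(ch, width, 3, padding=1), nn.ReLU(),
                nn.Conv2d(width, width, 3, padding=1), nn.ReLU()))
            self.heads.append(nn.Conv2d(width, in_ch, 3, padding=1))
            ch = width

    def forward(self, x):
        outputs, h = [], x
        for block, head in zip(self.blocks, self.heads):
            h = block(h)
            outputs.append(x + head(h))   # skip connection from the input to each output
        return outputs                    # one output per quality level ("Output Set Index")
```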


As used herein, the term component might describe a given unit of functionality that can be performed in accordance with one or more embodiments of the technology disclosed herein. As used herein, a component might be implemented utilizing any form of hardware, software, or a combination thereof. For example, one or more processors, controllers, ASICs, PLAs, PALs, CPLDs, FPGAs, logical components, software routines or other mechanisms might be implemented to make up a component. In implementation, the various components described herein might be implemented as discrete components or the functions and features described can be shared in part or in total among one or more components. In other words, as would be apparent to one of ordinary skill in the art after reading this description, the various features and functionality described herein may be implemented in any given application and can be implemented in one or more separate or shared components in various combinations and permutations. As used herein, the term engine may describe a collection of components configured to perform one or more specific tasks. Even though various features or elements of functionality may be individually described or claimed as separate components or engines, one of ordinary skill in the art will understand that these features and functionality can be shared among one or more common software and hardware elements, and such description shall not require or imply that separate hardware or software components are used to implement such features or functionality.


Where engines, components, or components of the technology are implemented in whole or in part using software, in one embodiment, these software elements can be implemented to operate with a computing or processing component capable of carrying out the functionality described with respect thereto. One such example computing component is shown in FIG. 10. Various embodiments are described in terms of this example-computing component 1000. After reading this description, it will become apparent to a person skilled in the relevant art how to implement the technology using other computing components or architectures.


Referring now to FIG. 10, computing component 1000 may represent, for example, computing or processing capabilities found within desktop, laptop and notebook computers; hand-held computing devices (PDA's, smart phones, cell phones, palmtops, etc.); mainframes, supercomputers, workstations or servers; or any other type of special-purpose or general-purpose computing devices as may be desirable or appropriate for a given application or environment. Computing component 1000 might also represent computing capabilities embedded within or otherwise available to a given device. For example, a computing component might be found in other electronic devices such as, for example, digital cameras, navigation systems, cellular telephones, portable computing devices, modems, routers, WAPs, terminals, and other electronic devices that might include some form of processing capability.


Computing component 1000 might include, for example, one or more processors, controllers, control components, or other processing devices, such as a processor 1004. Processor 1004 might be implemented using a general-purpose or special-purpose processing engine such as, for example, a microprocessor, controller, or other control logic. In the illustrated example, processor 1004 is connected to a bus 1002, although any communication medium can be used to facilitate interaction with other components of computing component 1000 or to communicate externally.


Computing component 1000 might also include one or more memory components, simply referred to herein as main memory 1008. For example, preferably random access memory (RAM) or other dynamic memory might be used for storing information and instructions to be executed by processor 1004. Main memory 1008 might also be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 1004. Computing component 1000 might likewise include a read only memory (“ROM”) or other static storage device coupled to bus 1002 for storing static information and instructions for processor 1004.


The computing component 1000 might also include one or more various forms of information storage device 1010, which might include, for example, a media drive 1012 and a storage unit interface 1020. The media drive 1012 might include a drive or other mechanism to support fixed or removable storage media 1014. For example, a hard disk drive, a floppy disk drive, a magnetic tape drive, an optical disk drive, a CD or DVD drive (R or RW), or other removable or fixed media drive might be provided. Accordingly, storage media 1014 might include, for example, a hard disk, a floppy disk, magnetic tape, cartridge, optical disk, a CD or DVD, or other fixed or removable medium that is read by, written to, or accessed by media drive 1012. As these examples illustrate, the storage media 1014 can include a computer usable storage medium having stored therein computer software or data.


In alternative embodiments, information storage mechanism 1010 might include other similar instrumentalities for allowing computer programs or other instructions or data to be loaded into computing component 1000. Such instrumentalities might include, for example, a fixed or removable storage unit 1022 and an interface 1020. Examples of such storage units 1022 and interfaces 1020 can include a program cartridge and cartridge interface, a removable memory (for example, a flash memory or other removable memory component) and memory slot, a PCMCIA slot and card, and other fixed or removable storage units 1022 and interfaces 1020 that allow software and data to be transferred from the storage unit 1022 to computing component 1000.


Computing component 1000 might also include a communications interface 1024. Communications interface 1024 might be used to allow software and data to be transferred between computing component 1000 and external devices. Examples of communications interface 1024 might include a modem or softmodem, a network interface (such as an Ethernet, network interface card, WiMedia, IEEE 802.XX, or other interface), a communications port (such as for example, a USB port, IR port, RS232 port, Bluetooth® interface, or other port), or other communications interface. Software and data transferred via communications interface 1024 might typically be carried on signals, which can be electronic, electromagnetic (which includes optical) or other signals capable of being exchanged by a given communications interface 1024. These signals might be provided to communications interface 1024 via a channel 1028. This channel 1028 might carry signals and might be implemented using a wired or wireless communication medium. Some examples of a channel might include a phone line, a cellular link, an RF link, an optical link, a network interface, a local or wide area network, and other wired or wireless communications channels.


In this document, the terms “computer program medium” and “computer usable medium” are used to generally refer to media such as, for example, memory 1008, storage unit 1020, media 1014, and channel 1028. These and other various forms of computer program media or computer usable media may be involved in carrying one or more sequences of one or more instructions to a processing device for execution. Such instructions embodied on the medium are generally referred to as “computer program code” or a “computer program product” (which may be grouped in the form of computer programs or other groupings). When executed, such instructions might enable the computing component 1000 to perform features or functions of the disclosed technology as discussed herein.


While various embodiments of the disclosed technology have been described above, it should be understood that they have been presented by way of example only, and not of limitation. Likewise, the various diagrams may depict an example architectural or other configuration for the disclosed technology, which is done to aid in understanding the features and functionality that can be included in the disclosed technology. The disclosed technology is not restricted to the illustrated example architectures or configurations, but the desired features can be implemented using a variety of alternative architectures and configurations. Indeed, it will be apparent to one of skill in the art how alternative functional, logical or physical partitioning and configurations can be implemented to implement the desired features of the technology disclosed herein. Also, a multitude of different constituent component names other than those depicted herein can be applied to the various partitions. Additionally, with regard to flow diagrams, operational descriptions and method claims, the order in which the steps are presented herein shall not mandate that various embodiments be implemented to perform the recited functionality in the same order unless the context dictates otherwise.


Although the disclosed technology is described above in terms of various exemplary embodiments and implementations, it should be understood that the various features, aspects and functionality described in one or more of the individual embodiments are not limited in their applicability to the particular embodiment with which they are described, but instead can be applied, alone or in various combinations, to one or more of the other embodiments of the disclosed technology, whether or not such embodiments are described and whether or not such features are presented as being a part of a described embodiment. Thus, the breadth and scope of the technology disclosed herein should not be limited by any of the above-described exemplary embodiments.


Terms and phrases used in this document, and variations thereof, unless otherwise expressly stated, should be construed as open ended as opposed to limiting. As examples of the foregoing: the term “including” should be read as meaning “including, without limitation” or the like; the term “example” is used to provide exemplary instances of the item in discussion, not an exhaustive or limiting list thereof; the terms “a” or “an” should be read as meaning “at least one,” “one or more” or the like; and adjectives such as “conventional,” “traditional,” “normal,” “standard,” “known” and terms of similar meaning should not be construed as limiting the item described to a given time period or to an item available as of a given time, but instead should be read to encompass conventional, traditional, normal, or standard technologies that may be available or known now or at any time in the future. Likewise, where this document refers to technologies that would be apparent or known to one of ordinary skill in the art, such technologies encompass those apparent or known to the skilled artisan now or at any time in the future.


The presence of broadening words and phrases such as “one or more,” “at least,” “but not limited to” or other like phrases in some instances shall not be read to mean that the narrower case is intended or required in instances where such broadening phrases may be absent. The use of the term “component” does not imply that the components or functionality described or claimed as part of the component are all configured in a common package. Indeed, any or all of the various components of a component, whether control logic or other components, can be combined in a single package or separately maintained and can further be distributed in multiple groupings or packages or across multiple locations.


Additionally, the various embodiments set forth herein are described in terms of exemplary block diagrams, flow charts, and other illustrations. As will become apparent to one of ordinary skill in the art after reading this document, the illustrated embodiments and their various alternatives can be implemented without confinement to the illustrated examples. For example, block diagrams and their accompanying description should not be construed as mandating a particular architecture or configuration.

Claims
  • 1. A computer-implemented method for compressing a target video, the computer-implemented method comprising: determining a first estimated optical flow based on a displacement of pixels between a first reference frame included in the target video and a target frame included in the target video; applying the first estimated optical flow to the first reference frame to produce a first warped target frame; synthesizing, via a first trained machine learning model, an estimate of the target frame based on the first warped target frame; and encoding the target frame based on the estimate of the target frame.
  • 2. The computer-implemented method of claim 1, further comprising synthesizing the estimate of the target frame based on a second warped target frame, wherein the second warped target frame is generated based on a second reference frame included in the target video.
  • 3. The computer-implemented method of claim 2, wherein the first reference frame precedes the target frame within the target video and the second reference frame succeeds the target frame within the target video.
  • 4. The computer-implemented method of claim 1, further comprising training a first machine learning model based on interpolation training data and one or more losses to generate the first trained machine learning model, wherein the interpolation training data comprises one or more training reference frames and a training target frame.
  • 5. The computer-implemented method of claim 4, wherein the one or more losses comprise an L1 norm between a first set of pixels generated by the first machine learning model based on the one or more training reference frames and a second set of pixels included in the training target frame.
  • 6. The computer-implemented method of claim 1, wherein applying the first estimated optical flow to the first reference frame comprises generating the first warped target frame based on one or more estimates of occlusion between the first reference frame and the target frame.
  • 7. The computer-implemented method of claim 6, wherein the one or more estimates of occlusion are based on at least one of a difference between a first pixel value from the first reference frame and a second pixel value from the target frame, a magnitude of motion between the first pixel value and the second pixel value, or a depth test associated with the first reference frame and the target frame.
  • 8. The computer-implemented method of claim 1, further comprising encoding the first estimated optical flow based on the target frame.
  • 9. The computer-implemented method of claim 1, wherein encoding the target frame comprises encoding a residual associated with the estimate of the target frame.
  • 10. The computer-implemented method of claim 1, wherein the first trained machine learning model comprises a convolutional neural network.
  • 11. One or more non-transitory computer-readable media storing instructions that, when executed by one or more processors, cause the one or more processors to perform the steps of: determining a first estimated optical flow based on a displacement of pixels between a first reference frame included in a target video and a target frame included in the target video; applying the first estimated optical flow to the first reference frame to produce a first warped target frame; synthesizing, via a first trained machine learning model, an estimate of the target frame based on the first warped target frame; and encoding the target frame based on the estimate of the target frame.
  • 12. The one or more non-transitory computer-readable media of claim 11, wherein the instructions further cause the one or more processors to perform the steps of: applying a second estimated optical flow to a second reference frame included in the target video to produce a second warped target frame; and synthesizing the estimate of the target frame based on the second warped target frame.
  • 13. The one or more non-transitory computer-readable media of claim 11, wherein applying the first estimated optical flow to the first reference frame comprises generating the first warped target frame based on one or more estimates of occlusion between the first reference frame and the target frame.
  • 14. The one or more non-transitory computer-readable media of claim 13, wherein the one or more estimates of occlusion are based on at least one of a difference between a first pixel value from the first reference frame and a second pixel value from the target frame, a magnitude of motion between the first pixel value and the second pixel value, or a depth test associated with the first reference frame and the target frame.
  • 15. The one or more non-transitory computer-readable media of claim 11, wherein the instructions further cause the one or more processors to perform the steps of: inputting the target frame and additional information associated with the target frame into a second trained machine learning model, wherein the second trained machine learning model includes one or more encoder neural networks; and generating, via the second trained machine learning model, an encoded representation of the additional information based on features extracted from the target frame and the additional information.
  • 16. The one or more non-transitory computer-readable media of claim 15, wherein the additional information comprises at least one of the first estimated optical flow or a mask associated with the first warped target frame.
  • 17. The one or more non-transitory computer-readable media of claim 11, wherein encoding the target frame based on the estimate of the target frame comprises: inputting the target frame and the estimate of the target frame into a second trained machine learning model, wherein the second trained machine learning model includes one or more encoder neural networks; and generating, via the second trained machine learning model, an encoded representation of the target frame based on features extracted from the estimate of the target frame and the target frame.
  • 18. The one or more non-transitory computer-readable media of claim 11, wherein the first trained machine learning model comprises a GridNet neural network.
  • 19. The one or more non-transitory computer-readable media of claim 11, wherein the first reference frame comprises a key frame.
  • 20. A system, comprising: one or more memories that store instructions, and one or more processors that are coupled to the one or more memories and, when executing the instructions, are configured to perform the steps of: determining a first estimated optical flow based on a displacement of pixels between a first reference frame included in a target video and a target frame included in the target video; applying the first estimated optical flow to the first reference frame to produce a first warped target frame; synthesizing, via a first trained machine learning model, an estimate of the target frame based on the first warped target frame; and encoding, via a second trained machine learning model, the target frame based on the estimate of the target frame.
CROSS-REFERENCE TO RELATED APPLICATIONS

The present application claims priority to U.S. Patent Application No. 62/717,470 filed on Aug. 10, 2018, which is incorporated herein by reference in its entirety.

Provisional Applications (1)
Number Date Country
62717470 Aug 2018 US
Divisions (1)
Number Date Country
Parent 16261441 Jan 2019 US
Child 18049262 US