Generating video content using ray tracing is computationally intensive. Techniques to reduce the time needed to generate video content would be useful.
Various embodiments of the invention are disclosed in the following detailed description and the accompanying drawings.
The invention can be implemented in numerous ways, including as a process; an apparatus; a system; a composition of matter; a computer program product embodied on a computer readable storage medium; and/or a processor, such as a processor configured to execute instructions stored on and/or provided by a memory coupled to the processor. In this specification, these implementations, or any other form that the invention may take, may be referred to as techniques. In general, the order of the steps of disclosed processes may be altered within the scope of the invention. Unless stated otherwise, a component such as a processor or a memory described as being configured to perform a task may be implemented as a general component that is temporarily configured to perform the task at a given time or a specific component that is manufactured to perform the task. As used herein, the term ‘processor’ refers to one or more devices, circuits, and/or processing cores configured to process data, such as computer program instructions.
A detailed description of one or more embodiments of the invention is provided below along with accompanying figures that illustrate the principles of the invention. The invention is described in connection with such embodiments, but the invention is not limited to any embodiment. The scope of the invention is limited only by the claims, and the invention encompasses numerous alternatives, modifications, and equivalents. Numerous specific details are set forth in the following description in order to provide a thorough understanding of the invention. These details are provided for the purpose of example, and the invention may be practiced according to the claims without some or all of these specific details. For the purpose of clarity, technical material that is known in the technical fields related to the invention has not been described in detail so that the invention is not unnecessarily obscured.
In various applications, a ray-traced sequence of images may be compressed using any appropriate video encoding algorithm such as MPEG, H.264, etc., at step 108. Common video encoding methods typically begin by classifying frames into groups. For example, frames may be classified into I-frames (i.e., independent frames that are encoded and/or processed independently of other, neighboring frames), P-frames (i.e., predictive frames that only encode data that has changed since the last I-frame), and/or B-frames (i.e., bi-directional frames that encode changes from previous and next I-frames). Henceforth, for simplicity, only I-frames and P-frames are considered in the given examples, although the disclosed techniques may be similarly extended to any other frame classification scheme.
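For illustration only, the following Python sketch labels frames as I-frames or P-frames using a fixed group-of-pictures length; the group length of 12 and the function and variable names are illustrative assumptions, not part of any particular embodiment.

```python
# Minimal sketch: label frames as I-frames or P-frames using a fixed group length.
# The group length of 12 and all names here are illustrative assumptions.

def classify_frames(num_frames, group_length=12):
    """Return a list with 'I' at the start of each group and 'P' elsewhere."""
    return ['I' if i % group_length == 0 else 'P' for i in range(num_frames)]

print(classify_frames(26))
# ['I', 'P', 'P', ..., 'I', 'P', ...]  -- one I-frame every 12 frames
```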
As described, the rendering stage of process 300 uses ray-tracing, which is a computationally expensive task (e.g., it often takes 45-60 minutes to render a high-quality frame using current hardware). Moreover, motion compensation using block matching, which is an integral part of encoding P-frames, is the most expensive part of the encoding stage (e.g., taking up to 80% of encoding time). If tRT is the time taken to generate ray-traced frame Frame_RT and tMV is the time taken to compute motion vectors for a frame, the total time taken to generate a video sequence is dominated by:
tRT×numberTotalFrames+tMV×numberPFrames (1)
If there is small or predictable motion of the camera and/or objects between adjacent frames, the rendered frames are very similar. Video compression takes advantage of this redundancy by storing only the changes in pixel values between frames. Techniques for joint rendering and compression that take further advantage of the same redundancy are disclosed herein. As described, faster processing is facilitated by ray-tracing only a few frames in full detail (a large number of samples, multiple bounces, etc.) and capturing the significant components of change between other frames from images rendered with a low-cost, low-complexity rendering method.
As described, in joint rendering and encoding process 320, all frames are rendered using a low-cost/low-complexity rendering option (e.g., raster-based rendering or ray-tracing with a few ray samples and bounces), and only the I-frames are rendered in full detail. Moreover, motion vectors for P-frames are in some embodiments computed from the three-dimensional geometry of the scene. If tRR is the time taken to generate Frame_RR, tRT is the time taken to generate Frame_RT, and t3DMV is the time taken to compute motion vectors using 3D geometry, the total time taken to generate a video sequence is dominated by:
tRR×numberTotalFrames+tRT×numberIFrames (2)
The reduction in computation time for generating video content depends on the following ratios:
tRR/tRT (3)
numberIFrames/numberTotalFrames (4)
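To make the effect of the ratios in Eq. (3) and Eq. (4) concrete, the following sketch evaluates Eq. (1) and Eq. (2) for one set of assumed values; all numbers below (render times, frame counts, group structure) are illustrative assumptions, not measurements from the disclosed experiment.

```python
# Worked comparison of Eq. (1) and Eq. (2) with assumed (illustrative) values.

t_RT = 50 * 60        # full ray-traced frame: ~50 minutes, in seconds (assumed)
t_RR = 30             # low-cost/low-complexity render, in seconds (assumed)
t_MV = 5              # block-matching motion vectors per P-frame, in seconds (assumed)
total_frames = 240    # e.g., 10 seconds of video at 24 frames per second
i_frames = 20         # one I-frame per 12-frame group (assumed)
p_frames = total_frames - i_frames

time_eq1 = t_RT * total_frames + t_MV * p_frames   # Eq. (1): ray-trace every frame
time_eq2 = t_RR * total_frames + t_RT * i_frames   # Eq. (2): ray-trace I-frames only

print(round(time_eq1 / 3600, 1), "hours")   # ~200.3 hours
print(round(time_eq2 / 3600, 1), "hours")   # ~18.7 hours
print(round(time_eq1 / time_eq2, 1), "x")   # ~10.7x reduction for these assumed values
```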
Consider Eq. (3) and Eq. (4) in the context of a rendering experiment with a three-dimensional model associated with the image frames of
The ray-tracing method generates a render by solving the light transport equation:

L(x, ωo) = ∫Ω fr(x, ωi, ωo) L(x, ωi) cos θi dωi,  (5)

where x is a point on a surface, ωo and ωi are the outgoing and incoming directions respectively, fr(x, ωi, ωo) is the surface reflectance (BRDF) at x, L(x, ωi) is the radiance of the incoming light, and θi is the angle between the incoming ray and the surface normal.
The Monte Carlo estimate of the integral I=∫s f(x)dx is:

I ≈ (1/N) Σi f(xi)/p(xi),  (6)

where the sum runs over N samples xi drawn from a probability density function p(xi).
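As a minimal illustration of the estimator in Eq. (6) (not code from any embodiment), the following Python snippet averages f(xi)/p(xi) over samples drawn from p; the example integrand and sampling density are arbitrary choices.

```python
# Minimal sketch of the Monte Carlo estimator in Eq. (6).
import random

def mc_estimate(f, sample, pdf, n):
    """Average f(x)/p(x) over n samples x drawn by sample() with density pdf()."""
    return sum(f(x) / pdf(x) for x in (sample() for _ in range(n))) / n

# Example: integrate f(x) = x^2 over [0, 1] with uniform sampling, p(x) = 1.
print(mc_estimate(lambda x: x * x, random.random, lambda x: 1.0, 100_000))
# prints a value close to 1/3
```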
From Eqs. (5) and (6), the Monte Carlo estimate of radiance is:

L(x, ωo) ≈ (1/N) Σi Σj fr(x, ωi, ωo) Lj(x, ωi) cos θi / p(ωi),  (7)

where N is the number of ray samples (indexed by i) and M is the number of direct and indirect light sources (indexed by j). Eq. (7) may be written more compactly as:

L(x, ωo) ≈ Σi Σj αi βj,  (8)
where αi accounts for the reflectance components and βj accounts for the light sources (including bounces from other surfaces in the scene). For a scene that has mainly diffuse reflection surfaces and for which diffuse inter-reflection can be ignored, Eq. (7) is well approximated by raster-based rendering. For a scene that has many surfaces with glossy BRDFs, the raster-based render is not accurate. For such scenes, a more sophisticated method for computing Frame_RR may be used, e.g., one that includes indirect lighting. In a low-complexity scene, both N and M are 1, assuming reflectance along all directions can be aggregated into one diffuse reflection term and all light sources can be aggregated into one ambient light source. This is equivalent to a scene rendered with raster-based options, resulting in a large saving in render time. In a highly complex scene, both N and M have large values. This is equivalent to the method used to compute Frame_RT, resulting in little or no saving in render time.
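The trade-off described above can be sketched as a shading loop in the shape of Eqs. (7) and (8): an outer sum over N sampled directions and an inner sum over M light contributions, with αi standing in for the reflectance factor and βj for the light factor. The sampling, reflectance, and light functions below are placeholders (assumptions), not the renderer of any embodiment.

```python
# Illustrative shading loop in the shape of Eqs. (7) and (8).
# alpha(w) plays the role of the reflectance term and beta(w, j) the light term.

def estimate_radiance(sample_direction, alpha, beta, N, M):
    total = 0.0
    for i in range(N):                     # N ray samples
        w_i = sample_direction(i)          # sampled incoming direction (placeholder)
        for j in range(M):                 # M direct/indirect light contributions
            total += alpha(w_i) * beta(w_i, j)
    return total / N

# N = M = 1: one aggregated diffuse term and one ambient light -- the cheap,
# raster-like Frame_RR case. Large N and M: roughly N*M shading terms per
# pixel -- the full Frame_RT case.
cheap = estimate_radiance(lambda i: None, lambda w: 0.5, lambda w, j: 1.0, N=1, M=1)
print(cheap)  # 0.5
```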
Motion compensation is used in video coding to improve compression efficiency. Pixel values in a predicted frame are described in terms of motion from values at locations in a reference frame. For instance, if a camera is panning slowly, for most pixels, knowledge of the amount of pan can be used to calculate pixel offsets between adjacent frames. Pixel intensity values need to be stored explicitly only for the reference frames. For predicted frames, only the calculated offsets and the small prediction errors need be stored. This leads to improved compression performance.
Any appropriate technique may be employed to encode motion information. In some embodiments, global motion compensation (GMC) is employed to encode motion information. In traditional encoders, GMC estimates a global affine transform that accounts for camera pan, zoom, and rotation. A predicted frame (called an S-frame, or sprite frame) is generated by transforming the reference frame by this global transform. For this predicted frame, only the global transform and prediction errors are stored. If the prediction is accurate, the errors are small, and the compression factor is improved significantly. GMC is particularly well suited to cases in which the camera geometry for each frame is known. In such cases, the affine transformation between any two frames is a known transformation, and GMC is more accurate than if the affine transform were estimated from frame data. GMC is most effective if only the camera is moving and there is no motion of objects in the scene. In some embodiments, local motion compensation is employed to encode motion information. Local compensation may be pixel-based or block-based: a pixel (or a block of pixels) in a P-frame is related to a pixel (or a block of pixels) in a reference frame by a motion vector. Only the motion vectors and prediction errors are stored for the P-frames. One common but computationally expensive process for finding motion vectors is a block-based search in the reference frame for each block in the predicted frame, as sketched below.
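For illustration, the following Python sketch shows an exhaustive block-based search of the kind referred to above; the frame representation, block size, and search range are assumptions, and practical encoders use heavily optimized variants of this idea.

```python
# Minimal sketch of exhaustive block-matching motion search.
# Frames are 2-D lists of pixel intensities; block size and search range are assumed.

def block_sad(ref, pred, rx, ry, px, py, bs):
    """Sum of absolute differences between a bs x bs block in ref and one in pred."""
    return sum(abs(ref[ry + v][rx + u] - pred[py + v][px + u])
               for v in range(bs) for u in range(bs))

def find_motion_vector(ref, pred, px, py, bs=8, search=8):
    """Scan a (2*search+1)^2 window in the reference frame for the best match:
    O(search^2 * bs^2) comparisons per block, which is why this step dominates
    encoding time."""
    height, width = len(ref), len(ref[0])
    best_cost, best_mv = float('inf'), (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            rx, ry = px + dx, py + dy
            if 0 <= rx and 0 <= ry and rx + bs <= width and ry + bs <= height:
                cost = block_sad(ref, pred, rx, ry, px, py, bs)
                if cost < best_cost:
                    best_cost, best_mv = cost, (dx, dy)
    return best_mv, best_cost
```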
In cases in which the camera geometry is known, the location in a reference frame of a pixel from a P-frame may be calculated explicitly. First, the three-dimensional point XR corresponding to pixel uP in the predicted frame is found as:
XR = AP†uP,  (9)
where AP† denotes the pseudo-inverse of the known perspective transformation matrix of the camera in the predicted frame. The pseudo-inverse is found under the constraint that XR lies on the first ray-object intersection on the ray cast out of the camera center of projection through uP. The location of the same point in the reference frame is found as:
uR = ARXR,  (10)
where AR is the known perspective transformation matrix of the camera for the reference frame.
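A minimal sketch of Eqs. (9) and (10) follows, assuming known 3x4 camera matrices and a caller-supplied ray/scene intersection routine that stands in for the constrained pseudo-inverse; numpy and all names here are illustrative assumptions.

```python
# Sketch of Eqs. (9) and (10): back-project a P-frame pixel to the 3-D point X_R,
# then project X_R with the reference camera to find its location in the reference frame.
import numpy as np

def project(A, X):
    """Apply a known 3x4 perspective matrix A to 3-D point X; return pixel (u, v)."""
    uvw = A @ np.append(X, 1.0)          # homogeneous projection
    return uvw[:2] / uvw[2]

def reproject_pixel(u_p, intersect_scene, A_R):
    """
    u_p: pixel location in the predicted frame.
    intersect_scene: returns the first ray-object intersection X_R for the ray
        through u_p (the constraint under which the pseudo-inverse in Eq. (9) is taken).
    A_R: known perspective transformation matrix of the reference-frame camera.
    """
    X_R = intersect_scene(u_p)           # Eq. (9): X_R from u_p and the scene geometry
    u_r = project(A_R, X_R)              # Eq. (10): location of the point in the reference frame
    return u_r
```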
Object motion may also be accounted for. When XR is found, the object on which the point lies is also found. If the object moves between the reference and predicted frames, the known motion is applied to the estimate of point location. Explicitly:
uR = AR(θRPXR + Δ),  (11)
where θRP and Δ are the known rotation matrix and translation vector for the object.
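Continuing the same kind of sketch for Eq. (11), the known rigid motion of the object is applied to XR before projecting with the reference camera; the rotation, translation, and intersection routine remain illustrative assumptions.

```python
# Sketch of Eq. (11): account for known object motion before projecting.
import numpy as np

def reproject_pixel_moving(u_p, intersect_scene, A_R, theta_RP, delta):
    """theta_RP: known 3x3 rotation of the object; delta: known 3-vector translation."""
    X_R = intersect_scene(u_p)                 # first ray-object intersection, as in Eq. (9)
    X_moved = theta_RP @ X_R + delta           # known rigid motion of the object
    uvw = A_R @ np.append(X_moved, 1.0)        # Eq. (11): project with the reference camera
    return uvw[:2] / uvw[2]
```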
As described, motion may be estimated from a known three-dimensional geometry of a scene, such as the three-dimensional scene generated by process 100 of
Although the foregoing embodiments have been described in some detail for purposes of clarity of understanding, the invention is not limited to the details provided. There are many alternative ways of implementing the invention. The disclosed embodiments are illustrative and not restrictive.
This application is a continuation of U.S. patent application Ser. No. 17/039,923, now U.S. Pat. No. 11,250,614 entitled GENERATING VIDEO CONTENT filed Sep. 30, 2020, which is a continuation of U.S. patent application Ser. No. 16/530,876, now U.S. Pat. No. 10,839,592, entitled GENERATING VIDEO CONTENT filed Aug. 2, 2019, which is a continuation of U.S. patent application Ser. No. 15/887,884, now U.S. Pat. No. 10,430,992, entitled GENERATING VIDEO CONTENT filed Feb. 2, 2018, which is a continuation of U.S. patent application Ser. No. 15/170,841, now U.S. Pat. No. 9,965,890, entitled GENERATING VIDEO CONTENT filed Jun. 1, 2016, which is a continuation of U.S. patent application Ser. No. 14/337,125, now U.S. Pat. No. 9,418,469, entitled GENERATING VIDEO CONTENT filed Jul. 21, 2014, which claims priority to U.S. Provisional Application No. 61/856,582, entitled COMPRESSION-AWARE CONTENT CREATION filed Jul. 19, 2013, all of which are incorporated herein by reference for all purposes.