Video content, such as movies and television programming, is widely used to distribute entertainment to consumers. Due to its popularity with those consumers, ever more video is being produced and made available for distribution via traditional broadcast models, as well as via streaming services. Consequently, efficient and cost-effective techniques for producing high quality video imagery are increasingly important to the creators and owners of that content.
In order to produce realistic scene-based lighting of a performer being filmed, it is important that the performer be illuminated by a close approximation of the lighting that does or will exist in the virtual environment or background in which the performer will be viewed. Because real physical lights are typically needed to illuminate the performer, conventional production techniques may include the inefficient process of manually rotoscoping the stage lighting out of the recorded image. An alternative conventional technique is to film the performer in front of a green matte screen. However, that green screen technique typically offers a poor representation of the environmental lighting effects being reflected from the performer. Although techniques for simulating an entire scene digitally are known, they often produce an “uncanny valley” effect or may simply be too expensive to be a practical alternative.
There are provided systems and methods for realistically illuminating a character for a scene, substantially as shown in and/or described in connection with at least one of the figures, and as set forth more completely in the claims.
The following description contains specific information pertaining to implementations in the present disclosure. One skilled in the art will recognize that the present disclosure may be implemented in a manner different from that specifically discussed herein. The drawings in the present application and their accompanying detailed description are directed to merely exemplary implementations. Unless noted otherwise, like or corresponding elements among the figures may be indicated by like or corresponding reference numerals. Moreover, the drawings and illustrations in the present application are generally not to scale, and are not intended to correspond to actual relative dimensions. The present application discloses systems and methods for illuminating a character for a scene that overcome the drawbacks and deficiencies in the conventional art.
Character 120 may be a virtual character, a machine such as a robot or animatronic character, a human actor, an animal, or other type of performer, for example. System 100 provides realistic lighting of character 120 relative to a scene in which character 120 is to be rendered. That is to say, system 100 identifies at least one background (e.g., background 122a) for the scene to include character 120, generates a simulation of background 122a on surface 112 illuminated by lighting source 110 using lighting elements 114 of lighting source 110, and utilizes the simulation of background 122a to illuminate character 120 for the scene.
In addition to the features shown in
It is noted that cameras 124a and 124b may be implemented as high-speed red-green-blue (RGB) digital video cameras, such as professional quality motion picture or television cameras, for example. Alternatively, in some implementations, cameras 124a and 124b may be digital video cameras integrated with a mobile communication device such as a smartphone or tablet computer, for example. Moreover, in some implementations, cameras 124a and 124b may be integrated with computing platform 102.
Cameras 124a and 124b may be designed to capture standard-definition (SD) images, high-definition (HD) images, or ultra-high-definition (UHD) images, as those terms are conventionally used in the art. In other words, cameras 124a and 124b may capture images having less than one thousand pixels of horizontal resolution (SD), those having from one thousand to two thousand pixels of horizontal resolution (HD), or those images having four thousand (4K) or eight thousand (8K) pixels of horizontal resolution (UHD).
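By way of illustration only, the resolution categories described above can be expressed as a simple classifier. The function name is an assumption, as is the handling of horizontal resolutions between two thousand and four thousand pixels, which the description above does not define:

```python
def resolution_class(horizontal_pixels: int) -> str:
    """Classify an image by horizontal resolution, per the definitions above."""
    if horizontal_pixels < 1000:
        return "SD"   # standard definition: under one thousand pixels
    if horizontal_pixels <= 2000:
        return "HD"   # high definition: one thousand to two thousand pixels
    return "UHD"      # ultra-high definition, e.g., 4K or 8K
```

For example, `resolution_class(1920)` yields `"HD"`, while `resolution_class(3840)` yields `"UHD"`.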
It is further noted that cameras 124a and 124b may be designed to track multiple parameters as they record imagery, such as their respective camera locations and one or more characteristics of their respective lenses 126a and 126b. It is also noted that although
According to the exemplary implementation shown in
System 200 including computing platform 202, lighting source 210, and one or more of cameras 224a and 224b corresponds in general to system 100 including computing platform 102, lighting source 110, and one or more of cameras 124a and 124b. That is to say, computing platform 102, lighting source 110, and cameras 124a and 124b may share any of the characteristics attributed to respective computing platform 202, lighting source 210, and cameras 224a and 224b by the present disclosure, and vice versa. Thus, although not shown in
It is noted that although the present application refers to lighting control software code 230 as being stored in system memory 206 for conceptual clarity, more generally, system memory 206 may take the form of any computer-readable non-transitory storage medium. The expression “computer-readable non-transitory storage medium,” as used in the present application, refers to any medium, excluding a carrier wave or other transitory signal, that provides instructions to a hardware processor of a computing platform, such as hardware processor 204 of computing platform 202. Thus, a computer-readable non-transitory medium may correspond to various types of media, such as volatile media and non-volatile media, for example. Volatile media may include dynamic memory, such as dynamic random access memory (dynamic RAM), while non-volatile media may include optical, magnetic, or electrostatic storage devices. Common forms of computer-readable non-transitory media include, for example, optical discs, RAM, programmable read-only memory (PROM), erasable PROM (EPROM), and FLASH memory.
Communication network 234 may take the form of a packet-switched network such as the Internet, for example. Alternatively, communication network 234 may correspond to a wide area network (WAN), a local area network (LAN), or be implemented as another type of private or limited distribution network. It is noted that the depiction of device 201 as a laptop computer in
Once produced using system 200, illuminated character 232 may be stored locally on system memory 206, or may be transferred to illumination database 238 via communication network 234 and network communication links 236. Moreover, in some implementations, illuminated character 232 may be rendered on display 208 of device 201, or may be rendered into a scene including one or more of backgrounds 122a, 122b, and 122c shown in
System 300 corresponds in general to systems 100 and 200, in
Moreover, computing platform 302 corresponds in general to computing platform 202 provided by device 201, in
It is further noted that character 320 corresponds in general to character 120, in
The functionality of systems 100, 200, and 300 will be further described by reference to
Referring now to
Referring to
Referring to
Flowchart 440 continues with utilizing the simulation of background 122a generated on surface 112 illuminated by lighting source 110 to illuminate character 120 for the scene identified in action 441 (action 443). As shown by
Alternatively, and as shown by
Continuing to refer to exemplary
In some implementations, hardware processor 204 may execute lighting control software code 230 to modulate the simulation of background 122a during recording of the image of illuminated character 120 and simulation of background 122a. For example, hardware processor 204 may execute lighting control software code 230 to modulate the simulation of background 122a to further simulate one or more of a change in weather of background 122a and a change in time of background 122a.
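As a purely illustrative sketch of such modulation, a change in time of day might be approximated by dimming and warming the simulated background frame. The function name, its `phase` parameter, and the specific brightness and color factors are assumptions for illustration, not part of the disclosed systems:

```python
import numpy as np

def modulate_for_time_of_day(simulation: np.ndarray, phase: float) -> np.ndarray:
    """Dim and warm a simulated background frame as a crude stand-in for a
    change in time of day (phase 0.0 = midday, 1.0 = dusk)."""
    frame = simulation.astype(np.float32)
    brightness = 1.0 - 0.6 * phase                 # dim toward dusk
    warmth = np.array([1.0 + 0.3 * phase,          # boost red
                       1.0,                        # leave green unchanged
                       1.0 - 0.3 * phase])         # cut blue
    out = np.clip(np.rint(frame * brightness * warmth), 0, 255)
    return out.astype(np.uint8)
```

At `phase` 0.0 the frame passes through unchanged; increasing `phase` shifts the simulated lighting toward a dimmer, redder rendition.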
Flowchart 440 can conclude with removing the simulation of background 122a from the image including illuminated character 120 and the simulation of background 122a, based on the parameters of one or more of cameras 124a and 124b tracked during action 444 (action 445). It is noted that because one or more of cameras 124a and 124b are tracked using the same computing platform 102 that drives lighting source 110, the simulation of background 122a can be algorithmically rotoscoped, i.e., removed from the image recorded during action 444, leaving a clean and realistically illuminated image of character 120 as illuminated character 232 for later compositing into the scene.
According to the exemplary implementations disclosed in the present application, the parameters of one or more of cameras 124a and 124b, such as lens distortion, focal length, yaw, pitch, and roll in relation to surface 112 illuminated by lighting source 110, are tracked to build a pose model of the camera(s). That pose model is correlated to the image recorded during action 444 in order to determine a hypothetical representation of what would be visible to the camera if character 120 were not present, for comparison with the recorded image of character 120 and the simulation of background 122a. As a result, in pixel locations where the hypothetical representation and the recorded image correspond, the pixel data may be digitally removed from the recorded image, leaving only illuminated character 232 in a format easily composited into a final scene, or into a real-time simulation such as a video game or an augmented reality (AR) display. Action 445 may be performed by lighting control software code 230, executed by hardware processor 204.
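The per-pixel removal described above can be sketched as a simple difference matte. This is an illustration only: the `remove_background` name and the `tolerance` threshold are assumptions, and the `hypothetical` rendering is taken as given here, although in practice it would be produced from the tracked camera pose model:

```python
import numpy as np

def remove_background(recorded: np.ndarray,
                      hypothetical: np.ndarray,
                      tolerance: float = 10.0) -> np.ndarray:
    """Return an RGBA image that keeps only pixels differing from the
    hypothetical (character-free) rendering; matching pixels become
    transparent, leaving the illuminated character for compositing."""
    # Per-pixel color distance between the recorded frame and the
    # hypothetical view of the background simulation alone.
    diff = np.linalg.norm(recorded.astype(np.float32)
                          - hypothetical.astype(np.float32), axis=-1)
    # Opaque where the frames differ (the character), transparent elsewhere.
    alpha = np.where(diff > tolerance, 255, 0).astype(np.uint8)
    return np.dstack([recorded, alpha])
```

The transparent regions correspond to the algorithmically rotoscoped background simulation, so the opaque remainder composites cleanly over any final scene.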
Although not included in the exemplary outline provided by flowchart 440, in some implementations, the present method may further include rendering illuminated character 232 on display 208 of device 201. As noted above, display 208 may take the form of an LCD, an LED display, an OLED display, or any other suitable display screen that performs a physical transformation of signals to light. Rendering of illuminated character 232 on display 208 may be performed using lighting control software code 230, executed by hardware processor 204. Moreover, in some implementations, as discussed above, illuminated character 232 may be rendered into a scene including background 122a, and that scene may be output via display 208 of device 201.
In some implementations, an artificial intelligence (AI) based rendering technique may be employed to increase the resolution (i.e., perform “up-resolution”) of illuminated character 232 in real-time. For example, video frames depicting illuminated character 232 may be reprocessed on-the-fly to take advantage of the intrinsic hardware resolution of systems 100, 200, 300, provided the up-resolution process can deterministically recreate the analytical base reference video capturing illuminated character 232 for analysis and comparison.
Regarding action 441 of flowchart 440, it is noted that, in some implementations, identifying background 122a for the scene results in identification of multiple backgrounds 122a, 122b, and 122c. In those implementations, hardware processor 204 of computing platform 202 may further execute lighting control software code 230 to use lighting source 110 to successively generate respective simulations of each of backgrounds 122a, 122b, and 122c on surface 112 illuminated by lighting source 110 during the recording performed in action 444. However, it is noted that while the simulation of any one of backgrounds 122a, 122b, and 122c is projected, light from the other two backgrounds would typically be completely off. That is to say, overlap of different background simulations is generally undesirable.
In one implementation, cameras 124a and 124b implemented as high-speed cameras can be used to capture a rapidly fluctuating environmental lighting model, which may sequentially interleave multiple lighting conditions based on multiple possible backgrounds that may be selected at a later time. By way of example, character 120 may create a live performance that will be virtually composited in an AR device at multiple locations, such as a film premiere lobby, concurrently with other events such as an award ceremony or a public event. Referring to
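The sequential interleaving of lighting conditions described above amounts to time-multiplexing the background simulations, one per captured frame, so that no two simulations are ever lit simultaneously. A minimal round-robin scheduler, using hypothetical names, might look like:

```python
from itertools import cycle

def interleave_schedule(backgrounds, frame_count):
    """Assign exactly one background simulation to each captured frame,
    in round-robin order, so lighting conditions never overlap."""
    rotation = cycle(backgrounds)
    return [next(rotation) for _ in range(frame_count)]

# Six consecutive frames alternate among three candidate backgrounds:
interleave_schedule(["122a", "122b", "122c"], 6)
# → ['122a', '122b', '122c', '122a', '122b', '122c']
```

Frames lit by each candidate background can then be pulled from the high-speed capture as separate streams, one per lighting condition, for selection at composite time.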
Thus, the present application discloses solutions for illuminating a character for a scene that overcome the drawbacks and deficiencies in the conventional art. From the above description it is manifest that various techniques can be used for implementing the concepts described in the present application without departing from the scope of those concepts. Moreover, while the concepts have been described with specific reference to certain implementations, a person of ordinary skill in the art would recognize that changes can be made in form and detail without departing from the scope of those concepts. As such, the described implementations are to be considered in all respects as illustrative and not restrictive. It should also be understood that the present application is not limited to the particular implementations described herein, but many rearrangements, modifications, and substitutions are possible without departing from the scope of the present disclosure.
Number | Date | Country | |
---|---|---|---|
20210185213 A1 | Jun 2021 | US |