The present disclosure involves a method, system and apparatus for creating visual effects for applications such as linear content (e.g., film), interactive experiences, augmented reality or mixed reality.
Any background information described herein is intended to introduce the reader to various aspects of art, which may be related to the present embodiments that are described below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light.
Creating visual effects for applications such as film, interactive experiences, augmented reality (AR) and/or mixed reality applications may involve replacing a portion of an image or video content captured in a real-world situation with alternative content. For example, a camera may be used to capture a video of a particular model of automobile. However, a particular use of the video may require replacing the actual model automobile with a different model while retaining details of the original environment such as surrounding or background scenery and details. Modern image and video processing technology permits making such modifications to an extent that the resulting image or video with the replaced portion, e.g., the different model automobile, may appear at least somewhat realistic. However, creating a sufficient degree of realism typically requires significant post-processing effort, i.e., in a studio or visual effects facility, after the image or video capture has been completed. Such effort may include an intensive and extensive manual effort by creative personnel such as graphic artists or designers with a substantial associated time and cost investment.
In addition to the cost and time required by post-processing, adding realism in post-processing presents numerous challenges during the initial image or video capture. For example, because effects are added later to create the final images or video, a camera operator or director cannot see the final result while they are behind the camera capturing images or video. That is, a cameraman or director cannot see what they are actually shooting with respect to the final result. This presents challenges with regard to issues such as composition and subject framing. There may be a lack of understanding, or an inaccurate understanding, as to how the subject fits into the final scene. Guesswork is required to deal with issues such as the effect or impact of surrounding lighting conditions on the final result, e.g., is the subject properly lit? Thus, there is a need to be able to visualize in-camera the result of editing the actual video with the augmentation process.
In general, an embodiment comprises a method or system or apparatus providing visualization of photorealistic effects in real time during a shoot.
In accordance with an aspect of the present principles, an embodiment comprises producing visual effects incorporating in real time information representing reflections and/or lighting from one or more sources using an image sensor.
In accordance with another aspect of the present principles, an embodiment comprises producing visual effects for film, interactive experiences, augmented reality or mixed reality including capturing and incorporating in real time reflections and/or lighting from one or more sources using an image sensor.
In accordance with another aspect of the present principles, an embodiment comprises producing visual effects for film, interactive experiences, augmented or mixed reality including capturing and incorporating in real time lighting and/or reflections from one or more sources using at least one of a light sensor and an image sensor.
In accordance with another aspect of the present principles, an embodiment comprises producing visual effects for film, interactive experiences, augmented reality including capturing and incorporating in real time lighting and/or reflections from one or more sources using one or more sensors locationally distinct from a camera providing a video feed or image information to be augmented to produce augmented reality content.
In accordance with another aspect of the present principles, an embodiment comprises producing visual effects such as mixed reality including capturing and incorporating in real time lighting and/or reflections from one or more sources using one or more sensors locationally distinct from a camera providing a video feed to a wearable device worn by a user whose vision is being augmented in mixed reality.
In accordance with another aspect of the present principles, an embodiment comprises a method including receiving a first video feed from a first camera providing video of an object; tracking the object to produce tracking information indicating a movement of the object, wherein a camera array including a plurality of cameras is mounted on the object; receiving a second video signal including video information corresponding to a stitching together of a plurality of output signals from respective ones of the plurality of cameras included in the camera array, wherein the second video signal captures at least one of a reflection on the object and a lighting environment of the object; and processing the first video feed, the tracking information, and the second video signal to generate a rendered signal representing video in which the tracked object has been replaced in real time with a virtual object having reflections and/or a lighting environment of the tracked object.
In accordance with another aspect of the present principles, an embodiment of apparatus comprises one or more processors configured to receive a first video signal from a first camera providing video of an object; track the object to produce tracking information indicating a movement of the object, wherein a camera array including a plurality of cameras is mounted on the object; receive a second video signal including video information corresponding to a stitching together of a plurality of output signals from respective ones of the plurality of cameras included in the camera array, wherein the second video signal captures at least one of a reflection on the tracked object and a lighting environment of the tracked object; and process the first video signal, the tracking information, and the second video signal to generate a rendered signal representing video in which the tracked object has been replaced in real time with a virtual object having at least one of the reflection and the lighting environment of the tracked object.
In accordance with another aspect of the present principles, an embodiment of a system comprises a first camera producing a first video signal providing video of an object; a camera array including a plurality of cameras mounted on the object and having a first processor processing a plurality of output signals from respective ones of the plurality of cameras included in the camera array to produce a second video signal representing a stitching together of the plurality of output signals, wherein the second video signal includes information representing at least one of a reflection on the object and a lighting environment of the object; a second camera tracking the object and producing tracking information indicating a movement of the object; and a second processor processing the first video signal, the tracking information, and the second video signal to generate in real time a rendered signal representing video in which the tracked object has been replaced in real time with a virtual object having at least one of the reflection and the lighting environment of the object.
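The method, apparatus and system recited in the three preceding aspects share one pipeline: a hero-camera feed, tracking information, and a stitched signal from the object-mounted camera array are combined into a rendered output. A minimal Python sketch of that flow is given below; every name in it (Frame, track_object, stitch_array_outputs, render_composite) is a hypothetical placeholder for the recited stages, not an implementation from the disclosure.

```python
# Minimal sketch of the recited pipeline. All names are hypothetical
# placeholders; real tracking, stitching, and rendering would be supplied
# by dedicated hardware or libraries.
from dataclasses import dataclass

import numpy as np


@dataclass
class Frame:
    image: np.ndarray   # H x W x 3 video frame from the hero camera
    timestamp: float


def track_object(frame: Frame) -> np.ndarray:
    """Return a 4x4 pose matrix for the tracked object (placeholder)."""
    return np.eye(4)


def stitch_array_outputs(array_frames: list[np.ndarray]) -> np.ndarray:
    """Stitch the object-mounted camera outputs into one environment map
    (placeholder: real code would warp and blend the overlapping views)."""
    return np.concatenate(array_frames, axis=1)


def render_composite(frame: Frame, pose: np.ndarray,
                     env_map: np.ndarray) -> np.ndarray:
    """Replace the tracked object with a virtual object lit and reflecting
    according to env_map (placeholder for the real-time renderer)."""
    return frame.image  # a real renderer would composite here


def process(hero_frame: Frame, array_frames: list[np.ndarray]) -> np.ndarray:
    pose = track_object(hero_frame)               # tracking information
    env_map = stitch_array_outputs(array_frames)  # stitched second signal
    return render_composite(hero_frame, pose, env_map)  # rendered output
```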
In accordance with another aspect, any embodiment as described herein may include the tracked object having one or more light sources emitting light from the tracked object that matches the color, directionality and intensity of the light emitted from the virtual object.
In accordance with another aspect, any embodiment as described herein may include a sensor and calculation of light and/or reflection maps for one or more virtual objects locationally distinct from the sensor or a viewer, e.g., a camera or a user.
In accordance with another aspect, any embodiment as described herein may include communication of lighting and/or reflection information from one or more sensors using a wired and/or a wireless connection.
In accordance with another aspect, any embodiment as described herein may include modifying the lighting of a virtual object in real time using sampled real-world light sources rather than vice versa.
In accordance with another aspect of the present principles, an embodiment comprises photo-realistically augmenting a video feed from a first camera, such as a hero camera, in real time by tracking an object with a single camera or multiple-camera array mounted on the object to produce tracking information, capturing at least one of reflections on the object and a lighting environment of the tracked object using the single camera or array, stitching outputs of a plurality of cameras included in the camera array in real time to produce a stitched video signal representing reflections and/or lighting environment of the object, and communicating the stitched output signal to a processor by a wireless and/or wired connection, wherein the processor processes the video feed, the tracking information, and the stitched video signal to generate a rendered signal representing video in which the tracked object has been replaced in real time with a virtual object having reflections and/or a lighting environment matching that of the tracked object.
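As one conceivable realization of the stitching step, OpenCV's high-level Stitcher class can combine overlapping views from an object-mounted array into a single panorama. The sketch below is illustrative only: the camera indices, frame-grab loop and output file are assumptions, and a production system with rigidly mounted cameras would more likely use a fixed, pre-calibrated warp than feature-based stitching.

```python
# Illustrative stitching of object-mounted camera views with OpenCV.
# Camera indices and the single-frame capture are assumptions.
import cv2

# Open the cameras of the object-mounted array (indices are hypothetical).
captures = [cv2.VideoCapture(i) for i in range(4)]

frames = []
for cap in captures:
    ok, frame = cap.read()
    if ok:
        frames.append(frame)

# Stitch the overlapping views into one panoramic environment signal.
stitcher = cv2.Stitcher_create(cv2.Stitcher_PANORAMA)
status, panorama = stitcher.stitch(frames)
if status == cv2.Stitcher_OK:
    cv2.imwrite("environment_panorama.jpg", panorama)

for cap in captures:
    cap.release()
```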
In accordance with another aspect of the present principles, any embodiment described herein may include generating a positional matrix representing a placement of the virtual object responsive to the tracking information and generating the rendered signal including the virtual object responsive to the positional matrix.
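By way of illustration, such a positional matrix may take the conventional form of a 4x4 homogeneous transform assembled from the tracked rotation and translation. The numpy sketch below shows this standard construction; it is one possible encoding, not one mandated by the disclosure.

```python
# Building a 4x4 positional (pose) matrix from tracking information.
import numpy as np


def positional_matrix(rotation: np.ndarray, translation: np.ndarray) -> np.ndarray:
    """Assemble a homogeneous transform from a 3x3 rotation matrix and a
    3-vector translation reported by the tracker."""
    m = np.eye(4)
    m[:3, :3] = rotation
    m[:3, 3] = translation
    return m


def place_vertices(vertices: np.ndarray, pose: np.ndarray) -> np.ndarray:
    """Apply the pose to the virtual object's vertices (N x 3),
    placing the virtual object where the tracked object is."""
    homogeneous = np.hstack([vertices, np.ones((len(vertices), 1))])
    return (homogeneous @ pose.T)[:, :3]
```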
In accordance with another aspect of the present principles, tracking an object in accordance with any embodiment described herein may include calibrating a lens of the first camera using one or more fiducials that are affixed to the tracked object or a separate and unique lens calibration chart.
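Lens calibration from a known reference pattern is a standard computer-vision procedure. As an analogous sketch, assuming a chessboard-style calibration chart (rather than the particular fiducials of a given embodiment) and an assumed directory of captured frames, OpenCV's calibrateCamera recovers the lens intrinsics and distortion coefficients:

```python
# Standard chessboard-based lens calibration with OpenCV; the chart
# geometry (9x6 inner corners) and frame directory are assumptions.
import glob

import cv2
import numpy as np

pattern = (9, 6)  # inner corners of the assumed calibration chart
obj_grid = np.zeros((pattern[0] * pattern[1], 3), np.float32)
obj_grid[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calibration_frames/*.png"):  # assumed location
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(obj_grid)
        img_points.append(corners)

# Recover the intrinsic matrix and distortion coefficients.
rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS re-projection error:", rms)
```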
In accordance with another aspect of the present principles, any embodiment as described herein may include processing the stitched output signal to perform image-based lighting in the rendered signal.
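Image-based lighting treats the captured environment map itself as the light source. A minimal diffuse-irradiance sketch, assuming an equirectangular map and following the standard formulation rather than any particular renderer, is:

```python
# Diffuse image-based lighting: integrate an equirectangular environment
# map against the cosine lobe of a surface normal (standard formulation).
import numpy as np


def diffuse_irradiance(env_map: np.ndarray, normal: np.ndarray) -> np.ndarray:
    """env_map: H x W x 3 equirectangular radiance map; normal: unit 3-vector.
    Returns the RGB irradiance E at a surface with that normal; the shaded
    diffuse color is then albedo * E / pi."""
    h, w, _ = env_map.shape
    theta = (np.arange(h) + 0.5) / h * np.pi       # polar angle per row
    phi = (np.arange(w) + 0.5) / w * 2 * np.pi     # azimuth per column
    theta, phi = np.meshgrid(theta, phi, indexing="ij")

    # Direction of each environment pixel.
    dirs = np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=-1)

    # Per-pixel solid angle and clamped cosine term.
    d_omega = (np.pi / h) * (2 * np.pi / w) * np.sin(theta)
    cos_term = np.clip(dirs @ normal, 0.0, None)

    return (env_map * (cos_term * d_omega)[..., None]).sum(axis=(0, 1))
```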
In accordance with another aspect of the present principles, an embodiment comprises a non-transitory computer readable medium storing executable program instructions to cause a computer executing the instructions to perform a method according to any embodiment of a method as described herein.
The present principles can be readily understood by considering the detailed description below in conjunction with the accompanying drawings wherein:
It should be understood that the drawings are for purposes of illustrating exemplary aspects of the present principles and are not necessarily the only possible configurations for illustrating the present principles. To facilitate understanding, like reference designators refer to the same or similar features throughout the various figures.
Embodiments of the present disclosure will be described herein below with reference to the accompanying drawings. In the following description, well-known functions or constructions are not described in detail to avoid obscuring the present disclosure in unnecessary detail.
The present description illustrates the principles of the present disclosure. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the disclosure and are included within its spirit and scope.
In general, an embodiment in accordance with the present principles comprises a method or system or apparatus providing visualization of photorealistic effects in real time during a shoot. Visual effects, photorealistic effects, virtual objects and similar terminology as used herein are intended to broadly encompass various techniques such as computer-generated images or imagery (CGI), artist's renderings, and images of models or objects that may be captured or generated and inserted or included in scenes being shot or produced. Shooting visual effects presents directors and directors of photography with a visualization challenge. When shooting, directors and directors of photography need to know how virtual elements will be framed up, whether they are lit correctly, and what can be seen in the reflections. An aspect of the present principles involves addressing the described problem.
An exemplary embodiment of a system and apparatus in accordance with the present principles is shown in the accompanying drawings and described in more detail below.
An exemplary embodiment of the processing involved is described in more detail below. As described above, the lighting environment and reflections information produced at step 330 may include a plurality of signals produced by a corresponding plurality of cameras or sensors, e.g., by an array of a plurality of cameras mounted on the object. Each of the camera signals may represent a portion of the lighting environment or reflections on the tracked object. At step 340, the contents of the multiple signals are combined or stitched together in real time to produce a signal representing the totality of reflections and/or lighting environment of the tracked object. At step 350, a processor performs real-time rendering to produce an augmented video output signal RENDERED OUTPUT. The processing at step 350 comprises processing the video feed produced at step 310, the tracking information produced at step 320 and the stitched reflections/lighting signal produced at step 340 to replace the tracked object in the video feed with a virtual object having reflections and/or a lighting environment matching that of the tracked object. In accordance with an aspect of the present principles, an embodiment of the rendering processing occurring at step 350 may comprise producing a positional matrix representing a placement of the virtual object responsive to the tracking information and generating signal RENDERED OUTPUT including the virtual object responsive to the positional matrix. In accordance with another aspect, processing at step 350 may comprise processing the stitched output signal to perform image-based lighting in the rendered signal. In accordance with another aspect, the stitching process at step 340 may occur in a processor in the tracked object such that step 340 is locationally distinct, i.e., in a different location, from the camera generating the video feed at step 310 and from the processing occurring at steps 320 and 350. If so, step 340 may further include communicating the stitched signal to the processor performing real-time rendering at step 350, as sketched below. Such communication may occur by wire and/or wirelessly.
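Where the stitching runs on a processor in the tracked object, the stitched frames must traverse that wired or wireless link to reach the renderer. A minimal sketch of one such link, using a TCP-style length-prefixed framing in Python, follows; the header layout and transport choice are assumptions, not details from the disclosure.

```python
# Minimal framed transport for stitched frames from the on-object
# processor to the rendering workstation. The 12-byte header
# (height, width, channels) is an illustrative assumption.
import socket
import struct

import numpy as np


def send_stitched_frame(sock: socket.socket, frame: np.ndarray) -> None:
    """Send one stitched frame: header (h, w, c) followed by raw bytes."""
    h, w, c = frame.shape
    payload = frame.astype(np.uint8).tobytes()
    sock.sendall(struct.pack("!III", h, w, c) + payload)


def recv_stitched_frame(sock: socket.socket) -> np.ndarray:
    """Receive one frame framed by send_stitched_frame."""
    h, w, c = struct.unpack("!III", _recv_exact(sock, 12))
    data = _recv_exact(sock, h * w * c)
    return np.frombuffer(data, np.uint8).reshape(h, w, c)


def _recv_exact(sock: socket.socket, n: int) -> bytes:
    """Read exactly n bytes or raise if the link drops mid-frame."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("link closed mid-frame")
        buf += chunk
    return buf
```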
In accordance with another aspect of the present principles, the tracked object may include one or more light sources to emit light from the tracked object. For example, the desired visual effects may include inserting a virtual object that is light emitting, e.g., a reflective metal torch with a fire on the end. If so, one or more lights or light sources, e.g., an array of lights or light sources, may be included in the tracked object. Light from such light sources that is emitted from the tracked object is in addition to any light reflected from the tracked object due to light incident on the tracked object from the lighting environment of the tracked object. The lights or light sources included in a tracked object may be any of various types of light sources, e.g., LEDs, incandescent, fire or flames, etc. If multiple lights or light sources are included in the tracked object for an application, e.g., in an array of light sources, then more than one type of light source may be included, e.g., to provide a mix of different colors, intensities, etc. of light. An array of lights would also enable movement of the lighting from the tracked object, e.g., a sequence of different lights in the array turning on or off, and/or flickering such as for a flame. That is, an array of lights may be selected and configured to emit light from the tracked object that matches the color, directionality, intensity, movement and variations of these parameters of the light emitted from the virtual object. Having the tracked object emit light that matches that emitted from the virtual object further increases the accuracy of the reflection and lighting environment information captured by an array of sensors or cameras 240 described above, thereby increasing the realism of the augmented signal including the virtual object.
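One conceivable way to drive such a physical light array is to sample the virtual object's emissive surface at each light's position and push the resulting color and intensity to the fixture. In the sketch below the LightArray driver interface is entirely hypothetical; a real system would use the fixture's actual control protocol (e.g., DMX).

```python
# Matching the physical light array on the tracked object to the virtual
# object's emission. The LightArray driver and emissive sampling are
# hypothetical stand-ins for a real fixture protocol.
import numpy as np


class LightArray:
    """Hypothetical driver for the LEDs mounted in the tracked object."""

    def __init__(self, n_lights: int):
        self.n_lights = n_lights

    def set_light(self, index: int, rgb: tuple, intensity: float) -> None:
        # A real driver would transmit this to the fixture.
        print(f"light {index}: rgb={rgb} intensity={intensity:.2f}")


def match_emission(array: LightArray, emissive_samples: np.ndarray) -> None:
    """emissive_samples: n_lights x 3 linear RGB sampled from the virtual
    object's emissive surface at each physical light's position."""
    for i, rgb in enumerate(emissive_samples):
        intensity = float(rgb.max())   # drive brightness from peak channel
        color = tuple((rgb / intensity).round(3)) if intensity > 0 else (0, 0, 0)
        array.set_light(i, color, intensity)


# Example: a flickering flame at the tip of a virtual torch.
rng = np.random.default_rng(0)
flame = np.array([1.0, 0.55, 0.1]) * rng.uniform(0.7, 1.0, size=(8, 1))
match_emission(LightArray(8), flame)
```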
It is to be appreciated that the various features shown and described are interchangeable; that is, a feature shown in one embodiment may be incorporated into another embodiment.
Although embodiments which incorporate the teachings of the present disclosure have been shown and described in detail herein, the present description illustrates the present principles. It will thus be appreciated that those skilled in the art can readily devise many other varied embodiments that still incorporate these teachings. Having described embodiments which are intended to be illustrative and not limiting, it is noted that modifications and variations can be made by persons skilled in the art in light of the above teachings. It is therefore to be understood that changes may be made in the particular embodiments disclosed which are within the scope of the disclosure.
All examples and conditional language recited herein are intended for pedagogical purposes to aid the reader in understanding the present principles and the concepts contributed by the inventor(s) to furthering the art and are to be construed as being without limitation to such specifically recited examples and conditions.
Moreover, all statements herein reciting principles, aspects, and embodiments of the present principles, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.
Thus, for example, it will be appreciated by those skilled in the art that the block diagrams presented herein represent conceptual views of illustrative circuitry embodying the present principles. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudocode, and the like represent various processes which may be substantially represented in computer readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term “processor” or “controller” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (“DSP”) hardware, peripheral interface hardware, memory such as read-only memory (“ROM”) for storing software, random access memory (“RAM”), and non-volatile storage, and other hardware implementing various functions as will be apparent to one skilled in the art.
Other hardware, conventional and/or custom, may also be included. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.
Herein, the phrase “coupled” is defined to mean directly connected to or indirectly connected with through one or more intermediate components. Such intermediate components may include both hardware and software-based components.
In the claims hereof, any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a) a combination of circuit elements that performs that function or b) software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function. The present principles as defined by such claims reside in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. It is thus regarded that any means that can provide those functionalities are equivalent to those shown herein.
Reference in the specification to “one embodiment” or “an embodiment” of the present principles, as well as other variations thereof, means that a particular feature, structure, characteristic, and so forth described in connection with the embodiment is included in at least one embodiment of the present principles. Thus, the appearances of the phrase “in one embodiment” or “in an embodiment”, as well any other variations, appearing in various places throughout the specification are not necessarily all referring to the same embodiment.
It is to be appreciated that the use of any of the following “/”, “and/or”, and “at least one of”, for example, in the cases of “A/B”, “A and/or B” and “at least one of A and B”, is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of both options (A and B). As a further example, in the cases of “A, B, and/or C” and “at least one of A, B, and C”, such phrasing is intended to encompass the selection of the first listed option (A) only, or the selection of the second listed option (B) only, or the selection of the third listed option (C) only, or the selection of the first and the second listed options (A and B) only, or the selection of the first and third listed options (A and C) only, or the selection of the second and third listed options (B and C) only, or the selection of all three options (A and B and C). This may be extended, as readily apparent by one of ordinary skill in this and related arts, for as many items listed.
It is to be understood that the teachings of the present principles may be implemented in various forms of hardware, software, firmware, special purpose processors, or combinations thereof. For example, various aspects of the present principles may be implemented as a combination of hardware and software. Moreover, the software may be implemented as an application program tangibly embodied on a program storage unit. The application program may be uploaded to, and executed by, a machine comprising any suitable architecture. The machine may be implemented on a computer platform having hardware such as one or more central processing units (“CPU”), a random-access memory (“RAM”), and input/output (“I/O”) interfaces. The computer platform may also include an operating system and microinstruction code. The various processes and functions described herein may be either part of the microinstruction code or part of the application program, or any combination thereof, which may be executed by a CPU. In addition, various other peripheral units may be connected to the computer platform such as an additional data storage unit and a printing unit.
It is to be further understood that, because some of the constituent system components and methods depicted in the accompanying drawings may be implemented in software, the actual connections between the system components or the process function blocks may differ depending upon the manner in which the present principles are programmed. Given the teachings herein, one of ordinary skill in the pertinent art will be able to contemplate these and similar implementations or configurations of the present principles.
Although the illustrative embodiments have been described herein with reference to the accompanying drawings, it is to be understood that the present principles are not limited to those precise embodiments, and that various changes and modifications may be effected therein by one of ordinary skill in the pertinent art without departing from the scope or spirit of the present principles. All such changes and modifications are intended to be included within the scope of the present principles as set forth in the appended claims.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/US2018/016060 | 1/31/2018 | WO | 00

Number | Date | Country
---|---|---
62463794 | Feb 2017 | US