The present disclosure relates to 3D modeling. More specifically, the present disclosure relates to a method for constructing lighting environment representations of 3D scenes. The method can be applied in various Augmented Reality (AR) or Mixed Reality (MR) applications for rendering 3D scenes.
In AR/MR applications, a scene is a location observed by a user through a rendering device such as a tablet or a glasses-type HMD (Head-Mounted Display). The scene is illuminated by a set of light sources. In order for the AR/MR application to render the scene realistically, the lighting environment of the scene needs to be modeled.
Various methods have been proposed for modeling the 3D lighting environment of 3D scenes and constructing lighting environment representations of those scenes. Some of the known methods analyze a scene from an RGB camera and detect shadows in the scene, from which light source positions are estimated. The quality of the lighting representation produced by these methods highly depends on the texture of the scene and on the position and intensity of the light sources in the scene. For example, when light sources are vertically above the scene, the shadows are so small that it becomes difficult to estimate the exact positions of the light sources from them. More recently, a method has been proposed for estimating the 3D locations of light sources based on detecting specular effects in the scene. This method is efficient but requires specular surfaces in the scene. Its result also highly depends on the motion of the camera around the scene, because the specular effect of some lights may not be detectable from some observation points. The lighting environment representations obtained by this method may therefore miss some light sources.
Another class of methods estimates the locations of light sources using light probes or fish-eye cameras placed in the 3D scene. The light probes can directly detect the locations of light sources. However, the use of light probes is costly and tedious for an inexperienced user. The fish-eye cameras can directly provide images representing the lighting environment (light sources) of the scene. In order to fit the resulting lighting environment representation to the world coordinate system (WCS) of the scene in an AR/MR application, it is necessary to compute the orientation of the camera. The estimation of the orientation of the camera can be made by asking the user to match points of the scene observed both in the lighting environment representation and in an image of the scene taken by another camera posed with respect to the WCS. Again, it is tedious and costly for an inexperienced user to assist in the estimation of the orientation of the camera. Besides, the fish-eye cameras may not be able to detect light sources outside their angle of view or obstructed by objects in the scene. The lighting environment map generated by this method may therefore miss some light sources illuminating the scene.
In document WO 2008/063168 to Thomson Licensing (inventors A. B. Benitez, D.-Q. Zhang and J. A. Fancher), it is proposed to collect light readings, e.g. on a motion picture set, at different positions and times to build a light model for a predetermined physical space. The light readings can correspond to light sensors associated with light and position information, which can be obtained e.g. by sensors of one or more camera(s) placed at different positions inside the physical space. The light model represents the light distribution across space and/or time and can be used for visual effect compositing. It is suited to predicting or interpolating the lighting at intermediate or other positions and/or times for which no specific light readings were obtained.
That solution can provide a way of determining the lighting at various locations and times. It can still be enhanced to a desired level of accuracy and realism by increasing the number of sensors and optimizing the sensor locations given some lighting conditions (see page 11, lines 25-29), and/or by exploiting autonomous robots equipped with light sensors so as to better cover the target space, including occluded areas (page 10, lines 3-7).
However, that technology can prove relatively complex and computationally demanding in several situations, depending on the distribution of the light sources.
It is therefore desirable to provide a method for constructing lighting environment representations of 3D scenes in a convenient and less costly way. It is also desirable to construct a more complete lighting environment map that does not miss light sources.
The present disclosure makes it possible to construct a more complete lighting environment representation of a 3D scene in a convenient and less costly way.
More specifically, the present disclosure relates to a method for constructing a synthesis lighting environment representation of a 3D scene. The method comprises:
By a “3D scene”, it is meant a content of a spatially limited area to be visually represented, for example for AR or MR applications. The 3D scene can therefore have any shape, including for example a hemispherical or a parallelepiped form.
The 3D scene can contain all, parts or none of light sources illuminating that 3D scene. In this respect, the light source(s) contained in the second image(s), which are part of those light sources, can themselves be contained in the 3D scene or not. Also, they can include all or part of the light sources illuminating the 3D scene.
That 3D scene is advantageously an observed scene in which an AR or MR application takes place. Such an application can be carried out either on the first images obtained with the first device, or on images obtained with another device observing the 3D scene.
The second image(s) advantageously contain(s) at least two light sources illuminating the 3D scene.
Combining the outside originating image data with the inside originating image data as defined above can potentially provide a very powerful and flexible tool for constructing the synthesis lighting environment representation of the 3D scene, making it potentially suited to any kind of lighting and of scene. It can further leverage the advantages of existing technologies specifically directed to either outside image capture or inside image capture. In addition, given the potential efficiency of that solution, simple versions of inside and outside image captures can be exploited while possibly yielding high-precision results.
In spite of the apparent simplicity of the present solution, a skilled person lacking inventive capacity would not have designed it, and would even have turned it down if a related idea had been proposed. This is due to the very different conceptual categories and processing operations of lighting representations associated with images obtained from inside the scene on the one hand, and from outside the scene on the other hand, as typically developed in the above-mentioned prior art. In fact, overcoming the drawbacks of the existing methods and enhancing the related technologies was so far pursued within each of those categories, leading to continual respective enhancements, but without considering the potentialities offered by the joint exploitation of both categories as presently disclosed.
According to an embodiment of the present disclosure, at least one of the first device(s) and at least one of the second device(s) are at least one same and unique device successively arranged in locations chosen among the location(s) outside the 3D scene and the location(s) inside the 3D scene. Advantageously, one of the devices or the unique device is thus exploited for capturing the images successively both inside and outside the 3D scene (in either order).
In an aspect of the present disclosure, the method further comprises:
In an aspect of the present disclosure, obtaining the filming position(s) comprises deriving the filming position(s) from recognizing the first and/or second device position(s) on the second image(s) and/or on the first image(s) respectively.
In an aspect of the present disclosure, the method comprises deriving the first lighting environment representation from shadows and/or specularities on the first image(s).
In alternative implementations, which can be combined with the previous ones in any way, the first lighting environment representation is obtained by exploiting colors and/or direct light source identification (insofar as some of the light sources illuminating the 3D scene are contained therein and visible on the first image(s)).
In an aspect of the present disclosure, the method comprises obtaining the data representative of at least one of the second image(s) as corresponding to a wide range of view (for example associated with a wide-angle camera of a mobile device, or with a cheap wide-angle lens placed on a camera of a mobile device, or with a fixed fish-eye camera).
In alternative implementations, which can be combined with the previous ones, the method comprises obtaining the data representative of at least one of the second image(s) as corresponding to a limited range of view. For example, part or all of the second images can be obtained by such cameras positioned inside the 3D scene and directed to known light source directions. The use of two or more such cameras makes possible a broad coverage of the lighting sources, which can further be e.g. completed by a camera having a wide range of view.
For example, a camera having a wide range of view is arranged at a center of the 3D scene, while one or more further camera(s) having a limited range of view are targeting light source directions not easily accessible from the center of the 3D scene.
In an aspect of the present disclosure, the 3D scene has a center and the first lighting environment representation is centered at the center of the 3D scene.
In an aspect of the present disclosure, the synthesis lighting environment representation, the first lighting environment representation and the second lighting environment representation are environment maps.
In an aspect of the present disclosure, at least one of the environment maps is a grayscale image.
In an aspect of the present disclosure, the merging comprises rotationally matching the first lighting environment representation and the second lighting environment representation.
In an aspect of the present disclosure, the merging comprises translationally matching the first lighting environment representation and the second lighting environment representation.
In an aspect of the present disclosure, the merging is based on correlations between the first lighting environment representation and the second lighting environment representation.
In an aspect of the present disclosure, the merging comprises combining at least one light source (advantageously at least two light sources) of the first lighting environment representation with the light source(s) of the second lighting environment representation.
The present disclosure also relates to a method for rendering an Augmented Reality or a Mixed Reality scene. The Augmented Reality or Mixed Reality scene is generated using a synthesis lighting environment representation constructed by any of the preceding methods for constructing a synthesis lighting environment representation of a 3D scene.
According to the present disclosure, there is provided a device for constructing a synthesis lighting environment representation of a 3D scene, comprising at least one processor configured to construct a synthesis lighting environment representation. The at least one processor is configured to:
In an aspect of the present disclosure, the at least one processor is configured to carry out any of the preceding methods for constructing a synthesis lighting environment representation.
According to an embodiment of the present disclosure, the device for constructing a synthesis lighting environment representation of a 3D scene comprises means for:
In an aspect of the present disclosure, the device for constructing a synthesis lighting environment representation comprises means for carrying out any of the preceding methods for constructing a synthesis lighting environment representation.
According to the present disclosure, there is provided a computer program product, comprising program code instructions for performing any of the preceding methods for constructing a synthesis lighting environment representation, when the program is executed by a processor.
According to the present disclosure, there is also provided a non-transitory computer storage medium containing the computer program product.
In the present disclosure, the term “lighting environment representation” designates a representation of the locations of light sources of a 3D scene. It can also comprise additional information, such as the intensity, color, size and/or shape of the light sources. The lighting environment representation can be stored in any form of data structure. For example, it can be a 2D or 3D environment map on which the 2D or 3D locations of the light sources are recorded. A 2D environment map can be an image (colored or grayscale) in which the locations of the light sources are highlighted according to the intensity and the size of the light sources. A grayscale image can contain the essential information of the lighting environment (direction, size and intensity of the light sources) while being easy to process and requiring less memory for storage.
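By way of illustration only, such a 2D grayscale lighting environment map could be held in a data structure along the following lines; this is a minimal sketch in Python, and the class and field names are merely illustrative, not part of the present disclosure:

```python
# Illustrative sketch of a 2D grayscale lighting environment map holding
# point-based light sources; names and fields are hypothetical.
from dataclasses import dataclass, field
from typing import List, Tuple
import numpy as np

@dataclass
class PointLight:
    position: Tuple[float, float]   # (u, v) coordinates in the map
    intensity: float                # relative intensity in [0, 1]
    size: float = 1.0               # radius of the highlighted patch, in pixels

@dataclass
class EnvironmentMap2D:
    resolution: Tuple[int, int]               # (height, width) of the grayscale image
    lights: List[PointLight] = field(default_factory=list)

    def to_image(self) -> np.ndarray:
        """Rasterize the light list into a grayscale image in which each
        light source appears as a disc whose brightness encodes its intensity."""
        img = np.zeros(self.resolution, dtype=np.float32)
        yy, xx = np.mgrid[0:self.resolution[0], 0:self.resolution[1]]
        for light in self.lights:
            u, v = light.position
            mask = (xx - u) ** 2 + (yy - v) ** 2 <= light.size ** 2
            img[mask] = np.maximum(img[mask], light.intensity)
        return img
```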
The present disclosure proposes a method for constructing a synthesis lighting environment representation that combines a method based on processing images of the scene taken from outside the scene and a method based on the direct observation of the lighting environment inside the scene. The general principle of the proposed method lies in merging a first lighting environment representation, derived from data representative of at least one first image taken from at least one position outside the scene by a first device, and a second lighting environment representation, derived from data representative of at least one second image taken from at least one position inside the scene by a second device, into a synthesis lighting environment representation. The second device can be a camera with a wide range of view (for example, a wide-angle camera of a mobile device or a camera of a mobile device with a cheap wide-angle lens placed on it). Such a camera can provide at least one image of the environment including light sources of the scene. The first device can also be a camera or any electronic device that comprises a camera. The first and second devices can be one and the same device. In this case, the device is positioned outside the scene for capturing at least one first image at a first period of time and positioned inside the scene for capturing at least one second image at a second period of time. The proposed method is disclosed in view of
More specifically, according to an exemplary embodiment, the method comprises:
The first lighting environment representation can be derived from data representative of the images taken by the at least one first device. The data representative of the images can be directly or indirectly (by additional processing) derived from the images. In an exemplary embodiment, the step of generating the first lighting environment representation comprises estimating the lighting from shadows and/or specularities detected in the images taken by the at least one first device. The first lighting environment representation can be presented in any form, such as a hemispherical, cubic, 2D or 3D lighting environment map (e.g. a set of 3D point lights). The first lighting environment representation in hemispherical, 2D and 3D forms is detailed below in three embodiments of the disclosure.
The second lighting environment representation can be derived from the images taken by the at least one second device positioned in the 3D scene. The second lighting environment representation can also be presented in a form differing from that of the first lighting environment representation. A 2D lighting environment representation can be directly derived from at least one image taken by one second device positioned inside the scene. Indeed, each image captured by the second device contains the light sources in the view range of the second device. It is therefore not necessary to specifically create a 2D lighting environment representation. If there are at least two second devices positioned in the scene, a 3D lighting environment representation can be generated from at least two images taken respectively by the two second devices. A hemispherical lighting environment representation can also be generated by projecting the 2D or 3D lighting environment representation onto a hemisphere.
The merging step may further comprise:
The orientations of the first lighting environment representation and the second lighting environment representation are important for merging the two environment representations. If the two environment representations have different orientations, the quality of the resulting synthesis lighting environment representation would be affected. This problem can be avoided by asking the user to make the acquisition of the images with caution. For example, a particular 2D marker is placed in the scene and used to define the world coordinate system (WCS) having a coordinate system (x, y, z). Then, the camera placed inside the scene is oriented by the user with respect to the marker so that it allows an easier correspondence between image axes and the axes of the WCS.
The first lighting environment map can be registered with respect to the second lighting environment map. This can improve the quality (accuracy) of the synthesis lighting environment map. More specifically, the first lighting environment representation can be preprocessed to detect highlights (light sources), and both environment maps can be given a binary representation to allow comparison and computation of a correlation or matching error. The 3D registration can be limited to the estimation of a 3D rotation (no translation estimated) if either the two maps are described with respect to close (if not identical) coordinate systems in the scene, or the light sources are so far from the scene that the distance between the two coordinate systems has a weak effect. The searched rotation can be estimated by rotating the second lighting environment representation around its center point over a preset of rotations, computing the correlation for each rotation and selecting the one corresponding to the maximal correlation. The searched rotation is 3D in the general case, but if the two coordinate systems share a common plane (e.g. the xy plane), the searched rotation is limited to one angle ϕ around the normal to the plane (the z-axis). A gradient descent algorithm or another iterative method can be used to refine the rotation.
The translational matching of the first and second lighting environment maps may be important if the two lighting environment maps are not centered on a same point. The translation can be determined using the filming position as a reference point in the two lighting environment maps, if it is easily identified in the images captured by the second device. Otherwise, a preset of translation vectors is introduced in the registration process. The translation is 3D in the general case, but can be limited to 2D if the two coordinate systems share a common plane (e.g. the xy plane). The registration process then consists in searching the rotation and translation that provide the best fit between the two lighting environment representations. The various positions take into account the fact that the position of the second device in the scene is not known. The translational matching is optional when the two lighting environment maps are already centered on the filming position.
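By way of illustration only, and assuming that the two lighting environment representations are available as binary 2D images of equal size and that the searched rotation reduces to a single angle around the z-axis, the exhaustive search of the best rotation (optionally combined with a preset of translation offsets) described above could be sketched as follows in Python; the function and parameter names are hypothetical and not part of the disclosure:

```python
# Illustrative sketch: brute-force registration of two binary lighting
# environment maps by maximizing their correlation over a preset of
# rotations (around the image center) and optional 2D translation offsets.
import numpy as np
from scipy.ndimage import rotate, shift

def register_maps(map_a, map_b, angles_deg, offsets=((0, 0),)):
    """Return ((angle, offset), score) maximizing the correlation between
    map_b and the rotated/translated map_a (both binary 2D numpy arrays)."""
    best_score, best_params = -np.inf, None
    for angle in angles_deg:
        rotated = rotate(map_a, angle, reshape=False, order=0)
        for (dy, dx) in offsets:
            candidate = shift(rotated, (dy, dx), order=0)
            score = float(np.sum(candidate * map_b))   # correlation of binary maps
            if score > best_score:
                best_score, best_params = score, (angle, (dy, dx))
    return best_params, best_score

# Example use with a 5-degree rotation preset and no translation search:
# (angle, offset), _ = register_maps(rep_a_binary, rep_b_binary,
#                                    angles_deg=range(0, 360, 5))
```

The coarse result obtained this way could then be refined with a gradient descent or another iterative method, as indicated above.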
Thus, the synthesis lighting environment representation combines the lighting information (information on the light sources) of at least two (a first and a second) lighting environment representations. It is therefore more complete and more reliable than either the first or the second lighting environment representation alone. Besides, the method uses images taken from inside and outside the scene for constructing the synthesis lighting environment representation. The first and second devices taking the images can be conventional cameras (the camera of a tablet or a smartphone). It is not necessary to use special and costly equipment, or tedious operations such as those based on the light probes used in the prior art, for constructing a lighting environment representation.
The method according to the present disclosure takes advantage of lighting environment representations derived from images taken from locations inside and outside of a 3D scene, and provides wide lighting environment representations of the 3D scene from various cues, plus additional useful information issued from these various cues (surface reflectance, attenuation, etc.) for AR/MR applications, without requiring complex or specific equipment or tedious handling. In fact, using multiple complementary cues avoids a complex acquisition set-up.
The synthesis lighting representation can be constructed off-line before its use in an AR or MR application (in particular if only one device is used for capturing images outside and inside the scene). Alternatively, it can be constructed online, i.e. together with the AR/MR application. In this case, the electronic device (for example, a HMD, a tablet or a computer) on which the AR/MR application runs must be powerful enough for both updating the synthesis lighting environment representation and processing the MR/AR scenario. Furthermore, both processing modules can work at different frequencies, and in any case, a previously constructed (i.e. computed earlier) synthesis lighting environment representation can be considered in a current MR/AR scene. An advantage of considering both processing operations simultaneously is that this mode can take into account the possible variations of lighting during the MR/AR application (at least the lighting environment representation observed by the second device(s) only).
The first and the second devices can be two distinct cameras (or two distinct devices comprising respectively a camera) respectively positioned outside and inside the scene: the at least one image taken by the first device B therefore comprises the second device A, since the second device A is positioned in the scene while the first device B is taking the images. The at least one image taken by the first device B can be processed to detect the filming position (of the second device A) in the scene. This can be carried out for example via stereovision (at least two images of the scene being captured by the first device B from distinct viewpoints), or from depth data if the first device B comprises an RGB-D sensor (“D” standing for depth). This 3D location estimation requires identifying the camera of device A in the image(s), which can be done via recognition.
The method for constructing a synthesis lighting environment representation can therefore comprise:
As the filming position (3D position) is recovered, the first lighting environment map can be generated with respect to the filming position (centered on the filming position). The filming position of the second device A inside the scene can also be obtained by matching images captured by the two devices A and B. In fact, as the geometric model of the 3D scene is available, by matching such two images, the filming position of the second device A and the location of the first device B in a same reference system (world coordinate system—WCS) can be estimated. The two solutions for obtaining information on the filming position can be combined so as to get a better estimation. The filming position can therefore be derived from existing images taken from positions outside and/or inside the scene. In other words, the step of obtaining the filming position of the second device A may comprise deriving that filming position from recognizing the at least one second device position from the at least one first image.
It is therefore not necessary to take additional images or use special devices, such as an RGB-D sensor or a distance meter, for detecting the filming position. First and second lighting environment representations centered on the filming position can then be generated from the information on the filming position.
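As a purely illustrative sketch of the stereovision-based recovery of the filming position mentioned above, and assuming that camera A has been recognized at known pixel locations in two images taken by device B from two calibrated viewpoints expressed in the WCS, the triangulation could be written as follows in Python with OpenCV; all names are hypothetical:

```python
# Illustrative sketch: triangulating the filming position of the second
# device A from two images taken by the first device B at distinct,
# known viewpoints.
import numpy as np
import cv2

def triangulate_filming_position(K, pose1, pose2, pix1, pix2):
    """K: 3x3 intrinsic matrix of device B; pose1, pose2: 3x4 [R|t] matrices
    of the two viewpoints expressed in the WCS; pix1, pix2: (u, v) pixel
    coordinates where device A was recognized in each image.
    Returns the estimated 3D filming position in the WCS."""
    P1 = K @ pose1
    P2 = K @ pose2
    pts1 = np.array([[pix1[0]], [pix1[1]]], dtype=np.float64)  # 2x1
    pts2 = np.array([[pix2[0]], [pix2[1]]], dtype=np.float64)
    X_h = cv2.triangulatePoints(P1, P2, pts1, pts2)            # homogeneous 4x1
    return (X_h[:3] / X_h[3]).ravel()
```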
In the following, three embodiments are more specifically presented. The three embodiments are based on the previous explanations. The first device B and the second device A are respectively presented as a camera B and a camera A. Some specific characteristics are described as a function of the way the lighting environment representations are generated. The specific characteristics of a first embodiment can be integrated into a second embodiment so as to deliver a new embodiment which comprises the characteristics extracted from the first embodiment.
In this first embodiment, the first, the second and the synthesis lighting environment representations are presented in the form of hemispherical lighting environment maps. The main steps are globally the same as the ones previously presented.
In this embodiment, the step of generating 11 the first lighting environment map comprises:
The lighting designates 3D locations and other information (intensity, size, color etc.) of the light sources. The representation of the light sources depends on the method used to detect them and is often point-based or area-based. For example, the shadow-based method generally provides sets of point lights. As to specularity-based methods, they provide either point lights if 3D position is estimated, or patches in a hemispherical map if the specular effects such as highlights in the scene are interpreted e.g. as mirror reflections of the lighting.
The location of the camera B at the moment of taking each image is recorded together with the images or in a separate file. Therefore, the viewpoints (with respect to the WCS) from which the images were captured are known. The WCS can for example be attached to a particular feature visible and detected in at least one image of the scene. It can also be attached to the filming position of the camera A in the scene if it is known. In the embodiments of the present disclosure, the WCS is attached to the filming position of the camera A that is roughly placed at the center of the scene. Therefore, the 3D locations or directions of the identified light sources can be expressed with respect to the WCS as shown in
From the 3D locations of the identified light sources, a hemispherical lighting environment map (Rep HB) can be created by projecting the estimated 3D light sources to a hemisphere. L1H and L3H on the hemisphere are light sources projected on the hemispherical environment map (Rep HB) as shown on
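By way of illustration only, and assuming that the estimated light sources are available as 3D positions expressed in the WCS centered on the filming position (with the z-axis pointing upward), their projection onto the hemisphere could be sketched as follows; the function name is hypothetical:

```python
# Illustrative sketch: projecting estimated 3D point lights onto a hemisphere
# centered on the filming position, yielding (polar, azimuthal) angles.
import numpy as np

def project_to_hemisphere(light_positions_3d):
    """light_positions_3d: (N, 3) array of light positions in the WCS, whose
    origin is the center of the hemisphere and whose z-axis points upward.
    Returns an (N, 2) array of (theta, phi) angles in radians."""
    p = np.asarray(light_positions_3d, dtype=np.float64)
    r = np.linalg.norm(p, axis=1)
    theta = np.arccos(np.clip(p[:, 2] / r, -1.0, 1.0))  # polar angle from the z-axis
    phi = np.arctan2(p[:, 1], p[:, 0])                   # azimuthal angle in the xy-plane
    return np.stack([theta, phi], axis=1)
```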
The second hemispherical lighting environment map can be derived from the images taken by camera A positioned inside the scene as shown in
The image captured by the camera can itself constitute the second environment map of the light sources—this is a common representation mode of the hemispherical map. That image can also be transformed by the selection of pixels having an intensity exceeding a threshold, in which case the light sources are generally represented as strong intensity patches in that map. Alternatively, the image can be transformed by reducing such patches to punctual light sources having an intensity and a position induced from patch specificities. If such a patch is expanded, it can then be segmented and represented by several point-based light sources.
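A minimal sketch of such a transformation, assuming the captured image is available as a grayscale array with intensities normalized to [0, 1], could look as follows in Python; the threshold value and function name are merely illustrative:

```python
# Illustrative sketch: turning the image captured by the inner camera into a
# set of point-based light sources by thresholding and patch segmentation.
import numpy as np
from scipy import ndimage

def image_to_point_lights(gray_image, threshold=0.9):
    """gray_image: 2D array with intensities in [0, 1]. Pixels above the
    threshold are grouped into connected patches; each patch is reduced to
    one point light at the patch centroid, with the patch mean intensity."""
    mask = gray_image >= threshold
    labels, n_patches = ndimage.label(mask)
    lights = []
    for index in range(1, n_patches + 1):
        v, u = ndimage.center_of_mass(mask, labels, index)   # (row, col) centroid
        intensity = float(gray_image[labels == index].mean())
        lights.append({"position": (u, v), "intensity": intensity})
    return lights
```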
In this embodiment, in the step of merging 30, the first hemispherical lighting environment map Rep HB (shown in
The step of merging 30 comprises combining light sources of the first hemispherical lighting environment map and light sources of the second hemispherical lighting environment map. A synthesis hemispherical lighting environment map SYT HAB as shown in
For a light source (L1H) present in a same zone of a predetermined size in both Rep HA and Rep HB, the combining step will add one light source in the synthesis lighting environment map in that zone. The coordinates of this light source can be computed by averaging the coordinates of the light source L1H in Rep HA and Rep HB. For example, in the synthesis hemispherical environment map, the coordinates of light source L1H can be represented as (θS, φS), in which θS represents the polar angle and φS represents the azimuthal angle, with for instance θS=(θA+θB)/2 and φS=(φA+φB)/2, where (θA, φA) and (θB, φB) represent respectively the coordinates of L1H in Rep HA and Rep HB. The radial distance is not mentioned, since it is constant in the hemispherical environment map (all the light sources are on the hemisphere). It should be noted that the coordinates of the light source in the synthesis environment map can also be obtained by directly selecting the coordinates of this light source in the first or the second environment map, whichever gives the light source a higher intensity value.
The intensity of light source L1H in SYT HAB can also be obtained by averaging the intensity of this light source in Rep HA and Rep HB. Alternatively, the intensity of light source L1H can be obtained by taking the maximum value of this light source in Rep HA and Rep HB.
For a light source (L2H or L3H) that is present in only one of the two environment maps (Rep HA and Rep HB), its coordinates and intensity in SYT HAB can be the same as its coordinates and intensity in the environment map that contains it.
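By way of illustration only, and assuming that both hemispherical maps are available as lists of light sources given by (θ, φ, intensity) and that a “same zone” is approximated by an angular threshold, the combination rules described above could be sketched as follows; the names and the threshold value are hypothetical:

```python
# Illustrative sketch: merging the light sources of two hemispherical maps.
# Lights falling within an angular threshold of each other are combined by
# averaging their angles and intensities; unmatched lights are kept as is.
import numpy as np

def angular_distance(a, b):
    """Angle (radians) between two hemisphere directions given as (theta, phi)."""
    ta, pa = a
    tb, pb = b
    cos_d = np.sin(ta) * np.sin(tb) * np.cos(pa - pb) + np.cos(ta) * np.cos(tb)
    return np.arccos(np.clip(cos_d, -1.0, 1.0))

def merge_hemispherical_maps(lights_a, lights_b, zone_radius=0.1):
    """lights_a, lights_b: lists of (theta, phi, intensity).
    Returns the merged light list of the synthesis map. Note that averaging
    phi directly is a simplification that ignores the wrap-around at +/- pi."""
    merged, matched_b = [], set()
    for (ta, pa, ia) in lights_a:
        match = None
        for j, (tb, pb, ib) in enumerate(lights_b):
            if j not in matched_b and angular_distance((ta, pa), (tb, pb)) <= zone_radius:
                match = j
                break
        if match is None:
            merged.append((ta, pa, ia))
        else:
            tb, pb, ib = lights_b[match]
            matched_b.add(match)
            merged.append(((ta + tb) / 2, (pa + pb) / 2, (ia + ib) / 2))
    merged.extend(l for j, l in enumerate(lights_b) if j not in matched_b)
    return merged
```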
In the present disclosure, a light source is represented as a point in the environment map, in order to simplify the presentation of the general principles. The skilled person can also model the shape and size of the light sources in the lighting environment representations so as to obtain a more precise representation of the light sources.
In the first embodiment, the lighting environment representation is presented in the form of hemispherical lighting environment maps. The skilled person can use any other possible lighting environment map (for example, a cubic environment map) for representing the lighting environment of the scene. The features disclosed in the first embodiment can also be applied in the other embodiments of the disclosure. If necessary, the skilled person can obviously modify these features so as to make them adapted to other types of environment representations.
In the second embodiment, the 3D scene is identical to that of the first embodiment as shown in
More specifically, a first 2D lighting environment map (Rep 2DB) is generated from data representative of at least one image taken from a position (by camera B) outside the scene and a second 2D lighting environment map (Rep 2DA) is generated from the received data representative of at least one image taken from a position (by camera A) inside the scene.
The first 2D lighting representation can be generated by projecting light sources L1H and L3H in the hemispherical environment map Rep HB to a 2D plane (2D map) as shown in
In the above equations, (u, v) are the coordinates of the point P projected into the 2D environment map. The coordinates (u0, v0) correspond to the center of the hemispherical environment map in the 2D environment map. When the hemispherical environment map is centered at the filming point of camera A, the coordinate origin of the hemispherical environment map coincides with the coordinate origin of the 2D environment map (u0=v0=0). By implementing the projection, the first 2D lighting environment map is obtained as shown in
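The exact projection equations of the disclosure are not reproduced here; as a purely illustrative sketch, under the assumption of an orthographic projection of the hemisphere onto its base plane, such a projection could be written as follows (the radius and function name are hypothetical):

```python
# Illustrative sketch: one possible projection of a hemispherical map onto a
# 2D plane (orthographic projection onto the base plane of the hemisphere).
import numpy as np

def hemisphere_to_2d(theta, phi, u0=0.0, v0=0.0, radius=1.0):
    """(theta, phi): polar and azimuthal angles of a light on the hemisphere.
    (u0, v0): center of the hemispherical map in the 2D map.
    Returns the (u, v) coordinates of the projected light."""
    u = u0 + radius * np.sin(theta) * np.cos(phi)
    v = v0 + radius * np.sin(theta) * np.sin(phi)
    return u, v
```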
The second 2D lighting environment map can be generated directly from at least one image taken by camera A inside the scene. Indeed, the images captured by camera A inside the scene comprise the light sources in the 2D plane of the images. The second 2D lighting environment map shown in
By merging the first 2D lighting environment map shown in
The step of merging in the second embodiment is essentially the same as the step of merging disclosed in the first embodiment, except for the fact that the coordinates of the light sources are Cartesian in the second embodiment. Light source L12D appears in both the first and the second 2D lighting environment maps. Its coordinates (u, v) in the synthesis lighting environment map can be obtained by the following equation: u=(uA+uB)/2 and v=(vA+vB)/2.
In the above equation, (uA,vA) and (uB,vB) represent the coordinates of light source L12D respectively in the second and the first 2D lighting environment map. It should be noted that the coordinates of the light source L12D in the synthesis environment map can also be obtained by directly selecting the coordinates of this light source in the first or second environment map in which the light source has a higher intensity value.
The intensity of light source L12D in SYT 2DAB can also be obtained by averaging the intensity of this light source in Rep 2DA and Rep 2DB. Alternatively, the intensity of light source L12D can be obtained by selecting the maximum of its intensity values in Rep 2DA and Rep 2DB.
For a light source (L22D or L32D) that is present only in one of the two environment maps (Rep 2DA and Rep 2DB), its coordinates and intensity in SYT 2DAB can be the same as its coordinates and intensity in the environment map that contains it.
The use of 2D environment maps is particularly adapted to the following context. On one hand, the image captured by the “inner” camera (camera A) can itself constitute the second environment map of the light sources. That image can also be transformed by the selection of pixels having an intensity exceeding a threshold, in which case the light sources are generally represented as strong intensity patches in that map. On the other hand, in the context of specularity-based analysis, the images of camera B can be processed to detect highlights on the surfaces; these highlights are interpreted as ‘mirror’ reflections of lights on specular surfaces. From these reflections, bundles of 3D vectors that indicate the directions of the lights can be derived. These data can be naturally represented in a hemispherical map or a 2D environment map. This map can thus contain patches of pixels corresponding to area lights, as in the first map. These two environment maps can then be merged as described previously on a pixel basis.
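As a minimal illustrative sketch of such a pixel-based merging, assuming the two 2D maps have already been registered and have the same size, each pixel of the synthesis map could simply keep the stronger (or the averaged) observation; the function name is hypothetical:

```python
# Illustrative sketch: pixel-wise merging of two registered grayscale 2D
# lighting environment maps containing patches corresponding to area lights.
import numpy as np

def merge_2d_maps_pixelwise(map_a, map_b, mode="max"):
    """map_a, map_b: registered grayscale 2D environment maps of equal size.
    'max' keeps, at each pixel, the stronger of the two observations,
    while 'mean' averages them."""
    if mode == "max":
        return np.maximum(map_a, map_b)
    return (map_a + map_b) / 2.0
```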
By analyzing (processing) the images taken from outside the scene by camera B, 3D position estimates of light sources of the scene can be obtained. A first 3D lighting environment map (Rep 3DB) can be generated from data representative of images taken from outside the scene. The generation of the 3D lighting environment map Rep 3DB has already been discussed in the first embodiment with
In the first and second embodiments, the at least one second image is obtained by camera A placed in a filming position in the scene. In the third embodiment, two “inner” cameras A1 and A2 are placed in the scene. The two “inner” cameras observe the lighting environment of the scene from two distinct positions inside the scene. Using stereovision techniques, the images are matched and a 3D lighting representation can be built. A second 3D lighting environment map Rep 3DA is therefore generated from the received data representative of at least one image taken by a first inner camera A1 and at least one image taken by a second inner camera A2.
The first and second 3D lighting environment maps can then be merged into a synthesis 3D lighting environment map. To help the stereovision and the merging of the 3D environment maps, the two inner cameras A1 and A2 can be detected from the outside camera B. The 3D positions of the two inner cameras A1 and A2 can be estimated from at least two images taken by the outside camera B from distinct viewpoints.
The two distinct 3D environment maps can be described with respect to the WCS that is chosen to correspond to the filming position of one of the two cameras A1 or A2. As already explained, the two 3D environment maps (“outer” and “inner”) have to be registered (via 2D or 3D rotation) before fusion.
In the third embodiment, an “outer” camera B and two “inner” cameras A1 and A2 (three cameras in total) are used. Alternatively, only two cameras can be used. The two cameras are first used as inner cameras A1 and A2 for taking images from inside the scene. Then, one of the two cameras is used as outer camera B to take images from outside the scene, while the other camera remains in the scene (this latter camera defines the filming position). Finally, two 3D lighting environment maps are generated from the images taken by the two cameras, and a synthesis 3D environment map is created by merging the two 3D lighting environment maps Rep 3DA and Rep 3DB. The camera used as camera B outside the scene is then used for the MR/AR application and possibly for continuously updating the first environment map, while the camera in the scene is used to continuously update the second environment map, both the first and second environment map updates being then continuously merged.
In another embodiment, the 3D positions of light sources are estimated from a combination of 2D information obtained from at least one “inner” camera (noted A) placed in the scene and observing the lighting environment of the scene, and at least one “outer” camera (noted B) placed outside the scene and filming it. The related arrangement can correspond e.g. to the representation of
Camera B is then used to detect the direction of the lights through the detection of specular effects in the scene or via the analysis of detected cast shadows, while camera A provides a 2D environment map from which light sources can be detected and their directions estimated. This provides two light direction maps, the combination of which makes it possible to estimate the 3D positions of the lights. Namely, a matching between the lights in the two representations is established. Then, if all light directions can be expressed in a same WCS of the scene, triangulation is used to recover the 3D poses of the matched lights. Otherwise, if the two light representations are expressed in two distinct coordinate systems, a matching is used to estimate the relative 3D poses of the two light representations, and then the 3D poses of the lights observed in the two systems.
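As a purely illustrative sketch of the triangulation step, assuming that a light has been matched in the two representations and that its directions are expressed as unit vectors in the same WCS, together with the positions of the two cameras, the 3D position could be recovered as the midpoint of the closest points of the two viewing rays; the function name is hypothetical:

```python
# Illustrative sketch: recovering the 3D position of a matched light from two
# light directions expressed in the same WCS (one from camera A, one from
# camera B), as the midpoint of the closest points of the two rays.
import numpy as np

def triangulate_light(origin_a, dir_a, origin_b, dir_b):
    """origin_*: 3D positions of the two cameras in the WCS; dir_*: direction
    vectors toward the light. Returns the estimated 3D light position."""
    A = np.asarray(origin_a, dtype=float)
    B = np.asarray(origin_b, dtype=float)
    da = np.asarray(dir_a, dtype=float); da /= np.linalg.norm(da)
    db = np.asarray(dir_b, dtype=float); db /= np.linalg.norm(db)
    w0 = A - B
    a, b, c = np.dot(da, da), np.dot(da, db), np.dot(db, db)
    d, e = np.dot(da, w0), np.dot(db, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-9:                      # near-parallel rays
        s, t = 0.0, e / c
    else:
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
    p_a = A + s * da                           # closest point on ray from camera A
    p_b = B + t * db                           # closest point on ray from camera B
    return (p_a + p_b) / 2.0
```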
According to an embodiment, the device DVC comprises a processor PROC driven by a computer program PG stored in a memory MEM and implementing a method for constructing a synthesis lighting environment representation. The device DVC also comprises an I/O module controlled by the processing unit PROC for communicating with external devices. The processing unit PROC, the memory MEM and the I/O module are connected to a bus. The I/O module can be a wired (USB, Ethernet) or wireless (WiFi, Bluetooth®) communication interface. The electronic device may also comprise a camera and can therefore be used for taking images from positions inside and outside the scene.
At initialization, the program code instructions of the computer program PG are for example loaded into a RAM (not shown) and then executed by the processor of the processing unit PROC. The program code instructions, when executed by the processor PROC, implement the steps of any method disclosed in the present disclosure for constructing a synthesis lighting environment representation.
The processor can in particular be configured to:
According to a specific embodiment, a device may be directly built for constructing the synthesis lighting environment representation. According to this embodiment, the device may comprise:
Of course, these devices may also comprise the necessary means and/or configuration for implementing the various characteristics of the methods which have been described previously.