GEOMETRIC WARPING OF A STEREOGRAPH BY POSITIONAL CONSTRAINTS

Abstract
In a method for consistently editing stereographs in warping at least part of a virtual 3D scene, a set of positional constraints is associated with a source and a target position of at least one constraint point in at least one specified depth layer of each of two specified stereographic images. The source and target positions in both images are respectively associated with semantic source and target points in the 3D scene. Those images are warped by applying these positional constraints for each of the specified depth layers, and a warped stereograph is generated, comprising the two warped images for the specified depth layers.
Description
1. REFERENCE TO RELATED EUROPEAN APPLICATION

This application claims priority from European Patent Application No. 16306774.7, entitled “GEOMETRIC WARPING OF A STEREOGRAPH BY POSITIONAL CONSTRAINTS”, filed on Dec. 22, 2016, the contents of which are hereby incorporated by reference in their entirety.


2. TECHNICAL FIELD

The field of the disclosure relates to stereoscopic displays, which present a visual display for both eyes, and also, advantageously, to panoramic stereoscopic displays that fully cover the peripheral fields of vision. There is a wide range of possible application contexts, including computer games, visualization of medical content, sport activities, or immersive movies. The disclosure is applicable notably to virtual reality (VR) and augmented reality (AR).


The disclosure pertains to a method for editing a stereograph and to an apparatus for implementing such a method.


3. BACKGROUND ART

This section is intended to introduce the reader to various aspects of art, which may be related to various aspects of the present disclosure that are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.


Stereoscopy creates in the mind of a user the illusion of three-dimensional depth, by presenting to each of the user's eyes a slightly different bi-dimensional image. The two stereographic (stereo) images, the pair of which is called a “stereograph”, are then combined in the brain to give the perception of depth.


In order to capture or create a stereograph, it is necessary to take or generate two pictures of a tridimensional scene from different horizontal positions to get a true stereoscopic image pair. This can be done notably with two separate side-by-side cameras, with one camera moved from one position to another between exposures, with one camera and a single exposure by means of an attached mirror or prism arrangement that presents a stereoscopic image pair to the camera lens, or with a stereo camera incorporating two or more side-by-side lenses.


Stereoscopic capture can be extended to create stereoscopic panoramas with large or very large horizontal fields of view, up to 360 degrees. To this purpose, a rig of two cameras separated by the eye distance is made to rotate about the center of the segment that joins them. These cameras have a wide vertical field of view and a narrow horizontal field of view. As the two cameras rotate, horizontal slices of the scene are captured and rendered. The slice renderings are stitched together to form the panoramic stereo image.


The current disclosure also applies to the rendering of synthetic scenes created by computer-generated imagery. Each object making up a synthetic scene consists of a textured 3D (tridimensional) mesh that can be rendered to a 2D (bidimensional) image using a virtual camera placed at any point within the synthetic scene and with any desired optical features. Thus, computer-generated imagery allows for the rendering of synthetic stereoscopic or stereoscopic panorama images for any specification of viewpoint and optical camera parameters.


Interestingly, some methods deal with generating new viewpoint images from existing viewpoint images. In particular, patent application US 2012/0169722 to Samsung Electronics describes the generation of new views based on interpolation or extrapolation applied to known reference views as well as on depth information or binocular disparity information, thereby taking parallax effects into account. The document deals with the processing of error suspicious areas in such induced new views, due to pixel assignment uncertainties at boundaries between foreground objects and background (which can correspond to an area adjacent to discontinuous depth/binocular disparity values). It teaches a weighting-based blending of two reconstitutions, one privileging the foreground by encompassing an error suspicious area in the related foreground (foreground-first warping), and another privileging the background by regarding the same error suspicious area as part of the background (background-first warping).


Patent application US 2013/0057644 to Disney Enterprises deals with the generation of autostereoscopic video content. Based on the reception of a multiscopic video frame including a first and a second image, a mapping function is determined from extracted image characteristics, and at least one third image is generated by exploiting the mapping function, which third image is added to the multiscopic video frame. The developed solution enables flexibly modifying a scene depth and/or parallax of a stereo video such that the latter becomes suited to a different medium than the original one, e.g. a cinema screen, a TV set or a handheld device. The solution is also adapted to keeping a same image content as an original image pair, associated with a differently perceived scene depth.


Despite the increasing interest in the capture or generation of stereoscopic images, few editing techniques are available for editing stereographs in a way that corresponds to warping parts of a virtual 3D scene consistently across viewpoints. The above-mentioned processes (US 2012/0169722, US 2013/0057644) are notably adapted to the generation of new images or of suited stereographic effects, but are silent about editing stereographs by warping parts of an underlying virtual 3D scene.


Most of the proposed related methods (as illustrated by the article “Plenoptic Image Editing” by S. Seitz and K. M. Kutulakos, in International Conference on Computer Vision, 1998) deal with editing texture information, but few are able to change the geometry of the images. Such a geometric warping of the image can be used in particular for magnifying or compressing certain image regions, enabling novel ways of interacting with the displayed content. For example, the window of a captured building in an image can be made bigger or smaller in size, the chest of a person in the image can be made bigger in order to give a more muscular appearance, or, in a medical context, the size of a specific body organ can be magnified for better inspection.


Specifically, a convenient and efficient method for editing conventional 2D images relies on sparse positional constraints, consisting of a set of pairs of a source point location and a target point location. Each pair enforces the constraint that the pixel at the location of the source point in the original image should move to the location of the corresponding target point in the result image. The change in image geometry is obtained by applying a dense image warping transform to the source image. The transform at each image point is obtained as the result of a computation process that optimizes the preservation of local image texture features, while satisfying the constraints on the sparse control points.
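
By way of illustration, the following minimal Python sketch shows one possible realization of such a sparse-constraint warp, using a thin-plate-spline interpolant to extend the sparse source-to-target displacements into a dense backward warp field. The interpolant merely stands in for the texture-preserving optimization described above, and all function and variable names are illustrative.

```python
# Minimal sketch: extend sparse positional constraints (source -> target
# points) into a dense backward warp field, then resample the image.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.ndimage import map_coordinates

def warp_with_constraints(image, sources, targets):
    """image: (H, W) array; sources, targets: (m, 2) arrays of (x, y)."""
    h, w = image.shape[:2]
    # Backward mapping: for each output (target) location, interpolate
    # where in the source image to sample from.
    displacement = RBFInterpolator(targets, sources - targets,
                                   kernel='thin_plate_spline')
    ys, xs = np.mgrid[0:h, 0:w]
    grid = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
    sample = grid + displacement(grid)          # (H*W, 2) source coords
    coords = np.stack([sample[:, 1], sample[:, 0]])  # (row, col) order
    return map_coordinates(image, coords, order=1).reshape(h, w)
```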


However, such an image warping method cannot be applied directly to stereographic representations, notably for immersive renderings, especially when applied to each stereographic view individually, because this would result in misalignments between the views and produce jarring visual artefacts.


It would hence be desirable to provide an apparatus and a method that show improvements over the background art.


4. SUMMARY OF THE DISCLOSURE

References in the specification to “one embodiment”, “an embodiment”, “an example embodiment”, indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases do not necessarily refer to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.


A particular embodiment of the disclosure pertains to a method for consistently editing stereographs in warping at least part of a virtual tridimensional scene, the method comprising:

    • obtaining at least one initial stereograph comprising two stereographic images of the tridimensional scene depicted from two corresponding bi-dimensional views, the two images being segmented into a discrete set of at least two depth layers in a consistent manner across the two images,
    • obtaining at least one initial set of positional constraint parameters to be applied to at least one specified depth layer within the two images, said positional constraint parameters being associated with a source position and a target position of at least one constraint point in said at least one specified depth layer for each of the two images, said positional constraint parameters determining a warping transformation to be applied to given points within each of the at least one specified depth layer in each of the two images.


Each of the source positions in one of the two images corresponds to one of the source positions in the other of the two images and is associated with a same semantic source point in the tridimensional scene. Likewise, each of the target positions in one of the two images corresponds to one of the target positions in the other of the two images and is associated with a same semantic target point in the tridimensional scene.


The method further comprises:

    • warping each of the two images by applying to each of the at least one specified depth layer said at least one initial set of positional constraint parameters corresponding to said each of the at least one specified depth layer,
    • generating an edited stereograph comprising the two warped images for said at least one specified depth layer.


The disclosure pertains to a method for warping a couple of stereographic images, each referring respectively to the views of the 3D scene which could be captured by the left and right eyes of a user. By definition, these two images are almost identical and form a single three-dimensional image when viewed through a stereoscope. In the following description, this couple of stereographic images is referred to as a “stereograph”. Each of the two images is segmented into a discrete set of depth layers in a consistent manner across the two images. In other words, any image pixel in either of the two stereographic images is uniquely associated with a label Ln indicating its depth layer. In the following description, a depth layer is defined as a bounded area of 3D space in a scene, located within a depth range and positioned and ranked with respect to the other depth layers from a viewer position. Preferably, such a depth layer Ln corresponds to a semantically meaningful entity of the scene. For example, in a movie production workflow, the scene background, each character and each object is a potential scene layer. Once segmented out from a scene, the layers can be combined with layers from other scenes to form new virtual scenes, following a process known as compositing.


The warping of the two images designates the warping of the parts of the images respectively corresponding to the modified related depth layers, while the generation of the edited stereograph comprising the two warped images designates the global generation of the resulting pair of stereo images including all associated depth layers after editing (i.e. after having applied the warping).


The disclosed method relies on a new and inventive approach of warping a stereograph, which comprises applying a geometric warp specified by positional constraints on given depth layers of a couple of stereographic images. Namely, the warping is carried out layer by layer (successively and/or in parallel) for the specified depth layer(s), which can advantageously provide a powerful and reliable tool for efficient editing of stereographs. The method then allows generating a couple of warped stereographic images that is geometrically consistent in 3D. This couple of warped stereographic images corresponds to the edited stereograph.


Focusing on depth layers for consistent image warping between two viewpoints can provide a particularly cost-efficient, flexible and user-friendly approach to stereographic editing. Indeed, processing depth layers can amount to repeated 2D processing while potentially offering a high-quality, consistent stereographic output, instead of complex 3D processing requiring demanding memory and computing resources.


As regards the non-specified depth layers, i.e. those that correspond to none of the positional constraint parameters, in particular implementations they are left unchanged (no warping). This can prove particularly relevant notably when objects are located in given depth layers (one depth layer per object), since the warping can then be limited to the depth layers in which objects are subject to modifications (e.g. position, content, size, shape). This can also be suited notably when all depth layers concerned by an editing operation are associated with at least some of the positional constraint parameters, including when some objects extend over two or more of the depth layers. Indeed, the non-specified depth layers can then advantageously be considered as not requiring any specific warping operation.


An “object” can be defined as a consistent logical entity of the 3D scene (e.g. a piece of furniture, a person, an animal, a tree, a floor, a wall) represented at least partly on the stereographs. More details are given above about objects in the rendering of synthetic scenes.


In alternative implementations, non-specified depth layers are also subjected to a warping transformation of an object based on the specified depth layer(s), for example by interpolating deformations of the object in a current non-specified depth layer from the two or more closest surrounding specified depth layers where the same object is subject to modifications (“surrounding” meaning farther and closer to the user than the current non-specified depth layer), and/or by extrapolating deformations of the object in a current non-specified layer from two or more specified depth layers that are all farther or all closer to the user. Those implementations are relevant when objects are distributed over three or more depth layers.
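
As a hedged illustration of the interpolation variant (the disclosure does not prescribe a formula), the per-constraint displacements for a non-specified layer could for instance be blended linearly, by depth, from the two closest surrounding specified layers; all names below are illustrative.

```python
# Illustrative sketch: derive constraint displacements for a non-specified
# layer by linear interpolation between the nearer and farther specified
# layers surrounding it, weighted by depth.
import numpy as np

def interpolate_displacement(depth, near_depth, near_disp, far_depth, far_disp):
    """Blend per-constraint-point 2D displacements for a layer at `depth`
    lying between a nearer and a farther specified layer."""
    t = (depth - near_depth) / (far_depth - near_depth)
    return (1.0 - t) * np.asarray(near_disp) + t * np.asarray(far_disp)
```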


In still another variant, it is first determined whether any object extends over more than one depth layer, and if so, whether it goes through one or more non-specified depth layer(s). If so, the latter is/are processed based on the warping in the specified depth layers, as mentioned above.


Advantageously, the 3D geometrical consistency includes maintaining a proper visual parallax between the right eye and left eye images.


The edited virtual 3D scene can be considered alone (VR) or overlaid on a real 3D scene (AR).


In one embodiment, the at least one initial stereograph is of a panoramic type.


Such a panoramic stereograph comprises a set of two stereographic panoramas.


In one embodiment, the method comprises a prior step of determining for each of the 2D views a corresponding matrix of projection of the 3D scene in said view.


Such a matrix of projection allows determining the projection into the 2D view of any point in 3D space. Conversely, given some point on the image of a 2D view, the projection matrix allows determining the viewing ray of this view, i.e. the line of all points in 3D space that project onto this view. Thus, according to this embodiment, calibration data corresponding to each of the views are determined autonomously when implementing the method and do not need to be inputted prior to its implementation. Alternatively, calibration data are provided by an external source, for example the manufacturer of a stereo rig.


In one embodiment, warping comprises computing respective warped locations of pixels x of each of said at least one specified depth layer of at least one specified image, by solving the system of equations formed by the initial set of positional constraints, in the least squares sense:

$$M_x = \underset{M}{\operatorname{Arg\,min}} \; \sum_{i=1}^{m} \frac{1}{\left\lVert x - s_i \right\rVert^2}\, \left\lVert M(s_i) - t_i \right\rVert^2$$

wherein m is a number of positional constraints in said each of said at least one specified depth layer,

    • s1, s2, . . . , sm and t1, t2, . . . , tm are respectively the bidimensional vertex locations of said at least one source position and target position of said at least one constraint point, and
    • Mx is an optimal transformation to move any of said pixels x from a source position to a target position in said each of said at least one specified depth layer.

A method according to this embodiment allows optimizing, or tending to optimize, a warping of the specified images in a given one of the depth layers, when determining the 3D locations of the source point and target point.


In one embodiment, warping implements a bounded biharmonic weights warping model defined as a function of a set of affine transforms, in which one affine transform is attached to each positional constraint.


In one embodiment, the method comprises reconstructing at least one of the warped images by implementing an inpainting method.


When parts of an edited tri-dimensional object have shrunk in the editing process, background areas covered by this object in the original left or right stereographic views become uncovered. The implementation of a reconstruction step allows filling these unknown areas.


In one embodiment, generating the edited stereograph comprises rendering its depth layers from the two warped images, by executing said rendering consecutively from the furthest depth layer to the closest depth layer.


Thus, pixels rendered in inner layers will overwrite pixels rendered from outer layers.


In one embodiment, the method comprises a prior step of capturing two calibrated stereographic images with respectively two cameras, each having an optical center, the at least two depth layers being shaped as concentric cylinders around a theoretical sphere containing both optical centers.


The method comprises generating or rendering the edited stereograph.


As seen above, in particular implementations, the warping leaves each of the two images unchanged in all the non-specified depth layers.


In particular modes, the method comprises obtaining the positional constraint parameters by at least one of receiving the positional constraint parameters from a user, deriving the positional constraint parameters from physics and deriving the positional constraint parameters from semantic relevance.


A particular embodiment of the disclosure pertains to an apparatus for consistently editing stereographs in warping at least part of a virtual tridimensional scene, said apparatus comprising at least one input adapted to receive:

    • at least one initial stereograph comprising two stereographic images of the tridimensional scene depicted from two corresponding bidimensional views, the two images being segmented into a discrete set of at least two depth layers in a consistent manner across the two images,
    • at least one initial set of positional constraint parameters to be applied to at least one specified depth layer within the two images, said positional constraint parameters being associated with a source position and a target position of at least one constraint point in said at least one specified depth layer for each of the two images, said positional constraint parameters determining a warping transformation to be applied to given points within each of the at least one specified depth layer in each of the two images.


Each of the source positions in one of the two images corresponds to one of the source positions in the other of the two images and is associated with a same semantic source point in the tridimensional scene. Likewise, each of the target positions in one of the two images corresponds to one of the target positions in the other of the two images and is associated with a same semantic target point in the tridimensional scene.


The apparatus further comprises at least one processor configured for:

    • warping each of the two images by applying to each of the at least one specified depth layer, said at least one initial set of positional constraint parameters corresponding to said each of the at least one specified depth layer,
    • generating an edited stereograph comprising the two warped images for said at least one specified depth layer.


One skilled in the art will understand that the advantages mentioned in relation with the method described here above also apply to an apparatus that comprises a processor configured to implement such a method. Since the purpose of the above-mentioned method is to edit a stereograph, without necessarily displaying it, such a method may be implemented on any apparatus comprising a processor configured for processing said method.


In one embodiment, the apparatus comprises means for warping each of the two images by applying said at least one initial set of positional constraint parameters to the at least one specified depth layer, and means for generating an edited stereograph comprising the two warped images for said at least one specified depth layer.


In one embodiment, the apparatus comprises at least one external unit adapted for outputting the edited stereograph.


In one embodiment, the at least one processor is configured for carrying out the above-mentioned method.


In one embodiment, the apparatus comprises a Human/Machine interface configured for inputting the at least one initial set of positional constraint parameters.


In one embodiment, the apparatus comprises a stereoscopic displaying device for displaying the edited stereograph, the apparatus being chosen among a camera, a mobile phone, a tablet, a television, a computer monitor, a games console and a virtual-reality box.


A particular embodiment of the disclosure pertains to a computer program product downloadable from a communication network and/or recorded on a medium readable by a computer and/or executable by a processor, comprising program code instructions for performing the above-mentioned method when executed.


Advantageously, the apparatus comprises means for implementing the steps performed in the above-mentioned method, in any of its various embodiments.


While not explicitly described, the present embodiments may be employed in any combination or sub-combination.





5. BRIEF DESCRIPTION OF THE DRAWINGS

The present disclosure can be better understood with reference to the following description and drawings, given by way of example and not limiting the scope of protection, and in which:



FIG. 1 is a schematic representation illustrating the geometric warping of a 3D scene and of the corresponding 2D images,



FIG. 2 is a schematic representation illustrating a camera projection for a specified view,



FIG. 3 is a schematic representation illustrating the segmentation of a panoramic stereograph into concentric depth layers,



FIG. 4 illustrates a pair of rectangular images depicting two stereographic panoramas,



FIG. 5 is a schematic representation illustrating the projections of a 3D point into left and right stereoscopic views of a 3D scene,



FIG. 6 is a schematic representation illustrating an image before and after being processed by an editing method according to one embodiment of the invention,



FIG. 7 is a flow chart illustrating the successive steps implemented when performing a method according to one embodiment of the invention, and



FIG. 8 is a block diagram of an apparatus for editing a stereograph according to one embodiment of the invention.





The components in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the disclosure.


6. DETAILED DESCRIPTION

General concepts and specific details of certain embodiments of the disclosure are set forth in the following description and in FIGS. 1 to 8 to provide a thorough understanding of such embodiments. Nevertheless, the present disclosure may have additional embodiments, or may be practiced without several of the details described in the following description.


6.1 General Concepts and Prerequisites


We assume that the representation of a 3D scene is given by a stereograph (A) comprising two views (VL, VR), each referring respectively to the views of this 3D scene, which could be captured by the left and right eyes of a user, or by two adjacent cameras, as illustrated by FIG. 1.


As illustrated by FIG. 2, we further assume that these two stereographic views (VL, VR) are calibrated, meaning that, for each view Vα (α=L or R), the projection matrix Cα for the view Vα is known. Cα allows computing the projection pα into view Vα of any point P in 3D space, as pα=Cα*P. Conversely, given some point mα on the image of view Vα, Cα allows computing the viewing ray from mα for view Vα, i.e. the line of all points M in 3D space that project onto mα in view Vα.
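
The following short Python sketch (with illustrative names) makes these two uses of a 3x4 projection matrix concrete: forward projection of a 3D point P, and back-projection of an image point mα to its viewing ray.

```python
# Sketch: forward projection of a 3D point through C, and back-projection
# of a pixel to the viewing ray (optical center + direction).
import numpy as np

def project(C, P):
    """C: (3, 4) projection matrix; P: (3,) world point -> (2,) pixel."""
    p = C @ np.append(P, 1.0)           # homogeneous projection
    return p[:2] / p[2]

def viewing_ray(C, m):
    """Return a point and direction of the ray of 3D points projecting to m."""
    M, p4 = C[:, :3], C[:, 3]
    center = -np.linalg.solve(M, p4)    # camera optical center
    direction = np.linalg.solve(M, np.append(m, 1.0))
    return center, direction / np.linalg.norm(direction)
```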


There are several ways, known from the state of the art, to compute the camera projection matrices for the views in case of image capture from the real world, a process known as calibration.


A first approach to camera calibration is to place an object in the scene with easily detectable points of interest, such as the corners of the squares in a checkerboard pattern, and with known 3D geometry. The detectability of the points of interest in the calibration object makes it possible to robustly and accurately find their 2D projections on each camera view. From these correspondences and the accurate knowledge of the 3D relative positions of the points of interest, the parameters of the intrinsic and extrinsic camera models can be computed by a data fitting procedure. An example of this family of methods is described in the article “A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-The-Shelf TV Cameras and Lenses,” by R. Tsai, IEEE Journal on Robotics and Automation, Vols. RA-3, no. 4, pp. 323-344, 1987.


A second approach to camera calibration takes as input a set of 2D point correspondences between pairs of views, i.e., pairs of points (piL,piR) such that piL in view VL and piR in view VR are the projections of the same 3D scene point Pi, as illustrated by FIG. 5. It is well known from the literature, as illustrated by the article “A Robust Technique for Matching Two Uncalibrated Images Through the Recovery of the Unknown Epipolar Geometry”, by Z. Zhang, R. Deriche, O. Faugeras and Q.-T. Luong, Artificial Intelligence, vol. 78, no. 1-2, pp. 87-119, 1995, that the fundamental matrix for the pair of views can be computed if at least 8 (eight) such matches are known. Given the projection mL of some 3D scene point M in view VL, the fundamental matrix defines the epipolar line for mR in view VR where the projection of M in this view must lie. Assuming the intrinsic camera parameters are known, either from the camera specifications or from a dedicated calibration procedure, the camera projection matrices for the considered pair of cameras can be computed from an SVD (Singular Value Decomposition) of the fundamental matrix, as explained in section 9 of the book “Multiple View Geometry in Computer Vision” by R. Hartley and A. Zisserman, Cambridge University Press Ed., 2003.
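
As a hedged sketch of this second approach, OpenCV offers the building blocks: the fundamental matrix is estimated from at least eight matches, and, given the intrinsic matrix K, the relative pose is recovered through the SVD-based decomposition of the essential matrix (performed internally by recoverPose). Variable names are illustrative.

```python
# Sketch: fundamental matrix from >= 8 matched points, then projection
# matrices for the pair, assuming both cameras share intrinsics K.
import numpy as np
import cv2

def calibrate_pair(pts_left, pts_right, K):
    """pts_left, pts_right: (n, 2) float arrays of matched pixels, n >= 8."""
    F, _ = cv2.findFundamentalMat(pts_left, pts_right, cv2.FM_8POINT)
    E = K.T @ F @ K                      # essential matrix from F and K
    _, R, t, _ = cv2.recoverPose(E, pts_left, pts_right, K)
    P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_right = K @ np.hstack([R, t])
    return P_left, P_right
```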


In case of computer-generated images, the generation of consistent left and right views is further well known to a person having ordinary skill in the art. A virtual camera with desired optical characteristics is placed at a specified location in the virtual scene. Points in the virtual scene within the viewing frustum of the camera are projected through a specified camera projection center to the specified camera retinal plane. Optionally, the retinal plane image can further be deformed to account for distortions of the camera lens and non-square shape of image pixels.


6.2 Input Data


A method for editing a stereograph in a consistent manner takes as input a stereograph (A) comprising two calibrated stereographic images (aL, aR) and an initial set (Sini) of positional constraint parameters determining a warping transformation to be applied to a given point within each of these images (aL, aR).


In the following description of one embodiment of this method, the stereograph (A) is of a panoramic type, and comprises therefore a set of two stereographic panoramas (aL, aR). One will understand that in other embodiments, this stereograph (A) is of a different type without departing from the scope of the disclosure.


6.2.1 Panoramic Stereograph (A)


We assume that each stereo panorama is segmented (S1) into a set of N depth layers Ln (N≥2), which samples the depth of the 3D scene from the perspective of the concerned view. More precisely, a depth layer Ln is defined as a bounded area of 3D space in a scene, located within a depth range and positioned and ranked with respect to the other depth layers from a viewer position. Preferably, such a depth layer Ln corresponds to a semantically meaningful entity of the scene. In a movie production workflow, the scene background, each character and each object is a potential scene layer. Once segmented out from a scene, the layers can be combined with layers from other scenes to form new virtual scenes, a process known as compositing.


If the 3D scene being shown is a synthetic scene, as in computer-generated imagery, the 3D models of the elements making up the scene directly provide the scene layers. If the scene is captured from the real world using a stereo camera rig, the stereo images can be processed using computer vision algorithms to find point correspondences across views, from which stereo disparity and eventually depth estimates of each pixel in the stereo views can be obtained. In simple situations where the constituent elements of the scene are well separated in depth, it is straightforward to obtain the scene layers using depth thresholding. For more complex scenes, depth thresholding can be combined with texture-based object segmentation or rotoscoping, possibly involving human intervention, in order to assign depth layer labels to all pixels in the stereo views of the scene.
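
For the simple well-separated case, layer assignment by depth thresholding can be sketched in a few lines, assuming a per-pixel depth map has already been estimated from stereo disparity (names are illustrative):

```python
# Minimal sketch of layer assignment by depth thresholding.
import numpy as np

def segment_into_layers(depth_map, thresholds):
    """depth_map: (H, W) depths; thresholds: sorted depth boundaries
    (length N-1). Returns an (H, W) integer label map with values
    0..N-1, from the closest layer to the furthest."""
    return np.digitize(depth_map, thresholds)
```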


In the context of a panoramic stereograph (A), and as illustrated by FIG. 3, the depth layers Ln can be represented as concentric cylinders around a common sphere that contains both camera centers Ca and Cb for the stereo view. The depth layers Ln are partially filled cylinders with some holes, nested one inside another from the closest depth layer Lnc to the furthest depth layer Lnf. When all the layers Ln are concatenated and projected onto either of the cameras Ca or Cb, we get a completely filled panoramic image aα without holes. FIG. 4 illustrates a pair of rectangular images (aL, aR) depicting the stereographic panoramas captured by each of the camera centers Ca and Cb, in which the horizontal axis represents the rotation angle of the cameras, ranging from 0° to 360°, and the vertical axis represents the distance between each of these cameras and the considered object.


Due to the parallax between the two stereographic views (VL, VR), each depth layer Ln has some information that is occluded in the current view, but which can be obtained from the other view.


6.2.2 Initial Set (Sini) of Positional Constraint Parameters

Input data also comprise an initial set Sini of positional constraint parameters (aα, Ln, xs, ys, xt, yt) to be applied to at least one specified depth layer (Ln) within the two images (aL, aR), wherein (xs, ys) and (xt, yt) are respectively the source pixel coordinates and the target pixel coordinates of at least one constraint point in said specified depth layer (Ln) within each image aα (α=L or R), the positional constraint parameters (aα, Ln, xs, ys, xt, yt) determining a warping transformation to be applied to a given point within the specified depth layer (Ln) in each of the two images (aL, aR).
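
A minimal, purely illustrative container for one such positional constraint could mirror the tuple (aα, Ln, xs, ys, xt, yt) as follows; the class and field names are not part of the disclosure.

```python
# Illustrative container for one positional constraint.
from dataclasses import dataclass

@dataclass
class PositionalConstraint:
    view: str        # 'L' or 'R', identifying the image a_alpha
    layer: int       # index n of the depth layer Ln
    xs: float        # source pixel coordinates
    ys: float
    xt: float        # target pixel coordinates
    yt: float
```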


In practice, these constraint parameters (aα, Ln, xs, ys, xt, yt) can be specified manually, by simple point clicking operations on the image. Alternatively, they can be obtained by a computer algorithm that detects feature points in the image and applies contextually meaningful geometric transformations to these points.


For example, deformations of an object extending over several depth layers are established by manual inputs of the constraint parameters in the furthest and the closest concerned depth layers only, and the related constraint parameters in the intermediary depth layers (i.e. between the furthest and the closest concerned depth layers) are derived by interpolation.


In another example, a given object must be magnified (e.g. an operating instrument or a small screen present in the 3D scene) by a given percentage, and the constraint parameters are automatically computed accordingly in all concerned depth layers.


In still another example, the constraint parameters are automatically derived from physics, e.g. from the effects of a wind blowing on objects in a scene, or of waves moving an anchored vessel on the sea.


The method then generates new panoramic images (aL, aR) for both the left and the right views (VL, VR), by warping each of the views in a geometrically consistent manner such that the projective transformations between the views are respected.


6.3 Method for Consistently Editing a Stereograph According to One Particular Embodiment


As illustrated by FIG. 7, the method for editing a stereograph A according to one particular embodiment comprises at least 2 (two) steps:

    • warping (S2) each of the two images (aL, aR) by applying said at least one initial set (Sini) of positional constraint parameters (aα, Ln, xs, ys, xt, yt) to the at least one specified depth layer Ln,
    • generating (S3) an edited stereograph comprising the two warped images for said at least one specified depth layer Ln.

In the following, each of the steps implemented by the method is described in greater detail.


In a pre-processing step, each stereo panorama (aL, aR) is first segmented into a set of N depth layers Ln, which samples the depth of the 3D scene from the perspective of the stereographic views (VL, VR). In computer-generated imagery, the depth layers Ln can alternatively be retrieved directly from the 3D models of the objects making up the synthetic scene.


6.3.1 Geometric Warping Transformation (S2) of Each Layer Ln


A geometric warping transformation is then applied to each concerned depth layer Ln.


In this matter, we assume that for m different warping constraints (corresponding to sources Pi and targets Qi, with i=1, 2 . . . m), 3D positional constraint parameters (aα, Ln, xs, ys, xt, yt) are initially specified by the user in at least one depth layer Ln of each of the two specified views (VL and VR). Such constraints are representative of projections (piα, qiα) of the 3D source points Pi and target points Qi on the right and left views. In variant embodiments, those constraint parameters are derived automatically from physics or from semantic relevance.


Considering one given view Vα, let there be a set of m positional constraints, given by 2D vertex locations in the initial and warped images as s1, s2, . . . , sm and t1, t2, . . . , tm respectively. Each point pair constraint (si, ti) is associated with the depth layer located at si in the view. A corresponding set of constraints is defined by the user in the other stereo view, in such a way that for each si and ti in one view, the corresponding si and ti in the other is associated with the same semantic point in the 3D scene.


For each layer Ln affected by the warping, an optimal warping transformation Mxα is computed in the specified view Vα, based on all constraint point pairs falling into this depth layer. The same operation is performed in the other view with the corresponding constraint point pairs.


The computation of Mxα may take various forms, depending on the choice of the optimization criterion and the model for the transform. Advantageously, one of the Moving Least Squares energies and associated constrained affine models for Mxα proposed in the article “Image deformation using moving least squares,” by S. Schaefer, T. McPhail and J. Warren, in SIGGRAPH, 2006, is used to compute Mxα. For instance, Mxα is chosen to be an affine transform consisting of a linear transformation Axα and a translation Txα:






$$M_x^\alpha(x) = A_x^\alpha x + T_x^\alpha,$$


and is defined in the concerned layer Ln, for every point x different from a piα, as the solution to the following optimization problem:







$$M_x^\alpha = \underset{M}{\operatorname{Arg\,min}} \; \sum_{i=1}^{m} \frac{1}{\left\lVert x - p_i^\alpha \right\rVert^2}\, \left\lVert M(p_i^\alpha) - q_i^\alpha \right\rVert^2$$

Mxα(piα) is defined to be equal to qiα. The minimization of the right-hand term in the above equation is a quadratic programming problem whose solution can be obtained using techniques well known from the state of the art (such a solution possibly being based on an iterative process associated with a predefined convergence threshold).
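
The following Python sketch shows one direct way to evaluate Mxα at a pixel x: since the criterion is a weighted least-squares fit of an affine map to the constraint pairs (piα, qiα) with weights 1/∥x − piα∥², it admits a closed-form solution via a standard least-squares routine rather than an iterative scheme. Names are illustrative, and a small epsilon stands in for the special-case definition Mxα(piα)=qiα at the constraint points.

```python
# Sketch of the moving-least-squares affine warp defined above: for each
# pixel x, fit the affine map minimizing the weighted residuals over the
# layer's constraint pairs (p_i, q_i), with weights 1/|x - p_i|^2.
import numpy as np

def mls_affine_warp(x, sources, targets, eps=1e-8):
    """x: (2,) pixel; sources, targets: (m, 2) constraint points (m >= 3)
    in one depth layer. Returns the warped location M_x(x)."""
    sources = np.asarray(sources, float)
    targets = np.asarray(targets, float)
    w = 1.0 / (np.sum((sources - x) ** 2, axis=1) + eps)   # 1/|x - p_i|^2
    sw = np.sqrt(w)[:, None]
    A = sw * np.hstack([sources, np.ones((len(sources), 1))])  # rows [p_i 1]
    b = sw * targets
    # Minimizes sum_i w_i |[p_i 1] theta - q_i|^2, i.e. the MLS energy.
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)          # (3, 2)
    return np.append(x, 1.0) @ theta                       # M_x(x)
```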


The disclosure is not limited to the above choice of warping model and optimality criterion. For example, the bounded biharmonic weights warping model proposed in the article “Bounded Biharmonic Weights for Real-Time Deformation,” A. Jacobson, I. Baran, J. Popovic and O. Sorkine, in SIGGRAPH, 2011, could be used in place of the moving least squares algorithm. In this approach, an affine transform over the whole image is associated with each user-specified positional constraint, and the image warp is computed as a linear combination of these affine transforms. The optimal warping transformation is defined as the one for which the weights of the linear combination are as constant as possible over the image, subject to several constraints. In particular, the warp at the location of each positional constraint is forced to coincide with the affine transform associated with the constraint. The resulting optimization problem is discretized using finite element modelling and solved using sparse quadratic programming.


The biharmonic warping model needs an affine transform to be specified at the location of each positional constraint. A first option is to restrict this affine transform to the specified translation from the source to the target constraint point. Alternatively, the affine transform could be computed by least-squares fitting an affine transform for the considered location, using all other available positional constraints as constraints for the fitting.
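
Once the bounded biharmonic weights have been computed (the finite-element and quadratic-programming part, which the sketch below deliberately leaves out), the blending step of this variant reduces to a pointwise linear combination of the per-constraint affine transforms. A minimal illustration, with assumed precomputed weights and illustrative names:

```python
# Simplified sketch of the blending step only: the warp at x is the
# weighted linear combination of the per-constraint affine transforms.
import numpy as np

def bbw_blend(x, weights_at_x, affines):
    """weights_at_x: (m,) precomputed weights summing to 1 at pixel x;
    affines: list of (A, T) pairs with A (2, 2) and T (2,).
    Returns the warped location of x."""
    out = np.zeros(2)
    for w, (A, T) in zip(weights_at_x, affines):
        out += w * (A @ x + T)
    return out
```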


6.3.2 Generation (S3) of Two Warped Panoramic Images


Since the camera location in the stereographic pair is completely calibrated with respect to the cylindrical layer representation of 3D geometry, each layer Ln (n=1 . . . N) can be projected uniquely to generate a warped panoramic image around the camera centers (Ca, Cb). In one embodiment, such a generation is conducted according to the painter method. This method consists in rendering the concentric cylindrical layers Ln onto each of the camera views, starting from the outermost layer Lnf and rendering the inner layers consecutively until the closest depth layer Lnc. Thus, pixels rendered in inner layers will overwrite pixels rendered from outer layers.
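
A minimal sketch of this painter-style compositing, assuming each layer has already been projected to the view as an image plus a validity mask (names are illustrative):

```python
# Sketch: stack the projected layers far-to-near so nearer layers
# overwrite farther ones wherever they have valid (non-hole) pixels.
import numpy as np

def composite_layers(layer_images, layer_masks):
    """Both lists ordered from the furthest layer to the closest;
    images are (H, W, 3), masks are (H, W) booleans marking filled pixels."""
    out = np.zeros_like(layer_images[0])
    for img, mask in zip(layer_images, layer_masks):
        out[mask] = img[mask]
    return out
```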


6.3.3 Image Reconstruction (S4)


When parts of the edited object have shrunk in the editing process, background areas covered by this object in the original left or right stereographic views (VL, VR) become uncovered. The texture in these areas is unknown and needs to be reconstructed.



FIG. 6 depicts an image aα of a toy scene made up of two depth layers, a background consisting of a half-rectangle and a half-ellipse, and a triangle object in the foreground. The image aα could be either taken from the left view VL or the right view VR of a stereograph A. The left-hand part of FIG. 6 shows the original image aα before editing. On the right-hand part of FIG. 6, the same image aα has been edited using the previously described steps of geometric warping and stereo image generation. The editing operation in this example has resulted in the shrinkage of the foreground triangle object. As a result, an area of the background that was occluded by the foreground triangle now becomes visible. This area is represented with a vertical line pattern on the right-hand part of FIG. 6. The texture contents of this disoccluded area cannot be copied from the original image aα and needs to be reconstructed.


So-called “image inpainting” techniques known from the state of the art are used for the reconstruction. Such a technique is described in the technical report MSR-TR-2004-04 (by Microsoft Research) called “PatchWorks: Example-Based Region Tiling for Image Editing”, by P. Pérez, M. Gangnet and A. Blake. These techniques fill the texture of a missing area in an image (the “hole”) starting from the boundary pixels of the hole and gradually filling the hole texture towards its center.


To this purpose, the image is split into possibly overlapping small rectangular patches of constant size. At a given stage of the algorithm, a patch on the boundary of the region of the original hole yet to be filled is considered. This patch will hereafter be referred to as “patch to be filled”. It should hold both known pixels, on the outside of the boundary, and not yet reconstructed pixels, on the inside. Within a predefined subset of all known patches in the original image, hereafter referred to as “example patches”, the patch most resembling the known area of the patch to be filled is selected, and its pixels are copied onto the unknown image area of the patch to be filled. This operation is repeated until all patches in the hole have been filled. The reconstruction is then complete.


The similarity between the example patches and the patch to be filled is determined on the basis of a texture similarity metric, which is evaluated only over the part of the patch to be filled for which texture is known. The selected example patch maximizes this computed similarity. This strategy minimizes the amount of texture discontinuities inside the reconstructed hole and at its boundaries, and eventually provides a reconstruction that is visually plausible, although of course generally different from the ground-truth texture that would have been observed on the actual background.
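
As a hedged stand-in for the exemplar-based technique described above, the sketch below uses OpenCV's fast-marching inpainting, which is diffusion-based rather than patch-based but illustrates where the reconstruction step fits in the pipeline; names are illustrative.

```python
# Stand-in sketch: fill the disoccluded areas marked by a hole mask using
# OpenCV's fast-marching (Telea) inpainting. Note: diffusion-based, not
# the exemplar/patch-tiling method described above.
import cv2

def fill_disocclusions(image_bgr, hole_mask):
    """image_bgr: (H, W, 3) uint8; hole_mask: (H, W) uint8, 255 in holes."""
    return cv2.inpaint(image_bgr, hole_mask, 3, cv2.INPAINT_TELEA)
```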


The rendering (S5) can thereby be carried out.


6.4 Description of an Apparatus for Consistently Editing a Stereograph



FIG. 8 is a schematic block diagram illustrating an example of an apparatus 1 for editing a stereograph, according to one embodiment of the present disclosure. Such an apparatus 1 includes a processor 2, a storage unit 3 and an interface unit 4, which are connected by a bus 5. Of course, constituent elements of the computer apparatus 1 may be connected by a connection other than a bus connection using the bus 5.


The processor 2 controls operations of the apparatus 1. The storage unit 3 stores at least one program to be executed by the processor 2, and various data, including stereographic and positional constraint data, parameters used by computations performed by the processor 2, intermediate data of computations performed by the processor 2, and so on. The processor 2 may be formed by any known and suitable hardware, or software, or by a combination of hardware and software. For example, the processor 2 may be formed by dedicated hardware such as a processing circuit, or by a programmable processing unit such as a CPU (Central Processing Unit) that executes a program stored in a memory thereof.


The storage unit 3 may be formed by any suitable storage or means capable of storing the program, data, or the like in a computer-readable manner. Examples of the storage unit 3 include non-transitory computer-readable storage media such as semiconductor memory devices, and magnetic, optical, or magneto-optical recording media loaded into a read and write unit. The program causes the processor 2 to perform a process for editing the stereograph, according to an embodiment of the present disclosure as described above with reference to FIG. 7.


The interface unit 4 provides an interface between the apparatus 1 and an external apparatus. The interface unit 4 may be in communication with the external apparatus via cable or wireless communication. In this embodiment, the external apparatus may be a stereographic-capturing device. In this case, stereographic images can be inputted from the capturing device to the apparatus 1 through the interface unit 4, and then stored in the storage unit 3. Alternatively, the external apparatus is a content generation device adapted to generate virtual stereographs.


The apparatus 1 and the stereographic-capturing device may communicate with each other via cable or wireless communication.


The apparatus 1 may comprise a displaying device or be integrated into any display device for displaying the edited stereograph.


The apparatus 1 may also comprise a Human/Machine Interface 6 configured for allowing a user to input the at least one initial set of positional constraint parameters.


Although only one processor 2 is shown on FIG. 8, one will understand that such a processor may comprise different modules and units embodying the functions carried out by the apparatus 1 according to embodiments of the present disclosure, such as:

    • A module for warping (S2) each of the two images (aL, aR) by applying said at least one initial set (Sini) of positional constraint parameters (aα, Ln, xs, ys, xt, yt) to the specified depth layer Ln,
    • A module for generating (S3) one edited stereograph comprising the two warped images.


These modules may also be embodied in several processors 2 communicating and co-operating with each other.


As will be appreciated by one skilled in the art, aspects of the present principles can be embodied as a system, method or computer readable medium. Accordingly, aspects of the present principles can take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, microcode, and so forth), or an embodiment combining software and hardware aspects.


When the present principles are implemented by one or several hardware components, it can be noted that a hardware component comprises a processor that is an integrated circuit such as a central processing unit, and/or a microprocessor, and/or an Application-specific integrated circuit (ASIC), and/or an Application-specific instruction-set processor (ASIP), and/or a graphics processing unit (GPU), and/or a physics processing unit (PPU), and/or a digital signal processor (DSP), and/or an image processor, and/or a coprocessor, and/or a floating-point unit, and/or a network processor, and/or an audio processor, and/or a multi-core processor. Moreover, the hardware component can also comprise a baseband processor (comprising for example memory units, and firmware) and/or radio electronic circuits (that can comprise antennas), which receive or transmit radio signals. In one embodiment, the hardware component is compliant with one or more standards such as ISO/IEC 18092/ECMA-340, ISO/IEC 21481/ECMA-352, GSMA, StoLPaN, ETSI/SCP (Smart Card Platform), GlobalPlatform (i.e. a secure element). In a variant, the hardware component is a Radio-frequency identification (RFID) tag. In one embodiment, a hardware component comprises circuits that enable Bluetooth communications, and/or Wi-Fi communications, and/or Zigbee communications, and/or USB communications and/or Firewire communications and/or NFC (Near Field Communication) communications.


Furthermore, aspects of the present principles can take the form of a computer readable storage medium. Any combination of one or more computer readable storage medium(s) may be utilized.


Thus, for example, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable storage media and so executed by a computer or a processor, whether or not such computer or processor is explicitly shown.


Although the present disclosure has been described with reference to one or more examples, a skilled person will recognize that changes may be made in form and detail without departing from the scope of the disclosure and/or the appended claims.

Claims
  • 1. A method for consistently editing stereographs in warping at least part of a virtual tridimensional scene, the method comprising: obtaining at least one initial stereograph comprising two stereographic images of the tridimensional scene depicted from two corresponding bi-dimensional views, the two images being segmented into a discrete set of at least two depth layers in a consistent manner across the two images, obtaining at least one initial set of positional constraint parameters to be applied to at least one specified depth layer within the two images, said positional constraint parameters being associated with a source position and a target position of at least one constraint point in said at least one specified depth layer for each of the two images, said positional constraint parameters determining a warping transformation to be applied to given points within each of the at least one specified depth layer in each of the two images, each of said source positions in one of the two images corresponding to one of said source positions in the other of the two images and being associated with a same semantic source point in the tridimensional scene, and each of said target positions in one of the two images corresponding to one of said target positions in the other of the two images and being associated with a same semantic target point in the tridimensional scene, warping each of the two images by applying to each of the at least one specified depth layer, said at least one initial set of positional constraint parameters corresponding to said each of the at least one specified depth layer, generating an edited stereograph comprising the two warped images for said at least one specified depth layer.
  • 2. The method of claim 1, wherein the at least one initial stereograph is of a panoramic type.
  • 3. The method of claim 1, wherein it comprises a prior step of determining for each of the 2D views a corresponding matrix of projection of the 3D scene in said view.
  • 4. The method of claim 1, wherein said warping comprises computing respective warped locations of pixels x of each of said at least one specified depth layer of at least one specified image, by solving the system of equations formed by the initial set of positional constraints, in the least squares sense:
  • 5. The method of claim 1, wherein said warping implements a bounded biharmonic weights warping model defined as a function of a set of affine transforms, in which one affine transform is attached to each positional constraint.
  • 6. The method of claim 1, wherein it comprises reconstructing at least one of the warped images by implementing an inpainting method.
  • 7. The method of claim 1, wherein generating the edited stereograph comprises rendering its depth layers from the two warped images, by executing said rendering consecutively from the furthest depth layer to the closest depth layer.
  • 8. The method of claim 1, wherein it comprises a prior step of capturing two calibrated stereographic images with respectively two cameras having an optical center, the at least two depth layers being shaped as concentric cylinders around a theoretical sphere containing both optical centers.
  • 9. An apparatus for consistently editing stereographs in warping at least part of a virtual tridimensional scene, said apparatus comprising at least one input adapted to receive: at least one initial stereograph comprising two stereographic images of the tridimensional scene depicted from two corresponding bi-dimensional views, the two images being segmented into a discrete set of at least two depth layers in a consistent manner across the two images, at least one initial set of positional constraint parameters to be applied to at least one specified depth layer within the two images, said positional constraint parameters being associated with a source position and a target position of at least one constraint point in said at least one specified depth layer for each of the two images, said positional constraint parameters determining a warping transformation to be applied to given points within each of the at least one specified depth layer in each of the two images, each of said source positions in one of the two images corresponding to one of said source positions in the other of the two images and being associated with a same semantic source point in the tridimensional scene, and each of said target positions in one of the two images corresponding to one of said target positions in the other of the two images and being associated with a same semantic target point in the tridimensional scene, said apparatus further comprising at least one processor configured for: warping each of the two images by applying to each of the at least one specified depth layer, said at least one initial set of positional constraint parameters corresponding to said each of the at least one specified depth layer, generating an edited stereograph comprising the two warped images for said at least one specified depth layer.
  • 10. The apparatus of claim 9, wherein generating the edited stereograph comprises rendering its depth layers from the two warped images, by executing said rendering consecutively from the furthest depth layer to the closest depth layer.
  • 11. The apparatus of claim 9, wherein it comprises a Human/Machine interface configured for inputting the at least one initial set of positional constraint parameters.
  • 12. The apparatus of claim 9, comprising a stereoscopic displaying device for displaying the edited stereograph, the apparatus being chosen among a camera, a mobile phone, a tablet, a television, a computer monitor, a games console and a virtual-reality box.
  • 13. A non-transitory computer-readable carrier medium storing a computer program product which, when executed by a computer or a processor, causes the computer or the processor to consistently edit stereographs in warping at least part of a virtual tridimensional scene, by: obtaining at least one initial stereograph comprising two stereographic images of the tridimensional scene depicted from two corresponding bi-dimensional views, the two images being segmented into a discrete set of at least two depth layers in a consistent manner across the two images, obtaining at least one initial set of positional constraint parameters to be applied to at least one specified depth layer within the two images, said positional constraint parameters being associated with a source position and a target position of at least one constraint point in said at least one specified depth layer for each of the two images, said positional constraint parameters determining a warping transformation to be applied to given points within each of the at least one specified depth layer in each of the two images, each of said source positions in one of the two images corresponding to one of said source positions in the other of the two images and being associated with a same semantic source point in the tridimensional scene, and each of said target positions in one of the two images corresponding to one of said target positions in the other of the two images and being associated with a same semantic target point in the tridimensional scene, warping each of the two images by applying to each of the at least one specified depth layer, said at least one initial set of positional constraint parameters corresponding to said each of the at least one specified depth layer, and generating an edited stereograph comprising the two warped images for said at least one specified depth layer.
  • 14. The method of claim 1, wherein said warping leaves each of the two images unchanged in all the non-specified depth layers.
  • 15. The method of claim 1, wherein said method comprises obtaining said positional constraint parameters by at least one of receiving said positional constraint parameters from a user, deriving said positional constraint parameters from physics and deriving said positional constraint parameters from semantic relevance.
Priority Claims (1)
Number Date Country Kind
16306774.7 Dec 2016 EP regional