The present invention relates to a method for three-dimensional model morphing.
At present, morphing of a model based on real dynamic scenes, or even on images taken by cheap cameras, can be a difficult problem. Three-dimensional, which in the remainder of this document will be abbreviated by 3D, model artists may for instance spend a lot of time and effort to create highly detailed and life-like 3D content and 3D animations. However, this is not desirable, and not even feasible, in next-generation communication systems, where 3D visualizations of e.g. meeting participants have to be created on the fly.
It is therefore an object of embodiments of the present invention to present a method and an arrangement for image model morphing, which is able to generate high-quality 3D image models based on two-dimensional, hereafter abbreviated by 2D, video scenes from even lower-quality real-life captures, while at the same time providing a cheap, simple and automated solution.
According to embodiments of the present invention this object is achieved by a method for morphing a standard 3D model based on 2D image data input, said method comprising the steps of
In this way a classical detection based morphing is enhanced with optical flow morphing. This results in much more realistic models, which can still be realized in real time.
In an embodiment the optical flow between the 2D image data input and the morphed standard 3D model is determined based on a previous fine tuned morphed 3D standard model determined on a previous 2D image frame.
In a variant the optical flow determination between the 2D image data input and the morphed standard 3D model may comprise:
This allows for a high-quality and yet time efficient method.
In another embodiment the morphing model used in said initial morphing step is adapted based on the optical flow between the 2D image data input and the morphed standard 3D model. This will further increase the quality of the resulting model, and its correspondence with the input video object.
In another embodiment the detection model used in said initial morphing step is adapted as well, based on optical flow information determined between the 2D image frame and a previous 2D image frame.
This again contributes to a quicker and more realistic shaping/morphing of the 3D standard model in correspondence with the input 2D images.
In yet another variant the step of applying the optical flow comprises an energy minimization procedure.
This may even further enhance the quality of the resulting fine tuned morphed model.
The present invention relates as well to embodiments of an arrangement for performing this method, for image or video processing devices incorporating such an arrangement and to a computer program product comprising software adapted to perform the aforementioned or claimed method steps, when executed on a data-processing apparatus.
It is to be noticed that the term ‘coupled’, used in the claims, should not be interpreted as being limitative to direct connections only. Thus, the scope of the expression ‘a device A coupled to a device B’ should not be limited to devices or systems wherein an output of device A is directly connected to an input of device B. It means that there exists a path between an output of A and an input of B which may be a path including other devices or means.
It is to be noticed that the term ‘comprising’, used in the claims, should not be interpreted as being limitative to the means listed thereafter. Thus, the scope of the expression ‘a device comprising means A and B’ should not be limited to devices consisting only of components A and B. It means that with respect to the present invention, the only relevant components of the device are A and B.
As previously mentioned, during the whole of the text two-dimensional will be abbreviated by 2D, while three-dimensional will be abbreviated by 3D.
The above and other objects and features of the invention will become more apparent and the invention itself will be best understood by referring to the following description of an embodiment taken in conjunction with the accompanying drawings wherein:
It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative circuitry embodying the principles of the invention. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in computer readable medium and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
A first operation module 100 involves the morphing of an available, standard, 3D model which is selected or stored beforehand, e.g. in a memory. This standard 3D model is morphed in module 100 in accordance with the input 2D video frame at time T. Detailed embodiments for this morphing procedure will be described with reference to
Partly in parallel with the morphing step 100, the optical flow is determined from the 2D video frame at time T towards the morphed standard 3D model at time T. This takes place in module 200 which has as input the 2D video frame at time T, the morphed standard 3D model, as provided by module 100, and the output of the arrangement, determined in a previous time step. This previously determined output concerns the fine tuned morphed 3D standard model, determined at a previous time step, in the embodiment depicted in
The embodiment of
As previously mentioned, the resulting fine tuned morphed 3D standard model is used in a feedback loop for the determination of the optical flow.
The following more detailed embodiments will be described with reference to modeling of facial features. It is known to a person skilled in the art how to use the teachings of this document for application to morphing of other deformable objects in a video, such as e.g. animals etc.
This detection module 110 enables the detection of facial features in the video frame at time T, in accordance with a detection model, such as the AAM detection model. AAM models and AAM detection are well-known techniques in computer vision for detecting feature points on non-rigid objects. AAM detection can also be extended to 3D localization in case 3D video is input to the system, and AAM detection modules can detect feature points on objects other than faces as well. The object category on which detection is performed may relate to the training phase of the AAM model detection module, which training can have taken place offline or in an earlier training procedure. In the described embodiment, the AAM detection module 110 is thus trained to detect facial feature points, such as nose, mouth, eyes, eyebrows and cheeks, of a human face, being a non-rigid object, detected in the 2D video frame. The AAM detection model used within the AAM detection module 110 itself can thus be selected out of a set of models, or can be pre-programmed or trained offline to be generically applicable to all human faces.
In the case of e.g. morphing an animal model such as a cat, the training procedure will have been adapted to detect other important feature points with respect to the form/potential expressions of this cat. These techniques are also well known to a person skilled in the art.
In the example of human face modeling, the AAM detection block 110 will generally comprise detecting rough movements of the human face in the video frame, together with, or followed by, detecting some more detailed facial expressions related to human emotions. The relative or absolute positions of the entire face in the live video frame are denoted as “position” information on
The 3D standard model, input to module 120, is also generally available/selectable from a standard database. Such a standard database can comprise 3D standard models of a human face and of several animals, such as cat and dog species, etc. This standard 3D model will thus be translated, rotated and/or scaled in accordance with the position information from module 110.
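By way of purely illustrative example, such a position adaptation may be sketched as follows. The function name, the use of NumPy arrays and the order of the operations (rotate, then scale, then translate) are illustrative assumptions, not a prescription of the embodiment:

```python
import numpy as np

def position_adapt(vertices, rotation, translation, scale):
    """Apply the detected pose (rotation matrix, translation vector,
    uniform scale) to the (N, 3) vertex array of a standard 3D model."""
    return scale * (vertices @ rotation.T) + translation

# Illustrative use: rotate 90 degrees about the z-axis, scale by 2, shift by (0, 0, 1).
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
model = np.array([[1.0, 0.0, 0.0]])
adapted = position_adapt(model, Rz, np.array([0.0, 0.0, 1.0]), 2.0)
# vertex (1,0,0) -> rotated to (0,1,0) -> scaled to (0,2,0) -> shifted to (0,2,1)
```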
In the case of human face modeling, this position adaptation step will result in the 3D standard model reflecting the same pose as the face in the live video feed. In order to further adapt the 3D model to the correct facial expression of the 2D frame, the detected features from module 110 are applied to the partially adjusted 3D standard model in step 130. This morphing module 130 further uses a particular adaptation model, denoted “morphing model” in
The result is thus a morphed standard 3D model provided by module 130.
An example implementation of this model-based morphing may comprise repositioning the vertices of the standard 3D model relating to facial features, based on the facial feature detection results of the live video feed. The 3D content in between facial features can then be filled in by simple linear interpolation or, in case a more complex higher-order AAM morphing model including elasticity of the face is used, by higher-order interpolation or even other, more complex functions. How the vertices are displaced, and how the data in between is filled in, is all comprised in a morphing model.
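A minimal sketch of such a morphing model is given below. The inverse-distance weighting used to fill in the vertices between features is merely one illustrative stand-in for the interpolation mentioned above; the function name and data layout are assumptions for the sake of the example:

```python
import numpy as np

def morph(vertices, feature_idx, feature_targets, eps=1e-8):
    """Move the feature vertices of a 3D model to their detected target
    positions and spread the resulting displacements over the remaining
    vertices by inverse-distance weighting."""
    out = vertices.copy()
    disp = feature_targets - vertices[feature_idx]      # per-feature displacement
    for i in range(len(vertices)):
        if i in feature_idx:
            continue
        d = np.linalg.norm(vertices[feature_idx] - vertices[i], axis=1)
        w = 1.0 / (d + eps)                             # nearer features weigh more
        out[i] = vertices[i] + (w[:, None] * disp).sum(0) / w.sum()
    out[feature_idx] = feature_targets
    return out

# Two feature vertices both move up by 1; the in-between vertex follows them.
verts = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
morphed = morph(verts, [0, 1], np.array([[0.0, 1.0, 0.0], [2.0, 1.0, 0.0]]))
```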
It may be remarked that, despite the quality of the available (AAM) detection and morphing models, artificial-looking results may still be obtained, because the generically applicable detection model is only used to detect the location of the facial features in the live video feed, which are afterwards used to displace the facial features in the 3D position-adapted model based on their location in the video feed. Regions between facial features in this 3D standard model are then interpolated using an (AAM) morphing model. The latter has, however, no or only limited knowledge about how the displacement of each facial feature may affect neighboring facial regions. Some general information about facial expressions and their influence on facial regions, which may relate to elasticity, can be put into this morphing model, but this will still result in artificial-looking morphing results, simply because each person is different and not all facial expressions can be covered in one very generic model covering all human faces.
Similar considerations are valid for morphing other deformable objects such as animals detected in video based on 3D standard models.
To further improve the morphed standard 3D model, this possibly artificial-looking morphed model provided by module 100 can be augmented using flow-based morphing in step 300, as was earlier discussed with reference to
Before performing this flow-based morphing-step the optical flow itself has to be determined. Optical flow is defined here as the displacement or pattern of apparent motion of objects, surfaces and edges in a visual scene from one frame to the other or from a frame to a 2D or 3D model. In the embodiments described here the methods for determining optical flow aim to calculate the motion between two images taken at different instances in time, e.g. T and T-1, at pixel level, or, alternatively aim at calculating the displacement between a pixel at time T and a corresponding voxel in a 3D model at time T or vice versa.
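A toy illustration of optical flow determination at pixel level is given below, using exhaustive block matching. This is only a didactic sketch; the embodiments do not prescribe a particular flow algorithm, and practical implementations would typically use pyramidal Lucas-Kanade or variational methods instead:

```python
import numpy as np

def block_flow(prev, curr, patch=3, search=2):
    """Toy dense optical flow by exhaustive block matching: for each
    pixel, find the displacement (within +/- search) whose patch in
    `curr` best matches the patch around that pixel in `prev`."""
    h, w = prev.shape
    r = patch // 2
    flow = np.zeros((h, w, 2))
    for y in range(r, h - r):
        for x in range(r, w - r):
            ref = prev[y - r:y + r + 1, x - r:x + r + 1]
            best, best_d = np.inf, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if r <= yy < h - r and r <= xx < w - r:
                        cand = curr[yy - r:yy + r + 1, xx - r:xx + r + 1]
                        err = ((ref - cand) ** 2).sum()
                        if err < best:
                            best, best_d = err, (dx, dy)
            flow[y, x] = best_d
    return flow

# A single bright pixel moving one position to the right between frames
# T-1 and T yields a flow of (dx, dy) = (1, 0) at its old location.
prev = np.zeros((9, 9)); prev[4, 4] = 1.0
curr = np.zeros((9, 9)); curr[4, 5] = 1.0
flow = block_flow(prev, curr)
```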
As the optical flow has to be applied in module 300 to the morphed standard 3D model, based on the 2D video frame, the optical flow is to be calculated from this frame to this 3D model. In general, however, optical flow calculations are performed from one 2D frame to another 2D frame; therefore some extra steps are added to determine the optical flow from a 2D frame to a 3D morphed model. These extra steps may involve using a reference 3D input, being the previously determined fine tuned 3D model, e.g. determined at T-1. This information is thus provided from the output of the arrangement to module 200.
In order to determine the first optical flow between the 2D projection of the morphed standard 3D model and the 2D projection of the previous fine tuned morphed 3D standard model, these 2D projections are performed on the respective 3D models provided to module 200. To this purpose module 230 is adapted to perform a 2D rendering or projection on the morphed standard 3D model as provided by module 100, whereas module 240 is adapted to perform a similar 2D projection of the previous fine tuned morphed 3D standard model, in the embodiment of
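Such a 2D rendering or projection may be sketched, under the assumption of a simple pinhole model with focal parameter f, as follows. The per-vertex depth is returned alongside the 2D points because the later back-projection step (module 280) needs it; the function name and parameters are illustrative:

```python
import numpy as np

def project(vertices, f=1.0):
    """Pinhole projection of (N, 3) model vertices onto the image plane.
    Returns the 2D points and the per-vertex depth, which is retained
    so that a flow field can later be lifted back to 3D."""
    x, y, z = vertices.T
    return np.stack([f * x / z, f * y / z], axis=1), z

# A vertex at (2, 4, 2) projects to (1, 2) on the unit-focal image plane.
pts, depth = project(np.array([[2.0, 4.0, 2.0]]))
```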
In the embodiment depicted in
Therefore the delay element 210 of module 290 introduces the same delay as the one used in the feedback loop of the complete arrangement in
The optical flow between successive video frames T and T-1 is thus determined in module 220, and further used in module 260 so as to determine the optical flow from the 2D projection of the 3D fine tuned output at time T-1 to the 2D video frame at T. The projection itself was performed in module 240. The projection parameters are chosen so as to map to those used in the 2D camera with which the 2D video frames are recorded.
The determination of this second optical flow in step 260 takes into account that the standard model and the live video feed can sometimes represent different persons, which should nevertheless be aligned. In some embodiments module 260 can comprise two steps: a first face registration step, where the face shape of the live video feed at the previous frame T-1 is mapped to the face shape of the 2D projection of the previous fine tuned morphed 3D content (at time T-1). This registration step can again make use of an AAM detector. Next, the optical flow calculated on the live video feed at time T is aligned, e.g. by means of interpolation, to the face shape of the 2D projected 3D content at time T-1. These embodiments are shown in more detail in
The first optical flow, determined by module 250 between the 2D projections of the morphed standard model at time T and the previously fine tuned standard model at time T-1, is then to be combined with the second optical flow determined in module 260, to result in a third optical flow from the 2D video at time T to the 2D projection of the morphed standard model at time T. This is, in 2D, the optical flow information which is actually desired. As this combination involves subtracting an intermediate common element, being the 2D projection of the previously determined fine tuned model, this combination is shown by means of a "-" sign in module 270.
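The "-" combination of module 270 may be sketched as below. This per-pixel subtraction of two flow fields that share the projection of the previous fine tuned model as common reference is a simplification valid only for small displacements; a full implementation would re-index one field through the other. The function name is an illustrative assumption:

```python
import numpy as np

def combine_flows(flow_first, flow_second):
    """Approximate the third flow (video frame T -> projection of the
    morphed model at T) by subtracting, per pixel, the second flow
    (projection(T-1) -> video frame T) from the first flow
    (projection(T-1) -> projection of the morphed model at T)."""
    return flow_first - flow_second

# If the first flow is (3, 3) everywhere and the second is (1, 1),
# the remaining flow from the video frame to the model projection is (2, 2).
first = np.full((2, 2, 2), 3.0)
second = np.ones((2, 2, 2))
third = combine_flows(first, second)
```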
However, as this determined third optical flow still concerns an optical flow between two images in 2D, an additional step 280 is needed for the conversion of this optical flow from the 2D video frame at time T to the 3D content of the morphed standard 3D model at time T. This may involve back-projecting using the inverse of the process used during the 2D projection, thus with the same projection parameters. To this purpose the depth which resulted from the 2D projection is used for re-calculating vertices from 2D to 3D.
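Assuming the same pinhole projection with parameter f as in the rendering step, this back-projection may be sketched as:

```python
import numpy as np

def back_project(points2d, depth, f=1.0):
    """Lift 2D image points back to 3D vertices using the per-vertex
    depth retained from the earlier 2D projection, applying the inverse
    of the (assumed pinhole) projection with the same parameter f."""
    u, v = points2d.T
    return np.stack([u * depth / f, v * depth / f, depth], axis=1)

# Inverse of projecting (2, 4, 2) to the image point (1, 2) at depth 2.
restored = back_project(np.array([[1.0, 2.0]]), np.array([2.0]))
```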
It is to be remarked that, instead of using successive frames and successively determined fine tuned morphed 3D models at times T and T-1, the time gap between a new frame and a previous frame may be longer than the frame delay. In this case a corresponding previously determined output morphed model is to be used, such that the timing difference between an actual frame and a previous frame as used in module 200 corresponds to that between the new output to be determined and the previous output used for determining the optical flow. In an embodiment this can be realized by e.g. using similar delay elements D in the feedback loop of
Module 300 of
In a first variant embodiment of the arrangement, depicted in
This update of the morphing model using optical flow feedback may be useful because a standard generic morphing model has no knowledge about how the displacement of each facial feature affects its neighboring face regions. This is because there is no or not enough notion of elasticity in this basic morphing model. The provision of optical flow information can therefore enable the learning of more complex higher-order morphing models. The idea here is that a perfect morphing model morphs the 3D standard model such that it resembles the live video feed perfectly, in which case the “optical flow combination” block 270 of module 200 would eventually result in no extra optical flow to be applied, and thus be superfluous.
In another variant embodiment, depicted in
In the case of face modeling, such a probabilistic approach intuitively allows an underlying elasticity model of the face to fill in the unobserved gaps. A face can only move in certain ways; there are constraints on the movements. For instance, neighboring points on the model will move in similar ways. Also, symmetric points on the face are correlated. This means that if the left part of a face is observed to smile, there is a high probability that the right side smiles as well, although this part may be unobserved.
Mathematically this can be formulated as an energy minimization problem, consisting of two data terms and a smoothness term.
E = S + DFLOW + DMODEL
DFLOW is some distance metric between a proposed candidate solution for the final fine tuned morphed 3D model and what one could expect from seeing the optical flow of the 2D input image alone. The better the proposed candidate matches the probability distribution, given the observed dense optical flow map, the lower this distance. The metric is weighted inversely proportional to the accuracy of the optical flow estimate.
DMODEL is a similar metric, but represents the distance according to the match between the candidate solution and the observed AAM-based morphed 3D model. It is also weighted inversely proportional to the accuracy of the AAM algorithm.
S penalizes improbable motions of the face. It comprises two types of subterms: absolute and relative penalties. Absolute penalties are proportional to the improbability of a point of the face moving in the proposed direction, tout court. Relative penalties work in the same manner, but given the displacement of neighboring points (or other relevant points, e.g. symmetric points).
Energy minimization problems can be solved by numerous techniques, examples being gradient descent methods, stochastic methods (simulated annealing, genetic algorithms, random walks), graph cuts, belief propagation, Kalman filters, etc. The objective is always the same: find the proposed morphed 3D model for which the energy in the above equation is minimal.
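A toy one-dimensional instance of minimizing E = S + DFLOW + DMODEL by gradient descent is sketched below. The quadratic form of the data terms, the pairwise smoothness term and all parameter names are illustrative assumptions, not the claimed formulation:

```python
import numpy as np

def minimize_energy(flow_pred, model_pred, w_flow, w_model, w_smooth,
                    iters=500, lr=0.05):
    """Gradient descent on a toy 1D energy E = S + DFLOW + DMODEL.
    `flow_pred` and `model_pred` hold the per-point positions suggested
    by the optical flow and by the AAM-based morph; w_flow and w_model
    weight each data term inversely to the accuracy of its source, and
    w_smooth penalizes neighboring points moving differently."""
    x = model_pred.copy()
    for _ in range(iters):
        g_flow = 2.0 * w_flow * (x - flow_pred)       # gradient of DFLOW
        g_model = 2.0 * w_model * (x - model_pred)    # gradient of DMODEL
        d = np.diff(x)                                # neighbor differences
        g_smooth = np.zeros_like(x)                   # gradient of S
        g_smooth[:-1] -= 2.0 * w_smooth * d
        g_smooth[1:] += 2.0 * w_smooth * d
        x -= lr * (g_flow + g_model + g_smooth)
    return x

# With the model term and smoothness switched off, the minimum is the
# flow-suggested solution itself.
x = minimize_energy(np.array([1.0, 2.0]), np.array([0.0, 0.0]),
                    w_flow=1.0, w_model=0.0, w_smooth=0.0)
```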
A more detailed embodiment for the embodiment of
A second probabilistic embodiment is shown in
Note that all described embodiments are not limited to the morphing of human faces only. Models for any non-rigid object can be built and used for morphing in the model-based approach. In addition the embodiments are not limited to the use of AAM models. Other models like e.g. ASM (Active Shape Models) can be used during the initial morphing module 100.
While the principles of the invention have been described above in connection with specific apparatus, it is to be clearly understood that this description is made only by way of example and not as a limitation on the scope of the invention, as defined in the appended claims. In the claims hereof any element expressed as a means for performing a specified function is intended to encompass any way of performing that function. This may include, for example, a combination of electrical or mechanical elements which performs that function, or software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function, as well as mechanical elements coupled to software-controlled circuitry, if any. The invention as defined by such claims resides in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for, and unless otherwise specifically so defined, any physical structure is of little or no importance to the novelty of the claimed invention. Applicant thus regards any means which can provide those functionalities as equivalent to those shown herein.
Number | Date | Country | Kind
---|---|---|---
12305040.3 | Jan 2012 | EP | regional
Filing Document | Filing Date | Country | Kind | 371(c) Date
---|---|---|---|---
PCT/EP2013/050173 | 1/8/2013 | WO | 00 | 6/13/2014