The present invention generally pertains to a system and method for generating one or more 3D models of at least one living object from at least one 2D image comprising the at least one living object. The one or more 3D models can be modified and enhanced. The resulting one or more 3D models can be transformed into at least one 2D display image; the point of view of the output 2D image(s) can be different from that of the input 2D image(s).
U.S. Pat. No. 8,384,714 discloses a variety of methods, devices and storage mediums for creating digital representations of figures. According to one such computer-implemented method, a volumetric representation of a figure is correlated with an image of the figure. Reference points are found that are common to each of two temporally distinct images of the figure, the reference points representing movement of the figure between the two images. A volumetric deformation is applied to the digital representation of the figure as a function of the reference points and the correlation of the volumetric representation of the figure. A fine deformation is applied as a function of the coarse/volumetric deformation. Responsive to the applied deformations, an updated digital representation of the figure is generated.
However, U.S. Pat. No. 8,384,714 discloses using multiple cameras to generate the 3D (volumetric) image.
U.S. Patent Application Publication No. US2015/0178988 teaches a method for generating a realistic 3D reconstruction model for an object or being.
However, US20150178988 requires a plurality of input 2D images.
U.S. Pat. No. 9,317,954 teaches techniques for facial performance capture using an adaptive model. For example, a computer-implemented method may include obtaining a three-dimensional scan of a subject and generating a customized digital model including a set of blend shapes using the three-dimensional scan, each of one or more blend shapes of the set of blend shapes representing at least a portion of a characteristic of the subject. The method may further include receiving input data of the subject, the input data including video data and depth data, tracking body deformations of the subject by fitting the input data using one or more of the blend shapes of the set, and fitting a refined linear model onto the input data using one or more adaptive principal component analysis shapes.
However, U.S. Pat. No. 9,317,954 teaches a method where the initial image(s) are 3D images.
U.S. Pat. No. 10,796,480 teaches a method of generating an image file of a personalized 3D head model of a user, the method comprising the steps of: (i) acquiring at least one 2D image of the user's face; (ii) performing automated face 2D landmark recognition based on the at least one 2D image of the user's face; (iii) providing a 3D face geometry reconstruction using a shape prior; (iv) providing texture map generation and interpolation with respect to the 3D face geometry reconstruction to generate a personalized 3D head model of the user, and (v) generating an image file of the personalized 3D head model of the user. A related system and computer program product are also provided.
However, U.S. Pat. No. 10,796,480 requires “shape priors” (predetermined ethnicity-specific face and body shapes) to convert the automatically-measured facial features into an accurate face. Furthermore, either manual intervention or multiple images are needed to generate an acceptable 3D model of the body.
It is therefore a long-felt need to provide a system for generating at least one modifiable and enhanceable 3D model from a single 2D image, without manual intervention.
It is an object of the present invention to disclose a system and method for generating at least one modifiable and enhanceable 3D model comprising at least one living object from at least one 2D image comprising the at least one living object.
In order to better understand the invention and its implementation in practice, a plurality of embodiments will now be described, by way of non-limiting example only, with reference to the accompanying drawings, wherein
The following description is provided, alongside all chapters of the present invention, so as to enable any person skilled in the art to make use of said invention and sets forth the best modes contemplated by the inventor of carrying out this invention. Various modifications, however, will remain apparent to those skilled in the art, since the generic principles of the present invention have been defined specifically to provide a means and method for generating modifiable and enhanceable 3D models from a 2D image.
The term ‘image’ hereinafter refers to a single picture as captured by an imaging device. A view of a couple dancing, as captured from a position on a dais, constitutes a non-limiting example of an image. A view of a face, showing only the face on a black background, constitutes a non-limiting example of an image.
The term ‘a sequence of images’ hereinafter refers to more than one image, where there is a relationship between each image and the next image in the sequence. A sequence of images typically forms at least part of a video or film.
The term ‘object’ hereinafter refers to an individual item as visible in an original image.
The term ‘model’ hereinafter refers to a representation of an object as generated by software. For non-limiting example, as used herein, a person constitutes an object. The person, as captured in a video image, also constitutes an object. The person, as input into software and, therefore, manipulatable, constitutes a model. A 3D representation of the person, as output from software, also constitutes a model.
The method allows creation of a single 3D model or a sequence of 3D models (volumetric video) from any device that can take regular 2D images.
Volumetric video can be generated from a video that was generated for this purpose, from an old video, from a photograph, and any combination thereof. For example, one or more 3D models can be built from a photograph of people who are now dead, or from a photograph of people as children. In another example, a 3D model, a sequence of 3D models or a volumetric video can be generated of an event, such as a concert or a historic event, caught on film. Another example can be “re-shooting” an old movie, so as to generate a volumetric video of the movie.
Method steps:
An optional preprocessing stage for any of the above comprises a segmentation stage, which separates foreground from background and can, in some embodiments, separate one or more objects from the background, with the one or more objects storable, further analyzable and, if desired, manipulatable separately from the background and from any unselected objects. The segmentation stage is implemented by means of a segmentation neural network.
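By way of non-limiting illustration only (the disclosure does not specify a particular segmentation network), such a segmentation stage could be sketched as follows, assuming an off-the-shelf pretrained network such as torchvision's DeepLabV3 and the PASCAL VOC "person" class; the specific model and class index are illustrative assumptions, not part of the disclosed method:

```python
# Illustrative sketch only: separates a person (foreground) from the background
# using an off-the-shelf segmentation network. The choice of DeepLabV3 and the
# PASCAL VOC "person" class index (15) are assumptions for illustration.
import torch
import torchvision
from torchvision import transforms
from PIL import Image

def segment_person(image_path: str) -> torch.Tensor:
    model = torchvision.models.segmentation.deeplabv3_resnet50(pretrained=True).eval()
    image = Image.open(image_path).convert("RGB")
    preprocess = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    x = preprocess(image).unsqueeze(0)            # [1, 3, H, W]
    with torch.no_grad():
        logits = model(x)["out"]                  # [1, 21, H, W] per-class scores
    labels = logits.argmax(dim=1)                 # [1, H, W] class index per pixel
    mask = (labels == 15).float()                 # 1 = person (foreground), 0 = background
    return mask
```

The resulting mask can be used to store and further process the selected object separately from the background and from any unselected objects.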
Preferably either in step (3) or in step (4), the 3D model is completed by generating any portion that was invisible in the original image(s).
For embodiments that employ a latent space representation, a float vector of N numbers is used to represent the latent space. In some embodiments, N is 128, although N can be in a range from 30 to 10^6. The geometry NN that receives the latent space vector and outputs the 3D representation is of the “implicit function” type: it receives the latent space vector and a set of points [x, y, z] and outputs, for each point (xi, yi, zi), a Boolean that describes whether the point is inside or outside the body, thus generating a cloud of points that describes the 3D body.
In some embodiments, the output of the implicit function comprises, for each point (xi, yi, zi), a color value in addition to the Boolean; in other words, for each point the NN returns whether the point is inside or outside the 3D model, together with a color value for that point.
The color values can be expressed in any convenient color space, including, but not limited to, CIE, RGB, YUV, HSL, HSV, CMYK, CIELUV, CIEUVW and CIELAB.
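By way of non-limiting illustration only, and filling in architecture details (layer sizes, activations) that the disclosure leaves open, the implicit-function geometry NN described above could be sketched as a small multilayer perceptron that maps the latent vector and a query point (xi, yi, zi) to an inside/outside Boolean and a color value:

```python
# Minimal sketch of an "implicit function" geometry network, assuming a
# 128-element latent vector (N = 128) and query points (x, y, z).
# Layer sizes and activations are illustrative assumptions only.
import torch
import torch.nn as nn

class ImplicitFunctionNet(nn.Module):
    def __init__(self, latent_dim: int = 128, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),        # 1 occupancy logit + 3 color channels
        )

    def forward(self, latent: torch.Tensor, points: torch.Tensor):
        # latent: [B, latent_dim]; points: [B, P, 3] query points (xi, yi, zi)
        B, P, _ = points.shape
        z = latent.unsqueeze(1).expand(B, P, latent.shape[-1])
        out = self.mlp(torch.cat([z, points], dim=-1))
        inside = torch.sigmoid(out[..., 0]) > 0.5   # Boolean: is the point inside the body?
        color = torch.sigmoid(out[..., 1:])         # color value (RGB here; other spaces possible)
        return inside, color
```

Evaluating such a network over a dense grid of query points and keeping the points reported as inside the body yields the cloud of points that describes the 3D body.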
Another method is to project the input texture onto the 3D model and to use the implicit function to generate the portions of the 3D model that were invisible in the original 2D image.
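By way of non-limiting illustration only (the disclosure does not specify a camera or projection model), the projection of the input texture onto the 3D model could be sketched as follows, assuming a simple pinhole camera and PyTorch's grid_sample for color lookup; points that do not project into the image would then be colored by the implicit function as described above:

```python
# Illustrative sketch: project visible 3D model points back into the 2D input
# image and sample their texture. A pinhole camera with intrinsics K is assumed
# here; the actual projection model is not specified in the disclosure.
import torch
import torch.nn.functional as F

def sample_texture(points_cam: torch.Tensor, image: torch.Tensor, K: torch.Tensor) -> torch.Tensor:
    # points_cam: [P, 3] model points in camera coordinates (z > 0 in front of camera)
    # image:      [1, 3, H, W] input 2D image, values in [0, 1]
    # K:          [3, 3] pinhole camera intrinsics
    H, W = image.shape[-2:]
    uvw = (K @ points_cam.T).T                    # [P, 3] homogeneous pixel coordinates
    uv = uvw[:, :2] / uvw[:, 2:3]                 # perspective divide -> pixel (u, v)
    grid = torch.stack([2 * uv[:, 0] / (W - 1) - 1,   # normalize to [-1, 1] for grid_sample
                        2 * uv[:, 1] / (H - 1) - 1], dim=-1)
    grid = grid.view(1, 1, -1, 2)                 # [1, 1, P, 2]
    colors = F.grid_sample(image, grid, align_corners=True)   # [1, 3, 1, P]
    return colors.view(3, -1).T                   # [P, 3] sampled color per visible point
```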
In some embodiments, training set(s) are used to train the geometric neural network(s) to add “accurate” texture and geometry to the 3D model(s). Since the original image(s) are in 2D, parts of the 3D model will have been invisible in the original 2D image(s) so that, by means of the training sets, the geometric neural network(s) learn how to complete the 3D model by adding to the 3D model a reasonable approximation of the missing portions. In such embodiments, a trained NN will fill in the originally invisible portion(s) with an average of the likely missing texture (and geometry) as determined from the training sets. For non-limiting example, an input image shows the front of a person wearing a basketball jersey. The back is invisible; there is no way to tell what number the person would have had on the back of the jersey. The training set would have included jersey backs with many different numbers, so that the “accurate” 3D model resulting from the averaged output would have a jersey with no number on the back. Similarly, the jersey back would be unwrinkled, since the locations of the wrinkles would be different on different jerseys.
In preferred embodiments, one or more Generative Adversarial Networks (GANs) is used to create a “realistic” model instead of an “accurate” model. Instead of, or in addition to, one or more GANs, one or more variational autoencoders can be used. In a GAN, two types of network are used, a “generator” and a “discriminator”. The generator creates input and feeds it to the discriminator; the discriminator decides whether the input it receives is real or not. Input the discriminator finds to be real (“realistic input”) can be fed back to the generator, which can then use the realistic input to improve later instances of the input it generates.
To train the GAN, two types of input are used, “ground truth” input and generator input, where ground truth input is what an outside observer deems to be real. A 3D model of a basketball player generated from photographs of the player from a number of directions is a non-limiting example of a ground truth input. A “basketball player training set”, for non-limiting example, might comprise all of the New York Knicks players between 2000 and 2020. Another non-limiting example of a “basketball player training set” might be a random sample of all NBA players between 2000 and 2020.
Ground truth input and generator input are fed to the discriminator, and the discriminator decides whether the input it received is ground truth or not. A trainer, who knows whether each discriminator input was ground truth or generated, compares that knowledge with the discriminator output, a Boolean indicating generator input or ground truth input. Generator input that “fooled” the discriminator can then be fed back to the generator to improve its future performance. The GAN is deemed to be trained when the discriminator output is correct 50% of the time.
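By way of non-limiting illustration only, the training procedure described above could be sketched as a standard adversarial loop; the generator, discriminator and ground-truth data loader below are placeholders, and the losses and optimizers are illustrative assumptions rather than part of the disclosure:

```python
# Minimal GAN training sketch: the generator produces candidate data, the
# discriminator labels each input as ground truth (1) or generated (0), and
# in practice training is stopped when the discriminator is correct only
# about 50% of the time. All networks and loaders are placeholders.
import torch
import torch.nn as nn

def train_gan(generator, discriminator, ground_truth_loader,
              latent_dim: int = 128, epochs: int = 100, lr: float = 2e-4):
    bce = nn.BCEWithLogitsLoss()
    opt_g = torch.optim.Adam(generator.parameters(), lr=lr)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=lr)

    for _ in range(epochs):
        for real in ground_truth_loader:                 # "ground truth" input
            b = real.shape[0]
            fake = generator(torch.randn(b, latent_dim)) # generator input

            # Discriminator step: label ground truth as 1, generated as 0
            loss_d = bce(discriminator(real), torch.ones(b, 1)) + \
                     bce(discriminator(fake.detach()), torch.zeros(b, 1))
            opt_d.zero_grad(); loss_d.backward(); opt_d.step()

            # Generator step: rewarded when it "fools" the discriminator
            loss_g = bce(discriminator(fake), torch.ones(b, 1))
            opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```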
In all cases, the system is configured to generate a model that is sufficiently realistic that a naïve user, one who is unfamiliar with the geometry and texture of the original object, will assume that the realistic textured 3D model or the resulting output image(s) accurately reproduce the original object.
Geometry as well as texture is generated for the portions of an object that were invisible in the original image(s). For non-limiting example, if the original object was a 2D frontal image of a person from the waist up, the output 3D model could comprise the person's legs and feet and could comprise a hairstyle that included the back of the head as well as the portions of the sides visible in the original image.
In some embodiments that employ a geometry neural network and a texture neural network, or which employ a combined geometry and texture neural network, the latent space representation is not used.
In some embodiments that employ a geometry neural network, no texture is generated and, therefore, no texture neural network is needed.
In some embodiments, the implicit function is created directly from the 2D image. In some embodiments, the implicit function is created from the latent space representation. For each point (xi, yi, zi), the output of the neural networks is whether the point is within or outside the body, and the color associated with the point.
A video has been generated of a person dancing. A sequence of 3D models of the person dancing is generated from the video. The sequence of 3D models of the dancing person is then embedded inside a predefined 3D environment and published, for example, on social media. The result can be viewed in 3D, in VR or AR, with a 3D dancer in a 3D environment, or it can be viewed in 2D, from a virtual camera viewpoint, with the virtual camera viewpoint moving in a predefined manner, in a manner controlled by the user, and any combination thereof.
For non-limiting example, the original video could comprise the person doing a moonwalk. The resulting volumetric video could then be embedded in a pre-prepared 3D environment comprising a Michael Jackson “Thriller” scene.
Wedding photos or wedding videos can be converted to a 3D hologram of the bride and groom. If this is displayed using VR, a user can be a virtual guest at the wedding.
In AR, the user can watch the bridal couple, for example, doing their wedding dance in the user's living room.
A historical event captured on video or in a movie can be converted to a 3D hologram. If the historical event is displayed in VR or AR, the user can “attend” a Led Zeppelin concert, “see” an opera, “watch” Kennedy's “Ich bin ein Berliner” speech, or other event, all as part of the audience, or, perhaps, from the stage.
Similarly, in VR, a person can “be” a character in a movie, surrounded by the actors and sets or, in AR, have the movie play out in the user's home or other location.
Sports camera images can be converted to holograms and used for post-game analysis, for non-limiting example: who had a line of sight, where was the referee looking, was a ball in or out, did an offside occur, or did one player foul another. In addition, the question could be asked whether a referee could have seen the offense from where he was standing or from where he was looking, or which referee could have (or should have) seen an offense.
Security camera images can also be converted to 3D holograms. Such holograms can be used to help identify a thief (for non-limiting example, is a suspect's body language the same as that of a thief), or to identify security failures (which security guard could have or should have seen an intruder, was the intruder hidden in a camera blind spot).
A user can “insert” himself into a 3D video game.
In some embodiments, the user creates at least one video in which he carries out at least one predefined game movement such as, but not limited to, a kick, a punch, running, digging, climbing and descending. The video(s) are converted to 3D and inserted into a video game that uses these 3D sequences. When the user plays the game, the user will see himself as the game character, carrying out the 3D sequences on command.
In other embodiments, the user can take a single image, preferably of his entire body. The image is converted to 3D and, using automatic rigging, one or more sequences of 3D models is generated by manipulation of the single image, thereby generating at least one predefined game movement. The sequence(s) are inserted into a video game that uses these 3D sequences. When the user plays the game, the user will see himself as the game character, carrying out the 3D sequences on command.
A physical characteristic of the 3D model(s) can be altered. For non-limiting example, a chest size can be changed, a bust size or shape can be changed, muscularity of the model can be altered, a model's gender can be altered, an apparent age can be altered, the model can be made to look like a cartoon character, the model can be made to look like an alien, the model can be made to look like an animal, and any combination thereof.
For non-limiting example, a person's ears and eyebrows and skin color could be altered to make the person into a Vulcan, and the Vulcan inserted into a Star Trek sequence.
In another non-limiting example, a person could be videoed lifting weights and the 3D model altered twice, once to make the person very muscular, lifting the weights with ease, and once to make the person very weedy, lifting the weights only with great difficulty.
In another non-limiting example, an image of a woman in a bathing suit could be altered to have her as Twiggy (a very slender model) walking down a boardwalk with herself as Jayne Mansfield (a very curvaceous actress).
In yet another non-limiting example, a model of a woman could be altered to change her hairstyle, clothing and body shape so that she leaves an 18th Century house as a child of the court of Louis XIV, she morphs into a 14 year old Englishwoman of the Napoleonic era, then into a mid-Victorian Mexican in her late teens, then to a WWI nurse in her early 20's, a Russian “flapper” in her late 20's, a WWII US pilot in her early 30's, and so on, ending up entering a 22nd Century spaceship in her early 40's as the ship's captain.
Number | Date | Country
---|---|---
63135765 | Jan 2021 | US