Claims
- 1. One or more computer-readable media having computer-readable instructions thereon which, when executed by a computing device, perform a facial expression transformation method comprising:
defining a code book containing data defining a first set of facial expressions of a first person; providing data defining a second set of facial expressions, the second set of facial expressions providing a training set of expressions of a second person who is different from the first person; deriving a transformation function from the training set of expressions and corresponding expressions from the first set of expressions; and applying the transformation function to the first set of expressions to provide a synthetic set of expressions.
- 2. The one or more computer-readable media of claim 1, wherein the training set of expressions contains fewer expressions than the code book.
- 3. The one or more computer-readable media of claim 1, wherein the transformation function compensates for differences in the size and shape of the faces of the first and second persons.
- 4. The one or more computer-readable media of claim 1, wherein said deriving of the transformation function comprises computing a linear transformation from one set of expressions to another.
- 5. The one or more computer-readable media of claim 1, wherein the deriving of the transformation function comprises:
representing each expression as a 3m-vector that contains the x, y, z displacements at m standard sample positions; and computing a set of linear predictors a_j, one for each coordinate of g_a, given a set of n expression vectors for a face to be transformed, g_a^1 . . . g_a^n, and a corresponding set of vectors for a target face, g_b^1 . . . g_b^n, by solving 3m linear least-squares systems of the following form: a_j·g_a^i = g_b^i[j], i = 1 . . . n
- 6. The one or more computer-readable media of claim 5, wherein said computing comprises using only a subset of points for each a_j.
- 7. The one or more computer-readable media of claim 6, wherein said using comprises using only points that share edges with a standard sample point under consideration.
- 8. The one or more computer-readable media of claim 5 further comprising controlling the spread of singular values when computing a pseudoinverse to solve for the a_j.
- 9. The one or more computer-readable media of claim 8, wherein said controlling the spread comprises zeroing out all singular values less than α·σ_1, where σ_1 is the largest singular value of the matrix.
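Claims 5 through 9 describe what amounts to a standard linear least-squares fit whose pseudoinverse is regularized by truncating small singular values. A minimal NumPy sketch of that computation follows; the function and parameter names are illustrative and not taken from the specification:

```python
import numpy as np

def fit_linear_predictors(Ga, Gb, alpha=0.01):
    """Fit one linear predictor a_j per target coordinate.

    Ga: (n, 3m) array of n training expression vectors for the face
        to be transformed.
    Gb: (n, 3m) array of corresponding expression vectors for the
        target face.
    Solves, for each coordinate j: a_j . g_a^i = g_b^i[j], i = 1..n,
    using a pseudoinverse in which singular values below alpha * sigma_1
    are zeroed out, as in claims 8 and 9.
    """
    U, s, Vt = np.linalg.svd(Ga, full_matrices=False)
    # Zero out small singular values to control their spread (claim 9).
    s_inv = np.where(s >= alpha * s[0], 1.0 / np.maximum(s, 1e-12), 0.0)
    Ga_pinv = Vt.T @ np.diag(s_inv) @ U.T   # truncated pseudoinverse
    A = (Ga_pinv @ Gb).T                    # row j holds predictor a_j
    return A

def transform_expression(A, ga):
    """Apply the learned linear transformation to one expression vector."""
    return A @ ga
```

With enough well-conditioned training pairs the predictors recover the underlying linear map exactly; the α cutoff matters when the training expressions are few or nearly dependent.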
- 10. The one or more computer-readable media of claim 1, wherein said providing data defining a second set of facial expressions comprises:
illuminating the second person's face; and contemporaneously capturing structure data describing the face's structure and reflectance data describing reflectance properties of the face under the illumination.
- 11. The one or more computer-readable media of claim 10, wherein said illuminating comprises:
using multiple light sources, one of which projects a pattern on the second person's face from which the structure data can be ascertained; at least one of the light sources comprising an infrared light source; at least one of the light sources being polarized; and said capturing comprising using a camera having a polarizer that suppresses specularly-reflected light so that diffuse-component reflection data is captured.
- 12. The one or more computer-readable media of claim 1, wherein said providing data defining a second set of facial expressions comprises:
illuminating the second person's face with a first polarized light source that is selected so that specularly-suppressed reflective properties of the face can be ascertained; illuminating the second person's face with a second structured light source that projects a pattern onto the face, while simultaneously illuminating the face with the first polarized light source; and capturing both specularly-suppressed reflection data and structure data from the simultaneous illumination.
- 13. The one or more computer-readable media of claim 12, wherein the light sources provide light at different frequencies.
- 14. The one or more computer-readable media of claim 12, wherein the light sources provide infrared light.
- 15. The one or more computer-readable media of claim 12, further comprising processing the captured data to provide both (a) data that describes dimensional aspects of the face and (b) data that describes diffuse reflective properties of the face.
- 16. The one or more computer-readable media of claim 1, wherein said providing data defining a second set of facial expressions comprises:
illuminating the second person's face with multiple different light sources; measuring range map data from said illuminating; measuring image data from said illuminating; deriving a 3-dimensional surface from the range map data; computing surface normals to the 3-dimensional surface; and processing the surface normals and the image data to derive an albedo map.
- 17. The one or more computer-readable media of claim 16, wherein at least one of the light sources is polarized.
- 18. The one or more computer-readable media of claim 16, wherein all of the light sources are polarized.
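The pipeline of claim 16, from range map to surface normals to albedo map, can be sketched under a single-light Lambertian assumption. The claim itself does not fix a reflectance model, and all names here are illustrative:

```python
import numpy as np

def normals_from_range(z):
    """Estimate unit surface normals from a range (depth) map by
    finite differences, a simple stand-in for deriving a 3-D surface
    and computing its normals (claim 16)."""
    dzdx = np.gradient(z, axis=1)
    dzdy = np.gradient(z, axis=0)
    n = np.dstack([-dzdx, -dzdy, np.ones_like(z)])
    return n / np.linalg.norm(n, axis=2, keepdims=True)

def albedo_map(image, normals, light_dir, eps=1e-6):
    """Recover a per-pixel albedo map from diffuse image data and
    surface normals, assuming Lambertian shading I = albedo * (n . l).

    image:     (H, W) measured diffuse intensity.
    normals:   (H, W, 3) unit surface normals.
    light_dir: (3,) unit vector toward the light source.
    """
    shading = normals @ light_dir           # n . l at each pixel
    shading = np.clip(shading, eps, None)   # guard grazing angles
    return image / shading
```

In practice the polarized illumination of claims 17 and 18 is what justifies treating the captured image as purely diffuse before this division.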
- 19. A computing device embodying the one or more computer-readable media of claim 1.
- 20. A computing system comprising:
one or more computer-readable media having computer-readable instructions thereon which, when executed by a computer, cause the computer to:
operate on a training set of expressions from one person and corresponding expressions from a code book of another person to compute a linear transformation function from the training set and their corresponding expressions; and
apply the transformation function to a plurality of expressions from the code book to provide a synthetic set of expressions; and one or more processors configured to execute said computer-readable instructions.
- 21. The computing system of claim 20, wherein the instructions cause the one or more processors to use the synthetic set of expressions to transform expressions from the one person into expressions of the other person.
- 22. The computing system of claim 21, wherein the instructions cause the one or more processors to transform expressions from the one person that are different from the expressions contained in the code book.
- 23. The computing system of claim 21, wherein the instructions cause the one or more processors to transform expressions by transmitting at least one index of a synthetic expression to a receiver that can reconstruct the expression.
- 24. The computing system of claim 21, wherein the instructions cause the one or more processors to transform facial expressions.
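The index-based transmission of claim 23 reduces to a nearest-neighbor lookup against a synthetic code book shared by sender and receiver. An illustrative sketch, with assumed names and a Euclidean distance metric the claim does not specify:

```python
import numpy as np

def nearest_code_index(codebook, expression):
    """Sender side: find the code book entry closest to the live
    expression and transmit only its index (claim 23).

    codebook:   (k, d) array of k synthetic expression vectors.
    expression: (d,) expression vector to encode.
    """
    distances = np.linalg.norm(codebook - expression, axis=1)
    return int(np.argmin(distances))

def reconstruct(codebook, index):
    """Receiver side: look the expression back up from the shared
    code book using only the transmitted index."""
    return codebook[index]
```

Transmitting a single integer per frame instead of a full expression vector is the bandwidth advantage this claim is after.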
- 25. A facial expression transformation system comprising:
a code book embodied on a computer-readable medium, the code book containing data defining a first set of facial expressions of a first person; data embodied on a computer-readable medium, the data defining a second set of facial expressions, the second set of facial expressions providing a training set of expressions of a second person who is different from the first person; and one or more computer-readable media embodying instructions that implement a transformation processor configured to derive a transformation function from the training set of expressions and corresponding expressions from the first set of expressions.
- 26. The expression transformation system of claim 25, wherein the transformation processor comprises a linear transformation processor.
- 27. The expression transformation system of claim 25 further comprising a synthetic set of expressions embodied on a computer-readable medium, the synthetic set of expressions being derived by applying the transformation function to the code book expressions.
- 28. The expression transformation system of claim 25, wherein the transformation function compensates for differences in the size and shape of the faces of the first and second persons.
- 29. The expression transformation system of claim 25, wherein the transformation processor derives the transformation function by:
representing each expression as a 3m-vector that contains the x, y, z displacements at m standard sample positions; and computing a set of linear predictors a_j, one for each coordinate of g_a, given a set of n expression vectors for a face to be transformed, g_a^1 . . . g_a^n, and a corresponding set of vectors for a target face, g_b^1 . . . g_b^n, by solving 3m linear least-squares systems of the following form: a_j·g_a^i = g_b^i[j], i = 1 . . . n
- 30. A system for animating facial features comprising:
means for defining a subdivision surface that approximates geometry of a plurality of different faces; and means for fitting the same subdivision surface to each of the plurality of faces.
- 31. The system of claim 30, wherein said means for defining comprises means for defining the subdivision surface with a coarse mesh structure.
- 32. The system of claim 31, wherein the coarse mesh structure comprises a triangular mesh.
- 33. The system of claim 30, wherein said means for fitting comprises means for performing a continuous optimization operation over vertex positions of the subdivision surface.
- 34. The system of claim 30, wherein said means for fitting comprises means for fitting the subdivision surface to the faces without altering the connectivity of a mesh that defines the subdivision surface.
- 35. The system of claim 30, wherein said means for fitting comprises means for minimizing a smoothing functional associated with a mesh that defines the subdivision surface.
- 36. The system of claim 30, wherein said means for fitting comprises means for selecting one or more constraints associated with a mesh that defines the subdivision surface and means for fitting those constraints directly to corresponding points on the faces.
- 37. The system of claim 36, wherein the constraints are associated with one of the eyes, nose or mouth.
- 38. The system of claim 30, wherein said means for fitting comprises means for minimizing a functional that includes terms for distance, smoothness, and constraints.
- 39. The system of claim 30, wherein said means for fitting comprises means for solving a sequence of linear least-squares problems.
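The fitting described in claims 33 through 39, minimizing a functional with distance, smoothness, and constraint terms over vertex positions only, can be posed as a linear least-squares solve per iteration. A toy sketch follows; the mesh handling is greatly simplified and all names and weights are illustrative, not from the specification:

```python
import numpy as np

def fit_vertices(n_verts, targets, edges, constraints,
                 w_dist=1.0, w_smooth=0.1, w_con=10.0):
    """One linear least-squares step over vertex positions (claim 39).

    n_verts:     number of control vertices; connectivity is never
                 altered, only positions move (claim 34).
    targets:     (n_verts, 3) closest target-surface point per vertex
                 (distance term, claim 38).
    edges:       list of (i, j) vertex pairs (smoothness term, claim 35).
    constraints: dict {vertex index: (3,) point} mapped directly to
                 feature points such as eyes, nose, mouth (claims 36-37).
    """
    rows, rhs = [], []
    for i in range(n_verts):                 # distance: v_i ~ target_i
        r = np.zeros(n_verts); r[i] = w_dist
        rows.append(r); rhs.append(w_dist * targets[i])
    for i, j in edges:                       # smoothness: v_i - v_j ~ 0
        r = np.zeros(n_verts); r[i] = w_smooth; r[j] = -w_smooth
        rows.append(r); rhs.append(np.zeros(3))
    for i, p in constraints.items():         # constraints pinned hard
        r = np.zeros(n_verts); r[i] = w_con
        rows.append(r); rhs.append(w_con * np.asarray(p))
    A, b = np.vstack(rows), np.vstack(rhs)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol
```

Re-estimating the closest target points from the new positions and solving again yields the "sequence of linear least-squares problems" of claim 39.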
- 40. A method of animating facial features comprising:
defining a subdivision surface that approximates geometry of a plurality of different faces; fitting the same subdivision surface to each of the plurality of faces to establish a correspondence between the faces; and using the correspondence between the faces to transform an expression of one face into an expression of another face.
- 41. A method of animating facial features comprising:
measuring 3-dimensional data for a plurality of different faces to provide corresponding face models; defining only one generic face model that is to be used to map to each corresponding face model; selecting a plurality of points on the generic face model that are to be mapped directly to corresponding points on each of the corresponding face models; and fitting the generic face model to each of the corresponding face models, said fitting comprising mapping each of the selected points directly to the corresponding points on each of the corresponding face models.
- 42. The method of claim 41, wherein:
said defining comprises defining a subdivision surface from a base mesh structure, the subdivision surface containing a plurality of vertices and approximating the geometry of the face models; and said fitting comprises manipulating only the positions of the vertices of the subdivision surface.
- 43. The method of claim 41, wherein said fitting comprises manipulating a base mesh that defines a subdivision surface.
- 44. The method of claim 41, wherein said fitting comprises manipulating a base mesh that defines a subdivision surface without altering the connectivity of the base mesh.
- 45. The method of claim 41, wherein said measuring comprises using a laser range scan to measure the 3-dimensional data.
RELATED APPLICATIONS
[0001] This application is a continuation of and claims priority to U.S. patent application Ser. No. 09/651,880, filed on Aug. 30, 2000, the disclosure of which is incorporated by reference herein.
Continuations (1)

| Relation | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 09/651,880 | Aug 2000 | US |
| Child | 10/900,252 | Jul 2004 | US |