Claims
- 1. A method for animating facial motion, comprising:
generating a three-dimensional digital model of an actor's face;
overlaying a virtual muscle structure onto the digital model, said virtual muscle structure including plural muscle vectors each respectively defining a plurality of vertices lying along a surface of said digital model in a direction corresponding to that of actual facial muscles; and
re-generating said digital model in response to an actuation of at least one of said plural muscle vectors that repositions corresponding ones of said plurality of vertices and thereby simulates motion.
- 2. The method for animating facial motion of claim 1, wherein at least one of said plural muscle vectors further includes an origin point defining a rigid connection of said muscle vector with an underlying structure corresponding to actual cranial tissue.
- 3. The method for animating facial motion of claim 1, wherein at least one of said plural muscle vectors further includes an insertion point defining a connection of said muscle vector with an overlying surface corresponding to actual skin.
- 4. The method for animating facial motion of claim 1, wherein at least one of said plural muscle vectors further includes interconnection points with other ones of said plural muscle vectors.
- 5. The method for animating facial motion of claim 1, further comprising defining facial marker locations corresponding to said plurality of vertices.
- 6. The method for animating facial motion of claim 5, further comprising generating a template having holes corresponding to said defined facial marker locations for use in marking said locations on the actor's face.
- 7. The method for animating facial motion of claim 1, further comprising receiving an input signal defining said actuation.
- 8. The method for animating facial motion of claim 7, wherein said input signal further comprises a user selection of at least one of said plural muscle vectors and a compression value to be applied to said selected one of said plural muscle vectors.
- 9. The method for animating facial motion of claim 7, wherein said input signal further comprises a user selection of a pose comprised of plural ones of said plural muscle vectors and a compression value to be applied to said plural ones of said plural muscle vectors.
- 10. The method for animating facial motion of claim 7, wherein said input signal further comprises motion capture data reflecting movement of facial markers affixed to said actor's face.
- 11. The method for animating facial motion of claim 10, further comprising calibrating said motion capture data with respect to a default data set.
- 12. The method for animating facial motion of claim 10, further comprising retargeting said motion capture data for a different digital model.
- 13. The method for animating facial motion of claim 10, further comprising determining directional orientation of the actor's eyes from the motion capture data and animating eyes for said digital model consistent with said determined directional orientation.
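Outside the claim language, the muscle-vector actuation recited in claims 1-4 and 7-9 can be sketched in code. This is a minimal illustrative model, not the patented implementation: the class name, the per-vertex weights, and the linear pull toward the origin point are all assumptions introduced here to make the origin/insertion/compression relationships concrete.

```python
import numpy as np

class MuscleVector:
    """Illustrative muscle vector: an origin point rigidly anchored to
    underlying (cranial) structure, an insertion point on the skin-side
    surface, and a set of model vertices influenced along the vector."""

    def __init__(self, origin, insertion, vertex_ids, weights):
        self.origin = np.asarray(origin, dtype=float)        # rigid cranial anchor
        self.insertion = np.asarray(insertion, dtype=float)  # skin-side endpoint
        self.vertex_ids = vertex_ids                         # indices into the mesh
        self.weights = np.asarray(weights, dtype=float)      # per-vertex influence (assumed)

    def actuate(self, mesh_vertices, compression):
        """Pull each influenced vertex toward the origin by `compression`
        (0 = relaxed, 1 = fully contracted), scaled by its weight.
        Returns a repositioned copy of the mesh, leaving the input intact."""
        out = mesh_vertices.copy()
        direction = self.origin - self.insertion
        for vid, w in zip(self.vertex_ids, self.weights):
            out[vid] = out[vid] + compression * w * direction
        return out
```

A user-selected compression value (claim 8) would then drive re-generation of the model, e.g. `deformed = mv.actuate(mesh, compression=0.5)`; a pose (claim 9) would apply compression values across several such vectors in combination.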
- 14. A system for animating facial motion, comprising:
an animation processor adapted to generate three-dimensional graphical images and having a user interface; and
a facial performance processing system operative with said animation processor to generate a three-dimensional digital model of an actor's face and overlay a virtual muscle structure onto the digital model including plural muscle vectors each respectively defining a plurality of vertices along a surface of said digital model in a direction corresponding to that of actual facial muscles, said facial performance processing system being responsive to an input reflecting selective actuation of at least one of said plural muscle vectors to thereby reposition corresponding ones of said plurality of vertices and re-generate said digital model in a manner that simulates facial motion.
- 15. The system of claim 14, wherein at least one of said plural muscle vectors further includes an origin point defining a rigid connection of said muscle vector with an underlying structure corresponding to actual cranial tissue.
- 16. The system of claim 14, wherein at least one of said plural muscle vectors further includes at least one insertion point defining a connection of said muscle vector with an overlying surface corresponding to actual skin.
- 17. The system of claim 14, wherein at least one of said plural muscle vectors further includes at least one interconnection point with other ones of said plural muscle vectors.
- 18. The system of claim 14, wherein said facial performance processing system is further operative to define facial marker locations corresponding to said plurality of vertices.
- 19. The system of claim 18, wherein said facial performance processing system is further operative to generate a template having holes corresponding to said defined facial marker locations for use in marking said locations onto the actor's face.
- 20. The system of claim 14, wherein said input further comprises a user selection of at least one of said plural muscle vectors and a compression value to be applied to said selected one of said plural muscle vectors.
- 21. The system of claim 14, wherein said input further comprises a user selection of a pose comprising a combination of plural ones of said plural muscle vectors and at least one associated compression value to be applied to said plural ones of said plural muscle vectors.
- 22. The system of claim 14, further comprising a motion capture processor adapted to produce motion capture data reflecting facial motion of said actor, said motion capture data comprising said input.
- 23. The system of claim 22, wherein said facial performance processing system is further operative to calibrate said motion capture data with respect to a default data set.
- 24. The system of claim 22, wherein said facial performance processing system is further operative to re-target said motion capture data for a different digital model.
- 25. The system of claim 22, wherein said facial performance processing system is further operative to determine directional orientation of the actor's eyes from the motion capture data and animate eyes for said digital model in accordance with said determined directional orientation.
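The motion-capture calibration and retargeting operations recited in claims 11-12 and 23-24 can likewise be sketched. This is an assumed minimal formulation, not the disclosed method: it treats calibration as expressing captured marker positions relative to a default (neutral) data set, and retargeting as per-marker scaling of those displacements for a digital model of different proportions.

```python
import numpy as np

def calibrate(frames, neutral):
    """Express each captured frame as a displacement from the default
    (neutral) marker data set, per the calibration of claims 11/23.
    frames: (T, M, 3) marker positions over T frames; neutral: (M, 3)."""
    return frames - neutral[np.newaxis, :, :]

def retarget(displacements, scale):
    """Assumed retargeting step (claims 12/24): scale each marker's
    displacement so the motion fits a different digital model.
    scale: length-M array of per-marker scale factors."""
    return displacements * np.asarray(scale)[np.newaxis, :, np.newaxis]
```

The calibrated displacements, rather than raw marker positions, would then drive the muscle-vector actuation that re-generates the digital model.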
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application claims priority pursuant to 35 U.S.C. § 119(e) to U.S. provisional patent application Ser. No. 60/454,871, filed Mar. 13, 2003, entitled “Performance Facial System.”
Provisional Applications (1)

| Number   | Date     | Country |
|----------|----------|---------|
| 60454871 | Mar 2003 | US      |