This invention generally relates to the animation of three dimensional characters and to automated systems and methods for rigging a mesh for animating a three dimensional character. More particularly, this invention relates to the animation of a facial mesh of a three dimensional character and to automated systems and methods for rigging the facial mesh of a three dimensional character.
The generation of three dimensional (3D) content, particularly 3D animated content, is becoming increasingly popular. Animated 3D content is typically included in animated movies, virtual storytelling, 3D animations for video games (in particular, cut-away scenes), 3D simulations for industrial uses, teleconferencing, and social interaction. Despite these emerging uses, the creation of 3D characters that can be animated is still typically performed only by artists who have specialized training in animation. Specifically, the creation of an animated sequence with one or more characters often requires a group of specialized 3D animation artists and animators.
The most difficult process in generating an animated sequence is the creation of a rigged mesh for an animated character. For purposes of this discussion, a mesh is a set of polygons (segments) defining the surface of an object or character, and a rigged mesh is a mesh that further includes a skeleton or set of bones and skinning weights. The skeleton or bone set is used to define character movement, and the skinning weights are a set of relations that govern the motion of the mesh polygons (segments) with respect to the underlying skeleton. The creation of the skeleton or bone set often requires identifying key points of the character mesh. These key points may include, but are not limited to, the corners of the mouth, the corners of the eyes, and other features important for movement. Typically, the process of creating the skeleton and skinning weights is performed manually, hereinafter termed forward kinematic rigging.
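For concreteness, the data structures just described (mesh, bone set, and skinning weights) can be sketched as follows. This is an illustrative sketch only; the type and field names are hypothetical and are not taken from the disclosure.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Bone:
    """One bone of the skeleton: a line segment, with a parent for hierarchy."""
    name: str
    head: np.ndarray        # 3D position where the bone starts
    tail: np.ndarray        # 3D position where the bone ends
    parent: int = -1        # index of the parent bone; -1 marks the root

@dataclass
class RiggedMesh:
    """A mesh together with the skeleton and skinning weights that drive it."""
    vertices: np.ndarray    # (V, 3) array of vertex positions
    polygons: np.ndarray    # (F, 3) array of vertex indices per triangle
    bones: list             # the bone set, as a list of Bone objects
    # (V, B) matrix: skinning_weights[v, b] is the influence of bone b on
    # vertex v; each row is non-negative and sums to 1.
    skinning_weights: np.ndarray = None
```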
Animation of a rigged 3D character can be achieved by applying motion to the bones, which in turn drive the mesh. Animation may also be obtained using blend shapes in a per-vertex animation process, which is typically labor intensive. A bone set and skinning weights may be converted into a blend shape animation process by identifying appropriate morph targets.
Systems and methods for automatically rigging a mesh for the face of a three dimensional character in accordance with embodiments of this invention are disclosed. In accordance with some embodiments of this invention, systems and methods for automatic rigging of a three dimensional character operate in the following manner. An original three dimensional (3D) mesh of a character is received. A representative mesh is generated from the original mesh. A set of segments is then determined and key points are identified for the representative mesh. A bone set including bones for relating segments of the representative mesh is generated. The bones in the bone set are then placed in the representative mesh, and the skinning weights of the representative mesh are determined from the placement of the bones of the bone set in the representative mesh. The bone set, the segments, and the skinning weights of the representative mesh are then translated to the original mesh to create a rigged mesh.
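The overall flow summarized above can be sketched as follows; every helper function here is a hypothetical placeholder corresponding to one of the steps detailed in the sections below.

```python
# Hypothetical placeholders, one per step of the disclosed pipeline.
def build_representative_mesh(original_mesh): ...   # e.g., a visual hull
def identify_segments(rep_mesh): ...                # machine learning segmentation
def identify_key_points(rep_mesh, segments): ...    # points on segment borders
def generate_bone_set(rep_mesh, segments, key_points): ...
def compute_skinning_weights(rep_mesh, bone_set, segments): ...
def translate_to_original(original_mesh, rep_mesh, bone_set, weights): ...

def auto_rig(original_mesh):
    """Top-level pipeline mirroring the steps summarized above."""
    rep_mesh = build_representative_mesh(original_mesh)
    segments = identify_segments(rep_mesh)
    key_points = identify_key_points(rep_mesh, segments)
    bone_set = generate_bone_set(rep_mesh, segments, key_points)
    weights = compute_skinning_weights(rep_mesh, bone_set, segments)
    return translate_to_original(original_mesh, rep_mesh, bone_set, weights)
```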
In accordance with some embodiments, the generation of the representative mesh is completed by performing a volumetric method on the original mesh. In accordance with some of these embodiments, the volumetric method includes generating a visual hull.
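One plausible realization of such a volumetric method, sketched here with the open-source trimesh library as an assumed stand-in for the visual hull construction: voxelize the possibly flawed original mesh, fill the interior to obtain a solid volume, and extract a clean watertight surface from it.

```python
import trimesh

def representative_mesh(original, pitch=0.01):
    """Volumetric remeshing sketch: build an occupancy grid from the original
    mesh, fill the enclosed interior, and pull a watertight surface back out
    with marching cubes."""
    voxels = original.voxelized(pitch)  # occupancy grid at the given resolution
    solid = voxels.fill()               # mark enclosed interior voxels as occupied
    return solid.marching_cubes         # Trimesh surface extracted from the volume
```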
In accordance with some embodiments, the segments are determined by performing a machine learning process that assigns the segments. In accordance with some of these embodiments, the machine learning process is a JointBoost process. In accordance with some of these embodiments, the machine learning process is performed in the following manner. The representative mesh and a training set of meshes are received. Features are extracted from the representative mesh and from each of the meshes in the training set. Segments of the representative mesh are probabilistically estimated using the machine learning process based on the extracted features of the representative mesh and the extracted features of the meshes in the training set. The segments of the representative mesh are then determined from the estimated segments such that the segments of the representative mesh provide reasonable boundaries between regions of the character and overall smoothness. In accordance with some of these embodiments, the determination of the segments from the estimated segments is performed by applying a Conditional Random Fields (CRF) process to the estimated segments using a training set of segments. In accordance with further of these embodiments, the training data includes a training set of meshes and at least one of a test set of meshes and a gold standard set of segments.
In accordance with some embodiments, the key points are identified by applying a machine learning process to the segments of the representative mesh. In accordance with some embodiments, the identifying of the key points includes receiving user inputs of points on the representative mesh that are provided to the machine learning process.
In accordance with some embodiments, the bone set is generated by a process performed in the following manner. The representative mesh, the segments of the representative mesh, and the key points of the representative mesh are received by the process. A training set of bone sets that includes different bone sets is received. A machine learning process that uses the training set of bone sets is then applied to the representative mesh, including its segments and key points, to determine a bone set for the representative mesh. In accordance with some of these embodiments, the machine learning process is a Support Vector Machine. In accordance with some of these embodiments, the process of generating the bone set further includes receiving a user input of bone parameters and providing the bone parameters to the machine learning process to generate the bone set for the representative mesh.
In accordance with some embodiments, the skinning weights are determined using a diffusion process. In accordance with some of these embodiments, a process for determining the skinning weights is performed in the following manner. The process receives the representative mesh with the bone set, key points, and segments. The parameters for the skinning weights are determined. A Poisson problem is then generated for each of the segments by modeling each segment as a heat emitting body and the surrounding segments as heat absorbing bodies. Skinning templates are then created using the Poisson problem generated for each of the segments and a machine learning process using a training set of manually skinned models. The skinning weights are then calculated from the skinning templates.
In accordance with some embodiments, the translating of the bone set and skinning weights into the original mesh is performed using a point-to-polygon nearest neighbor search process.
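A minimal sketch of such a transfer follows, with the simplifying assumption that the nearest representative vertex stands in for a true point-to-polygon projection.

```python
import numpy as np
from scipy.spatial import cKDTree

def transfer_weights(orig_vertices, rep_vertices, rep_weights):
    """Nearest-neighbor transfer sketch: each vertex of the original mesh
    inherits the skinning weights of the closest vertex of the
    representative mesh."""
    tree = cKDTree(rep_vertices)            # spatial index over the rep mesh
    _, nearest = tree.query(orig_vertices)  # nearest rep vertex per orig vertex
    return rep_weights[nearest]             # (V_orig, B) transferred weights
```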
In accordance with some embodiments, a topology of a rigged mesh is improved after the rigged mesh is generated.
In accordance with some embodiments, the rigged mesh is transmitted to a user device. In accordance with some other embodiments, the rigged mesh is stored in memory.
Turning now to the drawings, systems and methods for automated rigging of 3D characters, and in particular facial rigging, in accordance with embodiments of the invention are illustrated. In accordance with many embodiments of the invention, the system receives an original 3D mesh of a character. A representative mesh is then generated from the received original mesh. Segments, key points, a bone set, and skinning weights are then determined for the representative mesh. The bone set and skinning weights for the representative mesh are then applied to the original mesh to generate a rigged mesh. While the following discussion of systems and methods in accordance with embodiments of the invention focuses primarily upon the automated rigging of a facial mesh of a 3D character, similar systems and methods can also be adapted for use in rigging meshes that describe the surface of other portions of 3D characters including (but not limited to) meshes of an entire 3D character without departing from the invention. Furthermore, the described embodiments provide a rigged mesh that utilizes bones and skinning weights to drive the animation of the facial mesh of a 3D character. However, embodiments of the invention can also be utilized to rig facial meshes for use in blend shape animation through the usage of appropriate morph targets. Systems and methods for automatically rigging facial meshes of 3D characters in accordance with embodiments of the invention are discussed further below.
System Overview
Devices that can be utilized to automatically rig the facial mesh of a 3D character, either acting singly or jointly through communication over a network, in accordance with embodiments of the invention are shown in
In a number of embodiments, a mobile device 130 or a personal computer 140 may transmit an original 3D mesh of a character to the server 120, which automatically rigs the facial mesh. In some embodiments, the mobile device 130 or the personal computer 140 can itself perform automatic rigging of a facial mesh. In many embodiments, the mobile device 130 or the personal computer 140 communicates with the server 120 to receive data utilized to perform automatic rigging of a facial mesh. Furthermore, one skilled in the art will recognize that other configurations of devices may perform different tasks associated with the automatic rigging of a facial mesh in accordance with embodiments of the invention, and the preceding configurations are examples only and in no way limiting.
Device Architecture
In many embodiments, the systems and methods for automated rigging of a facial mesh of a 3D character are implemented using hardware, software, and/or firmware. A processing system configured to automatically rig a facial mesh of a 3D character in accordance with an embodiment of this invention is shown in
Although specific systems and devices that are configured to automatically rig a facial mesh of a 3D character are discussed above, any of a variety of systems and/or devices can be utilized to automatically rig facial meshes of 3D characters in accordance with embodiments of the invention. Processes for rigging facial meshes of 3D characters in accordance with embodiments of the invention are discussed further below.
Automated Rigging Process
In several embodiments, systems for automatically rigging facial meshes of 3D characters receive an original mesh of the face of a 3D character and provide a rigged mesh of the face of the 3D character. Processes for automatically rigging facial meshes of 3D characters in accordance with embodiments of this invention may be fully automated or may allow user interaction to provide input parameters that aid the rigging process. A process for generating a rigged facial mesh from an original facial mesh in accordance with embodiments of this invention is shown in
An example of a representative mesh generated for the original mesh 800 shown in
Turning back to
After the segments of the representative mesh are identified, key points of the representative mesh are identified (320). Key points are points at the borders of segments that can be utilized in the animation of the mesh. An example of key points in a representative mesh is shown in
Referring again to
Referring back to
The representative mesh with the bones placed in the mesh can be used to determine (340) the skinning weights for the segments of the representative mesh. The skinning weights define the relation between the motion of the polygons (segments) of the 3D representation and the underlying bone set. An example of skinning weights in accordance with embodiments of this invention is shown in
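The disclosure does not commit to a particular deformation model, but skinning weights of this kind are conventionally applied through linear blend skinning; the following is a minimal sketch of that standard technique, included as an assumption rather than as the disclosed formulation.

```python
import numpy as np

def apply_skinning(vertices, weights, bone_transforms):
    """Linear blend skinning sketch: each deformed vertex is the
    weight-blended result of every bone's 4x4 transform applied to it."""
    V = np.c_[vertices, np.ones(len(vertices))]              # (V, 4) homogeneous
    per_bone = np.einsum('bij,vj->bvi', bone_transforms, V)  # (B, V, 4)
    blended = np.einsum('vb,bvi->vi', weights, per_bone)     # (V, 4) blend
    return blended[:, :3]                                    # back to 3D points
```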
In the process 300 shown in
After the bone set and skinning weights are translated to the original mesh to create the rigged mesh, user inputs may be used to improve the topology of the rigged mesh. These inputs may alter the size, aspect ratio, location, and/or rotation of the segments of the rigged mesh while maintaining its overall shape. The improved topology may allow the rigged mesh to deform better when the bones are animated.
Although a specific process for automatically rigging a facial mesh is discussed above with reference to
Identifying Segments of a Facial Mesh Corresponding to Portions of a Face
When a representative mesh has been generated to deal with problematic areas of an original facial mesh of a 3D character, segments of the representative mesh corresponding to portions of a face can be identified. In many embodiments, artificial intelligence is used to identify the segments. In particular, machine learning processes can be used to identify various segments of the representative mesh corresponding to different portions of a face.
A process that identifies segments of a representative mesh corresponding to portions of a face in accordance with embodiments of this invention is shown in
The process 400 can perform (415) feature extraction on the representative mesh and the meshes in the training set. The extracted features are then provided to a learning process to perform the identification of the segments in the representative mesh. In accordance with an embodiment of this invention, the learning technique used is a JointBoost process and/or a Conditional Random Fields (CRF) process. The JointBoost process probabilistically estimates (420) the segments of the representative mesh. The CRF process is a learning process that receives the segment probabilities from the JointBoost process and determines (425) a final segmentation of the representative mesh.
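A rough sketch of this two-stage idea follows. A gradient-boosted classifier stands in for JointBoost, and an iterative neighbor-averaging pass stands in for the CRF; both are simplifications assumed for illustration, not the disclosed processes.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def estimate_segment_probabilities(train_features, train_labels, mesh_features):
    """Boosting stage stand-in: train on per-face features of the training
    meshes, then output per-segment probabilities for each face of the
    representative mesh."""
    clf = GradientBoostingClassifier().fit(train_features, train_labels)
    return clf.predict_proba(mesh_features)            # (faces, segments)

def smooth_segmentation(probs, adjacency, n_iters=10):
    """CRF stage stand-in: repeatedly average each face's probabilities with
    those of its neighboring faces to favor smooth segment boundaries, then
    take the most likely segment per face."""
    for _ in range(n_iters):
        neighbor_mean = np.array([probs[nbrs].mean(axis=0) for nbrs in adjacency])
        probs = 0.5 * probs + 0.5 * neighbor_mean
    return probs.argmax(axis=1)                        # final segment per face
```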
Identification of Key Points within a Representative Mesh
Once the segments of the representative mesh are identified, the key points of the representative mesh can be determined. Key points are typically points at borders between segments in a representative mesh. A machine learning process may be used to identify the key points. To aid in identification of key points, a user interface for providing data about key points is provided in accordance with some embodiments of this invention.
A process for identifying key points from a representative mesh in which segments corresponding to portions of a face have been identified in accordance with an embodiment of this invention is shown in
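Since key points lie at segment borders, candidate points can be found directly from the segmentation; a sketch follows, with the learned (or user-guided) selection of final key points from these candidates omitted.

```python
import numpy as np

def border_vertex_candidates(faces, face_segments):
    """Key-point candidate sketch: any vertex shared by faces assigned to
    different segments lies on a segment border."""
    first_label = {}
    border = set()
    for face, seg in zip(faces, face_segments):
        for v in face:
            if first_label.setdefault(v, seg) != seg:
                border.add(v)        # vertex is touched by two different segments
    return np.array(sorted(border))
```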
Generating a Bone Set
A bone set may be generated for the representative mesh after the segments and key points of the representative mesh are identified. Each bone is a line that connects two or more segments of the mesh and can be manipulated to animate the mesh. The generation of the bone set includes determining the number of bones needed, the size of each bone, the placement of each bone, and the interconnection of the bones. The generation of the bone set is typically performed by a machine learning process. A user interface that allows a user to enter specific parameters, such as, but not limited to, the number of bones and the maximum and minimum size of the bones, may be provided to allow a user to customize the bone set.
A process for generating the bone set for a representative mesh using a machine learning process in accordance with an embodiment of this invention is shown in
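The summary above names a Support Vector Machine as one option for this learning step without spelling out its use, so the following regression formulation (scikit-learn's SVR predicting bone endpoint coordinates from a global mesh descriptor) is an assumption for illustration only.

```python
import numpy as np
from sklearn.svm import SVR

def predict_bone_endpoints(train_descriptors, train_bone_coords, descriptor):
    """SVM stage sketch: regressors trained on (mesh descriptor, flattened
    bone endpoint coordinates) pairs from rigged training meshes predict
    endpoints for a new representative mesh. One SVR per output coordinate,
    since SVR is scalar-valued."""
    X = np.asarray(train_descriptors)      # (meshes, descriptor_dim)
    Y = np.asarray(train_bone_coords)      # (meshes, 3 * num_bone_endpoints)
    models = [SVR().fit(X, Y[:, k]) for k in range(Y.shape[1])]
    query = np.asarray(descriptor)[None, :]
    return np.array([m.predict(query)[0] for m in models])
```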
Determining Skinning Weights
A set of skinning weights defines the relationships between the motion of the segments of the mesh and the underlying bone set. In accordance with embodiments of this invention, one of many methods, such as, but not limited to, diffusion, proximity maps, and templates, may be used to determine the skinning weights. User inputs may also be used in these methods to determine skinning weights.
A process for determining skinning weights for a representative mesh and bone set, using information concerning the segments and key points within the representative mesh, in accordance with an embodiment of this invention is shown in
Poisson problems are then generated (715) for each segment. The Poisson problems may be generated by modeling each segment as a heat emitting body with the surrounding segments treated as heat absorbing bodies. Specific coefficients and boundary conditions are used to capture the peculiarities of each segment/joint problem. These parameters may be estimated using machine learning processes or input by a user. The results of the Poisson problems provide skinning template profiles. Each skinning template profile is specific to a given joint and represents a typical spatial distribution of the skinning weights on a mesh. The skinning templates are then used to determine the skinning weights of the segments using a machine learning process (725) such as, but not limited to, Bayesian networks and, more generally, Bayesian estimation.
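The heat analogy above reduces to one linear solve per segment. The sketch below assumes a precomputed graph Laplacian of the representative mesh and omits the template-learning stage, so it is an illustrative simplification rather than the disclosed process.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def heat_skinning_weights(laplacian, segment_masks):
    """Per-segment Poisson-style solve: each segment emits heat (source value
    1 on its vertices) while the rest of the mesh absorbs it; the equilibrium
    temperatures become that segment's raw weights, and rows are normalized
    so each vertex's weights sum to 1."""
    columns = []
    for mask in segment_masks:                    # (n,) boolean mask per segment
        source = mask.astype(float)
        system = (laplacian + sp.diags(source)).tocsc()
        columns.append(spsolve(system, source))   # steady-state heat per vertex
    W = np.clip(np.array(columns).T, 0.0, None)   # (n, segments), clamp to >= 0
    return W / W.sum(axis=1, keepdims=True)
```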
The above is a description of embodiments of systems and methods in accordance with the present invention. It is foreseen that others skilled in the art will design alternative systems that infringe on this invention as set forth in the following claims, either literally or through the Doctrine of Equivalents.
The present application is a continuation of U.S. application Ser. No. 13/681,120, filed on Nov. 19, 2012, which claims priority to U.S. Provisional Application No. 61/561,228, filed on Nov. 17, 2011. The aforementioned applications are hereby incorporated by reference in their entirety.
Number | Name | Date | Kind |
---|---|---|---|
6047078 | Kang | Apr 2000 | A |
6088042 | Handelman et al. | Jul 2000 | A |
6278466 | Chen | Aug 2001 | B1 |
6466215 | Matsuda et al. | Oct 2002 | B1 |
6535215 | DeWitt et al. | Mar 2003 | B1 |
6552729 | Di Bernardo et al. | Apr 2003 | B1 |
6554706 | Kim et al. | Apr 2003 | B2 |
6700586 | Demers | Mar 2004 | B1 |
6714200 | Talnykin et al. | Mar 2004 | B1 |
7149330 | Liu et al. | Dec 2006 | B2 |
7168953 | Poggio et al. | Jan 2007 | B1 |
7209139 | Keet et al. | Apr 2007 | B1 |
7372536 | Shah et al. | May 2008 | B2 |
7522165 | Weaver | Apr 2009 | B2 |
7937253 | Anast et al. | May 2011 | B2 |
8390628 | Harding et al. | Mar 2013 | B2 |
8704832 | Taylor et al. | Apr 2014 | B2 |
8744121 | Polzin et al. | Jun 2014 | B2 |
8749556 | de Aguiar et al. | Jun 2014 | B2 |
8797328 | Corazza et al. | Aug 2014 | B2 |
8928672 | Corazza et al. | Jan 2015 | B2 |
8982122 | Corazza et al. | Mar 2015 | B2 |
9626788 | Corazza | Apr 2017 | B2 |
9747495 | Corazza et al. | Aug 2017 | B2 |
20010000779 | Hayama et al. | May 2001 | A1 |
20020050988 | Petrov et al. | May 2002 | A1 |
20020102024 | Jones et al. | Aug 2002 | A1 |
20030164829 | Bregler et al. | Sep 2003 | A1 |
20030169907 | Edwards et al. | Sep 2003 | A1 |
20030208116 | Liang et al. | Nov 2003 | A1 |
20030215130 | Nakamura et al. | Nov 2003 | A1 |
20040021660 | Ng-Thow-Hing et al. | Feb 2004 | A1 |
20040049309 | Gardner et al. | Mar 2004 | A1 |
20040210427 | Marschner et al. | Oct 2004 | A1 |
20040227752 | McCartha et al. | Nov 2004 | A1 |
20050264572 | Anast et al. | Dec 2005 | A1 |
20060002631 | Fu et al. | Jan 2006 | A1 |
20060109274 | Alvarez et al. | May 2006 | A1 |
20060134585 | Adamo-villani et al. | Jun 2006 | A1 |
20060171590 | Lu et al. | Aug 2006 | A1 |
20060245618 | Boregowda et al. | Nov 2006 | A1 |
20060267978 | Litke et al. | Nov 2006 | A1 |
20070091085 | Wang et al. | Apr 2007 | A1 |
20070104351 | Yang et al. | May 2007 | A1 |
20070167779 | Kim et al. | Jul 2007 | A1 |
20070182736 | Weaver | Aug 2007 | A1 |
20080024487 | Isner et al. | Jan 2008 | A1 |
20080030497 | Hu et al. | Feb 2008 | A1 |
20080031512 | Mundermann et al. | Feb 2008 | A1 |
20080043021 | Huang et al. | Feb 2008 | A1 |
20080152213 | Medioni et al. | Jun 2008 | A1 |
20080158224 | Wong et al. | Jul 2008 | A1 |
20080170077 | Sullivan et al. | Jul 2008 | A1 |
20080170078 | Sullivan et al. | Jul 2008 | A1 |
20080180448 | Anguelov et al. | Jul 2008 | A1 |
20080187246 | Andres Del Valle | Aug 2008 | A1 |
20080252596 | Bell et al. | Oct 2008 | A1 |
20080284779 | Gu et al. | Nov 2008 | A1 |
20090027337 | Hildreth | Jan 2009 | A1 |
20090067730 | Schneiderman | Mar 2009 | A1 |
20090195544 | Wrinch | Aug 2009 | A1 |
20090196466 | Capata et al. | Aug 2009 | A1 |
20090202114 | Morin et al. | Aug 2009 | A1 |
20090202144 | Taub et al. | Aug 2009 | A1 |
20090231347 | Omote | Sep 2009 | A1 |
20100007665 | Smith et al. | Jan 2010 | A1 |
20100020073 | Corazza | Jan 2010 | A1 |
20100073361 | Taylor et al. | Mar 2010 | A1 |
20100134490 | Corazza et al. | Jun 2010 | A1 |
20100141662 | Storey et al. | Jun 2010 | A1 |
20100149179 | de Aguiar et al. | Jun 2010 | A1 |
20100203968 | Gill et al. | Aug 2010 | A1 |
20100235045 | Craig et al. | Sep 2010 | A1 |
20100238182 | Geisner et al. | Sep 2010 | A1 |
20100253703 | Ostermann | Oct 2010 | A1 |
20100259547 | de Aguiar et al. | Oct 2010 | A1 |
20100271366 | Sung et al. | Oct 2010 | A1 |
20100278405 | Kakadiaris et al. | Nov 2010 | A1 |
20100285877 | Corazza | Nov 2010 | A1 |
20110211729 | Ramalingam | Sep 2011 | A1 |
20110292034 | Corazza et al. | Dec 2011 | A1 |
20110296331 | Iyer | Dec 2011 | A1 |
20110304622 | Rogers et al. | Dec 2011 | A1 |
20110304629 | Winchester | Dec 2011 | A1 |
20120019517 | Corazza et al. | Jan 2012 | A1 |
20120038628 | Corazza et al. | Feb 2012 | A1 |
20120130717 | Xu et al. | May 2012 | A1 |
20120327091 | Eronen et al. | Dec 2012 | A1 |
20130021348 | Corazza et al. | Jan 2013 | A1 |
20130127853 | Corazza et al. | May 2013 | A1 |
20130215113 | Corazza et al. | Aug 2013 | A1 |
20130257877 | Davis | Oct 2013 | A1 |
20130271451 | Tong et al. | Oct 2013 | A1 |
20140035934 | Du et al. | Feb 2014 | A1 |
20140043329 | Wang et al. | Feb 2014 | A1 |
20140160116 | de Aguiar et al. | Jun 2014 | A1 |
20140204084 | Corazza et al. | Jul 2014 | A1 |
20140285496 | de Aguiar et al. | Sep 2014 | A1 |
20140313192 | Corazza et al. | Oct 2014 | A1 |
20140313207 | Taylor et al. | Oct 2014 | A1 |
Number | Date | Country |
---|---|---|
1884896 | Feb 2008 | EP |
WO 2007132451 | Nov 2007 | WO |
WO 2009007701 | Jan 2009 | WO |
WO 2010060113 | May 2010 | WO |
WO 2010129721 | Nov 2010 | WO |
WO 2010129721 | Jun 2011 | WO |
WO 2011123802 | Oct 2011 | WO |
WO 2012012753 | Jan 2012 | WO |
Entry |
---|
International Search Report for International Application No. PCT/US09/57155, date completed Dec. 22, 2009, dated Jan. 12, 2010, 5 pgs. |
International Search Report for International Application No. PCT/US09/65825, date completed Jan. 21, 2010, dated Jan. 28, 2010, 3 pgs. |
International Search Report for International Application PCT/US2011/045060, completed Nov. 27, 2011, 2 pgs. |
International Search Report for PCT/US2010/033797, filed May 5, 2010, report completed Jun. 11, 2010, 2 pgs. |
Written Opinion of the International Searching Authority for International Application No. PCT/US09/57155, date completed Dec. 22, 2009, dated Jan. 12, 2010, 6 pgs. |
Written Opinion of the International Searching Authority for International Application No. PCT/US09/65825, date completed Jan. 21, 2010, dated Jan. 28, 2010, 6 pgs. |
Written Opinion of the International Searching Authority for International Application No. PCT/US2010/033797, filed May 5, 2010, completed Jun. 11, 2010, 4 pgs. |
Written Opinion of the International Searching Authority for International Application No. PCT/US2011/045060, completed Nov. 27, 2011, 5 pgs. |
Aguiar et al., “Automatic Conversion of Mesh Animations into Skeleton-based Animations”, EUROGRAPHICS 2008, vol. 27, No. 2, Apr. 2008. |
Anguelov et al., “Recovering Articulated Object Models from 3D Range Data”, In Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, UAI 2004, 18-26. |
Anguelov et al., “SCAPE: Shape Completion and Animation of People”, Proceedings of the SIGGRAPH Conference, 2005. |
Anguelov et al., “The Correlated Correspondence Algorithm for Unsupervised Registration of Nonrigid Surfaces”, Advance in Neural Information Processing Systems, 17, 2004, pp. 33-40. |
Baran et al., “Automatic rigging and animation of 3D characters”, ACM Transactions on Graphics (TOG)—Proceedings of ACM SIGGRAPH 2007, vol. 26, Issue 3, Jul. 2007. |
Baran, “Using Rigging and Transfer to Animate 3D Characters”, Department of Electrical Engineering and Computer Science Massachusetts Institute of Technology, Sep. 2010. |
Beaudoin et al., “Adapting Wavelet Compression to Human Motion Capture Clips”, GI '07 Proceedings of Graphics Interface, 2007, 6 pages. |
Blanz et al., “A Morphable Model For The Synthesis of 3D Faces”, In Proceedings of ACM SIGGRAPH 1999, 8 pgs. |
Blanz et al., “Reanimating faces in images and video.” Computer Graphics Forum. vol. 22 No. 3. Blackwell Publishing, Inc., 2003. |
Bray, “Markerless Based Human Motion Capture: A Survey”, Published 2001, 44 pgs. |
Buenaposada et al., “Performance Driven Facial Animation Using Illumination Independent Appearance-Based Tracking”, In Proceedings of ICPR, Hong Kong, Aug. 2006, 4 pgs. |
Cheung et al., “Shape-from Silhouette of Articulated Objects and its use for Human Body Kinematics Estimation and Motion Capture”, In Conference on Computer Vision and Pattern Recognition (CVPR), pp. 77-84, 2003. |
Curio et al., “Semantic 3D Motion Retargeting for Facial Animation”, ACM 2006, 8 pgs. |
Curless et al., “The Space of Human Body Shapes: Reconstruction and Parameterization from Range Scans”, ACM Transactions on Graphics, 22(3), pp. 587-594. |
Curless et al., “A Volumetric Method for Building Complex Models from Range Images”, Retrieved from http://graphics.stanford.edu/papers/volrange/volrange.pdf, pp. 1-10, 1996. |
Curless et al., “Articulated Body Deformation from Range Scan Data”, ACM Transactions on Graphics, 21(3), 612-619. |
Davis et al., “Filling Holes in Complex Surfaces Using Volumetric Diffusion”, Proc. First Symposium on 3D Data Processing, Visualization, and Transmission, 2002, pp. 1-11. |
De Aguiar et al., “Marker-Less 3D Feature Tracking for Mesh-Based Human Motion Capture”, Human Motion 2007, LNCS 4818, 2007, 1-15. |
Di Bernardo et al., “Generating Realistic Human Motions from Observations”, submitted to Fifth European Conf. on Computer Vision, ECCV 1998, pp. 1-12. |
Gao et al., “Motion normalization: the Preprocess of Motion Data”, 2005, pp. 253-256. |
Garland et al., “Surface Simplification Using Quadric Error Metrics”, Proceedings of SIGGRAPH 1997, pp. 209-216, 1997. |
Goncalves et al., “Reach Out and Touch Space (Motion Learning)”, Automatic Face and Gesture Recognition, 1998, Proceedings, Third IEEE International Conference on Apr. 14-16, 1998, pp. 234-239. |
Grassia, “Believable Automatically Synthesized Motion by Knowledge-Enhanced Motion Transformation”, Thesis CMU-CS-00-163, Aug. 21, 2000, 220 pages. |
Hahnel et al., “An Extension of the ICP Algorithm for Modeling Nonrigid Objects with Mobile Robots”, Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI), 2003, 6pgs. |
Hasler et al., “A Statistical Model of Human Pose and Body Shape”, Journal compilation © 2008 The Eurographics Association and Blackwell Publishing Ltd. |
Hilton et al., “From 3D Shape Capture to Animated Models”, First International Symposium on 3D Processing, Visualization and Transmission (3DVPT2002). |
Isidro et al., “Stochastic Refinement of the Visual Hull to Satisfy Photometric and Silhouette Consistency Constraints” Boston University Computer Science Tech. Report No. 2003-017, Jul. 31, 2003. |
Jones, Michael, and Paul Viola, “Fast multi-view face detection.” Mitsubishi Electric Research Lab TR-20003-96 3 (2003): 14. |
Ju, et al., “Reusable Skinning Templates Using Cage-based Deformations”, ACM Transactions on Graphics (TOG)—Proceedings of ACM SIGGRAPH Asia 2008, vol. 27 Issue 5, Dec. 2008, 10 pages. |
Kahler et al., “Head Shop: Generating Animated Head Models with Anatomical Structure”, ACM SIGGRAPH Symposium on Computer Animation, pp. 55-64, 2002. |
Kalogerakis, “Machine Learning Algorithms for Geometry Processing by Example”, Thesis submitted Graduate Department of Computer Science University of Toronto, 2010. |
Lewis, “H.264/MPEG-4 AVC CABAC overview”, http://www.theonlineoasis.co.uk/notes.html, Dec. 3, 2012. |
Lewis et al., “Pose Space Deformation: A Unified Approach to Shape Interpolation and Skeleton-Drive Deformation”, Proceedings of ACM SIGGRAPH 2000, pp. 165-172, 2000. |
Liepa, “Filling Holes in Meshes”, Proc. of the Eurographics/ACM SIGGRAPH Symposium on Geometry Processing, pp. 200-205, 2003. |
Liu et al., “3D Motion Retrieval with Motion Index Tree”, Computer Vision and Image Understanding, vol. 92, Issues 2-3, Nov.-Dec. 2003, pp. 265-284. |
Liu et al., “Background surface estimation for reverse engineering of reliefs.” International Journal of CAD/CAM 7.1 (2009). |
Lum et al., “Combining Classifiers for Bone Fracture Detection in X-Ray Images”, Image Processing, 2005, ICIP 2005, IEEE International Conference on (vol. 1) Date of Conference Sep. 11-14, 2005, pp. 1149-1152. |
Ma et al., “An Invitation to 3D Vision”, Springer Verlag, pp. 15-28, 2004. |
Mamou et al., “Temporal DCT-based compression of 3D Dynamic Meshes”, ICCOM'06 Proceedings of the 10th WSEAS international conference on Communications, Jul. 10-12, 2006, 74-79. |
Mamou et al., “The New MPEG-4/FAMC Standard for Animated 3D Mesh Compression”, IEEE 3DTV-CON'08, May 2008. |
Max Planck Institut Informatik, “Automatic Conversion of Mesh Animations into Skeleton-based Animations”, http://www.mpiinf.mpg.de/-edeaguialanimation-eg08.html; Mar. 30, 2008, 9 pgs. |
Mohr et al., “Building Efficient, Accurate Character Skins from Examples”, ACM Transactions on Graphics, 22(3), pp. 562-568. |
Noh et al., “Expression Cloning”, Proceedings of ACM SIGGRAPH 2001, pp. 277-288, 2001. |
Okada et al., “A Video Motion Capture System for Interactive Games.”, MVA2007 IAPR Conference on Machine Vision Applications, Tokyo, Japan, May 16-8, 2007, 4 pgs. |
Park et al., “On-line locomotion generation based on motion blending”, ACM SIGGRAPH Symposium On Computer Animation. San Antonio, Jul. 21, 2002, 8 pages. |
Park et al., “On-line motion blending for real-time locomotion generation”, Computer Animation & Virtual Worlds. Wiley, UK, vol. 15, No. 3-4, Jul. 2004, 14 pages. |
Persson, Per, “ExMS: an animated and avatar-based messaging service for expressive peer communication.” Proceedings of the 2003 international ACM SIGGROUP conference on Supporting group work, ACM, 2003. |
Popovic et al., “Style-Based Inverse Kinematics”, ACM Transactions on Graphics, 23(3), pp. 522-531. |
Safonova et al., “Construction and optimal search of interpolated motion graphs”, ACM SIGGRAPH, 2007, 11 pgs. |
Sand et al., “Continuous Capture of Skin Deformation”, ACM Transactions on Graphics, 22(3), pp. 578-586, 2003. |
Scholkopf et al., “A Tutorial on support Vector Regression”, In Technical Report NC2-TR-1998-030. NeuroCOLT2, Oct. 1998. |
Seitz et al., “A comparison and evaluation of multi-view stereo reconstruction algorithms,” Computer vision and pattern recognition, 2006 IEEE Computer Society Conference on vol. 1, IEEE, 2006. |
Seo et al., “An Automatic Modeling of Human Bodies from Sizing Parameters”, In Symposium on Interactive 3D Graphics, 2003, pp. 19-26. |
Sloan et al., “Shape By Example”, In 2001 Symposium on Interactive 3D Graphics, pp. 135-144, 2001. |
Smola et al., “A Tutorial on Support Vector Regression”, Statistics and Computing London 14(3) pp. 199-222, 2004. |
Sumner et al., “Deformation Transfer for Triangle Meshes”, Proceedings of ACM SIGGRAPH 2004, 23(3), pp. 399-405, 2004. |
Szeliski et al., “Matching 3D Anatomical Surfaces with Non-rigid Deformations Using Octree-Splines”, International Journal of Computer Vision, 1996, 18,2, pp. 171-186. |
Tao et al., “Mean Value Coordinates for Closed Triangular Meshes”, Proceedings of ACM SIGGRAPH (2005), 6 pgs. |
Taylor et al., “Modeling Human Motion Using Binary Latent Variables”, Proc. of Advances in Neural Information Processing Systems (NIPS) 19, 8 pgs. |
Tena, J. Rafael, Fernando De la Torre, and Iain Matthews. “Interactive Region-Based Linear 3D Face Models.” ACM Transactions on Graphics (TOG). vol. 30. No. 4. ACM, 2011. |
Tung et al., “Topology Matching for 3D Video Compression Computer Vision and Pattern Recognition”, IEEE Conference Computer Vision and Pattern Recognition, 2007, Jun. 2007, 8 pgs. |
Vasilescu et al., “Multilinear Analysis of Image Ensembles: Tensorfaces”, European Conference on Computer Vision (ECCV), pp. 447-460, 2002. |
Vlasic et al., “Face Transfer with Multilinear Models”, ACM Transactions on Graphics 24(3), pp. 426-433, 2005. |
Vlasic et al., “Multilinear Models for Facial Synthesis”, SIGGRAPH Research Sketch, 2004. |
Von Luxburg, “A Tutorial on Spectral Clustering. Statistics and Computing”, 2007, 32 pgs. |
Wang et al., “Multi-weight Enveloping: Least Squares Approximation Techniques for Skin Animation”, ACM SIGGRAPH Symposium on Computer Animation, pp. 129-138, 2002. |
Weise et al., “Realtime Performance-Based Facial Animation.” ACM Transactions on Graphics (TOG) vol. 30, No. 4, ACM (2011). |
Weise, Thibaut, et al. “Face/off: Live facial puppetry.” Proceedings of the 2009 ACM SIGGRAPH/Eurographics Symposium on Computer animation. ACM, 2009. |
Wikipedia, “Morph target animation”, Last Modified Aug. 1, 2014, Printed Jan. 16, 2015, 3 pgs. |
Zhidong et al., “Control of motion in character animation”, Jul. 14, 2004, 841-848. |
Zhu, “Shape Recognition Based on Skeleton and Support Vector Machines”, ICIC 2007, CCIS 2, pp. 1035-1043, 2007. © Springer-Verlag Berlin Heidelberg 2007. |
Zordan et al., “Dynamic Response for Motion Capture Animation”, ACM Transactions on Graphics—Proceedings of ACM SIGGRAPH 2005, vol. 24, Issue 3, Jul. 2005, pp. 697-701. |
U.S. Appl. No. 13/681,120, filed Mar. 17, 2015, Office Action. |
U.S. Appl. No. 13/681,120, filed Dec. 1, 2015, Office Action. |
U.S. Appl. No. 13/681,120, filed Jun. 1, 2016, Office Action. |
U.S. Appl. No. 13/681,120, filed May 16, 2017, Office Action. |
U.S. Appl. No. 13/681,120, filed Oct. 25, 2017, Office Action. |
U.S. Appl. No. 13/681,120, filed Apr. 8, 2020, Notice of Allowance. |
U.S. Appl. No. 13/773,344, filed May 23, 2014, Office Action. |
U.S. Appl. No. 13/787,541, filed Mar. 12, 2015, Office Action. |
U.S. Appl. No. 13/787,541, filed Sep. 18, 2015, Notice of Allowance. |
U.S. Appl. No. 13/787,541, filed Feb. 26, 2016, Office Action. |
U.S. Appl. No. 13/787,541, filed Oct. 3, 2016, Office Action. |
U.S. Appl. No. 13/787,541, filed Feb. 21, 2017, Notice of Allowance. |
U.S. Appl. No. 13/787,541, filed Apr. 10, 2017, Notice of Allowance. |
U.S. Appl. No. 14/222,390, filed May 22, 2014, Office Action. |
U.S. Appl. No. 14/222,390, filed Oct. 16, 2014, Office Action. |
U.S. Appl. No. 14/222,390, filed Apr. 10, 2015, Office Action. |
U.S. Appl. No. 14/222,390, filed Oct. 29, 2015, Office Action. |
U.S. Appl. No. 14/222,390, filed Mar. 23, 2016, Office Action. |
U.S. Appl. No. 14/222,390, filed Oct. 26, 2016, Office Action. |
U.S. Appl. No. 15/044,970, filed Jul. 11, 2016, Preinterview 1st OA. |
U.S. Appl. No. 15/044,970, filed Dec. 1, 2016, Notice of Allowance. |
Number | Date | Country |
---|---|---|
20200334892 A1 | Oct 2020 | US |

Number | Date | Country |
---|---|---|
61561228 | Nov 2011 | US |

Relation | Number | Date | Country |
---|---|---|---|
Parent | 13681120 | Nov 2012 | US |
Child | 16919774 | | US |