The present invention relates generally to the generation of animation and more specifically to the concatenation of animations.
Three-dimensional (3D) character animation has seen significant growth in use and diffusion in the entertainment industry in the last decade. In most 3D computer animation systems, an animator defines a set of animation variables, or Avars, that form a simplified representation of a 3D character's anatomy. The Avars are often organized in a hierarchical model and, therefore, the collection of Avars for a 3D character can be referred to as its hierarchical model. Motion of the 3D character can be defined by changing the values of the Avars over time. The value of an Avar over time is referred to as the Avar's motion curve, and a sequence of motion can involve defining the motion curves for hundreds of Avars. The motion curves of all of a 3D character's Avars during a sequence of motion are collectively referred to as motion data.
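By way of illustration only, the following sketch (not part of any disclosed embodiment) shows one way motion data of this kind could be represented in code: each Avar has a motion curve of keyframed values that can be sampled over time. The Avar names and keyframe values are hypothetical.

```python
# Illustrative sketch (not from the disclosure): motion data as a set of
# per-Avar motion curves, each mapping time to a value.
from dataclasses import dataclass, field
from bisect import bisect_left
from typing import Dict, List, Tuple


@dataclass
class MotionCurve:
    """Keyframed value of a single Avar over time (seconds)."""
    keys: List[Tuple[float, float]] = field(default_factory=list)  # (time, value)

    def sample(self, t: float) -> float:
        """Linearly interpolate the Avar value at time t."""
        times = [k[0] for k in self.keys]
        i = bisect_left(times, t)
        if i == 0:
            return self.keys[0][1]
        if i == len(self.keys):
            return self.keys[-1][1]
        (t0, v0), (t1, v1) = self.keys[i - 1], self.keys[i]
        w = (t - t0) / (t1 - t0)
        return (1 - w) * v0 + w * v1


# "Motion data" for a character is simply the collection of curves for all Avars.
motion_data: Dict[str, MotionCurve] = {
    "hip_rotation_x": MotionCurve([(0.0, 0.0), (0.5, 12.0), (1.0, 0.0)]),
    "knee_left_flex": MotionCurve([(0.0, 5.0), (0.5, 60.0), (1.0, 5.0)]),
}
print(motion_data["knee_left_flex"].sample(0.25))  # -> 32.5
```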
An animator can directly animate a 3D character by manually defining the motion curves for the 3D character's Avars using an off-line software tool. Motion capture of a human or animal during a desired sequence of motion can also be used to generate motion data. Motion capture is a term used to describe a process of recording movement and translating the movement onto a digital model. A 3D character can be animated using the motion capture process to record the movement of points on the human or animal that correspond to the Avars of the 3D character during the motion. Motion capture has traditionally been performed by applying markers to the human or animal that can be mapped or retargeted to the Avars of the 3D character. However, markerless techniques have recently been developed that enable the animation of 3D characters using mesh based techniques. Markerless motion capture using mesh based techniques is described in U.S. Patent Publication No. 2008/0031512 entitled “Markerless Motion Capture System” to Mundermann, Corazza and Andriacchi, the disclosure of which is incorporated by reference herein in its entirety.
Animating a 3D character manually or using motion capture can be time consuming and cumbersome. As discussed above, the manual definition of a character's motion can involve a laborious process of defining and modifying hundreds of motion curves until a desired motion sequence is obtained. Motion capture requires the use of complex equipment and actors. In the event that the captured motion is not exactly as desired, the animator is faced with the choice of repeating the motion capture process, which increases cost, or attempting to manually edit the motion curves until the desired motion is obtained, which is difficult. The inability of animators to rapidly and inexpensively obtain complex motion data for a 3D character can represent a bottleneck for the generation of 3D animations.
Furthermore, transitions between animations are difficult given that the Avar values at the end of one motion may not correspond to the Avar values at the start of the next motion. For example, simply concatenating the motion data of sitting down with that of running would yield a choppy transition in which a character first ends in a sitting position and then begins in an erect running position. Realistic animation involves smooth transitions from the end of one motion into the start of the next motion. The inability of animators to rapidly and inexpensively concatenate motion data for a 3D character can represent another bottleneck for the generation of 3D animations.
Systems and methods for generating and concatenating 3D character animations are described including systems in which recommendations are made by the animation system concerning motions that smoothly transition when concatenated. One embodiment includes a server system connected to a communication network and configured to communicate with a user device that is also connected to the communication network. In addition, the server system is configured to generate a user interface that is accessible via the communication network, the server system is configured to receive high level descriptions of desired sequences of motion via the user interface, the server system is configured to generate synthetic motion data based on the high level descriptions and to concatenate the synthetic motion data, the server system is configured to stream the concatenated synthetic motion data to a rendering engine on the user device, and the user device is configured to render a 3D character animated using the streamed synthetic motion data.
In a further embodiment, the server system includes an application server and a web server that are configured to communicate, the application server is configured to communicate with a database of motion data, the web server is connected to the communication network, the application server is configured to generate the at least one motion model using the motion data and provide the at least one motion model to the web server, the web server is configured to generate the user interface that is accessible via the communication network, the web server is configured to receive the high level descriptions of the desired sequences of motion via the user interface, the web server is configured to use the at least one motion model to generate synthetic motion data based on the high level descriptions of the desired sequence of motion, and the web server is configured to stream the synthetic motion data to a rendering engine on the user device.
In another embodiment, the server system is configured to concatenate two sets of motion data by determining the optimal parameters for the transition between the sets of motion data using a similarity map.
In a still further embodiment, the server system is configured to score the similarity between concatenated motion data.
In still another embodiment, the server system is configured to refine a transition between concatenated motion data based upon the similarity score of the concatenated motion data.
In a yet further embodiment, the server system is configured to refine the transition between the concatenated motion data by inserting an intermediate motion data sequence between the concatenated motion data.
In yet another embodiment, the high level description of the desired sequence of motion includes at least a motion type, and at least one performance attribute.
In a further embodiment again, the performance attribute is a description of a physical characteristic or an expressive characteristic of the motion.
In another embodiment again, the motion type is expressed as one of a number of discrete types of motion.
In a further additional embodiment, the high level description of the desired sequence of motion includes a trajectory of the motion, which is specified using at least a start point and an end point.
In another additional embodiment, the performance attribute is expressed using at least one value from a continuous scale that corresponds to a high level characteristic that was used to describe differences between repeated sequences of motion in the motion data used to generate the motion models.
In another further embodiment, the high level description of the desired sequence of motion further describes a procedural attribute that is applied to all concatenated motions.
In a still yet further embodiment, the synthetic motion data is based upon a standard model for a 3D character, the server system is configured to receive a model of a 3D character from the user device via the communication network, the server system is configured to retarget the synthetic motion data to the received model of a 3D character, and the synthetic motion data streamed by the server system is the retargeted synthetic motion data.
Still yet another embodiment also includes a database including similarity scores for specific combinations of motion data, the server system is configured to receive a high level description of a sequence of motion via the user interface, the server system is configured to query the database based upon the high level description of a sequence of motion to obtain at least one similar motion, and the server system is configured to generate a user interface listing the at least one similar motion.
In a still further embodiment again, the server system is configured to receive a second high level description of a sequence of motion selected based upon the list of at least one similar motion.
An embodiment of the method of the invention includes receiving a plurality of high level descriptions of motion characteristics for a plurality of desired motion sequences from a user device via a user interface generated by a server system, generating synthetic motion data using the server system based upon the high level descriptions of the motion characteristics for a desired motion sequence, concatenating the synthetic motion data, streaming the concatenated motion data from the server system to the user device, and rendering an animation of a 3D character using the client device based upon the streamed synthetic motion data.
A further embodiment of the method of the invention further includes obtaining a custom model of a 3D character, and retargeting the concatenated motion data to the custom model prior to streaming the synthetic motion data to the user device.
In another embodiment of the method of the invention the custom model of a 3D character is uploaded to the server system from the user device.
In a still further embodiment of the method of the invention, the custom model of a 3D character is generated by the server system in response to a high level description of the 3D character obtained from the user device via a user interface generated by the server system.
A yet further embodiment of the method of the invention also includes scoring the similarity between concatenated motion data.
Yet another embodiment of the method of the invention also includes refining a transition between concatenated motion data based upon the similarity score of the concatenated motion data.
In a further embodiment again of the method of the invention, refining the transition between the concatenated motion data includes inserting an intermediate motion data sequence between the concatenated motion data.
In another embodiment again of the method of the invention, the high level description of motion characteristics includes at least a motion type, a trajectory for the motion, and at least one performance attribute.
In a further additional embodiment of the method of the invention, the performance attribute is a description of a physical characteristic or an expressive characteristic of the motion.
In another additional embodiment of the method of the invention, the motion type is expressed as one of a number of discrete types of motion.
In a still yet further embodiment of the method of the invention, the trajectory of the motion is specified using at least a start point and an end point.
In still yet another embodiment of the method of the invention, the performance attribute is expressed using at least one value from a continuous scale that corresponds to a high level characteristic that was used to describe differences between repeated sequences of motion in the motion data used to generate the motion models.
In a still further embodiment again of the method of the invention, the synthetic motion data is based upon a standard model for a 3D character and further including receiving a model of a 3D character from the user device via the communication network, retargeting the synthetic motion data to the received model of a 3D character, and streaming the retargeted synthetic motion data.
In still another embodiment again of the method of the invention, receiving a plurality of high level descriptions of motion characteristics for a plurality of desired motion sequences from a user device via a user interface generated by a server system further includes receiving a high level description of motion characteristics via the user interface, querying a database including similarity scores for specific combinations of motion data based upon the high level descriptions of motion characteristics to obtain at least one similar motion, generating a user interface listing the at least one similar motion, and receiving a second high level description of motion characteristics selected based upon the list of at least one similar motion.
Turning now to the drawings, systems and methods for generating and concatenating 3D character animations are illustrated. In many embodiments, an animation system concatenates motion data from several motions to create an animation sequence. In the context of concatenating motion data, the term motion can be used to describe a set of motion data that corresponds to a specific type of movement (e.g. walking, running, jumping, etc.). Motion data, which can be synthetic or actual, are values that can be applied to the Avars of a 3D character skeleton to animate the 3D character. In certain embodiments, combination processes are used to blend the motion data of several motions to create a smooth motion sequence. In many embodiments, animations are concatenated by determining an optimally defined pivot around which the motion data of two motions is blended, such as (but not limited to) a character pivoting on a specific foot or points in the motions where the hips are similarly located. In a number of embodiments, a new intermediate sequence or frame is generated when concatenating dissimilar motions so as to improve the smoothness of the transition between the motions. More than one intermediate sequence or frame can be used to concatenate motion data. Multiple embodiments allow for user control of aspects of concatenation, such as fine tuning of the animation.
In several embodiments, the animation system is configured to provide recommendations for motions that transition well when combined with already selected motions. Various embodiments are configured to rate animations generated using concatenated motion data. In many embodiments, a database of recommended motion combinations can be accessed to populate a list of recommended motions that can be concatenated with previously selected motions as part of a user interface.
In many embodiments, an animation system distributes processing between a user's computer and one or more remote servers. The user can select among animations and motion data provided by a server and give a high level description of the animation concatenation. Providing a high level description can allow for generating synthetic motion data corresponding to a desired concatenated motion sequence without directly editing the motion data. In several embodiments, the server is configured to generate synthetic motion data based upon the high level description of the desired concatenation sequence provided by the user using one of a number of motion models that the server can retarget to a 3D character. The motion data for the concatenated animation can then be streamed to software resident on the user's computer, which can render an animation of the 3D character using the motion data. In this way, a single set of motion models, concatenation, and retargeting software can be shared amongst different users without the servers bearing the processing burden of rendering each 3D animation concatenation using the generated synthetic motion data. The processing burden of rendering the animations is instead borne by the users' computers.
Animation System
A semi-schematic diagram of an animation system configured to generate and concatenate motion data in accordance with an embodiment of the invention is shown in
In many embodiments, the storage device 102 contains motion data that is used by the application server 104 to create one or more motion models. A motion model is a model that can generate synthetic motion data corresponding to a high level description of desired motion characteristics. The application server 104 deploys the motion model to the web server 106, which can use the motion model to create and concatenate synthetic motion data based upon a high level description. The term synthetic motion data describes motion data that is generated by a machine. Generation of synthetic motion data is described in U.S. patent application Ser. No. 12/753,032 to de Aguiar et al., entitled “Web Platform for Interactive Design, Synthesis and Delivery of 3D Character Motion Data”, filed Apr. 1, 2010, the disclosure of which is incorporated by reference herein in its entirety. The web server 106 can create a web based user interface that can be accessed via a user device 110 configured with an appropriate browser application. Users can use the web based user interface to concatenate motions based upon a high level description of desired motion combinations.
In several embodiments, the web based user interface enables an animator to provide a high level description of a desired sequence of motions. The web server 106 uses the high level description to generate and concatenate synthetic motion data in real time, which then can be retargeted and used to animate a specific 3D character. The 3D character can be resident on the server, generated by the server from a high level description in accordance with the process described in U.S. patent application Ser. No. 12/625,553 to Corazza et al. entitled “Real Time Generation of Animation-Ready 3D Character Models” filed Nov. 24, 2009, the disclosure of which is incorporated by reference herein in its entirety, or uploaded to the server from the user's computer. The concatenation of the selected motion data is discussed further below.
Concatenation of Motion Data
In a number of embodiments, a user selects specific motions to combine via a user interface. In many embodiments, the motion data for the selected motions is then generated or obtained and then concatenated. The transition between the two sets of motion data can be refined until an acceptable transition is obtained and the motion data used to animate a 3D character. Processes for concatenating motion data and improving the transition between different sets of concatenated motion data in accordance with embodiments of the invention are discussed further below.
Improving Transitions
In many embodiments, the similarity map built using a process similar to that outlined above takes into account the pose of the character skeleton in 3D space in the two motions. A 2D registration of the pose of the skeleton at the end of the first motion and the pose of the skeleton at the start of the second motion is performed by sampling each pose on the floor plane in order to make the similarity map independent of character position and orientation. This then provides a determination of the optimal parameters for the transition between the sets of motion data. In several embodiments, the similarity map is weighted so that higher similarity scores are assigned to transitions when key components of the character skeleton exhibit similarity. For example, the similarity map can give additional weight to key components of a character skeleton including (but not limited to) pivot points or joints in an ankle of the 3D character, the hips of the 3D character, and/or the center of mass of a 3D character.
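The following is a minimal illustrative sketch, in Python, of a floor-plane registration and weighted frame similarity of the kind described above; it is not the disclosed implementation. The joint set, the weights, the heading estimate, and the exponential mapping from distance to a similarity score are assumptions made only for the example.

```python
# Illustrative sketch, assuming motions are given as arrays of joint positions
# per frame: shape (num_frames, num_joints, 3). Joint names and weights are
# hypothetical; the description only requires extra weight on key components
# such as ankles, hips, or the center of mass.
import numpy as np

JOINTS = ["hips", "ankle_l", "ankle_r", "spine", "head"]
WEIGHTS = np.array([3.0, 2.0, 2.0, 1.0, 1.0])  # heavier weight on hips/ankles


def register_on_floor(pose, root=0):
    """Remove root translation and heading so the comparison is pose-only."""
    p = pose - pose[root] * np.array([1.0, 0.0, 1.0])   # zero x/z of the root
    forward = p[-1] - p[0]                               # crude heading proxy
    angle = np.arctan2(forward[0], forward[2])           # heading about y
    c, s = np.cos(-angle), np.sin(-angle)
    rot_y = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
    return p @ rot_y.T


def frame_similarity(pose_a, pose_b):
    """Weighted similarity in (0, 1]; 1 means identical registered poses."""
    a, b = register_on_floor(pose_a), register_on_floor(pose_b)
    dist = np.sqrt(((a - b) ** 2).sum(axis=1))            # per-joint distance
    return float(np.exp(-(WEIGHTS * dist).sum() / WEIGHTS.sum()))


def similarity_map(motion_a, motion_b, tail=10, head=10):
    """Score every pairing of the last `tail` frames of A with the first
    `head` frames of B; the best entry gives the transition parameters."""
    return np.array([[frame_similarity(fa, fb) for fb in motion_b[:head]]
                     for fa in motion_a[-tail:]])


# Example with two random 5-joint motions of 30 and 40 frames.
a = np.random.rand(30, len(JOINTS), 3)
b = np.random.rand(40, len(JOINTS), 3)
print(similarity_map(a, b).shape)  # (10, 10)
```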
The similarity map can be utilized to generate a transition using combination processes including (but not limited to) motion blending; linear or non-linear time warping; physics considerations such as conservation of momentum or speed; pose matching elements such as matching of the pivot node position and/or orientation; and IK constraints. Generating a transition can start with the spatial registration and time-warping of the first and additional motions according to the optimally defined parameters obtained in the manner outlined above, such as using a foot or the hips as a pivot. Then the optimal blending between the frames of the first motion and frames of the additional motion can be computed. Further corrections and/or improvements can be applied to the blended motion, including but not limited to time-warping correction, smoothness enforcement, and corrections accounting for conservation of momentum or speed. A number of controls can be open to user interaction for fine tuning of the motions for 3D character animation.
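A simplified sketch of such a transition is shown below, assuming the similarity map has already selected an overlap window; only spatial registration of the root, a linear time warp, and a linear cross-fade are shown, and the further corrections described above (IK constraints, momentum preservation, smoothness enforcement) are omitted.

```python
# Illustrative sketch: concatenate two motions, given as (frames, joints, 3)
# arrays, with an n-frame cross-fade around the seam chosen by the similarity
# map. This is a sketch under stated assumptions, not the disclosed system.
import numpy as np


def blend_transition(motion_a, motion_b, n=10):
    # Spatial registration: translate B so its first-frame root (joint 0)
    # coincides on the floor plane with A's last-frame root.
    offset = (motion_a[-1, 0] - motion_b[0, 0]) * np.array([1.0, 0.0, 1.0])
    b = motion_b + offset

    # Linear time warp: resample both overlap windows to a common length n.
    def resample(clip, length):
        idx = np.linspace(0, len(clip) - 1, length)
        lo = np.floor(idx).astype(int)
        hi = np.minimum(lo + 1, len(clip) - 1)
        w = (idx - lo)[:, None, None]
        return (1 - w) * clip[lo] + w * clip[hi]

    tail = resample(motion_a[-n:], n)
    head = resample(b[:n], n)

    # Cross-fade weights ease from A into B over the overlap window.
    w = np.linspace(0.0, 1.0, n)[:, None, None]
    overlap = (1 - w) * tail + w * head

    return np.concatenate([motion_a[:-n], overlap, b[n:]], axis=0)


# Example usage with random stand-in motion data.
a = np.random.rand(50, 20, 3)
b = np.random.rand(60, 20, 3)
print(blend_transition(a, b, n=10).shape)  # (100, 20, 3)
```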
In numerous embodiments, high quality transitions are defined as transitions with an acceptable similarity value. An automatic method can be implemented to rate the quality of transitions, without direct user input, based upon the similarity value of the transition. If a transition has a similarity score exceeding a predetermined threshold, then the transition is determined to be acceptable. If the transition fails to meet a predetermined level of quality and/or similarity, then a short sequence of one or more frames of motion can be used to serve as an intermediate sequence, or “bridge”, between the two animations. In many embodiments, the process of generating transitions repeats until a “bridge” is obtained that achieves an appropriate level of similarity.
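The acceptance test and bridge search described above could be sketched as follows; the threshold value and the scoring, blending, and bridge-library inputs are placeholders standing in for the processes described elsewhere in this description.

```python
# Illustrative sketch of the acceptance test: a transition is kept if its
# similarity score exceeds a threshold; otherwise candidate "bridge"
# sequences are tried until one produces an acceptable transition.

ACCEPTABLE_SIMILARITY = 0.8  # hypothetical threshold


def concatenate_with_bridge(motion_a, motion_b, bridges, score, blend):
    direct = blend(motion_a, motion_b)
    if score(motion_a, motion_b) >= ACCEPTABLE_SIMILARITY:
        return direct

    # Search bridge candidates; keep the first whose two transitions both
    # meet the threshold (a real system may instead rank all candidates).
    for bridge in bridges:
        if (score(motion_a, bridge) >= ACCEPTABLE_SIMILARITY and
                score(bridge, motion_b) >= ACCEPTABLE_SIMILARITY):
            return blend(blend(motion_a, bridge), motion_b)

    return direct  # fall back to the best direct transition
```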
In many embodiments, the intermediate sequence is a new intermediate sequence of one or more frames that is found to produce a better quality transition. In one embodiment motion graphs are utilized to identify an appropriate bridge sequence from a database of short motion sequences. In another embodiment an extensive search for a “bridge” sequence is done over a database of motions. These motions can be described through metadata and a taxonomy tree to allow for real-time search. The database of motions can be a database of possible motion combinations. For example, to concatenate a “walking” motion with a “talking while seated” motion, the transition between the two motions can involve a “bridge” motion sequence in which the character skeleton transitions from a walking stance to a seated pose. This can be found in the database by searching for an intermediate sequence that has a good similarity with the selected motions. Similarity can be observed by analysis of the motion data of the “bridge” motion sequence or by reviewing metadata describing the types of motions that the “bridge” motion sequence is suited for bridging.
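One way such a metadata lookup could be sketched is shown below; the taxonomy labels and clip names are hypothetical and merely stand in for the metadata and taxonomy tree described above.

```python
# Illustrative sketch, assuming each stored bridge clip carries metadata
# naming the kinds of motion it is suited to join.
BRIDGE_LIBRARY = {
    ("locomotion.walk", "seated.talk"): "walk_to_sit_bridge.clip",
    ("locomotion.run", "locomotion.walk"): "run_to_walk_bridge.clip",
}


def find_bridge(from_label: str, to_label: str):
    """Look up a bridge by metadata; fall back to coarser taxonomy levels."""
    key = (from_label, to_label)
    if key in BRIDGE_LIBRARY:
        return BRIDGE_LIBRARY[key]
    # Try the parent categories in the taxonomy tree (e.g. "locomotion").
    coarse = (from_label.split(".")[0], to_label.split(".")[0])
    return BRIDGE_LIBRARY.get(coarse)


print(find_bridge("locomotion.walk", "seated.talk"))  # -> walk_to_sit_bridge.clip
```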
In many embodiments, the process of generating acceptable transitions is simplified by providing recommendations concerning combinations of motion data that produce acceptable transitions when concatenated. Systems and methods for recommending combinations of motion data that produce acceptable transitions when concatenated in accordance with embodiments of the invention are discussed below.
Motion Combination Database
In many embodiments, the process of concatenating sequences of motion data is simplified by creating a database of metadata concerning the similarity of different sets of motion data. In many systems, a motion model enables the generation of synthetic motion data for variations on a specific type of motion. For example, a motion model of walking can be utilized to generate a fast walk or a slow walk. In addition, the motion model could have the capability of varying stride length. Therefore, metadata concerning the similarity of specific sequences of motion data generated by motion models based upon specific high level descriptions can be useful in providing recommendations concerning motion models and/or high level descriptions of motion that can be used to generate a motion similar to a sequence of motion data that has already been generated. In a number of embodiments, metadata concerning the similarity of different sets of synthetic motion data that can be generated by a set of motion models is obtained by generating synthetic motion data using the motion models and automatically scoring the similarity of different concatenated sequences. The similarity scores can then be stored and used by an animation system to recommend a set of motion data that will transition well from an existing set of motion data.
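As an illustrative sketch only, populating such a database could proceed as follows, where generate_motion and transition_score are placeholders standing in for the motion models and the similarity scoring process described in this description.

```python
# Illustrative sketch: generate synthetic clips from a few high level
# descriptions, concatenate every ordered pair, and store the resulting
# transition score as similarity metadata.
from itertools import permutations


def build_similarity_database(descriptions, generate_motion, transition_score):
    clips = {desc: generate_motion(desc) for desc in descriptions}
    database = {}
    for first, second in permutations(descriptions, 2):
        database[(first, second)] = transition_score(clips[first], clips[second])
    return database
```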
When a database of metadata concerning the similarity of synthetic motion data generated using specific high level motion descriptions is generated, recommendations are typically based upon automatic determinations of the smoothness and/or quality of the transition between the sets of concatenated motion data. For example, a set of motion data indicative of a running motion can be smoothly concatenated with a set of motion data indicative of walking. Processes for generating synthetic motion and scoring the similarity between sets of synthetic motion data for the purposes of populating a database of metadata concerning the similarity of synthetic motion data generated using specific combinations of motion models and/or high level motion descriptions in accordance with embodiments of the invention are discussed further below.
Generation of Motion Models from Motion Data
The motion data used to build a motion model in accordance with an embodiment of the invention can be actual motion data obtained via motion capture (either marker based or markerless), or can be synthetic motion data created manually by an animator using a conventional off-line animation software application. For the motion data to be used in the training of the motion model, each sequence of motion data is described using a set of high level characteristics appropriate to the application. The motion data is then used to build (404) a motion model. As discussed above, any model appropriate to the application that is capable of generating synthetic motion data from a high level description of a desired motion sequence can be used. In a number of embodiments, the motion model uses a combination of interpolation and linear or non-linear time warping of motion capture data to generate synthetic motion data. In several embodiments, the motion model is a statistical model trained using supervised learning. After motion models are built, the motion models can be used to generate synthetic motion data. As noted above, synthetic motion data generated by the models can be concatenated and the high level descriptions used to generate the synthetic motion data can be rated to create a database of recommendations concerning the high level motion descriptions that will generate motion data that can be smoothly concatenated. Processes for scoring the similarity of concatenated sequences of motion data in accordance with embodiments of the invention are discussed further below.
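A minimal sketch of one such model, assuming synthetic motion is produced by time-warping example clips to a common length and interpolating them with weights derived from the requested high level attributes, is shown below; the attribute names and example data are hypothetical.

```python
# Illustrative sketch of a motion model built from example clips described by
# high level characteristics. Clips are (frames, joints, 3) arrays.
import numpy as np


def time_warp(clip, length):
    """Linearly resample a clip to `length` frames."""
    idx = np.linspace(0, len(clip) - 1, length)
    lo = np.floor(idx).astype(int)
    hi = np.minimum(lo + 1, len(clip) - 1)
    w = (idx - lo)[:, None, None]
    return (1 - w) * clip[lo] + w * clip[hi]


def synthesize(examples, requested, length=60):
    """examples: list of (attributes dict, clip); requested: attributes dict."""
    keys = sorted(requested)
    target = np.array([requested[k] for k in keys])
    warped, weights = [], []
    for attrs, clip in examples:
        vec = np.array([attrs[k] for k in keys])
        weights.append(1.0 / (np.linalg.norm(vec - target) + 1e-6))
        warped.append(time_warp(clip, length))
    weights = np.array(weights) / np.sum(weights)
    return np.tensordot(weights, np.stack(warped), axes=1)


# Example: two walking captures described by hypothetical speed/stride values.
slow = ({"speed": 0.5, "stride": 0.6}, np.random.rand(40, 20, 3))
fast = ({"speed": 2.0, "stride": 1.2}, np.random.rand(80, 20, 3))
clip = synthesize([slow, fast], {"speed": 1.2, "stride": 0.9})
print(clip.shape)  # (60, 20, 3)
```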
Filtering for Acceptable Similarity Value
In several embodiments, similarity values are obtained automatically. For example, similarity values can be the result of a parametric function. In other embodiments, similarity values can be manually assigned by animators. In still other embodiments, similarity values can be determined by a combination of automatic and manual processes. In many embodiments, the minimum acceptable similarity value can be selected by a user or be subject to other considerations. For example, the minimum acceptable similarity value can differ depending on the types of motions that are being combined.
Similarity Value Determination
The ability to characterize the similarity of concatenated motions can be useful in providing an animator with feedback concerning a specific transition and/or with recommendations concerning suitable motions for combination with a selected motion. A variety of processes can be used to score the similarity of concatenated motion data. In many embodiments, a training set of motion data is concatenated and animators score the quality of the transitions. The scored training data set can then be used to learn the attributes of a high quality transition and build an automated process for scoring the transitions.
In several embodiments, the parameters that are defined relate to the position of a 3D character's feet, hips, or other joints or limbs. In other embodiments, any of a variety of parameters can be utilized as appropriate to the requirements of a specific application.
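As an illustrative sketch, a scored training set of the kind described above could be assembled by reducing each transition to a feature vector of such parameters and pairing it with the animator-assigned score; the particular joints selected here are an assumption made only for the example.

```python
# Illustrative sketch: build a training set of (feature vector, animator score)
# pairs for learning an automated transition scorer.
import numpy as np


def transition_features(end_pose, start_pose, joints=(0, 3, 7)):
    """Differences of selected joint positions (e.g. hips, feet) across the
    seam between the last frame of one motion and the first frame of the next."""
    diffs = [end_pose[j] - start_pose[j] for j in joints]
    return np.concatenate(diffs)  # one flat feature vector per transition


def build_training_set(transitions, animator_scores):
    """transitions: list of (end_pose, start_pose); animator_scores: floats."""
    X = np.stack([transition_features(e, s) for e, s in transitions])
    y = np.array(animator_scores)
    return X, y
```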
In many embodiments, a parametric equation is designed to model the similarities between two frames with a set of constraints that operate on at least one of the relevant parameters. Different parameters may be used depending on the requirements of the similarity determination. For example, if an animator focuses only on the legs of an animated character while jumping, relevant parameters associated with the arms can be disregarded. In several embodiments, the learning process involves use of a Support Vector Machine, where the animation parameters define the state vector. In many embodiments of the invention, determination of the parametric equation is performed using multivariate regression.
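A minimal sketch of fitting such a scorer, assuming scikit-learn is available, is shown below; either a support vector regressor or an ordinary multivariate linear regression is fit to the scored training set and then used to score new transitions automatically. The stand-in data is random and purely illustrative.

```python
# Illustrative sketch: learn a transition-quality scorer from scored examples.
import numpy as np
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression


def fit_similarity_scorer(X, y, use_svm=True):
    model = SVR(kernel="rbf") if use_svm else LinearRegression()
    model.fit(X, y)
    return model


# Toy usage with random stand-in data (9 features per transition).
X = np.random.rand(50, 9)
y = np.random.rand(50)
scorer = fit_similarity_scorer(X, y)
print(scorer.predict(X[:1]))  # predicted similarity for one transition
```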
Recommendation of Additional Motions for Concatenation
As discussed above, a database containing metadata of possible motion combinations can be incorporated into an animation system in accordance with embodiments of the invention. When a user selects a motion by providing a high level description of the motion to the animation system, the animation system can query the database to identify motion models and/or high level descriptions that can be used to produce motion data that has a high similarity with the selected motion. The process can be continuous. For example, a user can select a running motion, then a walking motion, then a jumping motion, and then a crawling motion. Likewise, motions may also be added between already selected motions in a sequence of motions, such as later adding a skipping motion between the motions of running and walking. The choice of concatenation between motions is fluid and can be modified at any time. In various embodiments, a 3D character can be animated in real time using the concatenated motion data so that an animator can independently verify the smoothness of transitions.
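As an illustrative sketch only, a recommendation query against the similarity database described above might look as follows; the motion names, threshold, and database contents are hypothetical.

```python
# Illustrative sketch: recommend the motions that transition best from the
# motion the user just selected, using stored pairwise transition scores.
def recommend_next_motions(selected, similarity_db, top_k=5, threshold=0.8):
    candidates = [(second, score)
                  for (first, second), score in similarity_db.items()
                  if first == selected and score >= threshold]
    candidates.sort(key=lambda item: item[1], reverse=True)
    return [name for name, _ in candidates[:top_k]]


# Hypothetical usage with a hand-filled database.
db = {("run", "walk"): 0.93, ("run", "jump"): 0.85, ("run", "sit"): 0.42}
print(recommend_next_motions("run", db))  # -> ['walk', 'jump']
```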
User Interface
In the illustrated embodiment, a user is able to select a first animation 1302 and concatenate additional motions 1312 to that first animation to create a sequence of concatenated animations. In instances where the selected motions do not transition smoothly, a short “bridging” transition motion 1308 can be selected by the animator and/or recommended by the animation system to improve the smoothness of the transition. Although a specific user interface is illustrated in
Although the present invention has been described in certain specific aspects, many additional modifications and variations would be apparent to those skilled in the art. It is therefore to be understood that the present invention may be practiced otherwise than specifically described, including various changes in the implementation. Thus, embodiments of the present invention should be considered in all respects as illustrative and not restrictive.
The current application claims priority to U.S. Provisional Application No. 61/328,934, filed Apr. 28, 2010, the disclosure of which is incorporated herein by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
6047078 | Kang | Apr 2000 | A |
6088042 | Handelman et al. | Jul 2000 | A |
6278466 | Chen | Aug 2001 | B1 |
6552729 | Di Bernardo et al. | Apr 2003 | B1 |
6554706 | Kim et al. | Apr 2003 | B2 |
6700586 | Demers | Mar 2004 | B1 |
6714200 | Talnykin et al. | Mar 2004 | B1 |
7168953 | Poggio et al. | Jan 2007 | B1 |
7209139 | Keet et al. | Apr 2007 | B1 |
7522165 | Weaver | Apr 2009 | B2 |
7937253 | Anast et al. | May 2011 | B2 |
8749556 | De Aguiar et al. | Jun 2014 | B2 |
8797328 | Corazza et al. | Aug 2014 | B2 |
20020050988 | Petrov et al. | May 2002 | A1 |
20030164829 | Bregler et al. | Sep 2003 | A1 |
20030169907 | Edwards et al. | Sep 2003 | A1 |
20030208116 | Liang et al. | Nov 2003 | A1 |
20030215130 | Nakamura et al. | Nov 2003 | A1 |
20040021660 | Ng-Thow-Hing et al. | Feb 2004 | A1 |
20040049309 | Gardner et al. | Mar 2004 | A1 |
20040210427 | Marschner et al. | Oct 2004 | A1 |
20040227752 | McCartha et al. | Nov 2004 | A1 |
20050264572 | Anast et al. | Dec 2005 | A1 |
20060002631 | Fu et al. | Jan 2006 | A1 |
20060109274 | Alvarez et al. | May 2006 | A1 |
20060134585 | Adamo-villani et al. | Jun 2006 | A1 |
20060171590 | Lu et al. | Aug 2006 | A1 |
20060245618 | Boregowda et al. | Nov 2006 | A1 |
20060267978 | Litke et al. | Nov 2006 | A1 |
20070091085 | Wang et al. | Apr 2007 | A1 |
20070104351 | Yang et al. | May 2007 | A1 |
20070182736 | Weaver | Aug 2007 | A1 |
20080024487 | Isner et al. | Jan 2008 | A1 |
20080030497 | Hu et al. | Feb 2008 | A1 |
20080031512 | Mundermann et al. | Feb 2008 | A1 |
20080043021 | Huang et al. | Feb 2008 | A1 |
20080152213 | Medioni et al. | Jun 2008 | A1 |
20080158224 | Wong et al. | Jul 2008 | A1 |
20080170077 | Sullivan et al. | Jul 2008 | A1 |
20080180448 | Anguelov et al. | Jul 2008 | A1 |
20080187246 | Andres Del Valle | Aug 2008 | A1 |
20080252596 | Bell et al. | Oct 2008 | A1 |
20090027337 | Hildreth | Jan 2009 | A1 |
20090067730 | Schneiderman | Mar 2009 | A1 |
20090195544 | Wrinch | Aug 2009 | A1 |
20090196466 | Capata et al. | Aug 2009 | A1 |
20090231347 | Omote | Sep 2009 | A1 |
20100020073 | Corazza et al. | Jan 2010 | A1 |
20100073361 | Taylor et al. | Mar 2010 | A1 |
20100134490 | Corazza et al. | Jun 2010 | A1 |
20100149179 | Aguiar et al. | Jun 2010 | A1 |
20100238182 | Geisner et al. | Sep 2010 | A1 |
20100253703 | Ostermann | Oct 2010 | A1 |
20100259547 | De Aguiar et al. | Oct 2010 | A1 |
20100278405 | Kakadiaris et al. | Nov 2010 | A1 |
20100285877 | Corazza | Nov 2010 | A1 |
20110292034 | Corazza et al. | Dec 2011 | A1 |
20120019517 | Corazza et al. | Jan 2012 | A1 |
20130021348 | Corazza et al. | Jan 2013 | A1 |
20130127853 | Corazza et al. | May 2013 | A1 |
20130215113 | Corazza et al. | Aug 2013 | A1 |
20130235045 | Corazza et al. | Sep 2013 | A1 |
20140160116 | De Aguiar et al. | Jun 2014 | A1 |
20140204084 | Corazza et al. | Jul 2014 | A1 |
Number | Date | Country | |
---|---|---|---|
20120038628 A1 | Feb 2012 | US |
Number | Date | Country | |
---|---|---|---|
61328934 | Apr 2010 | US |