The present disclosure relates to systems and techniques for animation generation. More specifically, this disclosure relates to machine learning techniques for dynamically generating animation of characters from motion capture video.
Electronic games are increasingly becoming more realistic due to an increase in available processing resources. This increase in realism may allow for more realistic gameplay experiences. For example, elements that form an in-game world, such as characters, may be more realistically presented. In this example, the elements may be increasingly rendered at higher resolutions, with more detailed textures, with more detailed underlying meshes, and so on. While this added realism may be beneficial to an end-user of an electronic game, it may place a substantial burden on electronic game developers. As an example, electronic game developers may be required to create very rich, and detailed, models of characters. As another example, electronic game designers may be required to create fluid, lifelike, movements of the characters.
With respect to the example of movement, characters may be designed to realistically adjust their arms, legs, and so on, while traversing an in-game world. In this way, the characters may walk, run, jump, and so on, in a lifelike manner. With respect to a sports electronic game, substantial time may be spent ensuring that the characters appear to mimic real-world sports players. For example, electronic game designers may spend substantial time fine-tuning movements of an underlying character model. Movement of a character model may be, at least in part, implemented based on movement of an underlying skeleton. For example, a skeleton may include a multitude of objects (e.g., bones or joints) which may represent a portion of the character model. As an example, a first object may be a finger while a second object may correspond to a wrist. The skeleton may therefore represent an underlying form on which the character model is built. In this way, movement of the skeleton may cause a corresponding adjustment of the character model.
To create realistic movement, an electronic game designer may therefore adjust positions of the above-described objects included in the skeleton. For example, the electronic game designer may create realistic running via adjustment of specific objects which form a character model's legs. This hand-tuned technique for enabling character movement results in substantial complexity and consumes substantial development time.
The systems, methods, and devices of this disclosure each have several innovative aspects, no single one of which is solely responsible for all of the desirable attributes disclosed herein.
Particular embodiments of the subject matter described in this specification can be implemented so as to realize one or more of the following advantages. Utilizing the techniques described herein, realistic motion may be rapidly generated for character models. For example, the realistic motion can be configured for use in electronic games. As will be described, the dynamic animation generation system can provide a deep learning framework to produce a large variety of martial arts movements in a controllable manner from unstructured motion capture data. The dynamic animation generation system can imitate animation layering using neural networks with the aim of overcoming the typical challenges when mixing, blending and editing movements from unaligned motion sources. The dynamic animation generation system can synthesize novel movements from given reference motions and simple user controls, generate unseen sequences of locomotion, punching, kicking, avoiding and combinations thereof, and also reconstruct signature motions of different fighters, as well as close-character interactions including clinching and carrying, by learning the spatial joint relationships. For achieving this task, the dynamic animation generation system can adopt a modular framework that is composed of a motion generator, which maps the trajectories of a number of key joints and the root trajectory to the full-body motion, and a set of different control modules that map the user inputs to such trajectories.
One embodiment discloses a computer-implemented method for dynamically generating animation of a virtual character performing certain actions in a virtual environment of an instance of a video game, the method comprising: receiving a current frame of a virtual character within a virtual environment of an instance of a video game, wherein the current frame includes current pose data for the virtual character; identifying a plurality of possible behaviors for the virtual character for the next frame based on the current pose data in the current frame, wherein the next frame is a subsequent frame to the current frame; receiving, from a user of the video game, an input to perform at least a first and second behavior of the plurality of possible behaviors; determining a plurality of pose data for the first and second behavior; performing layering of the plurality of pose data corresponding to the first and second behavior on the current pose data to generate layered data; applying the layered data to a gating network to generate weights; and applying the weights to a pose predictor network configured to generate next pose data for the next frame.
In some embodiments, the gating network receives velocity magnitudes of future joint trajectories for the next pose data, wherein the weights generated by the gating network are blended weights of the future joint trajectories.
In some embodiments, performing layering comprises applying additive layering to the pose data corresponding to the first and second behavior.
In some embodiments, performing layering comprises applying override layering to the pose data corresponding to the first and second behavior.
In some embodiments, performing layering comprises applying blend layering to the pose data corresponding to the first and second behavior.
In some embodiments, the pose predictor network blends weights of a fixed number of structurally identical networks.
In some embodiments, applying the layered data to the gating network comprises applying velocity magnitudes of future joint trajectories for the layered data, wherein the weights generated by the gating network comprise blended weights dictating the influence of each of the structurally identical networks.
In some embodiments, the method further comprises applying the current frame with current pose data to the pose predictor network, wherein the pose predictor network is configured to generate next pose data for the next frame based on the current pose data.
Some embodiments include a system comprising one or more processors and non-transitory computer storage media storing instructions that when executed by the one or more processors, cause the one or more processors to perform operations comprising: receiving a current frame of a virtual character within a virtual environment of an instance of a video game, wherein the current frame includes current pose data for the virtual character; identifying a plurality of possible behaviors for the virtual character for the next frame based on the current pose data in the current frame, wherein the next frame is a subsequent frame to the current frame; receiving, from a user of the video game, an input to perform at least a first and second behavior of the plurality of possible behaviors; determining a plurality of pose data for the first and second behavior; performing layering of the plurality of pose data corresponding to the first and second behavior on the current pose data to generate layered data; applying the layered data to a gating network to generate weights; and applying the weights to a pose predictor network configured to generate next pose data for the next frame.
In some embodiments, the next pose data for the next frame does not match pose data previously stored by the system.
In some embodiments, the gating network applies gating variables according to the following:
X_i = \{C_{i+1}, P_i, g_i\}.
In some embodiments, the pose predictor network generates the next pose data according to the following:
(C_{i+1}, P_i) \rightarrow P_{i+1}.
In some embodiments, the operations further comprise: mapping trajectories of a number of key joints and a root trajectory of the virtual character in the current frame, wherein the plurality of possible behaviors are identified based on the mapped trajectories and root trajectory.
In some embodiments, the plurality of possible behaviors for the virtual character for the next frame are identified by a neural network configured to determine possible behaviors for the virtual character for the next frame based on the current frame.
In some embodiments, the plurality of possible behaviors for the virtual character for the next frame are identified by a motion matching system for the virtual character in the instance of the video game.
Some embodiments include a non-transitory computer storage media storing instructions that when executed by a system of one or more processors, cause the one or more processors to perform operations comprising: receiving a current frame of a virtual character within a virtual environment of an instance of a video game, wherein the current frame includes current pose data for the virtual character; identifying a plurality of possible behaviors for the virtual character for the next frame based on the current pose data in the current frame, wherein the next frame is a subsequent frame to the current frame; receiving, from a user of the video game, an input to perform at least a first and second behavior of the plurality of possible behaviors; determining a plurality of pose data for the first and second behavior; performing layering of the plurality of pose data corresponding to the first and second behavior on the current pose data to generate layered data; applying the layered data to a gating network to generate weights; and applying the weights to a pose predictor network configured to generate next pose data for the next frame.
In some embodiments, identifying the plurality of possible behaviors is further based on a distance from an opponent.
In some embodiments, the operations further comprise determining the distance from the opponent based on a root position of the virtual character and a root position for the opponent.
In some embodiments, the operations further comprise determining the distance from the opponent based on position and velocity information of a body limb for the virtual character and position and velocity information of a body limb for the opponent.
Although certain embodiments and examples are disclosed herein, inventive subject matter extends beyond the examples in the specifically disclosed embodiments to other alternative embodiments and/or uses, and to modifications and equivalents thereof.
Throughout the drawings, reference numbers are re-used to indicate correspondence between referenced elements. The drawings are provided to illustrate embodiments of the subject matter described herein and not to limit the scope thereof.
Overview
This specification describes, among other things, technical improvements with respect to generation of motion for characters configured for use in electronic games. As will be described, a dynamic animation generation system described herein may implement a machine learning model to generate character movements, such as fighting motions in martial arts. The dynamic animation generation system can layer multiple animations, where different animations can be combined to create a single animation.
Interactively synthesizing novel combinations and variations of character movements from different motion skills is a key problem in computer animation. Traditional systems apply one of three types of layering. The first type is called additive layering.
When the “kicking” layer is applied, the motion of the hip, the legs, and the joints for “kicking” is layered on top of the motion of the hip, the legs, and the joints for the underlying base layer.
The second type of layering is called override layering.
The third type of layering is called transition layering.
Another problem of traditional systems is that they typically store saved animations for particular motions, such as one motion for dribbling, another for jumping, and another for crouching. However, when combining kicking or punching, for example, there are hundreds of different types of punches and kicks, and thus creating and assigning hundreds of different labels for punching and kicking is not practical. Traditional systems cannot simply create a label for each type of kick and punch in order to apply one of the above types of layering, and even if they did, the result would still suffer from the unrealistic motions discussed above.
Interactive applications that render virtual characters in motion, like video games, virtual reality and various kinds of simulations, demand an increasing volume of high-quality and controllable animations. Regardless of the source of these animations, motion captured or keyframed, it is time-consuming and technically challenging to explicitly cover the entirety of required movements in a scalable and controllable fashion that is easy to use. Ideally, systems would synthesize new motion by generalizing from examples, using a compact and efficient model that can adapt to unseen situations and novel user inputs. Recently, data-driven approaches have been demonstrated to be capable of learning such models, but they come with some key challenges: First, end-to-end systems concatenate control signals as conditions on top of the animations in order to guide the character movements by the user. However, since those features are often abstract, such as style labels or simplified goal variables to cause an action, the prediction can lead to averaging artifacts due to the inherent ambiguity in the input signal. Particularly for martial arts movements, defining such features to accurately cover all possible motion variations can be very challenging. Second, selecting the right features to control the movements is often task-specific, forcing retraining of the entire system not only whenever the application space changes, but also until the appearance of the learned movements becomes as desired. This leads to increasingly long iteration times and can become infeasible for very large datasets. Lastly, such methods typically do not provide a transparent interface for animators to intuitively control the motion generation process. It is often unclear not only how to define such control features, but also how the network responds to them, and sometimes also how to provide, combine or modify them by the user during runtime.
In some embodiments, the dynamic animation generation system addresses such issues using a deep learning framework capable of synthesizing fighting animations from given reference movements into novel and unseen sequences, combinations or variations thereof in a controllable manner. The dynamic animation generation system can overcome the issues of traditional blending and layering techniques common in games, which suffer from artifacts that violate physics and often break the character pose into unnatural configurations, and can do so without requiring changes in the workflows that animators are used to. The dynamic animation generation system can synthesize a large variety of character movements and actions, including locomotion, punching, kicking, avoiding and character-interactions in high quality while avoiding intensive manual labour when working with unstructured large-scale datasets. In some embodiments, motion can be generated for biped and/or human characters. In some embodiments, motion can be generated for quadruped characters.
In some embodiments, the dynamic animation generation system alleviates and mitigates the issues above by applying a neural network to combine a plurality of different motions.
In some embodiments, the dynamic animation generation system can receive input from the user, such as from a controller, to perform a series of actions. The dynamic animation generation system can generate character animation that performs the motions, such as a kick and punch, while also appearing realistic.
In some embodiments, the dynamic animation generation system can provide a deep learning framework to produce a large variety of martial arts movements in a controllable manner from unstructured motion capture data. The dynamic animation generation system can imitate animation layering using neural networks with the aim of overcoming the typical challenges when mixing, blending and editing movements from unaligned motion sources. The dynamic animation generation system can synthesize novel movements from given reference motions and simple user controls, generate unseen sequences of locomotion, punching, kicking, avoiding and combinations thereof, and also reconstruct signature motions of different fighters, as well as close-character interactions including clinching and carrying, by learning the spatial joint relationships. For achieving this task, the dynamic animation generation system can adopt a modular framework that is composed of a motion generator, which maps the trajectories of a number of key joints and the root trajectory to the full-body motion, and a set of different control modules that map the user inputs to such trajectories. The motion generator functions as a motion manifold that not only projects novel mixed/edited trajectories to natural full-body motion, but also synthesizes realistic transitions between different motions. The control modules are task dependent and can be developed and trained separately by engineers to include novel motion tasks, which greatly reduces network iteration time when working with large-scale datasets. The dynamic animation generation system provides a transparent control interface for animators that allows modifying or combining movements after network training, and enables iterative adding of different motion tasks and behaviors. The dynamic animation generation system can be used for offline and online motion generation alike, and is relevant for real-time applications such as computer games.
Dynamic Animation Generation System Architecture
In some embodiments, the current frame i 502 can be a current frame of a virtual character within a virtual environment of an instance of a video game. The current frame can include current pose data for the virtual character, such as joint trajectory information. For example, at block 552, the dynamic animation generation system can receive a current frame of a video with a virtual character, such as frame i 502.
In some embodiments, the dynamic animation generation system can utilize a set of independent control modules for different actions or behaviors: these controllers can be in the form of another neural network, an existing motion sequence, or another suitable computational framework, where each of the controllers produces future motion trajectories going into a shared control interface. At block 553, the dynamic animation generation system can identify possible behaviors for the virtual character based on the current frame. The control modules 503 can include a number of behaviors, such as behaviors 1, 2, 3, 4, 5, that can be performed from the current frame i 502. For example, behavior 1 can be an idling behavior, behavior 2 can be a punch, behavior 3 can be a kick, behavior 4 can be a neural network that generates trajectories for a given opponent motion, and so on. The control modules 503 can generate joint trajectories for each of the different behaviors. In some embodiments, the joint trajectories for the behaviors can be generated by a neural network. In other embodiments, the joint trajectories for the behaviors are generated by a motion matching system for a character in a current instance of a virtual environment for a video game. In other embodiments, the joint trajectories for the behaviors are generated by any animation system that provides a set of trajectories for motions to be followed for different behaviors and tasks.
In some embodiments, the control modules 503 can send joint trajectories to the control interface 504. The control modules 503 send joint trajectories of the possible movements that the user can perform from the animation character position in Frame i 502. The control interface 504 can receive input, such as from a user, to perform a plurality of motions. At block 553, the dynamic animation generation system can receive, from a user of the video game, an input to perform at least a first and second behavior of the plurality of possible behaviors. The control interface 504 can apply additive layering, override layering, and/or blending to certain groups of behaviors, such as the plurality of motions that the user would like to perform. For example, the control interface 504 can apply additive layering with a punch motion on top of a crouch motion to the animation character in Frame i 502, such that the resulting joint trajectories are for an animation character that is performing a punch in a crouch position. The joint trajectories for the crouching punch are used to control the animation character to perform the new motion. At block 558, the dynamic animation generation system can perform layering, such as additive, override, or blending, on the pose data for the first and second behaviors to generate combined pose data.
In some embodiments, the control interface 504 can send the joint trajectories for the crouching punch motion to a motion generator 506. The motion generator 506 predicts the next pose that would fall inside the set of joint trajectories received from the control interface 504, such as the joint trajectories for the crouching punch motion. The motion generator 506 takes the combined joint trajectories for the crouching punch motion and generates the new movement. The motion generator 506 can include a gating network to generate weights and a pose predictor to generate next pose data for the next frame i+1 508. For example, at block 560, the dynamic animation generation system can apply the combined pose data to a gating network to generate weights. At block 562, the dynamic animation generation system can apply the weights to a pose predictor network to generate next pose data for the next frame.
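For illustration only, the following Python sketch mirrors this per-frame flow (blocks 552 through 562); the object names and method signatures (for example, control_modules, control_interface.layer, gating_network, pose_predictor) are assumptions chosen for readability and do not correspond to a specific implementation.

```python
def generate_next_pose(frame_i, user_input, control_modules, control_interface,
                       gating_network, pose_predictor):
    """Hypothetical per-frame step mirroring blocks 552-562 (names assumed)."""
    # Block 552: receive the current frame and its pose data P_i.
    current_pose = frame_i["pose"]

    # Block 553: behaviors selected by the user input, e.g. ["crouch", "punch"].
    selected = [control_modules[name] for name in user_input]

    # Each control module proposes future joint/root trajectories for its behavior.
    candidate_series = [m.predict_control_series(current_pose) for m in selected]

    # Block 558: the control interface layers the candidate trajectories
    # (additive, override, or blend) into one combined control series C_{i+1}.
    combined_series = control_interface.layer(candidate_series)

    # Block 560: the gating network maps the future joint-velocity magnitudes
    # of the combined series to blending weights over the expert networks.
    expert_weights = gating_network(combined_series.future_joint_speeds())

    # Block 562: the pose predictor, blended by those weights, maps
    # (C_{i+1}, P_i) to the pose P_{i+1} for the next frame i+1.
    return pose_predictor(combined_series, current_pose, expert_weights)
```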
Thus, the dynamic animation generation system can learn the entire manifold of movements from unstructured motion capture data within a compact network, which takes as input a dense signal of key joint trajectories. This signal can successfully reconstruct a large variety of attacking, defending or interaction behaviors typical of martial arts in a task-agnostic fashion and with high fidelity, even including stylistic signature movements of different fighters.
Neural Network Animation Layering Framework
In some embodiments, the deep learning framework is a time-series system that predicts the character pose from one frame into the next in an autoregressive fashion, where layering and motion progression is done at one step, such as within the control interface 504, and aims to decouple the motion generation process from the control process. First, the distribution of all unstructured motion capture data is learned with a motion generator network (described further herein), which is able to accurately reproduce the original animation and generalize to novel, unseen states. This network is trained to produce a character pose that follows a subset of motion trajectories, with the effect of compressing the entire data into a single network in a task-agnostic manner. After the motion generator network is trained, different control modules (such as control modules 503) can be independently created to purposely drive the motion synthesis. These can be in form of neural networks, heuristic-based controllers, existing reference motion clips, or user-driven editors, with one shared property: producing the future trajectories that are going into a common control interface (such as the control interface 504). This intermediate interface provides a transparent control scheme for artists or users to layer, blend and edit the character movement as desired. Afterwards, when the new trajectories are given to the motion generator 506, a novel unseen animation can be generated from the entire motion manifold. In addition, since the motion generator 506 does not have to be retrained when the controllers are being created or modified, this modular approach reduces iteration times during development and allows shifting the process of tuning the motion generation from before-to-after network training.
Motion Generator
In some embodiments, the motion generator 506 can include a gating network and a pose predictor network. The pose predictor network is constructed by blending the weights of a fixed number of structurally identical networks, called experts, according to a set of learned blending weights. The pose predictor network takes as input the control series to guide the motion, plus the pose data of the current frame, and outputs the character pose for the next frame. In the dynamic animation generation system setup, a control series C_i is defined as shown in Eq. (1), with a total of L = 1 + N channels, where T_i is the root trajectory in 2D space and M_{i,j}, j = 1, . . . , N, is a set of N key joint trajectories in 3D space, each covering a window of one second in both past and future around frame i.
C_i = \{C_{i,1}, \ldots, C_{i,L}\} = \{T_i, M_{i,1}, \ldots, M_{i,N}\}   (1)
In some embodiments, the gating network takes as input the velocity magnitudes of the future joint trajectories, and outputs the blending weights that dictate the influence of each expert. The gating can segment the movements equally based on the low and high-frequency components of the motion, covered by a dense series of joint velocities, in order to follow a given reference motion well.
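One hedged way to realize such a gating network over a fixed number of structurally identical experts is sketched below in PyTorch: the gating output blends the experts' weight matrices before the blended pose predictor is evaluated. The number of experts, the two-layer structure and the activation are assumptions; only the hidden sizes (128 for the gating network, 512 for the pose predictor) follow the values stated later in the Network Training section.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedPosePredictor(nn.Module):
    """Minimal mixture-of-experts sketch: a gating network turns future joint
    velocity magnitudes into blending weights, which blend the weights of a
    fixed number of structurally identical expert layers (sizes illustrative)."""

    def __init__(self, gate_in, pose_in, pose_out, num_experts=8):
        super().__init__()
        self.gating = nn.Sequential(
            nn.Linear(gate_in, 128), nn.ELU(),      # gating hidden size 128
            nn.Linear(128, num_experts),
        )
        hidden = 512                                # pose predictor hidden size 512
        # One weight/bias tensor per expert for a small two-layer predictor.
        self.w1 = nn.Parameter(torch.randn(num_experts, hidden, pose_in) * 0.01)
        self.b1 = nn.Parameter(torch.zeros(num_experts, hidden))
        self.w2 = nn.Parameter(torch.randn(num_experts, pose_out, hidden) * 0.01)
        self.b2 = nn.Parameter(torch.zeros(num_experts, pose_out))

    def forward(self, gating_input, pose_input):
        # Blending weights over experts, one set per batch element.
        alpha = F.softmax(self.gating(gating_input), dim=-1)    # (batch, experts)
        w1 = torch.einsum("be,ehi->bhi", alpha, self.w1)        # blended layer 1
        b1 = torch.einsum("be,eh->bh", alpha, self.b1)
        w2 = torch.einsum("be,eoh->boh", alpha, self.w2)        # blended layer 2
        b2 = torch.einsum("be,eo->bo", alpha, self.b2)
        h = F.elu(torch.einsum("bhi,bi->bh", w1, pose_input) + b1)
        return torch.einsum("boh,bh->bo", w2, h) + b2
```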
Structurally, the motion generator can be formulated as Eq. (2), mapping the current pose P at frame i to the pose at frame i+1, using the control series of frame i+1:
(C_{i+1}, P_i) \rightarrow P_{i+1}.   (2)
In some embodiments, the motion generator 506 can be trained on very large-scale datasets covering different motion skills and behaviors, without requiring manual supervision. The joint trajectories mitigate ambiguity in the input, effectively compressing several gigabytes of motion captured data into a model that can both reproduce very specific motion and synthesize animations that are not present in the training data. After training the network, editing, blending as well as layering joint trajectories as control to the motion generator 506, instead of carrying similar operation directly on the animation data, demonstrates significant advantages: Since the motion generator 506 essentially learns the manifold of plausible motion from the training data, the motion generator 506 acts as a projection operator on said manifold, avoiding unrealistic poses and jerkiness in the motion, and generalizes to novel movements and transitions between them while following the given reference trajectories.
Control Modules
In some embodiments, the purpose of the control modules 503 is to represent a specific behavior that outputs future control trajectories to be followed by the character. Control modules 503 can include neural networks, physics-based simulations, non-parametric systems such as motion matching, animation clips or user-driven tools that enable editing the trajectories themselves directly. The dynamic animation generation system makes it easy to swap or even combine higher-level modules to focus on different tasks, where each module can define its very own inputs if necessary. Structurally, the function for the control modules can be formulated as
f^k(\cdot) \rightarrow \hat{C}^k_{i+1}, \quad k \in \{1, \ldots, K\},   (3)
where each controller f^k(\cdot) for behavior k maps its input to a future control series \hat{C}^k_{i+1} of the next frame. Their combination by different layering techniques is performed by the control interface, as discussed below in the Control Interface section.
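As a concrete illustration of this contract, the sketch below expresses Eq. (3) as a shared Python interface; the class and method names are hypothetical, and the two subclasses merely stand in for the clip-backed and network-backed controllers described in this disclosure.

```python
from abc import ABC, abstractmethod

class ControlModule(ABC):
    """Hypothetical shared contract: each behavior k maps its own inputs to a
    future control series that is passed on to the control interface."""

    @abstractmethod
    def predict_control_series(self, **inputs):
        """Return future root and key-joint trajectories for this behavior."""

class ReferenceClipModule(ControlModule):
    """Controller backed by an existing motion clip (e.g., an idle or a
    signature attack) that plays back its stored trajectories."""
    def __init__(self, clip_trajectories):
        self.clip = clip_trajectories

    def predict_control_series(self, frame_index=0, **_):
        return self.clip[frame_index]

class NetworkModule(ControlModule):
    """Controller backed by a trained network (e.g., locomotion or avoidance),
    whose inputs depend on the task it fulfils."""
    def __init__(self, network):
        self.network = network

    def predict_control_series(self, **inputs):
        return self.network(**inputs)
```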
Idling behavior: In some embodiments, in order to produce idling animations when the character is standing still, the dynamic animation generation system can take a small set of reference controls extracted from existing motion clips that are characteristic for each fighter. Afterwards, the controls can be interactively modified into several full-body pose variations via additive layering using an offset vector.
Locomotion behavior: In some embodiments, locomotion is generated by training another neural network by using local phase variables (one for each foot). However, instead of directly producing the output pose, the dynamic animation generation system can predict a future control series of the locomotion in order to be combined with other behaviors in the control interface. With that the dynamic animation generation system can synthesize unseen combinations of locomotion with actions like blocking and punching movements.
Attacking and Targeting behavior: In some embodiments, performing a specific attack is initially done in a similar fashion to the idle behavior, by selecting a short reference sequence from the data. By that, the dynamic animation generation system can provide the animator with intuitive control over the initial appearance of a fighting skill. Afterwards, the dynamic animation generation system can combine and modify different attacking behaviors using the control interface to synthesize double-punches and natural kick-and-punch sequences of different timings that have not been part of the original training data.
When trying to hit an opponent, the relative configuration and offsets between two characters at runtime will, in most cases, not match those seen during training. Therefore, the dynamic animation generation system can apply a redirected control scheme that learns how to modify a given reference motion relative to an opponent in order to land an attack. Essentially, the dynamic animation generation system can define a redirected space between character A and its opponent B by computing an aligned root direction from A's root position to B's root position. The targeting module is then trained as another network that takes the current control series and the opponent pose as input in the character's own space, and predicts the same control series in the opponent space using the redirected root space. In effect, learning trajectories in this space enables the network to redirect a given control depending on the configuration between both characters in order to match the current runtime situation, particularly when the characters are rotating fast.
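A minimal numerical sketch of the redirected space is given below: an aligned root direction is computed from character A's root position toward opponent B's root position, and horizontal trajectory points are re-expressed relative to that frame. The 2D layout, axis convention and function names are assumptions for illustration.

```python
import numpy as np

def redirected_root_space(root_a, root_b):
    """Build a 2D frame at A's root whose forward axis points toward B's root
    (a sketch of the aligned root direction; axes and conventions assumed)."""
    forward = root_b - root_a
    forward = forward / (np.linalg.norm(forward) + 1e-8)
    right = np.array([forward[1], -forward[0]])      # perpendicular in the plane
    return root_a, np.stack([right, forward])        # origin and a 2x2 rotation

def to_redirected_space(points, origin, rotation):
    """Express horizontal trajectory points relative to the redirected frame."""
    return (points - origin) @ rotation.T

# Usage sketch: re-express a reference attack trajectory so the targeting
# module can predict it relative to the opponent, whatever the runtime offset.
origin, rot = redirected_root_space(np.array([0.0, 0.0]), np.array([1.5, 0.5]))
local_traj = to_redirected_space(np.array([[0.2, 0.1], [0.6, 0.3]]), origin, rot)
```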
Hit Reactions and Avoidance behavior: In some embodiments, paired-up hit interaction data may not be available inside the motion capture, as is the case for traditional systems. Therefore, extracting a dense signal in order to learn a reaction over a longer time window, such as getting hit and stumbling back before recovering, may be challenging. Instead, the dynamic animation generation system can utilize a nearest neighbor search that matches the incoming velocity vector and position between two impacting body regions. Afterwards, the dynamic animation generation system can modify the root trajectory via additive layering to adjust the stumbling direction of the hit reaction before reconstructing the animation with the motion generator. If instead controlling the character to avoid an attack, the dynamic animation generation system can apply another network that takes as input the entire control series of the opponent, and from that learns to produce suitable future trajectories for the character's own body. To connect two characters and make them responsive to the motion of the opponent, the same concept of redirected control can be used. The difference is that instead of predicting the same attacking motion in the redirected space, the dynamic animation generation system can predict the reaction trajectories in that space. From that, the avoiding movement changes depending on the relative location of both characters as well as on the attacking action being performed.
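The nearest neighbor search can be illustrated with the short sketch below, which scores stored hit-reaction entries by how closely their recorded impact position and incoming velocity match the current impact; the database layout and the equal weighting of the two terms are assumptions.

```python
import numpy as np

def match_hit_reaction(impact_pos, impact_vel, database, w_pos=1.0, w_vel=1.0):
    """Return the stored reaction whose recorded impact position and incoming
    velocity best match the current hit (weighted nearest-neighbor sketch)."""
    best_entry, best_cost = None, np.inf
    for entry in database:   # each entry: {"pos": ..., "vel": ..., "reaction": ...}
        cost = (w_pos * np.linalg.norm(entry["pos"] - impact_pos)
                + w_vel * np.linalg.norm(entry["vel"] - impact_vel))
        if cost < best_cost:
            best_entry, best_cost = entry, cost
    return None if best_entry is None else best_entry["reaction"]
```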
Clinching and Carrying behavior: In some embodiments, to produce close-interaction movements such as clinching and carrying, another control module is trained that generates the motion trajectories of the character. Considering the dynamic animation generation system can have a subdominant fighter that is controlled by a dominant opponent, the dynamic animation generation system can first select a reference motion for the latter and can interactively modify its curves by additive layering.
Control Interface
In some embodiments, the generated future control trajectories from the control modules 503 pass through the shared control interface, where they can be mixed via override, additive or blending layering. Advantageously, this enables producing combinations and variations of separate motion skills on a control-level passed to the motion generator 506, instead of performing said operations directly on a pose-level. Structurally, the task of the control interface 504 is to compute a combined future control series by a layering operator, as denoted below:
\hat{C}_{i+1} = \mathcal{L}(\hat{C}^1_{i+1}, \ldots, \hat{C}^K_{i+1}).   (4)
The override layering operator can include mixing curve channels from different control series into one. For example, this can be selecting the lower body curves from a walking or kicking behavior together with the upper body curves from a punching or blocking behavior:
\hat{C}_{i+1} = \{\hat{C}^{S_1}_{i+1,1}, \ldots, \hat{C}^{S_L}_{i+1,L}\}, \quad S_{1,\ldots,L} \in \{1, \ldots, K\}.   (5)
In some embodiments, the additive layering function modifies the current control series by an additional signal for a set of selected channels. This signal can be in the form of a scalar to adjust speed or distance, a vector to control position or direction, an entire control series, or another customized function:
\hat{C}_{i+1} = \hat{C}_{i+1} + \{s_1 T, s_2 M_1, \ldots, s_L M_N\}, \quad s_{1,\ldots,L} \in \{0, 1\}.   (6)
In some embodiments, a blending operation can be performed to transition from a current control series into another new control series:
\hat{C}_{i+1} = (1 - s)\,\hat{C}_i + s\,\hat{C}_{i+1}, \quad s \in [0, 1].   (7)
Using these three operators, the dynamic animation generation system can synthesize a large variety of combinations and variations of different motion skills in an intuitive and transparent manner.
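For illustration, the three operators of Eqs. (5) through (7) can be sketched over control series stored as arrays of shape (channels, samples, features); the array layout and the channel-selection scheme shown here are assumptions.

```python
import numpy as np

def override_layer(series_list, channel_sources):
    """Eq. (5) sketch: take each channel l from the control series indexed by
    channel_sources[l], e.g. lower-body channels from a kick and upper-body
    channels from a punch."""
    return np.stack([series_list[k][l] for l, k in enumerate(channel_sources)])

def additive_layer(series, offsets, mask):
    """Eq. (6) sketch: add an offset signal to the selected channels only."""
    return series + mask[:, None, None] * offsets

def blend_layer(series_old, series_new, s):
    """Eq. (7) sketch: linearly transition from one control series to another."""
    return (1.0 - s) * series_old + s * series_new

# Usage sketch with L = 12 channels (root + 11 key joints) and 7 future samples:
punch = np.zeros((12, 7, 9))
kick = np.ones((12, 7, 9))
combined = override_layer([punch, kick], channel_sources=[0] * 5 + [1] * 7)
shifted = additive_layer(punch, offsets=0.1 * np.ones((12, 7, 9)),
                         mask=np.array([1.0] + [0.0] * 11))   # root channel only
transition = blend_layer(punch, kick, s=0.25)
```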
System Inputs and Outputs
In some embodiments, the extracted features can live in a time-series window \mathcal{T}_{-1s}^{1s} = 13 that can cover information of up to 13 uniformly-sampled points within 1 s in the past and future (6 samples each) around the additional centered root sample at current frame i. Given the state variables of the character in the previous frame i−1 and the current frame i, the dynamic animation generation system can include a time-series model that predicts those of the character in the next frame i+1.
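As a hedged illustration of this sampling scheme, the helper below selects 13 uniformly spaced frame indices spanning one second into the past and future around frame i, assuming the 30 Hz playback rate mentioned in the Network Training section.

```python
def window_indices(frame_i, samples_per_side=6, fps=30, window_seconds=1.0):
    """Return 13 uniformly spaced frame indices: 6 past samples, the centered
    frame i, and 6 future samples within +/- 1 s (rate and spacing assumed)."""
    step = int(round(window_seconds * fps / samples_per_side))   # ~5 frames apart
    return [frame_i + k * step
            for k in range(-samples_per_side, samples_per_side + 1)]

# e.g. window_indices(120) -> [90, 95, 100, 105, 110, 115, 120, 125, ..., 150]
```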
Motion Generator
Inputs: In some embodiments, the input vector for the motion generator 506 includes a control series, the current character pose and the motion generator gating variables: X_i = \{C_{i+1}, P_i, g_i\}.
In some embodiments, the control series C_{i+1} = \{T_{i+1}, M_{i+1,1}, \ldots, M_{i+1,N}\} is used as the control series input for the motion generator, where N = 11 is the number of key joints. The control series is sampled from the next frame i+1 and transformed into the root space of frame i. Each element of the control series is sampled in the past-to-future time window \mathcal{T}_{-1s}^{1s} = 13.
Root Trajectory T_{i+1}: For controlling the character root motion, the root trajectory defines the horizontal path of trajectory positions T^p_{i+1}, trajectory directions T^r_{i+1}, trajectory velocities T^v_{i+1}, integrated lengths T^l_{i+1} and integrated angles T^a_{i+1}.
Motion Trajectory M_{i+1,j}, j \in 1, \ldots, N: A series of 3D transformations and velocities for each of the N = 11 key joints, represented by position, up direction and forward direction M^t_{i+1,j} and its positional velocity M^v_{i+1,j}. The key joints are chosen as hips, left/right upper/lower leg, left/right upper/lower arm, spine, and head.
Character Pose P_i = \{p_i, r_i, v_i\} is the pose and velocity of the character at the current frame i with B = 26 bones.
Gating Input g_i: The magnitudes of the joint velocities g_i sampled from the future control series given to the motion generator.
Outputs: The output vector Y^{MG}_{i+1} = \{P_{i+1}, c_{i+1}, F_{i+1}\} for the next frame i+1 is computed by the motion generator network to generate the pose, contacts and hand pose, as described below.
Predicted Pose P_{i+1} = \{p_{i+1}, r_{i+1}, v_{i+1}\} is the pose and velocity of the character for the next frame i+1 with B = 26 bones.
Contacts c_{i+1}: Binary contact switches for the feet joints.
Finger Transformations F_{i+1}: The dynamic animation generation system can predict the joint transformations of the fingers relative to their wrist spaces to recover the hand pose from the motion generator.
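For concreteness, the sketch below flattens the motion generator input X_i = \{C_{i+1}, P_i, g_i\} using the stated counts (N = 11 key joints, a 13-sample window, B = 26 bones); the per-sample feature widths are left symbolic because the exact dimensions are not reproduced here.

```python
import numpy as np

N_KEY_JOINTS = 11   # key joint trajectories M_{i+1,j}
WINDOW = 13         # past-to-future samples per trajectory
N_BONES = 26        # bones in the character pose P_i

def flatten_motion_generator_input(root_traj, joint_trajs, pose, gating):
    """Concatenate control series, current pose and gating variables into one
    input vector; the shapes below are illustrative, not the exact layout.

    root_traj:   (WINDOW, root_features)          horizontal root trajectory T_{i+1}
    joint_trajs: (N_KEY_JOINTS, WINDOW, features) key joint trajectories M_{i+1,j}
    pose:        (N_BONES, bone_features)         positions, rotations, velocities
    gating:      (gating_features,)               future joint velocity magnitudes g_i
    """
    return np.concatenate([root_traj.ravel(), joint_trajs.ravel(),
                           pose.ravel(), gating.ravel()])
```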
Control Modules
Inputs: Depending on the task each control module fulfils, the control modules 503 can require different input. For example, the locomotion module input requires the root trajectory, the avoidance module input requires the opponent's pose and motion trajectory, and the hit reaction module input requires the attacking direction and velocity.
Outputs: Control module outputs can include a future control series \hat{C}_{i+1} of the next frame. The format of \hat{C}_{i+1} is exactly the same as C_{i+1}, except that each element of the control series is sampled in the current-to-future time window of 7 samples.
For different modules, additional output may be generated.
Network Training
In some embodiments, the training can be performed by normalizing the input and the output of the entire dataset by their mean and standard deviation and first training the motion generator. Afterwards, the dynamic animation generation system can utilize different techniques to start querying this compact network that has learned to reconstruct and progress the motion manifold. The dynamic animation generation system can then train the control modules for locomotion, targeting, avoidance and close-interactions separately depending on the task. The data for training can include a large variety of martial arts movements, including signature movements of different fighters, and interaction movements. The data is partially paired up for close-interaction movements. The data is processed and used for training the networks. For training each network, the dynamic animation generation system can use the same network architecture but different inputs/outputs. The learning rate is initialized with a value of 1.0·10−4 and later adjusted by the weight decay rate with the initial value of 2.5·10−3. Dropout rate is set to 0.3, hidden layer size in the gating network is set to 128 and in the pose prediction network to 512 respectively. The complete dataset can consist of unstructured motion capture data, and is not augmented with any assigned labels for actions, styles or goal variables. The complete dataset is doubled by mirroring, downsampled from 60 Hz capture to 30 Hz, and then exported twice by shifting the data by one frame. Advantageously after training, the data can be compressed, such as from ˜300 GB generated training data to ˜46 MB network weights. In addition, the reference trajectories, such as for different attacking, idling or hit reaction sequences, were stored in a small database with a total of ˜11 MB.
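A hedged PyTorch sketch of this training setup follows, using the stated values (learning rate 1.0·10−4, initial weight decay 2.5·10−3, dropout 0.3, gating hidden size 128, pose predictor hidden size 512); the optimizer choice, layer counts and input/output dimensions are assumptions, since only the rates and hidden sizes are given above.

```python
import torch
import torch.nn as nn

# Hyperparameters stated above; optimizer and layer sizes beyond the hidden
# widths are assumptions for this sketch.
LEARNING_RATE = 1.0e-4
WEIGHT_DECAY = 2.5e-3
DROPOUT = 0.3
GATING_HIDDEN = 128
POSE_HIDDEN = 512

def make_mlp(in_dim, hidden, out_dim):
    """Small illustrative network with the stated dropout between layers."""
    return nn.Sequential(
        nn.Dropout(DROPOUT), nn.Linear(in_dim, hidden), nn.ELU(),
        nn.Dropout(DROPOUT), nn.Linear(hidden, hidden), nn.ELU(),
        nn.Dropout(DROPOUT), nn.Linear(hidden, out_dim),
    )

# Input/output dimensions below are placeholders, not the true feature widths.
gating_net = make_mlp(in_dim=77, hidden=GATING_HIDDEN, out_dim=8)
pose_net = make_mlp(in_dim=2048, hidden=POSE_HIDDEN, out_dim=400)

optimizer = torch.optim.AdamW(
    list(gating_net.parameters()) + list(pose_net.parameters()),
    lr=LEARNING_RATE, weight_decay=WEIGHT_DECAY,
)

# As described above, inputs and outputs are normalized by the dataset mean
# and standard deviation before training the motion generator.
```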
Character Control System
In some embodiments, the character is controlled by a gamepad's joysticks and buttons to offer a wide range of control signals to the user.
Animated Results
Override Motion Layering
In this section, the dynamic animation generation system can be used to combine movements from different motion skills via override layering into a new animation while maintaining the context of the original movements.
The dynamic animation generation system can reliably produce believable results that do not require additional cleanup to resolve self-collisions or joint limit violations. The dynamic animation generation system can combine motions with high quality while not being sensitive to time alignment: for example, while layering a kick and a punch, shifting the start of the punch animation by different numbers of frames generates variations in the final animation that still appear believable. None of the generated movements exist as such combinations in the original training data.
Additive Motion Layering
The dynamic animation generation system can generate a variation of movements from a single reference motion and simple user control via additive layering. Modifying a particular motion into similar ones is particularly important for game situations where the user wants to perform a particular action with different conditions, for example doing a specific punch in different direction or speed.
Transition Motion Synthesis
Synthesizing transition movements between two motion clips can be very challenging when the start and end poses are not aligned with each other. Usually, this requires a lot of tweaking, manual work and experience to avoid artifacts in the motion synthesis, particularly in terms of foot sliding.
Signature Movements
A common question that arises among animators when using neural networks for motion generation is whether the system is able to synthesize signature movements, such as stylistic attacking behaviors of different fighters in martial arts. First, is the system generally able to encode and reconstruct the detailed motion nuances of such behaviors, and second, how can such animations be controlled and synthesized after network training?
Character Interactions
Next, synthesizing character interactions in martial arts poses many challenges, since the spatial relationships between two characters need to be maintained when interactively controlling the movements. More specifically, a character can be in an unseen state in the game that was not captured inside the data, but a feasible action shall still be performed successfully.
Compression Quality
In this section, the trained motion generator is evaluated on its ability to fit a large variety of different movements from more than 300 GB of unstructured motion capture data. This study indicates how much motion the trained motion generator is able to compress into a shared motion manifold that can later be sampled from by given reference movements without requiring retraining. In Table 1 below, the average error in position and rotation when following a given reference motion that has been seen during training is measured. It can be seen that when using the gating structure, the error is consistently lowest across all tested motion categories. In particular, the model helps reconstruct the high-frequency components of motion by segmenting the animations based on the future velocity magnitudes. Without that, both LSTM and MLP architectures tend to produce blurrier results with less accuracy while tracking the targets, especially during fast movements and quick character rotations. Particularly for LSTM, signature movements tend to be modelled rather poorly. This could be due to the latent variables focusing more on the past of the motion, and not being able to respond well to very agile movements in the given future controls.
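The reported position and rotation errors can be illustrated by a small sketch that averages, over all joints and frames, the positional distance and the angle between predicted and reference joint forward directions; the angle convention and the use of unit direction vectors are assumptions.

```python
import numpy as np

def average_reconstruction_error(pred_pos, ref_pos, pred_forward, ref_forward):
    """Mean joint position error (data units) and mean angular error in degrees
    between predicted and reference joint forward directions; the direction
    vectors are assumed to be unit length."""
    pos_err = np.linalg.norm(pred_pos - ref_pos, axis=-1).mean()
    cos = np.clip(np.sum(pred_forward * ref_forward, axis=-1), -1.0, 1.0)
    rot_err = np.degrees(np.arccos(cos)).mean()
    return pos_err, rot_err
```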
Overview of Computing Device
Computing device 10 may include a separate graphics processor 24. In some cases, the graphics processor 24 may be built into the processing unit 20. In some such cases, the graphics processor 24 may share Random Access Memory (RAM) with the processing unit 20. Alternatively, or in addition, the computing device 10 may include a discrete graphics processor 24 that is separate from the processing unit 20. In some such cases, the graphics processor 24 may have separate RAM from the processing unit 20. Computing device 10 might be a handheld video game device, a dedicated game console computing system, a general-purpose laptop or desktop computer, a smart phone, a tablet, a car console, or other suitable system.
Computing device 10 also includes various components for enabling input/output, such as an I/O 32, a user I/O 34, a display I/O 36, and a network I/O 38. I/O 32 interacts with storage element 40 and, through a device 42, removable storage media 44 in order to provide storage for computing device 10. Processing unit 20 can communicate through I/O 32 to store data, such as game state data and any shared data files. In addition to storage 40 and removable storage media 44, computing device 10 is also shown including ROM (Read-Only Memory) 46 and RAM 48. RAM 48 may be used for data that is accessed frequently, such as when a game is being played.
User I/O 34 is used to send and receive commands between processing unit 20 and user devices, such as game controllers. In some embodiments, the user I/O 34 can include touchscreen inputs. The touchscreen can be a capacitive touchscreen, a resistive touchscreen, or other type of touchscreen technology that is configured to receive user input through tactile inputs from the user. Display I/O 36 provides input/output functions that are used to display images from the game being played. Network I/O 38 is used for input/output functions for a network. Network I/O 38 may be used during execution of a game, such as when a game is being played online or being accessed online.
Display output signals produced by display I/O 36 comprise signals for displaying visual content produced by computing device 10 on a display device, such as graphics, user interfaces, video, and/or other visual content. Computing device 10 may comprise one or more integrated displays configured to receive display output signals produced by display I/O 36. According to some embodiments, display output signals produced by display I/O 36 may also be output to one or more display devices external to computing device 10, such as a display 16.
The computing device 10 can also include other features that may be used with a game, such as a clock 50, flash memory 52, and other components. An audio/video player 56 might also be used to play a video sequence, such as a movie. It should be understood that other components may be provided in computing device 10 and that a person skilled in the art will appreciate other variations of computing device 10. The computing device 10 can include one or more components for the interactive computing system 160, and/or a player computing system 152A, 152B. In some embodiments, the interactive computing system 160, and/or a player computing system 152A, 152B can include one or more components of the computing device 10.
Program code can be stored in ROM 46, RAM 48 or storage 40 (which might comprise hard disk, other magnetic storage, optical storage, other non-volatile storage or a combination or variation of these). Part of the program code can be stored in ROM that is programmable (ROM, PROM, EPROM, EEPROM, and so forth), part of the program code can be stored in storage 40, and/or on removable media such as game media 12 (which can be a CD-ROM, cartridge, memory chip or the like, or obtained over a network or other electronic channel as needed). In general, program code can be found embodied in a tangible non-transitory signal-bearing medium.
Random access memory (RAM) 48 (and possibly other storage) is usable to store variables and other game and processor data as needed. RAM is used and holds data that is generated during the execution of an application and portions thereof might also be reserved for frame buffers, application state information, and/or other data needed or usable for interpreting user input and generating display outputs. Generally, RAM 48 is volatile storage and data stored within RAM 48 may be lost when the computing device 10 is turned off or loses power.
As computing device 10 reads media 12 and provides an application, information may be read from game media 12 and stored in a memory device, such as RAM 48. Additionally, data from storage 40, ROM 46, servers accessed via a network (not shown), or removable storage media 44 may be read and loaded into RAM 48. Although data is described as being found in RAM 48, it will be understood that data does not have to be stored in RAM 48 and may be stored in other memory accessible to processing unit 20 or distributed among several media, such as media 12 and storage 40.
It is to be understood that not necessarily all objects or advantages may be achieved in accordance with any particular embodiment described herein. Thus, for example, those skilled in the art will recognize that certain embodiments may be configured to operate in a manner that achieves or optimizes one advantage or group of advantages as taught herein without necessarily achieving other objects or advantages as may be taught or suggested herein.
All of the processes described herein may be embodied in, and fully automated via, software code modules executed by a computing system that includes one or more computers or processors. The code modules may be stored in any type of non-transitory computer-readable medium or other computer storage device. Some or all the methods may be embodied in specialized computer hardware.
Many other variations than those described herein will be apparent from this disclosure. For example, depending on the embodiment, certain acts, events, or functions of any of the algorithms described herein can be performed in a different sequence, can be added, merged, or left out altogether (for example, not all described acts or events are necessary for the practice of the algorithms). Moreover, in certain embodiments, acts or events can be performed concurrently, for example, through multi-threaded processing, interrupt processing, or multiple processors or processor cores or on other parallel architectures, rather than sequentially. In addition, different tasks or processes can be performed by different machines and/or computing systems that can function together.
The various illustrative logical blocks and modules described in connection with the embodiments disclosed herein can be implemented or performed by a machine, such as a processing unit or processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A processor can be a microprocessor, but in the alternative, the processor can be a controller, microcontroller, or state machine, combinations of the same, or the like. A processor can include electrical circuitry configured to process computer-executable instructions. In another embodiment, a processor includes an FPGA or other programmable device that performs logic operations without processing computer-executable instructions. A processor can also be implemented as a combination of computing devices, for example, a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration. Although described herein primarily with respect to digital technology, a processor may also include primarily analog components. For example, some or all of the signal processing algorithms described herein may be implemented in analog circuitry or mixed analog and digital circuitry. A computing environment can include any type of computer system, including, but not limited to, a computer system based on a microprocessor, a mainframe computer, a digital signal processor, a portable computing device, a device controller, or a computational engine within an appliance, to name a few.
Conditional language such as, among others, “can,” “could,” “might” or “may,” unless specifically stated otherwise, are otherwise understood within the context as used in general to convey that certain embodiments include, while other embodiments do not include, certain features, elements and/or steps. Thus, such conditional language is not generally intended to imply that features, elements and/or steps are in any way required for one or more embodiments or that one or more embodiments necessarily include logic for deciding, with or without user input or prompting, whether these features, elements and/or steps are included or are to be performed in any particular embodiment.
Disjunctive language such as the phrase “at least one of X, Y, or Z,” unless specifically stated otherwise, is otherwise understood with the context as used in general to present that an item, term, etc., may be either X, Y, or Z, or any combination thereof (for example, X, Y, and/or Z). Thus, such disjunctive language is not generally intended to, and should not, imply that certain embodiments require at least one of X, at least one of Y, or at least one of Z to each be present.
Any process descriptions, elements or blocks in the flow diagrams described herein and/or depicted in the attached figures should be understood as potentially representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or elements in the process. Alternate implementations are included within the scope of the embodiments described herein in which elements or functions may be deleted, executed out of order from that shown, or discussed, including substantially concurrently or in reverse order, depending on the functionality involved as would be understood by those skilled in the art.
Unless otherwise explicitly stated, articles such as “a” or “an” should generally be interpreted to include one or more described items. Accordingly, phrases such as “a device configured to” are intended to include one or more recited devices. Such one or more recited devices can also be collectively configured to carry out the stated recitations. For example, “a processor configured to carry out recitations A, B and C” can include a first processor configured to carry out recitation A working in conjunction with a second processor configured to carry out recitations B and C.
It should be emphasized that many variations and modifications may be made to the above-described embodiments, the elements of which are to be understood as being among other acceptable examples. All such modifications and variations are intended to be included herein within the scope of this disclosure.
This application claims the benefit of U.S. Provisional Application No. 63/141,782, filed on Jan. 26, 2021, the disclosure of which is hereby incorporated by reference in its entirety.