The present invention relates to the digital entertainment industry and to computer simulation. More specifically, the present invention concerns a method and system for on-screen animation of digital objects or characters.
It is in the nature of the digital entertainment industry to continuously push the boundaries of creativity. This drive is particularly strong in the fields of three-dimensional (3D) animation, visual effects and gaming. Hand animation and particle systems are reaching their natural limits.
Procedural animation, which is driven by artificial intelligence (AI) techniques, is the new frontier. AI animation allows augmenting the abilities of digital entertainers across disciplines. It gives game designers the breadth, independence and tactics of film actors. Film-makers get the depth and programmability of an infinite number of real-time, game-style characters.
Until recently, the field of AI animation was limited to a handful of elite studios with a large development team that developed their own expensive proprietary tools. This situation is akin to the case of early filmmakers such as the Lumiere Brothers, who had no choice but to build their own cameras.
For over twenty years, the visual effects departments of film studios have increasingly relied on computer graphics whenever a visual effect is too expensive, too dangerous or simply impossible to create any other way than via a computer. Unsurprisingly, the demands on an animator's artistic talent to produce ever more stunning and realistic visual effects have also increased. Nowadays, it is not uncommon for the computer animation team to be just as important to the success of a film as the lead actors.
Large crowd scenes, in particular battle scenes, are ideal candidates for computer graphics techniques since the sheer number of extras required makes them extremely expensive, their violent nature makes them very dangerous, and the use of fantastic elements such as beast warriors makes them impractical, if not impossible, to film with human extras. Given the complexity, expense, and danger of such scenes, it is clear that an effective artificial intelligence (AI) animation solution is preferable to actually staging and filming such battles with real human actors. However, despite the clear need for a practical commercial method to generate digital crowd scenes, a satisfactory solution has been a long time in coming.
Commercial animation packages such as Maya™ by Alias Systems have made great progress in the last twenty years to the point that virtually all 3D production studios rely on them to form the basis of their production pipelines. These packages are excellent for producing special effects and individual characters. However, crowd animation remains a significant problem.
According to traditional commercial 3D animation techniques, animators must laboriously keyframe the position and orientation of each character frame by frame. In addition to requiring a great deal of the animator's time, it also requires expert knowledge on how intelligent characters actually interact. When the number of characters to be animated is more than a handful, this task becomes extremely complex. Animating one fish by hand is easy; animating fifty (50) fish by hand can become time consuming.
Non-linear animation techniques, such as Maya's Trax Editor™, try to reduce the workload by allowing the animator to recycle clips of animation in a way that is analogous to how sound clips are used. According to this clip-recycling technique, an animator must position, scale, and composite each clip. Therefore, to make a fish swim across a tank and turn to avoid a rock, the animator repeats and scales the swim clip and then adds a turn clip. Although this reduces the workload per character, it still must be repeated for each individual character, e.g. the fifty (50) fish.
Rule-based techniques present a more practical alternative to their laborious keyframe counterparts. Particle systems try to reduce the animator's burden by controlling the position and orientation of the character via simple rules. This is effective for basic effects such as a school of fish swimming in a straight line. However, the characters do not avoid each other and they all maintain exactly the same speed. Moreover, animation clip control is limited to simple cycling. For example, it is very difficult to get a shark to chase fish and the fish to swim away, let alone for the shark to eat the fish and have them disappear.
A solution to this problem is to develop an AI solution in-house. Writing proprietary software may present the animator with the ability to create a package specifically designed for a given project, but it is often an expensive and risky proposition. Even if the necessary expertise can be found, it is most often not in the company's best interest to spend time and money on a non-core competency. In the vast majority of cases, the advantages of buying a proven technology outweigh this expensive, high-risk alternative.
In the computer game field, game AI has existed since the dawn of video games in the 1970s. However, it has come a long way since the creation of Pong™ and Pac-Man™. Nowadays, game AI is increasingly becoming a critical factor in a game's success, and game developers are demanding more and more from their AI. Today's AI characters need to be able to seemingly think for themselves and act according to their environment and their experience, giving the impression of intelligent behaviour; in other words, they need to be autonomous.
Game AI makes games more immersive. Typically game AI is used in the following situations:
Typically, in a conventional computer game, the main loop contains successive calls to the various layers of the virtual world, which could include the game logic, AI, physics, and rendering layers. The game logic layer determines the state of the agent's virtual world and passes this information to the AI layer. The AI layer then decides how the agent reacts according to the agent's characteristics and its surrounding environment. These directions are then sent to the physics layer, which enforces the world's physical laws on the game objects. Finally, the rendering layer uses data sent from the physics layer to produce the onscreen view of the world.
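By way of illustration only, the following short sketch, written in Python with stub classes whose names are merely hypothetical and not part of the invention, shows the order in which such a conventional main loop might call the game logic, AI, physics and rendering layers:

# Minimal, hypothetical sketch of a conventional game main loop.
class GameLogic:
    def update(self, world):
        # Determine the current state of the virtual world.
        return {"frame": world["frame"]}

class AILayer:
    def decide(self, world, state):
        # Decide how each agent reacts to its characteristics and environment.
        return {name: "idle" for name in world["agents"]}

class Physics:
    def apply(self, world, directions):
        # Enforce the world's physical laws on the game objects.
        pass

class Renderer:
    def draw(self, world):
        # Produce the on-screen view of the world.
        print("frame", world["frame"], "drawn")

def main_loop(num_frames=3):
    world = {"frame": 0, "agents": ["guard", "player"]}
    logic, ai, physics, renderer = GameLogic(), AILayer(), Physics(), Renderer()
    for _ in range(num_frames):
        state = logic.update(world)            # game logic layer
        directions = ai.decide(world, state)   # AI layer
        physics.apply(world, directions)       # physics layer
        renderer.draw(world)                   # rendering layer
        world["frame"] += 1

if __name__ == "__main__":
    main_loop()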
An object of the present invention is therefore to provide an improved method and system for on-screen animation of digital entities.
A method and system for on-screen animation of digital entities according to the present invention allows controlling the interaction of image entities within a virtual world. Some of the digital entities are defined as autonomous image entities (AIE) that can represent characters, objects, virtual cameras, etc., that behave in a seemingly intelligent and autonomous way. The virtual world includes autonomous and non-autonomous entities that can be graphically represented on-screen, in addition to other digital representations that may or may not be represented graphically on a computer screen or on another display.
Generally stated, the method and system allow generating seemingly intelligent image entity motion with the following properties:
1) Intelligent Navigation
Intelligent navigation in a world is handled on two conceptual levels. The first level is purely reactive and it includes autonomous image entities (AIE) attempting to move away from intervening obstacles and barriers as they are detected. This is analogous to the operation of human instinct in reflexively pulling one's hand away from a hot stove.
The second level involves forethought and planning and is analogous to a person's ability to read a subway map in order to figure out how to get from one end of town to the other.
Convincing character navigation is achieved by combining both levels. Doing so enables a character to navigate paths through complex maps while at the same time being able to react to dynamic obstacles encountered in the journey.
2) Intelligent Animation Control
In addition to being moved within the virtual world in a seemingly intelligent way, AIEs can have their animations driven based on their stimuli.
The simplest level of animation control allows, for example, playing back an animation cycle based on the speed of a character's travel. For example, a character's walk animation can be scaled according to the speed of its movement.
On a more complex level, AIEs can have multiple states and multiple animations associated with those states, as well as possible special-case transition animations when moving from state to state. For example, a character can seamlessly run, slow down as it approaches a target, blending through a walk cycle and eventually ending up at, for example, a “talk” cycle. The resulting effect is a character that runs towards another character, slows down and starts talking to them.
3) Interactivity
By specifying reactive-level and planning-level properties, AIEs can adapt to a changing environment.
A method and system for on-screen animation of digital entities according to the present invention allows defining an AIE that is able to navigate a world while avoiding obstacles, dynamic or otherwise. Adding more obstacles or changing the world can be achieved in the virtual world representation, allowing characters to understand their environment and continue to be able to act appropriately within it.
It is to be noted that the expressions “virtual world” and “digital world” are used interchangeably herein.
AIEs' brains can also be described with complex logic via a subsystem referred to herein as “Action Selection”. Using sensors to read information about the virtual world, decision trees to understand that information, and commands to execute resulting actions, AIEs can accomplish complex tasks within the virtual world, such as engaging in combat with enemy forces.
A system for on-screen animation of digital entities according to the present invention may include:
A) A solver
The system includes an Autonomous Image Entity Engine (AIEE). The engine calculates and updates the position and orientation of each character for each frame, chooses the correct set of animation cycles, and enables the correct simulation logic. Within the AIEE is the solver, which allows the creation of intelligent entities that can self-navigate in the geometric world. The solver drives the AIEs and is the container for managing these AIEs and other objects in the virtual world.
B) Autonomous And Non-Autonomous Image Entities, Including Groups of Image Entities
Image entities come in two forms: autonomous and non-autonomous. In simple terms, an autonomous image entity (AIE) acts as if it has a brain and is controlled in a manner defined by the attributes it has been assigned. The solver controls the interaction of these autonomous image entities with other entities and objects within the world. Given this very general specification, an AIE can control anything from a shape-animated fish to a skeletal-animated warrior or a camera. Once an AIE is defined, it is assigned characteristics, or attributes, which define certain basic constraints about how the AIE is animated.
A non-autonomous image entity does not have a brain and must be manipulated by an external agent. Non-autonomous image entities are objects in the virtual world that, even though they may potentially interact with the world, are not driven by the solver. They can include objects such as player-controlled characters, falling rocks, and various obstacles.
Once an AIE is defined, characteristics or attributes, which define certain basic constraints about how the AIE can move, are assigned thereto. Attributes include, for example, the AIE's initial position and orientation, its maximum and minimum speed and acceleration, how quickly it can turn, and if the AIE hugs a given surface. These constraints will be obeyed when the AIE's position and orientation are calculated by the solver. The AIE can then be assigned pertinent behaviours to control its low-level locomotive actions. Behaviours generate steering forces that can change an AIE's direction and/or speed for example. Without an active behaviour, an AIE would remain moving in a straight line at a constant speed until it collided with another object.
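As a rough, non-limiting sketch of how such constraints might be enforced (the attribute names max_speed and max_turn are assumptions for illustration, not the actual attribute set), the following Python fragment clamps a desired heading and speed to an AIE's limits; with no active behaviour the entity simply keeps moving in a straight line at constant speed:

import math

# Hypothetical sketch: clamping an AIE's motion to its attribute constraints.
class AIE:
    def __init__(self, x, y, heading, speed, max_speed, max_turn):
        self.x, self.y = x, y
        self.heading = heading            # radians
        self.speed = speed                # distance units per frame
        self.max_speed = max_speed        # assumed attribute: maximum speed
        self.max_turn = max_turn          # assumed attribute: maximum heading change per frame

def update(aie, desired_heading, desired_speed):
    # Limit how quickly the AIE may turn toward the desired heading.
    turn = (desired_heading - aie.heading + math.pi) % (2 * math.pi) - math.pi
    turn = max(-aie.max_turn, min(aie.max_turn, turn))
    aie.heading += turn
    # Limit the speed to the AIE's maximum (and never below zero).
    aie.speed = max(0.0, min(aie.max_speed, desired_speed))
    # With no active behaviour, the desired values equal the current ones and
    # the AIE keeps moving in a straight line at constant speed.
    aie.x += aie.speed * math.cos(aie.heading)
    aie.y += aie.speed * math.sin(aie.heading)

fish = AIE(0.0, 0.0, 0.0, 1.0, max_speed=2.0, max_turn=0.2)
update(fish, desired_heading=math.pi / 2, desired_speed=5.0)
print(round(fish.heading, 2), round(fish.speed, 2))   # both the turn and the speed are clamped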
Non-autonomous characters are objects in the digital world that, even though they may potentially interact with the world, are not driven by the solver. These can range from traditionally animated characters (e.g. the leader of a group) to objects (e.g. boulders and trees) driven by a dynamic solver. The method and system according to the present invention allows interaction among characters. For example, a group of autonomous characters could follow a non-autonomous leader character animated by traditional means, or the group could run away from a physics-driven boulder.
C) Paths and Waypoint Networks
Paths and waypoint networks are used to guide an AIE within the virtual world.
A path is a fixed sequence of waypoints that AIEs can follow. Each waypoint can be assigned speed limits to control how the AIE approaches it (e.g. approach this waypoint at this speed). Paths can be used to build racetracks, attack routes, flight paths, etc.
A waypoint network allows defining the “navigable” space in the world, clearly defining to AIEs what possible routes they can take in order to travel from point to point in the world.
D) Behaviours
Behaviours provide AIEs with reactive-level properties that describe their interactions with the world. An AIE may have any number of behaviours that provide it with such instincts as avoiding obstacles and barriers, seeking to or fleeing from other characters, “flocking” with other characters as a group, or simply wandering around.
Each behaviour produces a “desired motion”, and the desired motions from multiple behaviours can be combined to produce a single desired motion for the AIE to follow. Behaviour intensities (allowing a behaviour's desired motion to be scaled up or down), behaviour priorities (allowing higher-priority behaviours to completely override the effect of lower-priority ones), and behaviour blending (allowing a behaviour's desired motion to be “faded in” and “faded out” over time) can be used to control the relative effects of different behaviours.
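The following minimal sketch, given for illustration only and using assumed field names, shows one way such a combination by priority, intensity and blend weight could be computed:

# Illustrative sketch only: combining the "desired motion" of several
# behaviours using priority (higher overrides lower), intensity (scales the
# contribution) and a blend weight used to fade a behaviour in or out.
def combine_desired_motions(behaviours):
    """behaviours: list of dicts with keys 'priority' (int), 'intensity' (float),
    'blend' (0..1) and 'motion' ((dx, dy) desired motion vector)."""
    if not behaviours:
        return (0.0, 0.0)
    top = max(b["priority"] for b in behaviours)
    active = [b for b in behaviours if b["priority"] == top]   # lower priorities are ignored
    sx = sum(b["intensity"] * b["blend"] * b["motion"][0] for b in active)
    sy = sum(b["intensity"] * b["blend"] * b["motion"][1] for b in active)
    return (sx, sy)

motions = [
    {"priority": 1, "intensity": 1.0, "blend": 1.0, "motion": (1.0, 0.0)},   # e.g. flock with
    {"priority": 2, "intensity": 2.0, "blend": 0.5, "motion": (0.0, -1.0)},  # e.g. flee from
]
print(combine_desired_motions(motions))   # only the higher-priority behaviour contributes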
E) Action Selection
Action Selection enables AIEs to make decisions based on information about their surrounding environment. While Behaviours can be thought of as instincts, Action Selection can be thought of as higher-level reasoning, or logic.
Action Selection is fuelled by “sensors” that allow AIEs to detect various kinds of information about the world or about other AIEs.
The results of a sensor's detections are saved into a “datum”, and these data can be used to drive binary decision trees, which provide the “if ... then” logic defining a character's high-level actions.
Finally, obeying a decision tree causes the character to make a decision, which is basically a group of commands. These commands provide the character with the ability to modify its behaviours, drive animation cycles, or update its internal memory.
F) Animation Control
Another feature of a method for on-screen animation of digital entities according to the present invention is its ability to control an AIE's animations based on events in the world. By defining animation cycles and transitions between animations, the method can be used to efficiently create a seamless, continuous blend of realistic AI-driven character animation.
More specifically, in accordance with a first aspect of the present invention, there is provided a method for on-screen animation of digital entities comprising:
According to a second aspect of the present invention, there is provided a system for on-screen animation of digital entities comprising:
According to a third aspect of the present invention, there is provided a system for on-screen animation of digital entities comprising:
A method and system for animating digital entities according to the present invention can be used in applications where there is a need for seemingly intelligent reaction of characters and objects, for example:
The method and system of the present invention provide software modules to create and control intelligent characters that can act and react to their worlds, such as:
Other objects, advantages and features of the present invention will become more apparent upon reading of the following non-restrictive description of preferred embodiments thereof, given by way of example only with reference to the accompanying drawings.
In the appended drawings:
A method 100 for on-screen animation of digital entities according to an illustrative embodiment of a first aspect of the invention will now be described, with reference first to
The method 100 comprises the following steps:
These general steps will now be further described.
A digital world model including image object elements is first provided in step 102. The image object elements include two- or three-dimensional (2D or 3D) graphical representations of objects, autonomous and non-autonomous characters, buildings, animals, trees, etc. They also include barriers, terrains, and surfaces. The concepts of autonomous and non-autonomous characters and objects will be described hereinbelow in more detail.
As it is believed to be commonly known in the art, the graphical representations of objects and characters can be displayed, animated or not, on a computer screen or on another display device; objects and characters can also inhabit and interact in the virtual world without being displayed on the display device.
Barriers are triangular planes that can be used to build walls, moving doors, tunnels, etc. Terrains are 2D height-fields to which AIEs can be automatically bound (e.g. to keep soldier characters marching over a hill). Surfaces are triangular planes that may be combined to form fully 3D shapes to which autonomous characters can also be constrained.
In combination, these elements are used to describe the world that the characters inhabit.
In addition to the image object elements, the digital world model includes a solver, which allows managing autonomous image entities (AIE), including autonomous characters, and other objects in the world.
The solver can have a 3D configuration, to provide the AIE with complete freedom of movement, or a 2D configuration, which is more computationally efficient and allows an animator to insert a greater number of AIEs in a scene without affecting the performance of the animation system.
A 2D solver is computationally more efficient than a 3D solver since it does not consider the vertical (y) co-ordinate of an image object element or of an AIE. The choice between the 2D and 3D configurations depends on the movements that are allowed in the virtual world for the AIEs and other objects. If they do not move along the vertical axis, then there is no requirement to solve in 3D and a 2D solver can be used. However, if the AIE requires complete freedom of movement, a 3D solver is used. It is to be noted that the choice of a 2D solver does not limit the dimensions of the virtual world, which may be 2D or 3D. The method 100 provides for the automatic creation of a 2D solver with default settings whenever an object or an AIE is created before a solver.
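A small illustrative example (assuming a co-ordinate convention in which y is the vertical axis) of why the 2D configuration is cheaper: distance queries used for neighbour and obstacle checks simply ignore the vertical co-ordinate:

import math

# Hypothetical illustration: a 2D solver's distance query ignores the vertical (y) axis.
def distance(a, b, use_3d):
    dx, dz = a[0] - b[0], a[2] - b[2]
    if not use_3d:
        return math.hypot(dx, dz)                      # 2D: horizontal plane only
    dy = a[1] - b[1]
    return math.sqrt(dx * dx + dy * dy + dz * dz)      # 3D: complete freedom of movement

soldier, hill_top = (0.0, 0.0, 0.0), (3.0, 4.0, 4.0)
print(distance(soldier, hill_top, use_3d=False))       # 5.0
print(distance(soldier, hill_top, use_3d=True))        # about 6.4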
The following table shows examples of parameters that can be used to define the solver:
Non-autonomous characters are objects in the digital world that, even though they may potentially interact with the digital world, are not driven by the solver. These can range from traditionally animated characters (e.g. the leader of a group) to objects (e.g. boulders and trees) driven by the solver.
Barriers are equivalent to one-way walls, i.e. an object or an AIE inhabiting the digital world can pass through them in one direction but not in the other. When a barrier is created, spikes (forward orientation vectors) are used to indicate the side of the wall that can be detected by an object or an AIE. Therefore, an object or an AIE can pass from the non-spiked side to the spiked side, but not vice-versa. It is to be noted that a specific behaviour must be defined and activated for an AIE to attempt to avoid the barriers in the digital world (Avoid Barriers behaviour). The concept of behaviours will be described hereinbelow in more detail.
As illustrated in
Each barrier can be defined by the following parameters:
As it is commonly known, a bounding box is a rectilinear box that encapsulates and bounds a 3D object.
When barriers are defined in the world, the space-partitioning grid in the AI solver may be specified in order to optimize the solver calculations that concern barriers.
The space-partitioning grid allows optimizing the computational time needed for solving, including updating each AIE state (steps 108-114), as will be described hereinbelow in more detail. More specifically, the space-partitioning grid allows optimizing the search for barriers that is necessary when an Avoid Barriers behaviour is activated; it is also used by the surface-hugging and collision subsolvers, which will be described hereinbelow.
Increasing the number of partitions in the grid generally decreases the computational time needed to update the world, but increases the solver memory usage. The space-partitioning grid is defined via the Grid parameters of the AI solver. The number of partitions along each axis may be specified which effectively divides the world into a given number of cells. Choosing suitable values for these parameters allows tuning the performance. However, values that are too large or too small can have a negative impact on performance. Cell size should be chosen based on average barrier size and density and should be such that, on average, each cell holds about 4 or 5 barriers.
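The following sketch illustrates the above rule of thumb; the helper name and the exact heuristic are assumptions for illustration only:

import math

# Illustrative sketch of the rule of thumb above: choose the number of grid
# partitions so that, on average, each cell holds roughly 4 or 5 barriers.
def suggest_partitions(world_size_x, world_size_z, num_barriers, barriers_per_cell=4.5):
    target_cells = max(1, round(num_barriers / barriers_per_cell))
    # Distribute cells over the two horizontal axes in proportion to world size.
    aspect = world_size_x / world_size_z
    nz = max(1, round(math.sqrt(target_cells / aspect)))
    nx = max(1, round(target_cells / nz))
    return nx, nz

print(suggest_partitions(200.0, 100.0, 900))   # e.g. (20, 10): 200 cells, 4.5 barriers per cell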
The solver of the digital world model includes subsolvers, which are the various engines of the solver that are used to run the simulation. Each subsolver manages a particular aspect of object and AIE simulation in order to optimize computations.
After the digital world has been set, autonomous image entities (AIE) are defined in step 104. Each AIE may represent a character or an object that is characterized by attributes defining the AIE relative to the image object elements of the digital world, and by behaviours for modifying some of the attributes. Each AIE is associated with animation clips allowing the AIE to be represented in movement in the digital world. Virtual sensors allow the AIE to gather data information about image object elements or other AIEs within the digital world. Decision trees are used for processing the data information, resulting in the selection and triggering of one of the animation cycles or the selection of a new behaviour.
As it is believed to be well known in the art, an animation cycle, which will also be referred to herein as an “animation clip”, is a unit of animation that typically can be repeated. For example, in order to get a character to walk, the animator creates a “walk cycle”. This walk cycle makes the character walk one iteration. In order to have the character walk more, more iterations of the cycle are played. If the character speeds up or slows down over time, the cycle is “scaled” accordingly so that the cycle speed matches the character's displacement and there is no slippage (i.e. it does not look like the character is slipping on the ground).
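For illustration only, the scaling just described amounts to dividing the character's current speed by the nominal speed covered by one unscaled cycle (the clip's nominal speed being an assumed property here):

# Simple sketch of the scaling described above: the playback rate of a walk
# cycle is scaled so that the feet move at the same speed as the character,
# avoiding visible slippage.
def cycle_playback_scale(character_speed, clip_nominal_speed):
    # scale > 1 plays the clip faster, scale < 1 plays it slower
    return character_speed / clip_nominal_speed

walk_clip_speed = 1.2   # distance units per frame covered by one unscaled cycle
print(cycle_playback_scale(0.6, walk_clip_speed))   # 0.5: character moves at half speed
print(cycle_playback_scale(2.4, walk_clip_speed))   # 2.0: character moves twice as fast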
The autonomous image entities are tied to transform nodes of the animating engine (or platform). The nodes can be in the form of locators, cubes or models of animals, vehicles, etc. Since animation clips and transform nodes are believed to be well known in the art, they will not be described herein in more detail.
Examples of AIE attributes are briefly described in the following tables. Even though these tables refer to characters, the listed attributes apply to all AIEs.
Of course, other attributes can also be used to characterize an AIE.
In step 106, each AIE's attributes are initialized and an initial behaviour from the set of behaviours defined for the AIE is assigned thereto. The initialisation of attributes may concern only selected attributes, such as the initial position of the AIE, its initial speed, etc. As described in the above table, some attributes are modifiable by the solver or by a user via a user interface or a keyable command, for example when the method 100 is embodied in a computer game.
The concept of AIE behaviour will now be described hereinbelow in more detail.
In addition to attributes, AIEs according to the present invention are also characterized by behaviours. Along with the decision trees, the behaviours are the low-level thinking apparatus of an AIE. They take raw input from the digital world using virtual sensors, process it, and change the AIE's condition accordingly.
Behaviours can be categorized, for example, as State Change behaviours and Locomotive behaviours. State change behaviours modify a character's internal state attributes, which represent for example the AIE's “health”, or “aggressivity”, or any other non-apparent characteristics of the AIE. Locomotive behaviours allow an AIE to move. These locomotive behaviours generate steering forces that can affect any or all of an AIE's direction of motion, speed, and orientation (i.e. which way the AIE is facing) for example.
The following table includes examples of behaviours:
A locomotive behaviour can be seen as a force that acts on the AIE. This force is a behavioural force and is analogous to a physical force (such as gravity), with the difference that the force seems to come from within the AIE itself.
It is to be noted that behavioural forces can be additive. For example, an autonomous character may simultaneously have more than one active behaviour. The solver calculates the resulting motion of the character by combining the component behavioural forces, in accordance with each behaviour's priority and intensity. The resultant behavioural force is then applied to the character, which may impose its own limits and constraints (specified by the character's turning radius attributes, etc.) on the final motion.
The following table briefly describes examples of parameters that can be used to define behaviours:
The behaviours allow creating a wide variety of actions for AIEs. Behaviours can be divided into four subgroups: simple behaviours, targeted behaviours, group behaviours and state change behaviours.
Simple behaviours are behaviours that only involve a single AIE.
Targeted behaviours apply to an AIE and a target object, which can be any other object in the digital world (including groups of objects).
Group behaviours allow AIEs to act and move as a group where the individual AIEs included in the group will maintain approximately the same speed and orientation as each other.
State change behaviours enable the state of an object to be changed.
Examples of behaviours will now be provided in each of the four categories. Of course, it is believed to be within a person skilled in the art to provide an AIE with other behaviours.
Simple Behaviours
Avoid Barriers
The Avoid Barriers behaviour allows a character to avoid colliding with barriers. When barriers are defined in the world, the space-partitioning grid in the AI solver may be specified in order to optimize the solver calculations that concern barriers.
Parameters specific to this behaviour may include, for example:
Avoid Obstacles
The Avoid Obstacles behaviour allows an AIE to avoid colliding with obstacles, which can be other autonomous or non-autonomous image entities. Parameters similar to those detailed for the Avoid Barriers behaviour can also be used to define this behaviour.
Accelerate At
The Accelerate At behaviour attempts to accelerate the AIE by the specified amount. If the amount is a negative value, the AIE will decelerate by the specified amount. The actual acceleration/deceleration may be limited by the max acceleration and max deceleration attributes of the AIE.
A parameter specific to this behaviour is the Acceleration, which represents the change in speed (distance units/frame2) that the AIE will attempt to maintain.
Maintain Speed At
The Maintain Speed At behaviour attempts to set the target AIE's speed to a specified value. This can be used to keep a character at rest or moving at a constant speed. If the desired speed is greater than the character's maximum speed attribute, then this behaviour will only attempt to maintain the character's speed equal to its maximum speed. Similarly, if the desired speed is less than the character's minimum speed attribute, this behaviour will attempt to maintain the character's speed equal to its minimum speed.
A parameter that can be used to define this behaviour is the desired speed (distance units/frame) that the character will attempt to maintain.
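As a trivial illustration of the clamping described above (for illustration only, not the actual solver code):

# Illustrative only: the desired speed requested by Maintain Speed At is
# clamped to the AIE's minimum and maximum speed attributes.
def maintain_speed_at(desired_speed, min_speed, max_speed):
    return min(max(desired_speed, min_speed), max_speed)

print(maintain_speed_at(12.0, min_speed=0.0, max_speed=8.0))   # 8.0: capped at the maximum speed
print(maintain_speed_at(0.5, min_speed=1.0, max_speed=8.0))    # 1.0: raised to the minimum speed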
Wander Around
The Wander Around behaviour applies random steering forces to the AIE to ensure that it moves in a random fashion within the solver area.
Parameters that can be used to define this behaviour may include, for example:
Orient To
The Orient To behaviour allows an AIE to attempt to face a specific direction.
Parameters that can be used to define this behaviour are:
Targeted Behaviours
The following behaviours apply to an AIE (the source) and another object in the world (the target). Target objects can be any object in the world such as autonomous or non-autonomous image entities, paths, groups and data. If the target is a group, then the behaviour applies only to the nearest member of the group at any one time. If the target is a datum, then it is assumed that this datum is of type ID and points to the true target of the behaviour. An ID is a value used to uniquely identify objects in the world. The concept of datum will be described in more detail hereinbelow.
The following parameters, shared by all targeted behaviours, are:
Seek To
The Seek To behaviour allows an AIE to move towards another AIE or towards a group of AIEs. If an AIE seeks a group, it will seek the nearest member of the group at any time.
Parameters that can be used to define this behaviour include, for example:
Flee From
The Flee From behaviour allows an AIE to flee from another AIE or from a group of AIEs. When an AIE flees from a group, it will flee from the nearest member of the group at any time. The Flee From behaviour has the same attributes as the Seek To behaviour, however, it produces the opposite steering force. Since the parameters allowing defining the Flee From behaviour are very similar to those of the Seek To behaviour, they will not be described herein in more detail.
Look At
The Look At behaviour allows an AIE to face another AIE or a group of AIEs. If the target of the behaviour is a group, the AIE attempts to look at the nearest member of the group.
Strafe
The Strafe behaviour causes the AIE to “orbit” its target, in other words to move in a direction perpendicular to its line of sight to the target. A probability parameter determines how likely it is at each frame that the AIE will turn around and start orbiting in the other direction. This can be used, for instance, to make a moth orbit a flame.
For example, the effect of a guard walking sideways while looking or shooting at its target can be achieved by turning off the guard's Forward Motion Only property, and adding a Look At behaviour set towards the guard's target. It is to be noted that, to do this, Strafe is set to Affects direction only, whereas Look At is set to Affects orientation only.
A parameter specific to this behaviour may be, for example, the Probability, which may take a value between 0 and 1 and determines how often the AIE changes its direction of orbit. For example, at 24 frames per second, a value of 0.04 will trigger a random direction change on average every second, whereas a value of 0.01 will trigger a change on average every four seconds.
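The following short sketch, for illustration only, shows how such a per-frame probability could drive the direction changes quoted above:

import random

# Rough sketch of the probability parameter described above: at each frame the
# orbit direction flips with the given probability, so at 24 frames per second
# a probability of 0.04 flips on average about once per second.
def update_orbit_direction(direction, probability, rng=random):
    # direction is +1 (counter-clockwise) or -1 (clockwise)
    if rng.random() < probability:
        return -direction
    return direction

direction = 1
for frame in range(24):                       # one second at 24 frames per second
    direction = update_orbit_direction(direction, 0.04)
print("direction after one second:", direction)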
Go Between
The Go Between behaviour allows an AIE to get in between a first target and a second target. For example, this behaviour can be used to enable a bodyguard character to protect a character from a group of enemies.
The following parameter allows specifying this behaviour: a value between 0 and 1 that determines how close to the second target the AIE will position itself.
Follow Path
The Follow Path behaviour allows an AIE to follow a path. For example this behaviour can be used to enable a racecar to move around a racetrack.
The following parameters allow defining this behaviour:
Seek To Via Network
The Seek To Via Network behaviour can be viewed as an extension of the Seek To behaviour that allows a source (AIE) to use a waypoint network to navigate towards a target. The purpose of a waypoint network is to store as much pre-calculated information as possible about the world that surrounds the character and, in particular, the position of static obstacles. The waypoint network, which will be described hereinbelow in more detail, can be used for example in one of two ways:
Edges in the network are used to define a set of “safe corridors” within which a source object can safely navigate without fear of running into a barrier or other static obstacles. Thus, once an AIE has reached a corridor in the network, it can safely navigate from waypoint to waypoint via the network.
While navigating, periodic reachability tests are performed in order to determine whether it is safe to cut corners, thus producing more natural motion. The frequency of these tests can be adjusted using the behaviour parameters.
In addition to the parameters that are available for the Seek To behaviour, the Seek To Via Network behaviour has the following additional parameters that can be used to control the type and frequency of the vision tests used:
It is to be noted that the Seek To parameters are used to guide the motion of the AIE; however, the contact radius and slowing radius parameters are only used when the AIE seeks its final target. In addition, when the AIE seeks its final target, only checks for barrier avoidance are performed rather than checks for current location, target location, and path smoothing. This single check is performed at each call to this behaviour.
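A simplified, hypothetical sketch of this corridor navigation is given below; the line-of-sight (reachability) test is stubbed, whereas a real solver would test the straight line against barriers:

# Simplified sketch of navigating "safe corridors": the AIE seeks waypoints in
# order, and every few frames a reachability (line-of-sight) test checks
# whether the next waypoint can be sought directly to cut the corner.
def line_of_sight(position, waypoint, barriers):
    return True   # stub: assume no barrier blocks the straight line

def advance_target(position, route, current_index, frame, test_every, barriers):
    """Return the index of the waypoint the AIE should currently seek."""
    if frame % test_every == 0 and current_index + 1 < len(route):
        if line_of_sight(position, route[current_index + 1], barriers):
            return current_index + 1          # the corner can safely be cut
    return current_index

route = [(0, 0), (10, 0), (10, 10), (20, 10)]
index = 0
for frame in range(3):
    index = advance_target((0.0, 0.0), route, index, frame, test_every=2, barriers=[])
print("seeking waypoint", route[index])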
Group Behaviours
Group behaviours allow grouping individual AIEs so that they act as a group while still maintaining individuality. Examples include a school of fish, a flock of birds, etc.
The following parameters may be used to define group behaviours:
The following includes brief descriptions of examples of group behaviours.
Align With
The Align With behaviour allows an AIE to maintain the same orientation and speed as other members of a group. The AIE may or may not be a member of the group.
Join With
The Join With behaviour allows an AIE to stay close to members of a group. The AIE may or may not be a member of the group.
An example of a parameter that can be used to define this behaviour is the Join Distance, which is similar to the “contact radius” in targeted behaviours. Each member of the group within the neighbourhood radius and outside the join distance is taken into account when calculating the steering force of the behaviour. The join distance is the external distance between the characters (i.e. the distance between the outsides of the bounding spheres of the characters). The value of this parameter determines the closeness that members of the group attempt to maintain.
Separate From
The Separate From behaviour allows an AIE to keep a certain distance away from members of a group. For example, this can be used to prevent a school of fish from becoming too crowded. The AIE to which the behaviour is applied may or may not be a member of the group.
The Separation Distance is an example of a parameter that can be used to define this behaviour. Each member of the group within the neighbourhood radius and inside the separation distance will be taken into account when calculating the steering force of the behaviour. The separation distance is the external distance between the AIEs (i.e. the distance between the outsides of the bounding spheres of the AIEs). The value of this parameter determines the external separation distance that members of the group will attempt to maintain.
Flock With
This behaviour allows AIEs to flock with each other. It combines the effects of the Align With, Join With, and Separate From behaviours.
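Before the parameter table below, the following simplified sketch (the mathematics being illustrative rather than the actual solver computation) shows how the three component behaviours could be combined:

import math

# Illustrative sketch of the three flocking components combined by Flock With.
# The neighbourhood radius, join distance and separation distance correspond to
# the parameters described above.
def flock_with(me, neighbours, neighbourhood_radius, join_dist, sep_dist):
    align, join, separate = [0.0, 0.0], [0.0, 0.0], [0.0, 0.0]
    for other in neighbours:
        dx, dy = other["pos"][0] - me["pos"][0], other["pos"][1] - me["pos"][1]
        d = math.hypot(dx, dy)
        if d == 0 or d > neighbourhood_radius:
            continue
        align[0] += other["vel"][0]; align[1] += other["vel"][1]    # Align With: match velocities
        if d > join_dist:                                           # Join With: move closer
            join[0] += dx; join[1] += dy
        if d < sep_dist:                                            # Separate From: move apart
            separate[0] -= dx / d; separate[1] -= dy / d
    return tuple(a + j + s for a, j, s in zip(align, join, separate))

me = {"pos": (0.0, 0.0), "vel": (1.0, 0.0)}
school = [{"pos": (3.0, 0.0), "vel": (1.0, 0.2)}, {"pos": (0.5, 0.5), "vel": (0.9, 0.0)}]
print(flock_with(me, school, neighbourhood_radius=10.0, join_dist=2.0, sep_dist=1.0))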
The following table describes parameters that can be used to define this behaviour:
State Change Behaviours
State Change behaviours allow changing AIEs' states. Examples of State Change behaviours will now be provided.
State Change On Proximity
The State Change On Proximity behaviour allows an AIE's state to be changed based on its distance from a target. For example, the “alive” state of a soldier can be changed to false once an enemy kills him.
Examples of parameters that can be used to define the State Change On Proximity behaviour include:
Target State Change On Proximity
The Target State Change On Proximity behaviour is similar to the State Change On Proximity behaviour, with the difference that it affects the target character's state. For example, a shark kills a fish (i.e. changes the fish's “alive” state to false) as soon as the shark is within a few centimetres of the fish.
The following table includes examples of parameters that can be used to define this behaviour:
Combining Behaviours
An AIE can have multiple active behaviours associated thereto at any given time. Since these behaviours may conflict with each other, the method and system for on-screen animation of digital entities according to the present invention provide means to assign importance to a given behaviour.
A first means to achieve this is by assigning an intensity and a priority to a behaviour. The assigned intensity of a behaviour affects how strong the steering force generated by the behaviour will be. The higher the intensity, the greater the generated behavioural steering force. The priority of a behaviour defines the precedence the behaviour should have over other behaviours. When a behaviour of a higher priority is activated, those of lower priority are effectively ignored. By assigning intensities and priorities to behaviours, the animator informs the solver which behaviours are more important in which situations in order to produce a more realistic animation.
In order for the solver to calculate the new speed, position, and orientation of an AIE, the solver calculates the desired motion of all behaviours, sums up these motions based on each behaviour's intensity, while ignoring those with lower priority, and enforces the maximum speed, acceleration, deceleration, and turning radii defined in the AIE's attributes. Finally, braking due to turning may be taken into account. Indeed, based on the values of the character's Braking Softness and Brake Padding attributes, the character may slow down in order to turn.
Consider, for example, the case of a school of fish and a hungry shark in a large aquarium, and more specifically the case where a fish wants to escape the hungry shark. At this point in time, both the fish's “Flee From” shark and “Flock With” other fish behaviours will be activated, causing two steering forces to act on the fish in unison. Therefore, the fish tries to escape the shark and stay with the other fish at the same time. The resulting active steering force on the fish will be the weighted sum of the individual behavioural forces, based on their intensities. For the fish, it is much more important to flee from the shark than to stay in a school formation. Therefore, a higher intensity is assigned to the fish's “Flee From” behaviour than to the “Flock With” behaviour. This allows the fish to break formation when trying to escape the shark and then to regroup when it is far enough away from the shark.
Although the resulting behaviour can be achieved simply by adjusting intensities, ideally when the fish sees the shark it would disable its “Flock With” behaviour and enable its “Flee From” behaviour. Once out of range of the shark, the fish would then continue to swim in a school by disabling its “Flee From” behaviour and enabling its “Flock With” behaviour. This type of behavioural control can be achieved by setting the behaviours' priorities. By giving the “Flee From” behaviour a higher priority than the “Flock With” behaviour, when a fish is fleeing from a shark, its “Flock With” behaviour will be effectively disabled. Therefore, a fish will not try to remain with the other fish while trying to flee the shark, but once it has escaped the shark its “Flock With” behaviour will be reactivated and the fish will regroup with its school.
In many relatively simple cases such as the one described in this last example, it is usually sufficient to assign various degrees of intensity and priority to specific behaviours to obtain a realistic animation sequence. However, in a more complicated scenario, simply tweaking a behaviour's attributes may not produce acceptable results. In order to implement higher-level behaviour, an AIE needs to be able to make decisions about what actions to take according to its surrounding environment. The following section describes how an AIE uses sensors to gather data information about image object elements or other AIEs in the digital world (step 108) and how decisions are made and actions are selected based on this information (step 110).
For example, to cause a character to move along a path but run away from any nearby enemies, the following logic can be implemented:
This relatively simple piece of logic can be divided as follows:
1. The conditional: “if enemy is near”.
2. The Actions: “run away”, or “follow the path”, depending on the current state of the conditional.
In the method 100, the conditional is implemented by creating a Sensor, which will output its findings to an element of the character's memory called a Datum.
The Actions are implemented using Commands. Commands can be used to activate behaviours or animation cycles, to set character attributes, or to set Datum values. In this example, the commands would activate a FleeFrom behaviour or a FollowPath behaviour for example.
Finally, a Decision Tree is used to group the Actions with the Conditional. A Decision Tree allows nesting multiple conditional nodes in order to produce logic of arbitrary complexity.
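For illustration only, the following Python sketch (with class and attribute names that are merely hypothetical, not the actual interface) puts the Sensor, Datum, Command and Decision Tree elements together for this example:

import math

# Hypothetical sketch of the Sensor / Datum / Decision Tree / Command chain
# for the "run away from nearby enemies" example.
class Character:
    def __init__(self, pos):
        self.pos = pos
        self.data = {}                      # the character's "internal memory" (data)
        self.active_behaviours = set()

def proximity_sensor(character, enemy_pos, radius, datum_name):
    # Sensor: writes its finding into a datum.
    d = math.hypot(character.pos[0] - enemy_pos[0], character.pos[1] - enemy_pos[1])
    character.data[datum_name] = d < radius

def decision_tree(character):
    # Decision: a conditional plus the commands it invokes.
    character.active_behaviours.clear()     # behaviours are deactivated before solving
    if character.data.get("enemy_is_near"):
        character.active_behaviours.add("FleeFrom")    # Command: activate a behaviour
    else:
        character.active_behaviours.add("FollowPath")

guard = Character((0.0, 0.0))
proximity_sensor(guard, enemy_pos=(3.0, 4.0), radius=10.0, datum_name="enemy_is_near")
decision_tree(guard)
print(guard.active_behaviours)              # {'FleeFrom'}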
Data Information
An AIE's data information can be thought of as its internal memory. Each datum is an element of information stored in the AIE's internal memory. For example, a datum could hold information such as whether or not an enemy is seen or who is the weakest ally. A Datum can also be used as a state variable for an AIE.
Data are written to by a character's Sensors, or by Commands within a Decision Tree. The Datum's value is used by the Decision Tree to activate and deactivate behaviours and animations, or to test the character's state. Sensors and Decision trees will be described hereinbelow in more detail.
Sensors
AIEs use sensors to gain information about the world. A sensor will store its sensed information in a datum belonging to the AIE.
A parameter can be used to trigger the activation of a sensor. If a sensor is set to off, it will be ignored by the solver and will not store information in any datum.
Examples of sensors that can be implemented in the method 100 will now be described in more detail. Of course, it is believed to be within the reach of a person skilled in the art to provide additional or alternate sensors depending on the application.
Vision Sensor
The vision sensor is the eyes and ears of a character and allows the character to sense other physical objects or AIEs in the virtual world, which can be autonomous or non-autonomous characters, barriers, and waypoints, for example.
The following parameters allow, for example, defining the vision sensor:
Property Sensor
The Property sensor is a general-purpose sensor that returns and filters the value of any of an AIE's states, speed, angular velocity, orientation, distance from a target, group membership, datum values, bearing, or pitch bearing.
Unlike other sensors, the property sensor can sense the properties of any AIE in the simulation.
The following table includes a list of parameters that can be used to define the Property sensor:
Random Sensor
A random sensor returns a random number within a specified range. The following table includes examples of parameters that can be used to define the Random sensor:
Value Sensors
A value sensor allows setting the value of a datum based on whether or not a certain value is within a certain range.
The following table includes examples of parameters that can be used to define the Value sensor:
Speed Sensor
A speed sensor is a value sensor that sets the value of a boolean datum based on the speed of the AIE. For example, this sensor can be used to change the animation of an AIE from a walk cycle to a run cycle.
The Property sensor can be used to read the actual speed of an AIE into a datum.
The following table includes examples of parameters that can be used to define the Speed sensor:
State Sensor
A state sensor allows setting the value of a boolean datum based on the value of one of the AIE's states. For example, in a battle scene such a sensor can be used to allow AIEs with low health to run away by activating a Flee From behaviour when their “alive” state reaches a low enough value.
The following table includes examples of parameters that can be used to define a state sensor:
Active Animation Sensor
An active animation sensor can set the value of a datum based on whether or not a certain animation is active.
The following table includes examples of parameters that can be used to define an active animation sensor:
Commands, Decisions, and Decision Trees
As illustrated in steps 110-114 of
Step 110 results in a Command being used to activate a behaviour or an animation, or to modify an AIE's internal memory.
Commands are invoked by decisions. A single Decision consists of a conditional expression and a list of commands to invoke.
A Decision Tree consists of a root decision node, which can own child decision nodes. Each of those children may in turn own children of their own, each of which may own more children, etc.
Since the method 100 iterates over all frames, a verification is done in step 118 to determine whether all frames have been processed. A similar verification is done in step 120 for the AIEs.
In step 122, all of the current AIE's behaviours are deactivated.
In step 124, verification is done to insure that all decision trees have been processed for the current AIE.
Then, for each decision tree, the root decision node is evaluated (step 126), all commands in the corresponding decision are invoked (step 128), and the conditional of the current decision tree is evaluated (step 130).
It is then verified in step 132 whether all decision nodes have been processed. If yes, the method 100 proceeds with the next decision tree (step 124). If no, the child decision node indicated by the conditional is evaluated (step 134), and the method returns to the next child decision node (step 132).
For the example given hereinabove, where a character moves along a path while running away from any nearby enemies, the following elements can be created:
It results from the above decision tree that when the character sees the enemy, it will activate its Flee From behaviour and if the character does not see the enemy it will activate its Follow Path behaviour.
Note that if an AIE is assigned a decision tree, the solver deactivates all behaviours before solving for that AIE (step 122 in
A parameter indicative of whether or not the decision tree is to be evaluated can be used in defining the decision tree.
Whenever the command corresponds to activating an animation and a transition is defined between the current animation and the new one, then that transition is first activated.
Similarly, whenever the command corresponds to activating an animation, a blend time can be provided between the current animation and the new one.
Moreover, whenever the command corresponds to activating a behaviour, the target is changed to the object specified by a datum. For example, to make a character flee from the nearest enemy:
Examples of commands that can be used with the method 100, and more specifically in step 112 or 114, will now be described.
Queue Animation Command
This command can be used to activate an animation (the “New Animation”) once another one (the “Old Animation”) completes its current cycle. For example, a queue animation with a walk animation as the “Old Animation”, and a run animation as “New Animation”, will activate the run animation as soon as the walk animation finishes its current cycle.
It is to be noted that the same result can be achieved by defining a Queuing transition between the animations, then using a normal ActivateAnimation command.
Set Datum Command
This command can be used to set (or increment, or decrement) the value of a datum. If the datum represents an AIE's state, then this command can be used to transition between states.
Set Value Command
This command allows setting (or incrementing, or decrementing) the value of any attribute of one or more AIEs or character characteristics (such as behaviours, sensors, animations, etc.).
In particular, this command may be used to set a character's position and orientation, active state, turning radii, surface offset, etc.
The set value command may include two parts: the Item Selection part for specifying which items are to be modified, and the Value section for specifying which of the item's attributes are to be modified.
Group Membership Command
This command can be used to add (or remove) an AIE to (from) a group. Also, such a command may be used to remove all members from a group.
Animation Control
It will now be described how animation clips are associated with AIEs and how an AIE drives the right animation clip at the right time and speed.
Animation clips can be defined for example using one of the following parameters:
Animation Selection
Action Selection allows, for example, switching between an idle animation and a walk animation based on a character's speed. In this particular case, the first step is to create a datum for the character, which might be called “IsWalking”. Next, a speed sensor is created to drive the datum. Then the following decision tree is created to activate the correct animation depending on the speed of the character:
Thus, at each frame, either the walk or idle clip would be played based on the value of the IsWalking datum.
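A minimal sketch of this selection, with assumed names and a purely illustrative speed threshold, is as follows:

# Minimal sketch: a speed sensor writes the boolean IsWalking datum, and the
# decision tree activates the corresponding animation clip each frame.
def speed_sensor(speed, threshold=0.1):
    return speed > threshold                        # value written to the IsWalking datum

def select_animation(is_walking):
    return "walk_cycle" if is_walking else "idle_cycle"

for speed in (0.0, 1.5):
    print(speed, "->", select_animation(speed_sensor(speed)))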
Action Selection effectively allows activating the right animation at the right time, and adjusting its scale and number of cycles appropriately.
Animation Transitions and Blending
While animation commands are used to activate or deactivate clip animations at the appropriate time, animation transitions are used in order to specify what happens if an animation is already playing when another one is activated. This allows creating a smooth transition from one animation to another. In particular, animation transitions make it possible to smoothly blend one clip into another.
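By way of illustration only, the per-channel weighted averaging involved in such blending (defined more precisely in the terminology below) can be sketched as follows, using assumed channel values and weights:

# Illustrative sketch of per-channel animation blending: the value of a channel
# is the weighted average of the values taken from two (or more) clips.
def blend_channel(values, weights):
    total = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total

# Blend a "left knee angle" channel 30% of the way from a walk clip to a run clip.
walk_value, run_value = 20.0, 45.0
print(blend_channel([walk_value, run_value], [0.7, 0.3]))   # 27.5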
Before describing in more detail the action selection and the use of transitions between the activation of animation clips, the following terminology is introduced:
Animation channel: attributes of objects with animation curves;
Animation clip: a list of channels and their associated animations. Typically, the animation of each channel is defined with a function curve, that is, a curve specifying the value of the parameter over time. This concept promotes animation reuse, over several characters and over time for a given AIE. It does this by scaling, stretching, and offsetting the animation and possibly fitting it to a specific AIE. Common examples of a clip are walk or run cycles, death animations, etc.
Animation blending: a process that computes the value for a channel by averaging two or more different clips, in order to get something in-between. This is generally controlled by a weight that specifies the amount of each animation to use in the mix.
Interpolation/Transitions: a blending that occurs from either a static, non-animated posture to a new pose or animation, or an old animation to a new one where the new animation is expected to take over completely over a transition time.
Marker: used to define a reference point in an animation clip for transitions. For example, if in the last frame of the “in” animation clip a character has its right foot on the ground, a marker can be used to define a similar position in the “out” animation to transition to.
Animation markers are reference points allowing synchronizing clips. They are used to synchronize transitions between two clips.
The following table includes examples of parameters that can be used to define animation markers:
The following table includes examples of parameters that can be used to define animation blending:
It is to be noted that a character's attributes can also be used to define animation. The following table includes examples of such attributes:
According to a further aspect of the present invention, waypoints are provided for marking spherical regions of space in the virtual world.
Characteristics and functions of waypoints will be described with reference to the following table, including examples of parameters that can be used to define waypoints:
Waypoints allow creating a path, which is an ordered set of waypoints that an AIE may be instructed to follow. For example, a path around a racetrack would consist of waypoints at the turns of the track.
Each waypoint can be assigned speed limits to control how the AIE approaches it (e.g. approach this waypoint at this speed). Paths can be used to build racetracks, attack routes, flight paths, etc.
Also, linking waypoints together with edges may create a waypoint network. For example, a character with a SeekToViaNetwork behaviour can use a waypoint network to navigate around the world. An edge from a waypoint in the network to another waypoint in the same network indicates that an AIE can travel from the first waypoint to the second. The lack of an edge between two waypoints indicates that an AIE cannot travel between them.
A waypoint network functions in a similar manner to a path; however, the exact route that the autonomous character takes is not pre-defined, as the character will navigate its way via the network according to its environment. Essentially, a waypoint network can be thought of as a dynamically generated path. Collision detection is used to ensure that AIEs do not penetrate each other or their surrounding environment.
According to a further specific aspect of the present invention, there is provided a method for generating a waypoint network. The method includes analyzing the level, determining all reachable areas and placing the minimum necessary waypoints for maximum reachability. For example, reachable waypoints can be positioned within the perimeter of the selected barriers, and outside barrier-enclosed unreachable areas or “holes”.
It is to be noted that a waypoint can be positioned in the virtual world at the entrance to each room and marked as a portal using a specific waypoint parameter. Each room can have one Portal waypoint per doorway. Thus, any two rooms connected by a doorway will have two Portal waypoints (one just inside each room) connected by an edge, and all passages or doorways connecting one room to another will have a corresponding edge between two Portal waypoints. All other waypoints should have the “IsPortal” parameter set to off. This allows the solver to significantly reduce the amount of run-time memory required to navigate large networks (i.e. >100 waypoints).
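For illustration only, the following sketch represents such a network as a graph of waypoints and edges (with hypothetical waypoint names) and finds a route through it with a simple breadth-first search; the lack of an edge means no travel between the corresponding waypoints:

from collections import deque

# Hypothetical sketch of a waypoint network: waypoints are graph nodes, edges
# mark traversable "safe corridors", and portal waypoints sit just inside each doorway.
edges = {
    "room_a_portal": ["room_b_portal"],                  # doorway between rooms A and B
    "room_b_portal": ["room_a_portal", "room_b_center"],
    "room_b_center": ["room_b_portal"],
}

def route(network, start, goal):
    queue, came_from = deque([start]), {start: None}
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return list(reversed(path))
        for nxt in network.get(node, []):
            if nxt not in came_from:                     # no edge means no travel
                came_from[nxt] = node
                queue.append(nxt)
    return None

print(route(edges, "room_a_portal", "room_b_center"))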
The animation package 202 includes means to model and texture characters 206, means for creating animation cycles 208, means to add AI animation to characters 210, and means to render out animation 212. Since those last four means are believed to be well known in the art, and for concision purposes, they will not be described herein in more detail.
The plug-in 204 includes an autonomous entity engine (AEE) (not shown), which calculates and updates the position and orientation of each AIE for each frame, chooses the correct set of animation cycles, and enables the correct simulation logic.
The plug-in 204 is designed to be integrated directly into the host art package 202, allowing animators to continue animating their AIE via the package 202, rather than having to learn a new technology. In addition, many animators are already familiar with specific animation package workflow, so learning curves are reduced and they can mix and match functionality between the package 202 and the plug-in 204 as appropriate for their project.
The system 200 may include a user-interface tool (not shown) for displaying specific objects and AIEs from the digital world, for example, in a tree view structure orientated from an artificial perspective. This tool allows selecting multiple objects and AIEs for editing one or more of their attributes.
Furthermore, a Paint tool (not shown) can be provided to simultaneously organize, create, position and modify a plurality of AIEs. The following table includes examples of parameters that can be used to define the effect of the Paint tool:
The system 200 for on-screen animation according to the present invention provides means to duplicate the attributes from a first AIE to a second AIE. The attribute duplication means may include a user-interface allowing the selection of the previously created recipient of the attributes and of the AIE from which the attributes are to be copied. Of course, the duplication may also extend to behaviours, animation clips, decision trees, sensors and any other information associated with an AIE. The duplication allows a group of identical AIEs to be created simply and rapidly.
More specifically, options may be provided to refine the attribute duplication process, such as:
Alternatively or additionally, the duplication process may be performed on an attribute-by-attribute basis.
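For illustration, one possible form of such duplication is sketched below in Python; the function and attribute names (duplicate_attributes, attributes, behaviours, decision_tree) are assumptions and do not represent the actual user interface or data model of the system 200.

import copy
from types import SimpleNamespace

def duplicate_attributes(source, recipient, only=None,
                         include_behaviours=True, include_decision_tree=True):
    """Copy attributes from a source AIE to a previously created recipient.
    'only' restricts the copy to an attribute-by-attribute basis."""
    attrs = source.attributes if only is None else {
        k: v for k, v in source.attributes.items() if k in only}
    recipient.attributes.update(copy.deepcopy(attrs))
    if include_behaviours:
        recipient.behaviours = copy.deepcopy(source.behaviours)
    if include_decision_tree:
        recipient.decision_tree = copy.deepcopy(source.decision_tree)
    return recipient

fish_template = SimpleNamespace(
    attributes={"max_speed": 2.0, "mass": 1.0},
    behaviours=["FlockWith", "WanderAround", "FleeFrom"],
    decision_tree="fish_tree")
new_fish = SimpleNamespace(attributes={}, behaviours=[], decision_tree=None)
duplicate_attributes(fish_template, new_fish)                      # full copy
duplicate_attributes(fish_template, new_fish, only={"max_speed"})  # per attribute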
Turning now to
The walls of the aquarium are defined as barriers 308 in the virtual world, the seaweed 310 as non-autonomous image entities, and each fish 312 and shark 304 as autonomous image entities.
Each fish 312 is assigned a “Flock With” other fish behaviour so that they all swim in a school formation, as well as a “Wander Around” behaviour so that the school 302 moves around the aquarium 300. To allow a fish 312 to escape the hungry shark 304, it is assigned a “Flee From” shark behaviour. The “Flee From” behaviour is given an activation radius so that it is effectively disabled when the shark 304 is outside this radius and enabled only when the shark 304 is inside it.
To prevent a fish 312 from hitting the other fish 312, the seaweed 310, or the aquarium walls 308, each fish 312 has the additional behaviours “Avoid Obstacles” (the seaweed 310 and the other fish 312) and “Avoid Barriers” (the aquarium walls 308). As in real life, the solver resolves these different behaviours to determine the correct motion path so that, in its efforts to avoid being eaten, a fish 312 avoids the shark 304, the other fish 312 around it, the seaweed 310, and the aquarium walls 308 as best it can.
Consider the case where a fish 312 wants to escape the hungry shark 304. At this point in time, both the fish's “Flee From” shark and “Flock With” other fish behaviours are activated, causing two steering forces to act on the fish 312 in unison. Therefore, the fish 312 will try to escape the shark 304 and stay with the other fish 312 at the same time. The resulting active steering force on the fish 312 is the weighted sum of the individual behavioural forces, based on their intensities. For example, for the fish 312, it is much more important to flee from the shark 304 than to stay in the school formation 302. Therefore, a higher intensity is assigned to the fish's “Flee From” behaviour than to its “Flock With” behaviour. This allows the fish 312 to break formation when trying to escape the shark 304 and then to regroup with the other fish 312 once it is far enough away from the shark 304.
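The weighted-sum resolution described above can be pictured with the following sketch, in which the two-dimensional force vectors, the intensities and the weighted_steering function are purely illustrative assumptions.

def weighted_steering(behaviour_forces):
    """behaviour_forces: list of (force_vector, intensity) pairs.
    The resulting steering force is the intensity-weighted sum of the
    individual behavioural forces."""
    fx = sum(i * f[0] for f, i in behaviour_forces)
    fy = sum(i * f[1] for f, i in behaviour_forces)
    return (fx, fy)

flee_from_shark = ((-1.0, 0.2), 0.9)   # high intensity: escaping matters most
flock_with_school = ((0.4, 0.6), 0.3)  # lower intensity: formation can be broken
print(weighted_steering([flee_from_shark, flock_with_school]))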
Although simply adjusting the intensities of the fish's behaviours can yield realism, the “Flock With” behaviour of the fish 312 can alternatively be disabled and its “Flee From” behaviour enabled when the fish 312 sees the shark 304. Once out of range of the shark 304, a fish 312 then continues to swim in the school 302 by disabling its “Flee From” behaviour and enabling its “Flock With” behaviour. This type of behavioural control can be achieved by setting the behaviours' priorities. By giving the “Flee From” behaviour a higher priority than the “Flock With” behaviour, when a fish 312 is fleeing from the shark 304 its “Flock With” behaviour is effectively disabled. Assigning such priorities to the behaviours causes a fish 312 not to try to remain with the other fish 312 while it is trying to flee the shark 304. However, once it has escaped the shark 304, the “Flock With” behaviour is reactivated and the fish 312 regroups with its school 302.
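A minimal sketch of such priority-based control follows, assuming a hypothetical representation of behaviours as dictionaries with priority and active flags; the actual solver is not limited to this scheme.

def active_behaviours(behaviours):
    """behaviours: list of dicts with 'name', 'priority' and 'active' keys.
    Lower-priority behaviours are effectively disabled while a higher-priority
    behaviour (e.g. "Flee From" shark) is active."""
    active = [b for b in behaviours if b["active"]]
    if not active:
        return []
    top = max(b["priority"] for b in active)
    return [b for b in active if b["priority"] == top]

fish = [
    {"name": "Flock With", "priority": 1, "active": True},
    {"name": "Flee From shark", "priority": 2, "active": True},
]
print([b["name"] for b in active_behaviours(fish)])   # only "Flee From shark"
fish[1]["active"] = False                             # the shark is out of range
print([b["name"] for b in active_behaviours(fish)])   # "Flock With" again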
In many relatively simple cases such as this one, to obtain a realistic animation sequence, it is usually sufficient to assign various degrees of intensities and priorities to specific behaviours. However, in a more complicated scenario, simply tweaking a behaviour's attribute may not produce acceptable results. In order to implement higher-level behaviour, an AIE needs to be able to make decisions about what actions to take according to its surrounding environment. According to the method 100, this is implemented via Action Selection.
The steering behaviour mechanisms described above allow controlling the behaviour of AIEs. However, an AIE often warrants greater intelligence. A method and system according to the present invention enables an animator to assign further behavioural detail to a character via Action Selection. Action Selection allows AIEs to make decisions for themselves based on their environment, where these decisions can modify the character's behaviour, drive its animation cycles, or update the character's memory. This allows the animator to control which behaviours or animation cycles are applied to an autonomous character and when.
As an alternative to assigning priorities to certain behaviours, a vision sensor can be created for each autonomous fish 312 to determine whether or not the fish 312 sees a shark 304.
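A possible form of such a vision sensor is sketched below for illustration; the sees function, the range and the field-of-view values are assumptions and do not represent the actual sensor implementation.

import math

def sees(fish_pos, fish_heading, target_pos, view_range=10.0, fov_deg=120.0):
    """The fish "sees" the shark when the shark is within range and inside
    the fish's field of view (2-D sketch)."""
    dx, dy = target_pos[0] - fish_pos[0], target_pos[1] - fish_pos[1]
    distance = math.hypot(dx, dy)
    if distance > view_range:
        return False
    angle = math.degrees(abs(math.atan2(dy, dx)
                             - math.atan2(fish_heading[1], fish_heading[0])))
    angle = min(angle, 360.0 - angle)            # wrap to [0, 180]
    return angle <= fov_deg / 2.0

# The sensor output drives the action selection: flee only when the shark is seen.
if sees((0.0, 0.0), (1.0, 0.0), (4.0, 1.0)):
    print("enable Flee From, disable Flock With")
else:
    print("enable Flock With, disable Flee From")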
The method 100 will now be described by way of a second specific example of application related to the animation of characters in a battle scene with reference to
Film battles typically involve two opposing armies who run towards each other, engage in battle, and then fight until one army weakens and is defeated. Given the complexity, expense, and danger of live filming such scenes, it is clear that an effective AI animation solution is preferable to staging and filming such battles with actual human actors.
The present example of application of the method 100 involves an army 401 of 250 disciplined Roman soldiers 403, composed of 10 units led by 10 leaders, against a horde 405 of 250 beast warriors 407, composed of 10 tribes led by 10 chieftains. The scenario is as follows. The Romans 403 are disciplined soldiers who march slowly in formation until the enemy is very close. Once within fighting range, the Romans 403 break ranks and attack the nearest beast warrior 407 (see for example
The following description outlines how the method 100 can be used to animate this battle scene. Firstly, the group behaviour and the binary decision trees that determine what actions the characters 403 and 407 will take are defined. Secondly, the individual character behaviour and the binary decision tree ensuring that the correct animation cycle is played at the correct time are defined.
In a battle involving hand-to-hand combat (see
The Roman soldiers 403 and their leaders behave in exactly the same manner. As summarized in
In contrast to the Romans 403, the beast warriors 407 run towards their enemy as a pack. The beast warrior chieftains are made to run towards the Romans 401 by setting their behaviour as “Seek To” the group of Romans at maximum speed. The beast warriors 407 in turn follow their chieftains via a “Seek To” chieftain behaviour. Once the beast warriors 407 are within range of the Roman soldiers 403, these behaviours are de-activated by their binary decision trees and replaced by their tactical behaviours, in much the same manner as for the Roman soldiers. The tactical behaviour binary decision tree 412 for a beast warrior 407 is illustrated in
The binary decision trees illustrated in
Once the gross motion of the battle is complete, the close-up hand-to-hand combat remains to be animated (see
The decision trees illustrated in
In order to play the correct animation sequence for each character during each fight sequence, the binary decision tree 424 shown in
In this example, it is first determined whether a character is walking or not via a speed sensor (step 426). The information returned from the sensor allows determining whether a walk or idle animation sequence should be played. It is then determined whether the character is attacking the enemy or not (428-428′). This information allows determining whether a fight animation is to be played. In order to choose which fight animation to play (FightHigh or FightLow), a random sensor is used to randomly return true or false each cycle. As the binary decision tree does not guarantee that a FightHigh animation sequence will be completed before a FightLow sequence is played, or vice versa, a given animation sequence is queued if another one is currently active. This ensures that a given fight animation sequence is completed before the next sequence commences. As each type of character has different animation sequences, the decision tree 424 is duplicated for each type of character and the correct animation sequences are associated thereto.
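The following sketch illustrates, under assumed names (AnimationQueue, decide_clip), the kind of logic just described: the per-frame binary decisions and the queuing that lets a fight sequence finish before the next one starts. It is illustrative only and is not the actual decision tree 424.

import random

class AnimationQueue:
    """Ensures a fight sequence completes before the next one commences."""
    def __init__(self):
        self.current = "Idle"
        self.pending = []

    def request(self, clip):
        if clip.startswith("Fight") and self.current.startswith("Fight"):
            self.pending.append(clip)   # queue it: a fight clip is still active
        else:
            self.current = clip

    def clip_finished(self):
        self.current = self.pending.pop(0) if self.pending else "Idle"

def decide_clip(is_walking, is_attacking):
    """Per-frame binary decisions: walking? attacking? which fight clip?"""
    if is_walking:                      # speed sensor (step 426)
        return "Walk"
    if not is_attacking:                # attack test (428-428')
        return "Idle"
    return "FightHigh" if random.random() < 0.5 else "FightLow"   # random sensor

anim = AnimationQueue()
anim.request(decide_clip(is_walking=False, is_attacking=True))
anim.request(decide_clip(is_walking=False, is_attacking=True))    # queued while fighting
print(anim.current, anim.pending)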
The animation sequence resulting from the second example can be completed by creating a decision tree for the dying sequence of each character. Then, the number of characters necessary to fill the battleground is created by duplication, and the animation is triggered. The screen shots shown in
The method 100 will now be described in more detail with reference to other specific examples of applications related to the animation of entities.
An animation logic wherein two humans walk through a narrow corridor cluttered with crates that they must avoid will now be considered, the elements defining the scene being:
Another example includes characters moving as a group. Group behaviours enable grouping individual autonomous characters so that they act as a group while still maintaining individuality. According to this example, a group of soldiers are about to launch an attack on their enemy in open terrain.
The soldiers are defined as AIEs and any obstacles, such as trees and boulders, are defined as non-autonomous entities.
As the ground is not perfectly flat, a flat terrain is created and the height fields of various points are modified to give the terrain some elevation. To ensure that the soldiers remain on the ground it is provided that they hug the terrain.
To prevent the soldiers from walking into obstacles each soldier is assigned an “Avoid Obstacles” behaviour.
To ensure that the soldiers remain as a unit they are also assigned a “Flock With” behaviour that would specify how closely they keep together.
A “Seek To” the enemy behaviour is finally assigned to make the soldiers move towards their enemy.
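Purely by way of illustration, the soldier set-up described above could be expressed as follows; the make_soldier function and the dictionary keys are assumptions, not the system's actual authoring interface.

def make_soldier(position, enemy):
    return {
        "type": "AIE",
        "position": position,
        "hug_terrain": True,                       # stay on the elevated ground
        "behaviours": [
            {"name": "Avoid Obstacles", "targets": "trees and boulders"},
            {"name": "Flock With", "targets": "other soldiers", "separation": 2.0},
            {"name": "Seek To", "targets": enemy},
        ],
    }

enemy_position = (100.0, 0.0, 250.0)
unit = [make_soldier((float(i), 0.0, 0.0), enemy_position) for i in range(10)]
print(len(unit), "soldiers advancing as a group")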
According to a further example, there is provided a race between several cars on a racetrack.
The cars are defined as AIEs. Each car is defined by specifying different engine parameters (max. acceleration, max. speed, etc.) so that they each race slightly differently.
As the racetrack is not perfectly flat, a flat terrain is first created within the digital world and then the height fields are changed at various points to give the terrain some elevation. To ensure that the cars stay on the surface of the track, it is specified that the cars hug the terrain.
A looped path that follows the track is provided and the cars are assigned a “Follow Path” behaviour so that they stay on the racetrack. Each waypoint along the path has a speed limit associated with it (analogous to real gears at turns) that limits the speed at which a car can approach the waypoint.
To prevent the cars from crashing into each other, each car is further characterized by an “Avoid Obstacles” behaviour as each car can be considered an obstacle to the other cars.
Finally, in order to keep the cars from straying too far off the racetrack, hidden barriers are added along the sides of the track and an “Avoid Barriers” behaviour is assigned to each car.
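The racetrack set-up may be pictured with the following illustrative sketch, in which the track data, the make_car function and the approach_speed rule are assumptions made solely for this example.

track = [  # (x, z, speed_limit) -- lower limits at the turns, like real gears
    (0.0,   0.0, 60.0),
    (200.0, 0.0, 25.0),     # turn 1
    (200.0, 80.0, 60.0),
    (0.0,  80.0, 25.0),     # turn 2
]

def make_car(name, max_speed, max_acceleration):
    return {"name": name, "max_speed": max_speed, "max_accel": max_acceleration,
            "hug_terrain": True,
            "behaviours": ["Follow Path", "Avoid Obstacles", "Avoid Barriers"]}

def approach_speed(car, waypoint):
    """A car never approaches a waypoint faster than the waypoint's speed limit."""
    return min(car["max_speed"], waypoint[2])

cars = [make_car("car_1", 70.0, 9.0), make_car("car_2", 65.0, 11.0)]
for car in cars:
    print(car["name"], [approach_speed(car, wp) for wp in track])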
The next example concerns a skateboarder in a skate park.
The skateboarder is defined as an AIE and the various obstacles within the park, such as boxes and garbage bins, as obstacles. The ramps upon which the skateboarder can skate are defined as surfaces and to ensure that the skateboarder remains on a ramp surface rather than pass through it, it is specified that he hug the surface.
As discussed hereinabove, Action Selection is implemented so that AIEs are able to make decisions for themselves based on information about their surrounding environment. A guard patrolling a fortified compound against intruders is now provided as an example of animation according to the method 100.
The guard is defined as an AIE, the buildings and perimeter fence as barriers, and the trees, vehicles etc. within the compound as non-autonomous entities. A flat terrain is first created and then the height fields of various points are modified to give the terrain some elevation. To ensure that the guard remains on the ground it is specified that he hugs the terrain.
To prevent the guard from walking into obstacles within the compound during his patrol, an “Avoid Obstacles” behaviour is assigned thereto. In addition, to prevent him from walking into the perimeter fence or any of the buildings, he is also assigned an “Avoid Barriers” behaviour.
To specify the route that the guard takes during his patrol, a waypoint network is provided and the guard is assigned a “Seek To Via Network” behaviour. A waypoint network rather than a path is used to prevent the guard from following the exact same path each time. Via the network, the guard dynamically navigates his way around the compound according to the surrounding environment.
Sensors are created allowing the guard to gather data about his surrounding environment, and binary decision trees are used to enable the guard to decide what actions to take. For instance, sensors are created to enable the guard to hear and see in his surrounding environment. If he hears or sees something suspicious, he then decides what to do via a binary decision tree. For example, if he hears something suspicious during his patrol, he moves towards the source of the sound to investigate. If he does not find anything, he returns to his patrol and continues to follow the waypoint network. If he does find an intruder, he fights the intruder. Further sensors and binary decision trees can be created to enable the guard to make other pertinent decisions.
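An illustrative sketch of the guard's action selection is given below; the guard_decision function and its inputs are assumptions that stand in for the sensors and binary decision trees described above.

def guard_decision(hears_something, sees_intruder, investigated):
    """Returns the guard's next action from simple sensor inputs."""
    if sees_intruder:
        return "fight intruder"
    if hears_something and not investigated:
        return "move towards sound and investigate"
    return "resume patrol via the waypoint network (Seek To Via Network)"

# A suspicious noise is heard, nothing is found, the guard resumes his patrol.
print(guard_decision(hears_something=True, sees_intruder=False, investigated=False))
print(guard_decision(hears_something=True, sees_intruder=False, investigated=True))
print(guard_decision(hears_something=False, sees_intruder=True, investigated=False))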
The system 502 is in the form of an AI agent engine to be included in a video game platform 500. The AI agent 502 is provided in the form of a plug-in for the video game platform 500. Alternatively, the AI agent can be made integral to the platform 500.
The AI agent 502 comprises programming tools for each aspect of game development, including visual and interactive creation tools for level editing and an extensible API (Application Programming Interface).
More specifically, the game platform 500 includes a level editor 504 and a game engine 506. As is well known in the art, a level editor is a computer application allowing the creation and editing of the “levels” of a video game. An art package (or art application software) 508 is used to create the visual look of the digital world, including the environment and the autonomous and non-autonomous image entities that will inhabit the digital world. Of course, an art package is also used to create the look of digital entities in any application, including movies. Since art packages and level editors are both believed to be well known in the art, they will not be described herein in more detail.
As illustrated in
The libraries 510 provide an open architecture that allows game programmers to extend the AI functionality, such as by adding their own programmed behaviours.
The libraries 510, including the AI agent 502, allow for the following functionality:
1. Real-time authoring tools for level editors:
The libraries allow character logic to be created, tested and edited in the art package/level editor and exported directly to the game engine.
As discussed hereinabove, the libraries can be integrated via plug-in or directly into a custom editor or game engine.
The implementation of the creation tools in the form of libraries allows for real-time feedback to shorten the design-to-production cycle.
2. Multi-platform:
The use of libraries allows animations to be first authored and then published across many game platforms, such as Playstation 2™ (PS2), Xbox™, GameCube™, or a Personal Computer (PC) running Windows 98™, Windows 2000™, Windows XP™ or Linux™, etc.
3. High performance:
The use of libraries allows minimizing central processing unit (CPU) and memory usage.
It allows optimizing the animation for each platform.
4. Open, flexible and extendable AI architecture:
The modularity provided with the use of libraries allows using only the tools required to perform the animation.
5. Piggyback the physics layer to avoid duplicate world mark-up and representation and gain greater performance and productivity:
The use of libraries allows the AI agent to use the physics layer for barriers, space partition, vision sensing, etc.
It also allows for less environmental mark-up, faster execution, less data, less code in the executable, etc.
6. Detailed integration examples of genres (e.g., action/adventure, racing, etc.) and of other middleware solutions (e.g., Renderware™, Havok™, etc.):
For each genre, the plug-in 502 is used to author the example. The covered genres include First Person Shooter (FPS), action/adventure, racing and fishing. For each genre, examples are authored and documented. The same applies to film applications, where the genres include battle scenes, hand-to-hand combat, large crowds running, etc.
For other middleware solutions, the AI agent 502 is basically integrated therewith. For physics, it can be integrated, for example, with Havok's™ physics middleware by taking one of its demo engines, ripping out its hardwired AI agent and replacing it with the AI agent 502. For rendering middleware (GameBryo™ from NDL and Criterion's RenderWare™), their software is used, simple game engines are built, and the AI agent 502 is linked into them.
7. Intelligent animation control feeds the animation engine:
Based on character decisions, animation clip control (selection, scaling, blending) is transferred to the developer's animation engine. The inputs include user-defined rules, and the outputs include, for each animation frame, dynamic information based on the AI for that frame indicating exactly which animation cycles to play, how they are to be blended, etc. A minimal illustrative sketch of such per-frame output is given below.
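This sketch is illustrative only; the field names are assumptions, and the actual interface between the AI agent 502 and a developer's animation engine is not limited to this form.

def animation_frame_output(character, frame):
    """For each frame, the AI decides exactly which cycles to play and how
    they are to be selected, scaled and blended."""
    return {
        "frame": frame,
        "character": character["name"],
        "clips": [
            {"cycle": "walk", "weight": 0.7, "time_scale": 1.2},
            {"cycle": "look_around", "weight": 0.3, "time_scale": 1.0},
        ],
    }

print(animation_frame_output({"name": "guard"}, frame=42))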
A system for on-screen animation of digital entities, including characters, according to the present invention allows creating and animating non-player characters and opponents, controlling cameras, and producing realistic people or vehicles for training systems and simulations. Camera control can be created via an intelligent invisible character equipped with a virtual hand-held camera, yielding a camera that seemingly follows the action.
A system for on-screen animation of digital entities according to embodiments of the present invention includes user-interface menus allowing a user to select and assign predetermined attributes and behaviours to an AIE.
Also, according to some embodiments, the system for on-screen animation of digital entities includes means for creating, editing and assigning a decision tree to an AIE.
Of course, many user-interface means can be used to allow the copying and pasting of attributes from a graphical representation of one digital entity to another. For example, a mouse cursor and mouse buttons, or a user menu, can be used to identify the source and destination and to select the attribute to copy.
A method and system for on-screen animation of digital entities can be used to animate digital entities in a computer game, in computer animation for movies, and in computer simulation applications, such as a crowd emergency evacuation.
Although the present invention has been described hereinabove by way of preferred embodiments thereof, it can be modified, without departing from the spirit and nature of the subject invention as defined in the appended claims.
Related application data: provisional application No. 60444879, filed February 2003, US.