This document relates to animating an image using an animation effect and a trigger element.
Moving images are sometimes provided by defining some property of an animation and selectively applying that animation to a picture or other image. This can then cause the image or picture to move, such as on a computer screen. Such animations are sometimes implemented in a hard-coded fashion, whereby there is little or no flexibility in modifying the way the animation works and/or what it is applied to.
Certain approaches can be used, for example, when there are multiple instances of an image that are to appear in a view, such as individual raindrops that are to illustrate a rainfall. In such implementations, the appearance of individual droplets is sometimes effectuated by defining a birth rate of raindrops for the area at issue. That is, some entity such as a user, a random number generator or another application can specify the birth rate variable, and this data causes the appropriate number of raindrop instances to be included in the view. When the animation is displayed, then, a viewer sees the specified number of raindrops appearing on the screen.
The invention relates to animating one or more image elements.
In a first aspect, a computer-implemented method for animating an image element includes determining that a trigger event defined by a trigger element occurs. The method includes, in response to the trigger event, applying an animation effect to a group that comprises at least one image element. A first association between the animation effect and the group is configured for another animation effect to selectively be associated with the group, and a second association between the trigger element and the animation effect is configured for another trigger element to selectively be associated with the animation effect.
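By way of non-limiting illustration only, one possible model of the two associations is sketched below in TypeScript; all type and member names are hypothetical and are not part of the present disclosure.

```typescript
// Illustrative model of the two associations (all names hypothetical).
// The trigger is tied to the effect (second association) and the effect
// is tied to a group of image elements (first association); either tie
// can be re-pointed without disturbing the other.
interface ImageElement { id: string; }
interface Group { elements: ImageElement[]; }

interface TriggerElement {
  // Reports whether the trigger event defined by this element occurs.
  fires(): boolean;
}

interface AnimationEffect {
  // Applies the effect to every image element in the group.
  applyTo(group: Group): void;
}

class Animation {
  constructor(
    public trigger: TriggerElement, // second association (replaceable)
    public effect: AnimationEffect, // the effect being triggered
    public group: Group,            // first association (replaceable)
  ) {}

  tick(): void {
    if (this.trigger.fires()) {
      this.effect.applyTo(this.group);
    }
  }
}
```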
Implementations can include any, all or none of the following features. The image element can be at least one of: an image of an object, a character, and combinations thereof. The trigger element can lack spatial properties and include information configured to feed a parameter to the animation effect regarding the image element. The information can be contained in at least one of: a numerical list, an alphabetical list, a random list, an amplitude of audio, input generated using an input device, input generated using a touch screen, and combinations thereof. The trigger element can have at least one spatial property and can be configured to feed a parameter to the animation effect regarding the image element, the trigger element triggering the animation effect upon touching the image element. The trigger element can be one of: a geometric figure, a circle, a line, an animated sequence, and combinations thereof. The trigger element can include a standard zone that causes the animation effect to be applied, and a drop-off zone that causes the animation effect to be applied to a lesser degree than in the standard zone. The method can further include changing the spatial property. The changing spatial property can be one of: a shape changing size, a moving line, and combinations thereof. The spatial property can change in response to a user manipulating the spatial property. The group can include multiple image elements that are currently in an ordered state, and applying the animation effect can cause each of the multiple image elements to undergo motion away from the ordered state. The motion can be one of: motion with inertia applied to the multiple image elements, and motion without inertia applied to the multiple image elements. The animation effect can cause the image element to rotate. The animation effect can include a wind effect that gives an appearance of blowing on the image element. The animation effect can include a fire effect that gives an appearance of burning the image element. The animation effect can cause one of: a size of the image element to change, a color of the image element to change, and combinations thereof. Multiple animation effects can be associated with the group, each of the animation effects having a particular trigger element.
In a second aspect, a computer program product is tangibly embodied in a computer-readable storage medium and includes instructions that when executed by a processor perform a method for animating an image element. The method includes determining that a trigger event defined by a trigger element occurs. The method includes, in response to the trigger event, applying an animation effect to a group that comprises at least one image element. A first association between the animation effect and the group is configured for another animation effect to selectively be associated with the group, and a second association between the trigger element and the animation effect is configured for another trigger element to selectively be associated with the animation effect.
In a third aspect, a computer-implemented method for providing animation of an image element includes obtaining a group comprising at least one image element that is to be animated. The method includes generating a first association for an animation effect to be applied to the obtained group, the first association being configured for another animation effect to selectively be associated with the obtained group. The method includes generating a second association for a trigger element to trigger the animation effect, the second association configured for another trigger element to selectively be associated with the animation effect.
Implementations can include any, all or none of the following features. The method can further include associating the other animation effect with the obtained group, wherein the other animation effect is to be triggered at least by the trigger element. The method can further include associating the other trigger element with the animation effect, wherein the animation effect is to be triggered at least by the other trigger element.
In a fourth aspect, a computer program product is tangibly embodied in a computer-readable storage medium and includes instructions that when executed by a processor perform a method for providing animation of an image element. The method includes obtaining a group comprising at least one image element that is to be animated. The method includes generating a first association for an animation effect to be applied to the obtained group, the first association being configured for another animation effect to selectively be associated with the obtained group. The method includes generating a second association for a trigger element to trigger the animation effect, the second association configured for another trigger element to selectively be associated with the animation effect.
Implementations can provide any, all or none of the following advantages. A more flexible animation can be provided. An animation can be provided that includes a freely interchangeable animation effect to be applied to an image element. An animation can be provided that includes a freely interchangeable trigger element to initiate an animation effect for an image element. An animation can be provided where both an animation effect and a respective trigger element are freely interchangeable.
The details of one or more embodiments are set forth in the accompanying drawings and the description below. Other features and advantages will be apparent from the description and drawings, and from the claims.
Like reference symbols in the various drawings indicate like elements.
At least one trigger element 102 can be applied to the animation effect 104 using the association 100B to specify when and/or where the animation effect 104 occurs. In some implementations, trigger elements 102 can include some information that is provided to the animation effects 104 to animate the image elements 106a-106n. Trigger elements 102 can lack spatial properties. Trigger elements 102 that lack spatial properties include a numeric list, an alphabetical list, a random list, and an amplitude of audio, to name a few examples. For example, amplitude values of an audio sample can be provided as one or more parameters for the animation effects 104. As another example, one or more values can be retrieved from a list of values and used as parameters for the animation effects 104. Such parameters can specify the order in which each of several image elements are triggered, to name just one example.
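By way of non-limiting illustration, a trigger element without spatial properties could be sketched as follows, reusing the hypothetical interfaces from the sketch above; the sample buffer and threshold are assumptions.

```typescript
// Hypothetical non-spatial trigger: no shape or position, just data. Each
// check consumes the next normalized audio amplitude and exposes it as a
// parameter (magnitude) for the associated animation effect.
class AudioAmplitudeTrigger implements TriggerElement {
  magnitude = 0;

  constructor(private samples: number[], private threshold = 0.1) {}

  fires(): boolean {
    this.magnitude = this.samples.shift() ?? 0;
    return this.magnitude >= this.threshold;
  }
}
```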
In other implementations, trigger elements 102 can have spatial properties. Trigger elements 102 with spatial properties include a circle, a line, or other geometric figures, to name a few examples. In some implementations, the magnitude of the effect can be determined from parameters provided by the trigger elements 102. A trigger element can be visible or not visible to a user, regardless of whether the element extends spatially. For example, the implementations that will be described in connection with some of the figures in the present disclosure have trigger elements shown for clarity. In an actual implementation, such trigger elements can be invisible.
These spatial trigger elements 102 can activate the animation effects 104 when the trigger elements 102 touch at least one image element 106a-106n. For example, as a circular trigger element touches one or more image elements 106a-106n, the image elements that are touched are deformed (e.g., blurred, scaled, rotated, and the like) according to the animation effects 104, with a magnitude of the effect corresponding to the parameters of the circular trigger element 102.
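A spatial trigger could, purely as an illustrative sketch, perform a hit test against an element's bounding box, with the circle's radius serving as the parameter that scales the effect; the names are hypothetical.

```typescript
// Hypothetical circular trigger with spatial properties. touches() is a
// standard circle-versus-rectangle test: clamp the circle's center to the
// rectangle, then compare the clamped distance against the radius.
interface Rect { x: number; y: number; w: number; h: number; }

class CircleTrigger {
  constructor(public cx: number, public cy: number, public radius: number) {}

  touches(bounds: Rect): boolean {
    const nx = Math.max(bounds.x, Math.min(this.cx, bounds.x + bounds.w));
    const ny = Math.max(bounds.y, Math.min(this.cy, bounds.y + bounds.h));
    const dx = this.cx - nx;
    const dy = this.cy - ny;
    return dx * dx + dy * dy <= this.radius * this.radius;
  }
}
```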
The associations 100 are configured so that the items they connect are interchangeable. For example, any trigger element 102 can provide parameters to any animation effect 104 using the association 100B, and any animation effect 104 can be activated by any trigger element 102. For example, a new trigger element can be associated with the animation effect 104 without affecting the animation effect 104 or its association 100A with the image elements. As another example, a user can associate new image elements to be animated by the animation effect 104 without affecting the trigger element 102. This can provide a user with flexibility when defining and/or associating trigger elements 102, animation effects 104, and image elements 106a-106n. For example, combinations of a few of the many trigger elements 102, animation effects 104, and image elements 106a-106n that use associations 100A-B are described in more detail below.
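Using the hypothetical names from the sketches above, this interchangeability can be illustrated as follows; the declared instances are stubs for illustration only.

```typescript
// Stub instances, declared only to illustrate the swap.
declare const audioTrigger: TriggerElement;
declare const lineTrigger: TriggerElement;
declare const blurEffect: AnimationEffect;
declare const windEffect: AnimationEffect;
declare const letters: Group;

const animation = new Animation(audioTrigger, blurEffect, letters);
animation.trigger = lineTrigger; // new trigger; effect and group unaffected
animation.effect = windEffect;   // new effect; trigger and group unaffected
```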
Trigger elements 102 can be configured to change during run-time execution. For example, a geometric figure (e.g., a circle) can change in size, thus determining when it will trigger the animation of particular image elements. In some implementations, the trigger element 102 can be manipulated by the actions of a user during run-time execution. The user can provide user input through a keyboard, mouse, pointing device, or other user input device to modify the size of a geometric shape or change the speed of a moving line, to name a few examples. For example, the user can use a scroll wheel on a mouse to modify the size of a circular trigger element 102 or modify the speed of a line trigger element 102.
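As a non-limiting sketch of such run-time manipulation, a browser wheel event could resize the hypothetical circular trigger from the earlier sketch.

```typescript
// Scroll up (negative deltaY) enlarges the trigger; scroll down shrinks it.
const circle = new CircleTrigger(200, 150, 40);

window.addEventListener("wheel", (event: WheelEvent) => {
  circle.radius = Math.max(1, circle.radius - event.deltaY * 0.1);
});
```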
In some implementations, the trigger elements 102 can be configured with one or more drop-off zones. In general, drop-off zones allow trigger elements to gradually increase or decrease the magnitude of the parameters provided to the animation effects 104 during run-time execution. For example, the amount of blur applied to the image elements 106a-106n can gradually change corresponding to a change in magnitudes provided by one or more trigger elements 102. Drop-off zones are described in more detail in reference to FIGS. 7A-7C.
In some implementations, the trigger elements 102 can be an animated movie clip or other animated representations, to name two examples. These animated trigger elements 102 can interact with image elements 106a-106n in a substantially similar manner to other ones of the trigger elements 102 that have spatial properties. In other words, as the animated trigger elements 102 touch the image elements 106a-106n they trigger appropriate animation effects 104. Thus, both the item that the animation effect is applied to (i.e., the image element(s) 106) and the trigger that causes the animation to occur (i.e., the trigger element 102) can include an animated image. In addition, in some implementations, the animated trigger elements 102 can move in a manner consistent with their respective animation. For example, an animated trigger element that applies to an image of a football player may move in a direction consistent with the football player image. That is, if the football player's animation is oriented in a particular direction, the trigger element can also move in that direction.
Animation effects 104 can be configured to provide one or more image processing functions to any of the image elements 106. Image processing functions can change the position, size, color, or orientation of, or provide a filter (e.g., blurring) to modify, at least one image element 106a-106n, to name a few examples. For example, the animation effects 104 can push one or more portions of image elements 106a-106n away from each other using inertia or residual motion, to name two examples. The animation effects 104 may be an activation of a simulation. For example, a text string may blow away like leaves in the wind, or fall with a simulation of gravity. As another example, animation effects 104 may also be a conversion of image elements 106a-106n into particles. In addition, multiple animation effects 104 can be combined to generate additional animation effects. For example, a particle system effect can be combined with a wind simulation effect to move the generated particles according to the wind simulation. Particles (e.g., snowflakes, raindrops, fire elements, a flock of fairies) can be visually random, but may be based on an ordered list because the computer internally knows the number and position of each individual particle. Once started by a trigger element 102, certain animation effects are continuous, unless they are discontinued by another trigger element 102, user input, or some other event, to name a few examples. Some continuous animation effects include wind simulations, gravity simulations, particle systems, magnification animations, and blurring animations, to name a few examples.
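One illustrative way to combine effects, again using the hypothetical interfaces above, is a composite effect that applies its constituents in sequence.

```typescript
// Hypothetical composition: e.g., a particle-conversion effect followed by
// a wind simulation that moves the generated particles.
class CompositeEffect implements AnimationEffect {
  constructor(private effects: AnimationEffect[]) {}

  applyTo(group: Group): void {
    for (const effect of this.effects) {
      effect.applyTo(group);
    }
  }
}
```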
The trigger element 202 may move according to received user input. For example, the user can position a pointing device (e.g., a mouse) on or near user interface element 204, click a mouse button, and move the trigger element 202. As another example, the user can provide keyboard input (e.g., pressing one or more arrow keys) to move the trigger element 202. In other implementations, the motion of the trigger element 202 can be determined other than by user input, such as by being associated with a fluctuating variable or being randomized.
As illustrated by
As illustrated by
Because the trigger element 302a includes spatial properties (i.e., it is a line), the trigger element 302a can be configured to provide parameters to the animation effect corresponding to the spatial properties. For example, every image element to the left of the line 302a (e.g., in region 303) is affected by the simulated wind animation effect, as illustrated by representations of wind 306a and 306b, respectively. In addition, every image element to the right of the trigger element 302a (e.g., in region 304) is not affected by the simulated wind animation effect. The wind animation effect triggered by the element 302a can cause the letters to jiggle, but in this example is not strong enough to completely relocate any letter from its original position.
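A minimal sketch of such a line trigger, with the same caveat that the names are hypothetical, could test which side of the line an element lies on.

```typescript
// Hypothetical vertical-line trigger: only elements to the left of the
// line (cf. region 303) are affected; elements to the right (region 304)
// are not.
class LineTrigger {
  constructor(public x: number) {}

  affects(elementX: number): boolean {
    return elementX < this.x;
  }
}
```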
As illustrated by
As illustrated by
In some implementations, the pixel values in the alpha channel are used to determine if the trigger element 402 is touching any of the image elements 206. For example, some portions of the image-based trigger element 402 are transparent (e.g., the pixels of the image corresponding to the background of the image). In other words, the alpha channel value for the background pixels is substantially zero. As another example, some portions of the image-based trigger element 402 are not transparent (e.g., the pixels corresponding to the ballerina). In other words, the alpha channel value for the pixels is not substantially zero. If, for example, pixels with an alpha channel value that is not substantially zero touch at least one of the image elements 206, the displacement animation effect is triggered. For example, as illustrated by
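An illustrative sketch of such an alpha-channel test follows; the buffer layout and the cut-off for "substantially zero" are assumptions.

```typescript
// Scan the region where the image-based trigger overlaps an element's
// bounds; report a touch only if some pixel there is not substantially
// transparent. `alpha` holds the trigger's per-pixel alpha values in
// [0, 255], row-major.
function alphaTouches(
  alpha: Uint8Array,
  width: number,
  overlap: { x0: number; y0: number; x1: number; y1: number },
  threshold = 8, // assumed cut-off for "substantially zero"
): boolean {
  for (let y = overlap.y0; y < overlap.y1; y++) {
    for (let x = overlap.x0; x < overlap.x1; x++) {
      if (alpha[y * width + x] > threshold) return true;
    }
  }
  return false;
}
```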
In some implementations, the movement speed of the image-based trigger element 402 can be used as a parameter for the displacement effect. For example, if the ballerina moves more slowly, the displacement effect may be reduced in magnitude. In some implementations, the movement speed of the trigger element 402 is determined by the animation speed corresponding to the animation used for the image-based trigger element 402. For example, if the animation speed of the dancing is increased, the trigger element 402 may move across the view at an accelerated rate. In other implementations, the movement speed of the trigger element 402 can be determined randomly, determined by an input parameter, or based on other user input, to name a few examples.
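As a purely illustrative sketch, the speed-to-magnitude mapping could be a clamped linear function; the scale factor and maximum are assumptions.

```typescript
// Slower trigger movement yields a smaller displacement magnitude.
function displacementMagnitude(speed: number, scale = 0.5, max = 50): number {
  return Math.min(max, speed * scale);
}
```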
Some trigger elements can also stop an animation effect. For example, as illustrated in
The drop-off zones 704a and 704c can allow a gradual change of magnitude of parameters provided to the animation effect. For example, trigger element 702a uses a blurring magnitude of 10, while trigger element 702b uses a blurring magnitude of 50. According to the differences between the blurring magnitudes of trigger elements 702a and 702b, drop-off zone 704a can gradually change the magnitude from 10 to 50. As another example, because region 704d does not include a blurring magnitude (e.g., the blurring magnitude is zero), the drop-off zone 704c gradually changes the blurring magnitude from 50 to zero, according to the differences in magnitude between trigger element 702b and region 704d, respectively. In some implementations, the drop-off zones can interpolate between the two values to determine an appropriate magnitude at a particular point in the drop-off zone. For example, because the inner edge of drop-off zone 704a has a blurring magnitude of 10, and the outer edge of drop-off zone 704a has a blurring magnitude of 50, drop-off zone 704a has a difference range of 40. If the drop-off zone 704a measures 40 units (e.g., mm, cm, inches, or some other unit of measurement) in size, then at every unit of measurement, the magnitude would change by a value of one, for example.
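The interpolation in this example can be sketched as a clamped linear blend; the function name is hypothetical.

```typescript
// Magnitude 10 at the inner edge and 50 at the outer edge of a 40-unit
// drop-off zone changes by one per unit of distance.
function dropOffMagnitude(
  inner: number,     // magnitude at the inner edge (e.g., 10)
  outer: number,     // magnitude at the outer edge (e.g., 50)
  distance: number,  // distance from the inner edge, in zone units
  zoneWidth: number, // total width of the drop-off zone (e.g., 40)
): number {
  const t = Math.min(1, Math.max(0, distance / zoneWidth));
  return inner + t * (outer - inner);
}

// dropOffMagnitude(10, 50, 20, 40) === 30
```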
In addition, as illustrated by
For example, in
In step 1102, the computing system determines that a trigger event defined by a trigger element occurs. For example, in reference to
In step 1104, the computing system applies an animation effect to at least one image element in response to the trigger event. In general, a first association (e.g., association 100A) between the animation effect and the image elements is configured for any animation effect to be selectively associated with the image elements. For example, any of a blurring, a magnification, a sorting, a displacement, a simulation, or other animation effects can be selectively associated with the image elements. In addition, a second association (e.g., association 100B) between the trigger element and the animation effect is configured for any trigger element to be selectively associated with the animation effect. For example, different geometric triggers (e.g., a circle, a square, a line, or other geometric shapes) can be selectively associated with the animation effect.
In optional step 1106, the spatial property of the trigger element can be changed. For example, a user can increase or decrease the size of a circular trigger element. By changing the spatial property, the number of image elements that are touching the trigger element may change. In some implementations, the spatial property can be a rate of change or speed of movement. For example, the speed at which a line trigger moves across the view can be modified.
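Steps 1102-1106 can be wired together, as a non-limiting sketch, using the hypothetical pieces above; boundsOf() is an assumed lookup from an element to its on-screen bounds.

```typescript
declare const blurEffect: AnimationEffect;
declare const letters: Group;
declare function boundsOf(e: ImageElement): Rect; // assumed lookup

const circle = new CircleTrigger(100, 100, 30);
const trigger: TriggerElement = {
  // Step 1102: the trigger event occurs when the circle touches any element.
  fires: () => letters.elements.some((e) => circle.touches(boundsOf(e))),
};

const anim = new Animation(trigger, blurEffect, letters);
anim.tick();         // step 1104: effect applied if the event occurred
circle.radius += 10; // optional step 1106: change the spatial property
```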
In step 1202, the computing system obtains at least one image element to animate. For example, a user can specify one or more image elements to animate that are stored in the computing system.
In step 1204, a first association (e.g., association 100A) is generated for an animation effect to be applied to the obtained image elements. In general, the first association is configured for any animation effect to be selectively associated with the obtained image elements. For example, a displacement, a magnification, a reshuffling, a simulation, and other animation effects can be selectively associated with the obtained image elements.
In step 1206, a second association (e.g., association 100B) is generated for a trigger element to trigger the animation effect. In general, the second association is configured for any trigger element to be selectively associated with the animation effect. For example, a geometric shape, a random list, an ordered list, or other trigger elements can be selectively associated with the animation effect.
In optional step 1208, another animation effect can be associated with the image elements. For example, the current animation effect can be removed and replaced with another animation effect. As another example, in reference to
In optional step 1210, another trigger element can be associated with the animation effect. For example, the current trigger element can be removed and replaced with another trigger element. As another example, another trigger element can be associated so that either of them can initiate the animation effect. In some implementations, step 1210 can be executed multiple times, for example, in reference to
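As a final illustrative sketch, repeated execution of step 1210 could accumulate several trigger elements behind a single effect, any of which can fire it; the class name is hypothetical.

```typescript
// Hypothetical aggregate trigger: the effect stays associated with one
// trigger element, which in turn delegates to any number of others.
class AnyOfTrigger implements TriggerElement {
  constructor(private triggers: TriggerElement[]) {}

  fires(): boolean {
    return this.triggers.some((t) => t.fires());
  }

  add(trigger: TriggerElement): void {
    this.triggers.push(trigger); // associate yet another trigger element
  }
}
```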
The memory 1320 stores information within the system 1300. In one implementation, the memory 1320 is a computer-readable medium. In one implementation, the memory 1320 is a volatile memory unit. In another implementation, the memory 1320 is a non-volatile memory unit.
The storage device 1330 is capable of providing mass storage for the system 1300. In one implementation, the storage device 1330 is a computer-readable medium. In various different implementations, the storage device 1330 may be a floppy disk device, a hard disk device, an optical disk device, or a tape device.
The input/output device 1340 provides input/output operations for the system 1300. In one implementation, the input/output device 1340 includes a keyboard and/or pointing device. In another implementation, the input/output device 1340 includes a display unit for displaying graphical user interfaces.
The features described can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. The apparatus can be implemented in a computer program product tangibly embodied in an information carrier, e.g., in a machine-readable storage device or in a propagated signal, for execution by a programmable processor; and method steps can be performed by a programmable processor executing a program of instructions to perform functions of the described implementations by operating on input data and generating output. The described features can be implemented advantageously in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. A computer program is a set of instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.
Suitable processors for the execution of a program of instructions include, by way of example, both general and special purpose microprocessors, and the sole processor or one of multiple processors of any kind of computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memories for storing instructions and data. Generally, a computer will also include, or be operatively coupled to communicate with, one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits).
To provide for interaction with a user, the features can be implemented on a computer having a display device such as a CRT (cathode ray tube) or LCD (liquid crystal display) monitor for displaying information to the user and a keyboard and a pointing device such as a mouse or a trackball by which the user can provide input to the computer.
The features can be implemented in a computer system that includes a back-end component, such as a data server, or that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a client computer having a graphical user interface or an Internet browser, or any combination of them. The components of the system can be connected by any form or medium of digital data communication such as a communication network. Examples of communication networks include, e.g., a LAN, a WAN, and the computers and networks forming the Internet.
The computer system can include clients and servers. A client and server are generally remote from each other and typically interact through a network, such as the described one. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.
A number of embodiments have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of this disclosure. Accordingly, other embodiments are within the scope of the following claims.