The present disclosure relates generally to software, and more particularly, software that provides graphical user interfaces (GUIs) on the display of a portable electronic device, the software being specially designed or adapted to help a user of the device create, modify, or enhance animations of virtual objects, such as drawing(s) or images of one or more characters or things. The disclosure also relates to associated articles, systems, and methods.
Numerous software products—referred to herein as software programs, or simply, programs—are known. Many years ago, most software programs were designed for use on desktop or laptop computers, in which the display is at least about the size of a standard sheet of office paper, e.g., 8.5×11 inches. Stated differently, the display for such devices has a characteristic dimension of at least about 14 inches, measured diagonally between opposite corners of the generally rectangular display screen.
In recent years there has been explosive growth in the sales of smart phones, tablet computers, smart watches, and similar handheld devices, whose display screens have characteristic dimensions substantially smaller than 14 inches. The screen size on a handheld device may be for example less than 9 inches, or less than 8 inches, or in a range from 1 to 9 inches or from 4 to 8 inches. Most smart phones have screen sizes from 6 to 7 inches. Despite the small size of the display screen, software programs made for use with such handheld devices, sometimes referred to as applications or “apps”, can have a high degree of functionality and complexity.
Nevertheless, some computer-based activities, such as professional quality or near-professional quality animation, are so demanding, both in the depth of tools required by the animator and in the capabilities required of the microprocessor, that they remain predominantly the domain of larger, more powerful desktop computers.
There is an ongoing need for new software tools and features, and in particular, software programs and features that can make high quality animation and drawing easier and faster to create, and accessible to more people. We believe in this regard it would also be desirable for such new software to be optimized for, or at least be compatible with, the smaller display screens and processors of handheld and other portable electronic devices.
We disclose herein, among other things, a software program or package that is capable of being used on a handheld device, although the software is not limited to such devices and can also be used on larger, more powerful electronic devices. The software provides graphical user interfaces (GUIs) or graphics displays that make it easy for a user to create, modify, save, and/or import one or more virtual objects, and to create, define, modify, and/or save animations to be performed on such object(s). The virtual object(s) may be or comprise one or more of characters, scenes, and objects such as vehicles, dwellings, etc. The disclosed techniques can be employed to allow a user who is not otherwise proficient in animation to animate such object(s) on a handheld device with simple finger taps, gestures, or the like, hence the techniques may be loosely grouped under the umbrella term of simplified animation.
We also disclose software designed to facilitate animation by users of handheld or portable electronic devices. The software provides a motion recording feature in which a user input in the form of a pointer, touch point, or other position-related input is monitored over the course of a recording session, converted to a data string of attribute values, and stored in memory. The software displays an animation of a virtual object over an animation period by retrieving the data string of attribute values from the memory and causing the processor to generate the animation using the retrieved data string of attribute values. The software program may be in the form of computer-readable instructions, a sequence of which can be used to form an effect suitable to be carried out on the virtual objects by the microprocessor of such a device. An effect causes the objects to animate when the animation applied to the object is activated, either by reaching a designated moment in an animation timeline (a virtual representation of the effects according to a timed sequence) or based on a more discrete sequence that is triggered by external factors, such as in response to a user input.
Also of interest is the process by which these effects are designated and configured by the user, and the ease and speed by which the user can apply new effects to an animation timeline. The process of recording the timing and positioning of an object attribute's keyframes through input over time of an input device such as a finger, stylus, or mouse offers a faster and more robust method for defining animation sequences. Multiple effects can be combined to form larger effects, and simpler motions with fewer clicks can allow the user to speed the process of applying fairly complex effects.
We also disclose methods for automating animation on an electronic device, comprising: providing an electronic device having a processor, a memory, and a screen, the processor configured to provide video signals to the screen, and to read and write information to and from the memory; displaying graphics on the screen, and defining one or more attribute input regions on the screen, different locations on the attribute input region(s) corresponding to different visual attribute values for a virtual object to be displayed on the screen; receiving user position signals produced by a user interacting with the attribute input region(s) over a recording period; converting the received user position signals to a data string of attribute values over the recording period; storing the data string of attribute values in the memory; and displaying on the screen an animation of the virtual object over an animation period by retrieving the data string of attribute values from the memory and causing the processor to generate the animation using the retrieved data string of attribute values. We also disclose a non-transitory computer-readable storage medium having instructions that, when executed by a processing device having a processor, a memory, and a screen, cause the processing device to perform the foregoing operations.
We also disclose methods for automating animation on an electronic device, comprising: providing an electronic device having a processor, a memory, and a screen, the screen operable as both a display screen and a touch screen, the processor configured to receive touch signals from the screen and to provide video signals to the screen, the processor also configured to store first information to the memory and to read second information from the memory; generating a graphical user interface (GUI) on the screen, the GUI defining one or more attribute input regions on the screen, different locations on the attribute input region(s) corresponding to different visual attribute values for one or more virtual objects to be displayed on the screen; receiving touch signals produced by a user interacting with the attribute input region(s) over a recording period, and converting the received touch signals to a data string of attribute values over the recording period; storing the data string of attribute values in the memory; and displaying on the screen an animation of the one or more virtual objects over an animation period by retrieving the data string of attribute values from the memory and causing the processor to generate the animation using the retrieved data string of attribute values.
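By way of a rough illustration only, the following Python sketch mirrors the data flow summarized above: a sequence of position samples is captured over a recording period, converted to a data string of attribute values, stored, and then replayed. All function and variable names are hypothetical and are not taken from the disclosed software.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Sample:
    t: float       # time in seconds since recording started
    value: float   # attribute value (e.g., x-position, opacity, scale)

def convert_positions_to_attribute(positions: List[float],
                                   lo: float, hi: float,
                                   attr_min: float, attr_max: float) -> List[float]:
    """Map raw pointer positions in [lo, hi] to attribute values in [attr_min, attr_max]."""
    span = hi - lo
    return [attr_min + (p - lo) / span * (attr_max - attr_min) for p in positions]

# Hypothetical recording: pointer y-positions sampled over a recording period.
times = [0.0, 0.1, 0.2, 0.3, 0.4]
pointer_y = [300, 280, 240, 200, 180]          # raw screen coordinates
opacity = convert_positions_to_attribute(pointer_y, lo=180, hi=300,
                                          attr_min=1.0, attr_max=0.0)
data_string = [Sample(t, v) for t, v in zip(times, opacity)]   # stored in memory

for s in data_string:                           # "animation period": replay the stored values
    print(f"t={s.t:.1f}s  opacity={s.value:.2f}")
```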
Numerous related methods, systems, and articles are also disclosed.
These and many other aspects of the present disclosure will be apparent from the detailed description below. In no event, however, should the above summaries be construed as limitations on the claimed subject matter, which subject matter is defined solely by the attached claims, as may be amended during prosecution.
The inventive animation software and related methods, devices, and systems are described with reference to the attached drawings.
In the figures, like reference numerals designate like elements.
High quality drawing and animation tools and capabilities can be made available to the general public with the help of software, but in order to gain traction the software should provide intuitive and innovative capabilities and features that give the user a sufficient array of tools to carry out basic drawing and animation tasks quickly and easily. We disclose such capabilities below with reference to GUIs and views of the display screen generated by the software, which a user can interact with by means of a touch screen, touch pad, mouse, keyboard, or other input device in the form of mouse clicks, screen touches, swipes, gestures, and other user inputs, including in particular inputs where the user maintains a touch or contact point with the touch screen over an extended period of time (a recording session) while moving the touch point as desired along a path (a motion path) to control a visual attribute of a virtual object.
In the description that follows, a variety of software functions are described, including a move effect, a master effect, timeline manipulation, a rotate effect, a scale effect, a visibility effect, a copy effect, a flip effect, object phases, animated scenes, animation timeline, object chest, bone manipulation, scene change, camera effects, phase manipulation, timeline blocks, and drill navigation. One, some, or all of the described functions may be included in a given commercial embodiment of the software, as well as one or more additional functions not described here. The software provides GUIs that allow a user to create and modify a virtual object, and to create, define, and save animations of that object for future use. The virtual object may be or comprise one or more of characters, scenes, and objects such as vehicles, dwellings, etc.
In the figures, for illustrative purposes, many of the screens depict a virtual object in the form of a female character having a head, torso, arms, legs, and feet. Such an object may be created using the same software, or by any other means and then imported into the software. The reader will understand that the particular female character depicted or the character's body parts in the figures can be replaced by other virtual objects, including human characters, non-human characters, and inanimate objects, for example.
Some electronic devices on which the software can be run will now be discussed. To the extent such devices all include a microprocessor, memory, and input and output means, they may be considered to be computers for purposes of this document, and the actions and sequences carried out by the software may be considered to be computer-implemented methods. The reader will understand that the software is implemented in the form of instructions suitable to be carried out by the microprocessors of such electronic devices. Such instructions can be stored in any suitable computer language or format on a non-transitory storage medium capable of being read by a computer. Such a computer-readable storage medium may be or include, for example, random access memory (RAM), read only memory (ROM), read/write memory, flash memory, magnetic media, optical media, or the like.
Representative portable electronic devices that can be used to load and run the animation software described above are shown in
The device of
Each device 110, 210, 310 includes a display on which the graphical user interface (GUI) of the software can be shown. Each device also includes input mechanisms that allow the user to enter information, such as make selections, enter alphanumeric data, trace freeform drawings, and so forth. In the case of the smart phone 110 and tablet 210, the primary input mechanism is the touch screen, although the laptop 310 may of course also include a touch screen. The graphical user interface (GUI) provided by the software may be designed to display virtual buttons on the output screen, which the user may selectively activate or trigger by touching the touch screen at the location of the desired virtual button with a finger or pen, or by moving a cursor to that location with a mouse or track pad and selecting with a “click”.
In this regard, a virtual button is like a physical button insofar as both can be user-activated or triggered by touching, pushing, or otherwise selecting, but unlike it insofar as the virtual button has no physical structure apart from the display screen, and its position, size, shape, and appearance are provided only by a graphical depiction on the screen. A virtual button may refer to a bounded portion of a display screen that, when activated or triggered, such as by a mouse click or a touch on a touch screen, causes the software to take a specific, defined action such as opening, closing, saving, or modifying a given task or window. A virtual button may include a graphic image that includes a closed boundary feature, such as a circle, rectangle, or other polygon, that defines the specific area on the display screen which, when touched or clicked, will cause the software to take the specified action. Virtual buttons require none of the physical hardware components associated with mechanical buttons.
The devices 110, 210, 310 all typically include a microprocessor, memory, and input and output means. As such, these devices may all be considered to be computers for purposes of this document, and actions and sequences carried out by the software on such devices may be considered to be computer-implemented methods. The software disclosed herein can be readily encoded by a person of ordinary skill in the art into a suitable digital language, and implemented in the form of instructions that can be carried out by the microprocessors of the devices 110, 210, and 310. Such instructions can be stored in any suitable computer language or format on a non-transitory storage medium capable of being read by a computer. Such a computer-readable storage medium may be or include, for example, random access memory (RAM), read only memory (ROM), read/write memory, flash memory, magnetic media, optical media, or the like.
In some cases the display screens of devices 110, 210, 310 may all be replaced, or supplemented, with a projected display screen made by projecting a screen image onto a wall or other suitable surface (remote from the electronic device) with a projector module. The projector module may be built into the electronic device, or it may be a peripheral add-on connected to the device by a wired or wireless link. The projector module may also project a virtual image of the display into the user's eyes, e.g., via suitable eyeglass frames or goggles.
The representative devices 110, 210, 310 should not be construed as limiting. The disclosed software and its various features can be run on any number of electronic devices, whether large screen or small screen. Other devices of interest include the category of smart watches, which may have a screen size in the 1 to 2 inch range, or on the order of 1 inch. Still other devices include touch screen TVs, whether large, medium, or small format.
Regardless of the electronic device chosen by the user and its specific capabilities and specifications, the input mechanism(s) allow the user to interact with the software by means of the GUI displayed on the screen, such as by activating or triggering virtual buttons on the display, or drawing a figure or shape by freeform tracing, using a touch screen, touch pad, mouse, or other known input mechanism.
A block diagram of a system 402 that includes an electronic device 410 and software in the form of instructions 420 that can be loaded directly or indirectly onto the electronic device 410 is shown in
Users can apply simple effects to their creations to create proprietary animations capable of export as animated GIF (Graphics Interchange Format) files and videos.
There may be four major animation views within the software application. Users can trigger the Effects view, which lets them apply pre-canned (i.e. predefined) effects to the animations. Effects are divided into Base Effects and Master Effects (user-defined effects or “canimations”). Base Effects are the basic building blocks that constitute all of the possible system effects. The Master Effect allows users to define their own effect as a superset combination of effects, turning this group of configured effects into a Master Effect or “canimation” when saved into their effect library.
A “Timeline View” is shown in
An “Object Effect Selection” is shown in
A “Scene Effect Selection” is shown in
An “Effects Configuration View” is shown in
In the full animation view, on tablets users see the cropped drawing/animation area as well as an area outside the canvas they can use for staging objects ready for animation. When selecting an effect, the bottom object menu tray may cover the entire bottom section of the display.
A “Full Animation View” is shown in
A “Configuration Options” tablet view, which includes a virtual RECORD button, is shown in
Throughout this document, First Time User Experience is abbreviated FTUE. FTUE/SHOW ME OPTION: the software may be configured to show an FTUE view or screen when (or only when) a user first applies a given effect on a given device, or when (or only when) the user clicks “Show Me” (see e.g.
A “Fingertip Animation” view is shown in
One limitation of existing 2D animation is that the animator must manually adjust the timing and positions of the objects and characters being manipulated. With frame-by-frame animation, this happens by drawing out every single frame manually. Often there will be a simplified, rough draft attempt to achieve the desired timing sequence, and that is followed by filling in the details of every single frame.
With motion graphics, the motion between frames is achieved by moving or adjusting the attributes of an object from one keyframe position to another, and "tweening" or "interpolating" the frames for the objects between keyframes, often with the aid of a manual timing mechanism to modulate the rate of change between keyframes. The implementations of this to date result in limitations; physical attributes can only be manipulated one sequence at a time, from keyframe position A to keyframe position B, and the timing (again, between only those two positions) is entered manually and not with natural motions.
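For context, the sketch below shows the conventional tweening approach being contrasted here: an attribute is interpolated between two keyframes, with the rate of change shaped by a manually chosen timing curve. Names and the particular easing function are illustrative assumptions.

```python
def tween(v_a: float, v_b: float, t: float, ease=lambda u: u) -> float:
    """Interpolate between keyframe values v_a and v_b.
    t is the normalized time in [0, 1]; ease() is a manual timing curve."""
    u = ease(max(0.0, min(1.0, t)))
    return v_a + (v_b - v_a) * u

ease_in_out = lambda u: u * u * (3 - 2 * u)     # smoothstep-style timing curve

# Interpolate an x-position from keyframe A (x=0) to keyframe B (x=100)
# over 5 in-between frames.
for frame in range(6):
    t = frame / 5
    print(frame, round(tween(0, 100, t, ease_in_out), 1))
```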
Fingertip Animation is a unique feature of the disclosed software to address these limitations on motion graphics, by measuring beyond keyframe A and B to any number of keyframe positions, and simultaneously measuring the timing as it reaches each keyframe position, using a single take “recording” of the keyframe positioning to control many changes in positions and timing.
The input device, such as a mouse, finger, or stylus, drags the object or representation of an object's attribute from one position to another over time, recording any number of attribute positions over time and saving it in memory as a time-based data file. This method can be applied to an object's base attributes, such as screen position, rotation, transparency/opacity, and size, but it can also be extended to the adjustment of most any attribute or effect or combination of effects, several examples of which are shown herein.
The actual recording process can be accomplished in a number of ways. In one example, the user may begin recording (after pushing the RECORD button) by touching the screen, and may stop recording by lifting his or her finger off of the screen. In another example, the user may click the virtual “RECORD” button to begin the recording, followed by any number of steps to manipulate the object that may require touching and lifting the finger, followed by an explicit “stop recording” click. In using this second option, the animator or the system can pre-determine if any recording should or should not take place while the finger is lifted.
When the recording is complete, the software may present to the user/animator many configuration options to manipulate the timing sequence further such as looping the sequence, reversing the sequence, trimming what is unnecessary, and adjusting or manually reconfiguring the timing to all or portions of the sequence. The software may also apply a positional smoothing curve or smooth out the timing without manual intervention.
The Fingertip Animation can be applied to any screen-output computing device; however, the benefits are clearly greatest when animating on a phone or tablet. The Fingertip Animation technique can also be applied to three-dimensional (3D) animation.
With this feature, a user can navigate the timeline to access effects to move and edit them.
A “Basic Animation View-Inactive” screen is shown in
A “Move Playhead” screen is shown in
The user may one-finger tap any location on the timeline to quickly snap the position of the playhead, while the finger is still pressed down and sliding. During this time, the marker extends outside the timeline (see
A 2-finger pinch allows the user to zoom in or out to a bigger or smaller time window, originating from the point of zoom. A 2-finger swipe pans to another position on the timeline. When zooming, the distance between time segments shrinks/expands for effects and the measure line.
Each line segment represents an effect. Effects can be placed outside the begin and end boundary markers (denoted by the brighter shading), or straddling the start location like the 4th effect. The smallest length an effect line segment can be is 3 pixels, even if it takes no elapsed time to complete, so that the effect remains visible.
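A minimal sketch of how such a timeline might map effect times to pixels under a zoom factor while enforcing the 3-pixel minimum segment length is given below; the names and numbers are hypothetical and this is not the disclosed implementation.

```python
def time_to_px(t: float, window_start: float, px_per_second: float) -> float:
    """Convert a timeline time (seconds) to a horizontal pixel offset."""
    return (t - window_start) * px_per_second

def effect_segment_px(start: float, end: float,
                      window_start: float, px_per_second: float,
                      min_px: float = 3.0) -> tuple:
    """Return (left_px, width_px) for drawing an effect's line segment."""
    left = time_to_px(start, window_start, px_per_second)
    width = max((end - start) * px_per_second, min_px)   # never thinner than 3 px
    return left, width

# Zooming in doubles px_per_second; an instantaneous effect still draws at 3 px wide.
print(effect_segment_px(2.0, 2.0, window_start=0.0, px_per_second=40.0))   # (80.0, 3.0)
print(effect_segment_px(1.0, 2.5, window_start=0.0, px_per_second=80.0))   # (80.0, 120.0)
```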
The playhead will stop once it hits the scene's end marker and return to the start position, but the user can drag it out of bounds to play into the portion of the video that isn't part of the final scene. When playing from before the start, the playhead continues to the playable scene and removes the 50% overlay and does not pause at 0.
A “Filter” screen is shown in
The user may tap the "filter" again to show the full list of effects, in the original order, maintaining the selected effect. The user can also tap and drag to multi-select, then tap "filter" to reduce the displayed effects to those selected items. In some embodiments, the software may be configured to provide an indication that the filter is on, such as by temporarily reducing the visibility of objects that have been filtered out. The user may also choose to peel an item to hide it temporarily. All of its effects and sub-effects are hidden from the timeline when peeled. When un-peeling, all effects may move back to their original location on the timeline.
A “Selected Effect” screen is shown in
Here we revisit and compare a number of timeline-related features of the software, as follows:
Examining the Move Effect: the Move Effect can be created in a few ways and has several options to automate repeating tasks.
A “User Accesses Animate” screen is shown in
A “User Accesses Move Effect” screen is shown in
The software can also be configured to reduce the selection of an effect down to “one click animation” by immediately presenting a list of effects on the screen when first selecting the object from within Timeline View. In such configurations of the software, all FTUE screens disclosed herein would preferably be omitted. The effect list may be placed below the timeline in the bottom two rows of the screen, and the user could immediately select and apply the effect of choice from there with just one click, touch, or gesture.
An optional “Effect Prompt Animation” screen is shown in
The effect's start time begins wherever the playhead was on the timeline when the effect button was pressed, in this case 0.
Another optional “Effect Prompt Animation” screen is shown in
The spot that the user selects on the character to drag from becomes the axis point for the motion. DRAG START: Once the user begins dragging the character, a line stroke trail may follow behind to demonstrate the path of motion, and the character may move as the user drags it. The path and timing may be recorded so that with a single recorded motion, the user can intuitively set or program both the timing and movement of the object over a specified time period, without the need for additional timing and path manipulation. However, the user can optionally tweak (modify) the path of motion manually using path vertices and handles, and can similarly manipulate timing further with preset or manual timing curves.
Upon release of the character from dragging (e.g., lifting the finger off the character), recording stops and the user passes on to the configuration menu, where the character appears in its initial position but with a line stroke representing the path of motion, as seen in
A “Configure Effect” screen, which includes a virtual “RECORD” button, is shown in
The timeline depicts a begin and end effect marker flanking the selected effect that has been added to the timeline. Here, the user can drag the left bound manually to adjust the start position, and the right bound to adjust the time length/end point, which is used to cut off or extend an effect. The user is able to drag the end point past the end marker for the timeline, even though it will cut off as it plays. This will have the impact of starting or ending in mid-animation, as opposed to having all animations transitioning in and out. The lower part of the screen is scrollable, but not the drawing canvas.
CONFIGURE EFFECT MODE: during effect creation, the beginning and end markers of the overall timeline may become immobile to keep from overlapping input on the timeline (and as such are depicted in low opacity in
In this regard, thicker or thinner handles may alternatively be used. OK STACK: clicking the OK Stack (the green virtual button near the upper right corner of the display in
Alternative embodiments of the software may include additional configuration options such as preset and custom speed curves which allow the user to manipulate the ease in/ease out properties of an effect's motion, but this is not shown in the figures. The preset curves could be represented with simple icons depicting the shape of the curves.
To the left of “Loops” in the screen of
The virtual RECORD button brings the user back to a state of dragging the object (see e.g.
IMPLIED MOVE EFFECT: from
Another “Configure Effect” screen, containing a virtual RECORD button, is shown in
LOOPS: this cycles through the motion (including any reverse) multiple times. Moving the slider to the right end sets it to "infinite" loops, or a very large number unknown to the user, which is especially helpful for game development or for a rapid ongoing shake motion in a longer scene. In some implementations, this motion may get cut off at some point either by a parent Master Effect, the end of the scene, or the end of the animation.
TILT ALONG PATH: this feature refers to controlling the object's orientation so that it always stays upright or adjusts to remain parallel to the path angle. When activated, the user can choose to Finish Upright (only selectable when Tilt is activated).
STRAIGHT PATH: this feature ignores the user's manual path and timing, and instead uses a straight line from start point to end point of the user's motion.
RETURN TO START: this feature places the object back at the beginning when finished with an effect loop. If selected, Reverse Motion becomes available.
RETRACE: this feature simply mirrors the entire motion, duplicating it in reverse back towards the initial position at the same speed. This option slides into view only if "Return to Start" is selected. A "Show Help" button at the bottom of the Configuration Options screen brings up the optional FTUE animation or prompt from
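The configuration options above can be modeled as simple transformations of the recorded sequence of (time, position) samples. The sketch below offers one plausible interpretation of Loops, Retrace, Return to Start, and Straight Path as such transformations; the helper names are hypothetical and the actual implementation may differ.

```python
def retrace(samples):
    """Append the recorded motion played in reverse at the same speed."""
    if len(samples) < 2:
        return list(samples)
    duration = samples[-1][0]
    mirrored = [(2 * duration - t, p) for t, p in reversed(samples[:-1])]
    return list(samples) + mirrored

def return_to_start(samples):
    """Snap the object back to its initial position at the end of the sequence."""
    t_end, _ = samples[-1]
    _, p0 = samples[0]
    return list(samples) + [(t_end, p0)]

def straight_path(samples, steps=10):
    """Ignore the recorded path and timing; use a straight line start-to-end."""
    (t0, p0), (t1, p1) = samples[0], samples[-1]
    return [(t0 + (t1 - t0) * k / steps,
             (p0[0] + (p1[0] - p0[0]) * k / steps,
              p0[1] + (p1[1] - p0[1]) * k / steps)) for k in range(steps + 1)]

def loop(samples, count):
    """Repeat the sequence 'count' times end-to-end."""
    duration = samples[-1][0]
    out = []
    for i in range(count):
        out += [(t + i * duration, p) for t, p in samples]
    return out

recorded = [(0.0, (0, 0)), (0.5, (40, 10)), (1.0, (80, 0))]  # (time, (x, y)) samples
print(loop(retrace(recorded), 2))                            # forward, back, repeated twice
```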
A “Playing Animation” screen is shown in
The Master Effect feature allows the user to group other effects together to make animation more manageable. The Master Effect also allows users to create, distribute, and even license custom effects to speed up the process and reduce repetition, while also simplifying by organizing effects into smaller, more manageable chunks or segments of time.
An “Add Master Effect” screen is shown in
A “Configure Master Effect” screen is shown in
Another implementation of this may use “prev” and “next” buttons on a row immediately beneath the timeline to tab through the effects and then select the white button to edit the particular effect options.
The tab at the top of
To add a child effect, drill further into the character, or stay at this level and click the lightning bolt. NOTE: this is now a sublevel timeline, not the original timeline. We have “zoomed in” and can only access the object associated with the master effect. All timeline elements are only for this effect and its children.
A “Child Master Effect View” screen is shown in
A “Drilling Into Object” screen is shown in
A “Default Config Options” screen is shown in
A “Trash Modal” screen is shown in
A “More Options for Copies” screen is shown in
A “Loop Delay” screen is shown in
A “Change Start Positions” screen is shown in
If there is only one copy, the cycle options at the bottom of the screen do not appear. Tapping anywhere on the screen will move the currently selected copy to that spot, or the user can also drag the item. The user may then tab left or right to access the next copy, exiting by clicking OK. Note that these positions are relative to the original object. So, if the parent object moves, the next copy spawns relative to that new position. The spawned objects are then disconnected from the original, unless the "Mirror Parent" option is selected. Copies spawn on the same layer as the original, just above or below, depending on the "Copies On Top" toggle.
With “Show All” selected, all copies are visible, with the current selection and original at 100%, and the others at 20%. This can be noisy when the count is high, so it can be helpful to toggle off, but normally it can be useful to see where the copies appear relative to each other. In some embodiments, the software may allow users to use 2-finger rotate and resize, allowing for some added, controlled variation. Random variations can also be introduced through added variable effects within the Master Effects timeline, such that the start positions are randomized.
MIRROR PARENT: refer to
An “Add Child Effect to Master” screen is shown in
A “Configure Child Effect” screen is shown in
A “Push Config Toggle” screen is shown in
A “Master Config After Push” screen is shown in
A “Drilling Into a Master Effect” screen is shown in
Drilling into a Master Effect is similar to
This timeline is zoomed in but acts much like the original timeline, with the outer boundaries now being the Master Effect time constraints and the inner boundaries for each accessed child effect the user has tabbed to, using Previous and Next. Scroll up to access the configuration options for the Master Effect, click the white tab to drill into the child effect, and the upper right (blue) tab to exit.
A “Drilling Into an Object” screen is shown in
With this feature, a user can apply an effect to an entire timeline. Scene effects apply to the entire timeline, in contrast to object effects, which apply only to a particular object.
A “Scene Effect Selection” screen is shown in
A “Branch Effect Configuration” screen is shown in
ORGANIZING: rather than post a large number of effects to a single timeline, the user can break this out into smaller subroutines, through these branches. These branch routines can also then be looped as a single repeating set of effects. The resulting effect sequence then appears as a single item on the original timeline.
CONDITIONAL LOGIC: one of the key powers of branching is to set up gaming logic. By adding conditions to a branch, branches can be set up with listening periods, acting as windows of time for conditions to be met. These conditions can be further nested as combined conditional logic for this branch and as subroutine child branches. A white edge border or the like can be used to reinforce the concept that the effect applies to the scene.
Pan, Rotate, Zoom, Track, Shake. Effects such as these can be mapped out with a square canvas, but alternative canvas sizes and shapes, e.g., rectangular or otherwise non-square, can also be used so as not to be boxed in.
A “Camera Effect Selection” screen is shown in
A “FTUE Pan DEMO” screen is shown in
A “Record Panning” screen is shown in
A “Configure Panning” screen is shown in
A “Configure Panning-Continued” screen is shown in
A “Rotate FTUE Demo” screen is shown in
A “Rotate FTUE Demo—Results” screen is shown in
A “Record Rotate” screen is shown in
A “Configure Rotate” screen is shown in
A “Configure Rotate—Continued” screen is shown in
A “Zoom FTUE Demo” screen is shown in
A “Zoom Recording” screen is shown in
A “Configure Zoom” screen is shown in
A “Configure Zoom—Continued” screen is shown in
“Tracking” is a camera effect that acts like a video game camera, in that it follows the movement of the player in the middle of the screen. In some embodiments of the software, this feature may be added to a character's effect list, such that the object being tracked can be identified. The Tracking effect will have few options, since the character movements determine what happens to the camera. However, one control or option for the Tracking effect may be “sensitivity level”, such that the user can control how much movement away from the previous mark starts the camera in motion, so it does not jump on every slight movement.
“Shake” is another camera effect that can be added to the software. The Shake effect may be a repackaging of existing camera effects, by moving or rotating them quickly in loops. Different types of shaking may be supported, corresponding to the different types of camera movements previously discussed.
“Scene Transitions” are other camera effects that can be added to the software. Such effects may include one or more of fading, blurring, flipping, and stretching the entire canvas.
With this feature, a user can trim any time segment by toggling the “trim” tool on.
To assist the reader's understanding of this feature, a “Trim Tool” screen is shown in
With trim selected, the end markers and line indicator for an effect turn a different color (e.g. turning from blue to red), highlighting that the user is in "trim" mode. During Trim Mode (red), when adjusting the left marker, it adjusts the effect's timing either by adding padding before the effect starts (move left) or by cutting off the early sequences of the effect (move right). When adjusting the right marker, it also adds time to the end of the sequence (move right) or cuts off part of the sequence (move left). When trimming the effect past the current sequence length so that some of it cuts off, that part turns grey and is never seen, but the user can always come back later and adjust it back into the sequence (it does not disappear completely). By toggling the Trim tool again, the software returns the user to the normal mode. The Normal Mode (blue) adjusts the beginning point with the left marker and slides the entire effect over. Moving the right marker adjusts the speed of the effect evenly.
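One way to model the distinction between Normal Mode and Trim Mode is as two different sets of operations on an effect's start, duration, and trim offsets. The following sketch is illustrative only, with hypothetical names, and does not reproduce the disclosed code.

```python
class Effect:
    def __init__(self, start, duration):
        self.start = start          # where the effect begins on the timeline (s)
        self.duration = duration    # length of the recorded sequence (s)
        self.trim_head = 0.0        # cut (>0) or padding (<0) at the beginning
        self.trim_tail = 0.0        # padding (>0) or cut (<0) at the end

    # Normal Mode (blue): left marker slides the whole effect; right marker rescales speed.
    def move_start(self, new_start):
        self.start = new_start

    def set_speed_by_end(self, new_end):
        self.duration = max(new_end - self.start, 0.01)

    # Trim Mode (red): markers add padding or cut off part of the sequence; nothing is lost.
    def trim_left(self, delta):
        self.trim_head += delta     # move right (+) cuts early frames; move left (-) pads

    def trim_right(self, delta):
        self.trim_tail += delta     # move right (+) pads the end; move left (-) cuts the end

e = Effect(start=1.0, duration=2.0)
e.trim_left(+0.5)                   # first 0.5 s of the sequence is greyed out, not deleted
e.trim_left(-0.5)                   # ... and can later be adjusted back into the sequence
```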
Here we discuss rules & requirements for effects other than the Move Effect.
In regard to an optional FTUE/“SHOW ME” OPTION, a “User Accesses Rotate” screen is shown in
An “Effect Applied” screen, which includes a virtual RECORD button, is shown in
ROTATIONAL AXIS or PIVOT POINT: By default, the object's pivot point is placed at the designated center of the object. The relative position of this pivot point can be changed AFTER the user does an initial rotation in the “Rotate Config Options” screen of
RECORD: This button allows the user to draw or redraw the rotate effect starting at the current playhead location. LOOPS: Loops in this case repeats the entire recorded animation and does not count the number of full rotations made; for example, if the user draws a 25-degree rotation with their finger and then reverses by 90 degrees during a defined recording session, then the full animation of this will repeat. This slider can also be moved all the way to the end for infinite loops, or as many as technically feasible. FINISH UPRIGHT: this snaps any existing animation back to its original position at the end. AUTO-ROTATE: This overwrites any drawn animation and allows the user to auto-rotate an animation 360 degrees based on the number of "rotations" indicated in the rotations slider. ROTATIONS: this slider lets the user manually set the number of 360-degree rotations the user would like to have. This is disabled if the user does not have auto-rotate toggled on. RETRACE: this reverses any clockwise rotation to be counter-clockwise once it is at the end of its original animation. This can also be used with auto-rotate once all rotations play. This also ADDS time to the animation, using time equal to that of the initial effect.
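As an illustration of the Auto-Rotate, Rotations, and Retrace options above, full 360-degree rotations can be generated as a short data string of (time, angle) values that overrides any recorded rotation. The sketch below is a hypothetical rendering of that idea, not the disclosed source.

```python
def auto_rotate(rotations: int, duration: float, samples_per_rotation: int = 36):
    """Generate (time, angle_degrees) pairs for N full 360-degree rotations."""
    total = rotations * samples_per_rotation
    return [(duration * k / total, 360.0 * rotations * k / total)
            for k in range(total + 1)]

def retrace_rotation(samples):
    """Play the rotation back in reverse, doubling the animation time."""
    duration = samples[-1][0]
    return samples + [(2 * duration - t, a) for t, a in reversed(samples[:-1])]

keyframes = auto_rotate(rotations=2, duration=1.5)   # two full turns in 1.5 s
print(keyframes[0], keyframes[-1])                   # (0.0, 0.0) ... (1.5, 720.0)
```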
In regard to a FTUE/“SHOW ME” OPTION, an optional “User Accesses (Uniform) Scale” screen is shown in
A “Uniform Scale Effect Applied” screen is shown in
SCALE PIVOT POINT: by default, scaling centers on the designated center of the object. This can later be adjusted in the configuration options to change the direction of the scale. This difference will be most noticeable on objects that are not perfectly square in aspect where the user wants the object to scale in a specific direction. See the "EDIT PIVOT POINT" comment below. SCALE RANGE: as the slider is moved up and down from its starting position in the middle, the character scales at an increasing rate, from 0× (bottom) through 1× (middle) to 10× (top). In some embodiments, a slider may be added to modify the scale range away from 10×.
PREVIEW: there is no noticeable preview for scale when the animation is paused, but the user can scrub the timeline to preview how the object scales in real time. ALTERED START POSITION: if the user taps part of the slider that is not in the hitbox at the start of recording, the initial value may be set to that new location. The software also preferably re-centers the slider under the hitbox region regardless of whether the tap hit the hitbox or not. This is effective for copies, for example, where the copy does not start at full size, but rather starts effectively invisible (at zero scale) and grows to the correct size.
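One possible mapping for the scale slider described above, in which the lower half of the slider covers 0× to 1× and the upper half ramps from 1× to 10× at an increasing rate, is sketched below. The specific easing and names are assumptions made for illustration.

```python
def slider_to_scale(u: float) -> float:
    """Map a slider position u in [0, 1] (bottom..top) to a scale factor.
    Bottom -> 0x, middle -> 1x, top -> 10x, changing faster near the top."""
    u = max(0.0, min(1.0, u))
    if u <= 0.5:
        return 2.0 * u                       # linear from 0x to 1x over the lower half
    v = (u - 0.5) / 0.5                      # 0..1 over the upper half
    return 1.0 + 9.0 * v * v                 # quadratic ramp from 1x to 10x

for u in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(u, round(slider_to_scale(u), 2))   # 0.0, 0.5, 1.0, 3.25, 10.0
```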
A “Scale Config Options (Advanced Scaling Disabled)” screen is shown in
RECORD: this virtual button allows the user to record or re-record the scale effect starting at the current playhead location. LOOPS: looping in scale is an easy way to have the object continually expand or contract up to a certain point. Looping takes the initial scale amount drawn by the user and multiplies it continually in the same direction. Looping is set to 0 by default. RETURN TO START: when toggled on, this resets the object to its initial 100% scale value at the end of the animation, and if looped, prior to each loop beginning. This is instant, and is toggled off by default.
RETRACE: this reverses the scale in the opposite direction, using the same values initially created but played in reverse after the animation plays. For example, if the object is scaled 2×, it will now be shrunk by 2× after the initial scale. If it is a combination of shrink/expand, the object will reverse all steps. This also ADDS time to the animation, using time equal to that of the initial effect.
A “Purchase Options—Advanced” screen is shown in
“Purchase Applied” screens are shown in
A “Directional Scaling” screen is shown in
By default, scaling is set at 0 to 10, with the center point being 1×. However, users can modify scaling from −1000 to +1000, where a negative value indicates the object is flipped. The user slides the marker up and down, and the character shrinks and grows vertically and horizontally over time, until the finger lifts. The pivot point affects the centering of the scaling, and the pivot angle (if applicable) sets which direction is vertical.
VERTICAL SCALING: the user can choose to adjust scaling in any single direction, or, if coordinated enough, try both at the same time. However, if the user chooses a single direction, they can also stack it with separate effects for horizontal and vertical. That is, the software can automatically combine the animation created for the single direction with animation(s) for the horizontal and/or vertical directions to yield a net or combined animation that includes both or all effects.
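A brief sketch of how separately recorded horizontal and vertical scale effects might be sampled at a common frame time and combined into one transform is shown below; the helper names are hypothetical.

```python
def sample(samples, t):
    """Linearly interpolate a recorded (time, value) data string at time t."""
    if t <= samples[0][0]:
        return samples[0][1]
    for (t0, v0), (t1, v1) in zip(samples, samples[1:]):
        if t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return samples[-1][1]

horizontal = [(0.0, 1.0), (1.0, 2.0)]               # recorded horizontal scale over 1 s
vertical   = [(0.0, 1.0), (0.5, 0.5), (1.0, 1.0)]   # recorded vertical "squash"

t = 0.75
sx, sy = sample(horizontal, t), sample(vertical, t)
print(f"frame at t={t}: scale x={sx:.2f}, y={sy:.2f}")   # both effects applied together
```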
A “Freeform Scaling” screen is shown in
A benefit of this effect is that it allows the user to adjust from any direction, creating warps that, when stacked or combined with other scaling effects or other disclosed effects, cannot be achieved with the other methods, since those approaches may only allow vertical/horizontal scaling. Further, no directional setup is needed for the pivot, since the user controls the direction by moving toward or away from the pivot, and no multiplier values are needed.
However, to achieve scaling from multiple directions, the user needs to lift their finger from the touch screen while recording, which typically stops the recording in other animations. To account for this, a green OK button appears during the recording session, or instead, a red button labeled “stop recording” may be used at the bottom of the screen to end the recording session. The +/− range still applies, which determines the extent of stretching possible and inversion.
In regard to a FTUE/“SHOW ME” OPTION, a “User Accesses Visibility” screen is shown in
A “Visibility Effect Applied” screen is shown in
PREVIEW: When the animation is paused, the object has the current visibility of its point in the timeline. The user can also scrub the timeline to preview how the object changes in visibility in real time. RECORDING: the user touches the visibility marker, which begins recording the animation. As the user slides between 0 and 100% visible, keyframes are recorded. When the user lifts their finger, recording ends. ALTERED START POSITION: if the user taps part of the slider that is not in the hitbox at the start of recording, the initial value can be set to that new location. The slider can be re-centered under the hitbox region regardless of whether the tap hit the hitbox or not. This will be effective for copies, for example, where it is not desirable to start from a visible state, but rather to start invisible and grow to visible.
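The altered-start-position behavior described above can be illustrated as follows: a touch outside the hitbox still sets the initial visibility value, and the slider is re-centered under the finger. The geometry and names below are hypothetical.

```python
def visibility_from_touch(touch_y, slider_top, slider_bottom):
    """Map a touch y-coordinate on the slider to a visibility value in [0, 1]."""
    frac = (slider_bottom - touch_y) / (slider_bottom - slider_top)
    return max(0.0, min(1.0, frac))

def start_recording(touch_y, hitbox_top, hitbox_bottom, slider_top, slider_bottom):
    """If the touch misses the hitbox, still take its value and re-center the slider."""
    value = visibility_from_touch(touch_y, slider_top, slider_bottom)
    hitbox_center = (hitbox_top + hitbox_bottom) / 2
    offset = touch_y - hitbox_center          # re-center the hitbox under the finger
    return value, offset

# A copy that should start invisible: initial touch near the bottom of the slider.
value, offset = start_recording(touch_y=395, hitbox_top=180, hitbox_bottom=220,
                                slider_top=100, slider_bottom=400)
print(round(value, 2), offset)                # ~0.02 visibility, slider shifted by 195 px
```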
A “Visibility Config Options” screen is shown in
We have thus described, among other things, methods for dynamically manipulating an attribute of a virtual object (including abstract objects) to be animated over a time sequence, the methods employing the user interface of an electronic device as part of a software program that includes a function where an animation effect is applied to the object to produce some appearance of motion, or other change in visual appearance of the object over time, from keyframe to keyframe. In the method, the object is selected on the visual display of the device in order to apply an animation effect to the object, and a pointer position is provided that corresponds to at least one attribute of the object or its effect at a given time. The position of the pointer is then monitored over the timeframe of an extended recording session, and the measured position as a function of time over that recording session is stored as a position data string. The program interprets different positions of the pointer as different values of the object's selected attribute, and converts the position data string to a data string of attribute values. In the simplest case, the position data string is used without any modification as the data string of attribute values, while in other cases filtering techniques, replication techniques, or other techniques can be used to derive the data string of attribute values from the position data string. Each data string includes a plurality of distinct points, typically tens or hundreds of points (but fewer are also possible), and in some cases some (or all) of the points in the data string may have the same value if the user chooses to keep the pointer stationary during some (or all) of the recording session. The rendered playback of the frames of the object will then display the object as exhibiting changes in the appearance of the selected attribute automatically as a function of the position of the pointer that was traced out by the user during the recording session, and not merely by the program generating “tweening frames”.
The pointer position can be in the form of a cursor icon as with a personal computer, but does not need to be physically represented, and in the case of mobile devices, is not likely to have a physical representation, but rather corresponds to the focal point on the screen being touched.
The pointer position may be determined by: a continuous movement of a stylus, mouse, fingertip(s), or eye tracker; taps or gestures of fingertip(s), stylus, or mouse; eye tracking focus or gestures; continuous touch pad movement; and/or touch pad taps or gestures. The recorded positions may correspond to values of the attribute(s) of the object such as: an on/off toggle; a slider position; a selection of objects to choose from; a chart such as a color wheel; an x-y coordinate graph such that two attributes can be manipulated at one time; a position along a path; a new path being defined by the input motion; an invisible path such as swiping up and down or left and right; and/or multiple attributes represented at the same time using any combination of the above.
If desired, a smoothing curve can be applied during or after the recording by the program to simplify the animation motion, reducing unintended jerkiness in the change of the attribute from one keyframe to another, and so that any keyframes that are missing or adjusted are filled in automatically by the program according to the values produced by the smoothing algorithm.
The object selected for animation may be a virtual object that remains visible on the screen during animation playback, including: a shape; a virtual character; a virtual object; a line stroke; the background object; and/or any grouped combination of the foregoing objects. The object selected for animation may also be an abstract object that does not itself have a physical virtual representation on the screen during animation playback, including: the canvas position (e.g. camera shake/rotate/fade); the scene selected (e.g. swapping scenes over time or fade in/out); and in some cases an audio object rather than, or in addition to, visual object(s), such as an audio recording (e.g. adjusting the volume).
The attribute (of the object) being adjusted to produce the animation may be “physical” in nature, such as the x, y, or z coordinate of the object's position on the screen or relative to other objects, or the rotation/orientation of the object, or the scale of the object, or the opacity of the object, or the zoom or position or rotation of the canvas/camera, or the volume of an audio track. The attribute to be adjusted may instead be “abstract” in nature, such as the mood of a character (e.g. as shown by physical expressions of the character), or intensity, or vitality (life), (e.g. a plant thriving or withering, or a character energizing or dying). The attribute to be adjusted may also be a combination of such physical and abstract attributes or effects.
The selection of the effect may occur by, for example: a set gesture associated with the object that translates to a type of effect being selected; a menu appearing upon selection of the object where the user can select the effect; and/or a menu appearing upon selection of the object where the user can choose to add effects and then choose the type of effect.
The start of recording may begin for the effect selected as follows: immediately upon selecting the effect; by selecting a record button option; by touching the screen; by touching or dragging the object; by touching or dragging the marker on a representation of the attribute; and/or an audible cue spoken into a microphone of the electronic device. The end of recording for the effect may be triggered as follows: lifting the user's finger or stylus from the touch-sensitive surface; a particular predefined gesture; a tap on a button (such as record/pause/stop); a click or double-click on a mouse device or track pad; and/or an audible cue spoken into the microphone of the device.
Additional filters and methods may also be added to the effect, such as: a replacement of the timing curve (e.g. a straight line or set motion in/motion out timing sequence); re-tracing the effect so it plays the effect in reverse after playing it forward; looping multiple iterations of the effect; looping multiple iterations of the effect along with its re-traced effect; returning the object to an upright position; re-positioning or re-orienting the object on the fly as the object moves along a defined path; cropping the effect so only portions play; scaling the timing faster or slower; and/or manually adjusting the timing. The effects may be stacked together, i.e., combined, either independently as effects of the object or objects selected or as a result of a parent object that impacts the object rendering, such that the combined effects are all analyzed together in order for the software to determine the rendering of every keyframe for the object.
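As a sketch of how stacked effects might all be analyzed together to determine each rendered keyframe, each effect can be treated as a function of time that contributes to the object's final attribute state. The example below is illustrative only; the actual rendering logic is not reproduced here.

```python
def render_frame(base_state: dict, effects, t: float) -> dict:
    """Apply every active effect (a function of time) to a copy of the base state."""
    state = dict(base_state)
    for effect in effects:
        state = effect(state, t)
    return state

def move_effect(state, t):           # drifts the object to the right over time
    state["x"] = state["x"] + 50.0 * t
    return state

def fade_effect(state, t):           # fades the object out over two seconds
    state["opacity"] = max(0.0, 1.0 - t / 2.0)
    return state

base = {"x": 10.0, "y": 20.0, "opacity": 1.0}
print(render_frame(base, [move_effect, fade_effect], t=1.0))
# {'x': 60.0, 'y': 20.0, 'opacity': 0.5}
```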
Turning now to
In step 2901, a selected portion of the screen is associated with an attribute range for an object of interest. For example, in the display of
In step 2902, the user starts the recording session. This may be done by touching or pressing a virtual RECORD button, for example, or by first touching the touch screen after pressing such button, or in other ways discussed above.
In step 2903, the system monitors the user's interaction with the selected portion of the screen during the recording session. For example, the system may monitor the location of the touch point within the selected portion of the screen at the refresh rate of the display screen or at another selected rapid interval, e.g. as the user moves the touch point along a motion path if they so choose. In step 2904, the string or sequence of such monitored locations is saved to the memory unit of the device. The saved information thus is or includes a time sequence of position data representing the location of the user-controlled touch point within the selected portion of the screen as a function of time during the recording session.
Step 2905 is optional and may be omitted, but can provide helpful feedback to the user during recording. If included, the visual effect of the user's interaction is displayed as changes in the selected attribute of the object. For example, in the display of
In step 2906, the recording session is ended or stopped. This may be done by lifting the user's finger off of the touch surface, or by touching or pressing the virtual RECORD button a second time, or by touching or pressing another virtual button provided on the screen, or in other ways discussed above.
In step 2907, the time sequence of position data that was monitored during the recording session is stored as a data file in the memory of the device. This may represent the completion of the storing or saving process carried out in step 2904. The saved position data may be a string of data points representing the position of the user's touch point at the sampled time intervals during the recording session. In some cases, each such data point in the string of data points may have only one numerical value representing a position along a particular in-plane axis on the touch screen. For example, in the case of the display of
In step 2908, an attribute animation data file is created from the stored position data file. This may be expressed alternatively as converting the received and stored position data to a data file or data string of attribute values. In some cases, the “converting” or “creating” may involve no modification of the position data, and may consist of nothing more than designating, or using, the stored position data file as a data file or data string of attribute values. In other cases, the program may employ one or more filtering techniques, replication techniques, or other data processing techniques to derive the data string of attribute values from the input data string. If the position data is 2-dimensional, e.g., if each position datapoint contains both an x-component and a y-component, the x-values may define an x-position function and the y-values may define a y-position function, and a first attribute function may be derived from the x-position function, while a second attribute function may be derived from the y-position function.
In step 2909, the software program uses the animation data file, e.g. the data string of attribute values, to automatically animate a designated object, such as the object that was the subject of the recording session. For example, in connection with
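Tying steps 2901 through 2909 together, the following sketch records a time sequence of touch positions within a selected attribute input region, converts it to a data string of attribute values, and replays the string as an animation. The names, region size, and sample values are hypothetical.

```python
def record_session(touch_samples):
    """Steps 2903-2904/2907: store the (time, x, y) samples observed during recording."""
    return list(touch_samples)

def to_attribute_string(position_data, region_width, region_height):
    """Step 2908: convert positions to attribute values.
    Here, x maps to a first attribute (0..1) and y to a second attribute (0..1)."""
    return [(t, x / region_width, y / region_height) for t, x, y in position_data]

def play_animation(attribute_string):
    """Step 2909: drive the object's attributes from the stored data string."""
    for t, a1, a2 in attribute_string:
        print(f"t={t:.2f}s  attribute1={a1:.2f}  attribute2={a2:.2f}")

# Hypothetical touch positions captured within a 300 x 200 pixel attribute input region.
touches = [(0.00, 30, 180), (0.25, 90, 150), (0.50, 150, 100), (0.75, 240, 60)]
position_data = record_session(touches)
play_animation(to_attribute_string(position_data, region_width=300, region_height=200))
```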
A graph of a curve that represents hypothetical or possible user-generated position data that the system uses to produce automated animation effects is shown in
In cases where both x- and y-coordinate position information is relevant, the user's action of tracing out a motion path produces two independent position curves, each analogous to curve 3001, substantially simultaneously. For example, if the motion path traced out by the user is one or more overlapping circles, the position graph for the x-coordinate will be a sinusoidal shape, and the position graph for the y-coordinate will be a similar sinusoidal shape with a phase delay.
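For example, sampling a traced circle produces two such position curves: the x-samples follow a sinusoid and the y-samples follow the same shape delayed by a quarter period, as the brief illustrative sketch below shows.

```python
import math

def circular_path(radius=100.0, center=(200.0, 200.0), revolutions=1, samples=8):
    """Sample a circular motion path; returns parallel x(t) and y(t) position curves."""
    xs, ys = [], []
    for k in range(samples + 1):
        theta = 2 * math.pi * revolutions * k / samples
        xs.append(center[0] + radius * math.cos(theta))     # sinusoidal in time
        ys.append(center[1] + radius * math.sin(theta))     # same shape, quarter-period delay
    return xs, ys

xs, ys = circular_path()
print([round(x) for x in xs])
print([round(y) for y in ys])
```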
As discussed above, the position data measured by the device during the recording session is used as a basis for the program's automatic animation of the character or object. In some cases, the position data may itself be used as an attribute data set for purposes of the animation. Thus, the curve 3001 in
An example of a smoothing technique is shown in
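Although the specific smoothing used by the software is not reproduced here, a simple centered moving average of the kind commonly used for this purpose illustrates the idea:

```python
def moving_average(values, window=3):
    """Smooth a recorded data string with a centered moving average."""
    half = window // 2
    smoothed = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        smoothed.append(sum(values[lo:hi]) / (hi - lo))
    return smoothed

jerky = [0, 10, 8, 14, 12, 20, 18, 26]          # raw pointer positions with jitter
print(moving_average(jerky))                     # unintended jerkiness is reduced
```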
Unless otherwise indicated, all numbers expressing quantities, measurement of properties, and so forth used in the specification and claims are to be understood as being modified by the term “about”. Accordingly, unless indicated to the contrary, the numerical parameters set forth in the specification and claims are approximations that can vary depending on the desired properties sought to be obtained by those skilled in the art utilizing the teachings of the present application. Not as an attempt to limit the application of the doctrine of equivalents to the scope of the claims, each numerical parameter should at least be construed in light of the number of reported significant digits and by applying ordinary rounding techniques.
Various modifications and alterations of this invention will be apparent to those skilled in the art without departing from the spirit and scope of this invention, and it should be understood that this invention is not limited to the illustrative embodiments set forth herein. The reader should assume that features of one disclosed embodiment can also be applied to all other disclosed embodiments unless otherwise indicated. It should also be understood that all U.S. patents, patent application publications, and other patent and non-patent documents referred to herein are incorporated by reference, to the extent they do not contradict the foregoing disclosure.
This application claims priority under 35 U.S.C. § 119 (e) to provisional patent application U.S. Ser. No. 62/740,656, “More Computer Methods and Systems for Automated or Assisted Animation”, filed Oct. 3, 2018, the contents of which are incorporated herein by reference.
Related U.S. Application Data: Provisional application No. 62/740,656, filed Oct. 2018 (US). Parent application No. 17/282,440, filed Apr. 2021 (US); child application No. 18/130,240 (US).