1. Field
A system and method are disclosed, which generally relate to robotic figures, and more specifically to animatronic figures.
2. Background
An animatronic figure is a robotic figure, puppet, or other movable object that is animated via one or more electromechanical devices. The term "animated" here means caused to move or act. The electromechanical devices include electronic, mechanical, hydraulic, and/or pneumatic parts. Animatronic figures are popular in entertainment venues such as theme parks. For example, animatronic characters can be seen in shows, rides, and/or other events in a theme park. The animatronic character's body parts, such as the head and the arms, may generally move freely. However, the animatronic character is usually incapable of freely roaming or walking from one place to another.
Various animatronic systems have been created to control the animatronic figure. The control of these systems has steadily progressed over the last forty years from mechanical cams to minicomputers to board-based systems, but the underlying approach has changed little.
In general, a current animatronic figure moves in a very mechanical fashion. In other words, the current animatronic figure gives the appearance of moving like a robot rather than giving the appearance of moving like a living creature.
In one aspect, there is an animatronic system. A reception module receives a fixed show selection input and a puppetted input from an operator. The fixed show selection input is associated with at least one fixed show instruction. The puppetted input provides a puppetted instruction. There is an animatronic figure. A translation software module translates the at least one fixed show instruction associated with the fixed show selection input into at least one fixed show physical movement instruction and translates the received puppetted instruction into at least one puppetted physical movement instruction. A motion software module receives the at least one fixed show physical movement instruction, receives the at least one puppetted physical movement instruction, and calculates a composite animated instruction from the at least one fixed show physical movement instruction and the at least one puppetted physical movement instruction so that at least one actuator can effectuate at least one component of the animatronic figure in a life-like manner.
In another aspect, there is an animatronic system. An audio software module provides an instruction to an audio device to output an audio signal when the motion of the at least one component of the animatronic figure is effectuated.
In one aspect, there is an animatronic system. An automatic fixed show software module automatically provides the fixed show selection input to the reception module at a predetermined time.
In another aspect, there is an animatronic system. A predetermined time coincides with a time that the operator provides the puppetted input.
In one aspect, there is an animatronic system. The operator presses a button to provide the fixed show selection input.
In another aspect, there is an animatronic system. A sensor is operably connected to the animatronic figure that determines the occurrence of an event.
In one aspect, there is an animatronic system. The occurrence of an event triggers the fixed show selection input to be inputted to the reception module.
In another aspect, there is an animatronic system. The operator turns a dial to provide the fixed show selection input to the reception module.
In one aspect, there is a method that produces motion of an animatronic figure. A puppetting instruction is provided to the animatronic figure to perform a puppetting movement. A fixed show selection associated with at least one fixed show instruction is provided to the animatronic figure to perform a fixed show movement. The puppetting instruction is combined with the at least one fixed show instruction to form a combined instruction. The animatronic figure is instructed to perform the combined instruction.
In another aspect, there is a method that produces motion of an animatronic figure. The fixed show selection is provided by the user.
In one aspect, there is a method that produces motion of an animatronic figure. Instructing the animatronic figure to perform the combined instruction results in a composite movement of the puppetted movement and the fixed show movement.
In another aspect, there is a method that produces motion of an animatronic figure. The puppetted movement is to be performed by the same component of the animatronic figure as the fixed show movement.
In one aspect, there is a method that produces motion of an animatronic figure. The puppetted movement is to be performed by a different component of the animatronic figure than the fixed show movement.
In another aspect, there is an animatronic system. There is an animatronic figure. A reception module receives a first command and a second command from an operator. The reception module is operably connected to the animatronic figure. The reception module communicates with the animatronic figure. The animatronic figure makes a first movement according to the first command. The animatronic figure makes a second movement according to the second command. A filter module filters the first command through a first filter and filters the second command through a second filter to coordinate a first movement of the animatronic figure resulting from the first command with a second movement of the animatronic figure resulting from the second command so that the motion of the animatronic figure provides a life-like appearance.
In one aspect, there is an animatronic system. The first command is a puppetted instruction.
In another aspect, there is an animatronic system. The second command is a fixed show selection that is associated with at least one fixed show instruction.
In one aspect, there is an animatronic system. The first filter is a low pass filter.
In another aspect, there is an animatronic system. The second filter is a high pass filter.
In one aspect, there is an animatronic system. The second filter is a band pass filter.
In another aspect, there is an animatronic system. The first filter and the second filter are used on different components of the same body part of the animatronic figure.
In one aspect, there is an animatronic system. The first filter and the second filter are used on different body parts of the animatronic figure.
In another aspect, there is an animatronic figure. There is a leg. A leg actuator is operably connected to the leg, wherein the leg actuator effectuates movement of the leg. There is a wheel. A wheel actuator is operably connected to the wheel, wherein the wheel actuator effectuates movement of the wheel. A processor determines a leg motion and a wheel motion to effectuate movement of the animatronic figure. The processor sends the leg motion to the leg actuator. The processor sends the wheel motion to the wheel actuator.
In one aspect, there is an animatronic figure. A position sensor determines a first current position of the animatronic figure.
In another aspect, there is an animatronic figure. An incremental sensor measures a second current relative position of the animatronic figure by incrementing the distance traveled from an initial position of the animatronic figure.
In one aspect, there is an animatronic figure. A clipping module determines if the difference between the second current position of the animatronic figure and the first current position of the animatronic figure has reached a positional clipping limit.
In another aspect, there is an animatronic figure. A clipping module determines if the velocity of the animatronic figure in moving from the first current position to the second current position has reached a velocity-clipping limit in addition to reaching the positional clipping limit.
In one aspect, there is an animatronic figure. A clipping module determines if the velocity of the animatronic figure in moving from the first current position to the second current position has reached a clipping limit.
In another aspect, there is an animatronic figure. A clipping module determines if the acceleration of the animatronic figure in moving from the first current position to the second current position has reached a clipping limit, separately or in conjunction with a position-clipping limit and/or with a velocity-clipping limit.
In one aspect, there is an animatronic figure. The clipping module shortens a trajectory of the animatronic figure if the clipping limit has been reached.
In another aspect, there is an animatronic figure. The clipping module reduces a velocity of the animatronic figure if the clipping limit has been reached.
In one aspect, there is an animatronic figure. The clipping module reduces an acceleration of the animatronic figure if the clipping limit has been reached.
In another aspect, there is a method that produces motion of an animatronic figure. A puppetted instruction is provided to the animatronic figure to perform a puppetted movement. A fixed show selection command is provided to the animatronic figure to perform at least one fixed show movement associated with the fixed show selection. The puppetted movement and the fixed show movement are combined into a composite movement. The animatronic figure is instructed to perform the composite movement.
In one aspect, there is a method that produces motion of an animatronic figure. The fixed show selection is provided by the user.
In another aspect, there is a method that produces motion of an animatronic figure. The composite movement is a composite of the puppetted movement and the at least one fixed show movement.
In one aspect, there is a method that produces motion of an animatronic figure. The puppetted movement is on the same component of the animatronic figure as the fixed show movement.
In another aspect, there is a method that produces motion of an animatronic figure. The puppetted movement is on a different component of the animatronic figure than the fixed show movement.
In one aspect, there is a method that produces motion of an animatronic figure. A pre-determined limit reduces the composite of the puppetted movement and the fixed show movement.
In another aspect, there is an animatronic system. There is an animatronic figure. A reception module receives a first command from an operator. The reception module is operably connected to the animatronic figure. The reception module communicates with the animatronic figure. The first command is used by the animatronic system as part of a calculation to determine a first movement. An algorithmic response module provides a second command to the animatronic figure based on the occurrence of the first command being provided to the animatronic figure. The algorithmic response module receives the first command from the reception module. The animatronic figure makes a second movement according to the second command.
In one aspect, there is an animatronic system. The first command is a puppetted instruction.
In another aspect, there is an animatronic system. The second command is a fixed show selection input associated with at least one fixed show instruction.
In one aspect, there is an animatronic system. The second command is an animatronic fixed show instruction that is based on the occurrence of a sensor operably connected to the animatronic figure detecting a stimulus.
In another aspect, there is an animatronic system. A filter module filters the first movement through a first filter and filters the second movement through a second filter to coordinate the first movement with the second movement so that the motion of the animatronic figure provides a life-like appearance.
By way of example, reference will now be made to the accompanying drawings.
A computing environment is disclosed, which provides greater flexibility for controlling animatronic systems than previously seen and provides the ability to produce real-time composite motions. This computing environment can be applied to robotic systems generally and is not limited to animatronic systems. Further, this computing environment is not limited to any particular programming language. In one embodiment, a scripting language provides a programmer with a high-level tool that instructs the animatronic figure to produce real-time, life-like motions.
Various features of this computing environment are described below with respect to an animatronic system. In one embodiment, the computing environment provides resources for different types of shows that the animatronic figure can perform, the combination and sequencing of the movements in these shows to produce life-like movements in real-time, calculations of trajectories for real-time movements, and/or filtering of the animatronic figure's movements. These and other features will be discussed below.
One type of show is a puppetted show. The puppetted show is a sequence of movements that are operator-controlled. In other words, an operator 101 manually inputs the desired movements of the animatronic figure.
In one embodiment, the user inputs the puppetted motions through a joystick or other analog input device. In another embodiment, the user inputs the puppetted motions through a keyboard. In yet another embodiment, the user inputs the puppetted motions through sensors attached to the user's body. In another embodiment, face tracking is used to input the puppetted motions. Face tracking detects movements of the face and/or head. A camera can be focused on the face and/or head to receive the puppetted motion. Sensors can also be placed on the head to receive the puppetted motion. For instance, the operator 101 can wear a headband that has sensors on it for detecting movement of the head. In yet another embodiment, the user inputs the puppetted motions through voice commands into a microphone that operates in conjunction with voice recognition software.
Another type of show is a fixed show. The fixed show is a recordation of pre-animated sequences that can be played back either once or continuously in a loop. The operator 101 can simply select a fixed show to animate the animatronic figure.
A few different methods exist for creating a fixed show. In one embodiment, the fixed show is a recordation of the user puppetting a sequence of movements. In another embodiment, the fixed show is a recordation of movements that a user enters through a graphical user interface (“GUI”). In another embodiment, motions are derived from other sources, such as deriving mouth positions from analyzing recorded speech. In another embodiment, a combination of these approaches is used. In one embodiment, the instructions for the fixed show are stored on a computer-readable medium.
In one embodiment, the user inputs the selection of the fixed show through a button (not shown) that instructs the animatronic system 100 to animate the animatronic figure according to the selected fixed show.
In another embodiment, the user inputs the selection of the fixed show with a touch screen display. In another embodiment, the user inputs the selection of the fixed show through a dial. In another embodiment, the user inputs the selection of the fixed show through voice commands into a microphone that operates in conjunction with voice recognition software.
The animatronic system 100 provides the user with the ability to animate the animatronic figure with a puppetted show and a fixed show at the same time.
If the fixed show provides an instruction to the same actuator as the puppetted show, a composite motion is calculated. The composite motion is calculated so that the composite motion appears life-like. For example, if a fixed show is a sneezing motion and a puppetted motion is moving the head forward, a simple superimposition of the fixed show instruction and the puppetted motion would lean the head forward more so than if only the puppetted instruction were given. In another embodiment, the composite motion is formed by multiplying motions to increase or reduce a component motion. For example, the amplitude at which the tail wags can be increased when a laugh is played. In another embodiment, the composite motion is formed by modulating component motions. For example, the frequency of the breathing motion can increase when a strenuous motion is played. This combination parallels the motion of a living creature, which would have, in the sneezing example, an exaggerated forward movement of the head if a sneeze occurred during the forward motion of the head.
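The additive and multiplicative composition described above can be sketched as follows; the function names and the list-of-samples motion representation are assumptions for illustration, not the patented design:

```python
# Hypothetical sketch of composing a puppetted motion with a fixed-show
# motion on the same actuator channel; motions are lists of samples.

def superimpose(puppetted, fixed_show):
    """Additive composite: sample-wise sum of the two component motions."""
    return [p + f for p, f in zip(puppetted, fixed_show)]

def scale(motion, gain):
    """Multiplicative composite: amplify or reduce a component motion,
    e.g. increasing tail-wag amplitude while a laugh plays."""
    return [gain * m for m in motion]

# Puppetted input leans the head forward; the fixed "sneeze" show adds
# its own forward lean, so the composite leans further than either alone.
head_forward = [0.0, 0.1, 0.2, 0.3]
sneeze = [0.0, 0.2, 0.4, 0.1]
composite = superimpose(head_forward, sneeze)
print(composite)
```

In practice the composite would then be passed through the clipping stage described next, since a simple sum can over-drive an actuator.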
Accordingly, the composite motion may be too exaggerated to appear life-like. In the sneezing example, the head may move too far forward during the combined forward motion and the sneeze to appear life-like. In order to correct this behavior, the composite motion can be clipped. The term clipping refers to the reduction of a value to fall within a predetermined limit. In one embodiment, the trajectory of the composite motion of the animatronic figure is shortened if a positional clipping limit has been reached.
In another embodiment, the velocity of the composite motion of the animatronic figure is reduced if a velocity-clipping limit has been reached.
In yet another embodiment, the acceleration of the animatronic figure is reduced if an acceleration-clipping limit has been reached.
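A minimal sketch of the clipping idea, assuming a single joint value, a fixed control cycle, and illustrative limit values (none of which are specified by the text):

```python
# Hypothetical clipping of a composite target to predetermined limits.

def clip(value, limit):
    """Reduce a value so it falls within [-limit, +limit]."""
    return max(-limit, min(limit, value))

def clip_motion(target_pos, prev_pos, dt, pos_limit, vel_limit):
    """Clip the position, then clip the velocity implied over one cycle."""
    pos = clip(target_pos, pos_limit)          # positional clipping limit
    vel = clip((pos - prev_pos) / dt, vel_limit)  # velocity-clipping limit
    return prev_pos + vel * dt

# A sneeze superimposed on a forward lean asks for 1.4 rad, beyond both
# the 1.0 rad positional limit and the 2.0 rad/s velocity limit.
print(clip_motion(1.4, 0.9, 0.02, pos_limit=1.0, vel_limit=2.0))
```

Acceleration clipping would follow the same pattern, limiting the change in velocity between consecutive cycles.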
In one embodiment, a reception module 102 receives operator input commands for controlling the animatronic figure.
For example, an operator of an amusement park ride rotates a joystick to control the head and the neck of the animatronic figure.
The translation software module 104 converts the fixed show instruction associated with the fixed show selection into at least one physical movement instruction. Further, the translation software module can also convert the puppetted instruction into at least one physical movement instruction. First, the translation software module 104 evaluates a fixed show instruction associated with the fixed show selection. In one embodiment, a computer program is loaded from a memory device.
In one embodiment, the reception module 102, the translation software module 104, and the motion software module are all stored on different computers. In one embodiment, the reception module 102 does not need to be stored on a computer. Rather, the reception module 102 can be a simple input device. In another embodiment, the reception module 102 and the translation software module 104 are stored on the same computer but a different computer than the computer on which the motion software module 106 is stored. In yet another embodiment, the reception module 102 and the motion software module 106 are stored on the same computer but on a different computer than the computer on which the translation software module 104 is stored. In another embodiment, the translation software module 104 and the motion software module 106 are stored on the same computer but on a different computer than the computer on which the reception module 102 is stored. One of ordinary skill in the art will also recognize that one computer software module can perform the functions of all or some of these modules. For instance, one software module can perform the functions of the reception module and the translation software module.
In a process block 230, a combined instruction results from the combination of the puppetted instruction and the fixed show instruction. In one embodiment, the combined instruction results from the superimposition of the puppetted instruction and the fixed show instruction. In a process block 240, the animatronic system 100 instructs the animatronic figure to perform the combined instruction.
For example, an operator provides, through keyboard input, a puppetted instruction to raise the animatronic character's leg. In one embodiment, the puppetted input is stored in a memory device within a computer. Afterwards, the operator presses a button operably connected to the animatronic character to request a fixed show selection that is associated with at least one fixed show instruction for the character to growl. A processor within the computer superimposes the instruction to raise the leg of the animatronic character 108 with the instruction to growl. The animatronic system 100 then causes the animatronic character to raise its leg while growling.
In one embodiment, an algorithmic response module 308 determines whether a condition has been met so that an algorithm can be run. In one embodiment, the algorithmic response module accesses a database that stores a condition and an algorithm to be run if the condition is met. In another embodiment, the database can store a condition and an instruction to be performed if the condition is met. In another embodiment, the algorithmic response module 308 determines the direction of the resulting motion. In another embodiment, the algorithmic response module 308 determines the magnitude of the resulting motion. In another embodiment, the algorithmic response module 308 determines the duration of the resulting motion. In another embodiment, the algorithmic response module 308 determines the sequence of motions to follow or how many times an individual motion repeats.
In one embodiment, a clock 302 provides the time to the algorithmic response module 308. One of the conditions can be that a predetermined time has been reached, at which the animatronic figure performs a given motion.
In another embodiment, an algorithm is provided that calculates a breathing motion for the animatronic figure.
In another embodiment, a random number generator 304 provides an input to the algorithmic response module 308. An algorithm can be set to receive random numbers from the random number generator 304 to instruct the animatronic figure to move based on the random numbers.
In another embodiment, a sensor 306 provides data to the algorithmic response module 308. In one embodiment, the data is environmental data. For instance, the environmental data can include a detection of a motion of a body part of the animatronic figure.
The sensed condition can be the occurrence of the motion of any body part regardless of whether the motion results from a puppetted instruction or from a fixed show selection. In one embodiment, a condition is established such that the motion of a first body part invokes an algorithm to produce a motion for a second body part in the same vicinity. The algorithm calculates the second motion such that the first motion and the second motion appear to provide a coordinated life-like movement. For instance, the condition can be a downward motion of the head of the animatronic figure, which can invoke an algorithm to produce a coordinated motion of a nearby body part such as the neck.
In one embodiment, the translation software module 104 forms a composite motion by combining the user-inputted motion with the algorithmically calculated motion. The animatronic figure then performs the composite motion.
A motion of the animatronic figure can also be filtered so that the motion appears more life-like.
The filter module 502 filters the user-inputted motion according to one of a variety of filters. For instance, the filter module 502 can filter the user-inputted motion according to a low-pass filter, which blocks high frequency content and passes the lower frequency content. The filter module 502 can also filter the user-inputted motion according to a high-pass filter, which blocks low frequency content and passes the higher frequency content. Further, the filter module 502 can filter the user-inputted motion according to a band-pass filter, which passes a limited range of frequencies. The band-pass filter can be a combination of a low-pass filter and a high-pass filter to only pass through certain low frequency content and certain high frequency content. One of ordinary skill in the art will recognize that other filters can be used by the filter module 502.
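As a concrete sketch, the low-pass and high-pass behavior described above can be approximated with simple one-pole filters; the coefficient value and helper names below are illustrative assumptions, not the filter module's actual implementation:

```python
# Hypothetical one-pole filters of the kind a filter module might apply
# to a sampled motion channel.

def low_pass(samples, alpha=0.2):
    """Pass low-frequency content: exponential smoothing of the input."""
    out, y = [], samples[0]
    for x in samples:
        y = y + alpha * (x - y)   # y follows x slowly
        out.append(y)
    return out

def high_pass(samples, alpha=0.2):
    """Pass high-frequency content: the residue the low-pass removes."""
    return [x - y for x, y in zip(samples, low_pass(samples, alpha))]

# A step command: the low-passed head eases into position instead of
# snapping there, which reads as more life-like.
step = [0.0] * 3 + [1.0] * 5
print([round(v, 3) for v in low_pass(step)])
# [0.0, 0.0, 0.0, 0.2, 0.36, 0.488, 0.59, 0.672]
```

A band-pass response can be built by composing the two, passing only a chosen range of frequencies, as the text describes.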
In essence, the filter module 502 filters motions to appear more life-like. For instance, when the user 101 inputs an instruction to move the head of the animatronic figure, the filter module 502 can filter that motion so that the head moves smoothly rather than abruptly.
The filtering of the motions of the animatronic figure can be illustrated with the example of a low-pass filter applied to the head.
Because of the low-pass filter, the head of the animatronic figure moves smoothly rather than abruptly.
The filter module 502 applies a first filter to the user-inputted motion and applies a second filter to the algorithmically-determined motion in an attempt to coordinate movements of the animatronic figure.
Further, the curve 620 is illustrated to represent operation of the low-pass filter for the head of the animatronic figure.
In addition, a filter is applied to the motion of the middle portion of the neck of the animatronic figure.
A filter is also applied to the motion of the base of the neck of the animatronic figure.
At a process block 906, a first movement and a second movement are determined. The movements are determined from the respective commands. In one embodiment, the translation software module 104 translates the commands into the movements.
At a process block 908, the first movement and the second movement are filtered to produce a coordinated life-like motion. Accordingly, at a process block 910, the first and second movements are provided to the animatronic figure.
At a process block 928, a first movement and a second movement are determined. The movements are determined from the respective commands. In one embodiment, the translation software module 104 translates the commands into the movements.
At a process block 930, the first movement and the second movement are filtered to produce a coordinated life-like motion. Accordingly, at a process block 932, the first and second movements are provided to the animatronic figure.
In one embodiment, a real-time evaluated scripting language is used to describe show elements. Show elements are theatric elements that can range from small motions and gestures up to complex interactive motions such as walking. These show elements contain sets of motions for the animatronic figure.
In one embodiment, the scripting language and its execution environment are specialized for physical animation and run in a strictly real-time framework. The specialization for physical animation makes common animation tasks simple and efficient. The real-time framework assures that the resulting motions do not stutter or pause, maintaining consistent control and safety.
Smooth motion is helpful for physically realized animation. The underlying system maintains an explicit understanding of the component, composite, and transformed motion, allowing automatic generation of smooth trajectories. In one embodiment, an approach to keyframe animation is used that is both simple to use and maintains C2 continuity. By C2 continuity, it is meant that the trajectory and its first two derivatives are continuous. This assures that the animatronic figure's motion is smooth.
In one embodiment, trajectories are implemented as computing objects, effectively computer programs, which are evaluated on a precise real-time clock. They are created, installed, and then run until the resulting motion completes or is stopped. While a trajectory is running, it is continually being evaluated to determine the animatronic figure's next target positions.
Trajectories are evaluated on a strictly periodic cycle. It is not acceptable for the evaluation to pause or not return a value in its allotted time. Such a pause would be indistinguishable from a system failure, and not returning a value after the allotted time would result in motion that is not smooth. Therefore, elements of the computation that could cause the calculation to falter are necessarily moved elsewhere.
Evaluated languages achieve their dynamic behavior by acquiring and releasing resources as they are needed to perform a calculation. The most common resource is computer memory, but resources also include hardware and other devices. Acquiring resources can take an undetermined amount of time. Thus, the standard approach used by scripting languages to perform calculations cannot assure timely results and is therefore not acceptable here.
The trajectory planner 1004 has a trajectory following and execution module 1102. Accordingly, the trajectory following and execution module 1102 handles the real-time task of evaluating trajectories and determining target positions. The trajectory following and execution module 1102 periodically wakes up, evaluates the active trajectories, and returns target positions.
In addition, the trajectory planner 1004 has a trajectory creation and management module 1104. The trajectory creation and management module 1104 handles all the non-real-time needs, particularly the creation of trajectory objects and the management of resources. When a trajectory completes, it and all its resources are handed back to the manager to be cleaned up; that is, memory and devices are freed back to the computer to be used elsewhere. This approach of using a non-real-time management thread, a cooperating program that shares and manages a common memory space, is more powerful than the resource management typically used elsewhere.
Trajectory objects can be atomic or composite. An atomic trajectory is capable of computing target positions independent of other trajectories. In one embodiment, the atomic trajectory uses time or external inputs. A composite trajectory computes its target positions based on the outputs of component trajectories.
The trajectory objects in this embodiment provide a generalized computing environment, an example of which is illustrated in the accompanying drawings.
In addition, other trajectories can include sequence, composite, and transition trajectories, perform logical operators, and handle starting, stopping, and interrupting trajectories. Trajectories can produce multiple outputs, be transformed, loop, and be tested. Finally, trajectories can define an internal computational environment where internal variables, functions, and trajectories can be defined.
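The atomic/composite distinction can be sketched as follows, assuming hypothetical class names and a simple time-based target() interface (the actual embodiment evaluates trajectory objects on a precise real-time clock):

```python
# Hypothetical sketch: an atomic trajectory computes its target from time
# alone, while a composite trajectory combines component outputs.
import math

class SineTrajectory:            # atomic: depends only on time
    def __init__(self, amplitude, period):
        self.amplitude, self.period = amplitude, period

    def target(self, t):
        return self.amplitude * math.sin(2 * math.pi * t / self.period)

class SumTrajectory:             # composite: sums component outputs
    def __init__(self, *components):
        self.components = components

    def target(self, t):
        return sum(c.target(t) for c in self.components)

breathing = SineTrajectory(amplitude=0.05, period=4.0)
tail_wag = SineTrajectory(amplitude=0.30, period=0.5)
combined = SumTrajectory(breathing, tail_wag)
print(combined.target(1.0))      # evaluated once per real-time cycle
```

In the described system, such objects would be created and resourced by the management module ahead of time, so that each per-cycle target() evaluation completes in its allotted time.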
Keyframe-based curves are a common and important tool for animation, computer-aided design, and elsewhere. Standard approaches used for keyframe animation are powerful and simple to use but lack the high degree of mathematical smoothness required by physical animation, that is, the life-like motion of an animatronic figure or robot. In one embodiment, an atomic keyframe trajectory that is used is C2 continuous. By the term C2 continuous, it is meant that the positions as well as the first two derivatives, the velocity and acceleration, are continuous and smooth.
In one embodiment, smooth keyframe based curves are implemented. Accordingly, the curve is C2 continuous. Further, data is interpolated, i.e. the curve passes through, not just approximates, all specified data. In addition, keyframes may be non-uniformly spaced. In other words, the time between adjacent pairs of keyframes may be different. Individual keyframes may contain position data alone, position and velocity data, or position, velocity, and acceleration data, all of which is interpolated. Different kinds of keyframes, such as position, position/velocity, or position/velocity/acceleration, may be arbitrarily mixed.
Standard approaches to keyframe animation often use Bézier curves or the generalized class of Hermite splines, both of which in their standard form are only C1 continuous. Higher order Bézier and Hermite splines exist but are not well suited to simple curve editing. The most common Hermite spline for curve editing is the Catmull-Rom method that uses adjacent points to compute the additional needed constraints, i.e., the velocities, at each point. Additional controls can also be added, probably the most common example being Kochanek-Bartels splines, as seen in AUTODESK's 3D-STUDIO MAX and NEWTEK's LIGHTWAVE.
In one embodiment, for any keyframe that does not have a velocity specified, the velocity vi is computed from the positions and times of the adjacent keyframes, where dtA=ti−ti−1 is the duration of the interval preceding the keyframe and dtB=ti+1−ti is the duration of the interval following it.
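The velocity equation itself did not survive extraction. One non-uniform generalization of the Catmull-Rom tangent that reduces to the Catmull-Rom value when the two intervals are equal is the following; this reconstruction is an assumption for illustration, not necessarily the exact form used:

```latex
v_i = \frac{dt_B}{dt_A + dt_B}\cdot\frac{x_i - x_{i-1}}{dt_A}
    + \frac{dt_A}{dt_A + dt_B}\cdot\frac{x_{i+1} - x_i}{dt_B},
\qquad dt_A = t_i - t_{i-1},\quad dt_B = t_{i+1} - t_i .
```

When dt_A = dt_B = dt, this reduces to (x_{i+1} - x_{i-1})/(2 dt), the standard Catmull-Rom value.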
The value for vi reduces to the Catmull-Rom value when dtA=dtB. For any keyframe that does not have an acceleration specified, the following computation is performed. If it is the first or last keyframe (k1 or kN), the unspecified acceleration is set to zero, ai=0. If the acceleration of a remaining keyframe is unspecified, it is computed by averaging the accelerations of the curve to the left of the keyframe and the curve to the right of the keyframe, such that those curves interpolate whatever data does exist. The equation ai=(aA+aB)/2 represents this computation. The component accelerations aA and aB are computed in the following manner. If the previous keyframe ki−1 has a specified acceleration or is the first keyframe (so ai−1=a1=0), then solve the fourth-order polynomial fa(t)=a0+a1t+a2t^2+a3t^3+a4t^4 such that it passes through {ti−1, xi−1, vi−1, ai−1} and {ti, xi, vi}, and then compute the resulting acceleration at ti: aA=fa''(ti).
Otherwise, the previous keyframe ki−1 does not have a specified acceleration and is not the first keyframe. In this case, solve the third-order polynomial fa(t)=a0+a1t+a2t²+a3t³ such that it passes through {ti−1, xi−1, vi−1} and {ti, xi, vi}, and then compute the resulting acceleration at ti: aA=fa″(ti).
If the subsequent keyframe ki+1 has a specified acceleration or is the last keyframe (so ai+1=aN=0), then solve the fourth-order polynomial fb(t)=b0+b1t+b2t²+b3t³+b4t⁴ such that it passes through {ti, xi, vi} and {ti+1, xi+1, vi+1, ai+1}, and then compute the resulting acceleration at ti: aB=fb″(ti).
Otherwise, the subsequent keyframe ki+1 does not have a specified acceleration and is not the last keyframe. In this case, solve the third-order polynomial fb(t)=b0+b1t+b2t²+b3t³ such that it passes through {ti, xi, vi} and {ti+1, xi+1, vi+1}, and then compute the resulting acceleration at ti: aB=fb″(ti).
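Each of these polynomial solves amounts to a small linear system. The sketch below, assuming NumPy and illustrative names (the patent gives no explicit algorithm), fits the lowest-order polynomial through one segment's position/velocity constraints, plus an optional acceleration constraint at the far endpoint, and returns the second derivative at the near endpoint:

```python
import numpy as np

def fit_acceleration(t0, x0, v0, t1, x1, v1, a0=None):
    """Fit a cubic (or, if a0 is given, a quartic) polynomial through
    f(t0)=x0, f'(t0)=v0, f(t1)=x1, f'(t1)=v1 [, f''(t0)=a0] and return
    f''(t1).  This yields the component acceleration aA from the text;
    aB is computed analogously at the segment's other endpoint."""
    deg = 4 if a0 is not None else 3              # quartic iff a0 specified
    # Each row expresses one constraint on the coefficients c of
    # f(t) = sum_k c_k t^k.
    pow_row = lambda t: [t**k for k in range(deg + 1)]
    d1_row = lambda t: [k * t**(k - 1) if k >= 1 else 0.0
                        for k in range(deg + 1)]
    d2_row = lambda t: [k * (k - 1) * t**(k - 2) if k >= 2 else 0.0
                        for k in range(deg + 1)]
    rows = [pow_row(t0), d1_row(t0), pow_row(t1), d1_row(t1)]
    rhs = [x0, v0, x1, v1]
    if a0 is not None:
        rows.append(d2_row(t0))
        rhs.append(a0)
    c = np.linalg.solve(np.array(rows), np.array(rhs))
    return float(np.dot(d2_row(t1), c))
```

For example, data sampled from constant-velocity motion yields a zero component acceleration, as expected.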
At this point, each keyframe has a position, velocity, and acceleration, either from being specified or from being computed based on the values of adjacent keyframes. These fully populated keyframes {ti, xi, vi, ai} are now used to compute the resulting curve.
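One standard way to realize a C2 curve through fully populated keyframes is a quintic Hermite segment: the unique fifth-order polynomial matching position, velocity, and acceleration at both endpoints. The patent does not name its basis, so the following is a sketch under that assumption:

```python
def quintic_segment(t, k0, k1):
    """Evaluate the curve between two fully populated keyframes
    k = (t, x, v, a) using the unique quintic matching position,
    velocity, and acceleration at both ends (quintic Hermite basis)."""
    t0, x0, v0, a0 = k0
    t1, x1, v1, a1 = k1
    h = t1 - t0                              # possibly non-uniform spacing
    s = (t - t0) / h                         # scaled time in [0, 1]
    s2, s3, s4, s5 = s * s, s**3, s**4, s**5
    H0 = 1 - 10*s3 + 15*s4 - 6*s5            # quintic Hermite basis
    H1 = s - 6*s3 + 8*s4 - 3*s5
    H2 = 0.5*s2 - 1.5*s3 + 1.5*s4 - 0.5*s5
    H3 = 10*s3 - 15*s4 + 6*s5
    H4 = -4*s3 + 7*s4 - 3*s5
    H5 = 0.5*s3 - s4 + 0.5*s5
    # Velocity and acceleration terms are scaled by h and h^2 because
    # the basis is defined on scaled time.
    return (x0*H0 + v0*h*H1 + a0*h*h*H2 +
            x1*H3 + v1*h*H4 + a1*h*h*H5)
```

Because adjacent segments share the keyframe's position, velocity, and acceleration, the piecewise curve is C2 continuous across keyframes, as required.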
An added benefit of this approach is that the acceleration averaging typically reduces the maximum acceleration, and thereby the eventual torque required by the robot to realize the motion.
Not only should atomic and composite trajectory objects produce smooth motions; transitions between curves, interrupted curves, and curves that start from unanticipated positions must be smooth as well. This is done by computing and passing software objects, called PVA objects (for position, velocity, and acceleration), that describe the trajectory's current motion. Trajectory objects use PVAs to initialize their motion and return them when they stop or are interrupted. Additionally, it was found that PVA objects can be used to produce trajectories that combine motions in different transform or description spaces.
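The handoff protocol described here can be sketched minimally as follows; the class and field names are illustrative assumptions, not the patent's actual interfaces:

```python
from dataclasses import dataclass

@dataclass
class PVA:
    """Snapshot of a trajectory's instantaneous motion state:
    position, velocity, and acceleration."""
    position: float
    velocity: float
    acceleration: float

class Trajectory:
    """Sketch of the PVA handoff: a trajectory initializes from a PVA
    and returns one when it stops or is interrupted, so its successor
    can start from the current motion rather than from rest."""
    def start(self, pva: PVA) -> None:
        self._pva = pva          # initialize from the current motion state
    def interrupt(self) -> PVA:
        return self._pva         # hand the motion state to the successor
```

A successor trajectory that accepts an arbitrary start PVA (as keyframe trajectories do, per the text) can then continue the motion without a discontinuity in position, velocity, or acceleration.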
Transitions between smooth component trajectories are achieved using cross fades, where the fade is itself a C2 continuous curve. Any C2 continuous s-shaped curve could be used; in this embodiment it is the 5th-order polynomial
f(τ)=τ³(10+τ(6τ−15))
where the scaled time τ is in 0≤τ≤1. If the two trajectories T1 and T2 as well as the transition function f(t) are C2 continuous, then their linear combination
f(αt)T1+(1−f(αt))T2
where α is a constant scaling factor for time, is also C2 continuous.
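This blend can be written directly; a minimal sketch with illustrative function names follows. Note that the fade polynomial is 0 at τ=0 and 1 at τ=1, with zero first and second derivatives at both ends, which is what preserves C2 continuity across the transition:

```python
def fade(tau):
    """The C2 s-curve f(tau) = tau^3 (10 + tau (6 tau - 15)),
    i.e. 6 tau^5 - 15 tau^4 + 10 tau^3, clamped outside [0, 1]."""
    tau = min(max(tau, 0.0), 1.0)
    return tau**3 * (10.0 + tau * (6.0 * tau - 15.0))

def cross_fade(T1, T2, t, alpha=1.0):
    """Blend two trajectories (callables of time) using the linear
    combination f(alpha t) T1 + (1 - f(alpha t)) T2 from the text."""
    w = fade(alpha * t)
    return w * T1(t) + (1.0 - w) * T2(t)
```

Because the fade saturates at 0 and 1, the output equals T2 before the transition begins and T1 after it completes, with a seamless C2 handover in between.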
Cross fades can be used for any trajectory. It is often useful to transition an internal or transformed trajectory. This causes a smooth transition from one complex motion to a different complex motion in an often unintuitive but powerful way.
As an example, consider a tail wag that involves a sinusoidal motion. When the animatronic figure is happy, the tail might wag more quickly than when it is sad. In this example, the frequency of the wag motion would smoothly transition from some lower value to some higher value. In actuality, a fixed sinusoidal pattern can appear too perfect, too mechanical, to be life-like. A more life-like solution is to smoothly transition from a low-frequency, slowly-varying, pseudo-random pattern to a high-frequency, slowly-varying, pseudo-random pattern.
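A sketch of the frequency transition follows; all parameter names and values are illustrative assumptions. One detail worth showing: the frequency is integrated into a running phase rather than substituted into sin(2πft) directly, so the waveform itself stays smooth while its rate changes.

```python
import math

def wag_angle(times, f_low=0.5, f_high=2.0, fade_start=2.0, fade_len=1.0):
    """Tail-wag angle samples whose frequency glides from f_low to
    f_high (Hz) over [fade_start, fade_start + fade_len] seconds,
    using the C2 s-curve from the text to schedule the frequency."""
    def s(u):                                # C2 s-curve, clamped to [0, 1]
        u = min(max(u, 0.0), 1.0)
        return u**3 * (10.0 + u * (6.0 * u - 15.0))
    angles, phase = [], 0.0
    for i, t in enumerate(times):
        dt = times[i] - times[i - 1] if i else 0.0
        f = f_low + (f_high - f_low) * s((t - fade_start) / fade_len)
        phase += 2.0 * math.pi * f * dt      # integrate frequency into phase
        angles.append(math.sin(phase))
    return angles
```

The pseudo-random variant described in the text would additionally modulate f_low and f_high with slowly varying noise; that elaboration is omitted here.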
Some trajectories, most notably keyframe trajectories, are able to start from an arbitrary initial position or motion. All trajectory objects are initialized during execution before being evaluated for the first time. This initialization sets a number of parameters, such as the start time and the start PVA. Keyframe trajectories, if they have been specified to do so, can use this start PVA as their initial keyframe, as a subsequent keyframe in the curve, or as a computational component used by that trajectory object. Thus, trajectories that start with a non-predetermined motion can still compute and follow a smooth trajectory.
While the eventual task of a control system is to move the robot's motors, it is often easier to describe motions in a transformation space. For a walking animatronic figure, the conceptually simple task of moving the body horizontally, vertically, or rotationally typically requires moving all of the motors in all of the legs in a coordinated fashion. Producing combinations of these conceptually simple body motions quickly becomes impractically difficult if each joint's trajectory has to be described directly. Instead, it is convenient to be able to describe motions in a more intuitive transformation space. In the walking example, motions would be described in terms of the body, and the system would compute the motions of the leg motors needed to create that motion. The transformation in this example is between body motion and leg motor motions.
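A toy version of such a transformation can make this concrete. The sketch below maps a body-space description (vertical body height above a planted foot) to joint-space targets for a planar two-link leg; the link lengths and the kinematic model are illustrative assumptions, not the patent's robot:

```python
import math

def leg_joints_for_body_height(height, l1=0.4, l2=0.4):
    """Transform body height above a planted foot into (hip, knee)
    angles for a planar two-link leg with link lengths l1, l2.
    hip is measured from vertical; knee is flexion from straight."""
    # Law of cosines gives the interior knee angle for the required
    # hip-to-foot distance; knee flexion is its supplement.
    d = min(height, l1 + l2 - 1e-9)          # clamp to the reachable range
    interior = math.acos((l1*l1 + l2*l2 - d*d) / (2.0 * l1 * l2))
    knee = math.pi - interior
    # Law of sines keeps the foot directly below the hip.
    hip = math.asin(l2 * math.sin(interior) / d)
    return hip, knee
```

Describing the motion as a single height trajectory and letting a transform like this compute the coordinated joint motions is the pattern the text describes, scaled down to one leg in a plane.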
By combining transformations with composite trajectories and PVAs, additional capabilities were realized in this preferred embodiment. A particularly powerful capability is the ability to project motion from one transform space into another. As an example, consider an animatronic figure that picks up a trumpet and plays it, all the while moving its head and body. The position of the animatronic figure's hands while picking up the trumpet would most easily be described in terms of the stationary coordinate system of the table. Playing the trumpet while the animatronic figure moved would most easily be done in terms of the figure's lips. Transitioning smoothly from one transform space to the other, as well as allowing motions described in one space to be modified or combined in another, is achieved by projecting a motion described in one transform space down into joint motions and then projecting those joint motions back into the other transform space, where they can be modified or combined with other motion components.
Another example is the movement of an animatronic dinosaur, which involves foot positions during walking. While a foot is on the ground, its position is described in terms of ground coordinates; in particular, a foot on the ground should not move or slide along the ground. While a foot is swinging forward in the air during a step, it is described in terms of the body's position. By allowing smooth transitions between these two descriptions, or transform spaces, it was possible in the scripting language to ask that a foot be lifted smoothly off the ground, swung forward to a prescribed stride position, and set smoothly back onto the ground, without having to worry about the component motions necessary to make the foot leave and touch the ground smoothly.
The ability to describe and rapidly try out trajectories provides much of the system's power and flexibility. The scripting language allows trajectory objects to be described at a high level and then managed by the computer. The computer is also able to perform some optimizations on the scripts as they are read in, or parsed. These optimizations result in smaller trajectory objects that evaluate more efficiently.
The scripting language used in the preferred embodiment is a text-based language that allows trajectories to be defined and manipulated. As a way of explanation, scripts for the trajectories illustrated previously will be shown. The syntax is reminiscent of the computer language C++.
The animatronic system 100 is intended to cover any system that deals with an unconstrained environment, where the operator 101 can either modify or completely replace some or all of the behavior and then potentially hand control back, possibly even at a new point. The animatronic system 100 provides the ability to mix and to switch between canned (pre-recorded), computed (typically sensor responses, but also planned), and live (operator-controlled) motion to produce a seamless amalgam. Accordingly, the animatronic system 100 can be applied to a variety of different configurations.
While the above description contains many specifics, these should not be construed as limitations on the scope of the invention, but rather as an exemplification of preferred embodiments thereof. The invention includes any combination or subcombination of the elements from the different species and/or embodiments disclosed herein. One skilled in the art will recognize that these features, and thus the scope of the present invention, should be interpreted in light of the following claims and any equivalents thereto.
This application is a continuation-in-part and claims priority to U.S. patent application Ser. No. 10/757,797, entitled “ANIMATRONIC SUPPORTED WALKING SYSTEM” filed on Jan. 14, 2004 by Akhil Jiten Madhani, Holger Irmler, Alexis P. Wieland, and Bryan S. Tye, now U.S. Pat. No. 7,238,079, which is hereby incorporated by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
4202423 | Soto | May 1980 | A |
4503924 | Bartholet et al. | Mar 1985 | A |
4511011 | Bartholet | Apr 1985 | A |
4527650 | Bartholet | Jul 1985 | A |
4558758 | Littman et al. | Dec 1985 | A |
4565487 | Kroczynski | Jan 1986 | A |
4677568 | Arbter | Jun 1987 | A |
4695977 | Hansen et al. | Sep 1987 | A |
4712184 | Haugerud | Dec 1987 | A |
4747127 | Hansen et al. | May 1988 | A |
4774445 | Penkar | Sep 1988 | A |
4815011 | Mizuno et al. | Mar 1989 | A |
4821463 | Fuller, Jr. | Apr 1989 | A |
4833624 | Kuwahara et al. | May 1989 | A |
4835730 | Shimano et al. | May 1989 | A |
4840242 | Chih et al. | Jun 1989 | A |
4868474 | Lancraft et al. | Sep 1989 | A |
4912650 | Tanaka et al. | Mar 1990 | A |
4923428 | Curran | May 1990 | A |
5008834 | Mizuno et al. | Apr 1991 | A |
5021878 | Lang | Jun 1991 | A |
5088953 | Richman | Feb 1992 | A |
5121805 | Collie | Jun 1992 | A |
5151859 | Yoshino et al. | Sep 1992 | A |
5157316 | Glovier | Oct 1992 | A |
5159988 | Gomi et al. | Nov 1992 | A |
5221883 | Takenaka et al. | Jun 1993 | A |
5270480 | Hikawa | Dec 1993 | A |
5303384 | Rodriguez et al. | Apr 1994 | A |
5353886 | Paakkunainen | Oct 1994 | A |
5355064 | Yoshino et al. | Oct 1994 | A |
5434489 | Cheng et al. | Jul 1995 | A |
5484031 | Koyachi et al. | Jan 1996 | A |
5493185 | Mohr et al. | Feb 1996 | A |
5511147 | Abdel-Malek | Apr 1996 | A |
5519814 | Rodriguez et al. | May 1996 | A |
5644204 | Nagle | Jul 1997 | A |
5697829 | Chainani et al. | Dec 1997 | A |
5724074 | Chainani et al. | Mar 1998 | A |
5746602 | Kikinis | May 1998 | A |
5752880 | Gabai et al. | May 1998 | A |
5784541 | Ruff | Jul 1998 | A |
5794166 | Bauer et al. | Aug 1998 | A |
5808433 | Tagami et al. | Sep 1998 | A |
5838130 | Ozawa | Nov 1998 | A |
5842533 | Takeuchi | Dec 1998 | A |
5913727 | Ahdoot | Jun 1999 | A |
5929585 | Fujita | Jul 1999 | A |
5947825 | Horstmann et al. | Sep 1999 | A |
6022273 | Gabai et al. | Feb 2000 | A |
6053797 | Tsang et al. | Apr 2000 | A |
6056618 | Larian | May 2000 | A |
6075195 | Gabai et al. | Jun 2000 | A |
6149490 | Hampton et al. | Nov 2000 | A |
6192215 | Wang | Feb 2001 | B1 |
6206745 | Gabai et al. | Mar 2001 | B1 |
6230078 | Ruff | May 2001 | B1 |
6249278 | Segan et al. | Jun 2001 | B1 |
6253058 | Murasaki et al. | Jun 2001 | B1 |
6264521 | Hernandez | Jul 2001 | B1 |
6290566 | Gabai et al. | Sep 2001 | B1 |
6319010 | Kikinis | Nov 2001 | B1 |
6330494 | Yamamoto | Dec 2001 | B1 |
6352478 | Gabai et al. | Mar 2002 | B1 |
6358111 | Fong et al. | Mar 2002 | B1 |
6368177 | Gabai et al. | Apr 2002 | B1 |
6377281 | Rosenbluth et al. | Apr 2002 | B1 |
6452348 | Toyoda | Sep 2002 | B1 |
6454625 | Fong et al. | Sep 2002 | B1 |
6462498 | Filo | Oct 2002 | B1 |
6493606 | Saijo et al. | Dec 2002 | B2 |
6497607 | Hampton et al. | Dec 2002 | B1 |
6514117 | Hampton et al. | Feb 2003 | B1 |
6535793 | Allard | Mar 2003 | B2 |
6537128 | Hampton et al. | Mar 2003 | B1 |
6544098 | Hampton et al. | Apr 2003 | B1 |
6584377 | Saijo et al. | Jun 2003 | B2 |
6629087 | Benson et al. | Sep 2003 | B1 |
6663393 | Ghaly | Dec 2003 | B1 |
6667593 | Inoue et al. | Dec 2003 | B2 |
6773322 | Gabai et al. | Aug 2004 | B2 |
6800013 | Liu | Oct 2004 | B2 |
6937289 | Ranta et al. | Aug 2005 | B1 |
6959166 | Gabai et al. | Oct 2005 | B1 |
7059933 | Hsiao et al. | Jun 2006 | B1 |
7065490 | Asano et al. | Jun 2006 | B1 |
7219064 | Nakakita et al. | May 2007 | B2 |
7260430 | Wu et al. | Aug 2007 | B2 |
7308335 | Takenaka et al. | Dec 2007 | B2 |
20010004195 | Barr | Jun 2001 | A1 |
20010027397 | Yeon | Oct 2001 | A1 |
20010031652 | Gabai et al. | Oct 2001 | A1 |
20010049248 | Choi | Dec 2001 | A1 |
20020016128 | Saito | Feb 2002 | A1 |
20020024447 | Fong et al. | Feb 2002 | A1 |
20020029388 | Heisele | Mar 2002 | A1 |
20020061700 | Chan | May 2002 | A1 |
20020061708 | Fong et al. | May 2002 | A1 |
20020086607 | Chan | Jul 2002 | A1 |
20020107591 | Gabai et al. | Aug 2002 | A1 |
20020120362 | Lathan et al. | Aug 2002 | A1 |
20020187722 | Fong et al. | Dec 2002 | A1 |
20020193908 | Parker et al. | Dec 2002 | A1 |
20030110040 | Holland et al. | Jun 2003 | A1 |
20030124954 | Liu | Jul 2003 | A1 |
20030162475 | Pratte et al. | Aug 2003 | A1 |
20040088080 | Song et al. | May 2004 | A1 |
20050283043 | Sisk | Dec 2005 | A1 |
20060084362 | Ghaly | Apr 2006 | A1 |
20060094332 | Ghaly | May 2006 | A1 |
20060260087 | Im et al. | Nov 2006 | A1 |
20070099538 | Friedland et al. | May 2007 | A1 |
Number | Date | Country |
---|---|---|
8174378 | Jul 1996 | JP |
10137439 | May 1998 | JP |
2002210679 | Jul 2002 | JP |
Entry |
---|
Patent Application No. 2006-500961, Office Action from the Japanese Patent Office, English Translation of the Notification of Reasons for Refusal, dated Nov. 2, 2009, citing prior art, p. 5. |
Number | Date | Country | |
---|---|---|---|
20050153624 A1 | Jul 2005 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 10757797 | Jan 2004 | US |
Child | 10917044 | US |