Creating robotic devices, such as animatronic characters, can be time intensive and processing intensive. Previous processes and systems for creating the physical linkages and joints of robotic devices are often slow, iterative and repetitive. For example, whether the creative vision for an animatronic character can be realized with a given set of robotic hardware is unknown at the outset. Furthermore, certain details of the hardware may also be unknown. Thus, many iterations of the robotic and character design may be needed to realize a functioning device. Therefore, there exists a need for improved processes and systems that can enable quick design of robotic systems.
In one embodiment, a computer-implemented method for designing a robotic device is performed by a system including a processor and a memory storing instructions that, when executed by the processor, cause the system to: receive a target animation for a character to be represented by the robotic device; receive an initial model of the robotic device, the model including a plurality of configurable joints and a plurality of actuators; generate a kinematic design of the robotic device based on the initial model and the target animation; generate control parameters for the plurality of actuators based on the kinematic design; generate a physical design for the robotic device based on the kinematic design and the control parameters; and deploy the physical design to the robotic device.
Optionally, in some embodiments, the plurality of configurable joints include respective parameterized characteristics fixed during an animation of the robotic device.
Optionally, in some embodiments, the instructions, when executed by the processor, cause the processor to parameterize a characteristic of at least one of the plurality of configurable joints.
Optionally, in some embodiments, the plurality of configurable joints includes at least one of a Cartesian joint, a prismatic joint, a cylindrical joint, a revolute joint, a universal joint, or a spherical joint.
Optionally, in some embodiments, the plurality of configurable joints includes at least one of an actuated joint or a passive joint.
Optionally, in some embodiments, the parameterized characteristics include at least one of an orientation or position of at least one of the plurality of configurable joints.
Optionally, in some embodiments, the instructions, when executed by the processor, cause the processor to discretize the target animation into a plurality of time intervals.
Optionally, in some embodiments, the instructions, when executed by the processor, cause the processor to compare a motion of the robotic device with respect to the target animation at each of the plurality of time intervals, and adjust the kinematic design based on the comparison.
Optionally, in some embodiments, the comparing the motion of the robotic device includes measuring at least one of a position of at least one of the plurality of actuators or a velocity of at least one of the plurality of actuators.
In one embodiment, a system for designing a robotic device includes a processor configured to: receive a target animation for a character to be represented by the robotic device; receive an initial model of the robotic device, the model including a plurality of configurable joints and a plurality of actuators; generate a kinematic design of the robotic device based on the initial model and the target animation; generate control parameters for the plurality of actuators based on the kinematic design; generate a physical design for the robotic device based on the kinematic design and the control parameters; and deploy the physical design to the robotic device.
Optionally, in some embodiments, the plurality of configurable joints include respective parameterized characteristics fixed during an animation of the robotic device.
Optionally, in some embodiments, the processor is further configured to parameterize a characteristic of at least one of the plurality of configurable joints.
Optionally, in some embodiments, the plurality of configurable joints includes at least one of a Cartesian joint, a prismatic joint, a cylindrical joint, a revolute joint, a universal joint, or a spherical joint.
Optionally, in some embodiments, the plurality of configurable joints includes at least one of an actuated joint or a passive joint.
Optionally, in some embodiments, the parameterized characteristics of the plurality of configurable joints include at least one of an orientation or position of at least one of the plurality of configurable joints.
Optionally, in some embodiments, the processor is further configured to discretize the target animation into a plurality of time intervals.
Optionally, in some embodiments, the processor is further configured to compare a motion of the robotic device with respect to the target animation at each of the plurality of time intervals, and adjust the kinematic design based on the comparison.
Optionally, in some embodiments, comparing the motion of the robotic device includes measuring at least one of a position of at least one of the plurality of actuators or a velocity of at least one of the plurality of actuators.
In one embodiment, a robotic device includes: a plurality of rigid bodies coupled together by one or more of a plurality of joints. At least one of the plurality of joints is configurable to adjust a characteristic of the joint, and the characteristic of the joint is fixed during an animation of the robotic device.
Optionally, in some embodiments, at least one of the plurality of joints includes an actuated joint or a passive joint.
The kinematic motion of a robotic device is defined by its mechanical joints and actuators that create the relative motion of its components. Kinematics describes degrees-of-freedom (“DOF”) of respective linkages, their positions, velocities, and ranges of motion. The disclosed systems and methods provide improvements in the kinematic design and control of robotic devices, allowing faster and easier deployment of different target motions to different or new robotic devices. In many embodiments, the system includes configurable joints that can be modified to change output characteristics thereof, enabling a rapid deployment of different types of desired movement.
In many embodiments, a method includes receiving a target animation for a character to be represented by the robotic device, and an initial robotic model (e.g., a three-dimensional solid model) may be generated based on the target animation. The robotic model typically includes a plurality of rigid bodies or links joined by a variety of types of joints to form one or more linkages. The linkages are assembled together into a robotic device (virtually and/or physically). The joints may be actuated joints (e.g., powered joints that move under the power of a motor or the like), passive joints (e.g., unpowered joints or followers), or configurable joints, which may be real or simulated joints that can be changed to adapt the configuration of the robot. The configurable joints enable a robotic device assembly to be parameterized. By setting up a model of the robotic device with configurable joints, adjusting the configurable joints may modify the overall shape of the robotic device, the length and shape of robot links or solid bodies, the position and/or orientation of actuators, the position and/or orientation of passive joints, the mass distribution of the robot, and the like. In many embodiments, the configuration may be changed and/or set before starting an animation and then remains fixed during the animation. The configuration variation can be done virtually (e.g., to enable design of both the robotic device and the animation) and physically (e.g., to actually change the physical outputs of the robotic device and particular joints).
The system may parameterize one or more aspects of a configurable joint, such as one or more positions or angles. Parameterizing a robotic device's configurable joints provides several benefits: it enables rapid design iteration; it supports a design optimization method that eliminates redundancy in constraints and is therefore agnostic to the robot kinematics; and it reduces the local approximate design-control problem to a discrete-time optimal control problem, enabling efficient, scalable, and robust solving of the kinematic design using dynamic programming. The system utilizes the components (including those that are parameterized) to optimize the robotic device's kinematics with respect to the target animation. For example, the system may discretize the target animation at one or more points in time and compare the position and orientation of the components at those times to the target animation, i.e., comparing the desired position and orientation of a given component with the actual position and orientation. In instances where components stray from the target animation, movements of those components may be restricted or de-weighted to assist in realigning the component to match the desired input. The system may generate and solve a constrained optimization problem to optimize the kinematic design.
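The following Python sketch is illustrative only and is not part of the disclosed embodiments; the helper names (sample points, forward_kinematics, toy_fk) and the simple planar two-link stand-in are assumptions introduced to show how a discretized target animation could be compared against simulated positions at each time step.

```python
# Illustrative sketch: discretize a target animation and measure per-step tracking error.
# forward_kinematics is a hypothetical callable standing in for the simulated device.
import numpy as np

def tracking_errors(target_positions, design_params, control_trajectory, forward_kinematics):
    """Compare simulated positions of a tracked point against targets at each time step.

    target_positions:   (n+1, 3) desired positions, one per discretized time step
    design_params:      configurable-joint parameters p, fixed for the whole animation
    control_trajectory: (n+1, m) actuator commands u_k, one row per time step
    forward_kinematics: callable (p, u_k) -> simulated 3D position of the tracked point
    """
    errors = []
    for k, u_k in enumerate(control_trajectory):
        x_sim = forward_kinematics(design_params, u_k)      # simulated point of interest
        errors.append(np.linalg.norm(x_sim - target_positions[k]))
    return np.asarray(errors)

# Toy planar 2-link arm standing in for the robotic device; link lengths act as design parameters.
def toy_fk(p, u):
    l1, l2 = p
    x = l1 * np.cos(u[0]) + l2 * np.cos(u[0] + u[1])
    y = l1 * np.sin(u[0]) + l2 * np.sin(u[0] + u[1])
    return np.array([x, y, 0.0])

targets = np.tile([1.2, 0.8, 0.0], (5, 1))                  # trivial constant target trajectory
controls = np.linspace([0.0, 0.0], [0.6, 0.9], 5)           # discretized control trajectory
print(tracking_errors(targets, (1.0, 1.0), controls, toy_fk))
```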
Turning to the figures,
The user device 104 may be a phone, tablet, laptop, desktop, or a virtualized environment on the server 108. The user device 104 may be suitable to simulate or model any aspect of a robotic device 112 herein, such as the type and number of any bodies, links, linkages, or actuators of a robotic device 112, as well as the kinematic performance of the resulting robotic device 112.
The database 110 may store models, simulations, or other data related to the kinematic design of any robotic device 112 disclosed. The database 110 may be in communication with the server 108 directly or via the network 106.
The server 108 is any computing device that can receive a user input and perform a calculation based on that input. In many embodiments, the server 108 may have more substantial computing, communications, and/or storage capacity than the user device 104. The server 108 may be a discrete computing device, a cloud computer instance, or any number of computing devices in communication with one another.
The network 106 may be implemented using one or more of various systems and protocols for communications between computing devices. In various embodiments, the network 106 or various portions of the network 106 may be implemented using the Internet, a local area network (LAN), a wide area network (WAN), and/or other networks. In addition to traditional data networking protocols, in some embodiments, data may be communicated according to protocols and/or standards including near field communication (NFC), Bluetooth, cellular connections, Wi-Fi, Zigbee, and the like. See
Turning to
The solid bodies of the linkage 200 are coupled by a plurality of passive joints 204. In this example, many of the passive joints 204 are examples of revolute joints 408 (discussed in detail with respect to
As shown for example in
In the example of the linkage 200, one of the “bars” of the 4-bar linkage comprises two solid bodies 208d and 208e coupled to one another via a configurable joint 206. As used herein, “configurable” refers to a joint whose state is changeable while a linkage or robotic device 112 is not in motion. For example, a position and/or orientation of a configurable joint may be set at a certain value prior to exercising the linkage or robotic device 112. As used herein, “moveable” or the like refers to motion of a component, link, solid body, actuator, or robotic device 112 during an animation sequence thereof.
The configurable joint 206 is repositionable between animations of the linkage 200. In other words, the relative positions of the solid body 208d and solid body 208e are fixed during motion of the linkage 200, but may be re-positioned between motions of the linkage 200 to impart different kinematic characteristics to the linkage 200.
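The following Python sketch is an illustrative data model only, not the disclosed implementation; the class and field names are assumptions used to highlight the distinction above between design parameters that are fixed while an animation plays and control parameters that vary every time step.

```python
# Illustrative data model: a configurable joint holds design parameters that are set
# between animations and locked during motion; an actuated joint holds a time-varying command.
from dataclasses import dataclass

@dataclass
class ConfigurableJoint:
    name: str
    design_params: tuple          # e.g., an offset or mounting angle, fixed during motion
    locked: bool = True           # True while an animation is running

    def reconfigure(self, new_params):
        if self.locked:
            raise RuntimeError("configurable joints may only change between animations")
        self.design_params = tuple(new_params)

@dataclass
class ActuatedJoint:
    name: str
    command: float = 0.0          # time-varying control parameter u_k

    def drive(self, u_k):
        self.command = u_k        # updated at every discretized time step

hip = ConfigurableJoint("hip_mount", design_params=(0.05, 0.0, 0.12), locked=False)
hip.reconfigure((0.07, 0.0, 0.12))   # allowed: the linkage is at rest
hip.locked = True                    # animation starts; the configuration is now fixed
```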
The example configurable joint 206 shown is an example of a Cartesian joint 402 (discussed in detail with respect to
In some embodiments, the orientations, qA and qB, of two bodies (e.g., links or rigid bodies) A and B, may be set to the identity in the initial configuration, exemplified with a revolute joint, actuator, or configurable joint. See, e.g.,
Joints, actuators, and configurable joints constrain the relative motion between pairs of components A and B, whose states may be represented with 7-vectors sA and sB that encode the components' positions, cA and cB, and their orientations, qA and qB, represented by quaternions. In some embodiments, the Euler-Rodrigues formula may be used to convert a unit quaternion q to a rotation matrix R(q), and R(u,a) represents a rotation by u about axis a. RA and RB abbreviate R(qA) and R(qB). For cylindrical and prismatic joints, the difference vector d=(RAxA+cA)−(RBxB+cB) may be defined. The Cartesian actuator or configurable joint has three parameters, u, that determine the translations along the three axes, A=[ax,ay,az]. The spherical actuator or configurable joint is parameterized with a quaternion u whose length may be constrained to 1 during optimizations. The cylindrical and universal actuators or configurable joints are parameterized with two parameters, u1 and u2, and the prismatic and revolute actuators or configurable joints with a single parameter u. The fixed joint does not have a corresponding actuator or configurable joint, because it already removes all degrees of freedom. The ground joint keeps a single component fixed in space at its initial position c0 (and its initial orientation, which is set to the identity). Vectors ex, ey, and ez are the three unit vectors.
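The following Python sketch is illustrative only; it shows the standard Euler-Rodrigues quaternion-to-rotation conversion and the difference vector d defined above, with variable names mirroring the text (cA, qA, xA, and so on), but the numerical values and function layout are assumptions.

```python
# Sketch of the quaternion-to-rotation conversion and the difference vector d used by
# cylindrical/prismatic joint constraints.
import numpy as np

def R(q):
    """Euler-Rodrigues rotation matrix for a unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def difference_vector(cA, qA, xA, cB, qB, xB):
    """d = (R(qA) xA + cA) - (R(qB) xB + cB): joint-frame offset between bodies A and B."""
    return (R(qA) @ xA + cA) - (R(qB) @ xB + cB)

qA = np.array([1.0, 0.0, 0.0, 0.0])                   # identity orientation in the initial pose
qB = np.array([np.cos(0.1), 0.0, 0.0, np.sin(0.1)])   # small rotation about ez
cA, cB = np.zeros(3), np.array([0.0, 0.1, 0.0])
xA, xB = np.array([0.2, 0.0, 0.0]), np.array([0.2, 0.0, 0.0])
print(difference_vector(cA, qA, xA, cB, qB, xB))
```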
Mechanical joints restrict the relative motion between pairs of bodies, A and B. To formulate constraints, one may define a frame whose global position, x, coincides with the position of the joint in the robotic device's initial pose, and whose axes ax, ay, and az align with its degrees of freedom. Because initial orientations may be set to the identity, the local frame axes in the body coordinates of A and B equal the global axes, and the local frame positions are xA=x−cA and xB=x−cB. In some embodiments, constraints between pairs of components may be as summarized in Table 1,
As discussed in more detail with respect to
The mechanical joints may also have constraints for a corresponding actuator. Passive constraints may be complemented with additional constraints, parameterizing the constraints with time-varying control parameters u (see, e.g., Table 1). As values are determined for u, the relative states of the two components that they connect are determined. Revolute or prismatic actuators and spherical actuators can be used.
To parameterize a robotic device's kinematics, a configurable joint may be used. A configurable joint may be similar to an actuator, but parameterized with design parameters p that remain the same throughout an animation, and may not typically vary with time like control parameters do (Table 1, Configurable Joints, u replaced with p).
Given an initial design of a robotic device in its rest configuration, a set of constraints may be applied, as shown for example in Table 1,
that represents, together with the state of the components, the kinematics of the robotic device. The above constraints include a unit length constraint, q·q−1=0, for each component of the robotic device. Given a set of design and control parameters, this set of constraints can be used by the system 100 to solve for the state of the robotic device, s(p,u), and therefore to simulate its kinematic motion.
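The following Python sketch is illustrative only; solving the assembled constraint set for the state is shown here with a generic damped Newton iteration and a toy constraint function, which are assumptions rather than the disclosed constraint formulation of Table 1.

```python
# Minimal sketch of simulating kinematic motion by solving C(s; p, u) = 0 for the state s.
import numpy as np

def solve_state(constraints, s0, p, u, tol=1e-10, max_iter=50):
    s = np.array(s0, dtype=float)
    for _ in range(max_iter):
        c = constraints(s, p, u)
        if np.linalg.norm(c) < tol:
            break
        # Finite-difference Jacobian dC/ds (analytic Jacobians would be used in practice).
        J = np.zeros((len(c), len(s)))
        eps = 1e-7
        for i in range(len(s)):
            ds = np.zeros(len(s)); ds[i] = eps
            J[:, i] = (constraints(s + ds, p, u) - c) / eps
        s = s - np.linalg.lstsq(J, c, rcond=None)[0]   # least-squares step tolerates redundancy
    return s

# Toy example: a point s constrained to lie on a circle of design radius p at actuated angle u.
def toy_constraints(s, p, u):
    return np.array([s[0] - p * np.cos(u), s[1] - p * np.sin(u)])

print(solve_state(toy_constraints, s0=[1.0, 0.0], p=0.5, u=np.pi / 3))
```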
The configurable joints 308 may follow the motion imparted by the actuated joints 310. As shown for example in
As described in more detail with respect to the method 600, the design configuration of a joint, such as the hip joint 302 may be optimized to perform a desired motion. See, e.g.,
With reference to
Turning to
As shown for example in
The actuator 512b is coupled to the lower leg portion 518 by one or more fasteners 516 received in apertures 514 of the lower leg portion 518 and corresponding apertures 514 of the actuator 512b. The actuator 512b may be configurable along one or more axes, such as the axis 510. For example, to configure the actuator 512b, the fasteners 516 may be removed, the actuator 512b repositioned with respect to the lower leg portion 518, and the fasteners 516 re-attached to couple the actuator 512b to the lower leg portion 518.
According to some examples, the method 600 includes receiving a target animation at operation 602. The target animation may be generated by an animation software, solid modeling software, sketch, or the like and is configured to provide a desired movement or sets of movements (e.g., choreographed) for a robotic device. See, e.g.,
According to some examples, the method 600 includes generating an initial design configuration 502 at operation 604. For example, a processing element 802 may analyze the target animation 702 and determine the initial placement, type, number, and configuration of one or more components of a linkage capable of performing the target animation 702. For example, the system 100, or a user using the system 100, may place actuators and/or joints in naïve locations, amounts, and/or orientations with respect to the target animation 702, to form an initial design configuration 502. See, e.g.,
According to some examples, the method 600 includes parameterizing the configurable joints of the robotic device 112 at operation 606. Parameterization typically includes selecting the properties of configurable joints included in the initial design configuration 502.
According to some examples, the method 600 includes generating the kinematic design of the robotic device 112 at operation 608. In one example, the system 100 discretizes the animation of the robotic device 112 based on the initial design configuration into multiple time steps and solves a constrained optimization problem of the robotic device's state at each time step. The motion of the device is compared to the target animation 702 at each time step, and parameters that result in close tracking of the target animation are emphasized while those that result in poor tracking are penalized. In some examples, to solve the constrained optimization problem, the system 100 solves for the state at a previous time step by minimizing an objective function of the robotic device 112's state for the current time step and control variables for the previous time step. E.g., the system 100 may solve backward in time, beginning at an end state for the target animation and proceeding backward to an initial animation state. See, e.g., Table 3.
According to some examples, the method 600 includes generating control parameters for the actuators of the robotic device 112 at operation 610. The control parameters are configured to command the actuators of the robotic device to perform the target animation. The control parameters may be time-variant (e.g., at each discretized time step of the target animation) position, velocity, and/or acceleration commands for any actuator in the robotic device 112 given by a processor for the robotic device 112 to execute. The control parameters may be generated for either a simulated or real robotic device 112 based on the kinematic design and/or the initial design configuration, including the one or more configurable joints. In some examples, the operations 608 and 610, solving for the kinematic design and the control parameters respectively, may be performed substantially simultaneously. In various examples, the operations 608 and 610 may be performed in one or more calculation loops, at discretized time intervals, or sequentially.
According to some examples, the method 600 includes deploying a kinematic design from the operation 608 to a physical robotic device at operation 612. For example the desired configurations of the configurable joints may be outputted by the system 100 (e.g., to a display, printout, solid model, or the like). The joints of the physical robot can be configured as determined in the method 600 such that the robotic device 112's performance of the motion closely tracks that of the target animation.
In some embodiments, a robotic device includes a set of rigid components whose time-varying states are represented with 7D vectors that encode positions c and orientations q. For orientations, quaternions may be used and their unit length enforced with constraints of the form q·q=1. Variable s refers to the full state of the robotic device. Without loss of generality, in some embodiments, all orientations are set to the identity in the character's initial or rest pose.
In some embodiments, it is desired to optimize a character's parameterized joints to achieve a target animation 702 as closely as possible. Because optimal control parameters change if adjustments are made to design parameters, the variables of the optimization may be solved for simultaneously.
A design parameter change may have an impact on the entire motion of a robotic device, and therefore the system 100 may measure the performance of a particular design for an entire animation to make an optimal choice.
In some embodiments of the operation 606, the system 100 discretizes the target motion into n time intervals Δt and k=0, . . . , n time steps, and introduces intermediate objectives, f, that measure the robotic device's performance with respect to the target animation 702 and ensure that actuator positions and velocities remain within limits. To directly penalize actuator velocities near limits, the system 100 may introduce time-varying velocity variables v and set them to {dot over (u)}. The system 100 may also introduce a terminal objective F that measures the difference between the robotic device's terminal state and its user-specified target.
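The following Python sketch is illustrative only; the quadratic soft-limit terms, the placeholder weights, and the hypothetical callables track_err and terminal_err are assumptions used to show how intermediate objectives f, a terminal objective F, and velocity variables tied to the control rate could be assembled over a discretized animation.

```python
# Illustrative assembly of per-step objectives over n time intervals with velocity variables v.
import numpy as np

def total_objective(u, v, p, dt, track_err, u_lim, v_lim, terminal_err):
    """u, v: (n+1, m) control and velocity variables; p: design parameters (unused in this toy)."""
    n = u.shape[0] - 1
    J = 0.0
    for k in range(n + 1):
        f_k = track_err(k, p, u[k])                                   # tracking term at step k
        f_k += np.sum(np.maximum(np.abs(u[k]) - u_lim, 0.0) ** 2)     # soft actuator position limit
        f_k += np.sum(np.maximum(np.abs(v[k]) - v_lim, 0.0) ** 2)     # soft actuator velocity limit
        J += f_k
    return J + terminal_err(p, u[n])                                  # terminal objective F

def velocity_constraints(u, v, dt):
    """Residuals tying the velocity variables to the control rate, v_k = (u_{k+1} - u_k) / dt."""
    return (u[1:] - u[:-1]) / dt - v[:-1]

n, m, dt = 4, 2, 0.1
u = np.linspace(0.0, 1.0, (n + 1) * m).reshape(n + 1, m)
v = np.vstack([(u[1:] - u[:-1]) / dt, np.zeros((1, m))])
err = lambda k, p, uk: float(np.sum(uk ** 2))
print(total_objective(u, v, None, dt, err, u_lim=2.0, v_lim=5.0,
                      terminal_err=lambda p, un: float(np.sum(un ** 2))))
print(velocity_constraints(u, v, dt))
```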
To minimize the number of optimization variables, the system 100 may work with a single set of design parameters p. However, this choice results in a Hessian of the Lagrangian that is no longer a banded matrix, because the shared parameter set couples variables across the time dimension and destroys sparsity. In addition, it would prevent the system 100 from applying a fast solution strategy based on dynamic programming, which relies on a recursive structure and local dependence between consecutive variables. The system 100 therefore may work with per-time-step design parameters pk, and enforce equality between them with constraints pk+1=pk.
To ensure that orientations in the design and control parameterization are singularity-free, the system 100 may use quaternions as control and design parameters for spherical and ground actuators and configurable joints (see, e.g., Table 1). To enforce their unit length, the system 100 may add unit-length constraints on p0 and uk to the set of constraints. Because the system 100 enforces equality between design parameters, the system 100 may only enforce their unit lengths at k=0.
In one embodiment, a discrete-time optimal design problem is:
According to some examples, the method includes optimizing kinematics of a robotic device 112 at operation 608 and/or generating control parameters at operation 610. A processing element 802 of the system 100 may output an optimized design configuration 504 for the robotic device 112. As shown for example in
In some embodiments, the optimal design problem may be difficult to solve: it has a design and control parameter set per time step, and the constraints, as well as the intermediate and terminal objectives, are nonlinear. As such, the system 100 may employ a variety of solution strategies in the operation 608.
In some embodiments, a first solution strategy may be sensitivity analysis where the system 100 solves for optimal states for a given set of design and control parameters in the inner loop, and then for optimal design and control parameters in the outer loop, with a first-order optimality constraint on the inner-loop optimization.
In some embodiments, an alternative solution strategy is sequential quadratic programming (SQP). To this end, the system 100 may introduce Lagrange multipliers for the five constraint sets (including the design constraints and the velocity constraints) and use λ to refer to the combined set of multipliers. The Lagrangian may be represented by:
which is partially separable because the design and velocity constraints, which depend on two consecutive time steps, are linear and can therefore be split into two parts.
To perform line search, the system 100 may compute search directions
by either applying Newton to the Karush-Kuhn-Tucker conditions, or by solving the equivalent quadratic program (QP)
where the system 100 omits arguments for the last three sets of constraints, adding the time step as a superscript instead. The remaining terms are the constraint Jacobians with respect to the design, control, and state variables.
To iteratively find optimal values for these variables, the system 100 may perform a line search with the L1 merit function to identify a good step length α, and update the current best estimates
The system 100 may also update Lagrange multipliers. To do so, the system 100 may compute an increment Δλ, multiply it with the step length, and use it to update the current best estimate λ as explained towards the end of the section.
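The following Python sketch is illustrative only; the merit weight mu, the halving factor, and the toy objective and constraint are assumptions used to show a backtracking line search on an L1 merit function of the kind referenced above for selecting the SQP step length α.

```python
# Backtracking line search on an L1 merit function (objective + mu * sum of |constraints|).
import numpy as np

def l1_merit(objective, constraints, x, mu):
    return objective(x) + mu * np.sum(np.abs(constraints(x)))

def backtracking_line_search(objective, constraints, x, dx, mu=10.0, beta=0.5, max_halvings=30):
    """Shrink alpha until the L1 merit function decreases along the search direction dx."""
    phi0 = l1_merit(objective, constraints, x, mu)
    alpha = 1.0
    for _ in range(max_halvings):
        if l1_merit(objective, constraints, x + alpha * dx, mu) < phi0:
            return alpha
        alpha *= beta
    return alpha   # fall back to the smallest trial step

# Toy usage: minimize x^2 subject to x - 1 = 0, starting from x = 3 with direction dx = -2.
obj = lambda x: float(x[0] ** 2)
con = lambda x: np.array([x[0] - 1.0])
print(backtracking_line_search(obj, con, np.array([3.0]), np.array([-2.0])))
```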
In some embodiments, the system 100 may compute the search directions for variables and multipliers to solve the QP by applying a direct sparse linear solver to the equivalent system of linear equations. For large problems, this strategy is limited by its computational cost and the memory that is necessary to assemble the system matrix.
Iterative solvers can circumvent the memory bottleneck by using access to a matrix-vector product operator, and can often be parallelized. However, a careful tuning of tolerances and solver parameters is generally needed. Moreover, QP solvers may need the problem to satisfy certain properties, for example positive definiteness of the unconstrained Hessian, which may not hold at a distance from the optimum.
In some embodiments, a solution strategy may exploit the recursive structure of the problem: e.g., the Hessian of the Lagrangian is a banded matrix, more specifically a tridiagonal block matrix because constraints depend on two consecutive time steps only; and the blocks themselves are sparse.
An alternative strategy enabled by this recursive structure is the use of dynamic programming. This strategy is less restrictive when it comes to properties, and provides a direct solution strategy instead of an iterative one, without requiring explicit assembly of the system matrix. In some embodiments, this strategy outperforms a sparse solution strategy on the full system in terms of robustness and speed.
To apply dynamic programming, the system 100 may first bring the above QP into standard form for a linear discrete-time optimal control problem as shown below.
In this standard form, the “state” and “control” variables are {tilde over (s)}k and ũk. The QP can then be solved with dynamic programming by the system 100.
Solving a QP for a linear discrete-time optimal control problem with dynamic programming: the initial state (or initial conditions) is assumed to be known. See, e.g., Table 2.
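The following Python sketch is a generic stand-in, not the disclosed solver: it illustrates the dynamic-programming idea with a standard finite-horizon LQR backward pass over a linear discrete-time optimal control problem, followed by a forward rollout; the toy dynamics and cost matrices are assumptions.

```python
# Backward (cost-to-go) and forward (rollout) passes for a finite-horizon LQR problem.
import numpy as np

def lqr_backward_forward(A, B, Q, R, P_terminal, x0, n_steps):
    P = P_terminal.copy()
    gains = []
    for _ in range(n_steps):                      # backward pass: cost-to-go and feedback gains
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    gains.reverse()

    x, xs, us = x0.copy(), [x0.copy()], []        # forward pass: roll out the optimal controls
    for K in gains:
        u = -K @ x
        x = A @ x + B @ u
        us.append(u); xs.append(x.copy())
    return np.array(xs), np.array(us)

A = np.array([[1.0, 0.1], [0.0, 1.0]])            # toy double-integrator dynamics
B = np.array([[0.0], [0.1]])
Q, R = np.eye(2), np.array([[0.1]])
xs, us = lqr_backward_forward(A, B, Q, R, P_terminal=10 * np.eye(2),
                              x0=np.array([1.0, 0.0]), n_steps=30)
print(xs[-1])                                     # state driven toward the origin
```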
In the step-by-step derivation that follows, the system 100 may reduce the QP to this standard form, defining the matrices in the above standard equations.
The linearized design and control constraints may depend on two consecutive time steps and can be brought into standard form. In some embodiments, the design constraints depend on Δpk and Δpk+1. Analogously, in some embodiments, the velocity constraints depend on the control parameters at k and k+1, but only on velocity variables at k. In some embodiments, the design and control variables may be state variables in the standard form, and the velocity variables take on the role of control variables.
In some embodiments, the system 100 may add a leading 1 in the definition of states, allowing the system 100 to combine the gradient and Hessian of the Lagrangian at k into a single quadratic form as desired.
In some embodiments, the state variables Δsk in the above definition of {tilde over (s)}k and ũk may be omitted. They may appear in the linearized kinematic constraints that determine their values for a given Δpk and Δuk.
where redundant constraints were removed from C and the Jacobian Csk is a square matrix. By substituting Eq. 9 for Δsk in the individual Lagrangian terms at time step k, the system 100 can remove these variables and the kinematic constraints.
In some embodiments, the unit length constraints for the design parameters at k=0 may be eliminated. By forming a singular value decomposition of the Jacobian with respect to p0, the system 100 can represent the solutions that satisfy the constraint with a reduced set of variables Δ
where the right singular vectors are partitioned into those corresponding to non-zero singular values and those corresponding to zero singular values. For the control parameters at k=0, the system 100 can proceed analogously. The reduced variables may be incorporated in an algorithm herein by adding the equation {tilde over (s)}0:=Ã−1{tilde over (s)}−1, with
for k=−1 to the set of constraints. Ã−1 represents a mapping from reduced to full space. Note that {tilde over (s)}−1 represents design and control variables at k=0 in reduced space, while {tilde over (s)}0 represents them in full space.
The remaining unit quaternion constraints for k=1, . . . , n are less straightforward to remove. To do so, the velocity and unit quaternion constraints for control parameters may be considered together, rearranging terms to align time steps
A projection of control parameters onto a reduced set, Δũk+1, as above for k=0 may not lead to a solution, because the matrix for a singular value decomposition of the Jacobian uk+1 would appear in front of a reduced set of control parameters Δũk+1, and cannot be brought to the other side because it is not a square matrix and hence not invertible.
An alternative is to work with reduced velocity variables. To this end, the system 100 substitutes the velocity equations for uk+1 in the second equation
then represent the solutions with a reduced set Δk
The subspace velocity equations then become
Note that the system 100 may use reduced velocity variables Δ
In some embodiments, an optimization algorithm and matrices {tilde over (Q)}k, {tilde over (S)}k, {tilde over (R)}k, {tilde over (P)}n, Ãk, and {tilde over (B)}k are summarized below. The system 100 may solve for the state {tilde over (s)}−1 by minimizing the objective {tilde over (s)}T{tilde over (P)}−1{tilde over (s)}. Taking into account the leading 1 in the state representation, the minimization reduces to a linear system of equations
The output of equation 11 may be one or more search directions, dk, for optimization variables, which may be identical to the ones obtained by solving the equivalent QP. To compute a corresponding search direction, Δλ, for the Lagrange multipliers, the system 100 expands the first equation of the Karush-Kuhn-Tucker system that is equivalent to the QP, solving for the individual multiplier increments by utilizing the recursive structure as summarized herein. The system 100 may then perform the update of the current best multiplier estimates
for the first and last time steps, with the step length α.
ΔλDn=hpn−(Cpn)TΔλn
ΔλDk=hpk+1−(Cpk+1)TΔλk+1+ΔλDk+1
ΔλVk=(Duk(Duk)T)−1Duk(huk−(Cuk)TΔλk+ΔλVk+1)
where ΔλD and ΔλV denote the increments of the multipliers for the design and velocity constraints, hp and hu are gradients of the Lagrangian with respect to the design and control parameters, and Cp, Cu, and Du are the corresponding constraint Jacobians.
Robotic devices with kinematic loops often have redundancy in constraints. For example, linkages are used to place actuators where there is space, while they provide the source of motion where it is needed. Linkages introduce redundancy. A simple case to see this is a four bar linkage (e.g., as shown in
In some embodiments, the constraint elimination process takes as input a reference state s of the robotic device (e.g., its initial design configuration or the first frame of an animation), and automatically selects a non-redundant subset of constraints so that this subset contains as many constraints as there are unknown states in s. Because the behavior in a neighborhood of s may be considered to choose the “right” subset, the system 100 may rely on the constraint Jacobian Cs. However, before the system 100 computes the Jacobian, the system 100 may remove all actuators, replacing them with corresponding passive joints. This replacement may be used because actuators, for a particular set of control parameters u, hold the robot in the state s. The system 100 would therefore not see the “mobility” of the robot in a neighborhood of s if the system 100 analyzed the Jacobian of the actuated system directly. If the system 100 analyzes the Jacobian corresponding to the passive system, however, the mobility of both mechanical joints and actuators is captured. Before analyzing the Jacobian, Cs, of the passive system, the system 100 may normalize each row. Each of its rows i can be understood as a direction in which the kinematic structure is immobile, while the mobility of the passive system is spanned by directions that are not part of the space that the rows span. The goal is therefore to keep the constraints so that the corresponding rows span the space with a basis that is as orthogonal as possible, preventing the introduction of any unwanted mobility. This motivates the following selection process: first form the singular value decomposition of Cs, and extract the left singular vectors Z that correspond to zero singular values, such that ZTCs=0. Each row k of these equations provides a linear combination that evaluates to zero
where (Cs)i refers to row i of the Jacobian. For any j such that zjk≠0, the equation in Eq. 16 may be used to eliminate a constraint that is already in the span of the other constraints
To reduce or prevent unwanted mobility, j may be set so that the constraint that is the “least” orthogonal to the others is removed, i.e., the constraint that results in the lowest right-hand-side coefficients in Eq. 16
The rows of the Jacobian may be normalized to make the coefficients comparable. After adding j to the set of eliminated constraints, the corresponding equation k may be removed, and subtracted from the remaining equations
setting the coefficients zij that correspond to the eliminated constraint j to zero. The system 100 may iterate the process until all equations from Eq. 16 have been used. The Jacobian of the subset of selected constraints has full row rank, and if the additional constraints are added back, a full-rank Jacobian for the actuated system may result. In some embodiments, an exception is an over-actuated robot with more actuators than needed for its degrees of freedom. For over-actuated robots, after removing redundancy in the passive Jacobian, the process may be repeated, but using the Jacobian of the actuated system with passive redundancy removed and considering only actuation constraints for elimination.
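The following Python sketch is illustrative only and simplifies the selection rule relative to the text: it row-normalizes a constraint Jacobian, extracts the left singular vectors with (near-)zero singular values, and greedily drops one constraint per null direction by taking the largest coefficient rather than the orthogonality-based criterion described above.

```python
# Simplified redundancy elimination: keep a full-row-rank subset of constraint rows.
import numpy as np

def select_nonredundant_rows(J, tol=1e-9):
    J = J / np.maximum(np.linalg.norm(J, axis=1, keepdims=True), tol)   # normalize rows
    U, sv, _ = np.linalg.svd(J)
    Z = U[:, np.sum(sv > tol):]         # left singular vectors for (near-)zero singular values
    eliminated = set()
    for k in range(Z.shape[1]):
        z = Z[:, k].copy()
        z[list(eliminated)] = 0.0
        j = int(np.argmax(np.abs(z)))   # drop the row contributing most to this null direction
        eliminated.add(j)
    return [i for i in range(J.shape[0]) if i not in eliminated]

# Toy Jacobian with one redundant row (row 2 = row 0 + row 1).
J = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
print(select_nonredundant_rows(J))      # keeps a full-row-rank subset, e.g. [0, 1, 3]
```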
When editing the design of an existing robotic device, the system 100 may first simulate its kinematic motion and then record trajectories of points of interest. By representing them with spatial cubic Hermite splines, or applying transformations to them, the system 100 can then edit the target motion, and therefore the design of the robotic device. If a user designs a robotic device from scratch, a rigged character can serve as a conceptual input, or motion capture could serve as a source of motion input.
Independent of the use case, the system 100 may track the difference between the motion of points of interest on the robotic device and user-provided target motion. To this end, the system 100 may use tracking objectives.
In a local coordinate frame of a rigid body that may be guided based on a target animation, the system 100 may define the position xrb and/or orientation Arb. The global motion over time of the position, x(sk), and orientation, A(sk), is then used to define tracking objectives based on the target positions, {circumflex over (x)}k, and orientations, Âk.
To measure a robotic device's performance with respect to user-specified targets, the system 100 supports position and orientation tracking. A target trajectory includes either a target point, {circumflex over (x)}k, or a target orientation, {circumflex over (R)}k, for every time step k, or a combination of the two. The system 100 chooses a position, xrb, and/or orientation, Arb, in a local coordinate frame of a rigid body whose motion the target trajectory guides. During optimization, the system 100 transforms the local position and orientation to global coordinates using the body's position ck and orientation qk
and then measures differences with the position and orientation objectives
where a weighted norm W=diag(wx, wy, wz) is used for positions and orientation objectives are weighted with wori. In some embodiments, these weights can be set to non-constant values to emphasize preservation of motion either spatially or temporally, or both. For points of interest, the system 100 may add position and/or orientation objectives to the intermediate and terminal objectives, f and F.
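The following Python sketch is illustrative only; the squared weighted norm for positions and the Frobenius-style penalty for orientations are assumptions about the exact objective form, and R(q) is the Euler-Rodrigues conversion sketched earlier.

```python
# Tracking objectives: transform a local point/frame to global coordinates with (c_k, q_k)
# and penalize weighted differences from the targets.
import numpy as np

def R(q):
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def position_objective(c_k, q_k, x_rb, x_hat, W=np.diag([1.0, 1.0, 1.0])):
    diff = (R(q_k) @ x_rb + c_k) - x_hat            # global position minus target position
    return float(diff @ W @ diff)

def orientation_objective(q_k, A_rb, R_hat, w_ori=1.0):
    diff = R(q_k) @ A_rb - R_hat                    # global frame minus target frame
    return w_ori * float(np.sum(diff * diff))       # Frobenius-norm style penalty

c_k, q_k = np.array([0.0, 0.0, 0.5]), np.array([1.0, 0.0, 0.0, 0.0])
print(position_objective(c_k, q_k, x_rb=np.array([0.1, 0.0, 0.0]),
                         x_hat=np.array([0.1, 0.0, 0.6])))
print(orientation_objective(q_k, A_rb=np.eye(3), R_hat=np.eye(3)))
```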
The system 100 supports position and velocity limits for actuators, and position limits for configurable joints. The system 100 enforces them with a smooth barrier function
that becomes active if a value x is less than an ε from either a user-specified lower or upper limit, xmin or xmax, resulting in the limits objective
For each component x of the control parameters, uk and vk, and the design parameters, pk, the system 100 adds a limits objective to the intermediate objective f. For the terminal objective F, the system 100 adds position limits.
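The following Python sketch is illustrative only; the quadratic one-sided barrier shape and the numeric limits are assumptions, since the exact barrier used is not reproduced here, but the behavior matches the description above: the term is zero away from a limit and grows smoothly once a value comes within ε of it.

```python
# Smooth one-sided barrier for position/velocity limits, summed into a limits objective.
import numpy as np

def barrier(x, x_min, x_max, eps):
    lo = np.maximum((x_min + eps) - x, 0.0)   # activation distance to the lower limit
    hi = np.maximum(x - (x_max - eps), 0.0)   # activation distance to the upper limit
    return (lo ** 2 + hi ** 2) / (2.0 * eps)

def limits_objective(values, x_min, x_max, eps=0.05):
    """Sum of barrier terms over all control/design components at one time step."""
    return float(np.sum(barrier(np.asarray(values, dtype=float), x_min, x_max, eps)))

print(limits_objective([0.0, 0.98, -0.99], x_min=-1.0, x_max=1.0))   # near-limit terms dominate
```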
To avoid ill-posed problems, the method may include regularization terms
keeping control parameters close to an initial animation, uk0, on the un-optimized design, and design parameters close to their initial values p0. For some examples, the null-space in design parameters can be large, requiring a higher weight for the design regularizer. This can have an effect on the quality of the result. To mitigate its impact, in some embodiments, the method may include updating p0 with design parameters from the last iterate at a decreasing frequency, i.e., at iterations 2, 4, 8, 16, etc., which is effective without a noticeable effect on convergence. The regularization terms are added to both f and F.
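The following Python sketch is illustrative only; the regularization weights and quadratic form are placeholders, and it simply demonstrates the decreasing-frequency reference update mentioned above, where p0 is refreshed only at iterations 2, 4, 8, 16, and so on.

```python
# Quadratic regularizers toward reference trajectories plus a power-of-two reference update.
import numpy as np

def regularization(u, u_ref, p, p_ref, w_u=1e-3, w_p=1e-2):
    return w_u * float(np.sum((u - u_ref) ** 2)) + w_p * float(np.sum((p - p_ref) ** 2))

def maybe_update_reference(iteration, p_current, p_ref):
    """Refresh p_ref only when the iteration count is a power of two (2, 4, 8, 16, ...)."""
    if iteration >= 2 and (iteration & (iteration - 1)) == 0:
        return p_current.copy()
    return p_ref

p_ref = np.zeros(3)
for it in range(1, 9):
    p_current = np.full(3, float(it))            # stand-in for the latest design iterate
    p_ref = maybe_update_reference(it, p_current, p_ref)
print(p_ref)                                     # last refresh happened at iteration 8
```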
According to some examples, the method 600 includes deploying to a physical robotic device at operation 612. As shown for example in
For example, as shown in
For the initial design configuration 502, the actuators may be naively oriented along one of the world axes (e.g., in the environment of the robotic device). Using the method 600, the orientations of the joints were parameterized, with the exception of the knees. The initial design configuration 502 poorly tracks the target animation 702. However, in the optimized design configuration 504, the robotic device is able to track the target animation 702 well, despite the reduced number of actuators. In some embodiments, a velocity limit is introduced to the design (e.g., to better align with real-world velocities). Unlike previous approaches, the systems and methods of the present disclosure account for a full motion sequence and therefore track a target animation 702 better than previous approaches.
The processing element 802 may be any type of electronic device capable of processing, receiving, and/or transmitting instructions. For example, the processing element 802 may be a central processing unit, microprocessor, processor, or microcontroller. Additionally, it should be noted that some components of the computing system 800 may be controlled by a first processing element 802 and other components may be controlled by a second processing element 802, where the first and second processing elements may or may not be in communication with each other.
The I/O interface 804 allows a user to enter data into the computing system 800, as well as provides an input/output for the computing system 800 to communicate with other devices or services. The I/O interface 804 can include one or more input buttons, touch pads, touch screens, and so on.
The external devices 812 are one or more devices that can be used to provide various inputs to the computing system 800, e.g., a mouse, microphone, keyboard, trackpad, or sensing element (e.g., a thermistor, humidity sensor, light detector, etc.). The external devices 812 may be local or remote and may vary as desired. In some examples, the external devices 812 may also include one or more additional sensors.
The memory components 808 are used by the computing system 800 to store instructions for the processing element 802 such as the initial design configuration 312, the initial design configuration 502, the optimized design configuration 504, component models, geometry, parameters, instructions that perform the operations of the method 600, and/or a user interface, user preferences, alerts, etc. The memory components 808 may be, for example, magneto-optical storage, read-only memory, random access memory, erasable programmable memory, flash memory, or a combination of one or more types of memory components.
The network interface 810 provides communication to and from the computing system 800 to other devices. The network interface 810 includes one or more communication protocols, such as, but not limited to Wi-Fi, Ethernet, Bluetooth, etc. The network interface 810 may also include one or more hardwired components, such as a Universal Serial Bus (USB) cable, or the like. The configuration of the network interface 810 depends on the types of communication desired and may be modified to communicate via Wi-Fi, Bluetooth, etc.
The display 806 provides a visual output for the computing system 800 and may be varied as needed based on the device. The display 806 may be configured to provide visual feedback to the user 102 and may include a liquid crystal display screen, light emitting diode screen, plasma screen, or the like. In some examples, the display 806 may be configured to act as an input element for the user 102 through touch feedback or the like.
The description of certain embodiments included herein is merely exemplary in nature and is in no way intended to limit the scope of the disclosure or its applications or uses. In the included detailed description of embodiments of the present systems and methods, reference is made to the accompanying drawings which form a part hereof, and which are shown by way of illustration specific to embodiments in which the described systems and methods may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice presently disclosed systems and methods, and it is to be understood that other embodiments may be utilized, and that structural and logical changes may be made without departing from the spirit and scope of the disclosure. Moreover, for the purpose of clarity, detailed descriptions of certain features will not be discussed when they would be apparent to those with skill in the art so as not to obscure the description of embodiments of the disclosure. The included detailed description is therefore not to be taken in a limiting sense, and the scope of the disclosure is defined only by the appended claims.
From the foregoing it will be appreciated that, although specific embodiments of the invention have been described herein for purposes of illustration, various modifications may be made without deviating from the spirit and scope of the invention.
The particulars shown herein are by way of example and for purposes of illustrative discussion of the preferred embodiments of the present disclosure and are presented in the cause of providing what is believed to be the most useful and readily understood description of the principles and conceptual aspects of various embodiments of the invention. In this regard, no attempt is made to show structural details of the invention in more detail than is necessary for the fundamental understanding of the invention, the description taken with the drawings and/or examples making apparent to those skilled in the art how the several forms of the invention may be embodied in practice.
As used herein and unless otherwise indicated, the terms “a” and “an” are taken to mean “one”, “at least one” or “one or more”. Unless otherwise required by context, singular terms used herein shall include pluralities and plural terms shall include the singular.
Unless the context clearly requires otherwise, throughout the description and the claims, the words ‘comprise’, ‘comprising’, and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in the sense of “including, but not limited to”. Words using the singular or plural number also include the plural and singular number, respectively. Additionally, the words “herein,” “above,” and “below” and words of similar import, when used in this application, shall refer to this application as a whole and not to any particular portions of the application.
All relative, directional, and ordinal references (including top, bottom, side, front, rear, first, second, third, and so forth) are given by way of example to aid the reader's understanding of the examples described herein. They should not be read to be requirements or limitations, particularly as to the position, orientation, or use unless specifically set forth in the claims. Connection references (e.g., attached, coupled, connected, joined, and the like) are to be construed broadly and may include intermediate members between a connection of elements and relative movement between elements. As such, connection references do not necessarily infer that two elements are directly connected and in fixed relation to each other, unless specifically set forth in the claims.
Of course, it is to be appreciated that any one of the examples, embodiments or processes described herein may be combined with one or more other examples, embodiments and/or processes or be separated and/or performed amongst separate devices or device portions in accordance with the present systems, devices and methods.
Finally, the above discussion is intended to be merely illustrative of the present system and should not be construed as limiting the appended claims to any particular embodiment or group of embodiments. Thus, while the present system has been described in particular detail with reference to exemplary embodiments, it should also be appreciated that numerous modifications and alternative embodiments may be devised by those having ordinary skill in the art without departing from the broader and intended spirit and scope of the present system as set forth in the claims that follow. Accordingly, the specification and drawings are to be regarded in an illustrative manner and are not intended to limit the scope of the appended claims.
This application claims the benefit of priority under 35 U.S.C. § 119 (e) and 37 C.F.R. § 1.78 to provisional application No. 63/503,899 filed on May 23, 2023, titled “Optimal Design of Robotic Character Kinematics” which is hereby incorporated by reference herein in its entirety.