The present invention relates to a system and method for controlling a humanoid robot having a plurality of joints and multiple degrees of freedom.
Robots are automated devices able to manipulate objects using a series of links, which in turn are interconnected via robotic joints. Each joint in a typical robot represents at least one independent control variable, i.e., a degree of freedom (DOF). End-effectors are the particular links used to perform the task at hand, e.g., grasping a work tool or an object. Precise motion control of the robot may therefore be organized by the level of task specification: object-level control, i.e., the ability to control the behavior of an object held in a single or cooperative grasp of the robot; end-effector control; and joint-level control. Collectively, the various control levels achieve the required robotic mobility, dexterity, and work task-related functionality.
Humanoid robots are a particular type of robot having an approximately human structure or appearance, whether a full body, a torso, and/or an appendage, with the structural complexity of the humanoid robot being largely dependent upon the nature of the work task being performed. The use of humanoid robots may be preferred where direct interaction is required with devices or systems that are specifically made for human use. The use of humanoid robots may also be preferred where interaction is required with humans, as the motion can be programmed to approximate human motion such that the task cues are understood by the cooperative human partner. Due to the wide spectrum of work tasks that may be expected of a humanoid robot, different control modes may be simultaneously required. For example, precise control must be applied within the different control spaces noted above, as well as control over the applied torque or force of a given motor-driven joint, over joint motion, and over the various robotic grasp types.
Accordingly, a robotic control system and method are provided herein for controlling a humanoid robot via an impedance-based control framework as set forth in detail below. The framework allows for a functional-based graphical user interface (GUI) to simplify implementation of a myriad of operating modes of the robot. Complex control over a robot having multiple DOF, e.g., over 42 DOF in one particular embodiment, may be provided via a single GUI. The GUI may be used to drive an algorithm of the controller to thereby provide diverse control over the many independently-moveable and interdependently-moveable robotic joints, with a layer of control logic that activates the different modes of operation.
Internal forces on a grasped object are automatically parameterized in object-level control, allowing for multiple robotic grasp types in real-time. Using the framework, a user provides functional-based inputs through the GUI, and the controller, via an intermediate layer of logic, deciphers the input to the GUI by applying the correct control objectives and mode of operation. For example, when the user selects a desired force to be imparted to the object, the controller automatically applies a hybrid scheme of position/force control in decoupled spaces.
Within the scope of the invention, the framework utilizes an object impedance-based control law with hierarchical multi-tasking to provide object, end-effector, and/or joint-level control of the robot. Through the user's ability to select, in real-time, both the activated nodes and the robotic grasp type, i.e., rigid contact, point contact, etc., a predetermined or calibrated impedance relationship governs the object, end-effector, and joint spaces. Joint-space impedance is automatically shifted to the null-space when object or end-effector nodes are activated, with the joint space otherwise governing the entire control space as set forth herein.
In particular, a robotic system includes a humanoid robot having a plurality of joints adapted for imparting force control, and a controller having an intuitive GUI adapted for receiving input signals from a user, from pre-programmed automation, or from a network connection or other external control mechanism. The controller is electrically connected to the GUI, which provides the user with intuitive or graphical programming access to the controller. The controller is adapted to control the plurality of joints using an impedance-based control framework, which in turn provides object-level, end-effector-level, and/or joint space-level control of the humanoid robot in response to the input signal into the GUI.
A method for controlling a robotic system having the humanoid robot, controller, and GUI noted above includes receiving the input signal from the user using the GUI, and then processing the input signal using a host machine to control the plurality of joints via an impedance-based control framework. The framework provides object level, end-effector level, and/or joint space-level control of the humanoid robot.
The above features and advantages and other features and advantages of the present invention are readily apparent from the following detailed description of the best modes for carrying out the invention when taken in connection with the accompanying drawings.
With reference to the drawings, wherein like reference numbers refer to the same or similar components throughout the several views, and beginning with
The robot 10 is adapted to perform one or more automated tasks with multiple degrees of freedom (DOF), and to perform other interactive tasks or control other integrated system components, e.g., clamping, lighting, relays, etc. According to one embodiment, the robot 10 is configured with a plurality of independently and interdependently-moveable robotic joints, such as but not limited to a shoulder joint, the position of which is generally indicated by arrow A, an elbow joint (arrow B), a wrist joint (arrow C), a neck joint (arrow D), and a waist joint (arrow E), as well as the various finger joints (arrow F) positioned between the phalanges of each robotic finger 19.
Each robotic joint may have one or more DOF. For example, certain compliant joints such as the shoulder joint (arrow A) and the elbow joint (arrow B) may have at least two DOF in the form of pitch and roll. Likewise, the neck joint (arrow D) may have at least three DOF, while the waist and wrist (arrows E and C, respectively) may have one or more DOF. Depending on task complexity, the robot 10 may move with over 42 DOF. Each robotic joint contains and is internally driven by one or more actuators, e.g., joint motors, linear actuators, rotary actuators, and the like.
The robot 10 may include components such as a head 12, torso 14, waist 15, arms 16, hands 18, fingers 19, and thumbs 21, with the various joints noted above being disposed within or between these components. The robot 10 may also include a task-suitable fixture or base (not shown) such as legs, treads, or another moveable or fixed base depending on the particular application or intended use of the robot. A power supply 13 may be integrally mounted to the robot 10, e.g., a rechargeable battery pack carried or worn on the back of the torso 14 or another suitable energy supply, or may be attached remotely through a tethering cable, to provide sufficient electrical energy to the various joints for movement of the same.
The controller 22 provides precise motion control of the robot 10, including control over the fine and gross movements needed for manipulating an object 20 that may be grasped by the fingers 19 and thumb 21 of one or more hands 18. The controller 22 is able to independently control each robotic joint and other integrated system components in isolation from the other joints and system components, as well as to interdependently control a number of the joints to fully coordinate the actions of the multiple joints in performing a relatively complex work task.
Still referring to
The controller 22 may include a server or host machine 17 configured as a distributed or a central control module, and having such control modules and capabilities as might be necessary to execute all required control functionality of the robot 10 in the desired manner. Additionally, the controller 22 may be configured as a general purpose digital computer generally comprising a microprocessor or central processing unit, read only memory (ROM), random access memory (RAM), electrically-erasable programmable read only memory (EEPROM), a high speed clock, analog-to-digital (A/D) and digital-to-analog (D/A) circuitry, and input/output circuitry and devices (I/O), as well as appropriate signal conditioning and buffer circuitry. Any algorithms resident in the controller 22 or accessible thereby, including an algorithm 100 for executing the framework described in detail below, may be stored in ROM and executed to provide the respective functionality.
The controller 22 is electrically connected to a graphical user interface (GUI) 24 providing user access to the controller. The GUI 24 provides user control over a wide spectrum of tasks, i.e., the ability to control motion in the object, end-effector, and/or joint spaces or levels of the robot 10. The GUI 24 is simplified and intuitive, allowing a user, through simple inputs, to control the arms and the fingers in different intuitive modes by entering an input signal (arrow iC), e.g., a desired force imparted to the object 20. The GUI 24 is also capable of saving mode changes so that they can be executed in a sequence at a later time. The GUI 24 may also accept external control triggers to process a mode change, e.g., via an externally-attached teach pendant, or via a programmable logic controller (PLC) controlling the flow of automation through a network connection. Various embodiments of the GUI 24 are possible within the scope of the invention, with two possible embodiments described below with reference to
In order to perform a range of manipulation tasks using the robot 10, a wide range of functional control over the robot is required. This functionality includes hybrid force/position control, impedance control, cooperative object control with diverse grasp types, end-effector Cartesian-space control, i.e., control in the XYZ coordinate space, and joint-space manipulator control, with a hierarchical prioritization of the multiple control tasks. Accordingly, the present invention applies an operational-space impedance law, with decoupled force and position control, to the end-effectors of robot 10, and to the object 20 when the object is gripped by, contacted by, or otherwise acted upon by one or more end-effectors of the robot, such as the hand 18. The invention provides for a parameterized space of internal forces to control such a grip. It also provides a secondary joint-space impedance relation that operates in the null-space of the object 20, as set forth below.
Still referring to
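One consistent form of the object-level impedance law, stated here as an assumed reconstruction from the symbol definitions that follow rather than as the verbatim relation, is:

$$M_o\,\Delta\ddot{y} + B_o\,\Delta\dot{y} + K_o\,N_{F^T}\,\Delta y = F_e - F_e^*$$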
where $M_o$, $B_o$, and $K_o$ are the commanded inertia, damping, and stiffness matrices, respectively. The variable $p$ is the position of the object reference point, $\omega$ is the angular velocity of the object, and $F_e$ and $F_e^*$ represent the actual and desired external wrench on the object 20. $\Delta y$ is the position error $(y - y^*)$. $N_{F^T}$ is the null-space projection matrix for the vector $F_e^{*T}$, and may be described as follows:
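$$N_{F^T} = I - \left(F_e^{*T}\right)^{+} F_e^{*T}$$

This is the standard null-space projector for a row vector, offered as an assumed reconstruction consistent with the description in the next sentence; for a single commanded force direction it reduces to $I - F_e^{*} F_e^{*T} / (F_e^{*T} F_e^{*})$.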
In the above equation, the superscript $(+)$ indicates the pseudo-inverse of the respective matrix, and $I$ is the identity matrix. $N_{F^T}$ keeps the position and force control automatically decoupled by projecting the stiffness term into the space orthogonal to the commanded force, under the assumption that the force control direction consists of one DOF. To decouple the higher-order dynamics as well, $M_o$ and $B_o$ need to be selected to be diagonal in the reference frame of the force. The formulation extends to include the ability to control forces in more than one direction.
This closed-loop relation applies a "hybrid" scheme of force and motion control in orthogonal directions. The impedance law applies a second-order position tracker to the motion control directions while applying a second-order force tracker to the force control directions, and should be stable given positive-definite values for the matrices. The formulation automatically decouples the force and position control directions: the user simply inputs a desired force, i.e., $F_e^*$, and the position control is projected orthogonally into the null-space. If zero desired force is input, the position control spans the full space.
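As a numerical illustration of this decoupling (a minimal sketch with illustrative names, not code from the patent), the projector for a commanded force direction can be built and applied to a position error, which then retains no component along the force direction:

    import numpy as np

    def force_nullspace_projector(f_star):
        # N = I - f f^T / (f^T f): projects onto the space orthogonal to f_star.
        f = np.asarray(f_star, dtype=float)
        norm_sq = np.dot(f, f)
        if norm_sq == 0.0:
            # Zero desired force: position control spans the full space.
            return np.eye(f.size)
        return np.eye(f.size) - np.outer(f, f) / norm_sq

    f_star = np.array([0.0, 0.0, 5.0])    # commanded force along z
    N = force_nullspace_projector(f_star)
    dy = np.array([0.01, -0.02, 0.03])    # object position error
    print(N @ dy)                         # stiffness acts only in the x-y plane
    print(np.dot(N @ dy, f_star))         # ~0: decoupled from the force direction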
Referring to
$$\nu_i = \dot{p} + \omega \times r_i + \nu_{rel}$$
$$\omega_i = \omega + \omega_{rel}$$
$$\dot{\nu}_i = \ddot{p} + \dot{\omega} \times r_i + \omega \times (\omega \times r_i) + 2\,\omega \times \nu_{rel} + a_{rel}$$
$$\dot{\omega}_i = \dot{\omega} + \dot{\omega}_{rel}$$
where $\nu_i$ represents the velocity of the contact point, and $\omega_i$ represents the angular velocity of end-effector $i$. $\nu_{rel}$ and $a_{rel}$ are defined as the first and second derivatives, respectively, of $r_i$ in the B frame.
In other words, they represent the motion of the point relative to the body. The terms become zero when the point is fixed in the body.
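These relations transcribe directly into code; the following minimal sketch (illustrative names, assuming numpy) computes the contact-point rates from the object and relative motion terms:

    import numpy as np

    def contact_point_velocity(p_dot, w, r_i, v_rel):
        # v_i = p_dot + w x r_i + v_rel
        return p_dot + np.cross(w, r_i) + v_rel

    def contact_point_acceleration(p_ddot, w, w_dot, r_i, v_rel, a_rel):
        # v_i_dot = p_ddot + w_dot x r_i + w x (w x r_i) + 2 w x v_rel + a_rel
        return (p_ddot + np.cross(w_dot, r_i)
                + np.cross(w, np.cross(w, r_i))
                + 2.0 * np.cross(w, v_rel) + a_rel)

    # For a point fixed in the body, v_rel = a_rel = 0 and those terms vanish.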
End-Effector Coordinates: the framework of the present invention is designed to accommodate at least the two grasp types described above, i.e., rigid contacts and point contacts. Since each type presents different constraints on the DOF, the choice of end-effector coordinates for each manipulator, $x_i$, depends on the particular grasp type. A third grasp type is that of "no contact", which describes an end-effector that is not in contact with the object 20. This grasp type allows control of the respective end-effector independently of the others. The coordinates may be defined on the velocity level as:
Through the GUI 24 shown in
$$\dot{x}_i = J_i\,\dot{q}$$
In this formula, $J_i$ is the Jacobian of end-effector $i$, and $q$ is the column matrix of all the joint coordinates in the system being controlled.
Matrix Notation: the composite end-effector velocity may be defined as $\dot{x} = [\dot{x}_1^T \ \cdots \ \dot{x}_n^T]^T$, where $n$ is the number of active end-effectors, e.g., a finger 19 of the humanoid robot 10 shown in
$$\dot{x} = G\,\dot{y} + \dot{x}_{rel}$$
$$\ddot{x} = G\,\ddot{y} + Q + \ddot{x}_{rel}$$
$G$ may be referred to as the grasp matrix, and contains the contact position information. $Q$ is a column matrix containing the centrifugal and Coriolis terms. $\dot{x}_{rel}$ and $\ddot{x}_{rel}$ are column matrices containing the relative motion terms.
The structures of the matrices $G$, $Q$, and $J$ vary according to the contact types in the system. They can be constructed from submatrices representing each manipulator $i$ such that:
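One construction consistent with the contact-point kinematics given above, offered as an assumed reconstruction, stacks for each rigid contact $i$:

$$\begin{bmatrix} \nu_i \\ \omega_i \end{bmatrix} = \underbrace{\begin{bmatrix} I & -\hat{r}_i \\ 0 & I \end{bmatrix}}_{G_i} \begin{bmatrix} \dot{p} \\ \omega \end{bmatrix}$$

where $\hat{r}_i$ denotes the skew-symmetric cross-product matrix of $r_i$. A point contact would retain only the translational rows, $G_i = [\,I \ \ -\hat{r}_i\,]$, and a "no contact" end-effector would contribute rows of zeros.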
Referring to
The third case in the table of
When both $\dot{x}_{rel}$ and $\ddot{x}_{rel}$ equal zero, the end-effectors perfectly satisfy the rigid body condition, i.e., they produce no change in the internal forces between them. $\ddot{x}_{rel}$ may therefore be used to control the desired internal forces in a grasped object. To ensure that $\ddot{x}_{rel}$ does not affect the external forces, it must lie in the space orthogonal to $G$, referred to herein as the "internal space", i.e., the same space containing the internal forces. The projection matrix for this space, i.e., the null-space of $G^T$, follows:
$$N_G = I - G\,G^{+}$$
Relative accelerations may be constrained to the internal space:
$$\ddot{x}_{rel} = N_G\,\eta$$
where η is an arbitrary column matrix of internal accelerations.
This condition ensures that $\ddot{x}_{rel}$ produces no net effect on the object-level accelerations, leaving the external forces unperturbed. To validate this claim, one may solve for the object acceleration and show that the internal accelerations have zero contribution to $\ddot{y}$:
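The check follows from the relations already stated: solving the composite acceleration equation for the object acceleration and substituting the constrained relative acceleration gives

$$\ddot{y} = G^{+}\left(\ddot{x} - Q - \ddot{x}_{rel}\right), \qquad G^{+}\ddot{x}_{rel} = G^{+}N_G\,\eta = \left(G^{+} - G^{+}G\,G^{+}\right)\eta = 0,$$

since $G^{+}G\,G^{+} = G^{+}$ is a defining property of the pseudo-inverse. The internal accelerations therefore drop out of $\ddot{y}$.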
Internal Forces: there are two requirements for controlling the internal forces within the above control framework. First, the null-space must be parameterized with physically relevant parameters; second, the parameters must lie in the null-space of both grasp types. Both requirements are satisfied by the concept of interaction forces. Conceptually, drawing a line between two contact points, an interaction force may be defined as the difference between the two contact forces projected along that line. One may show that the interaction wrench, i.e., the interaction forces and moments, also lies in the null-space of the rigid contact case.
One may consider a vector at a contact point normal to the surface and pointing into the object 20 of
With respect to the interaction accelerations, these may be defined as:
wherein the desired relative accelerations should lie in the interaction directions. In the above equation, $\alpha$ may be defined as the column matrix of interaction accelerations $\alpha_{ij}$, where $\alpha_{ij}$ represents the relative linear acceleration between points $i$ and $j$. Hence, the relative acceleration seen by point $i$ is:
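One form consistent with the surrounding definitions, offered as an assumed reconstruction, sums the interaction components acting on point $i$:

$$\ddot{x}_{rel,i} = \sum_{j \neq i} \alpha_{ij}\,u_{ij}$$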
where $u_{ij}$ represents the unit vector pointing along the axis from point $i$ to point $j$.
In addition, $u_{ij} = 0$ if either $i$ or $j$ represents a "no contact" point. The interaction accelerations are then used to control the interaction forces using the following PI regulator, where $k_p$ and $k_i$ are constant scalar gains:
$$\alpha_{ij} = -k_p\left(f_{ij} - f_{ij}^*\right) - k_i \int \left(f_{ij} - f_{ij}^*\right) dt$$
wherein $f_{ij}$ is the interaction force between points $i$ and $j$, defined as:
$$f_{ij} = \left(f_i - f_j\right) \cdot u_{ij}$$
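This regulator is straightforward to implement; the following minimal sketch (illustrative names, assuming numpy, not code from the patent) accumulates the integral term across control cycles of period dt:

    import numpy as np

    class InteractionForceRegulator:
        # PI regulation of the interaction force f_ij toward its reference f_ij*.
        def __init__(self, kp, ki):
            self.kp, self.ki = kp, ki
            self.integral = 0.0

        def interaction_accel(self, f_i, f_j, u_ij, f_star, dt):
            f_ij = np.dot(f_i - f_j, u_ij)   # f_ij = (f_i - f_j) . u_ij
            error = f_ij - f_star
            self.integral += error * dt      # running integral of the force error
            return -self.kp * error - self.ki * self.integral  # alpha_ij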
This definition allows the introduction of a space that parameterizes the interaction components, $N_{int}$. As used herein, $N_{int}$ is a subspace of the full null-space, $N_{G^T}$, except in the point-contact case, where it spans the whole null-space:
$$\ddot{x} = G\,\ddot{y} + Q + N_{int}\,\alpha$$
$N_{int}$ consists of the interaction direction vectors $(u_{ij})$ and can be constructed from the equation:
It may be shown that $N_{int}$ is orthogonal to $G$ for both contact types. Consider an example with two contact points. In this case:
Noting that $u_{ij} = -u_{ji}$ and $\alpha_{ij} = \alpha_{ji}$, the following simple matrix expressions result:
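For two rigid contacts, a plausible reconstruction of these expressions under the conventions above (with $0$ denoting the zero block for the angular rows) is:

$$\ddot{x}_{rel} = N_{int}\,\alpha_{12}, \qquad N_{int} = \begin{bmatrix} u_{12} \\ 0 \\ -u_{12} \\ 0 \end{bmatrix}$$

Orthogonality to $G$ can then be checked directly: with $G_i$ as sketched earlier, $G^T N_{int} = \begin{bmatrix} 0 \\ (r_1 - r_2) \times u_{12} \end{bmatrix} = 0$, because $u_{12}$ is parallel to $r_2 - r_1$.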
The expression for a three contact case follows as:
Control Law—Dynamics Model: the following equation models the full system of manipulators, assuming that external forces act only at the end-effectors:
$$M\ddot{q} + c + J^T w = \tau$$
where $q$ is the column matrix of generalized coordinates, $M$ is the joint-space inertia matrix, $c$ is the column matrix of Coriolis, centrifugal, and gravitational generalized forces, $\tau$ is the column matrix of joint torques, and $w$ is the composite column matrix of the contact wrenches.
Control Law—Inverse Dynamics: the control law based on inverse dynamics may be formulated as:
$$\tau = M\ddot{q}^* + c + J^T w$$
where $\ddot{q}^*$ is the desired joint-space acceleration. It may be derived from the desired end-effector acceleration $(\ddot{x}^*)$ as follows:
$$\ddot{x}^* = J\,\ddot{q}^* + \dot{J}\,\dot{q}$$
$$\ddot{q}^* = J^{+}\left(\ddot{x}^* - \dot{J}\,\dot{q}\right) + N_J\,\ddot{q}_{ns}$$
where $\ddot{q}_{ns}$ is an arbitrary vector projected into the null-space of $J$; it will be utilized for a secondary impedance task hereinbelow. $N_J$ denotes the null-space projection operator for the matrix $J$.
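Pulling these relations together, a minimal computational sketch (illustrative names, assuming numpy; all model quantities taken as given) is:

    import numpy as np

    def control_torque(M, c, J, Jdot, qdot, xddot_star, qddot_ns, w):
        # qddot* = J+ (xddot* - Jdot qdot) + N_J qddot_ns
        J_pinv = np.linalg.pinv(J)
        N_J = np.eye(J.shape[1]) - J_pinv @ J       # null-space projector of J
        qddot_star = J_pinv @ (xddot_star - Jdot @ qdot) + N_J @ qddot_ns
        # tau = M qddot* + c + J^T w
        return M @ qddot_star + c + J.T @ w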
The desired accelerations on the end-effector and object levels may then be derived from the previous equations. The strength of this object force distribution method is that it does not need a model of the object. Conventional methods may involve translating the desired motion of the object into a commanded resultant force, a step that requires an accurate dynamic model of the object; this resultant force is then distributed to the contacts using the inverse of $G$, and the end-effector inverse dynamics then produces the commanded force and the commanded motion. In the method presented herein, introducing the sensed end-effector forces and conducting the allocation in the acceleration domain eliminates the need for a model of the object.
Control Law—Estimation: the external wrench $(F_e)$ on the object 20 of
$$\dot{y} = G^{+}\,\dot{x}$$
When an end-effector is designated as the "no contact" type as noted above, $G$ will contain a row of zeros. A Singular Value Decomposition (SVD)-based pseudo-inverse calculation produces $G^{+}$ with the corresponding column zeroed out. Hence, the velocity of the non-contact point will not affect the estimation. Alternatively, the pseudo-inverse may be computed with a standard closed-form solution; in this case, the rows of zeros need to be removed before the calculation and then reinstated as corresponding columns of zeros. The same applies to the $J$ matrix, which may contain rows of zeros as well.
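The zero-row behavior is easy to confirm numerically, since numpy's pinv is SVD-based (a toy example, not the patent's implementation):

    import numpy as np

    # Two contact coordinates plus one "no contact" row of zeros.
    G = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [0.0, 0.0]])

    G_pinv = np.linalg.pinv(G)   # SVD-based pseudo-inverse
    print(G_pinv[:, 2])          # [0. 0.]: the no-contact velocity cannot affect y_dot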
Second Impedance Law: the redundancy of the manipulators allows for a secondary task to act in the null-space of the object impedance. The following joint-space impedance relation defines a secondary task:
$$M_j\,\ddot{q} + B_j\,\dot{q} + K_j\,\Delta q = \tau_e$$
wherein $\tau_e$ represents the column matrix of joint torques produced by external forces. It may be estimated from the equation of motion, i.e., $M\ddot{q} + c + J^T w = \tau$, such that:
$$\tau_e = M\ddot{q} + c - \tau$$
This formula in turn dictates the following desired acceleration for the null-space component of $\ddot{q}^* = J^{+}(\ddot{x}^* - \dot{J}\,\dot{q}) + N_J\,\ddot{q}_{ns}$, i.e.:

$$\ddot{q}_{ns} = M_j^{-1}\left(\tau_e - B_j\,\dot{q} - K_j\,\Delta q\right)$$
It may be shown that this implementation produces the following closed-loop relation in the null-space of the manipulators. Note that $N_J$ is an orthogonal projection matrix that finds the minimum-error projection into the null-space.
$$N_J\left[\ddot{q} - M_j^{-1}\left(\tau_e - B_j\,\dot{q} - K_j\,\Delta q\right)\right] = 0$$
Zero Force Feedback: the following results from the above equations:
If reliable force sensing is not available in the manipulators, the impedance relation can be adjusted to eliminate the need for such sensing. Through an appropriate selection of the desired impedance inertias, $M_o$ and $M_j$, the force feedback terms can be eliminated. The appropriate values can be readily determined from the previous equation.
User Interface: through a simple user interface, e.g., the GUI 24 of
Referring to
Referring to
Each primary finger 19R, 119R, 19L, 119L has a corresponding finger interface, i.e., 34A, 134A, 34B, 134B, 34C, 134C, respectively. Each palm of a hand 18L, 18R includes a palm interface 34L, 34R. Interfaces 35, 37, and 39 respectively provide a position reference, an internal force reference ($f_{12}$, $f_{13}$, $f_{23}$), and a second position reference ($x^*$). "No contact" options 41L, 41R are provided for the left and right hands, respectively.
Joint space control is provided via inputs 30B. Joint positions of the left and right arms 16L, 16R may be provided via interfaces 34D and 34E, and joint positions of the left and right hands 18L, 18R via interfaces 34F and 34G. Finally, a user may select a qualitative impedance type or level, i.e., soft or stiff, via interface 34H, again provided via the GUI 24 of
Referring to
While the best modes for carrying out the invention have been described in detail, those familiar with the art to which this invention relates will recognize various alternative designs and embodiments for practicing the invention within the scope of the appended claims.
The present application claims the benefit of and priority to U.S. Provisional Application No. 61/174,316 filed on Apr. 30, 2009.
This invention was made with government support under NASA Space Act Agreement number SAA-AT-07-003. The government may have certain rights in the invention.