1. Field of Disclosure
The disclosure generally relates to the field of controlling motion of a system, and more specifically, to controlling motion of a system to avoid collision.
2. Description of the Related Art
Collision avoidance has been an important and widely studied problem since the inception of robotics technology. The majority of early research in this area focused on obstacle avoidance, typically for applications involving mobile robot navigation and industrial manipulation. See A. A. Maciejewski and C. A. Klein, “Obstacle avoidance for kinematically redundant manipulators in dynamically varying environments”, International Journal of Robotics Research, 4:109-117 (1985); see also O. Khatib, “Real-Time Obstacle Avoidance for Manipulators and Mobile Robots”, The International Journal of Robotics Research (IJRR), 5(1):90-98 (1986), both of which are incorporated by reference herein in their entirety. In these applications, the workspace was often predefined, static, or slowly varying. Moreover, application developers typically adopted the philosophy of segregating the workspace of robots and people as a safety countermeasure to avoid collisions with people. Setting large collision distance thresholds could be tolerated for many of the early mobile navigation and industrial manipulation applications. The concept of artificial potential fields proved to be an effective approach for obstacle avoidance under such circumstances.
Today, the field of robotics is moving towards the development of high-degree-of-freedom, human-like, and personal robots, which are often designed to share a common workspace and physically interact with humans. Such robots are often highly redundant, which fundamentally adds new capabilities (self-motion and the ability to perform subtasks). However, increased redundancy has also added new challenges in constraining the internal motion to avoid joint limits and self collisions. In light of these challenges, researchers have become increasingly aware of the need for robust collision avoidance strategies to accommodate such applications.
In particular, self collision avoidance, which was largely overlooked or not required when obstacle avoidance strategies were first developed, has recently become an important topic of research. See H. Sugiura, M. Gienger, H. Janssen, and C. Goerick, “Real-time collision avoidance with whole body motion control for humanoid robots”, IEEE Int. Conf. on Intelligent Robots and Systems (IROS 2007), (2007); see also O. Stasse, A. Escande, N. Mansard, S. Miossec, P. Evrard, and A. Kheddar, “Real-time (self)-collision avoidance task on a HRP-2 humanoid robot”, in Proceedings of ICRA, pages 3200-3205, Pasadena, Calif. (2008), both of which are incorporated by reference herein in their entirety. Self collision avoidance is especially challenging for humanoid robots performing human-like tasks. The collision avoidance strategy must not only accommodate multiple colliding bodies simultaneously, but also tolerate smaller collision distances than those established for early obstacle avoidance algorithms. Furthermore, a self collision avoidance strategy should not significantly alter the reference or originally planned motion. This is particularly important in applications involving reproduction of robot motion from observed human motion, a problem often referred to as motion retargeting. See B. Dariush, M. Gienger, B. Jian, C. Goerick, and K. Fujimura, “Whole body humanoid control from human motion descriptors”, Int. Conf. Robotics and Automation (ICRA), Pasadena, Calif. (2008); see also A. Nakazawa, S. Nakaoka, K. Ikeuchi, and K. Yokoi, “Imitating human dance motions through motion structure analysis”, Intl. Conference on Intelligent Robots and Systems (IROS), pages 2539-2544, Lausanne, Switzerland (2002), both of which are incorporated by reference herein in their entirety.
Hence, there is lacking, inter alia, a system and method for motion control of robots and other articulated rigid body systems to avoid self collisions and obstacles in real-time.
Embodiments of the present invention provide a method (and corresponding system and computer program product) for avoiding collision of a body segment with other structures in an articulated system. According to one aspect, a collision function is determined for avoiding such collision. A distance between the body segment and one such structure is measured. A weighting matrix is generated based on the collision function and the distance, and used to determine a redirected motion for the body segment. The body segment is redirected based on the redirected motion to avoid colliding with the structure.
According to another aspect of the present invention, a joint limit gradient weighting matrix is generated to avoid collision between connected body segments, and a collision gradient weighting matrix is generated to avoid obstacles and collision between unconnected body segments. A weighting matrix is generated based on the two matrices, and used to determine redirected motions of body segments to avoid obstacles and collision among the body segments in the articulated system.
The features and advantages described in the specification are not all inclusive and, in particular, many additional features and advantages will be apparent to one of ordinary skill in the art in view of the drawings, specification, and claims. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the disclosed subject matter.
The disclosed embodiments have other advantages and features which will be more readily apparent from the detailed description, the appended claims, and the accompanying drawings (figures). A brief description of the drawings is as follows:
Figure (FIG.) 1 is a block diagram illustrating a motion retargeting system for controlling a target system in accordance with one embodiment of the invention.
The present invention provides a motion retargeting system (and corresponding method and computer program product) for real-time motion control of robots and other articulated rigid body systems while avoiding self collisions and obstacles. Collisions are avoided by monitoring distances between collision pairs and redirecting body segments based on redirected motion determined using a weighting matrix and the monitored distances.
The Figures (FIGS.) and the following description relate to embodiments of the present invention by way of illustration only. Reference will now be made in detail to several embodiments, examples of which are illustrated in the accompanying figures. It is noted that wherever practicable similar or like reference numbers may be used in the figures and may indicate similar or like functionality. The figures depict embodiments of the disclosed system (or method) for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles described herein.
Motion Retargeting System
In one embodiment, the motion retargeting system 100 captures motions generated in the source system 102 and transfers the captured motions to the target system 104, a process commonly referred to as motion retargeting. Motions in the source system 102 are tracked (e.g., by measuring marker positions, feature points) and expressed as motion descriptors 108 (also known as motion trajectories, desired task descriptors, task variables) using one or more motion primitives in Cartesian (or task) space. The motion descriptors 108 are converted to joint variables 110 (also known as joint space trajectories, joint motions, joint commands, joint motion descriptors) by the motion retargeting system 100. The motion retargeting system 100 uses the joint variables 110 to control the target system 104 to simulate the motion in the source system 102. The motion retargeting system 100 can impose constraints on motion in the target system 104 to avoid joint limits, muscular torque limits, self collisions, obstacles, and the like. For the sake of illustration, without loss of generality, the source system 102 represents a human model and the source motion represents human motion primitives which are typically observed or inferred from measurements, and the target system 104 represents a humanoid robot that is controlled to imitate the human model's motion. One of ordinary skill in the related arts would understand that the motion retargeting system 100 may be used for other purposes such as human pose estimation, tracking and estimation, and joint torque estimation in biomechanics.
In one embodiment, the motion retargeting system 100 uses a task space control framework to generate motion for all degrees of freedom in the target system 104 (in this case, the robot system 214) from a set of desired motion descriptors 108 which are observed from measurements (e.g., at feature points), synthesized, or computed from the current configuration of the target system 104. The tracking retargeting system 202 generates motion that results in a set of computed joint commands which track the desired task descriptors, e.g., minimize the Cartesian tracking error. The balance control system 206 controls the resulting motion for balance and keeps the target system 104 stable. The constraint system 204 provides commands to prevent the target system 104 from violating physical limits, such as joint limits, velocity limits, and torque limits, and also works with the tracking retargeting system 202 to ensure the target system 104 avoids obstacles, self collisions, and computational problems arising from singularities. The three systems 202, 204 and 206 may present a large number of conflicting tasks which may be resolved through a hierarchical task management strategy. Further information on resolving conflicting tasks is found in U.S. application Ser. No. 11/734,758, filed Apr. 12, 2007, titled “Control Of Robots From Human Motion Descriptors”, the content of which is incorporated by reference herein in its entirety. The precision of lower-priority (or lower level of importance) factors may be sacrificed in favor of higher-priority (or higher level of importance) factors.
The motion retargeting system 100, or any of its components described below, may be configured as software (e.g., modules that comprise instructions executable by a processor), hardware (e.g., an application specific integrated circuit), or a combination thereof. The software and/or hardware may operate in a computer system that is structured to include a processor, memory, computer-readable storage medium (e.g., hard drive), network interfaces, and applicable operating system and other functional software (e.g., network drivers, communication protocols).
Trajectory Conversion Process
The tracking retargeting system 202 converts the desired trajectories (the motion descriptors 108) from Cartesian space to joint space through a trajectory conversion process. The joint (or configuration) space refers to the set of all possible configurations of the target system 104. The Cartesian (or task) space refers to the space of the source system 102.
The number of robot degrees of freedom is denoted by n. These degrees of freedom, which fully characterize the joint space, are described by the vector q=[q1, . . . , qn]T. One goal for the tracking retargeting system 202 is to generate collision-free joint motion based on reference motion described in the task space. For generality, suppose the task variables operate in the full six-dimensional task space, three degrees of freedom for position and three for orientation. Also, suppose the number of task variables is k. Let i (i=1 . . . k) be the index of the spatial velocity vector ẋi corresponding to the ith task descriptor. The associated Jacobian is given by Ji(q)=∂xi/∂q.
The mapping between the joint space velocities and the task space velocities is obtained by considering the differential kinematics relating the two spaces as follows
$$\dot{x}_i = J_i(q)\,\dot{q} \tag{1}$$
where Ji is the Jacobian of the task. The spatial velocity vector is defined by
$$\dot{x}_i = [\omega_i \;\; \dot{p}_i]^T \tag{2}$$
where ωi and ṗi are vectors corresponding to the angular velocity of the task frame and the linear velocity of the task position referenced to the base frame, respectively. To augment all task variables, an augmented spatial velocity vector ẋ and an augmented Jacobian matrix J are formed as follows,
$$\dot{x} = [\dot{x}_1^T \;\cdots\; \dot{x}_i^T \;\cdots\; \dot{x}_k^T]^T \tag{3}$$
$$J = [J_1^T \;\cdots\; J_i^T \;\cdots\; J_k^T]^T \tag{4}$$
The Jacobian matrix may be decomposed into its rotational and translational components, denoted by Jo and Jp, respectively, as follows,
$$J_i = \begin{bmatrix} J_{o_i} \\ J_{p_i} \end{bmatrix} \tag{5}$$
If, for example, only a position descriptor pi is observable, then the parameters in Equation 1 can be modified to ẋi = ṗi and Ji = Jpi.
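Purely as an illustration of Equations 1-5 (a minimal sketch, not part of the original disclosure), the following code stacks per-task spatial velocities and Jacobians into their augmented forms; the helper name, argument shapes, and the assumption that the per-task Jacobians are supplied by an external kinematics model are all hypothetical.

```python
import numpy as np

def augment_tasks(task_velocities, task_jacobians, position_only=None):
    """Build the augmented spatial velocity vector (Eq. 3) and Jacobian (Eq. 4).

    task_velocities : list of 6-vectors [w_i, p_dot_i] (Eq. 2, angular rows first)
    task_jacobians  : list of 6 x n Jacobians J_i, rows ordered [J_o_i; J_p_i] (Eq. 5)
    position_only   : optional list of booleans; True keeps only the translational
                      rows (Eq. 1 modified to x_dot_i = p_dot_i, J_i = J_p_i)
    """
    xs, Js = [], []
    for idx, (xd, J) in enumerate(zip(task_velocities, task_jacobians)):
        xd, J = np.asarray(xd, float), np.asarray(J, float)
        if position_only and position_only[idx]:
            xd, J = xd[3:], J[3:, :]   # keep the linear velocity rows only
        xs.append(xd)
        Js.append(J)
    return np.concatenate(xs), np.vstack(Js)
```

For a task with only an observable position descriptor, passing position_only with a True entry for that task keeps only the translational rows, mirroring the modification described above.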
Closed Loop Inverse Kinematics
Closed loop inverse kinematics (CLIK) is an effective method to perform the trajectory conversion from task space to joint space. In one embodiment, the tracking retargeting system 202 utilizes a CLIK algorithm to perform the trajectory conversion. A detailed description about CLIK algorithms can be found in B. Dariush, M. Gienger, B. Jian, C. Goerick, and K. Fujimura, “Whole body humanoid control from human motion descriptors”, Int. Conf. Robotics and Automation, (ICRA), Pasadena, Calif. (2008), the content of which is incorporated by reference herein in its entirety.
A CLIK algorithm uses a set of task descriptors as input and estimates the robot joint commands that minimize the tracking error between the reference Cartesian motion (the desired task descriptors) and predicted Cartesian motion.
In one embodiment, a control policy of the CLIK algorithm is configured to produce robot joint commands such that the Cartesian error between the predicted robot task descriptors and the reference task descriptors is minimized. The tracking performance is subject to the kinematic constraints of the robot system 214, as well as the execution of multiple and often conflicting task descriptor requirements. The formulation of such a constrained optimization is based on extensions of the CLIK formulation.
Let the velocity of the reference task descriptors in the task space be described by,
$$\dot{x}_r = [\dot{x}_{r_1}^T \;\cdots\; \dot{x}_{r_k}^T]^T \tag{6}$$
The joint velocities may be computed by inverting Equation 1 and adding a feedback error term to correct for numerical drift.
$$\dot{q} = J^*\left(\dot{x}_r + K e\right) \tag{7}$$
where J* denotes the regularized right pseudo-inverse of J weighted by a positive definite matrix W,
$$J^* = W^{-1} J^T \left(J W^{-1} J^T + \lambda^2 I\right)^{-1} \tag{8}$$
The parameter λ>0 is a damping term, and I is an identity matrix. If λ²I=0, then Equation 8 is simply the weighted right pseudo-inverse of J. Furthermore, if J is a square non-singular matrix, W is the identity matrix, and λ²I=0, the matrix J* is replaced by the standard matrix inversion J⁻¹.
The vector ẋr is the augmented velocity of the reference task descriptors defined in Equation 6, K is a positive definite diagonal gain matrix, and e is the error between the reference task descriptors and the task descriptors computed from the current configuration of the robot system 214. The feedback term Ke drives this tracking error toward zero and corrects for the numerical drift noted above.
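As a concrete illustration of Equations 7 and 8 (a minimal sketch, not the disclosed implementation), one CLIK update can be written as follows; the function name, the damping default, and the assumption that J, ẋr, e, K, and the weighting matrix W are supplied by the surrounding framework are illustrative.

```python
import numpy as np

def clik_step(J, x_dot_ref, e, W, K, damping=0.05):
    """One CLIK update: q_dot = J* (x_dot_ref + K e), Equation 7."""
    m = J.shape[0]
    W_inv = np.linalg.inv(W)                    # W is diagonal, positive definite
    JWJt = J @ W_inv @ J.T + (damping ** 2) * np.eye(m)
    J_star = W_inv @ J.T @ np.linalg.inv(JWJt)  # regularized weighted pseudo-inverse, Equation 8
    return J_star @ (x_dot_ref + K @ e)
```

A small positive damping value keeps the inversion well conditioned near singularities, which is the role of the λ²I term described above.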
Further information and additional embodiments of the tracking retargeting system 202 are found in B. Dariush, M. Gienger, A. Arumbakkam, Y. Zhu, K. Fujimura, and C. Goerick, “Online Transfer of Human Motion to Humanoids”, International Journal of Humanoid Robotics (Oct. 6, 2008), and U.S. application Ser. No. 11/734,758, filed Apr. 12, 2007, titled “Control Of Robots From Human Motion Descriptors”, the contents of both of which are incorporated by reference herein in their entirety.
Collision Avoidance
Collision avoidance of a target system 104 with itself or with obstacles allows the target system 104 to safely execute a motion. Collision avoidance may be categorized as self collision avoidance or obstacle avoidance. Self collision avoidance refers to a situation where two segments of the robot system 214 come into contact, whereas obstacle avoidance refers to the situation where the robot system 214 comes into contact with an object in the environment. Self collision avoidance may be further categorized as avoidance of self collision between connected body segment pairs and avoidance of self collision between unconnected body segment pairs. By connected segment pairs, it is implied that the two segments are connected at a common joint, which is assumed to be rotational. In the case of obstacle avoidance, the two colliding bodies are always unconnected.
An approach to avoid self collisions and obstacles is described in detail below. The approach can be implemented in the CLIK formulation of the tracking retargeting system 202. Avoidance of self collision between connected body segment pairs is described first, followed by avoidance of collision between unconnected bodies. A unified approach to avoiding self collisions and obstacles is then developed utilizing these algorithms. It is noted that the approach for avoidance of collision between unconnected bodies can be used to avoid both self collision between unconnected body segment pairs and collision with obstacles, since both involve a segment of the robot system 214 coming into contact with an unconnected body.
Avoidance of Self Collision Between Two Connected Bodies
If two segments are connected at a common rotational joint, i.e. they form a connected segment pair, self collision avoidance between the segments may be handled by limiting the joint range. Joint limits for self collision avoidance need not correspond to the physical joint limits; rather, they may be more conservative virtual joint limits whose values are obtained by manually verifying the bounds at which collision does not occur. Therefore, collision avoidance between connected segment pairs can be achieved by limiting the joint range.
In one embodiment, avoidance of self collision between connected body segments and joint limit avoidance are achieved by the proper selection of the weighting matrix W in Equation 8. One example weighting matrix is defined by the Weighted Least-Norm (WLN) solution. The WLN solution was originally proposed by T. F. Chan and R. V. Dubey, “A weighted least-norm solution based scheme for avoiding joint limits for redundant joint manipulators”, IEEE Transactions on Robotics and Automation, 11(2), (1995), the content of which is incorporated by reference herein in its entirety. A WLN solution is formulated in the context of Damped Least Squares Jacobian inverse. The WLN solution is utilized to generate an appropriate weighting matrix based on the gradient of a joint limit function to dampen joints nearing their limits. This solution is described below.
A candidate joint limit function that has higher values when the joints near their limits and tends to infinity at the joint limits is denoted by H(q). One such candidate function proposed by Zghal et al. is given by
$$H(q) = \frac{1}{4}\sum_{i=1}^{n} \frac{(q_{i,\max}-q_{i,\min})^2}{(q_{i,\max}-q_i)(q_i - q_{i,\min})} \tag{9}$$
where qi represents the generalized coordinates of the ith degree of freedom, and qi,min and qi,max are the lower and upper joint limits, respectively. See H. Zghal and R. V. Dubey, “Efficient gradient projection optimization for manipulators with multiple degrees of redundancy”, Int. Conf. Robotics and Automation, volume 2, pages 1006-1011 (1990), the content of which is incorporated by reference herein in its entirety. The upper and lower joint limits represent the more conservative limits between the physical joint limits and the virtual joint limits used for collision avoidance. Note that H(q) is normalized to account for the variations in the range of motion. The gradient of H, denoted as ∇H, represents the joint limit gradient function, an n×1 vector whose entries point in the direction of the fastest rate of increase of H:
$$\nabla H = \frac{\partial H}{\partial q} = \left[\frac{\partial H}{\partial q_1} \;\cdots\; \frac{\partial H}{\partial q_n}\right]^T \tag{10}$$
The element associated with joint i is given by,
$$\frac{\partial H}{\partial q_i} = \frac{(q_{i,\max}-q_{i,\min})^2\,(2q_i - q_{i,\max} - q_{i,\min})}{4\,(q_{i,\max}-q_i)^2\,(q_i - q_{i,\min})^2} \tag{11}$$
The gradient ∂H/∂qi is equal to zero if the joint is at the middle of its range and goes to infinity at either limit. The joint limit gradient weighting matrix, denoted by WJL, is defined by the following n×n diagonal matrix with diagonal elements wJLi (i=1 . . . n):
$$W_{JL} = \operatorname{diag}\left(w_{JL_1}, \ldots, w_{JL_n}\right) \tag{12}$$
The weighting matrix W in Equation 8 is constructed from WJL (e.g., W=WJL). The diagonal elements wJLi are defined by:
$$w_{JL_i} = \begin{cases} 1 + \left|\dfrac{\partial H}{\partial q_i}\right| & \text{if } \Delta\left|\partial H/\partial q_i\right| \ge 0 \\[4pt] 1 & \text{if } \Delta\left|\partial H/\partial q_i\right| < 0 \end{cases} \tag{13}$$
The term Δ|∂H/∂qi| represents the change in the magnitude of the joint limit gradient function. A positive value indicates the joint is moving toward its limit, while a negative value indicates the joint is moving away from its limit. When a joint moves toward its limit, the associated weighting factor, described by the first condition in Equation 13, becomes very large, causing the motion to slow down. When the joint nearly reaches its limit, the weighting factor is near infinity and the corresponding joint virtually stops. If the joint is moving away from the limit, there is no need to restrict or penalize the motion. In this scenario, the second condition in Equation 13 allows the joint to move freely. Therefore, WJL can be used for joint limit avoidance.
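As a minimal sketch of the joint limit weighting described above (using the reconstructed Equations 11-13; the function names and the use of the previous joint configuration to estimate Δ|∂H/∂qi| are illustrative assumptions, not the disclosed implementation):

```python
import numpy as np

def joint_limit_gradient(q, q_min, q_max):
    """Per-joint gradient dH/dq_i of the joint limit function H(q) (Equation 11)."""
    num = (q_max - q_min) ** 2 * (2.0 * q - q_max - q_min)
    den = 4.0 * (q_max - q) ** 2 * (q - q_min) ** 2
    return num / den

def joint_limit_weighting(q, q_prev, q_min, q_max):
    """Diagonal W_JL (Equations 12-13): damp joints whose motion approaches a limit."""
    g_now = np.abs(joint_limit_gradient(q, q_min, q_max))
    g_prev = np.abs(joint_limit_gradient(q_prev, q_min, q_max))
    w = np.where(g_now - g_prev >= 0.0, 1.0 + g_now, 1.0)   # toward limit vs. away from limit
    return np.diag(w)
```

Here the change Δ|∂H/∂qi| is approximated by comparing the gradient magnitude at the current and previous joint configurations.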
Avoidance of Collision Between Unconnected Bodies
One embodiment of a collision avoidance strategy for unconnected bodies is described below. Avoidance of self collision between unconnected body segment pairs is discussed first, followed by avoidance with obstacles.
For segment pairs of the robot system 214 that do not share the same joint, the collision avoidance strategy takes the minimum Euclidian distance (shortest distance) between the two colliding bodies as an input. The minimum distance between two bodies is denoted by d (d≧0). The coordinates of the two closest points that realize the minimum distance d, expressed in the base frame, are denoted by pa and pb. The two points, represented by a and b, are referred to as collision points (or collision point pairs). P(q, d) represents a candidate collision function that has a maximum value at d=0 and decays exponentially toward zero as d increases. In other embodiments, the candidate collision function can have a minimum value at d=0 and can increase as the distance d increases, e.g., the increase can be exponential.
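In practice, the distance d and the collision points pa and pb are reported by collision detection software (as noted below). Purely for illustration, the following toy sketch approximates each body by a bounding sphere; the sphere approximation, function name, and return convention are assumptions and not part of the disclosed system.

```python
import numpy as np

def sphere_pair_distance(center_a, radius_a, center_b, radius_b):
    """Minimum distance d and collision points p_a, p_b for two sphere-approximated bodies.

    Assumes the two sphere centers do not coincide.
    """
    center_a, center_b = np.asarray(center_a, float), np.asarray(center_b, float)
    axis = center_b - center_a
    center_dist = np.linalg.norm(axis)
    u = axis / center_dist                       # unit vector from body a toward body b
    p_a = center_a + radius_a * u                # closest point on the surface of body a
    p_b = center_b - radius_b * u                # closest point on the surface of body b
    d = max(center_dist - radius_a - radius_b, 0.0)
    return d, p_a, p_b
```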
The gradient of P, denoted as ∇P, is the collision gradient function, an n×1 vector whose entries point in the direction of the fastest rate of increase of P:
$$\nabla P = \frac{\partial P}{\partial q} = \left[\frac{\partial P}{\partial q_1} \;\cdots\; \frac{\partial P}{\partial q_n}\right]^T \tag{14}$$
The elements of the collision gradient function ∇P represent the degree to which each degree of freedom influences the distance to collision. The collision gradient function can be computed as follows,
$$\frac{\partial P}{\partial q} = \frac{\partial P}{\partial d}\,\frac{\partial d}{\partial q} \tag{15}$$
Consider first the case of self collision, or collision between two bodies of the robot system 214. In this scenario, it can be shown that the second term in Equation 15 is,
$$\frac{\partial d}{\partial q} = \frac{(p_a - p_b)^T}{d}\,(J_a - J_b) \tag{16}$$
where pa and pb represent the coordinates, referred to the base, of the two collision points, and Ja and Jb are the associated Jacobian matrices. The coordinates pa and pb can be obtained using standard collision detection software.
If one of the collision point pairs, for example point b, represents a point on an environment obstacle that is not attached to the robot system 214 (e.g., the collision points labeled a2 and b2 in the figures), then the Jacobian associated with that point vanishes (Jb=0) and the second term in Equation 15 reduces to,
$$\frac{\partial d}{\partial q} = \frac{(p_a - p_b)^T}{d}\,J_a \tag{17}$$
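A minimal sketch of Equations 16 and 17 follows; the function name and argument conventions are illustrative assumptions, with Ja_pos and Jb_pos denoting the 3×n translational Jacobians evaluated at the two collision points.

```python
import numpy as np

def distance_gradient(p_a, p_b, d, J_a_pos, J_b_pos=None):
    """dd/dq per Equation 16 (self collision) or Equation 17 (environment obstacle)."""
    n_hat = (np.asarray(p_a, float) - np.asarray(p_b, float)) / d   # unit vector from p_b toward p_a
    if J_b_pos is None:                     # point b lies on an obstacle: J_b = 0 (Equation 17)
        return n_hat @ J_a_pos
    return n_hat @ (J_a_pos - J_b_pos)      # Equation 16
```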
As noted above, the elements of the collision gradient function ∇P in Equation 14 represent the degree to which each degree of freedom influences the distance to collision. Therefore, it is desirable to select a collision function P(q), such that the gradient ∂P(q)/∂qi is zero when d is large and infinity when d approaches zero. The collision gradient weighting matrix, denoted by WCOL, is defined by an n×n diagonal matrix with diagonal elements wcoli:
$$W_{COL} = \operatorname{diag}\left(w_{col_1}, \ldots, w_{col_n}\right) \tag{18}$$
The scalars wcoli are defined, analogously to Equation 13, by:
$$w_{col_i} = \begin{cases} 1 + \left|\dfrac{\partial P}{\partial q_i}\right| & \text{if } \Delta\left|\partial P/\partial q_i\right| \ge 0 \\[4pt] 1 & \text{if } \Delta\left|\partial P/\partial q_i\right| < 0 \end{cases} \tag{19}$$
so that joint motion which brings the colliding bodies closer together is damped, while motion that increases the distance to collision is not penalized.
The collision function P can have different forms. For example, the following is a candidate collision function P according to one embodiment,
$$P = \rho\, e^{-\alpha d}\, d^{-\beta} \tag{20}$$
This candidate collision function is at infinity when d=0 and decays exponentially as d increases. The rate of decay is controlled by adjusting the parameters α and β. By increasing α, the exponential rate of decay can be adjusted so that the function approaches zero more quickly. Likewise, increasing β results in a faster rate of decay. The parameter ρ controls the amplitude. The partial derivative of P with respect to d is,
$$\frac{\partial P}{\partial d} = -\rho\left(\alpha + \frac{\beta}{d}\right) e^{-\alpha d}\, d^{-\beta} \tag{21}$$
The quantity ∂P/∂q in Equation 15 can be analytically computed from Equation 16 (or Equation 17) and Equation 21.
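To make the chain of Equations 15-21 concrete, a minimal sketch follows; the function names, the default values of ρ, α, and β, and the use of the previous gradient to estimate Δ|∂P/∂qi| are illustrative assumptions rather than the disclosed implementation.

```python
import numpy as np

def collision_potential(d, rho=1.0, alpha=5.0, beta=1.0):
    """Candidate collision function P (Equation 20) and its derivative dP/dd (Equation 21)."""
    P = rho * np.exp(-alpha * d) * d ** (-beta)
    dP_dd = -rho * (alpha + beta / d) * np.exp(-alpha * d) * d ** (-beta)
    return P, dP_dd

def collision_weighting(dP_dq, dP_dq_prev):
    """Diagonal W_COL (Equations 18-19): damp joints whose motion approaches collision."""
    g_now, g_prev = np.abs(dP_dq), np.abs(dP_dq_prev)
    w = np.where(g_now - g_prev >= 0.0, 1.0 + g_now, 1.0)
    return np.diag(w)
```

Here dP_dq is the n-vector obtained by multiplying ∂P/∂d from Equation 21 with ∂d/∂q from Equation 16 or 17, e.g., dP_dd * distance_gradient(...).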
Unified Collision Avoidance Approach
As described above, collision between connected body segment pairs can be avoided by applying a joint limit gradient weighting matrix WJL, and collision between unconnected bodies can be avoided through a collision gradient weighting matrix WCOL. A weighting matrix W can be constructed to incorporate both matrices and avoid self collision between connected or unconnected segment pairs as well as collision with obstacles. It is assumed that WJL may be designed to enforce the physical joint limits as well as the virtual joint limits for collision avoidance of two connected bodies.
The basic concept of the unified approach is to design an appropriate weighting matrix W as in Equation 8 to penalize and dampen joints whose motion directs the segments toward collision. The weighting matrix is a diagonal matrix whose elements are derived by considering the gradients of the joint limit and collision potential functions. This strategy is also referred to as the joint limit and collision gradient weighting (JLCGW) matrix method for collision avoidance. The JLCGW matrix W is influenced by the n×n matrix WJL and the n×n matrix WCOL. For example, W can be defined as simply as the following,
$$W = \frac{W_{JL} + W_{COL}}{2} \tag{22}$$
Alternatively, suppose a total of Nc body pairs (connected body segment pairs, unconnected body segment pairs, body segment and obstacle pairs) are checked for collision. Let j (j=1 . . . Nc) be the index of the jth collision pair, and dj the minimum distance between the two colliding bodies. Let paj and pbj denote the coordinates of the collision points of the jth pair. A candidate collision function for the jth pair is,
$$P_j = \rho\, e^{-\alpha d_j}\, d_j^{-\beta} \tag{23}$$
The collision gradient weighting matrix is given by,
$$W_{COL} = \operatorname{diag}\left(w_{col_1}, \ldots, w_{col_n}\right) \tag{24}$$
The scalars wcoli (Equation 25) are defined analogously to Equation 19, with the collision gradient contributions of all Nc collision functions Pj accumulated for each joint i.
The weighting matrix W is described by,
$$W = \frac{W_{JL} + W_{COL}}{N_c + 1} \tag{26}$$
where the denominator normalizes the solution by the total number of collision pairs Nc plus one, the additional count accounting for the joint limit avoidance contribution.
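A minimal sketch of the unified weighting follows (Equations 24-26 as reconstructed above); the accumulation of per-pair gradients in W_COL and all function names are assumptions introduced for illustration.

```python
import numpy as np

def accumulated_collision_weighting(pair_grads, pair_grads_prev, n_dof):
    """W_COL over N_c collision pairs: per-joint collision gradient magnitudes
    are summed over all pairs before applying the Equation 19 style damping rule."""
    g_now = np.sum(np.abs(pair_grads), axis=0) if len(pair_grads) else np.zeros(n_dof)
    g_prev = np.sum(np.abs(pair_grads_prev), axis=0) if len(pair_grads_prev) else np.zeros(n_dof)
    w = np.where(g_now - g_prev >= 0.0, 1.0 + g_now, 1.0)
    return np.diag(w)

def jlcgw_matrix(W_JL, W_COL, num_pairs):
    """Unified joint limit and collision gradient weighting matrix (Equation 26)."""
    return (W_JL + W_COL) / (num_pairs + 1)
```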
Process for Collision Avoidance
One or more portions of the process 500 may be implemented in embodiments of hardware and/or software or combinations thereof. For example, the process 500 may be embodied through instructions for performing the actions described herein, and such instructions can be stored within a tangible computer readable medium (e.g., flash memory, hard drive) and executed by a computer processor. Furthermore, one of ordinary skill in the related arts would understand that other embodiments can perform the steps of the process 500 in a different order. Moreover, other embodiments can include different and/or additional steps than the ones described here. The CLIK trajectory conversion system 308 can perform multiple instances of the steps of the process 500 concurrently and/or in parallel.
The CLIK trajectory conversion system 308 controls motion of the robot system 214 based on motion descriptors received from a source system 102. The CLIK trajectory conversion system 308 determines 510 a joint limitation function for avoiding collisions between connected body segment pairs and a collision function for avoiding obstacles and collisions between unconnected body segment pairs. An example of the joint limitation function is shown in Equation 9. An example of the collision function is shown in Equation 20.
The CLIK trajectory conversion system 308 measures 520 distances between body pairs that can potentially collide into each other. Such body pairs include connected body segment pairs, unconnected body segment pairs, and body segment and obstacle pairs. In one embodiment, the CLIK trajectory conversion system 308 maintains a distance table listing the distances between the collision pairs, measures 520 their distances through sensors in real time, and updates the distance table accordingly.
The CLIK trajectory conversion system 308 generates 530 a joint limit and collision gradient weighting (JLCGW) matrix based on the joint limitation function, the collision function, and the measured distances. Example processes for generating 530 the JLCGW matrix are disclosed above with relation to Equations 22 and 26.
The CLIK trajectory conversion system 308 avoids 540 potential collisions by using the JLCGW matrix. In one embodiment, the JLCGW matrix is used as a weighting matrix in a CLIK algorithm used by the CLIK trajectory conversion system 308 to redirect the movements of the body segments in the robot system 214 to avoid collisions, as illustrated in Equation 8 above. Thereafter, the CLIK trajectory conversion system 308 continues measuring 520 the distances between the redirected collision pairs to prevent future collisions.
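Bringing the steps of the process 500 together, the following hypothetical composition of the earlier sketches illustrates one control cycle. The robot interface (joint_positions, collision_pairs, and related attributes) and all helper functions are assumptions introduced in the sketches above, not APIs of the disclosed system; the helpers are assumed to be in scope.

```python
def collision_aware_clik_cycle(robot, x_dot_ref, e, K, q_prev, pair_grads_prev, damping=0.05):
    """One cycle: measure distances (520), build the JLCGW matrix (530), run CLIK (540)."""
    q = robot.joint_positions()
    J = robot.augmented_task_jacobian()                       # Equation 4
    W_JL = joint_limit_weighting(q, q_prev, robot.q_min, robot.q_max)

    pair_grads = []
    for pair in robot.collision_pairs():                      # connected, unconnected, and obstacle pairs
        d, p_a, p_b = pair.closest_points()                   # from a collision checker
        _, dP_dd = collision_potential(d)                     # Equations 20-21
        dd_dq = distance_gradient(p_a, p_b, d, pair.J_a_pos, pair.J_b_pos)  # J_b_pos is None for obstacles
        pair_grads.append(dP_dd * dd_dq)                      # dP_j/dq via Equation 15

    W_COL = accumulated_collision_weighting(pair_grads, pair_grads_prev, len(q))
    W = jlcgw_matrix(W_JL, W_COL, len(pair_grads))            # Equation 26
    q_dot = clik_step(J, x_dot_ref, e, W, K, damping)         # Equations 7-8
    return q_dot, pair_grads
```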
One embodiment of the disclosed collision avoidance strategy was tested in a human-to-robot motion retargeting application. A set of desired upper body human motion descriptors was associated with a human model. The human motion descriptors consisted of up to eight upper body Cartesian positions corresponding to a waist joint, two shoulder joints, two elbow joints, two wrist joints, and a neck joint.
The human motion data was low pass filtered, interpolated, and scaled to the dimensions of ASIMO, a humanoid robot created by Honda Motor Company. The resulting motion corresponds to the reference robot motion descriptors used as input in the CLIK trajectory conversion system 308 which produces collision free robot joint motion.
For further detail of the test, please refer to B. Dariush, M. Gienger, B. Jian, C. Goerick, and K. Fujimura, “Whole body humanoid control from human motion descriptors”, Int. Conf. Robotics and Automation (ICRA), Pasadena, Calif. (2008), the content of which is incorporated by reference herein in its entirety.
Alternate Embodiments
The above embodiments describe a motion retargeting system for real-time motion control of robot systems to avoid self collisions and obstacles in motion retargeting. One skilled in the art would understand that the motion retargeting system can be used to control other articulated rigid body systems, whether in a real environment or a virtual environment (e.g., human figure animation), and for purposes other than motion retargeting (e.g., robotic motion generation and control, human pose estimation, tracking and estimation, and joint torque estimation in biomechanics).
Some portions of the above description describe the embodiments in terms of algorithmic processes or operations, for example, the processes and operations described above with reference to the figures.
As used herein any reference to “one embodiment” or “an embodiment” means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
Some embodiments may be described using the expression “coupled” and “connected” along with their derivatives. It should be understood that these terms are not intended as synonyms for each other. For example, some embodiments may be described using the term “connected” to indicate that two or more elements are in direct physical or electrical contact with each other. In another example, some embodiments may be described using the term “coupled” to indicate that two or more elements are in direct physical or electrical contact. The term “coupled,” however, may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other. The embodiments are not limited in this context.
As used herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having” or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Further, unless expressly stated to the contrary, “or” refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
In addition, use of the “a” or “an” are employed to describe elements and components of the embodiments herein. This is done merely for convenience and to give a general sense of the invention. This description should be read to include one or at least one and the singular also includes the plural unless it is obvious that it is meant otherwise.
Upon reading this disclosure, those of skill in the art will appreciate still additional alternative structural and functional designs for a system and a process for real-time motion control of robots and other articulated rigid body systems to avoid self collisions and obstacles. Thus, while particular embodiments and applications have been illustrated and described, it is to be understood that the present invention is not limited to the precise construction and components disclosed herein and that various modifications, changes and variations which will be apparent to those skilled in the art may be made in the arrangement, operation and details of the method and apparatus disclosed herein without departing from the spirit and scope as defined in the appended claims.
This application claims a benefit of, and priority under 35 USC §119(e) to, U.S. Provisional Patent Application No. 60/984,639, filed Nov. 1, 2007, the contents of which are incorporated by reference herein in their entirety. This application is related to U.S. patent application Ser. No. 11/614,930, filed Dec. 21, 2006, titled “Reconstruction, Retargetting, Tracking, and Estimation of Motion for Articulated Systems”, U.S. patent application Ser. No. 11/734,758, filed Apr. 12, 2007, titled “Control Of Robots From Human Motion Descriptors”, and U.S. patent application Ser. No. 12/257,664, entitled “Real-Time Self Collision And Obstacle Avoidance,” by Behzad Dariush, Youding Zhu, and Kikuo Fujimura, filed concurrently with this application, all of which are incorporated by reference herein in their entirety.