This disclosure relates generally to robotics and more specifically to systems, methods and apparatuses, including computer programs, for manipulating objects using robotic devices.
Robotic devices are being developed for a variety of purposes today, such as to advance foundational research and to assist with missions that may be risky or taxing for humans to perform. Over time, robots have been tasked with increasingly complicated tasks, such as manipulation of objects. Robots can benefit from improved techniques for planning and coordination of grasps on objects to assist in performing manipulation tasks.
Some embodiments of the present disclosure describe systems, methods and apparatuses, including computer programs, for manipulating objects using robotic devices. Many objects that robotic devices may manipulate (e.g., grasp, re-grasp, place) may be grasped in several different ways by the robotic device. Selecting a particular grasp from among the set of possible grasps that allows the robotic device to accomplish a task (e.g., pick-and-place an object, pick-and-use a tool) may involve satisfying one or more objectives (e.g., providing a sufficiently strong hold on an object during transport while avoiding collisions with the robot) while being subject to various constraints (e.g., kinematic constraints of the robot, a balance constraint of the robot). For example, a particular grasp may be selected based on one or more of information about the shape of the object(s), the capabilities of the robot to manipulate the object(s), aspects of the robot's environment, a location of the object(s) relative to the robot, and/or a desired behavior for the robot to perform while manipulating the object(s).
In some prior robotic devices, grasping an object involves (i) determining how to grasp the object (e.g., where to grasp the object) and (ii) determining how to move components of the robot to achieve the determined grasp. The inventors have recognized that prior approaches that decouple these two steps have challenges. For example, if it is determined in step (ii) that the robot is unable to move in a way that enables the robot to successfully grasp the object as determined in step (i), the attempted grasp may fail and/or step (i) may need to be repeated until the conditions in both steps can be satisfied.
Some embodiments of the present disclosure relate to improved techniques for robotic grasp planning and object manipulation using grasp regions associated with an object. Within each grasp region, a continuous set of potential grasps may be achieved by an end effector of a robotic device. As described in further detail below, the use of grasp regions in grasp planning and object manipulation provides greater flexibility than prior techniques in how the robotic device is able to manipulate an object.
In one aspect, the invention features a computer-implemented method. The method includes associating a first grasp region of an object with an end effector of a robotic device, wherein the first grasp region includes a set of potential grasps achievable by the end effector of the robotic device, determining, within the first grasp region, a grasp from among the set of potential grasps, wherein the grasp is determined based, at least in part, on information associated with a capability of the robotic device to perform the grasp, and instructing the robotic device to manipulate the object based on the grasp.
In one aspect, the method further includes defining a set of grasp regions for the object, wherein the set of grasp regions includes the first grasp region. In another aspect, defining the set of grasp regions for the object is based, at least in part, on a shape of the object and/or information associated with at least one end effector of the robotic device. In another aspect, defining the set of grasp regions for the object comprises fitting one or more primitive shapes to the object. In another aspect, the one or more primitive shapes include at least one of a cylinder, a prism, a disk, or a plane. In another aspect, the at least one end effector of the robotic device includes a robotic gripper having a set of appendages, and defining the set of grasp regions for the object comprises defining the first grasp region as having a shape capable of being grasped by at least two appendages in the set of appendages. In another aspect, the shape capable of being grasped by at least two appendages in the set of appendages includes a cylinder, a prism or a disk. In another aspect, the shape capable of being grasped by at least two appendages in the set of appendages is defined along a single axis, and the set of potential grasps within the first grasp region include grasps having different rotations around the single axis and/or translations along the single axis. In another aspect, the at least one end effector of the robotic device includes a suction-based gripper, and defining the set of grasp regions for the object comprises defining the first grasp region as a substantially planar region configured to be grasped by the suction-based gripper. In another aspect, defining a set of grasp regions for the object comprises determining, using simulation, a set of possible grasps of the at least one end effector on the object, clustering possible grasps within the set of possible grasps to generate a set of clusters, and defining the set of grasp regions based on the set of clusters.
In another aspect, the end effector of the robotic device includes a gripper having a set of appendages, and the first grasp region has a shape capable of being grasped by at least two appendages of the set of appendages. In another aspect, the shape capable of being grasped by at least two appendages of the set of appendages includes a cylinder, a prism or a disk. In another aspect, the shape capable of being grasped by at least two appendages of the set of appendages is defined along a single axis, and the set of potential grasps within the first grasp region include grasps having different rotations around the single axis and/or translations along the single axis. In another aspect, determining, within the first grasp region, a grasp, comprises determining the grasp based, at least in part, on a pose of the robotic device.
In another aspect, determining, within the first grasp region, a grasp, comprises determining the grasp based, at least in part, on a manipulation goal of the robotic device. In another aspect, the grasp is determined based, at least in part, on a placement location of the object associated with the manipulation goal. In another aspect, the grasp is determined based, at least in part, on a location of the object prior to manipulating the object. In another aspect, determining, within the first grasp region, a grasp, comprises determining the grasp based, at least in part, on a balance constraint of the robotic device when manipulating the object.
In another aspect, determining, within the first grasp region, a grasp, comprises determining the grasp based, at least in part, on a collision constraint associated with the robotic device when manipulating the object. In another aspect, the collision constraint is a self-collision constraint of the robotic device and/or an external collision constraint of the robotic device with an object in an environment of the robotic device. In another aspect, determining, within the first grasp region, a grasp, comprises determining the grasp based, at least in part, on a gaze constraint associated with a camera of the robotic device when manipulating the object. In another aspect, determining, within the first grasp region, a grasp, comprises determining the grasp based, at least in part, on one or more kinematic constraints of the robotic device.
In another aspect, determining, within the first grasp region, a grasp, comprises determining the grasp using a technique that considers for each potential grasp in the set of potential grasps within the first grasp region, a location of the potential grasp within the first grasp region, and one or more kinematic or reachability constraints of the robotic device to achieve the potential grasp. In another aspect, the technique is an optimization technique. In another aspect, the first grasp region has a first axis, and the set of potential grasps within the first grasp region include potential grasps at different rotations around the first axis and/or translations along the first axis.
In another aspect, manipulating the object based on the grasp comprises controlling the robotic device to grasp the object using the grasp. In another aspect, manipulating the object based on the grasp comprises controlling the robotic device to re-grasp the object using the grasp. In another aspect, re-grasping the object is performed during movement of the object from a first location to a second location. In another aspect, re-grasping the object comprises rotating a grasp of the object by the end effector of the robotic device within the first grasp region. In another aspect, re-grasping the object comprises translating a grasp of the object by the end effector of the robotic device along a first axis of the first grasp region. In another aspect, re-grasping the object is performed in response to detecting that the object is slipping relative to the end effector of the robotic device. In another aspect, manipulating the object based on the grasp comprises controlling the robotic device to place the object at a location using the grasp.
In another aspect, the end effector is a first end effector of the robotic device, the grasp is a first grasp for the first end effector, the information associated with the capability of the robotic device to perform the grasp is first information, and the robotic device includes a second end effector. The method further includes assigning a second grasp region to the second end effector, determining, within the second grasp region, a second grasp for the second end effector, wherein the second grasp is determined based, at least in part, on second information associated with a capability of the robotic device to perform the second grasp, and manipulating the object based on the first grasp and the second grasp. In another aspect, determining the first grasp and the second grasp comprises using a technique that considers the first information and the second information. In another aspect, the first grasp region is located on an opposite side of the object from the second grasp region. In another aspect, the first grasp region is located at a different height on the object from the second grasp region to improve stability of the object when manipulated by the robotic device.
In some embodiments, the invention features a computing system of a robot. The computing system includes data processing hardware and memory hardware in communication with the data processing hardware. The memory hardware is configured to store instructions that when executed on the data processing hardware cause the data processing hardware to perform operations. The operations include associating a first grasp region of an object with an end effector of the robot, wherein the first grasp region includes a set of potential grasps achievable by the end effector of the robot, determining, within the first grasp region, a grasp, wherein the grasp is determined based, at least in part, on information associated with a capability of the robot to perform the grasp, and instructing the robot to manipulate the object based on the grasp.
In one aspect, the operations further include defining a set of grasp regions for the object, wherein the set of grasp regions includes the first grasp region. In another aspect, defining the set of grasp regions for the object is based, at least in part, on a shape of the object and/or information associated with at least one end effector of the robot. In another aspect, defining the set of grasp regions for the object comprises fitting one or more primitive shapes to the object. In another aspect, the one or more primitive shapes include at least one of a cylinder, a prism, a disk, or a plane.
In another aspect, the end effector of the robot includes a robotic gripper having a set of appendages, and defining the set of grasp regions for the object comprises defining the first grasp region as having a shape capable of being grasped by at least two appendages in the set of appendages. In another aspect, the shape capable of being grasped by at least two appendages in the set of appendages includes a cylinder, a prism or a disk. In another aspect, the shape capable of being grasped by at least two appendages in the set of appendages is defined along a single axis, and the set of potential grasps within the first grasp region include grasps having different rotations around the single axis and/or translations along the single axis. In another aspect, the at least one end effector of the robot includes a suction-based gripper, and defining the set of grasp regions for the object comprises defining the first grasp region as a substantially planar region configured to be grasped by the suction-based gripper. In another aspect, defining a set of grasp regions for the object comprises determining, using simulation, a set of possible grasps of the at least one end effector on the object, clustering possible grasps within the set of possible grasps to generate a set of clusters, and defining the set of grasp regions based on the set of clusters.
In another aspect, the end effector of the robot includes a gripper having a set of appendages, and the first grasp region has a shape capable of being grasped by at least two appendages of the set of appendages. In another aspect, the shape capable of being grasped by at least two appendages of the set of appendages includes a cylinder, a prism or a disk. In another aspect, the shape capable of being grasped by at least two appendages of the set of appendages is defined along a single axis, and the set of potential grasps within the first grasp region include grasps having different rotations around the single axis and/or translations along the single axis. In another aspect, determining, within the first grasp region, a grasp, comprises determining the grasp based, at least in part, on a pose of the robot.
In another aspect, determining, within the first grasp region, a grasp, comprises determining the grasp based, at least in part, on a manipulation goal of the robot. In another aspect, the grasp is determined based, at least in part, on a placement location of the object associated with the manipulation goal. In another aspect, the grasp is determined based, at least in part, on a location of the object prior to manipulating the object. In another aspect, determining, within the first grasp region, a grasp, comprises determining the grasp based, at least in part, on a balance constraint of the robot when manipulating the object.
In another aspect, determining, within the first grasp region, a grasp, comprises determining the grasp based, at least in part, on a collision constraint associated with the robot when manipulating the object. In another aspect, the collision constraint is a self-collision constraint of the robot and/or an external collision constraint of the robot with an object in an environment of the robot. In another aspect, determining, within the first grasp region, a grasp, comprises determining the grasp based, at least in part, on a gaze constraint associated with a camera of the robot when manipulating the object. In another aspect, determining, within the first grasp region, a grasp, comprises determining the grasp based, at least in part, on one or more kinematic constraints of the robot.
In another aspect, determining, within the first grasp region, a grasp, comprises determining the grasp using a technique that considers for each potential grasp in the set of potential grasps within the first grasp region, a location of the potential grasp within the first grasp region, and one or more kinematic or reachability constraints of the robot to achieve the potential grasp. In another aspect, the technique is an optimization technique. In another aspect, the first grasp region has a first axis, and the set of potential grasps within the first grasp region include potential grasps at different rotations around the first axis and/or translations along the first axis.
In another aspect, manipulating the object based on the grasp comprises controlling the robot to grasp the object using the grasp. In another aspect, manipulating the object based on the grasp comprises controlling the robot to re-grasp the object using the grasp. In another aspect, re-grasping the object is performed during movement of the object from a first location to a second location. In another aspect, re-grasping the object comprises rotating a grasp of the object by the end effector of the robot within the first grasp region. In another aspect, re-grasping the object comprises translating a grasp of the object by the end effector of the robot along a first axis of the first grasp region. In another aspect, re-grasping the object is performed in response to detecting that the object is slipping relative to the end effector of the robot. In another aspect, manipulating the object based on the grasp comprises controlling the robot to place the object at a location using the grasp.
In another aspect, the end effector is a first end effector of the robot, the grasp is a first grasp for the first end effector, the information associated with the capability of the robot to perform the grasp is first information, and the robot includes a second end effector. The operations further include assigning a second grasp region to the second end effector, determining, within the second grasp region, a second grasp for the second end effector, wherein the second grasp is determined based, at least in part, on second information associated with a capability of the robot to perform the second grasp, and manipulating the object based on the first grasp and the second grasp. In another aspect, determining the first grasp and the second grasp comprises using a technique that considers the first information and the second information. In another aspect, the first grasp region is located on an opposite side of the object from the second grasp region. In another aspect, the first grasp region is located at a different height on the object from the second grasp region to improve stability of the object when manipulated by the robot.
In some embodiments, the invention features a robot. The robot includes an end effector, data processing hardware, and memory hardware in communication with the data processing hardware. The memory hardware is configured to store instructions that when executed on the data processing hardware cause the data processing hardware to perform operations. The operations include associating a first grasp region of an object with the end effector, wherein the first grasp region includes a set of potential grasps achievable by the end effector, determining, within the first grasp region, a grasp, wherein the grasp is determined based, at least in part, on information associated with a capability of the robot to perform the grasp, and instructing the robot to manipulate the object based on the grasp.
In one aspect, the operations further include defining a set of grasp regions for the object, wherein the set of grasp regions includes the first grasp region. In another aspect, defining the set of grasp regions for the object is based, at least in part, on a shape of the object and/or information associated with at least one end effector of the robot. In another aspect, defining the set of grasp regions for the object comprises fitting one or more primitive shapes to the object. In another aspect, the one or more primitive shapes include at least one of a cylinder, a prism, a disk, or a plane.
In another aspect, the end effector includes a robotic gripper having a set of appendages, and defining the set of grasp regions for the object comprises defining the first grasp region as having a shape capable of being grasped by at least two appendages in the set of appendages. In another aspect, the shape capable of being grasped by at least two appendages in the set of appendages includes a cylinder, a prism or a disk. In another aspect, the shape capable of being grasped by at least two appendages in the set of appendages is defined along a single axis, and the set of potential grasps within the first grasp region include grasps having different rotations around the single axis and/or translations along the single axis. In another aspect, the at least one end effector of the robot includes a suction-based gripper, and defining the set of grasp regions for the object comprises defining the first grasp region as a substantially planar region configured to be grasped by the suction-based gripper.
In another aspect, defining a set of grasp regions for the object comprises determining, using simulation, a set of possible grasps of the at least one end effector on the object, clustering possible grasps within the set of possible grasps to generate a set of clusters, and defining the set of grasp regions based on the set of clusters. In another aspect, the end effector of the robot includes a gripper having a set of appendages, and the first grasp region has a shape capable of being grasped by at least two appendages of the set of appendages. In another aspect, the shape capable of being grasped by at least two appendages of the set of appendages includes a cylinder, a prism or a disk. In another aspect, the shape capable of being grasped by at least two appendages of the set of appendages is defined along a single axis, and the set of potential grasps within the first grasp region include grasps having different rotations around the single axis and/or translations along the single axis.
In another aspect, determining, within the first grasp region, a grasp, comprises determining the grasp based, at least in part, on a pose of the robot. In another aspect, determining, within the first grasp region, a grasp, comprises determining the grasp based, at least in part, on a manipulation goal of the robot. In another aspect, the grasp is determined based, at least in part, on a placement location of the object associated with the manipulation goal. In another aspect, the grasp is determined based, at least in part, on a location of the object prior to manipulating the object. In another aspect, determining, within the first grasp region, a grasp, comprises determining the grasp based, at least in part, on a balance constraint of the robot when manipulating the object.
In another aspect, determining, within the first grasp region, a grasp, comprises determining the grasp based, at least in part, on a collision constraint associated with the robot when manipulating the object. In another aspect, the collision constraint is a self-collision constraint of the robot and/or an external collision constraint of the robot with an object in an environment of the robot. In another aspect, determining, within the first grasp region, a grasp, comprises determining the grasp based, at least in part, on a gaze constraint associated with a camera of the robot when manipulating the object. In another aspect, determining, within the first grasp region, a grasp, comprises determining the grasp based, at least in part, on one or more kinematic constraints of the robot.
In another aspect, determining, within the first grasp region, a grasp, comprises determining the grasp using a technique that considers for each potential grasp in the set of potential grasps within the first grasp region, a location of the potential grasp within the first grasp region, and one or more kinematic or reachability constraints of the robot to achieve the potential grasp. In another aspect, the technique is an optimization technique. In another aspect, the first grasp region has a first axis, and the set of potential grasps within the first grasp region include potential grasps at different rotations around the first axis and/or translations along the first axis.
In another aspect, manipulating the object based on the grasp comprises controlling the robot to grasp the object using the grasp. In another aspect, manipulating the object based on the grasp comprises controlling the robot to re-grasp the object using the grasp. In another aspect, re-grasping the object is performed during movement of the object from a first location to a second location. In another aspect, re-grasping the object comprises rotating a grasp of the object by the end effector of the robot within the first grasp region. In another aspect, re-grasping the object comprises translating a grasp of the object by the end effector of the robot along a first axis of the first grasp region. In another aspect, re-grasping the object is performed in response to detecting that the object is slipping relative to the end effector of the robot. In another aspect, manipulating the object based on the grasp comprises controlling the robot to place the object at a location using the grasp.
In another aspect, the end effector is a first end effector, the grasp is a first grasp for the first end effector, the information associated with the capability of the robot to perform the grasp is first information, and the robot includes a second end effector. The operations further include assigning a second grasp region to the second end effector, determining, within the second grasp region, a second grasp for the second end effector, wherein the second grasp is determined based, at least in part, on second information associated with a capability of the robot to perform the second grasp, and manipulating the object based on the first grasp and the second grasp. In another aspect, determining the first grasp and the second grasp comprises using a technique that considers the first information and the second information. In another aspect, the first grasp region is located on an opposite side of the object from the second grasp region. In another aspect, the first grasp region is located at a different height on the object from the second grasp region to improve stability of the object when manipulated by the robot.
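By way of illustration only, the following Python sketch shows one way the method summarized in the aspects above could be organized: a grasp region is associated with an end effector, a grasp is selected from the region's continuous set of potential grasps using information about what the robot can reach, and the result would then be handed to the robot for execution. The data structure, scoring heuristic, and function names are hypothetical and do not describe any particular claimed implementation.

```python
from dataclasses import dataclass

@dataclass
class GraspRegion:
    """A continuous set of potential grasps, parameterized along one axis."""
    axis_length: float          # translation range along the region's axis (meters)
    allows_rotation: bool       # whether rotation about the axis is permitted

def score_grasp(translation, rotation, robot_capability):
    """Toy scoring stub: prefer grasps near the middle of the region that the
    robot can reach (robot_capability is a callable returning True/False)."""
    if not robot_capability(translation, rotation):
        return float("-inf")
    return -abs(translation)    # favor the center of the region

def determine_grasp(region, robot_capability, samples=50):
    """Pick the best grasp within a single grasp region."""
    best = None
    for i in range(samples):
        t = (i / (samples - 1) - 0.5) * region.axis_length
        for r in ([0.0, 3.14159] if region.allows_rotation else [0.0]):
            s = score_grasp(t, r, robot_capability)
            if best is None or s > best[0]:
                best = (s, t, r)
    return best  # (score, translation, rotation)

# usage: a 20 cm cylindrical handle, reachable only on its near half
region = GraspRegion(axis_length=0.2, allows_rotation=True)
reachable = lambda t, r: t >= 0.0
print(determine_grasp(region, reachable))
```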
The advantages of the invention, together with further advantages, may be better understood by referring to the following description taken in conjunction with the accompanying drawings. The drawings are not necessarily to scale, and emphasis is instead generally placed upon illustrating the principles of the invention.
An example implementation involves a robotic device configured with at least one robotic limb, one or more sensors, and a processing system. The robotic limb may be an articulated robotic appendage including a number of members connected by joints. The robotic limb may also include a number of actuators (e.g., 2-5 actuators) coupled to the members of the limb that facilitate movement of the robotic limb through a range of motion limited by the joints connecting the members. The sensors may be configured to measure properties of the robotic device, such as angles of the joints, pressures within the actuators, joint torques, and/or positions, velocities, and/or accelerations of members of the robotic limb(s) at a given point in time. The sensors may also be configured to measure an orientation (e.g., a body orientation measurement) of the body of the robotic device (which may also be referred to herein as the “base” of the robotic device). Other example properties include the masses of various components of the robotic device, among other properties. The processing system of the robotic device may determine the angles of the joints of the robotic limb, either directly from angle sensor information or indirectly from other sensor information from which the joint angles can be calculated. The processing system may then estimate an orientation of the robotic device based on the sensed orientation of the base of the robotic device and the joint angles.
An orientation may herein refer to an angular position of an object. In some instances, an orientation may refer to an amount of rotation (e.g., in degrees or radians) about three axes. In some cases, an orientation of a robotic device may refer to the orientation of the robotic device with respect to a particular reference frame, such as the ground or a surface on which it stands. An orientation may describe the angular position using Euler angles, Tait-Bryan angles (also known as yaw, pitch, and roll angles), and/or quaternions. In some instances, such as on a computer-readable medium, the orientation may be represented by an orientation matrix and/or an orientation quaternion, among other representations.
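As a concrete example of the representations mentioned above, the following sketch converts Tait-Bryan yaw, pitch, and roll angles into a unit quaternion. This is standard rotation arithmetic and does not depend on any particular robotic device.

```python
import math

def ypr_to_quaternion(yaw, pitch, roll):
    """Convert Tait-Bryan yaw (about Z), pitch (about Y), and roll (about X)
    angles, in radians, to a unit quaternion (w, x, y, z)."""
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    return (
        cr * cp * cy + sr * sp * sy,   # w
        sr * cp * cy - cr * sp * sy,   # x
        cr * sp * cy + sr * cp * sy,   # y
        cr * cp * sy - sr * sp * cy,   # z
    )

# example: a body yawed 90 degrees with no pitch or roll
print(ypr_to_quaternion(math.pi / 2, 0.0, 0.0))   # ~ (0.707, 0, 0, 0.707)
```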
In some scenarios, measurements from sensors on the base of the robotic device may indicate that the robotic device is oriented in such a way and/or has a linear and/or angular velocity that requires control of one or more of the articulated appendages in order to maintain balance of the robotic device. In these scenarios, however, it may be the case that the limbs of the robotic device are oriented and/or moving such that balance control is not required. For example, the body of the robotic device may be tilted to the left, and sensors measuring the body's orientation may thus indicate a need to move limbs to balance the robotic device; however, one or more limbs of the robotic device may be extended to the right, causing the robotic device to be balanced despite the sensors on the base of the robotic device indicating otherwise. The limbs of a robotic device may apply a torque on the body of the robotic device and may also affect the robotic device's center of mass. Thus, orientation and angular velocity measurements of one portion of the robotic device may be an inaccurate representation of the orientation and angular velocity of the combination of the robotic device's body and limbs (which may be referred to herein as the “aggregate” orientation and angular velocity).
In some implementations, the processing system may be configured to estimate the aggregate orientation and/or angular velocity of the entire robotic device based on the sensed orientation of the base of the robotic device and the measured joint angles. The processing system may have stored thereon a relationship between the joint angles of the robotic device and the extent to which the joint angles of the robotic device affect the orientation and/or angular velocity of the base of the robotic device. The relationship between the joint angles of the robotic device and the motion of the base of the robotic device may be determined based on the kinematics and mass properties of the limbs of the robotic device. In other words, the relationship may specify the effects that the joint angles have on the aggregate orientation and/or angular velocity of the robotic device. Additionally, the processing system may be configured to determine components of the orientation and/or angular velocity of the robotic device caused by internal motion and components of the orientation and/or angular velocity of the robotic device caused by external motion. Further, the processing system may differentiate components of the aggregate orientation in order to determine the robotic device's aggregate yaw rate, pitch rate, and roll rate (which may be collectively referred to as the “aggregate angular velocity”).
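The following sketch illustrates, under strong simplifying assumptions, how a base angular velocity measurement might be combined with per-joint contributions to estimate an aggregate angular velocity. The per-joint effect vectors stand in for the stored relationship described above; in practice that relationship would be derived from the limbs' kinematics and mass properties, and the linear combination shown here is a conceptual placeholder rather than an actual estimator.

```python
import numpy as np

def aggregate_angular_velocity(base_ang_vel, joint_velocities, joint_effect_vectors):
    """Combine the base IMU's angular velocity with the contribution of limb
    motion. Each entry of `joint_effect_vectors` is a 3-vector standing in for
    the stored relationship between that joint's motion and the body's rates."""
    contribution = np.zeros(3)
    for qdot, effect in zip(joint_velocities, joint_effect_vectors):
        contribution += qdot * np.asarray(effect, dtype=float)
    return np.asarray(base_ang_vel, dtype=float) + contribution

# example: two joints, one of which mostly induces pitch when it moves
base = [0.0, 0.05, 0.0]                        # rad/s measured at the base
qdots = [0.5, -0.2]                            # joint velocities, rad/s
effects = [[0.0, 0.1, 0.0], [0.02, 0.0, 0.0]]  # per-joint effect on body rates
print(aggregate_angular_velocity(base, qdots, effects))
```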
In some implementations, the robotic device may also include a control system that is configured to control the robotic device on the basis of a simplified model of the robotic device. The control system may be configured to receive the estimated aggregate orientation and/or angular velocity of the robotic device, and subsequently control one or more jointed limbs of the robotic device to behave in a certain manner (e.g., maintain the balance of the robotic device). For instance, the control system may determine locations at which to place the robotic device's feet and/or the force to exert by the robotic device's feet on a surface based on the aggregate orientation.
In some implementations, the robotic device may include force sensors that measure or estimate the external forces (e.g., the force applied by a leg of the robotic device against the ground) along with kinematic sensors to measure the orientation of the limbs of the robotic device. The processing system may be configured to determine the robotic device's angular momentum based on information measured by the sensors.
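For illustration, the angular momentum of a collection of rigid links about the robot's center of mass can be computed as the sum of each link's spin and orbital terms. The sketch below assumes per-link states have already been resolved into a common world frame, which is an assumption about upstream processing rather than a description of the sensors above.

```python
import numpy as np

def angular_momentum(links, com):
    """Angular momentum about the robot's center of mass, summed over links.
    Each link is a dict with mass m, inertia I (3x3, world frame), position r,
    linear velocity v, and angular velocity w."""
    H = np.zeros(3)
    for link in links:
        r_rel = np.asarray(link["r"]) - np.asarray(com)
        H += np.asarray(link["I"]) @ np.asarray(link["w"])        # spin term
        H += link["m"] * np.cross(r_rel, np.asarray(link["v"]))   # orbital term
    return H

# example: a single 2 kg link moving past the center of mass
link = {"m": 2.0, "I": np.eye(3) * 0.01, "r": [0.3, 0.0, 0.0],
        "v": [0.0, 1.0, 0.0], "w": [0.0, 0.0, 0.5]}
print(angular_momentum([link], com=[0.0, 0.0, 0.0]))
```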
The control system may be configured to actuate one or more actuators connected across components of a robotic leg. The actuators may be controlled to raise or lower the robotic leg. In some cases, a robotic leg may include actuators to control the robotic leg's motion in three dimensions. Depending on the particular implementation, the control system may be configured to use the aggregate orientation, along with other sensor measurements, as a basis to control the robot in a certain manner (e.g., stationary balancing, walking, running, galloping, etc.).
In some implementations, multiple relationships between the joint angles and their effect on the orientation and/or angular velocity of the base of the robotic device may be stored on the processing system. The processing system may select a particular relationship with which to determine the aggregate orientation and/or angular velocity based on the joint angles. For example, one relationship may be associated with a particular joint being between 0 and 90 degrees, and another relationship may be associated with the particular joint being between 91 and 180 degrees. The selected relationship may more accurately estimate the aggregate orientation of the robotic device than the other relationships.
In some implementations, the processing system may have stored thereon more than one relationship between the joint angles of the robotic device and the extent to which the joint angles of the robotic device affect the orientation and/or angular velocity of the base of the robotic device. Each relationship may correspond to one or more ranges of joint angle values (e.g., operating ranges). In some implementations, the robotic device may operate in one or more modes. A mode of operation may correspond to one or more of the joint angles being within a corresponding set of operating ranges. In these implementations, each mode of operation may correspond to a certain relationship.
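A minimal sketch of selecting among stored relationships based on joint angle operating ranges is shown below; the range boundaries and relationship names are placeholders taken from the example in the preceding paragraphs.

```python
def select_relationship(joint_angle_deg, relationships):
    """Pick the stored relationship whose operating range contains the current
    joint angle; ranges and relationship names are illustrative."""
    for (low, high), relationship in relationships:
        if low <= joint_angle_deg <= high:
            return relationship
    raise ValueError("joint angle outside all operating ranges")

# example mirroring the text: one relationship for 0-90 degrees, another for 91-180
relationships = [((0, 90), "relationship_A"), ((91, 180), "relationship_B")]
print(select_relationship(45, relationships))    # -> relationship_A
print(select_relationship(120, relationships))   # -> relationship_B
```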
The angular velocity of the robotic device may have multiple components describing the robotic device's orientation (e.g., rotational angles) along multiple planes. From the perspective of the robotic device, a rotational angle of the robotic device turned to the left or the right may be referred to herein as “yaw.” A rotational angle of the robotic device upwards or downwards may be referred to herein as “pitch.” A rotational angle of the robotic device tilted to the left or the right may be referred to herein as “roll.” Additionally, the rate of change of the yaw, pitch, and roll may be referred to herein as the “yaw rate,” the “pitch rate,” and the “roll rate,” respectively.
Referring now to the figures,
As shown in
Processor(s) 102 may operate as one or more general-purpose processors or special-purpose processors (e.g., digital signal processors, application specific integrated circuits, etc.). The processor(s) 102 can be configured to execute computer-readable program instructions 106 that are stored in the data storage 104 and are executable to provide the operations of the robotic device 100 described herein. For instance, the program instructions 106 may be executable to provide operations of controller 108, where the controller 108 may be configured to cause activation and/or deactivation of the mechanical components 114 and the electrical components 116. The processor(s) 102 may operate and enable the robotic device 100 to perform various functions, including the functions described herein.
The data storage 104 may exist as various types of storage media, such as a memory. For example, the data storage 104 may include or take the form of one or more computer-readable storage media that can be read or accessed by processor(s) 102. The one or more computer-readable storage media can include volatile and/or non-volatile storage components, such as optical, magnetic, organic or other memory or disk storage, which can be integrated in whole or in part with processor(s) 102. In some implementations, the data storage 104 can be implemented using a single physical device (e.g., one optical, magnetic, organic or other memory or disk storage unit), while in other implementations, the data storage 104 can be implemented using two or more physical devices, which may communicate electronically (e.g., via wired or wireless communication). Further, in addition to the computer-readable program instructions 106, the data storage 104 may include additional data such as diagnostic data, among other possibilities.
The robotic device 100 may include at least one controller 108, which may interface with the robotic device 100. The controller 108 may serve as a link between portions of the robotic device 100, such as a link between mechanical components 114 and/or electrical components 116. In some instances, the controller 108 may serve as an interface between the robotic device 100 and another computing device. Furthermore, the controller 108 may serve as an interface between the robotic device 100 and a user(s). The controller 108 may include various components for communicating with the robotic device 100, including one or more joysticks or buttons, among other features. The controller 108 may perform other operations for the robotic device 100 as well. Other examples of controllers may exist as well.
Additionally, the robotic device 100 includes one or more sensor(s) 110 such as force sensors, proximity sensors, motion sensors, load sensors, position sensors, touch sensors, depth sensors, ultrasonic range sensors, and/or infrared sensors, among other possibilities. The sensor(s) 110 may provide sensor data to the processor(s) 102 to allow for appropriate interaction of the robotic device 100 with the environment as well as monitoring of operation of the systems of the robotic device 100. The sensor data may be used in evaluation of various factors for activation and deactivation of mechanical components 114 and electrical components 116 by controller 108 and/or a computing system of the robotic device 100.
The sensor(s) 110 may provide information indicative of the environment of the robotic device for the controller 108 and/or computing system to use to determine operations for the robotic device 100. For example, the sensor(s) 110 may capture data corresponding to the terrain of the environment or location of nearby objects, which may assist with environment recognition and navigation, etc. In an example configuration, the robotic device 100 may include a sensor system that may include a camera, RADAR, LIDAR, time-of-flight camera, global positioning system (GPS) transceiver, and/or other sensors for capturing information of the environment of the robotic device 100. The sensor(s) 110 may monitor the environment in real-time and detect obstacles, elements of the terrain, weather conditions, temperature, and/or other parameters of the environment for the robotic device 100.
Further, the robotic device 100 may include other sensor(s) 110 configured to receive information indicative of the state of the robotic device 100, including sensor(s) 110 that may monitor the state of the various components of the robotic device 100. The sensor(s) 110 may measure activity of systems of the robotic device 100 and receive information based on the operation of the various features of the robotic device 100, such as the operation of extendable legs, arms, or other mechanical and/or electrical features of the robotic device 100. The sensor data provided by the sensors may enable the computing system of the robotic device 100 to determine errors in operation as well as monitor overall functioning of components of the robotic device 100.
For example, the computing system may use sensor data to determine the stability of the robotic device 100 during operations as well as measurements related to power levels, communication activities, components that require repair, among other information. As an example configuration, the robotic device 100 may include gyroscope(s), accelerometer(s), and/or other possible sensors to provide sensor data relating to the state of operation of the robotic device. Further, sensor(s) 110 may also monitor the current state of a function, such as a gait, that the robotic device 100 may currently be operating. Additionally, the sensor(s) 110 may measure a distance between a given robotic leg of a robotic device and a center of mass of the robotic device. Other example uses for the sensor(s) 110 may exist as well.
Additionally, the robotic device 100 may also include one or more power source(s) 112 configured to supply power to various components of the robotic device 100. Among possible power systems, the robotic device 100 may include a hydraulic system, electrical system, batteries, and/or other types of power systems. As an example illustration, the robotic device 100 may include one or more batteries configured to provide power to components via a wired and/or wireless connection. Within examples, components of the mechanical components 114 and electrical components 116 may each connect to a different power source or may be powered by the same power source. Components of the robotic device 100 may connect to multiple power sources as well.
Within example configurations, any type of power source may be used to power the robotic device 100, such as a gasoline and/or electric engine. Further, the power source(s) 112 may be charged using various types of charging, such as wired connections to an outside power source, wireless charging, combustion, or other examples. Other configurations may also be possible. Additionally, the robotic device 100 may include a hydraulic system configured to provide power to the mechanical components 114 using fluid power. Components of the robotic device 100 may operate based on hydraulic fluid being transmitted throughout the hydraulic system to various hydraulic motors and hydraulic cylinders, for example. The hydraulic system of the robotic device 100 may transfer a large amount of power through small tubes, flexible hoses, or other links between components of the robotic device 100. Other power sources may be included within the robotic device 100.
Mechanical components 114 can represent hardware of the robotic device 100 that may enable the robotic device 100 to operate and perform physical functions. As a few examples, the robotic device 100 may include actuator(s), extendable leg(s) (“legs”), arm(s), wheel(s), one or multiple structured bodies for housing the computing system or other components, and/or other mechanical components. The particular mechanical components 114 used may depend on the design of the robotic device 100 and may also be based on the functions and/or tasks the robotic device 100 may be configured to perform. As such, depending on the operation and functions of the robotic device 100, different mechanical components 114 may be available for the robotic device 100 to utilize. In some examples, the robotic device 100 may be configured to add and/or remove mechanical components 114, which may involve assistance from a user and/or other robotic device. For example, the robotic device 100 may be initially configured with four legs, but may be altered by a user or the robotic device 100 to remove two of the four legs to operate as a biped. Other examples of mechanical components 114 may be included.
The electrical components 116 may include various components capable of processing, transferring, and/or providing electrical charge or electric signals, for example. Among possible examples, the electrical components 116 may include electrical wires, circuitry, and/or wireless communication transmitters and receivers to enable operations of the robotic device 100. The electrical components 116 may interwork with the mechanical components 114 to enable the robotic device 100 to perform various operations. The electrical components 116 may be configured to provide power from the power source(s) 112 to the various mechanical components 114, for example. Further, the robotic device 100 may include electric motors. Other examples of electrical components 116 may exist as well.
In some implementations, the robotic device 100 may also include communication link(s) 118 configured to send and/or receive information. The communication link(s) 118 may transmit data indicating the state of the various components of the robotic device 100. For example, information sensed by sensor(s) 110 may be transmitted via the communication link(s) 118 to a separate device. Other diagnostic information indicating the integrity or health of the power source(s) 112, mechanical components 114, electrical components 116, processor(s) 102, data storage 104, and/or controller 108 may be transmitted via the communication link(s) 118 to an external communication device.
In some implementations, the robotic device 100 may receive information at the communication link(s) 118 that is processed by the processor(s) 102. The received information may indicate data that is accessible by the processor(s) 102 during execution of the program instructions 106, for example. Further, the received information may change aspects of the controller 108 that may affect the behavior of the mechanical components 114 and/or the electrical components 116. In some cases, the received information may indicate a query requesting a particular piece of information (e.g., the operational state of one or more of the components of the robotic device 100), and the processor(s) 102 may subsequently transmit that particular piece of information via the communication link(s) 118.
In some cases, the communication link(s) 118 include a wired connection. The robotic device 100 may include one or more ports to interface the communication link(s) 118 to an external device. The communication link(s) 118 may include, in addition to or alternatively to the wired connection, a wireless connection. Some example wireless connections may utilize a cellular connection, such as CDMA, EVDO, GSM/GPRS, or 4G telecommunication, such as WiMAX or LTE. Alternatively or in addition, the wireless connection may utilize a Wi-Fi connection to transmit data to a wireless local area network (WLAN). In some implementations, the wireless connection may also communicate over an infrared link, radio, Bluetooth, or a near-field communication (NFC) device.
The robotic device 200 may include a number of articulated appendages, such as robotic legs and/or robotic arms. Each articulated appendage may include a number of members connected by joints that allow the articulated appendage to move through certain degrees of freedom. Each member of an articulated appendage may have properties describing aspects of the member, such as its weight, weight distribution, length, and/or shape, among other properties. Similarly, each joint connecting the members of an articulated appendage may have known properties, such as the range of motion the joint allows, the size of the joint, and the distance between members connected by the joint, among other properties. A given joint may be a joint allowing one degree of freedom (e.g., a knuckle joint or a hinge joint), a joint allowing two degrees of freedom (e.g., a cylindrical joint), a joint allowing three degrees of freedom (e.g., a ball and socket joint), or a joint allowing four or more degrees of freedom. A degree of freedom may refer to the ability of a member connected to a joint to move about a particular translational or rotational axis.
The robotic device 200 may also include sensors to measure the angles of the joints of its articulated appendages. In addition, the articulated appendages may include a number of actuators that can be controlled to extend and retract members of the articulated appendages. In some cases, the angle of a joint may be determined based on the extent of protrusion or retraction of a given actuator. In some instances, the joint angles may be inferred from position data of inertial measurement units (IMUs) mounted on the members of an articulated appendage. In some implementations, the joint angles may be measured using rotary position sensors, such as rotary encoders. In other implementations, the joint angles may be measured using optical reflection techniques. Other joint angle measurement techniques may also be used.
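The sketch below shows two simple ways a joint angle could be implied by sensor readings of the kinds mentioned above: a rotary encoder count, or a linear actuator extension acting on a lever arm. The geometry in the second helper is a coarse approximation, and the constants are hypothetical rather than taken from any specific hardware.

```python
import math

def joint_angle_from_encoder(counts, counts_per_rev=4096):
    """Joint angle implied by a rotary encoder reading (radians)."""
    return 2.0 * math.pi * counts / counts_per_rev

def joint_angle_from_actuator(extension, lever_arm):
    """Rough angle implied by a linear actuator's extension acting on a lever
    arm; a small-geometry approximation, not a specific product's kinematics."""
    return math.asin(max(-1.0, min(1.0, extension / lever_arm)))

print(joint_angle_from_encoder(1024))          # quarter revolution ~ 1.57 rad
print(joint_angle_from_actuator(0.05, 0.20))   # ~0.25 rad for a 5 cm stroke
```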
The robotic device 200 may be configured to send sensor data from the articulated appendages to a device coupled to the robotic device 200 such as a processing system, a computing system, or a control system. The robotic device 200 may include a memory, either included in a device on the robotic device 200 or as a standalone component, on which sensor data is stored. In some implementations, the sensor data is retained in the memory for a certain amount of time. In some cases, the stored sensor data may be processed or otherwise transformed for use by a control system on the robotic device 200. In some cases, the robotic device 200 may also transmit the sensor data over a wired or wireless connection (or other electronic communication means) to an external device.
The humanoid robot 250 may include a number of articulated appendages, such as robotic legs 202, 204 and/or robotic arms 206, 208. The humanoid robot 250 may also include a robotic head 210, which may contain one or more vision sensors (e.g., cameras, infrared sensors, object sensors, range sensors, etc.). Each articulated appendage may include a number of members connected by joints that allow the articulated appendage to move through certain degrees of freedom. For example, each robotic leg 202, 204 may include a respective foot 212, 214, which may contact a surface (e.g., a ground surface). The legs 202, 204 may enable the robot 250 to travel at various speeds according to various gaits. In addition, each robotic arm 206, 208 may facilitate object manipulation, load carrying, and/or balancing of the robot 250. Each arm 206, 208 may also include one or more members connected by joints and may be configured to operate with various degrees of freedom. Each arm 206, 208 may also include a respective end effector (e.g., gripper, hand, etc.) 216, 218. The robot 250 may use end effectors 216, 218 for interacting with (e.g., gripping, turning, pulling, and/or pushing) objects. Each end effector 216, 218 may include various types of appendages or attachments, such as fingers, attached tools or grasping mechanisms.
The robot 250 may also include sensors to measure the angles of the joints of its articulated appendages. In addition, the articulated appendages may include a number of actuators that can be controlled to extend and/or retract members of the articulated appendages. In some embodiments, the angle of a joint may be determined based on the extent of protrusion and/or retraction of a given actuator. In some embodiments, the joint angles may be inferred from position data of inertial measurement units (IMUs) mounted on the members of an articulated appendage. In some embodiments, the joint angles may be measured using rotary position sensors, such as rotary encoders. In some embodiments, the joint angles may be measured using optical reflection techniques. Other joint angle measurement techniques may also be used.
The perception module 308 may be configured to perceive one or more aspects of the environment of the robotic device 300 and/or provide input reflecting the environment to the computing architecture 304 (e.g., input reflecting an object to be manipulated by the robotic device). For example, in some embodiments, the perception module 308 can sense aspects of the environment using a RGB camera, a depth camera, a LIDAR or stereo vision device, or another piece of equipment with suitable sensory capabilities. In some embodiments, one or more additional modules (not shown in
The kinematic state estimation module 316 may be configured to track kinematic data for the robotic device 300 (e.g., a form of “robot data”) and/or one or more grasped objects (e.g., a form of “object data”). In some embodiments, the kinematic data for the robotic device 300 includes one or more vectors, which may include joint positions, joint velocities, joint accelerations, angular orientations, angular velocities, angular accelerations, sensed forces, or other parameters suitable to characterize the kinematics of the robotic device 300 and/or one or more grasped objects.
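One possible, purely illustrative layout for such a kinematic state vector is sketched below as a Python dataclass; the field names and units are assumptions and do not describe the actual interface of the kinematic state estimation module 316.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class KinematicState:
    """One possible layout for the 'robot data' tracked by a kinematic state
    estimator; field names and units are illustrative."""
    joint_positions: List[float] = field(default_factory=list)      # rad
    joint_velocities: List[float] = field(default_factory=list)     # rad/s
    joint_accelerations: List[float] = field(default_factory=list)  # rad/s^2
    body_orientation: List[float] = field(default_factory=lambda: [1, 0, 0, 0])  # quaternion w,x,y,z
    body_angular_velocity: List[float] = field(default_factory=lambda: [0, 0, 0])  # rad/s
    sensed_forces: List[float] = field(default_factory=list)        # N, per contact

state = KinematicState(joint_positions=[0.1, -0.4], joint_velocities=[0.0, 0.2])
print(state)
```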
As described above, some prior art techniques for grasping and manipulating objects with a robotic device are inflexible due to their commitment to a single grasp prior to determining whether the robot is capable of moving in such a way as to perform the grasp. The inventors have recognized and appreciated that it may be useful to have flexibility to choose between a set of grasps when planning object manipulation behaviors for a robotic device. In some tasks, and for many objects, there are an infinite number of potential discrete grasps, with the difference between them being a continuous rotation and/or translation along an axis or other motion across a manifold of the object. Some conventional grasp techniques may attempt to identify an optimal grasp from among the infinite set of grasps using a machine learning (e.g., neural network) approach. However, such an approach may require considerable computation power and take considerable time due to the requirement to search over a large surface of the object. Some embodiments narrow the infinite set of grasps by defining grasp regions for an object, each of which represents a set of potential grasps using a low-dimensional, continuous set of parameters. In some embodiments, a grasp region may be implemented as a parameterization that may be easily translated into a continuous optimization. By contrast, some conventional grasping techniques describe graspable parts of an object implicitly by creating a function representing the quality of a single grasp, and searching for grasps by sampling or a form of gradient ascent. Some embodiments separate the continuous choice (where in a grasp region to grasp) from the discrete choice (which grasp region to use), which facilitates incorporation into a continuous optimization process that includes other objectives and/or constraints.
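The separation described above, between the discrete choice of grasp region and the continuous choice of where to grasp within it, can be illustrated with a small optimization sketch. The example below assumes SciPy is available and uses a toy cost function; in practice the objective could fold in reachability, balance, collision, and gaze terms, and nothing here reflects an actual implementation.

```python
import numpy as np
from scipy.optimize import minimize

def best_grasp_over_regions(regions, grasp_cost):
    """Run a continuous optimization inside each grasp region (over its
    low-dimensional parameters), then make the discrete choice of region by
    keeping the cheapest result. `grasp_cost` is a placeholder objective."""
    best = None
    for region_id, bounds in regions.items():
        x0 = np.array([(lo + hi) / 2.0 for lo, hi in bounds])  # start mid-region
        res = minimize(lambda x: grasp_cost(region_id, x), x0,
                       bounds=bounds, method="L-BFGS-B")
        if best is None or res.fun < best[0]:
            best = (res.fun, region_id, res.x)
    return best  # (cost, chosen region, in-region parameters)

# toy regions: parameters are (rotation about axis, translation along axis)
regions = {"handle": [(-3.14, 3.14), (0.0, 0.2)],
           "rim":    [(-3.14, 3.14), (0.0, 0.05)]}
cost = lambda rid, x: (x[1] - 0.15) ** 2 + (0.5 if rid == "rim" else 0.0)
print(best_grasp_over_regions(regions, cost))
```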
Based on the information received from the perception module 308 and/or the kinematic state estimation module 316 (and/or other sensory modules not shown in
In some embodiments, a set of grasp regions for an object may be determined “offline,” and the set of grasp regions for an object that was determined offline may be accessed “online” by grasp region determination module 320 during operation of the robotic device. During operation of the robotic device, one or more modules of the robotic device may be configured to determine, for example, which end effector(s) of the robotic device to grasp the object with, which grasp region(s) to associate with each end effector, and which grasp from the set of potential grasps within the grasp region, along with the whole-body robot posture, to use to achieve the grasp.
In some embodiments, a set of potential grasps within a grasp region may be defined relative to a single axis of the grasp region. For example, a cylindrical grasp region may have a single axis and a set of potential grasps within the grasp region may be defined as rotations around the single axis and/or translations along the single axis. Such a grasp region may be suitable to characterize objects that have cylindrical portions. For instance, if the object is a pole, a single-hand grasp on the pole can rotate about the length of the pole or slide along it and still remain a feasible grasp for grasping the pole.
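For a cylindrical grasp region of the kind described above, the two region parameters (rotation about the axis and translation along it) can be mapped to a grasp point and approach direction with simple geometry, as in the illustrative sketch below. The function and argument names are hypothetical.

```python
import numpy as np

def cylindrical_grasp_pose(region_origin, region_axis, radius, theta, t):
    """Map rotation `theta` about the region's axis and translation `t` along
    it to a grasp point and an approach direction toward the axis."""
    axis = np.asarray(region_axis, dtype=float)
    axis /= np.linalg.norm(axis)
    # build two directions perpendicular to the axis
    helper = np.array([1.0, 0.0, 0.0]) if abs(axis[0]) < 0.9 else np.array([0.0, 1.0, 0.0])
    u = np.cross(axis, helper); u /= np.linalg.norm(u)
    v = np.cross(axis, u)
    radial = np.cos(theta) * u + np.sin(theta) * v           # where on the rim
    grasp_point = np.asarray(region_origin) + t * axis + radius * radial
    approach = -radial                                        # approach toward the axis
    return grasp_point, approach

# a vertical pole of 2 cm radius: slide 10 cm up and rotate a quarter turn
point, approach = cylindrical_grasp_pose([0, 0, 0], [0, 0, 1], 0.02, np.pi / 2, 0.10)
print(point, approach)
```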
In some embodiments, a grasp selected from the set of potential grasps may be selected based, at least in part, on a quality of the grasp and/or other information including, but not limited to, capabilities of the robotic device to grasp the object given certain behaviors the robotic device attempts to perform. For instance, a potential grasp that combines the potential grasps 612 and 622 may be selected to grasp the handle 600 when it is determined that the grasp quality of that potential grasp is the highest for a given manipulation task the robotic device is to perform.
The inventors have recognized that it may be useful to reduce the large (e.g., infinite) number of potential grasps of an object to a discrete set of grasp regions as shown in
It should be appreciated that in some embodiments, not every portion of an object may be associated with a grasp region even though it may be possible to grasp the object at that location. For instance, the four bottom sections of the legs of object 700 shown in
In some embodiments, a set of grasp regions for an object may be implemented as a sparse representation of the set of all possible grasps on an object, with the sparse representation being informed by prior knowledge about the object and/or a robot configured to grasp the object. For instance, each grasp region may describe bounds on where a given grasp strategy for a particular end effector of a robot may successfully grip an object. In this way, a grasp region may encode a set of potential grasps that can be achieved for a given grasp policy/algorithm of a robot. As an example, in the case of a robot having a gripper with multiple appendages, each grasp region may represent a set of potential grasps capable of being achieved by at least two opposing appendages of the gripper. If the grasp policy/algorithm of the robot is simple (e.g., simply closing the two opposing appendages), small variations in the object geometry may affect whether a successful grasp can be achieved, which may result in smaller grasp regions. However, if the grasp policy/algorithm of the robot is more complex (e.g., a gripper with multiple appendages having multiple ways to wrap the appendages around an object to successfully grasp the object), such small variations in the object geometry may be less important when determining whether a secure grasp can be achieved, which may enable the use of larger grasp regions. In some embodiments, the sparse representation for a grasp region may be implemented as a parameterized grasping space (e.g., translations and rotations about a single axis of the grasp region) associated with a portion of the object. The parameterized grasping space may enable quick (e.g., near-instantaneous) computation of different grasps within the space, enabling the robotic device to grasp and/or re-grasp the portion of the object within the parameterized grasping space as the object is manipulated by the robotic device. Without such a computationally efficient implementation for recalculating the grasp, the robotic device may more easily lose its grasp on the object because it cannot make quick adjustments to its grasping position when needed to maintain control of the object during manipulation.
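By way of a non-limiting illustration of why such a parameterization makes re-computation cheap, the sketch below (an assumption about one possible implementation, not the disclosed one) recovers the nearest in-region grasp after the object slips by simply clamping the current parameters back into the region's bounds:

    # Illustrative sketch: with a (theta, z) parameterization, re-computing a valid
    # grasp after slip reduces to a projection (clamp) back into the region bounds,
    # which is effectively instantaneous compared with a full grasp search.
    # (Angle wrap-around is ignored here for simplicity.)
    import numpy as np

    def project_into_region(theta, z, theta_bounds, z_bounds):
        theta_p = float(np.clip(theta, *theta_bounds))
        z_p = float(np.clip(z, *z_bounds))
        return theta_p, z_p

    # The object slipped so the current grasp parameters drifted out of bounds;
    # the nearest grasp still inside the region is recovered in constant time.
    print(project_into_region(theta=3.3, z=-0.02,
                              theta_bounds=(-np.pi, np.pi), z_bounds=(0.0, 0.3)))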
As discussed in more detail below, some embodiments may be configured to associate one grasp region of the set of grasp regions with an end effector of a robotic device (e.g., a first claw end effector), and a grasp (e.g., an ideal grasp) from within the set of potential grasps of the associated grasp region may be determined for the end effector (e.g., using optimization). In some embodiments, multiple grasp regions may be associated with an end effector of a robotic device, and a grasp (e.g., an ideal grasp) may be determined for each of the associated grasp regions (e.g., by performing multiple optimizations in parallel). In some embodiments, a robotic device may include multiple end effectors (e.g., multiple claw end effectors), and each of the end effectors may be associated with one or more grasp regions for an object to determine how to grasp the object with one or more of the end effectors (e.g., to perform a bimanual grasp).
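As a non-limiting illustration of associating grasp regions with multiple end effectors, the sketch below (in which the region names and the stand-in cost are assumptions made for exposition, not the disclosed implementation) enumerates which grasp region each end effector uses (the discrete choice) and keeps the best-scoring assignment; the continuous choice within each region would then be refined by optimization as sketched earlier:

    # Illustrative sketch: enumerate grasp-region assignments for two end effectors
    # and score each assignment; real costs would encode reachability, collision,
    # and grasp-quality terms rather than this stand-in.
    from itertools import product

    regions = ["handle_top", "handle_side", "body_rim"]

    def assignment_cost(left_region, right_region):
        if left_region == right_region:
            return float("inf")          # forbid both hands in the same region
        # Stand-in preference: favor a handle region for the left hand.
        return (0.0 if left_region.startswith("handle") else 1.0) + 0.5

    best = min(product(regions, regions), key=lambda lr: assignment_cost(*lr))
    print("left:", best[0], "right:", best[1])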
In some embodiments, information about the grasp quality and/or location of a grasp for one end effector of the robotic device may be used to inform (e.g., constrain) the location of a grasp (or selection of a grasp region) for another end effector of the robotic device. For instance, in the example object shown in
Returning to the computing architecture 304 shown in
In some embodiments, selecting a grasp within a grasp region may depend, at least in part, on an estimated grasp quality metric associated with different potential grasps within the grasp region. For instance, when using optimization to select the grasp, the optimization may be configured to encourage selection of “more stable” grasps within a grasp region compared to “less stable” grasps based on a grasp quality associated with the set of potential grasps within the grasp region. In some embodiments, the grasp quality metric may be based, at least in part, on one or more model-based or data-driven grasp scoring metrics. Such metrics may be based on a grasping policy/algorithm of the robot and/or the object shape.
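As a concrete, non-limiting example of a simple model-based grasp scoring metric of the kind mentioned above (the function name and the antipodal formulation are assumptions made for exposition, not necessarily the metric used by the disclosed system), a two-fingered grasp can be scored by how well the line between the two contact points stays inside each contact's friction cone:

    # Illustrative sketch of a simple model-based grasp quality metric: an
    # antipodal check for a two-fingered gripper.  The grasp scores higher when
    # the line connecting the contacts lies well inside both friction cones.
    import numpy as np

    def antipodal_quality(p1, n1, p2, n2, friction_coeff=0.5):
        """p1, p2: contact points; n1, n2: inward-pointing contact normals."""
        cone_half_angle = np.arctan(friction_coeff)
        line = np.asarray(p2, float) - np.asarray(p1, float)
        line = line / np.linalg.norm(line)
        # Angle between the contact line and each contact normal.
        a1 = np.arccos(np.clip(np.dot(line, n1 / np.linalg.norm(n1)), -1.0, 1.0))
        a2 = np.arccos(np.clip(np.dot(-line, n2 / np.linalg.norm(n2)), -1.0, 1.0))
        # Quality in [0, 1]: 1 when both angles are zero, 0 outside the friction cone.
        worst = max(a1, a2)
        return max(0.0, 1.0 - worst / cone_half_angle)

    # Two opposing contacts on a 4 cm wide handle: a near-ideal antipodal grasp.
    print(antipodal_quality([0, -0.02, 0], [0, 1, 0], [0, 0.02, 0], [0, -1, 0]))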
In some embodiments, even though a robotic device may have two (or more) end effectors, it may be determined that grasping an object with a single end effector is preferred. For instance, if the handle of an object to be grasped is too small to permit bimanual grasping, one end effector may be controlled to grasp the handle, and the other end effector may be controlled to not interact with the object, to support the object from the bottom or side while lifting, or to perform some other action.
In some embodiments, the robot control module 328 may determine how to grasp an object based, at least in part, on a manipulation goal of the robotic device. For instance, the manipulation goal may include how the object is to be placed. For example, if the object is to be moved from one location on a surface to another location on the same surface, it may be desired to grasp the object using a more vertical grasp if possible (e.g., see
In some embodiments, it may be desirable to maintain the same grasp on an object throughout a pick-and-place operation. In such instances, determining a grasp by the robot control module 328 may include enforcing a constraint in the optimization that the selected grasp be compatible with both a pick operation and a place operation of the robotic device. In other embodiments, it may be permissible to change the grasp of the object during a pick-and-place operation (or any other operation in which an object is grasped and manipulated). In such embodiments, the use of grasp regions as described herein may facilitate such re-grasping by enabling a smooth transition (e.g., a sliding re-grasp along a single axis of the grasp region) from one grasp to another grasp within the same grasp region. Re-grasps of an object within the same grasp region may involve a simpler motion of the robotic device and/or less time to execute compared to a complete re-grasp of the object (e.g., a re-grasp that requires the robotic device to set the object down and re-grasp it). For example, performing a complete re-grasp of the object may require the optimization performed in the robot control module 328 to be re-executed, which may not necessarily be the case if the re-grasp is performed within the same grasp region. In some embodiments, multiple re-grasps within the same grasp region may be preferred compared to a single complete re-grasp in which the object must be released from the robotic device's grasp. In this way, grasp regions as described herein may be a useful construct for both grasp planning and also updating the grasp of an object as the object is manipulated (e.g., when the object slips in the end effector of the robotic device, to achieve some manipulation objective, etc.).
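As a rough, non-limiting sketch of the sliding re-grasp described above (an assumption about one possible implementation, not the disclosed one), intermediate grasps between the current and target parameters can be generated by interpolating along the region's single axis while staying inside its bounds:

    # Illustrative sketch: a sliding re-grasp within one grasp region as a short
    # sequence of intermediate grasps interpolated along the region's single axis.
    import numpy as np

    def sliding_regrasp(z_current, z_target, z_bounds, step=0.02):
        lo, hi = z_bounds
        z_target = float(np.clip(z_target, lo, hi))   # stay inside the region
        n_steps = max(1, int(np.ceil(abs(z_target - z_current) / step)))
        return list(np.linspace(z_current, z_target, n_steps + 1))

    # Slide the grasp 10 cm along a pole-like grasp region in 2 cm increments.
    print(sliding_regrasp(z_current=0.05, z_target=0.15, z_bounds=(0.0, 0.3)))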
In some embodiments, the joint optimization of the grasp on an object and the whole-body trajectory of the robot may be subject to various constraints including, but not limited to, a balance constraint of the robot when manipulating the object, a collision constraint (e.g., a self-collision constraint and/or an external collision constraint) associated with the robot when manipulating the object, a gaze constraint associated with a camera of the robotic device when manipulating the object, and a kinematic constraint of the robot associated with the object grasp being within reach of the robot's gripper. For instance, the gaze constraint may require the grasped object to be within a field of view of at least one camera of the robot at all times during manipulation of the object.
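By way of a non-limiting illustration of how such constraints can enter the joint optimization (the variable names, the reduction of posture to a single center-of-mass coordinate, and the simplified constraint forms are all assumptions made for exposition), balance and reach can be expressed as inequality constraints in a standard nonlinear program over the stacked grasp parameters and posture variables:

    # Illustrative sketch: expressing balance and reach as inequality constraints
    # in a joint optimization over grasp parameters and a (highly simplified)
    # robot posture.  Decision vector x = [theta, z, com_x], where com_x is a
    # stand-in for the posture's effect on the robot's center of mass.
    import numpy as np
    from scipy.optimize import minimize

    SUPPORT_HALF_WIDTH = 0.10   # balance: center of mass must stay over the support
    MAX_REACH = 0.80            # kinematic: grasp height z must be within reach

    def objective(x):
        theta, z, com_x = x
        # Prefer a grasp near the top of the region with a centered posture.
        return (z - 0.6) ** 2 + 0.1 * theta ** 2 + com_x ** 2

    constraints = [
        {"type": "ineq", "fun": lambda x: SUPPORT_HALF_WIDTH ** 2 - x[2] ** 2},  # balance
        {"type": "ineq", "fun": lambda x: MAX_REACH - x[1]},                     # reach
    ]
    res = minimize(objective, x0=[0.0, 0.3, 0.05], constraints=constraints)
    print(res.x)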
The inverse dynamics module 332 can receive output from the robot control module 328 and output a reference joint position and/or torque for each of the robotic joint servo controllers 336, which can be provided to actuators of the robotic device 300 to enable the robotic device 300 to execute its planned movement. In some embodiments, the inverse dynamics module 332 can track a desired wrench of the robotic device 300 as closely as possible or desired in a given situation. In some embodiments, the inverse dynamics module 332 can map a desired robot pose and/or one or more external wrenches to joint torques.
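As a minimal, non-limiting sketch of the kind of mapping such an inverse dynamics computation performs (the standard manipulator equation below is a textbook formulation assumed for illustration, not necessarily the module's implementation), joint torques can be computed from desired joint accelerations and an external wrench at the end effector:

    # Illustrative sketch: map desired joint accelerations and an external wrench
    # at the end effector to joint torques via the standard manipulator equation
    #   tau = M(q) * qdd_des + h(q, qd) - J(q)^T * F_ext
    import numpy as np

    def inverse_dynamics_torques(M, h, J, qdd_des, f_ext):
        """M: joint-space inertia matrix, h: Coriolis/gravity bias vector,
        J: end-effector Jacobian, qdd_des: desired joint accelerations,
        f_ext: external wrench applied at the end effector."""
        return M @ qdd_des + h - J.T @ f_ext

    # Toy two-joint example with made-up dynamics quantities.
    M = np.array([[1.2, 0.1], [0.1, 0.8]])
    h = np.array([0.05, 2.1])                            # includes gravity terms
    J = np.array([[0.4, 0.2], [0.0, 0.3], [0.1, 0.0]])   # maps 2 joints to a 3D force
    tau = inverse_dynamics_torques(M, h, J, qdd_des=np.array([0.5, -0.2]),
                                   f_ext=np.array([0.0, 0.0, -9.8]))
    print(tau)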
In some embodiments, inverse dynamics module 332 may receive grasp region information as input from grasp region determination module 320, and the grasp region information may be used by the inverse dynamics module 332 to optimize (e.g., jointly) selection of a grasp on the object by an end effector of the robotic device 300 and the reference joint position and/or torque for each of the robotic joint servo controllers 336. In some embodiments, inverse dynamics module 332 may include a more detailed kinematics description of the robot compared to robot control module 328. Considering the grasp region information at multiple and/or different stages of the robot control processing pipeline may enable a wider variety of grasps that can be selected to manipulate objects. For example
A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure.