The present technology is directed generally to robotic systems and, more specifically, to systems, processes, and techniques for grasping objects. More particularly, the present technology may be used for grasping flexible, wrapped, or bagged objects.
With their ever-increasing performance and decreasing cost, robots (e.g., machines configured to automatically/autonomously execute physical actions) are now extensively used in many different fields. Robots, for example, can be used to execute various tasks (e.g., manipulating or transferring an object through space) in manufacturing and/or assembly, packing and/or packaging, transport and/or shipping, etc. In executing the tasks, the robots can replicate human actions, thereby replacing or reducing the human involvement that is otherwise required to perform dangerous or repetitive tasks.
There remains a need for improved techniques and devices for robotically grasping, moving, and relocating objects with different form factors.
In an embodiment, a robotic grasping system is provided that includes an actuator arm, a suction gripping device connected to the actuator arm, and a pinch gripping device connected to the actuator arm.
In another embodiment, a robotic grasping system is provided that includes an actuator hub; a plurality of extension arms extending from the actuator hub in an at least partially lateral orientation; and a plurality of gripping devices arranged at ends of the plurality of extension arms.
In some aspects, the techniques described herein relate to a robotic grasping system including an actuator arm; a suction gripping device; and a pinch gripping device.
In some aspects, the techniques described herein relate to a robotic grasping system including an actuator hub; a plurality of extension arms extending from the actuator hub in an at least partially lateral orientation; and a plurality of gripping devices arranged at ends of the plurality of extension arms.
In some aspects, the techniques described herein relate to a robotic system for grasping objects, including: at least one processing circuit; and an end effector apparatus including: an actuator hub, a plurality of extension arms extending from the actuator hub in an at least partially lateral orientation, a plurality of gripping devices arranged at corresponding ends of the extension arms, wherein the actuator hub includes one or more actuators coupled to corresponding extension arms, the one or more actuators being configured to rotate the plurality of extension arms such that a gripping span of the plurality of gripping devices is adjusted, and a robot arm controlled by the at least one processing circuit and configured for attachment to the end effector apparatus, wherein the at least one processing circuit is configured to provide: a first command to cause at least one of the plurality of gripping devices to engage suction gripping, and a second command to cause at least one of the plurality of gripping devices to engage pinch gripping.
In some aspects, the techniques described herein relate to a robotic control method, for gripping a deformable object, operable by at least one processing circuit via a communication interface configured to communicate with a robot having a robot arm that includes an end effector apparatus having a plurality of movable dual gripping devices, each dual gripping device including a suction gripping device and a pinch gripping device, the method including: receiving image information describing the deformable object, wherein the image information is generated by a camera; performing, based on the image information, an object identification operation to generate grasping information for determining an object grasping command to grip the deformable object; outputting the object grasping command to the end effector apparatus, the object grasping command including: a dual gripping device movement command configured to cause the end effector apparatus to move each of the plurality of dual gripping devices to a respective engagement position, each dual gripping device being configured to engage the deformable object when moved to the respective engagement position; a suction gripping command configured to cause each dual gripping device to engage suction gripping of the deformable object using a respective suction gripping device; and a pinch gripping command configured to cause each dual gripping device to engage pinch gripping of the deformable object using a respective pinch gripping device; and outputting a lift object command configured to cause the robot arm to lift the deformable object.
In some aspects, the techniques described herein relate to a non-transitory computer-readable medium, configured with executable instructions for implementing a robot control method for gripping a deformable object, operable by at least one processing circuit via a communication interface configured to communicate with a robot having a robot arm that includes an end effector apparatus having a plurality of movable dual gripping devices, each dual gripping device including a suction gripping device and a pinch gripping device, the method including: receiving image information describing the deformable object, wherein the image information is generated by a camera; performing, based on the image information, an object identification operation to generate grasping information for determining an object grasping command to grip the deformable object; outputting the object grasping command to the end effector apparatus, the object grasping command including: a dual gripping device movement command configured to cause the end effector apparatus to move each of the plurality of dual gripping devices to a respective engagement position, each dual gripping device being configured to engage the deformable object when moved to the respective engagement position; a suction gripping command configured to cause each dual gripping device to engage suction gripping of the deformable object using a respective suction gripping device; and a pinch gripping command configured to cause each dual gripping device to engage pinch gripping of the deformable object using a respective pinch gripping device; and outputting a lift object command configured to cause the robot arm to lift the deformable object.
Systems, devices, and methods related to object grasping and gripping are provided. In an embodiment, a dual-mode gripping device is provided. The dual-mode gripping device may be configured to facilitate robotic grasping, gripping, transport, and movement of soft objects. As used herein, soft objects may refer to flexible objects, deformable objects, or partially deformable objects with a flexible outer casing, bagged objects, wrapped objects, and other objects that lack stiff and/or uniform sides. Soft objects may be difficult to grasp, grip, move, or transport due to difficulty in securing the object to a robotic gripper, a tendency to sag, flex, droop, or otherwise change shape when lifted, and/or a tendency to shift and move in unpredictable ways when transported. Such tendencies may result in difficulty in transport, with adverse consequences including dropped and misplaced objects. Although the technologies described herein are specifically discussed with respect to soft objects, the technology is not limited to such. Any suitable object of any shape, size, material, make-up, etc., that may benefit from robotic handling via the systems, devices, and methods discussed herein may be used. Additionally, although some specific references include the term “soft objects,” it may be understood that any objects discussed herein may include or may be soft objects.
In embodiments, a dual mode gripping system or device is provided to facilitate handling of soft objects. A dual mode gripping system consistent with embodiments hereof includes at least a pair of integrated gripping devices. The gripping devices may include a suction gripping device and a pinch gripping device. The suction gripping device may be configured to provide an initial or primary grip on the soft object. The pinch gripping device may be configured to provide a supplementary or secondary grip on the soft object.
In embodiments, an adjustable multi-point gripping system is provided. An adjustable multi-point gripping system, as described herein, may include a plurality of gripping devices, individually operable, with an adjustable gripping span. The multiple gripping devices may thus provide “multi-point” gripping of an object (such as a soft object). The “gripping span,” or area covered by the multiple gripping devices, may be adjustable, permitting a smaller gripping span for smaller objects, a larger span for larger objects, and/or manipulation of objects while they are gripped by the multiple gripping devices (e.g., folding an object). Multi-point gripping may be advantageous in providing additional gripping force as well. Spreading out the gripping points through adjustability may provide a more stable grip, as torques at any individual gripping point may be reduced. These advantages may be particularly useful with soft objects, where unpredictable movement may occur during object transport.
Robotic systems configured in accordance with embodiments hereof may autonomously execute integrated tasks by coordinating operations of multiple robots. Robotic systems, as described herein, may include any suitable combination of robotic devices, actuators, sensors, cameras, and computing systems configured to control, issue commands, receive information from robotic devices and sensors, access, analyze, and process data generated by robotic devices, sensors, and cameras, generate data or information usable in the control of robotic systems, and plan actions for robotic devices, sensors, and cameras. As used herein, robotic systems are not required to have immediate access to or control of robotic actuators, sensors, or other devices. Robotic systems, as described herein, may be computational systems configured to improve the performance of such robotic actuators, sensors, and other devices through reception, analysis, and processing of information.
The technology described herein provides technical improvements to a robotic system configured for use in object transport. Technical improvements described herein increase the facility with which specific objects, e.g., soft objects, deformable objects, partially deformable objects and other types of objects, may be manipulated, handled, and/or transported. The robotic systems and computational systems described herein further provide for increased efficiency in motion planning, trajectory planning, and robotic control of systems and devices configured to robotically interact with soft objects. By addressing this technical problem, the technology of robotic interaction with soft objects is improved.
The present application refers to systems and robotic systems. Robotic systems, as discussed herein, may include robotic actuator components (e.g., robotic arms, robotic grippers, etc.), various sensors (e.g., cameras, etc.), and various computing or control systems. As discussed herein, computing systems or control systems may be referred to as “controlling” various robotic components, such as robotic arms, robotic grippers, cameras, etc. Such “control” may refer to direct control of and interaction with the various actuators, sensors, and other functional aspects of the robotic components. For example, a computing system may control a robotic arm by issuing or providing all of the required signals to cause the various motors, actuators, and sensors to cause robotic movement. Such “control” may also refer to the issuance of abstract or indirect commands to a further robotic control system that then translates such commands into the necessary signals for causing robotic movement. For example, a computing system may control a robotic arm by issuing a command describing a trajectory or destination location to which the robotic arm should move, and a further robotic control system associated with the robotic arm may receive and interpret such a command and then provide the necessary direct signals to the various actuators and sensors of the robotic arm to cause the required movement.
In the following, specific details are set forth to provide an understanding of the presently disclosed technology. In embodiments, the techniques introduced here may be practiced without including each specific detail disclosed herein. In other instances, well-known features, such as specific functions or routines, are not described in detail to avoid unnecessarily obscuring the present disclosure. References in this description to “an embodiment,” “one embodiment,” or the like mean that a particular feature, structure, material, or characteristic being described is included in at least one embodiment of the present disclosure. Thus, the appearances of such phrases in this specification do not necessarily all refer to the same embodiment. On the other hand, such references are not necessarily mutually exclusive either. Furthermore, the particular features, structures, materials, or characteristics described with respect to any one embodiment can be combined in any suitable manner with those of any other embodiment, unless such items are mutually exclusive. It is to be understood that the various embodiments shown in the figures are merely illustrative representations and are not necessarily drawn to scale.
Several details describing structures or processes that are well-known and often associated with robotic systems and subsystems, but that can unnecessarily obscure some significant aspects of the disclosed techniques, are not set forth in the following description for purposes of clarity. Moreover, although the following disclosure sets forth several embodiments of different aspects of the present technology, several other embodiments may have different configurations or different components than those described in this section. Accordingly, the disclosed techniques may have other embodiments with additional elements or without several of the elements described below.
Many embodiments or aspects of the present disclosure described below may take the form of computer- or controller-executable instructions, including routines executed by a programmable computer or controller. Those skilled in the relevant art will appreciate that the disclosed techniques can be practiced on or with computer or controller systems other than those shown and described below. The techniques described herein can be embodied in a special-purpose computer or data processor that is specifically programmed, configured, or constructed to execute one or more of the computer-executable instructions described below. Accordingly, the terms “computer” and “controller” as generally used herein refer to any data processor and can include Internet appliances and handheld devices (including palm-top computers, wearable computers, cellular or mobile phones, multi-processor systems, processor-based or programmable consumer electronics, network computers, minicomputers, and the like). Information handled by these computers and controllers can be presented on any suitable display medium, including a liquid crystal display (LCD). Instructions for executing computer- or controller-executable tasks can be stored in or on any suitable computer-readable medium, including hardware, firmware, or a combination of hardware and firmware. Instructions can be contained in any suitable memory device, including, for example, a flash drive, USB device, and/or other suitable medium.
The terms “coupled” and “connected,” along with their derivatives, can be used herein to describe structural relationships between components. It should be understood that these terms are not intended as synonyms for each other. Rather, in particular embodiments, “connected” can be used to indicate that two or more elements are in direct contact with each other. Unless otherwise made apparent in the context, the term “coupled” can be used to indicate that two or more elements are in either direct or indirect (with other intervening elements between them) contact with each other, or that the two or more elements co-operate or interact with each other (e.g., as in a cause-and-effect relationship, such as for signal transmission/reception or for function calls), or both.
Any reference herein to image analysis by a computing system may be performed according to or using spatial structure information that may include depth information which describes respective depth values of various locations relative to a chosen point. The depth information may be used to identify objects or estimate how objects are spatially arranged. In some instances, the spatial structure information may include or may be used to generate a point cloud that describes locations of one or more surfaces of an object. Spatial structure information is merely one form of possible image analysis and other forms known by one skilled in the art may be used in accordance with the methods described herein.
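As a concrete illustration of how depth information can be converted into a point cloud of the kind described above, the following sketch back-projects a depth image into camera-frame 3D points. The pinhole intrinsics fx, fy, cx, cy and the convention that invalid pixels carry zero depth are assumptions made for this example and are not part of the disclosure.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Convert a depth image (meters) into an N x 3 point cloud in the camera frame.

    fx, fy, cx, cy are assumed pinhole-camera intrinsics; pixels with zero depth
    are treated as invalid and dropped.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel column / row indices
    valid = depth > 0
    z = depth[valid]
    x = (u[valid] - cx) * z / fx
    y = (v[valid] - cy) * z / fy
    return np.column_stack((x, y, z))

# Usage with a synthetic 480 x 640 depth map at roughly 1.5 m
depth = np.full((480, 640), 1.5)
cloud = depth_to_point_cloud(depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(cloud.shape)  # (307200, 3)
```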
In an embodiment, the camera 1200 (which may also be referred to as an image sensing device) may be a 2D camera and/or a 3D camera. For example,
In an embodiment, the system 1000 may be a robot operation system for facilitating robot interaction between a robot and various objects in the environment of the camera 1200. For example,
In an embodiment, the computing system 1100 of
In an embodiment, the computing system 1100 may form or be part of a vision system. The vision system may be a system which generates, e.g., vision information which describes an environment in which the robot 1300 is located, or, alternatively or in addition, describes an environment in which the camera 1200 is located. The vision information may include the 3D image information and/or the 2D image information discussed above, or some other image information. In some scenarios, if the computing system 1100 forms a vision system, the vision system may be part of the robot control system discussed above or may be separate from the robot control system. If the vision system is separate from the robot control system, the vision system may be configured to output information describing the environment in which the robot 1300 is located. The information may be outputted to the robot control system, which may receive such information from the vision system and perform motion planning and/or generate robot interaction movement commands based on the information. Further information regarding the vision system is detailed below.
In an embodiment, the computing system 1100 may communicate with the camera 1200 and/or with the robot 1300 via a direct connection, such as a connection provided via a dedicated wired communication interface, such as a RS-232 interface, a universal serial bus (USB) interface, and/or via a local computer bus, such as a peripheral component interconnect (PCI) bus. In an embodiment, the computing system 1100 may communicate with the camera 1200 and/or with the robot 1300 via a network. The network may be any type and/or form of network, such as a personal area network (PAN), a local-area network (LAN), e.g., Intranet, a metropolitan area network (MAN), a wide area network (WAN), or the Internet. The network may utilize different techniques and layers or stacks of protocols, including, e.g., the Ethernet protocol, the internet protocol suite (TCP/IP), the ATM (Asynchronous Transfer Mode) technique, the SONET (Synchronous Optical Networking) protocol, or the SDH (Synchronous Digital Hierarchy) protocol.
In an embodiment, the computing system 1100 may communicate information directly with the camera 1200 and/or with the robot 1300, or may communicate via an intermediate storage device, or more generally an intermediate non-transitory computer-readable medium. For example,
As stated above, the camera 1200 may be a 3D camera and/or a 2D camera. The 2D camera may be configured to generate a 2D image, such as a color image or a grayscale image. The 3D camera may be, e.g., a depth-sensing camera, such as a time-of-flight (TOF) camera or a structured light camera, or any other type of 3D camera. In some cases, the 2D camera and/or 3D camera may include an image sensor, such as a charge coupled device (CCD) sensor and/or a complementary metal oxide semiconductor (CMOS) sensor. In an embodiment, the 3D camera may include lasers, a LIDAR device, an infrared device, a light/dark sensor, a motion sensor, a microwave detector, an ultrasonic detector, a RADAR detector, or any other device configured to capture depth information or other spatial structure information.
As stated above, the image information may be processed by the computing system 1100. In an embodiment, the computing system 1100 may include or be configured as a server (e.g., having one or more server blades, processors, etc.), a personal computer (e.g., a desktop computer, a laptop computer, etc.), a smartphone, a tablet computing device, and/or any other computing system. In an embodiment, any or all of the functionality of the computing system 1100 may be performed as part of a cloud computing platform. The computing system 1100 may be a single computing device (e.g., a desktop computer), or may include multiple computing devices.
In an embodiment, the non-transitory computer-readable medium 1120, which is part of the computing system 1100, may be an alternative or addition to the intermediate non-transitory computer-readable medium 1400 discussed above. The non-transitory computer-readable medium 1120 may be a storage device, such as an electronic storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination thereof, for example, such as a computer diskette, a hard disk drive (HDD), a solid state drive (SSD), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disk (DVD), a memory stick, any combination thereof, or any other storage device. In some instances, the non-transitory computer-readable medium 1120 may include multiple storage devices. In certain implementations, the non-transitory computer-readable medium 1120 is configured to store image information generated by the camera 1200 and received by the computing system 1100. In some instances, the non-transitory computer-readable medium 1120 may store one or more object recognition templates used for performing methods and operations discussed herein. The non-transitory computer-readable medium 1120 may alternatively or additionally store computer readable program instructions that, when executed by the processing circuit 1110, cause the processing circuit 1110 to perform one or more methodologies described herein.
In an embodiment, as depicted in
In an embodiment, the processing circuit 1110 may be programmed by one or more computer-readable program instructions stored on the non-transitory computer-readable medium 1120. For example,
In an embodiment, the object recognition module 1121 may be configured to obtain and analyze image information as discussed throughout the disclosure. Methods, systems, and techniques discussed herein with respect to image information may use the object recognition module 1121. The object recognition module may further be configured for object recognition tasks related to object identification, as discussed herein.
The motion planning and control module 1129 may be configured to plan and execute the movement of a robot. For example, the motion planning and control module 1129 may interact with other modules described herein to plan motion of a robot 3300 for object retrieval operations and for camera placement operations. Methods, systems, and techniques discussed herein with respect to robotic arm movements and trajectories may be performed by the motion planning and control module 1129.
In embodiments, the motion planning and control module 1129 may be configured to plan robotic motion and robotic trajectories to account for the carriage of soft objects. As discussed herein, soft objects may have a tendency to droop, sag, flex, bend, etc. during movement. Such tendencies may be addressed by the motion planning and control module 1129. For example, during lifting operations, it may be expected that a soft object will sag or flex, causing forces on the robotic arm (and associated gripping devices, as described below) to vary, alter, or change in unpredictable ways. Accordingly, the motion planning and control module 1129 may be configured to include control parameters that provide a greater degree of reactivity, permitting the robotic system to adjust to alterations in load more quickly. In another example, soft objects may be expected to swing or flex (e.g., predicted flex behavior) during movement due to internal momentum. Such movements may be adjusted for by the motion planning and control module 1129 by calculating the predicted flex behavior of an object. In yet another example, the motion planning and control module 1129 may be configured to predict or otherwise account for a deformed or altered shape of a transported soft object when the object is deposited at a destination. The flexing or deformation of a soft object (e.g., flex behavior) may result in an object of a different shape, footprint, etc., than that same object had when it was initially lifted. Thus, the motion planning and control module 1129 may be configured to predict or otherwise account for such changes when placing the object down.
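One possible way a motion planner could account for predicted flex behavior when placing an object is sketched below. The droop model, parameter names, and numeric values are illustrative assumptions only and do not describe the actual behavior of the motion planning and control module 1129.

```python
def predicted_sag(overhang_m, stiffness_factor, max_droop_ratio=0.5):
    """Rough droop estimate (meters) for the unsupported overhang of a soft object.

    stiffness_factor is a dimensionless tuning value (0 = fully limp, 1 = rigid);
    the linear model here is only a placeholder for a real flex prediction.
    """
    return (1.0 - stiffness_factor) * max_droop_ratio * overhang_m

def plan_release_height(nominal_height_m, overhang_m, stiffness_factor,
                        min_clearance_m=0.02):
    """Raise the release height so that a drooping edge does not strike the surface early."""
    return nominal_height_m + max(predicted_sag(overhang_m, stiffness_factor),
                                  min_clearance_m)

# A limp bag with a 15 cm overhang gets an extra ~6 cm of release clearance.
print(plan_release_height(0.10, overhang_m=0.15, stiffness_factor=0.2))  # ~0.16
```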
The object manipulation planning and control module 1126 may be configured to plan and execute the object manipulation activities of a robotic arm or end effector apparatus, e.g., grasping and releasing objects and executing robotic arm commands to aid and facilitate such grasping and releasing. As discussed below, dual grippers and adjustable multi-point gripping devices may require a series of integrated and coordinated operations to grasp, lift, and transport objects. Such operations may be coordinated by the object manipulation planning and control module 1126 to ensure smooth operation of the dual grippers and adjustable multi-point gripping devices.
With reference to
In embodiments, the computing system 1100 may obtain image information representing an object in a camera field of view (e.g., field of view 3200) of a camera 1200. The steps and techniques described below for obtaining image information may be referred to below as an image information capture operation 5002. In some instances, the object may be one object from a plurality of objects in the field of view 3200 of a camera 1200. The image information 2600, 2700 may be generated by the camera (e.g., camera 1200) when the objects are (or have been) in the camera field of view 3200 and may describe one or more of the individual objects in the field of view 3200 of a camera 1200. The object appearance describes the appearance of an object from the viewpoint of the camera 1200. If there are multiple objects in the camera field of view, the camera may generate image information that represents the multiple objects or a single object (such image information related to a single object may be referred to as object image information), as necessary. The image information may be generated by the camera (e.g., camera 1200) when the group of objects is (or has been) in the camera field of view, and may include, e.g., 2D image information and/or 3D image information.
As an example,
As stated above, the image information may in some embodiments be all or a portion of an image, such as the 2D image information 2600. In examples, the computing system 1100 may be configured to extract an image portion 2000A from the 2D image information 2600 to obtain only the image information associated with a corresponding object 3000A. Where an image portion (such as image portion 2000A) is directed towards a single object it may be referred to as object image information. Object image information is not required to contain information only about an object to which it is directed. For example, the object to which it is directed may be close to, under, over, or otherwise situated in the vicinity of one or more other objects. In such cases, the object image information may include information about the object to which it is directed as well as to one or more neighboring objects. The computing system 1100 may extract the image portion 2000A by performing an image segmentation or other analysis or processing operation based on the 2D image information 2600 and/or 3D image information 2700 illustrated in
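A minimal sketch of extracting an object image portion once a segmentation mask is available is shown below; the mask itself is assumed to come from a separate segmentation step, and the array shapes are assumptions made for this example.

```python
import numpy as np

def extract_object_image(image, mask):
    """Crop an object image portion given a boolean segmentation mask.

    image: H x W x C array (e.g., the 2D image information);
    mask:  H x W boolean array marking pixels assigned to one object.
    Returns the cropped patch and its bounding box (x0, y0, x1, y1) in the full image.
    """
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return None, None
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    return image[y0:y1, x0:x1], (x0, y0, x1, y1)

# Usage with a synthetic image and mask
image = np.zeros((8, 8, 3), dtype=np.uint8)
mask = np.zeros((8, 8), dtype=bool)
mask[2:5, 3:6] = True
patch, box = extract_object_image(image, mask)
print(patch.shape, box)  # (3, 3, 3) (3, 2, 6, 5)
```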
The respective depth values may be relative to the camera 1200 which generates the 3D image information 2700 or may be relative to some other reference point. In some embodiments, the 3D image information 2700 may include a point cloud which includes respective coordinates for various locations on structures of objects in the camera field of view (e.g., field of view 3200). In the example of
In an embodiment, an image normalization operation may be performed by the computing system 1100 as part of obtaining the image information. The image normalization operation may involve transforming an image or an image portion generated by the camera 1200, so as to generate a transformed image or transformed image portion. For example, the image information obtained, which may include the 2D image information 2600, the 3D image information 2700, or a combination of the two, may undergo an image normalization operation that attempts to alter the image information with respect to the viewpoint, object position, and/or lighting condition associated with the visual description information. Such normalizations may be performed to facilitate a more accurate comparison between the image information and model (e.g., template) information. The viewpoint may refer to a pose of an object relative to the camera 1200, and/or an angle at which the camera 1200 is viewing the object when the camera 1200 generates an image representing the object. As used herein, “pose” may refer to an object location and/or orientation.
For example, the image information may be generated during an object recognition operation in which a target object is in the camera field of view 3200. The camera 1200 may generate image information that represents the target object when the target object has a specific pose relative to the camera. For instance, the target object may have a pose which causes its top surface to be perpendicular to an optical axis of the camera 1200. In such an example, the image information generated by the camera 1200 may represent a specific viewpoint, such as a top view of the target object. In some instances, when the camera 1200 is generating the image information during the object recognition operation, the image information may be generated with a particular lighting condition, such as a lighting intensity. In such instances, the image information may represent a particular lighting intensity, lighting color, or other lighting condition.
In an embodiment, the image normalization operation may involve adjusting an image or an image portion of a scene generated by the camera, so as to cause the image or image portion to better match a viewpoint and/or lighting condition associated with information of an object recognition template. The adjustment may involve transforming the image or image portion to generate a transformed image which matches at least one of an object pose or a lighting condition associated with the visual description information of the object recognition template.
The viewpoint adjustment may involve processing, warping, and/or shifting of the image of the scene so that the image represents the same viewpoint as visual description information that may be included within an object recognition template. Processing, for example, may include altering the color, contrast, or lighting of the image, warping of the scene may include changing the size, dimensions, or proportions of the image, and shifting of the image may include changing the position, orientation, or rotation of the image. In an example embodiment, processing, warping, and/or shifting may be used to alter an object in the image of the scene to have an orientation and/or a size which matches or better corresponds to the visual description information of the object recognition template. If the object recognition template describes a head-on view (e.g., top view) of some object, the image of the scene may be warped so as to also represent a head-on view of an object in the scene.
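For illustration only, a rotation-and-scale warp of this kind could be expressed with OpenCV as sketched below; the use of OpenCV, and the idea that the angle and scale come from an estimated object pose, are assumptions not stated in the disclosure.

```python
import cv2
import numpy as np

def normalize_viewpoint(image, angle_deg, scale):
    """Rotate and scale an object image portion so its orientation and size better
    match the visual description information of an object recognition template."""
    h, w = image.shape[:2]
    matrix = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle_deg, scale)
    return cv2.warpAffine(image, matrix, (w, h))

# Usage with a synthetic grayscale patch; angle and scale would normally come from a pose estimate.
patch = np.zeros((100, 100), dtype=np.uint8)
normalized = normalize_viewpoint(patch, angle_deg=15.0, scale=1.2)
print(normalized.shape)  # (100, 100)
```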
Further aspects of the object recognition and image normalization methods performed herein are described in greater detail in U.S. application Ser. No. 16/991,510, filed Aug. 12, 2020, and U.S. application Ser. No. 16/991,466, filed Aug. 12, 2020, each of which is incorporated herein by reference.
In various embodiments, the terms “computer-readable instructions” and “computer-readable program instructions” are used to describe software instructions or computer code configured to carry out various tasks and operations. In various embodiments, the term “module” refers broadly to a collection of software instructions or code configured to cause the processing circuit 1110 to perform one or more functional tasks. The modules and computer-readable instructions may be described as performing various operations or tasks when a processing circuit or other hardware component is executing the modules or computer-readable instructions.
In an embodiment, the system 3100 of
In an embodiment, the system 3100 may include a camera 1200 or multiple cameras 1200, including a 2D camera that is configured to generate 2D image information 2600 and a 3D camera that is configured to generate 3D image information 2700. The camera 1200 or cameras 1200 may be mounted or affixed to the robot 3300, may be stationary within the environment, and/or may be affixed to a dedicated robotic system separate from the robot 3300 used for object manipulation, such as a robotic arm, gantry, or other automated system configured for camera movement.
In the example of
The robot 3300 may further include additional sensors (not shown) configured to obtain information used to implement the tasks, such as for manipulating the structural members and/or for transporting the robotic units. The sensors can include devices configured to detect or measure one or more physical properties of the robot 3300 (e.g., a state, a condition, and/or a location of one or more structural members/joints thereof) and/or of a surrounding environment. Some examples of the sensors can include accelerometers, gyroscopes, force sensors, strain gauges, tactile sensors, torque sensors, position encoders, etc.
The suction gripping device 501 includes a suction head 510 having a suction seal 511 and a suction port 512. The suction seal 511 is configured to contact an object (e.g., a soft object or another type of object) and create a seal between the suction head 510 and the object. When the seal is created, applying suction or low pressure via the suction port 512 generates a grasping or gripping force between the suction head 510 and the object. The suction seal 511 may include a flexible material to facilitate sealing with more rigid objects. In embodiments, the suction seal 511 may also be rigid. Suction or reduced pressure is provided to the suction head 510 via the suction port 512, which may be connected to a suction actuator (e.g., a pump or the like—not shown). The suction gripping device 501 may be mounted to or otherwise attached to the extension actuator 504 of the actuator arm 503. The suction gripping device 501 is configured to provide suction or reduced pressure to grip an object.
The pinch gripping device 502 may include one or more pinch heads 521 and a gripping actuator (not shown), and may be mounted to the actuator arm 503. The pinch gripping device 502 is configured to generate a mechanical gripping force, e.g., a pinch grip on an object via the one or more pinch heads 521. In an embodiment, the gripping actuator causes the one or more pinch heads 521 to come together into a gripping position and provide a gripping force to any object or portion of an object situated therebetween. A gripping position refers to the pinch heads 521 being brought together such that they provide a gripping force on an object or portion of an object that is located between the pinch heads 521 and prevents them from contacting one another. The gripping actuator may cause the pinch heads 521 to rotate into a gripping position, to move laterally (translate) into a gripping position, or perform any combination of translation and rotation to achieve a gripping position.
The actuation hub 601 may include one or more actuators 606 that are coupled to the extension arms 602. The extension arms 602 may extend from the actuation hub 601 in at least a partially lateral orientation. As used herein, “lateral” refers to an orientation that is perpendicular to the central axis 605 of the actuation hub 601. By “at least partially lateral” it is meant that the extension arms 602 extend in a lateral orientation but also may extend in a vertical orientation (e.g., parallel to the central axis 605). As shown in
The extension arms 602 extend from the actuation hub 601. The actuation centers 902 of the extension arms 602 are illustrated, as are the gripping centers 901. The actuation centers 902 represent the points about which the extension arms 602 rotate when actuated while the gripping centers 901 represent the centers of the suction gripping devices 501 (or any other gripping device that may be equipped). The suction gripping devices 501 are not shown in
Based on the law of cosines as applied to the triangle 920 defined by the system center 903, the actuation center 902, and the gripping center 901, G² = A² + X² − 2AX·cos(α). It can be seen that the pitch distance (P) 911 is also the hypotenuse of a right triangle with a right angle at the system center 903. The legs of the right triangle each have a length of the gripping distance (G) 914. Thus, P = √(2G²) = √2·G. Accordingly, the relationship between α and P is as follows for values of α between 0° and 180°:
P = √(2(A² + X² − 2AX·cos(α)))
For α = 180°, the triangle 920 disappears because the extension distance (X) 912 and the actuation distance (A) 915 become collinear. Thus, the pitch distance (P) 911 is based on a right triangle with the pitch distance (P) as hypotenuse and legs of length (X + A), giving P = √2·(X + A).
Based on the law of sines, the equation sin(β)/X = sin(α)/G may be derived, where β is the angle of the triangle 920 at the system center 903 (the angle opposite the extension distance (X) 912). Accordingly, β may be expressed in terms of arcsin(X·sin(α)/G), where G = √(A² + X² − 2AX·cos(α)). For values of α between 0° and 67.5333°, β = 180° − arcsin(X·sin(α)/G); for α = 67.5333°, β = 90°; for values of α between 67.5333° and 180°, β = arcsin(X·sin(α)/G); and for α = 180°, β = 0°.
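To make the α-to-pitch relationship concrete, the following sketch evaluates G, P, and β numerically. The arm dimensions are hypothetical, and the sketch assumes the four-arm square layout implied by the right triangle at the system center, that β is the triangle angle at the system center, and that the extension distance exceeds the actuation distance (X > A).

```python
import math

def pitch_and_beta(alpha_deg, A, X):
    """Gripping distance G, pitch distance P, and angle beta for one extension arm.

    A: actuation distance (system center to actuation center)
    X: extension distance (actuation center to gripping center)
    alpha_deg: actuation angle at the actuation center
    Assumes a square four-arm layout (P = sqrt(2) * G) and X > A.
    """
    a = math.radians(alpha_deg)
    G = math.sqrt(A * A + X * X - 2 * A * X * math.cos(a))   # law of cosines
    P = math.sqrt(2.0) * G                                   # pitch across the square layout
    s = min(1.0, X * math.sin(a) / G) if G > 0 else 0.0      # law of sines ratio, clamped
    beta = math.degrees(math.asin(s))
    if X > A and alpha_deg < math.degrees(math.acos(A / X)):
        beta = 180.0 - beta  # supplementary branch below the alpha at which beta = 90 deg
    return G, P, beta

# alpha = 180 deg: arms fully extended, so G = X + A and P = sqrt(2) * (X + A)
print(pitch_and_beta(180.0, A=0.05, X=0.12))  # (0.17, ~0.2404, ~0.0)
```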
The dual mode gripper 500 (or multiple dual mode grippers 500) is brought into an engagement position (e.g., a position in a vicinity of an object 3000), as shown in
The extension actuator 504 is activated to retract the suction gripping device 501 back towards the actuator arm 503, as shown in
As shown in
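The extend, suction, retract, and pinch sequence described above can be summarized in a short sketch; the class and field names below are illustrative placeholders rather than an interface defined by this disclosure.

```python
from dataclasses import dataclass

@dataclass
class DualModeGripperState:
    """Minimal stand-in for one dual mode gripper; not an actual hardware API."""
    extended: bool = False
    suction_on: bool = False
    pinch_closed: bool = False

def grasp_soft_object(gripper: DualModeGripperState) -> DualModeGripperState:
    """Apply the suction-first, pinch-second sequence described above."""
    gripper.extended = True      # extension actuator lowers the suction head onto the object
    gripper.suction_on = True    # suction seal creates the initial grip
    gripper.extended = False     # retraction pulls a portion of the object into pinch range
    gripper.pinch_closed = True  # pinch heads apply the secondary, mechanical grip
    return gripper

print(grasp_soft_object(DualModeGripperState()))
```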
In embodiments, each dual mode gripper 500 may operate in conjunction with other dual mode grippers 500 or independently from one another when employed in the adjustable multi-point gripping system 600. In the example of
For example, each suction gripping device 501 may be independently extended, retracted, and activated. Each pinch gripping device 502 may be independently activated. Such independent activation may provide advantages in object movement, lifting, folding and transport by providing different numbers of contact points. This may be advantageous when objects have different or odd shapes, when objects that are flexible are folded, flexed, or otherwise distorted into non-standard shapes, and/or when object size constraints are taken into account. For example, it may be more advantageous to grip an object with three spaced apart dual mode grippers 500 (where a fourth could not find purchase on the object) relative to reducing the span of the adjustable multi-point gripping system 600 to achieve four gripping points. Additionally, the independent operation may assist in lifting procedures. For example, lifting multiple gripping points at different rates may increase stability, particularly when a force provided by an object on one gripping point is greater than that provided on another.
The present disclosure relates further to grasping flexible, wrapped, or bagged objects.
In an embodiment, the method 5000 may be performed by, e.g., the computing system 1100 of
The steps of the method 5000 may be used to achieve specific sequential robot movements for performing specific tasks. As a general overview, the method 5000 may operate to cause the robot 3300 to grasp soft objects. Such an object manipulation operation may further include operation of the robot 3300 that is updated and/or refined according to various operations and conditions (e.g., unpredictable soft object behavior) during the operation.
The method 5000 may begin with or otherwise include an operation 5002, in which the computing system (or processing circuit thereof) is configured to obtain image information (e.g., 2D image information 2600 shown in
In an embodiment, the method 5000 includes object identification operation 5004, in which the computing system performs an object identification operation. The object identification operation may be performed based on the image information. As discussed above, the image information is obtained by the computing system 1100 and may include all or at least a portion of a camera's field of view (e.g., camera's field of view 3200 shown in
The computing system (e.g., computing system 1100) may use the image information to more precisely determine a physical structure of the object to be grasped. The structure may be determined directly from the image information, and/or may be determined by comparing the image information generated by the camera against, e.g., model repository templates and/or model object templates.
The object identification operation 5004 may include additional optional steps and/or operations (e.g., template matching operations where features identified in the image information are matched by the processing circuit 1110 against a template of a target object stored in the non-transitory computer-readable medium 1120) to improve system performance. Further aspects of the optional template matching operations are described in greater detail in U.S. application Ser. No. 17/733,024, filed Apr. 29, 2022, which is incorporated herein by reference.
In embodiments, the object identification operation 5004 may compensate for image noise by inferring missing image information. For example, if the computing system (e.g., computing system 1100) is using a 2D image or a point cloud that represents a repository, the 2D image or point cloud may have one or more missing portions due to noise. The object identification operation 5004 may be configured to infer the missing information by closing or filling in the gap, for example, by interpolation or other means.
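As one illustrative way of filling such gaps (the use of SciPy's griddata and the NaN encoding for missing pixels are assumptions made for this example, not part of the disclosure):

```python
import numpy as np
from scipy.interpolate import griddata

def fill_missing_depth(depth):
    """Fill NaN gaps in a depth map by interpolating from valid neighboring pixels."""
    h, w = depth.shape
    yy, xx = np.mgrid[0:h, 0:w]
    valid = ~np.isnan(depth)
    samples = np.column_stack((yy[valid], xx[valid]))
    filled = griddata(samples, depth[valid], (yy, xx), method="linear")
    # Pixels outside the convex hull of valid samples remain NaN; fall back to nearest neighbor.
    missing = np.isnan(filled)
    if missing.any():
        filled[missing] = griddata(samples, depth[valid],
                                   (yy[missing], xx[missing]), method="nearest")
    return filled

depth = np.array([[1.0, 1.0, 1.0],
                  [1.0, np.nan, 1.2],
                  [1.0, 1.2, 1.2]])
print(fill_missing_depth(depth))  # center pixel filled from its neighbors
```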
As described above, the object identification operation 5004 may be used to refine the computing system's understanding of a geometry of the deformable object to be grasped, which may be used to guide the robot. For example, as shown in
In an embodiment, the method 5000 includes the object grasping operation 5006, in which the computing system (e.g., computing system 1100) outputs an object grasping command. The object grasping command causes the end effector apparatus (e.g., end effector apparatus 3330) of the robot arm (e.g., robot arm 3320) to grasp an object to be picked up (e.g., object 3000, which may be a soft, deformable, encased, bagged and/or flexible object).
According to an embodiment, the object grasping command includes a multi-point gripping system movement operation 5008. According to embodiments described herein, the multi-point gripping system 600 coupled to the end effector apparatus 3330 is moved to the engagement position to pick up the object in accordance with the output of movement commands. In some embodiments, all of the dual mode grippers 500 are moved to the engagement position to pick up the object. According to other embodiments, less than all of the dual mode grippers 500 coupled to the end effector apparatus 3330 are moved to the engagement position to pick up the object (e.g., due to the size of the object, due to the size of a container storing the object, to pick up multiple objects in one container, etc.). In addition, according to one embodiment, the object grasping operation 5006 outputs commands that instruct the end effector apparatus (e.g., end effector apparatus 3330) to pick up multiple objects (e.g., at least one soft object per dual mode gripper coupled to the end effector apparatus). While not shown in
In an embodiment, the object grasping operation 5006 of the method 5000 includes a suction gripping command operation 5010 and a pinch gripping command operation 5012. According to the embodiment shown in
In an embodiment, the method 5000 includes suction gripping command operation 5010, in which the computing system (e.g., computing system 1100) outputs suction gripping commands. According to embodiments, the suction gripping command causes a suction gripping device (e.g., suction gripping device 501) to grip or otherwise grasp an object via suction, as described above. The suction gripping command may be executed during execution of the object grasping operation when the robot arm (e.g., robot arm 3320) is in position to pick up or grasp an object (e.g., object 3000). Moreover, the suction gripping command may be calculated based on the object identification operation (e.g., calculation performed based on an understanding of a geometry of the deformable object).
In an embodiment, the method 5000 includes pinch gripping command operation 5012, in which the computing system (e.g., computing system 1100) outputs pinch gripping commands. According to embodiments, the pinch gripping command causes a pinch gripping device (e.g., pinch gripping device 502) to grip or otherwise grasp the object 3000 via a mechanical gripping force, as described above. The pinch gripping command may be executed during the object grasping operation when the robot arm (e.g., robot arm 3320) is in position to pick up or grasp an object (e.g., object 3000). Moreover, the pinch gripping command may be calculated based on the object identification operation (e.g., calculation performed based on an understanding of a geometry of the deformable object).
In embodiments, the method 5000 may include pitch adjustment determination operation 5013, in which the computing system (e.g., computing system 1100) optionally determines whether to output an adjust pitch command. Furthermore, in embodiments, the method 5000 includes pitch adjustment operation 5014, in which the computing system, based on the pitch adjustment determination of operation 5013, optionally outputs a pitch adjustment command. According to embodiments, the adjust pitch command causes an actuation hub (e.g., actuation hub 601) coupled to the end effector apparatus (e.g., end effector apparatus 3330) to actuate one or more actuators (e.g., actuators 606) to rotate the extension arms 602 such that a gripping span (or pitch between gripping devices) is adjusted (e.g., reduced or enlarged), as described above. The adjust pitch command may be executed during execution of the object grasping operation when the robot arm (e.g., robot arm 3320) is in position to pick up or grasp an object (e.g., object 3000). Moreover, the adjust pitch command may be calculated based on the object identification operation (e.g., a calculation performed based on an understanding of a geometry or behavior of the deformable object). In embodiments, the pitch adjustment operation 5014 may be configured to occur after or before any of the object grasping operation 5006 sub-operations. For example, the pitch adjustment operation 5014 may occur before or after the multi-point gripping system movement operation 5008, before or after the suction gripping command operation 5010, and/or before or after the pinch gripping command operation 5012. In some scenarios, the pitch may be adjusted while the object is grasped (as discussed above). In some scenarios, the object may be released after grasping to adjust the pitch before re-grasping. In some scenarios, the multi-point gripping system 600 may have its position adjusted after a pitch adjustment.
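For a pitch adjustment command, the required actuation angle can be recovered by inverting the P(α) relationship derived earlier; the sketch below, with hypothetical arm dimensions, shows that inversion.

```python
import math

def alpha_for_pitch(P, A, X):
    """Actuation angle alpha (degrees) that yields a desired pitch distance P.

    Inverts P = sqrt(2 * (A^2 + X^2 - 2*A*X*cos(alpha))); returns None when the
    requested pitch is outside the reachable range for these dimensions.
    """
    cos_alpha = (A * A + X * X - P * P / 2.0) / (2.0 * A * X)
    if not -1.0 <= cos_alpha <= 1.0:
        return None
    return math.degrees(math.acos(cos_alpha))

# Example with hypothetical arm dimensions (meters)
print(alpha_for_pitch(P=0.20, A=0.05, X=0.12))  # ~105.0 degrees
```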
In an embodiment, the method 5000 includes outputting a lift object command operation 5016, in which the computing system (e.g., computing system 1100) outputs a lift object command. According to embodiments, the lift object command causes a robot arm (e.g., robot arm 3320) to lift an object (e.g., object 3000) from the surface or other object (e.g., object 3550) that it is resting on (e.g., a container for transporting one or more soft objects) and thereby allow the object to be moved freely, as described above. The lift object command may be executed after the object grasping operation 5006 is executed and the adjustable multi-point gripping system 600 has gripped the object. Moreover, the lift object command may be calculated based on the object identification operation 5004 (e.g., calculation performed based on an understanding of a geometry or behavior of the deformable object).
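A high-level sketch of the command flow of method 5000, from identification through lift, is shown below. The stub class and all of its method and attribute names are hypothetical stand-ins for the computing system and robot interfaces, not an actual API, and the ordering of the pitch adjustment is only one of the orderings permitted above.

```python
from types import SimpleNamespace

class RobotSystemStub:
    """Illustrative stand-in for the computing system / robot interface."""
    def identify_object(self, image_info):                      # operation 5004
        return SimpleNamespace(engagement_poses=[], pitch_adjustment_needed=False,
                               target_pitch=None, suction_targets=[],
                               pinch_targets=[], lift_height=0.3)
    def move_grippers_to_engagement(self, poses): print("move to engagement", poses)
    def adjust_pitch(self, pitch): print("adjust pitch", pitch)
    def engage_suction(self, targets): print("suction", targets)
    def engage_pinch(self, targets): print("pinch", targets)
    def lift_object(self, height): print("lift", height)

def run_grasp_and_lift(system, image_info):
    grasp_info = system.identify_object(image_info)                  # operation 5004
    system.move_grippers_to_engagement(grasp_info.engagement_poses)  # operation 5008
    if grasp_info.pitch_adjustment_needed:                           # operations 5013/5014
        system.adjust_pitch(grasp_info.target_pitch)
    system.engage_suction(grasp_info.suction_targets)                # operation 5010
    system.engage_pinch(grasp_info.pinch_targets)                    # operation 5012
    system.lift_object(grasp_info.lift_height)                       # operation 5016

run_grasp_and_lift(RobotSystemStub(), image_info=None)
```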
Subsequent to the lift object command operation 5016, a robotic motion trajectory operation 5018 may be carried out. During the robotic motion trajectory operation 5018, the robotic system and robotic arm may receive commands from the computing system (e.g., computing system 1100) to execute a robotic motion trajectory and an object placement command. Accordingly, the robotic motion trajectory operation 5018 may be executed to cause movement and placement of the grasped/lifted object.
It will be apparent to one of ordinary skill in the relevant arts that other suitable modifications and adaptations to the methods and applications described herein can be made without departing from the scope of any of the embodiments. The embodiments described above are illustrative examples and it should not be construed that the present disclosure is limited to these particular embodiments. It should be understood that various embodiments disclosed herein may be combined in different combinations than the combinations specifically presented in the description and accompanying drawings. It should also be understood that, depending on the example, certain acts or events of any of the processes or methods described herein may be performed in a different sequence, may be added, merged, or left out altogether (e.g., all described acts or events may not be necessary to carry out the methods or processes). In addition, while certain features of embodiments hereof are described as being performed by a single component, module, or unit for purposes of clarity, it should be understood that the features and functions described herein may be performed by any combination of components, units, or modules. Thus, various changes and modifications may be effected by one skilled in the art without departing from the spirit or scope of the invention.
Further embodiments are included in the following numbered paragraphs.
Embodiment 1 is a robotic grasping system comprising: an actuator arm; a suction gripping device connected to the actuator arm; and a pinch gripping device connected to the actuator arm.
Embodiment 2 is the robotic grasping system of embodiment 1, wherein: the suction gripping device is configured to apply suction to grip an object.
Embodiment 3 is the robotic grasping system of any of embodiments 1-2, wherein: the pinch gripping device is configured to apply a mechanical force to grip an object.
Embodiment 4 is the robotic grasping system of any of embodiments 1-3, wherein the suction gripping device and the pinch gripping device are integrated together as a dual-mode gripper extending from the actuator arm.
Embodiment 5 is the robotic grasping system of embodiment 4, wherein the suction gripping device is configured to apply suction to an object to provide an initial grip and the pinch gripping device is configured to apply a mechanical force to the object to provide a secondary grip.
Embodiment 6 is the robotic grasping system of embodiment 5, wherein the pinch gripping device is configured to apply the mechanical force at a location on the object gripped by the suction gripping device.
Embodiment 7 is the robotic grasping system of embodiment 6, wherein the suction gripping device is configured to apply the initial grip to a flexible object to raise a portion of the flexible object and the pinch gripping device is configured to apply the secondary grip by pinching the portion.
Embodiment 8 is the robotic grasping system of embodiment 7, wherein the suction gripping device includes an extension actuator configured to extend a suction head of the suction gripping device to make contact with the flexible object and retract the suction head of the suction gripping device to bring the portion of the flexible object into a gripping range of the pinch gripping device.
Embodiment 9 is the robotic grasping system of any of embodiments 1-8, further comprising a plurality of additional actuator arms, each additional actuator arm including a suction gripping device and a pinch gripping device.
Embodiment 10 is the robotic grasping system of any of embodiments 1-9, further comprising a coupler configured to permit the robotic grasping system to be attached to a robotic system as an end effector apparatus.
Embodiment 11 is a robotic grasping system comprising: an actuator hub; a plurality of extension arms extending from the actuator hub in an at least partially lateral orientation; and a plurality of gripping devices arranged at ends of the plurality of extension arms.
Embodiment 12 is the robotic grasping system of embodiment 11, wherein each of the plurality of gripping devices includes: a suction gripping device; and a pinch gripping device.
Embodiment 13 is the robotic grasping system of any of embodiments 11-12, wherein: the actuator hub includes one or more actuators coupled to the extension arms, the one or more actuators being configured to rotate the plurality of extension arms such that a gripping span of the plurality of gripping devices is adjusted.
Embodiment 14 is the robotic grasping system of embodiment 13, further comprising: at least one processing circuit configured to adjust the gripping span of the plurality of gripping devices by at least one of: causing the one or more actuators to increase the gripping span of the plurality of gripping devices; and causing the one or more actuators to reduce the gripping span of the plurality of gripping devices.
Embodiment 15 is a robotic system for grasping objects, comprising: at least one processing circuit; and an end effector apparatus including: an actuator hub, a plurality of extension arms extending from the actuator hub in an at least partially lateral orientation, a plurality of gripping devices arranged at corresponding ends of the extension arms, wherein the actuator hub includes one or more actuators coupled to corresponding extension arms, the one or more actuators being configured to rotate the plurality of extension arms such that a gripping span of the plurality of gripping devices is adjusted, and a robot arm controlled by the at least one processing circuit and configured for attachment to the end effector apparatus, wherein the at least one processing circuit is configured to provide: a first command to cause at least one of the plurality of gripping devices to engage suction gripping, and a second command to cause at least one of the plurality of gripping devices to engage pinch gripping.
Embodiment 16 is the robotic system of embodiment 15, wherein the at least one processing circuit is further configured for selectively activating an individual gripping device of the plurality of gripping devices.
Embodiment 17 is the robotic system of any of embodiments 15-16, wherein the at least one processing circuit is further configured for engaging the one or more actuators for adjusting a span of the plurality of gripping devices.
Embodiment 18 is the robotic system of any of embodiments 15-17, wherein the at least one processing circuit is further configured for calculating a predicted flex behavior for a gripped object and planning a motion of the robot arm using the predicted flex behavior from the gripped object.
Embodiment 19 is a robotic control method, for gripping a deformable object, operable by at least one processing circuit via a communication interface configured to communicate with a robot having a robot arm that includes an end effector apparatus having a plurality of movable dual gripping devices, each dual gripping device including a suction gripping device and a pinch gripping device, the method comprising: receiving image information describing the deformable object, wherein the image information is generated by a camera; performing, based on the image information, an object identification operation to generate grasping information for determining an object grasping command to grip the deformable object; outputting the object grasping command to the end effector apparatus, the object grasping command comprising: a dual gripping device movement command configured to cause the end effector apparatus to move each of the plurality of dual gripping devices to a respective engagement position, each dual gripping device being configured to engage the deformable object when moved to the respective engagement position; a suction gripping command configured to cause each dual gripping device to engage suction gripping of the deformable object using a respective suction gripping device; and a pinch gripping command configured to cause each dual gripping device to engage pinch gripping of the deformable object using a respective pinch gripping device; and outputting a lift object command configured to cause the robot arm to lift the deformable object.
Embodiment 20 is a non-transitory computer-readable medium, configured with executable instructions for implementing a robot control method for gripping a deformable object, operable by at least one processing circuit via a communication interface configured to communicate with a robot having a robot arm that includes an end effector apparatus having a plurality of movable dual gripping devices, each dual gripping device including a suction gripping device and a pinch gripping device, the method comprising: receiving image information describing the deformable object, wherein the image information is generated by a camera; performing, based on the image information, an object identification operation to generate grasping information for determining an object grasping command to grip the deformable object; outputting the object grasping command to the end effector apparatus, the object grasping command comprising: a dual gripping device movement command configured to cause the end effector apparatus to move each of the plurality of dual gripping devices to a respective engagement position, each dual gripping device being configured to engage the deformable object when moved to the respective engagement position; a suction gripping command configured to cause each dual gripping device to engage suction gripping of the deformable object using a respective suction gripping device; and a pinch gripping command configured to cause each dual gripping device to engage pinch gripping of the deformable object using a respective pinch gripping device; and outputting a lift object command configured to cause the robot arm to lift the deformable object.
This application claims priority to U.S. Provisional Application No. 63/385,906, filed Dec. 2, 2022, which is hereby incorporated herein by reference in its entirety.