Industrial robots are well known in the art. Such robots are intended to replace human workers in a variety of assembly tasks. It has been recognized that, in order for such robots to effectively replace human workers in increasingly delicate and detailed tasks, it will be necessary to provide the robots with sensory apparatus functionally equivalent to the various senses with which human workers are naturally endowed, for example, sight and touch.
In robotic picking applications for small part assembly, warehouse/logistics automation, food and beverage, etc., a robot gripper needs to pick an object and then insert or place it accurately into another part. There are some traditional solutions: (1) Customized fingers on the gripper can self-align the part to a fixed location relative to the gripper, but a different type of finger has to be made and exchanged for each differently shaped part. (2) After picking up the part, the robot brings the part in front of a camera and a machine vision system detects the location of the part relative to the gripper, but this extra step increases the cycle time of the robot system. (3) The part is placed on a customized fixture and the robot is programmed to pick up the part at the same location each time, but separate fixtures have to be made for different parts, which may not be cost effective.
Of particular importance for delicate and detailed assembly tasks is the sense of touch. Touch can be important for close-up assembly work where vision may be obscured by arms or other objects, and touch can be important for providing the sensory feedback necessary for grasping delicate objects firmly without causing damage to them. Touch can also provide a useful means for discriminating between objects having different sizes, shapes or weights. Accordingly, various tactile sensors have been developed for use with industrial robots.
However, such sensors present problems, including susceptibility to wear and tear damage, that must be overcome for robotic picking and assembly applications. The robot hand is constantly picking and assembling parts, which means that the finger/gripper surface is prone to abrasion; any tactile sensing that employs fragile thin-film coatings at grip points can therefore easily wear off. Also, an elaborate light/LED source configuration limits the size of the in-hand object location system: the light source and sensor may be too big to mount on small robotic fingers intended to pick up small objects, so mounting an elaborate light source for in-hand perception is not feasible. Adding an in-hand light source and detector also creates the need for an extra calibration step.
Another problem is that most robot picking/grasping/manipulation systems lack information about the object with reference to the gripper or the hand itself. A further problem is that image quality is usually compromised, typically yielding a low-resolution image. There is also the problem of the high engineering time and cost needed to design, build, install and tune a robotic picking and assembly system, especially when the system includes a vision system, customized fingers and fixtures to handle parts with different shapes. It is typical for an engineer to spend time exchanging fingers, setting up fixtures and changing robot programs for different parts.
The invention is a method of object manipulation and training including providing at least one robotic hand including a plurality of grippers connected to a body and providing a plurality of cameras disposed in a peripheral surface of the plurality of grippers. The method also includes providing a plurality of tactile sensors disposed in the peripheral surface of the plurality of grippers and actuating the plurality of grippers to grasp an object. The method further includes detecting a position of the object with respect to the at least one robotic hand via a first image feed from the plurality of tactile sensors and detecting a position of the object with respect to the at least one robotic hand via a second image feed from the plurality of cameras. The method also includes generating instructions to grip and manipulate an orientation of the object based on the first and the second image feeds for a visualization of the object relative to the at least one robotic hand. The at least one robotic hand, the plurality of grippers, the plurality of cameras and the plurality of tactile sensors are electrically connected to a controller.
The invention is a robotic hand including a plurality of grippers and a body and a plurality of cameras disposed in a peripheral surface of the plurality of grippers. The robotic hand also includes at least one illumination surface disposed on the peripheral surface of the plurality of grippers and a plurality of tactile sensors disposed in the peripheral surface of the plurality of grippers. The robotic hand, the plurality of grippers, the plurality of cameras, the at least one illumination surface and the plurality of tactile sensors are electrically connected to a controller.
The invention is a non-transitory computer-readable medium storing instructions that, when executed by a processor of a computer, cause the processor to perform operations including actuating the plurality of grippers to grasp an object and detecting a position of the object with respect to the at least one robotic hand via a first image feed from the plurality of tactile sensors. Operations also include detecting a position of the object with respect to the at least one robotic hand via a second image feed from the plurality of cameras and generating instructions to grip and manipulate an orientation of the object based on the first and the second image feeds for a visualization of the object relative to the at least one robotic hand. Operations further include performing a pick procedure on the object based on the generated instructions and determining whether or not the image feeds from the visualization of the object correlate with the generated instructions. Operations also include correcting the gripping and manipulating of the object based on the determining and placing the object in an assembly of parts.
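By way of a non-limiting illustration, the following minimal Python sketch outlines the sequence of operations just described (grasp, dual-feed pose detection, instruction generation, verification, correction and placement). Every controller, gripper, camera and tactile-sensor interface shown here is a hypothetical placeholder, not an actual API.

```python
# Minimal sketch of the pick/verify/correct/place operations described above.
# All controller, camera, and tactile-sensor interfaces are hypothetical
# placeholders rather than a real robot API.

def pick_and_place(controller, grippers, tactile_sensors, cameras, target_pose):
    controller.actuate(grippers, command="close")      # grasp the object

    tactile_image = tactile_sensors.capture()          # first image feed
    camera_image = cameras.capture()                   # second image feed

    # Estimate the in-hand object pose from both feeds.
    pose_in_hand = controller.estimate_pose(tactile_image, camera_image)

    # Generate grip/orientation instructions from the fused visualization.
    instructions = controller.plan_manipulation(pose_in_hand, target_pose)
    controller.execute(instructions)                   # pick procedure

    # Verify: do fresh image feeds correlate with the generated instructions?
    if not controller.correlates(tactile_sensors.capture(),
                                 cameras.capture(), instructions):
        controller.correct_grip(pose_in_hand)          # re-grip / re-orient

    controller.place(target_pose)                      # place into the assembly
```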
All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein.
The use of the terms “a” and “an” and “the” and “at least one” and similar referents in the context of describing the invention (especially in the context of the following claims) is to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. The use of the term “at least one” followed by a list of one or more items (for example, “at least one of A and B”) is to be construed to mean one item selected from the listed items (A or B) or any combination of two or more of the listed items (A and B), unless otherwise indicated herein or clearly contradicted by context. The terms “comprising,” “having,” “including,” and “containing” are to be construed as open-ended terms (i.e., meaning “including, but not limited to,”) unless otherwise noted. Recitation of ranges of values herein is merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. All methods described herein can be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. The use of any and all examples, or exemplary language (e.g., “such as”) provided herein, is intended merely to better illuminate the invention and does not pose a limitation on the scope of the invention unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the invention.
The invention particularly describes the use of computer-aided design (CAD) model/synthetic data of the objects being handled/assembled, together with tactile imaging information referenced to a robotic hand and the readily accessible robot joint coordinate information. This pool of information can allow coordinated movement and easy manipulation of the object being picked or assembled, and may also allow easier forecasting of robot gestures or grasp planning.
Further, the in-hand sensor 40 may include a block of transparent rubber or gel, one face of which is coated with metallic paint. When the paint-coated face is pressed against an object, it conforms to the object's shape. The metallic paint makes the object's surface reflective, so its geometry becomes much easier for computer vision algorithms to infer. Mounted on the sensor opposite the paint-coated face of the rubber block are colored lights/LEDs 50a-d and a single camera 45. The colored lights illuminate the reflective material from different angles, and by analyzing the resulting colors, a computer can infer the 3-D shape of whatever is being sensed or touched.
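The color-based shape recovery described above can be understood as classical photometric stereo: with the reflective coating behaving approximately as a uniform Lambertian surface and LEDs 50a-d at known directions, per-pixel intensity satisfies I = L·n, so the surface normal n follows from least squares. The sketch below is a minimal illustration under those assumptions; the light directions and function names are illustrative, not calibrated values from the sensor.

```python
import numpy as np

# Illustrative unit light directions, one row per LED, in the sensor frame.
# These are assumed values for the sketch, not calibrated LED positions.
light_dirs = np.array([
    [ 0.5,  0.0, 0.866],
    [-0.5,  0.0, 0.866],
    [ 0.0,  0.5, 0.866],
    [ 0.0, -0.5, 0.866],
])

def normals_from_intensities(images: np.ndarray) -> np.ndarray:
    """images: (4, H, W) stack, one image per LED -> (H, W, 3) unit normals."""
    h, w = images.shape[1:]
    intensities = images.reshape(4, -1)                  # (4, H*W)
    # Solve I = L @ n per pixel in the least-squares sense.
    g, *_ = np.linalg.lstsq(light_dirs, intensities, rcond=None)  # (3, H*W)
    norms = np.linalg.norm(g, axis=0, keepdims=True)
    normals = (g / np.clip(norms, 1e-9, None)).T         # normalize to unit length
    return normals.reshape(h, w, 3)
```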
In some embodiments, an in-hand object location system may be used to determine the location of a part held within a robotic hand. This system may additionally provide information about the geometry of the object 90. This system may also be used to find a different location that may provide a better grasp of the object 90. Such an in-hand object location system requires a light source and a detector or camera unit within the robotic hand. Mounting an elaborate light source while maintaining a compact robot hand/fingers may be challenging, but the tactile architecture as described makes it possible to do so.
Since some in-hand object location systems may be limited in field of view and resolution, it can prove very beneficial to combine an in-hand object location system (75a to 75f) with a vision system (80a to 80f) as described herein below. Such an in-hand object location and vision system 70 may provide information about the 3D geometry of the object 90. This system 70 may also be used to find a different location that may provide a better grasp of the object 90. Combining the information of an in-hand object location system with that of a 2D/3D vision system enables a robot system to accurately and robustly pick, place and assemble objects/workpieces. This type of configuration reduces the engineering time and cost to design, build, install and tune the system. Such a configuration may also reduce the cycle time.
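As one hedged illustration of how in-hand pose information can compensate for picking error, the sketch below composes homogeneous transforms: the gripper pose from robot kinematics, the in-hand object pose from system 70, and the desired target pose. The frame names and 4x4-matrix convention are assumptions for illustration, not a prescribed implementation.

```python
import numpy as np

# Sketch: given where the object actually sits in the gripper, command a
# gripper pose that lands the *object* (not the gripper) on the target.
# All pose names are illustrative; poses are 4x4 homogeneous transforms.

def corrected_place_pose(world_T_gripper: np.ndarray,
                         gripper_T_object: np.ndarray,
                         world_T_target: np.ndarray):
    # Current object pose in the world (useful for reporting pick error).
    world_T_object = world_T_gripper @ gripper_T_object
    # Gripper pose that places the object exactly at the target.
    goal_T_gripper = world_T_target @ np.linalg.inv(gripper_T_object)
    return goal_T_gripper, world_T_object
```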
In some embodiments, the plurality of in-hand tactile sensors 75a to 75f each include a layer of pressure-generated illumination surfaces comprised of pressure-sensitive luminescent films. Using an in-hand object location system with pressure-sensitive illumination can allow easy perception of the gripped part of an object without the need for an elaborate light source. The illumination surfaces may generate enough light to act as a light source for cameras 80a to 80f to receive better imagery of object 90 as it is manipulated in-hand. In some embodiments, the surfaces illuminate upon coming into contact with an object 90 via a pressure-activated glow effect triggered by pressure against object 90. Tactile sensors 75a to 75f, cameras 80a to 80f and grippers 95a, 95b may be electrically and mechanically connected to a power source and control system 135.
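Because the luminescent films glow where pressure is applied, the contact patch can in principle be read directly from the tactile image. The following is a minimal sketch under that assumption, using a normalized single-channel tactile image and an illustrative brightness threshold.

```python
import numpy as np

# Sketch of reading the pressure-activated glow: bright pixels mark contact,
# so a simple threshold yields the contact patch and its centroid.
# The threshold value is an assumption for illustration.

def contact_centroid(tactile_image: np.ndarray, threshold: float = 0.5):
    """tactile_image: (H, W) normalized intensities -> (row, col) or None."""
    mask = tactile_image > threshold        # glowing pixels = in contact
    if not mask.any():
        return None                         # no contact detected
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()         # centroid of the contact patch
```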
An offline trained model 110 (for example, a deep learning convolutional neural network) may be employed. In the offline training phase, the robot system automatically conducts experiments to pick, place and assemble the parts, and collects part information from the 2D/3D vision system and the in-hand object location and vision system 70, as well as the robot movement associated with the success or failure of each picking, placing and assembly task. The initial robot picking, placing and assembly movement can come either from manual teaching or from a general purpose model (a model trained for general parts and tasks).
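A minimal sketch of this offline data-collection loop follows. The robot, vision and policy interfaces are hypothetical stand-ins; the point is the record each trial contributes: the vision image, the in-hand image after picking, the attempted movement, and a success/fail label.

```python
# Sketch of the offline experiment loop described above. All interfaces
# (robot, vision, in_hand_system, policy) are hypothetical placeholders.

def collect_offline_dataset(robot, vision, in_hand_system, policy, n_trials):
    dataset = []
    for _ in range(n_trials):
        scene = vision.capture()              # 2D/3D vision image
        movement = policy.propose(scene)      # from manual teaching or a
                                              # general-purpose model
        robot.execute(movement)
        in_hand = in_hand_system.capture()    # in-hand image after the pick
        success = robot.assembly_succeeded()  # automatic success/fail label
        dataset.append((scene, in_hand, movement, success))
    return dataset                            # training data for model 110
```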
The novel idea here is not only to use both the 2D/3D vision system and the in-hand object location system 70, which provides in-hand location information after the part is picked, to guide robot movement 132, but also to allow offline training of an end-to-end model, simplifying the training phase.
The set-up time of step 202 is significantly shorter due to the extra sensing ability available with the in-hand object location and vision system 70. Thus, the golden part training at 202 may include information from four sources working in parallel: 1) in-hand image 165, 2) vision system 170, 3) robot joint coordinates 205 and 4) synthetic information about object 210.
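As a hedged illustration, the four parallel sources could be bundled into a single training record as sketched below; the field names mirror the reference numerals above and are otherwise illustrative.

```python
from dataclasses import dataclass
import numpy as np

# Sketch of one golden-part training record combining the four sources.
# Field names echo the reference numerals; shapes are left to the user.

@dataclass
class GoldenPartSample:
    in_hand_image: np.ndarray       # source 1: in-hand image 165
    vision_image: np.ndarray        # source 2: vision system 170
    joint_coordinates: np.ndarray   # source 3: robot joint coordinates 205
    synthetic_features: np.ndarray  # source 4: synthetic/CAD info 210

    def as_feature_vector(self) -> np.ndarray:
        """Concatenate all four sources into one flat model input."""
        return np.concatenate([x.ravel() for x in (
            self.in_hand_image, self.vision_image,
            self.joint_coordinates, self.synthetic_features)])
```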
Using this invention, a robotic system can use a general purpose finger/gripper with or without a general purpose fixture to pick, place and assemble various parts.
The various embodiments described herein may provide the benefits of a reduction in the engineering time and cost to design, build, install and tune a special finger, a special fixture, or a vision system for picking, placing and assembly applications in logistics, warehouses or small part assembly. Also, these embodiments may provide a reduction in cycle time, since the robotic hand can detect the position of the in-hand part right after picking the part. Further, these embodiments may provide improved robustness of the system: with a highly accurate in-hand object location and geometry, the robot can adjust the placement or assembly motion to compensate for any error in the picking. Moreover, these embodiments may be easy to integrate with general purpose robot grippers, such as the YuMi robotic hand, for a wide range of picking, placing and assembly applications.
The techniques and systems disclosed herein may be implemented as a computer program product for use with a computer system or computerized electronic device. Such implementations may include a series of computer instructions, or logic, fixed either on a tangible/non-transitory medium, such as a computer readable medium 400 (e.g., a diskette, CD-ROM, ROM, flash memory or other memory or fixed disk), or transmittable to a computer system or device via a modem or other interface device, such as a communications adapter connected to a network over a medium.
The medium 300 may be either a tangible medium (e.g., optical or analog communications lines) or a medium implemented with wireless techniques (e.g., Wi-Fi, cellular, microwave, infrared or other transmission techniques). The series of computer instructions (e.g., at 500) embodies at least part of the functionality described herein with respect to the system.
Furthermore, such instructions (e.g., at 500) may be stored in any tangible memory device 505, such as semiconductor, magnetic, optical or other memory devices, and may be transmitted using any communications technology, such as optical, infrared, microwave, or other transmission technologies.
It is expected that such a computer program product may be distributed as a removable medium with accompanying printed or electronic documentation (e.g., shrink wrapped software), preloaded with a computer system (e.g., on system ROM or fixed disk), or distributed from a server or electronic bulletin board over the network (e.g., the Internet or World Wide Web). Of course, some embodiments of the invention may be implemented as a combination of both software (e.g., a computer program product) and hardware. Still other embodiments of the invention are implemented as entirely hardware, or entirely software (e.g., a computer program product).
As will be apparent to one of ordinary skill in the art from a reading of this disclosure, the present disclosure can be embodied in forms other than those specifically disclosed above. The particular embodiments described above are, therefore, to be considered as illustrative and not restrictive. Those skilled in the art will recognize, or be able to ascertain, using no more than routine experimentation, numerous equivalents to the specific embodiments described herein. Thus, it will be appreciated that the scope of the present invention is not limited to the above described embodiments, but rather is defined by the appended claims; and that these claims will encompass modifications of and improvements to what has been described.
Preferred embodiments of this invention are described herein, including the best mode known to the inventors for carrying out the invention. Variations of those preferred embodiments may become apparent to those of ordinary skill in the art upon reading the description herein. The inventors expect skilled artisans to employ such variations as appropriate, and the inventors intend for the invention to be practiced otherwise than as specifically described herein. Accordingly, this invention includes all modifications and equivalents of the subject matter recited in the claims appended hereto as permitted by applicable law. Moreover, any combination of the above-described elements in all possible variations thereof is encompassed by the invention unless otherwise indicated herein or otherwise clearly contradicted by context.