The present disclosure is directed, in general, to a system and method for automatically selecting the optimum grasping position of an object by a robot, and more specifically to such a system that uses upcoming activities as a parameter in selecting the grasping position.
Tasks such as assembly, warehouse stocking, packaging, and the like are increasingly being performed by robots. Robots have proven effective at performing repetitive tasks with little or no user intervention. However, as the tasks performed by the robots become increasingly diverse, additional programming of the robots becomes necessary to assure proper operation. The additional programming can become overly burdensome and complex in situations where each object handled by the robot is randomly delivered from a number of options or where the task to be performed with each object can differ.
A robot operable within a 3-D volume includes a gripper movable between an open position and a closed position to grasp any one of a plurality of objects, an articulatable portion coupled to the gripper and operable to move the gripper to a desired position within the 3-D volume, and an object detection system operable to capture information indicative of the shape of a first object of the plurality of objects positioned to be grasped by the gripper. A computer is coupled to the object detection system. The computer is operable to identify a plurality of possible grasp locations on the first object and to generate a numerical parameter indicative of the desirability of each grasp location, wherein the numerical parameter is at least partially defined by the next task to be performed by the robot.
In another construction, a robot operable within a 3-D volume includes a gripper movable between an open position and a closed position to grasp any one of a plurality of objects, an articulatable portion coupled to the gripper and operable to move the gripper to a desired position within the 3-D volume, and an imaging system operable to capture an image of a first object of the plurality of objects which is positioned to be grasped by the gripper. A computer is coupled to the imaging system and includes a neural network that identifies a plurality of possible grasp locations on the first object, and that determines a numerical parameter of the desirability of each grasp location, wherein the numerical parameter is at least partially defined by the next task to be performed by the robot, and the arrangement of the gripper, and wherein a portion of the plurality of possible grasp locations are eliminated at least partially in response to an available movement path of the gripper and the articulatable portion within the 3-D volume.
In another construction, a method of gripping an object with a robot that is movable within a 3-D volume includes connecting a gripper that is movable between an open position and a closed position to an articulatable portion of the robot, capturing an image of the object to be grasped, and operating a neural network on a computer to analyze the image and generate a plurality of possible grasp locations for consideration. The method also includes assigning a numerical parameter indicative of the desirability of each grasp location to each grasp location, wherein the numerical parameter is at least partially defined by the next task to be performed by the robot, selecting the most desirable grasp location based on the numerical parameter, and grasping the object in the selected grasp location.
The foregoing has outlined rather broadly the technical features of the present disclosure so that those skilled in the art may better understand the detailed description that follows. Additional features and advantages of the disclosure will be described hereinafter that form the subject of the claims. Those skilled in the art will appreciate that they may readily use the conception and the specific embodiments disclosed as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure. Those skilled in the art will also realize that such equivalent constructions do not depart from the spirit and scope of the disclosure in its broadest form.
Also, before undertaking the Detailed Description below, it should be understood that various definitions for certain words and phrases are provided throughout this specification and those of ordinary skill in the art will understand that such definitions apply in many, if not most, instances to prior as well as future uses of such defined words and phrases. While some terms may include a wide variety of embodiments, the appended claims may expressly limit these terms to specific embodiments.
Before any embodiments of the invention are explained in detail, it is to be understood that the invention is not limited in its application to the details of construction and the arrangement of components set forth in the following description or illustrated in the following drawings. The invention is capable of other embodiments and of being practiced or of being carried out in various ways. Also, it is to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting.
Various technologies that pertain to systems and methods will now be described with reference to the drawings, where like reference numerals represent like elements throughout. The drawings discussed below, and the various embodiments used to describe the principles of the present disclosure in this patent document are by way of illustration only and should not be construed in any way to limit the scope of the disclosure. Those skilled in the art will understand that the principles of the present disclosure may be implemented in any suitably arranged apparatus. It is to be understood that functionality that is described as being carried out by certain system elements may be performed by multiple elements. Similarly, for instance, an element may be configured to perform functionality that is described as being carried out by multiple elements. The numerous innovative teachings of the present application will be described with reference to exemplary non-limiting embodiments.
Also, it should be understood that the words or phrases used herein should be construed broadly, unless expressly limited in some examples. For example, the terms “including,” “having,” and “comprising,” as well as derivatives thereof, mean inclusion without limitation. The singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. Further, the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. The term “or” is inclusive, meaning and/or, unless the context clearly indicates otherwise. The phrases “associated with” and “associated therewith,” as well as derivatives thereof, may mean to include, be included within, interconnect with, contain, be contained within, connect to or with, couple to or with, be communicable with, cooperate with, interleave, juxtapose, be proximate to, be bound to or with, have, have a property of, or the like.
Also, although the terms “first”, “second”, “third” and so forth may be used herein to refer to various elements, information, functions, or acts, these elements, information, functions, or acts should not be limited by these terms. Rather these numeral adjectives are used to distinguish different elements, information, functions or acts from each other. For example, a first element, information, function, or act could be termed a second element, information, function, or act, and, similarly, a second element, information, function, or act could be termed a first element, information, function, or act, without departing from the scope of the present disclosure.
In addition, the term “adjacent to” may mean: that an element is relatively near to but not in contact with a further element; or that the element is in contact with the further element, unless the context clearly indicates otherwise. Further, the phrase “based on” is intended to mean “based, at least in part, on” unless explicitly stated otherwise. The terms “about” or “substantially” or like terms are intended to cover variations in a value that are within normal industry manufacturing tolerances for that dimension. If no industry standard is available, a variation of twenty percent would fall within the meaning of these terms unless otherwise stated.
The gripper 35 includes a body 90 that connects to the wrist joint 50, a first finger 95 and a second finger 100. The first finger 95 and the second finger 100 each include an engagement surface 105 arranged to grasp various objects. The first finger 95 and the second finger 100 attach to the body 90 and are movable between an open position in which the engagement surfaces 105 of the first finger 95 and the second finger 100 are spaced apart from one another and a closed position in which the engagement surfaces 105 of the first finger 95 and the second finger 100 are either in contact with one another or spaced very close to one another. In the illustrated construction, two linkages 110 interconnect the first finger 95 and the second finger 100 to the body 90. The linkages 110 are arranged to assure that the engagement surfaces 105 remain substantially parallel to one another while in any position between the open position and the closed position.
As one of ordinary skill will realize, there is great variation in the design of the gripper 35 available in the robot arts. Grippers could include fingers 95, 100 that move linearly on a screw mechanism or could include fingers 95, 100 that pivot and do not remain parallel. Entirely different gripper mechanisms such as vacuum systems, magnetic systems, three or more finger systems, etc. could also be employed. In addition, many different arrangements are available for the articulatable portion including linear motion robots, fewer joints or more joints, etc. The actual design of the articulatable portion 30 and the gripper 35 are not critical to the invention so long as the articulatable portion 30 is able to move the gripper 35 into desired locations and orientations and the gripper 35 is able to grip multiple different objects in different ways.
As discussed, the robot 15, and specifically the articulatable portion 30 and the gripper 35 are movable to any point within the predefined 3-D volume 20 to pick up an object or perform a particular task. Each robot 15 has a predefined limit of motion that is well-known and is a function of the construction of the robot 15.
With continued reference to
A computer 125 is connected to the robot 15 to control the movement of the robot 15. The computer 125 receives feedback from various sensors on the robot 15 and in the related systems and generates control signals to move the articulatable portion 30 and the gripper 35 as required for a particular operation. The computer 125 also receives user input such as manufacturing plans, customer orders, and the like, such that the computer 125 knows the next step for any given object 120 that is delivered on the conveyor system 115.
An object detection system 130 is also connected to the computer 125 and positioned to detect objects 120 on the conveyor 115 and within the 3-D volume 20. The object detection system 130 detects more than the presence of the object 120. Rather, the object detection system 130 detects what the object 120 is. In one construction, the object detection system 130 includes an imaging system 135 that captures a still image of the object 120. The still image is sent to the computer 125 and the computer 125 analyzes the image to determine what object 120 is in position to be grasped. In other constructions, other detection systems 130 may be employed. For example, one system 130 could read an RFID, bar code, or other indicator attached to or proximate the object 120 and send that information to the computer 125 to identify the object 120. For object detection systems 130 that do not capture an image of the object 120, images would need to be available to the computer 125 for each possible object 120. The images would be associated with the object 120 and used to determine grasp locations as will be discussed with regard to
The software aspects of the present invention could be stored on virtually any computer readable medium including a local disk drive system, a remote server, internet, or cloud-based storage location. In addition, aspects could be stored on portable devices or memory devices as may be required. The computer 125 generally includes an input/output device that allows for access to the software regardless of where it is stored, one or more processors, memory devices, user input devices, and output devices such as monitors, printers, and the like.
The processor could include a standard micro-processor or could include artificial intelligence accelerators or processors that are specifically designed to perform artificial intelligence applications such as artificial neural networks, machine vision, and machine learning. Typical applications include algorithms for robotics, internet of things, and other data-intensive or sensor-driven tasks. Often AI accelerators are multi-core designs and generally focus on low-precision arithmetic, novel dataflow architectures, or in-memory computing capability. In still other applications, the processor may include a graphics processing unit (GPU) designed for the manipulation of images and the calculation of local image properties. The mathematical basis of neural networks and image manipulation are similar, leading GPUs to become increasingly used for machine learning tasks. Of course, other processors or arrangements could be employed if desired. Other options include but are not limited to field-programmable gate arrays (FPGA), application-specific integrated circuits (ASIC), and the like.
The computer 125 also includes communication devices that may allow for communication between other computers or computer networks, as well as for communication with other devices such as machine tools, work stations, actuators, controllers, sensors, and the like.
In many applications, the robot 15 may be used to grasp the object 120 and then perform a task with that object 120. In repetitive applications, where the same object 120 is repeatedly grasped and the same task is then performed with the object 120, the robot 15 can simply be programmed to perform the grasp and the task. However, more flexible systems are needed when any one of a large number of objects 120 may appear on the conveyor 115 and each object 120 may have a different task to be performed depending on the object 120. In such systems, straightforward programming of the robot 15 may be prohibitively complex.
The computer 125 illustrated in
Once trained, the computer 125 uses the captured image, or an otherwise obtained image of the object 120 to determine possible grasp locations, to estimate the quality of each grasp location, and to assign a numerical parameter indicative of that quality. For example, the computer 125 could assign a value between 0 and 100 with 100 being the best possible grasp location.
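The 0-to-100 parameter assignment described above might be sketched as follows. The candidate fields, the weighting, and the `stability`/`reachability` inputs are illustrative assumptions for the sketch, not details taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class GraspCandidate:
    x: float        # grasp point in the image frame (illustrative)
    y: float
    angle: float    # gripper approach angle, radians
    width: float    # required finger opening

def grasp_quality(candidate: GraspCandidate,
                  stability: float,
                  reachability: float) -> float:
    """Combine per-candidate estimates into a single 0-100 parameter.

    `stability` and `reachability` are assumed to lie in [0, 1], e.g.
    produced by a trained network and a kinematics check respectively.
    The 0.7/0.3 weighting is an arbitrary example."""
    score = 100.0 * (0.7 * stability + 0.3 * reachability)
    return max(0.0, min(100.0, score))  # clamp to the 0-100 scale
```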
Thus, the system described herein determines optimal grasping locations without pre-programming of the computer 125. Possible grasp positions can be found on the edges of the object 120 to be grasped. The search for the optimal grasp location can be implemented by a genetic algorithm, such as a particle-filter-motivated approach. That is, grasp candidates are uniformly sampled on the edges; when a good grasp is found, there is a high likelihood that other grasp candidates are generated with a Gaussian distribution in its proximity. If a grasp candidate is bad, the candidate is forgotten and no “offspring” are generated in its proximity. This process continues iteratively to find the best grasp candidate or candidates.
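A minimal sketch of such a particle-filter-motivated search, assuming a generic `score_fn` quality estimate over 2-D edge points; the population size, iteration count, and Gaussian spread are illustrative constants, not values from the disclosure.

```python
import random

def refine_grasps(edge_points, score_fn, iterations=20, population=30, sigma=2.0):
    """Particle-filter-motivated search for a good grasp point.

    Candidates are sampled uniformly from the object's edge points; candidates
    that score well survive and spawn Gaussian-perturbed "offspring" nearby,
    while poor candidates are forgotten. `score_fn` maps an (x, y) point to a
    quality value (higher is better)."""
    candidates = random.sample(edge_points, min(population, len(edge_points)))
    for _ in range(iterations):
        scored = sorted(candidates, key=score_fn, reverse=True)
        survivors = scored[: population // 2]           # forget bad candidates
        offspring = [
            (x + random.gauss(0.0, sigma), y + random.gauss(0.0, sigma))
            for (x, y) in survivors                     # sample near good grasps
        ]
        candidates = survivors + offspring
    return max(candidates, key=score_fn)
```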
As illustrated in
The computer 125 is also provided with data regarding the next step to be performed with the particular object 120. In one construction, the computer 125 includes a database of next tasks that are each associated with a different potential object 120 to be grasped. The next task could be assembly related, could require the packaging of the object 120, could require the placement of the object 120 in a machine tool, or could include another task.
The next task acts as a further constraint that is analyzed by the computer 125 to adjust the numerical parameter. For example,
With the evaluation of candidate gripping positions complete, the computer 125 selects the most desirable gripping position and provides the necessary instructions to the robot 15 to grasp the object 120 as desired and perform the assembly step.
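The task-adjusted selection might be sketched as follows; `base_score` and `task_penalty` are hypothetical stand-ins for the computer's geometric quality estimate and the next-task constraint, and are not named in the disclosure.

```python
def select_grasp(candidates, base_score, task_penalty, next_task):
    """Pick the grasp whose task-adjusted parameter is highest.

    `base_score(c)` is the geometric quality estimate for candidate c, and
    `task_penalty(c, task)` reduces the parameter for grasps that would
    obstruct the next task (e.g. fingers covering a surface needed for the
    upcoming assembly step). Both callbacks are illustrative assumptions."""
    return max(candidates, key=lambda c: base_score(c) - task_penalty(c, next_task))
```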
In use and as illustrated in
The numerical parameter is used to select the most desirable gripping position 175. The computer 125 provides the necessary commands to the robot 15 to cause the robot 15 to grab the object 120 using the selected gripping position and to perform the next task.
In yet another application, a robot is used to pack a bin or other storage space 210 having a fixed volume. When packing a bin or other volume, objects of different sizes, shapes, or volumes must be packed into a finite number of bins or containers, so as to minimize the number of bins used or maximize the number of objects packed in the volume. A variant of bin packing that occurs in practice is when the packing is constrained by size, by weight, by cost, etc. Bin or volume packing is important in industrial and commercial applications, in logistics, supply chain management and manufacturing, such as loading trucks with weight capacity constraints, filling containers, storing goods in warehouses, and cutting raw materials. Robotic bin packing requires industrial robots to implement an optimized bin packing strategy, i.e. grasping and packing the objects in the desired poses (e.g. locations and orientations).
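A one-dimensional sketch of the bin packing problem described above, using the classic first-fit-decreasing heuristic; real robotic packing adds pose and reachability constraints, and the function and variable names are illustrative.

```python
def first_fit_decreasing(volumes, bin_capacity):
    """Classic first-fit-decreasing heuristic for 1-D bin packing.

    Items are sorted largest-first; each is placed in the first bin with
    room, and a new bin is opened only when necessary. Returns the list of
    item lists, one per bin used."""
    bins = []    # remaining capacity of each open bin
    packed = []  # parallel list: items placed in each bin
    for v in sorted(volumes, reverse=True):
        for i, remaining in enumerate(bins):
            if v <= remaining:
                bins[i] -= v
                packed[i].append(v)
                break
        else:
            bins.append(bin_capacity - v)  # open a new bin
            packed.append([v])
    return packed
```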
Task-specific grasping can significantly boost the efficiency of robotic bin packing. To implement the optimal bin packing strategy, each grasping and packing operation depends on the previous operation or operations as well as on subsequent operations. Therefore, the proposed method may calculate different, more efficient task-relevant grasping points for the same object depending on the available storage geometry or space. Thus, an object may be grasped one way to place it in a first position in the bin and a different way to place an identical object in a second position in the bin.
Future manufacturing will move towards higher levels of automation; for example, mass customization requires manufacturing in small production volumes and with high product variability. Future manufacturing automation systems will need the ability to adapt to changing, complex environments. In many autonomous manufacturing tasks, grasping and assembling operations depend on the sequence of manufacturing operations and are therefore task relevant.
Task-specific grasping can also be used to automatically select between different types of grippers. Traditional grippers can be categorized as magnetic, vacuum, mechanical collet-type, and mechanical gripper-type. Magnetic grippers are typically used for lifting and moving ferromagnetic pieces such as steel sheets, blanks, and stamped parts. Vacuum (suction) grippers can be used for grasping non-ferrous objects for which a magnetic gripper is not suitable. Multi-finger mechanical grippers are the most popular mechanical grippers. Two-finger grippers can grasp and handle a wide range of objects with good precision as illustrated in
When multiple grippers are available, the system described herein can evaluate the different possible grasps for each possible gripper to determine the most efficient way to grasp an object and perform a subsequent step or steps with that object. For a given task and object, the system first evaluates all possible grasping locations for all available grippers. The system selects the best grasping location and thereby the best gripper for the task. In another arrangement, the system first selects the most desirable gripper. The system then calculates a set of potential grasping locations using the selected gripper.
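The exhaustive gripper-and-grasp evaluation might be sketched as follows; `grasps_for` and `score` are hypothetical callbacks standing in for the per-gripper candidate generation and the numerical parameter described above, not details of the disclosure.

```python
def best_gripper_and_grasp(grippers, grasps_for, score):
    """Evaluate every candidate grasp for every available gripper and return
    the (gripper, grasp) pair with the highest task-adjusted score.

    `grasps_for(gripper)` yields the candidate grasps feasible for that
    gripper; `score(gripper, grasp)` is the numerical desirability parameter
    (higher is better)."""
    return max(
        ((g, c) for g in grippers for c in grasps_for(g)),
        key=lambda pair: score(*pair),
    )
```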
As an example, the gear assembly 225 of
The system described herein improves the art of robotics and more specifically the use of robots in grasping applications. By accounting for the upcoming tasks in making grasping decisions, the efficiency of the complete assembly process can be improved.
Although an exemplary embodiment of the present disclosure has been described in detail, those skilled in the art will understand that various changes, substitutions, variations, and improvements disclosed herein may be made without departing from the spirit and scope of the disclosure in its broadest form.
None of the description in the present application should be read as implying that any particular element, step, act, or function is an essential element, which must be included in the claim scope: the scope of patented subject matter is defined only by the allowed claims. Moreover, none of these claims are intended to invoke a means plus function claim construction unless the exact words “means for” are followed by a participle.
Number | Date | Country | Kind |
---|---|---|---
18213367.8 | Dec 2018 | EP | regional |