The invention relates to a sensor apparatus for a gripping system, wherein the gripping system comprises a robot, that is to say a manipulator with at least one degree of freedom such as, for example, an industrial robot, having a gripping device for handling objects and a robot or machine control for controlling the robot and the gripping device. The invention also relates to a method for generating gripping poses for a machine or robot control for controlling the robot and the gripping device for gripping objects, and to an associated gripping system.

U.S. Pat. No. 9,002,098 B1 describes a robot-assisted visual perception system for determining the position and pose of a three-dimensional object. The system receives an external input for selecting an object to be gripped. It also receives visual inputs from a sensor of a robot control that scans the object of interest. Rotation-invariant shape features and appearance features are extracted from the detected object and from a set of object templates. A match between the scanned object and an object template is identified on the basis of the shape features and confirmed using the appearance features. The scanned object is then identified, and its three-dimensional pose is determined. Based on the determined three-dimensional pose, the robot control is used to grip and manipulate the scanned object.

That system operates on the basis of templates or rotation-invariant features in order to compare the sensor data with the model. Such methods are primarily usable in contrast-rich scenes, but fail in the case of insufficient contrast or of geometric similarity between object classes. Model-free gripping is not shown, and the semantic assignment of the object class is likewise not solved.
The object of the present invention is to enable the generation of optimal gripping poses. From these gripping poses, command sets for controlling the gripping device for gripping objects can then advantageously be produced on the robot or machine side. Both the gripping of known objects and the gripping of unknown objects should be possible. This object is achieved by a sensor apparatus having the features of claim 1.
Such a sensor apparatus allows, in particular, a rapid start-up of handling tasks such as pick & place without intervention in the robot or machine control and without expert knowledge in the field of image processing and robotics. The sensor apparatus represents a largely autonomous unit with which suitable gripping poses can be generated. From these gripping poses, application-independent command sets for the robot or machine control can be generated on the robot or machine side.
Segmentation is a subfield of digital image processing and of machine vision. Segmentation denotes the generation of regions related by content by combining adjacent pixels or voxels according to a homogeneity criterion.
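Purely by way of illustration, such a homogeneity criterion could, for example, be realized as a simple region growing over gray values; the function name and the threshold below are illustrative assumptions and not part of the description above:

    # Illustrative region growing: adjacent pixels are merged into a region
    # as long as their gray value differs from the seed value by less than
    # a threshold (the homogeneity criterion).
    import numpy as np
    from collections import deque

    def grow_region(image, seed, max_diff=10):
        """Return a boolean mask of the region grown from the seed pixel."""
        h, w = image.shape
        mask = np.zeros((h, w), dtype=bool)
        queue = deque([seed])
        seed_value = int(image[seed])
        while queue:
            y, x = queue.popleft()
            if mask[y, x]:
                continue
            if abs(int(image[y, x]) - seed_value) > max_diff:
                continue  # homogeneity criterion violated
            mask[y, x] = True
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                    queue.append((ny, nx))
        return mask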
As a service, the control interface provides the robot or machine controls with semantic/numerical information about the objects contained in the image data, and in particular with the gripping poses.
A system with such a sensor apparatus consequently allows the gripping of known objects as well as the gripping of unknown objects on the basis of the generalized segmentation and gripping planning algorithm.
Via the user interface, in particular the object models for the segmentation model, the gripping planning parameters for the gripping planning module and/or the control parameters for the control interface can be specified, such as, for example, the sensor parameterization, the sensor and robot calibration and/or the parameterization of the gripping planning.
Further embodiments and advantageous designs of the invention are defined in the dependent claims.
The stated object is also achieved by a method according to claim 12 and by a gripping system according to claim 14.
Further details and advantageous embodiments of the invention can be found in the following description.
In the drawings:
Ever shrinking batch sizes and increasing labor costs are major challenges in production engineering in high-wage countries. To address them, a modern automation system must be able to be adapted quickly to changed environmental conditions. In the following, a sensor apparatus is presented which permits rapid start-up of handling tasks such as pick & place without programming.
The sensor apparatus represents, in particular, a computing unit that allows a suitable gripping pose for an object to be determined on the basis of gray value data, color data or 3D point cloud data (provided, for example, by mono or stereo camera systems). Suitable in this case means that the resulting grasp both meets certain quality criteria and does not lead to collisions between the gripper, the robot and other objects.
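A minimal sketch of this notion of suitability, assuming hypothetical quality() and in_collision() functions and an illustrative quality threshold, might look as follows:

    # Illustrative selection of a "suitable" grasp: a candidate is kept only
    # if it exceeds a quality threshold and is collision-free in the scene.
    def select_suitable_grasp(candidates, scene, quality, in_collision, min_quality=0.7):
        """Return the best collision-free grasp candidate, or None if none qualifies."""
        feasible = [g for g in candidates
                    if quality(g) >= min_quality and not in_collision(g, scene)]
        return max(feasible, key=quality, default=None)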
The camera system can be external or can be structurally integrated directly into the sensor apparatus, as is clear from the hardware architecture according to
Any imaging sensors, or camera systems, as well as manipulator systems can be connected via, in particular, a physical Ethernet interface. The software characteristics of the respective subsystems (robot, camera) are abstracted via a metadata description and integrated function drivers.
The software architecture is termed a pipeline, since the result of process i represents the input variable for process i+1. The individual objects are detected by an instance segmentation method from the image information provided by the sensor system. If other or further image processing functions are required, they can be made available to the overall system via the Vision Runtime. In this case, in-house functions can be developed and finished runtime systems can be incorporated. The segmented objects (object envelope with class membership) represent the input variable for the gripping planning. The gripping planner then determines suitable or desired grasps, which are made available to the control interface for execution.
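The chaining of the pipeline, in which the result of process i is the input variable of process i+1, can be sketched as follows; the step functions named in the usage comment are merely illustrative placeholders:

    # Illustrative pipeline: the output of step i feeds step i+1
    # (sensor data -> instance segmentation -> feature generation -> grasp planning).
    def run_pipeline(sensor_data, processes):
        """'processes' is an ordered list of callables forming the pipeline."""
        result = sensor_data
        for process in processes:
            result = process(result)   # result of process i becomes input of process i+1
        return result

    # e.g. grasp = run_pipeline(image, [segment_instances, generate_features, plan_grasp])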
The process shown in
Given the automation of the individual steps, the time-consuming programming of the image processing and of the robot program is omitted. In particular, only the following processes must be parameterized or executed by end users:
Customer-specific gripping problems can therefore be solved individually and without time-consuming and expensive programming effort.
The sensor apparatus maps the complete engineering process for automating a pick & place application. In this case, both 2D and 3D imaging sensors are considered, so that, depending on the application, a suitable hardware solution is obtained. Moreover, no known system combines the various possibilities of gripping planning (model-free/model-based) such that it can be used freely for various applications; known solutions are designed either for gripping arbitrary objects or for gripping a specific object. By shifting the system boundaries, task-oriented programming of the pick & place task is possible for the first time. This means that the user only needs to indicate which object (semantics) he wishes to grip next.
In the following, the overall system is presented both in terms of the software and hardware. The software architecture and the system sequence will first be described. Based on this, the teaching and deployment of the sensor apparatus for new objects is presented before the hardware implementation is finally described.
The pipeline of the sensor apparatus 14, from sensor data acquisition up to communication with the control, is shown in
The data are processed by the Vision Runtime module 2. In normal operation (gripping planning), this uses the instance segmentation module 3. The output consists of the object envelopes together with the class association of the objects contained in the sensor data. So that the method in module 3 can segment the objects, a segmentation model must be trained beforehand via a data-driven method (see
In the feature generation module 5, the relevant gripping features are determined from the object segmentation. These then represent the basis for gripping planning in the gripping planning module 6.
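By way of illustration, grasp-relevant features such as area, centroid and principal axis could, for example, be derived from a single object segment as follows; the feature set shown is an assumption, not a definition of module 5:

    # Illustrative grasp-relevant features from one object segment:
    # area, centroid and principal axis (via PCA of the mask pixels).
    import numpy as np

    def grasp_features(mask):
        """'mask' is a boolean object segment from the instance segmentation."""
        ys, xs = np.nonzero(mask)
        points = np.stack([xs, ys], axis=1).astype(float)
        centroid = points.mean(axis=0)
        cov = np.cov((points - centroid).T)
        eigvals, eigvecs = np.linalg.eigh(cov)
        major_axis = eigvecs[:, np.argmax(eigvals)]   # dominant object direction
        return {"area": len(points), "centroid": centroid, "major_axis": major_axis}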
Various methods for gripping planning are freely selectable by the user in the gripping planning module 6. Both model-based methods (one grasp or a plurality of grasps are specified by the user, and the system searches for these on the object in the scene) and model-free methods (the optimum grasp in relation to gripping stability and quality is determined by the system) are possible in the gripping planning module 6. Various gripping systems (number of fingers, operating principle such as clamping gripping or vacuum gripping) can also be set. This is configured via the gripping planning parameters in the user interface 9.
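A minimal sketch of such a selectable dispatch between model-based and model-free planning, with all planner functions passed in as assumed placeholders, might look as follows:

    # Illustrative dispatch between model-based and model-free grasp planning,
    # configured via the gripping planning parameters.
    def plan_grasp(segment, params, match_taught_grasps, sample_grasp_hypotheses):
        if params["mode"] == "model_based":
            # search the grasps taught by the user for this object in the scene
            return match_taught_grasps(segment, params["taught_grasps"])
        # model-free: rank grasp hypotheses by stability/quality and take the best
        hypotheses = sample_grasp_hypotheses(segment, params["gripper"])
        return max(hypotheses, key=lambda g: g["quality"])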
As output, the planner provides a gripping pose in SE(3) and the gripping finger configuration via the control interface. Optionally, a list of all recognized objects together with class association and object envelopes can also be provided in addition to the gripping pose.
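Purely as an illustration of this output, a gripping pose in SE(3) can be represented as a 4x4 homogeneous transform together with a finger configuration; the field names below are assumptions:

    # Illustrative planner output: a pose in SE(3) as a 4x4 homogeneous
    # transform (rotation R, translation t) plus a finger configuration.
    import numpy as np

    def make_grasp_output(R, t, finger_width):
        pose = np.eye(4)
        pose[:3, :3] = R        # 3x3 rotation matrix
        pose[:3, 3] = t         # translation in the robot/camera frame
        return {"pose_se3": pose, "finger_width": finger_width}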
The control interface 7 is used to communicate with the robot or machine control 8. It is designed as a client-server interface, wherein the sensor apparatus represents the server and the control 8 represents the client. The interface 7 is based on a generally applicable protocol, so that it can be used for various proprietary controls and their specific command sets. Communication takes place via TCP/IP or via a fieldbus protocol. A specific function block, which generates control-specific command sets, is integrated in the control 8.
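A minimal sketch of such a client-server exchange over TCP/IP, in which the sensor apparatus acts as server, is given below; the port number and the text-based message format are illustrative assumptions and not the protocol itself:

    # Illustrative TCP/IP exchange: the sensor apparatus answers a request
    # from the control (client) with a generic, text-based reply.
    import socket, json

    def serve_grasp_poses(get_next_grasp, host="0.0.0.0", port=50000):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
            srv.bind((host, port))
            srv.listen(1)
            conn, _ = srv.accept()           # the control connects as client
            with conn:
                request = conn.recv(1024)    # e.g. b'GET_GRASP'
                if request.strip() == b"GET_GRASP":
                    # get_next_grasp() is assumed to return a JSON-serializable
                    # dict with pose, class association and object envelope
                    reply = json.dumps(get_next_grasp())
                    conn.sendall(reply.encode("utf-8"))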
All parameterization and configuration of the sensor apparatus is done via the user interface 9. This is provided by a web server that runs locally on the sensor apparatus 14. The teaching of segmentation models takes place on a possibly external training server; the uploading of training data and the downloading of the finished model are done via the user interface 9.
The process for teaching the objects to be gripped and for deployment on the sensor apparatus is shown in
A training server 11 is available for teaching the segmentation model. This service can be carried out outside the sensor apparatus 14. The user 10 can provide the objects to be gripped as CAD data and as real scene data. On the basis of these data, various object scenes are generated in the virtual environment module 12 and made available to the training module 13 as photorealistic synthetic data. The time expenditure for the training data annotation can therefore be greatly reduced. The data-driven segmentation algorithm is trained in the module 13. The output is a segmentation model that the user 10 incorporates on the sensor apparatus 14 via the user interface 9.
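By way of illustration, the generation of annotated synthetic training scenes from CAD data could be sketched as follows, with render_scene() and random_pose() as assumed placeholders for the virtual environment module 12:

    # Illustrative generation of annotated synthetic training scenes from CAD
    # models; the renderer and the pose sampler are passed in as placeholders.
    import random

    def generate_training_set(cad_models, num_scenes, render_scene, random_pose):
        samples = []
        for _ in range(num_scenes):
            objects = random.sample(cad_models, k=min(3, len(cad_models)))
            poses = [random_pose() for _ in objects]
            image, masks, classes = render_scene(objects, poses)
            samples.append({"image": image, "masks": masks, "classes": classes})
        return samples  # consumed by the training module 13 to fit the segmentation model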
The hardware architecture and the embedding of the sensor apparatus 14 in the overall automation system are shown in
The electrical energy is supplied to the sensor apparatus 14 via the energy supply module 18. The sensor apparatus, which functions as a server with respect to the control 8, represents the slave in the communication topology of the overall automation system. As the master, the control 8 integrates the gripping device 22 in terms of software and hardware via the provided fieldbus system. The gripping device 22 can also be integrated via a system control 21 if required by the architecture of the overall installation.
The sensor apparatus 14 is connected via the physical user interface 15 (Ethernet, for example) to a terminal (for example a PC) by the user 10. The software configuration then takes place via the interface 9 (web server).
The communication of the sensor apparatus 14 with the control 8 also takes place via an optionally physically separate or common interface 15 (Ethernet, fieldbus, for example). The communication takes place as already shown.
The communication with the imaging sensor takes place via a further, physically separate Ethernet interface 16. For example, GigE can be used in this case. An additional lighting module 19 can also be activated via the sensor apparatus interface 17 (digital output). The system boundary of the sensor apparatus 14 can also be expanded by the integration of 1 and 19, wherein the interfaces remain the same.
Priority application: 10 2020 115 628.6, June 2020, DE (national).
International filing: PCT/EP2021/065736, filed 6/11/2021 (WO).