The invention relates to a method for producing an environment map for a mobile logistics robot, wherein the environment is sensed by means of a sensor system and the sensor data is evaluated in a processor unit, wherein a virtual grid of the environment is produced using cells, and wherein the cells in which objects are detected are labeled as occupied cells and the cells in which no objects are detected are labeled as free cells, as a result of which a representation of the environment is produced.
The invention further relates to a mobile logistics robot for carrying out the method.
Mobile logistics robots are increasingly used in industry and in logistics operations to automate industrial manufacturing processes as well as to automate logistics tasks such as order picking, for example. The robots most commonly used in these operations are mobile logistics robots with arm manipulators, in particular robot arms. Articulated arm robots are one example of this type of robot.
The deployment of mobile logistics robots, in particular autonomous guided vehicles with robot arms for load handling, e.g. mobile order-picking robots, is particularly challenging because logistics robots must be able to move freely in a logistics area such as a warehouse building, for example. The mobile logistics robots are therefore constantly encountering ever-changing working environments.
To make possible the localization and navigation of a mobile logistics robot in changing environmental conditions of this type, environment maps for the mobile logistics robot must be constantly updated.
There are different methods for producing environment maps for mobile logistics robots. On the one hand, 2D maps, also called grid maps, can be used, into which the data from 2D sensor systems, such as laser scanners, can be entered. The grid map is based on a pattern, the grid, with cells each measuring 10 cm × 10 cm, for example. Everything that is seen by the 2D sensor system is labeled as “occupied” in the map. Cells along a clear line of sight from the sensor to an object are labeled as free. The result is a 2D representation of the environment, although without information about what each object is, i.e. without information about which object occupies a given cell. The same method can also be used with a 3D sensor system, in which case an octree is used, for example. The method for a 3D sensor system is identical to that for the 2D sensor system, the only difference being that the cells are now three-dimensional, i.e. with the dimensions 10 cm × 10 cm × 10 cm, for example. Here too, only the raw sensor information is used for the map, so that the result is a representation of the environment. No conclusion about the objects in the map, i.e. about which objects occupy the cells, is possible, or such a conclusion requires subsequent processing.
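The grid-mapping scheme described above can be sketched as follows. This is a minimal illustration with hypothetical names and a simplified straight-line ray trace, not the actual method of the invention: a ray from the sensor to a detected object marks the intermediate cells as free and the endpoint cell as occupied.

```python
# Minimal 2D occupancy-grid sketch (hypothetical names; 10 cm cells).
UNKNOWN, FREE, OCCUPIED = -1, 0, 1
CELL_SIZE = 0.10  # metres per cell


def make_grid(width_cells, height_cells):
    """Create a grid in which every cell is initially unknown."""
    return [[UNKNOWN] * width_cells for _ in range(height_cells)]


def to_cell(x, y):
    """Map metric coordinates to grid-cell indices."""
    return int(x / CELL_SIZE), int(y / CELL_SIZE)


def insert_ray(grid, sensor_xy, hit_xy, steps=100):
    """Trace from the sensor to the hit point: cells along the clear
    line of sight become free, the endpoint cell becomes occupied."""
    sx, sy = sensor_xy
    hx, hy = hit_xy
    for i in range(steps + 1):
        t = i / steps
        cx, cy = to_cell(sx + t * (hx - sx), sy + t * (hy - sy))
        grid[cy][cx] = FREE
    ex, ey = to_cell(hx, hy)
    grid[ey][ex] = OCCUPIED


grid = make_grid(20, 20)
insert_ray(grid, (0.05, 0.05), (1.05, 0.05))  # object detected 1 m ahead
```

The result is exactly the representation criticized above: the map records that cell (10, 0) is occupied, but carries no information about which object occupies it.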
The object of this invention is to provide a method of the type described above and a mobile logistics robot to carry out the method so that environment maps with a higher information content can be produced.
The invention accomplishes this object in that the objects that are occupying the cells are identified in the processor unit.
The invention makes it possible to close the gap in the known mapping methods, namely that the known maps are unable to identify the objects that occupy the cells. With the identification of the objects, it becomes possible to enter the result of that identification into the environment map produced, i.e. the information about which object occupies a cell. With the invention, the mobile logistics robot receives specific information about which objects are located where.
As part of this process, the objects are expediently identified by means of image processing methods. In this context it is advantageous to use a sensor system that comprises at least one optical sensor, in particular a camera. In the processor unit, the sensor data can then be evaluated by means of image processing methods so that the objects can be identified.
In one preferred development of the invention, the objects are identified by means of artificial intelligence methods.
For the identification, the objects are advantageously recognized in at least one object recognition unit of the processor unit, which works in particular with imaging processes and/or artificial intelligence.
With the recognition of objects it becomes possible to enter the objects into the environment map with their current posture, i.e. with both the translational spatial coordinates x, y and z and the orientation coordinates roll, pitch and yaw, i.e. with their position and orientation, as well as with their dimensions, i.e. with height and depth. An object that is recognized repeatedly can be used as a natural landmark for localization.
For identification, the objects can also be classified in at least one classification unit of the processor unit. For this purpose, in addition to the three-dimensional position and the dimensions of the object, at least one additional characteristic of the detected object can be taken into consideration for entry into the map.
The objects are preferably classified into static, manipulable and dynamic objects and entered into the environment map. The objects are therefore divided into categories comprising static objects, such as walls, columns and shelves, for example, manipulable objects, such as pallets, boxes and pallet cages, for example, and dynamic objects, such as people and vehicles, for example. The dynamic objects must never be used for localization and can therefore always be excluded from this task. The information is nevertheless useful, for example for a management system, which can then detect exactly where each vehicle is located.
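The map entries and the exclusion of dynamic objects from localization can be sketched as follows. All names are assumptions for illustration; the entry combines the 6-DoF posture (x, y, z, roll, pitch, yaw), the dimensions and the object category described above:

```python
# Hedged sketch (hypothetical names): a semantic map entry stores the
# object's posture, its dimensions and its category.
from dataclasses import dataclass

STATIC, MANIPULABLE, DYNAMIC = "static", "manipulable", "dynamic"


@dataclass
class MapObject:
    name: str
    x: float; y: float; z: float            # translational coordinates
    roll: float; pitch: float; yaw: float   # orientation
    width: float; height: float; depth: float
    category: str


def localization_landmarks(env_map):
    """Dynamic objects (people, vehicles) are never used for localization,
    so they are filtered out of the landmark set."""
    return [o for o in env_map if o.category != DYNAMIC]


env_map = [
    MapObject("wall", 0, 0, 0, 0, 0, 0, 10.0, 3.0, 0.2, STATIC),
    MapObject("pallet", 2, 1, 0, 0, 0, 0.5, 1.2, 0.15, 0.8, MANIPULABLE),
    MapObject("forklift", 5, 4, 0, 0, 0, 1.0, 1.2, 2.2, 3.0, DYNAMIC),
]
landmarks = localization_landmarks(env_map)
```

In this sketch the forklift remains in the map, where a management system can read its position, but it is excluded from the set of landmarks used for localization.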
The environment map generated can thereby be constructed from the following three parts, for example: a map of the static objects, a map of the manipulable objects and a map of the dynamic objects.
As a result, by means of such a map structure, a digital twin of the environment of the logistics robot, in particular of a warehouse including inventory, can be produced in the processor unit.
The environment map can be configured in a variety of ways, e.g. with all objects in one map or distributed over a plurality of maps.
To increase the quality of identification of objects, in one preferred development of the invention, a plurality of classification units, in particular different classification units, and/or a plurality of object recognition units, in particular different object recognition units, are consolidated.
The consolidation can also be performed upstream of the classification and/or the object recognition. For example, sensor signals from different sensors of the sensor system can be transmitted to the classification unit and/or object recognition unit.
For this purpose, different sensor types can be used as inputs, such as, for example, laser scanners, RGB cameras, depth cameras, RGBD cameras, radar sensors etc.
The objects to be identified can in particular be all objects that can be found in a warehouse or its outdoor operating areas. These objects include, for example, pallets, fire extinguishers, doors, emergency exit signs, walls, ceilings, floor markings etc.
The invention further relates to a mobile logistics robot for carrying out the method, with an apparatus for generating an environment map for the mobile logistics robot, which apparatus comprises a sensor system for sensing the environment of the mobile logistics robot and a processor unit for evaluating the sensor data, wherein the processor unit is designed to generate a virtual grid of the environment with cells and to label the cells in which objects are detected as occupied cells and the cells in which no object is detected as free cells, as a result of which a representation of the environment can be generated.
The mobile logistics robot accomplishes the stated object in that the processor unit comprises an identification unit which is designed to identify the objects that occupy the cells.
The identification unit thereby appropriately comprises at least one classification unit and/or at least one object recognition unit.
The sensor system preferably further comprises an optical sensor, in particular a laser scanner and/or a camera.
The sensor system can also comprise at least one radar sensor.
The invention offers a whole series of advantages:
The invention makes it possible to produce a semantic map as a map of the environment of the mobile logistics robot. The information gap concerning the identification of the objects that appear in a map can thereby be closed. The quantity of data for this semantic map can also be significantly smaller, which results in a conservation of resources. The invention further makes it possible to easily reach conclusions about which objects are located where.
The maps also contain significantly more information concerning the objects, so that filtering by objects can also be performed to locate them or to manipulate them, i.e. to pick them up, to relocate them etc. Overall, a great many more operations based on the semantic map can be carried out than with conventional mapping methods such as with grid maps, for example.
There are above all configuration capabilities in the digital services on which the map according to the invention is based. For example, the invention can be used to take inventory, to track goods inside a warehouse, to detect damage to infrastructure, to detect anomalies (such as blocked emergency exits or vehicles in no-parking zones), and to avoid and prevent accidents.
The terms Fig., Figs., Figure, and Figures are used interchangeably in the specification to refer to the corresponding figures in the drawings.
Additional advantages and details of the invention are described in greater detail below with reference to the exemplary embodiments illustrated in the accompanying schematic figures, in which
To increase the quality of object recognition and object classification, a plurality of classification units 8 and object recognition units 4 can be consolidated.
The environment map 5 is created in the processor unit 3 from the results of the classification units 8 and object recognition units 4. The environment map 5 therefore includes recognized objects 1 with their position, orientation and extent (dimensions), as well as with the additional property of what type of object 1 it is, i.e. whether it is a static, a manipulable or a dynamic object.
The sensor signals from the different sensors 6 of the sensor system 2 can also be consolidated upstream of the object recognition units 4 and classification units 8 and transmitted to them.
Finally,
Number | Date | Country | Kind
---|---|---|---
10 2021 133 614.7 | Dec 2021 | DE | national
This application is the United States national phase of International Patent Application No. PCT/EP2022/082947 filed Nov. 23, 2022, and claims priority to German Patent Application No. 10 2021 133 614.7 filed Dec. 17, 2021, the disclosures of which are hereby incorporated by reference in their entireties.
Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/EP2022/082947 | 11/23/2022 | WO |