Method for Producing an Environment Map for a Mobile Logistics Robot and Mobile Logistics Robot

Information

  • Patent Application
  • Publication Number: 20250042016
  • Date Filed: November 23, 2022
  • Date Published: February 06, 2025
Abstract
A method for the production of an environment map (5) for a mobile logistics robot includes sensing an environment by use of a sensor system (2). The sensor data is evaluated in a processor unit (3), and a virtual grid of the environment is produced using cells. The cells in which objects (1) are detected are labeled as occupied cells and the cells in which no objects (1) are detected are labeled as free cells, as a result of which a representation of the environment is generated. The objects (1) that occupy the cells are identified in the processor unit (3). The mobile logistics robot carries out the method.
Description
BACKGROUND OF THE INVENTION
Field of the Invention

The invention relates to a method for producing an environment map for a mobile logistics robot, wherein the environment is sensed by means of a sensor system and the sensor data is evaluated in a processor unit, wherein a virtual grid of the environment is produced using cells, and wherein the cells in which objects are detected are labeled as occupied cells and the cells in which no objects are detected are labeled as free cells, as a result of which a representation of the environment is produced.


The invention further relates to a mobile logistics robot for carrying out the method.


Description of Related Art

Mobile logistics robots are increasingly used in industry and in logistics operations to automate industrial manufacturing processes as well as to automate logistics tasks such as order picking, for example. The robots most commonly used in these operations are mobile logistics robots with arm manipulators, in particular robot arms. Articulated arm robots are one example of this type of robot.


The deployment of mobile logistics robots, in particular autonomous guided vehicles with robot arms for load handling, e.g. mobile order-picking robots, is particularly challenging because logistics robots must be able to move freely in a logistics area such as a warehouse building, for example. The mobile logistics robots are therefore constantly encountering ever-changing working environments.


To make possible the localization and navigation of a mobile logistics robot in changing environmental conditions of this type, environment maps for the mobile logistics robot must be constantly updated.


There are different methods for producing environment maps for mobile logistics robots. On the one hand, 2D maps, also called grid maps, can be used, into which the data from 2D sensor systems, such as laser scanners, is entered. The grid map is based on a pattern, the grid, with cells that are each, for example, 10 cm×10 cm in size. Everything that is seen by the 2D sensor system is labeled as “occupied” in the map. Cells on the clear line of sight from the sensor source to an object are labeled as free. The result is a 2D representation of the environment, although without the information about what the object is, i.e. without the information about which object occupies the cell. The same method can also be used with a 3D sensor system; in that case an octree is used, for example. The method for a 3D sensor system is identical to that for the 2D sensor system, the only difference being that the cells are now three-dimensional, i.e. with the dimensions 10 cm×10 cm×10 cm, for example. Here too, only the raw sensor information is used for the map, so that the result is a representation of the environment. No conclusion about objects in the map, i.e. about which objects occupy the cells, is possible, or such a conclusion requires subsequent processing.
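The grid-map labeling described above can be sketched as follows. This is a minimal 2D illustration, not taken from the source: the cell size, the coordinate handling and the dictionary representation of the grid are assumptions. One range measurement marks the cell containing the hit as occupied and the cells along the ray from the sensor as free.

```python
# Minimal 2D occupancy-grid sketch (illustrative; 10 cm x 10 cm cells assumed).
# Cells hit by an object become "occupied"; cells along the ray from the
# sensor to the hit become "free"; everything else stays unknown.

UNKNOWN, FREE, OCCUPIED = -1, 0, 1
CELL = 0.10  # cell edge length in metres (assumed, matching the 10 cm example)

def to_cell(x, y):
    """Map world coordinates (metres) to integer grid indices."""
    return int(x // CELL), int(y // CELL)

def trace(grid, sensor_xy, hit_xy):
    """Label the ray from sensor to hit: free cells, then one occupied cell."""
    (x0, y0), (x1, y1) = to_cell(*sensor_xy), to_cell(*hit_xy)
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx, sy = (1 if x1 > x0 else -1), (1 if y1 > y0 else -1)
    err = dx - dy
    x, y = x0, y0
    while (x, y) != (x1, y1):          # Bresenham line walk through the grid
        grid[(x, y)] = FREE
        e2 = 2 * err
        if e2 > -dy:
            err -= dy
            x += sx
        if e2 < dx:
            err += dx
            y += sy
    grid[(x1, y1)] = OCCUPIED          # the cell containing the object

grid = {}
trace(grid, sensor_xy=(0.03, 0.03), hit_xy=(0.52, 0.03))
```

The same update, applied cell by cell in three dimensions (e.g. via an octree), yields the 3D variant described above.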


SUMMARY OF THE INVENTION

The object of this invention is to provide a method of the type described above and a mobile logistics robot to carry out the method so that environment maps with a higher information content can be produced.


The invention accomplishes this object in that the objects that are occupying the cells are identified in the processor unit.


The invention closes a gap in the known mapping methods, namely that the known maps do not identify the objects that occupy the cells. With the identification of the objects, it becomes possible to enter the result of the identification into the environment map produced, i.e. the information about which object occupies a cell. With the invention, the mobile logistics robot thus receives specific information about which objects are located where.


As part of this process, the objects are expediently identified by means of image processing methods. In this context it is advantageous to use a sensor system that comprises at least one optical sensor, in particular a camera. In the processor unit, the sensor data can then be evaluated by means of image processing methods so that the objects can be identified.


In one preferred development of the invention, the objects are identified by means of artificial intelligence methods.


For the identification, the objects are advantageously recognized in at least one object recognition unit of the processor unit, which works in particular with image processing methods and/or artificial intelligence.


With the recognition of objects it becomes possible to enter the objects into the environment map with their current pose, i.e. both the translational spatial coordinates x, y and z and the orientation coordinates roll, pitch and yaw, i.e. with their position and orientation, as well as with their dimensions, i.e. with width, height and depth. An object that is recognized repeatedly can be used as a natural landmark for localization.
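A map entry of this kind could be represented, for example, as follows. This is an illustrative sketch only; the field names and units (metres, radians) are assumptions, not taken from the source.

```python
# Illustrative map entry for a recognized object: 6-DOF pose plus dimensions.
# Field names and units (metres, radians) are assumptions.
from dataclasses import dataclass

@dataclass
class MapObject:
    label: str        # result of the identification, e.g. "shelf"
    x: float          # translational spatial coordinates
    y: float
    z: float
    roll: float       # orientation coordinates
    pitch: float
    yaw: float
    width: float      # dimensions
    height: float
    depth: float

shelf = MapObject("shelf", x=2.0, y=1.5, z=0.0,
                  roll=0.0, pitch=0.0, yaw=1.57,
                  width=1.2, height=2.0, depth=0.6)
```

An entry like this, recognized repeatedly at the same pose, is what can then serve as a natural landmark for localization.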


For identification, the objects can also be classified in at least one classification unit of the processor unit. For this purpose, in addition to the three-dimensional position and the dimensions of the object, at least one additional characteristic of the detected object can be taken into consideration for entry into the map.


The objects are preferably classified into static, manipulable and dynamic objects and entered into the environment map. The objects are therefore divided into categories that include static objects, such as walls, columns and shelves, for example, manipulable objects, such as pallets, boxes and pallet cages, for example, and dynamic objects such as people and vehicles, for example. The dynamic objects must never be used for localization, and can therefore always be excluded from this task. The information, however, is useful, for example for a management system which can detect where exactly each vehicle is.


The environment map generated can thereby be constructed from the following three parts, for example:

    • 1) Map for static objects: This map is very well suited for the localization of the mobile logistics robot.
    • 2) Map for manipulable objects: This map can be used to track goods and therefore to take inventory.
    • 3) Map for dynamic objects: This map can be used, for example, for the localization of all vehicles, both robotic and non-robotic vehicles that have no sensor systems on board.
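The three-part map structure above can be sketched as follows. The dictionary representation (one cell-to-label mapping per category) and the example classification rules are assumptions chosen for illustration.

```python
# Sketch of the three-part map: identified objects are split into a static,
# a manipulable and a dynamic layer. The rule table is illustrative only.

STATIC, MANIPULABLE, DYNAMIC = "static", "manipulable", "dynamic"

CATEGORY = {
    "wall": STATIC, "column": STATIC, "shelf": STATIC,
    "pallet": MANIPULABLE, "box": MANIPULABLE, "pallet cage": MANIPULABLE,
    "person": DYNAMIC, "vehicle": DYNAMIC,
}

def build_layers(detections):
    """Split identified objects (cell, label) into the three map layers."""
    layers = {STATIC: {}, MANIPULABLE: {}, DYNAMIC: {}}
    for cell, label in detections:
        layers[CATEGORY[label]][cell] = label
    return layers

layers = build_layers([((0, 0), "wall"), ((3, 2), "pallet"), ((5, 5), "person")])
# Localization would use only layers[STATIC]; the dynamic layer is excluded.
```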


As a result, by means of such a map structure, a digital twin of the environment of the logistics robot, in particular of a warehouse including inventory, can be produced in the processor unit.


The environment map can be configured in a variety of ways, e.g. with all objects in one map or distributed over a plurality of maps.


To increase the quality of identification of objects, in one preferred development of the invention, a plurality of classification units, in particular different classification units, and/or a plurality of object recognition units, in particular different object recognition units, are consolidated.
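One simple way to consolidate the outputs of several classification units is majority voting. This fusion scheme is an assumption; the source states that units are consolidated but leaves the mechanism open.

```python
# Majority voting over the labels produced by several classification units.
# The voting scheme itself is an assumption, not specified by the source.
from collections import Counter

def consolidate(labels):
    """Return the label that most classification units agree on."""
    (winner, _votes), = Counter(labels).most_common(1)
    return winner

result = consolidate(["pallet", "pallet", "box"])
```

The same scheme applies to consolidating several object recognition units.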


The consolidation can also be performed upstream of the classification and/or the object recognition. For example, sensor signals from different sensors of the sensor system can be transmitted to the classification unit and/or object recognition unit.


For this purpose, different sensor types can be used as inputs, such as, for example, laser scanners, RGB cameras, depth cameras, RGBD cameras, radar sensors etc.


The objects to be identified can in particular be all objects that can be found in a warehouse or its outdoor operating areas. These objects include, for example, pallets, fire extinguishers, doors, emergency exit signs, walls, ceilings, floor markings etc.


The invention further relates to a mobile logistics robot to carry out the method with an apparatus to generate an environment map for the mobile logistics robot that comprises a sensor system for the sensing of the environment of the mobile logistics robot and a processor unit for the evaluation of the sensor data, wherein the processor unit is designed to generate a virtual grid of the environment with cells and to label the cells in which objects are detected as occupied cells and the cells in which no objects are detected as free cells, as a result of which a representation of the environment can be generated.


The mobile logistics robot accomplishes the stated object in that the processor unit comprises an identification unit which is designed to identify the objects that occupy the cells.


The identification unit thereby appropriately comprises at least one classification unit and/or at least one object recognition unit.


The sensor system preferably further comprises an optical sensor, in particular a laser scanner and/or a camera.


The sensor system can also comprise at least one radar sensor.


The invention offers a whole series of advantages:


The invention makes it possible to produce a semantic map as a map of the environment of the mobile logistics robot. The information gap concerning the identification of the objects that appear in a map can thereby be closed. The quantity of data for this semantic map can also be significantly smaller, which conserves resources. The invention further makes it easy to determine which objects are located where.


The maps also contain significantly more information concerning the objects, so that filtering by objects can also be performed to locate them or to manipulate them, i.e. to pick them up, to relocate them etc. Overall, a great many more operations based on the semantic map can be carried out than with conventional mapping methods such as with grid maps, for example.


Above all, there are configuration possibilities in the digital services that are based on the map according to the invention. For example, the invention can be used to take inventory, to track goods inside a warehouse, to detect damage to infrastructure, to detect anomalies (such as blocked emergency exits or vehicles in no-parking zones), and to avoid and prevent accidents.





BRIEF DESCRIPTION OF THE DRAWINGS

The terms Fig., Figs., Figure, and Figures are used interchangeably in the specification to refer to the corresponding figures in the drawings.


Additional advantages and details of the invention are described in greater detail below with reference to the exemplary embodiments illustrated in the accompanying schematic figures, in which



FIG. 1 is a flowchart for the production of the environment map,



FIG. 2 shows one example of a classification of dynamic objects,



FIG. 3 shows one example of a classification of manipulable objects, and



FIG. 4 shows one example of a classification of static objects.





DESCRIPTION OF THE INVENTION


FIG. 1 is a flowchart for the production of the environment map 5 according to the invention for a mobile logistics robot by means of a device 15. An object 1, such as a shelf for example, is detected by the sensor system 2, which comprises optical sensors 6, for example in the form of cameras 7. The sensor data from the sensors 6 is transmitted to the processor unit 3, which has an identification unit 14 comprising object recognition units 4 and classification units 8. The object recognition units 4 recognize the object by means of image processing methods, for example, and determine the position, orientation and extent (dimensions) of the object 1. The classification units 8 classify the object 1, for example using artificial intelligence methods, as a static, manipulable or dynamic object 1.
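The flow of FIG. 1 can be sketched end to end as follows. The function names are illustrative, and the rule-based stand-ins for the object recognition units 4 and classification units 8 are assumptions; the source describes these units only at the block-diagram level.

```python
# End-to-end sketch of the FIG. 1 pipeline: sensor data -> object recognition
# (pose + dimensions) -> classification -> entry in the environment map.
# The concrete functions below are illustrative stand-ins, not the source's units.

def recognize(sensor_data):
    """Stand-in for the object recognition units: pose and dimensions."""
    return {"label": sensor_data["label"],
            "pose": sensor_data["pose"],          # x, y, z, roll, pitch, yaw
            "dimensions": sensor_data["dims"]}    # width, height, depth

def classify(label):
    """Stand-in for the classification units (rule-based for illustration)."""
    static = {"wall", "column", "shelf", "floor marking"}
    manipulable = {"pallet", "box", "pallet cage", "package"}
    if label in static:
        return "static"
    return "manipulable" if label in manipulable else "dynamic"

def update_map(env_map, sensor_data):
    """Combine recognition and classification into one map entry."""
    obj = recognize(sensor_data)
    obj["category"] = classify(obj["label"])
    env_map.append(obj)

env_map = []
update_map(env_map, {"label": "shelf",
                     "pose": (2.0, 1.5, 0.0, 0.0, 0.0, 1.57),
                     "dims": (1.2, 2.0, 0.6)})
```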


To increase the quality of object recognition and object classification, a plurality of classification units 8 and object recognition units 4 can be consolidated.


The environment map 5 is created in the processor unit 3 from the results of the classification units 8 and object recognition units 4. The environment map 5 therefore contains the recognized objects 1 with their position, orientation and extent (dimensions), as well as the additional property of what type of object 1 it is, i.e. whether it is a static, a manipulable or a dynamic object.


The sensor signals from the different sensors 6 of the sensor system 2 can also be consolidated upstream of the object recognition units 4 and classification units 8 and transmitted to them.



FIG. 2 shows one example of a classification of dynamic objects 1 from the point of view of the mobile logistics robot which is provided with the device 15 illustrated in FIG. 1. Industrial trucks 9 and 10 in the vicinity of the mobile logistics robot are therefore classified as dynamic objects 1 and entered as such into the environment map. The dynamic objects 1 must never be used for localization and can therefore always be excluded from this task. The information, however, is useful, for example for a management system that can detect the exact location of each industrial truck 9, 10.



FIG. 3 shows one example of a classification of manipulable objects 1 from the point of view of the mobile logistics robot which is provided with the device 15 in FIG. 1. Packages 11 and 12 in the vicinity of the mobile logistics robot are therefore classified as manipulable objects 1 and entered as such into the environment map. This information can be used to track goods and therefore to take inventory.


Finally, FIG. 4 shows one example of a classification of static objects 1 from the point of view of the mobile logistics robot which is provided with the device 15 in FIG. 1. Floor markings 13 in the vicinity of the mobile logistics robot are therefore classified as static objects 1 and entered as such into the environment map. The floor markings 13 can be used for the localization and navigation of the mobile logistics robot.

Claims
  • 1-14. (canceled)
  • 15. A method for production of an environment map for a mobile logistics robot, comprising: sensing an environment using a sensor system; evaluating sensor data in a processor unit; and producing a virtual grid of the environment using cells, wherein the cells in which objects are detected are labeled as occupied cells and the cells in which no objects are detected are labeled as free cells, as a result of which a representation of the environment is produced, and wherein the objects that occupy the cells are identified in the processor unit.
  • 16. A method according to claim 15, wherein the objects are identified by image processing methods.
  • 17. A method according to claim 15, wherein the objects are identified by artificial intelligence methods.
  • 18. A method according to claim 15, wherein the objects are recognized in at least one object recognition unit of the processor unit.
  • 19. A method according to claim 15, wherein the objects are entered into the environment map with their position and orientation as well as their dimensions.
  • 20. A method according to claim 15, wherein the objects are classified in at least one object classification unit of the processor unit.
  • 21. A method according to claim 20, wherein the objects are classified as static, manipulable, and dynamic objects.
  • 22. A method according to claim 18, comprising a plurality of object recognition units, and wherein a plurality of classification units and/or the plurality of object recognition units are consolidated.
  • 23. A method according to claim 20, comprising a plurality of object classification units, and wherein the plurality of object classification units and/or a plurality of object recognition units are consolidated.
  • 24. A method according to claim 18, wherein sensor signals from different sensors of the sensor system are transmitted to at least one classification unit and/or the at least one object recognition unit.
  • 25. A method according to claim 20, wherein sensor signals from different sensors of the sensor system are transmitted to the at least one object classification unit and/or at least one object recognition unit.
  • 26. A method according to claim 15, wherein a digital twin of the environment is generated in the processor unit.
  • 27. A mobile logistics robot to carry out a production of an environment map comprising: a device for generation of the environment map for the mobile logistics robot, the device comprising: a sensor system for sensing of an environment of the mobile logistics robot; and a processor unit for evaluation of sensor data, wherein the processor unit is designed to produce a virtual grid of the environment with cells, and to label the cells in which objects are detected as occupied cells and the cells in which no objects are detected as free cells, as a result of which a representation of the environment can be produced, and wherein the processor unit comprises an identification unit which is designed to identify the objects that occupy the cells.
  • 28. A mobile logistics robot according to claim 27, wherein the identification unit comprises at least one classification unit and/or at least one object recognition unit.
  • 29. A mobile logistics robot according to claim 27, wherein the sensor system comprises at least one optical sensor.
  • 30. A mobile logistics robot according to claim 29, wherein the at least one optical sensor is a laser scanner and/or a camera.
  • 31. A mobile logistics robot according to claim 28, wherein the sensor system comprises at least one optical sensor.
  • 32. A mobile logistics robot according to claim 31, wherein the at least one optical sensor is a laser scanner and/or a camera.
  • 33. A mobile logistics robot according to claim 27, wherein the sensor system comprises at least one radar sensor.
  • 34. A mobile logistics robot according to claim 28, wherein the sensor system comprises at least one radar sensor.
Priority Claims (1)
Number Date Country Kind
10 2021 133 614.7 Dec 2021 DE national
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is the United States national phase of International Patent Application No. PCT/EP2022/082947 filed Nov. 23, 2022, and claims priority to German Patent Application No. 10 2021 133 614.7 filed Dec. 17, 2021, the disclosures of which are hereby incorporated by reference in their entireties.

PCT Information
Filing Document Filing Date Country Kind
PCT/EP2022/082947 11/23/2022 WO