The specification relates to the field of autonomous mobile robots, and in particular to methods by which an autonomous mobile robot explores an environment within a robot operating zone that is as yet unknown to it.
Many autonomous mobile robots are available for the most diverse private and commercial applications, such as the processing or cleaning of floor surfaces, the transport of objects, or the inspection of an environment. Simple devices work without composing and using a map of the robot's area of application, for example by moving randomly over a floor surface to be cleaned (see, e.g., the publication EP 2287697 A2 of iRobot Corp.). More complex robots use a map of the area of application which they either compose themselves or which is provided to them in electronic form.
Before the map can be used for trajectory planning (and other purposes), the robot must explore its environment in the robot operating zone in order to compose the map. Methods are known for the exploration of an environment not familiar to the robot. For example, techniques such as “Simultaneous Localization and Mapping” (SLAM) can be used in an exploration phase of a robot application. The inventors address the problem of improving the process of exploration of an environment not yet familiar to the robot within a robot operating zone.
The aforementioned problem can be solved with a method according to claim 1 or 16, as well as with a robot control system according to claim 19. Various exemplary embodiments and further developments are the subject matter of the dependent claims.
A method is described for the exploration of a robot operating zone by an autonomous mobile robot. According to one exemplary embodiment, the method involves starting an exploration run, wherein the robot detects objects in its environment during the exploration run and stores the detected objects as map data in a map while it moves through the robot operating zone. During the exploration run, the robot carries out a partial region detection based on the stored map data, wherein at least one reference partial region is detected. It is then checked whether the reference partial region has been fully explored. The robot repeats the partial region detection in order to update the reference partial region and again checks whether the (updated) reference partial region has been fully explored. The exploration of the reference partial region is continued until the check reveals that it has been fully explored. The robot then continues the exploration run in another partial region, if one has been detected, using that partial region as the new reference partial region.
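The loop structure of this embodiment can be sketched in a few lines of Python. This is a minimal illustration only: the regions, their "exploration steps", and the selection of the next reference region are hypothetical stand-ins for the robot's actual sensing, mapping, and partial region detection.

```python
# Minimal sketch of the exploration loop described above.
# `rooms` maps a hypothetical region name to the number of exploration
# steps it still needs; a real robot would derive both from map data.

def run_exploration(rooms):
    """Explore each detected partial region fully before moving on.

    Returns the order in which the partial regions were completed."""
    completed = []
    remaining = dict(rooms)          # not-yet-finished partial regions
    while remaining:
        # partial region detection: pick one region as the reference
        ref = next(iter(remaining))
        steps_left = remaining[ref]
        while steps_left > 0:        # explore until fully explored
            steps_left -= 1          # one exploration step (move, sense, map)
        completed.append(ref)        # reference region is now fixed
        del remaining[ref]           # continue in another partial region
    return completed
```

The essential point of the sketch is the inner loop: a reference region is not left until the check for "fully explored" succeeds.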
According to another exemplary embodiment, the method involves starting an exploration run in a first of several rooms of the robot operating zone that are connected by door openings. During the exploration run, the robot detects objects in its environment and stores the detected objects as map data in a map while it moves through the robot operating zone. Furthermore, the robot detects one or more door openings during the exploration run and checks whether the first room has already been fully explored. The exploration run is continued in the first room until the check reveals that the first room is fully explored. The exploration run can then be continued in another room.
Various exemplary embodiments shall be explained more closely below with the aid of figures. The representations are not necessarily true to scale and the invention is not just confined to the aspects shown. Instead, emphasis is placed on representing the underlying principles. The figures show:
Furthermore, the partitioning of the map composed by the robot into partial regions is familiar in itself (see, e.g., DE 10 2010 017 689 A1). The robot will partition its map with the aid of given criteria, such as door openings detected by means of sensors, floor coverings detected, etc. The purpose of the partitioning of the robot operating zone into several partial regions is to create the possibility of individual treatment of different areas (such as the rooms of a dwelling). In the case of a cleaning robot, different partial regions can be cleaned, e.g., with different frequency, different intensity, at certain times, with certain implements or cleaning agents, etc. But a definitive partitioning of the map into partial regions is not possible until the robot has (basically completely) explored its environment. Various strategies exist for the exploration of a robot operating zone. Examples of this are random travel, travel along obstacles (especially to move around the outer contour), or even complex methods which determine a next exploration point which the robot can head for in order to maximize the exploration gain (see, e.g., D. Lee: The Map-Building and Exploration Strategies of a Simple Sonar-Equipped Mobile Robot (Cambridge University Press, 1996)). However, there is no method which takes account of the special circumstances of residential environments with individual rooms. The rooms are generally connected by door openings and thus are clearly bounded off from each other. At the same time, a room may be very complex, at least in the view of the robot, due to the furnishings (obstacles). This means that, with the usual strategies for exploration, the robot very often travels back and forth between the rooms, which costs a lot of time and energy. The fundamental idea of the invention is to completely explore a room before the robot moves to the next room.
For this, the robot during its exploration produces a partitioning of the map into partial regions, for example in order to identify a room as a meaningful partial region. The robot can explore this partial region and determine when this partial region (and thus the room) has been fully explored.
Before discussing more closely the exploration of the environment of the robot, we shall first briefly describe the construction of an autonomous mobile robot.
The autonomous mobile robot 100 comprises a drive unit 170, which may have for example electric motors, gears, and wheels, by which the robot 100 can, at least in theory, reach any point of its operating zone. The drive unit 170 is adapted to convert commands or signals received from the control unit 150 into a movement of the robot 100.
The autonomous mobile robot 100 further comprises a communication unit 140 in order to establish a communication link 145 to a human/machine interface (HMI) 200 and/or other external devices 300. The communication link 145 is, for example, a direct wireless link (e.g., Bluetooth), a local wireless network link (e.g., WLAN or ZigBee), or an Internet link (e.g., to a cloud service). Via the human/machine interface 200, information about the autonomous mobile robot 100 can be output to the user, for example in visual or acoustical form (e.g., battery status, current work task, map information such as a cleaning map, etc.), and user commands for a work task of the autonomous mobile robot 100 can be received. Examples of an HMI 200 are tablet PCs, smartphones, smartwatches and other wearables, computers, smart TVs, or head-mounted displays, and so forth. An HMI 200 can additionally or alternatively be integrated directly in the robot, so that the robot 100 can be operated for example by key touch, gestures, and/or voice input and output.
Examples of external devices 300 are computers and servers, to which computations and/or data can be moved, external sensors providing additional information, or other household appliances (such as other autonomous mobile robots), with which the autonomous mobile robot 100 can interact and/or exchange information.
The autonomous mobile robot 100 may have a working unit 160, such as a processing unit for processing a floor surface and in particular for the cleaning of a floor surface (e.g., brush, vacuum cleaner) or a grip arm for the fitting and transporting of objects.
In certain instances, such as a telepresence robot or a monitoring robot, a different component is used to fulfill the intended tasks and no working unit 160 is necessary. Thus, a telepresence robot may have a communication unit 140 coupled to the HMI 200 and equipped, for example, with a multimedia unit having a microphone, camera, and monitor screen, in order to enable communication between several persons at remote physical locations. A monitoring robot detects unusual events (such as fire, light, unauthorized persons, etc.) with the aid of its sensors during inspection runs and notifies, for example, a monitoring station of them. In this case, instead of the working unit 160 there is provided a monitoring unit with sensors to monitor the robot operating zone.
The autonomous mobile robot 100 comprises a sensor unit 120 with various sensors, such as one or more sensors for detecting information about the environment of the robot in its operating zone, such as the position and extension of obstacles or landmarks in the operating zone. Sensors for the detection of information about the environment are, for example, sensors for measuring distances to objects (such as walls or other obstacles, etc.) in the environment of the robot, such as optical and/or acoustical sensors which can measure distances by means of triangulation or time-of-flight measurement of an emitted signal (triangulation sensor, 3D camera, laser scanner, ultrasound sensors, etc.). Alternatively or additionally, a camera may be used to gather information about the environment. In particular, the position and extension of an object can also be determined by viewing the object from two or more positions.
In addition, the robot may possess sensors for detecting a (usually unintentional) contact (or collision) with an obstacle. This may be realized by accelerometers (which detect, e.g., the change in velocity of the robot upon collision), contact switches, capacitive sensors or other tactile or touch-sensitive sensors. In addition, the robot may possess floor sensors, in order to identify an edge in the floor, such as a stairway. Other customary sensors in the field of autonomous mobile robots are sensors for determining the speed of the robot and/or the distance traveled, such as odometers or inertial sensors (acceleration sensor, turn rate sensor) for determining a change in position and movement of the robot, as well as wheel contact switches to detect a contact between wheel and floor.
The autonomous mobile robot 100 may be associated with a base station 110, at which it may charge its energy storage (batteries), for example. The robot 100 can return to this base station 110 after completing its task. When the robot has no further task to perform, it can wait at the base station 110 for a new use.
The control unit 150 may be adapted to provide all functions needed by the robot to move by itself in its operating zone and perform a task. For this, the control unit 150 comprises, for example, the processor 155 and the storage module 156 in order to execute software. The control unit 150, on the basis of the information received from the sensor unit 120 and the communication unit 140, can generate control commands (e.g., control signals) for the working unit 160 and the drive unit 170. The drive unit 170, as already mentioned, can convert these control signals or control commands into a movement of the robot. The software contained in the storage module 156 may have a modular design. For example, a navigation module 152 provides functions for the automatic production of a map of the robot operating zone and for planning the movement of the robot 100. The control software module 151 provides, for example, general (global) control functions and can form an interface between the individual modules.
In order for the robot to perform a task autonomously, the control unit 150 may comprise functions for the navigation of the robot in its operating zone that are provided by the aforementioned navigation module 152. These functions are familiar in themselves and may include one of the following, among others:
The control unit 150 can constantly update a map of the robot operating zone with the aid of the navigation module 152 and based on the information of the sensor unit 120, for example during the operation of the robot, such as when the environment of the robot changes (obstacle moved, door opened, etc.). A current map can then be used by the control unit 150 for short-term and/or long-term movement planning for the robot. The planning horizon refers to the path calculated in advance by the control unit 150 for a (target) movement of the robot before it is actually carried out. The exemplary embodiments described here involve, among others, various approaches and strategies for the movement planning in particular situations, e.g., situations in which certain maneuvers are blocked by obstacles and therefore cannot be carried out.
In general, an (electronic) map which can be used by the robot 100 is a collection of map data (such as a database) for saving position-related information about an operating zone of the robot and the environment relevant to the robot in this operating zone. In this context, “position-related” means that the stored information is each associated with a position or a pose in the map. A map thus represents a plurality of data records with map data, and the map data can contain any given position-related information. The position-related information can be saved in different degrees of detail and abstraction, and can be adapted to a specific function. In particular, individual information items may be saved redundantly. However, oftentimes a collection of multiple maps regarding the same region but saved in different form (data structure) is likewise called “a map”.
A technical device is most useful to a human user in everyday life if on the one hand the behavior of the device is clear and comprehensible to the user and on the other hand an intuitive operation is possible. It is generally desirable for an autonomous mobile robot (such as a floor cleaning robot) to exhibit an intuitively comprehensible and practical behavior for a human user. For this, the robot must interpret its operating zone by technical methods and divide it into partial regions in a way similar to what a human user would do (e.g., living room, bedroom, hallway, kitchen, dining area, etc.). This enables a simple communication between user and robot, for example in the form of simple commands to the robot (such as “clean the bedroom”) and/or in the form of messages to the user (such as “cleaning of bedroom finished”). Furthermore, the mentioned partial regions can be used for the displaying of a map of the robot operating zone and the operating of the robot by means of this map.
Now, a partitioning of the robot operating zone into partial regions by a user can be done on the one hand by recognized conventions and on the other hand by personal preference (and thus user-specific, such as dining area, children's play room, etc.). One example of a known convention is the subdividing of a dwelling into different rooms, such as bedroom, living room, and hallway. According to one user-specific exemplary subdividing, a living room could be divided into a kitchen area, a dining area, or areas in front of and behind the sofa. The boundaries between these areas may sometimes be defined only vaguely and are generally subject to the interpretation of the user. A kitchen area, for example, might be characterized by a tile floor, while the dining area is characterized merely by the presence of a table and chairs. Adapting to the human user may be a very difficult task for a robot, and often a robot/user interaction may be needed to correctly perform the partitioning of the robot operating zone. For a simple and comprehensible robot/user interaction, the map data and the automatically performed partitioning must be interpreted and processed by the device. Furthermore, the human user expects a behavior of the autonomous mobile robot adapted to the partitioning performed. Therefore, it may be desirable for the partial regions to be provided with attributes, by the user or automatically, which influence the behavior of the robot.
One technical requirement for this is that the autonomous mobile robot has a map of its operating zone, in order to orient itself here with the aid of the map. This map is constructed by the robot itself, for example, and it is stored permanently. In order to accomplish the goal of an intuitive partitioning of the robot operating zone for the user, technical methods are needed which (1) automatically perform a partitioning of the map of the robot operating zone, such as a dwelling, according to given rules, (2) allow a simple interaction with the user, in order to conform to the partitioning wishes of the user, not known a priori, (3) preprocess the automatically generated partitioning in order to represent it easily and understandably to the user in a map, and (4) derive by itself certain attributes from the partitioning so created that are suitable to achieving the behavior expected by the user.
Not only to simplify the interaction with a human user, but also to “work off” the operating zone in a sensible manner (from the standpoint of the user), the robot should first of all divide its robot operating zone in automated manner into partial regions (i.e., perform a partial region detection). Such a subdivision into partial regions allows the robot to perform its task in its operating zone in an easier, more systematic, differentiated, and “logical” manner (from the standpoint of the user), and to improve the interaction with the user. In order to achieve a sensible subdivision, the robot must weigh various sensor data against one another. In particular, it can use information on the passability (easy/difficult) of a region of its operating zone to define a partial region. Furthermore, the robot can proceed on the (disprovable) assumption that rooms are generally rectangular. The robot can learn that certain changes in the partitioning will lead to more meaningful results (so that, e.g., particular obstacles will lie with a certain probability in a particular partial region).
As is shown in
In order to solve the mentioned problems and to enable an automated subdividing of the robot operating zones into different partial regions (such as rooms), the robot produces “hypotheses” as to the environment of the robot based on the sensor data, which are tested by various methods. If a hypothesis can be falsified, it is rejected. If two boundary lines (such as lines A-A′ and O-O′ in
When the robot produces a hypothesis, it combines various sensor measurements. For a door opening, for example, these are the opening width, the opening depth (given by the wall thickness), the existence of a wall at the right and left of the opening, or a door protruding into the room. These items of information may be determined by the robot with a distance sensor, for example. A door threshold over which the robot travels can be detected by an acceleration sensor or a position sensor (e.g., a gyroscopic sensor). Additional information can be ascertained by image processing and by measuring the ceiling height.
Another example of a possible hypothesis is the course of walls in the robot operating zone. These are characterized, among other things, by two parallel lines, having a spacing of a typical wall thickness (see
In order to test and evaluate hypotheses, they can be assigned a degree of plausibility. In one simple exemplary embodiment, a predefined point score is awarded to a hypothesis for each confirming sensor measurement. If a particular hypothesis has reached a minimum number of points in this way, it is regarded as plausible. A negative number of points may result in the hypothesis being rejected. In another, further developed exemplary embodiment, a probability of being true is assigned to a particular hypothesis. This requires a probability model allowing for correlations between different sensor measurements, but it also makes possible complex probability statements with the aid of stochastic computation models and thus a more reliable prediction of the expectations of the user. For example, door widths might be standardized in certain regions (e.g., countries) where the robot will be used. If the robot measures such a standardized width, it is therefore a door with high probability. Departures from the standard widths reduce the probability of it being a door. For example, a probability model based on a normal distribution can be used for this. Another possibility of producing and evaluating hypotheses is the use of “machine learning” to construct suitable models and evaluation functions (see, e.g., Trevor Hastie, Robert Tibshirani, Jerome Friedman: “The Elements of Statistical Learning”, 2nd ed., Springer-Verlag, 2008). For this, map data is recorded in different residential environments by one or more robots, for example. This may then be supplemented with floor plans or data entered by a user (e.g., regarding the course of walls or door openings, or a desired partitioning) and evaluated by a learning algorithm.
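The simple point-score variant can be illustrated as follows. The scores, the threshold, and the range of standardized door widths used here are hypothetical placeholder values, not figures given in the specification; a real system would tune them per robot and per region.

```python
# Illustrative point-score evaluation of a "door opening" hypothesis.
# All numeric values are assumed placeholders for demonstration.

DOOR_MIN_SCORE = 3   # hypothesis counts as plausible at or above this score

def door_hypothesis_score(opening_width_m, wall_both_sides, threshold_felt):
    """Combine sensor measurements into a plausibility score."""
    score = 0
    # standardized door widths (assumed here as ~0.7-1.0 m) confirm the hypothesis
    if 0.7 <= opening_width_m <= 1.0:
        score += 2
    else:
        score -= 1                   # departure from standard width
    if wall_both_sides:              # wall detected left and right of the opening
        score += 1
    if threshold_felt:               # door threshold felt by acceleration sensor
        score += 1
    return score

def is_plausible(score):
    return score >= DOOR_MIN_SCORE
```

A probabilistic model as described above would replace the integer score with, e.g., a likelihood under a normal distribution of door widths.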
Another method, which can be used alternatively or additionally to the use of the above explained hypotheses, is the dividing of a robot operating zone (such as a dwelling) into multiple rectangular regions (e.g., rooms). This approach is based on the assumption that rooms are generally rectangular or can be composed of several rectangles. In a map produced by a robot, this rectangular shape of the rooms is not generally identifiable, since numerous obstacles with complex boundaries, such as furniture, restrict the operating zone of the robot within the rooms.
Based on the assumption of rectangular rooms, the robot operating zone is tiled with rectangles of different size, which are meant to reproduce the rooms. In particular, the rectangles are chosen such that each point of the map of the robot operating zone accessible to the robot can be uniquely assigned to a rectangle. That is, the rectangles generally do not overlap. It is not ruled out that a rectangle will contain points not accessible to the robot (e.g., because furniture prevents accessibility). Thus, the region described by the rectangles may be larger and of simpler geometrical shape than the actual robot operating zone. In order to determine the orientation and size of the individual rectangles, long straight boundary lines in the map of the robot operating zone are used, for example, such as occur along walls (see, e.g.,
Based on the assumption that rooms are substantially rectangular, the robot can complete the outermost boundary lines from the map of boundary lines (see
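A strongly simplified version of this rectangle tiling can be sketched for the axis-aligned case: the long straight boundary lines are reduced to sorted wall coordinates, and the candidate rectangles are the grid cells between them. A real map would additionally require merging cells into rooms and handling walls that are not axis-aligned; both are omitted here as assumptions of the sketch.

```python
# Simplified sketch of tiling the operating zone with non-overlapping
# rectangles derived from long straight (here: axis-aligned) wall lines.

def tile_rectangles(x_walls, y_walls):
    """x_walls / y_walls: sorted wall coordinates along each axis.

    Returns the list of rectangles ((x0, y0), (x1, y1)) between
    consecutive wall lines; by construction they do not overlap."""
    rects = []
    for x0, x1 in zip(x_walls, x_walls[1:]):
        for y0, y1 in zip(y_walls, y_walls[1:]):
            rects.append(((x0, y0), (x1, y1)))
    return rects
```

For example, three vertical wall lines and two horizontal ones yield two adjacent rectangles, each uniquely covering its part of the map.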
As already mentioned, a substantially complete map of the robot operating zone is generally needed before the robot can perform a meaningful automatic subdividing of the map into partial regions (such as rooms). Until then, the robot moves through the robot operating zone without being able to take partial region boundaries into account. This may lead to behavior in the “exploration phase” that is inefficient and hard for the user to understand. For example, the robot passes through a door opening and arrives in another room before it has fully explored the first room, which may mean that the robot has traveled through a large portion of the robot operating zone (such as a dwelling), yet “blank spots” still remain in the map in various places of the dwelling (in different rooms). The robot must then head for these “blank spots” in the map individually in order to explore them and obtain a complete map. The exemplary embodiments described here involve, among others, a method for organizing the mentioned exploration phase more efficiently, at least in some situations, and making the behavior of the robot in this exploration phase seem more “logical” to the user.
During the exploration run, the robot 100 performs a partial region detection based on the current map data (see
For the updating of the reference partial region, the partial region detection can take account of both the current map data and the previous boundaries of the reference partial region (i.e., those found during a previous partial region detection). The ongoing updating of the reference partial region shall be explained more closely later on (see
Algorithms for the automated partitioning of a map into partial regions are familiar in themselves. Some of these algorithms only work when the map is fully explored, i.e., when the mapped region is fully enclosed by walls and other obstacles. These algorithms can be used for a not fully explored map if the map is “artificially” completed. For this, for example, a frame (bounding box) can be placed around the already explored region and be regarded as a “virtual” obstacle. Other possibilities of completing an incompletely explored map, in order to use the algorithms for the automated partitioning of a complete map, can also be used alternatively. In the exemplary embodiments described here, one or more items of information based on map data may be used for the partial region detection, such as the position of walls and/or other obstacles, the position of door openings, and the position of floor covering boundaries. Additionally or alternatively, information about the floor structure, the ceiling structure, and/or the wall structure (stored in the map, for example) can be taken into account. Further criteria for determining a partial region are predetermined geometrical properties of partial regions, such as a minimum or a maximum size of a partial region. Ceiling structures, and in particular the corners between the ceiling and a wall, can provide direct information as to the size and shape of a room. From this, door openings can be identified, for example, as openings in the wall not reaching up to the ceiling. Wall structures, such as windows, may furnish information as to whether a wall is an exterior wall. Floor structures, such as a change in the floor covering or a door threshold, may be indications of room boundaries and especially of door openings.
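The "artificial" completion with a bounding-box frame can be sketched on an occupancy-grid representation. The grid representation itself is an assumption of the sketch; the specification leaves the map's data structure open.

```python
# Sketch of "artificially" completing a partially explored grid map:
# a bounding-box frame one cell outside the explored cells is treated
# as a virtual obstacle, so that partitioning algorithms designed for
# fully enclosed maps become applicable.

def complete_with_frame(explored_cells):
    """explored_cells: set of (x, y) grid cells already explored.

    Returns the set of frame cells to mark as virtual obstacles."""
    xs = [x for x, _ in explored_cells]
    ys = [y for _, y in explored_cells]
    x0, x1 = min(xs) - 1, max(xs) + 1    # frame lies one cell outside
    y0, y1 = min(ys) - 1, max(ys) + 1
    frame = set()
    for x in range(x0, x1 + 1):          # top and bottom edges
        frame.add((x, y0))
        frame.add((x, y1))
    for y in range(y0, y1 + 1):          # left and right edges
        frame.add((x0, y))
        frame.add((x1, y))
    return frame
```

After marking the frame cells as obstacles, the explored region is fully enclosed and any closed-map partitioning algorithm can run on it.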
Depending on the implementation of the partial region detection, the boundary lines of a partial region can be determined at least predictively. In this context, predictively means that existing contours already detected and saved in the map (such as those of a wall) are used in order to predict a boundary line of a partial region. For example, an already detected contour of a wall already saved in the map can be prolonged (virtually) in a straight line in order to complete the boundary of a partial region. According to another example, a boundary line of a partial region parallel or at right angles to a contour of a wall (or another obstacle) already detected and saved in the map can be established such that it touches the edge of the already explored region of the robot operating zone. One example of this shall be explained later on with the aid of
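The first predictive case, prolonging an already detected wall contour in a straight line, can be sketched with elementary geometry. The segment representation of a wall and the extension along the x-axis are assumptions of this sketch.

```python
# Sketch of a "predictive" boundary line: a wall contour already saved
# in the map (here as a 2D segment) is virtually prolonged in a straight
# line until it reaches a target x-coordinate, e.g. the edge of the
# explored region, in order to close off the reference partial region.

def prolong_wall(p0, p1, target_x):
    """Extend the wall segment (p0, p1) as a straight line to target_x.

    Returns the predicted endpoint of the virtual boundary line."""
    (x0, y0), (x1, y1) = p0, p1
    if x1 == x0:                        # vertical wall: extend in y instead
        raise ValueError("wall is vertical; extend along y instead")
    t = (target_x - x0) / (x1 - x0)     # line parameter at target_x
    return (target_x, y0 + t * (y1 - y0))
```

The second case mentioned above, a boundary line orthogonal to a wall that touches the edge of the explored region, follows the same idea with the roles of the axes exchanged.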
The robot continuously tracks its own position in the map. After a partial region detection, the robot can check whether it is still in the (updated) reference partial region. If not, and if the reference partial region is not yet fully explored, the robot returns to the reference partial region in order to continue the exploration run there. The exploration run is not continued outside of the reference partial region. But if the check reveals that the (updated) reference partial region has already been fully explored (for example, because it is bounded solely by contours of obstacles or boundary lines with other detected partial regions), a different partial region becomes the reference partial region and the exploration run is continued there.
As mentioned, the partial region detection can be repeated regularly or as a response to the detection of certain events. A repetition of the partial region detection can be triggered, e.g., when the robot determines that a particular interval of time has elapsed since the last partial region detection, that the robot has traveled a certain distance since the last partial region detection, that the explored region of the robot operating zone has grown by a particular area since the last partial region detection or that the cost for the further exploration of the reference partial region is greater than a given value. The cost may be assessed, e.g., by means of a cost function. A repetition of the partial region detection may also be triggered, e.g., if the robot has reached a target point determined for the exploration. Such a target point can be chosen, for example, on the boundary between explored and not (yet) explored partial regions. While the robot is heading for this point, it can detect new regions with its sensors and thus expand the bounds of the explored region.
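The listed triggers can be collected into a single predicate that the robot evaluates during the exploration run. The concrete thresholds below are hypothetical placeholders; the specification only names the kinds of triggers, not their values.

```python
# The repetition triggers for the partial region detection, sketched as
# one predicate. All threshold values are assumed for illustration.

def should_redetect(seconds_since, meters_since, area_grown_m2,
                    exploration_cost, reached_target):
    """Return True if the partial region detection should be repeated."""
    return (seconds_since > 30.0        # time interval elapsed
            or meters_since > 5.0       # distance traveled since last detection
            or area_grown_m2 > 4.0      # explored area has grown enough
            or exploration_cost > 10.0  # further exploration deemed too costly
            or reached_target)          # exploration target point reached
```

In practice the cost term would come from a cost function over candidate exploration paths, and the target points would lie on the boundary between explored and unexplored regions, as described above.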
When the robot determines that the reference partial region has been fully explored, this region is saved and its boundaries will no longer be changed. As long as further partial regions not fully explored exist, the robot will select another partial region as the reference partial region and continue the exploration run there. The former reference partial region (or the former reference partial regions) can be taken into account during the further partial region detection in that their boundary lines are no longer changed and thus the boundaries of neighboring partial regions are also established (at least partly). That is, the boundary lines of the former reference partial regions can be used when determining the boundary lines of further partial regions.
If the robot is a cleaning robot, it can clean a reference partial region, after it has been fully explored, before selecting another partial region as the reference region and continuing the exploration run there. This behavior can be made dependent on a user input, which heightens the flexibility of the robot. Accordingly, the robot can receive a user input, depending on which the robot distinguishes three operating modes. The user input (e.g., “explore”, “explore and clean”, “clean”) may come for example via an HMI (e.g., on a portable external device or directly on the robot). In a first operating mode, the robot performs an exploration run and explores the robot operating zone, producing a new map in the process. The exploration run can be implemented according to the method described here. In a second operating mode, the robot performs an exploration run, produces a new map in this process, and also cleans the robot operating zone. In a third operating mode, no new map is produced, but the robot operating zone is cleaned based on an already existing and stored map. This concept may also be used for other robots which are not cleaning robots; in this case, the robot performs another activity instead of the cleaning.
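The three operating modes can be sketched as a simple dispatch on the user input. The mode strings and the flag pair returned here are illustrative assumptions, not a fixed interface of the specification.

```python
# Sketch of the three operating modes selected via user input
# (e.g., via an HMI). Mode names and return values are assumptions.

def handle_user_input(mode, has_map):
    """Map a user input to (explore, clean) flags for the robot."""
    if mode == "explore":            # first mode: exploration run, new map
        return (True, False)
    if mode == "explore and clean":  # second mode: new map and cleaning
        return (True, True)
    if mode == "clean":              # third mode: clean using stored map
        if not has_map:
            raise ValueError("cleaning mode requires a stored map")
        return (False, True)
    raise ValueError(f"unknown mode: {mode}")
```

For a non-cleaning robot, the `clean` flag would simply stand for whatever activity replaces the cleaning.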
The robot can mark the already explored region of the robot operating zone as “explored” on the map (e.g., by setting a particular bit or another marking, or by detecting, updating, and storing the boundaries between explored and unexplored regions). For example, those regions are marked as explored on the map which were located at least once during the exploration run within the detection region of a navigation sensor of the robot (see
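Marking cells as explored can be sketched on a grid map: every cell within the navigation sensor's detection radius of the robot's current position is flagged once it has been seen. The grid resolution, the circular detection region, and the radius value are assumptions of the sketch.

```python
# Sketch of marking map cells as "explored": every grid cell within the
# (assumed circular) detection radius of the navigation sensor around
# the robot's position is added to the explored set.

def mark_explored(explored, robot_xy, radius, grid=1.0):
    """Add all grid cells within `radius` of robot_xy to `explored`."""
    rx, ry = robot_xy
    r_cells = int(radius // grid)
    for dx in range(-r_cells, r_cells + 1):
        for dy in range(-r_cells, r_cells + 1):
            # keep only cells inside the circular detection region
            if (dx * grid) ** 2 + (dy * grid) ** 2 <= radius ** 2:
                explored.add((round(rx + dx * grid, 3),
                              round(ry + dy * grid, 3)))
    return explored
```

Calling this after every pose update accumulates the explored region; the map is fully explored once its boundary consists only of obstacles.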
The robot can end the exploration run when it determines that the operating zone has been fully explored (e.g., because the region marked as explored is entirely bounded by obstacles) and/or if a continuing of the exploration run is not possible because no further (not yet fully explored) partial region has been detected. In this situation, the robot can again perform a partial region detection, this time considering the map data regarding the completely explored robot operating zone. During this concluding partial region detection, the robot can use different (e.g., more precise and/or more costly) algorithms than during the repeated partial region detection of the exploration run. Alternatively, however, the same algorithm can be used with altered parameters. Finally, after the ending of the exploration run, the robot can return to the starting point at which the exploration run was started, or to a base station that was detected during the exploration run and saved in the map.
The diagrams in
The robot 100 generally knows its own position in the map; the robot 100 can measure changes in its position for example by means of odometry (e.g., by means of wheel sensors, visual odometry, etc.). Hence, the robot also “knows” which regions of the robot operating zone it has already explored and can mark these explored regions as “explored” on the map. In the example shown in
The robot 100 can now perform a partial region detection in order to establish a first reference partial region R (or its boundaries). The reference partial region R is bounded, on the one hand, by the identified obstacles. On the other hand, two preliminary virtual boundary lines are defined. Since no further information about their position is available, they are established, for example, as straight lines lying orthogonally to the identified obstacles and touching the boundary EB of the explored region E (see
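As a rough illustration of how preliminary boundaries might be derived, the sketch below approximates the reference region by the axis-aligned bounding box of the explored cells; the sides not coinciding with detected obstacles then play the role of the straight virtual boundary lines. This is an assumed simplification of the scheme described above, and the function name is hypothetical.

```python
def reference_region_bounds(explored_cells):
    """Axis-aligned bounding box (x_min, y_min, x_max, y_max) of the
    explored cells; its free sides act as preliminary virtual
    boundary lines of the reference partial region."""
    xs = [x for x, _ in explored_cells]
    ys = [y for _, y in explored_cells]
    return min(xs), min(ys), max(xs), max(ys)
```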
In order to further explore the reference partial region, the robot may for example try to move toward one of the boundary lines of the explored region EB not formed by a detected obstacle (such as a wall, bed, dresser, etc.). In the present example, the robot travels downward into the region of the room 10 situated at the bottom of the map, while it continues to take ongoing measurements, detects obstacles in its environment, and saves them in its map. This situation is represented in
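The boundary segments of EB not formed by obstacles are commonly called frontier cells, and finding them on a grid map can be sketched as below. The flag bits and grid layout are assumptions continuing the earlier bit-marking illustration, not the claimed implementation.

```python
EXPLORED, OBSTACLE = 0x01, 0x02  # assumed per-cell flag bits

def find_frontier_cells(grid):
    """Return explored, obstacle-free cells that border at least one
    unexplored cell. These lie on the parts of the boundary EB not
    formed by obstacles and are natural targets for further exploration."""
    frontiers = []
    rows, cols = len(grid), len(grid[0])
    for i in range(rows):
        for j in range(cols):
            if grid[i][j] & EXPLORED and not grid[i][j] & OBSTACLE:
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < rows and 0 <= nj < cols \
                            and not grid[ni][nj] & EXPLORED:
                        frontiers.append((i, j))
                        break
    return frontiers
```

Driving toward the nearest frontier cell that lies inside the reference partial region reproduces the behavior described in the text.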
Thanks to the determination of the reference partial region R, the robot 100 now does not move through the door opening into the adjacent room, since it would thereby leave the reference partial region. Instead, the robot 100 remains in the reference partial region R and explores the not yet explored region (blank spot) at bottom left (in the map); the explored region E thus grows further and now encompasses nearly the entire room (see
Based on the situation shown in
In the example shown in
In the situation represented in
In a simple variant of the method described here, the partial region detection is reduced to a detection of door openings. The detection of an (open) door opening implies the detection of an adjacent room. That is, during the exploration run the partial region detection detects practically only various rooms as different partial regions. Otherwise, this exemplary embodiment is identical or similar to the previously described exemplary embodiments. The robot will first completely explore a room before it moves through a door opening to continue the exploration run in an adjacent room. If the robot should inadvertently travel into an adjacent room, for example because a door opening is only recognized as such after the robot has already passed through it, the robot can determine that it has left the previously explored room, even though that room is not yet fully explored. In this situation, the robot will interrupt the exploration in the room where it presently finds itself and again move through the previously entered door opening (in the opposite direction) so as to return to the previously explored room and explore it further.
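The return-through-the-door behavior of this variant can be summarized as a small decision rule. The function and the action strings are hypothetical names used only for illustration; room identity would in practice come from the door-opening detection.

```python
def next_action(current_room, reference_room, reference_fully_explored):
    """If the robot has inadvertently crossed a door opening out of a
    not-yet-fully-explored reference room, it should drive back through
    that door opening; otherwise it keeps exploring where it is."""
    if current_room != reference_room and not reference_fully_explored:
        return "return_through_door"
    return "continue_exploration"
```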
The methods described here can be implemented in the form of software. The software can be executed on a robot, on a human-machine interface (HMI) and/or on any other computer such as a home server or a cloud server. In particular, individual parts of the method can be implemented by means of software, which can be subdivided into different software modules and run on different devices. When the robot “does something” (e.g., executes a step of the method), this process can be initiated by the control unit 150 (see
Number | Date | Country | Kind |
---|---|---|---|
10 2017 121 128.4 | Sep 2017 | DE | national |
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/EP2018/074607 | 9/12/2018 | WO | 00 |