The present invention relates to an apparatus and method for simulating a mobile robot at a work site, and in particular to simulating the mobile robot and/or gathering information required for planning the deployment of one or more mobile robots at the work site.
In a work site such as a production line or manufacturing plant, autonomous mobile robots can be used to perform tasks such as transporting objects/goods, performing production jobs, and/or inspecting the environment. They are capable of human interaction, such as taking in instructions to move between instructed locations.
Currently, it is time consuming and challenging to plan for the deployment of these autonomous mobile robots for a work site. A large amount of information about the work site has to be gathered manually to determine the best way to deploy the robots. At the planning stage, it is hard for users to visualize the scenario after the robots are deployed. Furthermore, after the robots are deployed, it can also be difficult for users to plan for the addition of more robots.
Moreover, it is not always practical to arrange for demonstration or testing of the robots at the work site, as each robot can be rather bulky and difficult to transport. If the production premises are located at a remote location, sending robots on-site for demonstration or testing may be difficult, costly, or impossible.
According to an example of the present disclosure, there are provided an apparatus and a method for simulating a mobile robot at a work site, as claimed in the independent claims.
There is also provided a system for simulating a mobile robot at a work site in a system claim. Some optional features are defined in the dependent claims.
In a production line or manufacturing plant, autonomous mobile robots may be deployed to facilitate the production or manufacturing process. Each mobile robot may be mounted with machinery or equipment to transform the mobile robot into a mobile station for performing tasks using the mounted machinery or equipment. The term “payload” is used in the present disclosure to refer to the mounted machinery or equipment.
It is time consuming and challenging to plan for the deployment of these autonomous mobile robots for a work site. For instance, it can take 6 to 12 months for the planning and deployment. A large amount of information about the work site, for example floor plans, images of the site, and equipment locations, has to be gathered to determine the best way to deploy the robots. At the planning stage, it is hard for users to visualize the scenario after the robots are deployed. Even after the robots are deployed, it is difficult for users to plan for the addition of more robots.
Examples of the present disclosure can provide an apparatus and a method that are capable of simplifying and reducing the amount of time for the planning and deployment of the mobile robots to as short as a few weeks. Specifically, the apparatus and method provide a simple and convenient solution to gather information from a work site for the planning. The apparatus and method also provide a demonstration tool for demonstrating the capabilities of the mobile robots and/or simulating a scenario after deployment of the mobile robots to potential users.
The apparatus has to be present at the work site to gather information. In one example of the present disclosure, the apparatus can be a mobile device that can be hand-held such as a smartphone, a tablet device, a laptop, and the like. In another example, the apparatus can be a real mobile robot (a real robot present at the work site) having a mobile base that is movable, or the apparatus can be a movable cart that can be pushed by a user. The mobile base may have wheels and/or tracks for moving the mobile robot. The mobile robot can be configured to move around the work site on its own or it can be remotely controlled by a user to move around the work site. In the case of the movable cart, it can have a mobile base with wheels and/or tracks as well and be pushed around by the user.
The apparatus comprises one or more sensors to be used for navigation/area mapping purposes and may be or include one or more cameras for capturing images (photograph or video). The term “mapping” means scanning or sensing the environment of the work site for purposes of creating a map for the environment of the work site. Mapping can refer to a process to create parts of the map or create the entire map. Mapping inputs refer to data for full or partial map creation. Such mapping inputs can be obtained from the one or more sensors. The one or more sensors may also be or include laser sensors (e.g., for LiDAR system), infrared sensors, ultrasonic sensors and the like. LiDAR relates to a technology for measuring distances (ranging) by illuminating a target with laser light and measuring the reflection with a sensor. Differences in laser return times and wavelengths can then be used to make digital three-dimensional (3D) representations of the target.
The apparatus can be configured to have Simultaneous Localization and Mapping (SLAM) capability, which can be implemented using, for instance, an Adaptive Monte Carlo Localization (AMCL) algorithm. The AMCL algorithm relates to a probabilistic localization system for an object (known as an AMCL node) navigating in two dimensions (2D), i.e., navigating in a 2D map. An AMCL node can be configured to work with laser scans and laser maps provided through LiDAR, but it could also be extended to work with other sensor data, such as sonar or stereo cameras and the like. In the case that SLAM and AMCL are involved, a 2D map of the environment, typically a top view of the environment, has to be created.
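For illustration only, the following is a minimal sketch of the particle-filter idea underlying Monte Carlo localization on a 2D grid map. It is not the AMCL node itself; the grid map, cell size, noise values, and function names are assumptions made for the example.

```python
import math
import random

# Minimal Monte Carlo localization sketch (illustrative only; not the AMCL node itself).
# The map is a small 2D occupancy grid: grid[row][col] == 1 means the cell is occupied.
GRID = [
    [1, 1, 1, 1, 1],
    [1, 0, 0, 0, 1],
    [1, 0, 1, 0, 1],
    [1, 0, 0, 0, 1],
    [1, 1, 1, 1, 1],
]
CELL = 0.5  # assumed cell size in metres

def occupied(x, y):
    row, col = int(y / CELL), int(x / CELL)
    if 0 <= row < len(GRID) and 0 <= col < len(GRID[0]):
        return GRID[row][col] == 1
    return True  # treat anything outside the map as occupied

def raycast(x, y, theta, max_range=3.0, step=0.05):
    # Expected laser range from pose (x, y, theta) against the map.
    d = 0.0
    while d < max_range:
        if occupied(x + d * math.cos(theta), y + d * math.sin(theta)):
            return d
        d += step
    return max_range

def motion_update(particles, dist, dtheta):
    # Move every particle by the odometry estimate plus a little noise.
    for p in particles:
        p["theta"] += dtheta + random.gauss(0.0, 0.02)
        p["x"] += (dist + random.gauss(0.0, 0.02)) * math.cos(p["theta"])
        p["y"] += (dist + random.gauss(0.0, 0.02)) * math.sin(p["theta"])

def measurement_update(particles, measured_ranges, beam_angles, sigma=0.2):
    # Weight each particle by how well its predicted ranges match the laser scan.
    for p in particles:
        w = 1.0
        for z, a in zip(measured_ranges, beam_angles):
            expected = raycast(p["x"], p["y"], p["theta"] + a)
            w *= math.exp(-((z - expected) ** 2) / (2 * sigma ** 2))
        p["w"] = w
    total = sum(p["w"] for p in particles) or 1.0
    for p in particles:
        p["w"] /= total

def resample(particles):
    # Draw a new particle set in proportion to the weights.
    weights = [p["w"] for p in particles]
    return [dict(random.choices(particles, weights=weights)[0]) for _ in particles]

# Example usage: scatter particles in free space, then apply one odometry and scan update.
particles = [{"x": random.uniform(0.5, 2.0), "y": random.uniform(0.5, 2.0),
              "theta": random.uniform(-math.pi, math.pi), "w": 1.0} for _ in range(200)]
motion_update(particles, dist=0.1, dtheta=0.0)
measurement_update(particles, measured_ranges=[0.6, 0.9], beam_angles=[0.0, math.pi / 2])
particles = resample(particles)
```

In such a sketch, the pose estimate can then be taken as, for example, the weighted mean of the particle poses after resampling.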
The apparatus is configured to operate or run software, which can be an application, such as a mobile application of a mobile operating system (e.g. Google's Android, iPhone Operating System, etc.), or an application to be run on an operating system (e.g. Windows, MacOS, etc.) of a portable computer (e.g. notebook, laptop etc.). The software works with the one or more sensors for navigation in/area mapping of the environment of the work site. The software also helps a user to obtain data (e.g., cycle time, utilization etc.) other than those relating to navigation/area mapping, which will be discussed in more detail later. The software is configured to create a virtual mobile robot for display and simulate the movements and/or operation of the created virtual mobile robot in the environment of the work site. The software can provide a graphical user interface to facilitate a user to configure and/or use the software.
The apparatus can have a display, which can be an LCD, LED, or OLED screen, and the like. The display is used for showing the graphical user interface of the software. The apparatus may be configured to have the environment of the work site displayed as a background on the display and have the created virtual mobile robot overlaid or augmented in the displayed environment. The environment may be captured by one or more cameras. Alternatively, the one or more sensors can be other types of sensors like laser sensors, infrared sensors, ultrasonic sensors, and the like, which can map out the environment and provide data for rendering graphics representing the environment to be displayed on the display. The environment shown as background can be an entirely virtual environment that is created based on the mapping inputs (e.g., the virtual environment can simulate a fully furnished environment), or a partially virtual environment with overlays or augmentations over what a camera captures (e.g., only walls and/or other objects that can obstruct the virtual mobile robot are highlighted, the ground is deliberately provided with a mesh, etc.). The appearance of the virtual graphical content and/or graphical overlays may be customizable.
A user can provide inputs to control the behavior of the virtual mobile robot. Such user inputs can be used to program an actual mobile robot to be deployed. In the case of remote control of a real mobile robot used to gather information at the work site, a display can be provided at where the user is remotely located to provide the user with the graphical user interface to operate the virtual mobile robot for simulating the virtual mobile robot and/or to gather information from the simulation. The graphics of the virtual mobile robot can be in any form as long as a movable object is shown. For instance, it may be a simple sphere or box object with no elaborate designs. In another example, the graphics may be a realistic representation of a real mobile robot.
In one example, the apparatus is configured to perform jobs a) to e) as follows:
a) Produce a map of the environment of the work site based on inputs from the one or more sensors. The map can be a two-dimensional (2D) and/or three-dimensional (3D) map. All references to “the map” in the present disclosure refer to this map. The 2D map can be a top view of the environment. In one example, a 3D map can be first created and then converted into the 2D map. The map can be generated by first moving the apparatus around the environment of the work site to scan and map the environment. After the map is generated, localization data, global path data, object data relating to obstacles that include static objects and/or dynamic objects, and/or other information can be collected. Examples of dynamic objects can include humans, machines, and/or random objects lying on the floor. Alternatively, it is also possible to have the map generated as the apparatus is moved around the environment of the work site, and concurrently collect localization data, global path data, object data relating to obstacles that include static objects and/or dynamic objects, and/or other information.
b) Obtain localization data, i.e., data relating to positioning of the virtual mobile robot in the produced map. The apparatus that is present at the environment of the work site can be deemed as the hardware of the virtual mobile robot. Hence, when the apparatus tracks its location, the location of the apparatus can be used to track the location of the virtual mobile robot. Localization is the way in which the apparatus tracks its location on the produced map. It is a process the apparatus uses to estimate its position in the environment based on data collected from the one or more sensors. For example, there can be two types of localization. Laser localization uses data from a laser sensor (e.g., LiDAR scanner) of the apparatus in conjunction with the map to calculate its position. This can be set as a default way the apparatus keeps track of its position in the environment (or operating space). Optionally, the apparatus can use lights to track its position in the environment. In this case, a light source has to be provided. This may be a torchlight function in the case that the apparatus is a smartphone or other mobile device, and/or mounted headlights in the case that the apparatus is a real mobile robot controlled by a user or a moveable cart that is pushable by a user. The light localization process can be used in dynamic environments in which objects move frequently, with the apparatus localizing based on the past positions of the objects and/or the apparatus. During localization, the map is compared with data collected from the one or more sensors to correct the position of the virtual mobile robot. With this information, the apparatus can track the position of the virtual mobile robot on the map.
c) Obtain global path data, for instance, one or more routes or paths auto-computed by the apparatus for the virtual mobile robot. These auto-computed paths can serve as instructions or guidance to a real mobile robot that may be deployed at the work site. A path planning scheme called cost-based path planning can be used for the automatic computation of paths. In this scheme, a virtual mobile robot instructed to proceed to a goal (e.g., a waypoint set by a user in the software) searches the map for the most efficient path from a current location of the virtual mobile robot to the goal, based on information about the map. This path is called a global path and is the optimal path between two points. The virtual mobile robot follows this path to the goal and, in the process, avoids any unmapped obstacle (i.e., an object blocking movement of the virtual mobile robot that is not recorded in the map). If the apparatus detects unmapped objects, it alters the local path of the virtual mobile robot to avoid them. If the virtual mobile robot is unable to proceed along the global path, the apparatus can re-plan a new global path for the virtual mobile robot based on information about the map.
d) Obtain object data relating to obstacles that include static objects and/or dynamic objects to the virtual mobile robot, and record their movements and positions relative to the map. Such objects are detected using the one or more sensors.
e) Optionally, gather other information relating to improvement of work efficiency, such as cycle time and utilization data. Cycle time refers to the time the virtual mobile robot takes to complete a work cycle comprising tasks assigned to the virtual mobile robot to be virtually performed. Utilization data refers to the extent of usage of the virtual mobile robot. Another example can be traffic conditions for movement at the work site.
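For illustration of item e) above, the following sketch shows one way cycle time and utilization might be derived from a simulated task log. The log format, field names, and shift length are assumptions made for the example and do not represent actual collected data.

```python
# Illustrative derivation of cycle time and utilization from a simulated task log.
# The log format, field names, and shift length are assumptions made for this sketch.
task_log = [
    {"task": "pick up parts at waypoint 1", "start": 0.0,   "end": 95.0},
    {"task": "deliver parts to waypoint 2", "start": 95.0,  "end": 210.0},
    {"task": "return to recharge area",     "start": 260.0, "end": 330.0},
]
shift_duration = 8 * 3600.0  # assumed length of one simulated shift, in seconds

cycle_time = task_log[-1]["end"] - task_log[0]["start"]    # duration of one full work cycle
busy_time = sum(t["end"] - t["start"] for t in task_log)   # time spent actually performing tasks
cycles_per_shift = shift_duration // cycle_time            # whole cycles that fit in the shift
utilization = (cycles_per_shift * busy_time) / shift_duration

print(f"cycle time: {cycle_time:.0f} s, utilization: {utilization:.0%}")
```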
For example, the apparatus may be configured to receive the following 6 types of inputs (i) to (vi) from a user.
(i) The user can input waypoints at specific points or areas in the environment displayed on the display as the apparatus is moved around. For example, in the case the display is a touchscreen display, the user can select a point or area in the environment displayed on the display and set it as a waypoint. Alternatively, the waypoint can be set by selecting points or areas in the created map via the graphical user interface. Each waypoint may be annotated with comments and/or goals may be set at these waypoints. For instance, a waypoint can be set as a point or area to perform specific actions, e.g., a recharge area to simulate recharging of a rechargeable battery of a virtual mobile robot, an area for changing payload mounted on a virtual mobile robot, a repair area for a virtual mobile robot, an area to perform a specific production/warehousing task by a virtual mobile robot, etc. The waypoints can also function as a guide for navigating the path of the virtual mobile robot that is simulated by the apparatus. Numbers can be provided to the waypoints to indicate a priority order in which the virtual mobile robot should move between the waypoints.
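For illustration only, the following sketch shows how such waypoints might be represented and ordered by their priority numbers. The field names and values are assumptions made for the example.

```python
# Illustrative waypoint records of the kind the software might store (field names assumed).
waypoints = [
    {"priority": 2, "position": (12.5, 3.0), "goal": "change payload",   "comment": "near tool rack"},
    {"priority": 1, "position": (4.0, 8.5),  "goal": "pick up parts",    "comment": "line A output"},
    {"priority": 3, "position": (0.5, 0.5),  "goal": "recharge battery", "comment": "charging dock"},
]

# The numbers given to the waypoints indicate the priority order in which the
# virtual mobile robot should move between them.
route = sorted(waypoints, key=lambda wp: wp["priority"])
for wp in route:
    print(wp["priority"], wp["position"], wp["goal"])
```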
(ii) The user can mark out one or more zones or areas that the virtual mobile robot is allowed to enter and/or zones or areas that the virtual mobile robot is not allowed to enter in the environment. If certain zones are preferred for entry or not entirely disallowed, a resistance or preference level may be assigned to these marked out zones or areas for path planning purposes. More details on resistance and preference are provided below.
(iii) The user can input path planning parameters via the graphical user interface of the software run by the apparatus. Path planning parameters control how the virtual mobile robot moves through the environment. A non-exhaustive list of examples of path planning parameters is as follows:
1) The virtual mobile robot's maximum traveling and rotating speed.
2) Turning radius of the virtual mobile robot.
3) Grid resolution of a path planning grid. The path planning grid relates to the cost-based path planning scheme discussed above. Conceptually, this path planning scheme breaks the produced 2D map (top view) into discrete 100 mm squares (typically sufficient), called the path planning grid, and assigns a cost to each square. Free (empty) squares (those not close to any obstacles) may have a cost of 0.1. The cost for squares containing walls and other fixed objects is infinite, which means the virtual mobile robot will never enter these squares because the cost is far too high. An illustrative sketch of path planning over such a cost grid is provided after this list.
4) Fast and slow speeds of the virtual mobile robot. For example, a user can set the movement speed of the virtual mobile robot at a specific waypoint, and/or point or area in the map. Some areas may require the virtual mobile robot to slow down for traffic control purposes, for safety reasons, or for any other purpose.
5) Padding and clearances (at fast and slow speeds) of the virtual mobile robot. This refers to how much clearance or distance the virtual mobile robot needs to have from objects in the environment.
6) Rotational speed of a virtual mobile robot at a specific waypoint, and/or point or area in the map.
7) Acceleration/deceleration of the virtual mobile robot at a selected waypoint, and/or point or area in the map, or the acceleration/deceleration as a user drives the virtual mobile robot around the environment.
8) The amount of resistance for resisted sectors and resisted lines. Sectors are the zones or areas, which a user may set in the map, for instance, by selecting grid squares. Lines refer to boundary lines set by a user in the map. The term “resistance” herein determines the cost of crossing a resisted sector and/or line set in the map. It is a value defining how much the virtual mobile robot will resist driving through a particular sector and/or crossing a line, and instead find an alternative path. The cost of driving through a resisted sector or crossing a resisted line can be multiplied by its resistance value. For example, a normal area or sector can be given a cost value of 1, and setting the resistance value to 1 will turn off the resistance behavior.
9) Preferred lines for the virtual mobile robot to cross or follow, preferred sectors for the virtual mobile robot to enter, and/or preferred directions for the virtual mobile robot to move. For example, the same resistance value may be used to indicate the level of preference, or another preference value may be used to indicate the level of preference.
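For illustration of parameters 3) and 8) above, the following is a minimal sketch of global path planning over a cost grid, using Dijkstra's algorithm as a stand-in for the planner. The grid values, the resistance handling, and the algorithm choice are assumptions made for the example and are not the actual path planning implementation.

```python
import heapq

# Illustrative cost grid (see parameters 3) and 8) above): 0.1 marks a free square,
# float("inf") marks a wall or fixed object, and a resisted sector is modelled by
# multiplying its base cost by a resistance value. All numbers are assumptions.
FREE, WALL = 0.1, float("inf")
grid = [
    [FREE, FREE, FREE, FREE],
    [FREE, WALL, WALL, FREE],
    [FREE, FREE, FREE, FREE],
]
resistance = {(2, 1): 5.0, (2, 2): 5.0}  # resisted squares (row, col) -> resistance value

def cell_cost(cell):
    return grid[cell[0]][cell[1]] * resistance.get(cell, 1.0)

def global_path(start, goal):
    # Dijkstra search over the cost grid; returns the lowest-cost path as a list of cells.
    frontier = [(0.0, start, [start])]
    best = {start: 0.0}
    while frontier:
        cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        row, col = cell
        for nxt in ((row + 1, col), (row - 1, col), (row, col + 1), (row, col - 1)):
            if not (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])):
                continue
            new_cost = cost + cell_cost(nxt)
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(frontier, (new_cost, nxt, path + [nxt]))
    return None  # no reachable path

# The planner never enters the infinite-cost wall squares and only crosses the
# resisted squares when doing so is still the cheapest option overall.
print(global_path((0, 0), (2, 3)))
```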
(iv) The user can control/adjust the localization parameters of the one or more sensors according to user requirements; an illustrative configuration sketch is provided after the parameter examples below. Note that although the unit millimeters (mm) is indicated below, other measurement units may also be similarly applied.
When laser localization is used, examples of some localization parameters may be as follows.
A flag to vary the number of samples based on a localization score. When enabled, the number of samples may be lowered when the virtual mobile robot is moving and the localization score is high. This reduces Central Processing Unit (CPU) or computational demand at the apparatus.
A parameter to adjust the separation (in degrees) required between laser readings used for localization. The parameter may be set to discard readings that are too close to each other, and this reduces CPU or computational demand at the apparatus.
Grid Resolution (in mm) of map grids created during the laser localization. This relates to scan resolution. Decreasing this value increases localization accuracy, but increases demand on memory usage (e.g., RAM usage) at the apparatus.
Millimeters of linear error per linear millimeter of travel. This refers to the allowed percentage error (in mm) of the apparatus's linear odometry readings. If set too high, sample poses will be too spread out for the apparatus to determine its location. If set too low, the apparatus might not be able to localize.
Degrees of error per degree of rotation of the apparatus. This refers to the allowed percentage error (in degrees) of the apparatus's rotational odometry readings. If set too high, sample poses will be too spread out for the apparatus to determine its location. If set too low, the apparatus might not be able to localize.
Degrees of error per linear mm traveled. This refers to the allowed error (in degrees) of the robot's rotation per one mm of linear travel. If set too high, sample poses will be too spread out for the apparatus to determine its location. If set too low, the apparatus might not be able to localize.
Number of pose samples the apparatus uses for localization. This relates to scan resolution. Increasing this value increases localization computations. If set too low, the apparatus may fail to localize.
Distance the apparatus travels (in mm) before localizing. This relates to frequency of localization of the apparatus. This parameter can be adjusted to help reduce CPU or computational demand at the apparatus. The apparatus only localizes when it travels beyond the listed distance value.
Angle (in degrees) the apparatus turns before triggering localization. This relates to frequency of localization of the apparatus. This parameter can be adjusted to help reduce CPU or computational demand at the apparatus. The apparatus only localizes when it rotates beyond the listed angular value.
When light localization is used, examples of some localization parameters may be as follows. The light source mentioned below refers to one that is mounted on the apparatus or separately provided to work with the apparatus.
Minimum height (in mm) of the light source above the ground. This value is approximate and should be lower than the real value. It is used to establish a valid range and eliminate false positives.
Maximum height (in mm) of the light source above the ground. This value is approximate and should be higher than the real value. It is used to establish a valid range and eliminate false positives.
Minimum length (in mm) of the light source. This value is approximate and should be slightly lower than the real value.
Maximum length (in mm) of the light source. This value is approximate and should be slightly higher than the real value.
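For illustration only, the following sketch groups the laser and light localization parameters described above into a configuration structure. The parameter names and values are assumptions made for the example and are not the actual settings of any particular software.

```python
# Illustrative grouping of the localization parameters described above into a
# configuration structure. Parameter names and values are assumptions for this sketch.
laser_localization = {
    "vary_samples_by_score": True,         # lower the sample count when moving with a high score
    "min_beam_separation_deg": 1.0,        # discard laser readings closer together than this
    "grid_resolution_mm": 50,              # finer grid: better accuracy, more RAM used
    "linear_error_mm_per_mm": 0.05,        # allowed linear odometry error
    "rotational_error_deg_per_deg": 0.05,  # allowed rotational odometry error
    "rotational_error_deg_per_mm": 0.01,   # allowed rotational drift per mm of travel
    "num_pose_samples": 2000,              # more samples: better localization, more computation
    "localize_every_mm": 150,              # localize only after travelling this distance
    "localize_every_deg": 8,               # or after turning by this angle
}

light_localization = {
    "light_min_height_mm": 1800,  # slightly below the real mounting height
    "light_max_height_mm": 2200,  # slightly above the real mounting height
    "light_min_length_mm": 250,   # slightly shorter than the real light source
    "light_max_length_mm": 350,   # slightly longer than the real light source
}
```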
(v) The user can configure the virtual mobile robots to perform specific tasks. The time duration for the tasks may also be set. Optionally, a simulation (e.g., in the form of an animated video) can be provided to show the performance of the tasks by the virtual mobile robots. In an example, a virtual mobile robot can be configured to move and operate in the environment as if it is a real mobile robot once the following is done:
(vi) The user can select a payload from a list of different payloads to be mounted to the virtual mobile robot. Users can also input CAD models of a customized payload that they would like to have, and/or CAD models of a customized mobile robot that they require.
The software of the apparatus can be configured to include more than one virtual mobile robot, and/or one or more objects for simulating traffic conditions in the case that more than one virtual mobile robot is deployed. Each virtual mobile robot can be configured according to what is described above for a virtual mobile robot. The one or more objects created may be static objects such as furniture, building structure, and/or equipment placed at a location. The one or more objects created may also be dynamic objects such as people, moving equipment, and/or other moving objects. Traffic data can be gathered from the simulation for traffic control management purposes. The software can also be configured to enable a user to view the simulation of the one or more virtual mobile robots and any created object in the environment.
An example of a system architecture of a system 100 comprising the apparatus as described above is illustrated in
In the present example, the apparatus 102 is a smartphone with a touchscreen display, and is configured to run a mobile application 103. The apparatus 102 has one or more sensors 109 (3D Scanner/Sensor), including sensors for a LiDAR system, for performing a mapping and localization process 107 at the environment of the work site. The one or more sensors 109 also include a camera. The apparatus 102 provides a graphical user interface 105 (GUI) for a user 120 to view the environment captured by the camera as a background, and to view the virtual mobile robot that is created by the application 103 and overlaid or augmented on the background. In addition, the GUI 105 provides features for the user 120 to:
The apparatus 102 works with a cloud system 104 that may use virtual machine (VM) technology. For instance, cloud services can use virtual machines to provide virtual application resources to multiple users at once. A Virtual Machine (VM) can be a computing resource that uses software instead of a physical computer to run programs and deploy applications. One or more virtual “guest or client” machines can run on a physical “host” machine. Each virtual machine may run its own operating system and function separately from the other VMs, even when they are all running on the same host.
Specifically, the apparatus 102 works with a plurality of servers in the cloud system 104, which is described as follows. It is appreciated that in another example, more than one server may handle the job scope of each of the servers described below.
In the present example, the cloud system 104 includes a storage server 106 for managing cloud storage of:
The apparatus 102 downloads from or uploads to the server 106 the above-mentioned data as required during operation. Alternatively, in another example, such data may be stored locally in local memory accessible to the apparatus 102. The server 106 can also be configured to manage user accounts for a plurality of users of apparatuses like the apparatus 102, and conduct authentication for each user logged in to use the application 103.
The cloud system 104 further includes a planning server 108 for running an autonomous intelligent vehicle (AIV) mapping software (also known as Mobile Planner), which amongst other things, functions as a control center for creating a map of the work site from mapping inputs provided by the apparatus 102 and managing AIV configurations for the virtual mobile robot. The planning server 108 can support the mapping and configurations for a plurality of apparatuses like the apparatus 102, and also real mobile robots. AIV refers to the virtual mobile robot created at the apparatus 102. In the present example, the Mobile Planner is a server-based application. Hence, the mapping process to create the map is performed at the server 108 instead of the apparatus 102. The apparatus 102 just needs to provide the mapping inputs to the server 108 and the server 108 will return the created map. In another example, it is possible that the Mobile Planner is a local application run at the apparatus 102.
The cloud system 104 also includes a simulation server 110 for running software (also known as Fleet Manager) for performing the required processing to simulate the operation of the virtual mobile robot in the environment of the work site. The Fleet Manager can control real and/or virtual mobile robots in the environment of the work site. The Fleet Manager provides a centralized configuration platform for configuring a plurality of real and/or virtual mobile robots in the environment. It also provides a central map management platform for the plurality of real and/or virtual mobile robots. The user configurations made centrally using the Fleet Manager and the map information collected centrally at the Fleet Manager can be automatically propagated to the plurality of real and/or virtual mobile robots. The Fleet Manager further manages job queuing, matching jobs to available real and/or virtual mobile robots, and the dispatching of the plurality of real and/or virtual mobile robots in the environment to perform assigned jobs. It also performs traffic management for the plurality of real and/or virtual mobile robots to prevent collision and/or to ensure that they move efficiently. Location and trajectory information of the real and/or virtual mobile robots is shared by the Fleet Manager. The Fleet Manager acts as a single point of integration and communication for software clients, which can include a plurality of the same apparatus 102 and/or real mobile robots, and other automation equipment (excluding the real and/or virtual mobile robots) in the environment. In the present example, the Fleet Manager is a server-based application. In another example, it is possible that some or all of the features of the Fleet Manager are provided as a local application run at the apparatus.
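For illustration only, the following is a minimal sketch of the kind of job queuing and dispatching described above, in which queued jobs are matched to available robots with a suitable payload. It is not the Fleet Manager's actual interface; all names, fields, and the matching rule are assumptions made for the example.

```python
from collections import deque

# Minimal sketch of job queuing and dispatch of the kind a fleet-management layer performs.
# Not the actual Fleet Manager interface; all names and fields are assumptions.
job_queue = deque([
    {"id": "J1", "goal": "waypoint 3", "required_payload": "conveyor top"},
    {"id": "J2", "goal": "waypoint 7", "required_payload": "UVC top"},
])
robots = [
    {"id": "AMR-01", "payload": "conveyor top", "busy": False},
    {"id": "AMR-02", "payload": "UVC top", "busy": True},
]

def dispatch():
    # Match queued jobs to available robots that carry a suitable payload.
    assigned = []
    for _ in range(len(job_queue)):
        job = job_queue.popleft()
        robot = next((r for r in robots
                      if not r["busy"] and r["payload"] == job["required_payload"]), None)
        if robot is None:
            job_queue.append(job)  # keep the job queued until a suitable robot frees up
            continue
        robot["busy"] = True
        assigned.append((job["id"], robot["id"]))
    return assigned

print(dispatch())  # e.g. [('J1', 'AMR-01')]; J2 stays queued while AMR-02 is busy
```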
During simulation of the movement of a virtual mobile robot, if a user does not move the apparatus 102 along with the virtual mobile robot to perform localization to enable the virtual mobile robot to avoid (i.e., get around) dynamic objects, the simulation of the virtual mobile robot can still continue and be displayed based on real-time updates from the Fleet Manager. The Fleet Manager can provide real-time updates on traffic conditions in the work site based on updates from other real and/or virtual mobile robots deployed in the work site. Dynamic objects detected by other real and/or virtual mobile robots can still be recorded and mapped for a virtual mobile robot even if the apparatus 102 is not around to perform localization for the virtual mobile robot.
The cloud system 104 includes a Manufacturing Execution System (MES) server 112 that can be used to drive manufacturing operations by managing and reporting on plant activities, typically as events occur in real time. The MES server 112 can be used in manufacturing to track and document the transformation of raw materials to finished goods. The MES server 112 can provide information that helps manufacturing decision makers understand how current conditions in a plant can be optimized to improve production output. The MES server 112 can work in real time to enable the control of multiple elements of a production process (e.g., inputs, personnel, machines and support services). The MES server 112 is used to manage, adjust, monitor, and control manufacturing processes performed by real and/or virtual equipment/machinery in the environment, and/or by real and/or virtual mobile robots in the environment.
The Mobile Planner, Fleet Manager and MES server 112 can communicate with one another to support a plurality of apparatuses like the apparatus 102.
Upon user selection to perform mapping or upon launch of the application 103, the application 103 checks whether mapping is required to create the map at a step 204. If a map of the work site already exists (e.g., uploaded or updated by the Mobile Planner or Fleet Manager, or previously created), then mapping is not required and the process goes to step 222. The apparatus 102 can communicate with the Mobile Planner or Fleet Manager to obtain an existing map or obtain an update for an existing map. In one example, the application 103 may prompt the user whether to proceed with re-mapping if a map already exists. If mapping is required, 3D mapping is initiated at a step 206. In this example, the user has to move the apparatus 102 around the work site to scan the environment to create a 3D map of the environment. During the mapping process, static/dynamic objects can be provided in the environment for the apparatus 102 to detect and record in the 3D map. The apparatus 102 obtains inputs from the one or more sensors 109 of
If the user wishes to edit the 2D map (or the Fleet Manager has an update for the 2D map), the user (or the Fleet Manager) proceeds to edit the map at a step 210. Editing of the map can include adding waypoints and setting goals to be achieved at the waypoints, marking allowed/disallowed zones, editing path planning parameters, and setting tasks for the virtual mobile robot to be simulated according to (i), (ii), (iii) and (v) respectively of the 6 types of user inputs (i) to (vi) described earlier. Editing the map can also include adding static/dynamic objects at specific points and/or areas of the map.
If the map is already edited and the user does not wish to edit the 2D map at step 208, a simulation for automatic path planning by the virtual mobile robot can be performed at step 212. Step 212 can also proceed once the map editing is done at step 210.
At step 212, the virtual mobile robot is created to appear on the display of the apparatus 102 with images of the environment of the work site captured by the camera as a background. The virtual mobile robot is then set to navigate autonomously at a step 214 according to the goals/tasks that are set for the edited map. In this example, the user has to carry the apparatus and follow the virtual mobile robot, which is set to move automatically according to the set goals/tasks, to enable the apparatus 102 to perform localization for the virtual mobile robot. As the apparatus 102 is moved with the virtual mobile robot, it detects any unmapped or unrecorded dynamic object in the environment and has it recorded in the map at a step 216. On detection of such a dynamic object, if it obstructs the movement of the virtual mobile robot, a local path plan is determined based on inputs of the one or more sensors during localization at a step 218 to guide the virtual mobile robot to move to avoid (i.e., get around) the dynamic object. For example, when an object is found in a global planned path (in front of the virtual mobile robot), the virtual robot will halt, alter its path and continue towards its next goal. If no such dynamic object is detected, the virtual mobile robot will follow a global path plan at a step 220 to get from one waypoint to another waypoint in an optimal manner without any concern for dynamic objects. As the information of static objects would have been gathered during the map creation, it is understood that the movements of the virtual mobile robot during autonomous navigation would consider the presence of the static objects. A static object that is unmapped, newly included in the environment, and obstructs the movement of the virtual mobile robot will be handled in the same manner as the detected dynamic object at step 216. The method ends at step 224 when the simulation is ended by a user or the set goals/tasks are accomplished by the virtual mobile robot.
In another example, the virtual mobile robot is configured to appear and navigate autonomously in the 2D map at the step 214 according to the goals/tasks that are set. The user does not carry the apparatus 102 and follow the virtual mobile robot around but watches on the display how the virtual mobile robot automatically conducts autonomous navigation in the captured environment as it goes around accomplishing the goals/tasks. In this case, as the virtual mobile robot moves in the environment, the virtual mobile robot will only detect dynamic objects that are already mapped or recorded in the map at step 216. The virtual mobile robot will simulate the localization process and move based on a local path plan at step 218 to overcome or get around the mapped dynamic object. For example, when an object is found in a global planned path (in front of the virtual mobile robot), the virtual robot will halt, alter its path and continue towards its next goal. If no such dynamic object is detected, the virtual mobile robot will follow a global path plan at step 220 to get from one waypoint to another waypoint in an optimal manner without any concern for dynamic objects. As the information of static objects would have been gathered during the map creation, it is understood that the movements of the virtual mobile robot during autonomous navigation would consider the presence of the static objects. The method ends at step 224 when the simulation is ended by a user, or the set goals/tasks are accomplished by the virtual mobile robot.
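For illustration only, the following is a schematic sketch of the autonomous navigation loop described for steps 214 to 220: follow the global path between waypoints and, when an unmapped obstacle is detected, determine a local path to get around it before continuing. The helper functions are placeholders standing in for the behavior described in the text.

```python
# Schematic control loop for the autonomous navigation described above (steps 214 to 220).
# The helper functions are placeholders standing in for the behavior described in the text.

def follow(path):
    print("following", path)

def plan_global_path(start, goal):
    return [start, goal]               # placeholder: optimal path from the cost grid

def plan_local_path(position, goal, obstacle):
    return [position, "detour", goal]  # placeholder: short detour around the obstacle

def detect_unmapped_obstacle(position):
    return None                        # placeholder: sensor/localization check

def navigate(start, goals):
    position = start
    for goal in goals:                 # goals are the waypoints, in priority order
        path = plan_global_path(position, goal)
        while position != goal:
            obstacle = detect_unmapped_obstacle(position)
            if obstacle is not None:
                path = plan_local_path(position, goal, obstacle)  # halt, alter path, continue
            follow(path)
            position = path[-1]        # placeholder: assume the goal is reached

navigate("dock", ["waypoint 1", "waypoint 2"])
```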
Back to step 204, the method goes from step 204 to step 222 when mapping is not required and the 2D map is already created, or upon user request. At step 222, instead of letting the virtual mobile robot navigate autonomously based on set goals/tasks (such as at step 214), the user manually controls and drives the virtual mobile robot around the work site using controls provided by the GUI 105, such as a virtual movement pad or joystick. The user can choose to move the apparatus 102 and follow the virtual mobile robot as it is driven. In this case, localization performed by the apparatus 102 for the virtual mobile robot can occur and the apparatus 102 can detect unmapped dynamic objects (that obstruct movement of the virtual mobile robot) to be mapped or recorded. If the user chooses not to move the apparatus 102 and follow the virtual mobile robot, then the virtual mobile robot will be driven in the environment as captured during the last map creation (or as last updated by the Fleet Manager).
Optionally, the user may add a payload to be mounted to the virtual mobile robot at step 222 and watch a simulation of the virtual mobile robot together with the payload in operation. One or more on-screen buttons or a menu option can be provided to turn the audio of the virtual mobile robot and/or payload on or off. When audio is on, a realistic simulation including sound produced by the virtual mobile robot and/or payload is provided. There can also be an option to turn on or off or reduce any animation of the virtual mobile robot and/or the payload. In this case, the virtual mobile robot and/or the payload can become static images and/or animation of the virtual mobile robot and/or the payload can be minimized during operation.
A mesh 304, which is made up of a plurality of spots in the present example, is augmented on the ground 302 on successful detection of the ground 302 by the apparatus 102 with the help of the one or more sensors 109. Static objects such as walls 306 shown in
The intensity level of the UV light of the UVC top 704 of
In the step 1508, the AR device is used to collect data. In this step, a user can manually input the data to be collected on the AR device and/or let the AR device automatically collect data using the one or more sensors 109 of
In the step 1510, the AR device sends the data collected in step 1508 via the 5G network to a simulation software (which can be the Fleet Manager or another software) as a simulation task. In this example, the simulation software is run on the simulation server 110 of
In the step 1512, the AR device retrieves the simulated results from the simulation server 110 via the 5G network. The simulated results can be displayed in real-time on the display of the AR device. Alternatively, the simulated results can be recorded, and a user can view the pre-recorded simulation on the display of the AR device at a later time to learn from the demonstration provided by the simulation, and/or to study the simulation and provide improvements.
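For illustration only, the following sketch shows how an AR device might submit collected data as a simulation task and later retrieve the results, corresponding to steps 1510 and 1512. The server address, endpoints, and payload fields are hypothetical assumptions made for the example and are not the actual interface of the simulation software.

```python
import json
import time
import urllib.request

# Hypothetical sketch of steps 1510 and 1512: submit collected data as a simulation task
# and retrieve the results. The server URL, endpoints, and payload fields are assumptions.
SERVER = "http://simulation-server.example.com"  # placeholder address

def submit_task(collected_data):
    # Send the collected data to the (hypothetical) simulation task endpoint.
    req = urllib.request.Request(
        SERVER + "/tasks",
        data=json.dumps(collected_data).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["task_id"]

def fetch_results(task_id, poll_seconds=2.0):
    # Poll the (hypothetical) results endpoint until the simulation is done.
    while True:
        with urllib.request.urlopen(f"{SERVER}/tasks/{task_id}/results") as resp:
            result = json.load(resp)
        if result.get("status") == "done":
            return result  # e.g., robot poses over time for display on the AR device
        time.sleep(poll_seconds)

# Example usage (would run on the AR device over the 5G network):
# task_id = submit_task({"map": "...", "waypoints": [...], "robots": 3})
# results = fetch_results(task_id)
```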
With the help of the 5G network and software such as the Fleet Manager, the AR device can be used to smoothly visualize not just one virtual mobile robot but a plurality of virtual and/or real mobile robots operating in the environment of a work site. Each apparatus controlling one or more virtual mobile robots and/or each real mobile robot working in the environment can be configured to communicate data about itself with the server running the Fleet Manager (e.g., 110 of
An example of the apparatus (e.g., apparatus 102) described in examples of the present disclosure or any one of the servers described with reference to
The display generally operates to provide a presentation of graphical content (e.g. the graphical user interface 105 of
In general, the non-volatile memory functions to store (e.g., persistently store) data and executable code including code that is associated with the functional components of the mobile platform. In some cases, for example, the non-volatile memory includes bootloader code, modem software, operating system code, file system code, as well as other codes well known to those of ordinary skill in the art that are not depicted for simplicity.
In many implementations, the non-volatile memory is realized by flash memory (e.g., NAND or NOR memory), but it is certainly contemplated that other memory types may be utilized as well. Although it may be possible to execute the code from the non-volatile memory, the executable code in the non-volatile memory is typically loaded into RAM and executed by one or more of the N processing components.
One or more computer programs may be stored on any machine or computer readable medium that may be non-transitory in nature. The computer readable medium may include storage devices such as magnetic or optical disks, memory chips, or other storage devices suitable for interfacing with the mobile platform. The machine or computer readable medium may also include a hard-wired medium such as exemplified in the Internet system, or wireless medium such as exemplified in the Wireless LAN (WLAN) system.
The N processing components (or “one or more processors”) in connection with RAM generally operate to execute the instructions stored in non-volatile memory to effectuate the functional components. As one skilled in the art will appreciate, the N processing components may include a video processor, modem processor, DSP, graphics processing unit (GPU), and other processing components.
The transceiver component may include N transceiver chains, which may be used for communicating with external devices via wireless networks. Each of the N transceiver chains may represent a transceiver associated with a particular communication scheme. For example, each transceiver may correspond to protocols that are specific to local area networks, cellular networks (e.g., a WIFI network, a CDMA network, a GPRS network, a UMTS network, a 5G network, etc.), and other types of communication networks. In some implementations, the communication of the transceiver component with communication networks enables a location of connected devices to be determined.
The one or more sensors and/or components for navigation/area mapping purposes may be or include the image capturing components for capturing images (photograph or video). The one or more sensors may be or include laser sensors (e.g., LiDAR scanner), infrared and/or ultrasonic sensors.
In summary, examples of the present disclosure may have the following features.
An apparatus (e.g., 102 in
The one or more objects detected in the environment may include one or more objects moveable in the environment (dynamic object).
The apparatus may be operable to: receive user input to add one or more waypoints in the displayed environment to navigate movement of the virtual mobile robot according to the one or more waypoints.
The apparatus may be operable to: set a task for the virtual mobile robot to perform at any of the one or more waypoints; and display graphics for simulating the performance of the task.
A map (e.g. 1402 and 1404 in
The map may be a three-dimensional map (e.g. 1402 in
The apparatus may be operable to: send data of one or more movement paths (e.g., local path plan) determined by the virtual mobile robot to the server.
The apparatus may be operable to: send data to a server (e.g., 110 in
The apparatus may be operable to: process captured images of the environment directed to ground in the environment during an initialization step; display a graphical indication (e.g., 304 in
The apparatus may be operable to: receive user input to select one or more zones that the virtual mobile robot is allowed to enter and/or zones that the virtual mobile robot is not allowed to enter in the displayed environment.
The apparatus may be operable to: receive user input to select a payload (e.g., 602 and 604 in
The payload may be a mobile manipulator that supports robot kinematics of 6 or more axes, and the mobile manipulator is configurable to simulate performance of one or more production tasks.
The apparatus may be operable to: calculate work cycle time of the virtual mobile robot and/or utilization information of the virtual mobile robot based on simulation of the operation of the virtual mobile robot.
The apparatus may be operable to: receive user input to set one or more path planning parameters for the virtual mobile robot, wherein the one or more path planning parameters include one or more of the following:
More than one virtual mobile robot may be generated (e.g., upon user request or by the Fleet Manager to simulate traffic conditions) to move in the displayed environment, and each virtual mobile robot may consider one another as an object to avoid.
The apparatus may be operable to: display graphics for one or more features that are invisible to human eye, wherein the one or more features include one or more of the following:
The apparatus may be operable to: enable gathering of information relating to mapping (for full map or partial map creation), navigation (e.g., localization data, local/global path plans auto-computed by the virtual mobile robot, etc.) and/or operation (e.g., cycle time, utilization time, traffic condition, other work efficiency related parameter, etc.) of the virtual mobile robot by setting the virtual mobile robot to autonomously navigate in the environment, or enable gathering of information relating to mapping, navigation and/or operation of the virtual mobile robot by a user providing input to drive the virtual mobile robot to move in the environment.
The apparatus may be a hand-held mobile device.
A method (e.g., 200 in
A system (e.g., 100 in
In the specification and claims, unless the context clearly indicates otherwise, the term “comprising” has the non-exclusive meaning of the word, in the sense of “including at least” rather than the exclusive meaning in the sense of “consisting only of”. The same applies with corresponding grammatical changes to other forms of the word such as “comprise”, “comprises” and so on.
While the invention has been described in the present disclosure in connection with a number of examples and implementations, the invention is not so limited but covers various obvious modifications and equivalent arrangements, which fall within the purview of the appended claims. Although features of the invention are expressed in certain combinations among the claims, it is contemplated that these features can be arranged in any combination and order.
Number | Date | Country | Kind
---|---|---|---
10202011988T | Dec 2020 | SG | national

Filing Document | Filing Date | Country | Kind
---|---|---|---
PCT/JP2021/042214 | 11/17/2021 | WO |