The embodiments herein relate to generating and displaying targeted information related to a plurality of robots working in an operating environment and, more particularly, to generating targeted information for fixing or diagnosing faults or errors encountered in the operating environment and to displaying the targeted information.
Usage of robots in industry has been increasing exponentially. Robots are now being used in both personal and commercial spaces. In order to tap the potential of robots, a large number of different types of robots are employed within a particular operating environment, for example, a warehouse floor. Existing systems do not provide live targeted information relevant to an end-user while handling problems or finding solutions related to robots in an operating environment.
Systems, methods, computer programs, and user interfaces are provided for generating and displaying targeted information. In one embodiment, targeted information may be generated at least based on the contextual information obtained by the system. The system includes a plurality of robots working in an operating environment, wherein each robot includes a processor to execute instructions and a memory to store the instructions. The system further includes a plurality of server nodes in communication with the plurality of robots, and a plurality of nodes executing on one or more robots in communication with the plurality of server nodes. One or more behaviors related to an active plan are executed while the plurality of robots is working in the operating environment. One or more Snapshots are created related to the executing behaviors, wherein a Snapshot includes a plurality of fields and each field is assigned a value. The value may take any form, such as a digit, text, string, character, or any other relevant form. Information is captured based on a parent context related to the executing behaviors, wherein the parent context includes parent information of the active plan. The plurality of fields is populated with values related to at least one of the captured information, the operating environment, and the one or more robots. The Snapshots are later closed with a result of the execution of the one or more behaviors. The Snapshots are aggregated, and the aggregated Snapshots are reported as part of the targeted information by the reporting nodes running on one or more robots.
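By way of a non-limiting illustration, a Snapshot may be modeled as a record of named fields with a create/populate/close lifecycle. The following Python sketch is illustrative only; the class name, field names, and API shape are assumptions based on the description above, not a definitive implementation.

    # Minimal sketch of a Snapshot lifecycle; names are illustrative assumptions.
    import time
    import uuid

    class Snapshot:
        def __init__(self, operation, parent=None):
            self.snapshot_id = str(uuid.uuid4())  # unique ID for this Snapshot
            self.operation = operation            # behavior/plan being traced
            self.parent = parent                  # parent context of the active plan
            self.start_time = time.time()
            self.tags = {}                        # fields populated with values
            self.result = None

        def add_tags(self, **fields):
            # populate fields with values related to the captured information,
            # the operating environment, and the robot
            self.tags.update(fields)

        def close(self, result):
            # close the Snapshot with the result of the behavior execution
            self.result = result
            self.end_time = time.time()

    snap = Snapshot(operation="Navigation", parent="ParentPlanContext")
    snap.add_tags(agent_id=121, zone="aisle-3")
    snap.close(result="success")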
In one embodiment, systems, methods, computer programs, and user interfaces are provided to enable a live debugger, tracer, recommendation system, or pattern recognizer to fix or diagnose errors or faults that may occur while the robots are either working or not working in an operating environment. Users can provide customized search queries to find solutions for improving the productivity of the operating environment based on the targeted information generated by the system. The system allows multiple plugins or libraries to be imported to increase the system's capability; for example, a rosbag or log collection module may be integrated for diagnosing an error in real-time. The user may provide a start and end time for diagnosing the error. Based on the results, the system may provide a visual representation for diagnosing the error by enabling analysis of rosbags to extract relevant information.
In one embodiment, systems, methods, computer programs, and user interfaces are provided to generate and display targeted information related to collaboration between multiple robots in the operating environment. An input is received that a second robot (e.g., an AGV) is available to assist a first robot (e.g., a forklift) from the plurality of robots. In response to the received input, a Snapshot related to a behavior being executed by the first robot (e.g., the forklift executing the task alone) is closed. A new Snapshot is then created related to at least one of a plan, a behavior, and a plantype being executed by the first robot with the assistance of the second robot. Summary information is then populated in a related field with a value indicating that both robots are executing the behavior. The robot IDs and task details of both robots are updated in the task-allocation-related field. In response to updating the robot IDs and task details, the gain achieved due to the second robot assisting the first robot is verified. In response to the verification, the gain-related field is populated with a value indicating the performance of the execution of the behavior. The collaboration between the two robots may be handled by the system with nodes running on at least one or more robots.
In one embodiment, systems, methods, computer programs, and user interfaces are provided to generate and display targeted information related to collaboration between multiple robots in the operating environment. The locations of one or more robots in the operating environment (for example, aisle rows, dynamic zones, narrow corners, charging points, etc., in a warehouse) and task-related information (for example, pallet pickup or drop, assisting robots, etc.) are populated in the relevant fields of the Snapshot. A confidence score is then provided as the value of a field related to the result of the execution of the behavior, wherein the confidence score is represented on a scale indicating success or failure. Furthermore, the distance traveled and the orientation of the second robot, which can assist the first robot, are captured. In response to the capturing, the orientation and the distance traveled of the second robot are verified to enable proper alignment with the first robot before the fields of the new Snapshot are populated.
In one embodiment, systems, methods, computer programs, and user interfaces are provided to generate and display targeted information related to collaboration between multiple robots in the operating environment. The system receives inputs related to at least one of a behavior, a plan, and a plantype executed by the plurality of robots within a subset of a time-window, wherein the time-window comprises the time spent by the plurality of robots working in the operating environment, and reports values of fields of the aggregated Snapshots as part of the targeted information within the subset of the time-window and related to at least one of the behavior, the plan, and the plantype executed by the robots. Furthermore, the system receives input related to at least one of a behavior, a plan, and a plantype executed by the first robot or the second robot, and reports values of relevant fields related to the received input, either to fix errors related to debugging or to measure the performance gain of the first robot or the second robot.
In one embodiment, systems, methods, computer programs, and user interfaces are provided to display targeted information related to collaboration between multiple robots in the operating environment: receiving an error related to the assistance of the second robot to the first robot; identifying the faulting robot based on the values in the aggregated Snapshot, wherein the second robot is the faulting robot; automatically initiating a repairing process to verify whether another robot is available to assist the first robot; and, based on availability, retriggering at least one of a plan, a behavior, and a plantype to be executed by the other robot. The repairing process may be handled collaboratively by one or more nodes running on one or more robots and/or server nodes running on one or more servers.
The embodiments disclosed herein will be better understood from the following detailed description with reference to the drawings, in which:
The embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques are omitted so as to not unnecessarily obscure the embodiments herein. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein may be practiced and to further enable those of skill in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
The non-limiting embodiments herein refer to the terms ‘plan’, ‘agent’, ‘behavior’, collaboration between homogeneous and heterogeneous mobile robots, system-level components, robot-level tasks, execution of plans or behaviors, etc., which are described in the co-pending U.S. patent application Ser. No. 16/994,556, filed Aug. 15, 2020, which is incorporated herein by reference in its entirety.
There is a need for a live debugger, tracer, recommendation system, pattern recognizer, and other related systems that can diagnose or fix the problems faced by different types of robots in an operating environment, or provide optimization information that can improve productivity or find solutions related to the robots working in the operating environment. Currently, for such systems, providing targeted, granular, contextualized, and relevant information to an end-user is a challenge.
The present invention solves a technical problem in the field of robotics systems (e.g., live debuggers, tracers, recommendation systems, pattern recognizers, optimizers, etc.). In existing systems, when a failure occurs, the robotics system streams world model information as-is, and the systems extrapolate the information from the world model. Such systems become difficult to scale as the number of robots increases, for example, to 100 robots. In such scenarios, it becomes difficult to analyze the world model and, later, use the world model data to identify and fix the issue behind the failure. These systems simply report the world model, which contains enormous amounts of information, resulting in the streaming of huge volumes of data. The existing systems then perform a postmortem on the bulk data and are not in a position to provide relevant solutions. In addition, a robotics expert has to spend time understanding the details of this data and giving it meaning before attempting to identify and fix the problem. In contrast to existing systems, the present invention provides technical solutions that include providing a granular level of targeted and contextual diagnostic data that may be used for tracing and real-time debugging. Consider a scenario where a Behavior X (e.g., navigation of a robot) fails, does not succeed, or takes too long; in the present invention, the investigation is reduced to analyzing the operation and checking the instances when the failure happened. The end-user need not worry about volumes of irrelevant data but can focus on the specific robot behavior that failed. The system provides a summary of the failure at the robot's behavior level. The system thus exposes the actual event that led to the failure rather than forcing the user to compare the world model or spend time analyzing irrelevant data.
Furthermore, the system provides other technical solutions. In existing systems, a Snapshot is created and then closed, and only once the Snapshot is closed does it get reported to the Trace Collector node. The problem with this approach is that if there is a long-running behavior or plan and a system crash, an error, or a hard reboot occurs, such information is never relayed or exposed to the end-user. This indicates that the existing systems are not suitable for real-time debugging; they provide only a post-mortem kind of analysis, which is not useful. The present invention, however, provides real-time debugging of events. As the events are reported, the system provides a hierarchy of events to support the real-time debugging feature.
In addition, the system provides a direct representation of all events reported in the operating environment, with additional context including where the error was reported and why the error was reported. This information may be integrated with environment-related events that are network-friendly and can be reported and collected, which may be used to build the entire history. A lot of the user's (e.g., warehouse manager, software developer, system integrator) time is saved, as the user need not spend time integrating logs or rosbags, or time-stamping the various events; the system provides all the relevant details in a single dashboard for simplified access.
The term “plan” can be defined in multiple ways, and nowhere should it be construed as restrictive. The simplest way to describe a plan is as a list of behaviors to be executed one after the other. It is a structure that is used to formulate more complex recipes or sub-plans. There are various representations of such recipes or policies, which extend this simple description. The basic concept of plans is strongly related to finite automata, which allow the formulation not only of sequences of actions but also of loops and conditional execution. Hence, a plan p in the set of plans P is a structure of states and the transitions between them. In one of the embodiments, a plan ‘charging’ may contain two tasks: one for a set of robots to queue for charging and one for another set of robots to dock and get charged. In one embodiment, a plan may also include a “robot behavior”. A robot behavior is a low-level atomic activity, within a plan, executed by a robot under certain conditions. In another embodiment of the invention, the ‘charging’ plan includes three states: a robot waiting state, a robot docking state, and a robot charging state. In one embodiment, the plan is a collection of states that represent stages during the execution of the plan.
A plan tree may include plans such as a charging plan, navigation plan, autonomous mode plan, pick-and-place objects plan, lifting plan, etc. The robots are shifted as per the states of the plan. Sometimes, robots may be in a ‘charging’ state and later be pulled out of the ‘charging’ state and shifted to an ‘autonomous mode’ state of carrying and picking a pallet. So, for these two plans, the ‘charging’ plan and the ‘autonomous pick and carry’ plan, a sub-plan called the ‘navigation’ plan may be used. A sub-plan is a portion of the plan that achieves one or more goals, or a portion of the goal to be achieved by the plan.
Each pair of states in the plan may be linked by a transition. A transition is a condition that needs to be satisfied in order for a robot to traverse from its current state to the next state. In each state, one or more robots execute a set of sub-plans within the plan. The one or more robots may transition to the next state after successful execution of the sub-plans, when the task fails, or when execution of the sub-plans becomes irrelevant. The transition condition may be defined as a boolean expression over the different plan variables and plan variable values that needs to be satisfied in order to transition to the next plan state.
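As a hedged illustration of the description above, a plan may be sketched as a small finite automaton whose transitions are boolean conditions over plan variables. The state names, variable names, and layout below are hypothetical examples, not the actual plan representation.

    # Illustrative sketch: the 'charging' plan as states plus guarded transitions.
    charging_plan = {
        "states": ["waiting", "docking", "charging"],
        # each transition is (from_state, to_state, condition), where the
        # condition is a boolean expression over plan variables
        "transitions": [
            ("waiting", "docking", lambda v: v["dock_free"]),
            ("docking", "charging", lambda v: v["docked"]),
        ],
    }

    def next_state(current, plan_vars, plan):
        # traverse to the next state only when a transition condition holds
        for src, dst, cond in plan["transitions"]:
            if src == current and cond(plan_vars):
                return dst
        return current  # no condition satisfied; remain in the current state

    print(next_state("waiting", {"dock_free": True}, charging_plan))  # -> docking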
The system may include a plan execution engine that further includes logic for executing a plan and allocating tasks to one or more heterogeneous robots and/or servers. A plan may include several sub-plans that in combination form the plan. A robot behavior is a low-level atomic activity, within a plan, executed by a robot under certain conditions. For example, a “robot charging” plan may include three states: a robot waiting state, a robot docking state, and a robot charging state. Consider that a user has developed a plan and a task allocation strategy for execution by a team of Automated Guided Vehicles (AGVs) and by a team of forklifts. The plan includes functionalities like route planning, navigation, local obstacle avoidance, and LiDAR scanning, which may be common across a team of forklifts, while some functionalities may differ between forklifts. In a warehouse environment, the system, with the help of the plan execution engine, handles the coordination between heterogeneous devices, for example, a team of AGVs assisting a team of forklifts.
The Distributed task allocation component 110 determines the allocation of different tasks to different robots. Task allocation to robots may be determined at different stages of plan execution or in real-time; for example, when any of the robots assigned to a particular task breaks down, the Distributed task allocation component 110 has to reassign the tasks to other robots that are available (have not broken down) and are capable of executing the task. The Trace reporter 115 reports the traces, which are collected and stored at the Trace Collector node 131 of server 130. Similarly, the Rosbag recorder 119 and Logs recorder 122 report the rosbags and logs, which are collected at the server end by the Rosbag collector node 134 and Log collector node 137, respectively. The rosbags and logs are stored at the server end and are always available for analysis by the system 100.
In one embodiment, the Rosbag recorder 119 and Logs recorder 122 increase the capability of the system to generate targeted information and enhance the debugging experience. For debugging, consider a scenario where a robot route trace request has some error and the user is interested in downloading only the rosbag and logs concerned with the request. The UI module 141 provides a download option via which a user can download the rosbag and/or logs. The server 130 provides a server node, the Intelligent filtering module 140, that communicates with the UI module 141. For given trace data or targeted information, the Intelligent filtering module 140 allows a user to select a time window for retrieving logs in the input time range. From a contextual perspective, module 140 may allow a user to download all the rosbags and logs for a trace that may involve multiple robots, servers, processes, and different machines within the system. Further, module 140 may allow the system to classify the different logs and rosbags to identify which trace reports to which machine and for what time duration. After the identification process, module 140 automatically downloads the logs and rosbags. For example, if there are 3 robots, the UI module 141, driven by the Intelligent filtering module 140, may automatically enable the user to download 3 rosbag files for 3 different robots, servers, etc., for different durations. Similarly, the UI module 141, under control of the Intelligent filtering module 140, may perform similar functionality for logs. In existing systems, if an error related to the robot's routing process happens, the user cannot use the relevant rosbags or logs for debugging errors that may occur on multiple robots. The present invention overcomes this problem and provides the user a rich debugging experience through a single dashboard for downloading the relevant rosbags and/or logs for the errors that occur while the robots are working in the operating environment.
In one embodiment, the targeted information may also include details of the robot, for example, machine ID, process name, hostname, etc. This information is available to the robot as part of the targeted information and may also be stored on the server for additional functionalities. The Intelligent filtering module 140 may use the hostname, identify and retrieve the relevant rosbag for the hostname, and filter the relevant information for the duration during which the trace was active. The present invention is not limited to this example of filtering or of utilizing rosbags for filtering the trace data. The server 130 provides different nodes where relevant data (e.g., rosbags/logs) and device context information (machine ID, hostname, process name, etc.) are collected and stored. The Intelligent filtering module 140 filters the relevant targeted information and provides the output to the user via the UI module 141.
In one embodiment, the trace data has device context information that is also used in the rosbags. This information is used to filter the appropriate rosbags for a specific robot. The rosbags maintain information on the timestamps, for example, at which time the data was collected. The system has the relevant information, such as the timestamps (start time to end time) when the trace was active. The filtering module 140 can filter the relevant rosbag data based on the timestamps and the device context information.
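A minimal sketch of such filtering, assuming a simplified record layout (hostname, timestamp, topic) rather than the actual rosbag format, is given below; it only illustrates the idea of intersecting the trace's active window with the device context.

    # Hedged sketch of filtering rosbag-like records by device context and
    # by the trace's active window; the record layout is an assumption.
    def filter_rosbag(records, hostname, start_time, end_time):
        """Keep records from the given host within [start_time, end_time]."""
        return [
            r for r in records
            if r["hostname"] == hostname
            and start_time <= r["timestamp"] <= end_time
        ]

    records = [
        {"hostname": "forklift-121", "timestamp": 30.0, "topic": "/scan"},
        {"hostname": "agv-434", "timestamp": 40.0, "topic": "/odom"},
        {"hostname": "forklift-121", "timestamp": 95.0, "topic": "/scan"},
    ]
    # keeps only the first record: right host, inside the 0-60 s trace window
    print(filter_rosbag(records, "forklift-121", 0.0, 60.0))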
Initially, the DebTrace Engine 202 triggers an event (“CreateSnapshot”) 208, which has parameters like “Operation”, “Parent”, etc. The Operation parameter indicates the behavior that was started. The system also treats a Snapshot as an event. Consider a “ParentPlanContext” as the parent for Behavior X. From the ParentPlanContext, the Rule Book module 201 creates a DebTrace Basic Behavior X instance 202. The DebTrace Engine 202 may then communicate 208 to a Real-time Tracing client module 204 that an event has been triggered. The metadata around the behaviors includes, for example, answers to queries such as: what is the frequency, which state is the robot running in, and what is the entry point? The key-value pairs received by the Tracing client module 204 are left empty or blank (e.g., Operation=< >, Parent=< >) so that the Tracing client module 204 may then populate them with the related metadata. For example, the Tracing client 204 may add to the metadata of the Snapshot a unique ID as the Snapshot ID, an agent_id that includes the identifier of the agent reporting the details, the start time of the event, etc. This populated information is shared 209 with the Trace Reporter module 205. The entities from the DebTrace Engine 202 to the Trace Reporter 205 may be considered robot modules that are part of the system boundary level (shown as the process boundary in the robot 101), while the entity Trace Collector module 206 may run as part of the back-end, DebTrace Server 119, which may be in the form of cloud, beacons, edge servers, etc. The UI module 123 may communicate with the Trace Collector 206 to collect the traces. So, at a high level, the process flow may begin with the activation of one or more behaviors; then, the metadata around the behaviors is shared with the Real-time Tracing client, and the behavior with updated metadata is reported to the Trace Reporter. The detailed, contextual, and/or targeted information may then be exposed to the UI module 123 as per the system or end-user requirements.
In one embodiment, after a behavior is initiated, the DebTrace Engine 202 repeatedly calls the API run( ) 211 in a loop 210 for the application-level code, represented by the entity DebTrace Application 203. Throughout this phase, the DebTrace Application 203 may add custom information by invoking 212 AddTags( ). For example, the system may compute at least one or more parameters, such as the speed with which the robot is navigating, and may add the maximum or minimum speed, any other metadata the system is planning to capture, any other route that is going to be taken by the agent, etc. The system 200 enables the DebTrace Application module 203 to share large chunks of information, which are exported 213 to the Trace Reporter 205. The example tags added while exporting 213 data by invoking AddTags( ) include snapshot_id along with custom tags added by the application 203. This process is repeated iteratively 210 until the DebTrace Engine terminates 214 the behavior by invoking 215 CloseSnapshot(snapshot_id). After the termination 214, the DebTrace Engine 202 adds additional tags, for example, success or failure for the specific behavior, and reports 215 the information to the Real-time Tracing client module 204. The Tracing client module 204 may then close 216 the Snapshot. The system may collect a set of such Snapshots, and the aggregated Snapshots 217 are then reported 218 to the Trace Collector module 206.
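The sequence above may be summarized by the following self-contained sketch. The function names mirror the APIs named in the text (CreateSnapshot, AddTags, CloseSnapshot), while the bodies and the tag values are illustrative assumptions only.

    # Sketch of the CreateSnapshot -> run()/AddTags() loop -> CloseSnapshot flow.
    snapshots = {}

    def create_snapshot(snapshot_id, operation, parent):
        snapshots[snapshot_id] = {"Operation": operation, "Parent": parent, "Tags": {}}

    def add_tags(snapshot_id, **tags):
        snapshots[snapshot_id]["Tags"].update(tags)  # application-level custom info

    def close_snapshot(snapshot_id, result):
        snapshots[snapshot_id]["result"] = result    # e.g. success or failure

    create_snapshot("snap-1", operation="BehaviorX", parent="ParentPlanContext")
    for step in range(3):  # the engine repeatedly calls run() while the behavior is active
        # the application computes parameters, e.g. the robot's navigation speed,
        # and relays them as custom tags
        add_tags("snap-1", max_speed=1.4, step=step)
    close_snapshot("snap-1", result="success")
    print(snapshots["snap-1"])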
In the sequence diagram, the Real-time Tracing Client module 204 is shown creating 303 a Snapshot by invoking CreateSnapshot( ) with parameters as explained earlier. Once the plan is activated, the DebTrace Engine 322 calls the DebTrace Application (Plan A) 323 by invoking 304 Initialize( ). The DebTrace Application may add 305 custom tags via AddTags( ), and the Real-time Tracing Client module 204 may populate 306 the tags with snapshot_id and other custom tags. While the agents are executing the plan, they are also continuously evaluating 307 task assignments so that they can perform the various tasks assigned to them. The DebTrace Engine 322 adds additional metadata that includes entry points in the task mapping, utility functions for evaluating the desired frequency the user wants to diagnose, etc. As an example, consider that there are 10 agents in the current plan A and 3 entry points to the current plan; the query for the system may then be to provide details on which agent will be going into which entry point. This function is taken care of by the DebTrace Engine 322 by continuously evaluating 307 the task, and whenever there is a task assignment change, the DebTrace Engine 322 creates 308 another Snapshot that says plan A has an assignment change. The parent parameter contains the snapshot_id of the parent to maintain the hierarchy of the Snapshots. The other relevant information may relate to the number of robots that were executing plan A along with the current robot; for example, the agents 232, 111, and 434 were also executing plan A when the task assignments changed. So, the agents are considered teammates in the plan. In addition, the active_entrypoint parameter includes the active entry point to which the agent got assigned. The next parameter may be new_allocation, which includes the agent who is working on a specific entry point. The forklift ID and the AGV ID are updated, and the task details are updated in the task allocation-related field. The system also populates the gain achieved by the re-assignment in the parameter utility_increase, based on verification of the gain achieved due to the AGV assisting the forklift. The details related to these parameters will be explained with suitable non-limiting examples.
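As a non-limiting illustration, a Snapshot recording such an assignment change might carry fields like the following; the layout and values are hedged assumptions based on the parameters named above.

    # Hypothetical layout of a task-assignment-change Snapshot for plan A.
    assignment_snapshot = {
        "Operation": "PlanA_AssignmentChange",
        "Parent": "snap-plan-a",                   # snapshot_id of the parent
        "Tags": {
            "teammates_in_plan": [232, 111, 434],  # agents also executing plan A
            "active_entrypoint": "EntryPoint2",    # entry point this agent got
            "new_allocation": {"EntryPoint2": 111},
            "utility_increase": 0.35,              # verified gain from re-assignment
        },
    }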
The flow from the Real-time Tracing client module 204 to the Trace Reporter module 205 in the sequence diagram is an example of simple reporting. The Real-time Tracing client 204 populates 309 the relevant fields of the Snapshot, as part of reporting, with snapshot_id, agent_id, start_time, etc., in the Tags field. The DebTrace Engine 322 sends a communication request 310 to close the Snapshot by invoking CloseSnapshot( ). The Snapshot is closed when the task assignment is acknowledged with an update on the status of the task. For example, the status update may be whether the task was honored or whether there was a conflict. If there is a conflict, a summary is generated, for example, that among the teammates, agent 434 disagreed. In due course, this status gets reported 311 by the Real-time Tracing client module 204 to the Trace Reporter module 205. At some stage during this process, the parent plan may get terminated 312, which leads to the invocation 313 of CloseSnapshot( ), and a tag with either a success or failure result is generated and populated in the said Snapshot. The parent Snapshot is then closed by the Real-time Tracing client module, and the Snapshots are aggregated 315. The aggregated Snapshots are then reported 316 by the Trace Reporter module 205 to the Trace Collector module 206.
In one embodiment, the system generates live targeted information as a diagnosis or solution for a fault, an error, or any activity that may have occurred in the operating environment. The DebTrace Engine 109 also outsources some of its functionality to other nodes, like external tracing clients 113, 114, etc., for generating a granular level of targeted information. Consider a scenario where there are 4 agents running on their respective forklifts executing a plan and, at a particular point in time, the system decides that the forklifts need to switch places. Further, consider that Forklift 1 is working on pallet 1001, Forklift 2 is working on pallet 2002, etc., and then the forklifts do the switch. Now, if the user wants to trace and debug, either later or during the live switching, the system generates and displays targeted information. When the switch happens, the system retrieves contextual information from the plurality of robots related to the operating activities on the warehouse floor, for example, an accounting of why the switch happened, what the earlier assignment was, the new assignment, what the gain was, whether there were any conflicts, and other relevant information. For example, in a conflict scenario, it would be further relevant to have more granular information, such as knowing that one of the agents, for example, agent 434, disagreed. This is targeted information for end-users to analyze and check what action agent 434 was performing. The system-reported targeted information may also indicate that agent 434 was not supposed to execute the specific plan. Based on this system-provided information, the diagnosis may be that the world model information (stored in World model 105) may not have been as per expectations. This kind of analysis is similar to a reverse analysis, where the end-user receives targeted information instead of large volumes of data and, based on the targeted information, may then decide whether there is a need to analyze the bulk data.
In one embodiment, consider that the UI module 123 displays an error for one of the 4 forklifts. The error message may be “Navigation plan timeout”, as included in the ‘detail’ parameter; hence, the forklift couldn't navigate. The metadata includes other details, like which system component reported the error (e.g., Distributed task allocation 110), the error code (e.g., 643), the agent_id (e.g., 1001), and the action that was taken in response to the error (e.g., 1), for example, “Retry” or “Abort”. In addition, the other information includes map_id, which indicates the map of the warehouse where the robot was operating. All this is targeted information for the end-user instead of irrelevant bulk data. Further, the information also includes contextual information, for example, specific information like the Navigation plan under ‘Task’ instead of just saying that there was an error. This parent-child hierarchy relationship is maintained, which brings in all the contextual information while diagnosing the error. Had the error information been only about ‘Navigation plan timeout’, the user would not get the necessary context, like what the robot was doing in the specific zone or portion of the warehouse. With the UI module 123, the system may provide the end-user with detailed contextual information. The UI module may be designed to include various interfaces like panes, visual representations like heat maps, drop boxes that may include relevant fields, and menus to select various functionalities, for example, the debugger, recommender, tracer, plugins, libraries, rosbags, etc. The visual interface may provide information related to the Task when the user clicks the collapsible arrow related to the Task. On clicking the arrow, the Task-related information is revealed. For example, snapshot_id provides details of the Snapshot being performed by the agent, map_id provides map-specific information, and the result parameter provides detail related to the result of the Snapshot. The Snapshot information further includes the type of Snapshot, denoted by the ‘type’ parameter, and the destination the agent was asked to navigate to. All this relevant information is provided organically by the system without any inputs from the application or the designers.
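An illustrative, hedged reconstruction of such error metadata, using the parameters named above with invented values and partly assumed field names, is:

    # Hedged reconstruction of the error metadata described above.
    error_report = {
        "detail": "Navigation plan timeout",
        "component": "Distributed task allocation 110",  # who reported the error
        "error_code": 643,
        "agent_id": 1001,
        "action": "Retry",          # response taken, e.g. "Retry" or "Abort"
        "map_id": "warehouse-a",    # map where the robot was operating
        "Task": {                   # parent-child hierarchy keeps the context
            "snapshot_id": "snap-nav-7",
            "type": "Plan",
            "destination": "zone-0/aisle-1",
            "result": "timeout",
        },
    }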
In one embodiment, the system provides a heat map of tasks as a visual representation of trace data. The radius of a circle indicates the time taken by the agent to execute the task; a larger circle means more time taken. The Y-axis of the heat map may represent the duration, while the X-axis may represent the timeline. The UI module 123 also provides the user an option to select tasks and analyze their details. In addition, the user can narrow or filter to a specific task, for example, “Pickup”, and input a search query to identify the locations where most of the pallet pickups have been completed by the robots. On clicking the task “Pickup”, the UI displays metadata related to the task “Pickup”. The visual interface may also include specific information, for example, the location in the operating environment where the pallet pickup may have taken place.
In one embodiment, the system may be used for bottleneck identification, real-time or live debugging, and identifying the context related to an error. The system provides critical information to an end-user regarding when an error can occur, where the error occurred, why the error occurred, and details related to the error. The system is robust enough to provide recommendations to an end-user related to diagnosing an error or fault and to optimizing the system to improve productivity.
In one embodiment, the system enables the integration of multiple modules to provide a rich experience for the user to diagnose an error in real-time. Consider a scenario where a task is reported that started at a certain time, say 10:30 am on Jan. 12, 2021, and finished after a certain interval, say 11:25 on Jan. 13, 2021. The user may be interested in analyzing the rosbag during the specified interval. So, the system allows a user to select the time interval, for example, when an error was reported. The user can then use the heat-map interface and select a particular task. After selecting the task, the system displays the errors that may have occurred. After receiving the user selection, the system integrates the rosbag collection and presents a debug interface. The user is allowed to download logs, rosbags, and other relevant information, as required, to diagnose the error in real-time. In one embodiment, the system provides a complete single dashboard where the user need not use any other modules to pull relevant information, as it is available in the same interface where the other events are reported. The modules may be related to rosbags and other logs that provide APIs, which may be used by the UI to extract the relevant information as requested by the user.
In one embodiment, consider a team of forklifts and AGVs working together to move a pallet. If an AGV is available, the forklift picks and places the pallet onto the AGV, and the AGV goes to its destination. At the destination, another forklift meets the AGV and unloads the pallet. However, if no AGV is available in the team, the forklift has to perform all the AGV tasks; the forklift may go and drop the pallet at its destination. This application has multiple aspects: task allocation, relating to which robot does which activity, and behavior coordination, relating to when the AGVs and forklifts are close to each other and both robots need to collaborate so that, if required, the forklift can pick and drop items onto the AGVs. The behavior coordination module 111 deals with the AGVs maintaining a certain distance threshold with the forklift so that the forklift can function with the AGV for picking and dropping items onto the AGV. A plantype includes a list of plans. The task assignment and utility functions help the system decide which plan has to be executed.
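By way of a hedged, non-limiting reconstruction, the Snapshot discussed in the following paragraph might read as shown below; the layout and values are assumptions assembled from the fields enumerated there.

    # Hedged reconstruction of the DropPallet plan-type Snapshot.
    drop_pallet_snapshot = {
        "Operation": "DropPalletpt",
        "Parent": "MovePallet",
        "ActivatedInState": "DropPallet",
        "Type": "PlanType",
        "Tags": {
            "reason_for_activation": "transition_condition",
            "activation_summary": "PickPallet succeeded; transitioned from PickPalletpt",
            "teammates_in_plan": [121],
            "app_tag_pallet_id": "1001",  # custom tag added by the application
            "agent_id": 121,
            "snapshot_id": "snap-42",
            "start_time": "2021-01-12T10:30:00",
        },
    }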
In the said Snapshot, the Operation field indicates the plan type, DropPallet plan type 406, which is “DropPalletpt”. The Parent field indicates the parent of the DropPallet plan type 406, which is the MovePallet plan 401. The ActivatedInState field indicates the state activated by the system when the Snapshot was created, which is the DropPallet state 404. The Type field denotes that the type is a PlanType instead of a plan or behavior. The other useful information is captured in the Tags field. The first field, ‘reason_for_activation’, gives insight into the reason for activating the DropPallet state 404. The value of this field indicates that the forklift arrived at this state due to the “transition_condition.” The next field, ‘activation_summary’, denotes that the previous child, PickPallet 403, was successful, and hence the forklift transitioned to the DropPallet state 404 after the successful transition from PickPalletpt 405. The next field, ‘teammates_in_plan’, indicates the IDs of the agents executing the current plan, which in this example is the agent with ID “121”. The next field, ‘app_tag_pallet_id’, is a custom tag added by the application. When the system activates the plan, the application can add its own custom information; for example, for the said field, the pallet ID is added as custom information. It is understood that the preceding field information is provided by the system; however, the field ‘app_tag_pallet_id’ is provided by the application as additional information useful while debugging. This allows the system to be robust and flexible enough to be customized by the application as per the needs of an end-user or a debugger. The other fields, like agent_id, snapshot_id, and start_time, are provided by the system as part of the Snapshot generation. It is understood that the procedure includes CreateSnapshot( ) and CloseSnapshot( ). CloseSnapshot( ) gives the result, which includes details on the final state of the process, whether success or failure. When CreateSnapshot( ) is executed, the start_time field is captured by the Tracing Client, and when CloseSnapshot( ) is executed, the end_time field may also be populated for the duration for which the Snapshot was active.
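Similarly, a hedged reconstruction of the DropPalletAlone Snapshot discussed next, created when only one forklift is available, might read (values are illustrative):

    # Hedged reconstruction of the DropPalletAlone Snapshot described below.
    drop_pallet_alone_snapshot = {
        "Operation": "DropPalletAlone",
        "Parent": "MovePallet",
        "Tags": {
            "reason_for_activation": "task_evaluation",
            "activation_summary": "Only forklift 121 is available to execute the plan.",
            "teammates_in_plan": [121],
            "snapshot_id": "snap-43",
            "agent_id": 121,
            "start_time": "2021-01-12T10:35:00",
        },
    }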
In the Tags field, the reason for activation is denoted as “task_evaluation”, with the activation_summary field providing more contextual information: “Only forklift 121 is available to execute the plan.” This is the reason for the system to choose the DropPalletAlone state, as only one forklift, 121, is available to execute the task of picking and dropping the pallet. This is further detailed in the next field, where the forklift with agent_id 121 is the only agent in the plan. The system fills the remaining fields, like snapshot_id, agent_id, and start_time. This kind of non-limiting targeted information plays an important role for an end-user while debugging to fix failures or errors at run-time.
In one embodiment, while the forklift was executing the plan DropPalletAlone 407, consider a scenario where an AGV becomes available to assist the forklift. So, as soon as the AGV becomes free to help the forklift, the system triggers the closing of the Snapshot for the operation DropPalletAlone 407 and a new Snapshot is created for “DropPallet” with the updated reason for activation.
As shown in
It is understood from the above examples and
Both the forklift and the AGV may report different Snapshots for their respective behavior executions. For example, when the forklift finishes the PickPalletFromAGV behavior 416, the Snapshot may look as given below:
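(The snippet below is a hedged reconstruction assembled from the fields described in the following paragraph; the exact format and values are assumptions.)

    # Hedged reconstruction of the forklift's PickPalletFromAGV Snapshot.
    pick_from_agv_snapshot = {
        "Operation": "PickPalletFromAGV",
        "Type": "Behavior",
        "Tags": {
            "reason_for_activation": "transition_condition",
            "activation_summary": "MoveToDropPallet succeeded",
            "app_tag_location": "(12, 13)",  # where the pallet was picked up
            "app_tag_confidence": 0.92,      # how successful the pickup was
            "start_time": "2021-01-12T10:41:00",
            "end_time": "2021-01-12T10:42:10",
        },
        "result": "success",
    }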
In one embodiment, the type for the above Snapshot is “Behavior”, with the reason_for_activation field set to “transition_condition.” The summary indicates that the previous behavior, MoveToDropPallet 409, was successful, and the forklift arrived to execute the next state, PickPalletFromAGV 416. Note that multiple application-specific inputs are relayed that may help in real-time debugging. For example, the location where the pallet was picked up from the AGV is represented by the “app_tag_location” field, and the next field, “app_tag_confidence”, is filled to denote the result of the function, that is, how successful the pallet pickup was. This is updated by the system based on various parameters, like camera feed data, sensory inputs, etc., to decide whether the pallet is properly attached to the fork or not. The information in “app_tag_confidence” is a confidence score relayed by the application so that it may be used later while debugging or for machine learning or other recommendation systems. The next field gives a measure of whether the operation was successful or led to failure. The remaining fields are similar to those of other Snapshots, like start_time and end_time.
In one embodiment, the below code snippet represents the Snapshot for the AGV when it completes the AssistForklift behavior 418, related to assisting the forklift. Similar to the previous Snapshot, most of the information is filled; an additional field may be “app_tag_distance_travelled_during_assistance”. The field includes information related to the scenario wherein the AGV is assisting the forklift in picking pallets, for example, targeted information like the distance traveled by the AGV to assist the forklift. The value of 1 m, 23 degrees indicates that the AGV had to travel 1 m and rotate 23 degrees to align properly or to move around to assist the forklift. This is another example of information that the application may want to relay for debugging or for machine learning/recommendation systems.
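(A hedged reconstruction of such a snippet, based on the fields described above, is given below; the values and the empty Events list, discussed in the next paragraph, are assumptions.)

    # Hedged reconstruction of the AGV's AssistForklift Snapshot.
    assist_forklift_snapshot = {
        "Operation": "AssistForklift",
        "Type": "Behavior",
        "Tags": {
            # distance and rotation needed to align with the forklift
            "app_tag_distance_travelled_during_assistance": "1 m, 23 degrees",
            "start_time": "2021-01-12T10:40:30",
            "end_time": "2021-01-12T10:42:15",
        },
        "Events": [],  # time-stamped solver events may be appended here (see below)
        "result": "success",
    }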
In one embodiment, while the behavior related to the AGV assisting the forklift is active, represented by the AssistForklift behavior 418, the distributed solver module 112 of the system is triggered to provide solutions. Consider a scenario where the forklift is moving and picking the pallet from the AGV (PickPalletFromAGV behavior 416). The AGV may be expected to stay stationary while the pallet is being picked up, or to align appropriately with the forklift so that the forklift can pick the pallet accurately in case they are at awkward angles to each other, or the AGV is far away and the forklift is not in a position to pick up. In such situations, the distributed solver module 112 of the system is triggered. Considering the forklift's location and the AGV's own location, the solver module may determine the optimal location where the forklift must arrive, or whether it would be better for the AGV to stay at its current location. This is represented in the above block by the “Events” field. So, while the distributed solver module 112 is calculating, the system can add any number of “Events” fields to help an end-user in real-time debugging of the incidents or stories that may have happened during the interval of time. The system thus updates the Snapshot with continuous events carrying the results of the solver module. For example, agent 121 is 3 m away at location (12, 13), so, as per the distributed solver module's result, agent 121 is arriving within 2 m at (12, 11.2). Similarly, another event may be that agent 121's orientation is 23 degrees off at the goal, and the solver module's result is that agent 121 aligns to 90 degrees. These are non-limiting examples of the distributed solver module 112 that are logged as events captured at different time intervals, say at 12 sec and 18 sec, rather than as separate Snapshots. Snapshots are logical units that run for a certain period of time, are valid for a period of time, and can have their own metadata, as shown in the previous examples. Snapshots are a representation of operations, for example, AssistForklift 418. However, events happen quite frequently while an operation or a Snapshot is active. Events may include logs of incidents that need to be relayed or stored as useful information for debugging purposes.
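A hedged sketch of how such time-stamped events might be appended to the active Snapshot, rather than reported as separate Snapshots, follows; the helper function and event strings are illustrative assumptions.

    # Illustrative sketch: appending solver events to an active Snapshot.
    assist_forklift_snapshot = {"Operation": "AssistForklift", "Events": []}

    def add_event(snapshot, t, description):
        # events are logged within the active Snapshot, not as new Snapshots
        snapshot["Events"].append({"t": t, "event": description})

    add_event(assist_forklift_snapshot, "12 sec",
              "Agent 121 is 3 m away at (12, 13); solver: arrive within 2 m at (12, 11.2)")
    add_event(assist_forklift_snapshot, "18 sec",
              "Agent 121 orientation 23 degrees off at goal; solver: align to 90 degrees")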
In one embodiment, while creating Snapshots, the system is not limited to fields and values but may also include time-stamped events that may be captured during the working of the robot, for example, while the robot is navigating in the warehouse. The Snapshots are not limited to specific metadata or formats and may be customized depending on the different scenarios encountered in an operating environment. The targeted information may be more granular and provide solutions to queries like: what was being done within the warehouse environment when the Snapshot was being captured, and which checkpoint or position in the warehouse was cleared, and at what time, when the ‘Navigation’ behavior was being executed, etc. The system provides the granular targeted information based on the nodes running on one or more robots that work in collaboration and based on communication with the server nodes running on the servers. This targeted information may be visualized via a simple UI, command line, video encoder plugin, simulator interface, a replay feature using a video, etc.
In one embodiment, the system includes additional metadata for capturing time-stamped events in addition to the Snapshots. Consider that a trace is generated in a UI related to the routing of the robots in the warehouse, for which the nodes running on the robot may receive metadata (for example, route_response). The route_response metadata may include detailed, granular, and contextual information:
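(A hedged illustration of what such route_response event metadata might contain, based on the example in the following paragraph, is given below; the format and field names are assumptions.)

    # Hedged illustration of time-stamped route_response events.
    route_response = {
        "operation": "Navigation",
        "events": [
            {"t": "30 s", "event": "lock acquired for external device (e.g. rack, sheet shutter)"},
            {"t": "45 s", "event": "fifth checkpoint in the route cleared"},
            {"t": "55 s", "event": "tenth checkpoint in the route cleared"},
        ],
    }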
For example, suppose the system has acquired a log at 30-second periods over a 2-minute interval while navigation is active; the above log information may indicate that at 30 s, a lock was acquired for an external device, say a rack, sheet shutter, etc.; at 45 s, the fifth checkpoint in the route was cleared; and at 55 s, the tenth checkpoint in the route was cleared. Thus, time-stamped events within a high-level Snapshot are captured. In the above table for the ‘AssistForklift’ operation, the “Events” field is shown with similar log entries (e.g., 12 sec: Agent 121 is 3 m away, etc.) that are updated. The system adds more granular information in the form of ‘Events’ metadata.
In
In one embodiment, the user may click on any of the Snapshots and get details of the metadata. Now, consider that the “MoveToDrop” plan 506 was active, which means the Navigation plan was active, as all these behaviors are part of the Navigation plan. In the UI, the user can click on the Navigation plan to drop down a list of checkpoints, nodes, locations, etc. These are the various points or places in the warehouse through which the robot has to travel to arrive at the drop location while executing the “MoveToDrop” plan 506. The user can select any of the list items to debug. In addition, the user can also click on a specific plan, for example, the “MoveToDrop” plan 506, and the Snapshot details will be visually displayed with information like the path taken by the forklift and the route that was planned or assigned to the forklift, along with the timestamps, nodes, checkpoints, and other granular levels of detail that may be displayed to the user to fix issues while debugging. The user may also get a visual representation, for example, a map of the entire path taken by the forklift, on clicking the “MoveToDrop” plan. The route taken by the forklift may be shown on the map; for example, the user can select a time-window, say 11th March 23:00 to 12th March 02:00 hours, and trigger a query, say, what was a specific forklift (say, agent 123) doing within the given time-window? The user can fire such custom queries, and the system, on receiving these queries, may trigger various system modules to give the user a visual representation enabling debugging utilities. Another form of the query may be in a key:value form, and the user may be able to visualize what the agent (forklift, AGV, arm, AMR, other types of robots, etc.) was doing at a specific instant of time. Once the queries are fired, the relevant Snapshots may pop out. For example, for the Snapshot related to “PickPalletFromAgv” 510, the query may be at which location the pallet was picked, or, for the “DropPallet” Snapshot 505, at which location the pallet was dropped. The user may be provided either a list of Snapshots or the ability to fire custom queries, which will enable the user to fix issues while debugging and/or tracing, or measure the performance gain related to the assistance of the AGV to the forklift.
In one embodiment, in addition to the rosbag module that may be plugged into the system, the system also allows a logs module to be integrated so that logs can be collected and the system may provide the targeted information desired by the user related to the logs. The modules like rosbag, logs, etc. give a rich debug experience to the end-user to obtain targeted information.
In one embodiment, the system provides an intelligent filtering module as part of the system. The intelligent filtering module allows a user to filter the events related to targeted information not just by time but considering other factors as well. The factors may be the agents that a user (e.g., a robotics developer) is interested in, a spatial area in the operating environment that the user (e.g., a warehouse manager) may be focused on, etc.
In one embodiment, the system provides a ‘replay’ feature that includes a visualizer module allowing a user to play a video to trace the action performed by each agent at any specific time. The user can jump to any point in time within the entire event via a visualization interface. The visualization interface may be used to play a video or utilize a simulation interface to visualize the targeted information over a given interval of time, e.g., look at the status of the warehouse between 9 pm and 6 am.
In one embodiment, the system may also invoke a UI, for example, a heat map, in which the user may select the ‘DropPallet’ Snapshot, and the system then displays the drop location. The user can then click on the drop location or provide input in any other form (e.g., right-click) and view the heat map. The system displays the entire warehouse map and provides customized information like recent drop locations, frequent drop locations, drop locations having some pattern, drop locations on a given day, or drop locations near a particular zone of the warehouse, etc. Similarly, the system may customize and display different combinations of results in case the query is related to ‘pick locations.’ Another example may be where a user selects a ‘Navigation’ plan. As the ‘Navigation’ plan is itself huge and complex, there may be more customized search queries with preconditions, like:
Display the checkpoints where each robot was waiting; or
Display the checkpoints where a specific robot (say, agent 121) was waiting for other robots to pass, etc.
Based on the search query, the system may generate a heat map that may be visually displayed. In the heat map, the user may observe that there is a specific critical junction (e.g. Zone 0 aisle 1), where the robots are waiting for each other.
These are the sort of non-limiting examples that indicate the robustness of the system in allowing a user to retrieve a granular level of information in a visual or command-line representation. Such information may be useful for a user to make business and/or solution-centric decisions for improving performance and enabling cost savings. This visual representation may also be customized as per the user's needs by importing a plug-in that allows different representations to be stacked up or displayed within a simulated map, giving the user a real-world experience of the operating environment of the robots while debugging. The live traced data may be used by the user to improve warehouse layout conditions, configure robot design parameters, provide solutions to complex collaboration scenarios for system integrators, observe patterns in the behavior of robots, and improve performance based on historical data, etc.
The embodiments disclosed herein specify methods and systems for generating and displaying targeted information related to the navigation of a plurality of robots in an operating environment. Therefore, it is understood that the scope of the protection extends to such a program and, in addition to a computer-readable means having a message therein, such computer-readable storage means contain program code means for the implementation of one or more steps of the method when the program runs on a server, a mobile device, or any suitable programmable device. In one embodiment, the nodes running on one robot may distribute functionality to the nodes running on other robots and/or may collaborate among themselves and/or with the server nodes to implement the functionalities and embodiments described herein. The nodes running on the devices are implemented in at least one embodiment through, or together with, a software program written in, e.g., Very high speed integrated circuit Hardware Description Language (VHDL) or another programming language, or implemented by one or more VHDL or several software modules being executed on at least one hardware device. The hardware device can be any kind of device which can be programmed, including, e.g., any kind of computer like a server or a personal computer, or the like, or any combination thereof, e.g., one processor and two FPGAs. The device may also include means which could be, e.g., hardware means like an ASIC, or a combination of hardware and software means, e.g., an ASIC and an FPGA, or at least one microprocessor and at least one memory with software modules located therein. Thus, the means are at least one hardware means and/or at least one software means. The method embodiments described herein could be implemented in pure hardware or partly in hardware and partly in software. The device may also include only software means. Alternatively, the invention may be implemented on different hardware devices, e.g., using a plurality of CPUs.
The foregoing description of the specific embodiments will so fully reveal the general nature of the embodiments herein that others can, by applying current knowledge, readily modify and/or adapt for various applications such specific embodiments without departing from the generic concept, and, therefore, such adaptations and modifications should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for the purpose of description and not of limitation. Therefore, while the embodiments herein have been described in terms of embodiments and examples, those skilled in the art will recognize that the embodiments herein can be practiced with modification within the spirit and scope of the claims as described herein.
The present application is related to co-pending U.S. patent application Ser. No. 16/994,556, filed Aug. 15, 2020, the contents of which are hereby incorporated by reference herein in their entirety.