This disclosure generally relates to autonomous systems.
An autonomous system is a robot or machine that performs behaviors or tasks with a high degree of autonomy. An autonomous system is typically capable of operating for an extended period of time without human intervention. A typical autonomous system is capable of gathering information about its environment and traversing the environment without human assistance. Further, an autonomous system uses such information collected from the environment to make independent decisions to carry out objectives. Such autonomous systems may be well suited to tasks in fields such as spaceflight, household maintenance, waste water treatment, delivering goods and services, military applications, cyber security, network management, AI assistants, and augmented reality or virtual reality applications.
In general, the disclosure describes techniques for facilitating identification and explanation of behaviors of an autonomous system that may be perceived as unexpected or surprising to outside, human observers. Such behavior may be perceived as “unexpected” when it conflicts with the human observer's general background knowledge or history of interactions with the autonomous system. Humans judge mistakes by computer systems more harshly than mistakes by other humans. Further, errors in computer systems have a disproportionately large impact on the perceived reliability of the computer system. This negative impact on trust has particularly significant repercussions for human-machine teams, where the humans' trust in the autonomous systems directly affects how well the humans utilize the autonomous systems. The effect is particularly unfortunate when a human perceives an autonomous system to be misbehaving when in fact the autonomous system is behaving correctly based on conditions unknown to the human. Accordingly, techniques are disclosed herein for a self-reflection layer of an autonomous system that explains unexpected behaviors of the autonomous system.
In one example, a computing system includes an autonomous system configured to determine decisions for a task. The system further includes a self-reflection unit configured to generate a model for decisions determined by the autonomous system. Based on the model, the self-reflection unit may identify an action performed or decision determined by the autonomous system for the task that may be perceived as surprising or unexpected to a user. The self-reflection unit determines, based on the model, a context and a rationale for the unexpected action or decision. An explanation unit of the system outputs, for display to the user, an explanation for the unexpected action or decision comprising at least one of the context and the rationale for the unexpected action or decision to improve the user's trust in the autonomous system.
In one example, this disclosure describes a system for improving trust and collaboration with a user of an autonomous system by identifying actions of the autonomous system that may be unexpected to the user, the system comprising: the autonomous system executing on processing circuitry and configured to interact with the user and to determine actions for one or more tasks; a self-reflection unit executing on the processing circuitry and configured to generate a model of a behavior of the autonomous system; and an explanation unit executing on the processing circuitry and configured to identify, based on the model, one or more actions of the actions for the one or more tasks that may be unexpected to the user, wherein the explanation unit is further configured to determine, based on the model, at least one of a context and a rationale for the one or more actions, and wherein the explanation unit is further configured to output, for display, an explanation for the one or more actions comprising the at least one of the context and the rationale for the one or more actions to improve trust and collaboration with the user of the autonomous system.
In another example, this disclosure describes a method for improving trust and collaboration with a user of an autonomous system by identifying actions of the autonomous system that may be unexpected to the user, the method comprising: determining, by an autonomous system executing on processing circuitry and configured to interact with the user, actions for one or more tasks; generating, by a self-reflection unit executing on the processing circuitry, a model of a behavior of the autonomous system; identifying, by an explanation unit executing on the processing circuitry and based on the model, one or more actions of the actions for the one or more tasks that may be unexpected to the user; determining, by the explanation unit and based on the model, at least one of a context and a rationale for the one or more actions; and outputting, by the explanation unit and for display, an explanation for the one or more actions comprising the at least one of the context and the rationale for the one or more actions to improve trust and collaboration with the user of the autonomous system.
In another example, this disclosure describes a non-transitory, computer-readable medium comprising instructions that, when executed, cause processing circuitry of a system comprising an autonomous system to: determine actions for one or more tasks; generate a model of a behavior of the autonomous system; identify, based on the model, one or more actions for the one or more tasks that may be unexpected to a user of the autonomous system; determine, based on the model, at least one of a context and a rationale for the one or more actions; and output, for display, an explanation for the one or more actions comprising the at least one of the context and the rationale for the one or more actions to improve trust and collaboration with the user of the autonomous system.
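For illustration only, the following is a minimal sketch of how the three components summarized above (an autonomous system that determines actions, a self-reflection unit that models its behavior, and an explanation unit that flags and explains potentially surprising actions) might be wired together. All class, field, and method names below are hypothetical assumptions rather than part of the disclosed system.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Action:
    """One action determined by the autonomous system for a task."""
    name: str
    task: str
    context: dict = field(default_factory=dict)
    rationale: str = ""


class AutonomousSystem:
    """Stand-in for the autonomous system: determines actions for tasks."""

    def determine_actions(self, tasks: List[str]) -> List[Action]:
        # A real system would plan here; this stub returns one action per task.
        return [Action(name=f"execute:{t}", task=t) for t in tasks]


class SelfReflectionUnit:
    """Stand-in for the self-reflection unit: models system behavior."""

    def __init__(self) -> None:
        self.model: List[Action] = []  # declarative record of decisions

    def observe(self, actions: List[Action]) -> None:
        self.model.extend(actions)


class ExplanationUnit:
    """Stand-in for the explanation unit: flags and explains surprises."""

    def identify_unexpected(self, model: List[Action]) -> List[Action]:
        # Placeholder surprise test; classifiers are discussed later in the text.
        return [a for a in model if a.context.get("unusual", False)]

    def explain(self, action: Action) -> str:
        return f"{action.name}: {action.rationale or 'rationale unavailable'}"


if __name__ == "__main__":
    system = AutonomousSystem()
    reflection = SelfReflectionUnit()
    explanation = ExplanationUnit()

    actions = system.determine_actions(["rescue", "refuel"])
    actions[0].context["unusual"] = True
    actions[0].rationale = "diverting because of engine fire"
    reflection.observe(actions)

    for unexpected in explanation.identify_unexpected(reflection.model):
        print(explanation.explain(unexpected))  # shown to the user via a display
```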
The details of one or more examples of the techniques of this disclosure are set forth in the accompanying drawings and the description below. Other features, objects, and advantages of the techniques will be apparent from the description and drawings, and from the claims.
Like reference characters refer to like elements throughout the figures and description.
An autonomous system may exhibit behavior that appears puzzling to an outside observer. For example, an autonomous system might suddenly change course away from an intended destination. An outside observer may conclude that there was a problem with the autonomous system but, upon further inspection, the autonomous system may have good reason for its action. For example, the autonomous system may be reacting to an unexpected event or diverting to a higher-priority task. A straightforward user interface showing the current plans of the autonomous system may help somewhat to alleviate this problem. However, this may not assist end users who would be teaming with the autonomous system in the future. For example, autonomy will be especially useful in domains that are complex, where the complexity can derive both from a rich, dynamic operating environment (e.g., where the true state of the world is difficult to objectively determine) and a complex set of goals, preferences, and possible actions. For such domains, straightforward user interfaces likely may not help in understanding the rationale for an autonomous agent making a surprising decision.
Conventional explanation schemes are reactive and informational. In other words, in a conventional system, an explanation is provided in response to a human user initiating an explicit request for additional information. For example, a conventional system may provide explanations in response to specific user queries and focus on making the reasoning of an autonomous system more transparent. While reactive explanations are useful in many situations, proactive explanations are sometimes called for, particularly in mixed autonomy settings where, for example, close coordination is required and humans are engaged in tasks of their own or are supervising large teams.
Techniques are disclosed herein for an explanation framework based on explanation drivers, such as the intent or purpose behind autonomous system decisions and actions. The techniques described herein may provide explanations for reconciling expectation violations and may include applying an enumerated set of triggers for proactive explanations. For example, an autonomous system as described herein may generate proactive explanations that anticipate and preempt potential surprises in a manner that is particularly valuable to a user. Proactive explanations serve to keep the user's mental model of the decisions made by the autonomous system aligned with the actual decision process of the autonomous system. This minimizes surprises that can distract from and disrupt the team's activities. Used judiciously, proactive explanations can also reduce the communication burden on the human, who will have less cause to question the decisions of the autonomous system. By providing timely, succinct, and context-sensitive explanations, an autonomous system as described herein may provide a technical advantage of avoiding perceived faulty behavior and the consequent erosion of trust, enabling more fluid collaboration between the autonomous system and users.
Explanation for an autonomous system differs from explanation in conventional systems in a number of ways. The decision to be explained is typically part of a larger, orchestrated sequence of actions to achieve some long-term goal. Decisions occur at different levels of granularity, from overarching policies and strategic decisions down to individual actions. Explanation may be required for various reasons under different circumstances, e.g., before execution to explain planning decisions, during execution to explain deviations from planned or expected behavior, and afterwards to review agent actions.
In a collaborative human-machine team setting, whether humans serve as supervisors or as teammates, explanation during execution presents an additional challenge of limited cognitive resources. With the human already engaged in a cognitively demanding task, system explanations must be succinct, timely, and context-sensitive. In particular, a human may ask for an explanation of the behavior of an autonomous system when the autonomous system has done something unexpected. The autonomous system must provide an explanation that adequately explains the behavior of the autonomous system.
One of the motivations for explanation by an autonomous system is surprise. When an autonomous system violates expectations of a human collaborator (which is typically perceived as undesirable behavior), the human collaborator wants to know the reason why. However, an autonomous system that could anticipate the surprise and proactively explain what the autonomous system is about to do may function better than conventional systems. For example, such an autonomous system that can proactively explain unexpected behavior may avert a potentially unpleasant surprise that distracts the human user and erodes trust. Such unexpected behavior may take the form of a planned action, currently executing action, or action that has previously been executed, that contradicts expectations of the user. As described herein, unexpected “actions” or unexpected “decisions” may be used interchangeably to refer to actions performed by the autonomous system that may be perceived by a user as unexpected or surprising given the user's expectations.
Autonomous system 120 is a robot, machine, or software agent that performs behaviors or tasks with a high degree of autonomy. Autonomous system 120 is capable of operating for an extended period of time without human intervention. In some examples, autonomous system 120 is capable of traversing an environment without human assistance. In some examples, autonomous system 120 includes one or more sensors for collecting information about its environment. Further, autonomous system 120 uses such information collected from the environment to make independent decisions to carry out objectives. In some examples, autonomous system 120 is an autonomous system for an unmanned aerial vehicle (UAV), drone, rover, or self-guided vehicle. In some examples, autonomous system 120 is another type of system that operates autonomously or semi-autonomously, such as a robotic system, healthcare system, financial system, or cybersecurity system.
In the example of
In some examples, task performance units 122 may specify user policies and preference criteria. A human may define policies and preferences to guide the behavior of autonomous system 120. In some examples, the user policies and preference criteria define strict policies and preferences that autonomous system 120 may not violate when carrying out a task or seeking an objective. In other examples, the user policies and preference criteria define soft policies and preferences that autonomous system 120 preferably should not violate when carrying out a task or seeking an objective, but may violate under certain conditions (e.g., to achieve a high-value objective or task).
In accordance with the techniques of the disclosure, computing system 100 includes reflection unit 110. Reflection unit 110 generates model 116 of a behavior of autonomous system 120 that exposes, at least in part, the decision making of autonomous system 120. As described below, model 116 is in turn used by anticipatory explanation unit 104 to identify actions of autonomous system 120 that may be perceived as unexpected or surprising by a user. Reflection unit 110 generates model 116 incrementally in real-time or near real-time, as autonomous system 120 operates, by constructing a declarative representation of knowledge, inferences, and decision factors that impact autonomous operation. Model 116 may assume that autonomous system 120 itself is organized around tasks, plays, actions, and roles.
Tasks are the objectives to be performed by autonomous system 120 as part of its autonomous operation. These tasks may be assigned by an external agent (a human, another autonomous system) or autonomous system 120 may determine through its own operation that certain tasks are desirable to achieve.
Plays encode strategies for achieving tasks. Autonomous system 120 may store one or more plays for achieving a given task, which may apply under different conditions or have differing effects. Autonomous system 120 may also have preferences associated with performing certain plays in certain contexts. A given play can be defined hierarchically, eventually grounding out in concrete actions that autonomous system 120 can execute to achieve or work towards a specific task. Plays may include variables that can be set so the plays apply across a broad range of contexts. Autonomous system 120 may specialize a play by selecting context-appropriate values for the play variables so as to execute a play to achieve a specific task for a specific situation.
Actions can either link to effectors of autonomous system 120 that operate in the physical world or can cause changes to the internal state of autonomous system 120, for example, by changing its current set of tasks or modifying its beliefs about the operating environment. Actions may include variables that can be set so that autonomous system 120 may apply the actions across a broad range of contexts. Autonomous system 120 may execute an action by specializing the action through the selection of context-appropriate values for the action variables.
For plays that involve coordinating multiple autonomous systems, roles may exist that are to be filled by one or more autonomous agents, such as autonomous system 120. For example, a play that has a team flying in formation may have a leader role and one or more follower roles. Autonomous system 120 may select a role based on information available to autonomous system 120 or may be assigned a particular role by another actor, such as an external autonomous system or a user.
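For illustration, the task, play, action, and role organization described above might be represented declaratively along the following lines. This is a sketch under assumed data structures; every class name and field is an illustrative choice rather than a required format.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class Task:
    """An objective adopted by the autonomous system."""
    name: str
    source: str = "self"          # e.g., "operator", "teammate", or "self"


@dataclass
class Role:
    """A role within a multi-agent play, e.g., leader or follower."""
    name: str
    assigned_to: Optional[str] = None


@dataclass
class ActionSpec:
    """A concrete, executable action; variables allow reuse across contexts."""
    name: str
    variables: Dict[str, object] = field(default_factory=dict)

    def specialize(self, bindings: Dict[str, object]) -> "ActionSpec":
        # Bind context-appropriate values to the action's open variables.
        return ActionSpec(self.name, {**self.variables, **bindings})


@dataclass
class Play:
    """A strategy for achieving a task, grounding out in concrete actions."""
    name: str
    achieves: str                                       # task the play serves
    conditions: Dict[str, object] = field(default_factory=dict)
    roles: List[Role] = field(default_factory=list)
    actions: List[ActionSpec] = field(default_factory=list)
    variables: Dict[str, object] = field(default_factory=dict)

    def specialize(self, bindings: Dict[str, object]) -> "Play":
        # Produce a situation-specific instance of this play.
        return Play(
            name=self.name,
            achieves=self.achieves,
            conditions=dict(self.conditions),
            roles=[Role(r.name, r.assigned_to) for r in self.roles],
            actions=[a.specialize(bindings) for a in self.actions],
            variables={**self.variables, **bindings},
        )


# Example: a formation-flight play with a leader role and a destination variable.
formation = Play(
    name="fly_in_formation",
    achieves="reach_area_7",
    roles=[Role("leader"), Role("follower")],
    actions=[ActionSpec("navigate", {"destination": None})],
)
instance = formation.specialize({"destination": (34.05, -118.25)})
```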
To generate model 116, autonomous system 120 may provide continuous tracking of several different types of information. For example, autonomous system 120 may track tasks that have been adopted, modified, or dropped, and the rationale for those changes. The rationale may include the source of a task as well as state conditions and other task commitments or dependencies that led to task adoption, modification, or elimination. Autonomous system 120 may additionally track a relevant situational context for achieving those tasks. The rationale may encompass both external state (e.g., factors in the operating environment in which the autonomous system operates) and internal state (e.g., internal settings within the autonomy model, commitments to tasks or actions), along with provenance information for changes in state. Autonomous system 120 may further track plays that were considered for tasks and classification of the plays as unsuitable, acceptable, preferred, or selected, along with rationale for those classifications. The tracked rationale records relevant constraints and preferences associated with plays and their status at the time a play is classified. Autonomous system 120 may further track specializations made to a play to adapt the play to a context in which the play is to be executed. Further, autonomous system 120 may track other autonomous systems that were considered for roles within a play along with an evaluation of their suitability for those roles and rationale for that evaluation. Thus, the rationale may record relevant constraints and preferences associated with the roles along with a status at the time the roles are assigned, as well as a quantitative score evaluating goodness of fit based on evaluation criteria defined within the play. Further, autonomous system 120 may track specific actions that were performed and the rationale for their execution. The rationale may record the play that led to the execution of the action and the higher-level tasks for which that play is being executed along with relevant specializations of the action to the current context.
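The continuous tracking described above might, for example, produce simple append-only records along the following lines; the record types and field names are assumptions chosen to mirror the categories in the preceding paragraph.

```python
from dataclasses import dataclass, field
from time import time
from typing import Dict, List


@dataclass
class TaskEvent:
    """A task that was adopted, modified, or dropped, with its rationale."""
    task: str
    change: str                         # "adopted", "modified", or "dropped"
    source: str                         # who or what originated the task
    state_conditions: Dict[str, object] = field(default_factory=dict)
    dependencies: List[str] = field(default_factory=list)
    timestamp: float = field(default_factory=time)


@dataclass
class PlayEvaluation:
    """A play considered for a task and its classification with rationale."""
    play: str
    task: str
    classification: str                 # "unsuitable", "acceptable", "preferred", "selected"
    constraints_status: Dict[str, bool] = field(default_factory=dict)
    specializations: Dict[str, object] = field(default_factory=dict)


@dataclass
class RoleEvaluation:
    """A candidate agent considered for a role, with a goodness-of-fit score."""
    role: str
    play: str
    candidate: str
    fitness: float                      # quantitative score from play-defined criteria
    rationale: str = ""


@dataclass
class ActionRecord:
    """An executed action, the play that led to it, and its parent task."""
    action: str
    play: str
    task: str
    specializations: Dict[str, object] = field(default_factory=dict)


class BehaviorModel:
    """Append-only model built incrementally as the system operates."""

    def __init__(self) -> None:
        self.entries: List[object] = []

    def record(self, entry: object) -> None:
        self.entries.append(entry)
```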
Reflection unit 110 may further generate, based on model 116, an action context 112 and an action rationale 114 for the action that may be perceived as unexpected by the user. Action context 112 may include information that provides a context for the action made by autonomous system 120 that may be perceived as unexpected by the user. For example, action context 112 may include a summary of information known to autonomous system 120 and used to determine the action that may be perceived as unexpected by the user. Action context 112 may include information related to an operational status of autonomous system 120, a fuel or power status of autonomous system 120, environmental or geographic information, such as terrain obstacles near or remote to autonomous system 120 or routes to a destination, or information related to a target, such as capabilities, location, vector, or defenses of the target, among other types of information used as input for autonomous system decision making.
Action rationale 114 may include information that explains the reasoning that autonomous system 120 used to select the action that may be perceived as unexpected by the user. For example, action rationale 114 may include rules, guidelines, directives, requirements, and/or user preferences, as well as route planning, role allocation, and task optimization criteria that, given action context 112, cause autonomous system 120 to select that action.
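Action context 112 and action rationale 114 might, for example, be packaged as structured summaries such as the following sketch; every field name here is an assumption chosen to mirror the examples above.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class ActionContext:
    """Summary of information the system used when selecting the action."""
    operational_status: str = "nominal"
    fuel_remaining_pct: float = 100.0
    environment: Dict[str, object] = field(default_factory=dict)  # obstacles, routes
    target_info: Dict[str, object] = field(default_factory=dict)  # location, vector, defenses


@dataclass
class ActionRationale:
    """Reasoning that, given the context, led the system to the action."""
    rules_applied: List[str] = field(default_factory=list)
    preferences: List[str] = field(default_factory=list)
    optimization_criteria: List[str] = field(default_factory=list)
    summary: str = ""


context = ActionContext(fuel_remaining_pct=22.0, environment={"headwind_kts": 35})
rationale = ActionRationale(rules_applied=["reserve 20% fuel margin"],
                            summary="Diverting to nearer refuel point due to headwind.")
```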
Computing system 100 further includes explanation management unit 102. Explanation management unit 102 includes anticipatory explanation unit 104, dialog management unit 106, and explanation generation unit 108. Explanation management unit 102 outputs, via user interface 150, an explanation for actions identified by reflection unit 110 as being potentially unexpected or surprising to a user. In some examples, the explanation includes at least one of a context and a rationale for the unexpected action.
Explanation management unit 102 uses model 116 to identify the unexpected action in a variety of ways. To anticipate surprise by the user, explanation management unit 102 may use anticipatory explanation unit 104 to generate an anticipatory explanation for the decision, designed to preempt and mollify that surprise. For example, anticipatory explanation unit 104 may use model 116 to identify decisions that are historical deviations from past decisions, decisions that arise as a result of unusual situations, decisions that may involve limitations on human knowledge, decisions that violate user preferences, decisions that arise due to multiple choices that have indistinguishable effects, decisions that result in a deviation from a plan, or decisions that result in indirect trajectories to a goal.
Anticipatory explanation unit 104 determines whether an action documented within autonomous system model 116 is likely to be perceived by a user as unexpected or surprising. Upon determining that the action is likely to be perceived as unexpected or surprising, anticipatory explanation unit 104 causes an explanation to be generated and offered proactively to avoid surprise. To determine whether an action is unexpected or surprising, anticipatory explanation unit 104 relies on a set of one or more classifiers. Each classifier determines whether an action is likely to cause surprise in the user. Classifiers represent different classes of surprises and may be manually engineered (e.g., a rule-based classifier, a probabilistic graphical model) or learned (e.g., a decision tree, a linear classifier, a neural net). A classifier receives as input an action, a context for the action, and a rationale for the action (as defined within autonomous system model 116). A classifier outputs a judgment of whether the action is surprising. The judgment may be binary (e.g., surprising/not surprising), ordinal (e.g., very surprising, somewhat surprising, . . . , not at all surprising), categorical (e.g., surprising to novices/experts/everyone, never/sometimes/always), probabilistic (n % probability of being surprising), etc. Given an assessment of sufficient surprise (e.g., upon determining that a probability that the action may cause surprise in the user exceeds a threshold), anticipatory explanation unit 104 generates an explanation drawing on the information in autonomous system model 116, action context 112, and action rationale 114.
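The classifier interface described above could be realized, for example, as follows: each classifier maps an action, its context, and its rationale to a probability of surprise, and an explanation is triggered when that probability exceeds a threshold. The classifier names, the simple rules inside them, and attributes such as "violated_preferences" are illustrative assumptions.

```python
from abc import ABC, abstractmethod
from typing import Iterable, Optional


class SurpriseClassifier(ABC):
    """Maps (action, context, rationale) to a probability of user surprise."""

    @abstractmethod
    def probability_of_surprise(self, action, context, rationale) -> float:
        ...


class PreferenceViolationClassifier(SurpriseClassifier):
    """Rule-based example: violating a stated user preference is surprising."""

    def probability_of_surprise(self, action, context, rationale) -> float:
        violated = getattr(rationale, "violated_preferences", [])
        return 0.9 if violated else 0.05


class UnusualSituationClassifier(SurpriseClassifier):
    """Example: the rarer the triggering conditions, the more surprising."""

    def probability_of_surprise(self, action, context, rationale) -> float:
        condition_frequency = getattr(context, "condition_frequency", 1.0)
        return max(0.0, min(1.0, 1.0 - condition_frequency))


def maybe_explain(action, context, rationale,
                  classifiers: Iterable[SurpriseClassifier],
                  threshold: float = 0.5) -> Optional[str]:
    """Return a proactive explanation if any classifier deems the action surprising."""
    surprise = max(c.probability_of_surprise(action, context, rationale)
                   for c in classifiers)
    if surprise < threshold:
        return None  # no proactive explanation needed
    return (f"Performing {getattr(action, 'name', action)}: "
            f"{getattr(rationale, 'summary', 'see rationale for details')}")
```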
To anticipate surprise by the user, explanation management unit 102 may generate an anticipatory explanation for the decision, designed to preempt and mollify that surprise. Explanation management unit 102 uses explanation generation unit 108 and dialog management unit 106 to craft explanations of actions that may be perceived as unexpected for display to the user. Explanation generation unit 108 may generate the explanations of actions that may be perceived as unexpected, as well as the context and rationale for the action that may be perceived as unexpected by the user. Dialog management unit 106 may include software structures for crafting coherent messages to the user. Dialog management unit 106 processes the explanations generated by explanation generation unit 108 to generate messages that are understandable by a human operator.
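A simple, template-based sketch of how explanation generation unit 108 and dialog management unit 106 might cooperate is shown below: the generator collects the relevant facts, and the dialog manager renders them as one succinct message. The templates and function names are assumptions, not a required implementation.

```python
from typing import Dict


def generate_explanation(action: str, context_note: str, rationale_note: str) -> Dict[str, str]:
    """Collect the raw ingredients of an explanation (explanation generation)."""
    return {
        "action": action,
        "context": context_note,
        "rationale": rationale_note,
    }


def render_message(explanation: Dict[str, str], max_len: int = 120) -> str:
    """Turn the ingredients into one succinct sentence (dialog management)."""
    message = f"{explanation['action']} because {explanation['rationale']}"
    if explanation["context"]:
        message += f" ({explanation['context']})"
    # Keep the message short so it does not overload a busy operator.
    return message if len(message) <= max_len else message[: max_len - 1] + "…"


print(render_message(generate_explanation(
    "Aborting rescue mission",
    "engine temperature above limit",
    "of engine fire")))
# -> "Aborting rescue mission because of engine fire (engine temperature above limit)"
```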
User interface 150 is configured to output, for display to a user, the explanation received from explanation management unit 102. In some examples, user interface 150 is or otherwise includes a workstation, a keyboard, pointing device, voice responsive system, video camera, biometric detection/response system, button, sensor, mobile device, control pad, microphone, presence-sensitive screen, network, or any other type of device for detecting input from a human or machine. In some examples, user interface 150 further includes a display for displaying an output to the user. The display may function as an output device using technologies including liquid crystal displays (LCD), quantum dot display, dot matrix displays, light emitting diode (LED) displays, organic light-emitting diode (OLED) displays, cathode ray tube (CRT) displays, e-ink, or monochrome, color, or any other type of display capable of generating tactile, audio, and/or visual output. In other examples, user interface 150 may produce an output to a user in another fashion, such as via a sound card, video graphics adapter card, speaker, presence-sensitive screen, one or more USB interfaces, video and/or audio output interfaces, or any other type of device capable of generating tactile, audio, video, or other output. In some examples, user interface 150 may include a presence-sensitive display that may serve as a user interface device that operates both as one or more input devices and one or more output devices.
In some examples, explanation management unit 102 uses surprise as the primary motivation for proactivity. For example, explanation management unit 102 may use potential expectation violations to trigger an explanation. To identify expectation violations, explanation management unit 102 may generate a model of the expectations of the user and of the behavior of autonomous system 120. However, instead of relying on a comprehensive formal model of the human's expectations based on a representation of team and individual goals, communication patterns, etc., explanation management unit 102 may identify classes of expectations based on the simpler idea of expectation norms. That is, given a cooperative team setting where the humans and autonomous system 120 have the same objectives, explanation management unit 102 determines expectations of the behavior of autonomous system 120 based on rational or commonsense reasoning. Explanation management unit 102 may use a set of triggers to determine when a proactive explanation is warranted. Further, explanation management unit 102 may consider, for each trigger, the manifestation of surprise, the expectation violation underlying the surprise, and the information that the proactive explanation should impart. The triggers described herein are not an exhaustive list but include a broad range that may be useful for providing proactive explanations.
The techniques of the disclosure may be used in various explainable autonomy formulations. In one example where autonomous system 120 selects, instantiates, and executes plays from a pre-determined mission playbook, reflection unit 110 may identify surprising role allocations based on assignment to suboptimal resources and use degree of suboptimality to drive proactivity. In another example where autonomous system 120 is a reinforcement learner that acquires policies to determine its actions, reflection unit 110 may use sensitivity analyses that perturb an existing trajectory to identify points where relatively small changes in action lead to very different outcomes.
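For the reinforcement-learning formulation, the sensitivity analysis mentioned above could, for instance, perturb one action at a time along an existing trajectory and flag the steps where the outcome changes sharply. The toy simulator below is purely illustrative; the interfaces and thresholds are assumptions.

```python
from typing import Callable, List, Sequence


def sensitive_steps(simulate: Callable[[Sequence[int]], float],
                    actions: List[int],
                    alternatives: Sequence[int],
                    threshold: float) -> List[int]:
    """Return indices where swapping a single action changes the outcome a lot."""
    baseline = simulate(actions)
    flagged = []
    for i in range(len(actions)):
        for alt in alternatives:
            if alt == actions[i]:
                continue
            perturbed = actions[:i] + [alt] + actions[i + 1:]
            if abs(simulate(perturbed) - baseline) > threshold:
                flagged.append(i)
                break
    return flagged


# Toy example: 1-D navigation where the outcome is the final position.
def toy_simulate(actions: Sequence[int]) -> float:
    position = 0.0
    for a in actions:          # each action moves the agent left (-1) or right (+1)
        position += a
        if position < -2:      # an absorbing "crash" state dominates the outcome
            return -100.0
    return position


plan = [-1, -1, 1, 1, 1]
print(sensitive_steps(toy_simulate, plan, alternatives=(-1, 1), threshold=10.0))
# -> [2]: a single action change at step 2 leads to a very different outcome (a crash)
```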
The techniques of the disclosure may in this way present a framework for explanation drivers, focusing in particular on explanations for reconciling expectation violations. An autonomous system may avert surprise in a human operator by providing an explanation and using an enumerated set of triggers for proactive explanations. The techniques of the disclosure may provide explanations that enable the appropriate and effective use of intelligent autonomous systems in mixed autonomy settings.
Accordingly, the techniques of the disclosure may provide the technical advantage of enabling a system to identify and output an explanation for autonomous system behavior, which may improve the ability of a system to facilitate a user's trust in the decisions made by an autonomous system by identifying decisions of the autonomous system that may be unexpected to the user and presenting a context and a rationale for the action that may be perceived as unexpected by the user. In this fashion, the techniques of the disclosure may increase the transparency of decisions made by the autonomous system to the user such that the user may have confidence that the autonomous system is making rational, informed, and optimized decisions, as opposed to decisions due to incomplete or incorrect information or due to defects or misbehavior of the autonomous system. Thus, the techniques of the disclosure may enable increased trust by the user in an autonomous system, enabling a greater perception of reliability of autonomous systems and more widespread adoption of such systems.
In the example of
Input devices 202 may include hardware and/or software for establishing a connection with computation engine 230. In some examples, input devices 202 may communicate with computation engine 230 via a direct, wired connection, over a network, such as the Internet, or any public or private communications network, for instance, broadband, cellular, Wi-Fi, and/or other types of communication networks, capable of transmitting data between computing systems, servers, and computing devices. Input devices 202 may be configured to transmit and receive data, control signals, commands, and/or other information across such a connection using any suitable communication techniques to receive the sensor data. In some examples, input devices 202 and computation engine 230 may each be operatively coupled to the same network using one or more network links. The links coupling input devices 202 and computation engine 230 may be wireless wide area network links, wireless local area network links, Ethernet, Asynchronous Transfer Mode (ATM), or other types of network connections, and such connections may be wireless and/or wired connections.
Computation engine 230 includes autonomous system 120, reflection unit 110, and explanation management unit 102. Each of components 120, 110, and 102 may operate in a substantially similar fashion to the like components of
In some examples, reflection unit 110 and/or explanation management unit 102 may be located remotely from autonomous system 120. For example, autonomous system 120 may include telemetry to upload telemetry data via a wireless communication link to a cloud-based or other remote computing system 200 that models a behavior of autonomous system 120, identifies actions of autonomous system 120 that may be perceived as unexpected, and generates an explanation for the actions that may be perceived as unexpected using the techniques described above.
In some examples, reflection unit 110 of computing system 100 generates model 116 of a behavior of autonomous system 120. Explanation management unit 102 identifies, based on model 116, an action determined by autonomous system 120 for a task that may be perceived as unexpected by the user. For example, explanation management unit 102 may use model 116 to identify decisions that are historical deviations from past decisions, decisions that arise as a result of unusual situations, decisions that may involve limitations on human knowledge, decisions that violate user preferences, decisions that arise due to multiple choices that have indistinguishable effects, decisions that result in a deviation from a plan, or decisions that result in indirect trajectories to a goal.
Explanation management unit 102 determines, based on model 116, a context or a rationale for the action that may be perceived as unexpected by the user. The decision context may include information that provides a context for the action determined by autonomous system 120 that may be perceived as unexpected by the user. For example, the decision context may include a summary of information known to autonomous system 120 and used to determine the action that may be perceived as unexpected by the user. The decision context may include information related to an operational status of autonomous system 120, a fuel or power status of autonomous system 120, environmental or geographic information, such as terrain obstacles near or remote to autonomous system 120 or routes to a destination, or information related to a target, such as capabilities, location, vector, or defenses of the target.
The decision rationale may include information that explains the reasoning that autonomous system 120 used to arrive at the action that may be perceived as unexpected by the user. For example, the decision rationale may include rules, guidelines, directives, requirements, and/or user preferences, as well as route planning, role allocation, and task optimization criteria that, given action context 112, caused autonomous system 120 to arrive at the action that may be perceived as unexpected by the user.
Explanation management unit 102 uses explanation generation unit 108 and dialog management unit 106 to generate explanation 220 of the action that may be perceived as unexpected for the user. For example, explanation management unit 102 may output, for display via an output device 204, an explanation 220 for the unexpected action. As another example, explanation management unit 102 may generate explanation 220 and store, via output devices 204, explanation 220 to a storage device. Explanation 220 may represent text, a graphical symbol, a sound, an audio or text file, a tactile indication, or other indication that at least partially explains a decision of autonomous system 120. In some examples, explanation 220 includes the context or the rationale for the unexpected action.
Output device 204 may include a display, sound card, video graphics adapter card, speaker, presence-sensitive screen, one or more USB interfaces, video and/or audio output interfaces, or any other type of device capable of generating tactile, audio, video, or other output. Output device 204 may include a display device, which may function as an output device using technologies including liquid crystal displays (LCD), quantum dot display, dot matrix displays, light emitting diode (LED) displays, organic light-emitting diode (OLED) displays, cathode ray tube (CRT) displays, e-ink, or monochrome, color, or any other type of display capable of generating tactile, audio, and/or visual output. In other examples, output device 204 may produce an output to a user in another fashion, such as via a sound card, video graphics adapter card, speaker, presence-sensitive screen, one or more USB interfaces, video and/or audio output interfaces, or any other type of device capable of generating tactile, audio, video, or other output. In some examples, output device 204 may include a presence-sensitive display that may serve as a user interface device that operates both as one or more input devices and one or more output devices.
The example of
As discussed briefly above, explanation management unit 102 may use the model of a behavior of autonomous system 120 to identify actions of autonomous system 120 that may be perceived as unexpected by a user. For example, explanation management unit 102 may use the model to identify specific triggers that warrant a proactive explanation, such as decisions that are historical deviations from past decisions, decisions that arise as a result of unusual situations, decisions that may involve limitations on human knowledge, decisions that violate user preferences, decisions that arise due to multiple choices that have indistinguishable effects, decisions that result in a deviation from a plan, or decisions that result in indirect trajectories to a goal.
With respect to decisions that are historical deviations 302 from past decisions, an important aspect of trust is predictability. A human generally expects autonomous system 120 to perform actions that are similar to what the autonomous system 120 has performed in comparable situations in the past. Thus, if autonomous system 120 executes an action that is different than an action that autonomous system 120 has performed in similar past situations, the action is likely to surprise the user. Explanation management unit 102 may anticipate this situation through a combination of statistical analysis of performance logs and semantic models for situation similarity. Explanation management unit 102 may generate an explanation that acknowledges the atypical behavior and provides the rationale behind it. An example explanation may be “Aborting rescue mission because of engine fire.”
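One possible approximation of the combination of performance-log statistics and situation similarity described above is sketched below; the log format, similarity test, and thresholds are assumptions for illustration.

```python
from typing import Callable, List, Tuple


def is_historical_deviation(situation: dict,
                            action: str,
                            log: List[Tuple[dict, str]],
                            similar: Callable[[dict, dict], bool],
                            min_similar: int = 5,
                            max_frequency: float = 0.2) -> bool:
    """Flag the action if it was rarely chosen in comparable past situations."""
    matches = [past_action for past_situation, past_action in log
               if similar(past_situation, situation)]
    if len(matches) < min_similar:
        return False  # not enough comparable history to call it a deviation
    frequency = matches.count(action) / len(matches)
    return frequency < max_frequency


# Example similarity test over a couple of semantic features of the situation.
def same_mission_phase(a: dict, b: dict) -> bool:
    return a.get("phase") == b.get("phase") and a.get("weather") == b.get("weather")
```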
With respect to decisions that arise as a result of unusual situations 304, a human observer lacking detailed understanding of an environment may be aware of actions for normal operations but not of actions for more unusual situations. The actions of autonomous system 120 in such unusual situations may thus surprise a user. Explanation management unit 102 may identify these unusual situations by their frequency of occurrence. For example, if the conditions that triggered the behavior are below a probability threshold, explanation management unit 102 may identify the behavior as arising as a result of an unusual situation. To avert surprise to the user, explanation management unit 102 may generate an explanation that describes the unusual situation. An example explanation may be “Normal operation is not to extinguish fires with civilians on board, but fire is preventing egress of Drone 17 with a high-priority evacuation.”
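The frequency-of-occurrence test described above could be as simple as comparing the empirical probability of the triggering conditions against a threshold, as in the following sketch; the encoding of conditions as sets of labels is an assumption.

```python
from typing import FrozenSet, List


def is_unusual_situation(trigger_conditions: FrozenSet[str],
                         observed_histories: List[FrozenSet[str]],
                         probability_threshold: float = 0.05) -> bool:
    """True if the conditions that triggered the behavior are rarely observed."""
    if not observed_histories:
        return True  # never seen anything comparable before
    occurrences = sum(1 for past in observed_histories
                      if trigger_conditions <= past)
    return (occurrences / len(observed_histories)) < probability_threshold


history = [frozenset({"fire"})] * 20 + [frozenset({"fire", "civilians_on_board"})]
print(is_unusual_situation(frozenset({"fire", "civilians_on_board"}), history))  # True
```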
With respect to decisions that may involve limitations on human knowledge 306, sensing and computational capabilities, particularly in distributed settings, can enable autonomous system 120 to have insights and knowledge that are unavailable to human collaborators. With insufficient information, a human operator may perceive an action taken by the autonomous system as an incorrect one. Through awareness of decisions based on this information, explanation management unit 102 may identify potential mismatches in situational understanding that can lead to surprising the user with seemingly incorrect decision-making. Explanation management unit 102 may generate an explanation that identifies the potential information mismatch and draws the attention of the user to the information mismatch.
With respect to decisions that violate user preferences 308, in some examples, autonomous system 120 incorporates preferences over desired behaviors. In some examples, these preferences are created by autonomous system 120. In some examples, these preferences are learned by autonomous system 120 over time. In some examples, these preferences are created by a system modeler at the time of design or imposed by a human supervisor later on. When making decisions, autonomous system 120 seeks to satisfy these preferences. However, various factors (e.g., resource limitations, deadlines, physical restrictions) may require violation of the user preferences. In some situations, the user may perceive autonomous system 120 as seemingly operating contrary to plan and thus be surprised. Explanation management unit 102 may generate an explanation that acknowledges the violated preference or directive and provides a reason why. An example explanation may be “Entering no-fly zone to avoid dangerously high winds.”
With respect to decisions that arise due to multiple choices that have indistinguishable effects 310, two actions may be very different in practice but achieve similar effects. For example, different routes to the same destination may have similar duration, and therefore selecting one route or the other may have a negligible effect on travel time. However, selecting a particular route may surprise a human observer who may not have realized their comparable effects or even been aware that the selected option was a valid choice. Reflection unit 110 may anticipate this type of surprise by measuring the similarity of multiple actions or trajectories and of corresponding outcomes. Explanation management unit 102 may generate an explanation that informs the user of the different options and their similar impact. An example explanation may be “I will extinguish Fire 47 before Fire 32, but extinguishing Fire 32 before Fire 47 would be just as effective.”
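A sketch of how reflection unit 110 might detect a choice among near-equivalent options by comparing their predicted outcomes follows; the outcome model, option names, and tolerance are illustrative assumptions.

```python
from typing import Callable, Dict, List


def indistinguishable_alternatives(chosen: str,
                                   options: List[str],
                                   predicted_outcome: Callable[[str], float],
                                   tolerance: float) -> List[str]:
    """Return the options whose predicted outcomes are close to the chosen one's."""
    baseline = predicted_outcome(chosen)
    return [o for o in options
            if o != chosen and abs(predicted_outcome(o) - baseline) <= tolerance]


# Example: two routes with nearly identical travel times (in minutes).
travel_time: Dict[str, float] = {"route_east": 41.0, "route_west": 42.5, "route_detour": 75.0}
close = indistinguishable_alternatives("route_east",
                                       list(travel_time),
                                       travel_time.get,
                                       tolerance=5.0)
print(close)  # ['route_west'] -> explain that either route would be comparable
```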
With respect to decisions that result in a deviation from a plan 312, a user may expect autonomous system 120 to execute a predetermined plan to achieve a goal. However, situations may arise that require a change of plans which, if initiated by autonomous system 120, may cause surprise in the user. Absent an explicit shared understanding of the current plan, autonomous system 120 may rely on the user to expect inertia, e.g., an expectation that autonomous system 120 will continue moving in the same direction toward the same target. Explanation management unit 102 may characterize this tendency and recognize significant changes so as to anticipate potential surprise in the user. Explanation management unit 102 may generate an explanation that informs the user of the plan change. An example explanation may be “Diverting to rescue newly detected group.” The explanation may be sufficient if the explanation calls attention to a new goal or target previously unknown to the user. However, if the change involves a reprioritization of existing goals, explanation management unit 102 may include the rationale. Another example explanation may be “Diverting to rescue Group 5 before Group 4 because fire near Group 5 is growing faster than expected.”
With respect to decisions that result in indirect trajectories to a goal 314, a user may expect autonomous system 120 to be engaged in generally purposeful behavior. In spatio-temporal domains, observers may typically infer a destination of autonomous system 120 from its trajectory. Further, an observer may assume, based on the perceived destination, a goal of autonomous system 120. For example, a fire response drone heading toward a fire is likely to be planning to extinguish the fire. Surprises occur when autonomous system 120 decides to take an indirect route to a goal and therefore appears to an outside observer to be headed nowhere meaningful. Explanation management unit 102 may identify this situation by determining a difference between an actual destination and an apparent one, if any. Explanation management unit 102 may generate an explanation that explicitly identifies a goal of autonomous system 120 and the reason for the indirect action. An example explanation may be “New task to retrieve equipment from supply depot prior to proceeding to target.”
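A sketch of the actual-versus-apparent-destination comparison described above: infer the goal an observer would read from the current heading and flag a potential surprise when it differs from the true destination. The geometry, goal names, and angular comparison are illustrative assumptions.

```python
import math
from typing import Dict, Optional, Tuple

Point = Tuple[float, float]


def bearing(origin: Point, target: Point) -> float:
    """Angle (radians) from origin to target in the x-y plane."""
    return math.atan2(target[1] - origin[1], target[0] - origin[0])


def apparent_destination(position: Point, heading: float,
                         known_goals: Dict[str, Point]) -> Optional[str]:
    """The goal an observer would infer: the one best aligned with the heading."""
    if not known_goals:
        return None

    def misalignment(goal: Point) -> float:
        diff = abs(bearing(position, goal) - heading)
        return min(diff, 2 * math.pi - diff)

    return min(known_goals, key=lambda name: misalignment(known_goals[name]))


def is_indirect(position: Point, heading: float, actual: str,
                known_goals: Dict[str, Point]) -> bool:
    """True if the heading points at some goal other than the actual destination."""
    return apparent_destination(position, heading, known_goals) != actual


goals = {"fire_47": (10.0, 0.0), "supply_depot": (0.0, 10.0)}
# The drone's true task is fire_47, but it is currently heading toward the depot.
print(is_indirect((0.0, 0.0), math.pi / 2, "fire_47", goals))  # True -> explain the detour
```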
The aforementioned examples of actions of autonomous system 120 that may be perceived as unexpected by a user depicted in
User interface 400 depicts window element 402 for displaying actions that may be perceived as unexpected by a user, window element 406 for displaying a context for the action that may be perceived as unexpected by the user, and window element 404 for displaying a rationale for the decision. In the example of
Further, explanation management unit 102 identifies, as described above, a context and a rationale for the action that may be perceived as unexpected by the user. In the example of
In some examples, user interface 400 may have different configurations for presenting information to the user. For example, instead of window elements 402, 404, and 406, user interface 400 may present information via audio, video, images, or text. In some examples, user interface 400 may further include one or more input devices for receiving an input from the user approving or overruling the decision made by autonomous system 120.
In the example of
Explanation management unit 102 determines, based on model 116, a context or a rationale for the unexpected action (506). The decision context may include information that provides a context for the unexpected action made by autonomous system 120. For example, the decision context may include a summary of information known to autonomous system 120 and used to arrive at the action that may be perceived as unexpected by the user. The decision context may include information related to an operational status of autonomous system 120, a fuel or power status of autonomous system 120, environmental or geographic information, such as terrain obstacles near or remote to autonomous system 120 or routes to a destination, or information related to a target, such as capabilities, location, vector, or defenses of the target.
The decision rationale may include information that explains the reasoning that autonomous system 120 used to arrive at the action that may be perceived as unexpected by the user. For example, the decision rationale may include rules, guidelines, directives, requirements, and/or user preferences, as well as route planning, role allocation, and task optimization criteria that, given action context 112, caused autonomous system 120 to arrive at the action that may be perceived as unexpected by the user.
Explanation management unit 102 uses explanation generation unit 108 and dialog management unit 106 to craft an explanation of the unexpected action for display to the user. Explanation management unit 102 outputs, for display via user interface 150, the explanation for the action that may be perceived as unexpected by the user (508). In some examples, the explanation includes the context or the rationale for the action that may be perceived as unexpected by the user. For example, user interface 400 may present the explanation via an audio message, a video, one or more images, or via text.
The techniques described in this disclosure may be implemented, at least in part, in hardware, software, firmware or any combination thereof. For example, various aspects of the described techniques may be implemented within one or more processors, including one or more microprocessors, digital signal processors (DSPs), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), or any other equivalent integrated or discrete logic circuitry, as well as any combinations of such components. The term “processor” or “processing circuitry” may generally refer to any of the foregoing logic circuitry, alone or in combination with other logic circuitry, or any other equivalent circuitry. A control unit comprising hardware may also perform one or more of the techniques of this disclosure.
Such hardware, software, and firmware may be implemented within the same device or within separate devices to support the various operations and functions described in this disclosure. In addition, any of the described units, modules or components may be implemented together or separately as discrete but interoperable logic devices. Depiction of different features as modules or units is intended to highlight different functional aspects and does not necessarily imply that such modules or units must be realized by separate hardware or software components. Rather, functionality associated with one or more modules or units may be performed by separate hardware or software components, or integrated within common or separate hardware or software components.
The techniques described in this disclosure may also be embodied or encoded in a computer-readable medium, such as a computer-readable storage medium, containing instructions. Instructions embedded or encoded in a computer-readable storage medium may cause a programmable processor, or other processor, to perform the method, e.g., when the instructions are executed. Computer readable storage media may include random access memory (RAM), read only memory (ROM), programmable read only memory (PROM), erasable programmable read only memory (EPROM), electronically erasable programmable read only memory (EEPROM), flash memory, a hard disk, a CD-ROM, a floppy disk, a cassette, magnetic media, optical media, or other computer readable media.
Various examples have been described. These and other examples are within the scope of the following claims.