The following relates to the optimal diagnosis arts and to applications of same such as call center arts, device fault diagnosis arts, and related arts.
Diagnostic processes are employed to reach an implementable decision for addressing a problem, in a situation for which knowledge is limited. The “implementable decision” is ideally a decision that resolves the problem, but could alternatively be a less satisfactory decision such as “do nothing” or “re-route to a specialist”. In one optimal diagnosis approach, the process starts with a set of hypotheses, and tests are chosen and performed sequentially to gather information to confirm or reject various hypotheses. The term “test” in this context encompasses any action that yields information tending to support or reject a hypothesis. This process of selecting and performing tests and reassessing hypotheses is continued until one hypothesis, or a set of hypotheses, remains, all of which lead to the same implementable decision.
A related concept is “root cause”, which can be thought of as the underlying cause of the problem being diagnosed. Each root cause has a corresponding implementable decision, but two or more different root causes may lead to the same implementable decision. Diagnosis may be viewed as the process of determining the root cause; however, practically it is sufficient to reach a point where all remaining hypotheses lead to the same implementable decision, even if those remaining hypotheses encompass more than one possible root cause. It may also be noted that more than one hypothesis may lead to the same root cause.
Diagnosis devices providing guidance for optimal diagnosis find wide-ranging applications. For example, in a call center providing technical assistance, optimal diagnosis can be used to identify a sequence of tests (e.g. questions posed to the caller, or actual tests the caller performs on the device whose problem is being diagnosed) that most efficiently drill down through the space of hypotheses to reach a single implementable decision. As another example, a medical diagnostic system may identify a sequence of medical tests, questions to pose to the patient, or so forth which optimally lead to an implementable medical decision. These are merely non-limiting illustrative examples.
More formally, optimal diagnosis refers to processes for the determination of a policy to choose a sequence of tests that identify the root-cause of the problem (or, that identify an implementable decision) with minimal cost. If the root cause is treated as a hidden state, then informally the goal of an optimal policy is to gradually reduce the uncertainty about this hidden state by probing it through an efficient (i.e. optimally low cost) sequence of tests, so as to ultimately arrive at an implementable decision—the one with maximum utility—with high probability.
A known optimal diagnosis formulation is the Decision Region Determination problem formulation, whose inputs include: a set of hypotheses, each defined by a configuration of test results x1, . . . , xn; a prior probability distribution over the hypotheses; a set of tests, each having an associated cost; and a set of decision regions R1, . . . , each region Ri being a subset of the hypotheses that lead to a common implementable decision.
The goal is to obtain an optimal (adaptive) policy π* with minimum expected cost such that, eventually, there exists only one region Ri that contains all hypotheses consistent with the observations required by the policy. The policy is adaptive in that it selects an action depending on the test outcomes up to the current step.
When the regions Ri are non-overlapping, this problem can be solved by the known EC2 algorithm (Golovin et al., “Near-Optimal Bayesian Active Learning with Noisy Observations”, Proc. Neural Information Processing Systems (NIPS), 2010). The EC2 algorithm is a strategy operating in a weighted graph of hypotheses: edges link hypotheses (nodes) from different regions and a test t with outcome xt will cut edges whose end vertices are not consistent with xt. When the regions Ri are overlapping, a known extension of the EC2 algorithm (Chen et al., “Submodular Surrogates for Value of Information”, Proc. Conference on Artificial Intelligence (AAAI), 2015) operates by separating the problem into a graph coloring sub-problem and multiple (parallel) EC2-like sub-problems.
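By way of a non-limiting illustration of the EC2 edge-cutting strategy, the following sketch greedily scores candidate tests by the expected weight of cross-region edges they cut. The toy prior, region assignment, and three binary tests are hypothetical, and the sketch is merely illustrative of the published EC2 idea rather than a definitive implementation.

```python
import itertools

# Toy problem: 3 binary tests, hypotheses are outcome configurations with a uniform
# prior, and each hypothesis belongs to one decision region (all values hypothetical).
hypotheses = {h: 1.0 / 8 for h in itertools.product((0, 1), repeat=3)}
region_of = {h: 0 if h[0] == 0 else 1 for h in hypotheses}

def cut_weight(hyps):
    """Total weight of edges linking hypotheses of different regions (weight = product of probabilities)."""
    return sum(hypotheses[a] * hypotheses[b]
               for a, b in itertools.combinations(hyps, 2)
               if region_of[a] != region_of[b])

def expected_weight_cut(test, hyps):
    """Expected edge weight cut by performing `test`: edges incident to inconsistent hypotheses are removed."""
    before = cut_weight(hyps)
    mass = sum(hypotheses[h] for h in hyps)
    expected_after = 0.0
    for outcome in (0, 1):
        consistent = [h for h in hyps if h[test] == outcome]
        p_outcome = sum(hypotheses[h] for h in consistent) / mass
        expected_after += p_outcome * cut_weight(consistent)
    return before - expected_after

# Greedy EC2 step: pick the test with the largest expected weight cut.
remaining = list(hypotheses)
best = max(range(3), key=lambda t: expected_weight_cut(t, remaining))
print("next test:", best)
```

In this toy example the first test by itself separates the two regions, so the greedy step selects it.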
However, the EC2 algorithm and related algorithms based on the Decision Region Determination approach operate by explicitly enumerating all hypotheses in order to derive the next optimal test. As each hypothesis is defined as a unique configuration (sequence) of values for test results x1, . . . , xn, the hypothesis space grows exponentially with the number of tests n, so that these algorithms become infeasible in practice (for large values of n).
In some embodiments disclosed herein, a diagnosis device comprises a computer programmed to choose a sequence of tests to perform to diagnose a problem by iteratively performing tasks (1) and (2). In task (1), for each root cause yj of a set of m root causes, a hypotheses sampling generation task is performed to produce a ranked list of hypotheses for the root cause yj by operations which include adding hypotheses to a set of hypotheses wherein each hypothesis is represented by a configuration x1, . . . , xn of test results for a set of unperformed tests U. Task (2) includes performing a global update task including merging the ranked lists of hypotheses for the m root causes, selecting a test of the unperformed tests based on the merged ranked lists and generating or receiving a test result for the selected test, updating the set of unperformed tests U by removing the selected test, and removing from the ranked lists of hypotheses for the m root causes those hypotheses that are inconsistent with the test result of the selected test. In some embodiments, for each iteration of performing the hypotheses sampling generation task (1), the adding of hypotheses is performed to produce the ranked list of hypotheses covering at least a threshold conditional probability mass coverage for the conditional probability of root cause yj given all observed test outcomes up to the current iteration.
In some embodiments disclosed herein, a non-transitory storage medium stores instructions readable and executable by a computer to perform a diagnosis method including choosing a sequence of tests for diagnosing a problem by an iterative process. The iterative process includes: independently generating or updating a ranked list of hypotheses for each root cause of a set of root causes where each hypothesis is represented by a set of test results for a set of unperformed tests and the generating or updating is performed by adding hypotheses such that the ranked list for each root cause is ranked according to conditional probabilities of the hypotheses conditioned on the root cause; merging the ranked lists of hypotheses for all root causes and selecting a test of the set of unperformed tests using the merged ranked lists as if they constituted the complete set of hypotheses; generating or receiving a test result for the selected test; removing the selected test from the set of unperformed tests; and removing from the ranked lists of hypotheses for the root causes those hypotheses that are inconsistent with the test result of the selected test. In some embodiments, the independent generating or updating of the ranked list of hypotheses for each root cause is performed to produce the ranked list of hypotheses covering at least a threshold conditional probability mass coverage for the conditional probability of the root cause given all observed test outcomes up to the current iteration.
In some embodiments disclosed herein, a diagnosis method comprises choosing a sequence of tests for diagnosing a problem by an iterative process including: generating or updating a ranked list of hypotheses for each root cause of m root causes where each hypothesis is represented by a set of test results for a set of unperformed tests and the generating or updating is performed by adding hypotheses such that the ranked list for each root cause is ranked according to conditional probabilities of the hypotheses conditioned on the root cause; merging the ranked lists of hypotheses for the m root causes and selecting a test of the set of unperformed tests based on the merged ranked lists; generating or receiving a test result for the selected test; and performing an update including removing the selected test from the set of unperformed tests and removing from the ranked lists of hypotheses for the root causes those hypotheses that are inconsistent with the test result of the selected test. The generating or updating, the merging, the generating or receiving, and the performing of the update are performed by one or more computers. In some embodiments, the generating or updating produces the ranked list of hypotheses for each root cause which is effective to cover at least a threshold conditional probability mass coverage for the root cause. (In other words, the generating or updating employs a stopping criterion in which the generating or updating stops when the ranked list of hypotheses covers at least a threshold conditional probability mass coverage for the root cause.)
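The iterative process summarized above may be sketched at a high level as follows. This is a minimal skeleton under stated assumptions: the callables sample_hypotheses, select_test, stop, and perform_test are hypothetical placeholders standing in for the per-root-cause sampling, test-selection, stopping-criterion, and test-execution operations detailed later herein.

```python
def diagnose(root_causes, all_tests, sample_hypotheses, select_test, stop, perform_test):
    """Skeleton of the iterative process: task (1) per-root-cause sampling, task (2) global update.
    Hypotheses are assumed to be mappings from unperformed tests to binary outcomes."""
    unperformed = set(all_tests)                  # the set U of unperformed tests
    observed = {}                                 # observed test outcomes x_A
    ranked = {y: [] for y in root_causes}         # ranked list of hypotheses per root cause
    while unperformed:
        # Task (1): independently (re-)generate each root cause's ranked list, e.g. until it
        # covers at least (1 - eta) of the conditional probability mass given `observed`.
        for y in ranked:
            ranked[y] = sample_hypotheses(y, unperformed, observed, ranked[y])
        # Task (2): merge the lists, check the stopping criterion, and select the next test.
        merged = [h for hyps in ranked.values() for h in hyps]
        if stop(merged, observed):
            break
        t = select_test(merged, unperformed)
        x_t = perform_test(t)                     # e.g. pose a question to the caller, run a device test
        observed[t] = x_t
        unperformed.remove(t)
        # Remove hypotheses that are inconsistent with the newly observed test result.
        for y in ranked:
            ranked[y] = [h for h in ranked[y] if h.get(t) == x_t]
    return observed, ranked
```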
Decision Region Determination approaches generally require explicit enumeration of all hypotheses or, in other words, all potential configurations of test outcomes. For each hypothesis, its associated optimal decision is determined and its likelihood is computed; once this is done, a particular strategy (different for different Decision Region Determination approaches) is applied to choose the next test, in order to reduce as efficiently as possible the number of regions consistent with potential future observations.
In such approaches, each hypothesis can be represented as the test results for the set of available tests; e.g., if there are n tests each having a binary result, a given hypothesis is represented by one of 2^n possible “configurations” of the n binary tests. (Binary tests are employed herein as an expository simplification, but the disclosed techniques are usable with non-binary tests.) The number of hypotheses (represented by configurations) is exponential in the number of tests (it scales as 2^n in the example), so that these approaches do not scale up well when the number of tests increases to several hundred tests or more. Sampling the hypothesis space is a feasible alternative, but could require a large sample size in order to guarantee that the loss in performance is bounded in an acceptable way. Moreover, as new test results are obtained, the number of sampled hypotheses consistent with these test results could decrease significantly, so that the effective sample size may be insufficient to compute a (nearly) optimal choice strategy (sequence of tests to perform). Furthermore, in practice, it is often the case that the tests are designed to have high specificity and/or high sensitivity. This means that a small number of configurations covers a significant part of the total probability mass and, conversely, that there are many configurations with very small (but non-null) probabilities. This skewness can be exploited if an efficient way is provided to generate the most likely configurations.
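The skewness can be illustrated with a small, hypothetical computation: with n = 20 binary tests that each agree with the root cause with probability 0.95, only a small fraction of the 2^20 configurations is needed to cover 90% of the conditional probability mass.

```python
from math import comb

# Hypothetical illustration of the skewness exploited by the disclosed sampling:
# n binary tests, each outcome agreeing with the root cause with probability p.
n, p = 20, 0.95

# Configurations with k "disagreeing" outcomes all share probability p^(n-k) * (1-p)^k,
# and there are C(n, k) of them; enumerate the groups in decreasing probability order.
covered, count = 0.0, 0
for k in range(n + 1):
    prob = p ** (n - k) * (1 - p) ** k
    for _ in range(comb(n, k)):
        covered += prob
        count += 1
        if covered >= 0.90:
            break
    if covered >= 0.90:
        break

print(f"{count} of {2**n} configurations cover {covered:.3f} of the mass")
```

Running this sketch shows that on the order of a couple hundred of the roughly one million configurations suffice.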
Optimal diagnosis approaches disclosed herein have improved scalability compared with approaches employing Decision Region Determination formulations. The improved scalability is achieved by dynamically (re-)sampling the hypothesis spaces independently for each root cause, while ensuring that the sample size and representativeness of the combined sampling for all m root causes (as measured by the total probability mass it covers, given all test outcomes observed) is sufficient to derive a nearly-optimal policy whose total cost is bounded with respect to the cost of the optimal policy derived from considering the entire hypotheses space. A “divide-and-conquer” sampling strategy is employed in which hypotheses are sampled for each root cause (i.e. each value of the hidden state) independently. In some embodiments, the Naïve Bayes assumption is employed to generate the most probable hypotheses (conditioned on the root cause) and to combine them over all m root causes to compute their global likelihood. A Directed Acyclic Graph (DAG)-based search may be employed in the sampling. A new sample is re-generated each time the result of a (previously unperformed) test is received, so that a pre-specified coverage level and reliable statistics are guaranteed to derive a near-optimal policy.
Optionally, a residual set of hypotheses that are sampled but are not in the ranked list of hypotheses is maintained. This residual set of hypotheses can be seen to be somewhat analogous to a type of “Pareto frontier” of candidate hypotheses. Such a residual set of hypotheses (loosely referred to herein as a Pareto frontier) is maintained for each root cause, and is sufficient to generate the next candidates for the next re-sampling, if needed. This also ensures that hypotheses already generated during a previous iteration are not reproduced.
In the illustrative examples herein, the following notation is employed. A hypothesis is represented by a configuration made of n test outcomes. In the illustrative examples, these test outcomes are binary, so that a hypothesis h can be represented by a sequence of n bits xi. (Again, the assumption of binary tests is illustrative, but tests with more than two possible outcomes are contemplated.) The probability of a configuration h is obtained as a mixture model over hidden components: p(h) = Σ_{j=1..m} p(h|yj)·p(yj), where yj ∈ Y and Y denotes the set of m hidden components. Each hidden component yj corresponds to a (possible) root cause, and there are (without loss of generality) m root causes. Under the Naïve Bayes assumption, the test outcomes are conditionally independent given the component/root cause: p(h|yj) = Π_{i=1..n} p(xi|yj). It is assumed that the individual conditional probabilities p(xi|yj) are known.
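The mixture model and the Naïve Bayes factorization can be made concrete with a short sketch; the priors and conditional probabilities used below are hypothetical numbers chosen for illustration only.

```python
# Minimal sketch of the probability of a hypothesis (configuration) under the
# Naive Bayes mixture model; the priors and conditionals are hypothetical.
prior = {"y1": 0.6, "y2": 0.4}          # p(y_j) over m = 2 root causes
cond = {                                # p(x_i = 1 | y_j) for n = 3 binary tests
    "y1": [0.9, 0.2, 0.7],
    "y2": [0.1, 0.8, 0.6],
}

def p_h_given_y(h, y):
    """p(h | y_j) = product over tests of p(x_i | y_j), by conditional independence."""
    out = 1.0
    for p1, x in zip(cond[y], h):
        out *= p1 if x == 1 else (1.0 - p1)
    return out

def p_h(h):
    """p(h) = sum over root causes of p(h | y_j) * p(y_j) (mixture over hidden components)."""
    return sum(p_h_given_y(h, y) * prior[y] for y in prior)

print(p_h((1, 0, 1)))   # probability of the configuration x1=1, x2=0, x3=1
```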
Optimal diagnosis methods disclosed herein aim at identifying the root cause(s) or, more generally, making a decision to solve a problem. Optimal diagnosis approaches disclosed herein achieve this goal through the analysis and the exploitation of all potential configurations consistent with the test outcomes currently observed. Conventionally, such approaches need the enumeration of all potential configurations. In the approaches disclosed herein, however, instead of trying to enumerate all configurations, only the most likely configurations are enumerated—covering up to a pre-specified portion of the total probability mass—in an efficient and adaptive way. Each component (possible root cause) is sampled independently so that, with the Naive Bayes assumption, the most probable hypotheses (that is, having highest conditional probability p(h|yj) of hypothesis h conditioned on the root cause yj) are generated. This mechanism automatically generates a ranked list of most probable hypotheses for each root cause, and these are combined (i.e. merged) over all root causes, and the merger used to select a next unperformed test to perform. A new sample is generated each time a new test outcome (result) is received: this constantly guarantees a pre-specified coverage level so that the statistics used by the strategy to optimally choose the next test are exploited reliably. Optionally, a residual set of hypotheses (called a Pareto frontier) is maintained, that is sufficient to generate the next candidates for the next re-sampling, if needed.
In sum, the disclosed approaches adaptively maintain a pool of configurations that constitute a sample whose representativeness and size (as measured by the total probability mass it covers, given all test outcomes observed) are sufficient to derive a nearly optimal policy. These approaches have computational advantages that facilitate scalability and more efficient use of computing resources. In one approach, the processing may be performed on m parallel processing paths to respectively update the most likely configurations for each respective component of the m components, which cover globally (by taking the union over all components) at least (1−η) of the total probability mass (where η is a design parameter). After observing a test outcome, inconsistent configurations are adaptively filtered out and additional configurations for each component are re-sampled by the respective m parallel processing paths. The re-sampling is performed to ensure that the new sampling coverage is sufficient to derive reliable statistics when deriving the next optimal test to be performed.
With reference to the drawings, an illustrative optimal diagnosis device is implemented by one or more computers 10.
Each computer 10 is programmed to perform at least a portion of the optimal diagnosis processing. The number of computers may be as low as one (a single computer). On the other hand, in the illustrative optimal diagnosis device, the processing may be distributed over a plurality of computers 10, for example with the m hypothesis space sampling processes 20 executed on m parallel processing paths as described previously.
With continuing reference to the drawings, the optimal diagnosis process performed by the computer(s) 10 includes m hypothesis space sampling processes 20, one for each of the m possible root causes yj. Each hypothesis space sampling process 20 produces a ranked list 24 of most likely hypotheses for its respective root cause, conditioned on the test outcomes observed thus far, and each includes an update process 28 that removes hypotheses which become inconsistent with newly observed test results.
The optimal diagnosis process further includes a central (or global) update task 30 including a merger operation 32 that merges the ranked lists 24 of hypotheses for the m root causes and selects a next test from the set of unperformed tests U to perform based on the merged ranked lists. In an operation 34, a test result is generated or received for the selected test. This test result is transmitted back to the m hypothesis space sampling processes 20 to enable these processes 20 to perform the update process 28 by removing any hypotheses which are inconsistent with the test result. Finally, in an operation 36 the set of unperformed tests U is updated by removing the selected and now-performed test from the set of unperformed tests U.
It should be noted that in the operation 34, the optimal diagnosis device does not necessarily actually perform the selected test. For example, in the case of the optimal diagnosis device being used to support a fully automated online chat or telephonic dialog system of a call center, the operation 34 may entail generating the test result for the selected test by operating the dialog system to conduct a dialog and receive the test result via the dialog system. By way of illustration, in the case of an online chat dialog system the selected test may have an associated “question” text string that is sent to the caller via an online chat application program, and the test result is then received from the caller via the online chat application program (possibly with some post-processing, e.g. applying natural language processing to determine whether the response was “yes” or some equivalent, or “no” or some equivalent). A telephonic dialog system is used similarly, except that the associated “question” text string is replaced by a pre-recorded audio equivalent (or is converted using voice synthesis hardware) and the received audio answer is processed by voice recognition software to extract the response. In a variant case in which the optimal diagnosis device is used to support a manual online chat or telephonic dialog system of a call center, the operation 34 may entail presenting the question to a human call agent on a user interface display; the human agent then communicates the question to the caller via online chat or telephone, receives the answer by the same pathway, and types the received answer into the user interface, whereby the optimal diagnosis device receives the test result. As yet another example, in the case of medical diagnosis the operation 34 may output a medical test recommendation and receive the test result for the recommended medical test. In this case, the medical test may be a “conventional” test such as a laboratory test, or the “test” may be in the form of the physician asking the patient a diagnostic question and receiving an answer.
In the following, some illustrative embodiments of the hypothesis space sampling process 20 are described. Again, each hypothesis h is defined by a configuration that can be represented as an array of bits (assuming binary tests). Each bit i represents the outcome or test result xi of test i (i=1, . . . , n). For strictly binary tests, there are at most 2^n possible configurations, but most of them are either impossible or have a very low probability for a given root cause yj, depending on the conditional probability p(xi|yj) values. Each component yj has its own hypotheses sampling generator 20. In some illustrative embodiments, the generator 20 incrementally builds a Directed Acyclic Graph (DAG) of configurations, starting from the most likely configuration (which is easily identified as the configuration comprising the most probable test result xi for each respective test i). At each iteration, the current leaves of the DAG represent the current residual set of hypotheses, called the “Pareto Frontier” herein; this is the set of candidate configurations that dominate all other potential configurations from the likelihood viewpoint and that can generate all other configurations through the “children generation” mechanism described later herein. The most likely configuration of the frontier is then developed further by creating (e.g.) two children as new candidate configurations (nodes) in the DAG.
The local generator 20 for root cause yj uses the following inputs: the component yj and its associated outcome probability vector over the currently available tests, p(xi|yj) (i=1, . . . , nt), where the number nt of available tests will gradually decrease during the decision making process; the pre-specified coverage level (1−η); and, optionally, the frontier Fyj of candidate configurations retained from the previous re-sampling iteration, which ensures that hypotheses already generated during a previous iteration are not reproduced.
The hypotheses sampling generator 20 produces the following outputs: the ranked list L*y of most likely configurations and their log-probabilities λy(h) = log(p(h|y,xA)), such that Σ_{h∈L*y} p(h|y,xA) ≥ 1−η, where xA denotes the set of test outcomes observed so far; and the updated frontier Fy of residual candidate configurations, which is retained for the next re-sampling iteration.
With continuing reference to the drawings, an illustrative embodiment of the hypotheses sampling generator 20 operates iteratively. At each iteration, the most likely candidate configuration of the frontier is selected and added to the ranked list L*y, and its children are generated (operation 44) and added to the frontier as new candidate configurations. The iterations continue until the ranked list L*y covers at least the pre-specified fraction (1−η) of the conditional probability mass p(h|y,xA).
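One possible realization of such a generator is sketched below. It performs a best-first enumeration of configurations for a single root cause, maintaining a frontier of candidate nodes and stopping once the pre-specified coverage (1−η) is reached. The children-generation scheme used here (flipping tests in order of increasing flip cost) is an assumption made for the sketch and is not necessarily the exact mechanism of the illustrative embodiment; the input p_cond and the returned data structures are likewise hypothetical.

```python
import heapq
from math import exp, log

def sample_hypotheses(p_cond, eta):
    """
    Best-first enumeration of the most likely configurations for one root cause y_j under
    the Naive Bayes assumption, stopping once the ranked list covers at least (1 - eta) of
    the conditional probability mass. Returns the ranked list [(configuration, log p(h|y_j))]
    and the residual frontier of candidate nodes retained for the next re-sampling.
    p_cond[i] = p(x_i = 1 | y_j) for each currently unperformed test i (hypothetical input).
    """
    n = len(p_cond)
    best = [1 if p >= 0.5 else 0 for p in p_cond]                 # most likely outcome per test
    # Cost (in log-probability) of flipping test i to its less likely outcome.
    cost = [abs(log(max(p, 1e-12)) - log(max(1.0 - p, 1e-12))) for p in p_cond]
    order = sorted(range(n), key=lambda i: cost[i])               # cheapest flips first
    top_logp = sum(log(max(p if b == 1 else 1.0 - p, 1e-12)) for p, b in zip(p_cond, best))

    def to_config(flips):
        h = list(best)
        for r in flips:                                           # r indexes into `order`
            h[order[r]] = 1 - h[order[r]]
        return tuple(h)

    ranked, frontier, covered = [], [(-top_logp, -1, ())], 0.0
    while frontier and covered < 1.0 - eta:
        neg_logp, last, flips = heapq.heappop(frontier)
        logp = -neg_logp
        ranked.append((to_config(flips), logp))
        covered += exp(logp)
        if last + 1 < n:
            # Child 1: additionally flip the next-cheapest test.
            heapq.heappush(frontier, (-(logp - cost[order[last + 1]]), last + 1, flips + (last + 1,)))
            if last >= 0:
                # Child 2: replace the last flip by the next-cheapest one.
                logp2 = logp + cost[order[last]] - cost[order[last + 1]]
                heapq.heappush(frontier, (-logp2, last + 1, flips[:-1] + (last + 1,)))
    return ranked, frontier
```

For example, sample_hypotheses([0.9, 0.2, 0.7], eta=0.2) returns three configurations covering roughly 85% of the conditional probability mass, together with a residual frontier that can seed the next re-sampling.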
In an illustrative embodiment, the global update task 30 starts the optimal diagnosis process by initializing all ranked lists L*yj (and, optionally, the frontiers Fyj) to be empty, initializing the set xA of observed test outcomes to be empty, and initializing the set of unperformed tests U to the full set of available tests. Each iteration of the global update task 30 then proceeds as follows.
First, for each yj, j=1, . . . , m, the corresponding hypotheses sampling generator 20 is called to generate extra configurations so that L*yj covers at least the fraction (1−η) of the conditional probability mass p(h|yj,xA), given all test outcomes xA observed so far.
With continuing reference to the drawings, the merger operation 32 forms the global sample G as the union of the ranked lists L*yj over all m root causes. By construction, the merged sample G covers at least (1−η) of the total probability mass given the observed test outcomes xA:
Σ_{h∈G} p(h|xA) = Σ_{h∈G} Σ_y p(h|y,xA)·p(y|xA) ≥ Σ_y Σ_{h∈L*y} p(h|y,xA)·p(y|xA) ≥ Σ_y (1−η)·p(y|xA) = 1−η
For each hypothesis h∈G, its probability weight is:
p(h|xA) = Σ_y p(h|y,xA)·p(y|xA) = Σ_y exp(λy(h))·p(y|xA)
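A direct transcription of this weighting into code might look as follows; the container types (a per-root-cause mapping from hypotheses to log-probabilities, and a posterior over root causes) are assumptions of the sketch.

```python
from math import exp

def global_weights(ranked_lists, posterior_y):
    """
    Combine the per-root-cause log-probabilities lambda_y(h) = log p(h | y, x_A) into the
    global weight p(h | x_A) = sum_y exp(lambda_y(h)) * p(y | x_A) over the merged sample.
    `ranked_lists[y]` maps each hypothesis h to lambda_y(h); `posterior_y[y]` is p(y | x_A).
    Hypotheses absent from a root cause's list contribute zero for that root cause.
    """
    weights = {}
    for y, lam in ranked_lists.items():
        for h, logp in lam.items():
            weights[h] = weights.get(h, 0.0) + exp(logp) * posterior_y[y]
    return weights
```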
In the operation 54, statistics are computed to derive the next test t to perform, or to decide to stop if a stopping criterion is met (for example, that all remaining hypotheses of the sample, i.e. those consistent with all test outcomes observed up to the current iteration, lead to the same decision). By way of example, the most discriminative test for distinguishing between the remaining hypotheses of the sample may be chosen, where discriminativeness may be measured by information gain (IG) or another suitable metric. In the illustrative example, the next test t is selected from the set of unperformed tests U as the test maximizing the chosen discriminativeness metric computed over the merged sample G using the probability weights p(h|xA).
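As an illustration of test selection by information gain, the following sketch picks the unperformed test that most reduces the entropy of the weighted hypothesis sample; it implements the G-IG criterion named above (the EC2 objective could be substituted), and it assumes each hypothesis is a tuple of binary outcomes indexed by test index.

```python
from math import log2

def entropy(weights):
    """Shannon entropy of a discrete distribution given by (unnormalized) positive weights."""
    total = sum(weights)
    return -sum((w / total) * log2(w / total) for w in weights if w > 0.0)

def select_test_by_information_gain(weights, unperformed):
    """
    Pick the unperformed test whose outcome is expected to most reduce the entropy of the
    weighted hypothesis sample. `weights` maps each hypothesis h (a tuple of binary
    outcomes indexed by test index) to its weight p(h | x_A).
    """
    base = entropy(weights.values())
    total = sum(weights.values())
    best_test, best_gain = None, float("-inf")
    for t in unperformed:
        expected_posterior_entropy = 0.0
        for outcome in (0, 1):
            consistent = [w for h, w in weights.items() if h[t] == outcome]
            if consistent:
                p_outcome = sum(consistent) / total
                expected_posterior_entropy += p_outcome * entropy(consistent)
        gain = base - expected_posterior_entropy
        if gain > best_gain:
            best_test, best_gain = t, gain
    return best_test
```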
The operation 34 is next performed to generate or receive the test result xt of the selected test t. In the illustrative embodiment, the selected test t is executed via a dialog system 60, for example by posing a question corresponding to the selected test t to the caller and receiving the caller's answer as the test result.
It is to be appreciated that the dialog system 60 is merely one illustrative mechanism for executing the selected test t; other mechanisms may be employed, such as presenting the selected test to a human call agent via a user interface display, or outputting a medical test recommendation and receiving the corresponding medical test result, as previously described.
Regardless of the specific mechanism by which the test t selected at operation 54 is executed, the result of executing the selected test t is the test result 80, denoted herein as xt. The hypotheses sampling generators 20 for the m respective possible root causes then operate to update the respective ranked lists L*yj (and frontiers Fyj) by removing those hypotheses that are inconsistent with the test result xt, and the selected test t is removed from the set of unperformed tests U.
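The per-iteration filtering described here reduces to a few lines; the data structures below (ranked lists of (hypothesis, log-probability) pairs, hypotheses as tuples indexed by test index) are assumptions of the sketch.

```python
def apply_test_result(ranked_lists, unperformed, t, x_t):
    """
    Global update after observing the outcome x_t of the selected test t: remove t from the
    set U of unperformed tests and drop, from every root cause's ranked list, the hypotheses
    that are inconsistent with x_t.
    """
    unperformed.discard(t)
    for y in ranked_lists:
        ranked_lists[y] = [(h, logp) for h, logp in ranked_lists[y] if h[t] == x_t]
    return ranked_lists, unperformed
```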
The foregoing process is repeated iteratively, with each iteration selecting a test t, receiving the test result xt and updating accordingly.
It can be shown that, under the assumption that the hypotheses are sampled only once at the beginning of each experiment (i.e., no re-sampling after each iteration), the following upper bound can be placed on the expected cost of the greedy policy with respect to the sampled prior:
Fix η ∈ (0,1]. Suppose a set of hypotheses H̃ has been generated that covers a (1−η) fraction of the total probability mass. Let π̃ be the EC2 policy on H̃, let OPT be the optimal policy on the full hypothesis space H, and let T be the cost of performing all tests. Then the expected cost of π̃ with respect to the sampled prior is upper bounded in terms of the expected cost of OPT, the uncovered probability mass η, the total test cost T, and the minimum probability p̃min of a hypothesis in the sampled distribution.
The foregoing establishes a bound between the expected cost of the greedy algorithm on the sampled distribution over H̃ and the expected cost of the optimal algorithm on the original distribution over H. The quality of the upper bound depends on η: if the sampled distribution covers more mass (i.e., η is small), then a better upper bound is obtained.
When the underlying true hypothesis h* is in H̃, if the greedy policy π̃ is run until it cuts all edges between different decision regions on H̃, then it will make the correct decision upon terminating on H̃. Otherwise, with small probability, π̃ fails to make the correct decision. More precisely, a bicriteria result can be stated which jointly bounds the worst-case cost costwc(π̃) of the greedy policy and its probability of failing to make the correct decision, where costwc(•) denotes the worst-case cost of a policy.
One intuitive consequence of the foregoing is that running the greedy policy on a larger set of samples leads to a lower failure rate, although p̃min might be significantly smaller for small η. Further, with adaptive re-sampling, a (1−η) coverage of the posterior distribution over the hypothesis space is constantly maintained. With similar reasoning, it can be shown that the greedy policy with adaptively re-sampled posteriors yields a lower failure rate than the greedy policy which only samples the hypotheses once at the beginning of each experiment.
In the following, some experimental results are reported, which were obtained using real training data comprising a collection of (test outcomes, hidden state) observations. This collection of observations was obtained from contact center agents and knowledge workers solving complex troubleshooting problems for mobile devices. The training data involve around 1100 root causes (the possible values yj of the hidden state) and 950 tests with binary outcomes. From the training data, a joint probability distribution over the test outcomes and the root causes was derived as p(x1, . . . , xn, y) = p0(y)·Π_{i=1..n} p(xi|y), where p0(y) is the prior distribution over the root causes (assumed to be uniform in these experiments).
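A simple way in which such conditional probabilities could be estimated from (test outcomes, hidden state) observations is sketched below; the estimator and its smoothing parameter are assumptions for illustration, with the prior taken as uniform as in the reported experiments.

```python
from collections import defaultdict

def estimate_model(observations, n_tests, alpha=1.0):
    """
    Estimate p0(y) and p(x_i = 1 | y) from training observations of the form (outcomes, y),
    where `outcomes` is a tuple of n binary test outcomes and y is the root cause label.
    Laplace smoothing with parameter `alpha` avoids zero probabilities.
    """
    count_y = defaultdict(int)
    count_xy = defaultdict(lambda: [0] * n_tests)
    for outcomes, y in observations:
        count_y[y] += 1
        for i, x in enumerate(outcomes):
            count_xy[y][i] += x
    prior = {y: 1.0 / len(count_y) for y in count_y}   # uniform prior, as in the experiments
    cond = {y: [(count_xy[y][i] + alpha) / (count_y[y] + 2 * alpha) for i in range(n_tests)]
            for y in count_y}
    return prior, cond
```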
The tests simulated thousands of scenarios (10 scenarios for each possible root cause y), in which a customer enters the system with an initial symptom x0 (i.e. a test outcome) drawn according to the probability p(x0|y). Each scenario corresponds to a root cause and to a complete configuration of symptoms that are initially unknown to the algorithm, except for the value of the initial symptom. The number of decisions is the number of root causes, plus one extra decision (the “give-up” decision), which is the optimal one when the posterior distribution over the root causes given all test outcomes has no “peak” with a value higher than 98% (this is how the utility function was defined in this use case).
The experiments were run on an Intel i5-3340M CPU @ 2.70 GHz (8 GB RAM; 2 cores). The CPU time of the main loop of the algorithm (namely, performing the re-sampling, computing the statistics to derive the next best action, and filtering the lists) was on average less than 0.5 s, but could reach 1.5 s (at maximum) at the early stage of the process, when there is still substantial ambiguity about the possible root causes (this occurs with initial symptoms that are very general rather than specific).
The performance of the EC2 algorithm (implemented using the disclosed optimal diagnosis device) was compared against a standard greedy information gain (G-IG) baseline, with the results summarized in Table 1.
It is seen in Table 1 that both methods (EC2 and G-IG) offer a low failure rate of less than one failure over one thousand cases. However, there is a 16% improvement in the total number of tests required to solve a case, on average, when using the EC2 algorithm instead of the standard G-IG algorithm. This shows a clear advantage of using the disclosed approach for this kind of sequential problem: EC2 by construction is “less myopic” than the information-gain-greedy (G-IG) approach.
With reference back to the illustrative optimal diagnosis device, it will further be appreciated that the disclosed diagnosis methods may be embodied by a non-transitory storage medium storing instructions that are readable and executable by the illustrative computer or computers 10 to perform the disclosed optimal diagnosis processing.
It will be appreciated that various of the above-disclosed and other features and functions, or alternatives thereof, may be desirably combined into many other different systems or applications. Also that various presently unforeseen or unanticipated alternatives, modifications, variations or improvements therein may be subsequently made by those skilled in the art which are also intended to be encompassed by the following claims.