System and method for automated customer service with contingent live interaction

Information

  • Patent Grant
  • Patent Number
    8,379,830
  • Date Filed
    Tuesday, May 22, 2007
  • Date Issued
    Tuesday, February 19, 2013
Abstract
A balance between customer satisfaction and the cost of providing customer care can be achieved based on the use of online interaction classification techniques. Such techniques can use measurements such as a log likelihood ratio to determine if an interaction should be removed from automation.
Description
BACKGROUND

Automating customer care through self-service solutions (e.g., Interactive Voice Response (IVR), web-based self-care, etc.) results in substantial cost savings and operational efficiencies. However, due to several factors, such automated systems are unable to provide customers with a quality experience. Such factors include the highly constrained nature of automated interactions, poor error recovery in automated interactions, and poor context handling in automated interactions. The present invention addresses some of the deficiencies experienced with presently existing automated care systems.


One challenge in providing automated customer service (e.g., through an interactive voice response system) is a tradeoff between cost and customer satisfaction. While customer interactions which take place using an automated system (e.g., an interactive voice response system) are generally less expensive than interactions which take place between a customer and a human being, automated interactions are also likely to lead to lower customer satisfaction. One technique for addressing this problem is to provide customer service wherein an interaction initially takes place between a customer and an automated system and, if the interaction seems to be approaching a negative outcome (i.e., “going bad”), is transferred from an automated to a live interaction. However, an obstacle to the successful use of this technique is the problem of determining when an interaction is “going bad.” If an algorithm for determining when an interaction is “going bad” is oversensitive, then too few interactions will be completed using automation, resulting in unnecessary cost. If such an algorithm is under-sensitive, then too few interactions will be transferred to a live interaction, resulting in lower customer satisfaction and, ultimately, lost customers and lost business. Further, even creating an algorithm for determining whether an interaction is “going bad” can be a difficult task. The teachings of this application can be used to address some of these deficiencies in the state of the art for systems and methods used in customer interactions.


SUMMARY OF THE INVENTION

In an embodiment, there is a computerized method for determining when to transfer a user from an automated service to a live agent comprising (a) classifying a set of historical interactions offline; (b) after said classifying step, training a set of classification models using said set of classified historical interactions to perform real-time classification of an interaction; and (c) after said training step, determining a log likelihood ratio, using said classification models, by computing a log of a prediction that an interaction is good over a prediction that an interaction is bad, and using that ratio to trigger whether to transfer said user from an automated interaction to a live interaction.


For the purpose of clarity, certain terms used in this application should be understood to have particular specialized meanings. For example, a “set of computer executable instructions” should be understood to include any combination of one or more computer instructions regardless of how organized, whether into one or more modules, one or more programs, a distributed system or any other organization. Also, as used in this application, “computer memory” should be understood to include any device or medium, or combination of devices and media, which is capable of storing computer readable and/or executable instructions and/or data. As used in this application, the term “model” should be understood to refer to a representation or a pattern for a thing. One example of a “model” is a classification model, such as an n-gram language model, which acts as a pattern for certain types of customer interactions.


A “customer interaction” (also, an interaction) should be understood to refer to a communication or set of communications through which information is obtained by a customer. Examples of automated interactions include dialog interactions where a customer is presented with prompts, and then responds to those prompts, and web page interactions, where a customer obtains information by following hyperlinks and providing input (i.e., through forms) on a web page. A live interaction takes place with a human being. Additionally, the term “monitor,” in the context of “monitoring the processing of a customer interaction” should be understood to refer to the act of observing, obtaining data about, or measuring the processing of the customer interaction.


An Interactive Voice Response (IVR) system is an automated telephony system that interacts with callers, gathers information, and routes calls to the appropriate recipient. An IVR accepts a combination of voice telephone input and touch-tone keypad selection and provides appropriate responses in the form of voice, fax, callback, e-mail, and perhaps other media. An IVR system consists of telephony equipment, software applications, a database and a supporting infrastructure.


Classifying shall refer to arranging or organizing by classes or assigning a classification to information. Transferring shall refer to conveying or causing to pass from one place, person, or thing to another.


Interaction refers to an exchange between a user and either the automated system or a live agent. Training shall refer to coaching in or accustoming to a mode of behavior or performance; making proficient with specialized instruction and practice. A live agent generally refers to a human customer service representative.


A logarithm is an exponent used in mathematical calculations to depict the perceived levels of variable quantities. Suppose three real numbers a, x, and y are related according to the following equation: x = a^y. Then y is defined as the base-a logarithm of x. This is written as follows: log_a x = y. As an example, consider the expression 100 = 10^2. This is equivalent to saying that the base-10 logarithm of 100 is 2; that is, log_10 100 = 2. Note also that 1000 = 10^3; thus log_10 1000 = 3. (With base-10 logarithms, the subscript 10 is often omitted, so we could write log 100 = 2 and log 1000 = 3.) When the base-10 logarithm of a quantity increases by 1, the quantity itself increases by a factor of 10. A 10-to-1 change in the size of a quantity, resulting in a logarithmic increase or decrease of 1, is called an order of magnitude. Thus, 1000 is one order of magnitude larger than 100. In an embodiment the log likelihood ratio may be computed using the formula log(P(x|LMgood)/P(x|LMbad)).
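To make the ratio concrete, the following minimal Python sketch (the function name and the probability values are illustrative, not taken from the patent) computes a log likelihood ratio from the two class-conditional likelihoods:

    import math

    def log_likelihood_ratio(p_good: float, p_bad: float) -> float:
        """log(P(x|LMgood) / P(x|LMbad)): positive when the interaction
        looks more like the 'good' model, negative when it looks 'bad'."""
        return math.log(p_good / p_bad)

    # With base-10 logs, a 10-to-1 change in the ratio moves the result
    # by exactly one order of magnitude:
    print(math.log10(0.9 / 0.09))            # 1.0
    print(log_likelihood_ratio(0.02, 0.31))  # negative: looks like a bad call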


In an embodiment, the log likelihood ratio is compared against a threshold value to determine whether said interaction is bad. A threshold value is the point that must be exceeded to begin producing a given effect or result or to elicit a response.


In an embodiment, the threshold value may be dynamically reset based on external factors. In computer terminology, dynamic usually means capable of action and/or change, while static means fixed. Both terms can be applied to a number of different types of things, such as programming languages (or components of programming languages), Web pages, and application programs. External factors refer to factors situated or being outside something; acting or coming from without (e.g., external influences). In an embodiment, an example of a dynamic factor may be how successful the dialog has been up to that point: if a caller has proffered a large number of inputs (as appropriately determined by business rules) during that interaction, the system has successfully recognized them, and a ‘dialog success’ metric has been reached, then the caller might be immediately transferred to an agent with the appropriate data, thereby shortening that subsequent agent interaction. If a caller has given few or no inputs and is in need of agent intervention, the caller may be transferred to a normal queue or escalated into a more verbose dialog.
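As a minimal sketch of the routing rule described in this example (the function name, the success target, and the return labels are hypothetical stand-ins for business rules):

    def route_on_intervention(recognized_inputs: int, success_target: int = 5) -> str:
        """Hypothetical routing rule: a caller who has already supplied many
        successfully recognized inputs goes straight to an agent along with
        the collected data; a caller with few or no inputs goes to a normal
        queue or a more verbose dialog."""
        if recognized_inputs >= success_target:
            return "transfer_to_agent_with_context"
        if recognized_inputs == 0:
            return "normal_queue_or_verbose_dialog"
        return "continue_automation"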


In an embodiment, the threshold value may be dynamically reset based on a lifetime value associated with said user. For discussion of lifetime value, please see co-pending patent application U.S. application Ser. No. 11/686,812, SYSTEM AND METHOD FOR CUSTOMER VALUE REALIZATION, which was filed on Mar. 15, 2007 (and which is incorporated by reference into this application).


In an embodiment, the models are based on a boostexter classification. For a discussion of boostexter classification, see R. E. Schapire and Y. Singer, “Boostexter: A boosting-based system for text categorization,” Machine Learning, vol. 39, no. 2/3, pp. 135-168, 2000, incorporated herein by reference.


In an embodiment, the boostexter classification is derived using Bayes' rule. Bayes' rule is a result in probability theory, which relates the conditional and marginal probability distributions of random variables. In some interpretations of probability, Bayes' theorem tells how to update or revise beliefs in light of new evidence a posteriori.


In an embodiment, the classification models are based on an N-gram based language model. An n-gram is a sub-sequence of n items from a given sequence. N-grams are used in various areas of statistical natural language processing and genetic sequence analysis. The items in question can be letters, words or base pairs according to the application. N-grams constitute a novel approach to developing classification models because there is some “dependence” or “memory” associated with present and past dialog states that has an immediate impact on the success of the current dialog. Dialogs may be designed based on a higher-level knowledge of the interaction and the business logic and processes that drive them, and on the recognition that dependence on prior states exists (e.g., if you are ready to pay with a credit card, then it is reasonable to assume that you have gone through several dialog states such as login, product selection, etc.). Also, the deeper the dialog (and the longer the call, in general), the more dependence there is between present and prior dialog states. N-grams can allow the classification models to predict dialog success/failure more reliably.
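As a sketch of the n-gram idea, the following minimal bigram model over caller response sequences (with add-one smoothing) could be trained once on “good” interactions and once on “bad” ones; it is an illustration under those assumptions, not the patent's implementation:

    from collections import defaultdict

    class BigramModel:
        """Minimal class-conditional bigram language model over
        caller response (or dialog state) sequences."""

        def __init__(self, vocab_size: int = 1000):
            self.vocab_size = vocab_size
            self.bigrams = defaultdict(int)
            self.unigrams = defaultdict(int)

        def train(self, sequences):
            for seq in sequences:
                padded = ["<s>"] + list(seq)
                for prev, cur in zip(padded, padded[1:]):
                    self.bigrams[(prev, cur)] += 1
                    self.unigrams[prev] += 1

        def prob(self, sequence) -> float:
            """P(sequence) under the model; add-one smoothing gives
            unseen transitions a small nonzero probability."""
            p = 1.0
            padded = ["<s>"] + list(sequence)
            for prev, cur in zip(padded, padded[1:]):
                p *= (self.bigrams[(prev, cur)] + 1) / (
                    self.unigrams[prev] + self.vocab_size)
            return p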


In an embodiment, the log likelihood ratio is re-calculated for each turn in said interaction. A turn refers to a single exchange within an interaction between a user and the automated system and/or the live agent. This allows detection rather than mere prediction of when a call is going bad. Furthermore, this allows detection to be performed at any point during the call (after any number of turn exchanges).


In an embodiment, there are computer-executable instructions encoded on a computer-readable medium for determining when to transfer a user from an automated service to a live agent comprising a) predicting whether an interaction is good, based on a classification model, using P(x|LMgood); b) predicting whether an interaction is bad, based on a classification model, using P(x|LMbad); c) calculating a log likelihood ratio using log(P(x|LMgood)/P(x|LMbad)); d) setting a threshold value for said log likelihood ratio; and e) if said log likelihood ratio falls below said threshold value, executing instructions to transfer said user from automation to said live agent. Calculating refers to making a computation or forming an estimate. Setting refers to assigning a value to a variable.
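A compact sketch of steps (a) through (e), assuming the two class-conditional likelihoods have already been produced by trained models:

    import math

    def should_transfer(p_good: float, p_bad: float, threshold: float) -> bool:
        """Compute log(P(x|LMgood)/P(x|LMbad)) and signal a transfer
        when the ratio falls below the threshold."""
        return math.log(p_good / p_bad) < threshold

    # Illustrative values: the 'bad' model fits far better, so transfer.
    if should_transfer(p_good=0.02, p_bad=0.31, threshold=-1.1):
        print("transfer user from automation to live agent")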


In an embodiment, said classification model is based on a boostexter classification. The boostexter classification may be derived using Bayes' rule. The classification model may also be based on an N-gram-based language model. In an embodiment the threshold value may be dynamically modified based on external factors. In an embodiment the threshold value may be dynamically reset based on a lifetime value associated with said user. In an embodiment, the computer-executable instructions recalculate the log likelihood ratio for each turn in said interaction.


In an embodiment, a computerized system for determining when to transfer a user from an automated service to a live agent comprises a) an interactive voice response system (IVR) and b) a monitoring module. The user interacts with said IVR. The monitoring module evaluates, after each turn in said IVR, a probability that said user's interaction with the IVR is good and a probability that said user's interaction with the IVR is bad. The monitoring module signals an alarm to bring in a human agent if a log of the ratio of said probabilities is below a predetermined threshold. A monitoring module is a computer with instructions encoded thereon to receive data regarding an interaction and calculate the probabilities associated therewith. An alarm may comprise computer-executable instructions to take a particular action. In an embodiment, the monitoring module evaluates said probabilities based on an N-gram based language model built on partial inputs. Partial inputs may comprise any subsection of an interaction. In an embodiment, the monitoring module evaluates said probabilities based on a boostexter classifier in an iterative algorithm. In an embodiment, the threshold may be dynamically reset based on a lifetime value associated with said user.
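A sketch of the loop such a monitoring module might run, re-evaluating the ratio after every turn (the model objects are assumed to expose a prob method, as in the bigram sketch earlier; all names are hypothetical):

    import math

    def monitor(turns, lm_good, lm_bad, threshold: float = -1.1):
        """Score the growing response prefix under both models after each
        turn; raise the alarm the first time the LLR drops below threshold."""
        history = []
        for turn in turns:
            history.append(turn)
            llr = math.log(lm_good.prob(history) / lm_bad.prob(history))
            if llr < threshold:
                return ("alarm", len(history))  # bring in a human agent
        return ("completed_in_automation", len(history))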





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 depicts an experimentally observed relationship between a log likelihood ratio and turn progression for “good” and “bad” interactions between a caller and an interactive voice response system.



FIG. 2 depicts an experimentally observed relationship between transfer thresholds based on log likelihood ratio and accuracy of transfer for interactions between a caller and an interactive voice response system.



FIGS. 3a-b depict a representation of a call flow for an interactive voice response system in the form of a state machine.



FIG. 4 depicts a flow diagram of an interaction involving a caller, an IVR, and an agent using an assisted automation (AA) GUI.





DETAILED DESCRIPTION

Some embodiments might make use of transaction information to learn, which might be accomplished with the help of machine learning software agents. This might allow the automated self-care system to improve its performance in the areas of user interface, speech, language and classification models, application logic, and/or other areas relevant to customer and/or agent interactions.


For ease of comprehension, this application is structured in the following manner. First, the application describes techniques for offline classification of interactions. Second, the application discusses how data regarding interactions classified offline can be used to train models used for online classification. Third, the application describes how those models can be used to detect when an interaction should be transferred from an automated interaction to a live interaction (e.g., when the interaction is “going bad”). To make concrete examples possible, the discussion below will be set forth in the context of interactive voice response system technology. However, upon reviewing this application, one of ordinary skill in the art will be able to apply the teachings of this application in contexts and for uses beyond those explicitly set forth herein, including a variety of modalities (e.g., web-based interactions). Thus, this application should be understood as illustrative only, and not limiting on any claims included in patent applications which claim the benefit of this application.


To facilitate the discussion of offline classification of interactions, the following assumptions should be made. First, it should be assumed that there is provided a corpus of data representing interactions which can be classified. Such a data corpus can be generated during the normal operation of an interactive voice response system, as most such systems are designed to automatically monitor calls, and create records of calls for quality assurance purposes. Second, it should be assumed that the records in the data corpus representing individual interactions preserve all information regarding the interaction. Examples of such information include prompts provided by the interactive voice response system, transcriptions of statements by a caller derived from an automatic speech recognizer, meanings ascribed to statements made by the caller, confidence scores for the transcriptions and/or meanings, and other relevant information which can be used to represent or describe an interaction. Of course, it should be understood that these assumptions are made for the sake of clarity only, and that they are not intended to be limiting on the scope of any claims included in applications claiming the benefit of this disclosure.


Agent assisted support may be triggered by a variety of mechanisms including, but not limited to, dialog quality or a specific call path. In some embodiments, a voice dialog (or some other type of interaction, such as a self care web site) might be defined in terms of specific states and transitions between states. Referring to FIG. 3b, an IVR application that automates tracking and shipping of a package may contain a number of states.


Given such a data corpus, the step of offline classification of interactions can be performed using a variety of techniques. For example, a live individual could review the records of individual interactions and classify the interactions “by hand.” Alternatively, a live individual could classify some of the interactions, and then those interactions could be used as training data for an automated system (e.g., a neural network) which would classify the remaining interactions. As another alternative, rules or logic functions could be developed and used to classify the interactions (e.g., IF a caller hung up THEN classify the interaction as “bad” ELSE classify the interaction as “good”). As yet another alternative, a finite state machine such as is depicted in FIG. 3 could be made based on the call flow for the interactive voice response system. Using this technique, calls which are accepted by the finite state machine (for example, because the call flow reaches a desired “final” state) can be classified as “good,” while calls which are not accepted by the finite state machine (for example, because no “final” state is reached due to caller disconnection or a network or server problem, or because an undesirable “final” state, such as the caller opting out of automation, is reached) could be classified as “bad.” Of course, it is also possible that combined methods of classification could be used. For example, classification could be based on a finite state machine, but additionally, calls which exhibit some identifiable characteristics (e.g., repeated “yes/no” confirmations, which generally happen only when an automatic speech recognizer is unsure of a transcription of customer speech) could also be classified as “bad.” Additional combinations, variations and alternative techniques could also be used in classification. Thus, the discussion above should be understood as illustrative only, and not limiting on the scope of any claims in applications claiming the benefit of this disclosure.
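A sketch of such a combined offline classifier follows; the record fields, state names, and confirmation limit are assumptions for illustration, not the patent's schema:

    def classify_offline(record: dict) -> str:
        """Combine a finite-state acceptance test with extra 'bad' heuristics."""
        # Rule: a hang-up is classified as bad outright.
        if record.get("caller_hung_up"):
            return "bad"
        # Heuristic: repeated yes/no confirmations usually mean the
        # speech recognizer was unsure of its transcriptions.
        if record.get("yes_no_confirmations", 0) >= 3:
            return "bad"
        # Acceptance: good only if a desired final state was reached.
        desired = {"order_complete", "info_delivered"}
        return "good" if record.get("final_state") in desired else "bad"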


A state-transition model may be used to illustrate certain methods of tracking dialog quality which might be used in some embodiments. As will be known to one of ordinary skill in the art, tracking dialog quality, as discussed below, is not limited to embodiments which utilize the state-transition model. One method which some embodiments might use to measure dialog quality comprises rules. For example, a transition from state 1 to state 2 might have a set of rules such as: no more than two loopbacks from state 2 to state 1, and no more than 10 seconds should be spent in state 1 (e.g., the system might time out if the caller does not speak or takes too long to respond).


In a rule-based dialog quality system, the relevant dialog events might be logged and sent to a dialog quality monitor for each call. A set of decision rules might then be applied to those events and a dialog quality rating might be generated by the decision agent. This decision can be produced for every state transition. Such a rule-generated dialog quality rating might then be used to measure the quality of a customer interaction.
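A minimal sketch of such decision rules applied to a call's logged events (the event types and limits are illustrative):

    def dialog_quality(events) -> str:
        """Apply decision rules to logged dialog events and produce a
        rating; this could be re-run on every state transition."""
        loopbacks = sum(1 for e in events if e["type"] == "loopback")
        timeouts = sum(1 for e in events if e["type"] == "timeout")
        if loopbacks > 2 or timeouts > 1:
            return "poor"
        if loopbacks or timeouts:
            return "acceptable"
        return "good"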


Another type of method which some embodiments might use to measure dialog quality comprises an analysis of probabilities. In some such probability analysis methods, each state might have a probability associated with it, which might be based on the likelihood of a caller reaching that state. The transitions between states might also have probabilities associated with them (or such probabilities associated with transitions might be the only probabilities measured). In some probability based methods, the probabilities might be trained based on heuristics and/or data from real deployments of the application. In some probability based dialog quality models, with every transition, an overall probability measure might be determined. Such a measure might be defined as how likely a particular dialog flow is in relation to the general population or to a target user group, though other types of probability measures, such as a measure of how likely a particular transition was relative to the general population or a particular user group, might also be utilized. Regardless, once a probability measure has been made, one way to use such a measure to determine dialog quality is to compare the probability measure to values corresponding to dialog quality states. For example, probability measures between 0 and 30% might be defined as poor dialog quality, probability measures between 31 and 70% might be defined as acceptable dialog quality, while probability measures between 71 and 99% might be defined as good dialog quality.
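A sketch of the probability-based variant, combining trained per-transition probabilities into an overall measure and mapping it onto the example bands above (the floor value for unseen transitions is an assumption):

    def path_probability(transitions, transition_probs) -> float:
        """Overall measure for a dialog flow: the product of trained
        per-transition probabilities."""
        p = 1.0
        for t in transitions:
            p *= transition_probs.get(t, 0.01)  # small floor for unseen transitions
        return p

    def quality_band(p: float) -> str:
        """Map the measure onto the illustrative bands given above."""
        if p <= 0.30:
            return "poor"
        if p <= 0.70:
            return "acceptable"
        return "good"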


In some embodiments, if the probability measurement falls too low, or if it indicates a high probability of a negative outcome from the interaction, some action might be taken, such as automatically transferring the interaction to an agent. When a caller is transferred to an agent, the agent might have access to the dialog quality measure and a description of how it was arrived at, i.e., what caused the interaction to deteriorate.


In some embodiments, specific events associated with a dialog can be obtained by mining call logs from existing or new applications in real-time to generate events of interest. One approach which might be taken is to list the desired events in a configuration file that the IVR application can use to generate events in real-time.
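One sketch of that configuration-file approach (the JSON layout and field names are assumptions):

    import json

    def filter_events(log_lines, config_path: str):
        """Yield only the log events named as events of interest in a
        configuration file, so they can be emitted in real time."""
        with open(config_path) as f:
            wanted = set(json.load(f)["desired_events"])
        for line in log_lines:
            event = json.loads(line)
            if event.get("name") in wanted:
                yield event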


It should be noted that, while the assumption that the records in the data corpus preserve all information regarding the interaction could allow for a high degree of flexibility in techniques used in offline classification, such complete data preservation is not necessary for all offline classification. For example, when classification is performed using a finite state machine, the classification could be based on some subset of the information stored for the interactions (e.g., the meanings assigned to statements by the caller), while the remaining information (e.g., confidence in transcriptions of caller statements) could be discarded, or could be maintained for some other purpose. Similarly, using a logic function in offline classification, it is possible to only use a subset of the information regarding individual interactions in the logic functions, while the remaining information could be discarded, or could be maintained for some other purpose. Thus, in some implementations, it is possible that offline classification could be performed even if some subset of information regarding an interaction is not preserved in the records of the data corpus. Additional variations on this approach could be implemented by those of ordinary skill in the art without undue experimentation. Therefore, it should be understood that the variations presented herein are intended to be illustrative only, and not limiting on the scope of any claims included in applications claiming the benefit of this disclosure.


After the offline classification has taken place, the classified interaction data can be used to train models which can be used to perform classification in an online manner. As was the case with the offline classification, a variety of techniques can be used for the training of models. For example, it is possible that the classified records can be used to train models such as N-gram based language models, or that the records could be used to train iterative models, such as the boostexter classification models described in R. E. Schapire and Y. Singer, “Boostexter: A boosting-based system for text categorization,” Machine Learning, vol. 39, no. 2/3, pp. 135-168, 2000, the teachings of which are hereby incorporated by reference. Thus, concretely, in an example in which the records were classified as “good” or “bad” using a finite state machine based on the meanings of responses given by a caller, a class conditional language model LMc can be built by taking the caller's response sequence as a word sequence. Using this technique, a given test input x consisting of a response sequence (x = r1, r2, . . . rn) can be classified by estimating the likelihood of the sequence from each LMc:







ĉ = arg max_{c ∈ C} P(x|LMc)

where C = {good, bad}.
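In code, this decision rule is a one-line argmax over the class-conditional models (a sketch reusing the bigram model above; the dictionary layout is an assumption):

    def classify(x, models) -> str:
        """Return the class c in C whose language model assigns the
        response sequence x the highest likelihood P(x|LMc)."""
        return max(models, key=lambda c: models[c].prob(x))

    # models = {"good": lm_good, "bad": lm_bad}   # trained on classified records
    # classify(["login", "track_package", "repeat_id"], models)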


Similarly, using a Boostexter classifier on an input x of responses as described above, the confidence that x is in a class c can be determined by using the following formula:

P(c|x) = (1 + exp(−2*f(x)))^(−1)

where








f(x) = Σ_{t=1}^{T} α_t h_t(x),
h_t(x) is a base classifier at t, and α_t is its weight, as in M. Karahan, D. Hakkani-Tür, G. Riccardi, and G. Tur, “Combining classifiers for spoken language understanding,” Proc. of ASRU, Virgin Islands, USA, November 2003, pp. 589-594, the teachings of which are hereby incorporated by reference in their entirety.
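A sketch of the two formulas above, with the base classifiers and their weights passed in as parallel lists standing in for a trained ensemble:

    import math

    def boosted_score(x, base_classifiers, weights) -> float:
        """f(x) = sum over t of alpha_t * h_t(x)."""
        return sum(a * h(x) for h, a in zip(base_classifiers, weights))

    def confidence(x, base_classifiers, weights) -> float:
        """P(c|x) = (1 + exp(-2 * f(x)))^-1."""
        return 1.0 / (1.0 + math.exp(-2.0 * boosted_score(x, base_classifiers, weights)))

    # Toy ensemble: each base classifier votes +1/-1 on membership in class c.
    hs = [lambda x: 1 if "repeat_id" in x else -1,
          lambda x: 1 if len(x) > 5 else -1]
    alphas = [0.7, 0.4]
    print(confidence(["login", "repeat_id"], hs, alphas))  # about 0.65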


Of course, additional variations could be implemented by those of ordinary skill in the art without undue experimentation in light of this disclosure. For example, while it is possible that models could be trained based on complete series of responses from a caller (i.e., for an interaction in which the caller made n responses, the input x would be (r1, r2, . . . rn)), it is also possible to use partial sequences, in order to more closely approximate online classification during training. This could be accomplished by chopping the individual records into turns, then feeding turn sequence prefixes (up to each particular turn) to the classifier being trained. Still further variations could also be implemented. Thus, the discussion herein of training models for online classification should be understood as being illustrative only, and not limiting.
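The prefix-chopping step itself is simple; a sketch:

    def turn_prefixes(responses):
        """Turn a completed interaction (r1..rn) into the prefixes (r1),
        (r1, r2), ..., (r1..rn), so the training data resembles what the
        online classifier will see after each turn."""
        return [responses[:i] for i in range(1, len(responses) + 1)]

    # turn_prefixes(["login", "track", "repeat_id"])
    # -> [["login"], ["login", "track"], ["login", "track", "repeat_id"]]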


Once a model for online classification has been trained, that model can actually be used to determine whether to transfer a call from an automated interaction to a live interaction. As was the case with the steps of offline interaction classification and model training described above, a variety of techniques can be used in this step as well. For example, it is possible that a classification model could be used directly to predict whether an interaction is “good” and therefore should continue in automation or “bad” and therefore should be transferred to a live individual (e.g., using a language model as described above, the interaction could be transferred to a live individual if P(x|LMgood) falls below a set threshold, or if P(x|LMbad) rises above a set threshold).


Alternatively, it is also possible that models such as described above could be used as inputs to other functions which could be used for determining whether an interaction should be transferred. For example, a log likelihood ratio (LLR) could be used based on the good/bad classification (e.g., using a language model as described previously, the measurement log(P(x|LMgood)/P(x|LMbad)) could be used to determine whether to transfer an interaction out of automation). Other variations, such as a log likelihood ratio for a Boostexter classifier derived using Bayes' rule, could be used as well. In any case, the function or models could then be applied to actual interactions to determine, in real time, whether an interaction should be transferred out of automation, thereby increasing customer satisfaction, by transferring interactions before they “go bad,” and at the same time reducing costs, by avoiding unnecessary transfers. A good call never hits the threshold. The LLR is compared against the threshold value to determine when to bring in a human agent. The use of logs allows an easier and more accurate determination of the best threshold value because taking the log dynamically scales the probability ratios into a uniform range. The invention achieves this by taking the log of the prediction that a call is good over the prediction that a call is bad. By taking the log of the ratio of the probabilities, the system more accurately distinguishes between a single bad turn (particularly one from which the IVR might recover) and a situation from which the system cannot recover (e.g., a continuous series of bad turns). The log operation helps by providing a uniform ‘range’ of values to work with in determining the optimal thresholds across a large number of turns and variations.


Further variations on the theme of online classification of interactions are also possible. For example, various systems for online classification implemented according to this disclosure could vary from one another based on whether a transfer out of automation is triggered based on static or dynamic factors. To help clarify the nature of this variation, consider that it is possible to use the classified data for testing as well as for training (e.g., by chopping up the records into individual turns, as was described previously in the context of training in a manner similar to online classification). Using this data it is possible to empirically measure the likely behavior of different methods for online classification of interactions as “good” or “bad.” For example, given a set of training data with an average log likelihood ratio as shown in FIG. 1, it is possible to test different log likelihood ratios as potential triggers for transfer out of automation, an exemplary set of data which could be obtained from such a test being shown in FIG. 2. Then, to create a static trigger, the test data could be used to determine the log likelihood ratio which results in the greatest accuracy (negative 1.1, in the case of the test data for FIG. 2), which would be set as the threshold for transfer of an interaction out of automation. Similarly, to create a dynamic trigger, the test data could be used to determine potential thresholds for transfer of interactions which could later be varied based on external factors (e.g., if a larger number of agents is available, the threshold could be set to transfer more calls, so as to avoid excess capacity, while if a smaller number of agents is available, the threshold could be set to transfer fewer calls, to avoid callers being placed in a hold queue).
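A sketch of both trigger styles: picking the static threshold that maximizes accuracy on held-out (LLR, label) pairs, and shifting it with a hypothetical agent-availability factor (the adjustment formula is an assumption):

    def best_static_threshold(candidates, labeled_llrs) -> float:
        """Pick the candidate threshold with the highest accuracy, where a
        call is predicted 'bad' when its LLR falls below the threshold."""
        def accuracy(th):
            hits = sum((llr < th) == (label == "bad")
                       for llr, label in labeled_llrs)
            return hits / len(labeled_llrs)
        return max(candidates, key=accuracy)

    def dynamic_threshold(static: float, agents_free: int, agents_total: int) -> float:
        """More free agents -> raise the threshold (transfer more calls);
        fewer free agents -> lower it (transfer fewer)."""
        return static + (agents_free / agents_total - 0.5)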


Additional variations are possible as well. For example, while a log likelihood ratio measurement based on a language model based classification with a transfer threshold of −1.1 has been found to have an accuracy of 0.830, a precision of 0.831, a recall of 0.834 and an F-score of 0.830, different methods for determining transfer could be used to achieve different results (e.g., a log likelihood ratio using a boostexter classification and transfer threshold of −1.1 has been found to have an accuracy of 0.733, a precision of 0.797, a recall of 0.708, and an F-score of 0.702). Further, while the discussion of model creation above focused on the meanings ascribed to statements by a caller, it is also possible that other features of an interaction (e.g., confidence levels in transcriptions by an automatic speech recognizer) could be used. Additionally, learning might take place, rather than being based on classified data, through the use of unsupervised learning techniques (e.g., outlier detection).


In addition to, or as an alternative to, the interfaces described above, certain embodiments might comprise a method for measuring the quality of interactions, and/or a method of implementing assisted automation over existing care applications. FIG. 4 depicts the integration of certain products, services, service channels, data sources and agent contexts which might be implemented in some embodiments of the invention.


An IVR might be configured to present a dialog organized into specific states. In some embodiments, those states might be designed so that, at a high level, they correspond to business processes. In some embodiments, either in addition to, or as an alternative to, high level correspondence with business processes, the states might correspond to specific interaction points (such as a single system prompt and a corresponding caller response) at a low level. Further, in some embodiments, states might be defined in terms of the current state and context, the previous state, the next state, and the possible transitions between states. For coordination between an IVR and an assisted automation enabled application, one or more of the states might have an integration definition for a corresponding assisted automation application.


An illustrative example of an interaction between a caller, an agent, and an assisted automation enabled application is set forth below:


Assume that the caller speaks the tracking number (say, a 16-character alpha-digit string) and the IVR/speech recognizer has been unable to find a match, and has asked the caller to repeat the tracking number multiple times. In some embodiments, this might trigger intervention by an agent, who might be given access to context information related to the customer's interaction. The agent might then enter the tracking number (which he or she might have heard from the caller's prior recording) into a GUI interface to the IVR, which might then proceed to the next state in the voice dialog. The agent might be given a choice to provide input (tracking #) in a surreptitious mode (without the caller's knowledge) or in a direct mode. In some embodiments, the agent might be instructed to directly interact with a caller if the caller has repeatedly been unable to reach a desired state using the IVR. In some embodiments, after the agent has provided the correct tracking number, the agent might have the option of placing the interaction back into automation, which might free the agent to process one or more additional transactions.


Further variations could also be practiced by those of ordinary skill in the art without undue experimentation. Thus, the disclosure set forth herein should be understood as illustrative only, and not limiting.

Claims
  • 1. A computerized method for determining when to transfer a user from an automated service to a live agent comprising: a) training a set of classification models using a set of classified historical interactions to perform real-time classification of an interaction, wherein said classified historical interactions comprise, at least, prompts provided by the interactive voice response system, transcriptions of statements by a caller derived from an automatic speech recognizer, meanings ascribed to statements made by the caller, and confidence scores for the transcriptions; and b) during an automated interaction between the user and the automated service, using a computer to calculate a log likelihood ratio, using said classification models, to determine whether to transfer said user, from said automated interaction to a live interaction, by computing a log of a prediction that the interaction is good over a prediction that the interaction is bad; wherein: 1) said log likelihood ratio is computed using the formula log(P(x|LMgood)/P(x|LMbad)); 2) LMgood is a first classification model trained using records of one or more previous interactions classified as good; 3) LMbad is a second classification model trained using records of one or more previous interactions classified as bad; and 4) x is a set of responses made by the user during the interaction.
  • 2. A computerized method as claimed in claim 1 wherein the prediction that the interaction is good and the prediction that the interaction is bad are made without respect to the topic of the interaction.
  • 3. A computerized method as claimed in claim 1 wherein: a) said log likelihood ratio is compared against a threshold value to determine whether said interaction is bad; and b) said threshold value may be dynamically reset based on external factors.
  • 4. A computerized method as claimed in claim 3 wherein said threshold value may be dynamically reset based on a lifetime value of a relationship with said user.
  • 5. A computerized method as claimed in claim 4 wherein said classification models are based on a boostexter classification.
  • 6. A computerized method as claimed in claim 5 wherein said boostexter classification is derived using Bayes' rule.
  • 7. A computerized method as claimed in claim 4 wherein said classification models are based on an N-gram language model.
  • 8. A computerized method as claimed in claim 4 wherein said log likelihood ratio is re-calculated for each turn in said automated interaction and wherein said re-calculation takes place after each turn in real time during said automated interaction.
  • 9. A non-transitory computer readable medium storing computer executable instructions to configure a computer to determine when to transfer a user from an automated service to a live agent by performing steps comprising: a) predicting whether an interaction is good, based on a first classification model trained using records of one or more previous interactions classified as good, using P(x|LMgood); b) predicting whether the interaction is bad, based on a second classification model trained using records of one or more previous interactions classified as bad, using P(x|LMbad); c) calculating a log likelihood ratio using log(P(x|LMgood)/P(x|LMbad)); d) comparing said log likelihood ratio to a threshold value, such that if said log likelihood ratio falls below said threshold value, instructions are executed to transfer said user from automation to said live agent; wherein: i) x is a set of responses made by the user during the interaction; and ii) the one or more previous interactions classified as good and the one or more previous interactions classified as bad comprise, at least, prompts provided by an interactive voice response system, transcriptions of statements by a caller derived from an automatic speech recognizer, meanings ascribed to statements made by the caller, and confidence scores for the transcriptions.
  • 10. The non-transitory computer readable medium as claimed in claim 9 wherein said classification model is based on a boostexter classification.
  • 11. The non-transitory computer readable medium as claimed in claim 10 wherein said boostexter classification is derived using Bayes' rule.
  • 12. The non-transitory computer readable medium as claimed in claim 9 wherein said classification model is based on an N-gram language model.
  • 13. The non-transitory computer readable medium as claimed in claim 9 wherein said threshold value may be dynamically modified based on external factors.
  • 14. The non-transitory computer readable medium as claimed in claim 13 wherein said threshold value may be dynamically reset based on a lifetime value of a relationship with said user.
  • 15. The non-transitory computer readable medium as claimed in claim 14 wherein said instructions recalculate the log likelihood ratio for each turn in said interaction.
  • 16. A computerized system for determining when to transfer a user from an automated service to a live agent comprising: a) an interactive voice response system (IVR); b) a monitoring module; wherein: i) said user interacts with said IVR; ii) said monitoring module evaluates, after each turn in said IVR, a probability that said user's interaction with the IVR is good and a probability that said user's interaction with the IVR is bad; iii) said monitoring module signals an alarm to bring in a human agent if a log of the ratio of said good probability over said bad probability is below a predetermined threshold; iv) said monitoring module evaluates the probability that the user's interaction with the IVR is good using P(x|LMgood); v) the monitoring module evaluates the probability that the user's interaction with the IVR is bad using P(x|LMbad); vi) x is a set of responses made by the user during the interaction; vii) LMgood is a first classification model trained using records of one or more previous interactions classified as good; viii) LMbad is a second classification model trained using records of one or more previous interactions classified as bad; and ix) said one or more previous interactions classified as good and said one or more previous interactions classified as bad comprise, at least, prompts provided by the interactive voice response system, transcriptions of statements by a caller derived from an automatic speech recognizer, meanings ascribed to statements made by the caller, and confidence scores for the transcriptions.
  • 17. A computerized system as claimed in claim 16 wherein: a) x = r1, r2, . . . rn; and b) each r is a response made by the user in the user's interaction with the IVR.
  • 18. A computerized system as claimed in claim 16 wherein said monitoring module evaluates said probabilities based on a boostexter classifier in an iterative algorithm.
  • 19. A computerized system as claimed in claim 16 wherein said threshold may be dynamically reset based on a lifetime value of a relationship with said user.
PRIORITY

This non-provisional application claims priority from U.S. Provisional 60/747,896, SYSTEM AND METHOD FOR ASSISTED AUTOMATION, which was filed on May 22, 2006. It also claims priority from U.S. Provisional 60/908,044, SYSTEM AND METHOD FOR AUTOMATED CUSTOMER SERVICE WITH CONTINGENT LIVE INTERACTION, which was filed on Mar. 26, 2007. It also claims priority, as a continuation-in-part from U.S. application Ser. No. 11/686,812, SYSTEM AND METHOD FOR CUSTOMER VALUE REALIZATION, which was filed on Mar. 15, 2007. Each of these applications is hereby incorporated by reference.

WO 0135617 May 2001 WO
WO 0137136 May 2001 WO
WO 0139028 May 2001 WO
WO 0139082 May 2001 WO
WO 0139086 May 2001 WO
WO 0182123 Nov 2001 WO
WO 0209399 Jan 2002 WO
WO 0219603 Mar 2002 WO
WO 0227426 Apr 2002 WO
WO 02061730 Aug 2002 WO
WO 02073331 Sep 2002 WO
WO 03009175 Jan 2003 WO
WO 03021377 Mar 2003 WO
WO 03069874 Aug 2003 WO
WO 2004059805 May 2004 WO
WO 2004081720 Sep 2004 WO
WO 2004091184 Oct 2004 WO
WO 2004107094 Dec 2004 WO
WO 2005006116 Jan 2005 WO
WO 2005011240 Feb 2005 WO
WO 2005013094 Feb 2005 WO
WO 2005069595 Jul 2005 WO
WO 2006050503 May 2006 WO
WO 2006062854 Jun 2006 WO
WO 2007033300 Mar 2007 WO
Non-Patent Literature Citations (96)
Entry
acl.ldc.upenn.edu/W/W99/W99-0306.pdf (visited on Aug. 22, 2007).
dingo.sbs.arizona.edu/~sandiway/ling538o/lecture1.pdf (visited on Aug. 22, 2007).
en.wikipedia.org/wiki/Microsoft_Agent (visited on Aug. 22, 2007).
en.wikipedia.org/wiki/Wizard_of_Oz_experiment (visited on Aug. 22, 2007).
liveops.com/news/news_07-0116.html (visited on Aug. 22, 2007).
www.aumtechinc.com/CVCC/cvcc11.0.htm (visited on Aug. 22, 2007).
www.beamyourscreen.com/US/Welcome.aspx (visited Aug. 24, 2007).
www.bultreebank.org/SProLaC/paper05.pdf (visited on Aug. 22, 2007).
www.callcenterdemo.com (visited on Aug. 22, 2007).
www.changingcallcenters.com (visited on Aug. 22, 2007).
www.crm2day.com/news/crm/115147.php (visited on Aug. 22, 2007).
www.csdl2.computer.org/persagen/DLAbsToc (visited Sep. 13, 2007).
www.eff.org/patent (visited on Aug. 22, 2007).
www.eff.org/patent/wanted/patent.php?p=firepond (visited on Aug. 22, 2007).
www.egain.com (visited Aug. 24, 2007).
www.instantservice.com (visited Aug. 24, 2007).
www.kana.com (visited Aug. 24, 2007).
www.learn.serebra.com/trainingclasses/index (visited Sep. 13, 2007).
www.livecare.it/en/par—business (visited Aug. 24, 2007).
www.livehelper.com/products (visited Aug. 24, 2007).
www.liveperson.com/enterprise/proficient.asp (visited Aug. 24, 2007).
www.microsoft.com/serviceproviders/solutions/ccf.mspx (visited on Aug. 22, 2007).
www.oracle.com/siebel/index (visited Aug. 24, 2007).
www.pageshare.com (visited Aug. 24, 2007).
www.realmarket.com/news/firepond082703.html (visited on Aug. 22, 2007).
www.serebra.com/naa/index (visited Sep. 13, 2007).
www.speechcycle.com/about/management_team.asp (visited on Aug. 22, 2007).
www.spoken.com (visited on Aug. 22, 2007).
www.spoken.com/who/our_story.asp (visited on Aug. 22, 2007).
www.training-classes.com/course_hierarchy/courses/4322_call_center_structures_customer_relationships.php.
www.velaro.com (visited Aug. 24, 2007).
www.virtualhold.com (visited on Aug. 22, 2007).
www.volusion.com (visited Aug. 24, 2007).
www.vyew.com/content (visited Aug. 24, 2007).
www.webdialogs.com (visited Aug. 24, 2007).
www.webmeetpro.com/technology.asp (visited Aug. 24, 2007).
www.wired.com/news/business/0,64038-0.html (visited on Aug. 22, 2007).
Adams, Scott, Dilbert cartoon.
Alam, Hisham; Stand and Deliver (Industry Trend or Event) (Editorial), Intelligent Enterprise, Mar. 27, 2001, p. 44, vol. 4, No. 5, CMP Media, Inc., USA. (Abstract only reviewed and provided.).
Avaya. Advanced Multichannel Contact Management Avaya Interaction Center White Paper. Apr. 2005.
Bernstel, J.B., Speak right up! (bank call centers), ABA Bank Marketing, Nov. 2001, pp. 16-21, vol. 33, No. 9, American Bankers Assoc., USA. (Abstract only reviewed and provided.).
Burbach, Stacey; Niedenthal, Ashley; Siebel Leverages Telephony@Work Technology as Part of Siebel CRM OnDemand Release 7; Siebel Incorporates Telephony@Work Technology in CRM OnDemand Release 7, Jun. 6, 2005, ProQuest, Chicago, IL. (Abstract only reviewed and provided.).
Caruso, Jeff; Standards Committee to Define Call Center Terms (industry Reporting Standards Steering Committee) (Technology Information), CommunicationsWeek, Apr. 29, 1996, 1(2) pp., No. 608, USA. (Abstract only reviewed and provided.).
Chan, C.; Chen, Liqiang; Chen, Lin-Li; Development of an Intelligent Case-Based System for Help Desk Operations, May 9-12, 1999, pp. 1062-1067, vol. 2, Electrical and Computer Engineering, 1999 IEEE Canadian Conference on, Edmonton, Alta., Canada. (Abstract only reviewed and provided.).
Chiu, Dickson K.W.; Chan, Wesley C.W.; Lam, Gary K.W.; Cheung, S.C.; and Luk, Franklin T., An Event Driven Approach to Customer Relationship Management in E-Brokerage Industry, Jan. 2003, 36th Annual Hawaii International Conference on System Sciences, USA. (Abstract only reviewed and provided.).
Choudhary, Alok; Dubey, Pradeep; Liao, Wei-Keng; Liu, Ying; Memik, Gokhan; Pisharath, Jayaprakash; Performance Evaluation and Characterization of Scalable Data Mining Algorithms, 2004, pp. 620-625, vol. 16, Proc. IASTED Int. Conf. Parall. Distrib. Comput. Syst., USA.
Finke, M.; Lapata, M.; Lavie, A.; Levin, L.; Tomokiyo, L.M.; Polzin, T.; Ries, K.; Waibel, A.; Zechner, K.; Clarity: Inferring Discourse Structure from Speech, Mar. 23-25, 1998, pp. 25-32, Proceedings of 1998 Spring Symposium Series Applying Machine Learning to Discourse Processing, USA. (Abstract only reviewed and provided.).
Fowler, Dennis. The Personal Touch: How E-Businesses Are Using Customer Relations Management to Thwart Competitors and Bolster Their Bottom Lines. Dec. 2000.
Gao, Jianfeng; Microsoft Research Asia and Chin-Yew Lin, Information Sciences Institute, Univ. of S. California, Introduction to the Special Issue on Statistical Language Modeling, ACM Transactions on Asian Language Information Processing, vol. 3, No. 2, Jun. 2004, pp. 87-93.
Getting Better Every Day: How Marrying Customer Relationship Marketing to Continuous Improvement Brings Sustained Growth, Aug. 2005, pp. 24-25, vol. 21, No. 8, Strategic Direction, USA. (Abstract only reviewed and provided.).
Gupta, Narendra; Gokhan Tur, Dilek Hakkani-Tür, Member, IEEE, Srinivas Bangalore, Giuseppe Riccardi, Senior Member, IEEE, and Mazin Gilbert, Senior Member, IEEE. The AT&T Spoken Language Understanding System. IEEE Transactions on Audio, Speech, and Language Processing, vol. 14, No. 1, Jan. 2006.
Hu, Xunlei Rose, and Eric Atwell. A survey of machine learning approaches to analysis of large corpora. School of Computing, University of Leeds, U.K. LS2 9JT.
IBM TDB, #7 Business Method to Improve Problem Diagnosis in Current Systems Using a Combination of XML and VoiceXML, Jan. 1, 2002, IPCOM000014964D, USA.
Iyer, A.V.; Deshpande, V.; Zhengping, Wu; A Postponement Model for Demand Management, Aug. 2003, pp. 983-1002, vol. 49, No. 8, Management Science, USA. (Abstract only reviewed and provided.).
Karahan, Mercan; Dilek Hakkani-Tür, Giuseppe Riccardi, Gokhan Tur. Combining Classifiers for Spoken Language Understanding. © 2003 IEEE.
Langkilde, Irene; Marilyn Walker; Jerry Wright, Allen Gorin, Diane Litman. Automatic Prediction of Problematic Human-Computer Dialogues in ‘How May I Help You?’ AT&T Labs—Research.
Lewis, Michael, Research note: A Dynamic Programming Approach to Customer Relationship Pricing, Jun. 2005, pp. 986-994, vol. 51, No. 6, Management Science, USA. (Abstract only reviewed and provided.).
Lindsay, Jeff; Schuh, James; Reade, Walter; Peterson, Karin; Mc Kinney, Christopher. The Historic Use of Computerized Tools for Marketing and Market Research: A Brief Survey, Dec. 27, 2001, www.ip.com, USA. (Abstract only reviewed and provided.).
Litman, Diane J. and Shimei Pan. Designing and Evaluating an Adaptive Spoken Dialogue System. © 2002 Kluwer Academic Publishers.
Loren Struck, Dennis. Business Rule Continuous Requirement Environment. A Dissertation Submitted to the Graduate Council in Partial Fulfillment of the Requirements for the Degree of Doctor of Computer Science. Colorado Springs, Colorado, May 1999.
Marsico, K., Call Centers: Today's New Profit Centers, 1995-1996, pp. 14-18, vol. 10, No. 4, AT&T Technology, USA. (Abstract only reviewed and provided.).
Mohri, Mehryar; Fernando Pereira, Michael Riley. Weighted Finite-State Transducers in Speech Recognition. Article submitted to Computer Speech and Language.
Paek, Tim & Eric Horvitz. Optimizing Automated Call Routing by Integrating Spoken Dialog Models with Queuing Models. (timpaek/horvitz@Microsoft.com) Microsoft, Redmond, WA.
Peng, Fuchun and Dale Schuurmans. Combining Naïve Bayes and n-Gram Language Models for Text Classification. (f3peng, dale@cs.uwaterloo.ca).
Pradhan, Sameer S.; Ward, Wayne H.; Estimating Semantic Confidence for Spoken Dialogue Systems, 2002, pp. 233-236, ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing—Proceedings v 1, USA. (Abstract only reviewed and provided.).
Riccardi, G.; Gorin, A.L.; Ljolje, A.; Riley, M.; A Spoken Language System for Automated Call Routing, Apr. 21, 1997, pp. 1143-1146, International Conference on Acoustics, Speech and Signal Processing, USA. (Abstract only reviewed and provided.).
Ritter, Julie, Crossroads Customer Solutions Expands Siebel Contact OnDemand Deployment with Siebel CRM OnDemand, Apr. 12, 2004, ProQuest, Chicago, IL. (Abstract only reviewed and provided.).
Schapire, Robert E., and Yoram Singer. BoosTexter: A Boosting-based System for Text Categorization. (schapire@research.att.com; singer@research.att.com).
Schmidt, M.S., Identifying Speakers with Support Vector Networks, Jul. 8-12, 1996, pp. 305-314, Proceedings of 28th Symposium on the Interface of Computing Science and Statistics (Graph-Image-Vision.), USA. (Abstract only reviewed and provided.).
Seiya and Masaru (Toshiba), #3 Knowledge Management Improvement of Help Desk Operation by Q & A Case Referencing, Toshiba Review, 2001, vol. 56, No. 5, pp. 28-31.
Shriberg, E.; Bates, R.; Stolcke, A.; Taylor, P.; Jurafsky, D.; Ries, K.; Coccaro, N.; Martin, R.; Meteer, M.; Van Ess-Dykema, C.; Can Prosody Aid the Automatic Classification of Dialog Acts in Conversational Speech?, 1998, pp. 443-492, vol. 41, Language and Speech, USA. (Abstract only reviewed and provided.).
SSA Global Strengthens CRM Functionality for Customers' Inbound and Outbound Marketing Initiatives; SSA Marketing 7.0 introduces major enhancements to the company's industry-leading marketing automation solution, Apr. 3, 2006, ProQuest, Chicago, IL. (Abstract only reviewed and provided.).
Steinborn, D. Time flies, even waiting (bank telephone answering), Bank Systems + Technology, Sep. 1993, pp. 39, 41, vol. 30, No. 9, La Salle Nat. Bank, Chicago, IL. (Abstract only reviewed and provided.).
Stolcke, A.; Ries, K.; Coccaro, N.; Shriberg, E.; Bates, R.; Jurafsky, D.; Taylor, P.; Martin, R.; Van Ess-Dykema, C.; Meteer, M.; Dialogue Act Modeling for Automatic Tagging and Recognition of Conversational Speech, Sep. 2000, pp. 339-373, vol. 26, No. 3, Computational Linguistics, USA. (Abstract only reviewed and provided.).
Suhm, Bernhard and Pat Peterson. Data-Driven Methodology for Evaluating and Optimizing Call Center IVRs. Received May 29, 2001; revised Aug. 22, 2001. (bsuhm@bbn.com; patp@bbn.com).
Tang, Min; Bryan Pellom, Kadri Hacioglu. Call-Type Classification and Unsupervised Training for the Call Center Domain. (tagm,pellom,hacioglu@cslr.colorado.edu).
Thiel, Beth Miller; Weber, Porter Novelli Kimberly; PeopleSoft Announces General Availability of PeopleSoft Enterprise CRM 8.9 Services, Jun. 16, 2004, ProQuest, Chicago, IL. (Abstract only reviewed and provided.).
To use and abuse (Voice processing), What to Buy for Business, Jan. 1995, pp. 2-20, No. 166, UK. (Abstract only reviewed and provided.).
Wahlster, Wolfgang. The Role of Natural Language in Advanced Knowledge-Based Systems. In: H. Winter (ed.): Artificial Intelligence and Man-Machine Systems, Berlin: Springer.
Walker, Marilyn A.; Irene Langkilde-Geary, Helen Wright Hastie, Jerry Wright, Allen Gorin. Automatically Training a Problematic Dialogue Predictor for a Spoken Dialogue System. © 2002 Al Access Foundation and Morgan Kaufmann Publishers, published May 2002.
Walker, Marilyn, Learning to Predict Problematic Situations in a Spoken Dialogue System: Experiments with How May I Help You?, 2000, pp. 210-217, ACM International Conference Proceeding Series; vol. 4 archive, Proceedings of the first conference on North American Chapter of the Association for Computational Linguistics, Seattle, WA. (Abstract only reviewed and provided.).
Whittaker, Scahill, Attwater and Geenhow, Interactive Voice Technology for Telecommunications Applications: #10 Practical Issues in the application of speech technology to network and customer service applications, (1998) IVTTA '98 Proceedings, 1998 IEEE 4th Workshop, Published Sep. 1998, pp. 185-190, USA.
Yan, Lian; R.H. Wolniewicz, R. Dodier. Abstract: Predicting Customer Behavior in Telecommunications. IEEE Intelligent Systems, Mar.-Apr. 2004.
Young, Alan; Innis, Rafael; System and Method for Developing Business Process Policies, Jul. 3, 2002, Patent Publication 2003005154, Computer Associates International, Inc., USA. (Abstract only reviewed and provided.).
Young, Howard; Adiano, Cynthia; Enand, Navin; Ernst, Martha; Thompson, Harvey; Zia, May Sun; Customer Relationship Management Business Method, Jul. 5, 2005, Patent Application 723519, USA. (Abstract only reviewed and provided.).
Zweig, G.; O. Siohan, G. Saon, B. Ramabhadran, D. Povey, L. Mangu and B. Kingsbury. Automated Quality Monitoring in the Call Center With ASR and Maximum Entropy. IBM T.J. Watson Research Center, Yorktown Heights, NY 10598 (ICASSP 2006).
Alpaydin, Ethem; Introduction to Machine Learning, Abe Books.
Mitchell, Tom, Machine Learning, Abe Books.
Russell, Stuart J. and Peter Norvig, Artificial Intelligence: A Modern Approach, Abe Books.
Witten, Ian H., and Eibe Frank, Data Mining: Practical Machine Learning Tools and Techniques, Abe Books.
Office Action dated Oct. 17, 2011 for U.S. Appl. No. 11/686,812.
U.S. Appl. No. 10/862,482, filed Jun. 7, 2004, Irwin, et al.
U.S. Appl. No. 10/044,848, filed Jan. 27, 2005, Irwin, et al.
U.S. Appl. No. 11/291,562, filed Dec. 1, 2005, Shomo, et al.
U.S. Appl. No. 11/686,562, filed Mar. 15, 2007, Irwin, et al.
U.S. Appl. No. 13/208,953, filed Aug. 12, 2011, Coy, et al.
Provisional Applications (2)
Number Date Country
60747896 May 2006 US
60908044 Mar 2007 US
Continuation in Parts (1)
Number Date Country
Parent 11686812 Mar 2007 US
Child 11751976 US