Generally, a bot is a software application that can execute automated tasks over a network, such as the Internet or a phone line. For example, a bot can be designed to conduct a conversation with a user via text, auditory, and/or visual methods to simulate human conversation. In particular, a bot may utilize natural language processing systems or scan for keywords from a user input, make one or more decisions with regard to the input, and then generate a reply to the user input. Typically, bots are implemented in dialogue systems, natural language processing systems, and the like to perform various practical tasks, such as user support and information acquisition.
The following presents a simplified summary in order to provide a basic understanding of some aspects described herein. This summary is not an extensive overview of the claimed subject matter. This summary is not intended to identify key or critical elements of the claimed subject matter nor delineate the scope of the claimed subject matter. This summary's sole purpose is to present some concepts of the claimed subject matter in a simplified form as a prelude to the more detailed description that is presented later.
In an embodiment described herein, a method that enables a handoff between a bot and a human is described. The method includes monitoring a conversation between a user and a first bot to detect a trigger, wherein the trigger is detected by assessing one or more factors associated with the user, wherein assessing the one or more factors comprises a determination that a user frustration level exceeds a predetermined threshold. The method also includes, in response to a detected trigger, determining a type of a handoff to be executed, wherein the handoff is to a second bot or a human support agent and the type of handoff is based on an experience of the user. Additionally, the method includes executing the determined handoff to the second bot or a human support agent, wherein the second bot or a human support agent engages the user in conversation to execute functionality desired by the user.
In another embodiment described herein, a system is described. The system comprises an intent monitor. The intent monitor is to monitor a conversation between a user and a first bot to detect a trigger, wherein the trigger is detected by assessing one or more factors associated with the user, wherein assessing the one or more factors comprises a determination that a user frustration level exceeds a predetermined threshold. In response to a detected trigger, the intent monitor is to determine a type of a handoff to be executed, wherein the handoff is to a second bot or a human support agent and the type of handoff is based on an experience of the user. Additionally, the intent monitor is to execute the determined handoff to the second bot or a human support agent, wherein the second bot or a human support agent engages the user in conversation to execute functionality desired by the user.
In another embodiment described herein, a method that enables a two-way handoff between a bot and a human is described. The method includes monitoring a conversation between a user and a bot to detect a trigger, wherein the trigger is detected by assessing one or more factors associated with the user, wherein assessing the one or more factors comprises a determination that a user frustration level exceeds a predetermined threshold. The method also includes, in response to a detected trigger, determining a type of a first handoff to be executed, wherein the first handoff is to a human support agent and the type of handoff is based on an experience of the user. Additionally, the method includes executing the determined first handoff from the bot to the human support agent, wherein the human support agent engages the user in conversation to execute functionality desired by the user. Further, the method includes monitoring a conversation between the user and the human support agent to detect a signal, wherein the signal is an indication that the functionality desired by the user is complete. The method also includes, in response to the detected signal, executing a second handoff from the human support agent to the bot.
The following description and the annexed drawings set forth in detail certain illustrative aspects of the claimed subject matter. These aspects are indicative, however, of a few of the various ways in which the principles of the innovation may be employed and the claimed subject matter is intended to include all such aspects and their equivalents. Other advantages and novel features of the claimed subject matter will become apparent from the following detailed description of the innovation when considered in conjunction with the drawings.
The following detailed description may be better understood by referencing the accompanying drawings, which contain specific examples of numerous features of the disclosed subject matter.
Various aspects of the disclosure are described more fully below with reference to the accompanying drawings, which form a part hereof, and which show specific exemplary aspects. However, different aspects of the disclosure may be implemented in many different forms and should not be construed as limited to the aspects set forth herein; rather, these aspects are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the aspects to those skilled in the art. Aspects may be practiced as methods, systems or devices. Accordingly, aspects may take the form of a hardware implementation, an entirely software implementation or an implementation combining software and hardware aspects. The following detailed description is, therefore, not to be taken in a limiting sense.
Generally, bots are software applications that run automated tasks over a network, such as the Internet or a phone line. Bots are designed to conduct a conversation with a user via auditory or visual methods to simulate human conversation. A bot may utilize natural language processing systems or scan for keywords from a user input to generate a response to the user input. Often, bots are utilized in tasks such as user service or information acquisition. An organization may also use human support agents to engage in tasks such as service, information acquisition, or other support. However, bots typically operate without human support agent engagement. As described herein, a support operator is a bot or human support agent that is assigned tasks with respect to a user.
As an example, one or more bots may be deployed in a technical support scenario. Each bot may be customized according to a particular technical support need. A user may engage a bot in conversation to obtain a solution to a technical problem. Consider a user that owns a device for use with a computing system. In the event that the device fails to be recognized by or communicate with the system, the installation of a device driver would likely solve the issue. When the user contacts the organization for support, a bot can diagnose the issue, determine a solution to the issue, and if needed, complete any remaining tasks to resolve the issue. In this example, a bot may ask a series of questions to diagnose the user issue. The bot may also be able to automatically determine a solution to the issue by scanning the user's computer system. Finally, the bot may automatically install a device driver to resolve the user's issue. In this manner, a bot can efficiently and quickly resolve the user's issue.
However, circumstances may arise where a bot cannot satisfy a user request or another bot or human support agent is better suited to address the user request. The present techniques describe a handoff between a bot and a human. According to the present techniques, the handoff may be seamless or transparent to a user. For example, the user may be unaware a handoff has occurred. In other examples, the user may be presented with an option to approve a handoff. In operation, the bot can respond to user input based on a conversational context of the input. In response to a trigger, the current bot can automatically connect the user with a second bot or a human support agent. The trigger may be, for example, based on the current bot recognizing that a second bot or human support agent would better address the user's needs when compared to the current bot. The trigger may also be based on a frustration level of the user. In response to the trigger, the second bot or human support agent may then engage in conversation with the user. When appropriate, the first bot can automatically reengage the user, without any directive to reengage the user from the human support agent. Further, the first bot may use feedback to train and update the learning models to improve the bot's responses over time based on each user. By contrast, typical bots fail to implement any adaptive intelligence when issues are encountered, thereby resulting in a negative user experience.
As a preliminary matter, some of the figures describe concepts in the context of one or more structural components, referred to as functionalities, modules, features, elements, etc. The various components shown in the figures can be implemented in any manner, for example, by software, hardware (e.g., discrete logic components, etc.), firmware, and so on, or any combination of these implementations. In one embodiment, the various components may reflect the use of corresponding components in an actual implementation. In other embodiments, any single component illustrated in the figures may be implemented by a number of actual components. The depiction of any two or more separate components in the figures may reflect different functions performed by a single actual component.
Other figures describe the concepts in flowchart form. In this form, certain operations are described as constituting distinct blocks performed in a certain order. Such implementations are exemplary and non-limiting. Certain blocks described herein can be grouped together and performed in a single operation, certain blocks can be broken apart into plural component blocks, and certain blocks can be performed in an order that differs from that which is illustrated herein, including a parallel manner of performing the blocks. The blocks shown in the flowcharts can be implemented by software, hardware, firmware, and the like, or any combination of these implementations. As used herein, hardware may include computer systems, discrete logic components, such as application specific integrated circuits (ASICs), and the like, as well as any combinations thereof.
As for terminology, the phrase “configured to” encompasses any way that any kind of structural component can be constructed to perform an identified operation. The structural component can be configured to perform an operation using software, hardware, firmware and the like, or any combinations thereof. For example, the phrase “configured to” can refer to a logic circuit structure of a hardware element that is to implement the associated functionality. The phrase “configured to” can also refer to a logic circuit structure of a hardware element that is to implement the coding design of associated functionality of firmware or software. The term “module” refers to a structural element that can be implemented using any suitable hardware (e.g., a processor, among others), software (e.g., an application, among others), firmware, or any combination of hardware, software, and firmware.
The term “logic” encompasses any functionality for performing a task. For instance, each operation illustrated in the flowcharts corresponds to logic for performing that operation. An operation can be performed using software, hardware, firmware, etc., or any combinations thereof.
As utilized herein, terms “component,” “system,” “client” and the like are intended to refer to a computer-related entity, either hardware, software (e.g., in execution), and/or firmware, or a combination thereof. For example, a component can be a process running on a processor, an object, an executable, a program, a function, a library, a subroutine, and/or a computer or a combination of software and hardware. By way of illustration, both an application running on a server and the server can be a component. One or more components can reside within a process and a component can be localized on one computer and/or distributed between two or more computers.
Furthermore, the claimed subject matter may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques to produce software, firmware, hardware, or any combination thereof to control a computer to implement the disclosed subject matter. The term “article of manufacture” as used herein is intended to encompass a computer program accessible from any tangible, computer-readable device, or media.
Computer-readable storage media can include but are not limited to magnetic storage devices (e.g., hard disk, floppy disk, and magnetic strips, among others), optical disks (e.g., compact disk (CD), and digital versatile disk (DVD), among others), smart cards, and flash memory devices (e.g., card, stick, and key drive, among others). In contrast, computer-readable media generally (i.e., not storage media) may additionally include communication media such as transmission media for wireless signals and the like. The communication media may include cables, such as fiber optic cables, coaxial cables, twisted-pair cables, and the like. Moreover, transmission media for wireless signals may include hardware that enables the transmission of wireless signals such as broadcast radio waves, cellular radio waves, microwaves, and infrared signals. In some cases, the transmission media for wireless signals is a component of a physical layer of a networking stack of an electronic device.
As illustrated in
In the example of
A bot 104 or bot 105 may include hardware, software, or any combination thereof, to engage a user in conversation across the network. In particular, the bot may receive user input and generate a response based on the input. One or more bots may be available in the network environment 100. Each bot may support various functionality within an organization. Bot functionality may be, for example providing information on products and services that the user intends to acquire or use, or answering “how-to” questions that the user may have on a specific functionality of an acquired product or service. Bot functionality may also include diagnosing problems occurring with the user's products or services, and taking corrective action to resolve those problems.
In the example of
At the machine learning module 130, observations may be made in response to the user input. In particular, the machine learning module 130 enables the bot to learn from the entities and intents extracted from the user input via natural language processing. As used herein, an intent is an intention of a user. An entity is a descriptive aspect of the intent. In some cases, the machine learning algorithms according to the present techniques may be based on a classification algorithm that includes a decision tree. The decision tree may include a series of observations that are used to arrive at a conclusion. Various entities and intents may be mapped to elements of the decision tree. The bot may access a knowledge base 136 as needed. Elements of the decision tree may be used by the response generation module 134 to determine a pre-written response to the original user input. The response generation module may transmit the pre-written response to a dialogue client, where it is rendered for the user in a text, auditory, or visual format.
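The keyword-scanning path from user input to pre-written response can be sketched as follows. This is a minimal illustration only; the names (INTENT_KEYWORDS, ENTITY_KEYWORDS, RESPONSES, respond) are assumptions for the sketch and are not part of the described system:

```python
# Pre-written responses keyed by (intent, entity); None matches any entity.
RESPONSES = {
    ("driver_issue", "printer"): "Let's reinstall your printer driver.",
    ("driver_issue", None): "Which device is having trouble?",
}

INTENT_KEYWORDS = {"driver_issue": ["not recognized", "won't load", "driver"]}
ENTITY_KEYWORDS = {"printer": ["printer"], "webcam": ["webcam"]}


def extract_intent(text):
    """Scan the input for keywords that map to a known intent."""
    text = text.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(k in text for k in keywords):
            return intent
    return None


def extract_entity(text):
    """Scan the input for a descriptive aspect (entity) of the intent."""
    text = text.lower()
    for entity, keywords in ENTITY_KEYWORDS.items():
        if any(k in text for k in keywords):
            return entity
    return None


def respond(text):
    """Look up a pre-written response for the extracted intent/entity pair."""
    intent, entity = extract_intent(text), extract_entity(text)
    return RESPONSES.get(
        (intent, entity),
        RESPONSES.get((intent, None), "Could you rephrase that?"),
    )
```

In this sketch, a more specific (intent, entity) pair takes precedence over an intent-only match, mirroring how mapped decision-tree elements narrow toward a response.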
The intent monitor 132 may identify an intent of the language used by the user during engagement with the bot. The intent monitor 132 may monitor turns of conversation between a user and a bot via the bot's logic. During the turns of conversation, the intent monitor assesses the conversation between the user and the bot to determine if a second bot or a human support agent is better suited to engage with the user when compared to the currently selected bot. In embodiments, the intent monitor may predict that the user will subsequently encounter a point at which the bot can no longer be of assistance. The prediction may be derived from one or more factors that are assessed by the intent monitor. In an embodiment, each factor contributes to a user frustration level. If the user frustration level exceeds a predetermined value, a handoff of the conversation from the bot to a second bot or human support agent may occur. The handoff may be temporary, partial, or full as described below.
For example, at the root node 202, a user input is obtained. The branches from the root node 202 each correspond to an observation in response to the user input at root node 202. In embodiments, the particular branch or outcome selected may be in response to input from the user. The branches lead to internal node 208, node 210, and node 212, respectively. Each internal node corresponds to an outcome of the observations of the user input at the root node 202. The branches from each internal node 208, 210, and 212 correspond to a further observation from their respective internal node. Each node may continue to split into one or more nodes based on further observations until a leaf node is reached. As illustrated, node 214, node 216, node 218, node 220, node 222, and node 224 represent leaf nodes of the decision tree 200. In embodiments, a leaf node represents a final outcome based on a series of user inputs in response to the bot response generation. In this manner, user responses may guide a traversal of the decision tree based on a scripted series of dialogue from the bot. For ease of description, a limited number of nodes are illustrated in the decision tree 200. However, a decision tree according to the present techniques may be of any size and include any number of nodes. In embodiments, the decision tree may grow as the bot learns additional information, conditions, and resources that can be integrated into the decision tree 200. For ease of description, the machine learning as described herein is illustrated using the decision tree as a classification model. However, any number of classification models may be used in the machine learning according to the present techniques. For example, a decision graph may be implemented according to the present techniques. In a decision graph, there is no single direction to traverse the graph, and the nodes may be linked many-to-many. Other models used according to the present techniques may include random decision forests, decision jungles, and the like.
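The node-and-branch traversal described above might be sketched as follows, assuming user answers arrive as a scripted series of replies; the Node structure and the sample tree contents are illustrative assumptions, not the claimed implementation:

```python
from dataclasses import dataclass, field


@dataclass
class Node:
    """A decision-tree node: a bot prompt (or final outcome) plus branches."""
    prompt: str
    branches: dict = field(default_factory=dict)  # user answer -> child Node

    def is_leaf(self):
        # A node with no branches represents a final outcome.
        return not self.branches


def traverse(root, answers):
    """Walk the tree, selecting a branch per user answer, until a leaf."""
    node = root
    for answer in answers:
        if node.is_leaf():
            break
        node = node.branches[answer]
    return node.prompt


# Minimal tree: the root asks about the symptom, leaves give the resolution.
tree = Node("Is the device recognized by the system?", {
    "no": Node("Install the device driver."),
    "yes": Node("Is the device powered on?", {
        "no": Node("Power on the device."),
        "yes": Node("Escalate to a support agent."),
    }),
})
```

A decision graph would differ only in that child nodes could be shared (linked many-to-many) rather than forming a strict tree.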
At each node in the example of
The execution of a handoff of control of a conversation may be based on one or more factors that are analyzed to predict user frustration or limitations. In embodiments, the one or more factors may trigger a handoff of control of the conversation. A trigger, as used herein, may describe a point in time at which the bot cannot be of assistance to the user. Additionally, a trigger may occur when the current bot recognizes that a second bot or human support agent has a higher likelihood of satisfying a user need when compared to the current bot. The trigger may be detected by assessing one or more factors associated with the user, wherein assessing the one or more factors includes a determination that a user frustration level exceeds a predetermined threshold.
User frustration may indicate a negative emotional status of a user. User limitations may indicate a particular constraint on abilities of the user. In embodiments, user frustration may be caused by the particular limitations of a user. Generally, a trigger may be detected when a predicted user frustration level or user limitation is exceeded. The prediction may be based on any number of factors, including but not limited to a semantic analysis of the textual content, structural and grammatical analysis of the textual content, command/verb dictionaries, text format, trigger words/phrases, a user intent, user attributes, and any combinations thereof. The prediction may also be based on historic data of user frustrations or limitations associated with the current user that caused a handoff. In embodiments, the prediction may also be based on aggregated historic data of users with known information similar to that of the current user. Finally, in embodiments, the prediction may be based on general historic data for all users. In this manner, the present techniques enable a handoff scheme that is operable across all operating systems, browsers, and the like. As a result of machine learning, the bot may become more efficient over time and can expand its knowledge of factors and how they impact the determination of a trigger.
As illustrated in
Each user input 306 is evaluated in view of each of the plurality of factors 302. In embodiments, the user inputs 306A, 306B, . . . , 306N, are evaluated to determine if a particular factor is detected in the input. For example, a user input 306A may be evaluated for the particular sentiment found in the input. If a negative sentiment is determined, the user input 306A may be attributed with a full score with respect to sentiment analysis. For each factor, the user input can be scored based on the presence of the particular factor in the user input. The scores with respect to each user input may be summed to calculate a total score for each input.
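The assessment-matrix idea above can be sketched as a set of factor detectors applied to each user input, with per-input totals summed across factors. The detectors, keyword lists, and scoring scale (0.0 to 1.0 per factor) are illustrative assumptions:

```python
def negative_sentiment(text):
    """Crude placeholder sentiment check; a real system would use a model."""
    return 1.0 if any(w in text.lower() for w in ("hate", "useless", "angry")) else 0.0


def all_caps(text):
    """Full score when the input is written entirely in capital letters."""
    letters = [c for c in text if c.isalpha()]
    return 1.0 if letters and text.upper() == text else 0.0


# Columns of the assessment matrix: one detector per factor.
FACTORS = [negative_sentiment, all_caps]


def score_inputs(inputs):
    """Return a total score per user input, summed across all factors."""
    return [sum(f(text) for f in FACTORS) for text in inputs]
```

Each row of the resulting list corresponds to one user input; higher totals suggest higher frustration for that turn of conversation.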
As illustrated in
Another factor that may indicate user frustration or limitations is a text format of the user input. When the user input is in text form, a text format is the particular symbols selected by the user to convey information. For example, a text format of the user input may be writing in all capitalized letters. Writing in all capitalized letters may indicate “yelling” or other displeasure from the user. In such a scenario, a user may be experiencing a high level of frustration and conveys the frustration through the text format. In some cases, a user may write using digits as substitutes for letters. For example, a user may input “I h@ve an i$$ue with my device.” The grammatically correct input would be “I have an issue with my device.” In this scenario, the text format selected by the user may indicate that a human support agent would be better suited to interpret the user's use of alternative symbols.
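The text-format factor described above might be detected with simple checks for all-capitalized "yelling" and for symbols substituted inside words (e.g. "h@ve", "i$$ue"). The minimum-length threshold and the symbol set are assumptions for the sketch:

```python
import re


def is_yelling(text):
    """True when the input's letters are all uppercase (assumed >= 5 letters)."""
    letters = [c for c in text if c.isalpha()]
    return len(letters) >= 5 and all(c.isupper() for c in letters)


def has_symbol_substitution(text):
    """True when a symbol or digit appears embedded between letters, e.g. 'h@ve'."""
    return bool(re.search(r"[A-Za-z][@$!0-9][A-Za-z]", text))
```

Either detector returning True could contribute to the frustration score for that input, or hint that a human support agent is better suited to interpret the user's symbols.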
In another example, consider a scenario where a user is asked to verify a license product key that consists of five sets of five digits. For example, the user may continually respond with “the license is 834-92374-65297-84163.” However, such a product key may be unintelligible to the bot, which expects five sets of five digits. In an example, if this is the only factor that indicates a human is needed, the bot may generate a response that asks the user for the full five sets of five digits. However, if other factors indicate that a user limitation is met or exceeded and the user does not understand how to obtain the full product key, a human support agent may engage the user to help determine the product key.
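The "five sets of five digits" check in this scenario could be a simple format validation before the bot accepts the key; the hyphen-separated layout and pattern are assumptions about how such a key might be written:

```python
import re

# Five groups of exactly five digits, separated by hyphens (assumed layout).
KEY_PATTERN = re.compile(r"^\d{5}(-\d{5}){4}$")


def is_valid_product_key(key):
    """True when the key matches the expected five-sets-of-five format."""
    return bool(KEY_PATTERN.match(key.strip()))
```

A failed validation on its own would prompt the bot to re-ask; repeated failures combined with other factors would feed the handoff decision.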
Particular trigger words or phrases that suggest failure of the bot to engage the user successfully may also be another factor that may indicate user frustration or limitations. A failure to engage the user is an inability of the bot to obtain intelligible input from the user in response to bot-generated responses. Certain words or phrases may be designated as trigger words, where one or more occurrences of the trigger words is a factor in user frustrations or limitations. For example, expletives may be considered trigger words. Additionally, derogatory words and phrases may also be considered in determining user frustration or limitations. In embodiments, the use of a trigger word or phrase may automatically trigger a handoff to a human support agent.
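The trigger-word factor can be as simple as a phrase lookup over the lowercased input, where any hit immediately triggers the handoff. The phrase list here is a placeholder assumption (the document's examples are expletives and derogatory phrases):

```python
# Designated trigger phrases; a single occurrence triggers a handoff.
TRIGGER_PHRASES = {"this is ridiculous", "speak to a human", "useless bot"}


def contains_trigger(text):
    """True when any designated trigger word or phrase appears in the input."""
    text = text.lower()
    return any(phrase in text for phrase in TRIGGER_PHRASES)
```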
User intent is a determination of what the user desires in view of the actual user input and can be a factor in determining a user frustration level. When the user input is a typed statement, the user intent is extracted from the text as an actual purpose of the user in engaging in a conversation with the bot. The user intent may be derived by classifying terms of the query based on natural language processing. For example, in response to a bot prompting, "How can I help you?", a user may input "won't load, non-functional, cannot use device." Here, an exemplary user intent may be the correction of a software issue associated with device driver installation. However, this is not immediately evident from the user input. Once derived, the intent monitor may determine that a particular user intent may be best addressed by a human support agent or another bot. Additionally, user attributes may be factors that indicate user frustration or limitations and can be used to predict a handoff. User attributes as described herein may be a part of historic data, gathered over time. In embodiments, the user attributes include, but are not limited to, a native language, geolocation, time zone, prior communications, or any combination thereof. Additional factors may include a number of turns in the conversation, particular terminology used in the conversation (such as terminology that indicates fraud or illegal activity), and technical signals (such as a Bluetooth radio being turned off).
The one or more factors as discussed above may be assessed by the intent monitor to detect a trigger of a handoff from a bot to another bot or human support agent. In particular, each factor may be evaluated for the level of user frustration or user limitation it represents. The factors may be evaluated alone or in combination with other factors to determine a level of user frustration or limitation. When a trigger is detected, each factor may also be analyzed to determine a type of handoff.
The assessment matrix 300 is illustrated for exemplary purposes and should not be viewed as limiting. Various techniques may be used to evaluate one or more factors to determine a level of user frustration, either alone or in combination with other factors. Accordingly, factors as described herein may be evaluated according to different techniques. Moreover, each factor may be scored, weighted, ranked, or any other ordering applied to analyze an importance level of the factor on user frustration levels. The scoring as described is for exemplary purposes and should not be viewed as limiting on the technique used to determine an impact of a factor on the user frustration level.
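One way the scoring, weighting, or ranking mentioned above could combine factor evaluations into a trigger decision is a weighted sum compared against a predetermined threshold. The weights, threshold, and factor names below are illustrative assumptions, not values from the described system:

```python
# Per-factor weights reflecting each factor's assumed importance.
WEIGHTS = {"sentiment": 2.0, "trigger_words": 3.0, "text_format": 1.0}
THRESHOLD = 2.5  # predetermined frustration threshold (assumed)


def frustration_level(scores):
    """scores: dict of factor name -> 0.0-1.0 score for the current input."""
    return sum(WEIGHTS.get(name, 1.0) * value for name, value in scores.items())


def should_hand_off(scores):
    """Trigger a handoff when the weighted frustration level exceeds the threshold."""
    return frustration_level(scores) > THRESHOLD
```

Under these assumed weights, a trigger-word hit alone (weight 3.0) exceeds the threshold, while a text-format hit alone does not, matching the idea that some factors trigger a handoff automatically.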
By implementing intent monitors to analyze user input within a conversation, the bot can continually assess the factors in view of the input to determine the most appropriate support operator before the user reaches a likely end-point within the bot's logic. When the intent monitor detects a back-and-forth progression of the user through the bot's logic that will likely encounter a point at which the bot can no longer be of assistance, the system passes the conversation to a second bot or human support agent. In examples, the system may transparently pass along the subject of the bot dialog, the user's language, and a current date and time to a support agent selector. In embodiments, the support agent selector may use the received information to determine a particular bot or human support agent best suited to resolve the issue presented by the user.
If the user reaches a bot end-point, the user may be presented with a message like “I'm sorry I've not been able to resolve this issue. However, while we've been interacting I've contacted George in Las Colinas, Tex., who is the best qualified agent to assist you further. Would you like me to connect you with George through phone or chat?” The user is then able to decide in real-time how the user wants to interact with the human support agent. When the user selects the desired contact method, he or she is either transferred to the now-waiting George in a chat window or asked to enter his or her phone number so that George can call immediately.
The handoff as described in the above example may be a partial handoff, where the human support agent can supplement responses from the bot with human responses. In some embodiments, the human support agent may also change or modify input to the bot. The handoff may also be a temporary handoff, where control of the conversation is passed from the first bot to the human support agent for a particular amount of time or until an issue is resolved. An event may occur that automatically reconnects the conversation with the bot. Accordingly, in examples where control of the conversation is passed to a human support agent that generates responses to the user, the original bot may automatically reengage the conversation in response to a resolution of an issue. Furthermore, a handoff may transfer control of a conversation from a first bot to a second bot that may be better able to handle a user request.
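The partial, temporary, and full handoff types just described can be sketched as an enumeration plus a toy selection policy; the policy rules (frustration cutoff, scope check) are assumptions for illustration only:

```python
from enum import Enum


class HandoffType(Enum):
    PARTIAL = "partial"      # agent supplements the bot's responses
    TEMPORARY = "temporary"  # agent takes over until the issue is resolved
    FULL = "full"            # agent or second bot takes over entirely


def choose_handoff(frustration, issue_in_bot_scope):
    """Toy policy: issue scope and frustration level pick the handoff type."""
    if issue_in_bot_scope and frustration < 0.5:
        return HandoffType.PARTIAL      # bot stays primary, agent assists
    if issue_in_bot_scope:
        return HandoffType.TEMPORARY    # agent takes over, bot reengages later
    return HandoffType.FULL             # issue is outside the bot's ability
```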
The intent monitor 204 may monitor user input as captured by the dialogue client 402. In particular, the intent monitor 204 may scan user input and continually monitor factors that are used to predict a level of user frustration or user limitation. In the event that a predicted level of user frustration or user limitation exceeds a threshold level, the intent monitor may hand off the conversation to another support operator, such as a human support agent or another bot. In response to handing off the conversation to a human support agent, the bot 104 may monitor the conversation as user input is captured at the dialogue client 402 and transmitted to the human support agent 404. In some cases, the bot 104 may automatically reengage the user in conversation when an issue is resolved or some other event occurs.
Generally, communication flow from the dialogue client 402 represents input from a user at the dialogue client. Natural language processing may be applied to user input captured at the dialogue client 402. In the example of
Based on the generated response from the bot at reference number 414, the user may input additional information at the dialogue client 402. Again, at reference number 416, the user input is intercepted or monitored by the intent monitor 204. For this second exchange of user input, the intent monitor does not detect a user frustration level or a user limitation that triggers a handoff from bot engagement with the user to human support agent engagement with the user. Thus, at reference number 418, the intent monitor 204 forwards the user input to the bot 104. Accordingly, at reference number 420, the bot generates a response to the user input which is transmitted to the dialogue client 402. The transmission of a response as indicated at reference number 420 completes a second turn of conversation between the user and the bot.
Based on the generated response at reference number 420, the user may input additional information at the dialogue client 402. At reference number 422, the user input is intercepted or monitored by the intent monitor 204. In response to the input at reference number 422, the intent monitor 204 detects a trigger such that the intent monitor 204 does not forward the user input to the bot 104. Instead, the intent monitor reroutes the user input to a human support agent 404. Thus, at reference number 424 user input is transmitted to a human support agent 404. In response to the user input at reference number 424, the human support agent 404 generates a response. Accordingly, at reference number 426 the human generated response is transmitted to the user. However, the bot 104 monitors each response generated by the human support agent 404. If the bot 104 detects an event indicating that the bot should reengage the user in conversation, a handoff may occur from the human support agent 404 to the bot 104. As illustrated in the example of
In embodiments, the detected signal may be an indication that some functionality desired by the user is complete. For example, a particular task or issue may cause a rise in the user frustration level and a first handoff from a first bot to a human support agent. When the particular task or issue is resolved, the first bot may reengage in conversation with the user. Note that the user input may have a plurality of tasks or issues. Particular tasks or issues may cause an increase in user frustration levels as indicated by one or more factors. The resolution of one or more tasks or issues, or the completion of a particular functionality as desired by the user, may cause a reduction in a user frustration level. In embodiments, when a second bot or human support agent has caused a reduction in a user frustration level, the first bot may reengage the conversation with the user. The signal may be generated by the intent monitor.
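A reengagement check of the kind described above could be sketched as follows. This is a minimal sketch only; the `Task` structure and the particular conditions are hypothetical stand-ins for whatever task tracking and frustration measurement an embodiment actually uses.

```python
from dataclasses import dataclass

@dataclass
class Task:
    """One task or issue extracted from the user input (hypothetical)."""
    name: str
    resolved: bool = False

def should_reengage_first_bot(tasks, frustration_before, frustration_after):
    """Signal the first bot to reengage when every escalated task is
    resolved, or when the second bot / human support agent has caused
    a reduction in the user frustration level."""
    all_resolved = all(task.resolved for task in tasks)
    frustration_reduced = frustration_after < frustration_before
    return all_resolved or frustration_reduced
```

An intent monitor could evaluate such a predicate after each turn generated by the second bot or human support agent and emit the reengagement signal when it holds.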
In response to the response generated by the human support agent 404, the user provides additional input at reference number 430, which is captured by the dialogue client 402. The intent monitor 204 may not initiate a handoff while the human support agent 404 is engaged in conversation with the user; however, the intent monitor 204 may still monitor the conversation between the user and the human support agent 404. During this exchange or turn of communication, the intent monitor may operate to capture data, feedback, known information about the user, or other attributes that may be used to grow or train a model that determines when handoffs occur.
At reference number 432, the user input is transmitted to the human support agent 404. In response to the user input at reference number 432, the human support agent generates a response that is transmitted as illustrated at reference number 434. In response to the human support agent generated response at reference number 434, the bot 104 may detect an event or signal from the human support agent 404 that indicates the bot should reengage the conversation with the user. In embodiments, the bot may also reengage the conversation with the user when the bot determines that an issue that caused the human support agent to engage in the conversation with the user is resolved. Accordingly, in response to a signal, event, or other indicator, the bot 104 reengages the user in conversation. Accordingly, at reference number 436 the bot generates a response to the user input obtained at reference number 430. The conversation between the user and the bot 104 continues with a new turn of conversation at reference numbers 438 and 440. The intent monitor 204 monitors the input, and in response to no triggers detected in the user input, forwards the user input to the bot 104 at reference number 440.
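The turn-by-turn routing described in the exchanges above can be condensed into a single routing step. The sketch below is illustrative only: the state dictionary and the two predicate parameters are hypothetical stand-ins for the intent monitor's trigger detector and the bot's reengagement detector.

```python
def route_user_input(user_input, state, trigger_detected, reengage_detected):
    """Hypothetical routing step for one conversational turn.

    state["handler"] is "bot" or "human"; trigger_detected and
    reengage_detected stand in for the intent monitor's handoff
    trigger and the bot's reengagement event, respectively."""
    if state["handler"] == "bot":
        if trigger_detected(user_input):
            state["handler"] = "human"   # handoff to the human support agent
    else:
        if reengage_detected(user_input):
            state["handler"] = "bot"     # second handoff back to the bot
    return state["handler"]
```

Each user turn is routed to whichever party currently holds the conversation, and control moves only when a trigger or a reengagement event is detected.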
As illustrated in
The handoff may occur from a bot to a human support agent or vice versa. As illustrated in
At block 504, in response to a detected trigger, a type of a handoff to be executed is determined. The handoff is to a second bot or a human support agent and the type of handoff is based on an experience of a user. As used herein, the experience of the user may refer to user attributes and historic user information obtained from previous conversations with the user. In embodiments, the type of handoff may be partial or temporary. Moreover, the handoff may be from the first bot to a second bot, wherein the second bot has functionality that can address information detected in the user input that is not available at the first bot. At block 506, the determined handoff to the second bot or a human support agent may be executed, wherein the second bot or a human support agent engages the user in conversation to execute functionality desired by the user.
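One way to picture the determination at block 504 is as a small policy over the user's attributes and history. The policy below is purely illustrative; the profile fields, the capability check, and the escalation cutoff are hypothetical, not part of the described embodiments.

```python
def choose_handoff(user_profile, missing_capability):
    """Pick a handoff target and type (illustrative policy only).

    user_profile holds user attributes and historic data from previous
    conversations; missing_capability names functionality the first bot
    lacks, if any."""
    if missing_capability:
        # A second bot that has the needed functionality gets a full handoff.
        return ("second_bot", "full")
    if user_profile.get("prior_escalations", 0) > 2:
        # A frequently escalated user temporarily goes to a human agent.
        return ("human_agent", "temporary")
    # Otherwise a human agent merely supplements the bot's responses.
    return ("human_agent", "partial")
```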
In this manner, the present techniques incorporate multiple attributes into a model that predicts when a handoff should occur. Further, the handoff may be partial or temporary. For example, in a partial handoff, the human support agent is injected into the conversation and does not need to take control of the conversation completely or indefinitely. Instead, the human support agent can supplement the information provided by the bot. For example, the human support agent can re-route the user elsewhere, provide a link or other information, or resolve an ambiguity. The human support agent may also re-route the user to another bot. When the human support agent's tasks are complete, control of the conversation may automatically return to the bot.
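In a partial handoff of the kind described above, the agent's contribution can be thought of as appended to, rather than replacing, the bot's reply. A minimal sketch, assuming a simple string-based response format that is hypothetical here:

```python
def compose_response(bot_response, agent_supplement=None):
    """In a partial handoff, the human support agent's note is appended
    to the bot's reply rather than replacing it; with no supplement,
    the bot's reply passes through unchanged."""
    if agent_supplement:
        return f"{bot_response}\n[Agent]: {agent_supplement}"
    return bot_response
```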
In one embodiment, the process flow diagram of
At block 604, in response to a detected trigger, a type of a handoff to be executed is determined. In embodiments, the type of handoff may be partial or temporary. For example, a human support agent may engage in a partial handoff, where the human support agent supplements responses generated by the bot. The handoff may be temporary, where the human support agent may temporarily take complete control of the conversation. At block 606, the determined handoff to the second bot or a human support agent may be executed, wherein the second bot or a human support agent engages the user in conversation to execute functionality desired by the user.
At block 608, a conversation between the user and the second bot or human support agent is monitored to detect a signal to reengage the first bot in the conversation. In embodiments, the first bot monitors the conversation between the user and the second bot or human support agent to detect a signal to reengage in conversation with the user. The signal for the first bot to reengage in conversation with the user may be a determination that the functionality desired by the user is complete. At block 610, in response to a detected signal, a second handoff from the second bot or human support agent to the first bot is executed. In this manner, the present techniques enable a two-way handoff from a first bot to a second bot or human support agent, and from the second bot or human support agent back to the first bot. Through the two-way handoff, the first bot may automatically reengage in conversation with the user, or the second bot or human support agent can deliberately transfer control back to the first bot at an appropriate juncture. The present techniques thereby enable a single human support agent to scale human-provided support more effectively across a larger number of user-bot interactions.
In one embodiment, the process flow diagram of
Turning to
Turning to
The processor 802 executes instructions retrieved from the memory 804 (and/or from computer-readable media, such as computer-readable medium 808 of
Further still, the illustrated computing device 800 includes a network communication component 812 for interconnecting this computing device with other devices and/or services over a computer network, including other user devices, such as user computing devices 114, 118, and 122 as illustrated in
The computing device 800 also includes a bot 814. The bot 814 may include a plurality of independent executable modules that are configured (in execution) as follows. In operation/execution, a language processing module 818 may process input from a user to derive a request, query, or other information from the user input. In embodiments, natural language processing may be used by the machine learning module to extract entities and intents from the user input. The machine learning module 820 may learn from the entities and intents extracted from the user input via natural language processing. The machine learning algorithms according to the present techniques may be based on a classification algorithm that includes a decision tree. The decision tree may include a series of observations that are used to arrive at a conclusion. Various entities and intents may be mapped to elements of the decision tree. The intent monitor module 822 may monitor a conversation between a user and a first bot to detect a trigger, wherein the trigger is detected by assessing one or more factors associated with the user, wherein assessing the one or more factors comprises a determination that a user frustration level exceeds a predetermined threshold. The bot may access a knowledge base 816 as needed. Elements of the decision tree may be used by the response generation module 824 to determine a pre-written response to the original user input. The response generation module may transmit the pre-written response to a dialogue client, where it is rendered for the user in a text, auditory, or visual format.
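The mapping from extracted intents and entities through decision tree elements to a pre-written response can be sketched minimally as a nested lookup. The tree contents below, including the intent and entity names and the response strings, are hypothetical illustrations only.

```python
# Hypothetical decision tree: intent, then entity, each with a default
# branch keyed by None, leading to a pre-written response.
DECISION_TREE = {
    "intent": {
        "refund": {
            "entity": {
                "order": "I can help with that refund. What is your order number?",
                None: "Which order would you like refunded?",
            }
        },
        None: "Sorry, I did not understand. Could you rephrase?",
    }
}

def generate_response(intent, entity=None):
    """Walk the tree using the intent, then the entity, falling back
    to the default branches keyed by None."""
    branch = DECISION_TREE["intent"].get(intent, DECISION_TREE["intent"][None])
    if isinstance(branch, dict):
        return branch["entity"].get(entity, branch["entity"][None])
    return branch
```

In a fuller implementation, the observations at each node would be supplied by the language processing and machine learning modules rather than passed in directly.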
Example 1 is a method. The method includes monitoring a conversation between a user and a first bot to detect a trigger, wherein the trigger is detected by assessing one or more factors associated with the user, wherein assessing the one or more factors comprises a determination that a user frustration level exceeds a predetermined threshold; in response to a detected trigger, determining a type of a handoff to be executed, wherein the handoff is to a second bot or a human support agent, and the type of handoff is based on an experience of the user; and executing the determined handoff to the second bot or the human support agent, wherein the second bot or the human support agent engages the user in conversation to execute functionality desired by the user.
Example 2 includes the method of example 1, including or excluding optional features. In this example, the type of handoff is a temporary handoff to a human support agent and the human support agent provides a signal to the first bot to reengage in the conversation with the user via a second handoff.
Example 3 includes the method of any one of examples 1 to 2, including or excluding optional features. In this example, the type of handoff is a partial handoff to a human support agent, wherein the human support agent supplements responses to the user as generated by the first bot.
Example 4 includes the method of any one of examples 1 to 3, including or excluding optional features. In this example, the type of handoff is a temporary handoff to a human support agent, wherein the human support agent generates responses to the user, and the first bot automatically reengages the conversation in response to a resolution of an issue detected in the conversation.
Example 5 includes the method of any one of examples 1 to 4, including or excluding optional features. In this example, the type of handoff is a full handoff to a second bot, wherein the second bot is better suited for the conversation based on the user input.
Example 6 includes the method of any one of examples 1 to 5, including or excluding optional features. In this example, the determination that a user frustration level exceeds a predetermined threshold is made by scoring one or more factors as detected in the user input and comparing the total score with the predetermined threshold.
Example 7 includes the method of any one of examples 1 to 6, including or excluding optional features. In this example, the type of handoff is a temporary handoff to a human support agent, wherein the human support agent generates responses to the user and then transfers the conversation to a second bot in response to a determination that the second bot is better suited to resolve a user request.
Example 8 includes the method of any one of examples 1 to 7, including or excluding optional features. In this example, the one or more factors comprises a sentiment analysis, a text format, trigger words, user intent, user attributes, or any combination thereof.
Example 9 includes the method of any one of examples 1 to 8, including or excluding optional features. In this example, the experience of the user is historic data associated with the user, users similar to the user, or all users.
Example 10 includes the method of any one of examples 1 to 9, including or excluding optional features. In this example, the one or more factors is used to build a model that indicates when a handoff should occur and the type of handoff that should occur.
Example 11 is a system. The system includes an intent monitor to monitor a conversation between a user and a first bot to detect a trigger, wherein the trigger is detected by assessing one or more factors associated with the user, wherein assessing the one or more factors comprises a determination that a user frustration level exceeds a predetermined threshold; in response to a detected trigger, the intent monitor to determine a type of a handoff to be executed, wherein the handoff is to a second bot or a human support agent and the type of handoff is based on an experience of the user; and the intent monitor to execute the determined handoff to the second bot or the human support agent, wherein the second bot or the human support agent engages the user in conversation to execute functionality desired by the user.
Example 12 includes the system of example 11, including or excluding optional features. In this example, the type of handoff is a temporary handoff to a human support agent and the human support agent provides a signal to the first bot to reengage in the conversation with the user via a second handoff.
Example 13 includes the system of any one of examples 11 to 12, including or excluding optional features. In this example, the type of handoff is a partial handoff to a human support agent, wherein the human support agent supplements responses to the user as generated by the first bot.
Example 14 includes the system of any one of examples 11 to 13, including or excluding optional features. In this example, the type of handoff is a temporary handoff to a human support agent, wherein the human support agent generates responses to the user and the first bot automatically reengages the conversation in response to a resolution of an issue detected in the conversation.
Example 15 includes the system of any one of examples 11 to 14, including or excluding optional features. In this example, the type of handoff is a full handoff to a second bot, wherein the second bot is better suited for the conversation based on the user input.
Example 16 includes the system of any one of examples 11 to 15, including or excluding optional features. In this example, the determination that a user frustration level exceeds a predetermined threshold is made by scoring one or more factors as detected in the user input and comparing the total score with the predetermined threshold.
Example 17 includes the system of any one of examples 11 to 16, including or excluding optional features. In this example, the type of handoff is a temporary handoff to a human support agent, wherein the human support agent generates responses to the user and then transfers the conversation to a second bot in response to a determination that the second bot is better suited to resolve a user request.
Example 18 is a method. The method includes monitoring a conversation between a user and a bot to detect a trigger, wherein the trigger is detected by assessing one or more factors associated with the user, wherein assessing the one or more factors comprises a determination that a user frustration level exceeds a predetermined threshold; in response to a detected trigger, determining a type of a first handoff to be executed, wherein the first handoff is from the bot to a human support agent, and the type of the first handoff is based on an experience of the user; executing the determined first handoff from the bot to the human support agent, wherein the human support agent engages the user in conversation to execute functionality desired by the user; monitoring a conversation between the user and the human support agent to detect a signal, wherein the signal is an indication that the functionality desired by the user is complete; and in response to the detected signal, executing a second handoff from the human support agent to the bot.
Example 19 includes the method of example 18, including or excluding optional features. In this example, the detected signal is generated when the human support agent reduces the user frustration level.
Example 20 includes the method of any one of examples 18 to 19, including or excluding optional features. In this example, the detected signal is generated from particular words or phrases used by the human support agent in the conversation.
In particular and in regard to the various functions performed by the above described components, devices, circuits, systems and the like, the terms (including a reference to a “means”) used to describe such components are intended to correspond, unless otherwise indicated, to any component which performs the specified function of the described component, e.g., a functional equivalent, even though not structurally equivalent to the disclosed structure, which performs the function in the herein illustrated exemplary aspects of the claimed subject matter. In this regard, it will also be recognized that the innovation includes a system as well as a computer-readable storage media having computer-executable instructions for performing the acts and events of the various methods of the claimed subject matter.
There are multiple ways of implementing the claimed subject matter, e.g., an appropriate API, tool kit, driver code, operating system, control, standalone or downloadable software object, etc., which enables applications and services to use the techniques described herein. The claimed subject matter contemplates the use from the standpoint of an API (or other software object), as well as from a software or hardware object that operates according to the techniques set forth herein. Thus, various implementations of the claimed subject matter described herein may have aspects that are wholly in hardware, partly in hardware and partly in software, as well as in software.
The aforementioned systems have been described with respect to interaction between several components. It can be appreciated that such systems and components can include those components or specified sub-components, some of the specified components or sub-components, and additional components, and according to various permutations and combinations of the foregoing. Sub-components can also be implemented as components communicatively coupled to other components rather than included within parent components (hierarchical).
Additionally, it can be noted that one or more components may be combined into a single component providing aggregate functionality or divided into several separate sub-components, and any one or more middle layers, such as a management layer, may be provided to communicatively couple to such sub-components in order to provide integrated functionality. Any components described herein may also interact with one or more other components not specifically described herein but generally known by those of skill in the art.
In addition, while a particular feature of the claimed subject matter may have been disclosed with respect to one of several implementations, such feature may be combined with one or more other features of the other implementations as may be desired and advantageous for any given or particular application. Furthermore, to the extent that the terms “includes,” “including,” “has,” “contains,” variants thereof, and other similar words are used in either the detailed description or the claims, these terms are intended to be inclusive in a manner similar to the term “comprising” as an open transition word without precluding any additional or other elements.