Aspects of the present disclosure relate to artificial intelligence (AI)-enhanced support ticket analysis, and more specifically to use of an explainable artificial intelligence (XAI) model to explain how and why an AI model that predicts support ticket resolution statistics arrives at its predictions.
Customer support within the IT industry often operates on a ticketing system, where customers create support tickets describing their particular issue with the software/service being offered by a company. Examples of issues can include malfunctions/bugs as well as security vulnerabilities. These support tickets are then triaged by the company (which may hold e.g., a service level agreement (SLA) with the client), which assigns tickets to the company's customer support agents according to the problem domain, complexity, and priority. Once the ticket's issue is resolved, the ticket is marked closed, and statistics about the ticket's resolution are collected. These statistics include the time required to resolve the support ticket, the number of reassignments (e.g., how often the ticket needed to be moved to a different support agent or department) required to resolve the support ticket, the relative difficulty/personnel cost of resolving the support ticket, and whether the terms of the SLA were breached.
The described embodiments and the advantages thereof may best be understood by reference to the following description taken in conjunction with the accompanying drawings. These drawings in no way limit any changes in form and detail that may be made to the described embodiments by one skilled in the art without departing from the spirit and scope of the described embodiments.
Accurately predicting resolution statistics at support ticket creation time can help the supporting company triage the ticket and allocate its resources appropriately. One common way of estimating any of these statistics is via natural language processing (NLP), wherein artificial intelligence (AI) models analyze the inbound ticket's content and try to predict the desired statistic. Ticketing systems usually store a vast number of previously-resolved support tickets as a resource for support agents to reference when supporting new tickets. These previously-resolved support tickets can serve as an excellent source of training data for the AI model.
While such prediction models are a useful triaging tool, understanding why certain support tickets are highly complex or difficult to resolve can also be a useful diagnostic tool for the company's support stack, since fixing identified problems can save significant amounts of time and money. If the prediction model is accurate, it can be taken as a proxy for the real, underlying process that determines ticket complexity. However, such prediction models operate as black box models whose inner workings are either inaccessible or so complex as to be conventionally uninterpretable. Common examples of such black-box models are neural networks and random forests. Explainable artificial intelligence (XAI) algorithms seek to find quick and intuitive explanations of such complex models. With respect to a support ticket prediction system, an explainable artificial intelligence (XAI) algorithm can be used to explain how and why the prediction model arrives at predicted statistics for inbound support tickets.
Embodiments of the present disclosure provide an XAI explanation aggregation model that aggregates explanations of a predictive AI model's predictions regarding support ticket resolution statistics to diagnose and improve the performance of a support stack. The aggregated explanations may be used to generate insights into the performance of the support stack. A processing device may receive an indication of one or more desired statistical parameters to be optimized, the one or more desired statistical parameters being part of a set of statistical parameters relating to performance of a support stack. A prediction model may be trained to predict resolution statistics including values for the one or more desired statistical parameters. For each of a plurality of support tickets input to the support stack, the support ticket may be analyzed using the prediction model to generate a set of predicted resolution statistics including predicted values for each of the one or more desired statistical parameters, and the set of predicted resolution statistics may be analyzed using an XAI algorithm to generate a set of explanations for the predicted resolution statistics. The set of explanations for each of the plurality of support tickets may be aggregated to generate one or more insights regarding the one or more desired statistical parameters.
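The predict-explain-aggregate flow described above may be sketched, under illustrative assumptions, as follows. The callables `train`, `predict`, `explain`, and `aggregate` are hypothetical stand-ins for the prediction model, the XAI algorithm, and the aggregator; none of these names appear in the disclosure itself:

```python
# Illustrative sketch of the disclosed flow, not a definitive implementation.
# train/predict/explain/aggregate are assumed stand-ins for the prediction
# model, the XAI algorithm, and the aggregator described in the text.

def generate_insights(tickets, resolved_db, selected_params,
                      train, predict, explain, aggregate):
    """Run the predict-explain-aggregate loop over inbound support tickets."""
    # Train the prediction model on previously-resolved tickets, restricted
    # to the desired statistical parameters selected by the support team.
    model = train(resolved_db, selected_params)
    explanations = []
    for ticket in tickets:
        stats = predict(model, ticket)                 # predicted resolution statistics
        explanations.append(explain(model, ticket, stats))  # per-ticket explanation
    # Aggregate all per-ticket explanations into insights.
    return aggregate(explanations, selected_params)
```

Any concrete model, XAI technique, and aggregation strategy could be plugged into these four roles without changing the surrounding loop.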
The system 100 may represent any system in which a ticket-based support system described hereinabove may be implemented. For example, the system 100 may be a system for developing, testing, and delivering applications in containers and may implement a container orchestration platform such as the Red Hat OpenShift™ platform. One example is the use of such platforms to automate and push software as containers to small-scale edge and Internet-of-Things (IoT) gateway devices in a domain (a domain may include a group of devices that share the same configuration, policies, and identity stores). In other examples, the system 100 may represent a system for streaming content (e.g., computing device 110 may stream audio and/or video content to a customer using computing device 112), an enterprise database system, or any other appropriate system where a ticket-based support system described hereinabove may be implemented.
As shown in
The prediction model 130 may be an AI model trained to predict resolution statistics of a support ticket that is input to the support stack 120 (i.e., predict the resolution statistics that would be output by the support stack 120 after resolution of the support ticket). The prediction model 130 may comprise any appropriate AI model such as a neural network. The prediction model 130 may be trained using a previously-resolved tickets database 117 (i.e., a database of previously-resolved tickets) which may comprise a large number of previously-resolved tickets and their corresponding resolution statistics. However, the prediction model 130 may not necessarily be trained to predict values for each of the statistical parameters described hereinabove. Indeed, in accordance with embodiments of the present disclosure, the customer support team may wish to gain insights regarding specific statistical parameters to optimize the functionality of the support stack 120. For example, the customer support team may wish to determine ways to improve the time required for resolution of support tickets and may want to understand what factors cause the time required to resolve support tickets to increase. Thus, the customer support team may select certain desired statistical parameters that the prediction model 130 should output predicted values for, and the prediction model 130 may be trained to predict values for the selected statistical parameters (e.g., time required for resolution) given an input support ticket. The customer support team may select any number/combination of statistical parameters for the prediction model 130 to output predicted values for. In some embodiments, the insight generation module 119 may provide a user interface (not shown) via which a member of the customer support team may select desired statistical parameters from a set of available statistical parameters.
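Restricting the training targets to the selected statistical parameters might be sketched as follows. The field names `text` and `resolution_stats` are assumptions introduced for illustration and are not part of the disclosure:

```python
def build_training_set(resolved_tickets, selected_params):
    """Extract (ticket text, target values) training pairs from a database of
    previously-resolved tickets, keeping only the statistical parameters the
    customer support team selected (field names are illustrative)."""
    pairs = []
    for ticket in resolved_tickets:
        # Keep only the resolution statistics the team asked to optimize.
        targets = {p: ticket["resolution_stats"][p] for p in selected_params}
        pairs.append((ticket["text"], targets))
    return pairs
```

Retraining "on the fly" for a different set of parameters then amounts to rebuilding the pairs with a new `selected_params` and fitting the model again.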
In this way, the prediction model 130 may be trained and retrained on the fly as the customer support team wishes to gain insights as to various different statistical parameters.
The prediction model 130 may predict resolution statistics using any appropriate method(s) or combination thereof. One common method of predicting resolution statistics is via natural language processing (NLP), wherein the prediction model 130 may analyze the inbound support ticket's text content and try to predict values for each of the desired statistical parameters. A support ticket may comprise various fields having text in them, and the text from each field may be combined into one body of text and analyzed by the prediction model 130. For example, a first field of a support ticket may describe an issue (e.g., malfunction/bug, security vulnerability) with a particular product or service, while a second field of the support ticket may describe a platform the product is being used on (e.g., Mac, PC) as well as details on the hardware specifications of the platform (e.g., memory, processor speed, internet connection speed, etc.). Other fields of the support ticket may describe the geographical region in which the product or service is being used, whether the customer is an individual or an organization, and whether the customer has submitted support tickets previously, among other information.
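Combining a ticket's fields into one body of text, as described above, might look like the following minimal sketch; the field names shown are hypothetical examples of the kinds of fields described, not a required schema:

```python
def ticket_to_text(ticket):
    """Concatenate the text of each support-ticket field into a single body
    of text for NLP analysis, skipping empty or non-text fields."""
    return " ".join(
        value.strip() for value in ticket.values()
        if isinstance(value, str) and value.strip()
    )

# Hypothetical ticket with the kinds of fields described above.
ticket = {
    "issue": "Application crashes when exporting reports",
    "platform": "PC, 16 GB memory, 3.2 GHz processor",
    "region": "EMEA",
    "customer_type": "organization",
}
```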
The XAI algorithm 140 may be an AI model trained to analyze a support ticket and corresponding resolution statistics output by the prediction model 130 and provide an explanation(s) of how and why the prediction model 130 arrived at its predicted resolution statistics. The XAI algorithm 140 may implement saliency techniques such as LIME or SHAP that generate a quantitative breakdown of how components within an input to the prediction model 130 (i.e., the support ticket) affect the output of the prediction model 130 (the resolution statistics).
The XAI algorithm 140 may communicate with the prediction model 130 to generate its explanation(s) of how and why the prediction model 130 arrived at its predicted resolution statistics. More specifically, the XAI algorithm 140 may analyze the inbound support ticket and create synthetic support tickets, each of which is a permutation of the original inbound support ticket. The XAI algorithm 140 may query the prediction model 130 with each synthetic support ticket and receive the resolution statistics predicted by the prediction model 130 for that synthetic support ticket. Based on the predicted resolution statistics of each synthetic support ticket (as well as the resolution statistics of the inbound support ticket), the XAI algorithm 140 may generate explanations of how and why the prediction model 130 arrived at its predicted resolution statistics for the inbound support ticket.
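A minimal occlusion-style version of this query loop, which builds each synthetic ticket by removing one word and measures the resulting change in a single predicted statistic, might be sketched as follows. The `predict` callable is an assumed stand-in for the prediction model, and this one-word-at-a-time scheme is a simplification of techniques such as LIME or SHAP, which sample many multi-word perturbations:

```python
def explain_by_occlusion(ticket_words, predict):
    """Estimate each word's contribution to one predicted resolution
    statistic by querying the prediction model with synthetic tickets,
    each a permutation of the inbound ticket with one word removed."""
    base = predict(ticket_words)  # prediction for the original inbound ticket
    explanation = {}
    for i, word in enumerate(ticket_words):
        synthetic = ticket_words[:i] + ticket_words[i + 1:]  # synthetic ticket
        # The word's estimated cost: how much the prediction drops without it.
        explanation[word] = base - predict(synthetic)
    return explanation
```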
The XAI algorithm 140 may utilize any appropriate technique to generate its explanations using the above framework. For example, the XAI algorithm 140 may utilize a perturbation-based technique wherein it may perturb the prediction model 130 (using synthetic support tickets as described hereinabove), measure the reaction of the prediction model 130 (i.e., the resolution statistics generated by the prediction model 130 in response to each synthetic support ticket), and generate explanations based on those measured reactions. In another example, the XAI algorithm 140 may use a function-based technique wherein it treats the prediction model 130 as a function, obtains the resolution statistics generated by the prediction model 130 for each synthetic support ticket, and generates explanations based on those resolution statistics.
The explanation(s) generated by the XAI algorithm 140 for the inbound support ticket may be stored in a support ticket explanation database 118 within memory 115A. Over time, as the explanations provided by the XAI algorithm 140 for the predicted resolution statistics of various subsequent inbound support tickets are collected, the aggregator 150 may aggregate the explanations for each of the subsequent inbound support tickets stored in the database 118 in view of the desired statistical parameters. The total sum of the explanations generated by the XAI algorithm 140 for each of the inbound support tickets stored in the database 118 may provide comprehensive insights into the statistical parameters being predicted by the prediction model 130. For example, after aggregation, the aggregator 150 may determine that whenever an inbound support ticket mentions product X in usage with customer Y, this adds a significant time cost to the resolution of the support ticket. In another example, the aggregator 150 may determine that whenever a support ticket mentions product X in use on computer type S, this subtracts a significant time cost from the resolution of the support ticket. The aggregator 150 may use any appropriate aggregation technique to perform the aggregation and may perform the aggregation at any appropriate interval or on an on-demand basis, e.g., when instructed to do so by the customer support team. For example, for each word among the set of explanations (if the explanations are in a word-cost format), the aggregator 150 may average the associated cost regarding the desired statistical parameter across each explanation where the word occurs. In some embodiments, where the explanations generated by the XAI algorithm 140 are based on resolution statistics comprising predicted values for multiple statistical parameters, the aggregator 150 may normalize all of the explanations before performing aggregation using any appropriate normalization technique.
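The word-cost averaging described above may be sketched as follows, assuming (for illustration) that each explanation maps words to costs for a single desired statistical parameter:

```python
from collections import defaultdict

def aggregate_explanations(explanations):
    """Average each word's associated cost across every explanation in which
    the word occurs, yielding one insight value per word. Each explanation
    is assumed to be a {word: cost} mapping for one inbound ticket."""
    totals = defaultdict(float)  # summed cost per word
    counts = defaultdict(int)    # number of explanations containing the word
    for explanation in explanations:
        for word, cost in explanation.items():
            totals[word] += cost
            counts[word] += 1
    return {word: totals[word] / counts[word] for word in totals}
```

With multiple statistical parameters, the same averaging would be performed per parameter after normalizing the explanations, as noted above.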
The aggregator 150 may aggregate the set of explanations for each inbound support ticket 1-4 and generate a set of insights regarding the number of reassignments and the personnel cost of support tickets (i.e., the desired statistical parameters). More specifically, upon aggregating the set of explanations for each inbound support ticket 1-4, the aggregator 150 may generate a set of insights as shown in
Referring also to
At block 605, the insight generation module 119 may receive the selected statistical parameters. The prediction model 130 may be trained to predict values for the selected statistical parameters (e.g., time required for resolution) given an input support ticket. The prediction model 130 may be trained using a previously-resolved tickets database 117 (i.e., a database of previously-resolved tickets) which may comprise a large number of previously-resolved tickets and their corresponding resolution statistics. At block 610, for each support ticket that is input to the support stack 120, the prediction model 130 may generate a set of predicted resolution statistics including values for each of the selected statistical parameters.
The XAI algorithm 140 may be an AI model trained to analyze a support ticket and corresponding resolution statistics output by the prediction model 130 and provide an explanation(s) of how and why the prediction model 130 arrived at its predicted resolution statistics. The XAI algorithm 140 may implement saliency techniques such as LIME or SHAP that generate a quantitative breakdown of how components within an input to the prediction model 130 (i.e., the support ticket) affect the output of the prediction model 130. At block 615, for each support ticket that is input to the support stack 120, the XAI algorithm 140 may analyze the support ticket and corresponding resolution statistics output by the prediction model 130, and generate a set of explanations as to how and why the prediction model 130 arrived at its predicted resolution statistics.
Over time, as the explanations provided by the XAI algorithm 140 for the predicted resolution statistics of various subsequent inbound support tickets are collected, at block 620 the aggregator 150 may aggregate the explanations for each inbound support ticket stored in the database 118 in view of the desired statistical parameters. The total sum of the explanations generated by the XAI algorithm 140 for each of the inbound support tickets stored in the database 118 may provide comprehensive insights into the statistical parameters being predicted by the prediction model 130. For example, after aggregation, the aggregator 150 may determine that whenever an inbound support ticket mentions product X in usage with customer Y, this adds a significant time cost to the resolution of the support ticket. In another example, the aggregator 150 may determine that whenever a support ticket mentions product X in use on computer type S, this subtracts a significant time cost from the resolution of the support ticket. The aggregator 150 may use any appropriate aggregation technique to perform the aggregation and may perform the aggregation at any appropriate interval or on an on-demand basis, e.g., when instructed to do so by the customer support team. In some embodiments, where the explanations generated by the XAI algorithm 140 are based on resolution statistics comprising predicted values for multiple statistical parameters, the aggregator 150 may normalize all of the explanations before performing aggregation using any appropriate normalization technique.
In alternative embodiments, the machine may be connected (e.g., networked) to other machines in a local area network (LAN), an intranet, an extranet, or the Internet. The machine may operate in the capacity of a server or a client machine in a client-server network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, a switch or bridge, a hub, an access point, a network access control device, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein. In one embodiment, computer system 700 may be representative of a server.
The exemplary computer system 700 includes a processing device 702, a main memory 704 (e.g., read-only memory (ROM), flash memory, dynamic random access memory (DRAM)), a static memory 705 (e.g., flash memory, static random access memory (SRAM), etc.), and a data storage device 718, which communicate with each other via a bus 730. Any of the signals provided over various buses described herein may be time multiplexed with other signals and provided over one or more common buses. Additionally, the interconnection between circuit components or blocks may be shown as buses or as single signal lines. Each of the buses may alternatively be one or more single signal lines and each of the single signal lines may alternatively be buses.
Computing device 700 may further include a network interface device 707 which may communicate with a network 720. The computing device 700 also may include a video display unit 710 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)), an alphanumeric input device 712 (e.g., a keyboard), a cursor control device 714 (e.g., a mouse) and an acoustic signal generation device 715 (e.g., a speaker). In one embodiment, video display unit 710, alphanumeric input device 712, and cursor control device 714 may be combined into a single component or device (e.g., an LCD touch screen).
Processing device 702 represents one or more general-purpose processing devices such as a microprocessor, central processing unit, or the like. More particularly, the processing device may be a complex instruction set computing (CISC) microprocessor, a reduced instruction set computer (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a processor implementing other instruction sets, or processors implementing a combination of instruction sets. Processing device 702 may also be one or more special-purpose processing devices such as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), a network processor, or the like. The processing device 702 is configured to execute insight generation instructions 725, for performing the operations and steps discussed herein.
The data storage device 718 may include a machine-readable storage medium 728, on which is stored one or more sets of insight generation instructions 725 (e.g., software) embodying any one or more of the methodologies of functions described herein. The insight generation instructions 725 may also reside, completely or at least partially, within the main memory 704 or within the processing device 702 during execution thereof by the computer system 700; the main memory 704 and the processing device 702 also constituting machine-readable storage media. The insight generation instructions 725 may further be transmitted or received over a network 720 via the network interface device 707.
The machine-readable storage medium 728 may also be used to store instructions to perform a method for generating insights related to a ticket-based customer support system by aggregating explanations of predicted resolution statistics. While the machine-readable storage medium 728 is shown in an exemplary embodiment to be a single medium, the term “machine-readable storage medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) that store the one or more sets of instructions. A machine-readable medium includes any mechanism for storing information in a form (e.g., software, processing application) readable by a machine (e.g., a computer). The machine-readable medium may include, but is not limited to, magnetic storage medium (e.g., floppy diskette); optical storage medium (e.g., CD-ROM); magneto-optical storage medium; read-only memory (ROM); random-access memory (RAM); erasable programmable memory (e.g., EPROM and EEPROM); flash memory; or another type of medium suitable for storing electronic instructions.
Example 1 is a method comprising: receiving an indication of one or more desired statistical parameters to be optimized, the one or more desired statistical parameters being part of a set of statistical parameters relating to performance of a support stack; for each of a plurality of support tickets input to the support stack: analyzing the support ticket using an artificial intelligence (AI) model to generate a set of predicted resolution statistics including predicted values for each of the one or more desired statistical parameters; and analyzing the set of predicted resolution statistics using an explainable artificial intelligence (XAI) algorithm to generate a set of explanations for the predicted resolution statistics; and aggregating the set of explanations for each of the plurality of support tickets to generate one or more insights regarding the one or more desired statistical parameters.
Example 2 is the method of example 1, further comprising: training the AI model to predict values for each of the one or more desired statistical parameters based on data in a support ticket.
Example 3 is the method of example 1, wherein the AI model is trained using a database of previously resolved support tickets and corresponding resolution statistics.
Example 4 is the method of example 1, wherein each explanation in the set of explanations comprises: a plurality of different words that the support ticket is comprised of; and for each of the plurality of different words, an associated cost regarding a desired statistical parameter of the one or more desired statistical parameters.
Example 5 is the method of example 4, wherein aggregating the set of explanations comprises: for each word among the set of explanations, averaging the associated cost regarding the desired statistical parameter across each explanation where the word occurs.
Example 6 is the method of example 1, wherein the set of statistical parameters comprises: an amount of time required to resolve a support ticket, a number of reassignments required to resolve the support ticket, a personnel cost required to resolve the support ticket, and an indication of whether any terms of a service level agreement (SLA) were breached.
Example 7 is the method of example 1, wherein generating the set of explanations for the set of predicted resolution statistics of a support ticket comprises: generating a set of synthetic support tickets, each of the set of synthetic support tickets comprising a permutation of the support ticket; querying the AI model with each of the set of synthetic support tickets; and generating the set of explanations for the set of predicted resolution statistics of the support ticket based on predicted resolution statistics generated by the AI model for each of the set of synthetic support tickets and the set of predicted resolution statistics.
Example 8 is a system comprising: a memory; and a processing device operatively coupled to the memory, the processing device to: receive an indication of one or more desired statistical parameters to be optimized, the one or more desired statistical parameters being part of a set of statistical parameters relating to performance of a support stack; for each of a plurality of support tickets input to the support stack: analyze the support ticket using an artificial intelligence (AI) model to generate a set of predicted resolution statistics including predicted values for each of the one or more desired statistical parameters; and analyze the set of predicted resolution statistics using an explainable artificial intelligence (XAI) algorithm to generate a set of explanations for the predicted resolution statistics; and aggregate the set of explanations for each of the plurality of support tickets to generate one or more insights regarding the one or more desired statistical parameters.
Example 9 is the system of example 8, wherein the processing device is further to: train the AI model to predict values for each of the one or more desired statistical parameters based on data in a support ticket.
Example 10 is the system of example 8, wherein the AI model is trained using a database of previously resolved support tickets and corresponding resolution statistics.
Example 11 is the system of example 8, wherein each explanation in the set of explanations comprises: a plurality of different words that the support ticket is comprised of; and for each of the plurality of different words, an associated cost regarding a desired statistical parameter of the one or more desired statistical parameters.
Example 12 is the system of example 11, wherein to aggregate the set of explanations, the processing device is to: for each word among the set of explanations, average the associated cost regarding the desired statistical parameter across each explanation where the word occurs.
Example 13 is the system of example 8, wherein the set of statistical parameters comprises: an amount of time required to resolve a support ticket, a number of reassignments required to resolve the support ticket, a personnel cost required to resolve the support ticket, and an indication of whether any terms of a service level agreement (SLA) were breached.
Example 14 is the system of example 8, wherein to generate the set of explanations for the set of predicted resolution statistics of a support ticket, the processing device is to: generate a set of synthetic support tickets, each of the set of synthetic support tickets comprising a permutation of the support ticket; query the AI model with each of the set of synthetic support tickets; and generate the set of explanations for the set of predicted resolution statistics of the support ticket based on predicted resolution statistics generated by the AI model for each of the set of synthetic support tickets and the set of predicted resolution statistics.
Example 15 is a non-transitory computer-readable medium having instructions stored thereon which, when executed by a processing device, cause the processing device to: receive an indication of one or more desired statistical parameters to be optimized, the one or more desired statistical parameters being part of a set of statistical parameters relating to performance of a support stack; for each of a plurality of support tickets input to the support stack: analyze the support ticket using an artificial intelligence (AI) model to generate a set of predicted resolution statistics including predicted values for each of the one or more desired statistical parameters; and analyze the set of predicted resolution statistics using an explainable artificial intelligence (XAI) algorithm to generate a set of explanations for the predicted resolution statistics; and aggregate the set of explanations for each of the plurality of support tickets to generate one or more insights regarding the one or more desired statistical parameters.
Example 16 is the non-transitory computer-readable medium of example 15, wherein the processing device is further to: train the AI model to predict values for each of the one or more desired statistical parameters based on data in a support ticket.
Example 17 is the non-transitory computer-readable medium of example 15, wherein the AI model is trained using a database of previously resolved support tickets and corresponding resolution statistics.
Example 18 is the non-transitory computer-readable medium of example 15, wherein each explanation in the set of explanations comprises: a plurality of different words that the support ticket is comprised of; and for each of the plurality of different words, an associated cost regarding a desired statistical parameter of the one or more desired statistical parameters.
Example 19 is the non-transitory computer-readable medium of example 18, wherein to aggregate the set of explanations, the processing device is to: for each word among the set of explanations, average the associated cost regarding the desired statistical parameter across each explanation where the word occurs.
Example 20 is the non-transitory computer-readable medium of example 15, wherein the set of statistical parameters comprises: an amount of time required to resolve a support ticket, a number of reassignments required to resolve the support ticket, a personnel cost required to resolve the support ticket, and an indication of whether any terms of a service level agreement (SLA) were breached.
Example 21 is the non-transitory computer-readable medium of example 15, wherein to generate the set of explanations for the set of predicted resolution statistics of a support ticket, the processing device is to: generate a set of synthetic support tickets, each of the set of synthetic support tickets comprising a permutation of the support ticket; query the AI model with each of the set of synthetic support tickets; and generate the set of explanations for the set of predicted resolution statistics of the support ticket based on predicted resolution statistics generated by the AI model for each of the set of synthetic support tickets and the set of predicted resolution statistics.
Example 22 is a method comprising: receiving an indication of one or more desired statistical parameters to be optimized, the one or more desired statistical parameters being part of a set of statistical parameters relating to performance of a support stack; training an AI model to predict values for each of the one or more desired statistical parameters based on data in a support ticket; for each of a plurality of support tickets input to the support stack: analyzing the support ticket using the artificial intelligence (AI) model to generate a set of predicted resolution statistics including predicted values for each of the one or more desired statistical parameters; and analyzing the set of predicted resolution statistics using an explainable artificial intelligence (XAI) algorithm to generate a set of explanations for the set of predicted resolution statistics; and aggregating the set of explanations for each of the plurality of support tickets to generate one or more insights regarding the one or more desired statistical parameters.
Example 23 is the method of example 22, wherein the AI model comprises a neural network.
Example 24 is the method of example 22, wherein the AI model is trained using a database of previously resolved support tickets and corresponding resolution statistics.
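The training recited in examples 22 through 24 can be illustrated with a minimal sketch. The examples contemplate an AI model such as a neural network trained on a database of previously resolved tickets; the bag-of-words averaging model below is only a hypothetical stand-in showing the data flow from (ticket text, resolution statistic) pairs to a predictor, and all function and variable names are illustrative assumptions, not part of the claimed examples:

```python
from collections import defaultdict

def train_predictor(resolved_tickets):
    """Learn a naive resolution-time predictor from previously
    resolved tickets, given as (ticket_text, resolution_hours) pairs."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    overall = sum(h for _, h in resolved_tickets) / len(resolved_tickets)
    for text, hours in resolved_tickets:
        for word in set(text.lower().split()):
            totals[word] += hours
            counts[word] += 1
    word_avg = {w: totals[w] / counts[w] for w in totals}

    def predict(text):
        # Estimate resolution time as the mean of per-word historical
        # averages, falling back to the overall mean for unseen words.
        scores = [word_avg.get(w, overall) for w in set(text.lower().split())]
        return sum(scores) / len(scores) if scores else overall

    return predict

# Train on a toy "database" of two previously resolved tickets.
predict = train_predictor([("server crash", 10.0), ("password reset", 2.0)])
```

In a deployed system the predictor would be replaced by a trained neural network, as example 23 contemplates; the sketch only shows how historical resolution statistics supervise the model.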
Example 25 is the method of example 22, wherein each explanation in the set of explanations comprises: a plurality of different words that the support ticket is comprised of; and for each of the plurality of different words, an associated cost regarding a desired statistical parameter of the one or more desired statistical parameters.
Example 26 is the method of example 25, wherein aggregating the set of explanations comprises: for each word among the set of explanations, averaging the associated cost regarding the desired statistical parameter across each explanation where the word occurs.
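The averaging recited in example 26 can be sketched in a few lines of Python. Representing each explanation as a mapping from a ticket's words to their attributed cost is an assumption made here for illustration; the function name is likewise hypothetical:

```python
from collections import defaultdict

def aggregate_explanations(explanations):
    """Average each word's attributed cost across every explanation
    in which the word occurs, yielding a global per-word insight."""
    totals = defaultdict(float)
    counts = defaultdict(int)
    for explanation in explanations:
        for word, cost in explanation.items():
            totals[word] += cost
            counts[word] += 1
    return {word: totals[word] / counts[word] for word in totals}

# Two per-ticket explanations mapping words to, e.g., their attributed
# resolution-time cost in hours; "crash" averages to (4 + 6) / 2 = 5.
insights = aggregate_explanations([
    {"crash": 4.0, "login": 1.0},
    {"crash": 6.0, "timeout": 2.0},
])
```

Words that appear in many tickets with consistently high cost then surface as the insights of example 22's aggregating step.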
Example 27 is the method of example 22, wherein the set of statistical parameters comprises: an amount of time required to resolve a support ticket, a number of reassignments required to resolve the support ticket, a personnel cost required to resolve the support ticket, and an indication of whether any terms of a service level agreement (SLA) were breached.
Example 28 is the method of example 22, wherein generating the set of explanations for the set of predicted resolution statistics of a support ticket comprises: generating a set of synthetic support tickets, each of the set of synthetic support tickets comprising a permutation of the support ticket; querying the AI model with each of the set of synthetic support tickets; and generating the set of explanations for the set of predicted resolution statistics of the support ticket based on predicted resolution statistics generated by the AI model for each of the set of synthetic support tickets and the set of predicted resolution statistics.
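The synthetic-ticket procedure of examples 21, 28, and 35 resembles perturbation-based XAI techniques such as LIME. The sketch below is one hypothetical realization, not the claimed method: it masks random subsets of the ticket's words to produce synthetic tickets, queries the model with each, and attributes to each word the average change in the prediction observed when that word is absent. The function names and the random masking scheme are illustrative assumptions:

```python
import random

def explain_ticket(ticket_text, predict, num_samples=200, seed=0):
    """Perturbation-based explanation sketch: generate synthetic
    permutations of the ticket, query the model with each, and score
    each word by the prediction change seen when it is removed."""
    rng = random.Random(seed)
    words = ticket_text.split()
    base = predict(ticket_text)            # prediction for the real ticket
    deltas = {w: [] for w in set(words)}
    for _ in range(num_samples):
        # Synthetic ticket: the original with a random subset of words kept.
        keep = [w for w in words if rng.random() < 0.5]
        delta = base - predict(" ".join(keep))
        for w in set(words) - set(keep):
            deltas[w].append(delta)
    # A word's cost is the mean prediction change across samples
    # in which that word was removed.
    return {w: sum(d) / len(d) if d else 0.0 for w, d in deltas.items()}

# Toy model: tickets mentioning "crash" are predicted to take 5 hours.
toy_predict = lambda text: 5.0 if "crash" in text.split() else 1.0
explanation = explain_ticket("crash login", toy_predict)
```

With the toy model, removing "crash" always drops the prediction by 4 hours, so "crash" receives the highest attributed cost, which is the per-word cost structure recited in example 25.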
Example 29 is a system comprising: a memory; and a processing device operatively coupled to the memory, the processing device to: receive an indication of one or more desired statistical parameters to be optimized, the one or more desired statistical parameters being part of a set of statistical parameters relating to performance of a support stack; train an artificial intelligence (AI) model to predict values for each of the one or more desired statistical parameters based on data in a support ticket; for each of a plurality of support tickets input to the support stack: analyze the support ticket using the AI model to generate a set of predicted resolution statistics including predicted values for each of the one or more desired statistical parameters; and analyze the set of predicted resolution statistics using an explainable artificial intelligence (XAI) algorithm to generate a set of explanations for the set of predicted resolution statistics; and aggregate the set of explanations for each of the plurality of support tickets to generate one or more insights regarding the one or more desired statistical parameters.
Example 30 is the system of example 29, wherein the AI model comprises a neural network.
Example 31 is the system of example 29, wherein the AI model is trained using a database of previously resolved support tickets and corresponding resolution statistics.
Example 32 is the system of example 29, wherein each explanation in the set of explanations comprises: a plurality of different words that the support ticket is comprised of; and for each of the plurality of different words, an associated cost regarding a desired statistical parameter of the one or more desired statistical parameters.
Example 33 is the system of example 32, wherein to aggregate the set of explanations, the processing device is to: for each word among the set of explanations, average the associated cost regarding the desired statistical parameter across each explanation where the word occurs.
Example 34 is the system of example 29, wherein the set of statistical parameters comprises: an amount of time required to resolve a support ticket, a number of reassignments required to resolve the support ticket, a personnel cost required to resolve the support ticket, and an indication of whether any terms of a service level agreement (SLA) were breached.
Example 35 is the system of example 29, wherein to generate the set of explanations for the set of predicted resolution statistics of a support ticket, the processing device is to: generate a set of synthetic support tickets, each of the set of synthetic support tickets comprising a permutation of the support ticket; query the AI model with each of the set of synthetic support tickets; and generate the set of explanations for the set of predicted resolution statistics of the support ticket based on predicted resolution statistics generated by the AI model for each of the set of synthetic support tickets and the set of predicted resolution statistics.
Example 36 is an apparatus comprising: means for receiving an indication of one or more desired statistical parameters to be optimized, the one or more desired statistical parameters being part of a set of statistical parameters relating to performance of a support stack; for each of a plurality of support tickets input to the support stack: means for analyzing the support ticket using an artificial intelligence (AI) model to generate a set of predicted resolution statistics including predicted values for each of the one or more desired statistical parameters; and means for analyzing the set of predicted resolution statistics using an explainable artificial intelligence (XAI) algorithm to generate a set of explanations for the predicted resolution statistics; and means for aggregating the set of explanations for each of the plurality of support tickets to generate one or more insights regarding the one or more desired statistical parameters.
Example 37 is the apparatus of example 36, further comprising: means for training the AI model to predict values for each of the one or more desired statistical parameters based on data in a support ticket.
Example 38 is the apparatus of example 36, wherein the AI model is trained using a database of previously resolved support tickets and corresponding resolution statistics.
Example 39 is the apparatus of example 36, wherein each explanation in the set of explanations comprises: a plurality of different words that the support ticket is comprised of; and for each of the plurality of different words, an associated cost regarding a desired statistical parameter of the one or more desired statistical parameters.
Unless specifically stated otherwise, terms such as “receiving,” “routing,” “updating,” “providing,” or the like, refer to actions and processes performed or implemented by computing devices that manipulate and transform data represented as physical (electronic) quantities within the computing device's registers and memories into other data similarly represented as physical quantities within the computing device memories or registers or other such information storage, transmission or display devices. Also, the terms “first,” “second,” “third,” “fourth,” etc., as used herein are meant as labels to distinguish among different elements and may not necessarily have an ordinal meaning according to their numerical designation.
Examples described herein also relate to an apparatus for performing the operations described herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general purpose computing device selectively programmed by a computer program stored in the computing device. Such a computer program may be stored in a computer-readable non-transitory storage medium.
The methods and illustrative examples described herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used in accordance with the teachings described herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear as set forth in the description above.
The above description is intended to be illustrative, and not restrictive. Although the present disclosure has been described with references to specific illustrative examples, it will be recognized that the present disclosure is not limited to the examples described. The scope of the disclosure should be determined with reference to the following claims, along with the full scope of equivalents to which the claims are entitled.
As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises”, “comprising”, “includes”, and/or “including”, when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Therefore, the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting.
It should also be noted that in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may in fact be executed substantially concurrently or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
Although the method operations were described in a specific order, it should be understood that other operations may be performed in between described operations, described operations may be adjusted so that they occur at slightly different times or the described operations may be distributed in a system which allows the occurrence of the processing operations at various intervals associated with the processing.
Various units, circuits, or other components may be described or claimed as “configured to” or “configurable to” perform a task or tasks. In such contexts, the phrase “configured to” or “configurable to” is used to connote structure by indicating that the units/circuits/components include structure (e.g., circuitry) that performs the task or tasks during operation. As such, the unit/circuit/component can be said to be configured to perform the task, or configurable to perform the task, even when the specified unit/circuit/component is not currently operational (e.g., is not on). The units/circuits/components used with the “configured to” or “configurable to” language include hardware—for example, circuits, memory storing program instructions executable to implement the operation, etc. Reciting that a unit/circuit/component is “configured to” perform one or more tasks, or is “configurable to” perform one or more tasks, is expressly intended not to invoke 35 U.S.C. 112, sixth paragraph, for that unit/circuit/component. Additionally, “configured to” or “configurable to” can include generic structure (e.g., generic circuitry) that is manipulated by software and/or firmware (e.g., an FPGA or a general-purpose processor executing software) to operate in a manner that is capable of performing the task(s) at issue. “Configured to” may also include adapting a manufacturing process (e.g., a semiconductor fabrication facility) to fabricate devices (e.g., integrated circuits) that are adapted to implement or perform one or more tasks. “Configurable to” is expressly intended not to apply to blank media, an unprogrammed processor or unprogrammed generic computer, or an unprogrammed programmable logic device, programmable gate array, or other unprogrammed device, unless accompanied by programmed media that confers the ability to the unprogrammed device to be configured to perform the disclosed function(s).
The foregoing description, for the purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the embodiments and their practical applications, to thereby enable others skilled in the art to best utilize the embodiments and various modifications as may be suited to the particular use contemplated. Accordingly, the present embodiments are to be considered as illustrative and not restrictive, and the invention is not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims.