The present disclosure relates generally to case management and, specifically, to using machine learning to facilitate creating service cases.
This section is intended to introduce the reader to various aspects of art that may be related to various aspects of the present disclosure, which are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
Organizations, regardless of size, rely upon access to information technology (IT), data, and services for their continued operation and success. A respective organization's IT infrastructure may have associated hardware resources (e.g., computing devices, load balancers, firewalls, switches, etc.) and software resources (e.g., productivity software, database applications, custom applications, and so forth). Over time, more and more organizations have turned to cloud computing approaches to supplement or enhance their IT infrastructure solutions.
Cloud computing relates to the sharing of computing resources that are generally accessed via the Internet. In particular, a cloud computing infrastructure allows users, such as individuals and/or enterprises, to access a shared pool of computing resources, such as servers, storage devices, networks, applications, and/or other computing-based services. By doing so, users are able to access computing resources on demand that are located at remote locations, which resources may be used to perform a variety of computing functions (e.g., storing and/or processing large quantities of computing data). For enterprise and other organization users, cloud computing provides flexibility in accessing cloud computing resources without accruing large up-front costs, such as purchasing expensive network equipment or investing large amounts of time in establishing a private network infrastructure. Instead, by utilizing cloud computing resources, users are able to redirect their resources to focus on their enterprise's core functions.
Certain service events may occur in the context of such systems, which may impact the performance of certain devices and/or networks. Service cases may be opened to manage and address different service events, such as by capturing information that facilitates addressing such service events. However, the steps associated with opening each service case may be inefficient and/or tedious in conventional approaches.
A summary of certain embodiments disclosed herein is set forth below. It should be understood that these aspects are presented merely to provide the reader with a brief summary of these certain embodiments and that these aspects are not intended to limit the scope of this disclosure. Indeed, this disclosure may encompass a variety of aspects that may not be set forth below.
The present disclosure relates to the use and design of a case management application. A user may utilize the case management application to open a service case to manage a certain service event. The user may provide information into fields of the service case via the case management application. The case management application may receive input information provided by the user in one of the fields of the service case, and the case management application may identify output information to fill or populate other fields of the service case. A designer of the case management application may configure how the case management application identifies the output information. For example, the case management application may access trained machine learning routines to identify relevant output information based on the input information. The designer may select the trained machine learning routines that are accessed by the case management application to adjust how the case management application identifies the output information based on the input information.
Various refinements of the features noted above may exist in relation to various aspects of the present disclosure. Further features may also be incorporated in these various aspects as well. These refinements and additional features may exist individually or in any combination. For instance, various features discussed below in relation to one or more of the illustrated embodiments may be incorporated into any of the above-described aspects of the present disclosure alone or in any combination. The brief summary presented above is intended only to familiarize the reader with certain aspects and contexts of embodiments of the present disclosure without limitation to the claimed subject matter.
Various aspects of this disclosure may be better understood upon reading the following detailed description and upon reference to the drawings.
One or more specific embodiments will be described below. In an effort to provide a concise description of these embodiments, not all features of an actual implementation are described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and enterprise-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
As used herein, the term “computing system” refers to an electronic computing device such as, but not limited to, a single computer, virtual machine, virtual container, host, server, laptop, and/or mobile device, or to a plurality of electronic computing devices working together to perform the function described as being performed on or by the computing system. As used herein, the term “medium” refers to one or more non-transitory, computer-readable physical media that together store the contents described as being stored thereon. Embodiments may include non-volatile secondary storage, read-only memory (ROM), and/or random-access memory (RAM). As used herein, the term “application” refers to one or more computing modules, programs, processes, workloads, threads and/or a set of computing instructions executed by a computing system. Example embodiments of an application include software modules, software objects, software instances and/or other types of executable code.
A case management application may be used to manage and address service events. A user (e.g., a customer service agent) of the case management application may open a service case for each service event. In each service case, the user may provide information pertaining to the associated service event, such as within a plurality of fields of the service case. The information may describe the service event such that other users may address the associated service event based on the information. Providing information for each service event may be tedious and/or inefficient. For example, different service cases associated with similar service events may share common information. However, the user may still be required to manually provide information for each service case and, therefore, may spend an excessive amount of time creating service cases.
Thus, a case management application configured to automatically generate information for a service case may reduce an amount of time the user spends to create the service case. For example, the case management application may receive input information entered by the user into one of the fields of the service case, and the case management application may identify output information for one or more fields based on the input information. The case management application may then automatically populate or fill certain fields of the service case with the output information. In some embodiments, the case management application may access machine learning routines trained using paired input and ground truth output data to enable the case management application to generate output information based on the input information. By automatically providing output information for one or more fields of the service case, the case management application may enable the user to avoid manually filling certain fields of the service case. Thus, the user may create service cases more quickly.
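By way of a non-limiting illustration only, the automatic population flow described above might resemble the following Python sketch, in which all names (e.g., the suggest() helper, the field keys, and the example values) are hypothetical assumptions rather than elements of the disclosed implementation:

```python
# Hypothetical sketch of automatic field population; the suggest() callable,
# field names, and example values are illustrative assumptions only.

def populate_empty_fields(case: dict, suggest) -> dict:
    """Copy suggested values into fields the user has not filled manually."""
    short_description = case.get("short_description", "")
    if not short_description:
        return case  # nothing to base a suggestion on

    # suggest() is assumed to return a mapping of field names to values,
    # e.g. {"category": "Network", "priority": "2 - High"}.
    for field, value in suggest(short_description).items():
        if not case.get(field):  # leave user-entered values untouched
            case[field] = value
    return case


# Example usage with a trivial stand-in for a trained machine learning routine.
new_case = {"short_description": "VPN connection drops every few minutes"}
stand_in_routine = lambda text: {"category": "Network", "priority": "2 - High"}
print(populate_empty_fields(new_case, stand_in_routine))
```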
A designer of the case management application may be able to configure the case management application, such as via a design application. For example, the designer may select which trained machine learning routines are accessed by the case management application and/or manage the training of such machine learning routines. In this manner, the designer may configure or customize the case management application to adjust how the output information is identified based on the input information.
With the preceding in mind, the following figures relate to various types of generalized system architectures or configurations that may be employed to provide services to an organization in a multi-instance framework and on which the present approaches may be implemented. Correspondingly, these system and platform examples may also relate to systems and platforms on which the techniques discussed herein may be implemented or otherwise utilized.
To utilize computing resources within the platform 16, network operators may choose to configure the data centers 18 using a variety of computing infrastructures. In one embodiment, one or more of the data centers 18 are configured using a multi-tenant cloud architecture, such that one of the server instances 26 handles requests from and serves multiple customers. Data centers 18 with multi-tenant cloud architecture commingle and store data from multiple customers, where multiple customer instances are assigned to one of the virtual servers 26. In a multi-tenant cloud architecture, the particular virtual server 26 distinguishes between and segregates data and other information of the various customers. For example, a multi-tenant cloud architecture could assign a particular identifier for each customer in order to identify and segregate the data from each customer. Generally, implementing a multi-tenant cloud architecture may suffer from various drawbacks, such as a failure of a particular one of the server instances 26 causing outages for all customers allocated to the particular server instance.
In another embodiment, one or more of the data centers 18 are configured using a multi-instance cloud architecture to provide every customer its own unique customer instance or instances. For example, a multi-instance cloud architecture could provide each customer instance with its own dedicated application server and dedicated database server. In other examples, the multi-instance cloud architecture could deploy a single physical or virtual server 26 and/or other combinations of physical and/or virtual servers 26, such as one or more dedicated web servers, one or more dedicated application servers, and one or more database servers, for each customer instance. In a multi-instance cloud architecture, multiple customer instances could be installed on one or more respective hardware servers, where each customer instance is allocated certain portions of the physical server resources, such as computing memory, storage, and processing power. By doing so, each customer instance has its own unique software stack that provides the benefit of data isolation, relatively less downtime for customers to access the platform 16, and customer-driven upgrade schedules. An example of implementing a customer instance within a multi-instance cloud architecture will be discussed in more detail below.
By way of background, it may be appreciated that the present approach may be implemented using one or more processor-based systems such as those described herein.
With this in mind, an example computer system may include some or all of the computer components discussed below.
The one or more processors 202 may include one or more microprocessors capable of performing instructions stored in the memory 206. Additionally or alternatively, the one or more processors 202 may include application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), and/or other devices designed to perform some or all of the functions discussed herein without calling instructions from the memory 206.
With respect to other components, the one or more busses 204 include suitable electrical channels to provide data and/or power between the various components of the computing system 200. The memory 206 may include any tangible, non-transitory, and computer-readable storage media.
As discussed herein, the client instance 102 may be implemented so as to support access to a case management application. The case management application may be used to facilitate creating service cases, such as by generating output information to populate certain fields associated with the service cases based on input information from a user. In some embodiments, the case management application may be a cloud-based application running on the cloud-based platform 16 that is accessed via the client device 20. For example, the case management application may be executed on an application server running on the cloud-based platform 16 and may access trained machine learning routines stored on the cloud-based platform 16. The trained machine learning routines may be trained so as to generate relevant output information for one or more service case fields in response to limited information, such as a short problem description or summary.
The paired training data 254 may be provided as an input to an existing machine learning routine or solution 256. The machine learning routine 256 may be an untrained routine that has not previously been trained with paired training data 254 or may be a previously trained routine that is receiving supplemental training with additional paired training data 254. Applying the paired training data 254 to the machine learning routine 256 results in a trained machine learning routine 252. The trained machine learning routine 252 may be readily used or accessed by the case management application and provides an output for one or more case management fields in response to an input data string (e.g., a brief problem description or summary). In certain embodiments, each existing trained machine learning routine 252 may be re-trained with additional paired training data 254. That is, each trained machine learning routine 252 may receive additional paired training data 254, such as to improve performance (e.g., reduce a number of incorrect or unsuitable output field responses). The re-trained machine learning routine may then be used or accessed by the case management application.
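Purely as an illustrative sketch (the disclosure does not tie the machine learning routine 256 or the trained machine learning routine 252 to any particular library, algorithm, or data set), a paired-data training step of this kind could be prototyped as follows, here mapping short problem descriptions to a category field value with scikit-learn:

```python
# Illustrative training sketch only; the algorithm, library, and training pairs
# shown here are assumptions and not the disclosed implementation.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Paired training data: (input short description, ground-truth category value).
paired_training_data = [
    ("Cannot connect to the office VPN", "Network"),
    ("Email attachments fail to upload", "Email"),
    ("Laptop battery drains in under an hour", "Hardware"),
    ("Password reset link never arrives", "Email"),
    ("Wi-Fi drops every few minutes in building 3", "Network"),
    ("Docking station does not detect external monitor", "Hardware"),
]
descriptions, categories = zip(*paired_training_data)

# Train (or re-train with additional pairs) a simple text classifier that plays
# the role of the trained machine learning routine.
routine = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
routine.fit(descriptions, categories)

# The trained routine can now produce a category field value in response to an
# input data string such as a new short problem description.
print(routine.predict(["VPN keeps disconnecting from my home network"])[0])
```

Under this sketch, re-training with supplemental pairs would simply amount to fitting again on the enlarged data set.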
In the illustrated embodiment, the definition interface 270 has a plurality of fields, including a name field 274, a solution template field 276, a created field 278, a table field 280, an input field 282, an output field 284, and an active field 286. Additional or alternative fields may also be included in the definition interface 270. Each field may include respective information associated with each trained model 272. For example, the name field 274 may include a respective name of each trained model 272 and the solution template field 276 may include a classification or grouping associated with each trained model 272.
Moreover, the input field 282 includes the type of input information that each trained model 272 may be configured to receive as an input. In the illustrated implementation, each trained model 272 is associated with a short description entry 304 in the input field 282. In additional or alternative implementations, the input field 282 may include other types of input information that each trained machine learning routine 252 of the trained models 272 may use to identify output information. The output field 284 includes the type of output information (e.g., case management table field or fields) that each trained machine learning routine 252 of the trained models 272 is trained to generate based on the input information, including a category entry 306, an assignment group entry 308, a priority entry 310, or another suitable type of output information. As an example, if the input field 282 corresponds to the short description entry 304 and the output field 284 includes the category entry 306, the particular trained model 272 generates a category field value in response to an input short description of a problem. In other words, the trained model 272 may use input information entered in the short description of an opened service case to identify output information to be entered into the category of the same opened service case. Finally, each active field 286 may include a false entry 312, indicating that the particular trained model 272 is not active and is not being accessed by the case management application, or a true entry 314, indicating that the particular trained model 272 is active and is being accessed by the case management application.
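For illustration only, the records presented by the definition interface 270 might be modeled as simple data objects whose attributes mirror the fields described above; the structure, table name, and values below are assumptions rather than the disclosed schema:

```python
# Illustrative representation of definition-interface records; attribute names
# mirror the described fields, but the structure and values are assumed.
from dataclasses import dataclass


@dataclass
class ModelDefinition:
    name: str               # name field 274
    solution_template: str  # solution template field 276
    table: str              # table field 280
    input_field: str        # input field 282, e.g. a short description
    output_field: str       # output field 284, e.g. category or priority
    active: bool            # active field 286 (true entry 314 / false entry 312)


definitions = [
    ModelDefinition("case_category", "classification", "service_case",
                    "short_description", "category", True),
    ModelDefinition("case_priority", "classification", "service_case",
                    "short_description", "priority", False),
]

# The case management application would consult only the active definitions.
print([d.name for d in definitions if d.active])
```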
In some embodiments, the designer may be able to perform certain actions on the trained models 272 via the definition interface 270. As an example, the definition interface 270 may include a selectable action icon 316. The designer may select the action icon 316 and perform certain actions, such as enabling one of the trained models 272 to be accessed by the case management application, disabling one of the trained models 272 from being accessed by the case management application, adding another trained model 272 to the definition interface 270, removing a particular trained model 272 from the definition interface 270, another suitable action, or any combination thereof. Furthermore, the definition interface 270 may include a search icon 318, which the designer may use to query or search for a particular machine learning routine, such as based on any of the fields of the definition interface 270.
The design application may also provide a configuration interface 350 that presents additional information associated with the trained models 272.
The version field 352 may indicate the iteration of a particular trained model 272. For example, if the trained model 272 has been modified multiple times, the version field 352 indicates which modified version of the trained model 272 is in effect. The coverage field 354 may be associated with a coverage percentage 362, or a percentage of output information identified using the associated trained model 272 relative to a total amount of input information received from the user. In this manner, the coverage field 354 may indicate a probability that the associated trained model 272 is able to identify output information based on input information. Moreover, the precision field 356 may be associated with a precision percentage 364, or a percentage of identified output information that is not changed (e.g., overridden) by the user. That is, the precision field 356 may indicate a probability that identified output information is accurate.
To obtain the coverage percentage 362 and the precision percentage 364, data, such as a quantity of input information received, a quantity of output information identified, and/or a quantity of output information changed by the user, may be continuously monitored. In certain embodiments, the coverage percentage 362 and/or the precision percentage 364 may be associated with a time interval. For example, the designer may specify displaying the coverage percentage 362 and/or the precision percentage 364 pertaining to the previous day, week, month, and so forth. The corresponding coverage percentage 362 and/or precision percentage 364 may generally indicate how well the associated trained model 272 is functioning to generate useful or suitable output information. By way of example, if the coverage percentage 362 and/or the precision percentage 364 is below a certain threshold (e.g., 60%, 50%, or a value below 40%), a notification may be sent, such as to indicate that the associated trained model 272 should not be used and/or should be modified to improve identifying output information.
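Although the disclosure does not prescribe how these percentages are computed or monitored, one straightforward reading of the definitions above is sketched below; the counts, threshold, and notification mechanism are illustrative assumptions:

```python
# Illustrative computation of the coverage percentage 362 and the precision
# percentage 364; the monitoring pipeline and threshold are assumptions.

def coverage_percentage(inputs_received: int, outputs_identified: int) -> float:
    """Share of received inputs for which the trained model identified output."""
    return 100.0 * outputs_identified / inputs_received if inputs_received else 0.0


def precision_percentage(outputs_identified: int, outputs_overridden: int) -> float:
    """Share of identified outputs that the user did not change (override)."""
    if outputs_identified == 0:
        return 0.0
    return 100.0 * (outputs_identified - outputs_overridden) / outputs_identified


# Example monitoring snapshot for one trained model over a chosen time interval.
coverage = coverage_percentage(inputs_received=200, outputs_identified=130)
precision = precision_percentage(outputs_identified=130, outputs_overridden=26)

THRESHOLD = 50.0  # illustrative; the disclosure mentions thresholds such as 60% or 50%
if coverage < THRESHOLD or precision < THRESHOLD:
    print(f"Notify designer: coverage={coverage:.0f}%, precision={precision:.0f}%")
else:
    print(f"Model performing adequately: coverage={coverage:.0f}%, precision={precision:.0f}%")
```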
In addition, the class field 358 and the row field 360 may each indicate a quantity of paired training data 254 associated with the trained models 272. By way of example, the class field 358 may indicate a quantity of different types of paired training data 254, such as a field, topic, or grouping represented by the paired training data 254. Moreover, the row field 360 may indicate a total number of entered pairs. That is, each input information (e.g., a keyword entered by the user) and output information (e.g., a particular category) may be considered a pair, and the row field 360 may indicate the number of pairs of input information and output information.
The configuration interface 350 may also include the selectable action icon 316 to enable the designer to perform certain actions. For example, the selectable action icon 316 may enable the designer to remove a particular trained model 272 from being accessed by the case management application, to view other information associated with each trained model 272, and so forth. Moreover, each trained model 272 may be selectable via the configuration interface 350. As an example, the designer may select the date entry 294 and/or time entry 296, which may enable the designer to view detailed information associated with a selected trained model 272.
The designer may be able to change certain information associated with the trained model 272 via the detail interface 390. For example, the designer may override the name of the trained model 272 associated with the name field 274, the definition associated with the definition field 392, and so forth. Moreover, the designer may be able to adjust whether the associated trained model 272 is active or inactive, such as via a checkbox 400 at the active field 286.
Furthermore, the detail interface 390 may show information associated with classes or types of paired training data 254. In certain embodiments, the detail interface 390 may include class records 402 corresponding to a respective class of paired training data 254. In the illustrated embodiment, the class records 402 include automation and integration, but in additional or alternative embodiments, the class records 402 may include other classes. Each class may include a plurality, a set, or a collection of associated paired data 254 pertaining to the class. In some implementations, the designer may select which class records 402 may be included by a particular trained model 272. In other words, the designer may determine which plurality of paired training data 254 may be implemented in each associated trained model 272. As an example, the designer may select particular paired data 254 based on a possible implementation of the trained model 272 such that certain output information may be generated more frequently or less frequently. The detail interface 390 may also include fields having information associated with each class record 402, such as a class precision field 404 (e.g., similar to the precision field 356), a class coverage field 406 (e.g., similar to the coverage field 354), and a distribution field 408 (e.g., a quantity of paired training data 254).
In further embodiments, the designer may create different versions of the case management application. As an example, the designer may create a first version of the case management application, in which the case management application may access a first set of trained models 272. The designer may also create a second version of the case management application, in which the case management application may access a second set of trained models 272 that are different than the first set of trained models 272. The designer may also determine when a particular version of the case management application may be in effect. By way of example, the designer may designate that different groups of users (e.g., based on geographic location, degree of experience, job title) use different versions of the case management application. In this manner, the designer does not have to reconfigure the case management application for different implementations.
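One way to picture this per-group versioning, with hypothetical group names, version labels, and model identifiers that are not part of the disclosure, is a simple mapping from user groups to the set of trained models each application version may access:

```python
# Hypothetical sketch of per-group application versions; all identifiers are
# illustrative assumptions.
APP_VERSIONS = {
    "v1": {"case_category", "case_priority"},          # first set of trained models
    "v2": {"case_category", "case_assignment_group"},  # second, different set
}

GROUP_TO_VERSION = {
    "emea_agents": "v1",
    "senior_agents": "v2",
}


def models_for_group(group: str) -> set:
    """Return the trained models accessible to a group's application version."""
    version = GROUP_TO_VERSION.get(group, "v1")  # assumed default version
    return APP_VERSIONS[version]


print(models_for_group("senior_agents"))
```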
Referring back to block 452, after the user opens the service case, the user may enter input information, which includes typed information associated with the short description of the service case, as shown at block 462. In some circumstances, the user may enter and/or may have entered other input information, such as information associated with a category, priority, and/or assignment of the service case, as indicated by block 464. In response, at block 466, the field(s) in which the user enters information may be marked as “dirty”. For example, a change handler of the script 422 may be executed to mark the relevant field(s) as dirty.
Referring back to block 462, after the user enters the input information (e.g., tabs out or otherwise submits the input information) in the short description, the change handler of the script 422 may be executed to evaluate or process the input information, as indicated at block 468. By way of example, at block 470, the script 422 may determine if any of the fields associated with the potential output information (e.g., category, priority, assignment) have been marked as dirty (e.g., by the change handler at block 466). For the fields that have been marked as dirty, the script 422 may not perform any actions, as shown at block 472. That is, the script 422 may not populate those fields with output information, because the fields have been previously defined by the user. However, if the script 422 determines the fields have not been marked as dirty, another portion of the script may be executed to identify output information to populate the fields, as indicated at block 474.
At block 476, the script 422 may identify the relevant trained models 272 that may be used to identify output information based on the entered input information. At block 478, the relevant trained models 272 may identify output information based on the input information. As an example, the relevant trained models 272 may determine that 90% of the paired data 254 having a particular keyword or phrase included in the input information correspond with the same priority. As such, the relevant trained models 272 may predict or identify the same priority for the entered input information. In some circumstances, no output information may be identified by the relevant trained models 272. As such, the relevant trained models 272 may indicate that there is no identified output information.
At block 480, the script 422 may receive the identified output information, which may include an indication that no output information was identified. Then, at block 482, the script 422 determines if identified output information exists. For example, if the script 422 determines that no identified output information exists (e.g., the script 422 received an indication that no output information was identified), the script 422 may indicate on the user interface (e.g., via a message) that no output information was identified for the associated field, as indicated at block 486. Thus, the user may be informed that no output information was automatically generated for the associated field, and may manually enter information. However, if the script 422 determines that identified output information exists, the script 422 may automatically populate the associated field with the identified output information, as shown at block 488. Additionally, at block 486, the script 422 may indicate on the user interface (e.g., via another message) that the associated field was populated with the identified output information. After the desired information associated with the opened service case has been entered manually by the user and/or automatically by the case management application, the user may finalize and create the service case, as shown at block 490.
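The block-by-block behavior described above may be summarized in a rough, non-limiting Python rendering (the actual script 422 and its API are not reproduced in this disclosure, so every name here is an assumption):

```python
# Rough rendering of the described behavior of the script 422; field names,
# the identify_output callable, and the messages are illustrative assumptions.

def on_short_description_change(case: dict, dirty: set, identify_output) -> None:
    """Populate non-dirty output fields after the short description is entered."""
    for field in ("category", "priority", "assignment_group"):
        if field in dirty:
            # Block 472: the user already defined this field, so leave it alone.
            continue
        # Blocks 474-480: ask the relevant trained model(s) for output information.
        value = identify_output(case["short_description"], field)
        if value is None:
            # Block 486: nothing identified; inform the user to fill it manually.
            print(f"No output information identified for '{field}'.")
        else:
            # Block 488: automatically populate the field and notify the user.
            case[field] = value
            print(f"'{field}' populated with '{value}'.")


# Example: the user typed a short description and had already set the priority,
# so the priority field was marked dirty by the change handler (block 466).
case = {"short_description": "Printer on floor 2 jams constantly",
        "priority": "3 - Moderate"}
dirty_fields = {"priority"}
stand_in_model = lambda text, field: {"category": "Hardware"}.get(field)
on_short_description_change(case, dirty_fields, stand_in_model)
```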
The case management application may be configured to automatically enter output information into certain fields on the user interface 510 based on input information entered by the user into one of the fields. As primarily described herein, the case management application may be configured to generate output information to fill the priority field, category field 516, and assignment field 518 based on input information entered by the user in the short description field 520. However, it should be noted that in additional or alternative embodiments, the case management application may be configured to generate output information in other fields and/or may identify output information based on input information entered in other fields.
Moreover, the user interface 510 may display a first type of message 550 indicating that the output information is recommended or identified by the case management application based on the input information. The first type of message 550 may inform the user which fields have been automatically populated.
In some embodiments, the case management application may also refrain from providing output information for a particular field that previously had its output information overridden by the user. For example, during a first opened case, the case management application may provide output information for the category field 516. However, the user may have manually changed and overridden that output information. As a result, in a second, subsequently opened case, the case management application may no longer provide output information for the category field 516 such that the user may manually enter information for the category field 516.
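A minimal sketch of this override-based suppression is shown below; how and where the application actually records overrides is not specified in the disclosure, so the per-user storage here is an assumption:

```python
# Illustrative suppression of suggestions for fields a user previously overrode;
# the per-user override record is an assumed mechanism.
overridden_fields: dict = {}  # user identifier -> set of overridden field names


def record_override(user: str, field: str) -> None:
    """Remember that this user manually changed a suggested value."""
    overridden_fields.setdefault(user, set()).add(field)


def should_suggest(user: str, field: str) -> bool:
    """Skip suggestions for fields this user overrode in an earlier case."""
    return field not in overridden_fields.get(user, set())


record_override("agent42", "category")        # user overrode the suggested category
print(should_suggest("agent42", "category"))  # False: no suggestion in the next case
print(should_suggest("agent42", "priority"))  # True: priority is still suggested
```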
The present disclosure relates to a case management application that may be used by a user to open service cases. The user may enter certain input information in a field of the service case, and the case management application may automatically identify output information based on the input information. The case management application may then fill or populate other fields of the service case with the identified output information such that the user does not have to fill the fields manually, which may reduce an amount of time the user spends to create a service case. In some embodiments, the case management application may use trained machine learning routines to identify the output information based on input information. Moreover, a designer of the case management application may customize or configure the case management application. For example, the designer may select the trained machine learning routines that are accessed by the case management application to adjust how the case management application identifies output information.
The specific embodiments described above have been shown by way of example, and it should be understood that these embodiments may be susceptible to various modifications and alternative forms. It should be further understood that the claims are not intended to be limited to the particular forms disclosed, but rather to cover all modifications, equivalents, and alternatives falling within the spirit and scope of this disclosure.
The techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible or purely theoretical. Further, if any claims appended to the end of this specification contain one or more elements designated as “means for [perform]ing [a function] . . . ” or “step for [perform]ing [a function] . . . ”, it is intended that such elements are to be interpreted under 35 U.S.C. 112(f). However, for any claims containing elements designated in any other manner, it is intended that such elements are not to be interpreted under 35 U.S.C. 112(f).