This description relates to automated skill discovery, skill level computation, and intelligent matching using generated hierarchical skill paths.
Knowing agent skills in service management can help in many information technology service management (ITSM) service desk processes, such as routing tickets or cases to the right “skilled” agents, which, in turn, can reduce the mean time to repair (MTTR) and improve customer satisfaction. However, agent skills are rarely used in managing service desk processes because determining and knowing agent skills is a complicated, time-consuming activity involving many variables, making it almost impossible for humans to manage.
Questions arise regarding an agent's depth and proficiency in a particular skill. For example, some agents have a higher proficiency and more skill in handling and resolving “Mac desktop issues” than other agents and should have such issues routed to them. Similarly, Windows desktop tickets should be re-routed to an agent skilled in “Windows desktop issues.” An agent's depth and proficiency in particular skills need to be evaluated and tracked so that more “complex” tickets are routed to those agents with a higher skill level in that subject area.
Furthermore, manual skills management is error-prone and inaccurate because agents' skills are dynamic and can evolve over time. Due to these challenges, skills that are manually curated and maintained rarely work well in practice. And yet, knowing agents' skills across an organization can benefit both the organization and the agent. For example, knowing agents' skills can help create organizational and individual training plans. During major ITSM incidents, knowing agent skills can help in swarming, where the right team members with appropriate skills are needed to collaborate on solving widely impacting issues. The organization needs to identify skills gaps and areas where an agent or agents would benefit from additional training, and to identify areas where the organization is lacking skilled agents. Identifying agents with sufficient skills to author knowledge articles on certain topics helps the organization preserve accumulated knowledge on such topics for the benefit of other less skilled agents. The agent benefits in that the agent's level of skill can be enhanced when greater skill challenges are presented to the agent as experience is built. The organization benefits by having more satisfied employees, resulting in a greater possibility of retaining experienced agents.
According to one general aspect, a computer-implemented method for intelligent skills matching includes receiving a plurality of tickets, where each ticket in the plurality of tickets includes a plurality of fields and at least one agent who resolved the ticket is identified. A clustering algorithm is used on one or more of the plurality of fields to determine skills from the plurality of tickets. A taxonomy of the skills is generated using a taxonomy-construction algorithm. Using the taxonomy of the skills, a skills matrix or a skills knowledge graph is created with agents assigned to the skills.
Implementations may include one or more of the following features. For example, the computer-implemented method may further include computing a skills score for each agent and a related skill, and updating the skills matrix or the skills knowledge graph with the skills score. The computer-implemented method may further include receiving a new ticket, determining skills needed to resolve the new ticket, using a search engine to search for the determined skills in the skills matrix or in the skills knowledge graph and to search for an agent with a high skills score for the determined skills, and automatically routing the new ticket to the agent with the high skills score for the determined skills. The computer-implemented method may further include, in response to the agent completing the new ticket, re-computing the skills score for the agent and the determined skills and updating the skills matrix or the skills knowledge graph with the re-computed skills score.
In some implementations, determining the skills includes determining static skills from category fields from the plurality of fields.
In some implementations, determining the skills includes determining dynamic skills from text fields from the plurality of fields using the clustering algorithm. The computer-implemented method may further include generating sub-skills from the text fields and updating the taxonomy with the sub-skills.
In another general aspect, a computer program product for intelligent skills matching is tangibly embodied on a non-transitory computer-readable medium and includes executable code that, when executed, is configured to cause a data processing apparatus to receive a plurality of tickets, where each ticket in the plurality of tickets includes a plurality of fields and at least one agent that resolved the ticket. The data processing apparatus determines skills from the plurality of tickets using a clustering algorithm on one or more of the plurality of fields, generates a taxonomy of the skills using a taxonomy construction algorithm, and creates and outputs a skills matrix or a skills knowledge graph using the taxonomy of the skills with agents connected to the skills.
In another general aspect, a system for intelligent skills matching includes at least one processor and a non-transitory computer-readable medium including instructions that, when executed by the at least one processor, cause the system to implement an application that is programmed to receive a plurality of tickets, where each ticket in the plurality of tickets includes a plurality of fields and at least one agent that resolved the ticket. The application is programmed to determine skills from the plurality of tickets using a clustering algorithm on one or more of the plurality of fields and generate a taxonomy of the skills using a taxonomy construction algorithm. The application is programmed to create and output a skills matrix or a skills knowledge graph using the taxonomy of the skills with agents connected to the skills.
Implementations for the computer program product and the system may include one or more of the features described above with respect to the computer-implemented method.
The details of one or more implementations are set forth in the accompanying drawings and the description below. Other features will be apparent from the description and drawings, and from the claims.
This document describes systems and techniques for automated skill discovery, skill level computation, and intelligent matching using generated hierarchical skill paths. The systems and techniques use machine learning (ML) and/or artificial intelligence (AI) techniques to identify a hierarchy of skills from a historical database of artifacts. The automatically generated hierarchy of skills may be laid onto a knowledge graph. In this manner, a taxonomy of skills is autogenerated using ML and/or AI techniques from a database of artifacts. Additionally, the skills of each person interacting with the artifacts are determined, a skill level is computed for each person using statistical computational techniques, and a skills matrix and/or skills knowledge graph is generated. In response to receiving a new artifact, the system uses an automated search over the skills matrix and/or the skills knowledge graph to find a person with skills appropriate for handling the new artifact. The new artifact may be automatically routed to a person with the requisite skills to handle the artifact. The skills matrix and/or the skills knowledge graph learns and is updated with each new interaction between a person and an artifact.
In a similar manner, the automated search may be used as an expert locator to intelligently assemble a team of experts having various needed skills to handle a major incident. The system also may be used for skills gap training to identify areas where an agent or agents would benefit from additional training and to identify areas where an organization is lacking skilled agents. Finally, the system may be used to identify agents with requisite skills to author knowledge articles using their skill knowledge.
In one example of use of the system described in this document, the artifact is an ITSM ticket, and the taxonomy and the skills matrix and/or skills knowledge graph are automatically determined from historical tickets. An ITSM ticket may be a support request from one of multiple different channels related to one or more various aspects of an organization. An ITSM ticket is a digital record of an IT incident or event that includes relevant information about what happened, who raised the issue, and what has been done to resolve it. Incoming tickets may then be routed to an agent with the appropriate skills by performing an intelligent matching of the new tickets against the skills matrix and/or skills knowledge graph to find the appropriate agent(s) to assign automatically to handle the ticket. In another example use context, the skills matrix and/or the skills knowledge graph may be used to locate one or more experts to form a team for a major IT incident such as an outage. In other example use contexts, the artifacts may be incidents, cases, work orders, etc.
The system 100 may be implemented on a computing device 101. The computing device 101 includes at least one memory 154, at least one processor 156, and at least one application 158. The computing device 101 may communicate with one or more other computing devices over a network (not shown). The computing device 101 may be implemented as a server (e.g., an application server), a desktop computer, a laptop computer, a mobile device such as a tablet device or mobile phone device, a mainframe, as well as other types of computing devices. Although a single computing device 101 is illustrated, the computing device 101 may be representative of multiple computing devices in communication with one another, such as multiple servers in communication with one another being utilized to perform the various functions and processes of the system 100 over a network. In some implementations, the computing device 101 may be representative of multiple virtual machines in communication with one another in a virtual server environment. In some implementations, the computing device 101 may be representative of one or more mainframe computing devices.
The at least one processor 156 may represent two or more processors on the computing device 101 executing in parallel and utilizing corresponding instructions stored using the at least one memory 154. The at least one processor 156 may include at least one graphics processing unit (GPU) and/or central processing unit (CPU). The at least one memory 154 represents a non-transitory computer-readable storage medium. Of course, similarly, the at least one memory 154 may represent one or more different types of memory utilized by the computing device 101. In addition to storing instructions, which allow the at least one processor 156 to implement an application 158 and its various components, the at least one memory 154 may be used to store data, such as clusters of tickets and outputs of the system 100, and other data and information used by and/or generated by the application 158 and the components used by application 158. The application 158 may include the various modules and components for the system 100 on the computing device 101, as discussed below. The application 158 may be accessed directly by a user of the computing device 101. In some implementations, the application 158 may be running on the computing device 101 as a component of a cloud network, where a user accesses the application 158 from another computer device over a network.
As agents resolve a variety of tickets, the system 100 analyzes the text and types of tickets the agent has resolved, as well as the feedback and quality of the resolution, and uses this knowledge of historical ticket descriptions and resolutions to build an AI/ML model that can learn agent skills automatically. How well the ticket was resolved in terms of time to resolve (MTTR), quality of resolution (e.g., no kick-backs, no transfers to other agents, etc.), and explicit feedback all shape the skill level of the agent and are automatically determined through AI/ML techniques. The system 100 builds a skills agent knowledge graph that is created and continuously updated as new tickets get resolved. The process flow for the system 100 is illustrated in
In Step A 105, the system 100 uses multiple tickets 102 and parameters from the ticket fields 104 to infer skills 103 of agents who worked on the tickets 102. In some implementations, a clustering algorithm 106 may be used to perform topic modelling clustering on the tickets 102 to infer skills 103 of agents. There are three ways skills can be inferred from structured and unstructured parts of the tickets that each agent resolves:
Referring to
In a “ticket,” one or more fields can be configured for skills tracking. All the values for these fields are taken into consideration as potential skills that need to be tracked. A skill definition includes a skill definition name and a list of field names to identify. Users can specify multiple skill definitions.
Product name field skills are illustrated in
Tickets 102 also include qualification-based skills. When a query is used to specify a skill, a set of incidents is identified that represents the skill. For example, a “major incident” skill can be defined as a set of incidents which have Major Incident flag=True.
Another example of a qualification-based skill is when an agent specifies “I am good at DB servers.” The agent statement can be converted into a search string and queried to retrieve the list of tickets.
Dynamic skills also may be inferred from tickets 102, where text fields are used to generate dynamic skills. These can be combined with a field-based skill or a standalone skill. The clustering algorithm 106 may be run on ticket data to generate a set of “topics” that groups similar tickets together. These form a dynamic skill that agents are resolving. In some implementations, the machine learning clustering algorithms 106 may include topic modelling algorithms such as Latent Dirichlet Allocation (LDA) or k-means clustering and can be run periodically or in real-time.
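The dynamic-skill discovery described above can be sketched in code. The following is a minimal, illustrative example, not the system's actual implementation: it clusters ticket short descriptions with a small hand-rolled k-means over bag-of-words vectors (a production system would more likely use a library implementation of LDA or k-means, as named above). The ticket texts, the farthest-point seeding, and the choice of k are all assumptions for illustration.

```python
import math
from collections import Counter

def vectorize(texts):
    """Turn each text into a term-frequency vector over a shared vocabulary."""
    vocab = sorted({w for t in texts for w in t.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    vectors = []
    for t in texts:
        v = [0.0] * len(vocab)
        for w, c in Counter(t.lower().split()).items():
            v[index[w]] = float(c)
        vectors.append(v)
    return vectors, vocab

def seed_centroids(vectors, k):
    """Deterministic farthest-point seeding so the sketch is reproducible."""
    centroids = [list(vectors[0])]
    while len(centroids) < k:
        nxt = max(vectors, key=lambda v: min(math.dist(v, c) for c in centroids))
        centroids.append(list(nxt))
    return centroids

def kmeans(vectors, k, iters=10):
    """Plain k-means: assign each vector to its nearest centroid, then
    recompute centroids, for a fixed number of iterations."""
    centroids = seed_centroids(vectors, k)
    assign = [0] * len(vectors)
    for _ in range(iters):
        for i, v in enumerate(vectors):
            assign[i] = min(range(k), key=lambda c: math.dist(v, centroids[c]))
        for c in range(k):
            members = [vectors[i] for i in range(len(vectors)) if assign[i] == c]
            if members:
                centroids[c] = [sum(col) / len(members) for col in zip(*members)]
    return assign

tickets = [
    "cannot connect to webex",
    "webex fails to install",
    "webex voice call issues",
    "mac desktop frozen screen",
    "mac desktop will not boot",
]
vecs, _ = vectorize(tickets)
clusters = kmeans(vecs, k=2)  # each resulting cluster is a candidate dynamic skill
```

Each resulting cluster groups similar tickets, such as the "webex" tickets in one cluster and the "mac desktop" tickets in another, and each cluster becomes a candidate dynamic skill node.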
For example, if a company just released a new product “Webex” and tickets start flowing in, such as “Cannot connect to webex”, “webex fails to install”, and “webex voice call issues”, these are dynamic skills that are automatically added using the clustering algorithm 106.
In another example, generated topics can be for new services, such as an “address proof letter” cluster of tickets that formed in recent weeks due to an increase in requests by employees. This is another example of a dynamic skill.
Finally, once the skills are all identified, they are laid onto a create knowledge graph/matrix-skill 108. In this step, the system 100 builds a create knowledge graph/matrix-skill 108 that includes skill nodes and agent nodes. For each static and dynamic skill output from the clustering algorithm 106, a node in the graph is generated. For each agent, a node in the graph is generated. When the skill is based on a hierarchical field specification such as (Opcat1, Opcat2, Opcat3) or (SG, Service) tuples, then the corresponding skill nodes with a containment relationship are used as shown in
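The graph construction described above can be sketched as follows. This is an illustrative sketch, not the actual implementation of the create knowledge graph/matrix-skill 108: it creates one node per skill-path prefix with containment edges for hierarchical field tuples such as (Opcat1, Opcat2, Opcat3), plus edges linking each agent to the leaf skills of tickets the agent resolved. The agent names and category tuples are invented for illustration.

```python
from collections import defaultdict

def build_skill_graph(resolved_tickets):
    """resolved_tickets: iterable of (agent, hierarchical skill tuple) pairs.
    Returns (contains, resolved): containment edges between skill nodes,
    and edges from agent nodes to leaf skill nodes."""
    contains = defaultdict(set)   # skill path prefix -> child skill paths
    resolved = defaultdict(set)   # agent -> leaf skill paths the agent resolved
    for agent, path in resolved_tickets:
        # One node per prefix of the hierarchy, with a containment edge
        # from each prefix to the next-deeper prefix.
        for depth in range(1, len(path)):
            contains[path[:depth]].add(path[:depth + 1])
        resolved[agent].add(path)
    return contains, resolved

tickets = [
    ("alice", ("Hardware", "Desktop", "Mac")),
    ("alice", ("Hardware", "Desktop", "Windows")),
    ("bob",   ("Hardware", "Desktop", "Mac")),
]
contains, resolved = build_skill_graph(tickets)
```

In a deployed system, the same structure would typically live in a graph database rather than in-memory dictionaries, but the node and edge shapes carry over directly.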
In the example of
Referring to
In
Referring back to
Referring to
The next step in the system 100 is Step B, compute skill scores 115, which computes the skill scores for each relationship between an agent and a skill. Once the relationships are defined by the create knowledge graph/matrix-skill 108, the next step is to find out the strength of each relationship, which defines how good the agent is at resolving tickets of that skill, by computing skills scores for agents using a skills score computation module 116. This results in the skill level for that agent. Agent metrics are used to define the skill level for each agent by combining multiple factors. In some implementations, the skills score computation module 116 uses statistics, centrality analysis, and regression analysis.
If the “purity” of the skills cluster shows one agent who has resolved a high volume of cases, then this agent is clearly a skilled agent.
Each skill with a set of tickets has a MTTR for that skill cluster of tickets. Finding the ratio of the agent's MTTR to the skill's MTTR provides an indicator of how much better (or worse) the agent is compared to the agent population's average. If the resolved cases have high customer feedback (e.g., a 5-star rating) or have no escalations, kickbacks, or transfers, then the agent's skill level is considered high. All these metrics are combined for an agent to calculate the agent's skill score.
Each of these metrics is normalized to a computed score that can be, for example, between 0 and 1 based on specific formulae, where 1 indicates high skill and 0 indicates no skill. The following metrics may be used:
In some implementations, the skills score computation module 116 uses a formula to calculate an agent skill score, where the agent skill score represents the proficiency of the agent at the skill, for example:
Skill score=W1*Volume_tickets_score+W2*Escalated_score+W3*Kickback_count_score+ . . .
Where W1, W2, . . . are weights that can be configured or learned through supervised learning to determine the weights automatically. Supervised learning can be used if agent performance or skill scores are known and entered. If they are not, then an unsupervised weight-based approach, as indicated above, is used to come up with a final score. Written generically as Skill score=w1*x1+w2*x2+ . . . , the w1, w2, . . . are the weights and each xi is a skill score between 0 and 1, such as x1=“Volume_tickets_score”, x2=“Escalated_score”, etc., as defined above.
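The weighted-sum combination described above can be sketched directly. The metric names follow the formula in the text; the specific weight and metric values below are illustrative assumptions, since the document states the weights may be configured or learned.

```python
def skill_score(metrics, weights):
    """Combine normalized per-metric scores x_i into a single skill score:
    sum over i of w_i * x_i. Penalty metrics (escalations, kickbacks) are
    negative, so they pull the overall score down."""
    return sum(weights[name] * value for name, value in metrics.items())

# Illustrative values only; real metrics come from the scoring formulas.
metrics = {"Volume_tickets_score": 0.8,
           "Escalated_score": -0.1,
           "Kickback_count_score": -0.05}
weights = {"Volume_tickets_score": 0.5,
           "Escalated_score": 0.3,
           "Kickback_count_score": 0.2}
score = skill_score(metrics, weights)  # 0.5*0.8 + 0.3*(-0.1) + 0.2*(-0.05)
```

With these sample values the score works out to 0.36, and the same function applies unchanged whether the weights are hand-configured or learned.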
Aggregations can be done at various hierarchical levels of the skills ontology and a skills score can be computed at each level. For example, in
Below are example ticket scoring formulas used to calculate the above-listed various metrics:
ResolvedTicketVolume_Score=resolved_ticket_count/total ticket count in a skill type
Kickback_score=−1*(kickback count/total resolved ticket count of an agent in a skill type)
Escalation_score=−1*(escalated_ticket_count/total resolved ticket count of an agent in a skill type)
Service level agreement (SLA) breach score=number of times the SLA was breached (0 is good), an SLA warning was generated, or the ticket was resolved within the SLA.
When the agent resolves a maximum of tickets with ‘Service Target Warning’ generated in a specific skill type, then the agent's slm_status purity will be ‘Service Target Warning’ and sla_breach_score=0.6
The ticket-scoring formulas are evaluated at each skill node and a score is assigned to agents who have resolved tickets with that skill. In some implementations, these formulas may be configured and can be active or inactive as set by a user or administrator of the system.
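The ticket-scoring formulas listed above translate directly into code. The following sketch implements the volume, kickback, and escalation formulas as written; the sample counts are invented for illustration, and the negative sign on the penalty formulas matches the -1 factor in the text.

```python
def resolved_ticket_volume_score(resolved_count, total_in_skill):
    """ResolvedTicketVolume_Score = resolved tickets / total tickets in skill."""
    return resolved_count / total_in_skill

def kickback_score(kickback_count, agent_resolved_in_skill):
    """Kickback_score = -1 * (kickbacks / agent's resolved tickets in skill)."""
    return -1 * (kickback_count / agent_resolved_in_skill)

def escalation_score(escalated_count, agent_resolved_in_skill):
    """Escalation_score = -1 * (escalations / agent's resolved tickets in skill)."""
    return -1 * (escalated_count / agent_resolved_in_skill)

# Illustrative counts: an agent resolved 40 of 100 tickets in a skill type,
# with 2 kickbacks and 1 escalation among the agent's 20 resolved tickets.
volume = resolved_ticket_volume_score(40, 100)   # 0.4
kickback = kickback_score(2, 20)                 # -0.1
escalation = escalation_score(1, 20)             # -0.05
```

Each function returns a normalized value suitable for feeding into the weighted skill-score formula described earlier.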
The skills score computation module 116 also may use other parameters in addition to the metrics above to compute the skills score for an agent. Referring to
The skills score computation module 116 calculates the scores for the agents, and the create skills matrix (knowledge graph) 118 creates a skills matrix 122 and/or a skills knowledge graph 124. The skills matrix 122 and/or skills knowledge graph 124 is used in the intelligent matching 126 of the system 100.
Step C in the system 100 is intelligent matching 126 using the skills matrix 122 and/or the skills knowledge graph 124. As new tickets are created, the skills needed to resolve the ticket are determined based on the skills definitions. In one example, single skill matching is determined. For static skills, the fields specified in the new incident ticket 128 definition are used by the search engine 130 to look for those skills in the skills matrix 122 and/or the skills knowledge graph 124.
For dynamic skills in the ticket, the search engine 130 computes the ticket's distance from dynamic skill nodes to determine which skill node it belongs to using, for example, cosine similarity, which is the measure of similarity between two non-zero vectors of an inner product space. For example, in
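The cosine-similarity assignment described above can be sketched as follows. This is an illustrative example, not the search engine 130 itself: the new ticket and the dynamic skill nodes are represented as numeric vectors (invented here), and the ticket is assigned to the skill node with the highest cosine similarity.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two non-zero vectors: the dot product
    divided by the product of their magnitudes."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# A new ticket's vector compared against two dynamic skill (topic) centroids;
# the ticket is assigned to the nearest skill node. Vectors are illustrative.
ticket = [1.0, 1.0, 0.0]
skill_centroids = {
    "webex issues": [0.9, 0.8, 0.1],
    "mac desktop issues": [0.0, 0.1, 1.0],
}
best_skill = max(skill_centroids,
                 key=lambda s: cosine_similarity(ticket, skill_centroids[s]))
```

Here the ticket vector is far closer to the "webex issues" centroid, so the ticket would be matched to that dynamic skill node.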
For multiple skills matching, when multiple skills are specified, the search engine 130 performs a search for each skill and then takes a weighted average of the scores for each skill.
The search engine 130 also may perform hierarchical skill matching. For example, when a skill fails to match, as shown in the example process 900 of
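One plausible way to realize the hierarchical fallback described above is to walk up the skill path one level at a time until a level with scored agents is found. The sketch below is an assumption about how such a fallback could work, not the documented process 900; the skill paths, agents, and scores are illustrative.

```python
def match_agent(skill_path, agent_scores):
    """agent_scores maps a hierarchical skill path tuple to {agent: score}.
    Returns the best-scoring agent at the deepest level (the skill itself
    or the nearest ancestor) that has any scored agents, else None."""
    path = tuple(skill_path)
    while path:
        agents = agent_scores.get(path)
        if agents:
            return max(agents, key=agents.get)
        path = path[:-1]   # no match at this level: fall back to the parent skill
    return None

# No agent is scored for the Mac leaf skill, so matching falls back to the
# parent ("Hardware", "Desktop") level and picks the higher-scoring agent.
agent_scores = {
    ("Hardware", "Desktop"): {"alice": 0.7, "bob": 0.4},
}
best_agent = match_agent(("Hardware", "Desktop", "Mac"), agent_scores)
```

This keeps routing graceful: a ticket for a brand-new leaf skill still reaches the most proficient agent in the closest containing skill area.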
Step D in the system 100 is continuous skill updates 136. That is, the skills matrix 122 and/or the skills knowledge graph 124 is updated continuously with each ticket received and resolved by an agent. First, using intelligent matching 126, an identify skill nodes and agent nodes 138 process is implemented on the tickets.
As agents resolve tickets, the skill score is re-computed and the skills matrix 122 and/or the skills knowledge graph 124 are kept updated as a recompute skills score/new nodes/rels 140 step. Multiple methods can be used to do this, either as a batch process run on a schedule or in real-time as soon as the incident is resolved. This can involve multiple scenarios such as:
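The real-time variant of the recompute step can be sketched as follows. This is an illustrative assumption about the update mechanics, not the system's implementation: when an agent resolves a ticket, per-skill and per-agent running counters are updated, and the agent's normalized volume score for that skill is re-computed immediately. The field names and counters are invented for illustration.

```python
def on_ticket_resolved(state, agent, skill, escalated=False):
    """Update running counters when a ticket is resolved and return the
    agent's re-computed volume score for that skill (resolved / total)."""
    state.setdefault("skill_totals", {}).setdefault(skill, 0)
    state["skill_totals"][skill] += 1
    key = (agent, skill)
    counters = state.setdefault("agent_counters", {}).setdefault(
        key, {"resolved": 0, "escalated": 0})
    counters["resolved"] += 1
    if escalated:
        counters["escalated"] += 1
    # Re-compute the agent's normalized volume score for this skill.
    return counters["resolved"] / state["skill_totals"][skill]

state = {}
on_ticket_resolved(state, "alice", "webex issues")
on_ticket_resolved(state, "bob", "webex issues")
volume = on_ticket_resolved(state, "alice", "webex issues")  # alice: 2 of 3
```

A batch-scheduled variant would replay the same update over all tickets resolved since the last run instead of updating per event.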
Step E in the system 100 is human feedback 142.
Humans can provide feedback on how the agents are performing so that the algorithm can improve over time. As shown in table 1000 of
Instructions for the performance of the process 1100 may be stored in the at least one memory 154 of
In
In
Implementations of the various techniques described herein may be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. Implementations may be implemented as a computer program product, i.e., a computer program tangibly embodied in an information carrier, e.g., in a machine-readable storage device, for execution by, or to control the operation of, data processing apparatus, e.g., a programmable processor, a computer, or multiple computers. A computer program, such as the computer program(s) described above, can be written in any form of programming language, including compiled or interpreted languages, and can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment. A computer program can be deployed to be executed on one computer or on multiple computers at one site or distributed across multiple sites and interconnected by a communication network.
Method steps may be performed by one or more programmable processors executing a computer program to perform functions by operating on input data and generating output. Method steps also may be performed by, and an apparatus may be implemented as, special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application-specific integrated circuit).
Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. Elements of a computer may include at least one processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer also may include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto-optical disks, or optical disks. Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory may be supplemented by, or incorporated in special purpose logic circuitry.
To provide for interaction with a user, implementations may be implemented on a computer having a display device, e.g., a cathode ray tube (CRT) or liquid crystal display (LCD) monitor, for displaying information to the user and a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide for interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input.
Implementations may be implemented in a computing system that includes a back-end component, e.g., as a data server, or that includes a middleware component, e.g., an application server, or that includes a front-end component, e.g., a client computer having a graphical user interface or a Web browser through which a user can interact with an implementation, or any combination of such back-end, middleware, or front-end components. Components may be interconnected by any form or medium of digital data communication, e.g., a communication network. Examples of communication networks include a local area network (LAN) and a wide area network (WAN), e.g., the Internet.
While certain features of the described implementations have been illustrated as described herein, many modifications, substitutions, changes and equivalents will now occur to those skilled in the art. It is, therefore, to be understood that the appended claims are intended to cover all such modifications and changes as fall within the scope of the embodiments.