The present invention generally relates to telecommunications systems and methods, as well as the analysis of contact center metrics. More particularly, the present invention pertains to the optimization of key performance indicators and the benefits thereof to the contact center.
A system and method are presented for estimating expected improvement in a target metric for a contact center. A lift estimation analysis is performed to estimate the benefit the contact center is likely to achieve assuming different agent availability conditions for a specific future time interval. Historic data is extracted over a set time interval and used to create new datasets for training and testing. The historic data comprises interaction data and associated outcomes for the interaction data. A predictive model is constructed and used to analyze the test dataset by predicting an outcome score for a target metric and estimating an expected lift.
In one embodiment, a method is presented for estimating expected improvement in a target metric of a contact center, the method comprising: extracting a first dataset over a set time interval from a database associated with the contact center, wherein the dataset comprises: a plurality of past interaction data between customers and agents of the contact center, associated outcomes for the interaction data, and future availability of agents from the past interaction data; creating a second dataset from the first dataset, wherein the second dataset comprises a training dataset and a test dataset; building, by a predictor training server, a predictive model using the training dataset wherein the predictor training server applies a machine-learning algorithm which seeks to minimize the error in difference between true outcome for a target metric of a given interaction and a predicted outcome of the given interaction; analyzing, by an analytics server, the test dataset using the predictive model by predicting an outcome score for the target metric for an interaction handled by an agent from a pool of the agents available in the future; estimating the expected improvement for the target metric when each interaction is handled by an agent meeting a threshold for the outcome score; and generating, by the analytics server, a visual representation comprising the outcome score for the target metric and the expected improvement.
The associated outcomes for the interaction data may comprise first call resolution data. The associated outcomes for the interaction data may also comprise customer satisfaction survey results.
The test dataset comprises a most recent percentage of the first dataset as measured by time over the set time interval and the training dataset comprises a remainder of the first dataset. The most recent percentage may be 20% with the remainder being 80%.
The future availability of agents from the past interaction data comprises a set percentage of a number of the available agents for the contact center.
The machine-learning algorithm comprises decision trees. The machine-learning algorithm may comprise neural networks in another embodiment.
The first dataset further comprises profile information of customers and profile information of agents and the interaction data further comprises a context associated with each interaction. The analyzing is performed for each of the contexts.
In another embodiment, a system of estimating expected improvement in a target metric of a contact center is presented, the system comprising: a processor; and a memory coupled to the processor, wherein the memory stores instructions that, when executed by the processor, cause the processor to: extract a first dataset over a set time interval from a database associated with the contact center, wherein the dataset comprises: a plurality of past interaction data between customers and agents of the contact center, associated outcomes for the interaction data, and future availability of agents from the past interaction data; create a second dataset from the first dataset, wherein the second dataset comprises a training dataset and a test dataset; build, by a predictor training server, a predictive model using the training dataset wherein the predictor training server applies a machine-learning algorithm which seeks to minimize the error in difference between true outcome for a target metric of a given interaction and a predicted outcome of the given interaction; analyze, by an analytics server, the test dataset using the predictive model by predicting an outcome score for the target metric for an interaction handled by an agent from a pool of the agents available in the future; estimate the expected improvement for the target metric when each interaction is handled by an agent meeting a threshold for the outcome score; and generate, by the analytics server, a visual representation comprising the outcome score for the target metric and the expected improvement.
For the purposes of promoting an understanding of the principles of the invention, reference will now be made to the embodiment illustrated in the drawings and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of the invention is thereby intended. Any alterations and further modifications in the described embodiments, and any further applications of the principles of the invention as described herein are contemplated as would normally occur to one skilled in the art to which the invention relates.
The present disclosure is directed to a system and method of estimating expected improvement in a target metric of a contact center. When operators of contact centers are seeking to upgrade or otherwise alter the existing architecture of a contact center, it may be difficult to quantify the expected benefits to the contact center. This may be particularly true when adding a predictive routing system to a contact center's traditional interaction routing architecture. The deployment of a predictive routing system to route interactions in a contact center may use, among other inputs, accumulated agent and customer-agent interaction data, which can permit the contact center operator to analyze omnichannel interactions and outcomes and generate models to predict outcomes. From this analysis by the predictive routing system, combined with machine learning, an optimal match between waiting interactions and available agents may be determined, and the interactions routed accordingly. Reports may also be generated on predicted versus actual outcomes of customer-agent interactions. Further training of the machine-learning model of the predictive routing system may be accomplished using actual outcomes, which may improve the accuracy of predicted outcomes between similar customer profiles and agent profiles.
Upon implementing a predictive routing system as part of an existing contact center, it may be desirable for the operator of the contact center to have a quantitative analysis of the benefits of adding the predictive routing system to the contact center. Contact centers with an existing store of historic data that includes the type of input data for a predictive routing server, such as agent data and customer-agent interaction data, may be able to leverage the historic data to estimate and model how the contact center might have performed with the predictive routing server, relative to a baseline of actual performance over the same time period. From the modeling of the historic data, a simulation can be performed on the historic data to determine agent availability by scoring all agents who are part of a daily agent pool and selecting the maximum-scored agent among the pool. Simulations may be run over varying percentages of availability, such as 100% or 50%, and the results verified. In addition, a generated lift curve could also reveal underlying issues with the scoring model as to how well the model discriminates between agents and how reliable the scores are.
Contact Center Systems
Components of the communication infrastructure indicated generally at 100 include: a plurality of end user devices 105A, 105B, 105C; a communications network 110; a switch/media gateway 115; a call controller 120; an IMR server 125; a routing server 130; a storage device 135; a stat server 140; a plurality of agent devices 145A, 145B, 145C comprising workbins 146A, 146B, 146C; a multimedia/social media server 150; web servers 155; an iXn server 160; a UCS 165; a reporting server 170; and a configuration server 175.
In an embodiment, the contact center system manages resources (e.g., personnel, computers, telecommunication equipment, etc.) to enable delivery of services via telephone or other communication mechanisms. Such services may vary depending on the type of contact center and may range from customer service to help desk, emergency response, telemarketing, order taking, etc.
Customers, potential customers, or other end users (collectively referred to as customers or end users) desiring to receive services from the contact center may initiate inbound communications (e.g., telephony calls, emails, chats, etc.) to the contact center via end user devices 105A, 105B, and 105C (collectively referenced as 105). Each of the end user devices 105 may be a communication device conventional in the art, such as a telephone, wireless phone, smart phone, personal computer, electronic tablet, laptop, etc., to name some non-limiting examples. Users operating the end user devices 105 may initiate, manage, and respond to telephone calls, emails, chats, text messages, web-browsing sessions, and other multi-media transactions. While three end user devices 105 are illustrated at 100 for simplicity, any number may be present.
Inbound and outbound communications from and to the end user devices 105 may traverse a network 110 depending on the type of device that is being used. The network 110 may comprise a communication network of telephone, cellular, and/or data services and may also comprise a private or public switched telephone network (PSTN), local area network (LAN), private wide area network (WAN), and/or public WAN such as the Internet, to name a non-limiting example. The network 110 may also include a wireless carrier network including a code division multiple access (CDMA) network, global system for mobile communications (GSM) network, or any wireless network/technology conventional in the art, including but not limited to 3G, 4G, LTE, etc.
In an embodiment, the contact center system includes a switch/media gateway 115 coupled to the network 110 for receiving and transmitting telephony calls between the end users and the contact center. The switch/media gateway 115 may include a telephony switch or communication switch configured to function as a central switch for agent level routing within the center. The switch may be a hardware switching system or a soft switch implemented via software. For example, the switch 115 may include an automatic call distributor, a private branch exchange (PBX), an IP-based software switch, and/or any other switch with specialized hardware and software configured to receive Internet-sourced interactions and/or telephone network-sourced interactions from a customer, and route those interactions to, for example, an agent telephony or communication device. In this example, the switch/media gateway establishes a voice path/connection (not shown) between the calling customer and the agent telephony device, by establishing, for example, a connection between the customer's telephony device and the agent telephony device.
In an embodiment, the switch is coupled to a call controller 120 which may, for example, serve as an adapter or interface between the switch and the remainder of the routing, monitoring, and other communication-handling components of the contact center. The call controller 120 may be configured to process PSTN calls, VoIP calls, etc. For example, the call controller 120 may be configured with computer-telephony integration (CTI) software for interfacing with the switch/media gateway and contact center equipment. In an embodiment, the call controller 120 may include a session initiation protocol (SIP) server for processing SIP calls. The call controller 120 may also extract data about the customer interaction, such as the caller's telephone number (e.g., the automatic number identification (ANI) number), the customer's internet protocol (IP) address, or email address, and communicate with other components of the system 100 in processing the interaction.
In an embodiment, the system 100 further includes an interactive media response (IMR) server 125. The IMR server 125 may also be referred to as a self-help system, a virtual assistant, etc. The IMR server 125 may be similar to an interactive voice response (IVR) server, except that the IMR server 125 is not restricted to voice and additionally may cover a variety of media channels. In an example illustrating voice, the IMR server 125 may be configured with an IMR script for querying customers on their needs. For example, a contact center for a bank may tell customers via the IMR script to ‘press 1’ if they wish to retrieve their account balance. Through continued interaction with the IMR server 125, customers may be able to complete service without needing to speak with an agent. The IMR server 125 may also ask an open-ended question such as, “How can I help you?” and the customer may speak or otherwise enter a reason for contacting the contact center. The customer's response may be used by a routing server 130 to route the call or communication to an appropriate contact center resource.
If the communication is to be routed to an agent, the call controller 120 interacts with the routing server (also referred to as an orchestration server) 130 to find an appropriate agent for processing the interaction. The selection of an appropriate agent for routing an inbound interaction may be based, for example, on a routing strategy employed by the routing server 130, and further based on information about agent availability, skills, and other routing parameters provided, for example, by a statistics server 140.
In an embodiment, the routing server 130 may query a customer database, which stores information about existing clients, such as contact information, service level agreement (SLA) requirements, nature of previous customer contacts and actions taken by the contact center to resolve any customer issues, etc. The database may be, for example, Cassandra or any NoSQL database, and may be stored in a mass storage device 135. The database may also be a SQL database and may be managed by any database management system such as, for example, Oracle, IBM DB2, Microsoft SQL server, Microsoft Access, PostgreSQL, etc., to name a few non-limiting examples. The routing server 130 may query the customer information from the customer database via an ANI or any other information collected by the IMR server 125.
Once an appropriate agent is identified as being available to handle a communication, a connection may be made between the customer and an agent device 145A, 145B and/or 145C (collectively referenced as 145) of the identified agent. While three agent devices are illustrated for simplicity, any number may be present.
The contact center system 100 may also include a multimedia/social media server 150 for engaging in media interactions other than voice interactions with the end user devices 105 and/or web servers 155. The media interactions may be related, for example, to email, vmail (voice mail through email), chat, video, text-messaging, web, social media, co-browsing, etc. The multi-media/social media server 150 may take the form of any IP router conventional in the art with specialized hardware and software for receiving, processing, and forwarding multi-media events.
The web servers 155 may include, for example, social interaction site hosts for a variety of known social interaction sites to which an end user may subscribe, such as Facebook, Twitter, Instagram, etc., to name a few non-limiting examples. In an embodiment, although web servers 155 are depicted as part of the contact center system 100, the web servers may also be provided by third parties and/or maintained outside of the contact center premise. The web servers 155 may also provide web pages for the enterprise that is being supported by the contact center system 100. End users may browse the web pages and get information about the enterprise's products and services. The web pages may also provide a mechanism for contacting the contact center via, for example, web chat, voice call, email, web real-time communication (WebRTC), etc. Widgets may be deployed on the websites hosted on the web servers 155.
In an embodiment, deferrable interactions/activities may also be routed to the contact center agents in addition to real-time interactions. Deferrable interaction/activities may comprise back-office work or work that may be performed off-line such as responding to emails, letters, attending training, or other activities that do not entail real-time communication with a customer. An interaction (iXn) server 160 interacts with the routing server 130 for selecting an appropriate agent to handle the activity. Once assigned to an agent, an activity may be pushed to the agent, or may appear in the agent's workbin 146A, 146B, 146C (collectively 146) as a task to be completed by the agent. The agent's workbin may be implemented via any data structure conventional in the art, such as, for example, a linked list, array, etc. In an embodiment, a workbin 146 may be maintained, for example, in buffer memory of each agent device 145.
In an embodiment, the mass storage device(s) 135 may store one or more databases relating to agent data (e.g., agent profiles, schedules, etc.), customer data (e.g., customer profiles), interaction data (e.g., details of each interaction with a customer, including, but not limited to: reason for the interaction, disposition data, wait time, handle time, etc.), and the like. In another embodiment, some of the data (e.g., customer profile data) may be maintained in a customer relations management (CRM) database hosted in the mass storage device 135 or elsewhere. The mass storage device 135 may take form of a hard disk or disk array as is conventional in the art.
In an embodiment, the contact center system may include a universal contact server (UCS) 165, configured to retrieve information stored in the CRM database and direct information to be stored in the CRM database. The UCS 165 may also be configured to facilitate maintaining a history of customers' preferences and interaction history, and to capture and store data regarding comments from agents, customer communication history, etc.
The contact center system may also include a reporting server 170 configured to generate reports from data aggregated by the statistics server 140. Such reports may include near real-time reports or historical reports concerning the state of resources, such as, for example, average wait time, abandonment rate, agent occupancy, etc. The reports may be generated automatically or in response to specific requests from a requestor (e.g., agent/administrator, contact center application, etc.).
The configuration server 175 may be configured to store agent profile information associated with agents that operate an agent device 145. The information associated with an agent accessed through the configuration server 175 may include, but is not limited to, a unique agent identification number, agent activity, an agent grouping of the contact center, language skills, and assignment to particular contexts of interactions. The configuration server 175 may also associate certain statistics from the reporting server 170 with particular agents. The associated statistics may include, but are not limited to, statistics concerning agent utilization, first call resolution, average handle time, and retention rate.
The various servers of the contact center system 100 may each include one or more processors executing computer program instructions and interacting with other system components to perform the various functionalities described herein.
In an embodiment, the terms “interaction” and “communication” are used interchangeably, and generally refer to any real-time and non-real-time interaction that uses any communication channel including, without limitation, telephony calls (PSTN or VoIP calls), emails, vmails, video, chat, screen-sharing, text messages, social media messages, WebRTC calls, etc.
In an embodiment, the premises-based platform product may provide access to and control of components of the system 100 through user interfaces (UIs) present on the agent devices 145A-C. A graphical application generator program may be integrated within the premises-based platform product, allowing a user to write the programs (handlers) that control various interaction processing behaviors within the premises-based platform product.
As noted above, the contact center may operate as a hybrid system in which some or all components are hosted remotely, such as in a cloud-based environment. For the sake of convenience, aspects of embodiments of the present invention will be described below with respect to providing modular tools from a cloud-based environment to components housed on-premises.
The predictive routing server 200 may permit inputs of customer profile data, agent profile data, interaction data, and outcome data associated with the interaction data. The inputs may be obtained from a storage device 135, a configuration server 175, or other appropriate hardware for maintaining the data. The predictive routing server 200 is configured to perform batch processing to run a number of analytical processes on historical data, for example variance analysis and feature analysis. A variance analysis performed by the predictive routing server 200 may provide analysis on how a selected target metric varies across different agent populations. A variance analysis may be generated for inter-agent variance, which may show how well individual agents perform, including an identification of the highest-performing and weakest agents. A variance analysis may also be generated for agent variance, which may show the range of agent performance for each category of a specified group of agents. Examples of a grouping of agents can include, but are not limited to, an agent's role and an agent's skills. A feature analysis performed by the predictive routing server 200 may be used to generate predictors and models for the predictive routing server 200. In particular, the generated predictors and models enable the contact center operator to determine which factors have the most impact on a selected target metric. Examples of factors that may impact a selected target metric include agent characteristics and agent behavior, and customer characteristics and customer behavior.
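As an illustration of the variance analysis described above, the following is a minimal sketch rather than the server's actual implementation; it assumes the historical interactions are available as a pandas DataFrame with hypothetical columns agent_id, agent_group, and a target metric column such as csat.

```python
import pandas as pd

def variance_analysis(interactions: pd.DataFrame, metric: str = "csat") -> dict:
    """Summarize how a target metric varies across agents and agent groupings."""
    # Inter-agent variance: per-agent means expose the highest-performing
    # and weakest agents on the selected target metric.
    per_agent = interactions.groupby("agent_id")[metric].mean().sort_values()
    # Agent-group variance: the range of performance within each specified
    # grouping of agents (e.g., by role or by skill).
    per_group = interactions.groupby("agent_group")[metric].agg(
        ["mean", "std", "min", "max"])
    return {
        "weakest_agents": per_agent.head(5),
        "top_agents": per_agent.tail(5),
        "inter_agent_variance": per_agent.var(),
        "per_group_range": per_group,
    }
```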
The predictive routing server 200 may be configured to train predictive models for real-time scoring use. In other words, a generated predictive model, detailed below, may be used to determine the routing of an incoming interaction to an available agent. Upon receipt of an interaction by the routing server 130, a real-time list of agents available to handle the interaction may be generated and scored against the model, and the interaction then routed to an available agent based on the model's scores. Additionally, the predictive routing server 200 may store predicted and actual outcomes of agent-customer interactions and generate reports and dashboards tracking performance of models for target metrics.
The scoring service 202 may be configured to receive a model trained on the predictive routing server 200. The received trained model can be available on the scoring service 202 as a worker to take scoring requests that may come from other systems of the contact center 100. In an embodiment, the predictive routing server 200 may have multiple workers of the scoring service 202 to score requests for a number of target metrics. The scoring service 202 operating as a worker can be made available as a scoring service in the contact center environment. In an embodiment, dedicated instances of the scoring service 202 are provided for each scoring service offered by the predictive routing server 200.
The model training service 204 may be configured as a dedicated service to analyze received data and train models for the predictive routing server 200. The model training service 204 may be configured to train a model by (i) transforming historical data, (ii) generating new features, (iii) aggregating outcomes, (iv) running feature analysis, and (v) training and testing for regression and classification models. The model training service 204 may be configured to utilize machine learning algorithms to train models. Machine learning algorithms deployed by the model training service 204 may include, but are not limited to, neural networks and decision trees.
The analytics service 206 may be configured to perform analysis on different datasets to help guide the configuration of usable datasets and target metrics and to optimize the datasets consumed for model training. The analytics service 206 may perform aggregation on datasets as part of the scoring service worker. In an embodiment, the analytics service may include dedicated processes to run feature analysis, variance analysis, simulations, A/B testing, and lift estimation.
However, embodiments of the present disclosure are not limited to predictive routing (such as the predictive routing server described above) and may also be applied to other service(s). Embodiments of the present disclosure are not limited to a particular architecture for the contact center. For example, embodiments of the present disclosure may also be applied in data centers operated directly by the contact center or within the contact center, where resources within the data center may be dynamically and programmatically allocated and/or deallocated from a group. Embodiments of the present disclosure may also be deployed with remotely-hosted or cloud-based infrastructure.
The process 300 begins and at operation 302, extraction of an historic dataset from a contact center database over a prior time interval occurs. For example, a contact center may generate through its daily operation a voluminous amount of historic data from prior interactions between customers and agents handled by the contact center. The historic dataset may contain interaction data, agent data, customer data, and results from customer satisfaction surveys (CSAT).
The interaction data from the historic dataset may include identification of the customer and the agent that interacted during a particular interaction. For example, each customer may have a unique identification number associated with the customer profile that specifically identifies the customer that contacted the contact center. As an additional example, each agent may have a unique identification number or employee number associated with the agent of the contact center. The customer unique identification number and the agent unique identification number could be included in the interaction data. Additional information associated with, or contained in, the interaction data may include interaction duration, response time, queue time, routing point time, total duration, customer engagement time, customer hold time, and interaction handle time. The interaction data may also contain the context of the interaction, which categorizes the purpose of the interaction. For example, in a contact center that handles a multiplicity of different types of inbound inquiries from a customer, the different types of inbound inquiries might comprise billing, sales, and technical support. The context of “sales” could be associated with the interaction data for a customer that contacts the contact center of the business to inquire about purchasing a new service.
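As a minimal sketch of how such an interaction record might be structured, the following uses illustrative, hypothetical field names; it is not a schema defined by the present system.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class InteractionRecord:
    # Unique identification numbers tying the interaction to the customer
    # profile and the agent profile involved.
    customer_id: str
    agent_id: str
    # Timing attributes associated with the interaction, in seconds.
    response_time: float
    queue_time: float
    customer_hold_time: float
    handle_time: float
    total_duration: float
    # Context categorizing the purpose of the interaction,
    # e.g. "billing", "sales", or "technical support".
    context: str
    # Associated outcome for a target metric (e.g., a CSAT score), if known.
    outcome: Optional[float] = None
```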
In an embodiment, the set time interval is a set amount of historic data as determined by a particular time interval. For example, the most-recent six months' worth of data from the contact center could be the set time interval for the amount of historic data. The time interval may comprise one continuous length of time or a number of distinct time intervals. The time interval may be chosen to omit time periods where there is an absence of data from the historic data, due to a loss of data from the storage medium, for example. The length of the set time interval should be appropriately set to permit sufficient data for building the predictive model and analysis by an analytics server.
The historic database may also contain information concerning the outcomes for the interactions handled through the contact center between the agents and customers. In an embodiment, the associated outcomes for the interaction data may be first call resolution (FCR) data. A variant of the FCR data may be first call resolution within seven days (FCR7) data, which takes into account whether a second interaction between a customer and agent was necessary for the same matter or context within seven days of the first interaction. In an embodiment, the associated outcomes for the interaction data may be customer satisfaction survey results. Similar metrics to FCR that account for different types of interactions handled by a contact center 100 may also be used to determine associated outcomes for the interaction data.
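As one illustration of how an FCR7 outcome label could be derived from the historic interaction data, the sketch below assumes a pandas DataFrame with hypothetical customer_id, context, and timestamp columns; it is an assumption for illustration rather than the system's actual computation.

```python
import pandas as pd

def label_fcr7(interactions: pd.DataFrame) -> pd.Series:
    """Label an interaction 1 if no follow-up from the same customer for the
    same context occurred within seven days of it (resolved), else 0."""
    ordered = interactions.sort_values("timestamp")
    # Timestamp of the same customer's next interaction in the same context.
    next_contact = ordered.groupby(["customer_id", "context"])["timestamp"].shift(-1)
    gap = next_contact - ordered["timestamp"]
    resolved = gap.isna() | (gap > pd.Timedelta(days=7))
    # Restore the caller's original row order.
    return resolved.astype(int).reindex(interactions.index)
```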
The historical database may also contain information concerning the agents that were noted as available agents for the contact center during the set time interval. The historic database may provide additional granularity of detail for when an agent was available to handle an interaction routed through the contact center. For example, the historic database may provide the specific days for when an agent was marked as available to handle an interaction. The historic database may also provide specific shifts, hours, or minutes of the day when an agent was marked as available to handle an interaction.
As disclosed further below, in an embodiment, it may be advantageous to recreate the original routing scenario as closely as possible for modeling and simulating the contact center with predictive routing. Accordingly, it may be desirable for the choice of agents to be the same as that of the existing baseline routing, at the granularity of a day. In other words, while scoring an interaction using the predictive model, the model is constrained to consider only those agents who had handled calls through baseline routing on the day of the interaction. In addition, the historical database may contain additional characteristics of agents, such as whether an agent was assigned to handle specific types of interactions (based on context or communication type), or otherwise grouped within the pool of agents. In an embodiment, when additional limits or groupings of agents are deployed in the contact center, it may be desirable to choose the values of those settings to be the same as were applicable on the day of the interaction (i.e., selecting one of the interactions handled by an agent on that day and extracting the corresponding values).
In an embodiment, it may be assumed that an agent profile is the same for the extracted granularity of the agent information. For example, if agent information is available at a day-by-day level of granularity, then it may be assumed that the agent profile remains the same throughout the given day.
At operation 304, partitioning of the extracted historic dataset from a contact center database occurs. In an embodiment, a split of the historic dataset into two temporal periods occurs at operation 304. For example, a historic dataset containing data (e.g., interaction data, agent data, and customer data) has been extracted for the specific months for a given calendar year (e.g., January through October). In this example, it is desired that the partition of operation 304 creates an 80:20 split of the extracted historic dataset, and thus creates two datasets: one containing the extracted interaction data, agent data, and customer data for January through August; and one containing the extracted interaction data, agent data, and customer data for September through October. In an embodiment, the earliest temporal partition of the historic dataset is used to create the training dataset in operation 306, while the latest temporal partition of the historic dataset is used to create the test dataset in operation 310.
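A minimal sketch of the temporal 80:20 partition described above is shown below, assuming the extracted historic dataset is a pandas DataFrame with a hypothetical timestamp column; the cutoff is measured over the time interval rather than by record count.

```python
import pandas as pd

def temporal_split(historic: pd.DataFrame, test_fraction: float = 0.2):
    """Partition the historic dataset so the most recent portion of the time
    interval becomes the test dataset and the remainder the training dataset."""
    start, end = historic["timestamp"].min(), historic["timestamp"].max()
    cutoff = start + (1 - test_fraction) * (end - start)
    train = historic[historic["timestamp"] < cutoff]    # e.g., January through August
    test = historic[historic["timestamp"] >= cutoff]    # e.g., September through October
    return train, test
```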
At operation 308, a predictive model of the interaction routing from the contact center is created utilizing the training dataset from operation 306. In an embodiment, the predictive model may use the training dataset wherein the predictive routing server applies a machine-learning algorithm, which seeks to minimize the error between the true outcome for a target metric of a given agent-customer interaction and the predicted outcome of the given agent-customer interaction. As an example, the predictive model is generated by inputting the training dataset through a selected machine-learning algorithm, generating a predicted outcome for each agent-customer interaction from the training dataset, and determining an error from the actual outcome for the given agent-customer interaction. The machine-learning algorithm continues to be applied to the predicted outcomes for the training dataset until the error across the differences between predicted outcomes and actual outcomes is minimized for all of the agent-customer interactions in the training dataset.
In an embodiment, the machine-learning algorithm applied at operation 308 to build the predictive model can be a decision tree, neural network, or other suitable approach. In an example, the model training service 204 of the predictive routing server 200 may apply the selected machine-learning algorithm to the training dataset to build the predictive model.
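A minimal sketch of operation 308 is shown below, using a decision-tree regressor from scikit-learn as one of the algorithm families named above; the feature and target column names are illustrative assumptions, and a neural network or other regressor could be substituted.

```python
from sklearn.metrics import mean_absolute_error
from sklearn.tree import DecisionTreeRegressor

def build_predictive_model(train, feature_cols, target_col="csat"):
    """Fit a model that reduces the error between the true outcome of the
    target metric and the predicted outcome for each agent-customer interaction."""
    model = DecisionTreeRegressor(max_depth=8)  # decision tree; a neural network is another option
    model.fit(train[feature_cols], train[target_col])
    # Error between predicted and actual outcomes on the training dataset.
    predicted = model.predict(train[feature_cols])
    training_error = mean_absolute_error(train[target_col], predicted)
    return model, training_error
```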
In an embodiment, the predictive model is generated at operation 308 for a number of identified contexts from the interactions from the historical agent-customer interaction data. For example, a contact center 100 may have a number of contexts identified for the customer's intention of an interaction. In an example, the predictive model generated at operation 308 may seek to minimize the error within each of the identified contexts for the contact center 100.
At operation 312, the test dataset generated in operation 310 is scored using the predictive model constructed in operation 308. In an embodiment, the test dataset is scored through analyzing, by the analytics service of the predictive routing server, the test data using the predictive model. This may be accomplished by predicting an outcome score for the target metric for a given agent-customer interaction handled by an agent from a pool of simulated available agents on the day of the agent-customer interaction. An expected lift is then estimated for when each interaction is handled by its best-scored agent from the pool of simulated available agents on the day of the agent-customer interaction.
In an embodiment, the number of simulations may be set for operation 312, specifying how many times the scoring of the test dataset selects subsets of agents, whereby each subset of agents is used in one simulation. The predictive routing server determines an average across the specified number of simulations to account for the randomness in subset selection. The calculated agent availability figure may determine the size of these subsets. In an embodiment, scoring the test dataset may be performed for different agent availability percentages. For example, agent availability may be varied from 0.01, when only 1% of total agents are available, to 1, when 100% of agents are available. In another example, 50% availability uses random selections of half the total agents. In yet another example, 25% availability uses simulations that each pick a random 25% of the total number of agents.
In an embodiment, the number of samples to use from the test dataset for operation 312 may be varied. For example, the number of samples might be approximately 30 times the number of agents (i.e., an average of 30 samples per agent). It should be noted, however, that the number of samples should not exceed the number of records in the test dataset. As a result, the method 300 would disregard the excess number and run reports for only the available number of samples in the test dataset.
In an embodiment, operation 312 may permit inclusion of a parameter to use to group interactions for estimation. For example, only agents who handled interactions of the type specified in the specified parameter value are used in the estimation for that group.
The expected lift in the target metric may be dependent on the available agent pool. In an embodiment, candidate agent pools of different sizes may be created to study the increase or decrease in lift. Agent availability may be defined as the fraction of total agents who handled calls on the day of the interaction. In an embodiment, to simulate 100% agent availability, for each interaction, all the agents who are part of the daily agent pool determined for a set day are scored and the maximum-scored agent among them is the agent selected by the generated predictive model. A 50% agent availability may be determined by randomly selecting half of the agents from the daily agent pool, scoring them, and picking the maximum-scored agent among the selected half. In an embodiment, to ensure the estimated lift is not occurring by chance, the random selection of agents may be repeated a number of times, and then the lift is averaged across the runs. For example, the random selection and scoring of agents may be simulated 100 times, selecting the highest-scoring agent from each simulation, and then the 100 scores are averaged to determine the estimated lift.
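The following is a rough sketch of the availability simulation just described. The score_agent callable and the daily_agent_pools mapping are hypothetical names introduced for illustration; score_agent stands in for the predictive model's outcome score for a given interaction-agent pairing.

```python
import random

def simulate_best_agent_score(interactions, daily_agent_pools, score_agent,
                              availability=0.5, n_simulations=100):
    """Average best-agent score when only a fraction of each day's agent pool
    is available, averaged over repeated random subset selections."""
    per_interaction = []
    for interaction in interactions:
        pool = daily_agent_pools[interaction["date"]]         # agents active that day
        subset_size = max(1, int(len(pool) * availability))   # e.g., half the pool at 50%
        runs = []
        for _ in range(n_simulations):
            subset = random.sample(pool, subset_size)
            runs.append(max(score_agent(interaction, agent) for agent in subset))
        per_interaction.append(sum(runs) / n_simulations)
    return sum(per_interaction) / len(per_interaction)
```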
In an embodiment, the scores obtained from the predictive model may be error-prone and may need to be adjusted in order for the predictive model to provide a reliable estimate. For a given historic agent-customer interaction, when the agent chosen by the predictive model based on a maximum score for a target metric matches the agent who originally handled the interaction according to the historic routing information, this information may be used to correct for bias in the predicted scores, if any exists. Since the number of interactions for which the two agents selected by the two different routing policies match is likely to be a small fraction of the total interactions, the estimation algorithm also amplifies the correction to account for other interactions where the actual agent and the predictive-routing-suggested agent do not match. The error-correction algorithm further assumes that the baseline routing had chosen agents randomly in order to determine the amplification factor. The estimate is expected to be accurate if either the predicted score or the amplification factor is accurate, but both could have errors associated with them that are likely to impact the estimate. The bias in the predicted score is determined based on the sample of interactions where the predictive-routing-chosen agent and the actual agent who handled the interaction match, and this correction is amplified and applied to all the samples to account for the overall bias.
At operation 314, estimating the expected lift from the scoring of the test dataset from operation 312 is performed. The expected lift may be estimated by the analytics service of the predictive routing server, whereby the expected lift is determined as if each interaction in the contact center were handled by its best-scored agent from a pool of simulated available agents during a certain time period for the agent-customer interaction. The lift estimator uses the original interaction-agent assignment as captured in the historic dataset, the corresponding outcome, and the scores predicted by the developed model for the same set of interactions and agents, to estimate the performance the contact center would have seen had the agent assignment been made using the predictive model. In an embodiment, the estimated lift may be determined through deployment of a doubly robust estimator, although other techniques may be applied. See Dudík, M., Langford, J., Li, L., “Doubly Robust Policy Evaluation and Learning,” Proceedings of the 28th International Conference on Machine Learning, 2011. The expected lift may be graphically displayed through a graph which demonstrates the expected lift of the target metric for the contact center, wherein the graph contains the outcome score for the target metric.
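The sketch below shows the general form of a doubly robust off-policy estimate in the spirit of the cited reference; it is illustrative rather than the exact algorithm of the analytics service. The array names are assumptions, and the baseline routing is assumed to have chosen agents uniformly at random, so the inverse propensity (the size of the available agent pool) plays the role of the amplification factor described above.

```python
import numpy as np

def doubly_robust_lift(actual_outcome, actual_agent, model_agent,
                       score_model_agent, score_actual_agent, n_available):
    """Doubly robust estimate of the expected lift over the observed baseline.

    Each argument is a NumPy array with one entry per historic interaction:
    the true outcome, the agent who actually handled it, the agent the
    predictive model would select, the model's predicted scores for those two
    agents, and the size of that day's available agent pool."""
    match = (actual_agent == model_agent).astype(float)
    # Correction applied only where the two routing policies agree, amplified
    # by the inverse of the assumed-random baseline propensity 1 / n_available.
    correction = match * n_available * (actual_outcome - score_actual_agent)
    estimated_value = np.mean(score_model_agent + correction)
    baseline_value = np.mean(actual_outcome)
    return estimated_value - baseline_value
```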
A very large number of samples can be used to generate the data shown.
The predicted score of a target metric from the generated predictive model, created as a result of operation 312, may be included in the graphical display of the expected lift.
The different set levels of agent availability applied to the predictive model, created as a result of operation 312, may also be represented in the graphical display, permitting the estimated lift to be compared across the availability levels.
The shape of the estimated lift curve may depend on factors such as the variance in the scores across agents for each interaction. If many agents have very similar scores close to the maximum score, the increase in agent availability will not have an impact on the lift and a flat line across availabilities may be seen. The shape of the lift curve may also depend on the generated model's accuracy. The position of the data points on the curve relative to the baseline may be determined by the reliability of the predicted scores against the true outcome.
Computer Systems
In an embodiment, each of the various servers, controls, switches, gateways, engines, and/or modules (collectively referred to as servers) in the described figures may be implemented via hardware or firmware (e.g., an ASIC), as will be appreciated by a person of skill in the art. Each of the various servers may be a process or thread, running on one or more processors, in one or more computing devices.
The various servers may be located on a computing device on-site at the same physical location as the agents of the contact center or may be located off-site (or in the cloud) in a geographically different location, e.g., in a remote data center, connected to the contact center via a network such as the Internet. In addition, some of the servers may be located in a computing device on-site at the contact center while others may be located in a computing device off-site, or servers providing redundant functionality may be provided both via on-site and off-site computing devices to provide greater fault tolerance. In some embodiments, functionality provided by servers located on computing devices off-site may be accessed and provided over a virtual private network (VPN) as if such servers were on-site, or the functionality may be provided using a software as a service (SaaS) model to provide functionality over the internet using various protocols, such as by exchanging data encoded in extensible markup language (XML) or JSON.
The CPU 605 is any logic circuitry that responds to and processes instructions fetched from the main memory unit 610. It may be implemented, for example, in an integrated circuit, in the form of a microprocessor, microcontroller, or graphics processing unit, or in a field-programmable gate array (FPGA) or application-specific integrated circuit (ASIC). The main memory unit 610 may be one or more memory chips capable of storing data and allowing any storage location to be directly accessed by the central processing unit 605.
In an embodiment, the CPU 605 may include a plurality of processors and may provide functionality for simultaneous execution of instructions or for simultaneous execution of one instruction on more than one piece of data. In an embodiment, the computing device 600 may include a parallel processor with one or more cores. In an embodiment, the computing device 600 comprises a shared memory parallel device, with multiple processors and/or multiple processor cores, accessing all available memory as a single global address space. In another embodiment, the computing device 600 is a distributed memory parallel device with multiple processors each accessing local memory only. The computing device 600 may have both some memory which is shared and some which may only be accessed by particular processors or subsets of processors. The CPU 605 may include a multicore microprocessor, which combines two or more independent processors into a single package, e.g., into a single integrated circuit (IC). For example, the computing device 600 may include at least one CPU 605 and at least one graphics processing unit.
In an embodiment, a CPU 605 provides single instruction multiple data (SIMD) functionality, e.g., execution of a single instruction simultaneously on multiple pieces of data. In another embodiment, several processors in the CPU 605 may provide functionality for execution of multiple instructions simultaneously on multiple pieces of data (MIMD). The CPU 605 may also use any combination of SIMD and MIMD cores in a single device.
A wide variety of I/O devices 635 may be present in the computing device 600. Input devices include one or more keyboards 635B, mice, trackpads, trackballs, microphones, and drawing tablets, to name a few non-limiting examples. Output devices include video display devices 635A, speakers, and printers. An I/O controller 630 may control the I/O devices.
The removable media interface 620 may, for example, be used for installing software and programs. The computing device 600 may further include a storage device 615, such as one or more hard disk drives or hard disk drive arrays, for storing an operating system and other related software, and for storing application software programs. Optionally, a removable media interface 620 may also be used as the storage device. For example, the operating system and the software may be run from a bootable medium, for example, a bootable CD.
In an embodiment, the computing device 600 may include or be connected to multiple display devices 635A, which each may be of the same or different type and/or form. As such, any of the I/O devices 635 and/or the I/O controller 630 may include any type and/or form of suitable hardware, software, or combination of hardware and software to support, enable or provide for the connection to, and use of, multiple display devices 635A by the computing device 600. For example, the computing device 600 may include any type and/or form of video adapter, video card, driver, and/or library to interface, communicate, connect or otherwise use the display devices 635A. In an embodiment, a video adapter may include multiple connectors to interface to multiple display devices 635A. In another embodiment, the computing device 600 may include multiple video adapters, with each video adapter connected to one or more of the display devices 635A. In other embodiments, one or more of the display devices 635A may be provided by one or more other computing devices, connected, for example, to the computing device 600 via a network. These embodiments may include any type of software designed and constructed to use the display device of another computing device as a second display device 635A for the computing device 600. One of ordinary skill in the art will recognize and appreciate the various ways and embodiments that a computing device 600 may be configured to have multiple display devices 635A.
The computing device 600 may be any workstation, desktop computer, laptop or notebook computer, server machine, handheld computer, mobile telephone or other portable telecommunication device, media playing device, gaming system, mobile computing device, or any other type and/or form of computing, telecommunications or media device that is capable of communication and that has sufficient processor power and memory capacity to perform the operations described herein. In some embodiments, the computing device 600 may have different processors, operating systems, and input devices consistent with the device.
In other embodiments, the computing device 600 is a mobile device. Examples might include a Java-enabled cellular telephone or personal digital assistant (PDA), a smart phone, a digital audio player, or a portable media player. In an embodiment, the computing device 600 includes a combination of devices, such as a mobile phone combined with a digital audio player or portable media player.
A computing device 600 may be one of a plurality of machines connected by a network, or it may include a plurality of machines so connected. A network environment may include one or more local machine(s), client(s), client node(s), client machine(s), client computer(s), client device(s), endpoint(s), or endpoint node(s) in communication with one or more remote machines (which may also be generally referred to as server machines or remote machines) via one or more networks. In an embodiment, a local machine has the capacity to function as both a client node seeking access to resources provided by a server machine and as a server machine providing access to hosted resources for other clients. The network may be LAN or WAN links, broadband connections, wireless connections, or a combination of any or all of the above. Connections may be established using a variety of communication protocols. In one embodiment, the computing device 600 communicates with other computing devices 600 via any type and/or form of gateway or tunneling protocol such as Secure Socket Layer (SSL) or Transport Layer Security (TLS). The network interface may include a built-in network adapter, such as a network interface card, suitable for interfacing the computing device to any type of network capable of communication and performing the operations described herein. An I/O device may be a bridge between the system bus and an external communication bus.
In an embodiment, a network environment may be a virtual network environment where the various components of the network are virtualized. For example, the various machines may be virtual machines implemented as a software-based computer running on a physical machine. The virtual machines may share the same operating system. In other embodiments, a different operating system may be run on each virtual machine instance. In an embodiment, a “hypervisor” type of virtualization is implemented where multiple virtual machines run on the same host physical machine, each acting as if it has its own dedicated box. The virtual machines may also run on different host physical machines.
Other types of virtualization are also contemplated, such as, for example, the network (e.g., via Software Defined Networking (SDN)). Functions, such as functions of session border controller and other types of functions, may also be virtualized, such as, for example, via Network Functions Virtualization (NFV).
While the invention has been illustrated and described in detail in the drawings and foregoing description, the same is to be considered as illustrative and not restrictive in character, it being understood that only the preferred embodiment has been shown and described and that all equivalents, changes, and modifications that come within the spirit of the invention as described herein and/or by the following claims are desired to be protected.
Hence, the proper scope of the present invention should be determined only by the broadest interpretation of the appended claims so as to encompass all such modifications as well as all relationships equivalent to those illustrated in the drawings and described in the specification.
This application claims the benefit of U.S. Provisional Patent Application No. 62/783,188, titled “LIFT ESTIMATION ANALYSIS FOR CONTACT CENTER ROUTING”, filed in the U.S. Patent and Trademark Office on Dec. 20, 2018, the contents of which are incorporated herein.