Techniques for decisioning behavioral pairing in a task assignment system

Information

  • Patent Grant
  • 11736614
  • Patent Number
    11,736,614
  • Date Filed
    Wednesday, November 3, 2021
  • Date Issued
    Tuesday, August 22, 2023
  • Inventors
  • Original Assignees
  • Examiners
    • Hong; Harry S
  • Agents
    • Wilmer Cutler Pickering Hale and Dorr LLP
  • CPC
  • Field of Search
    • US
    • 379/265.11-265.14
    • CPC
    • H04M3/51
    • H04M3/523
    • H04M3/5232
  • International Classifications
    • H04M3/523
    • H04M3/51
  • Disclaimer
    This patent is subject to a terminal disclaimer.
Abstract
Techniques for decisioning behavioral pairing in a task assignment system are disclosed. In one particular embodiment, the techniques may be realized as a method for decisioning behavioral pairing in a task assignment system comprising: determining, by at least one computer processor communicatively coupled to and configured to operate in the task assignment system, a plurality of possible task-agent pairings among at least one task waiting for assignment and at least one agent available for assignment; and selecting, by the at least one computer processor for assignment in the task assignment system, a first task-agent pairing of the plurality of possible task-agent pairings based at least in part on a first offer set to be offered by the agent or a first compensation to be received by the agent.
Description
FIELD OF THE DISCLOSURE

The present disclosure generally relates to behavioral pairing and, more particularly, to techniques for decisioning behavioral pairing in a task assignment system.


BACKGROUND OF THE DISCLOSURE

A typical task assignment system algorithmically assigns tasks arriving at a task assignment center to agents available to handle those tasks. At times, the task assignment center may be in an “L1 state” and have agents available and waiting for assignment to tasks. At other times, the task assignment center may be in an “L2 state” and have tasks waiting in one or more queues for an agent to become available for assignment.


In some typical task assignment centers, tasks are assigned to agents ordered based on time of arrival, and agents receive tasks ordered based on the time when those agents became available. This strategy may be referred to as a “first-in, first-out,” “FIFO,” or “round-robin” strategy. For example, in an L2 environment, when an agent becomes available, the task at the head of the queue would be selected for assignment to the agent.


In other typical task assignment centers, a performance-based routing (PBR) strategy for prioritizing higher-performing agents for task assignment may be implemented. Under PBR, for example, the highest-performing agent among available agents receives the next available task. Other PBR and PBR-like strategies may make assignments using specific information about the agents.


“Behavioral Pairing” or “BP” strategies for assigning tasks to agents improve upon traditional assignment methods. BP targets balanced utilization of agents while simultaneously improving overall task assignment center performance, potentially beyond what FIFO or PBR methods achieve in practice.


In some task assignment systems, it may be advantageous to consider the next-best action for a task given its assignment to a particular agent. Thus, it may be understood that there may be a need for a decisioning BP strategy that takes into consideration the next-best action for a task-agent pairing in order to optimize the overall performance of the task assignment system.


SUMMARY OF THE DISCLOSURE

Techniques for decisioning behavioral pairing in a task assignment system are disclosed. In one particular embodiment, the techniques may be realized as a method for decisioning behavioral pairing in a task assignment system comprising: determining, by at least one computer processor communicatively coupled to and configured to operate in the task assignment system, a plurality of possible task-agent pairings among at least one task waiting for assignment and at least one agent available for assignment; and selecting, by the at least one computer processor for assignment in the task assignment system, a first task-agent pairing of the plurality of possible task-agent pairings based at least in part on a first offer set to be offered by the agent or a first compensation to be received by the agent.


In accordance with other aspects of this particular embodiment, the task assignment system may be a contact center system.


In accordance with other aspects of this particular embodiment, selecting the first task-agent pairing may be based at least in part on both the first offer set and the first compensation.


In accordance with other aspects of this particular embodiment, the method may further comprise selecting, by the at least one computer processor, the first offer set from a plurality of potential offer sets.


In accordance with other aspects of this particular embodiment, the method may further comprise selecting, by the at least one computer processor, the first compensation from a plurality of potential compensations.


In accordance with other aspects of this particular embodiment, selecting the first task-agent pairing may be based on at least one of a first ordering of a plurality of tasks and a second ordering of a plurality of agents, and wherein the at least one of the first and second orderings is expressed as percentiles or percentile ranges.


In accordance with other aspects of this particular embodiment, the method may further comprise adjusting, by the at least one computer processor, the first offer set or the first compensation to adjust the first or second orderings.


In another particular embodiment, the techniques may be realized as a system for decisioning behavioral pairing in a task assignment system comprising at least one computer processor communicatively coupled to and configured to operate in the task assignment system, wherein the at least one computer processor is further configured to perform the steps in the above-described method.


In another particular embodiment, the techniques may be realized as an article of manufacture for decisioning behavioral pairing in a task assignment system comprising a non-transitory processor readable medium and instructions stored on the medium, wherein the instructions are configured to be readable from the medium by at least one computer processor communicatively coupled to and configured to operate in the task assignment system and thereby cause the at least one computer processor to operate so as to perform the steps in the above-described method.


The present disclosure will now be described in more detail with reference to particular embodiments thereof as shown in the accompanying drawings. While the present disclosure is described below with reference to particular embodiments, it should be understood that the present disclosure is not limited thereto. Those of ordinary skill in the art having access to the teachings herein will recognize additional implementations, modifications, and embodiments, as well as other fields of use, which are within the scope of the present disclosure as described herein, and with respect to which the present disclosure may be of significant utility.





BRIEF DESCRIPTION OF THE DRAWINGS

To facilitate a fuller understanding of the present disclosure, reference is now made to the accompanying drawings, in which like elements are referenced with like numerals. These drawings should not be construed as limiting the present disclosure, but are intended to be illustrative only.



FIG. 1 shows a block diagram of a task assignment center according to embodiments of the present disclosure.



FIG. 2 shows a block diagram of a task assignment system according to embodiments of the present disclosure.



FIG. 3 shows a flow diagram of a task assignment method according to embodiments of the present disclosure.





DETAILED DESCRIPTION

A typical task assignment system algorithmically assigns tasks arriving at a task assignment center to agents available to handle those tasks. At times, the task assignment center may be in an “L1 state” and have agents available and waiting for assignment to tasks. At other times, the task assignment center may be in an “L2 state” and have tasks waiting in one or more queues for an agent to become available for assignment. At yet other times, the task assignment system may be in an “L3” state and have multiple agents available and multiple tasks waiting for assignment. An example of a task assignment system is a contact center system that receives contacts (e.g., telephone calls, internet chat sessions, emails, etc.) to be assigned to agents.


In some traditional task assignment centers, tasks (e.g., callers) are assigned to agents ordered based on time of arrival, and agents receive tasks ordered based on the time when those agents became available. This strategy may be referred to as a “first-in, first-out,” “FIFO,” or “round-robin” strategy. For example, in an L2 environment, when an agent becomes available, the task at the head of the queue would be selected for assignment to the agent. In other traditional task assignment centers, a performance-based routing (PBR) strategy for prioritizing higher-performing agents for task assignment may be implemented. Under PBR, for example, the highest-performing agent among available agents receives the next available task.


The present disclosure refers to optimized strategies, such as “Behavioral Pairing” or “BP” strategies, for assigning tasks to agents that improve upon traditional assignment methods. BP targets balanced utilization of agents while simultaneously improving overall task assignment center performance potentially beyond what FIFO or PBR methods will achieve in practice. This is a remarkable achievement inasmuch as BP acts on the same tasks and same agents as FIFO or PBR methods, approximately balancing the utilization of agents as FIFO provides, while improving overall task assignment center performance beyond what either FIFO or PBR provide in practice. BP improves performance by assigning agent and task pairs in a fashion that takes into consideration the assignment of potential subsequent agent and task pairs such that, when the benefits of all assignments are aggregated, they may exceed those of FIFO and PBR strategies.


Various BP strategies may be used, such as a diagonal model BP strategy or a network flow (or “off-diagonal”) BP strategy. These task assignment strategies and others are described in detail for a contact center context in, e.g., U.S. Pat. Nos. 9,300,802; 9,781,269; 9,787,841; and 9,930,180; all of which are hereby incorporated by reference herein. BP strategies may be applied in an L1 environment (agent surplus, one task; select among multiple available/idle agents), an L2 environment (task surplus, one available/idle agent; select among multiple tasks in queue), and an L3 environment (multiple agents and multiple tasks; select among pairing permutations).


The various BP strategies discussed above may be considered two-dimensional (2-D), where one dimension relates to the agents, and the second dimension relates to the tasks (e.g., callers), and the various BP strategies take into account information about agents and tasks to pair them. As explained in detail below, embodiments of the present disclosure relate to decisioning BP strategies that account for higher-dimensional assignments. For a three-dimensional (3-D) example, the BP strategy may assign an agent to both a task and a set of offers the agent can make or a set of actions the agent can take during the task assignment. For another 3-D example, the BP strategy may assign an agent to both a task and a (monetary or non-monetary) reward to be given to an agent for a given task assignment. For a four-dimensional (4-D) example, the BP strategy may assign an agent to a task, an offer set, and a reward.
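
The following Python sketch illustrates the higher-dimensional pairing idea described above by enumerating candidate agent-task-offer-reward combinations and scoring each one; the scoring table, the names, and the use of a simple expected-value lookup are illustrative assumptions rather than the disclosed model.

    from itertools import product

    # Hypothetical inputs; in practice these would come from the task assignment system.
    agents = ["agent_A", "agent_B"]
    tasks = ["task_1", "task_2"]
    offer_sets = ["sports_bundle", "movie_bundle"]
    rewards = ["standard_commission", "boosted_commission"]

    # Assumed modeled expected values for a few combinations; unseen combinations default to 0.
    expected_value = {
        ("agent_A", "task_1", "sports_bundle", "boosted_commission"): 42.0,
        ("agent_B", "task_1", "movie_bundle", "standard_commission"): 31.5,
        ("agent_A", "task_2", "movie_bundle", "standard_commission"): 18.0,
    }

    def score(combo):
        # Look up the modeled expected value of a 4-D pairing (0.0 if never observed).
        return expected_value.get(combo, 0.0)

    # A 4-D decisioning step: consider every agent-task-offer-reward combination
    # and select the one with the highest modeled expected value.
    best = max(product(agents, tasks, offer_sets, rewards), key=score)
    print("Selected 4-D pairing:", best, "with expected value", score(best))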


These decisioning BP strategies may also consider historical outcome data for, e.g., agent-task-offers, agent-task-reward, or agent-task-offer-reward pairings to build a BP model and apply a BP strategy to “pair” a task with an agent and a specific offer set and/or agent compensation (throughout the specification, the noun and verb “pair” and other forms such as “Behavioral Pairing” may be used to describe triads and higher-dimensional groupings).



FIG. 1 shows a block diagram of a task assignment center 100 according to embodiments of the present disclosure. The description herein describes network elements, computers, and/or components of a system and method for pairing strategies in a task assignment system that may include one or more modules. As used herein, the term “module” may be understood to refer to computing software, firmware, hardware, and/or various combinations thereof. Modules, however, are not to be interpreted as software which is not implemented on hardware, firmware, or recorded on a non-transitory processor readable recordable storage medium (i.e., modules are not software per se). It is noted that the modules are exemplary. The modules may be combined, integrated, separated, and/or duplicated to support various applications. Also, a function described herein as being performed at a particular module may be performed at one or more other modules and/or by one or more other devices instead of or in addition to the function performed at the particular module. Further, the modules may be implemented across multiple devices and/or other components local or remote to one another. Additionally, the modules may be moved from one device and added to another device, and/or may be included in both devices.


As shown in FIG. 1, the task assignment center 100 may include a central switch 110. The central switch 110 may receive incoming tasks (e.g., telephone calls, internet chat sessions, emails, etc.) or support outbound connections to contacts via a dialer, a telecommunications network, or other modules (not shown). The central switch 110 may include routing hardware and software for helping to route tasks among one or more subcenters, or to one or more Private Branch Exchange (“PBX”) or Automatic Call Distribution (“ACD”) routing components or other queuing or switching components within the task assignment center 100. The central switch 110 may not be necessary if there is only one subcenter, or if there is only one PBX or ACD routing component in the task assignment center 100.


If more than one subcenter is part of the task assignment center 100, each subcenter may include at least one switch (e.g., switches 120A and 120B). The switches 120A and 120B may be communicatively coupled to the central switch 110. Each switch for each subcenter may be communicatively coupled to a plurality (or “pool”) of agents. Each switch may support a certain number of agents (or “seats”) to be logged in at one time. At any given time, a logged-in agent may be available and waiting to be connected to a task, or the logged-in agent may be unavailable for any of a number of reasons, such as being connected to another contact, performing certain post-call functions such as logging information about the call, or taking a break. In the example of FIG. 1, the central switch 110 routes tasks to one of two subcenters via switch 120A and switch 120B, respectively. Each of the switches 120A and 120B is shown with two agents. Agents 130A and 130B may be logged into switch 120A, and agents 130C and 130D may be logged into switch 120B.


The task assignment center 100 may also be communicatively coupled to an integrated service from, for example, a third-party vendor. In the example of FIG. 1, behavioral pairing module 140 may be communicatively coupled to one or more switches in the switch system of the task assignment center 100, such as central switch 110, switch 120A, and switch 120B. In some embodiments, switches of the task assignment center 100 may be communicatively coupled to multiple behavioral pairing modules. In some embodiments, behavioral pairing module 140 may be embedded within a component of the task assignment center 100 (e.g., embedded in or otherwise integrated with a switch).


Behavioral pairing module 140 may receive information from a switch (e.g., switch 120A) about agents logged into the switch (e.g., agents 130A and 130B) and about incoming tasks via another switch (e.g., central switch 110) or, in some embodiments, from a network (e.g., the Internet or a telecommunications network) (not shown). The behavioral pairing module 140 may process this information to determine which agents should be paired (e.g., matched, assigned, distributed, routed) with which tasks along with other dimensions (e.g., offers, actions, channels, non-monetary rewards, monetary rewards or compensation, etc.).


For example, in an L1 state, multiple agents may be available and waiting for connection to a contact, and a task arrives at the task assignment center 100 via a network or the central switch 110. As explained above, without the behavioral pairing module 140, a switch will typically automatically distribute the new task to whichever available agent has been waiting the longest amount of time for a task under a FIFO strategy, or to whichever available agent has been determined to be the highest-performing agent under a PBR strategy. With a behavioral pairing module 140, contacts and agents may be given scores (e.g., percentiles or percentile ranges/bandwidths) according to a pairing model or other artificial intelligence data model, so that a task may be matched, paired, or otherwise connected to a preferred agent. The higher-dimensional analysis of BP decisioning will be explained in more detail below.


In an L2 state, multiple tasks are available and waiting for connection to an agent, and an agent becomes available. These tasks may be queued in a switch such as a PBX or ACD device. Without the behavioral pairing module 140, a switch will typically connect the newly available agent to whichever task has been waiting on hold in the queue for the longest amount of time as in a FIFO strategy or a PBR strategy when agent choice is not available. In some task assignment centers, priority queuing may also be incorporated, as previously explained. With a behavioral pairing module 140 in this L2 scenario, as in the L1 state described above, tasks and agents may be given percentiles (or percentile ranges/bandwidths, etc.) according to, for example, a model, such as an artificial intelligence model, so that an agent becoming available may be matched, paired, or otherwise connected to a preferred task. The higher-dimensional analysis of BP decisioning will be explained in more detail below.
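
As a minimal sketch of the L2 scenario just described, the following Python fragment pairs a newly available agent with the waiting task whose percentile is closest to the agent's percentile, a simple diagonal-style heuristic; the percentile values and the nearest-percentile rule are assumptions for illustration.

    # Task percentiles for tasks waiting in queue (illustrative values).
    waiting_tasks = {"task_1": 0.10, "task_2": 0.55, "task_3": 0.90}
    available_agent = ("agent_A", 0.60)  # (agent id, agent percentile)

    def pick_task_for_agent(agent_percentile, tasks):
        # Choose the task whose percentile is closest to the agent's percentile.
        return min(tasks, key=lambda t: abs(tasks[t] - agent_percentile))

    name, agent_percentile = available_agent
    print(name, "is paired with", pick_task_for_agent(agent_percentile, waiting_tasks))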



FIG. 2 shows a block diagram of a task assignment system 200 according to embodiments of the present disclosure. The task assignment system 200 may be included in a task assignment center (e.g., task assignment center 100) or incorporated in a component or module (e.g., behavioral pairing module 140) of a task assignment center for helping to assign agents among various tasks and other dimensions for grouping.


The task assignment system 200 may include a task assignment module 210 that is configured to pair (e.g., match, assign) incoming tasks to available agents. (The higher-dimensional analysis of BP decisioning will be explained in more detail below.) In the example of FIG. 2, m tasks 220A-220m are received over a given period, and n agents 230A-230n are available during the given period. Each of the m tasks may be assigned to one of the n agents for servicing or other types of task processing. In the example of FIG. 2, m and n may be arbitrarily large finite integers greater than or equal to one. In a real-world task assignment center, such as a contact center, there may be dozens, hundreds, etc. of agents logged into the contact center to interact with contacts during a shift, and the contact center may receive dozens, hundreds, thousands, etc. of contacts (e.g., telephone calls, internet chat sessions, emails, etc.) during the shift.


In some embodiments, a task assignment strategy module 240 may be communicatively coupled to and/or configured to operate in the task assignment system 200. The task assignment strategy module 240 may implement one or more task assignment strategies (or “pairing strategies”) for assigning individual tasks to individual agents (e.g., pairing contacts with contact center agents). A variety of different task assignment strategies may be devised and implemented by the task assignment strategy module 240. In some embodiments, a FIFO strategy may be implemented in which, for example, the longest-waiting agent receives the next available task (in L1 environments) or the longest-waiting task is assigned to the next available agent (in L2 environments). In other embodiments, a PBR strategy for prioritizing higher-performing agents for task assignment may be implemented. Under PBR, for example, the highest-performing agent among available agents receives the next available task. In yet other embodiments, a BP strategy may be used for optimally assigning tasks to agents using information about either tasks or agents, or both. Various BP strategies may be used, such as a diagonal model BP strategy or a network flow (“off-diagonal”) BP strategy. See U.S. Pat. Nos. 9,300,802; 9,781,269; 9,787,841; and 9,930,180.
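
To make the baseline strategies concrete, the short Python sketch below shows FIFO and PBR selections for an L1 (agent-surplus) state; the agent records and field names are assumptions used only for illustration.

    # Available agents, each with a waiting time and a performance score (illustrative).
    available_agents = [
        {"id": "agent_A", "waiting_seconds": 120, "performance": 0.72},
        {"id": "agent_B", "waiting_seconds": 300, "performance": 0.64},
        {"id": "agent_C", "waiting_seconds": 45, "performance": 0.91},
    ]

    def fifo_choice(agents):
        # FIFO: the longest-waiting available agent receives the next task.
        return max(agents, key=lambda a: a["waiting_seconds"])

    def pbr_choice(agents):
        # PBR: the highest-performing available agent receives the next task.
        return max(agents, key=lambda a: a["performance"])

    print("FIFO assigns the next task to:", fifo_choice(available_agents)["id"])  # agent_B
    print("PBR assigns the next task to:", pbr_choice(available_agents)["id"])    # agent_C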


In some embodiments, the task assignment strategy module 240 may implement a decisioning BP strategy that takes into account the next-best action for a task, when the task is assigned to a particular agent. For a task-agent pair, the decisioning BP strategy may also assign an action or set of actions available to the agent to complete the task. In the context of a contact center system, the action or set of actions may include an offer or a set of offers that the agent may present to a customer. For example, in a contact center system, a decisioning BP strategy may pair a contact with an agent along with an offer or set of offers available to the agent to make to a customer, based on the expected outcome of the contact-agent interaction using that particular offer or set of offers. By influencing the choices or options among offers available to an agent, a decisioning BP strategy goes beyond pairing a contact to an agent by optimizing the outcome of the individual interaction between the agent and the contact.


For example, if agent 230A loves sports, agent 230A may be more adept at selling sports packages. Therefore, sports packages may be included in agent 230A's set of offers for some or all contact types. On the other hand, agent 230B may love movies and may be more adept at selling premium movie packages; so premium movie packages may be included in agent 230B's set of offers for some or all contact types. Further, based on an artificial intelligence process such as machine learning, a decisioning BP model may automatically segment customers over a variety of variables and data types. For example, the decisioning BP model may recommend offering a package that includes sports to a first type of customer (“Customer Type 1”) that may fit a particular age range, income range, and other demographic and psychographic factors. The decisioning BP model may recommend offering a premium movie package to a second type of customer (“Customer Type 2”) that may fit a different age range, income range, or other demographic or psychographic factors. A decisioning BP strategy may preferably pair a Customer Type 1 with agent 230A and an offer set with a sports package, and a Customer Type 2 with agent 230B and an offer set with a premium movie package.


As with previously-disclosed BP strategies, a decisioning BP strategy optimizes the overall performance of the task assignment system rather than every individual instant task-agent pairing. For instance, in some embodiments, a decisioning BP system will not always offer sports to a Customer Type 1, nor will agent 230A always be given the option of offering deals based on a sports package. Such a scenario may arise when a marketing division of a company running a contact center system has a budget for a finite, limited number of deals (e.g., a limited number of discounted sports packages), other constraints on the frequency of certain offers, limits on the total amount of discounts (e.g., for any discount or discounted package) that can be made over a given time period, etc. Similarly, deals based on sports may sometimes be offered to a Customer Type 2, and agent 230B may sometimes be given the option of offering deals based on a sports package.


To optimize the overall performance of the task assignment system, a decisioning BP strategy may account for all types of customers waiting in a queue, agents available for customers, and any other dimensions for pairing such as the number and types of offers remaining, agent compensation or other non-monetary rewards, next-best actions, etc. In some embodiments, a probability distribution may be assigned based on the likelihood that an incoming task or customer type will accept a given offer level based on the agent being paired with the task or customer.


For example, for a Customer Type 1, if the discount offered is 0%, the likelihood of the customer accepting the offer from an average agent is 0%, and the likelihood of accepting the offer specifically from agent 230A or from agent 230B is likewise 0%. For a 20% discount offer, the likelihood of the customer accepting the offer from an average agent may be 30%, whereas the likelihood of accepting the offer from agent 230A may be 60% and from agent 230B may be 25%. In a scenario where an average agent, agent 230A, and agent 230B are all assigned to the queue and available, it is possible for agent 230A to perform much better than the average agent or agent 230B.
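
The following worked sketch applies the acceptance probabilities from the example above to compute expected revenue per discount level and agent; the 10% discount row, the package price, and the choice of expected revenue as the objective are assumptions added for illustration.

    package_price = 100.0
    # P(accept | discount, agent) for a Customer Type 1; the 20% row follows the text.
    acceptance = {
        (0.20, "average"): 0.30,
        (0.20, "agent_230A"): 0.60,
        (0.20, "agent_230B"): 0.25,
        (0.10, "average"): 0.10,      # assumed
        (0.10, "agent_230A"): 0.30,   # assumed
        (0.10, "agent_230B"): 0.05,   # assumed
    }

    def expected_revenue(discount, agent):
        # Expected revenue = acceptance probability times discounted price.
        return acceptance[(discount, agent)] * package_price * (1.0 - discount)

    best_discount, best_agent = max(acceptance, key=lambda k: expected_revenue(*k))
    print("Best pairing:", best_agent, "offering a", int(best_discount * 100),
          "% discount; expected revenue:", expected_revenue(best_discount, best_agent))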


In some embodiments, an output measurement may be attached to each task before and after interaction with an agent. For example, a revenue number may be attached to each caller pre- and post-call. A decisioning BP system may measure the change in revenue and the influenceability of a caller based on an offer or a set of offers presented by an agent. For example, a Customer Type 1 may be more likely to renew her existing plan regardless of the discount offered, or regardless of the ability of the individual agent. In contrast, a Customer Type 2 may be more likely to upgrade her plans if she were paired with a higher-performing agent or an agent authorized to offer steeper discounts. Accordingly, a Customer Type 2 may be preferably assigned to a lower-performing agent with a higher cap on discounts in the offer set.


In some embodiments, a decisioning BP strategy may make sequential pairings of one or more dimensions in an arbitrary order. For example, the decisioning BP strategy may first pair an agent to a task and then pair an offer set to the agent-task pairing, then pair a reward to the agent-task-offer set pairing, and so on.


In other embodiments, a decisioning BP strategy may make “fully-coupled,” simultaneous multidimensional pairings. For example, the decisioning BP strategy may consider all dimensions at once to select an optimal 4-D agent-task-offers-reward pairing.
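
The contrast between sequential and fully-coupled decisioning can be seen in the sketch below, which scores a small, assumed table of agent-task-offer combinations both ways; the values and the greedy rule used for the sequential pass are illustrative assumptions.

    from itertools import product

    agents, tasks, offers = ["A", "B"], ["t1"], ["sports", "movies"]
    value = {  # hypothetical modeled value of each 3-D combination
        ("A", "t1", "sports"): 50, ("A", "t1", "movies"): 20,
        ("B", "t1", "sports"): 30, ("B", "t1", "movies"): 45,
    }

    def sequential(task):
        # Sequential: pick the agent first (averaging over offers), then the best offer.
        agent = max(agents, key=lambda a: sum(value[(a, task, o)] for o in offers) / len(offers))
        offer = max(offers, key=lambda o: value[(agent, task, o)])
        return agent, task, offer

    def fully_coupled(task):
        # Fully-coupled: pick the best combination over all dimensions at once.
        return max(((a, task, o) for a, o in product(agents, offers)), key=lambda c: value[c])

    print("Sequential pairing:", sequential(tasks[0]))        # settles on a locally best choice
    print("Fully-coupled pairing:", fully_coupled(tasks[0]))  # global best over the table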


The same task may arrive at the task assignment system multiple times (e.g., the same caller calls a call center multiple times). In these “multi-touch” scenarios, in some embodiments, the task assignment system may always assign the same item for one or more dimensions to promote consistency. For example, if a task is paired with a particular offer set the first time the task arrives, the task will be paired with the same offer set each subsequent time the task arrives (e.g., for a given issue, within a given time period, etc.).


In some embodiments, a historical assignment module 250 may be communicatively coupled to and/or configured to operate in the task assignment system 200 via other modules such as the task assignment module 210 and/or the task assignment strategy module 240. The historical assignment module 250 may be responsible for various functions such as monitoring, storing, retrieving, and/or outputting information about task-agent assignments and higher-dimensional assignments that have already been made. For example, the historical assignment module 250 may monitor the task assignment module 210 to collect information about task assignments in a given period. Each record of a historical task assignment may include information such as an agent identifier, a task or task type identifier, offer or offer set identifier, outcome information, or a pairing strategy identifier (i.e., an identifier indicating whether a task assignment was made using a BP strategy, a decisioning BP strategy, or some other pairing strategy such as a FIFO or PBR pairing strategy).
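
A minimal sketch of such a historical assignment record is shown below as a Python dataclass; the field names are illustrative assumptions, since the module may store different or additional information.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class HistoricalAssignment:
        agent_id: str
        task_type: str
        offer_set_id: Optional[str]   # None if no offer set was attached to the pairing
        pairing_strategy: str         # e.g., "FIFO", "PBR", "BP", "decisioning BP"
        outcome_value: float          # e.g., revenue attributed to the interaction

    record = HistoricalAssignment("agent_230A", "Customer Type 1", "sports_bundle",
                                  "decisioning BP", 42.0)
    print(record)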


In some embodiments and for some contexts, additional information may be stored. For example, in a call center context, the historical assignment module 250 may also store information about the time a call started, the time a call ended, the phone number dialed, and the caller's phone number. For another example, in a dispatch center (e.g., “truck roll”) context, the historical assignment module 250 may also store information about the time a driver (i.e., field agent) departs from the dispatch center, the route recommended, the route taken, the estimated travel time, the actual travel time, the amount of time spent at the customer site handling the customer's task, etc.


In some embodiments, the historical assignment module 250 may generate a pairing model, a decisioning BP model, or similar computer processor-generated model based on a set of historical assignments for a period of time (e.g., the past week, the past month, the past year, etc.), which may be used by the task assignment strategy module 240 to make task assignment recommendations or instructions to the task assignment module 210.


In some embodiments, instead of relying on predetermined offer sets in generating a decisioning BP model, the historical assignment module 250 may analyze historical outcome data to create or determine new or different offer sets, which are then incorporated into the decisioning BP model. This approach may be preferred when there is a budget or other limitation on the number of a particular offer set that may be made. For example, the marketing division may have limited the contact center system to five hundred discounted sports packages and five hundred discounted movie packages per month, and the company may want to optimize total revenue irrespective of how many sports and movie packages are sold, with or without a discount. Under such a scenario, the decisioning BP model may be similar to previously-disclosed BP diagonal models, except that, in addition to the “task or contact percentile” (CP) dimension and the “agent percentile” (AP) dimension, there may be a third “revenue or offer set percentile” dimension. Moreover, all three dimensions may be normalized or processed with mean regression (e.g., Bayesian mean regression (BMR) or hierarchical BMR).
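
One way to picture the three-dimensional diagonal described above is sketched below: each candidate pairing carries a contact percentile (CP), an agent percentile (AP), and an offer set percentile (OP), and candidates whose three percentiles lie closest together are preferred. The spread-based distance and the sample percentiles are assumptions, not the disclosed model.

    # Candidate 3-D pairings with assumed percentile values.
    candidates = [
        {"contact": "c1", "agent": "a1", "offer_set": "o1", "cp": 0.80, "ap": 0.75, "op": 0.70},
        {"contact": "c1", "agent": "a2", "offer_set": "o2", "cp": 0.80, "ap": 0.30, "op": 0.85},
    ]

    def diagonal_distance(candidate):
        # Distance from the diagonal CP = AP = OP, measured as the spread of the percentiles.
        values = (candidate["cp"], candidate["ap"], candidate["op"])
        return max(values) - min(values)

    best = min(candidates, key=diagonal_distance)
    print("Preferred 3-D pairing:", best["contact"], best["agent"], best["offer_set"])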


In some embodiments, the historical assignment module 250 may generate a decisioning BP model that optimizes task-agent-offer set pairing based on individual channels or multi-channel interactions. The historical assignment module 250 may treat different channels differently. For example, a decisioning BP model may preferably pair a contact with different agents or offer sets depending on whether the contact calls a call center, initiates a chat session, sends an email or text message, enters a retail store, etc.


In some embodiments, the task assignment strategy module 240 may proactively create tasks or other actions (e.g., recommend outbound contact interactions, next-best actions, etc.) based on information about a contact or a customer, available agents, and available offer sets. For example, the task assignment system 200 may determine that a customer's contract is set to expire, the customer's usage is declining, or the customer's credit rating is declining. The task assignment system 200 may further determine that the customer is unlikely to renew the contract at the customer's current rate (e.g., based on information from the historical assignment module 250). The task assignment system 200 may determine that the next-best action is to call the customer (contact selection, channel selection, and timing selection), connect with a particular agent (agent selection), and give the agent the option to offer a downgrade at a particular discount or range of discounts (offer set selection). If the customer does not come to an agreement during the call, the task assignment system 200 may further determine that this customer is more likely to accept a downgrade discount offer if the agent follows up with a text message with information about the discount and how to confirm (multi-channel selection and optimization).


In some embodiments, similar to how a Kappa (κ) parameter is used to adjust/skew the agent percentile or percentile range (see U.S. Pat. No. 9,781,269) and how a Rho (ρ) parameter is used to adjust/skew the task or contact percentile or percentile range (see U.S. Pat. No. 9,787,841), the task assignment strategy module 240 may apply an Iota (ι) parameter to a third (or higher) dimension such as the offer set percentile or percentile range in a decisioning BP strategy. With the Iota parameter, the task assignment strategy module 240 may, for example, adjust the offer set percentile or percentile range (or other dimensions) to skew task-agent-offer set pairing toward higher-performing offers and imbalance offer set availability. The Iota parameter may be applied in either an L1 or an L2 environment and may be used in conjunction with the Kappa or Rho parameter, or it may be applied with both the Kappa and Rho parameters in an L3 environment. For example, if the task assignment strategy module 240 determines that the expected wait time for a contact has exceeded 100 seconds (high congestion), it may apply the Iota parameter so that an agent is more likely to have steeper discounts available to offer, which can be sold more quickly to reduce congestion and the expected wait time. More generous offers can speed up task-agent interaction, thereby reducing average handle time (AHT). When congestion is low, expected wait time may be low, and the Iota parameter may be adjusted to make only less generous offers available. These calls may take longer, and AHT may increase, but sales and revenue may be expected to increase as well.
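
A sketch of one plausible form of the Iota adjustment is given below, applying a power-law skew to offer set percentiles in response to expected wait time; the power-law form, the threshold, and the percentile values are assumptions by analogy with the Kappa and Rho adjustments referenced above, not the disclosed mechanism.

    # Offer set percentiles (illustrative): steeper discounts carry higher percentiles here.
    offer_set_percentiles = {"small_discount": 0.30, "medium_discount": 0.60, "steep_discount": 0.90}

    def apply_iota(percentiles, iota):
        # Raising a percentile in (0, 1) to the power iota skews it up (iota < 1) or down (iota > 1).
        return {name: p ** iota for name, p in percentiles.items()}

    expected_wait_seconds = 120
    iota = 0.5 if expected_wait_seconds > 100 else 2.0  # high congestion: skew toward steeper discounts
    print(apply_iota(offer_set_percentiles, iota))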


In some embodiments, the task assignment strategy module 240 may optimize performance by optimizing for multiple business objectives or other goals simultaneously. Where the objectives are competing (e.g., discount amount and retention rates), module 240 may balance the tradeoff between the two objectives. For example, the task assignment strategy module 240 may balance increasing (or maintaining) revenue with maintaining or minimally decreasing retention rates, or it may balance decreasing (or maintaining) AHT with increasing (or maintaining) customer satisfaction, etc.


In some embodiments, the task assignment strategy module 240 may implement a decisioning BP strategy that takes into account agent compensation in lieu of an offer or offer set. The framework is similar to the description above, except that, instead of influencing a customer with an offer or offer set, the decisioning BP strategy influences the performance of an agent with a compensation that the agent may receive. In other words, instead of the task-agent-offer set three-dimensional pairing, the decisioning BP strategy makes a three-dimensional pairing of task-agent-reward. In some embodiments, a decisioning BP strategy may make a four-way pairing of task-agent-offer-reward.


A decisioning BP strategy being capable of providing variable agent compensation based on task-agent pairing may lead to better transparency and fairness. For example, some task assignment (e.g., contact center) systems may see a mix of more challenging and less challenging contacts and employ a mix of higher-performing and lower-performing agents. Under a FIFO strategy or a PBR strategy, agents of any ability are equally likely to be paired with more or less challenging contacts. Under a FIFO strategy, the overall performance of the contact center system may be low, but the average agent compensation may be transparent and fair. Under a PBR strategy, agent utilization may be skewed, and compensation may also be skewed toward higher-performing agents. Under previously-disclosed BP strategies, a more challenging contact type may be preferably paired with a higher-performing agent, whereas a less challenging contact type may be preferably paired with a lower-performing agent. For example, if a high-performing agent gets more difficult calls on average, this “call type skew” may result in the high-performing agent's conversion rate going down and compensation going down.


Therefore, adjusting an agent's compensation up or down for given task (or contact) types may improve the fairness of compensation under the previously-disclosed BP strategies. When an agent is paired with a task or contact, a decisioning BP strategy may inform the agent of the expected value of the contact to the contact center and/or how much the agent will receive as a commission or other non-monetary reward for handling the contact or achieving a particular outcome. The decisioning BP strategy may influence the agent's behavior through offering variable compensation. For example, if the decisioning BP strategy determines that the agent should process the task or contact quickly (lower AHT, lower revenue), the decisioning BP strategy may select a lower compensation. Consequently, the agent may have less incentive to spend a lot of time earning a relatively lower commission. In contrast, if the decisioning BP strategy determines that the agent should put high effort into a higher-value call (higher AHT, higher revenue), it may select a higher compensation. Such a decisioning BP strategy may maximize an agent's reward while improving the overall performance of the task assignment system 200.


The historical assignment module 250 may generate a decisioning BP model based on historical information about task types, agents, and compensation amounts so that a simultaneous selection of task-agent-reward may be made. The amount of variation in compensation up or down may vary and depend on each combination of an individual agent and contact type, with the goal of improving the overall performance of the task assignment system 200.


Similar to applying the Iota parameter to the offer set or next-best action percentile or percentile range dimension, and as noted above, the task assignment strategy module 240 may apply an Iota parameter to other dimensions, such as skewing agent compensation to a greater or lesser degree, or to generally higher or generally lower values. In some embodiments, the amount and type of Iota parameter applied to agent compensation or other non-monetary rewards may be based at least in part on factors in the task assignment system 200 (e.g., the expected wait time of callers on hold in a call center).


In some embodiments that employ strategies that are similar to the diagonal model BP strategy, a variable compensation may be viewed as temporarily influencing the effective agent percentile (AP) of an available agent to be higher or lower, in order to move an available contact-agent pairing closer to the optimal diagonal. Similarly, adjusting the value of offers to be higher or lower may be viewed as influencing the effective contact percentile (CP) of a waiting contact to be higher or lower, in order to move an available contact-agent pairing closer to the optimal diagonal.
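
The sketch below illustrates the compensation-as-percentile-adjustment view just described: each compensation level shifts the agent's effective AP by an assumed amount, and the level whose effective AP lands closest to the waiting contact's CP is selected. The additive shifts and the closeness rule are illustrative assumptions.

    contact_percentile = 0.85       # CP of the waiting contact (assumed)
    base_agent_percentile = 0.60    # AP of the available agent (assumed)
    compensation_shift = {"low": -0.10, "standard": 0.00, "high": 0.20}  # assumed AP shifts

    def effective_ap(compensation):
        # Clamp the adjusted percentile to the valid range [0, 1].
        return min(1.0, max(0.0, base_agent_percentile + compensation_shift[compensation]))

    # Pick the compensation whose effective AP is closest to the diagonal (AP == CP).
    best = min(compensation_shift, key=lambda c: abs(effective_ap(c) - contact_percentile))
    print("Selected compensation:", best, "with effective AP", effective_ap(best))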


In some embodiments, a benchmarking module 260 may be communicatively coupled to and/or configured to operate in the task assignment system 200 via other modules such as the task assignment module 210 and/or the historical assignment module 250. The benchmarking module 260 may benchmark the relative performance of two or more pairing strategies (e.g., FIFO, PBR, BP, decisioning BP, etc.) using historical assignment information, which may be received from, for example, the historical assignment module 250. In some embodiments, the benchmarking module 260 may perform other functions, such as establishing a benchmarking schedule for cycling among various pairing strategies, tracking cohorts (e.g., base and measurement groups of historical assignments), etc. Benchmarking is described in detail for the contact center context in, e.g., U.S. Pat. No. 9,712,676, which is hereby incorporated by reference herein.
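
A simple version of such a benchmark is sketched below: historical assignments are partitioned by the pairing strategy that produced them and their average outcomes are compared. The records and the use of a mean outcome as the performance metric are illustrative assumptions.

    from collections import defaultdict

    historical = [  # assumed historical assignment records
        {"strategy": "FIFO", "outcome": 10.0},
        {"strategy": "FIFO", "outcome": 12.0},
        {"strategy": "decisioning BP", "outcome": 14.0},
        {"strategy": "decisioning BP", "outcome": 15.0},
    ]

    totals, counts = defaultdict(float), defaultdict(int)
    for record in historical:
        totals[record["strategy"]] += record["outcome"]
        counts[record["strategy"]] += 1

    averages = {strategy: totals[strategy] / counts[strategy] for strategy in totals}
    baseline, candidate = averages["FIFO"], averages["decisioning BP"]
    print(f"Relative gain of decisioning BP over FIFO: {(candidate - baseline) / baseline:.1%}")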


In some embodiments, the benchmarking module 260 may output or otherwise report or use the relative performance measurements. The relative performance measurements may be used to assess the quality of the task assignment strategy to determine, for example, whether a different task assignment strategy (or a different pairing model) should be used, or to measure the overall performance (or performance gain) that was achieved within the task assignment system 200 while it was optimized or otherwise configured to use one task assignment strategy instead of another.


In some embodiments, the benchmarking module 260 may benchmark a decisioning BP strategy against one or more alternative pairing strategies such as FIFO in conjunction with offer set availability. For example, in a contact center system, agents may have a matrix of nine offers: three tiers of service, each with three discount levels. During “off” calls, the longest-waiting agent may be connected to the longest-waiting caller, and the agent may offer any of the nine offers. A high-performing agent may be more likely to sell a higher tier of service at a higher price, whereas a lower-performing agent may not try as hard and may go immediately to offering the biggest discounts. During “on” calls, the decisioning BP strategy may pair contacts with agents but limit agents to a subset of the nine available offers. For example, for some contact types, a higher-performing agent may be empowered to make any of the nine offers, whereas a lower-performing agent may be limited to offering only the smaller discounts for certain tiers, if the task assignment strategy module 240 determines, based on the decisioning BP model, that the overall performance of the contact center system may be optimized by selectively limiting the offer sets in a given way for a given contact-agent pairing. Additionally, if a provider (e.g., vendor) that provides a task assignment system with a decisioning BP strategy uses a benchmarking and revenue-sharing business model, the provider may contribute a share of the benchmarked revenue gain to the agent compensation pool.


In some embodiments, the task assignment system 200 may offer dashboards, visualizations, or other analytics and interfaces to improve overall performance of the system. For each agent, the analytics provided may vary depending on the relative ability or behavioral characteristics of the agent. For example, competitive or higher-performing agents may benefit from a rankings widget or other “gamification” elements (e.g., badges or achievements to unlock, points and scoreboards, notifications when agents overtake one another in the rankings, etc.). On the other hand, less competitive or lower-performing agents may benefit from periodic messages of encouragement, recommendations on training/education sessions, etc.



FIG. 3 shows a task assignment method 300 according to embodiments of the present disclosure.


Task assignment method 300 may begin at block 310. At block 310, the task assignment method 300 may order a plurality of tasks, which may be in queue in a task assignment system (e.g., task assignment system 200). The task assignment method 300 may order the plurality of tasks by giving them percentiles or percentile ranges according to, for example, a model, such as an artificial intelligence model. In a contact center system, contacts may be ordered according to how long each contact has been waiting for an assignment to an agent relative to the other contacts, or how well each contact contributes to performance of the contact center system for some metric relative to the other contacts. In other embodiments, the plurality of tasks may be analyzed according to an “off-diagonal” model (e.g., a network flow model).


Task assignment method 300 may then proceed to block 320. At block 320, the task assignment method 300 may order a plurality of agents, who may be available in the task assignment system. The task assignment method 300 may order the plurality of agents by giving them percentiles or percentile ranges according to, for example, a model, such as an artificial intelligence model. In a contact center system, agents may be ordered according to how long each agent has been waiting for an assignment to a contact relative to the other agents, or how well each agent contributes to performance of the contact center system for some metric relative to the other agents. In other embodiments, the plurality of agents may be analyzed according to a network flow model.


Task assignment method 300 may then proceed to block 330. At block 330, the task assignment method 300 may pair a task of the plurality of tasks to an agent of the plurality of agents. In some embodiments, the task-agent pairing may be based at least in part on the order of the plurality of tasks, the order of the plurality of agents, and at least one of a plurality of offers to be offered to the plurality of tasks or a compensation to be received by at least one of the plurality of agents. In some embodiments, the task-agent pairing may be based on an alternative behavioral pairing technique such as a network flow model.


In some embodiments, the task assignment method 300 may subsequently select an offer or offer set and/or an agent reward to be used in conjunction with the task-agent pairing. In other embodiments, the task assignment method 300 may perform a three-way pairing (e.g., task-agent-offers, task-agent-reward, etc.) or a four-way pairing (e.g., task-agent-offer-reward). A given offer may skew or adjust the percentile or percentile range of a task, while a given compensation may skew or adjust the percentile or percentile range of an agent. Therefore, by considering at least one offer or compensation, the task assignment method 300 is able to select a task-agent (or task-agent-offer, task-agent-compensation, task-agent-offer-compensation, etc.) pairing that improves the overall performance of the task assignment system.
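
A condensed sketch of blocks 310-330 is given below: tasks and agents are ordered by percentile, an assumed offer-driven adjustment is applied to the task percentile, and the candidate closest to the diagonal is selected. All numeric values and the adjustment rule are assumptions for illustration.

    tasks = {"task_1": 0.20, "task_2": 0.70}      # block 310: task percentiles (assumed)
    agents = {"agent_A": 0.65, "agent_B": 0.25}   # block 320: agent percentiles (assumed)
    offer_adjustment = {"no_discount": 0.00, "discount": 0.15}  # assumed skew applied to a task

    def pairing_score(task_percentile, agent_percentile, offer):
        # Smaller is better: distance from the diagonal after the offer adjustment.
        adjusted = min(1.0, task_percentile + offer_adjustment[offer])
        return abs(adjusted - agent_percentile)

    candidates = [(t, a, o) for t in tasks for a in agents for o in offer_adjustment]
    best = min(candidates, key=lambda c: pairing_score(tasks[c[0]], agents[c[1]], c[2]))
    print("Block 330 selection (task, agent, offer):", best)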


At this point it should be noted that task assignment in accordance with the present disclosure as described above may involve the processing of input data and the generation of output data to some extent. This input data processing and output data generation may be implemented in hardware or software. For example, specific electronic components may be employed in a behavioral pairing module or similar or related circuitry for implementing the functions associated with task assignment in accordance with the present disclosure as described above. Alternatively, one or more processors operating in accordance with instructions may implement the functions associated with task assignment in accordance with the present disclosure as described above. If such is the case, it is within the scope of the present disclosure that such instructions may be stored on one or more non-transitory processor readable storage media (e.g., a magnetic disk or other storage medium), or transmitted to one or more processors via one or more signals embodied in one or more carrier waves.


The present disclosure is not to be limited in scope by the specific embodiments described herein. Indeed, other various embodiments of and modifications to the present disclosure, in addition to those described herein, will be apparent to those of ordinary skill in the art from the foregoing description and accompanying drawings. Thus, such other embodiments and modifications are intended to fall within the scope of the present disclosure. Further, although the present disclosure has been described herein in the context of at least one particular implementation in at least one particular environment for at least one particular purpose, those of ordinary skill in the art will recognize that its usefulness is not limited thereto and that the present disclosure may be beneficially implemented in any number of environments for any number of purposes. Accordingly, the claims set forth below should be construed in view of the full breadth and spirit of the present disclosure as described herein.

Claims
  • 1. A method comprising: determining, by at least one computer processor communicatively coupled to and configured to operate in a task assignment system, a plurality of agents available for pairing; determining, by the at least one computer processor, a set of one or more contacts available for pairing; determining, by the at least one computer processor, a set of one or more subsets of items according to a dimension; determining, by the at least one computer processor, a first performance of a first combination comprising a first agent of the plurality of agents, a contact of the set of contacts, and a first subset of items of the set of subsets of items; determining, by the at least one computer processor, a second performance of a second combination comprising the first agent, the contact, and a second subset of items of the set of subsets of items; comparing, by the at least one computer processor, the first performance with the second performance; and selecting, by the at least one computer processor, either the first combination or the second combination based on the comparing for connection in a contact center system.
  • 2. The method of claim 1, further comprising: determining, by the at least one computer processor, a third performance of a third combination comprising a second agent of the plurality of agents, the contact, and the first subset of items; and comparing, by the at least one computer processor, the first performance with the third performance; wherein the selecting is further based on comparing the first performance with the third performance.
  • 3. The method of claim 1, wherein the dimension comprises at least one of: offers, agent compensations, actions, and channels.
  • 4. The method of claim 1, wherein the dimension is based on at least one of: the first agent and the contact.
  • 5. The method of claim 1, further comprising: determining, by the at least one computer processor, a second set of one or more subsets of items according to a second dimension; wherein the first combination further comprises a second subset of items of the second set of one or more subsets of items; wherein the second combination further comprises the second subset.
  • 6. A system comprising: at least one computer processor communicatively coupled to and configured to operate in a task assignment system, wherein the at least one computer processor is further configured to: determine a plurality of agents available for pairing; determine a set of one or more contacts available for pairing; determine a set of one or more subsets of items according to a dimension; determine a first performance of a first combination comprising a first agent of the plurality of agents, a contact of the set of contacts, and a first subset of items of the set of subsets of items; determine a second performance of a second combination comprising the first agent, the contact, and a second subset of items of the set of subsets of items; compare the first performance with the second performance; and select either the first combination or the second combination based on the comparing for connection in a contact center system.
  • 7. The system of claim 6, wherein the at least one computer processor is further configured to: determine a third performance of a third combination comprising a second agent of the plurality of agents, the contact, and the first subset of items; and compare the first performance with the third performance; wherein the selecting is further based on comparing the first performance with the third performance.
  • 8. The system of claim 6, wherein the dimension comprises at least one of: offers, agent compensations, actions, and channels.
  • 9. The system of claim 6, wherein the dimension is based on at least one of: the first agent and the contact.
  • 10. The system of claim 6, wherein the at least one computer processor is further configured to: determine a second set of one or more subsets of items according to a second dimension; wherein the first combination further comprises a second subset of items of the second set of one or more subsets of items; wherein the second combination further comprises the second subset.
  • 11. An article of manufacture comprising: a non-transitory processor readable medium; and instructions stored on the medium; wherein the instructions are configured to be readable from the medium by at least one computer processor communicatively coupled to and configured to operate in a task assignment system and thereby cause the at least one computer processor to operate so as to: determine a plurality of agents available for pairing; determine a set of one or more contacts available for pairing; determine a set of one or more subsets of items according to a dimension; determine a first performance of a first combination comprising a first agent of the plurality of agents, a contact of the set of contacts, and a first subset of items of the set of subsets of items; determine a second performance of a second combination comprising the first agent, the contact, and a second subset of items of the set of subsets of items; compare the first performance with the second performance; and select either the first combination or the second combination based on the comparing for connection in a contact center system.
  • 12. The article of manufacture of claim 11, wherein the at least one computer processor is further configured to: determine a third performance of a third combination comprising a second agent of the plurality of agents, the contact, and the first subset of items; and compare the first performance with the third performance; wherein the selecting is further based on comparing the first performance with the third performance.
  • 13. The article of manufacture of claim 11, wherein the dimension comprises at least one of: offers, agent compensations, actions, and channels.
  • 14. The article of manufacture of claim 11, wherein the dimension is based on at least one of: the first agent and the contact.
  • 15. The article of manufacture of claim 11, wherein the at least one computer processor is further configured to: determine a second set of one or more subsets of items according to a second dimension; wherein the first combination further comprises a second subset of items of the second set of one or more subsets of items; wherein the second combination further comprises the second subset.
CROSS-REFERENCE TO RELATED APPLICATION

This application is a continuation of U.S. patent application Ser. No. 17/169,948, filed on Feb. 8, 2021, which is a continuation of U.S. patent application Ser. No. 16/990,184, filed Aug. 11, 2020, now U.S. Pat. No. 10,917,526, which is a continuation of U.S. patent application Ser. No. 16/576,434, filed Sep. 19, 2019, now U.S. Pat. No. 10,757,262, which is hereby incorporated by reference in its entirety as if fully set forth herein.

Related Publications (1)
Number Date Country
20220060582 A1 Feb 2022 US
Continuations (3)
Number Date Country
Parent 17169948 Feb 2021 US
Child 17518418 US
Parent 16990184 Aug 2020 US
Child 17169948 US
Parent 16576434 Sep 2019 US
Child 16990184 US