Techniques for behavioral pairing in a task assignment system

Information

  • Patent Grant
  • Patent Number
    11,269,682
  • Date Filed
    Tuesday, December 17, 2019
  • Date Issued
    Tuesday, March 8, 2022
Abstract
Techniques for behavioral pairing in a task assignment system are disclosed. In one particular embodiment, the techniques may be realized as a method for behavioral pairing in a task assignment system comprising: determining, by at least one computer processor communicatively coupled to and configured to operate in the task assignment system, a priority for each of a plurality of tasks; determining, by the at least one computer processor, an agent available for assignment to any of the plurality of tasks; and assigning, by the at least one computer processor, a first task of the plurality of tasks to the agent using a task assignment strategy, wherein the first task has a lower priority than a second task of the plurality of tasks.
Description
FIELD OF THE DISCLOSURE

The present disclosure generally relates to behavioral pairing and, more particularly, to techniques for behavioral pairing in a task assignment system.


BACKGROUND OF THE DISCLOSURE

A typical task assignment system algorithmically assigns tasks arriving at the task assignment center to agents available to handle those tasks. At times, the task assignment system may have agents available and waiting for assignment to tasks. At other times, the task assignment center may have tasks waiting in one or more queues for an agent to become available for assignment.


In some typical task assignment centers, tasks are assigned to agents ordered based on time of arrival, and agents receive tasks ordered based on the time when those agents became available. This strategy may be referred to as a “first-in, first-out,” “FIFO,” or “round-robin” strategy. For example, in an “L2” environment, multiple tasks are waiting in a queue for assignment to an agent. When an agent becomes available, the task at the head of the queue would be selected for assignment to the agent.


Some task assignment systems prioritize some types of tasks ahead of other types of tasks. For example, some tasks may be high-priority tasks, while other tasks are low-priority tasks. Under a FIFO strategy, high-priority tasks will be assigned ahead of low-priority tasks. In some situations, some low-priority tasks may have a high average waiting time while high-priority tasks are handled instead. Moreover, agents that might have handled low-priority tasks more efficiently may end up being assigned to high-priority tasks instead, leading to suboptimal overall performance in the task assignment system.


In view of the foregoing, it may be understood that there may be a need for a system that efficiently optimizes the application of a behavioral pairing (“BP”) strategy in L2 environments of a task assignment system.


SUMMARY OF THE DISCLOSURE

Techniques for behavioral pairing in a task assignment system are disclosed. In one particular embodiment, the techniques may be realized as a method for behavioral pairing in a task assignment system comprising determining, by at least one computer processor communicatively coupled to and configured to operate in the task assignment system, a priority for each of a plurality of tasks; determining, by the at least one computer processor, an agent available for assignment to any of the plurality of tasks; and assigning, by the at least one computer processor, a first task of the plurality of tasks to the agent using a task assignment strategy, wherein the first task has a lower priority than a second task of the plurality of tasks.


In accordance with other aspects of this particular embodiment, the plurality of tasks may comprise a number of tasks from a front of a queue of tasks.


In accordance with other aspects of this particular embodiment, the number of tasks is greater than one and less than ten.


In accordance with other aspects of this particular embodiment, the method may further comprise determining, by the at least one computer processor, an optimal degree of choice for the task assignment strategy, and determining, by the at least one computer processor, the number of tasks based on the optimal degree of choice.


In accordance with other aspects of this particular embodiment, the number of tasks may be proportional to a size of the queue of tasks.


In accordance with other aspects of this particular embodiment, the number of tasks may be proportional to relative numbers of tasks of different priorities.


In accordance with other aspects of this particular embodiment, the method may further comprise determining, by the at least one computer processor, that the first task of the plurality of tasks has exceeded a relevant service level agreement.


In accordance with other aspects of this particular embodiment, the service level agreement may be a function of an estimated wait time for the first task.


In accordance with other aspects of this particular embodiment, the plurality of tasks may comprise a number of tasks from a front of a queue of tasks, and the service level agreement may be a function of the number of tasks.


In accordance with other aspects of this particular embodiment, at least one of the plurality of tasks may be a virtual task.


In accordance with other aspects of this particular embodiment, the task assignment strategy may be a behavioral pairing strategy.


In another particular embodiment, the techniques may be realized as a system for behavioral pairing in a task assignment system comprising at least one computer processor communicatively coupled to and configured to operate in the task assignment system, wherein the at least one computer processor is further configured to perform the steps in the above-described method.


In another particular embodiment, the techniques may be realized as an article of manufacture for behavioral pairing in a task assignment system comprising a non-transitory processor readable medium and instructions stored on the medium, wherein the instructions are configured to be readable from the medium by at least one computer processor communicatively coupled to and configured to operate in the task assignment system and thereby cause the at least one computer processor to operate so as to perform the steps in the above-described method.


The present disclosure will now be described in more detail with reference to particular embodiments thereof as shown in the accompanying drawings. While the present disclosure is described below with reference to particular embodiments, it should be understood that the present disclosure is not limited thereto. Those of ordinary skill in the art having access to the teachings herein will recognize additional implementations, modifications, and embodiments, as well as other fields of use, which are within the scope of the present disclosure as described herein, and with respect to which the present disclosure may be of significant utility.





BRIEF DESCRIPTION OF THE DRAWINGS

To facilitate a fuller understanding of the present disclosure, reference is now made to the accompanying drawings, in which like elements are referenced with like numerals. These drawings should not be construed as limiting the present disclosure, but are intended to be illustrative only.



FIG. 1 shows a block diagram of a task assignment system according to embodiments of the present disclosure.



FIG. 2 shows a flow diagram of a task assignment method according to embodiments of the present disclosure.





DETAILED DESCRIPTION

A typical task assignment system algorithmically assigns tasks arriving at the task assignment center to agents available to handle those tasks. At times, the task assignment system may have agents available and waiting for assignment to tasks. At other times, the task assignment center may have tasks waiting in one or more queues for an agent to become available for assignment.


In some typical task assignment centers, tasks are assigned to agents ordered based on time of arrival, and agents receive tasks ordered based on the time when those agents became available. This strategy may be referred to as a “first-in, first-out,” “FIFO,” or “round-robin” strategy. For example, in an “L2” environment, multiple tasks are waiting in a queue for assignment to an agent. When an agent becomes available, the task at the head of the queue would be selected for assignment to the agent.


Some task assignment systems prioritize some types of tasks ahead of other types of tasks. For example, some tasks may be high-priority tasks, while other tasks are low-priority tasks. Under a FIFO strategy, high-priority tasks will be assigned ahead of low-priority tasks. In some situations, some low-priority tasks may have a high average waiting time while high-priority tasks are handled instead. Moreover, agents that might have handled low-priority tasks more efficiently may end up being assigned to high-priority tasks instead, leading to suboptimal overall performance in the task assignment system.


In view of the foregoing, it may be understood that there may be a need for a system that efficiently optimizes the application of a behavioral pairing (“BP”) strategy in L2 environments of a task assignment system.



FIG. 1 shows a block diagram of a task assignment system 100 according to embodiments of the present disclosure. The description herein describes network elements, computers, and/or components of a system and method for benchmarking pairing strategies in a task assignment system that may include one or more modules. As used herein, the term “module” may be understood to refer to computing software, firmware, hardware, and/or various combinations thereof. Modules, however, are not to be interpreted as software which is not implemented on hardware, firmware, or recorded on a non-transitory processor readable recordable storage medium (i.e., modules are not software per se). It is noted that the modules are exemplary. The modules may be combined, integrated, separated, and/or duplicated to support various applications. Also, a function described herein as being performed at a particular module may be performed at one or more other modules and/or by one or more other devices instead of or in addition to the function performed at the particular module. Further, the modules may be implemented across multiple devices and/or other components local or remote to one another. Additionally, the modules may be moved from one device and added to another device, and/or may be included in both devices.


As shown in FIG. 1, the task assignment system 100 may include a task assignment module 110. The task assignment system 100 may include a switch or other type of routing hardware and software for helping to assign tasks among various agents, including queuing or switching components or other Internet-, cloud-, or network-based hardware or software solutions.


The task assignment module 110 may receive incoming tasks. In the example of FIG. 1, the task assignment system 100 receives m tasks over a given period, tasks 130A-130m. Each of the m tasks may be assigned to an agent of the task assignment system 100 for servicing or other types of task processing. In the example of FIG. 1, n agents are available during the given period, agents 120A-120n. m and n may be arbitrarily large finite integers greater than or equal to one. In a real-world task assignment system, such as a contact center, there may be dozens, hundreds, etc. of agents logged into the contact center to interact with contacts during a shift, and the contact center may receive dozens, hundreds, thousands, etc. of contacts (e.g., calls) during the shift.


In some embodiments, a task assignment strategy module 140 may be communicatively coupled to and/or configured to operate in the task assignment system 100. The task assignment strategy module 140 may implement one or more task assignment strategies (or “pairing strategies”) for assigning individual tasks to individual agents (e.g., pairing contacts with contact center agents).


A variety of different task assignment strategies may be devised and implemented by the task assignment strategy module 140. In some embodiments, a first-in/first-out (“FIFO”) strategy may be implemented in which, for example, the longest-waiting agent receives the next available task (in L1 environments) or the longest-waiting task is assigned to the next available agent (in L2 environments). Other FIFO and FIFO-like strategies may make assignments without relying on information specific to individual tasks or individual agents.
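
As a minimal illustrative sketch (not part of the patent text; the names and data layout are assumptions), a FIFO selection in both environments reduces to taking the head of an ordered list:

```python
from typing import Optional, Sequence


def fifo_select_agent(waiting_agents: Sequence[str]) -> Optional[str]:
    """L1 environment: the longest-waiting agent receives the next available task."""
    return waiting_agents[0] if waiting_agents else None


def fifo_select_task(queued_tasks: Sequence[str]) -> Optional[str]:
    """L2 environment: the task at the head of the queue goes to the next available agent."""
    return queued_tasks[0] if queued_tasks else None


# Tasks are ordered by arrival time, so the longest-waiting task "T1" is selected first.
print(fifo_select_task(["T1", "T2", "T3"]))  # -> T1
```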


In other embodiments, a performance-based routing (PBR) strategy for prioritizing higher-performing agents for task assignment may be implemented. Under PBR, for example, the highest-performing agent among available agents receives the next available task. Other PBR and PBR-like strategies may make assignments using information about specific agents but without necessarily relying on information about specific tasks.


In yet other embodiments, a behavioral pairing (BP) strategy may be used for optimally assigning tasks to agents using information about both specific tasks and specific agents. Various BP strategies may be used, such as a diagonal model BP strategy or a network flow BP strategy. These task assignment strategies and others are described in detail for the contact center context in, e.g., U.S. Pat. No. 9,300,802 and U.S. patent application Ser. No. 15/582,223, which are hereby incorporated by reference herein.


In some embodiments, a historical assignment module 150 may be communicatively coupled to and/or configured to operate in the task assignment system 100 via other modules such as the task assignment module 110 and/or the task assignment strategy module 140. The historical assignment module 150 may be responsible for various functions such as monitoring, storing, retrieving, and/or outputting information about agent task assignments that have already been made. For example, the historical assignment module 150 may monitor the task assignment module 110 to collect information about task assignments in a given period. Each record of a historical task assignment may include information such as an agent identifier, a task or task type identifier, outcome information, or a pairing strategy identifier (i.e., an identifier indicating whether a task assignment was made using a BP pairing strategy or some other pairing strategy such as a FIFO or PBR pairing strategy).
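
A record of a historical task assignment could be modeled as a simple data structure; this is only a sketch, and the field names below are illustrative rather than mandated by the disclosure:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class HistoricalAssignment:
    """One historical task assignment monitored by a historical assignment module."""
    agent_id: str             # agent identifier
    task_id: str              # task or task type identifier
    pairing_strategy: str     # e.g., "BP", "FIFO", or "PBR"
    outcome: Optional[float]  # outcome information (e.g., a sale amount or handle time)
    assigned_at: datetime     # when the assignment was made
```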


In some embodiments and for some contexts, additional information may be stored. For example, in a call center context, the historical assignment module 150 may also store information about the time a call started, the time a call ended, the phone number dialed, and the caller's phone number. For another example, in a dispatch center (e.g., “truck roll”) context, the historical assignment module 150 may also store information about the time a driver (i.e., field agent) departs from the dispatch center, the route recommended, the route taken, the estimated travel time, the actual travel time, the amount of time spent at the customer site handling the customer's task, etc.


In some embodiments, the historical assignment module 150 may generate a pairing model or similar computer processor-generated model based on a set of historical assignments for a period of time (e.g., the past week, the past month, the past year, etc.), which may be used by the task assignment strategy module 140 to make task assignment recommendations or instructions to the task assignment module 110. In other embodiments, the historical assignment module 150 may send historical assignment information to another module such as the task assignment strategy module 140 or the benchmarking module 160.


In some embodiments, a benchmarking module 160 may be communicatively coupled to and/or configured to operate in the task assignment system 100 via other modules such as the task assignment module 110 and/or the historical assignment module 150. The benchmarking module 160 may benchmark the relative performance of two or more pairing strategies (e.g., FIFO, PBR, BP, etc.) using historical assignment information, which may be received from, for example, the historical assignment module 150. In some embodiments, the benchmarking module 160 may perform other functions, such as establishing a benchmarking schedule for cycling among various pairing strategies, tracking cohorts (e.g., base and measurement groups of historical assignments), etc. The techniques for benchmarking and other functionality performed by the benchmarking module 160 for various task assignment strategies and various contexts are described in later sections throughout the present disclosure. Benchmarking is described in detail for the contact center context in, e.g., U.S. Pat. No. 9,712,676, which is hereby incorporated by reference herein.


In some embodiments, the benchmarking module 160 may output or otherwise report or use the relative performance measurements. The relative performance measurements may be used to assess the quality of the task assignment strategy to determine, for example, whether a different task assignment strategy (or a different pairing model) should be used, or to measure the overall performance (or performance gain) that was achieved within the task assignment system 100 while it was optimized or otherwise configured to use one task assignment strategy instead of another.
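
As a rough illustration of such a relative performance measurement (a sketch only, assuming each historical record carries a pairing strategy identifier and a numeric outcome as described above), the performance of two strategies could be compared by averaging outcomes within each strategy's cohort:

```python
from collections import defaultdict
from statistics import mean


def relative_performance(records, strategy_a="BP", strategy_b="FIFO"):
    """Return the ratio of mean outcomes for two pairing strategies.

    `records` is an iterable of (pairing_strategy, outcome) pairs; in a real
    system these would come from the historical assignment module.
    """
    outcomes = defaultdict(list)
    for strategy, outcome in records:
        outcomes[strategy].append(outcome)
    return mean(outcomes[strategy_a]) / mean(outcomes[strategy_b])


history = [("BP", 1.2), ("FIFO", 1.0), ("BP", 1.1), ("FIFO", 0.9)]
print(relative_performance(history))  # ~1.21, i.e., BP outperformed FIFO in this toy sample
```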


In some task assignment systems, a relatively large number of tasks can build up in a queue while waiting for assignment to agents as they become available. Consider a highly simplified example in which nine tasks are waiting in queue. Three of the tasks are high-priority tasks: H1, H2, and H3; and six of the tasks are low-priority tasks: L1, L2, L3, L4, L5, and L6.


In some task assignment systems, the tasks of different priorities may be organized (within the system, or at least conceptually) in different priority queues:


High-Priority Queue: H1, H2, H3


Low-Priority Queue: L1, L2, L3, L4, L5, L6


In this example, each priority queue is chronologically ordered according to the arrival time for each task (e.g., contact or caller in a contact center system). H1 is the longest-waiting high-priority task, H3 is the shortest-waiting high-priority task, L1 is the longest-waiting low-priority task, L6 is the shortest-waiting low-priority task, etc. In some embodiments, one or more of the tasks may be a “virtual task.” For example, in a call center context, a caller may request a callback and disconnect from the call center, but the caller's position and priority level are maintained in the queue.


In other task assignment systems, the tasks of different priorities may be intermingled (within the system, or at least conceptually) in a chronologically ordered queue, except that higher-priority tasks may be inserted in the queue ahead of lower-priority tasks:


Queue: H1, H2, H3, L1, L2, L3, L4, L5, L6


In this example, even if L1 is the longest-waiting task among all nine tasks, the three high-priority tasks that arrived later in time have been inserted into the queue ahead of L1.
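
A minimal sketch of such a single intermingled queue (illustrative only; the function and variable names are assumptions): each arriving task is inserted behind all tasks of equal or higher priority but ahead of lower-priority tasks, preserving chronological order within each priority level.

```python
def insert_by_priority(queue, task, priority, priorities):
    """Insert `task` behind equal/higher-priority tasks, ahead of lower-priority ones.

    `queue` is a list of task names in assignment order; `priorities` maps each
    task name to its priority level (a higher number means higher priority).
    """
    priorities[task] = priority
    index = len(queue)
    while index > 0 and priorities[queue[index - 1]] < priority:
        index -= 1
    queue.insert(index, task)


queue, priorities = [], {}
for name, prio in [("L1", 0), ("L2", 0), ("H1", 1), ("L3", 0), ("H2", 1), ("H3", 1)]:
    insert_by_priority(queue, name, prio, priorities)
print(queue)  # -> ['H1', 'H2', 'H3', 'L1', 'L2', 'L3'] even though L1 arrived first
```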


A typical FIFO strategy may operate by assigning all of the high-priority tasks prior to assigning any of the low-priority tasks, allowing low-priority tasks to wait in the queue indefinitely, even as agents become available that may be able to handle lower-priority tasks more efficiently than higher-priority tasks. This shortcoming may be especially pernicious if higher-priority contacts continue arriving at the task assignment system.


In some task assignment systems, a service level agreement (SLA) may be in place that puts limits on how long any one task should be expected to wait for assignment. Some examples of SLAs include a fixed time (e.g., 10 seconds, 30 seconds, 3 minutes, etc.); an estimated wait time (EWT) plus some fixed time (e.g., an EWT of 1 min. 45 sec. plus 30 seconds); and a multiplier of EWT (e.g., 150% of EWT, or 1.2*EWT).
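
These three kinds of SLA limits could be computed as in the following sketch (the function names are illustrative, and EWT is assumed to be available in seconds):

```python
def sla_limit_fixed(fixed_seconds: float) -> float:
    """SLA as a fixed time, e.g., 10, 30, or 180 seconds."""
    return fixed_seconds


def sla_limit_ewt_plus(ewt_seconds: float, extra_seconds: float) -> float:
    """SLA as estimated wait time plus a fixed allowance, e.g., EWT + 30 seconds."""
    return ewt_seconds + extra_seconds


def sla_limit_ewt_multiple(ewt_seconds: float, multiplier: float) -> float:
    """SLA as a multiple of estimated wait time, e.g., 1.5 * EWT or 1.2 * EWT."""
    return ewt_seconds * multiplier


ewt = 105.0  # an EWT of 1 min. 45 sec.
print(sla_limit_ewt_plus(ewt, 30.0))     # -> 135.0 seconds
print(sla_limit_ewt_multiple(ewt, 1.2))  # -> 126.0 seconds
```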


In these task assignment systems, a FIFO strategy may eventually assign some lower-priority tasks if the SLA is exceeded for a given task (sometimes referred to as a “blown SLA”). Nevertheless, low-priority tasks may still end up waiting in the queue for longer than the average expected wait time, and agent assignments may still be made inefficiently.


In some embodiments, a more effective and efficient task assignment strategy is a BP strategy. Under a BP strategy, as many as all nine tasks may be considered for assignment when an agent becomes available. The BP strategy may still take the priority level of each task into account, but it may ultimately prefer to assign a lower-priority task ahead of a higher-priority task if information about the task and the available agent indicate that such a pairing is optimal for performance of the task assignment system and achieving a desired target task utilization or rate of assignment.


The extent to which a BP strategy may account for priority level is a spectrum. On one extreme end of the spectrum, a BP strategy may consider all tasks in queue (or all tasks in all priority queues), giving relatively little to no weight to each task's priority level:


Queue: T1, T2, T3, T4, T5, T6, T7, T8, T9


In this example, the BP strategy may be able to make efficient, optimal task assignments. However, one possible consequence of this strategy is that some high-priority tasks may end up waiting much longer than they would under a FIFO strategy as lower-priority tasks are assigned first.


Near the other end of the spectrum, a BP strategy may consider all tasks in queue for the highest-priority level:


High-Priority Queue: H1, H2, H3


In this example, the BP strategy may still be able to make more efficient, optimal task assignments than the FIFO strategy. Under the FIFO strategy, the tasks would be assigned in queue order: first H1, then H2, and finally H3, regardless of which agent becomes available, whereas the BP strategy would consider information about the three tasks and the agent to select the more efficient pairing, even though the assigned high-priority task may not be the longest-waiting high-priority task. However, one possible consequence of this strategy is that low-priority tasks may end up waiting just as long as they would under the FIFO strategy, and opportunities to pair agents with low-priority tasks efficiently would be missed.


In some embodiments, a hybrid approach may be used that gives some deference to task prioritization and waiting time while also timely handling at least some of the longer-waiting lower-priority tasks. Some of these embodiments may be referred to as “Front-N” or “Head-N” because they consider the first N tasks in a prioritized queue.


For example, if N=6, such a BP strategy will select among the first six tasks in queue:


Queue: H1, H2, H3, L1, L2, L3, L4, L5, L6


In this example, when an agent becomes available, the BP strategy may assign any of the three high-priority tasks or any of the three longest-waiting low-priority tasks.


In some embodiments, N may be a predetermined and/or fixed value. In other embodiments, N may be dynamically determined for each pairing. For example, the BP strategy may determine a size for N that represents an optimal amount or degree of choice (e.g., 3, 6, 10, 20, etc.). For another example, N may be a function of the number of tasks waiting in the queue (e.g., one-quarter, -third, -half, etc. of the number of tasks in the queue). For another example, N may be a function of the relative number of tasks at different priority levels.
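
The dynamic choices of N described above could look roughly like the following sketch (illustrative only; the specific fraction and the priority-based rule are assumptions, not values prescribed by the disclosure):

```python
import math


def front_n_fixed(n: int) -> int:
    """Predetermined, fixed N (e.g., an optimal degree of choice such as 6)."""
    return n


def front_n_from_queue_size(queue_size: int, fraction: float = 0.5) -> int:
    """N as a function of the number of tasks waiting (e.g., half the queue)."""
    return max(1, math.ceil(queue_size * fraction))


def front_n_from_priorities(high_count: int, low_count: int) -> int:
    """N as a function of the relative numbers of tasks at different priorities.

    Assumption for illustration: consider all high-priority tasks plus an equal
    number of the longest-waiting low-priority tasks.
    """
    return high_count + min(high_count, low_count)


print(front_n_from_queue_size(9))     # -> 5
print(front_n_from_priorities(3, 6))  # -> 6 (matches the Front-6 example above)
```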


For another example, the BP strategy may consider up to i tasks for i≤N if it encounters an i-th task for which the SLA has already been blown. In this example, if L1 has already been waiting for longer than the SLA allows, the BP strategy may consider H1, H2, H3, and L1—disregarding L2 and L3 because it will prefer to pair the longer-waiting L1 before pairing L2 or L3.
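
A sketch of that truncation rule (illustrative names; `sla_blown` stands in for whatever SLA test is in force): scan the first N tasks in queue order and stop at the first task whose SLA has already been exceeded, so that the blown-SLA task is the last candidate considered.

```python
def front_candidates(queue, n, sla_blown):
    """Return up to the first `n` tasks, truncated at the first blown-SLA task.

    `sla_blown` is a predicate reporting whether a task has already waited
    longer than its SLA allows.
    """
    candidates = []
    for task in queue[:n]:
        candidates.append(task)
        if sla_blown(task):
            break  # prefer to pair this longer-waiting task before later ones
    return candidates


queue = ["H1", "H2", "H3", "L1", "L2", "L3", "L4", "L5", "L6"]
blown = {"L1"}  # assume L1 has already exceeded its SLA
print(front_candidates(queue, 6, lambda t: t in blown))  # -> ['H1', 'H2', 'H3', 'L1']
```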


In some embodiments, the BP strategy may use an SLA based on tracking how many times an individual task has been up for selection (i.e., how many times a task has appeared in the Front-N tasks):


1. H1(1), H2(1), H3(1), L1(1), L2(1), L3(1)=>H3 selected


2. H1(2), H2(2), L1(2), L2(2), L3(2), L4(1)=>L2 selected


3. H1(3), H2(3), L1(3), L3(3), L4(2), L5(1)=>H1 selected


4. H2(4), L1(4), L3(4), L4(3), L5(2), L6(1)


If the SLA is based on whether a task has appeared in the Front-6 more than three times, there are now three tasks with blown SLAs by the fourth assignment: H2, L1, and L3 have now appeared for a fourth time. In these embodiments, the BP strategy may preferably pair these three tasks ahead of other tasks that have appeared in the Front-6 only three or fewer times (i.e., L4, L5, and L6).
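
A sketch of the appearance-count bookkeeping (illustrative only): each time a task appears in the Front-N it is counted, and tasks whose count exceeds the allowed number of appearances are treated as having blown their SLA and preferred for pairing.

```python
from collections import Counter


def blown_by_appearances(queue, n, appearance_counts, max_appearances):
    """Update appearance counts for the Front-N tasks and return those over the limit."""
    front = queue[:n]
    for task in front:
        appearance_counts[task] += 1
    return [task for task in front if appearance_counts[task] > max_appearances]


# Counts before the fourth assignment, matching the running example above.
counts = Counter({"H2": 3, "L1": 3, "L3": 3, "L4": 2, "L5": 1})
queue = ["H2", "L1", "L3", "L4", "L5", "L6"]
print(blown_by_appearances(queue, 6, counts, max_appearances=3))  # -> ['H2', 'L1', 'L3']
```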


In some embodiments, the SLA based on Front-N may be a function of N. For example, a task may appear in the Front-N up to ½N, N, 2N, 5N, etc. times before the SLA is blown. This type of SLA may be especially useful in real-world scenarios in which higher-priority tasks continue to arrive at the queue and would otherwise be assigned ahead of longer-waiting lower-priority tasks that have already appeared in the Front-N more than the Front-N SLA would normally expect or allow.


In some embodiments, individual tasks or types of tasks may have different SLAs from other tasks or other types of tasks. The different SLAs may be based on any of the techniques described above, such as time-based SLAs or SLAs based on the number of times an individual task has been included in the Front-N or otherwise evaluated. For example, the first task in the queue may have an SLA of 2N, whereas the second task in the queue may have an SLA of 3N. The determination of which SLA an individual task has may be based on information about the task, information about the available agent or agents, or both.


In some embodiments, the SLA for a task may be dynamic, changing as the amount of waiting time increases or the number of times the task has been evaluated in the Front-N increases.



FIG. 2 shows a task assignment method 200 according to embodiments of the present disclosure.


Task assignment method 200 may begin at block 210. At block 210, a number of tasks defining a size of a plurality of tasks may be determined. In some embodiments, the number of tasks for the size of the plurality of tasks may be equivalent to a size of a queue of tasks. For example, in a contact center context, if twenty contacts are waiting in a queue for connection to an agent, the plurality of tasks would include all twenty contacts from the queue. In other embodiments, the number of tasks may be a fixed or predetermined number of tasks taken from the front or head of the queue. For example, if the number of tasks is ten, the plurality of tasks may include the first ten tasks (e.g., contacts) from the queue of size twenty. In other embodiments, the number of tasks may be dynamically determined according to any of the techniques described above, such as a function (e.g., fraction, percentage, proportion) of the size of the queue, a function of a relative number of tasks for different priority levels, a function of a degree of choice for a behavioral pairing strategy, etc. In some embodiments, this number of tasks may be referred to as “N” and the plurality of tasks may be referred to as the “Front-N” plurality of tasks.


Task assignment method 200 may proceed to block 220. At block 220, a priority may be determined for each of the plurality of tasks (e.g., the Front-N tasks). For example, a first portion of the plurality of tasks may be designated as “high priority,” and a second portion of the plurality of tasks may be designated as “low priority.” In some embodiments, there may be an arbitrarily large number of different priorities and identifiers for priorities. In some embodiments, the task assignment system may maintain separate queues of tasks for each priority. In other embodiments, the task assignment system may maintain a single queue of tasks ordered first by priority and, in some cases, second by order of arrival time or another chronological ordering. In these embodiments, task assignment method 200 may consider all tasks or the Front-N tasks regardless of whether the tasks are maintained in a single prioritized queue or multiple priority queues.
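
Whether the tasks live in one prioritized queue or in separate priority queues, blocks 210 and 220 amount to producing a priority-ordered candidate list and keeping its first N entries; a sketch follows (illustrative names, assuming higher numbers denote higher priority):

```python
def front_n_candidates(priority_queues, n):
    """Flatten per-priority queues into one prioritized order and keep the first N.

    `priority_queues` maps a priority level (higher = more urgent) to a list of
    tasks ordered by arrival time within that level.
    """
    ordered = []
    for priority in sorted(priority_queues, reverse=True):
        ordered.extend(priority_queues[priority])
    return ordered[:n]


queues = {1: ["H1", "H2", "H3"], 0: ["L1", "L2", "L3", "L4", "L5", "L6"]}
print(front_n_candidates(queues, 6))  # -> ['H1', 'H2', 'H3', 'L1', 'L2', 'L3']
```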


Task assignment method 200 may proceed to block 230. At block 230, it may be determined whether an SLA has been exceeded for at least one task of the plurality of tasks. In some embodiments, the task assignment strategy or the task assignment system will assign an agent to a task that has exceeded its SLA (e.g., the longest-waiting task with an exceeded or blown SLA). In various embodiments, the SLA may be defined or otherwise determined according to any of the techniques described above, such as a fixed time, a function of EWT, or a function of the number of times a given task has been available for assignment in the Front-N. In other embodiments, there may be no SLA relevant to the task assignment strategy, and the task assignment method 200 may proceed without determining or otherwise checking for any exceeded SLAs.


Task assignment method 200 may proceed to block 240. At block 240, an agent may be determined that is available for assignment to any of the plurality of tasks. For example, in L2 environments, an agent becomes available for assignment. In other environments, such as L3 environments, multiple agents may be available for assignment.


Task assignment method 200 may proceed to block 250. At block 250, a task of the plurality of tasks may be assigned to the agent using the task assignment strategy. For example, if the task assignment strategy is a BP strategy, the BP strategy may consider information about each of the plurality of tasks and information about the agent to determine which task assignment is expected to optimize overall performance of the task assignment system. In some instances, the optimal assignment may be the longest-waiting, highest-priority task, as would be the case for a FIFO or PBR strategy. However, in other instances, the optimal assignment may be a longer-waiting and/or lower-priority task. Even in these instances, a lower expected performance for the instant pairing may be expected to lead to a higher overall performance of the task assignment system while also, in some embodiments, achieving a balanced or otherwise targeted task utilization (e.g., normalizing or balancing average waiting time for all tasks, or balancing average waiting time for all tasks within the same priority level).
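
As a sketch of block 250 (not the disclosure's algorithm; the `score(task, agent)` function is an assumption standing in for a pairing model such as one generated from historical assignments), a BP-style selection can be expressed as picking the candidate pairing with the highest expected value:

```python
def bp_select(candidates, agent, score):
    """Pick the candidate task whose pairing with `agent` has the highest expected value."""
    return max(candidates, key=lambda task: score(task, agent))


# Toy model: this particular agent historically handles low-priority tasks well.
toy_scores = {("H1", "A7"): 0.52, ("H2", "A7"): 0.48, ("L1", "A7"): 0.64}
pick = bp_select(["H1", "H2", "L1"], "A7", lambda t, a: toy_scores.get((t, a), 0.0))
print(pick)  # -> 'L1': a lower-priority task is chosen because that pairing is expected to perform better
```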


In some embodiments, the task assignment strategy or the task assignment system may prioritize assigning a task with an exceeded SLA (such as a longest-waiting and/or highest-priority task with an exceeded SLA) if there is one.


In some embodiments, the task assignment system may cycle among multiple task assignment strategies (e.g., cycling between a BP strategy and FIFO or a PBR strategy). In some of these embodiments, the task assignment system may benchmark the relative performance of the multiple task assignment strategies.


After assigning the task to the agent, task assignment method 200 may end.


At this point it should be noted that behavioral pairing in a task assignment system in accordance with the present disclosure as described above may involve the processing of input data and the generation of output data to some extent. This input data processing and output data generation may be implemented in hardware or software. For example, specific electronic components may be employed in a behavioral pairing module or similar or related circuitry for implementing the functions associated with behavioral pairing in a task assignment system in accordance with the present disclosure as described above. Alternatively, one or more processors operating in accordance with instructions may implement the functions associated with behavioral pairing in a task assignment system in accordance with the present disclosure as described above. If such is the case, it is within the scope of the present disclosure that such instructions may be stored on one or more non-transitory processor readable storage media (e.g., a magnetic disk or other storage medium), or transmitted to one or more processors via one or more signals embodied in one or more carrier waves.


The present disclosure is not to be limited in scope by the specific embodiments described herein. Indeed, other various embodiments of and modifications to the present disclosure, in addition to those described herein, will be apparent to those of ordinary skill in the art from the foregoing description and accompanying drawings. Thus, such other embodiments and modifications are intended to fall within the scope of the present disclosure. Further, although the present disclosure has been described herein in the context of at least one particular implementation in at least one particular environment for at least one particular purpose, those of ordinary skill in the art will recognize that its usefulness is not limited thereto and that the present disclosure may be beneficially implemented in any number of environments for any number of purposes. Accordingly, the claims set forth below should be construed in view of the full breadth and spirit of the present disclosure as described herein.

Claims
  • 1. A method for pairing in a contact center system comprising: determining, by at least one computer processor communicatively coupled to and configured to operate in the contact center system, a first threshold time based on a first number of contacts in a set of contacts,pairing, by the at least one computer processor, a first contact of the set of contacts based on a first pairing strategy;after the pairing of the first contact and before another contact is received by the contact center system, determining, by the at least one computer processor, a second threshold time different from the first threshold time and based on a second number of contacts in a remainder of the set of contacts;determining, by the at least one computer processor, whether a second contact of the set of contacts exceeds the second threshold time; andpairing, by the at least one computer processor, the second contact based on determining that the second contact of the set of contacts exceeds the second threshold time and based on a second pairing strategy;wherein the first contact and the second contact are waiting on hold connected to the contact center system.
  • 2. The method of claim 1, wherein at least one of the first threshold time and the second threshold time comprises a wait time limit according to a service level agreement.
  • 3. The method of claim 1, wherein at least one of the first threshold time and the second threshold time comprises an expected amount of wait time before pairing the first contact based on the first pairing strategy.
  • 4. The method of claim 1, wherein at least one of the first threshold time and the second threshold time comprises a wait time having a greatest value in a plurality of wait times.
  • 5. The method of claim 1, further comprising pairing, by the at least one computer processor, a third contact of the remainder of the set of contacts based on determining that the second contact of the set of contacts does not exceed the second threshold time.
  • 6. A method for pairing in a contact center system comprising: determining, by at least one computer processor communicatively coupled to and configured to operate in the contact center system, a first threshold time based on a first number of contacts in a set of contacts waiting on to be paired based on a first pairing strategy in the contact center system;receiving, by the at least one computer processor, a new contact by the contact center system;after the receiving and before any contacts of the set of contacts are paired based on the first pairing strategy in the contact center system, determining, by the at least one computer processor, a second threshold time different from the first threshold time and based on a second number of contacts including the set of contacts and the new contact;determining, by the at least one computer processor, whether a first contact of the set of contacts exceeds the second threshold time; andpairing, by the at least one computer processor, the first contact of the set of contacts based on determining that the first contact of the set of contacts exceeds the second threshold time and based on a second pairing strategy.
  • 7. The method of claim 6, wherein at least one of the first threshold time and the second threshold time comprises a wait time limit according to a service level agreement.
  • 8. The method of claim 6, wherein at least one of the first threshold time and the second threshold time comprises an expected amount of wait time before pairing the first contact based on the second pairing strategy.
  • 9. The method of claim 6, wherein at least one of the first threshold time and the second threshold time comprises a wait time having a greatest value in a plurality of wait times.
  • 10. The method of claim 6, further comprising pairing, by the at least one computer processor, a second contact of the set of contacts based on determining that the first contact of the set of contacts does not exceed the second threshold time.
  • 11. A system for pairing in a contact center system comprising: at least one computer processor communicatively coupled to and configured to operate in the contact center system, wherein the at least one computer processor is further configured to: determine a first threshold time based on a first number of contacts in a set of contacts,pair a first contact of the set of contacts based on a first pairing strategy;after the pairing of the first contact and before another contact is received by the contact center system, determine a second threshold time different from the first threshold time and based on a second number of contacts in a remainder of the set of contacts;determine whether a second contact of the set of contacts exceeds the second threshold time; andpair the second contact based on determining that the second contact of the set of contacts exceeds the second threshold time and based on a second pairing strategy;wherein the first contact and the second contact are waiting on hold connected to the contact center system.
  • 12. The system of claim 11, wherein at least one of the first threshold time and the second threshold time comprises a wait time limit according to a service level agreement.
  • 13. The system of claim 11, wherein at least one of the first threshold time and the second threshold time comprises an expected amount of wait time before pairing the first contact based on the first pairing strategy.
  • 14. The system of claim 11, wherein at least one of the first threshold time and the second threshold time comprises a wait time having a greatest value in a plurality of wait times.
  • 15. The system of claim 11, wherein the at least one computer processor is further configured to pair a third contact of the remainder of the set of contacts based on determining that the second contact of the set of contacts does not exceed the second threshold time.
  • 16. A system for pairing in a contact center system comprising: at least one computer processor communicatively coupled to and configured to operate in the contact center system, wherein the at least one computer processor is further configured to: determine a first threshold time based on a first number of contacts in a set of contacts waiting on to be paired based on a first pairing strategy in the contact center system;receive a new contact by the contact center system;after the receiving and before any contacts of the set of contacts are paired based on the first pairing strategy in the contact center system, determine a second threshold time different from the first threshold time and based on a second number of contacts including the set of contacts and the new contact;determine whether a first contact of the set of contacts exceeds the second threshold time; andpair the first contact of the set of contacts based on determining that the first contact of the set of contacts exceeds the second threshold time and based on a second pairing strategy.
  • 17. The system of claim 16, wherein at least one of the first threshold time and the second threshold time comprises a wait time limit according to a service level agreement.
  • 18. The system of claim 16, wherein at least one of the first threshold time and the second threshold time comprises an expected amount of wait time before pairing the first contact based on the second pairing strategy.
  • 19. The system of claim 16, wherein at least one of the first threshold time and the second threshold time comprises a wait time having a greatest value in a plurality of wait times.
  • 20. The system of claim 16, wherein the at least one computer processor is further configured to pair a second contact of the set of contacts based on determining that the first contact of the set of contacts does not exceed the second threshold time.
  • 21. An article of manufacture for pairing in a contact center system comprising: a non-transitory computer processor readable medium; andinstructions stored on the medium;wherein the instructions are configured to be readable from the medium by at least one computer processor communicatively coupled to and configured to operate in a contact center system and thereby cause the at least one computer processor to operate so as to: determine a first threshold time based on a first number of contacts in a set of contacts,pair a first contact of the set of contacts based on a first pairing strategy;after the pairing of the first contact and before another contact is received by the contact center system, determine a second threshold time different from the first threshold time and based on a second number of contacts in a remainder of the set of contacts;determine whether a second contact of the set of contacts exceeds the second threshold time; andpair the second contact based on determining that the second contact of the set of contacts exceeds the second threshold time and based on a second pairing strategy;wherein the first contact and the second contact are waiting on hold connected to the contact center system.
  • 22. The article of manufacture of claim 21, wherein at least one of the first threshold time and the second threshold time comprises a wait time limit according to a service level agreement.
  • 23. The article of manufacture of claim 21, wherein at least one of the first threshold time and the second threshold time comprises an expected amount of wait time before pairing the first contact based on the first pairing strategy.
  • 24. The article of manufacture of claim 21, wherein at least one of the first threshold time and the second threshold time comprises a wait time having a greatest value in a plurality of wait times.
  • 25. The article of manufacture of claim 21, wherein the at least one computer processor is further caused to operate so as to pair a third contact of the remainder of the set of contacts based on determining that the second contact of the set of contacts does not exceed the second threshold time.
  • 26. An article of manufacture for pairing in a contact center system comprising: a non-transitory computer processor readable medium; andinstructions stored on the medium;wherein the instructions are configured to be readable from the medium by at least one computer processor communicatively coupled to and configured to operate in a contact center system and thereby cause the at least one computer processor to operate so as to: determine a first threshold time based on a first number of contacts in a set of contacts waiting on to be paired based on a first pairing strategy in the contact center system;receive a new contact by the contact center system;after the receiving and before any contacts of the set of contacts are paired based on the first pairing strategy in the contact center system, determine a second threshold time different from the first threshold time and based on a second number of contacts including the set of contacts and the new contact;determine whether a first contact of the set of contacts exceeds the second threshold time; andpair the first contact of the set of contacts based on determining that the first contact of the set of contacts exceeds the second threshold time and based on a second pairing strategy.
  • 27. The article of manufacture of claim 26, wherein at least one of the first threshold time and the second threshold time comprises a wait time limit according to a service level agreement.
  • 28. The article of manufacture of claim 26, wherein at least one of the first threshold time and the second threshold time comprises an expected amount of wait time before pairing the first contact based on the second pairing strategy.
  • 29. The article of manufacture of claim 26, wherein at least one of the first threshold time and the second threshold time comprises a wait time having a greatest value in a plurality of wait times.
  • 30. The article of manufacture of claim 26, wherein the at least one computer processor is further caused to operate so as to pair a second contact of the set of contacts based on determining that the first contact of the set of contacts does not exceed the second threshold time.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 15/837,911, filed Dec. 11, 2017 (now U.S. Pat. No. 10,509,671), which is hereby incorporated by reference in its entirety as if fully set forth herein.

US Referenced Citations (278)
Number Name Date Kind
5155763 Bigus et al. Oct 1992 A
5206903 Kohler et al. Apr 1993 A
5327490 Cave Jul 1994 A
5537470 Lee Jul 1996 A
5702253 Bryce et al. Dec 1997 A
5825869 Brooks et al. Oct 1998 A
5903641 Tonisson May 1999 A
5907601 David et al. May 1999 A
5926538 Deryugin et al. Jul 1999 A
6044355 Crockett et al. Mar 2000 A
6049603 Schwartz et al. Apr 2000 A
6052460 Fisher et al. Apr 2000 A
6064731 Flockhart et al. May 2000 A
6088444 Walker et al. Jul 2000 A
6163607 Bogart et al. Dec 2000 A
6222919 Hollatz et al. Apr 2001 B1
6292555 Okamoto Sep 2001 B1
6324282 McIllwaine et al. Nov 2001 B1
6333979 Bondi et al. Dec 2001 B1
6389132 Price May 2002 B1
6389400 Bushey et al. May 2002 B1
6408066 Andruska et al. Jun 2002 B1
6411687 Bohacek et al. Jun 2002 B1
6424709 Doyle et al. Jul 2002 B1
6434230 Gabriel Aug 2002 B1
6496580 Chack Dec 2002 B1
6504920 Okon et al. Jan 2003 B1
6519335 Bushnell Feb 2003 B1
6535600 Fisher et al. Mar 2003 B1
6535601 Flockhart et al. Mar 2003 B1
6570980 Baruch May 2003 B1
6587556 Judkins et al. Jul 2003 B1
6603854 Judkins et al. Aug 2003 B1
6639976 Shellum et al. Oct 2003 B1
6661889 Flockhart et al. Dec 2003 B1
6704410 McFarlane et al. Mar 2004 B1
6707904 Judkins et al. Mar 2004 B1
6714643 Gargeya et al. Mar 2004 B1
6757897 Shi et al. Jun 2004 B1
6763104 Judkins et al. Jul 2004 B1
6774932 Ewing et al. Aug 2004 B1
6775378 Villena et al. Aug 2004 B1
6798876 Bala Sep 2004 B1
6829348 Schroeder et al. Dec 2004 B1
6832203 Villena et al. Dec 2004 B1
6859529 Duncan et al. Feb 2005 B2
6895083 Bers et al. May 2005 B1
6922466 Peterson et al. Jul 2005 B1
6937715 Delaney Aug 2005 B2
6956941 Duncan et al. Oct 2005 B1
6970821 Shambaugh et al. Nov 2005 B1
6978006 Polcyn Dec 2005 B1
7023979 Wu et al. Apr 2006 B1
7039166 Peterson et al. May 2006 B1
7050566 Becerra et al. May 2006 B2
7050567 Jensen May 2006 B1
7062031 Becerra et al. Jun 2006 B2
7068775 Lee Jun 2006 B1
7092509 Mears et al. Aug 2006 B1
7103172 Brown et al. Sep 2006 B2
7158628 McConnell et al. Jan 2007 B2
7184540 Dezonno et al. Feb 2007 B2
7209549 Reynolds et al. Apr 2007 B2
7231032 Nevman et al. Jun 2007 B2
7231034 Rikhy et al. Jun 2007 B1
7236584 Torba Jun 2007 B2
7245716 Brown et al. Jul 2007 B2
7245719 Kawada et al. Jul 2007 B2
7266251 Rowe Sep 2007 B2
7269253 Wu et al. Sep 2007 B1
7353388 Gilman et al. Apr 2008 B1
7398224 Cooper Jul 2008 B2
7593521 Becerra et al. Sep 2009 B2
7676034 Wu et al. Mar 2010 B1
7689998 Chrysanthakopoulos Mar 2010 B1
7725339 Aykin May 2010 B1
7734032 Kiefhaber et al. Jun 2010 B1
7798876 Mix Sep 2010 B2
7826597 Berner et al. Nov 2010 B2
7864944 Khouri et al. Jan 2011 B2
7899177 Bruening et al. Mar 2011 B1
7916858 Heller et al. Mar 2011 B1
7940917 Lauridsen et al. May 2011 B2
7961866 Boutcher et al. Jun 2011 B1
7995717 Conway et al. Aug 2011 B2
8000989 Kiefhaber et al. Aug 2011 B1
8010607 McCormack et al. Aug 2011 B2
8094790 Conway et al. Jan 2012 B2
8126133 Everingham et al. Feb 2012 B1
8140441 Cases et al. Mar 2012 B2
8175253 Knott et al. May 2012 B2
8229102 Knott et al. Jul 2012 B2
8249245 Jay et al. Aug 2012 B2
8295471 Spottiswoode et al. Oct 2012 B2
8300798 Wu et al. Oct 2012 B1
8306212 Arora Nov 2012 B2
8359219 Chishti et al. Jan 2013 B2
8433597 Chishti et al. Apr 2013 B2
8472611 Chishti Jun 2013 B2
8565410 Chishti et al. Oct 2013 B2
8634542 Spottiswoode et al. Jan 2014 B2
8644490 Stewart Feb 2014 B2
8670548 Xie et al. Mar 2014 B2
8699694 Chishti et al. Apr 2014 B2
8712821 Spottiswoode Apr 2014 B2
8718271 Spottiswoode May 2014 B2
8724797 Chishti et al. May 2014 B2
8731178 Chishti et al. May 2014 B2
8737595 Chishti et al. May 2014 B2
8750488 Spottiswoode et al. Jun 2014 B2
8761380 Kohler et al. Jun 2014 B2
8781100 Spottiswoode et al. Jul 2014 B2
8781106 Afzal Jul 2014 B2
8792630 Chishti et al. Jul 2014 B2
8824658 Chishti Sep 2014 B2
8831207 Agarwal Sep 2014 B1
8879715 Spottiswoode et al. Nov 2014 B2
8903079 Xie et al. Dec 2014 B2
8913736 Kohler et al. Dec 2014 B2
8929537 Chishti et al. Jan 2015 B2
8995647 Li et al. Mar 2015 B2
9020137 Chishti et al. Apr 2015 B2
9025757 Spottiswoode et al. May 2015 B2
9215323 Chishti Dec 2015 B2
9277055 Spottiswoode et al. Mar 2016 B2
9300802 Chishti Mar 2016 B1
9426296 Chishti et al. Aug 2016 B2
9712676 Chishti Jul 2017 B1
9712679 Chishti et al. Jul 2017 B2
10135987 Chishti et al. Nov 2018 B1
20010032120 Stuart et al. Oct 2001 A1
20020018554 Jensen Feb 2002 A1
20020046030 Haritsa et al. Apr 2002 A1
20020059164 Shtivelman May 2002 A1
20020082736 Lech et al. Jun 2002 A1
20020110234 Walker et al. Aug 2002 A1
20020111172 DeWolf et al. Aug 2002 A1
20020131399 Philonenko Sep 2002 A1
20020138285 DeCotiis et al. Sep 2002 A1
20020143599 Nourbakhsh et al. Oct 2002 A1
20020161765 Kundrot et al. Oct 2002 A1
20020184069 Kosiba et al. Dec 2002 A1
20020196845 Richards et al. Dec 2002 A1
20030002653 Uckun Jan 2003 A1
20030059029 Mengshoel et al. Mar 2003 A1
20030081757 Mengshoel et al. May 2003 A1
20030095652 Mengshoel et al. May 2003 A1
20030169870 Stanford Sep 2003 A1
20030174830 Boyer et al. Sep 2003 A1
20030217016 Pericle Nov 2003 A1
20040015973 Skovira Jan 2004 A1
20040028211 Culp et al. Feb 2004 A1
20040057416 McCormack Mar 2004 A1
20040096050 Das et al. May 2004 A1
20040098274 Dezonno et al. May 2004 A1
20040101127 Dezonno et al. May 2004 A1
20040109555 Williams Jun 2004 A1
20040133434 Szlam et al. Jul 2004 A1
20040210475 Starnes et al. Oct 2004 A1
20040230438 Pasquale et al. Nov 2004 A1
20040267816 Russek Dec 2004 A1
20050013428 Walters Jan 2005 A1
20050043986 McConnell et al. Feb 2005 A1
20050047581 Shaffer et al. Mar 2005 A1
20050047582 Shaffer et al. Mar 2005 A1
20050071223 Jain et al. Mar 2005 A1
20050097556 Code et al. May 2005 A1
20050129212 Parker Jun 2005 A1
20050135593 Becerra et al. Jun 2005 A1
20050135596 Zhao Jun 2005 A1
20050187802 Koeppel Aug 2005 A1
20050195960 Shaffer et al. Sep 2005 A1
20050201547 Burg Sep 2005 A1
20050286709 Horton et al. Dec 2005 A1
20060037021 Anand et al. Feb 2006 A1
20060098803 Bushey et al. May 2006 A1
20060110052 Finlayson May 2006 A1
20060124113 Roberts Jun 2006 A1
20060184040 Keller et al. Aug 2006 A1
20060222164 Contractor et al. Oct 2006 A1
20060233346 McIlwaine et al. Oct 2006 A1
20060262918 Karnalkar et al. Nov 2006 A1
20060262922 Margulies et al. Nov 2006 A1
20070036323 Travis Feb 2007 A1
20070071222 Flockhart et al. Mar 2007 A1
20070121602 Sin et al. May 2007 A1
20070121829 Tal et al. May 2007 A1
20070127690 Patakula Jun 2007 A1
20070136342 Singhai et al. Jun 2007 A1
20070153996 Hansen Jul 2007 A1
20070154007 Bernhard Jul 2007 A1
20070174111 Anderson et al. Jul 2007 A1
20070198322 Bourne et al. Aug 2007 A1
20070211881 Parker-Stephen Sep 2007 A1
20070219816 Van Luchene et al. Sep 2007 A1
20070274502 Brown Nov 2007 A1
20080002823 Fama et al. Jan 2008 A1
20080008309 Dezonno et al. Jan 2008 A1
20080046386 Pieraccinii et al. Feb 2008 A1
20080065476 Klein et al. Mar 2008 A1
20080118052 Houmaidi et al. May 2008 A1
20080147470 Johri et al. Jun 2008 A1
20080152122 Idan et al. Jun 2008 A1
20080181389 Bourne et al. Jul 2008 A1
20080199000 Su et al. Aug 2008 A1
20080205611 Jordan et al. Aug 2008 A1
20080222640 Daly et al. Sep 2008 A1
20080267386 Cooper Oct 2008 A1
20080273687 Knott et al. Nov 2008 A1
20090031312 Mausolf et al. Jan 2009 A1
20090043670 Johansson et al. Feb 2009 A1
20090086933 Patel et al. Apr 2009 A1
20090158299 Carter Jun 2009 A1
20090190740 Chishti et al. Jul 2009 A1
20090190743 Spottiswoode Jul 2009 A1
20090190744 Xie et al. Jul 2009 A1
20090190745 Xie et al. Jul 2009 A1
20090190746 Chishti et al. Jul 2009 A1
20090190747 Spottiswoode Jul 2009 A1
20090190748 Chishti et al. Jul 2009 A1
20090190749 Xie et al. Jul 2009 A1
20090190750 Xie et al. Jul 2009 A1
20090232294 Xie et al. Sep 2009 A1
20090234710 Belgaied Hassine et al. Sep 2009 A1
20090245493 Chen et al. Oct 2009 A1
20090254774 Chamdani et al. Oct 2009 A1
20090304172 Becerra et al. Dec 2009 A1
20090318111 Desai et al. Dec 2009 A1
20090323921 Spottiswoode et al. Dec 2009 A1
20100020959 Spottiswoode Jan 2010 A1
20100020961 Spottiswoode Jan 2010 A1
20100054431 Jaiswal et al. Mar 2010 A1
20100054452 Afzal Mar 2010 A1
20100054453 Stewart Mar 2010 A1
20100086120 Brussat et al. Apr 2010 A1
20100111285 Chishti May 2010 A1
20100111286 Chishti May 2010 A1
20100111287 Xie et al. May 2010 A1
20100111288 Afzal et al. May 2010 A1
20100142689 Hansen et al. Jun 2010 A1
20100142698 Spottiswoode et al. Jun 2010 A1
20100158238 Saushkin Jun 2010 A1
20100183138 Spottiswoode et al. Jul 2010 A1
20110022357 Vock et al. Jan 2011 A1
20110031112 Birang et al. Feb 2011 A1
20110069821 Korolev et al. Mar 2011 A1
20110125048 Causevic et al. May 2011 A1
20110173329 Zhang et al. Jul 2011 A1
20120051536 Chishti et al. Mar 2012 A1
20120051537 Chishti et al. Mar 2012 A1
20120183131 Kohler Jul 2012 A1
20120224680 Spottiswoode et al. Sep 2012 A1
20120233623 van Riel Sep 2012 A1
20120278136 Flockhart et al. Nov 2012 A1
20130003959 Nishikawa et al. Jan 2013 A1
20130051545 Ross et al. Feb 2013 A1
20130074088 Purcell et al. Mar 2013 A1
20130111009 Sng et al. May 2013 A1
20130251137 Chishti et al. Sep 2013 A1
20130287202 Flockhart et al. Oct 2013 A1
20140044246 Klemm et al. Feb 2014 A1
20140079210 Kohler et al. Mar 2014 A1
20140119531 Tuchman et al. May 2014 A1
20140119533 Spottiswoode et al. May 2014 A1
20140169549 Desai Jun 2014 A1
20140325524 Zangaro et al. Oct 2014 A1
20140341370 Li et al. Nov 2014 A1
20150055772 Klemm et al. Feb 2015 A1
20150103999 Nowak Apr 2015 A1
20150106819 Kim et al. Apr 2015 A1
20150268992 Fan Sep 2015 A1
20150281448 Putra et al. Oct 2015 A1
20160062797 Holt et al. Mar 2016 A1
20160080573 Chishti Mar 2016 A1
20160330324 Srinivas Nov 2016 A1
20170064080 Chishti et al. Mar 2017 A1
20170064081 Chishti et al. Mar 2017 A1
20170109206 Wang et al. Apr 2017 A1
Foreign Referenced Citations (46)
Number Date Country
2008349500 May 2014 AU
2009209317 May 2014 AU
2009311534 Aug 2014 AU
102301688 May 2014 CN
102017591 Nov 2014 CN
0493292 Jul 1992 EP
0949793 Oct 1999 EP
1032188 Aug 2000 EP
1335572 Aug 2003 EP
11-098252 Apr 1999 JP
2000-069168 Mar 2000 JP
2000-078291 Mar 2000 JP
2000-078292 Mar 2000 JP
2000-092213 Mar 2000 JP
2000-236393 Aug 2000 JP
2001-217939 Aug 2001 JP
2001-292236 Oct 2001 JP
2001-518753 Oct 2001 JP
2002-297900 Oct 2002 JP
3366565 Jan 2003 JP
2003-187061 Jul 2003 JP
2004-056517 Feb 2004 JP
2004-227228 Aug 2004 JP
2006-345132 Dec 2006 JP
2007-324708 Dec 2007 JP
2011-511533 Apr 2011 JP
2011-511536 Apr 2011 JP
2012-075146 Apr 2012 JP
5421928 Feb 2014 JP
5631326 Nov 2014 JP
5649575 Jan 2015 JP
2015-514371 May 2015 JP
316118 Dec 2013 MX
322251 Jul 2014 MX
587100 Oct 2013 NZ
587101 Oct 2013 NZ
591486 Jan 2014 NZ
592781 Mar 2014 NZ
1-2010-501704 Feb 2014 PH
1-2010-501705 Feb 2015 PH
WO-9917517 Apr 1999 WO
WO-2001063894 Aug 2001 WO
WO-2006124113 Nov 2006 WO
WO-2009097018 Aug 2009 WO
WO-2010053701 May 2010 WO
WO-2011081514 Jul 2011 WO
Non-Patent Literature Citations (11)
Entry
Anonymous. (2006) “Performance Based Routing in Profit Call Centers,” The Decision Makers' Direct, located at www.decisioncraft.com, Issue Jun. 2002 (3 pages).
Cleveland, William S., “Robust Locally Weighted Regression and Smoothing Scatterplots,” Journal of the American Statistical Association, vol. 74, No. 368, Dec. 1979, pp. 829-836 (8 pages).
Cormen, T. H., et al., “Introduction to Algorithms,” 3rd Edition, Chapter 26 Maximum Flow, pp. 708-768 and Chapter 29 Linear Programming, pp. 843-897 (2009).
Gans, N. et al., “Telephone Call Centers: Tutorial, Review and Research Prospects,” Manufacturing & Service Operations Management, vol. 5, No. 2, 2003, pp. 79-141, (84 pages).
Koole, G. (2004). “Performance Analysis and Optimization in Customer Contact Centers,” Proceedings of the Quantitative Evaluation of Systems, First International Conference, Sep. 27-30, 2004 (4 pages).
Koole, G. et al. (Mar. 6, 2006). “An Overview of Routing and Staffing Algorithms in Multi-Skill Customer Contact Centers,” Manuscript, 42 pages.
Nocedal, J. and Wright, S. J., “Numerical Optimization,” Chapter 16 Quadratic Programming, pp. 448-496 (2006) 50 pages.
Ntzoufras, “Bayesian Modeling Using Winbugs”. Wiley Interscience, Chapter 5, Normal Regression Models, Oct. 18, 2007, Redacted version, pp. 155-220 (67 pages).
Press, W. H. and Rybicki, G. B., “Fast Algorithm for Spectral Analysis of Unevenly Sampled Data,” The Astrophysical Journal, vol. 338, Mar. 1, 1989, pp. 277-280 (4 pages).
Riedmiller, M. et al. (1993). “A Direct Adaptive Method for Faster Back Propagation Learning: The RPROP Algorithm,” 1993 IEEE International Conference on Neural Networks, San Francisco, CA, Mar. 28-Apr. 1, 1993, 1:586-591.
Stanley et al., “Improving call center operations using performance-based routing strategies,” Calif. Journal of Operations Management, 6(1), 24-32, Feb. 2008; retrieved from http://userwww.sfsu.edu/saltzman/Publist.html.
Related Publications (1)
Number Date Country
20200125407 A1 Apr 2020 US
Continuations (1)
Number Date Country
Parent 15837911 Dec 2017 US
Child 16717724 US