The technology disclosed describes systems and methods for intelligent routing of service requests in a large, distributed service center operation—efficiently prioritizing the routing of work across organizations to agents based on availability, capacity and priority, in a multi-tenant environment. The methods disclosed include managing digital data for a plurality of tenants to software instances, each tenant of the plurality of tenants comprising a group of users who share a common access with a specific set of privileges to a software instance of at least one application.
The technology disclosed includes systems and methods for an omni-channel routing broker.
Customer service is moving toward more personalized 1:1 communication with consumers, through the many channels and on the many devices they use. Omni-channel is a multichannel approach for providing customers with a seamless experience, whether the customer is interacting online via email, web, short message service (SMS), chat, or live agent video support on a desktop or mobile device, by telephone, or in a brick-and-mortar store.
Historically, a series of requests for services has been stored in a database as an event sequence, a queue of available work. Common techniques for routing work from the queue to agents include two options: agents pull work from the queue and assign it to themselves, or a supervisor assigns work to agents. Given that companies have extensive information about their agents, including their capabilities, the amount of work that is waiting, and how much work the agents already have in their queues, one goal is to intelligently route work to the agents. Prioritization methods include 'most available agent', which selects the agent with the largest difference between their capacity and the amount of work already in their queue, and 'least active agent', which prioritizes routing of work to an agent based purely on how much work the agent already has. Note that two agents can have different capacities, based on various factors, such as number of work hours per week, amount of work experience, or level of training.
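The difference between the two prioritization methods can be made concrete with a short sketch. The following Java code is purely illustrative; the Agent fields and helper class are hypothetical, not part of the disclosed system:

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Hypothetical agent representation; field names are assumptions.
class Agent {
    final String id;
    final int capacity;     // units of concurrent work this agent can handle
    final int currentWork;  // units of work already assigned

    Agent(String id, int capacity, int currentWork) {
        this.id = id;
        this.capacity = capacity;
        this.currentWork = currentWork;
    }

    int headroom() { return capacity - currentWork; }
}

class AgentSelection {
    // 'Most available agent': largest difference between capacity and
    // the amount of work already in the agent's queue.
    static Optional<Agent> mostAvailable(List<Agent> pool) {
        return pool.stream()
                   .filter(a -> a.headroom() > 0)
                   .max(Comparator.comparingInt(Agent::headroom));
    }

    // 'Least active agent': smallest amount of work already assigned,
    // regardless of the agent's capacity.
    static Optional<Agent> leastActive(List<Agent> pool) {
        return pool.stream()
                   .filter(a -> a.headroom() > 0)
                   .min(Comparator.comparingInt(a -> a.currentWork));
    }
}
```

Under 'most available agent', an agent with capacity 10 and 4 items of work (headroom 6) beats an agent with capacity 5 and no work (headroom 5); under 'least active agent', the idle agent wins.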
Service channels for contact centers are evolving significantly for organizations. In this era of omni-channel, it is important for a business to determine the relative priority for handling a variety of service channels, and to efficiently route issues accordingly.
In a multi-tenant environment, agents are potentially connected to different app servers, generating a need for keeping work queues synchronized. In order to select a preferred agent to receive any given piece of work, the system needs to evaluate the availability of the agents in the organization (org), their queue membership, their current workload, and the priority of the work. Making these selections in a multi-tenant environment with a high load of incoming work is difficult due to the concurrent nature of the updates made to the variables used to perform agent selection, and due to the distributed system that handles these requests. For example, for a routing system that searches to identify the agent with the least amount of current work, if two work requests are pushed into a queue simultaneously, and routing decisions are made on two different app servers, then both pieces of work could potentially be pushed to the same agent, leaving that agent over-burdened.
Increasing bandwidth issues accompany routing requests across app servers, and synchronizing access to shared resources is a challenging problem whose known solutions are relatively slow and limit throughput. Existing technology works around this limitation by segmenting contact centers, but a new approach is needed to allow very large scale service organizations to utilize a very large pool of agents. Other approaches abandon distribution entirely, serving all traffic for an org's agents from a single app server; there, the capacity of that single app server becomes the limiting factor.
Speed and efficiency are two of the biggest drivers for customer service departments. The disclosed technology improves routing performance from 3-4 requests per second to 100 requests per second.
An opportunity arises to improve the experience for customers and for workers using disclosed omni-channel routing broker technology, including making it feasible for very large enterprise service operation centers to have very large pools of agents.
The included drawings are for illustrative purposes and serve only to provide examples of possible structures and process operations for one or more implementations of this disclosure. These drawings in no way limit any changes in form and detail that may be made by one skilled in the art without departing from the spirit and scope of this disclosure. A more complete understanding of the subject matter may be derived by referring to the detailed description and claims when considered in conjunction with the following figures, wherein like reference numbers refer to similar elements throughout the figures.
The following detailed description is made with reference to the figures. Sample implementations are described to illustrate the technology disclosed, not to limit its scope, which is defined by the claims. Those of ordinary skill in the art will recognize a variety of equivalent variations on the description that follows.
In one implementation, an omni-channel routing broker system includes selecting an app server among the cluster of app servers (pod) to perform routing for a given org. Event handling results are stored in a database, to fulfill a requirement of many large organizations for recording permanent and highly available event logs that enable event tracking, agent activity tracking, and performance analysis.
A cascading series of queues is used to avoid the reduction in throughput that would occur if the orgs were routed via a single thread in a single app server. The disclosed system separates routing decisions from the work required to commit routing decisions, delivering improved routing performance and service for customers.
Routing Broker Environment
An app server among the cluster of app servers 148 is elected to perform routing for a given org, and that app server makes the routing decisions for the org. A given app server can serve many orgs; a system could have a single app server for a hundred different orgs. Each org has one or more work queues for their organization's agent pool. Cluster/app support data store 116 gets updated when agents complete tasks (i.e., close work) for their organizations.
Omni-channel routing broker environment 100 makes use of multithreading to manage requests from more than one user at a time, and to manage multiple requests by the same user, while tracking the presence and status of agents for multiple orgs. Current presence and status for each agent is stored in master agents' presence and status data store 118, and presence and status update events are published to event queue 113.
Omni-channel routing broker environment 100 includes per org routers 1-N 122, which publish incoming service request events from the event queue 113 to at least one of the node-based routing queues 1-N 112. Additionally, routing broker environment 100 includes a master database of service requests 114 that provides a permanent record of events, enabling long-term event tracking and agent performance analysis.
In other implementations, environment 100 may not have the same elements as those listed above and/or may have other/different elements instead of, or in addition to, those listed above.
The disclosed omni-channel routing broker technology, described in detail below, evaluates presence and status for agents, and makes routing selections in a multi-tenant environment that handles a high volume of incoming work.
When an event comes from one of multiple threads on an app server, the event gets passed to a pool of listeners that processes the event, determines relevance, makes decisions, and adds a routing request to request log 232, as appropriate. Some events, such as events for orgs not of interest to the stream, do not cause the addition of a routing request because they require no routing decision. Events of interest include an agent action that changes availability for work, such as logging in; a change in an agent's capacity for work, such as closing work; or the addition of a new work request.
A service request event for an org can be stimulated by an agent requesting work, or by a service request being routed to push work to an agent. An example class for routing work from a pull request is sketched below. The code identifies which queue has the most eligible piece of work to route for an agent, based on priority and time in queue, and routes the pulled work to the agent.
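A minimal Java sketch of such a class follows. The WorkItem type, the lower-value-means-higher-priority convention, and the assignToAgent placeholder are assumptions made for illustration, not the platform's actual code:

```java
import java.time.Instant;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Hypothetical work representation: priority first, then age in queue.
class WorkItem {
    final String id;
    final int priority;        // lower value = higher priority (assumption)
    final Instant enqueuedAt;  // breaks priority ties by time in queue

    WorkItem(String id, int priority, Instant enqueuedAt) {
        this.id = id;
        this.priority = priority;
        this.enqueuedAt = enqueuedAt;
    }
}

class PullRequestRouter {
    // Order by priority, then by age within equal priority.
    private static final Comparator<WorkItem> ELIGIBILITY =
        Comparator.comparingInt((WorkItem w) -> w.priority)
                  .thenComparing(w -> w.enqueuedAt);

    // Scan the queues the agent is a member of, pick the single most
    // eligible item across them, and route it to the pulling agent.
    Optional<WorkItem> routePull(List<List<WorkItem>> agentQueues, String agentId) {
        Optional<WorkItem> best = agentQueues.stream()
            .flatMap(List::stream)
            .min(ELIGIBILITY);
        best.ifPresent(w -> assignToAgent(w, agentId));
        return best;
    }

    private void assignToAgent(WorkItem w, String agentId) {
        // Placeholder: the real system would reserve the agent's capacity
        // and publish a routing decision event here.
    }
}
```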
An implementation of handling a routing result from the org's router is illustrated below. If a problem is encountered during the routing, then the work gets restored to the queue, and the pending agent's capacity gets restored. In one case, if the work is unavailable due to a concurrent modification, the agent's capacity gets restored. In another case, if the agent concurrently modifies their status to one that should not receive this work, the agent's capacity gets restored. In both cases, the routing request gets added back to the queue to be retried later. Alternatively, if the routing conditions are successfully met, then the route success marker gets activated.
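A sketch of that handling, with hypothetical RouteOutcome codes and service interfaces standing in for the platform's actual types:

```java
// Hypothetical outcome codes for a routing attempt.
enum RouteOutcome { SUCCESS, WORK_CONCURRENTLY_MODIFIED, AGENT_STATUS_CHANGED }

interface WorkQueueService {
    void restoreWork(String workId);            // put work back in its queue
    void requeueRoutingRequest(String workId);  // retry the request later
}

interface AgentCapacityService {
    void restore(String agentId, int workSize); // undo reserved capacity
}

class RoutingResultHandler {
    private final WorkQueueService queues;
    private final AgentCapacityService capacity;

    RoutingResultHandler(WorkQueueService queues, AgentCapacityService capacity) {
        this.queues = queues;
        this.capacity = capacity;
    }

    // Returns true when the route succeeded and the success marker applies.
    boolean handle(RouteOutcome outcome, String agentId, String workId, int workSize) {
        switch (outcome) {
            case WORK_CONCURRENTLY_MODIFIED:
                // The work changed under us: restore the agent's capacity
                // and retry the routing request later.
                capacity.restore(agentId, workSize);
                queues.requeueRoutingRequest(workId);
                return false;
            case AGENT_STATUS_CHANGED:
                // The agent moved to a status that should not receive this
                // work: restore capacity, return the work to its queue,
                // and retry the routing request later.
                capacity.restore(agentId, workSize);
                queues.restoreWork(workId);
                queues.requeueRoutingRequest(workId);
                return false;
            default:
                // All routing conditions met: activate the success marker.
                return true;
        }
    }
}
```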
Routers may not be constantly running. If a routing request is the first one in the queue for the particular organization, a router spin-up request event is generated, which causes spin-up of a router for a particular org. Per org routing requests are handled in a non-blocking fashion using the in-memory state snapshot, in order to quickly return the thread for further processing.
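One way to realize that spin-up behavior, sketched here with assumed class and method names, is to create an org's queue and router atomically on first use:

```java
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;

class RouterRegistry {
    private final Map<String, BlockingQueue<String>> orgQueues = new ConcurrentHashMap<>();
    private final ExecutorService routerThreads = Executors.newCachedThreadPool();

    // Called for each incoming routing request. The first request for an
    // org atomically creates the org's queue and spins up its router;
    // enqueueing is non-blocking, so the caller's thread returns quickly.
    void submit(String orgId, String routingRequest) {
        BlockingQueue<String> queue = orgQueues.computeIfAbsent(orgId, id -> {
            BlockingQueue<String> q = new LinkedBlockingQueue<>();
            routerThreads.submit(() -> runRouter(id, q)); // router spin-up
            return q;
        });
        queue.offer(routingRequest);
    }

    private void runRouter(String orgId, BlockingQueue<String> queue) {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                String request = queue.take();
                // Make the routing decision for 'request' using the
                // in-memory state snapshot for this org.
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```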
Multi-node presence and status updating 224 captures changes in agents' states, such as the completion of a task, and provides the changes to the event queue 113.
In one implementation of the disclosed system, a single router per org runs at any given time. An advantage of this single-router-per-org approach is the ability to route events serially. Single node presence and status updating and request queuing 228 updates an eventually consistent, in-memory subset of the master agent presence and status database 218 and at least one in-memory node-based routing queue 236. Single thread per org routing decision making 238 includes receiving incoming service requests from the node-based routing queue 236, and making routing decisions on the incoming service requests using the in-memory subset of the master agent presence and status database 218. The eventually consistent, in-memory subset of the master agent presence and status database 218 gets updated to reflect the routing decisions, and the routing decisions get published to the event queue 113.
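The decision side of that pipeline can be sketched as a per-org loop. Because only one thread reads and writes a given org's in-memory snapshot, agent selection needs no locking; the interfaces below are hypothetical stand-ins for elements 236, 218, and 113:

```java
import java.util.Optional;
import java.util.concurrent.BlockingQueue;

interface InMemorySnapshot {                       // stands in for element 218
    Optional<String> selectAgentFor(String workId);
    void reserveCapacity(String agentId, String workId);
}

interface EventQueuePublisher {                    // stands in for element 113
    void publishRoutingDecision(String agentId, String workId);
}

class OrgRoutingLoop implements Runnable {
    private final BlockingQueue<String> nodeRoutingQueue; // element 236
    private final InMemorySnapshot snapshot;
    private final EventQueuePublisher events;

    OrgRoutingLoop(BlockingQueue<String> nodeRoutingQueue,
                   InMemorySnapshot snapshot,
                   EventQueuePublisher events) {
        this.nodeRoutingQueue = nodeRoutingQueue;
        this.snapshot = snapshot;
        this.events = events;
    }

    @Override
    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                String workId = nodeRoutingQueue.take();
                // Decide serially against the local snapshot; the decision
                // is tentative until it is committed against the master
                // data store downstream.
                snapshot.selectAgentFor(workId).ifPresent(agentId -> {
                    snapshot.reserveCapacity(agentId, workId);
                    events.publishRoutingDecision(agentId, workId);
                });
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```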
Implementing the routing decisions applicable to the agent pools across the multiple nodes includes receiving routing decisions from the event queue 113, and testing the routing decisions for consistency with the master agent presence and status data store 118. Confirming consistency includes looking at the in-memory node-based routing queue 236 and in-memory presence and status database 218, and determining whether to roll back the route or to commit the route, based on whether the master presence and status data store 118 is consistent with in-memory node-based routing queue 236. Consistency-qualified updates are made to the master agent presence and status data store 118 and updated status events are published to event queue 113.
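The commit side re-checks each decision against the master data store before applying it. A sketch, again with hypothetical interfaces:

```java
interface MasterPresenceStore {                    // stands in for element 118
    boolean isAvailableWithCapacity(String agentId, String workId);
    void assign(String agentId, String workId);
}

interface CommitEventPublisher {                   // publishes to element 113
    void publishStatusUpdate(String agentId, String workId);
    void publishRollback(String agentId, String workId);
}

class RoutingDecisionCommitter {
    private final MasterPresenceStore master;
    private final CommitEventPublisher events;

    RoutingDecisionCommitter(MasterPresenceStore master, CommitEventPublisher events) {
        this.master = master;
        this.events = events;
    }

    void apply(String agentId, String workId) {
        // Commit only if the master store still agrees the agent can take
        // the work; otherwise publish a rollback event so the node-based
        // snapshot is repaired and the work becomes routable again.
        if (master.isAvailableWithCapacity(agentId, workId)) {
            master.assign(agentId, workId);
            events.publishStatusUpdate(agentId, workId);
        } else {
            events.publishRollback(agentId, workId);
        }
    }
}
```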
For some implementations, validation rules provided by an organization affect which of the decisions made during single node presence and status updating and request queuing 228 get applied to routing work, via service requests, to agents. Customer code can include Apex triggers or validation rules that affect the ultimate routing decision. For example, some organizations implement rules for fulfillment of customer orders and for processing claims made relative to customers' orders: ‘manager’ level permissions may be required for an agent who approves service requests that include refunds for customers.
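As a toy illustration of such a rule, a check that refund-bearing requests route only to manager-level agents might look like the following; the representation is hypothetical, and real deployments would express this as customer-defined validation rules or Apex triggers:

```java
// Hypothetical validation rule: a routing decision for a service request
// that includes a refund may only be committed for a 'manager' level agent.
class RefundApprovalRule {
    boolean allows(boolean requestIncludesRefund, boolean agentIsManager) {
        return !requestIncludesRefund || agentIsManager;
    }
}
```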
Once the routing decision has been made, the results have been stored in the master presence and status data store 118, customer code has been executed successfully, and consistency has been confirmed, the route is considered committed. The service request routing the work to that agent is posted to the event stream, and the agent 'listening' to the event stream learns that they have work and receives notification.
In the case of lack of consistency between a particular routing decision and the master presence and status data store 118, a routing decision rollback event is published to the event queue 113, and the particular routing decision is not applied to the master presence and status data store 118. The node-based database—the in-memory presence and status database 218—gets updated to roll back the routing decision. That is, if unsuccessful, the state changes are rolled back and the work is made available for another routing attempt. For example, if an agent has gone offline during the routing of the request, then we learn that the agent is not available when we try to commit the route to the database, so the route will be rolled back as though it never happened and a new routing request will be generated.
App servers within a cluster keep a connection open to each other and ping periodically to be sure they are “up”. In one implementation, if an app server drops out of the pool, the remaining distributed processes coordinate with each other and elect a new app server to serve that org as router. That is, app server selection can be updated if cluster members change over time. A new leader can be elected if the app server that runs the routing decision maker goes offline.
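The disclosure does not specify the election algorithm. One known technique that fits the description, sketched here purely as an assumption, is rendezvous (highest-random-weight) hashing: every surviving app server computes the same winner for an org from the agreed membership list, so a replacement router is chosen deterministically when a server drops out:

```java
import java.util.List;

class RouterElection {
    // Every server evaluates this over the same live-membership list and
    // arrives at the same winner, without a further coordination round.
    static String electRouter(String orgId, List<String> liveAppServers) {
        String winner = null;
        long best = Long.MIN_VALUE;
        for (String server : liveAppServers) {
            long weight = mix((orgId + "|" + server).hashCode());
            if (weight > best) {
                best = weight;
                winner = server;
            }
        }
        return winner; // null only when the membership list is empty
    }

    // Stateless bit mixer so weights spread evenly (illustrative only).
    private static long mix(long z) {
        z = (z ^ (z >>> 33)) * 0xff51afd7ed558ccdL;
        z = (z ^ (z >>> 33)) * 0xc4ceb9fe1a85ec53L;
        return z ^ (z >>> 33);
    }
}
```

When an app server drops out of the pool, it simply disappears from the membership list, and re-running the election reassigns only the orgs that server was routing.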
A distributed commit log can handle hundreds of megabytes of reads and writes per second from thousands of clients. In one example implementation, an Apache Kafka cluster serves as the central data backbone for a large enterprise organization. The commit log can be elastically and transparently expanded without downtime. Data streams can be partitioned and spread over a cluster of machines to allow data streams larger than the capability of any single machine and to allow clusters of coordinated consumers.
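As an illustration of publishing routing decision events to such a commit log, a minimal Kafka producer might look like the following; the topic name, broker addresses, payload, and org-keyed partitioning are assumptions, not details from the disclosure:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class RoutingEventLog {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092,broker2:9092");
        props.put("key.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                  "org.apache.kafka.common.serialization.StringSerializer");

        try (Producer<String, String> producer = new KafkaProducer<>(props)) {
            // Keying by org sends all of an org's events to one partition,
            // preserving the per-org ordering the single router relies on.
            producer.send(new ProducerRecord<>("routing-decisions",
                    "org-42", "{\"workId\":\"w-1\",\"agentId\":\"a-7\"}"));
        }
    }
}
```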
An example user interface for a multi-tenant, multi-threaded omni-channel routing broker system is shown in the accompanying figures.
Omni-Channel Routing Broker Workflow
At action 510, track the presence and status of agents in a plurality of disjoint agent pools. At action 515, publish update events to at least one event queue.
At action 520, process selected update and request events, as described in actions 525 through 550.
At action 525, update the node-based database from the selected update events; and at action 530, publish the selected request events to at least one node-based routing queue.
At action 535, on a single thread per organization running on a processor having memory-bus access to the node-based database, make routing decisions on the request events using the node-based database and, at action 540, update the node-based database accordingly. At action 545, publish routing decision events to the event queue.
At action 552, implement the routing decision events: at action 555, test the routing decision events for consistency with a master agent presence and status database; and at action 560, make consistency-qualified updates to the master agent presence and status database. At action 570, publish the consistency-qualified update events.
Computer System
User interface input devices 638 may include a keyboard; pointing devices such as a mouse, trackball, touchpad, or graphics tablet; a scanner; a touch screen incorporated into the display; audio input devices such as voice recognition systems and microphones; and other types of input devices. In general, use of the term "input device" is intended to include all possible types of devices and ways to input information into computer system 610.
User interface output devices 676 may include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. The display subsystem may include a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image. In general, use of the term "output device" is intended to include all possible types of devices and ways to output information from computer system 610 to the user or to another machine or computer system.
Storage subsystem 624 stores programming and data constructs that provide the functionality of some or all of the methods described herein. This software is generally executed by processor 672 alone or in combination with other processors.
Memory 622 used in the storage subsystem can include a number of memories including a main random access memory (RAM) 634 for storage of instructions and data during program execution and a read only memory (ROM) 632 in which fixed instructions are stored. A file storage subsystem 636 can provide persistent storage for program and data files, and may include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges. The software used to implement the functionality of certain systems may be stored by file storage subsystem 636 in the storage subsystem 624, or in other machines accessible by the processor.
Bus subsystem 650 provides a mechanism for letting the various components and subsystems of computer system 610 communicate with each other as intended. Although bus subsystem 650 is shown schematically as a single bus, alternative implementations of the bus subsystem may use multiple busses.
Computer system 610 can be of varying types including a workstation, server, computing cluster, blade server, server farm, or any other data processing system or computing device. Due to the ever-changing nature of computers and networks, the description of computer system 610 is intended only as one example for purposes of illustrating the technology disclosed; many other configurations of computer system 610 are possible, having more or fewer components than those depicted.
Particular Implementations
In one implementation, a method of routing service requests in a large, distributed service center includes, across multiple nodes having disjoint memory spaces, tracking presence and status of agents in a plurality of disjoint agent pools and publishing update events to at least one event queue. The method also includes, across one or more processors that have access to a node-based database used to track agent presence and status in one or more disjoint agent pools, processing selected update and request events, including updating the node-based database from the selected update events and publishing the selected request events to at least one node-based routing queue. The method further includes, on a single thread per organization running on a processor having memory-bus access to the node-based database, making routing decisions on the request events using the node-based database, updating the node-based database accordingly, and publishing routing decision events to the event queue. The method additionally includes implementing the routing decision events, including testing the routing decision events for consistency with a master agent presence and status database, making consistency-qualified updates to the master agent presence and status database, and publishing update events accordingly.
In some implementations of the method of routing service requests in a large distributed service center, the master presence and status database stores agent presence and status data across agent pools serving the multiple nodes; and the node-based database is a subset of the master presence and status database that is eventually consistent with the master presence and status database as a result of processing events from the event queue. The method further includes processing the selected update and request events from the event queue; and on the single thread per organization, reading service request events from the node-based routing queue.
In one implementation, a method of routing service requests in a large, distributed service center applies to managing digital data for a plurality of tenants to software instances, each tenant of the plurality of tenants comprising a group of users who share a common access with a specific set of privileges to a software instance of at least one application, wherein each tenant includes one or more of the organizations.
In some implementations, the method is enhanced by further including the distributed service center handling service requests for a plurality of organizations, each organization having an agent pool disjoint from agent pools of other organizations, and having one or more work queues for the organization's disjoint agent pool.
The method further includes tracking the presence and status of agents in the master presence and status database using multiple threads per node on the multiple nodes; and updating the node-based database and publishing to the node-based routing queue using multiple threads.
The method additionally includes, in case of lack of consistency between a particular routing decision and the master presence and status database: publishing a routing decision rollback event to the event queue and not applying the particular routing decision to the master presence and status database; and updating the node-based database to roll back the routing decision.
In some implementations of the method of routing service requests in a large distributed service center, the agent pool serving the organization includes agents working on a plurality of app servers, the method further including: operating a single thread for routing service requests to the agent pool serving the organization across the plurality of app servers used by the agent pool.
In some implementations, the method is enhanced by further including an agent pool serving the organization that includes agents working on a plurality of app servers, geographically dispersed across pods operating in different data centers, the method further including: operating a single thread for routing service requests to the agent pool serving the organization across the plurality of app servers, geographically dispersed across pods operating in different data centers, used by the agent pool.
Other implementations may include a computer implemented system to perform any of the methods described above, the system including a processor, memory coupled to the processor, and computer instructions loaded into the memory.
Yet another implementation may include a tangible computer readable storage medium including computer program instructions that cause a computer to implement any of the methods described above. The tangible computer readable storage medium does not include transitory signals.
While the technology disclosed is described by reference to the preferred embodiments and examples detailed above, it is to be understood that these examples are intended in an illustrative rather than in a limiting sense. It is contemplated that modifications and combinations will readily occur to those skilled in the art, which modifications and combinations will be within the spirit of the innovation and the scope of the following claims.