Contact center performance prediction

Information

  • Patent Grant
  • Patent Number
    8,811,597
  • Date Filed
    Thursday, September 28, 2006
  • Date Issued
    Tuesday, August 19, 2014
Abstract
A contact center is provided that includes a plurality of agents for servicing incoming contacts and a performance analysis module that compares a proposed contact center configuration and/or change in a secondary contact center performance parameter against a set of contact center templates, the contact center templates defining a historical contact center configuration as of respective points in time and, based on the results of this operation, predicts an impact on a primary contact center performance parameter if the proposed contact center configuration and/or change in secondary performance parameter were to be implemented.
Description
FIELD

The present invention is directed generally to contact center administration and specifically to monitoring and correcting contact center performance.


BACKGROUND

Contact centers are employed by many enterprises to service customer contacts. A typical contact center includes a switch and/or server to receive and route incoming packet-switched and/or circuit-switched contacts and one or more resources, such as human agents and automated resources (e.g., Interactive Voice Response (IVR) units), to service the incoming contacts. Contact centers distribute contacts, whether inbound or outbound, for servicing to any suitable resource according to predefined criteria. In many existing systems, the criteria for servicing the contact from the moment that the contact center becomes aware of the contact until the contact is connected to an agent are customer-specifiable (i.e., programmable by the operator of the contact center), via a capability called vectoring. Normally in present-day ACDs when the ACD system's controller detects that an agent has become available to handle a contact, the controller identifies all predefined contact-handling queues for the agent (usually in some order of priority) and delivers to the agent the highest-priority, oldest contact that matches the agent's highest-priority queue. Generally, the only condition that results in a contact not being delivered to an available agent is that there are no contacts waiting to be handled.


The primary objective of contact center management is ultimately to maximize contact center performance and profitability. An ongoing challenge in contact center administration is monitoring and optimizing contact center efficiency, which is generally measured in two ways.


Service level is one measurement of contact center efficiency. Service level is typically determined by dividing the number of contacts accepted within the specified period by the number accepted plus the number that were not accepted, but completed in some other way (e.g., abandoned, given busy, canceled, flowed out). Of course, service level definitions may vary from one enterprise to another.
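The service level definition above can be expressed as a short computation. This is an illustrative sketch; the function and parameter names are hypothetical, and real enterprises may count the "not accepted" categories differently.

```python
def service_level(accepted: int, abandoned: int, busy: int,
                  canceled: int, flowed_out: int) -> float:
    """Service level per the definition above: contacts accepted within the
    period divided by (accepted + contacts completed in some other way)."""
    not_accepted = abandoned + busy + canceled + flowed_out
    total = accepted + not_accepted
    return accepted / total if total else 0.0

# Example: 85 accepted, 10 abandoned, 3 given busy, 2 flowed out
print(service_level(85, abandoned=10, busy=3, canceled=0, flowed_out=2))  # 0.85
```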


Match rate is another indicator used in measuring contact center efficiency. Match rate is usually determined by dividing the number of contacts accepted by a primary skill level agent within a period of time by the number of contacts accepted by any agent for a queue over the same period. An agent with a primary skill level is one that typically can handle contacts of a certain nature most effectively and/or efficiently. Other contact center agents may not be as proficient as the primary skill level agent, and those agents are identified either as secondary skill level agents or backup skill level agents. As can be appreciated, contacts received by a primary skill level agent are typically handled more quickly, accurately, or effectively (e.g., higher revenue attained) than contacts received by a secondary or even backup skill level agent. Thus, it is an objective of most contact centers to optimize match rate along with service level.
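The match rate definition above reduces to a simple ratio. The names below are illustrative assumptions, not terms from the patent.

```python
def match_rate(accepted_by_primary: int, accepted_by_any: int) -> float:
    """Match rate per the definition above: contacts accepted by
    primary-skill-level agents divided by contacts accepted by any agent
    for the queue over the same period."""
    return accepted_by_primary / accepted_by_any if accepted_by_any else 0.0

# Example: 60 of 80 contacts in the period reached a primary-skill agent
print(match_rate(60, 80))  # 0.75
```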


In this pursuit of contact center optimization, a contact center administrator will often make administrative changes trying to improve the level of service and/or match rate. The administrative changes may include changing staffing levels, changing agent queue assignments, or changing contact routing vectors. Usually the contact center administrator makes these changes with a goal of optimizing performance locally (i.e., within a certain group or within a certain business unit). Unfortunately, it is very difficult for the contact center administrator to predict the effects of such a change globally. For example, if two additional agents were added to a particular queue by changing their skill levels, they may no longer be able to service two other queues. Thus, the performance of one queue group will be improved at the expense of two other queue groups. It would be convenient to reverse such a change that has a negative impact on the overall contact center performance.


A problem is that identifying which system change affected performance is not an easy task. Additionally, there are currently few or no provisions for reversing such a change, even if it were identified. Rather, it is up to the contact center administrator to identify what change had a negative impact on contact center performance and reverse the change. However, in a complex contact center it may be nearly impossible for the administrator to determine the change that resulted in a decrease of contact center performance. If the administrator is unable to identify the degrading change but attempts to remedy the situation by changing some other contact center parameter, the problem may be compounded.


SUMMARY

These and other needs are addressed by the various embodiments and configurations of the present invention. The present invention is directed generally to the identification, analysis, and/or tracking of administrative or configuration changes in a contact center and predicting the effect of configuration changes on contact center performance.


In one embodiment, a method is provided for analyzing a contact center that includes the steps:


(a) receiving a proposed contact center configuration and/or change in a secondary contact center performance parameter;


(b) comparing the proposed contact center configuration and/or change in secondary performance parameter against a set of contact center templates, each of the contact center templates defining a historical contact center configuration as of a respective point in time; and


(c) based on the results of step (b), predicting an impact on a primary contact center performance parameter if the proposed contact center configuration and/or change in secondary performance parameter were to be implemented.
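Steps (a)-(c) can be sketched as a small function. This is a minimal illustration under stated assumptions: each template is assumed to carry a `config` dict of numeric settings and an observed `primary_delta` (the change in the primary parameter seen under that configuration), and similarity is approximated by a simple L1 distance; all of these names and choices are hypothetical, not the patent's method.

```python
def predict_impact(proposed_config, templates, top_n=3):
    """Compare a proposed configuration against historical templates (step b)
    and predict the primary-parameter impact (step c) as the average delta
    observed in the most similar templates."""
    def distance(cfg_a, cfg_b):
        # L1 distance over the configuration keys the two states share.
        keys = cfg_a.keys() & cfg_b.keys()
        return sum(abs(cfg_a[k] - cfg_b[k]) for k in keys)

    ranked = sorted(templates, key=lambda t: distance(proposed_config, t["config"]))
    nearest = ranked[:top_n]
    return sum(t["primary_delta"] for t in nearest) / len(nearest)

templates = [
    {"config": {"agents": 10, "ivr_ports": 4}, "primary_delta": +0.05},
    {"config": {"agents": 12, "ivr_ports": 4}, "primary_delta": +0.08},
    {"config": {"agents": 6,  "ivr_ports": 2}, "primary_delta": -0.10},
]
print(predict_impact({"agents": 11, "ivr_ports": 4}, templates, top_n=2))
```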


Each contact center template can include a number of descriptors of a corresponding contact center state. Typically, the templates will include both primary and secondary contact center performance parameters and a respective time stamp indicating when the corresponding contact center state was last in existence. Each template can further include a level of change in a secondary performance parameter and a resulting (causally connected) change in the primary performance parameter.
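One possible shape for such a template, following the description above, is sketched below. The field names are illustrative assumptions rather than terms defined by the patent.

```python
from dataclasses import dataclass, field

@dataclass
class ContactCenterTemplate:
    timestamp: str              # when this contact center state was last in existence
    primary_params: dict        # e.g. {"service_level": 0.85, "match_rate": 0.72}
    secondary_params: dict      # e.g. {"agents_logged_in": 40, "queue_assignments": ...}
    secondary_change: dict = field(default_factory=dict)  # level of change applied
    primary_delta: float = 0.0  # causally connected change in the primary parameter

t = ContactCenterTemplate(
    timestamp="2006-09-28T09:00:00",
    primary_params={"service_level": 0.85, "match_rate": 0.72},
    secondary_params={"agents_logged_in": 40},
    secondary_change={"agents_logged_in": +2},
    primary_delta=+0.03,
)
print(t.primary_delta)
```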


The primary performance parameter is a measure of contact center performance. Exemplary primary performance parameters include service level, match rate, percent abandon, and average speed of answer.


The secondary performance parameter is a controllable or configurable parameter that positively or negatively impacts contact center operation and/or performance (e.g., a primary performance parameter). Examples of secondary performance parameters include staffing, number of agents logged out, number of agents logged in, number of agents working on other work, workflow levels, routing vector settings, automated response unit settings, queue assignments, percent network routing (e.g., the ability to change the percentage of calls of a particular type routed to a given contact center), and percent adherence.


In one configuration, the set of contact center templates is selected from among a number of contact center templates. Each of the template members of the set has at least a selected degree of similarity to the proposed contact center configuration.


In one configuration, the predicted impact includes a level of confidence that each of the template members of the set will characterize the impact on the primary contact center performance parameter if the proposed contact center configuration were to be implemented.
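The two configurations above can be combined in a small sketch: keep only templates that meet a similarity threshold, then attach a confidence weight to each survivor. The similarity measure (fraction of matching configuration settings) and normalized-weight confidence are assumptions for illustration, not the patent's definitions.

```python
def select_templates(proposed, templates, threshold=0.8):
    """Keep templates whose similarity to the proposed configuration meets
    the threshold, and attach a normalized confidence to each survivor."""
    def similarity(cfg_a, cfg_b):
        # Fraction of configuration settings with identical values.
        keys = cfg_a.keys() | cfg_b.keys()
        matches = sum(1 for k in keys if cfg_a.get(k) == cfg_b.get(k))
        return matches / len(keys) if keys else 0.0

    scored = [(t, similarity(proposed, t["config"])) for t in templates]
    kept = [(t, s) for t, s in scored if s >= threshold]
    total = sum(s for _, s in kept)
    return [(t, s / total) for t, s in kept] if total else []

proposed = {"agents": 10, "vector": "v1"}
history = [
    {"config": {"agents": 10, "vector": "v1"}, "primary_delta": +0.02},
    {"config": {"agents": 10, "vector": "v2"}, "primary_delta": -0.04},
]
print(select_templates(proposed, history))
```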


This embodiment of the present invention can enable contact center administrators to “test” various possible contact center configurations globally and evaluate the likely result thereof before they are implemented. This ability to diagnose proposed changes can prevent substantial negative impacts on contact center performance (and the resulting adverse impact on agent and customer satisfaction) normally resulting from a trial-and-error approach. Trial-and-error approaches can not only fail to result in a substantially optimal contact center configuration but also compound existing contact center configuration issues by introducing additional configuration problems. This embodiment can avoid the inherent problems of the trial-and-error approach and substantially optimize contact center efficiency quickly and without harm to live operations.


In another embodiment, a method is provided for managing a contact center that includes the steps of:


(a) detecting a predetermined level of change in a primary contact center performance parameter;


(b) in response, identifying a change in a secondary contact center parameter that occurred no more than a selected time period before the detected change, where the primary and secondary parameters are different; and


(c) performing at least one of the following substeps:

    • (c1) notifying a contact center administrator of the identified change in the secondary contact center parameter as a possible cause of the detected change; and
    • (c2) automatically reversing the identified change in the secondary contact center parameter.
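Steps (a)-(c) above can be sketched as follows. The data shapes (a history of primary-parameter samples and a log of secondary-parameter changes), the threshold, and the window are all hypothetical choices for illustration.

```python
def respond_to_change(primary_history, change_log, threshold=0.05,
                      window=3600, auto_reverse=False):
    """primary_history: [(timestamp, value), ...] for the primary parameter.
    change_log: [(timestamp, param, old_value, new_value), ...].
    Returns the suspect secondary change and the action taken, or None."""
    (t0, before), (t1, after) = primary_history[-2], primary_history[-1]
    if abs(after - before) < threshold:          # (a) no predetermined level of change
        return None
    for ts, param, old, new in reversed(change_log):
        if t1 - window <= ts <= t1:              # (b) change within the selected period
            if auto_reverse:
                return ("reverse", param, old)   # (c2) restore the prior value
            return ("notify", param, new)        # (c1) flag it to the administrator
    return None

log = [(100, "queue_assignment", "Q1", "Q2")]
history = [(0, 0.90), (200, 0.80)]              # service level dropped by 0.10
print(respond_to_change(history, log, window=150))
```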


Often contact center parameter changes are made in an attempt to increase performance locally (i.e., within a particular contact center queue). However, due to the complex nature of contact centers, an increase of performance locally may actually result in a degradation of performance globally (i.e., across multiple queues).


The types of performance changes that are generally detected include a decrease in service level and a decrease in match rate. Upon the detection of such a decrease in the service level and/or match rate, or any other change in a primary performance parameter (i.e., a measurement parameter), a root cause for the decrease is determined. The root cause for the decrease in performance can be determined by examining secondary contact center parameters (i.e., controlling parameters) and determining whether one or more of those parameters, such as contact center traffic, staffing, or staff utilization, has changed. Additionally, staff behavior can be analyzed as a possible root cause. Moreover, changes in other secondary contact center parameters, including system administration parameters like queue assignments and/or vector assignments, can be analyzed to determine whether they were a contributing factor to the decrease in performance. If the determined root cause includes a change to a queue assignment and/or vector assignment, then the reasons for the queue assignment and/or vector assignment change are determined. If the reason for the queue assignment and/or vector assignment change was to improve service, then the parameters corresponding to the queue assignment and/or vector assignment change can, in one embodiment, be automatically reversed using type 2 data.
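The root-cause pass described above, including the check on the reason behind a queue/vector change, can be sketched as a comparison of two secondary-parameter snapshots. The parameter names, reason codes, and action labels are hypothetical.

```python
def diagnose(before: dict, after: dict, change_reasons: dict) -> dict:
    """Compare secondary-parameter snapshots taken before and after a
    performance drop, and classify each changed parameter as safe to
    auto-reverse or as needing further investigation."""
    actions = {}
    for param in before.keys() & after.keys():
        if before[param] == after[param]:
            continue                             # this parameter did not change
        reason = change_reasons.get(param)
        if param in ("queue_assignments", "vector_assignments") and reason == "improve_service":
            actions[param] = "auto_reverse"      # service-tuning change: safe to undo
        else:
            actions[param] = "investigate"       # e.g. traffic shift or compliance change
    return actions

before = {"staffing": 40, "queue_assignments": "plan_A", "traffic": 120}
after  = {"staffing": 40, "queue_assignments": "plan_B", "traffic": 180}
print(diagnose(before, after, {"queue_assignments": "improve_service"}))
```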


By tracking the cause-and-effect relationships between contact center secondary and primary performance parameters and including this information in contact center templates, the prior embodiment can be substantially enhanced. In other words, the tracked cause-and-effect relationship can be used not only retroactively but also prospectively to restrict contact center configurations to those that are most likely to be optimal or near optimal given the uncontrollable factors contributing to changes in contact center performance (e.g., incoming and/or outgoing contact volume or network traffic levels, time-of-day constraints, weather, and the like).


As can be appreciated, incoming and outgoing contact routing allocation among contact queues is understood to include queue assignment, vector configuration parameters, and other types of work item routing changes. Additionally, an agent queue assignment as used herein is understood to include agent skill sets, agent skill splits, and any other type of association between agents, agent queues, and skills.


In another embodiment, a system administrator may be notified of the root cause for the decline in system performance and may further be instructed to reverse the change. If the reason for the change was not to improve service levels then the queue assignment and/or vector assignment change should not be reversed without further investigation of the reasons for the change. For example, if queue assignments and/or vector assignments were changed to comply with certain laws or business policies, then the change should not be reversed. Rather, other possible remedies for the decline in performance should be explored like changing staffing, adjusting the work flow management (WFM) schedule, and other contact center parameters (e.g., failure of automated resources creating increased traffic to the contact center).


In one embodiment, the notification to the system administrator may be in the form of a performance report that, at its highest level, outlines and describes the decrease in performance. The report may further include one set of reports for a measured decrease in service level and another set of reports for a measured decrease in match rate. The system administrator can drill down through each of the sets of reports to determine possible root causes for the decrease in system performance. Additionally, the reports may suggest causes that have a higher probability of being the root cause based on certain aspects of the decline in performance. The reports may further provide suggestions for correcting the problem. The suggestions may include reversing certain parameter changes that have been implemented in the contact center and implementing other parameter changes that could improve performance. These suggestions, in one embodiment, are based upon historical data related to changes that resulted in an increase in contact center performance.


As can be appreciated by one of skill in the art, a contact is understood herein to include voice calls, emails, chat, video calls, fax, and combinations thereof. Accordingly, a contact center may be equipped to handle any one or a number of the above-noted contact types.


These and other advantages will be apparent from the disclosure of the invention(s) contained herein. The above-described embodiments and configurations are neither complete nor exhaustive. As will be appreciated, other embodiments of the invention are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.


As used herein, “at least one”, “one or more”, and “and/or” are open-ended expressions that are both conjunctive and disjunctive in operation. For example, each of the expressions “at least one of A, B and C”, “at least one of A, B, or C”, “one or more of A, B, and C”, “one or more of A, B, or C” and “A, B, and/or C” means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram depicting a contact center according to an embodiment of the present invention;



FIG. 2 is a block diagram of a server according to an embodiment of the present invention;



FIG. 3 is a queue service level and match rate trend report graph in accordance with embodiments of the present invention;



FIG. 4 is a queue service level and match rate trend report table in accordance with embodiments of the present invention;



FIG. 5 is a queue service level cause trend report graph in accordance with embodiments of the present invention;



FIG. 6 is a queue service level cause trend report table in accordance with embodiments of the present invention;



FIG. 7 is an agent utilization summary report graph and table in accordance with embodiments of the present invention;



FIG. 8 is an agent queue utilization summary report graph and table in accordance with embodiments of the present invention;



FIG. 9 is a queue agent role summary report table in accordance with embodiments of the present invention;



FIG. 10 is a flowchart depicting a method of responding to changes in contact center performance in accordance with embodiments of the present invention;



FIG. 11 is a flowchart depicting a method of determining a cause of a contact center performance change in accordance with embodiments of the present invention;



FIG. 12 is a flowchart depicting a method of maintaining a database of positive and negative contact center parameter changes in accordance with embodiments of the present invention;



FIG. 13 is a block diagram of a contact center analysis system according to an embodiment of the present invention; and



FIG. 14 is a flowchart depicting a method of operation of a performance analysis module according to the embodiment.





DETAILED DESCRIPTION

The invention will be illustrated below in conjunction with an exemplary communication system. Although well suited for use with, e.g., a system having an ACD or other similar contact processing switch, the invention is not limited to any particular type of communication system switch or configuration of system elements. Those skilled in the art will recognize that the disclosed techniques may be used in any communication application in which it is desirable to provide improved contact processing.



FIG. 1 shows an illustrative embodiment of the present invention. A contact center 100 comprises a central server 110, a set of data stores or databases 114 containing contact or customer related information and other information that can enhance the value and efficiency of the contact processing, and a plurality of servers, namely a voice mail server 118, an Interactive Voice Response unit or IVR 122, and other servers 126, a switch 130, a plurality of working agents operating packet-switched (first) communication devices 134-1 to N (such as computer work stations or personal computers), and/or circuit-switched (second) communication devices 138-1 to M, all interconnected by a local area network LAN (or wide area network, WAN) 142. The servers can be connected via optional communication lines 146 to the switch 130. As will be appreciated, the other servers 126 can also include a scanner (which is normally not connected to the switch 130 or Web server), VoIP software, video call software, voice messaging software, an IP voice server, a fax server, a web server, an email server, and the like. The switch 130 is connected via a plurality of trunks 150 to the Public Switched Telephone Network or PSTN 154 and via link(s) 152 to the second communication devices 138-1 to M. A gateway 158 is positioned between the server 110 and the packet-switched network 162 to process communications passing between the server 110 and the network 162.


Although the preferred embodiment is discussed with reference to a client-server architecture, it is to be understood that the principles of the present invention apply to other network architectures. For example, the invention applies to peer-to-peer networks, such as those envisioned by the Session Initiation Protocol. The client-server model or paradigm describes network services and the programs used by end users to access those services. The client side provides a user with an interface for requesting services from the network, and the server side is responsible for accepting user requests for services and providing the services transparently to the user. By contrast, in the peer-to-peer model or paradigm, each networked host runs both the client and server parts of an application program. Additionally, the invention does not require the presence of packet- or circuit-switched networks.


The term “switch” or “server” as used herein should be understood to include a PBX, an ACD, an enterprise switch, or other type of communications system switch or server, as well as other types of processor-based communication control devices such as media servers, computers, adjuncts, etc.


Referring to FIG. 2, one possible configuration of the server 110 is depicted. The server 110 is in communication with a plurality of customer communication lines 200a-y (which can be one or more trunks, phone lines, etc.) and agent communication line 204 (which can be a voice-and-data transmission line such as LAN 142 and/or a circuit switched voice line 140). The server 110 can include a Call Management System™ or CMS 228 that gathers call records and contact-center statistics for use in generating contact-center reports. CMS 228 and any other reporting system, such as a Basic Call Management System™, Operational Analyst™ or Customer Call Routing or CCR™, will hereinafter be referred to jointly as CMS 228.


The switch 130 and/or server 110 can be any architecture for directing contacts to one or more communication devices. In some embodiments, the switch 130 may perform load-balancing functions by allocating incoming or outgoing contacts among a plurality of logically and/or geographically distinct contact centers. Illustratively, the switch and/or server can be a modified form of the subscriber-premises equipment disclosed in U.S. Pat. Nos. 6,192,122; 6,173,053; 6,163,607; 5,982,873; 5,905,793; 5,828,747; and 5,206,903, all of which are incorporated herein by this reference; Avaya Inc.'s Definity™ Private-Branch Exchange (PBX)-based ACD system; MultiVantage™ PBX, CRM Central 2000 Server™, Communication Manager™, S8300™ media server, SIP Enabled Services™, and/or Avaya Interaction Center™. Typically, the switch/server is a stored-program-controlled system that conventionally includes interfaces to external communication links, a communications switching fabric, service circuits (e.g., tone generators, announcement circuits, etc.), memory for storing control programs and data, and a processor (i.e., a computer) for executing the stored control programs to control the interfaces and the fabric and to provide automatic contact-distribution functionality. The switch and/or server typically include a network interface card (not shown) to provide services to the serviced communication devices. Other types of known switches and servers are well known in the art and therefore not described in detail herein.


As can be seen in FIG. 2, included among the data stored in the server 110 is a set of contact queues 208a-n and a separate set of agent queues 212a-n. Each contact queue 208a-n corresponds to a different set of agent queues, as does each agent queue 212a-n. Conventionally, contacts are prioritized and either are enqueued in individual ones of the contact queues 208a-n in their order of priority or are enqueued in different ones of a plurality of contact queues that correspond to a different priority. Likewise, each agent's queues are prioritized according to his or her level of expertise in that queue, and either agents are enqueued in individual ones of agent queues 212a-n in their order of expertise level or are enqueued in different ones of a plurality of agent queues 212a-n that correspond to a queue and each one of which corresponds to a different expertise level. Included among the control programs in the server 110 is a contact vector 216. Contacts incoming to the contact center are assigned by contact vector 216 to different contact queues 208a-n based upon a number of predetermined criteria, including customer identity, customer needs, contact center needs, current contact center queue lengths, customer value, and the agent skill that is required for the proper handling of the contact. Agents who are available for handling contacts are assigned to agent queues 212a-n based upon the skills that they possess. An agent may have multiple skills, and hence may be assigned to multiple agent queues 212a-n simultaneously. Furthermore, an agent may have different levels of skill expertise (e.g., skill levels 1-N in one configuration or merely primary skill levels and secondary skill levels in another configuration), and hence may be assigned to different agent queues 212a-n at different expertise levels. Call vectoring is described in DEFINITY Communications System Generic 3 Call Vectoring/Expert Agent Selection (EAS) Guide, AT&T publication no. 
555-230-520 (Issue 3, November 1993). Skills-based ACD is described in further detail in U.S. Pat. Nos. 6,173,053 and 5,206,903.


Referring to FIG. 1, the gateway 158 can be Avaya Inc.'s G700 Media Gateway™ and may be implemented as hardware, such as via an adjunct processor (as shown), or as a chip in the server.


The first communication devices 134-1, . . . 134-N are packet-switched and can include, for example, IP hardphones such as Avaya Inc.'s 4600 Series IP Phones™, IP softphones such as Avaya Inc.'s IP Softphone™, Personal Digital Assistants or PDAs, Personal Computers or PCs, laptops, packet-based H.320 video phones and conferencing units, packet-based voice messaging and response units, packet-based traditional computer telephony adjuncts, peer-to-peer based communication devices, and any other communication device.


The second communication devices 138-1, . . . 138-M are circuit-switched. Each of the communication devices 138-1, . . . 138-M corresponds to one of a set of internal extensions Ext1, . . . ExtM, respectively. These extensions are referred to herein as “internal” in that they are extensions within the premises that are directly serviced by the switch. More particularly, these extensions correspond to conventional communication device endpoints serviced by the switch/server, and the switch/server can direct incoming calls to and receive outgoing calls from these extensions in a conventional manner. The second communication devices can include, for example, wired and wireless telephones, PDAs, H.320 videophones and conferencing units, voice messaging and response units, traditional computer telephony adjuncts, and any other communication device.


It should be noted that the invention does not require any particular type of information transport medium between switch or server and first and second communication devices, i.e., the invention may be implemented with any desired type of transport medium as well as combinations of different types of transport channels.


The packet-switched network 162 can be any data and/or distributed processing network, such as the Internet. The network 162 typically includes proxies (not shown), registrars (not shown), and routers (not shown) for managing packet flows.


The packet-switched network 162 is in communication with an external first communication device 174 via a gateway 178, and the circuit-switched network 154 with an external second communication device 180. These communication devices are referred to as “external” in that they are not directly supported as communication device endpoints by the switch or server. The communication devices 174 and 180 are an example of devices more generally referred to herein as “external endpoints.”


In a preferred configuration, the server 110, network 162, and first communication devices 134 are Session Initiation Protocol or SIP compatible and can include interfaces for various other protocols such as the Lightweight Directory Access Protocol or LDAP, H.248, H.323, Simple Mail Transfer Protocol or SMTP, IMAP4, ISDN, E1/T1, and analog line or trunk.


It should be emphasized that the configuration of the switch, server, user communication devices, and other elements as shown in FIG. 1 is for purposes of illustration only and should not be construed as limiting the invention to any particular arrangement of elements.


As will be appreciated, the central server 110 is notified via LAN 142 of an incoming contact by the communications component (e.g., switch 130, fax server, email server, web server, and/or other server) receiving the incoming contact. The incoming contact is held by the receiving communications component until the server 110 forwards instructions to the component to forward or route the contact to a specific contact center resource, such as the IVR unit 122, the voice mail server 118, and/or first or second communication device 134, 138 associated with a selected agent. The server 110 distributes and connects these contacts to communication devices of available agents based on the predetermined criteria noted above. When the central server 110 forwards a voice contact to an agent, the central server 110 also forwards customer-related information from databases 114 to the agent's computer work station for previewing and/or viewing (such as by a pop-up display) to permit the agent to better serve the customer. The agents process the contacts sent to them by the central server 110.


According to at least one embodiment of the present invention, an event manager 232 and performance analysis module 234 are provided. The event manager 232 and performance analysis module 234 are stored either in the main memory or in a peripheral memory (e.g., disk, CD ROM, etc.) or some other computer-readable medium of the center 100. The event manager 232 identifies and analyzes specific occurrences of changes in contact center performance. Specifically, the event manager 232 monitors service level of the agents in the contact center and match rates of the agent and contact selector 220. The performance analysis module 234, using contact center templates or images generated by the event manager 232, predicts, from user-selected contact center configurations, the likely qualitative or quantitative impact of the new configuration on contact center performance as embodied by a primary contact center performance parameter, which generally characterizes, measures, and/or quantifies contact center performance.


There are many potential types of primary contact center performance parameters. Exemplary parameters are service level, match rate, percent abandon, and average speed of answer. Other primary contact center performance parameters are possible including degree of realization of or compliance with contact center goals, objectives, and policies.


Service level is generally user-specified and provides an indication of how efficiently incoming contacts are being answered by agents in the agent queues 212. In one embodiment, service level is defined as the total number of contacts accepted (i.e., answered) by agents in the agent queue 212 divided by the total number of contacts entering the contact queue 208 over the same period of time.


Match rate usually identifies how well contacts are being connected with agents having the necessary skill set to fulfill the requests represented by the contact. As noted above, some agents in a particular agent queue 212 may have a primary skill level for that queue whereas other agents may have a secondary skill level for the same queue. Additionally, one agent may have a primary skill level in the first queue 212a and a secondary skill level in the second queue 212b. Based on availabilities of agents in the agent queues 212 and the number of contacts waiting in the contact queue 208, the agent and contact selector 220 determines which agent should be assigned to which contact. It is a goal of the agent and contact selector 220 to assign contacts in the first customer queue 208a with primary skill level agents in the first agent queue 212a. Likewise, the agent and contact selector 220 would like to assign contacts in the second customer queue 208b with primary skill level agents in the second agent queue 212b. If the agent and contact selector 220 could realize such assignments, a substantially perfect match rate would be achieved. However, it is often the case that the number of contacts waiting in the contact queues 208 exceeds the number of agents waiting in the agent queues 212. For this reason, and due to other assignment parameters like required answering time, the agent and contact selector 220 is sometimes forced to assign contacts to secondary or backup skill level agents. Since match rate is generally determined as the number of contacts connected to primary skill level agents divided by the total number of contacts assigned to agents over a predetermined amount of time, if the agent and contact selector 220 is forced to assign contacts to less than primary skill level agents, the match rate performance indicator decreases.
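The selector behavior described above, preferring a primary-skill agent for the queue and falling back to a secondary or backup agent when none is free, can be sketched as follows. The data shapes and skill-level encoding (1 = primary, higher numbers = lower proficiency) are illustrative assumptions.

```python
def select_agent(queue: str, available_agents: list) -> tuple:
    """available_agents: [(name, {queue: skill_level}), ...] where level 1 is
    primary. Returns (agent, is_primary_match), or (None, False) if no
    available agent has any skill for the queue."""
    best = None
    for name, skills in available_agents:
        level = skills.get(queue)
        if level is None:
            continue                     # agent has no skill for this queue
        if level == 1:
            return (name, True)          # primary-skill match found
        if best is None or level < best[1]:
            best = (name, level)         # remember the best fallback so far
    return (best[0], False) if best else (None, False)

agents = [("alice", {"sales": 2}), ("bob", {"sales": 1, "support": 2})]
print(select_agent("sales", agents))     # bob is a primary-skill match
```

A run where only a secondary-skill agent is available (e.g. the "support" queue above) returns a non-primary match, which is exactly the case that drags the match rate indicator down.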


The results of service level and match rate are actively monitored by the event manager 232 and, in the event that a change in performance occurs, the event manager 232 logs the change and attempts to determine why the change occurred. If the change was an increase in performance, then the event manager 232 determines what type of change was made and further determines what the system parameters (e.g., queue and/or vector assignment) of the agent and contact selector 220 were prior to the change. Then the event manager 232 can store data related to the change in a positive system change log file.


If the change was a decrease in performance, then the event manager 232 communicates with the CMS 228 and agent and contact selector 220 to determine what the root cause of the performance change was and possibly reverse any change that was associated with the root cause. The event manager 232 can control the queue and/or vector assignment parameters directly through the agent and contact selector 220, which allows for automatic adjustment and maintenance of the contact center if negative changes are detected. As in the case of an increase in contact center performance, the event manager 232 stores the data related to the change in a negative system change log file.


To generate the contact center image, the event manager 232 commonly uses a number of secondary contact center performance parameters along with other selected contact center configuration descriptors. Secondary contact center parameters generally are configurable parameters that positively or negatively impact contact center operation and/or performance. Examples of secondary contact center parameters include staff, primary staffing, occupancy, primary occupancy, queue state, primary agent staffing, reserve agent staffing, number of agents logged out, number of agents logged in, number of agents working on other work, workflow levels, routing vector settings, automated response unit (e.g., IVR) settings, queue assignments, traffic, routing, time-of-day, day of week, contact type serviced, percent network routing, percent adherence, and other administration settings.


After identifying the cause of the performance change, or of the change in a primary contact center performance parameter, the event manager 232 generates, over time, positive and negative change templates or contact center images. Each template has a timestamp indicating when the template was generated, contains an image or snapshot of the contact center configuration at the time of the performance change, and includes the magnitude of the primary performance parameter before the performance change, the resulting upward or downward magnitude of the change in the primary performance parameter, the (flagged) secondary performance parameter that likely caused the change, and, in some configurations, a magnitude of the variation in the causally linked secondary performance parameter.


As will be appreciated, a performance change can occur when there is no change in contact center configuration. In that event, it may be difficult to establish any causal link to the performance change. One method to determine the reasons for the performance change is to log system parameters such as traffic, average handle time, staffing, etc., to show how this configuration reacted given certain conditions.


The event manager 232 thus monitors system performance and maintains historical data related to the system performance. Additionally, the event manager 232 can monitor agent and contact selector parameter settings and other contact center parameters to supplement reports related to system performance.


Referring to FIG. 13, the performance analysis module 234 is depicted in greater detail. The module 234 includes a comparison engine 1300 and analysis module Application Program Interface or API 1304 that interfaces with a user interface 1308 and database 114. The database 114 includes a database API 1312, a plurality of templates 1316a-n, and a current contact center configuration 1320. For a user entered or new contact center configuration (not shown), the comparison engine 1300 identifies templates having at least a selected degree of similarity to the new contact center configuration, and, for each of the identified templates, determines a likelihood that the primary contact center performance parameter in each selected template (or an extrapolated derivative thereof) will be the performance result of the new configuration, and provides a selected number of the templates having higher associated likelihoods to the user for consideration. In one variation, the user requests, in a current contact center configuration, a change to a secondary performance parameter. The comparison engine, in addition to identifying similar contact center templates, identifies those similar templates in which the parameter to be changed was changed. The engine then projects, based on the requested magnitude of change in the parameter and the historic magnitude of the change in the same parameter in the similar templates, the qualitative and/or quantitative impact on one or more primary performance parameters. A likelihood of the projected change being accurate can also be provided.


The user entered or new contact center configuration can be received in a number of ways. In one way, the user is presented with a graphical display containing a number of fields, each field corresponding to a field of the stored templates. In other words, the set of fields in the templates is identical to the set of fields in the user display. The user can fill in the blanks to generate the new configuration. In one variation, the various fields of the display are populated automatically with values representative of the current contact center configuration 1320. The user can simply change a subset of the fields and run the engine 1300 to obtain a predicted positive or negative change in contact center performance resulting from the changed fields. In another way, the user requests that one or more administrative changes be implemented. The module 234 can be configured to examine such requested changes before they are implemented and, if a negative impact on contact center performance has at least a threshold likelihood of occurring, warn the user and request confirmation that the requested change is to be implemented.


The degree of similarity of the templates to the new contact center configuration can be computed by a number of techniques. In one technique, only selected fields of the template are compared with the corresponding fields of the new contact center configuration. If, for each of the selected fields, the difference between the value of the field in the new configuration and the field in the selected template is no more than a selected threshold, the new configuration and selected template are deemed sufficiently similar to warrant further analysis. The fields in the selected subset generally are the controllable fields that most significantly impact contact center performance. Examples include staffing, occupancy, vector settings, and queue assignments. Incoming traffic or work item levels would not generally be considered, as these parameters are not controllable absent the ability to effect load balancing among a number of call centers. In a variant of this technique, the fields are deemed to have a sufficient degree of similarity when the ratio between the fields is similar. For example, when there is 10% less traffic and 10% fewer agents in the new configuration than in the template, the template could be deemed a sufficient match. In another technique, the prior technique is performed not just for a subset but for all of the fields in the template. This technique may, however, result in few effective matches and therefore frequent “no match found” responses.
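A minimal sketch of the first matching technique described above, comparing only selected controllable fields against per-field thresholds; the field names and threshold values are hypothetical:

```python
def substantially_similar(template: dict, new_config: dict,
                          thresholds: dict) -> bool:
    """A template is a candidate match when every selected field differs
    from the new configuration by no more than its threshold."""
    return all(abs(template[field] - new_config[field]) <= limit
               for field, limit in thresholds.items())

# Hypothetical example: only controllable fields are in the selected subset,
# so incoming traffic is ignored even though it differs.
template = {"staffing": 100, "occupancy": 0.85, "traffic": 500}
new_config = {"staffing": 95, "occupancy": 0.80, "traffic": 650}
thresholds = {"staffing": 10, "occupancy": 0.10}
```

Here `substantially_similar(template, new_config, thresholds)` holds because staffing and occupancy each fall within their thresholds, while traffic, being uncontrollable, is excluded from the selected subset.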


The likelihood of a performance impact from an administrative change being characterized by a substantially similar template may be determined by any suitable type of statistical analysis. Generally, the difference between each of the fields in the template and the corresponding field in the new contact center configuration is determined, each difference is multiplied by a weighting value, and the weighted values are summed. The sum of the weighted values can be compared against a lookup table indexing likelihood values against weighted value sums. As will be appreciated by those of ordinary skill in the art, other simpler or more elaborate techniques may also be employed depending on the application. In another approach, a number of substantially similar templates are used to extrapolate a performance change for the new contact center configuration. For example, if the templates are the same as the new contact center configuration except for staffing levels, a mathematical relationship, or graph, can be generated showing the relationship between staffing levels in the similar templates and performance result, and the likely performance result for the new configuration can be determined from that relationship.
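The weighted-sum and lookup-table approach in the paragraph above can be sketched as follows; the weights and the rows of the lookup table are illustrative assumptions:

```python
def weighted_score(template: dict, new_config: dict, weights: dict) -> float:
    """Sum of per-field differences, each scaled by its weighting value."""
    return sum(w * abs(template[f] - new_config[f])
               for f, w in weights.items())

def likelihood_from_score(score: float, lookup: list) -> float:
    """Map a weighted-difference score to a likelihood via a lookup table
    of (maximum score, likelihood) rows, ordered by increasing score."""
    for max_score, lik in lookup:
        if score <= max_score:
            return lik
    return 0.0  # score beyond the table: template unlikely to apply

# Hypothetical table: small weighted differences imply high likelihood.
LOOKUP = [(5.0, 0.9), (15.0, 0.6), (30.0, 0.3)]
```

A template differing only slightly from the new configuration would score low and map to a high likelihood of characterizing the outcome.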


The likely performance result can be provided for a variety of otherwise uncontrollable secondary parameters, such as incoming traffic or work item levels, incoming contact types, and times of day. In other words, for each different uncontrollable secondary parameter in the substantially similar templates a corresponding performance level can be displayed to the user along with the percentage likelihood associated with the respective template; that is, assuming that a first template has an incoming traffic level of X, a match rate of Y, and a Z percentage likelihood of characterizing the match rate if the new configuration is implemented, the values of X, Y, and Z would be displayed as a line of a table, with the other entries in the table being similar values from other similar templates.


In one embodiment described above, each template includes not only an image of a contact center as of a specific point in time but also a change in a secondary parameter and the resulting change in primary parameter. This cause-and-effect relationship information can provide a powerful tool in estimating a likely change in contact center performance resulting from a change in a secondary parameter. In other words, when a proposed change in a new configuration is to a secondary parameter that was previously identified as a reason for a positive or negative change in performance in one or more substantially similar templates, the likely impact on contact center performance can be estimated with some degree of accuracy. A mathematical relationship between the degree of change of the secondary parameter and the degree of resulting change in the primary parameter can be generated based on the substantially similar templates, or even based on both substantially similar and dissimilar templates, and the mathematical relationship used to determine, for the proposed change in the new configuration relative to the current configuration 1320, the likely change in or level of one or more selected primary parameters and an associated likelihood for the change/level.
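The mathematical relationship described above could be as simple as a least-squares line fitted to (secondary-parameter change, primary-parameter change) pairs drawn from the substantially similar templates. The function names are illustrative, and a real implementation might use a richer model:

```python
def fit_linear(points):
    """Least-squares line through (secondary change, primary change)
    pairs taken from historical templates."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept

def predict_primary_change(points, proposed_secondary_change):
    """Project the primary-parameter change for a proposed magnitude of
    change in the secondary parameter."""
    slope, intercept = fit_linear(points)
    return slope * proposed_secondary_change + intercept
```

Given pairs from the templates, the fitted line is evaluated at the requested magnitude of change to yield the projected impact on the primary parameter.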


In one configuration, a proposed change may have a positive impact in one area and a negative impact in another. To determine whether the proposed change is desirable, the algorithm determines whether the overall impact is positive or negative. One measure of this could be computed by averaging the absolute value of each deviation from target. Contact centers can apply weighting to the queues or routing points, as it may be more important to hit some targets than others.
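One way to realize the overall-impact measure just described is a weighted mean of absolute deviations from target; the weighting scheme below is an assumption for illustration:

```python
def overall_impact(deviations: dict, weights: dict = None) -> float:
    """Average absolute deviation from target across queues or routing
    points, optionally weighted so that critical queues count more.
    `deviations` maps each queue to (actual - target)."""
    if weights is None:
        weights = {q: 1.0 for q in deviations}  # unweighted by default
    total_weight = sum(weights.values())
    return sum(weights[q] * abs(d) for q, d in deviations.items()) / total_weight
```

A proposed change would be compared before and after: a lower value suggests a net improvement even if individual queues move in opposite directions.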


In one configuration, the module 234 does not provide a quantitative change in the primary parameter but rather a qualitative change only. In other words, the module 234 will simply indicate that the primary parameter will change positively or negatively and provide a corresponding likelihood for the change. Frequently, a qualitative result is as useful as a quantitative result. The qualitative result may simply indicate positive or negative impact or may give predicted levels of impact, e.g., slightly impacted positively, slightly impacted negatively, substantially impacted positively, substantially impacted negatively, heavily impacted positively, and heavily impacted negatively.


Referring now to FIG. 3, a queue service level and match rate trend report graph 300 generated by the event manager 232 will be described in accordance with at least some embodiments of the present invention. The queue service level and match rate graph 300 generally comprises historical service level measurements 304 and historical match rate measurements 308. The historical data may be periodically or continuously updated to reflect changes in performance. The report may be displayed to a system administrator in real-time such that the system administrator can be alerted to any sudden changes in performance. In an alternative embodiment, the queue service level and match rate graph 300 is displayed to a system administrator only when a substantial change in performance is detected. As will be appreciated, what is a substantial change is determined by a comparison to a selected threshold. The magnitude of the threshold depends on the parameter under analysis. For example, the threshold for service level will generally differ from that for match rate. Additionally, for a selected parameter, the parameter threshold can vary by time of day and by work item type, customer class, and/or agent skill type or level.


One example of a substantial change can be seen in the relatively sudden decrease of service level performance 306. The service level 304 is expected to change throughout the day, but usually at a relatively smooth rate. However, the sudden drop in service level performance 306 may be considered a substantial change in contact center performance.


Another example of a substantial change can be seen in the drop of match rate 310 that occurred around 12:00. This particular drop in match rate 310 may have been due to an increase of traffic entering the contact center or due to other parameter changes in the agent and contact selector 220.


As can be seen in FIG. 4, in one embodiment, when the service level 304 and/or match rate 308 change drastically, the event manager 232 may generate a queue service level and match rate trend report table 400 to supplement the trend report graph 300. The queue service level and match rate trend report table 400 generally comprises a field of data for the service level measurements and a field of data for the match rate measurements. The sudden decrease in service level 404 is highlighted for ease of reference by the system administrator. Additionally, the sudden decrease in match rate 408 can be highlighted. As can be seen in FIGS. 3 and 4, the decrease in match rate 308 preceded the decrease in service level 304. This may be one indication that the decrease in match rate 308 resulted in the decrease in service level 304; however, such a determination should not be made until further analysis of the contact center parameters around the time of the decrease is completed.


When the system administrator is provided with the queue service level and match rate trend reports 300 and 400, the event manager 232 may further provide the system administrator with the ability to drill down into service level and match rate specific reports. These reports allow a system administrator to gain perspective on possible causes of the decrease in contact center performance.


As can be seen in FIG. 5, a queue service level cause trend report graph 500 may be provided to the system administrator as part of the drill down features associated with service level changes. The queue service level cause trend report graph 500 generally comprises a historical record of various contact center parameters around the time of the detected change in performance. Examples of the parameters that may be tracked include number of contact arrivals 504, average number of positions staffed 508, and average number of positions working 512. The number of contact arrivals 504 is generally tracked to determine if a change in traffic may have been the cause of the change in contact center performance. For example, a decrease in traffic (i.e., inbound and outbound contacts) may result in an increase in contact center performance. Conversely, an increase in traffic may result in a decrease in contact center performance.


The average number of positions staffed 508 is also tracked to help determine if a change in contact center performance was the result of a staffing change. A decrease in the number of positions staffed may explain a decrease in contact center performance as defined by service level and match rate. Alternatively, an increase in the number of staffed positions may explain an increase in contact center performance. As can be appreciated, a combinational change in arrivals and positions staffed may also explain a sudden change in contact center performance. For example, if the number of arrivals increased and the number of staffed positions decreased at around the same time, then contact center performance would likely suffer. On the other hand, a decrease in arrivals and an increase in staffed positions may result in an increase in contact center performance.


Of course, the number of staffed positions does not necessarily dictate the number of people working. For instance, some staffed positions may be on break while other staffed positions may be working on other queues. For this reason, the average number of positions working 512 is tracked. Tracking the average number of positions working allows a system administrator to identify if the problem was actually staffing.


Referring now to FIG. 6, a queue service level cause trend report table 600 will be described in accordance with embodiments of the present invention. The event manager 232 may provide the queue service level cause trend report table 600 to the system administrator along with the corresponding graph 500. Additional information is contained in the table 600 as compared to the graph 500. For example, the table 600 generally comprises an occupancy field 620 and a service level performance field 604 in addition to an arrivals field 608, an average positions staffed field 612, and an average positions working field 616. The service level performance field 604 provides an indication of approximately when the change in performance occurred. The values of the other fields can be analyzed for around the same time to determine what, if any, were the reasons for the change in performance. As can be seen in the example depicted in FIG. 6, the service level performance decreased around 12:30. During approximately the same time, the number of arrivals did not change dramatically nor did the number of positions staffed. Also, the number of positions working increased. However, the occupancy (i.e., the ratio of work time to staff time) decreased around the same time as the service level declined. This may provide a further indication that agent behavior may be a cause of the change in performance rather than a change to other system parameters.
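Occupancy, as parenthetically defined above (the ratio of work time to staff time), can be computed directly; the zero-staff convention is an assumption for illustration:

```python
def occupancy(work_time: float, staff_time: float) -> float:
    """Occupancy: ratio of time agents spend working to time staffed."""
    if staff_time == 0:
        return 0.0  # assumed convention when no positions are staffed
    return work_time / staff_time
```

For instance, agents working 6 hours out of 8 staffed hours yields an occupancy of 0.75; a drop in this ratio while staffing holds steady points toward agent behavior rather than a configuration change.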


In one embodiment, if traffic has changed, it may be compared to WFM forecasts to see if the traffic was predicted. If the WFM forecast was off, then it should be adjusted for the problem time period to ensure that performance does not drop in the same way due to another WFM misforecast.


As an example, if there was a sudden increase or spike in traffic, a system administrator could drill into where the traffic originated (e.g., by differentiating arrivals by area code or country code). If the increased traffic is from the same geographic area, then weather conditions or other regional events may have triggered the traffic. In some instances, such an increase of traffic may not be predictable. However, other instances of traffic may be predictable; for example, if a certain town's baseball team is in the playoffs, traffic can be expected to increase. As another example, certain regional occurrences may lead to longer service times because people want to talk more.


In another embodiment, if the average positions staffed has changed, then the staffing can be compared to the WFM schedule. If the WFM schedule called for the change, then the WFM forecast parameters can be adjusted to staff more positions. If the WFM schedule does not have a change, then the system administrator can drill into an agent login/logout report or view a schedule adherence report to determine if the staffed positions were actually working as they were scheduled.


In still another embodiment, following the example depicted where average positions working has changed, a system administrator can check to see if agent occupancy has changed. If the agent occupancy is down, the system administrator can drill into an auxiliary state report that shows the various states of agents in the contact center. Alternatively, the system administrator may view an agent utilization or occupancy report 700 as shown in FIG. 7. The agent utilization summary report 700 generally depicts the percentage of agent activity during a selected time interval. In the depicted example, the selected time interval is between 11:45 and 12:00. The agent utilization summary report 700 shows how agent utilization has changed over the selected time interval by various states available to the agent. The agent utilization summary report 700 generally includes preview duration, active duration, wrap-up duration, idle duration, initiate duration, and held duration as a percentage of total duration.


In the depicted example, the percentage of active duration increased from about 65% to about 70% between 11:45 and 12:00. Furthermore, percentage wrap up duration increased from about 10% to about 15% over the same time interval. The percentage idle duration decreased from about 15% to about 5% between 11:45 and 12:00. The slight increase in active duration and wrap-up duration may have been a result of local occurrences or may be the result of inefficient contact handling by an agent.


In one embodiment, if the percent active time is about the same over the selected duration, then the system administrator can drill down to an agent queue utilization summary report 800 to see what queues the agents are serving, as can be seen in FIG. 8. The agent queue utilization summary report 800 generally shows from what queues agents are receiving contacts. In the depicted embodiment, the shoes queue took agents away from the outerwear queue. This dramatic change in agent utilization among queues may have been due to a change in skill levels at the CMS 228 and/or agent and contact selector 220. Upon seeing such a change in agent utilization, a system administrator may view a system administration history to see if skill levels were previously changed for agents serving shoes. In the event that a skill level change for agents in shoes was made, the system administrator or the event manager 232 may reverse the change to restore contact center performance.


In an alternative embodiment, if the percent wrap-up time or percent held duration time changed, then the system administrator may be provided agent problem behavior reports. Examples of agent problem behavior reports are described in U.S. patent application Ser. No. 11/193,585, the entire disclosure of which is hereby incorporated herein by this reference. The agent problem behavior reports may provide the system administrator with information on how particular agents can be coached to improve their contact handling efficiency.


Of course, service level may not be the only performance measurement parameter that has suffered as a result of contact center changes. Match rate is another indicator used to measure performance levels. The match rate sets of reports available for a system administrator to drill down through are somewhat similar in nature to the service level reports described above.


In one embodiment, if the system administrator or event manager 232 determines that a change has occurred in match rate, then a queue agent role summary report 900 may be generated as can be seen in FIG. 9. The queue agent role summary report 900 generally comprises a total queue information section 904 and an agent information section 908. The queue information section 904 shows agents' roles within a particular queue (e.g., outerwear). It is preferable to have the queue populated in large part with primary skill level agents. There may also be secondary or backup skill level agents as well as reserve agents. The information shown in the queue information section 904 indicates the number of contacts received on a per skill level basis. The queue information 904 further shows the percent handles for the entire queue by skill level and the average handle time for each skill level. If the ratio of skill levels handling contacts is not at an optimum level, for example if the backup skill level agents are handling more contacts than the primary skill level agents, then the system administrator can drill down through performance reports to determine if a system change resulted in the improper handling ratio. The system administrator can drill down into various reports that show different performance parameters including, without limitation, staff, primary staffing, occupancy, primary occupancy, queue state, possible reasons for primary and reserve agents, number of agents logged out, number of agents working on other work, and administration changes (e.g., workflow changes, vector changes, IVR changes, queue assignments, etc.).


Instead of requiring the system administrator to drill down through different reports, the event manager 232 may automatically drill down through each report. By drilling down through various performance parameters the event manager 232 is operable to identify a root cause of a change in contact center performance and may further provide suggestions to a system administrator to correct such changes, if necessary. Alternatively, the event manager 232 may automatically adjust certain contact center parameters to correct the problem and notify an administrator of actions taken.


Referring now to FIG. 10, a method of responding to changes in contact center performance will be described in accordance with at least some embodiments of the present invention. The method begins by detecting a significant change in contact center performance as evidenced by a first or primary parameter (e.g., service level, match rate, percent abandon, or average speed of answer) (step 1004). A significant change may be defined as a percentage change and/or a change that is statistically significant within a particular time interval. An exemplary percentage change threshold may be exceeded if the service level and/or match rate changes by more than about +/−5% from the previous time interval measurement. An exemplary standard deviation threshold may be defined by the previous five time intervals. For instance, the previous five time intervals may have an average service level value of about 81%. The standard deviation for those five values may be about +/−7%. Thus, the threshold for a significant change may be the standard deviation of the previous five values. Of course, a significant change may also be defined as any multiple or fraction of a standard deviation, depending upon user-defined tolerances. It should be noted that the thresholds are generally dynamic, in that they depend upon changing data. In the example above, when a new data point is added for a new time interval, the oldest of the five time interval values may be removed from the determination of a standard deviation and a new standard deviation may be calculated based on the five most recent time interval measurements.
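The dynamic standard-deviation threshold described above can be sketched with a rolling window; the class name, default window of five intervals, and multiplier are illustrative assumptions:

```python
from collections import deque
from statistics import mean, stdev

class SignificantChangeDetector:
    """Flags a measurement as significant when it deviates from the mean
    of the previous `window` intervals by more than a multiple of their
    standard deviation."""

    def __init__(self, window: int = 5, multiple: float = 1.0):
        self.history = deque(maxlen=window)  # oldest value drops out automatically
        self.multiple = multiple

    def is_significant(self, value: float) -> bool:
        if len(self.history) < self.history.maxlen:
            self.history.append(value)  # still accumulating the window
            return False
        threshold = self.multiple * stdev(self.history)
        significant = abs(value - mean(self.history)) > threshold
        self.history.append(value)  # window slides to the newest measurements
        return significant
```

With figures like the example above (five intervals averaging around 81%), a sudden drop to 70% would be flagged while routine interval-to-interval fluctuation would not; adjusting `multiple` corresponds to the user-defined multiple or fraction of a standard deviation.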


After a significant change has been detected, it is determined when the significant change occurred in the primary parameter (step 1008). This particular step helps to create a time line and a starting point for the drill-down reports described above. Moreover, determining when a significant change occurred can be useful in determining causal relationships. For example, a change in system parameters (or secondary parameters) after a significant change has been detected is not a likely cause of the significant change. Examples of secondary parameters include, but are not limited to, traffic, routing, staffing, occupancy, queue assignment, and any other contact center parameter that can change the operation or performance of the contact center.


It may also be informative to determine what queues were affected by the significant change. In step 1012, this particular determination is made regarding the parties affected by the significant change. For example, if one queue had a significant increase in performance at about the same time another queue had a significant decrease in performance, it could be the case that a queue assignment or vector change was implemented involving one of the two affected queues.


If a cause of the significant change can be identified, and the cause was at least partially due to a change in contact center parameters (or secondary parameters), it is determined what the purpose of the change was (step 1016). In some instances this determination may be speculative. For example, if records indicate that a fire drill was scheduled around the time the significant change was detected, then the reason for the parameter change may be related to that event. However, in accordance with some embodiments, a system administrator may be required to enter a reason for a system (secondary) parameter change. Thus, when the event manager 232 is searching for changes that resulted in a decrease/increase of performance, it may be able to definitively identify the reason for the change.


Based on the determined reason for the change and other factors, the event manager 232 may identify a set of possible causes of the change in system performance. Based on these causes, the event manager 232 may suggest a new parameter change for the contact center (step 1020). The suggestion for a change may be as simple as reversing the change that caused the alteration in performance. However, under certain circumstances, the change may not be reversible and the event manager 232 may supply other alternatives for changing contact center parameters. In some embodiments, the identified reasons for change may help determine if a particular change may be reversed. The suggestions are provided by the event manager 232 to a system administrator.


The suggestions may be provided in the form of an email or some other alert notification. The event manager 232 waits for a response from the system administrator to determine what changes should be made (step 1024). Of course, in certain embodiments, a system administrator may provide the event manager 232 standing instructions to reverse any reversible change that negatively affected contact center performance. This way a system administrator is only asked to make decisions regarding non-reversible changes.


If the event manager 232 receives notification from the system administrator that one or more of the suggested changes should be implemented, then the event manager 232 implements the new changes (step 1028). This particular step may include changing queue and/or vector assignments, adjusting the WFM forecasts, changing staffing, and the like. Thereafter, a contact center performance report may be generated for the system administrator (step 1036). Of course, the report may be provided to the system administrator as a part of the change suggestions. The report provided to the system administrator after the change is implemented may include a performance comparison report showing that the system has at least returned back to normal performance levels, or the performance levels have improved since a prior decrease in performance.


In the event that the system administrator either does not respond to the suggestions for changes or declines the suggestions, the system settings are maintained (step 1032). This essentially means that the contact center will continue running under the parameters that resulted in the change to contact center performance. Thereafter, the contact center performance report may be provided to the system administrator showing that no changes were made and further showing the updated levels of contact center performance (step 1036).


With reference now to FIG. 11, a method of determining a root cause of a performance change will be described in accordance with at least some embodiments of the present invention. The method begins when a significant change is detected in system performance or a primary parameter (step 1104). This particular step is similar to step 1004 described above. The change in performance may have been measured in service level, match rate, both, or other indicators of contact center performance such as percent abandon and average speed of answer. Once a significant change has been detected, it is determined whether a secondary parameter such as traffic level has changed (step 1108). The types of traffic that could have changed include both inbound and outbound contacts for a particular queue or for the entire contact center. One reason why traffic may change is that routing has been adjusted in the form of a queue change or a vector change. Thus, it is determined whether a secondary parameter such as a routing change was made in temporal proximity to, i.e., before or around, the time of the detected performance change (the target time) (step 1112). In the event that the traffic changed as a result of a routing change, it is determined whether the routing change was made in an attempt to improve service level, match rate, and/or any other primary parameter (step 1124). If the routing change was made in an attempt to improve service level, match rate, and/or any other primary parameter, then the routing change is reversed (step 1128). As can be appreciated, the routing change may be reversed automatically by the event manager 232 or manually by a system administrator.
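The opening branches of the FIG. 11 decision flow can be sketched as a small decision function. This is a hypothetical illustration only; the function name, the boolean inputs, and the returned action strings are assumptions for exposition, not part of the patented system:

```python
def diagnose(traffic_changed, routing_changed, routing_meant_to_improve_service):
    """Suggest an action for a detected change in a primary parameter.

    Illustrative sketch of the first branches of FIG. 11; all names are
    assumed for this example.
    """
    if traffic_changed:
        if routing_changed:
            if routing_meant_to_improve_service:
                return "reverse routing change"                        # step 1128
            return "explore other routing, staffing, or WFM options"   # step 1120
        return "locate traffic origin, then adjust routing/staffing/WFM"  # steps 1116, 1120
    return "check staffing, occupancy, and queue assignments"          # steps 1132 onward


assert diagnose(True, True, True) == "reverse routing change"
assert diagnose(True, True, False) == "explore other routing, staffing, or WFM options"
```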


In one embodiment, a change in one secondary parameter may have been intended to improve service level numbers for a particular customer. Such a local change may have a negative impact on contact center performance globally. If an attempt to improve service locally resulted in a global degradation of service, the change may be reversed as noted above with respect to step 1128. As can be appreciated by one of skill in the art, hysteresis can be built into the system so that a parameter change is not repeatedly reversed and re-applied. This hysteresis allows one parameter change to take effect on contact center performance before a determination is made to make another change to the system.
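One minimal way to realize such hysteresis is a settle-period guard that refuses any further change, including a reversal, until the previous change has had time to take effect. The class name, parameter names, and the 15-minute settle period below are illustrative assumptions:

```python
class ChangeGate:
    """Illustrative hysteresis guard: block a new (or reversing) parameter
    change until a settle period has elapsed since the last change."""

    def __init__(self, settle_seconds):
        self.settle = settle_seconds
        self.last_change = None  # time of the most recent permitted change

    def may_change(self, now):
        """Return True and record the change if the settle period has passed."""
        if self.last_change is None or now - self.last_change >= self.settle:
            self.last_change = now
            return True
        return False


gate = ChangeGate(settle_seconds=900)       # let each change act for 15 minutes
assert gate.may_change(now=0) is True       # first change is allowed
assert gate.may_change(now=300) is False    # immediate reversal is blocked
assert gate.may_change(now=1000) is True    # allowed once the change has settled
```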


In another embodiment, rather than simply reversing the last system change that may have caused the change in performance, system settings may be restored to a baseline configuration. In other words, the system may be restored to a “last known” working state for this template, having predetermined settings for agent queue and vector assignments. The event manager 232 may revert to this predetermined configuration if there are too many changes to be analyzed and reversed individually.


There are a number of instances where a routing change may have been made without intending to improve service. Some of these reasons may still permit a reversible routing change, whereas others may indicate that the routing change should not be reversed. For example, if a routing change was made to accommodate a mandatory staff meeting, then the routing change cannot be reversed until the staff meeting is over. Another example of a potentially irreversible routing change is a queue assignment change made to compensate for an agent who was not qualified to handle a particular queue. In the event that the routing change should not be reversed (i.e., the routing change was not made to improve service), then other options are explored, such as changing other routing parameters, changing staffing, or changing the WFM schedule (step 1120). Another example of a change that should not necessarily be reversed is a change in system settings to accommodate a sudden increase in traffic in a certain queue. Such a change may remain in place until the local traffic in the particular queue has returned to normal. Thereafter, the event manager 232 may reverse the change, returning the system settings to their previous state.


In the event that there was a traffic change but no routing change as determined by steps 1108 and 1112 respectively, a determination is made as to where traffic is coming from (step 1116). In other words, it is determined if the increase in traffic is from a particular customer, region, queue, or the like. After the origination of the traffic is determined, then suggestions are provided to adjust routing, adjust staffing, or change the WFM schedule (step 1120).


In the event that traffic did not substantially change, the method continues by determining whether a secondary parameter such as staffing changed in temporal proximity to the target time (step 1132). A staffing change may be made, for example, in preparation for an expected change in contact volume or for a holiday. If it is determined that a staffing change was made, then it is determined whether the changed schedule was adhered to (step 1136). In other words, it is determined whether the scheduled agents showed up for work. In the event that the agents did adhere to the changed schedule, then the problem is in the schedule itself, and a suggestion is therefore made to adjust the schedule (step 1140). In other words, the workflow forecast may have been off, and an adjustment to the schedule should be made.


However, if the workflow was forecast properly but the agents did not adhere to the schedule, then other forms of adjustment should be made. For example, if agents were coming in to work late or leaving early, then they may need to be coached or otherwise spoken with by a supervisor (step 1144). Alternatively, some agents may have been sick, and the deviation from the schedule was unavoidable. Under these circumstances there may be no recourse in adjusting the system parameters; rather, the system administrator must make adjustments to deal with an understaffed workforce.


If there was no determined traffic change and no staffing change, then the method continues to determine if there was a significant change in a secondary parameter such as agent occupancy (step 1148). As noted above, occupancy is an indicator of agent efficiency and provides a measure of how busy a particular agent is. Examples of an occupancy change include longer or shorter talk times, hold times, wrap-up times, and/or ring times. Also, depending on how occupancy is calculated, breaks and non-contact work may affect occupancy as well.
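As a rough illustration, occupancy is commonly computed as contact-related busy time divided by staffed time. The function below is one common formulation, not the patent's definition; the optional auxiliary-time argument reflects the caveat above that breaks and non-contact work may or may not be counted, depending on how occupancy is calculated:

```python
def occupancy(talk, hold, wrap, ring, staffed_time, aux=0.0, count_aux=False):
    """One common occupancy formula: busy time / staffed time.

    All times are in the same unit (e.g., seconds). Whether auxiliary
    (break / non-contact) time counts as busy varies by convention, so it
    is controlled by the count_aux flag. Illustrative sketch only.
    """
    busy = talk + hold + wrap + ring + (aux if count_aux else 0.0)
    return busy / staffed_time


# An agent with 30 min talk, 5 min hold, 10 min wrap, ~2 min ring in a 1-hour shift:
occ = occupancy(talk=1800, hold=300, wrap=600, ring=100, staffed_time=3600)
assert abs(occ - 2800 / 3600) < 1e-9   # roughly 78% occupied
```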


In the event that an occupancy change is detected around the time that a significant change in performance (a primary parameter) was detected, then it is determined if there are any problem behaviors with one or more of the agents in the workforce (step 1152). Examples of methods used to determine if there are agent behavior problems are explained in U.S. patent application Ser. No. 11/193,585. If there were problem behaviors detected, then the identified agent is coached on how to handle contacts with more efficiency (step 1144). Other forms of coaching an agent include motivating the agent by giving them a different shift or discussing other options that would make the agent more willing to work efficiently. Also, the agent may have his or her work observed for a period of time to determine with more accuracy what their problem behaviors are.


In the event that there are no problem behaviors detected, then there is either a problem with the number of agents staffed or with the schedule, both of which are secondary parameters. Therefore, the staffing or schedule is adjusted (step 1164). This particular step may include adjusting the WFM forecast to accommodate the occupancy change. Alternatively, the staffing may need to be changed by having agents work overtime, work split shifts, or take on new queue assignments.


If there have been no significant determined traffic changes, staffing changes, or occupancy changes, then it is determined where agents are spending their time (step 1156). For example, if, before a particular decline in service, agents were distributed relatively evenly across three queues, but after the decline more agents were concentrated in the first two queues, then the agents are possibly spending too much time in the first two queues and should be redistributed among all queues.


Once it is determined where the agents were spending their time, then it is determined if there was an assignment change (a secondary parameter) (step 1160). Examples of a queue assignment change include, without limitation, increasing an agent's skill level, decreasing an agent's skill level, adding a queue, and removing a queue. If there were no queue assignment changes, then the problem is still likely in the staffing and/or schedule. Therefore, if no queue assignment change was detected, then the staffing and/or schedule is adjusted (step 1164).


However, if there was an assignment change, then it is determined if the reason for the assignment change was to improve service, either locally or globally (step 1168). If the reason for the change was to improve service, or was any other reversible reason, then the change is reversed and the system parameters are restored to their last known working condition (step 1172). However, if the reason for the change should not be reversed, then the staffing and/or schedule needs to be changed (step 1164). Additionally, other routing parameters may be changed to compensate for the other change that should not be reversed (not shown).



FIG. 12 depicts a method of maintaining a database of positive system changes in accordance with at least some embodiments of the present invention. Initially, an administrator decides to implement a change to one or more of the contact center parameters (step 1204). The change may include a staffing change, a schedule change, a queue assignment change, a vector change, or any other change known in the art. Thereafter, the administrator records the reason for the change (step 1208). The administrator may choose among various options for a reason of change, such as to improve service, to compensate for understaffing, to comply with new regulations, etc.


Once the reason for the change is known, the event manager 232 implements the change as defined by the system administrator (step 1212). Then the performance level of the contact center is monitored (step 1216). The performance level may be monitored by tracking the service level, match rate, both, or some other primary parameter. As the performance level of the contact center is monitored, it is determined if the performance level has declined (step 1220). If the performance level has not declined (i.e., it has improved or remained the same), then the event manager stores the positive change, the system parameters before the change, the system parameters after the change, and the reason for the change in the database 114 (step 1224). The positive change is preferably added to a positive change history log that can be accessed by the event manager 232 at a later time to create suggestions for future changes to the system administrator. For example, if the event manager 232 has noticed that a particular system change worked multiple times in the past for a specified set of contact center parameter settings, then if the same set of parameter settings is encountered at a later date, the event manager 232 can suggest the same changes to the system administrator with a reasonable amount of assurance that the performance will improve. The use of a positive change history log allows the event manager 232 to learn what types of system changes provide positive results. This is particularly useful when a change to the system parameters should not be reversed and a new change needs to be suggested to the system administrator.
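A positive change history log keyed by the parameter settings in force at the time of the change might be sketched as follows. The class, method, and field names are assumptions for illustration, not the patent's data model:

```python
from collections import defaultdict


class PositiveChangeLog:
    """Illustrative positive-change history log: record changes that did not
    hurt performance, keyed by the settings in force when they were made, and
    suggest past successes whenever the same settings recur."""

    def __init__(self):
        self._log = defaultdict(list)

    @staticmethod
    def _key(settings):
        # A hashable, order-independent snapshot of the parameter settings.
        return tuple(sorted(settings.items()))

    def record(self, settings_before, settings_after, change, reason):
        self._log[self._key(settings_before)].append(
            {"change": change, "reason": reason, "after": settings_after}
        )

    def suggest(self, current_settings):
        """Changes that previously worked under these exact settings."""
        return [e["change"] for e in self._log.get(self._key(current_settings), [])]


log = PositiveChangeLog()
log.record({"queue1_staff": 5}, {"queue1_staff": 7},
           change="add 2 agents to queue1", reason="improve service")
assert log.suggest({"queue1_staff": 5}) == ["add 2 agents to queue1"]
assert log.suggest({"queue1_staff": 9}) == []   # unseen settings: no suggestion
```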


The change may also be stored as a positive change if no change in performance was detected. This may be useful if the administrator changed system parameters to make a particular agent or agents happier as they worked. If no degradation to performance was detected as the result of such a change it may be regarded as a positive change and can therefore be stored in the positive changes history log along with its reason for the change.


It may also be useful to store negative changes in some circumstances in a negative change database. Specifically, a database of previous negative changes may be referenced to identify if the requested change may likely result in a decrease in contact center performance.


If there was a decline in performance detected by monitoring a primary parameter, then it is determined if the change is reversible (step 1228). If the change should not be reversed, then the event manager 232 can access the positive changes history log and suggest additional changes to the system parameters (step 1232). However, if the change was reversible, then the change is reversed and the system parameters are restored to their previous state (step 1236). The ability to change parameters and have the event manager 232 either automatically reverse them if they are bad or store them for future reference if they are good provides the system administrator the ability to learn how to work with the contact center. This learning tool also provides the event manager 232 with more information which can ultimately help the system administrator when other changes in system performance are detected.
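The reverse-or-suggest branch of FIG. 12 (steps 1220 through 1236) reduces to a small decision function. The action strings below are illustrative placeholders for the behaviors the text describes:

```python
def respond_to_monitoring(performance_declined, change_is_reversible):
    """Illustrative sketch of steps 1220-1236: store good changes, reverse
    reversible bad ones, and otherwise consult the positive-change log."""
    if not performance_declined:
        return "store as positive change"                 # step 1224
    if change_is_reversible:
        return "reverse change and restore parameters"    # step 1236
    return "suggest alternatives from positive change log"  # step 1232


assert respond_to_monitoring(False, True) == "store as positive change"
assert respond_to_monitoring(True, True) == "reverse change and restore parameters"
```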


Referring now to FIG. 14, a method of responding to changes in contact center performance will be described in accordance with at least some embodiments of the present invention.


In step 1400, the proposed contact center configuration 1320 is received from the user.


In step 1404, the analysis module API 1304 retrieves a set of relevant templates from the stored templates 1316a-n. This is done by querying the database API 1312 for templates having selected ranges of values for selected fields. For example, the API 1304 requests database API 1312 to retrieve all templates having a timestamp earlier than X, a staffing value ranging from A to B, occupancy level ranging from C to D, vector settings of E, F, . . . and G, and queue assignments of H to I for a first queue, J to K for a second queue, . . . and L to M for an mth queue.
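The range query of step 1404 can be sketched as a simple filter over stored templates. The field names and concrete ranges below are illustrative stand-ins for the X, A-B, C-D style placeholders above:

```python
# Hypothetical stored templates; field names are assumed for illustration.
templates = [
    {"timestamp": 100, "staffing": 12, "occupancy": 0.80},
    {"timestamp": 200, "staffing": 20, "occupancy": 0.95},
    {"timestamp": 150, "staffing": 15, "occupancy": 0.85},
]


def query_templates(templates, before, staffing_range, occupancy_range):
    """Return templates with timestamp earlier than `before` whose fields
    fall within the selected ranges (the step 1404 retrieval, sketched)."""
    lo_s, hi_s = staffing_range
    lo_o, hi_o = occupancy_range
    return [t for t in templates
            if t["timestamp"] < before
            and lo_s <= t["staffing"] <= hi_s
            and lo_o <= t["occupancy"] <= hi_o]


hits = query_templates(templates, before=180,
                       staffing_range=(10, 16), occupancy_range=(0.75, 0.90))
assert [t["timestamp"] for t in hits] == [100, 150]
```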


In step 1408, the comparison engine 1300 compares a selected member of the retrieved set of templates against the proposed contact center configuration and generates an indicator, e.g., a degree of similarity between the selected template and new configuration or a likelihood that the primary performance parameter for the selected template will characterize, or be the same or similar magnitude as, the same primary performance parameter for the new configuration (if implemented).


In decision diamond 1416, the comparison engine 1300 determines whether the indicator is at least a selected value. If the indicator has too low a value, the template is deemed to be non-deterministic of the likely result of implementing the new configuration. If the indicator has a high enough value, the template is deemed to be deterministic of the likely result of implementing the new configuration.
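The indicator of step 1408 and the threshold test of decision diamond 1416 might be sketched as follows, using the field-ratio notion of similarity the claims mention; the threshold value, field names, and averaging scheme are assumptions for illustration:

```python
def similarity(template, proposed, fields):
    """Illustrative indicator: average, over a subset of fields, of the
    min/max ratio between template and proposed values (1.0 = identical)."""
    ratios = []
    for f in fields:
        a, b = template[f], proposed[f]
        hi = max(a, b)
        ratios.append(min(a, b) / hi if hi else 1.0)
    return sum(ratios) / len(ratios)


tpl = {"staffing": 10, "traffic": 100}          # a stored template (assumed fields)
new = {"staffing": 10, "traffic": 50}           # the proposed configuration
ind = similarity(tpl, new, ["staffing", "traffic"])
assert abs(ind - 0.75) < 1e-9                   # (10/10 + 50/100) / 2

THRESHOLD = 0.7                                 # assumed cutoff for diamond 1416
deterministic = ind >= THRESHOLD                # keep template if high enough
assert deterministic is True
```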


When the indicator value is too low, the selected template is discarded in step 1420.


When the indicator value is sufficiently high, the selected template is flagged as being potentially deterministic and, in decision diamond 1424, the engine 1300 determines whether there is a next member in the retrieved set of templates. If so, the engine 1300, in step 1412, selects the next member and returns to and repeats step 1408. If there is no next member, the engine 1300 determines, in decision diamond 1429, whether a similar template was found to exist. If no template was located, the engine 1300, in step 1430, creates a new template, such as by extrapolating between existing templates.


If at least one template was located or after performing step 1430, the results of the engine's analysis are saved and presented to the user via the user interface 1308 (step 1428). Typically, the user interface is a workstation, Personal Computer, laptop, or the like. The results saved include a pointer to the template, the value of the indicator, and a pointer to the new contact center configuration, which is also stored in memory. At this step, the other types of analysis referenced above may also be performed to provide an estimated qualitative and/or quantitative impact on one or more selected primary performance parameters and an associated likelihood or level of confidence that the estimated impact is accurate.


In decision diamond 1432, the module 234 determines whether the user has requested that the new configuration be implemented. If so, the new configuration is implemented in step 1436 and the contact center reconfigured as defined by the proposed configuration. If the user has not elected to implement the new configuration within a selected period of time, the module 234 times out and terminates operation.


After step 1436, the engine 1300 records the results of the reconfigured contact center as a new template for use in making a future prediction. Typically, more recent templates are weighted more heavily than less recent templates when making a prediction respecting contact center performance.
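Weighting recent templates more heavily can be done with, for example, exponential decay by template age. The half-life form below is one illustrative choice, not the patent's method, and the field names are assumptions:

```python
def predict_service_level(templates, now, half_life):
    """Recency-weighted prediction: each template's primary-parameter value
    is discounted by 0.5 ** (age / half_life). Illustrative sketch only."""
    num = den = 0.0
    for t in templates:
        weight = 0.5 ** ((now - t["timestamp"]) / half_life)
        num += weight * t["service_level"]
        den += weight
    return num / den


ts = [{"timestamp": 0,   "service_level": 0.70},    # older template, weight 0.5
      {"timestamp": 100, "service_level": 0.90}]    # newer template, weight 1.0
p = predict_service_level(ts, now=100, half_life=100)
assert abs(p - (0.5 * 0.70 + 1.0 * 0.90) / 1.5) < 1e-9   # newer value dominates
```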


The present invention, in various embodiments, includes components, methods, processes, systems and/or apparatus substantially as depicted and described herein, including various embodiments, subcombinations, and subsets thereof. Those of skill in the art will understand how to make and use the present invention after understanding the present disclosure. The present invention, in various embodiments, includes providing devices and processes in the absence of items not depicted and/or described herein or in various embodiments hereof, including in the absence of such items as may have been used in previous devices or processes, e.g., for improving performance, achieving ease and/or reducing cost of implementation.


The foregoing discussion of the invention has been presented for purposes of illustration and description. The foregoing is not intended to limit the invention to the form or forms disclosed herein. In the foregoing Detailed Description for example, various features of the invention are grouped together in one or more embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the following claims are hereby incorporated into this Detailed Description, with each claim standing on its own as a separate preferred embodiment of the invention.


Moreover, though the description of the invention has included description of one or more embodiments and certain variations and modifications, other variations and modifications are within the scope of the invention, e.g., as may be within the skill and knowledge of those in the art, after understanding the present disclosure. It is intended to obtain rights which include alternative embodiments to the extent permitted, including alternate, interchangeable and/or equivalent structures, functions, ranges or steps to those claimed, whether or not such alternate, interchangeable and/or equivalent structures, functions, ranges or steps are disclosed herein, and without intending to publicly dedicate any patentable subject matter.

Claims
  • 1. A method comprising: a processor generating two or more contact center templates, wherein each of the two or more contact center templates defines a historical contact center configuration as of a respective point in time, each of the two or more contact center templates including at least one primary contact center performance parameter, which characterizes, measures, or quantifies performance of a contact center and at least one secondary contact center parameter, which corresponds to at least one user-configurable parameter that, when changed, positively or negatively impacted the at least one primary contact center performance parameter of the respective contact center template, wherein each of the two or more contact center templates includes a timestamp to indicate an associated time of generation and wherein each of the two or more contact center templates also includes a magnitude of the at least one primary contact center performance parameter before a change to the at least one user-configurable parameter that positively or negatively impacted the at least one primary contact center performance parameter, a magnitude of the at least one primary contact center performance parameter after the change, and an identification of the at least one user-configurable parameter associated with the change;the processor receiving a proposed contact center configuration;the processor comparing the proposed contact center configuration to each of the two or more contact center templates; andbased on the results of the comparing step, the processor predicting an impact on a primary contact center performance parameter if the proposed contact center configuration were to be implemented.
  • 2. The method of claim 1, wherein the primary performance parameter is at least one of service level, match rate, percent abandon, and average speed of answer, wherein the templates comprise a plurality of secondary parameters, and wherein the secondary parameters comprise a plurality of staffing, number of agents logged out, number of agents logged in, number of agents working on other work, workflow levels, routing vector settings, traffic, average handle time, percent network routing, percent adherence, automated response unit settings, queue assignments, and skill levels.
  • 3. The method of claim 2, wherein the two or more contact center templates are selected from among a plurality of contact center templates and wherein each of the two or more contact center templates are selected from the plurality of contact center templates based, at least in part, on the two or more contact center templates having at least a selected degree of similarity to the proposed contact configuration and wherein, when no similar contact center template is located, the similar template is created based on other dissimilar templates.
  • 4. The method of claim 3, wherein the results of the comparing include a level of confidence that each of the two or more contact center templates will characterize the predicted impact on the primary contact center performance parameter if the proposed contact center configuration were to be implemented and wherein each of the two or more contact center templates comprise a level of change in a secondary performance parameter and a resulting change in the primary performance parameter.
  • 5. The method of claim 3, wherein the degree of similarity is based on only a subset of fields in each of the two or more contact center templates and wherein the degree of similarity reflects ratios between the subset of fields and corresponding values in the proposed contact center configuration.
  • 6. The method of claim 1, further comprising: an event manager detecting at least a predetermined level of change in a primary contact center performance parameter; andin response, the event manager identifying, as a likely cause of the detected change, at least one change in a secondary contact center parameter that occurred no more than a selected time period before the detected change, wherein the primary and secondary parameters are different.
  • 7. The method of claim 6, wherein the change in the primary contact center performance parameter is identified as a negative change in at least one of contact center percent in service level, percentage abandon, average speed of answer, expected wait time, and match rate.
  • 8. The method of claim 7, wherein detecting a predetermined level of change in the primary contact center performance parameter comprises: comparing the primary contact center performance parameter to at least one of an expected contact center performance parameter and a historical record of contact center performance parameters; anddetermining that the primary contact center performance parameter deviates from the at least one of an expected contact center performance parameter and historical record of contact center performance parameters by a predetermined threshold.
  • 9. The method of claim 8, wherein the predetermined threshold comprises at least one of a standard deviation and a percent change from the at least one of an expected contact center performance parameter and historical record of contact center performance parameters.
  • 10. A computer readable medium comprising processor executable instructions for performing the method of claim 1.
  • 11. A system for managing contact center performance, comprising: a non-transitory computer memory;a processor in communication with the non-transitory computer memory, the processor operable to execute: an event manager operable to generate two or more contact center templates, wherein each of the two or more contact center templates define a historical contact center configuration as of a respective point in time, each of the two or more contact center templates including at least one primary contact center performance parameter, which characterizes, measures, or quantifies contact center performance, and at least one secondary contact center parameter, which corresponds to at least one user-configurable parameter that, when changed, positively or negatively impacted the at least one primary contact center performance parameter of the respective contact center template, wherein each of the two or more contact center templates includes a timestamp to indicate an associated time of generation and wherein each of the two or more contact center templates also includes a magnitude of the at least one primary contact center performance parameter before a change to the at least one user-configurable parameter that positively or negatively impacted the at least one primary contact center performance parameter, a magnitude of the at least one primary contact center performance parameter after the change, and an identification of the at least one user-configurable parameter associated with the change;an agent and contact selector operable to assign a plurality of contacts to a plurality of resources to service the contacts; anda performance analysis module operable to: receive a change in a secondary contact center performance parameter;compare the change in secondary contact center performance parameter against each of the two or more contact center templates; andbased on the results of the compare operation, predict an impact on a primary contact center performance 
parameter if the change in secondary performance parameter were to be implemented.
  • 12. The system of claim 11, wherein the primary performance parameter is at least one of service level, match rate, percent abandon, and average speed of answer, wherein the templates comprise a plurality of secondary parameters, and wherein the secondary parameters comprise a plurality of staffing, number of agents logged out, number of agents logged in, number of agents working on other work, workflow levels, routing vector settings, traffic, average handle time, percent network routing, percent adherence, automated response unit settings, queue assignments, and skill levels.
  • 13. The system of claim 11, wherein the two or more contact center templates are selected from among a plurality of contact center templates and wherein each of the two or more contact center templates are selected from the plurality of contact center templates based, at least in part, on the two or more contact center templates having at least a selected degree of similarity to the proposed contact configuration and wherein, when no similar contact center template is located, the similar template is created based on other dissimilar templates.
  • 14. The system of claim 12, wherein the results of the compare include a level of confidence that each of the two or more contact center templates will characterize the predicted impact on the primary contact center performance parameter if the proposed contact center configuration were to be implemented and wherein each of the two or more contact center templates comprise a level of change in a secondary performance parameter and a resulting change in the primary performance parameter.
  • 15. The system of claim 13, wherein the degree of similarity is based on only a subset of fields in each of the two or more contact center templates and wherein the degree of similarity reflects ratios between the subset of fields and corresponding values in the proposed contact center configuration.
  • 16. The system of claim 11, wherein the system is further operable to measure a primary contact center performance parameter and further comprising: an event manager operable to detect, based on the measured primary performance parameter, a change in the primary contact center performance parameter, identify at least one change in a secondary contact center performance parameter that occurred no more than a selected time period before the detected change, and modify at least one secondary contact center performance parameter to compensate for the at least one change.
  • 17. The system of claim 16, wherein the at least one change comprises a first change to a secondary contact center parameter, and wherein the event manager is operable to reverse the first change.
  • 18. The system of claim 16, wherein the event manager is operable to modify at least one secondary contact center parameter comprising at least one of a queue assignment and contact vector.
  • 19. The system of claim 16, wherein the event manager is operable to suggest at least one of a staffing change, schedule change, routing change, and queue assignment change for a system administrator based on the detected change in the primary contact center performance parameter.
  • 20. The system of claim 16, wherein the event manager is operable to compare the primary contact center performance parameter to at least one of an expected contact center performance parameter and a historical record of contact center performance parameters and determine that contact center performance deviates from the at least one of an expected contact center performance parameter and historical record of contact center performance parameters by a predetermined threshold.
  • 21. The system of claim 20, wherein the predetermined threshold comprises at least one of a standard deviation and a percent change from the at least one of an expected contact center performance parameter and historical record of contact center performance parameters.
  • 22. The system of claim 21, wherein the primary contact center performance parameter comprises at least one of percent in service level, match rate, percent abandon, expected wait time and average speed of answer.
  • 23. The system of claim 21, wherein the secondary contact center parameter comprises at least one of traffic change, routing change, staffing change, occupancy change, and queue assignment change.
CROSS REFERENCE TO RELATED APPLICATIONS

This Application claims the benefit of U.S. Provisional Application No. 60/824,876, filed Sep. 7, 2006, the entire disclosure of which is hereby incorporated herein by reference.

US Referenced Citations (393)
Number Name Date Kind
4163124 Jolissaint Jul 1979 A
4510351 Costello et al. Apr 1985 A
4567323 Lottes et al. Jan 1986 A
4737983 Frauenthal et al. Apr 1988 A
4797911 Szlam et al. Jan 1989 A
4894857 Szlam et al. Jan 1990 A
5001710 Gawrys et al. Mar 1991 A
5097528 Gursahaney et al. Mar 1992 A
5101425 Darland Mar 1992 A
5155761 Hammond Oct 1992 A
5164981 Mitchell et al. Nov 1992 A
5164983 Brown et al. Nov 1992 A
5167010 Elm et al. Nov 1992 A
5185780 Leggett Feb 1993 A
5206903 Kohler et al. Apr 1993 A
5210789 Jeffus et al. May 1993 A
5274700 Gechter et al. Dec 1993 A
5278898 Cambray et al. Jan 1994 A
5289368 Jordan et al. Feb 1994 A
5291550 Levy et al. Mar 1994 A
5299260 Shaio Mar 1994 A
5309513 Rose May 1994 A
5311422 Loftin et al. May 1994 A
5325292 Crockett Jun 1994 A
5335268 Kelly, Jr. et al. Aug 1994 A
5335269 Steinlicht Aug 1994 A
5390243 Casselman et al. Feb 1995 A
5436965 Grossman et al. Jul 1995 A
5444774 Friedes Aug 1995 A
5467391 Donaghue, Jr. et al. Nov 1995 A
5469503 Butensky et al. Nov 1995 A
5469504 Blaha Nov 1995 A
5473773 Aman et al. Dec 1995 A
5479497 Kovarik Dec 1995 A
5499291 Kepley Mar 1996 A
5500795 Powers et al. Mar 1996 A
5504894 Ferguson et al. Apr 1996 A
5506898 Costantini et al. Apr 1996 A
5530744 Charalambous et al. Jun 1996 A
5537470 Lee Jul 1996 A
5537542 Eilert et al. Jul 1996 A
5544232 Baker et al. Aug 1996 A
5546452 Andrews et al. Aug 1996 A
5555299 Maloney et al. Sep 1996 A
5577169 Prezioso Nov 1996 A
5592378 Cameron et al. Jan 1997 A
5592542 Honda et al. Jan 1997 A
5594726 Thompson et al. Jan 1997 A
5603029 Aman et al. Feb 1997 A
5604892 Nuttall et al. Feb 1997 A
5606361 Davidsohn et al. Feb 1997 A
5611076 Durflinger et al. Mar 1997 A
5627884 Williams et al. May 1997 A
5642515 Jones et al. Jun 1997 A
5673205 Brunson Sep 1997 A
5684872 Flockhart et al. Nov 1997 A
5684964 Powers et al. Nov 1997 A
5689698 Jones et al. Nov 1997 A
5703943 Otto Dec 1997 A
5713014 Durflinger et al. Jan 1998 A
5721770 Kohler Feb 1998 A
5724092 Davidsohn et al. Mar 1998 A
5740238 Flockhart et al. Apr 1998 A
5742675 Kilander et al. Apr 1998 A
5742763 Jones Apr 1998 A
5748468 Notenboom et al. May 1998 A
5749079 Yong et al. May 1998 A
5751707 Voit et al. May 1998 A
5752027 Familiar May 1998 A
5754639 Flockhart et al. May 1998 A
5754776 Hales et al. May 1998 A
5754841 Carino, Jr. May 1998 A
5757904 Anderson May 1998 A
5781614 Brunson Jul 1998 A
5784452 Carney Jul 1998 A
5787410 McMahon Jul 1998 A
5790642 Taylor et al. Aug 1998 A
5790650 Dunn et al. Aug 1998 A
5790677 Fox et al. Aug 1998 A
5794250 Carino, Jr. et al. Aug 1998 A
5796393 MacNaughton et al. Aug 1998 A
5802282 Hales et al. Sep 1998 A
5802510 Jones Sep 1998 A
5818907 Maloney et al. Oct 1998 A
5819084 Shapiro et al. Oct 1998 A
5825869 Brooks et al. Oct 1998 A
5826039 Jones Oct 1998 A
5828747 Fisher et al. Oct 1998 A
5836011 Hambrick et al. Nov 1998 A
5838968 Culbert Nov 1998 A
5839117 Cameron et al. Nov 1998 A
5864874 Shapiro Jan 1999 A
5875437 Atkins Feb 1999 A
5880720 Iwafune et al. Mar 1999 A
5881238 Aman et al. Mar 1999 A
5884032 Bateman et al. Mar 1999 A
5889956 Hauser et al. Mar 1999 A
5897622 Blinn et al. Apr 1999 A
5903641 Tonisson May 1999 A
5903877 Berkowitz et al. May 1999 A
5905793 Flockhart et al. May 1999 A
5909669 Havens Jun 1999 A
5911134 Castonguay et al. Jun 1999 A
5914951 Bentley et al. Jun 1999 A
5915012 Miloslavsky Jun 1999 A
5923745 Hurd Jul 1999 A
5926538 Deryugin et al. Jul 1999 A
5930786 Carino, Jr. et al. Jul 1999 A
5937051 Hurd et al. Aug 1999 A
5937402 Pandit Aug 1999 A
5940496 Gisby et al. Aug 1999 A
5943416 Gisby Aug 1999 A
5948065 Eilert et al. Sep 1999 A
5960073 Kikinis et al. Sep 1999 A
5963635 Szlam et al. Oct 1999 A
5963911 Walker et al. Oct 1999 A
5970132 Brady Oct 1999 A
5974135 Breneman et al. Oct 1999 A
5974462 Aman et al. Oct 1999 A
5982873 Flockhart et al. Nov 1999 A
5987117 McNeil et al. Nov 1999 A
5991392 Miloslavsky Nov 1999 A
5996013 Delp et al. Nov 1999 A
5999963 Bruno et al. Dec 1999 A
6000832 Franklin et al. Dec 1999 A
6011844 Uppaluru et al. Jan 2000 A
6014437 Acker et al. Jan 2000 A
6031896 Gardell et al. Feb 2000 A
6038293 Mcnerney et al. Mar 2000 A
6038296 Brunson et al. Mar 2000 A
6044144 Becker et al. Mar 2000 A
6044205 Reed et al. Mar 2000 A
6044355 Crockett et al. Mar 2000 A
6049547 Fisher et al. Apr 2000 A
6049779 Berkson Apr 2000 A
6052723 Ginn Apr 2000 A
6055308 Miloslavsky et al. Apr 2000 A
6064730 Ginsberg May 2000 A
6064731 Flockhart et al. May 2000 A
6084954 Harless Jul 2000 A
6088411 Powierski et al. Jul 2000 A
6088441 Flockhart et al. Jul 2000 A
6108670 Weida et al. Aug 2000 A
6115462 Servi et al. Sep 2000 A
6128304 Gardell et al. Oct 2000 A
6151571 Pertrushin Nov 2000 A
6154769 Cherkasova et al. Nov 2000 A
6163607 Bogart et al. Dec 2000 A
6173053 Bogart et al. Jan 2001 B1
6175564 Miloslavsky et al. Jan 2001 B1
6178441 Elnozahy Jan 2001 B1
6185292 Miloslavsky Feb 2001 B1
6185603 Henderson et al. Feb 2001 B1
6192122 Flockhart et al. Feb 2001 B1
6215865 McCalmont Apr 2001 B1
6226377 Donaghue, Jr. May 2001 B1
6229819 Darland et al. May 2001 B1
6230183 Yocom et al. May 2001 B1
6233333 Dezonmo May 2001 B1
6240417 Eastwick May 2001 B1
6259969 Tackett et al. Jul 2001 B1
6263359 Fong et al. Jul 2001 B1
6272544 Mullen Aug 2001 B1
6275806 Pertrushin Aug 2001 B1
6275812 Haq et al. Aug 2001 B1
6275991 Erlin Aug 2001 B1
6278777 Morley Aug 2001 B1
6292550 Burritt Sep 2001 B1
6295353 Flockhart et al. Sep 2001 B1
6298062 Gardell et al. Oct 2001 B1
6307931 Vaudreuil Oct 2001 B1
6324282 McIllwaine et al. Nov 2001 B1
6332081 Do Dec 2001 B1
6339754 Flanagan et al. Jan 2002 B1
6353810 Petrushin Mar 2002 B1
6356632 Foster et al. Mar 2002 B1
6360222 Quinn Mar 2002 B1
6366666 Bengtson et al. Apr 2002 B2
6366668 Borst et al. Apr 2002 B1
6389028 Bondarenko et al. May 2002 B1
6389132 Price May 2002 B1
6389400 Bushey et al. May 2002 B1
6408066 Andruska et al. Jun 2002 B1
6408277 Nelken Jun 2002 B1
6411682 Fuller et al. Jun 2002 B1
6424709 Doyle et al. Jul 2002 B1
6426950 Mistry Jul 2002 B1
6427137 Petrushin Jul 2002 B2
6430282 Bannister et al. Aug 2002 B1
6434230 Gabriel Aug 2002 B1
6446092 Sutter Sep 2002 B1
6449356 Dezonno Sep 2002 B1
6449358 Anisimov et al. Sep 2002 B1
6449646 Sikora et al. Sep 2002 B1
6453038 McFarlane et al. Sep 2002 B1
6459787 McIllwaine et al. Oct 2002 B2
6463148 Brady Oct 2002 B1
6463346 Flockhart et al. Oct 2002 B1
6463415 St. John Oct 2002 B2
6463471 Dreke et al. Oct 2002 B1
6480826 Pertrushin Nov 2002 B2
6490350 McDuff et al. Dec 2002 B2
6535600 Fisher et al. Mar 2003 B1
6535601 Flockhart et al. Mar 2003 B1
6553114 Fisher et al. Apr 2003 B1
6556974 D'Alessandro Apr 2003 B1
6560330 Gabriel May 2003 B2
6560649 Mullen et al. May 2003 B1
6560707 Curtis et al. May 2003 B2
6563920 Flockhart et al. May 2003 B1
6563921 Williams et al. May 2003 B1
6571285 Groath et al. May 2003 B1
6574599 Lim et al. Jun 2003 B1
6574605 Sanders et al. Jun 2003 B1
6597685 Miloslavsky et al. Jul 2003 B2
6603854 Judkins et al. Aug 2003 B1
6604084 Powers et al. Aug 2003 B1
6614903 Flockhart et al. Sep 2003 B1
6650748 Edwards et al. Nov 2003 B1
6662188 Rasmussen et al. Dec 2003 B1
6668167 McDowell et al. Dec 2003 B2
6675168 Shapiro et al. Jan 2004 B2
6684192 Honarvar et al. Jan 2004 B2
6697457 Petrushin Feb 2004 B2
6700967 Kleinoder et al. Mar 2004 B2
6704409 Dilip et al. Mar 2004 B1
6707903 Burok et al. Mar 2004 B2
6711253 Prabhaker Mar 2004 B1
6724885 Deutsch et al. Apr 2004 B1
6735299 Krimstock et al. May 2004 B2
6735593 Williams May 2004 B1
6738462 Brunson May 2004 B1
6744877 Edwards Jun 2004 B1
6754333 Flockhart et al. Jun 2004 B1
6757362 Cooper et al. Jun 2004 B1
6766013 Flockhart et al. Jul 2004 B2
6766014 Flockhart et al. Jul 2004 B2
6766326 Cena Jul 2004 B1
6775377 McIllwaine et al. Aug 2004 B2
6785666 Nareddy et al. Aug 2004 B1
6822945 Petrovykh Nov 2004 B2
6829348 Schroeder et al. Dec 2004 B1
6839735 Wong et al. Jan 2005 B2
6842503 Wildfeuer Jan 2005 B1
6847973 Griffin et al. Jan 2005 B2
6898190 Shtivelman et al. May 2005 B2
6915305 Subramanian et al. Jul 2005 B2
6947543 Alvarado et al. Sep 2005 B2
6947988 Saleh Sep 2005 B1
6963826 Hanaman et al. Nov 2005 B2
6968052 Wullert, II Nov 2005 B2
6981061 Sakakura Dec 2005 B1
6985901 Sachse et al. Jan 2006 B1
6988126 Wilcock et al. Jan 2006 B2
7010542 Trappen et al. Mar 2006 B2
7020254 Phillips Mar 2006 B2
7035808 Ford Apr 2006 B1
7035927 Flockhart et al. Apr 2006 B2
7039176 Borodow et al. May 2006 B2
7054434 Rodenbusch et al. May 2006 B2
7062031 Becerra et al. Jun 2006 B2
7076051 Brown et al. Jul 2006 B2
7100200 Pope et al. Aug 2006 B2
7103562 Kosiba et al. Sep 2006 B2
7110525 Heller et al. Sep 2006 B1
7117193 Basko et al. Oct 2006 B1
7127058 O'Connor et al. Oct 2006 B2
7136873 Smith et al. Nov 2006 B2
7149733 Lin et al. Dec 2006 B2
7155612 Licis Dec 2006 B2
7158628 McConnell et al. Jan 2007 B2
7158909 Tarpo et al. Jan 2007 B2
7162469 Anonsen et al. Jan 2007 B2
7165075 Harter et al. Jan 2007 B2
7170976 Keagy Jan 2007 B1
7170992 Knott et al. Jan 2007 B2
7177401 Mundra et al. Feb 2007 B2
7200219 Edwards et al. Apr 2007 B1
7203655 Herbert et al. Apr 2007 B2
7212625 McKenna et al. May 2007 B1
7215744 Scherer May 2007 B2
7246371 Diacakis et al. Jul 2007 B2
7257513 Lilly Aug 2007 B2
7257597 Pryce et al. Aug 2007 B1
7266508 Owen et al. Sep 2007 B1
7283805 Agrawal Oct 2007 B2
7295669 Denton et al. Nov 2007 B1
7299259 Petrovykh Nov 2007 B2
7324954 Calderaro et al. Jan 2008 B2
7340408 Drew et al. Mar 2008 B1
7373341 Polo-Malouvier May 2008 B2
7376127 Hepworth et al. May 2008 B2
7386100 Michaelis Jun 2008 B2
7392402 Suzuki Jun 2008 B2
7418093 Knott et al. Aug 2008 B2
7499844 Whitman, Jr. Mar 2009 B2
7545761 Kalbag Jun 2009 B1
7545925 Williams Jun 2009 B2
7885209 Michaelis et al. Feb 2011 B1
20010011228 Shenkman Aug 2001 A1
20010034628 Eder Oct 2001 A1
20020012186 Nakamura et al. Jan 2002 A1
20020019829 Shapiro Feb 2002 A1
20020021307 Glenn et al. Feb 2002 A1
20020035605 McDowell et al. Mar 2002 A1
20020038422 Suwamoto et al. Mar 2002 A1
20020065894 Dalal et al. May 2002 A1
20020076010 Sahai Jun 2002 A1
20020085701 Parsons et al. Jul 2002 A1
20020087630 Wu Jul 2002 A1
20020112186 Ford et al. Aug 2002 A1
20020116336 Diacakis et al. Aug 2002 A1
20020116461 Diacakis et al. Aug 2002 A1
20020123923 Manganaris et al. Sep 2002 A1
20020147730 Kohno Oct 2002 A1
20020194002 Petrushin Dec 2002 A1
20020194096 Falcone et al. Dec 2002 A1
20030004704 Baron Jan 2003 A1
20030014491 Horvitz et al. Jan 2003 A1
20030028621 Furlong et al. Feb 2003 A1
20030073440 Mukherjee et al. Apr 2003 A1
20030093465 Banerjee et al. May 2003 A1
20030095652 Mengshoel et al. May 2003 A1
20030108186 Brown et al. Jun 2003 A1
20030144900 Whitmer Jul 2003 A1
20030144959 Makita Jul 2003 A1
20030174830 Boyer et al. Sep 2003 A1
20030177017 Boyer et al. Sep 2003 A1
20030223559 Baker Dec 2003 A1
20030231757 Harkreader et al. Dec 2003 A1
20040008828 Coles et al. Jan 2004 A1
20040015496 Anonsen Jan 2004 A1
20040015506 Anonsen et al. Jan 2004 A1
20040054743 McPartlan et al. Mar 2004 A1
20040057569 Busey et al. Mar 2004 A1
20040102940 Lendermann et al. May 2004 A1
20040103324 Band May 2004 A1
20040138944 Whitacre et al. Jul 2004 A1
20040162998 Tuomi et al. Aug 2004 A1
20040165717 Mcllwaine et al. Aug 2004 A1
20040193646 Cuckson et al. Sep 2004 A1
20040202308 Baggenstoss et al. Oct 2004 A1
20040202309 Baggenstoss et al. Oct 2004 A1
20040203878 Thomson Oct 2004 A1
20040210475 Starnes et al. Oct 2004 A1
20040240659 Gagle et al. Dec 2004 A1
20040249650 Freedman et al. Dec 2004 A1
20040260706 Anonsen et al. Dec 2004 A1
20050021529 Hodson et al. Jan 2005 A1
20050027612 Walker et al. Feb 2005 A1
20050044375 Paatero et al. Feb 2005 A1
20050049911 Engelking et al. Mar 2005 A1
20050065837 Kosiba et al. Mar 2005 A1
20050071211 Flockhart et al. Mar 2005 A1
20050071212 Flockhart et al. Mar 2005 A1
20050071241 Flockhart et al. Mar 2005 A1
20050071844 Flockhart et al. Mar 2005 A1
20050091071 Lee Apr 2005 A1
20050125432 Lin et al. Jun 2005 A1
20050125458 Sutherland et al. Jun 2005 A1
20050138064 Trappen et al. Jun 2005 A1
20050154708 Sun Jul 2005 A1
20050182784 Trappen et al. Aug 2005 A1
20050228707 Hendrickson Oct 2005 A1
20050261035 Groskreutz et al. Nov 2005 A1
20050283393 White et al. Dec 2005 A1
20050289446 Moncsko et al. Dec 2005 A1
20060004686 Molnar et al. Jan 2006 A1
20060007916 Jones et al. Jan 2006 A1
20060015388 Flockhart et al. Jan 2006 A1
20060026049 Joseph et al. Feb 2006 A1
20060056598 Brandt et al. Mar 2006 A1
20060058049 McLaughlin et al. Mar 2006 A1
20060100973 McMaster et al. May 2006 A1
20060135058 Karabinis Jun 2006 A1
20060167667 Maturana et al. Jul 2006 A1
20060178994 Stolfo et al. Aug 2006 A1
20060242160 Kanchwalla et al. Oct 2006 A1
20060256957 Fain et al. Nov 2006 A1
20060271418 Hackbarth et al. Nov 2006 A1
20060285648 Wahl et al. Dec 2006 A1
20070038632 Engstrom Feb 2007 A1
20070064912 Kagan et al. Mar 2007 A1
20070083572 Bland et al. Apr 2007 A1
20070112953 Barnett May 2007 A1
20070127643 Keagy Jun 2007 A1
20070156375 Meier et al. Jul 2007 A1
20070192414 Chen et al. Aug 2007 A1
20070201311 Olson Aug 2007 A1
20070201674 Annadata et al. Aug 2007 A1
20070230681 Boyer et al. Oct 2007 A1
20070294343 Daswani et al. Dec 2007 A1
20080056165 Petrovykh Mar 2008 A1
Foreign Referenced Citations (33)
Number Date Country
2143198 Jan 1995 CA
2174762 Mar 1996 CA
0501189 Sep 1992 EP
0576205 Dec 1993 EP
0740450 Oct 1996 EP
0770967 May 1997 EP
0772335 May 1997 EP
0829996 Mar 1998 EP
0855826 Jul 1998 EP
0863651 Sep 1998 EP
0866407 Sep 1998 EP
899673 Mar 1999 EP
998108 May 2000 EP
1035718 Sep 2000 EP
1091307 Apr 2001 EP
1150236 Oct 2001 EP
1761078 Mar 2007 EP
2273418 Jun 1994 GB
2290192 Dec 1995 GB
07-007573 Jan 1995 JP
2001-053843 Feb 2001 JP
2002-032977 Jan 2002 JP
2002-304313 Oct 2002 JP
2006-054864 Feb 2006 JP
WO 9607141 Mar 1996 WO
WO 9728635 Aug 1997 WO
WO 9856207 Dec 1998 WO
WO 9917522 Apr 1999 WO
WO 0026804 May 2000 WO
WO 0026816 May 2000 WO
WO 0180094 Oct 2001 WO
WO 02099640 Dec 2002 WO
WO 03015425 Feb 2003 WO
Non-Patent Literature Citations (146)
Entry
Akitsu, “An Introduction of Run Time Library for C Program, the fourth round,” C Magazine, Jul. 1, 1990, vol. 2(7), pp. 78-83.
Emura, “Windows API Utilization Guide, Points for Knowledges and Technologies,” C Magazine, Oct. 1, 2005, vol. 17(10), pp. 147-150.
Examiner's Office Letter (including translation) for Japanese Patent Application No. 2007-043414, mailed Jul. 7, 2010.
Koutarou, “Building a Framework for EC using Hibernate, OSWorkflow”, JAVA Press, Japan, Gujutsu Hyouron Company, vol. 25, 2004, pp. 132-147.
Microsoft® Access 97 for Windows® Application Development Guide, Ver. 8.0, Microsoft Corp., first edition, pp. 569-599.
Seo, “akuto/FC shop sale assistant system etc., compressing into halves the number of days for stock possession by a multi-vendor ERP plus POS”, Network Computing, Japan Licktelecom Corp., vol. 12, No. 4, Apr. 1, 2000, pp. 45-49.
Bischoff et al., “Data Warehouse Building Method—practical advice from experienced practitioners and experts”, Kyouritsu Shuppan Corp., May 30, 2000, first edition, pp. 197-216.
US 6,537,685, 3/2003, Higuchi (withdrawn).
U.S. Appl. No. 11/517,646, Hackbarth.
U.S. Appl. No. 10/815,566, Kiefhaber.
Ahmed, Sarah, “A Scalable Byzantine Fault Tolerant Secure Domain Name System,” thesis submitted to Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, Jan. 22, 2001, 101 pages.
Aspect—“Analysis and Reporting,” http://aspect.com/products/analysis/index.cfm, (Copyright 2005) (1 page).
Aspect—“Call Center Reports,” http://aspect.com/products/analysis/ccreporting.cfm, (Copyright 2005) (2 pages).
Aspect—“Performance Optimization,” http://aspect.com/products/wfm/performanceopt.cfm?section=performanceopt, (Copyright 2005) (1 page).
Avaya—“Avaya and Blue Pumpkin—Providing Workforce Optimization Solutions” (Copyright 2004) (3 pages).
Avaya—“Avaya and Texas Digital Systems—Providing Real-time Access to Call Statistics” (Copyright 2004) (3 pages).
Avaya—“Avaya Basic Call Management System Reporting Desktop” (Copyright 2002) (4 pages).
Avaya—“Avaya Call Management System” (Copyright 2003) (3 pages).
Avaya—“Basic Call Management System Reporting Desktop,” Product Description, http://www.avaya.com/gcm/master-usa/en-us/products/offers/bcmrs—desktop.htm (Copyright 2005) (2 pages).
Avaya—“Basic Call Management System Reporting Desktop,” Product Features, http://www.avaya.com/gcm/master-usa/en-us/products/offers/bcmrs—desktop.htm (Copyright 2005) (2 pages).
Avaya—“Basic Call Management System Reporting Desktop,” Product Overview, http://www.avaya.com/gcm/master-usa/en-us/products/offers/bcmrs—desktop.htm (Copyright 2005) (2 pages).
Avaya—“Basic Call Management System Reporting Desktop,” Product Technical, http://www.avaya.com/gcm/master-usa/en-us/products/offers/bcmrs—desktop.htm (Copyright 2005) (2 pages).
Avaya—“Call Management System,” Product Description, http://www.avaya.com/gcm/master-usa/en-us/products/offers/call—management—system.htm (Copyright 2005) (2 pages).
Avaya—“Call Management System,” Product Features, http://www.avaya.com/gcm/masterusa/en-us/products/offers/call—management—system.htm (Copyright 2005) (3 pages).
Avaya—“Call Management System,” Product Overview, http://www.avaya.com/gcm/masterusa/en-us/products/offers/call—management—system.htm (Copyright 2005) (2 pages).
Avaya—“Call Management System,” Product Technical, http://www.avaya.com/gcm/master-usa/en-us/products/offers/call—management—system.htm (Copyright 2005) (2 pages).
Avaya—“Multi Channel Product Authorization,” (PA) Version 5.0, (Nov. 2003) (6 pages).
Avaya, Inc. Business Advocate Options, at http://www.avaya.com, downloaded on Feb. 15, 2003, Avaya, Inc. 2003.
Avaya, Inc. Business Advocate Product Summary, at http://www.avaya.com, downloaded on Feb. 15, 2003, Avaya, Inc. 2003, 3 pages.
Avaya, Inc. CentreVu Advocate, Release 9, User Guide, Dec. 2000.
Avaya, Inc., “Better Implementation of IP in Large Networks,” Avaya, Inc. 2002, 14 pages.
Avaya, Inc., “The Advantages of Load Balancing in the Multi-Call Center Enterprise,” Avaya, Inc., 2002, 14 pages.
Avaya, Inc., “Voice Over IP Via Virtual Private Networks: An Overview,” Avaya, Inc., Feb. 2001, 9 pages.
Bellsouth Corp., “Frequently Asked Questions—What is a registrar?,” available at https://registration.bellsouth.net/NASApp/DNSWebUI/FAQ.jsp, downloaded Mar. 31, 2003, 4 pages.
Chavez, David, et al., “Avaya MultiVantage Software: Adapting Proven Call Processing for the Transition to Converged IP Networks,” Avaya, Inc., Aug. 2002.
Cherry, “Anger Management,” IEEE Spectrum (Apr. 2005) (1 page).
Coles, Scott, “A Guide for Ensuring Service Quality in IP Voice Networks,” Avaya, Inc., 2002, pp. 1-17.
Dawson, “NPRI's Powerguide, Software Overview” Call Center Magazine (Jun. 1993), p. 85.
DEFINITY Communications System Generic 3 Call Vectoring/Expert Agent Selection (EAS) Guide, AT&T publication No. 555-230-520 (Issue 3, Nov. 1993).
Doo-Hyun Kim et al. “Collaborative Multimedia Middleware Architecture and Advanced Internet Call Center,” Proceedings of the International Conference on Information Networking (Jan. 31, 2001), pp. 246-250.
E. Noth et al., “Research Issues for the Next Generation Spoken”: University of Erlangen-Nuremberg, Bavarian Research Centre for Knowledge-Based Systems, at http://www5.informatik.uni-erlangen.de/literature/psdir/1999/Noeth99:RIF.ps.gz, printed Feb. 10, 2003; 8 pages.
Foster, Robin, et al., “Avaya Business Advocate and its Relationship to Multi-Site Load Balancing Applications,” Avaya, Inc., Mar. 2002, 14 pages.
GEOTEL Communications Corporation Web site printout entitled “Intelligent CallRouter” Optimizing the Interaction Between Customers and Answering Resources., 1998, 6 pages.
John H.L. Hansen and Levent M. Arslan, Foreign Accent Classification Using Source Generator Based Prosodic Features, IEEE Proc. ICASSP, vol. 1, pp. 836-839, Detroit USA (May 1995).
L.F. Lamel and J.L. Gauvain, Language Identification Using Phone-Based Acoustic Likelihood, ICASSP-94, date unknown; 4 pages.
Levent M. Arslan and John H.L. Hansen, Language Accent Classification in American English, Robust Speech Processing Laboratory, Duke University Department of Electrical Engineering, Durham, NC, Technical Report RSPL-96-7, revised Jan. 29, 1996, pp. 1-16.
Levent M. Arslan, Foreign Accent Classification in American English, Department of Electrical Computer Engineering, Duke University, Thesis, pp. 1-200 (1996).
MIT Project Oxygen, Pervasive, Human-Centered Computing (MIT Laboratory for Computer Science) (Jun. 2000) pp. 1-15.
NICE Systems—“Insight from Interactions,” “Overwhelmed by the Amount of Data at your Contact Center?” http://www.nice.com/products/multimedia/analyzer.php, (Printed May 19, 2005) (2 pages).
NICE Systems—“Multimedia Interaction Products,” “Insight from Interactions,” http://www.nice.com/products/multimedia/contact—centers.php (Printed May 19, 2005) (3 pages).
Nortel—“Centrex Internet Enabled Call Centers,” http://www.products.nortel.com/go/product—assoc.jsp?segId=0&parID=0&catID=-9191&rend—id . . . (Copyright 1999-2005) (1 page).
Presentation by Victor Zue, The MIT Oxygen Project, MIT Laboratory for Computer Science (Apr. 25-26, 2000) 9 pages.
Stevenson et al.; “Name Resolution in Network and Systems Management Environments”; http://netman.cit.buffalo.edu/Doc/DStevenson/NR-NMSE.html; printed Mar. 31, 2003; 16 pages.
“Call Center Recording for Call Center Quality Assurance”, Voice Print International, Inc., available at http://www.voiceprintonline.com/call-center-recording.asp?ad—src=google&srch—trm=call—center—monitoring, date unknown, printed May 10, 2007, 2 pages.
“KANA—Contact Center Support”, available at http://www.kana.com/solutions.php?tid=46, copyright 2006, 3 pages.
“Monitoring: OneSight Call Statistics Monitors”, available at http://www.empirix.com/default.asp?action=article&ID=301, date unknown, printed May 10, 2007, 2 pages.
“Oracle and Siebel” Oracle, available at http://www.oracle.com/siebel/index.html, date unknown, printed May 10, 2007, 2 pages.
“Applications, NPRI's Predictive Dialing Package,” Computer Technology (Fall 1993), p. 86.
“Call Center Software You Can't Outgrow,” Telemarketing® (Jul. 1993), p. 105.
“Domain Name Services,” available at http://www.pism.com/chapt09/chapt09.html, downloaded Mar. 31, 2003, 21 pages.
“eGain's Commerce 2000 Platform Sets New Standard for eCommerce Customer Communications,” Business Wire (Nov. 15, 1999), 3 pages.
“Internet Protocol Addressing,” available at http://samspade.org/d/ipdns.html, downloaded Mar. 31, 2003, 9 pages.
“Product Features,” Guide to Call Center Automation, Brock Control Systems, Inc., Activity Managers Series™, Section 5—Company B120, p. 59, 1992.
“Product Features,” Guide to Call Center Automation, CRC Information Systems, Inc., Tel-ATHENA, Section 5—Company C520, p. 95, 1992.
“VAST™, Voicelink Application Software for Teleservicing®,” System Manager User's Guide, Digital Systems (1994), pp. ii, vii-ix, 1-2, 2-41 through 2-77.
“When Talk Isn't Cheap,” Sm@rt Reseller, v. 3, n. 13 (Apr. 3, 2000), p. 50.
U.S. Appl. No. 10/683,039, filed Oct. 10, 2003, Flockhart et al.
U.S. Appl. No. 10/815,534, filed Mar. 31, 2004, Kiefhaber.
U.S. Appl. No. 10/815,584, filed Mar. 31, 2004, Kiefhaber.
U.S. Appl. No. 10/861,193, filed Jun. 3, 2004, Kiefhaber.
U.S. Appl. No. 10/946,638, filed Sep. 20, 2004, Flockhart et al.
U.S. Appl. No. 11/087,290, filed Mar. 22, 2005, Michaelis.
U.S. Appl. No. 11/199,828, filed Aug. 8, 2005, Bland et al.
U.S. Appl. No. 11/245,724, filed Oct. 6, 2005, Flockhart et al.
U.S. Appl. No. 11/861,857, filed Sep. 26, 2007, Tendick et al.
“Still Leaving It To Fate?: Optimizing Workforce Management”, Durr, William Jr., Nov. 2001.
G. Hellstrom et al., “RFC 2793—RTP Payload for Text Conversation,” Network Working Group Request for Comments 2793 (May 2000), available at http://www.faqs.org/rfcs/rfc2793.html, 8 pages.
H. Schulzrinne et al., “RFC 2833—RTP Payload for DTMF Digits, Telephony Tones and Telephony Signals,” Network Working Group Request for Comments 2833 (May 2000), available at http://www.faqs.org/rfcs/rfc2833.html, 23 pages.
“Services for Computer Supported Telecommunications Applications (CSTA) Phase III”; Standard ECMA-269, 5th Edition—Dec. 2002; ECMA International Standardizing Information and Communication Systems; URL: http://www.ecma.ch; pp. 1-666 (Parts 1-8).
“Access for 9-1-1 and Telephone Emergency Services,” Americans with Disabilities Act, U.S. Department of Justice, Civil Rights Division (Jul. 15, 1998), available at http://www.usdoj.gov/crt/ada/911ta.htm, 11 pages.
Kimball, et al., “Practical Techniques for Extracting, Cleaning, Conforming, and Delivering Data.” The Data Warehouse ETL Toolkit. 2004. Ch. 5, pp. 170-174.
Snape, James, “Time Dimension and Time Zones.” 2004. pp. 1-10. http://www.jamessnape.me.uk/blog/CommentView,gui,79e910a1-0150-4452-bda3-e98d.
Data Warehouse Designer—Divide and Conquer, Build Your Data Warehouse One Piece at a Time, Ralph Kimball, Oct. 30, 2002, 3 pages.
Data Warehouse Designer—The Soul of the Data Warehouse, Part One: Drilling Down, Ralph Kimball, Mar. 20, 2003, 3 pages.
Data Warehouse Designer—The Soul of the Data Warehouse, Part Two: Drilling Across, Ralph Kimball, Apr. 5, 2003, 3 pages.
Data Warehouse Designer—The Soul of the Data Warehouse, Part Three: Handling Time, Ralph Kimball, Apr. 22, 2003, 3 pages.
Data Warehouse Designer—TCO Starts with the End User, Ralph Kimball, May 13, 2003, http://www.intelligententerprise.com/030513/608warehouse1—1.jhtml?—requestid=598425, 3 pages.
Creating and Using Data Warehouse—Using Dimensional Modeling (Microsoft) downloaded May 18, 2005 http://msdn.microsoft.com/library/en-us/createdw/createdw—39z.asp?frame=true 1 page.
Creating and Using Data Warehouse Dimension Tables (Microsoft) copyright 2005, http://msdn.microsoft.com/library/en-us/createdw/createdw—10kz.asp?frame=true, 3 pages.
DMReview—Business Dimensional Modeling: The Logical Next Step: Translating the BDM, Laura Reeves, published May 2004, 4 pages.
Multi-Dimensional Modeling with BW ASAP for BW Accelerator Business Information Warehouse, copyright 2000, 71 pages.
ComputerWorld, ETL, M. Songini, at http://www.computerworld.com/databasetopics/businessintelligence/datawarehouse/story/ . . . , copyright 2005, 5 pages.
Kimball, et al., “The Complete Guide to Dimensional Modeling.” The Data Warehouse Toolkit. 2nd Edition, 2002. Ch. 11, pp. 240-241.
Fundamentals of Data Warehousing—Unit 3—Dimensional Modeling, Fundamentals of Data Warehousing, copyright 2005—Evolve Computer Solutions, 55 pages.
The Importance of Data Modeling as a Foundation for Business Insight, Larissa Moss and Steve Hoberman, copyright 2004, 38 pages.
CS 345: Topics in Data Warehousing, Oct. 5, 2004, 36 pages.
An Expert's Guide to Oracle Technology blog, My Personal Dictionary, Lewis R. Cunningham, posted Mar. 31, 2005, http://blogs.ittoolbox.com/oracle'guide/archives003684.asp, 4 pages.
Data Warehouse Designer—Fact Tables and Dimension, Jan. 1, 2003, http://www.intelligententerprise.com/030101/602warehouse1—1.jhtml, Ralph Kimball, 3 pages.
Glossary—Curlingstone Publishing, http://www.curlingstone.com/7002/7002glossary.html, downloaded May 24, 2005, 11 pages.
Data Warehouse—Surrogate Keys, Keep Control Over Record Identifiers by Generating New Keys for the Data Warehouse, Ralph Kimball, May 1998, 4 pages.
Data Warehouse Designer—An Engineer's View—It's Worthwhile to Remind Ourselves Why We Build Data Warehouses the Way We Do, Ralph Kimball, Jul. 26, 2002, 3 pages.
Data Warehouse Designer—Design Constraints and Unavoidable Realities, No Design Problem in School Was This Hard, Ralph Kimball, Sep. 3, 2002, 3 pages.
Data Warehouse Designer—Two Powerful Ideas, The Foundation for Modern Data Warehousing, Ralph Kimball, Sep. 17, 2002, 3 pages.
A.A. Vaisman et al., “A Temporal Query Language for OLAP: Implementation and a Case Study”, LNCS, 2001, vol. 2397, 36 pages.
A.B. Schwarzkopf, “Dimensional Modeling for a Data Warehouse”, date unknown, 18 pages.
Atkins et al.; “Common Presence and Instant Messaging: Message Format,” Network Working Group (Jan. 9, 2003), available at http://www.ietf.org/internet-drafts/draft-ietf-impp-cpim-msgfmt-08.txt, 31 pages.
Bill Michael, “The Politics of Naming” www.cConvergence.com (Jul. 2001) pp. 31-35.
Crocker et al.; “Common Presence and Instant Messaging (CPIM),” Network Working Group (Aug. 14, 2002), available at http://www.ietf.org/internet-drafts/draft-ietf-impp-cpim-03.txt, 33 pages.
Day et al.; “A Model for Presence and Instant Messaging,” Network Working Group (Feb. 2000), available at http://www.ietf.org/rfc/rfc2778.txt?number=2778, 16 pages.
Day et al.; “Instant Messaging/Presence Protocol Requirements,” Network Working Group (Feb. 2000), available at http://www.ietf.org/rfc/rfc2779.txt?number=2779, 25 pages.
E. Veerman, “Designing a Dimensional Model”, date unknown, 38 pages.
G. Wiederhold, “Mediation to Deal with Heterogeneous Data Sources”, Stanford University, Jan. 1999, 19 pages.
Gulbrandsen et al.; “A DNS RR for Specifying the Location of Services (DNS SRV),” Network Working Group (Feb. 2000), available at http://www.ietf.org/rfc/rfc2782.txt?number=2782, 12 pages.
J. Cahoon, “Fast Development of a Data Warehouse Using MOF, CWM and Code Generation”, CubeModel, May 22, 2006, 32 pages.
J.E. Bentley, “Metadata: Everyone Talks About It, But What Is It?”, First Union National Bank, date unknown, 5 pages.
L. Cabibbo et al., “An Architecture for Data Warehousing Supporting Data Independence and Interoperability”, International Journal of Cooperative Information Systems, Nov. 2004, 41 pages.
Richard Shockey, “ENUM: Phone Numbers Meet the Net” www.cConvergence.com (Jul. 2001) pp. 21-30.
Rose et al.; “The APEX Presence Service,” Network Working Group (Jan. 14, 2002), available at http://www.ietf.org/internet-drafts/draft-ietf-apex-presence-06.txt, 31 pages.
Sugano et al.; “Common Presence and Instant Messaging (CPIM) Presence Information Data Format,” Network Working Group (Dec. 2002), available at http://www.ietf.org/internet-drafts/draft-ietf-impp-cpim-pidf-07.txt, 26 pages.
Intelligent Enterprise Magazine—Data Warehouse Designer: Fact Tables and Dimension, downloaded May 18, 2005, http://www.intelligententerprise.com/030101/602warehouse1—1.jhtml, 7 pages.
Andy Zmolek; “Simple and Presence: Enterprise Value Propositions,” Avaya presentation, 16 pages, presented Jan. 24, 2002.
Berners-Lee et al.; “Uniform Resource Identifiers (URI); Generic Syntax,” Network Working Group, Request for Comments 2396 (Aug. 1998), 38 pages.
Dawson et al.; “Vcard MIME Directory Profile,” Network Working Group (Sep. 1998), available at http://www.ietf.org/rfc/rfc2426.txt?number=2426, 40 pages.
Fielding et al.; “Hypertext Transfer Protocol—HTTP/1.1,” Network Working Group, Request for Comments 2068 (Jan. 1997), 152 pages.
G. Klyne; “A Syntax for Describing Media Feature Sets,” Network Working Group (Mar. 1999), available at http://www.ietf.org/rfc/rfc2533.txt?number=2533, 35 pages.
G. Klyne; “Protocol-independent Content Negotiation Framework,” Network Working Group (Sep. 1999), available at http://www.ietf.org/rfc/rfc2703.txt?number=2703, 19 pages.
Holtman et al.; “HTTP Remote Variant Selection Algorithm—RVSA/1.0,” Network Working Group (Mar. 1998), available at http://www.ietf.org/rfc/rfc2296.txt?number=2296, 13 pages.
Holtman et al.; “Transparent Content Negotiation in HTTP,” Network Working Group (Mar. 1998), available at http://www.ietf.org/rfc/rfc2295.txt?number=2295, 55 pages.
Background for the above-captioned application (previously provided).
U.S. Appl. No. 12/193,542, filed Aug. 18, 2008, Olson.
U.S. Appl. No. 12/242,916, filed Oct. 1, 2008, Kiefhaber et al.
Dillion, “Renaming fields and tracing dependencies”, available at http://allenbrowne.com/ser-41.html, Nov. 2003, updated May 2006, 1 page.
Karakasidis, A., "Queues for Active Data Warehousing," Jun. 17, 2005, Baltimore, MD, in Proceedings on Information Quality in Information Systems (IQIS 2005), pp. 28-39, ISBN: 1-59593-160-0.
Sarda, "Temporal Issues in Data Warehouse Systems," 1999, Database Applications in Non-Traditional Environments (DANTE'99), p. 27, DOI: 10.1109/DANTE.1999.844938.
Thayer Watkins, "Cost Benefit Analysis," 1999, San Jose State University Economics Department, Web Archive http://web.archive.org/web/19990225143131/http://www.sjsu.edu/faculty/watkins/cba.htm.
“Learn the structure of an Access database”, available at http://office.microsoft.com/en-us/access/HA012139541033.aspx, site updated Nov. 13, 2007, pp. 1-4.
U.S. Appl. No. 11/242,687, filed Oct. 3, 2005, Krimstock et al.
Microsoft Office Animated Help Tool, date unknown, 1 page.
U.S. Appl. No. 11/956,779, filed Dec. 14, 2007, Burritt et al.
U.S. Appl. No. 12/569,581, filed Sep. 29, 2009, Michaelis.
Google Docs “IP Softphone for Windows Mobile 5” printed on Sep. 15, 2009 from http://docs.google.com/gview?a=v&q=cache:92VrteFXqm8J:support.avaya.com/css/P8/documents/100021136+Avaya+telecom . . . , 1 page.
Overview of Avaya IP Softphone printed on Sep. 15, 2009 from http://support.avaya.com/elmodocs2/ip_softphone/Overview_IP_Softphone_R6.htm, 2 pages.
Product Brief of “Avaya IP Agent” printed on Sep. 15, 2009 from http://docs.google.com/gview?a=v&q=cache:IRR32Pfzp98J:www.nacr.com/uploadedFiles/Products/Avaya%2520IP%2520Age . . . , 1 page.
Product Description of “Avaya one-X Agent,” printed on Sep. 15, 2009 from http://www.avaya.com/usa/product/avaya-one-x-agent, 1 page.
Product Overview of “IP Softphone” printed on Sep. 15, 2009 from http://www.nacr.com/Products.aspx?id=236, 3 pages.
Venkatesan et al., “A Customer Lifetime Value Framework for Customer Selection and Resource Allocation Strategy,” Journal of Marketing, Oct. 2004, vol. 68, pp. 106-125.
Provisional Applications (1)
Number Date Country
60/824,876 Sep. 2006 US