System and method for communication analysis for use with agent assist within a cloud-based contact center

Information

  • Patent Grant
  • Patent Number
    11,706,339
  • Date Filed
    October 30, 2019
  • Date Issued
    July 18, 2023
Abstract
Methods to reduce agent effort and improve customer experience quality through artificial intelligence. The Agent Assist tool provides contact centers with an innovative tool designed to reduce agent effort, improve quality and reduce costs by minimizing search and data entry tasks. The Agent Assist tool is natively built and fully unified within the agent interface while keeping all data internally protected from third-party sharing.
Description
BACKGROUND

Today, contact centers are primarily on-premise software solutions. This requires an enterprise to make a substantial investment in hardware, installation and regular maintenance of such solutions. Using on-premise software, agents and supervisors are stationed in an on-site call center. In addition, a dedicated IT staff is required because on-site software may be too complicated for supervisors and agents to handle on their own. Another drawback of on-premise solutions is that such solutions cannot be easily enhanced to include capabilities that meet the current demands of technology, such as automation. Thus, there is a need for a solution that enhances the agent experience and the interactions with customers who contact such contact centers.


SUMMARY

Disclosed herein are systems and methods for a cloud-based contact center solution that provides agent automation through the use of, e.g., artificial intelligence and the like.


In accordance with an aspect, there is disclosed a method, comprising receiving a communication from a customer; automatically analyzing the communication to determine a subject of the customer's communication; automatically querying a database of communications between other customers and other agents related to the subject of the customer's communication; determining at least one responsive answer to the subject from the database; and providing the at least one responsive answer to an agent during the communication with the customer. In accordance with another aspect, a cloud-based software platform is disclosed in which the example method above is performed.


Other systems, methods, features and/or advantages will be or may become apparent to one with skill in the art upon examination of the following drawings and detailed description. It is intended that all such additional systems, methods, features and/or advantages be included within this description and be protected by the accompanying claims.





BRIEF DESCRIPTION OF THE DRAWINGS

The components in the drawings are not necessarily to scale relative to each other. Like reference numerals designate corresponding parts throughout the several views.



FIG. 1 illustrates an example environment;



FIG. 2 illustrates example components that provide automation, routing and/or omnichannel functionalities within the context of the environment of FIG. 1;



FIG. 3 illustrates a high-level overview of interactions, components and flow of Agent Assist in accordance with the present disclosure;



FIG. 4 illustrates an example operational flow in accordance with the present disclosure and provides additional details of the high-level overview shown in FIG. 3;



FIGS. 5A, 5B and 5C illustrate an example unified interface showing aspects of the operational flows of FIGS. 3 and 4;



FIG. 6 illustrates an operational flow to analyze a conversation to create smart notes;



FIG. 7 illustrates an example smart notes user interface;



FIG. 8 illustrates an operational flow to analyze a conversation to pre-populate forms;



FIG. 9 illustrates an example automatic scheduling user interface;



FIG. 10 illustrates an overview of the real-time analytics aspect of Agent Assist;



FIG. 11 illustrates an example operational flow to classify agent conversations;



FIG. 12 illustrates an example operational flow of escalation assistance; and



FIG. 13 illustrates an example computing device.





DETAILED DESCRIPTION

Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art. Methods and materials similar or equivalent to those described herein can be used in the practice or testing of the present disclosure. While implementations will be described within a cloud-based contact center, it will become evident to those skilled in the art that the implementations are not limited thereto.


The present disclosure is generally directed to a cloud-based contact center and, more particularly, to methods and systems for providing intelligent, automated services within a cloud-based contact center. With the rise of cloud-based computing, contact centers that take advantage of this infrastructure are able to quickly add new features and channels. Cloud-based contact centers improve the customer experience by leveraging application programming interfaces (APIs) and software development kits (SDKs) that allow the contact center to change in response to an enterprise's needs. For example, the APIs and SDKs enable communications channels, such as SMS/MMS, social media, web, etc., to be easily added. Cloud-based contact centers provide a platform that enables frequent updates. Yet another advantage of cloud-based contact centers is increased reliability, as cloud-based contact centers may be strategically and geographically distributed around the world to optimally route calls, reduce latency and provide the highest quality experience. As such, customers are connected to agents faster and more efficiently.


Example Cloud-Based Contact Center Architecture



FIG. 1 is an example system architecture 100, and illustrates example components, functional capabilities and optional modules that may be included in a cloud-based contact center infrastructure solution. Customers 110 interact with a contact center 150 using voice, email, text, and web interfaces in order to communicate with agent(s) 120 through a network 130 and one or more channels 140. The agent(s) 120 may be remote from the contact center 150 and handle communications with customers 110 on behalf of an enterprise or other entity. The agent(s) 120 may utilize devices, such as but not limited to, workstations, desktop computers, laptops, telephones, a mobile smartphone and/or a tablet. Similarly, customers 110 may communicate using a plurality of devices, including but not limited to, a telephone, a mobile smartphone, a tablet, a laptop, a desktop computer, or other. For example, telephone communication may traverse networks such as a public switched telephone network (PSTN), Voice over Internet Protocol (VoIP) telephony (via the Internet), a Wide Area Network (WAN) or a Local Area Network (LAN). The network types are provided by way of example and are not intended to limit the types of networks used for communications.


The contact center 150 may be cloud-based and distributed over a plurality of locations. The contact center 150 may include servers, databases, and other components. In particular, the contact center 150 may include, but is not limited to, a routing server, a SIP server, an outbound server, automated call distribution (ACD), a computer telephony integration server (CTI), an email server, an IM server, a social server, a SMS server, and one or more databases for routing, historical information and campaigns.


The routing server may serve as an adapter or interface between the switch and the remainder of the routing, monitoring, and other communication-handling components of the contact center. The routing server may be configured to process PSTN calls, VoIP calls, and the like. For example, the routing server may be configured with the CTI server software for interfacing with the switch/media gateway and contact center equipment. In other examples, the routing server may include the SIP server for processing SIP calls. The routing server may extract data about the customer interaction such as the caller's telephone number (often known as the automatic number identification (ANI) number), or the customer's internet protocol (IP) address, or email address, and communicate with other contact center components in processing the interaction.


The ACD is used by inbound, outbound and blended contact centers to manage the flow of interactions by routing and queuing them to the most appropriate agent. Within the CTI, software connects the ACD to a servicing application (e.g., customer service, CRM, sales, collections, etc.), and looks up or records information about the caller. CTI may display a customer's account information on the agent desktop when an interaction is delivered.


For inbound SIP messages, the routing server may use statistical data from the statistics server and a routing database to route the SIP request message. A response may be sent to the media server directing it to route the interaction to a target agent 120. The routing database may include: customer relationship management (CRM) data; data pertaining to one or more social networks (including, but not limited to, network graphs capturing social relationships within relevant social networks, or media updates made by members of relevant social networks); agent skills data; data extracted from third party data sources including cloud-based data sources such as CRM; or any other data that may be useful in making routing decisions.


Customers 110 may initiate inbound communications (e.g., telephony calls, emails, chats, video chats, social media posts, etc.) to the contact center 150 via an end user device. End user devices may be a communication device, such as a telephone, wireless phone, smart phone, personal computer, electronic tablet, etc., to name some non-limiting examples. Customers 110 operating the end user devices may initiate, manage, and respond to telephone calls, emails, chats, text messaging, web-browsing sessions, and other multi-media transactions. Agent(s) 120 and customers 110 may communicate with each other and with other services over the network 130. For example, a customer calling on a telephone handset may connect through the PSTN and terminate on a private branch exchange (PBX). A video call originating from a tablet may connect through the network 130 and terminate on the media server. The channels 140 are coupled to the communications network 130 for receiving and transmitting telephony calls between customers 110 and the contact center 150. A media gateway may include a telephony switch or communication switch for routing within the contact center. The switch may be a hardware switching system or a soft switch implemented via software. For example, the media gateway may communicate with an automatic call distributor (ACD), a private branch exchange (PBX), an IP-based software switch and/or other switch to receive Internet-based interactions and/or telephone network-based interactions from a customer 110 and route those interactions to an agent 120. More detail of these interactions is provided below.


As another example, a customer smartphone may connect via the WAN and terminate on interactive voice response (IVR)/intelligent virtual agent (IVA) components. IVRs are self-service voice tools that automate the handling of incoming and outgoing calls. Advanced IVRs use speech recognition technology to enable customers 110 to interact with them by speaking instead of pushing buttons on their phones. IVR applications may be used to collect data, schedule callbacks and transfer calls to live agents. IVA systems are more advanced and utilize artificial intelligence (AI), machine learning (ML), and advanced speech technologies (e.g., natural language understanding (NLU)/natural language processing (NLP)/natural language generation (NLG)) to simulate live and unstructured cognitive conversations for voice, text and digital interactions. IVA systems may cover a variety of media channels in addition to voice, including, but not limited to, social media, email, SMS/MMS, IM, etc., and they may communicate with their counterpart applications (not shown) within the contact center 150. The IVA system may be configured with a script for querying customers on their needs. The IVA system may ask an open-ended question such as, for example, “How can I help you?” and the customer 110 may speak or otherwise enter a reason for contacting the contact center 150. The customer's response may then be used by a routing server to route the call or communication to an appropriate contact center resource.


In response, the routing server may find an appropriate agent 120 or automated resource to which an inbound customer communication is to be routed, for example, based on a routing strategy employed by the routing server, and further based on information about agent availability, skills, and other routing parameters provided, for example, by the statistics server. The routing server may query one or more databases, such as a customer database, which stores information about existing clients, such as contact information, service level agreement requirements, the nature of previous customer contacts and actions taken by the contact center to resolve any customer issues, etc. The routing server may query the customer information from the customer database via an ANI or any other information collected by the IVA system.


Once an appropriate agent and/or automated resource is identified as being available to handle a communication, a connection may be made between the customer 110 and an agent device of the identified agent 120 and/or the automated resource. Collected information about the customer and/or the customer's historical information may also be provided to the agent device for aiding the agent in better servicing the communication. In this regard, each agent device may include a telephone adapted for regular telephone calls, VoIP calls, etc. The agent device may also include a computer for communicating with one or more servers of the contact center and performing data processing associated with contact center operations, and for interfacing with customers via voice and other multimedia communication mechanisms.


The contact center 150 may also include a multimedia/social media server for engaging in media interactions other than voice interactions with the end user devices and/or other web servers 160. The media interactions may be related, for example, to email, vmail (voice mail through email), chat, video, text-messaging, web, social media, co-browsing, etc. In this regard, the multimedia/social media server may take the form of any IP router conventional in the art with specialized hardware and software for receiving, processing, and forwarding multi-media events.


The web servers 160 may include, for example, social media sites, such as Facebook, Twitter, Instagram, etc. In this regard, the web servers 160 may be provided by third parties and/or maintained outside of the contact center 150, and communicate with the contact center 150 over the network 130. The web servers 160 may also provide web pages for the enterprise that is being supported by the contact center 150. End users may browse the web pages and get information about the enterprise's products and services. The web pages may also provide a mechanism for contacting the contact center via, for example, web chat, voice call, email, WebRTC, etc.


The integration of real-time and non-real-time communication services may be performed by a unified communications (UC)/presence server. Real-time communication services include Internet Protocol (IP) telephony, call control, instant messaging (IM)/chat, presence information, real-time video and data sharing. Non-real-time applications include voicemail, email, SMS and fax services. The communications services are delivered over a variety of communications devices, including IP phones, personal computers (PCs), smartphones and tablets. Presence provides real-time status information about the availability of each person in the network, as well as their preferred method of communication (e.g., phone, email, chat and video).


Recording applications may be used to capture and play back audio and screen interactions between customers and agents. Recording systems should capture everything that happens during interactions and what agents do on their desktops. Surveying tools may provide the ability to create and deploy post-interaction customer feedback surveys in voice and digital channels. Typically, the IVR/IVA development environment is leveraged for survey development and deployment rules. Reporting/dashboards are tools used to track and manage the performance of agents, teams, departments, systems and processes within the contact center.


Automation


As shown in FIG. 1, automated services may enhance the operation of the contact center 150. In one aspect, the automated services may be implemented as an application running on a mobile device of a customer 110, one or more cloud computing devices (generally labeled automation servers 170 connected to the end user device over the network 130), one or more servers running in the contact center 150 (e.g., automation infrastructure 200), or combinations thereof.


With respect to the cloud-based contact center, FIG. 2 illustrates an example automation infrastructure 200 implemented within the cloud-based contact center 150. The automation infrastructure 200 may automatically collect information from a customer 110 through, e.g., a user interface/voice interface 202, where the collection of information may not require the involvement of a live agent. The user input may be provided as free speech or text (e.g., unstructured, natural language input). This information may be used by the automation infrastructure 200 for routing the customer 110 to an agent 120 or to automated resources in the contact center 150, as well as for gathering information from other sources to be provided to the agent 120. In operation, the automation infrastructure 200 may parse the natural language user input using a natural language processing module 210 to infer the customer's intent using an intent inference module 212 in order to classify the intent. Where the user input is provided as speech, the speech is transcribed into text by a speech-to-text system 206 (e.g., a large vocabulary continuous speech recognition or LVCSR system) as part of the parsing by the natural language processing module 210. The communication manager 204 monitors user inputs and presents notifications within the user interface/voice interface 202. Responses by the automation infrastructure 200 to the customer 110 may be provided as speech using the text-to-speech system 208.


The intent inference module automatically infers the customer's 110 intent from the text of the user input using artificial intelligence or machine learning techniques. These artificial intelligence techniques may include, for example, identifying one or more keywords from the user input and searching a database of potential intents (e.g., call reasons) corresponding to the given keywords. The database of potential intents and the keywords corresponding to the intents may be automatically mined from a collection of historical interaction recordings, in which a customer may provide a statement of the issue, and in which the intent is explicitly encoded by an agent.
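
By way of a non-limiting illustration only, the following minimal sketch shows one way such keyword-based intent lookup could be implemented; the intent/keyword table and function names here are hypothetical stand-ins for a table mined from historical interaction recordings:

    # Minimal sketch of keyword-based intent inference; the keyword table is a
    # hypothetical stand-in for one mined from historical recordings.
    from collections import Counter
    from typing import Optional

    INTENT_KEYWORDS = {
        "billing_dispute": {"bill", "charge", "overcharged", "refund"},
        "order_status": {"order", "shipped", "tracking", "delivery"},
        "cancel_service": {"cancel", "close", "terminate"},
    }

    def infer_intent(utterance: str) -> Optional[str]:
        """Return the potential intent whose keywords best match the utterance."""
        tokens = set(utterance.lower().split())
        scores = Counter({intent: len(tokens & kws) for intent, kws in INTENT_KEYWORDS.items()})
        intent, score = scores.most_common(1)[0]
        return intent if score > 0 else None

    print(infer_intent("I was overcharged on my last bill"))  # billing_dispute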


Some aspects of the present disclosure relate to automatically navigating an IVR system of a contact center on behalf of a user using, for example, the loaded script. In some implementations of the present disclosure, the script includes a set of fields (or parameters) of data that are expected to be required by the contact center in order to resolve the issue specified by the customer's 110 intent. In some implementations of the present disclosure, some of the fields of data are automatically loaded from a stored user profile. These stored fields may include, for example, the customer's 110 full name, address, customer account numbers, authentication information (e.g., answers to security questions) and the like.
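
The following is a minimal, hypothetical sketch of a script whose expected fields are partially pre-filled from a stored user profile; the field names and profile layout are illustrative assumptions rather than part of the disclosed system:

    # Sketch of a script with expected fields partially pre-filled from a
    # stored user profile; field names and profile layout are illustrative only.
    from dataclasses import dataclass, field

    @dataclass
    class Script:
        intent: str
        required_fields: list[str]
        values: dict[str, str] = field(default_factory=dict)

        def prefill(self, profile: dict[str, str]) -> list[str]:
            """Copy known profile values in; return the fields still missing."""
            for name in self.required_fields:
                if name in profile:
                    self.values[name] = profile[name]
            return [f for f in self.required_fields if f not in self.values]

    profile = {"full_name": "Jane Doe", "account_number": "A-1234"}
    script = Script("billing_dispute", ["full_name", "account_number", "dispute_reason"])
    print(script.prefill(profile))  # ['dispute_reason'] must still be collected from the caller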


Some aspects of the present disclosure relate to the automatic authentication of the customer 110 with the provider. For example, in some implementations of the present disclosure, the user profile may include authentication information that would typically be requested of users accessing customer support systems such as usernames, account identifying information, personal identification information (e.g., a social security number), and/or answers to security questions. As additional examples, the automation infrastructure 200 may have access to text messages and/or email messages sent to the customer's 110 account on the end user device in order to access one-time passwords sent to the customer 110, and/or may have access to a one-time password (OTP) generator stored locally on the end user device. Accordingly, implementations of the present disclosure may be capable of automatically authenticating the customer 110 with the contact center prior to an interaction.


In some implementations of the present disclosure, an application programming interface (API) is used to interact with the provider directly. The provider may define a protocol for making commonplace requests to their systems. This API may be implemented over a variety of standard protocols such as Simple Object Access Protocol (SOAP) using Extensible Markup Language (XML), a Representational State Transfer (REST) API with messages formatted using XML or JavaScript Object Notation (JSON), and the like. Accordingly, a customer experience automation system 200 according to one implementation of the present disclosure automatically generates a formatted message in accordance with an API defined by the provider, where the message contains the information specified by the script in appropriate portions of the formatted message.
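
As a hedged illustration of such a formatted message, the sketch below builds a JSON request against a hypothetical REST endpoint; the URL, payload schema and field names are assumptions, since a real provider would publish its own API contract:

    # Illustrative sketch: format the script's collected fields as a JSON
    # request to a provider-defined REST endpoint. Endpoint and schema are
    # hypothetical placeholders.
    import json
    import urllib.request

    def build_request(endpoint: str, script_values: dict[str, str]) -> urllib.request.Request:
        body = json.dumps({"request_type": "support_case", "fields": script_values}).encode("utf-8")
        return urllib.request.Request(
            endpoint,
            data=body,
            headers={"Content-Type": "application/json"},
            method="POST",
        )

    req = build_request("https://api.example-provider.com/v1/cases",
                        {"full_name": "Jane Doe", "account_number": "A-1234"})
    print(req.data.decode())  # the message body that would be sent to the provider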


Some aspects of the present disclosure relate to systems and methods for automating and augmenting aspects of an interaction between the customer 110 and a live agent of the contact center. In an implementation, once an interaction, such as a phone call, has been initiated with the agent 120, metadata regarding the conversation is displayed to the customer 110 and/or agent 120 in the UI throughout the interaction. Information, such as call metadata, may be presented to the customer 110 through the UI 205 on the customer's 110 mobile device 105. Examples of such information might include, but not be limited to, the provider, department, call reason, agent name, and a photo of the agent.


According to some aspects of implementations of the present disclosure, both the customer 110 and the agent 120 can share relevant content with each other through the application (e.g., the application running on the end user device). The agent may share their screen with the customer 110 or push relevant material to the customer 110.


In yet another implementation, the automation infrastructure 200 may also “listen” in on the conversation and automatically push relevant content from a knowledge base to the customer 110 and/or agent 120. For example, the application may use a real-time transcription of the customer's input (e.g., speech) to query a knowledgebase to provide a solution to the agent 120. The agent may share a document describing the solution with the customer 110. The application may include several layers of intelligence where it gathers customer intelligence to learn everything it can about why the customer 110 is calling. Next, it may perform conversation intelligence, which is extracting more context about the customer's intent. Next, it may perform interaction intelligence to pull information from other sources about the customer 110. The automation infrastructure 200 may also perform contact center intelligence to implement WFM/WFO features of the contact center 150.
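
One plausible (but not disclosed) implementation of this transcript-driven knowledge base lookup is simple TF-IDF retrieval, sketched below with illustrative article text:

    # Hedged sketch: suggest a knowledge-base article from a running
    # transcript via TF-IDF similarity. Article text is illustrative only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    ARTICLES = [
        "How to reset a forgotten account password",
        "Return policy for phones and tablets",
        "Comparing retirement plan options",
    ]

    vectorizer = TfidfVectorizer()
    article_vectors = vectorizer.fit_transform(ARTICLES)

    def suggest_article(live_transcript: str) -> str:
        """Return the article closest to the transcript so far."""
        query = vectorizer.transform([live_transcript])
        best = cosine_similarity(query, article_vectors).argmax()
        return ARTICLES[best]

    print(suggest_article("what is the return policy if I send back my tablets"))
    # -> "Return policy for phones and tablets"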


Agent Assist Overview


Thus, in the context of FIGS. 1-2, the present disclosure provides improvements through an innovative tool that reduces agent effort and improves customer experience quality through artificial intelligence (referred to herein as “Agent Assist”). Agent Assist is used within, e.g., contact centers and is designed to reduce agent effort, improve quality and reduce costs by minimizing search and data entry tasks through the use of AI capabilities. Agent Assist is fully unified within the agent interface while keeping all data internally protected from third-party sharing, and it simplifies agent effort and improves Customer Satisfaction/Net Promoter Score (CSAT/NPS).


Agent Assist is powered by artificial intelligence (AI) to provide real-time guidance for frontline employees to respond to customer needs quickly and accurately. For example, as a customer 110 states a need, agents 120 are provided answers or supporting information immediately to expedite the conversation and simplify tasks. Agent Assist determines why customers are calling and what their intent is. Similarly, IVR assist makes recommendations to a supervisor to optimize the IVR for a better customer experience; for example, Agent Assist helps optimize IVR questions to match customers' reasons for calling and their intent.


By leveraging automated assistance and reducing agent-supervisor ad-hoc interactions, Agent Assist gives supervisors more time to focus on workforce engagement activities. Agent Assist reduces manual supervision and assistance, and improves agent proficiency and accuracy. Agent Assist reduces short and long term training efforts through real-time error identification, eliminates busy work with smart note technology (the ability to systematically recognize and enter all key aspects of an interaction into the conversation notes), and improves handle time with in-app automations.


With reference to FIG. 3, there is illustrated a high-level overview of interactions, components and flow of Agent Assist in accordance with the present disclosure. In operation, a customer 110 will contact the cloud-based contact center 150 through one or more of the channels 140, as shown in FIG. 1. The agent 120 to whom the customer 110 is routed may listen to the customer 110 while, at the same time, the Agent Assist functionality pulls information using a knowledge graph engine 308. The knowledge graph engine 308 gathers information from one or more of a knowledgebase 302, a customer relationship management (CRM) platform/a customer service management (CSM) platform 304, and/or conversational transcripts 306 of other agent conversations to provide contextually relevant information to the agent. Additionally, information captured within the agent interface (see FIGS. 5A-5C, 7 and 13) can be automatically added to account profiles or work item tickets within the CRM, without any additional agent effort. Agent Assist is an intelligent advisory tool that supplies data-driven real-time recommendations, next best actions and automations to aid agents in customer interactions and guide them to quality and outcome excellence. This may include making recommendations based on interactions, discussions and monitored KPIs. Agent Assist helps match agent skill to the reasons why customers are calling. In addition, information may be provided to the agent from third-party sources via the web servers 160 (e.g., knowledge bases of product manufacturers) or social media platforms.
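
By way of a non-limiting sketch, the knowledge graph engine 308 could fan a single query out to the knowledgebase 302, the CRM/CSM 304 and the transcript store 306, and merge the top hits for the agent interface; the classes below are hypothetical stubs, not the disclosed implementation:

    # Hypothetical stub of an engine that fans one query out to several
    # sources and merges the top hits for the agent interface.
    from typing import Protocol

    class Source(Protocol):
        def search(self, query: str) -> list[str]: ...

    class ListSource:
        """Toy source backed by an in-memory document list."""
        def __init__(self, docs: list[str]):
            self.docs = docs
        def search(self, query: str) -> list[str]:
            terms = set(query.lower().split())
            return [d for d in self.docs if terms & set(d.lower().split())]

    class KnowledgeGraphEngine:
        def __init__(self, sources: dict[str, Source]):
            self.sources = sources
        def contextual_results(self, query: str, per_source: int = 3) -> dict[str, list[str]]:
            """Gather the top hits from every configured source for one query."""
            return {name: src.search(query)[:per_source] for name, src in self.sources.items()}

    engine = KnowledgeGraphEngine({
        "knowledgebase": ListSource(["Retirement plan comparison guide"]),
        "transcripts": ListSource(["Prior call about retirement plan rollover"]),
    })
    print(engine.contextual_results("retirement plan"))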


With reference to FIG. 4, there is illustrated an example operational flow 400 in accordance with the present disclosure, which provides additional details of the high-level overview shown in FIG. 3. At 402, the process begins, wherein the system listens to the customer and agent voices as they speak (404). For example, the automation infrastructure 200 may process the customer speech, as described with regard to FIG. 2. At 406, the agent voice is separated from the customer voice into their own respective channels. Once separated, at 408, unsupervised methods may be used to automatically perform one or more of the following non-limiting processes: apply biometrics to authenticate the caller/customer, predict a caller gender, predict a caller age category, predict a caller accent, and/or predict other caller demographics. Optionally or alternatively, if speaker separation is not performed at 406, then the system may distinguish between the customer and the agent by analyzing the time that either the agent or the customer talks or listens, identifying a signature of the agent or user voice, or applying unsupervised methods to separate the user and agent voices in real-time.
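
As a minimal sketch of the channel separation at 406, the example below assumes the call audio arrives as stereo with the agent on one channel and the customer on the other, and also shows a crude talk-time energy heuristic of the kind described as a fallback; the synthetic audio is purely illustrative:

    # Sketch assuming stereo call audio (agent left, customer right), which
    # makes "separation" a channel split; the energy heuristic mirrors the
    # talk-time fallback described above. Synthetic audio for illustration.
    import numpy as np

    sample_rate = 16000
    t = np.linspace(0, 1.0, sample_rate, endpoint=False)
    agent_channel = 0.5 * np.sin(2 * np.pi * 220 * t)  # agent speaking (tone)
    customer_channel = np.zeros_like(t)                # customer silent here

    def active_speaker(agent: np.ndarray, customer: np.ndarray) -> str:
        """Whichever channel carries more energy is the active speaker."""
        return "agent" if np.mean(agent ** 2) > np.mean(customer ** 2) else "customer"

    print(active_speaker(agent_channel, customer_channel))  # agent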


The operational flow continues at 410, wherein the customer voice and/or agent voice may be analyzed before transcription to extract one or more of the following non-limiting features:

    • Pain
    • Agony
    • Empathy
    • Being sarcastic
    • Speech speed
    • Tone
    • Frustration
    • Enthusiasm
    • Interest
    • Engagement


Understanding these features helps the agent 120 better understand the customer 110 and the customer's problem or issues, so that a resolution can be more easily achieved.
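
Two of the listed features admit simple measurable proxies; the sketch below computes speech speed (words per minute) and a pause ratio from word-level timestamps of the kind a streaming speech-to-text system typically emits. The timestamp format is an assumption, not part of the disclosure:

    # Proxies for two listed features: speech speed and pauses, computed from
    # assumed (word, start_sec, end_sec) timestamps for one speaker channel.
    def speech_speed_wpm(words: list[tuple[str, float, float]]) -> float:
        """Words per minute over the span of the utterance."""
        duration_min = (words[-1][2] - words[0][1]) / 60.0
        return len(words) / duration_min if duration_min > 0 else 0.0

    def pause_ratio(words: list[tuple[str, float, float]]) -> float:
        """Fraction of the utterance spent in silence between words."""
        total = words[-1][2] - words[0][1]
        gaps = sum(max(0.0, b[1] - a[2]) for a, b in zip(words, words[1:]))
        return gaps / total if total > 0 else 0.0

    demo = [("how", 0.0, 0.2), ("can", 0.3, 0.5), ("I", 0.6, 0.7), ("help", 1.4, 1.7)]
    print(round(speech_speed_wpm(demo), 1), round(pause_ratio(demo), 2))  # 141.2 0.53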


At 412, the conversation between the agent and the customer is transcribed in either real-time or post-call. This may be performed by the speech-to-text component of the automation infrastructure 200 and saved to a database. At 414, the agent voice channel and the customer voice channel are separated. At 416, the automation infrastructure 200 determines information about the customer and agent, such as intent, entities (e.g., names, locations, times, etc.), sentiment, and sentence phrases (e.g., verb, noun, adjective, etc.). At 418, from the information determined at 416, Agent Assist provides useful insight to the agent 120. This information, as shown in FIG. 3, may be information retrieved from the relevant CRM, the most relevant documents in the related knowledge base, and/or a relevant conversation and interaction that occurred in the past that was related to a similar topic or other feature of the interaction between the agent and the customer. Information pulled from the knowledgebase may be highlighted to the agent in a display, such as shown in FIGS. 5A-5C, 7 and 13.
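
A minimal sketch of the entity and sentence-phrase extraction at 416 is shown below using spaCy as a stand-in NLP library; the disclosure does not name a specific toolkit, and sentiment and intent would come from separate models:

    # Stand-in sketch of the extraction at 416 using spaCy; assumes the small
    # English model has been installed. Sentiment/intent are out of scope here.
    import spacy

    nlp = spacy.load("en_core_web_sm")
    doc = nlp("Hi, this is John in Denver. I'd like to check my order from Tuesday.")

    entities = [(ent.text, ent.label_) for ent in doc.ents]               # names, places, times
    phrases = [(tok.text, tok.pos_) for tok in doc if not tok.is_punct]   # verb/noun/adjective tags

    print(entities)      # e.g. [('John', 'PERSON'), ('Denver', 'GPE'), ('Tuesday', 'DATE')]
    print(phrases[:5])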


Thus, in accordance with the operational flow of FIG. 4, Agent Assist provides real-time guidance for frontline employees to respond to customer needs quickly and accurately. As a customer 110 states his or her need, agents 120 will be delivered answers or supporting information immediately to expedite the conversation and simplify agent effort. By delivering information from the CRM 304 or knowledgebase 302 to the agent 120 in milliseconds, agent handling time will be reduced and customers will realize a time savings and, ultimately, a reduction in effort to interact with businesses.



FIGS. 5A-5C illustrate an example unified interface 500 showing aspects of the operational flows of FIGS. 3 and 4. In FIGS. 5A-5C, the agent 120 is speaking on behalf of a financial institution; however, the agent 120 could be speaking on behalf of any entity for which the cloud-based contact center 150 serves. As shown in FIG. 5A, the customer 110 is calling to ask questions about setting up a retirement plan. Because the context of the conversation is understood by the automation infrastructure 200 to be related to a financial institution, Agent Assist identifies that the term “retirement plan” is meaningful and highlights it to the agent. As shown in FIG. 5B, Agent Assist provides a prompt 502 indicating to the agent 120 that there are many different types of retirement plans that the customer 110 can choose from. A button or other control 504 is provided such that the agent 120 can click a link to see more information. The link to the information may provide text, audio, video, messages, tweets, posts, etc. to the agent 120. Agent Assist provides a segment and/or snippet in the text that is relevant to the customer's needs. In other implementations, Agent Assist provides a relevant interaction from the past (e.g., a similar call with a similar issue that the agent 120 was able to address, etc.) or provides cross-channel information (e.g., finds a most relevant e-mail for a call, etc.). As shown in FIG. 5C, Agent Assist may provide an option 506 to schedule a meeting or call between the customer 110 and a financial planner (i.e., a person with additional knowledge within the entity who may satisfy the customer's request to the agent 120). Additional details of the scheduling operation are described below with reference to FIG. 8.


Smart Notes



FIGS. 6 and 7 provide details about the smart notes feature of Agent Assist. The smart notes feature may be used by the agent 120 to summarize a conversation with the customer 110, extract relevant portions of the interaction, etc. Important items in the smart notes may be highlighted using bold fonts or other formatting. The process begins at 602 where operations 404-414 are performed. These may be performed in parallel with the other features described above. At 604, information is extracted from the transcript and populated into the smart notes. As shown in FIG. 7, a call notes user interface 702 is provided to the agent 120 with information from the call with the customer 110 pre-populated in an input field 704. For example, in the context of a retailer, the phrases “status of my last order” and “place a new order” may be determined to be relevant information by the automation infrastructure 200, and are populated into the call notes input field 704. At 606, important terms may be highlighted. At 608, the process ends. As shown in FIG. 7, the call notes user interface 702 may provide an option for the user to edit and/or add notes.


In accordance with the operations performed in FIG. 6, Agent Assist may analyze the conversation between the agent 120 and the customer 110 to create smart notes. This conversation could be a phone call, a text message, chat or video call, etc. Smart notes extracts the most relevant information from this conversation. For instance, after a conversation, Agent Assist may determine that the discussion between the agent and the customer was about “canceling an old order” and “placing a new order.” These would be extracted as smart notes and provided to the agent, who has an option to accept or modify the note, as shown in FIG. 7. To achieve the above, Agent Assist may separate the conversation between customer 110 and agent 120 to find words and phrases that are common between agents and customers, when a customer confirms a question, or when an agent confirms what the customer says. For instance, the agent 120 may say, “Ok, so you would like to place a new order—correct?” In this case, the smart note would be a summary of the call about placing a new order.
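
The confirmation-based heuristic described above could look like the following sketch; the regular-expression patterns are illustrative assumptions, not an exhaustive rule set:

    # Sketch of the smart-notes heuristic: treat agent confirmation questions
    # as summaries of the caller's request. Patterns are illustrative only.
    import re

    CONFIRMATION = re.compile(
        r"(?:so you(?:'d| would) like to|just to confirm,?)\s+(.+?)\s*[-—]?\s*correct\?",
        re.IGNORECASE,
    )

    def extract_smart_notes(agent_turns: list[str]) -> list[str]:
        notes = []
        for turn in agent_turns:
            match = CONFIRMATION.search(turn)
            if match:
                notes.append(match.group(1).strip())
        return notes

    turns = ["Ok, so you would like to place a new order - correct?"]
    print(extract_smart_notes(turns))  # ['place a new order']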


Automatic Data Entry


In accordance with aspects of the disclosure, when Agent Assist detects the participants in a conversation, it may automatically fill out any forms that pop up after such conversations. With reference to FIG. 8, the process begins at 802 where operations 404-414 are performed. These may be performed in parallel with the other features described above. At 804, information is extracted to populate forms. As shown in FIG. 9, in response to the customer indicating that he or she is calling to move forward on a job application, scheduling information may be presented to the agent in a field 508. This information may populate field 904 of user interface 902, together with additional information in field 906, to schedule the call for an interview with the appropriate person. In another example, if the person says, “Hi, my name is John. I'd like to return my iPhone 6,” a form may pop up with some of the information, such as Name: John and Phone: iPhone 6, prefilled into the form.


Such automated data entry includes, but is not limited to:

    • Date
    • Time
    • Day of the week
    • First name
    • Last name
    • Gender
    • Address
    • Object—e.g., Samsung Galaxy
    • Type of the Object—e.g. Galaxy S9
    • Time of the day (e.g. morning, afternoon)


After the information is populated, the process ends at 806.
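
Mirroring the “Name: John / Phone: iPhone 6” example above, the following toy sketch pre-populates a form from a single utterance; a production system would use trained entity extractors rather than these illustrative patterns:

    # Toy sketch of form pre-population from one utterance; the regex
    # patterns are illustrative assumptions, not a disclosed extractor.
    import re

    def prefill_form(utterance: str) -> dict[str, str]:
        form: dict[str, str] = {}
        name = re.search(r"my name is (\w+)", utterance, re.IGNORECASE)
        device = re.search(r"return my ([\w ]+?\d+)", utterance, re.IGNORECASE)
        if name:
            form["Name"] = name.group(1)
        if device:
            form["Phone"] = device.group(1)
        return form

    print(prefill_form("Hi, my name is John. I'd like to return my iPhone 6."))
    # {'Name': 'John', 'Phone': 'iPhone 6'}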


Real-Time Analytics and Error Detection


With reference to FIG. 10, Agent Assist may provide for real-time analytics and error detection by monitoring a conversation (i.e., a call, a text, an e-mail, video, chat, etc.) between the customer 110 and agent 120 in real-time to detect the following non-limiting categories:

    • Compliance—words that should not be said in the conversation.
    • Competitors—if the agent says the name of a competitor.
    • A set of “do's and don'ts”—words that the agent should not say.
    • If the agent is angry, curses, etc.
    • If the agent is making fun of the caller.
    • If the agent talks too fast, too slow, or if there is a delay between words.
    • If the agent shows empathy.
    • If the agent violates any policy.
    • If the agent markets other products.
    • If the agent talks about personal issues.
    • If the agent is politically motivated.
    • If the agent promotes violence.


The process monitors the agent in real-time and expands upon the current state of the art, in which monitoring is at the word level: the transcript of the conversation is monitored for certain words or variations of such words. For instance, if the agent is talking about pricing, the system may look for phrases such as “our pricing,” “our price list,” “do you want to know how much our product is,” etc. As another example, the agent may say “our product is beating everybody else,” which implies the price is very affordable. Other examples such as these are possible.
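
A minimal sketch of such word-level monitoring over a live transcript follows; the watch lists are hypothetical, and the deep-learning layer described next is what would catch phrasings that never match a fixed list:

    # Word-level compliance monitoring over one agent turn; the watch lists
    # are hypothetical placeholders.
    WATCH_LISTS = {
        "competitors": {"acme corp", "globex"},
        "pricing": {"our pricing", "our price list"},
        "prohibited": {"shut up"},
    }

    def scan_turn(turn: str) -> list[tuple[str, str]]:
        """Return (category, phrase) pairs detected in one agent turn."""
        lowered = turn.lower()
        return [
            (category, phrase)
            for category, phrases in WATCH_LISTS.items()
            for phrase in phrases
            if phrase in lowered
        ]

    print(scan_turn("Let me pull up our price list for you"))  # [('pricing', 'our price list')]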


Artificial Intelligence (AI) Processing/Learning


In accordance with the present disclosure, a layer of deep learning 1002 is applied to create a large set of all potential sentences and instances (natural language understanding 1004) where the agent:

    • Said X and meant A.
    • Said Y and meant A.
    • Said Z but did not mean A.
    • Said W and meant B.


These sets have several positive and negative examples around concepts, such as “cursing,” “being frustrated,” “rude attitude,” “too pushy for sale,” “soft attitude,” as well as word level examples, such as “shut up.” Deep learning 1002 does not need to extract features; rather, deep learning takes a set of sentences and classes (a class is positive/negative, good/bad, cursing/not cursing). Deep learning 1002 learns and builds a model out of all of these examples. For example, audio files of conversations 1006 between agents 120 and customers 110 may be input to the deep learning module 1002. Alternatively, transcribed words may be input to the deep learning module 1002. Next, the system uses the learned model to listen to any conversation in real time and to identify the class, such as “cursing/not cursing.” As soon as the system identifies a class, and whether it is negative or positive, it can do the following:

    • Send an alert to manager
    • Make an indicator red on the screen
    • Send a note to the agent to be reviewed in real-time or after the call
    • Update some data files for reporting and visualization.


As part of the above, the natural language understanding 1004 may be used for intent spotting 1008 to determine intent 1010, which may be used for IVR analysis 1012 and/or agent performance 1014.


In this approach, individual words are not important; rather, the combination of all of the words, the order of the words and all potential variations of them have relevance. Deep learning 1002 considers all of the potential signals that could describe and hint toward a class. This approach is also language agnostic. It does not matter what language the agent or caller speaks; as long as there are a set of words and a set of classes, deep learning 1002 will learn, and the model can be applied to the same language. In addition to the above, metadata may be added to every call, such as the time of the call, the duration of the call, the number of times the agent talked over the caller, etc.
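
For illustration only, the sketch below shows a tiny sentence-level classifier of the kind described above, built with PyTorch; the vocabulary, labels and architecture are placeholder assumptions, as the disclosure does not specify a particular model:

    # Placeholder sketch of a sentence-level classifier; vocabulary, labels
    # and architecture are assumptions, and the model is untrained here.
    import torch
    import torch.nn as nn

    VOCAB = {"<unk>": 0, "shut": 1, "up": 2, "thanks": 3, "for": 4, "calling": 5}
    LABELS = ["not_cursing", "cursing"]

    def encode(sentence: str) -> torch.Tensor:
        return torch.tensor([VOCAB.get(w, 0) for w in sentence.lower().split()])

    class UtteranceClassifier(nn.Module):
        def __init__(self, vocab_size: int, embed_dim: int = 16):
            super().__init__()
            self.embedding = nn.EmbeddingBag(vocab_size, embed_dim)  # mean-pools token embeddings
            self.head = nn.Linear(embed_dim, len(LABELS))

        def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
            # Mean pooling ignores word order; a real model capturing word
            # order, as discussed above, would use a sequence encoder.
            pooled = self.embedding(token_ids.unsqueeze(0))
            return self.head(pooled)

    model = UtteranceClassifier(len(VOCAB))
    logits = model(encode("shut up"))
    print(LABELS[int(logits.argmax())])  # arbitrary until trained on labeled examples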


Listening to Other Agents' Conversations in Real-Time


As described above, Agent Assist may periodically perform the following to classify conversations of other agents. With reference to FIG. 11, the process begins at 1102. At 1104, a feature vector of a conversation is created. Such feature vector(s) include, but are not limited to:

    • Time of the call
    • Duration of the call
    • Topic of the call
    • Frequency of words in the customer transcription (e.g. Ticket 2, Delay 4, etc.)
    • Frequency of words in the agent transcription (e.g. rebook 3, etc.)
    • Cluster conversations based on these features


At 1106, for the conversations happening within a predetermined period (e.g., one month), the following are performed:

    • Calculate the pointwise mutual information between all of the calls in one cluster.
    • Make a graph of all calls in which the strength of the link is the weight of the pointwise mutual information.


At 1108, for the current file:

    • Extract features
    • Find the cluster
    • Calculate the pointwise mutual information
    • Find the closest call to the current call
    • Show the content of the call to the agent.


At 1110, the process ends.
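
The following hedged sketch approximates steps 1104-1108 by building per-call feature vectors and finding the closest past call; the pointwise-mutual-information weighting of the disclosure is replaced here with plain cosine similarity over term counts, so this should be read as an interpretation rather than the claimed algorithm:

    # Interpretation of steps 1104-1108: per-call feature vectors plus a
    # nearest-call lookup, using cosine similarity as a stand-in for PMI.
    import math
    from collections import Counter

    def features(transcript: str, topic: str, duration_min: float) -> Counter:
        vec = Counter(transcript.lower().split())
        vec[f"topic={topic}"] += 1
        vec["duration_bucket=" + ("long" if duration_min > 10 else "short")] += 1
        return vec

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[k] * b[k] for k in a.keys() & b.keys())
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    past_calls = {
        "call_1": features("ticket delayed rebook flight", "travel", 12.0),
        "call_2": features("password reset account locked", "account", 4.0),
    }
    current = features("my flight ticket was delayed", "travel", 8.0)
    closest = max(past_calls, key=lambda c: cosine(current, past_calls[c]))
    print(closest)  # call_1: its content would then be shown to the agent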


Learning Module


While the process 1100 analyzes calls, Agent Assist learns and improves by analyzing user clicks. As relevant conversations are presented to the agent (see, e.g., 306), if the agent clicks on a conversation and spends time on it, then the conversation is deemed relevant. Further, if the conversation is located, e.g., third on the list, but the agent clicks on the first conversation and moves forward, Agent Assist does not make any assumptions about the third conversation. Hence, the rank of the conversation may be of importance depending on the agent's actions. For the sake of simplicity, Agent Assist shows the top three conversations to the agent. If some conversations are ranked equally, Agent Assist picks one based on heuristics; for instance, a conversation that has not been picked recently will be picked.
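
A toy version of this click-feedback rule is sketched below; the dwell-time threshold and boost value are assumptions, and note that a skipped lower-ranked item is left untouched rather than demoted, per the behavior described above:

    # Toy click-feedback rule: a clicked conversation with meaningful dwell
    # time is promoted; skipped items are not demoted. Thresholds assumed.
    def update_relevance(scores: dict[str, float], clicked: str, dwell_seconds: float,
                         min_dwell: float = 10.0, boost: float = 0.1) -> None:
        if dwell_seconds >= min_dwell:
            scores[clicked] = scores.get(clicked, 0.0) + boost

    scores = {"conv_a": 0.8, "conv_b": 0.7, "conv_c": 0.6}
    update_relevance(scores, clicked="conv_c", dwell_seconds=45.0)
    print(scores)  # conv_c promoted for future rankings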


Escalation Assistance


With reference to FIG. 12, there is shown an example operational flow of escalation assistance, which may occur when an agent cannot answer a customer question or when the user is frustrated. With escalation assistance, the agent can transfer the call to his or her supervisor, where the transfer will include a summary of the call, along with highlights of important notes. In this case, the supervisor has insight into the context and reason for the transfer, and the caller does not need to repeat the case over again. The process begins at 1202 where operations 404-414 are performed. These may be performed in parallel with the other features described above. At 1204, information is extracted from the transcript and populated into the smart notes with a call summary. At 1206, notable items may be highlighted. At 1208, the customer is transferred to the supervisor, where the supervisor is fully briefed on the reasons for the transfer. At 1210, the process ends.


Thus, the present disclosure describes an Agent Assist tool within a cloud-based contact center environment that is a conversational guide that proactively delivers real-time contextualized next best actions, in-app, to enhance the customer and agent experience. Talkdesk Agent Assist uses AI to empower agents with a personalized assistant that listens, learns and provides intelligent recommendations in every conversation to help resolve complex customer issues faster.


General Purpose Computer Description



FIG. 13 shows an exemplary computing environment in which example embodiments and aspects may be implemented. The computing system environment is only one example of a suitable computing environment and is not intended to suggest any limitation as to the scope of use or functionality.


Numerous other general purpose or special purpose computing system environments or configurations may be used. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use include, but are not limited to, personal computers, servers, handheld or laptop devices, multiprocessor systems, microprocessor-based systems, network personal computers (PCs), minicomputers, mainframe computers, embedded systems, distributed computing environments that include any of the above systems or devices, and the like.


Computer-executable instructions, such as program modules, being executed by a computer may be used. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Distributed computing environments may be used where tasks are performed by remote processing devices that are linked through a communications network or other data transmission medium. In a distributed computing environment, program modules and other data may be located in both local and remote computer storage media including memory storage devices.


With reference to FIG. 13, an exemplary system for implementing aspects described herein includes a computing device, such as computing device 1300. In its most basic configuration, computing device 1300 typically includes at least one processing unit 1302 and memory 1304. Depending on the exact configuration and type of computing device, memory 1304 may be volatile (such as random access memory (RAM)), non-volatile (such as read-only memory (ROM), flash memory, etc.), or some combination of the two. This most basic configuration is illustrated in FIG. 13 by dashed line 1306.


Computing device 1300 may have additional features/functionality. For example, computing device 1300 may include additional storage (removable and/or non-removable) including, but not limited to, magnetic or optical disks or tape. Such additional storage is illustrated in FIG. 13 by removable storage 1308 and non-removable storage 1310.


Computing device 1300 typically includes a variety of tangible computer readable media. Computer readable media can be any available tangible media that can be accessed by device 1300 and includes both volatile and non-volatile media, removable and non-removable media.


Tangible computer storage media include volatile and non-volatile, and removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Memory 1304, removable storage 1308, and non-removable storage 1310 are all examples of computer storage media. Tangible computer storage media include, but are not limited to, RAM, ROM, electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computing device 1300. Any such computer storage media may be part of computing device 1300.


Computing device 1300 may contain communications connection(s) 1312 that allow the device to communicate with other devices. Computing device 1300 may also have input device(s) 1314 such as a keyboard, mouse, pen, voice input device, touch input device, etc. Output device(s) 1316 such as a display, speakers, printer, etc. may also be included. All these devices are well known in the art and need not be discussed at length here.


It should be understood that the various techniques described herein may be implemented in connection with hardware or software or, where appropriate, with a combination of both. Thus, the methods and apparatus of the presently disclosed subject matter, or certain aspects or portions thereof, may take the form of program code (i.e., instructions) embodied in tangible media, such as floppy diskettes, CD-ROMs, hard drives, or any other machine-readable storage medium wherein, when the program code is loaded into and executed by a machine, such as a computer, the machine becomes an apparatus for practicing the presently disclosed subject matter. In the case of program code execution on programmable computers, the computing device generally includes a processor, a storage medium readable by the processor (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device. One or more programs may implement or utilize the processes described in connection with the presently disclosed subject matter, e.g., through the use of an application programming interface (API), reusable controls, or the like. Such programs may be implemented in a high level procedural or object-oriented programming language to communicate with a computer system. However, the program(s) can be implemented in assembly or machine language, if desired. In any case, the language may be a compiled or interpreted language and it may be combined with hardware implementations.


Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims
  • 1. A method, comprising: receiving a communication from a customer; automatically analyzing the communication to determine a subject of the customer's communication; automatically querying a database of communications between other customers and agents related to the subject of the customer's communication; determining at least one responsive answer to the subject from the database; and providing the at least one responsive answer to an agent during the communication.
  • 2. The method of claim 1, further comprising populating the database with information about other communications, wherein the information includes, for at least one of the other communications, at least one of a time of, a duration of, a topic of, and/or a frequency of words in the at least one of the other communications.
  • 3. The method of claim 1, wherein the communication is in textual form, the method further comprising: displaying text input by the customer in a first field of a unified interface; parsing the text input by the customer for key terms; querying the database using the key terms; and displaying responsive results from the database as the at least one responsive answer in a second field in the unified interface.
  • 4. The method of claim 3, further comprising: querying a customer relationship management (CRM) platform/a customer service management (CSM) platform using the key terms; and displaying responsive results from the CRM/CSM in the second field in the unified interface.
  • 5. The method of claim 1, further comprising: receiving the communication as speech; converting the speech to text; determining intent from the text; and parsing the text for key terms.
  • 6. The method of claim 5, further comprising: querying a customer relationship management (CRM) platform/a customer service management (CSM) platform using the key terms; and displaying responsive results from the CRM/CSM in a unified interface.
  • 7. The method of claim 5, further comprising: querying a database of customer-agent transcripts using the key terms; and displaying responsive results from the database of customer-agent transcripts in a unified interface.
  • 8. The method of claim 1, wherein the communication is a multi-channel communication and received as one of an SMS text, voice call, e-mail, chat, interactive voice response (IVR)/intelligent virtual agent (IVA) systems, and social media.
  • 9. The method of claim 1, wherein all steps are accomplished by executing an agent assist functionality of an automation infrastructure within a cloud-based contact center that includes a communication manager, a speech-to-text converter, a natural language processor, and an inference processor exposed by application programming interfaces.
  • 10. A cloud-based software platform comprising: one or more computer processors; and one or more computer-readable mediums storing instructions that, when executed by the one or more computer processors, cause the cloud-based software platform to perform operations comprising: receiving a communication from a customer; automatically analyzing the communication to determine a subject of the customer's communication; automatically querying a database of communications between other customers and agents related to the subject of the customer's communication; determining at least one responsive answer to the subject from the database; and providing the at least one responsive answer to an agent during the communication.
  • 11. The cloud-based software platform of claim 10, wherein the operations further comprise populating the database with information about the other communications, wherein the information includes, for at least one of the other communications, at least one of a time of, a duration of, a topic of, and/or a frequency of words in the at least one of the other communications.
  • 12. The cloud-based software platform of claim 10, wherein the communication is in textual form, further comprising instructions to cause operations comprising: displaying text input by the customer in a first field of a unified interface; parsing the text input by the customer for key terms; querying the database using the key terms; and displaying responsive results from the database as the at least one responsive answer in a second field in the unified interface.
  • 13. The cloud-based software platform of claim 12, further comprising instructions to cause operations comprising: querying a customer relationship management (CRM) platform/a customer service management (CSM) platform using the key terms; and displaying responsive results from the CRM/CSM in the second field in the unified interface.
  • 14. The cloud-based software platform of claim 10, further comprising instructions to cause operations comprising: receiving the communication as speech; converting the speech to text; determining intent from the text; and parsing the text for key terms.
  • 15. The cloud-based software platform of claim 14, further comprising instructions to cause operations comprising: querying a customer relationship management (CRM) platform/a customer service management (CSM) platform using the key terms; and displaying responsive results from the CRM/CSM in a unified interface.
  • 16. The cloud-based software platform of claim 14, further comprising instructions to cause operations comprising: querying a database of customer-agent transcripts using the key terms; and displaying responsive results from the database of customer-agent transcripts in a unified interface.
  • 17. The cloud-based software platform of claim 10, wherein the communication is a multi-channel communication and received as one of an SMS text, voice call, e-mail, chat, interactive voice response (IVR)/intelligent virtual agent (IVA) systems, and social media.
  • 18. The cloud-based software platform of claim 10, further comprising a communication manager, a speech-to-text converter, a natural language processor, and an inference processor exposed by application programming interfaces.
CROSS-REFERENCE TO RELATED APPLICATION

This application claims priority to U.S. Provisional Patent Application No. 62/870,913, filed Jul. 5, 2019, entitled “SYSTEM AND METHOD FOR AUTOMATION WITHIN A CLOUD-BASED CONTACT CENTER,” which is incorporated herein by reference in its entirety.

20050033957 Enokida Feb 2005 A1
20050043986 Mcconnell et al. Feb 2005 A1
20050063365 Mathew et al. Mar 2005 A1
20050071178 Beckstrom et al. Mar 2005 A1
20050226220 Kilkki et al. Oct 2005 A1
20050228774 Ronnewinkel Oct 2005 A1
20050246511 Willman Nov 2005 A1
20050271198 Chin et al. Dec 2005 A1
20060153357 Acharya et al. Jul 2006 A1
20060166669 Claussen Jul 2006 A1
20060188086 Busey et al. Aug 2006 A1
20060215831 Knott et al. Sep 2006 A1
20060229931 Fligler et al. Oct 2006 A1
20060271361 Vora Nov 2006 A1
20060277108 Altberg et al. Dec 2006 A1
20070016565 Evans et al. Jan 2007 A1
20070036334 Culbertson et al. Feb 2007 A1
20070061183 Seetharaman et al. Mar 2007 A1
20070078725 Koszewski et al. Apr 2007 A1
20070121902 Stoica et al. May 2007 A1
20070136284 Cobb et al. Jun 2007 A1
20070157021 Whitfield Jul 2007 A1
20070160188 Sharpe et al. Jul 2007 A1
20070162296 Altberg et al. Jul 2007 A1
20070198329 Lyerly et al. Aug 2007 A1
20070201636 Gilbert et al. Aug 2007 A1
20070263810 Sterns Nov 2007 A1
20070265990 Sidhu et al. Nov 2007 A1
20080002823 Fama et al. Jan 2008 A1
20080043976 Maximo et al. Feb 2008 A1
20080095355 Mahalaha et al. Apr 2008 A1
20080126957 Tysowski et al. May 2008 A1
20080255944 Shah et al. Oct 2008 A1
20090018996 Hunt et al. Jan 2009 A1
20090080411 Lyman Mar 2009 A1
20090110182 Knight, Jr. et al. Apr 2009 A1
20090171164 Jung et al. Jul 2009 A1
20090228264 Williams et al. Sep 2009 A1
20090285384 Pollock et al. Nov 2009 A1
20090306981 Cromack et al. Dec 2009 A1
20090307052 Mankani et al. Dec 2009 A1
20100106568 Grimes Apr 2010 A1
20100114646 Mcilwain et al. May 2010 A1
20100189250 Williams et al. Jul 2010 A1
20100250196 Lawler et al. Sep 2010 A1
20100266115 Fedorov et al. Oct 2010 A1
20100266116 Stolyar et al. Oct 2010 A1
20100287131 Church Nov 2010 A1
20110022461 Simeonov Jan 2011 A1
20110071870 Gong Mar 2011 A1
20110116618 Zyarko et al. May 2011 A1
20110125697 Erhart et al. May 2011 A1
20110143323 Cohen Jun 2011 A1
20110216897 Laredo et al. Sep 2011 A1
20110264581 Clyne Oct 2011 A1
20110267985 Wilkinson et al. Nov 2011 A1
20110288897 Erhart et al. Nov 2011 A1
20120051537 Chishti et al. Mar 2012 A1
20120084217 Kohler et al. Apr 2012 A1
20120087486 Guerrero et al. Apr 2012 A1
20120109830 Vogel May 2012 A1
20120257116 Hendrickson et al. Oct 2012 A1
20120265587 Kinkead Oct 2012 A1
20120290373 Ferzacca et al. Nov 2012 A1
20120321073 Flockhart et al. Dec 2012 A1
20130073361 Silver Mar 2013 A1
20130085785 Rogers et al. Apr 2013 A1
20130124361 Bryson May 2013 A1
20130136252 Kosiba et al. May 2013 A1
20130223608 Flockhart et al. Aug 2013 A1
20130236002 Jennings et al. Sep 2013 A1
20130304581 Soroca et al. Nov 2013 A1
20140012603 Scanlon et al. Jan 2014 A1
20140039944 Humbert et al. Feb 2014 A1
20140099916 Mallikarjunan et al. Apr 2014 A1
20140101261 Wu et al. Apr 2014 A1
20140136346 Teso May 2014 A1
20140140494 Zhakov May 2014 A1
20140143018 Nies et al. May 2014 A1
20140143249 Cazzanti et al. May 2014 A1
20140164502 Khodorenko Jun 2014 A1
20140177819 Vymenets et al. Jun 2014 A1
20140200988 Kassko et al. Jul 2014 A1
20140254790 Shaffer et al. Sep 2014 A1
20140257908 Steiner et al. Sep 2014 A1
20140270138 Uba et al. Sep 2014 A1
20140270142 Bischoff et al. Sep 2014 A1
20140270145 Erhart Sep 2014 A1
20140278605 Borucki et al. Sep 2014 A1
20140278649 Guerinik et al. Sep 2014 A1
20140279045 Shottan et al. Sep 2014 A1
20140335480 Asenjo et al. Nov 2014 A1
20140372171 Martin et al. Dec 2014 A1
20140379424 Shroff Dec 2014 A1
20150012278 Metcalf Jan 2015 A1
20150023484 Ni et al. Jan 2015 A1
20150030152 Waxman et al. Jan 2015 A1
20150066632 Gonzalez et al. Mar 2015 A1
20150100473 Manoharan et al. Apr 2015 A1
20150127441 Feldman May 2015 A1
20150127677 Wang et al. May 2015 A1
20150172463 Quast et al. Jun 2015 A1
20150178371 Seth Jun 2015 A1
20150213454 Vedula Jul 2015 A1
20150262188 Franco Sep 2015 A1
20150262208 Bjontegard et al. Sep 2015 A1
20150269377 Gaddipati Sep 2015 A1
20150281445 Kumar et al. Oct 2015 A1
20150281449 Milstein et al. Oct 2015 A1
20150295788 Witzman et al. Oct 2015 A1
20150296081 Jeong Oct 2015 A1
20150302301 Petersen Oct 2015 A1
20150339446 Sperling et al. Nov 2015 A1
20150339620 Esposito et al. Nov 2015 A1
20150339769 Deoliveira et al. Nov 2015 A1
20150347900 Bell et al. Dec 2015 A1
20150350429 Kumar et al. Dec 2015 A1
20160026629 Clifford et al. Jan 2016 A1
20160042749 Hirose Feb 2016 A1
20160055499 Hawkins et al. Feb 2016 A1
20160085891 Ter et al. Mar 2016 A1
20160112867 Martinez Apr 2016 A1
20160124937 Elhaddad May 2016 A1
20160125456 Wu et al. May 2016 A1
20160134624 Jacobson et al. May 2016 A1
20160140627 Moreau et al. May 2016 A1
20160150086 Pickford May 2016 A1
20160155080 Gnanasambandam et al. Jun 2016 A1
20160173692 Wicaksono et al. Jun 2016 A1
20160180381 Kaiser et al. Jun 2016 A1
20160191699 Agrawal et al. Jun 2016 A1
20160191709 Pullamplavil et al. Jun 2016 A1
20160191712 Bouzid et al. Jun 2016 A1
20160261747 Thirugnanasundaram Sep 2016 A1
20160295018 Loftus Oct 2016 A1
20160300573 Carbune et al. Oct 2016 A1
20160335576 Peng Nov 2016 A1
20160360336 Gross Dec 2016 A1
20160378569 Ristock et al. Dec 2016 A1
20160381222 Ristock et al. Dec 2016 A1
20170006135 Siebel et al. Jan 2017 A1
20170006161 Riahi et al. Jan 2017 A9
20170024762 Swaminathan Jan 2017 A1
20170032436 Disalvo et al. Feb 2017 A1
20170034226 Bostick Feb 2017 A1
20170068436 Auer et al. Mar 2017 A1
20170068854 Markiewicz et al. Mar 2017 A1
20170098197 Yu et al. Apr 2017 A1
20170104875 Im et al. Apr 2017 A1
20170111505 Mcgann et al. Apr 2017 A1
20170148073 Nomula et al. May 2017 A1
20170162197 Cohen Jun 2017 A1
20170207916 Luce et al. Jul 2017 A1
20170316386 Joshi et al. Nov 2017 A1
20170323344 Nigul Nov 2017 A1
20170337578 Chittilappilly et al. Nov 2017 A1
20170344988 Cusden et al. Nov 2017 A1
20180018705 Tognetti Jan 2018 A1
20180032997 Gordon et al. Feb 2018 A1
20180053401 Martin et al. Feb 2018 A1
20180061256 Elchik et al. Mar 2018 A1
20180077250 Prasad et al. Mar 2018 A1
20180083898 Pham Mar 2018 A1
20180097910 D'Agostino Apr 2018 A1
20180121766 Mccord et al. May 2018 A1
20180137472 Gorzela May 2018 A1
20180137555 Clausse et al. May 2018 A1
20180165692 McCoy Jun 2018 A1
20180165723 Wright et al. Jun 2018 A1
20180174198 Wilkinson et al. Jun 2018 A1
20180189273 Campos et al. Jul 2018 A1
20180190144 Corelli et al. Jul 2018 A1
20180198917 Ristock et al. Jul 2018 A1
20180260857 Kar et al. Sep 2018 A1
20180286000 Berry et al. Oct 2018 A1
20180293327 Miller et al. Oct 2018 A1
20180293532 Singh et al. Oct 2018 A1
20180300295 Maksak Oct 2018 A1
20180349858 Walker et al. Dec 2018 A1
20180361253 Grosso Dec 2018 A1
20180365651 Sreedhara et al. Dec 2018 A1
20180372486 Farniok et al. Dec 2018 A1
20190013017 Kang et al. Jan 2019 A1
20190028587 Unitt et al. Jan 2019 A1
20190042988 Brown et al. Feb 2019 A1
20190108834 Nelson et al. Apr 2019 A1
20190124202 Dubey Apr 2019 A1
20190130329 Fama et al. May 2019 A1
20190132443 Munns et al. May 2019 A1
20190146647 Ramchandran May 2019 A1
20190147045 Kim May 2019 A1
20190172291 Naseath Jun 2019 A1
20190180095 Ferguson et al. Jun 2019 A1
20190182383 Shaev et al. Jun 2019 A1
20190205389 Tripathi et al. Jul 2019 A1
20190236205 Jia et al. Aug 2019 A1
20190238680 Narayanan et al. Aug 2019 A1
20190287517 Green et al. Sep 2019 A1
20190295027 Dunne et al. Sep 2019 A1
20190306315 Portman et al. Oct 2019 A1
20190335038 Alonso Y Caloca et al. Oct 2019 A1
20190349477 Kotak Nov 2019 A1
20190377789 Jegannathan et al. Dec 2019 A1
20190394333 Jiron et al. Dec 2019 A1
20200012697 Fan et al. Jan 2020 A1
20200019893 Lu Jan 2020 A1
20200050996 Generes, Jr. et al. Feb 2020 A1
20200097544 Alexander Mar 2020 A1
20200118215 Rao et al. Apr 2020 A1
20200119936 Balasaygun et al. Apr 2020 A1
20200125919 Liu et al. Apr 2020 A1
20200126126 Briancon et al. Apr 2020 A1
20200134492 Copeland Apr 2020 A1
20200134648 Qi et al. Apr 2020 A1
20200175478 Lee et al. Jun 2020 A1
20200193335 Sekhar et al. Jun 2020 A1
20200193983 Choi Jun 2020 A1
20200211120 Wang et al. Jul 2020 A1
20200218766 Yaseen et al. Jul 2020 A1
20200219500 Bender et al. Jul 2020 A1
20200242540 Rosati et al. Jul 2020 A1
20200250272 Kantor Aug 2020 A1
20200250557 Kishimoto et al. Aug 2020 A1
20200257996 London Aug 2020 A1
20200280578 Hearty et al. Sep 2020 A1
20200280635 Barinov et al. Sep 2020 A1
20200285936 Sen Sep 2020 A1
20200336567 Dumaine Oct 2020 A1
20200351375 Lepore et al. Nov 2020 A1
20200351405 Pace Nov 2020 A1
20200357026 Liu et al. Nov 2020 A1
20200365148 Ji et al. Nov 2020 A1
20210004536 Adibi et al. Jan 2021 A1
20210005206 Adibi et al. Jan 2021 A1
20210056481 Wicaksono et al. Feb 2021 A1
20210081869 Zeelig et al. Mar 2021 A1
20210081955 Zeelig et al. Mar 2021 A1
20210082417 Zeelig et al. Mar 2021 A1
20210082418 Zeelig et al. Mar 2021 A1
20210084149 Zeelig et al. Mar 2021 A1
20210090570 Aharoni Mar 2021 A1
20210091996 Mcconnell et al. Mar 2021 A1
20210105361 Bergher et al. Apr 2021 A1
20210125275 Adibi Apr 2021 A1
20210133763 Adibi et al. May 2021 A1
20210133765 Adibi et al. May 2021 A1
20210134282 Adibi et al. May 2021 A1
20210134283 Adibi et al. May 2021 A1
20210134284 Adibi et al. May 2021 A1
20210136204 Adibi et al. May 2021 A1
20210136205 Adibi et al. May 2021 A1
20210136206 Adibi et al. May 2021 A1
20220129905 Sethumadhavan Apr 2022 A1
Foreign Referenced Citations (4)
Number Date Country
1418519 May 2004 EP
2006037836 Apr 2006 WO
2012024316 Feb 2012 WO
2015099587 Jul 2015 WO
Non-Patent Literature Citations (14)
Entry
Gaietto, Molly., "What is Customer DNA?", NGDATA Product News, Oct. 27, 2015, 10 pages.
Fan et al., “Demystifying Big Data Analytics for Business Intelligence Through the Lens of Marketing Mix”, Big Data Research, vol. 2, Issue 1, Mar. 2015, 16 pages.
An et al., "Towards Automatic Persona Generation Using Social Media", 2016 IEEE 4th International Conference on Future Internet of Things and Cloud Workshops (FiCloudW), Aug. 2016, 2 pages.
Bean-Mellinger, Barbara., "What Is the Difference Between Marketing and Advertising?", available on Feb. 12, 2019, retrieved from https://smallbusiness.chron.com/difference-between-marketing-advertising-25047.html, Feb. 12, 2019, 6 pages.
Twin, Alexandra., "Marketing", URL: https://www.investopedia.com/terms/m/marketing.asp, Mar. 29, 2019, 5 pages.
Dictionary.com, “Marketing”, URL: https://www.dictionary.com/browse/marketing, Apr. 6, 2019, 7 pages.
Ponn et al., "Correlational Analysis between Weather and 311 Service Request Volume", eil.mie.utoronto.ca, 2017, 16 pages.
Zhang et al., “A Bayesian approach for modeling and analysis of call center arrivals”, 2013 Winter Simulations Conference (WSC), ieeexplore.ieee.org, pp. 713-723.
Mehrotra et al., “Call Center Simulation Modeling: Methods, Challenges, and Opportunities”, Proceedings of the 2003 Winter Simulation Conference, vol. 1, 2003, pp. 135-143.
Mandelbaum et al., "Staffing Many-Server Queues with Impatient Customers: Constraint Satisfaction in Call Centers", Operations Research, vol. 57, No. 5, Sep.-Oct. 2009, pp. 1189-1205.
Fukunaga et al., "Staff Scheduling for Inbound Call Centers and Customer Contact Centers", AI Magazine, vol. 23, No. 4, Winter 2002, pp. 30-40.
Feldman et al., "Staffing of Time-Varying Queues to Achieve Time-Stable Performance", Management Science, vol. 54, No. 2 (Call Center Management), Feb. 2008, pp. 324-338.
Business Wire, “Rockwell SSD announces Call Center Simulator”, Feb. 4, 1997, 4 pages.
Stearns, Nathan., "Using skills-based routing to the advantage of your contact center", Customer Inter@ction Solutions, Technology Marketing Corporation, May 2001, vol. 19, No. 11, pp. 54-56.
Related Publications (1)
Number Date Country
20210004834 A1 Jan 2021 US
Provisional Applications (1)
Number Date Country
62870913 Jul 2019 US