The present disclosure relates generally to an interactive voice response system and more particularly to the use of virtual departments and virtual agents in such a system.
Millions of telephone calls are made to call centers and to individuals conducting business during every business hour. In an effort to service these calls, most modern call centers have specialized departments and utilize an automated call-processing system to process incoming calls. In such systems an incoming call is typically answered by an automated voice, and the call is then routed to the appropriate department responsive to caller input. Automated call-receiving systems that can accept caller input are often referred to as interactive voice response (IVR) systems. A more sophisticated IVR system that can process or recognize speech input is often referred to as a speech recognition IVR. Parties typically call a call center to make an inquiry or request a service. A universal concern that callers have during interactions with IVR systems is where they are in the process or system, where they are going to be routed, and where they have been. Accordingly, there is a need for a call handling system that provides improved situational awareness for a party to a call.
It will be appreciated that for simplicity and clarity of illustration, elements illustrated in the Figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements are exaggerated relative to other elements. Embodiments incorporating teachings of the present disclosure are shown and described with respect to the drawings presented herein, in which:
Larger businesses typically organize their call centers into departments, wherein each department has the tools, knowledge and expertise to solve problems associated with a particular subject matter. Many call centers utilize an automated main operator to greet callers and prompt them for the purpose of the call. However, automating an entire call servicing system with a single main operator can seem devoid of personality, expertise, flow and purpose, quite possibly creating an unpleasant experience for a caller. Further, purely informational call centers, such as those providing a restaurant locator service or a movie listing service, can seem impersonal and awkward with a single automated operator. In accordance with the teachings herein, a call center with virtual agents attending to the duties of virtual departments is described. The virtual agents may purport to have subject matter expertise, such that expert service can be provided to the party.
The present disclosure teaches a system and method that provides an interactive voice response (IVR) system that segments caller-IVR interactions into virtual departments by utilizing a virtual agent to address the subject matter assigned to each virtual department. In operation, a connection is made between a party and an IVR system, and multiple virtual agents are utilized to allow the party to associate a virtual agent with a virtual department, thus providing indicia of a location within the call processing procedure. A caller or party can be prompted by a first virtual agent when addressing content related to a first virtual department and then prompted by a second virtual agent when addressing content related to a second virtual department.
The terms “caller” and “party” are used interchangeably herein to refer to either a party making a call or a party receiving a call. Depending upon implementation detail, the teachings disclosed herein may be implemented in an outbound call center, an inbound call center, and/or a hybrid call center capable of both placing and receiving calls.
Depending upon implementation detail, a party can be prompted by a first virtual agent, possibly a main operator, and in response to the party's reply, the first virtual agent or voice persona can advise the party that the call is being routed to a specialized virtual agent, representing a virtual department, that is learned in addressing the party's issue(s). The second virtual agent or voice persona can again prompt the party to learn more about the party's issues to be addressed. The second virtual agent can also have a discrete persona and purport to be from a specialized department having subject matter expertise for addressing the party's request. Thus, the IVR can provide a party with personal, specialized services utilizing multiple virtual agents. This preferential treatment allows a party to experience the feeling of being routed between specialized departments as they interface with the different virtual agents, each specialized in providing particular information or solving subject-matter-based problems.
Referring to
In one operational embodiment the IVR system 108 can receive a call from a caller or place a call to a caller, and processor 115 retrieves a first voice persona, such as a main operator persona, from voice library 110 and utilizes the first voice persona to greet the caller and prompt the caller for the intent of the call. The speech recognition module 112 or the DTMF detector 114 can receive the caller's response, and the processor 115 can select a second voice persona based on the caller's response.
In one embodiment the caller's response, such as a request for information, a request to address an issue or a request to solve a problem, could be classified according to a specific subject matter. When the IVR 108 determines that new or different subject matter is at issue, the processor 115 would facilitate usage of a voice persona assigned to such subject matter. The IVR system 108 may have discrete, distinguished and exclusive digital voice personas associated with each issue or identified subject matter. Examples of subject matter can include billing, account services, dispute resolution, maintenance, subscriptions, technical support, engineering support, and discrete information delivery. The list above is merely illustrative, as thousands of subject matter classifications could be utilized in accordance with the teachings herein.
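The subject-matter classification described above can be sketched as a simple keyword match. The following is only an illustrative sketch; the category names and keyword lists are invented examples, not part of the disclosure, and a real implementation would use the speech recognition module's grammar or a statistical classifier.

```python
# Illustrative sketch: classify a caller utterance into a subject-matter
# category so that a matching voice persona can be selected. The keyword
# lists are hypothetical examples, not an exhaustive grammar.
SUBJECT_KEYWORDS = {
    "billing": ["bill", "charge", "invoice", "payment"],
    "technical_support": ["broken", "error", "not working", "outage"],
    "new_service": ["sign up", "subscribe", "new service"],
}

def classify_subject(utterance: str) -> str:
    """Return the first subject whose keywords appear in the utterance,
    or 'general' when nothing matches (handled by the main operator)."""
    text = utterance.lower()
    for subject, keywords in SUBJECT_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return subject
    return "general"
```

The "general" fallback corresponds to the main operator persona that greets callers before any specialized subject matter has been identified.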
Depending upon implementation detail, the interactive voice response system 108 can contain a voice library 110 that stores many different voice personas in digital format. The voice personas can include a main operator and a plurality of voice personalities with purported subject matter expertise that can be utilized to address caller concerns. The different voice personas or distinguishable voices in the voice library 110 can be stored by, or linked to, subject matter, and the voice personas may include audio that provides cues to the caller that the voice is associated with discrete subject matter. As such, a person calling system 108 may be left with the impression that system 108 is a physical location where people—who are subject matter experts—work to help customers resolve concerns and/or answer questions. System 108 may be a call center in a box and may rely on processor 115 utilizing instructions retrieved from memory 116 to select digital voices stored in the voice library 110 based on the subject matter of the caller-IVR interaction.
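A voice library keyed by subject matter might be organized as follows. This is a minimal in-memory sketch under stated assumptions: the persona names, audio identifiers, and greetings are invented placeholders standing in for the stored digital voice personas of voice library 110.

```python
from dataclasses import dataclass

@dataclass
class VoicePersona:
    """One entry in the voice library: a persona linked to a subject."""
    name: str
    subject: str
    audio_id: str      # key into the stored digital voice recordings
    greeting: str      # audio cue signaling the persona's expertise

# Hypothetical in-memory voice library, indexed by subject matter.
VOICE_LIBRARY = {
    "general": VoicePersona("Amy", "general", "voice_amy",
                            "Hello, I am Amy the receptionist."),
    "billing": VoicePersona("Bill", "billing", "voice_bill",
                            "Hi, I am Bill from the billing department."),
}

def persona_for(subject: str) -> VoicePersona:
    """Look up the persona assigned to a subject, falling back to the
    main operator when no specialist persona exists."""
    return VOICE_LIBRARY.get(subject, VOICE_LIBRARY["general"])
```

Keying the library by subject matter is what makes each persona "exclusive" to its subject, so the same voice is always heard for the same class of issue.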
The distinguished voice-subject matter correlation can provide a conscious knowledge-of-location within the system to the caller when the distinguished voice verbally identifies its subject matter expertise and when hand-off cues are provided by virtual agents during a change of agents. The distinguishable voice-subject matter correlation can also provide an unconscious knowledge-of-location through the mannerisms, linguistics and verbal explanations in the voice. For example, callers to a restaurant locator call center may “tell” a female hostess agent that they want to know about French restaurants in the area, and in response, the caller may be routed to a new male virtual agent expert, who would greet the caller with a French accent, “Bonjour! Let me help you find a romantic French restaurant. What is the price range you would like to explore?” (accent omitted).
In one embodiment, the discrete voices can have different personas, accents or virtual personalities that are conducive to providing information or processing the caller's request in a pleasurable and memorable manner. As described above, the voices provided subsequent to the greeting voice could have personas that are associated with, and exclusive within system 108 to, the subject matter being addressed by the call. Further, the subsequent voices can provide indicia of having expertise by so informing the party.
The discrete voices provided can have many personas, such as a celebrity voice persona and a dialect persona. For example, when a caller is routed to a paint department in a department store, a Martha Stewart emulation could address the caller, or when a caller is seeking help to purchase a computer, a Bill Gates impersonation could address the caller. The subject matter expertise of the voice personas can include that of a receptionist/main operator, a billing department, an accounting department, a dispute resolution department, new subscriptions, maintenance, customer service, and an engineering department.
Referring now to
The interactive voice response system 208 can be coupled to a plurality of virtual agents/virtual departments, such as first, second, third, fourth and fifth virtual agents 220, 222, 224, 226, and 228, respectively. Depending on implementation detail, first virtual agent 220 may be utilized exclusively for addressing subject matter or issues such as greeting and prompting a caller, while second virtual agent 222 may exclusively address issues regarding billing. Further, third virtual agent 224 may only address issues regarding account management and fourth virtual agent 226 may only address issues regarding dispute resolution, while fifth virtual agent 228 may only address issues such as adding a new service.
In an illustrative embodiment, the first virtual agent 220 could be the main operator that would initially interface with a caller. For example, the first virtual agent 220 may address the caller by saying, “Hello, I am Amy the receptionist for XX company. I will be assisting you in locating an expert within our company who can provide you with information and address your concerns. Please tell me how we can help you.” In reply, the caller may say, “I have a question about my bill,” and the speech recognition module 212 could process the caller utterance. The speech recognition module 212 could associate the caller's input with IVR departments or stages, such as the billing department (a virtual department), and route the call to the second virtual agent 222, which can provide a virtual expertise associated with the virtual billing department. Thus, the first virtual agent 220 may provide, “Thanks for your reply, I am going to transfer you to Bill the billing specialist.”
Depending upon implementation detail, the processor 230, utilizing instructions retrieved from memory 216, could retrieve a voice personality from the voice library 218 that is associated with the selected virtual department. Thus, after the call is routed to the second virtual agent 222, a virtual agent voice pattern can be retrieved from the voice library 218 and the output engine 210 can provide to the caller, “Hi, I am Bill the billing agent, and I am from the billing department and I am here to help you with your billing inquiry. What would you like to know about your bill?”
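The hand-off sequence described in the last two paragraphs—a transfer cue from the current agent followed by the new agent's self-introduction—can be sketched as a small helper. The wording below is a hypothetical stand-in for the stored prompts, not the disclosure's actual dialogue.

```python
def hand_off(new_agent: str, new_subject: str) -> list:
    """Produce the two utterances of a virtual-agent hand-off: the
    current agent's transfer cue, then the new specialist agent's
    self-introduction identifying its virtual department. The phrasing
    is an illustrative assumption."""
    return [
        # Spoken by the current agent before routing (the hand-off cue).
        f"Thanks for your reply, I am going to transfer you to "
        f"{new_agent} the {new_subject} specialist.",
        # Spoken by the new agent's persona after routing.
        f"Hi, I am {new_agent} from the {new_subject} department, "
        f"and I am here to help you with your {new_subject} inquiry.",
    ]
```

Emitting both utterances, in that order, is what gives the caller the explicit knowledge-of-location cue that the call has moved to a new virtual department.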
When a caller responds with additional questions or issues, the interactive voice response system 208 could route the call to any of the virtual departments/virtual agents specializing in addressing the issue, such as the virtual account manager agent 224, the virtual dispute resolution agent 226 and the virtual new service agent 228. In the subject illustration only five virtual agents are depicted; however, hundreds and even thousands of virtual agents/departments could be utilized. In alternate embodiments the caller can reply to virtual agent prompts by making touch-tone inputs, and the DTMF detector 214 could process the caller input.
The virtual agents 220-228 can have different personalities, such as male or female personalities, different enthusiasms, demeanors, accents, and tones. Further, the virtual agents can have celebrity characteristics, speak a specific dialect, provide a regional accent, provide foreign language characteristics, provide colloquial phrases, and have a groomed voice characteristic. A groomed voice characteristic may be a distinct voice that a caller will recognize after a few calls or interactions. The virtual agent can have a name and provide the name and a description of the virtual agent's expertise to the caller during interaction with the caller.
Referring now to
At decision step 310 it can be determined whether the party's response facilitates new or different subject matter. When the party's response does not facilitate new subject matter, it can be determined whether the call is complete at decision step 312. If the call is not complete, the method proceeds back to step 306, where the party is again prompted; however, if the call is complete the method proceeds to end at step 324.
At decision step 310 when the party provides a response such as a request that prompts a change in subject matter to be addressed by the IVR, a new or different voice personality can be selected to interact with the party at step 314. In one embodiment, the new voice personality is selected based on the newly identified subject matter. The new voice personality can introduce itself as an expert on the subject matter of the interaction and prompt the party with a distinguishable voice personality at step 316. Based on a party's response to the prompt it can be determined at decision step 318 if the response facilitates new or additional subject matter.
When the party's response facilitates new subject matter to be processed by the IVR at decision step 318, a new voice personality is again selected, possibly based on the new subject matter, at step 320 and the process returns to step 314. When the party's response does not facilitate addressing new or additional subject matter, it can be determined whether the call is completed at step 322. When the call is completed, the process ends at 324, and if the call is not completed the method can proceed back to step 316.
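The prompt/response loop of steps 306-324 can be summarized as follows. This is a minimal sketch under stated assumptions: `responses` stands in for the sequence of caller replies delivered by the speech front end, and `classify` is any function mapping a reply to a subject-matter label; neither name appears in the disclosure.

```python
def run_call(responses, classify):
    """Sketch of steps 306-324: prompt with the current persona,
    reclassify each caller response, and switch to a new persona
    whenever the identified subject matter changes."""
    subject = "general"                  # main-operator persona (initial greeting)
    transcript = []
    for reply in responses:              # prompt/response loop
        new_subject = classify(reply)
        if new_subject != subject:       # decision steps 310/318
            subject = new_subject        # select new persona (steps 314/320)
            transcript.append(f"switch:{subject}")
        transcript.append(f"prompt:{subject}")
    return transcript                    # call complete (step 324)
```

The transcript records a "switch" event exactly when the subject matter changes, mirroring the flowchart's rule that a new voice personality is selected only at decision steps 310 and 318.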
In one embodiment a voice personality can also be chosen based on a perceived characteristic of the party, or on both a perceived characteristic of the party and the newly identified subject matter. The perceived characteristic can be determined by processing an utterance of the party. For example, a detected regional accent, a non-English language, a slang dialect, a demeanor, a personality, an age, or an intelligence level could aid in selecting a voice persona that the caller would appreciate. The IVR system and method disclosed herein is configured to simulate the routing of a call to different specialized departments, wherein each specialized department has purported expertise to address specific concerns provided by the caller.
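Combining the subject matter with a perceived caller characteristic when selecting a persona might look like the following. The sketch assumes a `perceived` dictionary produced by the utterance-analysis step; its keys and the composite persona keys are illustrative assumptions, since a real system would index the voice library by both dimensions.

```python
def select_persona(subject: str, perceived: dict) -> str:
    """Choose a voice-library key from the identified subject matter
    plus an optional perceived caller characteristic, such as a
    detected language or regional accent. Key format is hypothetical."""
    language = perceived.get("language")
    if language and language != "en":
        # e.g. a French-speaking billing specialist persona
        return f"{subject}:{language}"
    return subject
```

For instance, a caller whose utterance is detected as French could be handed to a French-speaking variant of the billing persona, while English-speaking callers receive the default persona for that subject.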
Referring now to
The above-disclosed subject matter is to be considered illustrative, and not restrictive, and the appended claims are intended to cover all such modifications, enhancements, and other embodiments that fall within the true spirit and scope of the present invention. Thus, to the maximum extent allowed by law, the scope of the present invention is to be determined by the broadest permissible interpretation of the following claims and their equivalents, and shall not be restricted or limited by the foregoing detailed description.
Number | Date | Country
---|---|---
20060215831 A1 | Sep 2006 | US