System, method and software program for enabling communications between customer service agents and users of communication devices

Information

  • Patent Grant
  • Patent Number
    9,386,154
  • Date Filed
    Friday, December 21, 2007
  • Date Issued
    Tuesday, July 5, 2016
Abstract
The present invention provides a system, method and software application for enabling a customer service agent to efficiently communicate with users of a communication device. When a user enters speech input into his communication device, the speech is converted to text, and the text is displayed to the customer service agent on the agent's computer screen. Alternatively, the user's speech input is provided to the customer service agent in the form of an audio file. The agent types a response, and the agent's response is provided to the user on the user's communication device. The agent's response may be converted to speech and played to the user, and/or the agent's response may be displayed as text on the display screen of the user's communication device.
Description
BACKGROUND OF THE INVENTION

1. Field of the Invention


This invention relates generally to telecommunications and, more specifically, to communications between users of communication devices and customer service agents.


2. Description of the Background Art


Most companies need to provide customer care and make customer service agents available to customers. Each time a customer calls a customer service agent and consumes the customer service agent's time, it costs the company money and cuts into profit margins. To reduce these costs, many companies have implemented automated customer care options via Interactive Voice Response (IVR) systems, or self-service customer care options via the web. However, sometimes a user really does need or want to talk to a customer service agent. Therefore, in order to provide users with adequate customer care and reduce customer care costs, there is a need for a system and method that enables customer service agents to efficiently serve customers.


SUMMARY

The present invention provides a method, system, and software application that enable customer service agents to more efficiently assist customers. Specifically, the present invention enables a customer service agent to simultaneously engage in communication sessions with multiple users.


In one embodiment of the present invention, a user speaks a request, question, or statement into a communication device. The user's speech input is converted to text and the text is sent to a customer service agent. The customer service agent reads the text, and types a response. The customer service agent's text response is played to the customer as speech on the communication device, and the user hears the response on the communication device. The user may also see the response as text on the display screen of his communication device.


In an alternate embodiment, the user's speech input is provided to the customer service agent in the form of an audio file. The customer service agent then listens to the audio file, and types a text response. The response is then provided to the user, either in text form, speech form (by converting the text to speech), or both.


In a further embodiment, the user's speech input is converted to text and the text is sent to a customer service agent. The customer service agent reads the text and records a speech response, which is stored as an audio file. The audio file is then played back to the user.


Since the customer service agent is not talking live on the communication device with a user, the customer service agent can engage in communication sessions with multiple users simultaneously. While one user is digesting a customer service agent's response, the customer service agent can be responding to another user.





BRIEF DESCRIPTION OF THE DRAWINGS


FIGS. 1a-b illustrate a method for enabling users of communication devices to communicate with customer service agents.



FIG. 2 illustrates an example interaction between a user of a communication device and a customer service agent.



FIG. 3 illustrates an example architecture according to one embodiment of the present invention.



FIG. 4 is a flow chart that illustrates the operation of the Client Application, Server Application, and Agent Application shown in FIG. 3.



FIG. 5 is a flowchart that illustrates an alternate method of the present invention.





DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS


FIGS. 1a-b illustrate a method according to one embodiment of the present invention. A customer service system (such as a server running the Server Application 330 described with respect to FIGS. 3-5) receives requests from multiple users to communicate with a customer service agent (step 110). The requests come from users of communication devices, where each communication device includes a voice interface to the user. In the preferred embodiment, the communication device includes both a voice interface and a visual interface (where text can be displayed in the visual interface). An example of a communication device is a mobile phone. Examples of ways in which a user may initiate a request to speak with a customer service agent include dialing a number on the communication device, pushing a button on the communication device, clicking on a link on the communication device, or speaking certain words into the communication device.


For each eligible user requesting to communicate with a customer service agent, the system opens up a communication session for the user (120). A communication session is a set of related communications between a user and one or more customer service agents. A communication session is associated with a record of the communications between a user and customer service agent(s). When a communication session is open, the record is updated with each communication between the user and the agent.
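The communication session and its associated record described above can be sketched as a simple data structure. The following Python sketch is purely illustrative; all names (`SessionEntry`, `CommunicationSession`, and so on) are assumptions for exposition, not taken from the patent:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class SessionEntry:
    """One communication: who sent it and what was said (as text)."""
    sender: str       # "user" or "agent"
    text: str
    timestamp: datetime

@dataclass
class CommunicationSession:
    """A set of related communications between a user and agent(s)."""
    user_id: str
    is_open: bool = True
    record: list[SessionEntry] = field(default_factory=list)

    def update(self, sender: str, text: str) -> None:
        """While the session is open, the record is updated with each communication."""
        if not self.is_open:
            raise ValueError("cannot update a closed session")
        self.record.append(SessionEntry(sender, text, datetime.now(timezone.utc)))

session = CommunicationSession(user_id="user-42")
session.update("user", "What is my account balance?")
session.update("agent", "Your balance is $100.")
```

Each user input and agent response simply appends to the session record, which is what allows one or more agents to pick up the full context later.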


During a communication session, the system enables the user to enter speech input for a customer service agent (130). The user enters speech input by speaking into his communication device. The speech input is then converted to text (140). The session record is updated with the text (150), and the system provides the customer service agent with the session record, where the user's speech input is displayed as text on the customer service agent's screen (160).


The customer service agent provides a text response (170) (or enters a speech response which is converted to text), and the session is updated with the text response (180). The customer service agent's text response is converted to speech and played to the user in the form of speech (190). In one embodiment, the user is provided with the customer service agent's response in both speech and text form (e.g., the user hears the customer service agent's response and sees the text response on the display screen of his mobile phone). Alternatively, the customer service agent's response is provided to the user only in text form.


In an alternate embodiment, the user's speech input is provided to the customer service agent in the form of an audio file. The customer service agent then listens to the audio file, and types a text response. The response is then provided to the user, either in text form, speech form (by converting the text to speech), or both.


Since the customer service agent is not talking to the user live, the customer service agent can engage in communication sessions with multiple users simultaneously. While one user is digesting a customer service agent's response, the customer service agent can be responding to another user.


During a communication session, a user may communicate with the same customer service agent, or may communicate with multiple customer service agents. In most cases, it will be most efficient for the same customer service agent to service the user during a communication session. However, it is possible for different customer service agents to service the user during a single communication session. For instance, during the same communication session, one customer service agent may respond to a first question spoken by a user, and another customer service agent may respond to a second question from the user. In this way, the invention can “packetize” interactions between users and customer service agents, where one user input/agent response is like a “packet.” The system can packetize interactions to load balance and/or to ensure that the user inquiry is routed to a customer service agent best suited to respond to the inquiry (e.g., to provide first- and second-level support to the user). The fact that multiple customer service agents are responding to a user during a communication session may not be apparent to the user (i.e., the user experience may be that he is communicating with the same customer service agent).



FIG. 2 illustrates an example interaction between a user and a customer service agent. The user speaks a request, question, or statement into his communication device (210). The user's speech input is transcribed to text and the text is sent to a customer service agent (220). The customer service agent reads the text (230), and types a response (240). The customer service agent's text response is read to the customer as speech (250), and the user hears the response (260). The user may also see the response as text on the display screen of his communication device. Steps 210-260 are repeated until the user receives the help he needs or otherwise decides to end the communication session. As stated above, during a communication session, multiple customer service agents can assist the user.



FIG. 3 illustrates an example architecture for implementing one embodiment of the invention. The architecture comprises a Client Application 320 executing on a user's communication device, a Server Application 330 executing on a server, and an Agent Application 340 executing on a customer service agent's computer. The Client Application 320 comprises (1) a Client User Interface Module 322 that provides a voice and visual interface to the user; (2) a Translation Module 324, comprising a Dictation Engine 325 and a Text-to-Speech Engine 326, that converts speech to text and text to speech; and (3) a Network Module 328 that interfaces with a network, such as the Internet, a private network, or a wireless network (such as a mobile phone network). User Interface Modules and Network Modules are well known in the art. An example of a Translation Module is Nuance Communications' Voice to SMS solution, which leverages Nuance's Mobile Dictation technology.


The Server Application 330 includes (1) a Session Manager 332 that keeps track of open communication sessions between users and customer service agents; (2) a Load Balancer 334 that allocates an agent to a particular session or communication from a user; and (3) a Server Network Module 336 that interfaces with a network.


The Agent Application 340 on the customer service agent's computer includes (1) an Agent User Interface Module 342 that provides a visual interface to the customer service agent (on the agent's computer screen); and (2) an Agent Network Module 344 that interfaces with a network.


Those skilled in the art will appreciate that the user's communication device, the Server, and the customer service agent's computer will include additional functionality not represented by the above Client Application 320, Server Application 330, and Agent Application 340. However, such functionality is well known in the art and a discussion of such functionality is not relevant for an understanding of the invention described herein. Moreover, those skilled in the art will appreciate that there may be many ways to implement the present invention, and the software implementations described herein with respect to FIGS. 3-5 are just examples of implementations.



FIG. 4 illustrates how the Client Application 320, Server Application 330, and Agent Application 340 operate according to one example embodiment of the present invention. During a communication session, the Client User Interface Module 322 of the Client Application 320 receives speech input from the user (405). The Translation Module 324 translates the speech to text with the Dictation Engine 325 (410). The Client Network Module 328 then transmits the text to the Server Application 330 (415).


The Server Network Module 336 receives the text from the Client Network Module 328 (420). The Session Manager 332 on the Server then updates the user's communication session with the text (425). This involves determining if an open communication session exists for the user. If an open communication session exists (i.e., the text from the user is part of an ongoing, existing conversation with a customer service agent), the Session Manager 332 updates the existing communication session. If an open communication session does not exist (i.e., the user is initiating a conversation with a customer service agent), the Session Manager 332 opens a new communication session for the user and updates the new session with the text from the user.
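The Session Manager's update step (425) can be sketched as follows. The class and method names are hypothetical, and the session record is reduced to a list of sender/text pairs for illustration:

```python
# Illustrative sketch of the Session Manager's update step (425):
# reuse the user's open session if one exists, otherwise open a new one.

class SessionManager:
    def __init__(self):
        self.sessions = {}  # user_id -> open session record (list of entries)

    def update_session(self, user_id, text):
        """Return the user's session record, opening a new session if none is open."""
        if user_id not in self.sessions:
            # User is initiating a conversation: open a new communication session.
            self.sessions[user_id] = []
        # Update the (new or existing) session with the text from the user.
        self.sessions[user_id].append({"sender": "user", "text": text})
        return self.sessions[user_id]

mgr = SessionManager()
mgr.update_session("u1", "Hello")
mgr.update_session("u1", "I need help with my bill")
```

A second input from the same user lands in the same open session, preserving the conversation history for whichever agent receives the record.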


The Load Balancer 334 on the Server then identifies an appropriate customer service agent to receive the session and transfers the session record to the customer service agent via the Server Network Module 336 (430). If the communication session is a new communication session, the Load Balancer 334 may use conventional load balancing techniques (e.g., round robin, agent load, etc.) to select an agent. If the communication session is an existing communication session, the Load Balancer 334 may either select the agent that previously handled the session, or it may use conventional load balancing techniques to identify an agent with availability. The Load Balancer 334 may also factor in agent expertise in selecting an agent.
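The agent-selection step (430) might look like the following sketch. The `select_agent` function and its data shapes are assumptions; least-loaded selection with an expertise filter stands in for the "conventional load balancing techniques" mentioned above:

```python
# Illustrative sketch of agent selection (430): prefer the agent who previously
# handled an existing session; otherwise balance by load, filtered by expertise.

def select_agent(session, agents, topic=None):
    prev = session.get("agent")
    if prev is not None and prev in agents and agents[prev]["available"]:
        return prev  # existing session: sticky routing to the previous agent
    # New session (or previous agent unavailable): filter by expertise, if known.
    candidates = {
        name: a for name, a in agents.items()
        if a["available"] and (topic is None or topic in a["skills"])
    } or agents  # fall back to the full pool if no expertise match
    # Stand-in for conventional load balancing: least-loaded candidate.
    return min(candidates, key=lambda name: agents[name]["load"])

agents = {
    "ann": {"available": True, "skills": {"billing"}, "load": 2},
    "bob": {"available": True, "skills": {"tech"}, "load": 0},
}
```

With this sketch, `select_agent({"agent": "ann"}, agents)` keeps the session with "ann", while a new billing inquiry would be matched on expertise regardless of load.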


The Agent Network Module 344 receives the communication session record from the server (435), and the Agent User Interface Module 342 displays the contents of the record to the customer service agent in the form of text on the customer service agent's display screen (440).


The customer service agent types a response (or enters a speech response which is converted to text) (445), and the Agent Network Module 344 transmits the text to the server (450). The Server Network Module 336 receives the text from the Agent Application 340 (455), and the Session Manager 332 updates the communication session record with the text (460). The Server Network Module 336 then sends the customer service agent's response (in the form of text) to the user's communication device (465).


The Client Network Module 328 receives the text from the Server Application 330 (470), and the Translation Module 324 translates the text to speech with the Text-to-Speech Engine 326 (475). The Client User Interface Module 322 displays the text and plays the speech to the user (480). Steps 405-480 are repeated until the user or the customer agent terminates the communication session.
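The client-side half of this loop (steps 405-480) can be sketched as below. The engine functions are stubs standing in for the Dictation Engine 325 and Text-to-Speech Engine 326, and all function names are illustrative rather than from the patent:

```python
# Illustrative sketch of one client-side turn of the FIG. 4 loop.

def dictation_engine(audio: bytes) -> str:
    """Stub for step 410: speech-to-text conversion on the device."""
    return "What is my balance?"

def text_to_speech_engine(text: str) -> bytes:
    """Stub for step 475: text-to-speech conversion on the device."""
    return text.encode("utf-8")

def client_turn(audio_in: bytes, send_to_server) -> tuple[str, bytes]:
    """One user turn: speech in; agent text and synthesized speech out."""
    user_text = dictation_engine(audio_in)            # step 410
    agent_text = send_to_server(user_text)            # steps 415-470 (round trip)
    agent_speech = text_to_speech_engine(agent_text)  # step 475
    return agent_text, agent_speech                   # step 480: display and play

reply_text, reply_audio = client_turn(b"...", lambda t: "Your balance is $100.")
```

Here `send_to_server` abstracts the entire server/agent round trip (steps 415-470), which is why the agent never needs to be on a live call with the user.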


In an alternate embodiment of the invention, the Server Application 330 determines whether an automated response can be provided to the user prior to sending a user's session record to a customer service agent. FIG. 5 illustrates how the Server Application 330 handles speech input from the Client Application 320 in this embodiment. The Server Network Module 336 receives the speech input from the user in the form of text (510). The Session Manager 332 updates the session record with the text (520). The Server Application 330 then determines if an automated response can be provided to the user (530). If an automated response can be provided to the user, the Server Application 330 sends an automated response to the user, where the automated response is played to the user as speech or displayed as text in the user interface of the user's communication device (or both) (540). The Session Manager 332 then updates the user's communication session record with the automated response (550). If an automated response cannot be provided, the user's communication session record is sent to the Agent Application 340, as described with respect to step 430 in FIG. 4 (560).
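The FIG. 5 decision flow might be sketched as follows. The trigger-phrase table is a hypothetical stand-in for however the Server Application decides that an automated response applies; nothing here is specified by the patent:

```python
# Illustrative sketch of the FIG. 5 flow: try an automated response first,
# and fall back to a customer service agent only when none applies.

AUTOMATED_ANSWERS = {  # hypothetical canned-answer table (step 530)
    "store hours": "We are open 9am-5pm, Monday through Friday.",
}

def handle_user_text(user_text, session, send_to_agent):
    session.append({"sender": "user", "text": user_text})        # step 520
    for trigger, answer in AUTOMATED_ANSWERS.items():            # step 530
        if trigger in user_text.lower():
            session.append({"sender": "auto", "text": answer})   # steps 540-550
            return answer
    send_to_agent(session)                                       # step 560
    return None

session = []
reply = handle_user_text("What are your store hours?", session, lambda s: None)
```

Routine inquiries are answered without consuming agent time, and only unrecognized inputs reach the Agent Application.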


In the embodiment described with respect to FIGS. 3-5, the speech to text conversion and the text to speech conversion are performed by the Client Application 320, but the Server Application 330 could perform such functionality instead.


In an alternate embodiment, in addition to or instead of receiving text of the user's speech input, a customer service agent can receive an audio file (e.g., a .wav file) of the user's speech input. The audio file enables the customer service agent to listen to the user's speech input if desired by the customer service agent. For example, in the method described with respect to FIGS. 1a-b, the session record provided to the customer service agent in step 160 could include a .wav file (or other audio file) with a recording of the user's speech input. Such audio file could be in addition to a text transcript or in lieu of a text transcript of the user's speech input in the session record.


In a further alternate embodiment, a user's speech input is converted to text and then provided to a customer service agent. The customer service agent reads the text input and then records a speech response, which is saved as an audio file. The audio file is then sent to the user's phone and played back to the user. A text transcript of the agent's speech response may optionally be provided to the user. Also, the agent's speech response may optionally be converted to text for the purpose of having a text transcript of the agent's response in the session record.


As will be understood by those familiar with the art, the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. Accordingly, the above disclosure of the present invention is intended to be illustrative and not limiting of the invention.

Claims
  • 1. A method for enabling communication between a user of a mobile communication device and a customer service agent, the method comprising: receiving speech input on a mobile communication device from a user for a customer service agent; determining automatically by means of a server application executing on a server, independent of agent availability, whether an automated response can be provided to the user; in response to determining automatically by means of a server application executing on a server, independent of agent availability, that an automated response can be provided to the user, sending from the server application an automated response to the user; in response to determining automatically by means of a server application executing on a server, independent of agent availability, that an automated response cannot be provided to the user, performing the following: converting the speech input into text on the mobile communication device; determining if an open communication session exists for the user, wherein: if an open communication session exists, updating the open communication session with the text from the user and identifying and selecting the customer service agent that previously handled the open communication session, and if an open communication session does not exist, opening a new communication session for the user, updating the new communication session with the text from the user, and identifying and selecting the customer service agent based at least in part on the expertise of the customer service agent; recording a speech response from the customer service agent in an audio file during the open communication session, wherein the audio file includes the customer service agent's entire recorded response; sending the audio file to the user's mobile communication device during the open communication session; and playing the customer service agent's speech response to the user during the open communication session, wherein in addition to playing the speech to the user, converting the audio file into text on the mobile communication device, displaying the text from the customer service agent to the user on the user's mobile communication device, and updating the open communication session with the text from the customer service agent.
  • 2. A method for enabling a customer service agent to engage in simultaneous communication sessions with a plurality of users, wherein each user is using a mobile communication device, the method comprising: receiving a plurality of requests to open a communication session with a customer service agent, wherein each request comes from a different user desiring to communicate with a customer service agent; for each eligible user requesting to open a communication session with a customer service agent: determining automatically by means of a server application executing on a server, independent of agent availability, whether an automated response can be provided to the user; in response to determining automatically by means of a server application executing on a server, independent of agent availability, that an automated response can be provided to the user, sending from the server application an automated response to the user; in response to determining automatically by means of a server application executing on a server, independent of agent availability, that an automated response cannot be provided to the user, performing the following: determining if an open communication session exists for the user, wherein if an open communication session exists, updating the open communication session with the text from the user and wherein if an open communication session does not exist, opening a new communication session for the user and updating the new communication session with the text from the user, identifying and selecting a customer service agent that is available to engage in the communication session with the user based at least in part on agent expertise, wherein a single customer service agent may engage in multiple communication sessions simultaneously and wherein multiple customer service agents may be identified and selected to engage the user in an open communication session; during each communication session between a user and a customer service agent, enabling the user to enter speech input for the customer service agent, wherein the speech is then converted to text on the mobile communication device and provided to the customer service agent in the form of text; and during each communication session between a user and a customer service agent, recording a speech response from the customer service agent in an audio file during the open communication session, wherein the audio file includes the customer service agent's entire recorded response, sending the audio file to the user's mobile communication device during the open communication session, and playing the customer service agent's speech response to the user during the open communication session, wherein in addition to playing the speech to the user, converting the audio file into text on the mobile communication device, displaying the text from the customer service agent to the user on the user's mobile communication device, and updating the open communication session with the text from the customer service agent.
  • 3. A method for enabling communication between a user of a mobile communication device and a customer service agent, the method comprising: receiving speech input on a mobile communication device from a user for a customer service agent; determining automatically by means of a server application executing on a server, independent of agent availability, whether an automated response can be provided to the user; in response to determining automatically by means of a server application executing on a server, independent of agent availability, that an automated response can be provided to the user, sending from the server application an automated response to the user; in response to determining automatically by means of a server application executing on a server, independent of agent availability, that an automated response cannot be provided to the user, performing the following: converting the speech input into text on the mobile communication device; determining if an open communication session exists for the user, wherein: if an open communication session exists, updating the open communication session with the text from the user and identifying and selecting either the customer service agent that previously handled the open communication session or another customer service agent based on conventional load balancing techniques, and if an open communication session does not exist, opening a new communication session for the user, updating the new communication session with the text from the user, and identifying and selecting the customer service agent based on conventional load balancing techniques, wherein the load balancing techniques factor in agent expertise; recording a speech response from the customer service agent in an audio file during the open communication session, wherein the audio file includes the customer service agent's entire recorded response; sending the audio file to the user's mobile communication device during the open communication session; and playing the customer service agent's speech response to the user during the open communication session, wherein in addition to playing the speech to the user, converting the audio file into text on the mobile communication device, displaying the text from the customer service agent to the user on the user's mobile communication device, and updating the open communication session with the text from the customer service agent.
  • 4. The method of claim 3, wherein the automated response is provided to the user in the form of speech.
  • 5. The method of claim 3, wherein the automated response is provided to the user in the form of text.
  • 6. The method of claim 3, wherein the automated response is provided to the user in the form of text and speech.
  • 7. A computer-readable medium having computer-executable instructions for performing a method for enabling a customer service agent to engage in simultaneous communication sessions with a plurality of users, wherein each user is using a mobile communication device, the method comprising: receiving a plurality of requests to open a communication session with a customer service agent, wherein each request comes from a different user desiring to communicate with a customer service agent; for each eligible user requesting to open a communication session with a customer service agent: determining automatically by means of a server application executing on a server, independent of agent availability, whether an automated response can be provided to the user; in response to determining automatically by means of a server application executing on a server, independent of agent availability, that an automated response can be provided to the user, sending from the server application an automated response to the user; in response to determining automatically by means of a server application executing on a server, independent of agent availability, that an automated response cannot be provided to the user, performing the following: determining if an open communication session exists for the user, wherein if an open communication session exists, updating the open communication session with the text from the user and wherein if an open communication session does not exist, opening a new communication session for the user and updating the new communication session with the text from the user, identifying and selecting a customer service agent that is available to engage in the communication session with the user based at least in part on agent expertise, wherein a single customer service agent may engage in multiple communication sessions simultaneously and wherein multiple customer service agents may be identified and selected to engage the user in an open communication session; during each communication session between a user and a customer service agent, enabling the user to enter speech input for the customer service agent, wherein the speech is then converted to text on the mobile communication device and provided to the customer service agent in the form of text; and during each communication session between a user and a customer service agent, recording a speech response from the customer service agent in an audio file during the open communication session, wherein the audio file includes the customer service agent's entire recorded response, sending the audio file to the user's mobile communication device during the open communication session, and playing the customer service agent's speech response to the user during the open communication session, wherein in addition to playing the speech to the user, converting the audio file into text on the mobile communication device, displaying the text from the customer service agent to the user on the user's mobile communication device, and updating the open communication session with the text from the customer service agent.
  • 8. The computer-readable medium of claim 7, further comprising also providing the customer service agent with an audio file of the user's speech input.
  • 9. A method for enabling communication between a user of a mobile communication device and a customer service agent, the method comprising: receiving speech input on the mobile communication device from the user for the customer service agent, wherein the customer service agent is selected in part based on the agent's expertise; determining automatically by means of a server application executing on a server, independent of agent availability, whether an automated response can be provided to the user; in response to determining automatically by means of a server application executing on a server, independent of agent availability, that an automated response can be provided to the user, sending from the server application an automated response to the user; in response to determining automatically by means of a server application executing on a server, independent of agent availability, that an automated response cannot be provided to the user, performing the following: converting the speech input into text on the mobile communication device; providing the text to the customer service agent; recording a speech response from the customer service agent in an audio file during an open communication session, wherein the audio file includes the customer service agent's entire recorded response; sending the audio file to the user's mobile communication device during the open communication session; and playing the customer service agent's speech response to the user during the open communication session, wherein in addition to playing the speech to the user, converting the audio file into text on the mobile communication device, displaying the text from the customer service agent to the user on the user's mobile communication device, and updating the open communication session with the text from the customer service agent.
20020065736 Willner et al. May 2002 A1
20020077833 Arons et al. Jun 2002 A1
20020077898 Koulouris Jun 2002 A1
20020087323 Thomas et al. Jul 2002 A1
20020091726 MacLeod Beck et al. Jul 2002 A1
20020103641 Kuo et al. Aug 2002 A1
20020120582 Elston et al. Aug 2002 A1
20020159572 Fostick Oct 2002 A1
20020168986 Lau et al. Nov 2002 A1
20020169618 Caspari Nov 2002 A1
20020177914 Chase Nov 2002 A1
20030007464 Balani Jan 2003 A1
20030023439 Ciurpita et al. Jan 2003 A1
20030050043 Ohrstrom et al. Mar 2003 A1
20030061171 Gilbert et al. Mar 2003 A1
20030064720 Valins et al. Apr 2003 A1
20030130904 Katz et al. Jul 2003 A1
20030162561 Johnson et al. Aug 2003 A1
20030177009 Odinak et al. Sep 2003 A1
20030185359 Moore et al. Oct 2003 A1
20030204444 Van Luchene et al. Oct 2003 A1
20040012501 Mazzara et al. Jan 2004 A1
20040019487 Kleindienst et al. Jan 2004 A1
20040047453 Fraser Mar 2004 A1
20040091093 Bookstaff May 2004 A1
20040102225 Furuta et al. May 2004 A1
20040111267 Jadhav et al. Jun 2004 A1
20040161097 Henry Aug 2004 A1
20040162724 Hill et al. Aug 2004 A1
20040185833 Walden et al. Sep 2004 A1
20040203728 Schwinke et al. Oct 2004 A1
20040207508 Lin et al. Oct 2004 A1
20040242209 Kruis et al. Dec 2004 A1
20050044254 Smith Feb 2005 A1
20050074102 Altberg et al. Apr 2005 A1
20050131910 Yanagisawa Jun 2005 A1
20050147052 Wu Jul 2005 A1
20050163296 Smith et al. Jul 2005 A1
20050177368 Odinak Aug 2005 A1
20050183032 Bushey et al. Aug 2005 A1
20050201540 Rampey et al. Sep 2005 A1
20050213743 Huet et al. Sep 2005 A1
20050222712 Orita Oct 2005 A1
20050261990 Gocht et al. Nov 2005 A1
20050286691 Taylor et al. Dec 2005 A1
20060009218 Moss Jan 2006 A1
20060080107 Hill et al. Apr 2006 A1
20060098619 Nix et al. May 2006 A1
20060100851 Schonebeck May 2006 A1
20060106610 Napper May 2006 A1
20060126804 Lee et al. Jun 2006 A1
20060135215 Chengalvarayan et al. Jun 2006 A1
20060203989 Ollason Sep 2006 A1
20060217113 Rao et al. Sep 2006 A1
20060262919 Danson et al. Nov 2006 A1
20070117584 Davis et al. May 2007 A1
20070190986 Lee Aug 2007 A1
20070280439 Prywes Dec 2007 A1
20080088440 Palushaj Apr 2008 A1
20080255851 Ativanichayaphong et al. Oct 2008 A1
20100267378 Hamabe et al. Oct 2010 A1
Foreign Referenced Citations (6)
Number Date Country
1435720 Jul 2004 EP
2206265 Dec 1988 GB
2360418 Sep 2001 GB
0062518 Oct 2000 WO
03079656 Sep 2003 WO
Non-Patent Literature Citations (3)
Entry
International Search Report in related PCT Application No. PCT/US2008/013893.
Written Opinion of ISA in corresponding PCT Application No. PCT/US2008/013893.
Simoudis, E. (2000). If it's not one channel, then it's another. Bank Marketing, 32(1), 48-50+.
Related Publications (1)
Number Date Country
20090164214 A1 Jun 2009 US