The invention relates generally to telecommunications and particularly to teleconferencing.
A telephone or audio conference (hereinafter “teleconference”) enables multiple teleconference participants to hear and be heard by all other participants to the teleconference.
Chiefly for the benefit of people who are deaf or hard-of-hearing, techniques have been developed that permit the text captions of a telephone call or teleconference to be displayed in close to real-time on appropriately equipped telephony endpoints. An illustrative example is the CapTel™ system by Ultratec. Users of this service may listen to a telephone call or teleconference while simultaneously reading the captions on their telephone's display. The captions are generated by a centralized relay center operated by Ultratec, using Speech-To-Text or STT recognition software corrected by a human reviewer. Communication between the Ultratec relay center and the end-user is via standard analog phone lines.
Within the emerging field of Voice over Internet Protocol or VoIP telephony, international standards that support the intermixing of voice and text on the same call have already been proposed and adopted. Concurrently, automated captioning services, such as those offered by Ultratec, are becoming more sophisticated and less expensive. Although intended originally for the deaf and hard-of-hearing community, there is no legal, regulatory, or technical reason why such capabilities should not be used to address the needs of others.
There is a need for a teleconferencing system that allows participants to participate in multiple conference calls simultaneously. In particular, there is a need to provide this capability to users of traditional, current-generation analog telephones, digital telephones, IP hardphones, and IP softphones. The reason for this need is that, when there is more than one simultaneously occurring teleconference of interest, a participant must choose which conference to attend, or must hop back-and-forth among the conferences. Items of potential interest that are presented while an individual is in the “wrong” conference will be discussed without that individual's participation and, possibly, without that individual's knowledge.
These and other needs are addressed by the various embodiments and configurations of the present invention. The present invention is directed generally to the simultaneous or substantially simultaneous provision of text and voice streams from different calls to the same communication device.
In a first embodiment of the present invention, a teleconferencing method is provided that includes the steps of:
(a) when a first call is on hold and a second call is active on a selected communication device, the selected communication device displays a first text representation of a first voice stream received from a first set of endpoints involved in the first call; and
(b) the selected communication device simultaneously and audibly plays a second voice stream received from a second set of endpoints involved in the second call. During the performance of steps (a) and (b), the selected communication device does not receive one or both of the first voice stream and a second text representation of the second voice stream.
In a second embodiment, a teleconferencing method is provided that includes the steps of:
The selected communication device can be an analog, digital, or IP communication device. As will be appreciated, an analog telephone uses a continuous signal spanning the entire range of voltages, while a digital telephone transmits discrete voltage values representing “1” and “0” to convey information. An analog voice waveform is converted into its digital equivalent using pulse-code modulation.
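As a minimal sketch of pulse-code modulation, the example below samples an analog waveform and quantizes each sample to an 8-bit code word. The sample rate, bit depth, and helper names are illustrative assumptions rather than parameters of the invention.

```python
import math

SAMPLE_RATE_HZ = 8000   # assumed telephony sampling rate
BIT_DEPTH = 8           # assumed code-word size

def quantize(sample: float, bits: int = BIT_DEPTH) -> int:
    """Map an analog sample in [-1.0, 1.0] to an unsigned integer code."""
    levels = 2 ** bits
    clamped = max(-1.0, min(1.0, sample))
    return min(levels - 1, int((clamped + 1.0) / 2.0 * levels))

def pcm_encode(duration_s: float, tone_hz: float = 440.0) -> list[int]:
    """Digitize a pure tone: sample at SAMPLE_RATE_HZ, then quantize each sample."""
    n_samples = int(duration_s * SAMPLE_RATE_HZ)
    return [quantize(math.sin(2 * math.pi * tone_hz * t / SAMPLE_RATE_HZ))
            for t in range(n_samples)]

if __name__ == "__main__":
    codes = pcm_encode(0.001)   # one millisecond of a 440 Hz tone
    print(codes)                # eight 8-bit PCM code words
```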
The present invention can provide a number of advantages depending on the particular configuration. The invention can provide a teleconferencing system that allows participants to participate in multiple conference calls simultaneously using not only IP softphones but also IP hardphones and digital and analog phones. When there is more than one simultaneously occurring teleconference of interest, a participant can avoid having to select which conference to attend or to hop back-and-forth among the conferences with no feedback as to what is happening on the on-hold call. Conventional analog, digital, and IP phones can be readily adapted to the present invention. For conventional analog and digital phones, for example, the responsibility for decoding the text packets and populating the phone's display resides on the switch rather than on the phone itself. By locating the intelligence for the present invention on the switch, the simultaneous streaming of text and voice from different phone calls can be readily effected.
These and other advantages will be apparent from the disclosure of the invention(s) contained herein.
As used herein, “at least one”, “one or more”, and “and/or” are open-ended expressions that are both conjunctive and disjunctive in operation. For example, each of the expressions “at least one of A, B and C”, “at least one of A, B, or C”, “one or more of A, B, and C”, “one or more of A, B, or C” and “A, B, and/or C” means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together.
The above-described embodiments and configurations are neither complete nor exhaustive. As will be appreciated, other embodiments of the invention are possible utilizing, alone or in combination, one or more of the features set forth above or described in detail below.
The invention will be illustrated below in conjunction with an exemplary communication system. Although well suited for use with, e.g., a system having a private branch exchange (PBX) or other similar contact processing switch, the invention is not limited to use with any particular type of communication system switch or configuration of system elements. Those skilled in the art will recognize that the disclosed techniques may be used in any communication application in which it is desirable to provide improved contact processing directed from an external network into a PBX or other communication system switch. The term “contact” or “call” as used herein is intended to include any live voice communications, whether circuit switched or packet switched.
The term “switch” as used herein should be understood to include a PBX, an enterprise switch, or other type of telecommunications system switch, as well as other types of processor-based communication control devices such as servers, computers, adjuncts, etc. By way of example, the switch 102 in the exemplary embodiment may be implemented as an otherwise conventional DEFINITY™ or MULTIVANTAGE™ Enterprise Communication Service (ECS) communication system switch available from Avaya Inc. Other suitable switches are well known in the art and therefore are not described in detail herein.
The communication devices 106 may be wired desktop telephone terminals or any other type of terminals capable of communicating with the switch 102. The word “terminal” as used herein should therefore be understood to include not only wired or wireless desktop telephone terminals but also other types of processor-based communication devices, including without limitation IP softphones, IP hardphones, mobile telephones, personal computers, laptops, personal digital assistants (PDAs), etc.
The switch 102 is also coupled via one or more communication lines 110 to a network 112. In one configuration, the communication lines are trunk lines and the network is the public switched telephone network (PSTN). In another configuration, the communication lines pass through an optional gateway 116 to a packet-switched network 112, such as the Internet. In any event, the lines 110 carry incoming contacts from the network 112 to the switch 102 for processing and carry outgoing contacts from the switch 102 to the network 112. The network 112 is, in turn, coupled to communication devices 116-1, 116-2, . . . 116-M. Preferably, to permit effective mapping by the switch, the wireless networks or other transit networks between the user's external terminal and corresponding internal terminal are configured such that the switch receives not only the intended destination address but also the source address or identity of the external device initiating the contact.
It should be noted that the invention does not require any particular type of information transport medium between switch 102 and terminals 116, i.e., the invention may be implemented with any desired type of transport medium as well as combinations of different types of transport media.
Each of the communication devices 116-1, 116-2, . . . 116-M represents an external terminal not corresponding to any internal extension of the switch 102. These terminals are referred to as “external” in that they are not directly supported as terminal endpoints by the switch 102. Like the terminals 106, the terminals 116 may be wired or wireless desk sets, mobile telephones, personal computers, PDAs, etc. The terminals 116 are an example of devices more generally referred to herein as “external endpoints.” As will be appreciated, the present invention can be implemented using any desired type of external endpoint and network connection.
As will be described in greater detail below, an embodiment of the present invention configures the switch 102 such that one or more of the external terminals 116 are treated substantially as internal switch extensions. Advantageously, this allows a given external terminal to access at least a subset of the desirable contact processing features provided by the switch 102.
It should be emphasized that the configuration of the switch, user terminals, and other elements as shown in
The switch 102 in one implementation includes a processor (not shown), memory (not shown), a database (not shown), one or more interfaces (not shown), a switch fabric (not shown), and a set of service circuits (not shown). The processor may be implemented as a central processing unit (CPU), microprocessor, application-specific integrated circuit (ASIC) or other type of digital data processor as well as various portions or combinations of such elements. The memory may be a random access memory (RAM), a read-only memory (ROM), or combinations of these and other types of electronic memory devices.
The processor operating in conjunction with the memory executes one or more software programs depicted in
The multiplexing agent 150 controls the text and audio streams provided to the subscriber communication devices 106. In teleconference calls, the agent 150 automatically provides a text representation (or text captioning) of participant speech on a monitored conference when the subscriber places the teleconference call on hold. This is normally done by activating a set of one or more activators (e.g., buttons) on a desk set. As used herein, an “activator” refers to a user interface control on a communication device that permits the user to effect a selected operation (e.g., activate a feature, dial a number, etc.) of the communication device. Although the interface controls typically are implemented as a plurality of buttons, they may be implemented in many other forms, such as a touch screen, toggles, a pointer device such as a mouse, and combinations thereof. When the set of activators is activated, the text captioning of the on-hold or inactive conference call starts streaming to the communication device's display. Stated another way, the subscriber can toggle back-and-forth between audio and text representations of participant speech on different conference calls, and thereby participate simultaneously in both conference calls, simply by activating the set of feature activators and specifying which of the calls will be placed on hold. As will be appreciated, at any one time, one conference call will be on hold and the other active, because the communication device has only one display and one speaker. In one configuration, text captioning is initiated when the subscriber activates a selected activator after the switch receives a command to place the call on hold. There will likely be instances where the subscriber desires to disable text captioning, such as when the call is placed on hold so that the subscriber can initiate another call.
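The stream-routing decision the agent makes when the hold state toggles can be pictured with the minimal sketch below; the class and method names are illustrative assumptions, not the actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Call:
    call_id: str
    on_hold: bool = False

class MultiplexingAgent:
    """Routes audio from the active call and captions from the held call."""

    def __init__(self, call_a: Call, call_b: Call) -> None:
        self.calls = [call_a, call_b]

    def toggle_hold(self) -> None:
        """Swap which call is on hold (e.g., when the hold activator is pressed)."""
        for call in self.calls:
            call.on_hold = not call.on_hold

    def route_streams(self) -> dict:
        """Return which call feeds the speaker and which feeds the display."""
        active = next(c for c in self.calls if not c.on_hold)
        held = next(c for c in self.calls if c.on_hold)
        return {"speaker": active.call_id, "display": held.call_id}

if __name__ == "__main__":
    agent = MultiplexingAgent(Call("conf-1"), Call("conf-2", on_hold=True))
    print(agent.route_streams())   # conf-1 audio, conf-2 captions
    agent.toggle_hold()            # subscriber toggles the calls
    print(agent.route_streams())   # conf-2 audio, conf-1 captions
```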
The STT module 154 is conventional. For example, it can be any of a number of commercially available software packages, such as IBM's ViaVoice Telephony™, Dragon Systems' DragonDictate™, or other suitable software, such as software using “speaker adaptive” technologies. As will be appreciated, speaker-adaptive speech recognition software maintains personal training data for each user. The speech recognition can then be performed on a user's local computer (rather than at the switch), or the system may be adapted to the user's personal training data. Typically, the STT module is speaker independent; that is, it is not configured for a particular user. A human operator can review and edit the transcription, depending on the accuracy of the STT module. In one configuration, each transcript is associated with a speaker identifier. In this configuration, the speaker's identity is displayed at a selected point before, during, or after the display of the text transcript of his or her speech on the monitoring internal communication device 106. As will be appreciated, the STT module may be contained in an adjunct processor.
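As an illustration of associating each transcript with a speaker identifier, the sketch below prefixes recognized text with the speaker's identity; the transcribe() stub stands in for whatever STT engine is used and is an assumption, not part of any named product.

```python
from typing import Iterable, Tuple

def transcribe(audio_chunk: bytes) -> str:
    """Placeholder for the STT engine; a real module would decode speech here."""
    return "<recognized text>"

def caption_stream(chunks: Iterable[Tuple[str, bytes]]) -> Iterable[str]:
    """Yield display lines of the form 'Speaker: text' for each audio chunk."""
    for speaker_id, audio in chunks:
        yield f"{speaker_id}: {transcribe(audio)}"

if __name__ == "__main__":
    fake_audio = [("Alice", b"\x00\x01"), ("Bob", b"\x02\x03")]
    for line in caption_stream(fake_audio):
        print(line)
```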
The service circuits may include tone generators, announcement circuits, etc. These circuits and the interfaces are controlled by the processor in implementing call processing functions in the switch.
The switch may include additional elements that are omitted from
Also associated with the switch 102 may be an administrator terminal (not shown) that is used to program the operation of the switch 102 during system administration, e.g., an initial set-up and configuration of the system or a subsequent system-level or user-level configuration.
Other devices not shown in the figures may be associated with the switch 102, such as an adjunct feature server. Such an adjunct may be physically incorporated within the switch and may be partially or completely implemented using other switch elements, such as the processor and memory.
A subscriber communication device 106, according to an embodiment of the present invention, is depicted in
The character display is generally a Liquid Crystal Display or LCD that is limited in size. At a minimum, the alphanumeric display of a typical PBX-connected telephone will permit at least two lines of text, 24 characters per line, to be presented. As part of the recent trend toward adding Internet browser functionality to telephones, displays of considerably higher capacity are becoming more common, illustrative examples being the Avaya Model 2420 digital telephone and the Avaya Model 4625 IP telephone. The display is generally not capable of presenting graphics or media-rich images. As can be seen, the display 208 is in the process of receiving a text representation of a conference participant's voice for an on-hold conference call. The text representation states “This conference call is starting . . . ”
The communication device includes a number of activators, such as pushbutton keys. The activators include soft keys, volume control button(s), and call appearance/feature buttons. Activator 250 is the on-hold button. The remaining activators are self-explanatory. Administration and ongoing maintenance, including key/button labeling, of the communication device can be performed automatically by the switch.
The communication device 106 can have any connection interface to the switch 102. Typical interfaces include a two- or four-wire (twisted pair) input or output interface. Two-wire transmission is where both the transmit and receive paths are carried on the same wire pair or other single medium. Four-wire transmission is where the transmit and receive paths are separate and a wire pair is assigned to each path. As will be appreciated, the two wires can serve as a tip-and-ring interface. The device 106 is generally connected to the switch via a standard telephone jack (not shown).
When the user of a traditional PBX-connected telephone presses the HOLD button or dials the appropriate DTMF feature access code, the phone is put on hold not because of a change that takes place within the phone itself, but instead because the PBX that controls the telephone has received and obeyed a command to change the manner in which it sends (or doesn't send) signals to that phone.
Exemplary analog, digital, and IP telephones that can be used for the communication device include the Avaya, Inc., 2400, 2500, 4400, 4600, 5400, 5600, 6200, 6400, 7100, 7300, 7400, 8100, 8400, 9100, and 9400 series telephones, Avaya, Inc., 3810 and 3910 wireless telephones, Avaya, Inc., ISDN 7500 and 8500 series telephones, Merlin Legend telephones, Merlin ETR/MLS and MLX series telephones, Partner telephones, Partner MLS series telephones, Avaya, Inc., single-line business telephones, and the Avaya, Inc., SIP softphone. The display may be augmented using an expansion unit, such as the EU24 by Avaya, Inc.
A particular preferred telephone is a circuit-switched digital telephone, such as a Digital Control Protocol or DCP enabled telephone of Avaya, Inc. As will be appreciated, DCP is used in a time division multiplexed architecture. In DCP, control signals from the controlling switch are generally not part of the voice path. In other words, DCP generally uses different channels for controlling the communication device display and conveying the incoming and outgoing voice streams. Stated another way, DCP typically uses out-of-band signaling for controlling operations of the subscriber communication device and in-band signaling for conveying the incoming and outgoing voice streams. In a four-wire interface, two of the wires may be used for analog voice communications (incoming and outgoing voice communications respectively) and two for digital control signaling. Although DCP is discussed, it is to be understood that other digital protocols may be used.
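A toy model of the out-of-band arrangement described above, with control messages kept separate from voice frames, is sketched below; the channel names and message shapes are assumptions for illustration only and do not describe the DCP wire format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DigitalLink:
    """Models a DCP-style link: voice and control travel on separate channels."""
    control_channel: List[str] = field(default_factory=list)   # out-of-band signaling
    bearer_channel: List[bytes] = field(default_factory=list)  # in-band voice frames

    def send_display_update(self, text: str) -> None:
        self.control_channel.append(f"DISPLAY:{text}")

    def send_voice_frame(self, frame: bytes) -> None:
        self.bearer_channel.append(frame)

if __name__ == "__main__":
    link = DigitalLink()
    link.send_display_update("This conference call is starting ...")
    link.send_voice_frame(b"\x7f" * 160)   # one 20 ms frame of 8 kHz, 8-bit audio (assumed)
    print(len(link.control_channel), len(link.bearer_channel))
```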
As will be appreciated, in packet-switched communications, particularly VoIP communications, various standards organizations have proposed IP mechanisms by which voice and conversational text can be intermixed in the same phone call. For example, ITU-T Recommendation T.140 and RFC 4103 describe a mechanism by which voice and text are intermixed. Additionally, concurrent intermixing of text and voice is currently supported by Avaya Inc.'s TTY-on-VoIP architecture, in which text is transported on VoIP networks as RFC2833-format descriptions of the corresponding Baudot TTY tones. Unlike DCP, control signals and voice stream data are conveyed along the same channel, or in-band, using different types of packets (i.e., using different packet headers).
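As a rough sketch of how a receiver might separate intermixed voice and conversational text arriving on the same call, the example below classifies RTP packets by payload type; the payload-type numbers are dynamic assignments assumed here for illustration, since real sessions negotiate them out of band.

```python
# Assumed dynamic RTP payload-type assignments for this illustration.
AUDIO_PT = 0      # e.g., PCMU audio
TEXT_PT = 98      # e.g., text/t140 conversational text (RFC 4103)

def payload_type(rtp_packet: bytes) -> int:
    """Extract the 7-bit payload type from the second byte of an RTP header."""
    return rtp_packet[1] & 0x7F

def demultiplex(packets):
    """Split a packet stream into audio frames and text payloads."""
    audio, text = [], []
    for pkt in packets:
        body = pkt[12:]                      # skip the fixed 12-byte RTP header
        if payload_type(pkt) == TEXT_PT:
            text.append(body.decode("utf-8", errors="replace"))
        else:
            audio.append(body)
    return audio, text

if __name__ == "__main__":
    header_audio = bytes([0x80, AUDIO_PT]) + bytes(10)
    header_text = bytes([0x80, TEXT_PT]) + bytes(10)
    audio, text = demultiplex([header_audio + b"\x7f" * 160,
                               header_text + "hello".encode("utf-8")])
    print(len(audio), text)
```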
The first subscriber communication device switches between the first and second operating modes by the subscriber pressing a set of activators, which typically includes the on-hold button. To make the switch between operating modes seamless to the subscriber, the STT module 154 is typically converting each of the incoming audio streams to text regardless of which conference call is on hold. In other words, the STT module 154, in the example above, is continuously converting into text each of the voice streams 312, 316, 324, and 336.
The operation of the multiplexing agent 150 will now be discussed with reference to
In step 400, the agent 150 receives a signal from the monitored subscriber's communication device that the on-hold activator has been pressed. For analog endpoints, this is typically done by receiving a series of Dual Tone Multi-Frequency or DTMF signals uniquely associated with this activator. (Many manufacturers, including Avaya, refer to analog DTMF signals of this sort as “feature access codes.”) In digital and IP endpoints, the indication that the on-hold activator has been pressed is carried as a non-audio transmission, typically via a proprietary digital signaling protocol (such as Avaya DCP) or as readily identifiable IP packets.
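A simplified sketch of recognizing a hold feature-access code from collected DTMF digits follows; the specific digit sequences are assumptions, since actual codes are administered on the switch.

```python
# Assumed feature-access codes; actual codes are administered per switch.
FEATURE_ACCESS_CODES = {
    "*22": "HOLD",
    "*23": "RESUME",
}

def collect_feature_code(dtmf_digits):
    """Accumulate DTMF digits until they match an administered feature-access code."""
    buffer = ""
    for digit in dtmf_digits:
        buffer += digit
        if buffer in FEATURE_ACCESS_CODES:
            return FEATURE_ACCESS_CODES[buffer]
    return None

if __name__ == "__main__":
    print(collect_feature_code(["*", "2", "2"]))   # -> HOLD
```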
In decision diamond 404, the agent 150 determines whether the call that is being placed on hold is a conference call. This can be done, for example, by accessing the data structures associated with the call. A conference call will have at least two other endpoints associated with the call.
When the call being placed on hold is a conference call, the agent 150, in step 408, provides a text representation of the on-hold conference call to the character display of the subscriber's communication device and an audio representation of the active call (which itself may be a conference call) to the speaker of the subscriber's communication device.
When the call being placed on hold is not a conference call, or after step 408, the agent 150 updates the data structures associated with the call to indicate that it has been placed on hold.
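Putting these steps together, a compact sketch of the agent's hold-handling logic might read as follows; the data-structure shapes and function names are illustrative assumptions rather than the switch's actual internals.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Call:
    call_id: str
    other_endpoints: List[str]
    on_hold: bool = False

@dataclass
class Device:
    display: List[str] = field(default_factory=list)
    speaker: List[str] = field(default_factory=list)

    def display_text_stream(self, call: Call) -> None:
        self.display.append(f"captions of {call.call_id}")

    def play_audio_stream(self, call: Call) -> None:
        self.speaker.append(f"audio of {call.call_id}")

def handle_hold_request(held: Call, active: Call, device: Device) -> None:
    """Outline of steps 400-408: hold signal received, conference check, routing."""
    if len(held.other_endpoints) >= 2:       # step 404: is the held call a conference?
        device.display_text_stream(held)     # step 408: captions of the held call
        device.play_audio_stream(active)     #           audio of the active call
    held.on_hold = True                      # record the hold state in the call data

if __name__ == "__main__":
    device = Device()
    handle_hold_request(Call("conf-A", ["ep1", "ep2"]),
                        Call("call-B", ["ep3"]), device)
    print(device.display, device.speaker)
```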
A number of variations and modifications of the invention can be used. It would be possible to provide for some features of the invention without providing others.
In one alternative embodiment, for example, the STT module is not located in spatial proximity to the switch. Instead, the STT module can be located at each of the endpoints participating in the call or at a central location through which the various audio streams pass, such as in the teleconferencing software.
In another alternative embodiment, more than two teleconferences are monitored at one time. Although support for more than two concurrent teleconferences is possible using the proposed architecture, most users are unable to track more than two conversations at a time.
In yet another alternative embodiment, the server buffers the text stream being streamed to the subscriber communication device. The text stream is normally generated in real- or near-real time. Typically, the reading speed of the subscriber is less than the speaking speed of the speaker. To compensate for the disparity, the server buffers the text captioning of the monitored call such that the captioning of the monitored call is not provided in real time to the subscriber's device. When the subscriber toggles to the monitored conference call such that it becomes active, the server can provide the audio equivalent of the buffered text at an accelerated rate to bring the subscriber current with the voice conversation taking place on the conference call. This can be done by dynamically marking the voice stream to indicate the point up to which the captioning has been streamed to the subscriber's device. Alternatively, the buffered captions can be converted back to speech using a Text-To-Speech module, which is well known in the art, and the synthesized speech provided to the subscriber at an accelerated rate. In one configuration, the subscriber can use a designated set of keys to accelerate or decelerate the rate at which text captions are streamed to the display.
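One way to picture the catch-up behavior is sketched below; the buffer structure and the 1.5x playback factor are assumptions chosen only for illustration.

```python
from collections import deque

class CaptionBuffer:
    """Buffers caption segments and drains them faster than real time on catch-up."""

    def __init__(self) -> None:
        self.segments = deque()   # (duration_seconds, text) pairs awaiting display

    def push(self, duration_s: float, text: str) -> None:
        self.segments.append((duration_s, text))

    def catch_up(self, speedup: float = 1.5):
        """Yield buffered segments with compressed display times."""
        while self.segments:
            duration_s, text = self.segments.popleft()
            yield duration_s / speedup, text   # show each segment for less time

if __name__ == "__main__":
    buf = CaptionBuffer()
    buf.push(2.0, "This conference call is starting ...")
    buf.push(3.0, "First agenda item is the budget.")
    for display_s, text in buf.catch_up():
        print(f"show for {display_s:.1f}s: {text}")
```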
In this embodiment, the activators can be redefined when text streaming is being provided to the corresponding subscriber communication device to provide a “TiVo”-type effect. Activators are defined to provide selected features, such as fast forwarding through and rewinding and pausing the streamed text. The memory of the switch enables the rewinding and fast forwarding features while the switch buffer enables the fast forwarding and pausing features. In this manner, the subscriber can pause the streamed text and/or rewind the streamed text to an earlier point in the text that he or she missed while participating in the active call. Conversely, the subscriber can fast forward through the text to eliminate substantially any gap between the speech and its streamed text counterpart. In one configuration, the subscriber activates a selected set of activators to enable the redefinition of the activators to provide rewind, fast forward, and pause features. In another configuration, the activators are automatically redefined when a call is placed on hold. In this configuration, the activators are selected so that they will not interfere with the subscriber initiating another call.
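A rough sketch of such pause, rewind, and fast-forward handling over the buffered captions is shown below; the method names and cursor model are assumptions, not a description of the switch software.

```python
class CaptionPlayer:
    """Plays buffered caption lines with pause, rewind, and fast-forward controls."""

    def __init__(self, lines):
        self.lines = list(lines)   # full caption history kept in switch memory
        self.cursor = 0            # next line to show on the display
        self.paused = False

    def pause(self) -> None:
        self.paused = True

    def resume(self) -> None:
        self.paused = False

    def rewind(self, n: int = 1) -> None:
        self.cursor = max(0, self.cursor - n)

    def fast_forward(self, n: int = 1) -> None:
        self.cursor = min(len(self.lines), self.cursor + n)

    def next_line(self):
        """Return the next caption line, or None when paused or caught up."""
        if self.paused or self.cursor >= len(self.lines):
            return None
        line = self.lines[self.cursor]
        self.cursor += 1
        return line

if __name__ == "__main__":
    player = CaptionPlayer(["line 1", "line 2", "line 3"])
    print(player.next_line())   # line 1
    player.rewind()             # missed it; go back one line
    print(player.next_line())   # line 1 again
```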
In yet another embodiment, multi-lingual captioning is provided. Illustratively, if the conference participants are speaking German, separate conference bridge numbers could be provided for the streaming English captions, the streaming French captions, and so on. In such an environment, an English-speaking conference participant with no fluency in German could call into the audio conference on one line, call into the English text-only conference on a second line, put the second line on hold, return to the first line, and then receive streaming English “sub-titles” while listening to the German speakers. Conventional products, such as the L&H Power Translator Pro™, from Lernout & Hauspie Speech Products N.V. of Belgium, translate text in a first language into text in a second language. A multi-lingual teleconferencing architecture is discussed in U.S. Pat. No. 6,816,468, which is incorporated herein by this reference. In this system, the speech of each teleconference participant is transcribed using voice recognition technology in real or near-real time. The transcribed text is translated into a selected language. The translation is generated in real or near-real time on a word-by-word basis or, alternatively, on a phrase-by-phrase or sentence-by-sentence basis. The translated and transcribed text is displayed for a participant using the established data connection. The transcribed (and possibly translated) text may be displayed in real or near-real time during a participant's speech. Audio translation services are also provided to a participant using text-to-speech software to generate an audio signal from the translated and transcribed text.
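A schematic of the transcribe-translate-display pipeline described above is sketched below; the transcribe() and translate() stubs stand in for whatever recognition and translation products are used, and their names and outputs are assumptions.

```python
def transcribe(audio_chunk: bytes) -> str:
    """Placeholder STT step; returns the speaker's words in the source language."""
    return "Die Konferenz beginnt"

def translate(text: str, target_language: str) -> str:
    """Placeholder translation step; a real system would call a translation engine."""
    return "The conference is starting" if target_language == "en" else text

def caption_pipeline(audio_chunks, target_language: str = "en"):
    """Transcribe each chunk, translate it, and yield a caption line."""
    for chunk in audio_chunks:
        yield translate(transcribe(chunk), target_language)

if __name__ == "__main__":
    for caption in caption_pipeline([b"\x00\x01"]):
        print(caption)   # streaming English "sub-titles" for the German audio
```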
In yet another embodiment, the communication device is configured as a web browser and receives streaming text from a Uniform Resource Locator or URL accessed by the browser. As will be appreciated, a web server associated with the URL provides the streaming text to the communication device, which displays the received text to the user. In this manner, a user can receive the captions of one conversation while participating by voice in another.
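A minimal sketch of consuming such a streaming-text URL and painting it to a display follows; the URL is hypothetical, and an in-memory stream stands in for the HTTP response so the sketch runs without a caption server.

```python
import io
from urllib.request import urlopen

CAPTION_URL = "http://example.invalid/conference/captions"   # hypothetical URL

def open_caption_stream(url: str = CAPTION_URL):
    """A real deployment would return urlopen(url); the URL here is a placeholder."""
    return urlopen(url)

def display_streaming_text(response) -> None:
    """Read caption lines from a file-like response and show them as they arrive."""
    for raw_line in response:
        print(raw_line.decode("utf-8").rstrip())

if __name__ == "__main__":
    # Stand-in for open_caption_stream(): an in-memory stream of caption lines.
    fake_response = io.BytesIO(b"This conference call is starting ...\n"
                               b"First agenda item is the budget.\n")
    display_streaming_text(fake_response)
```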
In yet another embodiment, dedicated hardware implementations including, but not limited to, Application Specific Integrated Circuits or ASICs, programmable logic arrays, and other hardware devices can likewise be constructed to implement the methods described herein. Furthermore, alternative software implementations including, but not limited to, distributed processing or component/object distributed processing, parallel processing, or virtual machine processing can also be constructed to implement the methods described herein.
It should also be stated that the software implementations of the present invention are optionally stored on a tangible storage medium, such as a magnetic medium like a disk or tape, a magneto-optical or optical medium like a disk, or a solid state medium like a memory card or other package that houses one or more read-only (non-volatile) memories. A digital file attachment to e-mail or other self-contained information archive or set of archives is considered a distribution medium equivalent to a tangible storage medium. Accordingly, the invention is considered to include a tangible storage medium or distribution medium and prior art-recognized equivalents and successor media, in which the software implementations of the present invention are stored.
Although the present invention describes components and functions implemented in the embodiments with reference to particular standards and protocols, the invention is not limited to such standards and protocols. Other similar standards and protocols not mentioned herein are in existence and are considered to be included in the present invention. Moreover, the standards and protocols mentioned herein and other similar standards and protocols not mentioned herein are periodically superseded by faster or more effective equivalents having essentially the same functions. Such replacement standards and protocols having the same functions are considered equivalents included in the present invention.
The present invention, in various embodiments, includes components, methods, processes, systems and/or apparatus substantially as depicted and described herein, including various embodiments, subcombinations, and subsets thereof. Those of skill in the art will understand how to make and use the present invention after understanding the present disclosure. The present invention, in various embodiments, includes providing devices and processes in the absence of items not depicted and/or described herein or in various embodiments hereof, including in the absence of such items as may have been used in previous devices or processes, e.g., for improving performance, achieving ease, and/or reducing cost of implementation.
The foregoing discussion of the invention has been presented for purposes of illustration and description. The foregoing is not intended to limit the invention to the form or forms disclosed herein. In the foregoing Detailed Description for example, various features of the invention are grouped together in one or more embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the following claims are hereby incorporated into this Detailed Description, with each claim standing on its own as a separate preferred embodiment of the invention.
Moreover, though the description of the invention has included description of one or more embodiments and certain variations and modifications, other variations and modifications are within the scope of the invention, e.g., as may be within the skill and knowledge of those in the art, after understanding the present disclosure. It is intended to obtain rights which include alternative embodiments to the extent permitted, including alternate, interchangeable and/or equivalent structures, functions, ranges or steps to those claimed, whether or not such alternate, interchangeable and/or equivalent structures, functions, ranges or steps are disclosed herein, and without intending to publicly dedicate any patentable subject matter.
| Number | Name | Date | Kind |
|---|---|---|---|
| 4163124 | Jolissaint | Jul 1979 | A |
| 4567323 | Lottes et al. | Jan 1986 | A |
| 4737983 | Frauenthal et al. | Apr 1988 | A |
| 4777469 | Engelke et al. | Oct 1988 | A |
| 4797911 | Szlam et al. | Jan 1989 | A |
| 4894857 | Szlam et al. | Jan 1990 | A |
| 4897868 | Engelke et al. | Jan 1990 | A |
| 4953159 | Hayden et al. | Aug 1990 | A |
| 4959847 | Engelke et al. | Sep 1990 | A |
| 5001710 | Gawrys et al. | Mar 1991 | A |
| 5097528 | Gursahaney et al. | Mar 1992 | A |
| 5101425 | Darland | Mar 1992 | A |
| 5155761 | Hammond | Oct 1992 | A |
| 5164983 | Brown et al. | Nov 1992 | A |
| 5206903 | Kohler et al. | Apr 1993 | A |
| 5210789 | Jeffus et al. | May 1993 | A |
| 5274700 | Gechter et al. | Dec 1993 | A |
| 5278898 | Cambray et al. | Jan 1994 | A |
| 5291550 | Levy et al. | Mar 1994 | A |
| 5299260 | Shaio | Mar 1994 | A |
| 5309513 | Rose | May 1994 | A |
| 5325417 | Engelke et al. | Jun 1994 | A |
| 5327479 | Engelke et al. | Jul 1994 | A |
| 5335268 | Kelly, Jr. et al. | Aug 1994 | A |
| 5335269 | Steinlicht | Aug 1994 | A |
| 5351288 | Engelke et al. | Sep 1994 | A |
| 5390243 | Casselman et al. | Feb 1995 | A |
| 5432837 | Engelke et al. | Jul 1995 | A |
| 5436965 | Grossman et al. | Jul 1995 | A |
| 5444774 | Friedes | Aug 1995 | A |
| 5469503 | Butensky et al. | Nov 1995 | A |
| 5469504 | Blaha | Nov 1995 | A |
| D364865 | Engelke et al. | Dec 1995 | S |
| 5473773 | Aman et al. | Dec 1995 | A |
| 5479497 | Kovarik | Dec 1995 | A |
| 5500795 | Powers et al. | Mar 1996 | A |
| 5504894 | Ferguson et al. | Apr 1996 | A |
| 5506898 | Costantini et al. | Apr 1996 | A |
| 5530744 | Charalambous et al. | Jun 1996 | A |
| 5537470 | Lee | Jul 1996 | A |
| 5537542 | Eilert et al. | Jul 1996 | A |
| 5544232 | Baker et al. | Aug 1996 | A |
| 5546452 | Andrews et al. | Aug 1996 | A |
| 5581593 | Engelke et al. | Dec 1996 | A |
| 5592378 | Cameron et al. | Jan 1997 | A |
| 5592542 | Honda et al. | Jan 1997 | A |
| 5594726 | Thompson et al. | Jan 1997 | A |
| 5604786 | Engelke et al. | Feb 1997 | A |
| 5606361 | Davidsohn et al. | Feb 1997 | A |
| 5611076 | Durflinger et al. | Mar 1997 | A |
| 5627884 | Williams et al. | May 1997 | A |
| 5642515 | Jones et al. | Jun 1997 | A |
| 5653242 | Brockelsby | Aug 1997 | A |
| 5684872 | Flockhart et al. | Nov 1997 | A |
| 5684964 | Powers et al. | Nov 1997 | A |
| 5689698 | Jones et al. | Nov 1997 | A |
| 5703943 | Otto | Dec 1997 | A |
| 5713014 | Durflinger et al. | Jan 1998 | A |
| 5724092 | Davidsohn et al. | Mar 1998 | A |
| 5724405 | Engelke et al. | Mar 1998 | A |
| 5740238 | Flockhart et al. | Apr 1998 | A |
| 5742675 | Kilander et al. | Apr 1998 | A |
| 5745711 | Kitahara et al. | Apr 1998 | A |
| 5748468 | Notenboom et al. | May 1998 | A |
| 5749079 | Yong et al. | May 1998 | A |
| 5751707 | Voit et al. | May 1998 | A |
| 5752027 | Familiar | May 1998 | A |
| 5754639 | Flockhart et al. | May 1998 | A |
| 5754776 | Hales et al. | May 1998 | A |
| 5754841 | Carino, Jr. | May 1998 | A |
| 5757904 | Anderson | May 1998 | A |
| 5790677 | Fox et al. | Aug 1998 | A |
| 5794250 | Carino, Jr. et al. | Aug 1998 | A |
| 5796393 | MacNaughton et al. | Aug 1998 | A |
| 5802282 | Hales et al. | Sep 1998 | A |
| 5809425 | Colwell et al. | Sep 1998 | A |
| 5818907 | Maloney et al. | Oct 1998 | A |
| 5825869 | Brooks et al. | Oct 1998 | A |
| 5828747 | Fisher et al. | Oct 1998 | A |
| 5838968 | Culbert | Nov 1998 | A |
| 5839117 | Cameron et al. | Nov 1998 | A |
| 5862519 | Sharma et al. | Jan 1999 | A |
| 5875437 | Atkins | Feb 1999 | A |
| 5880720 | Iwafune et al. | Mar 1999 | A |
| 5881238 | Aman et al. | Mar 1999 | A |
| 5884032 | Bateman et al. | Mar 1999 | A |
| 5889956 | Hauser et al. | Mar 1999 | A |
| 5897622 | Blinn et al. | Apr 1999 | A |
| 5903641 | Tonisson | May 1999 | A |
| 5903877 | Berkowitz et al. | May 1999 | A |
| 5905793 | Flockhart et al. | May 1999 | A |
| 5909482 | Engelke | Jun 1999 | A |
| 5915012 | Miloslavsky | Jun 1999 | A |
| 5926538 | Deryugin et al. | Jul 1999 | A |
| 5930786 | Carino, Jr. et al. | Jul 1999 | A |
| 5937051 | Hurd et al. | Aug 1999 | A |
| 5937402 | Pandilt | Aug 1999 | A |
| 5940496 | Gisby et al. | Aug 1999 | A |
| 5943416 | Gisby | Aug 1999 | A |
| 5948065 | Eilert et al. | Sep 1999 | A |
| 5960073 | Kikinis et al. | Sep 1999 | A |
| 5963635 | Szlam et al. | Oct 1999 | A |
| 5963911 | Walker et al. | Oct 1999 | A |
| 5970132 | Brady | Oct 1999 | A |
| 5974116 | Engelke et al. | Oct 1999 | A |
| 5974135 | Breneman et al. | Oct 1999 | A |
| 5974462 | Aman et al. | Oct 1999 | A |
| 5978654 | Colwell et al. | Nov 1999 | A |
| 5982873 | Flockhart et al. | Nov 1999 | A |
| 5987117 | McNeil et al. | Nov 1999 | A |
| 5991392 | Miloslavsky | Nov 1999 | A |
| 5996013 | Delp et al. | Nov 1999 | A |
| 5999963 | Bruno et al. | Dec 1999 | A |
| 6000832 | Franklin et al. | Dec 1999 | A |
| 6011844 | Uppaluru et al. | Jan 2000 | A |
| 6038293 | Mcnerney et al. | Mar 2000 | A |
| 6044144 | Becker et al. | Mar 2000 | A |
| 6044205 | Reed et al. | Mar 2000 | A |
| 6044355 | Crockett et al. | Mar 2000 | A |
| 6049547 | Fisher et al. | Apr 2000 | A |
| 6052723 | Ginn | Apr 2000 | A |
| 6055308 | Miloslavsky et al. | Apr 2000 | A |
| 6064730 | Ginsberg | May 2000 | A |
| 6064731 | Flockhart et al. | May 2000 | A |
| 6075841 | Engelke et al. | Jun 2000 | A |
| 6075842 | Engelke et al. | Jun 2000 | A |
| 6084954 | Harless | Jul 2000 | A |
| 6088441 | Flockhart et al. | Jul 2000 | A |
| 6115462 | Servi et al. | Sep 2000 | A |
| 6151571 | Pertrushin | Nov 2000 | A |
| 6154769 | Cherkasova et al. | Nov 2000 | A |
| 6163607 | Bogart et al. | Dec 2000 | A |
| 6173053 | Bogart et al. | Jan 2001 | B1 |
| 6175564 | Miloslavsky et al. | Jan 2001 | B1 |
| 6178441 | Elnozahy | Jan 2001 | B1 |
| 6185292 | Miloslavsky | Feb 2001 | B1 |
| 6192122 | Flockhart et al. | Feb 2001 | B1 |
| 6215865 | McCalmont | Apr 2001 | B1 |
| 6226377 | Donaghue, Jr. | May 2001 | B1 |
| 6229819 | Darland et al. | May 2001 | B1 |
| 6230183 | Yocom et al. | May 2001 | B1 |
| 6233314 | Engelke | May 2001 | B1 |
| 6233333 | Dezonmo | May 2001 | B1 |
| 6240417 | Eastwick | May 2001 | B1 |
| 6259969 | Tackett et al. | Jul 2001 | B1 |
| 6263066 | Shtivelman et al. | Jul 2001 | B1 |
| 6263359 | Fong et al. | Jul 2001 | B1 |
| 6272544 | Mullen | Aug 2001 | B1 |
| 6275806 | Pertrushin | Aug 2001 | B1 |
| 6275812 | Haq et al. | Aug 2001 | B1 |
| 6275991 | Erlin | Aug 2001 | B1 |
| 6278777 | Morley | Aug 2001 | B1 |
| 6292550 | Burritt | Sep 2001 | B1 |
| 6295353 | Flockhart et al. | Sep 2001 | B1 |
| 6307921 | Engelke et al. | Oct 2001 | B1 |
| 6353810 | Petrushin | Mar 2002 | B1 |
| 6356632 | Foster et al. | Mar 2002 | B1 |
| 6356633 | Armstrong | Mar 2002 | B1 |
| 6366668 | Borst et al. | Apr 2002 | B1 |
| 6389028 | Bondarenko et al. | May 2002 | B1 |
| 6389132 | Price | May 2002 | B1 |
| 6389400 | Bushey et al. | May 2002 | B1 |
| 6424709 | Doyle et al. | Jul 2002 | B1 |
| 6426950 | Mistry | Jul 2002 | B1 |
| 6427137 | Petrushin | Jul 2002 | B2 |
| 6430282 | Bannister et al. | Aug 2002 | B1 |
| 6434230 | Gabriel | Aug 2002 | B1 |
| 6449356 | Dezonno | Sep 2002 | B1 |
| 6449358 | Anisimov et al. | Sep 2002 | B1 |
| 6449646 | Sikora et al. | Sep 2002 | B1 |
| 6453038 | McFarlane et al. | Sep 2002 | B1 |
| 6463148 | Brady | Oct 2002 | B1 |
| 6463346 | Flockhart et al. | Oct 2002 | B1 |
| 6463415 | St. John | Oct 2002 | B2 |
| 6480826 | Pertrushin | Nov 2002 | B2 |
| 6490350 | McDuff et al. | Dec 2002 | B2 |
| 6493426 | Engelke et al. | Dec 2002 | B2 |
| 6510206 | Engelke et al. | Jan 2003 | B2 |
| 6535600 | Fisher et al. | Mar 2003 | B1 |
| 6535601 | Flockhart et al. | Mar 2003 | B1 |
| 6549611 | Engelke et al. | Apr 2003 | B2 |
| 6560330 | Gabriel | May 2003 | B2 |
| 6560649 | Mullen et al. | May 2003 | B1 |
| 6560707 | Curtis et al. | May 2003 | B2 |
| 6563920 | Flockhart et al. | May 2003 | B1 |
| 6567503 | Engelke et al. | May 2003 | B2 |
| 6594346 | Engelke | Jul 2003 | B2 |
| 6597685 | Miloslavsky et al. | Jul 2003 | B2 |
| 6603835 | Engelke et al. | Aug 2003 | B2 |
| 6614903 | Flockhart et al. | Sep 2003 | B1 |
| 6650748 | Edwards et al. | Nov 2003 | B1 |
| 6697457 | Petrushin | Feb 2004 | B2 |
| 6704409 | Dilip et al. | Mar 2004 | B1 |
| 6707903 | Burok et al. | Mar 2004 | B2 |
| 6714976 | Wilson et al. | Mar 2004 | B1 |
| 6728934 | Scopes | Apr 2004 | B1 |
| 6754333 | Flockhart et al. | Jun 2004 | B1 |
| 6766013 | Flockhart et al. | Jul 2004 | B2 |
| 6766014 | Flockhart et al. | Jul 2004 | B2 |
| 6816468 | Cruickshank | Nov 2004 | B1 |
| 6882707 | Engelke et al. | Apr 2005 | B2 |
| 6934366 | Engelke et al. | Aug 2005 | B2 |
| 6947543 | Alvarado et al. | Sep 2005 | B2 |
| 7003082 | Engelke et al. | Feb 2006 | B2 |
| 7006604 | Engelke | Feb 2006 | B2 |
| 7035927 | Flockhart et al. | Apr 2006 | B2 |
| 7088812 | Atwood et al. | Aug 2006 | B1 |
| 7133513 | Zhang | Nov 2006 | B1 |
| 7236580 | Sarkar et al. | Jun 2007 | B1 |
| 20020055950 | Witteman | May 2002 | A1 |
| 20020087579 | Chasanoff et al. | Jul 2002 | A1 |
| 20020124051 | Ludwig et al. | Sep 2002 | A1 |
| 20020124053 | Adams et al. | Sep 2002 | A1 |
| 20020161578 | Saindon et al. | Oct 2002 | A1 |
| 20020194002 | Petrushin | Dec 2002 | A1 |
| 20030038754 | Goldstein et al. | Feb 2003 | A1 |
| 20030126141 | Hassman et al. | Jul 2003 | A1 |
| 20030167276 | Simpson et al. | Sep 2003 | A1 |
| 20030174826 | Hesse | Sep 2003 | A1 |
| 20030174830 | Boyer et al. | Sep 2003 | A1 |
| 20030177017 | Boyer et al. | Sep 2003 | A1 |
| 20030227478 | Chatfield | Dec 2003 | A1 |
| 20040002049 | Beavers et al. | Jan 2004 | A1 |
| 20040058694 | Mendiola et al. | Mar 2004 | A1 |
| 20040081292 | Brown et al. | Apr 2004 | A1 |
| 20040081293 | Brown et al. | Apr 2004 | A1 |
| 20040125932 | Orbach et al. | Jul 2004 | A1 |
| 20040199580 | Zhakov et al. | Oct 2004 | A1 |
| 20040203878 | Thomson | Oct 2004 | A1 |
| 20050053214 | Reding et al. | Mar 2005 | A1 |
| 20050084086 | Hesse | Apr 2005 | A1 |
| 20050188412 | Dacosta | Aug 2005 | A1 |
| 20050232169 | McLaughlin et al. | Oct 2005 | A1 |
| 20050283752 | Fruchter | Dec 2005 | A1 |
| 20060149815 | Spradling et al. | Jul 2006 | A1 |
| 20070033003 | Morris | Feb 2007 | A1 |
| 20070118433 | Bess | May 2007 | A1 |
| 20070127645 | Bloebaum et al. | Jun 2007 | A1 |
| 20070195940 | Miloslavsky et al. | Aug 2007 | A1 |
| 20070244895 | Mohler et al. | Oct 2007 | A1 |
| 20070299838 | Behrens | Dec 2007 | A1 |
| 20080005072 | Meek et al. | Jan 2008 | A1 |
| 20080005249 | Hart | Jan 2008 | A1 |
| 20080062895 | Chapman et al. | Mar 2008 | A1 |
| 20080147722 | Dolin et al. | Jun 2008 | A1 |
| 20080215614 | Slattery | Sep 2008 | A1 |
| Number | Date | Country |
|---|---|---|
| 2143198 | Jan 1995 | CA |
| 2174762 | Mar 1996 | CA |
| 0501189 | Sep 1992 | EP |
| 0740450 | Oct 1996 | EP |
| 0772335 | May 1997 | EP |
| 0829996 | Mar 1998 | EP |
| 0855826 | Jul 1998 | EP |
| 0863651 | Sep 1998 | EP |
| 0866407 | Sep 1998 | EP |
| 899673 | Mar 1999 | EP |
| 998108 | May 2000 | EP |
| 1091307 | Apr 2001 | EP |
| 1150236 | Oct 2001 | EP |
| 1288795 | Mar 2003 | EP |
| 0 899 952 | Jun 2003 | EP |
| 1 469 663 | Oct 2004 | EP |
| 2273418 | Jun 1994 | GB |
| 2290192 | Dec 1995 | GB |
| WO 9607141 | Mar 1996 | WO |
| WO 9728635 | Aug 1997 | WO |
| WO 9856207 | Dec 1998 | WO |
| WO 2005017674 | Feb 2005 | WO |
| WO 2006078683 | Jul 2006 | WO |
| WO 2009041982 | Apr 2009 | WO |
| Entry |
|---|
| U.S. Appl. No. 12/241,988, filed Sep. 30, 2008, Katz. |
| U.S. Appl. No. 12/270,443, filed Nov. 13, 2008, Kohler et al. |
| U.S. Appl. No. 12/372,903, filed Feb. 18, 2009, Erhart et al. |
| U.S. Appl. No. 12/389,240, filed Feb. 19, 2009, Gartner. |
| U.S. Appl. No. 12/464,659, filed May 12, 2009, Bland et al. |
| U.S. Appl. No. 12/540,202, filed Aug. 12, 2009, Gartner et al. |
| “http://en.wikipedia.org/wiki/Conference_Call”, Wikipedia, Feb. 25, 2009, Publisher: Wikimedia Foundation, Inc., Published in US. |
| http://en.wikipedia.org/wiki/Web_conferencing, Wikipedia, Feb. 25, 2009, Publisher: Wikimedia Foundation, Inc., Published in US. |
| “Answering the Social Phone” Media Philosopher, http://www.mediaphilosopher.com/2008/04/14/answering-the-social-phone/, Apr. 14, 2008, pp. 1-6. |
| “Applications, NPRI's Predictive Dialing Package,” Computer Technology (Fall 1993), p. 86. |
| “Call Center Software You Can't Outgrow,” Telemarketing® (Jul. 1993), p. 105. |
| “Domain Name Services,” available at http://www.pism.com/chapt09/chapt09.html, downloaded Mar. 31, 2003, 21 pages. |
| “eGain's Commerce 2000 Platform Sets New Standard for eCommerce Customer Communications,” Business Wire (Nov. 15, 1999), 3 pages. |
| “Internet Protocol Addressing,” available at http://samspade.org/d/ipdns.html, downloaded Mar. 31, 2003, 9 pages. |
| “Product Features,” Guide to Call Center Automation, Brock Control Systems, Inc., Activity Managers Series™, Section 5—Company B120, p. 59, 1992. |
| “Product Features,” Guide to Call Center Automation, CRC Information Systems, Inc., Tel-Athena, Section 5—Company C520, p. 95, 1992. |
| “The Communications Factors: Comcast is Taking the First Step in the Relationship Economy” http://onthemarkwriting.com/2008/02/20/comcast-is-taking-the-first-step-in-the-relationship-economy/, Feb. 20, 2008, pp. 1-3. |
| “VAST™, Voicelink Application Software for Teleservicing®,” System Manager User's Guide, Digital Systems (1994), pp. ii, vii-ix, 1-2, 2-41 through 2-77. |
| “Welcome to the Service Cloud” salesforce.com, Copyright 2000-2009. |
| “When Talk Isn't Cheap,” Sm@rt Reseller, v. 3, n. 13 (Apr. 3, 2000), p. 50. |
| “Word Frequencies in Written and Spoken English” (Andrew Wilson, Geoffrey Leech, Paul Rayson, ISBN 0582-32007-0, Prentice Hall, 2001). |
| Ahmed, Sarah, “A Scalable Byzantine Fault Tolerant Secure Domain Name System,” thesis submitted to Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, Jan. 22, 2001, 101 pages. |
| Alston, David Consumers are Shouting into your Brand's “Social Phone” http://www.radian6.com/blog/76/comsumers-are-shouting-into-your-band's-%22social-phone%22/, Aug. 19, 2008, pp. 1-2. |
| Avaya, Inc. Business Advocate Options, at http://www.avaya.com, downloaded on Feb. 15, 2003, Avaya, Inc. 2003. |
| Avaya, Inc. Business Advocate Product Summary, at http://www.avaya.com, downloaded on Feb. 15, 2003, Avaya, Inc. 2003, 3 pages. |
| Avaya, Inc. CentreVu Advocate, Release 9, User Guide, Dec. 2000. |
| Avaya, Inc., “Better Implementation of IP in Large Networks,” Avaya, Inc. 2002, 14 pages. |
| Avaya, Inc., “The Advantages of Load Balancing in the Multi-Call Center Enterprise,” Avaya, Inc., 2002, 14 pages. |
| Avaya, Inc., “Voice Over IP Via Virtual Private Networks: An Overview,” Avaya, Inc., Feb. 2001, 9 pages. |
| Bellsouth Corp., “Frequently Asked Questions—What is a registrar?,” available at https://registration.bellsouth.net/NASApp/DNSWebUI/FAQ.jsp, downloaded Mar. 31, 2003, 4 pages. |
| Binhammer, Richard “In Depth: Dell” Dell, Inc., http://blog.fluentsimplicity.com/twitter-brand-index/dell/, Aug. 2008, pp. 1-3. |
| Chavez, David, et al., “Avaya MultiVantage Software: Adapting Proven Call Processing for the Transition to Converged IP Networks,” Avaya, Inc., Aug. 2002. |
| Coles, Scott, “A Guide for Ensuring Service Quality in IP Voice Networks,” Avaya, Inc., 2002, pp. 1-17. |
| Damerau, Fred, “Generating and evaluation domain-oriented multi-word terms from texts,” Information Processing and Management 29(4):433-447, 1993. |
| Dawson, “NPRI's Powerguide, Software Overview” Call Center Magazine (Jun. 1993), p. 85. |
| Doo-Hyun Kim et al. “Collaborative Multimedia Middleware Architecture and Advanced Internet Call Center,” Proceedings at the International Conference on Information Networking (Jan. 31, 2001), pp. 246-250. |
| E. Noth et al., “Research Issues for the Next Generation Spoken”: University of Erlangen-Nuremberg, Bavarian Research Centre for Knowledge-Based Systems, at http://www5.informatik.uni-erlangen.de/literature/psdir/1999/Noeth99:RIF.ps.gz, printed Feb. 10, 2003; 8 pages. |
| Eliason, Frank “Background Noise: Musings on Internet Media, Technology, and Pretty Much Whatever Else I Feel Like” http://kzimmerman.typepad.com/background—noise/2008/04/dear-comcast-i.html, Apr. 7, 2008, pp. 1-3. |
| Foster, Robin, et al., “Avaya Business Advocate and its Relationship to Multi-Site Load Balancing Applications,” Avaya, Inc., Mar. 2002, 14 pages. |
| Geotel Communications Corporation Web site printout entitled “Intelligent CallRouter” Optimizing the Interaction Between Customers and Answering Resources., 1998, 6 pages. |
| Ives, Bill “Radian6—Monitoring Social Media” The {App} Gap, http://www.theappgap.com/radian6—monitoring-social-meida.html, Nov. 10, 2008, pp. 1-14. |
| John H.L. Hansen and Levent M. Arsian, Foreign Accent Classification Using Source Generator Based Prosodic Features, IEEE Proc. ICASSP, vol. 1, pp. 836-839, Detroit USA (May 1995). |
| L.F. Lamel and J.L. Gauvain, Language Identification Using Phone-Based Acoustic Likelihood, ICASSP-94, date unknown; 4 pages. |
| Levent M. Arsian and John H.L. Hansen, Language Accent Classification in American English, Robust Speech Processing Laboratory, Duke University Department of Electrical Engineering, Durham, NC, Technical Report RSPL-96-7, revised Jan. 29, 1996. pp. 1-16. |
| Levent M. Arsian, Foreign Accent Classification in American English, Department of Electrical Computer Engineering, Duke University, Thesis, pp. 1-200 (1996). |
| MIT Project Oxygen, Pervasive, Human-Centered Computing (MIT Laboratory for Computer Science) (Jun. 2000) pp. 1-15. |
| Perez, Sara “Read Write Web: How to Get Customer Service Via Twitter” http://www.readwriteweb.com/archives/how—to—get—customer—service—via—twitter.php, Apr. 10, 2008, pp. 1-14. |
| Presentation by Victor Zue, The MIT Oxygen Project, MIT Laboratory for Computer Science (Apr. 25-26, 2000) 9 pages. |
| RADIAN6 company website, http://www.radian6.com/cms/home, Printed Aug. 7, 2009; 1 page. |
| Stevenson et al.; “Name Resolution in Network and Systems Management Environments”; http://netman.cit.buffalo.edu/Doc/DStevenson/NR-NMSE.html; printed Mar. 31, 2003; 16 pages. |
| http://support.avaya.com/japple/css/japple?PAGE+ProductIndex; printed Mar. 17, 2006; 11 pages. |
| http://www.ultratec.com/captel/; printed Feb. 24, 2006; 2 pages. |
| http://www.captionedtelephone.com/how-it-works.phtml; printed Feb. 24, 2006; 2 pages. |
| http://www.captionedtelephone.com/faqs.phtml; printed Feb. 24, 2006; 6 pages. |
| http://www.avaya.com/gcm/master-usa/en-us/products/offers/5400—series—digital—telephone.htm&View=ProdDesc; printed Mar. 17, 2006; 2 pages. |
| Avaya 5400 Series Digital Telephones Fact Sheet; 2 pages, 2005. |
| Avaya 5400 Series Digital Telephones Product Features Sheet; 4 pages, 2006. |
| Avaya Digital & IP Telephones Fact Sheet; 2 pages, 2005. |
| The 6402 and 6402D Telephones Instruction Sheets; 20 pages. |
| AT&T The 8400-Series Voice Terminals; Instructions for Installation, Switch Administration, and Programming the Options; Issue 2, Jan. 1996; 886 pages. |