Speech analytics system and system and method for determining structured speech

Information

  • Patent Grant
  • Patent Number
    9,401,145
  • Date Filed
    Monday, May 5, 2014
  • Date Issued
    Tuesday, July 26, 2016
Abstract
A method for converting speech to text in a speech analytics system is provided. The method includes receiving audio data containing speech made up of sounds from an audio source, processing the sounds with a phonetic module resulting in symbols corresponding to the sounds, and processing the symbols with a language module and occurrence table resulting in text. The method also includes determining a probability of correct translation for each word in the text, comparing the probability of correct translation for each word in the text to the occurrence table, and adjusting the occurrence table based on the probability of correct translation for each word in the text.
Description
BACKGROUND

Conversations within contact centers are typically more structured than everyday speech. Contact center conversations may contain a mixture of free conversation and structured speech. Structured speech consists of sequences that repeat at a higher rate than free speech. Structured speech may include scripts that are read word-for-word by agents, computer-generated voice mail messages, interactive voice response (IVR) generated speech, and figures of speech.


Accuracy is also a concern when translating speech to text to generate transcripts of conversations in the contact center. When performing speech-to-text conversion, errors are common. These may be caused by noise on the line, by speakers who do not speak clearly, or by errors in the transcription system itself. In long texts, the probability of errors increases. Thus, it is difficult to determine agent compliance with scripts and to verify quality assurance.


SUMMARY

In accordance with some implementations described herein, there is provided a method for determining structured speech. The method may include receiving a transcript of an audio recording created by a speech-to-text communication processing system. Text in the transcript is then analyzed to determine repetitions within the text, the repetitions being indicative of structured speech. From the repetitions, a duration distribution may be determined to ascertain a first type of structured speech. The first type of structured speech may be interactive voice response (IVR) generated speech. A length of the repetitions may be determined to ascertain a second type of structured speech. The second type of structured speech may be scripts spoken by, e.g., agents in the contact center.


In accordance with some implementations, there is provided a method for converting speech to text in a speech analytics system. The method may include receiving audio data containing speech made up of sounds from an audio source, processing the sounds with a phonetic module resulting in symbols corresponding to the sounds, and processing the symbols with a language module and occurrence table resulting in text. The method also may include determining a probability of correct translation for each word in the text, comparing the probability of correct translation for each word in the text to the occurrence table, and adjusting the occurrence table based on the probability of correct translation for each word in the text.


This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the detailed description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.





BRIEF DESCRIPTION OF THE DRAWINGS

The foregoing summary, as well as the following detailed description of illustrative implementations, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the implementations, there are shown in the drawings example constructions; however, the implementations are not limited to the specific methods and instrumentalities disclosed. In the drawings:



FIG. 1 illustrates a speech to text translation system;



FIG. 2 illustrates a method for translating speech to text in a speech analytics system;



FIG. 3 illustrates a method of determining a type of speech in accordance with a structure;



FIGS. 4A and 4B illustrate graphical representations of deviations from a particular sentence;



FIG. 5A illustrates an example script marked within a contact;



FIG. 5B illustrates a relationship of a number of mistakes to a percentage of sentences identified as being the script of FIG. 5A; and



FIG. 6 illustrates the communication processing system of FIG. 1 in greater detail.





DETAILED DESCRIPTION

The following description and associated drawings teach the best mode of the invention. For the purpose of teaching inventive principles, some conventional aspects of the best mode may be simplified or omitted. The following claims specify the scope of the invention. Some aspects of the best mode may not fall within the scope of the invention as specified by the claims. Thus, those skilled in the art will appreciate variations from the best mode that fall within the scope of the invention. Those skilled in the art will appreciate that the features described below can be combined in various ways to form multiple variations of the invention. As a result, the invention is not limited to the specific examples described below, but only by the claims and their equivalents.


Structured speech has a different statistical behavior than free conversation. By understanding the statistical distinction, structured speech may be automatically identified based on Large-Vocabulary Continuous Speech Recognition (LVCSR) outputs from contact centers, given a set of transcribed calls. For example, a different retrieval criterion may be used for structured speech than for free conversation. By exploiting the fact that structured speech includes long sentences, a high level of precision and recall may be obtained, as described below.



FIG. 1 is a block diagram illustrating a speech analytics system 100. Speech analytics system 100 includes audio source 102, audio source 104, communication network 106, communication processing system 108, recorder 109 and database 110. Audio source 102 exchanges data with communication network 106 through link 114, while audio source 104 exchanges data with communication network 106 through link 116. Communication processing system 108 exchanges data with communication network 106 through link 118, with database 110 through link 120, and with recorder 109 through link 122. Recorder 109 may also communicate with the network 106 over link 124.


Links 114, 116, 118, 120, 122 and 124 may use any of a variety of communication media, such as air, metal, optical fiber, or any other signal propagation path, including combinations thereof. In addition, these links may use any of a variety of communication protocols, such as internet, telephony, optical networking, wireless communication, wireless fidelity, or any other communication protocols and formats, including combinations thereof. Further, the links could be direct links or they might include various intermediate components, systems, and networks.


Communication network 106 may be any type of network such as a local area network (LAN), wide area network (WAN), the internet, a wireless communication network, or the like. Any network capable of transferring data from one device to another device may operate as communication network 106.


The speech analytics system 100 may include recorder 109 that stores the speech from audio source 102 or audio source 104 for later retrieval by communication processing system 108 or other downstream devices or systems. The audio sources 102 and 104 may be any source, such as a telephone, VoIP endpoint, mobile device, general purpose computing device, etc. The recorded speech is made up of a plurality of sounds that are then translated to text.


Communication processing system 108 may be any device capable of receiving data through communication network 106 from other devices, such as audio sources 102 and 104, processing the data, and transmitting data through network 106 to other devices. For example, communication processing system 108 may include a processing system for processing data, a communication interface for receiving and transmitting data, a storage system for storing data, and a user interface. One example embodiment of communication processing system 108 is represented by the communication processing system 108 illustrated in FIG. 6 and described in detail below.


Communication processing system 108 receives speech made up of sounds from audio source 102, audio source 104 or recorder 109, and proceeds to convert the speech to text. First, communication processing system 108 uses a phonetic module to convert the sounds into symbols corresponding to the sounds. Next, communication processing system 108 uses a language module and occurrence table to convert the symbols into text. In addition, communication processing system 108 determines a probable accuracy for each word translated. This probability may be based upon the words preceding or following the selected word. Finally, communication processing system 108 compares the probable accuracies for each word translated with an occurrence table and adjusts the occurrence table as indicated by the probable accuracies for each word.


In an example, the occurrence table may be based upon the occurrence of words in a test sample. There may be a variety of test samples and occurrence tables corresponding to different dialects, languages, or regional slang. There may also be a variety of test samples and occurrence tables corresponding to different domains such as banking, law, commerce, phone centers, technical support lines, or the like. When speech of a known dialect or domain is received, it is compared to a corresponding occurrence table, and the appropriate occurrence table is updated based upon the probable accuracy of the translation of the speech. Thus, the occurrence tables for different dialects and domains are continually updated as more speech is translated.
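

The patent does not specify a table layout or update rule; the following Python sketch is one illustrative possibility, assuming a per-(dialect, domain) table of word-occurrence probabilities that is nudged toward each word's translation confidence. All names, the table structure, and the learning-rate update are hypothetical.

```python
from collections import defaultdict

# Hypothetical layout: one occurrence table per (dialect, domain) pair,
# mapping each word to a relative occurrence probability.
occurrence_tables = {
    ("en-US", "banking"): defaultdict(float),
    ("en-US", "tech-support"): defaultdict(float),
}

def update_occurrence_table(table, translated_words, learning_rate=0.1):
    """Nudge each word's stored probability toward the probability that
    the word was correctly translated (an assumed update rule)."""
    for word, p_correct in translated_words:
        table[word] += learning_rate * (p_correct - table[word])

# Usage: words paired with their probability of correct translation.
table = occurrence_tables[("en-US", "banking")]
update_occurrence_table(table, [("balance", 0.92), ("rebate", 0.61)])
```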


While FIG. 1 illustrates communication processing system 108 as a single device, other embodiments may perform the functions of communication processing system 108 in a plurality of devices distributed throughout communication network 106. For example, separate devices may be provided for each different method of speech to text translation, and their resulting transcriptions may then be transmitted to communication processing system 108 for compilation into database 110 and for generation of a database index. Still other examples may provide for translation of the audio data to text within communication processing system 108, while the function of creating the database index may actually be performed within database 110. FIG. 1 is simply representative of one possible structure for performing the methods described here for indexing a database.


In an example embodiment, communication processing system 108 receives audio data from audio sources 102 and 104 through communication network 106. This audio data may utilize any of a wide variety of formats. The audio data may be recorded as .mp3 or .wav files or the like. Further, the audio data may include one or more conversations within a single data file or group of data files. In some embodiments, the audio data may be translated from speech to text by other elements (not shown) within communication network 106, and the translated text may then be provided to communication processing system 108.


Communication processing system 108 processes audio data received from audio sources 102 and 104, producing an index of symbols found within the audio data. These symbols may include phonemes, words, phrases, or the like. The index of symbols may be stored in database 110 in some embodiments. Communication processing system 108 then processes the index of symbols searching for symbols that have a deviation in frequency of occurrence within a time period. This time period may be of any length. For example, communication processing system 108 may receive daily updates of audio data and search for symbols having a deviation in frequency of occurrence in comparison to the audio data received for the previous week. Other embodiments may use other periods of time in a similar method.
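

As an illustration of the windowed comparison described above, a minimal Python sketch, assuming each day's symbol counts are compared against the previous week's average daily frequency; the deviation factor and minimum count are hypothetical parameters, not values from the patent.

```python
from collections import Counter

def deviating_symbols(todays_symbols, last_weeks_symbols, factor=2.0, min_count=5):
    """Return symbols whose frequency today deviates from the previous
    week's average daily frequency by at least `factor`."""
    today = Counter(todays_symbols)
    baseline = Counter(last_weeks_symbols)
    days = 7  # length of the baseline window in days (assumption)
    flagged = []
    for symbol, count in today.items():
        expected = baseline[symbol] / days
        if count >= min_count and (expected == 0 or count / expected >= factor):
            flagged.append(symbol)
    return flagged
```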



FIG. 2 illustrates a method of performing speech to text translation in speech analytics system 100. Audio data containing speech made up of sounds is received from audio source 102, audio source 104 or recorder 109 (operation 200). Speech analytics system 100 processes the sounds using a phonetic module, producing symbols corresponding to the sounds (operation 202). Speech analytics system 100 then processes the symbols using a language module and occurrence table, producing text (operation 204).


Speech analytics system 100 determines a probability of correct translation for each word in the text (operation 206). This probability may be based in part or in whole on the words preceding or following the selected word. Speech analytics system 100 compares the probability of correct translation for each word in the text to an appropriate occurrence table (operation 208). This occurrence table may be selected based upon a number of factors, such as the dialect or language of the speech and the domain in which the speech was obtained.


Speech analytics system 100 then modifies the occurrence table based on the probability of correct translation for each word in the text (operation 210). This modification may change the occurrence probability by a fixed percentage, by a variable percentage based on the probability of correct translation of the given word, or by any of a wide variety of other methods.
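

A minimal sketch of operation 210's two adjustment styles, a fixed percentage versus a percentage scaled by translation confidence; the step size, the 0.5 cutoff, and the clamping to [0, 1] are illustrative assumptions.

```python
def adjust_occurrence(current, p_correct, mode="variable", base_pct=0.05):
    """Raise or lower a word's occurrence probability: by a fixed
    percentage, or by a percentage that scales with the probability
    of correct translation (both update rules are assumptions)."""
    pct = base_pct if mode == "fixed" else base_pct * p_correct
    step = pct if p_correct >= 0.5 else -pct  # reward likely-correct words
    return min(1.0, max(0.0, current + step))

print(adjust_occurrence(0.40, 0.92))           # nudged up, scaled by 0.92
print(adjust_occurrence(0.40, 0.20, "fixed"))  # nudged down by the fixed step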





In accordance with aspects of the disclosure, structured speech may be identified. Structured speech may include IVR, scripts, and figures of speech. With regard to IVR, this type of structured speech is repetitive. Thus, the communication processing system 108 can recognize errors in a transcription by taking advantage of the repetitive nature of IVR speech. An IVR message may be, “Welcome to ABC Products customer service center. All of our representatives are busy assisting other customers.” Similarly, a voicemail system prompt may be, “You have reached the voice mail box of Jane Doe, please leave your message after the tone.”


Scripts are another type of structured speech, and are typically statements that agents are required to speak, e.g., by law in certain situations (e.g., disclaimers), in response to customer inquiries, etc. The scripts are spoken by many different agents, typically with only minor variations in wording and timing between agents. An agent script may be, for example, “For security purposes can you please verify the last four digits of your social security number.” Scripts may have medium-length sentences, but are repeated among conversations in the contact center.


Figures of speech are small to medium-sized sentences that people tend to say even though they are not written text read aloud. Figures of speech are typically common phrases, such as “Oh, my!” They occur with some repetition, but are typically shorter than scripts. Similar to scripts, figures of speech tend to have some repetition among conversations in the contact center, but are typically shorter in length and of lower frequency.


With regard to free speech, the order and length of words is variable. Free speech typically does not repeat among conversations. An example of free speech is, “Well, you see, first click on start.”


The communication processing system 108 can determine the type of speech by looking at words within the transcript. For example, for IVR speech, if a predetermined number or percentage of words in an IVR recording is recognized (e.g., 9 out of 16), the communication processing system 108 can determine that a particular segment of the transcript is IVR speech. The communication processing system 108 may then determine not to index each and every word of the recognized IVR speech.
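

A minimal Python sketch of the word-count test, in the spirit of the 9-of-16 example above; the function name and data layout are hypothetical.

```python
def looks_like_ivr(segment_words, known_ivr_words, min_hits=9):
    """Flag a transcript segment as IVR speech when at least `min_hits`
    of a known IVR message's words are recognized in the segment."""
    recognized = set(segment_words)
    hits = sum(1 for word in known_ivr_words if word in recognized)
    return hits >= min_hits

# Usage: matching a segment against a known IVR prompt.
ivr = "all of our representatives are busy assisting other customers".split()
segment = "all of our representatives are busy please hold".split()
print(looks_like_ivr(segment, ivr, min_hits=6))  # True (6 of 9 words hit)
```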



FIG. 3 illustrates a method of determining a type of speech in accordance with a structure. Audio data containing speech made up of sounds is received from audio source 102, audio source 104 or recorder 109 (operation 300). Communication processing system 108 processes the sounds using a phonetic module, producing symbols corresponding to the sounds (operation 302). Communication processing system 108 then processes the symbols using a language module and occurrence table, producing text (operation 304).


The communication processing system 108 can analyze the text of the transcribed speech to determine the type of speech (operation 306) by looking at words within the transcript. For example, the communication processing system 108 may identify structured speech based upon repetitions. The communication processing system 108 may identify IVR speech based on durations and distributions. For example, FIG. 4A illustrates patterns of IVR speech. When a certain sentence, phrase, or statement repeats over and over, it can be identified. As illustrated, phrases associated with IVR speech show little deviation. However, scripts spoken by agents may exhibit a higher degree of deviation, as shown in FIG. 4B. Thus, the two types of structured speech can be identified and separated.


The communication processing system 108 may make a determination (operation 308) that a particular segment of the transcript is IVR speech based on the duration distribution of the particular segment. As such, the IVR speech can be separated (operation 310). For example, the segment, “All of our representatives are busy assisting other customers” can be identified as IVR speech.
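

A minimal sketch of separating IVR playback from agent-spoken repetitions by duration spread, assuming the coefficient of variation of a phrase's durations across calls is the deciding statistic; the 0.05 cutoff is an illustrative assumption, not a value from the patent.

```python
import statistics

def classify_repetition(durations_sec, max_ivr_cv=0.05):
    """IVR playback lasts nearly the same time on every call, so its
    coefficient of variation is tiny; agent-spoken scripts vary more."""
    mean = statistics.mean(durations_sec)
    cv = statistics.stdev(durations_sec) / mean if len(durations_sec) > 1 else 0.0
    return "ivr" if cv <= max_ivr_cv else "script-or-other"

print(classify_repetition([4.01, 4.00, 4.02, 4.01]))  # "ivr"
print(classify_repetition([3.2, 4.5, 3.9, 5.0]))      # "script-or-other"
```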


Using this knowledge, the communication processing system 108 may determine if a predetermined number of words in an IVR recording are recognized. In accordance with the determination, the communication processing system 108 may make a determination not to index each and every word of the recognized IVR speech.


After separating IVR phrases, scripts spoken by agents can be identified by, e.g., examining the length of the phrase. As noted above, scripts are read from text or are statements that agents are trained to say (“A Federal Law and your decision will not affect your service”). Also as noted above, figures of speech are customary statements (“Hello, how can I help you?”). Because figures of speech tend to be a few words, whereas scripts tend to be longer sentences, a phrase can be categorized as a script (operation 312) or a figure of speech (operation 314). Thus, figures of speech can be separated from scripts based on length.
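

A minimal sketch of the length test, assuming a simple word-count threshold; the 8-word cutoff is illustrative, not from the patent.

```python
def categorize_structured_phrase(phrase, script_min_words=8):
    """Separate scripts from figures of speech by length alone."""
    n = len(phrase.split())
    return "script" if n >= script_min_words else "figure-of-speech"

print(categorize_structured_phrase("Hello, how can I help you?"))
# figure-of-speech
print(categorize_structured_phrase(
    "For security purposes can you please verify the last four digits "
    "of your social security number."))
# script
```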


Those phrases that do not fall into the above structures are likely to be free speech (operation 316). Separating the structures may be useful because, for example, in scripts the word “rebate” may have a different meaning than when it occurs in a figure of speech or free speech. Thus, as will be described below, when searching on the word “rebate,” a context (script, figure of speech, or free speech) may be included in the index and searched.


Identifying scripts within a contact corpus is useful for analysis purposes (operation 318). In some implementations, agent compliance may be determined. For example, it may be determined which agents do or do not strictly adhere to scripts that include, e.g., disclaimers. Contacts may be reviewed that should include a script but do not. Agents may be ranked based on their compliance with scripts. In addition, identifying scripts may be used to determine which agents are more or less polite than others. Politeness may be analyzed to determine if agents who are more polite help with customer retention, sales, etc. Yet further, identifying scripts may determine whether agents are attempting to up-sell, and what the characteristics are of the calls in which up-selling is performed.


For contacts within the corpus, scores for a script can be determined by setting a minimum distance between the script and words in the contact. A script may be identified by looking for a word or group of words, a Boolean expression, or a weighting of words. Pattern matching may be performed if the number of errors is small. However, there is no need to match each and every word in the script for the identification to be correct. In some implementations, the order of the words may be used.


For example, as shown in FIG. 5B, a threshold number of mistakes (insertions, replacements, deletions) may be set, e.g., 18, to identify a percentage of sentences as being the script. Using this approach, higher recall and precision may be obtained because the script itself has a more accurate signature than looking for each word by itself. For example, the sequence and/or timing of the words can be used.
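

A minimal sketch of the mistake-count test, assuming mistakes are counted as word-level edit distance (insertions, replacements, deletions) against the reference script, with the example threshold of 18 mentioned above.

```python
def word_edit_distance(a, b):
    """Number of word-level insertions, replacements, and deletions
    needed to turn word list `a` into word list `b`."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # replacement
    return dp[m][n]

def matches_script(candidate, script, max_mistakes=18):
    """Accept a candidate sentence as the script when its mistake
    count stays at or below the threshold."""
    return word_edit_distance(candidate.split(), script.split()) <= max_mistakes
```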


In some implementations, scripts may be used as categories. For example, the communication processing system 108 may identify the scripts and output a list. The communication processing system 108 may evaluate each contact for the scripts it contains (e.g., binary output). A user may use a “Script Definition Tool” (SDT) to assign a script a name, color, impact, etc. The user may assign the script a type, such as a greeting, authentication, hold, transfer, or closure. Additional types may include legal, company policy, up-sale, politeness, etc. The user may manually edit the scripts list to focus on interesting scripts and perform fine tuning. Each script can be given a name, color, and impact, similar to categories.


In some implementations, the communication processing system 108 may utilize scripts similarly to categories. For example, scripts may be used as a filter in a query. Since the script flag is binary, a “NOT” operator can be used for checking compliance. Scripts may be displayed, and impact and relevance determined, for a specific query. In a player application, scripts may be marked within a contact (see, e.g., FIG. 5A).
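

A minimal sketch of the binary “NOT” filter for compliance checking, assuming each contact carries the set of scripts detected in it; the data layout is hypothetical.

```python
def contacts_missing_script(contacts, script_name):
    """Compliance query with a NOT filter: return the contacts that
    should contain `script_name` but do not."""
    return [cid for cid, scripts in contacts.items() if script_name not in scripts]

contacts = {"c1": {"greeting", "disclaimer"}, "c2": {"greeting"}}
print(contacts_missing_script(contacts, "disclaimer"))  # ['c2']
```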


In some implementations, scripts may be identified as part of quality management and scorecards. A supervisor may obtain a list of sentences that each of his/her agents tends to use. The supervisor may ascertain the greeting/closure each agent uses. The supervisor may determine an agent's first-call resolution capabilities.


Scripts may be exported to database 110. From database 110, the scripts can be integrated with evaluation forms (QM) and scorecards. Script compliance can be used in QM as a metric for evaluations and for training purposes. Script adherence reports may be generated to determine which agents make exceptional use of certain scripts. The reports may also surface scripts that have exceptionally low or high compliance. For each agent, a graph of his/her compliance with various scripts may be generated, as well as an overall compliance graph for all scripts.


In some implementations, analytics may be performed to determine how the usage of an up-sale script contributes to sales (e.g., using metadata); whether agent politeness leads to better customer satisfaction (e.g., using categories); whether a polite agent helps improve customer retention; and whether complying with company policy has a positive effect on sales. Analytics may determine other aspects, such as what characterizes a specific agent group and what their common scripts are. In addition, it may be determined what characterizes good agents and what their common scripts are (e.g., using QM data).


In the above, the user need not type in the whole script when monitoring agents for compliance, QM, etc. Special identifiers can be added to the index and used for searching purposes.



FIG. 6 illustrates the communication processing system of FIG. 1 in greater detail. The communication processing system 108 may include communication interface 301, user interface 302, and processing system 303. Processing system 303 is linked to communication interface 301 and user interface 302. Processing system 303 includes processing circuitry 305 and memory device 306 that stores operating software 307.


Communication interface 301 includes components that communicate over communication links, such as network cards, ports, RF transceivers, processing circuitry and software, or some other communication devices. Communication interface 301 may be configured to communicate over metallic, wireless, or optical links. Communication interface 301 may be configured to use TDM, IP, Ethernet, optical networking, wireless protocols, communication signaling, or some other communication format, including combinations thereof. In this example, communication interface 301 is configured to receive audio data from recorder 109 or directly from audio source 102.


User interface 302 includes components that interact with a user. User interface 302 may include a keyboard, display screen, mouse, touch pad, or some other user input/output apparatus. User interface 302 may be omitted in some examples.


Processing circuitry 305 includes a microprocessor and other circuitry that retrieves and executes operating software 307 from memory device 306. Memory device 306 includes a disk drive, flash drive, data storage circuitry, or some other memory apparatus. Operating software 307 includes computer programs, firmware, or some other form of machine-readable processing instructions. Operating software 307 may include an operating system, utilities, drivers, network interfaces, applications, or some other type of software. When executed by processing circuitry 305, operating software 307 directs processing system 303 to operate communication processing system 108 as described herein.


In this example, operating software 307 includes a phonetic module that directs processing circuitry 305 to translate sounds to symbols, a language module that directs processing circuitry 305 to translate the symbols to text, and an occurrence table that is used with the language module to improve the accuracy of the symbol-to-text translation.


The above description and associated figures teach the best mode of the invention. The following claims specify the scope of the invention. Note that some aspects of the best mode may not fall within the scope of the invention as specified by the claims. Those skilled in the art will appreciate that the features described above can be combined in various ways to form multiple variations of the invention. As a result, the invention is not limited to the specific embodiments described above, but only by the following claims and their equivalents.

Claims
  • 1. A method for generating a database index for a transcript based on automatic identification by a communication processing system of different types of structured speech in the transcript, the method comprising: receiving, by a communication processing system, a transcript of an audio recording created by a speech analytics system; analyzing, by the communication processing system, text in the transcript to determine repetitions within the text that are indicative of structured speech; calculating, by the communication processing system, a duration distribution of the repetitions within the text to ascertain, by the communication processing system, whether a first segment of the transcript comprises a first type of structured speech, wherein the first type of structured speech includes interactive voice response (IVR) generated speech; calculating, by the communication processing system, a length of the repetitions within the text to ascertain, by the communication processing system, whether a second segment of the transcript comprises a second or third type of structured speech, wherein the communication processing system determines that the second segment comprises the third type of structured speech as opposed to the second type of structured speech when the length of the repetitions found in the text is greater than a predetermined threshold, wherein the second type of structured speech includes scripts spoken by agents and the third type of speech includes figures of speech; and generating, by the communication processing system, a database index for the transcript such that the first segment is marked in a transcript database as comprising the first type of structured speech and the second segment is marked in the transcript database as comprising the second or third type of structured speech.
  • 2. The method of claim 1, further comprising applying a distance threshold to determine contacts that contain the second type of structured speech.
  • 3. The method of claim 2, wherein the distance threshold is approximately 18.
  • 4. The method of claim 1, further comprising analyzing the second type of speech to determine one of compliance, quality management, and categories.
  • 5. The method of claim 4, further comprising: defining the second type of speech as a category; andevaluating a contact to determine if the contact contains the category.
  • 6. The method of claim 4, further comprising using the second type of speech as a filter in a query.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a division of U.S. patent application Ser. No. 12/755,549, filed Apr. 7, 2010, entitled “SPEECH ANALYTICS SYSTEM AND SYSTEM AND METHOD FOR DETERMINING STRUCTURED SPEECH,” which claims priority to U.S. Provisional Patent Application Ser. No. 61/167,495, entitled “STRUCTURED SPEECH,” filed on Apr. 7, 2009, and U.S. Provisional Patent Application Ser. No. 61/178,795, entitled “SPEECH ANALYTICS SYSTEM,” filed on May 15, 2009, the contents of which are hereby incorporated by reference in their entireties.

Provisional Applications (2)
Number Date Country
61178795 May 2009 US
61167495 Apr 2009 US
Divisions (1)
Number Date Country
Parent 12755549 Apr 2010 US
Child 14270280 US