User profiling for selecting user specific voice input processing information

Information

  • Patent Grant
  • Patent Number
    8,682,667
  • Date Filed
    Thursday, February 25, 2010
  • Date Issued
    Tuesday, March 25, 2014
Abstract
This is directed to processing voice inputs received by an electronic device. In particular, this is directed to receiving a voice input and identifying the user providing the voice input. The voice input can be processed using a subset of words from a library used to identify the words or phrases of the voice input. The particular subset can be selected such that voice inputs provided by the user are more likely to include words from the subset. The subset of the library can be selected using any suitable approach, including for example based on the user's interests and words that relate to those interests. For example, the subset can include one or more words related to media items selected by the user for storage on the electronic device, names of the user's contacts, applications or processes used by the user, or any other words relating to the user's interactions with the device.
Description
BACKGROUND

This is directed to processing received voice inputs by identifying an instruction likely to be provided by the user speaking the input. In particular, this is directed to identifying the user providing a voice input and processing the voice input using a subset of resources.


Many electronic devices provide a significant number of features or operations accessible to a user. The number of available features or operations may often exceed the number of inputs available using an input interface of the electronic device. To allow users to access electronic device operations that are not specifically tied to particular inputs (e.g., inputs not associated with a key sequence or button press, such as a MENU button on an iPod, available from Apple Inc.), the electronic device may provide menus with selectable options, where the options are associated with electronic device operations. For example, an electronic device may display a menu with selectable options on a display, for example in response to receiving an input associated with the menu from an input interface (e.g., a MENU button).


Because the menu is typically displayed on an electronic device display, a user may be required to look at the display to select a particular option. This may sometimes not be desirable. For example, if a user desires to conserve power (e.g., in a portable electronic device), requiring the electronic device to display a menu and move a highlight region navigated by the user to provide a selection may use up power. As another example, if a user is in a dark environment and the display does not include back lighting, the user may not be able to distinguish displayed options of the menu. As still another example, if a user is blind or visually impaired, the user may not be able to view a displayed menu.


To overcome this issue, some systems may allow users to provide instructions by voice. In particular, the electronic device can include audio input circuitry for detecting words spoken by a user. Processing circuitry of the device can then process the words to identify a corresponding instruction to the electronic device, and execute the corresponding instruction. To process received voice inputs, the electronic device can include a library of words to which the device can compare the received voice input, and from which the device can extract the corresponding instruction.


In some cases, however, the size of the word library can be so large that it may be prohibitive to process voice inputs, and in particular time and resource-prohibitive to process long voice inputs. In addition, the electronic device can require significant resources to parse complex instructions that include several variables provided as part of the voice instruction (e.g., an instruction that includes several filter values for selecting a subset of media items available for playback by the electronic device).


SUMMARY

This is directed to systems and methods for identifying a user providing a voice input, and processing the input to identify a corresponding instruction based on the user's identity. In particular, this is directed to processing a received voice input using a subset of library terms selected based on the identity of the user providing the input.


An electronic device can receive a voice input for directing the device to perform one or more operations. The device can then process the received input by comparing the analog input signal with words from a library. To reduce the load for processing the received voice input, the electronic device can limit the size of a library to which to compare the voice input (e.g., the number of library words) based on the identity of the user providing the input.


The electronic device can identify the user using any suitable approach. For example, the electronic device can identify a user from the content of an input provided by the user (e.g., a user name and password). As another example, the electronic device can identify a user by the type of interaction of the user with the device (e.g., the particular operations the user directs the device to perform). As still another example, the electronic device can identify a user based on biometric information (e.g., a voice print). Once the user has been identified, the electronic device can determine the user's interests and define the library subset based on those interests. For example, the subset can include words corresponding to metadata related to content selected by the user for storage on the device (e.g., transferred media items) or content added to the device by the user (e.g., the content of messages sent by the user). As another example, the subset can include words corresponding to application operations that the user is likely to use (e.g., words relating to media playback instructions).


In response to identifying the words of a particular voice input, the electronic device can identify one or more instructions that correspond to the voice input. The instructions can then be passed on to appropriate circuitry of the electronic device for the device to perform an operation corresponding to the instruction. In some embodiments, the instruction can identify a particular device operation and a variable or argument characterizing the operation.





BRIEF DESCRIPTION OF THE DRAWINGS

The above and other features of the present invention, its nature and various advantages will be more apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings in which:



FIG. 1 is a schematic view of an electronic device in accordance with one embodiment of the invention;



FIG. 2 is a schematic view of an illustrative display dedicated to monitoring for a voice input in accordance with one embodiment of the invention;



FIG. 3 is a schematic view of an illustrative display having an indication that the device is monitoring for voice inputs in accordance with one embodiment of the invention;



FIG. 4 is a schematic view of an illustrative system for identifying device operations to perform in response to a voice input in accordance with one embodiment of the invention;



FIG. 5 is a flowchart of an illustrative process for selecting a subset of a voice input library in accordance with one embodiment of the invention;



FIG. 6 is a flowchart of an illustrative process for identifying a user providing a voice input in accordance with one embodiment of the invention;



FIG. 7 is a schematic view of an illustrative system for processing voice inputs based on the user's identity in accordance with one embodiment of the invention;



FIG. 8 is a flowchart of an illustrative process for performing a device operation in response to a voice input from an identified user in accordance with one embodiment of the invention;



FIG. 9 is a flowchart of an illustrative process for processing voice inputs based on a user's identity in accordance with one embodiment of the invention; and



FIG. 10 is a flowchart of an illustrative process for defining a subset of library words related to a user in accordance with one embodiment of the invention.





DETAILED DESCRIPTION

An electronic device is operative to receive voice inputs provided by a user to control electronic device operations. In particular, an electronic device is operative to receive and process voice inputs to identify words spoken by the user, and to determine an instruction for performing a device operation corresponding to the identified words.


The electronic device can include a processor and an input interface that includes audio input circuitry. Using the audio input circuitry, a user can provide voice inputs to the device for directing the device to perform one or more operations. The voice inputs can have any suitable form, including for example pre-defined strings corresponding to specific instructions (e.g., “play artist Mika”), arbitrary or natural language instructions (e.g., “pick something good”), or combinations of these.


The electronic device can parse a received voice input to identify the words of the input. In particular, the electronic device can compare words of the received input with a library of words. In the context of an electronic device used to play back media items, the number of words in the library can be significant (e.g., including the artist names, album names and track names of media items in a user's media library). Comparing the voice input to an entire word library can take a significant amount of time, so it may be beneficial to reduce the amount of the library to which the voice input is compared. In some embodiments, one or more subsets can be defined in the voice library based on the identity of the user providing the voice input.


The electronic device can define, for each user, a preference profile or other information describing the user's interests, the particular manner in which the user typically interacts with the device, or both. For example, the profile can include information identifying the types of media items played back by the user, applications used by the user, and typical playback behavior (e.g., picking a playlist and not interacting much with the device, or regularly changing the played back media item). As another example, the profile can include information regarding the types of media items that the user typically plays back or does not play back. Using the profile information, the electronic device can define a subset of library words that relate to the profile, and initially limit or reduce the processing of a received voice command to the defined subset of library words.
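For illustration only, a preference profile and the library subset derived from it might be sketched as follows. The `UserProfile` fields, the per-application command lists, and the way metadata words are folded into the subset are assumptions made for this example, not details taken from the embodiments described above.

```python
from dataclasses import dataclass, field

@dataclass
class UserProfile:
    """Illustrative per-user preference profile (hypothetical fields)."""
    user_id: str
    favorite_genres: set = field(default_factory=set)          # e.g. {"indie pop"}
    frequent_apps: set = field(default_factory=set)             # e.g. {"music", "mail"}
    stored_media_metadata: list = field(default_factory=list)   # artist/album/title strings

def library_subset_for(profile: UserProfile, default_words: set) -> set:
    """Limit the recognition vocabulary to words tied to the user's profile."""
    subset = set()
    # Keep default command words only for applications the user actually uses.
    app_commands = {
        "music": {"play", "pause", "next", "previous", "shuffle"},
        "phone": {"call", "dial"},
        "mail": {"send", "reply", "search"},
    }
    for app in profile.frequent_apps:
        subset |= app_commands.get(app, set()) & default_words
    # Add words describing the user's interests and stored content.
    for genre in profile.favorite_genres:
        subset.update(genre.lower().split())
    for value in profile.stored_media_metadata:
        subset.update(value.lower().split())
    return subset

# Example usage
profile = UserProfile("alice", {"indie pop"}, {"music"}, ["Mika", "Life in Cartoon Motion"])
print(library_subset_for(profile, {"play", "pause", "next", "previous", "shuffle", "call"}))
```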


The electronic device can identify the user using any suitable approach. In some embodiments, the electronic device can identify the user based on a particular input of the user (e.g., the entry of a username or password), from attributes of the entry (e.g., a voice print of the voice input), biometric information detected by the device, or any other suitable approach.



FIG. 1 is a schematic view of an electronic device in accordance with one embodiment of the invention. Electronic device 100 may include processor 102, storage 104, memory 106, input interface 108, and output interface 110. In some embodiments, one or more of the components of electronic device 100 may be combined or omitted (e.g., combine storage 104 and memory 106). In some embodiments, electronic device 100 may include other components not combined or included in those shown in FIG. 1 (e.g., communications circuitry, location circuitry, sensing circuitry detecting the device environment, a power supply, or a bus), or several instances of the components shown in FIG. 1. For the sake of simplicity, only one of each of the components is shown in FIG. 1.


Processor 102 may include any processing circuitry or control circuitry operative to control the operations and performance of electronic device 100. For example, processor 102 may be used to run operating system applications, firmware applications, media playback applications, media editing applications, or any other application. In some embodiments, a processor may drive a display and process inputs received from a user interface.


Storage 104 may include, for example, one or more storage mediums including a hard-drive, solid state drive, flash memory, permanent memory such as ROM, any other suitable type of storage component, or any combination thereof. Storage 104 may store, for example, media data (e.g., music and video files), application data (e.g., for implementing functions on device 100), firmware, user preference information (e.g., media playback preferences), authentication information (e.g. libraries of data associated with authorized users), lifestyle information (e.g., food preferences), exercise information (e.g., information obtained by exercise monitoring equipment), transaction information (e.g., information such as credit card information), wireless connection information (e.g., information that may enable electronic device 100 to establish a wireless connection), subscription information (e.g., information that keeps track of podcasts or television shows or other media a user subscribes to), contact information (e.g., telephone numbers and email addresses), calendar information, and any other suitable data or any combination thereof.


Memory 106 can include cache memory, semi-permanent memory such as RAM, and/or one or more different types of memory used for temporarily storing data. In some embodiments, memory 106 can also be used for storing data used to operate electronic device applications, or any other type of data that may be stored in storage 104. In some embodiments, memory 106 and storage 104 may be combined as a single storage medium.


Input interface 108 may provide inputs to input/output circuitry of the electronic device. Input interface 108 may include any suitable input interface, such as for example, a button, keypad, dial, a click wheel, or a touch screen. In some embodiments, electronic device 100 may include a capacitive sensing mechanism, or a multi-touch capacitive sensing mechanism. In some embodiments, the input interface can include a microphone or other audio input interface for receiving a user's voice inputs. The input interface can include an analog to digital converter for converting received analog signals corresponding to a voice input to a digital signal that can be processed and analyzed to identify specific words or instructions.


Output interface 110 may include one or more interfaces for providing an audio output, visual output, or other type of output (e.g., odor, taste or haptic output). For example, output interface 110 can include one or more speakers (e.g., mono or stereo speakers) built into electronic device 100, or an audio connector (e.g., an audio jack or an appropriate Bluetooth connection) operative to be coupled to an audio output mechanism. Output interface 110 may be operative to provide audio data using a wired or wireless connection to a headset, headphones or earbuds. As another example, output interface 110 can include display circuitry (e.g., a screen or projection system) for providing a display visible to the user. The display can include a screen (e.g., an LCD screen) that is incorporated in electronic device 100, a movable display or a projecting system for providing a display of content on a surface remote from electronic device 100 (e.g., a video projector), or any other suitable display. Output interface 110 can interface with the input/output circuitry (not shown) to provide outputs to a user of the device.


In some embodiments, electronic device 100 may include a bus operative to provide a data transfer path for transferring data to, from, or between control processor 102, storage 104, memory 106, input interface 108, output interface 110, and any other component included in the electronic device.


A user can interact with the electronic device using any suitable approach. In some embodiments, the user can provide inputs using one or more fingers touching an input interface, such as a keyboard, button, mouse, or touch-sensitive surface. In some embodiments, a user can instead or in addition provide an input by shaking or moving the electronic device in a particular manner (e.g., such that a motion sensing component of the input interface detects the user movement). In some embodiments, a user can instead or in addition provide a voice input to the electronic device. For example, the user can speak into a microphone embedded in or connected to the electronic device.


The user can provide voice inputs to the electronic device at any suitable time. In some embodiments, the electronic device can continuously monitor for voice inputs (e.g., when the device is not in sleep mode, or at all times). In some embodiments, the electronic device can monitor for voice inputs in response to a user input or instruction to enter a voice input. For example, a user can select a button or option, or place the electronic device in such a manner that a sensor detects that the user wishes to provide a voice input (e.g., a proximity sensor detects that the user has brought the device up to the user's mouth). In some embodiments, the electronic device can monitor for user inputs when one or more particular applications or processes are running on the device. For example, the electronic device can monitor for voice inputs in a media playback application, a voice control application, a searching application, or any other suitable application.



FIG. 2 is a schematic view of an illustrative display dedicated to monitoring for a voice input in accordance with one embodiment of the invention. Display 200 can include information region 210 indicating that the device is monitoring for a voice input. For example, information region 210 can include title 212 specifying the name of the application or process monitoring for a voice input. As another example, information region 210 can include waveform 214 providing a depiction of the detected voice input. The content displayed in information region 210 can change dynamically as a received input is detected (e.g., waveform 214 changes), or the content can instead or in addition remain static. Display 200 can include option 220 for directing the device to initialize or cancel monitoring for a voice input. For example, option 220 can switch between “start” and “cancel” options based on the state of the device. In some cases, the electronic device can instead or in addition include a “complete” option for indicating when a voice input has been completely entered.


In some embodiments, the electronic device can display one or more discrete elements on an existing electronic device display to indicate that the device is monitoring for voice inputs. FIG. 3 is a schematic view of an illustrative display having an indication that the device is monitoring for voice inputs in accordance with one embodiment of the invention. Display 300 can include a “now playing” display for a media playback application. For example, display 300 can include art 310 depicting a current, previous or future media item to be played back, and media selection options 320. To indicate that the electronic device is monitoring for voice inputs, display 300 can include element 330. Element 330 can include any suitable display element, including for example text, a graphic, glyph, or other content. In some embodiments, element 330 can include one or more elements of display 200 (FIG. 2). In some embodiments, element 330 can instead or in addition include a selectable option for enabling or disabling the monitoring for voice inputs. This may allow a user to control the resource consumption required for voice inputs. In some embodiments, element 330 can instead or in addition provide an indication that monitoring for voice inputs is available if the user provides a proper input using the input interface (e.g., the user brings the device up to his mouth, as detected by a proximity sensor of the device).


Voice inputs can include instructions for performing any suitable electronic device operation. In some embodiments, voice inputs can relate to a specific set or library of instructions that the device can detect. For example, the device can be limited to detecting particular keywords related to specific device operations, such as “play,” “call,” “dial,” “shuffle,” “next,” “previous,” or other keywords. In some cases, each keyword can be accompanied by one or more variables or arguments qualifying the particular keyword. For example, the voice input can be “call John's cell phone,” in which the keyword “call” is qualified by the phrase “John's cell phone,” which defines two variables for identifying the number to call (e.g., John and his cell phone). As another example, the voice input can be “play track 3 of 2005 album by the Plain White T's,” in which the keyword “play” is qualified by the phrase “track 3 of 2005 album by the Plain White T's.” This phrase has three variables for identifying a particular song to play back (e.g., artist Plain White T's, 2005 album, and track 3). As still another example, the phrase “shuffle then go next five times” can include two keywords, “shuffle” and “next” as well as a qualifier for the “next” keyword (e.g., “five times”).
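The keyword-plus-qualifier structure described above can be illustrated with a minimal parser. The keyword list and the rule that everything following a keyword is treated as its qualifier are simplifying assumptions for this sketch, not the patented parsing method.

```python
KEYWORDS = {"play", "call", "dial", "shuffle", "next", "previous"}

def parse_instruction(words):
    """Split recognized words into (keyword, qualifier-words) pairs.

    E.g. "call John's cell phone" -> [("call", ["John's", "cell", "phone"])].
    """
    instructions = []
    current = None
    for word in words:
        if word.lower() in KEYWORDS:
            current = (word.lower(), [])
            instructions.append(current)
        elif current is not None:
            # Treat words after a keyword as qualifiers for that keyword.
            current[1].append(word)
    return instructions

print(parse_instruction("call John's cell phone".split()))
print(parse_instruction("shuffle then go next five times".split()))
```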


In some cases, the electronic device can detect and parse natural language voice inputs. For example, the electronic device can parse and process an input such as “find my most played song with a 4-star rating and create a Genius playlist using it as a seed.” This voice input can require significant processing to first identify the particular media item to serve as a seed for a new playlist (e.g., most played song with a particular rating), and then determine the operation to perform based on that media item (e.g., create a playlist). As another example, a natural language voice input can include “pick a good song to add to a party mix.” This voice input can require identifying the device operation (e.g., add a song to a party mix) and finding an appropriate value or argument to provide the device operation, where the value can be user-specific.


The voice input provided to the electronic device can therefore be complex, and require significant processing to first identify the individual words of the input before extracting an instruction from the input and executing a corresponding device operation. The electronic device can identify particular words of the voice input using any suitable approach, including for example by comparing detected words of the voice input to a library or dictionary of locally stored words. The library can include any suitable words, including for example a set of default or standard words that relate generally to the electronic device, its processes and operations, and characteristics of information used by the processes and operations of the device. For example, default words in the library can include terms relating to operations of one or more applications (e.g., play, pause, next, skip, call, hang up, go to, search for, start, turn off), terms related to information used by applications (e.g., star rating, genre, title, artist, album, name, play count, mobile phone, home phone, address, directions from, directions to), or other such words that may be used by any user of an electronic device.


In some embodiments, the library can instead or in addition include words that relate specifically to a user of the device. For example, the library can include words determined from metadata values of content or information stored by the user on the device. Such words can include, for example, titles, artists and album names of media items stored by a user on the device, genre, year and star rating values for one or more media items, contact names, streets, cities and countries, email addresses, or any other content that a user can store on the device that may be specific to a particular user. The electronic device can define a library using any suitable approach, including for example by augmenting a default library with words derived from user-specific content of a user using the device.
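One way to assemble such a library, sketched below with made-up metadata fields, is to start from a default command vocabulary and fold in words pulled from the metadata of content the user has stored. The field names and sample data are illustrative assumptions; real metadata would come from the device's media database and address book.

```python
DEFAULT_WORDS = {
    # application operations
    "play", "pause", "next", "skip", "call", "hang", "up", "go", "to", "search", "for",
    # information used by applications
    "rating", "genre", "title", "artist", "album", "count", "mobile", "home",
}

def build_library(media_items, contacts):
    """Augment the default library with user-specific words from stored content."""
    library = set(DEFAULT_WORDS)
    for item in media_items:
        for value in (item.get("title", ""), item.get("artist", ""), item.get("album", "")):
            library.update(value.lower().split())
    for contact in contacts:
        library.update(contact.get("name", "").lower().split())
    return library

library = build_library(
    media_items=[{"title": "Grace Kelly", "artist": "Mika", "album": "Life in Cartoon Motion"}],
    contacts=[{"name": "John Appleseed"}],
)
print(sorted(library)[:10])
```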



FIG. 4 is a schematic view of an illustrative system for identifying device operations to perform in response to a voice input in accordance with one embodiment of the invention. System 400 can include voice input 410 provided by a user. Voice input 410 can be detected or received by any suitable combination of hardware, firmware and software for detecting audio provided by a user to an electronic device, converting the analog audio signal to a digital signal, and cleaning up the digital signal for further processing. For example, the electronic device can include a microphone for detecting the analog voice input, and an analog to digital converter for converting the voice input. The electronic device can encode the voice input using any suitable approach, including any suitable encoding scheme.


The voice input can be provided to voice input processing module 420 in any suitable form, including for example in digitized form or in analog form (e.g., if some or all of the circuitry and software for converting an analog voice input to a digital signal are in voice input processing module 420). For example, voice input processing module 420 can be integrated in the electronic device used by the user. As another example, voice input processing module 420 can be integrated, in whole or in part, in a remote device or server to which the device can connect to process voice inputs. Voice input processing module 420 can analyze the received voice input to identify specific words or phrases within the voice input. For example, voice input processing module 420 can compare identified words or phrases of the voice signal to words or phrases of library 422 of words. Library 422 can be separate from voice input processing module 420, or instead or in addition embedded within voice input processing module 420. Library 422 can include any suitable words, including for example default words associated with the electronic device detecting the voice input, specific words derived from the user's interactions with the electronic device (e.g., with content transferred to the electronic device by the user), or other words or phrases.


Voice input processing module 420 can analyze the detected words or phrases, and identify one or more particular electronic device operations associated with the detected words or phrases. For example, voice input processing module 420 can identify one or more keywords specifying an instruction to the device, where the instruction can include one or more variables or values qualifying the instruction. The instruction (e.g., “play”), including the variables or values specifying how the instruction is to be executed (e.g., “Mika's latest album”) can be analyzed to identify one or more electronic device operations corresponding to the instruction.


Voice input processing module 420 can provide the identified device operation to device 430 so that the device performs the operation. Device 430 can perform one or more operations, including for example operating one or more applications or processes within one or more applications, and the operations can include a punctual, repeating, or lasting operation (e.g., monitor all incoming email for particular flagged messages). Device 430 can include any suitable device, and can include some or all of the features of electronic device 100 (FIG. 1). In some embodiments, device 430 can detect and provide voice input 410 to voice input processing module 420, which can also reside on device 430. In some embodiments, device 430 can instead or in addition be a distinct device that receives instructions to perform operations from a remote voice input processing module 420. Device 430 can receive instructions from processing module 420 over path 444. Processing module 420 can compare voice inputs received over path 440 with library 422 over path 442. Each of paths 440, 442 and 444 can be provided over any suitable communications network or protocol, including for example wired and wireless networks and protocols.
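The flow of system 400 can be summarized end to end as below. The word-matching step is a stand-in (a simple lookup of recognized tokens in the library) rather than real speech recognition, and the keyword-to-operation table is invented for illustration.

```python
def recognize_words(digitized_input, library):
    """Stand-in for speech recognition: keep only tokens found in the library."""
    return [w for w in digitized_input.lower().split() if w in library]

def words_to_operation(words):
    """Map a keyword plus its qualifiers to a device operation (illustrative table)."""
    if not words:
        return None
    keyword, args = words[0], words[1:]
    table = {
        "play": lambda a: ("media.play", " ".join(a)),
        "call": lambda a: ("phone.call", " ".join(a)),
        "next": lambda a: ("media.next", None),
    }
    handler = table.get(keyword)
    return handler(args) if handler else None

library = {"play", "call", "next", "mika", "grace", "kelly"}
words = recognize_words("Play Grace Kelly by Mika", library)
print(words_to_operation(words))    # ('media.play', 'grace kelly mika')
```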


Because of the complexity of voice inputs, and the size of the resulting library used to identify instructions within a voice input, the voice input processing module can take a significant amount of time, resources, or both to process a particular voice input. To reduce the processing required for each voice input, the voice input processing module may benefit by comparing the voice input to a reduced set of library words. In particular, by reducing the number of words in the library to which a voice input is compared, the voice input processing module can more rapidly process voice inputs at a lower device resource cost.


The voice input processing module can determine which library words to include in a particular subset using any suitable approach. In some embodiments, a subset of the library can be selected based on the identity of the user providing the voice input. The voice input processing module can determine which words in a library to associate with a user using any suitable approach. For example, the voice input processing module can select default words that relate to applications or operations used often by the user (e.g., used more than a threshold amount). As another example, the voice input processing module can prompt the user to provide preference or interest information from which related library words can be extracted. As still another example, the voice input processing module can instead or in addition monitor the user's use of the device to determine the user's preferences. In some embodiments, the voice input processing module can analyze previously received voice inputs to identify particular words or types of words that are often used. FIG. 5 is a flowchart of an illustrative process for selecting a subset of a voice input library in accordance with one embodiment of the invention. Process 500 can begin at step 502. At step 504, a voice input processing module can determine whether a user providing a voice input has been identified. For example, the processing module can determine whether one or more characteristics of the current user match characteristics stored in memory. As another example, the processing module can determine whether the user has provided an input that is associated only with the user. If the processing module determines that the user has not been identified, process 500 can move to step 506 and end. Alternatively, the processing module can define a new profile for identifying the new user's interests, and move to step 508.


If, at step 504, the processing module instead determines that the user has been identified, process 500 can move to step 508. At step 508, the processing module can identify user interest information. In particular, the processing module can identify content or other information specifying the user's interests, and can use the information to generate a preference profile for the user. The processing module can identify user interest information using any suitable approach, including one or more of the approaches described within step 508. At step 510, the processing module can review past user use of the device. For example, the processing module can review feedback information related to media playback (e.g., which media items were selected for playback, skipped, or ranked). As another example, the processing module can review the particular applications or operations that the user directed the device to perform (e.g., the user often uses an email application and sports scores application). As still another example, the processing module can review the types of inputs that the user provided to particular applications or in the context of specific operations (e.g., the user is interested in baseball scores and news, but not basketball or hockey scores and news). At step 512, the processing module can identify user-selected content stored on the device. For example, the processing module can identify attributes of media items that the user selected to transfer from a media library to the device. As another example, the processing module can identify attributes of particular applications that the user has installed or loaded on the device.


At step 514, the processing module can request preference information from the user. For example, the processing module can provide a number of questions to the user (e.g., select from the following list your preferred genres, or identify specific media items that you like). As another example, the processing module can direct the user to indicate a preference for currently provided content (e.g., direct the user to approve or reject a currently played back media item, or a game that the user is trying). At step 516, the processing module can review words identified from previous voice inputs. For example, the processing module can review previously received voice inputs, and the types of words or phrases identified in the previous inputs. In some embodiments, the processing module can further determine which of the identified words were properly identified (e.g., the words for which the corresponding device operation executed by the device was approved by the user).


At step 518, the processing module can identify particular library words associated with the user interest information. For example, the processing module can select a subset of default library words that are associated with particular operations or processes most often used by the user. As another example, the processing module can select a subset of user-specific library words that relate particularly to the content of most interest to the user (e.g., words for metadata related to the media items preferred by the user). In particular, the processing module can identify particular metadata associated with media items of most interest to the user (e.g., media items most recently added to the user's media library, transferred to the device, having the highest user ranking, popular media based on external popularity sources, media by a particular favorite artist or within a genre, media items with higher playcounts). At step 520, the processing module can define a subset of the library that includes at least the identified library words. In some embodiments, the defined subset can include additional words, including for example default library words, or other words commonly used or associated with other users (e.g., words associated with other users of the same device, with users using the same type of device, or with users within a particular community or location). Process 500 can then move to step 506 and end.
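Steps 518 and 520 could be realized along the lines of the sketch below. The interest-scoring weights, the metadata fields, and the choice of how many top items to keep are assumptions made for illustration, not values taken from the patent.

```python
from datetime import datetime, timedelta

def interest_score(item, now=None):
    """Score a media item's relevance to the user (hypothetical weighting)."""
    now = now or datetime.now()
    recency_bonus = 2.0 if (now - item["added"]) < timedelta(days=30) else 0.0
    return item["play_count"] * 0.5 + item["rating"] * 1.0 + recency_bonus

def subset_from_interests(media_items, default_words, top_n=2):
    """Steps 518/520: take metadata words of the highest-interest items, plus defaults."""
    ranked = sorted(media_items, key=interest_score, reverse=True)[:top_n]
    words = set(default_words)
    for item in ranked:
        for value in (item["title"], item["artist"], item["album"]):
            words.update(value.lower().split())
    return words

items = [
    {"title": "Grace Kelly", "artist": "Mika", "album": "Life in Cartoon Motion",
     "play_count": 40, "rating": 5, "added": datetime(2010, 1, 15)},
    {"title": "Hey There Delilah", "artist": "Plain White T's", "album": "All That We Needed",
     "play_count": 3, "rating": 2, "added": datetime(2008, 6, 1)},
]
print(subset_from_interests(items, {"play", "next", "shuffle"}, top_n=1))
```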


The voice input processing module can identify a user using any suitable approach. FIG. 6 is a flowchart of an illustrative process for identifying a user providing a voice input in accordance with one embodiment of the invention. Process 600 can begin at step 602. At step 604, the processing module can determine whether an input was received. For example, the processing module can determine whether an input interface of an electronic device has detected or received an input from a user. The input can be in any suitable form, including for example a voice input or an input provided by the user using his hand or fingers. If the processing module determines that no input has been received, process 600 can move to step 606 and end.


If, at step 604, the processing module instead determines that an input has been received, process 600 can move to step 608. At step 608, the processing module can identify the user providing the input. The processing module can identify the user providing an input using any suitable approach, including one or more of the approaches described within step 608. At step 610, the processing module can identify the user from a user-specific input. For example, the processing circuitry can identify the user from a username and password, token, or other key or secret known only to the user. At step 612, the processing module can identify the user from the type of input received. For example, the processing module can determine that the input corresponds to an operation or process typically performed by a particular user (e.g., only one user uses a particular application). As another example, the processing module can determine that the input was provided at a particular time of day during which the same user uses the device. At step 614, the processing module can identify the user from biometric information of the input. For example, the processing module can identify a user from a voiceprint, fingerprint, recognition of one or more facial features, or any other detected biometric attribute of the user (e.g., by comparing the biometric attribute to a library of known biometric attributes each associated with particular known users of the device).


At step 616, the processing module can use the user's identity for voice processing. In particular, the processing module can retrieve a subset of the word library used for processing voice inputs to streamline the voice input processing. Process 600 can then end at step 606.
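A sketch of the voice-print branch of steps 608 and 614 follows, using a toy cosine-similarity comparison against enrolled voice prints. Real systems would use proper speaker-verification features; the feature vectors and the threshold here are arbitrary assumptions.

```python
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def identify_user(voice_print, enrolled, threshold=0.85):
    """Step 614: match a voice print against known users' enrolled prints.

    `voice_print` and the enrolled prints are illustrative feature vectors;
    returns the best-matching user id, or None if no match clears the threshold.
    """
    best_user, best_score = None, 0.0
    for user_id, enrolled_print in enrolled.items():
        score = cosine_similarity(voice_print, enrolled_print)
        if score > best_score:
            best_user, best_score = user_id, score
    return best_user if best_score >= threshold else None

enrolled = {"alice": [0.9, 0.1, 0.3], "bob": [0.2, 0.8, 0.5]}
print(identify_user([0.88, 0.15, 0.28], enrolled))   # likely "alice"
```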



FIG. 7 is a schematic view of an illustrative system for processing voice inputs based on the user's identity in accordance with one embodiment of the invention. System 700 can include some or all of the features of system 400 (FIG. 4), described above. System 700 can include voice input 710, which can include some or all of the features of voice input 410 (FIG. 4). Voice input 710 can be provided to voice input processing module 720 to identify one or more device operations to perform in response to the voice input. Voice input processing module 720 can include some or all of the features of voice input processing module 420 (FIG. 4). In some embodiments, voice input processing module 720 can include additional features not included in voice input processing module 420. For example, voice input processing module 720 can include one or more of software, firmware and hardware to perform user identification 722. In particular, processing module 720 can identify a user based on a user's inputs to a device or biometric information received from the user. For example, processing module 720 can detect a password or key known only to a particular user, detect an input for performing a device operation typically selected by a particular user, or receive biometric data from an appropriate sensor.


Using user identification 722, processing module 720 can retrieve a particular subset 732 of words from library 730 for processing voice input 710 and identifying particular words or phrases of the voice input. Processing module 720 can provide user identification 722 to library 730 such that library 730 can retrieve a particular subset of library words associated with the identified user. Processing module 720 can then compare voice input 710 with library subset 732 to more efficiently identify specific words or phrases within the voice input (e.g., only comparing to the most relevant words or phrases, or most likely words or phrases to be used in the voice input). For example, voice input processing module 720 can identify one or more keywords specifying an instruction to the device, where the instruction can include one or more variables or values qualifying the instruction. The instruction (e.g., “play”), including the variables or values specifying how the instruction is to be executed (e.g., “Mika's latest album”) can be analyzed to identify one or more electronic device operations corresponding to the instruction.


Library 730 can include some or all of the features of library 422 (FIG. 4). For example, library 730 can be separate from voice input processing module 720, or instead or in addition embedded within voice input processing module 720. Library 730 can include any suitable words, including for example default words associated with the electronic device detecting the voice input, specific words derived from the user's interactions with the electronic device (e.g., with content transferred to the electronic device by the user), or other words or phrases. Subset 732 of library words can include any suitable subset of the library, including for example default words or user-specific words.


The particular words or phrases to place in subset 732 can be selected using any suitable approach. In some embodiments, processing module 720 can determine the user's interests 724 and select a particular subset of library words based on the user's interests. Alternatively, library 730 can receive the user's interests 724 from the processing module, or can retrieve the user's interests directly from the user or from an electronic device. Library 730 can then select the particular words or phrases to include in subset 732. Any suitable approach can be used to correlate a user's interests to words or phrases of a library. For example, words can be selected based on the types of applications or processes used by the user. As another example, words can be selected based on content consumed by the user (e.g., media items played back by the user). As still another example, words can be selected based on data used to perform one or more device operations (e.g., contact information of particular contacts to whom the user sends emails or messages).


Processing module 720 can identify the user's interests 724 using any suitable approach. In some embodiments, processing module 720 can receive user feedback 742 from electronic device 740. The user feedback can include any suitable type of feedback from which user interests 724 can be derived. For example, user feedback 742 can include playback information for media items (e.g., which media items are selected for playback, or skipped during playback), user interactions with the device such as user instructions relating to content accessed using the device (e.g., star rankings provided by the user for media items) or particular applications or operations that the user selects to execute (e.g., a particular game that the user plays), or any other feedback describing a user's interactions with the device. In some cases, user feedback 742 can be provided to library 730 instead of or in addition to processing module 720 for creating subset 732 in the library.
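The feedback described here could be reduced to interest scores along the lines of the sketch below; the event format, the feedback categories, and the weights are hypothetical.

```python
from collections import defaultdict

# Hypothetical weights for different feedback events.
FEEDBACK_WEIGHTS = {"played": 1.0, "skipped": -1.0, "starred_5": 3.0, "starred_1": -2.0}

def interests_from_feedback(events):
    """Aggregate feedback events (plays, skips, star ratings) into
    per-artist and per-genre affinity scores."""
    scores = defaultdict(float)
    for event in events:
        weight = FEEDBACK_WEIGHTS.get(event["type"], 0.0)
        scores[("artist", event["artist"])] += weight
        scores[("genre", event["genre"])] += weight
    return dict(scores)

events = [
    {"type": "played", "artist": "Mika", "genre": "pop"},
    {"type": "starred_5", "artist": "Mika", "genre": "pop"},
    {"type": "skipped", "artist": "Nickelback", "genre": "rock"},
]
print(interests_from_feedback(events))
```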


Voice input processing module 720 can provide an instruction derived from the identified words of voice input 710 to device 740. Device 740 can in turn identify one or more operations to perform in response to the received instruction, and execute the one or more operations. In some embodiments, processing module 720 can instead or in addition identify the one or more operations related to a derived instruction, and provide the operations directly to device 740 for execution. Device 740 can perform any suitable operation, including for example operations relating to one or more applications or processes within one or more applications, and can include a punctual, repeating, or lasting operation (e.g., monitor all incoming email for particular flagged messages). Device 740 can include any suitable device, and can include some or all of the features of electronic device 100 (FIG. 1). In some embodiments, device 740 can detect and provide voice input 710 to voice input processing module 720, which can also reside on device 740. In some embodiments, device 740 can instead or in addition be a distinct device that receives instructions to perform operations from a remote voice input processing module 720. Device 740 can receive instructions from processing module 720 and can provide user feedback to processing module 720 over path 754. Processing module 720 can compare voice inputs received over path 750 with library 730, and can assist in the selection of subset 732 via communications over path 752. Each of paths 750, 752 and 754 can be provided over any suitable communications network or protocol, including for example wired and wireless networks and protocols.


In some embodiments, the voice input can include a word defining an arbitrary or user-specific variable for a device operation. For example, the user can provide a voice input directing the device to play back a media item that the user will find “good.” The processing module can use the user's interests 724 to quantify abstract or qualifying terms and provide actual variables or arguments for the device operations. For example, the electronic device can select recently added or loaded media items, current hits or higher ranked media items, media items with higher play counts, or media items by a favorite artist or within a preferred genre.
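One way to turn a fuzzy qualifier such as “good” into a concrete argument is sketched below; which signals count as “good” (rating, play count) and how they are weighted are purely illustrative assumptions.

```python
def resolve_good_item(media_items):
    """Pick a concrete media item for an abstract request like "play something good".

    Ranks by a hypothetical blend of user rating and play count.
    """
    def score(item):
        return item["rating"] * 2 + item["play_count"] * 0.1
    return max(media_items, key=score)

items = [
    {"title": "Grace Kelly", "rating": 5, "play_count": 40},
    {"title": "Filler Track", "rating": 2, "play_count": 3},
]
print(resolve_good_item(items)["title"])   # "Grace Kelly"
```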


The following flowcharts describe various processes performed in some embodiments of this invention. Although the descriptions for the following flowcharts will be provided in the context of an electronic device, it will be understood that a voice input processing module can perform some or all of the process steps. FIG. 8 is a flowchart of an illustrative process for performing a device operation in response to a voice input from an identified user in accordance with one embodiment of the invention. Process 800 can begin at step 802. At step 804, the electronic device can determine whether a voice input was received. For example, the electronic device can determine whether an input interface detected an analog signal corresponding to a voice input. If no voice input is received, process 800 can move to step 806 and end.


If, at step 804, the electronic device instead determines that a voice input is received, process 800 can move to step 808. At step 808, the electronic device can determine whether the user providing the voice input was identified. For example, the electronic device can determine whether the user provided an input characteristic of the user (e.g., a user name and password, or using a particular application specific to a user). As another example, the electronic device can determine whether biometric information related to the user providing the input has been detected. The electronic device can compare the identification information with a library of authentication or identification information to identify the user. If the user is not identified, process 800 can move to step 810. At step 810, the electronic device can process the received voice input using a full library. For example, the electronic device can identify particular words or phrases of the voice input from an entire library of words used to process voice inputs. Process 800 can then move to step 816.


If, at step 808, the electronic device instead determines that the user was identified, process 800 can move to step 812. At step 812, the electronic device can identify a subset of a library used to process voice inputs. The identified subset can be associated with the identified user, such that words in the subset relate to interests of the user, or to words that the user is likely to use when providing voice inputs. For example, words in the identified subset can include metadata values that relate to content (e.g., media items or contacts) stored by the user on the device. At step 814, the electronic device can process the voice input using the identified subset of the library. For example, the electronic device can compare the received voice input with words of the subset, and identify specific words or phrases of the voice input. At step 816, the electronic device can identify an electronic device operation corresponding to the processed voice input. For example, the electronic device can identify one or more operations or processes to perform based on the voice instruction (e.g., generate a playlist based on a particular media item). At step 818, the electronic device can perform the identified device operation. Process 800 can then end at step 806.
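Process 800 can be summarized in code as a single handler. The helper structures (a text stand-in for the digitized input, a dict of per-user subsets) reuse the sketches above and are, like them, illustrative rather than the patented implementation.

```python
def handle_voice_input(raw_text, identified_user, full_library, subsets_by_user):
    """Steps 804-818: choose the vocabulary, process the input, return an operation.

    `raw_text` stands in for the digitized voice input; `subsets_by_user` maps
    user ids to pre-built library subsets (hypothetical structure).
    """
    if raw_text is None:                       # step 804: no voice input received
        return None
    # Steps 808-812: use the user's subset when the user is known; otherwise
    # fall back to the full library (step 810).
    vocabulary = subsets_by_user.get(identified_user, full_library)
    words = [w for w in raw_text.lower().split() if w in vocabulary]   # step 814
    if not words:
        return None
    keyword, args = words[0], words[1:]        # step 816: derive the operation
    return {"operation": keyword, "arguments": args}

full_library = {"play", "call", "next", "mika", "john"}
subsets = {"alice": {"play", "next", "mika"}}
print(handle_voice_input("play Mika", "alice", full_library, subsets))
print(handle_voice_input("call John", None, full_library, subsets))
```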



FIG. 9 is a flowchart of an illustrative process for processing voice inputs based on a user's identity in accordance with one embodiment of the invention. Process 900 can start at step 902. At step 904, the electronic device can receive a voice input. For example, an input interface of the electronic device can receive a voice input using a microphone. At step 906, the electronic device can identify the user providing the voice input. For example, the electronic device can identify a user from an input characteristic of the user (e.g., a user name and password, or using a particular application specific to a user). As another example, the electronic device can identify a user from detected biometric information. At step 908, the electronic device can identify a subset of library words associated with the identified user. The subset can include words that relate to interests of the user, or words that the user is likely to use when providing voice inputs. For example, words in the identified subset can include metadata values that relate to content (e.g., media items or contacts) stored by the user on the device. At step 910, the electronic device can process the voice input using the identified subset of the library. For example, the electronic device can compare the received voice input with words of the subset, and identify specific words or phrases of the voice input. In some embodiments, the electronic device can identify an instruction to provide to a processor, or an operation for the device to perform from the processed voice input. Process 900 can end at step 912.



FIG. 10 is a flowchart of an illustrative process for defining a subset of library words related to a user in accordance with one embodiment of the invention. Process 1000 can begin at step 1002. At step 1004, the electronic device can retrieve or access a library of words for processing voice inputs received by the device. For example, the electronic device can access a library of words typically used for providing voice inputs to an electronic device. At step 1006, the electronic device can identify a user's interests. For example, the electronic device can review past user use of the device, user-selected content stored on the device, request preference information from the user, or review words identified from previous voice inputs. At step 1008, the electronic device can extract words that the user is likely to use to provide a voice input. For example, the electronic device can identify particular words that relate to the user's interests, or words that the user is likely to use based on the types of applications used by the user. At step 1010, the electronic device can define a subset of the library that includes the extracted words. For example, the subset of the library can include the intersection of the extracted words and of the library. As another example, the electronic device can identify words of the library that share a root or other common feature with the extracted words. Process 1000 can then end at step 1012.
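Steps 1008 and 1010 could look like the sketch below. The crude “shared root” test (a common prefix of a few characters) stands in for real stemming and is an assumption made only for illustration.

```python
def shares_root(a, b, min_prefix=4):
    """Crude stand-in for stemming: treat words with a long common prefix as related."""
    return a[:min_prefix] == b[:min_prefix] and min(len(a), len(b)) >= min_prefix

def define_subset(library, extracted_words):
    """Step 1010: keep library words that match, or share a root with, extracted words."""
    extracted = {w.lower() for w in extracted_words}
    subset = library & extracted                      # exact intersection
    for word in library:
        if any(shares_root(word, e) for e in extracted):
            subset.add(word)                          # common-root matches
    return subset

library = {"play", "playback", "playlist", "call", "shuffle", "mika"}
print(define_subset(library, ["play", "Mika"]))
# {'play', 'playback', 'playlist', 'mika'}
```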


Although many of the embodiments of the present invention are described herein with respect to personal computing devices, it should be understood that the present invention is not limited to personal computing applications, but is generally applicable to other applications.


Embodiments of the invention are preferably implemented by software, but can also be implemented in hardware or a combination of hardware and software. Embodiments of the invention can also be embodied as computer readable code on a computer readable medium. The computer readable medium is any data storage device that can store data which can thereafter be read by a computer system. Examples of the computer readable medium include read-only memory, random-access memory, CD-ROMs, DVDs, magnetic tape, and optical data storage devices. The computer readable medium can also be distributed over network-coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.


Insubstantial changes from the claimed subject matter as viewed by a person with ordinary skill in the art, now known or later devised, are expressly contemplated as being equivalently within the scope of the claims. Therefore, obvious substitutions now or later known to one with ordinary skill in the art are defined to be within the scope of the defined elements.


The above described embodiments of the invention are presented for purposes of illustration and not of limitation.

Claims
  • 1. A method for processing a voice input, comprising: at an electronic device with one or more processors and memory storing one or more programs for execution by the one or more processors: receiving a voice input; identifying a user providing the voice input; identifying a subset of library words associated with the identified user, wherein the subset of library words relates to user interest information that is based on words or phrases from previously received voice inputs that were determined to be properly identified; and processing the received voice input using the identified subset.
  • 2. The method of claim 1, further comprising: identifying an electronic device operation corresponding to the processed voice input.
  • 3. The method of claim 1, further comprising: retrieving at least one instruction from the processed voice input; and identifying at least one electronic device operation corresponding to the retrieved at least one instruction.
  • 4. The method of claim 3, wherein: the at least one instruction comprises an operation and an argument qualifying the operation.
  • 5. The method of claim 4, wherein: the operation comprises a media playback operation; and the argument comprises a particular media item.
  • 6. The method of claim 1, wherein processing further comprises: detecting a plurality of words in the received voice input; comparing the detected plurality of words with the identified subset of library words; and identifying a plurality of words from the identified subset that correspond to the detected plurality of words.
  • 7. The method of claim 6, further comprising: extracting an instruction for the identified plurality of words; and identifying an operation corresponding to the extracted instruction.
  • 8. The method of claim 1, wherein identifying the user further comprises: extracting a voice print from the received voice input; comparing the extracted voice print with a library of known voice prints; and identifying the user having a voice print in the library of known voice prints that corresponds to the received voice print.
  • 9. The method of claim 1, wherein the words or phrases are determined to be properly identified when a device operation corresponding to the words or phrases is approved by the user.
  • 10. The method of claim 1, wherein the subset of library words includes words associated with an application that is used by the user more than a threshold amount.
  • 11. An electronic device controllable by voice inputs, comprising a processor, an input interface, and an output interface, the processor operative to: direct the input interface to receive a voice input from a user; identify the user providing the received voice input; provide the identity of the user to a library of words used to process voice inputs; receive a subset of the library of words, wherein the subset includes words likely to be used by the identified user, and wherein the subset of library words relates to user interest information that is based on words or phrases from previously received voice inputs that were determined to be properly identified; process the voice input using the received subset; and direct the output interface to provide an output based on the processed voice input.
  • 12. The electronic device of claim 11, wherein the processor is further operative to: direct the output interface to play back a media item.
  • 13. The electronic device of claim 12, wherein the processor is further operative to: identify a media playback operation from the voice input; and identify a media item qualifying the media playback operation from the voice input.
  • 14. The electronic device of claim 11, wherein the processor is further operative to identify the user from at least one of: the content of the voice input; the time at which the voice input was provided; and the voice signature of the voice print.
  • 15. The electronic device of claim 11, wherein: the subset of media item words includes words corresponding to metadata values of content selected by the user for storage on the electronic device.
  • 16. The electronic device of claim 15, wherein the content selected by the user for storage on the electronic device comprises at least one of: media items; contact information; applications; calendar information; and settings.
  • 17. The electronic device of claim 11, wherein the words or phrases are determined to be properly identified when a device operation corresponding to the words or phrases is approved by the user.
  • 18. The electronic device of claim 11, wherein the subset of library words includes words associated with an application that is used by the user more than a threshold amount.
  • 19. A method for defining a subset of a library used for processing voice inputs, comprising: at an electronic device with one or more processors and memory storing one or more programs for execution by the one or more processors: providing a library of words from which to process voice inputs; identifying user interest information by identifying words or phrases from previously received voice inputs that were determined to be properly identified; extracting, from the user interest information, words that the user is likely to use to provide a voice input; and defining a subset of the library, wherein the subset comprises at least the words of the library matching the extracted words.
  • 20. The method of claim 19, further comprising: identifying particular media items of interest to the user; and including metadata values for the identified particular media items in the defined subset.
  • 21. The method of claim 20, wherein the metadata values comprise at least one of: artist; title; album; genre; year; play count; rating; and playlist.
  • 22. The method of claim 19, further comprising: comparing the extracted words to the words of the library; identifying words of the library that share at least a common root with at least one extracted word; and including the identified words of the library in the defined subset.
  • 23. The method of claim 19, wherein the words or phrases are determined to be properly identified when a device operation corresponding to the words or phrases is approved by the user.
  • 24. The method of claim 19, wherein the subset of library words includes words associated with an application that is used by the user more than a threshold amount.
  • 25. A non-transitory computer readable medium for processing a voice input, the computer readable medium comprising computer program logic recorded thereon for: receiving a voice input; identifying a user providing the voice input; identifying a subset of library words associated with the identified user, wherein the subset of library words relates to user interest information that is based on words or phrases from previously received voice inputs that were determined to be properly identified; and processing the received voice input using the identified subset.
  • 26. The non-transitory computer readable medium of claim 25, further comprising additional computer program logic recorded thereon for: detecting a plurality of words in the received voice input; comparing the detected plurality of words with the identified subset of library words; and identifying a plurality of words from the identified subset that correspond to the detected plurality of words.
  • 27. The non-transitory computer readable medium of claim 26, further comprising additional computer program logic recorded thereon for: extracting an instruction for the identified plurality of words; and identifying an operation corresponding to the extracted instruction.
  • 28. The non-transitory computer readable medium of claim 25, wherein the words or phrases are determined to be properly identified when a device operation corresponding to the words or phrases is approved by the user.
  • 29. The non-transitory computer readable medium of claim 25, wherein the subset of library words includes words associated with an application that is used by the user more than a threshold amount.
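Claims 1, 6-8 and 25 recite receiving a voice input, identifying the speaker from a voice print, retrieving a user-specific subset of the word library, and matching the detected words against that subset to reach a device operation. The following minimal Python sketch shows one way such a flow could be organized; the class and function names, the cosine-similarity matcher, the 0.8 threshold, and the word-to-operation mapping are illustrative assumptions and are not taken from the specification or claims.

    # Illustrative sketch only (assumed names and thresholds), not the patented implementation.
    from dataclasses import dataclass
    from typing import Dict, List, Optional, Set


    @dataclass
    class UserProfile:
        name: str
        voice_print: List[float]   # stored voice-print feature vector for the user
        word_subset: Set[str]      # user-specific subset of the word library


    def _cosine(a: List[float], b: List[float]) -> float:
        # Simple cosine similarity between two feature vectors.
        dot = sum(x * y for x, y in zip(a, b))
        norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
        return dot / norm if norm else 0.0


    def identify_user(extracted_print: List[float],
                      profiles: List[UserProfile],
                      threshold: float = 0.8) -> Optional[UserProfile]:
        # Compare the extracted voice print with the library of known voice prints and
        # return the best match above an assumed similarity threshold (cf. claim 8).
        best = max(profiles, key=lambda p: _cosine(extracted_print, p.voice_print),
                   default=None)
        if best is not None and _cosine(extracted_print, best.voice_print) >= threshold:
            return best
        return None


    def process_voice_input(detected_words: List[str],
                            user: UserProfile,
                            operations: Dict[str, str]) -> Optional[str]:
        # Compare the detected words with the identified user's subset and map a
        # matched word to a device operation (cf. claims 6-7); unmatched words are ignored.
        matched = [w.lower() for w in detected_words if w.lower() in user.word_subset]
        for word in matched:
            if word in operations:
                return operations[word]
        return None


    if __name__ == "__main__":
        alice = UserProfile("Alice", voice_print=[0.9, 0.1, 0.3],
                            word_subset={"play", "miles", "davis"})
        speaker = identify_user([0.88, 0.12, 0.31], [alice])
        if speaker is not None:
            print(process_voice_input(["Play", "Miles", "Davis"], speaker,
                                      {"play": "media_playback"}))

In this sketch a single matched keyword selects the operation; any qualifying words (for example an artist name) would be passed along as arguments, consistent with the operation-plus-argument structure recited in claims 4-5.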
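Claims 19-22 recite building the user-specific subset from user interest information, for example metadata values (artist, title, album, genre, and so on) of media items the user has stored and words from previously received voice inputs whose resulting operation the user approved, and including library words that share a common root with an extracted word. The sketch below is a minimal illustration under the assumption that a shared root can be approximated by a shared prefix; the helper names and the four-character prefix length are assumptions, not the patent's method.

    # Illustrative sketch only; prefix-based root matching and all names are assumptions.
    from typing import Dict, Iterable, Set


    def extract_interest_words(media_metadata: Iterable[Dict[str, str]],
                               approved_phrases: Iterable[str]) -> Set[str]:
        # Collect words the user is likely to speak: metadata values of stored media
        # (artist, title, album, ...) plus words or phrases from previously received
        # voice inputs whose corresponding operation the user approved.
        words: Set[str] = set()
        for item in media_metadata:
            for value in item.values():
                words.update(value.lower().split())
        for phrase in approved_phrases:
            words.update(phrase.lower().split())
        return words


    def define_subset(library: Set[str], interest_words: Set[str],
                      root_length: int = 4) -> Set[str]:
        # Keep library words that match an interest word exactly or share a common
        # root, approximated here by a shared prefix of root_length characters.
        roots = {w[:root_length] for w in interest_words if len(w) >= root_length}
        return {w for w in library
                if w in interest_words
                or (len(w) >= root_length and w[:root_length] in roots)}


    if __name__ == "__main__":
        library = {"play", "playing", "pause", "miles", "davis", "skip"}
        interest = extract_interest_words(
            [{"artist": "Miles Davis", "album": "Kind of Blue"}],
            ["play kind of blue"])
        print(define_subset(library, interest))  # {'play', 'playing', 'miles', 'davis'}

The prefix rule is only a stand-in for the "common root" limitation of claim 22; a stemmer or morphological analyzer could serve the same purpose.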
US Referenced Citations (658)
Number Name Date Kind
3704345 Coker et al. Nov 1972 A
3828132 Flanagan et al. Aug 1974 A
3979557 Schulman et al. Sep 1976 A
4278838 Antonov Jul 1981 A
4282405 Taguchi Aug 1981 A
4310721 Manley et al. Jan 1982 A
4348553 Baker et al. Sep 1982 A
4653021 Takagi Mar 1987 A
4688195 Thompson et al. Aug 1987 A
4692941 Jacks et al. Sep 1987 A
4718094 Bahl et al. Jan 1988 A
4724542 Williford Feb 1988 A
4726065 Froessl Feb 1988 A
4727354 Lindsay Feb 1988 A
4776016 Hansen Oct 1988 A
4783807 Marley Nov 1988 A
4811243 Racine Mar 1989 A
4819271 Bahl et al. Apr 1989 A
4827520 Zeinstra May 1989 A
4829576 Porter May 1989 A
4833712 Bahl et al. May 1989 A
4839853 Deerwester et al. Jun 1989 A
4852168 Sprague Jul 1989 A
4862504 Nomura Aug 1989 A
4878230 Murakami et al. Oct 1989 A
4903305 Gillick et al. Feb 1990 A
4905163 Garber et al. Feb 1990 A
4914586 Swinehart et al. Apr 1990 A
4914590 Loatman et al. Apr 1990 A
4944013 Gouvianakis et al. Jul 1990 A
4955047 Morganstein et al. Sep 1990 A
4965763 Zamora Oct 1990 A
4974191 Amirghodsi et al. Nov 1990 A
4977598 Doddington et al. Dec 1990 A
4992972 Brooks et al. Feb 1991 A
5010574 Wang Apr 1991 A
5020112 Chou May 1991 A
5021971 Lindsay Jun 1991 A
5022081 Hirose et al. Jun 1991 A
5027406 Roberts et al. Jun 1991 A
5031217 Nishimura Jul 1991 A
5032989 Tornetta Jul 1991 A
5040218 Vitale et al. Aug 1991 A
5047617 Shepard et al. Sep 1991 A
5057915 Kohorn, et al. Oct 1991 A
5072452 Brown et al. Dec 1991 A
5091945 Kleijn Feb 1992 A
5127053 Koch Jun 1992 A
5127055 Larkey Jun 1992 A
5128672 Kaehler Jul 1992 A
5133011 McKiel, Jr. Jul 1992 A
5142584 Ozawa Aug 1992 A
5164900 Bernath Nov 1992 A
5165007 Bahl et al. Nov 1992 A
5179652 Rozmanith et al. Jan 1993 A
5194950 Murakami et al. Mar 1993 A
5197005 Shwartz et al. Mar 1993 A
5199077 Wilcox et al. Mar 1993 A
5202952 Gillick et al. Apr 1993 A
5208862 Ozawa May 1993 A
5216747 Hardwick et al. Jun 1993 A
5220639 Lee Jun 1993 A
5220657 Bly et al. Jun 1993 A
5222146 Bahl et al. Jun 1993 A
5230036 Akamine et al. Jul 1993 A
5235680 Bijnagte Aug 1993 A
5267345 Brown et al. Nov 1993 A
5268990 Cohen et al. Dec 1993 A
5282265 Rohra Suda et al. Jan 1994 A
RE34562 Murakami et al. Mar 1994 E
5291286 Murakami et al. Mar 1994 A
5293448 Honda Mar 1994 A
5293452 Picone et al. Mar 1994 A
5297170 Eyuboglu et al. Mar 1994 A
5301109 Landauer et al. Apr 1994 A
5303406 Hansen et al. Apr 1994 A
5309359 Katz et al. May 1994 A
5317507 Gallant May 1994 A
5317647 Pagallo May 1994 A
5325297 Bird et al. Jun 1994 A
5325298 Gallant Jun 1994 A
5327498 Hamon Jul 1994 A
5333236 Bahl et al. Jul 1994 A
5333275 Wheatley et al. Jul 1994 A
5345536 Hoshimi et al. Sep 1994 A
5349645 Zhao Sep 1994 A
5353377 Kuroda et al. Oct 1994 A
5377301 Rosenberg et al. Dec 1994 A
5384892 Strong Jan 1995 A
5384893 Hutchins Jan 1995 A
5386494 White Jan 1995 A
5386556 Hedin et al. Jan 1995 A
5390279 Strong Feb 1995 A
5396625 Parkes Mar 1995 A
5400434 Pearson Mar 1995 A
5404295 Katz et al. Apr 1995 A
5412756 Bauman et al. May 1995 A
5412804 Krishna May 1995 A
5412806 Du et al. May 1995 A
5418951 Damashek May 1995 A
5424947 Nagao et al. Jun 1995 A
5434777 Luciw Jul 1995 A
5444823 Nguyen Aug 1995 A
5455888 Iyengar et al. Oct 1995 A
5469529 Bimbot et al. Nov 1995 A
5471611 McGregor Nov 1995 A
5475587 Anick et al. Dec 1995 A
5479488 Lennig et al. Dec 1995 A
5491772 Hardwick et al. Feb 1996 A
5493677 Balogh Feb 1996 A
5495604 Harding et al. Feb 1996 A
5502790 Yi Mar 1996 A
5502791 Nishimura et al. Mar 1996 A
5515475 Gupta et al. May 1996 A
5536902 Serra et al. Jul 1996 A
5537618 Boulton et al. Jul 1996 A
5574823 Hassanein et al. Nov 1996 A
5577241 Spencer Nov 1996 A
5578808 Taylor Nov 1996 A
5579436 Chou et al. Nov 1996 A
5581655 Cohen et al. Dec 1996 A
5584024 Shwartz Dec 1996 A
5596676 Swaminathan et al. Jan 1997 A
5596994 Bro Jan 1997 A
5608624 Luciw Mar 1997 A
5613036 Strong Mar 1997 A
5617507 Lee et al. Apr 1997 A
5619694 Shimazu Apr 1997 A
5621859 Schwartz et al. Apr 1997 A
5621903 Luciw et al. Apr 1997 A
5642464 Yue et al. Jun 1997 A
5642519 Martin Jun 1997 A
5644727 Atkins Jul 1997 A
5664055 Kroon Sep 1997 A
5675819 Schuetze Oct 1997 A
5682539 Conrad et al. Oct 1997 A
5687077 Gough, Jr. Nov 1997 A
5696962 Kupiec Dec 1997 A
5701400 Amado Dec 1997 A
5706442 Anderson et al. Jan 1998 A
5710886 Christensen et al. Jan 1998 A
5712957 Waibel et al. Jan 1998 A
5715468 Budzinski Feb 1998 A
5721827 Logan et al. Feb 1998 A
5727950 Cook et al. Mar 1998 A
5729694 Holzrichter et al. Mar 1998 A
5732390 Katayanagi et al. Mar 1998 A
5734791 Acero et al. Mar 1998 A
5737734 Schultz Apr 1998 A
5748974 Johnson May 1998 A
5749081 Whiteis May 1998 A
5759101 Von Kohorn Jun 1998 A
5790978 Olive et al. Aug 1998 A
5794050 Dahlgren et al. Aug 1998 A
5794182 Manduchi et al. Aug 1998 A
5794207 Walker et al. Aug 1998 A
5794237 Gore, Jr. Aug 1998 A
5799276 Komissarchik et al. Aug 1998 A
5822730 Roth et al. Oct 1998 A
5822743 Gupta et al. Oct 1998 A
5825881 Colvin, Sr. Oct 1998 A
5826261 Spencer Oct 1998 A
5828999 Bellegarda et al. Oct 1998 A
5835893 Ushioda Nov 1998 A
5839106 Bellegarda Nov 1998 A
5845255 Mayaud Dec 1998 A
5857184 Lynch Jan 1999 A
5860063 Gorin et al. Jan 1999 A
5862233 Poletti Jan 1999 A
5864806 Mokbel et al. Jan 1999 A
5864844 James et al. Jan 1999 A
5867799 Lang et al. Feb 1999 A
5873056 Liddy et al. Feb 1999 A
5875437 Atkins Feb 1999 A
5884323 Hawkins et al. Mar 1999 A
5895464 Bhandari et al. Apr 1999 A
5895466 Goldberg et al. Apr 1999 A
5899972 Miyazawa et al. May 1999 A
5913193 Huang et al. Jun 1999 A
5915249 Spencer Jun 1999 A
5930769 Rose Jul 1999 A
5933822 Braden-Harder et al. Aug 1999 A
5936926 Yokouchi et al. Aug 1999 A
5940811 Norris Aug 1999 A
5941944 Messerly Aug 1999 A
5943670 Prager Aug 1999 A
5948040 DeLorme et al. Sep 1999 A
5956699 Wong et al. Sep 1999 A
5960422 Prasad Sep 1999 A
5963924 Williams et al. Oct 1999 A
5966126 Szabo Oct 1999 A
5970474 LeRoy et al. Oct 1999 A
5974146 Randle et al. Oct 1999 A
5982891 Ginter et al. Nov 1999 A
5987132 Rowney Nov 1999 A
5987140 Rowney et al. Nov 1999 A
5987404 Della Pietra et al. Nov 1999 A
5987440 O'Neil et al. Nov 1999 A
5999908 Abelow Dec 1999 A
6016471 Kuhn et al. Jan 2000 A
6023684 Pearson Feb 2000 A
6024288 Gottlich et al. Feb 2000 A
6026345 Shah et al. Feb 2000 A
6026375 Hall et al. Feb 2000 A
6026388 Liddy et al. Feb 2000 A
6026393 Gupta et al. Feb 2000 A
6029132 Kuhn et al. Feb 2000 A
6038533 Buchsbaum et al. Mar 2000 A
6052656 Suda et al. Apr 2000 A
6055514 Wren Apr 2000 A
6055531 Bennett et al. Apr 2000 A
6064960 Bellegarda et al. May 2000 A
6070139 Miyazawa et al. May 2000 A
6070147 Harms et al. May 2000 A
6076051 Messerly et al. Jun 2000 A
6076088 Paik et al. Jun 2000 A
6078914 Redfern Jun 2000 A
6081750 Hoffberg et al. Jun 2000 A
6081774 de Hita et al. Jun 2000 A
6088731 Kiraly et al. Jul 2000 A
6094649 Bowen et al. Jul 2000 A
6105865 Hardesty Aug 2000 A
6108627 Sabourin Aug 2000 A
6119101 Peckover Sep 2000 A
6122616 Henton Sep 2000 A
6125356 Brockman et al. Sep 2000 A
6144938 Surace et al. Nov 2000 A
6173261 Arai et al. Jan 2001 B1
6173279 Levin et al. Jan 2001 B1
6188999 Moody Feb 2001 B1
6195641 Loring et al. Feb 2001 B1
6205456 Nakao Mar 2001 B1
6208971 Bellegarda et al. Mar 2001 B1
6233559 Balakrishnan May 2001 B1
6233578 Machihara et al. May 2001 B1
6246981 Papineni et al. Jun 2001 B1
6260024 Shkedy Jul 2001 B1
6266637 Donovan et al. Jul 2001 B1
6275824 O'Flaherty et al. Aug 2001 B1
6285786 Seni et al. Sep 2001 B1
6308149 Gaussier et al. Oct 2001 B1
6311189 deVries et al. Oct 2001 B1
6317594 Gossman et al. Nov 2001 B1
6317707 Bangalore et al. Nov 2001 B1
6317831 King Nov 2001 B1
6321092 Fitch et al. Nov 2001 B1
6334103 Surace et al. Dec 2001 B1
6356854 Schubert et al. Mar 2002 B1
6356905 Gershman et al. Mar 2002 B1
6366883 Campbell et al. Apr 2002 B1
6366884 Bellegarda et al. Apr 2002 B1
6421672 McAllister et al. Jul 2002 B1
6430551 Thelen et al. Aug 2002 B1
6434524 Weber Aug 2002 B1
6446076 Burkey et al. Sep 2002 B1
6449620 Draper et al. Sep 2002 B1
6453292 Ramaswamy et al. Sep 2002 B2
6460029 Fries et al. Oct 2002 B1
6466654 Cooper et al. Oct 2002 B1
6477488 Bellegarda Nov 2002 B1
6487534 Thelen et al. Nov 2002 B1
6499013 Weber Dec 2002 B1
6501937 Ho et al. Dec 2002 B1
6505158 Conkie Jan 2003 B1
6505175 Silverman et al. Jan 2003 B1
6505183 Loofbourrow et al. Jan 2003 B1
6510417 Woods et al. Jan 2003 B1
6513063 Julia et al. Jan 2003 B1
6523061 Halverson et al. Feb 2003 B1
6523172 Martinez-Guerra et al. Feb 2003 B1
6526382 Yuschik Feb 2003 B1
6526395 Morris Feb 2003 B1
6532444 Weber Mar 2003 B1
6532446 King Mar 2003 B1
6546388 Edlund et al. Apr 2003 B1
6553344 Bellegarda et al. Apr 2003 B2
6556983 Altschuler et al. Apr 2003 B1
6584464 Warthen Jun 2003 B1
6598039 Livowsky Jul 2003 B1
6601026 Appelt et al. Jul 2003 B2
6601234 Bowman-Amuah Jul 2003 B1
6604059 Strubbe et al. Aug 2003 B2
6615172 Bennett et al. Sep 2003 B1
6615175 Gazdzinski Sep 2003 B1
6615220 Austin et al. Sep 2003 B1
6625583 Silverman et al. Sep 2003 B1
6631346 Karaorman et al. Oct 2003 B1
6633846 Bennett et al. Oct 2003 B1
6647260 Dusse et al. Nov 2003 B2
6650735 Burton et al. Nov 2003 B2
6654740 Tokuda et al. Nov 2003 B2
6665639 Mozer et al. Dec 2003 B2
6665640 Bennett et al. Dec 2003 B1
6665641 Coorman et al. Dec 2003 B1
6684187 Conkie Jan 2004 B1
6691064 Vroman Feb 2004 B2
6691111 Lazaridis et al. Feb 2004 B2
6691151 Cheyer et al. Feb 2004 B1
6697780 Beutnagel et al. Feb 2004 B1
6697824 Bowman-Amuah Feb 2004 B1
6701294 Ball et al. Mar 2004 B1
6711585 Copperman et al. Mar 2004 B1
6718324 Edlund et al. Apr 2004 B2
6721728 McGreevy Apr 2004 B2
6735632 Kiraly et al. May 2004 B1
6742021 Halverson et al. May 2004 B1
6751595 Busayapongchai et al. Jun 2004 B2
6757362 Cooper et al. Jun 2004 B1
6757718 Halverson et al. Jun 2004 B1
6766320 Want et al. Jul 2004 B1
6778951 Contractor Aug 2004 B1
6778952 Bellegarda Aug 2004 B2
6778962 Kasai et al. Aug 2004 B1
6778970 Au Aug 2004 B2
6792082 Levine Sep 2004 B1
6807574 Partovi et al. Oct 2004 B1
6810379 Vermeulen et al. Oct 2004 B1
6813491 McKinney Nov 2004 B1
6829603 Chai et al. Dec 2004 B1
6832194 Mozer et al. Dec 2004 B1
6842767 Partovi et al. Jan 2005 B1
6847966 Sommer et al. Jan 2005 B1
6847979 Allemang et al. Jan 2005 B2
6851115 Cheyer et al. Feb 2005 B1
6859931 Cheyer et al. Feb 2005 B1
6895380 Sepe, Jr. May 2005 B2
6895558 Loveland May 2005 B1
6901399 Corston et al. May 2005 B1
6912499 Sabourin et al. Jun 2005 B1
6924828 Hirsch Aug 2005 B1
6928614 Everhart Aug 2005 B1
6931384 Horvitz et al. Aug 2005 B1
6937975 Elworthy Aug 2005 B1
6937986 Denenberg et al. Aug 2005 B2
6964023 Maes et al. Nov 2005 B2
6980949 Ford Dec 2005 B2
6980955 Okutani et al. Dec 2005 B2
6985865 Packingham et al. Jan 2006 B1
6988071 Gazdzinski Jan 2006 B1
6996531 Korall et al. Feb 2006 B2
6999927 Mozer et al. Feb 2006 B2
7020685 Chen et al. Mar 2006 B1
7027974 Busch et al. Apr 2006 B1
7036128 Julia et al. Apr 2006 B1
7050977 Bennett May 2006 B1
7058569 Coorman et al. Jun 2006 B2
7062428 Hogenhout et al. Jun 2006 B2
7069560 Cheyer et al. Jun 2006 B1
7092887 Mozer et al. Aug 2006 B2
7092928 Elad et al. Aug 2006 B1
7093693 Gazdzinski Aug 2006 B1
7127046 Smith et al. Oct 2006 B1
7127403 Saylor et al. Oct 2006 B1
7136710 Hoffberg et al. Nov 2006 B1
7137126 Coffman et al. Nov 2006 B1
7139714 Bennett et al. Nov 2006 B2
7139722 Perrella et al. Nov 2006 B2
7152070 Musick et al. Dec 2006 B1
7177798 Hsu et al. Feb 2007 B2
7197460 Gupta et al. Mar 2007 B1
7200559 Wang Apr 2007 B2
7203646 Bennett Apr 2007 B2
7216073 Lavi et al. May 2007 B2
7216080 Tsiao et al. May 2007 B2
7225125 Bennett et al. May 2007 B2
7233790 Kjellberg et al. Jun 2007 B2
7233904 Luisi Jun 2007 B2
7266496 Wang et al. Sep 2007 B2
7277854 Bennett et al. Oct 2007 B2
7290039 Lisitsa et al. Oct 2007 B1
7299033 Kjellberg et al. Nov 2007 B2
7310600 Garner et al. Dec 2007 B1
7324947 Jordan et al. Jan 2008 B2
7349953 Lisitsa et al. Mar 2008 B2
7376556 Bennett May 2008 B2
7376645 Bernard May 2008 B2
7379874 Schmid et al. May 2008 B2
7386449 Sun et al. Jun 2008 B2
7389224 Elworthy Jun 2008 B1
7392185 Bennett Jun 2008 B2
7398209 Kennewick et al. Jul 2008 B2
7403938 Harrison et al. Jul 2008 B2
7409337 Potter et al. Aug 2008 B1
7415100 Cooper et al. Aug 2008 B2
7418392 Mozer et al. Aug 2008 B1
7426467 Nashida et al. Sep 2008 B2
7427024 Gazdzinski et al. Sep 2008 B1
7447635 Konopka et al. Nov 2008 B1
7454351 Jeschke et al. Nov 2008 B2
7467087 Gillick et al. Dec 2008 B1
7475010 Chao Jan 2009 B2
7483894 Cao Jan 2009 B2
7487089 Mozer Feb 2009 B2
7496498 Chu et al. Feb 2009 B2
7496512 Zhao et al. Feb 2009 B2
7502738 Kennewick et al. Mar 2009 B2
7508373 Lin et al. Mar 2009 B2
7522927 Fitch et al. Apr 2009 B2
7523108 Cao Apr 2009 B2
7526466 Au Apr 2009 B2
7529671 Rockenbeck et al. May 2009 B2
7529676 Koyama May 2009 B2
7539656 Fratkina et al. May 2009 B2
7546382 Healey et al. Jun 2009 B2
7548895 Pulsipher Jun 2009 B2
7552055 Lecoeuche Jun 2009 B2
7555431 Bennett Jun 2009 B2
7558730 Davis et al. Jul 2009 B2
7571106 Cao et al. Aug 2009 B2
7599918 Shen et al. Oct 2009 B2
7620549 Di Cristo et al. Nov 2009 B2
7624007 Bennett Nov 2009 B2
7634409 Kennewick et al. Dec 2009 B2
7636657 Ju et al. Dec 2009 B2
7640160 Di Cristo et al. Dec 2009 B2
7647225 Bennett et al. Jan 2010 B2
7657424 Bennett Feb 2010 B2
7672841 Bennett Mar 2010 B2
7676026 Baxter, Jr. Mar 2010 B1
7684985 Dominach et al. Mar 2010 B2
7693715 Hwang et al. Apr 2010 B2
7693720 Kennewick et al. Apr 2010 B2
7698131 Bennett Apr 2010 B2
7702500 Blaedow Apr 2010 B2
7702508 Bennett Apr 2010 B2
7707027 Balchandran et al. Apr 2010 B2
7707032 Wang et al. Apr 2010 B2
7707267 Lisitsa et al. Apr 2010 B2
7711565 Gazdzinski May 2010 B1
7711672 Au May 2010 B2
7716056 Weng et al. May 2010 B2
7720674 Kaiser et al. May 2010 B2
7720683 Vermeulen et al. May 2010 B1
7725307 Bennett May 2010 B2
7725318 Gavalda et al. May 2010 B2
7725320 Bennett May 2010 B2
7725321 Bennett May 2010 B2
7729904 Bennett Jun 2010 B2
7729916 Coffman et al. Jun 2010 B2
7734461 Kwak et al. Jun 2010 B2
7747616 Yamada et al. Jun 2010 B2
7752152 Paek et al. Jul 2010 B2
7756868 Lee Jul 2010 B2
7774204 Mozer et al. Aug 2010 B2
7783486 Rosser et al. Aug 2010 B2
7801729 Mozer Sep 2010 B2
7809570 Kennewick et al. Oct 2010 B2
7809610 Cao Oct 2010 B2
7818176 Freeman et al. Oct 2010 B2
7822608 Cross, Jr. et al. Oct 2010 B2
7826945 Zhang et al. Nov 2010 B2
7831426 Bennett Nov 2010 B2
7840400 Lavi et al. Nov 2010 B2
7840447 Kleinrock et al. Nov 2010 B2
7853574 Kraenzel et al. Dec 2010 B2
7873519 Bennett Jan 2011 B2
7873654 Bernard Jan 2011 B2
7881936 Longé et al. Feb 2011 B2
7890652 Bull et al. Feb 2011 B2
7912702 Bennett Mar 2011 B2
7917367 Di Cristo et al. Mar 2011 B2
7917497 Harrison et al. Mar 2011 B2
7920678 Cooper et al. Apr 2011 B2
7925525 Chin Apr 2011 B2
7930168 Weng et al. Apr 2011 B2
7949529 Weider et al. May 2011 B2
7949534 Davis et al. May 2011 B2
7974844 Sumita Jul 2011 B2
7974972 Cao Jul 2011 B2
7983915 Knight et al. Jul 2011 B2
7983917 Kennewick et al. Jul 2011 B2
7983997 Allen et al. Jul 2011 B2
7986431 Emori et al. Jul 2011 B2
7987151 Schott et al. Jul 2011 B2
7996228 Miller et al. Aug 2011 B2
8000453 Cooper et al. Aug 2011 B2
8005679 Jordan et al. Aug 2011 B2
8015006 Kennewick et al. Sep 2011 B2
8024195 Mozer et al. Sep 2011 B2
8036901 Mozer Oct 2011 B2
8041570 Mirkovic et al. Oct 2011 B2
8041611 Kleinrock et al. Oct 2011 B2
8055708 Chitsaz et al. Nov 2011 B2
8065155 Gazdzinski Nov 2011 B1
8065156 Gazdzinski Nov 2011 B2
8069046 Kennewick et al. Nov 2011 B2
8073681 Baldwin et al. Dec 2011 B2
8078473 Gazdzinski Dec 2011 B1
8082153 Coffman et al. Dec 2011 B2
8095364 Longé et al. Jan 2012 B2
8099289 Mozer et al. Jan 2012 B2
8107401 John et al. Jan 2012 B2
8112275 Kennewick et al. Feb 2012 B2
8112280 Lu Feb 2012 B2
8117037 Gazdzinski Feb 2012 B2
8131557 Davis et al. Mar 2012 B2
8140335 Kennewick et al. Mar 2012 B2
8165886 Gagnon et al. Apr 2012 B1
8166019 Lee et al. Apr 2012 B1
8190359 Bourne May 2012 B2
8195467 Mozer et al. Jun 2012 B2
8204238 Mozer Jun 2012 B2
8205788 Gazdzinski et al. Jun 2012 B1
8219407 Roy et al. Jul 2012 B1
8285551 Gazdzinski Oct 2012 B2
8285553 Gazdzinski Oct 2012 B2
8290778 Gazdzinski Oct 2012 B2
8290781 Gazdzinski Oct 2012 B2
8296146 Gazdzinski Oct 2012 B2
8296153 Gazdzinski Oct 2012 B2
8301456 Gazdzinski Oct 2012 B2
8311834 Gazdzinski Nov 2012 B1
8370158 Gazdzinski Feb 2013 B2
8371503 Gazdzinski Feb 2013 B2
8374871 Ehsani et al. Feb 2013 B2
8447612 Gazdzinski May 2013 B2
20010047264 Roundtree Nov 2001 A1
20020032564 Ehsani et al. Mar 2002 A1
20020046025 Hain Apr 2002 A1
20020069063 Buchner et al. Jun 2002 A1
20020077817 Atal Jun 2002 A1
20020103641 Kuo et al. Aug 2002 A1
20020164000 Cohen et al. Nov 2002 A1
20020198714 Zhou Dec 2002 A1
20040135701 Yasuda et al. Jul 2004 A1
20040236778 Junqua et al. Nov 2004 A1
20050055403 Brittan Mar 2005 A1
20050071332 Ortega et al. Mar 2005 A1
20050080625 Bennett et al. Apr 2005 A1
20050091118 Fano Apr 2005 A1
20050102614 Brockett et al. May 2005 A1
20050108001 Aarskog May 2005 A1
20050114124 Liu et al. May 2005 A1
20050119897 Bennett et al. Jun 2005 A1
20050143972 Gopalakrishnan et al. Jun 2005 A1
20050165607 DiFabbrizio et al. Jul 2005 A1
20050182629 Coorman et al. Aug 2005 A1
20050196733 Budra et al. Sep 2005 A1
20050288936 Busayapongchai et al. Dec 2005 A1
20060018492 Chiu et al. Jan 2006 A1
20060074660 Waters et al. Apr 2006 A1
20060106592 Brockett et al. May 2006 A1
20060106594 Brockett et al. May 2006 A1
20060106595 Brockett et al. May 2006 A1
20060117002 Swen Jun 2006 A1
20060122834 Bennett Jun 2006 A1
20060143007 Koh et al. Jun 2006 A1
20070055529 Kanevsky et al. Mar 2007 A1
20070058832 Hug et al. Mar 2007 A1
20070088556 Andrew Apr 2007 A1
20070100790 Cheyer et al. May 2007 A1
20070106674 Agrawal et al. May 2007 A1
20070118377 Badino et al. May 2007 A1
20070135949 Snover et al. Jun 2007 A1
20070174188 Fish Jul 2007 A1
20070185917 Prahlad et al. Aug 2007 A1
20070282595 Tunning et al. Dec 2007 A1
20080015864 Ross et al. Jan 2008 A1
20080021708 Bennett et al. Jan 2008 A1
20080034032 Healey et al. Feb 2008 A1
20080052063 Bennett et al. Feb 2008 A1
20080120112 Jordan et al. May 2008 A1
20080129520 Lee Jun 2008 A1
20080140657 Azvine et al. Jun 2008 A1
20080189106 Low et al. Aug 2008 A1
20080221903 Kanevsky et al. Sep 2008 A1
20080228496 Yu et al. Sep 2008 A1
20080247519 Abella et al. Oct 2008 A1
20080249770 Kim et al. Oct 2008 A1
20080300878 Bennett Dec 2008 A1
20080319763 Di Fabbrizio et al. Dec 2008 A1
20090006100 Badger et al. Jan 2009 A1
20090006343 Platt et al. Jan 2009 A1
20090030800 Grois Jan 2009 A1
20090055179 Cho et al. Feb 2009 A1
20090058823 Kocienda Mar 2009 A1
20090076796 Daraselia Mar 2009 A1
20090077165 Rhodes et al. Mar 2009 A1
20090100049 Cao Apr 2009 A1
20090106026 Ferrieux Apr 2009 A1
20090112677 Rhett Apr 2009 A1
20090150156 Kennewick et al. Jun 2009 A1
20090157401 Bennett Jun 2009 A1
20090164441 Cheyer Jun 2009 A1
20090171664 Kennewick et al. Jul 2009 A1
20090287583 Holmes Nov 2009 A1
20090290718 Kahn et al. Nov 2009 A1
20090299745 Kennewick et al. Dec 2009 A1
20090299849 Cao et al. Dec 2009 A1
20090307162 Bui et al. Dec 2009 A1
20100005081 Bennett Jan 2010 A1
20100023320 Di Cristo et al. Jan 2010 A1
20100036660 Bennett Feb 2010 A1
20100042400 Block et al. Feb 2010 A1
20100088020 Sano et al. Apr 2010 A1
20100138215 Williams Jun 2010 A1
20100145700 Kennewick et al. Jun 2010 A1
20100204986 Kennewick et al. Aug 2010 A1
20100217604 Baldwin et al. Aug 2010 A1
20100228540 Bennett Sep 2010 A1
20100235341 Bennett Sep 2010 A1
20100257160 Cao Oct 2010 A1
20100262599 Nitz Oct 2010 A1
20100277579 Cho et al. Nov 2010 A1
20100280983 Cho et al. Nov 2010 A1
20100286985 Kennewick et al. Nov 2010 A1
20100299142 Freeman et al. Nov 2010 A1
20100312547 van Os et al. Dec 2010 A1
20100318576 Kim Dec 2010 A1
20100332235 David Dec 2010 A1
20100332348 Cao Dec 2010 A1
20110047072 Ciurea Feb 2011 A1
20110060807 Martin et al. Mar 2011 A1
20110082688 Kim et al. Apr 2011 A1
20110112827 Kennewick et al. May 2011 A1
20110112921 Kennewick et al. May 2011 A1
20110119049 Ylonen May 2011 A1
20110125540 Jang et al. May 2011 A1
20110130958 Stahl et al. Jun 2011 A1
20110131036 Di Cristo et al. Jun 2011 A1
20110131045 Cristo et al. Jun 2011 A1
20110143811 Rodriguez Jun 2011 A1
20110144999 Jang et al. Jun 2011 A1
20110161076 Davis et al. Jun 2011 A1
20110161309 Lung et al. Jun 2011 A1
20110175810 Markovic et al. Jul 2011 A1
20110184730 LeBeau et al. Jul 2011 A1
20110218855 Cao et al. Sep 2011 A1
20110231182 Weider et al. Sep 2011 A1
20110231188 Kennewick et al. Sep 2011 A1
20110264643 Cao Oct 2011 A1
20110279368 Klein et al. Nov 2011 A1
20110306426 Novak et al. Dec 2011 A1
20120002820 Leichter Jan 2012 A1
20120016678 Gruber et al. Jan 2012 A1
20120020490 Leichter Jan 2012 A1
20120022787 LeBeau et al. Jan 2012 A1
20120022857 Baldwin et al. Jan 2012 A1
20120022860 Lloyd et al. Jan 2012 A1
20120022868 LeBeau et al. Jan 2012 A1
20120022869 Lloyd et al. Jan 2012 A1
20120022870 Kristjansson et al. Jan 2012 A1
20120022874 Lloyd et al. Jan 2012 A1
20120022876 LeBeau et al. Jan 2012 A1
20120023088 Cheng et al. Jan 2012 A1
20120034904 LeBeau et al. Feb 2012 A1
20120035908 LeBeau et al. Feb 2012 A1
20120035924 Jitkoff et al. Feb 2012 A1
20120035931 LeBeau et al. Feb 2012 A1
20120035932 Jitkoff et al. Feb 2012 A1
20120042343 Laligand et al. Feb 2012 A1
20120137367 Dupont et al. May 2012 A1
20120173464 Tur et al. Jul 2012 A1
20120265528 Gruber et al. Oct 2012 A1
20120271676 Aravamudan et al. Oct 2012 A1
20120311583 Gruber et al. Dec 2012 A1
20130110518 Gruber et al. May 2013 A1
20130110520 Cheyer et al. May 2013 A1
Foreign Referenced Citations (47)
Number Date Country
681573 Apr 1993 CH
3837590 May 1990 DE
0138061 Sep 1984 EP
0138061 Apr 1985 EP
0218859 Apr 1987 EP
0262938 Apr 1988 EP
0293259 Nov 1988 EP
0299572 Jan 1989 EP
0313975 May 1989 EP
0314908 May 1989 EP
0327408 Aug 1989 EP
0389271 Sep 1990 EP
0411675 Feb 1991 EP
0559349 Sep 1993 EP
0559349 Sep 1993 EP
0570660 Nov 1993 EP
0863453 Sep 1998 EP
1245023 Oct 2002 EP
2 109 295 Oct 2009 EP
2293667 Apr 1996 GB
06 019965 Jan 1994 JP
2001 125896 May 2001 JP
2002 024212 Jan 2002 JP
2003517158 May 2003 JP
2009 036999 Feb 2009 JP
10-2007-0057496 Jun 2007 KR
10-0776800 Nov 2007 KR
10-2008-001227 Feb 2008 KR
10-0810500 Mar 2008 KR
10 2008 10932 Dec 2008 KR
10 2009 08680 Aug 2009 KR
10-0920267 Oct 2009 KR
10-2010-0032792 Apr 2010 KR
10 2011 01134 Oct 2011 KR
WO 9502221 Jan 1995 WO
WO 9726612 Jul 1997 WO
WO 9841956 Sep 1998 WO
WO 9901834 Jan 1999 WO
WO 9908238 Feb 1999 WO
WO 9956227 Nov 1999 WO
WO 0060435 Oct 2000 WO
WO 0060435 Oct 2000 WO
WO 02073603 Sep 2002 WO
WO 2006129967 Dec 2006 WO
WO 2008085742 Jul 2008 WO
WO 2008109835 Sep 2008 WO
WO 2011088053 Jul 2011 WO
Non-Patent Literature Citations (397)
Entry
Bussler, C., et al., “Web Service Execution Environment (WSMX),” Jun. 3, 2005, W3C Member Submission, http://www.w3.org/Submission/WSMX, 29 pages.
Cheyer, A., “About Adam Cheyer,” Sep. 17, 2012, http://www.adam.cheyer.com/about.html, 2 pages.
Cheyer, A., “A Perspective on AI & Agent Technologies for SCM,” VerticalNet, 2001 presentation, 22 pages.
Domingue, J., et al., “Web Service Modeling Ontology (WSMO)—An Ontology for Semantic Web Services,” Jun. 9-10, 2005, position paper at the W3C Workshop on Frameworks for Semantics in Web Services, Innsbruck, Austria, 6 pages.
Guzzoni, D., et al., “A Unified Platform for Building Intelligent Web Interaction Assistants,” Proceedings of the 2006 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, Computer Society, 4 pages.
Roddy, D., et al., “Communication and Collaboration in a Landscape of B2B eMarketplaces,” VerticalNet Solutions, white paper, Jun. 15, 2000, 23 pages.
Elio, R. et al., “On Abstract Task Models and Conversation Policies,” http://webdocs.cs.ualberta.ca/˜ree/publications/papers2/ATS.AA99.pdf, May 1999, 10 pages.
Rice, J., et al., “Using the Web Instead of a Window System,” Knowledge Systems Laboratory, Stanford University, (http://tomgruber.org/writing/ksl-95-69.pdf, Sep. 1995.) CHI '96 Proceedings: Conference on Human Factors in Computing Systems, Apr. 13-18, 1996, Vancouver, BC, Canada, 14 pages.
Glass, J., et al., "Multilingual Spoken-Language Understanding in the MIT Voyager System," Aug. 1995, http://groups.csail.mit.edu/sls/publications/1995/speechcomm95-voyager.pdf, 29 pages.
Goddeau, D., et al., “A Form-Based Dialogue Manager for Spoken Language Applications,” Oct. 1996, http://phasedance.com/pdf/icslp96.pdf, 4 pages.
Goddeau, D., et al., “Galaxy: A Human-Language Interface to On-Line Travel Information,” 1994 International Conference on Spoken Language Processing, Sep. 18-22, 1994, Pacific Convention Plaza Yokohama, Japan, 6 pages.
Meng, H., et al., "Wheels: A Conversational System in the Automobile Classified Domain," Oct. 1996, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.16.3022, 4 pages.
Phoenix Solutions, Inc. v. West Interactive Corp., Document 40, Declaration of Christopher Schmandt Regarding the MIT Galaxy System dated Jul. 2, 2010, 162 pages.
Seneff, S., et al., “A New Restaurant Guide Conversational System: Issues in Rapid Prototyping for Specialized Domains,” Oct. 1996, citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.16...rep..., 4 pages.
Vlingo InCar, “Distracted Driving Solution with Vlingo InCar,” 2:38 minute video uploaded to YouTube by Vlingo Voice on Oct. 6, 2010, http://www.youtube.com/watch?v=Vqs8XfXxgz4, 2 pages.
Zue, V., “Conversational Interfaces: Advances and Challenges,” Sep. 1997, http://www.cs.cmu.edu/˜dod/papers/zue97.pdf, 10 pages.
Zue, V. W., “Toward Systems that Understand Spoken Language,” Feb. 1994, ARPA Strategic Computing Institute, ©1994 IEEE, 9 pages.
Alfred App, 2011, http://www.alfredapp.com/, 5 pages.
Ambite, JL., et al., “Design and Implementation of the CALO Query Manager,” Copyright © 2006, American Association for Artificial Intelligence, (www.aaai.org), 8 pages.
Ambite, JL., et al., "Integration of Heterogeneous Knowledge Sources in the CALO Query Manager," 2005, The 4th International Conference on Ontologies, DataBases, and Applications of Semantics (ODBASE), Agia Napa, Cyprus, http://www.isi.edu/people/ambite/publications/integration_heterogeneous_knowledge_sources_calo_query_manager, 18 pages.
Belvin, R. et al., “Development of the HRL Route Navigation Dialogue System,” 2001, In Proceedings of the First International Conference on Human Language Technology Research, Paper, Copyright © 2001 HRL Laboratories, LLC, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.10.6538, 5 pages.
Berry, P. M., et al. “PTIME: Personalized Assistance for Calendaring,” ACM Transactions on Intelligent Systems and Technology, vol. 2, No. 4, Article 40, Publication date: Jul. 2011, 40:1-22, 22 pages.
Butcher, M., “EVI arrives in town to go toe-to-toe with Siri,” Jan. 23, 2012, http://techcrunch.com/2012/01/23/evi-arrives-in-town-to-go-toe-to-toe-with-siri/, 2 pages.
Chen, Y., “Multimedia Siri Finds and Plays Whatever You Ask For,” Feb. 9, 2012, http://www.psfk.com/2012/02/multimedia-siri.html, 9 pages.
Cheyer, A. et al., “Spoken Language and Multimodal Applications for Electronic Realties,” © Springer-Verlag London Ltd, Virtual Reality 1999, 3:1-15, 15 pages.
Cutkosky, M. R. et al., “PACT: An Experiment in Integrating Concurrent Engineering Systems,” Journal, Computer, vol. 26 Issue 1, Jan. 1993, IEEE Computer Society Press Los Alamitos, CA, USA, http://dl.acm.org/citation.cfm?id=165320, 14 pages.
Ericsson, S. et al., “Software illustrating a unified approach to multimodality and multilinguality in the in-home domain,” Dec. 22, 2006, Talk and Look: Tools for Ambient Linguistic Knowledge, http://www.talk-project.eurice.eu/fileadmin/talk/publications—public/deliverables—public/D1—6.pdf, 127 pages.
Evi, “Meet Evi: the one mobile app that provides solutions for your everyday problems,” Feb. 8, 2012, http://www.evi.com/, 3 pages.
Feigenbaum, E., et al., “Computer-assisted Semantic Annotation of Scientific Life Works,” 2007, http://tomgruber.org/writing/stanford-cs300.pdf, 22 pages.
Gannes, L., “Alfred App Gives Personalized Restaurant Recommendations,” allthingsd.com, Jul. 18, 2011, http://alithingsd.com/20110718/alfred-app-gives-personalized-restaurant-recommendations/, 3 pages.
Gautier, P. O., et al. “Generating Explanations of Device Behavior Using Compositional Modeling and Causal Ordering,” 1993, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.42.8394, 9 pages.
Gervasio, M. T., et al., Active Preference Learning for Personalized Calendar Scheduling Assistancae, Copyright © 2005, http://www.ai.sri.com/˜gervasio/pubs/gervasio-iui05.pdf, 8 pages.
Glass, A., “Explaining Preference Learning,” 2006, http://cs229.stanford.edu/proj2006/Glass-ExplainingPreferenceLearning.pdf, 5 pages.
Gruber, T. R., et al., “An Ontology for Engineering Mathematics,” In Jon Doyle, Piero Torasso, & Erik Sandewall, Eds., Fourth International Conference on Principles of Knowledge Representation and Reasoning, Gustav Stresemann Institut, Bonn, Germany, Morgan Kaufmann, 1994, http://www-ksl.stanford.edu/knowledge-sharing/papers/engmath.html, 22 pages.
Gruber, T. R., “A Translation Approach to Portable Ontology Specifications,” Knowledge Systems Laboratory, Stanford University, Sep. 1992, Technical Report KSL 92-71, Revised Apr. 1993, 27 pages.
Gruber, T. R., “Automated Knowledge Acquisition for Strategic Knowledge,” Knowledge Systems Laboratory, Machine Learning, 4, 293-336 (1989), 44 pages.
Gruber, T. R., “(Avoiding) the Travesty of the Commons,” Presentation at NPUC 2006, New Paradigms for User Computing, IBM Almaden Research Center, Jul. 24, 2006. http://tomgruber.org/writing/avoiding-travestry.htm, 52 pages.
Gruber, T. R., “Big Think Small Screen: How semantic computing in the cloud will revolutionize the consumer experience on the phone,” Keynote presentation at Web 3.0 conference, Jan. 27, 2010, http://tomgruber.org/writing/web30jan2010.htm, 41 pages.
Gruber, T. R., “Collaborating around Shared Content on the WWW,” W3C Workshop on WWW and Collaboration, Cambridge, MA, Sep. 11, 1995, http://www.w3.org/Collaboration/Workshop/Proceedings/P9.html, 1 page.
Gruber, T. R., “Collective Knowledge Systems: Where the Social Web meets the Semantic Web,” Web Semantics: Science, Services and Agents on the World Wide Web (2007), doi:10.1016/j.websem.2007.11.011, keynote presentation given at the 5th International Semantic Web Conference, Nov. 7, 2006, 19 pages.
Gruber, T. R., “Where the Social Web meets the Semantic Web,” Presentation at the 5th International Semantic Web Conference, Nov. 7, 2006, 38 pages.
Gruber, T. R., “Despite our Best Efforts, Ontologies are not the Problem,” AAAI Spring Symposium, Mar. 2008, http://tomgruber.org/writing/aaai-ss08.htm, 40 pages.
Gruber, T. R., “Enterprise Collaboration Management with Intraspect,” Intraspect Software, Inc., Instraspect Technical White Paper Jul. 2001, 24 pages.
Gruber, T. R., “Every ontology is a treaty—a social agreement—among people with some common motive in sharing,” Interview by Dr. Miltiadis D. Lytras, Official Quarterly Bulletin of AIS Special Interest Group on Semantic Web and Information Systems, vol. 1, Issue 3, 2004, http://www.sigsemis.org 1, 5 pages.
Gruber, T. R., et al., “Generative Design Rationale: Beyond the Record and Replay Paradigm,” Knowledge Systems Laboratory, Stanford University, Dec. 1991, Technical Report KSL 92-59, Updated Feb. 1993, 24 pages.
Gruber, T. R., “Helping Organizations Collaborate, Communicate, and Learn,” Presentation to NASA Ames Research, Mountain View, CA, Mar. 2003, http://tomgruber.org/writing/organizational-intelligence-talk.htm, 30 pages.
Gruber, T. R., “Intelligence at the Interface: Semantic Technology and the Consumer Internet Experience,” Presentation at Semantic Technologies conference (SemTech08), May 20, 2008, http://tomgruber.org/writing.htm, 40 pages.
Gruber, T. R., “Interactive Acquisition of Justifications: Learning “Why” by Being Told What” Knowledge Systems Laboratory, Stanford University, Oct. 1990, Technical Report KSL 91-17, Revised Feb. 1991, 24 pages.
Gruber, T. R., “It Is What It Does: The Pragmatics of Ontology for Knowledge Sharing,” (c) 2000, 2003, http://www.cidoc-crm.org/docs/symposium—presentations/gruber—cidoc-ontology-2003.pdf, 21 pages.
Gruber, T. R., et al., “Machine-generated Explanations of Engineering Models: A Compositional Modeling Approach,” (1993) In Proc. International Joint Conference on Artificial Intelligence, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.34.930, 7 pages.
Gruber, T. R., “2021: Mass Collaboration and the Really New Economy,” TNTY Futures, the newsletter of The Next Twenty Years series, vol. 1, Issue 6, Aug. 2001, http://www.tnty.com/newsletter/futures/archive/v01-05business.html, 5 pages.
Gruber, T. R., et al.,“Nike: A National Infrastructure for Knowledge Exchange,” Oct. 1994, http://www.eit.com/papers/nike/nike.html and nike.ps, 10 pages.
Gruber, T. R., “Ontologies, Web 2.0 and Beyond,” Apr. 24, 2007, Ontology Summit 2007, http://tomgruber.org/writing/ontolog-social-web-keynote.pdf, 17 pages.
Gruber, T. R., “Ontology of Folksonomy: A Mash-up of Apples and Oranges,” Originally published to the web in 2005, Int'l Journal on Semantic Web & Information Systems, 3(2), 2007, 7 pages.
Gruber, T. R., “Siri, a Virtual Personal Assistant—Bringing Intelligence to the Interface,” Jun. 16, 2009, Keynote presentation at Semantic Technologies conference, Jun. 2009. http://tomgruber.org/writing/semtech09.htm, 22 pages.
Gruber, T. R., “TagOntology,” Presentation to Tag Camp, www.tagcamp.org, Oct. 29, 2005, 20 pages.
Gruber, T. R., et al., “Toward a Knowledge Medium for Collaborative Product Development,” In Artificial Intelligence in Design 1992, from Proceedings of the Second International Conference on Artificial Intelligence in Design, Pittsburgh, USA, Jun. 22-25, 1992, 19 pages.
Gruber, T. R., “Toward Principles for the Design of Ontologies Used for Knowledge Sharing,” In International Journal Human-Computer Studies 43, p. 907-928, substantial revision of paper presented at the International Workshop on Formal Ontology, Mar. 1993, Padova, Italy, available as Technical Report KSL 93-04, Knowledge Systems Laboratory, Stanford University, further revised Aug. 23, 1993, 23 pages.
Guzzoni, D., et al., “Active, A Platform for Building Intelligent Operating Rooms,” Surgetica 2007 Computer-Aided Medical Interventions: tools and applications, pp. 191-198, Paris, 2007, Sauramps Médical, http://Isro.epfl.ch/page-68384-en.html, 8 pages.
Guzzoni, D., et al., “Active, A Tool for Building Intelligent User Interfaces,” ASC 2007, Palma de Mallorca, http://Isro.epfl.ch/page-34241.html, 6 pages.
Guzzoni, D., et al., “Modeling Human-Agent Interaction with Active Ontologies,” 2007, AAAI Spring Symposium, Interaction Challenges for Intelligent Assistants, Stanford University, Palo Alto, California, 8 pages.
Hardawar, D., “Driving app Waze builds its own Siri for hands-free voice control,” Feb. 9, 2012, http://venturebeat.com/2012/02/09/driving-app-waze-builds-its-own-siri-for-hands-free-voice-control/, 4 pages.
Intraspect Software, “The Intraspect Knowledge Management Solution: Technical Overview,” http://tomgruber.org/writing/intraspect-whitepaper-1998.pdf, 18 pages.
Julia, L., et al., Un éditeur interactif de tableaux dessinés à main levée (An Interactive Editor for Hand-Sketched Tables), Traitement du Signal 1995, vol. 12, No. 6, 8 pages.
Karp, P. D., “A Generic Knowledge-Base Access Protocol,” May 12, 1994, http://lecture.cs.buu.ac.th/˜f50353/Document/gfp.pdf, 66 pages.
Lemon, O., et al., “Multithreaded Context for Robust Conversational Interfaces: Context-Sensitive Speech Recognition and Interpretation of Corrective Fragments,” Sep. 2004, ACM Transactions on Computer-Human Interaction, vol. 11, No. 3, 27 pages.
Leong, L., et al., “CASIS: A Context-Aware Speech Interface System,” IUI'05, Jan. 9-12, 2005, Proceedings of the 10th international conference on Intelligent user interfaces, San Diego, California, USA, 8 pages.
Lieberman, H., et al., “Out of context: Computer systems that adapt to, and learn from, context,” 2000, IBM Systems Journal, vol. 39, Nos. 3/4, 2000, 16 pages.
Lin, B., et al., “A Distributed Architecture for Cooperative Spoken Dialogue Agents with Coherent Dialogue State and History,” 1999, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.42.272, 4 pages.
McGuire, J., et al., “SHADE: Technology for Knowledge-Based Collaborative Engineering,” 1993, Journal of Concurrent Engineering: Applications and Research (CERA), 18 pages.
Milward, D., et al., “D2.2: Dynamic Multimodal Interface Reconfiguration, Talk and Look: Tools for Ambient Linguistic Knowledge,” Aug. 8, 2006, http://www.ihmc.us/users/nblaylock/Pubs/Files/talk—d2.2.pdf, 69 pages.
Mitra, P., et al., “A Graph-Oriented Model for Articulation of Ontology Interdependencies,” 2000, http://ilpubs.stanford.edu:8090/442/1/2000-20.pdf, 15 pages.
Moran, D. B., et al., “Multimodal User Interfaces in the Open Agent Architecture,” Proc. of the 1997 International Conference on Intelligent User Interfaces (IUI97), 8 pages.
Mozer, M., “An Intelligent Environment Must be Adaptive,” Mar./Apr. 1999, IEEE Intelligent Systems, 3 pages.
Mühlhäuser, M., “Context Aware Voice User Interfaces for Workflow Support,” Darmstadt 2007, http://tuprints.ulb.tu-darmstadt.de/876/1/PhD.pdf, 254 pages.
Naone, E., “TR10: Intelligent Software Assistant,” Mar.-Apr. 2009, Technology Review, http://www.technologyreview.com/printer—friendly—article.aspx?id=22117, 2 pages.
Neches, R., “Enabling Technology for Knowledge Sharing,” Fall 1991, AI Magazine, pp. 37-56, (21 pages).
Nöth, E., et al., “Verbmobil: The Use of Prosody in the Linguistic Components of a Speech Understanding System,” IEEE Transactions on Speech and Audio Processing, vol. 8, No. 5, Sep. 2000, 14 pages.
Rice, J., et al., “Monthly Program: Nov. 14, 1995,” The San Francisco Bay Area Chapter of ACM SIGCHI, http://www.baychi.org/calendar/19951114/, 2 pages.
Rivlin, Z., et al., “Maestro: Conductor of Multimedia Analysis Technologies,” 1999 SRI International, Communications of the Association for Computing Machinery (CACM), 7 pages.
Sheth, A., et al., “Relationships at the Heart of Semantic Web: Modeling, Discovering, and Exploiting Complex Semantic Relationships,” Oct. 13, 2002, Enhancing the Power of the Internet: Studies in Fuzziness and Soft Computing, SpringerVerlag, 38 pages.
Simonite, T., “One Easy Way to Make Siri Smarter,” Oct. 18, 2011, Technology Review, http://www.technologyreview.com/printer—friendly—article.aspx?id=38915, 2 pages.
Stent, A., et al., “The CommandTalk Spoken Dialogue System,” 1999, http://acl.ldc.upenn.edu/P/P99/P99-1024.pdf, 8 pages.
Tofel, K., et al., “SpeakTolt: A personal assistant for older iPhones, iPads,” Feb. 9, 2012, http://gigaonn.com/apple/speaktoit-siri-for-older-iphones-ipads/, 7 pages.
Tucker, J., “Too lazy to grab your TV remote? Use Siri instead,” Nov. 30, 2011, http://www.engadget.com/2011/11/30/too-lazy-to-grab-your-tv-remote-use-siri-instead/, 8 pages.
Tur, G., et al., “The CALO Meeting Speech Recognition and Understanding System,” 2008, Proc. IEEE Spoken Language Technology Workshop, 4 pages.
Tur, G., et al., “The-CALO-Meeting-Assistant System,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 18, No. 6, Aug. 2010, 11 pages.
Vlingo, “Vlingo Launches Voice Enablement Application on Apple App Store,” Vlingo press release dated Dec. 3, 2008, 2 pages.
YouTube, "Knowledge Navigator," 5:34 minute video uploaded to YouTube by Knownav on Apr. 29, 2008, http://www.youtube.com/watch?v=QRH8eimU_20 on Aug. 3, 2006, 1 page.
YouTube,“Send Text, Listen to and Send E-Mail ‘By Voice’ www.voiceassist.com,” 2:11 minute video uploaded to YouTube by VoiceAssist on Jul. 30, 2009, http://www.youtube.com/watch?v=0tEU61nHHA4, 1 page.
YouTube,“Text'nDrive App Demo—Listen and Reply to your Messages by Voice while Driving!,” 1:57 minute video uploaded to YouTube by TextnDrive on Apr. 27, 2010, http://www.youtube.com/watch?v=WaGfzoHsAMw, 1 page.
YouTube, “Voice on the Go (BlackBerry),” 2:51 minute video uploaded to YouTube by VoiceOnTheGo on Jul. 27, 2009, http://www.youtube.com/watch?v=pJqpWgQS98w, 1 page.
International Search Report and Written Opinion dated Nov. 29, 2011, received in International Application No. PCT/US2011/20861, which corresponds to U.S. Appl. No. 12/987,982, 15 pages (Thomas Robert Gruber).
Acero, A., et al., “Environmental Robustness in Automatic Speech Recognition,” International Conference on Acoustics, Speech, and Signal Processing (ICASSP'90), Apr. 3-6, 1990, 4 pages.
Acero, A., et al., “Robust Speech Recognition by Normalization of the Acoustic Space,” International Conference on Acoustics, Speech, and Signal Processing, 1991, 4 pages.
Ahlbom, G., et al., “Modeling Spectral Speech Transitions Using Temporal Decomposition Techniques,” IEEE International Conference of Acoustics, Speech, and Signal Processing (ICASSP'87), Apr. 1987, vol. 12, 4 pages.
Aikawa, K., “Speech Recognition Using Time-Warping Neural Networks,” Proceedings of the 1991 IEEE Workshop on Neural Networks for Signal Processing, Sep. 30 to Oct. 1, 1991, 10 pages.
Anastasakos, A., et al., “Duration Modeling in Large Vocabulary Speech Recognition,” International Conference on Acoustics, Speech, and Signal Processing (ICASSP'95), May 9-12, 1995, 4 pages.
Anderson, R. H., “Syntax-Directed Recognition of Hand-Printed Two-Dimensional Mathematics,” In Proceedings of Symposium on Interactive Systems for Experimental Applied Mathematics: Proceedings of the Association for Computing Machinery Inc. Symposium, © 1967, 12 pages.
Ansari, R., et al., “Pitch Modification of Speech using a Low-Sensitivity Inverse Filter Approach,” IEEE Signal Processing Letters, vol. 5, No. 3, Mar. 1998, 3 pages.
Anthony, N. J., et al., “Supervised Adaption for Signature Verification System,” Jun. 1, 1978, IBM Technical Disclosure, 3 pages.
Apple Computer, “Guide Maker User's Guide,” © Apple Computer, Inc., Apr. 27, 1994, 8 pages.
Apple Computer, “Introduction to Apple Guide,” © Apple Computer, Inc., Apr. 28, 1994, 20 pages.
Asanović, K., et al., “Experimental Determination of Precision Requirements for Back-Propagation Training of Artificial Neural Networks,” In Proceedings of the 2nd International Conference of Microelectronics for Neural Networks, 1991, www.ICSI.Berkelev.EDU, 7 pages.
Atal, B. S., “Efficient Coding of LPC Parameters by Temporal Decomposition,” IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'83), Apr. 1983, 4 pages.
Bahl, L. R., et al., “Acoustic Markov Models Used in the Tangora Speech Recognition System,” In Proceeding of International Conference on Acoustics, Speech, and Signal Processing (ICASSP'88), Apr. 11-14, 1988, vol. 1, 4 pages.
Bahl, L. R., et al., “A Maximum Likelihood Approach to Continuous Speech Recognition,” IEEE Transaction on Pattern Analysis and Machine Intelligence, vol. PAMI-5, No. 2, Mar. 1983, 13 pages.
Bahl, L. R., et al., “A Tree-Based Statistical Language Model for Natural Language Speech Recognition,” IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 37, Issue 7, Jul. 1989, 8 pages.
Bahl, L. R., et al., “Large Vocabulary Natural Language Continuous Speech Recognition,” In Proceedings of 1989 International Conference on Acoustics, Speech, and Signal Processing, May 23-26, 1989, vol. 1, 6 pages.
Bahl, L. R., et al, “Multonic Markov Word Models for Large Vocabulary Continuous Speech Recognition,” IEEE Transactions on Speech and Audio Processing, vol. 1, No. 3, Jul. 1993, 11 pages.
Bahl, L. R., et al., “Speech Recognition with Continuous-Parameter Hidden Markov Models,” In Proceeding of International Conference on Acoustics, Speech, and Signal Processing (ICASSP'88), Apr. 11-14, 1988, vol. 1, 8 pages.
Banbrook, M., “Nonlinear Analysis of Speech from a Synthesis Perspective,” A thesis submitted for the degree of Doctor of Philosophy, The University of Edinburgh, Oct. 15, 1996, 35 pages.
Belaid, A., et al., “A Syntactic Approach for Handwritten Mathematical Formula Recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-6, No. 1, Jan. 1984, 7 pages.
Bellegarda, E. J., et al., “On-Line Handwriting Recognition Using Statistical Mixtures,” Advances in Handwriting and Drawings: A Multidisciplinary Approach, Europia, 6th International IGS Conference on Handwriting and Drawing, Paris-France, Jul. 1993, 11 pages.
Bellegarda, J. R., “A Latent Semantic Analysis Framework for Large-Span Language Modeling,” 5th European Conference on Speech, Communication and Technology, (EUROSPEECH'97), Sep. 22-25, 1997, 4 pages.
Bellegarda, J. R., “A Multispan Language Modeling Framework for Large Vocabulary Speech Recognition,” IEEE Transactions on Speech and Audio Processing, vol. 6, No. 5, Sep. 1998, 12 pages.
Bellegarda, J. R., et al., “A Novel Word Clustering Algorithm Based on Latent Semantic Analysis,” In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'96), vol. 1, 4 pages.
Bellegarda, J. R., et al., “Experiments Using Data Augmentation for Speaker Adaptation,” International Conference on Acoustics, Speech, and Signal Processing (ICASSP'95), May 9-12, 1995, 4 pages.
Bellegarda, J. R., “Exploiting Both Local and Global Constraints for Multi-Span Statistical Language Modeling,” Proceeding of the 1998 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'98), vol. 2, May 12-15, 1998, 5 pages.
Bellegarda, J. R., “Exploiting Latent Semantic Information in Statistical Language Modeling,” In Proceedings of the IEEE, Aug. 2000, vol. 88, No. 8, 18 pages.
Bellegarda, J. R., “Interaction-Driven Speech Input—A Data-Driven Approach to the Capture of Both Local and Global Language Constraints,” 1992, 7 pages, available at http://old.sigchi.org/bulletin/1998.2/bellegarda.html.
Bellegarda, J. R., “Large Vocabulary Speech Recognition with Multispan Statistical Language Models,” IEEE Transactions on Speech and Audio Processing, vol. 8, No. 1, Jan. 2000, 9 pages.
Bellegarda, J. R., et al., “Performance of the IBM Large Vocabulary Continuous Speech Recognition System on the ARPA Wall Street Journal Task,” Signal Processing VII: Theories and Applications, © 1994 European Association for Signal Processing, 4 pages.
Bellegarda, J. R., et al., “The Metamorphic Algorithm: A Speaker Mapping Approach to Data Augmentation,” IEEE Transactions on Speech and Audio Processing, vol. 2, No. 3, Jul. 1994, 8 pages.
Black, A. W., et al., “Automatically Clustering Similar Units for Unit Selection in Speech Synthesis,” In Proceedings of Eurospeech 1997, vol. 2, 4 pages.
Blair, D. C., et al., “An Evaluation of Retrieval Effectiveness for a Full-Text Document-Retrieval System,” Communications of the ACM, vol. 28, No. 3, Mar. 1985, 11 pages.
Briner, L. L., “Identifying Keywords in Text Data Processing,” In Zelkowitz, Marvin V., ED, Directions and Challenges,15th Annual Technical Symposium, Jun. 17, 1976, Gaithersbury, Maryland, 7 pages.
Bulyko, I., et al., “Joint Prosody Prediction and Unit Selection for Concatenative Speech Synthesis,” Electrical Engineering Department, University of Washington, Seattle, 2001, 4 pages.
Bussey, H. E., et al., “Service Architecture, Prototype Description, and Network Implications of a Personalized Information Grazing Service,” INFOCOM'90, Ninth Annual Joint Conference of the IEEE Computer and Communication Societies, Jun. 3-7, 1990, http://slrohall.com/publications/, 8 pages.
Buzo, A., et al., “Speech Coding Based Upon Vector Quantization,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. Assp-28, No. 5, Oct. 1980, 13 pages.
Caminero-Gil, J., et al., “Data-Driven Discourse Modeling for Semantic Interpretation,” In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, May 7-10, 1996, 6 pages.
Cawley, G. C., “The Application of Neural Networks to Phonetic Modelling,” PhD Thesis, University of Essex, Mar. 1996, 13 pages.
Chang, S., et al., “A Segment-based Speech Recognition System for Isolated Mandarin Syllables,” Proceedings TENCON '93, IEEE Region 10 conference on Computer, Communication, Control and Power Engineering, Oct. 19-21, 1993, vol. 3, 6 pages.
Conklin, J., “Hypertext: An Introduction and Survey,” Computer Magazine, Sep. 1987, 25 pages.
Connolly, F. T., et al., “Fast Algorithms for Complex Matrix Multiplication Using Surrogates,” IEEE Transactions on Acoustics, Speech, and Signal Processing, Jun. 1989, vol. 37, No. 6, 13 pages.
Deerwester, S., et al., “Indexing by Latent Semantic Analysis,” Journal of the American Society for Information Science, vol. 41, No. 6, Sep. 1990, 19 pages.
Deller, Jr., J. R., et al., “Discrete-Time Processing of Speech Signals,” © 1987 Prentice Hall, ISBN: 0-02-328301-7, 14 pages.
Digital Equipment Corporation, “Open VMS Software Overview,” Dec. 1995, software manual, 159 pages.
Donovan, R. E., “A New Distance Measure for Costing Spectral Discontinuities in Concatenative Speech Synthesisers,” 2001, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.21.6398, 4 pages.
Frisse, M. E., “Searching for Information in a Hypertext Medical Handbook,” Communications of the ACM, vol. 31, No. 7, Jul. 1988, 8 pages.
Related Publications (1)
Number: 20110208524 A1
Date: Aug 2011
Country: US