This is directed to processing received voice inputs by identifying an instruction likely to be provided by the user speaking the input. In particular, this is directed to identifying the user providing a voice input and processing the voice input using a subset of resources.
Many electronic devices provide a significant number of features or operations accessible to a user. The number of available features or operations may often exceed the number of inputs available using an input interface of the electronic device. To allow users to access electronic device operations that are not specifically tied to particular inputs (e.g., inputs not associated with a key sequence or button press, such as a MENU button on an iPod, available from Apple Inc.), the electronic device may provide menus with selectable options, where the options are associated with electronic device operations. For example, an electronic device may display a menu with selectable options on a display, for example in response to receiving an input associated with the menu from an input interface (e.g., a MENU button).
Because the menu is typically displayed on an electronic device display, a user may be required to look at the display to select a particular option. This may sometimes not be desirable. For example, if a user desires to conserve power (e.g., in a portable electronic device), requiring the electronic device to display a menu and move a highlight region navigated by the user to provide a selection may consume power. As another example, if a user is in a dark environment and the display does not include backlighting, the user may not be able to distinguish displayed options of the menu. As still another example, if a user is blind or visually impaired, the user may not be able to view a displayed menu.
To overcome this issue, some systems may allow users to provide instructions by voice. In particular, the electronic device can include audio input circuitry for detecting words spoken by a user. Processing circuitry of the device can then process the words to identify a corresponding instruction to the electronic device, and execute the corresponding instruction. To process received voice inputs, the electronic device can include a library of words to which the device can compare the received voice input, and from which the device can extract the corresponding instruction.
In some cases, however, the size of the word library can be so large that it may be prohibitive to process voice inputs, and in particular time- and resource-prohibitive to process long voice inputs. In addition, the electronic device can require significant resources to parse complex instructions that include several variables provided as part of the voice instruction (e.g., an instruction that includes several filter values for selecting a subset of media items available for playback by the electronic device).
This is directed to systems and methods for identifying a user providing a voice input, and processing the input to identify a corresponding instruction based on the user's identity. In particular, this is directed to processing a received voice input using a subset of library terms selected based on the identified user.
An electronic device can receive a voice input for directing the device to perform one or more operations. The device can then process the received input by comparing the received input signal with words from a library. To reduce the processing load for a received voice input, the electronic device can limit the size of the library to which the voice input is compared (e.g., the number of library words) based on the identity of the user providing the input.
The electronic device can identify the user using any suitable approach. For example, the electronic device can identify a user from the content of an input provided by the user (e.g., a user name and password). As another example, the electronic device can identify a user by the type of interaction of the user with the device (e.g., the particular operations the user directs the device to perform). As still another example, the electronic device can identify a user based on biometric information (e.g., a voice print). Once the user has been identified, the electronic device can determine the user's interests and define the library subset based on those interests. For example, the subset can include words corresponding to metadata related to content selected by the user for storage on the device (e.g., transferred media items) or content added to the device by the user (e.g., the content of messages sent by the user). As another example, the subset can include words corresponding to application operations that the user is likely to use (e.g., words relating to media playback instructions).
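By way of illustration only, the sketch below models this overall flow in simplified form: a known user is mapped to a subset of library words, and a recognized input is matched only against that subset. The user names, word sets, and function names are assumptions for illustration, not the claimed implementation.

```python
# Hypothetical sketch: restrict voice-input matching to a per-user library subset.

DEFAULT_LIBRARY = {"play", "pause", "next", "previous", "call", "shuffle"}

# Assumed per-user data: words derived from stored content and frequently
# used operations; names and contents are illustrative only.
USER_SUBSETS = {
    "alice": DEFAULT_LIBRARY | {"mika", "genius", "playlist"},
    "bob": DEFAULT_LIBRARY | {"podcast", "directions", "home"},
}

def words_for_user(user_id: str) -> set[str]:
    """Return the library subset for a known user, or the full default set."""
    return USER_SUBSETS.get(user_id, DEFAULT_LIBRARY)

def match_input(user_id: str, recognized_words: list[str]) -> list[str]:
    """Keep only recognized words that appear in the user's subset."""
    subset = words_for_user(user_id)
    return [w for w in recognized_words if w.lower() in subset]

print(match_input("alice", ["Play", "Mika"]))  # ['Play', 'Mika'] -> instruction candidates
```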
In response to identifying the words of a particular voice input, the electronic device can identify one or more instructions that correspond to the voice input. The instructions can then be passed on to appropriate circuitry of the electronic device for the device to perform an operation corresponding to the instruction. In some embodiments, the instruction can identify a particular device operation and a variable or argument characterizing the operation.
The above and other features of the present invention, its nature, and various advantages will be more apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings.
An electronic device is operative to receive voice inputs provided by a user to control electronic device operations. In particular, an electronic device is operative to receive and process voice inputs to identify words spoken by the user, and to determine an instruction for performing a device operation corresponding to the identified words.
The electronic device can include a processor and an input interface that includes audio input circuitry. Using the audio input circuitry, a user can provide voice inputs to the device for directing the device to perform one or more operations. The voice inputs can have any suitable form, including for example pre-defined strings corresponding to specific instructions (e.g., “play artist Mika”), arbitrary or natural language instructions (e.g., “pick something good”), or combinations of these.
The electronic device can parse a received voice input to identify the words of the input. In particular, the electronic device can compare words of the received input with a library of words. In the context of an electronic device used to play back media items, the number of words in the library can be significant (e.g., including the artist names, album names and track names of media items in a user's media library). Comparing the voice input to an entire word library can take a significant amount of time, so it may be beneficial to reduce the portion of the library to which the voice input is compared. In some embodiments, one or more subsets can be defined in the voice library based on the identity of the user providing the voice input.
The electronic device can define, for each user, a preference profile or other information describing the user's interests, the particular manner in which the user typically interacts with the device, or both. For example, the profile can include information identifying the types of media items played back by the user, applications used by the user, and typical playback behavior (e.g., selecting a playlist and rarely interacting with the device, or regularly changing the played-back media item). As another example, the profile can include information regarding the types of media items that the user typically plays back or does not play back. Using the profile information, the electronic device can define a subset of library words that relate to the profile, and initially limit or reduce the processing of a received voice command to the defined subset of library words.
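The following is a minimal sketch, assuming a simple profile built from playback feedback, of how a preference profile could yield user-specific library words. The weighting scheme and field names are illustrative assumptions.

```python
# Illustrative sketch only: build a simple preference profile from playback
# feedback and derive candidate library words from it.

from collections import Counter
from dataclasses import dataclass, field

@dataclass
class PreferenceProfile:
    genre_counts: Counter = field(default_factory=Counter)
    artist_counts: Counter = field(default_factory=Counter)

    def record_playback(self, artist: str, genre: str, skipped: bool) -> None:
        # Skipped items contribute less weight than items played to completion.
        weight = 1 if skipped else 3
        self.genre_counts[genre] += weight
        self.artist_counts[artist] += weight

    def library_words(self, top_n: int = 5) -> set[str]:
        """Words (artists, genres) most associated with this user's listening."""
        top_genres = [g for g, _ in self.genre_counts.most_common(top_n)]
        top_artists = [a for a, _ in self.artist_counts.most_common(top_n)]
        return {w.lower() for w in top_genres + top_artists}

profile = PreferenceProfile()
profile.record_playback("Mika", "pop", skipped=False)
profile.record_playback("Some Band", "metal", skipped=True)
print(profile.library_words())  # e.g. {'mika', 'pop', 'some band', 'metal'}
```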
The electronic device can identify the user using any suitable approach. In some embodiments, the electronic device can identify the user based on a particular input of the user (e.g., the entry of a username or password), from attributes of the entry (e.g., a voice print of the voice input), biometric information detected by the device, or any other suitable approach.
Processor 102 may include any processing circuitry or control circuitry operative to control the operations and performance of electronic device 100. For example, processor 102 may be used to run operating system applications, firmware applications, media playback applications, media editing applications, or any other application. In some embodiments, a processor may drive a display and process inputs received from a user interface.
Storage 104 may include, for example, one or more storage mediums including a hard-drive, solid state drive, flash memory, permanent memory such as ROM, any other suitable type of storage component, or any combination thereof. Storage 104 may store, for example, media data (e.g., music and video files), application data (e.g., for implementing functions on device 100), firmware, user preference information (e.g., media playback preferences), authentication information (e.g. libraries of data associated with authorized users), lifestyle information (e.g., food preferences), exercise information (e.g., information obtained by exercise monitoring equipment), transaction information (e.g., information such as credit card information), wireless connection information (e.g., information that may enable electronic device 100 to establish a wireless connection), subscription information (e.g., information that keeps track of podcasts or television shows or other media a user subscribes to), contact information (e.g., telephone numbers and email addresses), calendar information, and any other suitable data or any combination thereof.
Memory 106 can include cache memory, semi-permanent memory such as RAM, and/or one or more different types of memory used for temporarily storing data. In some embodiments, memory 106 can also be used for storing data used to operate electronic device applications, or any other type of data that may be stored in storage 104. In some embodiments, memory 106 and storage 104 may be combined as a single storage medium.
Input interface 108 may provide inputs to input/output circuitry of the electronic device. Input interface 108 may include any suitable input interface, such as for example, a button, keypad, dial, a click wheel, or a touch screen. In some embodiments, electronic device 100 may include a capacitive sensing mechanism, or a multi-touch capacitive sensing mechanism. In some embodiments, the input interface can include a microphone or other audio input interface for receiving a user's voice inputs. The input interface can include an analog-to-digital converter for converting received analog signals corresponding to a voice input to a digital signal that can be processed and analyzed to identify specific words or instructions.
Output interface 110 may include one or more interfaces for providing an audio output, visual output, or other type of output (e.g., odor, taste or haptic output). For example, output interface 110 can include one or more speakers (e.g., mono or stereo speakers) built into electronic device 100, or an audio connector (e.g., an audio jack or an appropriate Bluetooth connection) operative to be coupled to an audio output mechanism. Output interface 110 may be operative to provide audio data using a wired or wireless connection to a headset, headphones or earbuds. As another example, output interface 110 can include display circuitry (e.g., a screen or projection system) for providing a display visible to the user. The display can include a screen (e.g., an LCD screen) that is incorporated in electronic device 100, a movable display or a projecting system for providing a display of content on a surface remote from electronic device 100 (e.g., a video projector), or any other suitable display. Output interface 110 can interface with the input/output circuitry (not shown) to provide outputs to a user of the device.
In some embodiments, electronic device 100 may include a bus operative to provide a data transfer path for transferring data to, from, or between control processor 102, storage 104, memory 106, input interface 108, output interface 110, and any other component included in the electronic device.
A user can interact with the electronic device using any suitable approach. In some embodiments, the user can provide inputs using one or more fingers touching an input interface, such as a keyboard, button, mouse, or touch-sensitive surface. In some embodiments, a user can instead or in addition provide an input by shaking or moving the electronic device in a particular manner (e.g., such that a motion sensing component of the input interface detects the user movement). In some embodiments, a user can instead or in addition provide a voice input to the electronic device. For example, the user can speak into a microphone embedded in or connected to the electronic device.
The user can provide voice inputs to the electronic device at any suitable time. In some embodiments, the electronic device can continuously monitor for voice inputs (e.g., when the device is not in sleep mode, or at all times). In some embodiments, the electronic device can monitor for voice inputs in response to a user input or instruction to enter a voice input. For example, a user can select a button or option, or place the electronic device in such a manner that a sensor detects that the user wishes to provide a voice input (e.g., a proximity sensor detects that the user has brought the device up to the user's mouth). In some embodiments, the electronic device can monitor for user inputs when one or more particular applications or processes are running on the device. For example, the electronic device can monitor for voice inputs in a media playback application, a voice control application, a searching application, or any other suitable application.
In some embodiments, the electronic device can display one or more discreet elements on an existing electronic device display to indicate that the device is monitoring for voice inputs.
Voice inputs can include instructions for performing any suitable electronic device operation. In some embodiments, voice inputs can relate to a specific set or library of instructions that the device can detect. For example, the device can be limited to detecting particular keywords related to specific device operations, such as “play,” “call,” “dial,” “shuffle,” “next,” “previous,” or other keywords. In some cases, each keyword can be accompanied by one or more variables or arguments qualifying the particular keyword. For example, the voice input can be “call John's cell phone,” in which the keyword “call” is qualified by the phrase “John's cell phone,” which defines two variables for identifying the number to call (e.g., John and his cell phone). As another example, the voice input can be “play track 3 of 2005 album by the Plain White T's,” in which the keyword “play” is qualified by the phrase “track 3 of 2005 album by the Plain White T's.” This phrase has three variables for identifying a particular song to play back (e.g., artist Plain White T's, 2005 album, and track 3). As still another example, the phrase “shuffle then go next five times” can include two keywords, “shuffle” and “next,” as well as a qualifier for the “next” keyword (e.g., “five times”).
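A minimal sketch of this keyword-and-qualifier structure is shown below; the keyword set and parsing rule are assumptions for illustration, not the device's actual parser.

```python
# Assumed grammar: the first recognized keyword is the instruction, and the
# remaining words are the variables or arguments qualifying it.

KEYWORDS = {"play", "call", "dial", "shuffle", "next", "previous"}

def parse_instruction(words: list[str]) -> tuple[str | None, list[str]]:
    """Return (keyword, qualifier words) for the first keyword found."""
    for i, word in enumerate(words):
        if word.lower() in KEYWORDS:
            return word.lower(), words[i + 1:]
    return None, []

keyword, args = parse_instruction("call John's cell phone".split())
print(keyword, args)  # call ["John's", 'cell', 'phone']
```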
In some cases, the electronic device can detect and parse natural language voice inputs. For example, the electronic device can parse and process an input such as “find my most played song with a 4-star rating and create a Genius playlist using it as a seed.” This voice input can require significant processing to first identify the particular media item to serve as a seed for a new playlist (e.g., the most played song with a particular rating), and then determine the operation to perform based on that media item (e.g., create a playlist). As another example, a natural language voice input can include “pick a good song to add to a party mix.” This voice input can require identifying the device operation (e.g., add a song to a party mix) and finding an appropriate value or argument to provide to the device operation, where the value can be user-specific.
The voice input provided to the electronic device can therefore be complex, and require significant processing to first identify the individual words of the input before extracting an instruction from the input and executing a corresponding device operation. The electronic device can identify particular words of the voice input using any suitable approach, including for example by comparing detected words of the voice input to a library or dictionary of locally stored words. The library can include any suitable words, including for example a set of default or standard words that relate generally to the electronic device, its processes and operations, and characteristics of information used by the processes and operations of the device. For example, default words in the library can include terms relating to operations of one or more applications (e.g., play, pause, next, skip, call, hang up, go to, search for, start, turn off), terms related to information used by applications (e.g., star rating, genre, title, artist, album, name, play count, mobile phone, home phone, address, directions from, directions to), or other such words that may be used by any user of an electronic device.
In some embodiments, the library can instead or in addition include words that relate specifically to a user of the device. For example, the library can include words determined from metadata values of content or information stored by the user on the device. Such words can include, for example, titles, artists and album names of media items stored by a user on the device, genre, year and star rating values for one or more media items, contact names, streets, cities and countries, email addresses, or any other content that a user can store on the device that may be specific to a particular user. The electronic device can define a library using any suitable approach, including for example by augmenting a default library with words derived from the user-specific content stored on the device by the user.
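The sketch below illustrates, under assumed data, how a default command library might be augmented with words drawn from metadata of user-stored media items and contacts. The sample records and field names are hypothetical.

```python
# Hypothetical sketch: library = default command words plus user-specific words
# derived from metadata of content stored on the device.

DEFAULT_WORDS = {"play", "pause", "next", "skip", "call", "search", "genre", "artist"}

stored_media = [
    {"title": "Grace Kelly", "artist": "Mika", "album": "Life in Cartoon Motion"},
]
contacts = [{"name": "John", "label": "cell phone"}]

def build_library(media: list[dict], people: list[dict]) -> set[str]:
    words = set(DEFAULT_WORDS)
    for item in media:
        for value in item.values():
            words.update(value.lower().split())   # title, artist, album words
    for person in people:
        words.update(person["name"].lower().split())  # contact-name words
    return words

library = build_library(stored_media, contacts)
print("mika" in library, "john" in library)  # True True
```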
The voice input can be provided to voice input processing module 420. The voice input can be provided in any suitable form, including for example in digitized form or in analog form (e.g., if some or all of the circuitry and software for converting an analog voice input to a digital signal are in voice input processing module 420). For example, voice input processing module 420 can be integrated in the electronic device used by the user. As another example, voice input processing module 420 can be integrated, in whole or in part, in a remote device or server to which the device can connect to process voice inputs. Voice input processing module 420 can analyze the received voice input to identify specific words or phrases within the voice input. For example, voice input processing module 420 can compare identified words or phrases of the voice signal to words or phrases of library 422. Library 422 can be separate from voice input processing module 420, or instead or in addition embedded within voice input processing module 420. Library 422 can include any suitable words, including for example default words associated with the electronic device detecting the voice input, specific words derived from the user's interactions with the electronic device (e.g., with content transferred to the electronic device by the user), or other words or phrases.
Voice input processing module 420 can analyze the detected words or phrases, and identify one or more particular electronic device operations associated with the detected words or phrases. For example, voice input processing module 420 can identify one or more keywords specifying an instruction to the device, where the instruction can include one or more variables or values qualifying the instruction. The instruction (e.g., “play”), including the variables or values specifying how the instruction is to be executed (e.g., “Mika's latest album”) can be analyzed to identify one or more electronic device operations corresponding to the instruction.
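As an illustration of how an identified instruction and its qualifying values might be mapped to a device operation, consider the sketch below; the handler names and operation table are hypothetical, not the module's actual interface.

```python
# Illustrative dispatch sketch: map an identified keyword and its qualifiers to
# a device operation handler.

def play_media(query: str) -> str:
    return f"playing media matching '{query}'"

def call_contact(query: str) -> str:
    return f"dialing contact matching '{query}'"

OPERATIONS = {"play": play_media, "call": call_contact}

def execute(keyword: str, qualifiers: list[str]) -> str:
    handler = OPERATIONS.get(keyword)
    if handler is None:
        return "no matching device operation"
    return handler(" ".join(qualifiers))

print(execute("play", ["Mika's", "latest", "album"]))
```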
Voice input processing module 420 can provide the identified device operation to device 430 so that device 430 can perform the operation. Device 430 can perform one or more operations, including for example operating one or more applications or processes within one or more applications, and the operations can include one-time, repeating, or ongoing operations (e.g., monitoring all incoming email for particular flagged messages). Device 430 can include any suitable device, and can include some or all of the features of electronic device 100 (
Because of the complexity of voice inputs, and the size of the resulting library used to identify instructions within a voice input, the voice input processing module can take a significant amount of time, resources, or both to process a particular voice input. To reduce the processing required for each voice input, the voice input processing module may benefit from comparing the voice input to a reduced set of library words. In particular, by reducing the number of words in the library to which a voice input is compared, the voice input processing module can more rapidly process voice inputs at a lower device resource cost.
The voice input processing module can determine which library words to include in a particular subset using any suitable approach. In some embodiments, a subset of the library can be selected based on the identity of the user providing the voice input. The voice input processing module can determine which words in a library to associate with a user using any suitable approach. For example, the voice input processing module can select default words that relate to applications or operations used often by the user (e.g., used more than a threshold amount). As another example, the voice input processing module can prompt the user to provide preference or interest information from which related library words can be extracted. As still another example, the voice input processing module can instead or in addition monitor the user's use of the device to determine the user's preferences. In some embodiments, the voice input processing module can analyze previously received voice inputs to identify particular words or types of words that are often used.
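The following sketch, with illustrative usage counts and a hypothetical threshold, shows one way library words could be selected based on the operations a user invokes most often.

```python
# Sketch under stated assumptions: include subset words only for operations the
# user invokes more than a threshold number of times. Counts are illustrative.

OPERATION_WORDS = {
    "media_playback": {"play", "pause", "next", "shuffle", "album", "artist"},
    "telephony": {"call", "dial", "hang", "voicemail"},
    "email": {"send", "reply", "inbox"},
}

usage_counts = {"media_playback": 120, "telephony": 4, "email": 35}
THRESHOLD = 10

def subset_from_usage(counts: dict[str, int], threshold: int) -> set[str]:
    """Include words only for operations used more than the threshold amount."""
    subset: set[str] = set()
    for operation, count in counts.items():
        if count > threshold:
            subset |= OPERATION_WORDS.get(operation, set())
    return subset

print(sorted(subset_from_usage(usage_counts, THRESHOLD)))
```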
If, at step 504, the processing module instead determines that the user has been identified, process 500 can move to step 508. At step 508, the processing module can identify user interest information. In particular, the processing module can identify content or other information specifying the user's interests, and can use the information to generate a preference profile for the user. The processing module can identify user interest information using any suitable approach, including one or more of the approaches described within step 508. At step 510, the processing module can review the user's past use of the device. For example, the processing module can review feedback information related to media playback (e.g., which media items were selected for playback, skipped, or ranked). As another example, the processing module can review the particular applications or operations that the user directed the device to perform (e.g., the user often uses an email application and a sports scores application). As still another example, the processing module can review the types of inputs that the user provided to particular applications or in the context of specific operations (e.g., the user is interested in baseball scores and news, but not basketball or hockey scores and news).
At step 512, the processing module can identify user-selected content stored on the device. For example, the processing module can identify attributes of media items that the user selected to transfer from a media library to the device. As another example, the processing module can identify attributes of particular applications that the user has installed or loaded on the device.
At step 514, the processing module can request preference information from the user. For example, the processing module can provide a number of questions to the user (e.g., select from the following list your preferred genres, or identify specific media items that you like). As another example, the processing module can direct the user to indicate a preference for currently provided content (e.g., direct the user to approve or reject a currently played back media item, or a game that the user is trying). At step 516, the processing module can review words identified from previous voice inputs. For example, the processing module can review previously received voice inputs, and the types of words or phrases identified in the previous inputs. In some embodiments, the processing module can further determine which of the identified words were properly identified (e.g., the words for which the corresponding device operation executed by the device was approved by the user).
At step 518, the processing module can identify particular library words associated with the user interest information. For example, the processing module can select a subset of default library words that are associated with particular operations or processes most often used by the user. As another example, the processing module can select a subset of user-specific library words that relate particularly to the content of most interest to the user (e.g., words for metadata related to the media items preferred by the user). In particular, the processing module can identify particular metadata associated with media items of most interest to the user (e.g., media items most recently added to the user's media library, transferred to the device, having the highest user ranking, popular media based on external popularity sources, media by a particular favorite artist or within a genre, media items with higher playcounts). At step 520, the processing module can define a subset of the library that includes at least the identified library words. In some embodiments, the defined subset can include additional words, including for example default library words, or other words commonly used or associated with other users (e.g., words associated with other users of the same device, with users using the same type of device, or with users within a particular community or location). Process 500 can then move to step 506 and end.
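A rough sketch of this subset-definition flow appears below; the interest sources and combination rule are assumptions chosen to mirror the steps described above, not the claimed process itself.

```python
# Rough sketch of the subset-definition flow (cf. steps 508-520); data sources
# and field names are assumptions for illustration.

def define_subset(default_words: set[str],
                  playback_feedback: dict[str, int],
                  transferred_metadata: set[str],
                  stated_preferences: set[str],
                  prior_voice_words: set[str]) -> set[str]:
    """Combine user-interest sources into a library subset."""
    # Keep metadata words for items the user actually played more than once.
    liked = {title.lower() for title, plays in playback_feedback.items() if plays >= 2}
    subset = set(default_words)
    subset |= liked
    subset |= {w.lower() for w in transferred_metadata}
    subset |= {w.lower() for w in stated_preferences}
    subset |= {w.lower() for w in prior_voice_words}
    return subset

subset = define_subset(
    default_words={"play", "next", "shuffle"},
    playback_feedback={"Grace Kelly": 5, "Skipped Song": 0},
    transferred_metadata={"Mika", "pop"},
    stated_preferences={"jazz"},
    prior_voice_words={"playlist"},
)
print(sorted(subset))
```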
The voice input processing module can identify a user using any suitable approach.
If, at step 604, the processing module instead determines that an input has been received, process 600 can move to step 608. At step 608, the processing module can identify the user providing the input. The processing module can identify the user providing an input using any suitable approach, including one or more of the approaches described within step 608. At step 610, the processing module can identify the user from a user-specific input. For example, the processing circuitry can identify the user from a username and password, token, or other key or secret known only to the user. At step 612, the processing module can identify the user from the type of input received. For example, the processing module can determine that the input corresponds to an operation or process typically performed by a particular user (e.g., only one user uses a particular application). As another example, the processing module can determine that the input was provided at a particular time of day during which the same user uses the device. At step 614, the processing module can identify the user from biometric information of the input. For example, the processing module can identify a user from a voiceprint, fingerprint, recognition of one or more facial features, or any other detected biometric attribute of the user (e.g., by comparing the biometric attribute to a library of known biometric attributes each associated with particular known users of the device).
At step 616, the processing module can use the user's identity for voice processing. In particular, the processing module can retrieve a subset of the word library used for processing voice inputs to streamline the voice input processing. Process 600 can then end at step 606.
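The sketch below illustrates, in highly simplified form, identification by credential or by comparing a sample against stored voiceprints; the vector-distance matching is an assumption standing in for real voiceprint analysis, and all stored values are hypothetical.

```python
# Simplified sketch of user identification (cf. steps 608-616): by credential
# or by voiceprint comparison. Real voiceprint matching uses richer features.

import math

KNOWN_CREDENTIALS = {("alice", "secret"): "alice"}
KNOWN_VOICEPRINTS = {"alice": [0.2, 0.9, 0.4], "bob": [0.8, 0.1, 0.5]}

def identify_by_credential(username: str, password: str) -> str | None:
    return KNOWN_CREDENTIALS.get((username, password))

def identify_by_voiceprint(sample: list[float], max_distance: float = 0.3) -> str | None:
    """Return the closest enrolled user if the sample is close enough."""
    best_user, best_dist = None, float("inf")
    for user, enrolled in KNOWN_VOICEPRINTS.items():
        dist = math.dist(sample, enrolled)
        if dist < best_dist:
            best_user, best_dist = user, dist
    return best_user if best_dist <= max_distance else None

print(identify_by_credential("alice", "secret"))   # alice
print(identify_by_voiceprint([0.25, 0.85, 0.4]))   # alice
```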
Using user identification 722, processing module 720 can retrieve a particular subset 732 of words from library 730 for processing voice input 710 and identifying particular words or phrases of the voice input. Processing module 720 can provide user identification 722 to library 730 such that library 730 can retrieve a particular subset of library words associated with the identified user. Processing module 720 can then compare voice input 710 with library subset 732 to more efficiently identify specific words or phrases within the voice input (e.g., only comparing to the most relevant words or phrases, or most likely words or phrases to be used in the voice input). For example, voice input processing module 720 can identify one or more keywords specifying an instruction to the device, where the instruction can include one or more variables or values qualifying the instruction. The instruction (e.g., “play”), including the variables or values specifying how the instruction is to be executed (e.g., “Mika's latest album”) can be analyzed to identify one or more electronic device operations corresponding to the instruction.
Library 730 can include some or all of the features of library 422 (
The particular words or phrases to place in subset 732 can be selected using any suitable approach. In some embodiments, processing module 720 can determine the user's interests 724 and select a particular subset of library words based on the user's interests. Alternatively, library 730 can receive the user's interests 724 from the processing module, or can retrieve the user's interests directly from the user or from an electronic device. Library 730 can then select the particular words or phrases to include in subset 732. Any suitable approach can be used to correlate a user's interests to words or phrases of a library. For example, words can be selected based on the types of applications or processes used by the user. As another example, words can be selected based on content consumed by the user (e.g., media items played back by the user). As still another example, words can be selected based on data used to perform one or more device operations (e.g., contact information of particular contacts to whom the user sends emails or messages).
Processing module 720 can identify the user's interests 724 using any suitable approach. In some embodiments, processing module 720 can receive user feedback 742 from electronic device 740. The user feedback can include any suitable type of feedback from which user interests 724 can be derived. For example, user feedback 742 can include playback information for media items (e.g., which media items are selected for playback, or skipped during playback), user interactions with the device such as user instructions relating to content accessed using the device (e.g., star rankings provided by the user for media items) or particular applications or operations that the user selects to execute (e.g., a particular game that the user plays), or any other feedback describing a user's interactions with the device. In some cases, user feedback 742 can be provided to library 730 instead of or in addition to processing module 720 for creating subset 732 in the library.
Voice input processing module 720 can provide an instruction derived from the identified words of voice input 710 to device 740. Device 740 can in turn identify one or more operations to perform in response to the received instruction, and execute the one or more operations. In some embodiments, processing module 720 can instead or in addition identify the one or more operations related to a derived instruction, and provide the operations directly to device 740 for execution. Device 740 can perform any suitable operation, including for example operations relating to one or more applications or processes within one or more applications, and the operations can include one-time, repeating, or ongoing operations (e.g., monitoring all incoming email for particular flagged messages). Device 740 can include any suitable device, and can include some or all of the features of electronic device 100 (
In some embodiments, the voice input can include a word defining an arbitrary or user-specific variable for a device operation. For example, the user can provide a voice input directing the device to play back a media item that the user will find “good.” The processing module can use the user's interests 724 to quantify abstract or qualifying terms and provide actual variables or arguments for the device operations. For example, the electronic device can select recently added or loaded media items, current hits or higher ranked media items, media items with higher play counts, or media items by a favorite artist or within a preferred genre.
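As an illustration, the sketch below maps an abstract qualifier such as “good” to concrete filter values; the rating, play-count, and recency thresholds are assumptions, not values from the specification.

```python
# Illustrative only: translate an abstract qualifier ("good") into concrete
# selection criteria drawn from the user's profile.

media_items = [
    {"title": "Grace Kelly", "rating": 5, "play_count": 40, "recently_added": False},
    {"title": "New Single", "rating": 0, "play_count": 1, "recently_added": True},
    {"title": "Filler Track", "rating": 2, "play_count": 3, "recently_added": False},
]

def resolve_good(items: list[dict]) -> list[dict]:
    """Treat 'good' as highly rated, frequently played, or recently added."""
    return [
        item for item in items
        if item["rating"] >= 4 or item["play_count"] >= 20 or item["recently_added"]
    ]

for item in resolve_good(media_items):
    print(item["title"])  # Grace Kelly, New Single
```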
The following flowcharts describe various processes performed in some embodiments of this invention. Although the descriptions for the following flowcharts will be provided in the context of an electronic device, it will be understood that a voice input processing module can perform some or all of the process steps.
Process 800 can begin at step 802. At step 804, the electronic device can determine whether a voice input was received. For example, the electronic device can determine whether an input interface detected an analog signal corresponding to a voice input. If no voice input is received, process 800 can move to step 806 and end.
If, at step 804, the electronic device instead determines that a voice input is received, process 800 can move to step 808. At step 808, the electronic device can determine whether the user providing the voice input was identified. For example, the electronic device can determine whether the user provided an input characteristic of the user (e.g., a user name and password, or using a particular application specific to a user). As another example, the electronic device can determine whether biometric information related to the user providing the input has been detected. The electronic device can compare the identification information with a library of authentication or identification information to identify the user. If the user is not identified, process 800 can move to step 810. At step 810, the electronic device can process the received voice input using a full library. For example, the electronic device can identify particular words or phrases of the voice input from an entire library of words used to process voice inputs. Process 800 can then move to step 816.
If, at step 808, the electronic device instead determines that the user was identified, process 800 can move to step 812. At step 812, the electronic device can identify a subset of a library used to process voice inputs. The identified subset can be associated with the identified user, such that words in the subset relate to interests of the user, or to words that the user is likely to use when providing voice inputs. For example, words in the identified subset can include metadata values that relate to content (e.g., media items or contacts) stored by the user on the device. At step 814, the electronic device can process the voice input using the identified subset of the library. For example, the electronic device can compare the received voice input with words of the subset, and identify specific words or phrases of the voice input. At step 816, the electronic device can identify an electronic device operation corresponding to the processed voice input. For example, the electronic device can identify one or more operations or processes to perform based on the voice instruction (e.g., generate a playlist based on a particular media item). At step 818, the electronic device can perform the identified device operation. Process 800 can then end at step 806.
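An end-to-end sketch of process 800 is shown below; the recognizer is a stand-in and all names and data are illustrative, not the device's actual implementation.

```python
# End-to-end sketch of process 800: choose the library subset if the user is
# identified, otherwise fall back to the full library, then extract and run an
# instruction. All names and data are illustrative.

FULL_LIBRARY = {"play", "call", "next", "shuffle", "mika", "john", "album", "podcast"}
USER_SUBSETS = {"alice": {"play", "next", "shuffle", "mika", "album"}}

def recognize(raw_input: str, library: set[str]) -> list[str]:
    """Stand-in recognizer: keep input words that appear in the library."""
    return [w for w in raw_input.lower().split() if w in library]

def handle_voice_input(raw_input: str, user_id: str | None) -> str:
    # Steps 808/812: pick the user's subset if identified, else the full library.
    library = USER_SUBSETS.get(user_id, FULL_LIBRARY) if user_id else FULL_LIBRARY
    words = recognize(raw_input, library)               # steps 810/814
    if not words:
        return "no operation identified"
    keyword, qualifiers = words[0], words[1:]           # step 816
    return f"performing '{keyword}' with {qualifiers}"  # step 818

print(handle_voice_input("play Mika album", "alice"))
print(handle_voice_input("call John", None))
```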
Although many of the embodiments of the present invention are described herein with respect to personal computing devices, it should be understood that the present invention is not limited to personal computing applications, but is generally applicable to other applications.
Embodiments of the invention are preferably implemented by software, but can also be implemented in hardware or a combination of hardware and software. Embodiments of the invention can also be embodied as computer readable code on a computer readable medium. The computer readable medium is any data storage device that can store data which can thereafter be read by a computer system. Examples of the computer readable medium include read-only memory, random-access memory, CD-ROMs, DVDs, magnetic tape, and optical data storage devices. The computer readable medium can also be distributed over network-coupled computer systems so that the computer readable code is stored and executed in a distributed fashion.
Insubstantial changes from the claimed subject matter as viewed by a person with ordinary skill in the art, now known or later devised, are expressly contemplated as being equivalently within the scope of the claims. Therefore, obvious substitutions now or later known to one with ordinary skill in the art are defined to be within the scope of the defined elements.
The above described embodiments of the invention are presented for purposes of illustration and not of limitation.
This application is a continuation of U.S. patent application Ser. No. 12/712,988, filed Feb. 25, 2010, which is incorporated herein by reference for all purposes.
7523108 | Cao | Apr 2009 | B2 |
7526466 | Au | Apr 2009 | B2 |
7528713 | Singh et al. | May 2009 | B2 |
7529671 | Rockenbeck et al. | May 2009 | B2 |
7529676 | Koyama | May 2009 | B2 |
7536565 | Girish et al. | May 2009 | B2 |
7539656 | Fratkina et al. | May 2009 | B2 |
7543232 | Easton, Jr. et al. | Jun 2009 | B2 |
7546382 | Healey et al. | Jun 2009 | B2 |
7546529 | Reynar et al. | Jun 2009 | B2 |
7548895 | Pulsipher | Jun 2009 | B2 |
7552055 | Lecoeuche | Jun 2009 | B2 |
7555431 | Bennett | Jun 2009 | B2 |
7558730 | Davis et al. | Jul 2009 | B2 |
7559026 | Girish et al. | Jul 2009 | B2 |
7561069 | Horstemeyer | Jul 2009 | B2 |
7571106 | Cao et al. | Aug 2009 | B2 |
7577522 | Rosenberg | Aug 2009 | B2 |
7580551 | Srihari et al. | Aug 2009 | B1 |
7580576 | Wang et al. | Aug 2009 | B2 |
7599918 | Shen et al. | Oct 2009 | B2 |
7603381 | Burke et al. | Oct 2009 | B2 |
7613264 | Wells et al. | Nov 2009 | B2 |
7617094 | Aoki et al. | Nov 2009 | B2 |
7620549 | Di Cristo et al. | Nov 2009 | B2 |
7624007 | Bennett | Nov 2009 | B2 |
7627481 | Kuo et al. | Dec 2009 | B1 |
7630901 | Omi | Dec 2009 | B2 |
7634409 | Kennewick et al. | Dec 2009 | B2 |
7634413 | Kuo et al. | Dec 2009 | B1 |
7636657 | Ju et al. | Dec 2009 | B2 |
7640160 | Di Cristo et al. | Dec 2009 | B2 |
7647225 | Bennett et al. | Jan 2010 | B2 |
7649454 | Singh et al. | Jan 2010 | B2 |
7657424 | Bennett | Feb 2010 | B2 |
7664558 | Lindahl et al. | Feb 2010 | B2 |
7664638 | Cooper et al. | Feb 2010 | B2 |
7672841 | Bennett | Mar 2010 | B2 |
7673238 | Girish et al. | Mar 2010 | B2 |
7676026 | Baxter, Jr. | Mar 2010 | B1 |
7684985 | Dominach et al. | Mar 2010 | B2 |
7684990 | Caskey et al. | Mar 2010 | B2 |
7693715 | Hwang et al. | Apr 2010 | B2 |
7693719 | Chu et al. | Apr 2010 | B2 |
7693720 | Kennewick et al. | Apr 2010 | B2 |
7698131 | Bennett | Apr 2010 | B2 |
7702500 | Blaedow | Apr 2010 | B2 |
7702508 | Bennett | Apr 2010 | B2 |
7707027 | Balchandran et al. | Apr 2010 | B2 |
7707032 | Wang et al. | Apr 2010 | B2 |
7707267 | Lisitsa et al. | Apr 2010 | B2 |
7711129 | Lindahl et al. | May 2010 | B2 |
7711565 | Gazdzinski | May 2010 | B1 |
7711672 | Au | May 2010 | B2 |
7716056 | Weng et al. | May 2010 | B2 |
7720674 | Kaiser et al. | May 2010 | B2 |
7720683 | Vermeulen et al. | May 2010 | B1 |
7721301 | Wong et al. | May 2010 | B2 |
7725307 | Bennett | May 2010 | B2 |
7725318 | Gavalda et al. | May 2010 | B2 |
7725320 | Bennett | May 2010 | B2 |
7725321 | Bennett | May 2010 | B2 |
7729904 | Bennett | Jun 2010 | B2 |
7729916 | Coffman et al. | Jun 2010 | B2 |
7734461 | Kwak et al. | Jun 2010 | B2 |
7747616 | Yamada et al. | Jun 2010 | B2 |
7752152 | Paek et al. | Jul 2010 | B2 |
7756868 | Lee | Jul 2010 | B2 |
7774204 | Mozer et al. | Aug 2010 | B2 |
7778632 | Kurlander et al. | Aug 2010 | B2 |
7783486 | Rosser et al. | Aug 2010 | B2 |
7801729 | Mozer | Sep 2010 | B2 |
7809570 | Kennewick et al. | Oct 2010 | B2 |
7809610 | Cao | Oct 2010 | B2 |
7818176 | Freeman et al. | Oct 2010 | B2 |
7818291 | Ferguson et al. | Oct 2010 | B2 |
7822608 | Cross, Jr. et al. | Oct 2010 | B2 |
7823123 | Sabbouh | Oct 2010 | B2 |
7826945 | Zhang et al. | Nov 2010 | B2 |
7827047 | Anderson et al. | Nov 2010 | B2 |
7831426 | Bennett | Nov 2010 | B2 |
7840400 | Lavi et al. | Nov 2010 | B2 |
7840447 | Kleinrock et al. | Nov 2010 | B2 |
7853444 | Wang et al. | Dec 2010 | B2 |
7853574 | Kraenzel et al. | Dec 2010 | B2 |
7853664 | Wang et al. | Dec 2010 | B1 |
7873519 | Bennett | Jan 2011 | B2 |
7873654 | Bernard | Jan 2011 | B2 |
7881936 | Longe et al. | Feb 2011 | B2 |
7885844 | Cohen et al. | Feb 2011 | B1 |
7890652 | Bull et al. | Feb 2011 | B2 |
7899666 | Varone | Mar 2011 | B2 |
7912702 | Bennett | Mar 2011 | B2 |
7917367 | Di Cristo et al. | Mar 2011 | B2 |
7917497 | Harrison et al. | Mar 2011 | B2 |
7920678 | Cooper et al. | Apr 2011 | B2 |
7920682 | Byrne et al. | Apr 2011 | B2 |
7920857 | Lau et al. | Apr 2011 | B2 |
7925525 | Chin | Apr 2011 | B2 |
7930168 | Weng et al. | Apr 2011 | B2 |
7930197 | Ozzie et al. | Apr 2011 | B2 |
7949529 | Weider et al. | May 2011 | B2 |
7949534 | Davis et al. | May 2011 | B2 |
7974844 | Sumita | Jul 2011 | B2 |
7974972 | Cao | Jul 2011 | B2 |
7983915 | Knight et al. | Jul 2011 | B2 |
7983917 | Kennewick et al. | Jul 2011 | B2 |
7983997 | Allen et al. | Jul 2011 | B2 |
7986431 | Emori et al. | Jul 2011 | B2 |
7987151 | Schott et al. | Jul 2011 | B2 |
7996228 | Miller et al. | Aug 2011 | B2 |
7999669 | Singh et al. | Aug 2011 | B2 |
8000453 | Cooper et al. | Aug 2011 | B2 |
8005664 | Hanumanthappa | Aug 2011 | B2 |
8005679 | Jordan et al. | Aug 2011 | B2 |
8015006 | Kennewick et al. | Sep 2011 | B2 |
8015144 | Zheng et al. | Sep 2011 | B2 |
8024195 | Mozer et al. | Sep 2011 | B2 |
8032383 | Bhardwaj et al. | Oct 2011 | B1 |
8036901 | Mozer | Oct 2011 | B2 |
8041570 | Mirkovic et al. | Oct 2011 | B2 |
8041611 | Kleinrock et al. | Oct 2011 | B2 |
8050500 | Batty et al. | Nov 2011 | B1 |
8055502 | Clark et al. | Nov 2011 | B2 |
8055708 | Chitsaz et al. | Nov 2011 | B2 |
8065143 | Yanagihara | Nov 2011 | B2 |
8065155 | Gazdzinski | Nov 2011 | B1 |
8065156 | Gazdzinski | Nov 2011 | B2 |
8069046 | Kennewick et al. | Nov 2011 | B2 |
8069422 | Sheshagiri et al. | Nov 2011 | B2 |
8073681 | Baldwin et al. | Dec 2011 | B2 |
8078473 | Gazdzinski | Dec 2011 | B1 |
8082153 | Coffman et al. | Dec 2011 | B2 |
8095364 | Longe et al. | Jan 2012 | B2 |
8099289 | Mozer et al. | Jan 2012 | B2 |
8103510 | Sato | Jan 2012 | B2 |
8107401 | John et al. | Jan 2012 | B2 |
8112275 | Kennewick et al. | Feb 2012 | B2 |
8112280 | Lu | Feb 2012 | B2 |
8117037 | Gazdzinski | Feb 2012 | B2 |
8122353 | Bouta | Feb 2012 | B2 |
8131557 | Davis et al. | Mar 2012 | B2 |
8138912 | Singh et al. | Mar 2012 | B2 |
8140335 | Kennewick et al. | Mar 2012 | B2 |
8150700 | Shin et al. | Apr 2012 | B2 |
8165886 | Gagnon et al. | Apr 2012 | B1 |
8166019 | Lee et al. | Apr 2012 | B1 |
8170790 | Lee et al. | May 2012 | B2 |
8179370 | Yamasani et al. | May 2012 | B1 |
8188856 | Singh et al. | May 2012 | B2 |
8190359 | Bourne | May 2012 | B2 |
8195467 | Mozer et al. | Jun 2012 | B2 |
8204238 | Mozer | Jun 2012 | B2 |
8205788 | Gazdzinski et al. | Jun 2012 | B1 |
8219406 | Yu et al. | Jul 2012 | B2 |
8219407 | Roy et al. | Jul 2012 | B1 |
8219608 | alSafadi et al. | Jul 2012 | B2 |
8224649 | Chaudhari et al. | Jul 2012 | B2 |
8239207 | Seligman et al. | Aug 2012 | B2 |
8285551 | Gazdzinski | Oct 2012 | B2 |
8285553 | Gazdzinski | Oct 2012 | B2 |
8290777 | Nguyen et al. | Oct 2012 | B1 |
8290778 | Gazdzinski | Oct 2012 | B2 |
8290781 | Gazdzinski | Oct 2012 | B2 |
8296146 | Gazdzinski | Oct 2012 | B2 |
8296153 | Gazdzinski | Oct 2012 | B2 |
8296383 | Lindahl | Oct 2012 | B2 |
8301456 | Gazdzinski | Oct 2012 | B2 |
8311834 | Gazdzinski | Nov 2012 | B1 |
8370158 | Gazdzinski | Feb 2013 | B2 |
8371503 | Gazdzinski | Feb 2013 | B2 |
8374871 | Ehsani et al. | Feb 2013 | B2 |
8447612 | Gazdzinski | May 2013 | B2 |
8498857 | Kopparapu et al. | Jul 2013 | B2 |
20010029455 | Chin et al. | Oct 2001 | A1 |
20010030660 | Zainoulline | Oct 2001 | A1 |
20010047264 | Roundtree | Nov 2001 | A1 |
20020002039 | Qureshey et al. | Jan 2002 | A1 |
20020002461 | Tetsumoto | Jan 2002 | A1 |
20020004703 | Gaspard, II | Jan 2002 | A1 |
20020010584 | Schultz et al. | Jan 2002 | A1 |
20020013852 | Janik | Jan 2002 | A1 |
20020031262 | Imagawa et al. | Mar 2002 | A1 |
20020032564 | Ehsani et al. | Mar 2002 | A1 |
20020032751 | Bharadwaj | Mar 2002 | A1 |
20020035474 | Alpdemir | Mar 2002 | A1 |
20020042707 | Zhao et al. | Apr 2002 | A1 |
20020045438 | Tagawa et al. | Apr 2002 | A1 |
20020046025 | Hain | Apr 2002 | A1 |
20020046315 | Miller et al. | Apr 2002 | A1 |
20020052747 | Sarukkai | May 2002 | A1 |
20020059068 | Rose et al. | May 2002 | A1 |
20020067308 | Robertson | Jun 2002 | A1 |
20020069063 | Buchner et al. | Jun 2002 | A1 |
20020072816 | Shdema et al. | Jun 2002 | A1 |
20020077817 | Atal | Jun 2002 | A1 |
20020080163 | Morey | Jun 2002 | A1 |
20020099552 | Rubin et al. | Jul 2002 | A1 |
20020103641 | Kuo et al. | Aug 2002 | A1 |
20020107684 | Gao | Aug 2002 | A1 |
20020116171 | Russell | Aug 2002 | A1 |
20020116185 | Cooper et al. | Aug 2002 | A1 |
20020116189 | Yeh et al. | Aug 2002 | A1 |
20020128827 | Bu et al. | Sep 2002 | A1 |
20020133347 | Schoneburg et al. | Sep 2002 | A1 |
20020135565 | Gordon et al. | Sep 2002 | A1 |
20020138265 | Stevens et al. | Sep 2002 | A1 |
20020143533 | Lucas et al. | Oct 2002 | A1 |
20020143551 | Sharma et al. | Oct 2002 | A1 |
20020151297 | Remboski et al. | Oct 2002 | A1 |
20020154160 | Hosokawa | Oct 2002 | A1 |
20020164000 | Cohen et al. | Nov 2002 | A1 |
20020169605 | Damiba et al. | Nov 2002 | A1 |
20020173889 | Odinak et al. | Nov 2002 | A1 |
20020184189 | Hay et al. | Dec 2002 | A1 |
20020198714 | Zhou | Dec 2002 | A1 |
20030001881 | Mannheimer et al. | Jan 2003 | A1 |
20030016770 | Trans et al. | Jan 2003 | A1 |
20030020760 | Takatsu et al. | Jan 2003 | A1 |
20030033153 | Olson et al. | Feb 2003 | A1 |
20030046401 | Abbott et al. | Mar 2003 | A1 |
20030074198 | Sussman | Apr 2003 | A1 |
20030078766 | Appelt et al. | Apr 2003 | A1 |
20030079038 | Robbin et al. | Apr 2003 | A1 |
20030083884 | Odinak et al. | May 2003 | A1 |
20030088414 | Huang et al. | May 2003 | A1 |
20030097210 | Horst et al. | May 2003 | A1 |
20030098892 | Hiipakka | May 2003 | A1 |
20030099335 | Tanaka et al. | May 2003 | A1 |
20030101045 | Moffatt et al. | May 2003 | A1 |
20030115060 | Junqua et al. | Jun 2003 | A1 |
20030115064 | Gusler et al. | Jun 2003 | A1 |
20030115552 | Jahnke et al. | Jun 2003 | A1 |
20030117365 | Shteyn | Jun 2003 | A1 |
20030120494 | Jost et al. | Jun 2003 | A1 |
20030125927 | Seme | Jul 2003 | A1 |
20030126559 | Fuhrmann | Jul 2003 | A1 |
20030135740 | Talmor et al. | Jul 2003 | A1 |
20030144846 | Denenberge et al. | Jul 2003 | A1 |
20030157968 | Boman et al. | Aug 2003 | A1 |
20030158737 | Csicsatka | Aug 2003 | A1 |
20030167318 | Robbin et al. | Sep 2003 | A1 |
20030167335 | Alexander | Sep 2003 | A1 |
20030190074 | Loudon et al. | Oct 2003 | A1 |
20030197744 | Irvine | Oct 2003 | A1 |
20030212961 | Soin et al. | Nov 2003 | A1 |
20030233230 | Ammicht et al. | Dec 2003 | A1 |
20030233237 | Garside et al. | Dec 2003 | A1 |
20030234824 | Litwiller | Dec 2003 | A1 |
20040051729 | Borden, IV | Mar 2004 | A1 |
20040052338 | Celi, Jr. et al. | Mar 2004 | A1 |
20040054535 | Mackie et al. | Mar 2004 | A1 |
20040054690 | Hillerbrand et al. | Mar 2004 | A1 |
20040055446 | Robbin et al. | Mar 2004 | A1 |
20040061717 | Menon et al. | Apr 2004 | A1 |
20040085162 | Agarwal et al. | May 2004 | A1 |
20040114731 | Gillett et al. | Jun 2004 | A1 |
20040127241 | Shostak | Jul 2004 | A1 |
20040135701 | Yasuda et al. | Jul 2004 | A1 |
20040145607 | Alderson | Jul 2004 | A1 |
20040162741 | Flaxer et al. | Aug 2004 | A1 |
20040176958 | Salmenkaita et al. | Sep 2004 | A1 |
20040186714 | Baker | Sep 2004 | A1 |
20040193420 | Kennewick et al. | Sep 2004 | A1 |
20040193426 | Maddux et al. | Sep 2004 | A1 |
20040199375 | Ehsani et al. | Oct 2004 | A1 |
20040199387 | Wang et al. | Oct 2004 | A1 |
20040205671 | Sukehiro et al. | Oct 2004 | A1 |
20040218451 | Said et al. | Nov 2004 | A1 |
20040220798 | Chi et al. | Nov 2004 | A1 |
20040225746 | Niell et al. | Nov 2004 | A1 |
20040236778 | Junqua et al. | Nov 2004 | A1 |
20040243419 | Wang | Dec 2004 | A1 |
20040257432 | Girish et al. | Dec 2004 | A1 |
20050002507 | Timmins et al. | Jan 2005 | A1 |
20050015254 | Beaman | Jan 2005 | A1 |
20050015772 | Saare et al. | Jan 2005 | A1 |
20050033582 | Gadd et al. | Feb 2005 | A1 |
20050045373 | Born | Mar 2005 | A1 |
20050049880 | Roth et al. | Mar 2005 | A1 |
20050055403 | Brittan | Mar 2005 | A1 |
20050058438 | Hayashi | Mar 2005 | A1 |
20050071332 | Ortega et al. | Mar 2005 | A1 |
20050080625 | Bennett et al. | Apr 2005 | A1 |
20050080780 | Colledge et al. | Apr 2005 | A1 |
20050086059 | Bennett | Apr 2005 | A1 |
20050091118 | Fano | Apr 2005 | A1 |
20050100214 | Zhang et al. | May 2005 | A1 |
20050102614 | Brockett et al. | May 2005 | A1 |
20050102625 | Lee et al. | May 2005 | A1 |
20050108001 | Aarskog | May 2005 | A1 |
20050108074 | Bloechl et al. | May 2005 | A1 |
20050108338 | Simske et al. | May 2005 | A1 |
20050114124 | Liu et al. | May 2005 | A1 |
20050114140 | Brackett et al. | May 2005 | A1 |
20050119897 | Bennett et al. | Jun 2005 | A1 |
20050125216 | Chitrapura et al. | Jun 2005 | A1 |
20050125235 | Lazay et al. | Jun 2005 | A1 |
20050132301 | Ikeda | Jun 2005 | A1 |
20050143972 | Gopalakrishnan et al. | Jun 2005 | A1 |
20050149332 | Kuzunuki et al. | Jul 2005 | A1 |
20050152602 | Chen et al. | Jul 2005 | A1 |
20050165607 | Di Fabbrizio et al. | Jul 2005 | A1 |
20050182616 | Kotipalli | Aug 2005 | A1 |
20050182628 | Choi | Aug 2005 | A1 |
20050182629 | Coorman et al. | Aug 2005 | A1 |
20050192801 | Lewis et al. | Sep 2005 | A1 |
20050196733 | Budra et al. | Sep 2005 | A1 |
20050201572 | Lindahl et al. | Sep 2005 | A1 |
20050203747 | Lecoeuche | Sep 2005 | A1 |
20050203991 | Kawamura et al. | Sep 2005 | A1 |
20050222843 | Kahn et al. | Oct 2005 | A1 |
20050228665 | Kobayashi et al. | Oct 2005 | A1 |
20050273626 | Pearson et al. | Dec 2005 | A1 |
20050283364 | Longe et al. | Dec 2005 | A1 |
20050288934 | Omi | Dec 2005 | A1 |
20050288936 | Busayapongchai et al. | Dec 2005 | A1 |
20050289463 | Wu et al. | Dec 2005 | A1 |
20060009973 | Nguyen et al. | Jan 2006 | A1 |
20060018492 | Chiu et al. | Jan 2006 | A1 |
20060061488 | Dunton | Mar 2006 | A1 |
20060067535 | Culbert et al. | Mar 2006 | A1 |
20060067536 | Culbert et al. | Mar 2006 | A1 |
20060074660 | Waters et al. | Apr 2006 | A1 |
20060077055 | Basir | Apr 2006 | A1 |
20060095846 | Nurmi | May 2006 | A1 |
20060095848 | Naik | May 2006 | A1 |
20060106592 | Brockett et al. | May 2006 | A1 |
20060106594 | Brockett et al. | May 2006 | A1 |
20060106595 | Brockett et al. | May 2006 | A1 |
20060111906 | Cross et al. | May 2006 | A1 |
20060116874 | Samuelsson et al. | Jun 2006 | A1 |
20060117002 | Swen | Jun 2006 | A1 |
20060119582 | Ng et al. | Jun 2006 | A1 |
20060122834 | Bennett | Jun 2006 | A1 |
20060122836 | Cross et al. | Jun 2006 | A1 |
20060143007 | Koh et al. | Jun 2006 | A1 |
20060143576 | Gupta et al. | Jun 2006 | A1 |
20060153040 | Girish et al. | Jul 2006 | A1 |
20060156252 | Sheshagiri et al. | Jul 2006 | A1 |
20060190269 | Tessel et al. | Aug 2006 | A1 |
20060193518 | Dong | Aug 2006 | A1 |
20060200253 | Hoffberg et al. | Sep 2006 | A1 |
20060200342 | Corston-Oliver et al. | Sep 2006 | A1 |
20060217967 | Goertzen et al. | Sep 2006 | A1 |
20060221788 | Lindahl et al. | Oct 2006 | A1 |
20060235700 | Wong et al. | Oct 2006 | A1 |
20060239471 | Mao et al. | Oct 2006 | A1 |
20060242190 | Wnek | Oct 2006 | A1 |
20060262876 | LaDue | Nov 2006 | A1 |
20060274051 | Longe et al. | Dec 2006 | A1 |
20060274905 | Lindahl et al. | Dec 2006 | A1 |
20060277058 | J'maev et al. | Dec 2006 | A1 |
20060282264 | Denny et al. | Dec 2006 | A1 |
20060293876 | Kamatani et al. | Dec 2006 | A1 |
20060293886 | Odell et al. | Dec 2006 | A1 |
20070006098 | Krumm et al. | Jan 2007 | A1 |
20070021956 | Qu et al. | Jan 2007 | A1 |
20070027732 | Hudgens | Feb 2007 | A1 |
20070033003 | Morris | Feb 2007 | A1 |
20070038436 | Cristoe et al. | Feb 2007 | A1 |
20070038609 | Wu | Feb 2007 | A1 |
20070041361 | Iso-Sipila | Feb 2007 | A1 |
20070043568 | Dhanakshirur et al. | Feb 2007 | A1 |
20070044038 | Horentrup et al. | Feb 2007 | A1 |
20070047719 | Dhawan et al. | Mar 2007 | A1 |
20070050191 | Weider et al. | Mar 2007 | A1 |
20070050712 | Hull et al. | Mar 2007 | A1 |
20070052586 | Horstemeyer | Mar 2007 | A1 |
20070055514 | Beattie et al. | Mar 2007 | A1 |
20070055525 | Kennewick et al. | Mar 2007 | A1 |
20070055529 | Kanevsky et al. | Mar 2007 | A1 |
20070058832 | Hug et al. | Mar 2007 | A1 |
20070073540 | Hirakawa et al. | Mar 2007 | A1 |
20070083467 | Lindahl et al. | Apr 2007 | A1 |
20070088556 | Andrew | Apr 2007 | A1 |
20070094026 | Ativanichayaphong et al. | Apr 2007 | A1 |
20070100635 | Mahajan et al. | May 2007 | A1 |
20070100790 | Cheyer et al. | May 2007 | A1 |
20070106674 | Agrawal et al. | May 2007 | A1 |
20070118377 | Badino et al. | May 2007 | A1 |
20070118378 | Skuratovsky | May 2007 | A1 |
20070124149 | Shen et al. | May 2007 | A1 |
20070135949 | Snover et al. | Jun 2007 | A1 |
20070157268 | Girish et al. | Jul 2007 | A1 |
20070174188 | Fish | Jul 2007 | A1 |
20070180383 | Naik | Aug 2007 | A1 |
20070182595 | Ghasabian | Aug 2007 | A1 |
20070185754 | Schmidt | Aug 2007 | A1 |
20070185917 | Prahlad et al. | Aug 2007 | A1 |
20070198269 | Braho et al. | Aug 2007 | A1 |
20070208569 | Subramanian et al. | Sep 2007 | A1 |
20070208579 | Peterson | Sep 2007 | A1 |
20070208726 | Krishnaprasad et al. | Sep 2007 | A1 |
20070211071 | Slotznick et al. | Sep 2007 | A1 |
20070225980 | Sumita | Sep 2007 | A1 |
20070239429 | Johnson et al. | Oct 2007 | A1 |
20070265831 | Dinur et al. | Nov 2007 | A1 |
20070276651 | Bliss et al. | Nov 2007 | A1 |
20070276714 | Beringer | Nov 2007 | A1 |
20070276810 | Rosen | Nov 2007 | A1 |
20070282595 | Tunning et al. | Dec 2007 | A1 |
20070288241 | Cross et al. | Dec 2007 | A1 |
20070291108 | Huber et al. | Dec 2007 | A1 |
20070294263 | Punj et al. | Dec 2007 | A1 |
20070299664 | Peters et al. | Dec 2007 | A1 |
20080012950 | Lee et al. | Jan 2008 | A1 |
20080015864 | Ross et al. | Jan 2008 | A1 |
20080021708 | Bennett et al. | Jan 2008 | A1 |
20080034032 | Healey et al. | Feb 2008 | A1 |
20080040339 | Zhou et al. | Feb 2008 | A1 |
20080042970 | Liang et al. | Feb 2008 | A1 |
20080048908 | Sato | Feb 2008 | A1 |
20080052063 | Bennett et al. | Feb 2008 | A1 |
20080052073 | Goto et al. | Feb 2008 | A1 |
20080056579 | Guha | Mar 2008 | A1 |
20080071544 | Beaufays et al. | Mar 2008 | A1 |
20080075296 | Lindahl et al. | Mar 2008 | A1 |
20080077384 | Agapi et al. | Mar 2008 | A1 |
20080077406 | Ganong | Mar 2008 | A1 |
20080079566 | Singh et al. | Apr 2008 | A1 |
20080082332 | Mallett et al. | Apr 2008 | A1 |
20080082338 | O'Neil et al. | Apr 2008 | A1 |
20080082390 | Hawkins et al. | Apr 2008 | A1 |
20080082651 | Singh et al. | Apr 2008 | A1 |
20080091406 | Baldwin et al. | Apr 2008 | A1 |
20080109222 | Liu | May 2008 | A1 |
20080118143 | Gordon et al. | May 2008 | A1 |
20080120102 | Rao | May 2008 | A1 |
20080120112 | Jordan et al. | May 2008 | A1 |
20080120342 | Reed et al. | May 2008 | A1 |
20080126100 | Grost et al. | May 2008 | A1 |
20080129520 | Lee | Jun 2008 | A1 |
20080131006 | Oliver | Jun 2008 | A1 |
20080133215 | Sarukkai | Jun 2008 | A1 |
20080133228 | Rao | Jun 2008 | A1 |
20080140413 | Millman et al. | Jun 2008 | A1 |
20080140416 | Shostak | Jun 2008 | A1 |
20080140652 | Millman et al. | Jun 2008 | A1 |
20080140657 | Azvine et al. | Jun 2008 | A1 |
20080154612 | Evermann et al. | Jun 2008 | A1 |
20080157867 | Krah | Jul 2008 | A1 |
20080165980 | Pavlovic et al. | Jul 2008 | A1 |
20080189106 | Low et al. | Aug 2008 | A1 |
20080189114 | Fail et al. | Aug 2008 | A1 |
20080208585 | Ativanichayaphong et al. | Aug 2008 | A1 |
20080208587 | Ben-David et al. | Aug 2008 | A1 |
20080221866 | Katragadda et al. | Sep 2008 | A1 |
20080221880 | Cerra et al. | Sep 2008 | A1 |
20080221889 | Cerra et al. | Sep 2008 | A1 |
20080221903 | Kanevsky et al. | Sep 2008 | A1 |
20080228463 | Mori et al. | Sep 2008 | A1 |
20080228490 | Fischer et al. | Sep 2008 | A1 |
20080228496 | Yu et al. | Sep 2008 | A1 |
20080240569 | Tonouchi | Oct 2008 | A1 |
20080247519 | Abella et al. | Oct 2008 | A1 |
20080249770 | Kim et al. | Oct 2008 | A1 |
20080253577 | Eppolito | Oct 2008 | A1 |
20080255845 | Bennett | Oct 2008 | A1 |
20080256613 | Grover | Oct 2008 | A1 |
20080270118 | Kuo et al. | Oct 2008 | A1 |
20080281510 | Shahine | Nov 2008 | A1 |
20080300878 | Bennett | Dec 2008 | A1 |
20080313335 | Jung et al. | Dec 2008 | A1 |
20080319763 | Di Fabbrizio et al. | Dec 2008 | A1 |
20090003115 | Lindahl et al. | Jan 2009 | A1 |
20090005891 | Batson et al. | Jan 2009 | A1 |
20090006100 | Badger et al. | Jan 2009 | A1 |
20090006343 | Platt et al. | Jan 2009 | A1 |
20090006488 | Lindahl et al. | Jan 2009 | A1 |
20090006671 | Batson et al. | Jan 2009 | A1 |
20090011709 | Akasaka et al. | Jan 2009 | A1 |
20090012775 | El Hady et al. | Jan 2009 | A1 |
20090018835 | Cooper et al. | Jan 2009 | A1 |
20090022329 | Mahowald | Jan 2009 | A1 |
20090028435 | Wu et al. | Jan 2009 | A1 |
20090030800 | Grois | Jan 2009 | A1 |
20090048845 | Burckart et al. | Feb 2009 | A1 |
20090055179 | Cho et al. | Feb 2009 | A1 |
20090058823 | Kocienda | Mar 2009 | A1 |
20090060472 | Bull et al. | Mar 2009 | A1 |
20090070097 | Wu et al. | Mar 2009 | A1 |
20090076792 | Lawson-Tancred | Mar 2009 | A1 |
20090076796 | Daraselia | Mar 2009 | A1 |
20090077165 | Rhodes et al. | Mar 2009 | A1 |
20090083047 | Lindahl et al. | Mar 2009 | A1 |
20090092260 | Powers | Apr 2009 | A1 |
20090092261 | Bard | Apr 2009 | A1 |
20090092262 | Costa et al. | Apr 2009 | A1 |
20090094033 | Mozer et al. | Apr 2009 | A1 |
20090100049 | Cao | Apr 2009 | A1 |
20090106026 | Ferrieux | Apr 2009 | A1 |
20090112572 | Thorn | Apr 2009 | A1 |
20090112677 | Rhett | Apr 2009 | A1 |
20090112892 | Cardie et al. | Apr 2009 | A1 |
20090123071 | Iwasaki | May 2009 | A1 |
20090125477 | Lu et al. | May 2009 | A1 |
20090144049 | Haddad et al. | Jun 2009 | A1 |
20090146848 | Ghassabian | Jun 2009 | A1 |
20090150156 | Kennewick et al. | Jun 2009 | A1 |
20090154669 | Wood et al. | Jun 2009 | A1 |
20090157401 | Bennett | Jun 2009 | A1 |
20090164441 | Cheyer | Jun 2009 | A1 |
20090164655 | Pettersson et al. | Jun 2009 | A1 |
20090167508 | Fadell et al. | Jul 2009 | A1 |
20090167509 | Fadell et al. | Jul 2009 | A1 |
20090171664 | Kennewick et al. | Jul 2009 | A1 |
20090172542 | Girish et al. | Jul 2009 | A1 |
20090177461 | Ehsani et al. | Jul 2009 | A1 |
20090182445 | Girish et al. | Jul 2009 | A1 |
20090187577 | Reznik et al. | Jul 2009 | A1 |
20090191895 | Singh et al. | Jul 2009 | A1 |
20090192782 | Drewes | Jul 2009 | A1 |
20090204409 | Mozer et al. | Aug 2009 | A1 |
20090216704 | Zheng et al. | Aug 2009 | A1 |
20090222488 | Boerries et al. | Sep 2009 | A1 |
20090228273 | Wang et al. | Sep 2009 | A1 |
20090234655 | Kwon | Sep 2009 | A1 |
20090239552 | Churchill et al. | Sep 2009 | A1 |
20090248182 | Logan et al. | Oct 2009 | A1 |
20090252350 | Seguin | Oct 2009 | A1 |
20090253457 | Seguin | Oct 2009 | A1 |
20090253463 | Shin et al. | Oct 2009 | A1 |
20090254339 | Seguin | Oct 2009 | A1 |
20090271109 | Lee et al. | Oct 2009 | A1 |
20090271175 | Bodin et al. | Oct 2009 | A1 |
20090271178 | Bodin et al. | Oct 2009 | A1 |
20090287583 | Holmes | Nov 2009 | A1 |
20090290718 | Kahn et al. | Nov 2009 | A1 |
20090299745 | Kennewick et al. | Dec 2009 | A1 |
20090299849 | Cao et al. | Dec 2009 | A1 |
20090306967 | Nicolov et al. | Dec 2009 | A1 |
20090306980 | Shin | Dec 2009 | A1 |
20090306981 | Cromack et al. | Dec 2009 | A1 |
20090306985 | Roberts et al. | Dec 2009 | A1 |
20090306989 | Kaji | Dec 2009 | A1 |
20090307162 | Bui et al. | Dec 2009 | A1 |
20090313026 | Coffman et al. | Dec 2009 | A1 |
20090319266 | Brown et al. | Dec 2009 | A1 |
20090326936 | Nagashima | Dec 2009 | A1 |
20090326938 | Marila et al. | Dec 2009 | A1 |
20100005081 | Bennett | Jan 2010 | A1 |
20100023320 | Di Cristo et al. | Jan 2010 | A1 |
20100030928 | Conroy et al. | Feb 2010 | A1 |
20100031143 | Rao et al. | Feb 2010 | A1 |
20100036660 | Bennett | Feb 2010 | A1 |
20100042400 | Block et al. | Feb 2010 | A1 |
20100049514 | Kennewick et al. | Feb 2010 | A1 |
20100057457 | Ogata et al. | Mar 2010 | A1 |
20100060646 | Unsal et al. | Mar 2010 | A1 |
20100063825 | Williams et al. | Mar 2010 | A1 |
20100064113 | Lindahl et al. | Mar 2010 | A1 |
20100070899 | Hunt et al. | Mar 2010 | A1 |
20100076760 | Kraenzel et al. | Mar 2010 | A1 |
20100081456 | Singh et al. | Apr 2010 | A1 |
20100081487 | Chen et al. | Apr 2010 | A1 |
20100082970 | Lindahl et al. | Apr 2010 | A1 |
20100088020 | Sano et al. | Apr 2010 | A1 |
20100088100 | Lindahl | Apr 2010 | A1 |
20100100212 | Lindahl et al. | Apr 2010 | A1 |
20100100384 | Ju et al. | Apr 2010 | A1 |
20100106500 | McKee et al. | Apr 2010 | A1 |
20100125460 | Mellott et al. | May 2010 | A1 |
20100131273 | Aley-Raz et al. | May 2010 | A1 |
20100138215 | Williams | Jun 2010 | A1 |
20100138224 | Bedingfield, Sr. | Jun 2010 | A1 |
20100138416 | Bellotti | Jun 2010 | A1 |
20100145694 | Ju et al. | Jun 2010 | A1 |
20100145700 | Kennewick et al. | Jun 2010 | A1 |
20100146442 | Nagasaka et al. | Jun 2010 | A1 |
20100161554 | Datuashvili et al. | Jun 2010 | A1 |
20100185448 | Meisel | Jul 2010 | A1 |
20100204986 | Kennewick et al. | Aug 2010 | A1 |
20100217604 | Baldwin et al. | Aug 2010 | A1 |
20100228540 | Bennett | Sep 2010 | A1 |
20100231474 | Yamagajo et al. | Sep 2010 | A1 |
20100235341 | Bennett | Sep 2010 | A1 |
20100257160 | Cao | Oct 2010 | A1 |
20100257478 | Longe et al. | Oct 2010 | A1 |
20100262599 | Nitz | Oct 2010 | A1 |
20100277579 | Cho et al. | Nov 2010 | A1 |
20100278320 | Arsenault et al. | Nov 2010 | A1 |
20100278453 | King | Nov 2010 | A1 |
20100280983 | Cho et al. | Nov 2010 | A1 |
20100286985 | Kennewick et al. | Nov 2010 | A1 |
20100299133 | Kopparapu et al. | Nov 2010 | A1 |
20100299142 | Freeman et al. | Nov 2010 | A1 |
20100312547 | Van Os et al. | Dec 2010 | A1 |
20100312566 | Odinak et al. | Dec 2010 | A1 |
20100318576 | Kim | Dec 2010 | A1 |
20100324905 | Kurzweil et al. | Dec 2010 | A1 |
20100332235 | David | Dec 2010 | A1 |
20100332280 | Bradley et al. | Dec 2010 | A1 |
20100332348 | Cao | Dec 2010 | A1 |
20110010178 | Lee et al. | Jan 2011 | A1 |
20110022952 | Wu et al. | Jan 2011 | A1 |
20110047072 | Ciurea | Feb 2011 | A1 |
20110054901 | Qin et al. | Mar 2011 | A1 |
20110060584 | Ferrucci et al. | Mar 2011 | A1 |
20110060807 | Martin et al. | Mar 2011 | A1 |
20110076994 | Kim et al. | Mar 2011 | A1 |
20110082688 | Kim et al. | Apr 2011 | A1 |
20110090078 | Kim et al. | Apr 2011 | A1 |
20110099000 | Rai et al. | Apr 2011 | A1 |
20110112827 | Kennewick et al. | May 2011 | A1 |
20110112921 | Kennewick et al. | May 2011 | A1 |
20110119049 | Ylonen | May 2011 | A1 |
20110125540 | Jang et al. | May 2011 | A1 |
20110130958 | Stahl et al. | Jun 2011 | A1 |
20110131036 | Dicristo et al. | Jun 2011 | A1 |
20110131045 | Cristo et al. | Jun 2011 | A1 |
20110143811 | Rodriguez | Jun 2011 | A1 |
20110144999 | Jang et al. | Jun 2011 | A1 |
20110161076 | Davis et al. | Jun 2011 | A1 |
20110161309 | Lung et al. | Jun 2011 | A1 |
20110175810 | Markovic et al. | Jul 2011 | A1 |
20110184721 | Subramanian et al. | Jul 2011 | A1 |
20110184730 | LeBeau et al. | Jul 2011 | A1 |
20110195758 | Damale et al. | Aug 2011 | A1 |
20110201387 | Paek et al. | Aug 2011 | A1 |
20110218855 | Cao et al. | Sep 2011 | A1 |
20110224972 | Millett et al. | Sep 2011 | A1 |
20110231182 | Weider et al. | Sep 2011 | A1 |
20110231188 | Kennewick et al. | Sep 2011 | A1 |
20110231474 | Locker et al. | Sep 2011 | A1 |
20110260861 | Singh et al. | Oct 2011 | A1 |
20110264643 | Cao | Oct 2011 | A1 |
20110279368 | Klein et al. | Nov 2011 | A1 |
20110288861 | Kurzweil et al. | Nov 2011 | A1 |
20110298585 | Barry | Dec 2011 | A1 |
20110306426 | Novak et al. | Dec 2011 | A1 |
20110314404 | Kotler et al. | Dec 2011 | A1 |
20120002820 | Leichter | Jan 2012 | A1 |
20120016678 | Gruber et al. | Jan 2012 | A1 |
20120020490 | Leichter | Jan 2012 | A1 |
20120022787 | LeBeau et al. | Jan 2012 | A1 |
20120022857 | Baldwin et al. | Jan 2012 | A1 |
20120022860 | Lloyd et al. | Jan 2012 | A1 |
20120022868 | LeBeau et al. | Jan 2012 | A1 |
20120022869 | Lloyd et al. | Jan 2012 | A1 |
20120022870 | Kristjansson et al. | Jan 2012 | A1 |
20120022872 | Gruber et al. | Jan 2012 | A1 |
20120022874 | Lloyd et al. | Jan 2012 | A1 |
20120022876 | LeBeau et al. | Jan 2012 | A1 |
20120023088 | Cheng et al. | Jan 2012 | A1 |
20120034904 | LeBeau et al. | Feb 2012 | A1 |
20120035908 | Lebeau et al. | Feb 2012 | A1 |
20120035924 | Jitkoff et al. | Feb 2012 | A1 |
20120035931 | LeBeau et al. | Feb 2012 | A1 |
20120035932 | Jitkoff et al. | Feb 2012 | A1 |
20120042343 | Laligand et al. | Feb 2012 | A1 |
20120078627 | Wagner | Mar 2012 | A1 |
20120084086 | Gilbert et al. | Apr 2012 | A1 |
20120108221 | Thomas et al. | May 2012 | A1 |
20120136572 | Norton | May 2012 | A1 |
20120137367 | Dupont et al. | May 2012 | A1 |
20120149394 | Singh et al. | Jun 2012 | A1 |
20120150580 | Norton | Jun 2012 | A1 |
20120173464 | Tur et al. | Jul 2012 | A1 |
20120185237 | Gajic et al. | Jul 2012 | A1 |
20120197998 | Kessel et al. | Aug 2012 | A1 |
20120214517 | Singh et al. | Aug 2012 | A1 |
20120221339 | Wang et al. | Aug 2012 | A1 |
20120245719 | Story, Jr. et al. | Sep 2012 | A1 |
20120245944 | Gruber et al. | Sep 2012 | A1 |
20120265528 | Gruber et al. | Oct 2012 | A1 |
20120271625 | Bernard | Oct 2012 | A1 |
20120271635 | Ljolje | Oct 2012 | A1 |
20120271676 | Aravamudan et al. | Oct 2012 | A1 |
20120284027 | Mallett et al. | Nov 2012 | A1 |
20120296649 | Bansal et al. | Nov 2012 | A1 |
20120309363 | Gruber et al. | Dec 2012 | A1 |
20120310642 | Cao et al. | Dec 2012 | A1 |
20120310649 | Cannistraro et al. | Dec 2012 | A1 |
20120311583 | Gruber et al. | Dec 2012 | A1 |
20120311584 | Gruber et al. | Dec 2012 | A1 |
20120311585 | Gruber et al. | Dec 2012 | A1 |
20120330661 | Lindahl | Dec 2012 | A1 |
20130006638 | Lindahl | Jan 2013 | A1 |
20130110505 | Gruber et al. | May 2013 | A1 |
20130110515 | Guzzoni et al. | May 2013 | A1 |
20130110518 | Gruber et al. | May 2013 | A1 |
20130110519 | Cheyer et al. | May 2013 | A1 |
20130110520 | Cheyer et al. | May 2013 | A1 |
20130111348 | Gruber et al. | May 2013 | A1 |
20130111487 | Cheyer et al. | May 2013 | A1 |
20130115927 | Gruber et al. | May 2013 | A1 |
20130117022 | Chen et al. | May 2013 | A1 |
20130185074 | Gruber et al. | Jul 2013 | A1 |
20130185081 | Cheyer et al. | Jul 2013 | A1 |
20130325443 | Begeja et al. | Dec 2013 | A1 |
Number | Date | Country |
---|---|---|
681573 | Apr 1993 | CH |
3837590 | May 1990 | DE |
19841541 | Dec 2007 | DE |
0138061 | Apr 1985 | EP |
0218859 | Apr 1987 | EP |
0262938 | Apr 1988 | EP |
0138061 | Jun 1988 | EP |
0293259 | Nov 1988 | EP |
0299572 | Jan 1989 | EP |
0313975 | May 1989 | EP |
0314908 | May 1989 | EP |
0327408 | Aug 1989 | EP |
0389271 | Sep 1990 | EP |
0411675 | Feb 1991 | EP |
0558312 | Sep 1993 | EP |
0559349 | Sep 1993 | EP |
0570660 | Nov 1993 | EP |
0863453 | Sep 1998 | EP |
0559349 | Jan 1999 | EP |
0981236 | Feb 2000 | EP |
1229496 | Aug 2002 | EP |
1245023 | Oct 2002 | EP |
1311102 | May 2003 | EP |
1315084 | May 2003 | EP |
1315086 | May 2003 | EP |
2109295 | Oct 2009 | EP |
2293667 | Apr 1996 | GB |
60-19965 | Jan 1994 | JP |
7-199379 | Aug 1995 | JP |
11-6743 | Jan 1999 | JP |
2001-125896 | May 2001 | JP |
2002-14954 | Jan 2002 | JP |
2002-24212 | Jan 2002 | JP |
2003-517158 | May 2003 | JP |
2004-152063 | May 2004 | JP |
2007-4633 | Jan 2007 | JP |
2008-236448 | Oct 2008 | JP |
2008-271481 | Nov 2008 | JP |
2009-36999 | Feb 2009 | JP |
2009-294913 | Dec 2009 | JP |
10-0757496 | Sep 2007 | KR |
10-0776800 | Nov 2007 | KR |
10-0801227 | Feb 2008 | KR |
10-0810500 | Mar 2008 | KR |
10-2008-0109322 | Dec 2008 | KR |
10-2009-0086805 | Aug 2009 | KR |
10-0920267 | Oct 2009 | KR |
10-2010-0119519 | Nov 2010 | KR |
10-1032792 | May 2011 | KR |
10-2011-0113414 | Oct 2011 | KR |
1014847 | Oct 2001 | NL |
9502221 | Jan 1995 | WO |
9710586 | Mar 1997 | WO |
9726612 | Jul 1997 | WO |
9841956 | Sep 1998 | WO |
9901834 | Jan 1999 | WO |
9908238 | Feb 1999 | WO |
9956227 | Nov 1999 | WO |
0029964 | May 2000 | WO |
0060435 | Oct 2000 | WO |
0060435 | Apr 2001 | WO |
0135391 | May 2001 | WO |
02073603 | Sep 2002 | WO |
2004008801 | Jan 2004 | WO |
2006129967 | Dec 2006 | WO |
2007080559 | Jul 2007 | WO |
2008085742 | Jul 2008 | WO |
2008109835 | Sep 2008 | WO |
2010075623 | Jul 2010 | WO |
2011088053 | Jul 2011 | WO |
2011133543 | Oct 2011 | WO |
2012167168 | Dec 2012 | WO |
Entry |
---|
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2010/037378, mailed on Aug. 25, 2010, 14 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2012/040571, mailed on Nov. 16, 2012, 14 pages. |
Extended European Search Report and Search Opinion received for European Patent Application No. 12185276.8, mailed on Dec. 18, 2012, 4 pages. |
Extended European Search Report received for European Patent Application No. 12186663.6, mailed on Jul. 16, 2013, 6 pages. |
“Top 10 Best Practices for Voice User Interface Design” available at <http://www.developer.com/voice/article.php/1567051/Top-10-Best-Practices-for-Voice-UserInterface-Design.htm>, Nov. 1, 2002, 4 pages. |
Apple Computer, “Knowledge Navigator”, published by Apple Computer no later than 2008, as depicted in “Exemplary Screenshots from video entitled “Knowledge Navigator””, 2008, 7 pages. |
Bellegarda, Jerome R., “Latent Semantic Mapping”, IEEE Signal Processing Magazine, vol. 22, No. 5, Sep. 2005, pp. 70-80. |
Car Working Group, "Hands-Free Profile 1.5 HFP1.5_SPEC", Bluetooth Doc, available at <www.bluetooth.org>, Nov. 25, 2005, 93 pages. |
Cohen et al., "Voice User Interface Design", Excerpts from Chapter 1 and Chapter 10, 2004, 36 pages. |
Gong et al., “Guidelines for Handheld Mobile Device Interface Design”, Proceedings of DSI 2004 Annual Meeting, 2004, pp. 3751-3756. |
Horvitz et al., “Handsfree Decision Support: Toward a Non-invasive Human-Computer Interface”, Proceedings of the Symposium on Computer Applications in Medical Care, IEEE Computer Society Press, 1995, p. 955. |
Horvitz et al., “In Pursuit of Effective Handsfree Decision Support: Coupling Bayesian Inference, Speech Understanding, and User Models”, 1995, 8 pages. |
Martin et al., "The Open Agent Architecture: A Framework for Building Distributed Software Systems", Applied Artificial Intelligence: An International Journal, vol. 13, No. 1-2, available at <http://adam.cheyer.com/papers/oaa.pdf>, Jan.-Mar. 1999. |
Schnelle, D., “Context Aware Voice User Interfaces for Workflow Support”, Dissertation paper, Aug. 27, 2007, 254 pages. |
Gruber et al., "Nike: A National Infrastructure for Knowledge Exchange", A Whitepaper Advocating an ATP Initiative on Technologies for Lifelong Learning, Oct. 1994, pp. 1-10. |
Gruber, Tom, “Ontologies, Web 2.0 and Beyond”, Ontology Summit, Available online at <http://tomgruber.org/writing/ontolog-social-web-keynote.htm>, Apr. 2007, 17 pages. |
Gruber, Tom, “Ontology of Folksonomy: A Mash-Up of Apples and Oranges”, Int'l Journal on Semantic Web & Information Systems, vol. 3, No. 2, 2007, 7 pages. |
Gruber, Tom, “Siri, A Virtual Personal Assistant-Bringing Intelligence to the Interface”, Semantic Technologies Conference, Jun. 16, 2009, 21 pages. |
Gruber, Tom, “TagOntology”, Presentation to Tag Camp, Oct. 29, 2005, 20 pages. |
Gruber et al., “Toward a Knowledge Medium for Collaborative Product Development”, Proceedings of the Second International Conference on Artificial Intelligence in Design, Jun. 1992, pp. 1-19. |
Gruber, Thomas R., “Toward Principles for the Design of Ontologies used for Knowledge Sharing?”, International Journal of Human-Computer Studies, vol. 43, No. 5-6, Nov. 1995, pp. 907-928. |
Gruber, Tom, “Where the Social Web Meets the Semantic Web”, Presentation at the 5th International Semantic Web Conference, Nov. 2006, 38 pages. |
Guida et al., “NLI: A Robust Interface for Natural Language Person-Machine Communication”, International Journal of Man-Machine Studies, vol. 17, 1982, 17 pages. |
Guzzoni et al., “A Unified Platform for Building Intelligent Web Interaction Assistants”, Proceedings of the 2006 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, Computer Society, 2006, 4 pages. |
Guzzoni et al., “Active, A Platform for Building Intelligent Operating Rooms”, Surgetica 2007 Computer-Aided Medical Interventions: Tools and Applications, 2007, pp. 191-198. |
Guzzoni et al., "Active, A Platform for Building Intelligent Software", Computational Intelligence, Available online at <http://www.informatik.uni-trier.de/~ley/pers/hd/g/Guzzoni:Didier>, 2006, 5 pages. |
Guzzoni et al., “Active, A Tool for Building Intelligent User Interfaces”, ASC 2007, Palma de Mallorca, Aug. 2007, 6 pages. |
Guzzoni, D., “Active: A Unified Platform for Building Intelligent Assistant Applications”, Oct. 25, 2007, 262 pages. |
Guzzoni et al., “Many Robots Make Short Work”, AAAI Robot Contest, SRI International, 1996, 9 pages. |
Guzzoni et al., “Modeling Human-Agent Interaction with Active Ontologies”, AAAI Spring Symposium, Interaction Challenges for Intelligent Assistants, Stanford University, Palo Alto, California, 2007, 8 pages. |
Haas et al., “An Approach to Acquiring and Applying Knowledge”, SRI international, Nov. 1980, 22 pages. |
Hadidi et al., "Student's Acceptance of Web-Based Course Offerings: An Empirical Assessment", Proceedings of the Americas Conference on Information Systems (AMCIS), 1998, 4 pages. |
Hardwar, Devindra, “Driving App Waze Builds its own Siri for Hands-Free Voice Control”, Available online at <http://venturebeat.com/2012/02/09/driving-app-waze-builds-its-own-siri-for-hands-free-voice-control/>, retrieved on Feb. 9, 2012, 4 pages. |
Harris, F. J., “On the Use of Windows for Harmonic Analysis with the Discrete Fourier Transform”, in Proceedings of the IEEE, vol. 66, No. 1, Jan. 1978, 34 pages. |
Hawkins et al., “Hierarchical Temporal Memory: Concepts, Theory and Terminology”, Numenta, Inc., Mar. 27, 2007, 20 pages. |
He et al., “Personal Security Agent: KQML-Based PKI”, The Robotics Institute, Carnegie-Mellon University, Paper, 1997, 14 pages. |
Helm et al., “Building Visual Language Parsers”, Proceedings of CHI'91, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 1991, 8 pages. |
Hendrix et al., “Developing a Natural Language Interface to Complex Data”, ACM Transactions on Database Systems, vol. 3, No. 2, Jun. 1978, pp. 105-147. |
Hendrix, Gary G., “Human Engineering for Applied Natural Language Processing”, SRI International, Technical Note 139, Feb. 1977, 27 pages. |
Hendrix, Gary G., “Klaus: A System for Managing Information and Computational Resources”, SRI International, Technical Note 230, Oct. 1980, 34 pages. |
Hendrix, Gary G., “Lifer: A Natural Language Interface Facility”, SRI Stanford Research Institute, Technical Note 135, Dec. 1976, 9 pages. |
Hendrix, Gary G., “Natural-Language Interface”, American Journal of Computational Linguistics, vol. 8, No. 2, Apr.-Jun. 1982, pp. 56-61. |
Hendrix, Gary G., “The Lifer Manual: A Guide to Building Practical Natural Language Interfaces”, SRI International, Technical Note 138, Feb. 1977, 76 pages. |
Hendrix et al., “Transportable Natural-Language Interfaces to Databases”, SRI International, Technical Note 228, Apr. 30, 1981, 18 pages. |
Hermansky, H., “Perceptual Linear Predictive (PLP) Analysis of Speech”, Journal of the Acoustical Society of America, vol. 87, No. 4, Apr. 1990, 15 pages. |
Hermansky, H., “Recognition of Speech in Additive and Convolutional Noise Based on Rasta Spectral Processing”, Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'93), Apr. 1993, 4 pages. |
Hirschman et al., “Multi-Site Data Collection and Evaluation in Spoken Language Understanding”, Proceedings of the Workshop on Human Language Technology, 1993, pp. 19-24. |
Hobbs et al., “Fastus: A System for Extracting Information from Natural-Language Text”, SRI International, Technical Note 519, Nov. 19, 1992, 26 pages. |
Hobbs et al., “Fastus: Extracting Information from Natural-Language Texts”, SRI International, 1992, pp. 1-22. |
Hobbs, Jerry R., “Sublanguage and Knowledge”, SRI International, Technical Note 329, Jun. 1984, 30 pages. |
Hodjat et al., “Iterative Statistical Language Model Generation for use with an Agent-Oriented Natural Language Interface”, Proceedings of HCI International, vol. 4, 2003, pp. 1422-1426. |
Hoehfeld et al., “Learning with Limited Numerical Precision Using the Cascade-Correlation Algorithm”, IEEE Transactions on Neural Networks, vol. 3, No. 4, Jul. 1992, 18 pages. |
Holmes, J. N., “Speech Synthesis and Recognition-Stochastic Models for Word Recognition”, Published by Chapman & Hall, London, ISBN 0 412 534304, 1998, 7 pages. |
Hon et al., “CMU Robust Vocabulary-Independent Speech Recognition System”, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP-91), Apr. 1991, 4 pages. |
Huang et al., “The SPHINX-II Speech Recognition System: An Overview”, Computer, Speech and Language, vol. 7, No. 2, 1993, 14 pages. |
IBM, “Integrated Audio-Graphics User Interface”, IBM Technical Disclosure Bulletin, vol. 33, No. 11, Apr. 1991, 4 pages. |
IBM, “Speech Editor”, IBM Technical Disclosure Bulletin, vol. 29, No. 10, Mar. 10, 1987, 3 pages. |
IBM, “Speech Recognition with Hidden Markov Models of Speech Waveforms”, IBM Technical Disclosure Bulletin, vol. 34, No. 1, Jun. 1991, 10 pages. |
Intraspect Software, “The Intraspect Knowledge Management Solution: Technical Overview”, Available online at <http://tomgruber.org/writing/intraspect-whitepaper-1998.pdf>, 1998, 18 pages. |
Iowegian International, "FIR Filter Properties, DSPGuru, Digital Signal Processing Central", Available online at <http://www.dspguru.com/dsp/faq/fir/properties>, retrieved on Jul. 28, 2010, 6 pages. |
Issar et al., “CMU's Robust Spoken Language Understanding System”, Proceedings of Eurospeech, 1993, 4 pages. |
Issar, Sunil, “Estimation of Language Models for New Spoken Language Applications”, Proceedings of 4th International Conference on Spoken language Processing, Oct. 1996, 4 pages. |
Jacobs et al., “Scisor: Extracting Information from On-Line News”, Communications of the ACM, vol. 33, No. 11, Nov. 1990, 10 pages. |
Janas, Jurgen M., “The Semantics-Based Natural Language Interface to Relational Databases”, Chapter 6, Cooperative Interfaces to Information Systems, 1986, pp. 143-188. |
Rabiner et al., "Fundamentals of Speech Recognition", AT&T, Published by Prentice-Hall, Inc., ISBN: 0-13-285826-6, 1993, 17 pages. |
Rabiner et al., “Note on the Properties of a Vector Quantizer for LPC Coefficients”, Bell System Technical Journal, vol. 62, No. 8, Oct. 1983, 9 pages. |
Ratcliffe, M., “ClearAccess 2.0 Allows SQL Searches Off-Line (Structured Query Language) (ClearAccess Corp. Preparing New Version of Data-Access Application with Simplified User Interface, New Features) (Product Announcement)”, MacWeek, vol. 6, No. 41, Nov. 16, 1992, 2 pages. |
Ravishankar, Mosur K., “Efficient Algorithms for Speech Recognition”, Doctoral Thesis Submitted to School of Computer Science, Computer Science Division, Carnegie Mellon University, Pittsburgh, May 15, 1996, 146 pages. |
Rayner, M., “Abductive Equivalential Translation and its Application to Natural Language Database Interfacing”, Dissertation Paper, SRI International, Sep. 1993, 162 pages. |
Rayner et al., "Adapting the Core Language Engine to French and Spanish", Cornell University Library, Available online at <http://arxiv.org/abs/cmp-lg/9605015>, May 10, 1996, 9 pages. |
Rayner et al., “Deriving Database Queries from Logical Forms by Abductive Definition Expansion”, Proceedings of the Third Conference on Applied Natural Language Processing, ANLC, 1992, 8 pages. |
Rayner, Manny, “Linguistic Domain Theories: Natural-Language Database Interfacing from First Principles”, SRI International, Cambridge, 1993, 11 pages. |
Rayner et al., "Spoken Language Translation with Mid-90's Technology: A Case Study", Eurospeech, ISCA, Available online at <http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.54.8608>, 1993, 4 pages. |
Remde et al., “SuperBook: An Automatic Tool for Information Exploration-Hypertext?”, in Proceedings of Hypertext, 87 Papers, Nov. 1987, 14 pages. |
Reynolds, C. F., “On-Line Reviews: A New Application of the HICOM Conferencing System”, IEEE Colloquium on Human Factors in Electronic Mail and Conferencing Systems, Feb. 3, 1989, 4 pages. |
Rice et al., “Monthly Program: Nov. 14, 1995”, The San Francisco Bay Area Chapter of ACM SIGCHI, Available online at <http://www.baychi.org/calendar/19951114>, Nov. 14, 1995, 2 pages. |
Rice et al., “Using the Web Instead of a Window System”, Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI'96, 1996, pp. 1-14. |
Rigoll, G., “Speaker Adaptation for Large Vocabulary Speech Recognition Systems Using Speaker Markov Models”, International Conference on Acoustics, Speech and Signal Processing (ICASSP'89), May 1989, 4 pages. |
Riley, M. D., "Tree-Based Modelling of Segmental Durations", Talking Machines: Theories, Models and Designs, Elsevier Science Publishers B.V., North-Holland, ISBN: 0-444-89115-3, 1992, 15 pages. |
Rivlin et al., “Maestro: Conductor of Multimedia Analysis Technologies”, SRI International, 1999, 7 pages. |
Rivoira et al., “Syntax and Semantics in a Word-Sequence Recognition System”, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'79), Apr. 1979, 5 pages. |
Roddy et al., “Communication and Collaboration in a Landscape of B2B eMarketplaces”, VerticalNet Solutions, White Paper, Jun. 15, 2000, 23 pages. |
Rosenfeld, R., “A Maximum Entropy Approach to Adaptive Statistical Language Modelling”, Computer Speech and Language, vol. 10, No. 3, Jul. 1996, 25 pages. |
Roszkiewicz, A., “Extending your Apple”, Back Talk-Lip Service, A+ Magazine, The Independent Guide for Apple Computing, vol. 2, No. 2, Feb. 1984, 5 pages. |
Rudnicky et al., “Creating Natural Dialogs in the Carnegie Mellon Communicator System”, Proceedings of Eurospeech, vol. 4, 1999, pp. 1531-1534. |
Russell et al., "Artificial Intelligence: A Modern Approach", Prentice Hall, Inc., 1995, 121 pages. |
Sacerdoti et al., “A Ladder User's Guide (Revised)”, SRI International Artificial Intelligence Center, Mar. 1980, 39 pages. |
Sagalowicz, D., “AD-Ladder User's Guide”, SRI International, Sep. 1980, 42 pages. |
Sakoe et al., “Dynamic Programming Algorithm Optimization for Spoken Word Recognition”, IEEE Transactions on Acoustics, Speech and Signal Processing, vol. ASSP-26, No. 1, Feb. 1978, 8 pages. |
Salton et al., “On the Application of Syntactic Methodologies in Automatic Text Analysis”, Information Processing and Management, vol. 26, No. 1, Great Britain, 1990, 22 pages. |
Sameshima et al., "Authorization with Security Attributes and Privilege Delegation: Access Control Beyond the ACL", Computer Communications, vol. 20, 1997, 9 pages. |
San-Segundo et al., “Confidence Measures for Dialogue Management in the CU Communicator System”, Proceedings of Acoustics, Speech and Signal Processing (ICASSP'00), Jun. 2000, 4 pages. |
Sato, H., “A Data Model, Knowledge Base and Natural Language Processing for Sharing a Large Statistical Database”, Statistical and Scientific Database Management, Lecture Notes in Computer Science, vol. 339, 1989, 20 pages. |
Savoy, J., “Searching Information in Hypertext Systems Using Multiple Sources of Evidence”, International Journal of Man-Machine Studies, vol. 38, No. 6, Jun. 1996, 15 pages. |
Scagliola, C., “Language Models and Search Algorithms for Real-Time Speech Recognition”, International Journal of Man-Machine Studies, vol. 22, No. 5, 1985, 25 pages. |
Schmandt et al., “Augmenting a Window System with Speech Input”, IEEE Computer Society, Computer, vol. 23, No. 8, Aug. 1990, 8 pages. |
Schütze, H., “Dimensions of Meaning”, Proceedings of Supercomputing'92 Conference, Nov. 1992, 10 pages. |
Seneff et al., “A New Restaurant Guide Conversational System: Issues in Rapid Prototyping for Specialized Domains”, Proceedings of Fourth International Conference on Spoken Language, vol. 2, 1996, 4 pages. |
Sharoff et al., “Register-Domain Separation as a Methodology for Development of Natural Language Interfaces to Databases”, Proceedings of Human-Computer Interaction (INTERACT'99), 1999, 7 pages. |
Sheth et al., “Evolving Agents for Personalized Information Filtering”, Proceedings of the Ninth Conference on Artificial Intelligence for Applications, Mar. 1993, 9 pages. |
Sheth et al., “Relationships at the Heart of Semantic Web: Modeling, Discovering, and Exploiting Complex Semantic Relationships”, Enhancing the Power of the Internet: Studies in Fuzziness and Soft Computing, Oct. 13, 2002, pp. 1-38. |
Shikano et al., “Speaker Adaptation through Vector Quantization”, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'86), vol. 11, Apr. 1986, 4 pages. |
Shimazu et al., "CAPIT: Natural Language Interface Design Tool with Keyword Analyzer and Case-Based Parser", NEC Research & Development, vol. 33, No. 4, Oct. 1992, 11 pages. |
Shinkle, L., “Team User's Guide”, SRI International, Artificial Intelligence Center, Nov. 1984, 78 pages. |
Shklar et al., “InfoHarness: Use of Automatically Generated Metadata for Search and Retrieval of Heterogeneous Information”, Proceedings of CAiSE'95, Finland, 1995, 14 pages. |
Sigurdsson et al., "Mel Frequency Cepstral Coefficients: An Evaluation of Robustness of MP3 Encoded Music", Proceedings of the 7th International Conference on Music Information Retrieval, 2006, 4 pages. |
Silverman et al., “Using a Sigmoid Transformation for Improved Modeling of Phoneme Duration”, Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Mar. 1999, 5 pages. |
Simonite, Tom, “One Easy Way to Make Siri Smarter”, Technology Review, Oct. 18, 2011, 2 pages. |
Singh, N., “Unifying Heterogeneous Information Models”, Communications of the ACM, 1998, 13 pages. |
SRI International, “The Open Agent Architecture TM 1.0 Distribution”, Open Agent Architecture (OAA), 1999, 2 pages. |
SRI2009, “SRI Speech: Products: Software Development Kits: EduSpeak”, Available online at <http://web.archive.org/web/20090828084033/http://www.speechatsri.com/products/eduspeak/shtml>, 2009, 2 pages. |
Starr et al., “Knowledge-Intensive Query Processing”, Proceedings of the 5th KRDB Workshop, Seattle, May 31, 1998, 6 pages. |
Stent et al., “The CommandTalk Spoken Dialogue System”, SRI International, 1999, pp. 183-190. |
Stern et al., “Multiple Approaches to Robust Speech Recognition”, Proceedings of Speech and Natural Language Workshop, 1992, 6 pages. |
Meng et al., “Wheels: A Conversational System in the Automobile Classified Domain”, Proceedings of Fourth International Conference on Spoken Language, ICSLP 96, vol. 1, Oct. 1996, 4 pages. |
Michos et al., “Towards an Adaptive Natural Language Interface to Command Languages”, Natural Language Engineering, vol. 2, No. 3, 1996, pp. 191-209. |
Milstead et al., "Metadata: Cataloging by Any Other Name", Available online at <http://www.iicm.tugraz.at/thesis/cguetl_diss/literatur/Kapitel06/References/Milstead_et_al_1999/metadata.html>, Jan. 1999, 18 pages. |
Milward et al., "D2.2: Dynamic Multimodal Interface Reconfiguration, Talk and Look: Tools for Ambient Linguistic Knowledge", Available online at <http://www.ihmc.us/users/nblaylock/Pubs/Files/talkd2.2.pdf>, Aug. 8, 2006, 69 pages. |
Minker et al., “Hidden Understanding Models for Machine Translation”, Proceedings of ETRW on Interactive Dialogue in Multi-Modal Systems, Jun. 1999, pp. 1-4. |
Mitra et al., “A Graph-Oriented Model for Articulation of Ontology Interdependencies”, Advances in Database Technology, Lecture Notes in Computer Science, vol. 1777, 2000, pp. 1-15. |
Modi et al., “CMRadar: A Personal Assistant Agent for Calendar Management”, AAAI, Intelligent Systems Demonstrations, 2004, pp. 1020-1021. |
Moore et al., "Combining Linguistic and Statistical Knowledge Sources in Natural-Language Processing for ATIS", SRI International, Artificial Intelligence Center, 1995, 4 pages. |
Moore, Robert C., “Handling Complex Queries in a Distributed Data Base”, SRI International, Technical Note 170, Oct. 8, 1979, 38 pages. |
Moore, Robert C., “Practical Natural-Language Processing by Computer”, SRI International, Technical Note 251, Oct. 1981, 34 pages. |
Moore et al., “SRI's Experience with the ATIS Evaluation”, Proceedings of the Workshop on Speech and Natural Language, Jun. 1990, pp. 147-148. |
Moore et al., “The Information Warfare Advisor: An Architecture for Interacting with Intelligent Agents Across the Web”, Proceedings of Americas Conference on Information Systems (AMCIS), Dec. 31, 1998, pp. 186-188. |
Moore, Robert C., “The Role of Logic in Knowledge Representation and Commonsense Reasoning”, SRI International, Technical Note 264, Jun. 1982, 19 pages. |
Moore, Robert C., “Using Natural-Language Knowledge Sources in Speech Recognition”, SRI International, Artificial Intelligence Center, Jan. 1999, pp. 1-24. |
Moran et al., “Intelligent Agent-Based User Interfaces”, Proceedings of International Workshop on Human Interface Technology, Oct. 1995, pp. 1-4. |
Moran et al., “Multimodal User Interfaces in the Open Agent Architecture”, International Conference on Intelligent User Interfaces (IUI97), 1997, 8 pages. |
Moran, Douglas B., “Quantifier Scoping in the SRI Core Language Engine”, Proceedings of the 26th Annual Meeting on Association for Computational Linguistics, 1988, pp. 33-40. |
Morgan, B., “Business Objects (Business Objects for Windows) Business Objects Inc.”, DBMS, vol. 5, No. 10, Sep. 1992, 3 pages. |
Motro, Amihai, “Flex: A Tolerant and Cooperative User Interface to Databases”, IEEE Transactions on Knowledge and Data Engineering, vol. 2, No. 2, Jun. 1990, pp. 231-246. |
Mountford et al., “Talking and Listening to Computers”, The Art of Human-Computer Interface Design, Apple Computer, Inc., Addison-Wesley Publishing Company, Inc., 1990, 17 pages. |
Mozer, Michael C., “An Intelligent Environment must be Adaptive”, IEEE Intelligent Systems, 1999, pp. 11-13. |
Muhlhauser, Max, “Context Aware Voice User Interfaces for Workflow Support”, 2007, 254 pages. |
Murty et al., “Combining Evidence from Residual Phase and MFCC Features for Speaker Recognition”, IEEE Signal Processing Letters, vol. 13, No. 1, Jan. 2006, 4 pages. |
Murveit et al., “Integrating Natural Language Constraints into HMM-Based Speech Recognition”, International Conference on Acoustics, Speech and Signal Processing, Apr. 1990, 5 pages. |
Murveit et al., “Speech Recognition in SRI's Resource Management and ATIS Systems”, Proceedings of the Workshop on Speech and Natural Language, 1991, pp. 94-100. |
Nakagawa et al., “Speaker Recognition by Combining MFCC and Phase Information”, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Mar. 2010, 4 pages. |
Naone, Erica, “TR10: Intelligent Software Assistant”, Technology Review, Mar.-Apr. 2009, 2 pages. |
Neches et al., “Enabling Technology for Knowledge Sharing”, Fall, 1991, pp. 37-56. |
Niesler et al., “A Variable-Length Category-Based N-Gram Language Model”, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'96), vol. 1, May 1996, 6 pages. |
Noth et al., “Verbmobil: The Use of Prosody in the Linguistic Components of a Speech Understanding System”, IEEE Transactions on Speech and Audio Processing, vol. 8, No. 5, Sep. 2000, pp. 519-532. |
Odubiyi et al., “SAIRE-A Scalable Agent-Based Information Retrieval Engine”, Proceedings of the First International Conference on Autonomous Agents, 1997, 12 pages. |
Owei et al., “Natural Language Query Filtration in the Conceptual Query Language”, IEEE, 1997, pp. 539-549. |
Pannu et al., “A Learning Personal Agent for Text Filtering and Notification”, Proceedings of the International Conference of Knowledge Based Systems, 1996, pp. 1-11. |
Papadimitriou et al., “Latent Semantic Indexing: A Probabilistic Analysis”, Available online at <http://citeseerx.ist.psu.edu/messages/downloadsexceeded.html>, Nov. 14, 1997, 21 pages. |
Parsons, T. W., “Voice and Speech Processing”, Pitch and Formant Estimation, McGraw-Hill, Inc., ISBN: 0-07-0485541-0, 1987, 15 pages. |
Parsons, T. W., “Voice and Speech Processing”, Linguistics and Technical Fundamentals, Articulatory Phonetics and Phonemics, McGraw-Hill, Inc., ISBN: 0-07-0485541-0, 1987, 5 pages. |
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US1993/012637, issued on Apr. 10, 1995, 7 pages. |
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US1993/012666, issued on Mar. 1, 1995, 5 pages. |
International Search Report received for PCT Patent Application No. PCT/US1993/012666, mailed on Nov. 9, 1994, 8 pages. |
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US1994/011011, issued on Feb. 28, 1996, 4 pages. |
International Search Report received for PCT Patent Application No. PCT/US1994/011011, mailed on Feb. 8, 1995, 7 pages. |
Written Opinion received for PCT Patent Application No. PCT/US1994/011011, mailed on Aug. 21, 1995, 4 pages. |
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US1995/008369, issued on Oct. 9, 1996, 4 pages. |
International Search Report received for PCT Patent Application No. PCT/US1995/008369, mailed on Nov. 8, 1995, 6 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2011/020861, mailed on Nov. 29, 2011, 12 pages. |
Pereira, Fernando, “Logic for Natural Language Analysis”, SRI International, Technical Note 275, Jan. 1983, 194 pages. |
Perrault et al., “Natural-Language Interfaces”, SRI International, Technical Note 393, Aug. 22, 1986, 48 pages. |
Phoenix Solutions, Inc., “Declaration of Christopher Schmandt Regarding the MIT Galaxy System”, West Interactive Corp., A Delaware Corporation, Document 40, Jul. 2, 2010, 162 pages. |
Picone, J., “Continuous Speech Recognition using Hidden Markov Models”, IEEE ASSP Magazine, vol. 7, No. 3, Jul. 1990, 16 pages. |
Pulman et al., “Clare: A Combined Language and Reasoning Engine”, Proceedings of JFIT Conference, Available online at <http://www.cam.sri.com/tr/crc042/paper.ps.Z>, 1993, 8 pages. |
“Mel Scale”, Wikipedia, the Free Encyclopedia, Last modified on Oct. 13, 2009 and retrieved on Jul. 28, 2010, Available online at <http://en.wikipedia.org/wiki/Mel_scale>, 2 pages. |
“Minimum Phase”, Wikipedia, the Free Encyclopedia, Last modified on Jan. 12, 2010 and retrieved on Jul. 28, 2010, Available online at <http://en.wikipedia.org/wiki/Minimum_phase>, 8 pages. |
Notice of Allowance received for U.S. Appl. No. 12/712,988, mailed on Nov. 20, 2013, 12 pages. |
Acero et al., “Environmental Robustness in Automatic Speech Recognition”, International Conference on Acoustics, Speech and Signal Processing (ICASSSP'90), Apr. 1990, 4 pages. |
Acero et al., “Robust Speech Recognition by Normalization of the Acoustic Space”, International Conference on Acoustics, Speech and Signal Processing, 1991, 4 pages. |
Agnas et al., “Spoken Language Translator: First-Year Report”, SICS (ISSN 0283-3638), SRI and Telia Research AB, Jan. 1994, 161 pages. |
Ahlbom et al., “Modeling Spectral Speech Transitions Using Temporal Decomposition Techniques”, IEEE International Conference of Acoustics, Speech and Signal Processing (ICASSP'87), vol. 12, Apr. 1987, 4 pages. |
Aikawa et al., “Speech Recognition Using Time-Warping Neural Networks”, Proceedings of the 1991, IEEE Workshop on Neural Networks for Signal Processing, 1991, 10 pages. |
Alfred APP, “Alfred”, Available online at <http://www.alfredapp.com/>, retrieved on Feb. 8, 2012, 5 pages. |
Allen, J., “Natural Language Understanding”, 2nd Edition, The Benjamin/Cummings Publishing Company, Inc., 1995, 671 pages. |
Alshawi et al., “CLARE: A Contextual Reasoning and Co-operative Response Framework for the Core Language Engine”, SRI International, Cambridge Computer Science Research Centre, Cambridge, Dec. 1992, 273 pages. |
Alshawi et al., “Declarative Derivation of Database Queries from Meaning Representations”, Proceedings of the BANKAI Workshop on Intelligent Information Access, Oct. 1991, 12 pages. |
Alshawi et al., “Logical Forms in the Core Language Engine”, Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics, 1989, pp. 25-32. |
Alshawi et al., “Overview of the Core Language Engine”, Proceedings of Future Generation Computing Systems, Tokyo, 13 pages. |
Alshawi, H., “Translation and Monotonic Interpretation/Generation”, SRI International, Cambridge Computer Science Research Centre, Cambridge, Available online at <http://www.cam.sri.com/tr/crc024/paper.ps.Z 1992>, Jul. 1992, 18 pages. |
Ambite et al., “Design and Implementation of the CALO Query Manager”, American Association for Artificial Intelligence, 2006, 8 pages. |
Ambite et al., “Integration of Heterogeneous Knowledge Sources in the CALO Query Manager”, The 4th International Conference on Ontologies, Databases and Applications of Semantics (ODBASE), 2005, 18 pages. |
Anastasakos et al., “Duration Modeling in Large Vocabulary Speech Recognition”, International Conference on Acoustics, Speech and Signal Processing (ICASSP'95), May 1995, pp. 628-631. |
Anderson et al., “Syntax-Directed Recognition of Hand-Printed Two-Dimensional Mathematics”, Proceedings of Symposium on Interactive Systems for Experimental Applied Mathematics: Proceedings of the Association for Computing Machinery Inc. Symposium, 1967, 12 pages. |
Ansari et al., “Pitch Modification of Speech using a Low-Sensitivity Inverse Filter Approach”, IEEE Signal Processing Letters, vol. 5, No. 3, Mar. 1998, pp. 60-62. |
Anthony et al., “Supervised Adaption for Signature Verification System”, IBM Technical Disclosure, Jun. 1, 1978, 3 pages. |
Appelt et al., “Fastus: A Finite-State Processor for Information Extraction from Real-world Text”, Proceedings of IJCAI, 1993, 8 pages. |
Appelt et al., “SRI International Fastus System MUC-6 Test Results and Analysis”, SRI International, Menlo Park, California, 1995, 12 pages. |
Apple Computer, “Guide Maker User's Guide”, Apple Computer, Inc., Apr. 27, 1994, 8 pages. |
Apple Computer, “Introduction to Apple Guide”, Apple Computer, Inc., Apr. 28, 1994, 20 pages. |
Archbold et al., “A Team User's Guide”, SRI International, Dec. 21, 1981, 70 pages. |
Asanovic et al., “Experimental Determination of Precision Requirements for Back-Propagation Training of Artificial Neural Networks”, Proceedings of the 2nd International Conference of Microelectronics for Neural Networks, www.ICSI.Berkeley.EDU, 1991, 7 pages. |
Atal et al., “Efficient Coding of LPC Parameters by Temporal Decomposition”, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'83), Apr. 1983, 4 pages. |
Bahl et al., “A Maximum Likelihood Approach to Continuous Speech Recognition”, IEEE Transaction on Pattern Analysis and Machine Intelligence, vol. PAMI-5, No. 2, Mar. 1983, 13 pages. |
Bahl et al., “A Tree-Based Statistical Language Model for Natural Language Speech Recognition”, IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 37, No. 7, Jul. 1989, 8 pages. |
Bahl et al., “Acoustic Markov Models Used in the Tangora Speech Recognition System”, Proceeding of International Conference on Acoustics, Speech and Signal Processing (ICASSP'88), vol. 1, Apr. 1988, 4 pages. |
Bahl et al., “Large Vocabulary Natural Language Continuous Speech Recognition”, Proceedings of 1989 International Conference on Acoustics, Speech and Signal Processing, vol. 1, May 1989, 6 pages. |
Bahl et al., “Multonic Markov Word Models for Large Vocabulary Continuous Speech Recognition”, IEEE Transactions on Speech and Audio Processing, vol. 1, No. 3, Jul. 1993, 11 pages. |
Bahl et al., “Speech Recognition with Continuous-Parameter Hidden Markov Models”, Proceeding of International Conference on Acoustics, Speech and Signal Processing (ICASSP'88), vol. 1, Apr. 1988, 8 pages. |
Banbrook, M., “Nonlinear Analysis of Speech from a Synthesis Perspective”, A Thesis Submitted for the Degree of Doctor of Philosophy, The University of Edinburgh, Oct. 15, 1996, 35 pages. |
Bear et al., “A System for Labeling Self-Repairs in Speech”, SRI International, Feb. 22, 1993, 9 pages. |
Bear et al., “Detection and Correction of Repairs in Human-Computer Dialog”, SRI International, May 1992, 11 pages. |
Bear et al., “Integrating Multiple Knowledge Sources for Detection and Correction of Repairs in Human-Computer Dialog”, Proceedings of the 30th Annual Meeting on Association for Computational Linguistics (ACL), 1992, 8 pages. |
Bear et al., “Using Information Extraction to Improve Document Retrieval”, SRI International, Menlo Park, California, 1998, 11 pages. |
Belaid et al., “A Syntactic Approach for Handwritten Mathematical Formula Recognition”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-6, No. 1, Jan. 1984, 7 pages. |
Bellegarda, Jerome R., “Exploiting both Local and Global Constraints for Multi-Span Statistical Language Modeling”, Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'98), vol. 2, May 1998, 5 pages. |
Bellegarda et al., “A Latent Semantic Analysis Framework for Large-Span Language Modeling”, 5th European Conference on Speech, Communication and Technology (EUROSPEECH'97), Sep. 1997, 4 pages. |
Bellegarda et al., “A Multispan Language Modeling Framework for Large Vocabulary Speech Recognition”, IEEE Transactions on Speech and Audio Processing, vol. 6, No. 5, Sep. 1998, 12 pages. |
Bellegarda et al., “A Novel Word Clustering Algorithm Based on Latent Semantic Analysis”, Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP'96), vol. 1, 1996, 4 pages. |
Bellegarda et al., “Experiments Using Data Augmentation for Speaker Adaptation”, International Conference on Acoustics, Speech and Signal Processing (ICASSP'95), May 1995, 4 pages. |
Bellegarda, Jerome R., “Exploiting Latent Semantic Information in Statistical Language Modeling”, Proceedings of the IEEE, vol. 88, No. 8, Aug. 2000, 18 pages. |
Bellegarda, Jerome R., “Interaction-Driven Speech Input-A Data-Driven Approach to the Capture of both Local and Global Language Constraints”, Available online at <http://old.sigchi.org/bulletin/1998.2/bellegarda.html>, 1992, 7 pages. |
Bellegarda, Jerome R., “Large Vocabulary Speech Recognition with Multispan Statistical Language Models”, IEEE Transactions on Speech and Audio Processing, vol. 8, No. 1, Jan. 2000, 9 pages. |
Bellegarda et al., “On-Line Handwriting Recognition using Statistical Mixtures”, Advances in Handwriting and Drawings: A Multidisciplinary Approach, Europia, 6th International IGS Conference on Handwriting and Drawing, Paris, France, Jul. 1993, 11 pages. |
Appelt et al., “SRI: Description of the JV-FASTUS System used for MUC-5”, SRI International, Artificial Intelligence Center, 1993, 19 pages. |
Jelinek, F., “Self-Organized Language Modeling for Speech Recognition”, Readings in Speech Recognition, Edited by Alex Waibel and Kai-Fu Lee, Morgan Kaufmann Publishers, Inc., ISBN: 1-55860-124-4, 1990, 63 pages. |
Jennings et al., “A Personal News Service Based on a User Model Neural Network”, IEICE Transactions on Information and Systems, vol. E75-D, No. 2, Mar. 1992, 12 pages. |
Ji et al., “A Method for Chinese Syllables Recognition Based upon Sub-syllable Hidden Markov Model”, 1994 International Symposium on Speech, Image Processing and Neural Networks, Hong Kong, Apr. 1994, 4 pages. |
Johnson, Julia Ann., “A Data Management Strategy for Transportable Natural Language Interfaces”, Doctoral Thesis Submitted to the Department of Computer Science, University of British Columbia, Canada, Jun. 1989, 285 pages. |
Jones, J., “Speech Recognition for Cyclone”, Apple Computer, Inc., E.R.S. Revision 2.9, Sep. 10, 1992, 93 pages. |
Julia et al., “Http://www.speech.sri.com/demos/atis.html”, Proceedings of AAAI, Spring Symposium, 1997, 5 pages. |
Julia et al., “Un Editeur Interactif De Tableaux Dessines a Main Levee (An Interactive Editor for Hand-Sketched Tables)”, Traitement du Signal, vol. 12, No. 6, 1995, pp. 619-626. |
Kahn et al., “CoABS Grid Scalability Experiments”, Autonomous Agents and Multi-Agent Systems, vol. 7, 2003, pp. 171-178. |
Kamel et al., “A Graph Based Knowledge Retrieval System”, IEEE International Conference on Systems, Man and Cybernetics, 1990, pp. 269-275. |
Karp, P. D., “A Generic Knowledge-Base Access Protocol”, Available online at <http://lecture.cs.buu.ac.th/~f50353/Document/gfp.pdf>, May 12, 1994, 66 pages. |
Katz, Boris, “A Three-Step Procedure for Language Generation”, Massachusetts Institute of Technology, A.I. Memo No. 599, Dec. 1980, pp. 1-40. |
Katz, Boris, “Annotating the World Wide Web Using Natural Language”, Proceedings of the 5th RIAO Conference on Computer Assisted Information Searching on the Internet, 1997, 7 pages. |
Katz, S. M., “Estimation of Probabilities from Sparse Data for the Language Model Component of a Speech Recognizer”, IEEE Transactions on Acoustics, Speech and Signal Processing, vol. ASSP-35, No. 3, Mar. 1987, 3 pages. |
Katz et al., “Exploiting Lexical Regularities in Designing Natural Language Systems”, Proceedings of the 12th International Conference on Computational Linguistics, 1988, pp. 1-22. |
Katz et al., “REXTOR: A System for Generating Relations from Natural Language”, Proceedings of the ACL Workshop on Natural Language Processing and Information Retrieval (NLP&IR), Oct. 2000, 11 pages. |
Katz, Boris, “Using English for Indexing and Retrieving”, Proceedings of the 1st RIAO Conference on User-Oriented Content-Based Text and Image Handling, 1988, pp. 314-332. |
Kitano, H., “PhiDM-Dialog, An Experimental Speech-to-Speech Dialog Translation System”, Computer, vol. 24, No. 6, Jun. 1991, 13 pages. |
Klabbers et al., “Reducing Audible Spectral Discontinuities”, IEEE Transactions on Speech and Audio Processing, vol. 9, No. 1, Jan. 2001, 13 pages. |
Klatt et al., “Linguistic Uses of Segmental Duration in English: Acoustic and Perceptual Evidence”, Journal of the Acoustical Society of America, vol. 59, No. 5, May 1976, 16 pages. |
Knownav, “Knowledge Navigator”, YouTube Video available at <http://www.youtube.com/watch?v=QRH8eimU_20>, Apr. 29, 2008, 1 page. |
Kominek et al., “Impact of Durational Outlier Removal from Unit Selection Catalogs”, 5th ISCA Speech Synthesis Workshop, Jun. 14-16, 2004, 6 pages. |
Konolige, Kurt, “A Framework for a Portable Natural-Language Interface to Large Data Bases”, SRI International, Technical Note 197, Oct. 12, 1979, 54 pages. |
Kubala et al., “Speaker Adaptation from a Speaker-Independent Training Corpus”, International Conference on Acoustics, Speech and Signal Processing (ICASSP'90), Apr. 1990, 4 pages. |
Kubala et al., “The Hub and Spoke Paradigm for CSR Evaluation”, Proceedings of the Spoken Language Technology Workshop, Mar. 1994, 9 pages. |
Laird et al., “SOAR: An Architecture for General Intelligence”, Artificial Intelligence, vol. 33, 1987, pp. 1-64. |
Langley et al., “A Design for the ICARUS Architecture”, SIGART Bulletin, vol. 2, No. 4, 1991, pp. 104-109. |
Larks, “Intelligent Software Agents”, Available online at <http://www.cs.cmu.edu/˜softagents/larks.html> retrieved on Mar. 15, 2013, 2 pages. |
Lee et al., “A Real-Time Mandarin Dictation Machine for Chinese Language with Unlimited Texts and Very Large Vocabulary”, International Conference on Acoustics, Speech and Signal Processing, vol. 1, Apr. 1990, 5 pages. |
Lee et al., “Golden Mandarin (II)-An Improved Single-Chip Real-Time Mandarin Dictation Machine for Chinese Language with Very Large Vocabulary”, IEEE International Conference of Acoustics, Speech and Signal Processing, vol. 2, 1993, 4 pages. |
Lee et al., “Golden Mandarin (II)-An Intelligent Mandarin Dictation Machine for Chinese Character Input with Adaptation/Learning Functions”, International Symposium on Speech, Image Processing and Neural Networks, Hong Kong, Apr. 1994, 5 pages. |
Lee, K. F., “Large-Vocabulary Speaker-Independent Continuous Speech Recognition: The SPHINX System”, Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy, Computer Science Department, Carnegie Mellon University, Apr. 1988, 195 pages. |
Lee et al., “System Description of Golden Mandarin (I) Voice Input for Unlimited Chinese Characters”, International Conference on Computer Processing of Chinese & Oriental Languages, vol. 5, No. 3 & 4, Nov. 1991, 16 pages. |
Lemon et al., “Multithreaded Context for Robust Conversational Interfaces: Context-Sensitive Speech Recognition and Interpretation of Corrective Fragments”, ACM Transactions on Computer-Human Interaction, vol. 11, No. 3, Sep. 2004, pp. 241-267. |
Leong et al., “CASIS: A Context-Aware Speech Interface System”, Proceedings of the 10th International Conference on Intelligent User Interfaces, Jan. 2005, pp. 231-238. |
Lieberman et al., “Out of Context: Computer Systems that Adapt to, and Learn from, Context”, IBM Systems Journal, vol. 39, No. 3 & 4, 2000, pp. 617-632. |
Lin et al., “A Distributed Architecture for Cooperative Spoken Dialogue Agents with Coherent Dialogue State and History”, Available online at <http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.42.272>, 1999, 4 pages. |
Lin et al., “A New Framework for Recognition of Mandarin Syllables with Tones Using Sub-syllabic Units”, IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP-93), Apr. 1993, 4 pages. |
Linde et al., “An Algorithm for Vector Quantizer Design”, IEEE Transactions on Communications, vol. 28, No. 1, Jan. 1980, 12 pages. |
Liu et al., “Efficient Joint Compensation of Speech for the Effects of Additive Noise and Linear Filtering”, IEEE International Conference of Acoustics, Speech and Signal Processing, ICASSP-92, Mar. 1992, 4 pages. |
Logan et al., “Mel Frequency Cepstral Co-efficients for Music Modeling”, International Symposium on Music Information Retrieval, 2000, 2 pages. |
Lowerre, B. T., “The Harpy Speech Recognition System”, Doctoral Dissertation, Department of Computer Science, Carnegie Mellon University, Apr. 1976, 20 pages. |
Maghbouleh, Arman, “An Empirical Comparison of Automatic Decision Tree and Linear Regression Models for Vowel Durations”, Revised Version of a Paper Presented at the Computational Phonology in Speech Technology Workshop, 1996 Annual Meeting of the Association for Computational Linguistics in Santa Cruz, California, 7 pages. |
Markel et al., “Linear Prediction of Speech”, Springer-Verlag, Berlin, Heidelberg, New York, 1976, 12 pages. |
Martin et al., “Building and Using Practical Agent Applications”, SRI International, PAAM Tutorial, 1998, 78 pages. |
Martin et al., “Building Distributed Software Systems with the Open Agent Architecture”, Proceedings of the Third International Conference on the Practical Application of Intelligent Agents and Multi-Agent Technology, Mar. 1998, pp. 355-376. |
Martin et al., “Development Tools for the Open Agent Architecture”, Proceedings of the International Conference on the Practical Application of Intelligent Agents and Multi-Agent Technology, Apr. 1996, pp. 1-17. |
Martin et al., “Information Brokering in an Agent Architecture”, Proceedings of the Second International Conference on the Practical Application of Intelligent Agents and Multi-Agent Technology, Apr. 1997, pp. 1-20. |
Martin et al., “Transportability and Generality in a Natural-Language Interface System”, Proceedings of the Eighth International Joint Conference on Artificial Intelligence, Technical Note 293, Aug. 1983, 21 pages. |
Matiasek et al., “Tamic-P: A System for NL Access to Social Insurance Database”, 4th International Conference on Applications of Natural Language to Information Systems, Jun. 1999, 7 pages. |
McGuire et al., “SHADE: Technology for Knowledge-Based Collaborative Engineering”, Journal of Concurrent Engineering Applications and Research (CERA), 1993, 18 pages. |
Zue et al., “The Voyager Speech Understanding System: Preliminary Development and Evaluation”, Proceedings of IEEE, International Conference on Acoustics, Speech and Signal Processing, 1990, 4 pages. |
Zue, Victor W., “Toward Systems that Understand Spoken Language”, ARPA Strategic Computing Institute, Feb. 1994, 9 pages. |
Domingue et al., “Web Service Modeling Ontology (WSMO)-An Ontology for Semantic Web Services”, Position Paper at the W3C Workshop on Frameworks for Semantics in Web Services, Innsbruck, Austria, Jun. 2005, 6 pages. |
Donovan, R. E., “A New Distance Measure for Costing Spectral Discontinuities in Concatenative Speech Synthesisers”, Available online at <http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.21.6398>, 2001, 4 pages. |
Dowding et al., “Gemini: A Natural Language System for Spoken-Language Understanding”, Proceedings of the Thirty-First Annual Meeting of the Association for Computational Linguistics, 1993, 8 pages. |
Dowding et al., “Interleaving Syntax and Semantics in an Efficient Bottom-Up Parser”, Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, 1994, 7 pages. |
Elio et al., “On Abstract Task Models and Conversation Policies”, Proc. Workshop on Specifying and Implementing Conversation Policies, Autonomous Agents'99 Conference, 1999, pp. 1-10. |
Epstein et al., “Natural Language Access to a Melanoma Data Base”, SRI International, Sep. 1978, 7 pages. |
Ericsson et al., “Software Illustrating a Unified Approach to Multimodality and Multilinguality in the In-Home Domain”, Talk and Look: Tools for Ambient Linguistic Knowledge, Dec. 2006, 127 pages. |
Evi, “Meet Evi: The One Mobile Application that Provides Solutions for your Everyday Problems”, Feb. 2012, 3 pages. |
Exhibit 1, “Natural Language Interface Using Constrained Intermediate Dictionary of Results”, List of Publications Manually Reviewed for the Search of U.S. Pat. No. 7,177,798, Mar. 22, 2013, 1 page. |
Feigenbaum et al., “Computer-Assisted Semantic Annotation of Scientific Life Works”, Oct. 15, 2007, 22 pages. |
Ferguson et al., “TRIPS: An Integrated Intelligent Problem-Solving Assistant”, Proceedings of the Fifteenth National Conference on Artificial Intelligence (AAAI-98) and Tenth Conference on Innovative Applications of Artificial Intelligence (IAAI-98), 1998, 7 pages. |
Fikes et al., “A Network-Based Knowledge Representation and its Natural Deduction System”, SRI International, Jul. 1977, 43 pages. |
Frisse, M. E., “Searching for Information in a Hypertext Medical Handbook”, Communications of the ACM, vol. 31, No. 7, Jul. 1988, 8 pages. |
Gamback et al., “The Swedish Core Language Engine”, NOTEX Conference, 1992, 17 pages. |
Gannes, Liz, “Alfred App Gives Personalized Restaurant Recommendations”, AllThingsD, Jul. 18, 2011, pp. 1-3. |
Gautier et al., “Generating Explanations of Device Behavior Using Compositional Modeling and Causal Ordering”, CiteSeerx, 1993, pp. 89-97. |
Gervasio et al., “Active Preference Learning for Personalized Calendar Scheduling Assistance”, CiteSeerx, Proceedings of IUI'05, Jan. 2005, pp. 90-97. |
Glass, Alyssa, “Explaining Preference Learning”, CiteSeerx, 2006, pp. 1-5. |
Glass et al., “Multilingual Language Generation Across Multiple Domains”, International Conference on Spoken Language Processing, Japan, Sep. 1994, 5 pages. |
Glass et al., “Multilingual Spoken-Language Understanding in the MIT Voyager System”, Available online at <http://groups.csail.mit.edu/sls/publications/1995/speechcomm95-voyager.pdf>, Aug. 1995, 29 pages. |
Goddeau et al., “A Form-Based Dialogue Manager for Spoken Language Applications”, Available online at <http://phasedance.com/pdf/icslp96.pdf>, Oct. 1996, 4 pages. |
Goddeau et al., “Galaxy: A Human-Language Interface to On-Line Travel Information”, International Conference on Spoken Language Processing, Yokohama, 1994, pp. 707-710. |
Goldberg et al., “Using Collaborative Filtering to Weave an Information Tapestry”, Communications of the ACM, vol. 35, No. 12, Dec. 1992, 10 pages. |
Gorin et al., “On Adaptive Acquisition of Language”, International Conference on Acoustics, Speech and Signal Processing (ICASSP'90), vol. 1, Apr. 1990, 5 pages. |
Gotoh et al., “Document Space Models Using Latent Semantic Analysis”, in Proceedings of Eurospeech, 1997, 4 pages. |
Gray, R. M., “Vector Quantization”, IEEE ASSP Magazine, Apr. 1984, 26 pages. |
Green, C., “The Application of Theorem Proving to Question-Answering Systems”, SRI Stanford Research Institute, Artificial Intelligence Group, Jun. 1969, 169 pages. |
Gregg et al., “DSS Access on the WWW: An Intelligent Agent Prototype”, Proceedings of the Americas Conference on Information Systems, Association for Information Systems, 1998, 3 pages. |
Grishman et al., “Computational Linguistics: An Introduction”, Cambridge University Press, 1986, 172 pages. |
Grosz et al., “Dialogic: A Core Natural-Language Processing System”, SRI International, Nov. 1982, 17 pages. |
Grosz et al., “Research on Natural-Language Processing at SRI”, SRI International, Nov. 1981, 21 pages. |
Grosz, B., “Team: A Transportable Natural-Language Interface System”, Proceedings of the First Conference on Applied Natural Language Processing, 1983, 7 pages. |
Grosz et al., “Team: An Experiment in the Design of Transportable Natural-Language Interfaces”, Artificial Intelligence, vol. 32, 1987, 71 pages. |
Gruber, Tom, “(Avoiding) the Travesty of the Commons”, Presentation at NPUC, New Paradigms for User Computing, IBM Almaden Research Center, Jul. 24, 2006, 52 pages. |
Gruber, Tom, “2021: Mass Collaboration and the Really New Economy”, TNTY Futures, vol. 1, No. 6, Available online at <http://tomgruber.org/writing/tnty2001.htm>, Aug. 2001, 5 pages. |
Gruber, Thomas R., “A Translation Approach to Portable Ontology Specifications”, Knowledge Acquisition, vol. 5, No. 2, Jun. 1993, pp. 199-220. |
Gruber et al., “An Ontology for Engineering Mathematics”, Fourth International Conference on Principles of Knowledge Representation and Reasoning, Available online at <http://www-ksl.stanford.edu/knowledge-sharing/papers/engmath.html>, 1994, pp. 1-22. |
Gruber, Thomas R., “Automated Knowledge Acquisition for Strategic Knowledge”, Machine Learning, vol. 4, 1989, pp. 293-336. |
Gruber, Tom, “Big Think Small Screen: How Semantic Computing in the Cloud will Revolutionize the Consumer Experience on the Phone”, Keynote Presentation at Web 3.0 Conference, Jan. 2010, 41 pages. |
Gruber, Tom, “Collaborating Around Shared Content on the WWW, W3C Workshop on WWW and Collaboration”, Available online at <http://www.w3.org/Collaboration/Workshop/Proceedings/P9.html>, Sep. 1995, 1 page. |
Gruber, Tom, “Collective Knowledge Systems: Where the Social Web Meets the Semantic Web”, Web Semantics: Science, Services and Agents on the World Wide Web, 2007, pp. 1-19. |
Gruber, Tom, “Despite Our Best Efforts, Ontologies are not the Problem”, AAAI Spring Symposium, Available online at <http://tomgruber.org/writing/aaai-ss08.htm>, Mar. 2008, pp. 1-40. |
Gruber, Tom, “Enterprise Collaboration Management with Intraspect”, Intraspect Technical White Paper, Jul. 2001, pp. 1-24. |
Gruber, Tom, “Every Ontology is a Treaty-A Social Agreement-Among People with Some Common Motive in Sharing”, Official Quarterly Bulletin of AIS Special Interest Group on Semantic Web and Information Systems, vol. 1, No. 3, 2004, pp. 1-5. |
Gruber et al., “Generative Design Rationale: Beyond the Record and Replay Paradigm”, Knowledge Systems Laboratory, Technical Report KSL 92-59, Dec. 1991, Updated Feb. 1993, 24 pages. |
Gruber, Tom, “Helping Organizations Collaborate, Communicate, and Learn”, Presentation to NASA Ames Research, Available online at <http://tomgruber.org/writing/organizational-intelligence-talk.htm>, Mar.-Oct. 2003, 30 pages. |
Gruber, Tom, “Intelligence at the Interface: Semantic Technology and the Consumer Internet Experience”, Presentation at Semantic Technologies Conference, Available online at <http://tomgruber.org/writing/semtech08.htm>, May 20, 2008, pp. 1-40. |
Gruber, Thomas R., “Interactive Acquisition of Justifications: Learning “Why” by Being Told “What””, Knowledge Systems Laboratory, Technical Report KSL 91-17, Original Oct. 1990, Revised Feb. 1991, 24 pages. |
Gruber, Tom, “It Is What It Does: The Pragmatics of Ontology for Knowledge Sharing”, Proceedings of the International CIDOC CRM Symposium, Available online at <http://tomgruber.org/writing/cidoc-ontology.htm>, Mar. 26, 2003, 21 pages. |
Gruber et al., “Machine-Generated Explanations of Engineering Models: A Compositional Modeling Approach”, Proceedings of International Joint Conference on Artificial Intelligence, 1993, 7 pages. |
Bellegarda et al., “Performance of the IBM Large Vocabulary Continuous Speech Recognition System on the ARPA Wall Street Journal Task”, Signal Processing VII: Theories and Applications, European Association for Signal Processing, 1994, 4 pages. |
Bellegarda et al., “The Metamorphic Algorithm: A Speaker Mapping Approach to Data Augmentation”, IEEE Transactions on Speech and Audio Processing, vol. 2, No. 3, Jul. 1994, 8 pages. |
Belvin et al., “Development of the HRL Route Navigation Dialogue System”, Proceedings of the First International Conference on Human Language Technology Research, Paper, 2001, 5 pages. |
Berry et al., “PTIME: Personalized Assistance for Calendaring”, ACM Transactions on Intelligent Systems and Technology, vol. 2, No. 4, Article 40, Jul. 2011, pp. 1-22. |
Berry et al., “Task Management under Change and Uncertainty Constraint Solving Experience with the CALO Project”, Proceedings of CP'05 Workshop on Constraint Solving under Change, 2005, 5 pages. |
Black et al., “Automatically Clustering Similar Units for Unit Selection in Speech Synthesis”, Proceedings of Eurospeech, vol. 2, 1997, 4 pages. |
Blair et al., “An Evaluation of Retrieval Effectiveness for a Full-Text Document-Retrieval System”, Communications of the ACM, vol. 28, No. 3, Mar. 1985, 11 pages. |
Bobrow et al., “Knowledge Representation for Syntactic/Semantic Processing”, From: AAAI-80 Proceedings, Copyright 1980, AAAI, 1980, 8 pages. |
Bouchou et al., “Using Transducers in Natural Language Database Query”, Proceedings of 4th International Conference on Applications of Natural Language to Information Systems, Austria, Jun. 1999, 17 pages. |
Bratt et al., “The SRI Telephone-Based ATIS System”, Proceedings of ARPA Workshop on Spoken Language Technology, 1995, 3 pages. |
Briner, L. L., “Identifying Keywords in Text Data Processing”, In Zelkowitz, Marvin V., Ed, Directions and Challenges, 15th Annual Technical Symposium, Gaithersburg, Maryland, Jun. 17, 1976, 7 pages. |
Bulyko et al., “Error-Correction Detection and Response Generation in a Spoken Dialogue System”, Speech Communication, vol. 45, 2005, pp. 271-288. |
Bulyko et al., “Joint Prosody Prediction and Unit Selection for Concatenative Speech Synthesis”, Electrical Engineering Department, University of Washington, Seattle, 2001, 4 pages. |
Burke et al., “Question Answering from Frequently Asked Question Files”, AI Magazine, vol. 18, No. 2, 1997, 10 pages. |
Burns et al., “Development of a Web-Based Intelligent Agent for the Fashion Selection and Purchasing Process via Electronic Commerce”, Proceedings of the Americas Conference on Information System (AMCIS), Dec. 31, 1998, 4 pages. |
Bussey et al., “Service Architecture, Prototype Description and Network Implications of a Personalized Information Grazing Service”, INFOCOM'90, Ninth Annual Joint Conference of the IEEE Computer and Communication Societies, Available online at <http://slrohall.com/publications/>, Jun. 1990, 8 pages. |
Bussler et al., “Web Service Execution Environment (WSMX)”, Retrieved from Internet on Sep. 17, 2012, Available online at <http://www.w3.org/Submission/WSMX>, Jun. 3, 2005, 29 pages. |
Butcher, Mike, “EVI Arrives in Town to go Toe-to-Toe with Siri”, TechCrunch, Jan. 23, 2012, 2 pages. |
Buzo et al., “Speech Coding Based Upon Vector Quantization”, IEEE Transactions on Acoustics, Speech and Signal Processing, vol. ASSP-28, No. 5, Oct. 1980, 13 pages. |
Caminero-Gil et al., “Data-Driven Discourse Modeling for Semantic Interpretation”, Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, May 1996, 6 pages. |
Carter, D., “Lexical Acquisition in the Core Language Engine”, Proceedings of the Fourth Conference of the European Chapter of the Association for Computational Linguistics, 1989, 8 pages. |
Carter et al., “The Speech-Language Interface in the Spoken Language Translator”, SRI International, Nov. 23, 1994, 9 pages. |
Cawley, Gavin C. “The Application of Neural Networks to Phonetic Modelling”, PhD. Thesis, University of Essex, Mar. 1996, 13 pages. |
Chai et al., “Comparative Evaluation of a Natural Language Dialog Based System and a Menu Driven System for Information Access: A Case Study”, Proceedings of the International Conference on Multimedia Information Retrieval (RIAO), Paris, Apr. 2000, 11 pages. |
Chang et al., “A Segment-Based Speech Recognition System for Isolated Mandarin Syllables”, Proceedings TENCON'93, IEEE Region 10 Conference on Computer, Communication, Control and Power Engineering, vol. 3, Oct. 1993, 6 pages. (3 pages of English Translation and 3 pages of Office Action). |
Chen, Yi, “Multimedia Siri Finds and Plays Whatever You Ask for”, PSFK Report, Feb. 9, 2012, 9 pages. |
Cheyer, Adam, “A Perspective on AI & Agent Technologies for SCM”, VerticalNet Presentation, 2001, 22 pages. |
Cheyer, Adam, “About Adam Cheyer”, Available online at <http://www.adam.cheyer.com/about.html>, retrieved on Sep. 17, 2012, 2 pages. |
Cheyer et al., “Multimodal Maps: An Agent-Based Approach”, International Conference on Co-operative Multimodal Communication, 1995, 15 pages. |
Cheyer et al., “Spoken Language and Multimodal Applications for Electronic Realties”, Virtual Reality, vol. 3, 1999, pp. 1-15. |
Cheyer et al., “The Open Agent Architecture”, Autonomous Agents and Multi-Agent Systems, vol. 4, Mar. 1, 2001, 6 pages. |
Cheyer et al., “The Open Agent Architecture: Building Communities of Distributed Software Agents”, Artificial Intelligence Center, SRI International, Power Point Presentation, Available online at <http://www.ai.sri.com/~oaa/>, retrieved on Feb. 21, 1998, 25 pages. |
Codd, E. F., “Databases: Improving Usability and Responsiveness-How About Recently”, Copyright 1978, Academic Press, Inc., 1978, 28 pages. |
Cohen et al., “An Open Agent Architecture”, Available Online at <http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.30.480>, 1994, 8 pages. |
Coles et al., “Chemistry Question-Answering”, SRI International, Jun. 1969, 15 pages. |
Coles et al., “Techniques for Information Retrieval Using an Inferential Question-Answering System with Natural-Language Input”, SRI International, Nov. 1972, 198 pages. |
Coles et al., “The Application of Theorem Proving to Information Retrieval”, SRI International, Jan. 1971, 21 pages. |
Conklin, Jeff, “Hypertext: An Introduction and Survey”, Computer Magazine, Sep. 1987, 25 pages. |
Connolly et al., “Fast Algorithms for Complex Matrix Multiplication Using Surrogates”, IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 37, No. 6, Jun. 1989, 13 pages. |
Constantinides et al., “A Schema Based Approach to Dialog Control”, Proceedings of the International Conference on Spoken Language Processing, 1998, 4 pages. |
Cox et al., “Speech and Language Processing for Next-Millennium Communications Services”, Proceedings of the IEEE, vol. 88, No. 8, Aug. 2000, 24 pages. |
Craig et al., “Deacon: Direct English Access and Control”, AFIPS Conference Proceedings, vol. 19, San Francisco, Nov. 1966, 18 pages. |
Cutkosky et al., “PACT: An Experiment in Integrating Concurrent Engineering Systems”, Journal & Magazines, Computer, vol. 26, No. 1, Jan. 1993, 14 pages. |
Dar et al., “DTL's DataSpot: Database Exploration Using Plain Language”, Proceedings of the 24th VLDB Conference, New York, 1998, 5 pages. |
Davis et al., “A Personal Handheld Multi-Modal Shopping Assistant”, IEEE, 2006, 9 pages. |
Decker et al., “Designing Behaviors for Information Agents”, The Robotics Institute, Carnegie-Mellon University, Paper, Jul. 1996, 15 pages. |
Decker et al., “Matchmaking and Brokering”, The Robotics Institute, Carnegie-Mellon University, Paper, May 1996, 19 pages. |
Deerwester et al., “Indexing by Latent Semantic Analysis”, Journal of the American Society for Information Science, vol. 41, No. 6, Sep. 1990, 19 pages. |
Deller, Jr. et al., “Discrete-Time Processing of Speech Signals”, Prentice Hall, ISBN: 0-02-328301-7, 1987, 14 pages. |
Digital Equipment Corporation, “Open VMS Software Overview”, Software Manual, Dec. 1995, 159 pages. |
Goliath, “2004 Chrysler Pacifica: U-Connect Hands-Free Communication System. (The Best and Brightest of 2004) (Brief Article)”, Automotive Industries, Sep. 2003, 1 page. |
Massy, Kevin, “2007 Lexus GS 450H, 4Dr Sedan (3.5L, 6cyl Gas/Electric Hybrid CVT)”, ZDNet Reviews, Reviewed on Aug. 3, 2006, 10 pages. |
“All Music”, Available online at <http://www.allmusic.com/cg/amg.dll?p=amg&sql=32:amg/info_pages/a_about.html>, retrieved on Mar. 19, 2007, 2 pages. |
“BluePhoneElite: About”, Available online at <http:// www.reelintelligence.com/BluePhoneElite>, retrieved on Sep. 25, 2006, 2 pages. |
“BluePhoneElite: Features”, Available online at <http://www.reelintelligence.com/BluePhoneElite/features.shtml>, retrieved on Sep. 25, 2006, 2 pages. |
“Digital Audio in the New Era”, Electronic Design and Application, No. 6, Jun. 30, 2003, 3 pages. |
“Interactive Voice”, Available online at <http://www.helloivee.com/company/>, retrieved on Feb. 10, 2014, 2 pages. |
“Meet Ivee, Your Wi-Fi Voice Activated Assistant”, Available online at <http://www.helloivee.com/>, retrieved on Feb. 10, 2014, 8 pages. |
Wireless Ground, “N200 Hands-Free Bluetooth Car Kit”, Available on line at <www.wirelessground.com>, retrieved on Mar. 19, 2007, 3 pages. |
“PhatNoise”, Voice Index on Tap, Kenwood Music Keg, Available online at <http://www.phatnoise.com/kenwood/kenwoodssamail.html>, retrieved on Jul. 13, 2006, 1 page. |
“What is Fuzzy Logic?”, Available online at <http://www.cs.cmu.edu/Groups/AI/html/faqs/ai/fuzzy/part1/faq-doc-2.html>, retrieved on Mar. 19, 2007, 5 pages. |
“Windows XP: A Big Surprise!—Experiencing Amazement from Windows XP”, New Computer, No. 2, Feb. 28, 2002, 8 pages. |
Aikawa et al., “Generation for Multilingual MT”, Available online at <http://mtarchive.info/MTS-2001-Aikawa.pdf>, retrieved on Sep. 18, 2001, 6 pages. |
Anhui USTC iFLYTEK Co., Ltd., “iFLYTEK Research Center Information Datasheet”, Available online at <http://www.iflytek.com/english/Research.htm>, retrieved on Oct. 15, 2004, 3 pages. |
Anonymous, “Speaker Recognition”, Wikipedia, the Free Encyclopedia, Nov. 2, 2010, 4 pages. |
Applebaum et al., “Enhancing the Discrimination of Speaker Independent Hidden Markov Models with Corrective Training”, International Conference on Acoustics, Speech, and Signal Processing, May 23, 1989, pp. 302-305. |
Bellegarda et al., “Tied Mixture Continuous Parameter Modeling for Speech Recognition”, IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 38, No. 12, Dec. 1990, pp. 2033-2045. |
Borden IV, G.R., “An Aural User Interface for Ubiquitous Computing”, Proceedings of the 6th International Symposium on Wearable Computers, IEEE, 2002, 2 pages. |
Brain, Marshall, “How MP3 Files Work”, Available online at <http://computer.howstuffworks.com/mp31.htm>, retrieved on Mar. 19, 2007, 4 pages. |
Chang et al., “Discriminative Training of Dynamic Programming Based Speech Recognizers”, IEEE Transactions on Speech and Audio Processing, vol. 1, No. 2, Apr. 1993, pp. 135-143. |
Cheyer et al., “Demonstration Video of Multimodal Maps Using an Agent Architecture”, Published by SRI International no later than 1996, as Depicted in Exemplary Screenshots from Video Entitled “Demonstration Video of Multimodal Maps Using an Agent Architecture”, 1996, 6 pages. |
Cheyer et al., “Demonstration Video of Multimodal Maps Using an Open-Agent Architecture”, Published by SRI International no later than 1996, as Depicted in Exemplary Screenshots from Video Entitled “Demonstration Video of Multimodal Maps Using an Open-Agent Architecture”, 6 pages. |
Cheyer, A., “Demonstration Video of Vanguard Mobile Portal”, Published by SRI International no later than 2004, as Depicted in Exemplary Screenshots from Video Entitled “Demonstration Video of Vanguard Mobile Portal”, 2004, 10 pages. |
Choi et al., “Acoustic and Visual Signal Based Context Awareness System for Mobile Application”, IEEE Transactions on Consumer Electronics, vol. 57, No. 2, May 2011, pp. 738-746. |
Dusan et al., “Multimodal Interaction on PDA's Integrating Speech and Pen Inputs”, Eurospeech Geneva, 2003, 4 pages. |
Kickstarter, “Ivee Sleek: Wi-Fi Voice-Activated Assistant”, Available online at <https://www.kickstarter.com/discover/categories/hardware?ref=category>, retrieved on Feb. 10, 2014, 13 pages. |
Lamel et al., “Generation and Synthesis of Broadcast Messages”, Proceedings of ESCA-NATO Workshop: Applications of Speech Technology, Sep. 10, 1993, 4 pages. |
Macsimum News, “Apple Files Patent for an Audio Interface for the iPod”, Available online at <http://www.macsimumnews.com/index.php/archive/apple_files_patent_for_an_audio_interface_for_the_ipod>, retrieved on May 4, 2006, 8 pages. |
Navigli, Roberto, “Word Sense Disambiguation: A Survey”, ACM Computing Surveys, vol. 41, No. 2, Article 10, Feb. 2009, 70 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2004/016519, mailed on Nov. 3, 2005, 16 pages. |
Partial International Search Report and Invitation to Pay Additional Fees received for PCT Patent Application No. PCT/US2004/016519, mailed on Aug. 4, 2005, 6 pages. |
International Search Report received for PCT Patent Application No. PCT/US2011/037014, mailed on Oct. 4, 2011, 6 pages. |
Invitation to Pay Additional Search Fees received for PCT Application No. PCT/US2011/037014, mailed on Aug. 2, 2011, 6 pages. |
International Preliminary Report on Patentability received for PCT Patent Application No. PCT/US2012/029810, mailed on Oct. 3, 2013, 9 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2012/029810, mailed on Aug. 17, 2012, 11 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2012/043098, mailed on Nov. 14, 2012, 9 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2012/056382, mailed on Dec. 20, 2012, 11 pages. |
International Search Report and Written Opinion received for PCT Patent Application No. PCT/US2013/040971, mailed on Nov. 12, 2013, 11 pages. |
Quazza et al., “Actor: A Multilingual Unit-Selection Speech Synthesis System”, Proceedings of 4th ISCA Tutorial and Research Workshop on Speech Synthesis, Jan. 1, 2001, 6 pages. |
Ricker, T., “Apple Patents Audio User Interface”, Engadget, Available online at <http://www.engadget.com/2006/05/04/apple-patents-audio-user-interface>, May 4, 2006, 6 pages. |
Santaholma, M., “Grammar Sharing Techniques for Rule-Based Multilingual NLP Systems”, Proceedings of the 16th Nordic Conference of Computational Linguistics, NODALIDA 2007, May 25, 2007, 8 pages. |
Taylor et al., “Speech Synthesis by Phonological Structure Matching”, International Speech Communication Association, vol. 2, Section 3, 1999, 4 pages. |
Xu, “Speech-Based Interactive Games for Language Learning: Reading, Translation and Question-Answering”, Computational Linguistics and Chinese Language Processing, vol. 14, No. 2, Jun. 2009, pp. 133-160. |
Yunker, John, “Beyond Borders: Web Globalization Strategies”, New Riders, Aug. 22, 2002, 11 pages. |
Busemann et al., “Natural Language Dialogue Service for Appointment Scheduling Agents”, Technical Report RR-97-02, Deutsches Forschungszentrum fur Kunstliche Intelligenz GmbH, 1997, 8 pages. |
Lyons et al., “Augmenting Conversations Using Dual-Purpose Speech”, available at <http://research.nokia.com/files/2004-Lyons-UIST04-DPS.pdf>, 2004, 10 pages. |
Combined Search Report and Examination Report under Sections 17 and 18(3) received for GB Patent Application No. 1009318.5, mailed on Oct. 8, 2010, 5 pages. |
Combined Search Report and Examination Report under Sections 17 and 18(3) received for GB Patent Application No. 1217449.6, mailed on Jan. 17, 2013, 6 pages. |
Stickel, Mark E., “A Nonclausal Connection-Graph Resolution Theorem-Proving Program”, Proceedings of AAAI'82, 1982, 5 pages. |
Sugumaran, V., “A Distributed Intelligent Agent-Based Spatial Decision Support System”, Proceedings of the Americas Conference on Information systems (AMCIS), Dec. 31, 1998, 4 pages. |
Sycara et al., “Coordination of Multiple Intelligent Software Agents”, International Journal of Cooperative Information Systems (IJCIS), vol. 5, No. 2 & 3, 1996, 31 pages. |
Sycara et al., “Distributed Intelligent Agents”, IEEE Expert, vol. 11, No. 6, Dec. 1996, 32 pages. |
Sycara et al., “Dynamic Service Matchmaking among Agents in Open Information Environments”, SIGMOD Record, 1999, 7 pages. |
Sycara et al., “The RETSINA MAS Infrastructure”, Autonomous Agents and Multi-Agent Systems, vol. 7, 2003, 20 pages. |
Tenenbaum et al., “Data Structures Using Pascal”, Prentice-Hall, Inc., 1981, 34 pages. |
Textndrive, “Text'nDrive App Demo-Listen and Reply to your Messages by Voice while Driving!”, YouTube Video available at <http://www.youtube.com/watch?v=WaGfzoHsAMw>, Apr. 27, 2010, 1 page. |
Tofel, Kevin C., “SpeakToIt: A Personal Assistant for Older iPhones, iPads”, Apple News, Tips and Reviews, Feb. 9, 2012, 7 pages. |
Tsai et al., “Attributed Grammar-A Tool for Combining Syntactic and Statistical Approaches to Pattern Recognition”, IEEE Transactions on Systems, Man and Cybernetics, vol. SMC-10, No. 12, Dec. 1980, 13 pages. |
Tucker, Joshua, “Too Lazy to Grab Your TV Remote? Use Siri Instead”, Engadget, Nov. 30, 2011, 8 pages. |
Tur et al., “The CALO Meeting Assistant System”, IEEE Transactions on Audio, Speech and Language Processing, vol. 18, No. 6, Aug. 2010, pp. 1601-1611. |
Tur et al., “The CALO Meeting Speech Recognition and Understanding System”, Proc. IEEE Spoken Language Technology Workshop, 2008, 4 pages. |
Tyson et al., “Domain-Independent Task Specification in the TACITUS Natural Language System”, SRI International, Artificial Intelligence Center, May 1990, 16 pages. |
Udell, J., “Computer Telephony”, BYTE, vol. 19, No. 7, Jul. 1994, 9 pages. |
Van Santen, J. P.H., “Contextual Effects on Vowel Duration”, Journal Speech Communication, vol. 11, No. 6, Dec. 1992, pp. 513-546. |
Vepa et al., “New Objective Distance Measures for Spectral Discontinuities in Concatenative Speech Synthesis”, Proceedings of the IEEE 2002 Workshop on Speech Synthesis, 2002, 4 pages. |
Verschelde, Jan, “MATLAB Lecture 8. Special Matrices in MATLAB”, UIC, Dept. of Math, Stat. & CS, MCS 320, Introduction to Symbolic Computation, 2007, 4 pages. |
Vingron, Martin, “Near-Optimal Sequence Alignment”, Current Opinion in Structural Biology, vol. 6, No. 3, 1996, pp. 346-352. |
Vlingo, “Vlingo Launches Voice Enablement Application on Apple App Store”, Press Release, Dec. 3, 2008, 2 pages. |
Vlingo InCar, “Distracted Driving Solution with Vlingo InCar”, YouTube Video, Available online at <http://www.youtube.com/watch?v=Vqs8XfXxgz4>, Oct. 2010, 2 pages. |
Voiceassist, “Send Text, Listen to and Send E-Mail by Voice”, YouTube Video, Available online at <http://www.youtube.com/watch?v=0tEU61nHHA4>, Jul. 30, 2009, 1 page. |
Voiceonthego, “Voice on the Go (BlackBerry)”, YouTube Video, available online at <http://www.youtube.com/watch?v=pJqpWgQS98w>, Jul. 27, 2009, 1 page. |
Wahlster et al., “Smartkom: Multimodal Communication with a Life-Like Character”, Eurospeech-Scandinavia, 7th European Conference on Speech Communication and Technology, 2001, 5 pages. |
Waldinger et al., “Deductive Question Answering from Multiple Resources”, New Directions in Question Answering, Published by AAAI, Menlo Park, 2003, 22 pages. |
Walker et al., “Natural Language Access to Medical Text”, SRI International, Artificial Intelligence Center, Mar. 1981, 23 pages. |
Waltz, D., “An English Language Question Answering System for a Large Relational Database”, ACM, vol. 21, No. 7, 1978, 14 pages. |
Ward et al., “A Class Based Language Model for Speech Recognition”, IEEE, 1996, 3 pages. |
Ward et al., “Recent Improvements in the CMU Spoken Language Understanding System”, ARPA Human Language Technology Workshop, 1994, 4 pages. |
Ward, Wayne, “The CMU Air Travel Information Service: Understanding Spontaneous Speech”, Proceedings of the Workshop on Speech and Natural Language, HLT '90, 1990, pp. 127-129. |
Warren et al., “An Efficient Easily Adaptable System for Interpreting Natural Language Queries”, American Journal of Computational Linguistics, vol. 8, No. 3-4, 1982, 11 pages. |
Weizenbaum, J., “ELIZA—A Computer Program for the Study of Natural Language Communication Between Man and Machine”, Communications of the ACM, vol. 9, No. 1, Jan. 1966, 10 pages. |
Werner et al., “Prosodic Aspects of Speech, Universite de Lausanne”, Fundamentals of Speech Synthesis and Speech Recognition: Basic Concepts, State of the Art and Future Challenges, 1994, 18 pages. |
Winiwarter et al., “Adaptive Natural Language Interfaces to FAQ Knowledge Bases”, Proceedings of 4th International Conference on Applications of Natural Language to Information Systems, Austria, Jun. 1999, 22 pages. |
Wolff, M., “Post Structuralism and the ARTFUL Database: Some Theoretical Considerations”, Information Technology and Libraries, vol. 13, No. 1, Mar. 1994, 10 pages. |
Wu, M., “Digital Speech Processing and Coding”, Multimedia Signal Processing, Lecture-2 Course Presentation, University of Maryland, College Park, 2003, 8 pages. |
Wu et al., “KDA: A Knowledge-Based Database Assistant”, Proceedings of the Fifth International Conference on Data Engineering (IEEE Cat. No. 89CH2695-5), 1989, 8 pages. |
Wu, M., “Speech Recognition, Synthesis, and H.C.I.”, Multimedia Signal Processing, Lecture-3 Course Presentation, University of Maryland, College Park, 2003, 11 pages. |
Wyle, M. F., “A Wide Area Network Information Filter”, Proceedings of First International Conference on Artificial Intelligence on Wall Street, Oct. 1991, 6 pages. |
Yang et al., “Smart Sight: A Tourist Assistant System”, Proceedings of Third International Symposium on Wearable Computers, 1999, 6 pages. |
Yankelovich et al., “Intermedia: The Concept and the Construction of a Seamless Information Environment”, Computer Magazine, IEEE, Jan. 1988, 16 pages. |
Yoon et al., “Letter-to-Sound Rules for Korean”, Department of Linguistics, The Ohio State University, 2002, 4 pages. |
Zeng et al., “Cooperative Intelligent Software Agents”, The Robotics Institute, Carnegie-Mellon University, Mar. 1995, 13 pages. |
Zhao, Y., “An Acoustic-Phonetic-Based Speaker Adaptation Technique for Improving Speaker-Independent Continuous Speech Recognition”, IEEE Transactions on Speech and Audio Processing, vol. 2, No. 3, Jul. 1994, pp. 380-394. |
Zhao et al., “Intelligent Agents for Flexible Workflow Systems”, Proceedings of the Americas Conference on Information Systems (AMCIS), Oct. 1998, 4 pages. |
Zovato et al., “Towards Emotional Speech Synthesis: A Rule based Approach”, Proceedings of 5th ISCA Speech Synthesis Workshop-Pittsburgh, 2004, pp. 219-220. |
Zue, Victor, “Conversational Interfaces: Advances and Challenges”, Spoken Language System Group, Sep. 1997, 10 pages. |
Zue et al., “From Interface to Content: Translingual Access and Delivery of On-Line Information”, Eurospeech, 1997, 4 pages. |
Zue et al., “Jupiter: A Telephone-Based Conversational Interface for Weather Information”, IEEE Transactions on Speech and Audio Processing, Jan. 2000, 13 pages. |
Zue et al., “Pegasus: A Spoken Dialogue Interface for On-Line Air Travel Planning”, Speech Communication, vol. 15, 1994, 10 pages. |
Number | Date | Country |
---|---|---|
20140188471 A1 | Jul 2014 | US |
Relation | Number | Date | Country |
---|---|---|---|
Parent | 12712988 | Feb 2010 | US |
Child | 14196243 | | US |