Audio user interface for displayless electronic device

Information

  • Patent Grant
  • Patent Number
    8,862,252
  • Date Filed
    Friday, January 30, 2009
  • Date Issued
    Tuesday, October 14, 2014
Abstract
This invention is directed to an audio menu provided in an electronic device having no display. The electronic device can further include an input interface having only a single sensing element (e.g., a single button) for controlling audio playback of the device and for accessing and controlling the device audio menu. In response to a particular input detected by the single sensing element, the electronic device can enable an audio menu mode and play back audio clips associated with different menu options. The user can provide selection instructions using the single sensing element during the playback of an audio clip to select the menu option associated with the played back audio clip. In some embodiments, the audio menu can be multi-dimensional (e.g., the device plays back audio clips for sub-options in response to a selection of a menu option). Suitable menu options can include, for example, groupings of audio (e.g., playlists), options to toggle (e.g., a shuffle option), or options associated with particular metadata tags associated with audio available to the device.
Description
BACKGROUND OF THE INVENTION

This is directed to providing an audio user interface for an electronic device that does not have a display.


Users can interact with electronic devices using different interfaces. For example, if a device includes a display, the device processor can direct the display to display a graphical user interface. The graphical user interface can include information displayed for the user, such as application windows, text or images, or any other suitable information stored locally or retrieved from a remote source (e.g., the Internet or a host device). The graphical user interface can also include displayed selectable options that the user can select to direct the electronic device to perform operations. Such operations can include, for example, operations tied to particular applications, instructions to open or display information, instructions to close or end a process or application, or any other suitable electronic device operation. To select a displayed option, the user can provide an instruction using an input interface coupled to the device.


Not every electronic device, however, includes a display. To control different device operations, the electronic device can have different input interfaces each associated with different operations. For example, the electronic device can include several buttons associated with different operations. In some embodiments, a portable media device can include distinct buttons for controlling playback operations, for example buttons for each of play/pause, next/fast forward, previous/rewind, volume up, and volume down.


As the size of electronic devices decreases, the input interface can become a limiting factor in further size reduction. For example, the size of an electronic device can be reduced so that the device includes no display and only a single input interface (e.g., a single button). As another example, the size of the electronic device can be reduced such that the device does not include an input interface, but rather is coupled to a remote input interface (e.g., coupled to an in-cable button connected to a port of the device). If the device includes only a single input interface and no display, the device can require an audio-based user interface to allow the user to control device operations.


SUMMARY OF THE INVENTION

This invention is directed to systems and methods for providing an audio user interface in a device having only a single input interface and no display.


To control simple audio playback operations, the electronic device can associate different types of inputs provided by the single input interface with different simple operations. For example, the electronic device can associate different combinations of short and elongated presses of a button with different playback controls (e.g., play/pause, fast forward, and rewind). There may be insufficient suitable combinations of inputs using the single input interface, however, to provide instructions for more complex electronic device operations. In addition, a user may require some information before being able to direct the device to perform an operation using a single input interface (e.g., which of several playlists is the user selecting with an input).


The electronic device can include an audio menu operative to provide information to the user regarding the current audio being played back, and subsequently provide specific options that the user can select. Because the electronic device can have no display, the audio menu can include a succession of audio clips defining the available options for the device. For example, in response to receiving a user instruction to access an audio menu, the electronic device can initially play back an audio clip based on metadata characterizing the currently played back audio (e.g., a track announcement of the title and artist of the currently played back music) and then play back audio clips associated with available menu options. The user can provide a selection instruction during the playback of an audio clip to select the menu option associated with the audio clip.


The audio menu can include any suitable selectable option. For example, the selectable options can include playlists (e.g., playlist names or numbers), audiobook titles, options to toggle (e.g., shuffle and genius options), or any other suitable option for selecting all or a subset of the available audio. In some embodiments, the options can include metadata tag values or categories for selecting audio matching a selected tag or category value (e.g., all audio by a particular artist, or in a particular album). The menu can allow a user to refine an audio request using a multi-dimensional menu, for example by providing successive options and sub-options for selecting several different types of metadata tags to define the audio subset to play back.


The electronic device can receive or generate the audio clips to play back in the audio menu using any suitable approach. In some embodiments, the audio clips can be recorded and received from a host device. Alternatively, the audio clips can be generated using a text-to-speech engine of the device or of a host device. The host device can provide any suitable content to the electronic device, including for example audio clips for the audio menu, audio to play back (e.g., music), firmware or software updates, or any other suitable information. In some embodiments, the electronic device can provide the host device with text strings for which audio clips are needed, so that the host device can generate the audio clips using a text-to-speech engine.





BRIEF DESCRIPTION OF THE DRAWINGS

The above and other features of the present invention, its nature and various advantages will be more apparent upon consideration of the following detailed description, taken in conjunction with the accompanying drawings in which:



FIG. 1 is a schematic view of an electronic device in accordance with one embodiment of the invention;



FIG. 2 is a table depicting electronic device operations associated with different inputs from the input interface in accordance with one embodiment of the invention;



FIG. 3 is a schematic view of an illustrative diagram indicating the audio clips played back in the menu mode in accordance with one embodiment of the invention;



FIG. 4 is a schematic view of an array of menu options for which audio clips can be played back in accordance with one embodiment of the invention;



FIG. 5 is a schematic view of a multi-dimensional array of menu options for which audio clips can be played back in accordance with one embodiment of the invention;



FIG. 6 is a flowchart of an illustrative process for accessing an audio menu in accordance with one embodiment of the invention;



FIG. 7 is a flowchart of an illustrative process for interacting with an audio menu in accordance with one embodiment of the invention; and



FIG. 8 is a flowchart of an illustrative process for interacting with a multi-dimensional menu in accordance with one embodiment of the invention.





DETAILED DESCRIPTION

An electronic device operative to provide an audio menu is provided.


The electronic device may include a processor and an input interface having at least one sensing element, for example a mechanical sensing element (e.g., a button), a resistive sensing element (e.g., a resistive sensor), or a capacitive sensing element (e.g., a capacitive sensor). The input interface can be integrated in the device, or remotely coupled to the device (e.g., via a cable or wirelessly). Using the input interface, a user can provide inputs to control media playback. For example, the user can provide play/pause, next track, fast forward, previous track, and rewind instructions by providing different combinations of inputs using the input interface (e.g., different numbers and durations of button presses of a single button coupled to the electronic device). To control more advanced or elaborate electronic device operations, the electronic device may include an audio menu mode that the user can access by providing a particular input using the input interface (e.g., an elongated button press). In some embodiments, the user can control media playback and access and navigate the audio menu by providing inputs to a single sensing element of the input interface (e.g., inputs to a single mechanical button). In addition, the inputs detected by the single sensing element can be of substantially the same type (e.g., button presses, finger contacts, or finger swipes) and can be inputs that are not associated with selecting an option displayed on a screen (e.g., selecting an option displayed on a capacitive touch screen).


In response to receiving the input for enabling the audio menu, the electronic device can play back an initial audio clip associated with the audio menu. For example, the initial audio clip can indicate the current audio being played back by the device, or the playlist with which the current audio is associated. As another example, the initial audio clip can include an indication of the current mode of operation of the device. Following the initial indication, the electronic device can play back audio clips associated with options for different operations of the device. As the subsequent audio clips are played back, the user can provide inputs for skipping or selecting the associated options. For example, the user can provide a first input (e.g., a single button press using a predetermined button) to select the option associated with the currently played back audio clip, and a second input (e.g., an elongated button press using the same predetermined button) to skip to the audio clip for the next option in the menu.


In some embodiments, the audio user interface can include a multi-dimensional menu. For example, one or more of the selectable options can be associated with subsequent sub-options. Suitable sub-options can include, for example, options related to particular metadata tags of media such as artist name, song title, album name, genre, or any other suitable metadata tag. Using different inputs provided using the input interface, a user can navigate the audio menu to select options or sub-options, return to a previous menu level, or exit the audio menu mode.



FIG. 1 is a schematic view of an electronic device in accordance with one embodiment of the invention. Electronic device 100 may include processor 102, storage 104, memory 106, input interface 108, and audio output 110. In some embodiments, one or more of the components of electronic device 100 may be combined or omitted (e.g., storage 104 and memory 106 may be combined). In some embodiments, electronic device 100 may include other components not combined with or included in those shown in FIG. 1 (e.g., a power supply or a bus), or several instances of the components shown in FIG. 1. For the sake of simplicity, only one of each of the components is shown in FIG. 1.


Processor 102 may include any processing circuitry operative to control the operations and performance of electronic device 100. For example, processor 102 may be used to run operating system applications, firmware applications, media playback applications, media editing applications, or any other application. In some embodiments, a processor may drive a display and process inputs received from a user interface.


Storage 104 may include, for example, one or more storage mediums including a hard-drive, solid state drive, flash memory, permanent memory such as ROM, any other suitable type of storage component, or any combination thereof. Storage 104 may store, for example, media data (e.g., music and video files), application data (e.g., for implementing functions on device 100), firmware, user preference information data (e.g., media playback preferences), authentication information (e.g. libraries of data associated with authorized users), lifestyle information data (e.g., food preferences), exercise information data (e.g., information obtained by exercise monitoring equipment), transaction information data (e.g., information such as credit card information), wireless connection information data (e.g., information that may enable electronic device 100 to establish a wireless connection), subscription information data (e.g., information that keeps track of podcasts or television shows or other media a user subscribes to), contact information data (e.g., telephone numbers and email addresses), calendar information data, and any other suitable data or any combination thereof.


Memory 106 can include cache memory, semi-permanent memory such as RAM, and/or one or more different types of memory used for temporarily storing data. In some embodiments, memory 106 can also be used for storing data used to operate electronic device applications, or any other type of data that may be stored in storage 104. In some embodiments, memory 106 and storage 104 may be combined as a single storage medium.


Input interface 108 may include any suitable interface for providing inputs to input/output circuitry of the electronic device. Input interface 108 may include any suitable input interface, such as, for example, a button, a keypad, a dial, a click wheel, a touch pad, or any combination thereof. The input interface can detect user inputs using at least one sensing element, such as a mechanical sensor, a resistive sensor, a capacitive sensor, a multi-touch capacitive sensor, or any other suitable type of sensing element. In some embodiments, to minimize the overall dimensions of electronic device 100, input interface 108 can include a limited number (e.g., one) of sensing elements operative to detect inputs to the device. Any suitable event sensed by the sensing element can be used to define an input. For example, an input can be detected when a sensor detects an initial event or interaction from the user (e.g., the user presses a button). As another example, an input can be detected when a sensor detects the end of an event or interaction from a user (e.g., the user releases a button). As still another example, an input can be detected both when an interaction is initially detected and when the same interaction ends (e.g., a first input when the user presses and holds the button, for example to enter an audio menu mode, and a second input when the user releases the button, for example to select an option from the audio menu).
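

As a rough illustration of the press/release detection described above (not taken from the patent figures), the following Python sketch classifies one press/release cycle of a single sensing element into a short or extended press; the threshold value, labels, and function names are assumptions.

    HOLD_THRESHOLD_S = 0.5  # assumed duration separating a short press from a hold

    def classify_event(pressed_at, released_at=None):
        """Classify one press/release cycle of the single sensing element.

        A press with no release yet can already be reported (e.g., to enter the
        audio menu mode while the button is held), and the later release can be
        reported separately (e.g., to select the currently announced option).
        """
        if released_at is None:
            return "hold-in-progress"
        duration = released_at - pressed_at
        return "extended-press" if duration >= HOLD_THRESHOLD_S else "short-press"

    print(classify_event(10.0, 10.2))  # short-press
    print(classify_event(10.0, 10.9))  # extended-press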


To further reduce the size of the device, the one or more sensing elements can be remotely coupled to the device, for example wirelessly or via a wire or cable (e.g., a button embedded in a headphone wire). In some embodiments, the input interface can include an assembly having two volume control buttons and a single playback and menu control button placed between the two volume buttons, where the buttons each include a single mechanical sensing element and the assembly is positioned on a headphone wire. In such embodiments, no input interface (or, for example, only a hold switch) need be located on the electronic device itself.


In some embodiments, the input interface can include several sensing elements for controlling media playback, enabling a menu mode, and providing menu navigation and selection instructions. For example, the input interface can include a limited number of sensing elements for controlling the various device operations. In one implementation, the electronic device can include several buttons, each associated with mechanical sensing elements (e.g., dome switches), where different combinations of the several buttons (e.g., two or three buttons) can be used to control media playback, access an audio menu, select audio menu options, and navigate the audio menu. For example, a first button can be used to control media playback, access the audio menu, and select audio menu options, and a second and third button can be used to navigate between options and sub-options within the audio menu.


Audio output 110 may include one or more speakers (e.g., mono or stereo speakers) built into electronic device 100, or an audio connector (e.g., an audio jack or an appropriate Bluetooth connection) operative to be coupled to an audio output mechanism. For example, audio output 110 may be operative to provide audio data using a wired or wireless connection to a headset, headphones or earbuds. In some embodiments, input interface 108 can be incorporated in a portion of audio output 110 (e.g., embedded in the headphone wire).


One or more of input interface 108 and audio output 110 may be coupled to input/output circuitry. The input/output circuitry may be operative to convert (and encode/decode, if necessary) analog signals and other signals into digital data. In some embodiments, the input/output circuitry can also convert digital data into any other type of signal, and vice-versa. For example, the input/output circuitry may receive and convert physical contact inputs (e.g., from a touch pad), physical movements (e.g., from a mouse or sensor), analog audio signals (e.g., from a microphone), or any other input. The digital data can be provided to and received from processor 102, storage 104, memory 106, or any other component of electronic device 100. In some embodiments, several instances of the input/output circuitry can be included in electronic device 100.


In some embodiments, electronic device 100 may include a bus operative to provide a data transfer path for transferring data to, from, or between control processor 102, storage 104, memory 106, input interface 108, audio output 110, and any other component included in the electronic device. Such other components can include, for example, communications circuitry, positioning circuitry, motion detection circuitry, or any other suitable component. In some embodiments, communications circuitry can be used to connect the electronic device to a host device from which media such as audio, metadata related to the audio, and playlists or other information for managing the received audio can be received.


The electronic device can perform any suitable operation in response to detecting particular inputs from the input interface. FIG. 2 is a table depicting electronic device operations associated with different inputs from the input interface in accordance with one embodiment of the invention. Table 200 can include column 210 of inputs and column 220 of associated electronic device operations. Column 210 can include any suitable input, including presses of a button (e.g., as illustrated in FIG. 2). For example, column 210 can include single press 211, extended single press 212, double press 213, double extended press 214, triple press 215, and triple extended press 216. Different electronic device operations can be associated with each of the inputs. For example, column 220 can include play/pause 221, menu 222, next track 223, fast-forward 224, previous track 225, and rewind 226, each associated with the input from the corresponding row in column 210. In some embodiments, other inputs can be associated with particular electronic device operations, such as longer combinations of button presses (e.g., four or more button presses), or combinations of short and long presses (e.g., several long presses, or consecutive short and long presses, such as in Morse code). Other electronic device operations associated with the inputs can include, for example, non-media playback operations such as communications operations (e.g., telephone, text message or e-mail operations), information display operations (e.g., displaying weather, traffic, or mapping information), or operations for accessing a remote database (e.g., web browsing operations or remote access operations).
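

For illustration only, one way to represent the mapping of table 200 in code is a small lookup keyed by the number of presses and whether the final press is extended; the operation names follow the figure description, and everything else in this Python sketch is an assumption.

    # Sketch of the table 200 mapping: (number of presses, final press extended?)
    # -> electronic device operation.
    INPUT_TO_OPERATION = {
        (1, False): "play/pause",
        (1, True):  "menu",
        (2, False): "next track",
        (2, True):  "fast-forward",
        (3, False): "previous track",
        (3, True):  "rewind",
    }

    def operation_for(presses, extended):
        # Unlisted combinations (e.g., four presses, Morse-code style sequences)
        # could map to other operations; here they simply return None.
        return INPUT_TO_OPERATION.get((presses, extended))

    assert operation_for(1, True) == "menu"
    assert operation_for(2, False) == "next track"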


To access more advanced media operations (e.g., non-playback control operations), a user can provide an input associated with a menu command or operation. In the example of FIG. 2, this input can include a single extended input using the input interface (e.g., a single extended press of a button). In response to detecting the menu command input, the electronic device can enable a menu mode of the device. The electronic device can provide any suitable indication of the menu mode. FIG. 3 is a schematic view of an illustrative diagram indicating the audio clips played back in the menu mode in accordance with one embodiment of the invention. Diagram 300 can include a timeline beginning at end 302, and separated into several sections by vertical lines. During section 304, the electronic device can duck the currently played back audio (e.g., played back music). For example, the electronic device can duck the audio by a predetermined amount (e.g., reduce the audio by 50%). As another example, the electronic device can duck the audio to a predetermined value (e.g., reduce the audio to 30% of the audio output). The predetermined value can include a volume setting (e.g., 30% volume), an energy level of the audio (e.g., 30% of the highest energy or audio wave amplitude of the music), or an audio strength amount (e.g., a number of dB). Section 304 can have any suitable duration, including for example a duration in the range of 50 ms to 800 ms, such as 300 ms. At the end of section 304, the electronic device can pause the currently played back audio, or continue to play back the audio at the ducked audio level.
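

A minimal Python sketch of the ducking arithmetic, assuming a linear ramp: the 50% relative reduction, the 30% target level, and the 300 ms section duration are the examples given above, while the ramp shape and the function names are assumptions.

    def ducked_level(current, relative_reduction=None, absolute_target=None):
        """Return the ducked volume, reduced by a relative amount (e.g., 50%)
        or set to an absolute target (e.g., 30% of the audio output)."""
        if relative_reduction is not None:
            return current * (1.0 - relative_reduction)
        return absolute_target

    def duck_ramp(start, end, steps=10):
        """Yield intermediate volume levels across the ducking section
        (section 304, e.g., 300 ms long)."""
        for i in range(1, steps + 1):
            yield start + (end - start) * i / steps

    levels = list(duck_ramp(1.0, ducked_level(1.0, relative_reduction=0.5)))
    print(round(levels[-1], 2))  # 0.5 at the end of the ducking section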


Once the played back audio has been ducked and paused, or simply ducked, the electronic device can announce the currently playing back audio during section 310. For example, the electronic device can play back an audio clip associated with the title, artist, album, or any combination thereof, of the currently played back audio. The electronic device can play back audio clips associated with any suitable information for the currently played back audio. For example, the audio clips can be associated with metadata tags of the played back audio. As another example, the audio clips can include a portion of the played back audio (e.g., a sample of the played back audio). The electronic device can play back audio clips for any suitable combination of tags or data associated with the played back audio, including for example the artist name and audio title (e.g., for a song) or the book name and chapter name (e.g., for an audio book). In embodiments where the played back media is ducked and continues to be played back during section 310, the electronic device can set the volume of the track announce audio clip at a level substantially higher than the ducked media.


Once the currently playing back audio has been announced, the electronic device can pause during section 320 to allow the user to exit the menu mode without selecting new audio to playback, or without hearing the menu options. For example, a user can exit the audio menu mode by providing a second selection instruction during section 320, or ending the selection instruction used to initially access the audio menu mode (e.g., release a button press that was held during sections 304 and 310). Section 320 may thus be of particular interest when users wish only to identify the currently playing back song (e.g., and cannot simply view a displayed artist name and title, as there is no display with the device). Section 320 can have any suitable duration, including for example a duration in the range of 100 ms to 1200 ms, such as 500 ms. Once the duration of section 320 lapses, the electronic device can provide an audio indication that the menu options will be provided. For example, during section 322, the electronic device can provide an audio tone or beep to indicate the end of the track announce portion of the audio menu. In some embodiments, the electronic device can instead provide the menu options without first providing an audio indication.


Following the audio tone of section 322, the electronic device can provide, in section 330, audio associated with selectable menu options. The audio clips provided in section 330 can include clips for any suitable option, including for example playlists, audio books, options to toggle for controlling playback (e.g., shuffle on/off or a seed-based playlist on/off options), or any other suitable audio compilation. The particular options for which audio clips are played back will be described in more detail below. In particular, the seed-based playlist option can be related to the Genius playlist option available from iTunes, available from Apple Inc., by which a playlist of media related to a seed can be automatically generated.


The audio clips associated with the audio menu (e.g., with the sections of diagram 300) can be played back in any suitable manner. In some embodiments, the audio clips can be automatically played back sequentially (e.g., auto-played) in response to accessing the audio menu (e.g., in response to a first input to enter the menu mode, such as a single, elongated press of a button, as indicated in table 200, FIG. 2). The user can then provide a subsequent input at any time during the playback of audio clips. In response to a subsequent input prior to or during the audio tone of section 322, the electronic device can exit the audio menu mode, re-increase the ducked audio (and resume playback of the paused media item, if it was paused during section 304 or section 310), and continue to play back the audio played back prior to entering the menu mode. In response to a subsequent input after the audio tone of section 322 (or after the duration of section 322 lapses), the electronic device can identify the playlist or other option associated with the audio clip played back when the subsequent input was received. The electronic device can then play back audio from the identified playlist, or perform the operation associated with the selected option. In addition, the electronic device may associate particular inputs of the input interface with exiting the menu mode (e.g., even after playing the audio tone and beginning to play back playlists), or skipping forward and back between audio options. For example, a single elongated input detected by the sensing element (e.g., a single press of a button) while in the menu mode can be associated with exiting the menu mode, a double input detected by the sensing element (e.g., a double press of a button) can be associated with skipping to the next audio clip, and a triple input detected by the sensing element (e.g., a triple press of a button) can be associated with skipping to the previous audio clip. As another example, inputs detected by different sensing elements such as, for example, volume controls (e.g., two buttons associated with volume up and volume down commands) can be associated with navigating menu options while in the menu mode.
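

The control flow just described can be summarized, purely as a hedged Python sketch, by the loop below: option clips are auto-played in sequence, and an input received during a clip either selects the associated option, exits the menu, or skips between clips. The callables and input labels are placeholders, and the track-announcement and tone stages are modeled separately (see process 700 below).

    def run_audio_menu(options, get_input, play_clip, resume_audio):
        """options: list of (audio_clip, action) pairs.
        get_input() returns 'select', 'exit', 'next', 'prev', or None
        for the clip currently being played back."""
        index = 0
        while 0 <= index < len(options):
            clip, action = options[index]
            play_clip(clip)
            user_input = get_input()
            if user_input == "select":
                return action              # perform the selected operation
            elif user_input == "exit":
                break                      # leave the menu without a selection
            elif user_input == "prev":
                index = max(0, index - 1)  # skip back to the previous option clip
            else:
                index += 1                 # no input or 'next': advance to the next clip
        resume_audio()                     # un-duck and resume the interrupted audio
        return None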


In some embodiments, the user can instead hold an input upon entering the menu mode. To exit the menu mode without providing a playlist selection, the user can release the input prior to or during the audio tone of section 322. To provide a selection of a playlist or other option, the user can release the input upon the device playing back the audio clip associated with the playlist or option of interest.


In some embodiments, the audio clips of the audio menu may not automatically be played back sequentially. Instead, the user can control the playback of the audio clips by providing navigation instructions. Such navigation instructions can include, for example, a single elongated press while in the menu mode to exit the menu mode, a double press to skip to the next audio clip, and a triple press to skip to the previous audio clip. As another example, volume controls (e.g., two buttons associated with volume up and volume down commands) can be associated with navigating menu options while in the menu mode.


The electronic device can provide any suitable menu options in the menu mode. FIG. 4 is a schematic view of an array of menu options for which audio clips can be played back in accordance with one embodiment of the invention. Array 400 can include several options of different types. In some embodiments, array 400 can include playlist options 402, 404 and 408. Each playlist can be identified in any suitable manner, including for example by playlist name (e.g., as set on a host device from which the playlists were received), a number, metadata associated with one or more audio items in the playlist, or any other suitable identifying information. In some embodiments, array 400 can also include options for audio books stored on the electronic device (e.g., instead of or in addition to playlist options). The playlists and audio books can be ordered in any suitable manner in array 400. In some embodiments, the order may be based on the order in which the playlists were added to the electronic device from a host device (e.g., the order in which the playlists were dragged into the electronic device using a host device application, such as iTunes available from Apple Inc.). Other suitable orders can include, for example, alphabetical (e.g., based on playlist title or metadata values), numerical (e.g., based on the number of audio items in each playlist, or on a numerical order associated with each playlist), or on any other suitable attribute of the playlists. In some embodiments, the playlist order can change based on which playlist is being played back upon entering the menu mode. For example, the electronic device can re-order the playlists to start with the current playlist (e.g., and continue with the next playlist in the set order, or continue with the first playlist in the set order).
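

As one illustration of the re-ordering mentioned above (a sketch only, with illustrative names), the playlist list can be rotated so that the currently playing playlist is announced first while the set order is otherwise preserved:

    def reorder_playlists(playlists, current):
        """Rotate the playlist order so that `current` is announced first."""
        if current not in playlists:
            return list(playlists)
        start = playlists.index(current)
        return playlists[start:] + playlists[:start]

    print(reorder_playlists(["Workout", "Commute", "Favorites"], "Commute"))
    # ['Commute', 'Favorites', 'Workout']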


Array 400 can include additional options, such as all songs option 410, and options that can be toggled on or off. For example, array 400 can include shuffle option 420 and genius option 422, which the user can select to turn on or off. In response to receiving a selection of a toggled option, the electronic device can change the value of the option, and either exit the menu mode, or return to the menu mode and continue or restart playing back audio clips associated with the available menu options (e.g., audio clips associated with the options of array 400). In some embodiments, the audio clips associated with toggled options can include an indication of the value of the option (e.g., the audio clip can be “shuffle on” or “shuffle off” based on the current value of the toggled option). In some embodiments, array 400 can include a repeat option and an exit option (not shown) to allow the user to either repeat the available options or to exit the audio menu mode and return to the previously played back audio. In some embodiments, array 400 can include an audio classification parameter or classification value related to the audio available for playback on the device.


In some embodiments, the audio user interface can include a multi-dimensional menu. FIG. 5 is a schematic view of a multi-dimensional array of menu options for which audio clips can be played back in accordance with one embodiment of the invention. Array 500 can include several options of different types, some or all of which can be associated with subsequent sub-options. In some embodiments, array 500 can include track announcement indication 502 for identifying the current audio played back by the device upon entering the menu mode. Track announcement indication 502 and the subsequent selectable options of array 500 can be separated using any suitable approach, including for example using an audio tone or beep (e.g., as discussed above), or without an indication to the user (e.g., options are separated by time lapses and no audio indications). Following indication 502, for example, array 500 can include several selectable options associated with metadata or other tags of audio available for playback. For example, array 500 can include titles option 510, artists option 520, albums option 530, genres option 540, and playlists option 550. Any other suitable tag or information used to classify or identify audio can be used instead of or in addition to those shown in array 500. Such tags can include, for example, audiobooks, album artist, audio rating, popularity (e.g., as determined from a remote popularity index), or any other suitable tag.
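

The multi-dimensional structure of array 500 can be pictured, as a non-authoritative Python sketch, as a mapping from each top-level tag option to the sub-option values collected from the audio library; the track records, field names, and helper below are hypothetical.

    def build_menu(tracks, tags=("title", "artist", "album", "genre")):
        """Collect sub-option values for each top-level metadata tag option."""
        menu = {tag: sorted({t[tag] for t in tracks if tag in t}) for tag in tags}
        menu["playlists"] = sorted({name for t in tracks for name in t.get("playlists", [])})
        return menu

    library = [
        {"title": "Viva la Vida", "artist": "Coldplay", "album": "Viva la Vida",
         "genre": "Rock", "playlists": ["Favorites"]},
        {"title": "Clocks", "artist": "Coldplay", "album": "A Rush of Blood to the Head",
         "genre": "Rock", "playlists": ["Workout", "Favorites"]},
    ]
    print(build_menu(library)["artist"])  # ['Coldplay']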


In response to receiving a user selection of one of options 510, 520, 530, 540 and 550, the electronic device can provide audio clips associated with the sub-options for the particular selected option. For example, in response to receiving a user selection of titles option 510, the electronic device can play back audio clips of song titles 512. As another example, in response to receiving a user selection of artists option 520, the electronic device can play back audio clips of artists 522. As still another example, in response to receiving a user selection of albums option 530, the electronic device can play back audio clips of albums 532. As yet still another example, in response to receiving a user selection of genres option 540, the electronic device can play back audio clips of genres 542. As still another example, in response to receiving a user selection of playlists option 550, the electronic device can play back audio clips of available playlists 552 stored on the device.


If the number of sub-options associated with a selected option is too large (e.g., exceeds a pre-defined maximum value), the electronic device can instead provide an intermediate sub-option further classifying the sub-options. For example, the electronic device can provide audio clips associated with the letters of the alphabet, and subsequently the sub-options beginning with a selected letter. Alternatively, the electronic device can associate particular inputs of the input interface with quick navigation operations to allow for more rapid traversal of the sub-option audio clips (e.g., inputs for skipping forward and back between initial letters, or options for fast forwarding or rewinding the playback of audio clips for the sub-options). As still another example, the electronic device can detect particular inputs for navigating directly to particular sub-options (e.g., detect inputs in Morse code to skip directly to a particular letter or number, for example after enabling Morse-code based inputs).
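

A minimal Python sketch of the intermediate alphabetical grouping, assuming a hypothetical cutoff of 25 sub-options before a letter level is inserted:

    from collections import defaultdict

    MAX_FLAT_OPTIONS = 25  # hypothetical cutoff before grouping by first letter

    def group_sub_options(sub_options):
        """Return the sub-options directly, or grouped under first-letter options."""
        if len(sub_options) <= MAX_FLAT_OPTIONS:
            return sorted(sub_options)       # play clips for the sub-options directly
        by_letter = defaultdict(list)
        for option in sorted(sub_options):
            by_letter[option[0].upper()].append(option)
        return dict(by_letter)               # play letter clips, then the matching sub-options

    print(group_sub_options(["Abbey Road", "Back in Black"]))  # short list stays flat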


The electronic device can perform any suitable operation in response to receiving a user selection of a sub-option. In some embodiments, the electronic device can play back audio associated with the selected option (e.g., playback all audio associated with the selected artist, or all of the audio in the selected album or playlist). In some embodiments, the electronic device can provide options for selecting other metadata tags associated with the audio clips related to the selected sub-option. For example, in response to receiving a user selection of an album, the electronic device can provide audio clips associated with playing back the album, the album artist, and titles within the album. As another example, in response to receiving a user selection of an artist, the electronic device can provide audio clips associated with playing back the audio associated with the artist, the albums by the artist, titles of audio by the artist, or playlists having audio by the artist.


In some embodiments, array 500 can include playlist seed option 560. In response to a user selection of playlist seed option 560, the electronic device can play back audio clips associated with information 562 used to identify a playlist seed. For example, the electronic device can play back audio clips for audio titles to serve as seeds directly, or the electronic device can instead or in addition play back audio clips for tags used to classify audio (e.g., artists, titles and genres options). In response to receiving a user selection of a tag sub-option, the electronic device can provide further sub-options for particularly identifying one or more audio items to use as a seed for the seed-based playlist (e.g., sub-options associated with one or more of options 510, 520, 530, 540 and 550).


A user can navigate a multi-dimensional audio menu using any suitable approach. In some embodiments, a user can select an option for which an audio clip is played back. For example, the user can provide an instruction using the input interface (e.g., detected by a sensing element). In response to detecting the instruction, the electronic device can play back audio clips associated with the sub-options of the selected option. The user can then further navigate along the audio menu by continuing to select sub-options to direct the device to play back audio clips associated with further sub-options of the selected sub-option. If an electronic device operation is associated with a selected sub-option (e.g., an instruction to play back audio by a particular artist), the electronic device can perform the operation in response to receiving an appropriate user selection. The user can in addition navigate up the audio menu (e.g., from a sub-menu to a parent menu) using a different input than the selection input. For example, the user can provide a single input to the sensing element (e.g., a single button press) to provide a selection instruction, and a double input to the sensing element (e.g., a double button press) to navigate up a menu level.


In some embodiments, the audio clips played back by the electronic device can be context-sensitive. For example, the electronic device can identify in the audio clip the current value of a toggled option (e.g., shuffle on or off). As another example, the electronic device can identify the particular selected option to play in an audio clip associated with a play back instruction (e.g., “Play artist Coldplay,” “Play title Viva la Vida,” or “Play audiobook The Hobbit”).


The electronic device can generate the audio clips to play back for the selectable options or sub-options using any suitable approach. In some embodiments, the electronic device can include a text-to-speech engine and sufficient processing to generate the audio clips on the device. Alternatively, the electronic device can receive the audio clips associated with the selectable options from a host device. The host device can generate the audio clips using any suitable approach, including for example using a text-to-speech engine. In some embodiments, the host device can include more substantial processing capabilities than the electronic device, which can allow the host device to generate more polished or accurate audio clips (e.g., audio clips that account for accents or languages). Using a text-to-speech engine can allow the electronic device to provide audio clips for all of the audio menu options, as opposed to relying on pre-recorded audio clips, which may not all be available (e.g., no recorded audio clips may be available for less common titles or artist names).
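

As an illustrative Python sketch only, clip generation can be reduced to mapping each option label to synthesized audio; synthesize_speech is a placeholder for whatever text-to-speech engine the device or host actually provides (the description names no specific engine).

    def synthesize_speech(text):
        """Hypothetical text-to-speech call; returns audio data for `text`."""
        raise NotImplementedError("replace with a real text-to-speech engine")

    def generate_menu_clips(option_labels, synth=synthesize_speech):
        """Return a mapping from each menu option label to its synthesized clip."""
        return {label: synth(label) for label in option_labels}

    # Usage (with a real engine supplied for `synth`):
    # clips = generate_menu_clips(["Shuffle on", "Shuffle off", "All songs"])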


In host device text-to-speech engine embodiments, the host device can identify text strings from which to generate audio clips using any suitable approach. In some embodiments, the host device can identify text strings associated with the electronic device firmware or operating system from the electronic device manufacturer (e.g., as part of a firmware download from the manufacturer when the electronic device firmware is updated). In some embodiments, the electronic device can instead or in addition provide text strings to the host device for synthesizing. For example, the electronic device can provide text strings to the host device as part of a syncing protocol when the electronic device is coupled to the host device. In some embodiments, the host device can identify the text strings based on the data it provides to the electronic device. For example, the host device can use metadata tags associated with the audio to be transferred to the electronic device as text strings for which to provide audio clips.
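

A Python sketch, under assumed field names, of the host-side step described above: during a sync, collect the text strings needing audio clips from the metadata tags of the transferred audio plus any strings the device itself requests.

    def collect_text_strings(transferred_tracks, device_requested_strings=()):
        """Gather the text strings for which the host should synthesize clips."""
        strings = set(device_requested_strings)
        for track in transferred_tracks:
            for tag in ("title", "artist", "album"):
                value = track.get(tag)
                if value:
                    strings.add(value)
        return sorted(strings)

    tracks = [{"title": "Clocks", "artist": "Coldplay",
               "album": "A Rush of Blood to the Head"}]
    print(collect_text_strings(tracks, ["Shuffle on", "Shuffle off"]))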


The following flowcharts describe illustrative processes used in connection with the audio user interface of one embodiment of the invention. FIG. 6 is a flowchart of an illustrative process for accessing an audio menu in accordance with one embodiment of the invention. Process 600 can begin at step 602. At step 604, the electronic device can determine whether an input from an input interface was detected. For example, the electronic device can determine whether an input was received from a single sensing element of an input interface coupled to the device. If the electronic device determines that no input was received, process 600 can return to step 604 and continue to monitor for electronic device inputs. If, at step 604, the electronic device instead determines that an input was received, process 600 can move to step 606.


At step 606, the electronic device can determine whether the detected input is associated with accessing an audio menu. For example, the electronic device can determine whether the detected input matches the input associated with accessing an audio menu (e.g., a single elongated press of a button). If the electronic device determines that the detected input is not associated with accessing an audio menu, process 600 can move to step 608. At step 608, the electronic device can perform a playback operation associated with the detected input. For example, the electronic device can determine that all non-audio menu inputs are associated with playback control, and direct the electronic device to perform the playback operation associated with the detected input. Process 600 can then end at step 610.


If, at step 606, the electronic device instead determines that the detected input is associated with providing an audio menu, process 600 can move to step 612. At step 612, the electronic device can provide an audio menu. For example, the electronic device can play back audio clips for several selectable options. Process 600 can then end at step 614.
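

Process 600 amounts to a small dispatch, sketched below in Python with placeholder handlers: a detected input either opens the audio menu or is treated as a playback control.

    def handle_input(detected_input, provide_audio_menu, perform_playback_operation,
                     menu_input="extended-press"):
        """Dispatch a detected input per process 600 (names are illustrative)."""
        if detected_input is None:
            return None                                    # step 604: keep monitoring
        if detected_input == menu_input:
            return provide_audio_menu()                    # step 612: provide the audio menu
        return perform_playback_operation(detected_input)  # step 608: playback operation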



FIG. 7 is a flowchart of an illustrative process for interacting with an audio menu in accordance with one embodiment of the invention. Process 700 can begin at step 702. At step 704, the electronic device can duck currently played back audio. For example, the electronic device can decrease the volume of the audio to a particular level. The particular level can be determined based on the volume value, a measurement of the music energy, the decibel output of the audio, or any other suitable measure. At step 706, the electronic device can determine whether an input was received during ducking. For example, the electronic device can determine whether an input was received from the input interface (e.g., an input detected from a single sensing element) used to access or enable the audio menu mode. If the electronic device determines an input was received, process 700 can move to step 708.


At step 708, the electronic device can exit the audio menu. For example, the electronic device can cease the audio menu playback (e.g., cease ducking, track announcement, or playing back menu options) and return to the playback of the previous audio. In some embodiments, the electronic device can increase the electronic device volume to the volume level before enabling the audio menu. Process 700 can then end at step 710.


If, at step 706, the electronic device instead determines that no input was received during ducking, process 700 can move to step 712. At step 712, the electronic device can announce the played back track prior to enabling the audio menu. For example, the electronic device can identify an audio clip associated with metadata of the played back audio (e.g., an audio clip for the audio title and artist). At step 714, the electronic device can determine whether an input was received during the track announcement. For example, the electronic device can determine whether an input was received from the input interface used to access or enable the audio menu mode. If the electronic device determines an input was received, process 700 can move to step 708, described above. If, at step 714 the electronic device instead determines that no input was received during the track announcement, process 700 can move to step 716.


At step 716, the electronic device can pause (e.g., leave a moment of silence to allow the user to register the track announcement and decide whether or not to exit the audio menu mode) and provide an audio tone to indicate that menu options will be provided. For example, the electronic device can pause for a predetermined duration (e.g., 500 ms) and provide an audio tone following the pause. At step 718, the electronic device can determine whether an input was received during the pause and the provided audio tone. For example, the electronic device can determine whether an input was received from the input interface used to access or enable the audio menu mode. If the electronic device determines an input was received, process 700 can move to step 708, described above. If, at step 718 the electronic device instead determines that no input was received during the pause and audio tone, process 700 can move to step 720.


At step 720, the electronic device can play back menu options associated with an audio menu. For example, the electronic device can play back consecutive audio clips associated with menu options. At step 722, the electronic device can determine whether an input was received during the playback of a menu option. For example, the electronic device can determine whether an input to select an audio menu option was received from the input interface. If the electronic device determines that no input was received, process 700 can return to step 720 and continue to play back audio clips for audio menu options. If, at step 722, the electronic device instead determines that an input was received, process 700 can move to step 724. At step 724, the electronic device can perform an operation associated with the menu option selected by the detected user input. For example, the electronic device can play back audio associated with a selected option. Process 700 can then end at step 710.
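

Process 700 can be summarized, as a Python sketch with assumed helper methods on a device object, as three stages that each check for an early-exit input, followed by a loop over the menu option clips:

    def audio_menu_session(device):
        """Run the audio menu of process 700 (helper names are hypothetical)."""
        for stage in (device.duck_audio, device.announce_track, device.pause_and_tone):
            stage()
            if device.input_received():        # steps 706, 714, 718
                device.exit_menu()             # step 708: restore previous playback
                return None
        while True:                            # step 720: cycle through option clips
            option = device.play_next_option_clip()
            if device.input_received():        # step 722
                return device.perform(option)  # step 724: act on the selected option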



FIG. 8 is a flowchart of an illustrative process for interacting with a multi-dimensional menu in accordance with one embodiment of the invention. Process 800 can begin at step 802. At step 804, the electronic device can play back menu options associated with an audio menu. For example, the electronic device can play back consecutive audio clips associated with menu options. At step 806, the electronic device can determine whether an input was received during the playback of an audio clip associated with a menu option. For example, the electronic device can determine whether an input was received to select one of the options for which an audio clip was played back. If the electronic device determines that no input was received, process 800 can return to step 804 and continue to play back audio clips for audio menu options. If, at step 806, the electronic device instead determines that an input was received, process 800 can move to step 808. At step 808, the electronic device can play back audio clips for sub-options associated with the selected option. For example, the electronic device can play back audio clips for artist names in response to receiving an input selecting an artist option.


At step 810, the electronic device can determine whether an input was received during the playback of an audio clip associated with a sub-option. For example, the electronic device can determine whether an input was received to select one of the sub-options for which an audio clip was played back. If the electronic device determines that no input was received, process 800 can return to step 808 and continue to play back audio clips for audio menu sub-options. If, at step 810, the electronic device instead determines that an input was received, process 800 can move to step 812.


At step 812, the electronic device can determine whether the input detected at step 810 was an input associated with an instruction to go up a menu level. For example, the electronic device can determine whether the input was associated with a "back" instruction (e.g., an elongated press of a button). If the electronic device determines that the detected input is associated with an instruction to go up a menu level, process 800 can move back to step 804 and play back audio clips for menu options one level up from the sub-option level of step 808. If, at step 812, the electronic device instead determines that the input detected was not an input associated with an instruction to go up a menu level, process 800 can move to step 814. At step 814, the electronic device can determine whether the input detected at step 810 was an input for a sub-option associated with an electronic device operation. For example, the electronic device can determine whether the input was associated with a selection instruction (e.g., a single press of a button) for a sub-option associated with an electronic device operation. If the electronic device determines that the detected input is not associated with an electronic device operation (e.g., the sub-option selected is not associated with an electronic device operation), process 800 can move to step 816. At step 816, the electronic device can play back audio clips for subsequent sub-options associated with the selected sub-option. For example, the electronic device can identify subsequent sub-options for the selected sub-option, and play back audio clips associated with the identified subsequent sub-options. Process 800 can then return to step 810 and monitor for inputs during the playback of the audio clips.


If at step 814, the electronic device instead determines that the input detected was an input for a sub-option associated with an electronic device operation (e.g., the selected sub-option is associated with an electronic device operation), process 800 can move to step 818. At step 818, the electronic device can perform an electronic device operation associated with the selected sub-option. For example, the electronic device can play back audio identified by the sub-option (e.g., audio by the selected artist name sub-option). Process 800 can then end at step 820.
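

Process 800, stripped to its control flow, resembles the Python loop below; the nested-dictionary menu shape, the input labels, and the callbacks are assumptions made only for illustration.

    def navigate(menu, read_input, perform):
        """menu: nested dict mapping an option label to a sub-menu dict or a leaf
        operation. read_input(option) returns 'select', 'back', or None while the
        clip for `option` is played back."""
        stack = [menu]
        while stack:
            for option, target in stack[-1].items():   # play the clip for each option in turn
                choice = read_input(option)
                if choice == "back":                   # step 812: go up one menu level
                    stack.pop()
                    break
                if choice == "select":
                    if isinstance(target, dict):       # step 816: descend to the sub-options
                        stack.append(target)
                        break
                    return perform(target)             # step 818: perform the operation
            # finishing the loop without a selection repeats the current level
        return None                                    # 'back' at the top level exits the menu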


The above-described embodiments of the present invention are presented for purposes of illustration and not of limitation, and the present invention is limited only by the claims which follow.

Claims
  • 1. A method for controlling an electronic device using a single sensing element, comprising: playing back with the electronic device a first audio clip in a first manner; during the playing back in the first manner: detecting with the single sensing element a first user input; in response to the detecting the first user input, altering with the electronic device the playing back from the first manner to a second manner; and in response to the altering, playing back with the electronic device the first audio clip in the second manner; during the playing back the first audio clip in the second manner, playing back with the electronic device a second audio clip to announce the first audio clip; and after the playing back the second audio clip, playing back with the electronic device a first menu audio clip that is associated with a menu of the electronic device.
  • 2. The method of claim 1, wherein a first menu option is associated with each of the first menu audio clip that is played back and an electronic device operation, the method further comprising: detecting with the single sensing element a second user input; and in response to the detecting the second user input, accessing with the electronic device the first menu option.
  • 3. The method of claim 1, wherein the second audio clip that is played back corresponds to at least one of a title, an artist, an album name, an audiobook title, a chapter name, a genre, and a year that is associated with the first audio clip that is played back.
  • 4. The method of claim 1, further comprising: after the detecting the first user input, entering with the electronic device a menu mode of the electronic device; and at least one of during the playing back the first audio clip in the second manner and after the playing back the first audio clip in the second manner: detecting with the single sensing element a second user input; and exiting with the electronic device the entered menu mode in response to the detecting the second user input.
  • 5. The method of claim 4, wherein the detected first user input comprises a single press and hold of the single sensing element, and wherein the detected second user input comprises a release of the pressed and held single sensing element.
  • 6. The method of claim 4, wherein each of the detected first user input and the detected second user input comprises a single elongated press of the single sensing element.
  • 7. The method of claim 1, further comprising: detecting with the single sensing element a second user input during the playing back the first menu audio clip; and in response to the detecting the second user input, skipping the playing back the first menu audio clip to play back a second menu audio clip.
  • 8. The method of claim 1, wherein the single sensing element comprises at least one of a mechanical sensor, a resistive sensor, and a capacitive sensor.
  • 9. The method of claim 1, wherein the altering the playing back to the second manner comprises ducking the first audio clip that is played back in the first manner.
  • 10. The method of claim 9, wherein the ducking comprises reducing a volume of the playing back the first audio clip in the first manner.
  • 11. The method of claim 10, wherein the reducing the volume comprises adjusting the volume to a predefined volume level that is audible.
  • 12. The method of claim 1, wherein the playing back the first audio clip in the second manner comprises playing back the first audio clip at a first volume level, and wherein the playing back the second audio clip comprises playing back the second audio clip at a second volume level that is higher than the first volume level.
  • 13. The method of claim 1, wherein the detected first user input comprises a single extended press of the single sensing element.
  • 14. The method of claim 1, wherein the second audio clip that is played back corresponds to metadata that is associated with the first audio clip that is played back.
  • 15. The method of claim 1 further comprising, after the playing back the second audio clip, but before the playing back the first menu audio clip: playing back a third audio clip to indicate an end of the announcing of the first audio clip.
  • 16. The method of claim 15, wherein the third audio clip that is played back comprises at least one of a tone and a beep.
  • 17. The method of claim 1, wherein the first menu audio clip that is played back is associated with a first menu option of the menu.
  • 18. The method of claim 17 further comprising, during the playing back the first menu audio clip: detecting with the single sensing element a second user input; and in response to the detecting the second user input, accessing with the electronic device the first menu option.
  • 19. The method of claim 1, wherein, after the detecting the first user input, but before the playing back the first menu audio clip: accessing with the electronic device a menu mode of the electronic device; and after the accessing, identifying with the electronic device at least the first menu audio clip.
  • 20. An electronic device for providing an audio menu to a user, comprising: a single sensing element for detecting user inputs; an audio output; and a processor operative to: direct the audio output to play back a first audio clip in a first manner; during the playback of the first audio clip in the first manner, receive from the single sensing element a first user input that is detected by the single sensing element; in response to the receiving the first user input, direct the audio output to alter the playback from the first manner to a second manner; during the playback of the first audio clip in the second manner, direct the audio output to play back a second audio clip to announce the first audio clip; and after the playback of the second audio clip, direct the audio output to play back a first menu audio clip that is associated with the audio menu.
  • 21. The electronic device of claim 20, wherein the processor is further operative to direct the audio output to alter the playback to the second manner by directing the audio output to duck the first audio clip that is being played back in the first manner.
  • 22. The electronic device of claim 21, wherein the audio output ducks the first audio clip by reducing a volume of the playback of the first audio clip in the first manner.
  • 23. The electronic device of claim 22, wherein the reducing the volume comprises reducing the volume to a predefined volume level that is audible.
  • 24. The electronic device of claim 22, wherein the volume is reduced over a duration in the range of 50 ms to 800 ms.
  • 25. The electronic device of claim 20, wherein: the single sensing element comprises a mechanical sensor; the detected first user input comprises an elongated press of the mechanical sensor; and the detected second user input comprises a subsequent press of the mechanical sensor.
  • 26. The electronic device of claim 20, wherein: the single sensing element comprises a mechanical sensor; the detected first user input comprises a press and hold of the mechanical sensor; and the detected second user input comprises a release of the mechanical sensor.
  • 27. The electronic device of claim 20, wherein the processor is further operative to: during the playback of the first menu audio clip, receive a second user input from the single sensing element that is detected by the single sensing element; and after the receiving the detected second user input, direct the audio output to play back a second menu audio clip that is associated with the audio menu.
  • 28. The electronic device of claim 20, wherein the audio output is coupled to at least one of an earbud, headphones, and a speaker.
  • 29. The electronic device of claim 20, wherein the audio output is operative to play back the first audio clip in the second manner by playing back the first audio clip at a first volume level, and wherein the audio output is operative to play back the second audio clip by playing back the second audio clip at a second volume level that is higher than the first volume level.
  • 30. The electronic device of claim 20, wherein the second audio clip that is played back corresponds to metadata that is associated with the first audio clip.
  • 31. The electronic device of claim 20, wherein the processor is further operative to, after the directing the audio output to play back the second audio clip, but before the directing the audio output to play back the first menu audio clip: direct the audio output to play back a third audio clip to indicate an end of the announcing of the first audio clip.
  • 32. The electronic device of claim 31, wherein the third audio clip that is played back comprises at least one of a tone and a beep.
  • 33. The electronic device of claim 20, wherein the first menu audio clip that is played back is associated with a first menu option of the menu.
  • 34. The electronic device of claim 33, wherein the processor is further operative to, while the audio output is playing back the first menu audio clip: receive from the single sensing element a second user input that is detected by the single sensing element; and in response to the receiving the second user input, access the first menu option.
  • 35. The electronic device of claim 20, wherein the processor is further operative to, after the receiving the first user input, but before the directing the audio output to play back the first menu audio clip: access the audio menu of the electronic device; and after the accessing, identify at least the first menu audio clip.
  • 36. The electronic device of claim 20, wherein the electronic device does not comprise a display.
  • 37. The electronic device of claim 20, wherein the single sensing element is the only user input interface of the electronic device.
  • 38. A non-transitory computer-readable media for controlling an electronic device using a single input interface, the non-transitory computer-readable media comprising computer-readable instructions for: directing an audio output of the electronic device to play back a first audio clip in a first manner; during the playback of the first audio clip in the first manner, receiving a first user input from the single input interface; in response to the receiving, directing the audio output to alter the playback from the first manner to a second manner; during the playback of the first audio clip in the second manner, directing the audio output to play back a second audio clip to announce the first audio clip; and after the playback of the second audio clip, directing the audio output to play back a first menu audio clip that is associated with an audio menu of the electronic device.
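Claims 6, 13, 25 and 26 turn on distinguishing gestures made on the single sensing element: a short press, an elongated (extended) press, a press-and-hold, and a release. Below is a minimal sketch of one way such gestures could be classified by press duration; the 500 ms threshold, the SingleButton name, and the event labels are illustrative assumptions, not values taken from the patent.

    import time

    # Assumed duration threshold separating a short press from an elongated
    # press; the claims do not specify a value.
    LONG_PRESS_SECONDS = 0.5

    class SingleButton:
        """Classifies gestures on a single sensing element by press duration."""

        def __init__(self):
            self._pressed_at = None

        def press(self, now=None):
            # Record the moment the sensing element is actuated.
            self._pressed_at = now if now is not None else time.monotonic()

        def release(self, now=None):
            # On release, classify the completed gesture by how long it was held.
            if self._pressed_at is None:
                return None
            now = now if now is not None else time.monotonic()
            held = now - self._pressed_at
            self._pressed_at = None
            return "elongated_press" if held >= LONG_PRESS_SECONDS else "short_press"

    # Simulated use: a 0.8 s press classifies as an elongated press.
    button = SingleButton()
    button.press(now=0.0)
    print(button.release(now=0.8))  # -> elongated_press

A press-and-hold (claim 26) would use the same mechanism, with the menu action taken while the element is still held and the release itself treated as the second input.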
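Claims 9 to 12 and 21 to 24 describe ducking: the volume of the first audio clip is lowered to a predefined level that remains audible, claim 24 bounds the ramp to between 50 ms and 800 ms, and claims 12 and 29 play the announcement clip at a higher volume than the ducked clip. The following is a sketch of a linear ducking ramp under those constraints; the specific volume levels and the step count are assumptions.

    # Assumed levels: full playback volume and the predefined, still-audible
    # ducked level of claims 10-11 and 22-23.
    FULL_VOLUME = 1.0
    DUCKED_VOLUME = 0.2

    def ducking_ramp(duration_ms, steps=10):
        """Yield (time_ms, volume) pairs that lower the volume linearly.

        Claim 24 places the ramp duration in the 50-800 ms range, which is
        checked here.
        """
        if not 50 <= duration_ms <= 800:
            raise ValueError("claim 24 bounds the ramp to 50-800 ms")
        for i in range(steps + 1):
            t = duration_ms * i / steps
            volume = FULL_VOLUME + (DUCKED_VOLUME - FULL_VOLUME) * i / steps
            yield t, round(volume, 3)

    # Simulated use: a 200 ms ramp from full volume down to the ducked level.
    for t, v in ducking_ramp(200):
        print(f"{t:5.1f} ms -> volume {v}")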
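Claims 14 to 16 and 30 to 32 recite the announcement sequence: the second audio clip corresponds to metadata of the clip being ducked, and a third clip, a tone or beep, marks the end of the announcement before the first menu audio clip plays. A sketch follows, assuming hypothetical metadata fields and stand-in output functions.

    # Assumed metadata for the clip that was playing when the input was
    # detected; the title and artist fields are illustrative.
    now_playing = {"title": "Track One", "artist": "Some Artist"}

    def announce(track, speak, play_tone):
        """Announce the ducked clip from its metadata (claims 14 and 30),
        then play a tone marking the end of the announcement (claims 15-16
        and 31-32)."""
        speak(f"{track['title']} by {track['artist']}")
        play_tone()  # claims 16 and 32: a tone or a beep

    # Simulated use; a real device would play synthesized or pre-rendered
    # audio clips rather than printing.
    announce(now_playing,
             speak=lambda text: print("announce:", text),
             play_tone=lambda: print("announce: *beep*"))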
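Claims 7, 17 to 19, 27 and 33 to 35 describe traversing the audio menu itself: after entering the menu mode, the device plays menu audio clips in sequence; an input detected during a clip's playback selects the associated option, and per claims 7 and 27 a second input can instead skip ahead to the next clip. A sketch of that loop is below; the option names are illustrative, and the list of per-clip booleans is a stand-in for inputs from the sensing element.

    # Assumed menu options and the identifiers of their audio clips.
    MENU = [("playlists", "clip_playlists"),
            ("shuffle", "clip_shuffle"),
            ("artists", "clip_artists")]

    def run_audio_menu(menu, inputs, play_clip):
        """Play menu clips in order; an input during a clip selects its
        option (claims 17-18 and 33-34), otherwise move on to the next clip.

        `inputs` holds one boolean per clip: True if the user pressed the
        sensing element during that clip's playback.
        """
        for (option, clip), pressed in zip(menu, inputs):
            play_clip(clip)
            if pressed:
                return option  # access the selected menu option
        return None  # menu exhausted with no selection

    # Simulated use: the user lets two clips play through, then presses
    # during the third clip, selecting "artists".
    chosen = run_audio_menu(MENU, [False, False, True],
                            play_clip=lambda c: print("playing", c))
    print("selected:", chosen)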
US Referenced Citations (654)
Number Name Date Kind
3704345 Coker et al. Nov 1972 A
3828132 Flanagan et al. Aug 1974 A
3979557 Schulman et al. Sep 1976 A
4278838 Antonov Jul 1981 A
4282405 Taguchi Aug 1981 A
4310721 Manley et al. Jan 1982 A
4348553 Baker et al. Sep 1982 A
4653021 Takagi Mar 1987 A
4688195 Thompson et al. Aug 1987 A
4692941 Jacks et al. Sep 1987 A
4718094 Bahl et al. Jan 1988 A
4724542 Williford Feb 1988 A
4726065 Froessl Feb 1988 A
4727354 Lindsay Feb 1988 A
4776016 Hansen Oct 1988 A
4783807 Marley Nov 1988 A
4811243 Racine Mar 1989 A
4819271 Bahl et al. Apr 1989 A
4827520 Zeinstra May 1989 A
4829576 Porter May 1989 A
4833712 Bahl et al. May 1989 A
4839853 Deerwester et al. Jun 1989 A
4852168 Sprague Jul 1989 A
4862504 Nomura Aug 1989 A
4878230 Murakami et al. Oct 1989 A
4903305 Gillick et al. Feb 1990 A
4905163 Garber et al. Feb 1990 A
4914586 Swinehart et al. Apr 1990 A
4914590 Loatman et al. Apr 1990 A
4944013 Gouvianakis et al. Jul 1990 A
4955047 Morganstein et al. Sep 1990 A
4965763 Zamora Oct 1990 A
4974191 Amirghodsi et al. Nov 1990 A
4977598 Doddington et al. Dec 1990 A
4992972 Brooks et al. Feb 1991 A
5001774 Lee Mar 1991 A
5010574 Wang Apr 1991 A
5020112 Chou May 1991 A
5021971 Lindsay Jun 1991 A
5022081 Hirose et al. Jun 1991 A
5027406 Roberts et al. Jun 1991 A
5031217 Nishimura Jul 1991 A
5032989 Tornetta Jul 1991 A
5040218 Vitale et al. Aug 1991 A
5047614 Bianco Sep 1991 A
5057915 Von Kohorn et al. Oct 1991 A
5072452 Brown et al. Dec 1991 A
5091945 Kleijn Feb 1992 A
5127053 Koch Jun 1992 A
5127055 Larkey Jun 1992 A
5128672 Kaehler Jul 1992 A
5133011 McKiel, Jr. Jul 1992 A
5142584 Ozawa Aug 1992 A
5164900 Bernath Nov 1992 A
5165007 Bahl et al. Nov 1992 A
5179652 Rozmanith et al. Jan 1993 A
5194950 Murakami et al. Mar 1993 A
5197005 Shwartz et al. Mar 1993 A
5199077 Wilcox et al. Mar 1993 A
5202952 Gillick et al. Apr 1993 A
5208862 Ozawa May 1993 A
5216747 Hardwick et al. Jun 1993 A
5220639 Lee Jun 1993 A
5220657 Bly et al. Jun 1993 A
5222146 Bahl et al. Jun 1993 A
5230036 Akamine et al. Jul 1993 A
5235680 Bijnagte Aug 1993 A
5267345 Brown et al. Nov 1993 A
5268990 Cohen et al. Dec 1993 A
5282265 Rohra Suda et al. Jan 1994 A
RE34562 Murakami et al. Mar 1994 E
5291286 Murakami et al. Mar 1994 A
5293448 Honda Mar 1994 A
5293452 Picone et al. Mar 1994 A
5297170 Eyuboglu et al. Mar 1994 A
5301109 Landauer et al. Apr 1994 A
5303406 Hansen et al. Apr 1994 A
5309359 Katz et al. May 1994 A
5317507 Gallant May 1994 A
5317647 Pagallo May 1994 A
5325297 Bird et al. Jun 1994 A
5325298 Gallant Jun 1994 A
5327498 Hamon Jul 1994 A
5333236 Bahl et al. Jul 1994 A
5333275 Wheatley et al. Jul 1994 A
5345536 Hoshimi et al. Sep 1994 A
5349645 Zhao Sep 1994 A
5353377 Kuroda et al. Oct 1994 A
5377301 Rosenberg et al. Dec 1994 A
5384892 Strong Jan 1995 A
5384893 Hutchins Jan 1995 A
5386494 White Jan 1995 A
5386556 Hedin et al. Jan 1995 A
5390279 Strong Feb 1995 A
5396625 Parkes Mar 1995 A
5400434 Pearson Mar 1995 A
5404295 Katz et al. Apr 1995 A
5412756 Bauman et al. May 1995 A
5412804 Krishna May 1995 A
5412806 Du et al. May 1995 A
5418951 Damashek May 1995 A
5424947 Nagao et al. Jun 1995 A
5434777 Luciw Jul 1995 A
5444823 Nguyen Aug 1995 A
5455888 Iyengar et al. Oct 1995 A
5469529 Bimbot et al. Nov 1995 A
5471611 McGregor Nov 1995 A
5475587 Anick et al. Dec 1995 A
5479488 Lennig et al. Dec 1995 A
5491772 Hardwick et al. Feb 1996 A
5493677 Balogh Feb 1996 A
5495604 Harding et al. Feb 1996 A
5502790 Yi Mar 1996 A
5502791 Nishimura et al. Mar 1996 A
5515475 Gupta et al. May 1996 A
5536902 Serra et al. Jul 1996 A
5537618 Boulton et al. Jul 1996 A
5574823 Hassanein et al. Nov 1996 A
5577241 Spencer Nov 1996 A
5578808 Taylor Nov 1996 A
5579436 Chou et al. Nov 1996 A
5581655 Cohen et al. Dec 1996 A
5584024 Shwartz Dec 1996 A
5596676 Swaminathan et al. Jan 1997 A
5596994 Bro Jan 1997 A
5608624 Luciw Mar 1997 A
5613036 Strong Mar 1997 A
5617507 Lee et al. Apr 1997 A
5619694 Shimazu Apr 1997 A
5621859 Schwartz et al. Apr 1997 A
5621903 Luciw et al. Apr 1997 A
5642464 Yue et al. Jun 1997 A
5642519 Martin Jun 1997 A
5644727 Atkins Jul 1997 A
5664055 Kroon Sep 1997 A
5675819 Schuetze Oct 1997 A
5682539 Conrad et al. Oct 1997 A
5687077 Gough, Jr. Nov 1997 A
5696962 Kupiec Dec 1997 A
5701400 Amado Dec 1997 A
5706442 Anderson et al. Jan 1998 A
5710886 Christensen et al. Jan 1998 A
5712957 Waibel et al. Jan 1998 A
5715468 Budzinski Feb 1998 A
5721827 Logan et al. Feb 1998 A
5727950 Cook et al. Mar 1998 A
5729694 Holzrichter et al. Mar 1998 A
5732390 Katayanagi et al. Mar 1998 A
5734791 Acero et al. Mar 1998 A
5737734 Schultz Apr 1998 A
5748974 Johnson May 1998 A
5749081 Whiteis May 1998 A
5759101 Von Kohorn Jun 1998 A
5790978 Olive et al. Aug 1998 A
5794050 Dahlgren et al. Aug 1998 A
5794182 Manduchi et al. Aug 1998 A
5794207 Walker et al. Aug 1998 A
5794237 Gore, Jr. Aug 1998 A
5799276 Komissarchik et al. Aug 1998 A
5822743 Gupta et al. Oct 1998 A
5825881 Colvin, Sr. Oct 1998 A
5826261 Spencer Oct 1998 A
5828999 Bellegarda et al. Oct 1998 A
5835893 Ushioda Nov 1998 A
5839106 Bellegarda Nov 1998 A
5845255 Mayaud Dec 1998 A
5857184 Lynch Jan 1999 A
5860063 Gorin et al. Jan 1999 A
5862223 Walker et al. Jan 1999 A
5864806 Mokbel et al. Jan 1999 A
5864844 James et al. Jan 1999 A
5867799 Lang et al. Feb 1999 A
5873056 Liddy et al. Feb 1999 A
5875437 Atkins Feb 1999 A
5884323 Hawkins et al. Mar 1999 A
5895464 Bhandari et al. Apr 1999 A
5895466 Goldberg et al. Apr 1999 A
5899972 Miyazawa et al. May 1999 A
5913193 Huang et al. Jun 1999 A
5915249 Spencer Jun 1999 A
5930769 Rose Jul 1999 A
5933822 Braden-Harder et al. Aug 1999 A
5936926 Yokouchi et al. Aug 1999 A
5940811 Norris Aug 1999 A
5941944 Messerly Aug 1999 A
5943670 Prager Aug 1999 A
5948040 DeLorme et al. Sep 1999 A
5956699 Wong et al. Sep 1999 A
5960422 Prasad Sep 1999 A
5963924 Williams et al. Oct 1999 A
5966126 Szabo Oct 1999 A
5970474 LeRoy et al. Oct 1999 A
5974146 Randle et al. Oct 1999 A
5982891 Ginter et al. Nov 1999 A
5987132 Rowney Nov 1999 A
5987140 Rowney et al. Nov 1999 A
5987404 Della Pietra et al. Nov 1999 A
5987440 O'Neil et al. Nov 1999 A
5999908 Abelow Dec 1999 A
6016471 Kuhn et al. Jan 2000 A
6023684 Pearson Feb 2000 A
6024288 Gottlich et al. Feb 2000 A
6026345 Shah et al. Feb 2000 A
6026375 Hall et al. Feb 2000 A
6026388 Liddy et al. Feb 2000 A
6026393 Gupta et al. Feb 2000 A
6029132 Kuhn et al. Feb 2000 A
6038533 Buchsbaum et al. Mar 2000 A
6052656 Suda et al. Apr 2000 A
6055514 Wren Apr 2000 A
6055531 Bennett et al. Apr 2000 A
6064960 Bellegarda et al. May 2000 A
6070139 Miyazawa et al. May 2000 A
6070147 Harms et al. May 2000 A
6076051 Messerly et al. Jun 2000 A
6076088 Paik et al. Jun 2000 A
6078914 Redfern Jun 2000 A
6081750 Hoffberg et al. Jun 2000 A
6081774 de Hita et al. Jun 2000 A
6088731 Kiraly et al. Jul 2000 A
6094649 Bowen et al. Jul 2000 A
6105865 Hardesty Aug 2000 A
6108627 Sabourin Aug 2000 A
6119101 Peckover Sep 2000 A
6122616 Henton Sep 2000 A
6125356 Brockman et al. Sep 2000 A
6144938 Surace et al. Nov 2000 A
6173261 Arai et al. Jan 2001 B1
6173279 Levin et al. Jan 2001 B1
6188999 Moody Feb 2001 B1
6195641 Loring et al. Feb 2001 B1
6205456 Nakao Mar 2001 B1
6208971 Bellegarda et al. Mar 2001 B1
6233559 Balakrishnan May 2001 B1
6233578 Machihara et al. May 2001 B1
6246981 Papineni et al. Jun 2001 B1
6260024 Shkedy Jul 2001 B1
6266637 Donovan et al. Jul 2001 B1
6275824 O'Flaherty et al. Aug 2001 B1
6285786 Seni et al. Sep 2001 B1
6308149 Gaussier et al. Oct 2001 B1
6311189 deVries et al. Oct 2001 B1
6317707 Bangalore et al. Nov 2001 B1
6334103 Surace et al. Dec 2001 B1
6356854 Schubert et al. Mar 2002 B1
6356905 Gershman et al. Mar 2002 B1
6366883 Campbell et al. Apr 2002 B1
6366884 Bellegarda et al. Apr 2002 B1
6421672 McAllister et al. Jul 2002 B1
6434524 Weber Aug 2002 B1
6446076 Burkey et al. Sep 2002 B1
6449620 Draper et al. Sep 2002 B1
6453292 Ramaswamy et al. Sep 2002 B2
6460029 Fries et al. Oct 2002 B1
6466654 Cooper et al. Oct 2002 B1
6477488 Bellegarda Nov 2002 B1
6487534 Thelen et al. Nov 2002 B1
6499013 Weber Dec 2002 B1
6501937 Ho et al. Dec 2002 B1
6505158 Conkie Jan 2003 B1
6505175 Silverman et al. Jan 2003 B1
6505183 Loofbourrow et al. Jan 2003 B1
6510417 Woods et al. Jan 2003 B1
6513063 Julia et al. Jan 2003 B1
6523061 Halverson et al. Feb 2003 B1
6523172 Martinez-Guerra et al. Feb 2003 B1
6526382 Yuschik Feb 2003 B1
6526395 Morris Feb 2003 B1
6532444 Weber Mar 2003 B1
6546388 Edlund et al. Apr 2003 B1
6553344 Bellegarda et al. Apr 2003 B2
6556983 Altschuler et al. Apr 2003 B1
6584464 Warthen Jun 2003 B1
6590303 Austin et al. Jul 2003 B1
6598039 Livowsky Jul 2003 B1
6601026 Appelt et al. Jul 2003 B2
6601234 Bowman-Amuah Jul 2003 B1
6604059 Strubbe et al. Aug 2003 B2
6615172 Bennett et al. Sep 2003 B1
6615175 Gazdzinski Sep 2003 B1
6615220 Austin et al. Sep 2003 B1
6625583 Silverman et al. Sep 2003 B1
6631346 Karaorman et al. Oct 2003 B1
6633846 Bennett et al. Oct 2003 B1
6650735 Burton et al. Nov 2003 B2
6654740 Tokuda et al. Nov 2003 B2
6665639 Mozer et al. Dec 2003 B2
6665640 Bennett et al. Dec 2003 B1
6665641 Coorman et al. Dec 2003 B1
6684187 Conkie Jan 2004 B1
6691064 Vroman Feb 2004 B2
6691111 Lazaridis et al. Feb 2004 B2
6691151 Cheyer et al. Feb 2004 B1
6697780 Beutnagel et al. Feb 2004 B1
6697824 Bowman-Amuah Feb 2004 B1
6701294 Ball et al. Mar 2004 B1
6711585 Copperman et al. Mar 2004 B1
6718324 Edlund et al. Apr 2004 B2
6721728 McGreevy Apr 2004 B2
6735632 Kiraly et al. May 2004 B1
6742021 Halverson et al. May 2004 B1
6757362 Cooper et al. Jun 2004 B1
6757718 Halverson et al. Jun 2004 B1
6766320 Wang et al. Jul 2004 B1
6778951 Contractor Aug 2004 B1
6778952 Bellegarda Aug 2004 B2
6778962 Kasai et al. Aug 2004 B1
6778970 Au Aug 2004 B2
6792082 Levine Sep 2004 B1
6807574 Partovi et al. Oct 2004 B1
6810379 Vermeulen et al. Oct 2004 B1
6829603 Chai et al. Dec 2004 B1
6832194 Mozer et al. Dec 2004 B1
6842767 Partovi et al. Jan 2005 B1
6847966 Sommer et al. Jan 2005 B1
6847979 Allemang et al. Jan 2005 B2
6851115 Cheyer et al. Feb 2005 B1
6859931 Cheyer et al. Feb 2005 B1
6895380 Sepe, Jr. May 2005 B2
6895558 Loveland May 2005 B1
6901399 Corston et al. May 2005 B1
6912499 Sabourin et al. Jun 2005 B1
6924828 Hirsch Aug 2005 B1
6928614 Everhart Aug 2005 B1
6931384 Horvitz et al. Aug 2005 B1
6937975 Elworthy Aug 2005 B1
6937986 Denenberg et al. Aug 2005 B2
6964023 Maes et al. Nov 2005 B2
6980949 Ford Dec 2005 B2
6980955 Okutani et al. Dec 2005 B2
6985865 Packingham et al. Jan 2006 B1
6988071 Gazdzinski Jan 2006 B1
6996531 Korall et al. Feb 2006 B2
6999927 Mozer et al. Feb 2006 B2
7027974 Busch et al. Apr 2006 B1
7036128 Julia et al. Apr 2006 B1
7050977 Bennett May 2006 B1
7058569 Coorman et al. Jun 2006 B2
7062428 Hogenhout et al. Jun 2006 B2
7069560 Cheyer et al. Jun 2006 B1
7089292 Roderick et al. Aug 2006 B1
7092887 Mozer et al. Aug 2006 B2
7092928 Elad et al. Aug 2006 B1
7093693 Gazdzinski Aug 2006 B1
7127046 Smith et al. Oct 2006 B1
7127403 Saylor et al. Oct 2006 B1
7136710 Hoffberg et al. Nov 2006 B1
7137126 Coffman et al. Nov 2006 B1
7139714 Bennett et al. Nov 2006 B2
7139722 Perrella et al. Nov 2006 B2
7152070 Musick et al. Dec 2006 B1
7177798 Hsu et al. Feb 2007 B2
7197460 Gupta et al. Mar 2007 B1
7200559 Wang Apr 2007 B2
7203646 Bennett Apr 2007 B2
7216073 Lavi et al. May 2007 B2
7216080 Tsiao et al. May 2007 B2
7225125 Bennett et al. May 2007 B2
7233904 Luisi Jun 2007 B2
7277854 Bennett et al. Oct 2007 B2
7290039 Lisitsa et al. Oct 2007 B1
7310600 Garner et al. Dec 2007 B1
7324947 Jordan et al. Jan 2008 B2
7349953 Lisitsa et al. Mar 2008 B2
7376556 Bennett May 2008 B2
7376645 Bernard May 2008 B2
7379874 Schmid et al. May 2008 B2
7386449 Sun et al. Jun 2008 B2
7389224 Elworthy Jun 2008 B1
7392185 Bennett Jun 2008 B2
7398209 Kennewick et al. Jul 2008 B2
7403938 Harrison et al. Jul 2008 B2
7409337 Potter et al. Aug 2008 B1
7415100 Cooper et al. Aug 2008 B2
7418392 Mozer et al. Aug 2008 B1
7426467 Nashida et al. Sep 2008 B2
7427024 Gazdzinski et al. Sep 2008 B1
7447635 Konopka et al. Nov 2008 B1
7454351 Jeschke et al. Nov 2008 B2
7467087 Gillick et al. Dec 2008 B1
7475010 Chao Jan 2009 B2
7483894 Cao Jan 2009 B2
7487089 Mozer Feb 2009 B2
7496498 Chu et al. Feb 2009 B2
7496512 Zhao et al. Feb 2009 B2
7502738 Kennewick et al. Mar 2009 B2
7508373 Lin et al. Mar 2009 B2
7523108 Cao Apr 2009 B2
7526466 Au Apr 2009 B2
7529671 Rockenbeck et al. May 2009 B2
7529676 Koyama May 2009 B2
7539656 Fratkina et al. May 2009 B2
7546382 Healey et al. Jun 2009 B2
7548895 Pulsipher Jun 2009 B2
7552055 Lecoeuche Jun 2009 B2
7555431 Bennett Jun 2009 B2
7558730 Davis et al. Jul 2009 B2
7571106 Cao et al. Aug 2009 B2
7599918 Shen et al. Oct 2009 B2
7620549 Di Cristo et al. Nov 2009 B2
7624007 Bennett Nov 2009 B2
7634409 Kennewick et al. Dec 2009 B2
7636657 Ju et al. Dec 2009 B2
7640160 Di Cristo et al. Dec 2009 B2
7647225 Bennett et al. Jan 2010 B2
7657424 Bennett Feb 2010 B2
7672841 Bennett Mar 2010 B2
7676026 Baxter, Jr. Mar 2010 B1
7684985 Dominach et al. Mar 2010 B2
7693715 Hwang et al. Apr 2010 B2
7693720 Kennewick et al. Apr 2010 B2
7698131 Bennett Apr 2010 B2
7702500 Blaedow Apr 2010 B2
7702508 Bennett Apr 2010 B2
7707027 Balchandran et al. Apr 2010 B2
7707267 Lisitsa et al. Apr 2010 B2
7711565 Gazdzinski May 2010 B1
7711672 Au May 2010 B2
7716056 Weng et al. May 2010 B2
7720674 Kaiser et al. May 2010 B2
7720683 Vermeulen et al. May 2010 B1
7725307 Bennett May 2010 B2
7725318 Gavalda et al. May 2010 B2
7725320 Bennett May 2010 B2
7725321 Bennett May 2010 B2
7729904 Bennett Jun 2010 B2
7729916 Coffman et al. Jun 2010 B2
7734461 Kwak et al. Jun 2010 B2
7747616 Yamada et al. Jun 2010 B2
7752152 Paek et al. Jul 2010 B2
7756868 Lee Jul 2010 B2
7774204 Mozer et al. Aug 2010 B2
7783486 Rosser et al. Aug 2010 B2
7801729 Mozer Sep 2010 B2
7809570 Kennewick et al. Oct 2010 B2
7809610 Cao Oct 2010 B2
7818176 Freeman et al. Oct 2010 B2
7822608 Cross, Jr. et al. Oct 2010 B2
7826945 Zhang et al. Nov 2010 B2
7831426 Bennett Nov 2010 B2
7840400 Lavi et al. Nov 2010 B2
7840447 Kleinrock et al. Nov 2010 B2
7853574 Kraenzel et al. Dec 2010 B2
7873519 Bennett Jan 2011 B2
7873654 Bernard Jan 2011 B2
7881936 Longé et al. Feb 2011 B2
7890652 Bull et al. Feb 2011 B2
7912702 Bennett Mar 2011 B2
7917367 Di Cristo et al. Mar 2011 B2
7917497 Harrison et al. Mar 2011 B2
7920678 Cooper et al. Apr 2011 B2
7925525 Chin Apr 2011 B2
7930168 Weng et al. Apr 2011 B2
7949529 Weider et al. May 2011 B2
7949534 Davis et al. May 2011 B2
7974844 Sumita Jul 2011 B2
7974972 Cao Jul 2011 B2
7983915 Knight et al. Jul 2011 B2
7983917 Kennewick et al. Jul 2011 B2
7983997 Allen et al. Jul 2011 B2
7986431 Emori et al. Jul 2011 B2
7987151 Schott et al. Jul 2011 B2
7996228 Miller et al. Aug 2011 B2
8000453 Cooper et al. Aug 2011 B2
8005679 Jordan et al. Aug 2011 B2
8015006 Kennewick et al. Sep 2011 B2
8024195 Mozer et al. Sep 2011 B2
8036901 Mozer Oct 2011 B2
8041570 Mirkovic et al. Oct 2011 B2
8041611 Kleinrock et al. Oct 2011 B2
8055708 Chitsaz et al. Nov 2011 B2
8065155 Gazdzinski Nov 2011 B1
8065156 Gazdzinski Nov 2011 B2
8069046 Kennewick et al. Nov 2011 B2
8073681 Baldwin et al. Dec 2011 B2
8078473 Gazdzinski Dec 2011 B1
8082153 Coffman et al. Dec 2011 B2
8095364 Longé et al. Jan 2012 B2
8099289 Mozer et al. Jan 2012 B2
8107401 John et al. Jan 2012 B2
8112275 Kennewick et al. Feb 2012 B2
8112280 Lu Feb 2012 B2
8117037 Gazdzinski Feb 2012 B2
8131557 Davis et al. Mar 2012 B2
8140335 Kennewick et al. Mar 2012 B2
8165886 Gagnon et al. Apr 2012 B1
8166019 Lee et al. Apr 2012 B1
8190359 Bourne May 2012 B2
8195467 Mozer et al. Jun 2012 B2
8204238 Mozer Jun 2012 B2
8205788 Gazdzinski et al. Jun 2012 B1
8219407 Roy et al. Jul 2012 B1
8285551 Gazdzinski Oct 2012 B2
8285553 Gazdzinski Oct 2012 B2
8290778 Gazdzinski Oct 2012 B2
8290781 Gazdzinski Oct 2012 B2
8296146 Gazdzinski Oct 2012 B2
8296153 Gazdzinski Oct 2012 B2
8301456 Gazdzinski Oct 2012 B2
8311834 Gazdzinski Nov 2012 B1
8370158 Gazdzinski Feb 2013 B2
8371503 Gazdzinski Feb 2013 B2
8374871 Ehsani et al. Feb 2013 B2
8447612 Gazdzinski May 2013 B2
20010027396 Sato Oct 2001 A1
20010047264 Roundtree Nov 2001 A1
20020032564 Ehsani et al. Mar 2002 A1
20020046025 Hain Apr 2002 A1
20020069063 Buchner et al. Jun 2002 A1
20020077817 Atal Jun 2002 A1
20020103641 Kuo et al. Aug 2002 A1
20020164000 Cohen et al. Nov 2002 A1
20020198714 Zhou Dec 2002 A1
20030086699 Benyamin et al. May 2003 A1
20030158737 Csicsatka Aug 2003 A1
20040135701 Yasuda et al. Jul 2004 A1
20040236778 Junqua et al. Nov 2004 A1
20050015254 Beaman Jan 2005 A1
20050045373 Born Mar 2005 A1
20050055403 Brittan Mar 2005 A1
20050071332 Ortega et al. Mar 2005 A1
20050080625 Bennett et al. Apr 2005 A1
20050091118 Fano Apr 2005 A1
20050102614 Brockett et al. May 2005 A1
20050108001 Aarskog May 2005 A1
20050114124 Liu et al. May 2005 A1
20050119897 Bennett et al. Jun 2005 A1
20050143972 Gopalakrishnan et al. Jun 2005 A1
20050165607 DiFabbrizio et al. Jul 2005 A1
20050182629 Coorman et al. Aug 2005 A1
20050196733 Budra et al. Sep 2005 A1
20050283729 Morris et al. Dec 2005 A1
20050288936 Busayapongchai et al. Dec 2005 A1
20060018492 Chiu et al. Jan 2006 A1
20060095848 Naik May 2006 A1
20060106592 Brockett et al. May 2006 A1
20060106594 Brockett et al. May 2006 A1
20060106595 Brockett et al. May 2006 A1
20060117002 Swen Jun 2006 A1
20060122834 Bennett Jun 2006 A1
20060143007 Koh et al. Jun 2006 A1
20060168150 Naik et al. Jul 2006 A1
20060291666 Ball et al. Dec 2006 A1
20070055529 Kanevsky et al. Mar 2007 A1
20070058832 Hug et al. Mar 2007 A1
20070088556 Andrew Apr 2007 A1
20070100790 Cheyer et al. May 2007 A1
20070106674 Agrawal et al. May 2007 A1
20070118377 Badino et al. May 2007 A1
20070135949 Snover et al. Jun 2007 A1
20070156410 Stohr et al. Jul 2007 A1
20070174188 Fish Jul 2007 A1
20070180383 Naik Aug 2007 A1
20070185917 Prahlad et al. Aug 2007 A1
20070282595 Tunning et al. Dec 2007 A1
20080015864 Ross et al. Jan 2008 A1
20080021708 Bennett et al. Jan 2008 A1
20080034032 Healey et al. Feb 2008 A1
20080046820 Lee et al. Feb 2008 A1
20080052063 Bennett et al. Feb 2008 A1
20080120112 Jordan et al. May 2008 A1
20080129520 Lee Jun 2008 A1
20080140657 Azvine et al. Jun 2008 A1
20080221903 Kanevsky et al. Sep 2008 A1
20080228496 Yu et al. Sep 2008 A1
20080247519 Abella et al. Oct 2008 A1
20080249770 Kim et al. Oct 2008 A1
20080300878 Bennett Dec 2008 A1
20080319763 Di Fabbrizio et al. Dec 2008 A1
20090006100 Badger et al. Jan 2009 A1
20090006343 Platt et al. Jan 2009 A1
20090030800 Grois Jan 2009 A1
20090055179 Cho et al. Feb 2009 A1
20090058823 Kocienda Mar 2009 A1
20090076796 Daraselia Mar 2009 A1
20090077165 Rhodes et al. Mar 2009 A1
20090100049 Cao Apr 2009 A1
20090112677 Rhett Apr 2009 A1
20090150156 Kennewick et al. Jun 2009 A1
20090157401 Bennett Jun 2009 A1
20090164441 Cheyer Jun 2009 A1
20090171664 Kennewick et al. Jul 2009 A1
20090287583 Holmes Nov 2009 A1
20090290718 Kahn et al. Nov 2009 A1
20090299745 Kennewick et al. Dec 2009 A1
20090299849 Cao et al. Dec 2009 A1
20090307162 Bui et al. Dec 2009 A1
20100005081 Bennett Jan 2010 A1
20100023320 Di Cristo et al. Jan 2010 A1
20100036660 Bennett Feb 2010 A1
20100042400 Block et al. Feb 2010 A1
20100088020 Sano et al. Apr 2010 A1
20100138215 Williams Jun 2010 A1
20100145700 Kennewick et al. Jun 2010 A1
20100204986 Kennewick et al. Aug 2010 A1
20100217604 Baldwin et al. Aug 2010 A1
20100228540 Bennett Sep 2010 A1
20100235341 Bennett Sep 2010 A1
20100257160 Cao Oct 2010 A1
20100262599 Nitz Oct 2010 A1
20100277579 Cho et al. Nov 2010 A1
20100280983 Cho et al. Nov 2010 A1
20100286985 Kennewick et al. Nov 2010 A1
20100299142 Freeman et al. Nov 2010 A1
20100312547 van Os et al. Dec 2010 A1
20100318576 Kim Dec 2010 A1
20100332235 David Dec 2010 A1
20100332348 Cao Dec 2010 A1
20110047072 Ciurea Feb 2011 A1
20110060807 Martin et al. Mar 2011 A1
20110082688 Kim et al. Apr 2011 A1
20110112827 Kennewick et al. May 2011 A1
20110112921 Kennewick et al. May 2011 A1
20110119049 Ylonen May 2011 A1
20110125540 Jang et al. May 2011 A1
20110130958 Stahl et al. Jun 2011 A1
20110131036 Di Cristo et al. Jun 2011 A1
20110131045 Cristo et al. Jun 2011 A1
20110143811 Rodriguez Jun 2011 A1
20110144999 Jang et al. Jun 2011 A1
20110161076 Davis et al. Jun 2011 A1
20110161309 Lung et al. Jun 2011 A1
20110175810 Markovic et al. Jul 2011 A1
20110184730 LeBeau et al. Jul 2011 A1
20110218855 Cao et al. Sep 2011 A1
20110231182 Weider et al. Sep 2011 A1
20110231188 Kennewick et al. Sep 2011 A1
20110264643 Cao Oct 2011 A1
20110279368 Klein et al. Nov 2011 A1
20110306426 Novak et al. Dec 2011 A1
20120002820 Leichter Jan 2012 A1
20120016678 Gruber et al. Jan 2012 A1
20120020490 Leichter Jan 2012 A1
20120022787 LeBeau et al. Jan 2012 A1
20120022857 Baldwin et al. Jan 2012 A1
20120022860 Lloyd et al. Jan 2012 A1
20120022868 LeBeau et al. Jan 2012 A1
20120022869 Lloyd et al. Jan 2012 A1
20120022870 Kristjansson et al. Jan 2012 A1
20120022874 Lloyd et al. Jan 2012 A1
20120022876 LeBeau et al. Jan 2012 A1
20120023088 Cheng et al. Jan 2012 A1
20120034904 LeBeau et al. Feb 2012 A1
20120035908 LeBeau et al. Feb 2012 A1
20120035924 Jitkoff et al. Feb 2012 A1
20120035931 LeBeau et al. Feb 2012 A1
20120035932 Jitkoff et al. Feb 2012 A1
20120137367 Dupont et al. May 2012 A1
20120173464 Tur et al. Jul 2012 A1
20120265528 Gruber et al. Oct 2012 A1
20120271676 Aravamudan et al. Oct 2012 A1
20120311583 Gruber et al. Dec 2012 A1
20130110518 Gruber et al. May 2013 A1
20130110520 Cheyer et al. May 2013 A1
Foreign Referenced Citations (54)
Number Date Country
681573 Apr 1993 CH
3837590 May 1990 DE
198 41 541 Dec 2007 DE
0138061 Sep 1984 EP
0138061 Apr 1985 EP
0218859 Apr 1987 EP
0262938 Apr 1988 EP
0293259 Nov 1988 EP
0299572 Jan 1989 EP
0313975 May 1989 EP
0314908 May 1989 EP
0327408 Aug 1989 EP
0389271 Sep 1990 EP
0411675 Feb 1991 EP
0559349 Sep 1993 EP
0559349 Sep 1993 EP
0570660 Nov 1993 EP
0863453 Sep 1998 EP
1245023 (A1) Oct 2002 EP
1435620 Jul 2004 EP
1181802 Feb 2007 EP
2 109 295 Oct 2009 EP
2293667 Apr 1996 GB
06 019965 Jan 1994 JP
2001 125896 May 2001 JP
2002 024212 Jan 2002 JP
2003517158 (A) May 2003 JP
2009 036999 Feb 2009 JP
1020060011603 Feb 2006 KR
1020070024262 Mar 2007 KR
10-2007-0057496 Jun 2007 KR
10-0776800 Nov 2007 KR
10-2008-001227 Feb 2008 KR
10-0810500 Mar 2008 KR
10 2008 109322 Dec 2008 KR
10 2009 086805 Aug 2009 KR
10-0920267 Oct 2009 KR
10-2010-0032792 Apr 2010 KR
10 2011 0113414 Oct 2011 KR
WO 9502221 Jan 1995 WO
WO 9726612 Jul 1997 WO
WO 9841956 Sep 1998 WO
WO 9901834 Jan 1999 WO
WO 9908238 Feb 1999 WO
WO 9956227 Nov 1999 WO
WO 0060435 Oct 2000 WO
WO 0060435 Oct 2000 WO
0130047 Apr 2001 WO
WO 02073603 Sep 2002 WO
03094489 Nov 2003 WO
WO 2006129967 Dec 2006 WO
WO 2008085742 Jul 2008 WO
WO 2008109835 Sep 2008 WO
WO 2011088053 Jul 2011 WO
Non-Patent Literature Citations (415)
Entry
Kane, Shaun K., “Slide Rule: Making Mobile Touch Screens Accessible to Blind People Using Multi-Touch Interaction Techniques”Assets '08 Oct. 13-15, 2008, XP002576187, http://students.washington.edu/skane/pubs/assets08.pdf/ pp. 73-80.
Alfred App, 2011, http://www.alfredapp.com/, 5 pages.
Ambite, JL., et al., “Design and Implementation of the CALO Query Manager,” Copyright© 2006, American Association for Artificial Intelligence, (www.aaai.org), 8 pages.
Ambite, JL., et al., “Integration of Heterogeneous Knowledge Sources in the CALO Query Manager,” 2005, The 4th International Conference on Ontologies, DataBases, and Applications of Semantics (ODBASE), Agia Napa, Cyprus, ttp://www.isi.edu/people/ambite/publications/integration—heterogeneous—knowledge—sources—calo—query—manager, 18 pages.
Belvin, R. et al., “Development of the HRL Route Navigation Dialogue System,” 2001, In Proceedings of the First International Conference on Human Language Technology Research, Paper, Copyright © 2001 HRL Laboratories, LLC, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.10.6538, 5 pages.
Berry, P. M., et al. “PTIME: Personalized Assistance for Calendaring,” ACM Transactions on Intelligent Systems and Technology, vol. 2, No. 4, Article 40, Publication date: Jul. 2011, 40:1-22, 22 pages.
Butcher, M., “EVI arrives in town to go toe-to-toe with Siri,” Jan. 23, 2012, http://techcrunch.com/2012/01/23/evi-arrives-in-town-to-go-toe-to-toe-with-siri/, 2 pages.
Chen, Y., “Multimedia Siri Finds And Plays Whatever You Ask for,” Feb. 9, 2012, http://www.psfk.com/2012/02/multimedia-siri.html, 9 pages.
Cheyer, A. et al., “Spoken Language and Multimodal Applications for Electronic Realties,” © Springer-Verlag London Ltd, Virtual Reality 1999, 3:1-15, 15 pages.
Cutkosky, M. R. et al., “PACT: An Experiment in Integrating Concurrent Engineering Systems,” Journal, Computer, vol. 26 Issue 1, Jan. 1993, IEEE Computer Society Press Los Alamitos, CA, USA, http://dl.acm.org/citation.cfm?id=165320, 14 pages.
Ericsson, S. et al., “Software illustrating a unified approach to multimodality and multilinguality in the in-home domain,” Dec. 22, 2006, Talk and Look: Tools for Ambient Linguistic Knowledge, http://www.talk-project.eurice.eu/fileadmin/talk/publications—public/deliverables—public/D1—6.pdf, 127 pages.
Evi, “Meet Evi: the one mobile app that provides solutions for your everyday problems,” Feb. 8, 2012, http://www.evi.com/, 3 pages.
Feigenbaum, E., et al., “Computer-assisted Semantic Annotation of Scientific Life Works,” 2007, http://tomgruber.org/writing/stanford-cs300.pdf, 22 pages.
Gannes, L., “Alfred App Gives Personalized Restaurant Recommendations,” allthingsd.com, Jul. 18, 2011, http://allthingsd.com/20110718/alfred-app-gives-personalized-restaurant-recommendations/, 3 pages.
Gautier, P. O., et al. “Generating Explanations of Device Behavior Using Compositional Modeling and Causal Ordering,” 1993, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.42.8394, 9 pages.
Gervasio, M. T., et al., Active Preference Learning for Personalized Calendar Scheduling Assistancae, Copyright © 2005, http://www.ai.sri.com/˜gervasio/pubs/gervasio-iui05.pdf, 8 pages.
Glass, A., “Explaining Preference Learning,” 2006, http://cs229.stanford.edu/proj2006/Glass-ExplainingPreferenceLearning.pdf, 5 pages.
Gruber, T. R., et al., “An Ontology for Engineering Mathematics,” In Jon Doyle, Piero Torasso, & Erik Sandewall, Eds., Fourth International Conference on Principles of Knowledge Representation and Reasoning, Gustav Stresemann Institut, Bonn, Germany, Morgan Kaufmann, 1994, http://www-ksl.stanford.edu/knowledge-sharing/papers/engmath.html, 22 pages.
Gruber, T. R., “A Translation Approach to Portable Ontology Specifications,” Knowledge Systems Laboratory, Stanford University, Sep. 1992, Technical Report KSL 92-71, Revised Apr. 1993, 27 pages.
Gruber, T. R., “Automated Knowledge Acquisition for Strategic Knowledge,” Knowledge Systems Laboratory, Machine Learning, 4, 293-336 (1989), 44 pages.
Gruber, T. R., “(Avoiding) the Travesty of the Commons,” Presentation at NPUC 2006, New Paradigms for User Computing, IBM Almaden Research Center, Jul. 24, 2006. http://tomgruber.org/writing/avoiding-travestry.htm, 52 pages.
Gruber, T. R., “Big Think Small Screen: How semantic computing in the cloud will revolutionize the consumer experience on the phone,” Keynote presentation at Web 3.0 conference, Jan. 27, 2010, http://tomgruber.org/writing/web30jan2010.htm, 41 pages.
Gruber, T. R., “Collaborating around Shared Content on the WWW,” W3C Workshop on WWW and Collaboration, Cambridge, MA, Sep. 11, 1995, http://www.w3.org/Collaboration/Workshop/Proceedings/P9.html, 1 page.
Gruber, T. R., “Collective Knowledge Systems: Where the Social Web meets the Semantic Web,” Web Semantics: Science, Services and Agents on the World Wide Web (2007), doi:10.1016/j.websem.2007.11.011, keynote presentation given at the 5th International Semantic Web Conference, Nov. 7, 2006, 19 pages.
Gruber, T. R., “Where the Social Web meets the Semantic Web,” Presentation at the 5th International Semantic Web Conference, Nov. 7, 2006, 38 pages.
Gruber, T. R., “Despite our Best Efforts, Ontologies are not the Problem,” AAAI Spring Symposium, Mar. 2008, http://tomgruber.org/writing/aaai-ss08.htm, 40 pages.
Gruber, T. R., “Enterprise Collaboration Management with Intraspect,” Intraspect Software, Inc., Instraspect Technical White Paper Jul. 2001, 24 pages.
Gruber, T. R., “Every ontology is a treaty—a social agreement—among people with some common motive in sharing,” Interview by Dr. Miltiadis D. Lytras, Official Quarterly Bulletin of AIS Special Interest Group on Semantic Web and Information Systems, vol. 1, Issue 3, 2004, http://www.sigsemis.org 1, 5 pages.
Gruber, T. R., et al., “Generative Design Rationale: Beyond the Record and Replay Paradigm,” Knowledge Systems Laboratory, Stanford University, Dec. 1991, Technical Report KSL 92-59, Updated Feb. 1993, 24 pages.
Gruber, T. R., “Helping Organizations Collaborate, Communicate, and Learn,” Presentation to NASA Ames Research, Mountain View, CA, Mar. 2003, http://tomgruber.org/writing/organizational-intelligence-talk.htm, 30 pages.
Gruber, T. R., “Intelligence at the Interface: Semantic Technology and the Consumer Internet Experience,” Presentation at Semantic Technologies conference (SemTech08), May 20, 2008, http://tomgruber.org/writing.htm, 40 pages.
Gruber, T. R., Interactive Acquisition of Justifications: Learning “Why” by Being Told “What” Knowledge Systems Laboratory, Stanford University, Oct. 1990, Technical Report KSL 91-17, Revised Feb. 1991, 24 pages.
Gruber, T. R., “It Is What It Does: The Pragmatics of Ontology for Knowledge Sharing,” (c) 2000, 2003, http://www.cidoc-crm.org/docs/symposium—presentations/gruber—cidoc-ontology-2003.pdf, 21 pages.
Gruber, T. R., et al., “Machine-generated Explanations of Engineering Models: A Compositional Modeling Approach,” (1993) In Proc. International Joint Conference on Artificial Intelligence, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.34.930, 7 pages.
Gruber, T. R., “2021: Mass Collaboration and the Really New Economy,” TNTY Futures, the newsletter of The Next Twenty Years series, vol. 1, Issue 6, Aug., 2001, http://www.tnty.com/newsletter/futures/archive/v01-05business.html, 5 pages.
Gruber, T. R., et al.,“NIKE: A National Infrastructure for Knowledge Exchange,” Oct. 1994, http://www.eit.com/papers/nike/nike.html and nike.ps, 10 pages.
Gruber, T. R., “Ontologies, Web 2.0 and Beyond,” Apr. 24, 2007, Ontology Summit 2007, http://tomgruber.org/writing/ontolog-social-web-keynote.pdf, 17 pages.
Gruber, T. R., “Ontology of Folksonomy: A Mash-up of Apples and Oranges,” Originally published to the web in 2005, Int'l Journal on Semantic Web & Information Systems, 3(2), 2007, 7 pages.
Gruber, T. R., “Siri, a Virtual Personal Assistant—Bringing Intelligence to the Interface,” Jun. 16, 2009, Keynote presentation at Semantic Technologies conference, Jun. 2009. http://tomgruber.org/writing/senntech09.htm, 22 pages.
Gruber, T. R., “TagOntology,” Presentation to Tag Camp, www.tagcamp.org, Oct. 29, 2005, 20 pages.
Gruber, T. R., et al., “Toward a Knowledge Medium for Collaborative Product Development,” In Artificial Intelligence in Design 1992, from Proceedings of the Second International Conference on Artificial Intelligence in Design, Pittsburgh, USA, Jun. 22-25, 1992, 19 pages.
Gruber, T. R., “Toward Principles for the Design of Ontologies Used for Knowledge Sharing,” In International Journal Human-Computer Studies 43, p. 907-928, substantial revision of paper presented at the International Workshop on Formal Ontology, Mar. 1993, Padova, Italy, available as Technical Report KSL 93-04, Knowledge Systems Laboratory, Stanford University, further revised Aug. 23, 1993, 23 pages.
Guzzoni, D., et al., “Active, A Platform for Building Intelligent Operating Rooms,” Surgetica 2007 Computer-Aided Medical Interventions: tools and applications, pp. 191-198, Paris, 2007, Sauramps Médical, http://lsro.epfl.ch/page-68384-en.html, 8 pages.
Guzzoni, D., et al., “Active, A Tool for Building Intelligent User Interfaces,” ASC 2007, Palma de Mallorca, http://lsro.epfl.ch/page-34241.html, 6 pages.
Guzzoni, D., et al., “Modeling Human-Agent Interaction with Active Ontologies,” 2007, AAAI Spring Symposium, Interaction Challenges for Intelligent Assistants, Stanford University, Palo Alto, California, 8 pages.
Hardawar, D., “Driving app Waze builds its own Siri for hands-free voice control,” Feb. 9, 2012, http://venturebeat.com/2012/02/09/driving-app-waze-builds-its-own-siri-for-hands-free-voice-control/, 4 pages.
Intraspect Software, “The lntraspect Knowledge Management Solution: Technical Overview,” http://tomgruber.org/writing/intraspect-whitepaper-1998.pdf, 18 pages.
Karp, P. D., “A Generic Knowledge-Base Access Protocol,” May 12, 1994, http://lecture.cs.buu.ac.th/˜f50353/Document/gfp.pdf, 66 pages.
Lemon, O., et al., “Multithreaded Context for Robust Conversational Interfaces: Context-Sensitive Speech Recognition and Interpretation of Corrective Fragments,” Sep. 2004, ACM Transactions on Computer-Human Interaction, vol. 11, No. 3, 27 pages.
Leong, L., et al., “CASIS: A Context-Aware Speech Interface System,” IUI'05, Jan. 9-12, 2005, Proceedings of the 10th international conference on Intelligent user interfaces, San Diego, California, USA, 8 pages.
Lieberman, H., et al., “Out of context: Computer systems that adapt to, and learn from, context,” 2000, IBM Systems Journal, vol. 39, No. 3/4, 2000, 16 pages.
Lin, B., et al., “A Distributed Architecture for Cooperative Spoken Dialogue Agents with Coherent Dialogue State and History,” 1999, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.42.272, 4 pages.
McGuire, J., et al., “SHADE: Technology for Knowledge-Based Collaborative Engineering,” 1993, Journal of Concurrent Engineering: Applications and Research (CERA), 18 pages.
Milward, D., et al., “D2.2: Dynamic Multimodal Interface Reconfiguration,” Talk and Look: Tools for Ambient Linguistic Knowledge, Aug. 8, 2006, http://www.ihmc.us/users/nblaylock/Pubs/Files/talk—d2.2.pdf, 69 pages.
Mitra, P., et al., “A Graph-Oriented Model for Articulation of Ontology Interdependencies,” 2000, http://ilpubs.stanford.edu:8090/442/1/2000-20.pdf, 15 pages.
Moran, D. B., et al., “Multimodal User Interfaces in the Open Agent Architecture,” Proc. of the 1997 International Conference on Intelligent User Interfaces (IUI97), 8 pages.
Mozer, M., “An Intelligent Environment Must be Adaptive,” Mar./Apr. 1999, IEEE Intelligent Systems, 3 pages.
Mälhäuser, M., “Context Aware Voice User Interfaces for Workflow Support,” Darmstadt 2007, http://tuprints.ulb.tu-darmstadt.de/876/1/PhD.pdf, 254 pages.
Naone, E., “TR10: Intelligent Software Assistant,” Mar.-Apr. 2009, Technology Review, http://www.technologyreview.com/printer—friendly—article.aspx?id=22117, 2 pages.
Neches, R., “Enabling Technology for Knowledge Sharing,” Fall 1991, Al Magazine, pp. 37-56, (21 pages).
Nöth, E., et al., “Verbmobil: The Use of Prosody in the Linguistic Components of a Speech Understanding System,” IEEE Transactions on Speech and Audio Processing, vol. 8, No. 5, Sep. 2000, 14 pages.
Rice, J., et al., “Monthly Program: Nov. 14, 1995,” The San Francisco Bay Area Chapter of ACM SIGCHI, http://www.baychi.org/calendar/19951114/, 2 pages.
Rivlin, Z., et al., “Maestro: Conductor of Multimedia Analysis Technologies,” 1999 SRI International, Communications of the Association for Computing Machinery (CACM), 7 pages.
Sheth, A., et al., “Relationships at the Heart of Semantic Web: Modeling, Discovering, and Exploiting Complex Semantic Relationships,” Oct. 13, 2002, Enhancing the Power of the Internet: Studies in Fuzziness and Soft Computing, SpringerVerlag, 38 pages.
Simonite, T., “One Easy Way to Make Siri Smarter,” Oct. 18, 2011, Technology Review, http://www.technologyreview.conn/printer—friendly—article.aspx?id=38915, 2 pages.
Stent, A., et al., “The CommandTalk Spoken Dialogue System,” 1999, http://acl.ldc.upenn.edu/P/P99/P99-1024.pdf, 8 pages.
Tofel, K., et al., “SpeakTolt: A personal assistant for older iPhones, iPads,” Feb. 9, 2012, http://gigaom.com/apple/speaktoit-siri-for-older-iphones-ipads/, 7 pages.
Tucker, J., “Too lazy to grab your TV remote? Use Siri instead,” Nov. 30, 2011, http://www.engadget.com/2011/11/30/too-lazy-to-grab-your-tv-remote-use-siri-instead/, 8 pages.
Tur, G., et al., “The CALO Meeting Speech Recognition and Understanding System,” 2008, Proc. IEEE Spoken Language Technology Workshop, 4 pages.
Tur, G., et al., “The-CALO-Meeting-Assistant System,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 18, No. 6, Aug. 2010, 11 pages.
Vlingo, “Vlingo Launches Voice Enablement Application on Apple App Store,” Vlingo press release dated Dec. 3, 2008, 2 pages.
YouTube, “Knowledge Navigator,” 5:34 minute video uploaded to YouTube by Knownav on Apr. 29, 2008, http://www.youtube.com/watch?v=QRH8eimU—20on Aug. 3, 2006, 1 page.
YouTube, “Send Text, Listen to and Send E-Mail ‘By Voice’ www.voiceassist.com,” 2:11 minute video uploaded to YouTube by VoiceAssist on Jul 30, 2009, http://www.youtube.corin/watch?v=0tEU61nHHA4, 1 page.
YouTube,“Text'nDrive App Demo—Listen and Reply to your Messages by Voice while Driving!,” 1:57 minute video uploaded to YouTube by TextnDrive on Apr 27, 2010, http://www.youtube.com/watch?v=WaGfzoHsAMw, 1 page.
YouTube, “Voice on the Go (BlackBerry),” 2:51 minute video uploaded to YouTube by VoiceOnTheGo on Jul. 27, 2009, http://www.youtube.com/watch?v=pJqpWgQS98w, 1 page.
International Search Report and Written Opinion dated Nov. 29, 2011, received in International Application No. PCT/US2011/20861, which corresponds to U.S. Appl. No. 12/987,982, 15 pages. (Thomas Robert Gruber).
Elio, R. et al., “On Abstract Task Models and Conversation Policies,” http://webdocs.cs.ualberta.ca/˜ree/publications/papers2/ATS.AA99.pdf, May 1999, 10 pages.
Glass, J., et al., “Multilingual Spoken-Language Understanding in the MIT Voyager System,” Aug. 1995, http://groups.csail.mit.edu/sIs/publications/1995/speechcomm95-voyager.pdf, 29 pages.
Goddeau, D., et al., “A Form-Based Dialogue Manager for Spoken Language Applications,” Oct. 1996, http://phasedance.com/pdf/icslp96.pdf, 4 pages.
Goddeau, D., et al., “Galaxy: A Human-Language Interface to On-Line Travel Information,” 1994 International Conference on Spoken Language Processing, Sep. 18-22, 1994, Pacific Convention Plaza Yokohama, Japan, 6 pages.
KIPO's Notice of Prliminary Rejection (English Translation) for Korean Patent Application No. 10-2010-8828, 7 pages.
Meng, H., et al., “Wheels: A Conversational System in the Automobile Classified Domain,” Oct. 1996, httphttp://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.16.3022, 4 pages.
Phoenix Solutions, Inc. v. West Interactive Corp., Document 40, Declaration of Christopher Schmandt Regarding the MIT Galaxy System dated Jul. 2, 2010, 162 pages.
Rice, J., et al., “Using the Web Instead of a Window System,” Knowledge Systems Laboratory, Stanford University, (http://tomgruber.org/writing/ks1-95-69.pdf, Sep. 1995.) CHI '96 Proceedings: Conference on Human Factors in Computing Systems, Apr. 13-18, 1996, Vancouver, BC, Canada, 14 pages.
Seneff, S., et al., “A New Restaurant Guide Conversational System: Issues in Rapid Prototyping for Specialized Domains,” Oct. 1996, citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.16 . . . rep . . . , 4 pages.
Vlingo InCar, “Distracted Driving Solution with Vlingo InCar,” 2:38 minute video uploaded to YouTube by Vlingo Voice on Oct. 6, 2010, http://www.youtube.com/watch?v=Vqs8XfXxgz4, 2 pages.
Zue, V., “Conversational Interfaces: Advances and Challenges,” Sep. 1997, http://www.cs.cmu.edu/˜dod/papers/zue97.pdf, 10 pages.
Zue, V. W., “Toward Systems that Understand Spoken Language,” Feb. 1994, ARPA Strategic Computing Institute, © 1994 IEEE, 9 pages.
Martin, D., et al., “The Open Agent Architecture: A Framework for building distributed software systems,” Jan.-Mar. 1999, Applied Artificial Intelligence: An International Journal, vol. 13, No. 1-2, http://adam.cheyer.com/papers/oaa.pdf, 38 pages.
Acero, A., et al., “Environmental Robustness in Automatic Speech Recognition,” International Conference on Acoustics, Speech, and Signal Processing (ICASSP'90), Apr. 3-6, 1990, 4 pages.
Acero, A., et al., “Robust Speech Recognition by Normalization of The Acoustic Space,” International Conference on Acoustics, Speech, and Signal Processing, 1991, 4 pages.
Ahlbom, G., et al., “Modeling Spectral Speech Transitions Using Temporal Decomposition Techniques,” IEEE International Conference of Acoustics, Speech, and Signal Processing (ICASSP'87), Apr. 1987, vol. 12, 4 pages.
Aikawa, K., “Speech Recognition Using Time-Warping Neural Networks,” Proceedings of the 1991 IEEE Workshop on Neural Networks for Signal Processing, Sep. 30 to Oct. 1, 1991, 10 pages.
Anastasakos, A., et al., “Duration Modeling in Large Vocabulary Speech Recognition,” International Conference on Acoustics, Speech, and Signal Processing (ICASSP'95), May 9-12, 1995, 4 pages.
Anderson, R. H., “Syntax-Directed Recognition of Hand-Printed Two-Dimensional Mathematics,” In Proceedings of Symposium on Interactive Systems for Experimental Applied Mathematics: Proceedings of the Association for Computing Machinery Inc. Symposium, © 1967, 12 pages.
Ansari, R., et al., “Pitch Modification of Speech using a Low-Sensitivity Inverse Filter Approach,” IEEE Signal Processing Letters, vol. 5, No. 3, Mar. 1998, 3 pages.
Anthony, N. J., et al., “Supervised Adaption for Signature Verification System,” Jun. 1, 1978, IBM Technical Disclosure, 3 pages.
Apple Computer, “Guide Maker User's Guide,” © Apple Computer, Inc., Apr. 27, 1994, 8 pages.
Apple Computer, “Introduction to Apple Guide,” © Apple Computer, Inc., Apr. 28, 1994, 20 pages.
Asanovie, K., et al., “Experimental Determination of Precision Requirements for Back-Propagation Training of Artificial Neural Networks,” In Proceedings of the 2nd International Conference of Microelectronics for Neural Networks, 1991, www.ICSI.Berkeley.EDU, 7 pages.
Atal, B. S., “Efficient Coding of LPC Parameters by Temporal Decomposition,” IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'83), Apr. 1983, 4 pages.
Bahl, L. R., et al., “Acoustic Markov Models Used in the Tangora Speech Recognition System,” In Proceeding of International Conference on Acoustics, Speech, and Signal Processing (ICASSP'88), Apr. 11-14, 1988, vol. 1, 4 pages.
Bahl, L. R., et al., “A Maximum Likelihood Approach to Continuous Speech Recognition,” IEEE Transaction on Pattern Analysis and Machine Intelligence, vol. PAMI-5, No. 2, Mar. 1983, 13 pages.
Bahl, L. R., et al., “A Tree-Based Statistical Language Model for Natural Language Speech Recognition,” IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 37, Issue 7, Jul. 1989, 8 pages.
Bahl, L. R., et al., “Large Vocabulary Natural Language Continuous Speech Recognition,” In Proceedings of 1989 International Conference on Acoustics, Speech, and Signal Processing, May 23-26, 1989, vol. 1, 6 pages.
Bahl, L. R., et al, “Multonic Markov Word Models for Large Vocabulary Continuous Speech Recognition,” IEEE Transactions on Speech and Audio Processing, vol. 1, No. 3, Jul. 1993, 11 pages.
Bahl, L. R., et al., “Speech Recognition with Continuous-Parameter Hidden Markov Models,” In Proceeding of International Conference on Acoustics, Speech, and Signal Processing (ICASSP'88), Apr. 11-14, 1988, vol. 1, 8 pages.
Banbrook, M., “Nonlinear Analysis of Speech from a Synthesis Perspective,” A thesis submitted for the degree of Doctor of Philosophy, The University of Edinburgh, Oct. 15, 1996, 35 pages.
Belaid, A., et al., “A Syntactic Approach for Handwritten Mathematical Formula Recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-6, No. 1, Jan. 1984, 7 pages.
Bellegarda, E. J., et al., “On-Line Handwriting Recognition Using Statistical Mixtures,” Advances in Handwriting and Drawings: A Multidisciplinary Approach, Europia, 6th International IGS Conference on Handwriting and Drawing, Paris-France, Jul. 1993, 11 pages.
Bellegarda, J. R., “A Latent Semantic Analysis Framework for Large-Span Language Modeling,” 5th European Conference on Speech, Communication and Technology, (Eurospeech'97), Sep. 22-25, 1997, 4 pages.
Bellegarda, J. R., “A Multispan Language Modeling Framework for Large Vocabulary Speech Recognition,” IEEE Transactions on Speech and Audio Processing, vol. 6, No. 5, Sep. 1998, 12 pages.
Bellegarda, J. R., et al., “A Novel Word Clustering Algorithm Based on Latent Semantic Analysis,” In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'96), vol. 1, 4 pages.
Bellegarda, J. R., et al., “Experiments Using Data Augmentation for Speaker Adaptation,” International Conference on Acoustics, Speech, and Signal Processing (ICASSP'95), May 9-12, 1995, 4 pages.
Bellegarda, J. R., “Exploiting Both Local and Global Constraints for Multi-Span Statistical Language Modeling,” Proceeding of the 1998 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'98), vol. 2, May 12-15, 1998, 5 pages.
Bellegarda, J. R., “Exploiting Latent Semantic Information in Statistical Language Modeling,” In Proceedings of the IEEE, Aug. 2000, vol. 88, No. 8, 18 pages.
Bellegarda, J. R., “Interaction-Driven Speech Input—A Data-Driven Approach to the Capture of Both Local and Global Language Constraints,” 1992, 7 pages, available at http://old.sigchi.org/bulletin/1998.2/bellegarda.html.
Bellegarda, J. R., “Large Vocabulary Speech Recognition with Multispan Statistical Language Models,” IEEE Transactions on Speech and Audio Processing, vol. 8, No. 1, Jan. 2000, 9 pages.
Bellegarda, J. R., et al., “Performance of the IBM Large Vocabulary Continuous Speech Recognition System on the ARPA Wall Street Journal Task,” Signal Processing VII: Theories and Applications, © 1994 European Association for Signal Processing, 4 pages.
Bellegarda, J. R., et al., “The Metamorphic Algorithm: A Speaker Mapping Approach to Data Augmentation,” IEEE Transactions on Speech and Audio Processing, vol. 2, No. 3, Jul. 1994, 8 pages.
Black, A. W., et al., “Automatically Clustering Similar Units for Unit Selection in Speech Synthesis,” In Proceedings of Eurospeech 1997, vol. 2, 4 pages.
Blair, D. C., et al., “An Evaluation of Retrieval Effectiveness for a Full-Text Document-Retrieval System,” Communications of the ACM, vol. 28, No. 3, Mar. 1985, 11 pages.
Briner, L. L., “Identifying Keywords in Text Data Processing,” In Zelkowitz, Marvin V., ED, Directions and Challenges, 15th Annual Technical Symposium, Jun. 17, 1976, Gaithersbury, Maryland, 7 pages.
Bulyko, I., et al., “Joint Prosody Prediction and Unit Selection for Concatenative Speech Synthesis,” Electrical Engineering Department, University of Washington, Seattle, 2001, 4 pages.
Bussey, H. E., et al., “Service Architecture, Prototype Description, and Network Implications of a Personalized Information Grazing Service,” INFOCOM'90, Ninth Annual Joint Conference of the IEEE Computer and Communication Societies, Jun. 3-7, 1990, http://slrohall.com/publications/, 8 pages.
Buzo, A., et al., “Speech Coding Based Upon Vector Quantization,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. Assp-28, No. 5, Oct. 1980, 13 pages.
Caminero-Gil, J., et al., “Data-Driven Discourse Modeling for Semantic Interpretation,” In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, May 7-10, 1996, 6 pages.
Cawley, G. C., “The Application of Neural Networks to Phonetic Modelling,” PhD Thesis, University of Essex, Mar. 1996, 13 pages.
Chang, S., et al., “A Segment-based Speech Recognition System for Isolated Mandarin Syllables,” Proceedings TENCON '93, IEEE Region 10 conference on Computer, Communication, Control and Power Engineering, Oct. 19-21, 1993, vol. 3, 6 pages.
Conklin, J., “Hypertext: An Introduction and Survey,” Computer Magazine, Sep. 1987, 25 pages.
Connolly, F. T., et al., “Fast Algorithms for Complex Matrix Multiplication Using Surrogates,” IEEE Transactions on Acoustics, Speech, and Signal Processing, Jun. 1989, vol. 37, No. 6, 13 pages.
Deerwester, S., et al., “Indexing by Latent Semantic Analysis,” Journal of the American Society for Information Science, vol. 41, No. 6, Sep. 1990, 19 pages.
Deller, Jr., J. R., et al., “Discrete-Time Processing of Speech Signals,” © 1987 Prentice Hall, ISBN: 0-02-328301-7, 14 pages.
Digital Equipment Corporation, “Open VMS Software Overview,” Dec. 1995, software manual, 159 pages.
Donovan, R. E., “A New Distance Measure for Costing Spectral Discontinuities in Concatenative Speech Synthesisers,” 2001, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.21.6398, 4 pages.
Frisse, M. E., “Searching for Information in a Hypertext Medical Handbook,” Communications of the ACM, vol. 31, No. 7, Jul. 1988, 8 pages.
Goldberg, D., et al., “Using Collaborative Filtering to Weave an Information Tapestry,” Communications of the ACM, vol. 35, No. 12, Dec. 1992, 10 pages.
Gorin, A. L., et al., “On Adaptive Acquisition of Language,” International Conference on Acoustics, Speech, and Signal Processing (ICASSP'90), vol. 1, Apr. 3-6, 1990, 5 pages.
Gotoh, Y., et al., “Document Space Models Using Latent Semantic Analysis,” In Proceedings of Eurospeech, 1997, 4 pages.
Gray, R. M., “Vector Quantization,” IEEE ASSP Magazine, Apr. 1984, 26 pages.
Harris, F. J., “On the Use of Windows for Harmonic Analysis with the Discrete Fourier Transform,” In Proceedings of the IEEE, vol. 66, No. 1, Jan. 1978, 34 pages.
Helm, R., et al., “Building Visual Language Parsers,” in Proceedings of CHI'91 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 8 pages.
Hermansky, H., “Perceptual Linear Predictive (PLP) Analysis of Speech,” Journal of the Acoustical Society of America, vol. 87, No. 4, Apr. 1990, 15 pages.
Hermansky, H., “Recognition of Speech in Additive and Convolutional Noise Based on Rasta Spectral Processing,” In proceedings of IEEE International Conference on Acoustics, speech, and Signal Processing (ICASSP'93), Apr. 27-30, 1993, 4 pages.
Hoehfeld M., et al., “Learning with Limited Numerical Precision Using the Cascade-Correlation Algorithm,” IEEE Transactions on Neural Networks, vol. 3, No. 4, Jul. 1992, 18 pages.
Holmes, J. N., “Speech Synthesis and Recognition—Stochastic Models for Word Recognition,” Speech Synthesis and Recognition, Published by Chapman & Hall, London, ISBN 0 412 53430 4, © 1998 J. N. Holmes, 7 pages.
Hon, H.W., et al., “CMU Robust Vocabulary-Independent Speech Recognition System,” IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-91), Apr. 14-17, 1991, 4 pages.
IBM Technical Disclosure Bulletin, “Speech Editor,” vol. 29, No. 10, Mar. 10, 1987, 3 pages.
IBM Technical Disclosure Bulletin, “Integrated Audio-Graphics User Interface,” vol. 33, No. 11, Apr. 1991, 4 pages.
IBM Technical Disclosure Bulletin, “Speech Recognition with Hidden Markov Models of Speech Waveforms,” vol. 34, No. 1, Jun. 1991, 10 pages.
Iowegian International, “FIR Filter Properties,” dspGuro, Digital Signal Processing Central, http://www.dspguru.com/dsp/taqs/fir/properties, downloaded on Jul. 28, 2010, 6 pages.
Jacobs, P. S., et al., “Scisor: Extracting Information from On-Line News,” Communications of the ACM, vol. 33, No. 11, Nov. 1990, 10 pages.
Jelinek, F., “Self-Organized Language Modeling for Speech Recognition,” Readings in Speech Recognition, edited by Alex Waibel and Kai-Fu Lee, May 15, 1990, © 1990 Morgan Kaufmann Publishers, Inc., ISBN: 1-55860-124-4, 63 pages.
Jennings, A., et al., “A Personal News Service Based on a User Model Neural Network,” IEICE Transactions on Information and Systems, vol. E75-D, No. 2, Mar. 1992, Tokyo, JP, 12 pages.
Ji, T., et al., “A Method for Chinese Syllables Recognition based upon Sub-syllable Hidden Markov Model,” 1994 International Symposium on Speech, Image Processing and Neural Networks, Apr. 13-16, 1994, Hong Kong, 4 pages.
Jones, J., “Speech Recognition for Cyclone,” Apple Computer, Inc., E.R.S., Revision 2.9, Sep. 10, 1992, 93 pages.
Katz, S. M., “Estimation of Probabilities from Sparse Data for the Language Model Component of a Speech Recognizer,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-35, No. 3, Mar. 1987, 3 pages.
Kitano, H., “PhiDM-Dialog, An Experimental Speech-to-Speech Dialog Translation System,” Computer, vol. 24, No. 6, Jun. 1991, 13 pages.
Klabbers, E., et al., “Reducing Audible Spectral Discontinuities,” IEEE Transactions on Speech and Audio Processing, vol. 9, No. 1, Jan. 2001, 13 pages.
Klatt, D. H., “Linguistic Uses of Segmental Duration in English: Acoustic and Perceptual Evidence,” Journal of the Acoustical Society of America, vol. 59, No. 5, May 1976, 16 pages.
Kominek, J., et al., “Impact of Durational Outlier Removal from Unit Selection Catalogs,” 5th ISCA Speech Synthesis Workshop, Jun. 14-16, 2004, 6 pages.
Kubala, F., et al., “Speaker Adaptation from a Speaker-Independent Training Corpus,” International Conference on Acoustics, Speech, and Signal Processing (ICASSP'90), Apr. 3-6, 1990, 4 pages.
Kubala, F., et al., “The Hub and Spoke Paradigm for CSR Evaluation,” Proceedings of the Spoken Language Technology Workshop, Mar. 6-8, 1994, 9 pages.
Lee, K.F., “Large-Vocabulary Speaker-Independent Continuous Speech Recognition: The Sphinx System,” Apr. 18, 1988, doctoral thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy, Computer Science Department, Carnegie Mellon University, 195 pages.
Lee, L., et al., “A Real-Time Mandarin Dictation Machine for Chinese Language with Unlimited Texts and Very Large Vocabulary,” International Conference on Acoustics, Speech and Signal Processing, vol. 1, Apr. 3-6, 1990, 5 pages.
Lee, L., et al., “Golden Mandarin(II)—An Improved Single-Chip Real-Time Mandarin Dictation Machine for Chinese Language with Very Large Vocabulary,” 0-7803-0946-4/93 © 1993 IEEE, 4 pages.
Lee, L., et al., “Golden Mandarin(II)—An Intelligent Mandarin Dictation Machine for Chinese Character Input with Adaptation/Learning Functions,” International Symposium on Speech, Image Processing and Neural Networks, Apr. 13-16, 1994, Hong Kong, 5 pages.
Lee, L., et al., “System Description of Golden Mandarin (I) Voice Input for Unlimited Chinese Characters,” International Conference on Computer Processing of Chinese & Oriental Languages, vol. 5, Nos. 3 & 4, Nov. 1991, 16 pages.
Lin, C.H., et al., “A New Framework for Recognition of Mandarin Syllables With Tones Using Sub-syllabic Units,” IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-93), Apr. 27-30, 1993, 4 pages.
Linde, Y., et al., “An Algorithm for Vector Quantizer Design,” IEEE Transactions on Communications, vol. 28, No. 1, Jan. 1980, 12 pages.
Liu, F.H., et al., “Efficient Joint Compensation of Speech for the Effects of Additive Noise and Linear Filtering,” IEEE International Conference of Acoustics, Speech, and Signal Processing, ICASSP-92, Mar. 23-26, 1992, 4 pages.
Logan, B., “Mel Frequency Cepstral Coefficients for Music Modeling,” In International Symposium on Music Information Retrieval, 2000, 2 pages.
Lowerre, B. T., “The Harpy Speech Recognition System,” Doctoral Dissertation, Department of Computer Science, Carnegie Mellon University, Apr. 1976, 20 pages.
Maghbouleh, A., “An Empirical Comparison of Automatic Decision Tree and Linear Regression Models for Vowel Durations,” Revised version of a paper presented at the Computational Phonology in Speech Technology workshop, 1996 annual meeting of the Association for Computational Linguistics in Santa Cruz, California, 7 pages.
Markel, J. D., et al., “Linear Prediction of Speech,” Springer-Verlag, Berlin Heidelberg New York 1976, 12 pages.
Morgan, B., “Business Objects,” (Business Objects for Windows) Business Objects Inc., DBMS Sep. 1992, vol. 5, No. 10, 3 pages.
Mountford, S. J., et al., “Talking and Listening to Computers,” The Art of Human-Computer Interface Design, Copyright © 1990 Apple Computer, Inc. Addison-Wesley Publishing Company, Inc., 17 pages.
Murty, K. S. R., et al., “Combining Evidence from Residual Phase and MFCC Features for Speaker Recognition,” IEEE Signal Processing Letters, vol. 13, No. 1, Jan. 2006, 4 pages.
Murveit, H., et al., “Integrating Natural Language Constraints into HMM-based Speech Recognition,” 1990 International Conference on Acoustics, Speech, and Signal Processing, Apr. 3-6, 1990, 5 pages.
Nakagawa, S., et al., “Speaker Recognition by Combining MFCC and Phase Information,” IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP), Mar. 14-19, 2010, 4 pages.
Niesler, T. R., et al., “A Variable-Length Category-Based N-Gram Language Model,” IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'96), vol. 1, May 7-10, 1996, 6 pages.
Papadimitriou, C. H., et al., “Latent Semantic Indexing: A Probabilistic Analysis,” Nov. 14, 1997, http://citeseerx.ist.psu.edu/messages/downloadsexceeded.html, 21 pages.
Parsons, T. W., “Voice and Speech Processing,” Linguistics and Technical Fundamentals, Articulatory Phonetics and Phonemics, © 1987 McGraw-Hill, Inc., ISBN: 0-07-048541-0, 5 pages.
Parsons, T. W., “Voice and Speech Processing,” Pitch and Formant Estimation, © 1987 McGraw-Hill, Inc., ISBN: 0-07-048541-0, 15 pages.
Picone, J., “Continuous Speech Recognition Using Hidden Markov Models,” IEEE ASSP Magazine, vol. 7, No. 3, Jul. 1990, 16 pages.
Rabiner, L. R., et al., “Fundamentals of Speech Recognition,” © 1993 AT&T, Published by Prentice-Hall, Inc., ISBN: 0-13-285826-6, 17 pages.
Rabiner, L. R., et al., “Note on the Properties of a Vector Quantizer for LPC Coefficients,” The Bell System Technical Journal, vol. 62, No. 8, Oct. 1983, 9 pages.
Ratcliffe, M., “ClearAccess 2.0 allows SQL searches off-line,” (Structured Query Language), ClearAccess Corp., MacWeek Nov. 16, 1992, vol. 6, No. 41, 2 pages.
Remde, J. R., et al., “SuperBook: An Automatic Tool for Information Exploration-Hypertext?,” In Proceedings of Hypertext'87 papers, Nov. 13-15, 1987, 14 pages.
Reynolds, C. F., “On-Line Reviews: A New Application of the HICOM Conferencing System,” IEE Colloquium on Human Factors in Electronic Mail and Conferencing Systems, Feb. 3, 1989, 4 pages.
Rigoll, G., “Speaker Adaptation for Large Vocabulary Speech Recognition Systems Using Speaker Markov Models,” International Conference on Acoustics, Speech, and Signal Processing (ICASSP'89), May 23-26, 1989, 4 pages.
Riley, M. D., “Tree-Based Modelling of Segmental Durations,” Talking Machines: Theories, Models, and Designs, 1992 © Elsevier Science Publishers B.V., North-Holland, ISBN: 0-444-89115-3, 15 pages.
Rivoira, S., et al., “Syntax and Semantics in a Word-Sequence Recognition System,” IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'79), Apr. 1979, 5 pages.
Rosenfeld, R., “A Maximum Entropy Approach to Adaptive Statistical Language Modelling,” Computer Speech and Language, vol. 10, No. 3, Jul. 1996, 25 pages.
Roszkiewicz, A., “Extending your Apple,” Back Talk—Lip Service, A+ Magazine, The Independent Guide for Apple Computing, vol. 2, No. 2, Feb. 1984, 5 pages.
Sakoe, H., et al., “Dynamic Programming Algorithm Optimization for Spoken Word Recognition,” IEEE Transactions on Acoustics, Speech, and Signal Processing, Feb. 1978, vol. ASSP-26, No. 1, 8 pages.
Salton, G., et al., “On the Application of Syntactic Methodologies in Automatic Text Analysis,” Information Processing and Management, vol. 26, No. 1, Great Britain 1990, 22 pages.
Savoy, J., “Searching Information in Hypertext Systems Using Multiple Sources of Evidence,” International Journal of Man-Machine Studies, vol. 38, No. 6, Jun. 1993, 15 pages.
Scagliola, C., “Language Models and Search Algorithms for Real-Time Speech Recognition,” International Journal of Man-Machine Studies, vol. 22, No. 5, 1985, 25 pages.
Schmandt, C., et al., “Augmenting a Window System with Speech Input,” IEEE Computer Society, Computer Aug. 1990, vol. 23, No. 8, 8 pages.
Schütze, H., “Dimensions of Meaning,” Proceedings of Supercomputing'92 Conference, Nov. 16-20, 1992, 10 pages.
Sheth, B., et al., “Evolving Agents for Personalized Information Filtering,” In Proceedings of the Ninth Conference on Artificial Intelligence for Applications, Mar. 1-5, 1993, 9 pages.
Shikano, K., et al., “Speaker Adaptation Through Vector Quantization,” IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'86), vol. 11, Apr. 1986, 4 pages.
Sigurdsson, S., et al., “Mel Frequency Cepstral Coefficients: An Evaluation of Robustness of MP3 Encoded Music,” In Proceedings of the 7th International Conference on Music Information Retrieval (ISMIR), 2006, 4 pages.
Silverman, K. E. A., et al., “Using a Sigmoid Transformation for Improved Modeling of Phoneme Duration,” Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Mar. 15-19, 1999, 5 pages.
Tenenbaum, A.M., et al., “Data Structure Using Pascal,” 1981 Prentice-Hall, Inc., 34 pages.
Tsai, W.H., et al., “Attributed Grammar—A Tool for Combining Syntactic and Statistical Approaches to Pattern Recognition,” IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-10, No. 12, Dec. 1980, 13 pages.
Udell, J., “Computer Telephony,” BYTE, vol. 19, No. 7, Jul. 1, 1994, 9 pages.
Van Santen, J. P. H., “Contextual Effects on Vowel Duration,” Journal Speech Communication, vol. 11, No. 6, Dec. 1992, 34 pages.
Vepa, J., et al., “New Objective Distance Measures for Spectral Discontinuities in Concatenative Speech Synthesis,” In Proceedings of the IEEE 2002 Workshop on Speech Synthesis, 4 pages.
Verschelde, J., “MATLAB Lecture 8. Special Matrices in MATLAB,” Nov. 23, 2005, UIC Dept. of Math., Stat. & C.S., MCS 320, Introduction to Symbolic Computation, 4 pages.
Vingron, M. “Near-Optimal Sequence Alignment,” Deutsches Krebsforschungszentrum (DKFZ), Abteilung Theoretische Bioinformatik, Heidelberg, Germany, Jun. 1996, 20 pages.
Werner, S., et al., “Prosodic Aspects of Speech,” Université de Lausanne, Switzerland, 1994, Fundamentals of Speech Synthesis and Speech Recognition: Basic Concepts, State of the Art, and Future Challenges, 18 pages.
Wikipedia, “Mel Scale,” Wikipedia, the free encyclopedia, http://en.wikipedia.org/wiki/Mel_scale, 2 pages.
Wikipedia, “Minimum Phase,” Wikipedia, the free encyclopedia, http://en.wikipedia.org/wiki/Minimum_phase, 8 pages.
Wolff, M., “Poststructuralism and the ARTFUL Database: Some Theoretical Considerations,” Information Technology and Libraries, vol. 13, No. 1, Mar. 1994, 10 pages.
Wu, M., “Digital Speech Processing and Coding,” ENEE408G Capstone-Multimedia Signal Processing, Spring 2003, Lecture-2 course presentation, University of Maryland, College Park, 8 pages.
Wu, M., “Speech Recognition, Synthesis, and H.C.I.,” ENEE408G Capstone-Multimedia Signal Processing, Spring 2003, Lecture-3 course presentation, University of Maryland, College Park, 11 pages.
Wyle, M. F., “A Wide Area Network Information Filter,” In Proceedings of First International Conference on Artificial Intelligence on Wall Street, Oct. 9-11, 1991, 6 pages.
Yankelovich, N., et al., “Intermedia: The Concept and the Construction of a Seamless Information Environment,” Computer Magazine, Jan. 1988, © 1988 IEEE, 16 pages.
Yoon, K., et al., “Letter-to-Sound Rules for Korean,” Department of Linguistics, The Ohio State University, 2002, 4 pages.
Zhao, Y., “An Acoustic-Phonetic-Based Speaker Adaptation Technique for Improving Speaker-Independent Continuous Speech Recognition,” IEEE Transactions on Speech and Audio Processing, vol. 2, No. 3, Jul. 1994, 15 pages.
Zovato, E., et al., “Towards Emotional Speech Synthesis: A Rule Based Approach,” 2 pages.
Chinese Final Rejection dated Mar. 1, 2013 for Application No. 201010109130.0, which corresponds to U.S. Appl. No. 12/363,513, 15 pages.
Office Action dated May 13, 2013, received in United Kingdom patent application No. GB1001414.0, which corresponds to U.S. Appl. No. 12/363,513, 6 pages. (Rottler).
International Search Report dated Nov. 9, 1994, received in International Application No. PCT/US1993/12666, which corresponds to U.S. Appl. No. 07/999,302, 8 pages. (Robert Don Strong).
International Preliminary Examination Report dated Mar. 1, 1995, received in International Application No. PCT/US1993/12666, which corresponds to U.S. Appl. No. 07/999,302, 5 pages. (Robert Don Strong).
International Preliminary Examination Report dated Apr. 10, 1995, received in International Application No. PCT/US1993/12637, which corresponds to U.S. Appl. No. 07/999,354, 7 pages. (Alejandro Acero).
International Search Report dated Feb. 8, 1995, received in International Application No. PCT/US1994/11011, which corresponds to U.S. Appl. No. 08/129,679, 7 pages. (Yen-Lu Chow).
International Preliminary Examination Report dated Feb. 28, 1996, received in International Application No. PCT/US1994/11011, which corresponds to U.S. Appl. No. 08/129,679, 4 pages. (Yen-Lu Chow).
Written Opinion dated Aug. 21, 1995, received in International Application No. PCT/US1994/11011, which corresponds to U.S. Appl. No. 08/129,679, 4 pages. (Yen-Lu Chow).
International Search Report dated Nov. 8, 1995, received in International Application No. PCT/US1995/08369, which corresponds to U.S. Appl. No. 08/271,639, 6 pages. (Peter V. De Souza).
International Preliminary Examination Report dated Oct. 9, 1996, received in International Application No. PCT/US1995/08369, which corresponds to U.S. Appl. No. 08/271,639, 4 pages. (Peter V. De Souza).
Agnäs, M.S., et al., “Spoken Language Translator: First-Year Report,” Jan. 1994, SICS (ISSN 0283-3638), SRI and Telia Research AB, 161 pages.
Allen, J., “Natural Language Understanding,” 2nd Edition, Copyright © 1995 by The Benjamin/Cummings Publishing Company, Inc., 671 pages.
Alshawi, H., et al., “CLARE: A Contextual Reasoning and Cooperative Response Framework for the Core Language Engine,” Dec. 1992, SRI International, Cambridge Computer Science Research Centre, Cambridge, 273 pages.
Alshawi, H., et al., “Declarative Derivation of Database Queries from Meaning Representations,” Oct. 1991, Proceedings of the BANKAI Workshop on Intelligent Information Access, 12 pages.
Alshawi, H., et al., “Logical Forms in the Core Language Engine,” 1989, Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics, 8 pages.
Alshawi, H., et al., “Overview of the Core Language Engine,” Sep. 1988, Proceedings of Future Generation Computing Systems, Tokyo, 13 pages.
Alshawi, H., “Translation and Monotonic Interpretation/Generation,” Jul. 1992, SRI International, Cambridge Computer Science Research Centre, Cambridge, 18 pages. http://www.cam.sri.com/tr/cre024/paper.ps.Z.
Appelt, D., et al., “Fastus: A Finite-state Processor for Information Extraction from Real-world Text,” 1993, Proceedings of IJCAI, 8 pages.
Appelt, D., et al., “SRI: Description of the JV-FASTUS System Used for MUC-5,” 1993, SRI International, Artificial Intelligence Center, 19 pages.
Appelt, D., et al., “SRI International Fastus System MUC-6 Test Results and Analysis,” 1995, SRI International, Menlo Park, California, 12 pages.
Archbold, A., et al., “A Team User's Guide,” Dec. 21, 1981, SRI International, 70 pages.
Bear, J., et al., “A System for Labeling Self-Repairs in Speech,” Feb. 22, 1993, SRI International, 9 pages.
Bear, J., et al., “Detection and Correction of Repairs in Human-Computer Dialog,” May 5, 1992, SRI International, 11 pages.
Bear, J., et al., “Integrating Multiple Knowledge Sources for Detection and Correction of Repairs in Human-Computer Dialog,” 1992, Proceedings of the 30th annual meeting on Association for Computational Linguistics (ACL), 8 pages.
Bear, J., et al., “Using Information Extraction to Improve Document Retrieval,” 1998, SRI International, Menlo Park, California, 11 pages.
Berry, P., et al., “Task Management under Change and Uncertainty Constraint Solving Experience with the CALO Project,” 2005, Proceedings of CP'05 Workshop on Constraint Solving under Change, 5 pages.
Bobrow, R., et al., “Knowledge Representation for Syntactic/Semantic Processing,” From: AAAI-80 Proceedings. Copyright © 1980, AAAI, 8 pages.
Bouchou, B., et al., “Using Transducers in Natural Language Database Query,” Jun. 17-19, 1999, Proceedings of 4th International Conference on Applications of Natural Language to Information Systems, Austria, 17 pages.
Bratt, H., et al., “The SRI Telephone-based ATIS System,” 1995, Proceedings of ARPA Workshop on Spoken Language Technology, 3 pages.
Bulyko, I. et al., “Error-Correction Detection and Response Generation in a Spoken Dialogue System,” © 2004 Elsevier B.V., specom.2004.09.009, 18 pages.
Burke, R., et al., “Question Answering from Frequently Asked Question Files,” 1997, AI Magazine, vol. 18, No. 2, 10 pages.
Burns, A., et al., “Development of a Web-Based Intelligent Agent for the Fashion Selection and Purchasing Process via Electronic Commerce,” Dec. 31, 1998, Proceedings of the Americas Conference on Information Systems (AMCIS), 4 pages.
Carter, D., “Lexical Acquisition in the Core Language Engine,” 1989, Proceedings of the Fourth Conference of the European Chapter of the Association for Computational Linguistics, 8 pages.
Carter, D., et al., “The Speech-Language Interface in the Spoken Language Translator,” Nov. 23, 1994, SRI International, 9 pages.
Chai, J., et al., “Comparative Evaluation of a Natural Language Dialog Based System and a Menu Driven System for Information Access: a Case Study,” Apr. 2000, Proceedings of the International Conference on Multimedia Information Retrieval (RIAO), Paris, 11 pages.
Cheyer, A., et al., “Multimodal Maps: An Agent-based Approach,” International Conference on Cooperative Multimodal Communication, 1995, 15 pages.
Cheyer, A., et al., “The Open Agent Architecture,” Autonomous Agents and Multi-Agent systems, vol. 4, Mar. 1, 2001, 6 pages.
Cheyer, A., et al., “The Open Agent Architecture: Building communities of distributed software agents” Feb. 21, 1998, Artificial Intelligence Center SRI International, Power Point presentation, downloaded from http://www.ai.sri.com/˜oaa/, 25 pages.
Codd, E. F., “Databases: Improving Usability and Responsiveness—‘How About Recently’,” Copyright © 1978, by Academic Press, Inc., 28 pages.
Cohen, P.R., et al., “An Open Agent Architecture,” 1994, 8 pages. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.30.480.
Coles, L. S., et al., “Chemistry Question-Answering,” Jun. 1969, SRI International, 15 pages.
Coles, L. S., “Techniques for Information Retrieval Using an Inferential Question-Answering System with Natural-Language Input,” Nov. 1972, SRI International, 198 pages.
Coles, L. S., “The Application of Theorem Proving to Information Retrieval,” Jan. 1971, SRI International, 21 pages.
Constantinides, P., et al., “A Schema Based Approach to Dialog Control,” 1998, Proceedings of the International Conference on Spoken Language Processing, 4 pages.
Cox, R. V., et al., “Speech and Language Processing for Next-Millennium Communications Services,” Proceedings of the IEEE, vol. 88, No. 8, Aug. 2000, 24 pages.
Craig, J., et al., “Deacon: Direct English Access and Control,” Nov. 7-10, 1966 AFIPS Conference Proceedings, vol. 19, San Francisco, 18 pages.
Dar, S., et al., “DTL's DataSpot: Database Exploration Using Plain Language,” 1998 Proceedings of the 24th VLDB Conference, New York, 5 pages.
Davis, Z., et al., “A Personal Handheld Multi-Modal Shopping Assistant,” 2006 IEEE, 9 pages.
Decker, K., et al., “Designing Behaviors for Information Agents,” The Robotics Institute, Carnegie-Mellon University, paper, Jul. 6, 1996, 15 pages.
Decker, K., et al., “Matchmaking and Brokering,” The Robotics Institute, Carnegie-Mellon University, paper, May 16, 1996, 19 pages.
Dowding, J., et al., “Gemini: A Natural Language System for Spoken-Language Understanding,” 1993, Proceedings of the Thirty-First Annual Meeting of the Association for Computational Linguistics, 8 pages.
Dowding, J., et al., “Interleaving Syntax and Semantics in an Efficient Bottom-Up Parser,” 1994, Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, 7 pages.
Epstein, M., et al., “Natural Language Access to a Melanoma Data Base,” Sep. 1978, SRI International, 7 pages.
Exhibit 1, “Natural Language Interface Using Constrained Intermediate Dictionary of Results,” Classes/Subclasses Manually Reviewed for the Search of US Patent No. 7,177,798, Mar. 22, 2013, 1 page.
Exhibit 1, “Natural Language Interface Using Constrained Intermediate Dictionary of Results,” List of Publications Manually Reviewed for the Search of US Patent No. 7,177,798, Mar. 22, 2013, 1 page.
Ferguson, G., et al., “TRIPS: An Integrated Intelligent Problem-Solving Assistant,” 1998, Proceedings of the Fifteenth National Conference on Artificial Intelligence (AAAI-98) and Tenth Conference on Innovative Applications of Artificial Intelligence (IAAI-98), 7 pages.
Fikes, R., et al., “A Network-based knowledge Representation and its Natural Deduction System,” Jul. 1977, SRI International, 43 pages.
Gambäck, B., et al., “The Swedish Core Language Engine,” 1992 NOTEX Conference, 17 pages.
Glass, J., et al., “Multilingual Language Generation Across Multiple Domains,” Sep. 18-22, 1994, International Conference on Spoken Language Processing, Japan, 5 pages.
Green, C. “The Application of Theorem Proving to Question-Answering Systems,” Jun. 1969, SRI Stanford Research Institute, Artificial Intelligence Group, 169 pages.
Gregg, D. G., “DSS Access on the WWW: An Intelligent Agent Prototype,” 1998 Proceedings of the Americas Conference on Information Systems-Association for Information Systems, 3 pages.
Grishman, R., “Computational Linguistics: An Introduction,” © Cambridge University Press 1986, 172 pages.
Grosz, B. et al., “Dialogic: A Core Natural-Language Processing System,” Nov. 9, 1982, SRI International, 17 pages.
Grosz, B. et al., “Research on Natural-Language Processing at SRI,” Nov. 1981, SRI International, 21 pages.
Grosz, B., et al., “TEAM: An Experiment in the Design of Transportable Natural-Language Interfaces,” Artificial Intelligence, vol. 32, 1987, 71 pages.
Grosz, B., “Team: A Transportable Natural-Language Interface System,” 1983, Proceedings of the First Conference on Applied Natural Language Processing, 7 pages.
Guida, G., et al., “NLI: A Robust Interface for Natural Language Person-Machine Communication,” Int. J. Man-Machine Studies, vol. 17, 1982, 17 pages.
Guzzoni, D., et al., “Active, A platform for Building Intelligent Software,” Computational Intelligence 2006, 5 pages. http://www.informatik.uni-trier.de/˜ley/pers/hd/g/Guzzoni:Didier.
Guzzoni, D., “Active: A unified platform for building intelligent assistant applications,” Oct. 25, 2007, 262 pages.
Guzzoni, D., et al., “Many Robots Make Short Work,” 1996 AAAI Robot Contest, SRI International, 9 pages.
Haas, N., et al., “An Approach to Acquiring and Applying Knowledge,” Nov. 1980, SRI International, 22 pages.
Hadidi, R., et al., “Students' Acceptance of Web-Based Course Offerings: An Empirical Assessment,” 1998 Proceedings of the Americas Conference on Information Systems (AMCIS), 4 pages.
Hawkins, J., et al., “Hierarchical Temporal Memory: Concepts, Theory, and Terminology,” Mar. 27, 2007, Numenta, Inc., 20 pages.
He, Q., et al., “Personal Security Agent: KQML-Based PKI,” The Robotics Institute, Carnegie-Mellon University, paper, Oct. 1, 1997, 14 pages.
Hendrix, G. et al., “Developing a Natural Language Interface to Complex Data,” ACM Transactions on Database Systems, vol. 3, No. 2, Jun. 1978, 43 pages.
Hendrix, G., “Human Engineering for Applied Natural Language Processing,” Feb. 1977, SRI International, 27 pages.
Hendrix, G., “Klaus: A System for Managing Information and Computational Resources,” Oct. 1980, SRI International, 34 pages.
Hendrix, G., “Lifer: A Natural Language Interface Facility,” Dec. 1976, SRI Stanford Research Institute, Artificial Intelligence Center, 9 pages.
Hendrix, G., “Natural-Language Interface,” Apr.-Jun. 1982, American Journal of Computational Linguistics, vol. 8, No. 2, 7 pages. Best Copy Available.
Hendrix, G., “The Lifer Manual: A Guide to Building Practical Natural Language Interfaces,” Feb. 1977, SRI International, 76 pages.
Hendrix, G., et al., “Transportable Natural-Language Interfaces to Databases,” Apr. 30, 1981, SRI International, 18 pages.
Hirschman, L., et al., “Multi-Site Data Collection and Evaluation in Spoken Language Understanding,” 1993, Proceedings of the workshop on Human Language Technology, 6 pages.
Hobbs, J., et al., “Fastus: A System for Extracting Information from Natural-Language Text,” Nov. 19, 1992, SRI International, Artificial Intelligence Center, 26 pages.
Hobbs, J., et al.,“Fastus: Extracting Information from Natural-Language Texts,” 1992, SRI International, Artificial Intelligence Center, 22 pages.
Hobbs, J., “Sublanguage and Knowledge,” Jun. 1984, SRI International, Artificial Intelligence Center, 30 pages.
Hodjat, B., et al., “Iterative Statistical Language Model Generation for Use with an Agent-Oriented Natural Language Interface,” vol. 4 of the Proceedings of HCI International 2003, 7 pages.
Huang, X., et al., “The SPHINX-II Speech Recognition System: An Overview,” Jan. 15, 1992, Computer, Speech and Language, 14 pages.
Issar, S., et al., “CMU's Robust Spoken Language Understanding System,” 1993, Proceedings of EUROSPEECH, 4 pages.
Issar, S., “Estimation of Language Models for New Spoken Language Applications,” Oct. 3-6, 1996, Proceedings of 4th International Conference on Spoken language Processing, Philadelphia, 4 pages.
Janas, J., “The Semantics-Based Natural Language Interface to Relational Databases,” © Springer-Verlag Berlin Heidelberg 1986, Germany, 48 pages.
Johnson, J., “A Data Management Strategy for Transportable Natural Language Interfaces,” Jun. 1989, doctoral thesis submitted to the Department of Computer Science, University of British Columbia, Canada, 285 pages.
Julia, L., et al., “http://www.speech.sri.com/demos/atis.html,” 1997, Proceedings of AAAI, Spring Symposium, 5 pages.
Kahn, M., et al., “CoABS Grid Scalability Experiments,” 2003, Autonomous Agents and Multi-Agent Systems, vol. 7, 8 pages.
Kamel, M., et al., “A Graph Based Knowledge Retrieval System,” © 1990 IEEE, 7 pages.
Katz, B., “Annotating the World Wide Web Using Natural Language,” 1997, Proceedings of the 5th RIAO Conference on Computer Assisted Information Searching on the Internet, 7 pages.
Katz, B., “A Three-Step Procedure for Language Generation,” Dec. 1980, Massachusetts Institute of Technology, Artificial Intelligence Laboratory, 42 pages.
Katz, B., et al., “Exploiting Lexical Regularities in Designing Natural Language Systems,” 1988, Proceedings of the 12th International Conference on Computational Linguistics, Coling'88, Budapest, Hungary, 22 pages.
Katz, B., et al., “REXTOR: A System for Generating Relations from Natural Language,” In Proceedings of the ACL Oct. 2000 Workshop on Natural Language Processing and Information Retrieval (NLP&IR), 11 pages.
Katz, B., “Using English for Indexing and Retrieving,” 1988 Proceedings of the 1st RIAO Conference on User-Oriented Content-Based Text and Image (RIAO'88), 19 pages.
Konolige, K., “A Framework for a Portable Natural-Language Interface to Large Data Bases,” Oct. 12, 1979, SRI International, Artificial Intelligence Center, 54 pages.
Laird, J., et al., “SOAR: An Architecture for General Intelligence,” 1987, Artificial Intelligence vol. 33, 64 pages.
Langley, P., et al., “A Design for the Icarus Architecture,” SIGART Bulletin, vol. 2, No. 4, 6 pages.
Larks, “Intelligent Software Agents: Larks,” 2006, downloaded on Mar. 15, 2013 from http://www.cs.cmu.edu/larks.html, 2 pages.
Martin, D., et al., “Building Distributed Software Systems with the Open Agent Architecture,” Mar. 23-25, 1998, Proceedings of the Third International Conference on the Practical Application of Intelligent Agents and Multi-Agent Technology, 23 pages.
Martin, D., et al., “Development Tools for the Open Agent Architecture,” Apr. 1996, Proceedings of the International Conference on the Practical Application of Intelligent Agents and Multi-Agent Technology, 17 pages.
Martin, D., et al., “Information Brokering in an Agent Architecture,” Apr. 1997, Proceedings of the second International Conference on the Practical Application of Intelligent Agents and Multi-Agent Technology, 20 pages.
Martin, D., et al., “PAAM '98 Tutorial: Building and Using Practical Agent Applications,” 1998, SRI International, 78 pages.
Martin, P., et al., “Transportability and Generality in a Natural-Language Interface System,” Aug. 8-12, 1983, Proceedings of the Eighth International Joint Conference on Artificial Intelligence, West Germany, 21 pages.
Matiasek, J., et al., “Tamic-P: A System for NL Access to Social Insurance Database,” Jun. 17-19, 1999, Proceedings of the 4th International Conference on Applications of Natural Language to Information Systems, Austria, 7 pages.
Michos, S.E., et al., “Towards an adaptive natural language interface to command languages,” Natural Language Engineering 2 (3), © 1994 Cambridge University Press, 19 pages. Best Copy Available.
Milstead, J., et al., “Metadata: Cataloging by Any Other Name . . . ” Jan. 1999, Online, Copyright © 1999 Information Today, Inc., 18 pages.
Minker, W., et al., “Hidden Understanding Models for Machine Translation,” 1999, Proceedings of ETRW on Interactive Dialogue in Multi-Modal Systems, 4 pages.
Modi, P. J., et al., “CMRadar: A Personal Assistant Agent for Calendar Management,” © 2004, American Association for Artificial Intelligence, Intelligent Systems Demonstrations, 2 pages.
Moore, R., et al., “Combining Linguistic and Statistical Knowledge Sources in Natural-Language Processing for ATIS,” 1995, SRI International, Artificial Intelligence Center, 4 pages.
Moore, R., “Handling Complex Queries in a Distributed Data Base,” Oct. 8, 1979, SRI International, Artificial Intelligence Center, 38 pages.
Moore, R., “Practical Natural-Language Processing by Computer,” Oct. 1981, SRI International, Artificial Intelligence Center, 34 pages.
Moore, R., et al., “SRI's Experience with the ATIS Evaluation,” Jun. 24-27, 1990, Proceedings of a workshop held at Hidden Valley, Pennsylvania, 4 pages. Best Copy Available.
Moore, et al., “The Information Warfare Advisor: An Architecture for Interacting with Intelligent Agents Across the Web,” Dec. 31, 1998, Proceedings of Americas Conference on Information Systems (AMCIS), 4 pages.
Moore, R., “The Role of Logic in Knowledge Representation and Commonsense Reasoning,” Jun. 1982, SRI International, Artificial Intelligence Center, 19 pages.
Moore, R., “Using Natural-Language Knowledge Sources in Speech Recognition,” Jan. 1999, SRI International, Artificial Intelligence Center, 24 pages.
Moran, D., et al., “Intelligent Agent-based User Interfaces,” Oct. 12-13, 1995, Proceedings of International Workshop on Human Interface Technology, University of Aizu, Japan, 4 pages. http://www.dougmoran.com/dmoran/PAPERS/oaa-iwhit1995.pdf.
Moran, D., “Quantifier Scoping in the SRI Core Language Engine,” 1988, Proceedings of the 26th annual meeting on Association for Computational Linguistics, 8 pages.
Motro, A., “Flex: A Tolerant and Cooperative User Interface to Databases,” IEEE Transactions on Knowledge and Data Engineering, vol. 2, No. 2, Jun. 1990, 16 pages.
Murveit, H., et al., “Speech Recognition in SRI's Resource Management and ATIS Systems,” 1991, Proceedings of the workshop on Speech and Natural Language (HTL'91), 7 pages.
OAA, “The Open Agent Architecture 1.0 Distribution Source Code,” Copyright 1999, SRI International, 2 pages.
Odubiyi, J., et al., “SAIRE—a scalable agent-based information retrieval engine,” 1997 Proceedings of the First International Conference on Autonomous Agents, 12 pages.
Owei, V., et al., “Natural Language Query Filtration in the Conceptual Query Language,” © 1997 IEEE, 11 pages.
Pannu, A., et al., “A Learning Personal Agent for Text Filtering and Notification,” 1996, The Robotics Institute School of Computer Science, Carnegie-Mellon University, 12 pages.
Pereira, F., “Logic for Natural Language Analysis,” Jan. 1983, SRI International, Artificial Intelligence Center, 194 pages.
Perrault, C.R., et al., “Natural-Language Interfaces,” Aug. 22, 1986, SRI International, 48 pages.
Pulman, S.G., et al., “Clare: A Combined Language and Reasoning Engine,” 1993, Proceedings of JFIT Conference, 8 pages. URL: http://www.cam.sri.com/tr/crc042/paper.ps.Z.
Ravishankar, M., “Efficient Algorithms for Speech Recognition,” May 15, 1996, Doctoral Thesis submitted to School of Computer Science, Computer Science Division, Carnegie Mellon University, Pittsburgh, 146 pages.
Rayner, M., et al., “Adapting the Core Language Engine to French and Spanish,” May 10, 1996, Cornell University Library, 9 pages. http://arxiv.org/abs/cmp-lg/9605015.
Rayner, M., “Abductive Equivalential Translation and its application to Natural Language Database Interfacing,” Sep. 1993 Dissertation paper, SRI International, 163 pages.
Rayner, M., et al., “Deriving Database Queries from Logical Forms by Abductive Definition Expansion,” 1992, Proceedings of the Third Conference on Applied Natural Language Processing, ANLC'92, 8 pages.
Rayner, M., “Linguistic Domain Theories: Natural-Language Database Interfacing from First Principles,” 1993, SRI International, Cambridge, 11 pages.
Rayner, M., et al., “Spoken Language Translation With Mid-90's Technology: A Case Study,” 1993, Eurospeech, ISCA, 4 pages. http://dblp.uni-trier.de/db/conf/interspeech/eurospeech1993.html#RaynerBCCDGKKLPPS93.
Rudnicky, A.I., et al., “Creating Natural Dialogs in the Carnegie Mellon Communicator System.”
Russell, S., et al., “Artificial Intelligence, A Modern Approach,” © 1995 Prentice Hall, Inc., 121 pages.
Sacerdoti, E., et al., “A Ladder User's Guide (Revised),” Mar. 1980, SRI International, Artificial Intelligence Center, 39 pages.
Sagalowicz, D., “A D-Ladder User's Guide,” Sep. 1980, SRI International, 42 pages.
Sameshima, Y., et al., “Authorization with security attributes and privilege delegation Access control beyond the ACL,” Computer Communications, vol. 20, 1997, 9 pages.
San-Segundo, R., et al., “Confidence Measures for Dialogue Management in the CU Communicator System,” Jun. 5-9, 2000, Proceedings of Acoustics, Speech, and Signal Processing (ICASSP'00), 4 pages.
Sato, H., “A Data Model, Knowledge Base, and Natural Language Processing for Sharing a Large Statistical Database,” 1989, Statistical and Scientific Database Management, Lecture Notes in Computer Science, vol. 339, 20 pages.
Schnelle, D., “Context Aware Voice User Interfaces for Workflow Support,” Aug. 27, 2007, Dissertation paper, 254 pages.
Sharoff, S., et al., “Register-domain Separation as a Methodology for Development of Natural Language Interfaces to Databases,” 1999, Proceedings of Human-Computer Interaction (Interact'99), 7 pages.
Shimazu, H., et al., “CAPIT: Natural Language Interface Design Tool with Keyword Analyzer and Case-Based Parser,” NEC Research & Development, vol. 33, No. 4, Oct. 1992, 11 pages.
Shinkle, L., “Team User's Guide,” Nov. 1984, SRI International, Artificial Intelligence Center, 78 pages.
Shklar, L., et al., “Info Harness: Use of Automatically Generated Metadata for Search and Retrieval of Heterogeneous Information,” 1995, Proceedings of CAiSE'95, Finland.
Singh, N., “Unifying Heterogeneous Information Models,” 1998 Communications of the ACM, 13 pages.
SRI2009, “SRI Speech: Products: Software Development Kits: EduSpeak,” 2009, 2 pages, available at http://web.archive.org/web/20090828084033/http://www.speechatsri.com/products/eduspeak.shtml.
Starr, B., et al., “Knowledge-Intensive Query Processing,” May 31, 1998, Proceedings of the 5th KRDB Workshop, Seattle, 6 pages.
Stern, R., et al. “Multiple Approaches to Robust Speech Recognition,” 1992, Proceedings of Speech and Natural Language Workshop, 6 pages.
Stickel, M., “A Nonclausal Connection-Graph Resolution Theorem-Proving Program,” 1982, Proceedings of AAAI'82, 5 pages.
Sugumaran, V., “A Distributed Intelligent Agent-Based Spatial Decision Support System,” Dec. 31, 1998, Proceedings of the Americas Conference on Information systems (AMCIS), 4 pages.
Sycara, K., et al., “Coordination of Multiple Intelligent Software Agents,” International Journal of Cooperative Information Systems (IJCIS), vol. 5, Nos. 2 & 3, Jun. & Sep. 1996, 33 pages.
Sycara, K., et al., “Distributed Intelligent Agents,” IEEE Expert, vol. 11, No. 6, Dec. 1996, 32 pages.
Sycara, K., et al., “Dynamic Service Matchmaking Among Agents in Open Information Environments,” 1999, SIGMOD Record, 7 pages.
Sycara, K., et al., “The RETSINA MAS Infrastructure,” 2003, Autonomous Agents and Multi-Agent Systems, vol. 7, 20 pages.
Tyson, M., et al., “Domain-Independent Task Specification in the TACITUS Natural Language System,” May 1990, SRI International, Artificial Intelligence Center, 16 pages.
Wahlster, W., et al., “Smartkom: multimodal communication with a life-like character,” 2001 EUROSPEECH-Scandinavia, 7th European Conference on Speech Communication and Technology, 5 pages.
Waldinger, R., et al., “Deductive Question Answering from Multiple Resources,” 2003, New Directions in Question Answering, published by AAAI, Menlo Park, 22 pages.
Walker, D., et al., “Natural Language Access to Medical Text,” Mar. 1981, SRI International, Artificial Intelligence Center, 23 pages.
Waltz, D., “An English Language Question Answering System for a Large Relational Database,” © 1978 ACM, vol. 21, No. 7, 14 pages.
Ward, W., et al., “A Class Based Language Model for Speech Recognition,” © 1996 IEEE, 3 pages.
Ward, W., et al., “Recent Improvements in the CMU Spoken Language Understanding System,” 1994, ARPA Human Language Technology Workshop, 4 pages.
Ward, W., “The CMU Air Travel Information Service: Understanding Spontaneous Speech,” 3 pages.
Warren, D.H.D., et al., “An Efficient Easily Adaptable System for Interpreting Natural Language Queries,” Jul.-Dec. 1982, American Journal of Computational Linguistics, vol. 8, No. 3-4, 11 pages. Best Copy Available.
Weizenbaum, J., “ELIZA—A Computer Program for the Study of Natural Language Communication Between Man and Machine,” Communications of the ACM, vol. 9, No. 1, Jan. 1966, 10 pages.
Winiwarter, W., “Adaptive Natural Language Interfaces to FAQ Knowledge Bases,” Jun. 17-19, 1999, Proceedings of 4th International Conference on Applications of Natural Language to Information Systems, Austria, 22 pages.
Wu, X., et al., “KDA: A Knowledge-based Database Assistant,” Data Engineering, Feb. 6-10, 1989, Proceedings of the Fifth International Conference on Data Engineering (IEEE Cat. No. 89CH2695-5), 8 pages.
Yang, J., et al., “Smart Sight: A Tourist Assistant System,” 1999 Proceedings of Third International Symposium on Wearable Computers, 6 pages.
Zeng, D., et al., “Cooperative Intelligent Software Agents,” The Robotics Institute, Carnegie-Mellon University, Mar. 1995, 13 pages.
Zhao, L., “Intelligent Agents for Flexible Workflow Systems,” Oct. 31, 1998 Proceedings of the Americas Conference on Information Systems (AMCIS), 4 pages.
Zue, V., et al., “From Interface to Content: Translingual Access and Delivery of On-Line Information,” 1997, EUROSPEECH, 4 pages.
Zue, V., et al., “Jupiter: A Telephone-Based Conversational Interface for Weather Information,” Jan. 2000, IEEE Transactions on Speech and Audio Processing, 13 pages.
Zue, V., et al., “Pegasus: A Spoken Dialogue Interface for On-Line Air Travel Planning,” 1994 Elsevier, Speech Communication 15 (1994), 10 pages.
Zue, V., et al., “The Voyager Speech Understanding System: Preliminary Development and Evaluation,” 1990, Proceedings of IEEE 1990 International Conference on Acoustics, Speech, and Signal Processing, 4 pages.
Bussler, C., et al., “Web Service Execution Environment (WSMX),” Jun. 3, 2005, W3C Member Submission, http://www.w3.org/Submission/WSMX, 29 pages.
Cheyer, A., “About Adam Cheyer,” Sep. 17, 2012, http://www.adam.cheyer.com/about.html, 2 pages.
Cheyer, A., “A Perspective on AI & Agent Technologies for SCM,” VerticalNet, 2001 presentation, 22 pages.
Domingue, J., et al., “Web Service Modeling Ontology (WSMO)—An Ontology for Semantic Web Services,” Jun. 9-10, 2005, position paper at the W3C Workshop on Frameworks for Semantics in Web Services, Innsbruck, Austria, 6 pages.
Guzzoni, D., et al., “A Unified Platform for Building Intelligent Web Interaction Assistants,” Proceedings of the 2006 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, Computer Society, 4 pages.
Roddy, D., et al., “Communication and Collaboration in a Landscape of B2B eMarketplaces,” VerticalNet Solutions, white paper, Jun. 15, 2000, 23 pages.
Office Action dated Mar. 2, 2012, received in United Kingdom patent application No. GB1001414.0, which corresponds to U.S. Appl. No. 12/363,513, 4 pages. (Rottler).
Office Action dated Apr. 21, 2011, received in Chinese patent application No. 201010109130.0, which corresponds to U.S. Appl. No. 12/363,513, 19 pages. (Rottler).
Office Action dated Oct. 26, 2011, received in Chinese patent application No. 201010109130.0, which corresponds to U.S. Appl. No. 12/363,513, 13 pages. (Rottler).
Office Action dated Mar. 21, 2012, received in Chinese patent application No. 201010109130.0, which corresponds to U.S. Appl. No. 12/363,513, 23 pages. (Rottler).
European Search Report dated Apr. 14, 2010, received in Application No. EP10151963.5, which corresponds to U.S. Appl. No. 12/363,513, 9 pages. (Benjamin Rottler).
GB Patent Act 1977: Combined Search and Examination Report under Sections 17 and 18(3) dated May 26, 2010, received in Application No. GB1001414.0, 8 pages. (Benjamin Rottler).
International Search Report and Written Opinion dated Mar. 3, 2010, received in International Application No. PCT/US2009/069052, which corresponds to U.S. Appl. No. 12/363,513, 11 pages. (Benjamin Rottler).
International Preliminary Report on Patentability dated Jun. 3, 2011, received in International Application No. PCT/US2009/069052, which corresponds to U.S. Appl. No. 12/363,513, 11 pages. (Benjamin Rottler).
Related Publications (1)
Number: 20100198375 A1; Date: Aug. 2010; Country: US