The present disclosure relates generally to user interfaces on electronic devices and, more particularly, to user interfaces capable of providing audio feedback to a user of an electronic device.
This section is intended to introduce the reader to various aspects of art that may be related to various aspects of the present techniques, which are described and/or claimed below. This discussion is believed to be helpful in providing the reader with background information to facilitate a better understanding of the various aspects of the present disclosure. Accordingly, it should be understood that these statements are to be read in this light, and not as admissions of prior art.
Electronic computing devices, such as computer systems, mobile phones, digital media players, personal digital assistants (PDAs), and the like, are commonly used for various personal and/or work-related purposes. Such electronic devices typically include some type of user interface that enables a user to interact with various applications (e.g., e-mail programs, internet browsers, media players, games, etc.) on the device to perform a variety of functions. In other words, the user interface may provide a gateway through which users may interact with applications to receive content and information, as well as responses to user inputs. The user interface, therefore, is an integral part of the design of these applications and helps determine the ease of use, and thus the quality of the overall user experience, of such devices.
Historically, many electronic devices have relied upon a graphical user interface to allow a user to interact with the device by way of a visual display. For instance, as the user interacts with the device, the device may display visual feedback in response to the user's actions. However, as some types of electronic devices have migrated towards smaller form factors having relatively small visual displays, graphical user interfaces are becoming not only more difficult to use and navigate, but also more limited in the amount of information they are able to convey.
More recently, audio user interfaces have experienced a rise in popularity. For instance, an audio user interface may supply audio feedback, instead of or in addition to visual feedback, to convey information and content to a user and, thus, is particularly well suited for use in electronic devices having limited visual display capabilities or, in some instances, no visual display capabilities at all. For example, upon the occurrence of an event that requests audio feedback, a corresponding audio clip may be played to convey audio information about the occurring event to the user. Unfortunately, some events may be associated with large amounts of audio information, which may overwhelm a user and, therefore, negatively impact the user experience, particularly when such events occur repeatedly within a relatively short time period. Additionally, audio feedback provided by conventional audio user interfaces may not adequately enable a user to distinguish between events of high or low contextual importance. Accordingly, there are continuing efforts to further improve the user experience with respect to audio user interfaces in electronic devices.
A summary of certain embodiments disclosed herein is set forth below. It should be understood that these aspects are presented merely to provide the reader with a brief summary of these certain embodiments and that these aspects are not intended to limit the scope of this disclosure. Indeed, this disclosure may encompass a variety of aspects that may not be set forth below.
The present disclosure generally relates to techniques for adaptively varying audio feedback provided by an audio user interface on an electronic device. In accordance with one embodiment, an audio user interface may be configured to devolve or evolve the verbosity of audio feedback in response to user interface events based at least partially upon the verbosity level of audio feedback provided during previous occurrences of the user interface event. As will be discussed further below, the term “verbosity,” as used herein, refers to the “wordiness” of the audio information provided by the audio feedback, and may also encompass non-verbal types of audio feedback, such as tones, clicks, beeps, chirps, etc. For instance, if a subsequent occurrence of the user interface event occurs in relatively close proximity to a previous occurrence of the user interface event, the audio user interface may devolve the audio feedback (e.g., by reducing verbosity), such as to avoid overwhelming a user with repetitive and highly verbose information.
In another embodiment, an audio user interface may be configured to adaptively vary audio feedback associated with a navigable list of data items based at least partially upon the speed at which a user navigates the list. In a further embodiment, an audio user interface may be configured to provide audio feedback that is more audibly distinct to indicate where newer data content is located in the navigable list, and to provide audio feedback that is less audibly distinct for older data content. In yet another embodiment, an audio user interface may be configured to vary the verbosity and/or distinctiveness of the audio feedback based upon the contextual importance of a user interface event. The various audio feedback techniques disclosed herein, when implemented alone or in combination, may enhance the user experience with regard to audio user interfaces.
Various refinements of the features noted above may exist in relation to various aspects of the present disclosure. Further features may also be incorporated in these various aspects as well. These refinements and additional features may exist individually or in any combination. For instance, various features discussed below in relation to one or more of the illustrated embodiments may be incorporated into any of the above-described aspects of the present disclosure alone or in any combination. Again, the brief summary presented above is intended only to familiarize the reader with certain aspects and contexts of embodiments of the present disclosure without limitation to the claimed subject matter.
Various aspects of this disclosure may be better understood upon reading the following detailed description and upon reference to the drawings in which:
One or more specific embodiments of the present disclosure will be described below. These described embodiments are only examples of the presently disclosed techniques. Additionally, in an effort to provide a concise description of these embodiments, all features of an actual implementation may not be described in the specification. It should be appreciated that in the development of any such actual implementation, as in any engineering or design project, numerous implementation-specific decisions must be made to achieve the developers' specific goals, such as compliance with system-related and business-related constraints, which may vary from one implementation to another. Moreover, it should be appreciated that such a development effort might be complex and time consuming, but would nevertheless be a routine undertaking of design, fabrication, and manufacture for those of ordinary skill having the benefit of this disclosure.
When introducing elements of various embodiments of the present disclosure, the articles “a,” “an,” and “the” are intended to mean that there are one or more of the elements. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements. Additionally, it should be understood that references to “one embodiment” or “an embodiment” of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features.
As will be discussed below, the present disclosure relates generally to techniques for adaptively varying audio feedback provided by an audio user interface on an electronic device. As will be appreciated, an audio user interface may be particularly useful where an electronic device has either limited or no display capabilities. Further, even if the electronic device includes a visual display, there are times when a user may have difficulty navigating a graphical user interface, such as in “eyes-busy” situations where it is impractical to shift visual focus away from an important activity and towards the graphical user interface. Such activities may include, for example, driving an automobile, exercising, and crossing a busy street. Additionally, audio feedback is a practical substitute for visual feedback if the device is being used by a visually impaired user.
In accordance with one embodiment, an audio user interface may devolve or evolve the verbosity of audio feedback. As mentioned above, the term “verbosity,” as used herein, shall be understood to refer to the “wordiness” of the audio information provided by the audio feedback, and may encompass non-verbal types of audio feedback, such as clicks, beeps, chirps, or other various types of non-verbal sound effects. For example, audio feedback having a high level of verbosity may output several spoken words (e.g., playing a previously stored audio file containing the spoken words, or using text-to-speech synthesis in real time), while audio feedback having a lower level of verbosity may output fewer spoken words or, in some cases, a non-verbal tone (e.g., no spoken words). In one embodiment, the verbosity of the audio feedback provided in response to user interface events is varied based at least partially upon the verbosity level of the audio feedback provided during one or more previous occurrences of the user interface event. Thus, when a subsequent occurrence of the user interface event occurs in relatively close proximity to a previous occurrence of the user interface event, the audio user interface may devolve the audio feedback (e.g., by reducing verbosity), such as to avoid overwhelming a user with repetitive and highly verbose information.
In another embodiment, an audio user interface may be configured to adaptively vary audio feedback associated with a navigable list of data items based at least partially upon the speed at which a user navigates the list. In a further embodiment, an audio user interface may be configured to provide audio feedback that is more audibly distinct to indicate where newer data content is located in the navigable list, and to provide audio feedback that is less audibly distinct for older data content. In yet another embodiment, an audio user interface may be configured to vary the verbosity and/or distinctiveness of the audio feedback based upon the contextual importance of a user interface event. The various audio feedback techniques disclosed herein, when implemented alone or in combination, may enhance the user experience with regard to audio user interfaces.
Before continuing, several additional terms used extensively throughout the present disclosure will first be defined in order to facilitate a better understanding of the disclosed subject matter. For instance, events that occur during operation of an electronic device may be generally categorized as “user events” or “system events.” As used herein, the term “user event” and the like shall be understood to refer to an event that occurs as a result of a user's interaction with the device. To provide an example, a user event may be a notification indicating the availability of a particular device function requested by a user. In contrast, the term “system event” or the like shall be understood to refer to events that are generally initiated by the device itself during operation to provide information pertaining to the status of the device, regardless of whether a user is actively interacting with or issuing requests and/or commands to the device. By way of example only, a system event may include a low battery notification. Thus, it should be understood that the term “event,” as used herein, may refer to a user event or a system event, as defined above.
In the context of audio user interfaces, an electronic device may initiate the playback of an “audio item” to provide audio feedback upon the occurrence of certain events. As used herein, the term “audio item” or the like shall be understood to refer to audio information provided by an audio user interface of an electronic device. For instance, an audio item may be an audio file stored on the device (e.g., in memory or non-volatile storage), and may contain verbal audio information (e.g., speech data) or non-verbal audio cues, such as beeps, clicks, chirps, chimes, rings, and other various tones or sound effects. Additionally, some audio items may not be stored locally on a device, but instead may be generated using synthesized speech applications (e.g., text-to-speech) in connection with an occurrence of a particular event that requests audio feedback.
In accordance with the techniques described below, certain events may be associated with a set of audio items having different verbosity levels. For instance, a set of audio items may include a non-verbal audio item (e.g., no wordiness content) and an audio item having a highest verbosity level (e.g., “full verbosity”), as well as one or more audio items of intermediate verbosity levels. As used herein, the terms “devolve,” “step down,” or the like, shall be understood to refer to the act of decreasing the verbosity of audio feedback associated with a particular event by selecting and playing back an audio item that is less verbose relative to the verbosity of the audio item selected during the previous occurrence of the event. Similarly, the term “evolve,” “step up,” or the like shall be understood to refer to the act of increasing the verbosity of audio feedback associated with a particular event by selecting and playing back an audio item that is more verbose relative to the verbosity of the audio item selected during the previous occurrence of the event. Various techniques for determining how to devolve or evolve audio feedback are disclosed below.
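By way of illustration only, such a set may be thought of as an ordered sequence of audio items ranked by verbosity. The following minimal Python sketch is hypothetical; the phrases, names, and number of verbosity levels are illustrative assumptions rather than part of any disclosed audio feedback data set:

```python
# Hypothetical sketch of an audio feedback data set for a single event:
# items are ordered from least verbose (index 0) to most verbose.
from dataclasses import dataclass

@dataclass(frozen=True)
class AudioItem:
    content: str       # spoken phrase, or a description of a non-verbal cue
    verbosity: int     # 0 = non-verbal tone; higher values = more verbose

EXAMPLE_SET = [
    AudioItem("<non-verbal tone>", 0),
    AudioItem("LOW BATTERY", 1),
    AudioItem("BATTERY IS LOW, PLEASE CONNECT A CHARGER", 2),  # full verbosity
]

def step_down(current_index):
    """Devolve: move to the next less verbose item, bottoming out at the tone."""
    return max(current_index - 1, 0)

def step_up(current_index, items=EXAMPLE_SET):
    """Evolve: move to the next more verbose item, capping at full verbosity."""
    return min(current_index + 1, len(items) - 1)
```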
Further, the term “contextual importance” or the like, as applied to user interfaces, shall be understood to refer to the importance of the information provided in response to an event on a device relative to the context in which the information is provided. For instance, events of higher contextual importance may provide more distinct sounding audio feedback relative to events of lower contextual importance. To provide one example, events which may require a user response, such as an event prompting a user to allow or deny an incoming network connection, may have relatively high contextual importance, as the device may require the user to provide a decision in response to the event in order to determine how to address the incoming network connection request. To provide another example, a first occurrence of a low battery warning notification event may have relatively low contextual importance, as such a notification is generally meant to be informative and does not necessarily require a user response or immediate user action. However, the contextual importance of a low battery notification may gradually increase if a user either intentionally or inadvertently disregards the low battery notification over several repeated occurrences, resulting in the device approaching a critical power threshold required for continued operation.
In some embodiments, the contextual importance of a user interface event may be determined based upon pre-programmed information (e.g., events may be programmed as having high or low contextual importance characteristics). In other embodiments, the contextual importance of a user interface event may be adaptive or learned based upon previous device behavior and/or how a user interacts with the device during previous occurrence(s) of the user interface event. Additionally, in some embodiments, the contextual importance may be user-specified, such as via a set of configurable user preference settings on the electronic device. Various embodiments are discussed below for varying audio feedback to indicate the contextual importance of events. Thus, it should be understood that the evolving and devolving of audio feedback verbosity may be an intelligent, adaptive activity performed by an electronic device in response to user inputs (e.g., direct user inputs, user preference settings, etc.) and/or in response to external stimuli (e.g., device operation events, such as low power or low memory). Indeed, as will be shown in the various embodiments below, the evolution and devolution of audio feedback verbosity may be dynamic and may be tailored based on specific user preferences and/or settings stored on the device.
Turning now to the drawings,
As shown in
With regard to each of the illustrated components in
The input structures 14 may provide user input or feedback to the processor(s) 16. For instance, the input structures 14 may be configured to control one or more functions of the electronic device 10, such as applications running on the device 10. By way of example only, the input structures 14 may include buttons, sliders, switches, control pads, keys, knobs, scroll wheels, keyboards, mice, touchpads, and so forth, or some combination thereof. In one embodiment, the input structures 14 may allow a user to navigate the GUI 36 displayed on the device 10. Additionally, the input structures 14 may include a touch sensitive mechanism provided in conjunction with the display 22. In such embodiments, a user may select or interact with displayed interface elements via the touch sensitive mechanism.
The operation of the device 10 may be generally controlled by one or more processors 16, which may provide the processing capability required to execute an operating system, application programs, the GUI 36, the audio user interface 38, and any other functions provided on the device 10. The processor(s) 16 may include a single processor or, in other embodiments, may include multiple processors. By way of example, the processor 16 may include “general purpose” microprocessors, application-specific integrated circuits (ASICs), custom processors, or a combination of such processing components. For example, the processor(s) 16 may include instruction set processors (e.g., RISC), graphics/video processors, audio processors, and/or other related chipsets. The processor(s) 16 may be coupled to one or more data buses for transferring data and instructions between various components of the device 10.
Instructions or data to be processed by the processor(s) 16 may be stored in a computer-readable medium, such as the memory 18, which may be a volatile memory, such as random access memory (RAM), a non-volatile memory, such as read-only memory (ROM), or a combination of RAM and ROM devices. For example, the memory 18 may store firmware for the device 10, such as an operating system, applications, graphical and audio user interface functions, or any other routines that may be executed on the device 10. While the user interface 34 (including the GUI 36 and audio user interface 38) is shown as a component of the memory 18, it should be understood that the encoded instructions (e.g., machine-readable code) defining the GUI 36 and audio user interface 38 may actually reside in the non-volatile storage 20, and may be loaded into the memory 18 for execution at run time.
The non-volatile storage device 20 may include flash memory, a hard drive, or any other optical, magnetic, and/or solid-state storage media, for persistent storage of data and/or instructions. By way of example, the non-volatile storage 20 may be used to store data files, including audio data, video data, pictures, as well as any other suitable data. As will be discussed further below, non-volatile storage 20 may be utilized by device 10 to store various audio items that may be selected and played back via the audio user interface 38 to provide audio feedback to a user of the device 10.
The display 22 may be used to display various images generated by device 10. For instance, the display 22 may receive and display images 40 generated by the GUI 36. In various embodiments, the display 22 may be any suitable display, such as a liquid crystal display (LCD), plasma display, or an organic light emitting diode (OLED) display, for example. Additionally, the display 22 may be provided in conjunction with the above-discussed touch-sensitive mechanism (e.g., a touchscreen) that may function as part of a control interface for the device 10. Further, it should be noted that in some embodiments, the device 10 may not include a display 22 or a GUI 36, but instead may include only an audio user interface 38 through which a user interacts with the device 10. An example of an embodiment of the device 10 that lacks a display 22 may be a model of an iPod® Shuffle, available from Apple Inc.
As mentioned above, the audio output device 24 may include an external audio output device, such as headphones or external speakers connected to the device 10 by an I/O port 12. Additionally, the audio output device 24 may include integrated speakers. As shown in
The embodiment illustrated in
The electronic device 10 also includes the network device 28, which may be a network controller or a network interface card (NIC) that may provide for network connectivity over a wireless 802.11 standard or any other suitable networking standard, and may provide a connection to a local area network (LAN), a wide area network (WAN), such as an Enhanced Data Rates for GSM Evolution (EDGE) network or a 3G data network, or the Internet. By way of the network device 28, the device 10 may connect to and exchange data with any device on the network, such as portable electronic devices, personal computers, printers, and so forth. In certain embodiments, the network device 28 may provide for a connection to an online digital media content provider, such as the iTunes® service, available from Apple Inc.
The power source 30 of the device 10 may include the capability to power the device 10 in both non-portable and portable settings. For example, in a portable setting, the device 10 may include one or more batteries, such as a Li-Ion battery, for powering the device 10. The battery may be recharged by connecting the device 10 to an external power source, such as to an electrical wall outlet. In a non-portable setting, the power source 30 may include a power supply unit (PSU) configured to draw power from an electrical wall outlet, and to distribute the power to various components of a non-portable electronic device, such as a desktop computing system.
Having described the components of the electronic device 10 depicted in
As will be appreciated, the input structures 14 may also include various other buttons and/or switches which may be used to interact with the computer 50, such as to power on or start the computer, to operate a GUI or an application running on the computer 50, and to adjust various other aspects relating to operation of the computer 50 (e.g., sound volume, display brightness, etc.). The computer 50 may also include various I/O ports 12 that provide for connectivity to additional devices, as discussed above, such as a FireWire® or USB port, a high definition multimedia interface (HDMI) port, or any other type of port that is suitable for connecting to an external device. Additionally, the computer 50 may include network connectivity (e.g., network device 28), memory (e.g., memory 18), and storage capabilities (e.g., storage device 20), as described above with respect to
As further shown, the display 22 may be configured to generate various images that may be viewed by a user. For example, during operation of the computer 50, the display 22 may display the GUI 36 that allows the user to interact with an operating system and/or applications running on the computer 50. The GUI 36 may include various layers, windows, screens, templates, or other graphical elements that may be displayed in all, or a portion, of the display device 22. For instance, in the depicted embodiment, the GUI 36 may display an operating system interface that includes various graphical icons 56, each of which may correspond to various applications that may be opened or executed upon detecting a user selection (e.g., via keyboard/mouse or touchscreen input). The icons 56 may be displayed in a dock 58 or within one or more graphical window elements 60 displayed on the screen.
In some embodiments, the selection of an icon 56 may lead to a hierarchical navigation process, such that selection of an icon 56 leads to a screen or opens another graphical window that includes one or more additional icons or other GUI elements. By way of example only, the operating system GUI 36 displayed in
The enclosure 52 also includes various user input structures 14 through which a user may interface with the handheld device 70. For instance, each input structure 14 may be configured to control one or more respective device functions when pressed or actuated. By way of example, one or more of the input structures 14 may be configured to invoke a “home” screen 72 or menu to be displayed, to toggle between a sleep, wake, or powered on/off mode, to silence a ringer for a cellular phone application, to increase or decrease a volume output, and so forth. It should be understood that the illustrated input structures 14 are merely exemplary, and that the handheld device 70 may include any number of suitable user input structures existing in various forms including buttons, switches, keys, knobs, scroll wheels, and so forth.
As shown in
The display device 22 may display various images generated by the handheld device 70. For example, the display 22 may display various system indicators 73 providing feedback to a user with regard to one or more states of handheld device 70, such as power status, signal strength, external device connections, and so forth. The display 22 may also display the GUI 36 that allows a user to interact with the device 70, as discussed above with reference to
The handheld device 70 also includes the audio output devices 24, the audio input devices 80, as well as the output transmitter 82. As discussed above, an audio user interface 38 on the device 70 may use the audio output devices 24 to provide audio feedback to a user through the playback of various audio items. Additionally, the audio output device 24 may be utilized in conjunction with the media player application 76, such as for playing back music and media files. Further, where the electronic device 70 includes a mobile phone application, the audio input devices 80 and the output transmitter 82 may operate in conjunction to function as the audio receiving and transmitting elements of a telephone.
Referring now to
For example, as mentioned above, one aspect of the audio feedback selection logic 86 may relate to devolving or evolving audio feedback in response to an event 88. In one embodiment, the selection logic 86 may identify a set of audio items (“audio feedback data set”) within the audio data storage 94 that is associated with the event 88 as being candidates for audio feedback. As discussed above, the set of audio items corresponding to the event 88 may vary in levels of verbosity, wherein each level may be referred to as a “step.” Thus, as defined above, “stepping down” the audio feedback may refer to decreasing the verbosity of the audio feedback, while “stepping up” the audio feedback may refer to increasing the verbosity of the audio feedback. Accordingly, an audio item 100 corresponding to a desired level of verbosity may be selected in accordance with information provided by the event statistics data storage 92 and the user preferences 96.
In one embodiment, the event statistics data storage 92 may store information about the event 88, including the frequency at which event 88 has previously occurred during operation of the device 10, the audio item selected for playback during the most recent occurrence of the event 88, as well as the temporal proximity at which the event 88 last occurred, and so forth. By way of example, each previous occurrence of the event 88 may be stamped with a time value provided by the timer 98 and stored as a data entry in the event statistics data storage 92. The timer 98 may be implemented as a standalone clock (e.g., an RC oscillator) or may be configured to derive time values based on an external system clock of the device 10. Thus, when the event 88 occurs in close proximity, i.e., within a selected amount of time (a “wait time” or “step-up time” configurable through the user preferences 96), relative to the previous occurrence of the event 88, the audio feedback selection logic 86 may select an audio item 100 from the audio feedback data set that is less verbose relative to the audio item selected during the previous occurrence. In this manner, the audio user interface 38 may avoid repeatedly playing back the same high verbosity audio item for multiple occurrences of a particular event 88 over a relatively short amount of time, thus improving the user experience with regard to the audio user interface 38.
By the same token, some embodiments of the audio feedback selection logic 86 may also be configured to evolve the audio feedback using a technique similar to the devolving process discussed above. For example, upon detecting the occurrence of the event 88, if the event statistics data 92 indicates that the event 88 has not occurred within the interval corresponding to the selected step-up time just prior to the occurrence of the event 88, then the selection logic 86 may evolve the audio feedback by selecting an audio item 100 from the audio feedback data set that is more verbose relative to the audio item selected for the previous occurrence of the event 88.
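Reduced to logic, the window-based selection described above might resemble the following sketch. This is a hypothetical Python illustration: the class name and the use of a monotonic clock are assumptions, and the reset-to-full-verbosity policy shown is only one possible evolve embodiment (the one-step alternative is noted in a comment):

```python
import time

class FeedbackSelector:
    """Sketch: devolve feedback when an event recurs within the step-up
    window; otherwise evolve (here, reset to full verbosity)."""

    def __init__(self, num_levels, step_up_window_s):
        self.full = num_levels - 1      # index of the full-verbosity item
        self.window = step_up_window_s  # "wait time" from user preferences 96
        self.last_time = None           # event statistics: time of last occurrence
        self.last_level = None          # event statistics: last level played

    def select(self, now=None):
        now = time.monotonic() if now is None else now
        if self.last_time is None or (now - self.last_time) > self.window:
            level = self.full           # outside the window: full verbosity
            # alternative embodiment: evolve one step instead of resetting,
            # e.g., level = min(self.last_level + 1, self.full)
        else:
            level = max(self.last_level - 1, 0)  # inside the window: devolve
        self.last_time, self.last_level = now, level
        return level
```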
While the frequency and temporal proximity with which an event 88 occurs are one metric by which the selection logic 86 of the audio user interface 38 may vary audio feedback, other factors may also contribute to how the selection logic 86 selects the audio item 100. For instance, in one embodiment, the selection logic 86 may be configured to control or vary audio feedback based upon the contextual importance of the event 88, which may depend upon the importance of the information provided in response to the event 88 relative to the context in which the event 88 occurred. In other embodiments, the contextual importance of an event may be determined based upon pre-programmed information (e.g., events may be programmed as having high or low contextual importance characteristics), may be adaptive or learned based upon previous device behavior and/or how a user interacts with the device during previous occurrence(s) of the event, may be user-specified, such as via a set of configurable user preference settings on the electronic device, or may be determined based on a combination of such factors. In a further embodiment, the selection logic 86 may be configured to vary audio feedback associated with a displayed list of items based upon the speed at which the list is navigated by a user of the device 10.
With these points in mind, the remaining figures are intended to depict various embodiments for varying audio feedback provided by an audio user interface (e.g., 38) in accordance with aspects of the present disclosure. Specifically,
Referring to
As discussed above, a GUI 36, depending on the inputs and selections made by a user, may display various screens including icons (e.g., 56) and graphical elements. These elements may represent graphical and virtual elements or “buttons” which may be selected by the user from the display 22 using one or more input structures 14 (
As shown in the screen 104, the application 106 may display a list 108 of media items 110, such as song files, video files, podcasts, and so forth, from which a user may select an item 112 for playback on the device 10. As shown in
Additional playback functions provided by the application 106 are depicted by the graphical buttons 126, 128, 130, and 132. For instance, the graphical button 126 may represent a function by which the user may manually create a new group of media items for playback, commonly referred to as a “playlist.” The graphical buttons 128 and 130 may represent functions for enabling or disabling “shuffle” and “repeat” playback modes, respectively. Finally, the graphical button 132 may represent a function for automatically generating a playlist using media items stored on the device 10 which are determined to be similar to the selected media item 112. By way of example only, such a function may be provided as the Genius® function, available on the iTunes® application, as well as on models of the iPod® and iPhone®, all available from Apple Inc.
Genius® playlists may be generated using a ratings system and filtering algorithms provided through an external centralized server, such as the iTunes® server, provided by Apple Inc. In some instances, however, the Genius® function may be unable to fulfill a user request for generating a playlist, such as when a selected media item 112 is relatively new and the Genius® function is unable to obtain sufficient data points for identifying similar media stored on the device 10 (e.g., in non-volatile storage 20). Additionally, the Genius® function may also be unavailable if the total number of media items stored on the device 10 is insufficient to generate a suitable playlist. For the purposes of the embodiments discussed below with respect to
As discussed above, certain embodiments of the present technique may include devolving audio feedback in response to the event 88. For instance, suppose that after attempting to apply the Genius® function to the selected media item 112, the user further attempts to apply the Genius® function to several other items on the list 108 within a relatively short interval of time with no success, thus triggering the event 88 on each attempt. Assuming that the devolving techniques discussed above are not applied, audio feedback would be provided at “full-verbosity” for each occurrence, which may overwhelm the user with repetitive information and, thus, negatively impact the user experience with regard to the application 106.
To enhance the user experience, the audio feedback selection logic 86 (
The audio item 152 may represent a first-level devolved audio item that is less verbose relative to the audio item 150, but still contains a substantial portion of verbal audio information. For instance, when selected, the audio item 152 may cause the verbal audio information “GENIUS IS NOT AVAILABLE” to be played back through the audio output device 24. The audio item 154 is even less verbose compared to the audio item 152, and only includes a relatively short verbal message: “NO GENIUS.” Finally, the audio item 156 represents the least verbose item of the set 148, and includes no verbal components, but only a non-verbal cue in the form of a negative sounding tone or beep.
Thus, the depicted audio feedback data set 148 of
An example illustrating how audio feedback corresponding to the event 88 shown in
Beginning at time t0, the occurrence of the event 88a may result in the visual notification window 140 of
In the present example, the event 88b occurs once again at time t20. Upon the occurrence of the event 88b, the event statistics data storage unit 92 may indicate to the selection logic 86 that the event 88a occurred less than 45 minutes ago (e.g., the step-up interval). Thus, because the event 88b occurs within the step-up window 157 (e.g., from t0 to t45), the selection logic 86 may identify the audio item that was played during the most recent occurrence of the event 88 (e.g., in this case, audio item 150 at time t0), and devolve the audio feedback by one step of verbosity. This may result in the selection and playback of the audio item 152, which, as shown in
Thereafter, the event 88c occurs again at time t35. Because the event 88c occurs within the step-up window 158 (e.g., from t20 to t65), the selection logic 86 of the audio user interface 38 may further devolve the audio feedback associated with the event 88c by selecting and playing back the audio item 154, which is one verbosity step lower than the previously played audio item 152. Once the playback of the audio item 154 occurs at time t35, the remainder of the step-up window 158 also becomes irrelevant, and a step-up window 159 associated with the event 88c is established from time t35 to time t80 and becomes the current step-up window.
Following the event 88c, the event 88d occurs again at time t55. Again, because the event 88d occurs within the current step-up window 159 (e.g., from t35 to t80), the selection logic 86 of the audio user interface 38 may lower the verbosity of the audio feedback an additional step, thus fully devolving the audio feedback associated with the event 88 to the non-verbal audio item 156. Thereafter, once the playback of the non-verbal audio item 156 occurs at time t55, a new step-up window 160 associated with the event 88d is established from time t55 to time t100, and the remainder of the previous step-up window 159 becomes irrelevant. In other words, as long as the event 88 continues to occur within a current step-up time window following the most recent previous occurrence of the event 88, the selection logic 86 may continue to devolve the audio feedback corresponding to the event 88. It should be noted, however, that because the audio item 156 cannot be devolved any further in the present example, additional occurrences of the event 88 within the window 160 may result in the selection logic 86 selecting and playing the audio item 156 again.
Next, at time t110, the event 88e occurs once again. This occurrence, however, is outside of the step-up window 160. In this case, the selection logic 86 may be configured to evolve the audio feedback. For instance, in one embodiment, the selection logic 86 may “reset” the verbosity of the audio feedback to full verbosity by selecting and playing back the audio item 150 at time t110, regardless of the verbosity level of the most recently played audio item (e.g., audio item 156). In another embodiment, the selection logic 86 may evolve the audio feedback by increasing the verbosity of the audio feedback by one step relative to the most recently played audio item. For instance, in the present example, the selection of the audio item 154 at time t110 may provide a one step increase in the verbosity of the audio feedback relative to the most recently played audio item 156.
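The progression described above can be replayed numerically. The short standalone sketch below is hypothetical Python; times are in minutes, and verbosity levels 3 through 0 stand in for the audio items 150, 152, 154, and 156:

```python
# Replay of the example timeline under the reset-to-full embodiment.
WINDOW = 45                         # step-up window, in minutes
last_t, level = None, 3
for t in [0, 20, 35, 55, 110]:      # occurrences 88a through 88e
    if last_t is None or t - last_t > WINDOW:
        level = 3                   # reset to full verbosity (audio item 150)
    else:
        level = max(level - 1, 0)   # devolve one step within the window
    last_t = t
    print(f"t{t}: level {level}")   # prints levels 3, 2, 1, 0, 3
```

Under the alternative one-step evolve embodiment, the final occurrence at t110 would instead yield level 1, corresponding to the audio item 154, as noted above.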
As will be appreciated, the occurrence of each of the events 88a-88e, in addition to triggering audio feedback, may also trigger the display of visual feedback on the GUI 36, such as by way of the visual notification window 140 shown in
While the graphical timeline depicted in
For instance, one embodiment for devolving audio feedback may consider the occurrence of a “playback termination event.” As used herein, a playback termination event refers to a response by the user that terminates the playback of an audio item before completion. For instance, referring to
An example illustrating how playback termination events may affect the devolvement of audio feedback is shown in
Beginning at time t0, the occurrence of the event 88f may result in the visual notification window 140 of
Next, the event 88g occurs again at time t30. Upon the occurrence of the event 88g, the event statistics data storage unit 92 may indicate to the selection logic 86 that a playback termination event 161 was detected in connection with the previous occurrence of the event 88f at time t0. In the illustrated embodiment, this may cause the selection logic 86 to fully devolve the audio feedback by selecting and playing back the non-verbal audio item 156, thus bypassing the verbosity levels represented by the audio items 152 and 154. A new step-up window 163 is established from time t30 to time t75. As will be appreciated, like the embodiment of
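The decision ordering just described might be captured as follows. This is a hypothetical Python sketch; the terminated_last flag stands in for an entry assumed to be recorded in the event statistics data storage 92:

```python
def select_level(last_level, within_window, terminated_last, full_level):
    """Sketch: choose the verbosity level for a new occurrence of an event.
    A playback termination event during the previous occurrence jumps
    straight to the most devolved (non-verbal) item, bypassing any
    intermediate verbosity steps."""
    if not within_window:
        return full_level            # outside the step-up window: full verbosity
    if terminated_last:
        return 0                     # playback was cut short: non-verbal cue only
    return max(last_level - 1, 0)    # otherwise devolve a single step
```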
In some embodiments, a playback termination event (e.g., 161) may, in addition to affecting audio feedback behavior, also affect visual feedback behavior. For instance, referring to
The notification banner 164 may include the graphical elements 166 and 168. By selecting the graphical element 166, the user may expand the notification banner 164, causing the window 140 to appear instead. In one embodiment, the GUI 36 may display the notification banner 164 only briefly, such as for a period of 5 to 10 seconds, before automatically removing the banner 164 from the screen 104. Additionally, the user may choose to manually remove the notification banner 164 by selecting the graphical button 168.
The various techniques for devolving and evolving audio feedback, as described with reference to the embodiments shown in
Referring first to
Thereafter, at step 172, a first audio item is selected that corresponds to a desired verbosity level which, as shown in
Based upon the event statistics data from step 176, the selection logic 86 may, at decision block 177, determine whether the event occurred within the step-up window following the most recent previous occurrence of the event. If the event did not occur within the step-up window, then the method 174 continues to step 178, whereby audio feedback is provided at full verbosity. As mentioned above, in an alternate embodiment, rather than providing full verbosity, the selection logic 86 may instead evolve the audio feedback by one step. For instance, as shown in
Referring again to decision block 177, if the event does occur within the step-up window following the previous occurrence, the method 174 continues to decision block 186, at which a determination is made as to whether the previous occurrence was accompanied by a playback termination event (e.g., 161). If a playback termination event was detected alongside the previous occurrence of the event, the method 174 continues to step 188, and the most devolved audio item from the audio feedback data set (e.g., 148) is selected and played back. By way of example, the most devolved audio item may be a non-verbal audio cue (e.g., audio item 156).
If the decision block 186 determines that there was not a playback termination event detected during the previous occurrence of the event, then the most recently selected audio item corresponding to the previous occurrence of the event is identified at step 190. At step 192, a determination is made as to whether the most recently selected audio item is already the most devolved audio item of the audio feedback data set from step 176. If the most recently selected audio item is determined to be the most devolved audio item from the set, then it is selected as the current audio item and played back at step 188. If the most recently selected audio item is not the most devolved audio item from the set, then the selection logic 86 may devolve the audio feedback one step, and play the corresponding devolved audio item.
Continuing to
Referring first to
Upon selection of the icon 74, the user may be navigated to a home screen 200 of the media player application 76. As shown in
As discussed above, during operation of the device 10, various events, including user events and system events, as defined above, may occur. For instance, the visual notification window 218 may be displayed on the screen 200 to indicate that a user event 216 has occurred in response to actions initiated by a user to enable the media player application 76 to accept incoming network connections. As shown in
In the present context, the “contextual importance” of the event 216 may be relatively high due to the fact that a user input is required in order to carry out or not carry out the requested operation (e.g., the allowance of incoming network connections). That is, without a response from the user, the device 10 is unable to proceed, as the user has not confirmed or denied the allowance of incoming network connections. Thus, an audio feedback data set associated with the event 216 may include at least a non-verbal audio tone 226 that signifies the high contextual importance of the event 216 when played back, with a goal of prompting the user to respond to the visual notification window 218. For example, the non-verbal tone 226 may include a distinctive alarm sound, a chirp, a beep, or any other type of non-verbal audio tone that may highlight the contextual importance of the event 216 (e.g., higher pitched sound, louder volume, longer playback time, etc.). In other words, while the event 216 may also be associated with one or more verbal audio items, in situations where either the audio user interface 38 selects the non-verbal audio item 226 or in which the user configures the device 10 to play back only non-verbal audio feedback, the non-verbal audio item 226 may help audibly distinguish the event 216 from events of lesser contextual importance.
To provide an example, an event that initially has lower contextual importance relative to the event 216 may be a system event in the form of a low battery warning 228. For instance, upon the occurrence of the low battery warning event 228, the visual notification window 230 is displayed on the screen 200, and contains the visual notification message 232 indicating that the power source 30 (
Ideally, the user will mentally process the notification provided by the window 230 and take necessary actions to recharge the power source 30. However, the device 10 will continue to operate in the near term regardless of whether or not the user initiates recharging of the power source 30 immediately. In this context, the contextual importance of the event 228 may be regarded as generally low relative to the event 216. As such, the event 228 may have associated therewith a non-verbal audio item 236 that is less distinct (e.g., lower pitched, softer volume, shorter playback time, etc.) relative to the non-verbal audio item 226, thus signifying the lesser contextual importance of the event 228.
While the event 228 may initially be categorized as having low contextual importance, it should be appreciated that the context in which the event 228 occurs may change over time. For instance, the notification 230 may be a first warning based on a low power notification threshold of 20%. However, assuming the user chooses not to take action to replenish the power source 30, the device 10 will continue to consume the remaining power, thus further depleting the power source 30. Accordingly, in some embodiments, the user interface 34 may be configured to supply additional warnings at one or more lower thresholds. For instance, in one embodiment, the user interface 34 may supply a subsequent low power warning when the remaining charge in the power source 30 is depleted to 1% of total charge capacity. In this context, the 1% warning may be regarded as having high contextual importance, as the device 10 would be unable to continue operating when the power source 30 inevitably becomes fully depleted absent recharging or replacement. Thus, the latter example represents an embodiment in which multiple non-verbal audio items (e.g., of the same verbosity level) are associated with a common event, such that during the initial 20% warning event, a non-verbal audio item indicating low contextual importance may be played, and during the subsequent 1% warning event, another non-verbal audio item indicating high contextual importance may be played by the audio user interface 38.
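One possible realization of such escalating importance is a simple mapping from remaining charge to the distinctiveness of the selected cue, sketched below in hypothetical Python; the thresholds and the pitch, volume, and duration values are illustrative stand-ins for whatever acoustic parameters distinguish the cues in a given implementation:

```python
# Sketch: map remaining battery charge to a non-verbal cue whose
# distinctiveness reflects the contextual importance of the warning.
def battery_warning_cue(percent_remaining):
    if percent_remaining <= 1:
        # high contextual importance: a more distinct cue
        return {"pitch_hz": 1200, "volume": 0.9, "duration_s": 1.5}
    if percent_remaining <= 20:
        # low contextual importance: a less distinct cue
        return {"pitch_hz": 600, "volume": 0.5, "duration_s": 0.4}
    return None  # no warning event above the first threshold
```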
In additional embodiments, the contextual importance of the events 216 or 228 may be determined based upon pre-programmed information (e.g., events may be programmed as having high or low contextual importance characteristics), which may be established by the manufacturer of the device 10 or the programmer of the audio user interface 38, or later configured/modified by a user, such as through the user preference settings 96 (
Continuing to
By selecting the graphical button 210, the user may be navigated to the screen 250, which may display a navigable list 252 of music files (songs) 254 stored on the device 10, arranged alphabetically. For instance, as shown in
As shown in the screen 264, information pertaining to the selected music file 260 is displayed. For instance, the displayed information may include the name of the recording artist, the title of the selected music file 260, and, in some embodiments, the album with which the selected music file 260 is associated. The screen 264 may also display the album artwork 266 and the graphical buttons 268, 270, and 272. As will be appreciated, the graphical button 268 may allow the user to pause or un-pause the playback of the selected music file 260. Additionally, where the presently selected media file 260 is part of a playlist, the graphical buttons 270 and 272 may represent functions for returning to a previous file in the playlist or continuing to the subsequent file in the playlist. As can be appreciated, where a playlist is being played in a random or shuffle mode, the graphical buttons 270 and 272 may function to select a random file from the playlist for playback. The screen 264 also includes a sliding bar element 274, which may be manipulated by the user to control the volume of the audio playback. For the purposes of the list navigation examples discussed below with respect to
Referring now to
As depicted in
In accordance with the presently disclosed techniques, the audio user interface 38 may be configured to adapt to slight changes in the navigation speed 280. For instance, in one situation, the navigation speed may increase slightly, such that the transition time between list items is reduced to allot enough time for speaking only one of the two audio items (e.g., the song title or the artist name). In one embodiment, the audio user interface 38 may still provide full verbosity audio feedback with respect to the song title information, but may omit the information regarding the artist name.
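A minimal sketch of this time-budget adaptation appears below, in hypothetical Python; the per-segment speech durations are assumptions standing in for timing data that an implementation might derive from the stored audio items or a text-to-speech engine:

```python
def feedback_for_item(transition_time_s, title_duration_s, artist_duration_s):
    """Sketch: choose the most verbose feedback that fits within the time
    the selection rests on a list item before the user scrolls onward."""
    if transition_time_s >= title_duration_s + artist_duration_s:
        return "speak song title and artist name"  # full verbosity
    if transition_time_s >= title_duration_s:
        return "speak song title only"             # artist name omitted
    return "play non-verbal tone"                  # further devolved feedback
```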
Next,
In the present example of
Additionally, as indicated by the list item L8, the audio user interface 38 may also be configured to selectively provide non-verbal tones based on the “newness” of the list item. For instance, the list item L8 may represent a song that was recently purchased from an online digital media service, such as the iTunes® service, provided by Apple Inc. Thus, in order to emphasize the newness of the song L8, the audio user interface 38 may play a non-verbal tone that is more distinct (e.g., a higher pitched beep) relative to the non-verbal tones played for older content (e.g., L6, L7, etc.) when the newer song L8 is reached during navigation. As will be appreciated, the “newness” threshold may be configured through the user preferences 96 on the device 10. By way of example, a user may configure the device 10 to identify content purchased or downloaded within the last 3 days as being new content.
In another embodiment, the identification of “newer” content may include defining multiple tiers of newness. For instance, in addition to using a 3 day threshold for identifying the newest content on the device 10, a second threshold (e.g., 14 days) may be established to detect content that is still relatively recent. In such embodiments, different non-verbal tones may be used for list items that are identified as being newest and recent items, with the non-verbal tone for recent items being less distinct than the non-verbal tone associated with the newest items, but with both the non-verbal tones for recent and newest items being substantially more distinct relative to a non-verbal tone used for items not identified as being new or recent (e.g., items older than 14 days). Indeed, those skilled in the art will appreciate that any number of non-verbal tones for distinguishing between the age of content stored on the device 10 (e.g., based on any number of tiers defined by corresponding thresholds) may be utilized in various embodiments of the present technique.
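Such tiered classification might be sketched as follows, in hypothetical Python; the 3-day and 14-day thresholds mirror the examples above and, as noted, would be configurable through the user preferences 96:

```python
from datetime import datetime, timedelta

def newness_cue(acquired, now=None, newest_days=3, recent_days=14):
    """Sketch: classify a list item by age and return a description of the
    corresponding non-verbal tone (most distinct for the newest content)."""
    age = (now or datetime.now()) - acquired
    if age <= timedelta(days=newest_days):
        return "most distinct tone"        # newest content
    if age <= timedelta(days=recent_days):
        return "moderately distinct tone"  # recent content
    return "least distinct tone"           # older content
```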
Continuing to
As will be appreciated, in other embodiments, the frequency at which the non-verbal tones are provided may further decrease (e.g., every fourth, fifth, or sixth item) as the list navigation speed continues to increase. Further, it should be understood that the navigation of the list 252 may not necessarily occur at a constant speed. Thus, the audio user interface 38 may adjust the verbosity of the audio feedback accordingly. For instance, if the user initially navigates the list 252 very slowly (e.g., speed 280), and gradually increases to a faster speed (e.g., speed 284), the audio user interface 38 may initially provide full verbosity audio feedback for multiple segments of data (e.g., song title and artist name), and gradually devolve the verbosity to providing only the song title, eventually reaching a devolved verbosity scheme similar to that shown in Table 3. If the user subsequently gradually decreases the navigation speed, then the audio feedback may also gradually evolve back towards the full verbosity mode.
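The thinning of non-verbal tones at higher speeds might be expressed as follows; this hypothetical Python sketch assumes an illustrative mapping from navigation speed to a tone interval:

```python
def tone_interval(items_per_second):
    """Sketch: as navigation speed rises, play a tone for every item,
    every other item, then every third item (thresholds illustrative)."""
    if items_per_second < 2:
        return 1
    if items_per_second < 4:
        return 2
    return 3

def should_play_tone(item_index, items_per_second):
    return item_index % tone_interval(items_per_second) == 0
```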
Moreover, while the present techniques have been illustrated in conjunction with a graphical user interface, it should be understood that certain embodiments may include only an audio user interface. In such embodiments, the above-described audio feedback techniques may be applied as the user may navigate through a listing of items (e.g., using a scroll wheel) without a corresponding visual interface. As mentioned above, an embodiment of the device 10 that lacks a display 22 and thus a graphical user interface may be a model of an iPod® Shuffle, available from Apple Inc.
The various techniques for varying audio feedback during list navigation, as described with reference to the embodiments shown in
If the navigation speed at decision block 296 does not permit full verbosity audio feedback, the method 290 continues to decision block 300, at which a determination is made as to whether the current list item is the first item of an alphabetical group and, if so, the letter of the alphabetical group is spoken by the audio user interface 38 and provided as audio feedback (step 302). If the current list item is not the first item of an alphabetical group, the method 290 proceeds to decision block 304, whereby the newness of the current list item is determined. If the current list item is identified as being new content, then a distinct non-verbal audio item that indicates the newness of the current list item is played, as indicated at step 306. If the current list item is not identified as being new content, then a less distinct non-verbal audio item is played instead, as indicated at step 308.
Referring to
In summary, the embodiments presented above provide an intelligent and adaptive technique by which an electronic device (e.g., device 10) is capable of evolving and devolving audio feedback verbosity in response to user inputs and/or in response to external stimuli. For instance, based on user actions and/or user-defined preferences (e.g., preferences 96), the specific actions for devolving and/or evolving audio feedback may be dynamic and adaptive. By way of example, as shown in
Further, as will be understood, the various techniques described above and relating to adaptively varying audio feedback provided by an audio user interface of an electronic device are provided herein by way of example only. Accordingly, it should be understood that the present disclosure should not be construed as being limited to only the examples provided above. Indeed, a number of variations of the audio feedback techniques set forth above may exist. Further, it should be appreciated that the above-discussed techniques may be implemented in any suitable manner. For instance, the audio user interface 38 and the audio feedback selection logic 86, which are collectively configured to implement various aspects of the presently disclosed techniques, may be implemented using hardware (e.g., suitably configured circuitry), software (e.g., via a computer program including executable code stored on one or more tangible computer-readable media), or using a combination of both hardware and software elements.
The specific embodiments described above have been shown by way of example, and it should be understood that these embodiments may be susceptible to various modifications and alternative forms. It should be further understood that the claims are not intended to be limited to the particular forms disclosed, but rather to cover all modifications, equivalents, and alternatives falling within the spirit and scope of this disclosure.
This application is a continuation of U.S. patent application Ser. No. 12/686,876, filed Jan. 13, 2010 and now U.S. Pat. No. 8,381,107, which is hereby incorporated by reference in its entirety.
Number | Name | Date | Kind |
---|---|---|---|
3704345 | Coker et al. | Nov 1972 | A |
3828132 | Flanagan et al. | Aug 1974 | A |
3979557 | Schulman et al. | Sep 1976 | A |
4278838 | Antonov | Jul 1981 | A |
4282405 | Taguchi | Aug 1981 | A |
4310721 | Manley et al. | Jan 1982 | A |
4348553 | Baker et al. | Sep 1982 | A |
4653021 | Takagi | Mar 1987 | A |
4688195 | Thompson et al. | Aug 1987 | A |
4692941 | Jacks et al. | Sep 1987 | A |
4718094 | Bahl et al. | Jan 1988 | A |
4724542 | Williford | Feb 1988 | A |
4726065 | Froessl | Feb 1988 | A |
4727354 | Lindsay | Feb 1988 | A |
4776016 | Hansen | Oct 1988 | A |
4783807 | Marley | Nov 1988 | A |
4811243 | Racine | Mar 1989 | A |
4819271 | Bahl et al. | Apr 1989 | A |
4827520 | Zeinstra | May 1989 | A |
4829576 | Porter | May 1989 | A |
4833712 | Bahl et al. | May 1989 | A |
4839853 | Deerwester et al. | Jun 1989 | A |
4852168 | Sprague | Jul 1989 | A |
4862504 | Nomura | Aug 1989 | A |
4878230 | Murakami et al. | Oct 1989 | A |
4903305 | Gillick et al. | Feb 1990 | A |
4905163 | Garber et al. | Feb 1990 | A |
4914586 | Swinehart et al. | Apr 1990 | A |
4914590 | Loatman et al. | Apr 1990 | A |
4944013 | Gouvianakis et al. | Jul 1990 | A |
4955047 | Morganstein et al. | Sep 1990 | A |
4965763 | Zamora | Oct 1990 | A |
4974191 | Amirghodsi et al. | Nov 1990 | A |
4977598 | Doddington et al. | Dec 1990 | A |
4992972 | Brooks et al. | Feb 1991 | A |
5010574 | Wang | Apr 1991 | A |
5020112 | Chou | May 1991 | A |
5021971 | Lindsay | Jun 1991 | A |
5022081 | Hirose et al. | Jun 1991 | A |
5027406 | Roberts et al. | Jun 1991 | A |
5031217 | Nishimura | Jul 1991 | A |
5032989 | Tornetta | Jul 1991 | A |
5040218 | Vitale et al. | Aug 1991 | A |
5047614 | Bianco | Sep 1991 | A |
5057915 | Kohorn et al. | Oct 1991 | A |
5072452 | Brown et al. | Dec 1991 | A |
5091945 | Kleijn | Feb 1992 | A |
5127053 | Koch | Jun 1992 | A |
5127055 | Larkey | Jun 1992 | A |
5128672 | Kaehler | Jul 1992 | A |
5133011 | McKiel, Jr. | Jul 1992 | A |
5142584 | Ozawa | Aug 1992 | A |
5164900 | Bernath | Nov 1992 | A |
5165007 | Bahl et al. | Nov 1992 | A |
5179652 | Rozmanith et al. | Jan 1993 | A |
5194950 | Murakami et al. | Mar 1993 | A |
5197005 | Shwartz et al. | Mar 1993 | A |
5199077 | Wilcox et al. | Mar 1993 | A |
5202952 | Gillick et al. | Apr 1993 | A |
5208862 | Ozawa | May 1993 | A |
5216747 | Hardwick et al. | Jun 1993 | A |
5220639 | Lee | Jun 1993 | A |
5220657 | Bly et al. | Jun 1993 | A |
5222146 | Bahl et al. | Jun 1993 | A |
5230036 | Akamine et al. | Jul 1993 | A |
5235680 | Bijnagte | Aug 1993 | A |
5267345 | Brown et al. | Nov 1993 | A |
5268990 | Cohen et al. | Dec 1993 | A |
5282265 | Rohra Suda et al. | Jan 1994 | A |
RE34562 | Murakami et al. | Mar 1994 | E |
5291286 | Murakami et al. | Mar 1994 | A |
5293448 | Honda | Mar 1994 | A |
5293452 | Picone et al. | Mar 1994 | A |
5297170 | Eyuboglu et al. | Mar 1994 | A |
5301109 | Landauer et al. | Apr 1994 | A |
5303406 | Hansen et al. | Apr 1994 | A |
5309359 | Katz et al. | May 1994 | A |
5317507 | Gallant | May 1994 | A |
5317647 | Pagallo | May 1994 | A |
5325297 | Bird et al. | Jun 1994 | A |
5325298 | Gallant | Jun 1994 | A |
5327498 | Hamon | Jul 1994 | A |
5333236 | Bahl et al. | Jul 1994 | A |
5333275 | Wheatley et al. | Jul 1994 | A |
5345536 | Hoshimi et al. | Sep 1994 | A |
5349645 | Zhao | Sep 1994 | A |
5353377 | Kuroda et al. | Oct 1994 | A |
5377301 | Rosenberg et al. | Dec 1994 | A |
5384892 | Strong | Jan 1995 | A |
5384893 | Hutchins | Jan 1995 | A |
5386494 | White | Jan 1995 | A |
5386556 | Hedin et al. | Jan 1995 | A |
5390279 | Strong | Feb 1995 | A |
5396625 | Parkes | Mar 1995 | A |
5400434 | Pearson | Mar 1995 | A |
5404295 | Katz et al. | Apr 1995 | A |
5412756 | Bauman et al. | May 1995 | A |
5412804 | Krishna | May 1995 | A |
5412806 | Du et al. | May 1995 | A |
5418951 | Damashek | May 1995 | A |
5424947 | Nagao et al. | Jun 1995 | A |
5434777 | Luciw | Jul 1995 | A |
5444823 | Nguyen | Aug 1995 | A |
5455888 | Iyengar et al. | Oct 1995 | A |
5469529 | Bimbot et al. | Nov 1995 | A |
5471611 | McGregor | Nov 1995 | A |
5475587 | Anick et al. | Dec 1995 | A |
5479488 | Lennig et al. | Dec 1995 | A |
5491772 | Hardwick et al. | Feb 1996 | A |
5493677 | Balogh | Feb 1996 | A |
5495604 | Harding et al. | Feb 1996 | A |
5502790 | Yi | Mar 1996 | A |
5502791 | Nishimura et al. | Mar 1996 | A |
5515475 | Gupta et al. | May 1996 | A |
5536902 | Serra et al. | Jul 1996 | A |
5537618 | Boulton et al. | Jul 1996 | A |
5574823 | Hassanein et al. | Nov 1996 | A |
5577241 | Spencer | Nov 1996 | A |
5578808 | Taylor | Nov 1996 | A |
5579436 | Chou et al. | Nov 1996 | A |
5581655 | Cohen et al. | Dec 1996 | A |
5584024 | Shwartz | Dec 1996 | A |
5596676 | Swaminathan et al. | Jan 1997 | A |
5596994 | Bro | Jan 1997 | A |
5608624 | Luciw | Mar 1997 | A |
5613036 | Strong | Mar 1997 | A |
5617507 | Lee et al. | Apr 1997 | A |
5619694 | Shimazu | Apr 1997 | A |
5621859 | Schwartz et al. | Apr 1997 | A |
5621903 | Luciw et al. | Apr 1997 | A |
5642464 | Yue et al. | Jun 1997 | A |
5642519 | Martin | Jun 1997 | A |
5644727 | Atkins | Jul 1997 | A |
5664055 | Kroon | Sep 1997 | A |
5675819 | Schuetze | Oct 1997 | A |
5682539 | Conrad et al. | Oct 1997 | A |
5687077 | Gough, Jr. | Nov 1997 | A |
5696962 | Kupiec | Dec 1997 | A |
5701400 | Amado | Dec 1997 | A |
5706442 | Anderson et al. | Jan 1998 | A |
5710886 | Christensen et al. | Jan 1998 | A |
5712957 | Waibel et al. | Jan 1998 | A |
5715468 | Budzinski | Feb 1998 | A |
5721827 | Logan et al. | Feb 1998 | A |
5727950 | Cook et al. | Mar 1998 | A |
5729694 | Holzrichter et al. | Mar 1998 | A |
5732390 | Katayanagi et al. | Mar 1998 | A |
5734791 | Acero et al. | Mar 1998 | A |
5737609 | Reed et al. | Apr 1998 | A |
5737734 | Schultz | Apr 1998 | A |
5748974 | Johnson | May 1998 | A |
5749081 | Whiteis | May 1998 | A |
5759101 | Von Kohorn | Jun 1998 | A |
5790978 | Olive et al. | Aug 1998 | A |
5794050 | Dahlgren et al. | Aug 1998 | A |
5794182 | Manduchi et al. | Aug 1998 | A |
5794207 | Walker et al. | Aug 1998 | A |
5794237 | Gore, Jr. | Aug 1998 | A |
5799276 | Komissarchik et al. | Aug 1998 | A |
5801692 | Muzio et al. | Sep 1998 | A |
5822743 | Gupta et al. | Oct 1998 | A |
5825881 | Colvin, Sr. | Oct 1998 | A |
5826261 | Spencer | Oct 1998 | A |
5828999 | Bellegarda et al. | Oct 1998 | A |
5835893 | Ushioda | Nov 1998 | A |
5839106 | Bellegarda | Nov 1998 | A |
5845255 | Mayaud | Dec 1998 | A |
5857184 | Lynch | Jan 1999 | A |
5860063 | Gorin et al. | Jan 1999 | A |
5862223 | Walker et al. | Jan 1999 | A |
5864806 | Mokbel et al. | Jan 1999 | A |
5864844 | James et al. | Jan 1999 | A |
5867799 | Lang et al. | Feb 1999 | A |
5873056 | Liddy et al. | Feb 1999 | A |
5875437 | Atkins | Feb 1999 | A |
5884323 | Hawkins et al. | Mar 1999 | A |
5895464 | Bhandari et al. | Apr 1999 | A |
5895466 | Goldberg et al. | Apr 1999 | A |
5899972 | Miyazawa et al. | May 1999 | A |
5913193 | Huang et al. | Jun 1999 | A |
5915249 | Spencer | Jun 1999 | A |
5930769 | Rose | Jul 1999 | A |
5933822 | Braden-Harder et al. | Aug 1999 | A |
5936926 | Yokouchi et al. | Aug 1999 | A |
5940811 | Norris | Aug 1999 | A |
5941944 | Messerly | Aug 1999 | A |
5943670 | Prager | Aug 1999 | A |
5948040 | DeLorme et al. | Sep 1999 | A |
5956699 | Wong et al. | Sep 1999 | A |
5960422 | Prasad | Sep 1999 | A |
5963924 | Williams et al. | Oct 1999 | A |
5966126 | Szabo | Oct 1999 | A |
5970474 | LeRoy et al. | Oct 1999 | A |
5973612 | Deo et al. | Oct 1999 | A |
5974146 | Randle et al. | Oct 1999 | A |
5982891 | Ginter et al. | Nov 1999 | A |
5987132 | Rowney | Nov 1999 | A |
5987140 | Rowney et al. | Nov 1999 | A |
5987404 | Della Pietra et al. | Nov 1999 | A |
5987440 | O'Neil et al. | Nov 1999 | A |
5999908 | Abelow | Dec 1999 | A |
6016471 | Kuhn et al. | Jan 2000 | A |
6023684 | Pearson | Feb 2000 | A |
6024288 | Gottlich et al. | Feb 2000 | A |
6026345 | Shah et al. | Feb 2000 | A |
6026375 | Hall et al. | Feb 2000 | A |
6026388 | Liddy et al. | Feb 2000 | A |
6026393 | Gupta et al. | Feb 2000 | A |
6029132 | Kuhn et al. | Feb 2000 | A |
6038533 | Buchsbaum et al. | Mar 2000 | A |
6052656 | Suda et al. | Apr 2000 | A |
6055514 | Wren | Apr 2000 | A |
6055531 | Bennett et al. | Apr 2000 | A |
6064960 | Bellegarda et al. | May 2000 | A |
6070139 | Miyazawa et al. | May 2000 | A |
6070147 | Harms et al. | May 2000 | A |
6076051 | Messerly et al. | Jun 2000 | A |
6076088 | Paik et al. | Jun 2000 | A |
6078914 | Redfern | Jun 2000 | A |
6081750 | Hoffberg et al. | Jun 2000 | A |
6081774 | de Hita et al. | Jun 2000 | A |
6088731 | Kiraly et al. | Jul 2000 | A |
6094649 | Bowen et al. | Jul 2000 | A |
6105865 | Hardesty | Aug 2000 | A |
6108627 | Sabourin | Aug 2000 | A |
6111562 | Downs et al. | Aug 2000 | A |
6119101 | Peckover | Sep 2000 | A |
6122616 | Henton | Sep 2000 | A |
6125356 | Brockman et al. | Sep 2000 | A |
6144938 | Surace et al. | Nov 2000 | A |
6173261 | Arai et al. | Jan 2001 | B1 |
6173279 | Levin et al. | Jan 2001 | B1 |
6188967 | Kurtzberg et al. | Feb 2001 | B1 |
6188999 | Moody | Feb 2001 | B1 |
6195641 | Loring et al. | Feb 2001 | B1 |
6205456 | Nakao | Mar 2001 | B1 |
6208971 | Bellegarda et al. | Mar 2001 | B1 |
6233559 | Balakrishnan | May 2001 | B1 |
6233578 | Machihara et al. | May 2001 | B1 |
6246981 | Papineni et al. | Jun 2001 | B1 |
6260024 | Shkedy | Jul 2001 | B1 |
6266637 | Donovan et al. | Jul 2001 | B1 |
6275824 | O'Flaherty et al. | Aug 2001 | B1 |
6285786 | Seni et al. | Sep 2001 | B1 |
6297818 | Ulrich et al. | Oct 2001 | B1 |
6308149 | Gaussier et al. | Oct 2001 | B1 |
6311189 | deVries et al. | Oct 2001 | B1 |
6317594 | Gossman et al. | Nov 2001 | B1 |
6317707 | Bangalore et al. | Nov 2001 | B1 |
6317831 | King | Nov 2001 | B1 |
6321092 | Fitch et al. | Nov 2001 | B1 |
6334103 | Surace et al. | Dec 2001 | B1 |
6356854 | Schubert et al. | Mar 2002 | B1 |
6356905 | Gershman et al. | Mar 2002 | B1 |
6366883 | Campbell et al. | Apr 2002 | B1 |
6366884 | Bellegarda et al. | Apr 2002 | B1 |
6385662 | Moon et al. | May 2002 | B1 |
6421672 | McAllister et al. | Jul 2002 | B1 |
6434524 | Weber | Aug 2002 | B1 |
6446076 | Burkey et al. | Sep 2002 | B1 |
6449620 | Draper et al. | Sep 2002 | B1 |
6453292 | Ramaswamy et al. | Sep 2002 | B2 |
6460029 | Fries et al. | Oct 2002 | B1 |
6466654 | Cooper et al. | Oct 2002 | B1 |
6469712 | Hilpert, Jr. et al. | Oct 2002 | B1 |
6477488 | Bellegarda | Nov 2002 | B1 |
6487534 | Thelen et al. | Nov 2002 | B1 |
6499013 | Weber | Dec 2002 | B1 |
6501937 | Ho et al. | Dec 2002 | B1 |
6505158 | Conkie | Jan 2003 | B1 |
6505175 | Silverman et al. | Jan 2003 | B1 |
6505183 | Loofbourrow et al. | Jan 2003 | B1 |
6510417 | Woods et al. | Jan 2003 | B1 |
6513063 | Julia et al. | Jan 2003 | B1 |
6523061 | Halverson et al. | Feb 2003 | B1 |
6523172 | Martinez-Guerra et al. | Feb 2003 | B1 |
6526382 | Yuschik | Feb 2003 | B1 |
6526395 | Morris | Feb 2003 | B1 |
6532444 | Weber | Mar 2003 | B1 |
6532446 | King | Mar 2003 | B1 |
6546388 | Edlund et al. | Apr 2003 | B1 |
6553344 | Bellegarda et al. | Apr 2003 | B2 |
6556983 | Altschuler et al. | Apr 2003 | B1 |
6584464 | Warthen | Jun 2003 | B1 |
6598039 | Livowsky | Jul 2003 | B1 |
6601026 | Appelt et al. | Jul 2003 | B2 |
6601234 | Bowman-Amuah | Jul 2003 | B1 |
6604059 | Strubbe et al. | Aug 2003 | B2 |
6615172 | Bennett et al. | Sep 2003 | B1 |
6615175 | Gazdzinski | Sep 2003 | B1 |
6615220 | Austin et al. | Sep 2003 | B1 |
6625583 | Silverman et al. | Sep 2003 | B1 |
6631346 | Karaorman et al. | Oct 2003 | B1 |
6633846 | Bennett et al. | Oct 2003 | B1 |
6647260 | Dusse et al. | Nov 2003 | B2 |
6650735 | Burton et al. | Nov 2003 | B2 |
6654740 | Tokuda et al. | Nov 2003 | B2 |
6665639 | Mozer et al. | Dec 2003 | B2 |
6665640 | Bennett et al. | Dec 2003 | B1 |
6665641 | Coorman et al. | Dec 2003 | B1 |
6684187 | Conkie | Jan 2004 | B1 |
6691064 | Vroman | Feb 2004 | B2 |
6691111 | Lazaridis et al. | Feb 2004 | B2 |
6691151 | Cheyer et al. | Feb 2004 | B1 |
6697780 | Beutnagel et al. | Feb 2004 | B1 |
6697824 | Bowman-Amuah | Feb 2004 | B1 |
6701294 | Ball et al. | Mar 2004 | B1 |
6711585 | Copperman et al. | Mar 2004 | B1 |
6718324 | Edlund et al. | Apr 2004 | B2 |
6721728 | McGreevy | Apr 2004 | B2 |
6735632 | Kiraly et al. | May 2004 | B1 |
6742021 | Halverson et al. | May 2004 | B1 |
6757362 | Cooper et al. | Jun 2004 | B1 |
6757718 | Halverson et al. | Jun 2004 | B1 |
6766320 | Want et al. | Jul 2004 | B1 |
6771982 | Toupin | Aug 2004 | B1 |
6778951 | Contractor | Aug 2004 | B1 |
6778952 | Bellegarda | Aug 2004 | B2 |
6778962 | Kasai et al. | Aug 2004 | B1 |
6778970 | Au | Aug 2004 | B2 |
6792082 | Levine | Sep 2004 | B1 |
6807574 | Partovi et al. | Oct 2004 | B1 |
6810379 | Vermeulen et al. | Oct 2004 | B1 |
6813491 | McKinney | Nov 2004 | B1 |
6829603 | Chai et al. | Dec 2004 | B1 |
6832194 | Mozer et al. | Dec 2004 | B1 |
6842767 | Partovi et al. | Jan 2005 | B1 |
6847966 | Sommer et al. | Jan 2005 | B1 |
6847979 | Allemang et al. | Jan 2005 | B2 |
6851115 | Cheyer et al. | Feb 2005 | B1 |
6859931 | Cheyer et al. | Feb 2005 | B1 |
6895380 | Sepe, Jr. | May 2005 | B2 |
6895558 | Loveland | May 2005 | B1 |
6901399 | Corston et al. | May 2005 | B1 |
6912499 | Sabourin et al. | Jun 2005 | B1 |
6924828 | Hirsch | Aug 2005 | B1 |
6928614 | Everhart | Aug 2005 | B1 |
6931384 | Horvitz et al. | Aug 2005 | B1 |
6937975 | Elworthy | Aug 2005 | B1 |
6937986 | Denenberg et al. | Aug 2005 | B2 |
6964023 | Maes et al. | Nov 2005 | B2 |
6978127 | Bulthuis et al. | Dec 2005 | B1 |
6980949 | Ford | Dec 2005 | B2 |
6980955 | Okutani et al. | Dec 2005 | B2 |
6985865 | Packingham et al. | Jan 2006 | B1 |
6988071 | Gazdzinski | Jan 2006 | B1 |
6996531 | Korall et al. | Feb 2006 | B2 |
6999927 | Mozer et al. | Feb 2006 | B2 |
7020685 | Chen et al. | Mar 2006 | B1 |
7024366 | Deyoe et al. | Apr 2006 | B1 |
7027974 | Busch et al. | Apr 2006 | B1 |
7036128 | Julia et al. | Apr 2006 | B1 |
7050977 | Bennett | May 2006 | B1 |
7058569 | Coorman et al. | Jun 2006 | B2 |
7062428 | Hogenhout et al. | Jun 2006 | B2 |
7069560 | Cheyer et al. | Jun 2006 | B1 |
7092887 | Mozer et al. | Aug 2006 | B2 |
7092928 | Elad et al. | Aug 2006 | B1 |
7093693 | Gazdzinski | Aug 2006 | B1 |
7127046 | Smith et al. | Oct 2006 | B1 |
7127403 | Saylor et al. | Oct 2006 | B1 |
7136710 | Hoffberg et al. | Nov 2006 | B1 |
7137126 | Coffman et al. | Nov 2006 | B1 |
7139714 | Bennett et al. | Nov 2006 | B2 |
7139722 | Perrella et al. | Nov 2006 | B2 |
7152070 | Musick et al. | Dec 2006 | B1 |
7177798 | Hsu et al. | Feb 2007 | B2 |
7197460 | Gupta et al. | Mar 2007 | B1 |
7200559 | Wang | Apr 2007 | B2 |
7203646 | Bennett | Apr 2007 | B2 |
7216073 | Lavi et al. | May 2007 | B2 |
7216080 | Tsiao et al. | May 2007 | B2 |
7225125 | Bennett et al. | May 2007 | B2 |
7233790 | Kjellberg et al. | Jun 2007 | B2 |
7233904 | Luisi | Jun 2007 | B2 |
7266496 | Wang et al. | Sep 2007 | B2 |
7277854 | Bennett et al. | Oct 2007 | B2 |
7290039 | Lisitsa et al. | Oct 2007 | B1 |
7299033 | Kjellberg et al. | Nov 2007 | B2 |
7310600 | Garner et al. | Dec 2007 | B1 |
7324947 | Jordan et al. | Jan 2008 | B2 |
7349953 | Lisitsa et al. | Mar 2008 | B2 |
7376556 | Bennett | May 2008 | B2 |
7376645 | Bernard | May 2008 | B2 |
7379874 | Schmid et al. | May 2008 | B2 |
7386449 | Sun et al. | Jun 2008 | B2 |
7389224 | Elworthy | Jun 2008 | B1 |
7392185 | Bennett | Jun 2008 | B2 |
7398209 | Kennewick et al. | Jul 2008 | B2 |
7403938 | Harrison et al. | Jul 2008 | B2 |
7409337 | Potter et al. | Aug 2008 | B1 |
7415100 | Cooper et al. | Aug 2008 | B2 |
7418392 | Mozer et al. | Aug 2008 | B1 |
7426467 | Nashida et al. | Sep 2008 | B2 |
7427024 | Gazdzinski et al. | Sep 2008 | B1 |
7447635 | Konopka et al. | Nov 2008 | B1 |
7454351 | Jeschke et al. | Nov 2008 | B2 |
7467087 | Gillick et al. | Dec 2008 | B1 |
7475010 | Chao | Jan 2009 | B2 |
7483894 | Cao | Jan 2009 | B2 |
7487089 | Mozer | Feb 2009 | B2 |
7496498 | Chu et al. | Feb 2009 | B2 |
7496512 | Zhao et al. | Feb 2009 | B2 |
7502738 | Kennewick et al. | Mar 2009 | B2 |
7508373 | Lin et al. | Mar 2009 | B2 |
7522927 | Fitch et al. | Apr 2009 | B2 |
7523108 | Cao | Apr 2009 | B2 |
7526466 | Au | Apr 2009 | B2 |
7529671 | Rockenbeck et al. | May 2009 | B2 |
7529676 | Koyama | May 2009 | B2 |
7536565 | Girish et al. | May 2009 | B2 |
7538685 | Cooper et al. | May 2009 | B1 |
7539656 | Fratkina et al. | May 2009 | B2 |
7546382 | Healey et al. | Jun 2009 | B2 |
7548895 | Pulsipher | Jun 2009 | B2 |
7552055 | Lecoeuche | Jun 2009 | B2 |
7555431 | Bennett | Jun 2009 | B2 |
7558730 | Davis et al. | Jul 2009 | B2 |
7571106 | Cao et al. | Aug 2009 | B2 |
7599918 | Shen et al. | Oct 2009 | B2 |
7620549 | Di Cristo et al. | Nov 2009 | B2 |
7624007 | Bennett | Nov 2009 | B2 |
7634409 | Kennewick et al. | Dec 2009 | B2 |
7636657 | Ju et al. | Dec 2009 | B2 |
7640160 | Di Cristo et al. | Dec 2009 | B2 |
7647225 | Bennett et al. | Jan 2010 | B2 |
7657424 | Bennett | Feb 2010 | B2 |
7672841 | Bennett | Mar 2010 | B2 |
7676026 | Baxter, Jr. | Mar 2010 | B1 |
7684985 | Dominach et al. | Mar 2010 | B2 |
7693715 | Hwang et al. | Apr 2010 | B2 |
7693720 | Kennewick et al. | Apr 2010 | B2 |
7698131 | Bennett | Apr 2010 | B2 |
7702500 | Blaedow | Apr 2010 | B2 |
7702508 | Bennett | Apr 2010 | B2 |
7707027 | Balchandran et al. | Apr 2010 | B2 |
7707032 | Wang et al. | Apr 2010 | B2 |
7707267 | Lisitsa et al. | Apr 2010 | B2 |
7711565 | Gazdzinski | May 2010 | B1 |
7711672 | Au | May 2010 | B2 |
7716056 | Weng et al. | May 2010 | B2 |
7720674 | Kaiser et al. | May 2010 | B2 |
7720683 | Vermeulen et al. | May 2010 | B1 |
7725307 | Bennett | May 2010 | B2 |
7725318 | Gavalda et al. | May 2010 | B2 |
7725320 | Bennett | May 2010 | B2 |
7725321 | Bennett | May 2010 | B2 |
7729904 | Bennett | Jun 2010 | B2 |
7729916 | Coffman et al. | Jun 2010 | B2 |
7734461 | Kwak et al. | Jun 2010 | B2 |
7747616 | Yamada et al. | Jun 2010 | B2 |
7752152 | Paek et al. | Jul 2010 | B2 |
7756868 | Lee | Jul 2010 | B2 |
7774204 | Mozer et al. | Aug 2010 | B2 |
7783486 | Rosser et al. | Aug 2010 | B2 |
7801729 | Mozer | Sep 2010 | B2 |
7809570 | Kennewick et al. | Oct 2010 | B2 |
7809610 | Cao | Oct 2010 | B2 |
7818176 | Freeman et al. | Oct 2010 | B2 |
7822608 | Cross, Jr. et al. | Oct 2010 | B2 |
7826945 | Zhang et al. | Nov 2010 | B2 |
7831426 | Bennett | Nov 2010 | B2 |
7840400 | Lavi et al. | Nov 2010 | B2 |
7840447 | Kleinrock et al. | Nov 2010 | B2 |
7853574 | Kraenzel et al. | Dec 2010 | B2 |
7873519 | Bennett | Jan 2011 | B2 |
7873654 | Bernard | Jan 2011 | B2 |
7881936 | Longé et al. | Feb 2011 | B2 |
7890652 | Bull et al. | Feb 2011 | B2 |
7912702 | Bennett | Mar 2011 | B2 |
7917367 | Di Cristo et al. | Mar 2011 | B2 |
7917497 | Harrison et al. | Mar 2011 | B2 |
7920678 | Cooper et al. | Apr 2011 | B2 |
7925525 | Chin | Apr 2011 | B2 |
7930168 | Weng et al. | Apr 2011 | B2 |
7949529 | Weider et al. | May 2011 | B2 |
7949534 | Davis et al. | May 2011 | B2 |
7974844 | Sumita | Jul 2011 | B2 |
7974972 | Cao | Jul 2011 | B2 |
7983915 | Knight et al. | Jul 2011 | B2 |
7983917 | Kennewick et al. | Jul 2011 | B2 |
7983997 | Allen et al. | Jul 2011 | B2 |
7986431 | Emori et al. | Jul 2011 | B2 |
7987151 | Schott et al. | Jul 2011 | B2 |
7996228 | Miller et al. | Aug 2011 | B2 |
8000453 | Cooper et al. | Aug 2011 | B2 |
8005679 | Jordan et al. | Aug 2011 | B2 |
8015006 | Kennewick et al. | Sep 2011 | B2 |
8024195 | Mozer et al. | Sep 2011 | B2 |
8036901 | Mozer | Oct 2011 | B2 |
8041570 | Mirkovic et al. | Oct 2011 | B2 |
8041611 | Kleinrock et al. | Oct 2011 | B2 |
8055708 | Chitsaz et al. | Nov 2011 | B2 |
8065155 | Gazdzinski | Nov 2011 | B1 |
8065156 | Gazdzinski | Nov 2011 | B2 |
8069046 | Kennewick et al. | Nov 2011 | B2 |
8073681 | Baldwin et al. | Dec 2011 | B2 |
8078473 | Gazdzinski | Dec 2011 | B1 |
8082153 | Coffman et al. | Dec 2011 | B2 |
8095364 | Longé et al. | Jan 2012 | B2 |
8099289 | Mozer et al. | Jan 2012 | B2 |
8107401 | John et al. | Jan 2012 | B2 |
8112275 | Kennewick et al. | Feb 2012 | B2 |
8112280 | Lu | Feb 2012 | B2 |
8117037 | Gazdzinski | Feb 2012 | B2 |
8131557 | Davis et al. | Mar 2012 | B2 |
8140335 | Kennewick et al. | Mar 2012 | B2 |
8165321 | Paquier et al. | Apr 2012 | B2 |
8165886 | Gagnon et al. | Apr 2012 | B1 |
8166019 | Lee et al. | Apr 2012 | B1 |
8190359 | Bourne | May 2012 | B2 |
8195467 | Mozer et al. | Jun 2012 | B2 |
8204238 | Mozer | Jun 2012 | B2 |
8205788 | Gazdzinski et al. | Jun 2012 | B1 |
8219407 | Roy et al. | Jul 2012 | B1 |
8285551 | Gazdzinski | Oct 2012 | B2 |
8285553 | Gazdzinski | Oct 2012 | B2 |
8290778 | Gazdzinski | Oct 2012 | B2 |
8290781 | Gazdzinski | Oct 2012 | B2 |
8296146 | Gazdzinski | Oct 2012 | B2 |
8296153 | Gazdzinski | Oct 2012 | B2 |
8301456 | Gazdzinski | Oct 2012 | B2 |
8311834 | Gazdzinski | Nov 2012 | B1 |
8370158 | Gazdzinski | Feb 2013 | B2 |
8371503 | Gazdzinski | Feb 2013 | B2 |
8374871 | Ehsani et al. | Feb 2013 | B2 |
8381107 | Rottler et al. | Feb 2013 | B2 |
8428758 | Naik et al. | Apr 2013 | B2 |
8447612 | Gazdzinski | May 2013 | B2 |
20010047264 | Roundtree | Nov 2001 | A1 |
20020032564 | Ehsani et al. | Mar 2002 | A1 |
20020046025 | Hain | Apr 2002 | A1 |
20020069063 | Buchner et al. | Jun 2002 | A1 |
20020077817 | Atal | Jun 2002 | A1 |
20020103641 | Kuo et al. | Aug 2002 | A1 |
20020109709 | Sagar | Aug 2002 | A1 |
20020164000 | Cohen et al. | Nov 2002 | A1 |
20020198714 | Zhou | Dec 2002 | A1 |
20030030645 | Ribak | Feb 2003 | A1 |
20040036715 | Warren | Feb 2004 | A1 |
20040120476 | Harrison | Jun 2004 | A1 |
20040135701 | Yasuda et al. | Jul 2004 | A1 |
20040236778 | Junqua et al. | Nov 2004 | A1 |
20050015751 | Grassens | Jan 2005 | A1 |
20050055403 | Brittan | Mar 2005 | A1 |
20050071332 | Ortega et al. | Mar 2005 | A1 |
20050080625 | Bennett et al. | Apr 2005 | A1 |
20050091118 | Fano | Apr 2005 | A1 |
20050102614 | Brockett et al. | May 2005 | A1 |
20050108001 | Aarskog | May 2005 | A1 |
20050114124 | Liu et al. | May 2005 | A1 |
20050119897 | Bennett et al. | Jun 2005 | A1 |
20050143972 | Gopalakrishnan et al. | Jun 2005 | A1 |
20050165607 | DiFabbrizio et al. | Jul 2005 | A1 |
20050182629 | Coorman et al. | Aug 2005 | A1 |
20050196733 | Budra et al. | Sep 2005 | A1 |
20050251572 | McMahan | Nov 2005 | A1 |
20050288936 | Busayapongchai et al. | Dec 2005 | A1 |
20060018492 | Chiu et al. | Jan 2006 | A1 |
20060050865 | Kortum et al. | Mar 2006 | A1 |
20060095848 | Naik | May 2006 | A1 |
20060106592 | Brockett et al. | May 2006 | A1 |
20060106594 | Brockett et al. | May 2006 | A1 |
20060106595 | Brockett et al. | May 2006 | A1 |
20060117002 | Swen | Jun 2006 | A1 |
20060122834 | Bennett | Jun 2006 | A1 |
20060143007 | Koh et al. | Jun 2006 | A1 |
20060153040 | Girish et al. | Jul 2006 | A1 |
20060229802 | Vertelney et al. | Oct 2006 | A1 |
20070050191 | Weider et al. | Mar 2007 | A1 |
20070055529 | Kanevsky et al. | Mar 2007 | A1 |
20070058832 | Hug et al. | Mar 2007 | A1 |
20070080936 | Tsuk et al. | Apr 2007 | A1 |
20070088556 | Andrew | Apr 2007 | A1 |
20070100790 | Cheyer et al. | May 2007 | A1 |
20070100883 | Rose et al. | May 2007 | A1 |
20070106674 | Agrawal et al. | May 2007 | A1 |
20070118377 | Badino et al. | May 2007 | A1 |
20070135949 | Snover et al. | Jun 2007 | A1 |
20070174188 | Fish | Jul 2007 | A1 |
20070185917 | Prahlad et al. | Aug 2007 | A1 |
20070192027 | Lee et al. | Aug 2007 | A1 |
20070255979 | Deily et al. | Nov 2007 | A1 |
20070261080 | Saetti | Nov 2007 | A1 |
20070282595 | Tunning et al. | Dec 2007 | A1 |
20080015864 | Ross et al. | Jan 2008 | A1 |
20080021708 | Bennett et al. | Jan 2008 | A1 |
20080034032 | Healey et al. | Feb 2008 | A1 |
20080052063 | Bennett et al. | Feb 2008 | A1 |
20080109402 | Wang et al. | May 2008 | A1 |
20080120112 | Jordan et al. | May 2008 | A1 |
20080129520 | Lee | Jun 2008 | A1 |
20080140657 | Azvine et al. | Jun 2008 | A1 |
20080221903 | Kanevsky et al. | Sep 2008 | A1 |
20080228496 | Yu et al. | Sep 2008 | A1 |
20080229185 | Lynch | Sep 2008 | A1 |
20080247519 | Abella et al. | Oct 2008 | A1 |
20080249770 | Kim et al. | Oct 2008 | A1 |
20080300878 | Bennett | Dec 2008 | A1 |
20080319763 | Di Fabbrizio et al. | Dec 2008 | A1 |
20090006100 | Badger et al. | Jan 2009 | A1 |
20090006343 | Platt et al. | Jan 2009 | A1 |
20090012748 | Beish | Jan 2009 | A1 |
20090030800 | Grois | Jan 2009 | A1 |
20090055179 | Cho et al. | Feb 2009 | A1 |
20090058823 | Kocienda | Mar 2009 | A1 |
20090063974 | Bull et al. | Mar 2009 | A1 |
20090064031 | Bull et al. | Mar 2009 | A1 |
20090076796 | Daraselia | Mar 2009 | A1 |
20090077165 | Rhodes et al. | Mar 2009 | A1 |
20090083034 | Hernandez et al. | Mar 2009 | A1 |
20090100049 | Cao | Apr 2009 | A1 |
20090112677 | Rhett | Apr 2009 | A1 |
20090150156 | Kennewick et al. | Jun 2009 | A1 |
20090157401 | Bennett | Jun 2009 | A1 |
20090164441 | Cheyer | Jun 2009 | A1 |
20090171664 | Kennewick et al. | Jul 2009 | A1 |
20090172542 | Girish et al. | Jul 2009 | A1 |
20090182445 | Girish et al. | Jul 2009 | A1 |
20090210232 | Sanchez | Aug 2009 | A1 |
20090248420 | Basir | Oct 2009 | A1 |
20090287583 | Holmes | Nov 2009 | A1 |
20090290718 | Kahn et al. | Nov 2009 | A1 |
20090296552 | Hicks et al. | Dec 2009 | A1 |
20090299745 | Kennewick et al. | Dec 2009 | A1 |
20090299849 | Cao et al. | Dec 2009 | A1 |
20090307162 | Bui et al. | Dec 2009 | A1 |
20090313544 | Wood et al. | Dec 2009 | A1 |
20090313564 | Rottler et al. | Dec 2009 | A1 |
20100005081 | Bennett | Jan 2010 | A1 |
20100023320 | Di Cristo et al. | Jan 2010 | A1 |
20100036660 | Bennett | Feb 2010 | A1 |
20100042400 | Block et al. | Feb 2010 | A1 |
20100088020 | Sano et al. | Apr 2010 | A1 |
20100138215 | Williams | Jun 2010 | A1 |
20100145700 | Kennewick et al. | Jun 2010 | A1 |
20100169075 | Raffa et al. | Jul 2010 | A1 |
20100169097 | Nachman et al. | Jul 2010 | A1 |
20100199215 | Seymour et al. | Aug 2010 | A1 |
20100204986 | Kennewick et al. | Aug 2010 | A1 |
20100211199 | Naik et al. | Aug 2010 | A1 |
20100217604 | Baldwin et al. | Aug 2010 | A1 |
20100228540 | Bennett | Sep 2010 | A1 |
20100235341 | Bennett | Sep 2010 | A1 |
20100257160 | Cao | Oct 2010 | A1 |
20100262599 | Nitz | Oct 2010 | A1 |
20100277579 | Cho et al. | Nov 2010 | A1 |
20100280983 | Cho et al. | Nov 2010 | A1 |
20100286985 | Kennewick et al. | Nov 2010 | A1 |
20100299142 | Freeman et al. | Nov 2010 | A1 |
20100312547 | van Os et al. | Dec 2010 | A1 |
20100318576 | Kim | Dec 2010 | A1 |
20100332235 | David | Dec 2010 | A1 |
20100332348 | Cao | Dec 2010 | A1 |
20110035434 | Lockwood | Feb 2011 | A1 |
20110047072 | Ciurea | Feb 2011 | A1 |
20110060807 | Martin et al. | Mar 2011 | A1 |
20110082688 | Kim et al. | Apr 2011 | A1 |
20110112827 | Kennewick et al. | May 2011 | A1 |
20110112921 | Kennewick et al. | May 2011 | A1 |
20110119049 | Ylonen | May 2011 | A1 |
20110125540 | Jang et al. | May 2011 | A1 |
20110130958 | Stahl et al. | Jun 2011 | A1 |
20110131036 | Di Cristo et al. | Jun 2011 | A1 |
20110131045 | Cristo et al. | Jun 2011 | A1 |
20110143811 | Rodriguez | Jun 2011 | A1 |
20110144901 | Wang | Jun 2011 | A1 |
20110144999 | Jang et al. | Jun 2011 | A1 |
20110161076 | Davis et al. | Jun 2011 | A1 |
20110161309 | Lung et al. | Jun 2011 | A1 |
20110175810 | Markovic et al. | Jul 2011 | A1 |
20110184730 | LeBeau et al. | Jul 2011 | A1 |
20110218855 | Cao et al. | Sep 2011 | A1 |
20110231182 | Weider et al. | Sep 2011 | A1 |
20110231188 | Kennewick et al. | Sep 2011 | A1 |
20110264643 | Cao | Oct 2011 | A1 |
20110279368 | Klein et al. | Nov 2011 | A1 |
20110306426 | Novak et al. | Dec 2011 | A1 |
20120002820 | Leichter | Jan 2012 | A1 |
20120016678 | Gruber et al. | Jan 2012 | A1 |
20120020490 | Leichter | Jan 2012 | A1 |
20120022787 | LeBeau et al. | Jan 2012 | A1 |
20120022857 | Baldwin et al. | Jan 2012 | A1 |
20120022860 | Lloyd et al. | Jan 2012 | A1 |
20120022868 | LeBeau et al. | Jan 2012 | A1 |
20120022869 | Lloyd et al. | Jan 2012 | A1 |
20120022870 | Kristjansson et al. | Jan 2012 | A1 |
20120022874 | Lloyd et al. | Jan 2012 | A1 |
20120022876 | LeBeau et al. | Jan 2012 | A1 |
20120023088 | Cheng et al. | Jan 2012 | A1 |
20120034904 | LeBeau et al. | Feb 2012 | A1 |
20120035908 | LeBeau et al. | Feb 2012 | A1 |
20120035924 | Jitkoff et al. | Feb 2012 | A1 |
20120035931 | LeBeau et al. | Feb 2012 | A1 |
20120035932 | Jitkoff et al. | Feb 2012 | A1 |
20120042343 | Laligand et al. | Feb 2012 | A1 |
20120137367 | Dupont et al. | May 2012 | A1 |
20120173464 | Tur et al. | Jul 2012 | A1 |
20120265528 | Gruber et al. | Oct 2012 | A1 |
20120271676 | Aravamudan et al. | Oct 2012 | A1 |
20120311583 | Gruber et al. | Dec 2012 | A1 |
20130110518 | Gruber et al. | May 2013 | A1 |
20130110520 | Cheyer et al. | May 2013 | A1 |
Number | Date | Country |
---|---|---|
681573 | Apr 1993 | CH |
3837590 | May 1990 | DE |
198 41 541 | Dec 2007 | DE |
0138061 | Sep 1984 | EP |
0138061 | Apr 1985 | EP |
0218859 | Apr 1987 | EP |
0262938 | Apr 1988 | EP |
0293259 | Nov 1988 | EP |
0299572 | Jan 1989 | EP |
0313975 | May 1989 | EP |
0314908 | May 1989 | EP |
0327408 | Aug 1989 | EP |
0389271 | Sep 1990 | EP |
0411675 | Feb 1991 | EP |
0559349 | Sep 1993 | EP |
0559349 | Sep 1993 | EP |
0570660 | Nov 1993 | EP |
0863453 | Sep 1998 | EP |
1245023 | Oct 2002 | EP |
1 818 786 | Aug 2007 | EP |
2 109 295 | Oct 2009 | EP |
2293667 | Apr 1996 | GB |
06 019965 | Jan 1994 | JP |
2001 125896 | May 2001 | JP |
2002 024212 | Jan 2002 | JP |
2003 517158 | May 2003 | JP |
2009 036999 | Feb 2009 | JP |
10-2007-0057496 | Jun 2007 | KR |
10-0776800 | Nov 2007 | KR |
10-2008-001227 | Feb 2008 | KR |
10-0810500 | Mar 2008 | KR |
10 2008 109322 | Dec 2008 | KR |
10 2009 086805 | Aug 2009 | KR |
10-0920267 | Oct 2009 | KR |
10-2010-0032792 | Apr 2010 | KR |
10 2011 0113414 | Oct 2011 | KR |
WO 9502221 | Jan 1995 | WO |
WO 9726612 | Jul 1997 | WO |
WO 9841956 | Sep 1998 | WO |
WO 9901834 | Jan 1999 | WO |
WO 9908238 | Feb 1999 | WO |
WO 9956227 | Nov 1999 | WO |
WO 0060435 | Oct 2000 | WO |
WO 0060435 | Oct 2000 | WO |
WO 02073603 | Sep 2002 | WO |
WO 2006101649 | Sep 2006 | WO |
WO 2006129967 | Dec 2006 | WO |
WO 2008085742 | Jul 2008 | WO |
WO 2008109835 | Sep 2008 | WO |
WO 2011088053 | Jul 2011 | WO |
Entry |
---|
Acero, A., et al., “Environmental Robustness in Automatic Speech Recognition,” International Conference on Acoustics, Speech, and Signal Processing (ICASSP'90), Apr. 3-6, 1990, 4 pages. |
Acero, A., et al., “Robust Speech Recognition by Normalization of The Acoustic Space,” International Conference on Acoustics, Speech, and Signal Processing, 1991, 4 pages. |
Ahlbom, G., et al., “Modeling Spectral Speech Transitions Using Temporal Decomposition Techniques,” IEEE International Conference of Acoustics, Speech, and Signal Processing (ICASSP'87), Apr. 1987, vol. 12, 4 pages. |
Aikawa, K., “Speech Recognition Using Time-Warping Neural Networks,” Proceedings of the 1991 IEEE Workshop on Neural Networks for Signal Processing, Sep. 30-Oct. 1, 1991, 10 pages. |
Anastasakos, A., et al., “Duration Modeling in Large Vocabulary Speech Recognition,” International Conference on Acoustics, Speech, and Signal Processing (ICASSP'95), May 9-12, 1995, 4 pages. |
Anderson, R. H., “Syntax-Directed Recognition of Hand-Printed Two-Dimensional Mathematics,” In Proceedings of Symposium on Interactive Systems for Experimental Applied Mathematics: Proceedings of the Association for Computing Machinery Inc. Symposium, © 1967, 12 pages. |
Ansari, R., et al., “Pitch Modification of Speech using a Low-Sensitivity Inverse Filter Approach,” IEEE Signal Processing Letters, vol. 5, No. 3, Mar. 1998, 3 pages. |
Anthony, N. J., et al., “Supervised Adaption for Signature Verification System,” Jun. 1, 1978, IBM Technical Disclosure, 3 pages. |
Apple Computer, “Guide Maker User's Guide,” © Apple Computer, Inc., Apr. 27, 1994, 8 pages. |
Apple Computer, “Introduction to Apple Guide,” © Apple Computer, Inc., Apr. 28, 1994, 20 pages. |
Asanović, K., et al., “Experimental Determination of Precision Requirements for Back-Propagation Training of Artificial Neural Networks,” In Proceedings of the 2nd International Conference of Microelectronics for Neural Networks, 1991, www.ICSI.Berkeley.EDU, 7 pages. |
Atal, B. S., “Efficient Coding of LPC Parameters by Temporal Decomposition,” IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'83), Apr. 1983, 4 pages. |
Bahl, L. R., et al., “Acoustic Markov Models Used in the Tangora Speech Recognition System,” In Proceeding of International Conference on Acoustics, Speech, and Signal Processing (ICASSP'88), Apr. 11-14, 1988, vol. 1, 4 pages. |
Bahl, L. R., et al., “A Maximum Likelihood Approach to Continuous Speech Recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-5, No. 2, Mar. 1983, 13 pages. |
Bahl, L. R., et al., “A Tree-Based Statistical Language Model for Natural Language Speech Recognition,” IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 37, Issue 7, Jul. 1989, 8 pages. |
Bahl, L. R., et al., “Large Vocabulary Natural Language Continuous Speech Recognition,” In Proceedings of 1989 International Conference on Acoustics, Speech, and Signal Processing, May 23-26, 1989, vol. 1, 6 pages. |
Bahl, L. R., et al., “Multonic Markov Word Models for Large Vocabulary Continuous Speech Recognition,” IEEE Transactions on Speech and Audio Processing, vol. 1, No. 3, Jul. 1993, 11 pages. |
Bahl, L. R., et al., “Speech Recognition with Continuous-Parameter Hidden Markov Models,” In Proceeding of International Conference on Acoustics, Speech, and Signal Processing (ICASSP'88), Apr. 11-14, 1988, vol. 1, 8 pages. |
Banbrook, M., “Nonlinear Analysis of Speech from a Synthesis Perspective,” A thesis submitted for the degree of Doctor of Philosophy, The University of Edinburgh, Oct. 15, 1996, 35 pages. |
Belaid, A., et al., “A Syntactic Approach for Handwritten Mathematical Formula Recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. PAMI-6, No. 1, Jan. 1984, 7 pages. |
Bellegarda, E. J., et al., “On-Line Handwriting Recognition Using Statistical Mixtures,” Advances in Handwriting and Drawings: A Multidisciplinary Approach, Europia, 6th International IGS Conference on Handwriting and Drawing, Paris-France, Jul. 1993, 11 pages. |
Bellegarda, J. R., “A Latent Semantic Analysis Framework for Large-Span Language Modeling,” 5th European Conference on Speech, Communication and Technology, (EUROSPEECH'97), Sep. 22-25, 1997, 4 pages. |
Bellegarda, J. R., “A Multispan Language Modeling Framework for Large Vocabulary Speech Recognition,” IEEE Transactions on Speech and Audio Processing, vol. 6, No. 5, Sep. 1998, 12 pages. |
Bellegarda, J. R., et al., “A Novel Word Clustering Algorithm Based on Latent Semantic Analysis,” In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'96), vol. 1, 4 pages. |
Bellegarda, J. R., et al., “Experiments Using Data Augmentation for Speaker Adaptation,” International Conference on Acoustics, Speech, and Signal Processing (ICASSP'95), May 9-12, 1995, 4 pages. |
Bellegarda, J. R., “Exploiting Both Local and Global Constraints for Multi-Span Statistical Language Modeling,” Proceeding of the 1998 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'98), vol. 2, May 12-15, 1998, 5 pages. |
Bellegarda, J. R., “Exploiting Latent Semantic Information in Statistical Language Modeling,” In Proceedings of the IEEE, Aug. 2000, vol. 88, No. 8, 18 pages. |
Bellegarda, J. R., “Interaction-Driven Speech Input—A Data-Driven Approach to the Capture of Both Local and Global Language Constraints,” 1992, 7 pages, available at http://old.sigchi.org/bulletin/1998.2/bellegarda.html. |
Bellegarda, J. R., “Large Vocabulary Speech Recognition with Multispan Statistical Language Models,” IEEE Transactions on Speech and Audio Processing, vol. 8, No. 1, Jan. 2000, 9 pages. |
Bellegarda, J. R., et al., “Performance of the IBM Large Vocabulary Continuous Speech Recognition System on the ARPA Wall Street Journal Task,” Signal Processing VII: Theories and Applications, © 1994 European Association for Signal Processing, 4 pages. |
Bellegarda, J. R., et al., “The Metamorphic Algorithm: A Speaker Mapping Approach to Data Augmentation,” IEEE Transactions on Speech and Audio Processing, vol. 2, No. 3, Jul. 1994, 8 pages. |
Black, A. W., et al., “Automatically Clustering Similar Units for Unit Selection in Speech Synthesis,” In Proceedings of Eurospeech 1997, vol. 2, 4 pages. |
Blair, D. C., et al., “An Evaluation of Retrieval Effectiveness for a Full-Text Document-Retrieval System,” Communications of the ACM, vol. 28, No. 3, Mar. 1985, 11 pages. |
Briner, L. L., “Identifying Keywords in Text Data Processing,” In Zelkowitz, Marvin V., ED, Directions and Challenges, 15th Annual Technical Symposium, Jun. 17, 1976, Gaithersburg, Maryland, 7 pages. |
Bulyko, I., et al., “Joint Prosody Prediction and Unit Selection for Concatenative Speech Synthesis,” Electrical Engineering Department, University of Washington, Seattle, 2001, 4 pages. |
Bussey, H. E., et al., “Service Architecture, Prototype Description, and Network Implications of A Personalized Information Grazing Service,” INFOCOM'90, Ninth Annual Joint Conference of the IEEE Computer and Communication Societies, Jun. 3-7, 1990, http://slrohall.com/publications/, 8 pages. |
Buzo, A., et al., “Speech Coding Based Upon Vector Quantization,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. Assp-28, No. 5, Oct. 1980, 13 pages. |
Caminero-Gil, J., et al., “Data-Driven Discourse Modeling for Semantic Interpretation,” In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, May 7-10, 1996, 6 pages. |
Cawley, G. C., “The Application of Neural Networks to Phonetic Modelling,” PhD Thesis, University of Essex, Mar. 1996, 13 pages. |
Chang, S., et al., “A Segment-based Speech Recognition System for Isolated Mandarin Syllables,” Proceedings TENCON '93, IEEE Region 10 conference on Computer, Communication, Control and Power Engineering, Oct. 19-21, 1993, vol. 3, 6 pages. |
Conklin, J., “Hypertext: An Introduction and Survey,” COMPUTER Magazine, Sep. 1987, 25 pages. |
Connolly, F. T., et al., “Fast Algorithms for Complex Matrix Multiplication Using Surrogates,” IEEE Transactions on Acoustics, Speech, and Signal Processing, Jun. 1989, vol. 37, No. 6, 13 pages. |
Deerwester, S., et al., “Indexing by Latent Semantic Analysis,” Journal of the American Society for Information Science, vol. 41, No. 6, Sep. 1990, 19 pages. |
Deller, Jr., J. R., et al., “Discrete-Time Processing of Speech Signals,” © 1987 Prentice Hall, ISBN: 0-02-328301-7, 14 pages. |
Digital Equipment Corporation, “Open VMS Software Overview,” Dec. 1995, software manual, 159 pages. |
Donovan, R. E., “A New Distance Measure for Costing Spectral Discontinuities in Concatenative Speech Synthesisers,” 2001, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.21.6398, 4 pages. |
Frisse, M. E., “Searching for Information in a Hypertext Medical Handbook,” Communications of the ACM, vol. 31, No. 7, Jul. 1988, 8 pages. |
Goldberg, D., et al., “Using Collaborative Filtering to Weave an Information Tapestry,” Communications of the ACM, vol. 35, No. 12, Dec. 1992, 10 pages. |
Gorin, A. L., et al., “On Adaptive Acquisition of Language,” International Conference on Acoustics, Speech, and Signal Processing (ICASSP'90), vol. 1, Apr. 3-6, 1990, 5 pages. |
Gotoh, Y., et al., “Document Space Models Using Latent Semantic Analysis,” In Proceedings of Eurospeech, 1997, 4 pages. |
Gray, R. M., “Vector Quantization,” IEEE ASSP Magazine, Apr. 1984, 26 pages. |
Harris, F. J., “On the Use of Windows for Harmonic Analysis with the Discrete Fourier Transform,” In Proceedings of the IEEE, vol. 66, No. 1, Jan. 1978, 34 pages. |
Helm, R., et al., “Building Visual Language Parsers,” In Proceedings of CHI'91 Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 8 pages. |
Hermansky, H., “Perceptual Linear Predictive (PLP) Analysis of Speech,” Journal of the Acoustical Society of America, vol. 87, No. 4, Apr. 1990, 15 pages. |
Hermansky, H., “Recognition of Speech in Additive and Convolutional Noise Based on Rasta Spectral Processing,” In proceedings of IEEE International Conference on Acoustics, speech, and Signal Processing (ICASSP'93), Apr. 27-30, 1993, 4 pages. |
Hoehfeld, M., et al., “Learning with Limited Numerical Precision Using the Cascade-Correlation Algorithm,” IEEE Transactions on Neural Networks, vol. 3, No. 4, Jul. 1992, 18 pages. |
Holmes, J. N., “Speech Synthesis and Recognition—Stochastic Models for Word Recognition,” Speech Synthesis and Recognition, Published by Chapman & Hall, London, ISBN 0 412 53430 4, © 1998 J. N. Holmes, 7 pages. |
Hon, H.W., et al., “CMU Robust Vocabulary-Independent Speech Recognition System,” IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-91), Apr. 14-17, 1991, 4 pages. |
IBM Technical Disclosure Bulletin, “Speech Editor,” vol. 29, No. 10, Mar. 10, 1987, 3 pages. |
IBM Technical Disclosure Bulletin, “Integrated Audio-Graphics User Interface,” vol. 33, No. 11, Apr. 1991, 4 pages. |
IBM Technical Disclosure Bulletin, “Speech Recognition with Hidden Markov Models of Speech Waveforms,” vol. 34, No. 1, Jun. 1991, 10 pages. |
Iowegian International, “FIR Filter Properties,” dspGuru, Digital Signal Processing Central, http://www.dspguru.com/dsp/taqs/fir/properties, downloaded on Jul. 28, 2010, 6 pages. |
Jacobs, P. S., et al., “Scisor: Extracting Information from On-Line News,” Communications of the ACM, vol. 33, No. 11, Nov. 1990, 10 pages. |
Jelinek, F., “Self-Organized Language Modeling for Speech Recognition,” Readings in Speech Recognition, edited by Alex Waibel and Kai-Fu Lee, May 15, 1990, © 1990 Morgan Kaufmann Publishers, Inc., ISBN: 1-55860-124-4, 63 pages. |
Jennings, A., et al., “A Personal News Service Based on a User Model Neural Network,” IEICE Transactions on Information and Systems, vol. E75-D, No. 2, Mar. 1992, Tokyo, JP, 12 pages. |
Ji, T., et al., “A Method for Chinese Syllables Recognition based upon Sub-syllable Hidden Markov Model,” 1994 International Symposium on Speech, Image Processing and Neural Networks, Apr. 13-16, 1994, Hong Kong, 4 pages. |
Jones, J., “Speech Recognition for Cyclone,” Apple Computer, Inc., E.R.S., Revision 2.9, Sep. 10, 1992, 93 pages. |
Katz, S. M., “Estimation of Probabilities from Sparse Data for the Language Model Component of a Speech Recognizer,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-35, No. 3, Mar. 1987, 3 pages. |
Kitano, H., “PhiDM-Dialog, An Experimental Speech-to-Speech Dialog Translation System,” Jun. 1991 COMPUTER, vol. 24, No. 6, 13 pages. |
Klabbers, E., et al., “Reducing Audible Spectral Discontinuities,” IEEE Transactions on Speech and Audio Processing, vol. 9, No. 1, Jan. 2001, 13 pages. |
Klatt, D. H., “Linguistic Uses of Segmental Duration in English: Acoustic and Perceptual Evidence,” Journal of the Acoustical Society of America, vol. 59, No. 5, May 1976, 16 pages. |
Kominek, J., et al., “Impact of Durational Outlier Removal from Unit Selection Catalogs,” 5th ISCA Speech Synthesis Workshop, Jun. 14-16, 2004, 6 pages. |
Kubala, F., et al., “Speaker Adaptation from a Speaker-Independent Training Corpus,” International Conference on Acoustics, Speech, and Signal Processing (ICASSP'90), Apr. 3-6, 1990, 4 pages. |
Kubala, F., et al., “The Hub and Spoke Paradigm for CSR Evaluation,” Proceedings of the Spoken Language Technology Workshop, Mar. 6-8, 1994, 9 pages. |
Lee, K.F., “Large-Vocabulary Speaker-Independent Continuous Speech Recognition: The Sphinx System,” Apr. 18, 1988, Partial fulfillment of the requirements for the degree of Doctor of Philosophy, Computer Science Department, Carnegie Mellon University, 195 pages. |
Lee, L., et al., “A Real-Time Mandarin Dictation Machine for Chinese Language with Unlimited Texts and Very Large Vocabulary,” International Conference on Acoustics, Speech and Signal Processing, vol. 1, Apr. 3-6, 1990, 5 pages. |
Lee, L., et al., “Golden Mandarin(II)—An Improved Single-Chip Real-Time Mandarin Dictation Machine for Chinese Language with Very Large Vocabulary,” 0-7803-0946-4/93 © 1993 IEEE, 4 pages. |
Lee, L., et al., “Golden Mandarin(II)—An Intelligent Mandarin Dictation Machine for Chinese Character Input with Adaptation/Learning Functions,” International Symposium on Speech, Image Processing and Neural Networks, Apr. 13-16, 1994, Hong Kong, 5 pages. |
Lee, L., et al., “System Description of Golden Mandarin (I) Voice Input for Unlimited Chinese Characters,” International Conference on Computer Processing of Chinese & Oriental Languages, vol. 5, Nos. 3 & 4, Nov. 1991, 16 pages. |
Lin, C.H., et al., “A New Framework for Recognition of Mandarin Syllables With Tones Using Sub-syllabic Units,” IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP-93), Apr. 27-30, 1993, 4 pages. |
Linde, Y., et al., “An Algorithm for Vector Quantizer Design,” IEEE Transactions on Communications, vol. 28, No. 1, Jan. 1980, 12 pages. |
Liu, F.H., et al., “Efficient Joint Compensation of Speech for the Effects of Additive Noise and Linear Filtering,” IEEE International Conference of Acoustics, Speech, and Signal Processing, ICASSP-92, Mar. 23-26, 1992, 4 pages. |
Logan, B., “Mel Frequency Cepstral Coefficients for Music Modeling,” In International Symposium on Music Information Retrieval, 2000, 2 pages. |
Lowerre, B. T., “The HARPY Speech Recognition System,” Doctoral Dissertation, Department of Computer Science, Carnegie Mellon University, Apr. 1976, 20 pages. |
Maghbouleh, A., “An Empirical Comparison of Automatic Decision Tree and Linear Regression Models for Vowel Durations,” Revised version of a paper presented at the Computational Phonology in Speech Technology workshop, 1996 annual meeting of the Association for Computational Linguistics in Santa Cruz, California, 7 pages. |
Markel, J. D., et al., “Linear Prediction of Speech,” Springer-Verlag, Berlin Heidelberg New York 1976, 12 pages. |
Morgan, B., “Business Objects,” (Business Objects for Windows) Business Objects Inc., DBMS Sep. 1992, vol. 5, No. 10, 3 pages. |
Mountford, S. J., et al., “Talking and Listening to Computers,” The Art of Human-Computer Interface Design, Copyright © 1990 Apple Computer, Inc. Addison-Wesley Publishing Company, Inc., 17 pages. |
Murty, K. S. R., et al., “Combining Evidence from Residual Phase and MFCC Features for Speaker Recognition,” IEEE Signal Processing Letters, vol. 13, No. 1, Jan. 2006, 4 pages. |
Murveit, H., et al., “Integrating Natural Language Constraints into HMM-based Speech Recognition,” 1990 International Conference on Acoustics, Speech, and Signal Processing, Apr. 3-6, 1990, 5 pages. |
Nakagawa, S., et al., “Speaker Recognition by Combining MFCC and Phase Information,” IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP), Mar. 14-19, 2010, 4 pages. |
Niesler, T. R., et al., “A Variable-Length Category-Based N-Gram Language Model,” IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'96), vol. 1, May 7-10, 1996, 6 pages. |
Papadimitriou, C. H., et al., “Latent Semantic Indexing: A Probabilistic Analysis,” Nov. 14, 1997, http://citeseerx.ist.psu.edu/messages/downloadsexceeded.html, 21 pages. |
Parsons, T. W., “Voice and Speech Processing,” Linguistics and Technical Fundamentals, Articulatory Phonetics and Phonemics, © 1987 McGraw-Hill, Inc., ISBN: 0-07-0485541-0, 5 pages. |
Parsons, T. W., “Voice and Speech Processing,” Pitch and Formant Estimation, © 1987 McGraw-Hill, Inc., ISBN: 0-07-0485541-0, 15 pages. |
Picone, J., “Continuous Speech Recognition Using Hidden Markov Models,” IEEE ASSP Magazine, vol. 7, No. 3, Jul. 1990, 16 pages. |
Rabiner, L. R., et al., “Fundamentals of Speech Recognition,” © 1993 AT&T, Published by Prentice-Hall, Inc., ISBN: 0-13-285826-6, 17 pages. |
Rabiner, L. R., et al., “Note on the Properties of a Vector Quantizer for LPC Coefficients,” The Bell System Technical Journal, vol. 62, No. 8, Oct. 1983, 9 pages. |
Ratcliffe, M., “ClearAccess 2.0 allows SQL searches off-line,” (Structured Query Language), ClearAcess Corp., MacWeek Nov. 16, 1992, vol. 6, No. 41, 2 pages. |
Remde, J. R., et al., “SuperBook: An Automatic Tool for Information Exploration-Hypertext?,” In Proceedings of Hypertext'87 papers, Nov. 13-15, 1987, 14 pages. |
Reynolds, C. F., “On-Line Reviews: A New Application of the HICOM Conferencing System,” IEE Colloquium on Human Factors in Electronic Mail and Conferencing Systems, Feb. 3, 1989, 4 pages. |
Rigoll, G., “Speaker Adaptation for Large Vocabulary Speech Recognition Systems Using Speaker Markov Models,” International Conference on Acoustics, Speech, and Signal Processing (ICASSP'89), May 23-26, 1989, 4 pages. |
Riley, M. D., “Tree-Based Modelling of Segmental Durations,” Talking Machines Theories, Models, and Designs, 1992 © Elsevier Science Publishers B.V., North-Holland, ISBN: 08-44489115.3, 15 pages. |
Rivoira, S., et al., “Syntax and Semantics in a Word-Sequence Recognition System,” IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'79), Apr. 1979, 5 pages. |
Rosenfeld, R., “A Maximum Entropy Approach to Adaptive Statistical Language Modelling,” Computer Speech and Language, vol. 10, No. 3, Jul. 1996, 25 pages. |
Roszkiewicz, A., “Extending your Apple,” Back Talk—Lip Service, A+ Magazine, The Independent Guide for Apple Computing, vol. 2, No. 2, Feb. 1984, 5 pages. |
Sakoe, H., et al., “Dynamic Programming Algorithm Optimization for Spoken Word Recognition,” IEEE Transactions on Acoustics, Speech, and Signal Processing, Feb. 1978, vol. ASSP-26, No. 1, 8 pages. |
Salton, G., et al., “On the Application of Syntactic Methodologies in Automatic Text Analysis,” Information Processing and Management, vol. 26, No. 1, Great Britain 1990, 22 pages. |
Savoy, J., “Searching Information in Hypertext Systems Using Multiple Sources of Evidence,” International Journal of Man-Machine Studies, vol. 38, No. 6, Jun. 1993, 15 pages. |
Scagliola, C., “Language Models and Search Algorithms for Real-Time Speech Recognition,” International Journal of Man-Machine Studies, vol. 22, No. 5, 1985, 25 pages. |
Schmandt, C., et al., “Augmenting a Window System with Speech Input,” IEEE Computer Society, Computer Aug. 1990, vol. 23, No. 8, 8 pages. |
Schütze, H., “Dimensions of Meaning,” Proceedings of Supercomputing'92 Conference, Nov. 16-20, 1992, 10 pages. |
Sheth, B., et al., “Evolving Agents for Personalized Information Filtering,” In Proceedings of the Ninth Conference on Artificial Intelligence for Applications, Mar. 1-5, 1993, 9 pages. |
Shikano, K., et al., “Speaker Adaptation Through Vector Quantization,” IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP'86), vol. 11, Apr. 1986, 4 pages. |
Sigurdsson, S., et al., “Mel Frequency Cepstral Coefficients: An Evaluation of Robustness of MP3 Encoded Music,” In Proceedings of the 7th International Conference on Music Information Retrieval (ISMIR), 2006, 4 pages. |
Silverman, K. E. A., et al., “Using a Sigmoid Transformation for Improved Modeling of Phoneme Duration,” Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, Mar. 15-19, 1999, 5 pages. |
Tenenbaum, A.M., et al., “Data Structures Using Pascal,” 1981 Prentice-Hall, Inc., 34 pages. |
Tsai, W.H., et al., “Attributed Grammar—A Tool for Combining Syntactic and Statistical Approaches to Pattern Recognition,” IEEE Transactions on Systems, Man, and Cybernetics, vol. SMC-10, No. 12, Dec. 1980, 13 pages. |
Udell, J., “Computer Telephony,” BYTE, vol. 19, No. 7, Jul. 1, 1994, 9 pages. |
van Santen, J. P. H., “Contextual Effects on Vowel Duration,” Journal Speech Communication, vol. 11, No. 6, Dec. 1992, 34 pages. |
Vepa, J., et al., “New Objective Distance Measures for Spectral Discontinuities in Concatenative Speech Synthesis,” In Proceedings of the IEEE 2002 Workshop on Speech Synthesis, 4 pages. |
Verschelde, J., “MATLAB Lecture 8. Special Matrices in MATLAB,” Nov. 23, 2005, UIC Dept. of Math., Stat., & C.S., MCS 320, Introduction to Symbolic Computation, 4 pages. |
Vingron, M., “Near-Optimal Sequence Alignment,” Deutsches Krebsforschungszentrum (DKFZ), Abteilung Theoretische Bioinformatik, Heidelberg, Germany, Jun. 1996, 20 pages. |
Werner, S., et al., “Prosodic Aspects of Speech,” Université de Lausanne, Switzerland, 1994, Fundamentals of Speech Synthesis and Speech Recognition: Basic Concepts, State of the Art, and Future Challenges, 18 pages. |
Wikipedia, “Mel Scale,” Wikipedia, the free encyclopedia, http://en.wikipedia.org/wiki/Mel_scale, 2 pages. |
Wikipedia, “Minimum Phase,” Wikipedia, the free encyclopedia, http://en.wikipedia.org/wiki/Minimum_phase, 8 pages. |
Wolff, M., “Poststructuralism and the ARTFUL Database: Some Theoretical Considerations,” Information Technology and Libraries, vol. 13, No. 1, Mar. 1994, 10 pages. |
Wu, M., “Digital Speech Processing and Coding,” ENEE408G Capstone-Multimedia Signal Processing, Spring 2003, Lecture-2 course presentation, University of Maryland, College Park, 8 pages. |
Wu, M., “Speech Recognition, Synthesis, and H.C.I.,” ENEE408G Capstone-Multimedia Signal Processing, Spring 2003, Lecture-3 course presentation, University of Maryland, College Park, 11 pages. |
Wyle, M. F., “A Wide Area Network Information Filter,” In Proceedings of First International Conference on Artificial Intelligence on Wall Street, Oct. 9-11, 1991, 6 pages. |
Yankelovich, N., et al., “Intermedia: The Concept and the Construction of a Seamless Information Environment,” COMPUTER Magazine, Jan. 1988, © 1988 IEEE, 16 pages. |
Yoon, K., et al., “Letter-to-Sound Rules for Korean,” Department of Linguistics, The Ohio State University, 2002, 4 pages. |
Zhao, Y., “An Acoustic-Phonetic-Based Speaker Adaptation Technique for Improving Speaker-Independent Continuous Speech Recognition,” IEEE Transactions on Speech and Audio Processing, vol. 2, No. 3, Jul. 1994, 15 pages. |
Zovato, E., et al., “Towards Emotional Speech Synthesis: A Rule Based Approach,” 2 pages. |
International Search Report dated Nov. 9, 1994, received in International Application No. PCT/US1993/12666, which corresponds to U.S. Appl. No. 07/999,302, 8 pages. (Robert Don Strong). |
International Preliminary Examination Report dated Mar. 1, 1995, received in International Application No. PCT/US1993/12666, which corresponds to U.S. Appl. No. 07/999,302, 5 pages. (Robert Don Strong). |
International Preliminary Examination Report dated Apr. 10, 1995, received in International Application No. PCT/US1993/12637, which corresponds to U.S. Appl. No. 07/999,354, 7 pages. (Alejandro Acero). |
International Search Report dated Feb. 8, 1995, received in International Application No. PCT/US1994/11011, which corresponds to U.S. Appl. No. 08/129,679, 7 pages. (Yen-Lu Chow). |
International Preliminary Examination Report dated Feb. 28, 1996, received in International Application No. PCT/US1994/11011, which corresponds to U.S. Appl. No. 08/129,679, 4 pages. (Yen-Lu Chow). |
Written Opinion dated Aug. 21, 1995, received in International Application No. PCT/US1994/11011, which corresponds to U.S. Appl. No. 08/129,679, 4 pages. (Yen-Lu Chow). |
International Search Report dated Nov. 8, 1995, received in International Application No. PCT/US1995/08369, which corresponds to U.S. Appl. No. 08/271,639, 6 pages. (Peter V. De Souza). |
International Preliminary Examination Report dated Oct. 9, 1996, received in International Application No. PCT/US1995/08369, which corresponds to U.S. Appl. No. 08/271,639, 4 pages. (Peter V. De Souza). |
Alfred App, 2011, http://www.alfredapp.com/, 5 pages. |
Ambite, JL., et al., “Design and Implementation of the CALO Query Manager,” Copyright © 2006, American Association for Artificial Intelligence, (www.aaai.org), 8 pages. |
Ambite, JL., et al., “Integration of Heterogeneous Knowledge Sources in the CALO Query Manager,” 2005, The 4th International Conference on Ontologies, DataBases, and Applications of Semantics (ODBASE), Agia Napa, Cyprus, http://www.isi.edu/people/ambite/publications/integration_heterogeneous_knowledge_sources_calo_query_manager, 18 pages. |
Belvin, R. et al., “Development of the HRL Route Navigation Dialogue System,” 2001, In Proceedings of the First International Conference on Human Language Technology Research, Paper, Copyright © 2001 HRL Laboratories, LLC, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.10.6538, 5 pages. |
Berry, P. M., et al. “PTIME: Personalized Assistance for Calendaring,” ACM Transactions on Intelligent Systems and Technology, vol. 2, No. 4, Article 40, Publication date: Jul. 2011, 40:1-22, 22 pages. |
Bussler, C., et al., “Web Service Execution Environment (WSMX),” Jun. 3, 2005, W3C Member Submission, http://www.w3.org/Submission/WSMX, 29 pages. |
Butcher, M., “EVI arrives in town to go toe-to-toe with Siri,” Jan. 23, 2012, http://techcrunch.com/2012/01/23/evi-arrives-in-town-to-go-toe-to-toe-with-siri/, 2 pages. |
Chen, Y., “Multimedia Siri Finds And Plays Whatever You Ask For,” Feb. 9, 2012, http://www.psfk.com/2012/02/multimedia-siri.html, 9 pages. |
Cheyer, A., “About Adam Cheyer,” Sep. 17, 2012, http://www.adam.cheyer.com/about.html, 2 pages. |
Cheyer, A., “A Perspective on AI & Agent Technologies for SCM,” VerticalNet, 2001 presentation, 22 pages. |
Cheyer, A. et al., “Spoken Language and Multimodal Applications for Electronic Realities,” © Springer-Verlag London Ltd, Virtual Reality 1999, 3:1-15, 15 pages. |
Cutkosky, M. R. et al., “PACT: An Experiment in Integrating Concurrent Engineering Systems,” Journal, Computer, vol. 26 Issue 1, Jan. 1993, IEEE Computer Society Press Los Alamitos, CA, USA, http://dl.acm.org/citation.cfm?id=165320, 14 pages. |
Domingue, J., et al., “Web Service Modeling Ontology (WSMO)—An Ontology for Semantic Web Services,” Jun. 9-10, 2005, position paper at the W3C Workshop on Frameworks for Semantics in Web Services, Innsbruck, Austria, 6 pages. |
Elio, R. et al., “On Abstract Task Models and Conversation Policies,” http://webdocs.cs.ualberta.ca/˜ree/publications/papers2/ATS.AA99.pdf, May 1999, 10 pages. |
Ericsson, S. et al., “Software illustrating a unified approach to multimodality and multilinguality in the in-home domain,” Dec. 22, 2006, Talk and Look: Tools for Ambient Linguistic Knowledge, http://www.talk-project.eurice.eu/fileadmin/talk/publications_public/deliverables_public/D1_6.pdf, 127 pages. |
Eslambolchilar, P., et al., “Multimodal Feedback for Tilt Controlled Speed Dependent Automatic Zooming,” UIST'04, Oct. 24-27, 2004, University of Glasgow, 3 pages. |
Evi, “Meet Evi: the one mobile app that provides solutions for your everyday problems,” Feb. 8, 2012, http://www.evi.com/, 3 pages. |
Feigenbaum, E., et al., “Computer-assisted Semantic Annotation of Scientific Life Works,” 2007, http://tomgruber.org/writing/stanford-cs300.pdf, 22 pages. |
Gannes, L., “Alfred App Gives Personalized Restaurant Recommendations,” allthingsd.com, Jul. 18, 2011, http://allthingsd.com/20110718/alfred-app-gives-personalized-restaurant-recommendations/, 3 pages. |
Gautier, P. O., et al. “Generating Explanations of Device Behavior Using Compositional Modeling and Causal Ordering,” 1993, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.42.8394, 9 pages. |
Gervasio, M. T., et al., “Active Preference Learning for Personalized Calendar Scheduling Assistance,” Copyright © 2005, http://www.ai.sri.com/˜gervasio/pubs/gervasio-iui05.pdf, 8 pages. |
Glass, A., “Explaining Preference Learning,” 2006, http://cs229.stanford.edu/proj2006/Glass-ExplainingPreferenceLearning.pdf, 5 pages. |
Glass, J., et al., “Multilingual Spoken-Language Understanding in the MIT Voyager System,” Aug. 1995, http://groups.csail.mit.edu/sls/publications/1995/speechcomm95-voyager.pdf, 29 pages. |
Goddeau, D., et al., “A Form-Based Dialogue Manager for Spoken Language Applications,” Oct. 1996, http://phasedance.com/pdf/icslp96.pdf, 4 pages. |
Goddeau, D., et al., “Galaxy: A Human-Language Interface to On-Line Travel Information,” 1994 International Conference on Spoken Language Processing, Sep. 18-22, 1994, Pacific Convention Plaza Yokohama, Japan, 6 pages. |
Gruber, T. R., et al., “An Ontology for Engineering Mathematics,” In Jon Doyle, Piero Torasso, & Erik Sandewall, Eds., Fourth International Conference on Principles of Knowledge Representation and Reasoning, Gustav Stresemann Institut, Bonn, Germany, Morgan Kaufmann, 1994, http://www-ksl.stanford.edu/knowledge-sharing/papers/engmath.html, 22 pages. |
Gruber, T. R., “A Translation Approach to Portable Ontology Specifications,” Knowledge Systems Laboratory, Stanford University, Sep. 1992, Technical Report KSL 92-71, Revised Apr. 1993, 27 pages. |
Gruber, T. R., “Automated Knowledge Acquisition for Strategic Knowledge,” Knowledge Systems Laboratory, Machine Learning, 4, 293-336 (1989), 44 pages. |
Gruber, T. R., “(Avoiding) the Travesty of the Commons,” Presentation at NPUC 2006, New Paradigms for User Computing, IBM Almaden Research Center, Jul. 24, 2006. http://tomgruber.org/writing/avoiding-travestry.htm, 52 pages. |
Gruber, T. R., “Big Think Small Screen: How semantic computing in the cloud will revolutionize the consumer experience on the phone,” Keynote presentation at Web 3.0 conference, Jan. 27, 2010, http://tomgruber.org/writing/web30jan2010.htm, 41 pages. |
Gruber, T. R., “Collaborating around Shared Content on the WWW,” W3C Workshop on WWW and Collaboration, Cambridge, MA, Sep. 11, 1995, http://www.w3.org/Collaboration/Workshop/Proceedings/P9.html, 1 page. |
Gruber, T. R., “Collective Knowledge Systems: Where the Social Web meets the Semantic Web,” Web Semantics: Science, Services and Agents on the World Wide Web (2007), doi:10.1016/j.websem.2007.11.011, keynote presentation given at the 5th International Semantic Web Conference, Nov. 7, 2006, 19 pages. |
Gruber, T. R., “Where the Social Web meets the Semantic Web,” Presentation at the 5th International Semantic Web Conference, Nov. 7, 2006, 38 pages. |
Gruber, T. R., “Despite our Best Efforts, Ontologies are not the Problem,” AAAI Spring Symposium, Mar. 2008, http://tomgruber.org/writing/aaai-ss08.htm, 40 pages. |
Gruber, T. R., “Enterprise Collaboration Management with Intraspect,” Intraspect Software, Inc., Intraspect Technical White Paper, Jul. 2001, 24 pages. |
Gruber, T. R., “Every ontology is a treaty—a social agreement—among people with some common motive in sharing,” Interview by Dr. Miltiadis D. Lytras, Official Quarterly Bulletin of AIS Special Interest Group on Semantic Web and Information Systems, vol. 1, Issue 3, 2004, http://www.sigsemis.org, 5 pages. |
Gruber, T. R., et al., “Generative Design Rationale: Beyond the Record and Replay Paradigm,” Knowledge Systems Laboratory, Stanford University, Dec. 1991, Technical Report KSL 92-59, Updated Feb. 1993, 24 pages. |
Gruber, T. R., “Helping Organizations Collaborate, Communicate, and Learn,” Presentation to NASA Ames Research, Mountain View, CA, Mar. 2003, http://tomgruber.org/writing/organizational-intelligence-talk.htm, 30 pages. |
Gruber, T. R., “Intelligence at the Interface: Semantic Technology and the Consumer Internet Experience,” Presentation at Semantic Technologies conference (SemTech08), May 20, 2008, http://tomgruber.org/writing.htm, 40 pages. |
Gruber, T. R., “Interactive Acquisition of Justifications: Learning ‘Why’ by Being Told ‘What’,” Knowledge Systems Laboratory, Stanford University, Oct. 1990, Technical Report KSL 91-17, Revised Feb. 1991, 24 pages. |
Gruber, T. R., “It Is What It Does: The Pragmatics of Ontology for Knowledge Sharing,” (c) 2000, 2003, http://www.cidoc-crm.org/docs/symposium_presentations/gruber_cidoc-ontology-2003.pdf, 21 pages. |
Gruber, T. R., et al., “Machine-generated Explanations of Engineering Models: A Compositional Modeling Approach,” (1993) In Proc. International Joint Conference on Artificial Intelligence, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.34.930, 7 pages. |
Gruber, T. R., “2021: Mass Collaboration and the Really New Economy,” TNTY Futures, the newsletter of The Next Twenty Years series, vol. 1, Issue 6, Aug. 2001, http://www.tnty.com/newsletter/futures/archive/v01-05business.html, 5 pages. |
Gruber, T. R., et al., “NIKE: A National Infrastructure for Knowledge Exchange,” Oct. 1994, http://www.eit.com/papers/nike/nike.html and nike.ps, 10 pages. |
Gruber, T. R., “Ontologies, Web 2.0 and Beyond,” Apr. 24, 2007, Ontology Summit 2007, http://tomgruber.org/writing/ontolog-social-web-keynote.pdf, 17 pages. |
Gruber, T. R., “Ontology of Folksonomy: A Mash-up of Apples and Oranges,” Originally published to the web in 2005, Int'l Journal on Semantic Web & Information Systems, 3(2), 2007, 7 pages. |
Gruber, T. R., “Siri, a Virtual Personal Assistant—Bringing Intelligence to the Interface,” Jun. 16, 2009, Keynote presentation at Semantic Technologies conference, Jun. 2009. http://tomgruber.org/writing/semtech09.htm, 22 pages. |
Gruber, T. R., “TagOntology,” Presentation to Tag Camp, www.tagcamp.org, Oct. 29, 2005, 20 pages. |
Gruber, T. R., et al., “Toward a Knowledge Medium for Collaborative Product Development,” In Artificial Intelligence in Design 1992, from Proceedings of the Second International Conference on Artificial Intelligence in Design, Pittsburgh, USA, Jun. 22-25, 1992, 19 pages. |
Gruber, T. R., “Toward Principles for the Design of Ontologies Used for Knowledge Sharing,” In International Journal Human-Computer Studies 43, p. 907-928, substantial revision of paper presented at the International Workshop on Formal Ontology, Mar. 1993, Padova, Italy, available as Technical Report KSL 93-04, Knowledge Systems Laboratory, Stanford University, further revised Aug. 23, 1993, 23 pages. |
Guzzoni, D., et al., “Active, A Platform for Building Intelligent Operating Rooms,” Surgetica 2007 Computer-Aided Medical Interventions: tools and applications, pp. 191-198, Paris, 2007, Sauramps Médical, http://lsro.epfl.ch/page-68384-en.html, 8 pages. |
Guzzoni, D., et al., “Active, A Tool for Building Intelligent User Interfaces,” ASC 2007, Palma de Mallorca, http://lsro.epfl.ch/page-34241.html, 6 pages. |
Guzzoni, D., et al., “A Unified Platform for Building Intelligent Web Interaction Assistants,” Proceedings of the 2006 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, Computer Society, 4 pages. |
Guzzoni, D., et al., “Modeling Human-Agent Interaction with Active Ontologies,” 2007, AAAI Spring Symposium, Interaction Challenges for Intelligent Assistants, Stanford University, Palo Alto, California, 8 pages. |
Hardawar, D., “Driving app Waze builds its own Siri for hands-free voice control,” Feb. 9, 2012, http://venturebeat.com/2012/02/09/driving-app-waze-builds-its-own-siri-for-hands-free-voice-control/, 4 pages. |
Intraspect Software, “The Intraspect Knowledge Management Solution: Technical Overview,” http://tomgruber.org/writing/intraspect-whitepaper-1998.pdf, 18 pages. |
Julia, L., et al., “Un éditeur interactif de tableaux dessinés à main levée (An Interactive Editor for Hand-Sketched Tables),” Traitement du Signal 1995, vol. 12, No. 6, 8 pages. No English Translation Available. |
Karp, P. D., “A Generic Knowledge-Base Access Protocol,” May 12, 1994, http://lecture.cs.buu.ac.th/˜f50353/Document/gfp.pdf, 66 pages. |
Lemon, O., et al., “Multithreaded Context for Robust Conversational Interfaces: Context-Sensitive Speech Recognition and Interpretation of Corrective Fragments,” Sep. 2004, ACM Transactions on Computer-Human Interaction, vol. 11, No. 3, 27 pages. |
Leong, L., et al., “CASIS: A Context-Aware Speech Interface System,” IUI'05, Jan. 9-12, 2005, Proceedings of the 10th international conference on Intelligent user interfaces, San Diego, California, USA, 8 pages. |
Lieberman, H., et al., “Out of context: Computer systems that adapt to, and learn from, context,” 2000, IBM Systems Journal, vol. 39, Nos. 3/4, 2000, 16 pages. |
Lin, B., et al., “A Distributed Architecture for Cooperative Spoken Dialogue Agents with Coherent Dialogue State and History,” 1999, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.42.272, 4 pages. |
Martin, D., et al., “The Open Agent Architecture: A Framework for building distributed software systems,” Jan.-Mar. 1999, Applied-Artificial Intelligence: An International Journal, vol. 13, No. 1-2, http://adam.cheyer.com/papers/oaa.pdf, 38 pages. |
McGuire, J., et al., “SHADE: Technology for Knowledge-Based Collaborative Engineering,” 1993, Journal of Concurrent Engineering: Applications and Research (CERA), 18 pages. |
Meng, H., et al., “Wheels: A Conversational System in the Automobile Classified Domain,” Oct. 1996, http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.16.3022, 4 pages. |
Milward, D., et al., “D2.2: Dynamic Multimodal Interface Reconfiguration,” Talk and Look: Tools for Ambient Linguistic Knowledge, Aug. 8, 2006, http://www.ihmc.us/users/nblaylock/Pubs/Files/talk_d2.2.pdf, 69 pages. |
Mitra, P., et al., “A Graph-Oriented Model for Articulation of Ontology Interdependencies,” 2000, http://ilpubs.stanford.edu:8090/442/1/2000-20.pdf, 15 pages. |
Moran, D. B., et al., “Multimodal User Interfaces in the Open Agent Architecture,” Proc. of the 1997 International Conference on Intelligent User Interfaces (IUI97), 8 pages. |
Mozer, M., “An Intelligent Environment Must be Adaptive,” Mar./Apr. 1999, IEEE Intelligent Systems, 3 pages. |
Mühlhäuser, M., “Context Aware Voice User Interfaces for Workflow Support,” Darmstadt 2007, http://tuprints.ulb.tu-darmstadt.de/876/1/PhD.pdf, 254 pages. |
Naone, E., “TR10: Intelligent Software Assistant,” Mar.-Apr. 2009, Technology Review, http://www.technologyreview.com/printer_friendly_article.aspx?id=22117, 2 pages. |
Neches, R., “Enabling Technology for Knowledge Sharing,” Fall 1991, AI Magazine, pp. 37-56, 21 pages. |
Nöth, E., et al., “Verbmobil: The Use of Prosody in the Linguistic Components of a Speech Understanding System,” IEEE Transactions on Speech and Audio Processing, vol. 8, No. 5, Sep. 2000, 14 pages. |
Phoenix Solutions, Inc. v. West Interactive Corp., Document 40, Declaration of Christopher Schmandt Regarding the MIT Galaxy System dated Jul. 2, 2010, 162 pages. |
Rice, J., et al., “Monthly Program: Nov. 14, 1995,” The San Francisco Bay Area Chapter of ACM SIGCHI, http://www.baychi.org/calendar/19951114/, 2 pages. |
Rice, J., et al., “Using the Web Instead of a Window System,” Knowledge Systems Laboratory, Stanford University, (http://tomgruber.org/writing/ksl-95-69.pdf, Sep. 1995.) CHI '96 Proceedings: Conference on Human Factors in Computing Systems, Apr. 13-18, 1996, Vancouver, BC, Canada, 14 pages. |
Rivlin, Z., et al., “Maestro: Conductor of Multimedia Analysis Technologies,” 1999 SRI International, Communications of the Association for Computing Machinery (CACM), 7 pages. |
Roddy, D., et al., “Communication and Collaboration in a Landscape of B2B eMarketplaces,” VerticalNet Solutions, white paper, Jun. 15, 2000, 23 pages. |
Seneff, S., et al., “A New Restaurant Guide Conversational System: Issues in Rapid Prototyping for Specialized Domains,” Oct. 1996, citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.16 . . . rep . . . , 4 pages. |
Sheth, A., et al., “Relationships at the Heart of Semantic Web: Modeling, Discovering, and Exploiting Complex Semantic Relationships,” Oct. 13, 2002, Enhancing the Power of the Internet: Studies in Fuzziness and Soft Computing, Springer-Verlag, 38 pages. |
Simonite, T., “One Easy Way to Make Siri Smarter,” Oct. 18, 2011, Technology Review, http://www.technologyreview.com/printer_friendly_article.aspx?id=38915, 2 pages. |
Stent, A., et al., “The CommandTalk Spoken Dialogue System,” 1999, http://acl.ldc.upenn.edu/P/P99/P99-1024.pdf, 8 pages. |
Tofel, K., et al., “Speaktoit: A personal assistant for older iPhones, iPads,” Feb. 9, 2012, http://gigaom.com/apple/speaktoit-siri-for-older-iphones-ipads/, 7 pages. |
Tucker, J., “Too lazy to grab your TV remote? Use Siri instead,” Nov. 30, 2011, http://www.engadget.com/2011/11/30/too-lazy-to-grab-your-tv-remote-use-siri-instead/, 8 pages. |
Tur, G., et al., “The CALO Meeting Speech Recognition and Understanding System,” 2008, Proc. IEEE Spoken Language Technology Workshop, 4 pages. |
Tur, G., et al., “The CALO Meeting Assistant System,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 18, No. 6, Aug. 2010, 11 pages. |
Vlingo InCar, “Distracted Driving Solution with Vlingo InCar,” 2:38 minute video uploaded to YouTube by Vlingo Voice on Oct. 6, 2010, http://www.youtube.com/watch?v=Vqs8XfXxgz4, 2 pages. |
Vlingo, “Vlingo Launches Voice Enablement Application on Apple App Store,” Vlingo press release dated Dec. 3, 2008, 2 pages. |
YouTube, “Knowledge Navigator,” 5:34 minute video uploaded to YouTube by Knownav on Apr. 29, 2008, http://www.youtube.com/watch?v=QRH8eimU_20, 1 page. |
YouTube, “Send Text, Listen To and Send E-Mail ‘By Voice’ www.voiceassist.com,” 2:11 minute video uploaded to YouTube by VoiceAssist on Jul. 30, 2009, http://www.youtube.com/watch?v=0tEU61nHHA4, 1 page. |
YouTube, “Text'nDrive App Demo—Listen and Reply to your Messages by Voice while Driving!,” 1:57 minute video uploaded to YouTube by TextnDrive on Apr. 27, 2010, http://www.youtube.com/watch?v=WaGfzoHsAMw, 1 page. |
YouTube, “Voice on the Go (BlackBerry),” 2:51 minute video uploaded to YouTube by VoiceOnTheGo on Jul. 27, 2009, http://www.youtube.com/watch?v=pJqpWgQS98w, 1 page. |
Zue, V., “Conversational Interfaces: Advances and Challenges,” Sep. 1997, http://www.cs.cmu.edu/˜dod/papers/zue97.pdf, 10 pages. |
Zue, V. W., “Toward Systems that Understand Spoken Language,” Feb. 1994, ARPA Strategic Computing Institute, © 1994 IEEE, 9 pages. |
Notice of Allowance dated Sep. 5, 2012, received in U.S. Appl. No. 12/686,876, 15 pages. (Rottler). |
Korean Office Action dated Jan. 3, 2013 for Application No. 10-2012-7021094, 16 pages. |
Office Action dated May 3, 2012, received in U.S. Appl. No. 12/686,876, 43 pages. (Rottler). |
International Search Report and Written Opinion dated Jun. 30, 2011, received in International Application No. PCT/US2011/020350, which corresponds to U.S. Appl. No. 12/686,876, 18 pages. (Benjamin Rottler). |
Partial International Search Report and Invitation to Pay Additional Fees, received in International Application No. PCT/US2011/020350, which corresponds to U.S. Appl. No. 12/686,876, 7 pages. (Benjamin Rottler). |
International Search Report and Written Opinion dated Nov. 29, 2011, received in International Application No. PCT/US2011/20861, which corresponds to U.S. Appl. No. 12/987,982, 15 pages. (Thomas Robert Gruber). |
Agnäs, MS., et al., “Spoken Language Translator: First-Year Report,” Jan. 1994, SICS (ISSN 0283-3638), SRI and Telia Research AB, 161 pages. |
Allen, J., “Natural Language Understanding,” 2nd Edition, Copyright © 1995 by The Benjamin/Cummings Publishing Company, Inc., 671 pages. |
Alshawi, H., et al., “CLARE: A Contextual Reasoning and Cooperative Response Framework for the Core Language Engine,” Dec. 1992, SRI International, Cambridge Computer Science Research Centre, Cambridge, 273 pages. |
Alshawi, H., et al., “Declarative Derivation of Database Queries from Meaning Representations,” Oct. 1991, Proceedings of the BANKAI Workshop on Intelligent Information Access, 12 pages. |
Alshawi, H., et al., “Logical Forms in The Core Language Engine,” 1989, Proceedings of the 27th Annual Meeting of the Association for Computational Linguistics, 8 pages. |
Alshawi, H., et al., “Overview of the Core Language Engine,” Sep. 1988, Proceedings of Future Generation Computing Systems, Tokyo, 13 pages. |
Alshawi, H., “Translation and Monotonic Interpretation/Generation,” Jul. 1992, SRI International, Cambridge Computer Science Research Centre, Cambridge, 18 pages, http://www.cam.sri.com/tr/crc024/paper.ps.Z, 1992. |
Appelt, D., et al., “Fastus: A Finite-state Processor for Information Extraction from Real-world Text,” 1993, Proceedings of IJCAI, 8 pages. |
Appelt, D., et al., “SRI: Description of the JV-FASTUS System Used for MUC-5,” 1993, SRI International, Artificial Intelligence Center, 19 pages. |
Appelt, D., et al., “SRI International Fastus System MUC-6 Test Results and Analysis,” 1995, SRI International, Menlo Park, California, 12 pages. |
Archbold, A., et al., “A Team User's Guide,” Dec. 21, 1981, SRI International, 70 pages. |
Bear, J., et al., “A System for Labeling Self-Repairs in Speech,” Feb. 22, 1993, SRI International, 9 pages. |
Bear, J., et al., “Detection and Correction of Repairs in Human-Computer Dialog,” May 5, 1992, SRI International, 11 pages. |
Bear, J., et al., “Integrating Multiple Knowledge Sources for Detection and Correction of Repairs in Human-Computer Dialog,” 1992, Proceedings of the 30th annual meeting on Association for Computational Linguistics (ACL), 8 pages. |
Bear, J., et al., “Using Information Extraction to Improve Document Retrieval,” 1998, SRI International, Menlo Park, California, 11 pages. |
Berry, P., et al., “Task Management under Change and Uncertainty Constraint Solving Experience with the CALO Project,” 2005, Proceedings of CP'05 Workshop on Constraint Solving under Change, 5 pages. |
Bobrow, R. et al., “Knowledge Representation for Syntactic/Semantic Processing,” From: AAAI-80 Proceedings. Copyright © 1980, AAAI, 8 pages. |
Bouchou, B., et al., “Using Transducers in Natural Language Database Query,” Jun. 17-19, 1999, Proceedings of 4th International Conference on Applications of Natural Language to Information Systems, Austria, 17 pages. |
Bratt, H., et al., “The SRI Telephone-based ATIS System,” 1995, Proceedings of ARPA Workshop on Spoken Language Technology, 3 pages. |
Bulyko, I. et al., “Error-Correction Detection and Response Generation in a Spoken Dialogue System,” © 2004 Elsevier B.V., doi:10.1016/j.specom.2004.09.009, 18 pages. |
Burke, R., et al., “Question Answering from Frequently Asked Question Files,” 1997, AI Magazine, vol. 18, No. 2, 10 pages. |
Burns, A., et al., “Development of a Web-Based Intelligent Agent for the Fashion Selection and Purchasing Process via Electronic Commerce,” Dec. 31, 1998, Proceedings of the Americas Conference on Information Systems (AMCIS), 4 pages. |
Carter, D., “Lexical Acquisition in the Core Language Engine,” 1989, Proceedings of the Fourth Conference of the European Chapter of the Association for Computational Linguistics, 8 pages. |
Carter, D., et al., “The Speech-Language Interface in the Spoken Language Translator,” Nov. 23, 1994, SRI International, 9 pages. |
Chai, J., et al., “Comparative Evaluation of a Natural Language Dialog Based System and a Menu Driven System for Information Access: a Case Study,” Apr. 2000, Proceedings of the International Conference on Multimedia Information Retrieval (RIAO), Paris, 11 pages. |
Cheyer, A., et al., “Multimodal Maps: An Agent-based Approach,” International Conference on Cooperative Multimodal Communication, 1995, 15 pages. |
Cheyer, A., et al., “The Open Agent Architecture,” Autonomous Agents and Multi-Agent systems, vol. 4, Mar. 1, 2001, 6 pages. |
Cheyer, A., et al., “The Open Agent Architecture: Building communities of distributed software agents,” Feb. 21, 1998, Artificial Intelligence Center, SRI International, PowerPoint presentation, downloaded from http://www.ai.sri.com/˜oaa/, 25 pages. |
Codd, E. F., “Databases: Improving Usability and Responsiveness—‘How About Recently’,” Copyright © 1978, by Academic Press, Inc., 28 pages. |
Cohen, P.R., et al., “An Open Agent Architecture,” 1994, 8 pages. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.30.480. |
Coles, L. S., et al., “Chemistry Question-Answering,” Jun. 1969, SRI International, 15 pages. |
Coles, L. S., “Techniques for Information Retrieval Using an Inferential Question-Answering System with Natural-Language Input,” Nov. 1972, SRI International, 198 pages. |
Coles, L. S., “The Application of Theorem Proving to Information Retrieval,” Jan. 1971, SRI International, 21 pages. |
Constantinides, P., et al., “A Schema Based Approach to Dialog Control,” 1998, Proceedings of the International Conference on Spoken Language Processing, 4 pages. |
Cox, R. V., et al., “Speech and Language Processing for Next-Millennium Communications Services,” Proceedings of the IEEE, vol. 88, No. 8, Aug. 2000, 24 pages. |
Craig, J., et al., “Deacon: Direct English Access and Control,” Nov. 7-10, 1966 AFIPS Conference Proceedings, vol. 19, San Francisco, 18 pages. |
Dar, S., et al., “DTL's DataSpot: Database Exploration Using Plain Language,” 1998 Proceedings of the 24th VLDB Conference, New York, 5 pages. |
Davis, Z., et al., “A Personal Handheld Multi-Modal Shopping Assistant,” 2006 IEEE, 9 pages. |
Decker, K., et al., “Designing Behaviors for Information Agents,” The Robotics Institute, Carnegie-Mellon University, paper, Jul. 6, 1996, 15 pages. |
Decker, K., et al., “Matchmaking and Brokering,” The Robotics Institute, Carnegie-Mellon University, paper, May 16, 1996, 19 pages. |
Dowding, J., et al., “Gemini: A Natural Language System for Spoken-Language Understanding,” 1993, Proceedings of the Thirty-First Annual Meeting of the Association for Computational Linguistics, 8 pages. |
Dowding, J., et al., “Interleaving Syntax and Semantics in An Efficient Bottom-Up Parser,” 1994, Proceedings of the 32nd Annual Meeting of the Association for Computational Linguistics, 7 pages. |
Epstein, M., et al., “Natural Language Access to a Melanoma Data Base,” Sep. 1978, SRI International, 7 pages. |
Exhibit 1, “Natural Language Interface Using Constrained Intermediate Dictionary of Results,” Classes/Subclasses Manually Reviewed for the Search of U.S. Pat. No. 7,177,798, Mar. 22, 2013, 1 page. |
Exhibit 1, “Natural Language Interface Using Constrained Intermediate Dictionary of Results,” List of Publications Manually reviewed for the Search of U.S. Pat. No. 7,177,798, Mar. 22, 2013, 1 page. |
Ferguson, G., et al., “TRIPS: An Integrated Intelligent Problem-Solving Assistant,” 1998, Proceedings of the Fifteenth National Conference on Artificial Intelligence (AAAI-98) and Tenth Conference on Innovative Applications of Artificial Intelligence (IAAI-98), 7 pages. |
Fikes, R., et al., “A Network-based knowledge Representation and its Natural Deduction System,” Jul. 1977, SRI International, 43 pages. |
Gamback, B., et al., “The Swedish Core Language Engine,” 1992 NOTEX Conference, 17 pages. |
Glass, J., et al., “Multilingual Language Generation Across Multiple Domains,” Sep. 18-22, 1994, International Conference on Spoken Language Processing, Japan, 5 pages. |
Green, C. “The Application of Theorem Proving to Question-Answering Systems,” Jun. 1969, SRI Stanford Research Institute, Artificial Intelligence Group, 169 pages. |
Gregg, D. G., “DSS Access on the WWW: An Intelligent Agent Prototype,” 1998 Proceedings of the Americas Conference on Information Systems-Association for Information Systems, 3 pages. |
Grishman, R., “Computational Linguistics: An Introduction,” © Cambridge University Press 1986, 172 pages. |
Grosz, B. et al., “Dialogic: A Core Natural-Language Processing System,” Nov. 9, 1982, SRI International, 17 pages. |
Grosz, B. et al., “Research on Natural-Language Processing at SRI,” Nov. 1981, SRI International, 21 pages. |
Grosz, B., et al., “Team: An Experiment in the Design of Transportable Natural-Language Interfaces,” Artificial Intelligence, vol. 32, 1987, 71 pages. |
Grosz, B., “Team: A Transportable Natural-Language Interface System,” 1983, Proceedings of the First Conference on Applied Natural Language Processing, 7 pages. |
Guida, G., et al., “NLI: A Robust Interface for Natural Language Person-Machine Communication,” Int. J. Man-Machine Studies, vol. 17, 1982, 17 pages. |
Guzzoni, D., et al., “Active, A platform for Building Intelligent Software,” Computational Intelligence 2006, 5 pages. http://www.informatik.uni-trier.de/˜ley/pers/hd/g/Guzzoni:Didier. |
Guzzoni, D., “Active: A unified platform for building intelligent assistant applications,” Oct. 25, 2007, 262 pages. |
Guzzoni, D., et al., “Many Robots Make Short Work,” 1996 AAAI Robot Contest, SRI International, 9 pages. |
Haas, N., et al., “An Approach to Acquiring and Applying Knowledge,” Nov. 1980, SRI International, 22 pages. |
Hadidi, R., et al., “Students' Acceptance of Web-Based Course Offerings: An Empirical Assessment,” 1998 Proceedings of the Americas Conference on Information Systems (AMCIS), 4 pages. |
Hawkins, J., et al., “Hierarchical Temporal Memory: Concepts, Theory, and Terminology,” Mar. 27, 2007, Numenta, Inc., 20 pages. |
He, Q., et al., “Personal Security Agent: KQML-Based PKI,” The Robotics Institute, Carnegie-Mellon University, paper, Oct. 1, 1997, 14 pages. |
Hendrix, G. et al., “Developing a Natural Language Interface to Complex Data,” ACM Transactions on Database Systems, vol. 3, No. 2, Jun. 1978, 43 pages. |
Hendrix, G., “Human Engineering for Applied Natural Language Processing,” Feb. 1977, SRI International, 27 pages. |
Hendrix, G., “Klaus: A System for Managing Information and Computational Resources,” Oct. 1980, SRI International, 34 pages. |
Hendrix, G., “Lifer: A Natural Language Interface Facility,” Dec. 1976, SRI Stanford Research Institute, Artificial Intelligence Center, 9 pages. |
Hendrix, G., “Natural-Language Interface,” Apr.-Jun. 1982, American Journal of Computational Linguistics, vol. 8, No. 2, 7 pages. Best Copy Available. |
Hendrix, G., “The Lifer Manual: A Guide to Building Practical Natural Language Interfaces,” Feb. 1977, SRI International, 76 pages. |
Hendrix, G., et al., “Transportable Natural-Language Interfaces to Databases,” Apr. 30, 1981, SRI International, 18 pages. |
Hirschman, L., et al., “Multi-Site Data Collection and Evaluation in Spoken Language Understanding,” 1993, Proceedings of the workshop on Human Language Technology, 6 pages. |
Hobbs, J., et al., “Fastus: A System for Extracting Information from Natural-Language Text,” Nov. 19, 1992, SRI International, Artificial Intelligence Center, 26 pages. |
Hobbs, J., et al.,“Fastus: Extracting Information from Natural-Language Texts,” 1992, SRI International, Artificial Intelligence Center, 22 pages. |
Hobbs, J., “Sublanguage and Knowledge,” Jun. 1984, SRI International, Artificial Intelligence Center, 30 pages. |
Hodjat, B., et al., “Iterative Statistical Language Model Generation for Use with an Agent-Oriented Natural Language Interface,” Volume 4 of the Proceedings of HCI International 2003, 7 pages. |
Huang, X., et al., “The Sphinx-II Speech Recognition System: An Overview,” Jan. 15, 1992, Computer, Speech and Language, 14 pages. |
Issar, S., et al., “CMU's Robust Spoken Language Understanding System,” 1993, Proceedings of EUROSPEECH, 4 pages. |
Issar, S., “Estimation of Language Models for New Spoken Language Applications,” Oct. 3-6, 1996, Proceedings of 4th International Conference on Spoken language Processing, Philadelphia, 4 pages. |
Janas, J., “The Semantics-Based Natural Language Interface to Relational Databases,” © Springer-Verlag Berlin Heidelberg 1986, Germany, 48 pages. |
Johnson, J., “A Data Management Strategy for Transportable Natural Language Interfaces,” Jun. 1989, doctoral thesis submitted to the Department of Computer Science, University of British Columbia, Canada, 285 pages. |
Julia, L., et al., “HTTP://WWW.SPEECH.SRI.COM/DEMOS/ATIS.HTML,” 1997, Proceedings of AAAI, Spring Symposium, 5 pages. |
Kahn, M., et al., “CoABS Grid Scalability Experiments,” 2003, Autonomous Agents and Multi-Agent Systems, vol. 7, 8 pages. |
Kamel, M., et al., “A Graph Based Knowledge Retrieval System,” © 1990 IEEE, 7 pages. |
Katz, B., “Annotating the World Wide Web Using Natural Language,” 1997, Proceedings of the 5th RIAO Conference on Computer Assisted Information Searching on the Internet, 7 pages. |
Katz, B., “A Three-Step Procedure for Language Generation,” Dec. 1980, Massachusetts Institute of Technology, Artificial Intelligence Laboratory, 42 pages. |
Katz, B., et al., “Exploiting Lexical Regularities in Designing Natural Language Systems,” 1988, Proceedings of the 12th International Conference on Computational Linguistics, Coling'88, Budapest, Hungary, 22 pages. |
Katz, B., et al., “REXTOR: A System for Generating Relations from Natural Language,” In Proceedings of the ACL Oct. 2000 Workshop on Natural Language Processing and Information Retrieval (NLP&IR), 11 pages. |
Katz, B., “Using English for Indexing and Retrieving,” 1988 Proceedings of the 1st RIAO Conference on User-Oriented Content-Based Text and Image (RIAO'88), 19 pages. |
Konolige, K., “A Framework for a Portable Natural-Language Interface to Large Data Bases,” Oct. 12, 1979, SRI International, Artificial Intelligence Center, 54 pages. |
Laird, J., et al., “SOAR: An Architecture for General Intelligence,” 1987, Artificial Intelligence vol. 33, 64 pages. |
Langley, P., et al., “A Design for the Icarus Architecture,” SIGART Bulletin, vol. 2, No. 4, 6 pages. |
Larks, “Intelligent Software Agents: Larks,” 2006, downloaded on Mar. 15, 2013 from http://www.cs.cmu.edu/larks.html, 2 pages. |
Martin, D., et al., “Building Distributed Software Systems with the Open Agent Architecture,” Mar. 23-25, 1998, Proceedings of the Third International Conference on the Practical Application of Intelligent Agents and Multi-Agent Technology, 23 pages. |
Martin, D., et al., “Development Tools for the Open Agent Architecture,” Apr. 1996, Proceedings of the International Conference on the Practical Application of Intelligent Agents and Multi-Agent Technology, 17 pages. |
Martin, D., et al., “Information Brokering in an Agent Architecture,” Apr. 1997, Proceedings of the second International Conference on the Practical Application of Intelligent Agents and Multi-Agent Technology, 20 pages. |
Martin, D., et al., “PAAM '98 Tutorial: Building and Using Practical Agent Applications,” 1998, SRI International, 78 pages. |
Martin, P., et al., “Transportability and Generality in a Natural-Language Interface System,” Aug. 8-12, 1983, Proceedings of the Eight International Joint Conference on Artificial Intelligence, West Germany, 21 pages. |
Matiasek, J., et al., “Tamic-P: A System for NL Access to Social Insurance Database,” Jun. 17-19, 1999, Proceeding of the 4th International Conference on Applications of Natural Language to Information Systems, Austria, 7 pages. |
Michos, S.E., et al., “Towards an adaptive natural language interface to command languages,” Natural Language Engineering 2 (3), © 1994 Cambridge University Press, 19 pages. Best Copy Available. |
Milstead, J., et al., “Metadata: Cataloging by Any Other Name . . . ,” Jan. 1999, Online, Copyright © 1999 Information Today, Inc., 18 pages. |
Minker, W., et al., “Hidden Understanding Models for Machine Translation,” 1999, Proceedings of ETRW on Interactive Dialogue in Multi-Modal Systems, 4 pages. |
Modi, P. J., et al., “CMRadar: A Personal Assistant Agent for Calendar Management,” © 2004, American Association for Artificial Intelligence, Intelligent Systems Demonstrations, 2 pages. |
Moore, R., et al., “Combining Linguistic and Statistical Knowledge Sources in Natural-Language Processing for ATIS,” 1995, SRI International, Artificial Intelligence Center, 4 pages. |
Moore, R., “Handling Complex Queries in a Distributed Data Base,” Oct. 8, 1979, SRI International, Artificial Intelligence Center, 38 pages. |
Moore, R., “Practical Natural-Language Processing by Computer,” Oct. 1981, SRI International, Artificial Intelligence Center, 34 pages. |
Moore, R., et al., “SRI's Experience with the ATIS Evaluation,” Jun. 24-27, 1990, Proceedings of a workshop held at Hidden Valley, Pennsylvania, 4 pages. Best Copy Available. |
Moore, et al., “The Information Warfare Advisor: An Architecture for Interacting with Intelligent Agents Across the Web,” Dec. 31, 1998, Proceedings of Americas Conference on Information Systems (AMCIS), 4 pages. |
Moore, R., “The Role of Logic in Knowledge Representation and Commonsense Reasoning,” Jun. 1982, SRI International, Artificial Intelligence Center, 19 pages. |
Moore, R., “Using Natural-Language Knowledge Sources in Speech Recognition,” Jan. 1999, SRI International, Artificial Intelligence Center, 24 pages. |
Moran, D., et al., “Intelligent Agent-based User Interfaces,” Oct. 12-13, 1995, Proceedings of International Workshop on Human Interface Technology, University of Aizu, Japan, 4 pages. http://www.dougmoran.com/dmoran/PAPERS/oaa-iwhit1995.pdf. |
Moran, D., “Quantifier Scoping in the SRI Core Language Engine,” 1988, Proceedings of the 26th annual meeting on Association for Computational Linguistics, 8 pages. |
Motro, A., “Flex: A Tolerant and Cooperative User Interface to Databases,” IEEE Transactions on Knowledge and Data Engineering, vol. 2, No. 2, Jun. 1990, 16 pages. |
Murveit, H., et al., “Speech Recognition in SRI's Resource Management and ATIS Systems,” 1991, Proceedings of the workshop on Speech and Natural Language (HLT'91), 7 pages. |
OAA, “The Open Agent Architecture 1.0 Distribution Source Code,” Copyright 1999, SRI International, 2 pages. |
Odubiyi, J., et al., “SAIRE—A scalable agent-based information retrieval engine,” 1997 Proceedings of the First International Conference on Autonomous Agents, 12 pages. |
Owei, V., et al., “Natural Language Query Filtration in the Conceptual Query Language,” © 1997 IEEE, 11 pages. |
Pannu, A., et al., “A Learning Personal Agent for Text Filtering and Notification,” 1996, The Robotics Institute School of Computer Science, Carnegie-Mellon University, 12 pages. |
Pereira, F., “Logic for Natural Language Analysis,” Jan. 1983, SRI International, Artificial Intelligence Center, 194 pages. |
Perrault, C.R., et al., “Natural-Language Interfaces,” Aug. 22, 1986, SRI International, 48 pages. |
Pulman, S.G., et al., “Clare: A Combined Language and Reasoning Engine,” 1993, Proceedings of JFIT Conference, 8 pages. URL: http://www.cam.sri.com/tr/crc042/paper.ps.Z. |
Ravishankar, M., “Efficient Algorithms for Speech Recognition,” May 15, 1996, Doctoral Thesis submitted to School of Computer Science, Computer Science Division, Carnegie Mellon University, Pittsburgh, 146 pages. |
Rayner, M., “Abductive Equivalential Translation and its application to Natural Language Database Interfacing,” Sep. 1993 Dissertation paper, SRI International, 163 pages. |
Rayner, M., et al., “Adapting the Core Language Engine to French and Spanish,” May 10, 1996, Cornell University Library, 9 pages. http://arxiv.org/abs/cmp-lg/9605015. |
Rayner, M., et al., “Deriving Database Queries from Logical Forms by Abductive Definition Expansion,” 1992, Proceedings of the Third Conference on Applied Natural Language Processing, ANLC'92, 8 pages. |
Rayner, M., “Linguistic Domain Theories: Natural-Language Database Interfacing from First Principles,” 1993, SRI International, Cambridge, 11 pages. |
Rayner, M., et al., “Spoken Language Translation With Mid-90's Technology: A Case Study,” 1993, EUROSPEECH, ISCA, 4 pages. http://dblp.uni-trier.de/db/conf/interspeech/eurospeech1993.html#RaynerBCCDGKKLPPS93. |
Rudnicky, A. I., et al., “Creating Natural Dialogs in the Carnegie Mellon Communicator System.” |
Russell, S., et al., “Artificial Intelligence, A Modern Approach,” © 1995 Prentice Hall, Inc., 121 pages. |
Sacerdoti, E., et al., “A Ladder User's Guide (Revised),” Mar. 1980, SRI International, Artificial Intelligence Center, 39 pages. |
Sagalowicz, D., “A D-Ladder User's Guide,” Sep. 1980, SRI International, 42 pages. |
Sameshima, Y., et al., “Authorization with security attributes and privilege delegation: Access control beyond the ACL,” Computer Communications, vol. 20, 1997, 9 pages. |
San-Segundo, R., et al., “Confidence Measures for Dialogue Management in the CU Communicator System,” Jun. 5-9, 2000, Proceedings of Acoustics, Speech, and Signal Processing (ICASSP'00), 4 pages. |
Sato, H., “A Data Model, Knowledge Base, and Natural Language Processing for Sharing a Large Statistical Database,” 1989, Statistical and Scientific Database Management, Lecture Notes in Computer Science, vol. 339, 20 pages. |
Schnelle, D., “Context Aware Voice User Interfaces for Workflow Support,” Aug. 27, 2007, Dissertation paper, 254 pages. |
Sharoff, S., et al., “Register-domain Separation as a Methodology for Development of Natural Language Interfaces to Databases,” 1999, Proceedings of Human-Computer Interaction (INTERACT'99), 7 pages. |
Shimazu, H., et al., “CAPIT: Natural Language Interface Design Tool with Keyword Analyzer and Case-Based Parser,” NEC Research & Development, vol. 33, No. 4, Oct. 1992, 11 pages. |
Shinkle, L., “Team User's Guide,” Nov. 1984, SRI International, Artificial Intelligence Center, 78 pages. |
Shklar, L., et al., “Info Harness: Use of Automatically Generated Metadata for Search and Retrieval of Heterogeneous Information,” 1995 Proceedings of CAiSE'95, Finland. |
Singh, N., “Unifying Heterogeneous Information Models,” 1998 Communications of the ACM, 13 pages. |
SRI2009, “SRI Speech: Products: Software Development Kits: EduSpeak,” 2009, 2 pages, available at http://web.archive.org/web/20090828084033/http://www.speechatsri.com/products/eduspeak.shtml. |
Starr, B., et al., “Knowledge-Intensive Query Processing,” May 31, 1998, Proceedings of the 5th KRDB Workshop, Seattle, 6 pages. |
Stern, R., et al. “Multiple Approaches to Robust Speech Recognition,” 1992, Proceedings of Speech and Natural Language Workshop, 6 pages. |
Stickel, M. E., “A Nonclausal Connection-Graph Resolution Theorem-Proving Program,” 1982, Proceedings of AAAI'82, 5 pages. |
Sugumaran, V., “A Distributed Intelligent Agent-Based Spatial Decision Support System,” Dec. 31, 1998, Proceedings of the Americas Conference on Information systems (AMCIS), 4 pages. |
Sycara, K., et al., “Coordination of Multiple Intelligent Software Agents,” International Journal of Cooperative Information Systems (IJCIS), vol. 5, Nos. 2 & 3, Jun. & Sep. 1996, 33 pages. |
Sycara, K., et al., “Distributed Intelligent Agents,” IEEE Expert, vol. 11, No. 6, Dec. 1996, 32 pages. |
Sycara, K., et al., “Dynamic Service Matchmaking Among Agents in Open Information Environments,” 1999, Sigmod Record, 7 pages. |
Sycara, K., et al., “The RETSINA MAS Infrastructure,” 2003, Autonomous Agents and Multi-Agent Systems, vol. 7, 20 pages. |
Tyson, M., et al., “Domain-Independent Task Specification in the TACITUS Natural Language System,” May 1990, SRI International, Artificial Intelligence Center, 16 pages. |
Wahlster, W., et al., “Smartkom: multimodal communication with a life-like character,” 2001 EUROSPEECH-Scandinavia, 7th European Conference on Speech Communication and Technology, 5 pages. |
Waldinger, R., et al., “Deductive Question Answering from Multiple Resources,” 2003, New Directions in Question Answering, published by AAAI, Menlo Park, 22 pages. |
Walker, D., et al., “Natural Language Access to Medical Text,” Mar. 1981, SRI International, Artificial Intelligence Center, 23 pages. |
Waltz, D., “An English Language Question Answering System for a Large Relational Database,” © 1978 ACM, vol. 21, No. 7, 14 pages. |
Ward, W., et al., “A Class Based Language Model for Speech Recognition,” © 1996 IEEE, 3 pages. |
Ward, W., et al., “Recent Improvements in the CMU Spoken Language Understanding System,” 1994, ARPA Human Language Technology Workshop, 4 pages. |
Ward, W., “The CMU Air Travel Information Service: Understanding Spontaneous Speech,” 3 pages. |
Warren, D.H.D., et al., “An Efficient Easily Adaptable System for Interpreting Natural Language Queries,” Jul.-Dec. 1982, American Journal of Computational Linguistics, vol. 8, No. 3-4, 11 pages. Best Copy Available. |
Weizenbaum, J., “ELIZA—A Computer Program for the Study of Natural Language Communication Between Man and Machine,” Communications of the ACM, vol. 9, No. 1, Jan. 1966, 10 pages. |
Winiwarter, W., “Adaptive Natural Language Interfaces to FAQ Knowledge Bases,” Jun. 17-19, 1999, Proceedings of 4th International Conference on Applications of Natural Language to Information Systems, Austria, 22 pages. |
Wu, X. et al., “KDA: A Knowledge-based Database Assistant,” Proceedings of the Fifth International Conference on Data Engineering (IEEE Cat. No. 89CH2695-5), Feb. 6-10, 1989, 8 pages. |
Yang, J., et al., “Smart Sight: A Tourist Assistant System,” 1999 Proceedings of Third International Symposium on Wearable Computers, 6 pages. |
Zeng, D., et al., “Cooperative Intelligent Software Agents,” The Robotics Institute, Carnegie-Mellon University, Mar. 1995, 13 pages. |
Zhao, L., “Intelligent Agents for Flexible Workflow Systems,” Oct. 31, 1998 Proceedings of the Americas Conference on Information Systems (AMCIS), 4 pages. |
Zue, V., et al., “From Interface to Content: Translingual Access and Delivery of On-Line Information,” 1997, EUROSPEECH, 4 pages. |
Zue, V., et al., “Jupiter: A Telephone-Based Conversational Interface for Weather Information,” Jan. 2000, IEEE Transactions on Speech and Audio Processing, 13 pages. |
Zue, V., et al., “Pegasus: A Spoken Dialogue Interface for On-Line Air Travel Planning,” 1994 Elsevier, Speech Communication 15 (1994), 10 pages. |
Zue, V., et al., “The Voyager Speech Understanding System: Preliminary Development and Evaluation,” 1990, Proceedings of IEEE 1990 International Conference on Acoustics, Speech, and Signal Processing, 4 pages. |
Number | Date | Country
---|---|---
20130159861 A1 | Jun. 2013 | US
 | Number | Date | Country
---|---|---|---
Parent | 12686876 | Jan. 2010 | US
Child | 13769217 | | US