The invention provides mobile devices having a speech interface and/or a combination of a speech interface and a non-speech interface to enable natural language human-machine interactions. More specifically, the invention enables mobile users to submit natural language speech and/or non-speech questions or commands in a wide range of domains. The mobile device is configured to present responses in a natural manner.
Telematic systems are systems that bring human-computer interfaces to mobile environments. Conventional computer interfaces use some combination of keyboards, keypads, point and click techniques and touch screen displays. These conventional interface techniques are generally not suitable for mobile environments, due at least in part to the speed of interaction and the inherent danger and distraction. Therefore, speech interfaces are being adopted in many telematic applications.
However, creating a natural language speech interface that is suitable for use in the mobile environment has proved difficult. A general-purpose telematics system should accommodate commands and requests from a wide range of domains and from many users with diverse preferences and needs. Further, multiple mobile users may want to use such systems, often simultaneously. Finally, most mobile environments are relatively noisy, making speech recognition inherently difficult.
Retrieval of both local and network hosted online information and processing of commands in a natural manner remains a difficult problem in any environment, especially a mobile environment. Cognitive research on human interaction shows that verbal communication, such as a person asking a question or giving a command, typically relies heavily on context and domain knowledge of the target person. By contrast, machine-based requests (a request may be a question, a command, and/or other types of communications) may be highly structured and may not be inherently natural to the human user. Thus, verbal communications and machine processing of requests that are extracted from the verbal communications may be fundamentally incompatible. Yet the ability to allow a person to make natural language speech-based requests remains a desirable goal.
Research has been performed in multiple fields relating to natural language processing and speech recognition. Speech recognition has steadily improved in accuracy and today is successfully used in a wide range of applications. Natural language processing has previously been applied to the parsing of speech queries. Yet few systems have been developed that provide a complete environment for users to make natural language speech requests and/or commands and receive natural-sounding responses in a mobile environment. There remain a number of significant barriers to the creation of a complete natural language verbal and/or textual-based query and response environment.
The fact that most natural language requests and commands are incomplete in their definition is a significant barrier to natural language query-response interaction. Further, some questions can only be interpreted in the context of previous questions, knowledge of the domain, or the user's history of interests and preferences. Thus, some natural language questions and commands may not be easily transformed to machine processable form. Compounding this problem, many natural language questions may be ambiguous or subjective. In these cases, the formation of a machine processable query and returning of a natural language response is difficult at best.
Even once a question is asked, parsed and interpreted, machine processable requests and commands must be formulated. Depending on the nature of the question, there may not be a simple set of requests that returns an adequate response. Several requests may need to be initiated, and even these requests may need to be chained or concatenated to achieve a complete result. Further, no single available source may include the entire set of results required. Thus, multiple requests, perhaps with several parts, may need to be made to multiple data sources, which may be located either locally or remotely. Not all of these sources and requests may return useful results, or any results at all.
In a mobile environment, the use of wireless communications may further reduce the chances that requests will be complete or that successful results will be returned. Useful results that are returned are often embedded in other information and may need to be extracted therefrom. For example, a few key words or numbers often need to be “scraped” from a larger amount of other information in a text string, table, list, page, or other information. At the same time, other extraneous information such as graphics or pictures may need to be removed to process the response in speech. In any case, the multiple results should be evaluated and combined to form the best possible answer, even in the case where some requests do not return useful results or fail entirely. In cases where the question is ambiguous or the result inherently subjective, determining the best result to present is a complex process. Finally, to maintain a natural interaction, responses should be returned rapidly to the user. Managing and evaluating complex and uncertain requests, while maintaining real-time performance, is a significant challenge.
These and other drawbacks exist in existing systems.
The invention overcomes these and other drawbacks of prior telematic systems.
According to one aspect of the invention, speech-based and non-speech-based systems are provided that act on commands and retrieve information. The invention uses context, prior information, domain knowledge, and user specific profile data to achieve a natural environment for users that submit requests and/or commands in multiple domains. At each step in the process, accommodation may be made for full or partial failure and graceful recovery. The robustness to partial failure may be achieved through the use of probabilistic and fuzzy reasoning at several stages of the process. This robustness to partial failure promotes the feeling of a natural response to questions and commands.
According to another aspect of the invention, the interactive natural language system (herein “the system”) may be incorporated into mobile devices or may be connected to the mobile device via a wired or wireless connection. The mobile device may interface with computers or other electronic control systems through wired or wireless links. The mobile device may also operate independently of a mobile structure and may be used to remotely control devices through a wireless local area connection, a wide area wireless connection or through other communication links.
According to one aspect of the invention, software may be installed onto the mobile device that includes an input module that captures the user input; a parser that parses the input; a text to speech engine module for converting text to speech; a network interface for enabling the device to interface with one or more networks; a non-speech interface module; an event manager for managing events; and/or other modules. In some embodiments, the event manager may be in communication with a context description grammar, a user profile module that enables user profiles to be created, modified and/or accessed, a personality module that enables various personalities to be created and/or used, an agent module, an update manager and one or more databases. It will be understood that this software may be distributed in any way between a mobile device, a computer attached to a mobile structure, a desktop computer or a server without altering the function, features, scope, or intent of the invention.
According to one aspect of the invention, the system may include a speech unit interface device that receives spoken natural language requests, commands and/or other utterances from users, and a computer device or system that receives input from the speech unit and processes the input and responds to the user with a natural language speech response.
According to another aspect of the invention, the system may be interfaced by wired or wireless connections to one or more other systems. The other systems may themselves be distributed between electronic controls or computers that are attached to a mobile structure or are located external to the mobile structure. The other systems may include electronic control systems, entertainment devices, navigation equipment, measurement equipment or sensors, or other systems. External systems may also be provided with features that include payment systems, emergency assistance networks, remote ordering systems, automated or attended customer service functions, or other features.
According to another aspect of the invention, the system may be deployed in a network of devices that share a common base of agents, data, information, user profiles, histories or other components. Each user may interact with, and receive, the same services and applications at any location equipped with the mobile device on the network. For example, multiple mobile devices may be placed at different locations throughout a home, place of business, vehicle or other location. In such a case, the system can use the location of the particular device addressed by the user as part of the context for the questions asked.
According to one embodiment of the invention, processing may be performed at the mobile devices. The commands may be processed on-board to enable the mobile devices to control themselves and/or to control other mobile devices, fixed computers, mobile telephones, and other devices. Additionally, mobile devices may track context.
According to one embodiment of the invention, infrastructure may be provided to maintain context information during multimodal interactions, such as speech and/or non-speech interactions. According to one exemplary embodiment of the invention, context information may be maintained in a multimodal environment by providing communication channels between the mobile devices, or multimodal devices, and the system. The communication channels allow the system to receive multimodal input such as text-based commands and questions and/or voice-based commands and questions. According to another embodiment of the invention, the multimodal input may include a string of text, such as keywords, that is received as a command or question. According to yet another embodiment of the invention, the system may synchronize the context between the mobile devices and the speech-based units. In order to send a response to the corresponding mobile device, the system may track the source and send the response to the corresponding speech interface or the non-speech interface.
According to an alternative embodiment of the invention, context information may be maintained using a context manager that may be centrally positioned to receive input from multiple mobile devices and to provide output to multiple mobile devices. According to one embodiment, the mobile devices that communicate with the context manager may register through a registration module and may subscribe to one or more events. According to another embodiment of the invention, the context manager may receive input in Context XML form, for example. The other registered mobile devices may be informed of context changes through a context tracking module to enable synchronizing of context across the registered modules. According to one embodiment of the invention, registered modules may be added or removed from the system. The registered modules may include dynamic link libraries (DLLs), or other information sources, that are specific to multimodal devices.
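By way of a non-limiting illustration, a centrally positioned context manager of the type described above might be sketched as follows; the class and method names (ContextManager, register, publish_context) are hypothetical placeholders rather than elements of the invention, and a production implementation would exchange Context XML rather than Python dictionaries.

from typing import Callable, Dict, List

class ContextManager:
    """Hypothetical central context manager: registers devices,
    accepts context updates and propagates them to subscribers."""

    def __init__(self):
        self._registry: Dict[str, List[str]] = {}
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = {}

    def register(self, device_id: str, events: List[str],
                 callback: Callable[[dict], None]) -> None:
        # A registered device subscribes to one or more named events.
        self._registry[device_id] = events
        for event in events:
            self._subscribers.setdefault(event, []).append(callback)

    def publish_context(self, event: str, context: dict) -> None:
        # Inform the registered devices of a context change so that
        # context stays synchronized across the registered modules.
        for callback in self._subscribers.get(event, []):
            callback(context)

# Example: two devices sharing the "now_playing" context.
manager = ContextManager()
manager.register("head_unit", ["now_playing"], lambda ctx: print("head unit:", ctx))
manager.register("phone", ["now_playing"], lambda ctx: print("phone:", ctx))
manager.publish_context("now_playing", {"artist": "Example", "track": "Song"})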
According to yet another alternative embodiment of the invention, context information may be determined from a command or request that is presented in a textual format and/or a command or request that is presented as an utterance and processed using a multi-pass automatic speech recognition module that transcribes the utterance to a text message. The command or request may be compared against a context description grammar to identify a match. Any active grammars in the context description grammar may be scored against the command or request and a best match may be sent to a response generator module. Agents may be associated with corresponding response generator modules and may retrieve the requested information for generation of a response. The agents may update a context stack to enable follow-up requests.
According to another embodiment of the invention, mobile devices may be configured to allow verbal annotations of objects stored thereon. Mobile devices may transcribe the verbal annotation to text and store the textual annotation with the object. Alternatively, mobile devices may be configured to enable users to manually input textual descriptions that are stored along with the objects. According to one embodiment of the invention, the textual annotations and/or textual descriptions may be classified and searched. In an alternative embodiment, mobile devices may classify and search the verbal annotations rather than textual annotations. However, classifying and searching verbal annotations may be considerably more difficult than classifying and searching textual annotations and/or textual descriptions.
According to one embodiment of the invention, the textual annotations and textual descriptions may be communicated using a short message service on the mobile telephone or other device. Short message service is a text message service that enables sending and receiving of short textual messages. The textual messages may be stored at data centers for forwarding to intended recipients. Other configurations may be used.
According to another aspect of the invention, domain specific behavior and information may be organized into data managers. Data managers are autonomous executables that receive, process and respond to user questions, queries and commands. The data managers provide complete, convenient and re-distributable packages or modules of functionality that are typically directed to a specific domain of applications. Data managers may be complete packages of executable code, scripts, links to information, and other forms of communication data that provide a specific package of functionality, usually in a specific domain. In other words, data managers may include components for extending the functionality to a new domain. Further, data managers and their associated data may be updated remotely over a network as new behavior is added or new information becomes available. Data managers may use system resources and the services of other, typically more specialized, data managers. Data managers may be distributed and redistributed in a number of ways, including on removable storage media, by transfer over networks, or as attachments to emails and other messages. An update manager may be used to add new data managers to the system or update existing data managers.
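By way of a non-limiting illustration, a domain-specific data manager and a simple update manager might be sketched as follows; the names (WeatherDataManager, handles, respond, update_manager) and the naive version comparison are hypothetical and illustrative only.

class WeatherDataManager:
    """Hypothetical domain data manager: a redistributable unit of
    behavior and data directed at one domain (here, weather)."""

    domain = "weather"
    version = "1.0"

    def handles(self, request: str) -> bool:
        # Crude domain test; a real manager would consult the context
        # description grammar rather than keyword matching.
        return "weather" in request.lower() or "forecast" in request.lower()

    def respond(self, request: str) -> str:
        # Placeholder response; actual data would come from the local or
        # network sources associated with this manager.
        return "Partly cloudy, high of 20 degrees."


def update_manager(installed: dict, new_manager) -> dict:
    """Hypothetical update manager: replace an installed data manager
    when a newer version for the same domain becomes available.
    (String comparison of versions is a simplification for the sketch.)"""
    current = installed.get(new_manager.domain)
    if current is None or new_manager.version > current.version:
        installed[new_manager.domain] = new_manager
    return installed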
In order to enhance the natural query and response environment, the system may format results to increase understandability to users. Formatting and presentation of results may be based on the context of the questions, the contents of the response being presented, the history of the interaction with the user, the user's preferences and interests and the nature of the domain. By contrast, rigid, highly formatted, or structured presentation of results may be deemed unnatural by many users.
According to another embodiment of the invention, the system may simulate some aspects of a human “personality”. In some cases, the presentation of the response and the terms that are used to provide the response may be randomized to avoid the appearance of rigidly formatted or mechanical responses. The use of other simulated personality characteristics is also desirable. For example, a response that may be upsetting to the user may be presented in a sympathetic manner. Furthermore, results of requests may be long text strings, lists, tables or other lengthy sets of data. Natural presentation of this type of information presents particular challenges because simply reading the long response is generally not preferred. Instead, the system may parse important sections from the response and may initially provide only reports. Determining what parts of a long response are presented may be based on the context of the questions, the contents of the response being presented, the history of the interaction with the user, the user's preferences and interests and the nature of the domain. At the same time, the system may give the user interactive control over what information to present and how much information to present, to stop the response all together, or to take other actions.
The invention may be applied as a user interface to telematics systems in a wide variety of environments. These environments may include, but are not limited to, the following: 1) personal automobiles, rented automobiles, or fleet automobiles; 2) motorcycles, scooters, and other two wheeled or open-air vehicles; 3) commercial long-haul and short haul trucks; 4) delivery service vehicles; 5) fleet service vehicles; 6) industrial vehicles; 7) agricultural and construction machinery; 8) water-borne vehicles; 9) aircraft; and, 10) specialized military, law enforcement and emergency vehicles.
According to another aspect of the invention, the system may process and respond to questions, requests and/or commands. Keywords or context may be used to determine whether the received utterance and/or textual message includes a request or command. Utterances may include aspects of questions, requests and/or commands. For example, a user may utter “tune in my favorite radio station.” A request is processed to determine the name, the channel and the time for the user's favorite radio station. If the programming on that station is of a type the user generally does not listen to, the system can suggest an alternative option, such as listening to a CD more likely to please the user. Finally, a command is executed to tune the radio to that station.
The invention can be used for generalized local or network information query, retrieval and presentation in a mobile environment. For each user utterance, including a question or query or set of questions or queries, the system may perform multiple steps possibly including: 1) capturing the user's question or query through speech recognition operating in a variety of real-world environments; 2) parsing and interpreting the question or query; 3) determining the domain of expertise required and the context to invoke the proper resources, including agents; 4) formulating one or more queries to one or more local and/or network data sources or sending appropriate commands to local or remote devices or the system itself; 5) performing required formatting, variable substitutions and transformations to modify the queries to a form most likely to yield desired results from the available sources; 6) executing the multiple queries or commands in an asynchronous manner and dealing gracefully with failures; 7) extracting or scraping the desired information from the one or more results, which may be returned in any one of a number of different formats; 8) evaluating and interpreting the results, including the processing of errors, and combining them into a single result judged to be “best,” even if the individual results are ambiguous, incomplete, or conflicting; 9) performing required formatting, variable substitutions and transformations to modify the results to a form most easily understood by the user; 10) presenting the compound result, through a text to speech engine or multimodal interface, to the user in a useful and/or expected manner; 11) optionally, providing a response to users indicating the success or failure of the command, and possibly including state information; or other steps.
The above steps may be performed with knowledge of the domain of expertise, the context for the question or command, domain specific information, the history of the user's interactions, user preferences, available information sources or commands, and responses obtained from the sources.
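By way of a non-limiting illustration, the chaining of the steps enumerated above might be sketched as follows; every function here is a trivial hypothetical stub standing in for a substantial component of the system.

from typing import List

# Hypothetical stubs corresponding to the numbered steps above.
def recognize(audio: str) -> str:                 # 1) speech recognition
    return audio
def parse(text: str) -> dict:                     # 2) parse and interpret
    return {"text": text, "keywords": text.lower().split()}
def select_agents(parsed: dict) -> List[str]:     # 3) domain and context
    return ["weather"] if "weather" in parsed["keywords"] else ["general"]
def formulate_queries(parsed: dict, agents: List[str]) -> List[str]:  # 4-5)
    return [f"{agent}:{parsed['text']}" for agent in agents]
def execute(query: str) -> str:                   # 6) execute, tolerating failure
    return f"result for {query}"
def scrape(result: str) -> str:                   # 7) extract desired information
    return result.split(":", 1)[-1]
def combine(results: List[str]) -> str:           # 8) evaluate and combine
    return results[0] if results else "no result"
def present(result: str) -> str:                  # 9-10) format and present
    return f"Here is what I found: {result}"

def handle_utterance(audio: str) -> str:
    parsed = parse(recognize(audio))
    queries = formulate_queries(parsed, select_agents(parsed))
    results = [scrape(execute(q)) for q in queries]
    return present(combine(results))

print(handle_utterance("what is the weather in Seattle"))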
Probabilistic or fuzzy set decision and matching methods may be applied to deal with inconsistent, ambiguous, conflicting and incomplete information or responses. In addition, asynchronous queries may be used to provide rapid and graceful failure of requests or commands that allow the system to robustly return results quickly and in a manner that seems natural to the user.
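By way of a non-limiting illustration, asynchronous queries with graceful handling of slow or failed sources might be sketched as follows, assuming Python's asyncio as the execution vehicle; the source names and the timeout value are arbitrary.

import asyncio
import random

async def query_source(name: str) -> str:
    # Simulated data source: variable latency, occasional failure.
    await asyncio.sleep(random.uniform(0.05, 0.4))
    if random.random() < 0.3:
        raise RuntimeError(f"{name} unavailable")
    return f"result from {name}"

async def gather_results(sources, timeout=0.3):
    # Launch all queries at once; keep whatever succeeds within the
    # deadline and discard the rest so the response stays fast.
    tasks = [asyncio.create_task(query_source(s)) for s in sources]
    done, pending = await asyncio.wait(tasks, timeout=timeout)
    for task in pending:
        task.cancel()
    return [t.result() for t in done if t.exception() is None]

results = asyncio.run(gather_results(["traffic", "weather", "news"]))
print(results or ["no source responded in time"])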
Many everyday questions are inherently subjective and result in answers that are a matter of opinion or consensus, as much as fact. Such questions are often ad hoc in their nature, as well. The system, according to another aspect of the invention, may use adaptive, probabilistic, and fuzzy set decision and matching methods to identify the subjective nature of the question and to evaluate a range of possible answers, wherein one or more answers may be selected that most accurately represent the type of result desired by the user.
The context and expected results from a particular question may be highly dependent on the individual asking the question. Therefore, the system may create, store and use personal profile information for each user. Information in the profile may be added and updated automatically as the user uses the system or may be manually added or updated by the user or by others. Domain specific agents may collect, store and use specific profile information, as may be required for optimal operations. Users may create commands for regularly used reports, automatically generated alerts, and other requests, and for the formatting and presentation of results. The system may use profile data in interpreting questions, formulating requests, interpreting request results and presenting answers to the user. Examples of information in a user profile include a history of questions asked, session histories, formatting and presentation preferences, special word spelling, terms of interest, special data sources of interest, age, sex, education, location or address, place of business, type of business, investments, hobbies, sports interests, news interests and other profile data.
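By way of a non-limiting illustration, a user profile of the kind described above might be represented with a simple data structure such as the following; the field names are hypothetical examples drawn from the list above.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class UserProfile:
    """Illustrative user profile; fields mirror the examples above."""
    name: str
    location: str = ""
    presentation_preferences: Dict[str, str] = field(default_factory=dict)
    terms_of_interest: List[str] = field(default_factory=list)
    data_sources_of_interest: List[str] = field(default_factory=list)
    question_history: List[str] = field(default_factory=list)

    def record_question(self, question: str) -> None:
        # Updated automatically as the user interacts with the system.
        self.question_history.append(question)

profile = UserProfile(name="alex", location="Seattle",
                      terms_of_interest=["jazz", "traffic"])
profile.record_question("what is traffic like on I-5")
print(profile.question_history)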
According to one aspect of the invention, the system may attempt to provide rapid responses in order to provide a natural question and response environment. The rapid responses may be provided without obtaining additional information. The system may determine agent composition, context and/or domain for a user's question or command, for example, by using a real-time scoring system or other technique. Based on this determination, the system may trigger one or more agents to respond to the user's question or command. The agents may make one or more requests and rapidly return a formatted response. Thus, users may receive direct responses to a set of questions, each with a different response or context. In some cases, the available information, including the request results, may not adequately answer the questions presented. In such situations, the user may be asked one or more follow-up questions to resolve the ambiguity. Additional requests may then be made before an adequate response is provided. In these cases, the system may use context information, user profile information and/or domain specific information to minimize the interaction with the user required to deliver a response.
If the confidence level of the domain or context score is not high enough to ensure a reliable response, the system may request that the user verify that the question or command was correctly understood. In general, the question may be phrased to indicate the context of the question, including all criteria or parameters. If the user confirms that the question is correct, the system may proceed to produce a response. Otherwise, the user may rephrase the original question, perhaps adding additional information to remove ambiguity, the system may ask one or more questions to attempt to resolve the ambiguity, or other actions may be taken.
According to one aspect of the invention, the system may accept any natural language question or command and, as a result, may be subject to ambiguous requests. To assist users in formulating concise questions and commands, the system may support a voice query language. The language may help users clearly specify the keywords or contexts of the question or command along with the parameters or criteria. The system may provide built in training capabilities to help the user learn the best methods to formulate their questions and commands.
In order to make the responses to the user's questions and commands seem more natural, the system may employ one or more dynamically invocable personalities and/or emotional models. Personalities and emotional models have specific characteristics that simulate the behavioral characteristics of real humans. Examples of these characteristics include sympathy, irritation, helpfulness and associated emotions. The personality also randomizes aspects of responses, just as a real human would do. This behavior includes randomization of the terms used and the order of presentation of information. Characteristics of the personality and/or emotions are invoked using probabilistic or fuzzy set decision and matching methods and using criteria including the context for the question, the history of the user's interaction, user preferences, information sources available, and responses obtained from the sources.
According to another aspect of the invention, special procedures may be employed to present information in the form of long text strings, tables, lists or other long response sets. Simply presenting a long set of information in an ordered manner may not be considered natural or what most users have in mind. The system may use probabilistic or fuzzy set matching methods to extract relevant information and present these subsets first. Further, the system may provide commands allowing users to skip through the list, find keywords or key information in the list, or stop processing the list altogether.
According to one embodiment of the invention, the system may support multiple users that access the system at different times. According to another embodiment of the invention, the system may support multiple users that access the system during a same session, in an interleaved or overlapping manner. The system may recognize the multiple users by name, voice, or other characteristic and may invoke a correct profile for each user. If multiple users are addressing the system in overlapping or interleaved sessions, the system may identify the multiple users and may invoke one or more corresponding profiles. For applications that require security safeguards, the multiple users may be verified using voiceprint matching, password or pass-phrase matching, or other security safeguards.
When multiple users are engaged in interleaved sessions, the system may gracefully resolve conflicts using a probabilistic or fuzzy set decision method for each user. This process may simulate the manner in which a human would address multiple questions from various sources. For example, the system may answer short questions first in time while answering longer questions later in time. Alternatively, the system may answer questions in the order that they are received, among other configurations.
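By way of a non-limiting illustration, the policy of answering shorter questions first during interleaved sessions might be sketched as follows; the use of word count as an effort estimate is a placeholder for a more sophisticated measure.

import heapq

def order_questions(questions):
    """Answer short questions first: order pending questions from
    multiple users by a crude effort estimate (here, word count)."""
    heap = [(len(q.split()), i, user, q) for i, (user, q) in enumerate(questions)]
    heapq.heapify(heap)
    while heap:
        _, _, user, q = heapq.heappop(heap)
        yield user, q

pending = [("alice", "what is the weather forecast for the next three days in Portland"),
           ("bob", "what time is it")]
for user, question in order_questions(pending):
    print(f"answer {user}: {question}")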
Since the invention may operate in many environments, including mobile environments with background noise, point noise sources and people holding conversations, filtering of speech input may be advantageous. The invention may use, for example, either one-dimensional or two-dimensional array microphones (or other devices) to receive human speech. The array microphones may be fixed or employ dynamic beam forming techniques. The array pattern may be adjusted to maximize gain in the direction of the user and to null point noise sources. Alternatively, microphones may be placed at particular locations within the mobile environment near where occupants are likely to use the system. These microphones can be single microphones, directional microphones or an array of microphones. Speech received at the microphones may then be processed with analog or digital filters to optimize the bandwidth, cancel echoes, notch-out narrow band noise sources, or perform other functions. Following filtering, the system may use variable rate sampling to maximize the fidelity of the encoded speech, while minimizing required bandwidth. This procedure may be particularly useful in cases where the encoded speech is transmitted over a wireless network or link.
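By way of a non-limiting illustration, the digital filtering stage described above might be sketched as follows, assuming the NumPy and SciPy libraries are available; the band edges and the notch frequency are arbitrary illustrative values.

import numpy as np
from scipy import signal

def condition_speech(samples: np.ndarray, rate: int = 16000) -> np.ndarray:
    """Band-limit to the speech band and notch out a narrow-band noise
    source (e.g., alternator whine) before encoding."""
    # Band-pass roughly 300 Hz to 3.4 kHz, a conventional speech band.
    sos = signal.butter(4, [300, 3400], btype="bandpass", fs=rate, output="sos")
    speech = signal.sosfilt(sos, samples)
    # Notch a hypothetical 1 kHz point noise source.
    b, a = signal.iirnotch(w0=1000, Q=30, fs=rate)
    return signal.lfilter(b, a, speech)

t = np.linspace(0, 1, 16000, endpoint=False)
noisy = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)
print(condition_speech(noisy).shape)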
The invention can be applied to a wide range of telematics applications. General applications areas can include, but are not limited to remote or local vehicle control, information query, retrieval and presentation from local or network sources, safety applications, and security applications.
The system may provide local or remote control functions for the system, including devices that are located on the mobile structure or off the mobile structure. Users may initiate commands locally or remotely. Typically, remote operation may be conducted through an IP connection, a telephone connection, or other connections. The user may address spoken commands to a mobile device or desktop unit, which may send the commands to controllers on the vehicle over wireless links. Other remote command techniques may be used. The system may process commands in a manner nearly identical to requests; one difference is that the result of a command is generally an action rather than a response. In many cases, the system may give the user a cue or response to indicate that the command has been successfully executed or has failed. In cases of failure, an interactive session may be started to allow the user to resolve the difficulty or formulate a command more likely to succeed.
The invention provides users, including mobile structure operators, with the capability to control almost any mobile system function using interactive speech and non-speech commands and/or requests. Generally, controls of a critical nature, or having safety implications, may employ fail-safe checks that, before execution, verify that a command will not create a hazardous condition. Manual overrides may also be provided as an extra precaution. The invention may provide built-in help and user guidance for the devices under control. This guidance may include step-by-step training for operators that are learning to use the features of the mobile structure. The system can provide extensive interactive guidance when commands cannot be executed or when commands fail. This advice may include suggestions to reformulate the command to succeed, suggestions to work around a failure, suggestions for alternative commands that may achieve a similar function, or other suggestions. Examples of control functions that can be performed from local or remote locations by the invention include:
The invention can provide users or operators of a mobile structure with specialized safety functions through the interactive speech interface and/or non-speech interface. The invention may use a dynamically invocable personality that is capable of creating announcements that are appropriate for the severity of the situation. The announcements and personalities may be under user control and configuration. Some examples of these safety applications can include:
The invention can offer vehicle operators and occupants a variety of services that are useful while in the vehicle and/or while arriving at a destination. Further, users can employ the interactive natural language interface to customize these services to suit each individual. Some examples of services that can be supported by the natural-language interactive speech interface of the invention include:
Vehicle operators and other occupants can use the interactive natural language interface of the invention to perform many types of information query, retrieval and presentation operations. Using the natural language interactive interface, users can modify the parameters of queries or specify the presentation formats for results. Data used to create a response can be from any combination of local and remote data sources. User specific data can be synchronized between systems fixed to one or more vehicles, mobile structures and desktop systems. Some examples of information query, retrieval and presentation applications for the invention include, but are not limited to, the following:
It will be appreciated that the foregoing statements of the features of the invention are not intended as exhaustive or limiting, the proper scope thereof being appreciated by reference to this entire disclosure and reasonably apparent variations and extensions thereof.
The invention will be described by reference to the preferred and alternative embodiments thereof, in conjunction with the drawings in which:
The following detailed description refers to the accompanying drawings, and describes exemplary embodiments of the invention. Other embodiments are contemplated and modifications may be made to the exemplary embodiments without departing from the spirit, functionality and scope of the invention. Therefore, the following detailed descriptions are not meant to limit the invention.
According to one embodiment of the invention, a telematic natural language speech interface and non-speech interface are provided for use in mobile environments and telematic applications. The system, or portions thereof, may be used in vehicles, while on foot or at a fixed location such as an office or home, or at other locations. An overall block diagram of one embodiment of the invention is illustrated in
A speech unit 128 and/or a keypad 14 may be integrally coupled to a mobile structure 10 or may be part of mobile devices 36, fixed home or office computer systems 44, or other devices. Mobile devices 36 may include mobile telephones, personal digital assistants, digital radios, compact disk players, navigation systems, or other mobile devices. The mobile devices 36 may be configured to integrate with set-top boxes, alarm clocks, radios, or other electronic components. The speech unit 128 and/or keypad 14 may be interfaced to a Telematics Control Unit (TCU) 28 through one or more data interfaces 26. According to some embodiments, the main speech-processing unit 98 may be embedded in one or more TCUs 28. In some embodiments, the components of the speech unit 128 may also be distributed between one or more TCUs.
A speech-processing unit may be built into mobile devices 36 and may be coupled with the data interfaces 26 through a wireless or wired handheld interface 20. Other user interface peripherals may be connected to the TCU through the data interfaces and may include displays 18, such as touch screen displays for displaying text, graphics and video; keypads 14 for receiving textual data input; video cameras 16 for receiving multimedia communications or conferences; a pointing device or stylus; or other user interface peripherals. Other devices may be connected to the TCU through the data interfaces including wide-area RF transceivers 24, navigation system components 22, or other devices. The navigation system may include several components such as Global Positioning System (GPS) receivers or other radiolocation system receivers, gyroscopes or other inertial measurement equipment, distance measurement sensors, such as odometers, or other components. Radiolocation equipment may receive coded signals from one or more satellites or terrestrial sources 40. The one or more location service servers 48 may assist the navigation system. Other systems that can connect to the TCU through the data interfaces may include automotive control computers, digital control interfaces for devices such as media players or other electronic systems, measurement sensors, or other specialized electronic equipment.
The control and device interfaces 30 may connect the TCU 28 to various devices 32 within the mobile structure 10. The control and device interfaces 30 may be used to execute local or remote commands from users of the natural language interface. In some cases, the control and device interfaces 30 may include specialized hardware for interacting with different types of devices. The hardware interfaces may include analog or digital signal interfaces for device control, along with analog or digital interfaces for measurements that may control devices 32. These interfaces may also include specialized software that encapsulates or abstracts specific behavior of the devices 32. The interface software may include one or more drivers that are specific to the hardware interface and to one or more agents. The domain agents may include the specialized software behavior and data required to control a particular device or class of devices. New or updated behavior may be added to the system by updating data managers that are associated with specific devices or classes of devices. The devices 32 may include manual controls or manual overrides 34. For safety reasons, the control and device interfaces 30 may incorporate fail-safe systems that, for example, verify operating limits before changing settings, ensure that commands do not conflict with settings from manual controls, and ensure that commands will not, in some combination with other commands or device settings, create an unsafe situation. The software behavior and data that ensure safe operation may be included within the domain agent that is specific to the device or class of devices. Examples of devices and systems that can be controlled through the control and device interfaces 30 include power management systems, measurement sensors, door locks, window controls, interior temperature controls, shifting of the transmission, turn signals, lights, safety equipment, engine ignition, cruise control, fuel tank switches, seat adjustments, specialized equipment such as winches, lifting systems or loading systems, and other systems.
The wide-area RF transceiver 24 may communicate with one or more wide-area wireless networks 38, which may be connected to data networks 42, including the Internet, the Public Switched Telephone Network (PSTN) 42 or other data networks. The wide-area wireless networks can be of any suitable terrestrial or satellite based type. Mobile devices 36 may communicate with one or more local or wide-area wireless networks. Home or office systems 44, equipped with wired or wireless network interfaces, may communicate through the data networks or PSTN.
According to one embodiment of the invention, data and agents may be stored and synchronized in mobile structures 10, mobile devices 36 and/or fixed systems 44 having one or more main speech-processing units 98. The synchronization between these different systems can occur over the wide area wireless network 38, the data network 42, through the handheld interface 20, or other local data connections. The synchronization may be performed automatically when any two or more of the computers are connected to these networks. Alternatively, the synchronization may be performed on demand under user control. The synchronization process attempts to determine which version of a data element or an agent is the newest or most up-to-date and propagates that element. Thus, synchronization is an incremental change process. In some cases, a complete replacement of a database, a portion of the database or of one or more agents may be performed rather than performing a series of incremental updates.
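By way of a non-limiting illustration, the newest-wins incremental synchronization described above might be sketched as follows; the record layout and the integer timestamps are hypothetical.

from typing import Dict

def synchronize(local: Dict[str, dict], remote: Dict[str, dict]) -> None:
    """Merge two stores of data elements/agents keyed by id, keeping
    whichever copy carries the newer 'updated' timestamp."""
    for key, remote_item in remote.items():
        local_item = local.get(key)
        if local_item is None or remote_item["updated"] > local_item["updated"]:
            local[key] = remote_item  # propagate the most up-to-date version

device = {"weather_agent": {"updated": 5, "data": "v1"}}
desktop = {"weather_agent": {"updated": 9, "data": "v2"},
           "traffic_agent": {"updated": 3, "data": "v1"}}
synchronize(device, desktop)
print(device)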
The wide-area wireless networks 38, the data networks 42 or the PSTN may connect mobile structures 10, mobile devices 36 and fixed computers 44 to one or more servers that provide one or more services. According to one embodiment of the invention, an interactive natural language user interface may be provided that supports the transfer of data or the transmission of speech, text, video and other formats. For data-centric applications, a standardized data transfer format may be used including, for example, Hypertext Markup Language (HTML) over Hypertext Transfer Protocol (HTTP), Extensible Markup Language (XML) or other data formats or schemas over HTTP or another transfer protocol, Electronic Data Interchange formats over a variety of transport protocols, etc. It will be understood that the exact configuration of the servers may be determined on a case-by-case basis with consideration being given to the exact combination of services being offered, the service providers providing the services, the contractual relationships among the service providers, and other factors. The invention supports any suitable configuration. In each case, these servers may themselves be distributed over one or more public or private networks. Some examples of servers that may be used to deliver these services are given below:
Main speech processing unit 98, speech unit 128 and keyboard 14 may be distributed in mobile devices 36 in a number of ways. For example, these units may be attached to the mobile devices 36 as independent components or as a single integrated component. In another embodiment of the invention, some or all of the main speech processing unit 98 and speech unit 128 may be embedded in one or more of the TCUs 28, mobile devices 36, fixed computer systems 44, or other devices.
In all other respects, the second embodiment resembles the first embodiment illustrated in
According to one embodiment, users may interact with the mobile device 36 through the speech unit 128, the keypad 74 or keyboard, a display 72 that displays text, graphics, video, or other peripheral. In some embodiments the display may be a touch screen type. Alternatively, a pointing device (not shown) may be used, among other devices.
The mobile device 36 may connect to one or more wired or wireless wide-area or local-area networks through one or more interfaces. A wide-area network transceiver 78 may connect to the wide-area wireless network 38 or the data network 42 using a wireless or wired connection, including an IP connection, a dial-up PSTN network connection or other connections. The local-area network transceiver 76 may connect to wired or wireless local area networks. These networks may include the handheld interface 20 or connections to fixed computer systems 44. In both mobile device environments and fixed computer environments, communications between humans and machines may not provide accurate results at least because natural language commands may be incomplete in their definition. The occurrence of inaccurate results may be reduced by leveraging context of previous utterances, knowledge of the domain and/or the user's history of interests and preferences or other factors.
According to one embodiment of the invention, processing may be performed at the mobile devices 36. Alternatively, the processing may be performed on the server side. In yet another embodiment, the processing may occur on both the mobile devices 36 and the server side. The commands may be processed on-board the mobile devices 36 to enable the mobile devices 36 to control themselves and/or to control other mobile devices 36, fixed computers 44, mobile telephones, or other devices. Additionally, mobile devices 36 may track context. According to another embodiment of the invention illustrated in
According to yet another alternative embodiment of the invention, context information may be determined from a command or request that is presented as a text message and/or a command or request that is presented as a verbal utterance and processed using a multi-pass automatic speech recognition module that transcribes the verbal utterance to a text message. The multi-pass automatic speech recognition module may use a dictation grammar or a large vocabulary grammar, among other resources, to transcribe the verbal utterance into a text message. On platforms where a dictation grammar is not available, the multi-pass automatic speech recognition may use a virtual dictation grammar that uses decoy words for out-of-vocabulary words. Decoy words may include utility words, nonsense words, isolated syllables, isolated distinct sounds and other decoy words.
According to one embodiment of the invention, the text message may be searched for a particular character, group of characters, word, group of words, and other text combinations. The text combination may be compared against entries in a context description grammar that is associated with each agent 106. If a match is identified between an active grammar in the context description grammar and the command and/or request, then the match may be scored. The agents 106 may be ranked based on the determined score. In generating an aggregate response from the one or more responses received from the agents 106, the ordering of the responses from the individual agents may be determined based on the rank of agents 106. The aggregate response may be generated by a response generator module. Agents 106 may update a context stack, that includes an ordered list of command contexts, to enable follow-up requests.
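By way of a non-limiting illustration, scoring a request against active grammars, ranking agents 106 and ordering the aggregate response might be sketched as follows; the keyword-overlap score is a crude stand-in for matching against the context description grammar, and all names are hypothetical.

from typing import Dict, List, Tuple

def score_grammar(request: str, grammar_phrases: List[str]) -> float:
    """Stand-in for scoring a request against an agent's active grammar:
    fraction of grammar phrases found in the request text."""
    text = request.lower()
    hits = sum(1 for phrase in grammar_phrases if phrase in text)
    return hits / max(len(grammar_phrases), 1)

def rank_agents(request: str, agents: Dict[str, List[str]]) -> List[Tuple[str, float]]:
    scored = [(name, score_grammar(request, phrases)) for name, phrases in agents.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

def aggregate(request: str, agents: Dict[str, List[str]],
              responders: Dict[str, str]) -> str:
    # Order the individual agent responses by agent rank when building
    # the aggregate response.
    ranking = rank_agents(request, agents)
    parts = [responders[name] for name, score in ranking if score > 0]
    return " ".join(parts) if parts else "I did not understand the request."

agents = {"music": ["play", "song", "station"], "weather": ["forecast", "rain"]}
responders = {"music": "Playing your station.", "weather": "Rain is expected."}
print(aggregate("play my favorite station", agents, responders))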
According to another embodiment of the invention, if a match is not found, or only a partial match is found, between the text message and active grammars, then a knowledge-enhanced speech recognition system may be used to semantically broaden the search. The knowledge-enhanced speech recognition system may be used to determine the intent of the request and/or to correct false recognitions. The knowledge-enhanced speech recognition may access a set of expected contexts that are stored in a context stack to determine a most likely context. The knowledge-enhanced speech recognition may use context specific matchers that are able to identify context such as time, location, numbers, dates, categories (e.g., music, movies, television, addresses, etc.) and other context. The matching may be performed by comparing a character, group of characters, a word, group of words, and other text combinations. Alternatively, or in addition to text based matching, the matching may be performed using phonetic matching, among other techniques. The results of any match may be used to generate a command and/or request that is communicated to agents 106 for additional processing. According to one embodiment of the invention, non-speech interface 114 may show system, state and history information in a more concise manner than is possible through the speech interface. Non-speech interface 114 may be accessed to create or extend capabilities of agents 106. These operations may include scripting of agents, adding data to the agent or databases 102 used by the agent, adding links to information sources, among other operations.
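By way of a non-limiting illustration, the semantic broadening described above might be sketched as follows; the regular expressions stand in for context specific matchers, and approximate string matching with the standard-library difflib module stands in for phonetic matching.

import difflib
import re

# Simplified context specific matchers (time, date, category).
CONTEXT_MATCHERS = {
    "time": re.compile(r"\b\d{1,2}:\d{2}\b"),
    "date": re.compile(r"\b\d{1,2}/\d{1,2}(/\d{2,4})?\b"),
    "category": re.compile(r"\b(music|movies|television|address)\b", re.I),
}

def broaden_match(text: str, expected_contexts):
    """If a context specific matcher fires, use that context; otherwise
    approximately match each word against the expected contexts from the
    context stack (a rough stand-in for phonetic matching)."""
    for name, pattern in CONTEXT_MATCHERS.items():
        if pattern.search(text):
            return name
    for word in text.lower().split():
        close = difflib.get_close_matches(word, expected_contexts, n=1, cutoff=0.75)
        if close:
            return close[0]
    return None

print(broaden_match("remind me at 7:30", ["music", "reminders"]))  # matcher hit: time
print(broaden_match("play some musik", ["music", "movies"]))       # fuzzy hit: music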
According to another embodiment of the invention, mobile devices 36 may be configured to allow speech annotations of objects stored thereon. The objects may include photographs, calendar entries, email messages, instant messages, phonebook entries, voice mail entries, digital movies or other objects. Mobile devices 36 may transcribe the speech annotations to textual annotations and store the textual annotations with the object. Alternatively, mobile devices 36 may be configured to enable users to input non-speech annotations, such as textual descriptions, that are stored along with the objects.
According to one embodiment of the invention, the annotated objects may be stored on a server side, a client side, a combination of server side and client side, or according to other configurations. The invention further contemplates collaboratively exchanging and sharing the annotated objects among distributed workgroups that may include centralized servers having shared workspaces for providing common object storage and retrieval facilities. The shared workspaces may be implemented on the centralized servers and may be accessed from different platforms using the mobile devices 36. The system may include a peer-to-peer system for accessing the annotated objects.
According to one embodiment of the invention, the non-speech annotations may be classified and searched. In an alternative embodiment, the speech annotations may be classified and searched. However, classifying and searching speech annotations may be considerably more difficult than classifying and searching non-speech annotations.
According to another embodiment of the invention, the textual annotations and/or textual descriptions that are associated with the objects may be stored as metadata, thereby enabling searching for the objects using the metadata. The metadata may include GPS information, environmental information, geographic information, or other information. For example, proximity to famous landmarks may be determined using GPS information, environmental information, geographic information, or other information, and this information may be integrated into the metadata associated with the objects. According to one embodiment of the invention, GPS coordinates may be stored in the metadata associated with the objects and users may search for selected objects based on the GPS coordinates. A user may provide a speech command such as “show me all the photos of Greece.” In this case, the system would limit the type of object to photographs and would determine the GPS coordinates of Greece. The system would then search the metadata for objects that correspond to photographs and that also satisfy GPS coordinates for Greece. According to another embodiment of the invention, including GPS coordinates in the metadata of objects enables post-processing of the objects based on the GPS coordinates. For example, the objects may be subjected to an initial sort based on generalized GPS coordinates that are stored in the metadata and may be subjected to additional sorting based on more particular criteria for the GPS coordinates. Thus, a user may initially search for object metadata that corresponds to a location near a famous landmark and may use image matching to label the objects with the searchable metadata (i.e., textual descriptions). For example, using the GPS coordinates stored in the metadata, users may first determine that photographs were taken at the Jefferson Memorial and may use this information to label the photographs with searchable metadata (i.e., textual descriptions) including “photo of the Jefferson Memorial.” One of ordinary skill in the art will readily appreciate that metadata may include various types of information and may be searched using the various types of information.
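By way of a non-limiting illustration, the “photos of Greece” example might be sketched as follows; the bounding box coordinates and object fields are hypothetical, and a production system would resolve place names through a geographic service rather than a hard-coded table.

from typing import Dict, List

# Hypothetical bounding boxes: (min_lat, max_lat, min_lon, max_lon).
PLACES = {"greece": (34.8, 41.8, 19.3, 29.7)}

def search_objects(objects: List[Dict], object_type: str, place: str) -> List[Dict]:
    """Return objects of the requested type whose GPS metadata falls
    within the bounding box of the named place."""
    min_lat, max_lat, min_lon, max_lon = PLACES[place.lower()]
    hits = []
    for obj in objects:
        meta = obj.get("metadata", {})
        lat, lon = meta.get("gps", (None, None))
        if (obj.get("type") == object_type and lat is not None
                and min_lat <= lat <= max_lat and min_lon <= lon <= max_lon):
            hits.append(obj)
    return hits

photos = [
    {"type": "photo", "name": "acropolis.jpg", "metadata": {"gps": (37.97, 23.73)}},
    {"type": "photo", "name": "jefferson.jpg", "metadata": {"gps": (38.88, -77.04)}},
]
print([p["name"] for p in search_objects(photos, "photo", "Greece")])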
According to one embodiment of the invention, the textual annotations may be communicated using a short message service on the mobile telephone or other device. Short message service is a text message service that enables sending and receiving of short textual messages. The textual messages may be stored at data centers for forwarding to intended recipients. Other configurations may be used.
According to another embodiment of the invention, the mobile devices 36 may support multi-modal communications that enable displaying of non-speech search results on a graphical interface and receipt of speech commands to provide a follow-up search, among other configurations. For example, the user may be presented with textual search results corresponding to a name of a famous person and the user may provide a speech command to find a biography of the famous person. The system may maintain the context of the textual search results to find the biography associated with the famous person. By contrast, known systems may perform a follow-up search on the term “biography” and may present a dictionary definition of the term “biography.”
According to another embodiment of the invention, fixed computers 44 may be configured to allow verbal annotations of objects stored thereon. Fixed computers 44 may transcribe the verbal annotation to text and store the textual annotation with the object. Alternatively, fixed computers 44 may be configured to enable users to manually input textual descriptions that are stored along with the objects. According to one embodiment of the invention, the textual annotations and/or textual descriptions may be classified and searched. In an alternative embodiment, fixed computers 44 may classify and search the verbal annotations rather than textual annotations. However, classifying and searching verbal annotations may be considerably more difficult than classifying and searching textual annotations and/or textual descriptions.
In another embodiment of the invention, users may interact with fixed computers 44 using speech unit 128, the keyboard 88 or keypad, a display 86 for displaying text, graphics, video, or other peripherals. According to some embodiments of the invention, the display may be a touch screen type. Alternatively, a pointing device (not shown) may be used, along with other devices. Fixed computers 44 may be coupled to one or more wired or wireless wide-area or local-area networks through one or more interfaces. A wide-area transceiver 92 may connect to the wide-area wireless network 38 or the data network 42, using a wireless or wired connection, including an IP network, a dial-up PSTN network connection, or other connections. The local-area network transceiver 90 may connect to wired or wireless local area networks. These networks may include connections to mobile devices 36.
In order for devices to properly respond to requests and/or commands that are submitted in a natural language form, machine processable requests and/or algorithms may be formulated after the natural form questions or commands have been parsed and interpreted. Algorithms describe how the machines should gather data to respond to the questions or commands. Depending on the nature of the requests or commands, there may not be a simple set of requests and/or algorithms that will return an adequate response. Several requests and algorithms may need to be initiated, and even these requests and algorithms may need to be chained or concatenated to achieve a complete response. Further, no single available source may contain the entire set of results needed to generate a complete response. Thus, multiple requests and/or algorithms, perhaps with several parts, may be generated to access multiple data sources that are located either locally or remotely. Not all of the data sources, requests and/or algorithms may return useful results or any results at all. Useful results that are returned are often embedded in other information and may need to be extracted from the other information. For example, a few key words or numbers may need to be “scraped” from a larger amount of other information in a text string, table, list, page, video stream or other information. At the same time, extraneous information including graphics or pictures may be removed to process the response. In any case, the multiple results must be evaluated and combined to form the best possible response, even in cases where some requests do not return useful results or fail to produce results entirely. In cases where the command is determined to be ambiguous or the result is inherently subjective, determining the results to present in the response is a complex process. Finally, to maintain a natural interaction, responses should be returned to the user rapidly. Managing and evaluating complex and uncertain requests, while maintaining real-time performance, is a significant challenge.
The invention provides a complete speech-based command generation, information query, retrieval, processing and presentation environment or a combination of speech-based and non-speech-based command generation, information query, retrieval, processing and presentation environment for telematic applications. In addition, the invention may be useful for controlling the system itself and/or external devices. This integrated environment makes maximum use of context, prior information and domain and user specific profile data to achieve a natural environment for one or more users submitting requests or commands in multiple domains. Through this integrated approach, a complete speech-based natural language command, algorithm and response environment or a combination of speech-based and non-speech-based command, algorithm and response environment may be created.
The telematic natural language interface may be deployed as part of, or as a peripheral to, a TCU or other mobile devices 36; as part of a mobile device interfaced to vehicle computers and other mobile systems through wired, wireless, optical, or other types of connections; or on fixed computers interfaced to the vehicle computers or other systems through a combination of wired, wireless, optical and/or other types of connections. Alternatively, the components of the interactive natural language telematic interface may be distributed in any suitable manner between these multiple computing platforms.
According to one embodiment of the invention, personalized cognitive model 810 is a model derived from a user's interaction pattern with the system and may be used to predict what actions the user may take next in time, thus assisting with speech recognition and/or question or command recognition. Personalized cognitive model 810 may track actions performed by the user. When the system is attempting to predict user behavior, the personalized cognitive model may be consulted first. The system may have multiple personalized cognitive models, wherein one may correspond to each user.
According to another embodiment of the invention, general cognitive model 806 is a statistical abstract that corresponds to interaction patterns with the system for multiple users. Data stored within general cognitive model 806 may be used to predict a user's next action, thus assisting with speech recognition and/or question or command recognition. The general cognitive model 806 may also track what actions a particular user has performed and may be used when the user interacts with the system in a way that is not handled in the personalized cognitive model.
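As a rough, non-limiting sketch of how such models might be consulted, the following example predicts a next action from the personalized model first and falls back to the general model when the user's own history does not cover the current action; the class and function names are invented for illustration and are not part of the specification.

```python
from collections import Counter, defaultdict

class CognitiveModel:
    """Counts action transitions (previous action -> next action) and predicts
    the most frequent follow-on action."""
    def __init__(self):
        self.transitions = defaultdict(Counter)

    def record(self, previous_action, next_action):
        self.transitions[previous_action][next_action] += 1

    def predict(self, previous_action):
        counts = self.transitions.get(previous_action)
        if not counts:
            return None
        return counts.most_common(1)[0][0]

def predict_next_action(user_id, previous_action, personalized, general):
    """Consult the user's personalized model first; fall back to the shared general model."""
    model = personalized.get(user_id)
    prediction = model.predict(previous_action) if model else None
    return prediction if prediction is not None else general.predict(previous_action)

# Toy usage.
general = CognitiveModel()
general.record("play music", "set volume")
personalized = {"alice": CognitiveModel()}
personalized["alice"].record("play music", "skip track")
print(predict_next_action("alice", "play music", personalized, general))  # skip track
print(predict_next_action("bob", "play music", personalized, general))    # set volume
```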
According to one embodiment of the invention, the environmental model 808 may include information associated with the user's environment and surroundings. The information may include the type of environment that a user is in (e.g., quiet or noisy); details of a microphone and/or speaker system; the user's current global position and movement, as may be determined by GPS; current system status, such as what song/movie is playing, whether the system is in the midst of retrieving something, or other system status; details on all voice-enabled devices in the immediate vicinity, such as the presence of a voice-enabled TV, stereo, and DVD player in the same room; a user's credit card information, such as numbers and current balances, wherein the user may ask a mobile telephone to download and pay for a video and the system may respond that there are insufficient funds; or other information. The information may be accessed to invoke a context, domain knowledge, preferences, and/or other cognitive qualities that enhance the interpretation of questions and/or commands.
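Purely as an illustrative assumption, the environmental model might be represented as a simple record such as the following; the field names are invented and are not taken from the specification.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class EnvironmentalModel:
    """Illustrative container for the kinds of environmental data described above."""
    noise_level: str = "quiet"                 # e.g. "quiet" or "noisy"
    microphone: Optional[str] = None           # description of the microphone/speaker setup
    gps_position: Optional[tuple] = None       # (latitude, longitude)
    now_playing: Optional[str] = None          # current song/movie, if any
    nearby_voice_devices: list = field(default_factory=list)
    credit_balance: Optional[float] = None     # used, e.g., to refuse a purchase

    def can_afford(self, price: float) -> bool:
        return self.credit_balance is not None and self.credit_balance >= price

env = EnvironmentalModel(noise_level="noisy", now_playing="Track 7",
                         nearby_voice_devices=["TV", "stereo"], credit_balance=2.50)
print(env.can_afford(9.99))   # False -> an "insufficient funds" style response
```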
The conversational speech analyzer 804 may also access the general cognitive model 806 and/or the personalized cognitive model 810 to further refine a context, domain knowledge, preferences, and/or other cognitive qualities to enhance the interpretation of questions and/or commands. Based on information received from general cognitive model 806, environmental model 808 and/or the personalized cognitive model 810, the system may enhance responses to commands and questions by including a prediction of user behavior.
Adaptive Misrecognition Analysis Engine 812 may analyze and store textual messages, including transcribed utterances, that are identified as being unrecognized or incorrectly recognized by conversational speech analyzer 804. Upon a determination that text is unrecognized, the system may generate an unrecognized event. For example, an unrecognized event may result from not finding a match to text and/or the transcribed utterance.
According to one embodiment of the invention, the system may implement one or more techniques to determine that textual messages are incorrectly recognized. For example, the user may command the system to play a specific song title, and the system may misrecognize the requested title and provide a song having a different title than the one requested, or may provide a song having an invalid title, among other variations. When the system misrecognizes a request, the user typically provides immediate feedback, such as overriding the command in a time shorter than the expected execution time of the command, repeating the original request, issuing a stop command, or taking other action, wherein the action may be presented verbally, non-verbally by pushing a selected button on a cell phone or remote control, or both, among other configurations. According to one embodiment of the invention, the system may detect the user action and may prompt the user to re-phrase the request to enable the system to modify words in the query. The user's actions may be analyzed in a statistical model to determine a frequency of occurrence of misrecognitions for particular commands, with the results being used to update the corresponding personalized cognitive model 810.
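One possible, simplified reading of this feedback heuristic is sketched below: a stop, repeat, or override arriving well before the command would normally complete is tallied as a suspected misrecognition. The function names and thresholds are hypothetical assumptions, not part of the specification.

```python
import time
from collections import Counter

# Per-command tally of suspected misrecognitions; in the full system this would
# feed back into the personalized cognitive model 810.
misrecognition_counts = Counter()

def looks_like_misrecognition(command, issued_at, user_action, action_at,
                              expected_duration_s):
    """Heuristic sketch: corrective feedback arriving well before the command
    would normally finish is treated as a sign of misrecognition."""
    elapsed = action_at - issued_at
    corrective = user_action in ("stop", "repeat", "override")
    return corrective and elapsed < expected_duration_s

issued = time.time()
if looks_like_misrecognition("play song: Yellow", issued, "stop",
                             issued + 2.0, expected_duration_s=180.0):
    misrecognition_counts["play song: Yellow"] += 1
print(misrecognition_counts)
```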
According to another embodiment of the invention, the conversational speech analyzer 804 may access the personalized cognitive model 810 to proactively select a next best (or nth best) match for the received text. A match may be confirmed by user actions that include the user not immediately canceling the command, or by other actions. The misrecognitions may also be analyzed to potentially determine personalized tuning parameters for the speech recognition components of the system. For example, the system may, over time, tune the speech recognition engine to increase recognition accuracy by analyzing how the speech recognition engine misrecognizes utterances.
The event manager 100 may mediate interactions between other components of the invention. The event manager can provide a multi-threaded environment allowing the system to operate on multiple commands or questions from multiple user sessions without conflict and in an efficient manner, maintaining real-time response capabilities.
Agents 106 may include a collection of grammars, criteria handlers, and algorithms that are accessible to respond to a set of requests and/or commands. Agents 106 may further contain packages of both generic and domain specific behavior for the system 98. Agents 106 may use nonvolatile storage for data, parameters, history information, and locally stored content provided in the system databases 102 or other local sources. One or more user profiles 110 may be provided that include user specific data, parameters, and session and history information for determining the behavior of agents 106. One or more personality modules 108 may be provided that include personality characteristics for agents. The update manager 104 manages the automatic and manual loading and updating of agents 106 and their associated data from the Internet 146 or other network through the network interface 116.
According to one embodiment of the invention, the speech-based interface for the system 90 may include one or more speech units 128. Speech units 128 may include one or more microphones, for example array microphone 134, to receive the utterances from the user. The speech received at the microphone 134 may be processed by filter 132 and passed to the speech coder 138 for encoding and compression. In one embodiment, a transceiver module 130 may transmit the coded speech to the main unit 98. Coded speech received from the main unit 98 is detected by the transceiver 130, then decoded and decompressed by the speech coder 138 and annunciated by the speaker 136.
According to one embodiment of the invention, the non-speech-based interface for the system 90 may include one or more multi-modal devices 155 that may include mobile devices, stand-alone or networked computers, personal digital assistants (PDAs), portable computer devices, or other multi-modal devices.
The speech units 128, multi-modal devices 155 and the main unit 98 may communicate over a communication link. The communication link may include a wired or wireless link. According to one embodiment, the communication link may comprise an RF link. The transceiver 130 on the speech unit may communicate coded speech data bi-directionally over the communication link with the transceiver 126 on the main unit 98. According to another embodiment, the RF link may use any standard local area wireless data protocols including the IEEE 802.11, Bluetooth or other standards. Alternatively, an infrared data link conforming to any suitable standard such as IrDA or other infrared standards can be used. In an alternative embodiment, wires may connect the speech unit 128 and the main unit 98, eliminating the need for one speech coder 138. Other wired or wireless analog or digital transmission techniques can be used.
According to one embodiment of the invention, coded speech received at the transceiver 126 on the main unit 98 is passed to the speech coder 122 for decoding and decompression. The decoded speech may be processed by the speech recognition engine 120 using the context description grammar module 112, among other information. Any recognized information may be processed by the parser 118, which transforms information into complete algorithms and questions using data supplied by knowledge agents. Knowledge agents may be selected from the plurality of agents 106 using a grammar stack, wherein the knowledge agents provide information for generating a response to the question or command. The knowledge agents may then process the commands or questions by creating requests that are submitted to local databases 102 or submitted through the network interface 116 to external data sources over the Internet 146 or other external networks. Algorithms typically result in actions taken by the system 90 itself (i.e., pause or stop) or in actions directed to a remote device or data source (i.e., download data or program, or control a remote device), through the network interface to the Internet or other data interface.
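A toy sketch of this routing step, with invented agent names and keyword sets, might select a knowledge agent by walking a grammar stack and then satisfy the request from a local database (a real agent could instead submit the request over the network interface).

```python
def select_agent(tokens, grammar_stack, agents):
    """Walk the grammar stack (most recent context first) and return the first
    agent whose keywords match the recognized tokens."""
    for context in grammar_stack:
        agent = agents.get(context)
        if agent and agent["keywords"] & set(tokens):
            return agent
    return None

def handle(tokens, grammar_stack, agents, local_db):
    agent = select_agent(tokens, grammar_stack, agents)
    if agent is None:
        return "no matching agent"
    request = {"keywords": agent["keywords"] & set(tokens)}
    # Local lookup first; a real agent could instead go out over the network interface.
    return local_db.get(agent["name"], {}).get("default", request)

agents = {
    "weather": {"name": "weather", "keywords": {"temperature", "forecast"}},
    "music":   {"name": "music",   "keywords": {"play", "song"}},
}
local_db = {"weather": {"default": "High 72, low 48"}}
print(handle(["what", "is", "the", "forecast"], ["music", "weather"], agents, local_db))
```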
According to one embodiment of the invention, knowledge agents may return results of questions as responses to users. The responses may be created using the results of information requests, the system personality 108, the user preferences, other data in the user profile 110, and/or other information. Agents 106 may present these results using the speech unit 128. The agents 106 may create a response string, which is sent to the text to speech engine 124. The text to speech engine 124 may generate the utterances, which may be encoded and compressed by the speech coder 122. Once coded, the utterances are transmitted from the main unit 98 by the transceiver 126 to the transceiver 130 on the speech unit 128. The utterance is then decoded and decompressed by the speech coder 138 and output by the speaker 136. Alternatively, agents 106 may present the results using multi-modal devices 155.
The non-speech interface 114 may be part of, or separate from, the multi-modal devices 155 and may be used as a substitute or complement to the speech interface. For example, non-speech interface 114 may be used to present and interact with non-speech (e.g., graphical or tabular) information in a manner more easily understood by the user. According to one embodiment of the invention, multimodal support may be provided to maintain the context during both voice interaction and interaction through the non-speech interface 114. In one exemplary embodiment, a communication channel may be opened between multimodal devices 155 and the main user interface system 90 to allow multimodal devices 155 to input text commands and questions. According to another embodiment of the invention, multimodal devices 155 may send a string of text or keywords for a command or question. The main interface system 90 may synchronize the context between multimodal device 155 and the speech units 128. In order to send a response to the corresponding device, the main user interface system 90 may track where the input came from so that the response may be sent to a TTS or multi-modal device 155.
According to one embodiment of the invention, non-speech interface 114 may show system, state and history information in a more concise manner than is possible through the speech interface. Non-speech interface 114 may be accessed to create or extend capabilities of agents 106. These operations may include scripting of agents, adding data to the agent or databases 102 used by the agent, adding links to information sources, among other operations.
According to another embodiment of the invention, system 90 may include different types of agents 106. In some embodiments of the invention, generic and domain specific behavior and information may be organized into domain agents 156. The system agent, on the other hand, may provide default functionality and basic services. The domain specific agents may provide complete, convenient and re-distributable packages or modules for each application area. In other words, the domain agents may include data that is needed to extend or modify the functionality of the system 90 in a current or new domain. Further, domain agents and their associated data can be updated remotely over a network as new behavior is added or new information becomes available. Domain agents may access a plurality of sources that may provide various services. Domain agents may use the services of other, typically more specialized, data managers and of the system agent. Agents may be distributed and redistributed in a number of ways, including on removable storage media, by transfer over networks, or as attachments to emails and other messages. The invention may provide license management capability allowing the sale of data managers by third parties to one or more users on a one-time or subscription basis. In addition, users with particular expertise can create data managers, update existing data managers by adding new behaviors and information, and make these data managers available to other users. A block diagram of the agent architecture according to an embodiment of the invention is shown in
Agents 106 may receive and return events to the event manager 100. Both system agents 150 and domain agents 156 may receive questions and commands from the parser 118. Based on keywords in the questions and commands and the structures of the questions and commands, the parser may invoke selected agents. Agents use the nonvolatile storage for data, parameters, history information and local content provided in the system databases 102.
According to one embodiment of the invention, when the system starts up or boots up, the agent manager 154 may load and initialize the system agent 150 and the one or more domain agents 156. Agent manager 154 includes knowledge of agents 106 and maps agents 106 to the agent library 158. At shutdown, the agent manager may unload the agents 106. The agent manager 154 also performs license management functions for the domain agents 156 and content in the databases 102.
The system agent 150 manages the criteria handlers 152 that handle specific parameters or values (criteria) used to determine context for questions and commands. According to one embodiment of the invention, criteria handlers 152 include parsing routines that are specialized to recognize particular parts of speech, such as times, locations, movie titles, and other parts of speech. The criteria handlers 152 may identify matching phrases and extract semantic attributes from the phrases. Both the system agent 150 and the domain agents 156 may use the criteria handlers 152. The various domain agents 156 may use the services of the system agent 150 and of other, typically more specialized, domain agents 156. The system agent 150 and the domain agents 156 may use the services of the agent library 158, which contains utilities for commonly used functions. According to one embodiment of the invention, the agent library may be a dynamic link library that implements one or more agents. The agent library may include utilities for text and string handling, network communications, database lookup and management, fuzzy and probabilistic evaluation, text to speech formats, and other utilities.
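For illustration only, a criteria handler for clock times and one for place names might look like the following sketch; the patterns, class names, and gazetteer are assumptions rather than details from the specification.

```python
import re

class TimeCriteriaHandler:
    """Toy criteria handler specialized for clock times; returns the matched
    phrase along with its semantic attributes."""
    pattern = re.compile(r"\b(\d{1,2}):(\d{2})\s*(am|pm)?\b", re.IGNORECASE)

    def extract(self, utterance):
        match = self.pattern.search(utterance)
        if not match:
            return None
        hour, minute, meridiem = match.groups()
        return {"phrase": match.group(0),
                "hour": int(hour), "minute": int(minute),
                "meridiem": (meridiem or "").lower() or None}

class LocationCriteriaHandler:
    """Toy criteria handler that matches against a known gazetteer of place names."""
    def __init__(self, places):
        self.places = {p.lower() for p in places}

    def extract(self, utterance):
        for word in re.findall(r"[A-Za-z]+", utterance):
            if word.lower() in self.places:
                return {"phrase": word, "place": word.title()}
        return None

handlers = [TimeCriteriaHandler(), LocationCriteriaHandler(["Seattle", "Denver"])]
for handler in handlers:
    print(handler.extract("what is the weather in seattle at 7:30 pm"))
```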
Domain agents 156 may be data-driven, scripted or created with compiled code. A generic base agent may be used as the starting point for data-driven or scripted agents. Agents created with compiled code are typically built into dynamically linkable or loadable libraries. Developers of agents can add new functionality to the agent library 158 as required. Details of agent distribution and update, and agent creation or modification are discussed in sections below.
According to another embodiment of the invention, capabilities are provided to distribute and update system agents 150, domain agents 156, agent library 158 components, databases 102, and context description grammar 112 over wireless or wired networks 136, including IP networks and dial-up networks using the update manager 104. The network interface 116 may provide connections to one or more networks. The update manager 104 may also manage the downloading and installation of core system updates. The agent manager 154 may perform license management functions for the domain agents and the databases. The update manager 104 and agent manager 154 may perform these functions for all agents and database content including, agents and content available to all users or agents and/or content available to selected users. Examples of agent and database components added or updated on a periodic basis include:
When a user requires or selects a new domain agent 156 or database element 102, the update manager 104 may connect to the source on the network 146 through the network interface 116, and download and install the agent or data. To save system resources and to comply with any license conditions, the update manager 104 may uninstall agents that are no longer in use. In one embodiment of the invention, the update manager 104 may periodically query one or more sources of the licensed agents and database components to locate and download updates to agent executables, scripts or data as they become available. Alternatively, the agent sources may initiate the downloading of agent updates of the registered or licensed agents to the update manager as they become available.
The agent manager 154 may provide license management clients that are capable of executing most any license terms and conditions. When a particular agent 106 and/or database element 102 is selected based on a submitted command, the agent manager 154 verifies that the use of the agent or data element is within the allowed terms and conditions, and if so, invokes the agent or allows access to the data element. License management schemes that can be implemented through the agent manager 154 include outright purchase, subscription for updates, one time or limited time use. Use of shared agents and data elements (such as those down-loaded from web sites maintained by groups of domain experts) may also be managed by the agent manager 154.
If questions or commands do not match an agent that is currently loaded on the system, the agent manager 154 may search the network 146 through the network interface 116 to find a source for a suitable agent. This process may be triggered, for example, when a query is made in a domain for which an agent is not available, when a new device is added to a mobile structure, or when the behavior of a mobile device is updated. Once located, the agent can be loaded under the control of the update manager 104, within the terms and conditions of the license agreement, as enforced by the agent manager.
New commands, keywords, information, or information sources can be added to any domain agent 156 by changing agent data or scripting. These configuration capabilities may allow users and content developers to extend and modify the behavior of existing domain agents 156 or to create new domain agents 156 from a generic agent without the need to create new compiled code. Thus, the modification of the domain agents 156 may range from minor data-driven updates by even the most casual users, such as specifying the spelling of words, to development of complex behavior using the scripting language as would typically be done by a domain expert. The user can create and manage modifications to domain agents 156 through speech interface commands or using non-speech interface 114. User-specific modifications of domain agents 156 are stored in conjunction with the user's profile 110 and accessed by the domain agent 156 at run-time.
The data used to configure data driven agents 156 may be structured in a manner to facilitate efficient evaluation and to help developers with organization. This data is used not only by the agent, but also in the speech recognition engine 120, the text to speech engine 124, and the parser 118. Examples of some major categories of data include:
The way that commands and questions are interpreted, requests formulated, responses created, and results presented can be based on the user's personal or user profile 110 values. Personal profiles may include information specific to the individual, their interests, their special use of terminology, the history of their interactions with the system, domains of interest, or other factors. The personal profile data can be used by the agents 106, the speech recognition engine 120, the text to speech engine 124, and the parser 118. Preferences can include special (modified) commands, past behavior or history, questions, information sources, formats, reports, alerts or other preferences. User profile data can be manually entered by the user and/or can be learned by the system based on user behavior. User profile values can include: 1) spelling preferences; 2) date of birth for user, family and friends; 3) income level; 4) gender; 5) occupation; 6) location information such as home address, neighborhood, and business address, paths traveled, locations visited; 7) vehicle type or types; 8) vehicle operator certifications, permits or special certificates; 9) history of commands and queries; 10) telecommunications and other service providers and services; 11) financial and investment information; 12) synonyms (i.e., a nick name for someone, different terms for the same item); 13) special spelling; 14) keywords; 15) transformation or substitution variables; 16) domains of interest; or, 17) other values.
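A hypothetical profile record covering a few of these values, together with a simple substitution pass of the kind described above, might look like the following; all keys and example values are invented for illustration.

```python
# Hypothetical user profile record covering a subset of the values listed above.
user_profile = {
    "user_id": "user-42",
    "spelling_preferences": {"theatre": "theater"},
    "home_address": "123 Main St",
    "vehicle_type": "sedan",
    "command_history": ["play jazz", "traffic to work"],
    "synonyms": {"the boss": "Jane Smith"},     # nickname -> canonical name
    "domains_of_interest": ["weather", "stocks"],
    "substitutions": {"work": "456 Office Park"},
}

def apply_profile(utterance, profile):
    """Replace user-specific synonyms and substitution variables before parsing."""
    for source, target in {**profile["synonyms"], **profile["substitutions"]}.items():
        utterance = utterance.replace(source, target)
    return utterance

print(apply_profile("call the boss about traffic to work", user_profile))
# -> "call Jane Smith about traffic to 456 Office Park"
```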
End users may use the data driven agent 156 extension and modification facilities and values stored in user profiles 110 to create special reports, packages of queries, alerts and output formats. A single alert or report can be configured to use multiple data sources and other variables (i.e., time, location, measured value) to determine when alerts should be sent. For example, an alert can be generated by sampling a stock price every 15 minutes and sending an alert if the price drops below some value. In another example, an alert may be generated when a particular condition or combination of conditions occur for the vehicles. Alerts and reports can be directed to a local or remote output.
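The stock price example might be sketched as a simple polling loop such as the one below; the sampling interval, threshold, and callback names are illustrative assumptions rather than details from the specification.

```python
import time

def price_alert(get_price, threshold, notify, interval_s=15 * 60, samples=None):
    """Sample a quote source periodically and notify when the price drops below
    the threshold. get_price and notify are supplied by the caller; this loop
    is purely illustrative."""
    taken = 0
    while samples is None or taken < samples:
        price = get_price()
        if price < threshold:
            notify(f"price alert: {price:.2f} is below {threshold:.2f}")
            return
        taken += 1
        time.sleep(interval_s)

# Toy usage with canned prices and no real waiting.
quotes = iter([31.10, 30.40, 29.75])
price_alert(lambda: next(quotes), threshold=30.0, notify=print, interval_s=0, samples=3)
```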
To create a report, the user may first specify a set of commands or questions. Next, the user can create or select a format for the report. Finally the user may name the report. A report can have variable parameters. For example, a user may create a company stock report and execute the report by stating its name and the company name, which gives the user selected information in a specified format for that company. In another example, a user can create a “morning” report, which presents selected multimedia information from different sources (news, sports, traffic, weather) in the order and formats desired. In yet another example, the user can create a report on the status of one or more vehicle systems. Alerts and reports can be created using only voice commands and responses, commands and responses through the graphical user interface 114, or a combination of the two. Reports can be run locally or remotely with respect to the vehicle. To create a report, alert, or other specialized behavior, the user performs a number of steps including: 1) specifying the command to run a report or alert; 2) specifying the question or questions, including keywords, used for a query; 3) setting the criteria for running the report such as on command or when a particular condition is met; 4) defining preferred information sources; 5) defining preferences for order of result evaluation by source, value, and other parameters; 6) specifying the presentation medium for a report or alert, such as an email, the text to speech engine, a message to a pager, or a text and graphics display; and, 7) specifying the preferred format for the report, such as information to be presented, order of information to be presented, preferred abbreviations or other variable substitutions.
Filtering and noise elimination may be important in facilitating the various functionalities of system 90 and may improve operation in noisy mobile environments. Recognition and parsing of the user's speech are performed with a good signal-to-noise ratio at the input to the speech recognition engine 120. To provide acceptable results, a set of acoustic models, an array microphone 134, a filter 132, or other components, may be employed. If a good signal-to-noise ratio cannot be achieved, a noise identification algorithm may be used and the appropriate acoustic model, for example, one that has been trained in conditions similar to the identified noise, may be selected. According to one embodiment of the invention, the microphone array, filters and speech coder 138 are physically separated from the main unit 98 into a speech unit 128, and connected using a wireless link. Since bandwidth on a wireless connection is at a premium, the speech coder dynamically adapts the digitization rate and compression of the captured speech.
Some embodiments of the invention may use one or more arrays of microphones 134 to provide better directional signal capture and noise elimination than can be achieved with a single microphone. The microphone array can be one-dimensional (a linear array) or two-dimensional (a circle, square, triangle or other suitable shape). The beam pattern of the array can be fixed or made adaptive through use of analog or digital phase shifting circuitry. The pattern of the active array is steered to point in the direction of the one or more users speaking. At the same time, nulls can be added to the pattern to notch out point or limited area noise sources. The use of the array microphone also helps reduce cross talk between the detection of the user's speech and output from the text to speech engine 124 through the speaker 136 or speech from another user talking.
The invention may use an analog or digital filter 132 between the array microphone or conventional microphone 134 and the speech coder 138. The pass band of the filter can be set to optimize the signal to noise ratio at the input to the speech recognition engine 120. In some embodiments, the filter is adaptive, using band shaping combined with notch filtering to reject narrow-band noise. One embodiment employs adaptive echo cancellation in the filter. The echo cancellation helps prevent cross talk between output from the text to speech engine and detection of the user's speech, and also suppresses environmentally caused echoes. Algorithms comparing the background noise to the signal received from the user's speech may be used to optimize the band-shaping parameters of the adaptive filter.
The speech received by the array microphone 134 and passed through the filter 132 may be sent to the speech digitizer or coder 138. The speech coder may use adaptive lossy audio compression to optimize bandwidth requirements for the transmission of the coded speech to the speech recognition engine 120 over a wireless link. The lossy coding is optimized to preserve only the components of the speech signal required for optimal recognition. Further, the lossy compression algorithms that may be used are designed to prevent even momentary gaps in the signal stream, which can cause errors in the speech recognition engine. The digitized speech may be buffered in the coder and the coder may adapt the output data rate to optimize the use of the available bandwidth. The use of the adaptive speech coder is particularly advantageous when a band-limited wireless link is used between the coder and the speech recognition engine.
The microphone can be complemented with an analog or digital (i.e., Voice over IP) speech interface. This interface allows a remote user to connect to the system and interact with it in the same manner as would be possible if they were physically present.
In an alternative embodiment, the array microphone can be replaced by a set of physically distributed microphones or headsets worn by the users. The distributed microphones can be placed in different parts of a vehicle, different parts of a room or in different rooms of a building. The distributed microphones can create a three-dimensional array to improve signal to noise ratio. The headset can use a wireless or wired connection.
While the invention is intended to be able to accept most any natural language question or command, ambiguity may be a problem. To assist users in formulating concise questions and commands, the system can support a voice query language. The language may be structured to allow a variety of queries and commands with minimal ambiguity. Thus, the voice query language helps users clearly specify the keywords or contexts of the question or command along with the parameters or criteria. The language can provide a grammar to clearly specify the keyword used to determine the context and present a set of one or more criteria or parameters. A user asking a question or stating a command in the voice query language may nearly always be guaranteed to receive a response.
The voice query language may be sensitive to the contents of the context stack, wherein a context defines a set of questions that can be activated or deactivated during a conversation. According to one embodiment, each agent may designate one context to be the root context that defines base algorithms that the agent implements. Thus, follow-on questions can be asked using an abbreviated grammar, since key words and criteria can be inherited from the context stack. For example, the user can simply ask about another keyword if the criteria of the question remain constant.
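A minimal sketch of such a context stack, in which a follow-on question inherits the criteria of the previous question unless they are restated, might look like the following; the class and method names are invented for illustration.

```python
class ContextStack:
    """Toy context stack: each entry carries the keyword that set the context and
    the criteria supplied so far, so follow-on questions can inherit them."""
    def __init__(self):
        self.stack = []

    def push(self, keyword, criteria):
        self.stack.append({"keyword": keyword, "criteria": dict(criteria)})

    def resolve(self, keyword=None, **new_criteria):
        """Build a full question from an abbreviated follow-on by inheriting
        anything the user did not restate."""
        inherited = dict(self.stack[-1]["criteria"]) if self.stack else {}
        inherited.update(new_criteria)
        keyword = keyword or (self.stack[-1]["keyword"] if self.stack else None)
        self.push(keyword, inherited)
        return {"keyword": keyword, "criteria": inherited}

ctx = ContextStack()
ctx.push("weather", {"location": "Seattle", "date": "tomorrow"})
# Follow-on: "what about Denver?" -- only the location changes.
print(ctx.resolve(location="Denver"))
# {'keyword': 'weather', 'criteria': {'location': 'Denver', 'date': 'tomorrow'}}
```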
The system 90 may provide built-in training capabilities to help users learn the best methods to formulate their questions and commands. The interactive training allows users to hear or see the machine interpretation of their queries and provides suggestions on how to better structure a query. Using the interactive training, users can quickly become comfortable with the voice query language and, at the same time, learn how to optimize the amount of information required with each step of a dialog.
The output of the speech coder 122 may be fed to the speech recognition engine 120. The speech recognition engine 120 recognizes words and phrases, using information in the context description grammar 112, and passes these to the parser 118 for interpretation. The speech recognition engine 120 may determine the user's identity by voice and name for each utterance. Recognized words and phrases may be tagged with this identity in all further processing. Thus, as multiple users engage in overlapping sessions, the tags added by the speech recognition engine 120 to each utterance allow other components of the system 90 to tie that utterance to the correct user and dialog. The user recognition capability may further be used as a security measure for applications, such as auctions or online shopping, where this is required. Voice characteristics of each user may be contained in the user profile 110.
According to one embodiment of the invention, users may start a dialog with the system 90 when they first address it. This can be done by speaking a generic word (“computer”) or addressing a specific name (“Fred”), which may be generally tied to a system personality 108. Once the user starts the dialog, it may be recognized by the speech recognition engine 120, using unique characteristics of the user's speech. At the end of a dialog or to interrupt a dialog, the user may utter a dismissal word (“good bye”).
According to another embodiment of the invention, the system 90 may employ a speech recognition engine 120 that gains improved word recognition accuracy using data from context description grammar 112, user profiles 110, and the agents 106, among other components. At the same time, the fuzzy set possibilities or prior probabilities for the words in the context description grammar may be dynamically updated to maximize the probability of correct recognition at each stage of the dialog. The probabilities or possibilities may be dynamically updated based on a number of criteria including the application domain, the questions or commands, contexts, the user profile and preferences, user dialog history, the recognizer dictionary and phrase tables, and word spellings, among other criteria.
For uncommon words or new vocabulary words, the user may be given the option to spell the words. The spelling can be done by saying the names of the letters or by using a phonetic alphabet. The phonetic alphabet can be a default one or one of the user's choosing.
Alternatively, when a user submits a word that is not recognized at all or is not correctly recognized by the speech recognition engine 120, then the user may be asked to spell the word. The speech recognition engine 120 determines this condition based on confidence level for the scoring process. The word may be looked up in a dictionary and the pronunciation for the word is added to either the dictionary, the agent 106, or the user's profile 110. The word pronunciation can then be associated with the domain, the question, the context and the user. Through this process, the speech recognition engine learns with time and improves in accuracy. To assist users in spelling words, an individualized phonetic alphabet can be used. Each user can modify the standard phonetic alphabets with words, which they can remember more easily.
Once the words and phrases have been recognized by the speech recognition engine 120, the tokens and user identification may be passed to the parser 118. The parser examines the tokens for the questions or commands, context and criteria. The parser may determine a context for an utterance by applying prior probabilities or fuzzy possibilities to keyword matching, user profile 110, dialog history, and context stack contents. The context of a question or command may determine the domain and, thereby, the domain agent 156, if any, to be invoked. For example, a question with the keyword “temperature” implies a context value of weather for the question. Within a different dialog, the keyword “temperature” can imply a context for a measurement. The parser dynamically receives keyword and associated prior probability or fuzzy possibility updates from the system agent 150 or an already active domain agent 156. Based on these probabilities or possibilities the possible contexts are scored and the top one or few are used for further processing.
The parser 118 uses a scoring system to determine the most likely context or domain for a user's question and/or command. The score is determined by weighing a number of factors, including the user profile 110, the domain agent's data content, and the previous context. Based on this scoring, the system 90 invokes the correct agent. If the confidence level of the score is not high enough to ensure a reliable response, the system 90 may ask the user to verify whether the question and/or command is correctly understood.
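Purely as an illustration of this kind of weighted scoring with a confidence check, consider the following sketch; the weights, threshold, and agent keyword sets are arbitrary assumptions.

```python
def score_contexts(tokens, agents, profile_boost, previous_context):
    """Weigh keyword matches, user-profile interest, and continuity with the
    previous context to rank candidate contexts; the weights are arbitrary."""
    scores = {}
    for name, keywords in agents.items():
        score = 2.0 * len(keywords & set(tokens))
        score += profile_boost.get(name, 0.0)
        if name == previous_context:
            score += 1.0
        scores[name] = score
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

agents = {"weather": {"temperature", "forecast"}, "measurement": {"temperature", "sensor"}}
ranking = score_contexts(["temperature", "tomorrow"], agents,
                         profile_boost={"weather": 0.5}, previous_context="weather")
best, best_score = ranking[0]
if best_score < 3.0:                      # arbitrary confidence threshold
    print("Did I understand that you want", best, "information?")
else:
    print("invoking agent:", best)
```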
In general, the question that is asked by the system 90 may be phrased to indicate the context of the question including all criteria or parameters. For example, the question can be in the form of: “Did I understand that you want such-and-such?” If the user confirms that the question is correct the system proceeds to produce a response. Otherwise, the user can rephrase the original question, perhaps adding additional information to remove ambiguity, or the system can ask one or more questions to attempt to resolve the ambiguity.
Once the context for the question or command has been determined, the parser 118 can invoke the correct agent 156, 150. To formulate a question or command in the regular grammar used by agents, the parser will preferably determine required and optional values for the criteria or parameters. These criteria may have been explicitly supplied by the user or may need to be inferred. The parser may make use of the criteria handlers 152 supplied by the system agent. The criteria handlers can provide context sensitive procedures for extracting the criteria or parameters from the user's question or command. Some criteria may be determined by executing algorithms in the agent, while others may be determined by applying probabilistic or fuzzy reasoning to tables of possible values. Prior probabilities or fuzzy possibilities and associated values may be received from a number of sources including, for example, the history of the dialog, the user profile 110, and the agent. Based on user responses, the prior probabilities or fuzzy possibilities may be updated as the system learns the desired behavior. For a weather context, examples of criteria include location, date and time. Other criteria can include command criteria (i.e., yes/no, on/off, pause, stop), and spelling. Special criteria handlers are available from the system agent for processing lists, tables, barge-in commands, long strings of text and system commands.
The criteria handlers 152 can operate iteratively or recursively on the criteria extracted to eliminate ambiguity. This processing may help reduce the ambiguity in the user's question or command. For example, if the user has a place name (or other proper noun) in their utterance, the parser 118 can use services of the domain agent 156 to look up tables in the databases 102 for place names or can attempt to determine which word is the proper noun from the syntax of the utterance. In another example, the user asks, “what about flight one hundred and twenty too?” The parser and domain agent use flight information in the database and network information along with context to determine the most plausible interpretation among: flight 100 and flight 20 also; flight 100 and flight 22; flight 122; etc.
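The flight number example might be disambiguated roughly as follows, preferring the reading best supported by the flight database; the candidate readings and the scoring rule are illustrative assumptions.

```python
def candidate_flights(spoken_reference):
    """Stub: a full implementation would derive these readings from the spoken
    number words; here the candidates are listed directly for illustration."""
    return [["flight 122"], ["flight 100", "flight 22"], ["flight 100", "flight 20"]]

def most_plausible(candidates, known_flights):
    """Prefer the interpretation in which every referenced flight actually exists."""
    scored = []
    for reading in candidates:
        hits = sum(1 for flight in reading if flight in known_flights)
        scored.append((hits / len(reading), reading))
    scored.sort(reverse=True)
    return scored[0][1]

known_flights = {"flight 122", "flight 20"}
print(most_plausible(candidate_flights("one hundred and twenty too"), known_flights))
# ['flight 122']  -- the only reading fully backed by the flight database
```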
Once the context and the criteria are determined, the parser 118 may form the question or command in a standard format or hierarchical data structure used for processing by the agents 150, 156. The parser 118 may fill in all required and some optional tokens for the grammar of the context. Often the tokens must be transformed to values and forms acceptable to the agents. The parser obtains the required transformations from the agents, dialog history or user profile 110. Examples of transformations or substitutions performed by the parser on tokens include: 1) substituting a stock symbol for a company name or abbreviation; 2) substituting a numerical value for a word or words; 3) adding a zip code to an address; and, 4) changing a place or other name to a commonly used standard abbreviation.
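A few of these token transformations might be sketched as simple lookup tables, as below; the symbol, number, and abbreviation tables are examples rather than data from the specification.

```python
# Illustrative transformation tables.
STOCK_SYMBOLS = {"international business machines": "IBM", "big blue": "IBM"}
NUMBER_WORDS = {"one": 1, "two": 2, "three": 3, "ten": 10}
STATE_ABBREVIATIONS = {"washington": "WA", "california": "CA"}

def transform_token(token):
    """Apply the kinds of substitutions described above: company name -> ticker,
    number word -> numeric value, place name -> standard abbreviation."""
    key = token.lower()
    if key in STOCK_SYMBOLS:
        return STOCK_SYMBOLS[key]
    if key in NUMBER_WORDS:
        return NUMBER_WORDS[key]
    if key in STATE_ABBREVIATIONS:
        return STATE_ABBREVIATIONS[key]
    return token

print([transform_token(t) for t in ["big blue", "ten", "washington", "shares"]])
# ['IBM', 10, 'WA', 'shares']
```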
The agents 150, 156 may receive a command or question once the parser 118 has placed it in the required standard format. Based on the context, the parser can invoke the correct agent to process the question or command.
Commands can be directed to the system or to an external entity. System commands are generally directed to the system agent 150. Commands for external entities are generally processed by a domain agent 156, which includes the command context and behavior for the external entity.
Specific questions may be generally directed to one of the domain agents 156. The real-time selection of the correct agent allows the invention to dynamically switch contexts. Based on the question, command or context and the parameters or criteria, the domain agent may create one or more queries to one or more local or external information sources. Questions can be objective or subjective in nature. Results for objective questions can often be obtained by structured queries to one or more local or network information sources. Even for objective questions, the system may need to apply probabilistic or fuzzy set analysis to deal with cases of conflicting information or incomplete information. Information to answer subjective questions is generally obtained by one or more ad-hoc queries to local or network data sources, followed by probabilistic or fuzzy set evaluation of the results to determine a best answer.
Once the domain agent 156 has formulated the one or more queries, they may be sent to local and/or network information sources. The queries may be performed in an asynchronous manner to account for the fact that sources respond at different speeds or may fail to respond at all. Duplicate queries can be sent to different information sources to ensure that at least one source responds with a useful result in a timely manner. Further, if multiple results are received in a timely manner, they can be scored by the system to determine which data is most reliable or appropriate. Examples of data sources accommodated include HTTP data sources, sources with meta-data in various formats including XML, measurement data from sensors using various formats, device 32 setting parameters, entertainment audio, video and game files including MP3, databases using query languages and structured responses such as SQL, and other data sources.
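One way to sketch such asynchronous, duplicated queries with a timeout and a reliability score is shown below; the source names, delays, and scoring rule are illustrative assumptions.

```python
import asyncio

async def query_source(name, delay_s, payload, reliability):
    """Stand-in for a local or network query; real sources respond at different speeds."""
    await asyncio.sleep(delay_s)
    return {"source": name, "payload": payload, "reliability": reliability}

async def best_result(timeout_s=1.0):
    # Duplicate, overlapping queries so that at least one source answers in time.
    tasks = [
        asyncio.create_task(query_source("local cache", 0.05, "high 72", reliability=0.6)),
        asyncio.create_task(query_source("web service", 0.30, "high 73", reliability=0.9)),
        asyncio.create_task(query_source("slow mirror", 5.00, "high 71", reliability=0.8)),
    ]
    done, pending = await asyncio.wait(tasks, timeout=timeout_s)
    for task in pending:
        task.cancel()                      # drop sources that failed to answer in time
    results = [task.result() for task in done]
    if not results:
        return None                        # no source responded; the caller must cope
    return max(results, key=lambda r: r["reliability"])

print(asyncio.run(best_result()))          # the web service result wins on reliability
```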
The local information sources can be stored in one or more system databases 102 or can be on any local data storage such as a set of CDs or DVDs in a player or other local data storage. In other cases, local information can be obtained from vehicle system settings or measurement devices. Network information sources can be connected to the control and device interfaces 30, the data interfaces 26, the Internet 42 or other network and accessed through a series of plug-ins or adaptors, known as pluggable sources, in the network interface 116. The pluggable sources are capable of executing the protocols and interpreting the data formats for the data sources of interest. The pluggable source provides information scraping forms and procedures for each source to the domain agents 156. If a new type of data source is to be used, a new plug-in or adaptor can be added to the appropriate interface.
The domain agent 156 can evaluate the results of the one or more queries as they arrive. The domain agent may score the relevance of the results based on results already received, the context, the criteria, the history of the dialog, the user profile 110 and domain specific information using probabilistic or fuzzy scoring techniques. Part of the dialog history is maintained in a context stack. The weight of each context for the scoring may be based on the relevance of one context to another and the age of the contexts. Other scoring variables can be associated through the context stack. Contexts can also be exclusive, so that previous contexts have no weight in the scoring.
Based on the on-going scoring processes, the domain agent 156 may determine if a single best answer can be extracted. For most questions, the desired result may include a set of tokens that must be found to formulate an answer. Once a value has been found for each of these tokens, the results are ready for presentation to the user. For example, for a question on weather, the tokens can include the date, day of week, predicted high temperature, predicted low temperature, chance of precipitation, expected cloud cover, expected type of precipitation and other tokens. Results processed in this manner may include error messages. For subjective questions, this determination is made by determining a most likely answer or answers, extracted by matching of the results received. If no satisfactory answer can be inferred from the results of the query, the agent can do one of the following:
In any case, the domain agent 156 may continue to make queries and evaluate results until a satisfactory response is constructed. In doing so, the agent can start several overlapping query paths or threads of inquiry, typically mediated by the event manager 100. This technique, combined with the use of asynchronous queries from multiple data sources provides the real-time response performance required for a natural interaction with the user.
The domain agent 156 may apply conditional scraping operations to each query response as it is received. The conditional scraping actions may depend on the context, the criteria, user profile 110, and domain agent coding and data. For each token to be extracted, a scraping criteria 152 can be created using the services of the system agent 150. The scraping criteria may use format specific scraping methods including tables, lists, text, and other methods. One or more scraping criteria can be applied to a page or results set. Once additional results are received, the domain agent can create new scraping criteria to apply to results already acquired. The conditional scraping process removes extraneous information, such as graphics, which need not be further processed or stored, improving system performance.
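A conditional scraping pass might, as a rough sketch, apply per-token criteria with format-specific methods such as the following; the criteria structure and page content are invented for illustration.

```python
import re

def scrape(response_text, criteria):
    """Apply format-specific scraping criteria to one query response and keep only
    the requested tokens; everything else (markup, graphics references) is dropped."""
    tokens = {}
    for name, spec in criteria.items():
        if spec["format"] == "regex":
            match = re.search(spec["pattern"], response_text)
            if match:
                tokens[name] = match.group(1)
        elif spec["format"] == "list":
            items = [line.strip("- ").strip() for line in response_text.splitlines()
                     if line.lstrip().startswith("-")]
            if items:
                tokens[name] = items
    return tokens

page = """<html><img src='banner.gif'>
Forecast for Tuesday
- High 72
- Low 48
Chance of rain 20%</html>"""
criteria = {
    "precipitation": {"format": "regex", "pattern": r"rain (\d+%)"},
    "temperatures":  {"format": "list"},
}
print(scrape(page, criteria))
# {'precipitation': '20%', 'temperatures': ['High 72', 'Low 48']}
```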
Specific commands are generally directed to one of the domain agents 156. The real-time selection of the correct agent allows the invention to dynamically switch contexts. Command-oriented domain agents 156 evaluate the command and the state of vehicle systems, system capabilities, and measurements to determine if the command can be executed at all or if doing so will exceed operating or safety limits. If the command is ambiguous or cannot be executed for some other reason, the system may ask the user for more information or may suggest what the problem is and a likely approach to the solution. The domain agent may format the command for the specific device 32 and control and device interface 30. This formatting may involve variable substitution, inference of missing values and other formatting. Variable substitution and inference depends on the command context, the user profile 110, command history, state of vehicle systems and measured values, and other factors. A complex command can result in more atomic commands being sent to multiple devices, perhaps in a sequence. The sequence and nature of subsequent commands may depend on the previous commands, results of previous commands, device settings and other measurements. As a command is executed, measurements are made and results collected to determine if the execution was correct and the desired state or states were reached.
Once the domain agent 156 has created a satisfactory response to a question, or to a command, the agent may format that response for presentation. Typically, the domain agent can format the response into the markup format used by the text to speech engine 124. The domain agent may format the result presentation using available format templates and based on the context, the criteria, and the user profile 110. The domain agent may perform variable substitutions and transformations to produce a response best understood and most natural to the user. The domain agent may vary the order of presentation of tokens and the exact terminology used to create a more natural response to the user. The domain agent may also select the presentation personality 108 to be used.
For both command and query responses, the domain agent 156 may select the presentation template, determine order of presentation for tokens and determine variable substitutions and transformations using probabilistic or fuzzy set decision methods. The template used to form the presentation can be from the domain agent itself or from the user profile 110. The user profile can completely specify the presentation format or can be used to select and then modify an existing presentation format. Selection and formatting of presentation template can also depend on the presentation personality 108. At the same time, the characteristics of the personality used for the response are dynamically determined using probabilities or fuzzy possibilities derived from the context, the criteria, the domain agent itself and the user profile 110.
The domain agent 156 may apply a number of transformations to the tokens before presentation to the user. These variable substitutions and transformations may be derived from a number of sources including, domain information carried by the agent, the context, the token values, the criteria, the personality 108 to be used, and the user profile 110. Examples of variable substitutions and transformations include: 1) substitution of words for numbers; 2) substitution of names for acronyms or symbols (i.e., trading symbols); 3) use of formatting information derived from the information sources (i.e., HTML tags); 4) nature of the response including, text, long text, list, table; 5) possible missing information or errors; 6) units for measurement (i.e., English or metric); and, 7) preferred terminology from the user profile or presentation personality 108.
The invention may provide special purpose presentation capabilities for long text strings, tables, lists and other large results sets. Domain agents 156 may use special formatting templates for such results. The system agent 150 can provide special criteria handlers 152 for presentation and user commands for large results sets. The presentation templates used by the domain agents for large results sets typically include methods for summarizing the results and then allowing the user to query the result in more detail. For example, initially only short summaries, such as headlines or key numbers, are presented. The user can then query the results set further. The criteria handlers provide users with the capability to browse large results sets. Commands provided by the criteria handlers for large results sets include stop, pause, skip, rewind, start, and forward.
Some information, in formats such as video, pictures and graphics, may be best presented in a displayed format. The domain agents 156 apply suitable presentation templates in these cases and present the information through the non-speech interface 114. The system agent 150 provides special criteria handlers 152 for presentation and user commands for display presentation and control.
Although particular embodiments of the invention have been shown and described, it will be understood that it is not intended to limit the invention to the embodiments that are disclosed and it will be obvious to those skilled in the art that various changes and modifications may be made without departing from the spirit and scope of the invention. Thus, the invention is intended to cover alternatives, modifications, and equivalents, which may be included within the spirit and scope of the invention as defined by the claims.
This application is a continuation of U.S. patent application Ser. No. 13/898,045, filed on May 20, 2013, which is a continuation of U.S. patent application Ser. No. 13/488,299, filed on Jun. 4, 2012 (which issued as U.S. Pat. No. 8,447,607 on May 21, 2013), which is a continuation of U.S. patent application Ser. No. 13/084,197, filed on Apr. 11, 2011 (which issued as U.S. Pat. No. 8,195,468 on Jun. 5, 2012), which is a divisional of U.S. patent application Ser. No. 11/212,693, entitled “Mobile Systems and Methods of Supporting Natural Language Human-Machine Interactions,” filed Aug. 29, 2005 (which issued as U.S. Pat. No. 7,949,529 on May 24, 2011), the contents of which are hereby incorporated by reference in their entirety.
6631351 | Ramachandran et al. | Oct 2003 | B1 |
6633846 | Bennett et al. | Oct 2003 | B1 |
6636790 | Lightner et al. | Oct 2003 | B1 |
6643620 | Contolini et al. | Nov 2003 | B1 |
6650747 | Bala et al. | Nov 2003 | B1 |
6658388 | Kleindienst et al. | Dec 2003 | B1 |
6678680 | Woo | Jan 2004 | B1 |
6681206 | Gorin et al. | Jan 2004 | B1 |
6691151 | Cheyer et al. | Feb 2004 | B1 |
6701294 | Ball et al. | Mar 2004 | B1 |
6704396 | Parolkar et al. | Mar 2004 | B2 |
6704576 | Brachman et al. | Mar 2004 | B1 |
6704708 | Pickering | Mar 2004 | B1 |
6707421 | Drury et al. | Mar 2004 | B1 |
6708150 | Hirayama et al. | Mar 2004 | B1 |
6721001 | Berstis | Apr 2004 | B1 |
6721633 | Funk et al. | Apr 2004 | B2 |
6721706 | Strubbe et al. | Apr 2004 | B1 |
6726636 | Der Ghazarian et al. | Apr 2004 | B2 |
6735592 | Neumann et al. | May 2004 | B1 |
6739556 | Langston | May 2004 | B1 |
6741931 | Kohut et al. | May 2004 | B1 |
6742021 | Halverson et al. | May 2004 | B1 |
6745161 | Arnold et al. | Jun 2004 | B1 |
6751591 | Gorin et al. | Jun 2004 | B1 |
6751612 | Schuetze et al. | Jun 2004 | B1 |
6754485 | Obradovich et al. | Jun 2004 | B1 |
6754627 | Woodward | Jun 2004 | B2 |
6757544 | Rangarajan et al. | Jun 2004 | B2 |
6757718 | Halverson et al. | Jun 2004 | B1 |
6795808 | Strubbe et al. | Sep 2004 | B1 |
6801604 | Maes et al. | Oct 2004 | B2 |
6801893 | Backfried et al. | Oct 2004 | B1 |
6813341 | Mahoney | Nov 2004 | B1 |
6829603 | Chai et al. | Dec 2004 | B1 |
6832230 | Zilliacus et al. | Dec 2004 | B1 |
6833848 | Wolff et al. | Dec 2004 | B1 |
6850603 | Eberle et al. | Feb 2005 | B1 |
6856990 | Barile et al. | Feb 2005 | B2 |
6865481 | Kawazoe et al. | Mar 2005 | B2 |
6868380 | Kroeker | Mar 2005 | B2 |
6868385 | Gerson | Mar 2005 | B1 |
6873837 | Yoshioka et al. | Mar 2005 | B1 |
6877001 | Wolf et al. | Apr 2005 | B2 |
6877134 | Fuller et al. | Apr 2005 | B1 |
6901366 | Kuhn et al. | May 2005 | B1 |
6910003 | Arnold et al. | Jun 2005 | B1 |
6912498 | Stevens et al. | Jun 2005 | B2 |
6915126 | Mazzara, Jr. | Jul 2005 | B2 |
6928614 | Everhart | Aug 2005 | B1 |
6934756 | Maes | Aug 2005 | B2 |
6937977 | Gerson | Aug 2005 | B2 |
6937982 | Kitaoka et al. | Aug 2005 | B2 |
6944594 | Busayapongchai et al. | Sep 2005 | B2 |
6950821 | Faybishenko et al. | Sep 2005 | B2 |
6954755 | Reisman | Oct 2005 | B2 |
6959276 | Droppo et al. | Oct 2005 | B2 |
6961700 | Mitchell et al. | Nov 2005 | B2 |
6963759 | Gerson | Nov 2005 | B1 |
6964023 | Maes et al. | Nov 2005 | B2 |
6968311 | Knockeart et al. | Nov 2005 | B2 |
6973387 | Masclet et al. | Dec 2005 | B2 |
6975993 | Keiller | Dec 2005 | B1 |
6980092 | Turnbull et al. | Dec 2005 | B2 |
6983055 | Luo | Jan 2006 | B2 |
6990513 | Belfiore et al. | Jan 2006 | B2 |
6996531 | Korall et al. | Feb 2006 | B2 |
7003463 | Maes et al. | Feb 2006 | B1 |
7016849 | Arnold et al. | Mar 2006 | B2 |
7020609 | Thrift et al. | Mar 2006 | B2 |
7024364 | Guerra et al. | Apr 2006 | B2 |
7027586 | Bushey et al. | Apr 2006 | B2 |
7027975 | Pazandak et al. | Apr 2006 | B1 |
7035415 | Belt et al. | Apr 2006 | B2 |
7036128 | Julia et al. | Apr 2006 | B1 |
7043425 | Pao | May 2006 | B2 |
7054817 | Shao | May 2006 | B2 |
7058890 | George et al. | Jun 2006 | B2 |
7062488 | Reisman | Jun 2006 | B1 |
7069220 | Coffman et al. | Jun 2006 | B2 |
7072834 | Zhou | Jul 2006 | B2 |
7076362 | Ohtsuji et al. | Jul 2006 | B2 |
7082469 | Gold et al. | Jul 2006 | B2 |
7085708 | Manson | Aug 2006 | B2 |
7092928 | Elad et al. | Aug 2006 | B1 |
7107210 | Deng et al. | Sep 2006 | B2 |
7107218 | Preston | Sep 2006 | B1 |
7110951 | Lemelson et al. | Sep 2006 | B1 |
7127400 | Koch | Oct 2006 | B2 |
7130390 | Abburi | Oct 2006 | B2 |
7136875 | Anderson et al. | Nov 2006 | B2 |
7137126 | Coffman et al. | Nov 2006 | B1 |
7143037 | Chestnut | Nov 2006 | B1 |
7143039 | Stifelman et al. | Nov 2006 | B1 |
7146319 | Hunt | Dec 2006 | B2 |
7149696 | Shimizu et al. | Dec 2006 | B2 |
7165028 | Gong | Jan 2007 | B2 |
7170993 | Anderson et al. | Jan 2007 | B2 |
7171291 | Obradovich | Jan 2007 | B2 |
7174300 | Bush | Feb 2007 | B2 |
7177798 | Hsu et al. | Feb 2007 | B2 |
7184957 | Brookes et al. | Feb 2007 | B2 |
7190770 | Ando et al. | Mar 2007 | B2 |
7197069 | Agazzi et al. | Mar 2007 | B2 |
7197460 | Gupta et al. | Mar 2007 | B1 |
7203644 | Anderson et al. | Apr 2007 | B2 |
7206418 | Yang et al. | Apr 2007 | B2 |
7207011 | Mulvey et al. | Apr 2007 | B2 |
7215941 | Beckmann et al. | May 2007 | B2 |
7228276 | Omote et al. | Jun 2007 | B2 |
7231343 | Treadgold et al. | Jun 2007 | B1 |
7236923 | Gupta | Jun 2007 | B1 |
7254482 | Kawasaki et al. | Aug 2007 | B2 |
7272212 | Eberle et al. | Sep 2007 | B2 |
7277854 | Bennett et al. | Oct 2007 | B2 |
7283829 | Christenson et al. | Oct 2007 | B2 |
7283951 | Marchisio et al. | Oct 2007 | B2 |
7289606 | Sibal et al. | Oct 2007 | B2 |
7299186 | Kuzunuki et al. | Nov 2007 | B2 |
7301093 | Sater et al. | Nov 2007 | B2 |
7305381 | Poppink et al. | Dec 2007 | B1 |
7321850 | Wakita | Jan 2008 | B2 |
7328155 | Endo et al. | Feb 2008 | B2 |
7337116 | Charlesworth et al. | Feb 2008 | B2 |
7340040 | Saylor et al. | Mar 2008 | B1 |
7366285 | Parolkar et al. | Apr 2008 | B2 |
7366669 | Nishitani et al. | Apr 2008 | B2 |
7376645 | Bernard | May 2008 | B2 |
7386443 | Parthasarathy et al. | Jun 2008 | B1 |
7398209 | Kennewick et al. | Jul 2008 | B2 |
7406421 | Odinak et al. | Jul 2008 | B2 |
7415414 | Azara et al. | Aug 2008 | B2 |
7421393 | Di Fabbrizio et al. | Sep 2008 | B1 |
7424431 | Greene et al. | Sep 2008 | B2 |
7447635 | Konopka et al. | Nov 2008 | B1 |
7451088 | Ehlen et al. | Nov 2008 | B1 |
7454608 | Gopalakrishnan et al. | Nov 2008 | B2 |
7461059 | Richardson et al. | Dec 2008 | B2 |
7472020 | Brulle-Drews | Dec 2008 | B2 |
7472060 | Gorin et al. | Dec 2008 | B1 |
7472075 | Odinak et al. | Dec 2008 | B2 |
7477909 | Roth | Jan 2009 | B2 |
7478036 | Shen et al. | Jan 2009 | B2 |
7487088 | Gorin et al. | Feb 2009 | B1 |
7487110 | Bennett et al. | Feb 2009 | B2 |
7493259 | Jones et al. | Feb 2009 | B2 |
7493559 | Wolff et al. | Feb 2009 | B1 |
7502672 | Kolls | Mar 2009 | B1 |
7502738 | Kennewick et al. | Mar 2009 | B2 |
7516076 | Walker et al. | Apr 2009 | B2 |
7529675 | Maes | May 2009 | B2 |
7536297 | Byrd et al. | May 2009 | B2 |
7536374 | Au | May 2009 | B2 |
7542894 | Murata | Jun 2009 | B2 |
7546382 | Healey et al. | Jun 2009 | B2 |
7548491 | Macfarlane | Jun 2009 | B2 |
7552054 | Stifelman et al. | Jun 2009 | B1 |
7558730 | Davis et al. | Jul 2009 | B2 |
7574362 | Walker et al. | Aug 2009 | B2 |
7577244 | Taschereau | Aug 2009 | B2 |
7606708 | Hwang | Oct 2009 | B2 |
7620549 | Di Cristo et al. | Nov 2009 | B2 |
7634409 | Kennewick et al. | Dec 2009 | B2 |
7640006 | Portman et al. | Dec 2009 | B2 |
7640160 | Di Cristo et al. | Dec 2009 | B2 |
7640272 | Mahajan et al. | Dec 2009 | B2 |
7676365 | Hwang et al. | Mar 2010 | B2 |
7676369 | Fujimoto et al. | Mar 2010 | B2 |
7684977 | Morikawa | Mar 2010 | B2 |
7693720 | Kennewick et al. | Apr 2010 | B2 |
7729916 | Coffman et al. | Jun 2010 | B2 |
7729918 | Walker et al. | Jun 2010 | B2 |
7729920 | Chaar et al. | Jun 2010 | B2 |
7734287 | Ying | Jun 2010 | B2 |
7748021 | Obradovich | Jun 2010 | B2 |
7788084 | Brun et al. | Aug 2010 | B2 |
7801731 | Odinak et al. | Sep 2010 | B2 |
7809570 | Kennewick et al. | Oct 2010 | B2 |
7818176 | Freeman et al. | Oct 2010 | B2 |
7831426 | Bennett | Nov 2010 | B2 |
7831433 | Belvin et al. | Nov 2010 | B1 |
7856358 | Ho | Dec 2010 | B2 |
7873519 | Bennett | Jan 2011 | B2 |
7873523 | Potter et al. | Jan 2011 | B2 |
7873654 | Bernard | Jan 2011 | B2 |
7881936 | Longe et al. | Feb 2011 | B2 |
7890324 | Bangalore et al. | Feb 2011 | B2 |
7894849 | Kass et al. | Feb 2011 | B2 |
7902969 | Obradovich | Mar 2011 | B2 |
7917367 | Di Cristo et al. | Mar 2011 | B2 |
7920682 | Byrne et al. | Apr 2011 | B2 |
7949529 | Weider et al. | May 2011 | B2 |
7949537 | Walker et al. | May 2011 | B2 |
7953732 | Frank et al. | May 2011 | B2 |
7974875 | Quilici et al. | Jul 2011 | B1 |
7983917 | Kennewick et al. | Jul 2011 | B2 |
7984287 | Gopalakrishnan et al. | Jul 2011 | B2 |
8005683 | Tessel et al. | Aug 2011 | B2 |
8015006 | Kennewick et al. | Sep 2011 | B2 |
8060367 | Keaveney | Nov 2011 | B2 |
8069046 | Kennewick et al. | Nov 2011 | B2 |
8073681 | Baldwin et al. | Dec 2011 | B2 |
8077975 | Ma et al. | Dec 2011 | B2 |
8082153 | Coffman et al. | Dec 2011 | B2 |
8086463 | Ativanichayaphong et al. | Dec 2011 | B2 |
8112275 | Kennewick et al. | Feb 2012 | B2 |
8140327 | Kennewick et al. | Mar 2012 | B2 |
8140335 | Kennewick et al. | Mar 2012 | B2 |
8145489 | Freeman et al. | Mar 2012 | B2 |
8150694 | Kennewick et al. | Apr 2012 | B2 |
8155962 | Kennewick et al. | Apr 2012 | B2 |
8170867 | Germain | May 2012 | B2 |
8195468 | Weider et al. | Jun 2012 | B2 |
8219399 | Lutz et al. | Jul 2012 | B2 |
8219599 | Tunstall-Pedoe | Jul 2012 | B2 |
8224652 | Wang et al. | Jul 2012 | B2 |
8255224 | Singleton et al. | Aug 2012 | B2 |
8326627 | Kennewick et al. | Dec 2012 | B2 |
8326634 | Di Cristo et al. | Dec 2012 | B2 |
8326637 | Baldwin et al. | Dec 2012 | B2 |
8332224 | Di Cristo et al. | Dec 2012 | B2 |
8370147 | Kennewick et al. | Feb 2013 | B2 |
8447607 | Weider et al. | May 2013 | B2 |
8452598 | Kennewick et al. | May 2013 | B2 |
8509403 | Chiu et al. | Aug 2013 | B2 |
8515765 | Baldwin et al. | Aug 2013 | B2 |
8527274 | Freeman et al. | Sep 2013 | B2 |
8589161 | Kennewick et al. | Nov 2013 | B2 |
8620659 | Di Cristo et al. | Dec 2013 | B2 |
8719009 | Baldwin et al. | May 2014 | B2 |
8719026 | Kennewick et al. | May 2014 | B2 |
8738380 | Baldwin et al. | May 2014 | B2 |
20010041980 | Howard et al. | Nov 2001 | A1 |
20010049601 | Kroeker et al. | Dec 2001 | A1 |
20010054087 | Flom et al. | Dec 2001 | A1 |
20020015500 | Belt et al. | Feb 2002 | A1 |
20020022927 | Lemelson et al. | Feb 2002 | A1 |
20020029261 | Shibata | Mar 2002 | A1 |
20020032752 | Gold et al. | Mar 2002 | A1 |
20020035501 | Handel et al. | Mar 2002 | A1 |
20020040297 | Tsiao et al. | Apr 2002 | A1 |
20020049535 | Rigo et al. | Apr 2002 | A1 |
20020049805 | Yamada et al. | Apr 2002 | A1 |
20020065568 | Silfvast et al. | May 2002 | A1 |
20020067839 | Heinrich | Jun 2002 | A1 |
20020069059 | Smith | Jun 2002 | A1 |
20020069071 | Knockeart et al. | Jun 2002 | A1 |
20020082911 | Dunn et al. | Jun 2002 | A1 |
20020087326 | Lee et al. | Jul 2002 | A1 |
20020087525 | Abbott et al. | Jul 2002 | A1 |
20020107694 | Lerg | Aug 2002 | A1 |
20020120609 | Lang et al. | Aug 2002 | A1 |
20020124050 | Middeljans | Sep 2002 | A1 |
20020133354 | Ross et al. | Sep 2002 | A1 |
20020133402 | Faber et al. | Sep 2002 | A1 |
20020135618 | Maes et al. | Sep 2002 | A1 |
20020138248 | Corston-Oliver et al. | Sep 2002 | A1 |
20020143532 | McLean et al. | Oct 2002 | A1 |
20020143535 | Kist et al. | Oct 2002 | A1 |
20020161646 | Gailey et al. | Oct 2002 | A1 |
20020173333 | Buchholz et al. | Nov 2002 | A1 |
20020173961 | Guerra | Nov 2002 | A1 |
20020184373 | Maes | Dec 2002 | A1 |
20020188602 | Stubler et al. | Dec 2002 | A1 |
20020198714 | Zhou | Dec 2002 | A1 |
20030014261 | Kageyama | Jan 2003 | A1 |
20030016835 | Elko et al. | Jan 2003 | A1 |
20030046346 | Mumick et al. | Mar 2003 | A1 |
20030064709 | Gailey et al. | Apr 2003 | A1 |
20030065427 | Funk et al. | Apr 2003 | A1 |
20030069734 | Everhart | Apr 2003 | A1 |
20030088421 | Maes et al. | May 2003 | A1 |
20030097249 | Walker et al. | May 2003 | A1 |
20030110037 | Walker et al. | Jun 2003 | A1 |
20030112267 | Belrose | Jun 2003 | A1 |
20030115062 | Walker et al. | Jun 2003 | A1 |
20030120493 | Gupta | Jun 2003 | A1 |
20030135488 | Amir et al. | Jul 2003 | A1 |
20030144846 | Denenberg et al. | Jul 2003 | A1 |
20030158731 | Falcon et al. | Aug 2003 | A1 |
20030161448 | Parolkar et al. | Aug 2003 | A1 |
20030182132 | Niemoeller | Sep 2003 | A1 |
20030204492 | Wolf et al. | Oct 2003 | A1 |
20030206640 | Malvar et al. | Nov 2003 | A1 |
20030212550 | Ubale | Nov 2003 | A1 |
20030212558 | Matula | Nov 2003 | A1 |
20030212562 | Patel et al. | Nov 2003 | A1 |
20030225825 | Healey et al. | Dec 2003 | A1 |
20030233230 | Ammicht et al. | Dec 2003 | A1 |
20030236664 | Sharma | Dec 2003 | A1 |
20040006475 | Ehlen et al. | Jan 2004 | A1 |
20040010358 | Oesterling et al. | Jan 2004 | A1 |
20040025115 | Sienel et al. | Feb 2004 | A1 |
20040044516 | Kennewick et al. | Mar 2004 | A1 |
20040098245 | Walker et al. | May 2004 | A1 |
20040117179 | Balasuriya | Jun 2004 | A1 |
20040117804 | Scahill et al. | Jun 2004 | A1 |
20040122674 | Bangalore et al. | Jun 2004 | A1 |
20040140989 | Papageorge | Jul 2004 | A1 |
20040158555 | Seedman et al. | Aug 2004 | A1 |
20040166832 | Portman et al. | Aug 2004 | A1 |
20040167771 | Duan et al. | Aug 2004 | A1 |
20040172258 | Dominach et al. | Sep 2004 | A1 |
20040193408 | Hunt | Sep 2004 | A1 |
20040193420 | Kennewick et al. | Sep 2004 | A1 |
20040199375 | Ehsani et al. | Oct 2004 | A1 |
20040205671 | Sukehiro et al. | Oct 2004 | A1 |
20040243417 | Pitts, III et al. | Dec 2004 | A9 |
20050015256 | Kargman | Jan 2005 | A1 |
20050021334 | Iwahashi | Jan 2005 | A1 |
20050021470 | Martin et al. | Jan 2005 | A1 |
20050021826 | Kumar | Jan 2005 | A1 |
20050033574 | Kim et al. | Feb 2005 | A1 |
20050033582 | Gadd et al. | Feb 2005 | A1 |
20050038822 | Bijaoui et al. | Feb 2005 | A1 |
20050043940 | Elder | Feb 2005 | A1 |
20050049867 | Deane | Mar 2005 | A1 |
20050114116 | Fiedler | May 2005 | A1 |
20050125232 | Gadd | Jun 2005 | A1 |
20050131673 | Koizumi et al. | Jun 2005 | A1 |
20050137850 | Odell | Jun 2005 | A1 |
20050137877 | Oesterling et al. | Jun 2005 | A1 |
20050143994 | Mori et al. | Jun 2005 | A1 |
20050216254 | Gupta et al. | Sep 2005 | A1 |
20050234727 | Chiu | Oct 2005 | A1 |
20050246174 | DeGolia | Nov 2005 | A1 |
20050283752 | Fruchter et al. | Dec 2005 | A1 |
20060041431 | Maes | Feb 2006 | A1 |
20060047509 | Ding et al. | Mar 2006 | A1 |
20060206310 | Ravikumar et al. | Sep 2006 | A1 |
20060217133 | Christenson et al. | Sep 2006 | A1 |
20060285662 | Yin et al. | Dec 2006 | A1 |
20070033005 | Cristo et al. | Feb 2007 | A1 |
20070033020 | (Kelleher) Francois et al. | Feb 2007 | A1 |
20070038436 | Cristo et al. | Feb 2007 | A1 |
20070043569 | Potter, III et al. | Feb 2007 | A1 |
20070043574 | Coffman et al. | Feb 2007 | A1 |
20070043868 | Kumar et al. | Feb 2007 | A1 |
20070050191 | Weider et al. | Mar 2007 | A1 |
20070055525 | Kennewick et al. | Mar 2007 | A1 |
20070061067 | Zeinstra et al. | Mar 2007 | A1 |
20070061735 | Hoffberg et al. | Mar 2007 | A1 |
20070073544 | Millett et al. | Mar 2007 | A1 |
20070078708 | Yu et al. | Apr 2007 | A1 |
20070078709 | Rajaram | Apr 2007 | A1 |
20070118357 | Kasravi et al. | May 2007 | A1 |
20070135101 | Ramati et al. | Jun 2007 | A1 |
20070146833 | Satomi et al. | Jun 2007 | A1 |
20070162296 | Altberg et al. | Jul 2007 | A1 |
20070179778 | Gong et al. | Aug 2007 | A1 |
20070186165 | Maislos et al. | Aug 2007 | A1 |
20070198267 | Jones et al. | Aug 2007 | A1 |
20070203736 | Ashton | Aug 2007 | A1 |
20070214182 | Rosenberg | Sep 2007 | A1 |
20070250901 | McIntire et al. | Oct 2007 | A1 |
20070265850 | Kennewick et al. | Nov 2007 | A1 |
20070299824 | Pan et al. | Dec 2007 | A1 |
20080034032 | Healey et al. | Feb 2008 | A1 |
20080065386 | Cross et al. | Mar 2008 | A1 |
20080091406 | Baldwin et al. | Apr 2008 | A1 |
20080103761 | Printz et al. | May 2008 | A1 |
20080109285 | Reuther et al. | May 2008 | A1 |
20080115163 | Gilboa et al. | May 2008 | A1 |
20080133215 | Sarukkai | Jun 2008 | A1 |
20080140385 | Mahajan et al. | Jun 2008 | A1 |
20080147410 | Odinak | Jun 2008 | A1 |
20080154604 | Sathish et al. | Jun 2008 | A1 |
20080177530 | Cross et al. | Jul 2008 | A1 |
20080189110 | Freeman et al. | Aug 2008 | A1 |
20080235023 | Kennewick et al. | Sep 2008 | A1 |
20080235027 | Cross | Sep 2008 | A1 |
20080319751 | Kennewick et al. | Dec 2008 | A1 |
20090052635 | Jones et al. | Feb 2009 | A1 |
20090067599 | Agarwal et al. | Mar 2009 | A1 |
20090076827 | Bulitta et al. | Mar 2009 | A1 |
20090106029 | DeLine et al. | Apr 2009 | A1 |
20090117885 | Roth | May 2009 | A1 |
20090144271 | Richardson et al. | Jun 2009 | A1 |
20090150156 | Kennewick et al. | Jun 2009 | A1 |
20090171664 | Kennewick et al. | Jul 2009 | A1 |
20090216540 | Tessel et al. | Aug 2009 | A1 |
20090271194 | Davis et al. | Oct 2009 | A1 |
20090273563 | Pryor | Nov 2009 | A1 |
20090276700 | Anderson et al. | Nov 2009 | A1 |
20090299745 | Kennewick et al. | Dec 2009 | A1 |
20090307031 | Winkler et al. | Dec 2009 | A1 |
20090313026 | Coffman et al. | Dec 2009 | A1 |
20100023320 | Di Cristo et al. | Jan 2010 | A1 |
20100029261 | Mikkelsen et al. | Feb 2010 | A1 |
20100036967 | Caine et al. | Feb 2010 | A1 |
20100049501 | Kennewick et al. | Feb 2010 | A1 |
20100049514 | Kennewick et al. | Feb 2010 | A1 |
20100057443 | Di Cristo et al. | Mar 2010 | A1 |
20100063880 | Atsmon et al. | Mar 2010 | A1 |
20100145700 | Kennewick et al. | Jun 2010 | A1 |
20100185512 | Borger et al. | Jul 2010 | A1 |
20100204986 | Kennewick et al. | Aug 2010 | A1 |
20100204994 | Kennewick et al. | Aug 2010 | A1 |
20100217604 | Baldwin et al. | Aug 2010 | A1 |
20100286985 | Kennewick et al. | Nov 2010 | A1 |
20100299142 | Freeman et al. | Nov 2010 | A1 |
20100312566 | Odinak et al. | Dec 2010 | A1 |
20110112827 | Kennewick et al. | May 2011 | A1 |
20110112921 | Kennewick et al. | May 2011 | A1 |
20110131036 | Di Cristo et al. | Jun 2011 | A1 |
20110131045 | Cristo et al. | Jun 2011 | A1 |
20110231182 | Weider et al. | Sep 2011 | A1 |
20110231188 | Kennewick et al. | Sep 2011 | A1 |
20120022857 | Baldwin et al. | Jan 2012 | A1 |
20120101809 | Kennewick et al. | Apr 2012 | A1 |
20120101810 | Kennewick et al. | Apr 2012 | A1 |
20120109753 | Kennewick et al. | May 2012 | A1 |
20120150636 | Freeman et al. | Jun 2012 | A1 |
20120278073 | Weider et al. | Nov 2012 | A1 |
20130054228 | Baldwin et al. | Feb 2013 | A1 |
20130211710 | Kennewick et al. | Aug 2013 | A1 |
20130253929 | Weider et al. | Sep 2013 | A1 |
20130297293 | Di Cristo et al. | Nov 2013 | A1 |
20130304473 | Baldwin et al. | Nov 2013 | A1 |
20130339022 | Baldwin et al. | Dec 2013 | A1 |
20140012577 | Freeman et al. | Jan 2014 | A1 |
20140108013 | Di Cristo et al. | Apr 2014 | A1 |
20140156278 | Kennewick et al. | Jun 2014 | A1 |
Number | Date | Country
---|---|---
1 320 043 | Jun 2003 | EP |
1 646 037 | Apr 2006 | EP |
2006-146881 | Jun 2006 | JP |
2008-027454 | Feb 2008 | JP |
2008-139928 | Jun 2008 | JP |
WO 9946763 | Sep 1999 | WO |
WO 0021232 | Apr 2000 | WO |
WO 0046792 | Aug 2000 | WO |
WO 0178065 | Oct 2001 | WO |
WO 2004072954 | Aug 2004 | WO |
WO 2007019318 | Feb 2007 | WO |
WO 2007021587 | Feb 2007 | WO |
WO 2007027546 | Mar 2007 | WO |
WO 2007027989 | Mar 2007 | WO |
WO 2008098039 | Aug 2008 | WO |
WO 2008118195 | Oct 2008 | WO |
WO 2009075912 | Jun 2009 | WO |
WO 2009145796 | Dec 2009 | WO |
WO 2010096752 | Aug 2010 | WO |
Entry |
---|
Reuters, “IBM to Enable Honda Drivers to Talk to Cars”, Charles Schwab & Co., Inc., Jul. 28, 2002, 1 page. |
Lin, Bor-shen, et al., “A Distributed Architecture for Cooperative Spoken Dialogue Agents with Coherent Dialogue State and History”, ASRU'99, 1999, 4 pages. |
Kuhn, Thomas, et al., “Hybrid In-Car Speech Recognition for Mobile Multimedia Applications”, Vehicular Technology Conference, IEEE, Jul. 1999, pp. 2009-2013. |
Belvin, Robert, et al., “Development of the HRL Route Navigation Dialogue System”, Proceedings of the First International Conference on Human Language Technology Research, San Diego, 2001, pp. 1-5. |
Lind, R., et al., “The Network Vehicle—A Glimpse into the Future of Mobile Multi-Media”, IEEE Aerosp. Electron. Systems Magazine, vol. 14, No. 9, Sep. 1999, pp. 27-32. |
Zhao, Yilin, “Telematics: Safe and Fun Driving”, IEEE Intelligent Systems, vol. 17, Issue 1, 2002, pp. 10-14. |
Chai et al., “MIND: A Semantics-Based Multimodal Interpretation Framework for Conversational System”, Proceedings of the International CLASS Workshop on Natural, Intelligent and Effective Interaction in Multimodal Dialogue Systems, Jun. 2002, pp. 37-46. |
Cheyer et al., “Multimodal Maps: An Agent-Based Approach”, International Conference on Cooperative Multimodal Communication (CMC/95), May 24-26, 1995, pp. 111-121. |
Elio et al., “On Abstract Task Models and Conversation Policies” in Workshop on Specifying and Implementing Conversation Policies, Autonomous Agents '99, Seattle, 1999, 10 pages. |
Turunen, “Adaptive Interaction Methods in Speech User Interfaces”, Conference on Human Factors in Computing Systems, Seattle, Washington, 2001, pp. 91-92. |
Mao, Mark Z., “Automatic Training Set Segmentation for Multi-Pass Speech Recognition”, Department of Electrical Engineering, Stanford University, CA, copyright 2005, IEEE, pp. I-685 to I-688. |
Vanhoucke, Vincent, “Confidence Scoring and Rejection Using Multi-Pass Speech Recognition”, Nuance Communications, Menlo Park, CA, 2005, 4 pages. |
Weng, Fuliang, et al., “Efficient Lattice Representation and Generation”, Speech Technology and Research Laboratory, SRI International, Menlo Park, CA, 1998, 4 pages. |
El Meliani et al., “A Syllabic-Filler-Based Continuous Speech Recognizer for Unlimited Vocabulary”, Canadian Conference on Electrical and Computer Engineering, vol. 2, Sep. 5-8, 1995, pp. 1007-1010. |
Arrington, Michael, “Google Redefines GPS Navigation Landscape: Google Maps Navigation for Android 2.0”, TechCrunch, printed from the Internet <http://www.techcrunch.com/2009/10/28/google-redefines-car-gps-navigation-google-maps-navigation-android/>, Oct. 28, 2009, 4 pages. |
Bazzi, Issam et al., “Heterogeneous Lexical Units for Automatic Speech Recognition: Preliminary Investigations”, Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 3, Jun. 5-9, 2000, XP010507574, pp. 1257-1260. |
O'Shaughnessy, Douglas, “Interacting with Computers by Voice: Automatic Speech Recognition and Synthesis”, Proceedings of the IEEE, vol. 91, No. 9, Sep. 1, 2003, XP011100665, pp. 1272-1305. |
Statement in Accordance with the Notice from the European Patent Office dated Oct. 1, 2007 Concerning Business Methods (OJ EPO Nov. 2007, 592-593), XP002456252. |
Number | Date | Country
---|---|---
20140365222 A1 | Dec 2014 | US
Relation | Number | Date | Country
---|---|---|---
Parent | 11212693 | Aug 2005 | US
Child | 13084197 | | US
Relation | Number | Date | Country
---|---|---|---
Parent | 13898045 | May 2013 | US
Child | 14467641 | | US
Parent | 13488299 | Jun 2012 | US
Child | 13898045 | | US
Parent | 13084197 | Apr 2011 | US
Child | 13488299 | | US