The technical field generally relates to speech systems, and more particularly relates to methods and systems for selecting between available spoken dialog services.
Vehicle spoken dialog systems (or “speech systems”) perform, among other things, speech recognition based on speech uttered by occupants of a vehicle. The speech utterances typically include commands that communicate with or control one or more features of the vehicle as well as other systems that are accessible by the vehicle. A speech system generates spoken commands in response to the speech utterances; in some instances, those commands are generated because the speech system needs further information in order to complete the speech recognition.
Increasingly, such spoken dialog services may be provided by multiple devices and/or applications within the user's environment. In the context of a vehicle spoken dialog system, for example, it is not unusual for such services to be available simultaneously from the user's mobile device (via one or more applications resident on the mobile device), the vehicle's onboard speech system, and external third-party servers (which are coupled via a network to the vehicle's onboard communication network).
In such cases, two or more of the spoken dialog services might be candidates for processing a given speech utterance and/or for performing the requested task, while perhaps only one of the services is optimal or even suitable for the user's particular requirements. For example, a request for navigation information might be processed by either the onboard navigation system or a navigation application resident on the user's smartphone—with both systems having respective strengths and weaknesses in a particular context. In known systems, this issue is addressed via the use of multiple buttons or other user interface techniques (e.g., based on which application is “in focus” on a touch screen), each corresponding to a particular spoken dialog service. Such methods can lead to user distraction and/or other unsatisfactory results.
Accordingly, it is desirable to provide improved methods and systems for selecting spoken dialog services in a speech system. Furthermore, other desirable features and characteristics of the present invention will become apparent from the subsequent detailed description and the appended claims, taken in conjunction with the accompanying drawings and the foregoing technical field and background.
Methods and systems are provided for arbitrating spoken dialog services. In accordance with various embodiments, a capability catalog associated with a plurality of devices accessible within an environment (e.g., a vehicle) is determined. The capability catalog includes a list of the devices mapped to a list of spoken dialog services provided by each of the plurality of devices. The system arbitrates between the plurality of devices and the spoken dialog services in the capability catalog to determine a selected device and a selected spoken dialog service. The system then forwards the spoken utterance to the selected spoken dialog service on the selected device.
In one embodiment, the system receives a spoken utterance from a user within the environment, classifies the spoken utterance to determine a set of candidate devices and a set of candidate spoken dialog services based on the capability catalog, and determines a selected device from the set of candidate devices and a selected spoken dialog service from the set of candidate spoken dialog services based on a verification criterion.
The exemplary embodiments will hereinafter be described in conjunction with the following drawing figures, wherein like numerals denote like elements, and wherein:
The following detailed description is merely exemplary in nature and is not intended to limit the application and uses. Furthermore, there is no intention to be bound by any expressed or implied theory presented in the preceding technical field, background, brief summary or the following detailed description. As used herein, the term “module” refers to an application specific integrated circuit (ASIC), an electronic circuit, a processor (shared, dedicated, or group) and memory that executes one or more software or firmware programs, a combinational logic circuit, and/or other suitable components that provide the described functionality.
Referring now to
One or more mobile devices 50 might also be present within vehicle 12, including one or more smart-phones, tablet computers, feature phones, etc. Mobile device 50 may also be communicatively coupled to HMI 14 through a suitable wireless connection (e.g., Bluetooth or WiFi) such that one or more applications resident on mobile device 50 are accessible to user 40 via HMI 14. Thus, a user 40 will typically have access to applications running on at least three different platforms: applications executed within the vehicle systems themselves, applications deployed on mobile device 50, and applications residing on back-end server 26. Furthermore, one or more of these applications may operate in accordance with their own respective spoken dialog systems, and thus multiple devices might be capable, to varying extents, of responding to a request spoken by user 40.
Speech system 10 communicates with the vehicle systems 14, 16, 18, 20, 22, 24, and 26 through a communication bus and/or other data communication network 29 (e.g., wired, short range wireless, or long range wireless). The communication bus may be, for example, a controller area network (CAN) bus, local interconnect network (LIN) bus, or the like. It will be appreciated that speech system 10 may be used in connection with both vehicle-based environments and non-vehicle-based environments that include one or more speech dependent applications, and the vehicle-based examples provided herein are set forth without loss of generality.
As illustrated, speech system 10 includes a speech understanding module 32, a dialog manager module 34, and a speech generation module 35. These functional modules may be implemented as separate systems or as a combined, integrated system. In general, HMI module 14 receives an acoustic signal (or “speech utterance”) 41 from user 40, which is provided to speech understanding module 32.
Speech understanding module 32 includes any combination of hardware and/or software configured to process the speech utterance from HMI module 14 (received via one or more microphones 52) using suitable speech recognition techniques, including, for example, automatic speech recognition and semantic decoding (or spoken language understanding (SLU)). Using such techniques, speech understanding module 32 generates a list (or lists) 33 of possible results from the speech utterance. In one embodiment, list 33 comprises one or more sentence hypotheses representing a probability distribution over the set of utterances that might have been spoken by user 40 (i.e., utterance 41). List 33 might, for example, take the form of an N-best list. In various embodiments, speech understanding module 32 generates list 33 using predefined possibilities stored in a datastore. For example, the predefined possibilities might be names or numbers stored in a phone book, names or addresses stored in an address book, song names, albums, or artists stored in a music directory, etc. In one embodiment, speech understanding module 32 employs front-end feature extraction followed by a Hidden Markov Model (HMM) and scoring mechanism.
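By way of a non-limiting illustration, the following Python sketch shows one way list 33 might be assembled as a set of scored sentence hypotheses drawn from predefined possibilities in a datastore. The string-similarity scoring merely stands in for the HMM-based acoustic scoring described above, and the names (Hypothesis, n_best, phonebook) are hypothetical rather than part of any described system.

```python
from dataclasses import dataclass
import difflib

@dataclass
class Hypothesis:
    text: str      # candidate interpretation of utterance 41
    score: float   # confidence that this is what the user said

def n_best(decoded_text: str, possibilities: list[str], n: int = 3) -> list[Hypothesis]:
    """Score each predefined possibility against the decoded utterance and
    return the n most likely hypotheses (i.e., list 33 as an N-best list)."""
    scored = [
        Hypothesis(p, difflib.SequenceMatcher(None, decoded_text.lower(), p.lower()).ratio())
        for p in possibilities
    ]
    return sorted(scored, key=lambda h: h.score, reverse=True)[:n]

# Predefined possibilities, e.g., names stored in a phone book datastore.
phonebook = ["Call John Doe", "Call Joan Dawe", "Call John Dorn"]
print(n_best("call john doe", phonebook, n=2))
```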
Dialog manager module 34 includes any combination of hardware and/or software configured to manage an interaction sequence and a selection of speech prompts 42 to be spoken to the user based on list 33. When a list 33 contains more than one possible result, dialog manager module 34 uses disambiguation strategies to manage a dialog of prompts with the user 40 such that a recognized result can be determined. In accordance with exemplary embodiments, dialog manager module 34 is capable of managing dialog contexts, as described in further detail below.
Speech generation module 35 includes any combination of hardware and/or software configured to generate spoken prompts 42 to a user 40 based on the dialog determined by the dialog manager module 34. In this regard, speech generation module 35 will generally provide natural language generation (NLG) and speech synthesis, or text-to-speech (TTS).
List 33 includes one or more elements that represent a possible result. In various embodiments, each element of the list 33 includes one or more “slots” that are each associated with a slot type depending on the application. For example, if the application supports making phone calls to phonebook contacts (e.g., “Call John Doe”), then each element may include slots with slot types of a first name, a middle name, and/or a last name. In another example, if the application supports navigation (e.g., “Go to 1111 Sunshine Boulevard”), then each element may include slots with slot types of a house number, a street name, etc. In various embodiments, the slots and the slot types may be stored in a datastore and accessed by any of the illustrated systems. Each element or slot of the list 33 is associated with a confidence score.
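The following sketch, again merely illustrative and using hypothetical class names, shows how an element of list 33 and its typed slots might be held in memory for the “Call John Doe” example above.

```python
from dataclasses import dataclass, field

@dataclass
class Slot:
    slot_type: str     # e.g., "first_name", "last_name", "house_number"
    value: str
    confidence: float  # confidence score for this individual slot

@dataclass
class ListElement:
    slots: list[Slot] = field(default_factory=list)
    confidence: float = 0.0  # overall confidence score for the element

# One element of list 33 for the utterance "Call John Doe".
element = ListElement(
    slots=[Slot("first_name", "John", 0.92), Slot("last_name", "Doe", 0.88)],
    confidence=0.90,
)
```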
In addition to spoken dialog, users 40 might also interact with HMI 14 through various buttons, switches, touch-screen user interface elements, gestures (e.g., hand gestures recognized by one or more cameras provided within vehicle 12), and the like. In one embodiment, a button 54 (e.g., a “push-to-talk” button or simply “talk button”) is provided within easy reach of one or more users 40. For example, button 54 may be embedded within a steering wheel 56.
Referring now to
Each device 201 may include one or more applications configured to perform a spoken dialog service or services, as described above. For example, as illustrated, device 210 includes an application 211, device 220 includes applications 221 and 222, and device 230 includes applications 231 and 232. Furthermore, an individual application (211, 222, etc.) might be capable of performing more than one spoken dialog service. For example, a single application might be configured to recognize spoken dialog and based on that spoken dialog provide both navigation services as well as media services. In
A variety of applications are known to be capable of performing spoken dialog services, and more are likely to be developed in the future. Current examples of such applications include, but are not limited to, Pandora® Internet Radio, iGo™ Navigation, Google Maps™, Google Now™, Stitcher™, as well as various vehicle navigation system applications known in the art.
Referring now to
Initially, the capability catalog 307 is determined at 502 in
Capability catalog 307 may be populated in accordance with a variety of known techniques. For example, a registration procedure may be performed when each of the devices 201 is powered up or otherwise communicatively coupled to arbitration module 302. Bluetooth and/or WiFi association techniques may be employed to interrogate each device 201 to determine the respective spoken dialog services provided by each device 201.
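As a non-limiting sketch of such a registration procedure, the catalog can be kept as a simple mapping from device identifiers to the services each device reports when interrogated. The device identifiers and service labels below are hypothetical stand-ins for devices 210, 220, and 230.

```python
# Capability catalog 307: device identifiers mapped to the spoken dialog
# services that each device advertised during registration.
CapabilityCatalog = dict[str, list[str]]

def register_device(catalog: CapabilityCatalog, device_id: str, services: list[str]) -> None:
    """Record (or update) the services a device reports, e.g., after a
    Bluetooth or WiFi association in which the device was interrogated."""
    catalog[device_id] = sorted(set(catalog.get(device_id, [])) | set(services))

catalog: CapabilityCatalog = {}
register_device(catalog, "onboard_unit_210", ["navigation", "media"])
register_device(catalog, "smartphone_220", ["navigation", "internet_radio"])
register_device(catalog, "backend_server_230", ["contact_book", "web_search"])
```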
Upon receiving a spoken utterance 41, device classification module 303 classifies that utterance to determine a set of candidate devices based on the capability catalog at 504 in
Similarly, upon receiving spoken utterance 41, service classification module 304 classifies the spoken utterance 41 to determine a set of candidate services based on the capability catalog at 506 in
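Continuing the catalog sketch above, the following is a minimal illustration of such classification, with simple keyword rules standing in for whatever trained classifier modules 303 and 304 would actually employ.

```python
# Illustrative keyword rules; modules 303 and 304 would typically use a
# trained statistical classifier rather than keyword matching.
SERVICE_KEYWORDS = {
    "navigation": ("take me to", "go to", "navigate"),
    "internet_radio": ("station", "hear", "play"),
    "contact_book": ("call", "dial"),
}

def candidate_services(utterance: str) -> set[str]:
    """Service classification (cf. module 304): services the utterance may target."""
    text = utterance.lower()
    return {svc for svc, words in SERVICE_KEYWORDS.items()
            if any(w in text for w in words)}

def candidate_devices(utterance: str, catalog: CapabilityCatalog) -> set[str]:
    """Device classification (cf. module 303): devices offering a matching service."""
    wanted = candidate_services(utterance)
    return {dev for dev, provided in catalog.items() if wanted & set(provided)}

print(candidate_devices("Take me to a Chinese restaurant in Troy", catalog))
```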
Verification module 305, which is communicatively coupled to both modules 303 and 304, reconciles the (possibly conflicting) candidates provided by device classification module 303 and service classification module 304 at 508 in
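Still continuing the same sketch, reconciliation might keep only device/service pairs that the catalog confirms and then apply a verification criterion such as a stored user preference; the preference table here is hypothetical, and a deployed system could equally verify against confidence scores or context.

```python
def verify(devices: set[str], services: set[str],
           catalog: CapabilityCatalog,
           preferences: dict[str, str]) -> tuple[str, str] | None:
    """Reconcile (possibly conflicting) candidates from modules 303 and 304
    into a single selected device and selected spoken dialog service."""
    consistent = [(dev, svc) for dev in sorted(devices) for svc in sorted(services)
                  if svc in catalog.get(dev, [])]
    if not consistent:
        return None                       # nothing can handle the request
    for dev, svc in consistent:
        if preferences.get(svc) == dev:
            return dev, svc               # user's preferred device for this service
    return consistent[0]                  # otherwise, first consistent pair

selection = verify(
    devices={"onboard_unit_210", "smartphone_220"},
    services={"navigation"},
    catalog=catalog,
    preferences={"navigation": "smartphone_220"},
)
```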
With reference to
After determining the selected device 201 and the selected service, that information is provided by dialog manager 34 to device gate module 306, which thereupon processes the spoken utterance with the selected spoken dialog service on the selected device. That is, the result or results from that service are used to accomplish the task requested by the user. For example, device gate module 306 might process the spoken utterance 41 with the navigation service residing on device 210.
Since arbitration module 302 effectively treats each device 201 as a “black box,” and operates in an open loop to forward the speech utterance to the selected device, the embodiment illustrated in
Referring now to
Verification is then performed via verification module 305 at 604 based on the confidence scores received from devices 201. The system determines whether ambiguity remains at 605. If not, the system utilizes the selected device and selected spoken dialog service at 606 (and responds to the user with the received prompt, if any). If ambiguity remains, then the arbitration module may (through dialog manager 34) request additional information from the user (610), and then continue at 608 until the ambiguity is sufficiently resolved. Since arbitration module 402 uses dialog manager 34 and API 406 to operate interactively and directly with devices 201, the embodiment illustrated in
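A sketch of this interactive arbitration follows, under the assumption of a simple per-device scoring interface and a user-prompt callback, both of which are hypothetical stand-ins for API 406 and dialog manager 34 respectively.

```python
from typing import Callable

class ScoringDevice:
    """Hypothetical stand-in for a device reached through API 406."""
    def __init__(self, name: str, score_fn: Callable[[str], float]):
        self.name = name
        self.score_fn = score_fn

    def recognize(self, utterance: str) -> float:
        # A real device would run its own spoken dialog service and return a
        # result plus a confidence score; only the score is modeled here.
        return self.score_fn(utterance)

def interactive_arbitrate(utterance: str,
                          devices: list[ScoringDevice],
                          ask_user: Callable[[str], str],
                          margin: float = 0.2,
                          max_rounds: int = 3) -> str:
    """Compare confidence scores from each device; while the top two scores
    remain too close (ambiguous), prompt the user for more detail and retry."""
    best = devices[0].name
    for _ in range(max_rounds):
        ranked = sorted(((d.recognize(utterance), d.name) for d in devices), reverse=True)
        best = ranked[0][1]
        if len(ranked) < 2 or ranked[0][0] - ranked[1][0] >= margin:
            return best                   # ambiguity resolved
        utterance += " " + ask_user("Which device or application did you mean?")
    return best                           # fall back to the highest score
```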
In one embodiment, default settings for selection of spoken dialog services and associated devices are provided. Those default preference settings are then modified based upon user behavior (i.e., to reflect the user's preferences for certain tasks). For example, the system might modify the preferences based on the user performing a certain task using a particular spoken dialog service. The user then may be prompted to preserve that preference (e.g., “Would you like to always send address requests to Google Maps?”).
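The following is a minimal sketch of such preference adaptation, with hypothetical task names, service names, and repetition threshold; the confirmation prompt mirrors the example above.

```python
def record_choice(counts: dict[tuple[str, str], int], task: str, service: str,
                  threshold: int = 3) -> bool:
    """Count how often the user performs a given task with a given service and
    report when that choice has repeated often enough to suggest a new default."""
    counts[(task, service)] = counts.get((task, service), 0) + 1
    return counts[(task, service)] >= threshold

counts: dict[tuple[str, str], int] = {}
preferences = {"address": "embedded_navigation"}

for _ in range(3):                        # user sends three address requests to Google Maps
    if record_choice(counts, "address", "Google Maps"):
        # Prompt: "Would you like to always send address requests to Google Maps?"
        # If the user confirms, the default preference is updated:
        preferences["address"] = "Google Maps"
```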
By way of example, the following dialog illustrates various use cases. In each case, the arbitrator module determines which device and which dialog service to employ (e.g., a built-in device or a smart-phone device) in response to the user's spoken utterance.
User: “I want to hear a station based on Billy Joel.”
Arbitrator: Directs audio to the vehicle's built-in device.
(The built-in device provides a station service).
User: “I want to hear a station based on Billy Joel.”
Arbitrator: “Use the built-in device or your smart-phone's music application?”
(Dialog manager resolves the ambiguity by prompting the user).
User: “I want to hear a station based on Billy Joel.”
Arbitrator: Sends utterance to external application and built-in device simultaneously. Built-in device returns a positive response while the external application does not have a suitable channel. Arbitrator utilizes built-in device.
(Dialog service selected based on device/service response).
User: “I want to hear a station based on Billy Joel on Stitcher.”
(User explicitly selects device and dialog service, e.g., Stitcher or some other similar service now known or later developed).
User: “I want to hear a station based on Billy Joel.”
Arbitrator: Selects built-in device because it provides the least expensive service.
(Selection based on cost).
User: “Take me to a Chinese restaurant in Troy”
Arbitrator: Directs audio to Google Maps [or similar mapping service now known or later developed] on smart-phone, not to the embedded navigation system, because the latter does not support search—only navigation to an address.
(Selection based on availability of search functionality).
User: “Call Paul Mazoyevsky”
Arbitrator: Sends audio to built-in device and back-end contact-book recognition; selects back-end due to higher confidence returned by the back-end.
(Selection based on confidence level).
User: “Next”
Arbitrator: Directs utterance to the music player for skipping a track, and does not select the “next” screen page.
(Selection based on context).
In general, the methods described above may be implemented using any level of desired automation. That is, for example, arbitration may be accomplished (a) automatically (with no user input), (b) automatically, but giving the user an opportunity to change, or (c) automatically, but allowing the user to confirm.
While at least one exemplary embodiment has been presented in the foregoing detailed description, it should be appreciated that a vast number of variations exist. It should also be appreciated that the exemplary embodiment or exemplary embodiments are only examples, and are not intended to limit the scope, applicability, or configuration of the disclosure in any way. Rather, the foregoing detailed description will provide those skilled in the art with a convenient road map for implementing the exemplary embodiment or exemplary embodiments. It should be understood that various changes can be made in the function and arrangement of elements without departing from the scope of the disclosure as set forth in the appended claims and the legal equivalents thereof.
This application claims priority to U.S. Prov. Pat. App. No. 61/844,646, filed Jul. 10, 2013, the entire contents of which are incorporated by reference herein.