Semantic/speech component 102 illustratively includes authoring component 112 and runtime component 114. During authoring of a speech related application, authoring component 112 illustratively generates an authoring interface 116 (such as an application programming interface API or a graphical user interface GUI) that is provided to an author or authoring tool 118. The author or authoring tool communicates with authoring component 112 through the authoring interface 116 in order to develop a speech related application, such as a dialog system.
In order to accomplish the desired functionality of the speech related application, the author will often be required to input prompts and associated expected user responses, along with tasks, dialogs, possibly cascaded dialogs and confirmations. Each of these is described in greater detail below. Suffice it to say, for now, that authoring component 112 takes these inputs through authoring interface 116 and provides certain portions of them to grammar generator 104 which generates grammars corresponding to the expected responses and dialog slot inputs. Authoring component 112 also interacts with task definition system 120 to further define the tasks based on the information input through authoring interface 116, by the author or authoring tool 118. Authoring is described in greater detail below.
Once the speech related application has been authored, it can be run in system 100 as a runtime application 122. Runtime component 114 in semantic/speech component 102 interacts with grammar generator 104 such that grammar generator 104 compiles the grammars necessary for runtime application 122. Those grammars are loaded into speech recognizer 106 by runtime component 114.
Runtime component 114 also generates a runtime interface 124 (such as an API or GUI) that is exposed to runtime application 122 (or a user of application 122) such that runtime information can be input to runtime component 114 in semantic/speech component 102. Based on the runtime inputs, runtime component 114 may access speech recognizer 106 to recognize input speech, or it may access speech synthesizer 108 to generate audible prompts to the user. Similarly, runtime component 114 illustratively accesses task reasoning system 130 in semantic framework 110 to identify tasks to be completed by runtime application 122, and to fill slots in those tasks and also to conduct dialog management in order to accomplish those tasks.
It can thus be seen that a user or author simply needs to interact with semantic/speech component 102 through an appropriate runtime interface 124 or authoring interface 116. The user or author need not know the intricate operation of the semantic subsystems and speech subsystems in order to either author, or run, a speech related application. Instead, the author illustratively communicates with component 102 in terms of familiar concepts (some of which are set out below) that are used in the application, and component 102 handles all the detailed communication with the subsystems. The detailed communication and interaction with the subsystems is illustratively done independently of the author in that the author does not need to expressly specify those interactions. In fact, the author need not even know how to specify those interactions.
It will also be noted that the semantic and speech subsystems listed in
Grammar generator 104 is illustratively any grammar generator that generates a grammar from a textual input. In one embodiment, grammar generator 104 generates speech recognition grammars from input sentences. There are numerous commercially available grammar generators.
Speech recognizer 106 is illustratively any desired speech recognition engine that performs acoustic speech recognition using a grammar supplied by the grammar generator 104 to specify the range of what can be recognized. Thus, speech recognizer 106 may include acoustic models, language models, a decoder, etc. There are numerous commercially available speech recognizers.
Speech synthesizer 108 is illustratively any desired speech synthesizer that receives a textual input and generates an audio output based on the textual input. There are numerous commercially available text to speech systems that are capable of synthesizing speech given a phrase. Speech synthesizer 108 may illustratively be suitable for providing a speech output from the textual input, via a telephone.
Semantic framework 110 can also be any desired semantic framework that receives text and provides a list of the most likely tasks and then, for each likely task, fills in the appropriate slots or parameters within the task, based on the input provided. Semantic framework 110 illustratively fills slots in a mixed initiative system, allowing users to specify multiple slot values at the same time, even when they are not yet requested, although this is not required by the present invention. Semantic framework 110 also illustratively includes a task reasoning system that conducts dialog management given a textual input and that operates to bind to external methods under desired circumstances, as described in greater detail below.
Because component 102 handles all of the interaction with the speech and semantic subsystems, authors or developers can develop applications by coding against concepts that they are familiar with, such as user responses, application methods and business logic. The specifics of how this information is recognized, how it is fed downstream within the system, when confirmations are fired and what grammars are loaded are all handled by component 102, so that the developer need not have detailed information in that regard.
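By way of a rough, non-limiting illustration, the sketch below suggests the level of abstraction this implies for the author. The class and method names (SpeechApplication, set_opening_prompt, add_task) are hypothetical and do not correspond to any actual interface described herein; they merely show the author supplying prompts, expected responses and tasks while component 102 handles grammar generation and the other subsystem interactions internally.

# Hypothetical author-facing sketch: the author works only with prompts,
# expected responses and tasks; grammar generation, recognizer loading and
# dialog management are handled internally by the semantic/speech component.
class SpeechApplication:                      # hypothetical authoring facade
    def __init__(self, name):
        self.name = name
        self.opening_prompt = None
        self.opening_responses = []
        self.tasks = []

    def set_opening_prompt(self, prompt, expected_responses):
        # The component would forward expected_responses to the grammar
        # generator; the author never calls the grammar generator directly.
        self.opening_prompt = prompt
        self.opening_responses = list(expected_responses)

    def add_task(self, task):
        self.tasks.append(task)


app = SpeechApplication("AirlineService")
app.set_opening_prompt(
    "Welcome to the airline service. How can I help you?",
    ["I want to book a flight", "Check flight status", "Talk to an operator"],
)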
One of those pieces of information is an opening prompt and the expected responses to that prompt. Therefore,
Authoring component 112 then illustratively generates a user interface for receiving likely responses to the opening prompt. This is indicated by block 204, and receiving those responses from the author is indicated by block 206. Likely responses are those responses that the author expects a user (at runtime) to enter in response to the prompt. In one illustrative embodiment, a text box is provided such that the author can simply write in expected responses to the opening prompt.
The responses can then be provided by authoring component 112 (or, as described later, by runtime component 114) to grammar generator 104 to generate grammars associated with the responses to the opening prompt. This is indicated by block 208 in
In accordance with the example being discussed, it is implicit in creating a speech related server application that there is some task that the developer wants users to be able to do, such as booking a flight, checking flight status, or talking to a human operator. In order to accomplish some of these tasks, additional parameters are required, such as a flight number. However, some of these tasks may simply be performed directly, with no additional information.
The developer or author thus illustratively creates at least one task which can be reasoned over by the semantic framework 110. The task may have one or more semantic slots that must be filled to accomplish the task. Table 1 is an example of one exemplary task, in this case for booking a flight on an airline. The task shown in Table 1 illustratively has two slots.
The first slot is the arrival city and the second slot is the departure city. The task shown in Table 1 gives the task name and description, along with key words that may be used to identify this as a relevant task, given an input at runtime. The slots are then defined with pre-indicators and post-indicators that are words that may precede or follow the words that fill the slots. The task defined in Table 1 also identifies a recognizer grammar that will be loaded into the speech recognizer when this task is being performed. The recognizer grammar in Table 1 is a list of city names.
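As a rough, non-limiting illustration of the kind of structure Table 1 describes, the sketch below encodes the book flight task as plain data. The field names (keywords, pre_indicators, post_indicators, recognizer_grammar) merely mirror the elements discussed above and are hypothetical; the actual task schema is whatever semantic framework 110 expects.

# Hypothetical encoding of the Table 1 "book flight" task as plain data.
CITY_NAMES = ["Seattle", "Boston", "New York", "Denver"]   # example recognizer grammar

book_flight_task = {
    "name": "BookFlight",
    "description": "Book a flight on the airline",
    "keywords": ["book", "flight", "fly", "ticket"],
    "slots": [
        {
            "name": "ArrivalCity",
            "pre_indicators": ["to", "into"],     # words that may precede the slot value
            "post_indicators": [],                # words that may follow the slot value
            "recognizer_grammar": CITY_NAMES,
        },
        {
            "name": "DepartureCity",
            "pre_indicators": ["from", "leaving"],
            "post_indicators": [],
            "recognizer_grammar": CITY_NAMES,
        },
    ],
}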
For each task thus identified, authoring component 112 provides an interface 116 that allows the author to specify expected user responses that might be used to trigger selection of this task.
It will also be noted that dialog elements box 254 displays the dialog elements (or slots) associated with the highlighted task. In the present example, the two slots in the “book flight” task are the arrival city and the departure city. In the illustrative embodiment, authoring component 112 provides authoring interface 116 that allows the user to input a prompt associated with each slot and expected responses to that prompt. At runtime, the prompt is given to a user to solicit a response to fill the slot associated with the prompt. This is indicated by block 234 in
In the example shown in
Before proceeding with the present description, it will simply be noted that
In any case, receiving the slot prompt and responses is indicated by block 286. Authoring component 112 can then provide the expected responses to grammar generator 104 where the grammars can be generated for those expected responses. Again, however, it will be noted that the grammars simply need to be available when they are needed at runtime, and they can be generated anytime before then, using either the authoring component 112 or the runtime component 114.
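The following non-limiting sketch illustrates, with hypothetical names, how a slot prompt and its expected responses might be attached to a slot and handed to a grammar generator. The generate_grammar function is only a stand-in for grammar generator 104, and the same call could equally be made at runtime rather than at authoring time.

def generate_grammar(expected_responses):
    """Stand-in for grammar generator 104: build a trivial phrase-list
    'grammar' from example sentences. A real generator would emit an
    actual speech recognition grammar."""
    return {"phrases": [r.lower() for r in expected_responses]}


def author_slot_dialog(slot, prompt, expected_responses):
    # Attach the prompt and expected responses to the slot. The grammar can
    # be generated now (authoring time) or deferred until runtime, as long
    # as it is available when the slot prompt is played.
    slot["prompt"] = prompt
    slot["expected_responses"] = list(expected_responses)
    slot["grammar"] = generate_grammar(expected_responses)
    return slot


arrival_slot = {"name": "ArrivalCity"}
author_slot_dialog(
    arrival_slot,
    "Which city are you flying to?",
    ["I am flying to Boston", "To Boston", "Boston"],
)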
Occasionally, a single dialog will not be adequate to obtain enough information to fill a particular slot (such as due to recognition errors, user uncertainty, or for other reasons). In that case, a developer may wish to extract the information from the user in a different way. For the sake of the present example, assume that the user was unable to properly specify an arrival city (or destination) but the user knew the airport code for the arrival city. In that instance, had the application developer provided a mechanism by which the user could select the destination city using the airport code, the application could have attempted to obtain that information in a different way than originally sought. For instance, if the developer had provided a mechanism by which the user could spell the airport code, that mechanism could be used to solicit information from the user instead of simply asking the user to speak the full destination city name.
Thus, in accordance with one embodiment, authoring component 112 generates a suitable authoring interface 116 to allow an author to specify a cascaded dialog, with prompts and responses. The cascaded dialog is simply an additional mechanism by which to seek the slot values associated with the task. Generating the UI to receive the cascaded dialog is indicated by block 290 in
Referring again to
By binding to an external method, it is meant that upon receiving an input in response to the cascaded dialog prompt in box 296, authoring component 112 can invoke a method external to component 102. In the exemplary embodiment shown in
In any case, once the expected responses to the cascaded dialog prompt 296 are provided by the author, authoring component 112 can provide those responses to the grammar generator 104 where the grammar can be generated. Again, it will be noted that the grammar simply needs to be generated prior to it being needed in the cascaded dialog during runtime. Providing the responses to the grammar generator and generating the grammars is indicated by block 294 in
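The sketch below is a rough, non-limiting illustration of a cascaded dialog for the arrival city slot, including a binding to a hypothetical external method (lookup_city_by_airport_code) that maps a spelled airport code to a city name. The airport code table and all names are illustrative assumptions only.

# Hypothetical cascaded dialog: if the normal slot dialog fails, ask for the
# three-letter airport code instead and bind the response to an external
# method (outside component 102) that maps the code to a city name.
AIRPORT_CODES = {"BOS": "Boston", "SEA": "Seattle", "JFK": "New York"}  # illustrative only


def lookup_city_by_airport_code(code):
    # External method that the cascaded dialog binds to.
    return AIRPORT_CODES.get(code.strip().upper())


cascaded_dialog = {
    "slot": "ArrivalCity",
    "prompt": "Please spell the three letter airport code of your destination.",
    "expected_responses": ["B O S", "S E A", "J F K"],   # spelling-style responses
    "external_method": lookup_city_by_airport_code,
}

# At runtime the recognized spelling would be passed to the bound method and
# the returned city name used to fill the slot.
recognized = "b o s"
city = cascaded_dialog["external_method"](recognized.replace(" ", ""))
assert city == "Boston"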
Runtime component 114 then sends the expected responses for the tasks associated with the opening prompt to grammar generator 104. This is indicated by block 500 in
Grammar generator 104 compiles the grammars associated with the information provided to it, and those grammars are provided back to runtime component 114 where they are loaded into speech recognizer 106. Receiving and loading the compiled grammars is indicated by block 504 in
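A rough, non-limiting sketch of this runtime step follows. The StubRecognizer class and compile_grammar function are hypothetical stand-ins for speech recognizer 106 and grammar generator 104, respectively; a real recognizer would accept a compiled grammar object rather than a phrase list.

# Hypothetical runtime step: gather the expected responses for the tasks
# reachable from the opening prompt, have the grammar generator compile a
# grammar, and load the compiled grammar into the speech recognizer.
class StubRecognizer:
    """Stand-in for speech recognizer 106."""
    def __init__(self):
        self.grammars = []

    def load_grammar(self, grammar):
        self.grammars.append(grammar)


def compile_grammar(expected_responses):
    # Stand-in for grammar generator 104.
    return {"phrases": sorted({r.lower() for r in expected_responses})}


tasks = {
    "BookFlight": ["I want to book a flight", "Book a flight"],
    "FlightStatus": ["Check flight status", "Is my flight on time"],
    "Operator": ["Talk to an operator"],
}

recognizer = StubRecognizer()
for task_name, responses in tasks.items():
    recognizer.load_grammar(compile_grammar(responses))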
In the exemplary embodiment being discussed, all prompts presented to the user are presented as audio prompts over a telephone, although this need not always be the case and prompts can be provided in other desired ways as well. Therefore, in the present example, the opening prompt is sent to speech synthesizer 108 where an audio representation of the prompt is generated and the audio representation is sent to runtime component 114, which sends the audio representation over a runtime user interface 124, to the runtime application or user using the application. This can be done over a telephone. This is indicated by block 506 in
The user then provides a spoken input in response to the opening prompt. That speech is received by runtime component 114 and sent to speech recognizer 106, which has had the desired grammars compiled and loaded into it. This is indicated by block 508 in
Once task reasoning system 130 has received the speech recognition result, it performs task routing by selecting the most appropriate task given the speech recognition input. Task reasoning system 130 also makes a best guess at filling slots in the identified task. A list of the N most likely tasks, along with filled slots (to the extent they can be filled), is provided from task reasoning system 130 back to runtime component 114. Runtime component 114 presents those likely tasks to the user through runtime interface 124, such that the user can either select or confirm which task the user wishes to perform.
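The following non-limiting sketch illustrates the general shape of this step, selecting the N most likely tasks by keyword overlap and making a best guess at slot values from the pre-indicator words. The actual task reasoning system 130 is, of course, far richer; the scoring and slot-filling heuristics shown here are assumptions made purely for illustration.

# Rough sketch of task routing: score tasks by keyword overlap with the
# recognition result and make a best guess at slot values using the
# pre-indicator words; return the N best tasks with their guessed slots.
def route_tasks(recognized_text, tasks, n=3):
    words = recognized_text.lower().split()
    scored = []
    for task in tasks:
        score = sum(1 for kw in task["keywords"] if kw in words)
        slots = {}
        for slot in task["slots"]:
            for pre in slot["pre_indicators"]:
                if pre in words:
                    idx = words.index(pre)
                    if idx + 1 < len(words):
                        slots[slot["name"]] = words[idx + 1]   # best-guess fill
        scored.append((score, task["name"], slots))
    scored.sort(reverse=True)
    return scored[:n]


tasks = [{
    "name": "BookFlight",
    "keywords": ["book", "flight"],
    "slots": [
        {"name": "ArrivalCity", "pre_indicators": ["to"]},
        {"name": "DepartureCity", "pre_indicators": ["from"]},
    ],
}]

print(route_tasks("book a flight from seattle to boston", tasks))
# -> [(2, 'BookFlight', {'ArrivalCity': 'boston', 'DepartureCity': 'seattle'})]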
In response, the user selects one of the likely tasks presented. A graphical illustration of this is shown in
The confirmed task, along with its slot values, is provided back to task reasoning system 130, which performs dialog management in order to fully perform the task, if possible. Performing dialog management is indicated by block 518 in
Therefore, once the task has been identified, runtime component 114 sends the responses for the dialog (e.g., the expected responses to the slot prompts) associated with the task to the grammar generator 104 such that the grammar rules can be generated and compiled and loaded into speech recognizer 106. This is indicated by block 600 in
The slots in an identified task are filled in the order in which they appear in the identified task. By accessing task reasoning system 130, runtime component 114 identifies a next slot to be filled in the dialog. This is indicated by block 606. Component 114 determines whether that slot is filled, at block 608. If the slot has already been filled, then component 114 confirms the slot value that is currently filling that slot. This is indicated by block 610. Component 114 does this by generating an interface 124 (such as an audio prompt) that can be played to the user to confirm the slot value.
In the exemplary embodiment shown in
If it is determined that the user has confirmed the result, at block 612 in
If, at block 608, the slot currently being processed is not filled, or if, at block 612, it was filled with the wrong value (which is not confirmed), then processing continues at block 616, where runtime component 114 determines whether it is time to transfer the user to a cascaded dialog or to quit the system and transfer the user to a live operator. Thus, at block 616, runtime component 114 determines whether the slot prompt for the current slot being processed has been provided to the user the threshold number of times (such as five times indicated in
However, if, at block 616, component 114 determines that the threshold number of times has not been reached, then component 114 retrieves the dialog slot prompt, provides it to speech synthesizer 108, and plays it for the user. This is indicated by block 618 in
The user then responds to the slot prompt shown in field 708 by providing a spoken input which is provided from runtime component 114 to speech recognizer 106 where it is recognized and provided back to task reasoning system 130 through runtime component 114. Receiving and recognizing the user's response to the slot prompt is indicated by block 620 in
Having no more slots to fill in this particular task (as determined in block 614 in
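The slot-filling flow described above (blocks 606 through 620) can be summarized in the following rough, non-limiting sketch, in which all speech input, output and confirmation are replaced by placeholder functions and the retry threshold of five is illustrative only.

# Sketch of the slot-filling loop: fill slots in order, confirm any value
# already present, re-prompt up to a threshold number of times, and fall back
# to a cascaded dialog or a live operator when the threshold is reached.
MAX_PROMPTS = 5


def fill_slots(task, ask, confirm, cascaded_dialog, transfer_to_operator):
    """ask(prompt) -> recognized text or None; confirm(name, value) -> bool."""
    for slot in task["slots"]:                      # slots are filled in order
        attempts = 0
        while True:
            if slot.get("value") is not None:
                if confirm(slot["name"], slot["value"]):
                    break                           # value confirmed, next slot
                slot["value"] = None                # wrong value, try again
            if attempts >= MAX_PROMPTS:
                # Give up on the normal dialog: cascade or hand off.
                if not cascaded_dialog(slot):
                    transfer_to_operator()
                    return False
                break
            attempts += 1
            slot["value"] = ask(slot["prompt"])     # play prompt, recognize reply
    return True


# Minimal usage with canned answers:
answers = iter(["Boston", "Seattle"])
task = {"slots": [{"name": "ArrivalCity", "prompt": "Flying to?"},
                  {"name": "DepartureCity", "prompt": "Flying from?"}]}
fill_slots(task,
           ask=lambda prompt: next(answers),
           confirm=lambda name, value: True,
           cascaded_dialog=lambda slot: False,
           transfer_to_operator=lambda: None)
print(task["slots"])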
It will also be noted that the present system can provide advantages in training. For instance, whenever the user confirms a value, this information can be used to train both the semantic subsystems and the speech subsystems. Specifically, when the user confirms a spoken value, the transcription of the spoken value and its acoustic signal can be used to train the acoustic models in the speech recognizer. Similarly, when the user confirms a series of words, that series of words can be used to train the language models in the speech recognizer.
The confirmed inputs can also be used to train the semantic systems. For instance, the confirmed inputs can be used to identify various values that are acceptable inputs in response to prompts, or to fill slots. Thus, the spoken inputs can be used to train both the speech and semantic systems, and the confirmation values can be used to train both systems as well.
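By way of a non-limiting illustration, the sketch below shows one way confirmed values could be collected as training data for both subsystems. The ConfirmationTrainer class and its hooks are hypothetical; actual acoustic and language model adaptation would be performed by the speech engine itself.

# Sketch of collecting confirmed values as training data for both subsystems.
class ConfirmationTrainer:
    def __init__(self):
        self.acoustic_examples = []   # (audio, transcription) pairs for acoustic models
        self.language_examples = []   # confirmed word sequences for language models
        self.slot_values = {}         # slot name -> confirmed values for semantic training

    def on_confirmed(self, slot_name, transcription, audio=None):
        if audio is not None:
            self.acoustic_examples.append((audio, transcription))
        self.language_examples.append(transcription)
        self.slot_values.setdefault(slot_name, set()).add(transcription)


trainer = ConfirmationTrainer()
trainer.on_confirmed("ArrivalCity", "boston", audio=b"\x00\x01")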
The present invention can, of course, be practiced on substantially any computer. The system can be practiced in a client environment, a server environment, a personal computer or desktop computer environment, a mobile device environment or any of a wide variety of other environments.
Embodiments are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known computing systems, environments, and/or configurations that may be suitable for use with various embodiments include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, telephony systems, distributed computing environments that include any of the above systems or devices, and the like.
Embodiments may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Some embodiments are designed to be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules are located in both local and remote computer storage media including memory storage devices.
With reference to
Computer 810 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 810 and includes both volatile and nonvolatile media, removable and non-removable media.
The system memory 830 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 831 and random access memory (RAM) 832. A basic input/output system 833 (BIOS), containing the basic routines that help to transfer information between elements within computer 810, such as during start-up, is typically stored in ROM 831. RAM 832 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 820. By way of example, and not limitation,
The computer 810 may also include other removable/non-removable volatile/nonvolatile computer storage media. By way of example only,
The drives and their associated computer storage media discussed above and illustrated in
A user may enter commands and information into the computer 810 through input devices such as a keyboard 862, a microphone 863, and a pointing device 861, such as a mouse, trackball or touch pad. These and other input devices are often connected to the processing unit 820 through a user input interface 860 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB). A monitor 891 or other type of display device is also connected to the system bus 821 via an interface, such as a video interface 890. In addition to the monitor, computers may also include other peripheral output devices such as speakers 897 and printer 896, which may be connected through an output peripheral interface 895.
The computer 810 can be operated in a networked environment using logical connections to one or more remote computers, such as a remote computer 880. The remote computer 880 may be a personal computer, a hand-held device, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 810. The logical connections depicted in
When used in a LAN networking environment, the computer 810 is connected to the LAN 871 through a network interface or adapter 870. When used in a WAN networking environment, the computer 810 typically includes a modem 872 or other means for establishing communications over the WAN 873, such as the Internet. The modem 872, which may be internal or external, may be connected to the system bus 821 via the user input interface 860, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 810, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation,
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.