The present invention relates to a system and process for generating a voice application.
A voice application is a software application that provides an interactive audio interface, particularly a speech interface, on a machine such as an Interactive Voice Response (IVR) system. IVRs, such as Intel's Dialogic™ IVR, are used in communications networks to receive voice calls from parties. The IVR is able to generate and send voice prompts to a party and to receive and interpret the party's spoken responses.
Voice extensible markup language, or VoiceXML, is a markup language for voice or speech-driven applications. VoiceXML is used for developing speech-based telephony applications, and also enables web-based content to be accessed via voice using a telephone. VoiceXML is being developed by the VoiceXML Forum.

Due to the verbose nature of VoiceXML, it can be cumbersome to develop VoiceXML-based applications manually using a text or XML editor. Consequently, voice application development systems are available that allow voice applications to be developed by manipulating graphical elements via a graphical user interface rather than by coding VoiceXML directly. However, these systems are limited in their ability to assist a developer. It is desired to provide a process and system for developing a voice application that improve upon the prior art, or at least provide a useful alternative to existing voice application development systems and processes.
In accordance with the present invention, there is provided a process for developing a voice application, including:
The present invention also provides a system for use in developing a voice application, including:
The present invention also provides a graphical user interface for use in developing a voice application, said interface including graphical user interface components for defining execution paths of said application by arranging configurable dialog elements in a tree structure, each path through said tree structure representing one of said execution paths, and wherein said dialog elements may include one or more of:
Preferred embodiments of the present invention are hereinafter described, by way of example only, with reference to the accompanying drawings, wherein:
As shown in
As shown in
When the process begins, the system 100 generates a graphical user interface, as shown in
To develop a speech based application, a user of the system 100 can create a new project or open a saved project by selecting a corresponding menu item from the “Files” menu of the main menubar 408. The dialog editor 202 is then executed, and a tabbed dialog panel 411 is added to the tools pane 404, providing an interface to the dialog editor 202, and allowing the user to define an execution flow, referred to as a dialog, for the application. The dialog panel 411 includes a dialog pane 412, a dialog element toolbar 414 referred to as the dialog palette, a dialog element properties pane 416, and a dialog element help pane 418.
An application can be built from a set of seventeen dialog elements represented by icons in the dialog palette 414, as shown in
The execution flow of the application is defined by adding dialog elements to the dialog editor pane 412, setting the properties of the dialog elements, and defining the execution order of the dialog elements. The latter is achieved by dragging a dialog element and dropping it on top of an existing dialog element in the dialog editor pane 412. The dropped element becomes the next element to be executed after the element that it was dropped onto. The sequence and properties of dialog elements on the dialog editor pane 412 define a dialog. A dialog thus represents the execution flow of a voice application as a sequence of dialog elements. This sequence represents the main flow of the application and provides a higher-level logical view of the application that is not readily evident from the application's VoiceXML code. The dialog therefore provides a clear and logical view of the execution of the application. In addition to the main flow, non-sequential execution branches can be created by using a Jump dialog element; however, such non-sequential execution is not represented in a dialog. A subroutine is represented by an icon in the project pane 402 and appears as an icon in the dialog editor pane 412 when the main dialog is displayed. The execution flow of a subroutine can be displayed by selecting its icon in the project pane 402.
The sequencing of a dialog is facilitated by enforcing strict rules on dialog elements and by including explicit links in the dialog code to transition from one dialog element to the next. In contrast to arbitrary VoiceXML code whose execution can be completely non-sequential due to the presence of “GOTO” tags, a dialog generated by the system 100 has a tree structure, with each path through the tree representing a possible path of dialog execution. This allows the dialog flow to be readily determined and displayed using high level graphical dialog elements, which would be much more difficult with arbitrary VoiceXML.
An application can be saved at any time by selecting a “Save” menu item of the “File” menu of the menubar 410. When an application is saved, the application dialog is translated into an extended VoiceXML format by the dialog transformer 204. Each dialog element in the dialog flow is first translated into corresponding VoiceXML code. Each of the seventeen dialog elements corresponds to one of the seventeen VoiceXML templates 212 that performs the functionality of that element. A VoiceXML template is a sequence of VoiceXML elements that produces the behaviour that the dialog element represents. It is a template in the sense that it must be configured with the element properties (e.g., name, test condition) set by the user, as described above.
Some dialog elements correspond to similar VoiceXML elements (e.g., a Menu dialog element corresponds to a VoiceXML <menu> element), while others map onto a complex sequence of VoiceXML elements (e.g., a Loop dialog element corresponds to multiple VoiceXML <form> elements, each form specifying the next form to execute in an iterative loop). However, even dialog elements that correspond to similar VoiceXML elements represent more functionality than the equivalent VoiceXML element. For example, a Menu dialog element allows prompts to be set by the user, and the Menu dialog element actually maps onto a block of VoiceXML code that contains a <menu> element with embedded <prompt>, <audio>, and other XML elements.
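By way of illustration, a Menu dialog element configured with a prompt and two options might map onto a block of VoiceXML of the following general form; the identifiers, wording, and audio file name here are illustrative only and are not taken from the system:

```xml
<menu id="main_menu">
  <prompt>
    <!-- Recorded audio with text-to-speech fallback. -->
    <audio src="welcome.wav">Welcome.</audio>
    Would you like flight bookings or account details?
  </prompt>
  <choice next="#flight_bookings">flight bookings</choice>
  <choice next="#account_details">account details</choice>
</menu>
```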
Each dialog element's VoiceXML template is separate from the next and can be sequenced to produce the dialog flow. The sequencing is achieved by a reference at the bottom of each element's template to the next element's template, which causes the templates to be executed in the desired order.
The translation from high-level dialog elements into VoiceXML proceeds as follows. The dialog elements are stored in a tree structure, each branch of the tree corresponding to a path in the dialog flow. The tree is traversed in pre-order, and each visited element is converted into VoiceXML. For each visited dialog element, VoiceXML code is generated from its corresponding VoiceXML template by filling in the missing or configurable parts of the template using the element properties set by the user, and a link to the next element's VoiceXML code is added at the bottom of the current element's generated VoiceXML code.
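By way of illustration, the following JavaScript sketch shows the general shape of such a pre-order translation; the function and property names (fillTemplate, element.template, element.children) are assumptions for illustration and are not taken from the system itself:

```javascript
// Hypothetical helper: substitute ${property} placeholders in a template
// with the values the user set in the dialog editor.
function fillTemplate(template, properties) {
  return template.replace(/\$\{(\w+)\}/g, function (match, key) {
    return properties[key];
  });
}

// Pre-order traversal of the dialog tree: generate the current element's
// VoiceXML, append a link to each successor, then recurse into each branch.
function translate(element, output) {
  var code = fillTemplate(element.template, element.properties);
  element.children.forEach(function (child) {
    // The link at the bottom of the generated code transfers control to
    // the next element's code, producing the desired execution order.
    code += '\n<goto next="#' + child.id + '"/>';
  });
  output.push(code);
  element.children.forEach(function (child) {
    translate(child, output);
  });
  return output;
}
```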
Although the forward transformation from dialog flow to VoiceXML is relatively straightforward, the reverse transformation from VoiceXML to dialog flow is more difficult. The sequencing of dialog elements can be recreated from the generated VoiceXML, but property settings for the elements may not be available, because some information in the dialog elements is lost when they are converted to VoiceXML. This lost information may not fall within the scope of VoiceXML, and hence cannot be naturally saved in VoiceXML code. For example, type information for a Form element is used to generate the grammar for that Form. However, the VoiceXML code simply needs to reference the generated grammar file and is not concerned with the type information itself. Thus the mapping of the Form element to equivalent VoiceXML code does not include the type information.
To facilitate the reverse translation from VoiceXML code to dialog, the dialog transformer 204 modifies the VoiceXML code by inserting additional attributes into various element tags, providing dialog element information that cannot be stored using the available VoiceXML tags. The resulting file 214 is effectively in an extended VoiceXML format. The additional attributes are stored in a separate, qualified XML namespace so that they do not interfere with the standard VoiceXML elements and attributes, as described in the World Wide Web Consortium's (W3C) Namespaces in XML recommendation. This facilitates the parsing of extended VoiceXML files.
Specifically, an extended VoiceXML file can include the following namespace declaration:
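(The declaration below is reconstructed from the description that follows; its placement on the root vxml element is illustrative.)

```xml
<!-- Binds the "lq" prefix to the LyreQuest namespace URI. -->
<vxml version="1.0" xmlns:lq="http://www.telstra.com.au/LyreQuest">
```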
This defines a namespace prefix “lq” as bound to the uniform resource identifier (URI) http://www.telstra.com.au/LyreQuest. Subsequently, the file may contain the following extended VoiceXML:
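(The fragment below is a hypothetical reconstruction; the form identifier, element name, and subroutine reference are illustrative only.)

```xml
<!-- Hypothetical SubroutineCall element; lq:* attributes carry the
     dialog element information described below. -->
<form id="get_payment" lq:element="SubroutineCall" lq:name="get_payment"
      lq:calls="mas.get_payment_details">
  <subdialog name="get_payment_result" src="#sub_get_payment_details"/>
</form>
```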
where the indicated XML tag attributes provide the additional dialog element information, and the remaining code is standard VoiceXML. The additional or extended attributes include the lq namespace prefix. The lq:element, lq:name, and lq:calls attributes indicate, respectively, the dialog element that the VoiceXML corresponds to, the name given to that element by the user, and the package and name of the Subroutine element that is being called by the SubroutineCall element. Other elements will have different extended attributes.
The equivalent code in VoiceXML omits the extended attributes, but is otherwise identical:
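(Again hypothetical, continuing the example above.)

```xml
<form id="get_payment">
  <subdialog name="get_payment_result" src="#sub_get_payment_details"/>
</form>
```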
Two extended VoiceXML files, including all the available extended attributes, are listed in Appendix B.
When the application is saved, the dialog transformer 204 also generates a number of other files, including a project file 216, package description files 218, and type description files 220. The project file is given a filename extension of “.lqx”, and contains information about the packages (i.e., self-contained groups of files) and other data files making up a project of the voice application development system 100.
An example project file is listed below. Within the project file, the project is defined by a “project” XML element that defines the project name as “mas”. Within the “project” element are four sequential “folder” elements that define subdirectories or folders of the directory containing the project file, respectively named Packages, Transcripts, Scenarios, and Generated Code. These folders contain respectively the project's packages, transcripts of text, scenarios of interaction between the corresponding application and a user, and VoiceXML code and grammar generated for one or more specific IVR platforms. Within the “Packages” folder element is a “package” element giving the location and name of any packages used by the project. The “folder” elements can contain one or more “file” elements, each defining the type and name of a file within the encapsulating folder. The “folder” elements can be nested.
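(The listing below is a sketch reconstructed from the description above; the file names and the exact attribute names are assumptions.)

```xml
<project name="mas">
  <folder name="Packages">
    <package location="Packages/mas" name="mas"/>
  </folder>
  <folder name="Transcripts">
    <file type="transcript" name="booking.transcript"/>
  </folder>
  <folder name="Scenarios">
    <file type="scenario" name="booking.scenario"/>
  </folder>
  <folder name="Generated Code">
    <!-- Folders can be nested, e.g. one per target IVR platform. -->
    <folder name="IVR">
      <file type="vxml" name="mas.vxml"/>
    </folder>
  </folder>
</project>
```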
A package description file is given a filename extension of “.pkg.xml”, and contains information about data files belonging to an individual Package of a project. An example of a package description file for the package named “mas” is given below. The file defines the project's dialog file as “mas.vxml”, four grammar files, four prompt files, and three type definition files, containing definitions of user-defined variable types. These files are described in more detail below.
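(A sketch reconstructed from the description above; all file names other than mas.vxml, and the attribute names, are assumptions.)

```xml
<package name="mas">
  <file type="dialog" name="mas.vxml"/>
  <file type="grammar" name="mas.rulelist"/>
  <file type="grammar" name="fare_class.rulelist"/>
  <file type="grammar" name="cities.rulelist"/>
  <file type="grammar" name="dates.rulelist"/>
  <file type="prompts" name="mas.prompts.rulelist"/>
  <file type="prompts" name="fare_class.prompts.rulelist"/>
  <file type="prompts" name="cities.prompts.rulelist"/>
  <file type="prompts" name="dates.prompts.rulelist"/>
  <file type="type" name="fare_class.type.xml"/>
  <file type="type" name="cities.type.xml"/>
  <file type="type" name="dates.type.xml"/>
</package>
```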
A Type description file is given a filename extension of “.type.xml”, and contains information about a user-defined Type used in a Package of a project. An example of the file is given below. The file defines an enumerated type named “fare_class” with three possible values: “first”, “business”, and “economy”. The “fare_class” type is associated with four files, respectively defining rules for the grammar, cover (a set of example phrases), slots (the parameter=value fields that the grammar can return), and targets (more specific slot filling information).
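(A sketch reconstructed from the description above; the element and attribute names are assumptions.)

```xml
<type name="fare_class" kind="enumerated">
  <value>first</value>
  <value>business</value>
  <value>economy</value>
  <!-- The four associated files: grammar rules, cover (example phrases),
       slots (parameter=value fields), and targets (slot filling detail). -->
  <file role="grammar" name="fare_class.rulelist"/>
  <file role="cover" name="fare_class.cover"/>
  <file role="slots" name="fare_class.slots"/>
  <file role="targets" name="fare_class.targets"/>
</type>
```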
Returning to
A generated grammar file is given a filename extension of “.rulelist”. An example of a generated grammar file for a flight booking system is:
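(The fragment below is reconstructed for illustration from the rule descriptions that follow; the middle rule and all training counts are hypothetical.)

```text
.Ask_flight_details_destination Cities:X 2 0 destination=$X.cities
.Ask_flight_details_date Dates:X 1 0 departure_date=$X.dates
.Ask_flight_details GF_IwantTo:X750 Fare_class:X753 Cities:X757 1 0 ticket_class=$X753.fare_class destination=$X757.cities
```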
The first line or rule of this grammar can be used as an example:
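(Assembled from the field-by-field description that follows.)

```text
.Ask_flight_details_destination Cities:X 2 0 destination=$X.cities
```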
This grammar rule might be invoked when a flight booking application prompts a customer to provide the destination of a flight. The first field, .Ask_flight_details_destination, provides the name of the grammar rule. The second field, Cities:X, indicates that the customer's response X is of type Cities. This type is defined by its own grammar that includes a list of available city names. The following two fields, 2 0, are used for grammar learning, as described in International Patent Publication No. WO 00/78022, A Method of Developing an Interactive System. The first of these indicates the number of training examples that use the grammar rule; the second indicates the number of other rules that refer to the rule. The last field, destination=$X.cities, indicates that the result of the rule is that the parameter destination is assigned a value of type Cities having the value of X. A more complex example is provided by the last rule:
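(Assembled from the description that follows; the rule name, counts, and exact slot expressions are assumptions.)

```text
.Ask_flight_details GF_IwantTo:X750 Fare_class:X753 Cities:X757 1 0 ticket_class=$X753.fare_class destination=$X757.cities
```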
In this case, the grammar rule invokes three other grammars, GF_IwantTo, Fare_class, and Cities, and assigns the results to parameters named X750, X753, and X757, respectively. This rule defines the application parameters ticket_class and destination.
A prompts file is given a filename extension of “.prompts.rulelist”, and each line of the file defines the speech prompt that is to be provided to a user of the application when the corresponding element of the dialog is executed. An example of a generated prompts file is:
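(The lines below are purely illustrative, following the rulelist format described next.)

```text
.PromptAsk_flight_details_destination where would you like to fly to 1 0
.PromptConfirm_flight_details so that is a Fare_class:X1 ticket to Cities:X2 1 0 ticket_class=$X1 destination=$X2
```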
The format of the prompts file is the same as the grammar file. This allows the prompts to be improved through machine learning as though they were a grammar, using a grammar learning method such as that described in International Patent Publication No. WO 00/78022, A Method of Developing an Interactive System.
The generated prompts include dynamically generated prompts. An example of a dynamic prompt is: “You have selected to buy Telstra shares. How many of the Telstra shares would you like to buy?”. The word “Telstra” is dynamically inserted into the application's prompt to the user.
The voice application development system 100 generates text-to-speech (TTS) prompts within the VoiceXML code that are evaluated on the fly. Although VoiceXML syntax allows an expression to be evaluated and played as a TTS prompt, the system 100 extends this by allowing an ECMAScript or JavaScript function to be called to evaluate each variable used in a prompt. By evaluating variables in a function rather than as an inline expression, complex test conditions can be used to determine the most suitable prompt given the available information in the variables. This might result in a prompt, for example, of “six dollars” rather than “six dollars and zero cents”. In addition to automatically generating and incorporating JavaScript function calls in VoiceXML, the system 100 also generates the corresponding JavaScript functions by incorporating user-supplied prompt text and variables into the JavaScript templates 230. This allows the user to develop a voice application with dynamically generated prompts without having to manually code any JavaScript.
For example, an automatically generated function call for a prompt named PromptConfirm_payment_details is:
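(A hypothetical form of the generated call; the argument names are illustrative.)

```xml
<prompt>
  <value expr="PromptConfirm_payment_details(payment_amount, card_type)"/>
</prompt>
```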
The corresponding JavaScript prompt function generated by the system 100 is:
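(A hypothetical sketch of such a generated function; the argument names and prompt wording are illustrative. Note the test on cents, which yields “six dollars” rather than “six dollars and zero cents”.)

```javascript
function PromptConfirm_payment_details(payment_amount, card_type) {
  var dollars = Math.floor(payment_amount);
  var cents = Math.round((payment_amount - dollars) * 100);
  var text = "You are paying " + dollars + " dollar" + (dollars == 1 ? "" : "s");
  if (cents > 0) {
    // Only mention cents when there are some, avoiding "and zero cents".
    text += " and " + cents + " cent" + (cents == 1 ? "" : "s");
  }
  return text + " by " + card_type + ". Is that correct?";
}
```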
The system 100 represents prompts using a language model that describes all of the prompts that can be played, along with their meanings. This model contains the same type of information as a speech recognition grammar, and therefore the prompts to be played can be represented using a grammar. Prompts to be generated by the application are first represented as a grammar to enable that grammar to be improved using techniques such as grammar learning, as described in International Patent Publication No. WO 00/78022, A Method of Developing an Interactive System. The grammar is subsequently converted into JavaScript and referenced by the application's VoiceXML tags, as described above.
An example of a prompt represented as a grammar is:
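(Hypothetical, following the rulelist format described above; the slots record the meaning of the dynamic parts of the prompt.)

```text
.PromptConfirm_payment_details you are paying Money:X1 by CardType:X2 is that correct 1 0 payment_amount=$X1 card_type=$X2
```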
Returning to
When the application has been tested and is ready for use, the IVR code generator 208 executes a code generation process at step 612 to generate pure VoiceXML suitable for a particular speech-enabled IVR such as the IVR 102 of
For the purposes of illustration, Appendix C provides a partial listing of a pure VoiceXML file corresponding to the first extended VoiceXML file listed in Appendix B. The listing in Appendix C includes the VoiceXML with the merged JavaScript for supporting Prompts. The JavaScript code is at the end of the listing.
Many modifications will be apparent to those skilled in the art without departing from the scope of the present invention as herein described with reference to the accompanying drawings.
Foreign Application Priority Data

| Number | Date | Country | Kind |
|---|---|---|---|
| 2002950336 | Jul 2002 | AU | national |
PCT Information

| Filing Document | Filing Date | Country | Kind | 371(c) Date |
|---|---|---|---|---|
| PCT/AU03/00939 | 7/24/2003 | WO | 00 | 1/21/2005 |
PCT Publication Data

| Publishing Document | Publishing Date | Country | Kind |
|---|---|---|---|
| WO2004/010678 | 1/29/2004 | WO | A |
U.S. Patent Documents

| Number | Name | Date | Kind |
|---|---|---|---|
| 5241619 | Schwartz et al. | Aug 1993 | A |
| 5452397 | Ittycheriah et al. | Sep 1995 | A |
| 5642519 | Martin | Jun 1997 | A |
| 5737723 | Riley et al. | Apr 1998 | A |
| 5860063 | Gorin et al. | Jan 1999 | A |
| 5937385 | Zadrozny et al. | Aug 1999 | A |
| 6016470 | Shu | Jan 2000 | A |
| 6044347 | Abella et al. | Mar 2000 | A |
| 6144938 | Surace et al. | Nov 2000 | A |
| 6154722 | Bellegarda | Nov 2000 | A |
| 6173261 | Arai et al. | Jan 2001 | B1 |
| 6269336 | Ladd et al. | Jul 2001 | B1 |
| 6314402 | Monaco et al. | Nov 2001 | B1 |
| 6321198 | Hank et al. | Nov 2001 | B1 |
| 6411952 | Bharat et al. | Jun 2002 | B1 |
| 6434521 | Barnard | Aug 2002 | B1 |
| 6493673 | Ladd et al. | Dec 2002 | B1 |
| 6510411 | Norton et al. | Jan 2003 | B1 |
| 6523016 | Michalski | Feb 2003 | B1 |
| 6587822 | Brown et al. | Jul 2003 | B2 |
| 6604075 | Brown et al. | Aug 2003 | B1 |
| 6618697 | Kantrowitz et al. | Sep 2003 | B1 |
| 6684183 | Korall et al. | Jan 2004 | B1 |
| 20010013001 | Brown et al. | Aug 2001 | A1 |
| 20010016074 | Hamamura | Aug 2001 | A1 |
| 20020087325 | Lee et al. | Jul 2002 | A1 |
| 20020188454 | Sauber | Dec 2002 | A1 |
| 20030007609 | Yuen et al. | Jan 2003 | A1 |
| 20030055651 | Pfeiffer et al. | Mar 2003 | A1 |
| 20030069729 | Bickley et al. | Apr 2003 | A1 |
| 20040015350 | Gandhi et al. | Jan 2004 | A1 |
| 20050091057 | Phillips et al. | Apr 2005 | A1 |
| 20060025997 | Law et al. | Feb 2006 | A1 |
| 20060190252 | Starkie | Aug 2006 | A1 |
| 20060203980 | Starkie | Sep 2006 | A1 |
| 20080126089 | Printz et al. | May 2008 | A1 |
| 20080319738 | Liu et al. | Dec 2008 | A1 |
Foreign Patent Documents

| Number | Date | Country |
|---|---|---|
| 0 312 209 | Nov 1992 | EP |
| 0 685 955 | Dec 1995 | EP |
| 0 700 031 | Mar 1996 | EP |
| 0 890 942 | Jan 1999 | EP |
| 0 992 980 | Apr 2000 | EP |
| 1 207 518 | May 2002 | EP |
| WO 9850907 | Nov 1998 | WO |
| WO 0005708 | Feb 2000 | WO |
| WO 0051016 | Aug 2000 | WO |
| WO 0078022 | Dec 2000 | WO |
| WO 0237268 | May 2002 | WO |
| WO 02063460 | Aug 2002 | WO |
| WO 02103673 | Dec 2002 | WO |
| WO 2004010678 | Jan 2004 | WO |
Related Publications

| Number | Date | Country |
|---|---|---|
| 20060025997 A1 | Feb 2006 | US |