Claims
- 1. A voice application creation and deployment system comprising:
a voice application server for creating and serving voice applications to clients over a communication network; at least one voice portal node having access to the communication network, the portal node for facilitating client interaction with the voice applications; and an inference engine executable from the application server; characterized in that the inference engine is called at one or more predetermined points in an active call flow of the served voice application to decide whether an inference of client need can be made based on analysis of existing data related to the interaction and, if an inference is warranted, to determine which inference dialog will be executed and inserted into the call flow.
- 2. The system of claim 1 wherein the communication network is the Internet.
- 3. The system of claim 1 wherein the communication network is a combination of the Internet and a telephony network.
- 4. The system of claim 1 wherein the inference engine is part of the application logic maintained in the voice application server.
- 5. The system of claim 1 wherein the at least one voice portal is an interactive voice response system combined with a telephony server.
- 6. The system of claim 1 wherein the at least one voice portal is a computerized node connected to a data network having access to the Internet.
- 7. The system of claim 1 wherein the inference engine is called at pre-determined points in a call flow of an interaction using a voice application.
- 8. The system of claim 1 wherein the inference engine uses session information and/or historical data collected about a caller to decide if an inference should be executed.
- 9. The system of claim 1 further comprising a universal grammar adapter adapted to produce universal grammar script from a specialized input, the script transformable into any one of a plurality of scripting languages supported by and referred to as a specification parameter of a speech-to-text/text-to-speech engine.
- 10. The system of claim 1 wherein the inference dialogs are multi-part composites of separate dialogs.
- 11. The system of claim 1 wherein the related data includes one or a combination of caller line identification, caller number identification, and caller history data.
- 12. The system of claim 1 wherein the related data is mined for statistics that are compared with an inference model to determine a particular inference.
- 13. The system of claim 1 further comprising an inference model, including an ontology set and a semantic index.
- 14. The system of claim 1 wherein the inference engine causes generation of voice dialog from a base of semantics.
- 15. The system of claim 1 wherein the inference engine causes an inference to occur at more than one time during the course of an interaction.
- 16. A language adapter system for converting a general descriptor language into an intermediate descriptor language for transformation into a specific XML-based script language for use in a text-to-speech engine comprising:
a first set of constructs defining the general descriptor language; a grammar adapter for equating selected ones of the first set of constructs to individual ones of a second set of intermediate constructs; and a language transformation utility for converting the adapter output into the specific script language desired.
- 17. The system of claim 16 wherein the language transformation utility is an extensible stylesheet transformation program integrated with the adapter.
- 18. The system of claim 16 wherein the specific script language is one of a grammar specification language (GSL) or a grammar extensible markup language (GRXML).
- 19. The system of claim 16 wherein the adapter system is manually operated during manual creation of a voice application.
- 20. The system of claim 16 wherein the adapter system executes automatically during automated generation of a new voice application dialog.
- 21. A method for determining which dialog of more than one available dialog will be executed during a voice interaction using a voice application and speech engine comprising:
(a) providing one or more detectable system points within the voice application being executed; (b) detecting said system points serially during the course of execution and deployment of the application; (c) upon each detection, accessing any available data related to the nature of the portion of the application just deployed; (d) comparing any available data found against a reference data model; and (e) selecting for execution one or more dialogs from the available dialogs based on the results of the comparison.
- 22. The method of claim 21 wherein in (a) the detectable system points are installed according to a pre-transaction and post-transaction model for the voice application.
- 23. The method of claim 21 wherein in (c) the data includes one or a combination of client session data, client dialog data, or client historical activity data.
- 24. The method of claim 21 wherein in (d) the reference data model includes an ontology and a semantic index.
- 25. The method of claim 24 wherein in (d) the reference data model includes a threshold value previously attributed to the data type and context of data that may be found at a particular system point.
- 26. The method of claim 21 wherein in (d) comparison includes computation of statistical values from raw data.
- 27. The method of claim 21 wherein in (e) the comparison result is a breach of a pre-determined threshold value and the dialog is selected based on the class or nature of the value as it applies to that portion of the voice application.
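
The following is a minimal sketch of the inference-engine behavior recited in claims 1, 7-8 and 12-13: session and/or historical data are mined for statistics, compared against an inference model containing thresholds, and a matching inference dialog is returned for insertion into the call flow. All names (InferenceModel, InferenceEngine, decide_inference, the metrics and dialog labels) are illustrative assumptions, not terms from the specification.

```python
# Illustrative sketch only; class, method and metric names are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class InferenceModel:
    """Reference model in the spirit of claims 12-13: thresholds keyed by data
    context, and an index mapping each context to an inference dialog."""
    thresholds: dict      # e.g. {"abandoned_cart_rate": 0.5}
    dialog_index: dict    # e.g. {"abandoned_cart_rate": "offer_checkout_help"}

@dataclass
class InferenceEngine:
    model: InferenceModel

    def decide_inference(self, session_data: dict, history: dict) -> Optional[str]:
        """Called at a pre-determined point in the call flow (claim 7). Mines
        session and/or historical data (claims 8, 12), compares the statistics
        with the model, and returns the inference dialog to insert, or None if
        no inference is warranted (claim 1)."""
        stats = {**history, **session_data}   # merged data serves as the mined statistics
        for metric, threshold in self.model.thresholds.items():
            value = stats.get(metric)
            if value is not None and value >= threshold:
                return self.model.dialog_index.get(metric)
        return None

# Usage: a caller who abandoned 60% of prior orders triggers a help dialog.
engine = InferenceEngine(InferenceModel(
    thresholds={"abandoned_cart_rate": 0.5},
    dialog_index={"abandoned_cart_rate": "offer_checkout_help"},
))
print(engine.decide_inference(session_data={}, history={"abandoned_cart_rate": 0.6}))
```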
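
Claims 9 and 16-20 describe a grammar adapter that turns a single universal construct into whichever script language the target speech engine requires. The sketch below, under stated assumptions, renders one intermediate construct as both a simplified GSL-style string and an SRGS/GRXML document; the function names and the GSL rendering are invented for the example, and a production adapter would apply an extensible stylesheet transformation (claim 17) rather than building strings directly.

```python
# Illustrative adapter sketch; to_gsl/to_grxml and the GSL form are assumptions.
import xml.etree.ElementTree as ET

def to_gsl(rule_name: str, alternatives: list) -> str:
    """Render the intermediate construct as simplified GSL-style text
    (alternatives grouped in square brackets)."""
    return f"{rule_name} [{' '.join(alternatives)}]"

def to_grxml(rule_name: str, alternatives: list) -> str:
    """Render the same construct as an SRGS/GRXML grammar."""
    grammar = ET.Element("grammar", {
        "xmlns": "http://www.w3.org/2001/06/grammar",
        "version": "1.0",
        "root": rule_name,
    })
    rule = ET.SubElement(grammar, "rule", {"id": rule_name})
    one_of = ET.SubElement(rule, "one-of")
    for alt in alternatives:
        ET.SubElement(one_of, "item").text = alt
    return ET.tostring(grammar, encoding="unicode")

# Usage: one universal construct, two target script languages (claim 18).
construct = ("MainMenu", ["sales", "support", "billing"])
print(to_gsl(*construct))     # MainMenu [sales support billing]
print(to_grxml(*construct))   # <grammar ...><rule id="MainMenu">...</rule></grammar>
```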
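
Method claims 21-27 can likewise be read as a control-flow loop: system points installed between dialogs are detected serially as the application executes, available data are gathered at each point, compared against a reference model, and any selected inference dialogs are spliced into the running call flow. The sketch below is a toy rendering of steps (a)-(e) under assumed names; the dialog labels, system-point labels and compare() helper are not from the specification.

```python
# Illustrative control-flow sketch of method steps (a)-(e); all names are hypothetical.
def compare(found_data: dict, reference_model: dict) -> list:
    """Step (d): compare available data against the reference model and return
    the dialogs whose thresholds were breached (claims 25, 27)."""
    selected = []
    for metric, (threshold, dialog) in reference_model.items():
        if found_data.get(metric, 0) >= threshold:
            selected.append(dialog)
    return selected

def execute_dialog(name: str) -> None:
    print(f"executing dialog: {name}")

def run_call_flow(application_dialogs, system_points, collect_data, reference_model):
    """Steps (a)-(b): system points sit between dialogs on a pre-transaction/
    post-transaction basis (claim 22) and are detected serially as each portion
    of the application is deployed."""
    for dialog, point in zip(application_dialogs, system_points):
        execute_dialog(dialog)                # deploy the next portion of the application
        found = collect_data(point)           # step (c): session/dialog/history data
        for inference_dialog in compare(found, reference_model):   # steps (d)-(e)
            execute_dialog(inference_dialog)  # selected dialog inserted into the call flow

# Usage with toy data: the second system point breaches the threshold.
run_call_flow(
    application_dialogs=["greeting", "order_status"],
    system_points=["post_greeting", "post_order_status"],
    collect_data=lambda p: {"repeat_caller_score": 0.9} if p == "post_order_status" else {},
    reference_model={"repeat_caller_score": (0.8, "offer_agent_transfer")},
)
```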
CROSS-REFERENCE TO RELATED DOCUMENTS
[0001] The present application claims priority to provisional application Ser. No. 60/523,042, filed on Nov. 17, 2003. The present application also claims priority as a continuation-in-part of U.S. patent application Ser. No. 10/613,857, which is a continuation-in-part of U.S. patent application Ser. No. 10/190,080, entitled "Method and Apparatus for Improving Voice Recognition Performance in a Voice Application Distribution System," filed on Jul. 2, 2002, which is a continuation-in-part of U.S. patent application Ser. No. 10/173,333, entitled "Method for Automated Harvesting of Data from a Web Site Using a Voice Portal System," filed on Jun. 14, 2002, which claims priority to provisional application Ser. No. 60/302,736. The instant application claims priority to the above-mentioned applications and incorporates their disclosures in their entirety by reference.
Provisional Applications (2)

| Number | Date | Country |
| --- | --- | --- |
| 60302736 | Jul 2001 | US |
| 60523042 | Nov 2003 | US |
Continuation in Parts (3)

| | Number | Date | Country |
| --- | --- | --- | --- |
| Parent | 10613857 | Jul 2003 | US |
| Child | 10803851 | Mar 2004 | US |
| Parent | 10190080 | Jul 2002 | US |
| Child | 10613857 | Jul 2003 | US |
| Parent | 10173333 | Jun 2002 | US |
| Child | 10190080 | Jul 2002 | US |