Claims
- 1. A method for recognizing voice input, comprising:
receiving a document that includes a specification of a datatype for which there exists a predefined grammar;
obtaining a locale attribute for the datatype, wherein the locale attribute identifies a version of a language that is spoken in a locale;
using the locale attribute to look up a locale-specific grammar for the datatype; and
communicating the locale-specific grammar to a speech recognition engine, thereby allowing the speech recognition engine to use the locale-specific grammar in recognizing a voice input for the datatype.
- 2. The method of claim 1, wherein the document that includes the specification of the datatype is a Voice eXtensible Markup Language (VoiceXML) document.
- 3. The method of claim 2, wherein obtaining the locale attribute involves obtaining the locale attribute from the VoiceXML document.
- 4. The method of claim 1, wherein the locale attribute is encoded in an application markup language.
- 5. The method of claim 1, wherein the document that includes the specification of the datatype is an MXML document which is used to generate a VoiceXML document.
- 6. The method of claim 1, wherein obtaining the locale attribute involves receiving the locale attribute as an application parameter.
- 7. The method of claim 1, wherein obtaining the locale attribute involves receiving the locale attribute as an application parameter associated with a particular user.
- 8. The method of claim 1, wherein the locale-specific grammar identifies a standard set of phrases to be recognized by the speech recognition engine while receiving voice input for the datatype.
- 9. The method of claim 1, wherein the locale-specific grammar associates a phrase that can be spoken with a corresponding semantic meaning.
- 10. The method of claim 1, wherein communicating the locale-specific grammar to the speech recognition engine involves communicating a reference to the speech recognition engine, wherein the reference specifies where the locale-specific grammar can be retrieved from.
- 11. The method of claim 1, wherein communicating the locale-specific grammar to the speech recognition engine involves incorporating the grammar “in-line” into a VoiceXML document, and then communicating the VoiceXML document to the speech recognition engine.
- 12. The method of claim 1, wherein the locale attribute includes:
a language code that identifies the language; and
a region code that identifies a geographic region in which a locale-specific version of the language is spoken.
- 13. A computer-readable storage medium storing instructions that when executed by a computer cause the computer to perform a method for recognizing voice input, the method comprising:
receiving a document that includes a specification of a datatype for which there exists a predefined grammar;
obtaining a locale attribute for the datatype, wherein the locale attribute identifies a version of a language that is spoken in a locale;
using the locale attribute to look up a locale-specific grammar for the datatype; and
communicating the locale-specific grammar to a speech recognition engine, thereby allowing the speech recognition engine to use the locale-specific grammar in recognizing a voice input for the datatype.
- 14. The computer-readable storage medium of claim 13, wherein the document that includes the specification of the datatype is a Voice eXtensible Markup Language (VoiceXML) document.
- 15. The computer-readable storage medium of claim 14, wherein obtaining the locale attribute involves obtaining the locale attribute from the VoiceXML document.
- 16. The computer-readable storage medium of claim 13, wherein the locale attribute is encoded in an application markup language.
- 17. The computer-readable storage medium of claim 13, wherein the document that includes the specification of the datatype is an MXML document which is used to generate a VoiceXML document.
- 18. The computer-readable storage medium of claim 13, wherein obtaining the locale attribute involves receiving the locale attribute as an application parameter.
- 19. The computer-readable storage medium of claim 13, wherein obtaining the locale attribute involves receiving the locale attribute as an application parameter associated with a particular user.
- 20. The computer-readable storage medium of claim 13, wherein the locale-specific grammar identifies a standard set of phrases to be recognized by the speech recognition engine while receiving voice input for the datatype.
- 21. The computer-readable storage medium of claim 13, wherein the locale-specific grammar associates a phrase that can be spoken with a corresponding semantic meaning.
- 22. The computer-readable storage medium of claim 13, wherein communicating the locale-specific grammar to the speech recognition engine involves communicating a reference to the speech recognition engine, wherein the reference specifies where the locale-specific grammar can be retrieved from.
- 23. The computer-readable storage medium of claim 13, wherein communicating the locale-specific grammar to the speech recognition engine involves incorporating the grammar “in-line” into a VoiceXML document, and then communicating the VoiceXML document to the speech recognition engine.
- 24. The computer-readable storage medium of claim 13, wherein the locale attribute includes:
a language code that identifies the language; and
a region code that identifies a geographic region in which a locale-specific version of the language is spoken.
- 25. An apparatus that recognizes voice input, comprising:
a receiving mechanism configured to receive a document that includes a specification of a datatype for which there exists a predefined grammar;
wherein the receiving mechanism is additionally configured to obtain a locale attribute for the datatype, wherein the locale attribute identifies a version of a language that is spoken in a locale;
a lookup mechanism configured to use the locale attribute to look up a locale-specific grammar for the datatype; and
a communication mechanism configured to communicate the locale-specific grammar to a speech recognition engine, thereby allowing the speech recognition engine to use the locale-specific grammar in recognizing a voice input for the datatype.
- 26. The apparatus of claim 25, wherein the document that includes the specification of the datatype is a Voice eXtensible Markup Language (VoiceXML) document.
- 27. The apparatus of claim 26, wherein the receiving mechanism is configured to obtain the locale attribute from the VoiceXML document.
- 28. The apparatus of claim 25, wherein the locale attribute is encoded in an application markup language.
- 29. The apparatus of claim 25, wherein the document that includes the specification of the datatype is an MXML document which is used to generate a VoiceXML document.
- 30. The apparatus of claim 25, wherein the receiving mechanism is configured to obtain a locale attribute as an application parameter.
- 31. The apparatus of claim 25, wherein the receiving mechanism is configured to obtain a locale attribute as an application parameter associated with a particular user.
- 32. The apparatus of claim 25, wherein the locale-specific grammar identifies a standard set of phrases to be recognized by the speech recognition engine while receiving voice input for the datatype.
- 33. The apparatus of claim 25, wherein the locale-specific grammar associates a phrase that can be spoken with a corresponding semantic meaning.
- 34. The apparatus of claim 25, wherein the communication mechanism is configured to communicate the locale-specific grammar to the speech recognition engine by communicating a reference to the speech recognition engine, wherein the reference specifies where the locale-specific grammar can be retrieved from.
- 36. The apparatus of claim 25, wherein the communication mechanism is configured to communicate the locale-specific grammar to the speech recognition engine by incorporating the grammar “in-line” into a VoiceXML document, and then communicating the VoiceXML document to the speech recognition engine.
- 37. The apparatus of claim 25, wherein the locale attribute includes:
a language code that identifies the language; and
a region code that identifies a geographic region in which a locale-specific version of the language is spoken.
- 38. The apparatus of claim 25,
wherein the apparatus is located within an application server; and
wherein the speech recognition engine is located within a voice gateway.
- 39. A means for recognizing voice input, comprising:
a receiving means for receiving a document that includes a specification of a datatype for which there exists a predefined grammar;
wherein the receiving means is additionally configured to obtain a locale attribute for the datatype, wherein the locale attribute identifies a version of a language that is spoken in a locale;
a lookup means that uses the locale attribute to look up a locale-specific grammar for the datatype; and
a communication means for communicating the locale-specific grammar to a speech recognition engine, thereby allowing the speech recognition engine to use the locale-specific grammar in recognizing a voice input for the datatype.
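The method recited in claims 1, 13, and 25 can be illustrated with a short sketch. The following Python fragment is a minimal, hypothetical illustration and is not part of the claims: the registry contents and all function names are assumptions. It parses a locale attribute of the form language-REGION (e.g. "en-GB", per claims 12, 24, and 37), looks up a locale-specific grammar for a datatype in a registry, and falls back to a language-only grammar when no region-specific entry exists.

```python
# Hypothetical sketch of locale-specific grammar lookup.
# Registry keys and grammar values are illustrative placeholders.

GRAMMAR_REGISTRY = {
    ("date", "en-US"): "grammar: US English date phrases (month/day/year)",
    ("date", "en-GB"): "grammar: British English date phrases (day/month/year)",
    ("date", "en"):    "grammar: generic English date phrases",
}

def parse_locale(locale_attr):
    """Split a locale attribute into a language code and an optional region code."""
    parts = locale_attr.split("-", 1)
    language = parts[0].lower()
    region = parts[1].upper() if len(parts) > 1 else None
    return language, region

def lookup_grammar(datatype, locale_attr):
    """Return the most specific grammar registered for (datatype, locale).

    Tries the full language-region key first, then falls back to the
    language-only key, mirroring the locale-attribute lookup in the claims.
    """
    language, region = parse_locale(locale_attr)
    if region:
        grammar = GRAMMAR_REGISTRY.get((datatype, f"{language}-{region}"))
        if grammar:
            return grammar
    return GRAMMAR_REGISTRY.get((datatype, language))

print(lookup_grammar("date", "en-GB"))  # region-specific grammar
print(lookup_grammar("date", "en-AU"))  # falls back to generic English grammar
```

The grammar returned here (or a reference to it, per claims 10 and 22) would then be communicated to the speech recognition engine, for example by inlining it into the generated VoiceXML document.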
RELATED APPLICATION
[0001] This application hereby claims priority under 35 U.S.C. §119 to U.S. Provisional Patent Application No. 60/440,309, filed on 14 Jan. 2003, entitled “Concatenated Speech Server,” by inventor Christopher Rusnak (Attorney Docket No. OR03-01301PSP), and to U.S. Provisional Patent Application No. 60/446,145, filed on 10 Feb. 2003, entitled “Concatenated Speech Server,” by inventor Christopher Rusnak (Attorney Docket No. OR03-01301PSP2). This application additionally claims priority under 35 U.S.C. §119 to U.S. Provisional Patent Application No. 60/449,078, filed on 21 Feb. 2003, entitled “Globalization of Voice Applications,” by inventors Ashish Vora, Kara L. Sprague and Christopher Rusnak (Attorney Docket No. OR03-03501PRO).
[0002] This application is additionally related to a non-provisional patent application entitled, “Structured Datatype Expansion Framework,” by inventors Ashish Vora, Kara L. Sprague and Christopher Rusnak, filed on the same day as the instant application (Attorney Docket No. OR03-03501).
[0003] This application is additionally related to a non-provisional patent application entitled, “Method and Apparatus for Facilitating Globalization of Voice Applications,” by inventor Ashish Vora, filed on the same day as the instant application (Attorney Docket No. OR03-03701).
Provisional Applications (3)
| Number | Date | Country |
| --- | --- | --- |
| 60440309 | Jan 2003 | US |
| 60446145 | Feb 2003 | US |
| 60449078 | Feb 2003 | US |