Claims
- 1. A method comprising:
establishing an audio-based dialog between a person and a machine, wherein the person uses a communication device to communicate with the machine;
automatically detecting a characteristic during the dialog in real time, wherein the characteristic is not uniquely indicative of any of: the identity of the person, the identity of the communication device, or any user account; and
customizing the dialog at an application level, based on the detected characteristic.
- 2. A method as recited in claim 1, wherein the characteristic is a characteristic of the person.
- 3. A method as recited in claim 2, wherein the characteristic is an approximate age of the person.
- 4. A method as recited in claim 2, wherein the characteristic is the gender of the person.
- 5. A method as recited in claim 1, wherein the characteristic is a type of speech being spoken by the person.
- 6. A method as recited in claim 1, wherein the characteristic is an emotional state of the person.
- 7. A method as recited in claim 1, wherein the characteristic is indicative of the truthfulness of speech of the person.
- 8. A method as recited in claim 1, wherein the characteristic is an acoustic characteristic.
- 9. A method as recited in claim 1, wherein the characteristic is indicative of a speech level of the dialog.
- 10. A method as recited in claim 1, wherein the characteristic is indicative of a noise level.
- 11. A method as recited in claim 10, wherein the characteristic is indicative of an acoustic noise level of the dialog.
- 12. A method as recited in claim 10, wherein the characteristic is indicative of a signal noise level of the dialog.
- 13. A method as recited in claim 1, wherein the characteristic is descriptive of an environment in which the person is located.
- 14. A method as recited in claim 13, wherein the characteristic is an acoustic characteristic.
- 15. A method as recited in claim 14, wherein the characteristic is a noise level of an acoustic environment in which the person is located.
- 16. A method as recited in claim 13, wherein the characteristic is a noise type of the acoustic environment.
- 17. A method as recited in claim 13, wherein the characteristic is the level of reverberance of the acoustic environment.
- 18. A method as recited in claim 1, wherein the characteristic is descriptive of a reason the person is experiencing an error.
- 19. A method as recited in claim 1, wherein the characteristic is a type of communication device the person is using to communicate with the machine.
- 20. A method as recited in claim 1, wherein the method is implemented in a call routing system, and wherein said customizing the dialog at an application level comprises selecting a destination to which a call from the person should be routed, based on the detected characteristic.
- 21. A method as recited in claim 1, wherein said customizing the dialog at an application level comprises customizing an error recovery dialog based on the detected characteristic.
- 22. A method as recited in claim 1, wherein said customizing the dialog at an application level comprises communicating content customized for the person based on the detected characteristic.
- 23. A method as recited in claim 22, wherein the content comprises an advertisement customized for the person.
- 24. A method as recited in claim 1, wherein said customizing the dialog at an application level comprises customizing a call flow of the dialog for the person.
- 25. A method as recited in claim 1, wherein said customizing the dialog at an application level comprises customizing a prompt delivery of the dialog for the person.
- 26. A method as recited in claim 1, wherein said customizing the dialog at an application level comprises customizing a prompt style of the dialog for the person.
- 27. A method as recited in claim 1, wherein said customizing the dialog at an application level comprises customizing a set of grammars for the dialog for the person.
- 28. A method as recited in claim 1, wherein said customizing the dialog at an application level comprises customizing a persona of the machine for the person.
- 29. A system comprising:
a front end to generate a set of features in response to speech from a person during a dialog with the person, wherein the person uses a communication device to carry out the dialog;
a set of models;
a speech recognition engine to recognize the speech from the person based on the features and the models;
a characteristic detector to detect a characteristic other than the identity of the person, the identity of the specific communication device, or any user account; and
a customization unit to customize the dialog at an application level based on the detected characteristic.
- 30. An apparatus comprising:
means for establishing an audio-based dialog between a person and a machine, wherein the person uses a communication device to communicate with the machine;
means for automatically detecting a characteristic during the dialog in real time, wherein the characteristic is not uniquely indicative of any of: the identity of the person, the identity of the specific communication device, or any user account; and
means for customizing the dialog at an application level, based on the detected characteristic.
- 31. A method comprising:
examining each of a plurality of audio-based dialogs, each dialog between a person and a machine, to automatically detect a characteristic for at least some of the dialogs, wherein each person uses a communication device to communicate with the machine during the corresponding dialog, and wherein the characteristic is not uniquely indicative of any of: the identity of the person, the identity of the communication device, or any user account; and
generating an overall characterization of the dialogs with respect to the characteristic.
- 32. A method as recited in claim 31, wherein the overall characterization of the dialogs is a demographic analysis of the dialogs.
- 33. A method as recited in claim 31, wherein the characteristic is a characteristic of the person.
- 34. A method as recited in claim 33, wherein the characteristic is an approximate age of the person.
- 35. A method as recited in claim 33, wherein the characteristic is the gender of the person.
- 36. A method as recited in claim 31, wherein the characteristic is a type of speech being spoken by the person.
- 37. A method as recited in claim 31, wherein the characteristic is an emotional state of the person.
- 38. A method as recited in claim 31, wherein the characteristic is indicative of the truthfulness of speech of the person.
- 39. A method as recited in claim 31, wherein the characteristic is an acoustic characteristic.
- 40. A method as recited in claim 31, wherein the characteristic is indicative of a speech level of the dialog.
- 41. A method as recited in claim 31, wherein the characteristic is indicative of a noise level.
- 42. A method as recited in claim 41, wherein the characteristic is indicative of an acoustic noise level.
- 43. A method as recited in claim 41, wherein the characteristic is indicative of a signal noise level.
- 44. A method as recited in claim 31, wherein the characteristic is descriptive of an environment in which the person is located.
- 45. A method as recited in claim 44, wherein the characteristic is an acoustic characteristic.
- 46. A method as recited in claim 45, wherein the characteristic is a noise level of an acoustic environment in which the person is located.
- 47. A method as recited in claim 44, wherein the characteristic is a noise type of the acoustic environment.
- 48. A method as recited in claim 44, wherein the characteristic is the level of reverberance of the acoustic environment.
- 49. A method as recited in claim 31, wherein the characteristic is descriptive of a reason the person is experiencing an error.
- 50. A method as recited in claim 31, wherein the characteristic is a type of communication device the person is using to communicate with the machine.
- 51. A method as recited in claim 31, wherein the method is implemented in a call routing system, and wherein said customizing the dialog at an application level comprises routing a call from the person based on the detected characteristic.
- 52. A method as recited in claim 31, wherein said customizing the dialog at an application level comprises customizing an error recovery dialog based on the detected characteristic.
- 53. A method as recited in claim 31, wherein said customizing the dialog at an application level comprises communicating content customized for the person based on the detected characteristic.
- 54. A method as recited in claim 53, wherein the content comprises an advertisement customized for the person.
- 55. A method as recited in claim 31, wherein said customizing the dialog at an application level comprises customizing a call flow of the dialog for the person.
- 56. A method as recited in claim 31, wherein said customizing the dialog at an application level comprises customizing a prompt delivery of the dialog for the person.
- 57. A method as recited in claim 31, wherein said customizing the dialog at an application level comprises customizing a prompt style of the dialog for the person.
- 58. A method as recited in claim 31, wherein said customizing the dialog at an application level comprises customizing a set of grammars for the dialog for the person.
- 59. A method as recited in claim 31, wherein said customizing the dialog at an application level comprises customizing a persona of the machine for the person.
- 60. An apparatus comprising:
means for providing a plurality of audio-based dialogs, each between a person and a machine, wherein each person uses a communication device to communicate with the machine during the corresponding dialog;
means for examining each of the dialogs to automatically detect a characteristic for at least some of the dialogs, wherein the characteristic is not uniquely indicative of any of: the identity of the person, the identity of the specific communication device, or any user account; and
means for generating an overall characterization of the dialogs with respect to the characteristic.
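
Since the claims describe the data flow only in prose, a minimal sketch may help make the moving parts concrete: a characteristic detector that shares the recognizer's front-end features (claim 29), an application-level customization step (claims 20-28), and an aggregate characterization over many dialogs (claims 31-32). Everything below is an illustrative assumption — the names, the SNR threshold, and the settings touched are invented for the sketch, not taken from the patent.

```python
# Hypothetical sketch of the claimed arrangement. All identifiers and
# thresholds here are illustrative assumptions, not from the patent.
from collections import Counter
from dataclasses import dataclass
from enum import Enum, auto


class Characteristic(Enum):
    """Non-identifying traits of the kind named in the dependent claims."""
    APPROXIMATE_AGE = auto()
    GENDER = auto()
    EMOTIONAL_STATE = auto()
    NOISE_LEVEL = auto()
    DEVICE_TYPE = auto()


@dataclass
class Detection:
    kind: Characteristic
    value: str  # e.g. "child", "high", "speakerphone"


@dataclass
class DialogState:
    """Application-level settings a detection may adjust (claims 20-28)."""
    prompt_style: str = "standard"
    grammar_set: str = "default"
    route: str = "default_queue"  # destination selection, as in claim 20


def detect_characteristics(features: dict) -> list[Detection]:
    """Stand-in for the characteristic detector of claim 29.

    Operates on the same front-end features used for recognition, and
    returns only traits that do not uniquely identify the person, the
    device, or a user account.
    """
    detections = []
    # Illustrative rule: a low signal-to-noise ratio marks a noisy call.
    if features.get("snr_db", 30.0) < 10.0:
        detections.append(Detection(Characteristic.NOISE_LEVEL, "high"))
    return detections


def customize_dialog(state: DialogState,
                     detections: list[Detection]) -> DialogState:
    """Application-level customization step (claims 21-28)."""
    for d in detections:
        if d.kind is Characteristic.NOISE_LEVEL and d.value == "high":
            # e.g. shorter prompts and a more constrained grammar set
            state.prompt_style = "terse"
            state.grammar_set = "high_noise"
    return state


def characterize_dialogs(per_dialog: list[list[Detection]]) -> Counter:
    """Overall characterization across many dialogs (claims 31-32):
    how often each trait value was observed over a body of calls."""
    return Counter((d.kind, d.value) for ds in per_dialog for d in ds)
```

A real detector would be statistical rather than a single threshold rule; the point of the sketch is only the data flow: acoustic features in, non-identifying detections out, application-level settings adjusted in real time, and detections optionally pooled offline for an aggregate (e.g. demographic) characterization.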
Parent Case Info
[0001] This is a continuation-in-part of U.S. patent application Ser. No. 09/412,173, filed on Oct. 4, 1999 and entitled, “Method and Apparatus for Optimizing a Spoken Dialog Between a Person and a Machine”, which is a continuation-in-part of U.S. patent application Ser. No. 09/203,155, filed on Dec. 1, 1998 and entitled, “System and Method for Browsing a Voice Web”, each of which is incorporated herein by reference in its entirety.
Continuation in Parts (2)
| Relation | Number   | Date     | Country |
|----------|----------|----------|---------|
| Parent   | 09412173 | Oct 1999 | US      |
| Child    | 10046026 | Jan 2002 | US      |
| Parent   | 09203155 | Dec 1998 | US      |
| Child    | 09412173 | Oct 1999 | US      |