Claims
- 1. A method of enhancing audio renderings of non-audio data sources, comprising steps of:
detecting a nuance of a non-audio data source; locating an audio cue corresponding to the detected nuance; and associating the located audio cue with the detected nuance for playback to a listener.
- 2. The method according to claim 1, further comprising the steps of:
creating an audio rendering of a non-audio segment of the non-audio data source, wherein the non-audio segment is associated with the nuance; and mixing the associated audio cue with the audio rendering of the segment.
- 3. The method according to claim 1, wherein the detecting step detects a plurality of nuances of the non-audio data source, the locating step locates audio cues for each of the detected nuances, and the associating step associates each of the located audio cues with the respective detected nuance, and further comprising the steps of:
creating an audio rendering of the non-audio data source; and mixing the associated audio cues in with the audio rendering.
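The following Python sketch is purely illustrative of the detecting, locating, and associating steps recited in claims 1-3; the `Nuance` type, the `CUE_LIBRARY` mapping, and the specific nuance kinds are hypothetical assumptions of this example, since the claims do not prescribe any particular implementation.

```python
from dataclasses import dataclass

@dataclass
class Nuance:
    kind: str      # e.g. "emoticon", "new-paragraph", "bold"
    position: int  # character offset of the nuance in the source text

# Hypothetical cue library mapping a nuance kind to a stored audio cue.
CUE_LIBRARY = {
    "emoticon": "cue_chime.wav",
    "new-paragraph": "cue_pause.wav",
}

def detect_nuances(text: str) -> list[Nuance]:
    """Detecting step: scan the non-audio source for nuances (claim 1)."""
    nuances = []
    for i in range(len(text)):
        if text[i:i + 2] in (":)", ":("):     # emoticon (claim 12)
            nuances.append(Nuance("emoticon", i))
        elif text[i] == "\n":                 # paragraph break (claim 14)
            nuances.append(Nuance("new-paragraph", i))
    return nuances

def locate_cue(nuance: Nuance) -> str | None:
    """Locating step: find the audio cue corresponding to a nuance."""
    return CUE_LIBRARY.get(nuance.kind)

def associate_cues(text: str) -> list[tuple[Nuance, str]]:
    """Associating step: pair each nuance with its located cue for playback."""
    return [(n, cue) for n in detect_nuances(text)
            if (cue := locate_cue(n)) is not None]
```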
- 4. The method according to claim 3, wherein the mixing step occurs while playing the audio rendering to the listener.
- 5. The method according to claim 2 or claim 3, wherein the non-audio data source is a text file and wherein the creating step further comprises processing the text file with a text-to-speech translator.
- 6. The method according to claim 3, wherein at least one of the detected nuances is presence of a formatting tag.
- 7. The method according to claim 3, wherein the non-audio data source is a text file and at least one of the detected nuances is a change in color of text in the text file.
- 8. The method according to claim 1, wherein the non-audio data source is a text file and the detected nuance is a change in font of text in the text file.
- 9. The method according to claim 1, wherein the non-audio data source is a text file and the detected nuance is presence of a keyword for the text file.
- 10. The method according to claim 9, wherein the keyword is supplied by a creator of the text file.
- 11. The method according to claim 9, wherein the keyword is programmatically detected by evaluating text in the text file.
- 12. The method according to claim 3, wherein the non-audio data source is a text file and at least one of the detected nuances is presence of an emoticon in the text file.
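As one illustration of the programmatic keyword detection of claim 11, the sketch below surfaces candidate keywords by evaluating word frequency in the text file; the stop-word list and the frequency heuristic are assumptions of this example, not limitations of the claim.

```python
import re
from collections import Counter

# Illustrative stop-word list; a real embodiment would use a fuller set.
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it"}

def detect_keywords(text: str, top_n: int = 5) -> list[str]:
    """Programmatically detect keywords by evaluating the text (claim 11)."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS and len(w) > 2)
    return [word for word, _ in counts.most_common(top_n)]
```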
- 13. The method according to claim 1, wherein the detected nuance is a change of topic in the non-audio data source.
- 14. The method according to claim 6, wherein the formatting tag is a new paragraph tag.
- 15. The method according to claim 3, wherein at least one of the detected nuances is a degree of certainty in translation of the non-audio data source from another format.
- 16. The method according to claim 15, wherein the detecting step detects at least two different degrees of certainty, and wherein the located audio cues comprise changes in a pitch of a voice used in the audio rendering for each of the different degrees of certainty.
- 17. The method according to claim 15, wherein the detecting step detects at least two different degrees of certainty, and further comprising changing a pitch of the associated audio cue used by the mixing step for each of the different degrees of certainty.
- 18. The method according to claim 15, wherein the detecting step detects at least two different degrees of certainty, and wherein the mixing step further comprises alternating between two of the located audio cues to audibly indicate the different degrees of certainty.
- 19. The method according to claim 15, wherein the other format is an input audio data source and the non-audio data source is a text file, and the translation is an audio-to-text translation from the input audio data source to the text file, and wherein the degree of certainty reflects accuracy of the audio-to-text translation.
- 20. The method according to claim 15, wherein the other format is an input audio data source and the non-audio data source is a text file, and the translation is an audio-to-text translation from the input audio data source to the text file, and wherein the degree of certainty reflects identification of a speaker who created the input audio data source.
- 21. The method according to claim 15, wherein the other format is a source text file and the non-audio data source is an output text file, and the translation is a text-to-text translation from the source text file to the output text file, and wherein the degree of certainty reflects accuracy of the text-to-text translation.
- 22. The method according to claim 21, wherein the source text file contains text in a first language and the output text file contains text in a second language.
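Claims 15-19 can be pictured with the following sketch, which maps the per-segment confidence reported by a hypothetical speech-to-text translation onto pitch changes in the audio rendering (claims 16-17), so the listener hears which passages were translated with low certainty. The `Segment` interface and the confidence thresholds are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    text: str
    confidence: float  # 0.0 (uncertain) .. 1.0 (certain), from the recognizer

def pitch_for(confidence: float) -> float:
    """Choose a pitch multiplier for a degree of certainty (claims 16-17)."""
    if confidence >= 0.9:
        return 1.0    # normal voice: high-certainty translation
    if confidence >= 0.6:
        return 1.05   # slightly raised pitch: moderate certainty
    return 1.15       # noticeably raised pitch: low certainty

def associate_pitch_cues(segments: list[Segment]) -> list[tuple[str, float]]:
    """Associate a pitch cue with each segment for the renderer to apply."""
    return [(s.text, pitch_for(s.confidence)) for s in segments]
```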
- 23. The method according to claim 3, wherein at least one of the detected nuances is an identification of a creator of the non-audio data source.
- 24. The method according to claim 23, wherein the identification is used to locate stored preferences of the creator.
- 25. The method according to claim 3, wherein the non-audio data source is an e-mail message and at least one of the detected nuances is an e-mail convention found in the e-mail message.
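A minimal sketch of claim 25's detection of e-mail conventions follows; the particular conventions shown (">" quoting, a "--" signature delimiter, all-capitals "shouting") are assumptions of this example, as the claim does not enumerate any conventions.

```python
def detect_email_conventions(message: str) -> list[tuple[int, str]]:
    """Flag lines of an e-mail message that follow common conventions."""
    conventions = []
    for lineno, line in enumerate(message.splitlines()):
        if line.startswith(">"):           # quoted reply text
            conventions.append((lineno, "quoted-text"))
        elif line.strip() == "--":         # signature delimiter
            conventions.append((lineno, "signature"))
        elif line.isupper():               # all-caps "shouting"
            conventions.append((lineno, "shouting"))
    return conventions
```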
- 26. The method according to claim 1, wherein the non-audio data source is text provided by a user.
- 27. The method according to claim 26, wherein the text provided by the user is typed as command line input.
- 28. The method according to claim 1, wherein the detected nuance is embedded within the non-audio data source.
- 29. The method according to claim 1, wherein the detected nuance comprises metadata associated with the non-audio data source.
- 30. The method according to claim 3, wherein the mixing step further comprises mixing in a streaming audio source for at least one of the located audio cues.
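One way to picture the mixing step of claims 2-4 and 30 is the sample-level sketch below, in which a located audio cue is mixed into the synthesized speech at reduced gain so the speech remains intelligible; the raw float-sample representation and the 0.25 background gain are illustrative assumptions, not claimed features.

```python
def mix(speech: list[float], cue: list[float],
        cue_gain: float = 0.25) -> list[float]:
    """Mix an audio cue into a speech rendering, sample by sample."""
    out = []
    for i in range(max(len(speech), len(cue))):
        s = speech[i] if i < len(speech) else 0.0
        c = cue[i] if i < len(cue) else 0.0
        # Clamp to [-1, 1] to avoid clipping after summation.
        out.append(max(-1.0, min(1.0, s + cue_gain * c)))
    return out
```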
- 31. A method of enhancing audio renderings of data sources, comprising steps of:
transforming a first data source in a first format to a second data source in an audio format; associating one or more degrees of certainty with the second data source to reflect an accuracy of the transforming step; locating an audio cue that is correlated to each of the associated degrees of certainty; and associating the located audio cues with the second data source to convey the accuracy of the transforming step to a listener who will hear the audio format.
- 32. The method according to claim 31, further comprising the step of audibly rendering the second data source to the listener along with the associated audio cues.
- 33. The method according to claim 31, further comprising the step of storing the association of the located audio cues for subsequent audible rendering of the second data source to the listener along with the associated audio cues.
- 34. A method of enhancing audio renderings of non-audio data sources, comprising steps of:
providing a stylesheet comprising rules and actions, wherein selected ones of the rules and actions pertain to audio cues to be used in an audio rendering; comparing the rules of the stylesheet to content of a non-audio data source; and upon detecting a match during the comparing step, applying the action associated with the matching rule, wherein for each action pertaining to audio cues, an audio cue is thereby associated with the non-audio data source for playing the audio rendering to a listener.
- 35. The method according to claim 34, further comprising the step of playing the audio rendering of the non-audio data source to the listener.
- 36. The method according to claim 34, wherein at least one of the selected rules and actions of the stylesheet is customized for the listener, and at least one of the audio cues associated with the non-audio data source by the applying step overrides another audio cue in order to customize the audio rendering for the listener.
- 37. The method according to claim 35, wherein at least one of the audio cues associated with the non-audio data source by the applying step changes a pitch of a speaker's voice used in the playing step.
- 38. The method according to claim 34, wherein at least one of the selected rules and actions of the stylesheet is customized for a creator of the non-audio data source, and at least one of the audio cues associated with the non-audio data source by the applying step overrides another audio cue in order to make the audio rendering speaker-specific.
- 39. The method according to claim 34, wherein the stylesheet is an Extensible Stylesheet Language (“XSL”) stylesheet.
- 40. The method according to claim 35, wherein the stylesheet specifies preferences for language translation of the non-audio data source that may be performed prior to operation of the playing step.
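The stylesheet mechanism of claims 34-36 might be sketched as follows: rules are compared against the content of the non-audio data source, and the action of each matching rule associates an audio cue, with a listener-specific rule overriding a general one (claim 36). A real embodiment could use an XSL stylesheet (claim 39); the `Rule` representation here is a deliberate simplification.

```python
import re
from dataclasses import dataclass

@dataclass
class Rule:
    pattern: str                     # regex compared against the source content
    cue: str                         # audio cue the action associates on a match
    listener_specific: bool = False  # customized rules may override (claim 36)

# Hypothetical stylesheet: a general rule for "urgent" plus a
# listener-customized rule that overrides it.
STYLESHEET = [
    Rule(r"^Subject:", "cue_new_message.wav"),
    Rule(r"urgent", "cue_alarm.wav"),
    Rule(r"urgent", "cue_soft_bell.wav", listener_specific=True),
]

def apply_stylesheet(content: str, rules: list[Rule]) -> dict[str, str]:
    """Compare each rule to the content; apply the matching rule's action."""
    cues: dict[str, str] = {}
    for rule in rules:
        if re.search(rule.pattern, content, re.MULTILINE):
            # A listener-specific cue overrides a previously associated
            # general cue for the same pattern (claim 36).
            if rule.pattern not in cues or rule.listener_specific:
                cues[rule.pattern] = rule.cue
    return cues
```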
- 41. A method of merchandising pre-recorded audio cues, comprising steps of:
receiving requests for selected ones of the pre-recorded audio cues for use as background sounds to be mixed with audibly rendered messages in order to provide enhanced contextual information to a listener of the audibly rendered messages; and providing the selected ones in response to the step of receiving requests.
- 42. The method according to claim 41, wherein the provided ones are used as an audio cue library.
- 43. A system for enhancing audio renderings of non-audio data sources, comprising:
means for detecting one or more nuances of a non-audio data source; means for locating an audio cue corresponding to each of the detected nuances; and means for associating the located audio cues with their respective detected nuances for playback to a listener.
- 44. The system according to claim 43, further comprising:
means for creating an audio rendering of the non-audio data source, wherein a non-audio segment of the non-audio data source is associated with one of the detected nuances; and means for mixing the associated audio cues in with the audio rendering while playing the audio rendering to the listener.
- 45. The system according to claim 44, wherein the non-audio data source is a text file and wherein the means for creating further comprises means for processing the text file with a text-to-speech translator.
- 46. The system according to claim 43, wherein at least one of the detected nuances is presence of a formatting tag.
- 47. The system according to claim 43, wherein the non-audio data source is a text file and the detected nuance is a change in font of text in the text file.
- 48. The system according to claim 43, wherein the non-audio data source is a text file and at least one of the detected nuances is presence of an emoticon in the text file.
- 49. The system according to claim 43, wherein the detected nuance is a change of topic in the non-audio data source.
- 50. The system according to claim 46, wherein the formatting tag is a new paragraph tag.
- 51. The system according to claim 43, wherein at least one of the detected nuances is a degree of certainty in translation of the non-audio data source from another format.
- 52. The system according to claim 51, wherein the means for detecting detects at least two different degrees of certainty, and wherein the located audio cues comprise changes in a pitch of a voice used in the audio rendering for each of the different degrees of certainty.
- 53. The system according to claim 51, wherein the means for detecting detects at least two different degrees of certainty, and further comprising means for changing a pitch of the associated audio cue used by the means for mixing for each of the different degrees of certainty.
- 54. The system according to claim 51, wherein the other format is an input audio data source and the non-audio data source is a text file, and the translation is an audio-to-text translation from the input audio data source to the text file, and wherein the degree of certainty reflects accuracy of the audio-to-text translation.
- 55. The system according to claim 51, wherein the other format is an input audio data source and the non-audio data source is a text file, and the translation is an audio-to-text translation from the input audio data source to the text file, and wherein the degree of certainty reflects identification of a speaker who created the input audio data source.
- 56. The system according to claim 51, wherein the other format is a source text file and the non-audio data source is an output text file, and the translation is a text-to-text translation from the source text file to the output text file, and wherein the degree of certainty reflects accuracy of the text-to-text translation.
- 57. The system according to claim 43, wherein the non-audio data source is an e-mail message and at least one of the detected nuances is an e-mail convention found in the e-mail message.
- 58. The system according to claim 43, wherein the non-audio data source is text provided by a user.
- 59. The system according to claim 43, wherein the detected nuance is embedded within the non-audio data source.
- 60. The system according to claim 43, wherein the detected nuance comprises metadata associated with the non-audio data source.
- 61. A system for enhancing audio renderings of data sources, comprising:
means for transforming a first data source in a first format to a second data source in an audio format; means for associating one or more degrees of certainty with the second data source to reflect an accuracy of the means for transforming; means for locating an audio cue that is correlated to each of the associated degrees of certainty; and means for associating the located audio cues with the second data source to convey the accuracy of the means for transforming to a listener who will hear the audio format.
- 62. The system according to claim 61, further comprising means for audibly rendering the second data source to the listener along with the associated audio cues.
- 63. A system for enhancing audio renderings of non-audio data sources, comprising:
means for providing a stylesheet comprising rules and actions, wherein selected ones of the rules and actions pertain to audio cues to be used in an audio rendering; means for comparing the rules of the stylesheet to content of a non-audio data source; and means for applying the action associated with the matching rule upon detecting a match during operation of the means for comparing, wherein for each action pertaining to audio cues, an audio cue is thereby associated with the non-audio data source for playing the audio rendering to a listener.
- 64. The system according to claim 63, further comprising means for playing the audio rendering of the non-audio data source to the listener.
- 65. The system according to claim 63, wherein at least one of the selected rules and actions of the stylesheet is customized for the listener, and at least one of the audio cues associated with the non-audio data source by the means for applying overrides another audio cue in order to customize the audio rendering for the listener.
- 66. The system according to claim 63, wherein at least one of the selected rules and actions of the stylesheet is customized for a creator of the non-audio data source, and at least one of the audio cues associated with the non-audio data source by the means for applying overrides another audio cue in order to make the audio rendering speaker-specific.
- 67. A computer program product for enhancing audio renderings of non-audio data sources, the computer program product embodied on one or more computer-readable media and comprising:
computer-readable program code means for detecting one or more nuances of a non-audio data source; computer-readable program code means for locating an audio cue corresponding to each of the detected nuances; and computer-readable program code means for associating the located audio cues with their respective detected nuances for playback to a listener.
- 68. The computer program product according to claim 67, further comprising:
computer-readable program code means for creating an audio rendering of a non-audio segment of the non-audio data source, wherein the non-audio segment is associated with the nuance; and computer-readable program code means for mixing the associated audio cue with the audio rendering of the segment.
- 69. The computer program product according to claim 68, wherein the non-audio data source is a text file and wherein the computer-readable program code means for creating further comprises computer-readable program code means for processing the text file with a text-to-speech translator.
- 70. The computer program product according to claim 67, wherein the non-audio data source is a text file and at least one of the detected nuances is a change in color of text in the text file.
- 71. The computer program product according to claim 67, wherein the non-audio data source is a text file and the detected nuance is presence of a keyword for the text file.
- 72. The computer program product according to claim 71, wherein the keyword is supplied by a creator of the text file.
- 73. The computer program product according to claim 71, wherein the keyword is programmatically detected by evaluating text in the text file.
- 74. The computer program product according to claim 67, wherein at least one of the detected nuances is a degree of certainty in translation of the non-audio data source from another format.
- 75. The computer program product according to claim 74, wherein the computer-readable program code means for detecting detects at least two different degrees of certainty, and wherein the located audio cues comprise changes in a pitch of a voice used in the audio rendering for each of the different degrees of certainty.
- 76. The computer program product according to claim 74, wherein the computer-readable program code means for detecting detects at least two different degrees of certainty, and further comprising computer-readable program code means for changing a pitch of the associated audio cue used by the computer-readable program code means for mixing for each of the different degrees of certainty.
- 77. The computer program product according to claim 74, wherein the other format is an input audio data source and the non-audio data source is a text file, and the translation is an audio-to-text translation from the input audio data source to the text file, and wherein the degree of certainty reflects accuracy of the audio-to-text translation.
- 78. The computer program product according to claim 74, wherein the other format is an input audio data source and the non-audio data source is a text file, and the translation is an audio-to-text translation from the input audio data source to the text file, and wherein the degree of certainty reflects identification of a speaker who created the input audio data source.
- 79. The computer program product according to claim 74, wherein the other format is a source text file and the non-audio data source is an output text file, and the translation is a text-to-text translation from the source text file to the output text file, and wherein the degree of certainty reflects accuracy of the text-to-text translation.
- 80. The computer program product according to claim 79, wherein the source text file contains text in a first language and the output text file contains text in a second language.
- 81. The computer program product according to claim 67, wherein at least one of the detected nuances is an identification of a creator of the non-audio data source.
- 82. The computer program product according to claim 81, wherein the identification is used to locate stored preferences of the creator.
- 83. The computer program product according to claim 67, wherein the non-audio data source is an e-mail message.
- 84. The computer program product according to claim 67, wherein the detected nuance is embedded within the non-audio data source.
- 85. The computer program product according to claim 67, wherein the detected nuance comprises metadata associated with the non-audio data source.
- 86. A computer program product for enhancing audio renderings of data sources, the computer program product embodied on one or more computer-readable media and comprising:
computer-readable program code means for transforming a first data source in a first format to a second data source in an audio format; computer-readable program code means for associating one or more degrees of certainty with the second data source to reflect an accuracy of the computer-readable program code means for transforming; computer-readable program code means for locating an audio cue that is correlated to each of the associated degrees of certainty; and computer-readable program code means for associating the located audio cues with the second data source to convey the accuracy of the computer-readable program code means for transforming to a listener who will hear the audio format.
- 87. The computer program product according to claim 86, further comprising computer-readable program code means for audibly rendering the second data source to the listener along with the associated audio cues.
- 88. A computer program product for enhancing audio renderings of non-audio data sources, the computer program product embodied on one or more computer-readable media and comprising:
computer-readable program code means for comparing the rules of a stylesheet to content of a non-audio data source, wherein the stylesheet comprises rules and actions and wherein selected ones of the rules and actions pertain to audio cues to be used in an audio rendering; and computer-readable program code means for applying the action associated with the matching rule, upon detecting a match during operation of the computer-readable program code means for comparing, wherein for each action pertaining to audio cues, an audio cue is thereby associated with the non-audio data source for playing the audio rendering to a listener.
- 89. The computer program product according to claim 88, further comprising computer-readable program code means for playing the audio rendering of the non-audio data source to the listener.
- 90. The computer program product according to claim 88, wherein at least one of the selected rules and actions of the stylesheet is customized for the listener, and at least one of the audio cues associated with the non-audio data source by the computer-readable program code means for applying overrides another audio cue in order to customize the audio rendering for the listener.
- 91. The computer program product according to claim 88, wherein at least one of the selected rules and actions of the stylesheet is customized for a creator of the non-audio data source, and at least one of the audio cues associated with the non-audio data source by the computer-readable program code means for applying overrides another audio cue in order to make the audio rendering speaker-specific.
- 92. The computer program product according to claim 89, wherein the stylesheet specifies preferences for language translation of the non-audio data source that may be performed prior to operation of the computer-readable program code means for playing.
RELATED INVENTIONS
[0001] The present invention is related to the following commonly-assigned U.S. Patents, both of which were filed concurrently herewith and are hereby incorporated herein by reference: U.S. ______ (Ser. No. 09/______), entitled “Selectable Audio and Mixed Background Sound for Voice Messaging System”, and U.S. ______ (Ser. No. 09/______), entitled “Recording and Receiving Voice Mail with Freeform Bookmarks”.