The present invention is directed to the correction of recordings of speech in tonal languages.
Languages such as Chinese, Thai, and Vietnamese are tonal languages. In a tonal language, each spoken syllable must be voiced at a particular pitch in order to be regarded as intelligible and correct. For example, Mandarin Chinese has four tones, plus a “neutral” pitch, while Cantonese Chinese has even more tones. The four Mandarin tones are described as “high, level,” “high, rising,” “low, dipping,” and “high, falling,” respectively, and may be noted as diacritical marks over Romanized versions of the Chinese sounds.
To mispronounce the tone is to miss the Chinese (or Thai or Vietnamese) word entirely. Therefore, in contrast to the English language, where pitch is used to a limited extent to indicate sentence meaning, for example to denote a question, Chinese uses tone as an integral feature of every word. As a result, a tonal language spoken by a non-native speaker, who may mispronounce or misapply the tones, is often very hard for a native speaker to understand.
In accordance with embodiments of the present invention, a series of words comprising a phrase is analyzed using a speech recognition engine. In particular, the words comprising a phrase create a context in which the component words can be analyzed. From this context, mispronounced words or characters can be identified.
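Before phrase context can be checked, each spoken syllable's tone must be labeled. The following is a rough, illustrative sketch of one way a recognition engine might label a syllable's tone from its fundamental-frequency (F0) contour; the thresholds and the simple four-way classification are assumptions for illustration only, not the patent's actual recognition method.

```python
def classify_tone(f0_contour):
    """Label a per-frame F0 contour (in Hz) with a Mandarin-style tone.

    Returns 1 (high, level), 2 (rising), 3 (low, dipping), or 4 (falling).
    """
    start = f0_contour[0]
    mid = f0_contour[len(f0_contour) // 2]
    end = f0_contour[-1]
    if mid < start and mid < end:   # pitch dips down, then recovers
        return 3
    delta = end - start
    if abs(delta) < 10:             # nearly flat: level tone
        return 1
    return 2 if delta > 0 else 4   # net rise vs. net fall

print(classify_tone([220, 222, 221, 223, 220]))  # → 1 (level)
print(classify_tone([180, 200, 230]))            # → 2 (rising)
```

With per-syllable tone labels in hand, the phrase-level context can then be examined for tone sequences that produce nonsensical meanings.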
In addition, embodiments of the present invention provide for the correction of mispronounced characters. In particular, recorded speech containing a mispronounced character is modified using tonal correction. Tonal correction can be applied before the recorded speech is sent to a recipient mailbox, or otherwise stored in anticipation of later playback. In accordance with further embodiments of the present invention, a user may be prompted to approve corrections before they are applied to the recorded speech.
In accordance with embodiments of the present invention, a database of commonly mispronounced phrases or characters may be referenced in connection with verifying the pronunciation of characters within recorded speech. That is, phrases containing commonly mispronounced characters that, as a result of the mispronunciation, have a nonsensical meaning or a meaning that is unlikely to be intended by the speaker, may be mapped to the phrase that is likely intended. Accordingly, even phrases that include mispronunciations that are in the form of an incorrect application of a common tone can be detected and corrected.
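The mapping described above can be sketched as a simple lookup table. Phrase keys here are Pinyin syllables with tone numbers; the table contents and function name are illustrative assumptions, not data or identifiers from the patent.

```python
# Maps a phrase whose tones yield a nonsensical meaning to the phrase most
# likely intended by the speaker. For example, "shui4 jiao3" mixes the tones
# of "shui4 jiao4" (to sleep) and "shui3 jiao3" (dumpling), producing a
# nonsensical utterance; the intended phrase here is assumed to be "to sleep."
COMMON_MISPRONUNCIATIONS = {
    ("wo3", "yao4", "shui4", "jiao3"): ("wo3", "yao4", "shui4", "jiao4"),
}

def suggest_correction(syllables):
    """Return the likely intended phrase, or None if no entry matches."""
    return COMMON_MISPRONUNCIATIONS.get(tuple(syllables))

print(suggest_correction(["wo3", "yao4", "shui4", "jiao3"]))
# → ('wo3', 'yao4', 'shui4', 'jiao4')
```

A phrase that matches no entry passes through unchanged, which corresponds to the case where the speech is sent on without tonal correction.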
In accordance with embodiments of the present invention, recorded speech containing one or more mispronunciations can be tonally corrected before that speech is delivered to a recipient mailbox or otherwise stored for later use.
With reference now to
As examples, a communication or computing device 104 may comprise a conventional wireline or wireless telephone, an Internet protocol (IP) telephone, a networked computer, a personal digital assistant (PDA), or any other device capable of transmitting or receiving speech. In accordance with embodiments of the present invention, a communication or computing device 104 may also have the capability of analyzing and recording speech provided by a user for possible tonal correction. Alternatively or in addition, functions such as the analysis and/or storage of speech collected using communication or computing device 104 may be performed by a server 112 or other entity.
A server 112 in accordance with embodiments of the present invention may comprise a communication server or other computer that functions to provide services to client devices. Examples of servers 112 include PBX servers, voice mail servers, and servers deployed on a network for the specific purpose of providing tonal correction to speech as described herein. Accordingly, a server 112 may operate to perform communication service and/or connectivity functions. In addition, a server 112 may perform some or all of the processing and/or storage functions in connection with the tonal correction of speech of the present invention.
The communication network 108 may comprise a converged network for transmitting voice and data between associated devices 104 and/or servers 112. Furthermore, it should be appreciated that the communication network 108 need not be limited to any particular type of network. Accordingly, the communication network 108 may comprise a wireline or wireless Ethernet network, the Internet, a private intranet, a private branch exchange (PBX), the public switched telephony network (PSTN), a cellular or other wireless telephony network, or any other network capable of transmitting data, including voice data. In addition, it can be appreciated that the communication network 108 need not be limited to any one network type, and instead may be comprised of a number of different networks and/or network types.
With reference now to
A communication device 104 or server 112 may additionally include memory 208 for use in connection with the execution of programming by the processor 204 and for the temporary or long term storage of data or program instructions. The memory 208 may comprise solid state memory that is resident, removable, or remote in nature, such as DRAM or SDRAM. Where the processor 204 comprises a controller, the memory 208 may be integral to the processor 204.
In addition, the communication device 104 or server 112 may include one or more user inputs or means for receiving user input 212 and one or more user outputs or means for outputting 216. Examples of user inputs 212 include keyboards, keypads, touch screens, touch pads and microphones. Examples of user outputs 216 include speakers, display screens (including touch screen displays) and indicator lights. Furthermore, it can be appreciated by one of skill in the art that the user input 212 may be combined or operated in conjunction with a user output 216. An example of such an integrated user input 212 and user output 216 is a touch screen display that can both present visual information to a user and receive input selections from a user.
A communication device 104 or server 112 may also include data storage 220 for the storage of application programming and/or data. In addition, operating system software 224 may be stored in the data storage 220. The data storage 220 may comprise, for example, a magnetic storage device, a solid state storage device, an optical storage device, a logic circuit, or any combination of such devices. It should further be appreciated that the programs and data that may be maintained in the data storage 220 can comprise software, firmware or hardware logic, depending on the particular implementation of the data storage 220.
Examples of applications that may be stored in the data storage 220 include a tonal correction application 228. The tonal correction application 228 may incorporate or operate in cooperation with a speech recognition application and/or a text to speech application. In addition, the data storage 220 may contain a table or database of commonly mispronounced phrases and/or characters 232. The table or database 232 may additionally include associations between commonly mispronounced phrases and/or characters and phrases and/or characters that are usually intended. Accordingly, the database 232 may comprise means for storing associations between phrases having similar pronunciations but that include words associated with different tones. As described herein, a tonal correction application 228 and table of phrases or characters 232 may be integrated with one another, and/or operate in cooperation with one another. Furthermore, the tonal correction application may comprise means for comparing received phrases to phrases in the database 232 and means for altering a tone of a word included in a received phrase. The data storage 220 may also contain application programming and data used in connection with the performance of other functions of the communication device 104 or server 112. For example, in connection with a communication device 104 such as a telephone or IP telephone, the data storage may include communication application software. As another example, a communication device 104 such as a personal digital assistant (PDA) or a general purpose computer may include a word processing application in the data storage 220. Furthermore, according to embodiments of the present invention, a voice mail or other application may also be included in the data storage 220.
A communication device 104 or server 112 may also include one or more communication network interfaces 236. Examples of communication network interfaces 236 include a network interface card, a modem, a wireline telephony port, a serial or parallel data port, or other wireline or wireless communication network interface.
With reference now to
At step 320, a determination may be made as to whether the user has approved of the suggested substitute. For example, the user may signal assent to a suggested substitute by providing a confirmation signal through a user input 212 device. Such input may be in the form of pressing a designated key, voicing a reference number or other identifier associated with a suggested substitute and/or clicking in an area of the display corresponding to a suggested substitute. Furthermore, assent to a suggested substitution can comprise a selection by a user of one of a number of potential substitutions that have been identified by the tonal correction application 228.
If approval or confirmation of a suggested substitution is received, tonal correction to the user's original speech is applied (step 324). In accordance with embodiments of the present invention, tonal correction may be applied through digital manipulation of the recorded speech. For example, as known to one of skill in the art, speech may be encoded using vocal tract models, such as linear predictive coding. For a general discussion of the operation of vocal tract models, see Speech digitization and compression, by Michaelis, P. R., available in the International Encyclopedia of Ergonomics and Human Factors, pp. 683-685, W. Karwowski (Ed.), London: Taylor and Francis, 2001, the entire disclosure of which is hereby incorporated by reference herein. In general, these techniques use mathematical models of the human speech production mechanism. Accordingly, many of the variables in the models actually correspond to the different physical structures within the human vocal tract that vary while a person is speaking. In a typical implementation, the encoding mechanism breaks voice streams into individual short duration frames. The audio content of these frames is analyzed to extract parameters that “control” components of the vocal tract model. The individual variables that are determined by this process include the overall amplitude of the frame and its fundamental pitch. The overall amplitude and fundamental pitch are the components of the model that have the greatest influence on the tonal contours of speech, and are extracted separately from the parameters that govern the spectral filtering, which is what makes the speech understandable and the speaker identifiable. Tonal corrections in accordance with embodiments of the present invention may therefore be performed by applying the appropriate delta to the erroneous amplitude and pitch parameters detected in the speech.
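The correction step described above can be sketched as follows: speech is represented as per-frame model parameters, and only the fundamental-pitch parameter is shifted toward a target tone contour, leaving the spectral-filtering parameters untouched. The frame layout and field names below are illustrative assumptions, not the patent's actual encoding.

```python
from dataclasses import dataclass, field

@dataclass
class Frame:
    amplitude: float   # overall frame energy
    pitch_hz: float    # fundamental pitch (F0)
    spectral: list = field(default_factory=list)  # filter coefficients, left untouched

def apply_tonal_correction(frames, target_pitch):
    """Shift each frame's pitch by the delta between the erroneous and the
    target contour; amplitude and spectral parameters pass through unchanged."""
    corrected = []
    for frame, target in zip(frames, target_pitch):
        delta = target - frame.pitch_hz
        corrected.append(Frame(frame.amplitude, frame.pitch_hz + delta, frame.spectral))
    return corrected

# A falling contour mistakenly produced where a rising tone was intended:
frames = [Frame(0.8, 240.0), Frame(0.8, 220.0), Frame(0.7, 200.0)]
fixed = apply_tonal_correction(frames, target_pitch=[200.0, 220.0, 240.0])
print([f.pitch_hz for f in fixed])  # → [200.0, 220.0, 240.0]
```

Because the spectral parameters are copied through unmodified, the corrected frames would still resynthesize in a voice recognizable as the original speaker's, consistent with the behavior described in the following paragraph.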
Because changes are made to the amplitude and pitch parameters, but not to the spectral filtering parameters, the corrected voice stream will still generally be recognizable as being the original speaker's voice. The corrected speech may then be sent to the recipient address (step 328). For example, where the speech is received in connection with leaving a voice mail message for the recipient, sending the speech may comprise releasing the corrected speech to the recipient address.
If at step 312 it is determined that the received speech does not correspond to a commonly mispronounced phrase, then the speech provided by the user either comprises correctly pronounced words and phrases, or it includes mispronunciations that result in nonsensical or unlikely meanings that are not reflected in the database 232. Accordingly, if the received speech is not found to match one of the commonly mispronounced phrases included in the database 232, the speech is sent to the recipient address at step 328 without having first undergone tonal correction. Likewise, if a user does not approve of a suggested tonal correction, the recorded speech may be sent to the recipient address at step 328 without tonal correction. After a message has been released to a recipient address, the process ends.
In accordance with embodiments of the present invention, various components of a system capable of performing tonal correction of speech can be distributed. For example, a communication device 104 comprising a telephony endpoint may operate to receive speech and command input from a user, and deliver output to the user, but may not perform any processing. According to such an embodiment, processing of received speech to determine whether a match with a commonly mispronounced phrase can be found is performed by a server 112. In accordance with still other embodiments of the present invention, the tonal correction functions may be performed entirely within a single device. For example, a communication device 104 with suitable processing power may analyze the speech, suggest corrections, and apply approved corrections. According to these other embodiments, when the communication device 104 releases the speech to the recipient, that speech may be delivered to, for example, the recipient's answering machine or to a voice mailbox associated with a server 112.
To further illustrate operation of embodiments of the present invention, and with reference now to
In accordance with embodiments of the present invention, tonal correction as described herein may be applied in connection with real-time, near real-time, or off-line applications, depending on the processing power and other capabilities of communication devices 104 and/or servers 112 used in connection with the application of the tonal correction functions. In addition, although certain examples described herein have related to voice mail applications, embodiments of the present invention are not so limited. For instance, tonal corrections as described herein can be applied to any recorded speech and even speech delivered to a recipient at close to real time. Furthermore, although certain examples provided herein have discussed the use of tonal correction in connection with the Chinese language, it can be applied to other tonal languages, such as Thai and Vietnamese.
The foregoing discussion of the invention has been presented for purposes of illustration and description. Further, the description is not intended to limit the invention to the form disclosed herein. Consequently, variations and modifications commensurate with the above teachings, within the skill or knowledge of the relevant art, are within the scope of the present invention. The embodiments described hereinabove are further intended to explain the best mode presently known of practicing the invention and to enable others skilled in the art to utilize the invention in such or in other embodiments and with the various modifications required by their particular application or use of the invention. It is intended that the appended claims be construed to include alternative embodiments to the extent permitted by the prior art.
Number | Date | Country | |
---|---|---|---|
20070038452 A1 | Feb 2007 | US |