Embodiments herein relate to a method and apparatus for speech recognition.
Typically, speech recognition is accomplished through the use of an Automatic Speech Recognition (ASR) engine, which operates by obtaining a small audio segment (“input speech”) and finding the closest matches in an audio database.
Embodiments of the present application relate to speech recognition using a specially optimized ASR engine that has been trained using a text-to-speech (“TTS”) engine, where the input speech is morphed so that it matches the audio output of the TTS engine.
The speech recognition system of FIG. 1 includes the following components.
Input 120 is a module configured to receive human speech from an audio source 115, and output the input speech to Morpher 130. The audio source 115 may be a live person speaking into a microphone, recorded speech, synthesized speech, etc.
Morpher 130 is a module configured to receive the input speech from Input 120, morph that speech, in particular the pitch, duration, and prosody of its speech units, into the pitch, duration, and prosody on which ASR 140 was trained, and route the morphed speech to ASR 140. Morpher 130 may be a software module, a hardware module, or a combination of software and hardware modules, whether separate or integrated, working together to perform said function.
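As a minimal sketch of such a morphing step, assuming the librosa library is available and that the TTS voice on which ASR 140 was trained has a known median pitch and speaking rate (the reference constants below are illustrative assumptions, not part of the embodiment):

```python
import librosa
import numpy as np

# Illustrative reference values assumed for the TTS voice on which ASR 140
# was trained; a real system would measure these from the TTS output.
REF_MEDIAN_F0_HZ = 120.0   # assumed median pitch of the TTS voice
REF_SPEAKING_RATE = 1.0    # assumed relative speaking rate of the TTS voice

def morph_to_reference(wav_path):
    """Shift the pitch and duration of input speech toward the TTS reference."""
    y, sr = librosa.load(wav_path, sr=16000)

    # Estimate the speaker's median pitch with probabilistic YIN.
    f0, voiced, _ = librosa.pyin(y, fmin=60.0, fmax=400.0, sr=sr)
    median_f0 = np.nanmedian(f0[voiced]) if np.any(voiced) else REF_MEDIAN_F0_HZ

    # Pitch-shift by the number of semitones between speaker and reference.
    n_steps = 12.0 * np.log2(REF_MEDIAN_F0_HZ / median_f0)
    y_shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=n_steps)

    # Stretch duration toward the reference speaking rate.
    return librosa.effects.time_stretch(y_shifted, rate=REF_SPEAKING_RATE), sr
```

Full prosody morphing would also reshape pitch contours and per-phoneme durations; the sketch only normalizes the global statistics.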
ASR 140 may be a software module, a hardware module, or a combination of software and hardware modules, whether separate or integrated, working together to perform automatic speech recognition. ASR 140 is configured to receive the morphed input speech and decode it into the best estimate of the phrase. It first converts the morphed input speech signal into a sequence of vectors, which are measured throughout the duration of the speech signal. Then, using a syntactic decoder, it generates one or more valid sequences of representations, assigns a confidence score to each potential representation, selects the potential representation with the highest confidence score, and outputs that representation along with its confidence score.
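The decode stage can be pictured with a short sketch: feature vectors are extracted across the duration of the signal, candidate phrases are scored, and the highest-confidence one is returned. The syntactic decoder itself is beyond a short example, so `candidate_decoder` below is a hypothetical stand-in, and MFCCs are one assumed feature choice:

```python
import librosa

def recognize(y, sr, candidate_decoder):
    """Toy illustration of the ASR 140 pipeline described above."""
    # Convert the morphed input signal into a sequence of feature vectors
    # measured throughout the duration of the signal.
    vectors = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T  # (frames, 13)

    # A syntactic decoder (hypothetical stand-in) proposes valid candidate
    # phrases, each paired with a confidence score.
    hypotheses = candidate_decoder(vectors)   # [(phrase, confidence), ...]

    # Select and output the representation with the highest confidence,
    # along with that score.
    phrase, confidence = max(hypotheses, key=lambda h: h[1])
    return phrase, confidence
```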
To optimize ASR 140, ASR 140 uses “speaker-dependent speech recognition,” in which an individual speaker reads sections of text into the ASR system, i.e., trains the ASR on a speech corpus. Such a system analyzes the speaker's specific voice and uses it to fine-tune the recognition of that speaker's speech, resulting in more accurate transcription.
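In the embodiments here, the “individual speaker” is effectively TTS 160 itself: the training text is read aloud by the TTS engine rather than a person. A hedged sketch of that idea, where `synthesize` and `asr_trainer` are hypothetical stand-ins for whatever engine and trainer are used:

```python
def build_speaker_dependent_training_set(transcripts, synthesize):
    """Pair each transcript with audio spoken by the single 'speaker' (the TTS)."""
    # synthesize() is a hypothetical TTS call; any engine with a
    # text-in / waveform-out interface would fit here.
    return [(synthesize(text), text) for text in transcripts]

# Usage sketch: the ASR is then fine-tuned on this corpus so that its
# acoustic model matches the TTS voice.
# training_set = build_speaker_dependent_training_set(corpus_texts, tts.synthesize)
# asr_trainer.fit(training_set)   # asr_trainer is a hypothetical trainer object
```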
Output 151 is a module configured to output the text generated by ASR 140.
Input 150 is a module configured to receive text in the form of phonetic transcripts and prosody information from Text Source 155 and transmit said text to TTS 160. The Text Source 155 is a speech corpus, i.e., a database of speech audio files and phonetic transcriptions, which may be any of a plurality of inputs such as a file on a local mass storage device, a file on a remote mass storage device, a stream from a local area or wide area network, a live speaker, etc.
Computer System 110 utilizes TTS 160 to train ASR 140 and thereby optimize its speech recognition. TTS 160 is a text-to-speech engine configured to receive a speech corpus and synthesize human speech. TTS 160 may be a software module, a hardware module, or a combination of software and hardware modules, whether separate or integrated, working together to perform text-to-speech synthesis. TTS 160 is composed of two parts: a front-end and a back-end. The front-end has two major tasks. First, it converts raw text containing symbols like numbers and abbreviations into the equivalent of written-out words; this process is often called text normalization, pre-processing, or tokenization. The front-end then assigns phonetic transcriptions to each word and divides and marks the text into prosodic units, such as phrases, clauses, and sentences. The process of assigning phonetic transcriptions to words is called text-to-phoneme or grapheme-to-phoneme conversion. Phonetic transcriptions and prosody information together make up the symbolic linguistic representation that is output by the front-end. The back-end, often referred to as the synthesizer, then converts the symbolic linguistic representation into sound. In certain systems, this part includes the computation of the target prosody (pitch contour, phoneme durations), which is then imposed on the output speech.
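A minimal sketch of the front-end stages just described, where the normalization table and grapheme-to-phoneme dictionary are toy assumptions standing in for the large lexicons and trained models real systems use:

```python
import re

# Toy normalization table; a real front-end handles far more cases.
_EXPANSIONS = {"Dr.": "doctor", "&": "and", "3": "three"}

# Toy grapheme-to-phoneme dictionary; real systems use large lexicons
# plus a trained fallback model for out-of-vocabulary words.
_G2P = {"doctor": ["D", "AA1", "K", "T", "ER0"], "and": ["AE1", "N", "D"]}

def front_end(raw_text):
    """Raw text -> symbolic linguistic representation (phonemes per prosodic unit)."""
    # 1. Text normalization: expand numbers, abbreviations, and symbols.
    tokens = [_EXPANSIONS.get(t, t).lower() for t in raw_text.split()]

    # 2. Divide and mark prosodic units (here: a naive sentence-level split).
    units = re.split(r"(?<=[.!?])\s+", " ".join(tokens))

    # 3. Grapheme-to-phoneme conversion per word within each unit; unknown
    #    words fall back to their letters in this toy version.
    return [[_G2P.get(w.strip(".,!?"), list(w)) for w in unit.split()]
            for unit in units]
```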
Because ASR 140 is trained on speech synthesized by TTS 160, ASR 140's acoustic model is a near-perfect match for TTS 160.
Speech Input module 320 is a module configured to receive human speech 320a from an audio source 320b and output the human speech 320a to NN 330. The audio source 320b may be a live person speaking into a microphone, recorded speech, synthesized speech, etc.
NN 330 is a neural network module configured to receive the human speech 320a from Speech Input 320 and the synthesized human speech 310b, and to create from them a mathematical model, Model 340.
NN 350 is a neural network module configured to receive the human speech 320a from Speech Input 320 and human speech 310b. NN 350 is further configured to receive Model 340 and output human speech 360. NN 350 is further configured to perform the transformation inverse to that of NN 330.
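The disclosure does not prescribe an architecture for these networks, so the following is only a PyTorch sketch under assumed choices (per-frame spectral features, a small feed-forward mapping, and time-aligned training pairs): NN 330 fits a mapping from the speaker's features toward the reference features, its learned weights serve as Model 340, and NN 350 loads Model 340 and applies the mapping to new input, standing in for the analysis/synthesis pair described above.

```python
import torch
import torch.nn as nn

class ConversionNet(nn.Module):
    """Maps per-frame features of human speech 320a toward those of 310b."""
    def __init__(self, n_feats=80):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_feats, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, n_feats),
        )

    def forward(self, x):
        return self.net(x)

def train_model_340(feats_320a, feats_310b, epochs=100):
    """NN 330's role: fit the mapping and return its weights (Model 340).

    Assumes the two feature sequences are time-aligned (e.g., via DTW)
    so they have the same shape (frames, n_feats).
    """
    model = ConversionNet(feats_320a.shape[-1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(feats_320a), feats_310b)
        loss.backward()
        opt.step()
    return model.state_dict()   # Model 340

def apply_model_340(model_340, feats_320a):
    """NN 350's role: load Model 340 and convert new input (-> speech 360)."""
    model = ConversionNet(feats_320a.shape[-1])
    model.load_state_dict(model_340)
    with torch.no_grad():
        return model(feats_320a)
```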
At step 410, speech input module 120 obtains human speech from audio source 115. At step 420, audio source 115 transmits the human speech to NN 330. The human speech corresponds to speech corpus 310a, i.e., a text transcription of the speech. At step 430, speech corpus 310a is transmitted to TTS 310, and TTS 310 synthesizes human speech 310b corresponding to speech corpus 310a and outputs it to NN 330.
At step 440, NN 330 combines the human speech and the synthesized human speech 310b and creates a mathematical model of the combination, Model 340.
Steps 410 to 440, inclusive, generally do not occur in real time.
At step 450, speech input module 120 obtains human speech 320a from audio source 115. Said human speech is transmitted to NN 350. NN 350 also receives Model 340, combines Model 340 and human speech 320a, and outputs human speech 360, which matches the voice of TTS 160, i.e., the reference voice.
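Read together, steps 410 through 450 split into an offline training phase and a real-time conversion phase. A hedged driver sketch, reusing the functions from the network sketch above; `audio_source.capture`, `tts_310.synthesize`, and `extract_feats` are assumed interfaces, not part of the disclosure:

```python
def offline_training_phase(audio_source, speech_corpus_310a, tts_310, extract_feats):
    """Steps 410-440: performed once, generally not in real time."""
    speech = audio_source.capture()                        # steps 410-420
    speech_310b = tts_310.synthesize(speech_corpus_310a)   # step 430
    # Step 440: NN 330 models the relation between the two recordings.
    return train_model_340(extract_feats(speech), extract_feats(speech_310b))

def runtime_phase(audio_source, model_340, extract_feats):
    """Step 450: live speech 320a is converted into the reference voice."""
    feats_320a = extract_feats(audio_source.capture())
    return apply_model_340(model_340, feats_320a)          # human speech 360
```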
This patent application claims the benefit of U.S. Provisional Patent Application No. 62/527,247, filed on Jun. 30, 2017, and is a Continuation-in-Part of U.S. patent application Ser. No. 14/563,511, filed Dec. 8, 2014, which claims priority from U.S. Provisional Patent Application No. 61/913,188, filed on Dec. 6, 2013, in the U.S. Patent and Trademark Office, the disclosure of which is incorporated herein by reference in its entirety.
Provisional application from which priority is claimed:

Number | Date | Country
---|---|---
61913188 | Dec 2013 | US

Parent and child applications:

Relation | Number | Date | Country
---|---|---|---
Parent | 14563511 | Dec 2014 | US
Child | 15963844 | | US