Generating prosodic contours for synthesized speech

Information

  • Patent Grant
  • Patent Number
    9,093,067
  • Date Filed
    Monday, November 26, 2012
  • Date Issued
    Tuesday, July 28, 2015
Abstract
The subject matter of this specification can be implemented in a computer-implemented method that includes receiving utterances and transcripts thereof. The method includes analyzing the utterances and transcripts to determine certain attributes, such as distances between prosodic contours for pairs of utterances. A model can be generated that can be used to estimate a distance between a determined prosodic contour for a received utterance and an unknown prosodic contour for a synthesized utterance when given a distance between attributes for text associated with the received utterance and the synthesized utterance.
Description
TECHNICAL FIELD

This instant specification relates to synthesizing speech from text using prosodic contours.


BACKGROUND

Prosody makes human speech natural, intelligible, and expressive. Human speech uses prosody in such varied communicative acts as indicating syntactic attachment, topic structure, discourse structure, focus, indirect speech acts, information status, and turn-taking behaviors, as well as in conveying paralinguistic qualities such as emotion and sarcasm. The use of prosodic variation to enhance or augment the communication of lexical items is so ubiquitous in speech that human listeners are often unaware of its effects. That is, until a speech synthesis system fails to produce speech with a reasonable approximation of human prosody. Prosodic abnormalities not only negatively impact the naturalness of the synthesized speech, but because prosodic variation is tied to such basic tasks as syntactic attachment and indication of contrast, flouting prosodic norms can also degrade intelligibility. To make synthesized speech as powerful a communication tool as human speech, synthesized speech should at least endeavor to approach human-like prosodic assignment.


SUMMARY

In general, this document describes synthesizing speech from text using prosodic contours. In a first aspect, a computer-implemented method includes receiving speech utterances encoded in audio data and a transcript having text representing the speech utterances. The method further includes extracting prosodic contours from the utterances. The method further includes extracting attributes for text associated with the utterances. The method further includes determining distances between attributes for pairs of utterances and between prosodic contours for the pairs of utterances. The method further includes generating a model based on the determined distances for the attributes and the prosodic contours, where the model is adapted to estimate a distance between a determined prosodic contour for a received utterance and an unknown prosodic contour for a synthesized utterance when given a distance between attributes for text associated with the received utterance and the synthesized utterance. The method further includes storing the model in a computer-readable memory device.

Implementations can include any, all, or none of the following features. The method may include modifying the extracted prosodic contours at a time previous to determining the distances between the extracted prosodic contours. Extracting the prosodic contours from the utterances may comprise generating, for each prosodic contour, time-value pairs comprising a prosodic contour value and a time at which the prosodic contour value occurs. The extracted prosodic contours may comprise fundamental frequencies, pitches, energy measurements, gain measurements, duration measurements, intensity measurements, measurements of rate of speech, or spectral tilt measurements. The extracted attributes may comprise exact stress patterns, canonical stress patterns, parts of speech, phone representations, phoneme representations, or indications of declaration versus question versus exclamation. The method may include aligning the utterances in the audio data with text, from the transcripts, that represents the utterances to determine which speech utterances are associated with which text. Generating the model may include mapping the distances between the attributes for pairs of utterances to the distances between the prosodic contours for the pairs of utterances so as to determine a relationship between the distances associated with the attributes and the distances associated with the prosodic contours for pairs of utterances. The distances between the prosodic contours may be calculated using a root mean square difference calculation. The model may be created using a linear regression of the distances between the prosodic contours and the distances between the transcripts. The method may include selecting pairs of utterances for use in determining distances based on whether the utterances have canonical stress patterns that match. The method may include creating multiple models, including the model, where each of the models has a different canonical stress pattern. The method may include selecting, based on estimated distances between a plurality of determined prosodic contours and an unknown prosodic contour of text to be synthesized, a final determined prosodic contour associated with a smallest distance. The method may include generating a prosodic contour for the text to be synthesized using the final determined prosodic contour.
The method may include outputting the generated prosodic contour and the text to be synthesized to a text-to-speech engine for speech synthesis.


In a second aspect, a computer-implemented system includes one or more computers having an interface to receive speech utterances encoded in audio data and a transcript having text representing the speech utterances. The system further includes a prosodic contour extractor to extract prosodic contours from the utterances. The system further includes a transcript analyzer to extract attributes for text associated with the utterances. The system further includes an attribute comparer to determine distances between attributes for pairs of utterances. The system further includes a prosodic contour comparer to determine distances between prosodic contours for the pairs of utterances. The system further includes a model generator programmed to generate a model based on the determined distances for the attributes and the prosodic contours, the model adapted to estimate a distance between a determined prosodic contour for a received utterance and an unknown prosodic contour for a synthesized utterance when given a distance between attributes for text associated with the received utterance and the synthesized utterance. The system further includes a computer-readable memory device associated with the one or more computers to store the model.


Implementations can include any, all, or none of the following features. Extracting the prosodic contours from the utterances may comprise generating, for each prosodic contour, time-value pairs comprising a prosodic contour value and a time at which the prosodic contour value occurs. The extracted prosodic contours may comprise fundamental frequencies, pitches, energy measurements, gain measurements, duration measurements, intensity measurements, measurements of rate of speech, or spectral tilt measurements. The extracted attributes may comprise exact stress patterns, canonical stress patterns, parts of speech, phone representations, phoneme representations, or indications of declaration versus question versus exclamation. The system may be further programmed to align the utterances in the audio data with text, from the transcripts, that represents the utterances to determine which speech utterances are associated with which text. Generating the model may comprise mapping the distances between the attributes for pairs of utterances to the distances between the prosodic contours for the pairs of utterances so as to determine a relationship between the distances associated with the attributes and the distances associated with the prosodic contours for pairs of utterances.


The systems and techniques described here may provide one or more of the following advantages. First, a system can provide improved prosody for text-to-speech synthesis. Second, a system can provide a wider range of candidate prosodic contours from which to select a prosody for use in text-to-speech synthesis. Third, a system can provide improved or minimized processor usage during identification of candidate prosodic contours and/or selection of a final prosodic contour from the candidate prosodic contours. Fourth, a system can predict or estimate how accurately a stored prosodic contour represents a text to be synthesized by using a model that takes as input a comparison between lexical attributes of the text and a transcript of the prosodic contour.


The details of one or more embodiments are set forth in the accompanying drawings and the description below. Other features and advantages will be apparent from the description and drawings, and from the claims.





DESCRIPTION OF DRAWINGS


FIG. 1 is a schematic diagram showing an example of a system that selects a prosodic contour for use in text-to-speech synthesis.



FIG. 2 is a block diagram showing an example of a model generator system.



FIG. 3 is an example of a table for storing transcript analysis information.



FIG. 4 is a block diagram showing an example of a text alignment system.



FIGS. 5A-C are examples of prosodic contour graphs showing alignment of a prosodic contour to a different lexical stress pattern.



FIG. 6 is a flow chart showing an example of a process for generating models.



FIG. 7 is a flow chart showing an example of a process for selecting and aligning a prosodic contour.



FIG. 8 is a schematic diagram showing an example of a computing system that can be used in connection with computer-implemented methods and systems described in this document.





Like reference symbols in the various drawings indicate like elements.


DETAILED DESCRIPTION

This document describes systems and techniques for making synthesized speech sound more natural by assigning prosody (e.g., stress and intonation patterns of an utterance) to the synthesized speech. In some implementations, prosody is assigned by storing naturally occurring prosodic contours (e.g., fundamental frequencies f0) extracted from human speech, selecting a best naturally occurring prosodic contour at speech synthesis time, and aligning the selected prosodic contour to the text that is being synthesized.


In some implementations, the prosodic contour is selected by estimating a distance, or a calculated difference, between prosodic contours based on differences between features of text associated with the prosodic contours. A model for estimating these distances can be generated by analyzing audio data and corresponding transcripts of the audio data. The model can then be used at run-time to estimate a distance between stored prosodic contours and a hypothetical prosodic contour for text to be synthesized.


In some implementations, the distance estimate between a stored prosodic contour and an unknown prosodic contour is based on comparing attributes of the text to be synthesized with attributes of text associated with the stored prosodic contours. Based on the distance between the attributes, the model can generate an estimate between the stored prosodic contours associated with the text and the hypothetical prosodic contour. The prosodic contour with the smallest estimated distance can be selected and used to generate a prosodic contour for the text to be synthesized.


In some implementations, the result of comparing the attributes can be something other than an edit distance, because differences between some attributes may not translate easily to an edit distance. For example, the text may include the final punctuation from each utterance. Some utterances may end with a period, some with a question mark, some with a comma, and some with no punctuation at all. The edit distance between a comma and a period may not be meaningful, and may not accurately represent how an utterance ending in a comma or period differs from one ending in a question mark. In this case, the list of possible end punctuation can be used as an enumerated list, and distances between pairs of prosodic contours can be associated with a particular pairing of end punctuation, such as period and comma, question mark and period, or comma and no end punctuation.
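For illustration only, the following sketch shows one way such an enumerated punctuation-pair feature might be encoded; the punctuation list, the helper name, and the pairing scheme are assumptions, not taken from the patent:

```python
# Hypothetical encoding of end-punctuation pairings as an enumerated
# categorical feature rather than a numeric edit distance.
END_PUNCTUATION = [".", "?", ",", ""]  # "" stands for no end punctuation

def punctuation_pair_feature(punct_a, punct_b):
    """Map an unordered pair of end punctuations to a category index."""
    a, b = sorted([punct_a, punct_b], key=END_PUNCTUATION.index)
    pairs = [(x, y) for i, x in enumerate(END_PUNCTUATION)
                    for y in END_PUNCTUATION[i:]]
    return pairs.index((a, b))
```

A distance statistic can then be accumulated per category index, so a period-versus-comma pairing is modeled separately from a period-versus-question-mark pairing.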


In general, the process determines, for each candidate utterance, a distance between a prosodic contour of the candidate utterance and a hypothetical prosodic contour of the spoken utterance to be synthesized. The determination is based on the model that relates distances between pairs of prosodic contours of the stored utterances to relationships between attributes of text for the pairs, such as an edit distance between attributes of the pairs or an enumeration of pairs of attribute values. This process is described in detail below.



FIG. 1 is a schematic diagram showing an example of a system 100 that selects a prosodic contour for use in text-to-speech synthesis. The system 100 includes a speech synthesis system 102, a text alignment system 104, a database 106, and a model generator system 108. The prosodic contour selection begins with the model generator system 108 generating one or more models 110 to be used in the prosodic contour selection process. In some implementations, the models 110 can be generated at “design time” or “offline.” For example, the models 110 can be generated at any time before a request to perform a text-to-speech synthesis is received.


The model generator system 108 receives audio, such as audio data 112, and one or more transcripts 114 corresponding to the audio data 112. The model generator system 108 analyzes the transcripts 114 to determine one or more attributes 116 of the language elements in each of the transcripts 114. For example, the model generator system 108 can perform lexical lookups to determine sequences of parts-of-speech (e.g., noun, verb, preposition, adjective, etc.) for sentences or phrases in the transcripts 114. The model generator system 108 can perform a lookup to determine stress patterns (e.g., primary stress, secondary stress, or unstressed) of syllables, phonemes, or other units of language in the transcripts 114. The model generator system 108 can determine other attributes, such as whether sentences in the transcripts 114 are declarations, questions, or exclamations. The model generator system 108 can determine a phone or phoneme representation of the words in the transcripts 114.


The model generator system 108 extracts one or more prosodic contours 118 from the audio data 112. In some implementations, the prosodic contours 118 include time-value pairs that represent the pitch or fundamental frequency of a portion of the audio data 112 at a particular time. In some implementations, the prosodic contours 118 include other time-value pairs, such as energy, duration, speaking rate, intensity, or spectral tilt.


The model generator system 108 includes a model generator 120. The model generator 120 generates the models 110 by determining a relationship between differences in the prosodic contours 118 and differences in the transcripts 114. For example, the model generator system 108 can determine a root mean square difference (RMSD) between pitch values in pairs of the prosodic contours 118 and an edit distance between one or more attributes of corresponding pairs of the transcripts 114. The model generator 120 performs a linear regression on the differences between the pairs of the prosodic contours 118 and the corresponding pairs of the transcripts 114 to determine a model or relationship between the differences in the prosodic contours 118 and the differences in the transcripts 114.
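As a concrete illustration of this regression step, here is a minimal sketch assuming NumPy and a single-feature linear fit; the function names are illustrative, not the patent's:

```python
import numpy as np

def fit_distance_model(edit_distances, contour_rmsds):
    """Fit RMSD ~= slope * edit_distance + intercept by least squares.

    A stand-in for the model generator 120: inputs are per-pair
    attribute edit distances and prosodic contour RMSDs."""
    x = np.asarray(edit_distances, dtype=float)
    y = np.asarray(contour_rmsds, dtype=float)
    slope, intercept = np.polyfit(x, y, deg=1)
    return slope, intercept

def estimate_rmsd(model, edit_distance):
    """Estimate the contour RMSD for a new pair from its edit distance."""
    slope, intercept = model
    return slope * edit_distance + intercept
```

In practice one such fit could be kept per group of comparable utterances, as discussed for the multiple models below.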


The model generator system 108 stores the attributes 116, the prosodic contours 118, and the models 110 in the database 106. In some implementations, the model generator system 108 also stores the audio data 112 and the transcripts 114 in the database 106. The relationships represented by the models 110 can later be used to estimate a difference between one or more of the prosodic contours 118 and an unknown prosodic contour of a text 122 to be synthesized. The estimate is based on differences between the attributes 116 of the prosodic contours 118 and attributes of the text 122.


The text alignment system 104 receives the text 122 to be synthesized. The text alignment system 104 analyzes the text to determine one or more attributes of the text 122. At least one attribute of the text 122 corresponds to one of the attributes 116 of the transcripts 114.


For example, the attribute can be an exact lexical stress pattern or a canonical lexical stress pattern. A canonical lexical stress pattern includes an aggregate or simplified representation of a corresponding complete or exact lexical stress pattern. For example, a canonical lexical stress pattern can include a total number of stressed elements in a text or transcript, an indication of a first stress in the text or transcript, and/or an indication of a last stress in the text or transcript.
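A small sketch of how such a canonical pattern could be derived from an exact stress pattern (the tuple representation is an assumption for illustration):

```python
def canonical_stress_pattern(exact_pattern):
    """Reduce an exact stress pattern such as [1, 1, 0, 1, 0] to a
    canonical form: (total stressed elements, stress of the first
    element, stress of the last element)."""
    return sum(exact_pattern), exact_pattern[0], exact_pattern[-1]

# Both "Let's go to dinner" [1, 1, 0, 1, 0] and "Let's eat breakfast"
# [1, 1, 1, 0] reduce to (3, 1, 0), i.e. the "3 1 0" pattern used later.
```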


The text alignment system 104 includes a prosodic contour selector 124. The prosodic contour selector 124 sends a request 126 for prosodic contour candidates to the database 106. The database 106 may reside at the text alignment system 104 or at another system, such as the model generator system 108.


The request 126 includes a query for prosodic contours associated with one or more of the transcripts 114 where the transcripts 114 have an attribute that matches the attribute of the text 122. For example, the prosodic contour selector 124 can request prosodic contours having a canonical lexical stress pattern attribute that matches the canonical lexical stress pattern attribute of the text 122. In another example, the prosodic contour selector 124 can request prosodic contours having an exact lexical stress pattern attribute that matches the exact lexical stress pattern attribute of the text 122.


In some implementations, multiple types of attribute values from the text 122 can be queried from the attributes 116. For example, the prosodic contour selector 124 can make a first request for candidate prosodic contours using a first attribute value of the text 122 (e.g., the canonical lexical stress pattern). If the set of results from the first request is too large (e.g., above a predetermined threshold number of results), then the prosodic contour selector 124 can refine the query using a second attribute value of the text 122 (e.g., the exact lexical stress pattern, parts-of-speech sequence, or declaration vs. question vs. exclamation). Alternatively, if the set of results from a first request is too small (e.g., below a predetermined threshold number of results), then the prosodic contour selector 124 can broaden the query (e.g., switch from exact lexical stress pattern to canonical lexical stress pattern).
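The control flow of that refine-or-broaden behavior might look like the following sketch; the thresholds and the `db.query` helper are stand-ins for whatever lookup the database 106 exposes, not an interface described in the patent:

```python
MAX_CANDIDATES = 1000  # illustrative thresholds, not from the patent
MIN_CANDIDATES = 10

def find_candidates(db, text_attrs):
    """Query by canonical stress, then refine or broaden by result count."""
    results = db.query(canonical_stress=text_attrs["canonical_stress"])
    if len(results) > MAX_CANDIDATES:
        # Too many hits: refine with a stricter attribute.
        results = db.query(exact_stress=text_attrs["exact_stress"])
    elif len(results) < MIN_CANDIDATES:
        # Too few hits: broaden to just the total number of stresses.
        results = db.query(total_stresses=text_attrs["canonical_stress"][0])
    return results
```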


The database 106 provides the search results to the text alignment system 104 as candidate information 128. In some implementations, the candidate information 128 includes a set of the prosodic contours 118 to be used as prosody candidates for the text 122. The candidate information 128 can also include at least one of the attributes 116 for each of the candidate prosodic contours and at least one of the models 110.


In some implementations, the identified model is created by the model generator system 108 using the subset of the prosodic contours 118 (e.g., the candidate prosodic contours) having associated transcripts with attributes that match one another. As a result of the query, the attributes of the candidate prosodic contours also match the attribute of the text 122. In some implementations, the candidate prosodic contours have the property that they can be aligned to one another and to the text 122. For example, the attributes of the candidate prosodic contours and the text 122 either have matching exact lexical stress patterns or matching canonical lexical stress patterns, such that a correspondence can be made between at least the stressed elements of the candidate prosodic contours and the text 122, as well as the particular stress of the first and last elements.


The prosodic contour selector 124 calculates an edit distance between the attributes of the text 122 and the attributes of each of the candidate prosodic contours. The prosodic contour selector 124 uses the identified model and the calculated edit distances to estimate RMSDs between an as-yet-unknown prosodic contour of the text 122 and the candidate prosodic contours. The candidate prosodic contour having the smallest RMSD is selected as the prosodic contour for use in the speech synthesis of the text 122. The prosodic contour selector 124 provides the text 122 and the selected prosodic contour to a prosodic contour aligner 130.


The prosodic contour aligner 130 aligns the selected prosodic contour onto the text 122. For example, where a canonical lexical stress pattern is used to identify candidate prosodic contours, the selected prosodic contour may have a different number of unstressed elements than the text 122. The prosodic contour aligner 130 can expand or contract an existing region of unstressed elements in the selected prosodic contour to match the unstressed elements in the text 122. The prosodic contour aligner 130 can add a region of one or more unstressed elements within a region of stressed elements in the selected prosodic contour to match the unstressed elements in the text 122. The prosodic contour aligner 130 can remove a region of one or more unstressed elements within a region of stressed elements in the selected prosodic contour to match the unstressed elements in the text 122.


The prosodic contour aligner 130 provides the text 122 and an aligned prosodic contour 132 to the speech synthesis system 102. The speech synthesis system includes a text-to-speech engine (TTS) 134 that processes the aligned prosodic contour 132 and the text 122. The TTS 134 uses the prosody from the aligned prosodic contour 132 to output the synthesized text as speech 136.



FIG. 2 is a block diagram showing an example of a model generator system 200. The model generator system 200 includes an interface 202 for receiving audio, such as audio data 204, and one or more transcripts 206 of the audio data 204. The model generator system 200 also includes a transcript analyzer 208. The transcript analyzer 208 uses a lexical dictionary 210 to identify one or more attributes 212 in the transcripts 206, such as part-of-speech attributes and lexical stress pattern attributes.


In one example, a first transcript may include the text “Let's go to dinner” and a second transcript may include the text “Let's eat breakfast.” The first transcript has a parts-of-speech sequence including “verb-pronoun-verb-preposition-noun” and the second transcript has a parts-of-speech sequence including “verb-pronoun-verb-noun.” In some implementations, the parts-of-speech attributes can be retrieved from the lexical dictionary 210 by looking up the corresponding words from the transcripts 206 in the lexical dictionary 210. In some implementations, the contexts of other words in the transcripts 206 are used to resolve ambiguities in the parts-of-speech.


In another example of identified attributes, the transcript analyzer 208 can use the lexical dictionary to identify a lexical stress pattern for each of the transcripts 206. For example, the first transcript has a stress pattern of “stressed-stressed-unstressed-stressed-unstressed” and the second transcript has a stress pattern of “stressed-stressed-stressed-unstressed.” In some implementations, a more restrictive stress pattern can be used, such as by separately considering primary stress and secondary stress. In some implementations, a less restrictive lexical stress pattern can be used, such as the canonical lexical stress pattern. For example, the first and second transcripts both have a canonical lexical stress pattern of three total stressed elements, a stressed first element, and an unstressed last element.


The transcript analyzer 208 outputs the attributes 212, for example to a storage device such as the database 106. The transcript analyzer 208 also provides the attributes to an attribute comparer 214. The attribute comparer 214 determines attribute differences between transcripts that have matching lexical stress patterns (e.g., exact or canonical) and provides the attribute differences to a model generator 216. For example, the attribute comparer 214 identifies the transcripts “Let's go to dinner” and “Let's eat breakfast” as having matching canonical lexical stress patterns.


In some implementations, the attribute comparer 214 calculates the attribute difference as the edit distance between attributes of the transcripts. For example, the attribute comparer 214 can calculate the edit distance between the parts-of-speech attributes as one (e.g., one can arrive at the parts-of-speech in the first transcript by a single insertion of a preposition in the second transcript). In some implementations, a more restrictive set of speech parts can be used, such as transitive verbs versus intransitive verbs. In some implementations, a less restrictive set of speech parts can be used, such as by combining pronouns and nouns into a single part-of-speech category.


In some implementations, edit distances between other attributes can be calculated, such as an edit distance between stress pattern attributes. The stress pattern edit distance between the first and second transcripts is one (e.g., one can arrive at the exact lexical stress pattern of the second transcript by a single insertion of an unstressed element in the first transcript).
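A standard Levenshtein computation suffices for these attribute edit distances. A minimal sketch, with the sequence encoding (1 = stressed, 0 = unstressed) chosen for illustration:

```python
def edit_distance(seq_a, seq_b):
    """Levenshtein distance between two attribute sequences, e.g. two
    stress patterns or two parts-of-speech sequences."""
    m, n = len(seq_a), len(seq_b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if seq_a[i - 1] == seq_b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[m][n]

# edit_distance([1, 1, 0, 1, 0], [1, 1, 1, 0]) == 1, matching the
# single unstressed-element insertion described above.
```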


In some implementations, an attribute other than lexical stress can be used to match transcripts for attribute comparison, such as parts-of-speech. In some implementations, all transcripts can be compared, a random sample of transcripts can be compared, and/or the most frequently used transcripts can be compared.


The model generator system 200 includes a prosodic contour extractor 218. The prosodic contour extractor 218 receives the audio data 204 through the interface 202. The prosodic contour extractor 218 processes the audio data 204 to extract one or more prosodic contours 220 corresponding to each of the transcripts 206. In some implementations, the prosodic contours 220 include time-value pairs of the fundamental frequency or pitch at various time locations in the audio data 204. For example, the time can be measured in seconds from the beginning of a particular audio data and the frequency can be measured in Hertz (Hz).


In some implementations, the prosodic contour extractor 218 normalizes the length of each of the prosodic contours 220 to a predetermined length, such as a unit length or one second. In some implementations, the prosodic contour extractor 218 normalizes the values in the time-value pairs. For example, the prosodic contour extractor 218 can use z-score normalization to normalize the frequency values for a particular speaker. The prosodic contour's mean frequency is subtracted from each of its individual frequency values and each result is divided by the standard deviation of the frequency values of the prosodic contour. In some implementations, the mean and standard deviation of a speaker may be applied to multiple prosodic contours using z-score normalization. The means and standard deviations used in the z-score normalization can be stored and used later to de-normalize the prosodic contours.
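The z-score step described here is straightforward to express in code; a sketch assuming NumPy, with names chosen for illustration:

```python
import numpy as np

def z_normalize(frequencies, mean=None, std=None):
    """Z-score normalize a contour's frequency values.

    Pass a stored per-speaker mean/std to normalize several contours
    consistently; the returned (mean, std) can be kept for later
    de-normalization."""
    values = np.asarray(frequencies, dtype=float)
    mean = values.mean() if mean is None else mean
    std = values.std() if std is None else std
    return (values - mean) / std, (mean, std)
```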


The prosodic contour extractor 218 stores the prosodic contours 220 in a storage device, such as the database 106, and provides the prosodic contours 220 to a prosodic contour comparer 222. The prosodic contour comparer 222 calculates differences between the prosodic contours. For example, the prosodic contour comparer 222 can calculate a RMSD between each pair of prosodic contours where the prosodic contours have associated transcripts with matching lexical stress patterns (e.g., exact or canonical). In some implementations, all prosodic contours can be compared, a random sample of prosodic contours can be compared, and/or most frequently used prosodic contours can be compared. For example, the following equation can be used to calculate the RMSD between a pair of prosodic contours (Contour1, Contour2), where each prosodic contour has a particular value at a given time (t).









$$\mathrm{RMSD} = \sqrt{\sum_{t}\bigl(\mathrm{Contour}_1(t) - \mathrm{Contour}_2(t)\bigr)^2} \qquad \text{(Equation 1)}$$
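In code, assuming NumPy and that both contours have been length-normalized and sampled at the same time points, Equation 1 might look like the following sketch (not the patent's implementation):

```python
import numpy as np

def rmsd(contour1, contour2):
    """Equation 1: contours are equal-length arrays sampled at the
    same (normalized) time points."""
    c1, c2 = np.asarray(contour1), np.asarray(contour2)
    return np.sqrt(np.sum((c1 - c2) ** 2))
```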







The prosodic contour comparer 222 provides the prosodic contour differences to the model generator 216. The model generator 216 uses the sets of corresponding transcript differences and prosodic contour differences having associated matching lexical stress patterns to generate one or more models 224. For example, the model generator 216 can perform a linear regression for each set of prosodic contour differences and transcript differences to determine an equation that estimates prosodic contour differences based on attribute differences for a particular lexical stress pattern.


In some implementations, the RMSD between two contours may not be symmetric. For example, when the canonical lexical stress patterns match but the exact lexical stress patterns do not, the RMSD may not be the same in both directions: in the case where spans of unstressed elements are added or removed, the RMSD between the contours is asymmetric. Where the RMSD is not symmetric, the distance between a pair of contours can be calculated as a combination or a sum of the RMSD from the first to the second and the RMSD from the second to the first. For example, the following equation can be used to calculate the distance between a pair of contours, where each contour has a particular value at a given time (t) and the RMSD is asymmetric.










$$\mathrm{Distance}(\mathrm{Contour}_1, \mathrm{Contour}_2) = \sqrt{\sum_{t}\bigl(\mathrm{Contour}_1(t) - \mathrm{Contour}_2(t)\bigr)^2} + \sqrt{\sum_{t}\bigl(\mathrm{Contour}_2(t) - \mathrm{Contour}_1(t)\bigr)^2} \qquad \text{(Equation 2)}$$
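A sketch of Equation 2 in code. The `align(a, b)` helper is hypothetical: it stands for warping contour `a` onto the exact stress pattern of contour `b` (adding or removing unstressed spans), which is what makes the two directions differ:

```python
import numpy as np

def rmsd(c1, c2):
    return np.sqrt(np.sum((np.asarray(c1) - np.asarray(c2)) ** 2))

def asymmetric_distance(contour1, contour2, align):
    """Equation 2: sum the RMSD computed in both alignment directions.
    `align(a, b)` is an assumed helper that warps `a` onto `b`'s
    stress pattern before comparison."""
    return (rmsd(align(contour1, contour2), contour2)
            + rmsd(align(contour2, contour1), contour1))
```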







The model generator 216 stores the models 224 in a storage device, such as the database 106. In some implementations, the model generator system 200 stores the audio data 204 and the transcripts 206 in a storage device, such as the database 106, in addition to the attributes 212 and other prosody data. The attributes 212 are later used, for example, at runtime to identify prosody candidates from the prosodic contours 220. The models 224 are used to select a particular one of the candidate prosodic contours on which to align a text to be synthesized.


Prosody information stored by the model generator system 200 can be stored in a device internal to the model generator system 200 or external to it, such as a system accessible over a data communications network. While shown here as a single system, operations performed by the model generator system 200 can be distributed across multiple systems. For example, a first system can process transcripts, a second system can process audio data, and a third system can generate models. In another example, a first set of transcripts, audio data, and/or models can be processed at a first system while a second set of transcripts, audio data, and/or models is processed at a second system.



FIG. 3 is an example of a table 300 for storing transcript analysis information. The table 300 includes a first transcript having the words “Let's go to dinner” and a second transcript having the words “Let's eat breakfast.” As previously described, a module such as the transcript analyzer 208 can determine exact lexical stress patterns “1 1 0 1 0” and “1 1 1 0” (where “1” corresponds to stressed and “0” corresponds to unstressed), and/or canonical lexical stress patterns “3 1 0” and “3 1 0” for the first and second transcripts, respectively. The transcript analyzer 208 can also determine the parts-of-speech sequences “transitive verb (TV), pronoun (PN), intransitive verb (IV), preposition (P), noun (N),” and “transitive verb (TV), pronoun (PN), verb (V), noun (N)” for the words in the first and second transcripts, respectively. The table 300 can include other attributes determined by analysis of the transcripts as well as data including the time-value pairs representing the prosodic contours.



FIG. 4 is a block diagram showing an example of a text alignment system 400. The text alignment system 400 receives a text 402 to be synthesized into speech. For example, the text alignment system can receive the text 402 including “Get thee to a nunnery.”


The text alignment system 400 includes a text analyzer 404 that analyzes the text 402 to determine one or more attributes of the text 402. For example, the text analyzer 404 can use a lexical dictionary 406 to determine a parts-of-speech sequence (e.g., transitive verb, pronoun, preposition, indefinite article, and noun), an exact lexical stress pattern (e.g., “1 1 0 0 1 0 0”), a canonical lexical stress pattern (e.g., “3 1 0”), phone or phoneme representations of the text 402, or function and content words in the text 402.


The text analyzer 404 provides the attributes of the text 402 to a prosodic contour selector 408. The prosodic contour selector 408 includes a candidate identifier 410 that uses the attributes of the text 402 to send a request 412 for candidate prosodic contours having attributes that match the attribute of the text 402. For example, the candidate identifier 410 can query a database, such as the database 106, using the canonical lexical stress pattern of the text 402 (e.g., three total stressed elements, a first stressed element, and a last unstressed element).


The prosodic contour selector 408 receives one or more candidate prosodic contours 414, as well as one or more attributes 416 of transcripts corresponding to the candidate prosodic contours 414, and at least one model 418 associated with the candidate prosodic contours 414. For example, the attributes 416 may include the exact lexical stress patterns of the transcripts associated with the candidate prosodic contours 414. The prosodic contour selector 408 includes a candidate selector 420 that selects one of the candidate prosodic contours 414 that has a smallest estimated prosodic contour difference with the text 402.


The candidate selector 420 calculates a difference between an attribute of the text 402 and each of the attributes 416 from the transcripts of the candidate prosodic contours 414. The type of attribute being compared can be the same attribute used to identify the candidate prosodic contours 414, another attribute, or a combination of attributes that may include the attribute used to identify the candidate prosodic contours 414. In some implementations, the attribute difference is an edit distance (e.g., the number of individual substitutions, insertions, or deletions needed to make the compared attributes match).


For example, the candidate selector 420 can determine that the edit distance between the exact lexical stress pattern of the text 402 (e.g., “1 1 0 0 1 0 0”) and the exact lexical stress pattern of the first transcript (e.g., “1 1 0 1 0”) is two (e.g., either insertion or removal of two unstressed elements). The candidate selector 420 can determine that the edit distance between the exact lexical stress pattern of the text 402 (e.g., “1 1 0 0 1 0 0”) and the exact lexical stress pattern of the second transcript (e.g., “1 1 1 0”) is three (e.g., either insertion or removal of three unstressed elements).


In some implementations, the candidate selector 420 can compare a type of attribute other than lexical stress to determine the edit distance. For example, the candidate selector 420 can determine an edit distance between the parts-of-speech sequences for the text 402 and the transcripts associated with the candidate prosodic contours.


In some implementations, insertions or deletions of unstressed regions are not allowed at the beginning or the end of the transcripts. The beginning and end of a unit of text, such as a phrase, sentence, paragraph, or other typically bounded grouping of words in speech, can carry important prosodic contour features, and preventing the addition or removal of unstressed regions there preserves that information. In some implementations, the inclusion of the first stress and last stress in the canonical lexical stress pattern provides this protection of the beginning and/or end of a prosodic contour associated with a transcript.


The candidate selector 420 passes the calculated attribute edit distances to the model 418 to determine an estimated RMSD between a proposed prosodic contour of the text 402 and each of the candidate prosodic contours 414. The candidate selector 420 selects the candidate prosodic contour that has the smallest estimated RMSD with the prosodic contour of the text 402. The candidate selector 420 provides the selected candidate prosodic contour to a prosodic contour aligner 422.


The prosodic contour aligner 422 aligns the selected prosodic contour to the text 402. For example, where a canonical lexical stress pattern is used to identify the candidate prosodic contours 414, the selected one of the candidate prosodic contours 414 may have an associated exact lexical stress pattern that is different than the exact lexical stress pattern of the text 402. The prosodic contour aligner 422 can expand or contract one or more unstressed regions in the selected prosodic contour to align the prosodic contour to the text 402. For example, if the first transcript having the exact lexical stress pattern “1 1 0 1 0” is the selected candidate prosodic contour, then the prosodic contour aligner 422 expands both of the unstressed elements into double unstressed elements to match the exact lexical stress pattern “1 1 0 0 1 0 0” of the text 402. Alternatively, if the second transcript having the exact lexical stress pattern “1 1 1 0” is the selected candidate prosodic contour, then the prosodic contour aligner 422 inserts two unstressed elements between the second and third stressed elements and also expands the last unstressed element into two unstressed elements to match the exact lexical stress pattern “1 1 0 0 1 0 0” of the text 402.
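The following sketch shows only the list manipulation behind these expand, insert, and remove operations, assuming each contour element is a (stress, values) pair; the smoothing and time re-normalization described with FIGS. 5A-C are omitted, and all names are illustrative:

```python
def align_pattern(elements, target_pattern):
    """Expand, insert, or remove unstressed elements so the contour's
    stress sequence matches target_pattern (a list of 1s and 0s)."""
    aligned, i = [], 0
    for stress in target_pattern:
        if stress == 1:
            # Skip source unstressed elements the target does not call
            # for (removal, as in FIG. 5C).
            while i < len(elements) and elements[i][0] == 0:
                i += 1
        if i < len(elements) and elements[i][0] == stress:
            aligned.append(elements[i])
            i += 1
        elif stress == 0:
            # Repeat a neighboring unstressed element (expansion, FIG. 5A)
            # or hold a constant value between stressed elements
            # (insertion, FIG. 5B).
            filler = aligned[-1] if aligned and aligned[-1][0] == 0 else (0, None)
            aligned.append(filler)
        else:
            raise ValueError("canonical stress patterns must match")
    return aligned
```

Running this with a source pattern of “1 1 0 1 0” and a target of “1 1 0 0 1 0 0” reproduces the double-expansion example above; “1 1 1 0” against the same target reproduces the insertion example.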


In some implementations, the prosodic contour aligner 422 also de-normalizes the selected candidate prosodic contour. For example, the prosodic contour aligner 422 can reverse the z-score value normalization by multiplying the prosodic contour values by a standard deviation of the frequency and adding a mean of the frequency for a particular voice. In another example, the prosodic contour aligner 422 can de-normalize the time length of the selected candidate prosodic contour. The prosodic contour aligner 422 can proportionately expand or contract each time interval in the selected candidate prosodic contour to arrive at an expected time length for the prosodic contour as a whole. The prosodic contour aligner 422 outputs an aligned prosodic contour 424 and the text 402 for use in speech synthesis, such as at the speech synthesis system 102.
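The de-normalization step is the inverse of the earlier z-score and unit-length normalizations; a minimal sketch, assuming NumPy and stored per-speaker statistics:

```python
import numpy as np

def denormalize(norm_values, norm_times, mean, std, duration):
    """Reverse the z-score and unit-length normalizations. mean/std are
    the stored per-speaker statistics; duration is the expected overall
    time length for the synthesized utterance."""
    values = np.asarray(norm_values) * std + mean  # undo z-score
    times = np.asarray(norm_times) * duration      # undo unit-length scaling
    return times, values
```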



FIG. 5A is an example of a pair of prosodic contour graphs 500 before and after expanding an unstressed region 502. The unstressed region 502 is expanded from one unstressed element to two unstressed elements, for example, to match the exact lexical stress pattern of a text to be synthesized. In this example, the overall time length of the prosodic contour remains the same after the expansion of the unstressed region 502. In some implementations, an unstressed element added by an expansion has a predetermined time length. In some implementations, the other elements in the prosodic contour (stressed or unstressed) are accordingly and proportionately contracted to maintain the same overall time length after the expansion.



FIG. 5B is an example of a pair of prosodic contour graphs 530 before and after inserting an unstressed region 532 between a pair of stressed elements 534. In some implementations, the unstressed region 532 has a constant frequency, such as the frequency at which the pair of stressed elements 534 were divided. Alternatively, the values in the unstressed region 532 can be smoothed to prevent discontinuities at the junctions with the pair of stressed elements 534. Again, the overall time length of the prosodic contour remains the same after the insertion of the unstressed region 532. In some implementations, an unstressed element added by an insertion has a predetermined time length. In some implementations, the other elements in the prosodic contour (stressed or unstressed) are accordingly and proportionately contracted to maintain the same overall time length after the expansion.



FIG. 5C is an example of a pair of prosodic contour graphs 560 before and after removing an unstressed region 562 between a pair of stressed regions 564. In some implementations, the values in the pair of stressed regions 564 can be smoothed to prevent discontinuities at the junction with one another. Again, the overall time length of the prosodic contour remains the same after the removal of the unstressed region. In some implementations, the other elements in the prosodic contour (stressed or unstressed) are accordingly and proportionately expanded to maintain the same overall time length after the removal.


The following flow charts show examples of processes that may be performed, for example, by a system such as the system 100, the model generator system 200, and/or the text alignment system 400. For clarity of presentation, the description that follows uses the system 100, the model generator system 200, and the text alignment system 400 as the basis of examples for describing these processes. However, another system, or combination of systems, may be used to perform the processes.



FIG. 6 is a flow chart showing an example of a process 600 for generating models. The process 600 begins with receiving (602) multiple speech utterances and corresponding transcripts of the speech utterances. For example, the model generator system 200 can receive the audio data 204 and the transcripts 206 through the interface 202. In some implementations, the audio data 204 and the transcripts 206 include transcribed audio such as television broadcast news, audio books, and closed captioning for movies. In some implementations, the amount of transcribed audio processed by the model generator system 200 or distributed over multiple model generation systems can be very large, such as hundreds of thousands or millions of corresponding prosodic contours.


The process 600 extracts (604) one or more prosodic contours from each of the speech utterances, each of the prosodic contours including one or more time and value pairs. For example, the prosodic contour extractor 218 can extract time-value pairs for fundamental frequency at various times in each of the speech utterances to generate a prosodic contour for each of the speech utterances.


The process 600 modifies (606) the extracted prosodic contours. For example, the prosodic contour extractor 218 can normalize the time length of each prosodic contour and/or normalize the frequency values for each prosodic contour. In some implementations, normalizing the prosodic contours allows the prosodic contours to be compared and aligned more easily.


The process 600 stores (608) the modified prosodic contours. For example, the model generator system 200 can output the prosodic contours 220 and store them in a storage device, such as the database 106.


The process 600 calculates (610) one or more distances between the stored prosodic contours. For example, the prosodic contour comparer 222 can determine a RMSD between pairs of the prosodic contours 220. In some implementations, the prosodic contour comparer 222 compares all possible pairs of the prosodic contours 220. In some implementations, the prosodic contour comparer 222 compares a random sampling of pairs from the prosodic contours 220. In some implementations, the prosodic contour comparer 222 compares pairs of the prosodic contours 220 that have a matching attribute value, such as a matching canonical lexical stress pattern.


The process 600 analyzes (612) the transcripts to determine one or more attributes of the transcripts. For example, the transcript analyzer 208 can use the lexical dictionary 210 to analyze the transcripts 206 and determine parts-of-speech sequences, exact lexical stress patterns, canonical lexical stress patterns, phones, and/or phonemes.


The process 600 stores (614) at least one of the attributes for each of the transcripts. For example, the model generator system 200 can output the attributes 212 and store them in a storage device, such as the database 106.


The process 600 calculates (616) one or more distances between the attributes. For example, the attribute comparer 214 can calculate a difference or edit distance between one or more attributes for a pair of the transcripts 206. In some implementations, the attribute comparer 214 compares all possible pairs of the transcripts 206. In some implementations, the attribute comparer 214 compares a random sampling of pairs from the transcripts 206. In some implementations, the attribute comparer 214 compares pairs of the transcripts 206 that have a matching attribute value, such as a matching canonical lexical stress pattern.


The process 600 creates (618) a model, using the distances between the prosodic contours and the distances between the transcripts, that estimates a distance between prosodic contours of an utterance pair based on a distance between attributes of the utterance pair. For example, the model generator 216 can perform a multiple linear regression on the RMSD values and the attribute edit distances for a set of utterance pairs (e.g., all utterance pairs with transcripts having a particular canonical lexical stress pattern).


The process 600 stores (620) the model. For example, the model generator system 200 can output the models 224 and store them in a storage device, such as the database 106.


If more speech and corresponding transcripts exist (622), the process 600 performs operations 604 through 620 again. For example, the model generator system 200 can repeat the model generation process for each attribute value used to group the pairs of utterances. In one example, the model generator system 200 identifies each of the different canonical lexical stress patterns that exist in the utterances. Further, the model generator system 200 repeats the model generation process for each set of utterance pairs having a particular canonical lexical stress pattern. A first model may represent pairs of utterances having a canonical lexical stress pattern of “3 1 0,” while a second model may represent pairs of utterances having a canonical lexical stress pattern of “4 0 0.”
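Putting operations 610 through 620 together, a per-pattern model-building loop might look like this sketch, where each input tuple and the use of np.polyfit as the regression stand-in are assumptions for illustration:

```python
import numpy as np
from collections import defaultdict

def build_models(utterance_pairs):
    """Fit one linear model per canonical stress pattern. Each element
    of utterance_pairs is (canonical_pattern, attribute_edit_distance,
    contour_rmsd) for one pair of utterances."""
    grouped = defaultdict(list)
    for pattern, edit_dist, contour_rmsd in utterance_pairs:
        grouped[pattern].append((edit_dist, contour_rmsd))
    models = {}
    for pattern, pairs in grouped.items():
        xs, ys = zip(*pairs)
        models[pattern] = np.polyfit(xs, ys, deg=1)  # (slope, intercept)
    return models
```

The keys of the returned mapping correspond to patterns such as (3, 1, 0) or (4, 0, 0) in the example above.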



FIG. 7 is a flow chart showing an example of a process 700 for selecting and aligning a prosodic contour. The process 700 begins with receiving (702) text to be synthesized as speech. For example, the text alignment system 400 receives the text 402 from a user or an application seeking speech synthesis.


The process 700 analyzes (704) the received text to determine one or more attributes of the received text. For example, the text analyzer 404 analyzes the text 402 to determine one or more lexical attributes of the text 402, such as a parts-of-speech sequence, an exact lexical stress pattern, a canonical lexical stress pattern, phones, and/or phonemes.


The process 700 identifies (706) one or more candidate utterances from a database of stored utterances based on the determined attributes of the received text and one or more corresponding attributes of the stored utterances. For example, the candidate identifier 410 uses at least one of the attributes of the text 402 to identify the candidate prosodic contours 414. The candidate identifier 410 also identifies the model 418 associated with the candidate prosodic contours 414. In some implementations, the candidate identifier 410 uses the attribute of the text 402 as a key value to query the corresponding attributes of the prosodic contours in the database. For example, the candidate identifier 410 can perform a query for prosodic contours having a canonical lexical stress pattern of “3 1 0.”


The process 700 selects (708) at least one of the identified candidate utterances using a distance estimate based on stored distance information in the database for the stored utterances. For example, the candidate selector 420 can use the model 418 to determine an estimated distance between a hypothetical prosodic contour of the text 402 and the candidate prosodic contours 414. The candidate selector 420 provides, as input to the model 418, at least one lexical attribute edit distance between the text 402 and each of the candidate prosodic contours 414. The candidate selector 420 selects a final prosodic contour from the candidate prosodic contours 414 that has the smallest estimated prosodic contour distance from the text 402.
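The selection step reduces to an argmin over model estimates; a sketch, reusing the Levenshtein `edit_distance` from the earlier sketch and a (slope, intercept) model such as the one produced by `build_models` above (both assumptions, not the patent's API):

```python
def select_contour(text_attrs, candidates, model):
    """Pick the stored contour whose estimated distance to the text's
    hypothetical contour is smallest. candidates is a list of
    (contour, transcript_attrs) pairs."""
    slope, intercept = model
    best_contour, best_estimate = None, float("inf")
    for contour, attrs in candidates:
        d = edit_distance(text_attrs, attrs)  # e.g. exact stress patterns
        estimate = slope * d + intercept
        if estimate < best_estimate:
            best_contour, best_estimate = contour, estimate
    return best_contour
```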


In some implementations, the candidate selector 420 selects multiple final prosodic contours. For example, the candidate selector 420 can select multiple final prosodic contours and then average them to determine a single final prosodic contour. The candidate selector 420 can select a predetermined number of final prosodic contours and/or final prosodic contours that meet a predetermined proximity threshold of estimated distance from the text 402.


The process 700 aligns (710) a prosodic contour of the selected candidate utterance with the received text. For example, the prosodic contour aligner 422 aligns the final prosodic contour onto the text 402. In some implementations, aligning can include modifying an existing unstressed region by expanding or contracting the number of unstressed elements in the unstressed region, inserting an unstressed region with at least one unstressed element, or removing an unstressed region completely. In some implementations, insertions and removals do not occur at the beginning and/or end of a prosodic contour. In some implementations, each prosodic contour represents a self-contained linguistic unit, such as a phrase or sentence. In some implementations, each element at which a modification, insertion, or removal occurs represents a subpart of the prosodic contour, such as a word, syllable, phoneme, phone, or individual character.


The process 700 outputs (712) the received text with the aligned prosodic contour to a text-to-speech engine. For example, the text alignment system 400 can output the text and the aligned prosodic contour 424 to a TTS engine, such as the TTS 134.



FIG. 8 is a schematic diagram of a computing system 800. The computing system 800 can be used for the operations described in association with any of the computer-implemented methods and systems described previously, according to one implementation. The computing system 800 includes a processor 810, a memory 820, a storage device 830, and an input/output device 840. Each of the processor 810, the memory 820, the storage device 830, and the input/output device 840 are interconnected using a system bus 850. The processor 810 is capable of processing instructions for execution within the computing system 800. In one implementation, the processor 810 is a single-threaded processor. In another implementation, the processor 810 is a multi-threaded processor. The processor 810 is capable of processing instructions stored in the memory 820 or on the storage device 830 to display graphical information for a user interface on the input/output device 840.


The memory 820 stores information within the computing system 800. In one implementation, the memory 820 is a computer-readable medium. In one implementation, the memory 820 is a volatile memory unit. In another implementation, the memory 820 is a non-volatile memory unit.


The storage device 830 is capable of providing mass storage for the computing system 800. In one implementation, the storage device 830 is a computer-readable medium. In various different implementations, the storage device 830 may be a floppy disk device, a hard disk device, an optical disk device, or a tape device.


The input/output device 840 provides input/output operations for the computing system 800. In one implementation, the input/output device 840 includes a keyboard and/or pointing device. In another implementation, the input/output device 840 includes a display unit for displaying graphical user interfaces.


The features described can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations of them. The apparatus can be implemented in a computer program product tangibly embodied in an information carrier, e.g., in a machine-readable storage device or in a propagated signal, for execution by a programmable processor; and method steps can be performed by a programmable processor executing a program of instructions to perform functions of the described implementations by operating on input data and generating output. The described features can be implemented advantageously in one or more computer programs that are executable on a programmable system including at least one programmable processor coupled to receive data and instructions from, and to transmit data and instructions to, a data storage system, at least one input device, and at least one output device. A computer program is a set of instructions that can be used, directly or indirectly, in a computer to perform a certain activity or bring about a certain result. A computer program can be written in any form of programming language, including compiled or interpreted languages, and it can be deployed in any form, including as a stand-alone program or as a module, component, subroutine, or other unit suitable for use in a computing environment.


Suitable processors for the execution of a program of instructions include, by way of example, both general and special purpose microprocessors, and the sole processor or one of multiple processors of any kind of computer. Generally, a processor will receive instructions and data from a read-only memory or a random access memory or both. The essential elements of a computer are a processor for executing instructions and one or more memories for storing instructions and data. Generally, a computer will also include, or be operatively coupled to communicate with, one or more mass storage devices for storing data files; such devices include magnetic disks, such as internal hard disks and removable disks; magneto-optical disks; and optical disks. Storage devices suitable for tangibly embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices, such as EPROM, EEPROM, and flash memory devices; magnetic disks such as internal hard disks and removable disks; magneto-optical disks; and CD-ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, ASICs (application-specific integrated circuits).


To provide for interaction with a user, the features can be implemented on a computer having a display device such as a CRT (cathode ray tube) or LCD (liquid crystal display) monitor for displaying information to the user and a keyboard and a pointing device such as a mouse or a trackball by which the user can provide input to the computer.


The features can be implemented in a computer system that includes a back-end component, such as a data server, or that includes a middleware component, such as an application server or an Internet server, or that includes a front-end component, such as a client computer having a graphical user interface or an Internet browser, or any combination of them. The components of the system can be connected by any form or medium of digital data communication such as a communication network. Examples of communication networks include, e.g., a LAN, a WAN, and the computers and networks forming the Internet.


The computer system can include clients and servers. A client and server are generally remote from each other and typically interact through a network, such as the described one. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other.


Although a few implementations have been described in detail above, other modifications are possible. For example, while described above as separate offline and runtime processes, one or more of the models 110 can be calculated during or after receiving the text 122. The particular models to be created after receiving the text 122 can be determined, for example, by the stress pattern of the text 122 (e.g., exact or canonical).
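As a minimal sketch of this deferred approach, the following Python fragment caches one model per stress pattern and builds each model only when text with that pattern first arrives. The names here (ModelCache, build_model, stress_pattern) are illustrative assumptions, not identifiers from this specification.

```python
# A minimal sketch of deferred (runtime) model creation. The names here
# (ModelCache, build_model) are illustrative assumptions; the specification
# does not prescribe this structure.

from typing import Callable, Dict


class ModelCache:
    """Lazily builds and caches one distance model per stress pattern."""

    def __init__(self, build_model: Callable[[str], object]):
        self._build_model = build_model  # trains a model for one pattern
        self._models: Dict[str, object] = {}

    def model_for(self, stress_pattern: str) -> object:
        # Build the model on first request for this pattern (e.g., the exact
        # or canonical stress pattern of the received text), rather than
        # computing every model offline ahead of time.
        if stress_pattern not in self._models:
            self._models[stress_pattern] = self._build_model(stress_pattern)
        return self._models[stress_pattern]
```

Keying the cache on the canonical rather than the exact stress pattern would trade precision for coverage, since more received texts share a canonical pattern with previously built models.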


In addition, the logic flows depicted in the figures do not require the particular order shown, or sequential order, to achieve desirable results. Other steps may be provided, or steps may be eliminated, from the described flows, and other components may be added to, or removed from, the described systems. Accordingly, other implementations are within the scope of the following claims.

Claims
  • 1. A method implemented by a system of one or more computers, comprising: receiving, by the system of one or more computers, speech utterances encoded in audio data and a transcript having text that represents the speech utterances; extracting, by the system of one or more computers, prosodic contours from the utterances; extracting, by the system of one or more computers and from the transcript, attributes of text associated with the utterances; for pairs of utterances from the speech utterances, determining, by the system of one or more computers, distances between attributes of text associated with the pairs of utterances; for the pairs of utterances from the speech utterances, determining, by the system of one or more computers, distances between prosodic contours for the pairs of utterances; generating, by the system of one or more computers, a model based on the determined distances for the attributes and the prosodic contours, the model adapted to estimate a distance between a determined prosodic contour for a received utterance and a prosodic contour for a synthesized utterance when given a distance between an attribute of text associated with the received utterance and an attribute of text associated with the synthesized utterance; and storing, by the system of one or more computers, the model in a computer-readable memory device.
  • 2. The method of claim 1, further comprising modifying the extracted prosodic contours at a time previous to determining the distances between the extracted prosodic contours.
  • 3. The method of claim 1, wherein extracting the prosodic contours from the utterances comprises generating for each prosodic contour time-value pairs that comprise a prosodic contour value and a time at which the prosodic contour value occurs.
  • 4. The method of claim 1, wherein the extracted prosodic contours comprise fundamental frequencies, pitches, energy measurements, gain measurements, duration measurements, intensity measurements, measurements of rate of speech, or spectral tilt measurements.
  • 5. The method of claim 1, wherein the extracted attributes comprise exact stress patterns, canonical stress patterns, parts of speech, phone representations, phoneme representations, or indications of declaration versus question versus exclamation.
  • 6. The method of claim 1, further comprising aligning the utterances in the audio data with text, from the transcript, that represents the utterances to determine which speech utterances are associated with which text.
  • 7. The method of claim 1, wherein generating the model comprises mapping the distances between the attributes of text associated with the pairs of utterances to the distances between the prosodic contours for the pairs of utterances in order to determine a relationship between the distances associated with the attributes of the text and the distances associated with the prosodic contours for pairs of utterances.
  • 8. The method of claim 1, wherein the distances between the prosodic contours are calculated using a root mean square difference calculation.
  • 9. The method of claim 1, wherein the model is created using a linear regression of the distances between the prosodic contours and the distances between the transcripts.
  • 10. The method of claim 1, further comprising selecting pairs of utterances for use in determining distances based on whether the utterances have canonical stress patterns that match.
  • 11. The method of claim 1, comprising creating multiple models, including the model, where each of the models has a different canonical stress pattern.
  • 12. The method of claim 1, further comprising selecting, based on estimated distances between a plurality of determined prosodic contours and a prosodic contour of text to be synthesized, a final determined prosodic contour associated with a smallest distance.
  • 13. The method of claim 12, further comprising generating a prosodic contour for the text to be synthesized using the final determined prosodic contour.
  • 14. The method of claim 13, further comprising outputting the generated prosodic contour and the text to be synthesized to a text-to-speech engine for speech synthesis.
  • 15. A computer-implemented system, comprising: one or more computers having: an interface to receive speech utterances encoded in audio data and a transcript having text that represents the speech utterances; a prosodic contour extractor to extract prosodic contours from the utterances; a transcript analyzer to extract attributes of text associated with the utterances; an attribute comparer to determine, for pairs of utterances from the speech utterances, distances between attributes of text associated with the pairs of utterances; a prosodic contour comparer to determine, for the pairs of utterances from the speech utterances, distances between prosodic contours for the pairs of utterances; a model generator programmed to generate a model based on the determined distances for the attributes and the prosodic contours, the model adapted to estimate a distance between a determined prosodic contour for a received utterance and a prosodic contour for a synthesized utterance when given a distance between an attribute of text associated with the received utterance and an attribute of text associated with the synthesized utterance; and a computer-readable memory device associated with the one or more computers to store the model.
  • 16. The system of claim 15, wherein the system is further programmed to modify the extracted prosodic contours at a time previous to determining the distances between the extracted prosodic contours.
  • 17. The system of claim 15, wherein extracting the prosodic contours from the utterances comprises generating for each prosodic contour time-value pairs that comprise a prosodic contour value and a time at which the prosodic contour value occurs.
  • 18. The system of claim 15, wherein the extracted prosodic contours comprise fundamental frequencies, pitches, energy measurements, gain measurements, duration measurements, intensity measurements, measurements of rate of speech, or spectral tilt measurements.
  • 19. The system of claim 15, wherein the extracted attributes comprise exact stress patterns, canonical stress patterns, parts of speech, phone representations, phoneme representations, or indications of declaration versus question versus exclamation.
  • 20. The system of claim 15, wherein the system is further programmed to align the utterances in the audio data with text, from the transcript, that represents the utterances to determine which speech utterances are associated with which text.
  • 21. The system of claim 15, wherein generating the model comprises mapping the distances between the attributes of text associated with the pairs of utterances to the distances between the prosodic contours for the pairs of utterances in order to determine a relationship between the distances associated with the attributes of the text and the distances associated with the prosodic contours for pairs of utterances.
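For concreteness, the following is a minimal, self-contained sketch of the pipeline recited in claims 1, 3, 8, 9, 12, and 13: contours represented as time-value pairs, a root mean square (RMS) distance between contours, a linear regression mapping attribute distances to contour distances, and selection of the stored contour with the smallest estimated distance. All function names and the toy attribute distance are assumptions made for illustration, not definitions from the claims.

```python
# Illustrative sketch only: the claims do not fix these names, the attribute
# distance, or the sampling assumptions made here.

import math
from itertools import combinations
from typing import List, Sequence, Tuple

Contour = List[Tuple[float, float]]  # (time, value) pairs, e.g., f0 samples


def rms_distance(a: Contour, b: Contour) -> float:
    """RMS difference between two contours (claim 8), assuming both are
    sampled at corresponding time points."""
    n = min(len(a), len(b))
    return math.sqrt(sum((a[i][1] - b[i][1]) ** 2 for i in range(n)) / n)


def attribute_distance(x: Sequence[str], y: Sequence[str]) -> float:
    """Toy textual-attribute distance: fraction of aligned positions (e.g.,
    part-of-speech tags) that differ."""
    n = min(len(x), len(y))
    return sum(x[i] != y[i] for i in range(n)) / max(n, 1)


def fit_linear_model(points: List[Tuple[float, float]]) -> Tuple[float, float]:
    """Ordinary least squares fit y = slope * x + intercept over
    (attribute distance, contour distance) points (claim 9)."""
    xs, ys = [p[0] for p in points], [p[1] for p in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    var = sum((x - mx) ** 2 for x in xs) or 1e-9  # guard: zero variance
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var
    return slope, my - slope * mx


def train(corpus: List[Tuple[Sequence[str], Contour]]) -> Tuple[float, float]:
    """Fit the model over all pairs of utterances; in a per-pattern variant
    (claims 10 and 11), the corpus would hold only utterances whose canonical
    stress patterns match."""
    points = [
        (attribute_distance(a1, a2), rms_distance(c1, c2))
        for (a1, c1), (a2, c2) in combinations(corpus, 2)
    ]
    return fit_linear_model(points)


def select_contour(
    model: Tuple[float, float],
    corpus: List[Tuple[Sequence[str], Contour]],
    new_attributes: Sequence[str],
) -> Contour:
    """Estimate, for each stored contour, its distance to the unknown contour
    of the text to be synthesized, and return the closest one (claims 12-13)."""
    slope, intercept = model
    _, best_contour = min(
        corpus,
        key=lambda item: slope * attribute_distance(item[0], new_attributes)
        + intercept,
    )
    return best_contour
```

In use, one would train once over the recorded utterances (or once per canonical stress pattern), then at synthesis time extract new_attributes from the incoming text, call select_contour, and hand the winning contour to the synthesis engine (claim 14).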
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a divisional of U.S. application Ser. No. 12/271,568, filed on Nov. 14, 2008, by Jansche et al., the contents of which are fully incorporated by reference herein.
