In recent years, the field of text-to-speech (TTS) conversion has been widely researched, with text-to-speech technology appearing in a number of commercial applications. Recent progress in unit-selection speech synthesis and Hidden Markov Model (HMM) speech synthesis has led to considerably more natural-sounding synthetic speech, making such speech suitable for many types of applications.
However, relatively few of these applications provide text-to-speech features. One of the barriers to popularizing text-to-speech in such applications is the technical difficulties in installing, maintaining and customizing a text-to-speech engine. For example, when a user wants to integrate text-to-speech into an application program, the user has to search among text-to-speech engine providers, pick one from the available choices, buy a copy of the software, and install it on possibly many machines. Not only does the user or his or her team have to understand the software, but the installing, maintaining and customizing of a text-to-speech engine can be a tedious and technically difficult process.
For example, in current text-to-speech applications, text-to-speech engines need to be installed locally, and require tedious and technically difficult customization. As a result, users are often frustrated when configuring different text-to-speech engines, especially when what many users typically want to do is only occasionally convert a small piece of text into speech.
Further, once a user has made a choice of a text-to-speech engine, the user has limited flexibility in choosing voices. It is not easy to obtain an additional voice without paying additional development costs.
Still further, each high quality text-to-speech voice requires a relatively large amount of storage, whereby the large amount of storage needed to install multiple high quality text-to-speech voices is another barrier to wider adoption of text-to-speech technology. It is essentially not possible for an individual user or small entity to have multiple text-to-speech engines with dozens or hundreds of voices for use in applications.
This Summary is provided to introduce a selection of representative concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used in any way that would limit the scope of the claimed subject matter.
Briefly, various aspects of the subject matter described herein are directed towards a technology by which a user-accessible service converts user input data to a speech waveform, based on user-provided input and parameter data, and voice data from a data store of voices. For example, the user may provide text tagged with parameter data, which is parsed such that the text is sent to a text-to-speech engine along with selected base or custom voice data, and the resulting waveform is morphed based on one or more tags, each tag accompanying a piece of text. The user may also provide speech. The service may be remotely accessible, such as by network/Internet access, and/or by telephone or mobile telephone systems.
Once created, data corresponding to the speech waveforms may be persisted in a data store of personal voice personas. For example, the speech waveform may be maintained in a personal voice persona comprising a collection of properties, such as in a name card. The personal voice persona may be shared, and may be used as the properties of an object.
In one example aspect, the voice persona service receives user input and parameter data, and retrieves a base voice or a custom voice based on the user input. The retrieved voice may be modified based on the user input and/or the parameter data, and the parameter data saved in a voice persona. The user may make changes to the parameter data in an editing operation, and/or may hear a playback of the speech while editing. The service may output a waveform corresponding to the voice persona, such as an audio (e.g., .wav) file for embedding in a software program, and/or may persist the voice persona corresponding to that waveform.
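By way of non-limiting illustration, the general flow described above may be sketched in Python as follows; the class, method and voice names (VoicePersonaService, create_persona, "base-female-en" and so forth) are hypothetical, and a placeholder that writes silence stands in for the actual text-to-speech and morphing engines.

```python
# Minimal sketch of the voice persona service flow described above.
# All names are hypothetical; the TTS engine is replaced by a placeholder
# that emits silence so the example stays self-contained and runnable.
import io
import wave


class VoicePersonaService:
    def __init__(self):
        # Data stores of base voices and user-derived voice personas.
        self.base_voices = {"base-female-en": {"pitch_scale": 1.0}}
        self.personas = {}  # persona name -> saved parameter data

    def create_persona(self, name, base_voice, **morph_params):
        # Retrieve the base voice, apply the user's parameter data,
        # and persist the result as a named voice persona.
        params = dict(self.base_voices[base_voice])
        params.update(morph_params)
        self.personas[name] = {"base_voice": base_voice, "params": params}
        return self.personas[name]

    def synthesize(self, text, persona_name):
        # Look up the persona, "synthesize" the text, and return .wav bytes.
        persona = self.personas[persona_name]
        return self._placeholder_tts(text, persona["params"])

    @staticmethod
    def _placeholder_tts(text, params, sample_rate=16000):
        # Stand-in for a real TTS engine plus morphing: emits 0.1 s of
        # silence per word so the output length at least tracks the input.
        n_samples = int(0.1 * sample_rate) * max(1, len(text.split()))
        buf = io.BytesIO()
        with wave.open(buf, "wb") as w:
            w.setnchannels(1)
            w.setsampwidth(2)
            w.setframerate(sample_rate)
            w.writeframes(b"\x00\x00" * n_samples)
        return buf.getvalue()


if __name__ == "__main__":
    service = VoicePersonaService()
    service.create_persona("cheerful-aunt", "base-female-en", pitch_scale=1.3)
    wav_bytes = service.synthesize("Happy birthday!", "cheerful-aunt")
    print(len(wav_bytes), "bytes of audio")
```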
Other advantages may become apparent from the following detailed description when taken in conjunction with the drawings.
The present invention is illustrated by way of example and not limited in the accompanying figures in which like reference numerals indicate similar elements and in which:
Various aspects of the technology described herein are generally directed towards an easily accessible voice persona platform, through which users can create new voice personas, apply voice personas in their applications or text, and share customization of new personas with others. As will be understood, the technology described herein facilitates text-to-speech with relatively little if any of the technical difficulties that are associated with installing and maintaining text-to-speech engines and voices.
To this end, there is provided a text-to-speech service through which users may voice-empower their applications or text content easily, through protocols for voice persona creation, implementation and sharing. Typical example scenarios for usage include creating podcasts by sending text with tags for desired voice personas to the text-to-speech service and getting back the corresponding speech waveforms, or converting a text-based greeting card to a voice greeting card.
Other aspects include creating voice personas by integrating text-to-speech technologies with voice morphing technologies such that, for example, a base voice may be modified to have one of various emotions, a local accent and/or other acoustic effects.
While various examples herein are primarily directed to layered platform architectures, example interfaces, example effects, and so forth, it is understood that these are only examples. As such, the present invention is not limited to any particular embodiments, aspects, concepts, structures, functionalities or examples described herein. Rather, any of the embodiments, aspects, concepts, structures, functionalities or examples described herein are non-limiting, and the present invention may be used in various ways that provide benefits and advantages in computing and speech technology in general.
Turning to
In general, the user layer 102 acts as a client customer of the voice persona service 104. The user layer 102 submits text-to-speech requests, such as by a web browser or a client application that runs in a local computing system or other device. As described below, the synthesized speech is transferred to the user layer 102.
The voice persona service layer 104 communicates with user layer clients via a voice persona creation protocol 110 and an implementation protocol 112, to carry out various processes as described below. Processes include base voice creation 114, voice persona creation 116 and parsing (parser 118). In general, the service integrates various text-to-speech systems and voices, for remote or local access through the Internet or other channels, such as a network, a telephone system, a mobile phone system, and/or a local application program. Users submit text embedded with tags to the voice persona service for assigning personas. The service converts the text to a speech waveform, which is downloadable to the users or can be streamed to an assigned application.
The voice persona database layer 106 manages and maintains text-to-speech engines 120, one or more voice morphing engines 122, a data store of base voices 124 and a data store of derived voice personas (voice persona collection) 126. The voice persona database layer 106 includes or is otherwise associated with a voice persona sharing protocol 128 through which users can share or trade personal/private voice personas.
As can be seen in this example, users can thus access the voice persona service layer 104 through three protocols, for voice persona creation, implementation and sharing. The voice persona creation protocol 110 is used for creating new voice personas, and includes mechanisms for selecting base text-to-speech voices and for applying a specific voice morphing effect or dialect. The creation protocol 110 also includes mechanisms to convert a set of user-provided speech waveforms to a base text-to-speech voice. The voice persona implementation protocol 112 comprises a main protocol for users to submit text-to-speech requests, in which users can assign voice personas to a specific piece of text. The voice persona sharing protocol 128 is used to maintain and manage the voice persona data stores in the database layer according to each user's specifications. In general, the sharing protocol is used to store, retrieve and update voice persona data in a secure, efficient and robust way.
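As one non-limiting sketch, the three protocols may be pictured as programmatic interfaces; the method names and signatures below are assumptions made only for this example and are not defined by the platform described herein.

```python
# Hypothetical sketch of the creation, implementation and sharing protocols
# as abstract interfaces; names and signatures are illustrative assumptions.
from abc import ABC, abstractmethod
from typing import List, Optional


class VoicePersonaCreationProtocol(ABC):
    @abstractmethod
    def create_persona(self, base_voice: str, morphing_target: str,
                       dialect: Optional[str] = None) -> str:
        """Derive a new persona from a base voice; return its unique name."""

    @abstractmethod
    def create_base_voice(self, recordings: List[bytes], script: str) -> str:
        """Convert user-provided speech waveforms plus a script to a base voice."""


class VoicePersonaImplementationProtocol(ABC):
    @abstractmethod
    def synthesize(self, tagged_text: str) -> bytes:
        """Convert text with embedded persona tags to a speech waveform."""


class VoicePersonaSharingProtocol(ABC):
    @abstractmethod
    def share(self, persona_name: str, with_user: str) -> None:
        """Make a private voice persona available to another user."""

    @abstractmethod
    def search(self, **properties) -> List[str]:
        """Find shared personas by name-card properties (gender, age, ...)."""
```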
As represented in the voice persona platform 200 of
More particularly, in one implementation, a voice persona comprises an object having various properties. Example voice persona object properties may include a greeting sentence, a gender, an age range the object represents, the text-to-speech engine it uses, a language it speaks, a base voice from which the object is derived, the supported morphing targets, which morphing target is applied, the object's parent voice persona, its owner and popularity, and so forth. Each voice persona has a unique name, through which users can access it in an application. Some voice persona properties may be exposed to users, in what is referred to as a voice persona name card, to help identify a particular voice persona (e.g., the corresponding object's properties). For example, each persona has a name card describing its origin, the algorithm and parameters for morphing effects, dialect effects and venue effects, the creators, popularity and so forth. A new voice persona may be derived from an existing one by inheriting its main properties and overwriting some of them as desired.
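For illustration only, such an object may be sketched as a simple data class whose fields mirror the example properties above; the field names and sample values are assumptions rather than a defined schema.

```python
# Possible shape of a voice persona name card as a data object; the fields
# simply mirror the example properties listed above, and the values are toys.
from dataclasses import dataclass, replace
from typing import Optional, Tuple


@dataclass(frozen=True)
class VoicePersonaCard:
    name: str                        # unique name used to access the persona
    greeting: str                    # greeting sentence spoken as a preview
    gender: str
    age_range: str                   # e.g. "teen", "adult", "senior"
    tts_engine: str                  # text-to-speech engine it uses
    language: str
    base_voice: str                  # base voice the persona is derived from
    supported_morph_targets: Tuple[str, ...] = ()
    applied_morph_target: Optional[str] = None
    parent_persona: Optional[str] = None
    owner: Optional[str] = None
    popularity: int = 0


# Deriving a new persona by inheriting the parent's properties and
# overwriting a few of them, as described above.
base = VoicePersonaCard(
    name="standard-female", greeting="Hello!", gender="female",
    age_range="adult", tts_engine="unit-selection", language="en-US",
    base_voice="standard-female",
    supported_morph_targets=("child", "whisper", "robot"),
)
story_teller = replace(base, name="story-child",
                       applied_morph_target="child",
                       parent_persona="standard-female")
print(story_teller.name, story_teller.applied_morph_target)
```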
As can be readily appreciated, treating a high-level persona concept as a management unit, such as in the form of a voice persona name card, hides complex text-to-speech technology details from customers. Further, configuring voice personas as individual units allows voice personas to be downloaded, transferred, traded, or exchanged as a form of property, like commercial goods.
Within the platform, there is a voice persona pool 224 that includes base voice personas 2261-226k to represent the base voices supported by the text-to-speech engines 2201-220i, and derived voice personas in a morphing target pool 228 that are created by applying a morphing target on a base voice persona.
In one example implementation, users will hear a synthetic example immediately after each change in morphing targets or parameters. Example morphing targets supported in one example voice persona platform are set forth below:
As also shown in
The voice persona creation interface 231 allows a user to create a voice persona.
The large central window changes depending on the user selection of applying or editing, and as represented in this example comprises a set of scripts 360 (
The voice persona employment interface 231 is straightforward for users. Users insert a voice persona name tag before the text they want spoken and the tag takes effect until the end of the text, unless another tag is encountered. To create a customized voice persona, users submit a certain amount of recorded speech with a corresponding text script, which is converted to a customized text-to-speech voice that the user may then use in an application or as other content. Example scripts for creating speech with voice personas are shown in the window 360
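A non-limiting sketch of the tag semantics just described (a persona tag applies until another tag is encountered or the text ends) follows; the concrete tag syntax, <persona:Name>, is assumed only for this example.

```python
# Sketch of the tag semantics described above: a persona tag applies to the
# text that follows it until another tag or the end of input. The concrete
# tag syntax (<persona:Name>) is an assumption for illustration.
import re

TAG = re.compile(r"<persona:([^>]+)>")


def split_by_persona(tagged_text, default_persona="default"):
    """Return a list of (persona_name, text) segments."""
    segments = []
    persona = default_persona
    pos = 0
    for match in TAG.finditer(tagged_text):
        chunk = tagged_text[pos:match.start()].strip()
        if chunk:
            segments.append((persona, chunk))
        persona = match.group(1)        # new tag takes effect from here on
        pos = match.end()
    tail = tagged_text[pos:].strip()
    if tail:
        segments.append((persona, tail))
    return segments


print(split_by_persona(
    "<persona:GrandpaJoe>Once upon a time... <persona:Robot>Beep beep."))
# [('GrandpaJoe', 'Once upon a time...'), ('Robot', 'Beep beep.')]
```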
After a user creates a new voice persona, the new voice persona is only accessible to the creator unless the creator decides to share it with others. Through the voice persona management interface 232, users can edit, group, delete, and share private voice personas. A user can also search for voice personas by their properties, such as all female voice personas, voice personas for teenagers or old men, and so forth.
The user can tune the morphing parameters in the tuning panel 460 of
In one current example implementation of a voice persona platform, two different text-to-speech engines are installed. One is a unit selection-based system in which a sequence of waveform segments is selected from a large speech database by optimizing a cost function. These segments are then concatenated one-by-one to form a new utterance. The other is an HMM-based system in which context-dependent phone HMMs have been pre-trained from a speech corpus. In the run-time system, trajectories of spectral parameters and prosodic features are first generated with constraints from statistical models and are then converted to a speech waveform.
In a unit-selection based text-to-speech system, the naturalness of synthetic speech depends to a great extent on the goodness of the cost function as well as on the quality of the unit inventory. Normally, the cost function contains two components, a target cost, which estimates the difference between a database unit and a target unit, and a concatenation cost, which measures the mismatch across the joint boundary of consecutive units. The total cost of a sequence of speech units is the sum of the target costs and the concatenation costs.
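As a minimal, non-limiting sketch, the total cost may be written as follows, assuming target and concatenation cost functions are provided; the toy cost functions shown are placeholders.

```python
# Minimal illustration of the total cost described above, assuming
# target_cost(unit) and concat_cost(prev_unit, unit) are given.
def total_cost(units, target_cost, concat_cost):
    """Sum of target costs of all units plus concatenation costs of all joins."""
    cost = sum(target_cost(u) for u in units)
    cost += sum(concat_cost(a, b) for a, b in zip(units, units[1:]))
    return cost


# Toy usage with stand-in cost functions.
print(total_cost(["sil", "h", "eh", "l", "ow"],
                 target_cost=lambda u: 1.0,
                 concat_cost=lambda a, b: 0.0 if a == b else 0.5))
```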
Acoustic measures, such as Mel Frequency Cepstrum Coefficients (MFCC), f0, power and duration, may be used to measure the distance between two units of the same phonetic type. Units of the same phone are clustered by their acoustic similarity. The target cost for using a database unit in the given context is defined as the distance of the unit to its cluster center, i.e., the cluster center is believed to represent the target values of the acoustic features in that context. With such a definition of target cost, there is an implicit assumption, namely that for any given text, there always exists a best acoustic realization in speech. However, this is not true of human speech; even under highly restricted conditions, e.g., when the same speaker reads the same set of sentences under the same instructions, rather large variations are still observed in the phrasing of sentences as well as in the forming of f0 contours. Therefore, in the unit-selection based text-to-speech system, no f0 and duration targets are predicted for a given text. Instead, contextual features (such as word position within a phrase, syllable position within a word, Part-of-Speech (POS) of a word, and so forth) that have been used to predict f0 and duration targets in other studies are used in calculating the target cost directly. The implicit assumption for this cost function is that speech units spoken in similar contexts are prosodically equivalent to one another in unit selection, provided there is a suitable description of the context.
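A non-limiting sketch of this target cost definition follows, assuming each unit is described by a small acoustic feature vector; the feature layout (an MFCC value, f0 and duration) is assumed only for this example.

```python
# Sketch of the target cost just described: units of the same phone are
# clustered by acoustic similarity, and a candidate's target cost is its
# distance to the center of the cluster matching the context.
import numpy as np


def cluster_center(unit_features):
    """Mean acoustic feature vector of the units in one context cluster."""
    return np.mean(unit_features, axis=0)


def target_cost(candidate, center):
    """Euclidean distance of a candidate unit to its cluster center."""
    return float(np.linalg.norm(candidate - center))


# Toy cluster of three /a/ units: [mean MFCC-1, f0 (Hz), duration (s)].
cluster = np.array([[12.0, 210.0, 0.09],
                    [11.5, 220.0, 0.10],
                    [12.5, 215.0, 0.08]])
center = cluster_center(cluster)
print(target_cost(np.array([12.2, 230.0, 0.09]), center))
```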
Because in this unit-selection based speech system units are always joined at phone boundaries, which are areas of rapid change in spectral features, the distance between spectral features at the two sides of the joint boundary is not an optimal measure of the goodness of concatenation. Instead, a rather simple concatenation cost is used, in which the continuity of splicing two segments is quantized into four levels: 1) continuous: the two tokens are continuous segments in the unit inventory, and the concatenation cost is set to 0; 2) semi-continuous: the two tokens are not continuous in the unit inventory, but the discontinuity at their boundary is often not perceptible, as in the splicing of two voiceless segments (such as /s/+/t/), so a small cost is assigned; 3) weakly discontinuous: the discontinuity across the concatenation boundary is often perceptible yet not very strong, as in the splicing between a voiced segment and an unvoiced segment (such as /s/+/a:/) or vice versa, so a moderate cost is used; 4) strongly discontinuous: the discontinuity across the splicing boundary is perceptible and annoying, as in the splicing between two voiced segments, so a large cost is assigned. Types 1) and 2) are preferred in concatenation, with the fourth type avoided as much as possible.
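The four-level scheme may be sketched, for illustration only, as follows; the numeric cost values are arbitrary placeholders chosen only to preserve the ordering described above.

```python
# Sketch of the four-level concatenation cost just described. The numeric
# cost values are arbitrary placeholders; only their ordering matters.
CONTINUOUS = 0.0              # consecutive segments in the unit inventory
SEMI_CONTINUOUS = 0.1         # e.g. voiceless + voiceless splices such as /s/+/t/
WEAKLY_DISCONTINUOUS = 0.5    # e.g. voiceless/voiced splices such as /s/+/a:/
STRONGLY_DISCONTINUOUS = 2.0  # e.g. splices between two voiced segments


def concat_cost(prev_unit, unit):
    """Quantized concatenation cost between two candidate units."""
    if unit.get("inventory_index") == prev_unit.get("inventory_index", -2) + 1:
        return CONTINUOUS
    if not prev_unit["voiced"] and not unit["voiced"]:
        return SEMI_CONTINUOUS
    if prev_unit["voiced"] != unit["voiced"]:
        return WEAKLY_DISCONTINUOUS
    return STRONGLY_DISCONTINUOUS


s = {"phone": "s", "voiced": False, "inventory_index": 10}
t = {"phone": "t", "voiced": False, "inventory_index": 42}
print(concat_cost(s, t))   # 0.1: a semi-continuous, voiceless splice
```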
With respect to the unit inventory, a goal of unit selection is to find a sequence of speech units that minimizes the overall cost. High-quality speech will be generated only when the cost of the selected unit sequence is low enough. In other words, only when the unit inventory is sufficiently large can a good enough unit sequence always be found for a given text; otherwise, natural sounding speech will not result. Therefore, a high-quality unit inventory is needed for unit-selection based text-to-speech systems.
The process of collecting and annotating a speech corpus often requires human intervention such as manual checking or labeling. Creating a high-quality text-to-speech voice is not an easy task even for a professional team, which is why most state-of-the-art unit selection systems provide only a few voices. A uniform paradigm for creating multi-lingual text-to-speech voice databases, with a focus on technologies that reduce the complexity and manual workload of the task, has been proposed. With such a platform, adding new voices to a unit-selection based text-to-speech system becomes relatively easier. Many voices have been created from carefully designed and collected speech corpora (greater than ten hours of speech) as well as from some available audio resources such as audio books in the public domain. Further, several personalized voices have been built from small office recordings, such as a few hundred carefully designed sentences read and recorded. Large footprint voices sound rather natural in most situations, while the small footprint ones sound acceptable only in specific domains.
One advantage of the unit selection-based approach is that all voices can reproduce the main characteristics of the original speakers, in both timbre and speaking style. The disadvantages of such systems are that sentences containing unseen contexts sometimes have discontinuity problems, and that these systems have less flexibility in changing speakers, speaking styles or emotions. The discontinuity problem becomes more severe when the unit inventory is small.
To achieve more flexibility in text-to-speech systems, an HMM-based approach may be used, in which speech waveforms are represented by a source-filter model. Excitation parameters and spectral parameters are modeled by context-dependent HMMs. The training process is similar to that in speech recognition; however, a main difference is in the description of context. In speech recognition, normally only the phones immediately before and after the current phone are considered. In speech synthesis, however, any context feature that has been used in unit selection systems can be used. Further, a set of state duration models is trained to capture the temporal structure of speech. To handle problems due to a scarcity of data, a decision tree-based clustering method is applied to tie context-dependent HMMs. During synthesis, a given text is first converted to a sequence of context-dependent units in the same way as in a unit-selection system. Then, a sentence HMM is constructed by concatenating context-dependent unit models. Next, a sequence of speech parameters, including both spectral parameters and prosodic parameters, is generated by maximizing the output probability for the sentence HMM. Finally, these parameters are converted to a speech waveform through a source-filter synthesis model. Mel-cepstral coefficients may be used to represent the speech spectrum; in one system, Line Spectrum Pair (LSP) coefficients are used.
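The synthesis steps just listed may be sketched at a very high level as follows; every helper is a placeholder stub standing in for real components (text analysis, trained context-dependent HMMs, parameter generation and a source-filter vocoder), so only the overall flow is meaningful.

```python
# High-level sketch of the HMM-based synthesis pipeline described above.
# Every helper here is a placeholder stub standing in for real components.


def text_to_context_units(text):
    # Placeholder: real systems produce rich context-dependent unit labels.
    return [f"{phone}+ctx" for phone in text.lower().split()]


def build_sentence_hmm(units, unit_models):
    # Concatenate the context-dependent unit models into one sentence model.
    return [unit_models.get(u, {"mean_f0": 120.0, "mean_dur": 0.08}) for u in units]


def generate_parameters(sentence_hmm):
    # Placeholder for maximum-output-probability parameter generation:
    # here we simply read off each state's mean values.
    return [(m["mean_f0"], m["mean_dur"]) for m in sentence_hmm]


def source_filter_synthesis(parameters):
    # Placeholder vocoder: report total duration instead of producing audio.
    return sum(dur for _f0, dur in parameters)


units = text_to_context_units("h eh l ow")
hmm = build_sentence_hmm(units, unit_models={})
params = generate_parameters(hmm)
print(f"{source_filter_synthesis(params):.2f} seconds of (placeholder) speech")
```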
The requirements for designing, collecting and labeling a speech corpus for training an HMM-based voice are similar to those for a unit-selection voice, except that the HMM voice can be trained from a relatively small corpus yet still maintain reasonably good quality. Therefore, the speech corpora used by the unit-selection system are also used to train HMM voices.
Speech generated with the HMM system is normally stable and smooth. The parametric representation of speech provides reasonable flexibility in modifying the speech. However, like other vocoded speech, speech generated from the HMM system often sounds buzzy. Thus, in some circumstances, unit selection is a better approach than HMM, while HMM is better in other circumstances. By providing both engines in the platform 200, users can decide what is better for a given circumstance.
Three voice-morphing algorithms 2221-222j are also represented in
Sinusoidal-model based morphing achieves flexible pitch and spectrum modifications in a unit-selection based text-to-speech system. Thus, one such morphing algorithm operates on the speech waveform generated by the text-to-speech system. Internally, the speech waveforms are converted into parameters through a Discrete Fourier Transform. To avoid the difficulties in voiced/unvoiced detection and pitch tracking, a uniform sinusoidal representation of speech, shown in Eq. (1), is adopted:

s_i(n) = Σ_{l=1}^{L_i} A_l cos(ω_l n + θ_l)   (1)

where A_l, ω_l and θ_l are the amplitude, frequency and phase of the l-th sinusoidal component of the speech signal s_i(n), and L_i is the number of components considered. These parameters can be modified separately.
For pitch scaling, the central frequencies of the components are scaled up or down simultaneously by the same factor. Amplitudes of the new components are sampled from the spectral envelope formed by interpolating the A_l values. Phases are kept as before. For formant position adjustment, the spectral envelope formed by interpolating the A_l values is stretched or compressed toward the high-frequency end or the low-frequency end by a uniform factor. With this method, the formant frequencies are increased or decreased together, without adjusting individual formant locations. In the morphing algorithm, the phases of the sinusoidal components can be set to random values to achieve whispered or hoarse speech. The amplitudes of even or odd components may be attenuated to achieve some special effects.
Proper combination of the modifications of different parameters will generate the desired style and speaker morphing targets set forth in the above example. For example, scaling up the pitch by a factor of 1.2 to 1.5 and stretching the spectral envelope by a factor of 1.05 to 1.2 causes a male voice to sound like a female voice. Scaling down the pitch and setting random phases for all components provides a hoarse voice.
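A non-limiting sketch of the pitch scaling and amplitude resampling described above is shown below using NumPy on a single toy frame of harmonics; it is illustrative only and is not the platform's morphing implementation.

```python
# Sketch of sinusoidal-model pitch scaling as described above. A frame is
# represented by component frequencies, amplitudes and phases; pitch scaling
# moves the frequencies and resamples amplitudes from the spectral envelope
# obtained by interpolating the original amplitudes.
import numpy as np


def pitch_scale_frame(freqs, amps, phases, factor):
    new_freqs = freqs * factor
    # Spectral envelope by linear interpolation of the original amplitudes;
    # amplitudes of the shifted components are sampled from this envelope.
    new_amps = np.interp(new_freqs, freqs, amps, left=amps[0], right=0.0)
    return new_freqs, new_amps, phases  # phases kept as before


def synthesize_frame(freqs, amps, phases, n=400, sr=16000):
    t = np.arange(n) / sr
    return sum(a * np.cos(2 * np.pi * f * t + p)
               for f, a, p in zip(freqs, amps, phases))


# Toy harmonic frame at 120 Hz, scaled up by 1.35 (male-to-female range).
freqs = np.array([120.0 * k for k in range(1, 6)])
amps = np.array([1.0, 0.6, 0.4, 0.25, 0.1])
phases = np.zeros(5)
frame = synthesize_frame(*pitch_scale_frame(freqs, amps, phases, 1.35))
print(frame.shape, float(frame.max()))
```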
With respect to source-filter model based morphing, because in the HMM-based system speech has been decomposed into excitation and spectral parameters, pitch scaling and formant adjustment are easy to achieve by directly adjusting the frequency of the excitation or the spectral parameters. Random phase and even/odd component attenuation are not supported in this algorithm. Most morphing targets in style morphing and speaker morphing can be achieved with this algorithm.
A key idea of phonetic transition is to synthesize closely-related dialects with the standard voice by mapping the phonetic transcription in the standard language to that in the target dialect. This approach is valid only when the target dialect shares a similar phonetic system with the standard language.
A rule-based mapping algorithm has been built to synthesize Ji'nan, Xi'an and Luoyang dialects in China with a Mandarin Chinese voice. It contains two parts, one for phone mapping, and the other for tone mapping. In an on-line system, the phonetic transition module is added after the text and prosody analysis. After the unit string in Mandarin is converted to a unit string representing the target dialect, the same unit selection is used to generate speech with the Mandarin unit inventory.
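Such a rule-based mapping may be sketched, for illustration only, as a pair of lookup tables applied to a string of (phone, tone) units; the entries below are made-up placeholders, not actual Ji'nan, Xi'an or Luoyang rules.

```python
# Sketch of a rule-based phonetic transition of the kind described above:
# the Mandarin unit string is rewritten by a phone map and a tone map before
# unit selection proceeds with the Mandarin unit inventory. The mapping
# entries are hypothetical placeholders.
PHONE_MAP = {"zh": "z", "sh": "s"}      # hypothetical phone substitutions
TONE_MAP = {"1": "3", "4": "2"}         # hypothetical tone substitutions


def to_dialect(units):
    """Map (phone, tone) units in Mandarin to the target dialect."""
    return [(PHONE_MAP.get(phone, phone), TONE_MAP.get(tone, tone))
            for phone, tone in units]


mandarin = [("zh", "1"), ("ang", "4")]
print(to_dialect(mandarin))   # [('z', '3'), ('ang', '2')]
```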
By way of summary,
Step 504 represents retrieving the base voice from the data store of base voices, or retrieving a custom voice from the data store of collected voice personas. Note that security checks and the like may be performed at this time to ensure that private voices can only be accessed by authorized users.
Step 506 represents modifying the retrieved voice as necessary based on the parameter data. For example, a user may provide new text to a custom voice or a base voice, may provide parameters to modify a base voice via morphing effects, and so forth as generally described above. Step 508 represents saving the changes; note that saving can be skipped unless and until changes are made, and further, the user can exit without saving changes, although such logic is omitted from
Steps 510 and 512 represent the user editing the parameters, such as by using sliders, buttons and so forth to modify settings and select effects and/or a dialect, such as in the example edit interface of
Step 518 represents the user completing the creation, selection and/or editing processes, with step 520 representing the service outputting the waveform over some channel, such as a .wav file downloaded to the user over the Internet, e.g., for directly or indirectly embedding into a software program. Again, note that step 518 may correspond to a "cancel" type of operation in which the user does not save the name card or have any waveform output thereto, although such logic is omitted from
In this manner, there is provided a voice persona service that makes text-to-speech easily understood and accessible for virtually any user, whereby users may embed speech content into software programs, including web applications. Moreover, via the service platform, the voice persona-centric architecture allows users to access, customize, and exchange voice personas.
The invention is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to: personal computers, server computers, hand-held or laptop devices, tablet devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, and so forth, which perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in local and/or remote computer storage media including memory storage devices.
With reference to
The computer 610 typically includes a variety of computer-readable media. Computer-readable media can be any available media that can be accessed by the computer 610 and includes both volatile and nonvolatile media, and removable and non-removable media. By way of example, and not limitation, computer-readable media may comprise computer storage media and communication media. Computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by the computer 610. Communication media typically embodies computer-readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media. The term "modulated data signal" means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media includes wired media such as a wired network or direct-wired connection, and wireless media such as acoustic, RF, infrared and other wireless media. Combinations of any of the above should also be included within the scope of computer-readable media.
The system memory 630 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 631 and random access memory (RAM) 632. A basic input/output system 633 (BIOS), containing the basic routines that help to transfer information between elements within computer 610, such as during start-up, is typically stored in ROM 631. RAM 632 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 620. By way of example, and not limitation,
The computer 610 may also include other removable/non-removable, volatile/nonvolatile computer storage media. By way of example only,
The drives and their associated computer storage media, described above and illustrated in
The computer 610 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 680. The remote computer 680 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 610, although only a memory storage device 681 has been illustrated in
When used in a LAN networking environment, the computer 610 is connected to the LAN 671 through a network interface or adapter 670. When used in a WAN networking environment, the computer 610 typically includes a modem 672 or other means for establishing communications over the WAN 673, such as the Internet. The modem 672, which may be internal or external, may be connected to the system bus 621 via the user input interface 660 or other appropriate mechanism. A wireless networking component 674 such as comprising an interface and antenna may be coupled through a suitable device such as an access point or peer computer to a WAN or LAN. In a networked environment, program modules depicted relative to the computer 610, or portions thereof, may be stored in the remote memory storage device. By way of example, and not limitation,
An auxiliary subsystem 699 (e.g., for auxiliary display of content) may be connected via the user interface 660 to allow data such as program content, system status and event notifications to be provided to the user, even if the main portions of the computer system are in a low power state. The auxiliary subsystem 699 may be connected to the modem 672 and/or network interface 670 to allow communication between these systems while the main processing unit 620 is in a low power state.
While the invention is susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the invention to the specific forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the invention.