The present disclosure relates to the field of speech synthesis, and in particular, to a training method for a speech synthesis model, a speech synthesis method and apparatus, and a device.
Speech synthesis refers to synthesizing, according to speech recorded by a user for a part of texts, speech of other texts that matches a timbre of the user.
In the related art, one multi-user acoustic model and one vocoder are usually pre-trained. The acoustic model is configured to convert a text into a spectral feature that matches a timbre of a user, and the vocoder is configured to convert the spectral feature into a speech signal. An encoder in the acoustic model is configured to model text information, and a decoder in the acoustic model is configured to model acoustic information. By using a recording of a target user, information of the target user may be introduced into an input of the encoder, thereby performing fine-tuning on the acoustic model, and then obtaining a spectral feature that matches a timbre of the target user and corresponds to the text. Then, the speech signal may be synthesized through the vocoder based on an upsampling structure, thereby obtaining synthesized speech that matches the timbre of the target user and corresponds to the text. For example, the acoustic model is a Fastspeech model, and the vocoder is a high-fidelity generative adversarial network (HifiGAN).
When speech synthesis is performed through the foregoing model, computing complexity is high due to the large quantity of model parameters. In a scenario with low computing resources, such as synthesizing speech on a terminal, a large amount of computing resources may be consumed, making it difficult to deploy the model.
The present disclosure provides a training method for a speech synthesis model, a speech synthesis method and apparatus, and a device, to reduce computing resources consumed by the model and implement deployment of the model in a device with low computing resources. The technical solutions are as follows.
According to an aspect of the present disclosure, a training method for a speech synthesis model is provided, performed by a computer device, the method including: obtaining a sample phoneme of a target user and a timbre identifier of the target user, the sample phoneme being determined based on a sample text corresponding to sample speech of the target user, and the timbre identifier being configured to identify a timbre of the target user; inputting the sample phoneme into a first submodel of the speech synthesis model, to obtain a predicted phonetic posteriorgram of the sample phoneme, the predicted phonetic posteriorgram being configured to reflect features of phonemes in the sample phoneme and pronunciation duration features of phonemes in the sample phoneme; inputting the predicted phonetic posteriorgram and the timbre identifier into a second submodel of the speech synthesis model, to obtain predicted speech corresponding to the sample text and the timbre identifier, the second submodel being configured to predict the predicted phonetic posteriorgram and a predicted intermediate variable of the predicted speech, and the predicted intermediate variable being configured to reflect a frequency domain feature of the predicted speech; training the first submodel according to the predicted phonetic posteriorgram; and training the second submodel according to the predicted speech and the predicted intermediate variable.
According to another aspect of the present disclosure, a speech synthesis method is provided, performed by a computer device, the method including: obtaining a target phoneme of a target user and a timbre identifier of the target user, the target phoneme being determined based on a target text, and the timbre identifier being configured to identify a timbre of the target user; inputting the target phoneme into a first submodel of a speech synthesis model, to obtain a target phonetic posteriorgram of the target phoneme, the target phonetic posteriorgram reflecting features of phonemes in the target phoneme and pronunciation duration features of phonemes in the target phoneme; and inputting the target phonetic posteriorgram and the timbre identifier into a second submodel of the speech synthesis model, to obtain target speech corresponding to the target text and the timbre identifier, the second submodel being configured to predict the target phonetic posteriorgram and a predicted intermediate variable of the target speech, and the predicted intermediate variable reflecting a frequency domain feature of the target speech.
According to another aspect of the present disclosure, a training apparatus for a speech synthesis model is provided, including: an obtaining module, configured to obtain a sample phoneme of a target user and a timbre identifier of the target user, the sample phoneme being determined based on a sample text corresponding to sample speech of the target user, and the timbre identifier being configured to identify a timbre of the target user; an input/output module, configured to input the sample phoneme into a first submodel of the speech synthesis model, to obtain a predicted phonetic posteriorgram of the sample phoneme, the predicted phonetic posteriorgram being configured to reflect features of phonemes in the sample phoneme and pronunciation duration features of phonemes in the sample phoneme; the input/output module, being further configured to input the predicted phonetic posteriorgram and the timbre identifier into a second submodel of the speech synthesis model, to obtain predicted speech corresponding to the sample text and the timbre identifier, the second submodel being configured to predict the predicted phonetic posteriorgram and a predicted intermediate variable of the predicted speech, and the predicted intermediate variable being configured to reflect a frequency domain feature of the predicted speech; and a training module, configured to train the first submodel according to the predicted phonetic posteriorgram, and train the second submodel according to the predicted speech and the predicted intermediate variable.
According to another aspect of the present disclosure, a speech synthesis apparatus is provided, the apparatus including the speech synthesis model obtained by training through the apparatus according to the foregoing aspect, and the apparatus including: an obtaining module, configured to obtain a target phoneme of a target user and a timbre identifier of the target user, the target phoneme being determined based on a target text, and the timbre identifier being configured to identify a timbre of the target user; an input/output module, configured to input the target phoneme into a first submodel of the speech synthesis model, to obtain a phonetic posteriorgram of the target phoneme; and the input/output module, being further configured to input the phonetic posteriorgram and the timbre identifier into a second submodel of the speech synthesis model, to obtain target speech corresponding to the target text and the timbre identifier.
According to another aspect of the present disclosure, a computer device is provided, including at least one processor and at least one memory, the at least one memory storing at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by the at least one processor to implement the training method for a speech synthesis model or the speech synthesis method according to the foregoing aspect.
According to another aspect of the present disclosure, a non-transitory computer-readable storage medium is provided, storing at least one instruction, at least one program, a code set, or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by a processor to implement the training method for a speech synthesis model or the speech synthesis method according to the foregoing aspect.
Beneficial effects of the technical solutions that are provided in the present disclosure are at least as follows:
By training the speech synthesis model, the speech synthesis model may generate target speech that matches a timbre of a target user according to a timbre identifier of the target user and a target text. A process of synthesizing the target speech is implemented by predicting an intermediate variable through a predicted phonetic posteriorgram. Because the phonetic posteriorgram includes less information than a spectral feature, fewer model parameters are required by the predicted phonetic posteriorgram, which may reduce parameters of the model, thereby reducing the computing resources consumed by the model, and implementing the deployment of the model in the device with low computing resources.
To make the objectives, technical solutions, and advantages of the present disclosure clearer, the following further describes implementations of the present disclosure in detail with reference to the accompanying drawings.
First, related terms involved in embodiments of the present disclosure are introduced as follows:
Artificial intelligence (AI) is a theory, method, technology, and application system that uses a digital computer or a machine controlled by the digital computer to simulate, extend, and expand human intelligence, perceive an environment, obtain knowledge, and use knowledge to obtain an optimal result. In other words, AI is a comprehensive technology of computer sciences that attempts to understand the essence of intelligence and produce a new intelligent machine that can react in a manner similar to human intelligence. AI is to study the design principles and implementation methods of various intelligent machines, to enable the machines to have the functions of perception, reasoning, and decision-making.
The AI technology is a comprehensive discipline, covering a wide range of fields including both a hardware-level technology and a software-level technology. The basic AI technology generally includes a technology such as a sensor, a dedicated AI chip, cloud computing, distributed storage, a big data processing technology, an operation/interaction system, or mechatronics. An AI software technology mainly includes fields such as a computer vision (CV) technology, a speech processing technology, a natural language processing technology, and machine learning/deep learning (DL).
Key technologies of the speech technology include an automatic speech recognition (ASR) technology, a text to speech (TTS) technology, and a voiceprint recognition technology. To make a computer capable of listening, seeing, speaking, and feeling is the future development direction of man-machine interaction, and speech has become one of the most promising man-machine interaction methods in the future.
A phoneme is a minimum phonetic unit obtained through division according to a natural attribute of speech; analysis is performed according to pronunciation actions in a syllable, and one action forms one phoneme. Phonemes may be divided into two categories: vowels and consonants. For example, a Chinese syllable ah (ā) has only one phoneme, and love (ài) has two phonemes.
A computer device obtains sample speech of a target user, a timbre identifier of the target user, a sample text corresponding to the sample speech, and a style identifier, determines a sample phoneme that makes up the sample text according to the sample text, determines a real pronunciation duration feature of each phoneme in the sample phoneme according to the sample speech, and determines a real phonetic posteriorgram (PPG) of the sample phoneme according to the sample speech. Then, the sample phoneme and the style identifier are input into the first submodel 101, and encoding is performed on the sample phoneme according to the style identifier through a text encoder 1011, to obtain a hidden layer feature of the sample phoneme corresponding to the style identifier. In addition, the hidden layer feature of the sample phoneme is predicted through a duration predictor 1012, to obtain a predicted pronunciation duration feature corresponding to each phoneme in the sample phoneme corresponding to the style identifier. In addition, the hidden layer feature of the sample phoneme is predicted through a fundamental frequency predictor 1013, to obtain a fundamental frequency feature corresponding to the style identifier and the sample phoneme. Different style identifiers have corresponding model parameters. Then, the computer device performs frame expansion processing on the hidden layer feature of the sample phoneme through a duration regulation device 1014 according to a real pronunciation duration feature corresponding to each phoneme in the sample phoneme, and performs convolution processing on the hidden layer feature of the sample phoneme obtained after frame expansion through a post-processing network 1015, thereby obtaining a predicted phonetic posteriorgram of the sample phoneme. The predicted phonetic posteriorgram is configured to reflect a feature of each phoneme in the sample phoneme and a pronunciation duration feature of each phoneme in the sample phoneme. When training the first submodel 101, the computer device calculates a loss function between the predicted phonetic posteriorgram and the real phonetic posteriorgram, and trains the first submodel. In addition, the computer device calculates a loss function between the predicted pronunciation duration feature and the real pronunciation duration feature, and trains the first submodel.
Then, the computer device inputs the predicted phonetic posteriorgram, the timbre identifier, and the fundamental frequency feature into the second submodel 102, and samples an average value and a variance of prior distribution of a predicted intermediate variable through a phonetic posterior encoder 1021 (also referred to as a PPG encoder) in a prior encoder, to obtain the predicted intermediate variable. The predicted intermediate variable is an intermediate variable in a process of synthesizing predicted speech through the predicted phonetic posteriorgram, and the predicted intermediate variable is configured to reflect a frequency domain feature of the predicted speech. The computer device further inputs the sample speech (linear spectrum) into a posterior predictor 1024 of a posterior encoder, and samples an average value and a variance of posterior distribution of a real intermediate variable, thereby obtaining the real intermediate variable. In addition, the computer device performs affine coupling processing on the real intermediate variable through a regularized flow layer 1022 of the prior encoder, thereby obtaining a processed real intermediate variable. Then, the computer device calculates a relative entropy loss (also referred to as a Kullback-Leibler (KL) divergence loss) between the predicted intermediate variable and the processed real intermediate variable, to train the second submodel. In addition, the prior encoder further includes a phonetic posterior predictor 1023 (PPG predictor), and the phonetic posterior predictor 1023 is configured for predicting, in a process of pre-training the second submodel, a predicted phonetic posteriorgram in the pre-training process according to a predicted intermediate variable in the pre-training process, so that the computer device may calculate, in the process of pre-training the second submodel, a loss function between the predicted phonetic posteriorgram in the pre-training process and the real phonetic posteriorgram in the pre-training process, thereby performing pre-training on the second submodel. After obtaining the predicted intermediate variable, the computer device performs inverse Fourier transform on the predicted intermediate variable according to the timbre identifier through an inverse Fourier transform decoder 1025 of the decoder, thereby obtaining predicted speech. Then, the computer device calculates a Mel spectrum loss between the predicted speech and the sample speech, and trains the second submodel. In addition, in the model structure shown in
The computer device obtains a target phoneme of a target user, a timbre identifier of the target user, and a target style identifier. The target phoneme is determined according to a target text. By inputting the target phoneme and the target style identifier into the first submodel 101, the computer device may obtain a phonetic posteriorgram of the target phoneme corresponding to the target style identifier and a target fundamental frequency feature corresponding to the target style identifier and the target phoneme. Then, the computer device inputs the phonetic posteriorgram, the timbre identifier, and the target fundamental frequency feature into the second submodel 102, thereby obtaining the target speech corresponding to the target text, the timbre identifier, and the target style identifier. The target text determines pronunciation content of the target speech, the timbre identifier determines a timbre of the target speech, and the target style identifier determines a pronunciation style of the target speech, including pronunciation duration and a fundamental frequency of each phoneme.
In the model training stage, frame expansion processing is performed, through the pronunciation duration feature, on the fundamental frequency feature input by the computer device into the second submodel 102. In the speech synthesis stage, the pronunciation duration feature used by the computer device for frame expansion in the first submodel 101 is predicted by the duration predictor 1012. In addition, in the speech synthesis stage, the regularized flow layer 1022 in the second submodel 102 performs reverse flow transformation (opposite to a data flow direction during training) on an intermediate variable output by the phonetic posterior encoder 1021, and inputs the intermediate variable into the decoder for processing, thereby obtaining the target speech. In addition, the phonetic posterior predictor 1023 in the second submodel 102 does not participate in the speech synthesis process.
By training the speech synthesis model, the speech synthesis model may generate target speech that matches a timbre of a target user according to a timbre identifier of the target user and a target text. A process of synthesizing the target speech is implemented by predicting an intermediate variable through a predicted phonetic posteriorgram and through inverse Fourier transform. Because the phonetic posteriorgram includes less information than a spectral feature, fewer model parameters are required by the predicted phonetic posteriorgram, and the inverse Fourier transform requires fewer model parameters than upsampling, which may reduce parameters of the model, thereby reducing computing resources consumed by the model, and implementing deployment of the model in a device with low computing resources.
Step 202: Obtain a sample phoneme of a target user and a timbre identifier of the target user.
The target user is a user who needs to perform speech synthesis. A speech synthesis model obtained by training by the computer device may synthesize speech that matches a timbre of the target user and whose content is a target text.
In an embodiment, the computer device further obtains a sample text corresponding to sample speech of the target user, and the sample text includes text content corresponding to the sample speech. The sample speech and the sample text may support different types of languages, and this is not limited in embodiments of the present disclosure. For example, the sample speech is obtained by recording pronunciation of a small quantity of texts of the target user. The sample phoneme is determined based on the sample text corresponding to the sample speech of the target user.
The timbre identifier of the target user is configured to identify the timbre of the target user. When training the speech synthesis model, the timbre identifier may be configured for establishing a corresponding relationship between the model parameter learned by the model and the timbre identifier. Therefore, by inputting the timbre identifier into the model when synthesizing the speech, speech that conforms to the timbre corresponding to the timbre identifier (a model parameter corresponding to the timbre identifier) may be synthesized.
Step 204: Input the sample phoneme into a first submodel of a speech synthesis model, to obtain a predicted phonetic posteriorgram of the sample phoneme.
The predicted phonetic posteriorgram is configured to reflect a feature of each phoneme in the sample phoneme and a pronunciation duration feature of each phoneme in the sample phoneme. The “phonetic posteriorgram” may also be referred to as “PPG”.
In an embodiment, the computer device extracts a hidden layer feature of the sample phoneme through the first submodel, and predicts the pronunciation duration feature of each phoneme in the sample phoneme through the first submodel or obtains a real pronunciation duration feature of each phoneme in the sample phoneme according to the sample speech, thereby obtaining the pronunciation duration feature of each phoneme in the sample phoneme. Then, according to the hidden layer feature of the sample phoneme and the pronunciation duration feature of each phoneme in the sample phoneme, the computer device may determine the predicted phonetic posteriorgram of the sample phoneme. In an embodiment, the determining process is implemented by the computer device by performing frame expansion on the hidden layer feature of the sample phoneme according to the pronunciation duration feature of each phoneme in the sample phoneme.
Step 206: Input the predicted phonetic posteriorgram and the timbre identifier into a second submodel of the speech synthesis model, to obtain predicted speech corresponding to the sample text and the timbre identifier.
Content of the predicted speech is the sample text, and the timbre of the predicted speech is the timbre corresponding to the timbre identifier, namely, the timbre of the target user identified by the timbre identifier. The second submodel predicts the predicted phonetic posteriorgram and the predicted intermediate variable of the predicted speech, and obtains the predicted speech from the predicted intermediate variable. The predicted intermediate variable is configured to reflect a frequency domain feature of the predicted speech. The predicted intermediate variable is an intermediate variable predicted by the second submodel in a process of determining the predicted speech. The predicted intermediate variable may also be referred to as a predicted latent variable.
In one embodiment, the predicted speech may be obtained based on inverse Fourier transform. In other words, the second submodel of the speech synthesis model obtains the predicted speech based on the inverse Fourier transform by predicting the predicted phonetic posteriorgram and the predicted intermediate variable of the predicted speech.
Step 208: Train the first submodel according to the predicted phonetic posteriorgram, and train the second submodel according to the predicted speech and a predicted intermediate variable.
The computer device determines a real phonetic posteriorgram according to the sample speech, and may perform training on the first submodel by calculating a loss function between the predicted phonetic posteriorgram and the real phonetic posteriorgram. The computer device may perform training on the second submodel by calculating a loss function between the sample speech and the predicted speech.
In an embodiment, the loss function between the sample speech and the predicted speech refers to a Mel spectrum loss between the sample speech and the predicted speech. The computer device may determine the loss function to train the second submodel by converting the predicted speech and the sample speech into Mel spectra respectively and calculating an L1 norm distance between the Mel spectra of the predicted speech and the sample speech.
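As an illustration, the following is a minimal sketch of such a Mel spectrum reconstruction loss in PyTorch; the analysis parameters (sampling rate, FFT size, hop length, number of Mel bins) are assumptions, since the disclosure does not specify them.

```python
import torch
import torchaudio

# Assumed analysis parameters; the disclosure does not specify them.
mel_transform = torchaudio.transforms.MelSpectrogram(
    sample_rate=22050, n_fft=1024, hop_length=256, n_mels=80
)

def mel_l1_loss(predicted_speech: torch.Tensor, sample_speech: torch.Tensor) -> torch.Tensor:
    """L1 distance between the Mel spectra of the predicted speech and the sample speech."""
    pred_mel = mel_transform(predicted_speech)
    real_mel = mel_transform(sample_speech)
    return torch.nn.functional.l1_loss(pred_mel, real_mel)
```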
The computer device further obtains the sample speech and the real intermediate variable corresponding to the predicted phonetic posteriorgram. In an embodiment, the real intermediate variable is determined according to the sample speech. The computer device calculates a loss function between the predicted intermediate variable and the real intermediate variable, to train the second submodel. In an embodiment, the loss function refers to a relative entropy loss (KL divergence loss).
The trained speech synthesis model is obtained by pre-training. Pre-training is training performed on the speech synthesis model by using pre-training data, and may refer to the foregoing training process. An objective of the foregoing training process of the speech synthesis model is to learn the frequency domain feature of pronunciation of the target user (which may be referred to as timbre cloning fine-tuning training), that is, to learn the timbre of the target user, and establish a corresponding relationship between the learned model parameter and the timbre identifier. After completing the training of the speech synthesis model, the speech synthesis model may synthesize target speech that matches the timbre (input timbre identifier) of the target user and whose content is the target text (different from the sample text).
The speech synthesis model obtained by training by the method provided in this embodiment may be deployed and run in a device with low computing resources. The device with low computing resources includes a user terminal. The user terminal includes but is not limited to a mobile phone, a computer, an intelligent speech interaction device, a smart home appliance, a vehicle-mounted terminal, an aircraft, and the like.
In summary, in the method provided in this embodiment, by training the speech synthesis model, the speech synthesis model may generate target speech that matches the timbre of the target user according to the timbre identifier of the target user and the target text. A process of synthesizing the target speech is implemented by predicting an intermediate variable through a predicted phonetic posteriorgram and through inverse Fourier transform. Because the phonetic posteriorgram includes less information than a spectral feature, fewer model parameters are required by the predicted phonetic posteriorgram, and the inverse Fourier transform requires fewer model parameters than upsampling, which may reduce parameters of the model, thereby reducing computing resources consumed by the model, and implementing deployment of the model in a device with low computing resources.
Step 302: Obtain a sample phoneme of a target user and a timbre identifier of the target user.
The target user is a user who needs to perform speech synthesis, and the target user is determined by a user who trains or uses the speech synthesis model. In an embodiment, the computer device further obtains a sample text corresponding to sample speech of the target user, the sample text includes text content corresponding to the sample speech, and the sample phoneme is determined based on the sample text corresponding to the sample speech of the target user.
The timbre identifier of the target user is configured to identify information of the target user. When training the speech synthesis model, the timbre identifier may be configured for establishing a corresponding relationship between the model parameter learned by the model and the timbre identifier. Therefore, by inputting the timbre identifier into the model when synthesizing the speech, speech that conforms to the timbre (a timbre of the target user) corresponding to the timbre identifier may be synthesized.
Step 304: Perform encoding on the sample phoneme through the text encoder of the first submodel of the speech synthesis model, to obtain a hidden layer feature of the sample phoneme.
The computer device inputs the sample phoneme into a first submodel of the speech synthesis model, to obtain a predicted phonetic posteriorgram of the sample phoneme. The predicted phonetic posteriorgram is configured to reflect a feature of each phoneme in the sample phoneme and a pronunciation duration feature of each phoneme in the sample phoneme.
The first submodel may be referred to as a Text2PPG model, which is configured for converting an input phoneme sequence into a language feature that includes more pronunciation information. The first submodel includes a text encoder, a duration regulation device, and a post-processing network. The computer device performs encoding on the sample phoneme through the text encoder, to obtain the hidden layer feature of the sample phoneme.
In an embodiment, the text encoder uses a feed-forward transformer (FFT) structure. Each FFT module includes a multi-head self-attention module and a convolution module. After the self-attention module and the convolution module, a residual connection and a layer normalization structure are also added, thereby improving stability and performance of the structure.
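A minimal sketch of one such FFT block is given below. The hidden dimension, number of heads, and kernel size are assumptions, and standard multi-head attention is used here only as a placeholder; the linear attention variant described next is sketched after the formula below.

```python
import torch
import torch.nn as nn

class FFTBlock(nn.Module):
    """One feed-forward transformer block: self-attention and 1-D convolution,
    each followed by a residual connection and layer normalization."""

    def __init__(self, hidden_dim: int = 192, num_heads: int = 2, kernel_size: int = 3):
        super().__init__()
        self.attention = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(hidden_dim)
        self.conv = nn.Sequential(
            nn.Conv1d(hidden_dim, hidden_dim * 4, kernel_size, padding=kernel_size // 2),
            nn.ReLU(),
            nn.Conv1d(hidden_dim * 4, hidden_dim, kernel_size, padding=kernel_size // 2),
        )
        self.norm2 = nn.LayerNorm(hidden_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, time, hidden_dim)
        attn_out, _ = self.attention(x, x, x)
        x = self.norm1(x + attn_out)                      # residual + layer norm
        conv_out = self.conv(x.transpose(1, 2)).transpose(1, 2)
        return self.norm2(x + conv_out)                   # residual + layer norm
```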
For example, the attention in each FFT module uses a linear attention mechanism:

Attention(Q, K, V) = ϕ(Q)(ϕ(K)^T·V) / (ϕ(Q)(ϕ(K)^T·1)), where ϕ(x) = ELU(x) + 1,

where Q, K, and V each represent a hidden layer representation sequence of the sample phoneme. Because the inner product of Q and K needs to be positive for the output probability to be meaningful, an exponential linear unit (ELU) function is used in the linear attention; this allows the matrix multiplication of ϕ(K)^T and V to be performed first, so that the computing complexity is O(N). In addition, sample user or acoustic related information is not considered in the first submodel, so that the output is only related to the input phoneme sequence. Compared with dot-product attention, the linear attention mechanism may reduce computing complexity while ensuring the attention effect.
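Following the formula above, a minimal sketch of the ELU-based linear attention; the tensor layout and the small epsilon added for numerical stability are assumptions.

```python
import torch
import torch.nn.functional as F

def linear_attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                     eps: float = 1e-6) -> torch.Tensor:
    """Linear attention with phi(x) = ELU(x) + 1, computed in O(N).

    q, k, v: (batch, time, dim) hidden-layer representations of the phoneme sequence.
    """
    q = F.elu(q) + 1.0          # phi(Q), strictly positive
    k = F.elu(k) + 1.0          # phi(K), strictly positive
    kv = torch.einsum("btd,bte->bde", k, v)                  # phi(K)^T V, computed first
    normalizer = torch.einsum("btd,bd->bt", q, k.sum(dim=1)) + eps
    return torch.einsum("btd,bde->bte", q, kv) / normalizer.unsqueeze(-1)
```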
Step 306: Perform frame expansion processing on the hidden layer feature of the sample phoneme through a duration regulation device of the first submodel of the speech synthesis model.
Because the phonetic posteriorgram may reflect complete pronunciation information, and the phonetic posteriorgram includes information of the pronunciation duration of each phoneme, the pronunciation duration feature corresponding to each phoneme in the sample phoneme needs to be determined, and frame expansion needs to be performed on the hidden layer feature of the sample phoneme.
The computer device may further obtain a real pronunciation duration feature corresponding to each phoneme in the sample phoneme. For example, the computer device obtains sample speech, and by performing analysis processing on the sample speech, may obtain the real pronunciation duration feature corresponding to each phoneme in the sample phoneme. For example, through the pre-trained duration model, the real pronunciation duration feature corresponding to each phoneme in the sample phoneme may be determined according to the sample speech. The computer device may perform frame expansion processing on the hidden layer feature of the sample phoneme through the duration regulation device according to the real pronunciation duration feature corresponding to each phoneme in the sample phoneme.
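A minimal sketch of the frame expansion (length regulation) step, assuming the pronunciation duration feature is given as an integer number of frames per phoneme.

```python
import torch

def expand_frames(hidden: torch.Tensor, durations: torch.Tensor) -> torch.Tensor:
    """Repeat each phoneme's hidden feature by its duration in frames.

    hidden:    (num_phonemes, hidden_dim) hidden-layer features of the sample phoneme.
    durations: (num_phonemes,) integer frame counts per phoneme.
    Returns:   (total_frames, hidden_dim), where total_frames = durations.sum().
    """
    return torch.repeat_interleave(hidden, durations, dim=0)

# Example: three phonemes expanded to 2 + 4 + 3 = 9 frames.
hidden = torch.randn(3, 192)
durations = torch.tensor([2, 4, 3])
expanded = expand_frames(hidden, durations)  # shape (9, 192)
```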
Step 308: Perform convolution processing on the hidden layer feature of the sample phoneme obtained after frame expansion through the post-processing network of the first submodel of the speech synthesis model, to obtain the predicted phonetic posteriorgram of the sample phoneme.
After frame expansion processing is performed on the hidden layer feature of the sample phoneme, the information is input into the post-processing network, and the post-processing network performs convolution processing on the input information, to perform smooth processing on the input information. Because the hidden layer feature after frame expansion already includes a phoneme feature and a pronunciation duration feature, the predicted phonetic posteriorgram of the sample phoneme may be obtained.
Step 310: Train the first submodel according to the predicted phonetic posteriorgram.
The computer device further obtains a real phonetic posteriorgram of the sample phoneme. For example, by inputting the sample speech into a speech recognition model, the real phonetic posteriorgram of the sample phoneme may be obtained. The computer device trains the first submodel by calculating a loss function between the predicted phonetic posteriorgram and the real phonetic posteriorgram. For example, the loss function is an L2 norm loss.
In an embodiment, because a real pronunciation duration feature cannot be obtained when synthesizing speech, the first submodel further includes a duration predictor. When training the first submodel, the computer device predicts the hidden layer feature of the sample phoneme through the duration predictor, thereby obtaining the predicted pronunciation duration feature corresponding to each phoneme in the sample phoneme. Then, the computer device calculates a loss function between the predicted pronunciation duration feature and the real pronunciation duration feature, and trains the first submodel. For example, the loss function is an L2 norm loss.
After the training is completed, the real pronunciation duration feature configured for being input into the duration regulation device is replaced with the predicted pronunciation duration feature obtained by the duration predictor.
In addition, because a recording of a sample user is often rapid and emotionless, different styles of speech need to be synthesized for different speech synthesis scenarios. The style includes the pronunciation duration (duration of a pronunciation pause) of each phoneme in the speech and a fundamental frequency (a change of the fundamental frequency). In embodiments of the present disclosure, the speech style is identified by a style identifier. The computer device may control the model to generate predicted phonetic posteriorgram adapted to different scenarios by using different style identifiers. In an embodiment, in this solution, the computer device further obtains the style identifier, and performs encoding on the sample phoneme through the text encoder according to the style identifier, to obtain the hidden layer feature of the sample phoneme corresponding to the style identifier.
Different style identifiers have corresponding model parameters. In other words, the input style identifiers are different, and the model parameters of the model are also different. This affects the hidden layer feature obtained by performing encoding on the sample phoneme by the first submodel. After inputting different style identifiers, the hidden layer feature of the sample phoneme in the styles corresponding to different style identifiers may be obtained. The influence of the style identifier on the hidden layer feature affects a subsequent input of the model, for example, affects the pronunciation duration feature (outputs the predicted pronunciation duration feature corresponding to the sample phoneme and the style identifier) predicted by the duration predictor and affects the fundamental frequency feature (outputs the fundamental frequency feature corresponding to the style identifier and the sample phoneme) predicted by the fundamental frequency predictor. In an embodiment, in a process of pre-training the first submodel, the computer device performs training by using pre-training data of different styles and corresponding style identifiers, so that the first submodel may learn model parameters corresponding to different styles.
In an embodiment, the first submodel further includes a fundamental frequency predictor. The computer device predicts the hidden layer feature of the sample phoneme through the fundamental frequency predictor, to obtain the fundamental frequency feature corresponding to the style identifier and the sample phoneme. The fundamental frequency feature is configured for being input into the second submodel to obtain the predicted speech corresponding to the style identifier, that is, to obtain the predicted speech of the style corresponding to the style identifier. Because the style of the speech is related to the fundamental frequency, and adding fundamental frequency prediction may improve a prediction effect of the predicted phonetic posteriorgram, in an embodiment, the computer device further splices the fundamental frequency feature obtained by predicting by the fundamental frequency predictor onto the output of the text encoder.
For example, for a structure of the first submodel, refer to the example in
Step 312: Input the predicted phonetic posteriorgram and the timbre identifier into the prior encoder of the second submodel, to obtain the predicted intermediate variable.
The computer device inputs the predicted phonetic posteriorgram and the timbre identifier into a second submodel of the speech synthesis model, to obtain predicted speech corresponding to the sample text and the timbre identifier. The second submodel obtains the predicted speech based on the inverse Fourier transform by predicting the predicted phonetic posteriorgram and the predicted intermediate variable of the predicted speech. The predicted intermediate variable is configured to reflect the frequency domain feature of the predicted speech. Because the speech synthesis model uses the phonetic posteriorgram as a language feature, which already provides phonetic duration information, the second submodel does not need to consider information of modeling pronunciation duration.
The second submodel includes a prior encoder and a decoder. The computer device inputs the predicted phonetic posteriorgram and the timbre identifier into the prior encoder of the second submodel, to obtain the predicted intermediate variable.
In an embodiment, the prior encoder includes a phonetic posterior encoder (PPG encoder). The computer device inputs the predicted phonetic posteriorgram and the timbre identifier into the phonetic posterior encoder, and samples an average value and a variance of prior distribution p(z|c) of a predicted intermediate variable z through the phonetic posterior encoder, using condition information c including the predicted phonetic posteriorgram and the timbre identifier as a condition, to obtain the predicted intermediate variable. In an embodiment, the phonetic posterior encoder is based on an FFT structure, and uses a linear attention mechanism. For a specific structure, refer to the example in
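A minimal sketch of how the predicted intermediate variable could be drawn from the average value and variance produced by the phonetic posterior encoder; the reparameterized sampling form and the log-standard-deviation parameterization are assumptions.

```python
import torch

def sample_latent(mean: torch.Tensor, log_std: torch.Tensor) -> torch.Tensor:
    """Sample z from N(mean, std^2) with the reparameterization trick.

    mean, log_std: (batch, latent_dim, frames) produced by the PPG encoder from
    the predicted phonetic posteriorgram and the timbre identifier.
    """
    eps = torch.randn_like(mean)
    return mean + torch.exp(log_std) * eps
```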
In an embodiment, when the first submodel outputs the fundamental frequency feature corresponding to the sample phoneme and the style identifier, the computer device further obtains the fundamental frequency feature corresponding to the sample phoneme and the style identifier. The fundamental frequency feature is obtained by performing feature extraction on the sample phoneme through the first submodel based on the style identifier. The computer device inputs the predicted phonetic posteriorgram, the timbre identifier, and the fundamental frequency feature into the prior encoder, to obtain the predicted intermediate variable corresponding to the style identifier. In other words, the predicted intermediate variable has a style corresponding to the style identifier, and predicted speech having the style corresponding to the style identifier may be synthesized through the predicted intermediate variable.
Step 314: Perform inverse Fourier transform on the predicted intermediate variable through the decoder of the second submodel, to obtain the predicted speech.
In an embodiment, the decoder includes an inverse Fourier transform decoder. The computer device performs inverse Fourier transform on the predicted intermediate variable through the inverse Fourier transform decoder, to obtain the predicted speech. In an embodiment, in actual application, if the style identifier of the target user is not input into the decoder, a similarity (the timbre similarity between the sample speech and the predicted speech) of timbre cloning of the target user is significantly reduced. Therefore, the computer device inputs the predicted intermediate variable and the style identifier into the inverse Fourier transform decoder, and performs inverse Fourier transform on the predicted intermediate variable through the inverse Fourier transform decoder according to the style identifier, to obtain the predicted speech.
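A minimal sketch of an inverse Fourier transform decoder of this kind: a stack of one-dimensional convolutions predicts a magnitude and phase spectrum from the intermediate variable, and an inverse short-time Fourier transform reconstructs the waveform. Channel sizes, layer count, and STFT parameters are assumptions, and conditioning on the timbre identifier is omitted for brevity.

```python
import torch
import torch.nn as nn

class ISTFTDecoder(nn.Module):
    """1-D convolutions followed by an inverse short-time Fourier transform."""

    def __init__(self, latent_dim: int = 192, n_fft: int = 1024, hop_length: int = 256):
        super().__init__()
        self.n_fft, self.hop_length = n_fft, hop_length
        freq_bins = n_fft // 2 + 1
        self.convs = nn.Sequential(
            nn.Conv1d(latent_dim, 512, kernel_size=7, padding=3),
            nn.LeakyReLU(0.1),
            nn.Conv1d(512, 512, kernel_size=7, padding=3),
            nn.LeakyReLU(0.1),
            nn.Conv1d(512, freq_bins * 2, kernel_size=7, padding=3),  # magnitude + phase
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:  # z: (batch, latent_dim, frames)
        mag, phase = self.convs(z).chunk(2, dim=1)
        spec = torch.polar(torch.exp(mag), phase)          # complex spectrum
        return torch.istft(spec, n_fft=self.n_fft, hop_length=self.hop_length,
                           window=torch.hann_window(self.n_fft, device=z.device))
```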
In an embodiment, the inverse Fourier transform decoder includes a plurality of one-dimensional convolutional layers, and the last one-dimensional convolutional layer is connected to an inverse Fourier transform layer. For example,
In an embodiment, the inverse Fourier transform decoder may also be replaced by a combination of an upsampling structure and an inverse Fourier transform structure. For example,
To better restore the medium-high frequency features of speech, and because harmonics are mainly concentrated in the low-frequency part, the low-frequency part may be restored through the upsampling structure and the medium-high frequency part may be modeled through the inverse Fourier transform layer. Then, the speech may be divided into a plurality of sub-bands through a pseudo-quadrature mirror filter bank. A first sub-band is modeled by the upsampling structure, and the rest is generated by the structure shown in
Step 316: Train the second submodel according to the predicted speech and the predicted intermediate variable.
The computer device further obtains sample speech, and calculates a mel spectrum loss between the predicted speech and the sample speech, to train the second submodel.
In an embodiment, the second submodel further includes a posterior encoder. The computer device obtains the sample speech, and inputs the sample speech (linear spectrum) into the posterior encoder, to obtain a real intermediate variable between the sample speech and the predicted phonetic posteriorgram. The computer device trains the second submodel by calculating a relative entropy loss (KL divergence loss) between the predicted intermediate variable and the real intermediate variable.
In an embodiment, the posterior encoder includes a posterior predictor. The computer device inputs the sample speech into the posterior predictor, and samples the average value and the variance of the posterior distribution p(z|y) of the real intermediate variable, to obtain the real intermediate variable, where y represents the sample speech.
In an embodiment, the prior encoder further includes a regularized flow layer. The computer device performs affine coupling processing on the real intermediate variable through the regularized flow layer, thereby obtaining the processed real intermediate variable. When calculating the KL divergence loss, the computer device calculates the relative entropy loss between the predicted intermediate variable and the processed real intermediate variable, to train the second submodel. The regularized flow may transform an intermediate variable z into a more complex distribution, and the KL divergence loss is configured for making the distribution of the real intermediate variable consistent with the distribution of the predicted intermediate variable.
The regularized flow layer includes a plurality of affine coupling layers, and each affine coupling layer is configured for performing affine coupling processing on the real intermediate variable. Due to use of multi-layer processing, the regularized flow layer has a large quantity of parameters. In embodiments of the present disclosure, model complexity is reduced by causing different affine coupling layers to share model parameters. For example,
During training of the speech synthesis model, an intermediate variable z is converted into f(z) through a flow (regularized flow layer). In an inference process (speech synthesis), an output of the phonetic posterior encoder is transformed by a reverse flow to obtain the intermediate variable z. For example, an expression of the intermediate variable z is as follows:

z = fθ^(-1)(zp), where zp ~ N(μθ, σθ),

where fθ represents the flow transformation, μθ represents an average value, and σθ represents a variance.
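A minimal sketch of a regularized flow built from affine coupling layers that share one set of parameters across coupling steps, with a forward direction for training and a reverse direction for inference; layer sizes and the channel-flip between steps are assumptions.

```python
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    """One affine coupling layer: half of the channels predict a scale and shift
    that are applied to the other half."""

    def __init__(self, channels: int = 192, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(channels // 2, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, channels, kernel_size=5, padding=2),  # outputs scale and shift
        )

    def forward(self, x: torch.Tensor, reverse: bool = False) -> torch.Tensor:
        xa, xb = x.chunk(2, dim=1)
        log_scale, shift = self.net(xa).chunk(2, dim=1)
        if not reverse:                       # training direction: z -> f(z)
            xb = xb * torch.exp(log_scale) + shift
        else:                                 # inference direction: reverse flow
            xb = (xb - shift) * torch.exp(-log_scale)
        return torch.cat([xa, xb], dim=1)

class SharedFlow(nn.Module):
    """Several coupling steps reusing one AffineCoupling module, so the flow
    depth increases without increasing the parameter count."""

    def __init__(self, num_steps: int = 4, channels: int = 192):
        super().__init__()
        self.coupling = AffineCoupling(channels)   # shared across all steps
        self.num_steps = num_steps

    def forward(self, z: torch.Tensor, reverse: bool = False) -> torch.Tensor:
        for _ in range(self.num_steps):
            if not reverse:
                z = self.coupling(z)
                z = z.flip(dims=[1])           # swap halves so both get coupled
            else:                              # invert each step in the opposite order
                z = z.flip(dims=[1])
                z = self.coupling(z, reverse=True)
        return z
```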
In the absence of a clear constraint on the intermediate variable, the output predicted speech is prone to pronunciation errors such as mispronunciation and abnormal intonation. The method provided in embodiments of the present disclosure provides a pronunciation constraint by introducing a phonetic posterior predictor (PPG predictor). In an embodiment, the prior encoder further includes a phonetic posterior predictor, and the phonetic posterior predictor is configured to predict, in a process of pre-training the second submodel, a predicted phonetic posteriorgram in the pre-training process according to a predicted intermediate variable in the pre-training process. In the pre-training process of the second submodel, a loss function between the predicted phonetic posteriorgram in the pre-training process and a real phonetic posteriorgram in the pre-training process is calculated, and the second submodel is trained. In an embodiment, the loss function is an L1 norm loss. For example, an expression of the loss function is as follows:
Lppg = ∥PPG1 − PPG2∥1,

where PPG1 represents a predicted phonetic posteriorgram in a pre-training process, and PPG2 represents a real phonetic posteriorgram in the pre-training process.
The phonetic posterior predictor is only trained during pre-training of the speech synthesis model and is frozen during timbre cloning fine-tuning training.
In summary, the loss function in a process of training the second submodel may be classified as a conditional variational autoencoder (CVAE) loss, and the loss function is as follows:

Lcvae = Lkl + λrecon*Lrecon + λppg*Lppg,

where Lkl represents the KL divergence loss between the predicted intermediate variable and the real intermediate variable, Lrecon represents an L1 loss between the Mel spectra of the sample speech and the predicted speech, and Lppg represents the loss of the PPG predictor. λrecon and λppg are weight parameters.
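A minimal sketch of the Gaussian KL term and the weighted CVAE combination; the weight values are assumptions, since the disclosure does not specify them.

```python
import torch

def gaussian_kl(mu_q, logs_q, mu_p, logs_p):
    """KL divergence between the posterior N(mu_q, exp(logs_q)^2) of the real
    intermediate variable and the prior N(mu_p, exp(logs_p)^2) of the predicted
    intermediate variable (per element, then averaged)."""
    kl = logs_p - logs_q - 0.5
    kl += 0.5 * (torch.exp(2 * logs_q) + (mu_q - mu_p) ** 2) * torch.exp(-2 * logs_p)
    return kl.mean()

def cvae_loss(kl_loss, recon_loss, ppg_loss, lambda_recon=45.0, lambda_ppg=1.0):
    """L_cvae = L_kl + lambda_recon * L_recon + lambda_ppg * L_ppg.
    The weight values used here are assumed, not taken from the disclosure."""
    return kl_loss + lambda_recon * recon_loss + lambda_ppg * ppg_loss
```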
In an embodiment, the computer device may also input the real intermediate variable into the decoder, to obtain the predicted speech corresponding to the real intermediate variable, and then calculate the L1 loss of the Mel spectrum according to the predicted speech and the sample speech. In this case, the predicted intermediate variable obtained by the computer device is mainly configured for determining the KL divergence loss between the predicted intermediate variable and the real intermediate variable to train the second submodel.
In an embodiment, the decoder further includes a discriminator, and the discriminator forms a generative adversarial network (GAN) with parts of the speech synthesis model other than the discriminator. The computer device inputs the predicted speech into the discriminator, to obtain a discrimination result of the predicted speech, where the discrimination result is configured to reflect whether the predicted speech is real information or predicted information. A generative adversarial loss is determined according to the discrimination result and the real source of the predicted speech, and the generative adversarial network is trained.
In an embodiment, the discriminator includes a multi-scale spectral discriminator. The multi-scale spectral discriminator is particularly effective for the inverse Fourier transform decoder, and has a significant gain in high-frequency harmonic reconstruction. The computer device inputs the predicted speech into the multi-scale spectral discriminator, performs short-time Fourier transform on the predicted speech on an amplitude spectrum through a plurality of sub-discriminators of the multi-scale spectral discriminator, and performs two-dimensional convolution processing through a plurality of two-dimensional convolutional layers, to obtain a discrimination result of the predicted speech. Each sub-discriminator has different short-time Fourier transform parameters.
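A minimal sketch of such a multi-scale spectral discriminator operating on amplitude spectra at several STFT resolutions; the resolutions and convolution channel sizes are assumptions.

```python
import torch
import torch.nn as nn

class SpectralSubDiscriminator(nn.Module):
    """Operates on the amplitude spectrum at one STFT resolution."""

    def __init__(self, n_fft: int, hop_length: int):
        super().__init__()
        self.n_fft, self.hop_length = n_fft, hop_length
        self.convs = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 32, kernel_size=3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, wav: torch.Tensor) -> torch.Tensor:  # wav: (batch, samples)
        spec = torch.stft(wav, n_fft=self.n_fft, hop_length=self.hop_length,
                          window=torch.hann_window(self.n_fft, device=wav.device),
                          return_complex=True).abs()        # amplitude spectrum
        return self.convs(spec.unsqueeze(1))                 # add channel dimension

class MultiScaleSpectralDiscriminator(nn.Module):
    """Several sub-discriminators, each with different STFT parameters."""

    def __init__(self):
        super().__init__()
        self.subs = nn.ModuleList(
            SpectralSubDiscriminator(n_fft, n_fft // 4) for n_fft in (512, 1024, 2048)
        )

    def forward(self, wav: torch.Tensor):
        return [sub(wav) for sub in self.subs]
```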
For example,
In an embodiment, the discriminator includes a multi-scale complex spectral discriminator. The multi-scale complex spectral discriminator models a relationship between the real part and the imaginary part of the speech signal, which helps improve phase accuracy of the discrimination. The computer device inputs the predicted speech into the multi-scale complex spectral discriminator, performs short-time Fourier transform on the predicted speech on a complex spectrum through a plurality of sub-discriminators of the multi-scale complex spectral discriminator, and performs two-dimensional complex convolution processing through a plurality of two-dimensional complex convolutional layers, to obtain a discrimination result of the predicted speech. Each sub-discriminator has different short-time Fourier transform parameters. The multi-scale complex spectral discriminator divides a signal into a real part and an imaginary part at a plurality of scales through short-time Fourier transform, and then performs two-dimensional complex convolution on the input. The method has a good effect in the complex domain.
In summary, a loss function in a process of training the second submodel includes a CVAE loss and a GAN loss. An overall loss function is as follows:

LG = Ladv(G) + λfm*Lfm(G) + Lcvae; and

LD = Ladv(D),

where Ladv(G) is a loss of a generator in the GAN, and Ladv(D) is a loss of a discriminator in the GAN. Lfm(G) is a feature matching loss of the generator, and specifically refers to a loss between the outputs obtained by inputting real data and sample data into each network layer in the generator. λfm is a weight parameter.
For example, for a structure of the second submodel, refer to the example in
In summary, in the method provided in this embodiment, by training the speech synthesis model, the speech synthesis model may generate target speech that matches the timbre of the target user according to the timbre identifier of the target user and the target text. A process of synthesizing the target speech is implemented by predicting an intermediate variable through a predicted phonetic posteriorgram and through inverse Fourier transform. Because the phonetic posteriorgram includes less information than a spectral feature, fewer model parameters are required by the predicted phonetic posteriorgram, and the inverse Fourier transform requires fewer model parameters than upsampling, which may reduce parameters of the model, thereby reducing computing resources consumed by the model, and implementing deployment of the model in a device with low computing resources.
In the method provided in this embodiment, through the inverse Fourier transform decoder, the sharing of model parameters among different affine coupling layers of the regularized flow layer, the use of a linear attention mechanism, and the extraction of the phonetic posteriorgram instead of the spectral feature, a quantity of model parameters and computing complexity are effectively reduced.
In a two-stage model (Fastspeech) in the related art, because the model is divided into an acoustic model and a vocoder, there are errors between the acoustic model and the vocoder, which leads to a loss of synthesized speech quality. The problem is more pronounced when cloning a timbre from a small quantity of samples. However, a current end-to-end model (a model that inputs a text and directly outputs speech) still has problems of unstable speech generation and even mispronunciation, which seriously affects a listening experience. In the method provided in this embodiment, based on end-to-end modeling, by introducing a style identifier, generating the phonetic posteriorgram according to a duration predictor and a fundamental frequency predictor, constraining an intermediate variable through a phonetic posterior predictor, constraining distribution of the intermediate variable through a regularized flow, and performing training through a generative adversarial network, the foregoing problems are alleviated, and performance of speech synthesis is improved.
How to effectively reduce parameters and computing complexity is a direct challenge. In the related art, because acoustic information and text information are modeled together, directly reducing model parameters leads to a rapid decline in the model effect, and model compression algorithms such as distillation and quantization also cause a significant performance loss. In the method provided in this embodiment, a quantity of model parameters and computing complexity may be effectively reduced through the foregoing parameter-reduction measures, the model structure is constructed on an end-to-end basis, and structures that improve performance are used to improve speech synthesis performance of the model, which may simultaneously reduce model parameters and improve model performance.
In the related art, due to rhythm problems such as the speed of the user's recording, adaptability to an application scenario is poor. For example, in a navigation scenario, the user pursues a clear and well-articulated reading style, but because the model in the related art jointly models a timbre and content, the generated speech style is not suitable. In the method provided in this embodiment, the timbre and the content are modeled separately, and the style identifier and the fundamental frequency are introduced to perform feature extraction and speech synthesis, so that speech in a reading style adapted to different scenarios may be synthesized.
Step 902: Obtain a target phoneme of a target user and a timbre identifier of the target user.
The target user is a user who needs to perform speech synthesis. A computer device may train a speech synthesis model through a sample phoneme of the target user and the timbre identifier of the target user. The sample phoneme is determined based on a sample text corresponding to sample speech of the target user. The target phoneme is determined based on a target text. The target text is the same as, partially the same as, or different from the sample text, and the target text is determined by a user by using the speech synthesis model.
The timbre identifier of the target user is configured to identify information of the target user. When training the speech synthesis model, the timbre identifier may be configured for establishing a corresponding relationship between the model parameter learned by the model and the timbre identifier. Therefore, by inputting the timbre identifier into the model when synthesizing the speech, speech that conforms to the timbre corresponding to the timbre identifier may be synthesized.
Step 904: Input the target phoneme into a first submodel of the speech synthesis model, to obtain a phonetic posteriorgram of the target phoneme.
The speech synthesis model is deployed in the computer device or client, and the speech synthesis model is obtained by training through the training method for a speech synthesis model provided in embodiments of the present disclosure.
The phonetic posteriorgram is configured to reflect a feature of each phoneme in the target phoneme and a pronunciation duration feature of each phoneme in the target phoneme. In an embodiment, the computer device extracts a hidden layer feature of the target phoneme through the first submodel, predicts the pronunciation duration feature of each phoneme in the target phoneme through the duration predictor of the first submodel, and then may determine the predicted phonetic posteriorgram of the target phoneme according to the hidden layer feature of the target phoneme and the pronunciation duration feature of each phoneme in the target phoneme.
Step 906: Input the phonetic posteriorgram and the timbre identifier into a second submodel of the speech synthesis model, to obtain target speech corresponding to the target text and the timbre identifier.
Content of the target speech is the target text, and the timbre of the target speech is the timbre corresponding to the timbre identifier, namely, the timbre of the target user identified by the timbre identifier. The second submodel obtains the target speech based on the inverse Fourier transform of the intermediate variable by predicting the phonetic posteriorgram and the intermediate variable of the target speech. The intermediate variable is configured to reflect a frequency domain feature of the target speech.
In an embodiment, the computer device further obtains the target style identifier. The computer device inputs the target phoneme and the target style identifier into the first submodel of the speech synthesis model, and obtains the phonetic posteriorgram of the target phoneme corresponding to the target style identifier and the target fundamental frequency feature corresponding to the target style identifier and the target phoneme. Then, the computer device inputs the phonetic posteriorgram, the timbre identifier, and the target fundamental frequency feature into the second submodel of the speech synthesis model, to obtain the target speech corresponding to the target text, the timbre identifier, and the target fundamental frequency feature.
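Putting the two submodels together at inference time, a minimal usage sketch; the module names (first_submodel, second_submodel) and their call signatures are hypothetical placeholders for the components described above.

```python
import torch

@torch.no_grad()
def synthesize(first_submodel, second_submodel, target_phonemes: torch.Tensor,
               timbre_id: torch.Tensor, style_id: torch.Tensor) -> torch.Tensor:
    """Target phoneme + style identifier -> phonetic posteriorgram and fundamental
    frequency, then PPG + timbre identifier + F0 -> target speech waveform."""
    ppg, f0 = first_submodel(target_phonemes, style_id)   # first submodel (Text2PPG stage)
    target_speech = second_submodel(ppg, timbre_id, f0)   # second submodel (PPG-to-wave stage)
    return target_speech
```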
In summary, in the method provided in this embodiment, the speech synthesis model may generate target speech that matches a timbre of a target user according to a timbre identifier of the target user and a target text. The target speech is synthesized by predicting an intermediate variable from the phonetic posteriorgram and applying an inverse Fourier transform. Because the phonetic posteriorgram includes less information than a spectral feature, predicting the phonetic posteriorgram requires fewer model parameters, and the inverse Fourier transform requires fewer model parameters than upsampling. This reduces the parameters of the model, thereby reducing computing resources consumed by the model, and implementing deployment of the model in a device with low computing resources.
With the same model effect, the method provided in embodiments of the present disclosure greatly reduces the quantity of parameters and the calculation amount of the model. This helps reduce usage of resources, including computing resources and storage resources, and makes it more convenient to deploy the model in application scenarios such as on-device (end-side) deployment. In addition, the model reduces pronunciation errors compared with the related art, making the model more stable.
An experimental process and analysis are as follows:
An open data set is configured for performing pre-training on the model in embodiments of the present disclosure and the model in the related art. The open data set includes about 242 hours of speech from 1151 speakers. To evaluate performance of the model in a speaker timbre cloning task, the pre-trained model is fine-tuned by using a multi-speaker corpus with different acoustic conditions. In actual operation, 5 males and 5 females are randomly selected as target speakers to perform timbre cloning, and 20 sentences from each speaker are randomly selected for fine-tuning. In addition, 10 additional sentences are randomly selected from each speaker, to obtain a total test set of 100 sentences from 10 speakers.
The speech synthesis model provided in embodiments of the present disclosure is compared with VITS (variational inference with adversarial learning for end-to-end text-to-speech) and with Fastspeech+HiFiGAN, which also uses adversarial learning for speech synthesis. The VITS model uses the structure of the original paper. For Fastspeech+HiFiGAN, to compare cases under different quantities of parameters, two structures are used: a structure 1 referred to as Fastspeech+HiFiGAN v1, and a structure 2 referred to as Fastspeech+HiFiGAN v2. v1 uses the original Fastspeech structure and HiFiGAN v1. Compared with v1, v2 uses a smaller structure: the encoder and the decoder each use two FFT (feed-forward Transformer) blocks, the dimension of the hidden layer is set to 128, and HiFiGAN v2 is used.
In the subjective evaluation, each test sentence is rated by twenty listeners. Participants rate the naturalness of the sample and the similarity of the timbre to that of the speaker, with a maximum of 5 points and a minimum of 1 point. For objective indicators, computing complexity is measured by using giga floating-point operations per second (GFLOPS) as a unit. In addition, a word error rate (WER) of each system is measured, to test the stability of each model, especially in terms of pronunciation and intonation. Test results are as shown in Table 1:
According to Table 1, it may be learnt that compared with Fastspeech+HiFiGAN v2, which has a model size similar to that of the model in embodiments of the present disclosure, the model in embodiments of the present disclosure achieves better naturalness and lower computing complexity. Compared with Fastspeech+HiFiGAN v1, the model in embodiments of the present disclosure still has a gap in naturalness and speaker similarity, but achieves a better WER, which ensures the stability of the model.
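For reference, the WER indicator used above can be computed as the word-level edit distance between a reference transcript and a transcript of the synthesized speech, normalized by the reference length; the helper below is a standard implementation of that definition, not code from the disclosure.

```python
# Minimal word error rate (WER) computation via Levenshtein distance between a
# reference transcript and a recognizer's transcript of the synthesized speech.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)  # substitution, deletion, insertion
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("turn left at the next junction", "turn left at next junction"))  # 1 deletion / 6 words
```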
An order of the steps in the method provided in embodiments of the present disclosure may be properly adjusted, and a step may be correspondingly added or omitted as required. Variations readily figured out by a person skilled in the art within the technical scope disclosed in the present disclosure shall fall within the protection scope of the present disclosure, and details are not described again.
An example is used in which the method provided in embodiments of the present disclosure is applied to production of a speech packet of a user in an AI broadcast service, for example, production of a speech packet for map navigation. The computer device obtains a target phoneme of a target text of a target user, a timbre identifier of the target user, and a target style identifier. The target user is, for example, the user themselves or a celebrity, and the target text includes a text of speech broadcast in a map navigation scenario. The target style identifier identifies a style corresponding to the speech broadcast of map navigation, for example, slow pronunciation and an accurate fundamental frequency when reading phonemes. By inputting the target phoneme and the target style identifier into the first submodel, the computer device may obtain the phonetic posteriorgram of the target phoneme corresponding to the target style identifier and the target fundamental frequency feature corresponding to the target style identifier and the target phoneme. Then, the phonetic posteriorgram, the timbre identifier, and the target fundamental frequency feature are input into the second submodel, thereby obtaining the target speech corresponding to the target text, the timbre identifier, and the target style identifier. The target text determines the pronunciation content of the target speech, that is, the text of the speech broadcast in the map navigation scenario. The timbre identifier determines the timbre of the target speech, that is, the timbre of the target user selected by the user. The target style identifier determines a pronunciation style of the target speech, including the pronunciation duration and the fundamental frequency of each phoneme. The speech synthesis model is obtained by performing timbre cloning and fine-tuning based on the sample speech and the timbre identifier of the target user. The content of the sample speech is different from the content of the target text. For example, the sample speech is obtained by recording the target user reading a small quantity of texts, and the target text may include a large quantity of texts that are different from the small quantity of texts. In this way, speech for other texts that matches the timbre of the user is synthesized according to the speech recorded by the user for the small quantity of texts.
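A hypothetical sketch of this navigation speech-packet flow is shown below; `first_submodel`, `second_submodel`, and the toy grapheme-to-phoneme stub are placeholders standing in for the trained submodels and front end, which are not specified here.

```python
import torch

def g2p(text: str) -> torch.Tensor:
    # Stand-in grapheme-to-phoneme front end: map characters to toy phoneme IDs.
    return torch.tensor([[ord(c) % 100 for c in text if not c.isspace()]])

def synthesize(first_submodel, second_submodel, text, timbre_id, style_id):
    phonemes = g2p(text)
    # Style-conditioned PPG and fundamental frequency from the first submodel.
    ppg, f0 = first_submodel(phonemes, style_id)
    # Timbre- and F0-conditioned waveform from the second submodel.
    return second_submodel(ppg, timbre_id, f0)

# Example call once both submodels have been fine-tuned on the target user's recordings:
# wav = synthesize(first, second, "Turn left in 200 meters.", timbre_id=7, style_id=2)
```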
In one embodiment, the second submodel includes a prior encoder and a decoder; and the input/output module 1002 is configured to:
In one embodiment, the prior encoder includes a phonetic posterior encoder; and the input/output module 1002 is configured to:
In one embodiment, the second submodel further includes a posterior encoder.
The obtaining module 1001 is configured to:
In one embodiment, the posterior encoder includes a posterior predictor; and the input/output module 1002 is configured to:
In one embodiment, the prior encoder further includes a regularized flow layer; and the input/output module 1002 is configured to:
In one embodiment, the regularized flow layer includes a plurality of affine coupling layers, and each affine coupling layer is configured for performing affine coupling processing on the real intermediate variable; and different affine coupling layers share model parameters, and each affine coupling layer corresponds to an embedded layer identifier.
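One plausible realization of such parameter sharing, sketched below with illustrative sizes, applies the same coupling network at every flow step and distinguishes the steps only through an embedded layer identifier; a shift-only (additive) coupling is used here for brevity, and none of this is taken verbatim from the disclosure.

```python
import torch
import torch.nn as nn

class SharedAffineCouplingFlow(nn.Module):
    """Sketch of a flow whose coupling layers share one set of weights,
    distinguished only by an embedded layer identifier (assumed sizes)."""
    def __init__(self, channels=192, hidden=128, num_layers=4):
        super().__init__()
        self.num_layers = num_layers
        self.layer_embed = nn.Embedding(num_layers, hidden)   # embedded layer identifier
        half = channels // 2
        self.net = nn.Sequential(                              # shared across all layers
            nn.Conv1d(half + hidden, hidden, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(hidden, half, kernel_size=5, padding=2)) # predicts a shift for the second half

    def forward(self, z):
        # z: (B, channels, T) real intermediate variable
        for i in range(self.num_layers):
            za, zb = z.chunk(2, dim=1)
            layer_id = self.layer_embed(torch.tensor([i], device=z.device))    # (1, hidden)
            cond = layer_id.unsqueeze(-1).expand(za.size(0), -1, za.size(-1))  # (B, hidden, T)
            shift = self.net(torch.cat([za, cond], dim=1))
            z = torch.cat([zb + shift, za], dim=1)  # additive coupling; swap halves between layers
        return z

out = SharedAffineCouplingFlow()(torch.randn(2, 192, 50))
```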
In one embodiment, the prior encoder further includes a phonetic posterior predictor, and the phonetic posterior predictor is configured to predict, in a process of pre-training the second submodel, a predicted phonetic posteriorgram in the pre-training process according to a predicted intermediate variable in the pre-training process.
In one embodiment, the training module 1003 is configured to:
In one embodiment, the obtaining module 1001 is configured to:
In one embodiment, the decoder includes an inverse Fourier transform decoder; and the input/output module 1002 is configured to:
In one embodiment, the input/output module 1002 is configured to:
In one embodiment, the inverse Fourier transform decoder includes a plurality of one-dimensional convolutional layers, and the last one-dimensional convolutional layer is connected to an inverse Fourier transform layer.
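A minimal sketch of such a decoder is shown below, assuming a magnitude-and-phase parameterization of the spectrum and illustrative layer sizes; it is a sketch of the idea, not the disclosed implementation.

```python
import torch
import torch.nn as nn

class ISTFTDecoderSketch(nn.Module):
    """Sketch of an inverse-Fourier-transform decoder: stacked 1-D convolutions whose
    last layer feeds an inverse STFT layer. Parameterization and sizes are assumed."""
    def __init__(self, in_dim=192, hidden=512, n_fft=1024, hop=256):
        super().__init__()
        self.n_fft, self.hop = n_fft, hop
        n_bins = n_fft // 2 + 1
        self.convs = nn.Sequential(
            nn.Conv1d(in_dim, hidden, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(hidden, 2 * n_bins, kernel_size=7, padding=3))   # last conv feeds the iSTFT layer

    def forward(self, x):                      # x: (B, in_dim, T_frame)
        mag, phase = self.convs(x).chunk(2, dim=1)
        spec = torch.exp(mag) * torch.exp(1j * phase)   # complex spectrum from magnitude and phase
        window = torch.hann_window(self.n_fft, device=x.device)
        return torch.istft(spec, n_fft=self.n_fft, hop_length=self.hop, window=window)

wav = ISTFTDecoderSketch()(torch.randn(1, 192, 40))
```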
In one embodiment, the obtaining module 1001 is configured to:
In one embodiment, the decoder further includes a discriminator, and the discriminator forms a generative adversarial network with parts of the speech synthesis model other than the discriminator; and the input/output module 1002 is configured to:
In one embodiment, the discriminator includes a multi-scale spectral discriminator; and the input/output module 1002 is configured to:
In one embodiment, the discriminator includes a multi-scale complex spectral discriminator; and the input/output module 1002 is configured to:
In one embodiment, the first submodel includes a text encoder, a duration regulation device, and a post-processing network. The obtaining module 1001 is configured to:
In one embodiment, the obtaining module 1001 is configured to:
In one embodiment, the first submodel further includes a duration predictor, and the input/output module 1002 is configured to:
In one embodiment, the obtaining module 1001 is configured to:
In one embodiment, the first submodel further includes a fundamental frequency predictor; and the input/output module 1002 is configured to:
In one embodiment, the obtaining module 1101 is configured to:
The training apparatus for a speech synthesis model provided in the foregoing embodiments is illustrated with an example of division of the foregoing functional modules. In actual application, the foregoing functions may be assigned to and completed by different functional modules as required. That is, an internal structure of the device may be divided into different functional modules, to complete all or some of the functions described above. The term module (and other similar terms such as submodule, unit, subunit, etc.) in this disclosure may refer to a software module, a hardware module, or a combination thereof. A software module (e.g., a computer program) may be developed using a computer programming language. A hardware module may be implemented using processing circuitry and/or memory. Each module can be implemented using one or more processors (or processors and memory). Likewise, a processor (or processors and memory) can be used to implement one or more modules. In addition, the training apparatus for a speech synthesis model and the training method for a speech synthesis model provided in the foregoing embodiments are based on the same concept. For the specific implementation process, refer to the method embodiments; details are not described herein again.
Similarly, for the speech synthesis apparatus in the foregoing embodiments, only the division of the functional modules is illustrated. In actual application, the functions may be assigned to different functional modules for completion as required. In other words, an internal structure of the device is divided into different functional modules to complete all or a part of the functions described above. In addition, the speech synthesis apparatus and the speech synthesis method embodiments provided in the foregoing embodiments are based on the same concept. For the specific implementation process, refer to the method embodiments; details are not described herein again.
Embodiments of the present disclosure further provide a computer device, the computer device including a processor and a memory, the memory storing at least one instruction, at least one segment of program, a code set, or an instruction set, the at least one instruction, the at least one segment of program, the code set, or the instruction set being loaded and executed by the processor to implement the training method for a speech synthesis model or the speech synthesis method provided in the foregoing method embodiments.
In an embodiment, the computer device is a server. For example, the computer device may be implemented as the computer device 1200 described below.
The computer device 1200 includes a central processing unit (CPU) 1201, a system memory 1204 including a random access memory (RAM) 1202 and a read-only memory (ROM) 1203, and a system bus 1205 connecting the system memory 1204 to the CPU 1201. The computer device 1200 further includes a basic input/output system (I/O system) 1206 configured to transmit information between components in the computer device, and a mass storage device 1207 configured to store an operating system 1213, an application 1214, and another program module 1215.
The basic I/O system 1206 includes a display 1208 configured to display information and an input device 1209, such as a mouse or a keyboard, configured to allow a user to input information. The display 1208 and the input device 1209 are both connected to the CPU 1201 through an input/output controller 1210 connected to the system bus 1205. The basic I/O system 1206 may further include the input/output controller 1210 for receiving and processing inputs from a plurality of other devices such as a keyboard, a mouse, or an electronic stylus. Similarly, the input/output controller 1210 further provides an output to a display screen, a printer, or another type of output device.
The mass storage device 1207 is connected to the CPU 1201 by using a mass storage controller (not shown) connected to the system bus 1205. The mass storage device 1207 and an associated computer-readable storage medium thereof provide non-volatile storage for the computer device 1200. In other words, the mass storage device 1207 may include a computer-readable storage medium (not shown) such as a hard disk or a compact disc read-only memory (CD-ROM) drive.
Without loss of generality, the computer-readable storage medium may include a computer storage medium and a communication medium. The computer storage medium includes volatile and non-volatile media, and removable and non-removable media implemented by using any method or technology and configured to store information such as a computer-readable storage instruction, a data structure, a program module, or other data. The computer storage medium includes a RAM, a ROM, an erasable programmable ROM (EPROM), an electrically-erasable programmable ROM (EEPROM), a flash memory or another solid-state memory technology, a CD-ROM, a digital versatile disc (DVD) or another optical memory, a magnetic cassette, a magnetic tape, a magnetic disk memory, or another magnetic storage device. Certainly, a person skilled in the art may learn that the computer storage medium is not limited to the above. The system memory 1204 and the mass storage device 1207 may be collectively referred to as a memory.
The memory stores one or more programs, and the one or more programs are configured to be executed by one or more CPUs 1201. The one or more programs include instructions used for implementing the foregoing method embodiments, and the CPU 1201 executes the one or more programs to implement the method provided in the foregoing method embodiments.
According to the various embodiments of the present disclosure, the computer device 1200 may further be connected, through a network such as the Internet, to a remote computer device on the network for running. That is, the computer device 1200 may be connected to a network 1212 by using a network interface unit 1211 connected to the system bus 1205, or may be connected to another type of network or a remote computer device system (not shown) by using the network interface unit 1211.
The memory further includes one or more programs. The one or more programs are stored in the memory and include steps to be executed by the computer device in the method provided in embodiments of the present disclosure.
Embodiments of the present disclosure further provide a computer-readable storage medium, storing at least one instruction, at least one program, a code set or an instruction set, the at least one instruction, the at least one program, the code set, or the instruction set being loaded and executed by a processor of an electronic device, to implement the training method for a speech synthesis model or a speech synthesis method provided in the foregoing method embodiments.
The present disclosure further provides a computer program product, including a computer program, the computer program, when run on a computer device, causing the computer device to perform the training method for a speech synthesis model or the speech synthesis method.
A person of ordinary skill in the art may understand that all or some of the steps of the foregoing embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware. The program may be stored in a computer-readable storage medium. The storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.
The foregoing descriptions are merely example embodiments of the present disclosure, but are not intended to limit the present disclosure. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present disclosure shall fall within the protection scope of the present disclosure.
Foreign application priority data: Chinese Patent Application No. 202211121568.X, filed Sep. 15, 2022, CN (national).
This application is a continuation application of PCT Patent Application No. PCT/CN2023/108845, filed on Jul. 24, 2023, which claims priority to Chinese Patent Application No. 202211121568.X, filed with the China Intellectual Property Administration on Sep. 15, 2022 and entitled “TRAINING METHOD FOR SPEECH SYNTHESIS MODEL AND SPEECH SYNTHESIS METHOD AND RELATED APPARATUSES”, the entire contents of both of which are incorporated herein by reference.
Related U.S. application data: Parent application PCT/CN2023/108845, filed Jul. 24, 2023; child application 18421513 (US).