AUDIO GENERATOR AND METHODS FOR GENERATING AN AUDIO SIGNAL AND TRAINING AN AUDIO GENERATOR

Information

  • Patent Application
  • Publication Number: 20230282202
  • Date Filed: April 14, 2023
  • Date Published: September 07, 2023
Abstract
There are disclosed techniques for generating an audio signal and training an audio generator.
Description

In the following, different inventive embodiments and aspects will be described. Also, further embodiments will be defined by the enclosed claims. It should be noted that any embodiments as defined by the claims can be supplemented by any of the details (features and functionalities) described in this description.


Also, the embodiments described in this description can be used individually, and can also be supplemented by any of the features herein, or by any feature included in the claims.


Also, it should be noted that individual aspects described herein can be used individually or in combination. Thus, details can be added to each of said individual aspects without adding details to another one of said aspects.


It should also be noted that the present disclosure describes, explicitly or implicitly, features usable in an audio generator and/or a method and/or a computer program product. Thus, any of the features described herein can be used in the context of a device, a method, and/or a computer program product.


Moreover, features and functionalities disclosed herein relating to a method can also be used in a device (configured to perform such functionality). Furthermore, any features and functionalities disclosed herein with respect to a device can also be used in a corresponding method. In other words, the methods disclosed herein can be supplemented by any of the features and functionalities described with respect to the devices.


Also, any of the features and functionalities described herein can be implemented in hardware or in software, or using a combination of hardware and software, as will be described in the section “implementation alternatives”.


Implementation Alternatives


Although some aspects are described in the context of a device, it is clear that these aspects also represent a description of the corresponding method, where a feature corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding feature of a corresponding device.


Some or all of the method steps may be executed by (or using) a hardware apparatus, like for example, a microprocessor, a programmable computer or an electronic circuit. In some embodiments, one or more of the most important method steps may be executed by such an apparatus.


Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software. The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a Blu-Ray, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium may be computer readable.


Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.


Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may for example be stored on a machine-readable carrier.


Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine-readable carrier.


In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.


A further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein. The data carrier, the digital storage medium or the recorded medium are typically tangible and/or non-transitory.


A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may for example be configured to be transferred via a data communication connection, for example via the Internet.


A further embodiment comprises a processing means, e.g. a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein.


A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.


A further embodiment according to the invention comprises an apparatus or a system configured to transfer (for example, electronically or optically) a computer program for performing one of the methods described herein to a receiver. The receiver may, for example, be a computer, a mobile device, a memory device or the like. The apparatus or system may, for example, comprise a file server for transferring the computer program to the receiver.


In some embodiments, a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods are performed by any hardware apparatus.


The devices described herein may be implemented using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer.


The devices described herein, or any components of the devices described herein, may be implemented at least partially in hardware and/or in software.


The methods described herein may be performed using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer.


The methods described herein, or any part of the methods described herein, may be performed at least partially by hardware and/or by software.


The above described embodiments are merely illustrative for the principles of the present invention. It is understood that modifications and variations of the arrangements and the details described herein will be apparent to others skilled in the art. It is the intent, therefore, to be limited only by the scope of the impending patent claims and not by the specific details presented by way of description and explanation of the embodiments herein.


TECHNICAL FIELD

The invention is within the technical field of audio generation.


Embodiments of the invention refer to an audio generator, configured to generate an audio signal from an input signal and target data, the target data representing the audio signal. Further embodiments refer to methods for generating an audio signal, and methods for training an audio generator. Further embodiments refer to a computer program product.


BACKGROUND OF THE INVENTION

In recent years, neural vocoders have surpassed classical speech synthesis approaches in terms of naturalness and perceptual quality of the synthesized speech signals. The best results can be achieved with computationally-heavy neural vocoders like WaveNet and WaveGlow, while light-weight architectures based on Generative Adversarial Networks, e.g. MelGAN and Parallel WaveGAN, are still inferior in terms of the perceptual quality.


Generative models using Deep Learning for generating audio waveforms, such as WaveNet, LPCNet, and WaveGlow, have provided significant advances in natural-sounding speech synthesis. These generative models, called neural vocoders in Text-To-Speech (TTS) applications, outperform both parametric and concatenative synthesis methods. They can be conditioned using compressed representations of the target speech (e.g. a mel-spectrogram) for reproducing a given speaker and a given utterance.


Prior works have shown that very low bit-rate coding of clean speech can be achieved using such generative models at the decoder side. This can be done by conditioning the neural vocoders with the parameters from a classical low bit-rate speech coder.


Neural vocoders were also used for speech enhancement tasks, like speech denoising or dereverberation.


The main problem of these deep generative models is usually the high number of required parameters, and the resulting complexity both during training and synthesis (inference). For example, WaveNet, considered state-of-the-art for the quality of the synthesized speech, generates the audio samples sequentially, one by one. This process is very slow and computationally demanding, and cannot be performed in real time.


Recently, lightweight adversarial vocoders based on Generative Adversarial Networks (GANs), such as MelGAN and Parallel WaveGAN, have been proposed for fast waveform generation.


However, the reported perceptual quality of the speech generated using these models is significantly below the baseline of neural vocoders like WaveNet and WaveGlow. A GAN for Text-to-Speech (GAN-TTS) has been proposed to bridge this quality gap, but still at a high computational cost.


There exists a great variety of neural vocoders, which all have drawbacks. Auto-regressive vocoders, for example WaveNet and LPCNet, may have very high quality and be suitable for optimization for inference on CPUs, but they are not suitable for usage on GPUs, since their processing cannot be parallelized easily, and they cannot offer real-time processing without compromising the quality.


Normalizing-flow vocoders, for example WaveGlow, may also have very high quality and be suitable for inference on a GPU, but they comprise a very complex model, which takes a long time to train and optimize and is also not suitable for embedded devices.


GAN vocoders, for example MelGAN and Parallel WaveGAN, may be lightweight and suitable for inference on GPUs, but their quality is lower than that of auto-regressive models.


In summary, there still does not exist a low-complexity solution delivering high-fidelity speech. GANs are the most studied approach to achieve such a goal. The present invention is an efficient solution for this problem.


It is an object of the present invention to provide a lightweight neural vocoder solution which generates speech at very high quality and is trainable with limited computational resources, e.g. for TTS (text-to-speech).


SUMMARY

An embodiment may have an audio generator, configured to generate an audio signal from an input signal and target data, the target data representing the audio signal, comprising: a first processing block, configured to receive first data derived from the input signal and to output first output data, wherein the first output data comprises a plurality of channels, and a second processing block, configured to receive, as second data, the first output data or data derived from the first output data, wherein the first processing block comprises for each channel of the first output data: a conditioning set of learnable layers configured to process the target data to acquire conditioning feature parameters, the target data being derived from a text; and a styling element, configured to apply the conditioning feature parameters to the first data or normalized first data; and wherein the second processing block is configured to combine the plurality of channels of the second data to acquire the audio signal.


Another embodiment may have a method for generating an audio signal by an audio generator from an input signal and target data, the target data representing the audio signal and being derived from a text, comprising: receiving, by a first processing block, first data derived from the input signal; for each channel of a first output data: processing, by a conditioning set of learnable layers of the first processing block, the target data to acquire conditioning feature parameters; and applying, by a styling element of the first processing block, the conditioning feature parameters to the first data or normalized first data; outputting, by the first processing block, first output data comprising a plurality of channels; receiving, by a second processing block, as second data, the first output data or data derived from the first output data; and combining, by the second processing block, the plurality of channels of the second data to acquire the audio signal.


Another embodiment may have a method to generate an audio signal comprising a mathematical model, wherein the mathematical model is configured to output audio samples at a given time step from an input sequence representing the audio data to generate, wherein the mathematical model is configured to shape a noise vector in order to create the output audio samples using the input representative sequence, wherein the input representative sequence is derived from a text.


Another embodiment may have a non-transitory digital storage medium having a computer program stored thereon to perform the method for generating an audio signal by an audio generator from an input signal and target data, the target data representing the audio signal and being derived from a text, comprising: receiving, by a first processing block, first data derived from the input signal; for each channel of a first output data: processing, by a conditioning set of learnable layers of the first processing block, the target data to acquire conditioning feature parameters; and applying, by a styling element of the first processing block, the conditioning feature parameters to the first data or normalized first data; outputting, by the first processing block, first output data comprising a plurality of channels; receiving, by a second processing block, as second data, the first output data or data derived from the first output data; and combining, by the second processing block, the plurality of channels of the second data to acquire the audio signal, when said computer program is run by a computer.


Another embodiment may have a non-transitory digital storage medium having a computer program stored thereon to perform the method to generate an audio signal comprising a mathematical model, wherein the mathematical model is configured to output audio samples at a given time step from an input sequence representing the audio data to generate, wherein the mathematical model is configured to shape a noise vector in order to create the output audio samples using the input representative sequence, wherein the input representative sequence is derived from a text, when said computer program is run by a computer.





BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments of the present invention will be detailed subsequently referring to the appended drawings, in which:



FIG. 1 shows an audio generator architecture according to embodiments of the present invention,



FIG. 2 shows a discriminator structure which can be used for training of the audio generator according to the present invention,



FIG. 3 shows a structure of a portion of the audio generator according to embodiments of the present invention,



FIG. 4 shows a structure of a portion of the audio generator according to embodiments of the present invention,



FIG. 5 shows results of a MUSHRA expert listening test of different models,



FIG. 6 shows an audio generator architecture according to embodiments of the present invention,



FIG. 7 shows operations which are performed onto signals according to the invention,



FIG. 8 shows operations in a text-to-speech application using the audio generator,



FIG. 9a-9c show examples of generators, and



FIG. 10 shows several possibilities for the inputs and outputs of a block which may be internal or external to the inventive generator.





In the figures, similar reference signs denote similar elements and features.


Short Summary of the Invention

In accordance with an aspect, there is provided an audio generator, configured to generate an audio signal from an input signal and target data, the target data representing the audio signal, comprising:

    • a first processing block, configured to receive first data derived from the input signal and to output first output data, wherein the first output data comprises a plurality of channels, and
    • a second processing block, configured to receive, as second data, the first output data or data derived from the first output data,
    • wherein the first processing block comprises for each channel of the first output data:
      • a conditioning set of learnable layers configured to process the target data to obtain conditioning feature parameters, the target data being derived from a text; and
      • a styling element, configured to apply the conditioning feature parameters to the first data or normalized first data; and
    • wherein the second processing block is configured to combine the plurality of channels of the second data to obtain the audio signal.


The audio generator may be such that the target data is a spectrogram. The audio generator may be such that the target data is a mel-spectrogram.


The audio generator may be such that the target data comprise at least one acoustic feature among a log-spectrogram, an MFCC, a mel-spectrogram, or another type of spectrogram obtained from a text.


The audio generator may be configured to obtain the target data by converting an input in form of text or elements of text onto the at least one acoustic feature.


The audio generator may be configured to obtain the target data by converting at least one linguistic feature onto the at least one acoustic feature.


The audio generator may comprise at least one linguistic feature among a phoneme, word prosody, intonation, phrase breaks, and filled pauses obtained from a text.


The audio generator may be configured to obtain the target data by converting an input in form of text or elements of text onto the at least one linguistic feature.


The audio generator may be such that the target data comprise at least one among a character and a word obtained from a text.


The audio generator may be such that the target data are derived from a text using a statistical model performing text analysis and/or using an acoustic model.


The audio generator may be such that the target data are derived from a text using a learnable model performing text analysis and/or using an acoustic model.


The audio generator may be such that the target data are derived from a text using a rules-based algorithm performing text analysis and/or an acoustic model.


The audio generator may be configured to obtain the target data by elaborating an input.


The audio generator may be configured to derive the target data through at least one deterministic layer.


The audio generator may be configured to derive the target data through at least one learnable layer.


The audio generator may be such that the conditioning set of learnable layers consists of one or at least two convolution layers.


The audio generator may be such that a first convolution layer is configured to convolute the target data or up-sampled target data to obtain first convoluted data using a first activation function.


The audio generator may be such that the conditioning set of learnable layers and the styling element are part of a weight layer in a residual block of a neural network comprising one or more residual blocks.


The method may comprise at least one acoustic feature among a log-spectrogram, an MFCC, a mel-spectrogram, or another type of spectrogram obtained from a text.


The method may obtain the target data by converting an input in form of text or elements of text onto the at least one acoustic feature.


The method may obtain the target data by converting at least one linguistic feature onto the at least one acoustic feature.


The method may comprise at least one linguistic feature among a phoneme, word prosody, intonation, phrase breaks, and filled pauses obtained from a text.


The method may obtain the target data by converting an input in form of text or elements of text onto the at least one linguistic feature.


The method may comprise at least one among a character and a word obtained from a text.


The method may derive the target data by using a statistical model performing text analysis and/or using an acoustic model.


The method may derive the target data by using a learnable model performing text analysis and/or using an acoustic model.


The method may derive the target data by using a rules-based algorithm performing text analysis and/or an acoustic model.


The method may derive the target data through at least one deterministic layer.


The method may derive the target data through at least one learnable layer.


The method for generating an audio signal may further comprise deriving the target data from the text.


The method may include that the input representative sequence is text.


The method may include that the input representative sequence is a spectrogram. The method may include that the spectrogram is a mel-spectrogram.


There is proposed, inter alia, an audio generator (e.g., 10), configured to generate an audio signal (e.g., 16) from an input signal (e.g., 14) and target data (e.g., 12), the target data (e.g., 12) representing the audio signal (e.g., 16), and which may be derived from text, comprising at least one of:

    • a first processing block (e.g., 40, 50, 50a-50h), configured to receive first data (e.g., 15, 59a) derived from the input signal (e.g., 14) and to output first output data (e.g., 69), wherein the first output data (e.g., 69) comprises a plurality of channels (e.g., 47), and
    • a second processing block (e.g., 45), configured to receive, as second data, the first output data (e.g., 69) or data derived from the first output data (e.g., 69).


The first processing block (e.g., 50) may comprise for each channel of the first output data:

    • a conditioning set of learnable layers (e.g., 71, 72, 73) configured to process the target data (e.g., 12) to obtain conditioning feature parameters (e.g., 74, 75); and
    • a styling element (e.g., 77), configured to apply the conditioning feature parameters (e.g., 74, 75) to the first data (e.g., 15, 59a) or normalized first data (e.g., 59, 76′).


The second processing block (e.g., 45) may be configured to combine the plurality of channels (e.g., 47) of the second data (e.g., 69) to obtain the audio signal (e.g., 16).


There is also proposed a method e.g. for generating an audio signal (e.g., 16) by an audio generator (e.g., 10) from an input signal (e.g., 14) and target data (e.g., 12), the target data (e.g. obtained from text) representing the audio signal (e.g., 16), comprising:

    • receiving, by a first processing block (e.g., 50, 50a-50h), first data (e.g., 15, 59, 59a, 59b) derived from the input signal (e.g., 14);
    • for each channel of a first output data (e.g., 59b, 69):
      • processing, by a conditioning set of learnable layers (e.g., 71, 72, 73) of the first processing block (e.g., 50), the target data (e.g., 12), which may be derived from text, to obtain conditioning feature parameters (e.g., 74, 75); and
      • applying, by a styling element (e.g., 77) of the first processing block (e.g., 50), the conditioning feature parameters (e.g., 74, 75) to the first data (e.g., 15, 59) or normalized first data (e.g., 76′);
    • outputting, by the first processing block (e.g., 50), first output data (e.g., 69) comprising a plurality of channels (e.g., 47);
    • receiving, by a second processing block (e.g., 45), as second data, the first output data (e.g., 69) or data derived from the first output data (e.g., 69); and
    • combining, by the second processing block (e.g., 45), the plurality of channels (e.g., 47) of the second data to obtain the audio signal (e.g., 16).


There is also proposed a method to train a neural network for audio generation, wherein the neural network:

    • outputs audio samples at a given time step from an input sequence (e.g. 12) representing the audio data (e.g. 16) to generate,
    • is configured to shape a noise vector (e.g. 14) in order to create the output audio samples (e.g. 16) using the input representative sequence (e.g. 12), and
    • the training is designed to optimize a loss function (e.g. 140).


There is also proposed a method to generate an audio signal (e.g. 16) comprising a mathematical model, wherein the mathematical model is configured to output audio samples at a given time step from an input sequence (e.g. 12) representing the audio data (e.g. 16) to generate. The mathematical model may shape a noise vector (e.g. 14) in order to create the output audio samples using the input representative sequence (e.g. 12).


It is in this context that we propose StyleMelGAN (e.g., the audio generator 10), a light-weight neural vocoder, allowing synthesis of high-fidelity speech with low computational complexity. StyleMelGAN is a fully convolutional, feed-forward model that uses Temporal Adaptive DE-normalization, TADE, (e.g., 60a and 60b in FIG. 4, and 60 in FIG. 3) to style (e.g. at 77) a low-dimensional noise vector (e.g. a 128×1 vector) via the acoustic features of the target speech waveform. The architecture allows for highly parallelizable generation, several times faster than real time on both central processing units, CPUs, and graphics processing units, GPUs. For efficient and fast training, we may use a multi-scale spectral reconstruction loss together with an adversarial loss calculated by multiple discriminators (e.g., 132a-132d) evaluating the speech signal 16 in multiple frequency bands and with random windowing (e.g., the windows 105a, 105b, 105c, 105d). MUSHRA and P.800 listening tests show that StyleMelGAN (e.g., the audio generator 10) outperforms known existing neural vocoders in both copy synthesis and TTS scenarios.
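
A minimal, non-limiting sketch of the TADE-based styling described above is given below, assuming PyTorch. The class name TADELayer, the hidden size, the instance normalization and the kernel sizes are illustrative assumptions and not the exact layer layout of the embodiments.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TADELayer(nn.Module):
    """Sketch of temporal adaptive de-normalization (TADE): the latent activations are
    normalized and then modulated with gamma/beta maps (conditioning feature parameters)
    predicted from the target data, e.g. an upsampled mel-spectrogram."""
    def __init__(self, latent_channels: int, cond_channels: int, hidden: int = 64):
        super().__init__()
        self.norm = nn.InstanceNorm1d(latent_channels, affine=False)      # normalization of the first data
        self.shared = nn.Conv1d(cond_channels, hidden, kernel_size=3, padding=1)
        self.to_gamma = nn.Conv1d(hidden, latent_channels, kernel_size=3, padding=1)
        self.to_beta = nn.Conv1d(hidden, latent_channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # x:    (batch, latent_channels, T)     latent signal derived from the noise vector
        # cond: (batch, cond_channels, T_cond)  target data, e.g. a mel-spectrogram
        cond = F.interpolate(cond, size=x.shape[-1], mode="nearest")       # up-sample target data to match T
        h = F.leaky_relu(self.shared(cond), 0.2)                           # first activation: leaky ReLU
        gamma, beta = self.to_gamma(h), self.to_beta(h)                    # conditioning feature parameters
        return self.norm(x) * gamma + beta                                 # styling element
```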


The present application proposes, inter alia, a neural vocoder for generating high quality speech 16, which may be based on a generative adversarial network (GAN). The solution, here called StyleMelGAN (and, for example, implemented in the audio generator 10), is a light-weight neural vocoder allowing synthesis of high-quality speech 16 at low computational complexity. StyleMelGAN is a feed-forward, fully convolutional model that uses temporal adaptive de-normalization (TADE) for styling (e.g. at block 77) a latent noise representation (e.g. 69) using, for example, the mel-spectrogram (12) of the target speech waveform. It allows highly parallelizable generation, which is several times faster than real time on both CPUs and GPUs. For training, it is possible to use multi-scale spectral reconstruction losses followed by adversarial losses. This makes it possible to obtain a model able to synthesize high-quality outputs after less than two days of training on a single GPU.


Potential applications and benefits from the invention are as follows:


The invention can be applied for Text-to-Speech, and the resulting quality, i.e. the generated speech quality for TTS and copy-synthesis, is close to that of WaveNet and natural speech. It also provides fast training, so that the model can be re-trained and personalized easily and quickly. It uses less memory, since it is a relatively small neural network model. Finally, the proposed invention provides a benefit in terms of complexity, i.e. it has a very good quality/complexity trade-off.


The invention can also be applied for speech enhancement, where it can provide a low-complexity and robust solution for generating clean speech from a noisy one.


The invention can also be applied for speech coding, where it can significantly lower the bitrate by transmitting only the parameters needed for conditioning the neural vocoder. Also, in this application the lightweight neural-vocoder-based solution is suitable for embedded systems, and especially suitable for upcoming (end-)User Equipment (UE) equipped with a GPU or a Neural Processing Unit (NPU).


Embodiments of the present application refer to an audio generator, configured to generate an audio signal from an input signal and target data, the target data representing the audio signal (e.g. derived from text), comprising a first processing block, configured to receive first data derived from the input signal and to output first output data, wherein the first output data comprises a plurality of channels, and a second processing block, configured to receive, as second data, the first output data or data derived from the first output data, wherein the first processing block comprises for each channel of the first output data a conditioning set of learnable layers configured to process the target data to obtain conditioning feature parameters; and a styling element, configured to apply the conditioning feature parameters to the first data or normalized first data; and wherein the second processing block is configured to combine the plurality of channels of the second data to obtain the audio signal.


According to one embodiment, the conditioning set of learnable layers consists of one or two convolution layers.


According to one embodiment, a first convolution layer is configured to convolute the target data or up-sampled target data to obtain first convoluted data using a first activation function.


According to one embodiment, the conditioning set of learnable layers and the styling element are part of a weight layer in a residual block of a neural network comprising one or more residual blocks.


According to one embodiment, the audio generator further comprises a normalizing element, which is configured to normalize the first data. For example, the normalizing element may normalize the first data to a normal distribution of zero-mean and unit-variance.
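
As a non-limiting sketch of such a normalizing element, the first data may for instance be normalized per channel over time; the choice of statistics below is an assumption, as the embodiment only states zero-mean and unit-variance.

```python
import torch

def normalize_first_data(x: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    # x: (batch, channels, T); normalize each channel to zero mean and unit variance.
    mean = x.mean(dim=-1, keepdim=True)
    std = x.std(dim=-1, keepdim=True)
    return (x - mean) / (std + eps)
```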


According to one embodiment, the audio signal is a voice audio signal.


According to one embodiment, the target data is up-sampled, advantageously by non-linear interpolation, by a factor of 2 or a multiple of 2, or a power of 2. In some examples, instead, a factor greater than 2 may be used.


According to one embodiment, the first processing block further comprises a further set of learnable layers, configured to process data derived from the first data using a second activation function, wherein the second activation function is a gated activation function.


According to one embodiment, the further set of learnable layers consists of one or two (or even more) convolution layers.


According to one embodiment, the second activation function is a softmax-gated hyperbolic tangent, TanH, function.
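
A minimal sketch of such a softmax-gated TanH activation is shown below; it assumes that the channels are split into a content half and a gate half, which is an assumption about the gating arrangement rather than a definition taken from the embodiments.

```python
import torch

def softmax_gated_tanh(x: torch.Tensor, dim: int = 1) -> torch.Tensor:
    # Split the channels into a content half and a gate half, then gate tanh(content)
    # with a softmax taken over the gate channels.
    content, gate = torch.chunk(x, 2, dim=dim)
    return torch.tanh(content) * torch.softmax(gate, dim=dim)
```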


According to one embodiment, the first activation function is a leaky rectified linear unit, leaky ReLu, function.


According to one embodiment, convolution operations run with a maximum dilation factor of 2.


According to one embodiment, the audio generator comprises eight first processing blocks and one second processing block.


According to one embodiment, the first data has a lower dimensionality than the audio signal. The first data may have a first dimension or at least one dimension lower than the audio signal. The first data may have one dimension lower than the audio signal but a number of channels greater than the audio signal. The first data may have a total number of samples across all dimensions lower than the audio signal.


According to one embodiment, the target data is a spectrogram, advantageously a mel-spectrogram, or a bitstream.


The target data may be derived from a text. The audio generator may be configured to derive the target data from text. The target data may include, for example, at least one of text data (characters, words, etc.), linguistic feature(s), acoustic feature(s), etc.


In alternative examples, the target data may be a compressed representation of audio data, or a degraded audio signal.
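
By way of a non-limiting example, target data in the form of a (log-)mel-spectrogram may be computed from a waveform, e.g. with torchaudio; the sampling rate, FFT size, hop length and number of mel bands below are illustrative assumptions.

```python
import torch
import torchaudio

sample_rate = 22050                                   # assumed sampling rate
to_mel = torchaudio.transforms.MelSpectrogram(
    sample_rate=sample_rate, n_fft=1024, hop_length=256, n_mels=80)

waveform = torch.randn(1, sample_rate)                # 1 s of placeholder audio
target_data = torch.log(to_mel(waveform) + 1e-5)      # log-mel-spectrogram, shape (1, 80, frames)
```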


Further embodiments refer to a method for generating an audio signal by an audio generator from an input signal and target data, the target data representing the audio signal (e.g. derived from text), comprising: receiving, by a first processing block, first data derived from the input signal; for each channel of a first output data: processing, by a conditioning set of learnable layers of the first processing block, the target data to obtain conditioning feature parameters; and applying, by a styling element of the first processing block, the conditioning feature parameters to the first data or normalized first data; outputting, by the first processing block, first output data comprising a plurality of channels; receiving, by a second processing block, as second data, the first output data or data derived from the first output data; and combining, by the second processing block, the plurality of channels of the second data to obtain the audio signal. The method may derive the target data from text, in some examples.


Normalizing may include, for example, normalizing the first data to a normal distribution of zero-mean and unit-variance.


The method can be supplemented with any feature or feature combination of the audio generator as well.


Further embodiments refer to a method for training an audio generator as laid out above, wherein training comprises repeating the steps of any one of the methods as laid out above one or more times.


According to one embodiment, the method for training further comprises evaluating the generated audio signal by at least one evaluator, which is advantageously a neural network, and adapting the weights of the audio generator according to the results of the evaluation.


According to one embodiment, the method for training further comprises adapting the weights of the evaluator according to the results of the evaluation.


According to one embodiment, training comprises optimizing a loss function.


According to one embodiment, optimizing a loss function comprises calculating a fixed metric between the generated audio signal and a reference audio signal.


According to one embodiment, calculating the fixed metric comprises calculating one or several spectral distortions between the generated audio signal and the reference signal.


According to one embodiment, calculating the one or several spectral distortions is performed on magnitude or log-magnitude of the spectral representation of the generated audio signal and the reference signal, and/or on different time or frequency resolutions.
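
A non-limiting sketch of such a fixed metric, i.e. spectral distortions computed on the magnitude and log-magnitude of the spectral representations at several time/frequency resolutions, could look as follows; the specific resolutions and the L1 distance are assumptions.

```python
import torch

def stft_magnitude(x: torch.Tensor, n_fft: int, hop: int) -> torch.Tensor:
    # x: (batch, samples); returns the magnitude spectrogram (batch, bins, frames).
    window = torch.hann_window(n_fft, device=x.device)
    return torch.stft(x, n_fft=n_fft, hop_length=hop, window=window,
                      return_complex=True).abs()

def multi_scale_spectral_loss(generated: torch.Tensor, reference: torch.Tensor,
                              resolutions=((512, 128), (1024, 256), (2048, 512))) -> torch.Tensor:
    loss = 0.0
    for n_fft, hop in resolutions:
        g = stft_magnitude(generated, n_fft, hop)
        r = stft_magnitude(reference, n_fft, hop)
        loss = loss + torch.mean(torch.abs(g - r))                                       # magnitude distortion
        loss = loss + torch.mean(torch.abs(torch.log(g + 1e-5) - torch.log(r + 1e-5)))   # log-magnitude distortion
    return loss
```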


According to one embodiment, optimizing the loss function comprises deriving one or more adversarial metrics by randomly supplying and evaluating a representation of the generated audio signal or a representation of the reference audio signal by one or more evaluators, wherein evaluating comprises classifying the supplied audio signal into a predetermined number of classes indicating a pretrained classification level of naturalness of the audio signal.


According to one embodiment, optimizing the loss function comprises calculating a fixed metric and deriving an adversarial metric by one or more evaluators.


According to one embodiment, the audio generator is first trained using the fixed metric.


According to one embodiment, four evaluators derive four adversarial metrics.


According to one embodiment, the evaluators operate after a decomposition of the representation of the generated audio signal or the representation of the reference audio signal by a filter-bank.


According to one embodiment, each of the evaluators receives as input one or several portions of the representation of the generated audio signal or the representation of the reference audio signal.


According to one embodiment, the signal portions are generated by sampling random window(s) from the input signal, using random window function(s).


According to one embodiment, sampling of the random window(s) is repeated multiple times for each evaluator.


According to one embodiment, the number of times the random window(s) is sampled for each evaluator is proportional to the length of the representation of the generated audio signal or the representation of the reference audio signal.
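
As a non-limiting sketch of the random windowing described above (a rectangular window and the length-proportional window count are assumptions):

```python
import torch

def random_windows(signal: torch.Tensor, window_size: int, num_windows: int):
    # signal: (batch, samples). Draw num_windows random excerpts to be fed to an evaluator.
    batch, length = signal.shape
    excerpts = []
    for _ in range(num_windows):
        start = torch.randint(0, length - window_size + 1, (1,)).item()
        excerpts.append(signal[:, start:start + window_size])
    return excerpts

# The number of windows may, for example, be chosen proportional to the signal length:
# num_windows = max(1, signal.shape[-1] // window_size)
```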


Further embodiments refer to a computer program product including a program for a processing device, comprising software code portions for performing the steps of the methods described herein when the program is run on the processing device.


According to one embodiment, the computer program product comprises a computer-readable medium on which the software code portions are stored, wherein the program is directly loadable into an internal memory of the processing device.


Further embodiments refer to a method to generate an audio signal comprising a mathematical model, wherein the mathematical model is configured to output audio samples at a given time step from an input sequence (e.g. derived from text) representing the audio data to generate, wherein the mathematical model is configured to shape a noise vector in order to create the output audio samples using the input representative sequence.


According to one embodiment, the mathematical model is trained using audio data. According to one embodiment, the mathematical model is a neural network. According to one embodiment, the network is a feed-forward network. According to one embodiment, the network is a convolutional network.


According to one embodiment, the noise vector may have a lower dimensionality than the audio signal to generate. The first data may have a first dimension or at least one dimension lower than the audio signal. The first data may have a total number of samples across all dimensions lower than the audio signal. The first data may have one dimension lower than the audio signal but a number of channels greater than the audio signal.


According to one embodiment, a temporal adaptive de-normalization (TADE) technique is used for conditioning the mathematical model using the input representative sequence and therefore for shaping the noise vector.


According to one embodiment, a modified softmax-gated Tanh activates each layer of the neural network.


According to one embodiment, convolution operations run with a maximum dilation factor of 2.


According to one embodiment, the noise vector as well as the input representative sequence are up-sampled to obtain the output audio at the target sampling rate.


According to one embodiment, the up-sampling is performed sequentially in different layers of the mathematical model.


According to one embodiment, the up-sampling factor for each layer is 2 or a multiple of 2, such as a power of 2. In some examples, the up-sampling factor may, more generally, be greater than 2.


The generated audio signal can be in general used in a text-to-speech application, wherein the input representative sequence is derived from a text.


According to one embodiment, the generated audio signal is used in an audio decoder, wherein the input representative sequence is a compressed representation of the original audio to transmit or store.


According to one embodiment, the generated audio signal is used to improve the audio quality of a degraded audio signal, wherein the input representative sequence is derived from the degraded signal.


Further embodiments refer to a method to train a neural network for audio generation, wherein the neural network outputs audio samples at a given time step from an input sequence representing the audio data to generate, wherein the neural network is configured to shape a noise vector in order to create the output audio samples using the input representative sequence, wherein the neural network is designed as laid out above, and wherein the training is designed to optimize a loss function.


According to one embodiment, the loss function comprises a fixed metric computed between the generated audio signal and a reference audio signal.


According to one embodiment, the fixed metric is one or several spectral distortions computed between the generated audio signal and the reference signal.


According to one embodiment, the one or several spectral distortions are computed on magnitude or log-magnitude of the spectral representation of the generated audio signal and the reference signal.


According to one embodiment, the one or several spectral distortions forming the fixed metric are computed on different time or frequency resolutions.


According to one embodiment, the loss function comprises an adversarial metric derived by additional discriminative neural networks, wherein the discriminative neural networks receive as input a representation of the generated or of the reference audio signals, and wherein the discriminative neural networks are configured to evaluate how realistic the generated audio samples are.


According to one embodiment, the loss function comprises both a fixed metric and an adversarial metric derived by additional discriminative neural networks.


According to one embodiment, the neural network generating the audio samples is first trained using solely the fixed metric.


According to one embodiment, the adversarial metric is derived by 4 discriminative neural networks.


According to one embodiment, the discriminative neural networks operate after a decomposition of the input audio signal by a filter-bank.


According to one embodiment, each discriminative neural network receives as input one or several random windowed versions of the input audio signal.


According to one embodiment, the sampling of the random window is repeated multiple times for each discriminative neural network.


According to one embodiment, the number of times the random window is sampled for each discriminative neural network is proportional to the length of the input audio samples.


DETAILED DESCRIPTION OF THE INVENTION


FIG. 8 shows an example of the audio generator 10. The audio generator 10 may convert text 112 onto an output audio signal 16. The text 112 may be converted onto target data 12 (see below), which in some examples may be understood as an audio representation (e.g., a spectrogram, MFCCs, a log-spectrogram, a mel-spectrogram, or other acoustic features). The target data 12 may be used to condition an input signal 14 (e.g. noise), so that the input signal 14 is processed to become audible speech. An audio synthesis block (text analysis block) 1110 may convert the text 112 onto this audio representation, i.e. the target data 12. The audio synthesis block 1110 may be responsible for processing at least one of utterance, phrasing, intonation, duration, etc. of speech, for example. The audio synthesis block 1110 (text analysis block) may perform at least one task such as text normalization, word segmentation, prosody prediction and grapheme-to-phoneme conversion. Subsequently, the generated target data 12 may be inputted into a waveform synthesis block 1120 (e.g. a vocoder), which may generate the waveform 16 (output audio signal), e.g. from the input signal 14, based on conditions obtained from the target data 12 obtained from the text 112.
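
By way of a non-limiting sketch, the two-stage pipeline of FIG. 8 may be summarized as follows. The functions text_analysis and waveform_synthesis are mere placeholders standing in for blocks 1110 and 1120, and the shapes used (80 mel bands, a 128-channel noise vector, a hop of 256 samples) are illustrative assumptions.

```python
import torch

def text_analysis(text: str) -> torch.Tensor:
    """Placeholder for block 1110: would map text 112 to acoustic features,
    e.g. an 80-band mel-spectrogram (one frame per hop of the target waveform)."""
    num_frames = 10 * max(len(text), 1)                 # purely illustrative frame count
    return torch.zeros(1, 80, num_frames)

def waveform_synthesis(noise: torch.Tensor, target_data: torch.Tensor) -> torch.Tensor:
    """Placeholder for block 1120 (the vocoder): would style the noise with the
    target data 12 and return the output waveform 16."""
    num_samples = 256 * target_data.shape[-1]           # illustrative hop size of 256 samples
    return torch.zeros(1, num_samples)

def synthesize_speech(text: str) -> torch.Tensor:
    target_data = text_analysis(text)                   # text 112 -> target data 12
    noise = torch.randn(1, 128, 1)                      # input signal 14: 128-channel noise vector
    return waveform_synthesis(noise, target_data)       # output audio signal 16
```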


It is noted, however, that the block 1110 in some examples is not part of the generator 10, but the block 1110 could be external to the generator 10. In some examples, the block 1110 may be subdivided into multiple sub-blocks (and in some particular cases at least one of the sub-blocks may be part of the generator 10, and at least one of the sub-blocks may be external to the generator 10).


In general terms, the input which is inputted to the block 1110 (or to the generator 10 in some examples), and which may be text or another input derived from text, may be in the form of at least one of:

    • text 112 (e.g., ASCII code)
    • at least one linguistics feature (e.g. at least one among a phoneme, words prosody, intonation, phrase breaks, and filled pauses, e.g. obtained from a text)
    • at least one acoustic feature (e.g. at least one among a log-spectrogram, an MFCC, and a mel-spectrogram, e.g. obtained from a text)


The input may be processed (e.g. by block 1110) to obtain the target data 12. According to different examples, block 1110 may perform processing so as to obtain the target data 12 (derived from text) in the form of at least one of:

    • at least one of a character of text or a word
    • at least one linguistic feature (e.g. at least one among a phoneme, words prosody, intonation, phrase breaks, and filled pauses, e.g. obtained from a text)
    • at least one acoustic feature (e.g. at least one among a log-spectrogram, an MFCC, and a mel-spectrogram, e.g. obtained from a text).


The target data 12 (whether in the form of characters, linguistic features, or acoustic features) will be used by the generator 10 (e.g., by the waveform synthesis block, vocoder, 1120) to condition the processing of the input signal 14, thereby generating the output audio signal (acoustic wave).



FIG. 10 shows a synoptic table on the several possibilities for instantiating the block 1110:

    • A) In case A, the input inputted to the block 1110 is plain text 112, and the output (target data 12) from the block 1110 is at least one of a character of text or a word (which is also text). In case A, the block 1110 performs a selection from the text 112 to obtain elements of the text 112. Subsequently, the target data 12 (in the form of elements of the text 112) will condition the processing of the input signal 14 to obtain the output signal 16 (acoustic wave).
    • B) In case B, the input inputted to the block 1110 is plain text 112, and the output (target data 12) from the block 1110 comprises at least one linguistic feature, e.g. a linguistic feature among a phoneme, word prosody, intonation, phrase breaks, and filled pauses obtained from the text 112, etc. In case B, the block 1110 performs a linguistic analysis on elements of the text 112, thereby obtaining at least one linguistic feature among a phoneme, word prosody, intonation, phrase breaks, and filled pauses, etc. Subsequently, the target data 12 (in the form of at least one among a phoneme, word prosody, intonation, phrase breaks, and filled pauses, etc.) will condition the processing of the input signal 14 to obtain the output signal 16 (acoustic wave).
    • C) In case C, the input inputted to the block 1110 is plain text 112, and the output (target data 12) from the block 1110 comprises at least one acoustic feature, e.g. an acoustic feature among a log-spectrogram, an MFCC, and a mel-spectrogram obtained from a text. In case C, the block 1110 performs an acoustic analysis on elements of the text 112, thereby obtaining at least one acoustic feature among a log-spectrogram, an MFCC, and a mel-spectrogram obtained from the text 112. Subsequently, the target data 12 (e.g. in the form of at least one acoustic feature among a log-spectrogram, an MFCC, a mel-spectrogram obtained from the text, etc.) will condition the processing of the input signal 14 to obtain the output signal 16 (acoustic wave).
    • D) In case D, the input inputted to the block 1110 is a linguistic feature (e.g. at least one among a phoneme, word prosody, intonation, phrase breaks, and filled pauses), and the output is also a processed linguistic feature (e.g. at least one among a phoneme, word prosody, intonation, phrase breaks, and filled pauses). Subsequently, the target data 12 (in the form of at least one among a phoneme, word prosody, intonation, phrase breaks, and filled pauses, etc.) will condition the processing of the input signal 14 to obtain the output signal 16 (acoustic wave).
    • E) In case E, the input inputted to the block 1110 is a linguistic feature (e.g. at least one among a phoneme, word prosody, intonation, phrase breaks, and filled pauses), and the output (target data 12) from the block 1110 comprises at least one acoustic feature, e.g. an acoustic feature among a log-spectrogram, an MFCC, and a mel-spectrogram obtained from a text. In case E, the block 1110 performs an acoustic analysis on the linguistic feature(s), to obtain at least one acoustic feature among a log-spectrogram, an MFCC, and a mel-spectrogram. Subsequently, the target data 12 (e.g. in the form of at least one acoustic feature among a log-spectrogram, an MFCC, a mel-spectrogram obtained from the text, etc.) will condition the processing of the input signal 14 to obtain the output signal 16 (acoustic wave).
    • F) In case F, the input inputted to the block 1110 is in the form of an acoustic feature (e.g. at least one acoustic feature among a log-spectrogram, an MFCC, a mel-spectrogram obtained from the text, etc.), and the output (target data 12) is in the form of a processed acoustic feature (e.g. at least one acoustic feature among a log-spectrogram, an MFCC, a mel-spectrogram obtained from the text, etc.). Subsequently, the target data 12 (e.g. in the form of the processed acoustic features, like at least one acoustic feature among a log-spectrogram, an MFCC, a mel-spectrogram obtained from the text, etc.) will condition the processing of the input signal 14 to obtain the output signal 16 (acoustic wave).



FIG. 9a shows an example in which block 1110 includes a sub-block 1112 (text analysis block) which provides intermediate target data 212, and, downstream thereto, a sub-block 1114 (audio synthesis, e.g. using an acoustic model) which generates the target data 12 in the form of acoustic features. Therefore, in FIG. 9a if both the sub-blocks 1112 and 1114 are part of the generator 10, we are in case C. If the sub-block 1112 is not part of the generator 10 but the sub-block 1114 is part of the generator 10, we are in case E.



FIG. 9b shows an example in which the block 1110 only performs text analysis and provides target data 12 in the form of linguistic features. Therefore, in FIG. 9b, if the block 1110 is part of the generator 10, we are in case B.



FIG. 9c shows an example in which there is no block 1110, and target data 112 are in the form of linguistic features.


In general, block 1110 (if present) operates to elaborate the text (or other input obtained from text) more and more, in a processing towards target data which are more elaborated than the input inputted to the block 1110. The block 1110 may also use constraints (e.g. attention function, voice of man/woman, accent, emotional characterization, etc.) which may be absent in the original text. These constraints may in general be provided by the user.


It is noted that, in the cases above and below, the block 1110 (or, if present, any of its sub-blocks, such as any of blocks 1112 and 1114) may use a statistical model, e.g. performing text analysis and/or using an acoustic model. In addition or in alternative, the block 1110 (or, if present, any of its sub-blocks, such as any of blocks 1112 and 1114) may use a learnable model, e.g. performing text analysis and/or using an acoustic model. The learnable model may be based, for example, on neural networks, Markov chains, etc. In further addition or in further alternative, the block 1110 (or, if present, any of its sub-blocks, such as any of blocks 1112 and 1114) may make use of a rules-based algorithm performing text analysis and/or based on an acoustic model.


The block 1110 (or, if present, any of its sub-blocks, such as any of blocks 1112 and 1114) may derive the target data deterministically, in some examples. Therefore, it may be that some sub-block(s) are learnable, while others are deterministic.


The block 1110 is also referred to as “text analysis block” (e.g. when converting text onto at least one linguistic feature) or “audio synthesis block” (e.g. when converting text or at least one linguistic feature onto at least one acoustic feature, such as a spectrogram). In any case, the target data 12 may be in the form of text, linguistic features, or acoustic features, according to the embodiments.


Notably, FIG. 10 shows that some combinations of conversions are in general not provided. This is because conversions from a more elaborated feature towards a simpler feature (e.g., from a linguistic feature to text, or from an acoustic feature to text or to a linguistic feature) are not envisaged.



FIG. 6 shows an example of an audio generator 10 which can generate (e.g., synthesize) an audio signal (output signal) 16, e.g. according to StyleMelGAN. In FIG. 6, text 112 may be processed e.g. by the text analysis block 1110 to obtain target data 12. Subsequently, at the waveform synthesis block 1120, the target data 12 may be used to process an input signal 14 (e.g. noise) to obtain an audible audio signal 16 (acoustic waveform). The obtained target data 12 may be derived from a text.


In particular, the output audio signal 16 may be generated based on an input signal 14 (also called latent signal, and which may be noise, e.g. white noise) and target data 12 (also called “input sequence”, and being derived from a text, in some examples), which may be obtained from the text 112 at block 1110, for example. The target data 12 may, for example, comprise (e.g. be) a spectrogram (e.g., a mel-spectrogram), the mel-spectrogram providing a mapping, for example, of a sequence of time samples onto the mel scale. In addition or alternatively, the target data 12 may comprise (e.g. be) a bitstream. For example, the target data may be or include text (or more in general be derived from text) which is to be reproduced in audio (e.g., text-to-speech). The target data 12 is in general to be processed, in order to obtain a speech sound recognizable as natural by a human listener. The input signal 14 may be noise (which as such carries no useful information), e.g. white noise, but, in the generator 10, a noise vector taken from the noise is styled (e.g. at 77) so as to obtain a noise vector with the acoustic features conditioned by the target data 12. At the end, the output audio signal 16 will be understood as speech by a human listener. The noise vector 14 may be, like in FIG. 1, a 128×1 vector (one single sample, e.g. a time-domain or frequency-domain sample, and 128 channels). Other lengths of the noise vector 14 could be used in other examples.


The first processing block 50 is shown in FIG. 6. As will be shown (e.g., in FIG. 1) the first processing block 50 may be instantiated by each of a plurality of blocks (in FIG. 1, blocks 50a, 50b, 50c, 50d, 50e, 50f, 50g, 50h). The blocks 50a-50h may be understood as forming one single block 40. It will be shown that in the first processing block 40, 50, a conditioning set of learnable layers (e.g., 71, 72, 73) may be used to process the target data 12 and/or the input signal 14. Accordingly, conditioning feature parameters 74, 75 (also referred to as gamma, γ, and beta, β, in FIG. 3) may be obtained, e.g. by convolution, during training. The learnable layers 71-73 may therefore be part of a weight layer of a learning network or, more in general, another learning structure. The first processing block 40, 50 may include at least one styling element 77. The at least one styling element 77 may output the first output data 69. The at least one styling element 77 may apply the conditioning feature parameters 74, 75 to the input signal 14 (latent) or the first data 15 obtained from the input signal 14.


The first output data 69 at each block 50 are in a plurality of channels. The audio generator 10 may include a second processing block 45 (in FIG. 1 shown as including the blocks 42, 44, 46). The second processing block 45 may be configured to combine the plurality of channels 47 of the first output data 69 (inputted as second input data or second data), to obtain the output audio signal 16 in one single channel, but in a sequence of samples.


The “channels” are not to be understood in the context of stereo sound, but in the context of neural networks (e.g. convolutional neural networks). For example, the input signal (e.g. latent noise) 14 may be in 128 channels (in the representation in the time domain), since a sequence of channels is provided. For example, when the signal has 176 samples and 64 channels, it may be understood as a matrix of 176 columns and 64 rows, while when the signal has 352 samples and 64 channels, it may be understood as a matrix of 352 columns and 64 rows (other schematizations are possible). Therefore, the generated audio signal 16 (which in FIG. 1 results in a 1×22528 row matrix, where 22528 can be substituted by any other number) may be understood as a mono signal. In case stereo signals are to be generated, the disclosed technique is simply to be repeated for each stereo channel, so as to obtain multiple audio signals 16 which are subsequently mixed.


At least the original input signal 14 and/or the generated speech 16 may be a vector. By contrast, the output of each of the blocks 30 and 50a-50h, 42, 44 has in general a different dimensionality. The first data may have a first dimension or at least one dimension lower than that of the audio signal. The first data may have a total number of samples across all dimensions lower than the audio signal. The first data may have one dimension lower than the audio signal but a number of channels greater than the audio signal. At each block 30 and 50a-50h, the signal, evolving from noise 14 towards becoming speech 16, may be upsampled. For example, at the upsampling block 30 before the first block 50a among the blocks 50a-50h, an 88-times upsampling is performed. An example of upsampling may include, for example, the following sequence: 1) repetition of the same value, 2) insertion of zeros, 3) another repetition or zero insertion followed by linear filtering, etc.
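A minimal sketch of the two upsampling variants just mentioned (repetition of the same value, and zero insertion followed by a simple linear filter) is given below; the function names, the upsampling factor and the box filter used for smoothing are illustrative assumptions, not the actual implementation.

import numpy as np

def upsample_repeat(x, factor):
    # Nearest-neighbour upsampling: repeat each sample 'factor' times.
    return np.repeat(x, factor)

def upsample_zero_insert(x, factor):
    # Insert (factor - 1) zeros between consecutive samples, then apply a
    # simple box (linear) filter to interpolate the inserted zeros.
    y = np.zeros(len(x) * factor)
    y[::factor] = x
    return np.convolve(y, np.ones(factor), mode="same")

x = np.array([1.0, -2.0, 3.0])
print(upsample_repeat(x, 2))       # [ 1.  1. -2. -2.  3.  3.]
print(upsample_zero_insert(x, 2))  # zero-stuffed and smoothed version of x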


The generated audio signal 16 may generally be a single-channel signal (e.g. 1×22528). In case multiple audio channels are needed (e.g., for a stereo sound playback) then the claimed procedure shall be in principle iterated multiple times.


Analogously, also the target data 12 can be, in principle, in one single channel (e.g. if it is text or more in general if it is derived from text, like in case A, or like in FIG. 9c) or in multiple channels (e.g. in spectrograms, e.g. mel-spectrograms, e.g. derived from text, e.g. when in cases C, E, F). In any case, it may be upsampled (e.g. by a factor of two, a power of 2, a multiple of 2, or a value greater than 2) to adapt to the dimensions of the signal (59a, 15, 69) evolving along the subsequent layers (50a-50h, 42), e.g. to obtain the conditioning feature parameters 74, 75 in dimensions adapted to the dimensions of the signal.


When the first processing block 50 is instantiated in e.g. at least multiple blocks 50a-50h, the number of channels may, for example, remain the same for the multiple blocks 50a-50h. The first data may have a first dimension or at least one dimension lower than that of the audio signal. The first data may have a total number of samples across all dimensions lower than the audio signal. The first data may have one dimension lower than the audio signal but a number of channels greater than the audio signal.


The signals at the subsequent blocks may have different dimensions from each other. For example, the signal may be upsampled more and more times to arrive, for example, from 88 samples to 22,528 samples at the last block 50h. Analogously, the target data 12 are also upsampled at each processing block 50. Accordingly, the conditioning feature parameters 74, 75 can be adapted to the number of samples of the signal to be processed. Accordingly, semantic information provided by the target data 12 is not lost in subsequent layers 50a-50h.


It is to be understood that examples may be performed according to the paradigms of generative adversarial networks (GANs). A GAN includes a GAN generator 11 (FIG. 1) and a GAN discriminator 100 (FIG. 2), which may also be understood to be part of the waveform synthesis block 1120. The GAN generator 11 tries to generate an audio signal 16 which is as close as possible to a real signal. The GAN discriminator 100 shall recognize whether the generated audio signal is real (like the real audio signal 104 in FIG. 2) or fake (like the generated audio signal 16). Both the GAN generator 11 and the GAN discriminator 100 may be obtained as neural networks. The GAN generator 11 shall minimize the losses (e.g., through gradient methods or other methods), and update the conditioning feature parameters 74, 75 by taking into account the results at the GAN discriminator 100. The GAN discriminator 100 shall reduce its own discriminatory loss (e.g., through gradient methods or other methods) and update its own internal parameters. Accordingly, the GAN generator 11 is trained to provide better and better audio signals 16, while the GAN discriminator 100 is trained to distinguish real audio signals 104 from the fake audio signals 16 generated by the GAN generator 11. In general terms, it may be understood that the GAN generator 11 may include the functionalities of the generator 10, without at least the functionalities of the GAN discriminator 100. Therefore, in most of the foregoing, it may be understood that the GAN generator 11 and the audio generator may have more or less the same features, apart from those of the discriminator 100. The audio generator 10 may include the discriminator 100 as an internal component. Therefore, the GAN generator 11 and the GAN discriminator 100 may together constitute the audio generator 10. In examples where the GAN discriminator 100 is not present, the audio generator can be constituted solely by the GAN generator 11.


As explained by the wording “conditioning set of learnable layers”, the audio generator 10 may be obtained according to the paradigms of conditional GANs, e.g. based on conditional information. For example, the conditional information may be constituted by the target data 12 (or an up-sampled version thereof), from which the conditioning set of layers 71-73 (weight layer) is trained and the conditioning feature parameters 74, 75 are obtained. Therefore, the styling element 77 is conditioned by the learnable layers 71-73.


The examples may be based on convolutional neural networks. For example, a small matrix (e.g., a filter or kernel), which could be a 3×3 matrix (or a 4×4 matrix, etc.), is convolved along a bigger matrix (e.g., the channels x samples latent or input signal, and/or the spectrogram or upsampled spectrogram, or more generally the target data 12), e.g. implying a combination (e.g., multiplication and sum of the products; dot product, etc.) between the elements of the filter (kernel) and the elements of the bigger matrix (activation map, or activation signal). During training, the elements of the filter (kernel) that minimize the losses are learnt. During inference, the elements of the filter (kernel) which have been obtained during training are used. Examples of convolutions are at blocks 71-73, 61a, 61b, 62a, 62b (see below). Where a block is conditional (e.g., block 60 of FIG. 3), the convolution is not necessarily applied to the signal evolving from the input signal 14 towards the audio signal 16 through the intermediate signals 59a (15), 69, etc., but may be applied to the target data 12. In other cases (e.g. at blocks 61a, 61b, 62a, 62b) the convolution may not be conditional, and may for example be directly applied to the signal 59a (15), 69, etc., evolving from the input signal 14 towards the audio signal 16. As can be seen from FIGS. 3 and 4, both conditional and non-conditional convolutions may be performed.
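The following is a minimal sketch of such a convolution of a small learnable kernel along a bigger channels x samples matrix; the channel and sample counts, and the kernel size, are illustrative assumptions only.

import torch
import torch.nn as nn

# Toy activation map: batch of 1, 64 channels, 176 samples (example shapes only).
activation = torch.randn(1, 64, 176)

# A learnable 1-D convolution: its kernel plays the role of the small filter
# convolved along the bigger matrix; its elements are learnt during training.
conv = nn.Conv1d(in_channels=64, out_channels=64, kernel_size=3, padding=1)

out = conv(activation)   # dot products between the kernel and local patches
print(out.shape)         # torch.Size([1, 64, 176])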


It is possible to have, in some examples, activation functions downstream of the convolution (ReLu, TanH, softmax, etc.), which may be different in accordance with the intended effect. ReLu may output the maximum between 0 and the value obtained at the convolution (in practice, it maintains the same value if it is positive, and outputs 0 in case of a negative value). Leaky ReLu may output x if x>0, and 0.1*x if x≤0, x being the value obtained by convolution (instead of 0.1 another value, such as a predetermined value within 0.1±0.05, may be used in some examples). TanH (which may be implemented, for example, at block 63a and/or 63b) may provide the hyperbolic tangent of the value obtained at the convolution, e.g.





TanH(x) = (e^x − e^−x) / (e^x + e^−x),


with x being the value obtained at the convolution (e.g. at block 61a and/or 61b). Softmax (e.g. applied, for example, at block 64a and/or 64b) may apply the exponential to each element of the result of the convolution (e.g., as obtained in block 62a and/or 62b), and normalize it by dividing by the sum of the exponentials. Softmax (e.g. at 64a and/or 64b) may provide a probability distribution for the entries of the matrix which results from the convolution (e.g. as provided at 62a and/or 62b). After the application of the activation function, a pooling step may be performed (not shown in the figures) in some examples, but in other examples it may be avoided.



FIG. 4 shows that it is also possible to have a softmax-gated TanH function, e.g. by multiplying (e.g. at 65a and/or 65b) the result of the TanH function (e.g. obtained at 63a and/or 63b) with the result of the softmax function (e.g. obtained at 64a and/or 64b).
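A minimal sketch of such a softmax-gated TanH (and of the leaky ReLu with slope 0.1 mentioned above) follows; the tensor shapes and the axis along which the softmax is computed are assumptions for illustration.

import torch
import torch.nn.functional as F

def softmax_gated_tanh(a, b):
    # a, b: outputs of two parallel convolutions (e.g. 61a and 62a),
    # shape (batch, channels, samples).
    gate = F.softmax(b, dim=1)      # probability distribution (e.g. block 64a)
    return torch.tanh(a) * gate     # elementwise gating (e.g. multiplier 65a)

def leaky_relu_01(x):
    # Leaky ReLu with slope 0.1 for non-positive inputs, as described above.
    return torch.where(x > 0, x, 0.1 * x)

a = torch.randn(1, 64, 88)
b = torch.randn(1, 64, 88)
print(softmax_gated_tanh(a, b).shape)   # torch.Size([1, 64, 88])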


Multiple layers of convolutions (e.g. a conditioning set of learnable layers) may be arranged one after another and/or in parallel with each other, so as to increase the efficiency. If the application of the activation function and/or the pooling is provided, it may also be repeated in different layers (or different activation functions may be applied to different layers, for example).


The input signal 14 (e.g. noise) is processed, at different steps, to become the generated audio signal 16 (e.g. under the conditions set by the conditioning sets of learnable layers 71-73, and based on the parameters 74, 75 learnt by the conditioning sets of learnable layers 71-73). Therefore, the input signal is to be understood as evolving in a direction of processing (from 14 to 16 in FIG. 6) towards becoming the generated audio signal 16 (e.g. speech). The conditions will be substantially generated based on the target signal 12, and on the training (so as to arrive at the most advantageous set of parameters 74, 75).


It is also noted that the multiple channels of the input signal (or any of its evolutions) may be considered to have a set of learnable layers and a styling element associated thereto. For example, each row of the matrixes 74 and 75 is associated to a particular channel of the input signal (or one of its evolutions), and is therefore obtained from a particular learnable layer associated to the particular channel. Analogously, the styling element 77 may be considered to be formed by a multiplicity of styling elements (each for each row of the input signal x, c, 12, 76, 76′, 59, 59a, 59b, etc.).



FIG. 1 shows an example of the audio generator 10 (which may embody the audio generator of FIG. 6), and which may also comprise (e.g. be) a GAN generator 11. Notably, FIG. 1 only shows elements of the waveform synthesis block 1120, since the target data 12 are already converted from the text 112. The target data 12, e.g. obtained from text, are indicated as a mel-spectrogram, the input signal 14 may be latent noise, and the output signal 16 may be speech (other examples are nevertheless possible, as explained above). As can be seen, the input signal 14 has only one sample and 128 channels (other numbers can be defined). The noise vector 14 may be obtained as a vector with 128 channels (but other numbers are possible) and may have a zero-mean normal distribution. The noise vector may follow the formula






z˜N(0,I128).


The noise vector may be random noise of dimension 128, generated with mean 0 and with an autocorrelation matrix (square, 128×128) equal to the identity I (different choices may be made). Hence, in examples the generated noise can be completely decorrelated between the channels and of variance 1 (energy). N(0,I128) may be generated at every 22528 generated samples (or other numbers may be chosen for different examples); the dimension may therefore be 1 in the time axis and 128 in the channel axis.
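A minimal sketch of drawing such a noise vector z ~ N(0, I128) (one time step, 128 decorrelated channels of unit variance) is given below; the random seed and the array layout are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# z ~ N(0, I_128): one time step, 128 channels, zero mean, identity covariance,
# i.e. unit variance and no correlation between channels.
z = rng.standard_normal((128, 1))

print(z.shape)            # (128, 1)
print(float(z.var()))     # close to 1 (unit variance), up to sampling noise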


It will be shown that the noise vector 14 is step-by-step processed (e.g., at blocks 50a-50h, 42, 44, 46, etc.), so as to evolve from, e.g., noise 14 to, e.g., speech 16 (the evolving signal will be indicated, for example, with different signals 15, 59a, x, c, 76′, 79, 79a, 59b, 79b, 69, etc.).


At block 30, the input signal (noise) 14 may be upsampled to have 88 samples (different numbers are possible) and 64 channels (different numbers are possible).


As can be seen, eight processing blocks 50a, 50b, 50c, 50d, 50e, 50f, 50g, 50h (altogether embodying the first processing block 50 of FIG. 6) may increase the number of samples by performing an upsampling (e.g., at most a 2× upsampling). The number of channels may remain the same (e.g., 64) along blocks 50a, 50b, 50c, 50d, 50e, 50f, 50g, 50h. The samples may be, for example, the number of samples per second (or other time unit): we may obtain, at the output of block 50h, sound at more than 22 kHz.
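As a quick check of the sample counts implied here, starting from the 88 samples output by block 30 and doubling at each of the eight blocks 50a-50h yields 88 x 2^8 = 22528 samples, which at 22.05 kHz corresponds to slightly more than one second of audio; the loop below is only a worked illustration of this arithmetic.

samples = 88
for block in range(8):       # blocks 50a-50h, each performing a 2x upsampling
    samples *= 2
print(samples)               # 22528
print(samples / 22050.0)     # about 1.02 seconds of audio at 22.05 kHz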


Each of the blocks 50a-50h (50) can also be a TADEResBlock (residual block in the context of TADE, Temporal Adaptive DEnormalization). Notably, each block 50a-50h may be conditioned by the target data 12 (e.g., text feature, linguistic feature, or acoustic feature, such as a mel-spectrogram).


At a second processing block 45 (FIGS. 1 and 6), only one single channel may be obtained, and multiple samples are obtained in one single dimension. As can be seen, another TADEResBlock 42 (further to blocks 50a-50h) is used. Then, a convolution layer 44 (which reduces the signal to one single channel) and an activation function (which may be TanH 46, for example) may be applied. After that, the speech 16 is obtained (and, possibly, stored, rendered, encoded, etc.).


At least one of the blocks 50a-50h (or each of them, in particular examples) may be, for example, a residual block. A residual block applies a prediction only to a residual component of the signal evolving from the input signal 14 (e.g. noise) to the output audio signal 16. The residual signal is only a part (residual component) of the main signal. For example, multiple residual signals may be added to each other, to obtain the final output audio signal 16.



FIG. 4 shows an example of one of the blocks 50a-50h (50). As can be seen, each block 50 is fed with first data 59a, which is either the input signal 14 (or an upsampled version thereof, such as that output by the upsampling block 30) or the output of a preceding block. For example, the block 50b may be fed with the output of block 50a; the block 50c may be fed with the output of block 50b, and so on.


In FIG. 4, it is therefore possible to see that the first data 59a provided to the block 50 (50a-50h) is processed and that its output is the output signal 69 (which will be provided as input to the subsequent block). As indicated by the line 59a′, a main component of the first data 59a inputted to the first processing block 50a-50h (50) actually bypasses most of the processing of the first processing block 50a-50h (50). For example, blocks 60a, 61a, 62a, 63a, 65a, 60b, 61b, 62b, 63b, 64b, and 65b are bypassed by the bypassing line 59a′. The first data 59a will subsequently be added to a residual portion 65b′ at an adder 65c (which is indicated in FIG. 4, but not shown). This bypassing line 59a′ and the addition at the adder 65c may be understood as instantiating the fact that each block 50 (50a-50h) applies its operations to residual signals, which are then added to the main portion of the signal. Therefore, each of the blocks 50a-50h can be considered a residual block.


Notably, the addition at adder 65c does not necessarily need to be performed within the residual block 50 (50a-50h). A single addition of a plurality of residual signals 65b′ (each output by one of the residual blocks 50a-50h) can be performed (e.g., at an adder block in the second processing block 45, for example). Accordingly, the different residual blocks 50a-50h may operate in parallel with each other.
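A minimal sketch of this residual bookkeeping follows: each block contributes only a residual, which is either added locally to the bypassed main signal (adder 65c) or collected and added in one single later addition; the stand-in block function and the tensor shapes are assumptions for illustration.

import torch

def residual_contribution(x):
    # Stand-in for the processing inside one block 50a-50h (TADE, convolutions,
    # gated activations); here just a placeholder transformation.
    return 0.1 * torch.tanh(x)

x = torch.randn(1, 64, 88)            # main signal entering the chain of blocks

# Variant 1: local addition inside each block (bypass line 59a' plus adder 65c).
y = x
for _ in range(8):
    y = y + residual_contribution(y)

# Variant 2: a single addition of all residual signals computed from the main signal.
residuals = [residual_contribution(x) for _ in range(8)]
z = x + torch.stack(residuals).sum(dim=0)
print(y.shape, z.shape)               # both torch.Size([1, 64, 88])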


In the example of FIG. 4, each block 50 may repeat its convolution layers twice (e.g., first at replica 600, including at least one of blocks 60a, 61a, 62a, 63a, 64a, 65a, and obtaining signal 59b; then, at replica 601, including at least one of blocks 60b, 61b, 62b, 63b, 64b, 65b, and obtaining signal 65b′, which may be added to the main component 59a′).


For each replica (600, 601), a conditioning set of learnable layers 71-73 and a styling element 77 is applied (e.g. twice for each block 50) to the signal evolving from the input signal 14 to the audio output signal 16. A first temporal adaptive denormalization (TADE) is performed at TADE block 60a on the first data 59a at the first replica 600. The TADE block 60a performs a modulation of the first data 59a (input signal or, e.g., processed noise) under the conditions set out by the target data 12. In the first TADE block 60a, an upsampling of the target data 12 may be performed at upsampling block 70, to obtain an upsampled version 12′ of the target data 12. The upsampling may be obtained through non-linear interpolation, e.g. using a factor of 2, a power of 2, a multiple of two, or another value greater than 2. Accordingly, in some examples it is possible that the spectrogram 12′ has the same dimensions as (e.g. conforms to) the signal (76, 76′, x, c, 59, 59a, 59b, etc.) to be conditioned by the spectrogram. An application of stylistic information to the processed noise (first data) (76, 76′, x, c, 59, 59a, 59b, etc.) may be performed at block 77 (styling element). In a subsequent replica 601, another TADE block 60b may be applied to the output 59b of the first replica 600. An example of the TADE block 60 (60a, 60b) is provided in FIG. 3 (see also below). After the first data 59a have been modulated, convolutions 61a and 62a are carried out. Subsequently, activation functions TanH and softmax (e.g. constituting the softmax-gated TanH function) are also performed (63a, 64a). The outputs of the activation functions 63a and 64a are multiplied at multiplier block 65a (e.g. to instantiate the gating), to obtain a result 59b. In case two different replicas 600 and 601 are used (or in case more than two replicas are used), the operations of blocks 60a, 61a, 62a, 63a, 64a, 65a are repeated (e.g. at blocks 60b, 61b, 62b, 63b, 64b, 65b).


In examples, the first and second convolutions at 61b and 62b, respectively downstream of the TADE blocks 60a and 60b, may be performed with the same number of elements in the kernel (e.g., 9, e.g., 3×3). However, the second convolutions 61b and 62b may have a dilation factor of 2. In examples, the maximum dilation factor for the convolutions may be 2 (two).



FIG. 3 shows an example of a TADE block 60 (60a, 60b). As can be seen, the target data 12 may be upsampled, e.g. so as to conform to the input signal (or a signal evolving therefrom, such as 59, 59a, 76′, also called latent signal or activation signal). Here, convolutions 71, 72, 73 may be performed (an intermediate value of the target data 12 is indicated with 71′), to obtain the parameters γ (gamma, 74) and β (beta, 75). The convolution at any of 71, 72, 73 may also involve a rectified linear unit, ReLu, or a leaky rectified linear unit, leaky ReLu. The parameters γ and β may have the same dimensions as the activation signal (the signal being processed to evolve from the input signal 14 to the generated audio signal 16, which is here represented as x, 59, or 76′ when in normalized form). Therefore, when the activation signal (x, 59, 76′) has two dimensions, γ and β (74 and 75) also have two dimensions, and each of them is superimposable on the activation signal (the length and the width of γ and β may be the same as the length and the width of the activation signal). At the stylistic element 77, the conditioning feature parameters 74 and 75 are applied to the activation signal (which is the first data 59a or the signal 59b output by the multiplier 65a). It is to be noted, however, that the activation signal 76′ may be a normalized version (e.g. normalized at instance norm block 76) of the first data 59, 59a, 59b (15). It is also to be noted that the formula shown in the stylistic element 77 (γx+β) may be an element-by-element product, and not a convolutional product or a dot product (and in fact γx+β is also indicated as x ⊙ γ + β, where ⊙ indicates elementwise multiplication).
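A minimal sketch of such a TADE-style modulation is given below, under the assumption of illustrative layer and kernel sizes, a nearest-neighbour upsampling of the conditioning, and the channel/mel-band counts used elsewhere in this description; it is not the actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TADELikeBlock(nn.Module):
    # Sketch of a temporal adaptive denormalization step (blocks 70, 71-73, 76, 77).
    def __init__(self, channels=64, mel_bands=80):
        super().__init__()
        self.norm = nn.InstanceNorm1d(channels)                           # block 76
        self.conv_shared = nn.Conv1d(mel_bands, channels, 3, padding=1)   # e.g. 71
        self.conv_gamma = nn.Conv1d(channels, channels, 3, padding=1)     # e.g. 72
        self.conv_beta = nn.Conv1d(channels, channels, 3, padding=1)      # e.g. 73

    def forward(self, x, mel):
        # Upsample the conditioning (block 70) to the length of the activation.
        mel = F.interpolate(mel, size=x.shape[-1], mode="nearest")
        h = F.leaky_relu(self.conv_shared(mel), 0.1)                      # e.g. 71'
        gamma = self.conv_gamma(h)                                        # parameters 74
        beta = self.conv_beta(h)                                          # parameters 75
        return self.norm(x) * gamma + beta                                # styling element 77

x = torch.randn(1, 64, 176)      # activation signal (e.g. 59a)
mel = torch.randn(1, 80, 22)     # target data 12 (e.g. mel-spectrogram)
print(TADELikeBlock()(x, mel).shape)   # torch.Size([1, 64, 176])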


After the stylistic element 77, the signal is output. The convolutions 72 and 73 do not necessarily have an activation function downstream of them. It is also noted that the parameter γ (74) may be understood as a variance and β (75) as a bias. Also, block 42 of FIG. 1 may be instantiated as block 50 of FIG. 4. Then, for example, a convolutional layer 44 will reduce the number of channels to 1 and, after that, a TanH 46 is performed to obtain speech 16.



FIG. 7 shows an example of the evolution, in one of the replicas 600 and 601 of one of the blocks 50a-50h, of:

    • the target data 12 (e.g. mel-spectrogram); and
    • the latent noise c (14), also indicated with 59a, or the signal evolving from the input signal 14 towards the generated audio signal 16.


Notably, 61a, 61b, 62a, 62b may be (or be part of) a set of learnable layers configured to process data derived from the first data (e.g., in turn, derived from the input signal 14) using an activation function (e.g. 63a, 64a, 63b, 64b) which is a gated activation function (second activation function).


This set of learnable layers may consist of one or two or even more convolutional layers. The second activation function may be a gated activation function (e.g. TanH and softmax). This feature may combine with the fact that the first activation function (for obtaining the first convoluted data 71′) is a ReLu or a leaky ReLu.


The following procedure (or at least one of its steps) may be performed:

    • From an input, such as text 112 (e.g. American Standard Code for Information Interchange, ASCII, code, or another type of code) a target data 12 (e.g. a text feature, a linguistic feature, or an acoustic feature, such as a mel-spectrogram) is generated (different types of target data may be used).
    • The target data (e.g. spectrogram) 12 is subjected to at least one of the following steps:
      • Upsampled at upsampling block 70, to obtain an upsampled spectrogram 12′;
      • At convolutional layers 71-73 (part of a weight layer), convolutions are performed (e.g. a kernel 12a is convolved along the upsampled spectrogram 12′);
      • γ (74) and β (75) are obtained (learnt);
      • γ (74) and β (75) are applied (e.g. at the styling element 77) to the latent signal 59a (15), evolving from the input signal 14 towards the generated audio signal 16.


TTS


Text to speech (TTS) (e.g. as performed using block 1110) aims to synthesize intelligible and natural-sounding speech 16 given a text 112. It could have broad applications in the industry, especially for machine-to-human communication.


The inventive audio generator 10 includes different components, among them the vocoder 1120 in the last stage, which mainly includes block(s) for converting text features, linguistic features or acoustic features into the audio waveform 16.


In particular, at block 1110 the text 112 (input) may be analyzed and linguistic features may be extracted from the text 112, e.g. by a text analysis module (sub-block) 1112, as shown in FIG. 9a. Text analysis may include, e.g., multiple tasks like text normalization, word segmentation, prosody prediction and grapheme-to-phoneme conversion (see also FIG. 8). These linguistic features (which may play the role of intermediate target data 212) are then converted, e.g. through an acoustic model (e.g. by sub-block 1114), into acoustic features, like MFCCs, fundamental frequency, a mel-spectrogram for example, or a combination of those, which may constitute the target data 12 of FIGS. 1 and 3-8.


It is worth noting that this pipeline can be replaced by end-to-end processing, e.g. through the introduction of DNNs. For example, it is possible to condition the neural vocoder 1120 directly from linguistic features (e.g. in cases B and D of FIG. 10), or an acoustic model could directly process characters, bypassing the text analysis stage (the sub-block 1114 in FIG. 9a being not used). For example, some end-to-end models like Tacotron 1 and 2 may be used in block 1110 to simplify the text analysis modules and directly take character/phoneme sequences as input sequence, e.g. outputting acoustic features (target data 12), e.g. in the form of mel-spectrograms.


The current solution can be employed as a TTS system (i.e. including both blocks 1110 and 1120), wherein the target data 12 may include, in some examples, a stream of information or a speech representation derived from the text 112. The representation could be for example characters or phonemes derived from a text 112, i.e. the usual inputs of the text analysis block 1110. In this case, a pre-conditioning learnable layer may be used for block 1110, e.g. for extracting acoustic features or appropriate conditioning features (target data 12) for the neural vocoder (e.g. block 1120). This pre-conditioning layer 1110 may use deep neural networks (DNNs) like an encoder-attention-decoder architecture to map characters or phonemes directly to acoustic features. Alternatively, the representation (target data) 12 can be or include linguistic features, i.e. phonemes associated with information like prosody, intonation, pauses, etc. In this case, the pre-conditioning learnable layer 1110 can be an acoustic model mapping the linguistic features to acoustic features based on statistical models such as a Hidden Markov model (HMM), a deep neural network (DNN) or a recurrent neural network (RNN). Finally, the target data 12 could directly include acoustic features derived from the text 112, which may be used as conditioning features e.g. after a learnable or a deterministic pre-conditioning layer 1110. In an extreme case (e.g. in case F of FIG. 10), the acoustic features in the target data 12 can be used directly as the conditioning features and any pre-conditioning layer bypassed.


By virtue of the above, the audio synthesis block 1110 (text analysis block) may be deterministic in some examples, but may be obtained through at least one learnable layer in other cases.


In examples, the target data 12 may include acoustic features like log-spectrogram, or a spectrogram, or MFCCs or a mel-spectrogram obtained from a text 112.


Alternatively, the target data 12 may include linguistic features like phonemes, word prosody, intonation, phrase breaks, or filled pauses obtained from a text.


The target data may be derived from a text using at least one of statistical models, learnable models or rules-based algorithm, which may include a text analysis and/or an acoustic model.


In general terms, therefore, the audio synthesis block 1110 which outputs the target data 12 from the input (e.g. text), such as the text 112 (so that the target data 12 are derived from the text 112), can be either a deterministic block or a learnable block.


In general terms, the target data 12 may have multiple channels, while the text 112 (from which the target data 12 derive) may have one single channel.



FIG. 9a shows an example of a generator 10a (which can be an example of the generator 10) in which the target data 12 comprise at least one of the acoustic features like a log-spectrogram, or a spectrogram, or MFCCs or a mel-spectrogram obtained from the text 112. Here, the block 1110 includes a text analysis block 1112 which provides intermediate target data 212 which may include at least one of the linguistic features like phonemes, word prosody, intonation, phrase breaks, or filled pauses obtained from the text 112. Subsequently, an audio synthesis block 1114 (e.g. using an acoustic model) may generate the target data 12 as at least one of the acoustic features like a log-spectrum, or a spectrogram, or MFCCs or a mel-spectrogram obtained from the text 112.


After that, the waveform synthesis block 1120 (which can be any of the waveform synthesis blocks discussed above) may be used to generate an output audio signal 16.



FIG. 9b shows an example of a generator 10b (which may be an example of the generator 10) in which the target data 12 comprise at least one of linguistic features like phonemes, word prosody, intonation, phrase breaks, or filled pauses obtained from the text 112. A waveform synthesis block (e.g. vocoder 1120) can be used to output an audio signal 16. The waveform synthesis block 1120 can be any of those described in FIGS. 1-8 discussed above. In this case, for example, the target data can be directly ingested by the conditioning set of learnable layers 71-73 to obtain γ and β (74 and 75).


FIG. 9c shows an example of a generator 10c (which may be an example of any of the generators 10 of FIGS. 1-8) in which the text 112 is used directly as target data. Here, the target data 12 comprise at least one of characters or words obtained from the text 112. The waveform synthesis block 1120 may be any of the examples discussed above.


In general terms, any of the audio generators above (in particular any of the text analysis blocks 1110, e.g. any of FIG. 8 or 9a-9c) may derive the target data from a text using at least one of statistical models, learnable models or rule-based algorithms, consisting of a text analysis and/or an acoustic model.


In some examples, the target data 12 may be obtained deterministically by block 1110. In other examples, the target data 12 may be obtained non-deterministically, and block 1110 may be a learnable layer or a plurality of learnable layers.


GAN Discriminator


The GAN discriminator 100 of FIG. 2 may be used during training for obtaining, for example, the parameters 74 and 75 to be applied to the input signal 14 (or a processed and/or normalized version thereof). The training may be performed before inference, and the parameters 74 and 75 may be, for example, stored in a non-transitory memory and used subsequently (however, in some examples it is also possible that the parameters 74 or 75 are calculated online).


The GAN discriminator 100 has the role of learning how to recognize the generated audio signals (e.g., the audio signal 16 synthesized as discussed above) from real input signals (e.g. real speech) 104. Therefore, the role of the GAN discriminator 100 is mainly exerted during training (e.g. for learning the parameters 74 and 75) and is seen in counterposition to the role of the GAN generator 11 (which may be seen as the audio generator 10 without the GAN discriminator 100).


In general terms, the GAN discriminator 100 may be fed with both the audio signal 16 synthesized by the GAN generator 11, and a real audio signal (e.g., real speech) 104 acquired e.g. through a microphone, and may process the signals to obtain a metric (e.g., a loss) which is to be minimized. The real audio signal 104 can also be considered a reference audio signal. During training, operations like those explained above for synthesizing speech 16 may be repeated, e.g. multiple times, so as to obtain the parameters 74 and 75, for example.


In examples, instead of analyzing the whole reference audio signal 104 and/or the whole generated audio signal 16, it is possible to only analyze a part thereof (e.g. a portion, a slice, a window, etc.). Signal portions generated in random windows (105a-105d) sampled from the generated audio signal 16 and from the reference audio signal 104 are obtained. For example, random window functions can be used, so that it is not a priori pre-defined which window 105a, 105b, 105c, 105d will be used. Also, the number of windows is not necessarily four, and may vary.
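The following is a minimal sketch of slicing such random windows from a waveform (generated or reference); the window length, the number of windows and the stand-in waveform are assumptions for illustration.

import numpy as np

rng = np.random.default_rng(0)

def random_windows(waveform, window_len, num_windows):
    # Slice 'num_windows' windows of length 'window_len' at random positions,
    # so that the analyzed segments are not fixed a priori.
    starts = rng.integers(0, len(waveform) - window_len + 1, size=num_windows)
    return [waveform[s:s + window_len] for s in starts]

speech = rng.standard_normal(22050)          # stand-in for signal 16 or 104
windows = random_windows(speech, 2048, 4)    # e.g. windows 105a-105d
print([len(w) for w in windows])             # [2048, 2048, 2048, 2048]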


Within the windows (105a-105d), a Pseudo Quadrature Mirror Filter-bank (PQMF) 110 may be applied. Hence, subbands 120 are obtained. Accordingly, a decomposition (110) of the representation of the generated audio signal (16) or of the representation of the reference audio signal (104) is obtained.


An evaluation block 130 may be used to perform the evaluations. Multiple evaluators 132a, 132b, 132c, 132d (collectively indicated with 132) may be used (a different number may be used). In general, each window 105a, 105b, 105c, 105d may be input to a respective evaluator 132a, 132b, 132c, 132d. Sampling of the random window (105a-105d) may be repeated multiple times for each evaluator (132a-132d). In examples, the number of times the random window (105a-105d) is sampled for each evaluator (132a-132d) may be proportional to the length of the representation of the generated audio signal or the representation of the reference audio signal (104). Accordingly, each of the evaluators (132a-132d) may receive as input one or several portions (105a-105d) of the representation of the generated audio signal (16) or of the representation of the reference audio signal (104).


Each evaluator 132a-132d may be a neural network itself. Each evaluator 132a-132d may, in particular, follow the paradigms of convolutional neural networks. Each evaluator 132a-132d may be a residual evaluator. Each evaluator 132a-132d may have parameters (e.g. weights) which are adapted during training (e.g., in a manner similar to one of those explained above).


As shown in FIG. 2, each evaluator 132a-132d also performs a downsampling (e.g., by 4 or by another downsampling ratio). The number of channels increases for each evaluator 132a-132d (e.g., by 4, or in some examples by a number which is the same as the downsampling ratio).


Upstream and/or downstream of the evaluators, convolutional layers 131 and/or 134 may be provided. An upstream convolutional layer 131 may have, for example, a kernel with dimensions of, e.g., 5×3 or 3×5. A downstream convolutional layer 134 may have, for example, a kernel with dimension 3 (e.g., 3×3).


During training, a loss function (adversarial loss) 140 may be optimized. The loss function 140 may include a fixed metric (e.g. obtained during a pretraining step) between a generated audio signal (16) and a reference audio signal (104). The fixed metric may be obtained by calculating one or several spectral distortions between the generated audio signal (16) and the reference audio signal (104). The distortion may be measured by taking into account (a sketch of such a multi-resolution spectral metric is given after the list below):

    • magnitude or log-magnitude of the spectral representation of the generated audio signal (16) and the reference audio signal (104), and/or
    • different time or frequency resolutions.
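As announced above, here is a minimal sketch of a fixed spectral metric computed from the log-magnitude spectra of the generated and reference signals at several time/frequency resolutions; the FFT sizes, hop lengths and the use of a mean absolute log-magnitude difference are assumptions for illustration, not the exact metric.

import torch

def multi_resolution_spectral_distortion(gen, ref, fft_sizes=(512, 1024, 2048)):
    # Compare log-magnitude spectra of the generated (16) and reference (104)
    # signals at several resolutions and average the distortions.
    loss = 0.0
    for n_fft in fft_sizes:
        window = torch.hann_window(n_fft)
        G = torch.stft(gen, n_fft, hop_length=n_fft // 4, window=window,
                       return_complex=True).abs().clamp_min(1e-7)
        R = torch.stft(ref, n_fft, hop_length=n_fft // 4, window=window,
                       return_complex=True).abs().clamp_min(1e-7)
        loss = loss + (torch.log(G) - torch.log(R)).abs().mean()
    return loss / len(fft_sizes)

gen = torch.randn(22050)     # stand-in for the generated audio signal 16
ref = torch.randn(22050)     # stand-in for the reference audio signal 104
print(float(multi_resolution_spectral_distortion(gen, ref)))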


In examples, the adversarial loss may be obtained by randomly supplying and evaluating a representation of the generated audio signal (16) or a representation of the reference audio signal (104) by one or more evaluators (132). The evaluation may comprise classifying the supplied audio signal (16, 104) into a predetermined number of classes indicating a pretrained classification level of naturalness of the audio signal (16, 104). The predetermined number of classes may be, for example, “REAL” vs “FAKE”.


Examples of losses may be obtained as






L(D; G) = Ex,z[ReLU(1 − D(x)) + ReLU(1 + D(G(z; s)))],


where:

    • x is the real speech 104,
    • z is the latent noise 14 (or more in general the input signal or the first data or the latent),
    • s is the mel-spectrogram of x (or more generally the target signal 12),
    • D(...) is the output of the evaluators in terms of a probability distribution (D(...)=0 meaning “for sure fake”, D(...)=1 meaning “for sure real”).


The spectral reconstruction loss Lrec is still used for regularization to prevent the emergence of adversarial artifacts. The final loss can be, for example:






L = ¼ Σi=1..4 L(Di; G) + Lrec.


where each index i corresponds to the contribution of one evaluator 132a-132d (e.g., each evaluator 132a-132d providing a different Di) and Lrec is the pretrained (fixed) spectral reconstruction loss.
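A minimal sketch of combining the hinge losses of the four evaluators with the reconstruction term, following the two formulas above, is given below; the evaluator outputs and the reconstruction loss value are placeholder tensors, not actual model outputs.

import torch
import torch.nn.functional as F

def hinge_loss(d_real, d_fake):
    # L(D; G) = E[ReLU(1 - D(x))] + E[ReLU(1 + D(G(z; s)))]
    return F.relu(1.0 - d_real).mean() + F.relu(1.0 + d_fake).mean()

# Placeholder outputs of the four evaluators 132a-132d on real and generated audio.
d_real = [torch.randn(8) for _ in range(4)]
d_fake = [torch.randn(8) for _ in range(4)]
l_rec = torch.tensor(0.3)   # placeholder for the pretrained (fixed) reconstruction loss

# L = 1/4 * sum over i of L(Di; G), plus Lrec
total = sum(hinge_loss(r, f) for r, f in zip(d_real, d_fake)) / 4 + l_rec
print(float(total))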


During training, there is a search for the minimum value of L, which may be expressed for example as







minG ( Ez[ Σi=1..4 −Di(G(s, z)) ] + Lrec )




Other kinds of minimizations may be performed.


In general terms, the minimum adversarial losses 140 are associated to the best parameters (e.g., 74, 75) to be applied to the stylistic element 77.


DISCUSSION

Examples of the present disclosure are described in detail using the accompanying descriptions. In particular, in the following description, many details are described in order to provide a more thorough explanation of examples of the disclosure. However, it will be apparent to those skilled in the art that other examples can be implemented without these specific details. Features of the different examples described can be combined with one another, unless features of a corresponding combination are mutually exclusive or such a combination is expressly excluded.


It should be pointed out that the same or similar elements, or elements that have the same functionality, can be provided with the same or similar reference symbols or are designated identically, with a repeated description of elements that are provided with the same or similar reference symbols or that are designated identically typically being omitted. Descriptions of elements that have the same or similar reference symbols or that are labeled the same are interchangeable.


Neural vocoders have proven to outperform classical approaches in the synthesis of natural high-quality speech in many applications, such as text-to-speech, speech coding, and speech enhancement. The first groundbreaking generative neural network to synthesize high-quality speech was WaveNet, and shortly thereafter many other approaches were developed. These models offer state-of-the-art quality, but often at a very high computational cost and with very slow synthesis. An abundance of models generating speech with low computational cost has been presented in recent years. Some of these are optimized versions of existing models, while others leverage the integration with classical methods. Besides, many completely new approaches were also introduced, often relying on GANs. Most GAN vocoders offer very fast generation on GPUs, but at the cost of compromising the quality of the synthesized speech.


One of the main objectives of this work is to propose a GAN architecture, which we call StyleMelGAN (and which may be implemented, for example, in the audio generator 10), that can synthesize very high-quality speech 16 at low computational cost and with fast training. StyleMelGAN's generator network may contain 3.86 M trainable parameters, and synthesize speech at 22.05 kHz around 2.6× faster than real-time on CPU and more than 54× faster on GPU. The model may consist, for example, of eight up-sampling blocks, which gradually transform a low-dimensional noise vector (e.g., 30 in FIG. 1) into the raw speech waveform (e.g. 16). The synthesis may be conditioned on the mel-spectrogram of the target speech (or more generally by target data 12), which may be inserted in every generator block (50a-50h) via a Temporal Adaptive DEnormalization (TADE) layer (60, 60a, 60b). This approach for inserting the conditioning features is very efficient and, as far as we know, new in the audio domain. The adversarial loss is computed (e.g. through the structure of FIG. 2, in the GAN discriminator 100) by an ensemble of four discriminators 132a-132d (but in some examples a different number of discriminators is possible), each operating after a differentiable Pseudo Quadrature Mirror Filter-bank (PQMF) 110. This permits analyzing different frequency bands of the speech signal (104 or 16) during training. In order to make the training more robust and favor generalization, the discriminators (e.g. the four discriminators 132a-132d) are not conditioned on the input acoustic features used by the generator 10, and the speech signal (104 or 16) is sampled using random windows (e.g. 105a-105d).


To summarize, StyleMelGAN is proposed, which is a low-complexity GAN for high-quality speech synthesis conditioned on a mel-spectrogram (e.g. 12) via TADE layers (e.g. 60, 60a, 60b). The generator 10 may be highly parallelizable. The generator 10 may be completely convolutional. The aforementioned generator 10 may be trained adversarially with an ensemble of PQMF multi-sampling random window discriminators (e.g. 132a-132d), which may be regularized by multi-scale spectral reconstruction losses. The quality of the generated speech 16 can be assessed using objective (e.g. Fréchet scores) and/or subjective assessments. Two listening tests were conducted, a MUSHRA test for the copy-synthesis scenario and a P.800 ACR test for the TTS one, both confirming that StyleMelGAN achieves state-of-the-art speech quality.


Existing neural vocoders usually synthesize speech signals directly in the time domain, by modelling the amplitude of the final waveform. Most of these models are generative neural networks, i.e. they model the probability distribution of the speech samples observed in natural speech signals. They can be divided into autoregressive models, which explicitly factorize the distribution into a product of conditional ones, and non-autoregressive or parallel models, which instead model the joint distribution directly. Autoregressive models like WaveNet, SampleRNN and WaveRNN have been reported to synthesize speech signals of high perceptual quality. A big family of non-autoregressive models is that of Normalizing Flows, e.g. WaveGlow. A hybrid approach is the use of Inverse Autoregressive Flows, which use a factorized transformation between a noise latent representation and the target speech distribution. The examples above mainly refer to autoregressive neural networks.


Early applications of GANs for audio include WaveGAN for unconditioned speech generation, and GANSynth for music generation. MelGAN learns a mapping between the mel-spectrogram of speech segments and their corresponding time-domain waveforms. It ensures faster than real-time generation and leverages adversarial training of multi-scale discriminators regularized by spectral reconstruction losses. GAN-TTS is the first GAN vocoder to use uniquely adversarial training for speech generation conditioned on acoustic features. Its adversarial loss is calculated by an ensemble of conditional and unconditional random window discriminators. Parallel WaveGAN uses a generator, similar to WaveNet in structure, trained using an unconditioned discriminator regularized by a multi-scale spectral reconstruction loss. Similar ideas are used in Multi-band MelGAN, which generates each sub-band of the target speech separately, saving computational power, and then obtains the final waveform using a synthesis PQMF. Its multi-scale discriminators evaluate the full-band speech waveform, and are regularized using a multi-band spectral reconstruction loss. Research in this field is very active and we can cite the very recent GAN vocoders such as VocGAN and HooliGAN.



FIG. 1 shows the generator architecture of StyleMelGAN implemented in the audio generator 10. The generator model maps a noise vector z˜N(0,I128) (indicated with 30 in FIG. 1) into a speech waveform 16 (e.g. at 22050 Hz) by progressive up-sampling (e.g. at blocks 50a-50h) conditioned on mel-spectrograms (or more generally target data) 12. It uses Temporal Adaptive DE-Normalization, TADE (see blocks 60, 60a, 60b), which may be a feature-wise conditioning based on linear modulation of normalized activation maps (76′ in FIG. 3). The modulation parameters γ (gamma, 74 in FIG. 3) and β (beta, 75 in FIG. 3) are adaptively learned from the conditioning features, and in one example have the same dimension as the latent signal. This delivers the conditioning features to all layers of the generator model, hence preserving the signal structure at all up-sampling stages. In the formula z˜N(0,I128), 128 is the number of channels for the latent noise (different numbers may be chosen in different examples). A random noise of dimension 128 with mean 0 may therefore be generated, with an autocorrelation matrix (square 128 by 128) equal to the identity I. Hence, in examples the generated noise can be considered as completely decorrelated between the channels and of variance 1 (energy). N(0,I128) may be realized at every 22528 generated samples (or other numbers may be chosen for different examples); the dimension may therefore be 1 in the time axis and 128 in the channel axis (numbers different from 128 may be provided).


FIG. 3 shows the structure of a portion of the audio generator 10 and illustrates the structure of the TADE block 60 (60a, 60b). The input activation (76′) is adaptively modulated via

c ⊙ γ + β,

where ⊙ indicates elementwise multiplication (notably, γ and β have the same dimension as the activation map, and c is the normalized version 76′ of the activation signal x of FIG. 3). Before the modulation at block 77, an instance normalization layer 76 is used. Layer 76 (normalizing element) may normalize the first data to a normal distribution of zero-mean and unit-variance. Softmax-gated TanH activation functions (e.g. the first instantiated by blocks 63a-64a-65a, and the second instantiated by blocks 63b-64b-65b in FIG. 4) can be used, which reportedly perform better than rectified linear unit, ReLU, functions. Softmax gating (e.g. as obtained by multiplications 65a and 65b) allows for fewer artifacts in audio waveform generation.



FIG. 4 shows the structure of a portion of the audio generator 10 and illustrates the TADEResBlock 50 (which may be any of blocks 50a-50h), which is the basic building block of the generator model. A complete architecture is shown in FIG. 1. It includes eight up-sampling stages 50a-50h (in other examples, other numbers are possible), consisting, for example, of a TADEResBlock and a layer 601 up-sampling the signal 79b by a factor of two, plus one final activation module 46 (in FIG. 1). The final activation comprises one TADEResBlock 42 followed by a channel-change convolutional layer 44, e.g. with tanh non-linearity 46. This design permits the use of, for example, a channel depth of 64 for the convolution operations, hence saving complexity. Moreover, this up-sampling procedure permits keeping the dilation factor lower than 2.



FIG. 2 shows the architecture of the filter bank random window discriminators (FB-RWDs). StyleMelGAN may use multiple (e.g. four) discriminators 132a-132d for its adversarial training, wherein in examples the architecture of the discriminators 132a-132d has no average pooling down-sampling. Moreover, each discriminator (132a-132d) may operate on a random window (105a-105d) sliced from the input speech waveform (104 or 16). Finally, each discriminator (132a-132d) may analyze the sub-bands 120 of the input speech signal (104 or 16) obtained by an analysis PQMF (e.g. 110). More precisely, we may use, in examples, 1, 2, 4, and 8 subbands calculated from random segments of respectively 512, 1024, 2048, and 4096 samples extracted from a waveform of one second. This enables a multi-resolution adversarial evaluation of the speech signal (104 or 16) in both time and frequency domains.


Training GANs is known to be challenging. Using random initialization of the weights (e.g. 74 and 75), the adversarial loss (e.g. 140) can lead to severe audio artifacts and unstable training. To avoid this problem, the generator 10 may first be pretrained using only the spectral reconstruction loss, consisting of error estimates of the spectral convergence and the log-magnitude computed from different STFT analyses. The generator obtained in this fashion can generate very tonal signals, although with significant smearing in high frequencies. This is nonetheless a good starting point for the adversarial training, which can then benefit from a better harmonic structure than if it started directly from a completely random noise signal. The adversarial training then drives the generation towards naturalness by removing the tonal effects and sharpening the smeared frequency bands. The hinge loss 140 is used to evaluate the adversarial metric, as can be seen in equation 1 below.






L(D; G) = Ex,z[ReLU(1 − D(x)) + ReLU(1 + D(G(z; s)))]  (1)


where x is the real speech 104, z is the latent noise 14 (or more generally the input signal), and s is the mel-spectrogram of x (or more generally the target signal 12). It should be noted that the spectral reconstruction loss Lrec (140) is still used for regularization to prevent the emergence of adversarial artifacts. The final loss (140) is given by equation 2 below.






L = ¼ Σi=1..4 L(Di; G) + Lrec.  (2)


Weight normalization may be applied to all convolution operations in G (or more precisely the GAN generator 11) and D (or more precisely the discriminator 100). In experiments, StyleMelGAN was trained using one NVIDIA Tesla V100 GPU on the LJSpeech corpus at 22050 Hz. The log-magnitude mel-spectrogram is calculated for 80 mel-bands and is normalized to have zero mean and unit variance. This is only one possibility of course; other values are equally possible. The generator is pretrained for 100,000 steps using the Adam optimizer with a learning rate (lrg) of 10^−4, β1=0.5, β2=0.9. When starting the adversarial training, the learning rate of G (lrg) is set to 5·10^−5, and FB-RWDs are used with the Adam optimizer with a discriminator learning rate (lrd) of 2·10^−4 and the same β values. The FB-RWDs repeat the random windowing (1 s/window length) times, i.e. a number of times equal to one second divided by the window length, at every training step to support the model with enough gradient updates. A batch size of 32 and segments with a length of 1 s, i.e. one second, for each sample in the batch are used. The training lasts for about one and a half million steps, i.e. 1,500,000 steps.
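Below is a rough sketch of the two-stage schedule just described (pretraining of the generator with the spectral reconstruction loss only, then adversarial training with the lowered generator learning rate and a discriminator optimizer); the generator, discriminators, loss helpers and data iterator are hypothetical placeholders, and the discriminator β values are an assumption.

import torch

def train(generator, discriminators, batches, spectral_loss, hinge_g_loss, hinge_d_loss):
    # 'generator', 'discriminators', 'batches' and the three loss helpers are
    # hypothetical placeholders standing in for the actual StyleMelGAN components.
    opt_g = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.5, 0.9))

    # Stage 1: pretraining with the spectral reconstruction loss only.
    for mel, speech, noise in batches("pretrain"):
        loss = spectral_loss(generator(noise, mel), speech)
        opt_g.zero_grad(); loss.backward(); opt_g.step()

    # Stage 2: adversarial training (generator learning rate lowered, FB-RWDs added).
    for group in opt_g.param_groups:
        group["lr"] = 5e-5
    d_params = [p for d in discriminators for p in d.parameters()]
    opt_d = torch.optim.Adam(d_params, lr=2e-4, betas=(0.5, 0.9))  # betas assumed

    for mel, speech, noise in batches("adversarial"):
        fake = generator(noise, mel)
        d_loss = hinge_d_loss(discriminators, speech, fake.detach())
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        g_loss = hinge_g_loss(discriminators, fake) + spectral_loss(fake, speech)
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()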


The following lists the models used in experiments:

    • WaveNet for targets experiments in copy-synthesis and text-to-speech
    • PWGAN for targets experiments in copy-synthesis and text-to-speech
    • MelGAN for targets experiments in copy-synthesis with objective evaluation
    • WaveGlow for targets experiments in copy-synthesis
    • Transformer.v3 for targets experiments in text-to-speech


Objective and subjective evaluations of StyleMelGAN against the pretrained baseline vocoder models listed above have been performed. The subjective quality of the audio TTS outputs was evaluated via a P.800 listening test performed by listeners in a controlled environment. The test set contains unseen utterances recorded by the same speaker and randomly selected from the LibriVox online corpus. Hence, the model is robust and does not mainly depend on the training data. These utterances test the generalization capabilities of the models, since they were recorded in slightly different conditions and present varying prosody. The original utterances were resynthesized using the Griffin-Lim algorithm and used in place of the usual anchor condition. This favors the use of the totality of the rating scale.


Traditional objective measures such as PESQ and POLQA are not reliable for evaluating speech waveforms generated by neural vocoders. Instead, the conditional Fréchet Deep Speech Distances (cFDSD) are used. The following cFDSD scores for different neural vocoders show that StyleMelGAN significantly outperforms the other models.

    • MelGAN Train cFDSD 0.235 Test cFDSD 0.227
    • PWGAN Train cFDSD 0.122 Test cFDSD 0.101
    • WaveGlow Train cFDSD 0.099 Test cFDSD 0.078
    • WaveNet Train cFDSD 0.176 Test cFDSD 0.140
    • StyleMelGAN Train cFDSD 0.044 Test cFDSD 0.068


It can be seen that StyleMelGAN outperforms other adversarial and non-adversarial vocoders.


A MUSHRA listening test with a group of 15 expert listeners was conducted. This type of test was chosen because it allows a more precise evaluation of the quality of the generated speech. The anchor is generated using the PyTorch implementation of the Griffin-Lim algorithm with 32 iterations. FIG. 5 shows the result of the MUSHRA test. It can be seen that StyleMelGAN significantly outperforms the other vocoders by about 15 MUSHRA points. The results also show that WaveGlow produces outputs of comparable quality to WaveNet, while being on par with Parallel WaveGAN.


The subjective quality of the audio TTS outputs can be evaluated via a P.800 ACR listening test performed by 31 listeners in a controlled environment. The Transformer.v3 model of ESPNET can be used to generate mel-spectrograms of transcriptions of the test set. The same Griffin-Lim anchor can also be added, since this favors the use of the totality of the rating scale.


The following P800 mean opinion scores (MOS) for different TTS systems show a similar finding, namely that StyleMelGAN clearly outperforms the other models:

    • GriffinLim P800 MOS: 1.33+/−0.04
    • Transformer+Parallel WaveGAN P800 MOS: 3.19+/−0.07
    • Transformer+WaveNet P800 MOS: 3.82+/−0.07
    • Transformer+StyleMelGAN P800 MOS: 4.00+/−0.07
    • Recording P800 MOS: 4.29+/−0.06


The following shows the generation speed as a real-time factor (RTF), together with the number of parameters, for different parallel vocoder models. StyleMelGAN provides a favorable compromise between generation quality and inference speed.


Here are given the number of parameters and the real-time factors for generation on a CPU (e.g. Intel Core i7-6700, 3.40 GHz) and a GPU (e.g. Nvidia GeForce GTX 1060) for various models under study.

    • Parallel WaveGAN Parameters: 1.44 M CPU: 0.8× GPU: 17×
    • MelGAN Parameters: 4.26 M CPU: 7× GPU: 110×
    • StyleMelGAN Parameters: 3.86 M CPU: 2.6× GPU: 54×
    • WaveGlow Parameters: 80 M—GPU: 5×


Finally, FIG. 5 shows results of a MUSHRA expert listening test. It can be seen that StyleMelGAN outperforms state-of-the-art models.


CONCLUSIONS

This work presents StyleMelGAN, a lightweight and efficient adversarial vocoder for high-fidelity speech synthesis. The model uses Temporal Adaptive DEnormalization (TADE) to deliver sufficient and accurate conditioning to all generation layers instead of just feeding the conditioning to the first layer. For adversarial training, the generator competes against filter bank random window discriminators that provide multi-scale representations of the speech signal in both time and frequency domains. StyleMelGAN runs on both CPUs and GPUs faster than real-time, on GPUs by more than an order of magnitude. Experimental objective and subjective results show that StyleMelGAN significantly outperforms prior adversarial vocoders as well as auto-regressive, flow-based and diffusion-based vocoders, providing a new state-of-the-art baseline for neural waveform generation.


To conclude, the embodiments described herein can optionally be supplemented by any of the important points or aspects described here. However, it is noted that the important points and aspects described here can either be used individually or in combination and can be introduced into any of the embodiments described herein, both individually and in combination.


Although some aspects have been described in the context of an apparatus, it is clear that these aspects also represent a description of the corresponding method, where a device or a part thereof corresponds to a method step or a feature of a method step. Analogously, aspects described in the context of a method step also represent a description of a corresponding apparatus or part of an apparatus or item or feature of a corresponding apparatus. Some or all of the method steps may be executed by (or using) a hardware apparatus, like for example, a microprocessor, a programmable computer or an electronic circuit. In some embodiments, one or more of the most important method steps may be executed by such an apparatus.


Depending on certain implementation requirements, embodiments of the invention can be implemented in hardware or in software. The implementation can be performed using a digital storage medium, for example a floppy disk, a DVD, a Blu-Ray, a CD, a ROM, a PROM, an EPROM, an EEPROM or a FLASH memory, having electronically readable control signals stored thereon, which cooperate (or are capable of cooperating) with a programmable computer system such that the respective method is performed. Therefore, the digital storage medium may be computer readable.


Some embodiments according to the invention comprise a data carrier having electronically readable control signals, which are capable of cooperating with a programmable computer system, such that one of the methods described herein is performed.


Generally, embodiments of the present invention can be implemented as a computer program product with a program code, the program code being operative for performing one of the methods when the computer program product runs on a computer. The program code may for example be stored on a machine-readable carrier.


Other embodiments comprise the computer program for performing one of the methods described herein, stored on a machine-readable carrier.


In other words, an embodiment of the inventive method is, therefore, a computer program having a program code for performing one of the methods described herein, when the computer program runs on a computer.


A further embodiment of the inventive methods is, therefore, a data carrier (or a digital storage medium, or a computer-readable medium) comprising, recorded thereon, the computer program for performing one of the methods described herein. The data carrier, the digital storage medium or the recorded medium are typically tangible and/or non-transitory.


A further embodiment of the inventive method is, therefore, a data stream or a sequence of signals representing the computer program for performing one of the methods described herein. The data stream or the sequence of signals may for example be configured to be transferred via a data communication connection, for example via the Internet.


A further embodiment comprises a processing means, for example a computer, or a programmable logic device, configured to or adapted to perform one of the methods described herein.


A further embodiment comprises a computer having installed thereon the computer program for performing one of the methods described herein.


A further embodiment according to the invention comprises an apparatus or a system configured to transfer (for example, electronically or optically) a computer program for performing one of the methods described herein to a receiver. The receiver may, for example, be a computer, a mobile device, a memory device or the like. The apparatus or system may, for example, comprise a file server for transferring the computer program to the receiver.


In some embodiments, a programmable logic device (for example a field programmable gate array) may be used to perform some or all of the functionalities of the methods described herein. In some embodiments, a field programmable gate array may cooperate with a microprocessor in order to perform one of the methods described herein. Generally, the methods may be performed by any hardware apparatus.


The apparatus described herein may be implemented using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer. The apparatus described herein, or any components of the apparatus described herein, may be implemented at least partially in hardware and/or in software. The methods described herein may be performed using a hardware apparatus, or using a computer, or using a combination of a hardware apparatus and a computer. The methods described herein, or any parts of the methods described herein, may be performed at least partially by hardware and/or by software.


While this invention has been described in terms of several embodiments, there are alterations, permutations, and equivalents which fall within the scope of this invention. It should also be noted that there are many alternative ways of implementing the methods and compositions of the present invention. It is therefore intended that the following appended claims be interpreted as including all such alterations, permutations and equivalents as fall within the true spirit and scope of the present invention.


BIBLIOGRAPHY





    • A. van den Oord, S. Dieleman, H. Zen, K. Simonyan, et al., “WaveNet: A Generative Model for Raw Audio,” arXiv:1609.03499, 2016.

    • R. Prenger, R. Valle, and B. Catanzaro, “WaveGlow: A Flow-based Generative Network for Speech Synthesis,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019, pp. 3617-3621.

    • S. Mehri, K. Kumar, I. Gulrajani, R. Kumar, et al., “SampleRNN: An Unconditional End-to-End Neural Audio Generation Model,” arXiv:1612.07837, 2016.

    • N. Kalchbrenner, E. Elsen, K. Simonyan, S. Noury, et al., “Efficient Neural Audio Synthesis,” arXiv:1802.08435, 2018.

    • A. van den Oord, Y. Li, I. Babuschkin, K. Simonyan, et al., “Parallel WaveNet: Fast High-Fidelity Speech Synthesis,” in Proceedings of the 35th ICML, 2018, pp. 3918-3926.

    • J. Valin and J. Skoglund, “LPCNet: Improving Neural Speech Synthesis through Linear Prediction,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2019, pp. 5891-5895.

    • K. Kumar, R. Kumar, T. de Boissiere, L. Gestin, et al., “MelGAN: Generative Adversarial Networks for Conditional Waveform Synthesis,” in Advances in NeurIPS 32, pp. 14910-14921, 2019.

    • R. Yamamoto, E. Song, and J. Kim, “Parallel WaveGAN: A Fast Waveform Generation Model Based on Generative Adversarial Networks with Multi-Resolution Spectrogram,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020, pp. 6199-6203.

    • M. Bińkowski, J. Donahue, S. Dieleman, A. Clark, et al., “High Fidelity Speech Synthesis with Adversarial Networks,” arXiv:1909.11646, 2019.

    • T. Park, M. Y. Liu, T. C. Wang, and J. Y. Zhu, “Semantic Image Synthesis With Spatially-Adaptive Normalization,” in Proc. of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019.

    • P. Govalkar, J. Fischer, F. Zalkow, and C. Dittmar, “A Comparison of Recent Neural Vocoders for Speech Signal Reconstruction,” in Proceedings of the ISCA Speech Synthesis Workshop, 2019, pp. 7-12.

    • I. Goodfellow, J. Pouget-Abadie, M. Mirza, B. Xu, et al., “Generative Adversarial Nets,” in Advances in NeurIPS 27, pp. 2672-2680, 2014.

    • C. Donahue, J. McAuley, and M. Puckette, “Adversarial Audio Synthesis,” arXiv:1802.04208, 2018.

    • J. Engel, K. K. Agrawal, S. Chen, I. Gulrajani, et al., “GANSynth: Adversarial Neural Audio Synthesis,” arXiv:1902.08710, 2019.

    • G. Yang, S. Yang, K. Liu, P. Fang, et al., “Multi-band MelGAN: Faster Waveform Generation for High-Quality Text-to-Speech,” arXiv:2005.05106, 2020.

    • J. Yang, J. Lee, Y. Kim, H. Cho, and I. Kim, “VocGAN: A High-Fidelity Real-time Vocoder with a Hierarchically-nested Adversarial Network,” arXiv:2007.15256, 2020.

    • J. Kong, J. Kim, and J. Bae, “HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis,” arXiv:2010.05646, 2020.

    • D. Ulyanov, A. Vedaldi, and V. Lempitsky, “Instance normalization: The missing ingredient for fast stylization,” arXiv:1607.08022, 2016.

    • A. Mustafa, A. Biswas, C. Bergler, J. Schottenhamml, and A. Maier, “Analysis by Adversarial Synthesis - A Novel Approach for Speech Vocoding,” in Proc. Interspeech, 2019, pp. 191-195.

    • T. Q. Nguyen, “Near-perfect-reconstruction pseudo-QMF banks,” IEEE Transactions on Signal Processing, vol. 42, no. 1, pp. 65-76, 1994.

    • T. Salimans and D. P. Kingma, “Weight normalization: A simple reparameterization to accelerate training of deep neural networks,” in Advances in NeurIPS, 2016, pp. 901-909.

    • K. Ito and L. Johnson, “The LJ Speech Dataset,” https://keithito.com/LJSpeech-Dataset/, 2017.

    • D. P. Kingma and J. Ba, “Adam: A method for stochastic optimization,” arXiv:1412.6980, 2014.

    • T. Hayashi, R. Yamamoto, K. Inoue, T. Yoshimura, et al., “ESPnet-TTS: Unified, Reproducible, and Integratable Open Source End-to-End Text-to-Speech Toolkit,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2020, pp. 7654-7658.

    • A. Gritsenko, T. Salimans, R. van den Berg, J. Snoek, and N. Kalchbrenner, “A Spectral Energy Distance for Parallel Speech Synthesis,” arXiv:2008.01160, 2020.

    • “P.800: Methods for subjective determination of transmission quality,” Standard, International Telecommunication Union, 1996.




Claims
  • 1. Audio generator, configured to generate an audio signal from an input signal and target data, the target data representing the audio signal, comprising: a first processing block, configured to receive first data derived from the input signal and to output first output data, wherein the first output data comprises a plurality of channels, and a second processing block, configured to receive, as second data, the first output data or data derived from the first output data, wherein the first processing block comprises, for each channel of the first output data: a conditioning set of learnable layers configured to process the target data to acquire conditioning feature parameters, the target data being derived from a text; and a styling element, configured to apply the conditioning feature parameters to the first data or normalized first data; and wherein the second processing block is configured to combine the plurality of channels of the second data to acquire the audio signal.
  • 2. Audio generator according to claim 1, wherein the target data is a spectrogram.
  • 3. Audio generator according to claim 1, wherein the target data is a mel-spectrogram.
  • 4. Audio generator of claim 1, wherein the target data comprise at least one acoustic feature among a log-spectrogram, an MFCC, a mel-spectrogram, or another type of spectrogram acquired from a text.
  • 5. Audio generator of claim 1, configured to acquire the target data by converting an input in the form of text or elements of text onto the at least one acoustic feature.
  • 6. Audio generator of claim 1, configured to acquire the target data by converting at least one linguistic feature onto the at least one acoustic feature.
  • 7. Audio generator of claim 1, wherein the target data comprise at least one linguistic feature among a phoneme, word prosody, intonation, phrase breaks, and filled pauses acquired from a text.
  • 8. Audio generator of claim 7, configured to acquire the target data by converting an input in the form of text or elements of text onto the at least one linguistic feature.
  • 9. Audio generator of claim 1, wherein the target data comprise at least one among a character and a word acquired from a text.
  • 10. Audio generator of claim 1, wherein the target data are derived from a text using a statistical model, performing text analysis and/or using an acoustic model.
  • 11. Audio generator of claim 1, wherein the target data are derived from a text using a learnable model performing text analysis and/or using an acoustic model.
  • 12. Audio generator of claim 1, wherein the target data are derived from a text using a rules-based algorithm performing text analysis and/or an acoustic model.
  • 13. Audio generator of claim 1 configured to derive the target data through at least one deterministic layer.
  • 14. Audio generator of claim 1 configured to derive the target data through at least one learnable layer.
  • 15. Audio generator according to claim 1, wherein the conditioning set of learnable layers comprises one or at least two convolution layers.
  • 16. Audio generator according to claim 15, wherein a first convolution layer is configured to convolute the target data or up-sampled target data to acquire first convoluted data using a first activation function.
  • 17. Audio generator according to claim 1, wherein the conditioning set of learnable layers and the styling element are part of a weight layer in a residual block of a neural network comprising one or more residual blocks.
  • 18. Audio generator according to claim 1, wherein the audio generator further comprises a normalizing element, which is configured to normalize the first data.
  • 19. Audio generator according to claim 1, wherein the audio signal is a voice audio signal.
  • 20. Audio generator according to claim 1, wherein the target data is up-sampled by a factor of at least 2.
  • 21. Audio generator according to claim 20, wherein the target data is up-sampled by non-linear interpolation.
  • 22. Audio generator according to claim 16, wherein the first activation function is a leaky rectified linear unit, leaky ReLU, function.
  • 23. Audio generator according to claim 1, wherein convolution operations run with a maximum dilation factor of 2.
  • 24. Audio generator according to claim 1, comprising eight first processing blocks and one second processing block.
  • 25. Audio generator according to claim 1, wherein the first data comprises a lower dimensionality than the audio signal.
  • 26. Method for generating an audio signal by an audio generator from an input signal and target data, the target data representing the audio signal and being derived from a text, comprising: receiving, by a first processing block, first data derived from the input signal; for each channel of a first output data: processing, by a conditioning set of learnable layers of the first processing block, the target data to acquire conditioning feature parameters; and applying, by a styling element of the first processing block, the conditioning feature parameters to the first data or normalized first data; outputting, by the first processing block, first output data comprising a plurality of channels; receiving, by a second processing block, as second data, the first output data or data derived from the first output data; and combining, by the second processing block, the plurality of channels of the second data to acquire the audio signal.
  • 27. Method for generating an audio signal according to claim 26, wherein the target data comprise at least one acoustic feature among a log-spectrogram, an MFCC, a mel-spectrogram, or another type of spectrogram acquired from a text.
  • 28. Method for generating an audio signal according to claim 26, comprising acquiring the target data by converting an input in the form of text or elements of text onto the at least one acoustic feature.
  • 29. Method for generating an audio signal according to claim 26, comprising acquiring the target data by converting at least one linguistic feature onto the at least one acoustic feature.
  • 30. Method for generating an audio signal according to claim 26, wherein the target data comprise at least one linguistic feature among a phoneme, word prosody, intonation, phrase breaks, and filled pauses acquired from a text.
  • 31. Method for generating an audio signal according to claim 30, comprising acquiring the target data by converting an input in the form of text or elements of text onto the at least one linguistic feature.
  • 32. Method for generating an audio signal according to claim 26, wherein the target data comprise at least one among a character and a word acquired from a text.
  • 33. Method for generating an audio signal according to claim 26, further comprising deriving target data using a statistical model, performing text analysis and/or using an acoustic model.
  • 34. Method for generating an audio signal according to claim 26, further comprising deriving target data using a learnable model performing text analysis and/or using an acoustic model.
  • 35. Method for generating an audio signal according to claim 26, further comprising deriving target data using a rules-based algorithm performing text analysis and/or an acoustic model.
  • 36. Method for generating an audio signal according to claim 26, further comprising deriving the target data through at least one deterministic layer.
  • 37. Method for generating an audio signal according to claim 26, further comprising deriving target data through at least one learnable layer.
  • 38. Method for generating an audio signal according to claim 26, wherein the conditioning set of learnable layers comprises one or two convolution layers.
  • 39. Method for generating an audio signal according to claim 38, wherein processing, by the conditioning set of learnable layers, comprises convoluting, by a first convolution layer, the target data or up-sampled target data to acquire first convoluted data using a first activation function.
  • 40. Method for generating an audio signal according to claim 26, wherein the conditioning set of learnable layers and the styling element are part of a weight layer in a residual block of a neural network comprising one or more residual blocks.
  • 41. Method for generating an audio signal according to claim 26, wherein the method further comprises normalizing, by a normalizing element, the first data.
  • 42. Method for generating an audio signal according to claim 26, wherein the audio signal is a voice audio signal.
  • 43. Method for generating an audio signal according to claim 26, wherein the target data is up-sampled by a factor of 2.
  • 44. Method for generating an audio signal according to claim 26, wherein the target data is up-sampled by non-linear interpolation.
  • 45. Method for generating an audio signal according to claim 26, wherein the first activation function is a leaky rectified linear unit, leaky ReLU, function.
  • 46. Method for generating an audio signal according to claim 26, wherein convolution operations run with a maximum dilation factor of 2.
  • 47. Method for generating an audio signal according to claim 26, comprising performing the steps of the first processing block eight times and the steps of the second processing block once.
  • 48. Method for generating an audio signal according to claim 26, wherein the first data comprises a lower dimensionality than the audio signal.
  • 49. Method for generating an audio signal according to claim 26, further comprising deriving the target data from the text.
  • 50. Method for generating an audio signal according to claim 26, wherein the target data is a spectrogram.
  • 51. Method of claim 50, wherein the spectrogram is a mel-spectrogram.
  • 52. Method to generate an audio signal comprising a mathematical model, wherein the mathematical model is configured to output audio samples at a given time step from an input sequence representing the audio data to generate, wherein the mathematical model is configured to shape a noise vector in order to create the output audio samples using the input representative sequence,wherein the input representative sequence is derived from a text.
  • 53. A non-transitory digital storage medium having a computer program stored thereon to perform the method for generating an audio signal by an audio generator from an input signal and target data, the target data representing the audio signal and being derived from a text, comprising: receiving, by a first processing block, first data derived from the input signal; for each channel of a first output data: processing, by a conditioning set of learnable layers of the first processing block, the target data to acquire conditioning feature parameters; and applying, by a styling element of the first processing block, the conditioning feature parameters to the first data or normalized first data; outputting, by the first processing block, first output data comprising a plurality of channels; receiving, by a second processing block, as second data, the first output data or data derived from the first output data; and combining, by the second processing block, the plurality of channels of the second data to acquire the audio signal, when said computer program is run by a computer.
  • 54. A non-transitory digital storage medium having a computer program stored thereon to perform the method to generate an audio signal comprising a mathematical model, wherein the mathematical model is configured to output audio samples at a given time step from an input sequence representing the audio data to generate, wherein the mathematical model is configured to shape a noise vector in order to create the output audio samples using the input representative sequence, wherein the input representative sequence is derived from a text, when said computer program is run by a computer.
Priority Claims (2)
Number Date Country Kind
20202058.2 Oct 2020 EP regional
PCT/EP2021/072075 Aug 2021 WO international
CROSS-REFERENCES TO RELATED APPLICATIONS

This application is a continuation of copending International Application No. PCT/EP2021/078371, filed Oct. 13, 2021, which is incorporated herein by reference in its entirety, and additionally claims priority from European Application No. EP 20 202 058.2, filed Oct. 15, 2020, and International Application No. PCT/EP2021/072075, filed Aug. 6, 2021, all of which are incorporated herein by reference in their entirety.

Continuations (1)
Number Date Country
Parent PCT/EP2021/078371 Oct 2021 US
Child 18300871 US