This disclosure relates to modular training for flexible attention based end-to-end ASR.
Automatic speech recognition (ASR) systems transcribe speech into corresponding text representations. Many ASR systems use an encoder-decoder architecture that is trained by optimizing a final loss function. That is, each component of the ASR system is trained jointly in an end-to-end manner. A constraint of the end-to-end training approach is that the single trained ASR system may not be suitable across different applications. That is, the single ASR system may have fixed operating characteristics that are unable to adapt to unique requirements of certain speech-related applications. In some instances, ASR systems integrate additional residual adaptors or residual connections after training to adapt the ASR system to different operating environments. However, integrating these additional components increases the computational and memory resources consumed by the ASR system.
One aspect of the disclosure provides a computer-implemented method that when executed on data processing hardware causes the data processing hardware to perform operations for training a modular neural network model. During an initial training stage, the operations include training only a backbone model to provide a first model configuration of the modular neural network model. The first model configuration includes only the trained backbone model. The operations also include adding an intrinsic sub-model to the trained backbone model. During a fine-tuning training stage, the operations include freezing parameters of the trained backbone model and fine-tuning parameters of the intrinsic sub-model added to the trained backbone model while the parameters of the trained backbone model are frozen to provide a second model configuration. Here, the second model configuration includes the backbone model initially trained during the initial training stage and the intrinsic sub-model having the parameters fine-tuned during the fine-tuning stage.
Implementations of the disclosure may include one or more of the following optional features. In some implementations, the backbone model includes a non-attentive neural network that includes existing residual connections, the intrinsic sub-model includes an attention-based sub-model, and the intrinsic sub-model is added to the trained backbone model without requiring any residual adaptors or additional residual connections other than the existing residual connections. In some examples, after fine-tuning parameters of the intrinsic sub-model, the operations include: removing the intrinsic sub-model; adding another intrinsic sub-model to the trained backbone model; and, during another fine-tuning stage, freezing parameters of the trained backbone model and fine-tuning parameters of the other intrinsic sub-model added to the trained backbone model while the parameters of the trained backbone model are frozen to provide a third model configuration including the backbone model initially trained during the initial training stage and the other intrinsic sub-model having the parameters fine-tuned during the other fine-tuning stage. In these examples, during the fine-tuning training stage, the parameters of the intrinsic sub-model may be trained on a first domain and/or a first application or, during the other fine-tuning training stage, the parameters of the other intrinsic sub-model are trained on a second domain different than the first domain and/or a second application different than the first application. The trained backbone model may be domain-independent. The first domain may be associated with speech recognition in a first language while the second domain is associated with speech recognition in a second language different than the first language.
In some implementations, the modular neural network model includes an end-to-end speech recognition model including an audio encoder and a decoder, training only the backbone model includes updating parameters of the audio encoder or the decoder, and fine-tuning the parameters of the intrinsic sub-model includes updating the parameters of the audio encoder or the decoder. In these implementations, the end-to-end speech recognition model includes a recurrent neural network-transducer (RNN-T) architecture. The operations may further include training another modular neural network including the other one of the audio encoder or the decoder of the end-to-end speech recognition model.
In some examples, the backbone model includes a first half feedforward layer, a convolution layer, a second half feedforward layer, and a layernorm layer, and the intrinsic sub-model includes a stack of one or more multi-head self-attention layers. In these examples, the second model configuration may include the first half feedforward layer, the stack of one or more multi-head self-attention layers, the convolution layer, the second half feedforward layer, and the layernorm layer. During inference, the trained modular neural network is configured to operate in any one of the first model configuration including only the trained backbone model and having the intrinsic sub-model removed, the second model configuration including the backbone model initially trained during the initial training stage and the intrinsic sub-model having the parameters fine-tuned during the fine-tuning stage, or a third model configuration including only the intrinsic sub-model having the parameters fine-tuned during the fine-tuning stage and the trained backbone model removed.
Another aspect of the disclosure provides a system that includes data processing hardware and memory hardware storing instructions that when executed on the data processing hardware cause the data processing hardware to perform operations for training a modular neural network model. During an initial training stage, the operations include training only a backbone model to provide a first model configuration of the modular neural network model. The first model configuration includes only the trained backbone model. The operations also include adding an intrinsic sub-model to the trained backbone model. During a fine-tuning training stage, the operations include freezing parameters of the trained backbone model and fine-tuning parameters of the intrinsic sub-model added to the trained backbone model while the parameters of the trained backbone model are frozen to provide a second model configuration. Here, the second model configuration includes the backbone model initially trained during the initial training stage and the intrinsic sub-model having the parameters fine-tuned during the fine-tuning stage.
Implementations of the disclosure may include one or more of the following optional features. In some implementations, the backbone model includes a non-attentive neural network that includes existing residual connections, the intrinsic sub-model includes an attention-based sub-model, and the intrinsic sub-model is added to the trained backbone model without requiring any residual adaptors or additional residual connections other than the existing residual connections. In some examples, after fine-tuning parameters of the intrinsic sub-model, the operations include: removing the intrinsic sub-model; adding another intrinsic sub-model to the trained backbone model; and, during another fine-tuning stage, freezing parameters of the trained backbone model and fine-tuning parameters of the other intrinsic sub-model added to the trained backbone model while the parameters of the trained backbone model are frozen to provide a third model configuration including the backbone model initially trained during the initial training stage and the other intrinsic sub-model having the parameters fine-tuned during the other fine-tuning stage. In these examples, during the fine-tuning training stage, the parameters of the intrinsic sub-model may be trained on a first domain and/or a first application or, during the other fine-tuning training stage, the parameters of the other intrinsic sub-model are trained on a second domain different than the first domain and/or a second application different than the first application. The trained backbone model may be domain-independent. The first domain may be associated with speech recognition in a first language while the second domain is associated with speech recognition in a second language different than the first language.
In some implementations, the modular neural network model includes an end-to-end speech recognition model including an audio encoder and a decoder, training only the backbone model includes updating parameters of the audio encoder or the decoder, and fine-tuning the parameters of the intrinsic sub-model includes updating the parameters of the audio encoder or the decoder. In these implementations, the end-to-end speech recognition model includes a recurrent neural network-transducer (RNN-T) architecture. The operations may further include training another modular neural network including the other one of the audio encoder or the decoder of the end-to-end speech recognition model.
In some examples, the backbone model includes a first half feedforward layer, a convolution layer, a second half feedforward layer, and a layernorm layer, and the intrinsic sub-model includes a stack of one or more multi-head self-attention layers. In these examples, the second model configuration may include the first half feedforward layer, the stack of one or more multi-head self-attention layers, the convolution layer, the second half feedforward layer, and the layernorm layer. During inference, the trained modular neural network is configured to operate in any one of the first model configuration including only the trained backbone model and having the intrinsic sub-model removed, the second model configuration including the backbone model initially trained during the initial training stage and the intrinsic sub-model having the parameters fine-tuned during the fine-tuning stage, or a third model configuration including only the intrinsic sub-model having the parameters fine-tuned during the fine-tuning stage and the trained backbone model removed.
The details of one or more implementations of the disclosure are set forth in the accompanying drawings and the description below. Other aspects, features, and advantages will be apparent from the description and drawings, and from the claims.
Like reference symbols in the various drawings indicate like elements.
End-to-End (E2E) automatic speech recognition (ASR) systems have made tremendous performance advances for a wide variety of speech-related tasks. Typical E2E ASR systems employ an encoder-decoder architecture that is trained jointly. As performance of ASR systems continues to progress, so does the complexity of the acoustic encoders used by the ASR systems. For instance, conformer encoders include multiple conformer blocks, each including a combination of feedforward, convolutional, and self-attention layers. As such, an E2E training approach for these complex ASR systems is commonly used because it is simple and offers the best word error rate (WER) performance. Consequently, however, the E2E training approach results in a single ASR model that operates with a fixed WER and latency despite the need for ASR models operating at various performance levels of WER and latency. The root of the issue is that the single ASR model architecture cannot easily be modified at inference to operate at a desired performance level of WER and latency. For instance, some speech-related applications may favor ASR models operating with low latency at the cost of WER increases. On the other hand, other speech-related applications may favor ASR models operating with low WER at the cost of latency increases. Despite the above, current E2E training approaches result in single ASR models that are unable to adapt to particular performance requirements.
Accordingly, implementations herein are directed towards methods and systems of a modular training process for flexible attention based E2E ASR. The modular training process includes training only a backbone model to provide a first model configuration of a modular neural network model during an initial training stage. The first model configuration includes only the trained backbone model. The modular training process also includes adding an intrinsic sub-model to the trained backbone model. During a fine-tuning training stage, the training process includes freezing parameters of the trained backbone model and fine-tuning parameters of the intrinsic sub-model added to the trained backbone model while the parameters of the trained backbone model are frozen to provide a second model configuration. The second model configuration includes the backbone model initially trained during the initial training stage and the intrinsic sub-model having the parameters fine-tuned during the fine-tuning stage.
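The two-stage recipe above maps directly onto a short training loop. The sketch below is a hypothetical PyTorch rendering (the disclosure does not name a framework); the `backbone`, `intrinsic_sub_model`, and `use_sub_model` names are illustrative assumptions, and `loss_fn` stands in for the speech recognition loss described later.

```python
import torch
import torch.nn as nn

def modular_training(model: nn.Module, initial_loader, finetune_loader, loss_fn):
    """Two-stage modular training sketch. The model is assumed to expose a
    `backbone` sub-module, an `intrinsic_sub_model` sub-module, and a
    `use_sub_model` flag that bypasses the attention branch when False."""
    # Initial training stage: only the backbone model's parameters are updated.
    optimizer = torch.optim.Adam(model.backbone.parameters(), lr=1e-3)
    for features, targets in initial_loader:
        loss = loss_fn(model(features, use_sub_model=False), targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Freeze the trained backbone before fine-tuning begins.
    for param in model.backbone.parameters():
        param.requires_grad = False

    # Fine-tuning stage: only the added intrinsic sub-model is updated.
    optimizer = torch.optim.Adam(model.intrinsic_sub_model.parameters(), lr=1e-4)
    for features, targets in finetune_loader:
        loss = loss_fn(model(features, use_sub_model=True), targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```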
Implementations are further directed towards another modular training process for flexible attention based E2E ASR. Here, during an initial training stage, the modular training process includes training a backbone model while applying a large dropout probability to any intrinsic sub-models residually connected to the backbone model to provide a first model configuration of the modular neural network model. During a fine-tuning training stage, the training process includes fine-tuning parameters of the intrinsic sub-model residually connected to the trained backbone while the parameters of the trained backbone model are frozen to provide a second model configuration. The second model configuration includes the backbone model initially trained during the initial training stage and the intrinsic sub-model having the parameters fine-tuned during the fine-tuning stage.
The user device 102 includes an audio subsystem 108 configured to receive an utterance spoken by the user 104 (e.g., the user device 102 may include one or more microphones for recording the spoken utterance 106) and convert the utterance 106 into a corresponding digital format associated with input acoustic frames (i.e., audio features) 110 capable of being processed by the ASR system 100. In the example shown, the user 104 speaks a respective utterance 106 in a natural language of English for the phrase “What is the weather in New York City?” and the audio subsystem 108 converts the utterance 106 into corresponding acoustic frames 110 for input to the ASR system 100. Thereafter, the ASR model 200 receives, as input, the acoustic frames 110 corresponding to the utterance 106, and generates/predicts, as output, a corresponding transcription 120 (e.g., recognition result/hypothesis) of the utterance 106. In the example shown, the user device 102 and/or the remote computing device 201 also executes a user interface generator 107 configured to present a representation of the transcription 120 of the utterance 106 to the user 104 of the user device 102. In some configurations, the transcription 120 output from the ASR system 100 is processed, e.g., by a natural language understanding (NLU) module executing on the user device 102 or the remote computing device 201, to execute a user command. Additionally or alternatively, a text-to-speech system (e.g., executing on any combination of the user device 102 or the remote computing device 201) may convert the transcription 120 into synthesized speech for audible output by another device. For instance, the original utterance 106 may correspond to a message the user 104 is sending to a friend in which the transcription 120 is converted to synthesized speech for audible output to the friend to listen to the message conveyed in the original utterance 106.
Referring to
Similarly, the prediction network 220 is also an LSTM network, which, like a language model (LM), processes the sequence of non-blank symbols output by a final Softmax layer 240 so far, y_0, . . . , y_(ui-1), into a dense representation p_(ui).
The Softmax layer 240 may employ any technique to select the output label/symbol with the highest probability in the distribution as the next output symbol predicted by the ASR model 200 at the corresponding output step. In this manner, the RNN-T model architecture of the ASR model 200 does not make a conditional independence assumption, rather the prediction of each symbol is conditioned not only on the acoustics but also on the sequence of labels output so far. The ASR model 200 does assume an output symbol is independent of future acoustic frames 110, which allows the ASR model 200 to be employed in a streaming fashion and/or a non-streaming fashion.
The prediction network 220 may have two 2,048-dimensional LSTM layers, each of which is also followed by a 640-dimensional projection layer. Alternatively, the prediction network 220 may include a stack of transformer or conformer blocks, or an embedding look-up table in lieu of LSTM layers. Finally, the joint network 230 may have an input size of 640 and 1,024 output units. The softmax layer 240 may be composed of a unified word piece or grapheme set that is generated using all unique word pieces or graphemes in a plurality of training data sets.
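As a concrete reading of these dimensions, the following hypothetical PyTorch sketch builds a prediction network with two 2,048-dimensional LSTM layers, each followed by a 640-dimensional projection, and a joint network taking 640-dimensional inputs with 1,024 output units. The embedding size, the additive combination of encoder and prediction features, and the vocabulary handling are assumptions rather than details from the disclosure.

```python
import torch
import torch.nn as nn

class PredictionNetwork(nn.Module):
    """Two 2,048-dimensional LSTM layers, each followed by a 640-dimensional projection."""
    def __init__(self, vocab_size: int, embed_dim: int = 640):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm1 = nn.LSTM(embed_dim, 2048, batch_first=True)
        self.proj1 = nn.Linear(2048, 640)
        self.lstm2 = nn.LSTM(640, 2048, batch_first=True)
        self.proj2 = nn.Linear(2048, 640)

    def forward(self, labels):                 # labels: (B, U) previously emitted non-blank symbols
        x = self.embed(labels)
        x, _ = self.lstm1(x)
        x = self.proj1(x)
        x, _ = self.lstm2(x)
        return self.proj2(x)                   # dense representation, (B, U, 640)

class JointNetwork(nn.Module):
    """Input size of 640 (combined encoder/prediction features) and 1,024 output units."""
    def __init__(self, output_units: int = 1024, hidden: int = 640):
        super().__init__()
        self.out = nn.Linear(hidden, output_units)

    def forward(self, enc, pred):              # enc: (B, T, 640), pred: (B, U, 640)
        # Broadcast-add encoder and prediction features (an assumed combination rule).
        joint = enc.unsqueeze(2) + pred.unsqueeze(1)        # (B, T, U, 640)
        return self.out(torch.tanh(joint))                  # logits for the softmax layer 240
```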
In particular, the conformer block 402 includes the first half feedforward layer 310, the second half feedforward layer 340, with the stack of one or more multi-head self-attention layers (e.g., intrinsic sub-model) 410 and the convolution layer 330 disposed between the first and second half feedforward layers 310, 340, the layernorm layer 350, and concatenation operators 305. The first half feedforward layer 310 processes the input sequence of acoustic frames 110. Subsequently, the stack of one or more multi-head self-attention layers 410 receives the sequence of acoustic frames 110 concatenated with the output of the first half feedforward layer 310. Intuitively, the role of the stack of one or more multi-head self-attention layers 410 is to summarize noise context separately for each acoustic frame 110 that is to be enhanced. The convolution layer 330 subsamples a concatenation of the output of the stack of one or more multi-head self-attention layers 410 concatenated with the concatenation received by the stack of one or more multi-head self-attention layers 410. Thereafter, the second half feedforward layer 340 receives a concatenation of the output from the convolution layer 330 concatenated with the concatenation received by the convolution layer 330. The layernorm layer 350 processes a concatenation of the output from the second half feedforward layer 340 with the concatenation received by the second half feedforward layer 340. Accordingly, the conformer block 402 transforms input features x (e.g., acoustic frames 110), using modulation features m, to produce output features y, as follows:
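The equation itself does not appear above. For reference, the widely used Conformer block computation, which parallels the layer ordering just described, is shown below; this is an assumption for illustration (residual connections are written as additions, whereas the disclosure describes concatenation operators 305 and modulation features m, whose exact formulation may differ):

```latex
\begin{aligned}
\tilde{x} &= x + \tfrac{1}{2}\,\mathrm{FFN}_1(x)
  && \text{(first half feedforward layer 310)}\\
x' &= \tilde{x} + \mathrm{MHSA}(\tilde{x})
  && \text{(stack of multi-head self-attention layers 410)}\\
x'' &= x' + \mathrm{Conv}(x')
  && \text{(convolution layer 330)}\\
y &= \mathrm{LayerNorm}\big(x'' + \tfrac{1}{2}\,\mathrm{FFN}_2(x'')\big)
  && \text{(second half feedforward layer 340 and layernorm layer 350)}
\end{aligned}
```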
Referring now to
The initial training stage 501 of the training process 500 trains only the backbone model 302 to provide the first model configuration 300 for the ASR model 200 to use during inference. That is, during the initial training stage 501, the training process 500 does not train the intrinsic sub-model 410. Thus, the initial training stage 501 trains the backbone model 302 to provide the first model configuration 300 that includes only the trained backbone model 302. In some examples, the initial training stage 501 employs the audio encoder 210, a decoder 250 including the prediction network 220 and the joint network 230, and an initial loss module 520 to train the ASR model 200. During the initial training stage 501, each encoder layer of the audio encoder 210 includes the convolutional network architecture. Stated differently, each encoder layer of the audio encoder 210 corresponds to the backbone model 302.
The audio encoder 210 processes sequences of acoustic frames 110 each corresponding to a respective training utterance 512, and generates, at each output step, a higher order feature representation 212 based on a corresponding sequence of acoustic frames 110. For instance, when the audio encoder 210 operates in a streaming fashion, the audio encoder 210 generates a corresponding higher order feature representation 212 for each corresponding acoustic frame 110 in the sequence of acoustic frames 110. On the other hand, when the audio encoder 210 operates in a non-streaming fashion (e.g., receives additional right context), the audio encoder 210 generates a corresponding higher order feature representation 212 for one or more acoustic frames 110 in the sequence of acoustic frames 110. Notably, since the audio encoder 210 includes the backbone model 302 (e.g., convolutional network architecture) during the initial training stage 501, the audio encoder 210 generates the higher order feature representations 212 using convolution and without using self-attention.
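A minimal sketch of one backbone-only encoder layer follows, assuming PyTorch; the model dimension, activation, kernel size, depthwise convolution variant, and additive residual connections are illustrative assumptions rather than the disclosure's implementation.

```python
import torch
import torch.nn as nn

class BackboneBlock(nn.Module):
    """First model configuration: half FF, convolution, half FF, layernorm (no self-attention)."""
    def __init__(self, dim: int = 512, kernel_size: int = 15):
        super().__init__()
        self.ff1 = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, 4 * dim),
                                 nn.SiLU(), nn.Linear(4 * dim, dim))
        self.conv_norm = nn.LayerNorm(dim)
        self.conv = nn.Conv1d(dim, dim, kernel_size,
                              padding=kernel_size // 2, groups=dim)  # depthwise over time
        self.ff2 = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, 4 * dim),
                                 nn.SiLU(), nn.Linear(4 * dim, dim))
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):                                   # x: (B, T, dim) acoustic features
        x = x + 0.5 * self.ff1(x)                           # first half feedforward layer 310
        c = self.conv(self.conv_norm(x).transpose(1, 2))    # convolution layer 330
        x = x + c.transpose(1, 2)
        x = x + 0.5 * self.ff2(x)                           # second half feedforward layer 340
        return self.norm(x)                                 # layernorm layer 350
```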
The decoder 250 is configured to generate speech recognition results 120 for each higher order feature representation 212 generated by the audio encoder 210. The decoder 250 includes the prediction network 220 and the joint network 230. Thus, the joint network 230 is configured to receive, as input, a dense representation 222 generated by the prediction network 220 and the higher order feature representation 212 generated by the audio encoder 210 and generate, at each output step, the speech recognition result 120 for the higher order feature representation 212. That is, the joint network 230 generates the speech recognition result 120 based on the higher order feature representation 212 and the dense representation 222. In some implementations, the prediction network 220 receives, as input, a sequence of non-blank symbols 121 output by the final softmax layer of the joint network 230 and generates, at each output step, the dense representation 222. That is, the joint network 230 receives the dense representation 222 corresponding to a respective previous speech recognition result 120 and generates a current speech recognition result 120 using the dense representation 222 and the higher order feature representation 212.
The initial loss module 520 is configured to determine an initial training loss 525 for each training utterance 512 of the training data 510. In particular, for each respective training utterance 512, the initial loss module 520 compares the speech recognition result 120 generated for the respective training utterance 512 with the corresponding transcription 514. The initial training stage 501 updates parameters of the backbone model 302 based on the initial training loss 525 determined for each training utterance 512. More specifically, the initial training stage 501 updates parameters of at least one of the first half feedforward layer 310, the convolution layer 330, the second half feedforward layer 340, or the layernorm layer 350.
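As one concrete realization (a sketch, assuming PyTorch and that torchaudio's RNN-T loss is available as the transducer objective; the batch layout and function names are illustrative), the initial training loss 525 can be computed from the joint network's logits and used to update an optimizer built over only the backbone parameters:

```python
import torch
import torchaudio

def initial_training_step(audio_encoder, decoder, optimizer, batch, blank_id=0):
    """One update of the initial training stage 501: backbone-only encoder, RNN-T loss."""
    frames, frame_lens, targets, target_lens = batch   # training utterances 512 and transcriptions 514
    enc = audio_encoder(frames)                        # higher order feature representations 212
    logits = decoder(enc, targets)                     # (B, T, U + 1, vocab) joint-network logits
    loss = torchaudio.functional.rnnt_loss(            # initial training loss 525
        logits, targets.int(), frame_lens.int(), target_lens.int(), blank=blank_id)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()                                   # optimizer was built over backbone parameters only
    return loss.item()
```

Here the optimizer would be constructed as, e.g., `torch.optim.Adam(backbone.parameters())`, so only the first half feedforward layer 310, the convolution layer 330, the second half feedforward layer 340, and the layernorm layer 350 receive updates.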
After the initial training stage 501, the training process 500 adds the intrinsic sub-model 410 to the trained backbone model 302. Notably, the training process 500 adds the intrinsic sub-model 410 to the trained backbone model 302 without requiring any residual adaptors or additional residual connections other than the existing residual connections of the backbone model 302. That is, the training process 500 adds the intrinsic sub-model (e.g., multi-head self-attention layers) 410 to each encoder layer of the stack of encoder layers of the audio encoder 210. As a result, each encoder layer of the audio encoder 210 includes the conformer block 402 corresponding to the conformer architecture. Simply put, the stack of encoder layers corresponds to a stack of conformer layers. The fine-tuning training stage 502 freezes parameters of the first half feedforward layer 310, the convolution layer 330, the second half feedforward layer 340, and the layernorm layer 350 such that the frozen parameters are not trained during the fine-tuning training stage 502 (e.g., denoted by the dashed lines). That is, the fine-tuning training stage 502 fine-tunes parameters of the intrinsic sub-model 410 that was added to the trained backbone model 302 while parameters of the trained backbone model 302 remain frozen to provide the second model configuration 400 (
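In code, this step amounts to inserting a multi-head self-attention module between the first half feedforward layer 310 and the convolution layer 330 of each encoder layer, reusing the block's existing residual additions, and then marking the backbone layers as frozen. A hypothetical sketch extending the `BackboneBlock` above:

```python
class ModularBlock(BackboneBlock):
    """Second model configuration 400: backbone layers plus intrinsic sub-model 410 (MHSA)."""
    def __init__(self, dim: int = 512, num_heads: int = 8, **kwargs):
        super().__init__(dim, **kwargs)
        self.mhsa_norm = nn.LayerNorm(dim)
        self.mhsa = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                                    # x: (B, T, dim)
        x = x + 0.5 * self.ff1(x)                            # first half feedforward layer 310
        n = self.mhsa_norm(x)
        attn, _ = self.mhsa(n, n, n)
        x = x + attn                                         # intrinsic sub-model 410, existing residual
        c = self.conv(self.conv_norm(x).transpose(1, 2))     # convolution layer 330
        x = x + c.transpose(1, 2)
        x = x + 0.5 * self.ff2(x)                            # second half feedforward layer 340
        return self.norm(x)                                  # layernorm layer 350

def freeze_backbone(block: ModularBlock) -> None:
    """Fine-tuning training stage 502: freeze backbone layers; only the MHSA trains."""
    for module in (block.ff1, block.conv_norm, block.conv, block.ff2, block.norm):
        for param in module.parameters():
            param.requires_grad = False
```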
The fine-tuning training stage 502 employs the audio encoder 210, the decoder 250 including the prediction network 220 and the joint network 230, and a fine-tuning loss module 530. During the fine-tuning training stage 502, each encoder layer of the audio encoder 210 includes the conformer block 402 architecture. Stated differently, each encoder layer of the audio encoder 210 includes the intrinsic sub-model 410 added to the backbone model 302 during the fine-tuning training stage 502. The audio encoder 210 processes sequences of acoustic frames 110 each corresponding to a respective training utterance 512, and generates, at each output step, a higher order feature representation 212 based on a corresponding sequence of acoustic frames 110. For instance, when the audio encoder 210 operates in a streaming fashion, the audio encoder 210 generates a higher order feature representation 212 for each corresponding acoustic frame 110 in the sequence of acoustic frames 110. On the other hand, when the audio encoder 210 operates in a non-streaming fashion (e.g., receives additional right context), the audio encoder 210 generates a higher order feature representation 212 for one or more acoustic frames 110 in the sequence of acoustic frames 110. Notably, since the audio encoder 210 includes the intrinsic sub-model 410 added to the backbone model 302 during the fine-tuning training stage 502, the audio encoder 210 generates the higher order feature representations 212 using self-attention.
The decoder 250 is configured to generate speech recognition results 120 for each higher order feature representation 212 generated by the audio encoder 210. The decoder 250 includes the prediction network 220 and the joint network 230. Thus, the joint network 230 is configured to receive, as input, a dense representation 222 generated by the prediction network 220 and the higher order feature representation 212 generated by the audio encoder 210 and generate, at each output step, the speech recognition result 120 for the higher order feature representation 212. That is, the joint network 230 generates the speech recognition result 120 based on the higher order feature representation 212 and the dense representation 222. In some implementations, the prediction network 220 receives, as input, a sequence of non-blank symbols 121 output by the final softmax layer of the joint network 230 and generates, at each output step, a dense representation 222. That is, the joint network 230 receives the dense representation 222 for the previous speech recognition result 120 and generates a subsequent speech recognition result 120 using the dense representation 222.
The fine-tuning loss module 530 is configured to determine a fine-tuning loss 535 for each training utterance 512 of the training data 510. In particular, for each respective training utterance 512, the fine-tuning loss module 530 compares the speech recognition result 120 generated for the respective training utterance 512 with the corresponding transcription 514. The fine-tuning training stage 502 updates parameters of the intrinsic sub-model 410 based on the fine-tuning loss 535 determined for each training utterance 512 while parameters of the backbone model 302 remain frozen. More specifically, the fine-tuning training stage 502 updates parameters of the intrinsic sub-model 410 while parameters of the first half feedforward layer 310, the convolution layer 330, the second half feedforward layer 340, and the layernorm layer 350 remain frozen.
After the training process 500 trains the ASR model 200 using the initial training stage 501 and the fine-tuning training stage 502, the ASR model 200 may be adapted during inference. That is, the first configuration 300 includes the backbone model 302 which does not include self-attention layers, and thus, the ASR model 200 using the first configuration 300 operates at a lower latency, but at a higher WER. In contrast, the second configuration 400 includes the intrinsic sub-model 410 which does include self-attention layers, and thus, the ASR model 200 using the second configuration 400 operates at a lower WER, but at increased latency. In some implementations, the ASR model 200 may operate using a third configuration which includes only the intrinsic sub-model 410 with the backbone model 302 removed. However, drawbacks of the training process 500 include that the weights of the intrinsic sub-model 410 are randomly initialized during the fine-tuning training stage 502 when the weights of the backbone model 302 have already been trained during the initial training stage 501. Moreover, the fine-tuning training stage 502 starts off with a higher WER because the initial training stage 501 does not use any self-attention.
Referring now to
The initial training stage 601 of the training process 600 trains the backbone model 302 while applying a large dropout probability to any intrinsic sub-models 410 residually connected to the backbone model 302. Here, applying dropout means disregarding certain nodes of the intrinsic sub-model 410 at random during training. Thus, the dropout probability may range from 1.0, where all nodes of the intrinsic sub-model 410 are disregarded during training such that the audio encoder 210 uses only the backbone model 302, to 0.0, where no nodes of the intrinsic sub-model 410 are disregarded during training such that the audio encoder 210 includes the full conformer network architecture. The initial training stage 601 may apply any dropout probability to the intrinsic sub-model 410. For example, the initial training stage 601 may apply a dropout probability of 0.9. In some examples, the initial training stage 601 employs the audio encoder 210, the decoder 250 including the prediction network 220 and the joint network 230, and an initial loss module 620 to train the ASR model 200. During the initial training stage 601, each encoder layer of the audio encoder 210 includes the backbone model 302 with the added intrinsic sub-model 410 corresponding to the conformer block 402.
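One simple way to realize this during the initial training stage 601 is to stochastically bypass the sub-model's residual branch with the chosen probability (a layer-level interpretation of the disclosure's node dropout, used here for brevity); setting the probability to 1.0 reduces each layer to the backbone model 302, while 0.0 yields the full conformer block 402. A hypothetical sketch building on the `ModularBlock` above:

```python
import torch

class GatedModularBlock(ModularBlock):
    """Training process 600: intrinsic sub-model 410 gated by a large dropout probability."""
    def __init__(self, *args, sub_model_dropout: float = 0.9, **kwargs):
        super().__init__(*args, **kwargs)
        self.sub_model_dropout = sub_model_dropout           # e.g., 0.9 during the initial stage 601

    def forward(self, x):                                    # x: (B, T, dim)
        x = x + 0.5 * self.ff1(x)                            # first half feedforward layer 310
        drop = self.training and torch.rand(()).item() < self.sub_model_dropout
        if not drop:                                         # sub-model branch survives dropout
            n = self.mhsa_norm(x)
            attn, _ = self.mhsa(n, n, n)
            x = x + attn                                     # intrinsic sub-model 410
        c = self.conv(self.conv_norm(x).transpose(1, 2))     # convolution layer 330
        x = x + c.transpose(1, 2)
        x = x + 0.5 * self.ff2(x)                            # second half feedforward layer 340
        return self.norm(x)                                  # layernorm layer 350
```

During the fine-tuning training stage 602, `sub_model_dropout` would simply be set to 0.0 and the backbone layers frozen as in `freeze_backbone` above.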
The audio encoder 210 processes sequences of acoustic frames 110 each corresponding to a respective training utterance 612, and generates, at each output step, a higher order feature representation 212 based on a corresponding sequence of acoustic frames 110. The audio encoder 210 generates the higher order feature representation 212 while applying a large dropout probability to the intrinsic sub-model 410. When the audio encoder 210 operates in a streaming fashion, the audio encoder 210 generates a corresponding higher order feature representation 212 for each corresponding acoustic frame 110 in the sequence of acoustic frames 110. On the other hand, when the audio encoder 210 operates in a non-streaming fashion (e.g., receives additional right context), the audio encoder 210 generates a corresponding higher order feature representation 212 for one or more acoustic frames 110 in the sequence of acoustic frames 110. Notably, since the audio encoder 210 includes the backbone model 302 and the intrinsic sub-model 410 during the initial training stage 601, the audio encoder 210 generates the higher order feature representations 212 using convolution and a variable amount of self-attention dependent upon the dropout probability.
The decoder 250 is configured to generate speech recognition results 120 for each higher order feature representation 212 generated by the audio encoder 210. The decoder 250 includes the prediction network 220 and the joint network 230. Thus, the joint network 230 is configured to receive, as input, a dense representation 222 generated by the prediction network 220 and the higher order feature representation 212 generated by the audio encoder 210 and generate, at each output step, the speech recognition result 120 for the higher order feature representation 212. That is, the joint network 230 generates the speech recognition result 120 based on the higher order feature representation 212 and the dense representation 222. In some implementations, the prediction network 220 receives, as input, a sequence of non-blank symbols 121 output by the final softmax layer of the joint network 230 and generates, at each output step, the dense representation 222. That is, the joint network 230 receives the dense representation 222 corresponding to a respective previous speech recognition result 120 and generates a current speech recognition result 120 using the dense representation 222 and the higher order feature representation 212.
The initial loss module 620 is configured to determine an initial training loss 625 for each training utterance 612 of the training data 610. In particular, for each respective training utterance 612, the initial loss module 620 compares the speech recognition result 120 generated for the respective training utterance 612 with the corresponding transcription 614. The initial training stage 601 updates parameters of the backbone model 302 and/or the intrinsic sub-model 410 based on the initial training loss 625 determined for each training utterance 612.
After the initial training stage 601, the fine-tuning training stage 602 of the training process 600 does not apply the large dropout probability to the intrinsic sub-models 410. The fine-tuning training stage 602 freezes parameters of the trained backbone model 302 such that only parameters of the intrinsic sub-model 410 are updated during the fine-tuning training stage 602. The fine-tuning training stage 602 employs the audio encoder 210, the decoder 250 including the prediction network 220 and the joint network 230, and a fine-tuning loss module 630. The audio encoder 210 processes sequences of acoustic frames 110 each corresponding to a respective training utterance 612, and generates, at each output step, a higher order feature representation 212 based on a corresponding sequence of acoustic frames 110. For instance, when the audio encoder 210 operates in a streaming fashion, the audio encoder 210 generates a higher order feature representation 212 for each corresponding acoustic frame 110 in the sequence of acoustic frames 110. On the other hand, when the audio encoder 210 operates in a non-streaming fashion (e.g., receives additional right context), the audio encoder 210 generates a higher order feature representation 212 for one or more acoustic frames 110 in the sequence of acoustic frames 110. Notably, since the audio encoder 210 includes the intrinsic sub-model 410 and the backbone model 302 without applying the large dropout probability (e.g., dropout probability equal to zero) during the fine-tuning training stage 602, the audio encoder 210 generates the higher order feature representations 212 using self-attention.
The decoder 250 is configured to generate speech recognition results 120 for each higher order feature representation 212 generated by the audio encoder 210. The decoder 250 includes the prediction network 220 and the joint network 230. Thus, the joint network 230 is configured to receive, as input, a dense representation 222 generated by the prediction network 220 and the higher order feature representation 212 generated by the audio encoder 210 and generate, at each output step, the speech recognition result 120 for the higher order feature representation 212. That is, the joint network 230 generates the speech recognition result 120 based on the higher order feature representation 212 and the dense representation 222. In some implementations, the prediction network 220 receives, as input, a sequence of non-blank symbols 121 output by the final softmax layer of the joint network 230 and generates, at each output step, a dense representation 222. That is, the joint network 230 receives the dense representation 222 for the previous speech recognition result 120 and generates a subsequent speech recognition result 120 using the dense representation 222.
The fine-tuning loss module 630 is configured to determine a fine-tuning loss 635 for each training utterance 612 of the training data 610. In particular, for each respective training utterance 612, the fine-tuning loss module 630 compares the speech recognition result 120 generated for the respective training utterance 612 with the corresponding transcription 614. The fine-tuning training stage 602 updates parameters of the intrinsic sub-model 410 based on the fine-tuning loss 635 determined for each training utterance 612 while parameters of the backbone model 302 remain frozen. More specifically, the fine-tuning training stage 602 updates parameters of the intrinsic sub-model 410 while parameters of the first half feedforward layer 310, the convolution layer 330, the second half feedforward layer 340, and the layernorm layer 350 remain frozen.
After the training process 600 trains the ASR model 200 using the initial training stage 601 and the fine-tuning training stage 602, the ASR model 200 may be adapted during inference. That is, the first configuration 300 includes the backbone model 302 which does not include self-attention layers, and thus, the ASR model 200 using the first configuration 300 operates at a lower latency, but at a higher WER. In contrast, the second configuration 400 includes the intrinsic sub-model 410 which does include self-attention layers, and thus, the ASR model 200 using the second configuration 400 operates at a lower WER, but at increased latency. In some implementations, the ASR model 200 may operate using a third configuration which includes only the intrinsic sub-model 410 with the backbone model 302 removed. Advantageously, using the training process 600 causes the weights of the intrinsic sub-model 410 to already be partially trained entering the fine-tuning training stage 602, because the intrinsic sub-model 410 receives some training updates during the initial training stage 601 even with the large dropout probability applied. Moreover, the fine-tuning training stage 602 starts off with a lower WER because the initial training stage 601 includes a limited amount of self-attention as a result of the high dropout probability applied to the intrinsic sub-model 410.
Referring now to
For example, the first domain may be associated with speech recognition in a first language and the second domain is associated with speech recognition in a second language different than the first language. In another example, the first domain may be associated with speech recognition for utterances including a single language and the second domain is associated with speech recognition for utterances including code-switched utterances. In yet another example, the first domain may be associated with streaming speech recognition while the second domain is associated with non-streaming speech recognition. Advantageously, any number of different intrinsic sub-models 410 may be added to the trained backbone model 302 and adapted towards a specific speech-related task.
Accordingly, during inference, the trained ASR model 200 may be configured to operate in any of a number of configurations. The ASR model 200 may operate in a first model configuration 300 that includes only the trained backbone model 302 whereby the intrinsic sub-model 410 is removed, thereby providing low latency at increased WER. In some examples, the ASR model 200 operates in the second model configuration 400 that includes the backbone model 302 initially trained during the initial training stage 501, 601 and the intrinsic sub-model 410 having the parameters fine-tuned during the fine-tuning training stage 502, 602. In other examples, the ASR model 200 operates in a third configuration that includes only the intrinsic sub-model 410 having the parameters fine-tuned during the fine-tuning training stage 502, 602 and the trained backbone model 302 removed. In yet other examples, the ASR model 200 operates with the intrinsic sub-model 410 having the parameters fine-tuned during the fine-tuning training stage 502, 602 for only a subset of layers. For instance, an ASR model 200 with an audio encoder 210 having 8 encoder layers may use the trained backbone model 302 only for the first 4 layers and use the trained backbone model 302 with the added intrinsic sub-model 410 for the remaining 4 layers. In short, by operating in any one of these configurations, the ASR model 200 is able to adapt to any trade-off between WER and latency best suited for each particular task. Notably, the ASR model 200 is able to adapt to these different configurations without requiring any residual adaptors or additional residual connections other than the existing residual connections.
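In code, choosing among these configurations can reduce to a per-layer flag indicating whether the attention branch runs at inference. The sketch below is a hypothetical arrangement reusing the `BackboneBlock` and `ModularBlock` classes from the earlier examples; the `use_sub_model` list is illustrative.

```python
import torch.nn as nn

class ConfigurableEncoder(nn.Module):
    """Audio encoder 210 whose layers can run with or without the intrinsic sub-model 410."""
    def __init__(self, num_layers: int = 8, dim: int = 512):
        super().__init__()
        self.layers = nn.ModuleList([ModularBlock(dim) for _ in range(num_layers)])
        self.use_sub_model = [True] * num_layers             # per-layer inference configuration

    def forward(self, x):
        for layer, use_attention in zip(self.layers, self.use_sub_model):
            # Fall back to the parent BackboneBlock forward pass to skip the MHSA branch.
            x = layer(x) if use_attention else BackboneBlock.forward(layer, x)
        return x

encoder = ConfigurableEncoder()
encoder.use_sub_model = [False] * 8                  # first model configuration 300: low latency
encoder.use_sub_model = [True] * 8                   # second model configuration 400: low WER
encoder.use_sub_model = [False] * 4 + [True] * 4     # mixed: backbone-only for the first 4 layers
```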
At operation 702, the method 700 includes training only a backbone model 302 to provide a first model configuration 300 of the modular neural network 200 during an initial training stage 501. The first model configuration 300 includes only the trained backbone model 302. At operation 704, the method 700 includes adding an intrinsic sub-model 410 to the trained backbone model 302. During a fine-tuning training stage 502, the method 700 performs operations 706 and 708. At operation 706, the method 700 includes freezing parameters of the trained backbone model 302. At operation 708, the method 700 includes fine-tuning parameters of the intrinsic sub-model 410 added to the trained backbone model 302 while the parameters of the trained backbone model 302 are frozen to provide a second model configuration 400. Here, the second model configuration 400 includes the backbone model 302 initially trained during the initial training stage 501 and the intrinsic sub-model 410 having the parameters fine-tuned during the fine-tuning training stage 502.
At operation 802, the method 800 includes, during an initial training stage 601, training a backbone model 302 while applying a large dropout probability to any intrinsic sub-models 410 residually connected to the backbone model 302 to provide a first model configuration 300 of the modular neural network model 200. That is, even though the initial training stage 601 includes the intrinsic sub-model 410, the initial training stage 601 provides the first model configuration 300 including only the trained backbone model 302. During a fine-tuning training stage 602, the method 800 performs operations 804 and 806. At operation 804, the method 800 includes freezing parameters of the trained backbone model 302. At operation 806, the method 800 includes fine-tuning parameters of the intrinsic sub-model 410 residually connected to the trained backbone model 302 while the parameters of the trained backbone model 302 are frozen to provide a second model configuration 400. Here, the second model configuration 400 includes the backbone model 302 initially trained during the initial training stage 601 and the intrinsic sub-model 410 having the parameters fine-tuned during the fine-tuning stage 602.
The computing device 900 includes a processor 910, memory 920, a storage device 930, a high-speed interface/controller 940 connecting to the memory 920 and high-speed expansion ports 950, and a low speed interface/controller 960 connecting to a low speed bus 970 and the storage device 930. Each of the components 910, 920, 930, 940, 950, and 960 is interconnected using various busses, and may be mounted on a common motherboard or in other manners as appropriate. The processor 910 can process instructions for execution within the computing device 900, including instructions stored in the memory 920 or on the storage device 930 to display graphical information for a graphical user interface (GUI) on an external input/output device, such as display 980 coupled to high speed interface 940. In other implementations, multiple processors and/or multiple buses may be used, as appropriate, along with multiple memories and types of memory. Also, multiple computing devices 900 may be connected, with each device providing portions of the necessary operations (e.g., as a server bank, a group of blade servers, or a multi-processor system).
The memory 920 stores information non-transitorily within the computing device 900. The memory 920 may be a computer-readable medium, a volatile memory unit(s), or non-volatile memory unit(s). The non-transitory memory 920 may be physical devices used to store programs (e.g., sequences of instructions) or data (e.g., program state information) on a temporary or permanent basis for use by the computing device 900. Examples of non-volatile memory include, but are not limited to, flash memory and read-only memory (ROM)/programmable read-only memory (PROM)/erasable programmable read-only memory (EPROM)/electronically erasable programmable read-only memory (EEPROM) (e.g., typically used for firmware, such as boot programs). Examples of volatile memory include, but are not limited to, random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), phase change memory (PCM) as well as disks or tapes.
The storage device 930 is capable of providing mass storage for the computing device 900. In some implementations, the storage device 930 is a computer-readable medium. In various different implementations, the storage device 930 may be a floppy disk device, a hard disk device, an optical disk device, or a tape device, a flash memory or other similar solid state memory device, or an array of devices, including devices in a storage area network or other configurations. In additional implementations, a computer program product is tangibly embodied in an information carrier. The computer program product contains instructions that, when executed, perform one or more methods, such as those described above. The information carrier is a computer- or machine-readable medium, such as the memory 920, the storage device 930, or memory on processor 910.
The high speed controller 940 manages bandwidth-intensive operations for the computing device 900, while the low speed controller 960 manages lower bandwidth-intensive operations. Such allocation of duties is exemplary only. In some implementations, the high-speed controller 940 is coupled to the memory 920, the display 980 (e.g., through a graphics processor or accelerator), and to the high-speed expansion ports 950, which may accept various expansion cards (not shown). In some implementations, the low-speed controller 960 is coupled to the storage device 930 and a low-speed expansion port 990. The low-speed expansion port 990, which may include various communication ports (e.g., USB, Bluetooth, Ethernet, wireless Ethernet), may be coupled to one or more input/output devices, such as a keyboard, a pointing device, a scanner, or a networking device such as a switch or router, e.g., through a network adapter.
The computing device 900 may be implemented in a number of different forms, as shown in the figure. For example, it may be implemented as a standard server 900a or multiple times in a group of such servers 900a, as a laptop computer 900b, or as part of a rack server system 900c.
Various implementations of the systems and techniques described herein can be realized in digital electronic and/or optical circuitry, integrated circuitry, specially designed ASICs (application specific integrated circuits), computer hardware, firmware, software, and/or combinations thereof. These various implementations can include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special or general purpose, coupled to receive data and instructions from, and to transmit data and instructions to, a storage system, at least one input device, and at least one output device.
These computer programs (also known as programs, software, software applications or code) include machine instructions for a programmable processor, and can be implemented in a high-level procedural and/or object-oriented programming language, and/or in assembly/machine language. As used herein, the terms “machine-readable medium” and “computer-readable medium” refer to any computer program product, non-transitory computer readable medium, apparatus and/or device (e.g., magnetic discs, optical disks, memory, Programmable Logic Devices (PLDs)) used to provide machine instructions and/or data to a programmable processor, including a machine-readable medium that receives machine instructions as a machine-readable signal. The term “machine-readable signal” refers to any signal used to provide machine instructions and/or data to a programmable processor.
The processes and logic flows described in this specification can be performed by one or more programmable processors, also referred to as data processing hardware, executing one or more computer programs to perform functions by operating on input data and generating output. The processes and logic flows can also be performed by special purpose logic circuitry, e.g., an FPGA (field programmable gate array) or an ASIC (application specific integrated circuit). Processors suitable for the execution of a computer program include, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor will receive instructions and data from a read only memory or a random access memory or both. The essential elements of a computer are a processor for performing instructions and one or more memory devices for storing instructions and data. Generally, a computer will also include, or be operatively coupled to receive data from or transfer data to, or both, one or more mass storage devices for storing data, e.g., magnetic, magneto optical disks, or optical disks. However, a computer need not have such devices. Computer readable media suitable for storing computer program instructions and data include all forms of non-volatile memory, media and memory devices, including by way of example semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory devices; magnetic disks, e.g., internal hard disks or removable disks; magneto optical disks; and CD ROM and DVD-ROM disks. The processor and the memory can be supplemented by, or incorporated in, special purpose logic circuitry.
To provide for interaction with a user, one or more aspects of the disclosure can be implemented on a computer having a display device, e.g., a CRT (cathode ray tube), LCD (liquid crystal display) monitor, or touch screen for displaying information to the user and optionally a keyboard and a pointing device, e.g., a mouse or a trackball, by which the user can provide input to the computer. Other kinds of devices can be used to provide interaction with a user as well; for example, feedback provided to the user can be any form of sensory feedback, e.g., visual feedback, auditory feedback, or tactile feedback; and input from the user can be received in any form, including acoustic, speech, or tactile input. In addition, a computer can interact with a user by sending documents to and receiving documents from a device that is used by the user; for example, by sending web pages to a web browser on a user's client device in response to requests received from the web browser.
A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure. Accordingly, other implementations are within the scope of the following claims.
This U.S. patent application claims priority under 35 U.S.C. § 119(e) to U.S. Provisional Application 63/385,959, filed on Dec. 2, 2022. The disclosure of this prior application is considered part of the disclosure of this application and is hereby incorporated by reference in its entirety.