This application claims priority to Chinese Patent Application No. 202311104070.7, filed on Aug. 29, 2023, the contents of which are hereby incorporated by reference in their entirety for all purposes.
The present disclosure relates to the technical field of artificial intelligence, in particular to the technical fields of speech recognition, deep learning, and the like, and specifically relates to a speech recognition method, a training method for a deep learning model for speech recognition, a speech recognition apparatus, a training apparatus for a deep learning model for speech recognition, an electronic device, a computer-readable storage medium, and a computer program product.
Artificial intelligence is the discipline of studying how computers can simulate certain thinking processes and intelligent behaviors of a human being (such as learning, reasoning, thinking, planning, etc.); it involves both hardware-level and software-level technologies. Artificial intelligence hardware technologies generally include technologies such as sensors, special artificial intelligence chips, cloud computing, distributed storage, and big data processing. Artificial intelligence software technologies mainly include the major directions of computer vision technology, speech recognition technology, natural language processing technology, machine learning/deep learning, big data processing technology, and knowledge graph technology.
Automatic speech recognition (ASR) is a technique for automatically converting an input speech signal into corresponding text using a computer. With the in-depth application of deep learning technology in the field of speech recognition, and in particular the emergence of end-to-end speech recognition technology, the accuracy of speech recognition has been significantly improved while modeling complexity has been reduced. Moreover, with the continuous popularization of various intelligent devices, large-vocabulary online speech recognition systems have been very widely used in various scenarios such as speech transcription, intelligent customer service, in-vehicle navigation, smart home, and the like. In these speech recognition tasks, after the completion of the speech input, the user usually wants to get the response and feedback of the system quickly and accurately, which places very high requirements on the accuracy and the real-time rate of the speech recognition model.
The methods described in this section are not necessarily methods that have been previously conceived or employed. Unless otherwise indicated, it should not be assumed that any method described in this section is considered to be the prior art only due to its inclusion in this section. Similarly, the problems mentioned in this section should not be assumed to be recognized in any prior art unless otherwise indicated.
The present disclosure provides a speech recognition method, a training method for a deep learning model for speech recognition, a speech recognition apparatus, a training apparatus for a deep learning model for speech recognition, an electronic device, a computer-readable storage medium, and a computer program product.
According to an aspect of the present disclosure, there is provided a speech recognition method, including: obtaining a first speech feature of a speech to-be-recognized, where the first speech feature includes a plurality of speech segment features corresponding to a plurality of speech segments in the speech to-be-recognized; decoding the first speech feature using a first decoder to obtain a plurality of first decoding results corresponding to a plurality of words in the speech to-be-recognized, where each first decoding result of the plurality of first decoding results indicates a first recognition result of a word corresponding to the first decoding result; extracting a second speech feature from the first speech feature based on first a priori information, where the first a priori information includes the plurality of first decoding results, and the second speech feature includes a plurality of first word-level audio features corresponding to the plurality of words; and decoding the second speech feature using a second decoder to obtain a plurality of second decoding results corresponding to the plurality of words, where each second decoding result of the plurality of second decoding results indicates a second recognition result of a word corresponding to the second decoding result.
According to another aspect of the present disclosure, there is provided a method for training a deep learning model for speech recognition, where the deep learning model includes a first decoder and a second decoder, and the training method includes: obtaining a sample speech and ground truth recognition results of a plurality of words in the sample speech; obtaining a first sample speech feature of the sample speech, where the first sample speech feature includes a plurality of sample speech segment features corresponding to a plurality of sample speech segments in the sample speech; decoding the first sample speech feature using the first decoder to obtain a plurality of first sample decoding results corresponding to the plurality of words in the sample speech, where each first sample decoding result of the plurality of first sample decoding results indicates a first recognition result of a word corresponding to the first sample decoding result; extracting a second sample speech feature from the first sample speech feature based on first sample a priori information, where the first sample a priori information includes the plurality of first sample decoding results, and the second sample speech feature includes a plurality of first sample word-level audio features corresponding to the plurality of words; decoding the second sample speech feature using the second decoder to obtain a plurality of second sample decoding results corresponding to the plurality of words, where each second sample decoding result of the plurality of second sample decoding results indicates a second recognition result of a word corresponding to the second sample decoding result; and adjusting parameters of the deep learning model based on the ground truth recognition results, the first recognition results, and the second recognition results of the plurality of words to obtain a trained deep learning model.
According to another aspect of the present disclosure, there is provided a speech recognition apparatus, including: a speech feature encoding module configured to obtain a first speech feature of a speech to-be-recognized, where the first speech feature includes a plurality of speech segment features corresponding to a plurality of speech segments in the speech to-be-recognized; a first decoder configured to decode the first speech feature to obtain a plurality of first decoding results corresponding to a plurality of words in the speech to-be-recognized, where each first decoding result of the plurality of first decoding results indicates a first recognition result of a word corresponding to the first decoding result; a word-level feature extraction module configured to extract a second speech feature from the first speech feature based on first a priori information, where the first a priori information includes the plurality of first decoding results, and the second speech feature includes a plurality of first word-level audio features corresponding to the plurality of words; and a second decoder configured to decode the second speech feature to obtain a plurality of second decoding results corresponding to the plurality of words, where each second decoding result of the plurality of second decoding results indicates a second recognition result of a word corresponding to the second decoding result.
According to another aspect of the present disclosure, there is provided an apparatus for training a deep learning model for speech recognition, where the deep learning model includes a first decoder and a second decoder, and the training apparatus includes: an obtaining module configured to obtain a sample speech and ground truth recognition results of a plurality of words in the sample speech; a speech feature encoding module configured to obtain a first sample speech feature of the sample speech, where the first sample speech feature includes a plurality of sample speech segment features corresponding to a plurality of sample speech segments in the sample speech; a first decoder configured to decode the first sample speech feature to obtain a plurality of first sample decoding results corresponding to the plurality of words in the sample speech, where each first sample decoding result of the plurality of first sample decoding results indicates a first recognition result of a word corresponding to the first sample decoding result; a word-level feature extraction module configured to extract a second sample speech feature from the first sample speech feature based on first sample a priori information, where the first sample a priori information includes the plurality of first sample decoding results, and the second sample speech feature includes a plurality of first sample word-level audio features corresponding to the plurality of words; a second decoder configured to decode the second sample speech feature to obtain a plurality of second sample decoding results corresponding to the plurality of words, where each second sample decoding result of the plurality of second sample decoding results indicates a second recognition result of a word corresponding to the second sample decoding result; and an adjustment module configured to adjust parameters of the deep learning model based on the ground truth recognition results of the plurality of words, the first recognition results and the second recognition results to obtain a trained deep learning model.
According to another aspect of the present disclosure, there is provided an electronic device, including: one or more processors; a memory storing one or more programs configured to be executed by the one or more processors, the one or more programs including instructions for: obtaining a first speech feature of a speech to-be-recognized, where the first speech feature includes a plurality of speech segment features corresponding to a plurality of speech segments in the speech to-be-recognized; decoding the first speech feature using a first decoder to obtain a plurality of first decoding results corresponding to a plurality of words in the speech to-be-recognized, where each first decoding result of the plurality of first decoding results indicates a first recognition result of a word corresponding to the first decoding result; extracting a second speech feature from the first speech feature based on first a priori information, where the first a priori information includes the plurality of first decoding results, and the second speech feature includes a plurality of first word-level audio features corresponding to the plurality of words; and decoding the second speech feature using a second decoder to obtain a plurality of second decoding results corresponding to the plurality of words, where each second decoding result of the plurality of second decoding results indicates a second recognition result of a word corresponding to the second decoding result.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium that stores computer instructions, where the computer instructions are used to enable a computer to execute the method described above.
According to another aspect of the present disclosure, there is provided a computer program product, including a computer program, where the computer program implements the method described above when executed by a processor.
According to one or more embodiments of the present disclosure, the present disclosure obtains a first speech feature that includes a plurality of speech segment features of a speech to-be-recognized, decodes the first speech feature to obtain a preliminary recognition result of the speech to-be-recognized, then extracts word-level audio features from the first speech feature using the preliminary recognition result, and then decodes the word-level audio features to obtain a final recognition result.
By using the preliminary recognition result of the speech to-be-recognized as a priori information, a word-level, equal-length, uniform audio feature representation is extracted from the unequal-length frame-level speech feature information, and the word-level audio features are decoded to obtain the final recognition result. In this way, the problem of inconsistent feature representation lengths in traditional framed speech features is solved, the precision of speech recognition is improved, and the computational efficiency is improved.
It should be understood that the content described in this section is not intended to identify key or important features of the embodiments of the present disclosure, nor is it intended to limit the scope of the present disclosure. Other features of the present disclosure will become readily understood from the following specification.
The drawings illustrate embodiments and constitute a part of the specification and are used in conjunction with the textual description of the specification to explain the exemplary implementations of the embodiments. The illustrated embodiments are for illustrative purposes only and do not limit the scope of the claims. Throughout the drawings, like reference numerals refer to similar but not necessarily identical elements.
The example embodiments of the present disclosure are described below in conjunction with the accompanying drawings, including various details of the embodiments of the present disclosure to facilitate understanding, and they should be considered as examples only. Therefore, one of ordinary skill in the art will recognize that various changes and modifications may be made to the embodiments described herein without departing from the scope of the present disclosure. Similarly, descriptions of well-known functions and structures are omitted in the following description for the purpose of clarity and conciseness.
In the present disclosure, unless otherwise specified, the terms “first”, “second” and the like are used to describe various elements and are not intended to limit the positional relationship, timing relationship, or importance relationship of these elements, and such terms are only used to distinguish one element from another. In some examples, the first element and the second element may refer to the same instance of the element, while in some cases they may also refer to different instances based on the description of the context.
The terminology used in the description of the various examples in this disclosure is for the purpose of describing particular examples only and is not intended to be limiting. Unless the context clearly indicates otherwise, if the number of elements is not specifically defined, the element may be one or more. In addition, the terms “and/or” used in the present disclosure encompass any one of the listed items and all possible combinations thereof.
In the related art, some speech recognition methods use framed speech features to learn the representation of audio features. However, because the content information contained in a speech changes continuously along with the speaker's speech speed, tone, intonation, and the like, and the same content can be expressed completely differently by different speakers, this kind of feature representation method may cause inconsistent representation lengths of the framed speech features and affect the accuracy of speech recognition; moreover, the feature representation contains a large number of redundant features, resulting in low computational efficiency.
To solve the above problem, the present disclosure obtains a first speech feature that includes a plurality of speech segment features of a speech to-be-recognized, decodes the first speech feature to obtain a preliminary recognition result of the speech to-be-recognized, then extracts word-level audio features from the first speech feature using the preliminary recognition result, and then decodes the word-level audio features to obtain a final recognition result. By using the preliminary recognition result of the speech to-be-recognized as a priori information, a word-level, equal-length, uniform audio feature representation is extracted from the unequal-length frame-level speech feature information, and the word-level audio features are decoded to obtain the final recognition result. In this way, the problem of inconsistent feature representation lengths in traditional framed speech features is solved, the precision of speech recognition is improved, and the computational efficiency is improved.
Embodiments of the present disclosure will be described in detail below with reference to the accompanying drawings.
In embodiments of the present disclosure, the server 120 may run one or more services or software applications that enable the execution of the speech recognition method and/or the training method for a deep learning model for speech recognition according to the present disclosure. In an example embodiment, a complete speech recognition system or some components of the speech recognition system may be deployed on the server, for example a large speech model.
In some embodiments, the server 120 may further provide other services or software applications, which may include non-virtual environments and virtual environments. In some embodiments, these services may be provided as web-based services or cloud services, such as to users of the client devices 101, 102, 103, 104, 105, and/or 106 under a Software as a Service (SaaS) model.
In the configuration shown in
The client devices 101, 102, 103, 104, 105, and/or 106 may provide interfaces that enable the user of the client devices to interact with the client devices. The client devices may also output information to the user via the interface. Although
The client devices 101, 102, 103, 104, 105, and/or 106 may include various types of computer devices, such as portable handheld devices, general-purpose computers (such as personal computers and laptop computers), workstation computers, wearable devices, smart screen devices, self-service terminal devices, service robots, in-vehicle devices, gaming systems, thin clients, various messaging devices, sensors or other sensing devices, and the like. These computer devices may run various types and versions of software applications and operating systems, such as Microsoft Windows, Apple iOS, Unix-like operating systems, Linux or Linux-like operating systems; or include various mobile operating systems, such as Microsoft Windows Mobile OS, iOS, Windows Phone, Android. The portable handheld devices may include cellular telephones, smart phones, tablet computers, personal digital assistants (PDA), and the like. The wearable devices may include head-mounted displays (such as smart glasses) and other devices. The gaming systems may include various handheld gaming devices, Internet-enabled gaming devices, and the like. The client devices can execute various different applications, such as various Internet related applications, communication applications (e.g., e-mail applications), Short Message Service (SMS) applications, and may use various communication protocols.
The network 110 may be any type of network well known to those skilled in the art, which may support data communication using any of a variety of available protocols (including but not limited to TCP/IP, SNA, IPX, etc.). By way of example only, the one or more networks 110 may be a local area network (LAN), an Ethernet-based network, a token ring, a wide area network (WAN), the Internet, a virtual network, a virtual private network (VPN), an intranet, an extranet, a blockchain network, a public switched telephone network (PSTN), an infrared network, a wireless network (including, for example, Bluetooth and WiFi), and/or any combination of these and/or other networks.
The server 120 may include one or more general-purpose computers, a dedicated server computer (e.g., a PC (personal computer) server, a UNIX server, a mid-range server), a blade server, a mainframe computer, a server cluster, or any other suitable arrangement and/or combination. The server 120 may include one or more virtual machines running a virtual operating system, or other computing architectures involving virtualization (e.g., one or more flexible pools of logical storage devices that may be virtualized to maintain virtual storage devices of the server). In various embodiments, the server 120 may run one or more services or software applications that provide the functions described below.
The computing unit in the server 120 may run one or more operating systems including any of the operating systems described above and any commercially available server operating system. The server 120 may also run any of a variety of additional server applications and/or intermediate layer applications, including an HTTP server, an FTP server, a CGI server, a Java server, a database server, etc.
In some implementations, the server 120 may include one or more applications to analyze and merge data feeds and/or event updates received from the user of the client devices 101, 102, 103, 104, 105, and/or 106. The server 120 may also include one or more applications to display the data feeds and/or the real-time events via one or more display devices of the client devices 101, 102, 103, 104, 105, and/or 106.
In some embodiments, the server 120 may be a server of a distributed system, or a server incorporating a blockchain. The server 120 may also be a cloud server, or an intelligent cloud computing server or intelligent cloud host with artificial intelligence technology. The cloud server is a host product in the cloud computing service system and is used to overcome the defects of difficult management and weak service expansibility that exist in the conventional physical host and Virtual Private Server (VPS) services.
The system 100 may also include one or more databases 130. In certain embodiments, these databases may be used to store data and other information. For example, one or more of the databases 130 may be used to store information such as audio files and video files. The databases 130 may reside in various locations. For example, the database used by the server 120 may be local to the server 120, or may be remote from the server 120 and may communicate with the server 120 via a network-based or dedicated connection. The databases 130 may be of different types. In some embodiments, the database used by the server 120 may be, for example, a relational database. One or more of these databases may store, update, and retrieve data to and from the database in response to a command.
In some embodiments, one or more of the databases 130 may also be used by an application to store application data. The database used by an application may be a different type of database, such as a key-value repository, an object repository, or a conventional repository supported by a file system.
The system 100 of
According to an aspect of the present disclosure, there is provided a speech recognition method. As shown in
Therefore, by using the preliminary recognition result of the speech to-be-recognized as a priori information, a word-level, equal-length, uniform audio feature representation is extracted from the unequal-length frame-level speech feature information, and the word-level audio features are decoded to obtain the final recognition result. In this way, the problem of inconsistent feature representation lengths in traditional framed speech features is solved, the precision of speech recognition is improved, and the computational efficiency is improved.
In order to explain the technical concept, the speech to-be-recognized in the embodiments of the present disclosure comprises speech content corresponding to a plurality of words. In step S201, the first speech feature of the speech to-be-recognized can be obtained by using various types of existing speech feature extraction methods. The plurality of speech segments may be obtained by truncating the speech to-be-recognized with a fixed length, or may be obtained by using other truncation methods; the plurality of speech segment features may be in a one-to-one correspondence with the plurality of speech segments, or the same speech segment may correspond to the plurality of speech segment features (as will be described below), which are not limited herein.
According to some embodiments, as shown in
Since spike signals generally have correspondence with each word in the speech to-be-recognized, by first obtaining the spike signals of the speech to-be-recognized and obtaining the plurality of speech segment features in one-to-one correspondence with the plurality of spikes based on the spike information, the first decoder can decode the first speech feature driven by the spike information, thereby obtaining an accurate preliminary recognition result.
In step S301, speech feature extraction may be performed on a plurality of speech frames included in the speech to-be-recognized to obtain the original speech feature that includes the plurality of speech frame features.
In step S302, the original speech feature may be processed using a binary CTC (Connectionist Temporal Classification) module that is modeled based on a Causal Conformer to obtain CTC spike information, thereby determining the plurality of spikes in the speech to-be-recognized. It can be understood that the plurality of spikes in the speech to-be-recognized may also be determined in other ways, which is not limited herein.
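By way of a non-limiting illustration, the following sketch shows how spike positions could be identified from frame-level CTC posteriors. The function name, the threshold, and the (T, V) posterior layout are assumptions made for this example; they do not reproduce the binary CTC module described above.

```python
import numpy as np

def find_ctc_spikes(posteriors: np.ndarray, blank_id: int = 0,
                    threshold: float = 0.5) -> list[int]:
    """Return frame indices treated as CTC spikes.

    posteriors: array of shape (T, V) holding per-frame CTC label probabilities.
    A frame is treated as a spike when its most probable label is non-blank,
    exceeds `threshold`, and differs from the previous frame's label (the usual
    CTC collapsing rule), so each spike roughly marks one output unit.
    """
    spikes = []
    prev_label = blank_id
    for t, frame in enumerate(posteriors):
        label = int(frame.argmax())
        if label != blank_id and frame[label] >= threshold and label != prev_label:
            spikes.append(t)
        prev_label = label
    return spikes
```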
In step S303, truncating the original speech feature may be performed by truncating the plurality of speech frame features corresponding to the plurality of speech frames into a plurality of groups of speech frame features, and each group of speech frames/speech frame features forms a speech segment/speech segment feature.
According to some embodiments, in step S303, truncating the original speech feature to obtain the plurality of speech segment features in a one-to-one correspondence with the plurality of spikes may comprise: truncating the original speech feature based on a predetermined time length, and using the speech segment feature of the speech segment where each spike of the plurality of spikes is located as the speech segment feature corresponding to the spike. Therefore, in this way, the speech segment features corresponding to the spikes all have the same length. It should be noted that, in this manner, if more than one spike is included in a speech segment, the speech segment feature of that speech segment will be used as the speech segment feature corresponding to each of these spikes at the same time.
It may be understood that the predetermined time length may be set based on requirements. In the embodiment described in
According to some embodiments, in step S303, truncating the original speech feature to obtain the plurality of speech segment features in one-to-one correspondence with the plurality of spikes may comprise: truncating the original speech feature based on the plurality of spikes, and using the feature of the speech segment between every two adjacent spikes as the speech segment feature corresponding to one of the spikes. Therefore, in this way, the speech segment feature corresponding to each spike includes complete speech information of the speech segment that is formed between two adjacent spikes.
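As a minimal sketch of the two truncation strategies described above (fixed-length truncation and truncation between adjacent spikes), assuming frame-level features of shape (T, d) and the spike frame indices obtained earlier; the function names and the assignment of each between-spike segment to the later spike are illustrative assumptions only.

```python
import numpy as np

def truncate_fixed_length(frames: np.ndarray, spikes: list[int],
                          segment_len: int) -> list[np.ndarray]:
    """Fixed-length truncation: each spike is assigned the feature of the
    fixed-length segment containing it, so all segment features share the
    same length (one segment may serve several spikes)."""
    segments = [frames[s:s + segment_len]
                for s in range(0, len(frames), segment_len)]
    return [segments[t // segment_len] for t in spikes]

def truncate_between_spikes(frames: np.ndarray,
                            spikes: list[int]) -> list[np.ndarray]:
    """Spike-based truncation: the frames between two adjacent spikes form
    the segment feature assigned to one of those spikes, so each segment
    carries the complete speech information between the two spikes."""
    boundaries = [0] + list(spikes)
    return [frames[boundaries[i - 1]:boundaries[i]]
            for i in range(1, len(boundaries))]
```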
In some embodiments, down-sampling (e.g., convolutional down-sampling) may be performed on the original speech feature before the original speech feature is used (by the CTC module or for the preliminary speech recognition).
According to some embodiments, the plurality of speech segment features can be sequentially obtained by performing streaming truncation on the original speech feature. In step S202, decoding the first speech feature using the first decoder may comprise: sequentially performing streaming decoding on the plurality of speech segment features using the first decoder. Therefore, the preliminary recognition result of the speech to-be-recognized can be quickly obtained by performing streaming truncation on the original speech feature and performing streaming decoding on the first speech feature.
According to some embodiments, the speech segment feature may be further encoded in a manner based on historical feature abstraction to enhance its descriptive capability, thereby improving the accuracy of the preliminary recognition result obtained by decoding the speech segment feature. As shown in
In some embodiments, the historical feature abstract information corresponding to the currently obtained speech segment feature includes a plurality of pieces of historical feature abstract information, one for each prior speech segment feature, and the historical feature abstract information of each prior speech segment feature is obtained by performing attention modeling on the prior speech segment feature using the first decoding result corresponding to that prior speech segment feature. In an example embodiment, an attention mechanism calculation may be performed by using the first decoding result as the query feature Q and using the prior speech segment feature as the key feature K and the value feature V to obtain the historical feature abstract information of the prior speech segment feature. The calculation process of the attention mechanism may be expressed as:
Attention(Q, K, V) = softmax(QK^T/√d_k)V, where d_k is the dimension of a feature. It can be understood that other feature obtainments and attention mechanism calculations in the present disclosure that are based on query features, key features and value features may all refer to this formula. It should be noted that the number of features obtained in this manner is the same as the number of features included in the query feature.
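A minimal NumPy sketch of the scaled dot-product attention calculation referred to above; the function name is illustrative.

```python
import numpy as np

def attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
    Q: (num_queries, d_k), K: (num_keys, d_k), V: (num_keys, d_v).
    The number of output rows always equals the number of query rows."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # (num_queries, num_keys)
    scores -= scores.max(axis=-1, keepdims=True)     # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ V                               # (num_queries, d_v)
```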
According to some embodiments, in step S305, encoding the currently obtained speech segment feature using the first encoder combined with the historical feature abstract information to obtain the corresponding enhanced speech segment feature may comprise: using the currently obtained speech segment feature as the query feature Q of the first encoder, and using the concatenation result of the historical feature abstract information and the currently obtained speech segment feature as the key feature K and value feature V of the first encoder to obtain the corresponding enhanced speech segment feature output by the first encoder.
Therefore, in this way, more temporal relationships and linguistic relationships in the speech feature can be fully explored, thereby significantly improving the historical abstraction capability of the model and improving the accuracy of the decoding result of the enhanced speech fragment feature.
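A hedged sketch of the historical feature abstraction and of the first-encoder step described above, reusing the attention() sketch; the function names and the list-of-abstracts interface are assumptions made for illustration.

```python
import numpy as np

def abstract_history(prior_segment: np.ndarray,
                     prior_decoding_result: np.ndarray) -> np.ndarray:
    """Historical feature abstraction of a prior segment: its first decoding
    result acts as the query (Q) and the prior segment feature acts as the
    key/value (K, V), compressing the segment into a few abstract vectors."""
    return attention(prior_decoding_result, prior_segment, prior_segment)

def encode_with_history(current_segment: np.ndarray,
                        history_abstracts: list[np.ndarray]) -> np.ndarray:
    """First-encoder step: the current segment feature is the query (Q), and
    the concatenation of all historical abstract information with the current
    segment is the key/value (K, V); the output is the enhanced segment feature."""
    kv = np.concatenate(history_abstracts + [current_segment], axis=0)
    return attention(current_segment, kv, kv)
```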
In some embodiments, the first encoder and the first decoder may together form a Conformer Streaming Multi-Layer Truncated Attention (SMLTA) model that is based on historical feature abstraction. As shown in
According to some embodiments, as shown in
Therefore, by using the first decoding results corresponding to each of the plurality of words as the query feature Q and using the first speech feature as the key feature K and the value feature V, the preliminary recognition result of the speech to-be-recognized can be effectively used as the prior information to obtain the word level audio features corresponding to each word.
In some embodiments, the first word-level audio feature output by the attention module may be calculated by substituting the corresponding Q, K, and V into the formula of the attention mechanism described above.
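A hedged sketch of the word-level feature extraction, again reusing the attention() sketch; the shapes in the demonstration are arbitrary and only show that the number of extracted features equals the number of words rather than the audio length.

```python
import numpy as np

def extract_word_level_features(first_decoding_results: np.ndarray,
                                first_speech_feature: np.ndarray) -> np.ndarray:
    """W first decoding results (queries) attend over the frame/segment-level
    first speech feature (keys and values), yielding exactly one audio feature
    per word regardless of how long each word was spoken."""
    return attention(first_decoding_results,
                     first_speech_feature, first_speech_feature)

# Shape check: 5 words spoken over 200 frames still yield 5 word-level features.
decoding_results = np.random.randn(5, 64)    # (W, d) a priori information
speech_feature = np.random.randn(200, 64)    # (T, d) unequal-length audio
print(extract_word_level_features(decoding_results, speech_feature).shape)  # (5, 64)
```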
According to some embodiments, as shown in
Therefore, performing global encoding on the plurality of first word-level audio features corresponding to the plurality of words effectively compensates for the fact that the first encoder, which must support streaming recognition, cannot encode global feature information, and significantly improves the descriptive capability of the equal-length uniform feature representation.
In some embodiments, the second encoder may be a Conformer encoder and may include an N-layer stacked Conformer module. Since the Conformer module fuses the attention model and the convolution model at the same time, the long-distance relationship and the local relationship in the audio feature can be effectively modeled at the same time, thereby greatly improving the description capability of the model.
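A minimal PyTorch sketch of a Conformer-style block combining self-attention (long-range relations) and depthwise convolution (local relations); the dimensions, kernel size, and activation choices are illustrative assumptions and do not reproduce the exact second encoder described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConformerBlock(nn.Module):
    """Minimal Conformer-style block: half-step feed-forward, multi-head
    self-attention, depthwise convolution, and another half-step feed-forward."""

    def __init__(self, d_model: int = 256, n_heads: int = 4, kernel: int = 15):
        super().__init__()
        self.ff1 = nn.Sequential(nn.LayerNorm(d_model),
                                 nn.Linear(d_model, 4 * d_model), nn.SiLU(),
                                 nn.Linear(4 * d_model, d_model))
        self.attn_norm = nn.LayerNorm(d_model)
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.conv_norm = nn.LayerNorm(d_model)
        self.pointwise_in = nn.Conv1d(d_model, 2 * d_model, kernel_size=1)
        self.depthwise = nn.Conv1d(d_model, d_model, kernel_size=kernel,
                                   padding=kernel // 2, groups=d_model)
        self.pointwise_out = nn.Conv1d(d_model, d_model, kernel_size=1)
        self.ff2 = nn.Sequential(nn.LayerNorm(d_model),
                                 nn.Linear(d_model, 4 * d_model), nn.SiLU(),
                                 nn.Linear(4 * d_model, d_model))
        self.out_norm = nn.LayerNorm(d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (batch, time, d_model)
        x = x + 0.5 * self.ff1(x)
        h = self.attn_norm(x)
        x = x + self.attn(h, h, h)[0]                      # attention: global context
        c = self.conv_norm(x).transpose(1, 2)              # (batch, d_model, time)
        c = F.glu(self.pointwise_in(c), dim=1)
        c = self.pointwise_out(F.silu(self.depthwise(c)))
        x = x + c.transpose(1, 2)                          # convolution: local context
        x = x + 0.5 * self.ff2(x)
        return self.out_norm(x)

# An N-layer stacked encoder could then be built as
# nn.Sequential(*[ConformerBlock() for _ in range(N)]).
```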
It may be understood that extracting the second speech feature from the first speech feature based on the first a priori information may also be implemented in a manner other than the attention mechanism and the Conformer encoder, which is not limited herein.
According to some embodiments, in step S204, decoding the second speech feature using the second decoder to obtain the plurality of second decoding results corresponding to the plurality of words may comprise: for each of the plurality of words, using the first decoding result corresponding to the word as the query feature Q of the second decoder, and using the second speech feature as the key feature K and the value feature V of the second decoder to obtain the second decoding result corresponding to the word output by the second decoder.
Therefore, by using the first decoding results corresponding to each of the plurality of words as the query feature Q and using the second speech feature as the key feature K and the value feature V, the preliminary recognition result of the speech to-be-recognized can be effectively used as the prior information to obtain the second decoding results corresponding to each word.
In addition, the conventional Encoder-Decoder structure or Decoder-Only structure encounters a cache loading problem during decoding. Although the computational speed of GPUs has been significantly improved, limited by the development of computer hardware resources, the speed at which the decoder loads model parameters into the cache during computation has not increased significantly, which seriously limits the decoding efficiency of speech recognition models. Moreover, both the Encoder-Decoder structure and the Decoder-Only structure need to rely on the decoding result of the previous moment to perform the calculation of the next moment during decoding, and this recursive calculation method requires the model to be repeatedly loaded into the cache, which results in a certain calculation delay. In particular, as the parameters of large speech models increase, the calculation delay caused by cache loading becomes more prominent, and the real-time requirement of online decoding cannot be met. By contrast, because the obtained first decoding result corresponding to each of the plurality of words is used as the query feature of the second decoder, the final recognition result can be obtained with only one parallel calculation, thereby effectively solving the cache loading problem encountered by large models.
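A hedged sketch contrasting conventional autoregressive decoding with the single-pass parallel decoding enabled by using the first decoding results as queries; step_fn and start_feature are placeholders, and attention() refers to the sketch above.

```python
def decode_autoregressive(start_feature, second_feature, step_fn, num_words):
    """Conventional decoding: each step depends on the previous output, so the
    (large) decoder must be run, and its parameters loaded into cache, once per
    word."""
    outputs, prev = [], start_feature
    for _ in range(num_words):
        prev = step_fn(prev, second_feature)   # depends on the previous result
        outputs.append(prev)
    return outputs

def decode_parallel(first_decoding_results, second_feature):
    """Decoding with the a priori first decoding results as queries: all W words
    are decoded in a single parallel pass, so the model is loaded only once."""
    return attention(first_decoding_results, second_feature, second_feature)
```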
According to some embodiments, the second decoder may comprise a forward decoder and a backward decoder, each of which may be configured to, for each of the plurality of words, use the first decoding result of the word as the input query feature Q, and use the second speech feature as the input key feature K and the input value feature V; the forward decoder may be configured to apply a left-to-right temporal mask to its input features, and the backward decoder may be configured to apply a right-to-left temporal mask to its input features.
Therefore, by applying a left-to-right temporal mask to the input features of the forward decoder and a right-to-left temporal mask to the input features of the backward decoder, language modeling can be performed in two different directions, the context of the language is modeled from both directions at the same time, and the prediction capability of the model is further improved.
In some embodiments, the forward decoder may also be referred to as a Left-Right Transformer decoder, and the backward decoder may also be referred to as a Right-Left Transformer decoder. Both the forward decoder and the backward decoder may include K stacked time-masked Transformer modules.
According to some embodiments, for each of the plurality of words, using the first decoding result of the word as the query feature Q of the second decoder and using the second speech feature as the key feature K and the value feature V of the second decoder to obtain the second decoding result corresponding to the word output by the second decoder may comprise: fusing a plurality of forward decoding features corresponding to the plurality of words output by the forward decoder and a plurality of backward decoding features corresponding to the plurality of words output by the backward decoder to obtain a plurality of fusion features corresponding to the plurality of words; and obtaining the plurality of second decoding results based on the plurality of fusion features.
In some embodiments, the forward decoding feature and the backward decoding feature may be directly added to obtain the corresponding fusion feature. Processing such as Softmax and the like may be performed on the fusion feature to obtain the final recognition result.
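A hedged sketch of the forward/backward decoding with temporal masks and fusion by addition, building on the earlier attention sketch; the projection to the vocabulary and the softmax are omitted, and the second speech feature is assumed to contain one feature per word.

```python
import numpy as np

def masked_attention(Q, K, V, mask):
    """Scaled dot-product attention where mask[i, j] == True means query i may
    attend to key j; masked positions are suppressed before the softmax."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    scores = np.where(mask, scores, -1e9)
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

def bidirectional_decode(first_decoding_results, second_speech_feature):
    """Forward decoder: left-to-right temporal mask; backward decoder:
    right-to-left temporal mask; the two decoding features are fused by addition
    (a vocabulary projection and softmax would follow)."""
    w = len(first_decoding_results)
    left_to_right = np.tril(np.ones((w, w), dtype=bool))   # word i sees words <= i
    right_to_left = np.triu(np.ones((w, w), dtype=bool))   # word i sees words >= i
    forward = masked_attention(first_decoding_results, second_speech_feature,
                               second_speech_feature, left_to_right)
    backward = masked_attention(first_decoding_results, second_speech_feature,
                                second_speech_feature, right_to_left)
    return forward + backward
```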
After the second decoding result is obtained, the second decoding result can be reused as the a priori information of the recognition result to re-extract the word-level audio features, or to re-decode using the second decoder.
According to some embodiments, as shown in
Therefore, the accuracy of speech recognition can be improved by performing multiple iterative decoding using the second decoder.
According to some embodiments, as shown in
Therefore, the accuracy of speech recognition can be further improved by reusing the second decoding result as the a priori information of the recognition result to re-extract word-level audio features and then decoding the new word-level audio features using the second decoder.
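A hedged sketch of the iterative refinement described above, in which attention() stands in for both the word-level feature extraction and the second decoder; the number of rounds is an illustrative assumption.

```python
def iterative_refine(first_decoding_results, first_speech_feature, num_rounds=2):
    """Each round reuses the latest decoding results as the a priori information:
    word-level audio features are re-extracted from the first speech feature and
    then decoded again."""
    results = first_decoding_results
    for _ in range(num_rounds):
        word_features = attention(results, first_speech_feature, first_speech_feature)
        results = attention(results, word_features, word_features)
    return results
```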
It can be understood that the operations in steps S701-S704 in
According to some embodiments, the second decoder may be a large speech model or a large audio model. The model scale of the second decoder can reach hundreds of billions of parameters, by which the language information contained in the speech can be fully explored and the modeling capability of the model can be greatly improved. In some example embodiments, the number of parameters of the large speech model or the large audio model serving as the second decoder may be 2B, or may be another value above the billion level.
In some embodiments, the model scale of the first encoder (or the model formed by the first encoder and the first decoder) may be, for example, a few hundred megabytes. Since its function is to output the preliminary recognition result of the speech to-be-recognized in a streaming manner, large-scale parameters are not needed.
In some embodiments, as shown in
According to another aspect of the present disclosure, there is provided a training method for a deep learning model for speech recognition. The deep learning model comprises a first decoder and a second decoder. As shown in
Therefore, in this way, the trained deep learning model can use the preliminary recognition result of the speech to-be-recognized as a priori information, extract a word-level, equal-length, uniform audio feature representation from the unequal-length frame-level speech feature information, and decode the word-level audio features to obtain the final recognition result, by which the problem of inconsistent feature representation lengths in traditional framed speech features is solved, the precision of speech recognition is improved, and the computational efficiency is improved.
In some embodiments, the deep learning model may further include other modules involved in the speech recognition method described above, such as a first encoder, a second encoder, an attention module, and the like. The operation of each module in the deep learning model may also refer to the operation of the corresponding module in the speech recognition method described above.
In some embodiments, in step S906, a first loss value may be determined based on the ground truth recognition result and the second recognition result, and the parameters of the deep learning model may be adjusted based on the first loss value. In some embodiments, a second loss value may also be determined based on the ground truth recognition result and the first recognition result, and the parameters of the deep learning model may be adjusted based on the first loss value and the second loss value. In some embodiments, the second loss value may be used to adjust the parameters of the first decoder (and the first encoder), and the first loss value may be used to adjust the parameters of the second decoder (and the attention module and the second encoder), and may also be used to adjust the parameters of the deep learning model end to end. In addition, some of the modules in the deep learning model may be individually trained or pre-trained in advance. It can be understood that other manners can also be used to adjust the parameters of the deep learning model, which is not limited herein.
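A hedged PyTorch sketch of one possible training step combining the two loss values described above; the assumption that model(sample_speech) returns both sets of per-word logits, the loss weights, and the use of cross-entropy are illustrative and not mandated by the disclosure.

```python
import torch
import torch.nn.functional as F

def training_step(model, sample_speech, target_ids, optimizer,
                  first_loss_weight: float = 1.0, second_loss_weight: float = 1.0):
    """One training step, assuming `model(sample_speech)` returns per-word logits
    from the first decoder (preliminary results) and from the second decoder
    (final results), both of shape (W, vocab), and `target_ids` holds the ground
    truth word indices of shape (W,)."""
    first_logits, second_logits = model(sample_speech)
    second_loss = F.cross_entropy(first_logits, target_ids)   # vs. first recognition results
    first_loss = F.cross_entropy(second_logits, target_ids)   # vs. second recognition results
    loss = first_loss_weight * first_loss + second_loss_weight * second_loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```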
It may be understood that the speech recognition method described above may be executed using a deep learning model obtained by training according to the above training method.
According to another aspect of the present disclosure, there is provided a speech recognition apparatus. As shown in
According to some embodiments, the speech feature encoding module 1010 may be configured: to obtain an original speech feature of the speech to-be-recognized; to determine a plurality of spikes in the speech to-be-recognized based on the original speech feature; and to truncate the original speech feature to obtain the plurality of speech segment features in a one-to-one correspondence with the plurality of spikes.
According to some embodiments, truncating the original speech feature to obtain the plurality of speech segment features in a one-to-one correspondence with the plurality of spikes may comprise: truncating the original speech feature based on a predetermined time length, and using the speech segment feature of the speech segment where each spike of the plurality of spikes is located as the speech segment feature corresponding to the spike.
According to some embodiments, truncating the original speech feature to obtain the plurality of speech segment features in a one-to-one correspondence with the plurality of spikes may comprise: truncating the original speech feature based on the plurality of spikes, and using the feature of the speech segment between every two adjacent spikes as the speech segment feature corresponding to one of the spikes.
According to some embodiments, the plurality of speech segment features may be sequentially obtained by performing streaming truncation on the original speech feature.
According to some embodiments, the speech feature encoding module may be configured: for the currently obtained speech segment feature, to obtain corresponding historical feature abstract information, and the historical feature abstract information is obtained by performing attention modeling on the prior speech segment feature using the first decoding result corresponding to the prior speech segment feature. The speech feature encoding module may comprise: a first encoder configured to encode the currently obtained speech segment feature combined with the historical feature abstract information and output a corresponding enhanced speech segment feature.
According to some embodiments, the first encoder may be configured: to use the currently obtained speech segment feature as the query feature of the first encoder and use the concatenation result of the historical feature abstract information and the currently obtained speech segment feature as the key feature and the value feature of the first encoder to output the corresponding enhanced speech segment feature.
According to some embodiments, the word-level feature extraction module may comprise: an attention module configured to, for each of the plurality of words, use the first decoding result corresponding to the word as the query feature of the attention module and use the first speech feature as the key feature and the value feature of the attention module to output the first word-level audio feature corresponding to the word.
According to some embodiments, the word-level feature extraction module may comprise: a second encoder configured to perform global encoding on the plurality of first word-level audio features corresponding to the plurality of words to output the enhanced second speech feature.
According to some embodiments, the second decoder may be configured to, for each of the plurality of words, use the first decoding result corresponding to the word as the query feature of the second decoder and use the second speech feature as the key feature and the value feature of the second decoder to output the second decoding result corresponding to the word.
According to some embodiments, the second decoder may comprise a forward decoder and a backward decoder, both the forward decoder and the backward decoder are configured to, for each of the plurality of words, use the first decoding result of the word as the input query feature, and use the second speech feature as the input key feature and the input value feature; the forward decoder is configured to perform time masking on the input feature from left to right, and the backward decoder is configured to perform time masking on the input feature from right to left.
According to some embodiments, the second decoder may be configured: to fuse the plurality of forward decoding features corresponding to the plurality of words output by the forward decoder and the plurality of backward decoding features corresponding to the plurality of words output by the backward decoder to obtain a plurality of fusion features corresponding to the plurality of words; and to obtain the plurality of second decoding results based on the plurality of fusion features.
According to some embodiments, the second decoder may be configured to: for each of the plurality of words, use the Nth decoding result of the word as the query feature of the second decoder, and use the second speech feature as the key feature and the value feature of the second decoder to output the N+1th decoding result corresponding to the word, where N is an integer greater than or equal to 2.
According to some embodiments, the word-level feature extraction module is configured to extract a third speech feature from the first speech feature based on second a priori information, where the second a priori information comprises the plurality of second decoding results, and the third speech feature comprises a plurality of second word-level audio features corresponding to the plurality of words. The second decoder may be configured to decode the third speech feature to obtain a plurality of third decoding results corresponding to the plurality of words, and the third decoding result indicates the third recognition result of the corresponding word.
According to some embodiments, the second decoder may be a large speech model.
According to another aspect of the present disclosure, there is provided a training apparatus for a deep learning model for speech recognition. The deep learning model comprises a first decoder and a second decoder. As shown in
The obtaining, storage, usage, processing, transmission, provision and disclosure of users' personal information involved in the technical solutions of the present disclosure are in compliance with relevant laws and regulations and do not violate public order and morals.
According to embodiments of the present disclosure, there is provided an electronic device, a readable storage medium and a computer program product.
Referring to
As shown in
A plurality of components in the electronic device 1200 are connected to an I/O interface 1205, including: an input unit 1206, an output unit 1207, a storage unit 1208, and a communication unit 1209. The input unit 1206 may be any type of device capable of inputting information to the electronic device 1200; the input unit 1206 may receive input digital or character information and generate a key signal input related to user settings and/or function control of the electronic device, and may include, but is not limited to, a mouse, a keyboard, a touch screen, a track pad, a trackball, a joystick, a microphone, and/or a remote control. The output unit 1207 may be any type of device capable of presenting information, and may include, but is not limited to, a display, a speaker, a video/audio output terminal, a vibrator, and/or a printer. The storage unit 1208 may include, but is not limited to, a magnetic disk and an optical disk. The communication unit 1209 allows the electronic device 1200 to exchange information/data with other devices over a computer network, such as the Internet, and/or various telecommunication networks, and may include, but is not limited to, a modem, a network card, an infrared communication device, a wireless communication transceiver and/or a chipset, such as a Bluetooth device, an 802.11 device, a WiFi device, a WiMAX device, a cellular communication device, and/or the like.
The computing unit 1201 may be a variety of general and/or special purpose processing components with processing and computing capabilities. Some examples of the computing unit 1201 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units running machine learning model algorithms, a digital signal processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 1201 performs the various methods and processes described above, such as the speech recognition method and/or the training method for the deep learning model for speech recognition. For example, in some embodiments, the speech recognition method and/or the training method for the deep learning model for speech recognition may be implemented as a computer software program tangibly contained in a machine-readable medium, such as the storage unit 1208. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 1200 via the ROM 1202 and/or the communication unit 1209. When the computer program is loaded to the RAM 1203 and executed by the computing unit 1201, one or more steps of the speech recognition method and/or the training method for the deep learning model for speech recognition described above may be performed. Alternatively, in other embodiments, the computing unit 1201 may be configured to perform the speech recognition method and/or the training method for the deep learning model for speech recognition by any other suitable means (e.g., with the aid of firmware).
Various embodiments of the systems and techniques described herein may be implemented in a digital electronic circuit system, an integrated circuit system, a field programmable gate array (FPGA), an application specific integrated circuit (ASIC), an application specific standard product (ASSP), a system on chip (SoC), a complex programmable logic device (CPLD), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that may be executed and/or interpreted on a programmable system including at least one programmable processor, where the programmable processor may be a dedicated or general-purpose programmable processor that may receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit data and instructions to the storage system, the at least one input device, and the at least one output device.
The program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or other programmable data processing device such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may be executed entirely on the machine, partly on the machine, partly on the machine as a stand-alone software package and partly on a remote machine, or entirely on the remote machine or server.
In the context of the present disclosure, a machine-readable medium may be a tangible medium, which may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium may include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disk read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide interaction with a user, the systems and techniques described herein may be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or an LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and pointing device (e.g., a mouse or trackball) by which the user may provide input to the computer. Other types of devices may also be used to provide interaction with the user; for example, the feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and the input from the user may be received in any form, including acoustic input, voice input, or tactile input.
The systems and techniques described herein may be implemented in a computing system including a back-end component (e.g., as a data server), or a computing system including a middleware component (e.g., an application server), or a computing system including a front-end component (e.g., a user computer with a graphical user interface or a web browser, the user may interact with implementations of the systems and techniques described herein through the graphical user interface or the web browser), or in a computing system including any combination of such back-end components, middleware components, or front-end components. The components of the system may be interconnected by digital data communication (e.g., a communications network) in any form or medium. Examples of communication networks include a local area network (LAN), a wide area network (WAN), the Internet, and a blockchain network.
The computer system may include a client and a server. Clients and servers are generally remote from each other and typically interact through a communication network. The relationship between clients and servers is generated by computer programs running on respective computers and having a client-server relationship to each other. The server may be a cloud server, or may be a server of a distributed system, or a server incorporating a blockchain.
It should be understood that the various forms of processes shown above may be used, and the steps may be reordered, added, or deleted. For example, the steps described in the present disclosure may be performed in parallel or sequentially or in a different order, as long as the results expected by the technical solutions disclosed in the present disclosure can be achieved, and no limitation is made herein.
Although embodiments or examples of the present disclosure have been described with reference to the accompanying drawings, it should be understood that the foregoing methods, systems, and devices are merely embodiments or examples, and the scope of the present disclosure is not limited by these embodiments or examples, but is only defined by the granted claims and their equivalents. Various elements in the embodiments or examples may be omitted or may be replaced by equivalent elements thereof. Further, the steps may be performed in a different order than described in the present disclosure. Further, various elements in the embodiments or examples may be combined in various ways. It is important to note that, with the evolution of the technology, many elements described herein may be replaced by equivalent elements appearing after the present disclosure.
Number | Date | Country | Kind
---|---|---|---
202311104070.7 | Aug. 29, 2023 | CN | National